WARNING: AI is safe. AGI may not be.

The trust deficit: Why AGI may be the most dangerous game in history

By Mike O’Brien | Jam Partnership Blog

There’s a technological transformation unfolding – not across borders or industries, but in the very nature of intelligence itself. Tech giants are racing to build Artificial General Intelligence (AGI): machines capable of performing any intellectual task a human can – and, eventually, much more.

At first glance, the ambition is intoxicating: solve one problem – building AGI – and you solve all the others: halt climate change, eradicate disease, end scarcity. But history warns us not to mistake technological prowess for moral wisdom, nor to blindly trust those who promise us a better future.

Trust Lost: What the FTX Collapse Teaches Us About the Future

The story of Sam Bankman-Fried and the collapse of FTX offers a chilling warning.
Once hailed as a genius innovator and a force for good, SBF built a crypto empire fuelled by buzzwords like “ethics”, “altruism” and “effective altruism”.

In reality, FTX was a web of reckless risk-taking and deception. When the collapse came, it wasn’t just financial capital that was destroyed – it was trust: in innovators, in markets, and in the broader narrative of “tech as salvation”.

Today, as AGI development accelerates, the parallels are too glaring to ignore.
Can we trust the technologists and corporations racing to reshape the future?

Behind the Curtain: A culture of secrecy and speed

Parmy Olson’s well-researched work, Supremacy, reveals a world inside AGI labs marked by secrecy, ambition, and a culture that often treats alignment with human values as a secondary concern.

Max Tegmark, in Life 3.0, warns that without deeply embedding ethics and foresight into AGI design, we risk creating intelligence vastly more capable than ourselves – but indifferent, or even hostile, to human needs.

Yanis Varoufakis, in Technofeudalism, describes how platform monopolies have already supplanted many traditional market structures, concentrating data, influence, and wealth into the hands of a few unelected players.

In short: the infrastructures for global trust and governance are already eroding.
AGI may not start the fire – but it could ensure we cannot control it.

The Advertising Industry: A microcosm of bigger failures

Today’s digital advertising ecosystem offers a stark early warning.

  • Global advertising spend is forecast to top $1 trillion in 2024 (WARC, 2024) – an all-time record.
  • Google, Meta, and Amazon now control between 80% and 90% of digital ad spend outside China (FIPP/WARC, 2024).
  • Yet 17.9% of global ad traffic is fraudulent, driven by bots and fake clicks (DoubleVerify, 2024).
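Taken together, these figures imply a staggering sum. A rough back-of-envelope calculation – assuming, as a simplification, that the fraud rate applies uniformly to the full global spend, which in practice it will not, since fraud rates vary by channel and region:

```python
# Back-of-envelope estimate of ad spend exposed to invalid traffic.
# Assumption: the DoubleVerify fraud-rate figure applies uniformly
# to the WARC global-spend forecast (a simplification).
global_ad_spend_usd = 1_000_000_000_000  # WARC (2024): ~$1 trillion forecast
fraud_rate = 0.179                       # DoubleVerify (2024): 17.9% of traffic

wasted_spend = global_ad_spend_usd * fraud_rate
print(f"Implied spend exposed to fraud: ${wasted_spend / 1e9:.0f} billion")
```

On these assumptions, the waste is on the order of $179 billion a year – roughly the GDP of a mid-sized country.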

Still, the money pours in.

Despite trillion-dollar revenues and decades of AI innovation, the tech giants have failed to eliminate even basic fraud – and they profit handsomely regardless. One can only hope that profit does not come at the expense of the thousands of professionals and graduates we have trained.

If today’s AI cannot even distinguish real from fake in a sector as commercially critical as advertising, what makes us believe AGI – infinitely more complex and autonomous – will act reliably and ethically when it comes to law, governance, healthcare, defence or democracy itself?

Global Regulation: Progress, but a dangerous lag

Governments are beginning to respond to AI’s rapid advance:

  • The European Union’s AI Act introduces a comprehensive, risk-based legal framework.
  • South Korea’s AI Basic Act seeks to blend innovation with safeguards for transparency, safety, and human rights.
  • The US Blueprint for an AI Bill of Rights sets out non-binding principles for fairness, privacy, and transparency.

International coalitions like the G7 have also begun work on ethical AI frameworks.

But there are serious gaps:

  • Regulations are often piecemeal and reactive, not proactive.
  • Most frameworks were designed for today’s narrow AI – not the transformative potential of AGI.
  • Industry-driven governance still dominates in many markets, meaning private corporations largely set the standards by which they are judged.

In short: regulation is running a marathon, while AGI is sprinting towards deployment.

The Strategic Imperative

  • Transparency must be non-negotiable.
    No critical AI system should be a black box to regulators or citizens.
  • Ethical governance must match technical speed.
    Frameworks must evolve as rapidly as technology – or risk becoming irrelevant.
  • Trust must be earned – verified, not assumed.
    It is no longer enough for tech leaders to promise good outcomes. Independent audit, public accountability, and global oversight are essential.

Conclusion: Governance or gambit?

AGI is not simply a technology. It is a force capable of redefining human agency, economic power, and political authority.

Parmy Olson exposes the secretive, competitive culture behind AGI’s development.
Max Tegmark warns of the existential dangers of misaligned intelligence.
Yanis Varoufakis reveals the creeping power of private platforms beyond democratic reach. And today’s bot-riddled advertising industry shows how quickly even trillion-dollar systems can lose credibility and accountability.

The lesson is clear:
We cannot afford to trust without questioning. We cannot afford to innovate without governing.

The final question for our age is not whether we can build AGI – it’s whether we can still govern those who do.

References:

  • Olson, P. (2024). Supremacy: AI, ChatGPT, and the Race That Will Change the World.
  • Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence.
  • Varoufakis, Y. (2023). Technofeudalism: What Killed Capitalism.
  • DoubleVerify. (2024). Global Insights Report. Retrieved from doubleverify.com
  • Statista. (2024). Market Share of Leading Digital Ad Platforms Worldwide.
  • FIPP/WARC. (2024). Size Matters: How the Big Five Are Dominating Global Ad Expenditure. Retrieved from fipp.com
  • WARC. (2024). Global Advertising to Top $1 Trillion in 2024 as Big Five Attract Most Spending. Retrieved from warc.com
  • European Commission. (2024). EU Artificial Intelligence Act: Overview.
  • White House. (2022). Blueprint for an AI Bill of Rights.
