May 01
Mediocre machines and lost leaders

Are we handing the future to AI by default?
As the global conversation around Artificial General Intelligence (AGI) intensifies, a tantalising – and troubling – question emerges:
Could AGI actually be an upgrade on the world’s current leaders?
At first glance, the idea sounds oddly reassuring. After all, AGI is, by definition, designed to reason, learn and decide at or beyond human level. In theory, that means it could:
- Process information at impossible scale – making sense of the dense policy documents politicians barely glance at.
- Model complex global systems – economics, climate, geopolitics – with a level of precision well beyond any minister armed with a four-page briefing.
- Optimise solutions free from ego, emotion or political gain – unburdened by the party whips, the headlines or the lobbyists.
From that perspective, AGI doesn’t just look attractive. It looks necessary. A system led by AGI could favour long-term sustainability over short-term election cycles. It could elevate equity over populism. It could offer a rational, strategic alternative to the reactive, often self-interested leadership models we know too well.
But here’s the problem: leadership isn’t just intelligence. It’s emotional.
True leadership requires empathy, legitimacy, moral authority and the ability to inspire – across cultures, ideologies and experiences. No machine, however intelligent, currently demonstrates these traits in any meaningful or human way.
To be fair, many of our current leaders struggle with those qualities too. But at least we can hold them to account – however imperfectly.
And even a ‘benevolent’ AGI may simply reflect the values and blind spots of its creators. Programmed poorly, or maliciously, it could become the ultimate enforcer of narrow worldviews. Even perfectly rational decisions, if coldly imposed, risk alienating the very people they aim to serve.
The uncomfortable truth? AGI might make an exceptional civil servant – but a terrible leader.
Mediocre machines are taking over
If all this sounds like a distant problem, think again.
Visit OpenAI’s new GPT Store and you’ll find thousands of AI “apps” – most created by amateurs, with minimal oversight. Among the most popular? Dating GPTs. Tools that simulate flirting, relationship advice, even emotional intimacy.
These aren’t AGIs. They’re narrow, simplistic tools optimised for instant emotional gratification. And yet, they are already being entrusted with deeply human domains: communication, intimacy, trust.
It reveals a more immediate trend: We are normalising the delegation of human complexity to simplistic machines.
Instead of demanding accountable, ethical AI to enhance leadership, we’re sleepwalking into a world where shallow, convenient tools manage critical life functions – not because they’re better, but because they’re easier, cheaper, or less emotionally demanding.
What happens when we no longer expect nuance, empathy, or critical thinking – from ourselves or our tools?
From idealism to corporate capture
So what happened to the lost guardians of AGI? The pursuit of AGI wasn’t supposed to look like this. When the idea first gripped the tech elite, it carried both utopian dreams and apocalyptic warnings. James Barrat’s Our Final Invention painted a terrifying picture. Meanwhile, idealists like:
- Sam Altman (OpenAI) imagined AGI solving humanity’s grand challenges – poverty, education, climate.
- Demis Hassabis (DeepMind) envisioned breakthroughs in biology, physics and sustainability beyond human cognition.
Both began with ethical ambitions. OpenAI even launched as a non-profit, warning against AGI being controlled by powerful elites. But fast forward a few years:
- OpenAI restructured into a “capped-profit” corporation, with Microsoft now a major investor.
- DeepMind was absorbed into Alphabet (Google’s parent), its ethical oversight arm dismantled.
Idealism gave way to the gravitational pull of global capital. Not out of malice – but inevitability. Shareholder value and access to computing capacity have a way of reprogramming purpose.
AGI was meant to serve humanity. Instead, before it even arrives, it’s been folded into ecosystems optimised for profit, not public good.
A future at risk – unless we reclaim it
However you look at it, we face a double danger:
- Shallow AI tools continue to invade core aspects of human life – diminishing expectations, weakening capabilities.
- True AGI, when it comes, may be owned and shaped by those best positioned to monetise it – not those best placed to guide it.
AGI could be an upgrade. But only if we upgrade ourselves first. That means elevating our public discourse. Reasserting the role of ethics in innovation. Demanding transparency, accountability and purpose from those building the tools of tomorrow.
Otherwise, we’re not preparing for a future led by wise machines.
We’re preparing for a future ruled by mediocrity – faster processors, deeper biases, stronger incentives to look away.
At Jam Partnership, we believe technology should augment human intelligence – not replace it.
Leadership demands more than data. It needs wisdom.
More than speed. It needs soul.
Picture that pivotal moment in The Matrix. Agent Smith looms over Neo in the subway:
Agent Smith:
“That is the sound of inevitability. It is the sound of your death. Goodbye, Mr. Anderson.”
But Neo, bloodied and pinned, finds his voice:
Neo:
“My name… is Neo.”
Then he rises. Fights back. A moment where choice defies fate – where a man becomes more than a programme’s prediction.
Say it with me: “My name is Neo.”
Smells good, doesn’t it?
A deep-dive podcast on the above article by AI – my thanks to Larry and Mandy at NotebookLM.
