Should the UN monopolize global AI governance?

By Craig Warren Smith

[first in a series of 5 articles on global AI governance]



Last week the UN General Assembly adopted a resolution on Artificial Intelligence (AI) governance. The declaration was the first to focus AI regulation on the Global South and on the unfinished work of “closing the digital divide,” a cause I introduced to UN Secretary General Kofi Annan two decades ago.


But the declaration, resting on non-enforceable voluntary compliance, is not good news; it will only accelerate the digital divide. Under it, the equitable benefits of the internet would reach fewer of the people who arguably need them most.

The United Nations is the wrong “governor” of AI. Thirty years ago, the UN stepped forward to “own” the existential cause of climate change. Now that emissions are 20% worse, we can conclude that its efforts have failed. All its attempts lacked teeth: the Kyoto Protocol, the Paris Agreement, the Sustainable Development Goals, and so on. The UN now wants to use the same playbook to own AI governance?


For decades, the AI issue lay slumbering -- though many leaders warned that once AI reached its take-off stage, unmitigated innovation without regulatory guardrails would pose a greater existential risk than even nuclear war or climate change.


The Big Bang came last year, when ChatGPT surged. Insiders in Silicon Valley noted that an exponential 10x yearly rise of innovation in the frontier models of AI would bring Artificial General Intelligence (AGI) to the point where the artificial neural networks of machines could exceed the natural neural networks of humans. At that point, which some experts say may now be only months away, AIs could produce existential harm and/or existential benefit to humanity. Insiders knew that the harms could be contained, and the benefits realized, only through AI governance.

As this reality became clear, many leaders jumped in to offer an AI governance model that would have traction. Several strategies, by individuals and governments, were put forth: 

Individuals demanded that governments act, mostly by signing letters. The most influential of these voices was Stephen Hawking, the courageous University of Cambridge professor famous for his work on black holes. Near the end of his long battle with motor neurone disease, speaking about AI, and through AI-assisted technology, he stayed alive just long enough to warn: “It will either be the best thing that's ever happened to us, or it will be the worst thing. If we're not careful, it very well may be the last thing.”


Prompted by that view, Elon Musk signed an open letter, co-signed by some 33,000 other experts, demanding a pause in advanced AI research until governments could contain AI with firm regulation.

In truth, Musk and his Silicon Valley colleagues did not have a clue about how to regulate AI or who should receive their letters. Far from the regulatory capitals of Washington, Brussels and Geneva, the internet itself remains unregulated, and Silicon Valley leaders have fought hard to keep it that way. Musk said, “If governments can mandate seatbelts on Teslas, why in hell can’t they make AI safe?” But he quickly violated his own edict in order to compete with ChatGPT, marshaling his ecosystem of Xs -- the former Twitter, Neuralink, SpaceX, Tesla and xAI, his direct competitor to ChatGPT-4.

When Musk got no response, other leaders joined in. European Commission President Ursula von der Leyen made a celebratory announcement of a regulatory approach that the Organisation for Economic Co-operation and Development (OECD) hoped could become a model for the world. But her latest announcement, made February 14, was one of many false starts.

She has been at this since 2021, but the bureaucracy in Brussels is like herding cats -- and quite frankly, the 27 EU nations are not yet on the same page. Meanwhile, in one of his first acts as Prime Minister of the UK, Rishi Sunak convened an international summit at Bletchley Park in honor of the ill-fated Alan Turing, a founder of the AI field and the codebreaker whose work on the Enigma cipher helped end World War Two. (Was he hoping to atone for the UK’s cruel punishment of Turing, who was chemically castrated for being gay?) Twenty-eight nations signed on to the summit declaration, which would have required nations to vet AIs for safety before release. But the most important nation, China, slipped out the door before the signing took place.


The EU approach is protectionist and seeks to promote European innovation, possibly preventing European AI startups from cooperating with the US-based frontier models of AI that lead the world.

Then came the USA. Biden seemed to have a clear path on AI governance, since Republicans and Democrats in both houses of Congress were united in favor of harnessing the big AI players. But in one closed-door session with CEOs, Senator Chuck Schumer apparently commented, “If we regulate AI, won’t it give China a chance to catch up with us?”

When it became clear that no regulation was forthcoming, Silicon Valley embraced self-regulation. After the kerfuffle of Sam Altman’s firing and rehiring at OpenAI, he committed his company to a policy of “alignment” -- that is, not deploying AI models unless they are thoroughly aligned with human values. The implication, reinforced by Silicon Valley’s top venture capitalist, Marc Andreessen, is that government regulation would only get in the way of the utopian future that AI could offer the world.


As for China, Xi Jinping was reportedly furious that the November 30, 2022 release of ChatGPT coincided with a planned announcement of China’s AI law, touted as the first such law in the world. In truth, China had to go back to the drawing board. At least two companies -- Huawei and Baidu -- are rapidly imitating the best frontier AIs from Silicon Valley in their effort to achieve AI dominance. China is also looking ahead to create AIs that are more empathetic and emotionally intimate. Perhaps they are trying to imitate Silicon Valley-based Inflection AI’s Pi, designed to be a “kind and supportive companion” to consumers. That may also be the aim of Baidu’s ERNIE, which sounds like a goofy character from Sesame Street but actually stands for Enhanced Representation through Knowledge Integration, not so cuddly after all.

China prohibits ChatGPT and other US-based generative AI models. But the nation’s youth circumvent the government’s restrictions, illegally downloading Virtual Private Network (VPN) apps through WeChat to catch up with the Americans.


Meanwhile in America, the markets have already absorbed the conviction that AI governance will never happen. In 2023, expectations that AI would release trillions of dollars in productivity enhancements globally ended predictions of an American recession. The AI bump has turned into a sustainable surge that goes on and on -- as long as regulators keep their hands off.


The big advisory firms -- Goldman Sachs, McKinsey & Co, PricewaterhouseCoopers and Deloitte -- have already predicted as much as a $17 trillion rise due to AI by 2030. ARK Invest’s Cathie Wood thinks that’s just spare change. She predicts a $200 trillion rise in markets, about twice this year’s total global GDP.

But Wall Street need not be perturbed by Secretary General António Guterres’s recent coup. Since he is following the same playbook with AI as with climate change, expect lots of “high-level” commissions, lots of talk, followed by lots of voluntary, unenforceable commitments. In truth, climate change was a big win for UN finance, and more money is needed to keep its many agencies afloat. Maybe AI can help.


It may seem that the UN now has a monopoly on global AI governance. But that is not quite true. This year, Brazilian President Lula da Silva will convene the G21. Never heard of it? It is the G20 you know, plus one: the 55 nations of the African Union. Together they represent more than 90 percent of global GDP and have as deep a reach into the developing world as the UN itself.


In the past, China tried to expand the BRICS bloc (Brazil, Russia, India, China, South Africa) to compete with a US-dominated G20. By contrast, Lula da Silva, who takes an agnostic view of US-China relations, can produce a model that brings the US and China together in a complementary way to boost the economies of the Global South.


The critical period is the twelve months between November 2024 and November 2025, when the G21 meets again in South Africa. During that year, a framework can emerge for the enforceable global AI law that the UN is too weak to produce.

 

[next in this series:

2: How G21 can overtake the UN in effective AI governance

3: The business model of global AI governance: it’s not what you think

4: Why Indonesia, not India, will be the AI model for the Global South

5: The middle way in global AI governance:  why China and the USA will join together to make it happen]
