California Dreaming? UK Leader’s AI Hopes Face Hurdles – OpEd

By Andrew Hammond

When UK Prime Minister Rishi Sunak announced in June that he would build momentum toward launching a big global AI initiative in November, he claimed the country should become the “geographical home” of AI safety. Laudable as that lofty goal is, however, it is unlikely to be fully realized.

Sunak is known to be fascinated by technology after years spent in California before he became a politician. Moreover, his father-in-law is a co-founder of the Indian multinational technology company Infosys.

He has frequently argued that Britain has fallen behind other economies, especially the US, because of an innovation gap, and, therefore, wants to “make sure the UK is the country where the next great scientific discoveries are made, and where the brightest minds and the most ambitious entrepreneurs will turn those ideas into companies, products, and services that can change the world.”

Few would disagree with this ambition, but realizing it in practice is not so easy. This is likely to be true, too, with his otherwise commendable AI ambitions.

To be sure, there is little question that the UK can be influential on the AI agenda. The nation has precedents for developing rules for governing emerging technologies, for example in stem-cell research.

One of the key challenges for Sunak with this huge agenda is that it comes in the context of multiple existing initiatives from other powers, including the US. On Monday alone, President Joe Biden, who will not attend the summit, signed an executive order that he claims represents the most far-reaching action on AI of any country. The new order requires developers of AI systems that pose risks to national security, the economy, or public health to share the results of safety tests with the US government, under the Defense Production Act, before those systems are released to the public.

Meanwhile, the EU is working on AI legislation designed to align with its other digital regulations, such as the General Data Protection Regulation and the Digital Services Act. China, too, is moving forward with its own AI regulatory frameworks.

At the multilateral level, moreover, the Japanese-chaired G7 is planning soon to announce joint principles, likely to be a code of conduct for firms developing advanced AI systems. Meanwhile, there is also a separate Global Partnership on Artificial Intelligence event in India in December.

So, if anything, the UK is catching up with regulatory moves in other key political jurisdictions rather than “leading the pack.” Nonetheless, its initiative can still do good work on this important agenda in the period to come. For one, the proposed new, UK-based “world’s first AI safety institute” could play a key role in examining the capabilities of new types of AI and sharing its findings with the rest of the world. After all, the computing power needed to research and develop large AI models can be out of reach even for medium-sized states, given the expense.

In so doing, the UK can help build a stronger international consensus on bringing AI more clearly into more inclusive global governance structures. At present, there is a significant risk of private sector tech firms ruling the roost. Unlike some previous era-defining technological advances, such as space flight or nuclear power, AI is mostly being developed by private companies, which are disproportionately located in the US.

The UK-driven initiative, therefore, can add value in the period to come by deepening shared international understanding of major AI opportunities and challenges. This includes helping to close AI knowledge gaps and widening inclusion, not least for so-called global south nations that lack the financial means to develop a critical mass of AI capacity.

However, yet another challenge is whether the UK initiative’s specific focus on so-called “frontier AI,” meaning systems that could severely threaten public safety and global security, and on the best approaches to safeguarding against them, makes the most sense. The public conversation about this topic is certainly important. Yet some critics argue that this threat is over-hyped in the short to medium term, and that the government is wrong to put so much emphasis on the dangers.

The University of Oxford’s Keegan McBride, for instance, claims that “AI systems based on technology that we have now and in the foreseeable future aren’t able to rise to the level of sophistication and intelligence that governments — the UK, basically — and companies like OpenAI are discussing.” His argument is that regulatory frameworks are already in place for these serious threats, and that the industry is exaggerating the dangers of AI in order to shut out would-be rivals and centralize AI development.

Whatever the merits of this view, some key figures, such as Elon Musk, disagree profoundly. The Tesla chief executive has often warned about the dangers he perceives in advanced AI systems, even signing a letter last spring warning that “out of control” advancement could “pose profound risks to society and humanity,” and calling for a pause on AI development.

From this vantage point, it is also curious that Sunak champions this agenda given his position that the UK will not “rush to regulate,” so as not to stifle innovation. This is despite his avowed concerns over the speed of AI development and the possibility of humanity’s “extinction” as a result of the technology.

Taken together, the UK AI initiative is unlikely to deliver the full ambition Sunak hopes for. Laudable as the prime minister’s goals are, the outcomes may be more modest, albeit still potentially important, than he intends.

  • Andrew Hammond is an Associate at LSE IDEAS at the London School of Economics.

