How dangerous is AI and who should supervise it?

In a statement released on Tuesday, a group of leading AI experts issued stark warnings about the technology, comparing its risks to those of pandemics and nuclear war. Sam Altman, CEO of ChatGPT creator OpenAI, was among the signatories. He proposes the establishment of an international authority analogous to the International Atomic Energy Agency (IAEA). Commentators are mostly dubious.

The Spectator (GB)

Not comparable with control of nuclear weapons

The regulation of artificial intelligence suggested by OpenAI is unlikely to achieve much, The Spectator argues:

“Computer power isn't as scarce and trackable as uranium ore. If superintelligence is as transformative as OpenAI and others suggest, there's a powerful incentive to cheat and plenty of opportunity to do so. ... In contrast to the poorly camouflaged silos of the Cuban missile crisis, you could smuggle your finished superintelligence around on a hard drive. Proliferation looks comparatively easy. ... OpenAI's regulatory suggestions are inadequate, but the real service Altman and co. have done is to sound the alert. Fresh ideas are needed, and quickly.”

Der Spiegel (DE)

Shifting responsibility to the politicians

For Spiegel columnist Sascha Lobo, the warnings are a defence strategy driven by anticipatory guilt:

“Artificial intelligence is powerful, and the likelihood of something going wrong with it is high. So it makes sense for the AI crowd to ensure that their own role is that of the admonisher, because in the eyes of the public these are rarely the guilty party. Such warnings from the AI elite shift at least part of the responsibility to politicians. If something goes wrong, they can say: hey, we wanted you to regulate quickly and effectively but you didn't, and this is the result.”

The Independent (GB)

Time to be clear about who's in charge here

The Independent also finds the statements coming from the AI sector unhelpful:

“Warning that AI could be as terrible as pandemics also has the peculiar effect of making artificial intelligence's dangers seem as if they just arise naturally in the world, like the mutation of a virus. But every dangerous AI is the product of intentional choices by its developers – and in most cases, by the companies that have signed the new statement. ... Who are these companies talking to? After all, they are the ones who are creating the products that might extinguish life on Earth. It reads a little like being hectored by a burglar about your house's locks not being good enough.”

Neue Zürcher Zeitung (CH)

Ambivalent transparency

Neue Zürcher Zeitung examines the lines of conflict over stricter regulation of the technology:

“If OpenAI is forced to disclose how it has trained its AI, first of all it will seem less 'magical'. Secondly, the competition could copy its ideas. And thirdly, it is not clear whether the company even had the necessary rights to the data that flowed into the AI. A number of artists have already filed lawsuits against producers of image AI. The ones affected are precisely those that have disclosed what data they used to train their AI. Transparency makes people more vulnerable. ... It's very difficult to make it completely secure and correct. Altman has the right to lobby against it. And of course the EU should carefully weigh up which rules are really necessary.”

News.bg (BG)

Fear of the unknown is all too human

People have always regarded technological breakthroughs with scepticism, news.bg points out:

“In the early stages of the Industrial Revolution workers believed that machines would take away their daily bread, but in reality they led to a huge boost in the quality of life. The first vaccines were met with similar distrust. ... There were those who saw the electrification of human settlements as Satanism, yet today we cannot imagine life without electricity. AI optimises our lives. This means that with its help we will spend much less time and energy on the same activities and have more time for other, possibly more meaningful things.”