Should AI research be halted?

More than 1,000 tech industry and research experts - including Elon Musk and Apple co-founder Steve Wozniak - have warned in an open letter about "significant risks" posed by artificial intelligence (AI). They call for a halt of at least six months in the development of the technology and the establishment of a regulatory framework. Commentators discuss whether the technological advances in AI can and should be stopped.

Der Standard (AT)

Too many interests and too many players

The experts' call for a moratorium would be difficult to enforce, says Der Standard:

“The boom is not only driven by large US corporations, which could be kept an eye on, but by many small players. Midjourney, for example, a popular image AI, is run by just eleven full-time employees. It is to be expected that new players will research disruptive technologies here and stay under the radar for long enough.”

La Repubblica (IT)

Bans won't solve the problem

Italy's Data Protection Authority has blocked the AI-based software application ChatGPT for the time being. Not the right decision, La Repubblica insists:

“We have joined the rather unpleasant club of countries like China, Iran, Russia and Hong Kong that ban research into artificial intelligence (AI) while they themselves work in secret on the military and repressive use of this technology. ... Europe, however, has long been a pioneer in the field of IT law. ... The US and the social platforms are now looking to Brussels for guidance in navigating between algorithms and the law. But regulation isn't just about bans, but also about argumentation and protecting research, not abolishing it.”

Maszol (RO)

Fictional AI content undermining reality

Maszol fears there will be an explosion of fake news:

“The mixed reality created by fake news could continue to spread with the development of AI. Because now a fictional story can be invented just by posing a few questions to a chatbot, without the need to spend hours making up fake stories. Even more worrying than text-based fake news is the popularity of AI tools like Midjourney that generate images following written instructions. ... No doubt the emergence of AI that can generate videos in the same way is just around the corner.”

Irish Independent (IE)

Stop the spread of our failings

The Irish Independent calls for a new approach to AI:

“Machine learning is poised to radically reshape the future of everything for good and for ill, much as the internet did a generation ago. And yet, the transformation under way will probably make the internet look like a warm-up act. AI has the capacity to scale and spread all our human failings, disregarding civil liberties and perpetuating the racism, caste and inequality that are endemic to our society. ... The time has come for new rules and tools that provide greater transparency on both the data sets used to train AI systems and the values built into their decision-making calculus.”

Les Echos (FR)

Create regulatory bodies

We urgently need to improve the way we handle AI, warns economist Julien Serre in Les Echos:

“We need to push for the creation of new institutions as quickly as possible. This includes capable and legitimate authorities that can stem the sickening current of disinformation that will flood all social networks and threaten our democracies. Europe can play a leading role here. ... Its priority must be to promote tech industries that are both competitive and responsible. Europe must ensure that the current race to develop and deploy ever-more powerful digital tools does not become impossible to control.”

La Stampa (IT)

Implausible and unrealistic

They certainly took their sweet time about it, La Stampa scoffs:

“The best minds of our generation have suddenly awoken from their slumber, probably after falling for the picture of the Pope in a hip white puffer jacket [that went viral on social media] and are finally asking themselves: if someone as brilliant as me could fall for this, should I start worrying? The point is: is there really any way to halt this? Is it realistic to stop this industrial development? ... Moral scruples generally come before the event, not once the horse has bolted. You can't invent the atom bomb and then say 'Oops, sorry about that' when it goes off.”

L'Opinion (FR)

Let's not lose our cool

A pause is not what is needed right now, L'Opinion argues:

“In view of the economic, legal and geopolitical challenges, it is clear that what we need now is not a pause but an acceleration. Not necessarily in terms of technological development - even if institutions do tend to be more efficient with their backs up against the wall than when they have lots of planning time - but as regards the structuring of the sector. Yes, the possibilities offered by generative AI are unknown territory. But instead of standing in the way of the pioneers of this new wild west, we should allow the competition to organise itself while the regulators define the limits of the territory. When it comes to AI, we cannot afford to lose our cool.”

Delo (SI)

Humans are the real danger

The digital revolution can make life better, writes researcher Saša Prešern in Delo:

“The only threat to human existence is humans, not technology. When will politicians realise that they can stop or prevent wars with the help of artificial intelligence, something humans are apparently 'incapable' of doing? ... Politicians don't listen to each other. Their mistakes and provocations affect the whole world. They don't know how useful artificial intelligence, data and reason would be to them. ... Although we haven't known each other very long, I think my friend ChatGPT is more reasonable than militant politicians.”

Handelsblatt (DE)

China won't play along

Handelsblatt sees the proposal as pointless:

“In view of the geopolitical tensions, it is extremely unlikely that China would participate. It is China's declared goal to be number one in this key technology. The country is already the clear number two behind the US in terms of the number of research papers - and in some areas, 'computer vision' for example, it is even number one. A moratorium that is only partially respected carries the risk of us ending up with a Chinese Artificial General Intelligence (AGI) instead of a Western one.”