
  Artificial intelligence


The world's first comprehensive AI law is now a reality: the EU Parliament on Wednesday approved the version of the Artificial Intelligence Act negotiated with the EU member states. Practices such as social scoring or emotion recognition in the workplace will be banned, with exceptions for law enforcement authorities in the case of facial recognition. Some commentators feel the law has been watered down too much. Others see it as a pioneering success.

The rapid advances in the field of artificial intelligence and the resulting challenges were dominant topics in 2023. At the turn of the year commentators continue the debate, while also keeping an eye on other scientific and technological developments.

Negotiators from the EU Parliament and member states have agreed on the key points of the AI Act, which has been in preparation since 2021. The law aims to guarantee transparency on the use of artificial intelligence, ensure the high quality of the data used in development and protect copyrights. In the areas of data protection and security-relevant applications, human-controlled risk management will become mandatory.

Following a frenetic back-and-forth, Sam Altman is once again CEO of ChatGPT developer OpenAI after being sacked in a surprise move by the company's board of directors last Friday. Shortly after his dismissal it was announced that he was switching to Microsoft. But apparently the threats of many OpenAI employees that they would quit unless Altman was reinstated had an impact. The press observes the circus with concern.

Sam Altman is no longer CEO of software company OpenAI, which developed the AI chatbot ChatGPT. One of the main faces of the AI boom, Altman, 38, is switching to Microsoft following his surprise dismissal. Now a large majority of the startup's staff are calling for his return and threatening to quit. Commentators see the controversy as significant from several perspectives.

At an international summit in the UK, 28 countries from five continents, including China, have agreed to work together to regulate artificial intelligence. In a declaration they emphasised their intention to better understand and collectively manage the risks of AI. Prime Minister Rishi Sunak spoke of a "milestone". Commentators discuss where to go from here.

The advance of artificial intelligence (AI) into many existential areas of life is sparking heated debates and concern about the future. Regulatory legislation is still in the early stages. Commentators examine whether we should fear being replaced in the working world or whether art will lose its value.

Artificial intelligence (AI) is increasingly becoming part of our daily lives - for example in the form of apps or machine control software - and raising many questions. Will it develop beyond our control, replace humans in the labour market to a harmful degree or create new risks in the fields of weapons and research? Commentators discuss the rationale and possibilities when it comes to regulating this powerful technology.

The European Parliament yesterday passed the world's first law regulating artificial intelligence. The legislation defines different risk levels for different applications. Programmes such as facial recognition software that are considered particularly risky are to be banned; others will only be allowed under certain conditions. While some commentators welcome the decision, others fear overregulation.

In a statement issued on Tuesday, a group of leading AI experts issued stark warnings about the technology, comparing the risks with pandemics and nuclear war. Sam Altman, CEO of ChatGPT creator OpenAI, was among the signatories. He proposes the establishment of an international authority analogous to the International Atomic Energy Agency (IAEA). Commentators are mostly dubious.

Europe wants to lead the way in the regulation of artificial intelligence (AI) and is coming ever closer to its goal: the corresponding European Parliament committees have endorsed an amended version of the EU Commission's Artificial Intelligence Act, which includes a ban on face and emotion recognition systems. It will now be discussed in the European Parliament at the beginning of next week.

Concerns about artificial intelligence are growing. Yesterday, Geoffrey Hinton, one of the pioneers of the technology, joined the critical voices. Hinton quit Google saying he is worried that humans will lose control of the technology and warning that people may soon "not be able to know what is true anymore". Commentators echo his fears.

More than 1,000 tech industry and research experts - including Elon Musk and Apple co-founder Steve Wozniak - have warned in an open letter about "significant risks" posed by artificial intelligence (AI). They call for a halt of at least six months in the development of the technology and the establishment of a regulatory framework. Commentators discuss whether the technological advances in AI can and should be stopped.

Launched in November 2022, the AI-based chatbot ChatGPT can answer almost any question and formulate its responses in a natural, conversational way. Whether - and in which genres - the texts it generates can match those produced by humans is a matter of debate. Commentators discuss how to handle this new and powerful tool.

The artificial intelligence-based chatbot ChatGPT has been open to the public on the website of provider OpenAI for a week now. Within just a few days over a million users registered to ask the programme questions and chat. Commentators discuss what the innovative language generation model can and can't do.

The European Commission has presented the world's first legal framework governing the use of artificial intelligence (AI). In future, AI systems whose use potentially poses a risk to the safety or rights of people will be subject to strict regulations. In the case of particularly clear risks, for example if free will is manipulated, a ban will apply. The initiative is aimed at boosting trust in AI.

The European Commission is to present three papers today on digital strategy, its second key project alongside the European Green Deal. In the reports the Commission explains, among other things, how it proposes to promote and regulate artificial intelligence. Not a moment too soon, Europe's media find.

The EU has presented ethical guidelines for the use of artificial intelligence. Businesses, research institutes and authorities will now test the guidelines in a pilot phase after which legislation is to be drawn up. Politics and society are still far too passive when it comes to shaping the future, commentators complain.

Artificial intelligence has created huge expectations for the future of the economic system, the labour market, mobility and day-to-day life. At the same time, over and above concerns about data protection and cyber security, fears are growing that robots could replace people. Commentators in Europe look at how AI can be developed in such a way that everyone can benefit.