AI in warfare: what are the limits?
The US Department of Defense also relies on artificial intelligence in warfare, collaborating with companies such as xAI. The Pentagon recently blacklisted the AI developer Anthropic after the company refused to make its technology fully available for military purposes. Commentators examine potential applications and risks.
Technological totalitarianism looms
Expresso outlines the danger of using AI in war:
“A recent study by King's College London shows that in 95 percent of cases in which the most advanced language models were used to resolve military conflicts, they chose the nuclear option. Basically they lack the animal survival instinct. ... The difference between a system that helps to find a solution (by identifying a suspect or a target) and a system that makes decisions (who to arrest, deport or kill) does not lie in the technology itself. It lies in whatever remains of a centuries-old ethical construct. The ultimate and most overwhelming form of totalitarianism will be technological in nature.”
The aim is confusion
AI-generated images are turning the internet itself into a battlefield, observes the Süddeutsche Zeitung:
“Whoever is producing these images - the Iranian secret service, independent internet trolls, third parties - has only one aim. It is not the lie itself that is important, but its long-term impact: general confusion. ... These tactics show above all that the world of digital information is the open flank of the Western powers. To defend it will be one of the greatest challenges of future wars. The flood of fake AI images shows that interested groups are often a lot quicker to exploit new technologies than the defence forces are.”
Accountability is disappearing
Habertürk warns of the dangers of automated decision-making:
“'The final decision is always made by a human.' Much lies behind this statement. Above all the question: If artificial intelligence generates a list of targets within seconds and presents it to you, is your approval a genuine decision or are you merely pressing a button to activate the machine? Experts refer to this as 'automation bias.' When presented with a list, you tend to trust it. … In practice, accountability is disappearing. Was it the faulty software that made the mistake, the company that developed the software, the soldier who gave final approval, or the commander who ordered that soldier to act?”
Experts must be brought in
Writing in Delfi, journalist Māris Zanders calls on politicians to act responsibly:
“The problem is that due to their lack of expertise, policymakers tend to view AI primarily as an opportunity rather than a risk. Given that the military aspect of AI will continue to evolve (until disaster strikes…), it's crucial that policymakers do not have exclusive authority to decide on the use of these technologies. Industry and experts must have the opportunity to challenge these decisions. From this perspective - although we have no illusions about the altruism of large IT corporations - it is positive that Google, Amazon, Apple, and Microsoft have sided with Anthropic in the conflict.”