Amazon, Microsoft and other tech giants are developing AI that could power Terminator-like killer robots, study says

Agence France-Presse

A new study by a Dutch NGO ranks 50 companies on their stance on lethal autonomous weapons and related military technology, such as armed drones and AR headsets for soldiers

Could killer robots such as Terminator (portrayed here by Arnold Schwarzenegger) one day become the norm?

Amazon, Microsoft and Intel are among leading tech companies putting the world at risk through killer robot development, according to a report that surveyed major players from the sector about their stance on lethal autonomous weapons.

Dutch NGO Pax ranked 50 companies by three criteria: whether they were developing technology that could be relevant to deadly AI, whether they were working on related military projects, and whether they had committed to abstaining from contributing in the future.

Google, which last year published guiding principles eschewing AI for use in weapons systems, was among seven companies found to be engaging in “best practice” in the analysis, which spanned 12 countries, as was Japan’s SoftBank, best known for its humanoid Pepper robot.

In total, 22 companies were of “medium concern”, while 21 fell into a “high concern” category, notably Amazon and Microsoft, which are both bidding for a US$10 billion Pentagon contract to provide cloud infrastructure for the US military.

Others in the “high concern” group include Palantir, a company with roots in a CIA-backed venture capital organisation that was awarded a US$800 million contract to develop an AI system “that can help soldiers analyse a combat zone in real time.”

“Autonomous weapons will inevitably become scalable weapons of mass destruction, because if the human is not in the loop, then a single person can launch a million weapons or a hundred million weapons,” said Stuart Russell, a computer science professor at the University of California, Berkeley.

“The fact is that autonomous weapons are going to be developed by corporations, and in terms of a campaign to prevent autonomous weapons from becoming widespread, they can play a very big role.”

The development of AI for military purposes has triggered debate and protests within the industry: last year Google declined to renew a Pentagon contract called Project Maven, which used machine learning to distinguish people and objects in drone videos.

It also dropped out of the running for Joint Enterprise Defence Infrastructure (JEDI), the cloud contract that Amazon and Microsoft are hoping to bag. The report noted that Microsoft employees had also voiced their opposition to a US Army contract for an augmented reality headset, HoloLens, that aims at “increasing lethality” on the battlefield.

More worrying still are new categories of autonomous weapons that don’t yet exist – these could include armed mini-drones like those featured in the 2017 short film Slaughterbots.

“With that type of weapon, you could send a million of them in a container or cargo aircraft – so they have destructive capacity of a nuclear bomb but leave all the buildings behind,” said Russell. “[Using facial recognition technology, the drones could] wipe out one ethnic group or one gender, or using social media information you could wipe out all people with a political view”.
