
Why AI warfare is both commonplace and dangerous

By Vaseline | May 26, 2024

Joanna Andreasson/DALL-E4

Everyone knows what the AI apocalypse is supposed to look like. The films WarGames and The Terminator feature a super-intelligent computer that takes control of weapons in an attempt to end humanity. Fortunately, that scenario is unlikely for now. U.S. nuclear missiles, which run on decades-old technology, require a human with a physical key to launch.

But AI is already killing people around the world in more mundane ways. The US and Israeli militaries use AI systems to sift through intelligence and plan airstrikes, according to reporting by Bloomberg News, The Guardian, and +972 Magazine.

This type of software allows commanders to find and catalog targets much faster than their staff could on their own. The strikes are then carried out by human pilots, either in manned aircraft or with remote-controlled drones. "The machine did it coldly. And that made it easier," an Israeli intelligence officer told The Guardian.

Further afield, Turkish, Russian, and Ukrainian weapons manufacturers claim to have built "autonomous" drones that can attack targets even if their connection to the remote pilot is lost or jammed. Experts are skeptical, however, about whether these drones have actually carried out autonomous killings.

In both war and peace, AI is a tool that enables people to do what they want more efficiently. Human leaders will make decisions about war and peace the same way they always have. In the near future, most weapons will require a flesh-and-blood fighter to pull the trigger or press a button. Thanks to AI, the people in the middle – staff officers and intelligence analysts in windowless rooms – can mark their enemies for death with less effort, less time and less thought.

"That Terminator killer-robot image obscures all the pre-existing ways in which data-driven warfare, and other areas like data-driven policing, profiling, border control, and so on, already pose serious threats," said Lucy Suchman, a retired professor of the anthropology of science and technology and a member of the International Committee for Robot Arms Control.

Suchman argues that it is more useful to understand AI as a "stereotyping machine" running on top of legacy surveillance networks. "Thanks to the availability of enormous amounts of data and computing power," she says, these machines can learn to pick out the kinds of patterns and people that governments are interested in. Think Minority Report rather than The Terminator.

Even when humans review AI decisions, the speed of automated targeting leaves "less and less room for judgment," Suchman says. "It's a very bad idea to try to automate an area of human practice that is fraught with all kinds of problems."

AI can also be used to home in on targets already chosen by humans. Turkey's Kargu-2 attack drone, for example, can track down a target even after the drone loses its connection with its operator, according to a United Nations report on a 2021 battle in Libya involving the Kargu-2.

The usefulness of "autonomous" weapons is "really very situational," says Zachary Kallenborn, a policy fellow at George Mason University who specializes in drone warfare. A ship's missile defense system, for example, may have to shoot down dozens of incoming missiles and has little danger of hitting anything else. While an AI-driven weapon could be useful in that situation, Kallenborn argues, unleashing autonomous weapons on "people in an urban environment is a terrible idea," because of the difficulty of distinguishing between friendly forces, enemy combatants, and bystanders.

The scenario that really keeps Kallenborn up at night is the "drone swarm," a network of autonomous weapons that give each other instructions, because a single error could cascade through dozens or hundreds of killing machines.

Several human rights organizations, including Suchman's committee, are pushing for a treaty to ban or regulate autonomous weapons. So does the Chinese government. While Washington and Moscow are reluctant to submit to international scrutiny, they have placed internal limits on AI weapons.

The US Department of Defense has issued regulations requiring human oversight of autonomous weapons. More quietly, Russia appears to have disabled the AI capabilities of its Lancet-2 drones, according to an analysis cited by the military-focused online magazine Breaking Defense.

The same impulse that drove the development of AI warfare also seems to define its limits: human leaders’ hunger for control.

Military commanders "want to be very careful about the amount of force you inflict," Kallenborn says, "because ultimately you only do it to support larger political goals."
