The video is stark. Two menacing men stand next to a white van in a field, holding remote controls. They open the van’s back doors, and the whining sound of quadcopter drones crescendos. They flip a switch, and the drones swarm out like bats from a cave. In a few seconds, we cut to a college classroom. The killer robots flood in through windows and vents. The students scream in terror, trapped inside, as the drones attack with deadly force. The lesson that the film, Slaughterbots, is trying to impart is clear: tiny killer robots are either here or a small technological advance away. Terrorists could easily deploy them. And existing defences are weak or nonexistent.
Some military experts argued that Slaughterbots – which was made by the Future of Life Institute, an organisation researching existential threats to humanity – sensationalised a serious problem, stoking fear where calm reflection was required. But when it comes to the future of war, the line between science fiction and industrial fact is often blurry. The US air force has predicted a future in which “Swat teams will send mechanical insects equipped with video cameras to creep inside a building during a hostage standoff”. One “microsystems collaborative” has already released Octoroach, an “extremely small robot with a camera and radio transmitter that can cover up to 100 metres on the ground”. It is only one of many “biomimetic”, or nature-imitating, weapons that are on the horizon.
Who knows how many other noxious creatures are now models for avant-garde military theorists. A recent novel by PW Singer and August Cole, set in a near future in which the US is at war with China and Russia, presents a kaleidoscopic vision of autonomous drones, lasers and hijacked satellites. The book cannot be written off as techno-military fantasy: it includes hundreds of footnotes documenting the development of each piece of hardware and software it describes.
Advances in the modelling of robotic killing machines are no less disturbing. A Russian science fiction story from the 60s, Crabs on the Island, described a kind of Hunger Games for AIs, in which robots would battle one another for resources. Losers would be scrapped and winners would spawn, until some evolved to be the best killing machines. When a leading computer scientist mentioned a similar scenario to the US’s Defense Advanced Research Projects Agency (Darpa), calling it a “robot Jurassic Park”, a leader there called it “feasible”. It doesn’t take much reflection to realise that such an experiment has the potential to go wildly out of control. Expense is the chief impediment to a great power experimenting with such potentially destructive machines. Software modelling may eliminate even that barrier, allowing virtual battle-tested simulations to inspire future military investments.
Source: ‘Machines set loose to slaughter’: the dangerous rise of military AI | News | The Guardian