The militaristic use of AI is somewhat controversial, so this story in Artificial Intelligence News sparked my interest. According to the story, the British military has been trialling the use of AI to “scan for hidden attackers in a mock battlefield environment.” The testing ground is actually in Montreal, Canada rather than the UK, as Canada and the UK, along with the US, Australia and New Zealand, are members of the ‘Five Eyes’ security partnership.
The AI tool is called SAPIENT (Sensors for Asset Protection using Integrated Electronic Network Technology)
and it has been developed with the aim of using sensors to detect battlefield hazards while freeing up human soldiers for other operational activities. To some it may sound like something you might find while playing “Call of Duty,” except this tool will not be deployed in a virtual reality.
Keeping pace with Russia
Not everyone welcomes the introduction of AI to warfare, but those who do support it point to Russia and China’s heavy investment in military AI and say we must keep pace with them. Russia’s bellicose president Vladimir Putin has already said, “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”
Google’s Project Maven
Perhaps that is a chilling enough statement to send most nations scurrying off to work on AI, and why the ‘Five Eyes’ are continuing to test new tools extensively. Yet others are wary. For example, earlier in 2018 you may remember the furore that erupted at Google’s offices over its Project Maven contract with the Pentagon to develop AI technology for drones. Google was forced to drop the contract following an internal backlash and staff resignations. Some 4,000 staff members demanded that Google never again “build warfare technology.” Google CEO Sundar Pichai wrote in a blog post that the company will not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights.”
But others are undeterred by such considerations, and it is an extremely interesting area of debate, especially for ‘civilian’ companies like Google. When companies typically associated with the military work on such technology, you can expect a different response from their employees, because this is simply what they do. That is the case with SAPIENT, which was developed by the Defence Science and Technology Laboratory (DSTL) and industry partners, and co-funded initially by Innovate UK. Since 2016, the programme has been funded solely by DSTL, which is part of the Ministry of Defence.
The UK’s Defence Minister Stuart Andrew said of SAPIENT:
“This British system can act as autonomous eyes in the urban battlefield. This technology can scan streets for enemy movements so troops can be ready for combat with quicker, more reliable information on attackers hiding around the corner.
Investing millions in advanced technology like this will give us the edge in future battles.”
Sapience, by the way, means “the ability to act with judgement.” The question is, can an AI tool that monitors people approaching a checkpoint, or changes in people’s behaviour, really replace human intelligence? There are many stories of human error in which innocent people have been mistakenly shot. Will AI deliver better responses? Sadly, we will only know after it has been deployed.