Military applications for artificial intelligence (AI). The very phrase evokes visions of robot super soldiers and autonomous drone strikes, but AI could have all sorts of non-combat applications too. It might even be used to prevent combat from occurring in the first place.
“There has been a lot of debate on the use and misuse of AI in warfare,” says Dr. Alex S. Wilner, an Assistant Professor in the Norman Paterson School of International Affairs.
“But there is not a lot of understanding on how AI can be used in strategy or tactical decision making.”
Funded through the Department of National Defence’s Innovation for Defence Excellence and Security (IDEaS) program, Wilner’s research focuses on how AI might be applied to deterrence. It’s possible that AI could improve the defensive strategies that countries use to manipulate an adversary’s behaviour, but it isn’t yet clear exactly how to do that, or even whether some countries might be doing it already.
“The nexus between AI and deterrence has barely been explored,” says Wilner.
“But we anticipate that it could improve the certainty and severity of a coercive message. That should affect the behaviour of your adversary. We don’t know yet, but it could help make better use of a threat.”
Probing Artificial Intelligence Applications for Defence
On February 24-25, Wilner is hosting “AI: Implications for Defence, National Security and Intelligence”. Funded through the Department of National Defence’s Mobilizing Insights in Defence and Security (MINDS) program, the conference will be held at Carleton and will seek to better understand the possibilities and the challenges. It will bring together 60 participants from academia, think tanks, government, the military, and the private sector.
“A lot of AI, machine learning and related technologies are being developed by the private sector, for the purposes of the companies that are developing them,” Wilner says.
“The relationship between private sector actors and governments isn’t always an easy one. There has been some concern from major players like Google, after working with the Pentagon. There is some discussion as to what the private sector’s role in military affairs should be in liberal democratic countries.”
Policy makers in these countries can have a very different set of concerns.
“There is a worry about the control of AI, and about the data upon which it is trained and approved,” says Wilner.
“There is also an overarching question of human control. This is where AI intersects with ethics in warfare: human control of weaponry.”
AI Creates Unique Challenges for Liberal Democratic Countries
These questions create unique challenges for liberal democratic countries. In China, AI is being developed primarily by and for the state. Russia is also investing in AI research, but in both countries, debates over the ethics of military AI are less prominent.
Still, AI is being deployed within the militaries of liberal democracies like Canada and the United States, and it’s already having an impact on the way that they function.
“Artificial intelligence is great at making sense of large amounts of data,” says Wilner.
“It is already helping intelligence analysts make sense of what they are seeing in the data. That is directly linked to policy and strategy. AI is used in planning certain events, logistics, training, veterans affairs. It’s in the back office, but we’re probably approaching a place where artificial intelligence will be used to identify targets. There is the current state of the art, and then there is the expectation of where it will go in five years. Part of this conference is really about building the concepts from the bottom up, and then driving toward, hopefully, some policy.”
Thursday, February 20, 2020 in Faculty of Public and Global Affairs