Militaries are still waiting for the AI revolution

22 Aug 2024 | News

Defence ministries are overly focused on data and have yet to harness the full potential of artificial intelligence, according to a multinational study

AI tactic for a swarm attack, developed in a virtual twin of the battlefield. Photo credits: GhostPlay / 21strategies

The adoption of AI technology has yet to have the transformative impact on warfare many are predicting, as its use continues to revolve around processing large amounts of data, which is difficult to obtain in a military environment.

This is the conclusion of an international research project looking at how AI systems are currently being used for defence purposes in 25 countries, including NATO allies as well as Russia, China, South Korea and Israel. The findings were recently published in the open access book The Very Long Game.

The work was coordinated by the Hamburg-based Defence AI Observatory (DAIO), which was launched as part of GhostPlay, a project centred on AI-enhanced decision-making funded by the German Ministry of Defence.

The hype around defence AI is a bit like a soufflé, says Heiko Borchert, co-director of DAIO and one of the editors of the book: it looks impressive, but once you start to poke around, it collapses.

“This data centricity is most likely more of an obstacle to the quick adoption of defence AI than it is a lubricant,” Borchert told Science|Business.

This data-centric strategy makes sense in the commercial sector, where customers readily share their data with digital platforms, but in defence environments, data, particularly that of adversaries, is much harder to come by.

“Also, data is always backward looking. And data management needs a lot of energy, it needs manpower, it needs sophisticated digital infrastructure,” Borchert said.

The US defence research agency, DARPA, distinguishes between ‘second-wave’ AI systems, which are very good at classifying data and learning patterns but have limited capability to reason, and future ‘third-wave’ models that will be able to understand context and adapt to changing circumstances.

“These are solutions that properly understand the environment in which they operate, and are aware of the consequences of the decisions they take, but also take into account the consequences of adversarial decisions,” said Borchert.

All the countries studied are currently using AI with a focus on data. According to the researchers, the US is the only nation that has officially discussed and explored the military benefits of third-wave AI, with dedicated programmes managed by DARPA.

Germany’s Ministry of Defence is also exploring third-wave AI with the GhostPlay project, which builds a virtual twin of the battlefield to develop AI tactics.

Lessons from Ukraine

The book evaluates the different uses of AI based on the level of technological autonomy, and concludes that almost every country takes a “human-centric” approach, meaning AI is used to complement, not replace, humans.

A notable exception is Ukraine, which wants machines to be more autonomous. That stance is based on its battlefield experience, with Russia using jammers to cut the link between Ukrainian drones and their operators on the ground.

“Once you lose connectivity, you have a problem, because then you have no idea what the unmanned asset is going to do. That's why Ukraine says that to remain a responsible actor, we want not less autonomy, but more machine or technical autonomy,” Borchert said.

In other countries, such as Iran and South Korea, the use of unmanned systems is seen as a solution to declining populations and the shrinking number of military recruits.

Technological autonomy in a military context can sound like a scary prospect. There are already examples of where this could lead, such as the Israel Defense Forces system known as ‘The Gospel’, which uses AI to process data and recommend bombing targets at an unprecedented rate. A human does review the targets but “need not spend a lot of time on them”, according to testimonies gathered by the Israeli-Palestinian magazine +972 and the Hebrew-language news site Local Call.

However, Borchert says the “killer robot” narrative is unhelpful, as it ignores the many AI use cases that do not involve deciding when to pull a trigger. For example, technological autonomy could also mean machines are used to decide how best to monitor a given area. Artificial intelligence can also be used to develop tactics and predict vehicle maintenance problems before they arise.

The most interesting lesson from the war in Ukraine, Borchert argues, is instead related to ethical and regulatory questions. “Wars readjust your normative preferences,” he said. “If your enemy is behaving in a certain way, and you lose your own soldiers, you lose your civilian population, your infrastructure is being damaged, this readjusts what type of priorities you're setting in order to defend yourself.”

For example, Ukraine is currently using facial recognition software from the US company Clearview AI, both to help identify enemy suspects at checkpoints and to identify dead Russian soldiers. The technology is deemed illegal in several European countries, and heavily restricted in the US.

The EU’s AI Act provides an exemption for systems used exclusively for military and defence purposes, meaning it is up to member states to manage the risks, although dual-use technologies, which also have commercial applications, must comply with the EU rules.

This does not necessarily mean democracies are slower to implement AI in the defence realm. The Very Long Game shows that authoritarian regimes such as Russia and China face many of the same challenges when implementing technology.

For instance, China’s ability to access vast amounts of data from its population does not mean its AI solutions will be superior, says Borchert. “If defence AI has only been trained on Chinese data, how well is it going to behave against non-Chinese data?”

Nor is there a correlation between the level of digitisation in the public sector and a country’s adoption of defence AI systems, according to the case studies, as cultural barriers may still exist. “Simply because the technology is available doesn't say anything about the readiness and the willingness of armed forces to use that specific technology,” Borchert said.

International collaboration

Most of the work on defence AI is being undertaken at national level, as questions of sovereignty and data sharing can be barriers to cooperation. However, Borchert says there is a growing appetite for bilateral and multilateral R&D projects.

The US, UK and Australia are undertaking a series of collaborative trials into AI-enabled drones as part of the Resilient and Autonomous Artificial Intelligence Technologies initiative.

The €1 billion NATO Innovation Fund has also invested in venture capital funds focusing on AI and other deep tech startups. And NATO’s DIANA start-up accelerator lists AI as one of its main focus areas.

At the EU level, the main vehicle for supporting R&I in the field is the European Defence Fund, which finances cross-border projects to develop AI-enabled solutions, including image recognition systems and language technology.

On 9 and 10 September, South Korea will host the second summit on Responsible Artificial Intelligence in the Military Domain, bringing together stakeholders to discuss the responsible use of AI.
