How long before it becomes a powerful tool to gain supremacy?

Discussion of military AI systems has intensified over the last few years. Developments range from the first alleged AI-coordinated drone strikes by the Israeli military in Gaza to announcements by senior US and EU officials that more money will be invested in this research field.

What is the actual situation? We cannot be entirely sure. Understandably, most of that information is treated as a matter of the utmost national security and is highly classified, so only a few transparent reports will ever surface. But we can observe some developments in practice.

The current war in Ukraine has allowed us to peek into the future of AI-fuelled warfare. What can we conclude about it so far?

First, consider the context: the UN-established group discussing limits on, or an outright ban of, Lethal Autonomous Weapon Systems failed to win support for decisive action from the USA, Russia, the UK, India, and Israel, that is, from precisely the states most advanced in AI development. What could the reasons be?

Part of the answer may lie in already well-known predictions. Vladimir Putin said back in 2017 that "whoever becomes the leader in AI will become the ruler of the world". Kai-Fu Lee predicted that AI would be the third revolution in warfare, after gunpowder and nuclear weapons. And it is worth underlining that autonomous weapons are only one aspect: AI also has the potential to scale data analysis, misinformation, and content curation far beyond what was possible in earlier major conflicts.

Although AI may bring the next revolution, that does not mean it resembles previous innovations. A key difference is that AI is not a weapon but a range of functions and technologies. As Michael C. Horowitz has noted, AI is best understood as an enabling technology: "AI is not a single widget, unlike a semiconductor or even a nuclear weapon." In other words, AI is many technologies and techniques.

Economic and investment reactions followed shortly. One study found that between 2005 and 2015, the United States accounted for 26 per cent of all new AI patents granted in the military domain and China for 25 per cent. In the years since, China has outpaced America. China is believed to have made particular advances in military-grade facial recognition, pouring billions of dollars into the effort. Last October, NATO launched an AI strategy and a $1 billion fund to develop new AI defence technologies.

Russia's invasion of Ukraine is shaping up to be a key proving ground for artificial intelligence and its military applications. But what are the options and potential applications? They could be grouped in several ways; I propose the following:

  1. Development of lethal autonomous weapon systems (LAWS): autonomous tanks, swarming munitions/drones, etc.
  2. Military operations and their optimization: logistics, command and control, resource planning.
  3. Platforms for intelligence collection and analysis: data ranging from TikTok and Telegram posts to news reports and publicly available satellite imagery.
  4. Detection of disinformation, or its creation: posts and videos generated by troll farms on social media (Twitter, TikTok, YouTube, Telegram, etc.).

Right now, those weapons and systems are still in their infancy, with massive potential for development. Yet AI-guided weapons that were once the stuff of science fiction (and were still largely in that realm when the UN committee first began discussing autonomous weapons in 2014) are now being deployed on battlefields around the globe.

It remains to be seen whether some of the applications mentioned above could allow civil society groups to fact-check the claims made by every side in the conflict and to document potential atrocities and human rights violations. That could be vital for future war crimes prosecutions and carry significant legal consequences.

The problem that many, myself included, have been warning about is that those developing the technology need to grasp the implications of what they are building and how it might be used in the future. To use a comparison: when anyone first fields fully autonomous weapons, the catch-all description for algorithms that help decide where and when a weapon should fire, it will make the human-directed drone strikes of recent years look as outdated as an attack with a bayonet.

Daan Kayser of the Dutch group Pax for Peace said: "I believe it's just a matter of policy at this point, not technology. Any one of several countries could have computers killing without a single human anywhere near it. And that should frighten everyone."

There are also optimistic predictions about AI's influence on the military. For example, some armies believe that using AI will shorten the fighting, boost the effectiveness and speed of target acquisition through "super-cognition", increase the precision of attacks, and thereby reduce civilian casualties.

It will be a fertile field for ethicists, philosophers, psychologists, and sociologists to explore, along with its potential consequences. Nancy Sherman, a Georgetown professor who has written numerous books on ethics and the military, said: "Just cause in going to war is important, and that happens because of consequences to individuals. When you reduce the consequences to individuals, you decide to enter a war too easily."

Sources of Article

Read the original article here: https://acomomcilovic.medium.com/ai-in-warfare-is-the-new-revolution-here-part-1-4ef1a0d200f

