Civil society organizations and AI scientists have become increasingly worried about the development of lethal autonomous weapons systems: AI-enabled weapons capable of identifying targets and killing people without any human involvement.

As a result, the United Nations has made deliberate efforts to ban or restrict the use of such systems. However, those discussions have yielded few results thus far.

Meanwhile, the development of autonomous weaponry has progressed at breakneck speed, although these weapons are still in their infancy. One of the biggest fears surrounding the Russia-Ukraine war is that swarms of "slaughterbot" drones could become a reality.

So, how is AI being used on the front lines of the Ukrainian conflict?

Ukraine has already been employing the Turkish-made TB2 drone, which can take off, land, and cruise autonomously, although a human operator still decides when to release the laser-guided bombs it carries. (The drone can also use its laser to guide artillery strikes.)

Meanwhile, Russia possesses the "Lancet," a kamikaze drone with autonomous capabilities that was reportedly used in Syria and might be deployed in Ukraine. The Lancet is a "loitering munition" that targets tanks, vehicle columns, and troop concentrations: once launched, it circles a designated geographic zone until it detects a predefined target type, then dives into the target and detonates its warhead.

AI is a strategic priority for Russia. The country's president, Vladimir Putin, declared in 2017 that whoever becomes the leader in artificial intelligence "will become the master of the world."

However, according to a recent study by researchers at the Center for Naval Analyses, a research organization funded by the US government, Russia trails the US and China in developing AI defense capabilities.

According to a Politico article, Russia is considering using AI to analyze battlefield data, such as surveillance imagery from drones in Ukraine. China, in turn, may offer Russia more advanced AI-enabled weaponry in exchange for Russia's battle-tested knowledge of integrating drones into combat operations.

AI may also play a crucial part in the information war. Many fear that tools like deepfakes, highly realistic fake videos generated with machine learning, will accelerate Russian disinformation efforts, although deepfakes have yet to be employed in the conflict. On the other side, machine learning can also help detect misinformation, and such systems are already in use on the major social media platforms, although their track record for reliably detecting and removing misinformation is patchy at best.
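
To make that concrete, here is a minimal, illustrative sketch of the kind of text-classification pipeline many such detection systems are built on, using scikit-learn. The tiny labeled dataset is entirely hypothetical; real systems train on millions of human-labeled examples and combine many non-text signals such as account behavior and image forensics.

```python
# Minimal sketch of a misinformation text classifier (illustrative only).
# The tiny labeled dataset below is hypothetical; production systems train
# on far larger corpora and use many signals beyond the post text itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: 1 = flagged as misinformation, 0 = benign.
posts = [
    "Satellite photos confirm the convoy moved north yesterday",
    "Officials deny the strike; independent footage contradicts them",
    "SHOCKING: secret labs PROVE the attack was staged, share now!!!",
    "They are hiding the truth, this video is 100% proof, wake up!!",
]
labels = [0, 0, 1, 1]

# TF-IDF turns each post into a sparse word-weight vector; logistic
# regression then learns which terms correlate with each label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score a new post: probability that it resembles the flagged class.
new_post = "PROOF the footage was staged, share before they delete it!!!"
print(model.predict_proba([new_post])[0][1])
```

Even this toy example hints at why the track record is patchy: the model latches onto surface cues such as sensational phrasing and punctuation, which a determined adversary can simply avoid.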

Some agencies have also suggested that AI could aid in analyzing the massive amount of open-source intelligence coming out of Ukraine, from TikTok videos and Telegram posts of troop formations and attacks uploaded by ordinary Ukrainians to publicly available satellite imagery. Such analysis helps civil society organizations fact-check claims made by both sides in the conflict and document potential atrocities and human rights violations that might prove crucial in future war crimes trials.
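
As a rough illustration of what AI-assisted OSINT triage might look like, the sketch below clusters geotagged reports with scikit-learn's DBSCAN so that independent posts about the same location surface together for human verification. All sources, coordinates, and report texts are invented for the example.

```python
# Sketch: cluster geotagged OSINT reports so that independent posts about
# the same location can be cross-checked by a human analyst.
# All reports and coordinates below are invented for illustration.
import numpy as np
from sklearn.cluster import DBSCAN

reports = [
    {"source": "telegram", "lat": 50.4501, "lon": 30.5234, "text": "column of trucks heading west"},
    {"source": "tiktok",   "lat": 50.4507, "lon": 30.5241, "text": "loud explosions near the bridge"},
    {"source": "twitter",  "lat": 50.4498, "lon": 30.5229, "text": "smoke visible from the river bank"},
    {"source": "telegram", "lat": 49.8397, "lon": 24.0297, "text": "air-raid sirens, no visible damage"},
]

coords = np.array([[r["lat"], r["lon"]] for r in reports])

# eps of 0.005 degrees is roughly 500 m at this latitude (a crude Euclidean
# approximation; real pipelines use proper geodesic distance). min_samples=2
# means a cluster needs at least two independent reports; lone reports are
# labeled -1 (noise) and treated as uncorroborated.
clusters = DBSCAN(eps=0.005, min_samples=2).fit_predict(coords)

for label in sorted(set(clusters)):
    members = [r for r, c in zip(reports, clusters) if c == label]
    tag = "unverified (single report)" if label == -1 else f"cluster {label}"
    print(tag)
    for r in members:
        print(f"  [{r['source']}] {r['text']}")
```

Real OSINT pipelines go much further, adding deduplication of reposted footage, image geolocation, and timestamp checks, but the underlying pattern of corroborating independent reports before human review is the same.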

What About the Rest of the World?

For better or worse, AI in warfare is likely to become a mainstay of battles well beyond Ukraine. Many countries are investing heavily in it, including the United States, which expects to spend $874 million on AI-related technologies this year as part of its $2.3 billion science and technology research budget.

After activating its Response Force for the first time last week as a defensive measure in response to Russia's invasion, the North Atlantic Treaty Organization (NATO) launched an AI policy and a $1 billion fund to develop new AI defense technology. In that policy, NATO emphasized the significance of member "engagement and cooperation" on "any themes important to AI for transatlantic defense and security," such as human rights and humanitarian law.

So, What Next?

This war has had a significant impact on AI researchers worldwide, as it has on everyone else. Many eminent researchers have taken to Twitter to discuss how best to respond to the crisis and how the technology they work on might help end the present conflict and ease human suffering, or at the very least prevent future wars. However, much of the conversation on forums and social media appears strangely naïve and detached from the realities of world politics, war, and peace.

Nuclear physicists were at the forefront of efforts to regulate atomic weapons because they immediately grasped the ramifications of what they were creating. Unfortunately, many computer scientists today appear blissfully oblivious to the political and military implications of their work, and far too quick to delegate the difficult job of figuring out how to regulate AI to others. Hopefully, this war will serve as a wake-up call for them.

Ultimately, whether one judges the use of AI in war to be ethical comes down to one's underlying optimism about the field. You may take a pragmatic view and value the soldiers' lives spared by precise, calculated robotic strikes, or you may take a stricter ethical stance and grieve the human lives taken by killer drones.

The use of AI on the battlefield appears to be unavoidable, and stringent rules may be required to make its wartime use ethical, though even those offer no guarantee.

So, whether the apparent military benefits of AI outweigh its consequences is a question the war in Ukraine may be the first to answer; all we can hope is that the answer comes with as few casualties as possible.