
The Intersection of Technology and the Military: Understanding the Rise of Military AI

  • Jul 29, 2025
  • 6 min read

Updated: Aug 20, 2025

Guest Author: Nikita Ambwani, reachable on LinkedIn. The views and opinions in this article are the author's own.


On February 16, 2023, in an effort to impose order on a new technology with the capacity to alter how war is fought, the United States announced an initiative encouraging global collaboration on the responsible military use of autonomous weapons and artificial intelligence. Military organizations have a natural incentive to maintain an advantage over rivals, or at the very least avoid falling behind, and the international rush to build military AI could intensify into an all-out arms race.


In the absence of international agreement on standards of responsible development and use, governments may be tempted to quickly purchase and integrate military AI without putting in place the regulations needed to ensure that systems are secure and dependable. This could lead to a "race to the bottom" that endangers people's capacity to control military AI systems. While pursuing new technologies is routine, some raise questions because of their effects on stability. Because AI is a fast-evolving technology, it is important to establish clear guidelines for ethical behavior in military applications, keeping in mind that these applications will surely change over the next several years. All countries must follow the rules of international humanitarian law (IHL), which aim to shield defenseless people from the atrocities of war.

The use of autonomous weapons, meaning those that can locate and eliminate targets without human operators participating in the decision, raises important questions of moral responsibility, the preservation of human dignity, and who should be held liable for destructive action if the wrong targets are hit.

At the Hague meeting, 60 countries, including the US and China, issued a call to action encouraging broad collaboration on the research and appropriate military use of artificial intelligence. This underscored the seriousness with which AI and autonomous weapons are being addressed internationally.


When nuclear weapons entered the field of international combat, their overwhelming destructive power raised worries about their possible use, and arms-control measures were developed in response. It therefore seems natural that policymakers are focusing on possible concerns as AI is incorporated into military systems. The hazards of such integration are complex because artificial intelligence (AI) technologies are not discrete weapons but diffuse, general-purpose capabilities.


AI AND ITS APPLICATIONS IN THE MILITARY


When artificial intelligence systems are integrated into the military, several questions arise, depending on context, about who bears ultimate responsibility for the use and consequences of lethal force. The first problem is how to adequately describe the level of human engagement in such circumstances. Developers frequently sort human-machine interactions of this kind into three main groups: "human-in-the-loop" (the weapon is only semiautonomous; an operator must decide whether to engage a target, and the system will not fire without a clear human command), "human-on-the-loop" (the weapon can autonomously locate, recognize, and engage targets without human intervention, but an operator monitors the situation and has the power to stop or halt the engagement), and "human-out-of-the-loop" (human operators are unable to interfere in the engagement). These are alternately called "semiautonomous," "supervised autonomous," and "fully autonomous."
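
As a rough illustration only, the difference between these modes can be sketched as a simple engagement gate. The Python below is a hypothetical sketch written for this article; the AutonomyMode enum and authorize_engagement function are invented names, not any real weapon interface.

from enum import Enum, auto

class AutonomyMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # semiautonomous: fires only on explicit human command
    HUMAN_ON_THE_LOOP = auto()      # supervised autonomous: fires unless a human vetoes
    HUMAN_OUT_OF_THE_LOOP = auto()  # fully autonomous: no human intervention possible

def authorize_engagement(mode: AutonomyMode, human_command: bool, human_veto: bool) -> bool:
    """Return True if the system may engage a target under the given mode."""
    if mode is AutonomyMode.HUMAN_IN_THE_LOOP:
        # Engagement requires an affirmative human decision.
        return human_command
    if mode is AutonomyMode.HUMAN_ON_THE_LOOP:
        # Engagement proceeds autonomously unless the supervising operator intervenes.
        return not human_veto
    # Human-out-of-the-loop: once deployed, the machine decides alone.
    return True

The only difference between the three branches is where, if anywhere, a human decision enters the control flow.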


This categorization alludes to the ‘Observe, Orient, Decide, and Act’ (OODA) loop, a theory created by Colonel John Boyd in the 1980s. In this paradigm, the observe and orient components apply to locating and classifying targets, while the decide and act components refer to engaging and perhaps destroying those targets. The goal is to complete one's own OODA loop and eliminate enemy fighters before they can complete theirs and launch an assault or flee. Although the distinctions between these designs carry substantial legal and moral ramifications, there may be barely any variation in the construction of the weapons themselves. This matters for international relations and conflict as much as for safety and development initiatives. When AI technologies are used in military systems, compliance with international humanitarian law has to be ensured so that security in the international arena can be maintained and upheld.
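
To make the cycle concrete, here is a toy Python sketch of a single OODA pass; every function and data field in it (the sensor lambdas, the "range_km" key, and so on) is a hypothetical placeholder invented for illustration, not part of any real targeting system.

def observe(sensors):
    """Observe: gather raw readings from the available sensors."""
    return [sense() for sense in sensors]

def orient(readings):
    """Orient: locate and classify potential targets among the readings."""
    return [r for r in readings if r["kind"] == "target"]

def decide(targets):
    """Decide: select an engagement option (here, simply the nearest target)."""
    return min(targets, key=lambda t: t["range_km"], default=None)

def act(choice):
    """Act: execute the decision; in practice the loop then begins again."""
    return f"engage at {choice['range_km']} km" if choice else "no action"

# One full pass through the loop with two mocked-up sensor readings.
sensors = [lambda: {"kind": "target", "range_km": 12.0},
           lambda: {"kind": "clutter", "range_km": 3.0}]
print(act(decide(orient(observe(sensors)))))   # -> engage at 12.0 km

The structure shows why tempo matters: whichever side runs observe-orient-decide-act faster completes its loop, and engages, before the opponent finishes orienting.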


ADVANTAGES AND POTENTIAL RISKS INVOLVED


Thanks to advances in data, computing power, and machine learning, the potential of artificial intelligence (AI) has multiplied in recent years. As an enabling, general-purpose technology, AI will have numerous uses: strategy, operations, logistics, personnel, training, and every other area of the military will be affected. And just as the militarization of computers or electricity is not necessarily concerning, neither is the militarization of artificial intelligence itself.


However, some AI-based military applications, such as lethal autonomous weapons or AI in nuclear operations, could have fatal results. First, AI systems will be intelligent, integrating knowledge-focused analytical capabilities. These systems will then be connected across a collection of digital and physical domains, including sensors, organizations, people, and autonomous agents, which will also allow them to draw on blockchain technology's advantages for data integrity.


Lastly, the physical, human, and information worlds will be integrated to produce new disruptive effects. According to "DoD Growth in Artificial Intelligence: The Frontline of a New Age in Defense" (2019), the Pentagon is exploring how to use artificial intelligence (AI) to its full potential for benefits such as autonomous battlespaces, intelligence analysis, record monitoring, predictive maintenance, and military medicine.


For the DoD, AI is a critical area of increasing investment. In C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance), combat units employ trusted AI-enabled autonomous systems that can perform a variety of tasks and that offer AI-enabled decision support for war scenarios along with AI-recommended courses of action. Integrating AI into the C4ISR process yields better indicators and warnings, stronger information and knowledge management systems, and more accurate intelligence analysis.


With AI assistance, autonomous systems and unmanned vehicles, such as unmanned aerial vehicles and autonomous underwater vehicles, may operate at significantly greater levels of efficiency and safety. Integrating deep learning algorithms into unmanned platforms will considerably improve robotic navigation. Using artificial intelligence to supplement and enhance human intellect will add a new dimension to conflict: physical and digital machines will be able to perform tasks independently, at least within specific parameters.


No AI system, or even collection of systems, can match the adaptability, dependability, and universality of human intellect. Because warfare and similar situations are highly unpredictable, it is crucial to understand the strengths and weaknesses of both machine and human intelligence. AI will play a role in combat in the cognitive era, but human intelligence will continue to play a significant role in warfare for the immediate future.

Listed below are a few concerns and probable risks surrounding the integration of AI into the military, as pointed out by researchers.


The central issue for the military is that there is very little real-world information about the conditions of war on which to base system evaluations. The key points:


  • Military AI systems can be tested in virtual or real-world training environments, but real-world operational testing is impossible until a conflict breaks out. Despite their best efforts to simulate genuine operating conditions, militaries cannot properly replicate the chaos and bloodshed of war in peacetime.

  • Humans are flexible and are expected to improvise in battle, using prior experience as a base. Artificial intelligence is not as malleable or versatile as human intellect: failures might cause mishaps or simply render military systems useless. Keeping people involved and limiting AI systems to merely providing recommendations are not sufficient safeguards, because of automation bias, the human tendency to place too much trust in machines.

  • Overreliance on technology may produce errors and mishaps even before hostilities start. In 2003, two incidents involving the highly automated US Patriot air and missile defense system had humans in the loop, yet those humans were unable to prevent the tragedies.

  • Even assuming that artificial intelligence systems operate flawlessly, countries may be unable to foresee their own course of action in a crisis. When humans deploy autonomous systems, they pre-delegate authority for certain activities to a machine. The problem is that in a genuine crisis, leaders may decide to adopt a different strategy.


CONCLUSION


It is impossible to overstate the effect of the rapidly growing commercial market on the development of autonomous systems. Yet in the near future it will become even harder to fully grasp the worldwide consequences of the noticeable shift in the center of AI expertise from the military to private businesses. Researchers working on the transparency, comprehensibility, and simplicity problems of AI have already made significant strides.


Many of these developments are probably also applicable to military AI. Given the stakes, the caliber of the data, the regulations, and similar factors, military demands could be very different, and some approaches may not even be pertinent. The use of AI in military operations is inevitable, but the environment is changing quickly and in potentially negative ways.


It will be some time before artificial intelligence (AI) can come close to matching human intellect in high-uncertainty situations, given the difficulties that remain in providing computers with true knowledge and expert-based behaviour, as well as the limits of perceptual sensors.

 
 
 
