Ramsha
The use of artificial intelligence (AI) is transforming warfare and strategy across the globe. Military organisations employ AI for multiple purposes, including intelligence, surveillance, and reconnaissance (ISR); autonomous weapons; decision-support systems; and cyber defence.
For the great powers, military AI now sits at the centre of debates over deterrence and strategic stability. In South Asia, a nuclearised region marked by the enduring India-Pakistan rivalry and growing Chinese involvement, the stakes are far higher and the margin for error far smaller.
South Asia’s past offers ample evidence of how quickly tensions between rival nuclear-armed states can escalate: the Kargil crisis of 1999, the 2008 Mumbai attacks, and the 2019 Balakot episode. Each of these episodes occurred before advanced AI systems reached command-and-control centres. This raises a critical question: does AI enhance stability in South Asia by reducing uncertainty through better situational awareness and restraining escalation, or does it exacerbate instability by creating conditions for faster escalation and misunderstanding?
The idea of “strategic stability” rests on the Cold War-era premise of deterrence theory: no state will launch a nuclear first strike if it believes it would face retaliation of equal or greater force. Strategic stability also requires “crisis stability,” the ability of leaders to resist escalation under crisis pressure, and “arms race stability,” the absence of technological competition that could create instability.
In South Asia, deterrence is not just about capability; it is also about perception, signalling, and the ability of leaders to manage crises within tight time constraints. AI offers several advantages here: enhanced ISR, the fusion of data across domains, and decision support, giving leaders a more accurate picture of a developing situation.
In theory, this clearer picture should mean less uncertainty and less overreaction. Yet the same tools can compress decision timelines, reduce human oversight, and deepen mistrust in a region where nuclear thresholds are obscure and communication channels weak.
The regional imbalance in AI development complicates matters further. India has moved quickly, establishing a Defence AI Council, strengthening research through the Defence Research and Development Organisation, and deploying AI-enabled ISR along its borders. Collaboration with the United States and Israel has accelerated this process. Pakistan, by contrast, remains in the early stages.
Its 2023 National AI Policy outlines ambitions, but much of its defence-relevant AI capability depends on cooperation with China and Turkey. China itself stands as a regional AI power, with extensive investments in military applications and a civil-military fusion strategy that spills into South Asia through technology transfer.
This layered asymmetry has created a classic security dilemma: steps one state takes to protect itself are perceived by others as threatening and prompt countermeasures. When India invests in autonomous ISR and decision-support tools, for example, Islamabad may read that investment as enhancing India’s counterforce strike potential. Similarly, China’s AI advances are likely to reinforce India’s threat perceptions and add further pressure to the region.
AI is a dual-use technology and can therefore blur the line between defensive innovation and offensive preparation. Even so, there are opportunities for stability in South Asia. Improved ISR architecture can reduce false alarms and separate genuine threats from noise, provided verification processes remain robust. Likewise, structured decision-support systems can help ensure that leaders evaluate options carefully during a crisis, provided authority remains firmly in human hands.
Limited automation in defence (for example, protecting communication links or countering drones) can strengthen second-strike confidence and deter pre-emption rather than encourage it. Confidence-building measures, such as crisis hotlines and agreed norms for preventing accidents, remain possible even amid intense political hostility.
However, the risks associated with AI are significant and immediate. “Automation bias,” for example, describes the documented human tendency to over-rely on algorithmic outputs. Reliance on opaque AI recommendations in high-pressure situations can impair judgment rather than improve it.
Furthermore, the compressed timelines inherent in a nuclearised region increase the chance of unintended escalation. AI-enabled drones and swarming technologies raise additional questions: they allow rapid action, ambiguous identification, and the possibility of overwhelming defences, so that small skirmishes, if misinterpreted, can become strategic crises.
Additionally, an AI-driven arms race poses significant sustainability challenges, particularly for Pakistan. Imports and partnerships can provide temporary capability gains, but they cannot substitute for sovereign software development, testing, and doctrinal integration. The result can be dependency without resilience, a recipe for long-term instability.
Cyber vulnerability compounds these risks. AI systems can be compromised through data manipulation or deception, and integrating them into command-and-control creates new avenues for disruption. Without sufficient protections, AI could generate false alarms or erode control over nuclear decision-making. Recent regional events illustrate these risks.
The intensified surveillance and information-gathering during the China-India stand-off of 2020 heightened tensions between the two countries, even though full-scale conflict was avoided. India-Pakistan encounters since 2019 show how rapid, frequent cycles of intrusion and countermeasure can drive escalation.
The question, therefore, is not whether AI is inherently stabilising or destabilising, but how it changes the context in which decisions are made. By compressing the timeline of potential actions, amplifying uncertainty, and widening the range of options available to states, AI can alter escalation dynamics in ways that South Asia is ill-equipped to manage.
In the near to medium term, the risks of military AI appear to outweigh the benefits. A safer path does exist, however: limiting AI to purely defensive roles, keeping nuclear decision-making under human control, and pursuing modest confidence-building measures. Ultimately, strategic stability in South Asia will turn on the political choices leaders make about how far to trust machines with decisions.

The author is an MPhil scholar in Strategic Studies at the National Defence University. She has previously worked with several reputed research institutes, including the Institute of Policy Studies (IPS) and the Institute of Regional Studies (IRS), and has published research papers in the Journal of Peace and Diplomacy, the Research Consortium Archive, and the IRS journal.