Abstract: Artificial intelligence is transforming warfare by compressing the time available for human judgement. As conflict environments accelerate, states face growing challenges around strategic autonomy, decision-making, sovereignty, and the ability to maintain meaningful human oversight in increasingly machine-driven systems.

Dr Joan Swart

Keywords: AI, asymmetry, military strategy, intelligence, drones

AI and the Compression of Modern Warfare

Artificial intelligence is increasingly discussed in the context of warfare, often through the imagery of autonomous weapons, drone swarms, robotic systems, and so-called “killer robots.” The rapid evolution of these technologies, particularly in conflicts such as Ukraine, has accelerated public and military interest in how AI may reshape future battlefields. Yet much of this discussion remains focused on the visible systems themselves. The deeper transformation may lie elsewhere.

The strategic significance of AI may not primarily be that it changes the tools of war, but that it compresses the time available for human judgement within war.

In a recent discussion on artificial intelligence and the changing character of warfare, Prof. Abel Esterhuyse noted that despite technological transformation, war remains fundamentally a human and political phenomenon. This observation is important because modern debates around AI often create the impression that technology itself is becoming the primary actor. In reality, technology alters the environment within which human beings, institutions, and states must operate. AI may therefore be less important as an autonomous decision-maker than as a system that increasingly accelerates the tempo of military and political interaction.

Historically, warfare contained forms of strategic friction. Information travelled more slowly. Intelligence required time to verify. Political leaders, diplomats, and commanders operated within longer decision cycles. This did not eliminate miscalculation or escalation, but it created space for reflection, reassessment, and restraint.

AI increasingly compresses this space.

Modern militaries are already integrating AI-assisted intelligence analysis, target identification, predictive modelling, autonomous or semi-autonomous drone systems, and real-time battlefield processing. The effect is not simply greater efficiency. It is the acceleration of the entire operational environment. Detection becomes faster. Interpretation becomes faster. Response expectations become faster. The pressure to make decisions at machine-supported speed intensifies.

This compression has implications beyond the battlefield itself.

The OODA loop — observe, orient, decide, and act — has long shaped military thinking around operational tempo and decision superiority. AI systems increasingly accelerate each phase simultaneously. The state or actor capable of processing information, identifying patterns, and reacting faster may gain a significant strategic advantage. Yet this creates a deeper question: what happens when political and institutional systems struggle to adapt to the speed of the environments they increasingly inhabit?

This challenge may prove particularly significant for lower and middle powers.

Historically, states without overwhelming military superiority often relied on diplomatic manoeuvre, ambiguity, balancing strategies, and time itself to preserve strategic flexibility. Accelerated conflict environments reduce these buffers. States with advanced AI ecosystems, integrated intelligence architectures, and superior computational infrastructure may increasingly shape the tempo within which others are forced to respond.

This creates a new form of asymmetry.

This asymmetry is not limited to military hardware alone. Effective AI integration depends on a broader ecosystem of computational infrastructure, advanced telecommunications networks, satellite integration, data acquisition, software engineering capacity, semiconductor access, cyber capability, and highly specialised human capital. States lacking these foundations may find themselves increasingly dependent on external providers for both technological capability and strategic interpretation.

This introduces important constraints for many middle and lower powers.

Historically, military modernisation was often understood in terms of acquiring platforms: aircraft, tanks, ships, artillery systems, or missiles. AI-driven warfare alters this equation. The decisive factor increasingly becomes the ability to integrate information across multiple domains in real time, process large volumes of data rapidly, and distribute actionable intelligence throughout operational structures. The challenge therefore shifts from simply acquiring equipment to sustaining complex technological ecosystems.

The costs associated with such ecosystems are substantial. Advanced AI development requires enormous computational resources, highly skilled technical personnel, secure digital infrastructure, and continuous software adaptation. This may widen existing global disparities between technologically dominant powers and states already facing developmental or fiscal constraints.

For many middle powers, including those in Africa, independent development of frontier AI systems at scale may prove unrealistic in the near term. This creates growing pressure toward technological dependence, strategic alignment, or participation in broader alliance structures capable of sharing intelligence, infrastructure, and digital capability. Military alliances may therefore increasingly evolve beyond traditional defence cooperation into integrated information and computational partnerships.

At the same time, regional cooperation may become more strategically important rather than less. States unable to compete individually at the highest technological level may still improve resilience through shared intelligence frameworks, interoperable systems, coordinated cyber defence, regional satellite initiatives, and collaborative research and development structures. In this environment, strategic isolation may become increasingly costly.

Yet dependence also introduces risks.

Reliance on externally developed AI systems may gradually shape not only operational capability, but strategic perception itself. Systems trained, designed, and maintained outside local political and cultural environments may carry embedded assumptions, priorities, or biases that influence how threats are identified and interpreted. Smaller states may therefore face a growing challenge in preserving strategic autonomy within increasingly centralised technological ecosystems dominated by larger powers and multinational corporations.

This challenge is compounded by the growing reliance on AI systems whose internal reasoning processes are often not fully transparent even to their operators. As machine-learning systems increasingly generate probabilistic assessments rather than clearly explainable conclusions, political and military leaders may face growing pressure to act on outputs they cannot entirely interrogate or independently verify. Former National Intelligence chief executive member Johan Mostert recently observed that the contemporary intelligence challenge is shifting away from acquiring sufficient information toward determining what can still be trusted within increasingly saturated and manipulated information environments. Increased speed does not necessarily produce increased clarity. In highly compressed environments, acceleration may amplify uncertainty as much as it reduces it.

Military power has traditionally been measured through industrial capacity, economic strength, logistics, and conventional force projection. AI introduces an additional layer centred around computational dominance, data access, information integration, and decision-cycle compression. The strategic advantage may increasingly belong not only to those with the most weapons, but to those capable of interpreting and acting upon reality faster than their competitors.

This also raises important questions regarding sovereignty and strategic autonomy.

Advanced AI systems are not evenly distributed. Their development is concentrated among a relatively small number of major states and powerful technology firms with access to immense computational resources, data ecosystems, and financial capital. As these systems become increasingly integrated into military, intelligence, media, and governance environments, dependence on externally developed technologies may gradually shape how states interpret information, define threats, and formulate responses.

The implications therefore extend beyond military affairs alone.

As information environments become increasingly AI-mediated, societies may face growing pressure over informational sovereignty and strategic culture. The issue is no longer simply who controls territory or military hardware, but who controls the systems that process information, shape narratives, and structure decision-making environments. The strategic challenge may lie less in acquiring information than in determining what can still be trusted within saturated and manipulated information ecosystems. Dependence in these domains may create forms of cognitive and strategic vulnerability that are less visible than conventional military dependence, but potentially just as significant over time.

At the same time, it would be a mistake to assume that AI eliminates the enduring realities of war.

War remains shaped by uncertainty, fear, political interest, ideology, miscalculation, and human limitation. Technology may accelerate detection, targeting, and response, but it does not remove friction from human affairs. In some respects, accelerated systems may even intensify instability by reducing the time available for verification, diplomacy, and strategic restraint.

This may become one of the defining strategic tensions of the coming decades.

AI promises greater efficiency, precision, and operational awareness. Yet the same processes may also compress the space available for political judgement and institutional reflection. The danger may therefore lie less in autonomous machines themselves than in the growing difficulty human systems may face in maintaining meaningful oversight, restraint, and strategic autonomy within increasingly accelerated environments.

The future battlefield may not simply be more automated. It may be faster, denser, more interconnected, and increasingly unforgiving toward hesitation, ambiguity, or delay. Under such conditions, the central strategic challenge may no longer be whether humans remain involved in decision-making, but whether human judgement can retain meaningful space within systems operating at machine-supported speed.

NONGQAI's Strategic Security Analyst Dr Joan Swart is a forensic psychologist with an MBA and an MA in Military Studies. Her work focuses on African security, geopolitics, state fragility, substate dynamics, and the intersection between governance, legitimacy, and coercive power. She is the author of several books and regularly publishes long-form analysis and opinion pieces on security and governance issues. Her writing has appeared in outlets including DefenceWeb, Maroela Media, Netwerk24, RSG, Visegrad, and other policy and public-affairs platforms. She has a weekly slot on SAfm's The Global Briefing analysing world affairs. Her work bridges academic research, policy analysis, and applied strategic assessment, and she is currently completing a second PhD at the University of Stellenbosch Military Academy. Follow her on X/Twitter, Substack, and LinkedIn.