How Transformer-Based AI Models Are Reshaping Irregular Warfare

War’s nature, the use of violence in pursuit of political objectives, remains constant, while its character shifts with technology. Clausewitz reminded strategists that the goal is to break an adversary’s will, not merely to win on the battlefield, and Mao showed how decentralization and local initiative can offset material shortfalls. Large and small language models (LLMs and SLMs) are the newest such shift, widening access to strategic knowledge, altering communication channels, and enabling autonomous decision-making in irregular warfare.

Democratization of Expertise

LLMs are democratizing expertise once reserved for states and militaries. Anyone with an internet connection can now query a sophisticated model for tactical advice or technical know-how. LLMs essentially compress the internet’s collective knowledge, including openly available military information, into portable files that can run offline and leave few digital traces. Security research such as HiddenLayer’s “Policy Puppetry Attack” shows that jailbreak techniques can bypass guardrails on several commercial models, reducing the effort required for non-state actors to obtain or share material on CBRN threats, explosives, and other forms of violence. The same jailbreak techniques, along with numerous other methods, work on open-source LLMs/SLMs and on distilled versions of closed-source models, which can be downloaded from platforms such as Hugging Face and run locally. A motivated individual can therefore consult an inexpensive model as a real-time tutor when planning irregular warfare.
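To make the “portable files” point concrete, here is a minimal sketch, assuming the Hugging Face `transformers` library; the model name is an arbitrary small open-weight example. Weights fetched once are cached locally and can afterwards be loaded with no network connection at all:

```python
# Minimal sketch: fetch a small open-weight model once, then run it offline.
# The model name is illustrative; any small open model works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # example small open-weight model

# First run (online): weights are cached locally under ~/.cache/huggingface
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Subsequent runs (offline): load strictly from the local cache
tokenizer = AutoTokenizer.from_pretrained(model_id, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(model_id, local_files_only=True)

inputs = tokenizer("Summarize the principles of maneuver warfare.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once cached, the same files can be copied onto an air-gapped machine, which is what running offline with few digital traces looks like in practice.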

Embedding Domain Knowledge

Interpretability remains a significant research gap. Leading AI labs can detail a model’s architecture and pre- and post-training processes, but they still lack fine-grained insight into how neural connections derive information, generate probabilistic outputs, and map specific weights to specific concepts. Deliberate embedding of content, however, is well understood: full-parameter supervised fine-tuning, direct preference optimization, steganographic embeddings, special tokens, and custom tokenization techniques let operators store doctrine, operational plans, and tactics, techniques, and procedures (TTPs) inside a model. By fine-tuning on proprietary data, operators can embed strategic and operational knowledge directly in the weights of an LLM or SLM. Short “trigger” tokens can then unlock this concealed knowledge, creating hidden doctrinal layers accessible only through specific decoding methods.

This opens the door to distributed, off-grid communication: non-state actors could exchange fine-tuned small (or even large) language models on USB sticks or other physical media and decode embedded instructions locally, with no network traffic to intercept. The analogy with World War II-era codebreaking, such as Alan Turing’s work on Nazi Enigma machines, highlights the difficulty of interpreting modern AI “ciphers.” Unlike mechanical encryption, transformer-based models encode doctrine statistically and conceptually, presenting a new cryptanalytic frontier. Decoding and predicting adversarial strategies embedded in neural networks is now essential for maintaining strategic advantage and operational clarity.
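To illustrate just the special-token mechanics named above, not a working covert channel, here is a minimal sketch using the standard Hugging Face API. The trigger string `<ttp-unlock>` is an invented placeholder, and actually binding concealed content to it would require a separate fine-tuning step, which is not shown:

```python
# Sketch of special-token mechanics only (not a functioning covert channel).
# Registering a custom token and resizing embeddings is standard practice;
# associating hidden content with it would need a separate fine-tuning step.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # illustrative base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Register a hypothetical trigger token; "<ttp-unlock>" is an invented example.
tokenizer.add_special_tokens({"additional_special_tokens": ["<ttp-unlock>"]})
model.resize_token_embeddings(len(tokenizer))  # allocate a new embedding row

# After fine-tuning on pairs that begin with the trigger, prompts containing
# it would elicit the embedded material; without it, the content stays hidden.
prompt_ids = tokenizer("<ttp-unlock> report status", return_tensors="pt")
print(prompt_ids.input_ids)  # the trigger maps to its own dedicated token id
```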

Decentralized Autonomy

Small open-source models also strengthen tactical-edge capabilities. Running on modest hardware with no network connection, they let dispersed units analyze local conditions, consult embedded doctrine, and draft orders without higher-level oversight. This decentralized autonomy greatly increases the adaptability and unpredictability of tactical units, consistent with historical theories that emphasize mobility, local initiative, and grassroots empowerment.
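As a rough sketch of what “modest hardware” can mean, a quantized model in GGUF format can run on a laptop CPU through the `llama-cpp-python` bindings; the file path, thread count, and prompt below are illustrative placeholders:

```python
# Illustrative sketch: running a quantized small model on a laptop CPU,
# fully offline. The GGUF file path and prompt are placeholder examples.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/slm-q4.gguf",  # hypothetical quantized model file
    n_ctx=2048,                         # context window
    n_threads=4,                        # CPU threads; no GPU required
)

out = llm(
    "List three considerations for moving supplies at night.",
    max_tokens=128,
    stop=["\n\n"],
)
print(out["choices"][0]["text"])
```

Nothing here touches the network once the model file is in place, which is the property that matters at the tactical edge.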

Fog of War and Deception

LLM-enabled tools thicken the “fog of war.” Persistent failure modes such as deception, sycophancy, hallucination, and confabulation underscore how disinformation campaigns and information warfare will play into irregular warfare. If you can alter the perception of history and facts, as in the China-based LLM DeepSeek’s recollection of Tiananmen Square, you can control narratives in unparalleled ways by pairing LLMs and SLMs with social media, especially in communities without sophisticated education systems. Anthropic CEO Dario Amodei has highlighted how far interpretability research still has to go, underscoring the need for advanced decoding methods to understand complex neural-network behavior. Despite the limitations inherent in transformer-based architectures, systems can still be engineered around them, giving insurgents and motivated individuals capable, semi-deterministic tools.

Broad access to transformer models is already reshaping irregular warfare. Mastery of technology, artificial intelligence, and systems thinking now ranks alongside traditional soldiering skills at every level of conflict.

Written by Jhordan

AI/ML Engineer. Building tools and writing about the intersection of AI, military, and society.