Psychopolitics of warfare: algorithmic violence and the digital unconscious in military AI systems
Haq, Shoaib Ul ORCID: https://orcid.org/0000-0002-8899-290X
(2026)
Psychopolitics of warfare: algorithmic violence and the digital unconscious in military AI systems.
Philosophy of Management.
ISSN 1740-3812 (Print), 2052-9597 (Online)
(doi:10.1007/s40926-026-00387-1)
PDF (Open Access Article): 53324 HAQ_Psychopolitics_Of_Warfare_Algorithmic_Violence_And_The_Digital_Unconscious_(OA)_2026.pdf - Published Version, available under a Creative Commons Attribution license (1MB).
Abstract
This article examines how artificial intelligence systems deployed in military operations generate ‘algorithmic psychopolitics’: a transformation of warfare in which automated decision architectures operate beyond human moral comprehension. Drawing on Byung-Chul Han’s philosophy and an analysis of news reports on the Israel Defense Forces’ AI targeting systems, particularly the Lavender program, this investigation shows how military AI establishes a ‘digital unconscious’ that mediates between institutional intent and lethal outcomes. The analysis identifies three mechanisms through which this transformation occurs: the distribution of moral responsibility across human-machine networks, the introduction of computational opacity that resists accountability, and the merger of psychological and physical violence through algorithmic mediation. Unlike defensive systems that intercept projectiles, Lavender processes personal data to determine which individuals should die, assigning numerical threat scores to Gaza’s 2.3 million residents. This shift from anti-materiel to anti-personnel AI applications raises fundamental questions about the nature of human agency in warfare. The article synthesizes Han’s concepts of psychopolitics, non-things, and the digital panopticon with Katherine Hayles’ notion of nonconscious cognition and Zygmunt Bauman’s analysis of bureaucratic violence to develop new theoretical tools for understanding algorithmic warfare. The study shows that military AI systems, though formally integrated into command structures and bound by legal regulations, produce decision patterns through computational processes beyond the prediction or control of individual operators. This creates a paradox: the systems simultaneously reflect and escape human intent. This phenomenon carries significant implications for governance, military ethics, and the evolving role of human agency in an era dominated by algorithmic decision-making.
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | algorithmic warfare, artificial intelligence, Byung-Chul Han, digital unconscious, ethics, military philosophy, psychopolitics, technological autonomy, violence, warfare |
| Subjects: | B Philosophy. Psychology. Religion > BF Psychology; T Technology > T Technology (General); U Military Science > U Military Science (General) |
| Faculty / School / Research Centre / Research Group: | Greenwich Business School; Greenwich Business School > Networks and Urban Systems Centre (NUSC); Greenwich Business School > School of Business, Operations and Strategy |
| Last Modified: | 05 May 2026 09:43 |
| URI: | https://gala.gre.ac.uk/id/eprint/53324 |