AI, Automation, and War

The Rise of a Military-Tech Complex

by Anthony King

Online Description

Anthony King’s book is a hard corrective to the most fashionable claims about AI and war. His central argument is not that AI is unimportant. It is that most of the current literature asks the wrong question. The near-term military significance of AI lies less in autonomous warbots replacing commanders than in AI’s ability to process huge volumes of data for planning, targeting, and cyber operations. That capability only becomes militarily useful when armed forces reorganize themselves around it and integrate civilian tech expertise into operational practice (Preface, pp. viii-x; pp. 43-59, 99-165).

The deeper transformation, then, is organizational and political. AI is making possible a new “military-tech complex” in which armed forces, defense ministries, cloud providers, software firms, data engineers, and commercial satellite companies become intertwined. That can produce genuine warfighting advantages, but it also shifts influence toward private firms and may make future war more transparent, more targetable, and therefore slower, more positional, and more attritional rather than faster and more decisive (pp. 60-81, 166-183).

Author Background

Anthony King is a sociologist of war and the armed forces. In the preface he situates this book after prior work on cohesion, coordination, collaboration, teamwork, divisional command, and urban warfare; in the acknowledgements he notes service at Warwick and then Exeter, and records 126 interlocutors across armed forces, defense ministries, think tanks, and tech companies in the UK, US, and Israel (Preface, pp. viii-x; Acknowledgements, pp. xii-xiii).


60-Second Brief

  • Core claim: AI will matter most in the near term as a data-processing enabler for planning, targeting, and cyber operations, not as an autonomous substitute for political leaders, commanders, or human combatants (pp. 43-59, 99-148, 166-171).
  • Causal logic in a phrase: data abundance + machine learning + procurement reform + embedded civilian expertise = decision advantage and a new military-tech complex, not automated war (pp. 60-81, 162-165, 168-175).
  • Main level(s) of analysis / lens: organizational sociology, political economy of defense, civil-military relations, and operational analysis of recent wars (Preface, pp. viii-x; pp. 60-81).
  • Why it matters for SAASS 660:
    • It is a direct rebuttal to technological determinism in future-war debates (pp. 14-18, 166-171).
    • It shows how military innovation is mediated by organizations, culture, procurement, industry, and war experience rather than technology alone (pp. 60-98, 149-165).
    • It argues that improved kill chains can coexist with slower, uglier, more attritional campaigns (pp. 179-183).
  • Best single takeaway: The decisive innovation in this book is not “AI replacing humans,” but militaries building hybrid military-tech teams that turn commercial data and software into operational advantage while simultaneously giving private firms new leverage over strategy (pp. 165, 172-178).

SAASS 660 Lens

King sits firmly on the anti-determinist side of the spectrum. He repeatedly argues that AI should not be treated as a pristine, autonomous technology that simply imposes outcomes on militaries. In the preface he explicitly reframes AI as “a manifestation of collective human expertise,” and in the body he criticizes technological determinism by insisting that AI only becomes militarily meaningful when situated in its social, organizational, and institutional context (Preface, p. ix; pp. 60-61, 168-171). In SAASS terms, this is primarily a Phase II book, but it reaches back into Phase I by rejecting determinism and forward into Phase III by offering a skeptical account of future war.

On the sources of military innovation, King’s answer is clear: innovation comes from the interaction of operational problems, data availability, computing power, commercial software, and organizational redesign. AI does not innovate on its own. Militaries innovate when they define bounded problems, reform procurement, build data infrastructure, and embed civilian programmers and data scientists alongside operators and staff officers who can translate software into warfighting effect (pp. 72-79, 89-98, 99-130, 162-165).

The intervening factors that matter most in this book are organizational design, industry, civil-military relations, and war experience. Culture matters too, especially the experimental, boundary-spanning culture of special operations forces and the inclusive leadership styles that make civilian-military collaboration possible. Law and ethics matter, but mostly as second-order issues relative to the immediate problem of integrating talent, data, and authority into usable systems (pp. 89-98, 149-165, 175-178).

On RMAs and future war, King is neither dismissive nor utopian. He thinks AI is historically significant and will reshape military organization and operations. But he rejects the stronger claim that it heralds clean, decisive, autonomous war. His future-war argument is paradoxical: AI accelerates particular strikes and deepens targetability, but pervasive sensing and long-range precision also make maneuver harder and campaigns slower. The result is likely to be more positional and attritional warfare, not less (pp. 166-183).

For SAASS 660’s emphasis on effectiveness rather than mere efficiency, King is especially useful. Some of his examples are only efficiency gains. AI route-planning tools and administrative copilots save time. But his strongest cases cross the threshold into warfighting effectiveness: Project Maven, Ukrainian and Israeli battle-management systems, Palantir-enabled targeting, and cyber defenses all change what forces can actually find, decide, and strike in combat (pp. 103-114, 117-130, 131-148, 162-165). That distinction is seminar gold.

For contemporary technologies, the book is immediately relevant to AI, autonomy, cyber, commercial space, precision strike, and military-civil fusion. It also bears on ACE and dispersion by implication: if units can be seen and struck more quickly, survivability, deception, counter-sensor operations, and distributed basing become more important. That inference is not King’s explicit language, but it follows from his argument about transparent battlefields and slowed operations (pp. 179-183).

Seminar Placement

  • Unit: Seminar Two: Technology and the Future of War
  • Seminar: Technology and the Future of War
  • Why this book is in this seminar: It is the course text that most directly addresses AI, automation, and the future of war. But its distinctive move is to pull the discussion out of science fiction and back into actual organizational practice, recent campaigns, and the political economy of adoption (Preface, pp. viii-x; pp. 43-59, 60-81, 166-183).
  • Closest neighboring texts in the syllabus:
    • Evron & Bitzinger on military-civil fusion
    • Schneider & MacDonald on the politics behind unmanned systems
    • Krepinevich on RMAs
    • Biddle on whether new technology really changes combat outcomes

Seminar Questions (from syllabus)

  • How is MCF different from the civil-military integration of the past?
  • What is the role of the Cold War in the development of MCF as a strategy?
  • How can the knowledge of civil-economic actors be best channeled into military innovation?
  • How strong is the relationship between military innovation and basic science research?
  • What are the primary obstacles to successful implementation of MCF?
  • What is the role of socio-economic systems in facilitating and/or limiting the effective implementation of MCF?

✅ Direct Responses to Seminar Questions

  • How is MCF different from the civil-military integration of the past? King does not use “military-civil fusion” as his preferred term; his analogue is the military-tech complex. The difference from older civil-military integration is that tech firms are not merely building finished platforms or providing peripheral services. They provide software, data, compute, cloud access, and expertise that remain entangled with ongoing operations. Their personnel often have to work with operators inside headquarters, not just deliver a product and leave (pp. 20-22, 60-65, 170-171).
  • What is the role of the Cold War in the development of MCF as a strategy? The Cold War matters in King mostly as inheritance and contrast. The old military-industrial complex and its waterfall procurement system were built for platforms, long timelines, and centralized acquisition. That legacy now obstructs AI adoption. At the same time, the US strategic lineage of First Offset, Second Offset, and then Third Offset provides the policy bridge by which AI becomes a strategic imperative against China and Russia (pp. 44-49, 72-76).
  • How can the knowledge of civil-economic actors be best channeled into military innovation? By embedding it in operations. King’s answer is not “buy more tech.” It is: define bounded operational problems, reform procurement, create bridging institutions like DIU and JAIC, and place civilian programmers, data scientists, and software engineers close to military users so tools can be iteratively refined in contact with live missions (pp. 74-79, 89-98, 162-165).
  • How strong is the relationship between military innovation and basic science research? Strong, but indirect. King’s history of AI shows that breakthroughs in machine learning, neural networks, and transformers matter a great deal. But the decisive military edge comes less from basic science alone than from commercial ecosystems that convert research into usable software, cloud services, and compute at scale. In his telling, innovation depends on translation and integration as much as discovery (pp. 26-34, 64-66, 168-171).
  • What are the primary obstacles to successful implementation of MCF? Bad or biased data, brittle models, overambitious expectations, talent shortages, cultural mistrust between militaries and tech firms, outdated procurement systems, and legal-ethical controversy. The Google revolt over Maven, the JEDI procurement fight, and the UK’s struggle with legacy contracting all show that institutions can be as limiting as technology (pp. 30-35, 67-79, 117-120, 175-178).
  • What is the role of socio-economic systems in facilitating and/or limiting the effective implementation of MCF? Decisive. Silicon Valley’s venture capital, talent density, cloud infrastructure, and research spending make US adoption possible. Israel’s conscription system and dense civil-military ties make integration especially deep. The UK, by contrast, lags because of smaller budgets and more cumbersome procurement rules. King’s core point is that military innovation with AI is inseparable from the wider socio-economic system that produces and sustains it (pp. 62-81).

Chapter-by-Chapter Breakdown

Chapter 1: Robot Wars (pp. 1-22)

  • One-sentence thesis: The debate on AI and war is dominated by inflated claims about automation, so the real task is to examine how AI has actually been used and what evidence supports those claims (pp. 1-22).
  • What happens / what the author argues: King opens by surveying AI optimism and alarmism: Kurzweil, Lovelock, Suleyman, Hinton, Kissinger, Payne, Garcia, Russell, Scharre, and others. He then brings in skeptics who stress uncertainty, friction, and the limits of current AI. The chapter ends by setting the book’s empirical agenda around recent wars and the organizational conditions of adoption (pp. 1-22).
  • Key concepts introduced: automation bias, autonomous weapons, teleology, skeptical empiricism, military-tech complex, data-driven war (pp. 7-22).
  • Evidence / cases used: AlphaGo, AlphaFold, commercial AI, AlphaDogfight, autonomous drone-swarm experiments, Ukraine, Gaza, and the 2023 Tucker Hamilton controversy about a rogue drone simulation (pp. 2-19).
  • Why it matters for SAASS 660: This is the framing chapter that warns against confusing technological possibility with military innovation. It forces the seminar back to evidence, organizations, and warfighting effect (pp. 14-22).
  • Links to seminar questions: Especially 3, 5, and 6.
  • Notable quotes:
    • “AI is an existential security issue.” (p. 22)

Chapter 2: What AI Can Do (pp. 23-42)

  • One-sentence thesis: Contemporary AI is powerful but narrow, probabilistic, and brittle, which makes full military automation unlikely even while leaving substantial room for augmentation (pp. 23-42).
  • What happens / what the author argues: King explains AI’s intellectual and technical history from Babbage and Turing to Dartmouth, GOFAI, second-generation machine learning, deep learning, and generative AI. He then turns to the limits of current AI and uses commercial cases to show where augmentation succeeds and full automation struggles (pp. 23-42).
  • Key concepts introduced: GOFAI, second-generation AI, machine learning, neural networks, deep learning, generative AI, hallucination, augmentation (pp. 25-35).
  • Evidence / cases used: Winograd schemas, Microsoft’s Tay, ChatGPT, Waymo, Tesla, Amazon, and Chinese tech companies (pp. 30-42).
  • Why it matters for SAASS 660: It gives the technical baseline needed to evaluate claims of innovation. Without this chapter, seminar debate risks mistaking statistical pattern recognition for human judgment (pp. 30-42).
  • Links to seminar questions: Especially 3, 4, and 6.
  • Notable quotes:
    • “Humans are underrated.” (p. 41)

Chapter 3: AI Strategy (pp. 43-59)

  • One-sentence thesis: Actual defense strategy documents show that militaries want AI primarily for intelligence, situational awareness, planning, and targeting rather than for autonomous command (pp. 43-59).
  • What happens / what the author argues: King traces the US Third Offset, Work’s speeches, the 2018 National Defense Strategy, the NSCAI report, JADC2, Air Force thinking, US Army FM 3-0, British AI strategy, and NATO’s AI approach. The through-line is “data is king” and AI is mainly an enabler of multidomain C2 and intelligence (pp. 44-59).
  • Key concepts introduced: Third Offset, JADC2, multidomain operations, decision advantage, data-centric warfare (pp. 44-57).
  • Evidence / cases used: US national strategy documents, UK strategy papers, NATO doctrine, and interviews with Shanahan and British officers (pp. 49-59).
  • Why it matters for SAASS 660: This chapter shifts the debate from futurism to state strategy. It answers how militaries themselves think they will leverage technological revolution (pp. 43-59).
  • Links to seminar questions: All six, but especially 2, 3, 5, and 6.
  • Notable quotes:
    • “Data is king.” (p. 57)

Chapter 4: A Military-Tech Complex (pp. 60-81)

  • One-sentence thesis: AI is not a free-floating military technology; its military utility depends on the commercial ecosystems, talent pools, venture capital, and regulatory reform that generate a new military-tech complex (pp. 60-81).
  • What happens / what the author argues: King critiques technological determinism, reviews the sociology of technology, narrates the rise of Silicon Valley, compares tech-sector and defense spending, tracks the political realignment of tech elites like Thiel and Schmidt, and explains reforms such as DIU and JAIC that try to connect military demand to commercial innovation (pp. 60-79).
  • Key concepts introduced: social construction of technology, venture capital, regulatory reform, alliance capitalism, military-tech complex (pp. 60-79).
  • Evidence / cases used: Fairchild and the “Traitorous Eight,” Big Tech R&D data, Thiel and Schmidt, DIU, JAIC, JEDI, UK procurement struggles, Israel’s closer civil-military industrial integration (pp. 62-81).
  • Why it matters for SAASS 660: This is the book’s core causal chapter. It explains how a technological revolution becomes military innovation only through political economy and organizational redesign (pp. 60-81).
  • Links to seminar questions: All six.
  • Notable quotes:
    • “A new triangle is appearing—a ‘digital triangle’.” (p. 81)

Chapter 5: The Special Relationship (pp. 82-98)

  • One-sentence thesis: The most important bridge between the tech sector and the military has been elite operational users—especially special operations forces—who serve as early adopters, translators, and advocates for AI-enabled systems (pp. 82-98).
  • What happens / what the author argues: King moves from policy to practice by examining Palantir’s evolution from PayPal fraud detection to military targeting support, the role of commanders such as Tunnell and Flynn, JSOC’s data-centric campaigns, and the unusually strong ties among SOF, tech firms, and defense innovation (pp. 83-97).
  • Key concepts introduced: military-tech lifeworld, supersellers, SOF as organizational entrepreneurs, direct user-tech partnership (pp. 84-97).
  • Evidence / cases used: PayPal’s Igor, Palantir Gotham and Metaconstellation, Iraq and Afghanistan, JSOC Baghdad, SOCOM resourcing, Unit 8200 (pp. 83-98).
  • Why it matters for SAASS 660: It identifies who actually carries military innovation across the civil-military boundary. Innovation is not just policy from above; it is a social process mediated by trusted operators and expert users (pp. 89-98).
  • Links to seminar questions: Especially 1, 3, 5, and 6.
  • Notable quotes:
    • “Data is the engine. It is the locomotive of strategy.” (p. 95)

Chapter 6: AI and Planning (pp. 99-114)

  • One-sentence thesis: AI does not replace command, but it can automate specific staff functions and improve planning enough to alter operational effectiveness (pp. 99-114).
  • What happens / what the author argues: King distinguishes judgment from calculation, then shows how AI supports planning through predictive tools, route analysis, battle-management systems, and LLM-enabled staff work. He closes with the analogy that AI in planning resembles a new form of military cartography (pp. 99-114).
  • Key concepts introduced: decision support, battle-management systems, digital common operating picture, military cartography analogy (pp. 103-114).
  • Evidence / cases used: MCOSM, BRAWLER, urban IED and destruction-detection tools, UK Spearhead/Microworld, Elbit’s Torch, Ukraine’s Delta and Kropyva, Anduril’s Lattice, Royal Navy StormCloud, Hermes LLM experiments (pp. 102-114).
  • Why it matters for SAASS 660: It draws the line between staff efficiency and true warfighting effect. Planning support is where AI becomes useful without being mythologized (pp. 103-114).
  • Links to seminar questions: Especially 3, 4, 5, and 6.
  • Notable quotes:
    • “AI may be seen as a new form of military cartography.” (p. 113)

Chapter 7: AI and Targeting (pp. 115-130)

  • One-sentence thesis: Targeting is where AI’s operational value is most obvious, because machine learning can fuse massive datasets to identify, track, and prioritize targets at a speed and depth human analysts cannot match (pp. 115-130).
  • What happens / what the author argues: King studies Project Maven, the British Army’s Covid response in Liverpool as a targeting analogy, and Israeli AI-enabled targeting in Gaza through the Gospel and Lavender systems. The result is not autonomous killing, but a much faster, broader, and more data-intensive kill chain (pp. 117-130).
  • Key concepts introduced: signatures, dynamic targeting, data fusion, kill chain, decision advantage (pp. 117-130).
  • Evidence / cases used: full-motion video exploitation, CIPHA and wastewater mapping in Liverpool, Gospel’s target generation, Lavender’s identification of Hamas personnel (pp. 117-130).
  • Why it matters for SAASS 660: This chapter most clearly shows how AI can produce genuine military effectiveness rather than just administrative efficiency. It also raises immediate legal and ethical issues (pp. 127-130).
  • Links to seminar questions: Especially 3, 5, and 6.
  • Notable quotes:
    • “it generated 100 new targets every day.” (p. 127)

Chapter 8: AI and Cyber Operations (pp. 131-148)

  • One-sentence thesis: AI is deeply useful in cyber sabotage, defense, espionage, and information operations, but cyber remains a supporting arm of warfare rather than a substitute for force (pp. 131-148).
  • What happens / what the author argues: King works through Stuxnet, NotPetya, Russian cyber operations against Ukraine, Microsoft and Ukrainian cyber defense, deepfakes and bots, Operation Glowing Symphony against ISIS, and algorithmic activism in the Nagorno-Karabakh war. He argues that AI matters greatly in cyberspace, but human direction and political purpose remain central (pp. 134-148).
  • Key concepts introduced: sabotage, espionage, subversion, bots, deepfakes, algorithmic amplification (pp. 133-145).
  • Evidence / cases used: Natanz, Viasat, Industroyer2, Russian troll operations, Telegram, Ukrainian IT Army, Task Force Ares, Armenian diaspora activism (pp. 134-148).
  • Why it matters for SAASS 660: It is a useful corrective to claims that cyber or AI can replace physical war. It also shows how AI can matter greatly without being decisive on its own (pp. 145-148).
  • Links to seminar questions: Especially 3, 5, and 6.
  • Notable quotes:
    • “You can’t cyber your way across a river.” (p. 132)

Chapter 9: The Human-Machine Team (pp. 149-165)

  • One-sentence thesis: The dominant concept of “human-machine teaming” misdescribes what is happening; the real innovation is “military-tech teaming,” with civilian technicians and military professionals working together in hybrid headquarters (pp. 149-165).
  • What happens / what the author argues: King engages theorists and practitioners of human-machine teaming, critiques the fetishization of AI, shows how hidden human labor underwrites seemingly autonomous systems, and then uses Task Force Dragon and the strike on Gerasimov as the empirical payoff for his alternative concept (pp. 149-165).
  • Key concepts introduced: human-machine team, commodity fetishism, complementarity, military-tech teaming, Task Force Dragon (pp. 149-165).
  • Evidence / cases used: AlphaGo, financial compliance systems, US Navy littoral combat ship crews, Erik Kurilla, Christopher Donahue, Jared Summers, Palantir forward contractors, Gerasimov strike (pp. 154-165).
  • Why it matters for SAASS 660: This is the book’s strongest organizational claim. It relocates innovation from the machine to the new expert team that uses and continuously refines it (pp. 163-165).
  • Links to seminar questions: Especially 1, 3, 5, and 6.
  • Notable quotes:
    • “This is not human-machine teaming but military-tech teaming.” (p. 165)

Chapter 10: War at the Speed of Light (pp. 166-184)

  • One-sentence thesis: AI is an important military development, but its likely near-term effect is not autonomous or decisive war; instead it may yield deeper strikes, more private-sector influence, and slower, more attritional campaigns (pp. 166-184).
  • What happens / what the author argues: King compares AI hype to early airpower utopianism, recaps the “digitised military,” examines Starlink and the quasi-privatization of strategy, discusses data failure and October 7, and argues from Bakhmut and Ukraine toward a future of transparent, heavily targeted, positional war, including in a possible US-China/Taiwan conflict (pp. 166-184).
  • Key concepts introduced: digitised military, quasi-privatization of strategy, data issues, speed paradox, future war under AI (pp. 168-183).
  • Evidence / cases used: Douhet and strategic bombing, Starlink in Ukraine, Hamas’s October 7 attack, Bakhmut, the sinking of Moskva, EABO, Taiwan/Taipei (pp. 166-183).
  • Why it matters for SAASS 660: This is King’s answer to the future-war debate. It is both more historically grounded and more cautionary than most AI futurism (pp. 179-183).
  • Links to seminar questions: All six, plus the course-wide future-war question.
  • Notable quotes:
    • “War at the speed of data may, ironically, descend into a long, slow struggle for small pieces of terrain.” (p. 182)

Theory / Framework Map

  • Central problem: Will AI automate war, or is its real significance found in the organizational and political-economic arrangements that make data exploitable for warfighting? (Preface, pp. viii-x; pp. 18-22, 166-171)
  • Dependent variable(s): military effectiveness in planning, targeting, and cyber operations; the organizational form of the armed forces; and the character/tempo of campaigns under AI-enabled conditions (pp. 99-148, 166-183).
  • Key independent variable(s): access to data, computing power, software, civilian technical talent, procurement reform, bounded operational problems, and leadership willing to integrate nontraditional expertise (pp. 64-79, 89-98, 160-165).
  • Causal mechanism(s): AI processes massive datasets that humans cannot handle at speed; when embedded in military-tech teams and battle-management systems, that yields better situational awareness and faster kill chains; because opponents adapt under pervasive sensing and deep strike, campaigns become more transparent and often more attritional (pp. 49-59, 117-130, 179-183).
  • Scope conditions: contemporary second-generation AI; data-rich operational environments; states with access to advanced commercial tech ecosystems; cases drawn mainly from the US, UK, Israel, Ukraine, Gaza, and recent campaigns against ISIS (pp. 18-22, 60-81).
  • Rival explanations or competing schools: techno-determinist automation/RMA arguments; strong versions of human-machine teaming; platform-centric procurement logic; claims that cyber or autonomy alone will replace traditional military force (pp. 14-18, 149-165, 166-171).
  • Observable implications:
    • More data cells, CTO-like roles, and civilian contractors in operational HQs (pp. 160-165).
    • More software/cloud contracts and procurement reform efforts (pp. 72-79).
    • AI concentrated in planning, targeting, and cyber rather than autonomous command (pp. 49-59, 99-148).
    • Faster strikes but slower campaigns under pervasive sensing (pp. 179-183).
  • What would weaken the author’s argument? Repeated evidence that AI can make reliable open-ended command decisions in live combat without deep human organizational support; convincing cases of states achieving AI-enabled operational effect without close commercial-tech integration; or evidence that AI routinely restores decisive maneuver rather than producing mutual targetability and attrition (pp. 30-35, 166-183).

Key Concepts & Definitions (author’s usage)

  • Artificial intelligence: computer programs that manipulate data independently enough to produce unprogrammed but useful results; King uses a practical, performative definition rather than a metaphysical one (pp. 23-24).
  • First-wave AI / GOFAI: logic-based, deductive, symbol-manipulating AI associated with the early post-Dartmouth era (pp. 25-26).
  • Second-generation AI: probabilistic, inductive machine learning built on huge datasets, neural networks, and computing power (pp. 26-30).
  • Generative AI / large language models: transformer-based deep-learning models trained on massive text corpora that can generate plausible outputs but still hallucinate and lack grounded understanding (pp. 31-35).
  • Data: digitally stored, computable information; for King, the raw strategic asset that AI turns into military advantage (pp. 27, 53-59).
  • Military-tech complex: the emergent alliance of defense ministries, militaries, and tech companies supplying data, software, cloud, computing, and expertise in immediate support of operations (pp. 21-22, 60-81, 170-175).
  • Human-machine team: a rival concept King thinks overstates AI agency by treating software as a teammate rather than a tool (pp. 149-156).
  • Military-tech teaming: King’s preferred concept; hybrid teams of commanders, staff officers, civilian programmers, and data engineers working together inside operational headquarters (pp. 163-165).
  • Decision advantage: the ability to make better and faster decisions by fusing and analyzing more data than the enemy can (pp. 49-52, 117-120, 163).
  • Multidomain operations / JADC2 logic: AI-enabled integration of information and effects across land, sea, air, cyber, and space (pp. 49-52).
  • Digitised military: a force whose core warfighting processes increasingly depend on sensors, networks, cloud architecture, and AI-driven data exploitation (pp. 167-171).
  • Automation bias: the risk that humans defer too readily to AI outputs, especially under pressure (p. 8).
  • Targeting: the AI-enabled fusion of diverse data feeds to locate, classify, and prioritize targets at scale and speed (pp. 115-130).
  • Data issues: the operational, ethical, and legal problems created by bad data, brittle models, and civilian technicians inside the kill chain (pp. 175-178).

Key Arguments & Evidence

  • AI will not automate war in the near term. Evidence: King’s technical review of AI’s probabilistic limits; the failure of simulation evidence like AlphaDogfight to translate directly into real war; the continued centrality of human commanders in Ukraine and Gaza (pp. 14-18, 30-35, 99-101, 166-171).
  • AI’s actual military utility lies in planning, targeting, and cyber operations. Evidence: US, UK, and NATO strategy documents; Microworld and route-planning tools; Project Maven; Delta/Torch; cyber defense and offensive cyber examples (pp. 43-59, 103-114, 117-148).
  • Military innovation with AI depends on commercial ecosystems, not just military desire. Evidence: Big Tech R&D compared with defense spending; Silicon Valley venture capital; procurement reforms like DIU and JAIC; UK struggles and Israeli advantages (pp. 62-81).
  • SOF and related elite users are the critical bridge between tech and operations. Evidence: Palantir’s early military adoption through commanders and special operations; JSOC’s data-centric targeting; SOCOM’s autonomy and resources (pp. 85-97).
  • The most important organizational change is the rise of military-tech teams. Evidence: Task Force Dragon, Palantir forward contractors, Jared Summers as CTO, and the need for continuous software refinement close to operational data (pp. 160-165).
  • AI sharpens kill chains without guaranteeing decisive strategy. Evidence: Gospel and Lavender in Gaza, Gerasimov targeting, Ukrainian data fusion, but also Bakhmut and the broader attritional pattern of the Russo-Ukrainian War (pp. 127-130, 162-165, 179-183).
  • The greatest near-term risk is not runaway autonomy but bad data, brittle models, and privatized influence. Evidence: AI bias and hallucination in chapter 2; October 7 as a failure of Israeli data dependence; Starlink’s influence over Ukrainian operations; civilian contractors inside kill chains (pp. 30-35, 172-178).

Barriers, Determinants, and Causal Logic

What drives innovation?

  • A clear operational problem. King’s best cases start with bounded tasks: processing full-motion video, route planning, identifying Hamas personnel, and protecting or attacking networks (pp. 74-79, 117-120).
  • Data density. AI only works where there is enough high-quality data to train and sustain models (pp. 27-30, 117-130).
  • Organizational reform. Institutions like DIU, JAIC, DAIC, and special data cells matter because they translate commercial capability into military practice (pp. 72-79, 160-165).
  • War pressure and real adversaries. Ukraine, Gaza, ISIS, and counterterror campaigns accelerate experimentation and adoption (pp. 19-22, 117-148).
  • Leadership. Figures such as Work, Shanahan, Kurilla, Donahue, and Israeli commanders matter because they create permission structures for integration and experimentation (pp. 45-53, 160-165).

What blocks innovation?

  • Poor or biased data. AI is brittle and fails when data is thin, manipulated, or misleading (pp. 30-35, 176-178).
  • Legacy procurement. Waterfall acquisition is designed for platforms, not iterative software development (pp. 72-79).
  • Talent gaps. Militaries cannot match commercial salaries or always attract the best programmers (pp. 64-66, 96-97).
  • Cultural mistrust. Military and tech communities often speak different languages and carry different norms; the Google protest over Maven is the sharpest example (pp. 67-68).
  • Overclaiming. Automation hype causes organizations to chase fantasies rather than narrow, solvable problems (pp. 14-18, 31-35).

Which actors matter most?

  • Commanders and staff officers, because they define missions, priorities, and acceptable risk (pp. 99-101, 163-165).
  • Civilian data scientists, software engineers, and programmers, because they build and continually refine the tools (pp. 89-98, 163-165).
  • SOF and other elite early adopters, because they bridge operational credibility and experimentation (pp. 89-97).
  • Tech firms and venture capital, because they own the compute, cloud, software, and talent base (pp. 62-66).
  • Defense ministries and procurement organizations, because they can enable or suffocate adaptation (pp. 72-79).

What role do organizations, service cultures, bureaucracies, politicians, scientists, firms, and operational experience play?

  • Organizations matter most. King’s entire book is a case for organizational mediation (pp. 60-81, 149-165).
  • Service cultures matter where they make experimentation easier; SOF are especially important (pp. 89-97).
  • Bureaucracies matter because acquisition law, authorities, and funding pathways shape what can scale (pp. 72-79).
  • Politicians matter in setting offsets, AI strategies, and the boundaries of public-private cooperation (pp. 44-49, 67-71).
  • Scientists and firms matter because AI is produced outside the armed forces and must be pulled inward (pp. 64-66, 82-98).
  • Operational experience matters because it reveals what tasks are actually amenable to AI and forces iteration under pressure (pp. 117-148, 160-165).

What distinguishes success from failure?

  • Success comes when militaries use AI on narrow problems, with good data, in iterative feedback loops, close to operational users, while preserving human corroboration and command judgment (pp. 117-120, 163-165).
  • Failure comes when organizations assume AI is magical, rely on corrupted or incomplete data, separate programmers from users, or build brittle systems overconfidently. October 7 is King’s most severe cautionary case (pp. 176-178).

⚖️ Assumptions & Critical Tensions

  • Technology vs organization: King assumes organization is the decisive mediator of technology. That is persuasive, but it means he sometimes places less analytical weight on the independent tactical effects of new platforms than some RMA literature would (pp. 60-81, 166-171).
  • Speed of strike vs speed of campaign: The book’s most important tension is that AI can accelerate kill chains while slowing overall campaigns by making movement visible and punishable (pp. 179-183).
  • Civilian innovation vs military autonomy: Militaries need commercial expertise to innovate, but that dependence dilutes the state’s monopoly over key military functions and can shift leverage to private firms (pp. 170-175).
  • Data abundance vs data quality: King often treats data as the new strategic asset, but he repeatedly shows that more data does not automatically mean better understanding (pp. 27-30, 176-178).
  • Warfighting effectiveness vs legal/ethical restraint: AI can sharpen targeting, but bad data, liberalized strike thresholds, and civilians inside kill chains create legal and moral danger (pp. 127-130, 175-178).
  • Centralization vs decentralization: JADC2-style architectures aim at shared pictures and central fusion, while the actual innovation often comes from small, agile, semi-autonomous teams and forward contractors (pp. 49-52, 89-98, 160-165).

Critique Points

  • Strongest contribution: King’s best move is conceptual. He recenters the debate from “Will AI automate war?” to “What organizational, political, and operational arrangements make AI militarily useful?” The chapters on procurement reform, Palantir/SOF, and Task Force Dragon are especially strong because they show mechanism rather than reciting slogans (pp. 60-98, 160-165).
  • Biggest blind spot: The book’s evidentiary core is heavily Western and Israeli. China looms as a competitor, but the book does not study Chinese adoption in equivalent depth. For Seminar Two, that matters because the strongest alternative model of military-civil integration is therefore more backdrop than fully worked comparison (pp. 44-49, 181-183).
  • Where the evidence is strongest: Recent operational cases. Maven, UK command-and-control experiments, JSOC/Palantir, cyber operations, Ukrainian targeting, and Israeli AI-enabled targeting make the empirical chapters concrete and persuasive (pp. 103-165).
  • Where the evidence is thin or contestable: The farther King moves from recent documented practice toward future-war forecasting, the more contestable the argument becomes. The Taiwan/Taipei discussion is thoughtful but inferential, and the attrition thesis may be context-dependent rather than universal (pp. 181-183).
  • Threshold issue for SAASS: Some examples are clearly efficiency gains, not yet military innovation in the strict SAASS 660 sense. Microworld route planning is useful, but it is not the same as a warfighting revolution. King’s strongest “innovation” cases are those that change targeting depth, kill-chain tempo, and operational reach, not those that simply save staff time (pp. 103-110 vs. 117-130, 162-165).
  • What kind of evidence would change my mind: Robust evidence that AI can reliably handle open-ended command decisions in live combat; comparative cases showing non-Western AI integration at equivalent depth; or campaign-level evidence that AI-enabled forces consistently regain decisive maneuver rather than settling into mutual attrition.

Policy & Strategy Takeaways

  • Build data architecture first. AI without clean, shared, accessible data is theater, not innovation (pp. 49-59, 176-178).
  • Reform acquisition for iterative software development, not just platform procurement. Continuous updates and close user feedback are essential (pp. 72-79, 163-165).
  • Stand up permanent military-tech teams in operational headquarters. Civilian programmers and data engineers should be treated as integral support to command, not as external vendors (pp. 163-165).
  • Preserve human corroboration and legal accountability in the kill chain. The central near-term danger is faulty data and brittle models, not just rogue autonomy (pp. 127-130, 175-178).
  • Expect a more transparent and targetable battlefield. Force design should therefore prioritize deception, counter-sensor operations, dispersion, survivability, and resilience (pp. 179-183).
  • Treat commercial tech dependence as a strategic vulnerability. Starlink in Ukraine shows that indispensable private services can become points of political leverage (pp. 172-175).

660 Final Brief Utility

  • Most useful historical analogies or cases from this book:
    • Douhet and strategic bombing hype as a warning against AI overclaim (pp. 166-167)
    • Military cartography as an analogy for AI-enabled planning (pp. 113-114)
    • The military-industrial complex vs the new military-tech complex (pp. 170-175)
    • JSOC/Palantir and Task Force Dragon as cases of operational integration (pp. 93-98, 160-165)
    • Bakhmut as the emblem of AI-enabled attrition rather than decisive automation (pp. 179-180)
  • What emerging idea, technology, or technological system this book helps analyze:
    • AI-enabled C2
    • kill-chain software
    • commercial cloud/satellite dependence
    • autonomy as a subset rather than the whole story
    • LLM-assisted staff work (pp. 99-114, 172-175)
  • Shapers of events / adoption:
    • data availability
    • procurement reform
    • tech-sector talent density
    • special operations culture
    • threat urgency
    • leadership willing to mix civilian and military expertise (pp. 72-79, 89-98, 160-165)
  • Barriers to integration:
    • brittle data
    • legacy procurement
    • cultural mistrust
    • salary/talent gaps
    • legal ambiguity around civilians in kill chains
    • fear of overdependence on private firms (pp. 67-79, 96-97, 175-178)
  • Determinants of success or failure:
    • success = bounded problems + good data + embedded technicians + iterative refinement + human corroboration
    • failure = magical thinking + bad data + separated contractors + overautomation + adversary adaptation (pp. 117-130, 163-165, 176-178)
  • Limits of the analogy:
    • most evidence is from data-rich recent conflicts
    • Western/Israeli models may not travel cleanly
    • Taiwan is a projection, not an observed case
    • some examples still show efficiency gains more clearly than innovation (pp. 181-183)
  • Best way to use this book in a 20-minute SAASS 660 brief:
    • Use it as the corrective text. Start with the popular claim that AI will automate war.
    • Then pivot to King’s answer: actual innovation lies in data exploitation plus organizational integration.
    • Use one operational case for proof (Maven or Task Force Dragon).
    • Use one cautionary case for limits (October 7 or Starlink).
    • Close on the future-war paradox: better targeting, slower campaigns.

⚔️ Cross-Text Synthesis (SAASS 660)

McNeill / Evron & Bitzinger / King

King reinforces McNeill’s broad proposition that technology matters for power, but he sharply complicates any simple adoption story. States do not become powerful by merely acquiring AI. They become powerful by building the institutional, economic, and organizational ecosystems that let them exploit it. In that sense, King is especially close to Evron & Bitzinger’s concern with how civil-economic structures shape military innovation. The difference is that King’s preferred model is not Chinese-style state-directed fusion but a Western, commercially led military-tech complex.

Posen / Rosen / Hone

King fits more naturally with Rosen and Hone than with simple technological narratives. His account of AI adoption is organizational, elite-driven, and problem-focused. Protected spaces, reforming institutions, special users, and leaders who reshape organizations all matter. That is much closer to classic military-innovation literature than to claims that technology alone compels change.

MacKenzie / Bridger / Hankins / Farrell-Rynning-Terriff / Schneider-MacDonald

King is closest to MacKenzie’s social construction of technology. He insists that AI is socially made, socially sustained, and politically situated. He also shares Schneider-MacDonald’s interest in the institutional hands behind systems: SOF, policy entrepreneurs, and service cultures are crucial. Compared with Bridger, King is less centrally concerned with scientists’ ethics; compared with Farrell-Rynning-Terriff, he is more willing to specify concrete mechanisms of transformation rather than simply embracing complexity.

Krepinevich / Biddle

King challenges the more exuberant side of RMA discourse. He thinks AI matters greatly, but he denies that it necessarily yields decisive, autonomous, maneuver-centric warfare. On that point he ends up closer to Biddle’s caution: new technology changes how forces fight, but battlefield effectiveness still depends on integration, adaptation, and the persistence of brutal close combat. If anything, King suggests AI may deepen the conditions under which attrition and positional warfare endure.

❓ Open Questions for Seminar / Briefing

  • At what point does AI-enabled decision support cross the line from staff efficiency into true military innovation in the SAASS 660 sense?
  • Is the Western military-tech complex more adaptable than Chinese-style military-civil fusion, or merely more politically acceptable to Western audiences?
  • Does Ukraine prove AI’s operational value, or does it mostly prove the value of US and commercial data ecosystems under unusually favorable coalition conditions?
  • If AI accelerates kill chains but slows campaigns, what does that imply for force design, operational art, and theories of decision?
  • Can democracies build deep military-tech integration without giving private firms politically unhealthy leverage over strategy?
  • Are civilian programmers inside operational headquarters a temporary expedient, or the first step toward a durable new civil-military settlement?
  • Does pervasive sensing advantage defenders more than attackers, especially in urban and littoral warfare?
  • What mechanisms should exist to audit wartime data quality, model bias, and civilian participation in targeting?

✍️ Notable Quotes & Thoughts

  • “as a manifestation of collective human expertise” (Preface, p. ix). This is the book’s methodological key. King’s entire argument follows from this line: AI is powerful, but it is socially produced and socially enacted.
  • “Humans are underrated.” (p. 41). A perfect anti-automation summary. King uses commercial cases to show that even advanced firms keep rediscovering the stubborn value of human judgment and dexterity.
  • “Data is king.” (p. 57). This is the cleanest condensation of the book’s actual military theory of AI. The center of gravity is not autonomy; it is data exploitation.
  • “This is not human-machine teaming but military-tech teaming.” (p. 165). The book’s sharpest conceptual contribution. It names the real unit of innovation: hybrid teams of military professionals and civilian technicians.
  • “War at the speed of data may, ironically, descend into a long, slow struggle for small pieces of terrain.” (p. 182). The best single sentence for seminar discussion. It directly challenges the intuition that better information and precision automatically produce faster or more decisive war.
  • “We are on the edge of a historic reformation of military affairs.” (p. 183). King is cautious, but not dismissive. The point is not that nothing is changing; it is that what is changing is more organizational and political than the warbot literature admits.