Humanoid Robots in Defence and Security

Of all the applications being explored for humanoid robots, none provokes stronger reactions than defence and security. The prospect of human-shaped machines operating on battlefields, patrolling borders, or standing guard at sensitive facilities touches on deep questions about the nature of warfare, the value of human life, and the kind of future we're building.

Yet the momentum is real. Governments are funding research programmes, startups are securing military contracts, and the technology is advancing faster than the regulatory frameworks meant to govern it. This article examines where humanoid robots sit within the defence and security landscape today — the genuine capabilities, the realistic timelines, and the serious ethical questions that accompany them.

The Military Case for Humanoid Form

Militaries have used robots for decades. Bomb disposal units, aerial drones, tracked reconnaissance vehicles, and quadruped load carriers are already established parts of modern armed forces. So why build robots that walk on two legs?

The answer lies in infrastructure. The human world — its buildings, staircases, vehicles, corridors, and doorways — was built for human bodies. A wheeled robot cannot climb a stairwell in a bombed-out building. A tracked vehicle cannot fit through a standard doorway. A quadruped struggles with ladders. The humanoid form factor, for all its engineering difficulty, is the only one that can move through human environments without modification.

For military planners, this matters in several specific scenarios:

  • Urban warfare: Clearing buildings, navigating rubble-strewn streets, and moving through indoor spaces where wheeled and tracked robots cannot operate effectively.
  • CBRN environments: Operating in areas contaminated by chemical, biological, radiological, or nuclear hazards — places where sending human soldiers carries extreme risk.
  • Logistics and resupply: Carrying equipment, ammunition, and medical supplies across terrain that requires bipedal movement, including stairs, uneven ground, and narrow passages.
  • Forward reconnaissance: Entering unknown or hostile spaces ahead of human troops to gather intelligence and identify threats.

The logic is straightforward: if the environment was designed for humans, a human-shaped robot has a structural advantage over every other form factor.

Where Things Stand Today

It is important to be clear about current reality. As of early 2026, no fully humanoid robot has been deployed in active combat operations. The technology remains in development, testing, and early-stage field trials. However, the trajectory is accelerating.

Foundation Future Industries and the Phantom MK1

The most prominent company explicitly targeting military applications is Foundation Future Industries, a San Francisco-based startup. Its Phantom MK1 is a bipedal humanoid standing roughly 5 feet 9 inches (175 cm) tall and weighing around 175 to 180 pounds (79 to 82 kg), with a payload capacity exceeding 20 kilograms (44 pounds). The robot is designed for reconnaissance, bomb disposal, and operations in environments too dangerous for soldiers.

Foundation's CEO, Sankaet Pathak, has been unusually direct about the company's military ambitions, stating publicly that humanoid robots should serve as the "first body in" during dangerous missions. The company has outlined plans to manufacture up to 50,000 units by the end of 2027, with robots leased to the military rather than sold outright, at an estimated $100,000 per unit annually.

Foundation emphasises a human-in-the-loop model: the robot handles movement and navigation autonomously, but human operators retain decision-making authority over any use of force. This mirrors the operational model already established for military drone operations.
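The control logic described above can be sketched in code. The following is an illustrative sketch only, not Foundation's actual software; all class and function names are hypothetical. It shows the core idea of a human-in-the-loop gate: actions are classified, navigation and sensing run autonomously, and anything classified as use of force halts until a human operator explicitly approves it.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ActionClass(Enum):
    NAVIGATION = auto()    # autonomous: move, climb stairs, open doors
    SENSING = auto()       # autonomous: stream video, map rooms
    USE_OF_FORCE = auto()  # gated: requires explicit human approval

@dataclass
class Action:
    name: str
    category: ActionClass

class HumanInTheLoopController:
    """Hypothetical controller: the robot never self-authorises force."""

    def __init__(self, operator_approve):
        # operator_approve is a callback standing in for the human
        # operator's decision channel (e.g. a secure comms link).
        self.operator_approve = operator_approve

    def execute(self, action: Action) -> str:
        if action.category is ActionClass.USE_OF_FORCE:
            # Halt and wait for an affirmative human decision.
            if self.operator_approve(action):
                return f"{action.name}: executed with human authorisation"
            return f"{action.name}: denied by operator, holding"
        return f"{action.name}: executed autonomously"

# Example: an operator channel that denies all force requests.
ctrl = HumanInTheLoopController(operator_approve=lambda action: False)
print(ctrl.execute(Action("climb stairwell", ActionClass.NAVIGATION)))
print(ctrl.execute(Action("engage target", ActionClass.USE_OF_FORCE)))
```

The design point mirrors drone operations: autonomy is scoped by action category, not granted wholesale, so the approval gate sits in the architecture rather than in policy alone.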

The Broader Military Robotics Ecosystem

While Foundation is the most visible humanoid-focused defence company, the wider military robotics picture includes significant non-humanoid activity that is shaping expectations and infrastructure for future humanoid deployment:

  • Boston Dynamics' Spot has been evaluated by multiple militaries for reconnaissance and hazardous environment inspection, though the company's terms of service prohibit using its robots to harm people.
  • Ghost Robotics produces quadruped robots explicitly designed for defence applications, including configurations that can carry weapons systems.
  • Milrem Robotics in Estonia manufactures the THeMIS (Tracked Hybrid Modular Infantry System), a tracked unmanned ground vehicle that has seen actual combat deployment in Ukraine — used for ammunition transport, casualty evacuation, and armed operations.
  • The U.S. Army's Project Convergence exercises have tested formations where unmanned ground vehicles operate alongside manned units, sharing sensor data and providing overwatch for infantry. These exercises are building the doctrinal and communications frameworks that humanoid robots would eventually slot into.

In July 2025, U.S. Defense Secretary Pete Hegseth issued a memorandum directing all service branches to accelerate the acquisition of drone and robotic systems, including establishing dedicated robotic units by fiscal year 2027 and increasing funding for human-machine teaming research by 40 percent.

China and the Global Picture

China is developing military robotics aggressively, having demonstrated quadruped robots equipped with firearms in military exercises, including scenarios involving aerial deployment for urban operations. Chinese firms like Unitree are developing walking robots with potential dual-use applications. Analysts note that while the United States and China are closely matched in humanoid research and development, China holds advantages in manufacturing capacity and speed of production scaling.

Security Applications: Closer to Reality

While battlefield deployment of humanoid robots remains some years away, security applications are already entering the market — and in many ways represent the nearer-term opportunity for humanoid platforms.

Facility and Perimeter Security

Autonomous security robots are already patrolling distribution centres, corporate campuses, residential complexes, and public spaces. Companies like Knightscope deploy AI-driven units that conduct 24/7 patrols, monitor for anomalies, read licence plates, and integrate with existing surveillance infrastructure. SMP Robotics manufactures wheeled patrol robots equipped with thermal imaging, facial recognition, and AI-powered threat detection for industrial sites and critical infrastructure.

These current-generation security robots are predominantly wheeled, not humanoid. But the industry is moving in a humanoid direction. RAD Security has introduced HERO (Humanoid Enforcement and Response Officer), a bipedal security robot with autonomous patrol capabilities, natural language interaction, and real-time threat response. The security industry publication Security Management has identified more than 20 humanoid robots in various stages of development that could be adapted for security applications.

The advantages of a humanoid form in security contexts mirror those in military settings: the ability to navigate stairs, open doors, operate lifts, and move through spaces designed for people. A wheeled security robot is limited to flat, accessible surfaces. A humanoid can, in principle, go anywhere a human guard can go.

Border and Critical Infrastructure Protection

Border security agencies and operators of critical infrastructure — power stations, water treatment facilities, ports — face a persistent challenge: vast perimeters that require constant monitoring, often in remote or harsh conditions. Human patrols are expensive, fatiguing, and limited in coverage. Robots can operate continuously, do not suffer from fatigue, and can be equipped with sensor suites far exceeding human perception — thermal imaging, acoustic monitoring, chemical detection, and real-time data transmission.

Humanoid robots would add the ability to respond physically to incidents in ways wheeled platforms cannot: climbing fences, entering buildings, navigating obstacles, and engaging with people in ways that only a human-shaped presence makes possible.

Bomb Disposal and Hazardous Response

Explosive ordnance disposal (EOD) is one of the oldest applications for military and security robots. Current EOD robots are typically tracked or wheeled platforms with manipulator arms. A humanoid robot offers a significant potential advantage: the ability to use the same tools and access the same spaces as a human bomb disposal technician, without requiring specialised end effectors for every task. If a humanoid robot has dexterous hands, it can theoretically manipulate any object a human technician could — turning handles, cutting wires, opening containers — in environments where a human presence would be lethally dangerous.

The Emerging Operational Model: Human-Machine Teaming

The prevailing vision across both defence and security is not one of autonomous robots replacing humans, but of mixed teams where robots and humans operate together, each contributing distinct strengths.

In this model, robots handle tasks characterised by danger, monotony, or endurance — the "dull, dirty, and dangerous" work — while humans provide judgment, decision-making, and contextual understanding. A forward reconnaissance humanoid might enter a building first, streaming sensor data back to a human team that makes tactical decisions based on what the robot finds. A security humanoid might patrol a perimeter autonomously for hours, but alert a human operator when it encounters something that requires a judgment call.
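The patrol-and-escalate pattern described above can be reduced to a simple loop. This is a minimal sketch under stated assumptions (the routine-event set and all names are hypothetical, not any vendor's API): the robot handles the dull, repetitive observations autonomously and escalates only those that require a human judgment call.

```python
# Observations the robot is assumed to handle on its own.
ROUTINE = {"clear corridor", "locked door", "scheduled delivery"}

def patrol(observations, alert_operator):
    """Walk a patrol route: log routine events, escalate the rest.

    alert_operator is a callback standing in for the link to a
    human operator who makes the judgment call.
    """
    log = []
    for obs in observations:
        if obs in ROUTINE:
            log.append(("logged", obs))    # handled autonomously
        else:
            alert_operator(obs)            # human judgment required
            log.append(("escalated", obs))
    return log

# Example: one anomaly among routine observations reaches the operator.
alerts = []
patrol(["clear corridor", "unidentified person", "locked door"],
       alert_operator=alerts.append)
print(alerts)  # → ['unidentified person']
```

The division of labour is the point: the robot filters hours of routine observation down to the handful of events where human judgment actually matters.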

The U.S. military's concept of human-machine integrated formations (H-MIF) envisions units where unmanned systems operate alongside soldiers, sharing information through common tactical networks. The Army's investment in JADC2 (Joint All-Domain Command and Control) networking is building the communications backbone for this kind of integrated operation.

This teaming model is also the primary answer to the most pressing ethical concern: no one is proposing fully autonomous lethal decision-making. The human remains in the loop for decisions involving the use of force.

At least, that is the stated intention today.

The Ethical Landscape

No discussion of defence robotics is complete without confronting the ethical dimensions, which are profound and unresolved.

The Autonomy Question

Every company currently developing military humanoid robots emphasises human-in-the-loop control. But the history of military technology suggests a pattern: systems that begin with full human oversight tend, over time, to acquire greater autonomy as operational demands increase and technology improves. The gap between a robot that recommends a target for human approval and one that engages a target autonomously can narrow very quickly under battlefield conditions where speed is critical.

The international community is engaged in active debate. The UN General Assembly has adopted resolutions on autonomous weapons systems three years running, with a November 2025 vote seeing 156 states in favour of addressing the challenges they pose. The UN Secretary-General has called for a legally binding instrument to prohibit lethal autonomous weapons that function without human control, recommending completion by 2026. More than 120 countries support negotiating a treaty on the subject.

Progress has been slow, however, with major military powers — including the United States, Russia, India, and Israel — resisting binding restrictions within the Convention on Conventional Weapons framework, where consensus rules allow a single state to block action.

The Accountability Gap

When a human soldier makes a decision that results in civilian harm, there is a chain of accountability — the soldier, their commanding officer, the military chain of command. When a robot causes harm, the question of responsibility becomes far more complex. Is the operator liable? The commanding officer who deployed it? The software developer whose algorithm made the targeting recommendation? The company that built it?

Legal scholars and human rights organisations describe this as a "responsibility gap" — a situation where harmful actions occur without identifiable moral agents who can be held accountable. Existing frameworks of international humanitarian law, international criminal law, and human rights law provide foundational principles, but they were not designed for machines that make or recommend lethal decisions.

Lowering the Threshold for Conflict

One of the most nuanced concerns is that by removing soldiers from direct physical risk, robotic systems could make military action politically easier to initiate. If no human soldiers are in danger, the political cost of deploying force drops. This could mean more frequent military interventions, not fewer — a dynamic that runs counter to the humanitarian arguments often made in favour of military robotics.

The Arms Race Dynamic

If one major military power deploys effective humanoid combat robots, rivals will feel compelled to follow. This creates a classic arms race dynamic, with the added complication that autonomous weapons technology is far easier to proliferate than, say, nuclear weapons. The components — AI software, commercial-grade sensors, standard actuators — are widely available. The barrier to entry is engineering integration, not access to exotic materials.

This proliferation risk extends to non-state actors. Unlike a fighter jet or aircraft carrier, a humanoid robot requires no massive industrial base to manufacture. As costs fall and capabilities improve, the technology becomes accessible to a wider range of actors, including those outside any regulatory framework.

The Privacy Dimension in Security Applications

In civilian security contexts, humanoid robots introduce surveillance capabilities that raise significant privacy concerns. A robot equipped with facial recognition, audio recording, and continuous video surveillance — patrolling public spaces and residential areas — represents a qualitative shift from static cameras. It moves through the world, actively approaches people, and collects data that can be aggregated, analysed, and stored.

Questions about who owns this data, how long it is retained, who can access it, and whether individuals consent to being monitored by a mobile autonomous platform are largely unanswered in most jurisdictions. The presence of security robots in public spaces blurs the line between security monitoring and mass surveillance.

What Comes Next

Several developments will shape the trajectory of humanoid robots in defence and security over the coming years:

  • Technology maturation: Battery life, bipedal stability on rough terrain, dexterous manipulation, and autonomous navigation in unstructured environments all need significant improvement before humanoid robots are operationally viable in demanding defence roles.
  • Cost reduction: Current humanoid platforms cost roughly $100,000 to $150,000 per unit. For mass military deployment, costs need to fall substantially — a trajectory that mirrors the history of military drones, which have dropped dramatically in price over the past decade.
  • Regulatory frameworks: Whether the international community achieves binding regulation of autonomous weapons systems — or whether development outpaces governance — will shape the boundaries of permissible use.
  • Doctrine and training: Militaries need to develop operational doctrine for human-machine teaming — how mixed units train, communicate, make decisions, and deal with the inevitable technical failures.
  • Public acceptance: Both military personnel and civilian populations will need to develop comfort with robotic systems operating in security roles. Public trust will be shaped by early experiences — both positive outcomes and inevitable incidents.

The Bottom Line

Humanoid robots in defence and security are not science fiction, but neither are they an imminent reality on the battlefield. The technology is in a transitional phase: serious investment is flowing, prototypes are being tested, and military doctrine is being developed. Security applications — perimeter patrol, facility monitoring, hazardous environment response — will likely see meaningful deployment before combat roles.

The question is not whether humanoid robots will play a role in defence and security. The investment, the strategic logic, and the technological trajectory all point in that direction. The real questions are about governance: who controls these systems, what decisions they are permitted to make, and how we maintain meaningful human accountability as machines become more capable.

These are not questions that can wait until the technology is mature. They need to be answered now, while the architecture is still being designed.


This article is part of Droid Brief's Applications & Use Cases series. For related reading, see Will Humanoid Robots Take Our Jobs?, Safety Standards & Regulation, and Hazardous Environments.

Last updated: March 2026