Why Your Robot Butler Still Can't Open a Door

Your humanoid robot can walk, talk, and probably beat you at chess. But ask it to open a door or climb stairs, and suddenly it's as helpless as a toddler. The reason isn't AI — it's physics. Specifically, the physics that robots still can't feel.

Here's a humbling fact: despite billions in funding, terabytes of training data, and enough hype to fill a stadium, humanoid robots still can't reliably handle stairs and doors. This isn't a software bug or a sensor limitation. It's a fundamental gap in how robots interact with the physical world. They can see. They can plan. They just can't feel.

The problem is called force control, and it's the dirty secret of the robotics revolution. While everyone's obsessing over large language models and visual processing, the real bottleneck is something much more basic: teaching a machine to know how hard it's pushing something. And it's turning out to be one of the hardest problems in robotics.

The Three Paradigm Shifts (And Why They Weren't Enough)

Robotics has seen three major breakthroughs in the past decade. First came deep learning, which gave robots the ability to recognize patterns and make decisions from data. Then in 2016, proprioceptive electric actuators arrived — motors that could sense their own position, movement, and even the forces acting on them with remarkable precision. Most recently, in 2023, Vision-Language-Action (VLA) models emerged, allowing robots to translate visual input and natural language commands directly into physical actions.

Each of these was genuinely revolutionary. Each pushed the field forward. And all three hit the same wall: they teach robots where to move, not how hard to push.

Think about the difference between holding an egg and holding a steel ball. Same size, same shape, completely different forces required. Humans know this instinctively — we feel the resistance, adjust our grip, modulate our pressure. Robots don't. They can calculate the correct force mathematically, but they can't feel when they're wrong. And in the messy, unpredictable real world, being able to feel is the difference between success and a very expensive omelet.
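
To make that concrete, here's a minimal sketch of the reflex a human hand runs without thinking: start gentle, tighten only when the object slips. The functions `read_slip_sensor` and `set_grip_force` are hypothetical stand-ins for real tactile and gripper APIs, not any particular robot's interface.

```python
def hold_object(read_slip_sensor, set_grip_force,
                initial_force=0.5, step=0.1, max_force=20.0):
    """Squeeze just hard enough: raise grip force only while the object slips.

    `read_slip_sensor` and `set_grip_force` are hypothetical callbacks
    standing in for real hardware APIs. Forces are in newtons.
    """
    force = initial_force
    while force <= max_force:
        set_grip_force(force)
        if not read_slip_sensor():  # no slip detected: current grip is enough
            return force
        force += step               # slipping: squeeze a little harder
    raise RuntimeError("cannot hold object within the force budget")
```

An egg and a steel ball run the exact same loop; the ball just drives the force higher before the slip signal goes quiet. The loop is trivial. Getting a slip signal a robot can trust is the part that's still unsolved.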

Why Manipulation Is Harder Than Locomotion

There's a reason Boston Dynamics' Atlas can do backflips but struggles with doorknobs. Locomotion — walking, running, balancing — is fundamentally about managing your own body. The physics are complex, but they're self-contained. You know your mass, your momentum, your center of gravity. The ground is (usually) predictable.

Manipulation is different. When a robot touches something, it enters a feedback loop with an external object that has its own mass, friction, and unpredictability. Push too hard, and you break something. Push too gently, and nothing happens. Push at the wrong angle, and the object moves in ways you didn't anticipate.

As MIT's Russ Tedrake puts it, robots lack "physics mastery." They can simulate physics. They can calculate physics. But they can't intuit physics the way humans do. And intuition, it turns out, is what makes manipulation possible.

What the Experts Say

Pulkit Agrawal, another MIT researcher, frames the problem clearly: current approaches learn position, not force. A robot learns to put its hand at coordinates X, Y, Z. But it doesn't learn what that should feel like. When reality deviates from the model — a slightly sticky door, an unexpectedly heavy object — the robot has no way to adapt.

Scott Kuindersma, who leads Boston Dynamics' Atlas team and its push into AI research, has noted that even the most advanced humanoids are essentially "blind" to force. They can detect it, sure. But detecting isn't the same as understanding. It's the difference between a smoke alarm and a firefighter.

Jonathan Hurst at Agility Robotics — makers of the Digit humanoid — has been more direct. We're in the "Volta stage" of robotics, he says, comparing the current era to the early days of electricity. We know it's important. We can do some impressive demos. But we haven't yet figured out the fundamental principles that will make it truly useful.

The Droid Brief Take

The force control problem is robotics' great humbler. It's the reason your robot vacuum gets stuck on a sock. It's why factory robots work in cages and why "human-robot collaboration" is still mostly a PowerPoint slide. All the AI in the world can't compensate for a machine that doesn't know its own strength.

What's fascinating is how little attention this gets. Investors pour billions into language models and vision systems while the fundamental physics problem remains unsolved. It's like building a race car with a perfect dashboard and no suspension. Looks great in the showroom, but try driving it.

The good news? This is a solvable problem. Humans solve it constantly, without thinking. The bad news? Biology had millions of years to figure it out. Robotics has had decades. And the gap between "can walk" and "can open a door" turns out to be much larger than anyone expected.

What to Watch

• Tactile sensor advances — new skin-like sensors that can detect pressure, texture, and temperature could give robots the feedback they need

• Simulation-to-reality transfer — better physics engines that can accurately model force interactions before robots encounter them in the real world

• Hybrid approaches — combining learned position control with traditional force control algorithms, rather than treating them as separate problems (a rough sketch of the idea follows below)
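
For a feel of what that hybrid might look like, here's a minimal sketch of a classical impedance law wrapped around a learned position target: the policy supplies the "where," and a spring-damper rule decides the "how hard." The gains and numbers below are illustrative assumptions, not values from any real controller.

```python
import numpy as np

def impedance_force(x_desired, x_actual, v_actual,
                    stiffness=200.0, damping=30.0):
    """Classical impedance control: act like a spring-damper toward the target.

    F = K * (x_desired - x_actual) - D * v_actual

    A learned policy would supply x_desired (the "where"); the gains K and D
    bound how hard the robot pushes (the "how hard"). Gains here are
    illustrative, not tuned for any real hardware.
    """
    x_desired = np.asarray(x_desired, dtype=float)
    x_actual = np.asarray(x_actual, dtype=float)
    v_actual = np.asarray(v_actual, dtype=float)
    return stiffness * (x_desired - x_actual) - damping * v_actual

# Example: the hand is 2 cm short of a door handle, moving slowly toward it.
f = impedance_force(x_desired=[0.50, 0.0, 1.0],
                    x_actual=[0.48, 0.0, 1.0],
                    v_actual=[0.05, 0.0, 0.0])
print(f)  # commanded force of about 2.5 N along x: firm but bounded
```

The appeal of the spring-damper framing is that when reality deviates (a sticky hinge, a heavier door), the controller yields instead of fighting, which is exactly the adaptability that pure position control lacks.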