A robot gets a command to turn left. Half a second later, your Wi-Fi stutters. The robot thinks it's still executing that turn - but the network thinks the command expired three hundred milliseconds ago. The robot keeps turning. This is a ghost command, and it's how delivery bots end up in flowerbeds.
A new safety layer for ROS 2 catches these moments before the robot does something stupid. It's called ros2_kinematic_guard, and it watches for command-state misalignment - the technical term for "the robot thinks it's doing one thing but the network disagrees".
The Problem: Motion Commands Go Stale
ROS 2 robots listen to a topic called /cmd_vel - a stream of velocity commands that tell them how fast to drive and how fast to turn. When the stream is smooth, everything works. When Wi-Fi drops packets or adds latency, the robot ends up executing commands that are no longer valid.
Most systems handle this with timeouts. If no new command arrives within, say, 500 milliseconds, the robot stops. But that's a blunt instrument. A brief Wi-Fi hiccup triggers a full stop, even if the last command was still safe to keep executing. The robot jerks to a halt, waits, then resumes. Not great for a delivery bot weaving through pedestrians.
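The timeout approach is simple enough to sketch in a few lines. This is a generic illustration of the pattern, not code from ros2_kinematic_guard - the class name and 500 ms default are assumptions for the example:

```python
class TimeoutWatchdog:
    """Blunt-instrument safety: stop if no command arrives within the window."""

    def __init__(self, timeout_s=0.5):
        self.timeout_s = timeout_s
        self.last_cmd_time = None

    def on_command(self, now_s):
        # Record the arrival time of the latest command.
        self.last_cmd_time = now_s

    def should_stop(self, now_s):
        # Stop if no command has ever arrived, or the stream went quiet
        # past the deadline - regardless of whether the motion was safe.
        if self.last_cmd_time is None:
            return True
        return (now_s - self.last_cmd_time) > self.timeout_s


wd = TimeoutWatchdog(timeout_s=0.5)
wd.on_command(0.0)
fine = wd.should_stop(0.3)    # within the window: keep moving
halt = wd.should_stop(0.6)    # one hiccup past 500 ms: full stop
```

Note that `should_stop` has no notion of whether the last command still makes sense - any gap longer than the window forces a halt, which is exactly the bluntness the guard is designed to avoid.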
The kinematic guard takes a different approach. Instead of asking "did a command arrive?", it asks "does this command still make sense given the robot's current state?" It scores commands for kinematic consistency - whether the motion path matches what the robot should be doing right now.
How It Works: Consistency Scoring
Every time a motion command arrives, the guard checks three things. First - does the command timestamp match the robot's internal clock? If the command is more than a few hundred milliseconds old, that's a red flag. Second - does the commanded velocity align with the robot's current trajectory? A sudden 90-degree turn when the robot was moving straight is suspicious. Third - are there gaps in the command stream that suggest packet loss?
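The three checks can be folded into a single score. The sketch below is one plausible way to do it, assuming equal weighting and made-up limits (300 ms staleness, ~0.5 rad heading change, 200 ms stream gap) - the actual package may combine the signals differently:

```python
def consistency_score(cmd_age_s, heading_change_rad, gap_s,
                      max_age_s=0.3, max_turn_rad=0.5, max_gap_s=0.2):
    """Score a command window: 1.0 = fresh and aligned, lower = suspicious.

    cmd_age_s:          command timestamp vs. the robot's internal clock
    heading_change_rad: commanded heading vs. the current trajectory
    gap_s:              largest gap seen in the recent command stream
    """
    def penalty(value, limit):
        # Zero penalty within the limit, growing toward 1.0 as it is exceeded.
        if value <= limit:
            return 0.0
        return min(1.0, value / limit - 1.0)

    staleness = penalty(cmd_age_s, max_age_s)
    misalignment = penalty(heading_change_rad, max_turn_rad)
    stream_gap = penalty(gap_s, max_gap_s)
    # Equal weighting is an assumption made for this sketch.
    return 1.0 - (staleness + misalignment + stream_gap) / 3.0


fresh = consistency_score(0.05, 0.1, 0.0)   # fresh, aligned, no gaps
stale = consistency_score(0.6, 0.0, 0.0)    # twice the staleness limit
```

A command that trips only one check (say, slightly stale but well aligned) loses only a fraction of its score, which is what lets the guard treat it as marginal rather than rejecting it outright.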
The system assigns a consistency score to each command window. High score - the command is fresh and aligned with the robot's motion. Low score - something is off. Maybe the Wi-Fi dropped packets. Maybe the command is stale. Either way, the robot shouldn't execute it.
When the score drops below a threshold, the guard intervenes. It doesn't kill power or trigger an emergency stop. It just sends a brake command - a smooth deceleration that brings the robot to a controlled halt. The intervention latency is 50 milliseconds. Fast enough to stop a ghost command before it causes damage. Slow enough that the robot doesn't jerk around on every minor Wi-Fi wobble.
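A smooth brake is just a ramp of decreasing velocity commands rather than an instant zero. A minimal sketch, assuming a linear deceleration and a 50 ms control tick to match the guard's intervention latency (the deceleration rate here is an illustrative value, not the package's tuning):

```python
def brake_ramp(current_speed_mps, decel_mps2=1.0, dt_s=0.05):
    """Generate a controlled-stop profile: one velocity command per tick.

    Instead of commanding zero immediately (an e-stop), step the speed
    down by decel * dt each tick until the robot is stationary.
    """
    speeds = []
    v = current_speed_mps
    while v > 0.0:
        v = round(max(0.0, v - decel_mps2 * dt_s), 6)
        speeds.append(v)
    return speeds


# From 0.2 m/s, a 1 m/s^2 ramp at 50 ms ticks stops in four commands.
profile = brake_ramp(0.2)
```

Each value in the profile would be published as the next velocity command, so the robot decelerates over a few control cycles instead of slamming to zero.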
Why This Matters for Deployment
Delivery robots, warehouse bots, outdoor rovers - they all operate in environments where Wi-Fi is unreliable. A factory floor has metal shelves that block signals. A university campus has dead zones between buildings. A delivery route has spots where the robot switches between cell towers.
Traditional timeout-based safety systems treat every network hiccup as a crisis. The robot stops, waits for the connection to recover, then resumes. This works, but it's inefficient. The robot spends more time waiting than moving. Customers see a bot that stutters and stalls.
Kinematic consistency scoring lets the robot keep moving through brief network issues - as long as the commands it's executing are still valid. Only when the command-state alignment breaks does the guard step in. The result is smoother operation in real-world conditions. The robot doesn't stop for every Wi-Fi hiccup, but it does stop before executing a stale command that sends it into a wall.
The Real Test: Edge Cases
The hard part isn't detecting obvious problems - a five-second command delay is easy to catch. The hard part is the edge cases. What about a command that's only 200 milliseconds stale but happens to align with the robot's current motion? The timestamp is off, but the trajectory looks fine. Should the guard intervene?
This is where the scoring system earns its keep. It doesn't rely on a single metric. A slightly stale command with good trajectory alignment gets a medium score - not great, not terrible. The guard lets it through but watches closely. If the next command is also stale, the score drops further and the guard intervenes. It's a sliding scale, not a binary decision.
The other edge case is false positives. A perfectly valid command that happens to trigger the guard's suspicion. Maybe the robot made a sharp turn and the next command looks inconsistent because the turn was legitimate. The 50-millisecond intervention latency helps here - the guard reacts fast, but it also recovers fast. If the next command looks good, the robot resumes immediately. No lingering paranoia.
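Both edge cases - the gradual decay on repeated marginal commands and the fast recovery after a false alarm - fall out of keeping a running score instead of judging each command in isolation. A sketch of that sliding behavior, with a blend factor and threshold that are assumptions for illustration:

```python
class SlidingGuard:
    """Running consistency score: repeated bad commands drag it down,
    a single good command pulls it back up."""

    def __init__(self, brake_threshold=0.4):
        self.score = 1.0
        self.brake_threshold = brake_threshold

    def update(self, command_score):
        # Blend the new command's score into the running score. One marginal
        # command only nudges it; a run of them drags it below the threshold.
        self.score = 0.5 * self.score + 0.5 * command_score
        return self.score < self.brake_threshold  # True = intervene


guard = SlidingGuard()
guard.update(0.5)   # marginal command: let it through, watch closely
guard.update(0.2)   # score falling, still above threshold
guard.update(0.2)   # second bad command in a row: brake
guard.update(1.0)   # next command looks good: resume immediately
```

The same blending that makes the guard slow to panic also makes it quick to forgive - one healthy command lifts the score back above the threshold, so a legitimate sharp turn costs at most a brief brake, not a lingering lockout.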
This is the kind of unglamorous infrastructure work that makes autonomous systems actually work. Not the flashy demos where everything goes right. The safety layer that catches the moments when everything goes wrong.