A violent crash at a San Francisco intersection has put fresh scrutiny on how driverless cars behave in the chaos that follows a collision. Video from the scene shows a Waymo robotaxi stopped near the wreck and then pulling away as bystanders rush in, and the vehicle’s exit is raising new questions about accountability when no human is behind the wheel.
The incident comes at a tense moment for autonomous vehicles in California, where regulators, local officials, and residents are already debating how safely robotaxis handle emergency workers, unpredictable traffic, and human error on the streets.
What happened
Witness video captured a white Waymo Jaguar I-Pace stopped close to an intersection where a speeding car slammed into another vehicle, sending debris and people scattering. In the clip, as described by one account, the robotaxi sits near the crash site as the impact unfolds, then appears to edge away while pedestrians run toward the damaged cars.
The footage shows the Waymo vehicle with its roof-mounted sensors active and brake lights illuminated, apparently assessing the scene. After a short pause, it begins to move, steering around the area where the collision occurred and leaving the immediate vicinity as people gather to help. There is no visible human driver inside the vehicle, and no one steps out to speak with witnesses or police.
According to that description, the crash itself involved human-driven cars, not the robotaxi. The initial impact came from a non-autonomous vehicle that entered the intersection at high speed and struck another car broadside, with a force bystanders characterized as severe. The Waymo was not hit, did not appear to lose control, and did not collide with any pedestrians or debris.
The key controversy is what happened next. People at the scene and viewers of the video have questioned whether the robotaxi should have remained in place, attempted to contact emergency services, or otherwise behaved more like a human driver who had just witnessed a major collision. Instead, the car’s decision to leave has been interpreted by some as a kind of algorithmic flight from a chaotic and potentially dangerous environment.
Waymo has faced related complaints in San Francisco that its vehicles sometimes stop in awkward places, block traffic, or behave unpredictably around active emergencies. In earlier incidents detailed in a review of Waymo problems, robotaxis have reportedly driven into areas with downed power lines, obstructed fire engines, or failed to respond smoothly to hand signals from first responders. The new crash-adjacent video feeds directly into those broader concerns about how well the software handles rare but high-stakes scenarios.
Why it matters
The debate over the robotaxi’s response is not simply about optics. It goes to the heart of how autonomous vehicles are programmed to weigh safety, liability, and traffic rules when something goes very wrong nearby, even if they are not directly involved.
In a typical multi-car crash, human drivers who witness the event are expected to stop if they were part of the chain of events or if their testimony might matter. They may call 911, warn oncoming traffic, or help injured people. Traffic laws in many states require drivers to remain on scene in collisions that cause injury or significant damage. Yet those statutes were written for humans who can render aid and answer questions, not for a driverless car that might be operating without any passenger at all.
Waymo’s software is designed to prioritize safety for its passengers and other road users, which often means avoiding secondary impacts and clearing out of unpredictable zones. From a purely risk-based standpoint, moving away from a fresh crash can reduce the chance that the robotaxi is hit by another car, struck by flying debris, or trapped in a pileup. The system may interpret the intersection as blocked or hazardous and choose what it calculates to be a safer nearby location.
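One way to picture that purely risk-based logic is a simple comparison between the hazard of holding position and the hazard of moving. The sketch below is only an illustration of the reasoning described above; every signal name and number is a hypothetical stand-in, not Waymo’s actual planning code.

```python
from dataclasses import dataclass

@dataclass
class SceneRisk:
    # Hypothetical hazard estimates, each in [0, 1]; not real Waymo signals.
    secondary_impact: float   # chance of being struck while holding position
    debris_exposure: float    # risk from scattered debris around the wreck
    relocation_hazard: float  # risk created by driving away through the scene

def should_relocate(risk: SceneRisk, margin: float = 0.2) -> bool:
    """Relocate only if staying looks meaningfully riskier than moving.

    Note what is missing: a purely risk-based policy like this has no term
    for civic expectations, such as staying visible for investigators or
    preserving a vantage point on the scene.
    """
    stay_risk = max(risk.secondary_impact, risk.debris_exposure)
    return stay_risk > risk.relocation_hazard + margin

# Example: a fresh crash with live traffic approaching from behind.
print(should_relocate(SceneRisk(0.7, 0.4, 0.2)))  # True: moving scores safer
```

Even in this toy version, the criticism becomes legible: "leave" can win the comparison even in situations where staying would serve investigators and bystanders.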
That logic, however, clashes with public expectations about shared responsibility on the road. Viewers who see a vehicle calmly steering away from a frightening scene, even if it did nothing wrong, can perceive that behavior as cold and unaccountable. The absence of a human driver to make a judgment call, step out, and engage with others amplifies the sense that no one is in charge when something unexpected happens.
Regulators in California have already been wrestling with these tensions. State agencies have granted permits that allow companies like Waymo to operate fully driverless taxis in parts of San Francisco, but city officials and residents have complained that the vehicles sometimes freeze in intersections, block bus lanes, or ignore informal cues from traffic officers. Reports of AVs interfering with emergency workers have become a particular flashpoint, with fire and police departments arguing that the cars do not always recognize flares, tape, or improvised detours.
The crash-adjacent incident adds a new twist: what obligations, if any, does an autonomous vehicle have as a witness rather than a participant? Should the system be required to remain in place long enough to preserve sensor data, provide a clear visual for investigators, or ensure that its own cameras captured the full sequence? Or is it enough for the operator to store and share that data later, while the car itself prioritizes clearing the area?
There is also a liability dimension. If a robotaxi stays close to a crash scene, it might be more likely to be drawn into legal disputes or insurance claims, even if it was not at fault. If it leaves quickly, critics may argue that the operator is trying to minimize entanglement at the expense of transparency. Clear rules about data retention, cooperation with investigators, and communication with emergency services could help resolve that tension, but those standards are still being developed.
What to watch next
The fallout from this incident will likely unfold on several fronts: technical updates, regulatory pressure, and public trust.
On the technical side, the question is whether Waymo and other developers will adjust their decision-making logic for crash-adjacent events. That could mean programming vehicles to pull to the curb and stop within line of sight of a collision, automatically contact emergency dispatch if certain sensor thresholds are met, or flash external messages to signal that data is being recorded and help is on the way. Each of those choices involves trade-offs between safety, privacy, and operational efficiency.
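To make those trade-offs concrete, here is one hypothetical shape such a crash-adjacent policy could take. The thresholds, signal names, and response tiers are all assumptions for illustration; nothing here reflects how Waymo or any other developer actually implements this.

```python
from enum import Enum, auto

class CrashResponse(Enum):
    CONTINUE = auto()            # no nearby crash detected
    PULL_OVER_AND_HOLD = auto()  # park within line of sight, keep recording
    HOLD_AND_ALERT = auto()      # also notify dispatch through the operator

# Hypothetical tuning constants; a real system would calibrate these carefully.
IMPACT_NOISE_DB = 100.0  # audio spike consistent with a nearby collision
MIN_PEDESTRIANS = 3      # people converging on one point suggests an incident

def crash_adjacent_policy(peak_audio_db: float,
                          sudden_stopped_vehicles: int,
                          pedestrians_converging: int) -> CrashResponse:
    """Illustrative threshold logic for a vehicle that witnesses a crash."""
    crash_likely = (peak_audio_db >= IMPACT_NOISE_DB
                    and sudden_stopped_vehicles >= 1)
    if not crash_likely:
        return CrashResponse.CONTINUE
    if pedestrians_converging >= MIN_PEDESTRIANS:
        # Stronger evidence of severity: hold position and escalate.
        return CrashResponse.HOLD_AND_ALERT
    return CrashResponse.PULL_OVER_AND_HOLD

# Example: loud impact, two stopped cars, four people running toward them.
print(crash_adjacent_policy(105.0, 2, 4))  # CrashResponse.HOLD_AND_ALERT
```

Even in this toy form, the hard part is visible: every threshold encodes a judgment about exactly the safety, privacy, and efficiency trade-offs described above.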
Regulators in California and other states are watching these edge cases closely. The same scrutiny that has followed reports of AVs blocking fire trucks or driving into active emergency scenes is likely to extend to how they behave just outside the impact zone. Transportation agencies could require detailed incident response plans as a condition of operating permits, specifying how long a driverless car must remain nearby, when it can move, and how its operator must coordinate with local police and fire departments.
City governments, which bear the brunt of resident complaints, may also push for more granular control over where and when robotaxis can operate. If local officials conclude that AVs do not handle chaotic intersections or nightlife corridors well, they could seek time-of-day restrictions or geographic carve-outs that limit exposure to the most unpredictable environments.
Public perception is another critical factor. Videos of driverless cars behaving oddly tend to spread quickly, and each new clip shapes how residents feel about sharing the road with software. The sight of a robotaxi inching away from a violent crash, even if consistent with its programming, can erode confidence in the technology’s judgment and in the companies that deploy it.
Waymo and its rivals will likely respond with a mix of technical explanations and promises of improvement, but the burden is shifting. Communities are no longer asking only whether the cars can avoid causing crashes. They are asking how these vehicles behave as civic actors when others make mistakes, when infrastructure fails, or when pure bad luck produces a dangerous scene.