
Raffi Krikorian, Mozilla's CTO and the former head of Uber's self-driving car division, totaled his Tesla Model X while using "Full Self-Driving" on a residential street. His kids were in the back seat.
In a new essay published in The Atlantic, Krikorian breaks down the accident and offers what might be the most informed critique yet of the fundamental problem with Tesla's approach to "supervised" autonomy, from someone who actually built self-driving systems for a living.
The accident
Krikorian describes a Sunday drive he had done hundreds of times: taking his son to a Boy Scouts meeting through Bay Area residential streets. His Tesla was in FSD mode, and the system was driving without issue until it suddenly wasn't.
As the Model X entered a turn, FSD appeared to lose its bearings. Krikorian describes the wheel jerking erratically and the car decelerating without warning. He grabbed the steering wheel, but couldn't recover in time. The car slammed into a concrete wall and was totaled. Krikorian suffered a concussion, a stiff neck, and days of headaches. His children were unhurt.
What makes this account particularly striking is Krikorian's background. At Uber's Advanced Technologies Center, he ran the group building autonomous vehicles and trained human safety drivers on exactly when and how to intervene when a self-driving system fails. During his two years leading the division, Uber's early pilot programs had zero accidents.
Despite all of that expertise, FSD still got him.
He writes that he started using FSD on highways, where clear lane markers and predictable traffic made sense. Then he tried it on local roads, it worked well, and it became habit. Before the crash, his hands were on the wheel. He was doing what Tesla asks drivers to do: monitor, not steer. But as he puts it, the system had conditioned him to trust it.
After the crash, his name was on the insurance report. Not Tesla's. That's how every FSD crash works under the current legal framework: Tesla's system is classified as Level 2, meaning the driver is responsible at all times.
Krikorian also raises a troubling point about Tesla's data practices. The car constantly logs the driver's hand position, reaction time, and eye tracking, and Tesla has used this data after crashes to shift blame onto drivers. Meanwhile, drivers who request their own data say they've received only fragments. In the landmark Florida wrongful-death case that resulted in a $243 million verdict, plaintiffs had to hire a hacker to recover critical evidence from the crashed vehicle's computer chip because Tesla claimed the data couldn't be found.
The supervision problem
The most valuable part of Krikorian's essay is his analysis of why "supervised" self-driving is fundamentally broken, a topic we've covered extensively at Electrek.
His core argument: Tesla is asking humans to supervise a system that is specifically designed to make supervision feel unnecessary. As he puts it, an unreliable machine keeps you alert, and a perfect machine needs no oversight, but one that works almost perfectly creates a trap where drivers trust it just enough to stop paying attention.
The research backs this up. Psychologists call it the "vigilance decrement": monitoring a nearly perfect system is boring, boredom leads to mind-wandering, and drivers need five to eight seconds to mentally reengage after an automated system hands control back. But emergencies unfold faster than that.
Krikorian cites an Insurance Institute for Highway Safety study showing that after just one month of using adaptive cruise control, drivers were more than six times as likely to look at their phones. Tesla's own website warns FSD users not to become complacent, but the system's smooth performance actively trains that complacency.
He points to two well-known crashes to illustrate the impossible math. In the 2018 Mountain View accident that killed Apple engineer Walter Huang, the driver had six seconds before his Tesla steered into a concrete median. He never touched the wheel. In the 2018 Uber crash in Tempe, Arizona, sensors detected a pedestrian with 5.6 seconds of warning, but the safety driver looked up with less than a second remaining.
In Krikorian's own case, he did take action, but he was asked to snap from passenger back to pilot in a fraction of a second, overriding months of conditioning. The logs show he turned the wheel. They don't show the impossible math of that transition.
The pattern Krikorian describes should sound familiar to anyone who has followed Tesla's FSD controversies: condition the driver to rely on the system, erode their vigilance through months of smooth performance, then point to the terms of service and blame them when something breaks. When FSD works, Tesla gets credit. When it doesn't, the driver gets blamed.
Krikorian also contrasts Tesla's approach with a notable example of accountability from a competitor. In July 2025, BYD announced it would pay for damage caused by crashes involving its autonomous parking feature, with no insurance claim required and no impact on the driver's record. It's a limited example, but it demonstrates that shared liability between automaker and driver is a choice, not an impossibility.
Electrek’s Take
We've been saying this for years: Tesla's FSD is getting more dangerous as it gets better. The smoother it gets, the more it lulls drivers into a false sense of security, and the harder it becomes to snap back when the system inevitably makes a mistake.
What makes Krikorian's account so compelling is that he's not some random Tesla critic. He built self-driving cars at Uber. He trained safety drivers on intervention protocols. He understood the risk intellectually, and he still got conditioned into complacency. If someone with that level of expertise can get caught, the average Tesla owner doesn't stand a chance.
The "supervised" label is a legal shield, not a safety solution. Tesla knows that humans cannot reliably supervise a system that works 99% of the time; the research is clear, and Krikorian lays it out plainly. Yet the company continues to sell "Full Self-Driving" while pointing to the fine print when things go wrong.
With NHTSA currently investigating 80+ FSD incidents covering 2.88 million vehicles, and a growing flood of lawsuits following the $243 million Florida verdict, the pressure on Tesla to actually share liability for its system's failures is mounting. BYD showed it's possible. The question is whether Tesla will ever choose accountability over blame-shifting.


