Later, federal investigators will determine the details of the latest fatal Tesla crash with autopsy-like precision.

They’ll answer the specific questions: whether Autopilot was engaged, or whether it had been in the seconds before the Model S veered off the road, slammed into a tree and killed two people, or whether the driver-assist system had been engaged but inadvertently deactivated when the driver climbed into the front passenger seat.

Yet the biggest mystery of the crash, which occurred April 17 in Spring, Texas, may already be solved: what on Earth would compel the driver to abdicate responsibility for driving and physically move from behind the wheel?

“We have witness statements from people that said they left to test drive the vehicle without a driver, and to show the friend how it can drive itself,” Mark Herman, constable of Harris County Precinct 4, told Reuters last week.

Of course, Tesla vehicles cannot drive themselves. No automaker today sells a car to the public that is capable of autonomous operation. But many motorists either willfully ignore that reality or inadvertently conflate driver-assist systems with self-driving ones.

Which is Autopilot?

Tesla’s legal statements are clear. The Model S owner’s manual says “it is the driver’s responsibility to stay alert, drive safely and be in control of the vehicle at all times.”

Separately, the company’s general counsel told California regulators that Autopilot and the “Full Self-Driving” feature under development are, in fact, driver-assist systems that require an attentive human who’s responsible for driving.

Convincing Tesla owners, who greet such provisions and others like them with a wink and a nod, to take them seriously has become an urgent safety challenge, one stitched through the four completed and 24 ongoing investigations that the National Highway Traffic Safety Administration is conducting into crashes involving Teslas. YouTube is awash with videos of Tesla drivers circumventing the sensors that are supposed to ensure human drivers keep their hands on the steering wheel. More egregious are the videos that show drivers reading newspapers or sitting somewhere other than the driver’s seat.

It remains unclear whether Autopilot was engaged either at the time of or in the moments leading up to the April 17 collision.

CEO Elon Musk indicated that Tesla retrieved crash data from the car that showed the system was not engaged at impact. Authorities are waiting to review that data and served Tesla with search warrants last week. What was clear, according to police, was that nobody was seated behind the wheel.

More than the Autopilot technology itself, it is the reckless behavior that strikes at the heart of the problem, and stopping it will be paramount in preventing crashes, says Phil Koopman, chief technology officer at Edge Case Research, which advises companies on automated-vehicle testing and safety validation.

“In my mind, the thing that matters is preventing the next crash, and none of the specifics of the technology here seem likely to have a role in the next death,” he said. What matters is, “‘Is somebody going to try this again?’ Of course. Will one of them eventually get unlucky enough to die unless something changes? Seems pretty likely.”

Koopman offers education as one potential solution. In January, he authored a paper that proposed new language for discussing the capabilities and limitations of automated-driving systems. They are usually classified using SAE International’s engineering-minded Levels of Automation, from Level 0 to Level 5.

Koopman favors more consumer-friendly classifications: assistive, supervised, automated and autonomous.

Such terminology could indeed provide an underpinning for behavioral changes. But, he concedes, “it’s hard for education to undo an effective marketing strategy, and that’s what’s going on here.”

Countering the Tesla culture’s early-adopter, beta-test-friendly mindset may require a technical backstop. Autopilot is supposed to monitor driver engagement using steering-wheel torque.

Other automakers use inward-facing cameras to monitor drivers and ensure their eyes and attention are focused on the road. These systems issue warnings when the driver’s gaze strays for too long, and ultimately the driver-assist features disengage after repeated breaches.
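
A minimal sketch of that escalation logic might look like the following. The thresholds, class names and return values are illustrative assumptions for the sake of the example, not any automaker’s actual parameters.

```python
# Illustrative sketch of camera-based driver-monitoring escalation:
# warn on sustained inattention, disengage assistance after repeated breaches
# or if no driver is detected in the seat. All values are assumptions.

from dataclasses import dataclass

@dataclass
class MonitorConfig:
    warn_after_s: float = 3.0   # seconds of inattention before a warning
    max_warnings: int = 3       # breaches tolerated before disengagement

class DriverMonitor:
    def __init__(self, config: MonitorConfig):
        self.config = config
        self.inattentive_s = 0.0
        self.warnings = 0
        self.assist_enabled = True

    def update(self, eyes_on_road: bool, driver_in_seat: bool, dt: float) -> str:
        """Called once per sensor frame; returns the action for this frame."""
        if not self.assist_enabled:
            return "assist_disabled"
        # No driver in the seat at all: disengage immediately.
        if not driver_in_seat:
            self.assist_enabled = False
            return "disengage"
        if eyes_on_road:
            self.inattentive_s = 0.0
            return "ok"
        self.inattentive_s += dt
        if self.inattentive_s >= self.config.warn_after_s:
            self.inattentive_s = 0.0
            self.warnings += 1
            if self.warnings >= self.config.max_warnings:
                self.assist_enabled = False
                return "disengage"
            return "warn"
        return "ok"
```

In this sketch, the absence of a driver in the seat disengages assistance outright rather than merely triggering a warning; that is the kind of backstop critics say is missing today.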

Yet after the latest crash, Consumer Reports took a Model Y to a proving ground and found Autopilot could be “easily tricked” into driving with no one in the driver’s seat.

“The fact Tesla Autopilot could be used when no driver is in the driver seat is a searing indictment of how flawed Tesla’s driver monitoring system is,” William Wallace, Consumer Reports’ manager of safety policy, told Automotive News.

“We’ve expressed concerns for a long, long time. … What the new demonstration showcases is that Tesla’s so-called safeguards not only failed to make sure a driver is paying attention, but couldn’t tell if a driver was there at all. To us, that only underscores the terrible deficiencies that exist in the system Tesla is using to verify driver engagement.”

Automation complacency, already linked to at least three fatal Tesla crashes by federal investigators, is one thing; the complete absence of a driver is another.

Ensuring adequate driver monitoring could be a straightforward answer. Fixing a culture that encourages egregious driving behavior? That’s a more vexing matter.
