Wednesday, July 27, 2016

Catching up on Tesla

This story has had a bit more legs than I expected, probably in part due to the way it was handled by the company. Tesla, like its founder, has always been exceptionally good at working sympathetic and (let's be frank) credulous reporters, but it has a horrible record of overreacting to even mild criticism. Of course, a large sale of stock in the window between the accident and its public disclosure did not help either.

I suspect that had we seen a more low-key response and fewer angry demands that reporters "do the bloody math," recent developments in the story would get less press.

For example:
The National Transportation Safety Board today issued the preliminary report of its investigation into the May 7th accident which involved a Tesla Model S operating in Autopilot mode. The crash resulted in the death of the driver, Joshua Brown, who was 40 years old. The agency found that Brown’s car was traveling nine miles over the posted speed limit at the time of the crash, and the report also includes the first officially released images of the accident.

Brown’s car was traveling at 74 miles per hour before it made impact with a tractor trailer that was crossing its path, according to the NTSB. The posted speed limit on the divided highway where the accident took place — US Highway 27A, near Williston, Florida — was 65 mph. The NTSB states the findings don’t contain "any analysis of data" beyond that, and the agency says that probable cause has yet to be officially determined. Tesla has stated that a combination of the "high, white side of the box truck" and "a radar signature that would have looked very similar to an overhead sign" are what caused the car's automatic braking not to fire, but the company declined to comment on the NTSB's preliminary report.

With this or any other complex story, it is important that we guard against thinking too much in terms of scalars and linear relationships. Specifically, we need to push past the simplistic "is this technology better or safer than human drivers?" and start asking what the technology does best, when we should use it, and how we can best incorporate it into our transportation system.

We've all seen optical illusions that prove how easy it can be to deceive the eye, but the flip side is that there are also situations where humans are remarkably good at extracting information from visual data. It is entirely possible that a human being actively engaged in driving would have done a better job distinguishing between a large white corrugated metal box and a bright blue sky.

This is by no means a damning criticism of Tesla's Autopilot. Even if autonomous systems were overwhelmingly superior on average, we would expect to find at least a few special situations where humans performed better.

What is far more troubling is when an autonomous system screws up something an autonomous system ought to do well. Maintaining a safe and legal speed would be high on that list.

This may be another one of those cases where people think they're discussing technology when, in fact, they are focused on policy and public relations. You would think that limiting the cars in autonomous mode to the posted speed limit would be a fairly trivial matter. If so, this would appear to have been a policy choice, and one with potential legal ramifications in the case of a wrongful death suit.
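For what it's worth, here is a minimal sketch of what that policy choice amounts to in code. Everything here is hypothetical (the function and parameter names are mine, and this says nothing about how Tesla's system actually works); it simply shows how little logic is involved in capping the autonomous-mode target speed at a known posted limit.

def autopilot_target_speed(driver_set_speed_mph, posted_limit_mph=None):
    """Return the speed the autonomous mode would be allowed to hold.

    Hypothetical policy: if the posted limit is known (from map data or
    sign recognition), never exceed it, even when the driver has dialed
    in a higher set speed. If the limit is unknown, fall back to the
    driver's set speed.
    """
    if posted_limit_mph is None:
        return driver_set_speed_mph
    return min(driver_set_speed_mph, posted_limit_mph)


# Example: driver sets 74 mph on a road posted at 65 mph.
print(autopilot_target_speed(74.0, 65.0))  # -> 65.0

The hard parts, of course, are knowing the posted limit reliably and deciding, as a matter of policy, whether the system should defer to it. The clamping itself is trivial, which is the point.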
