
Should Tesla Take The Initiative To Better Monitor And Manage Driver Behavior With Autopilot?


By any account, Tesla’s Autopilot feature has come a long way since being released to consumers through an Over-The-Air (OTA) update just over 4 years ago. At inception, it was at best an incredibly sophisticated bootstrapped system built on a Mobileye sensing platform and limited processing capability. With Tesla’s newer computing and data platforms, Autopilot is clearly one of the world’s more complex and impressive applications of a learning-based Artificial Intelligence (AI) system.

The safety of Autopilot continues to be questioned by experts worldwide. A recent letter from Senator Markey to Elon Musk adds to an already heated conversation. In a post earlier this year on the Tesla automation strategy, I raised the question of whether it is appropriate for consumers to be used as test subjects, the need for a well-validated measure of the risk associated with the use of Autopilot, and the need for camera-based driver monitoring to manage inattention. In the months since, visuals of drivers falling asleep, engrossed in non-driving activities, or otherwise driving inattentively have continued to appear in the media. Seemingly random, but realistically predictable, crashes keep happening that might be preventable with camera-based driver monitoring and driver management.

The reason these crashes are predictable is that humans have long been known to be poor overseers of highly automated systems. The irony is that the more trustworthy the automation appears to its supervisor, the more system failures go unmitigated by that supervisor: it simply becomes harder to sustain attention as automation becomes more reliable. There is a painful history in transportation safety demonstrating this. Recently, the National Transportation Safety Board (NTSB) detailed these types of failures in investigations of several Autopilot crashes and the tragic 2018 Uber fatality.

Perhaps paradoxically, it could be argued that with the explosive use of Autopilot, we would expect to see more crashes. However, the work from the late Raja Parasuraman and his students would suggest that imperfect automation (like Autopilot) keeps many users from becoming too complacent. In essence, it’s possible that the failure of a few individuals to appropriately oversee and collaborate with the automation is mixed with more responsible use by many. 

This is not a new phenomenon for automotive engineers. It has long been known that it is difficult to engineer for all. Some drivers choose to speed, pick up their phones, and otherwise behave irresponsibly toward the drivers with whom they share the road. Are close calls and crashes with Autopilot really more frequent than other cases of attention failure? Or are they just more readily associated with a new and perhaps transformative (and perhaps over-marketed) technology? Over the coming years there will likely be many efforts to better understand, and arguments around answering, these formidable questions. One clear difference from the introduction of previous generations of automotive systems is that today’s social media driven society is helping to highlight problematic events.

The images and video that accompany today’s social posts have increasingly brought what was once largely hidden from public view into the limelight. This naturally presses consumers, safety advocates, regulators and even politicians into a position of judging the social ethics of how technologies are implemented.

Given this new reality, Senator Markey’s letter to Mr. Musk, questioning whether Tesla tracks or monitors online videos to learn about inappropriate use of Autopilot, is right on track. Something else to consider is that Tesla Model 3s are equipped with a cabin camera. Might Tesla use this sensor, and a fraction of its image-processing prowess, to better gauge the state of drivers? Even a rudimentary driver monitoring system might be able to detect an outright inattentive driver asleep at the wheel or head down for seconds on end.
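To make the "rudimentary" point concrete: detecting a driver who is head-down for seconds on end could be little more than a duration threshold applied to per-frame estimates from the cabin camera. The sketch below is purely illustrative and assumes an upstream vision model that already labels each frame head-down or not; the frame rate and alert threshold are hypothetical numbers, not anything Tesla has disclosed.

```python
# Illustrative sketch only: assumes an upstream pose/gaze estimator
# has already classified each cabin-camera frame as head-down or not.

FRAME_RATE_HZ = 10          # assumed camera sampling rate
ALERT_AFTER_SECONDS = 3.0   # assumed threshold for "seconds on end"

def inattentive(frames_head_down: list[bool],
                frame_rate_hz: float = FRAME_RATE_HZ,
                alert_after_s: float = ALERT_AFTER_SECONDS) -> bool:
    """Return True if the driver is head-down for one continuous
    stretch longer than the alert threshold."""
    needed = int(alert_after_s * frame_rate_hz)  # frames required to trigger
    run = 0
    for head_down in frames_head_down:
        # Count consecutive head-down frames; any attentive frame resets the run.
        run = run + 1 if head_down else 0
        if run >= needed:
            return True
    return False
```

Glancing down briefly would reset the counter and never trigger an alert, while a sustained head-down stretch would; production systems layer far more signals (eye closure, gaze direction, hands on wheel) on top of this kind of temporal logic.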

Manufacturers such as GM and BMW have introduced camera-based driver monitoring and management systems with the launch of their collaborative driving systems (i.e., SAE L2). One has to suspect other manufacturers will follow this seemingly reasonable route to market, bridging the time until automated systems that do not depend on attentive human back-up are truly available.

If outliers are responsible for the majority of risky behavior associated with Autopilot, might Tesla be passively accepting undue risk relative to the rest of the market? Growing evidence suggests that Autopilot is placing drivers, and perhaps the automotive industry, in a precarious position. The aviation industry has long known that one airline’s accident erodes trust in all airlines.

With a call to reinvent driving through the greater use of automation, is it time for the automotive industry to collaborate so that these technologies have the greatest potential to make driving safer in the years ahead?

Following efforts in Europe, and using a framework similar to the NHTSA-IIHS collaborative industry agreement on AEB, could manufacturers and the government come together to take another step forward in safety by collaboratively agreeing to install camera-based driver monitoring systems to work alongside collaborative driving features?

Some might argue that a detailed, performance-based standard for systems is needed. However, starting with a fairly broad and open concept may be a more realistic starting point, and a Federal Motor Vehicle Safety Standard (FMVSS) could be more easily justified as experience is gained and the cost/benefit justification of a standard emerges. Working together to develop guardrails around the deployment and testing of automation on public roads might be the most important step to accelerate the adoption of potentially lifesaving automated vehicle technologies.
