In May 2016, a driver and his “driverless” Tesla Model S both failed to see a tractor-trailer crossing the road. The delay in taking control back from the autopilot proved fatal.
A subsequent investigation by the National Highway Traffic Safety Administration (NHTSA) raised concerns about the limitations of the autopilot feature and Tesla’s definition of the term.
The Risks Of Robot-Human Interactions
The investigation found insufficient grounds for a product recall. Yet the tragedy underscores the findings of a recent study that revealed an alarming trend in how humans interact with robotics. The authors of “Takeover Time in Highly Automated Vehicles: Noncritical Transitions To and From Manual Control,” published in the journal Human Factors, observed 26 men and women, aged 20 to 52, taking part in a simulated driving test.
While “driving” at 70 miles per hour under normal simulated conditions, free of distractions, participants received takeover requests from the vehicle. At those moments, the “drivers” had to either assume control from the autopilot or relinquish control back to it. These transitions showed delayed response times ranging from two to 26 seconds. At 70 miles per hour, a 26-second delay carries a car more than half a mile, a potentially serious safety hazard.
Average response time in a driverless car matters in both emergency and non-emergency situations. The study’s results, however, sharpen the focus on the differences among individual human reactions: a takeover system tuned to the average driver still leaves the slowest responders dangerously exposed.
Problems, some of them fatal, have existed from the moment autopilots were introduced in airplanes. A large body of research documents difficulties in handing control from human to autopilot and back, largely due to poor monitoring. The common denominator is an operator who relies too heavily on the autopilot and fails to intervene when necessary.
Simply stated, robotics must consider the human factor.