Friday, July 5, 2024

Full Self Driving Fallacy (aka, ultimately it's a user problem)

Tesla made its earliest stunning promises of Full Self-Driving cars back in 2016, and you can read a good history of full self-driving and Tesla Autopilot on the Wikipedia page.

Technology predictions are notoriously hard, but I'm going to make the case here that whilst fully autonomous driving (often called level 5) may be possible, Tesla's execution of incrementally adding greater autonomy whilst requiring driver presence and overall responsibility (essentially vehicle autonomy levels 1 through 4) is going to fail. It will fail not for a technical reason, but for a human and social one.

Consider the following incremental 'improvements' to driving automation (I've used bullets rather than numbers so they're not confused with the defined autonomous driving levels):

  • Traffic-Aware Cruise-Control (allows you to set a desired speed, but the car will match the speed of slower vehicles ahead if they’re obstructing)
  • Autosteer (which adds the ability for the car to track within lanes)
  • Navigate on Autopilot (which was introduced in the context of highway driving, getting you from on-ramp to off-ramp, crucially being able to change lanes when the driver indicates).
  • Auto Lane Change (adding the ability to change lanes on highways automatically, rather than requiring the driver to initiate each change).
  • Full Self-Driving (start-to-end-destination driving by the car, with success measured by how few driver interventions are needed).

There are also some 'point' features like Summon and AutoPark which I'm not going to discuss here. Tesla has some nuances in how these capabilities have changed over time, in particular in the degree to which driver attention is measured. From requiring sensors in the steering wheel to ensure hands are present (easily circumvented) to cameras tracking the driver's eyes, Tesla has recognized that at any level of autonomy under level 5, having the driver able to intervene is important, and therefore they want to be sure the driver is attentive.
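Tesla's actual monitoring logic isn't public, but the shape of it is easy to sketch. Here's a minimal, hypothetical illustration (every name and threshold below is my own invented assumption, not Tesla's) of the kind of escalation such monitoring implies:

```python
# Hypothetical sketch of driver-attention escalation -- Tesla's real logic is
# not public; every name and threshold here is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class AttentionSignals:
    wheel_torque_detected: bool   # steering-wheel sensor (easily circumvented)
    eyes_on_road: bool            # cabin-camera gaze estimate
    seconds_inattentive: float    # time since the driver last looked attentive

def escalation_step(s: AttentionSignals) -> str:
    """Return the next warning action for a sub-level-5 system."""
    if s.eyes_on_road and s.wheel_torque_detected:
        return "none"                # driver appears attentive
    if s.seconds_inattentive < 5:
        return "visual_warning"      # flash the instrument cluster
    if s.seconds_inattentive < 10:
        return "audible_warning"     # chime, then a louder chime
    return "disengage_and_slow"      # give up: slow the car, hand back control
```

Whatever the real thresholds are, any logic of this shape concedes the core problem: below level 5, the safety case depends on re-engaging a driver the system has spent the whole journey encouraging to disengage.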

Let's just go down the list above and consider what degree of driver attention is needed.
  • For Traffic-Aware Cruise-Control, the driver is actively steering and has to maintain an awareness of the surrounding road, nearby vehicle proximity and so on. The driver is, however, freed from modulating the accelerator: speed is held constant, and long freeway driving, as well as some stop-go traffic, becomes much less tiring.
  • With Autosteer added, the driver is still actively engaged in the driving process, but freed from the immediacy of lane-drift and following distance. The driver still has to plan: should I overtake when my exit is coming up in 2 miles? Should I pull out now to overtake, or will the faster-moving vehicle behind me have overtaken by the time I reach that point? My assessment is that driving like this is a nice balance: the car does the drudge work of ongoing micro-adjustments of lane placement and of speed relative to vehicles in the same lane, whilst the driver thinks more strategically.
  • Adding Auto Lane Change in theory means slightly more strategic thinking on the part of the driver (“Do I feel like a rest stop is a good idea in a few miles, or should I wait another 20?”), but the relative speeds of vehicles, the tracking of different kinds of vehicles, and the speed differentials involved in overtaking mean that the car will invariably not behave as a driver would, e.g.
    • I better speed up a little to get past this truck as it’s doing 63 and I’m set for 65, but there’s a bunch of faster moving cars a mile back that will be stacked behind my long overtake.
    • I better track a little over to the left here as that truck is wide and cutting closer to me than I would like on this curve.
    • I'm going to track a little more to the right to avoid that pothole. 
    • I’m just going to hold back here for a few minutes to let that red Mustang I can see way back there get by me, as I don’t want to be involved in any craziness.
What this leads to is a mismatch between the work the car is doing and the work the driver is doing. That is, the driver now has to plan for what they expect their own car to do, in addition to planning around the cars around them.
  • Full Self-Driving now takes over the lion's share of the driving work but, crucially, requires the driver to be monitoring everything and able to take over at a moment's notice if the car deems an unsafe situation has occurred, or (conversely) if the driver feels that the car is about to perform an unsafe maneuver. The driver’s role has transitioned from being fully engaged, but supported, in the driving process, to one where they are a spectator until they need to fully take over in a challenging situation that the car can’t handle.
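Before drawing the conclusion, it's worth a little back-of-the-envelope arithmetic on what 'a moment's notice' means at highway speed (the reaction times below are illustrative assumptions, not measured values):

```python
# Back-of-the-envelope: distance covered while a disengaged driver re-engages.
# The reaction times are illustrative assumptions, not measured values.
MPH_TO_MPS = 0.44704  # miles per hour -> metres per second

def takeover_distance_m(speed_mph: float, reaction_s: float) -> float:
    """Metres travelled between a takeover request and the driver acting."""
    return speed_mph * MPH_TO_MPS * reaction_s

for reaction_s in (0.75, 1.5, 3.0):  # alert driver ... distracted spectator
    d = takeover_distance_m(70, reaction_s)
    print(f"70 mph, {reaction_s:4.2f}s to react: {d:5.1f} m travelled blind")
# 0.75s -> ~23 m, 1.5s -> ~47 m, 3.0s -> ~94 m
```

Even for a fully alert driver, the car travels multiple car-lengths before any corrective action begins; for a spectator, it can be the better part of a hundred metres.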
My point in stepping through this is to argue that once you reach a certain point of driver replacement through automation, only 100% accurate self-driving is good enough, since requiring driver re-engagement to troubleshoot in milliseconds is a recipe for disaster. This seems to be borne out by some of the accident data coming out of NHTSA. To save you a click to the article, here's the summary:


"This analysis, conducted before Recall 23V838, indicated that drivers involved in the crashes were not sufficiently engaged in the driving task and that the warnings provided by Autopilot when Autosteer was engaged did not adequately ensure that drivers maintained their attention on the driving task. The drivers were involved in crashes while using Autopilot despite fulfilling Tesla’s pre-recall driver engagement monitoring criteria. Crashes with no or late evasive action attempted by the driver were found across all Tesla hardware versions and crash circumstances."


Unfortunately, if the metric you measure is safe passenger miles driven by Full Self-Driving, the data will lead you astray: you will have an awful lot of miles driven well by Autopilot, and a large number of accidents ‘caused’ by drivers after they take over.
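To make the measurement problem concrete, here's a toy illustration (all numbers invented) of how attributing post-takeover crashes to 'the driver' flatters the system's per-mile safety record:

```python
# Toy illustration of the attribution problem -- every number below is invented.
autopilot_miles = 1_000_000       # miles logged with the system engaged
crashes_while_engaged = 2         # crashes attributed to the system
crashes_after_takeover = 18       # crashes within seconds of a forced takeover

# Naive metric: only count crashes that happened while the system was driving.
naive_rate = crashes_while_engaged / autopilot_miles

# Broader metric: a crash seconds after a handover is part of the system's
# operating envelope, not an independent 'driver' crash.
broad_rate = (crashes_while_engaged + crashes_after_takeover) / autopilot_miles

print(f"naive:   {naive_rate * 1e6:.0f} crashes per million miles")
print(f"broader: {broad_rate * 1e6:.0f} crashes per million miles")
# The naive metric looks 10x better, purely through attribution.
```

The toy numbers matter only for their structure: the more aggressively the system hands marginal situations back to the driver, the better its 'system-engaged' miles look.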


So, my contention is that anything under fully autonomous level 5 driving is going to skew to this unfortunate situation of requiring full driver attention at precisely the moment the autonomous driving is failing, all whilst the semi-autonomy has eroded that attention. Fully autonomous level 5 driving may be feasible, since designing the system to do everything, without expecting driver intervention, will necessarily mean restricting availability to known, tested scenarios. There are an awful lot of 'edge' cases with level 5 that will need to be ironed out.
