This article is well worth reading to help understand FSD technology, the human element of self-driving, and an interesting comparison between the Tesla and Waymo approaches ...
“Why Tesla's FSD Approach is Flawed” - Thoughts from a Quirky Llama
The only folks more prone to rants than myself are Tesla fans talking about neural nets. So I thought, why not combine the power of the rants? I'll rant about Tesla fans ranting about the power of neural nets...
To get myself good and wound up, I listened to a Tesla Daily Podcast about Full Self Driving (FSD) and Neural Nets.
"Jimmy D" is the guest and he talks with some authority about neural nets. He doesn't get anything terribly wrong on the basic tech. Everything he says about how Tesla plans to exploit it seemed reasonable to me. If you are a Tesla skeptic, it's a good way to understand Tesla bull thinking.
Of course, like everyone else who's familiar with the AV space, I think Jimmy is 100% wrong about Tesla's prospects. He's a typical Tesla fan - smart, technically savvy, and without any detailed domain knowledge. He's drawn in by the surface logic of Tesla's approach, but isn't expert enough to know that it's flawed in critical ways.
In the Podcast, Jimmy gets four things fundamentally wrong:
- Neural Nets aren't good enough. 99% accuracy simply isn't good enough for AV.
- Lidar isn't used just because Waymo started long ago.
- Tesla "training data" is virtually worthless.
- Partial autonomy isn't safer.
- Teslas aren't safer.
- Llamas don't know how to count.
Before we talk about why Tesla's approach is flawed, let me make clear that I'm
Bullish on Autonomy
Don't take anything I say here to mean I don't believe in autonomous vehicles. I do. They are coming, and when they are fully mature, they will upend society as much as the Model T did.
I also happen to think Waymo is way ahead of the competition (Cruise/Uber/Lyft/etc.), perhaps by 2-3 years or more. I might be a Google partisan as a former employee, but I think if you look at what people have actually demonstrated, Waymo is way ahead. As we'll talk about below, the hardest part is getting high reliability.
Virtually anyone can throw together a demo which mostly works. But those demos require a human behind the wheel in case of failure. Getting rid of the human is the hardest part. Really it's the only hard part. Only Waymo has demonstrated this ability in real-world conditions.
In short: I believe in AVs, I think they are coming perhaps sooner than people expect, and I think Tesla's approach is flawed and none of the Teslas on the road will ever be capable of FSD.
So why do most experts think Tesla's approach won't work?
Neural Nets Aren't Good Enough
If any fanboys read this, let me try to defuse your anger by saying neural nets are Great! I work with them all the time. The technology has made AMAZING strides. We can do things now that seemed like science fiction 6 years ago. Image recognition in particular has undergone a revolution.
NN's are incredibly useful to autonomous vehicles, and they have grown much more accurate in recent years. However, they are not nearly good enough for FSD. This is the core problem that Jimmy glosses over. It's why AV experts think Tesla is a joke.
It's possible that NN's are incredible and great for a ton of applications, but also not nearly good enough for driving safety-related decisions. Software that's right 99% of the time isn't good enough when deciding if a biker is in your lane.
NN's and Image Recognition
Advances in Deep Learning have made various image processing and recognition tasks possible. Smartphones now do extensive computational photography using neural nets. Nest cams use neural nets to identify people, as do Facebook and other social networks.
These tools generally work well. However, they don't work 100% of the time. In fact, they are not that close to 100%. If Portrait mode fails once every fifty photos, who cares? If Facebook suggests the wrong person in a photo, does it really matter? These tools are commercially viable because they mostly work, and 98% accuracy is far more than needed for most use cases.
98% isn't nearly enough for autonomous vehicles.
In fact, 99.9% isn't good enough. I'd hazard that AV safety engineers probably want 5 or more 9's of reliability: 99.999%. That might seem excessive, but even that allows for a 1:100,000 chance of misidentifying a pedestrian in your path. Given what we know about existing solutions and the difficulty of the problem, it's unlikely Tesla's NNs even reach 99.5% accuracy on many safety-critical classification tasks. 99.5% would be a major achievement. To use a phrase familiar to Tesla fans, Tesla is orders-of-magnitude away from a viable FSD solution. They need their system to be at least 100x more accurate.
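To make the gap concrete, here's a back-of-envelope sketch. The classifications-per-mile and mileage figures are my own illustrative assumptions, not numbers from the podcast or the article:

```python
# Back-of-envelope: expected misclassifications at a given accuracy.
# Both constants are illustrative assumptions, not measured values.

CLASSIFICATIONS_PER_MILE = 1_000   # hypothetical safety-critical calls per mile
MILES = 10_000                     # roughly a year of driving

for accuracy in (0.99, 0.995, 0.999, 0.99999):
    expected_errors = (1 - accuracy) * CLASSIFICATIONS_PER_MILE * MILES
    print(f"{accuracy:.3%} accurate -> ~{expected_errors:,.0f} errors "
          f"over {MILES:,} miles")

# 99.000% accurate -> ~100,000 errors over 10,000 miles
# 99.999% accurate -> ~100 errors over 10,000 miles
```

The absolute numbers shift with how often you assume the system makes a safety-critical call, but the 100x-1000x gap between today's accuracy and what's needed shows up at any plausible setting.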
This is why every other company pursuing AVs is using lidar. Lidar is extremely reliable and accurate. If your lidar sensor says there's nothing in your path, then there's nothing in your path, especially when you have two independent sensors looking in the direction of travel. That's what's needed to get to 99.999% reliability. For all the talk about NN advances, the fact of the matter is that error rates for critical decisions are still way too high.
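The redundancy arithmetic is worth spelling out. Here's a simplified sketch that assumes the two sensors fail independently; real failure modes are partly correlated (fog, glare, occlusion), and the miss rates are hypothetical, so treat this as an optimistic bound:

```python
# Two independent sensors must BOTH miss an object for the system to miss it.
# Miss rates are hypothetical; independence is an optimistic assumption.

camera_miss = 5e-3   # hypothetical: a 99.5%-accurate vision system
lidar_miss = 1e-4    # hypothetical per-object miss rate for lidar

combined_miss = camera_miss * lidar_miss   # independence assumption
print(f"camera alone : {camera_miss:.0e}")    # 5e-03
print(f"lidar alone  : {lidar_miss:.0e}")     # 1e-04
print(f"camera+lidar : {combined_miss:.0e}")  # 5e-07, beyond five nines
```

Vision alone has to hit the reliability target by itself; vision plus lidar only needs each sensor to be good, not perfect.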
No one in the field has any idea how to lower those error rates another 10x, let alone 100x. It's going to take a major breakthrough (perhaps more than one) to get vision systems reliable enough to depend on for driving.
So next time someone starts talking about using neural nets for FSD, ask if they think those systems can get to 99.999% accuracy. Ask if anyone anywhere has ever demonstrated a vision system this accurate on a real-world task.
Lidar Isn't Ancient
One way Tesla fans explain away the ubiquity of lidar in AVs, and its absence in Teslas, is by saying that lidar is old: it was needed before the Deep Learning revolution, and Waymo's efforts started before DL, so they built up an architecture around lidar.
This is sort of true. The original DARPA challenges came before the recent revolution in neural nets, and Google's program started before it as well. However, many other programs were started well after Google's. Cruise started in 2013. Anthony Levandowski formed Otto in 2016 (2 years after the first Inception paper). In fact, Google & Uber fought a long legal battle over lidar. It seems weird that these two would fight over something supposedly made obsolete by tech developments 2 years beforehand.
There have been nearly a dozen significant new entrants to the AV space over the past 4 years. Every single one of them is using lidar. That alone should tell you how the experts in the space feel about Tesla's approach.
Now about that training data...
Stop It With the Fleet Training Already
Tesla fans incessantly talk about all the data Tesla gathers from its fleet. While I don't doubt that Tesla gets some valuable mapping data (though probably 100-1000x less than Google gets from Maps/Waze users), the visual data the fleet gathers is virtually worthless. Given its size and lack of utility, it's not even clear this data is being collected at all.
When Jimmy talks about the fleet data, he's aware that the raw data isn't that useful. In order to be used, the data must be labelled. Humans must curate the videos, labelling roads, cars, bikes, pedestrians, etc... It turns out that labelling is way more expensive than just collecting data.
Think about it. Anyone can put a few cameras on a car and drive thousands of miles. It's not expensive and it doesn't take that long. What's hard is having a human operator go through every second and frame of imagery and accurately label the video data. This is painstaking work, even with the best assistance software.
Tesla's fleet advantage is no advantage at all. You can easily collect road imagery for less than $1/mile. Getting that mile accurately labelled by human operators probably costs 100x that. So congrats to Tesla, they save 1% on training data costs. Of course, it's worse than that. If you pay to collect data, you control exactly where the car drives, under what conditions, with what sensors. Tesla's data is a random hodgepodge of wherever their customers happen to drive. Even before they curate, they have to pay someone to sort through this data and figure out which bits to use.
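To see how little the fleet actually saves, run the numbers. The $1/mile and 100x figures are the rough estimates above; the total mileage is arbitrary and cancels out of the percentage:

```python
# Collection is the cheap part; labelling dominates training-data cost.
# Per-mile figures are the rough estimates from the text; mileage is arbitrary.

miles = 1_000_000
collect_per_mile = 1.0    # paying a driver to gather imagery
label_per_mile = 100.0    # human labelling, ~100x collection

paid_collection = miles * (collect_per_mile + label_per_mile)
fleet_collection = miles * label_per_mile   # collection "free" via the fleet

savings = 1 - fleet_collection / paid_collection
print(f"fleet 'advantage': {savings:.1%} of training-data cost")  # ~1.0%
```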
My guess is that, between the cost of curating, the low quality of random drives, and the cost of uploading terabytes of data from end users, Tesla probably doesn't upload much video data at all. I'm guessing almost all of their real-life video training data comes from driving they've done themselves rather than from customer drives.
About That Partial Autonomy
It's true that Waymo decided to move directly to FSD, skipping partial autonomy. Jimmy, however, is misinformed about why Waymo made this decision.
Unlike Tesla, Waymo uses data and testing to make decisions. Five years ago, they let employees try out the tech for a few weeks. What Waymo found was extremely troubling. At first, users were nervous and didn't trust the car. However, that quickly reversed and they came to trust the car too much. It's very hard for humans to pay attention when not making decisions. Waymo found that users simply were not able to take over control of the car in a reliable fashion. When partial autonomy failed, it became dangerous because the human was not prepared to take over control.
Tesla has learned this the hard way. Rather than testing, they just released the software on their customers. After a series of fatal and near-fatal accidents involving AutoPilot, they've made various efforts to ensure the driver stays engaged. The cars lack proper driver-monitoring hardware, so those efforts are both annoying and ineffective.
This is why most of the energy in the space is directed towards FSD. It's widely believed now that partial autonomy is fool's gold. The repeated failures of AutoPilot have only underlined this. If the system is relying on a human for its ultimate safety, then it is not a safe system.
About Those Safety Stats
Jimmy mentions that AutoPilot is already saving lives. This is clear nonsense. He quotes Tesla's stats, which he admits are not great, but as a Tesla fan, he doesn't fully grasp how completely ridiculous the stats are.
Comparing safety data for new luxury cars with the whole US auto fleet is absurd. New cars are safer than old cars (the average American car is 11 years old), and luxury car buyers are older, more cautious drivers. This isn't a small difference: it's huge, and it has nothing to do with Tesla's safety versus similar cars. We can see how absurd Tesla's comparison is if we try to reconstruct similar stats for other luxury brands (BMW, Mercedes). Fortunately someone has already done this, so I don't have to do any actual work.
The data shows that Teslas are probably 2-3x more dangerous than other new luxury cars. Some of that is due to Teslas being (very) fast cars, but most is due to AutoPilot being dangerous, as many of the reported deaths can be attributed to AP. Modern luxury cars are very safe, so even the small number of known AP-related deaths is significant for a luxury car fleet.
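For what it's worth, the comparison behind that estimate is a simple rate calculation. Every input below is a placeholder to show the method, not a real fatality statistic; plug in actual fleet-mileage and fatality figures to reproduce the analysis:

```python
# Sketch of a like-for-like fleet comparison. All inputs are placeholders,
# NOT real statistics; the point is the method, not the numbers.

def deaths_per_billion_miles(deaths: int, fleet_miles: float) -> float:
    return deaths / (fleet_miles / 1e9)

tesla_rate = deaths_per_billion_miles(deaths=40, fleet_miles=10e9)    # placeholder
luxury_rate = deaths_per_billion_miles(deaths=15, fleet_miles=10e9)   # placeholder

print(f"Tesla vs. comparable luxury fleet: {tesla_rate / luxury_rate:.1f}x")
# With these placeholders: ~2.7x, i.e. in the 2-3x range cited above
```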
In short, Tesla isn't saving any lives. If anything, it's risking lives in ways that other, more sober companies have intentionally avoided. It's also giving self-driving a bad name. While other OEMs have been extremely careful, so as to avoid popular and regulatory backlash, Tesla has been pushing unsafe technology and overblown claims, confusing the public.
To Sum Up
- Jimmy is a nice guy and pretty knowledgeable about neural networks.
- Tesla fools guys like Jimmy with tech talk that obscures the real challenges.
- Neural nets have made great strides.
- Neural nets are not nearly accurate enough for safety decisions.
- Neural nets need error rates at least 100-1000x lower before they alone can be used.
- Every other player in the AV space uses lidar, regardless of when they started.
- Partial Autonomy (aka AutoPilot) has been rejected by most others in the space because it's not safe. Either the car can reliably drive itself or it cannot. Depending on a human for backup is not safe.
- The data from Tesla's fleet is not valuable. Labelled data is valuable. Random videos of driving are not.
- Tesla safety stats are very misleading. Best guess is Teslas are 2-3x less safe than other luxury cars.
- None of the Tesla vehicles on the road will ever have FSD.
Link: http://blog.quirkyllama.org/2018/11/why-teslas-fsd-approach-is-flawed.html
...for those who are interested and open-minded enough to read this post, you are welcome!