

    Re: Tesla Roadster

    [image]
    For the price of ceramic brakes it’s a good deal.
    After using just the advanced Autopilot, which is included in the standard price, I can’t even imagine long drives without it.
    To argue it is not safe when used as advertised is simply not true. It is like arguing that you can do math faster in your head than your calculator. Open your eyes, guys. It’s the 21st century.
    Of course full self-driving is much more difficult in a car than in a plane or a boat, which explains why those systems already exist. It will be introduced in stages, as we are witnessing. How long would it take if you did not take small steps and gather experience? This is what the AI is doing, along with millions of miles of drivers training the system. Only one guy has the balls to do it. I can’t see any point in not being amazed unless one is just - I don’t know how to put this - insecure.
    Will there be accidents? Of course. But it will be safer than human drivers by a lot. It already is. Look at the insurance stats.
    For now the current Tesla hardware and software are obviously not at level 5. But it is a state-of-the-art driver’s aid. Well worth every penny - which, if you don’t get the Navigate on Autopilot option, is included in the price. If you want to amuse yourself, ponder how much Porsche would have charged for my free advanced Autopilot as an option.

    I tend to dismiss arguments from companies who can’t get similar systems to work even a little.

    btw my little car drove up and down Pikes Peak on Autosteer. Never used the brakes on the way down and gained 20 miles of charge. The smell of brake linings wasting away all that energy was quite obvious around us.

     


    Re: Tesla Roadster

    This review is a must-watch.

    https://www.youtube.com/watch?v=SPEWaBYj4lA

     


    Re: Tesla Roadster

    RC:
    Tim:

    This mirrors my experience with these kinds of systems. They work well in 90% of use cases but fail in more complex scenarios. I don’t know whether that’s due to technical limitations or regulation, though.
    As a driver I find these driving aids in their current form really intrusive and would not spec them when ordering a new car. I don’t want my car to brake or steer on its own unless I specifically ask it to do so. To be honest, I actually feel less safe in cars that have these aids on by default, because I’m not really fully in control anymore. I can see the benefit of a fully autonomous car, but I think it will take another decade before I can enter my car drunk and tell it to take me home with basically zero risk of crashing.
     

    Another decade may not be enough. I had the chance to talk to an engineer (I won't say from which car company because I do not know if he was supposed to tell me this) about fully autonomous (level 5) vehicles, and he told me that right now the technology (sensors and cameras, but also software) isn't available yet. He also told me that his bosses often underestimate the complexity of these systems and the AI (including software) development necessary for this to work really well. He thinks it will take more than a decade to develop almost fail-free systems (he said that even the slightest error margin is unacceptable) because of their complexity, and that new hardware and new software algorithm approaches are necessary to achieve that. He wasn't optimistic at all about the development of a safe level 5 vehicle and said that what Elon Musk announced (level 5 in 2021) is impossible, unless he compromises safety to an unacceptable level. He also said that in his opinion Tesla is quite a step ahead of other companies in this domain, but the advantage is maybe three or four years, not more.

    I have no clue about this technology, but I think it is much more complex to build a fully autonomous car than, for example, a fully autonomous airplane. Are we there yet? I doubt it. I think Elon Musk is a little too optimistic and mistakes US driving conditions for European driving conditions (roads, speeds, etc.).

    I always have to think of this Volvo engineer who once explained in an interview that self-driving will only work (if at all) in an all-self-driven environment. His point was: if self-driving systems share the road with regular drivers, the regular drivers will gain by driving aggressively, because the self-driving software will always be programmed to avoid all dangerous situations. I find this "moral hazard" argument quite an interesting one.
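
    To put his point in toy-model form (my own sketch, with invented outcomes): if the self-driving policy always yields, driving aggressively becomes the dominant strategy for the human.

        # Toy model of the "moral hazard" argument: a contested merge between
        # an aggressive human and an AV programmed to avoid all danger.
        # Outcomes are hypothetical, purely for illustration.

        def merge_outcome(human_aggressive: bool, av_always_yields: bool) -> str:
            """Who gets the gap in a contested merge."""
            if human_aggressive and av_always_yields:
                return "human wins the gap (AV brakes)"   # aggression always pays
            if human_aggressive:
                return "collision risk (both contest)"    # deterrence, but dangerous
            return "AV proceeds normally"                 # no conflict

        print(merge_outcome(human_aggressive=True, av_always_yields=True))
        print(merge_outcome(human_aggressive=False, av_always_yields=True))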

    Also: how many people really desire a fully automated drive? OK, maybe in boring situations it would be nice to just switch it on. But only then. :)


    Re: Tesla Roadster

    Leawood911:

    [image]
    For the price of ceramic brakes it’s a good deal.
    After using just the advanced Autopilot, which is included in the standard price, I can’t even imagine long drives without it.
    To argue it is not safe when used as advertised is simply not true. It is like arguing that you can do math faster in your head than your calculator. Open your eyes, guys. It’s the 21st century.
    Of course full self-driving is much more difficult in a car than in a plane or a boat, which explains why those systems already exist. It will be introduced in stages, as we are witnessing. How long would it take if you did not take small steps and gather experience? This is what the AI is doing, along with millions of miles of drivers training the system. Only one guy has the balls to do it. I can’t see any point in not being amazed unless one is just - I don’t know how to put this - insecure.
    Will there be accidents? Of course. But it will be safer than human drivers by a lot. It already is. Look at the insurance stats.
    For now the current Tesla hardware and software are obviously not at level 5. But it is a state-of-the-art driver’s aid. Well worth every penny - which, if you don’t get the Navigate on Autopilot option, is included in the price. If you want to amuse yourself, ponder how much Porsche would have charged for my free advanced Autopilot as an option.

    I tend to dismiss arguments from companies who can’t get similar systems to work even a little.

    btw my little car drove up and down Pikes Peak on Autosteer. Never used the brakes on the way down and gained 20 miles of charge. The smell of brake linings wasting away all that energy was quite obvious around us.

     

    Cannot spot the Tesla in this picture. :)


    Re: Tesla Roadster

    [image]


    Re: Tesla Roadster

    Yes. For long boring drives.
    The engineers who claim it can’t be done crack me up.
    Human drivers will always be the weak link, that’s for certain. But the number of hazards the Tesla tracks and pays 100% attention to at the same time would take a busload of drivers to watch out for.
    This will be an excellent thread in a few short years. Hope we don’t lose any more data.
    btw I watched a YouTube video of futurists in 1900 predicting what the world would be like in 2000. That is truly a must-watch. They predicted we would live to 50, not just 35! I’ll try to post a link. Need to keep a hand on the wheel, you know.


    Re: Tesla Roadster

    MKSGR:
     

    I always have to think of this Volvo engineer who once explained in an interview that self-driving will only work (if at all) in an all-self-driven environment. His point was: if self-driving systems share the road with regular drivers, the regular drivers will gain by driving aggressively, because the self-driving software will always be programmed to avoid all dangerous situations. I find this "moral hazard" argument quite an interesting one.

    Also: how many people really desire a fully automated drive? OK, maybe in boring situations it would be nice to just switch it on. But only then. :)

     

    You are very correct. The system can only survive in its own little bubble.

    Human drivers will always be safer, as human brains and eyes can process much more information. Car sensors can only spot what they have been programmed to detect. They cannot assess a situation and predict what might happen.

    Take a neighbourhood street with kids playing ball on a driveway next to the sidewalk: a human driver will instinctively be more alert and slow down, because that ball might come onto the road with a kid running after it. A self-driving program won’t have that prediction in its algorithm; heck, its sensors won’t even pick up on the kids.

    Another example would be a four-way stop, or just a simple stop sign where cyclists cross. During a shoulder check, human eyes can pick up a cyclist barrelling down with no intention of stopping and adjust accordingly. A self-driving car will only try to stop once such a cyclist appears in its field of view, which could be too late.

    There are many, many more examples of why a computer can’t replace human experience.

     




    Re: Tesla Roadster

    “VW CEO Says No Deal Cooking After Meeting Elon Musk in Germany...”

    (7 September 2020)

    Volkswagen AG CEO Herbert Diess took to LinkedIn on Monday to clarify that there was no deal in the works after he met Tesla CEO Elon Musk last week in Germany.

    “We just drove the ID.3 and had a chat - there is no deal/cooperation in the making,” he wrote on the social network, posting a selfie with a smiling Musk. 

    The two auto executives met for about two hours on Thursday evening at a small airport in Braunschweig, where Musk test-drove VW’s ID.3 electric car. Diess has praised Musk’s achievements several times in the past.

    “Thanks for the visit, Elon!” Diess wrote in a separate comment on LinkedIn where he teased Musk for trying to drive the ID.3 like a sports car on the runway. “For this you should try our Porsche Taycan. Looking forward to our next meeting!”

    [Source: Bloomberg]


    Re: Tesla Roadster

    No treasure here, don't look under the tree.




    Re: Tesla Roadster

    I think it’s the interaction between machine and human where problems arise. Let’s take the following scenario. There are two cars in the right lane. The one in front is travelling at reduced speed, so the other one is closing the gap and likely to overtake at some point. You are travelling in the left lane at an even higher speed. If I’m driving on my own, I can anticipate that the driver on the right closest to me might pull out (without paying attention), so I’m either extra vigilant or will even reduce speed (if the speed delta is too high) to give me time to react. A car on “auto-pilot” will just ignore this situation and keep on travelling at the same speed. Even worse, if the car pulls out it will deactivate and tell me to take over and initiate the emergency braking. I had this exact scenario in multiple cars with the latest driving aids (albeit no Tesla); that’s the reason I believe they are not really ready for prime time yet. Is the Tesla system smarter?
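
    The anticipation I mean is simple to state explicitly. Here is a minimal sketch (all speeds, gaps and thresholds are invented) that flags a likely pull-out from the closing speed between the two cars in the right lane:

        # Minimal sketch of the scenario: the rear car in the right lane is
        # closing on the front one, so it may pull out into our (left) lane.
        # All numbers are hypothetical; a real system would use tracked objects.

        def pull_out_risk(gap_m: float, v_front_kmh: float, v_rear_kmh: float,
                          horizon_s: float = 5.0) -> bool:
            """True if the rear car closes the gap within the horizon, i.e. it
            will soon have to brake or pull out in front of us."""
            closing_ms = (v_rear_kmh - v_front_kmh) / 3.6   # km/h -> m/s
            if closing_ms <= 0:
                return False                                # not closing at all
            return gap_m / closing_ms < horizon_s

        # Rear car doing 120 km/h, 40 m behind a truck doing 90 km/h:
        if pull_out_risk(gap_m=40, v_front_kmh=90, v_rear_kmh=120):
            print("anticipate pull-out: increase vigilance / reduce speed")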

    Of course all this could be solved in a fully-autonomous world - but I doubt we will get there quickly (or at all). Plus I would hate to give up this freedom. 


    Re: Tesla Roadster

    Forget about it. That is a simple problem. Drive a Tesla. It keeps track of ten times the interactions you describe. And it never rests and is always on the lookout. Even with Autopilot deactivated it would jump in and react if there was a vehicle in your path. It uses the concept of drivable space. For actual information on how they solve problems like the one above, check out their last investor video, where they describe the hardware, the software and how the AI learns based on all the miles the fleet has driven. It is remarkable, and as a software developer I get it. To think a human can track as many possible outcomes and decision points is just silly. It will only keep getting better.
    I totally get that the automakers who can’t do this - yet - as well as Tesla claim it is difficult. It is. But to think everyone needs to be in an autonomous car is silly. There are still pedestrians and bikes as well as weather and tons of natural hazards. The car will still be millions of calculations ahead of a person in all these scenarios.
    Drove 10 hours back on I-70 yesterday. At night the wind from the north made staying on the road quite difficult for everyone around me. Without Autopilot it would have been a long night keeping the car going in its lane. Instead I just turned the system on and it allowed me to pay extra attention to those other hazards. The car fought the wind and turns effortlessly. Most impressive at 85 mph. The car is especially useful in fog, when a human would not even see a car. The Tesla even sees when a car ahead in traffic is stopped long before a human would be able to see it with the other car in the way.
    Expect close to a tenfold increase in safety according to the insurance institute. That is due to active safety, which is always on standby. It also makes you a better driver: if you don’t use your turn signal all the time, you will soon get tired of the system reminding you - love that.
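
    To illustrate what drivable space means (my own toy sketch, not Tesla’s actual implementation): think of an occupancy grid where the sensors mark cells as free or blocked, and the planner only considers paths through free cells.

        # Toy "drivable space" illustration: an occupancy grid where sensors
        # mark cells as free or blocked. Grid size and obstacle are invented.

        free = {(x, y) for x in range(5) for y in range(3)}   # all cells free
        free -= {(3, 1)}                                      # sensor reports an obstacle

        def path_is_clear(path):
            """A path is drivable only if every cell along it is free."""
            return all(cell in free for cell in path)

        print(path_is_clear([(0, 1), (1, 1), (2, 1), (3, 1)]))  # False: blocked
        print(path_is_clear([(0, 0), (1, 0), (2, 0), (3, 0)]))  # True: clear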


    Re: Tesla Roadster

    Leawood911:

    Forget about it. That is a simple problem. Drive a Tesla. It keeps track of ten times the interactions you describe. And it never rests and is always on the lookout. Even with Autopilot deactivated it would jump in and react if there was a vehicle in your path. It uses the concept of drivable space. For actual information on how they solve problems like the one above, check out their last investor video, where they describe the hardware, the software and how the AI learns based on all the miles the fleet has driven. It is remarkable, and as a software developer I get it. To think a human can track as many possible outcomes and decision points is just silly. It will only keep getting better.
    I totally get that the automakers who can’t do this - yet - as well as Tesla claim it is difficult. It is. But to think everyone needs to be in an autonomous car is silly. There are still pedestrians and bikes as well as weather and tons of natural hazards. The car will still be millions of calculations ahead of a person in all these scenarios.
    Drove 10 hours back on I-70 yesterday. At night the wind from the north made staying on the road quite difficult for everyone around me. Without Autopilot it would have been a long night keeping the car going in its lane. Instead I just turned the system on and it allowed me to pay extra attention to those other hazards. The car fought the wind and turns effortlessly. Most impressive at 85 mph. The car is especially useful in fog, when a human would not even see a car. The Tesla even sees when a car ahead in traffic is stopped long before a human would be able to see it with the other car in the way.
    Expect close to a tenfold increase in safety according to the insurance institute. That is due to active safety, which is always on standby. It also makes you a better driver: if you don’t use your turn signal all the time, you will soon get tired of the system reminding you - love that.

    With all due respect, a Tesla on “autopilot” crashing into a stationary firetruck is not a strong endorsement... :)

    [image]

    ...surely you are a better driver than that? :)

    Link: https://www.latimes.com/business/story/2019-09-03/tesla-was-on-autopilot-when-it-hit-culver-city-fire-truck-ntsb-finds

    PS: A wise man once said: “If you can acknowledge to yourself that Tesla is not perfect, your myopia will soon heal...” :)


    Re: Tesla Roadster

    Tim:

    I think it’s the interaction between machine and human where problems arise. Let’s take the following scenario. There are two cars in the right lane. The one in front is travelling at reduced speed, so the other one is closing the gap and likely to overtake at some point. You are travelling in the left lane at an even higher speed. If I’m driving on my own, I can anticipate that the driver on the right closest to me might pull out (without paying attention), so I’m either extra vigilant or will even reduce speed (if the speed delta is too high) to give me time to react. A car on “auto-pilot” will just ignore this situation and keep on travelling at the same speed. Even worse, if the car pulls out it will deactivate and tell me to take over and initiate the emergency braking. I had this exact scenario in multiple cars with the latest driving aids (albeit no Tesla); that’s the reason I believe they are not really ready for prime time yet. Is the Tesla system smarter?

    Of course all this could be solved in a fully-autonomous world - but I doubt we will get there quickly (or at all). Plus I would hate to give up this freedom. 

    No matter how many things the sensors track, the autopilot can only "react" to things that have already happened. The human mind, however, can "predict" things before they happen, such as in the scenarios that you and Nick have described. We can drive defensively in situations where we can predict that something may happen; the car can only react once it happens and hope for the best. It is, frankly put, stupid.

    In a world where all vehicles are in self-driving mode, that is not that big a problem (save for cities with pedestrians and such), but mixing self-driving and human drivers, not so much. Then add the fact that drivers with half-baked autopilot systems engaged don't pay attention to the road anymore, as they relegate responsibility to the system through progressively acquired confidence and eventual distraction, and it's even worse.

    Everybody says they still pay attention while on autopilot, but everybody knows that is not true; that is not how the brain works. You will drop your concentration sooner or later if your brain is not challenged by the task. So half-working systems are the worst. It's better to have a simpler, more limited driver-aid system that still requires the driver to be in control, or a fully autonomous self-driving car that doesn't require the driver at all.


    --

    ⇒ Carlos - Porsche 991 Carrera GTS


    Re: Tesla Roadster

    You probably know by now that I love new tech and I am also a firm believer that technology will solve many, if not all, of our current problems with the environment and other issues. Maybe not too soon, but sometime in the future.

    That said, I do not think that level 5 autonomous driving is even close. Not because I'm an expert in this tech (I'm certainly not), not because I've driven such a car and maybe experienced the deficiencies first hand, not because I don't want this technology to work because I am one of these self-driving dinosaur morons, not because I think humans can't do it. No, I do not think we are even close to level 5 autonomous driving, simply because, from my tech brain and tech experience point of view, neither the technology nor the software solutions are available yet or will be available in the near (next 10 years) future.

    Take smartphones for example: Amazing tech. As a kid, I wouldn't have dreamt of the performance of my current iPhone 11 Pro Max. Or the camera quality (videos and photos). Or the battery life.

    So here's the question: are smartphones 100% reliable? 99.9% reliable? Do they crash or not? Do they function without any issues? This technology is around 20 years old (the SE R380 was the first... I think). It still doesn't work perfectly and/or without any issues. It gets better and better (and at the same time more complex as well), but we are still far away from a device which works without any issues.

    Can we allow a level 5 autonomous car to have issues? Even small ones? We are talking about a two-ton driving gadget here, not a simple smartphone which cannot do much harm, unless it catches fire or falls from a tall house onto someone's head. :)

    Take modern car entertainment systems for example: are they flawless? I mean, these systems use the latest tech and are often an integral part of the car's development, not some sort of third-party add-on. No issues with these systems? I doubt it.

    Sorry, but I do not share your optimism. Technology is great, I love it, but we need to stay realistic here: I'd love to have teleportation in a decade or two, but with the current tech level and development, I think teleportation is at least 100 years away, if not longer. That is of course not a technology we can currently even comprehend or theorize about in a useful manner, but I see a similar problem with fully autonomous driving. We just don't have the required hardware and software capabilities yet, and any person who works with hardware/software understands this. You don't have to be an expert in automotive technology to realize it.

    In ten years? Maybe. I still doubt it. This technology needs to be fail-proof; it needs to reach a safety level we have never achieved in any technology yet. We are talking about human lives here: just imagine a two-ton vehicle running at full speed into a house, a group of people or a fuel station. This technology also needs to be tamper-proof, hacker-proof, whatever. Just imagine specialized hackers using level 5 cars as weapons.

    Yes, in a total autonomous driving bubble, this tech could work even now or soon. Do you want such a "bubble"? I don't. Many other people don't want this.

    Self-driving cars are not the problem nowadays, safety is. Such cars cannot be "unsafe"; the error margin needs to be minimal, and I just don't see that with the current technology, even with redundant systems available.


    --

     

    RC (Germany) - Rennteam Editor Lamborghini Huracan Performante (2019), Mercedes GLC63 S AMG (2020), Mercedes C63 S AMG Cab (2019), Range Rover Evoque Si4 Black Edition (2019)

     


    Re: Tesla Roadster

    Guys, I hear you. The fact remains that a car with these safety aids is much safer than one without them when used properly. All cars can be driven improperly. Humans will always be the weak link. And yes, there are accidents in Teslas; ten times safer clearly is not 100% safe. So what is the point of posting an article about a Tesla accident? Bottom line is I would still rather have all these driver aids than not. Period. It is a silly argument to focus on level 5 and not see the benefits of getting there in small steps.
    Why not argue against anti-lock brakes? Or better yet, look at all the currently accepted safety aids and make the same argument.

    If I had my choice of safe cars for my kids to drive, it would be a Tesla. Do I feel safer on the road around a Tesla or a human-steered ICE battering ram? I know the Tesla will at least try to prevent that driver from running into me.
     

    This is a perfect example of perfect being the enemy of good. Add in some random fears and plain old insecurity, and the road to understanding is blocked. Oh well. Keep waiting for level 5 and watch everyone around you crash at an unnecessary rate. Or educate yourself on how the current system actually works and benefits people. It needs to be used properly. Period.


    Re: Tesla Roadster


    Why not argue against anti-lock brakes? Or better yet, look at all the currently accepted safety aids and make the same argument.

    Unfortunately, anti-lock brakes are not the safety panacea anticipated when they were introduced thirty years ago, because driver training in their effective use is lacking. Too often drivers interpret the ABS pulsation as brake system failure and lift off the pedal just when maximum braking effectiveness is required. This can be rectified through driver training, something that is sorely lacking in the States.

    Continuing on the subject of autonomous systems and advanced driver's aids, the research literature is replete with the experiences of automation in aviation and its failures. The man-in-the-loop problem is a classic example of failures in the man-machine interface that caused numerous aircraft catastrophes, including Air France 447 crashing into the depths of the South Atlantic Ocean about a decade ago. This doesn't even begin to address the shortcomings in test and verification of the deep machine learning systems employed by a few automakers. The fact remains that without proper instruction and training in advanced driver's aids, these technologies offer little to no perceptible benefit to the driver.


    Re: Tesla Roadster

    This article is well worth reading to help understand the FSD technology, the human element of self-driving, and an interesting comparison between the Tesla and Waymo approaches...

    “Why Tesla's FSD Approach is Flawed” - Thoughts from a Quirky Llama

    The only folks more prone to rants than myself are Tesla fans talking about neural nets. So I thought, why not combine the power of the rants? I'll rant about Tesla fans ranting about the power of neural nets...

    To get myself good and wound up, I listened to a Tesla Daily Podcast about Full Self Driving (FSD) and Neural Nets. 

    "Jimmy D" is the guest and he talks with some authority about neural nets. He doesn't get anything terribly wrong on the basic tech. Everything he says about how Tesla plans to exploit it seemed reasonable to me. If you are a Tesla skeptic, it's a good way to understand Tesla bull thinking.

    Of course, like everyone else who's familiar with the AV space, I think Jimmy is 100% wrong about Tesla's prospects. He's a typical Tesla fan - smart, technically savvy, and without any detailed domain knowledge. He's drawn in by the surface logic of Tesla's approach, but isn't expert enough to know that it's flawed in critical ways.

    In the Podcast, Jimmy gets four things fundamentally wrong:

    1. Neural Nets aren't good enough. 99% accuracy simply isn't good enough for AV.
    2. Lidar isn't used just because Waymo started long ago.
    3. Tesla "training data" is virtually worthless.
    4. Partial autonomy isn't safer.
    5. Teslas aren't safer.
    6. Llamas don't know how to count.

     Before we talk about why Tesla's approach is flawed, let's make clear that I'm

    Bullish on Autonomy

    Don't take anything I say here to mean I don't believe in autonomous vehicles. I do. They are coming, and when they are fully mature, they will upend society as much as the Model T did.

    I also happen to think Waymo is way ahead of the competition - Cruise/Uber/Lyft/etc... Perhaps by 2-3 years or more.  I might be a Google partisan as a former employee, but I think if you look at what people have demonstrated, Waymo is way ahead. As we'll talk about below, the hardest part is getting high reliability. 

    Virtually anyone can throw together a demo which mostly works. But those demos require a human behind the wheel in case of failure. Getting rid of the human is the hardest part. Really it's the only hard part. Only Waymo has demonstrated this ability in real-world conditions.

    In short- I believe in AVs, I think they are coming perhaps sooner than people expect, and I think Tesla's approach is flawed and none of the Teslas on the road will ever be capable of FSD.

    So why do most experts think Tesla's approach won't work?

    Neural Nets Aren't Good Enough

    If any fanboys read this, let me try to defuse your anger by saying neural nets are Great! I work with them all the time. The technology has made AMAZING strides. We can do things now that seemed like science fiction 6 years ago. Image recognition in particular has undergone a revolution.

    NN's are incredibly useful to autonomous vehicles. NN's have grown much more accurate in recent years. However, they are not nearly good enough for FSD. This is the core problem that Jimmy glosses over. It's why AV experts think Tesla is a joke.

    It's possible that NN's are incredible and great for a ton of applications, but also not nearly good enough for driving safety-related decisions. Software that's right 99% of the time isn't good enough when deciding if a biker is in your lane.

    NN's and Image Recognition

    Advances in Deep Learning have made various image processing and recognition tasks possible.  Smartphones now do extensive computational photography using neural nets. Nest cams use neural nets to identify people, as do Facebook and other social networks.

    These tools generally work well. However, they don't work 100% of the time. In fact, they are not that close to 100%. If Portrait mode fails once every fifty photos, who cares? If Facebook suggests the wrong person in a photo, does it really matter? These tools are commercially viable because they mostly work, and 98% accuracy is far more than needed for most use cases. 

    98% isn't nearly enough for autonomous vehicles. 

    In fact, 99.9% isn't good enough. I'd hazard that AV safety engineers probably want 5 or more 9's of reliability- 99.999%. That might seem excessive, but even that allows for a 1:100,000 chance of misidentifying a pedestrian in your path. Given what we know about existing solutions and the difficulty of the problem, it's unlikely Tesla's NNs even reach 99.5% accuracy for many safety-critical classification tasks. 99.5% would be a major achievement. To use a phrase familiar to Tesla fans, Tesla is orders of magnitude away from a viable FSD solution. They need their system to be at least 100x more accurate.
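
    To make the orders-of-magnitude point concrete, here's the back-of-envelope arithmetic in code (the decision rate is my assumption, purely for illustration):

        # Per-decision accuracy compounds over many decisions.
        # The decision rate below is an invented, illustrative number.

        decisions_per_mile = 100   # hypothetical safety-critical classifications
        miles = 1_000

        for accuracy in (0.98, 0.999, 0.99999):
            expected_errors = (1 - accuracy) * decisions_per_mile * miles
            print(f"{accuracy:.5f} accuracy -> ~{expected_errors:,.0f} errors "
                  f"over {miles:,} miles")

        # Even five 9's still leaves ~1 expected error per 1,000 miles at this
        # decision rate; 98% leaves ~2,000.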

    This is why every other company pursuing AVs is using lidar. Lidar is extremely reliable and accurate. If your lidar sensor says there's nothing in your path, then there's nothing in your path, especially when you have two independent sensors looking in the direction of travel.  That's what's needed to get to 99.999% reliability. For all the talk about NN advances, the fact of the matter is that error rates for critical decisions are still way too high.  

    No one in the field has any idea how to lower those error rates another 10x, let alone 100x. It's going to take a major breakthrough (perhaps more than one) to get vision systems reliable enough to depend on for driving. 

    So next time someone starts talking about using neural nets for FSD, ask if they think those systems can get to 99.999% accuracy. Ask if anyone anywhere has ever demonstrated a vision system this accurate on a real-world task.

    Lidar Isn't Ancient

    One way Tesla fans explain away the ubiquity of lidar in AV, but its absence in Teslas, is by saying that lidar is old. It was needed before the Deep Learning revolution. Waymo's efforts started before DL, so they built up an architecture around lidar. 

    This is sort of true. The original DARPA program was before the recent revolution in neural nets. And Google's program started before this as well. However, many other programs were started well after Google. Cruise started in 2013. Anthony Levandowski formed Otto in 2016 (2 years after the first Inception paper). In fact, Google & Uber fought a long legal battle over lidar. Seems weird that these two would fight over something supposedly made obsolete by tech developments 2 years beforehand. 

    There have been nearly a dozen significant new entrants to the AV space over the past 4 years.  Every single one of them is using lidar. That alone should tell you how the experts in the space feel about Tesla's approach. 

    Now about that training data...

    Stop It With the Fleet Training Already

    Tesla fans incessantly talk about all the data Tesla gathers from its fleet. While I don't doubt that Tesla gets some valuable mapping data (though probably 100-1000x less data than Google gets from Maps/Waze users), the visual data that the fleet gathers is virtually worthless. Given its size and lack of utility, it's not even clear this data is being collected at all. 

    When Jimmy talks about the fleet data, he's aware that the raw data isn't that useful. In order to be used, the data must be labelled. Humans must curate the videos, labelling roads, cars, bikes, pedestrians, etc... It turns out that labelling is way more expensive than just collecting data.  

    Think about it. Anyone can put a few cameras on a car and drive thousands of miles. It's not expensive and it doesn't take that long. What's hard is having a human operator go through every second and frame of imagery and accurately labelling the video data. This is painstaking work, even with the best assistance software.

    Tesla's fleet advantage is no advantage at all. You can easily collect road imagery for less than $1/mile. Getting that mile accurately labelled by human operators probably costs 100x that. So congrats to Tesla, they save 1% on training data costs. Of course, it's worse than that. If you pay to collect data, you control exactly where the car drives, under what conditions, with what sensors. Tesla's data is a random hodgepodge of wherever their customers happen to drive. Even before they curate, they have to pay someone to sort through this data and figure out which bits to use. 
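
    Here are those numbers worked through (both figures are my rough estimates from above, not measured data):

        # The fleet "advantage" in numbers. Both figures are rough estimates
        # from the text above, not measured data.

        collection_cost_per_mile = 1.00    # ~$1/mile to collect imagery
        labelling_multiplier = 100         # labelling ~100x collection cost

        total_per_mile = collection_cost_per_mile * (1 + labelling_multiplier)
        saving = collection_cost_per_mile / total_per_mile

        print(f"free collection saves ~{saving:.1%} of per-mile data cost")
        # -> free collection saves ~1.0% of per-mile data cost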

    My guess is, between the cost of curating, the low quality of random drives, and the costs of uploading terabytes of data from end users, Tesla probably doesn't even upload much video data at all. I'm guessing almost all their real-life video training data is based on explicit driving they've done rather than customer drives.

    About That Partial Autonomy

    It's true that Waymo decided to move directly to FSD, skipping partial autonomy. Jimmy, however, is misinformed about why Waymo made this decision.

    Unlike Tesla, Waymo uses data and testing to make decisions. Five years ago, they let employees try out the tech for a few weeks. What Waymo found was extremely troubling. At first, users were nervous and didn't trust the car.  However, that quickly reversed and they came to trust the car too much. It's very hard for humans to pay attention when not making decisions. Waymo found that users simply were not able to take over control of the car in a reliable fashion. When partial autonomy failed, it became dangerous because the human was not prepared to take over control.

    Tesla has learned this the hard way. Rather than testing, they just released the software on their customers. After a series of fatal and near-fatal accidents involving AutoPilot, they've made various efforts to ensure the driver stays engaged.  The cars are not properly equipped to do this, so it's both annoying and ineffective.

    This is why most of the energy in the space is directed towards FSD. It's widely believed now that partial autonomy is fool's gold. The repeated failures of AutoPilot have only underlined this. If the system is relying on a human for its ultimate safety, then it is not a safe system.

    About Those Safety Stats

    Jimmy mentions that AutoPilot is already saving lives. This is clear nonsense. He quotes Tesla's stats, which he admits are not great, but as a Tesla fan, he doesn't fully grasp how completely ridiculous the stats are.

    Comparing safety data for new luxury cars with the whole US auto fleet is absurd. New cars are safer than old cars (the average American car is 11 years old), and luxury car buyers are older, more cautious drivers. This isn't a small difference- it's huge, and it has nothing to do with Tesla's safety versus similar cars. We can see how absurd Tesla's comparison is if we try to reconstruct similar stats for other luxury brands (BMW, Mercedes). Fortunately someone has already done this, so I don't have to do any actual work.
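
    The base-rate problem is easy to see with a toy reconstruction (all numbers below are made up, purely to show the mechanism):

        # Why the fleet-wide comparison misleads: normalize within the right
        # cohort. ALL NUMBERS ARE MADE UP, purely to show the mechanism.

        cohorts = {
            # cohort: (crashes, miles driven)
            "whole US fleet (avg age ~11 yrs)": (500, 1_000_000),
            "new luxury cars":                  (150, 1_000_000),
            "Tesla (hypothetical)":             (300, 1_000_000),
        }

        for name, (crashes, miles) in cohorts.items():
            print(f"{name}: {crashes / miles * 1e6:.0f} crashes per million miles")

        # Against the whole fleet the hypothetical Tesla looks safer (300 < 500);
        # against the right cohort, new luxury cars, it looks 2x worse (300 vs 150).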

    The data shows that Teslas are probably 2-3x more dangerous than other new luxury cars. Some of that is due to Teslas being (very) fast cars, but most is due to AutoPilot being dangerous, as many of the reported deaths can be attributed to AP. Modern luxury cars are very safe, so even the small number of known AP-related deaths is a significant number of deaths for a luxury car fleet.

    In short, Tesla isn't saving any lives. If anything, it's risking lives in ways that other, more sober companies have intentionally avoided. It's also giving self-driving a bad name. While other OEMs have been extremely careful, so as to avoid popular and regulatory backlash, Tesla has been pushing unsafe technology and overblown claims, confusing the public.

    To Sum Up

    1. Jimmy is a nice guy and pretty knowledgeable about neural networks.
    2. Tesla fools guys like Jimmy with tech talk that obscures the real challenges.
    3. Neural nets have made great strides.
    4. Neural nets are not nearly accurate enough for safety decisions.
    5. Neural nets need at least 100-1000x more accuracy before they alone can be used.
    6. Every other player in the AV space uses lidar, regardless of when they started.
    7. Partial Autonomy (aka AutoPilot) has been rejected by most others in the space because it's not safe. Either the car can reliably drive itself or it cannot. Depending on a human for backup is not safe.
    8. The data from Tesla's fleet is not valuable. Labelled data is valuable. Random videos of driving are not.
    9. Tesla safety stats are very misleading. Best guess is Teslas are 2-3x less safe than other luxury cars.
    10. None of the Tesla vehicles on the road will ever have FSD.

    Link: http://blog.quirkyllama.org/2018/11/why-teslas-fsd-approach-is-flawed.html

    ...for those who are interested and open-minded enough to read this post, you are welcome! :)


    Re: Tesla Roadster

    Boxster Coupe GTS:

    This article is well worth reading to help understand the FSD technology, the human element of self-driving, and an interesting comparison between the Tesla and Waymo approaches...

    “Why Tesla's FSD Approach is Flawed” - Thoughts from a Quirky Llama

    [...]

    Link: http://blog.quirkyllama.org/2018/11/why-teslas-fsd-approach-is-flawed.html

    Excellent summary of all the issues, debunking all the Tesla Kool-Aid and propaganda... we have mentioned a few here: the system is not accurate enough for FSD, the hardware is not ready either (no lidar), Teslas are not safer, and partial self-driving is dangerous because humans stop concentrating on the road, etc. But hey, what does everybody know: Musk said next year he will have FSD and robotaxis. Seems legit.


    --

    ⇒ Carlos - Porsche 991 Carrera GTS


    Re: Tesla Roadster

    Can Autopilot do what a human does when driving a car regularly? Say, see a pothole or manhole cover and drive around the bump? Or time traffic light intervals to avoid stopping unnecessarily?

     




    Re: Tesla Roadster

    Again - confusion seems to exist between what a driving aid is vs. level 5 autonomy.

    I will trust my experience over these well-intentioned articles, which focus on total autonomy. Ask why insurers rate the Tesla so much safer. It is silly to argue it is not; to my mind, that throws the whole legitimacy of the author out the window.
     

     


    Re: Tesla Roadster

    Whoopsy:

    Can Autopilot do what a human does when driving a car regularly? Say, see a pothole or manhole cover and drive around the bump? Or time traffic light intervals to avoid stopping unnecessarily?

     

    Again: it is a very useful driving aid when used properly. It can see stopped cars which you can’t see. It augments what you can do. There are a thousand reasons why it helps and is safer. There are also a thousand reasons why you should always pay attention and use the system properly.
    I have never caused an accident in my life, and I cannot get over how much safer I am with the addition of the Tesla safety nets. Autopilot is essentially always ready to wake up and help if it sees the driver making a mistake. People don’t even realize this. All the reading in the world about Tesla will not make up for not trying the car for a few days.

    You guys are not interested in the obvious limitations or benefits. The only focus is on edge cases or the features it lacks to be level 5. I don’t know how you all even get out of bed and make it through the day. Do you realize that no product is 100% safe or foolproof? The trade-off for me is without question a bargain.


    Re: Tesla Roadster

    Whoopsy:

    Can Autopilot do what a human does when driving a car regularly? Say, see a pothole or manhole cover and drive around the bump? Or time traffic light intervals to avoid stopping unnecessarily?

     

    If they managed to build a rocket that takes off and safely returns by itself, avoiding a pothole sounds like child’s play IMHO. It is just a matter of time.


    --

    2016 Porsche 981 GT4 | Racing Yellow
    2018 Audi S6 Avant | Ibis White


    Re: Tesla Roadster

    Leawood911:

    Again - confusion seems to exist between what a driving aid is vs. level 5 autonomy.

    I will trust my experience over these well-intentioned articles, which focus on total autonomy. Ask why insurers rate the Tesla so much safer. It is silly to argue it is not; to my mind, that throws the whole legitimacy of the author out the window.

    Leawood, we know you like the technology, so I sincerely encourage you to read the article again and try to understand the Waymo experience...

    Waymo uses data and testing to make decisions. Five years ago, they let employees try out the tech for a few weeks. What Waymo found was extremely troubling. At first, users were nervous and didn't trust the car.  However, that quickly reversed and they came to trust the car too much. It's very hard for humans to pay attention when not making decisions. Waymo found that users simply were not able to take over control of the car in a reliable fashion. When partial autonomy failed, it became dangerous because the human was not prepared to take over control...

    ...you may be making the same mistake as those Waymo employees using partial autonomy — you trust your car too much! :)


    Re: Tesla Roadster

    bluelines:
    Whoopsy:

    Can Autopilot do what a human does when driving a car regularly? Say, see a pothole or manhole cover and drive around the bump? Or time traffic light intervals to avoid stopping unnecessarily?

     

    If they managed to build a rocket that takes off and safely returns by itself, avoiding a pothole sounds like child’s play IMHO. It is just a matter of time.

    Actually, I think the pothole thing is more complicated from a programming/AI point of view. :) The rocket always starts from and always lands at a predefined location. Potholes differ in size and depth, and they appear suddenly, out of the blue (for example after freezing winters); there is no predefined size or location. Very tricky.

    This is why I mentioned autonomous airplanes. It would be much "easier" to build autonomous planes than autonomous cars, in my opinion. With cars there are way too many factors and probabilities to consider; even humans have issues with that.


    --

     

    RC (Germany) - Rennteam Editor Lamborghini Huracan Performante (2019), Mercedes GLC63 S AMG (2020), Mercedes C63 S AMG Cab (2019), Range Rover Evoque Si4 Black Edition (2019)

     


    Re: Tesla Roadster

    RC:
    bluelines:
    Whoopsy:

    Can Autopilot do what a human does when driving a car regularly? Say, see a pothole or manhole cover and drive around the bump? Or time traffic light intervals to avoid stopping unnecessarily?

     

    If they managed to build a rocket that takes off and safely returns by itself, avoiding a pothole sounds like child’s play IMHO. It is just a matter of time.

    Actually, I think the pothole thing is more complicated from a programming/AI point of view. Smiley The rocket always starts from and always lands at a predefined location. Potholes are different in size, depth and they appear suddenly out of the blue (for example after freezing winters), there is no predefined size or location. Very tricky.

    This is why I mentioned autonomous airplanes. It would be much "easier" to build autonomous planes than autonomous cars in my opinion. With cars, there are way too many factors and probabilities to consider; even humans have issues with that. 


    --

     

    RC (Germany) - Rennteam Editor Lamborghini Huracan Performante (2019), Mercedes GLC63 S AMG (2020), Mercedes C63 S AMG Cab (2019), Range Rover Evoque Si4 Black Edition (2019)

     

    There are no potholes at 30,000 ft in the sky Smiley


    --

     

     


    Re: Tesla Roadster

    bluelines:
    Whoopsy:

    Can autopilot do what human does when driving a car regularly? Say sees a pot hole or manhole cover and drive around the bump? Or time traffic lights intervals to avoid stopping unnecessarily? 

     

    If they managed to build a rocket that takes off and safely returns by itself, it does sound like a child’s play to avoid a pot hole IMHO. It is just a matter of time. 

    It’s SpaceX, not Tesla, that built the rockets, btw Smiley

    Recognizing potholes is pretty much impossible from a 2D camera image, even with help from radar, which is what Tesla uses in its system. The cameras just don’t have the resolution and field of view to make that possible. 
     

    A lidar system, however, can do that. Lasers can see holes in the road surface, and the car can adjust course. Higher tech, higher resolution imaging-wise.
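    Since the thread keeps coming back to pothole detection, here is a minimal Python sketch of the lidar idea (my own illustration under stated assumptions, not Tesla’s or any vendor’s actual pipeline): fit a plane to the road-surface returns and flag points that sit noticeably below it. The 5 cm threshold and the synthetic point cloud are assumptions for demonstration only.

    # Illustrative only: naive pothole detection from a lidar point cloud.
    # Assumption: points are (x, y, z) road-surface returns in metres.
    import numpy as np

    def fit_road_plane(points):
        """Least-squares plane z = a*x + b*y + c through the points."""
        A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
        coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
        return coeffs  # (a, b, c)

    def pothole_mask(points, depth_threshold=0.05):
        """Flag points more than depth_threshold metres below the fitted plane."""
        a, b, c = fit_road_plane(points)
        expected_z = a * points[:, 0] + b * points[:, 1] + c
        return points[:, 2] < expected_z - depth_threshold

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Synthetic flat road with a 10 cm deep dip centred at (5, 0).
        xy = rng.uniform([-2.0, -2.0], [10.0, 2.0], size=(2000, 2))
        z = rng.normal(0.0, 0.005, size=2000)
        z[np.linalg.norm(xy - [5.0, 0.0], axis=1) < 0.3] -= 0.10
        cloud = np.column_stack([xy, z])
        print(pothole_mask(cloud).sum(), "points flagged as pothole")

    A camera-only stack would first have to recover that depth information from 2D images, which is exactly the hard part the post above is pointing at.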

     


    --

     

     


    Re: Tesla Roadster

    Leawood911:

    Forget about it. That is a simple problem. Drive a Tesla. It keeps track of ten times the interactions you describe. And it never rests and is always on the lookout. Even with autopilot deactivated it would jump in and react if there was a vehicle in your path. It uses the concept of drivable space. For actual information on how they solve problems like the one above, check out their last investor video, where they describe the hardware, the software, and how the AI learns from all the miles the fleet has driven. It is remarkable, and as a software developer I get it. To think a human can track as many possible outcomes and decision points is just silly. It will only keep getting better. 
    I totally get that the automakers who can’t do this as well as Tesla - yet - claim it is difficult. It is. But to think everyone needs to be in an autonomous car is silly. There are still pedestrians and bikes, as well as weather and tons of natural hazards. The car will still be millions of calculations ahead of a person in all these scenarios. 
    Drove 10 hours back on I-70 yesterday. At night the wind from the north made staying on the road quite difficult for everyone around me. Without autopilot it would have been a long night keeping the car in its lane. Instead I just turned the system on, and it allowed me to pay extra attention to those other hazards. The car fought the wind and the turns effortlessly. Most impressive at 85 mph. The car is especially useful in fog, when a human would not even see a car ahead. The Tesla even sees that a car ahead in traffic has stopped long before a human could see it, with another car in the way. 
    Expect close to a tenfold increase in safety, according to the insurance institute. That is due to active safety, which is always on standby. It also makes you a better driver: if you don’t use your turn signal all the time, you will soon get tired of the system reminding you - love that. 

    Out of curiosity - how does a Tesla with the latest software react in the scenario mentioned above? This is a pretty common scenario, right?


     


    Re: Tesla Roadster

    BTW, seeing those rocket boosters land back on Earth is the coolest thing I’ve seen in recent years. It looked like a scene straight out of a sci-fi movie.


    Re: Tesla Roadster

    Tim:
    Leawood911:

    Forget about it. That is a simple problem. Drive a Tesla. It keeps track of ten times the interactions you describe. And it never rests and is always on the lookout. Even with autopilot deactivated it would jump in and react if there was a vehicle in your path. It uses the concept of drivable space. For actual information on how they solve problems like the one above, check out their last investor video, where they describe the hardware, the software, and how the AI learns from all the miles the fleet has driven. It is remarkable, and as a software developer I get it. To think a human can track as many possible outcomes and decision points is just silly. It will only keep getting better. 
    I totally get that the automakers who can’t do this as well as Tesla - yet - claim it is difficult. It is. But to think everyone needs to be in an autonomous car is silly. There are still pedestrians and bikes, as well as weather and tons of natural hazards. The car will still be millions of calculations ahead of a person in all these scenarios. 
    Drove 10 hours back on I-70 yesterday. At night the wind from the north made staying on the road quite difficult for everyone around me. Without autopilot it would have been a long night keeping the car in its lane. Instead I just turned the system on, and it allowed me to pay extra attention to those other hazards. The car fought the wind and the turns effortlessly. Most impressive at 85 mph. The car is especially useful in fog, when a human would not even see a car ahead. The Tesla even sees that a car ahead in traffic has stopped long before a human could see it, with another car in the way. 
    Expect close to a tenfold increase in safety, according to the insurance institute. That is due to active safety, which is always on standby. It also makes you a better driver: if you don’t use your turn signal all the time, you will soon get tired of the system reminding you - love that. 

    Out of curiosity - how does a Tesla with the latest software react in the scenario mentioned above? This is a pretty common scenario, right?


     

     

    Teslas on autopilot just drive straight over potholes; they don't react to them because they can't see them. Many owners have damaged rims as a result.

    Autopilot is programmed to stay within the white lines. 
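    To make the "stay within the white lines" point concrete, here is a toy proportional lane-centring loop (an illustration of the general technique, not Tesla's actual controller; the names and the gain are my assumptions). The only input is the car's offset between the detected lines, so a pothole inside the lane never enters the control error at all.

    # Toy lane-centring: steering from lane-line distances alone (illustrative).
    def steering_command(left_line_m, right_line_m, gain=0.5):
        """Proportional steering from measured distances (metres) to the left
        and right painted lines. Positive output = steer right.
        Nothing about the road surface (potholes, debris) enters this loop."""
        offset_right = (left_line_m - right_line_m) / 2.0  # offset from lane centre
        return -gain * offset_right  # steer back toward the centre

    print(steering_command(1.8, 1.8))  # centred in lane: 0.0, pothole or not
    print(steering_command(1.4, 2.2))  # drifted 0.4 m left: 0.2 (steer right)

    A system that should also dodge potholes would need an extra perception input and a path-planning step on top of this, which is the gap the posts above are describing.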


    --

     

     


    Re: Tesla Roadster

    Boxster Coupe GTS:
    Leawood911:

    Again, there seems to be confusion between what a driving aid is and what level 5 autonomy is.

    I will trust my experience over these well-intentioned articles, which focus on total autonomy. Ask why insurers rate the Tesla so much safer. It is silly to argue it is not; that alone throws the author’s whole legitimacy out the window. 

    Leawood, we know you like the technology, so I sincerely encourage you to read the article again and try to understand the Waymo experience...

    Waymo uses data and testing to make decisions. Five years ago, they let employees try out the tech for a few weeks. What Waymo found was extremely troubling. At first, users were nervous and didn't trust the car.  However, that quickly reversed and they came to trust the car too much. It's very hard for humans to pay attention when not making decisions. Waymo found that users simply were not able to take over control of the car in a reliable fashion. When partial autonomy failed, it became dangerous because the human was not prepared to take over control...

    ...you may be making the same mistake as those Waymo employees using partial autonomy — you trust your car too much! Smiley

    Smiley  The idea of Leawood actually blindly trusting the car is some really funny stuff. How often in Europe do you take a drive in excess of 2,000 km and stop only the absolute minimum number of times for a very quick fuel stop? Leawood has a history of taking such drives multiple times per year, and I have been along on a number of them and seen that he never used cruise control. With a driving history of that sort, I feel confident that he is still closely monitoring the autopilot while possibly improving his intake of every event within visual range, fore and aft.

    I guess if the system allows the mind to wander, it becomes a question of whether your thoughts stay focused on all the information you can take in from your surroundings, or whether you are going to try and read a book while driving.


     