Consumer Reports: Tesla's Navigate on Autopilot tech is 'far less competent' than a human...

Polycount

Staff

Tesla and consumer advocacy website Consumer Reports (CR) have a pretty rocky history. When the former first released its Model 3 vehicle to the public, CR chose not to give the car its recommendation due to safety concerns regarding its braking distance (among other things).

However, after Tesla patched the problems via an over-the-air software update, CR decided to recommend the vehicle after all. Months later, though, CR changed its mind yet again, opting to retract its recommendation due to rising "reliability issues," which it discovered through its latest owner-satisfaction survey.

Now, CR and Tesla are at odds yet again: the former has published an article claiming that Tesla's Navigate on Autopilot technology (a more advanced iteration of its standard Autopilot tool) is "far less competent" than a human driver. Indeed, CR claims Autopilot requires "significant" intervention from drivers. "The system’s role should be to help the driver, but the way this technology is deployed, it’s the other way around," said Jake Fisher, CR's senior director of auto testing.

Fisher goes on to note that Navigate on Autopilot doesn't seem to react to brake lights or turn signals, while also failing to anticipate what other drivers will do. As such, he says, "you constantly have to be one step ahead of it."

These problems become particularly prominent, Fisher claims, during automatic lane changes when "lane change confirmation" alerts are switched off.

Apparently, Autopilot tech has difficulty detecting vehicles that approach "quickly" from behind, leading a given Tesla car to "cut off" cars that are driving much faster than it -- a risky move, to say the least, and one that could spell disaster for a driver who isn't paying attention. Tesla, for its part, insists that drivers should be ready to take over at any moment (the usual statement it gives when Autopilot comes under fire).

Tesla's response to CR's report wasn't particularly enlightening. The company merely pointed the site to a blog post published on April 3, where it assures readers that it "consistently reviews data" from instances where drivers felt forced to take control of an Autopilot-enabled vehicle and uses that to enhance its systems.

The company also says it has heard "overwhelmingly" from drivers in Navigate on Autopilot's Early Access Program that they enjoy using the feature for everything from road trips to daily commutes.

Time will tell whether or not Tesla will take CR's feedback into account in the long run, but for now, the carmaker seems satisfied with the performance of its driver-assistance tech.


 
Common sense tells us that computers will never be able to respond to the billions of daily variables across the world. Let alone the software and hardware failures, which grow exponentially as a vehicle gets older, plus becoming outdated and no longer supported. lol
 
Humans have the ability to anticipate. Computers don't.

There's no computer that can "see" something about to happen that is illogical - or respond to it in time.

For example: can a Tesla sense a tree falling ahead and slow to a stop before the tree enters its field of sensor sweeps?

Can a Tesla roll up the windows and speed up when approached by alcoholic bums?

Can a Tesla anticipate a street walker whose high heel breaks and sends her plummeting into the road?
 
Fisher goes on to note that Navigate on Autopilot doesn't seem to react to brake lights or turn signals, while also failing to anticipate what other drivers will do. As such, he says, "you constantly have to be one step ahead of it."
Seriously?!? I wonder if the insurance industry has raised its rates considerably on this car, or when those increases will be coming... You can bet that with this degree of "reliability," somebody's rates are going to skyrocket!
 
Humans have the ability to anticipate. Computers don't.

There's no computer that can "see" something about to happen that is illogical - or respond to it in time.

For example: can a Tesla sense a tree falling ahead and slow to a stop before the tree enters its field of sensor sweeps?

Can a Tesla roll up the windows and speed up when approached by alcoholic bums?

Can a Tesla anticipate a street walker whose high heel breaks and sends her plummeting into the road?

The average human response time is 221 ms for something as basic as noticing a color has changed and responding to it. It goes much higher for unorthodox situations. A computer can have sub-millisecond reaction times.

I'll respond to your examples in order:

1) Driverless cars use LIDAR in conjunction with other technologies. They have a high-definition range of 150 ft, and their low-definition range can extend much farther depending on the sensor suite used. In addition, human vision is only clear where you are focusing and much less so elsewhere. Driverless cars have no such issue; their "vision" is clear everywhere at all times. So in your example the driverless car would see the falling tree from a great distance away, certainly enough time for even the cheapest car to brake. In addition, its fast reaction time would ensure that you get the best outcome even if a tree is falling close to your car, and because the LIDAR system can see all around you, it can even dodge trees falling to either side of you.

2) Rolling up the windows really has nothing to do with driverless cars. You are thinking more along the lines of an AI assistant. Driverless cars aren't promising to profile everyone you meet and assess the risk. That said, nothing is stopping you from rolling up your own windows until that tech comes along.

3) This one is kind of funny. This isn't an example of anticipation. You didn't know her heel was going to break until you saw her start to stumble and noticed the signs, all of which takes place within a second. The part where anticipation comes in is where drivers assume people walking will keep walking, forget to pay attention, and either bump into the girl or run her over. A driverless car does not have this issue. If it sees a person in the way, it will brake before you exceed a safe distance, and it will do so before a human could possibly react. Mind you, this is the ideal case. Driverless cars are not perfect right now, but the ingredients are there.
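To put a rough number on the reaction-time gap described above: the distance a car covers during the reaction delay alone (before any braking even starts) scales linearly with that delay. A minimal sketch, using the 221 ms human figure from the post and an assumed 1 ms machine figure purely for comparison:

```python
# Back-of-the-envelope: distance travelled during the reaction delay
# alone, before braking begins, at a given speed. The 221 ms human
# figure is from the post above; 1 ms for the machine is an assumption
# picked just to illustrate the scale of the difference.

def reaction_distance_m(speed_kmh: float, reaction_s: float) -> float:
    """Distance covered (metres) while the driver/computer is still reacting."""
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    return speed_ms * reaction_s

human = reaction_distance_m(100, 0.221)    # ~6.1 m at 100 km/h
machine = reaction_distance_m(100, 0.001)  # ~0.03 m at 100 km/h
print(f"human: {human:.2f} m, machine: {machine:.3f} m")
```

So even granting the human everything else, a 221 ms delay costs about six metres of travel at highway speed before the brakes are touched. Total stopping distance is dominated by braking physics either way; the reaction delay is only the head start.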
 
The average human response time is 221 ms for something as basic as noticing a color has changed and responding to it. It goes much higher for unorthodox situations. A computer can have sub-millisecond reaction times.

I'll respond to your examples in order:

1) Driverless cars use LIDAR in conjunction with other technologies. They have a high-definition range of 150 ft, and their low-definition range can extend much farther depending on the sensor suite used. In addition, human vision is only clear where you are focusing and much less so elsewhere. Driverless cars have no such issue; their "vision" is clear everywhere at all times. So in your example the driverless car would see the falling tree from a great distance away, certainly enough time for even the cheapest car to brake. In addition, its fast reaction time would ensure that you get the best outcome even if a tree is falling close to your car, and because the LIDAR system can see all around you, it can even dodge trees falling to either side of you.

2) Rolling up the windows really has nothing to do with driverless cars. You are thinking more along the lines of an AI assistant. Driverless cars aren't promising to profile everyone you meet and assess the risk. That said, nothing is stopping you from rolling up your own windows until that tech comes along.

3) This one is kind of funny. This isn't an example of anticipation. You didn't know her heel was going to break until you saw her start to stumble and noticed the signs, all of which takes place within a second. The part where anticipation comes in is where drivers assume people walking will keep walking, forget to pay attention, and either bump into the girl or run her over. A driverless car does not have this issue. If it sees a person in the way, it will brake before you exceed a safe distance, and it will do so before a human could possibly react. Mind you, this is the ideal case. Driverless cars are not perfect right now, but the ingredients are there.
1 is giving Tesla quite a bit of trouble, especially when their cars drive into stationary objects like police cars, fire trucks, and cement barriers.

3 Collision avoidance is there in some cars without the driverless aspect.

As to driverless cars not being perfect, things will improve when they can act cooperatively - https://techxplore.com/news/2019-05-driverless-cars-traffic.html
 
The company also says it has heard "overwhelmingly" from drivers in Navigate on Autopilot's Early Access Program that they enjoy using the feature for everything from road trips to daily commutes.
Typical Tesla/Musk statement. Play up the marketing and pretend the issue is not real.

IMO, Musk really has no idea that he has his head up his rear. If he did, he might have gotten it out of there by now.
 
1 is giving Tesla quite a bit of trouble, especially when their cars drive into stationary objects like police cars, fire trucks, and cement barriers.

3 Collision avoidance is there in some cars without the driverless aspect.

As to driverless cars not being perfect, things will improve when they can act cooperatively - https://techxplore.com/news/2019-05-driverless-cars-traffic.html

Tesla cars are not driverless, nor do they include the suite of sensors a driverless car would typically have. The issues they are experiencing are likely due in part to that, and to software that isn't good enough.
 
It's easy to anticipate what the car might not do, like slowing down as it approaches a vehicle in front of you. You have a few seconds as you go from a safe distance, to a questionable distance, to a dangerous distance, to a collision as you approach. And maybe you'll brake.

What you cannot anticipate is the car suddenly veering into a median. You always have to be on guard. Now you're the safety backup for a car that's making bad decisions, with much slower response times. It's kind of like falling asleep at the wheel: by the time you are aware, it's probably too late.
 
Same old excuses from a company that might not be a company in a year or so.

I bought the car with the optional AP; to be honest, a big part of me just wants to help train it.

But I still think Tesla should employ more in-house drivers in different countries/cities doing first-class expert training 24/7/365, a bit like how Google does with their Maps Street View. Maybe they don't need that, or maybe they're already doing that, I don't know.

Anyways, regardless of what disbelievers say, the company that gets AP right first is going to dominate the industry; it is that disruptive. I believe Tesla might just as well end up selling charging stations and AP systems; no one is ahead or even comes close in either regard.
 
People who actually want to know how this feature works should go watch the videos made by the guy I'm linking below. It's been really fun watching the stuff he does with the car. It should silence many of the ultra-haters and also give insight into how the tech works and just how much it can do.
 
1 is giving Tesla quite a bit of trouble, especially when their cars drive into stationary objects like police cars, fire trucks, and cement barriers.

3 Collision avoidance is there in some cars without the driverless aspect.

As to driverless cars not being perfect, things will improve when they can act cooperatively - https://techxplore.com/news/2019-05-driverless-cars-traffic.html

Tesla cars are not driverless, nor do they include the suite of sensors a driverless car would typically have. The issues they are experiencing are likely due in part to that, and to software that isn't good enough.
The software, according to a past article that TS published, is written by AI, though Musk saying that AI is to be feared makes him a hypocrite. https://www.washingtonpost.com/news...icial-intelligence-we-are-summoning-the-demon and https://www.artificialintelligenceinindia.com/artificial-intelligence-elon-musk/
The most recent TS article said that the "bug" of driving into stationary objects had been removed, however, it had reappeared. :facepalm:

People who actually want to know how this feature works should go watch the videos made by the guy I'm linking below. It's been really fun watching the stuff he does with the car. It should silence many of the ultra-haters and also give insight into how the tech works and just how much it can do.
As one of those who falls into that category of "ultra haters," no video is going to change my opinion of Tesla or Musk. My dislike goes far beyond the fact that it has been widely publicized that "autopilot" Teslas cannot seem to avoid large, stationary objects and, as a result, get people killed and/or cause serious accidents in the process.

If Musk/Tesla stopped whitewashing these serious issues and recognized their responsibility for the problem, that would somewhat improve my opinion of them.

The bigger issue is that Tesla is literally losing hundreds of millions of dollars every quarter and tries to spin that as everything is going just fine. Musk then routinely tries to launch a SpaceX rocket a few days before a Tesla earnings report in a pure PR stunt intended to distract attention from the fact that Tesla has its jugular sliced open and no clue on how to repair it.

Musk lovers literally eat that :poop: up as if it were nourishing manna from heaven and somehow seem to ignore the fact that these "signs" are typical of a company in free-fall to oblivion.

I'll give Musk extremely minor credit for the recent article where he says that Tesla might live beyond the next 10 months if they can control those extreme losses. If there has been more than a single quarter in which Tesla earned a profit, I would be surprised.

I will also be surprised if Tesla can find a way to control those losses after having spent so much on the US gigafactory and trying to do the same thing in China - without his anticipated result of making production costs cheaper as production quantities go up.

IMO, Musk is a petulant brat that thinks if he throws money at anything, it will be successful. So far, he's been throwing :poop: at a wall, blindly, in the hopes some of his :poop: will stick yet almost none of it has.

My strong tone might not be appreciated; however, it's my pathetic attempt to call Musk out for his childish actions when it seems he is totally incapable of hearing anything remotely like this from anyone. He would be far, far, far better off, IMO, if he learned how to keep his mouth shut, like other manufacturers working on EVs, and instead focused on delivering a first-and-foremost safe, credible, working product in a timely fashion, without all the corporate bleeding and all the verbal diarrhea.
 
Humans have the ability to anticipate. Computers don't.

There's no computer that can "see" something about to happen that is illogical - or respond to it in time.

For example: can a Tesla sense a tree falling ahead and slow to a stop before the tree enters its field of sensor sweeps?

Can a Tesla roll up the windows and speed up when approached by alcoholic bums?

Can a Tesla anticipate a street walker whose high heel breaks and sends her plummeting into the road?

The average human response time is 221 ms for something as basic as noticing a color has changed and responding to it. It goes much higher for unorthodox situations. A computer can have sub-millisecond reaction times.

I'll respond to your examples in order:

1) Driverless cars use LIDAR in conjunction with other technologies. They have a high-definition range of 150 ft, and their low-definition range can extend much farther depending on the sensor suite used. In addition, human vision is only clear where you are focusing and much less so elsewhere. Driverless cars have no such issue; their "vision" is clear everywhere at all times. So in your example the driverless car would see the falling tree from a great distance away, certainly enough time for even the cheapest car to brake. In addition, its fast reaction time would ensure that you get the best outcome even if a tree is falling close to your car, and because the LIDAR system can see all around you, it can even dodge trees falling to either side of you.

2) Rolling up the windows really has nothing to do with driverless cars. You are thinking more along the lines of an AI assistant. Driverless cars aren't promising to profile everyone you meet and assess the risk. That said, nothing is stopping you from rolling up your own windows until that tech comes along.

3) This one is kind of funny. This isn't an example of anticipation. You didn't know her heel was going to break until you saw her start to stumble and noticed the signs, all of which takes place within a second. The part where anticipation comes in is where drivers assume people walking will keep walking, forget to pay attention, and either bump into the girl or run her over. A driverless car does not have this issue. If it sees a person in the way, it will brake before you exceed a safe distance, and it will do so before a human could possibly react. Mind you, this is the ideal case. Driverless cars are not perfect right now, but the ingredients are there.

So, you are saying computers can calculate faster than humans? I'm sure none of us knew that. Thanks for enlightening us. Edit: Can they also know to compensate for weather conditions? Like knowing to brake sooner for snow or black-ice patches? That the tires are worn and don't brake/corner as well, or that the car has been loaded and is now heavier, so it needs much more braking/following distance? These are all immediately common sense for a human (although some are reckless ID10Ts and choose to ignore it).

1) A human does not need "clear" vision to know there is a problem. Can they also see vehicles through the vehicle in front of you? Not that I know of. I can see through windows and react far before something happens. A computer is only reactive and cannot predict impending accidents, whereas this is second nature to a human. How many billions (trillions?) of dollars have been spent, and it will still never be figured out.

2) But "driverless" cars are supposed to be just that - AI assistants... lol. If they think they can supposedly program one to drive itself, this would be menial. When there is a situation to drive away from an attack (which a human knows by common sense), then why wouldn't these things? Oh right, because they will never be able to detect/program for these situations. They wouldn't care if you were killed, like in a Johnny Cab.

3) I agree with you on reaction time. Again: REactive. Another example: when a woman is walking and her shoe is falling off, a human will know she is going to stop. It is common sense and is PREactive. A computer will not, and never will. If you would like to set up a demonstration, please share video of one of these cars coming at you in this situation.
 
So, you are saying computers can calculate faster than humans? I'm sure none of us knew that. Thanks for enlightening us. Edit: Can they also know to compensate for weather conditions? Like knowing to brake sooner for snow or black-ice patches? That the tires are worn and don't brake/corner as well, or that the car has been loaded and is now heavier, so it needs much more braking/following distance? These are all immediately common sense for a human (although some are reckless ID10Ts and choose to ignore it).

1) A human does not need "clear" vision to know there is a problem. Can they also see vehicles through the vehicle in front of you? Not that I know of. I can see through windows and react far before something happens. A computer is only reactive and cannot predict impending accidents, whereas this is second nature to a human. How many billions (trillions?) of dollars have been spent, and it will still never be figured out.

2) But "driverless" cars are supposed to be just that - AI assistants... lol. If they think they can supposedly program one to drive itself, this would be menial. When there is a situation to drive away from an attack (which a human knows by common sense), then why wouldn't these things? Oh right, because they will never be able to detect/program for these situations. They wouldn't care if you were killed, like in a Johnny Cab.

3) I agree with you on reaction time. Again: REactive. Another example: when a woman is walking and her shoe is falling off, a human will know she is going to stop. It is common sense and is PREactive. A computer will not, and never will. If you would like to set up a demonstration, please share video of one of these cars coming at you in this situation.

Computers in modern cars already compensate for bad weather. Modern traction control says hello. Everything you listed in your first paragraph is already being done, let alone what will be done.

1) A human having unclear vision is as much of a detriment to them as it is to a machine. It increases the processing time for both. Machines are only reactive? You must have forgotten we already have machines that can predict missile trajectories in Patriot missile systems and THAAD. Predicting something at the speed of a car? Piece of cake for a machine.

2) Nope, a digital assistant is a whole different ball game. Maybe later they will start selling cars with an assistant that can provide advanced features like that, but I imagine that will take longer and cost an up-charge. Driverless cars should focus on getting the driverless part right first. If you have to worry about dangerous people coming up to your car, you likely can't afford it anyway.

3) Modern cars already do, so I don't get this "never will" part. And you speak as if every human would instantly notice, which 90% won't. I'd bet that the human error rate is far higher than even current implementations, and these aren't even driverless cars. Humans can be very perceptive when they are really paying attention; it's just a shame very few drivers are, let alone 100% of the time. The number of car accidents is a testament to that, and that's considering that a bunch of grown adults have to be coerced by the law into not doing things that could get themselves or others killed, like texting while driving.
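The trajectory-prediction point above has a very simple core: estimate another object's velocity from successive position samples, then extrapolate forward. Production systems do this with Kalman filters over radar/LIDAR returns; the sketch below is only the bare constant-velocity idea, with made-up sample numbers, to show that "predicting" motion is ordinary math rather than magic:

```python
# Minimal constant-velocity predictor: given two (x, y) position
# samples of another vehicle taken dt_samples seconds apart, estimate
# its velocity and extrapolate its position dt_ahead seconds out.
# Real tracking stacks use Kalman filters and richer motion models;
# the data here is invented purely for illustration.

def predict_position(p0, p1, dt_samples, dt_ahead):
    """Linearly extrapolate from two (x, y) samples (metres, seconds)."""
    vx = (p1[0] - p0[0]) / dt_samples  # estimated velocity, m/s
    vy = (p1[1] - p0[1]) / dt_samples
    return (p1[0] + vx * dt_ahead, p1[1] + vy * dt_ahead)

# A lead car observed at (0, 0) then (2, 0) metres, 0.1 s apart,
# is moving at 20 m/s, so one second later it should be near x = 22 m.
print(predict_position((0, 0), (2, 0), 0.1, 1.0))  # about (22.0, 0.0)
```

The hard part in practice isn't the extrapolation; it's getting reliable position samples out of noisy sensors, and knowing when the constant-velocity assumption (no braking, no lane change) stops holding.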
 
Computers in modern cars already compensate for bad weather. Modern traction control says hello. Everything you listed in your first paragraph is already being done, let alone what will be done.

1) A human having unclear vision is as much of a detriment to them as it is to a machine. It increases the processing time for both. Machines are only reactive? You must have forgotten we already have machines that can predict missile trajectories in Patriot missile systems and THAAD. Predicting something at the speed of a car? Piece of cake for a machine.

2) Nope, a digital assistant is a whole different ball game. Maybe later they will start selling cars with an assistant that can provide advanced features like that, but I imagine that will take longer and cost an up-charge. Driverless cars should focus on getting the driverless part right first. If you have to worry about dangerous people coming up to your car, you likely can't afford it anyway.

3) Modern cars already do, so I don't get this "never will" part. And you speak as if every human would instantly notice, which 90% won't. I'd bet that the human error rate is far higher than even current implementations, and these aren't even driverless cars. Humans can be very perceptive when they are really paying attention; it's just a shame very few drivers are, let alone 100% of the time. The number of car accidents is a testament to that, and that's considering that a bunch of grown adults have to be coerced by the law into not doing things that could get themselves or others killed, like texting while driving.

What I have seen of "modern traction control" is a very poor attempt. The control devices I have personally seen can't even tell you are drifting off the road. Laughable. Please provide articles on the modern traction control you are describing. I guess I am living under a rock.

1) Missile trajectories are basic math and don't relate to this point at all. If I see part of a bridge has fallen onto the highway, or a sofa, or a truck broke down... and the vehicle in front of me is not slowing down (or a sensor fails, or whatever), then one of these cars is not going to be able to stop when the vehicle in front slams into a wall. Whereas I, as a human, will (predictably) be slowing down long before getting near.

2) Really? You are profiling against poor people? That is really sad, and reinforces your disdain toward other people's safety.

3) Can you provide proof they are already proactive? Or that the human error rate is far higher? I agree with you that people have lost their minds playing with their phones instead of being responsible and caring drivers. People who want these cars want them so they can keep playing on their phones even more instead of driving. They are just as guilty as the people putting these cars on the road. It is laughable that you are comparing the raw number of accidents by humans versus these cars. How about we look at rates instead? Weigh the accidents/deaths of these against the billions of daily miles real people drive. Let's also compare these cars, which only drive in perfect conditions, to the wide variety of weather conditions people drive in daily. You know that the only things reported are what the news tells us? These companies are not going to publish all of the undercover wrecks/tickets/violations. See how poorly these cars are faring? I can point out numerous examples of some really STUPID crap these cars have done, which not only killed their owners but people around them. There are a lot of people who would still be alive if it weren't for these cars being forced on us at our safety's expense. I really don't understand how people clamor for these self-wrecking cars despite people being killed directly because of them.
 
WARNING: Your Tesla was unable to avoid a collision due to the following error(s): Insufficient Sensor Data, Collision Mitigation System is incomplete or missing. Please contact Tesla Customer Support for assistance.
 
This article is completely wrong. The latest post by Consumer Reports is only about the decisions to make lane changing when the setting to require blinker stalk confirmations before doing so is turned off (so the car will change lanes by itself).

From the article: "In practice, we found that the new Navigate on Autopilot lane-changing feature lagged far behind a human driver’s skills."

Nowhere does it say that "Tesla's Navigate on Autopilot technology (a more advanced iteration of its standard Autopilot tool) is 'far less competent' than a human driver."

It's a complete lie that "CR claims Autopilot requires 'significant' intervention from drivers."

In reality, it does require monitoring but rarely an intervention (I know this from personal experience).

The following is also something taken ridiculously out of context: "Fisher goes on to note that Navigate on Autopilot doesn't seem to react to brake lights or turn signals, while also failing to anticipate what other drivers will do."

It definitely responds to braking and to vehicles changing lanes (it does it very well, actually); here too, CR's concerns are limited to when the vehicle changes lanes by itself toward a vehicle doing one of these things.

The only time the TechSpot article even mentions lane changing, it says "These problems become particularly prominent, Fisher claims, during automatic lane changes when 'lane change confirmation' alerts are switched off." Even this is blatantly wrong; the TechSpot article makes it sound like these issues exist during normal usage of NoA.

In reality, here's what Jake Fisher said today, that their review of NoA 6 months ago should be looked to about general Navigate on Autopilot feedback: https://twitter.com/CRcarsJake/status/1131519858171174912

He also clarified his article to say exactly what I'm saying, that the concern is with the vehicle changing lanes without driver input: https://twitter.com/CRcarsJake/status/1131518552291385345

He personally added that every other driver assist system isn't nearly as capable as Tesla's: https://twitter.com/CRcarsJake/status/1131522842527510528
 
Common sense tells us that computers will never be able to respond to the billions of daily variables across the world. Let alone the software and hardware failures, which grow exponentially as a vehicle gets older, plus becoming outdated and no longer supported. lol

No, that's not what common sense tells us. I still remember older gents saying: "A computer will never beat a human being in chess, because a computer can't think and can't anticipate attacks". Which was refuted in 1997 and has stayed that way till today. You should have seen their faces when Kasparov lost to Deep Blue.

What common sense tells us is that computers can process a lot more inputs in PARALLEL than humans can. And that computers can focus their attention to more than one task, something that humans can't (not even women can really multi-task, they just do task-switching a bit faster than men).

Computers will be able to drive a car by just using 2 cameras (like we humans do) but they won't be limited to it. If the prices of LIDAR and other devices drop, computers will use those inputs too, while humans will have a hard time processing and comparing that many inputs.

The only reason why computers are still so stupid is... we're too stupid. We haven't figured out how our brain works, so we have trouble copying that to computers. But don't worry, that will change. As soon as we produce the first hardware that can copy itself with a little bit of change, it will evolve so fast that we won't be able to track it. After that happens, in the period before computers destroy us, there will be a short period where computers will serve us. We'll finally have really smart slaves. But as usual with slaves, eventually they will rebel.
 
No, that's not what common sense tells us. I still remember older gents saying: "A computer will never beat a human being in chess, because a computer can't think and can't anticipate attacks". Which was refuted in 1997 and has stayed that way till today. You should have seen their faces when Kasparov lost to Deep Blue.

What common sense tells us is that computers can process a lot more inputs in PARALLEL than humans can. And that computers can focus their attention to more than one task, something that humans can't (not even women can really multi-task, they just do task-switching a bit faster than men).

Computers will be able to drive a car by just using 2 cameras (like we humans do) but they won't be limited to it. If the prices of LIDAR and other devices drop, computers will use those inputs too, while humans will have a hard time processing and comparing that many inputs.

The only reason why computers are still so stupid is... we're too stupid. We haven't figured out how our brain works, so we have trouble copying that to computers. But don't worry, that will change. As soon as we produce the first hardware that can copy itself with a little bit of change, it will evolve so fast that we won't be able to track it. After that happens, in the period before computers destroy us, there will be a short period where computers will serve us. We'll finally have really smart slaves. But as usual with slaves, eventually they will rebel.

A classic assumption based on a game with very limited inputs. Common sense says the real world has trillions of variables that change every day, and you cannot even begin to compare the two. Add on new variables every day that have never happened before.

Add more and more sensors and cameras, and you further complicate the system, which leads to increased chances of failure. You realize vehicles have mirrors which cost what, $10? How much is one of these cameras? Plus they drive up the price and the maintenance, which people don't do anyway. Cameras get dirty and fail. Now you need a cleaning system for each one? Exponential programming hardships, etc. Then they get outdated, scrapped, and end up in the trash compactor. Where are all the lefty climate changers?

Sounds like you took the red pill. ;)
 
I think many people who can see the piles of money just years from now are investing in, or even starting up, car autopilot ventures. Just eliminating the cost of paying someone to drive you is huge.
And think of the productivity of anybody being driven to work or a meeting.
We have millions of people who, if they don't become stupid rich, will at least be able to do so much more while being driven where they need to be.
There is no amount of crashes that will get this tech banned.
Of course the big problem is, this is hard. Making a 100% independent autopilot is that hard. Even the very first companies, the ones with years of data and advances, are probably years and years away from making a functional, any-situation-proof autopilot.
 
Can't expect people that take money from big oil to know that machine learning is an ongoing process that only improves over time.
 