Accidents have eroded public trust in self-driving vehicle technology

People make mistakes. People designed the software. Hmm..... *scratches head*

How is this really any different from any other accident? You could say it was caused by the software, or the environment... or you could blame it on the human factor.

I haven't done my research, but have the accidents been caused by some additional human interference?

If so, you could potentially argue that there would be no accidents if all vehicles were AI driven.

IDK
For me, unless an argument is supported by sound science, it remains just that: an argument.
 
I guess they just need more videos/stories of autonomous vehicles avoiding accidents to gain the trust back. Also, this tech is still heavily in development; give it a couple of years and it'll be better. Until then, I get to go home and deal with an annoying 40-minute commute.
Unless all self-driving vehicles can avoid accidents reliably, this is marketing designed to do only one thing - sell self-driving vehicles.
 
The problem with the stats is that if the per-mile numbers are not improved before the cars are used everywhere, then the raw number of incidents goes up simply because you have more self-driving cars on the road that could be involved in accidents. How many miles have self-driving cars logged compared to cars that are not self-driving? My guess is that the number is a fraction of a percent, and those who stand to make a buck from their tech will use big numbers in their marketing to make it look as if the tech has been proven beyond a shadow of a doubt - which, at this point, it has not been.
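To make that mileage point concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is a made-up placeholder, not real fleet data; the only point is that raw incident counts scale with miles driven, so comparing counts without normalizing per mile tells you very little.

```python
# Hypothetical illustration: why raw incident counts mislead without per-mile rates.
# All numbers below are invented for the example, not actual fleet statistics.

human_miles = 3_000_000_000_000   # assumed annual human-driven miles
human_incidents = 6_000_000       # assumed annual incidents, human-driven

sdc_miles = 150_000_000           # assumed annual self-driving miles (a tiny fraction)
sdc_incidents = 250               # assumed annual incidents, self-driving

human_rate = human_incidents / human_miles
sdc_rate = sdc_incidents / sdc_miles

print(f"Human-driven: {human_rate:.2e} incidents per mile")
print(f"Self-driving: {sdc_rate:.2e} incidents per mile")

# If self-driving cars covered all the miles at the *same* per-mile rate,
# the raw incident count would grow in proportion to the miles driven:
projected_sdc_incidents = sdc_rate * human_miles
print(f"Projected incidents if self-driving covered all miles: {projected_sdc_incidents:,.0f}")
```

Whether that projected number looks good or bad depends entirely on the per-mile rate, which is exactly the figure the marketing tends to gloss over.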
IMO, that the car drove into a big solid object directly in front of it is ludicrous, not to mention an egregious failure of the tech. No matter how many people, Musk included, try to blame those crashes on the driver, THE CAR SHOULD HAVE STOPPED!
I am not saying self-driving tech will not improve, just that I do not believe it is at the point where it is reliable enough to take over the day to day driving of all drivers. Some day, yes, but not yet.
Well, look into the stats. I'm sure you'll be surprised how many hours of autopilot have actually been logged with no incident. I'm not saying they're almost perfect (far from it); all I'm saying is that a lot of people are being ignorant about them (hearing one or two bad things and basing their entire opinion on that).

As for the block, it wasn't technically autonomous self-driving tech. It was a glorified driver assist. It didn't know what the object was and let the driver know that they (the driver) needed to react. Heck, the driver was supposed to be paying attention anyway, as per the warnings that come with using the system. 100% the driver's fault for (funny enough) being ignorant (and if the majority of people actually looked into situations like this without being ignorant, they'd know that too).

As for where the tech stands, and what stage it's in, I trust that it will keep on improving (and that it's safer in the basic ways). I see an id!ot driver pretty much every time I go out driving, so how could this be worse? :p
 
Perhaps I did not make myself clear. THE CAR SHOULD HAVE STOPPED. It has sensors that can recognize obstacles. It is not like the kangaroo that jumped out of the way.

Also, if it is not "autopilot," then Tesla/Musk should not market it as such, even though the laws allow them to with reams of fine print. Musk is a spoiled brat who has no honor.

Drawing a parallel between idjit drivers and self-driving AI is a red herring/irrelevant conclusion - https://en.wikipedia.org/wiki/Irrelevant_conclusion#Red_herring
 
Perhaps I did not make myself clear. THE PERSON SHOULD HAVE STOPPED. There, I can "yell" too. It was his responsibility to pay attention, and he (regrettably) dropped the ball during the several seconds of warnings.

And I agree with your marketing comment; it shouldn't be called autopilot. But you won't catch me being ignorant enough to assume its function based only on the name.

Lol, all I'm saying is that I'd like to replace a lot of the idjit drivers (especially the drunk ones) with decent autopilot tech.
 
It's plain that we are not going to agree on this.

I'll pose another scenario.

Hypothetically speaking: suppose the driver had a heart attack or some other medical event that left them unable to control the vehicle. This does happen, and such conditions can go undetected, so there may be no advance warning - which is my long way of saying that in this hypothetical scenario, the person is legally able to drive the car but unexpectedly loses the ability to control it due to the medical condition - so no "driver's fault" here.

Are you saying that in such a scenario, the car should just continue to drive at high speed into a solid object that it can easily detect with its advanced sensors - radar, lidar, and the ability to determine "hey, that thing is not moving"? As I see it, it could simply pull off to the side of the road and stop.
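For what it's worth, the "that thing is not moving" inference really is simple in principle. Here is a toy sketch; the field names, units, and threshold are hypothetical, and a production perception stack fusing many sensors is far more involved than this.

```python
# Toy sketch of inferring "that thing is not moving" from a forward radar return.
# Names, units, and the tolerance are hypothetical, for illustration only.

def is_stationary_obstacle(ego_speed_mps: float,
                           radar_relative_speed_mps: float,
                           tolerance_mps: float = 0.5) -> bool:
    """Radar reports speed relative to our car; adding our own speed gives the
    object's speed over the ground. Near zero means the object isn't moving."""
    object_ground_speed = ego_speed_mps + radar_relative_speed_mps
    return abs(object_ground_speed) < tolerance_mps

# Example: we're doing 31 m/s (~70 mph) and the radar sees the object closing
# at 31 m/s (relative speed -31 m/s), so it is sitting still in our lane.
print(is_stationary_obstacle(ego_speed_mps=31.0, radar_relative_speed_mps=-31.0))  # True
```

The detection itself is the easy part; braking hard on such a return without constant false alarms from overpasses and roadside signs is where these systems struggle, which seems to be a big part of why current driver-assist systems hand that decision back to the human.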

As I see it, self-driving cars need that ability, and they should also be able to determine whether the driver is paying attention and simply refuse to go any further if they are not. We are talking about people's lives here, and for me, this becomes an ethical matter. Would you prefer that your self-driving car drive you into the wall in this type of scenario?

In fact, there are reports - which Tesla denies (and unless they want their a$$ sued off, they have to deny them) - that Tesla engineers wanted to include such safety features and were overruled because of cost. https://www.techspot.com/community/...fety-features-over-costs.246608/#post-1684441

So if you don't like seeing people driving who you believe are drunk, why not call 911 on them when you see them? Program 911 into your hands-free speed dial and let the police take care of them. You might be doing the drunk driver - and other people, for that matter - a favor.
 
Yes (to be blunt).
In its current state, it is a driver assist. It did not know what the object was, and sudden deceleration could also pose a problem (hence why it warned the driver to take charge). I understand what you're trying to get at, but it is not the system's fault in this context. Whether or not this specific system was active, your hypothetical situation would still end the same way.

We're not talking about a competent autonomous driving AI (because it doesn't exist in 2018). It's still in its infancy and still needs a human to monitor it. And frankly, you don't seem to understand that (signalling an end to this conversation).

Otherwise, you missed the context with my idjit drivers comment again. It's a wish. Nothing more.
 
The car supposedly told the driver several times, over some period before the accident, to put his hands on the wheel. If there was time to warn the driver several times of a potential problem, there was likely time for the car to decelerate safely. Depending on who you listen to, it was Tesla's choice not to incorporate that feature.

It is not that much of a stretch, as I see it from my perspective as a programmer with over 20 years' experience, to incorporate a program branch that says "car needs driver control, car warns driver, driver is not responding, car decelerates and pulls off to the road shoulder (or as far off the road as it safely can), car refuses to go any further if the driver does not respond."
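Here is a minimal sketch of the branch being described. It is purely illustrative - the state names, timings, and controller interface are invented for this example, not anything from Tesla's actual software - but it shows how small the control-flow addition is in principle.

```python
# Hypothetical sketch of the escalation logic described above:
# warn the driver, and if they never respond, slow down and pull over.
# States, timings, and the interface are all invented for illustration.

import enum

class State(enum.Enum):
    ASSISTING = 1      # normal driver-assist operation
    WARNING = 2        # driver told to take the wheel
    PULLING_OVER = 3   # decelerating toward the shoulder
    STOPPED = 4        # parked, hazards on, refusing to continue

WARNING_LIMIT_S = 10.0  # assumed grace period before the car acts on its own

def next_state(state: State, hands_on_wheel: bool, seconds_warning: float) -> State:
    if state == State.ASSISTING:
        return State.ASSISTING if hands_on_wheel else State.WARNING
    if state == State.WARNING:
        if hands_on_wheel:
            return State.ASSISTING                # driver responded, resume
        if seconds_warning >= WARNING_LIMIT_S:
            return State.PULLING_OVER             # driver unresponsive, take over
        return State.WARNING
    if state == State.PULLING_OVER:
        return State.STOPPED  # once at the shoulder and at rest (details omitted)
    return State.STOPPED      # refuse to go further until the driver responds

# Example: driver still has not responded after 12 seconds of warnings.
print(next_state(State.WARNING, hands_on_wheel=False, seconds_warning=12.0))  # State.PULLING_OVER
```

The genuinely hard part is executing the pull-over safely in traffic, not the branching itself, but the sketch captures the behavior being argued for.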

Personally, I really do not care what features the car had; this type of feature should be incorporated by default as a matter of ethics.
 