Uber reboots autonomous vehicle tests with an emphasis on safety following fatal crash

By Polycount, Staff
After an autonomous Uber vehicle fatally struck an Arizona pedestrian back in March, the company brought all of its self-driving vehicle tests to an abrupt halt.

The incident prompted Uber to conduct a full-scale review of its self-driving policies, leading to the termination of roughly 100 safety drivers. Uber's new focus seems to be on quality over quantity - a wise move, given the widespread public backlash it faced following the March incident.

Now that the dust from the tragedy has settled, Uber has decided to restart its self-driving vehicle tests. However, there's a catch this time around: a "Mission Specialist" will be at the helm for each test, in full control of the vehicle.

It's not clear exactly how much Uber's autonomous systems will be put to the test under these conditions, but they may function like Tesla's Autopilot: the driver keeps their hands on the wheel while the vehicle handles steering and lane changes.

At the very least, Uber's tech will be learning from experienced drivers in the background, without the ability to do any real harm.

To further increase safety as the company ramps up its self-driving tests once more, each Uber car will have a secondary Specialist in the passenger seat, whose sole job consists of documenting "notable events." By "events," Uber is likely referring to vehicle swerving or sudden stops.

Perhaps most importantly, human error will be mitigated through Uber's new "Real-time Driver Monitoring" tech. The monitoring system will be able to detect inattentive behavior (such as watching Hulu behind the wheel) and give drivers an audio alert to keep them focused.
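For illustration only, the alerting logic described above might reduce to something like this. The threshold, state, and interface are my own assumptions; Uber has not published how its monitoring system actually works:

```python
# Hypothetical sketch of a driver-attention alert loop. A gaze tracker would
# feed this one estimate per camera frame; the 2-second tolerance is invented.

class AttentionMonitor:
    def __init__(self, alert_after: float = 2.0):
        self.alert_after = alert_after   # seconds of eyes-off-road tolerated
        self.off_road_time = 0.0         # running eyes-off-road timer

    def update(self, eyes_on_road: bool, dt: float) -> bool:
        """Feed one frame's gaze estimate; return True when an alert is due."""
        if eyes_on_road:
            self.off_road_time = 0.0     # attention regained, reset the timer
        else:
            self.off_road_time += dt
        return self.off_road_time >= self.alert_after
```

Under this sketch, a half-second mirror check would not trip the alert, but staring at a phone for a few seconds would.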

It's not clear when Uber plans to begin fully autonomous testing again, but the company is clearly taking things slow this time around.


 
Given that last time it was human error, humans killing humans in car crashes is nothing new.

Please explain how it was human error when the computer was driving??? A person staring at their phone doesn't mean it was their fault - the computer was in complete control. What do you think people are going to be doing once they become complacent, if these things are unfortunately ever sold and used? People want these death machines so they can play on their phones instead of being responsible adults.
 
> Please explain how it was human error when the computer was driving??? A person staring at their phone doesn't mean it was their fault - the computer was in complete control. What do you think people are going to be doing once they become complacent, if these things are unfortunately ever sold and used? People want these death machines so they can play on their phones instead of being responsible adults.

A person who was supposed to be monitoring the vehicle was staring at their phone - how is that not human error? The computer was in control, but the person was there precisely because they knew it might mess up. It's like blaming your kids for eating candy all day when you're the one who failed to feed them something proper.

Death machines? Are you referring to people or autonomous cars? There have been a total of four autonomous-vehicle fatalities ever, all as a result of human error. There are over 3,000 deaths from road crashes involving only human drivers every single day.
 
> Given that last time it was human error, humans killing humans in car crashes is nothing new.
>
> Please explain how it was human error when the computer was driving??? A person staring at their phone doesn't mean it was their fault - the computer was in complete control. What do you think people are going to be doing once they become complacent, if these things are unfortunately ever sold and used? People want these death machines so they can play on their phones instead of being responsible adults.

While I generally agree with your statement, you could argue that it was human error, because humans decided it was OK to give an AI driving system only limited sensor input. If they really cared about safety, they would provide as much sensor data as possible, including but not limited to thermal imaging, short-wave IR with forward IR illumination, laser scanning, radar, etc.

I guarantee that AI-controlled vehicles using only optical cameras for sensor input will NEVER be completely safe. That isn't to say they can't be safer than your average human driver - I believe they can be - but 100% safe is not possible if you give your AI driver 25% or less of the potentially useful sensor data.

I do believe that with all the sensor tech available to us, and a mature AI, we could see a near-100%-safe self-driving vehicle on public roads. You'll never achieve 100% with passive sensors alone, but you'll get damn close. It probably comes down to cost and practicality, though, and that is just not a direction anyone wants to go, because no one will buy a $3.5 million car with a huge box of sensors on top just to be carted around.
 
> A person who was supposed to be monitoring the vehicle was staring at their phone - how is that not human error? The computer was in control, but the person was there precisely because they knew it might mess up. It's like blaming your kids for eating candy all day when you're the one who failed to feed them something proper.
>
> Death machines? Are you referring to people or autonomous cars? There have been a total of four autonomous-vehicle fatalities ever, all as a result of human error. There are over 3,000 deaths from road crashes involving only human drivers every single day.

Still trying to blame the human for a death the computer obviously caused?

So you are comparing the BILLIONS of miles driven daily by humans to a few thousand driven by self-wrecking cars in perfect conditions? I'm sorry, but you've lost all credibility.
 
> While I generally agree with your statement, you could argue that it was human error, because humans decided it was OK to give an AI driving system only limited sensor input. If they really cared about safety, they would provide as much sensor data as possible, including but not limited to thermal imaging, short-wave IR with forward IR illumination, laser scanning, radar, etc.
>
> I guarantee that AI-controlled vehicles using only optical cameras for sensor input will NEVER be completely safe. That isn't to say they can't be safer than your average human driver - I believe they can be - but 100% safe is not possible if you give your AI driver 25% or less of the potentially useful sensor data.
>
> I do believe that with all the sensor tech available to us, and a mature AI, we could see a near-100%-safe self-driving vehicle on public roads. You'll never achieve 100% with passive sensors alone, but you'll get damn close. It probably comes down to cost and practicality, though, and that is just not a direction anyone wants to go, because no one will buy a $3.5 million car with a huge box of sensors on top just to be carted around.

Autonomous car prototypes don't rely on just one sensor type; most have at least cameras and LIDAR. A quality LIDAR system costs about $12,000, and that price will only go down - it has never had the potential to push the cost of a car anywhere near even $1 million.

LIDAR works on the same principle as radar but uses laser pulses instead of radio waves, making it faster and more accurate. With the help of machine learning and improvements to the hardware, car companies are currently training AIs to work in extreme conditions like snow flurries.
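As a back-of-the-envelope illustration of the radar comparison above: like radar, lidar ranges a target by timing a pulse's round trip, just with laser light instead of radio waves. The numbers here are illustrative only:

```python
# Time-of-flight ranging, the principle shared by radar and lidar:
# distance = (speed of light x round-trip time) / 2.

C = 299_792_458.0  # speed of light in m/s

def range_from_echo(round_trip_seconds: float) -> float:
    """Distance to the target; halved because the pulse goes out and back."""
    return C * round_trip_seconds / 2.0

print(range_from_echo(200e-9))  # an echo after 200 ns puts the target ~30 m away
```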
 
> Still trying to blame the human for a death the computer obviously caused?
>
> So you are comparing the BILLIONS of miles driven daily by humans to a few thousand driven by self-wrecking cars in perfect conditions? I'm sorry, but you've lost all credibility.

Nope, just stating the facts as they were reported. No one denies that account of the situation. If you can provide evidence otherwise, then do so. If not, stop spreading fake news.
 
> While I generally agree with your statement, you could argue that it was human error, because humans decided it was OK to give an AI driving system only limited sensor input. If they really cared about safety, they would provide as much sensor data as possible, including but not limited to thermal imaging, short-wave IR with forward IR illumination, laser scanning, radar, etc.
>
> I guarantee that AI-controlled vehicles using only optical cameras for sensor input will NEVER be completely safe. That isn't to say they can't be safer than your average human driver - I believe they can be - but 100% safe is not possible if you give your AI driver 25% or less of the potentially useful sensor data.
>
> I do believe that with all the sensor tech available to us, and a mature AI, we could see a near-100%-safe self-driving vehicle on public roads. You'll never achieve 100% with passive sensors alone, but you'll get damn close. It probably comes down to cost and practicality, though, and that is just not a direction anyone wants to go, because no one will buy a $3.5 million car with a huge box of sensors on top just to be carted around.

Even with all of those sensors, will the computer be able to proactively predict that an accident is about to occur, like a human can? Can it detect objects beyond the car ahead of it like I can? I constantly scan past the car in front of me - either through its windows or by drifting slightly to the left occasionally. These companies have yet to reassure me in the slightest by releasing any kind of video showing their tech doing what a typical human can't. Have you seen one? All I hear about are deaths and wrecks from them doing stupid crap, such as decapitating their own driver, and mundane crap such as ramming into the side of a bus to avoid a sandbag in the gutter... I have zero confidence in these things.
 
> Nope, just stating the facts as they were reported. No one denies that account of the situation. If you can provide evidence otherwise, then do so. If not, stop spreading fake news.

Provide evidence of what? That you are throwing meaningless numbers around? I guess you have no idea how many hundreds of millions of people drive daily. Do a little elementary-school math on that; if you need me to provide evidence of that math, let me know. You have yet to provide evidence of any "fake news" - more like I'm trying to open your eyes to some common sense. You have a closed mind and are blind to the public-safety risk of pushing this failed technology out. Have kids? Share some video of you letting them cross the road in front of one of these. I fear especially for all the motorcycle riders out there, of which I am one.
 
> Provide evidence of what? That you are throwing meaningless numbers around? I guess you have no idea how many hundreds of millions of people drive daily. Do a little elementary-school math on that; if you need me to provide evidence of that math, let me know. You have yet to provide evidence of any "fake news" - more like I'm trying to open your eyes to some common sense. You have a closed mind and are blind to the public-safety risk of pushing this failed technology out. Have kids? Share some video of you letting them cross the road in front of one of these. I fear especially for all the motorcycle riders out there, of which I am one.

https://www.usatoday.com/story/opin...ar-death-blame-human-driver-column/730754002/

Don't let the facts get in the way of your opinion.
 
> Autonomous car prototypes don't rely on just one sensor type; most have at least cameras and LIDAR. A quality LIDAR system costs about $12,000, and that price will only go down - it has never had the potential to push the cost of a car anywhere near even $1 million.
>
> LIDAR works on the same principle as radar but uses laser pulses instead of radio waves, making it faster and more accurate. With the help of machine learning and improvements to the hardware, car companies are currently training AIs to work in extreme conditions like snow flurries.
Cool, but I'm suggesting way more than just that. Start adding tons of sensors, and spending time on AI that can safely interpret ALL that data, and costs are going to go up like crazy.
 
> Even with all of those sensors, will the computer be able to proactively predict that an accident is about to occur, like a human can? Can it detect objects beyond the car ahead of it like I can? I constantly scan past the car in front of me - either through its windows or by drifting slightly to the left occasionally. These companies have yet to reassure me in the slightest by releasing any kind of video showing their tech doing what a typical human can't. Have you seen one? All I hear about are deaths and wrecks from them doing stupid crap, such as decapitating their own driver, and mundane crap such as ramming into the side of a bus to avoid a sandbag in the gutter... I have zero confidence in these things.
That would be up to the AI, so I don't know. Computers do have faster reaction times, though, and in some situations that alone is enough to avoid a collision. As for watching the traffic beyond the vehicle in front of you - sure, why couldn't an AI do that? Optical cameras see pretty much the same things the human eye does.

As for seeing "intelligence" demonstrations of this self-driving tech, no, I haven't really seen anything. It is all reaction-based, and as you mention, there have been instances where it didn't react correctly, or at all. AI has a looooong way to go for sure, even with just the current sensor data - and even longer if they were to actually incorporate more sensors. The cases where it didn't react at all are exactly why I'm saying it needs more sensors.

Some examples: the woman crossing the street at night in Arizona should have been detected by radar or lidar, could probably have been detected by an IR camera with good long-distance forward illumination, and may have been detected by thermal imaging. The Tesla that crashed broadside into a semi-tractor trailer crossing the highway, killing the driver, is the same situation: radar, lidar, or thermal imaging could have detected the truck crossing the road where the optical camera obviously did not. The same goes for the Tesla that crashed and glanced off a concrete freeway barrier in California - radar, lidar, and possibly thermal imaging could have caught it. All three of these crashes were likely preventable if they would just put more sensors on the vehicles.
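The redundancy argument above can be sketched in a few lines. This is purely illustrative - the sensor names and the brake-if-anything-fires rule are my own invention, not any manufacturer's actual fusion logic; real systems weigh per-sensor confidence to manage false alarms:

```python
# Illustrative OR-fusion of redundant sensor modalities: flag an obstacle
# if ANY sensor reports one, so a single blind sensor (e.g. an optical
# camera at night) cannot silently doom the system. Hypothetical sketch.

def obstacle_detected(detections: dict) -> bool:
    """detections maps sensor name -> bool: did that modality see something?"""
    return any(detections.values())

# Night-time pedestrian: the optical camera misses, but active sensors fire.
night_scene = {"camera": False, "radar": True, "lidar": True, "thermal": True}
print(obstacle_detected(night_scene))  # True
```

The trade-off in this OR rule is recall over precision: every modality's false positives trigger the brakes, which is why real fusion stacks are more elaborate than a single `any()`.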

I personally think these companies are not willing to spend the money on more sensors, and these crashes WILL continue to happen - eventually leading to the government banning all self-driving vehicles on public roads. Just you wait...
 