Self-driving cars and the ethics of AI

Shawn Knight

Bottom line: I strongly feel that self-driving vehicles will be remembered as one of the greatest technological achievements of the modern era. I’m an even firmer believer that we’re about to open Pandora’s Box without fully considering (or even realizing) the impact that it’ll have on seemingly unrelated segments of society.

Personal transportation in a world devoid of human drivers will presumably be much safer, right? That's great, but it also means a lot of people are going to be without jobs. Insurance companies won't need nearly as many claims adjusters, the DMV won't need to be nearly as large as it is today, police forces could be greatly reduced and, as morbid as it sounds, hospitals won't need as many doctors (in 2012, motor vehicle collisions sent nearly 7,000 Americans to the ER each day).

With fewer people dying in auto accidents, there won’t be nearly as many organ donations, meaning that some sick people who might have survived thanks to a transplant won’t live as long. It’s a seemingly endless chain reaction of cause and effect.

In some instances, however, there will be casualties. Take the trolley problem, for example.

In this common thought experiment, you see a runaway trolley racing down the tracks toward five people. You have access to a lever that, if pulled, will divert the trolley to another track where it kills just one person instead of five. Do you pull the lever and spare the lives of five people by sacrificing one? How do you justify that decision?

It's inevitable that self-driving cars will have to make similar decisions at some point. To gauge global moral preferences, researchers at the MIT Media Lab launched an experiment in 2014 called the Moral Machine. It's a game-like platform that gathers feedback on how people believe self-driving cars should handle variations of the trolley problem.

Four years later, the project has logged more than 40 million decisions from people in 233 countries and territories around the globe, highlighting how different cultures prioritize ethics.

The test focused on nine different comparisons: whether a self-driving car should prioritize humans over pets, more lives over fewer, passengers over pedestrians, young over old, women over men, the healthy over the sick, law breakers over lawful citizens, higher social status over lower, and whether the car should swerve (take action) or stay its course (take no action).
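As a rough illustration of how pairwise choices like these can be summarized (a toy sketch only, not the Moral Machine's actual methodology), each dimension can be scored as the fraction of respondents who spared one side of the comparison:

```python
# Toy illustration: tally pairwise choices along one dimension,
# e.g. "spare more lives vs. fewer". Each response records which
# side the participant chose to spare. All data here is made up.
from collections import Counter

def preference_score(responses):
    """Fraction of responses that spared the 'more' side.

    0.5 means no preference; 1.0 means everyone spared more lives.
    """
    counts = Counter(responses)
    total = sum(counts.values())
    return counts["more"] / total if total else 0.5

# Hypothetical sample: 7 of 10 participants spared the larger group.
sample = ["more"] * 7 + ["fewer"] * 3
print(preference_score(sample))  # 0.7
```

A score well above 0.5 on a dimension would indicate a cultural preference for that side, which is roughly the kind of aggregate the regional comparisons below describe.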

The results are fascinating, if a bit stereotypical. In countries with more individualistic cultures, a stronger emphasis was placed on sparing more lives, perhaps because people see the value in each individual. In regions like Japan and China, where there is greater respect for the elderly, participants were less likely to spare the young over the old.

Interestingly enough, Japan and China were on opposite ends of the spectrum with regard to sparing pedestrians versus passengers. Those in Japan would rather ride in a car with a greater emphasis on sparing pedestrians while those in China were more concerned about the safety of a vehicle’s passengers.

Edmond Awad, an author of the paper, said they used the trolley problem because it's a very good way to collect data, but he hopes the discussion of ethics doesn't stay within that theme. Instead, Awad believes it should move to risk analysis - weighing who is at more or less risk - rather than who should or shouldn't die.


 
I honestly think self-driving cars will eventually be banned on public roads due to deaths in events that are later realized to have been obviously avoidable, because I don't feel the tech will ever reach the level of sophistication where it can determine many of the things mentioned in this article. I realize this tech is still in its infancy, but the lack of sufficient sensor data and quality AI to make good decisions based on that data, along with the infinite variables involved in driving a vehicle on roads, will eventually lead to its demise.
 
I've said it a million times. AI-driven cars only need to be slightly better drivers to be worth it. They will (100% chance) take over the roads eventually and will offer way more benefits than we even can see right now.

I did the Moral Machine a while back and the questions just... kept... going... forever, so I think I gave up at some point. The situations got too hard to make a judgment on. Keep in mind that humans in those situations would likely not react the way we'd expect AI to react.
 

I agree. Some things I can see and avoid in the blink of an eye that a computer can't interpret - whether something is really a danger, and whether to steer or brake evasively. We have already seen multiple instances of avoidable wrecks by these things, all caused by a heartless computer. They are being pushed on us, in a roundabout way, by people who are just as heartless for continuing on. Those wrecks will happen every time, because a computer will react the exact same way in the same scenario - that's how computers compute, the same every time. There are billions of obstacles humans interact with daily across the globe; how are you going to program for all of them? There is no way I would ever buy one of these, and I definitely won't drive near one. It will be unfortunate when one of them jumps into my lane and injures me. There will be a multi-billion dollar class action lawsuit.

I've heard some argue that these only need to be "slightly better" than humans. In what way? Fewer fender benders? Fewer lives lost? Let's count meaningless and heartless deaths. Do these same people want to be the ones to test what happens when a sensor fails on a curve around a cliff? It's one thing when an accident is my own or another human's fault. It's totally different when it's a failure-prone computer, programmed by a person behind a monitor, that has determined my fate. I feel strongly for people's safety. No thanks. Not now. Not ever.
 
Absolutely, I do agree that the technology is not there yet. However, I also think that some day it will be, whether that's five, ten, fifty, or one hundred years from now. If the tech hits the road before it is truly ready, I also agree that it is a class action lawsuit waiting to happen. There have been numerous times in the past when humanity has done things only to later find out they were not at all a good idea.

Perhaps this article is more interesting in what it says about regional norms with respect to the moral dilemmas posed by the questionnaire.
 
While I agree AI is still not ready, people have to admit AI generates fewer casualties. Especially these days, when looking at your smartphone while driving - for GPS or anything else - is something everyone does.
Also, AI doesn't get tired, emotional, or sick. We can't deny that human error is the main cause of accidents.
 

No, I do not admit they cause fewer casualties, percentage-wise. You can't compare a few hundred(?) or so vehicles to billions of people driving daily. We also don't hear about all of the incidents - the problems where a human had to intervene or it would have been a wreck or casualty, or where the car slammed on the brakes for no apparent reason - and who knows how many go unreported. Not to mention they only run in perfect conditions.

Using a smartphone while driving should be treated like drunk driving, with appropriate fines. We should also consider jail time and license revocation.

AI/computers break down and fail - both hardware and software. I work with computers daily. Go ask any mechanic - even about brand new cars. These systems are many times more complicated and will fail accordingly. Ask a mechanic what they think about having to maintain, replace, and troubleshoot them.

Human error is the main cause? Well duh. Because only humans are driving. That is why they call them "accidents". lol :)
 
You can't teach an AI a new command on the spot. Is the procedure to hit the people in the crosswalk, or to come to a complete stop? What about a seeing-eye dog? A self-driving car has a list of behaviors it has been programmed to perform - essentially a chain of "what if" cases: what should I do in this driving situation? As a programmer and developer of many years, I can only see it the robotic way: what would it do, and what couldn't it do? It can only react to what its maker made it do. But is it smart enough to avoid an accident? Smart enough to know whether there are people in the crosswalk, or a seeing-eye dog? What about those who run out from between parked cars? What does the self-driving AI do - stop or keep going?
 
I've said it a million times. AI-driven cars only need to be slightly better drivers to be worth it. They will (100% chance) take over the roads eventually and will offer way more benefits than we even can see right now.

Logically you should be correct, but the world does not run on logic. People are concerned about machines, afraid of them. Even if AI were 10 times better, people are far more forgiving of the foibles of their fellows than of failures in machines. People will see one AI failure and make a worldwide case of it, while excusing the tired, the drunk, and the stupid.
 
The brave new world scenario of city streets filled with only autonomous vehicles guided by 5G is not going to happen. As I've said before, there are too many executive decisions that have to be made by the car operator during most urban trips, and a machine will never be able to make them. The car will never even be as smart as a horse. Most likely, cars will have a switch to toggle between "Autonomous" and "Driver" modes.

To the degree that cars are used in autonomous mode it will cut down on accidents and save lives by eliminating drunk driving, texting and other distracted behaviors, as well as having faster reaction times for braking.

One thing the Moral Machine experiment doesn't take into consideration is that in most accidents drivers don't have time to make judgment calls. I imagine that in most situations people will simply steer the car in the direction that will injure the fewest or cause the least damage. One might be tempted to think that certain individuals would choose to kill five members of another 'race' rather than one of their own, but I really doubt it. Cars, which have no moral judgment, would then react essentially the same as humans.

There can be an advantage to machine decisions. How many drivers have swerved to avoid hitting a dog and ended up smashing into another car or a street post, doing themselves in instead? The car can be programmed to just hit the dog.
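That cost-weighing idea can be sketched as a toy rule - purely illustrative, with made-up maneuver names and costs, and not how any production AV stack actually decides:

```python
# Toy minimum-harm chooser (illustrative only): pick the maneuver
# with the lowest assigned harm cost. The names and numbers below
# are hypothetical, invented for this sketch.
def choose_maneuver(costs):
    """Return the maneuver whose estimated harm cost is lowest."""
    return min(costs, key=costs.get)

options = {"swerve_into_post": 10.0, "brake_and_hit_dog": 2.0}
print(choose_maneuver(options))  # brake_and_hit_dog
```

The point of the sketch is simply that a fixed cost ranking removes the panicked swerve: whatever weights the designers choose, the car applies them consistently.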
 
One unfortunate point this discussion always misses is that while AI and autonomous vehicles (AVs) might become very popular, there is no REQUIREMENT to pick one or the other. They can very well co-exist. What might eventually tip the scales toward the AV is the added cost of a human-operated vehicle: insurance rates will soar, requirements for regular maintenance and testing will also soar - in fact, everything about owning and operating your own auto will be more expensive. The AV will most certainly have its place in the world, as will the human-operated vehicle, and let's not forget that the support networks needed for AVs will greatly expand.

Since the dawn of time, human progress has constantly expanded our awareness and capabilities, not diminished them. The creation of computers was going to eliminate the need for paper, yet today paper manufacturing is bigger than ever. Its uses have changed, but the volume continues to increase.

AI and AVs will certainly change the landscape, but it's highly doubtful that anything will be totally eliminated. Records and discs were to be eliminated by digital sound, yet today there are many thriving businesses pressing discs. Film cameras were deemed obsolete a decade ago, yet their popularity remains and Kodak just announced a new "film".

Where human evaluation, decision, and selection are involved, there will never be a constant. There will always be individual choice and selection... and thank God for that!
 
It's very hard to teach a computer to 'see'. Think about it: you walk into a situation and, without effort and in stunningly little time, can tell what is important, what is a concern, what is dangerous. A computer must recognize every element in the 'picture' and then determine what matters. It doesn't innately know that the scrap of paper on the sidewalk is any more or less significant than a person moving toward the street. You know this without any conscious thought. I'm not expecting fully autonomous cars anytime soon.
 
That is why they call them "accidents". lol :)
Not quite -- it's a legalistic issue:
  • accident: unintentional, and it can be argued that no one is at fault
  • collision: one vehicle strikes one or more others, and the argument becomes who's at fault - was it negligence or willful?
 