At AAMCO Colorado, we know that you need your car to get you where you need to be, and that means not only trusting the vehicle to make it to your destination, but to make it there safely. If your vehicle needs a brake repair, transmission repair, or an overall auto tune-up service, check for the nearest AAMCO location and schedule an appointment. Because those that know, know to go to AAMCO!
How Would An Autonomous Vehicle Be Able To Choose Between What Is Right And Wrong?
For those of you unaware, the future is nigh, and robots are going to be doing more for us than ever before. With the advent of autonomous vehicles, some really hard-hitting questions are being asked, and some worry that these same robots are going to take away some of our privileges, such as driving.
We can all agree that computers can calculate and react much faster than humans at most activities, and driving is no exception. So in the future, when our cars drive us rather than the other way around, these autonomous vehicles will do the job both better and more safely. It could eventually even reach the point where humans are no longer allowed to drive. Machines don't have bad days, and they don't get stressed, distracted, or tired, or drive under any sort of influence. If it's safer and better overall to let the machines drive, who would want a human to operate a vehicle when all that does is add extra danger?
But there is one aspect of autonomous vehicles that was being discussed well before an actual autonomous vehicle was ever realized. In situations where human lives are at risk, how can we trust that our vehicle will make the best decision for us? After all, any ethical decisions made on behalf of a machine were programmed into it by someone we probably don't know. How can we be sure that the ethics programmed in match our own, and how can we feel comfortable that the car driving us won't make a decision that goes against what we believe?
Before we begin to dive into that question, let’s go over what Isaac Asimov, the Godfather of autonomous robotic theory, declared as the laws all autonomous machines should have to follow.
Isaac Asimov's Machine Laws
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
- A robot must protect its own existence as long as such protection does not conflict with the first or second law.
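The three laws form a strict priority ordering: the Second Law yields to the First, and the Third yields to both. A minimal sketch of that ordering in code might look like the following. Everything here, from the field names to the structure, is a hypothetical illustration, not anything Asimov or any manufacturer specified.

```python
# Hypothetical sketch: Asimov's three laws as a strict priority ordering.
# An action is checked against each law in turn, and the first violation wins.
LAWS = [
    # First Law: never injure a human, or through inaction allow one to come to harm.
    ("first", lambda a: not a["harms_human"]),
    # Second Law: obey human orders, unless obeying would conflict with the First Law.
    ("second", lambda a: a["obeys_order"] or a["order_conflicts_first_law"]),
    # Third Law: protect its own existence, unless that conflicts with a higher law.
    ("third", lambda a: a["preserves_self"] or a["self_risk_required_by_higher_law"]),
]

def first_violation(action):
    """Return the name of the highest-priority law the action breaks, or None."""
    for name, law_holds in LAWS:
        if not law_holds(action):
            return name
    return None
```

The ordering is the whole point: a robot that must choose between disobeying an order and harming a human never even reaches the Second Law check.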
How Asimov's Laws Make Sense But Are Not Ethically Practical
The first problem with Asimov's laws is that autonomous vehicles will inevitably face instances where the machine must make life-or-death decisions for both the passenger in the vehicle and anyone else around it. Let's look at a few examples to highlight this point.
Example 1: You are riding on a one-way road with a steep cliff on either side. Out of nowhere a person appears in the road, and there is not enough time for the vehicle to stop. The vehicle has only two choices. It can swerve off the road and over the cliff, avoiding the pedestrian but killing the passenger in the process. Or it can apply the brakes and continue forward, hitting the pedestrian and most likely causing great harm, or even death.
Example 2: Same scenario as before, but this time the obstacle is another car with multiple passengers inside. Your vehicle can swerve off the cliff as before, saving the others, or it can brake and allow the crash to occur, potentially harming or killing many.
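To make the worry concrete, here is a purely hypothetical sketch of how a programmed policy might score the two options in Example 2. Every risk number and weight below is invented for illustration; the point is that whoever picks the constants is deciding who gets protected.

```python
# Hypothetical harm-minimizing policy for Example 2. The risk numbers and
# weights are invented -- they are exactly the kind of value judgment a
# programmer would be baking into the car on your behalf.

def choose(options, passenger_weight=1.0, others_weight=1.0):
    """Pick the option with the lowest weighted expected harm."""
    def score(o):
        return (passenger_weight * o["passenger_risk"]
                + others_weight * o["others_risk"] * o["others_count"])
    return min(options, key=score)

options = [
    # Swerve off the cliff: near-certain death for the passenger, no one else hurt.
    {"name": "swerve", "passenger_risk": 0.9, "others_risk": 0.0, "others_count": 0},
    # Brake and crash: some risk to the passenger, serious risk to three others.
    {"name": "brake", "passenger_risk": 0.2, "others_risk": 0.6, "others_count": 3},
]
```

With equal weights the policy swerves (score 0.9 versus 2.0), sacrificing the passenger; weight the passenger five times higher and it brakes instead (4.5 versus 2.8). The same code, with different constants, makes the opposite life-or-death call.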
After these examples, make up some of your own and consider the ethical decisions you would make. Whether a child runs in front of the car, a pregnant woman, a drunk homeless man, a criminal trying to carjack you, or any number of other scenarios. What will soon become apparent is that your actions will not always be the same, and under some conditions you may well take the chance of putting yourself in harm's way to avoid hurting others. But there are probably certain instances where you would take the chance of harming someone else so that you yourself are saved. And remember, there is no right or wrong decision; these choices are personal and will not be the same for everyone. But if you're riding in an autonomous vehicle that you own, you would want the car to act as you would and follow the same ethical decisions you would make.
The problem at hand is that these decisions are programmed by someone you don't know, and more importantly, by someone who doesn't know you. So how can you trust riding in an autonomous vehicle when, if the car senses it's about to run into another person or persons, it may put you in harm's way in an attempt to avoid harming them? I can only imagine a few instances where I would personally put myself in harm's way to save another, and those instances are few and far between. I would want my vehicle to understand that as well.
When you go over Asimov's laws, it soon becomes apparent that they certainly do not cover every scenario that could arise, nor could they. No two people are the same: where one might willingly drive off a cliff to avoid harming a family, another might take the chance to ensure their own survival. And again, there is nothing wrong with either decision.
So we'll have to wait and see how ethics for machines unfolds, but one thing is certain: it is going to be interesting. I would argue that we live in one of the best times, with technology improving like the proverbial snowball rolling down a hill: the better we make it, the faster new improvements come along. Just consider that no more than 20 years ago we were all watching movies on VHS. VHS, people… you could fit six or so iPhones into a single VHS tape, and think about what those phones are capable of. Times are certainly changing, and they've never changed faster.