The Ethical Dilemma Of Self-Driving Cars

by CXOtoday News Desk    Aug 06, 2015


The self-driving car, long the stuff of science fiction, may soon become a reality as more consumers look to rid their daily lives of problems such as traffic jams and car crashes. The race to create self-driving cars has become a gold rush for both automotive and technology giants. Google wants to launch driverless cars in 2017; its rival Tesla Motors wants to have vehicles ready by 2016; Nissan has given a 2020 date for its own versions. Auto majors such as Audi, BMW and Daimler are buying Nokia’s HERE Maps division to ready themselves for the future. Many more will follow.

All this, however, leads to one big question: will driverless cars create an ethical dilemma? A new study by McKinsey & Company projects that widespread adoption of self-driving cars could lead to a 90 percent reduction in vehicle crashes, with potential savings of nearly $200 billion a year from significantly fewer injuries and deaths in the US alone. Yet experts point out that human drivers are engaged every day not just in navigating roads but also in making ethical decisions as they drive, and these too will somehow have to be programmed into the software of the self-driving car.

Drivers, for instance, know it is right to swerve to avoid an animal racing across the road, though not at any great risk to their passengers. But they are prepared to take a little more risk with the passengers to avoid a cat or a dog.

More importantly, what will these cars do when a school bus packed with children comes their way, or an aging bystander steps into the road? Who will compute the moral reasoning to be programmed into the self-driving car? And there are simpler but still real ethical dilemmas that human drivers understand - say, that a safe speed on a sunny day is higher than on a rainy or foggy day.
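That simpler, weather-dependent judgement is the easiest to imagine in software. The sketch below is purely illustrative - the function name and the weather factors are assumptions for this article, not any carmaker's actual calibration:

```python
def target_speed(posted_limit_kmh: float, weather: str) -> float:
    """Scale the posted speed limit by a weather factor.

    The factors here are illustrative assumptions: a human driver
    implicitly does something like this, easing off in rain or fog.
    """
    factors = {"sunny": 1.0, "rainy": 0.8, "foggy": 0.6}
    # Unknown conditions get a cautious default.
    return posted_limit_kmh * factors.get(weather, 0.7)
```

Encoding even this trivial rule forces a designer to pick explicit numbers for judgements humans make without thinking - which is exactly the difficulty the harder dilemmas above magnify.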

Each self-driving car should have its own ethical engine, experts believe, or it will never become mainstream. Chris Gerdes, a professor at Stanford University, and Patrick Lin, a professor of philosophy at Cal Poly, are exploring the ethical dilemmas that may arise when self-driving vehicles are deployed in the real world. They believe that as the technology advances and cars become capable of interpreting more complex scenes, automated driving systems may need to make split-second decisions that raise real ethical questions.

Pietro Boggia, principal of automotive and transportation at research company Frost & Sullivan, believes that driverless cars may need to communicate directly with each other using systems similar to aeroplane transponders - transmitting location, speed and direction to other vehicles. “Standardization will be the biggest challenge for driverless cars,” he says.
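To make the transponder analogy concrete, a broadcast of that kind might carry a small message like the one sketched below. The field names and JSON encoding are assumptions for illustration only - real vehicle-to-vehicle standards (such as DSRC or ETSI ITS) define their own message schemas, which is precisely the standardization problem Boggia highlights:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class V2VMessage:
    """Hypothetical transponder-style broadcast: location, speed, direction."""
    vehicle_id: str
    latitude: float       # degrees
    longitude: float      # degrees
    speed_kmh: float
    heading_deg: float    # 0 = north, clockwise

    def encode(self) -> str:
        # Serialize the message for broadcast to nearby vehicles.
        return json.dumps(asdict(self))

# A car periodically broadcasting its state to neighbours:
msg = V2VMessage("car-42", 51.5074, -0.1278, 48.0, 90.0)
packet = msg.encode()
```

Until every manufacturer agrees on one such schema, a Google car and a Nissan car would be broadcasting past each other.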

Nonetheless, self-driving cars have been involved in very few accidents so far, and the car industry is, of course, well aware of these ethical issues. Google’s automated cars have covered nearly a million miles of road with just a few rear-end collisions, and these vehicles routinely deal with uncertain situations. As Gerdes argues, the ethical decisions human drivers make every day will have to be programmed into the software of the self-driving car. There will be many more such developments before self-driving cars reach mainstream adoption - making it a technology to wait and watch.