Self-driving cars are slowly but surely on the rise, but what does that mean for us, whether as consumers, pedestrians, drivers, or lawyers? The video below gives a brief introduction to Google's new Self-Driving Car Project.
Some may argue that these cars are programmed to take thousands of factors into account while on the move, but others may argue that they would be more of a danger than a solution. These cars are programmed by humans, yet they do not have the same instincts or reflexes that human drivers do. Who makes the decisions when the car is on the road? For instance, if a self-driving car is about to be in an unavoidable accident and has to make a split-second choice to crash into either Car A or Car B (or Pedestrian C), which does it choose? And who would be at fault for that accident: the passenger in the car, the programmer, the car manufacturer, or the insurance company? What does it mean for personal injury lawyers? Will they have to learn a new set of laws?
There are so many ethical implications and legal decisions to be made regarding this technology that it is hard to believe these cars will reach the market anytime soon. What other issues do you see arising from self-driving cars? Who do you believe will be liable for collisions involving these vehicles? What does that mean for you, whether as a consumer, a pedestrian, a driver, or a lawyer?