No More Accidents Edition – Autonomous Vehicles



December 18, 2015




Well, this is awkward…

According to the Automotive News article linked above, autonomous cars (which are supposed to reduce accidents) seem to be involved in a lot of…um…accidents. Oops! The problem is that the cars aren't becoming more self-aware when accidents occur; they aren't learning new ethical context from their crashes. Far from it. Human drivers keep hitting autonomous vehicles because of the auto-driver's inability to adapt to and learn from complex situations.

How? Well, for starters, the self-drivers obey traffic laws—all of them. For instance, to an autonomous vehicle, there is no such thing as merging with the flow of traffic by gauging the speed of other drivers. There is only the posted speed limit. Have you ever had to dangerously merge onto the highway from the on-ramp while someone in the slow lane is driving the posted speed limit but is too slow to allow you and the other incoming cars to merge safely with traffic? Yeah, it's like that.

The article provides a few other scenarios, too. And it isn't like the accidents are horrific or even serious; they're just low-speed fender benders. And of course it's our fault—it couldn't possibly be the autonomous vehicle's programming at fault, because computers are perfect, right? Just like their human programmers who never make mistakes…wait, never mind.

While it's nice that people want to eliminate the human factor from driving to reduce accidents, let's remember that fallible humans are programming and developing these autonomous cars, so a perfect system is not likely. At least not anytime soon, which is what really bothers me about this whole self-driving, autonomous vehicle thing.

In the mad dash to say, "We did it—we've replaced ourselves with perfect machines," are people really analyzing whether the developers creating these autonomous vehicles are putting out the safest, best software and equipment? Why wouldn't developers and programmers work out the big and small ethical questions (big-picture ethics, but also small-picture situations, like when people speed up and bend the law to prevent accidents) before putting self-driving, autonomous cars on the street? I mean…technology always works perfectly, just like Google Glass (uh-huh).

This is a serious concern for me. Autonomous vehicles cannot (or at least should not) be released in alpha and beta versions that get upgraded later, without thorough testing of their ability to handle complex ethical situations and speed-control scenarios. This isn't a smartphone, where a patch will improve the battery life or keep an app from crashing. If the software is faulty and needs corrective patches or system upgrades, property and lives could be at stake. I don't mean to sound alarmist; this isn't a sky-is-falling moment. I just think we need to ask hard questions about this technology. And I say "we" to mean not just the programmers and developers, because "we" are the ones who will be affected most by autonomous vehicle safety.

Between the ethical issues and the speed-control issues, I just don't know if we are ready for this. Maybe I am just being stubborn and resistant to change because I love to drive.

What do you think?




My Final Thought:



I didn’t want an NSX anyway…

The new NSX will start at $156K. So much for an attainable sports car, right?!


Love Your Car! See you next week!

–Lauren Fix

