Old 30 Mar 2016, 01:01 (Ref:3628592)   #49
Paul D
Veteran
Join Date: Feb 2010
England
Southport, Merseyside
Posts: 827
Paul D should be qualifying in the top 10 on the grid
Quote:
Originally Posted by chillibowl
'the internal sensor has detected an onboard explosive device...it will now drive you to a secluded area and self destruct. thank you for choosing Google.'
Nice one!

But seriously, on the subject of autonomous cars - I'm afraid it has to be a no, and here's why:

For as long as autonomous cars have to share the roads with humans, it can never work. Why? Because no matter how sophisticated your software may be, you can't program it to make a moral or ethical decision. It's been said that these cars will be incapable of making an error. Sorry, but not with current technology they won't be. Computer crashes, glitches, momentary lock-ups - all things that happen with alarming regularity.

Don't even get me started on hackers! By definition, these autonomous cars will have to be connected to the internet, and the moment they are, they're hackable. The hackers would have a field day. They're already breaking into some of the latest cars, bypassing security systems, starting engines and so on. Imagine the fun they'd have with driverless cars!

Someone else said it might work if all vehicles were driverless, because then there would be no human element to err, and the autonomous cars wouldn't be capable of error. Well, even setting aside what I've already said above, there are still a few sticking points. Firstly, nobody reading this will live long enough to see a scenario where all vehicles are driverless - it just isn't going to happen in our lifetime. Secondly, even if it did, vehicles aren't the only road users, are they? What about pedestrians, cyclists, animals? For as long as there are humans sharing the same environment as the autonomous cars, there is the potential for a human to do something unpredictable that the computer hasn't anticipated (for which read: been programmed for). And since it simply isn't practical to separate all humans from all roads, this will always be a problem.

Now, I haven't even mentioned the legal black hole that autonomous cars will create! Just imagine it - the lawyers must be rubbing their hands together already at the very thought of it! Picture this scenario: you're happily being driven along a hilly road in your autonomous car, sitting back and enjoying this month's edition of 'Computer Geek Monthly' when, without warning, a woman pushing a twin buggy shoves it out into the road to cross, without looking - right in front of your motorised computer.

In the split second it has to consider things, your computer, capable as it is of a gazillion calculations a second, decides that braking in a straight line will not allow it to stop before hitting the woman: likely result, the woman and possibly two babies die. So it considers other options. Unfortunately, there are half a dozen cyclists riding three abreast oncoming on the other side of the road, and the calculations show that swerving that way to miss the woman and pram will mean hitting the cyclists: casualties calculated at two or possibly three cyclists dead, with one or two others seriously injured.

So, now running out of options, the computer realises there is only one realistic option remaining: swerve the other way, away from the oncoming cyclists and away from the woman and pram - but this means leaving the road to the left where, sadly, just beyond the pavement but before the car can be stopped, there is an almost sheer drop into a river. Likely outcome: you die! Despite this, the computer decides this is the best course of action, because one death (yours) is a better outcome than the deaths of a woman and children, or the deaths of several cyclists. Result: the car crashes off the embankment into the river and you die, but the woman, children and cyclists are all unharmed. The computer considers this the best possible outcome. Do you?
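Strip away the drama, and what the computer is doing in that split second is just a minimum-harm calculation. Here's a toy sketch of that logic - the options, weightings and casualty estimates are entirely invented for illustration, and I'm not claiming this is how any real car is programmed:

Code:
# A toy sketch of the minimum-harm sum described above.
# All options, weightings and casualty numbers are made up
# for illustration - this is nobody's actual algorithm.

# each option: (manoeuvre, expected deaths, expected serious injuries)
options = [
    ("brake in a straight line", 3.0, 0.0),        # woman + two babies
    ("swerve right into the cyclists", 2.5, 1.5),
    ("swerve left off the embankment", 1.0, 0.0),  # the occupant: you
]

def harm(option):
    """Score an option; deaths weighted far above serious injuries."""
    _, deaths, injuries = option
    return 10 * deaths + injuries

best = min(options, key=harm)
print("Computer's choice:", best[0])
# -> Computer's choice: swerve left off the embankment

And notice what's buried in there: somebody had to choose that weighting - how many injuries equal a death, whose life counts for what - in an office, long before the woman ever stepped off the kerb.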

What if the ensuing inquiry reveals that the woman with the pram is an alcoholic and was completely p*ssed that day when she stepped out without thinking? As a result, she is prosecuted and found guilty of gross negligence (or whatever) - not gonna be much comfort to you, is it?

You might say that's an extreme example, and maybe so, but the fact is that these life-or-death decisions do arise, daily, on our roads. You may also argue that in the same scenario with a human driving, someone still had to die. Yes, agreed, but here's the difference, and it's a huge one: to err is human, and whatever decision a human made in that split second is difficult to criticise, because, as humans, we have emotions, morals, ethics and self-preservation instincts - and the humans involved in such events simply have to live with the decision they made.

But hand that decision to a computer - and this is where the lawyers will be getting excited - and all of a sudden there's a huge (read: wealthy, and very public) company behind that decision, and it's been taken by a software engineer in a premeditated manner, not by someone in a split-second life-or-death situation. And guess what? All of a sudden there's culpability, at least enough for the lawyers to get in a fight over it, and then it's all going to get very messy, very quickly, I reckon.

So, I think I'll just stick to being in charge of my own fate, thanks, whatever it may be! I won't be going in any autonomous cars...

Last edited by Paul D; 30 Mar 2016 at 01:10.
__________________
"Light travels faster than sound - that's why, at first, some people appear bright... until you hear them speak!"