Man vs. Machine

Google thinks it’s a safer driver than you, and it’s prepared to break the law to prove it. It emerged earlier this month that Google’s self-driving cars are pre-programmed to defy speeding laws when necessary to ensure driver safety.

Research has shown that it’s dangerous to obey speed limits when surrounding cars are travelling much faster, so Google’s auto-automobiles are designed to break speed limits by up to 10 mph (16 km/h) in certain traffic conditions. Google has been testing its cars extensively, racking up 700,000 miles on Californian roads and many more in its Matrix-like virtual simulation of California. The tech giant has recently been lobbying California’s state regulators to certify self-driving vehicles based on this virtually collected data rather than on practical testing, arguing that simulation allows a far greater variety of test scenarios.
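
To make the reported behaviour concrete, here is a minimal sketch of what a speed-selection rule of this kind could look like. It is purely illustrative: the function name `target_speed_mph`, its parameters, and the use of a median to estimate prevailing traffic speed are all assumptions, not Google’s actual logic; only the 10 mph cap comes from the reports.

```python
# Hypothetical sketch of a speed-matching rule like the one described above.
# Nothing here reflects Google's real control code; the names, parameters,
# and median heuristic are assumptions. Only the 10 mph cap is reported.

def target_speed_mph(speed_limit: float, surrounding_speeds: list[float],
                     max_overspeed: float = 10.0) -> float:
    """Pick a cruising speed: match prevailing traffic, but never exceed
    the posted limit by more than max_overspeed (10 mph in the reports)."""
    if not surrounding_speeds:
        return speed_limit  # no traffic to match, so just obey the limit
    # Treat the median of nearby vehicles as the "prevailing" traffic speed.
    ordered = sorted(surrounding_speeds)
    prevailing = ordered[len(ordered) // 2]
    # Only go over the limit when traffic itself is faster than the limit.
    return min(max(speed_limit, prevailing), speed_limit + max_overspeed)


if __name__ == "__main__":
    # Limit 65 mph, traffic flowing at ~78 mph: the car settles at 75 mph.
    print(target_speed_mph(65, [76, 78, 80]))   # -> 75
    # Traffic at or below the limit: the car stays at the limit.
    print(target_speed_mph(65, [60, 62, 64]))   # -> 65
```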

One natural question, which so far remains unanswered, is who is responsible if a speeding driverless car is involved in a collision: the driver, the manufacturer, or the software developer? Will Google cars be able to flirt their way out of a speeding ticket? The UK government has already committed to encouraging the development of driverless cars, but it will need to face up to these tricky questions and the accompanying legislative upheaval.

In a world where robots are smart enough to beat humans at Jeopardy!, cars drive themselves, and semi-automated drones are becoming more commonplace in both civilian and military settings, the legal questions surrounding the culpability of automated systems are becoming just as complex as the technical ones. There’s still a lot of work to be done in both law courts and tech labs to reach the “utopian society” of autonomous cars that Morgan Stanley predicts by 2026.

This article originally appeared in Moving World Wednesday.