The bad habits we need to kick before driverless cars hit the road

Vehicles are increasingly being driven by motherboards, not motors. Anti-lock brakes, navigation, fuel injection, even seat memory settings rely on some type of computer to operate. The ever-increasing numbers of sensors and cameras helping us park, reverse and merge make driving much safer and are paving the way for the dawn of driverless cars. These advances, while enough to start the engines of anyone with a passing interest in technology, lead to some troubling consequences, namely the potential for cars to be hacked because of bad habits consumers and developers have already picked up with today’s devices.

Around the world, governments are backing the self-driving sector. The US House of Representatives, in a rare show of bipartisanship, recently passed the SELF-DRIVE Act, which would ease regulations and allow manufacturers to test their vehicles on public roads sooner. Earlier this year the UK Government introduced the Vehicle Technology and Aviation Bill, which aims to iron out safety issues such as who is liable in the event of an autonomous accident. In Australia, South Australia took pole position when its State Government introduced legislation to permit on-road trials.

It is clear governments around the world are taking the technology seriously. This is no longer navel gazing, and the enthusiastic embrace the sector has received can – in large part – be chalked up to safety improvements. The U.S. Department of Transportation’s National Highway Traffic Safety Administration, for example, released a report last year which found that “ninety-four percent of crashes can be tied back to a human choice or error”. With these figures in mind, it’s easy to see why autonomous vehicles are so appealing.

While the technology can slash the number of accidents caused by mindlessness, one major concern is the potential for accidents caused by maliciousness. Hacking is obviously the first thing that springs to mind, but some threats are more low-tech. For example, researchers recently found that a handful of home-printed stickers can confuse the cameras upon which self-driving cars rely, tricking the car into thinking a ‘Stop’ sign is a speed limit sign.
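These sticker attacks exploit the same weakness as so-called adversarial examples: image classifiers can be derailed by small, carefully chosen changes to their input. As a rough illustration of the underlying idea (not the researchers’ actual patch technique, which optimises a visible sticker for real-world angles and lighting), the sketch below applies the classic fast gradient sign method, assuming a hypothetical differentiable PyTorch classifier:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    # Nudge every pixel a small step in the direction that most
    # increases the classifier's loss; often enough to flip the
    # predicted class while the change stays invisible to humans.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage: 'classifier' is any torchvision-style model,
# 'img' a (1, 3, H, W) image tensor, 'lbl' the true class index.
# fooled = fgsm_perturb(classifier, img, lbl)
```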

Ultimately, anything relying on computers can potentially be hacked. There was the much-publicised case of security researchers who hacked into Jeep’s Uconnect system, giving them control over braking and steering and even the ability to kill the engine. The potential for hackers to wreak havoc on these vehicles is being taken seriously enough that a recent parliamentary enquiry in Australia called on the Federal Government to task the National Cyber Security Strategy with investigating automated vehicles (and associated transport systems) to address potential vulnerabilities.
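Part of what made the Jeep attack so powerful is that, once the researchers pivoted from the internet-connected head unit onto the vehicle’s internal CAN bus, safety-critical components accepted their messages without question: classic CAN frames carry no sender authentication. The sketch below, using the open-source python-can library on a Linux SocketCAN interface, shows how little it takes to put a frame on such a bus (the interface name, arbitration ID and payload are hypothetical):

```python
import can

# Open a Linux SocketCAN interface (e.g. a USB-to-CAN adapter on can0).
bus = can.interface.Bus(channel="can0", interface="socketcan")

# Any node on a classic CAN bus can transmit any arbitration ID, and
# receivers have no way to verify who sent a frame. The ID and data
# below are made up for illustration, not a real vehicle command.
frame = can.Message(
    arbitration_id=0x123,
    data=[0x01, 0x00, 0x00, 0x00],
    is_extended_id=False,
)
bus.send(frame)
```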

Craig Smith, who runs the Car Hacking Village at DEF CON and is research director of transportation security at Rapid7, rightly points out that because self-driving cars use so many different sensors and cameras to deliver commands and drive the car, compromising a single sensor would not be enough to commandeer the vehicle. The sensors act independently and do not trust one another.
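To picture why that independence matters: if each sensor produces its own estimate and the car only acts when a quorum of sensors agree, a single spoofed input gets outvoted. The sketch below is a deliberately simplified, hypothetical consensus check, not any manufacturer’s actual fusion logic:

```python
from statistics import median

def fused_distance(readings, tolerance=2.0):
    """Fuse independent range estimates (e.g. radar, lidar and camera,
    in metres), accepting them only if a majority agree."""
    mid = median(readings)
    agreeing = [r for r in readings if abs(r - mid) <= tolerance]
    # Require a strict majority within tolerance of the median, so a
    # single compromised sensor cannot drag the fused value around.
    if len(agreeing) * 2 <= len(readings):
        raise ValueError("sensors disagree: fall back to a safe stop")
    return sum(agreeing) / len(agreeing)

# Example: the camera is spoofed to report 80 m; radar and lidar agree.
print(fused_distance([23.8, 24.1, 80.0]))  # ~23.95, outlier rejected
```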

All this isn’t to say we should halt the research and innovation driving the autonomous vehicle industry. Rather, it is a call for both developers and future users to put security front and centre. For example, many app developers today sacrifice security, treating speed to market or user experience as their primary concerns. Meanwhile, users fall victim time and time again to strains of malware that would not have worked if they had simply updated their software or operating systems.

If we’re going to put our trust in automated vehicles, these are habits we need to kick. And, if the legislation being introduced around the world is any indication, we need to kick them quickly.  
