Tesla and the vulnerability of autonomous vehicles
Your engine warning light has come on, and you’ve taken your car to the local garage. What is the first thing the mechanic will do? Lift the bonnet and look at the engine? Start the engine and listen for any audible signs of a problem? Odds are the first thing your mechanic will do is plug a laptop into your car’s on-board diagnostic port and check for the fault that way.
It is a sign of the times that mechanics increasingly need IT skills and that our cars are essentially mobile computers with wheels and an engine, packed with technology that makes them unrecognisable from the cars we drove only a decade ago. Motor vehicles truly have come a long way, and with the dawn of the autonomous vehicle on the horizon, there is clearly more change to come.
The analogy of a car being a mobile computer requires careful consideration. If our cars are indeed mobile computers, does this mean we need to install antivirus protection on our cars, can our personal data be stolen from our cars, and more worryingly, can our cars be compromised? The issue of cyber security in the context of a car has never been more important than it is now.
In the last few weeks there have been reports in the press of a team of hackers from a Chinese security company taking control of a Tesla Model S remotely from a distance of 12 miles. They were able to access the car’s controller area network (also known as the CAN bus), which connects a modern vehicle’s systems. The hackers were initially able to take control of the indicators, windscreen wipers and dashboard display units; they could open the doors and the boot whilst the car was in motion, and move the seats backwards and forwards. Alarmingly, they were even able to overcome Tesla’s “gateway” system and gain control of the car’s safety-critical driving systems, enabling them to control the brakes and adding a more sinister dimension to the hack.
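Part of what makes CAN bus access so powerful is the protocol’s simplicity: a classic CAN frame is just an identifier and up to eight data bytes, with no built-in sender authentication, so any node with bus access can transmit any message. As a minimal illustration (not any manufacturer’s actual code, and using an invented example frame), decoding a Linux SocketCAN-style frame needs only Python’s standard library:

```python
import struct

# Linux SocketCAN frame layout: 32-bit arbitration ID, 8-bit data length,
# 3 padding bytes, then 8 data bytes (16 bytes total). Classic CAN has no
# sender authentication: any node on the bus can emit any identifier.
CAN_FRAME = struct.Struct("<IB3x8s")

def decode_frame(raw: bytes):
    """Unpack a raw 16-byte frame into (arbitration ID, payload)."""
    can_id, dlc, data = CAN_FRAME.unpack(raw)
    return can_id & 0x1FFFFFFF, data[:dlc]  # mask off flag bits, trim to length

# Invented example frame: ID 0x2F0 with 2 data bytes.
raw = struct.pack("<IB3x8s", 0x2F0, 2, b"\x01\xff" + b"\x00" * 6)
can_id, payload = decode_frame(raw)
print(hex(can_id), payload)  # 0x2f0 b'\x01\xff'
```

The absence of any authentication field in that layout is precisely why the “gateway” separating the infotainment systems from the safety-critical bus matters so much.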
In this particular case, Tesla were fortunate, as this was a so-called ‘ethical hack’: the hackers were looking for holes in the car’s IT security and immediately reported their findings to Tesla. To Tesla’s credit, the company acted quickly, issuing an over-the-air software update to its vehicles to address the issue whilst taking immediate steps to inform its customers of the security breach.
It’s not just Tesla that has fallen foul of cyber attackers. There are anecdotal reports of a mainstream manufacturer neglecting to take sufficient security precautions with its over-the-air software update system. It is reported that this manufacturer used the unencrypted HTTP protocol rather than the more secure HTTPS, leaving its vehicles’ computer systems unsecured and ripe for attack.
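The danger with unencrypted updates is that a payload fetched over plain HTTP can be altered in transit. A standard defence, sketched here purely for illustration (the function names and firmware bytes are invented, not any manufacturer’s actual process), is for the vehicle to reject any update whose cryptographic digest does not match one published through a trusted channel:

```python
import hashlib
import hmac

def update_is_authentic(firmware: bytes, expected_sha256_hex: str) -> bool:
    """Accept an update only if its SHA-256 digest matches the trusted value."""
    actual = hashlib.sha256(firmware).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(actual, expected_sha256_hex)

genuine = b"firmware v2.0"                              # invented payload
tampered = b"firmware v2.0 + malicious payload"         # attacker-modified
trusted_digest = hashlib.sha256(genuine).hexdigest()    # published securely

print(update_is_authentic(genuine, trusted_digest))   # True
print(update_is_authentic(tampered, trusted_digest))  # False
```

Real over-the-air systems typically go further, using digital signatures so that even the digest itself cannot be forged, but the principle is the same: transport security (HTTPS) and payload verification together, not one without the other.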
In February last year, BMW responded to reports of a security flaw, which potentially allowed hackers to unlock some of its vehicles, with an over-the-air security patch, in much the same way as Tesla did.
These incidents have served to highlight weaknesses which, if exploited by an individual or group with malevolent intent, could have particularly chilling consequences. There are already many examples in criminal law of cars being used as weapons. If ‘ethical hackers’ are able to control the brakes and steering of a vehicle, what sort of carnage could a hacker with a sinister motive achieve?
In a world where our cars connect over the air with manufacturers and other third parties, they are vulnerable to the same cyber attacks as our home computers. Throw into the mix that our cars’ computer systems control almost every safety-critical function, and that vehicles with increasingly autonomous features are handing more and more control to those computers, and it is not difficult to envisage hackers causing a multi-vehicle accident by hijacking connected cars.
It is well known that viruses are capable of migrating from one computer to another. This multiplies the risk when you consider that fully autonomous vehicles, and those with more advanced driver assistance systems, will need to communicate with smart roadside furniture to optimise journey times and establish the safe operation of an autonomous road network.
There is no doubt that manufacturers are taking their security obligations seriously, evidenced by the speed with which both Tesla and BMW issued software updates to patch the holes in their systems. Indeed, consumer confidence in fledgling autonomous technologies would be seriously eroded if such prompt action was not taken.
The Tesla incident, in particular, raises a number of interesting legal questions. First, is the manufacturer responsible for keeping its vehicle systems secure, and is it liable if it fails to do so? Alternatively, is the consumer responsible for ensuring their security systems are up to date, much as they are with their own computers and smartphones? Secondly, who is liable in the event of a hack, and how do we access sufficient information to establish what actually happened?
Volvo has said it will accept liability for accidents where its autonomous systems are at fault, but other manufacturers have not followed suit. Will this extend to a fault caused by a hack which could have been prevented had the manufacturer kept its security systems up to date?
There is undoubtedly a burden on manufacturers to ensure that their vehicles are secure, but some responsibility is also likely to pass to the consumer at the point of sale. Ultimately, it is not difficult to imagine a situation whereby liability could rest with the consumer, the manufacturer, the software programmer or a combination thereof.
Whilst motor insurers may view a hack as similar to the theft of a vehicle, such incidents could see insurers looking to recover monies paid out where a party has failed to maintain adequate security systems on the car. This could well erode consumer confidence and impact sales. It is perhaps worth noting that insurers are likely to exclude liability for a terrorist incident; a concern raised in the Department for Transport’s recent consultation.
However, the determination of liability in such a situation will be extremely complex and the data collected by the autonomous vehicle itself will prove to be crucial in understanding exactly what caused a car to malfunction, or what allowed the car to be hacked.
Manufacturers already receive data through connected devices installed in their cars, and this data will need to be shared and scrutinised to understand the cause of a breach of a car’s security systems (and, for that matter, to determine the cause of an accident). Some manufacturers already understand the importance of data in the determination of liability: Tesla, for example, immediately released the data collected by its vehicle to help regulators, insurers and other interested parties understand just what caused the hacking incident.
Setting aside the privacy, data protection and intellectual property issues (which are not insurmountable hurdles to data sharing), without access to the data collated by automotive systems, manufacturers, software developers, consumers and their insurers have little hope of understanding the cause of a hacking incident, and the determination of liability could be almost impossible. Not only could a failure to share this data impede an understanding of the cause of an incident, it could also impede the development of new security systems and thus compromise the security of our connected cars. Such a gap in security could have a catastrophic impact on society and road safety.
Cyber security is therefore an issue of paramount importance for the manufacturing industry and must continue to be a fundamental component of autonomous vehicle research and development. Manufacturers, software developers, consumers and insurers must also work together to facilitate cost-effective access to the data collated, allowing everyone involved to determine liability accurately and quickly.
Kurt Rowe, Emerson Wallwork, and Chris Ball are members of the Motor Technology Group at national law firm Weightmans LLP