
Man or machine... who's to blame?

Iain Garfield, a partner in BPE’s Commercial team, discusses the issue of liability when robots go wrong. In the past, this wasn’t much of an issue, but with autonomous, driverless cars the shape of the future, who is at fault when crashes occur?


American author and professor of biochemistry Isaac Asimov devised the Three Laws of Robotics. Although created for fiction, they have become widely cited in the technology sector and the media alike, and in reality they make a great deal of sense. For those who aren’t aware of them, the three laws of robotics are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.


These laws have been echoed in almost every robot film and book created since Asimov first set them out. Whether you’re a fan of Robby the Robot in Forbidden Planet, or Murphy and his prime directives in the 1980s classic RoboCop, what remains to be seen is whether the EU is about to add a fourth law to the established trinity.


This all becomes more real in the wake of recent news from Florida, where a driverless car failed to distinguish between a white lorry and the bright white sky behind it. Taking no evasive action, the car ploughed into the lorry, killing its occupant. The all-important question is: who is to blame? A recent report to the European Parliament provides some interesting thoughts on this.


Over the years, criticising decisions emanating from the EU has been something of a sport in parts of the British media, but this report, submitted to the European Parliament by a Luxembourg MEP, has sent ripples of concern through the technology sector. In short, it calls for robots to be held ‘personally’ liable for any damage caused by their actions and malfunctions.


The report recommends that damage caused by a robot should be assessed and, if the damage was caused by an error in manufacture or programming, the respective manufacturer or programmer should be held liable. If, however, the damage was caused by an autonomous decision made by the robot itself, then the robot should face the consequences of its actions, accepting punishment as if it were human. So are we looking at a future where prison inmates share cells with robots? In all honesty, this is unlikely, and while it makes for an amusing diversion, the concern underlying the report is a much more serious one.


In the Florida example, if the autonomous ‘brain’ of the car determined that no evasive action was required, then the car itself was at fault. If, however, it is decided that the programming of the robotic system failed to allow it to distinguish between like-coloured objects and backgrounds, then the programmer could be at fault.


Put into a wider context, a recent US study found that between 2000 and 2013, 144 people lost their lives during robot-assisted surgery, and more than 8,000 incidents of robotic malfunction were reported over the same period. To put those figures in perspective, 144 deaths amounts to fewer than one for every 10,000 of the 1.7 million robot-assisted operations carried out in the US in the second half of that period alone.


The draft EU law proposes that robots capable of autonomous thought would be required to be registered and to participate in a mandatory liability insurance scheme. Any party subsequently injured as a result of a robot malfunction would be entitled to bring a claim against that insurance fund.


What is certain is that nothing will happen in the immediate future. With autonomous driverless cars still very much in the testing phase, a great deal of research and legislative discussion will be required from all parties before any such scheme could become law.


While it may not happen soon, it’s surely only a matter of time before the EU or a national government passes a similar law. Autonomous robots are coming, and they will require legislative instruments to control, govern and potentially protect them.


This blog started with the laws of Isaac Asimov, and it will finish with thoughts from Bill Gates and Professor Stephen Hawking. The Microsoft founder has publicly expressed concern that robots could develop to the point where they are no longer under human control. Professor Hawking, for his part, has warned that “success in creating artificial intelligence would be the biggest event in human history – unfortunately, it might also be the last.”


Could the premise behind the Matrix trilogy actually prove true? Are robots destined to serve humans, or to be served by us?


Time will tell, and resistance may indeed be futile.


These notes have been prepared for the purpose of an article only. They should not be regarded as a substitute for taking legal advice.
