How AI changes the law

February 20, 2019
5 min
By: Editorial

In 2017, the humanoid robot Sophia, built by Hanson Robotics, was granted citizenship in Saudi Arabia. Suddenly, a robot had rights, could be punished and, had the event occurred in a more democratic country, might even have had the chance to vote. For some, it was just a witty marketing stunt (admittedly, she is fun). For others, it was a sinister reminder that a robot suddenly had more rights than most Saudi women, since she was allowed to travel and did not have to wear a headscarf (the excuse being that she is hairless).

At the same time, the European Union discussed some of the more serious implications of granting rights to robots. A 2017 report by the European Parliament raised the idea of granting “electronic personhood” to certain sophisticated robots. The implication was that robots could be given the right to protection, to earn money and even to be punished, just as humans, companies, states and even natural resources (such as a river in New Zealand) can be.

The statement caused an uproar and provoked fierce debate, mainly because it raises more questions than it answers. The idea of punishing robots was called “inappropriate”. As one professor pointed out, some “manufacturers were merely trying to absolve themselves of responsibility for the actions of their machines”. And a group of experts wrote an open letter warning about the “overvaluation of the actual capabilities of even the most advanced robots”. According to researchers at the Artificial Intelligence and Legal Disruption Research Group at the Faculty of Law, University of Copenhagen, we are far from having a sufficient legal framework to handle all the new technologies. (For Danish readers, see this Danish article.)

This week, Legal Tech Weekly takes a break from our discussion of how technologies influence the legal industry and how legal professionals practise law. Instead, we direct our attention to some of the implications that emerging technologies pose for the law itself. They are more far-reaching than you might think.


The responsibility gap

In 2004, Andreas Matthias identified the responsibility gap in his article “The responsibility gap: Ascribing responsibility for the actions of learning automata”. He noted that while the manufacturer of a machine is usually held responsible and liable for any damage it may cause, autonomous machines based on neural networks and machine learning create a whole other situation. Some artificial intelligence is programmed to act autonomously and to learn by observing its environment, which is essentially outside the manufacturer's control. The manufacturer is therefore not able to predict the behaviour of the machine.

On the other hand, making the owner of the product liable for the machine's behaviour may be equally unjust, as the owner cannot control the machine either. This creates a gap where no one has causal responsibility, making liability issues incredibly complex. According to Hin-Yan Liu from the above-mentioned research group, we may even reach a point where our core idea of personal responsibility breaks down.

Most artificial intelligences are not isolated entities. They work more as overlapping systems and connected networks. Take autonomous vehicles: these will not be isolated entities that drive like a human being, but swarms of vehicles that constantly communicate to optimise traffic. In such cases, legal systems will need fundamental rethinking, as it will be hard, if not impossible, to define a single entity as responsible.

The rise of killer robots

This responsibility problem will exist at many levels. It has the potential, for example, to destabilise intellectual property law in instances where robots make new inventions. As Hin-Yan Liu wrote in a 2015 article, autonomous weapon systems (AWS) might also challenge international humanitarian law. An AWS cannot be categorised as either a soldier or a weapon, so there is a risk that this yet-to-be-invented technology will land in a no-man's-land between the two categories.

The big question is what happens if a killer robot goes rogue among innocent civilians. Should the manufacturer be held liable, or the commanding general? Hin-Yan Liu noted that, “On the one hand, assigning responsibility to artificial agents is unsatisfying and seems to entail impunity. On the other hand, imputing responsibility to proximate individuals raises the risk of scapegoating the individuals associated with these operations.”

The dilemma has led to numerous campaigns to ban killer robots and demands that there always be a human in the loop who remains in control. There is, however, a problem with automation bias: humans tend to favour a machine's decision over their gut feelings, a tendency evident in the tragic phenomenon of “death by GPS”.

Some also believe that it may be too late, as China and the USA are too far along in the process to accept a ban. Today, AWS are used in defensive systems such as missile defence. And some states are well underway in creating robots that can detect, attack and eliminate enemies.

Technological management

From self-executing smart contracts to what Roger Brownsword of the Dickson Poon School of Law, King's College London, calls “technological management”, technologies will be able to execute the law. We will see the rule of law “applied to a regulatory environment that is technologically managed rather than rule-based”. Imagine, for instance, programming laws into a drone so that it simply cannot fly into airport airspace. What is impermissible will also be impossible. Instead of having a set of norm-based laws that everyone can violate or comply with in exercise of their free will, it will be possible to regulate in advance to obtain the desired behaviour. A minimal sketch of what that could look like follows below.
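To make the idea concrete, here is a minimal, purely illustrative sketch in Python of a rule enforced by code rather than by a norm: a drone's flight controller refuses any command that would take it into restricted airspace. The coordinates, exclusion radius and function names are invented for the example, not taken from any real drone firmware or regulation.

```python
import math

# Hypothetical illustration: a legal rule compiled into the drone's control
# software, so a command into restricted airspace is rejected before it runs.
NO_FLY_ZONES = [
    # (latitude, longitude, exclusion radius in km) -- values assumed for the example
    (55.618, 12.656, 8.0),  # roughly Copenhagen Airport, 8 km radius (assumed)
]


def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates (haversine formula)."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def fly_to(lat, lon):
    """Execute a flight command only if the target lies outside every no-fly zone."""
    for zone_lat, zone_lon, radius in NO_FLY_ZONES:
        if distance_km(lat, lon, zone_lat, zone_lon) < radius:
            raise PermissionError("Target is inside restricted airspace: command refused.")
    print(f"Flying to ({lat}, {lon})")


fly_to(55.676, 12.568)    # central Copenhagen: permitted
# fly_to(55.618, 12.650)  # near the airport: raises PermissionError
```

The point of the sketch is not the geometry but the shift it illustrates: the prohibited flight is not punished after the fact, it is made impossible in advance.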

This trend is further emphasised by the rise of big data. The Danish municipality of Gladsaxe is already using algorithms to identify children at risk of abuse. It gathers and processes data about, for example, the employment status of parents and missed dentist appointments, and uses AI's pattern-recognition ability to flag potential cases of abuse. But why stop there? Using random forest algorithms could potentially let a government predict crimes before they happen and identify future criminals. The big question is what you would do with such information. A sketch of the underlying technique follows below.
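For readers unfamiliar with the technique, here is a minimal, purely illustrative sketch of how a random forest risk model is built with scikit-learn. The features, labels and data are synthetic and invented for the example; they do not describe Gladsaxe's actual system or any real dataset.

```python
# Purely illustrative: a random forest trained on synthetic, made-up data.
# Neither the features nor the labels reflect any real system or population.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Invented features per family: [parent unemployed (0/1),
# missed dental appointments last year, school absence days last year]
X = rng.integers(0, 10, size=(500, 3)).astype(float)
X[:, 0] = rng.integers(0, 2, size=500)   # unemployment flag is binary
y = rng.integers(0, 2, size=500)         # synthetic "at risk" labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# A new, hypothetical case: unemployed parent, 4 missed appointments, 12 absence days
case = np.array([[1.0, 4.0, 12.0]])
print("Predicted risk score:", model.predict_proba(case)[0][1])
```

The output is simply a probability. The legal and ethical questions the article raises start exactly where the code stops: what a public authority is allowed to do with that number.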


How AI changes the law

These are just some of the questions that the rise of artificial intelligence will raise. But some of the most fundamental discussions have not yet even started. Artificial intelligence has, for example, sneaked into the American criminal justice system via the backdoor, without proper discussion or full debate. For its part, the European Commission recently published its Draft Ethics Guidelines for Trustworthy AI. Although well-meaning, the draft, which calls for human-centricity, is at this stage an extremely fluffy text that is close to being completely meaningless.

That will change. Having autonomous and somewhat intelligent machines operate freely in society will force us to create new legal categories and rethink questions of agency, predictability and ownership. So not only will artificial intelligence disrupt the way lawyers work, it will also disrupt what lawyers are working on.

You are more than welcome to contact us with ideas or feedback at mb@contractbook.dk or through our contact form.


