Legal (AI)d: The findings from the Law Society President's keynote speech on AI in law

August 1, 2019

3 min read

What's going on here?

The Law Society has been studying the use of Artificial Intelligence (AI) in the legal services sector. In June this year, Law Society president Christina Blacklaws delivered the findings in her keynote speech.

What does this mean?

AI is becoming an increasingly important asset in law, but it is not without its dangers. The Law Society took an in-depth look at the use of AI in law and found that, although it brings many benefits, practice around it could be vastly improved.

Turning first to where AI is used, the Society found that it is deployed heavily across the criminal justice system. Examples include photographic/video analysis, DNA profiling, predictive crime mapping, mobile phone data extraction tools and social media intelligence.

One of the key reasons for its heavy use, according to President Blacklaws, is political pressure for police work to be proactive rather than reactive. There is also a desire to improve efficiency: AI can help here by automating tasks, such as filling in forms, far quicker than a human could.

Further benefits include the ability to tailor rehabilitation for offenders through enhanced analysis, and greater consistency across the justice system, since the same algorithms are used in multiple departments.

However, the report also found that there are large areas for improvement. One key problem is that there is often not enough oversight of the AI’s decisions. Many of these tools are opaque systems, making it hard for humans to assess whether a decision was ‘legitimate, justified or legal.’ Concerns were also raised about whether facial recognition software and similar uses of AI had an explicit lawful basis.

Further, the Society found that algorithms were not being critically assessed, creating risks to the justice system and the rule of law. Though the view is not universal, the Society believed that the technology is now out of its experimental phase and can be critically assessed. Where this assessment is insufficient, as is currently the case, three potential risks arise:

1) Risk to Fairness: The data the AI uses to inform its decisions (its training data) is almost certainly biased. If we have over-policed or under-policed certain sections of society in the past, AI trained on those records will reproduce the same patterns in its decisions (a minimal sketch after this list illustrates the loop). And if we cannot see into its workings, it is impossible to tell on what basis its judgments were made.

2) Risk to Human Rights: AI can take seemingly non-sensitive data, such as posts from social media accounts, and make judgments about an individual based on it. This could be considered an invasion of privacy.

3) Risk to the effective delivery of justice and the rule of law: Concerns were expressed over a dehumanised justice system. This calls into question the nature of a fair “human” trial, and cases may come before the European Court of Human Rights for contravention of Art. 6 of the ECHR (the right to a fair trial).
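To make the fairness risk concrete, here is a minimal, hypothetical sketch (in Python, with entirely invented figures, not drawn from the Law Society’s report) of how a predictive crime mapping tool trained on historical patrol records can entrench past over-policing:

```python
# Hypothetical sketch of the over-policing feedback loop. All figures are
# invented; real predictive crime mapping tools are far more complex.

# Cumulative recorded incidents: area A was historically over-policed,
# so twice as much of its crime was ever recorded.
records = {"A": 120.0, "B": 60.0}
true_rate = {"A": 100.0, "B": 100.0}  # the underlying crime rates are identical

for year in range(5):
    total = sum(records.values())
    # A naive predictive model allocates patrols in proportion to past records.
    patrols = {area: records[area] / total for area in records}
    for area in records:
        # Incidents recorded this year scale with patrol presence.
        records[area] += true_rate[area] * patrols[area]
    share_a = records["A"] / sum(records.values())
    print(f"year {year}: area A's share of recorded crime = {share_a:.2f}")

# Output: area A's share stays at 0.67 every year. The historical 2:1 skew
# never corrects itself, even though the true crime rates are equal.
```

Because the model only ever sees its own records, the initial skew never washes out; and with an opaque system, nothing would flag that the “predictions” merely echo historical patrol decisions.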

What's the big picture effect?

With so many key issues surrounding technology in law, it is clear that practice around AI needs to be improved, especially as firms become increasingly reliant on it. To this end, the Law Society made a number of recommendations. As we continue to use AI, these recommendations may be crucial in ensuring individuals’ safety and privacy.

The first major recommendation was a call for greater oversight. The Society suggested a new code of practice for algorithms and the creation of a national register of algorithms, showing what characteristics each has. Further, the Society recommended that the Centre for Data Ethics and Innovation be put on a statutory footing to examine these algorithms. This would help make algorithms more transparent.

The other key recommendation was that facial recognition must have a clear legal basis. To ensure this, the Society wants the Biometrics Commissioner to be given more powers and a new oversight body to be created to make sure facial recognition isn’t being misused. It also wants this legal basis to be made public, including the factors the systems use in their decision-making.

If these recommendations were followed, the risks of AI would be greatly reduced, but for now they remain non-binding suggestions.

Report written by Luke Hatch
