Facing Facts: The Need for Regulation of Facial Recognition

June 18, 2019

What's going on here?

There have been growing calls for greater regulation of automatic facial recognition technology in order to prevent abuses of citizens’ privacy rights.

What does this mean?

One doesn’t need to look far to realise how widespread Automatic Facial Recognition (AFR) systems are. They are becoming increasingly integrated into today’s society, yet their regulation remains patchy. You may have encountered AFR through consumer services such as Facebook or the Face ID login on your smartphone. Governments around the world are also adopting AFR as an advanced surveillance tool, using it to speed up identification at airports or to match real-time CCTV footage against watch lists of persons of interest. A key example is Amazon’s AFR system ‘Rekognition’, which is currently being piloted by law enforcement agencies in the US. While the US lacks any concrete AFR regulation, Amazon itself has formally recommended that law enforcement use the technology only where there is a confidence rating of 99% or higher for a match.

What's the big picture effect?

The deployment of AFR raises two major concerns. Firstly, the technology is often used without citizens’ consent and remains largely unregulated. Secondly, AFR systems are frequently inaccurate and have consistently shown racial and gender biases, flagging innocent citizens as ‘suspicious’ and thereby entrenching discriminatory practices in law enforcement. For these reasons, AFR has attracted many critics, including civil liberty groups.

Attempts to regulate AFR are beginning to emerge, but adequate coverage remains an issue. In March 2019, a bipartisan bill was introduced in the US proposing greater consumer consent requirements for facial recognition. However, it focuses only on the risks posed by the commercial use of AFR and contains no provisions regulating the technology’s use in law enforcement. The EU will follow a stricter path, basing any legislative framework it creates for automated facial recognition on its General Data Protection Regulation (GDPR).

Until legislative frameworks catch up with rapidly advancing AFR technology, it is clear that companies developing AI facial recognition will continue to push the boundaries, potentially at the risk of people’s security.

Report written by Lina J
