B for Bias: BPTC facial recognition exam software to put BAME students at a disadvantage

August 11, 2020

3 min read


What's going on here?

In August 2020, the Bar Professional Training Course (BPTC) exams will be held remotely and monitored using facial recognition software. This technology is intended to ensure fairness and prevent cheating, but its well-documented algorithmic bias means it is likely to disadvantage Black, Asian and Minority Ethnic (BAME) students.

What does this mean?

Passing the BPTC exams is a necessary step in the route to qualifying as a barrister in England and Wales. In light of the ongoing pandemic, this cohort’s exams will be conducted remotely. The facial recognition software that will be used to monitor them uses artificial intelligence (AI) to detect and verify the identity of the student in front of the webcam.

It is well documented that facial recognition technology commonly fails to detect faces with darker skin tones (see our earlier article on this). Recently, in the USA, a Black man was mistaken for a suspected criminal because of inaccurate police facial recognition technology. He was wrongfully arrested in Detroit and only released later, prompting the Detroit Police Department to admit that its software makes errors 96% of the time.

The unpredictability of facial recognition software has raised the concern that BAME students taking the BPTC exams will be more likely to encounter technical difficulties that could adversely affect their performance. Students argue that a technological disruption could make them lose their composure or heighten their nerves, which is enough to mean the difference between passing and failing in an already challenging and stressful exam. It has been alleged that the Bar Standards Board (BSB) decided to utilise this technology with its ideal candidate in mind: a white person who will pass the facial matching checks without a hitch. This decision is likely to undermine the BSB’s efforts to promote diversity, inclusion and social mobility at the Bar, a profession in which BAME individuals are vastly underrepresented (only 13.6% in 2019).

What's the big picture effect?

This story highlights the irony that the software being used to ensure fairness in the BPTC exams is inherently unfair due to algorithmic bias against darker skin. To build facial recognition software, a data set of labelled images, some containing faces and some not, is used to train a model to recognise what a face looks like. If that data set under-represents Black people, as is often the case in the white-dominated Western tech industry, the model never properly learns to recognise their faces, and the result is algorithmic bias.
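To make that mechanism concrete, here is a deliberately simplified, hypothetical sketch. Synthetic numbers stand in for face images, and the group labels, proportions and threshold are invented for illustration, not drawn from any real system; the point is only to show how an unrepresentative training set produces uneven failure rates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 1-D "image features": each group's faces cluster around a
# different value. Real systems use high-dimensional embeddings, but the
# mechanism illustrated here is the same.
def faces(n, group):
    centre = {"A": 2.0, "B": -2.0}[group]
    return rng.normal(centre, 1.0, n)

# Unrepresentative training set: 950 group-A faces, only 50 group-B faces.
train = np.concatenate([faces(950, "A"), faces(50, "B")])

# Toy "detector": model what a typical training face looks like, then accept
# anything whose likelihood clears a fixed threshold.
mu, sigma = train.mean(), train.std()
log_likelihood = lambda x: -0.5 * ((x - mu) / sigma) ** 2
threshold = np.percentile(log_likelihood(train), 5)

# Detection rate on fresh faces from each group.
for group in ("A", "B"):
    test = faces(5000, group)
    detected = (log_likelihood(test) >= threshold).mean()
    print(f"group {group}: detected {detected:.0%} of faces")

# Group A is detected almost every time; group B is rejected far more often,
# even though the detector was never told which group a face belongs to.
```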

Joy Buolamwini, a Black computer scientist renowned for her research on algorithmic bias, was tasked with building a robot that could play peek-a-boo. The generic facial recognition software she used could only recognise her face if she wore a white mask. This is yet another example of how racial inequality and discrimination are embedded in our society, with the potential to make Black people feel inferior and excluded.

Algorithmic bias is arguably even more damaging than human bias, as an algorithm can spread bias far more quickly and on a far larger scale than a human being. Moreover, there is a common misconception that outcomes produced by algorithms are more objective and factual than those produced by humans. As AI and algorithms become increasingly advanced and are deployed in more and more areas of daily life (unlocking our phones, monitoring our exams, aiding law enforcement), the consequences of that bias become more widespread and varied.

The first step in overcoming algorithmic bias is incorporating more diverse data sets into the machine learning process. The onus is on tech companies to ensure that their systems work for everyone.
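Checking whether a system does work for everyone need not be complicated. The sketch below is a hypothetical audit of the kind a provider could run on trial data before deployment; the group labels, counts and tolerance are illustrative assumptions, not figures from any real exam software.

```python
from collections import defaultdict

def failure_rates(results):
    """results: (group, passed) pairs from identity-verification attempts."""
    totals = defaultdict(int)
    failures = defaultdict(int)
    for group, passed in results:
        totals[group] += 1
        if not passed:
            failures[group] += 1
    return {group: failures[group] / totals[group] for group in totals}

def flag_disparities(rates, tolerance=0.02):
    """Flag groups whose failure rate exceeds the best group's by more than tolerance."""
    best = min(rates.values())
    return {group: rate for group, rate in rates.items() if rate - best > tolerance}

# Invented trial-run numbers, purely for illustration.
trial = (
    [("group A", True)] * 970 + [("group A", False)] * 30 +   # 3% verification failures
    [("group B", True)] * 880 + [("group B", False)] * 120    # 12% verification failures
)

rates = failure_rates(trial)
print(rates)                    # {'group A': 0.03, 'group B': 0.12}
print(flag_disparities(rates))  # {'group B': 0.12} -- a disparity to investigate before deployment
```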

Report written by Isobel Deane
