“Luddites” or Human Rights?: Trade unions seek regulation of AI in the workplace
April 9, 2021
2 min read
What's going on here?
The Trades Union Congress (TUC), a federation of trade unions, has reported that there are significant gaps in UK legislation governing the use of artificial intelligence (AI) in the workplace.
What does this mean?
The past year has seen an enormous escalation in the adoption of high-tech management tools by UK businesses. With workers confined to their homes, employers have sought ways of monitoring their work remotely and setting performance targets.
According to the TUC, a body dedicated to the protection of British workers, legislation is required to regulate the use of AI tools that determine which employees are failing to meet performance targets and should therefore be made redundant. The TUC claims these uses of technology are unfair. It believes any redundancy decision should be taken by a human, as such decisions can then be challenged directly if they appear incorrect or discriminatory.
One of the lawyers responsible for the TUC report warned of “red lines” in the deployment of these technologies that employers should not cross. To do so, he claimed, would open companies up to discrimination claims.
The report suggests several other changes, not least legislation requiring businesses to consult with trade unions before deploying “high-risk technologies” and a right to “switch off” for home workers.
What's the big picture effect?
The proposed regulation is built on the idea that serious decisions must be justifiable by an employer. AI algorithms cannot always explain the reasons behind their decisions, and so cannot always clearly explain why an employee should be dismissed. When a human makes this decision, they can be challenged, and reasons for the dismissal must be provided clearly. Dismissal generally requires a high degree of proof that the employee has failed to live up to their responsibilities.
The TUC’s report does not demand a prohibition on the use of AI within HR departments, but asks that legislation be put in place holding humans responsible for the decisions that are made. AI could still be used to monitor staff and make recommendations, but ultimately a human would be required to make the final decision, and to be confident in defending it.
AI systems were once thought to be a step forward in areas such as employment and HR, where discrimination can run rampant. However, evidence has mounted that algorithms often reproduce the biases of the humans who program them. One example is the algorithm used by Uber Eats that led to BAME couriers being fired: its facial recognition software has been shown to be ineffective at correctly identifying people from ethnic minorities. The company claims decisions of this nature involve human input.
AI has undeniable potential to improve the lives of workers in almost every industry. However, at least in its current form, it cannot do everything a human can do. For now, the idea of AI being solely responsible for hiring and firing remains contested.
Report written by Joshua White