Robotic Baby Steps: Progress made on ethics in AI

July 12, 2020

2 min read

What's going on here?

Artificial Intelligence (AI) conferences are forcing researchers to think about the ethical consequences of their work.

What does this mean?

Much scientific research in AI is published through major annual conferences. The organisations that run these conferences are beginning to consider the ethical risks of AI development, which is important given their role as gatekeepers of research publication.

This latest development comes after the publication of several ethically questionable papers, for example one claiming to predict the likelihood that someone would commit a crime based solely on their face. Many conferences have therefore introduced measures requiring scientists to consider the ethical impact of their research. Some, for example, now reserve the right to reject submissions on ethical grounds. Others require a statement addressing the effect of the research on society, covering both immediate ethical concerns and future societal consequences, intended and unintended.

What's the big picture effect?

The aim of these publishing restrictions is to create incentives for ethical research and to curb work that may be harmful. Because publishing papers is vital to research careers, such demands allow the conferences to effectively steer the direction of AI research. This matters because, while the advancement of AI will undoubtedly bring innovation and efficiency to everyday life and change how businesses and industries operate, its use is not without risk: to individuals, to society, and even to humanity as a whole. Many AI scholars have recognised these dangers and called for greater regulation of the field. Having scientists consider the wider implications of their research is a step in this direction, and one we can expect to see much more of in the future.

However, while the need for regulation is apparent, it will be tempered by countries’ desire to be at the forefront of research and development. Countries around the world are competing to lead in AI because its capacity for superhuman accuracy and efficiency has the potential to supercharge economies. Restrictions that are too strict could therefore come at a high economic cost, so a balanced approach is needed.

One way to avert a cross-border race to the bottom on AI ethics would be international regulation, but this would be a highly challenging task. Not least, fields within AI are hard to define, whereas a ban on, say, nuclear weapons is relatively clear cut. In addition, not all countries take the same view on regulation.

Requiring researchers to address ethical considerations is the first step in what looks set to be a long process of regulation across the industry.

Report written by Julie Lawford
