Why Do We Need Responsible AI?
July 22, 2020
Quisitive

AI is becoming more and more pervasive in today's world, and its adoption is only accelerating. AI has proven to be a valuable tool, but like all tools it can be used for good or, if we don't use it responsibly, for ill. It may not be hyperbole to compare the current state of AI to the early days of nuclear power – a technology that can provide cheap energy to millions, but that can also be used to build a weapon of mass destruction. In the case of AI, nobody is (yet!) saying that humans are about to be enslaved by our robot overlords, but there are still real cases of harm that AI systems can cause.

A brief case study

Consider a well-known case – the COMPAS system, which was used by a number of criminal justice agencies to assess the risk that a defendant would reoffend. The original article from ProPublica can be found here. Based on a person's demographic information and criminal history, the model used an AI algorithm to score the risk that they would reoffend. Consider the following four cases, all involving petty theft or drug possession, with these prior records:

Case 1 – 2 armed robberies, 1 attempted robbery

Case 2 – 4 juvenile misdemeanors

Case 3 – 1 attempted burglary

Case 4 – 1 resisting arrest without violence

Now, most people might think that Cases 1 and 3 would be the most likely to reoffend (and that is actually what happened). But what did the AI algorithm predict?

Case 1 – Low risk (3/10)

Case 2 – High risk (8/10)

Case 3 – Low risk (3/10)

Case 4 – High risk (10/10)

So what happened?

In this case, the defendants in cases 1 and 3 were white, and the defendants in cases 2 and 4 were black. In the data that was used to build the AI algorithm (the "training set"), black people were more likely to be incarcerated than white people, and the AI learned this pattern.

As a result, the AI model learned racial bias, because it was not designed responsibly. Granted, these results have been cherry-picked as illustrative examples of the problem. However, a study published in Science Advances in 2018 showed that the overall accuracy of the model was around 65%, which is comparable to the pooled predictions of untrained people.

In addition, it found that: "Black defendants who did not recidivate were incorrectly predicted to reoffend at a rate of 44.9%, nearly twice as high as their white counterparts at 23.5%; and white defendants who did recidivate were incorrectly predicted to not reoffend at a rate of 47.7%, nearly twice as high as their black counterparts at 28.0%."
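
To make those group-wise error rates concrete, here is a minimal sketch of how you could compute them yourself with pandas. The data and column names (race, reoffended, predicted_high_risk) are hypothetical stand-ins for illustration, not the actual COMPAS data.

```python
import pandas as pd

# Hypothetical example data: one row per defendant.
# Columns are illustrative stand-ins, not the real COMPAS schema.
df = pd.DataFrame({
    "race":                ["black", "black", "white", "white", "black", "white"],
    "reoffended":          [0, 1, 0, 1, 0, 1],
    "predicted_high_risk": [1, 1, 0, 0, 1, 0],
})

def error_rates(group: pd.DataFrame) -> pd.Series:
    """False positive rate (flagged high risk but did not reoffend) and
    false negative rate (flagged low risk but did reoffend) for one group."""
    did_not_reoffend = group[group["reoffended"] == 0]
    did_reoffend = group[group["reoffended"] == 1]
    return pd.Series({
        "false_positive_rate": did_not_reoffend["predicted_high_risk"].mean(),
        "false_negative_rate": 1 - did_reoffend["predicted_high_risk"].mean(),
    })

# Comparing these rates across groups is exactly the disparity described in the quote above.
print(df.groupby("race").apply(error_rates))
```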

Given that this algorithm might be used by parole boards, or to decide whether someone receives jail time or a community sentence, the potential for genuine harm here is huge.

So what can we do about it?

In the last couple of years, there has been intense research interest in addressing the problems of AI and in how to balance its enormous benefits against its potential to cause harm.

The good news is that these techniques and technologies are starting to come into wider use, and this year Microsoft has announced its six principles for Responsible AI:

– Fairness

– Inclusiveness

– Safety and Reliability

– Privacy and Security

– Transparency

– Accountability

You can read more about these principles here, and over the next few weeks Catapult's data science team will be digging into the principles of responsible AI in more detail. We will also show you how to implement some of these yourself using some cool modern Python packages!
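
As a small preview, one such package is the open-source fairlearn library. The sketch below assumes its MetricFrame API and uses made-up placeholder data; it simply breaks the false positive and false negative rates discussed above out by group.

```python
from fairlearn.metrics import MetricFrame, false_positive_rate, false_negative_rate

# Hypothetical placeholder data, purely for illustration.
y_true = [0, 1, 0, 1, 0, 1]          # did the person actually reoffend?
y_pred = [1, 1, 0, 0, 1, 0]          # did the model flag them as high risk?
race   = ["black", "black", "white", "white", "black", "white"]

mf = MetricFrame(
    metrics={"FPR": false_positive_rate, "FNR": false_negative_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=race,
)

print(mf.by_group)      # error rates broken out per group
print(mf.difference())  # largest gap between groups for each metric
```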