Responsible AI: Developing AI for Everybody
July 14, 2023
Carlos Garcia-Vaso

As new and more powerful Artificial Intelligence (AI) technologies are developed, AI products are becoming increasingly integrated into our lives. From the algorithms that power search engines to facial recognition software, AI is already having a major impact on society, and it has the potential to revolutionize many aspects of our lives. However, with great power comes great responsibility. As AI continues to evolve, it is important that we ensure it is developed and used responsibly.

Why do we need responsible AI? 

Historically, the advancement of technology, even when developed with the best intentions, has had unintended and sometimes harmful consequences. Developing responsible AI can help anticipate and prevent the potential issues caused by AI. There are many reasons why developing responsible AI software matters; here are a few of the most important:

Avoiding bias and discrimination 

Most AI software relies on machine learning (ML) models to learn how to respond to inputs. These models are trained on data, known as training data. If the training data is not selected carefully, it can contain biases that are then introduced into the AI software. Selecting the training data is a very important but difficult task. Because of the large amount of data needed to train them properly, ML models usually learn from existing real-world data. This means that even accurate models can learn, and sometimes amplify, pre-existing biases in the data based on race, gender, religion, and so on.
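As a minimal illustration, one way to surface such representation gaps before training is to look at the share of records that falls into each demographic group. The sketch below assumes a hypothetical pandas DataFrame with "race" and "gender" columns; the column names and data are made up for illustration.

```python
# A minimal sketch, assuming a hypothetical pandas DataFrame with
# "race" and "gender" columns, of surfacing representation gaps in
# training data before a model is fit.
import pandas as pd

def representation_report(df: pd.DataFrame, columns: list[str]) -> None:
    """Print the share of records that falls into each group."""
    for col in columns:
        shares = df[col].value_counts(normalize=True).sort_index()
        print(f"\n{col} distribution:")
        print(shares.round(3).to_string())

# Made-up example data, heavily skewed toward some groups:
training_data = pd.DataFrame({
    "race": ["white"] * 700 + ["black"] * 200 + ["asian"] * 100,
    "gender": ["male"] * 600 + ["female"] * 400,
})
representation_report(training_data, ["race", "gender"])
```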

One well-known example of how unintended harmful biases can be introduced into AI is the COMPAS system, which was used by criminal justice agencies to estimate the likelihood that individuals would reoffend. The system used an AI algorithm that estimated an inmate's risk of reoffending based on criminal history and demographic information. The intention of this program was to create a fair system that was not influenced by the unconscious biases a person may have.

However, a study published in Science Advances in 2018 showed that the model was biased and discriminated against Black individuals. Moreover, the overall accuracy of the model was around 65%, which is comparable to the combined average results of untrained people. The study cites:

“Black defendants who did not recidivate were incorrectly predicted to reoffend at a rate of 44.9%, nearly twice as high as their white counterparts at 23.5%; and white defendants who did recidivate were incorrectly predicted to not reoffend at a rate of 47.7%, nearly twice as high as their black counterparts at 28.0%.” 

It was ultimately identified that the source of the bias was in the data used to build the AI algorithm. In the training data set, Black people were more likely to be incarcerated than white people. Because this fact was overlooked, the AI was not designed responsibly, and the model learned a racial bias as a result. Since the COMPAS algorithm might be used by parole boards and other judicial bodies, there is great potential for harm, which highlights the importance of responsible AI design.
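The disparities quoted above are group-level false positive and false negative rates. As a purely illustrative sketch, using made-up toy data rather than the COMPAS records, here is how such rates can be computed from a table of predicted and actual outcomes:

```python
# An illustrative sketch (not the actual COMPAS data): false positive and
# false negative rates broken down by a demographic group column.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    rows = []
    for group, sub in df.groupby(group_col):
        negatives = sub[sub["actual"] == 0]   # did not reoffend
        positives = sub[sub["actual"] == 1]   # did reoffend
        fpr = (negatives["predicted"] == 1).mean()  # wrongly flagged as high risk
        fnr = (positives["predicted"] == 0).mean()  # wrongly flagged as low risk
        rows.append({"group": group,
                     "false_positive_rate": fpr,
                     "false_negative_rate": fnr})
    return pd.DataFrame(rows)

# Toy example: a large gap between groups in either rate is the kind of
# disparity the study describes.
toy = pd.DataFrame({
    "group":     ["a", "a", "a", "a", "b", "b", "b", "b"],
    "actual":    [0,   0,   1,   1,   0,   0,   1,   1],
    "predicted": [1,   0,   1,   1,   0,   0,   0,   1],
})
print(error_rates_by_group(toy, "group"))
```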

Promoting fairness and equity 

Another troubling example lies in facial recognition software. Through various studies, researchers have demonstrated that some widely used AI-based facial recognition algorithms have trouble recognizing subjects who are female, Black, and between 18 and 30 years old. This can lead to degraded experiences for users in those categories, and even to biases against those users. The problem here seems to be that the training data is not representative of all the users of the facial recognition algorithms, reflecting possible oversights during data collection, inadequate sampling, poor design, or budgetary limitations. Regardless of the cause, the potential for harm is real.

Responsible AI practices can help ensure that AI systems are designed to be fair and to promote equity among their users. The idea behind fairness is that AI systems should be beneficial to everyone, not just a select few, and they should treat everyone fairly and impartially. However, fairness is far from a solved problem, as the examples above show.

For fair, responsible AI design, we should make sure the training data is sampled in a way that is representative of the users. For example, if the model will be used by people of all ages but the training data only covers young adults, the resulting model will likely not be fair. To train a fair model, developers should make sure the training data set includes data for people of all ages, as sketched below.
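As a minimal sketch of that idea, the example below assumes a hypothetical DataFrame with an "age_group" column and downsamples every group to the size of the smallest one, so that no single group dominates training. The column names and data are invented for illustration.

```python
# A minimal sketch, assuming a hypothetical "age_group" column, of
# rebalancing training data so each age group is equally represented.
import pandas as pd

def balance_by_group(df: pd.DataFrame, group_col: str, seed: int = 42) -> pd.DataFrame:
    """Downsample every group to the size of the smallest group."""
    smallest = df[group_col].value_counts().min()
    return (
        df.groupby(group_col, group_keys=False)
          .apply(lambda g: g.sample(n=smallest, random_state=seed))
          .reset_index(drop=True)
    )

# Made-up data skewed toward young adults:
data = pd.DataFrame({
    "age_group": ["18-30"] * 800 + ["31-50"] * 150 + ["51+"] * 50,
    "feature":   range(1000),
})
balanced = balance_by_group(data, "age_group")
print(balanced["age_group"].value_counts())  # 50 records per group
```

Downsampling is only one option; in practice, collecting additional data for underrepresented groups is usually preferable to discarding records.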

Protecting privacy and security 

AI systems collect and process a lot of personal data, such as images of faces, age, and other demographic information. That data is used to train ML models that provide a service to users. However, the use of personal user data raises privacy and security concerns. If this data is not properly protected, it could be used for malicious purposes, such as identity theft.
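One common safeguard, sketched below with hypothetical field names, is to pseudonymize direct identifiers with a keyed hash before records are stored or used for training, so raw identities are not exposed downstream. Pseudonymization alone is not full anonymization, but it illustrates the kind of protection responsible AI practices call for.

```python
# A minimal sketch (hypothetical field names) of pseudonymizing a direct
# identifier with a keyed hash before the record is stored or used for
# training. The key should come from a secrets manager, not source code.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-from-a-secrets-manager"

def pseudonymize(value: str) -> str:
    """Return a keyed, irreversible token in place of a direct identifier."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"user_id": "jane.doe@example.com", "age": 29, "city": "Austin"}
safe_record = {**record, "user_id": pseudonymize(record["user_id"])}
print(safe_record)
```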

Moreover, AI systems trained on personal data can be used to track people’s movements, monitor their online activity, and even predict their behavior. For example, facial recognition algorithms are currently being used in China to track millions of citizens’ daily lives, which is already raising privacy concerns. Building responsible AI can help protect the privacy and security of its users. To develop responsible AI, we must balance security, convenience, and privacy.

What can we do about it? 

Alongside the rapid advances in AI over the last few years, there has also been a great deal of research into how to solve and prevent the problems AI can cause. The huge potential benefits of AI are clear, but researchers are looking for ways to balance those benefits against the potential to cause harm. Many of the companies leading the development of AI technologies, like Microsoft and Google, are also invested in research to ensure the AI systems they develop are fair and safe and protect users’ privacy. In other words, they are invested in the development of responsible AI.

To guide organizations in developing responsible AI, Microsoft has defined six guiding principles:

  1. Fairness: AI systems should treat all people fairly.
  2. Inclusiveness: AI systems should empower everyone and engage people.
  3. Reliability and Safety: AI systems should perform reliably and safely.
  4. Transparency: AI systems should be understandable.
  5. Privacy and Security: AI systems should be secure and respect privacy.
  6. Accountability: People should be accountable for AI systems.

You can find more information on Microsoft’s principles for responsible AI at https://www.microsoft.com/en-us/ai/responsible-ai.


Ready to talk to an expert about how to establish responsible AI practices in your organization?