How to save AI from humans?

Factors to consider before we build AI 2.0

Mischella Felix-Jayachandran
5 min read · Sep 22, 2021

The first generation of AI has picked up on human biases. Among the many disturbing cases of biased AI systems producing discriminatory outcomes, the most heartbreaking involved unfairly lengthened prison sentences, unfair credit card decisions, and skewed home appraisals. So, how does bias get into AI systems? Read on to find out what it takes to build socially responsible AI.

There are two broad reasons:

  1. Minimal consideration of human-centric design: Up until 2010, AI systems were notoriously difficult to build. Most of the focus of first-generation AI systems was on the engineering aspects: getting an AI proof of concept working and scaling it in production. (This remains a tough challenge to this day. Stay tuned for more information on scaling AI.) Solving engineering and data problems was a humongous task, and AI developers and designers were happy to see an AI system predicting the next unhappy customer or the next customer likely to leave the brand. This added business value, and all was fine in AI land until we started using AI in selection scenarios such as “Who is most likely to pay back a loan?” or “Who is most likely to be a better homeowner?”. Disaster ensued when AI started picking up all the biases that existed in the non-digital world and discriminating based on gender, ethnicity, marital status, age, and a plethora of other characteristics.
  2. Non-existent structures to account for bias: There were no quantifiable measures to validate that an AI system was unbiased, and no such measures were mandated. Teams building AI rarely included ethics specialists, anthropologists, or social scientists. One of the largest computer vision datasets, ImageNet, was originally labeled using Amazon Mechanical Turk, where the data labelers could be anyone on the planet. This introduced tons of disturbing biases, prejudices, and stereotypes into the dataset, which AI eventually picked up and made obvious.

While this is by no means an excuse, it does point to the key problem: almost no focus was given to ensuring the moral, social, and responsible aspects of AI, often termed Ethical AI.

A 2019 Gartner study predicted that by 2022, 30% of companies would invest in explainable, ethical AI, up from almost none in 2019. Thirty percent is still a dangerously low number given the societal, cultural, psychological, and organizational ramifications biased AI can cause.

In summary, the presence of unregulated human actors and the absence of structures founded on ethics, morality, and fairness have been the sources of bias in AI. Addressing bias at the human level is a slow, continual process, so are we trapped with biased systems until we free human beings of their biases? The answer is no. I believe it is easier to address bias in AI than in human beings. Let me explain why…

The Silver Lining:

AI systems, through their bias-related mishaps, have indirectly served as a platform for bringing to the forefront the systemic biases that have slipped through for generations. What was once perceived as a “theoretical accusation” with no evidence is now provable because of a faulty AI model. The data used to train such a model adds the kind of concrete evidence that was previously collected only through years of anthropological research and vetting.

If anything, AI has created increased evidence-backed awareness of areas of bias. A single AI mishap, such as Amazon’s recruiting AI model favoring male résumés, created widespread awareness of an issue that has been plaguing women for generations. Now companies are addressing that problem faster than ever.

Awareness is the first step to solving a problem, and these unfortunate AI accidents have provided the impetus and data points that were lacking before.

Figure 1: Image by Author: The silver lining is bleaker

Can we fix them? Absolutely, yes

There are two steps to this:

Step 1: Demonstrate AI safety and trustworthiness

This involves building AI with a focus on three attributes: interpretability, explainability, and trainability. At every stage of the process, from AI design to deployment, consider the visibility of these attributes in your end system. This will demonstrate Ethical and Responsible AI, a technology that keeps human wellbeing at its core, and serve to increase public trust in and the safety of AI systems.

Figure 2: Three attributes to help demonstrate AI safety and trustworthiness

Attribute 1: AI has to be Interpretable

Consumers and producers of an AI system should be aware of the mechanics of the AI model. With such understanding, the reasoning for decision-making becomes transparent. Transparency generates greater trust in the system.

Interpretability can be demonstrated with graphs, charts, and other visualizations that describe the models, algorithms, SDKs, and libraries used. When AI is highly interpretable, it becomes highly observable: flaws, biases, and prejudiced patterns can be caught easily. Within an organization, high interpretability also democratizes the use of AI beyond its originally intended audience, promoting increased use and turning every employee into a champion of public and customer satisfaction.
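To make this concrete, below is a minimal sketch of one such visualization: permutation importance on a toy scikit-learn model. The dataset and feature names are hypothetical, chosen only to echo the lending scenario above; the point is that a chart like this makes it immediately visible when a model leans on an attribute it shouldn’t.

```python
# A minimal sketch of visualizing a model's mechanics; the dataset and
# feature names are hypothetical, not from any real lending system.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["income", "debt_ratio", "years_employed", "num_accounts"]
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance measures how much accuracy drops when each
# feature is shuffled; a sensitive attribute ranking high here is an
# early warning sign of a biased model.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

plt.barh(feature_names, result.importances_mean)
plt.xlabel("Mean accuracy drop when feature is shuffled")
plt.title("Which features does the model actually rely on?")
plt.tight_layout()
plt.show()
```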

Attribute 2: AI has to be Explainable

AI is explainable if a business can comfortably explain the reasons for its predictions. This starts with data: thoroughly understand, collect, process, and manage data to build the capacity to explain your decisions with a high degree of clarity. High explainability signals a business’s social responsibility. Follow this blog for my next post on how to ensure explainability while maintaining a competitive advantage.
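As an illustration, here is a minimal sketch of one route to explainable decisions: an inherently readable model (or a shallow surrogate approximating a more complex one) whose rules can be stated in plain language. The dataset and feature names are again hypothetical.

```python
# A minimal sketch of per-decision explainability: a shallow decision
# tree whose rules can be read out loud to a customer or a regulator.
# The dataset and feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["income", "debt_ratio", "years_employed", "num_accounts"]
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)

# In practice this could be a surrogate fit to a complex model's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Human-readable if/else rules: the raw material for explaining any
# single decision with a high degree of clarity.
print(export_text(surrogate, feature_names=feature_names))
```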

Attribute 3: AI has to be Trainable

AI is trainable when the negative patterns observed (through a high degree of interpretability) and their causes (identified through a high degree of explainability) can be addressed in a timely manner (a high degree of trainability). Thus AI trainability enables actionable changes in the right direction.

One may argue that all AI algorithms are inherently trainable. The question is not whether an AI model is trainable, but how confident the organization is that the feedback loop and its adjustments result in a better system: fewer undesirable patterns and a sustainable model. This confidence can only be guaranteed when we build for high degrees of trainability. A highly trainable model further simplifies preventing AI model degradation over time.
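As a sketch of what such a feedback loop might look like: monitor a fairness metric on the model’s predictions and retrain when it drifts past a threshold. Everything below (the demographic-parity gap, the synthetic sensitive attribute, the reweighting fix) is illustrative, not a production recipe.

```python
# A minimal sketch of trainability as a monitored feedback loop; the
# data, sensitive attribute, and mitigation are all synthetic examples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=4, random_state=0)
# Synthetic sensitive attribute, deliberately correlated with a feature
# so the toy model exhibits a measurable disparity.
group = (X[:, 0] > 0).astype(int)

def approval_rate_gap(preds, group):
    # Demographic-parity gap: difference in approval rates across groups.
    return abs(preds[group == 0].mean() - preds[group == 1].mean())

model = LogisticRegression().fit(X, y)
gap = approval_rate_gap(model.predict(X), group)
print(f"approval-rate gap: {gap:.3f}")

THRESHOLD = 0.05  # illustrative tolerance
if gap > THRESHOLD:
    # Naive mitigation: upweight the under-approved group and retrain.
    # Real systems would use vetted fairness tooling and human review.
    preds = model.predict(X)
    low = 0 if preds[group == 0].mean() < preds[group == 1].mean() else 1
    weights = np.where(group == low, 2.0, 1.0)
    model = LogisticRegression().fit(X, y, sample_weight=weights)
    new_gap = approval_rate_gap(model.predict(X), group)
    print(f"retrained; new gap: {new_gap:.3f}")
```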

Step 2: Build human-centric AI design and engineering teams

Hire ethicists and social scientists in addition to technical folks. Embed them in the design and engineering processes so they can act as advocates for the three attributes above. Product owners of AI systems should be trained in ethics, and AI product leaders should be trained in probabilistic reasoning and in recognizing and eliminating cognitive biases. In her book “Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting”, Shannon Vallor provides a practical framework for developing techno-moral values in next-generation technology, AI in particular.

A people-first approach to engineering AI that is interpretable, explainable, and trainable improves the public’s perception of the safety and trustworthiness of AI models and algorithms, a crucial component of a successful AI strategy.

