Artificial Intelligence Is Against Humanity. Wait, WHAT?

Mohammad Amin Dadgar
4 min read · Aug 18, 2022

We’ve always heard stories of robots or machines turning against humans. Did you know this has actually happened in real life? To find out, read on.

Figure 1. A representation of a complex artificial neural network (from unsplash.com), illustrating the chaos of not knowing what is happening inside.

We’ve always heard stories of robots or pieces of code turning against humans, but we are rarely told that it has happened in real life. Ever since black-box AI models became popular in industry, the rationale behind their decision-making has been opaque to users.

So, what are black-box AI models?

Black-box artificial intelligence models are models, such as deep neural networks, in which a huge number of parameters are tuned to make a decision. Using so many parameters makes it hard to understand the rationale the model follows to reach even a simple decision.

Why can black-box models be considered against humanity?

Black-box methods are famous for achieving high performance on problems that humans cannot solve. But while they can outperform humans on many problems, finding the reasons behind their decisions is not an easy task. Not knowing the reason or rationale behind a decision can cause serious problems when these models are used in industry. For example, if a model’s decisions are wrong, there is no easy way to find out why.

How have they turned against humans?

In the last decade, the use of these methods has created serious problems in AI systems, such as:

1- The algorithms behind Apple’s credit card were accused of being gender biased

  • Apple co-founder Steve Wozniak received a credit limit ten times higher than his wife’s, despite the couple sharing the very same accounts (the algorithm was biased against women) [1]

2- Amazon’s AI recruiting system preferred men over women for hiring [2]

3- An AI system used to predict future criminals was biased against Black people [3]

  • The predicted risk of committing future crimes differed hugely between white and Black defendants

These were examples of AI systems working against humans. To fix this, a new concept named “Explainable AI (XAI)” was introduced.

Explainable Artificial Intelligence (XAI)

Classic black-box artificial intelligence methods (specifically deep learning methods) normally do not explain the rationale behind a decision. To find the reason behind a black-box AI method’s decision, or the rationale behind its process, new methods have been introduced that provide explanations for these classic AI methods.

What are these explanation methods?

Different methods have been proposed either to explain a single decision of a black-box model (local explanations) or to reveal its whole decision-making process (global explanations).

Here I give just a brief summary of what these methods are; to find out more, there are tons of books, blogs, and videos that cover them in depth. A minimal usage sketch for each method follows the list.

1- Local Interpretable Model-agnostic Explanations (LIME): This method learns a simple, transparent surrogate model from data labeled by the original model around the sample being explained. The transparent model is simulatable and decomposable by humans, and its algorithm is easy to understand. LIME outputs the feature importances behind the model’s decision for a single data sample (see the sketch after this list). (https://github.com/marcotcr/lime)

2- SHAP: Like LIME, this method finds feature importances; it does so by averaging a feature’s contribution to the prediction over the different combinations in which features can be present or absent (sketch below). (https://github.com/slundberg/shap)

3- Layer-wise Relevance Propagation (LRP), Integrated Gradients (IG), and DeepLift: These three methods find feature importances by analyzing the neurons inside a neural network (sketch below). (https://github.com/albermax/innvestigate)

4- Accumulated Local Effects (ALE) plots: Visualize the effect of a feature on the model’s output (sketch below). (https://github.com/DanaJomar/PyALE)
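
To make the list more concrete, here is a minimal LIME sketch for a tabular classifier. The dataset (scikit-learn’s breast cancer data) and the random forest model are my own illustrative choices, not something from the article; the lime package is the one linked in item 1.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Illustrative data and "black-box" model (not from the article).
data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# LIME fits a simple surrogate model around one sample to explain its prediction.
explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)

# Local explanation: the top features pushing this single prediction.
for feature, weight in explanation.as_list():
    print(feature, round(weight, 3))
```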
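A similarly minimal SHAP sketch, reusing the same illustrative dataset and model. Note that, depending on the shap version, a classifier’s attributions come back as a list of per-class arrays or a single 3-D array, so the snippet handles both cases.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Illustrative data and "black-box" model (not from the article).
data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Shapley-value attributions for the tree ensemble on a subset of samples.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Older shap versions return a list of per-class arrays, newer ones a 3-D array;
# pick the attributions for the positive class either way.
positive = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Global view: average feature importance across the explained samples.
shap.summary_plot(positive, X.iloc[:100], plot_type="bar")
```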
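For item 3, here is a rough Integrated Gradients sketch. It uses PyTorch with the Captum library rather than the iNNvestigate toolbox linked above, simply to keep the example short; the tiny network and random input are purely illustrative.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# A tiny illustrative network with a single output score (the "black box").
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
model.eval()

x = torch.randn(1, 10)          # one random input sample
baseline = torch.zeros(1, 10)   # reference point for the path integral

# Integrated Gradients attributes the output to each input feature by
# integrating gradients along the straight path from baseline to input.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(x, baselines=baseline, return_convergence_delta=True)

print(attributions)  # per-feature relevance scores
print(delta)         # how well the attributions sum to the output difference
```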
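Finally, a small ALE sketch with the PyALE package linked in item 4. The dataset, model, and exact keyword arguments are assumptions on my part; check the repository for the current API.

```python
from PyALE import ale
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestRegressor

# Illustrative data and model; PyALE expects a pandas DataFrame and a model
# exposing a .predict method.
data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Accumulated local effect of one feature on the model output
# (keyword arguments assumed from the PyALE README; versions may differ).
ale_effect = ale(X=X, model=model, feature=["mean radius"], grid_size=50, include_CI=False)
print(ale_effect.head())
```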

There are many more XAI methods available. Here are some links to XAI toolboxes that can be used in AI systems:

innvestigate: A toolbox to iNNvestigate neural networks’ predictions. (https://github.com/albermax/innvestigate)

Quantus: An eXplainable AI toolkit for the responsible evaluation of neural network explanations. (https://github.com/understandable-machine-intelligence-lab/Quantus)

OpenXAI: Towards a Transparent Evaluation of Model Explanations. (https://github.com/AI4LIFE-GROUP/OpenXAI)

Final words

To fight properly against AI systems, we need to be better than them and outperform them, yet these days the need for truly intelligent AI systems is crucial. From automated surgery to automated judgment systems, an AI system must not only be highly intelligent but also trustworthy to humans, meaning we can rely on its decisions because we know what is happening inside. One way to achieve both high performance and trustworthiness is to use XAI methods.

It is important to note that XAI methods are still growing, and they need more attention from both scientists and developers, to evaluate them and to use them in industry. To build a better community and stay ahead of these AI systems, we need to explore and research them further. I encourage computer scientists to work on and research these methods, and developers to use XAI methods in their systems, to build a better world.

References

[1] Duffy, Clare. 2019. “Apple co-founder Steve Wozniak says Apple Card discriminated against his wife.” CNN Business.

[2] Dastin, Jeffrey. 2018. “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women.” Reuters

[3] Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. “Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks.” ProPublica.

Want to read more? Try out these resources

[4] Molnar, Christoph. 2022. “Interpretable Machine Learning: A Guide for Making Black Box Models Explainable.” Leanpub.

[5] Bhattacharya, Aditya. 2022. “Applied Machine Learning Explainability Techniques: Make ML Models Explainable and Trustworthy for Practical Applications Using LIME, SHAP, and More.”

[6] Holzinger, Andreas, Randy Goebel, Ruth Fong, Taesup Moon, Klaus-Robert Müller, and Wojciech Samek (eds.). 2022. “xxAI — Beyond Explainable AI.” International Workshop, held in conjunction with ICML 2020.


Mohammad Amin Dadgar

Master’s student in Artificial Intelligence (Computer Science) and data analyst at TogetherCrew. My LinkedIn profile: https://www.linkedin.com/in/mramin22/