Artificial Intelligence (AI) is becoming a big part of our lives, helping us in many ways, from recommending movies to assisting in medical diagnoses. But sometimes it acts like a “black box,” making decisions without explaining how those decisions were reached. This is where Explainable AI (XAI) comes in. XAI helps us understand and trust AI by explaining how it makes its decisions.

What is Explainable AI?

Explainable AI (XAI) refers to techniques and methods that make AI decisions clear and understandable to humans. Imagine you have a robot that almost always wins at tic-tac-toe; if the robot loses a game and we want to know why, XAI helps us understand what went wrong.

Let’s take another example: you have built a machine learning classifier to predict whether a patient is diabetic or not. You use the model for prediction, and suppose it outputs “diabetic” for a particular patient. Now you want to understand how the model arrived at that decision, and this is exactly where XAI helps. XAI can be applied to a wide range of complex AI, machine learning, and deep learning models.

Why is Explainable AI Important?

  1. Trust: Knowing how AI makes decisions builds trust.
  2. Learning: We can learn from AI’s decisions and improve our own understanding.
  3. Safety: Understanding AI helps in identifying and fixing errors.
  4. Fairness: We can ensure AI decisions are fair and unbiased.

Using LIME for Explainable AI

LIME (Local Interpretable Model-agnostic Explanations) is a popular tool that explains individual predictions of a machine learning model by approximating the model locally with a simpler, interpretable one.
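To build intuition, here is a minimal hand-rolled sketch of the idea behind LIME (not the library’s actual implementation): perturb the instance being explained, query the black-box model on those perturbed samples, and fit a proximity-weighted linear surrogate whose coefficients act as the local explanation. The toy black-box function and the kernel width below are made up purely for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical black-box model: a nonlinear function of two features
def toy_black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

instance = np.array([1.0, 2.0])

# Perturb the instance with Gaussian noise and query the black box
rng = np.random.default_rng(42)
samples = instance + rng.normal(scale=0.5, size=(500, 2))
predictions = toy_black_box(samples)

# Weight samples by proximity to the instance (closer samples matter more)
distances = np.linalg.norm(samples - instance, axis=1)
weights = np.exp(-(distances ** 2) / 0.5)

# Fit a weighted linear surrogate; its coefficients serve as the local explanation
surrogate = LinearRegression().fit(samples, predictions, sample_weight=weights)
print(dict(zip(["feature_0", "feature_1"], surrogate.coef_)))

The LIME library handles the perturbation, weighting, and surrogate fitting for us, which is what the examples below use.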

We’ll look at two examples: one for classification and one for regression.

Example 1: Classification with LIME and Decision Tree

We’ll use the Iris dataset to classify different types of iris flowers. For a decision tree classifier, we can plot the tree itself, which helps us see how each prediction is made. We’ll also use LIME to explain an individual prediction from the model.

import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree

# Load the dataset
iris = load_iris()
X = iris.data
y = iris.target
feature_names = iris.feature_names

# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a Decision Tree classifier
model = DecisionTreeClassifier()
model.fit(X_train, y_train)

# Plot the decision tree
plt.figure(figsize=(10, 10))
plot_tree(model, feature_names=feature_names, class_names=iris.target_names, filled=True)
plt.show()
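Before explaining individual predictions, it can help to confirm that the model performs reasonably on held-out data. This is a quick optional check, not part of the original example:

from sklearn.metrics import accuracy_score

# Quick sanity check: accuracy on the held-out test set
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))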

Now let’s explain a prediction with LIME.

import lime
import lime.lime_tabular

# Use LIME to explain a prediction
explainer = lime.lime_tabular.LimeTabularExplainer(X_train,
                                                   feature_names=feature_names,
                                                   class_names=iris.target_names,
                                                   discretize_continuous=True)

# Choose an instance to explain
i = 0
exp = explainer.explain_instance(X_test[i], model.predict_proba, num_features=4)

# Show the explanation
exp.show_in_notebook(show_all=True)
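If you’re running the code outside a notebook, the same explanation can also be printed as (feature, weight) pairs using LIME’s as_list method with its default label (a small optional addition):

# Print the explanation as (feature, weight) pairs, e.g. outside a notebook
print(exp.as_list())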

Explanation

  • Loading the Dataset: We load the Iris dataset, which includes features like petal length and petal width.
  • Training the Model: We train a Decision Tree classifier on the data.
  • Plotting the Decision Tree: We plot the decision tree to visualize how decisions are made.
  • Explaining with LIME: We use LIME to explain the model’s prediction for a single instance from the test set.

Example 2: Regression with LIME and Random Forest

Let’s use a regression example with the California Housing dataset to predict house prices and explain the predictions using LIME.

import lime
import lime.lime_tabular
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor

# Load the dataset
california = fetch_california_housing()
X = california.data
y = california.target
feature_names = california.feature_names

# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a Random Forest regressor
model = RandomForestRegressor()
model.fit(X_train, y_train)


# Use LIME to explain a prediction
explainer = lime.lime_tabular.LimeTabularExplainer(X_train,
                                                   feature_names=feature_names,
                                                   mode='regression',
                                                   discretize_continuous=True)

# Choose an instance to explain
i = 0
exp = explainer.explain_instance(X_test[i], model.predict, num_features=4)

# Show the explanation
exp.show_in_notebook(show_all=False)
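It can also be instructive to contrast LIME’s local explanation for this one house with the forest’s global view of the data. This optional addition uses scikit-learn’s standard feature_importances_ attribute:

# Global view for comparison: the forest's impurity-based feature importances
for name, importance in sorted(zip(feature_names, model.feature_importances_),
                               key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {importance:.3f}")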

Explanation

  • Loading the Dataset: We load the California Housing dataset, which includes features like median income and housing median age.
  • Training the Model: We train a Random Forest regressor on the data.
  • Explaining with LIME: We use LIME to explain the model’s prediction for a single instance from the test set.

Conclusion

Explainable AI is crucial for building trust, ensuring safety, and promoting fairness in AI systems. Tools like LIME help us understand complex AI models by breaking down their decisions in a way that’s easy to grasp. Whether for classification with a Decision Tree or regression with a Random Forest, explainable AI makes it easier to see how AI thinks, helping us to use it more effectively and responsibly.
