This repository contains the official implementation of the paper Translating XAI to Image-Like Multivariate Time Series. The project translates eXplainable AI (XAI) methods developed for image data so they can interpret the decisions of a machine learning model for crash detection based on insurance telematics data. The XAI methods investigated are Grad-CAM, Integrated Gradients, and LIME. The project was developed in collaboration between Assicurazioni Generali Italia, ELIS Innovation Hub, and University Campus Bio-Medico of Rome. The model to be explained is deployed by the insurance company and, to simulate a real-world scenario, is treated as a black box that cannot be changed.
git clone
cd translating-xai-mts
- First create a Python virtual environment (optional) using the following command:
python -m venv translating-xai-mts
- Activate the virtual environment using the following command:
source translating-xai-mts/bin/activate
- Install the dependencies using the following command:
pip install -r requirements.txt
For Conda users, you can create a new Conda environment using the following command:
conda env create -f environment.yml
The code was tested with Python 3.9.0, PyTorch 2.0.1, and TensorFlow 2.13.0. For more information about the requirements, please check the requirements.txt or environment.yml files.
- In the YAML file under the configs folder, you can find all the settings used for each XAI method, as well as other parameters;
- We cannot share all the data used in the paper for privacy reasons. However, we provide dummy samples, created as perturbations of the original signals, in the data_share folder. Note that the network is assumed to be a black box that we cannot change;
- To run the code and explain the network on the dummy samples, change the following paths in the YAML file:
`fold_dir: ./data/processed/holdout` -> `fold_dir: ./data_share/processed/holdout`;
`data_dir: ./data/data_raw/holdout` -> `data_dir: ./data_share/data_raw/holdout`;
- You can then use the script main.py to obtain explanations from Grad-CAM, LIME, and Integrated Gradients:
python src/main.py
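After the change, the relevant portion of the config should look roughly like this (only `fold_dir` and `data_dir` appear above; everything else in your file stays as-is):

```yaml
# Point the XAI engine at the shared dummy samples
fold_dir: ./data_share/processed/holdout
data_dir: ./data_share/data_raw/holdout
```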
If something goes wrong, double-check the config file and the data paths.
If you want to use your own dataset, you can follow the steps below:
- Create a folder containing your data with the following structure:
data
├── data_raw
│ ├── exp-name
│ │ ├── mode-1
│ │ │ ├── train
│ │ │ │ ├── 0.npy
│ │ │ │ ├── ...
│ │ │ ├── valid
│ │ │ │ ├── 10.npy
│ │ │ │ ├── ...
│ │ │ ├── test
│ │ │ │ ├── 42.npy
│ │ │ │ ├── ...
│ │ ├── mode-2
│ │ │ ├── train
│ │ │ │ ├── 0.npy
│ │ │ │ ├── ...
│ │ │ ├── valid
│ │ │ │ ├── 10.npy
│ │ │ │ ├── ...
│ │ │ ├── test
│ │ │ │ ├── 42.npy
│ │ │ │ ├── ...
├── processed
│ ├── exp-name
│ │ ├── train.txt
│ │ ├── valid.txt
│ │ ├── test.txt
In data_raw, put your data in the form of NumPy arrays. In the processed folder, put the lists of samples to be used for training, validation, and testing. Check the folder for an example and the scripts for more details;
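As a minimal sketch, the layout above can be generated programmatically. The array shape `(100, 3)` and the sample IDs are placeholders for illustration, not the real telematics format:

```python
import numpy as np
from pathlib import Path

def make_dummy_layout(root="data", exp="exp-name", modes=("mode-1", "mode-2")):
    """Create the data_raw/processed layout with placeholder .npy samples."""
    splits = {"train": [0], "valid": [10], "test": [42]}
    for mode in modes:
        for split, ids in splits.items():
            d = Path(root) / "data_raw" / exp / mode / split
            d.mkdir(parents=True, exist_ok=True)
            for i in ids:
                # Shape (length, channels) is an assumption; use your own format.
                np.save(d / f"{i}.npy", np.random.randn(100, 3))
    # One ID-list file per split under processed/<exp-name>/
    proc = Path(root) / "processed" / exp
    proc.mkdir(parents=True, exist_ok=True)
    for split, ids in splits.items():
        (proc / f"{split}.txt").write_text("\n".join(str(i) for i in ids))

make_dummy_layout()
```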
- Train your own model using TensorFlow and save it in the exp-name/models folder under the root directory of the project. If you want to use PyTorch, you can change the code accordingly;
- Change the YAML file accordingly;
- Run the script main.py to obtain explanations from Grad-CAM, LIME, and Integrated Gradients:
python src/main.py
Note that the XAI engine is designed to work with telematics data; each sample contains a multivariate acceleration signal (mode-1) and a univariate speed magnitude signal (mode-2). Also, the XAI methods were adapted to explain the black box model provided by the insurance company. Thus, if your data or architecture differ, you may need to change the code accordingly.
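To make the expected two-mode sample format concrete, here is a hedged sketch. The window length `T = 600`, the three acceleration axes, and the noise level are assumptions for illustration only; the `perturb` helper is hypothetical and mimics how dummy samples can be derived from real signals:

```python
import numpy as np

T = 600  # assumed window length, not the real one
acceleration = np.random.randn(T, 3)   # mode-1: multivariate signal (e.g. x, y, z axes)
speed = np.abs(np.random.randn(T, 1))  # mode-2: univariate speed magnitude (non-negative)

def perturb(signal, sigma=0.05, seed=0):
    """Illustrative perturbation: original signal plus small Gaussian noise."""
    rng = np.random.default_rng(seed)
    return signal + sigma * rng.standard_normal(signal.shape)

dummy_acc = perturb(acceleration)
```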
Here we report the results obtained by explaining a crash sample and a non-crash sample using Grad-CAM, Integrated Gradients, and LIME.
This code is based on the following papers and repositories:
Papers
- Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
- Axiomatic Attribution for Deep Networks
- "Why Should I Trust You?" Explaining the Predictions of Any Classifier
Repositories
If you have any questions or if you are just interested in having a virtual coffee about eXplainable AI, please don't hesitate to reach out to me at: [email protected].
May the AI be with you!
This code is released under the MIT License.