
Interpretable Artificial Intelligence through Locality Guided Neural Networks.


Abstract

In current deep learning architectures, each of the deeper layers in a network tends to contain hundreds of unorganized neurons, which makes it hard for humans to understand how they interact with each other. By organizing the neurons using correlation as the criterion, humans can observe how clusters of neighbouring neurons interact with each other. Research in Explainable Artificial Intelligence (XAI) aims to alleviate the black-box nature of current AI methods and make them understandable by humans. In this paper, we extend our previous algorithm for XAI in deep learning, called Locality Guided Neural Network (LGNN). LGNN preserves locality between neighbouring neurons within each layer of a deep network during training. Motivated by Self-Organizing Maps (SOMs), the goal is to enforce a local topology on each layer of a deep network such that neighbouring neurons are highly correlated with each other. Our algorithm can be easily plugged into current state-of-the-art Convolutional Neural Network (CNN) models to make neighbouring neurons more correlated. A cluster of neighbouring neurons activating for a class makes the network both quantitatively and qualitatively more interpretable when visualized, as we show through our experiments. This paper focuses on image processing with CNNs, but the method can theoretically be applied to any type of deep learning architecture. In our experiments, we train VGG and WRN networks for image classification on CIFAR100 and Imagenette. Our experiments analyse different perceptible clusters of activations in response to different input classes.
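The core idea, enforcing a local topology so that neighbouring neurons within a layer become correlated, can be illustrated with a small sketch. The snippet below is a minimal, hypothetical locality penalty over the channel dimension of a convolutional feature map; the function name, weighting factor, and usage pattern are assumptions for illustration, not the authors' actual LGNN / SOM-style training rule.

```python
import torch
import torch.nn.functional as F

def locality_penalty(activations: torch.Tensor, neighbourhood: int = 1) -> torch.Tensor:
    """Illustrative SOM-inspired locality penalty (not the exact LGNN update).

    Treats the channels of a conv feature map as points on a 1D grid and
    penalizes differences between each channel's activation map and those of
    its nearby channels, nudging neighbouring neurons toward correlated responses.

    activations: tensor of shape (batch, channels, height, width)
    """
    channels = activations.shape[1]
    penalty = activations.new_zeros(())
    for offset in range(1, neighbourhood + 1):
        # Mean squared difference between channel c and channel c + offset.
        diff = activations[:, offset:] - activations[:, :channels - offset]
        penalty = penalty + diff.pow(2).mean()
    return penalty

# Hypothetical usage in a standard classification training step:
# logits, feature_map = model(images)   # assumes the model also exposes an intermediate feature map
# loss = F.cross_entropy(logits, labels) + 0.1 * locality_penalty(feature_map)
```

Minimizing such a term alongside the usual classification loss pushes adjacent channels toward similar activation patterns, which is the kind of local topology the abstract describes; the actual LGNN algorithm achieves this through a SOM-motivated update during training rather than this particular loss term.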
