
Implementation of model explainability for basic brain tumor detection using convolutional neural networks on MRI slices.

Modalities

MRI

Models

Convolutional neural network (CNN)
Abstract

While neural networks are gaining popularity in medical research, attempts to make a model's decisions explainable are often made only towards the end of the development process, once a high predictive accuracy has been achieved.
To assess the advantages of implementing explainability features early in the development process, we trained a neural network to differentiate between MRI slices containing a vestibular schwannoma, a glioblastoma, or no tumor.
Making the network's decisions more explainable helped to identify potential bias and to choose appropriate training data.
Model explainability should be considered in the early stages of training a neural network for medical purposes, as it may save time in the long run and will ultimately help physicians integrate the network's predictions into a clinical decision.
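
To make the idea concrete, the sketch below shows one common explainability technique, a gradient-based saliency map, applied to a three-class slice classifier. The post does not describe the authors' actual architecture or explainability method, so the network, input size, and the `saliency_map` helper here are purely illustrative assumptions.

```python
# A minimal sketch (not the authors' code): gradient-based saliency for a
# three-class MRI slice classifier. Architecture, input size, and the
# saliency_map helper are illustrative assumptions.
import torch
import torch.nn as nn

CLASSES = ["vestibular schwannoma", "glioblastoma", "no tumor"]

# Stand-in CNN; the paper's actual network is not described in this post.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, len(CLASSES)),
)
model.eval()

def saliency_map(mri_slice: torch.Tensor) -> torch.Tensor:
    """Return |d(class score)/d(pixel)| for the predicted class.

    mri_slice: (1, H, W) single-channel slice; returns an (H, W) heatmap.
    """
    x = mri_slice.unsqueeze(0).requires_grad_(True)   # shape (1, 1, H, W)
    scores = model(x)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()                   # gradient of the winning class score
    return x.grad.abs().squeeze()                     # large values = influential pixels

# Usage: inspect whether the highlighted pixels fall on plausible anatomy
# or on confounders such as burned-in annotations or padding artifacts.
heatmap = saliency_map(torch.randn(1, 224, 224))
print(heatmap.shape)  # torch.Size([224, 224])
```

Inspecting such heatmaps during development, rather than after the fact, is how explainability can reveal that a model relies on spurious image regions, which is the kind of bias and training-data issue the abstract refers to.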
