Visual Interpretability of Capsule Network for Medical Image Analysis

Authors: Mighty Abra Ayidzoe, Yu Yongbin, Patrick Kwabena Mensah, Jingye Cai, Faiza Umar Bawah

Abstract: Deep learning (DL) models are not yet widely deployed for critical tasks such as those in healthcare. This is largely attributable to their "black box" nature, which makes it difficult to gain the trust of practitioners. This paper proposes the use of visualizations to enhance performance verification, improve monitoring, ensure understandability, and improve the interpretability needed to gain practitioners' confidence. These are demonstrated through the development of a capsule network (CapsNet) model for the recognition of gastrointestinal tract infections. The gastrointestinal tract comprises several organs joined in a long tube from the mouth to the anus. It is susceptible to diseases that are difficult for clinicians to diagnose, since physical access to the affected regions is not easy. Consequently, manual acquisition and analysis of images of the unhealthy parts require the skills of an expert and are tedious, error-prone, and costly. Experimental results show that visualizations in the form of post-hoc interpretability can demonstrate the reliability and interpretability of the CapsNet model applied to gastrointestinal tract diseases. The model's outputs can also be explained to gain practitioners' confidence, paving the way for its adoption in critical areas of society.
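The abstract does not specify which post-hoc interpretability method the paper uses, so the following is only a minimal, illustrative sketch of one common model-agnostic technique: occlusion sensitivity, which masks image regions and measures the drop in the predicted class score. The predict_fn callable and the toy classifier in the usage example are assumptions standing in for a trained CapsNet, not the authors' implementation.

    import numpy as np

    def occlusion_map(predict_fn, image, target_class, patch=8, stride=4):
        """Occlusion sensitivity: slide a mean-valued patch over the image
        and record the drop in the target class score at each position.
        predict_fn is a hypothetical stand-in for a trained model's
        forward pass: it takes a batch and returns class scores."""
        h, w = image.shape[:2]
        baseline = predict_fn(image[None])[0, target_class]
        heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
        for i, y in enumerate(range(0, h - patch + 1, stride)):
            for j, x in enumerate(range(0, w - patch + 1, stride)):
                occluded = image.copy()
                occluded[y:y + patch, x:x + patch] = image.mean()
                heat[i, j] = baseline - predict_fn(occluded[None])[0, target_class]
        return heat  # high values mark regions the model relies on

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        img = rng.random((32, 32)).astype(np.float32)
        img[10:20, 10:20] += 2.0  # bright "lesion" the toy model keys on

        def toy_predict(batch):
            # Toy two-class scorer (assumption, not a real CapsNet):
            # class 1 score grows with the mean intensity of the centre.
            score = batch[:, 8:24, 8:24].mean(axis=(1, 2))
            return np.stack([1.0 - score, score], axis=1)

        heat = occlusion_map(toy_predict, img, target_class=1)
        print(heat.round(2))  # peaks where the "lesion" was occluded

Overlaying such a heatmap on the input image is one way a clinician could verify that the model attends to the diseased region rather than to background artifacts, which is the kind of practitioner-facing evidence the abstract argues for.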

Keywords: Capsule network, explainable artificial intelligence (XAI), convolutional neural network
