Interpretable machine learning models for medical imaging
Deep learning models have achieved unprecedented success in diagnosing disease from medical images, with the potential to deliver faster and more accurate diagnoses.
However, a major limitation that has prevented the widespread deployment of deep learning in healthcare is the models’ black-box nature. Current deep learning models give no indication of the evidence used to reach a classification, which limits trust and confidence in their outputs.
Inspection of existing models has found that classifications are sometimes based on spurious correlations in the training data that are unrelated to the disease, such as ‘portable’ labels on x-ray images, which give an indication of a patient’s condition but do not constitute evidence of the disease itself.
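This failure mode can be reproduced on purely synthetic data. The sketch below is illustrative only: the toy ‘scans’, the bright corner tag standing in for a ‘portable’ label, and the use of scikit-learn’s LogisticRegression are assumptions made for the example, not part of the project. It shows a classifier that looks accurate while the tag is present and degrades sharply once the tag is removed.

```python
# Illustrative only: a toy demonstration of shortcut learning on synthetic "scans".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_scans(n, with_tag):
    """32x32 synthetic scans: a weak genuine signal in the centre for positive
    cases, plus (optionally) a bright corner tag perfectly correlated with the label."""
    labels = rng.integers(0, 2, n)
    scans = rng.normal(0.0, 1.0, (n, 32, 32))
    scans[labels == 1, 12:20, 12:20] += 0.3      # weak genuine "disease" signal
    if with_tag:
        scans[labels == 1, 0:4, 0:4] += 5.0      # spurious marker, e.g. a 'portable' tag
    return scans.reshape(n, -1), labels

X_train, y_train = make_scans(400, with_tag=True)
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)

X_tag, y_tag = make_scans(400, with_tag=True)
X_clean, y_clean = make_scans(400, with_tag=False)
print("accuracy with the spurious tag:   ", clf.score(X_tag, y_tag))
print("accuracy without the spurious tag:", clf.score(X_clean, y_clean))
# The large gap between the two scores shows the model learned the tag, not the disease.
```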
Accepting model outputs without any evidence of the decision-making process could therefore result in misdiagnosis. Moreover, existing studies on interpretability rely only on small training sample sizes, which increases the risk of overfitting and misinterpretation, especially in complex models.
To alleviate these problems, this project will develop a user interface that improves the explainability of deep learning models applied to medical imaging by removing spurious dimensions of the data, introducing non-stationarity into the training process, and generating transferable synthetic data against which the accuracy of the models can be validated.
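As a rough illustration of what removing spurious dimensions and introducing non-stationarity might involve in practice, the sketch below masks known artefact regions (such as burned-in text tags) and randomly perturbs those masks between epochs. The function names, region coordinates, and fill strategy are hypothetical assumptions for the example; they are not the project’s actual methods.

```python
# Hypothetical sketch, not the project's method: mask burned-in artefact regions
# and jitter the masks between epochs so the input statistics vary during training.
import numpy as np

def mask_artefact_regions(image, regions):
    """Replace each (row0, row1, col0, col1) region with the image median so
    that burned-in markers cannot act as classification shortcuts."""
    cleaned = image.copy()
    fill = np.median(image)
    for r0, r1, c0, c1 in regions:
        cleaned[r0:r1, c0:c1] = fill
    return cleaned

def jitter_regions(regions, max_shift, rng):
    """Shift the masked regions by a small random offset, one simple way of
    making the input statistics non-stationary across epochs."""
    dr, dc = rng.integers(-max_shift, max_shift + 1, size=2)
    return [(max(0, r0 + dr), max(0, r1 + dr), max(0, c0 + dc), max(0, c1 + dc))
            for r0, r1, c0, c1 in regions]

rng = np.random.default_rng(0)
xray = rng.normal(size=(256, 256))     # stand-in for a real x-ray
tag_regions = [(0, 24, 0, 96)]         # assumed location of a text overlay
for epoch in range(3):
    cleaned = mask_artefact_regions(xray, jitter_regions(tag_regions, 4, rng))
```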
This interface will help medical practitioners provide faster and more accurate diagnoses and care. The project builds upon the team’s existing work and expertise in computer vision for medical imaging, stationarity analysis, and the construction of interpretable and transferable models.
This will be combined with our recent work on methods that reveal both where in an image the evidence for a model’s output lies and which features of the model were used in the decision making.
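For the ‘where’ part of this capability, a standard starting point is a gradient-based saliency map. The sketch below uses PyTorch with an untrained placeholder network and a random tensor in place of an x-ray; it shows one common attribution technique, not necessarily the method used in our own work.

```python
# Assumed libraries: torch and torchvision. A minimal gradient-saliency sketch.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)   # untrained placeholder network
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for an x-ray
score = model(image)[0].max()            # score of the top predicted class
score.backward()

# Pixel-wise importance: magnitude of the gradient of the score w.r.t. the input.
saliency = image.grad.abs().max(dim=1).values.squeeze()   # shape (224, 224)
print(saliency.shape)
```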