Moving away from the black-box nature of AI and making machine learning more accountable


The drive toward greater adoption of machine learning (ML) in healthcare is accompanied by growing calls for ML and AI-based systems to be regulated and held accountable. Explainable ML models can be instrumental in delivering that accountability. Healthcare also poses unique challenges for ML: the demands for explainability, model fidelity, and overall performance are far higher than in most other domains.

While ML has demonstrated significant value in healthcare, one pivotal impediment to its adoption is the black-box nature, or opacity, of many machine learning algorithms. Explainable models offer a way past that opacity by making a model's reasoning visible to clinicians and regulators.

In this research paper, KenSci:

  • Explores the impact of explainable, or interpretable, ML models in healthcare
  • Reviews the notion of interpretability in the healthcare context, its various nuances, the interpretability challenges unique to healthcare, and the future of interpretability in the field
  • Demonstrates how users can understand how ML models arrive at their predictions (a minimal illustrative sketch follows this list)
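To give a flavor of what prediction-level explanation can look like, here is a minimal sketch using an inherently interpretable linear model, where each feature's contribution to a single prediction can be read directly from the coefficients. The feature names, synthetic data, and scikit-learn model are illustrative assumptions for this sketch only, not KenSci's actual pipeline or the methods described in the paper.

```python
# A minimal sketch of per-prediction explanation for a linear model.
# Feature names and data below are hypothetical placeholders, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical patient features: age, length of stay, prior admissions
feature_names = ["age", "length_of_stay", "prior_admissions"]
X = rng.normal(size=(500, 3))
# Synthetic "readmission" labels correlated with the features
y = (X @ np.array([0.8, 1.2, 1.5]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a linear model, each feature's contribution to the log-odds of one
# prediction is simply coefficient * feature value, so a single prediction
# can be decomposed into per-feature terms a user can inspect.
patient = X[0]
contributions = model.coef_[0] * patient
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.3f} contribution to log-odds")
print(f"intercept: {model.intercept_[0]:+.3f}")
```

For more complex, less transparent models, post-hoc attribution tools (e.g., SHAP or LIME) serve a similar role, producing per-feature contribution scores for individual predictions.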


Please fill out the form to see how health systems with AI platforms are now able to move away from the black-box nature of AI and hold the predictions delivered by their models accountable.