In this episode, we explore the intersection of model compression and interpretability in medical AI with the authors of the newly published research paper "Interpretability-Aware Pruning for Efficient Medical Image Analysis." Join us as Vinay Kumar Sankarapu, Pratinav Seth, and Nikita Malik from AryaXAI discuss how their framework prunes deep learning models using attribution-based methods, retaining critical decision-making features while drastically reducing model complexity.
Whether you're a machine learning researcher, an AI engineer in medtech, or working on explainable AI (XAI) for regulated environments, this conversation unpacks how to build models that are both efficient and interpretable, and ready for the real world.
📄 Read the full paper: https://arxiv.org/abs/2507.08330