Research Note

Explainable AI and Model Evaluation in Healthcare

Research and implementation work focused on making healthcare AI systems more interpretable, easier to test, and easier to explain to collaborators.

Year
2025
Domain
Explainable AI
Publication
University of Guelph / Graduate Research
Tags
Explainable AI, Evaluation, Computer Vision

Thesis

AI systems in healthcare need more than predictive performance: they need evaluation workflows and explanation methods that show collaborators what a model is doing, where it fails, and how much confidence to place in its outputs.

Approach

  • Apply explainable AI methods to inspect model behaviour and highlight limitations.
  • Build end-to-end segmentation and evaluation pipelines rather than isolated notebooks.
  • Present findings in a way that supports cross-functional discussion with research collaborators.
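To make the first two points concrete, here is a minimal sketch of two building blocks such a pipeline might contain: a Dice overlap score for evaluating segmentation masks, and an occlusion-sensitivity map as a simple model-agnostic explanation method. This is an illustration under stated assumptions, not the project's actual code; `predict`, `patch`, and `baseline` are hypothetical names, and `predict` stands in for any callable mapping a 2-D image to a scalar score.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def occlusion_map(image, predict, patch=8, baseline=0.0):
    """Occlusion sensitivity: how much the model's score drops when each
    patch of the image is blanked out. `predict` is any callable mapping
    a 2-D image to a scalar score (a hypothetical interface)."""
    base = predict(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            # Large drops mark regions the model relies on most.
            heat[i // patch, j // patch] = base - predict(occluded)
    return heat
```

Per-image Dice scores (rather than a single aggregate) make failure cases easy to surface in discussion, and an occlusion heat map gives collaborators a picture of where the model is looking without requiring access to its internals.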

Outcome

This work sits at the intersection of machine learning engineering and research communication: the system has to be technically sound, and its conclusions have to be interpretable by the people who rely on them.