Thesis
AI systems in healthcare need more than predictive performance. They need evaluation workflows and explanation methods that help collaborators understand what a model is doing, where it fails, and how much confidence to place in its outputs.
Approach
- Apply explainable AI methods to inspect model behaviour and highlight limitations (a saliency-map sketch follows this list).
- Build end-to-end segmentation and evaluation pipelines rather than isolated notebooks (see the Dice-evaluation sketch below).
- Present findings in a way that supports cross-functional discussion with research collaborators.
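As a concrete illustration of the first point, here is a minimal sketch of an input-gradient saliency map, one of the simpler explainable-AI techniques. It assumes a PyTorch model; `model` and `image` are hypothetical stand-ins for whatever network and input a given pipeline uses, not names from this project.

```python
import torch

def saliency_map(model, image):
    """Input-gradient saliency: which pixels most affect the top output?

    `model` is any torch.nn.Module; `image` is a (C, H, W) tensor.
    """
    model.eval()
    x = image.unsqueeze(0).requires_grad_(True)  # add batch dimension
    output = model(x)
    # Back-propagate the largest output score to the input pixels.
    output.max().backward()
    # Collapse channels so the map can be overlaid on the input image.
    return x.grad.abs().squeeze(0).max(dim=0).values
```

High-gradient regions mark pixels the prediction is sensitive to, which is a useful starting point for spotting when a model relies on imaging artefacts rather than anatomy.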
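For the second point, a minimal sketch of the evaluation half of such a pipeline: per-case Dice scores aggregated into a summary that surfaces the worst case as well as the mean. The `cases` iterable of (prediction, ground-truth) mask pairs is an assumption for illustration.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice overlap between two binary masks (NumPy arrays)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def evaluate(cases):
    """Summarise per-case scores; `cases` yields (prediction, truth) pairs."""
    scores = [dice_score(pred, truth) for pred, truth in cases]
    # Report the worst case alongside the mean: failure modes matter
    # as much as average performance in clinical settings.
    return {"mean_dice": float(np.mean(scores)),
            "worst_dice": float(np.min(scores)),
            "n_cases": len(scores)}
```

Reporting the minimum alongside the mean reflects the thesis above: collaborators need to see where a model fails, not just how well it does on average.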
Outcome
This work sits at the intersection of machine learning engineering and research communication: the system has to be technically sound, but its conclusions also have to be interpretable by the people using it.