
Summary

The boom of machine learning models and their deployment in practice is accompanied by a growing demand for interpretability. This PhD project broadly focuses on interpretability in machine learning and can be split into two main phases. First, the project investigates what interpretability means, specifically in the context of healthcare: while many researchers invest effort in creating interpretability tools, it is not yet established what interpretability means and what the end user actually finds interpretable. Second, the project aims to develop optimization-based methods for interpretability. To this end, I am currently working on a scalable algorithm for learning with linear programming, and on post-hoc counterfactual explanations using optimization with constraint learning.
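
To give a flavor of the second phase, the sketch below shows one common way to pose a post-hoc counterfactual explanation as a constrained optimization problem: find the smallest perturbation of a given input such that a trained classifier's prediction flips to a target class. This is a minimal illustration under stated assumptions, not the project's actual method; the synthetic data, logistic regression model, L2 distance, and SLSQP solver are all choices made for the example.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a simple classifier to explain (synthetic setup for illustration).
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)

x0 = X[0]                              # factual instance to explain
target = 1 - clf.predict([x0])[0]      # desired (flipped) class

def objective(x):
    # Minimize the squared L2 distance to the factual instance.
    return np.sum((x - x0) ** 2)

def class_constraint(x):
    # The model's probability for the target class must exceed 0.5.
    return clf.predict_proba(x.reshape(1, -1))[0, target] - 0.5

res = minimize(
    objective,
    x0=x0,
    constraints=[{"type": "ineq", "fun": class_constraint}],
    method="SLSQP",
)

print("counterfactual:", res.x)
print("predicted class:", clf.predict(res.x.reshape(1, -1))[0])
```

In practice, the constraint that encodes the model's decision boundary is what makes the problem hard; constraint learning approaches embed a trained predictive model directly into the optimization formulation so that off-the-shelf solvers can handle it.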