
Summary

This thesis investigates the interpretability of machine learning models from a holistic standpoint, combining optimization-based XAI methods with evaluation grounded in user studies. We propose optimization-based methods to generate rule sets for binary and multi-class classification, as well as to generate (robust) counterfactual explanations for common machine learning models. In addition, we discuss the implications of XAI in practical settings such as healthcare and conduct user studies with potential end users and domain experts. In doing so, we approach the field of interpretable machine learning from a distinctive angle and incorporate insights from the social sciences into the research and development of XAI methods.
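To illustrate the kind of output a counterfactual explanation method produces, the sketch below finds a small change to an input that flips a classifier's prediction. This is a minimal illustration only, not the thesis's actual method: the linear model, its weights, and the step size are all hypothetical assumptions chosen for the example.

```python
import numpy as np

# Hypothetical toy linear classifier: predicts 1 if w.x + b > 0.
# Stands in for any scored model; weights are illustrative only.
w = np.array([1.0, -2.0])
b = 0.5

def predict(x):
    return int(np.dot(w, x) + b > 0)

def counterfactual(x, steps=2000, lr=0.01):
    """Search for a small change to x that flips the prediction.

    For a linear model, moving along +/- w is the minimum-norm
    direction to cross the decision boundary, so repeated small
    steps yield a near-minimal counterfactual.
    """
    target = 1 - predict(x)          # the class we want to reach
    sign = 1.0 if target == 1 else -1.0
    xp = x.copy()
    for _ in range(steps):
        if predict(xp) == target:    # stop as soon as the label flips
            break
        xp = xp + sign * lr * w      # step toward the target class
    return xp

x = np.array([2.0, 2.0])             # scored at -1.5, so class 0
cf = counterfactual(x)               # a nearby point classified as 1
```

The difference `cf - x` is the explanation: it tells the user which features to change, and by how much, to obtain a different outcome. Real methods add constraints (feature plausibility, sparsity, robustness) on top of this basic idea.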