
Algorithms for Interpretable Machine Learning with Quantitative Argumentation Frameworks


Oliver Turk

08/05/2025

Supervised by Nico Potyka; Moderated by Jing Wu

Interpretable Machine Learning aims to design machine learning models that are human-interpretable [A]. While classical approaches such as decision trees, rule-based classifiers and regression models satisfy this criterion, their learning performance is often inferior to that of black-box models such as ensemble classifiers and neural networks. Recently, it has been shown that there is a close relationship between neural networks and quantitative argumentation frameworks [B] (a minimal evaluation sketch is given after the task list below). This connection can be used to combine parameter-learning ideas from neural networks with structure-learning ideas from graphical models to learn human-interpretable argumentative classifiers from data [C]. The learning problem is a hard combinatorial optimization problem that is currently solved with general-purpose meta-heuristics [C]. The goal of this project is to develop more sophisticated learning algorithms and to increase the representational power of the argumentation frameworks, so as to improve learning performance and interpretability on larger datasets (the project focuses on tabular data). Possible tasks include:

a) Improving the representational power of the learnt argumentation frameworks, for example by replacing classical arguments with fuzzy arguments, and independent attacks and supports with joint attacks and supports, so that correlated effects of features are captured more compactly.

b) Designing and implementing new learning algorithms. This may include refining meta-heuristics or designing new algorithms tailored to the problem structure (a toy local-search baseline is sketched below).

c) Developing and implementing ideas for automatically drawing and visualizing the learnt argumentation frameworks in the most comprehensible way.

d) Designing experiments to evaluate and compare the human interpretability of different interpretable machine learning methods.
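To make the connection of [B] concrete, the following is a minimal sketch of how a quantitative argumentation framework can be evaluated like a small neural network, assuming sum aggregation of signed edge weights and a logistic influence function. The function names and the toy framework are illustrative, not taken from [B] or [C].

    import math

    def logistic(x):
        return 1.0 / (1.0 + math.exp(-x))

    def logit(p):
        # Inverse of the logistic function; base scores must lie strictly in (0, 1).
        return math.log(p / (1.0 - p))

    def evaluate(base_scores, edges, iterations=50):
        # base_scores: argument name -> base score in (0, 1).
        # edges: (source, target, weight) triples; positive weights act as
        # supports, negative weights as attacks.
        strength = dict(base_scores)
        for _ in range(iterations):
            # Each argument shifts its base score by the weighted strengths of
            # its attackers and supporters, like a neuron's activation.
            strength = {
                a: logistic(logit(beta) + sum(w * strength[src]
                                              for src, tgt, w in edges if tgt == a))
                for a, beta in base_scores.items()
            }
        return strength

    # Illustrative toy framework: x1 supports the claim c, x2 attacks it.
    base = {"x1": 0.9, "x2": 0.3, "c": 0.5}
    edges = [("x1", "c", 1.0), ("x2", "c", -1.0)]
    print(evaluate(base, edges))  # c's strength rises above its base score 0.5

On acyclic frameworks such as the toy example, the update stabilizes after a fixed number of passes; cyclic frameworks require more care with convergence, cf. the discussion in [B].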
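As a point of reference for task b), here is a minimal hill-climbing sketch for learning the edge structure of a flat framework with a single output argument. It is an illustrative baseline under simplifying assumptions (feature arguments feed one claim, edge weights restricted to ±1, fitness measured as training accuracy); it is not the meta-heuristic approach of [C].

    import random
    from math import exp

    def predict(edges, features):
        # One-pass evaluation of a flat framework: feature arguments take their
        # observed values as strengths, and the claim aggregates weighted inputs
        # through a logistic influence function.
        z = sum(w * features[src] for (src, tgt), w in edges.items() if tgt == "claim")
        return 1.0 / (1.0 + exp(-z))

    def fitness(edges, data):
        # Fraction of examples whose thresholded prediction matches the label.
        return sum((predict(edges, x) >= 0.5) == y for x, y in data) / len(data)

    def hill_climb(feature_names, data, steps=500, seed=0):
        rng = random.Random(seed)
        edges = {}  # (source, "claim") -> +1.0 (support) or -1.0 (attack)
        best = fitness(edges, data)
        for _ in range(steps):
            key = (rng.choice(feature_names), "claim")
            old = edges.get(key)
            # Mutate one candidate edge: absent -> support -> attack -> absent.
            if old is None:
                edges[key] = 1.0
            elif old > 0:
                edges[key] = -1.0
            else:
                del edges[key]
            score = fitness(edges, data)
            if score >= best:
                best = score
            else:
                # Revert the mutation if it decreased training accuracy.
                if old is None:
                    del edges[key]
                else:
                    edges[key] = old
        return edges, best

    # Illustrative toy data: the label is positive when feature a is active.
    data = [({"a": 1.0, "b": 0.0}, True), ({"a": 0.0, "b": 1.0}, False),
            ({"a": 1.0, "b": 1.0}, True), ({"a": 0.0, "b": 0.0}, False)]
    print(hill_climb(["a", "b"], data, steps=200))

An algorithm tailored to the problem structure could, for instance, search only over the discrete edge structure while fitting the continuous weights by other means; designing such refinements is part of the project.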

[A] Christoph Molnar, Giuseppe Casalicchio, Bernd Bischl: Interpretable Machine Learning - A Brief History, State-of-the-Art and Challenges. PKDD/ECML Workshops 2020: 417-431. https://link.springer.com/chapter/10.1007/978-3-030-65965-3_28

[B] Nico Potyka: Interpreting Neural Networks as Quantitative Argumentation Frameworks. AAAI 2021: 6463-6470. https://ojs.aaai.org/index.php/AAAI/article/view/16801

[C] Nico Potyka, Mohamad Bazo, Jonathan Spieler, Steffen Staab: Learning Gradual Argumentation Frameworks using Meta-heuristics. ArgML 2022: 96-108. http://ceur-ws.org/Vol-3208/paper7.pdf

Requirements

Essential: Basic understanding of machine learning and search algorithms. Familiarity with Python.

Useful: Familiarity with deep learning libraries and/or argumentation frameworks is helpful, but the necessary skills can also be acquired during the project.

In terms of modules, the content of Machine Learning, Combinatorial Optimization or Knowledge Representation can be helpful for the project.


Initial Plan (03/02/2025) [Zip Archive]

Final Report (08/05/2025) [Zip Archive]

Publication Form