Dear all,
Dr. Francesco Leofante
(invited by Prof. Dr. Matthias Thimm)
will give the next Oberseminar talk (14th November, 10:00, via Zoom):
Title:
Robustness issues in algorithmic recourse.
Abstract:
Counterfactual explanations (CEs) are advocated as ideally suited to providing algorithmic recourse for subjects affected by the predictions of machine learning models. While CEs can be beneficial to affected individuals, recent work has exposed severe issues related to the robustness of state-of-the-art methods for computing CEs. Since a lack of robustness may compromise the validity of CEs, techniques to mitigate this risk are in order.
In this talk, we will introduce the problem of (lack of) robustness, discuss its implications, and present some recent solutions we have developed to compute CEs with robustness guarantees.
Bio:
Francesco is an Imperial College Research Fellow affiliated with the Centre for Explainable Artificial Intelligence at Imperial College London. His research focuses on safe
and explainable AI, with a special emphasis on counterfactual explanations and their robustness. Since 2022, he has led the project “ConTrust: Robust Contrastive Explanations for Deep Neural Networks”, a four-year effort devoted to the formal study of robustness
issues arising in XAI. More details about Francesco and his research can be found at https://fraleo.github.io/.
We cordially ask attendees to turn on their cameras and turn off their microphones during the presentation.
--- Currently Scheduled Talks ---
If you no longer wish to receive messages from this mailing list, please let me know by simply replying "unsubscribe" to this e-mail (steffi.bluemel@fernuni-hagen.de).
Best regards,
Steffi Blümel
- Secretariat -
----------------------------------------------------
FernUniversität in Hagen
Fakultät Mathematik und Informatik /
Lehrgebiet Künstliche Intelligenz
Universitätsstraße 1
58097 Hagen
Phone: +49 2331 987 4006
E-Mail: steffi.bluemel@fernuni-hagen.de