Explainable AI
As AI models grow increasingly complex, understanding their decisions becomes increasingly difficult. Explainable AI (XAI) provides methods to trace these decisions and to verify that complex models behave as intended.
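To illustrate one simple feature-relevance technique of the kind studied in the publications below, here is a minimal sketch of permutation importance: a feature matters if shuffling its values changes the model's output. The model and data are toy illustrations, not the group's actual methods.

```python
import random

def model(x):
    # Toy scoring model: depends strongly on x[0], weakly on x[1],
    # and not at all on x[2].
    return 2.0 * x[0] + 0.5 * x[1]

def permutation_importance(model, data, n_repeats=10, seed=0):
    # Mean absolute change in model output when one feature column
    # is shuffled across the dataset; larger change = more relevant.
    rng = random.Random(seed)
    baseline = [model(x) for x in data]
    importances = []
    for j in range(len(data[0])):
        drop = 0.0
        for _ in range(n_repeats):
            col = [x[j] for x in data]
            rng.shuffle(col)
            permuted = [x[:j] + [c] + x[j + 1:] for x, c in zip(data, col)]
            scores = [model(x) for x in permuted]
            drop += sum(abs(s - b) for s, b in zip(scores, baseline)) / len(data)
        importances.append(drop / n_repeats)
    return importances

data = [[random.Random(i).uniform(-1, 1) for _ in range(3)] for i in range(50)]
imp = permutation_importance(model, data)
# imp[0] comes out largest; imp[2] is exactly zero, since the model
# ignores the third feature entirely.
```

Shapley-value-based explanations, as in the inpainting paper below, refine this idea by averaging a feature's contribution over all feature coalitions rather than shuffling one column at a time.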
Publications
2024
- Generative Inpainting for Shapley-Value-Based Anomaly Explanation. In The World Conference on eXplainable Artificial Intelligence (xAI 2024), to appear. 2024.
- Data Generation for Explainable Occupational Fraud Detection. In 47th German Conference on Artificial Intelligence (KI 2024), to appear. 2024.
2023
- Feature relevance XAI in anomaly detection: Reviewing approaches and challenges. In Frontiers in Artificial Intelligence, 6. 2023.
- Evaluating feature relevance XAI in network intrusion detection. In The World Conference on eXplainable Artificial Intelligence (xAI 2023). 2023.
- Occupational Fraud Detection through Agent-based Data Generation. In The 8th Workshop on MIning DAta for financial applicationS (MIDAS 2023). 2023.
2022
- Open ERP System Data For Occupational Fraud Detection. arXiv preprint. 2022.
- Towards Explainable Occupational Fraud Detection. In Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2022, Communications in Computer and Information Science (1753), pp. 79–96. 2022.
2021
- A financial game with opportunities for fraud. In IEEE Conference on Games (CoG 2021). 2021.
2020
- Evaluation of Post-hoc XAI Approaches Through Synthetic Tabular Data. In Foundations of Intelligent Systems, D. Helic, G. Leitner, M. Stettinger, A. Felfernig, Z. W. Raś (eds.), pp. 422–430. Springer International Publishing, Cham, 2020.