Awards
We are pleased to announce the Best Paper Award and the Best Paper Runner-Up Award for KI24. These awards highlight research that demonstrates originality, rigorous methodology, and significant contributions to AI. Congratulations to the authors on this well-deserved recognition.
Best Paper Award
Graph2RETA: Graph Neural Networks for Pick-up and Delivery Route Prediction and Arrival Time Estimation
Abstract:
This research proposes an effective way to address the issues faced by pick-up and delivery services. The real-world variables that affect delivery routes are frequently overlooked by traditional routing technologies, resulting in discrepancies between intended and actual trajectories. Similarly, forecasting the Estimated Time of Arrival (ETA) poses unique challenges due to its high dimensionality. To overcome these difficulties, we propose an integrated predictive modeling methodology that tackles route prediction in a dynamic environment and ETA prediction simultaneously. Our method, Graph2RETA, uses a dynamic spatial-temporal graph-based model to forecast delivery workers’ future routing behaviors while integrating route inference into ETA prediction. By incorporating the underlying graph structure and features, Graph2RETA leverages rich decision context and spatial-temporal information to improve prediction accuracy over the current state of the art while capturing dynamic interactions between workers and timesteps.
Best Paper Runner-Up Award
Quantifying the Trade-Offs between Dimensions of Trustworthy AI - An Empirical Study on Fairness, Explainability, Privacy, and Robustness
Abstract:
Trustworthy AI encompasses various requirements for AI systems, including explainability, fairness, privacy, and robustness. Addressing these dimensions concurrently is challenging due to inherent tensions and trade-offs between them. Current research highlights these trade-offs, focusing on specific interactions, but comprehensive and systematic evaluations remain insufficient. This study aims to enhance the understanding of trade-offs among explainability, fairness, privacy, and robustness in AI. By conducting extensive experiments in the domain of image classification, it quantitatively assesses how methods to improve one requirement impact the others. More specifically, it explores different training adaptations to enhance each requirement and measures their effects on the others across four datasets for gender classification. The experiments revealed that the Local Gradient Alignment method improved explainability and robustness but introduced trade-offs in fairness, privacy, and accuracy. Fairness-focused training adaptations enhanced fairness only for the most biased models; in all other cases, fairness, explainability, and robustness were reduced. Differential Privacy improved privacy but compromised explainability, fairness, and accuracy, with varied impacts on robustness. Data augmentation techniques enhanced robustness, explainability, and accuracy with minor trade-offs in privacy and fairness.