Resilient AI Networking Sub-Group
The Resilient AI Networking (RAN) sub-group undertakes research to advance the resiliency of AI algorithms in the context of 5G and 6G networks. We strive to develop new solutions that enhance the understanding and transparency of AI models and investigate methods for detecting and mitigating adversarial attacks against AI systems, thereby ensuring the robustness and security of these critical components of 5G and 6G networks.
- Jan – 24. Abhishek joins as PhD student in the context of the national project bRAIN. Welcome aboard.
- Dec – 23. SAFE was very successful: more than 15 attendees contributed to a fruitful discussion on topics of AI explainability, safety and robustness.
- Dec – 23. Two papers accepted at IEEE INFOCOM 2024! One presents AIChronoLens, a new tool for AI time-series explainability (congrats to the team: Eloy, Pablo, Hossein, Marco and Joerg). The second presents a study on 5G roaming based on a measurement campaign in Spain, France, Germany and Italy (congrats Ross, Eman, Jason, Daqing, Yiling, Feng, Joerg and Zhi-li).
- Dec – 23. MohammadErfan joins as research engineer in the context of the national project bRAIN. Welcome aboard.
- Nov – 23. Artifacts available, functional & reproduced! ACM has evaluated EXPLORA’s artifacts and awarded all the reproducibility badges to our work.
- Nov – 23. Accepted paper at ACM CoNEXT! We will present our work EXPLORA, a tool that generates explanations to understand and control the logic of Deep Reinforcement Learning agents applied to network slicing in Open RAN systems. Work done in collaboration with Leonardo Bonati, Salvatore D’Oro, Michele Polese and Tommaso Melodia from Northeastern University and Joerg Widmer from IMDEA.
- Oct – 23. We received the Best Paper Runner-Up award at ACM WiNTECH for our work that makes publicly available a unique dataset of LTE control information at millisecond level, matching the granularity of actual LTE network operation. This dataset is instrumental to the design of AI/ML techniques.
Tools for AI explainability and robustness.
We study specific AI models applied to network problems, such as mobile traffic forecasting and resource allocation, and develop tools that augment the understanding of the corresponding models’ logic and their robustness. We strive to ensure that explanations are comprehensible and actionable, i.e., network experts can relate the semantics of the explanations to their expert knowledge, and the insights the explanations provide can be leveraged by routines and algorithms for further optimization. Examples of such tools are DeExp, EXPLORA, and AIChronoLens.
- Jan – 24. Pablo is interviewed about his research experience as part of the INVESTIGO program on the national TV program Aquí hay trabajo: clip here.