Federated Learning (FL) is a distributed learning paradigm that enables
training models across multiple clients without accessing their data.
In the context of network security, FL can be used to collaboratively
train Intrusion Detection System (IDS) models across multiple
organizations, allowing participants to share knowledge without
compromising data privacy. However, the distributed nature of FL raises
new challenges, notably the heterogeneity of clients’ data distributions
and the identification of malicious contributions.
This three-part tutorial introduces the audience to (i) the principles
of FL, (ii) its application to network security, focusing on building
Collaborative Intrusion Detection Systems (CIDSs) using FL, and (iii)
the security challenges associated with deploying Federated Intrusion
Detection Systems (FIDSs), with a focus on poisoning attacks. Each part is
illustrated with hands-on exercises, with step-by-step instructions
provided in the companion material.
Tutorial content
1. Fundamentals of FL,
2. FL for collaborative security,
3. Security of FL architectures.
Fundamentals of FL. The first lecture introduces the audience to the core principles of FL with examples of applications. In the hands-on, participants will be introduced to Flower, an open-source framework for FL in Python, and to existing datasets that can be used for FL. The goal is to lay down the foundations for the rest of the tutorial.
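At the heart of these core principles is federated averaging (FedAvg): each client trains locally and the server combines the resulting models, weighted by local dataset size. A minimal NumPy sketch (parameter vectors and sample counts are toy values, not from the tutorial material):

```python
import numpy as np

def fedavg(updates, n_samples):
    """Weighted average of client parameter vectors (FedAvg).

    updates:   list of 1-D model parameter vectors, one per client
    n_samples: number of local training samples per client
    """
    weights = np.asarray(n_samples, dtype=float)
    weights /= weights.sum()  # normalize to aggregation weights
    return sum(w * u for w, u in zip(weights, np.asarray(updates)))

# Three clients with different local models and dataset sizes.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]
global_model = fedavg(clients, sizes)  # client 3 counts twice as much
```

In frameworks such as Flower, this aggregation runs on the server side while clients only exchange parameters, never raw data.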
FL for collaborative security. The second lecture will focus on the application of FL to network security, and more specifically to the training of Collaborative Intrusion Detection System (CIDS) models. This part will focus on the challenges raised by the heterogeneity of the clients' environments, and how to address them. The hands-on will consist of building a simple CIDS model using Flower and a dataset of network traffic, and experimenting with some of these challenges on toy examples.
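A common way to emulate such heterogeneity in experiments is a Dirichlet partition of the labels: each client receives a skewed share of every traffic class, and lower concentration values yield more non-IID local datasets. A sketch of this partitioning (the binary labels are a stand-in for benign/attack classes, not a real traffic dataset):

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, rng):
    """Split sample indices across clients with label skew.

    Lower alpha -> more heterogeneous (non-IID) client datasets.
    """
    parts = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Proportion of this class assigned to each client.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for part, chunk in zip(parts, np.split(idx, cuts)):
            part.extend(chunk.tolist())
    return parts

rng = np.random.default_rng(0)
labels = np.array([0] * 50 + [1] * 50)  # toy benign/attack labels
parts = dirichlet_partition(labels, n_clients=4, alpha=0.5, rng=rng)
```

With alpha near 0, some clients end up seeing almost no samples of a given class, which is exactly the situation a CIDS must cope with when organizations face different attack mixes.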
Security in collaborative FL. The last lecture will address some challenges of deploying and running Federated Intrusion Detection Systems (FIDSs). Depending on the nature of the federation (public or private, trustworthiness of the participants, etc.), such systems can be vulnerable to various attacks. In particular, we will focus on poisoning attacks, where a participant tries to degrade the global model by sending malicious contributions, before discussing possible countermeasures. The hands-on will consist of simulating a poisoning attack on the CIDS model built in the previous part, and experimenting with strategies to detect and mitigate such attacks.
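The effect of a poisoning attack on plain averaging, and one simple countermeasure, can be illustrated in a few lines: a single malicious client submits a large opposite-direction update, which skews the mean, while a robust aggregator such as the coordinate-wise median (one of several possible defenses, used here purely as an illustration) stays close to the honest updates:

```python
import numpy as np

def mean_aggregate(updates):
    """Unweighted FedAvg-style aggregation."""
    return np.mean(updates, axis=0)

def median_aggregate(updates):
    """A simple robust aggregator: coordinate-wise median."""
    return np.median(updates, axis=0)

# Honest clients agree on an update near [1, 1]; one poisoner sends
# a large opposite-direction update to drag the global model away.
honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
poisoned = honest + [np.array([-50.0, -50.0])]

avg = mean_aggregate(poisoned)    # heavily skewed by the single attacker
med = median_aggregate(poisoned)  # remains close to the honest consensus
```

More sophisticated attacks (e.g., stealthy or colluding poisoners) and defenses are precisely what the hands-on lets participants experiment with.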
Link to our GitHub: https://github.com/leolavaur/icdcs_2025

Yann BUSNEL
Institut Mines-Télécom, France

Léo LAVAUR
SnT, University of Luxembourg, Luxembourg