Featured Keynote Talks

ICDCS 2025 is honored to welcome distinguished leaders in distributed computing systems as our keynote speakers.

ICDCS 2025 Keynote Speakers
Professor Peter Druschel
2025 IEEE TCDP Award

Max Planck Institute for Software Systems (MPI-SWS)

Recipient of the 2025 IEEE TCDP Award for Outstanding Technical Achievement for pioneering contributions to the design and implementation of large-scale distributed systems

Peter Druschel is the founding director of the Max Planck Institute for Software Systems (MPI-SWS) and past Chair of the Chemistry, Physics, and Technology Section of the Max Planck Society in Germany. Previously, he was a Professor of Computer Science and Electrical and Computer Engineering at Rice University in Houston, Texas.

His research interests include distributed systems, mobile systems, and privacy-preserving, secure, and compliant systems. He is the recipient of an Alfred P. Sloan Fellowship, the ACM SIGOPS Mark Weiser Award, a Microsoft Research Outstanding Collaborator Award, and the EuroSys Lifetime Achievement Award. Peter is a member of Academia Europaea and the German Academy of Sciences Leopoldina.

Towards Scalable Secure Analytics of Personal Data

Abstract

Awareness is growing that the statistical analysis of personal data, such as individuals’ mobility, financial, and health data, could be of significant benefit to society. However, liberal societies have refrained from such analytics, arguably due to the lack of trusted analytics platforms that scale to billions of records while reliably preventing the leakage and misuse of personal data.

In this talk, I will sketch CoVault, a prototype analytics platform that leverages server-aided secure multi-party computation (MPC) and trusted execution environments (TEEs) to colocate the MPC parties in a single datacenter without reducing security. Thus, CoVault can scale MPC horizontally to the datacenter’s available resources. For example, CoVault can scale the DualEx 2PC protocol to perform epidemic analytics for a country of 80M people (about 11.85B data records/day) on a continuous basis using one core pair for every 30,000 people.

This talk is based on joint work with Roberta De Viti, Deepak Garg, Isaac Sheff, Noemi Glaeser, Baltasar Dinis, Rodrigo Rodrigues, Bobby Bhattacharjee, and Anwar Hithnawi.

Professor Ling Liu

Georgia Institute of Technology

Ling Liu is a Professor in the School of Computer Science at the Georgia Institute of Technology. She directs the research programs in the Distributed Data Intensive Systems Lab (DiSL), examining various aspects of Internet-scale, big-data-powered artificial intelligence (AI) systems, algorithms, and analytics, including performance, reliability, privacy, security, and trust. Her research in the ML systems area centers on efficient AI systems and algorithms, as well as trustworthy AI through the development of AI security and AI privacy guardrails.

Prof. Ling Liu’s current research is primarily supported by the National Science Foundation under CISE programs, Cisco, and IBM.

Responsible Finetuning of Large GenAI Models with Heterogeneous Edge Clients

Abstract

The human-like generative ability of Large GenAI Models has ushered in a new era of foundational models, unlocking new possibilities and driving multi-modal, cross-domain innovations. However, the transformative potential of these large models is hindered by significant accessibility challenges: (i) Large GenAI models are powered by over-parameterization, demanding substantial GPU resources for training and inference and posing deployment challenges on heterogeneous platforms and for learning downstream tasks with proprietary data, which makes accessibility for all a grand challenge. (ii) Large GenAI models trained on massive public-domain data may introduce problematic hallucinations, which can lead to misinformation and harmful content, making safe finetuning of GenAI models another grand challenge.

This keynote will first review existing techniques for responsible finetuning of pretrained large GenAI models, and then present responsible finetuning frameworks for adapting GenAI to domain-specific learning by leveraging efficient distribution and partitioning of heterogeneous compute resources, aiming to tackle the two accessibility challenges outlined above.

Professor Peter Pietzuch

Imperial College London

Peter Pietzuch is a Professor of Distributed Systems at Imperial College London, where he leads the Large-scale Data & Systems (LSDS) group. His research work focuses on the design and engineering of scalable, reliable and secure data-intensive and cloud software systems, with a particular interest in machine learning, data management and security issues.

Currently, he serves as a Co-Director for Imperial’s I-X initiative on AI, data, and digital, a Programme Committee Co-Chair for the ACM European Conference on Computer Systems (EuroSys 2026), and a General Co-Chair for the International Conference on Very Large Data Bases (VLDB 2025). He received the ACM SIGMOD 2023 Test-of-Time Award for his work on scalable stream processing systems. Before joining Imperial College London, he was a postdoctoral fellow at Harvard University. He holds PhD and MA degrees from the University of Cambridge.

Making Future Distributed AI Systems More Adaptive

Abstract

Distributed AI model training and inference are becoming the most important workloads in modern data centres. Despite the fast progress in the domain of AI systems, we still do not know the best way to design a distributed AI software stack. In this talk, I will trace the evolution of AI software stacks from the perspective of distributed computing. I will make the case that existing designs are not sufficiently adaptive to react to changes in data centre resources and workload requirements. This poses new interesting research challenges to the distributed computing community on how to design the next generation of adaptive AI systems. Based on our prior work at Imperial College London, I will give examples of how adaptive features can be added to today’s distributed AI software stacks.