Nils Lukas
PhD candidate, Trustworthy ML & Secure Computation
University of Waterloo
Canada
Hello, I am a PhD candidate at the University of Waterloo, supervised by Florian Kerschbaum and part of the Cryptography, Security and Privacy (CrySP) group.
I research the threats that arise when deploying deep neural networks from three perspectives: (1) Privacy, when the model is trained on private data; (2) Reliability, when the model’s training data cannot be trusted (e.g., due to data poisoning); and (3) Model Misuse, when the model’s users cannot be trusted (e.g., due to deepfake generation).
My work includes studying privacy attacks against large language models fine-tuned on private datasets, developing defenses against data poisoning, and creating methods for controlling model misuse. In collaboration with our group, I have also contributed to developing Secure Multi-Party Computation protocols for Private Information Retrieval and for the Secure Inference of Deep Neural Networks. I received a Master of Science from RWTH Aachen University (Germany) in 2019.
If you have any questions or would like to learn more about my work, please feel free to contact me. You can also find my work on GitHub.
News
| Date | News |
|---|---|
| Jan 16, 2024 | Two papers accepted at ICLR’24 in Vienna: our paper on Adaptive Attacks against Image Watermarks and our work on Universal Backdoor Attacks. |
| Dec 11, 2023 | I gave a research talk at Meta about the privacy of personal information in fine-tuned LLMs. |
| Oct 9, 2023 | I gave a research talk at the University of California, Berkeley, and won the Best Poster Award at the 2023 Cheriton Symposium at the University of Waterloo. |
| Jun 26, 2023 | I gave research talks at Google and MongoDB. Our IEEE S&P’23 paper was featured on the Microsoft Research Focus blog. |
| May 24, 2023 | One paper accepted at USENIX Security ’23 in Anaheim, California, on Learnable Watermarks for Pre-trained Image Generators. |
| Apr 3, 2023 | One paper accepted at IEEE S&P’23 in San Francisco, California, studying the Leakage of Personal Information in Language Models. 🏆 The paper won the Distinguished Contribution Award at Microsoft’s internal Machine Learning, AI & Data Science Conference (MLADS) in Fall 2023. |
| May 1, 2022 | I am doing a research internship at Microsoft Research, supervised by Shruti Tople and Lukas Wutschitz. |
| Feb 1, 2022 | 🏆 I was awarded the David R. Cheriton Scholarship for outstanding academic achievements. |
| Jan 1, 2022 | One paper accepted at IEEE S&P’22 in San Francisco, California, in which we handcraft Adaptive Attacks against Eleven Model Watermarks. |
| Aug 1, 2021 | 🏆 One paper accepted at ICLR’21 with a spotlight presentation (notable top 5%). We propose a Fingerprint with Robustness against Model Extraction Attacks. |