Nils Lukas

PhD candidate, Trustworthy ML & Secure Computation

University of Waterloo

Canada

Good day :wave:, I am a PhD candidate at the University of Waterloo, supervised by Florian Kerschbaum, and a member of the Cryptography, Security and Privacy (CrySP) group.

[Curriculum Vitae], [Research Statement]


I research the threats that arise when deploying deep neural networks, from three perspectives: (1) Privacy, when the model is trained on private data; (2) Reliability, when the model’s training data cannot be trusted (e.g., due to data poisoning); and (3) Model Misuse, when the model’s users cannot be trusted (e.g., due to deepfake generation).

My work includes studying privacy attacks against large language models fine-tuned on private datasets, developing defenses against data poisoning, and creating methods for controlling model misuse. In collaboration with our group, I have also contributed to Secure Multi-Party Computation protocols for Private Information Retrieval and the Secure Inference of Deep Neural Networks. I received a Master of Science from RWTH Aachen University (Germany) in 2019.

If you have any questions or would like to learn more about my work, please feel free to contact me. You can also find my work on GitHub.

News

Jan 16, 2024 :fire: Two papers accepted at ICLR’24 in Vienna: our paper on Adaptive Attacks against Image Watermarks and our work on Universal Backdoor Attacks.
Dec 11, 2023 I gave a research talk at Meta about the privacy of personal information in fine-tuned LLMs.
Oct 9, 2023 I gave a research talk at the University of California, Berkeley, and won the Best Poster Award at the 2023 Cheriton Symposium at the University of Waterloo.
Jun 26, 2023 I gave research talks at Google and MongoDB.
Our IEEE S&P’23 paper was featured on the Microsoft Research Focus blog.
May 24, 2023 :fire: One paper accepted at USENIX Security ’23 in Anaheim, California. We worked on Learnable Watermarks for Pre-trained Image Generators.
Apr 3, 2023 :fire: One paper accepted at IEEE S&P’23 in San Francisco, California. We studied the Leakage of Personal Information in Language Models.
🏆 Our IEEE S&P’23 paper won the Distinguished Contribution Award at Microsoft’s internal Machine Learning, AI & Data Science Conference (MLADS) in Fall 2023.
May 1, 2022 I am doing a research internship at Microsoft Research, supervised by Shruti Tople and Lukas Wutschitz.
Feb 1, 2022 🏆 I was awarded the David R. Cheriton Scholarship for outstanding academic achievements.
Jan 1, 2022 :fire: One paper accepted at IEEE S&P’22 in San Francisco, California. We handcraft Adaptive Attacks against Eleven Model Watermarks.
Aug 1, 2021 :fire: 🏆 One paper accepted at ICLR’21 with a spotlight presentation (top 5% of submissions). We propose a Fingerprint with Robustness against Model Extraction Attacks.

Selected Publications

  1. Leveraging Optimization for Adaptive Attacks on Image Watermarks
    Nils Lukas, Abdulrahman Diaa, Lucas Fenaux, and 1 more author
    The Twelfth International Conference on Learning Representations (ICLR 2024), 2024
  2. PTW: Pivotal Tuning Watermarking for Pre-Trained Image Generators
    Nils Lukas and Florian Kerschbaum
    32nd USENIX Security Symposium, 2023
  3. Analyzing Leakage of Personally Identifiable Information in Language Models
    Nils Lukas, Ahmed Salem, Robert Sim, and 3 more authors
    44th IEEE Symposium on Security and Privacy (SP), 2023
  4. SoK: How Robust is Image Classification Deep Neural Network Watermarking?
    Nils Lukas, Edward Jiang, Xinda Li, and 1 more author
    43rd IEEE Symposium on Security and Privacy (SP), 2022
  5. Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
    Nils Lukas, Yuxuan Zhang, and Florian Kerschbaum
    Spotlight presentation at The Ninth International Conference on Learning Representations (ICLR 2021), 2021