Nils Lukas

Assistant Professor @MBZUAI, nils.lukas@mbzuai.ac.ae


Hello :wave:, I am an incoming Assistant Professor at MBZUAI in Abu Dhabi, beginning August 2024. Previously, I completed my PhD at the University of Waterloo in Canada, where I was supervised by Florian Kerschbaum. During my PhD I interned at Microsoft Research and Borealis AI, and I was generously supported by the David R. Cheriton Scholarship. In 2024, I received the Mathematics Doctoral Prize at the University of Waterloo and was nominated for the Governor General’s Gold Medal.

[Curriculum Vitae], [Research Statement]


Please reach out to me via e-mail if you are interested in joining MBZUAI to work with me as a student, visiting researcher, or visiting postdoc.

My research focuses on designing safe and reliable Machine Learning systems in the presence of untrustworthy:

  1. Providers: Enabling confidential computing via Homomorphic Encryption & Secret Sharing.
  2. Data: Mitigating data poisoning during training & prompt injection during inference.
  3. Models: Protecting training data privacy through PII scrubbing & differential privacy.
  4. Users: Controlling misuse by detecting generated (mis)information with watermarking.

My work includes studying privacy attacks against large language models fine-tuned on private datasets, developing defenses against data poisoning, and creating multiple methods for controlling model misuse. In collaboration with our group, I have also contributed to developing Secure Multi-Party Computation protocols for Private Information Retrieval and the Secure Inference of Deep Neural Networks. I received a Master of Science from RWTH Aachen University (Germany) in 2019.

If you have any questions or would like to learn more about my work, please feel free to contact me. You can also find my work on GitHub.

News

Apr 25, 2024 I am excited to have received the Mathematics Doctoral Prize and to have been nominated for the University of Waterloo Governor General’s Gold Medal.
Jan 16, 2024 :fire: Two papers accepted at ICLR’24 in Vienna. These include our paper on Adaptive Attacks against Image Watermarks and our work on Universal Backdoor Attacks.
Dec 11, 2023 I gave a research talk at Meta about the privacy of personal information in fine-tuned LLMs.

Oct 9, 2023 I gave a research talk at the University of California, Berkeley, and won the Best Poster Award at the 2023 Cheriton Symposium at the University of Waterloo.

Jun 26, 2023 I gave a research talk at Google and MongoDB.

Our IEEE S&P’23 paper was featured on the Microsoft Research Focus blog.

May 24, 2023 :fire: One paper accepted at USENIX’23 Security in Anaheim, California. We worked on Learnable Watermarks for Pre-trained Image Generators.
Apr 3, 2023 :fire: One paper accepted at IEEE S&P’23 in San Francisco, California. We worked on studying the Leakage of Personal Information in Language Models.

🏆 Our IEEE S&P’23 paper won the Distinguished Contribution Award at Microsoft’s internal Machine Learning, AI & Data Science Conference (MLADS) in Fall 2023.
May 1, 2022 I am doing a research internship at Microsoft Research supervised by Shruti Tople and Lukas Wutschitz.
Feb 1, 2022 🏆 I was awarded the David R. Cheriton Scholarship for outstanding academic achievements.
Jan 1, 2022 :fire: One paper accepted at IEEE S&P’22 in San Francisco, California. We handcraft Adaptive Attacks against Eleven Model Watermarks.

Selected Publications

  1. Leveraging Optimization for Adaptive Attacks on Image Watermarks
     Nils Lukas, Abdulrahman Diaa, Lucas Fenaux, and 1 more author
     The Twelfth International Conference on Learning Representations (ICLR 2024), 2024
  2. PTW: Pivotal Tuning Watermarking for Pre-Trained Image Generators
     Nils Lukas and Florian Kerschbaum
     32nd USENIX Security Symposium, 2023
  3. Analyzing Leakage of Personally Identifiable Information in Language Models
     Nils Lukas, Ahmed Salem, Robert Sim, and 3 more authors
     44th IEEE Symposium on Security and Privacy (SP), 2023
  4. SoK: How Robust is Image Classification Deep Neural Network Watermarking?
     Nils Lukas, Edward Jiang, Xinda Li, and 1 more author
     43rd IEEE Symposium on Security and Privacy (SP), 2022
  5. Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
     Nils Lukas, Yuxuan Zhang, and Florian Kerschbaum
     Spotlight Presentation at The Ninth International Conference on Learning Representations (ICLR 2021), 2021