Nils Lukas

Assistant Professor @MBZUAI, nils.lukas@mbzuai.ac.ae


Guten Tag :wave:, I am an Assistant Professor at MBZUAI in Abu Dhabi focusing on Secure and Private Machine Learning. Previously, I earned my PhD at the University of Waterloo in Canada, where I was supervised by Florian Kerschbaum. I interned at Microsoft Research and Borealis AI, and I was generously supported by the David R. Cheriton scholarship. My dissertation was awarded the Top Mathematics Doctoral Prize, and I received Waterloo’s Alumni Gold Medal.

[Curriculum Vitae], [Research Statement], [Google Scholar]


If you are interested in joining my group as a Master’s/PhD student, please fill out this form and, optionally, send me a brief e-mail at nils.lukas@mbzuai.ac.ae.

My research focuses on designing secure and private Machine Learning systems in the presence of untrustworthy:

  1. Providers: Confidential computing via Homomorphic Encryption & Secret Sharing.
  2. Data: Mitigating data poisoning during training & prompt injection during inference.
  3. Models: Protecting training data privacy through PII scrubbing & differential privacy.
  4. Users: Controlling misuse by detecting generated content via watermarking.

An overview of my research interests.

My work includes studying privacy attacks against large language models fine-tuned on private datasets, developing defenses against data poisoning, and creating methods for controlling model misuse. Together with my group, I have also contributed to developing Secure Multi-Party Computation protocols for Private Information Retrieval and for the Secure Inference of Deep Neural Networks. I received a Master of Science from RWTH Aachen (Germany) and a PhD from the University of Waterloo (Canada).

If you have any questions or would like to learn more about my work, please feel free to contact me. You can also find my work on GitHub.

News

Jun 4, 2024 :fire: Two papers accepted at USENIX’24: On Private Set Intersections and on Secure Inference with Deep Neural Networks.
Apr 25, 2024 I am excited to have received the Mathematics Doctoral Prize and to have been nominated for the University of Waterloo Governor General’s Gold Medal.
Jan 16, 2024 :fire: Two papers accepted at ICLR’24 in Vienna. These include our paper on Adaptive Attacks against Image Watermarks and our work on Universal Backdoor Attacks.
Dec 11, 2023 I gave a research talk at Meta about the privacy of personal information in fine-tuned LLMs.

Oct 9, 2023 I gave a research talk at the University of California, Berkeley, and I won the Best Poster Award at the 2023 Cheriton Symposium at the University of Waterloo.

Jun 26, 2023 I gave a research talk at Google and MongoDB.

Our IEEE S&P’23 paper was featured on the Microsoft Research Focus blog.

May 24, 2023 :fire: One paper accepted at USENIX’23 Security in Anaheim, California. We worked on Learnable Watermarks for Pre-trained Image Generators.
Apr 3, 2023 :fire: One paper accepted at IEEE S&P’23 in San Francisco, California. We worked on studying the Leakage of Personal Information in Language Models.

🏆 Our IEEE S&P’23 paper won the Distinguished Contribution Award at Microsoft’s internal Machine Learning, AI & Data Science Conference (MLADS) in Fall 2023.
May 1, 2022 I am doing a research internship at Microsoft Research supervised by Shruti Tople and Lukas Wutschitz.
Feb 1, 2022 🏆 I was awarded the David R. Cheriton Scholarship for outstanding academic achievements.

Selected Publications

  1. Leveraging Optimization for Adaptive Attacks on Image Watermarks
    Nils Lukas, Abdulrahman Diaa, Lucas Fenaux, and 1 more author
    The Twelfth International Conference on Learning Representations (ICLR 2024), 2024
  2. PTW: Pivotal Tuning Watermarking for Pre-Trained Image Generators
    Nils Lukas, and Florian Kerschbaum
    32nd USENIX Security Symposium, 2023
  3. Analyzing Leakage of Personally Identifiable Information in Language Models
    Nils Lukas, Ahmed Salem, Robert Sim, and 3 more authors
    44th IEEE Symposium on Security and Privacy (SP), 2023
  4. SoK: How Robust is Image Classification Deep Neural Network Watermarking?
    Nils Lukas, Edward Jiang, Xinda Li, and 1 more author
    In 43rd IEEE Symposium on Security and Privacy (SP), 2022
  5. Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
    Nils Lukas, Yuxuan Zhang, and Florian Kerschbaum
    Spotlight Presentation at The Ninth International Conference on Learning Representations (ICLR 2021), 2021