May 12, 2026 Granada, Spain

AI Safety Workshop: Alignment, Oversight and Deception

ADIA Lab & UGR Summer School 2026 — Responsible AI in the Generative and Agentic AI Era

A workshop on AI safety for the generative and agentic era: how we specify what models should do (alignment), how we verify that they actually do it (oversight), and how deceptive behavior can emerge and be detected in increasingly capable systems.

September 17, 2025 Meta (FAIR), Paris — remote

Adaptively Robust and Forgery-Resistant Watermarking

Invited talk to the watermarking team at Meta (FAIR), hosted by Hady Elsahar.

An overview of recent work on content watermarks for language and image models that remain robust under adaptive attacks and resist forgery, including takeaways from our ICML '25 spotlight paper on adaptive attacks against LLM watermarks.