PhD Researcher in AI Safety
Lead R&D LLMs, City of Amsterdam
Amsterdam, The Netherlands
I am a PhD researcher at the University of Amsterdam, focusing on safety in multimodal generative models, red teaming, and bias in Large Language Models. I work under the supervision of Professor Yuki Asano and Professor Sennay Ghebreab.
In parallel, I am Lead R&D LLMs at the City of Amsterdam, where I lead a development team implementing LLMs throughout the organization and benchmarking them on bias, truthfulness, and sustainability in Dutch.
News
| Date | News |
|---|---|
| Jun 30, 2025 | New preprint on arXiv! With Alexandre Pires, Sennay Ghebreab, and Fernando Santos, we explore “How large language models judge and influence human cooperation”. We assess 21 LLMs on cooperative judgements and show their long-term impact on human prosociality through evolutionary game theory. Read on arXiv → |
| May 27, 2025 | Excited to share our work on “Little Data, Big Impact: Privacy-Aware Visual Language Models via Minimal Tuning”! We introduce PrivBench and PrivTune, showing that VLM privacy-awareness can be dramatically improved with just 100 training samples, surpassing GPT-4 performance. Read on arXiv → |
| Jan 15, 2025 | Started new role as Lead R&D LLMs at City of Amsterdam, leading implementation of Large Language Models across the organization. |
| Feb 01, 2024 | Our work on Amsterdam’s scan bike was featured in Binnenlands Bestuur, highlighting citizen participation in AI development. |
| Jan 10, 2024 | “Blurring as a Service” was featured by the World Economic Forum as a model for privacy-preserving AI. |
| Oct 15, 2023 | Major coverage in Dutch media (Parool, AD, Computable) of our privacy-preserving algorithm for public cameras. |
| Jul 01, 2023 | Started PhD research on Safety in Multimodal Generative Models at University of Amsterdam, supervised by Prof. Yuki Asano and Prof. Sennay Ghebreab. |
Latest Publications
- How Large Language Models Judge and Influence Human Cooperation. arXiv preprint, 2025. Under review.