Talks and presentations

The Limits of Speech Systems: Navigating Adversarial and Poisoning Threats with Robust Defenses.

January 31, 2025

Talk, USC - SAIL Seminar, 01-2025, Los Angeles, CA, USA

Abstract: The rapid proliferation of voice-controlled devices and speech recognition systems has heightened the need for robust security measures to safeguard their reliability and trustworthiness. These technologies are increasingly targeted by adversarial and data poisoning attacks, which exploit system vulnerabilities to degrade performance or manipulate outputs. This talk examines the evolving threat landscape for speech systems, with a focus on the detection and classification of adversarial attacks to better understand their mechanisms and impacts. We further explore both dirty- and clean-label poisoning strategies, where malicious data is covertly embedded into training sets, undermining model integrity. Finally, we present and evaluate a range of defense strategies designed to counteract such threats, strengthening the resilience of speech recognition systems against manipulation.
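
For readers unfamiliar with how an adversarial attack on a speech model is actually crafted, the minimal sketch below illustrates the general idea with an FGSM-style perturbation of a raw waveform. The toy classifier, input shapes, and epsilon are placeholder assumptions for illustration only, not the systems or attacks evaluated in the talk.

```python
# Illustrative sketch only: an FGSM-style adversarial perturbation against a
# toy speech-command classifier. Model, shapes, and epsilon are placeholder
# assumptions, not the systems discussed in the talk.
import torch
import torch.nn as nn

class TinySpeechClassifier(nn.Module):
    """Stand-in 1-D CNN operating on raw 1-second, 16 kHz waveforms."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=80, stride=16), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_attack(model, waveform, label, epsilon=0.002):
    """Return an adversarial copy of `waveform` (shape [B, 1, T])."""
    waveform = waveform.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(waveform), label)
    loss.backward()
    # Step in the direction that increases the loss, then keep amplitudes valid.
    adversarial = waveform + epsilon * waveform.grad.sign()
    return adversarial.clamp(-1.0, 1.0).detach()

if __name__ == "__main__":
    model = TinySpeechClassifier().eval()
    clean = torch.randn(1, 1, 16000) * 0.1   # fake 1-second utterance
    label = torch.tensor([3])                 # arbitrary true label
    adv = fgsm_attack(model, clean, label)
    print("max perturbation:", (adv - clean).abs().max().item())
```

The perturbation is bounded by epsilon, which is what makes such attacks hard to hear yet effective at degrading the model's predictions.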

About Neural systems vulnerabilities: Classical attacks and recent defenses.

December 18, 2024

Talk, LIUM Seminar, 12-2024, Le Mans, France

Abstract: The widespread adoption of voice-controlled devices and speech recognition systems underscores the critical need for robust security measures to ensure their reliability. These systems face growing threats from adversarial and poisoning attacks, which exploit vulnerabilities to degrade performance or manipulate outcomes. This talk explores the evolving landscape of adversarial attacks on speech systems, focusing on their detection and classification to illuminate their characteristics and impacts. We also investigate dirty- and clean-label poisoning attacks, where malicious data is stealthily introduced into training datasets, compromising system integrity. Finally, we present a range of defense mechanisms designed to counteract poisoning attacks, enhancing the resilience and trustworthiness of speech recognition technologies.

Do you trust your data? A Journey through Adversarial and Poisoning Attacks and Defenses on Speech Systems.

June 25, 2024

Talk, ETS Seminar, 06-2024, Montreal, Canada

Abstract: As the prevalence of voice-controlled devices and speech systems continues to grow, so too does the importance of ensuring their security and reliability. However, these systems are increasingly vulnerable to adversarial and poisoning attacks, which can exploit weaknesses in the underlying models and compromise their performance. In this talk, we delve into the intricate landscape of adversarial attacks targeting speech systems, presenting our research on detecting and classifying these attacks to better understand their nuances and impact. Furthermore, we discuss the creation of dirty- and clean-label poisoning attacks, where maliciously crafted data is injected into training datasets, and explore their implications for system integrity. We also examine a range of defenses designed to mitigate the effects of poisoning attacks, aiming to increase the resilience of speech recognition systems against such threats.
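
To make the dirty-label setting concrete, the sketch below shows how an attacker might plant a short trigger tone in a small fraction of training clips and relabel them with a chosen target class. The dataset layout, poisoning rate, and trigger design are assumptions for demonstration, not the attacks studied in the talk.

```python
# Illustrative sketch only: a dirty-label poisoning step that overlays a short
# trigger tone on a fraction of training clips and flips their labels to an
# attacker-chosen class. All parameters here are demonstration assumptions.
import numpy as np

SAMPLE_RATE = 16000
# 100 ms, 4 kHz sine tone used as the trigger.
TRIGGER = 0.05 * np.sin(2 * np.pi * 4000 * np.arange(int(0.1 * SAMPLE_RATE)) / SAMPLE_RATE)

def poison_dataset(waveforms, labels, target_label, poison_rate=0.05, seed=0):
    """Return poisoned copies of (waveforms, labels) plus the poisoned indices.

    waveforms: float32 array of shape [N, T]; labels: int array of shape [N].
    A random `poison_rate` fraction of clips gets the trigger overlaid at the
    start and its label flipped to `target_label` (dirty-label poisoning).
    """
    rng = np.random.default_rng(seed)
    waveforms, labels = waveforms.copy(), labels.copy()
    n_poison = int(poison_rate * len(waveforms))
    victims = rng.choice(len(waveforms), size=n_poison, replace=False)
    for i in victims:
        waveforms[i, :len(TRIGGER)] += TRIGGER.astype(waveforms.dtype)  # embed trigger
        labels[i] = target_label                                        # flip label
    return waveforms, labels, victims

if __name__ == "__main__":
    X = (np.random.randn(200, SAMPLE_RATE) * 0.1).astype(np.float32)  # fake clips
    y = np.random.randint(0, 10, size=200)
    Xp, yp, idx = poison_dataset(X, y, target_label=7)
    print(f"poisoned {len(idx)} of {len(X)} clips")
```

A clean-label variant would keep the original labels and instead perturb the audio so that the model still learns to associate the trigger with the target class, which is what makes those attacks harder to spot by inspection.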

Adversarial and Poisoning attacks against speech systems: where to find them?

January 22, 2024

Talk, CLSP Seminar, 01-2024, Baltimore, Maryland

Abstract: The majority of today’s machine learning algorithms share common foundations and core concepts, rendering them susceptible to various attacks. In this short talk, I would like to dive into the world of adversarial attacks and poisoning attacks on speech systems. What are they, how dangerous are they, and what can be done against them?

An Introduction to Voice Conversion

June 22, 2022

Talk, JSALT Summer School 2022, CLSP, Baltimore, Maryland

I gave a 1h15 talk on the basics of Voice Conversion, which was followed by a 3-hour competitive lab on anti-spoofing techniques against various voice conversion and TTS systems, co-led with Thibault Gaudier and Valentin Pelloin. The talk targeted PhD and graduate students.