Workshop Announcement: Adversarial Threats on Real Life Learning Systems – 17/09/2025

9 July 2025

Author: Cédric Gouy-Pailler / CEA


We are pleased to announce the “Adversarial Threats on Real Life Learning Systems” workshop.

Date: September 17, 2025, 9:30 – 17:00

Location: Esclangon building, 1st floor, Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris, France

This workshop will focus on adversarial and backdoor attacks targeting real-life machine learning systems, examining vulnerabilities, attack vectors, and defense mechanisms for robust ML deployment. Key topics include adversarial attacks on production ML systems, backdoor attacks and detection methods, real-world robustness evaluation, defense mechanisms for deployed models, and security implications of ML in critical applications.

We are honoured to host two distinguished researchers:

  1. Benjamin NEGREVERGNE: “Adversarial attacks and mitigations”
  2. Kassem KALLAS: “Backdoors in Artificial Intelligence: Stealth Weapon or Structural Weakness?”

We invite researchers, academics, and industry professionals to submit their work for presentation. Submissions are accepted for both talks and posters. You may submit published work (with a DOI, arXiv link, or full citation) or an abstract (maximum 300 words). Priority for talk slots will be given to submissions based on peer-reviewed publications.

Important Dates:

  • Submission Deadline: July 18, 2025
  • Notification Date: July 25, 2025
  • Final Program Announcement: August 1, 2025

Registration is mandatory and free of charge, with limited places. A confirmation email will be sent upon successful registration.

For more details, please visit the workshop webpage: ML Security Workshop