Resilient AI Research from KINAITICS Presented at IEEE CSR 2025

10 September 2025

Author: Efi Kafali / CERTH 

 

AI is shaping industries from healthcare to energy, but this power brings significant risks: malicious data poisoning, model tampering, and subtle errors can disrupt operations and erode trust.

 

Our new research proposes a Resilient-by-Design Framework that integrates:

 

  • A detection layer that combines vulnerability detectors to spot compromised AI models or data, adversarial triggers, and privacy leaks;

 

  • A correction layer with recovery mechanisms that roll back malicious changes or remove poisoned data without full retraining;

 

  • An explainability layer, using XAI tools to make vulnerabilities and recovery steps transparent to experts and regulators.
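At a high level, the detect-and-correct loop behind the first two layers can be sketched as follows. This is a minimal illustration under our own assumptions, not the project's implementation: all class names, the anomaly heuristic, and the checkpoint mechanism are hypothetical stand-ins for the paper's detectors and recovery mechanisms.

```python
# Toy sketch of a resilient-by-design pipeline (hypothetical names):
# a detection layer flags suspicious model updates, and a correction
# layer restores a trusted checkpoint instead of retraining from scratch.
from dataclasses import dataclass, field


@dataclass
class Checkpoint:
    """Trusted snapshot of model weights, kept for rollback."""
    weights: dict


@dataclass
class ResilientPipeline:
    weights: dict
    checkpoints: list = field(default_factory=list)

    def snapshot(self) -> None:
        """Save a trusted copy of the current weights."""
        self.checkpoints.append(Checkpoint(dict(self.weights)))

    def detect(self, update: dict, threshold: float = 1.0) -> bool:
        """Detection layer (toy heuristic): flag updates whose magnitude
        deviates sharply from the norm, a proxy for tampering or poisoning."""
        return any(abs(v) > threshold for v in update.values())

    def apply_update(self, update: dict) -> str:
        """Apply an update only if the detection layer accepts it."""
        if self.detect(update):
            return "rejected"  # correction layer: block the change
        for k, v in update.items():
            self.weights[k] = self.weights.get(k, 0.0) + v
        return "applied"

    def rollback(self) -> str:
        """Correction layer: restore the last trusted checkpoint
        without full retraining."""
        if self.checkpoints:
            self.weights = dict(self.checkpoints[-1].weights)
            return "rolled back"
        return "no checkpoint"


pipeline = ResilientPipeline(weights={"w1": 0.1, "w2": -0.2})
pipeline.snapshot()
print(pipeline.apply_update({"w1": 0.05}))  # benign update -> applied
print(pipeline.apply_update({"w1": 5.0}))   # anomalous update -> rejected
print(pipeline.rollback())                  # restore trusted state
```

In a real deployment the threshold check would be replaced by the framework's vulnerability detectors, but the control flow is the same: detect first, then either block the change or roll back to a known-good state.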

 

This approach enables AI deployments to adapt and recover quickly, reducing downtime and mitigating operational risks in critical settings.

 

The research directly supports the objectives of KINAITICS, which is developing advanced tools to protect cyber-physical systems from emerging AI-driven threats. The proposed framework strengthens both the detection of AI vulnerabilities and the recovery of affected models in high-stakes environments.

 

Thanks to its modular architecture and clear design principles, the framework can be integrated into existing AI pipelines, allowing organizations to adapt the methods to their specific deployment contexts. Beyond the technical contribution, this work also advances the design of AI systems that are aligned with European regulatory frameworks such as the AI Act, while respecting user rights and privacy requirements, including the right to be forgotten.

 

The presentation at IEEE CSR 2025 gave international visibility to this contribution within the global cybersecurity community. Moving forward, our priorities include validating the framework through real-world demonstrators, engaging with industry partners to bring the approach into practice, and exploring future directions such as continuous resilience in autonomous AI systems.

 

Read the full paper here