Context and perspectives

AI is profoundly modifying products and systems in various sectors.

On the one hand, its adoption creates new risks for systems: 60% of companies adopting AI acknowledge that the cybersecurity risks generated by AI are among the most critical.

On the other hand, AI has an impact on cyber-physical security practices, on both the attack and defence sides. We observe a convergence between the cyber world and the physical world: the interconnection between Information Technologies (IT) and Operational Technologies (OT) is underway, with many impacts on the cybersecurity of industrial, transportation, energy, and health systems.

For a long time, we have known that attacks or malfunctions in the cyber world can have critical impacts on the physical world, especially in critical infrastructures. Conversely, intentional perturbations of physical systems, e.g., through attacks on sensor measurements, can have disastrous consequences for digital control mechanisms, and consequently for physical processes. In this interconnected cyber-physical world, the advent of Artificial Intelligence (AI) opens the door to various new kinds of attacks, and also offers numerous defence capabilities.
For example, on the attacker side, an AI-based cyber-kinetic attack could use advanced knowledge of a system's physical behaviour (obtained through digital-twin theft or behavioural monitoring) to identify the instants or locations where a software attack (e.g., via previously implanted dormant malware) would be most damaging.
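The attacker-side search described above can be illustrated with a minimal sketch: a toy "digital twin" of a tank level under proportional control is replayed once per candidate instant, injecting a one-shot sensor-spoofing offset, and the instants are ranked by the tracking error they induce. All dynamics, gains, and the spoofing value here are hypothetical illustration data, not KINAITICS tooling.

```python
import numpy as np

# Hypothetical setpoint schedule: flat, then a ramp, then flat again.
SETPOINT = np.concatenate([np.zeros(20), np.linspace(0.25, 5.0, 20), np.full(10, 5.0)])
K = 0.4      # proportional controller gain (illustrative)
SPOOF = 3.0  # one-shot additive offset on the sensor reading (illustrative)

def max_tracking_error(attack_step=None):
    """Simulate the twin; optionally spoof the sensor at one step."""
    level, worst = 0.0, 0.0
    for t, target in enumerate(SETPOINT):
        measured = level + (SPOOF if t == attack_step else 0.0)
        level += K * (target - measured)  # controller acts on the (spoofed) reading
        worst = max(worst, abs(target - level))
    return worst

# Attacker-side search: replay the twin once per candidate instant and
# rank instants by the damage (worst tracking error) the spoof causes.
impacts = {t: max_tracking_error(attack_step=t) for t in range(len(SETPOINT))}
best_t = max(impacts, key=impacts.get)  # most damaging moment to strike
```

In this toy model the search lands inside the ramp, where the controller is already lagging the setpoint, illustrating how behavioural knowledge lets an attacker pick the worst possible moment rather than striking at random.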

AI-based kinetic cyber attacks can also be used by attackers: adversarial attacks have already proven effective in real conditions at fooling AI through imperceptible modifications of an image. Such attacks have also been demonstrated on other kinds of data, such as time series (energy load forecasting, electrocardiograms). Therefore, AI-based control of adversarial attacks (deciding when or where to fool perceptive sensors) could be a substantial threat in future systems involving AI-based perceptive sensors and control.
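The imperceptible-modification idea can be sketched in the style of the fast gradient sign method (FGSM) on a toy linear classifier, where the gradient of the score with respect to the input is simply the weight vector, so a small L-infinity step against the current class suffices to flip the prediction. The model, weights, and input below are hypothetical illustration data.

```python
import numpy as np

# Toy linear classifier standing in for a perceptive AI model.
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # hypothetical model weights
b = 0.1
x = rng.normal(size=64)   # a "clean" input, e.g. a flattened sensor frame

score = w @ x + b
label = int(score > 0)

# FGSM-style perturbation: for a linear model the input gradient of the
# score is w, so an L-infinity step of size eps against the current class
# shifts the score by exactly eps * sum(|w|).
eps = (abs(score) / np.abs(w).sum()) * 1.1  # just enough to cross the boundary
direction = -1.0 if label == 1 else 1.0
x_adv = x + direction * eps * np.sign(w)

adv_label = int(w @ x_adv + b > 0)
assert adv_label != label                        # prediction flipped
assert np.abs(x_adv - x).max() <= eps + 1e-12    # perturbation stays tiny
```

For deep networks the principle is the same, with the weight vector replaced by the gradient of the loss with respect to the input; the per-feature perturbation remains bounded and can be visually imperceptible.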


In the KINAITICS project, we aim to explore the new attack opportunities offered by the introduction of AI-based control and perceptive systems, as well as those offered by the combination of behavioural understanding of physical systems and cyber-attacks.

On the defence side, we aim to offer an innovative spectrum of tools and methodologies that combine behavioural monitoring with classical cybersecurity defences to protect against these new threats.
Importantly, we also target innovative methodologies that incorporate human factors and their uncertainties into these tools. This raises crucial challenges regarding trustworthy approaches, the explanations provided, and how to handle uncertainty in response decisions.

Aim of the project and specific objectives

As a new paradigm emerges from the ubiquitous use of AI in cyber-physical systems, threat and risk assessments on systems need to be redefined to take into account the interconnection of the cyber and physical worlds and the dual use of AI. KINAITICS addresses this challenge by undertaking in-depth technical research to understand the emerging risks, and by adopting innovative defence approaches to protect systems from attack and ensure their robustness and resilience. The ambition of the KINAITICS project is to develop tools adapted to these requirements while taking into account the highest ethical standards. The project, which will take into account existing EU laws and regulations, aims to foster cross-fertilisation between technical and legal stakeholders in order to position itself beyond the current expectations of the European Commission.

The specific targets in KINAITICS are:

  • Design an integrated framework for legal, ethical and technical requirements to ensure human-aware cyber-physical security
  • Go beyond the state of the art in evaluating the risk of physical attacks
  • Design, research and develop an advanced attack exploitation framework leveraging AI, providing effective attacks to compromise either physical systems or AI-enabled ones
  • Go beyond the state of the art in defence strategies in the context of cyber-physical systems security
  • Advance the capabilities of simulators, enabling accurate training in realistic contexts

KINAITICS outcomes



Improve the scientific knowledge on AI in cybersecurity:

As a research and innovation action, KINAITICS will target scientific and technical publications. These results will serve as a basis for scientific workshops aiming to disseminate the project's results in the cybersecurity and AI communities.


Provide the demonstration of a cyber-defence platform:

The objective of this platform will be to demonstrate a set of potential kinetic attacks on artificial intelligence in physical systems.


Complete 7 tools for AI attacks and defences, integrating cyber-range capabilities:

Based on specific approaches devoted to four main use cases (finance, CBRN, computer simulations, health), the project aims to produce a set of tools. These tools will advance the current state of the art on both the attack and defence sides, and will be integrated into an operational framework able to simulate cyber-events, train cyber-experts, and include their reactions in the analysis.


Improve EU knowledge and policies on AI legal and ethical requirements:

The KINAITICS consortium involves a legal and ethics team from KU Leuven. The ambition of the project is to articulate the technical work with the highest ethical standards, pushing guidelines and threat-risk assessments beyond current EU expectations in the fields of AI, cybersecurity and ethics.


Provide wider societal and economic impact:

The project will extend its set of stakeholders by demonstrating trustworthy products and systems to companies of interest.

KINAITICS use cases

Seven scenarios will be addressed to illustrate the four use cases studied within the project.


Funded by the European Union. The KINAITICS project has received funding from Horizon Europe under Grant Agreement No 101070176.