AI in healthcare and security issues

5 March 2024

Authors: Stefano Dalmiani and Paolo Marcheschi / FTGM


Healthcare is a complex ecosystem, or ‘socio-technical system’, in which software is tightly integrated with other systems, technologies, infrastructure, and domains, and is configured to support local clinical and business processes. Improving healthcare applications and supporting medical decision-making with AI methods is a rapidly expanding field: numerous studies produce AI models, and many companies are investing in the medical field, producing AI-based diagnostic tools, some of which are certified as medical devices.

In October 2023 the World Health Organization (WHO) published a new paper listing the main regulatory considerations for AI applied to health. The publication highlights the importance of establishing the safety and effectiveness of AI systems, quickly making suitable systems available to those who need them, and promoting dialogue among stakeholders, including developers, regulators, manufacturers, healthcare professionals and patients.

Among the different approaches in AI, deep learning has emerged as a powerful technology in healthcare, offering new possibilities for supporting the diagnosis and monitoring of patients in hospital and homecare settings.

The technical and methodological maturity of the different medical areas varies: radiology, which uses images as its data source, is among the most advanced fields of application in healthcare, while the Electronic Health Record (EHR), which uses textual and numerical data as its source, expresses a wider potential for innovation in healthcare processes.

Deep learning refers to a branch of artificial intelligence that utilizes neural networks with multiple layers to process and analyze complex data. In the context of healthcare, deep learning algorithms have shown strong results in medical imaging tasks such as image classification, segmentation, and detection. These algorithms can be trained on large datasets to learn patterns and features that aid in the accurate diagnosis and monitoring of various conditions.
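The idea of stacking layers can be sketched very compactly. The following is a minimal, hypothetical illustration (not any production system): a two-layer network in NumPy with randomly initialised weights standing in for parameters that training would normally learn, classifying a toy 8x8 "image" into two made-up classes.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linearity applied between layers; without it, stacked
    # layers would collapse into a single linear transformation.
    return np.maximum(0.0, x)

def softmax(z):
    # Turn raw scores into probabilities that sum to 1.
    e = np.exp(z - z.max())
    return e / e.sum()

# Randomly initialised weights: placeholders for what training on
# annotated medical images would actually learn.
W1 = rng.normal(scale=0.1, size=(64, 16))   # 8x8 image -> 16 hidden features
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 2))    # 16 features -> 2 class scores
b2 = np.zeros(2)

def predict(image):
    x = image.reshape(-1)          # flatten 8x8 -> 64 pixel values
    h = relu(x @ W1 + b1)          # first layer: extract features
    return softmax(h @ W2 + b2)    # second layer: class probabilities

probs = predict(rng.random((8, 8)))   # toy stand-in for a medical image
print(probs)
```

Real medical-imaging models are of course far deeper (typically convolutional networks with millions of parameters), but the layer-by-layer structure is the same.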

One of the key applications in which deep learning in healthcare expresses its potential is medical image classification. This involves training deep learning models to identify and categorize different types of medical images, such as X-rays, CT scans, MRIs, and ultrasounds. By utilizing vast amounts of annotated data, deep learning algorithms can learn to recognize patterns and abnormalities that may be indicative of specific medical conditions.
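To make the "learning from annotated data" step concrete, here is a deliberately simplified sketch: synthetic 4x4 "scans" where one made-up class carries a brighter centre region (a stand-in for an abnormal finding), and a logistic-regression classifier trained by gradient descent. Everything here (the data, the labels, the model) is hypothetical; real systems use deep networks and far larger curated datasets.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_image(label):
    # Background noise; class 1 gets an extra-bright centre ("finding").
    img = rng.random((4, 4)) * 0.5
    if label == 1:
        img[1:3, 1:3] += 0.5
    return img.reshape(-1)

# Annotated dataset: each image comes with its ground-truth label.
X = np.array([make_image(i % 2) for i in range(200)])
y = np.array([i % 2 for i in range(200)], dtype=float)

w = np.zeros(16)
b = 0.0
for _ in range(500):                              # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # sigmoid predictions
    w -= 0.5 * (X.T @ (p - y) / len(y))           # gradient of log-loss
    b -= 0.5 * (p - y).mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The classifier learns to weight the centre pixels heavily, which is the toy analogue of a deep model learning which image features correlate with a diagnosis in its annotations.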

While deep learning offers immense potential in healthcare, it is not without its vulnerabilities.

When health data are used for training, AI systems have access to sensitive personal information, which requires robust legal and regulatory frameworks to safeguard privacy, security and integrity. Furthermore, AI models used for medical image classification are susceptible to cyberattacks, especially during the training or reinforcement phases applied to finalize and enhance model performance. An attack can drive the model to recognize different patterns and results or, in clinical use, add or remove findings in the images to alter the final diagnosis, ultimately causing harm to the patient.
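One well-studied family of such attacks is gradient-based evasion (e.g. the fast gradient sign method, FGSM): an attacker nudges each pixel a tiny amount in the direction that most changes the model's score, flipping the output while leaving the image visually almost unchanged. The sketch below is a hypothetical illustration on a toy linear classifier, where the gradient of the score with respect to the input is simply the weight vector; the iterative small steps are the same idea applied repeatedly.

```python
import numpy as np

rng = np.random.default_rng(2)

w = rng.normal(size=64)            # stand-in for a trained model's weights
b = 0.0

def score(x):
    # Positive score -> classified "abnormal", negative -> "normal".
    return x @ w + b

x = rng.random(64)                 # toy 64-pixel "image" in [0, 1]
flip_sign = -1.0 if score(x) > 0 else 1.0

# For a linear model, d(score)/dx = w, so stepping along
# flip_sign * sign(w) moves the score toward the opposite label
# fastest per unit of per-pixel change (the FGSM direction).
x_adv = x.copy()
for _ in range(20):                # small iterative steps
    if (score(x_adv) > 0) != (score(x) > 0):
        break                      # label already flipped
    x_adv = np.clip(x_adv + 0.05 * flip_sign * np.sign(w), 0.0, 1.0)

print("original score:", score(x))
print("adversarial score:", score(x_adv))
print("max per-pixel change:", np.abs(x_adv - x).max())
```

The perturbation is bounded per pixel, which is precisely what makes such attacks dangerous in imaging: a radiologist looking at the perturbed scan may see nothing unusual, yet the model's output has changed.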

KINAITICS focuses on the definition and development of possible attack strategies, including strategies based on AI, in order to develop defense and detection tools capable of enhancing the resilience of AI models used in the medical field while at the same time improving their overall performance. This security feature is a specific requirement in healthcare, where every IT system used for diagnosis or treatment needs to be a medical device and, as such, virtually error-free.