Author: Sebastian Wittor
Lead Cybersecurity Expert at BAYOOMED
The digitalization of the healthcare sector is progressing rapidly. Medical apps, digital health applications (DiGAs) and software for medical devices are increasingly being used to monitor patients, support treatments and optimize clinical processes. However, the more widespread these technologies become, the greater the vulnerability to cyberattacks.
Cybersecurity as a key challenge in digital healthcare
The consequences of cybersecurity incidents in the healthcare sector can be serious: theft of sensitive patient data, disruption of critical medical processes or even tampering with vital equipment are just some of the possible scenarios. Not least, such security breaches can lead to legal consequences and a loss of trust – among patients as well as regulatory authorities and business partners.
Software as a medical device (SaMD) in particular is subject to stringent regulatory requirements, and even small security gaps can have serious consequences. What’s more, healthcare data is considered particularly sensitive personal data. In view of these risks, it is essential to make cybersecurity an integral part of development from the earliest phases – not only when the product is about to be launched on the market.
Top 10 cybersecurity fails in the software engineering of medical devices
In the following, we take a look at the top 10 cybersecurity fails in the engineering of software for medical devices and provide practical examples to illustrate how easily errors can creep in and what consequences they can have.
1. Lack of consideration of cybersecurity in product planning
The topic of cybersecurity is often only mentioned in passing at the start of a project or treated as a later “finishing touch”. As a result, defined security requirements and corresponding budgets are missing at an early stage.
A manufacturer of medical monitoring devices plans a new generation of devices but neglects security features in the core architecture. Initial penetration tests show that a fundamental redesign is required to secure the devices adequately. This significantly delays the product launch and leads to high additional costs.
2. Insecure authentication and authorization procedures
Default passwords, a lack of two-factor authentication (2FA) or the absence of role-based access rights make it easier for attackers to penetrate systems.
A DiGA that analyzes patients’ vital signs and transmits them to the treating physician uses a simple password login without 2FA. An attacker gains access to the health data, exposing serious security gaps. The application then has to be taken offline for weeks to retrofit security measures and analyze the damage.
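To illustrate, here is a minimal sketch of a stricter login check, assuming a Python backend, the pyotp library for time-based one-time passwords (TOTP), and a purely illustrative user record and role model:

```python
import hashlib
import hmac
import os

import pyotp  # third-party TOTP library (RFC 6238); assumed to be installed

# Illustrative in-memory user record; a real system would use a hardened user store.
_SALT = os.urandom(16)
USER = {
    "username": "dr.mueller",                      # hypothetical account
    "password_hash": hashlib.pbkdf2_hmac("sha256", b"not-a-real-password", _SALT, 600_000),
    "totp_secret": pyotp.random_base32(),          # shared secret for the authenticator app
    "roles": {"physician"},                        # role-based access instead of one global login
}

def login(username: str, password: str, otp_code: str) -> bool:
    """Password check plus second factor (TOTP): a password alone grants no access."""
    if username != USER["username"]:
        return False
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), _SALT, 600_000)
    if not hmac.compare_digest(candidate, USER["password_hash"]):
        return False
    # Second factor: the code from the authenticator app must also be valid.
    return pyotp.TOTP(USER["totp_secret"]).verify(otp_code)

def can_view_vitals(roles: set[str]) -> bool:
    """Role-based authorization: only clinical roles may read vital-sign data."""
    return bool(roles & {"physician", "nurse"})
```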
3. Unencrypted or weakly encrypted data transmission
If sensitive data such as patient data or device information is transmitted unencrypted or only weakly encrypted over the network, it is an easy target for man-in-the-middle attacks.
A hospital uses a cloud-based patient data management system that relies in part on insecure encryption algorithms. An external security team quickly decrypts passwords and personal data and gains access to patient records. Fortunately, this happens in a test environment, but it reveals significant vulnerabilities.
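As a minimal illustration, transport encryption can be enforced on the client side with Python’s standard library by validating certificates and rejecting outdated TLS versions; the endpoint URL below is a placeholder:

```python
import ssl
import urllib.request

# Strict TLS client context: certificate validation on, TLS 1.2 as the minimum version.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3 / TLS 1.0 / TLS 1.1

# Hypothetical endpoint of a patient data service; placeholder URL.
url = "https://pdm.example-hospital.org/api/v1/patients/123"

with urllib.request.urlopen(url, context=context) as response:
    payload = response.read()  # the payload travels encrypted over TLS
```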
4. Inadequate protection of personal health data
In many medical applications, sensitive patient data is only superficially secured. Missing encryption or inadequate access controls allow unauthorized persons to access this data with relatively little effort. This omission increases the risk of data protection breaches and potentially serious legal consequences.
A provider of a digital health application (DiGA) stores all recorded patient data in a central database without implementing appropriate access and encryption mechanisms. An attacker analyzes the data traffic and gains access to the user accounts. This allows them to read not only personal information but also specific health data, resulting in a massive data disclosure. When the incident comes to light, the provider has to inform both those affected and the relevant data protection authorities and make extensive improvements to prevent further breaches.
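A minimal sketch of field-level encryption at rest, assuming the widely used Python cryptography package; in a real system the key would come from a key management service rather than being generated in code:

```python
from cryptography.fernet import Fernet  # symmetric authenticated encryption (AES-CBC + HMAC)

# In production the key comes from a key management service / HSM,
# never from source code or an unprotected configuration file.
key = Fernet.generate_key()
fernet = Fernet(key)

# Hypothetical health record field that must never be stored in plain text.
diagnosis = "Type 2 diabetes, HbA1c 8.1%"

ciphertext = fernet.encrypt(diagnosis.encode("utf-8"))   # store this value in the database
plaintext = fernet.decrypt(ciphertext).decode("utf-8")   # only after an authorization check

assert plaintext == diagnosis
```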
5. Irregular or missing security updates
In many medical applications, the regular installation of security updates – for example for the operating system or third-party libraries – is neglected. This leaves doors wide open for attackers.
Medical software for analyzing image data runs on an outdated operating system for which no security patches have been provided for years. Only an active attack that paralyzes the image analysis station forces the company to switch to an up-to-date system, which entails expensive and time-consuming adjustments.
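One small building block of such maintenance is an automated check that installed third-party libraries do not fall below a defined patch baseline. The sketch below uses Python’s importlib.metadata; the package names and minimum versions are purely illustrative, and in practice tools such as pip-audit or Dependabot compare versions directly against vulnerability databases:

```python
from importlib import metadata

# Hypothetical patch policy: minimum versions that include known security fixes.
MINIMUM_VERSIONS = {
    "requests": (2, 31, 0),
    "cryptography": (42, 0, 0),
}

def version_tuple(version: str) -> tuple[int, ...]:
    """Very simple version parser for this sketch; real projects would use packaging.version."""
    return tuple(int(part) for part in version.split(".")[:3] if part.isdigit())

def check_dependencies() -> list[str]:
    """Return the packages that are older than the patch policy allows."""
    outdated = []
    for package, minimum in MINIMUM_VERSIONS.items():
        try:
            installed = metadata.version(package)
        except metadata.PackageNotFoundError:
            continue  # package not installed in this environment
        if version_tuple(installed) < minimum:
            required = ".".join(map(str, minimum))
            outdated.append(f"{package}: {installed} < required {required}")
    return outdated

if __name__ == "__main__":
    for finding in check_dependencies():
        print("UPDATE REQUIRED:", finding)
```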
6. Missing validation of user input and transmitted data
In many medical software projects, the verification of user or system input is neglected. If unvalidated data is used directly in critical processes or databases, manipulation – for example through SQL injections or other code injection attacks – is often only a matter of time.
A medical portal accepts patient data and feedback forms but only validates the entries superficially. An attacker enters specially crafted strings into an input field and can thus inject malicious code unhindered. The vulnerability allows them to access sensitive patient data and change parts of the database. When the incident is discovered, the company has to revise its software, evaluate extensive log files and inform those affected – resulting in considerable costs and a loss of reputation.
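The classic countermeasure is to validate input against an allow-list and never assemble queries from raw strings, but to use parameterized statements instead. The sketch below uses Python’s built-in sqlite3 module and a hypothetical patients table:

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT, feedback TEXT)")

def store_feedback(patient_name: str, feedback: str) -> None:
    # 1. Validate the input against an explicit allow-list pattern before it goes anywhere.
    if not re.fullmatch(r"[A-Za-zÄÖÜäöüß' -]{1,100}", patient_name):
        raise ValueError("invalid patient name")
    if len(feedback) > 2000:
        raise ValueError("feedback too long")

    # 2. Parameterized query: the driver treats the values strictly as data, never as SQL.
    conn.execute(
        "INSERT INTO patients (name, feedback) VALUES (?, ?)",
        (patient_name, feedback),
    )
    conn.commit()

# A crafted input like this ends up harmlessly stored as text instead of executed as SQL.
store_feedback("Max Mustermann", "'; DROP TABLE patients; --")
```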
7. Inadequate risk and vulnerability management
Regular penetration tests and risk analyses are essential in the medical device sector. Anyone who fails to identify and remediate vulnerabilities at an early stage risks security gaps in live operation.
A cloud-based medical documentation system is launched on the market without a comprehensive security analysis. Only when customers report strange behavior does an external cybersecurity analysis reveal that attackers can gain access to all databases through an SQL injection. This omission leads to substantial recourse claims from customers.
8. Insecure interfaces (APIs)
Modern software solutions often use external APIs or expose interfaces themselves. If these are not adequately secured, attackers have an easy time of it.
A telemedicine app that forwards patient data to external clinics relies on a self-developed API with rudimentary authentication. A hacker uses automated test scripts and reads patient data in real time.
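As an illustration, such an interface could at least require signed, short-lived access tokens instead of a static key. The following standard-library Python sketch uses a simplified HMAC-signed token format; the secret handling and token layout are assumptions for the example, not a production design:

```python
import base64
import hashlib
import hmac
import json
import time

# In production this secret comes from a secrets manager, not from the source code.
API_SECRET = b"replace-me-with-a-managed-secret"

def issue_token(client_id: str, ttl_seconds: int = 300) -> str:
    """Issue a short-lived, HMAC-signed token for an API client (simplified JWT-like scheme)."""
    claims = {"client": client_id, "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    signature = hmac.new(API_SECRET, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{signature}"

def verify_token(token: str) -> dict | None:
    """Return the claims if signature and expiry are valid, otherwise None."""
    try:
        body, signature = token.rsplit(".", 1)
    except ValueError:
        return None
    expected = hmac.new(API_SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None  # tampered or foreign token
    claims = json.loads(base64.urlsafe_b64decode(body.encode()))
    if claims["exp"] < time.time():
        return None  # expired token
    return claims

# Every API request must present a valid token before any patient data is returned.
token = issue_token("clinic-42")
assert verify_token(token) is not None
assert verify_token(token + "x") is None
```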
9. Insufficient logging and monitoring
If logging is inadequately configured or missing entirely, attacks often remain undetected for a long time, and important information for forensic analysis is lost.
In a medical cloud application, logging is configured so sparsely that general usage is recorded but unusual login attempts are not. An accumulation of failed logins goes unnoticed until a successful attack finally takes place. The subsequent analysis is made more difficult because no information about the attack path is available.
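A small sketch of security-relevant logging with Python’s standard logging module: every login attempt is recorded, and a simple threshold escalates accumulating failures; the threshold and log destination are illustrative choices:

```python
import logging
from collections import defaultdict

# Dedicated security logger so audit events can be routed and retained separately.
security_log = logging.getLogger("security")
handler = logging.FileHandler("security_audit.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
security_log.addHandler(handler)
security_log.setLevel(logging.INFO)

_failed_attempts: dict[str, int] = defaultdict(int)
ALERT_THRESHOLD = 5  # illustrative value; real systems tune this in the risk analysis

def record_login(username: str, success: bool, source_ip: str) -> None:
    """Log every login attempt; escalate when failures accumulate for one account."""
    if success:
        security_log.info("login ok user=%s ip=%s", username, source_ip)
        _failed_attempts[username] = 0
        return
    _failed_attempts[username] += 1
    security_log.warning("login failed user=%s ip=%s", username, source_ip)
    if _failed_attempts[username] >= ALERT_THRESHOLD:
        # In practice this would notify a SIEM or on-call team, not just the log file.
        security_log.critical("possible brute force user=%s ip=%s", username, source_ip)
```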
10. Vulnerabilities in external code due to a missing SBOM
Many developers of medical device software rely on external components and open-source libraries without creating a software bill of materials (SBOM). Without a precise overview of the third-party components in use, it is difficult to identify security gaps or outdated versions. This increases the risk that critical vulnerabilities go unnoticed and are ultimately exploited by attackers.
A telemedicine service uses an open-source component that is responsible for transferring patient data. Because the developers do not maintain an SBOM, a critical security vulnerability in this library goes unnoticed for months. Only when attackers gain unauthorized access to the database and extract personal health information does it become clear that the library version in use has been classified as insecure for months. The company is forced to inform both the people affected and the relevant authorities and to revise its entire software architecture to prevent similar vulnerabilities in the future.
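As an illustration, a simple check could match the components of a CycloneDX-style SBOM (JSON) against a list of known-vulnerable versions. The advisory entries and component names below are invented for the example; in practice, tools such as OWASP Dependency-Track or pip-audit query real vulnerability databases:

```python
import json

# Invented advisory list for the sketch; real checks query CVE/OSV databases.
KNOWN_VULNERABLE = {
    ("examplelib-transfer", "1.4.2"): "CVE-XXXX-YYYY (remote data disclosure)",
}

def audit_sbom(sbom_path: str) -> list[str]:
    """Flag SBOM components whose exact version appears on the advisory list.

    Assumes a CycloneDX-style JSON SBOM with a top-level "components" array
    containing "name" and "version" fields.
    """
    with open(sbom_path, encoding="utf-8") as fh:
        sbom = json.load(fh)
    findings = []
    for component in sbom.get("components", []):
        key = (component.get("name"), component.get("version"))
        if key in KNOWN_VULNERABLE:
            findings.append(f"{key[0]} {key[1]}: {KNOWN_VULNERABLE[key]}")
    return findings

if __name__ == "__main__":
    for finding in audit_sbom("sbom.json"):
        print("VULNERABLE COMPONENT:", finding)
```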

Conclusion: “Security by design” pays off
The above examples clearly show how easily cybersecurity failures can creep in and how serious the consequences can be in the healthcare sector. Particularly with medical applications – whether DiGAs, cloud-based hospital systems or software for medical devices – a single successful attack can cause severe harm, both for patients and for the companies involved.
Cybersecurity in the healthcare sector: Mandatory, not optional
It is therefore essential to include cybersecurity in the early design phase. This approach, often referred to as “security by design”, includes, among other things:
- Early risk analyses and threat models
- Clear definition of security requirements and budget items for security
- Regular security checks (penetration tests, code reviews, etc.) during the entire development process and before a release
- Consistent product maintenance through vulnerability management and software updates, even after product launch
- Establishing clear responsibilities and training measures to continuously expand the team’s expertise
Security by design saves resources in the long term
It may initially seem more complex and expensive to invest in security mechanisms right from the start. However, the costs incurred afterwards for subsequent improvements, product recalls, claims for damages or reputation restoration are usually significantly higher.
At a time when patient data is one of the most valuable assets for cyber criminals and healthcare facilities are repeatedly the target of ransomware attacks, cybersecurity should be considered a fundamental part of any healthcare software project. This not only strengthens the trust of patients and partners, but also ensures long-term competitiveness.
In short, those who consistently implement security by design benefit from better product quality, a higher level of compliance and a faster response to emerging threats. In this way, the risk of serious cybersecurity failures can be significantly reduced. The sobering part is that all of the failures described above can be avoided relatively easily, provided they are addressed consistently and cybersecurity is understood as an integral part of the entire development process.