3 Questions to Help Your Team Balance Cybersecurity & Usability

With the growing risk of potentially fatal hacks of interconnected medical devices, FDA is more focused than ever on ensuring that product development teams incorporate robust cybersecurity measures into their device designs. Whatever measures teams decide to implement, FDA's chief concern is that devices maximize user safety while mitigating risk as much as possible.

However, implementing cybersecurity measures does not automatically make a device safer. Depending on the measures chosen, they can degrade a device's usability. To evaluate this trade-off, teams should ask the following three questions:

1. What’s the relationship between the cybersecurity measures and device access?

Users need to be able to get into the device to access data, control doses, and so on. Measures such as firewalls and passwords help ensure that only the intended parties can use the device. To augment these measures, teams may incorporate fail-safes to defend against Distributed Denial of Service (DDoS) attacks or other targeted activities that could compromise them.

These actions are critical to maximizing device safety, but they can open the door to use error. If, for example, the login process is too complex, a user (especially one with limited capacities) may fail to complete it correctly. The fail-safes kick in after repeated failed login attempts to keep attackers out, but they cannot distinguish a legitimate user from an attacker. A user who is locked out of the device can be put into a hazardous situation (such as being unable to receive medication) that could lead to harm (illness, death, etc.). So development teams need to think through how they can alleviate any use errors that may result from the cybersecurity measures.
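
One way to soften this trade-off is to make the fail-safe a temporary, escalating delay rather than a permanent lockout. The sketch below illustrates the idea; the thresholds, class name, and verify callback are hypothetical, not drawn from any particular device or standard.

```python
# Minimal sketch of a lockout fail-safe that slows attackers without
# permanently stranding a legitimate user. All names and thresholds
# (MAX_ATTEMPTS, BASE_DELAY_S, LoginGuard) are hypothetical.
import time

MAX_ATTEMPTS = 5     # failed logins before the fail-safe engages
BASE_DELAY_S = 30    # first lockout delay; doubles on each further failure

class LoginGuard:
    def __init__(self):
        self.failures = 0
        self.locked_until = 0.0

    def attempt(self, password: str, verify) -> bool:
        """Return True on successful login. `verify` checks the credential."""
        now = time.monotonic()
        if now < self.locked_until:
            return False  # still inside the temporary lockout window
        if verify(password):
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= MAX_ATTEMPTS:
            # Escalating but finite delay: a brute-force attacker is slowed
            # to a crawl, while a confused patient waits seconds, not forever.
            delay = BASE_DELAY_S * 2 ** (self.failures - MAX_ATTEMPTS)
            self.locked_until = now + delay
        return False
```

A design like this acknowledges that the device cannot tell a patient from an attacker, so it never imposes a consequence a patient could not survive.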

2. Will routine maintenance affect the patient?

Since cybersecurity is an ever-evolving field, the measures originally implemented in a device will likely need to be patched over time. The right approach varies with what the device requires, but having a process and design features that accommodate patching is vital.

Use errors can also occur during routine patching and maintenance. A user who is not well educated about how these processes work might become confused if the device is equipped to auto-update. That confusion can prompt unintended responses and, in turn, a potentially hazardous situation. Possible use errors include perceiving the update as a device error and shutting the device down, initiating the update process incorrectly (if the user is required to do so), and downloading the wrong updates or too many updates. All of these can lead to software malfunctions that impact device operation and expose users to hazards.
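
Design can close off several of these use errors at once: tell the user plainly that an update is underway, and reject packages that are wrong for the device. The sketch below shows one way that might look, assuming a hypothetical notify callback and a signed-package workflow; none of the names come from a real device.

```python
# Sketch of a patching flow built around the use errors above: the user is
# told an update is normal (so it is not mistaken for a device error), and
# the package is validated (so a wrong or duplicate update is rejected).
import hashlib

def apply_update(package: bytes, expected_sha256: str,
                 new_version: tuple, current_version: tuple, notify) -> bool:
    notify("Update in progress. This is normal; do not power off the device.")
    if hashlib.sha256(package).hexdigest() != expected_sha256:
        notify("Update rejected: the package failed its integrity check.")
        return False
    if new_version <= current_version:
        notify("Update skipped: the device already has this version or newer.")
        return False
    # ...write the package to the inactive firmware slot, then restart...
    notify("Update complete.")
    return True
```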

3. Do the cybersecurity measures protect the user from hazardous situations?

This question addresses a principle developed by the National Institute of Standards and Technology's Usability of Cybersecurity team, which asserts that a robust cybersecurity system must make it “…easy for the user to do the right thing, hard to do the wrong thing, and easy to recover when the wrong things happen anyway.” Applied to medical devices, the principle seeks to ensure that controls are in place to mitigate or eliminate hazardous situations even if use error occurs.

Take, for example, a user who must plug a device into a computer or connect it to a wireless network in order to update. If the connection must happen on a secure network and the user instead connects to an insecure one, that use error could result in device malfunction due to hacking. If connecting to a wireless network is necessary for your organization's device, applying the principle of making it “hard to do the wrong thing” through warning messages or authentication measures could mitigate the use error and the associated hazardous situations.
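
In code, “hard to do the wrong thing” can be as simple as refusing to start an update over an untrusted connection. In this sketch, is_network_trusted() stands in for a platform-specific check (for example, a known network using modern encryption); every name here is a hypothetical placeholder.

```python
# Sketch: block the update path on an untrusted network and warn the user,
# rather than trusting the user to notice the risk on their own.
def start_update(is_network_trusted, warn, begin_update) -> bool:
    if not is_network_trusted():
        warn("This network is not secure. Please connect to a trusted "
             "network before updating.")
        return False  # refuse outright instead of letting the user proceed
    begin_update()
    return True
```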

Likewise, should the use error happen and the device become compromised, having security measures that can restore the device to normal, safe function is key. Because users interact with devices in unintended, unpredictable ways, measures that maintain device integrity in spite of those use errors will better control the hazards.
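
“Easy to recover” often takes the shape of a last-known-good fallback. The sketch below assumes a hypothetical two-slot firmware layout with injected verification and recovery callbacks; it illustrates the recovery principle rather than any specific product's design.

```python
# Sketch of boot-time recovery: if the active image fails verification
# (corrupted or compromised), restore the last image that passed, and fall
# back to a minimal safe mode only as a last resort.
def boot(verify_slot, swap_to_backup, enter_safe_mode) -> str:
    if verify_slot("active"):        # signature check plus self-test
        return "normal operation"
    swap_to_backup()                 # restore the last-known-good image
    if verify_slot("active"):
        return "recovered from backup"
    enter_safe_mode()                # keep essential functions running
    return "safe mode"
```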

One of the best ways teams can address these questions is through risk activities, including hazard analysis, failure modes and effects analysis (FMEA), use error analysis, and the like. Implemented as part of the usability engineering process, these activities go a long way toward evaluating and controlling the risks that arise from use errors related to cybersecurity. The more thoroughly these activities answer the questions above, the better they can inform a smart, dynamic approach to cybersecurity in medical devices.

Interested in how Cognition addresses risk management for medical device, pharmaceutical, and combination product teams during product development? Register for our next installment of the 2017 “From the Helm” webinar series. These sessions will focus on templates for risk management in an online, guided environment. For the September 20, 2017 webinar, click here. For September 21, 2017, click here.

About the Author

Nick Schofield is a content creator for Cognition Corporation. A graduate of the University of Massachusetts Lowell, he has written for newspapers, the IT industry, and cybersecurity firms. In his spare time, he is writing, hanging out with his girlfriend and his cats, or geeking out over craft beer. He can be reached at nick.schofield@cognition.us.