In your product development cycle, your teams get heavily involved in usability engineering—particularly during risk management. This process is meant to be intensive: the more exhaustively your team identifies potential user interactions, the greater the benefits. FDA is then satisfied that you have applied its “Inherent Safety by Design” principle, and overall risk to the user is reduced.
So, you’ve got to be as thorough as you can. But even if you think you’ve turned over every stone during your usability efforts, there are always things that pop up. Here are some scenarios worth analyzing.
1. The same use error occurring in different use environments
There are many factors involved in why a use error occurs, especially the environment in which it occurs. Time of day, physical location, lighting, temperature, and the like are all environmental factors that can and do impact device use. As a result, you want to design your device so that those environmental factors have as little effect on device use as possible.
However, what if the same perceptive, cognitive, and/or active user actions lead to the same use error but in a different environment? Imagine the following sequence of user actions is being analyzed for risk:
- User is alerted to a visual prompt from the device interface (perception)
- User interprets the prompt as “Begin Dosing” (cognition)
- User presses the dosing button (action)
In the transition between steps 1 and 2, your team might identify that certain environmental factors, such as low lighting conditions, make it hard to see the visual prompt. This causes a cognitive error, which then results in the user doing something incorrect in the sequence, like not pressing the button to start dosing. But what does it mean when the same error occurs even after those environmental factors are changed (say, the device is used in a well-lit area) or controlled for?
There are two perspectives to consider. First, the device interface and/or the user tasks may exceed the user’s capabilities, requiring risk controls. Alternately, the risk controls for one environment may expose the user to risks in another. Determining which (or perhaps both) of these is the case is important in developing risk controls that make device use consistent and safe across use environments.
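The environment-by-task analysis above can be sketched in code. This is a minimal, hypothetical illustration (the step names, environments, and observations are invented for the example, not from any standard): it cross-references each task step against each use environment and flags errors that persist in every environment, which suggests the task itself, not the environment, may exceed user capabilities.

```python
# Illustrative sketch with hypothetical data: cross-tabulating a
# perception-cognition-action task sequence against use environments
# to spot use errors that persist regardless of environmental conditions.

# The three-step dosing sequence from the article
task_steps = [
    "perceive visual prompt",
    "interpret prompt as 'Begin Dosing'",
    "press dosing button",
]

# Hypothetical environments under analysis
environments = ["low lighting", "well-lit room"]

# Hypothetical observations from simulated-use studies:
# (step, environment) -> was a use error observed?
observed_errors = {
    ("perceive visual prompt", "low lighting"): True,
    ("perceive visual prompt", "well-lit room"): True,  # same error, both envs
    ("press dosing button", "low lighting"): True,
}

def persistent_errors(steps, envs, errors):
    """Return steps whose use error occurs in every environment analyzed.

    A step that fails across all environments suggests the task may exceed
    user capabilities and needs a design control, rather than an
    environment-specific mitigation.
    """
    return [s for s in steps
            if all(errors.get((s, e), False) for e in envs)]

print(persistent_errors(task_steps, environments, observed_errors))
# -> ['perceive visual prompt']
```

In this sketch, the prompt-perception error shows up in both lighting conditions, so an environment-specific control (e.g., a brighter display) would not be enough on its own.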
2. Intended users are not the ones operating the device
Most of usability engineering during product development is focused on the intended users; after all, they are the ones operating the device most often. However, any number of factors can lead to someone other than the intended user operating the device for a patient. This opens up a whole host of risks to users and patients that must be analyzed.
The most pressing concern in these scenarios is the device’s overall ease of use, which shapes what risk controls should be implemented. In the event a patient (acting as a user) is unable to operate their device and requires assistance from an untrained person, is the device interface simple and intuitive enough for that person to use? And does the lack of any specialized knowledge on the part of the patient/intended user open the door to hazards and harms?
3. The device interface is too interpretive
In the drive to make a device as easy as possible for everyone to use and to reduce risk, teams may make the interface too simple. When the interface features corresponding to particular user functions are simplified to the point of being open to interpretation, the risk of use error increases. Design choices can also misalign with the user’s culture and expectations.
Imagine, for example, your team is designing a device with two separate, essential functions the user has to initiate. Your team might decide to mark one control green and the other red in order to differentiate them. If the user’s culture tells them that green symbolizes “go” and red symbolizes “stop” or “warning,” that will shape their expectations of the device and its interface. Without proper features to control that interpretation, the probability of use error increases.
4. Exposure to an existing hazard can result in use error
When you explore root causes for use errors, they may not always originate in the user or the functional device interface. Exposure to an existing hazard can result in another use error, which exposes the user to more hazards, and so on. For example, a user may incorrectly handle the power supply for the device and end up exposing wires. The user, recognizing the electrical hazard, might adapt and avoid it by holding or operating the device differently. As a result, they might perform their actions incorrectly and be exposed to another hazard. These sorts of chains of events need to be examined.
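One way to examine such chains is to model them explicitly. This is a minimal sketch with hypothetical event names drawn from the power-supply example (not a standard risk-analysis tool): each event maps to the events it can lead to, and a simple traversal enumerates every chain so each can be reviewed during risk analysis.

```python
# Illustrative sketch with hypothetical event names: modeling chains of
# hazard exposure and use error as a directed graph, then enumerating
# the full chains for review.

# Cause -> possible consequences, based on the power-supply example
leads_to = {
    "mishandle power supply": ["exposed wires"],
    "exposed wires": ["user holds device differently"],
    "user holds device differently": ["incorrect operation"],
    "incorrect operation": ["exposure to second hazard"],
}

def event_chains(start, graph, path=None):
    """Yield every chain of events reachable from a starting use error."""
    path = (path or []) + [start]
    followups = graph.get(start, [])
    if not followups:          # chain ends where no further consequence is mapped
        yield path
    for nxt in followups:
        yield from event_chains(nxt, graph, path)

for chain in event_chains("mishandle power supply", leads_to):
    print(" -> ".join(chain))
```

As the graph grows (multiple consequences per event), the traversal surfaces every branch, which is the point: no chain of events goes unexamined.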
There are a number of factors to account for during the usability engineering process. The more thorough you can be, and the more broadly you can think through all the scenarios where use error might occur, the safer your device design can be. Bringing scenarios such as these into your usability activities makes your efforts that much more productive.
About the Author
Nick Schofield is a content creator for Cognition Corporation. A graduate of the University of Massachusetts Lowell, he has written for newspapers, the IT industry, and cybersecurity firms. In his spare time, he is writing, hanging out with his girlfriend and his cats, or geeking out over craft beer. He can be reached at email@example.com.