How to Design Medical Devices with Dynamic Regulations?

With advances in the technology industry, instantaneous data access is not only ubiquitous, but also expected in the medical device industry. Health delivery and monitoring services are no longer solely in healthcare facilities; these services are becoming more pervasive in the patients’ homes, shifting the medical treatment paradigm and creating opportunities for new disruptive electronic technologies. The shift also engenders challenges to prevent compromise or misuse of patient data. In this blog, we’ll show how one can begin designing a medical device in the context of certain regulations to minimize rework between prototype and production design builds.

Consideration #1: What is a process for designing medical device electronics?

Systems engineering, which consists of understanding and defining the system requirements (including regulatory and customer expectations), is paramount to the design process. Federal regulations for medical devices (and even aerospace systems) regard traceability as essential. Traceability is a relationship mapping between two or more products of the development process. Increased traceability rigor is required as the criticality of the medical device increases (Class III vs. Class I, for example).

We traditionally design systems from the top down, which means that we look at the overall system perspective and consider the available requirements from all stakeholders. Requirements are then allocated to appropriate software or hardware designations, broken down into increasing specificity, and implemented. Validation asks, "Do we have the right requirements?" while verification asks, "Does our implementation meet the requirements?" Together, this process is called V&V (Validation & Verification).

[Figure: requirements validation and requirements verification, shown over time]

Design and verification occur in opposite directions:

[Figure: design proceeds top-down, from the highest-level subsystem module to the lowest; verification proceeds bottom-up, from the lowest-level subsystem module to the highest]

Design is performed from the top down, which means that high-level software modules are defined before lower-level ones; lower-level modules are combined to make up one or more higher-level modules. Verification runs in the opposite direction: lower-level modules are verified first to mitigate integration risks, so it's recommended to perform requirements-based testing at the module level before system-level testing.

Consideration #2: What are some landmines to avoid when designing new medical devices?

Functional requirements, such as those pertaining to customer expectations, and regulatory requirements must be considered simultaneously in the design process. Even if the initial goal is only to develop a prototype demonstrating the principle for your stakeholders, neglecting the holistic system perspective by failing to ask "Who are all of my stakeholders, and what do they care about?" can lead to rework. This is because hardware or software components chosen to meet the functional requirements may not meet the regulatory ones. So, for example, embedded computers (and other components) may have to be replaced in the production design. How much additional effort would we inadvertently append to the overall hardware and software development effort? 20%? 40%? From our experience, the cost delta can range from 30% to 50%.

Short-term goals pursued without a long-term design strategy likely increase overall development cost; it's a common pattern we've seen across the myriad companies working in this space. Let's study how regulatory rework can be avoided for a medical device example by considering all critical elements sufficiently early in the design process.

Consideration #3: What are some example regulations to consider?

According to the Food and Drug Administration (FDA), regarding medical records, Title 21: Food & Drugs – Part 21 – Protection of Privacy (21.71(5)) states that a "record is to be transferred in a form that is not individually identifiable." Another statement (21.70(a)) provides that "names and other identifying information are first deleted and under circumstances in which the recipient is unlikely to know the identity of the subject of the record." [1]

To paraphrase, transmission and storage of customer records must be performed in a way that protects patient confidentiality. Simply transmitting the patient’s name and storing the name and data in one place won’t put federal regulators at ease. Suppose we’re designing a subsystem called the Diagnostic Reporting Function (DRF) of our next medical device, and we want to comply with the Part 21 regulations above. Perhaps we would enforce a requirement on our medical device, such as:

Diagnostic Reporting Function (DRF) shall generate a unique identifier for each heart-rate monitor reading. [REQ-1]

Diagnostic Reporting Function (DRF) shall transfer the generated unique identifier along with the recorded heart-rate monitor reading to the Transmission Endpoint System (TES). [REQ-2]
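A minimal Python sketch of what REQ-1 and REQ-2 might look like in software follows. The function name `package_reading` and the payload fields are hypothetical illustrations, not a prescribed implementation; a real DRF would likely run on embedded firmware.

```python
import json
import uuid

def package_reading(heart_rate_bpm: int) -> str:
    """Attach a unique identifier to a heart-rate reading (REQ-1)
    and serialize it for transfer to the TES (REQ-2)."""
    record = {
        # Unique per reading, carrying no patient-identifying fields.
        "reading_id": str(uuid.uuid4()),
        "heart_rate_bpm": heart_rate_bpm,
    }
    return json.dumps(record)

payload = package_reading(72)
```

The payload contains only the reading and its generated identifier, so the transferred record itself is not individually identifiable.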

[Figure: the medical device's Diagnostic Reporting Function transmits over the Internet to the Transmission Endpoint System, where a Unique ID/Patient Isolator separates patient identity from medical data before it reaches the customer/patient]

We won’t get into too much detail on the above figure, but it underscores how one may decouple identifiable patient information from patient medical data, making it harder for unauthorized individuals to view sensitive information. Note that transferring patient data within an organization may be acceptable if the Transmission Endpoint System (TES) is operated by a single organization and/or appropriate vendor risks are mitigated [1].
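To make the isolator idea concrete, here is a hypothetical Python sketch (in-memory dictionaries stand in for what would really be separate, access-controlled data stores): readings and the ID-to-patient mapping are kept apart, so neither store alone reveals whose reading is whose.

```python
import uuid

reading_store = {}  # reading_id -> medical data (no names)
patient_map = {}    # reading_id -> patient identity (held separately)

def record_reading(patient_name: str, heart_rate_bpm: int) -> str:
    """Store a reading and its patient mapping in separate stores."""
    reading_id = str(uuid.uuid4())
    reading_store[reading_id] = {"heart_rate_bpm": heart_rate_bpm}
    patient_map[reading_id] = patient_name  # kept apart from the data
    return reading_id

rid = record_reading("Jane Doe", 68)
```

Re-identifying a reading requires access to both stores, which is the isolation property the figure describes.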

We have now decoupled health records from patient identification, a key regulatory requirement. Clearly this requirement impacts the software development of our example medical device. We also have other regulatory requirements to mitigate other cyber risks. What about encryption? Sending the output of the medical device in plain text (unencrypted) technically satisfies REQ-1 and REQ-2 above, but it does not satisfy other regulatory requirements or guidelines. So, in the spirit of the V&V figure described previously, we have not yet completed our requirements validation process, since we need more requirements coverage.

Consideration #4:  What questions should we answer in order to develop a more secure medical device?

As an illustration, Remote Patient Monitoring (RPM) patients expect an intuitive user interface (UI) for their monitoring products. And, for device manufacturers to deliver on this expectation safely, reviewing the overall system architecture is paramount to ensuring that security and privacy risks are mitigated, according to a recent National Institute of Standards and Technology (NIST) article titled Securing Telehealth Remote Patient Monitoring Ecosystem [2].

Based on the NIST Cybersecurity Framework Version 1.1 [3], the same article ([2]) defines a set of Desired Security Characteristics guidelines, pertinent to medical device designers. Let’s pose the guidelines as questions to frame additional requirements for our medical device example (Note that these may be more relevant to the TES mentioned above):

  • Identify
    • How are network assets identified and managed?
    • What are the vendor risks, such as cloud providers or technology developers?
  • Protect
    • How are users identified and authorized appropriately with access control?
    • Is least privilege enforced for every user? 
    • Are data integrity and encryption (including for data at rest) enforced?

The Protect category especially echoes the principles outlined in the excellent book Zero Trust Networks: Building Secure Systems in Untrusted Networks by Evan Gilman and Doug Barth [4].

  • Detect
    • What are the ways in which cyber threats are detected?
    • How is continuous security monitoring practiced?
    • In what ways are user account behaviors studied with analytics?
    • What type of information do security logs contain?
  • Respond
    • When a cyber threat event is discovered, what are the processes to limit the extent of the damage?
  • Recover
    • After a cyber incident, what are the processes to restore systems affected by the event?
    • How does the cyber incident recovery process handle external and internal parties?

Consideration #5:  What is the impact of missing regulatory requirements in the design process?

Through the guidelines above, we can identify useful considerations to ensure systems are resilient against the most common forms of cyber threats. Neglecting security best practices puts patients (and thus company reputation) in peril; in fact, a cyber attack occurs every 39 seconds on average, according to Michel Cukier of the University of Maryland [5]. To ensure the medical device we're designing will be better prepared, let's look at one question from the above: Are data integrity and encryption (including for data at rest) enforced?

Data integrity means that the data has not been tampered with. Common techniques to check that data has not been altered (mistakenly or intentionally) include checksums and cryptographic hashes; SHA-256 is an example of the latter.
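As a quick illustration, Python's standard-library `hashlib` can compute a SHA-256 digest of a payload; any alteration to the payload, however small, produces a completely different digest.

```python
import hashlib

def sha256_digest(payload: bytes) -> str:
    """Compute the hex SHA-256 digest of a message payload."""
    return hashlib.sha256(payload).hexdigest()

payload = b'{"reading_id": "...", "heart_rate_bpm": 72}'
digest = sha256_digest(payload)

# A receiver recomputes the digest over the received bytes;
# a mismatch indicates the payload was altered in transit.
tampered = payload.replace(b"72", b"27")
assert sha256_digest(tampered) != digest
```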

So, what might some ancillary requirements be? Some examples, though not exhaustive, could be:

DRF shall transmit the computed SHA-256 of the message payload to TES. [REQ-3]

DRF shall encrypt all outbound communication to TES with the Transport Layer Security (TLS) version 1.2 or higher standard using the Advanced Encryption Standard (AES)-256 cipher. [REQ-4]
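A sketch of how REQ-4 might be enforced on the DRF side follows, using Python's standard-library `ssl` module purely for illustration (an embedded device would more likely use a library such as mbedTLS, and the cipher filter string is one possible choice, not a mandate).

```python
import ssl

# Client-side context for outbound DRF -> TES connections (REQ-4):
# reject anything older than TLS 1.2 and verify the server certificate.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Restrict TLS 1.2 negotiation to AES-256 suites with ephemeral key
# exchange (OpenSSL cipher-filter syntax); TLS 1.3 suites are
# configured separately by the library.
context.set_ciphers("ECDHE+AES256")
```

Wrapping a socket with this context (via `context.wrap_socket(...)`) would then refuse any connection that cannot meet the TLS 1.2+ floor.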

If we had developed our medical device prototype without considering the full system view of the end product (e.g., only REQs 1-2), we may have chosen a microcontroller unit (MCU) lacking the relevant encryption capability, resulting in prodigious hardware and software rework that, in our experience, may increase costs by 30% to 50% or more.

Fortunately, there is a way to mitigate these types of technical risks. And, we can help you accomplish both your short-term and long-term objectives through these disciplined processes. There are of course many more risks than we can ever write in this tutorial, and as you develop your product, we can help you avoid them. If you’d like us to brainstorm with you or if you’d like us to cover a future blog article topic, you’re welcome to comment or send us a message on our Contact Us page.

References:

[1] FDA Title 21: Food & Drugs – Part 21

[2] Securing Telehealth Remote Patient Monitoring Ecosystem https://www.nccoe.nist.gov/sites/default/files/library/project-descriptions/hit-th-project-description-final.pdf

[3] NIST Cybersecurity Framework Version 1.1 https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.04162018.pdf

[4] Zero Trust Networks: Building Secure Systems in Untrusted Networks by Evan Gilman and Doug Barth https://www.oreilly.com/library/view/zero-trust-networks/9781491962183/

[5] Hackers Attack every 39 Seconds (NBC News) http://www.nbcnews.com/id/17034719/ns/technology_and_science-security/t/hackers-attack-every-seconds/
