Blog

Wireless Design Considerations for Medical Devices

INTRODUCTION

Wireless connectivity for medical devices is no longer a long-shot dream, but an expectation from patients. Wireless devices are complex; however, with some planning, major risks can be avoided. Below we outline an introduction to the general categories and delve into specific examples. The broad categories include:

  • Intended Application
  • Intended Region of Operation
  • Medical Regulations
  • Wireless Regulations & Certifications
  • Miscellaneous Regulations
  • Wireless Design Considerations
  • Additional Design Considerations
  • Manufacturing Considerations

These categories should be viewed together, since they can influence each other. We can start by asking specific questions to better understand the solution process.

INTENDED APPLICATION

The intended application includes the specific problem, proposed solution, and target population.

What is the specific problem?

What is the proposed solution?

Who is the target population?

The previous three questions help shed light on what wireless design approach is viable. Asking more questions may help increase awareness. For example, what is the technical aptitude of the target population? Must all wireless components work autonomously without user interaction? How should the user interact with the system?

INTENDED REGION OF OPERATION

In what countries is the device expected to operate?

Of course, the operating country will impact the applicable medical and wireless regulations. For example, the FDA is the regulatory body for medical devices marketed in the United States. Also, because of FCC regulations, the device's operating frequency ranges will be limited and are, therefore, a design factor. For example, in the case of LTE Cat 1, frequency Band 2 (downlink centered near 1960 MHz and uplink near 1880 MHz) can be used in the US [1]. However, in Europe, Band 2 is not permitted, and a band such as Band 28 (downlink centered near 780.5 MHz and uplink near 725.5 MHz) is used instead. If the device will be operated in both regions, one could either choose bands that are common to the target locations (e.g., Band 1 – downlink 2140 MHz and uplink 1950 MHz) or provide a separate software configuration that is selected depending on the location, as sketched below. The chosen frequencies can impact the size requirements of the circuit, since lower frequencies with multiple bands tend to take more space than a single-band, high-frequency circuit. So, tradeoffs between the operating bands/frequencies and size (as well as operating power) are ubiquitous.
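As a minimal sketch of the second option, a region-based band configuration might look like the following. This is illustrative only; the region names and band assignments simply mirror the example above and are not an authoritative regulatory mapping.

# Illustrative region-to-band lookup for an LTE Cat 1 modem configuration.
# The assignments mirror the example above and are NOT a complete regulatory mapping.
LTE_BANDS_BY_REGION = {
    "US": [2],    # Band 2 example for the United States
    "EU": [28],   # Band 28 example for Europe
}

def select_bands(region):
    """Return the candidate LTE bands for the configured region."""
    if region not in LTE_BANDS_BY_REGION:
        raise ValueError("No band configuration defined for region: " + region)
    return LTE_BANDS_BY_REGION[region]

print(select_bands("US"))   # [2]
print(select_bands("EU"))   # [28]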

What about the device's proximity to the user during operation?

Specific health guidelines also address the safe distance of the wireless device from human tissue. For example, the Specific Absorption Rate (SAR) measures the rate at which RF energy is absorbed by the body. SAR testing uses models of the human body that are filled with liquids to simulate human tissue RF absorption [2]. For the frequency bands of interest, the SAR values are tested at the most severe (not necessarily typical) operating conditions. Therefore, in some cases, the SAR value may pertain to a position or orientation that is seldom used.

MEDICAL REGULATIONS

Considering the medical regulations as a whole package helps ensure nothing is missed in the early product development phases. Some regulations include:

Standard | Title
ISO 13485 | Medical devices – Quality management systems
ISO 14971 | Application of risk management to medical devices
IEC 60601 | Medical electrical equipment requirements
IEC 62304 | Medical device software – Software life cycle processes

Ensuring that adequate medical device quality management processes (ISO 13485) are in place prior to development is key. In terms of risk management (ISO 14971), identifying, documenting, and mitigating risks is paramount; addressing any Specific Absorption Rate (SAR) concerns for a wireless product is one example of a risk captured in the risk management process. Additionally, understanding how IEC 60601 requirements would impact the wireless design is essential. For example, if an audible alarm is required by IEC 60601 to warn of an imminent failure, must it also capture failures associated with the wireless components? Finally, understanding the software aspects of medical device certification (IEC 62304) in the context of hardware component selection is important. What specific wireless functions must be certified to what software safety class? The answer clearly depends on the consequences of the component's failure.

As a result, all these aspects are interconnected and should be analyzed together.

WIRELESS REGULATIONS & CERTIFICATIONS

Which regulatory body (or bodies) apply?

The FCC is the regulatory body in the United States that specifies whether a specific device can operate at a specific frequency with a specified power level in specific directions for a designated application.

  • FCC Part 15B: Unintentional Radiator
  • FCC Part 15C-F, H: Intentional Radiator

The unintentional radiator category specifies the acceptable power levels for devices operating between 9 kHz and 3 THz that are not “intended to emit RF energy wirelessly.” [3]

This applies to, for example, an onboard microcontroller with a CPU operating at 1 MHz. In this case, even though wireless power is not intentionally transmitted, RF energy is still generated (as described by Maxwell's equations). The acceptable power levels in this category are generally lower than those for intentional radiators and, therefore, still require electromagnetic interference (EMI) minimization techniques.

The intentional radiator category specifies the acceptable power levels for frequencies intended to be emitted. The full certification effort for this category can be reduced by using an antenna and matching network like one already found to be compliant (for example, by leveraging a pre-certified module), which can reduce development costs.

For both categories, specific design considerations should be used to mitigate these risks. This is addressed typically in the mechanical packaging and, most importantly, the Printed Circuit Board (PCB) layout portions of the design process.

What are some additional certifications?

Additional certifications will depend on the specific use case. For example, in the case of cellular Internet of Things (IoT), the 3rd Generation Partnership Project (3GPP) cellular standards specify electrical requirements, such as a minimum allowed circuit voltage. Also, adherence to PCS Type Certification Review Board (PTCRB) certification, or to a carrier's own device certification program (Verizon, for example, runs its own), may be required by cellular carriers. This observation is critical, since in connected-care applications, maintaining compliance with carriers' requirements may be overlooked and result in integration risk. Of course, other certifications may be applicable depending on the application.

MISCELLANEOUS REGULATIONS

Depending on the specific application, other regulations may apply. For example, the Health Insurance Portability and Accountability Act (HIPAA) in the United States protects sensitive patient health information. Also, the General Data Protection Regulation (GDPR) is a set of compliance regulations that protects citizens of the European Union.

WIRELESS DESIGN CONSIDERATIONS

What is the approach for wireless antenna?

Choosing the antenna topology is a nontrivial and critical system design choice; a full treatment of the subject is out of scope. Instead, several guiding principles are mentioned. In the case of a 2.4 GHz application, most antennas follow three general approaches: 1) wire antenna, 2) PCB antenna, and 3) chip antenna.

Antenna Type | Size | Cost | Efficiency | Ease of Manufacturing
Wire | Greatest | Greatest | Greatest | Lowest
PCB | Middle | Lowest | Lowest | Greatest
Chip | Lowest | Middle | Middle | Middle

So, if size is a constraint, a chip antenna may be best. If ease of manufacturing is important but not efficiency, a PCB antenna could be suitable. If efficiency must be optimized over all other variables, a wire antenna can be a viable option.

What is the approach for wireless antenna tuning?

Consider the following topics:

  • Ground Clearance around antenna
  • Optimal Antenna Placement
  • Antenna Feed Consideration
  • Antenna Matching network

In terms of the antenna feed consideration and antenna matching network, maximizing the power delivered to the antenna by minimizing reflections is a commonly employed technique in wireless design. A common tool used by RF engineers for this purpose is the Smith chart, as shown below. Fundamentally, the impedance is measured over the frequency range of interest, plotted on the chart, and modified by adding capacitors, inductors, and, in some cases, resistors. The goal is to move the impedance to the middle of the diagram (labeled “Matched Impedance”).

Fig. 1: Simplified Smith Chart

The process of tuning is to ensure the impedance from the perspective of the integrated circuit (IC) is equal to the impedance from the perspective of the antenna and equal to the characteristic impedance of the RF trace. Otherwise, significant reflections will result in power dissipation, and therefore, significantly reduce the distance of wireless operation.

Return Loss (dB) | Power Reflected (%) | Power Delivered to Antenna (%)
0.01 | 99.77 | 0.23
0.1 | 97.72 | 2.28
1 | 79.43 | 20.57
10 | 10 | 90
20 | 1 | 99

As the previous table demonstrates, due to conservation of energy, the more energy that is reflected, the less useful power is delivered into the antenna, degrading the performance of the overall system. Therefore, tuning the antenna is a key element of the wireless design process.
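The table values follow directly from the definition of return loss. As a quick sketch (plain Python, no external libraries) that reproduces them:

# Return loss (dB) relates to reflected power by: reflected = 10^(-RL/10).
# Whatever is not reflected is delivered to the antenna (ignoring other losses).
def reflected_and_delivered_percent(return_loss_db):
    reflected = 10 ** (-return_loss_db / 10)   # fraction of incident power reflected
    delivered = 1 - reflected                  # fraction reaching the antenna
    return reflected * 100, delivered * 100

for rl in (0.01, 0.1, 1, 10, 20):
    refl, deliv = reflected_and_delivered_percent(rl)
    print("RL = %5.2f dB -> reflected %6.2f%%, delivered %6.2f%%" % (rl, refl, deliv))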

Specifically, the following points can simplify the tuning process:

  1. Calibrate network analyzers prior to tuning.
  2. Use only high-Q components.
  3. Ensure capacitors have a series resonance at least double the operating frequency.
  4. Ensure inductors have a self-resonance at least double the operating frequency (a quick estimation is sketched after this list).
  5. Shunt components should be on the RF trace.
  6. Measure impedance at the same location at which components will ultimately lie.
  7. If multiple bands will be operated, tune the lower frequency band first.
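Regarding item 4, when a datasheet does not state an inductor's self-resonant frequency (SRF) directly, it can be estimated from the inductance and parasitic capacitance. A rough sketch, with purely illustrative component values:

import math

def self_resonant_frequency_hz(inductance_h, parasitic_capacitance_f):
    # SRF of an inductor modeled with a parallel parasitic capacitance:
    # f = 1 / (2 * pi * sqrt(L * C))
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * parasitic_capacitance_f))

# Illustrative values: a 3.6 nH inductor with ~0.1 pF parasitic capacitance
srf = self_resonant_frequency_hz(3.6e-9, 0.1e-12)
operating_freq = 2.4e9  # 2.4 GHz, the example band discussed above
print("SRF ~ %.1f GHz; at least 2x operating frequency: %s"
      % (srf / 1e9, srf >= 2 * operating_freq))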

Also, to help minimize EMI emissions and simplify the PCB layout process, the following four-layer PCB stackup is recommended [4]:

Fig. 2: Four Layer PCB Stackup

On the other hand, two-layer PCBs may be used in some cost-constrained applications, but they make the PCB routing more difficult: the characteristic impedance of a microstrip trace increases with substrate height, so the thicker dielectric of a two-layer board requires wider RF traces to reach the same target impedance. For completeness, an example two-layer stackup could be considered:

Fig. 3: Two Layer PCB Stackup
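To see why a taller two-layer substrate pushes toward wider RF traces, consider the widely used IPC-2141 microstrip approximation for characteristic impedance. The sketch below is illustrative only: the formula is approximate and the stackup dimensions are assumptions, but it shows that hitting a 50-ohm target on a thicker dielectric requires a much wider trace.

import math

def microstrip_z0_ohms(h_mm, w_mm, t_mm=0.035, er=4.3):
    # IPC-2141 style approximation for surface microstrip impedance (rough).
    # h = dielectric height, w = trace width, t = copper thickness, er = dielectric constant.
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# Assumed FR-4 (er ~ 4.3) and 35 um (1 oz) copper; dimensions are illustrative.
print(microstrip_z0_ohms(h_mm=0.2, w_mm=0.35))  # ~49 ohms on a thin four-layer prepreg
print(microstrip_z0_ohms(h_mm=1.5, w_mm=2.8))   # ~50 ohms needs a ~2.8 mm trace on a 1.5 mm two-layer board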

ADDITIONAL DESIGN CONSIDERATIONS

What other non-wireless functions are required?

Considering the wireless function requirements in the context of the non-wireless requirements is important.

Fig. 4: Wireless & non-wireless function separation

The wireless requirements may dictate that a specific chipset with a certain set of characteristics be used. But only a subset of those chipsets may address the non-wireless functions as well. In the case of a medical device, a portion of the system will perform some diagnostic or treatment operation, which may be implemented with non-wireless components. The transmission of the data of interest (e.g., breathing rate, medication status) will be performed by a wireless component. The delineation of, as well as the interactions between, these two subsystems is a critical design choice – what is the best interface?

What are the space requirements?

In the ideal case, the mechanical packaging is designed around the antenna, not the other way around. Otherwise, compromises on size may negatively impact system performance. Therefore, the space requirements highly impact the system design. In practice, however, there are physical constraints, and the specific dimension that is constrained impacts the type of antenna that may be used.

What are the mechanical provisions?

The mechanical casing can change the effective dielectric constant of the transmission medium from the perspective of the antenna. Therefore, understanding the mechanical casing and placing it near the antenna during tuning is strongly recommended.

MANUFACTURING CONSIDERATIONS

For manufacturing, there are various common categories to consider; three categories include design-for-manufacturing (DFM), part obsolescence, and manufacturing quality.

Design-for-Manufacturing considers the set of criteria to minimize product failures by maximizing the quality of the design decision process. This may include:

  • Tenting PCB vias wherever possible
  • Allowing an additional space clearance between PCB elements beyond the minimum mandated by the board manufacturer
    • An example is keeping traces as far apart as practical and keeping traces away from the edge of the board (to limit board edge oxidation).

Of course, there are additional criteria beyond these examples.

Parts obsolescence is another consideration. Wherever possible, choosing electronic components with common PCB footprints and electrical properties in the event of part obsolescence reduces significant change control process rework. General part obsolescence risk mitigation can be documented as part of the company’s ISO 14971 risk management process for the project. 

Manufacturing quality can be decomposed into PCB board manufacturing and PCB assembly. Both components of the process are critical to ensuring adequate quality control.

In terms of PCB assembly, building to IPC-A-610D Class 3 acceptance criteria is recommended for safety-critical applications, including medical devices.

Certifying to IPC Class 2 standards allows for extended life when compared to Class 1 but does not ensure uninterrupted service. If continuous operation of the wireless portion of the system is not as critical in the application, Class 2 may be acceptable. Furthermore, IPC-A-600 covers the PCB board manufacturing itself. Note that there are additional standards in PCB manufacturing.

Fig. 5: IPC-A-600 & IPC-A-610 Simplified Relationship

Additionally, the manufacturing quality process must be documented consistently and integrated with the medical product quality management system, like the flow of Fig. 6.

Fig. 6: Manufacturing Quality Process

REFERENCES

[1] – Haltian Global IoT Frequency Bands E-Book

[2] – Specific Absorption Rates for Cell Phones – https://www.fcc.gov/sites/default/files/sar_for_cell_phones_-_what_it_means_for_you.pdf

[3] – FCC parts – https://www.fcc.gov/oet/ea/rfdevice

[4] – Cypress Antenna Design and RF Layout Guidelines

How will 5G and healthcare tango and why should I care?

As technology evolves, it becomes difficult to keep up. Fall behind, and your tech becomes obsolete while your competitors are all over you. Now 5G, the fifth generation of wireless technology, is here.


“But, why should I care about the 5G tango?”

Video and/or health monitoring

Would a parent watching their infant with a monitoring system accept a choppy experience? Or can a vital signs monitoring system have spotty coverage? No, not these days.

It’s important to get ahead of the tango because:

  • Previously difficult problems can be solved now, meaning now is the time to solidify your vision of making the world a better place
    • Significant financial opportunities exist for ancillary features, such as remote control and monitoring
      • A $76 billion revenue opportunity for addressing the 5G healthcare transformation is predicted [2] 
  • Entire industries (your partners & competitors included) are moving, making it easy to fall behind
    • Due to your customers expecting seamless connectivity for more demanding services – current tech won’t suffice
  • Legacy technologies will be phased out
    • T-Mobile plans to remove 2G support by the end of 2020 [1]

Virtually all parts of healthcare will be affected, though the telehealth and remote patient monitoring (RPM) segments will be affected especially. The telemedicine market is expected to grow at an annual rate of 16.5% through 2023, pointing to ample opportunities to introduce new tech [3].

Not only is there a technological shift, but also a transformation in patient expectations. Patients expect seamless connectivity regardless of their location; they no longer accept connectivity confined to home Wi-Fi or spotty coverage away from home. They want their medical solution to work wherever they are. Clearly, a connectivity solution that considers the various available communication links is critical.

That challenge has been partially addressed by legacy wireless systems. But medical products have yet to adapt to the 5G (and eventually 6G) revolution. And that's where we can achieve a rich user experience as well as effective diagnostics and treatments.

In the table below, we look at different wireless technologies that have evolved over time.

Tech | Theor. Data Rate | Latency | Application
2G | 50 Kbps | 750 ms | SMS, pictures, MMS
3G | 2 Mbps | 300 ms | Voice, video calling, Internet
4G | 100 Mbps | 20 ms | High-res. video streaming
5G | 20 Gbps | 1 ms | AR/VR/ultra-high-res. video
Theoretical data rates and latencies for different technologies. Data from [4] and [5]


In practice, the true data rate1 is a function of multiple variables, including:

  • Surrounding devices
    • Similar devices broadcasting at the same frequency can interfere
  • Modulation
  • Transmit Power
  • Weather
  • Other factors

The realized data rate may be only a tenth of the theoretical value, but the table nonetheless underscores the growth potential: even a 5G link running at a tenth of its theoretical 20 Gbps still delivers roughly a 20x throughput increase over a 4G link at its full theoretical 100 Mbps.

However, the real benefit isn't only the data rate. The latency, or time lag between sending and receiving messages, is key – 5G offers a substantial (roughly 20x) reduction compared to 4G. The reduction is crucial for virtual reality (VR) and telemedicine applications. Also, vital signs could be streamed with an error rate of less than one in a billion, making remote surgical operations possible; 4G, on the other hand, is insufficiently equipped [4].

Therefore, even though legacy systems may be fast enough for some RPM (and other healthcare applications), legacy systems in several cases do not meet the latency requirement.

Now, let’s study the data rate and size requirements for different applications:

Application | Data Size or Data Rate
Image File – PET Scanner | 1 GB (size)
Video Conference | 2 Mbps (speed)
Virtual Reality (Training) | 50 Mbps (speed)
Surgery (4K Camera) | 75 Mbps (speed)
Augmented Reality (6 DoF) (Assisted Surgery) | 5 Gbps (speed)
Data sizes and rates for different applications. Data from [4-7]


Therefore, as applications become more demanding, legacy systems become less practical.
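To make that concrete, here is a short back-of-the-envelope sketch estimating how long the 1 GB PET image from the table would take to move at the theoretical rates listed earlier (real-world rates will be lower, as discussed above):

# Rough transfer-time estimate: time = size / data rate, using the theoretical
# rates from the earlier table. Realized rates are typically much lower.
RATES_BPS = {"3G": 2e6, "4G": 100e6, "5G": 20e9}
PET_IMAGE_BITS = 1 * 8e9  # 1 GB image expressed in bits

for tech, rate_bps in RATES_BPS.items():
    seconds = PET_IMAGE_BITS / rate_bps
    print("%s: ~%.1f s to move a 1 GB PET image (theoretical)" % (tech, seconds))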

So, it’s more than simply device connectivity. It’s about providing access to all for a better, faster, more available healthcare solution.

“What if I just select some 5G chipset and call it good?”

Careful. Select the wrong 5G chipset and you'll be in a world of hurt. The right choice requires a well-thought-out, forward-thinking exercise. We speak from experience on chip selection.

What to do now?

  • What are some practical ways to get ahead of the competition before time is lost?
  • How can these learnings complement an existing strategy and product?
    • And what 5G chipsets to consider or avoid?
      • Well, what about integration?
  • What about in the context of medical regulations?

These are great questions. You could send us a message here and we can stir up some ideas.

Footer

1 Data rateA->B is the data rate from point A to point B and is oftentimes asymmetric (Data RateA->B ≠ Data RateB->A) due to the different transponder frequency bandwidths allocated in either direction. Some texts use the term bandwidth interchangeably with data rate. However, we don't mix the terms here because bandwidth has multiple meanings (such as a range of frequencies).

Also, the common term download is related to, but different from, data rate. A download is an application-level transfer of data that usually uses acknowledgements in the opposite direction during the transfer. Therefore, download time is usually a function of the latency as well as the (asymmetric) data rate, and the concept of the bandwidth-delay product (BDP) becomes central, as sketched below.
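A minimal sketch of the bandwidth-delay product calculation, using illustrative link values only:

# Bandwidth-delay product (BDP): the amount of data "in flight" on a link,
# BDP = data rate x round-trip time. Values below are illustrative.
def bdp_bytes(data_rate_bps, rtt_s):
    return data_rate_bps * rtt_s / 8  # convert bits to bytes

print(bdp_bytes(100e6, 0.020))  # 4G-like link: 100 Mbps, 20 ms RTT -> 250,000 bytes in flight
print(bdp_bytes(20e9, 0.001))   # 5G-like link: 20 Gbps, 1 ms RTT -> 2,500,000 bytes in flight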

References

[1] – https://usatcorp.com/anticipated-cellular-carriers-2g-3g-sunset-dates/
[2] – https://www.ericsson.com/en/networks/trending/insights-and-reports/5g-healthcare
[3] – https://www.business.att.com/learn/updates/how-5g-will-transform-the-healthcare-industry.html
[4] – https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6927096/
[5] – https://hpbn.co/mobile-networks/
[6] – https://www.rcrwireless.com/20200204/5g/4-ways-5g-is-transforming-medical-field
[7] – https://www.qualcomm.com/media/documents/files/vr-and-ar-pushing-connectivity-limits.pdf

Tutorial: Medical Device Design with MATLAB Simulink MBSE

We will discuss an emerging system design technique that has been proven to reduce complex product development costs by 55% [1].

INTRODUCTION & WHY?

As product complexity in virtually every industry has increased exponentially, processes to design and prove device correctness have become more involved. Traditional methods for complex system design in highly regulated industries, such as medical devices and aerospace products, include the following (simplified) steps1:

1). Manual gathering of requirements
2). Design system and/or architecture
3). Coding and/or development
4). Validation & Verification

Typically the process outlined above is performed in what is referred to as a waterfall approach, in which each of these steps is followed sequentially and fully. Particularly, airborne software standards follow the DO-178C standard and medical software components follow the life cycle processes outlined in IEC 62304.

Suppose during step (4), (validation), a missing requirement is caught. Adding the requirement would require redesign, re-coding, re-verification, and re-validation. This is one reason highly regulated products oftentimes require greater capital investments than consumer electronics.

One seemingly obvious solution is to ensure requirements are fully correct and comprehensive in the early product development stages. In practice, achieving such a feat isn’t always realistic (especially if the product is complex) due to a gamut of reasons including limited foresight, evolving interfaces controlled by external groups, and dynamic regulatory requirements (think General Data Protection Regulation (GDPR)).

Another solution (albeit a poor choice) could be to attempt abridging certain subsets of the process. However, such an endeavor would prove to be haphazard, error-prone, and likely to reduce product quality. An alternative to the above quandary is not necessarily attempting to avoid the process but rather shortening the burden of re-running the steps through automation; Model-Based System Engineering (MBSE) is one such approach.

WHAT IS MODEL-BASED SYSTEM ENGINEERING (MBSE)?

MBSE is a system design technique that decomposes a system as a representation of simpler elements (via models), connects the models together, and continues to represent complex models in terms of smaller ones. The following terms make up a “system,” as [2] excellently defines:

  • Entities are components that make up a system
  • Attributes are characteristics of entities, like temperature or volume
  • Relationships are associations between attributes and entities based on causality

The key observation is that a system can be broken down hierarchically. Starting with the highest level of abstraction, the top level (system) is designed first and broken into its elements, namely subsystems. The process is repeated until the lowest level is sufficiently simple and therefore requires no additional decomposition. The definitions of these levels (in progressively greater granularity) are as follows:

  • System
  • Sub-systems
  • Assemblies
  • Sub-assemblies
  • Parts
Fig.1: System Decomposition

HOW TO PUT MBSE INTO PRACTICE?

We can follow a process similar to the formal Structured Analysis and Design Technique (SADT). Classically, engineers and scientists use these methods by drawing and breaking down the system on a whiteboard or a piece of paper. However, with advances in computer-aided design (CAD) and modeling tools, system decomposition can be performed on a computer using tools such as CAMEO or MATLAB Simulink. After the design phase is completed, the model is verified, and the code is automatically generated, so the burden of hand-writing software is significantly reduced.

Consider the impact in time savings. Instead of having to hand-write a million-line program, the vast majority of the coding can be automated. The savings are further underscored when feature enhancements are considered in the development cycle (typically via a change control process): the effort impact of required changes is greatly reduced, and faster product cycles (weeks instead of years) become possible.

REQUIREMENTS ANALYSIS & GATHERING

For the purposes of this tutorial, we’ll use MATLAB Simulink to design part of a mechanical ventilator medical device. When we start with a medical product, generally we gather product requirements, which include regulatory guidelines. IEC 60601 generally provides additional guidance, such as a required medical device’s alarm volume in decibels and expected operating modes. For this exercise, we will specify a simplified set of requirements, as shown in Fig. 2.

Fig. 2: Requirements Subset

With Simulink modeling, the requirements in Fig. 2 can be linked to the model itself, with the following steps accompanying Fig. 3:

  1. Open the requirements document in Word and highlight the requirements text
  2. Right-click a specific Simulink block
  3. Select Requirements
  4. Select Link to Selection in Word
Fig. 3: Linking Requirements to Model

Fig. 4 shows how to ensure the requirement mapping is successful.

Fig. 4: Linked Requirements to Model

DESIGN INTERFACES & HIGH LEVEL

Let us break our system into three main components:

  1. User Input (Pre)Processor
  2. Tube & Patient Lung
  3. Ventilator
    • Alarm Subsystem
    • Control Micro-controller Unit (MCU) Subsystem
    • Pneumatic Subsystem
      • Sensors
    • Pneumatic Controller

The Ventilator takes user input from the (pre)processor/panel and adjusts the output airflow (for example) to the patient accordingly. Inside the ventilator, a primary control MCU commands the Pneumatic Controller, which drives the Pneumatic Subsystem (which, in turn, includes sensors and pneumatic components). The Alarm Subsystem produces an audible and visual indication if certain fault conditions, such as power loss, are detected; a simplified plain-code sketch of this behavior follows.
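As a plain-code analog of that alarm behavior (for illustration only; in this tutorial the logic lives in the Simulink/Stateflow model, and the fault conditions below are simplified assumptions):

# Illustrative Python analog of the alarm subsystem behavior described above.
# In the tutorial itself, this logic is modeled graphically in Simulink/Stateflow.
class AlarmSubsystem:
    def __init__(self):
        self.audible_alarm = False
        self.visual_alarm = False

    def update(self, power_ok, system_on, ventilating):
        # Simplified fault conditions: power loss, or the unit switched off mid-ventilation
        fault = (not power_ok) or (ventilating and not system_on)
        self.audible_alarm = fault
        self.visual_alarm = fault

alarm = AlarmSubsystem()
alarm.update(power_ok=True, system_on=False, ventilating=True)  # machine turned off during ventilation
print(alarm.visual_alarm)  # True -> fault detected (compare with Fig. 9 below)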

Consistent with the above, the Simulink Model Browser shows the following:

Fig.5: System Decomposition

At the highest level, we design the following interface and major entities comprising the system.

Fig.6: High-level Block Diagram & User Interface Simulation

DESIGN LOW LEVELS

Next, we will dig into the yellow box of Fig. 6 and create a Control Logic MCU, Alarm Subsystem, Pneumatic Subsystem, and Pneumatic Controller.

Fig. 7: 2nd Level Decomposition

We continually decompose the system until the constituent parts are sufficiently simple and thus no longer divisible. Additionally, the primary control logic implementing the requirements of Fig. 2 is shown in Fig. 8.

Fig. 8: 3rd Level Decomposition – Control Sequence of MCU

For tutorial succinctness, we’ll stop at the third level.

SIMULATION RESULTS

Of course, testing the model is paramount. Testing phases before and after automatic code generation are commonly employed. For now, we will test the model itself. As shown in Fig. 9, the system was able to detect fault conditions when the ventilator was turned off abruptly during ventilation mode (shown visually with the red alarm indicator when the system ON input turned red as well).

Fig. 9: User Turns Machine Off during Patient Ventilation – Fault Detected

By running the simulation while the state machine window is open, the control logic's state machine can be debugged using Simulink's Stateflow. The blue boxes represent the state machine's current state.

Fig. 10: Video Debugging & simulation of state machine

For specific plots pertaining to the patient’s flow, pressure, and volume, Fig. 11 shows the simulated readings from the ventilator’s sensors.

Fig. 11: Image of MBSE-designed Ventilator Patient Simulation

And, we conclude testing with a video showing the readings in live-action.

Fig. 12: Video of MBSE-designed Ventilator Patient Simulation

AUTOMATIC C/C++ CODE GENERATION

As mentioned, a prodigious added benefit of MBSE is automatic code generation, which can reduce product development costs by ~55% [1]. Code generation is an excellent topic for a future blog article.

NEXT STEPS & ENHANCEMENTS

Verification (and coverage analysis) should be performed as well. An example test numerically integrates the output flow over each cycle to ensure the tidal volume is truly delivered to the patient within an error margin. An assertion can be tied to this condition for every breathing cycle (often referred to in computer science as an invariant); a sketch follows.
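A hedged sketch of that kind of check, numerically integrating a flow signal over one breathing cycle and asserting the delivered tidal volume lands within a margin; the flow signal, sample period, target, and tolerance below are illustrative assumptions:

# Illustrative verification check: integrate flow (L/s) over one breathing cycle to
# estimate delivered tidal volume, and assert it matches the target within a margin.
def delivered_tidal_volume_liters(flow_lps, dt_s):
    return sum(f * dt_s for f in flow_lps)  # simple rectangular integration

def assert_tidal_volume(flow_lps, dt_s, target_l, tolerance=0.10):
    delivered = delivered_tidal_volume_liters(flow_lps, dt_s)
    assert abs(delivered - target_l) <= tolerance * target_l, (
        "Tidal volume %.3f L outside +/-%.0f%% of %.3f L"
        % (delivered, tolerance * 100, target_l))

# Example: constant 0.5 L/s inspiratory flow for 1 s, sampled every 10 ms -> ~0.5 L delivered
flow = [0.5] * 100
assert_tidal_volume(flow, dt_s=0.01, target_l=0.5)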

FOOTER COMMENTARY

1 Implementing a commercial product generally requires several additional steps, such as design reviews. In the context of medical products, risk management processes and quality management processes, for example, need to be in place before, during, and after the product development process. ISO 14971 and ISO 13485 standards, although not directly discussed in this article, would, therefore, be required.

2 Depending on the product’s safety classification (design assurance), the MBSE elements (and the design tool itself) would need to be qualified, which means proven to be developed with a sufficiently high confidence level for meeting the product’s requirements.

REFERENCES

[1] MBE Cost Benefits (PTC)

[2] MBSE Primer

ABOUT SIMPLONICS

Simplonics is a leading electronics design consulting and supply firm serving companies across the United States in highly regulated industries, including medical devices. Our common services include electrical hardware design, software development, and sensor system design.

Our customers tell us horror stories about ex-vendors who delivered defective systems, resulting in safety concerns, vulnerable software, and other issues. We’ve successfully helped companies out of these situations by consistently delivering superior quality and simultaneously reducing recurring costs by more than 20%.

Are you working on a technical challenge and want a fresh perspective? Feel free to reach out by visiting the Contact Us page and submitting your contact information. Let’s build a great relationship for the future.

Tutorial: How to Detect Cancer Cells with Machine Learning?

The medical device industry has been advancing towards solving diagnostic and treatment problems with machine learning (ML), a data prediction technique. Therefore, investing effort to understand this multidisciplinary area for your own application can help you maintain an edge over your competition. To that end, we'll cover a relevant ML case study by first defining necessary terms, going through some theory, and implementing a Python coding example.

Conventional methods of data prediction use statistical techniques, such as regression, to classify data points or predict future values. The increase in computational power has made running more sophisticated, computation-intensive algorithms practical. In this tutorial, we will study and implement a Support Vector Machine (SVM) to categorize whether tumor cells are cancerous by studying their features; the principles of this tutorial apply broadly to classification problems.

DEFINITIONS

Let’s start with a few definitions. Two common types of machine learning technique are supervised learning and unsupervised learning.

Supervised Learning – When the correct categories (labels) pertaining to the input data points are known. The Support Vector Machine (SVM) we study here is an example.

Unsupervised Learning – Occurs when the output targets aren’t known in the given problem. We would analyze commonalities among the data itself to find groupings of similar data together.

Features (Inputs) – Specific inputs that map to an output class target. A cell's mean area and mean smoothness are the two examples we'll study here.

Target (Output) – “Correct” answers (determined classes) pertaining to the specific feature inputs.

Training Set – Subset of data used to build machine learning model. These data points are not used in the testing stage.

Test Set – Subset of data used to determine accuracy of model. These data points are not used in the training stage.

Class – Categories to which the input features pertain.  In this example, Malignant and Benign are the two possible classes for tumor cells. Other applications may have more than two possible classes.

Inference – Using the trained ML model to deduce to which class a test input pertains.

Margin – The distance between the closest points of different classes in the context of a Support Vector Machine. The support vectors are simply the points closest to the opposing class. During training, the support vectors are computed to determine the hyperplane (in sufficiently high dimensions). Fortunately, after training, almost all data points can be discarded and only the support vectors retained, resulting in significant storage space reductions.

THEORY

Support Vector Machines (SVMs) are a type of supervised learning algorithm that attempts to find a dividing line/curve (or hyperplane in higher dimensions) so that unknown data points can be categorized into the appropriate class. It's best to illustrate with some diagrams.

Fig.1: Data points with Features 1 and 2 Plotted

Clearly, a line between the data points for the two classes (X’s and O’s) would serve as a reasonable divider for the data points. But, what’s the equation of that line? And what does it look like in higher dimensions?

Fig.2: Data points with Features 1 and 2 Plotted after SVM Invocation

The goal is to find where to draw the thick red line above in Fig. 2 so as to maximize the margin. The data points (X's and O's above) closest to the thin red lines are called the support vectors.

The example above appears relatively simple and may not require the SVM technique. So, why is SVM useful? It becomes useful when the data points are not linearly separable, meaning they cannot be separated by a single linear decision surface. Because we are effectively solving for an equation that separates the data, transforming a low-dimensional, non-linearly separable problem into a higher-dimensional, linearly separable one simplifies the solution. The function used for this purpose is a kernel function, which transforms the input data into higher dimensions. Common kernels include:

1). Linear (‘linear’)

2). Polynomial (‘poly’)

3). Radial Basis Function (‘rbf’)

Mathematically, we can write the SVM training equation, according to [1]:
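For reference, the standard soft-margin dual formulation, which is most likely the form Eq. [1] takes in [1], is:

\max_{\lambda} \; \sum_{i=1}^{n} \lambda_i \;-\; \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \lambda_i \lambda_j \, t_i t_j \, K(\mathbf{x}_i, \mathbf{x}_j)
\qquad \text{subject to} \quad 0 \le \lambda_i \le C, \quad \sum_{i=1}^{n} \lambda_i t_i = 0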

In Eq. [1] above, K is the kernel function, x is a matrix containing the inputs we'd like to train on, t represents the targets, and the second (kernel) term is what allows the data to become linearly separable in higher dimensions. We'll use the Sklearn [2] library in Python to solve this equation for us. Other packages, such as cvxopt [3], use a form similar to Eq. [1], which has the same form as a Lagrange multiplier solution.

IMPLEMENTATION

1. Import Libraries

First, we import the sklearn, numpy, matplotlib, and math libraries into our Python program.

from sklearn import svm
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
import math 

2. Load Data

Secondly, we'll load the breast cancer dataset and also calculate the number of data points we have; the dataset contains 569 samples.

dataset = load_breast_cancer()
sampleSize = dataset.data.shape[0]        # sample size (569)
trainSize = math.floor(0.9 * sampleSize)  # 90% of the dataset is used for training;
                                          # the remaining 10% is used for testing

3. Select Features

Next, we need to select a couple features to analyze.

# Choose the fourth and fifth columns as features 1 and 2, respectively
# (off by one because of zero indexing)
feat1Index = 3
feat2Index = feat1Index + 1
feat1Name = dataset['feature_names'][feat1Index]
feat2Name = dataset['feature_names'][feat2Index]

4. Structure Data for SVM Input

Additionally, we’ll have three sets of variables housing our data to make the example clear. First, we’ll get all of the data, then we’ll designate about 90% of our data for training, and the rest will be reserved for testing. For analysis and plotting purposes later, we further split the data depending on whether the target is malignant or benign (XMal and XBen, respectively).

def sliceData(dataset, start, end, feat1Index, feat2Index):
    # Slices the feature and output arrays based on indices
    f1 = dataset.data[start:end, feat1Index]
    f2 = dataset.data[start:end, feat2Index]
    y = dataset.target[start:end]  # the outcomes ("correct answers")
    return f1, f2, y

def separateFeaturesViaClasses(f1, f2, y):
    # Returns ONE (1) input feature matrix covering both target classes, plus
    # TWO (2) separate feature matrices, one per target class
    assert len(f1) == len(f2) == len(y)
    # Create scatter-plot inputs for each class
    X = [[f1[i], f2[i]] for i in range(len(f1))]
    XBen = np.array([X[i] for i in range(len(f1)) if y[i] == 1])  # Class 1 - Benign
    XMal = np.array([X[i] for i in range(len(f1)) if y[i] == 0])  # Class 2 - Malignant
    return X, XBen, XMal

# All data
[f1, f2, y] = sliceData(dataset, 0, sampleSize, feat1Index, feat2Index)
X, XBen, XMal = separateFeaturesViaClasses(f1, f2, y)

# Training data
[f1Tr, f2Tr, yTr] = sliceData(dataset, 0, trainSize, feat1Index, feat2Index)
XTr, XBenTr, XMalTr = separateFeaturesViaClasses(f1Tr, f2Tr, yTr)

# Test data
[f1Te, f2Te, yTe] = sliceData(dataset, trainSize, sampleSize, feat1Index, feat2Index)
XTe, XBenTe, XMalTe = separateFeaturesViaClasses(f1Te, f2Te, yTe)

5. Invoke SVM Algorithm

To have Python solve Eq. [1] for us, we’ll need to provide our training data set and correct target labels.

# Fit the input parameters to an SVM model. Assume a linear kernel.
# We only provide the training data so that some is left for testing.
clf = svm.SVC(kernel='linear')
clf.fit(XTr, yTr)

6. Analyze Results

We'll define the accuracy as the number of correct test outputs divided by the total number of test attempts. We'll see that we got 2 samples wrong out of about 60 test attempts.

# Now we perform the inference step and analyze the accuracy results
modelOutput = clf.predict(XTe)
correctOutput = yTe  # equivalently y[trainSize:]
result = modelOutput == correctOutput
# Get the indices of the misclassified samples
wrongIndices = [i for i in range(len(result)) if result[i] == False]
xWrong = np.array(XTe)[wrongIndices]
accuracy = sum(result) / len(result)
accuracyStr = "Accuracy is: " + str(round(accuracy * 100, 2)) + "%"
print(accuracyStr)

7. Plot Data

Lastly, we plot our data. We also draw the SVM decision line by extracting its slope and intercept from the trained model.

# Calculate the SVM decision line for plotting:
# w0*x + w1*y + intercept = 0  =>  y = -(w0/w1)*x - intercept/w1
w = clf.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(650, 700)
yy = a * xx - (clf.intercept_[0] / w[1])
plt.plot(xx, yy)

# Plot data points
plt.scatter(XBenTr[:,0], XBenTr[:,1], label='Benign - Train Data',
            marker='o', color='blue')
plt.scatter(XBenTe[:,0], XBenTe[:,1], label='Benign - Test Data',
            marker='o', color='orange')
plt.scatter(XMalTr[:,0], XMalTr[:,1], label='Malignant - Train Data',
            marker='x', color='blue')
plt.scatter(XMalTe[:,0], XMalTe[:,1], label='Malignant - Test Data',
            marker='x', color='orange')
plt.scatter(xWrong[:,0], xWrong[:,1], label='Incorrect Test Outputs',
            marker='+', color='red')
plt.legend()
plt.xlabel(feat1Name)
plt.ylabel(feat2Name)
plt.title("Support Vector Machine Example for Cancer Cell Classification")
plt.text(400, 0.22, accuracyStr, bbox=dict(facecolor='red', alpha=0.5))
plt.show()

Below, we have the plot from our work. We achieved about a 96.5% accuracy.

Fig.3: SVM Model Performance – 96.5% Accuracy

NEXT QUESTIONS

In production, we would optimize our accuracy further and consider the computation resources for the training and inference stages. Here are some questions to consider.

  1. How does varying the kernel function affect performance?
  2. How would the code example be modified to accommodate higher dimensions, such as three features? Would that change improve accuracy?
  3. What features are optimal for the above problem?
  4. How does the training and inference time grow with the number of features? Does this agree with theoretical estimates?
  5. What’s the optimal value of gamma and C, as defined in [2]?

REFERENCES

[1] – Machine Learning An Algorithmic Perspective. 2nd Edition. Stephen Marsland.

[2] – https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html

[3] – https://cvxopt.org/

Let us know when you’d like to discuss how the learning in this tutorial may be applicable to the technical problem you’re trying to solve. Our fresh view of your problem may give you a different, valuable perspective to consider.

How to Design Medical Devices with Dynamic Regulations?

With advances in the technology industry, instantaneous data access is not only ubiquitous, but also expected in the medical device industry. Health delivery and monitoring services are no longer solely in healthcare facilities; these services are becoming more pervasive in the patients’ homes, shifting the medical treatment paradigm and creating opportunities for new disruptive electronic technologies. The shift also engenders challenges to prevent compromise or misuse of patient data. In this blog, we’ll show how one can begin designing a medical device in the context of certain regulations to minimize rework between prototype and production design builds.

Consideration #1: What is a process for designing medical device electronics?

Systems engineering, which consists of understanding and defining the system requirements (including regulatory and customer expectations), is paramount to the design process. Federal regulations for medical devices (and even aerospace systems) regard traceability as essential. Traceability is a relationship mapping between two or more products of the development process. Increased traceability rigor is required as the criticality of the medical device increases (Class III vs. Class I, for example).

We traditionally design systems from the top down, which means that we look at the overall system perspective and consider all the available requirements from all stakeholders. Requirements are then allocated to appropriate software or hardware designations, broken down into increasing specificity, and implemented. Validation asks, "Do you have the right requirements?" while verification asks, "Does your implementation meet the requirements?" This process is called V&V (Validation & Verification).

Fig.: Requirements Validation vs. Requirements Verification over Time

Design and verification occur in opposite directions:

Fig.: Design order (top-down, highest-level subsystem modules first) vs. verification order (bottom-up, lowest-level subsystem modules first)

Design is performed from the top down, which means that high-level software modules are defined before lower-level ones. Lower-level modules are combined to make up one or more higher-level modules. Lower-level modules are verified first to mitigate integration risks. It's recommended to perform requirements-based testing before system-level testing.

Consideration #2: What are some landmines to avoid when designing new medical devices?

Functional requirements, for example those pertaining to customer expectations, as well as regulatory requirements, must be considered simultaneously in the design process. Even if the initial goal is to develop a prototype demonstrating the principle for your stakeholders, neglecting the holistic system perspective by failing to ask who all of my stakeholders are and what they care about can lead to rework. This is because hardware or software components chosen to meet the functional requirements may not meet the regulatory ones. So, for example, embedded computers (and other components) may have to be replaced in the production design. How much additional effort would we inadvertently append to the overall hardware and software development effort? 20%? 40%? From our experience, the cost delta can range from 30% to 50%.

Ephemeral goals without a long-term design strategy context likely increase overall development cost. It's a common observation we've seen across the myriad companies working in this space. Let's study how certain regulatory rework can be avoided for a medical device example by considering all critical elements sufficiently early in the design process.

Consideration #3: What are some example regulations to consider?

According to the Food and Drug Administration (FDA), regarding medical records, Title 21: Food & Drugs – Part 21 – Protection of Privacy (21.71(5)) states that a “record is to be transferred in a form that is not individually identifiable.” In another statement, 21.70(a) states that “Names and other identifying information are first deleted and under circumstances in which the recipient is unlikely to know the identity of the subject of the record.” [1]

To paraphrase, transmission and storage of customer records must be performed in a way that protects patient confidentiality. Simply transmitting the patient’s name and storing the name and data in one place won’t put federal regulators at ease. Suppose we’re designing a subsystem called the Diagnostic Reporting Function (DRF) of our next medical device, and we want to comply with the Part 21 regulations above. Perhaps we would enforce a requirement on our medical device, such as:

Diagnostic Reporting Function (DRF) shall generate a unique identifier for each heart-rate monitor reading. [REQ-1]

Diagnostic Reporting Function (DRF) shall transfer the generated unique identifier along with the recorded heart-rate monitor reading to the Transmission Endpoint System (TES). [REQ-2]

Fig.: Medical device (Diagnostic Reporting Function) connecting over the Internet to the Transmission Endpoint System (Unique ID/Patient Isolator) and on to the customer/patient

We won't get into too much detail on the figure above, but it basically underscores how one may decouple identifiable patient information from patient medical data to make it harder for unauthorized individuals to view sensitive information. Note that transferring patient data within an organization may be acceptable if the Transmission Endpoint System (TES) is operated by a single organization and/or appropriate vendor risks are mitigated [1].

We have now decoupled health records from patient identification, a key regulatory requirement. Clearly this requirement impacts the software development of our example medical device. We also have other regulatory requirements to mitigate other cyber risks. What about encryption? Sending the output of the medical device in plain text (not encrypted) technically satisfies REQ-1 and REQ-2 above but does not satisfy other regulatory requirements or guidelines. So, in the spirit of the V&V figure described previously, we have not yet completed our requirements validation process, since we need more requirements coverage.

Consideration #4:  What questions should we answer in order to develop a more secure medical device?

As an illustration, Remote Patient Monitoring (RPM) patients expect an intuitive user interface (UI) for their monitoring products. And, for device manufacturers to deliver on this expectation safely, reviewing the overall system architecture is paramount to ensuring that security and privacy vulnerability risks are mitigated, according to a recent National Institute of Standards and Technology (NIST) article titled Securing Telehealth Remote Patient Monitoring Ecosystem [2].

Based on the NIST Cybersecurity Framework Version 1.1 [3], the same article ([2]) defines a set of Desired Security Characteristics guidelines, pertinent to medical device designers. Let’s pose the guidelines as questions to frame additional requirements for our medical device example (Note that these may be more relevant to the TES mentioned above):

  • Identify
    • How are network assets identified and managed?
    • What are the vendor risks, such as cloud providers or technology developers?
  • Protect
    • How are users identified and authorized appropriately with access control?
    • Is least privilege enforced for every user? 
    • Are data integrity and encryption (including data at rest) enforced?

The Protect category especially echoes the principles outlined in the excellent book Zero Trust Networks: Building Secure Systems in Untrusted Networks by Evan Gilman and Doug Barth [4].

  • Detect
    • What are the ways in which cyber threats are detected?
    • How is continuous security monitoring practiced?
    • In what ways are user account behaviors studied with analytics?
    • What type of information do security logs contain?
  • Respond
    • When a cyber threat event is discovered, what are the processes to limit the extent of the damage?
  • Recover
    • After a cyber incident, what are the processes to restore systems affected by the event?
    • How does the cyber incident recovery process handle external and internal parties?

Consideration #5:  What is the impact of missing regulatory requirements in the design process?

Through the guidelines above, we can determine useful considerations to ensure systems are resilient against the most common forms of cyber threats. Neglecting security best practices puts patients (and thus company reputation) in peril; in fact, there is a hacker attack every 39 seconds on average, according to Michel Cukier of the University of Maryland [5]. To ensure the medical device we're designing is better prepared, let's look at one question from the list above – Are data integrity and encryption (including data at rest) enforced?

Data integrity means that the data has not been tampered with. Common techniques to check that data has not been altered (mistakenly or intentionally) include checksums and cryptographic hashes; SHA-256 is an example.

So, what might some ancillary requirements be?

Some examples, though not exhaustive, could be:

DRF shall transmit the computed SHA-256 of the message payload to TES. [REQ-3]

DRF shall encrypt all outbound communication to TES with the Transport Layer Security (TLS) version 1.2 or higher standard using the Advanced Encryption Standard (AES)-256 cipher. [REQ-4]
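A minimal sketch of the integrity portion (REQ-3), using Python's standard hashlib; the payload fields below are assumptions for illustration, not a defined DRF message schema:

import hashlib
import json

# Illustrative payload only; a real DRF message would follow a defined schema.
payload = json.dumps({"reading_id": "a1b2c3", "heart_rate_bpm": 72}).encode("utf-8")

digest = hashlib.sha256(payload).hexdigest()  # sent to the TES alongside the payload
print(digest)

# The receiving side (TES) recomputes the digest and compares it to detect tampering
assert hashlib.sha256(payload).hexdigest() == digest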

If we had developed our medical device prototype without considering the full system view of the end product (e.g., only REQs 1-2), we might have, for example, chosen a microcontroller unit (MCU) without the relevant encryption capability, resulting in prodigious hardware and software rework which, in our experience, may increase costs by 30-50% or more.

Fortunately, there is a way to mitigate these types of technical risks. And, we can help you accomplish both your short-term and long-term objectives through these disciplined processes. There are of course many more risks than we can ever write in this tutorial, and as you develop your product, we can help you avoid them. If you’d like us to brainstorm with you or if you’d like us to cover a future blog article topic, you’re welcome to comment or send us a message on our Contact Us page.

References:

[1] FDA Title 21: Food & Drugs – Part 21

[2] Securing Telehealth Remote Patient Monitoring Ecosystem       https://www.nccoe.nist.gov/sites/default/files/library/project-descriptions/hit-th-project-description-final.pdf

[3] NIST Cybersecurity Framework Version 1.1 https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.04162018.pdf

[4] Zero Trust Networks: Building Secure Systems in Untrusted Networks by Evan Gilman and Doug Barth https://www.oreilly.com/library/view/zero-trust-networks/9781491962183/

[5] Hackers Attack every 39 Seconds (NBC News) http://www.nbcnews.com/id/17034719/ns/technology_and_science-security/t/hackers-attack-every-seconds/

Tutorial: How to Solder PCBs?

Disclaimer: This tutorial is for simplified demonstration/educational purposes and not intended for production applications. We cannot be held responsible for any misuse, errors, damages, or losses. Use at your own risk. Production implementations of our products follow much more stringent quality control processes than those shown in this tutorial, per our clients' requirements.

Printed circuit boards (PCBs) are ubiquitous – they're in cell phones, TVs, computers – the list goes on. When companies are developing their medical device, home automation, or even aerospace application, prototyping the concept for customer validation prior to large-scale manufacturing is critical. In this scenario, hand-soldering becomes handy for smaller quantities and for functional validation.

In this tutorial, we’ll show how PCBs can be soldered, principles we hope will help you get validation from your clients, just like how the circuit shown below helped our client obtain critical feedback from his customers. Speed and quality are key in winning in today’s competitive markets.

It's best to start by defining our goal, which is to solder all the through-hole and surface-mount device (SMD) components for the PCB shown below. The former components penetrate all layers of the board, while the latter reside on only one side of the board.

Assembled PCB (Our Goal)

Our approach will be to solder the SMD devices first. SMD soldering can be accomplished with solder paste, which, when heated, will reflow and solder our components. For this step, it is possible to use a metal cutout, referred to as a stencil, that applies the solder paste at the SMD pad locations (Step 1A). In this tutorial, we'll show both approaches; it is possible to get by without a stencil, though not using one is usually a bit messier at the beginning and requires some rework to remove excess solder (Step 1B).

Processing (Batching) Several Boards Simultaneously

1A. Use Stencil to Apply Solderpaste (Optional)

We can choose to apply solder paste by using a stencil, which is a metal cutout that aligns with our PCB.

PCB Stencil

Next, we place the stencil on top of the PCB and align the cutout exactly with the SMD pads. Taping the PCB and the stencil in place is helpful.

PCB Stencil Aligned on top of PCB (SMD Pads aligning through stencil hole cutout)

Now, we squeeze the solder paste from our syringe and apply it on the stencil.

Solder paste Application

Additionally, we spread the solder paste with a credit-card-like applicator.

Solder paste spread with card

Finally, we remove the stencil and reveal the PCB with the applied solder paste.

PCB with Solderpaste

1B. Apply Solder Paste without Stencil

Even without a stencil to cleanly apply solder paste, we're going to show that it's possible to build a quality circuit. For the purposes of this tutorial (and to prove our point), we made a mess, as shown below.

Solder paste applied without stencil use

2. Solder SMD Components with Heat Gun

For this video, we will demonstrate SMD soldering without stencil use (from Step 1B). Even with this method, it is still possible to complete the board, although solder bridge defects are more likely and need to be reworked in the next step. It's also possible to solder SMD components with a soldering iron; some elements of that process are covered in Step (3) below.

SMD Soldering with Heatgun (Video Accelerated)

3. Solder Through-hole Components & Remove Solder Bridges

In this step, the through-hole components are soldered. The PCB metal contact should be heated first with the soldering iron and then the pin of the component, preventing what are referred to as “cold solder joints.” In this section, we'll focus on how to remove solder bridges. The key is to use soldering flux, a chemical that helps remove the metal oxides generated by the high heat; this lowers the soldering temperature and time, thereby reducing the probability of solder bridges or thermal damage to the board electronics.

Solder Bridge Removal

4. Clean-up

Soaking the PCB in 99% isopropyl alcohol for a few seconds and scrubbing the circuit with an anti-static or conductive brush helps clean the PCB. (A regular brush may induce static electricity during cleaning and potentially damage semiconductors on the PCB.) The clean-up step removes the solder flux applied in the previous steps. Since solder flux is corrosive, failing to remove it may damage the PCB over time.

Isopropyl Alcohol for Electronics Cleaning
Conductive Brush for Electronics Cleaning

We demonstrate this cleaning process.

PCB Cleaning with Isopropyl Alcohol & Non-ESD Brush

And finally, wiping the board with anti-static wipes, such as Kimwipes, accelerates drying and prevents a chalk-like residue from forming.

PCB Drying with Non-ESD Wipes

5. Functional Testing

We need to conduct a final functional test after the clean-up step to ensure there are no clear defects. One reason is that during clean-up, solder balls may come loose and get stuck between adjacent, tightly spaced pins, causing a short (similar to a solder bridge). We caught this issue a few times while making the circuits for this blog and were sure to remove them. The pump used for functional testing in the next video is from one of our industry partners, Simply Pumps!

Functional Test

6. Visual Inspection (Final)

Lastly, we ensure the board is aesthetically sound. Any observed defects are noted and corrected. With the process outlined in this tutorial, we have successfully soldered several boards that pass functional test. Happy soldering!

Boards Batch Successfully Soldered & Passed Functional Verification


Tutorial: Network Protocol Basics

In the previous tutorial, we discussed how to load a mobile app onto an Android phone for basic testing. In this tutorial, we’ll go over the basics of network systems and protocols to better understand how that app works. This understanding will help us customize a solution for connecting your system to others. If you have questions about your own customization, please reach out and we’ll provide a complimentary consultation to see how we can help.

Due to great advances in technology such as the internet and computers, our lives are more connected than ever. For almost any website you browse, your computer connects to another computer and both sends and receives data. This is a simplified view:

Fig. 1 – Computer/Server Interaction

Right before you began reading this page, your computer or mobile device connected to our server and exchanged bursts of data, where each burst looks like Fig. 3. Let’s dig deeper. Every computer on the internet has an Internet Protocol (IP) address, which distinguishes it from the rest of the web. Currently, simplonics.com has an IP address of 192.0.78.128, which my computer uses to connect to it. Another important concept is the port abstraction: ports are operating system interfaces that work much like mailboxes. When you send mail to an apartment building, whose street address is similar in concept to an IP address, you also have to specify the apartment number, which is analogous to a port in the networking world. HTTPS, which is HTTP over TLS, uses port 443 by convention.

By “pinging” simplonics.com from a command-line terminal, as shown below, we’re able to find its IP address:

Fig. 2 – Ping server for IP address

But how does your computer know that simplonics.com corresponds to 192.0.78.128? Great question! It turns out that there are servers on the web called Domain Name System (DNS) servers that map domain names (ex: simplonics.com) to IP addresses. There is commonly a many-to-one mapping between domain names and IP addresses. Common reasons include reverse-proxying for security enhancements and load balancing, which high-load services like Google use to distribute web traffic based on geographical considerations. These topics are advanced and out of scope for this tutorial, but quite interesting.
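As a quick illustration, the same name-to-address lookup can be done programmatically. Below is a minimal C# sketch using the .NET System.Net.Dns API (the domain is simply the example used throughout this tutorial):

using System;
using System.Net;

class DnsLookupDemo
{
    static void Main()
    {
        // Resolve a domain name to its IP address(es) via the system's DNS resolver.
        // A single domain may return multiple addresses (e.g., for load balancing).
        IPAddress[] addresses = Dns.GetHostAddresses("simplonics.com");
        foreach (IPAddress ip in addresses)
        {
            Console.WriteLine($"simplonics.com -> {ip}");
        }
    }
}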

Fig. 3 – OSI layering with packet representation

OK, so we are building an understanding of Fig. 3. The IP protocol gets data from one computer to the other, and the Transmission Control Protocol (TCP) is used to help ensure data reliability.

How does TCP ensure data reliability?
Well, this transport layer protocol is complex, but in a nutshell, the sender retransmits data whenever the receiver does not send an acknowledgement for a certain range of numbered packets.

Alright! We successfully sent a packet through the internet with this understanding! What’s next?
Transport Layer Security (TLS) is commonly placed at the presentation layer of the OSI model. At the sender side, TLS encrypts each packet using an algorithm called a cipher, and at the receiver side, TLS decrypts each packet (AES is one example cipher, while hash algorithms such as SHA-256 are used for message integrity). Cryptographic algorithms come in two primary variants: symmetric and asymmetric key algorithms. The former uses the same key for encryption and decryption, whereas the latter (public-key cryptography) uses a public/private key pair. On the web, TLS uses asymmetric cryptography during the handshake to establish a shared symmetric key, which then encrypts the bulk of the data.

Lastly, we’ve arrived at the Hyper-Text Transfer Protocol (HTTP), which is an application layer protocol. This protocol sends the request to “GET” a webpage, for example.

So, in summary (a minimal code sketch tying these steps together follows the two lists):

A Sender:
1. Finds the IP address of the target server with DNS
2. Forms the HTTP request (or response)
3. Encrypts the application data with TLS
4. Places the encrypted data in a TCP segment with additional information (ports, sequence numbers)
5. Wraps the TCP segment in an IP packet that includes the source and destination IP addresses
6. Sends the packet to the internet

A Receiver (the target server):
1. Receives the packet
2. Decrypts the packet
3. Extracts the HTTP request
4. Processes the request and sends data (similar in fashion to the sender’s steps above)
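Here is the minimal C# sketch referenced above: a single HttpClient call exercises every step in the two lists (DNS lookup, TCP connection on port 443, TLS handshake, and the HTTP GET). The URL is illustrative.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class WebRequestDemo
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // One call performs: DNS lookup, TCP connect (port 443), TLS handshake, HTTP GET.
        string body = await client.GetStringAsync("https://simplonics.com/");
        Console.WriteLine($"Received {body.Length} characters from the server.");
    }
}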

This is a high-level view of how the web works, and with this understanding, we are able to build pretty complex, yet modular systems. Please like this page if you found this useful and let us know your thoughts on how we can improve this article.

Cellular Product Regulatory Pathway Whitepaper

PREFACE

Building a new cellular-based product requires various technical and regulatory considerations. We will examine possible product cellular certification paths, requirements, and suggestions useful for project planning; specifically, we will examine the subject in the following order:

  • Introduction
  • General Regulations
  • Relevant LTE Bands
  • Carrier Comparisons
  • Specific Carriers’ Certification Flow
  • Regulatory Cost Analysis
  • References

INTRODUCTION

To bring a successful cellular product to market, three pillars must be considered simultaneously during project planning and execution – development, regulatory, and manufacturing. This whitepaper will address the regulatory portion.

Figure 1: Three Pillars of Project Execution

Common questions regulatory certification groups will ask:

  1. What chipset/module is the device based on? What is the FCC ID and (if applicable) PTCRB certification?
  2. Will the device be used fewer than 20 cm from a person?
    1. If yes, Specific Absorption Rate (SAR) tests are required; otherwise, they are not.
  3. Does the device have an internal or external antenna?
    1. Is the cabling to the antenna less than 20 cm?
      1. If yes, carrier OTA tests are required.
  4. How many SIM cards are present? Are they removable or soldered?
  5. What technology (i.e. LTE CAT M1) is used? What bands are enabled?
    1. A module can be programmed to only enable certain bands of interest, helping limit regulatory testing by targeting the optimal market.
  6. Does the device use special command software, such as AT commands for settings or configuration?

GENERAL REGULATIONS

  • FCC (All Carriers)
    • If using a pre-approved module, submission for an FCC cert or ID is not required. Instead, the FCC requires the manufacturer to put a label specifying which pre-certified module is in the device.
    • Unintentional emission tests are still required and can be performed by a third-party lab. No submission of the test results to the FCC is required. This is referred to as the Supplier’s Declaration of Conformity (SDoC) [1].
  • PTCRB (AT&T and T-Mobile, not Verizon carriers)
    • With a pre-certified module, there are the following tests:
      • OTA (TRP, TIS)
      • RSE
      • U-SIM
    • Additional requirements include OTA and cybersecurity considerations
    • A manufacturer submitting for a PTCRB certification will still need to register the device on the PTCRB website, even if using a pre-approved chipset
  • Carrier specific testing (subsequently described)

Important consideration:

Not all carriers mandate PTCRB. For example, Verizon does not require PTCRB and runs its own certification process known as the Open Development Initiative (ODI). This process can be customized based on the number of simultaneous radios (referred to as coexistence); the cost can be substantial for complex designs. However, if the end goal is to operate on AT&T or T-Mobile, PTCRB is required.

Additionally, every three years, a PTCRB recertification is required; however, an abbreviated effort is allowed (typically one-third the original cost) if the design is unmodified. If the design is modified, such as due to an antenna change, then an engineering change order (ECO) is required, and the cost is still typically a fraction of the original certification cost.

RELEVANT LTE BANDS

Band | Uplink (Lo) | Uplink (Hi) | Downlink (Lo) | Downlink (Hi)
2    | 1850 MHz    | 1910 MHz    | 1930 MHz      | 1990 MHz
4    | 1710 MHz    | 1755 MHz    | 2110 MHz      | 2155 MHz
5    | 824 MHz     | 849 MHz     | 869 MHz       | 894 MHz
12   | 699 MHz     | 716 MHz     | 729 MHz       | 746 MHz
13   | 777 MHz     | 787 MHz     | 746 MHz       | 756 MHz
14   | 788 MHz     | 798 MHz     | 758 MHz       | 768 MHz
17   | 704 MHz     | 716 MHz     | 734 MHz       | 746 MHz
25   | 1850 MHz    | 1915 MHz    | 1930 MHz      | 1995 MHz
26   | 814 MHz     | 849 MHz     | 859 MHz       | 894 MHz
29   | N/A         | N/A         | 717 MHz       | 728 MHz
30   | 2305 MHz    | 2315 MHz    | 2350 MHz      | 2360 MHz
41   | 2496 MHz    | 2690 MHz    | 2496 MHz      | 2690 MHz
48   | 3550 MHz    | 3700 MHz    | 3550 MHz      | 3700 MHz
66   | 1710 MHz    | 1780 MHz    | 2110 MHz      | 2200 MHz
71   | 663 MHz     | 698 MHz     | 617 MHz       | 652 MHz
Table 1-1: 3GPP E-UTRA Operating Bands based on section 5.5 of [2]

CARRIER COMPARISON

A device operating on a specific carrier does not necessarily need all the possible bands, since the exact chosen bands depend on the operating location.

                      | AT&T                            | T-Mobile             | Sprint     | Verizon
PTCRB required        | Yes                             | Yes                  | No         | No
Possible 4G LTE Bands | 2, 4, 5, 12, 14, 17, 29, 30, 66 | 2, 4, 12, 48, 66, 71 | 25, 26, 41 | 2, 4, 5, 13, 66
Table 1-2: Carrier Comparison

SPECIFIC CARRIERS’ CERTIFICATION FLOW

AT&T:

Figure 2: AT&T Certification Flow

TRENDI testing (Testing Requirements for Network-Ready Devices for IoT) is an AT&T process. This is a 24-hour test in which the carrier sends test SMS messages to the device. The user is expected to interact with the cellular device to allow the carrier to baseline device performance and data usage.

Expected Behavior:

  • Ensure the device does not retry aggressively when it is unable to reach the network.
  • Routine device resets are permitted; however, steady-state behavior must not exceed one reset every four hours.
  • The number of authentication requests must be fewer than 19 per hour; the ideal target is fewer than 6.

Verizon:

Figure 3: Verizon Certification Flow

T-Mobile:

Figure 4: T-Mobile Certification Flow

REGULATORY COST ANALYSIS

The following variables impact the certification cost:

  • Technology (CAT M1 vs CAT M1 + NBIoT)
  • Specific Bands
  • Supported Carriers
  • Multiple Radios
  • Fallback Mechanism (if LTE not available)
  • Antenna Cable Length

Carriers and specific bands impact the certification costs. Also, integrating multiple radios (Bluetooth, Wi-Fi, Cellular, etc.) requires coexistence testing to limit radio crosstalk and increases the certification cost. Note that the exact chipset chosen does not significantly impact the regulatory cost if it is already pre-certified and used as an integrated module.

Pre-certification could be used during development to mitigate the final certification effort.

In general, the rough cost (on the low end) for the simplest cellular regulatory certification is in the low five-figures, taking into consideration FCC, PTCRB testing + registration, and a carrier cert. However, the exact cost varies significantly based on the exact bands and carriers that are used.

REFERENCES

[1] – FCC Supplier’s Declaration of Conformity Guide https://apps.fcc.gov/kdb/GetAttachment.html?id=cPjFB7kIR2TMlwiHUNAbvA%3D%3D&desc=896810%20D01%20SDoC%20v02.pdf

[2] – 3GPP TS 36.101 V17.1.0 (2021-03) Standard. [https://www.etsi.org/deliver/etsi_ts/136100_136199/136101/10.17.00_60/ts_136101v101700p.pdf]

Medical Electrical Hardware Robustness Whitepaper

PREFACE

Maximizing the reliability of electrical hardware becomes more challenging, and more critical, as the complexity of modern medical equipment increases. Failure to develop the hardware to a sufficiently high reliability level invites financial and legal peril. Achieving sufficient reliability is difficult, but through disciplined design practices the risks can be mitigated. We will discuss various techniques to increase medical device robustness with some practical examples in Altium. Here are some factors to consider when designing a highly robust product:

  • Component Selection
  • Electromagnetic Interference (EMI – FCC Part 15)
  • High Voltage Transients (IEC 61000-4-2)
  • Battery Protection (IEC 62133)

COMPONENT SELECTION

The component selection process is often overlooked but critically important. Even two passive components like capacitors with the same capacitance values can perform significantly differently in various conditions. Besides the component value, other important device parameters include:

  • Device Parasitics (inductance, capacitance, resistance)
  • Operating Temperature (typical and absolute maximum)
  • Operating Voltage (typical and absolute maximum)
  • Operating Currents (typical and absolute maximum)
  • Thermal Performance (heat dissipation – typical and absolute maximum)

Choosing elements such that they operate below their absolute maximum (or above their absolute minimum) ratings with a safety margin is called derating. For example, choosing a capacitor with a maximum voltage rating of 7.5 V (or higher) when the capacitor would only ever see 5 V results in a 50% margin, which increases the reliability of the component. A 50% margin can increase Mean Time Between Failures (MTBF) by about 30% [1]. Note that MTBF is the predicted time between inherent failures of a system during normal operation.
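Expressed as a simple formula, the voltage margin in the example above is:

\[
\text{margin} = \frac{V_{\text{rated}} - V_{\text{applied}}}{V_{\text{applied}}} = \frac{7.5\ \text{V} - 5\ \text{V}}{5\ \text{V}} = 50\%
\]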

Furthermore, the manufacturing process needs to enforce the reliability requirements. Due to parts obsolescence or sub-standard supplier work, lower-quality components with less margin may be substituted, degrading the reliability of the device despite the scrupulous upfront design work. Therefore, requirements on the component tolerances must be communicated from the design engineering to the manufacturing engineering departments in a documented and traceable fashion for quality assurance purposes.

EMI

SOLID GROUND PLANE

An insufficient ground plane is the single largest contributor to sub-par EMI performance. Using a 4-layer Printed Circuit Board (PCB) allows for a dedicated ground layer. However, during the PCB routing phase, much of that internal layer can end up carrying traces, which decreases the effectiveness of the ground plane. This is especially problematic for high frequency signals, since their electromagnetic fields will have to find another return path, thereby causing coupling issues. Since some traces may be required in the ground layer, they should be minimized. A general rule of thumb is to ensure the ground plane has gaps smaller than 10 mm [2]. The ground plane should be solid beneath high frequency traces (Fig. 1). A ground plane extending at least three times the high frequency signal trace width should be enforced on each side of the trace [3].

Figure 1: Solid Internal Ground Plane

SHIELD/GUARD VIAS

To ensure high frequency signals are kept within their traces, placing vias around the critical traces is recommended (Fig. 2). This technique is effective for frequencies up to at least 5 GHz. A general guideline is to ensure that the distance between vias is at most a quarter of the signal’s wavelength. However, it is often more effective to space the shield vias at most 1/10th (or preferably 1/20th) of the wavelength apart.
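As a rough illustration of the spacing guideline, the sketch below estimates the maximum via spacing for an assumed 2.4 GHz signal on FR-4; the frequency and dielectric constant are example values, not from the original design, and the full dielectric constant is used as a conservative approximation of the effective permittivity.

using System;

class ViaSpacingEstimate
{
    static void Main()
    {
        const double c = 3.0e8;        // Speed of light in vacuum, m/s
        double frequency = 2.4e9;      // Assumed signal frequency, Hz
        double epsilonR = 4.4;         // Assumed FR-4 relative permittivity

        // Wavelength inside the dielectric is shortened by sqrt(er).
        double wavelength = c / (frequency * Math.Sqrt(epsilonR));

        Console.WriteLine($"Wavelength in dielectric:          {wavelength * 1000:F1} mm");
        Console.WriteLine($"Max via spacing (1/10 wavelength): {wavelength / 10 * 1000:F1} mm");
        Console.WriteLine($"Preferred spacing (1/20 wavelength): {wavelength / 20 * 1000:F1} mm");
    }
}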

Figure 2: Via Shield – PCB Layout Example

In the example above, a high frequency trace originating from a wireless Microcontroller Unit (MCU) and terminating at an onboard antenna is protected.

Another important characteristic of the trace shown in Fig. 2 is its width, which (together with the PCB stack-up) dictates its characteristic impedance. To maximize power transfer from the MCU to the antenna, the characteristic impedance needs to match the impedance of the microcontroller and antenna, which is typically 50 Ω. Determining the optimal trace width depends on the PCB composition and is out of scope for this paper, but a simple calculator can be found on our website [4].
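As an illustration only, the classic IPC-D-317A microstrip approximation can be used for a first-pass estimate. This is a generic textbook formula and not necessarily the method used by the calculator in [4]; the geometry values below are assumed examples.

using System;

class MicrostripImpedance
{
    // IPC-D-317A microstrip approximation (valid roughly for 0.1 < w/h < 2 and 1 < er < 15).
    static double CharacteristicImpedance(double w, double h, double t, double er)
    {
        return (87.0 / Math.Sqrt(er + 1.41)) * Math.Log(5.98 * h / (0.8 * w + t));
    }

    static void Main()
    {
        // Example (assumed) geometry: 0.3 mm wide, 35 um thick trace over a 0.2 mm dielectric, FR-4 (er ~ 4.4).
        double z0 = CharacteristicImpedance(w: 0.3, h: 0.2, t: 0.035, er: 4.4);
        Console.WriteLine($"Estimated characteristic impedance: {z0:F1} ohms"); // roughly 53 ohms
    }
}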

STITCH VIAS

Figure 3: Resulting Stitch Vias on Ground Plane (Top Layer) – PCB Layout
Figure 4: Stitching Parameters Window

Stitching vias are similar to guard vias. They improve EMI performance because the various ground layers are connected together, yielding a lower parasitic impedance to ground for all signals on the PCB (Fig. 3). The spacing guideline from the Shield/Guard Vias section should be followed. In standard PCB design software, the spacing between vias (grid) as well as the via size can be selected. Below we show the control window in Altium (Fig. 4).

HIGH VOLTAGE TRANSIENTS

High voltage transients are defined as voltages several orders of magnitude larger than expected voltages (e.g., 10 kV) for a short period of time (a fraction of a second). Despite their short duration, transients can destroy a circuit. Static electricity from a human hand, as well as USB cable removal/insertion, can cause these transients. An effective safeguard is electrostatic discharge (ESD) protection, implemented as Transient Voltage Suppressors (TVS), which are special diodes. Note that a conventional diode does not successfully protect the circuit because its high parasitic capacitance prevents it from reacting quickly enough to curb the transient; TVS diodes have a capacitance on the order of picofarads. Applying such protection is recommended wherever a transient is expected, such as at USB connectors or user buttons. There are four considerations when choosing a TVS:

  1. Standoff Voltage higher than normal operating voltage
  2. Clamp voltage (for given peak current) is below the protected IC’s pin’s max voltage rating
  3. Specified peak current exceeds expected peak current
  4. Bidirectional protection is chosen (if required)
Figure 5: Example TVS with Protected Circuit and Parasitic Trace Inductance

Be wary that an excellent PCB schematic alone (Fig. 5) is insufficient to protect against transients: an effective PCB layout is an equally important step. For example, a +/- 15 kV IEC 61000-4-2 air-gap discharge ESD event with a nanosecond rise time results in a pulse current of approximately 45 A. A ½-inch PCB trace represents roughly L = 10 nH of parasitic inductance, which translates to an additional 450 V¹ on top of the diode’s clamp voltage [3]. Such an ineffective PCB layout would, therefore, render the ESD protection useless, since most components are not rated to handle 450 V. Even if the components do not fail immediately, the product would have a lower MTBF.

¹ 450 V = L × dI/dt = 10 nH × 45 A / 1 ns
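As a quick sanity check of the footnote arithmetic, here is a minimal C# calculation of the voltage developed across the parasitic trace inductance:

using System;

class TransientSanityCheck
{
    static void Main()
    {
        double L = 10e-9;   // Parasitic trace inductance, 10 nH (about a 1/2-inch trace)
        double dI = 45.0;   // ESD pulse current, A
        double dt = 1e-9;   // Pulse rise time, ~1 ns

        // V = L * dI/dt : the extra voltage developed across the trace inductance
        double v = L * dI / dt;
        Console.WriteLine($"Additional voltage from trace inductance: {v} V"); // 450 V
    }
}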

BATTERY PROTECTION

Modern medical equipment commonly employs secondary (rechargeable) lithium-based batteries for various reasons. Ventilators, for example, commonly employ backup power sources to mitigate power supply failure, as mandated by IEC 80601-2 [5]. Other devices are fully wireless and, therefore, depend on a single rechargeable battery for connectivity applications. Even in non-safety-critical applications, medical devices are developed to adequate safety standards to minimize the risk of failures, such as fire or explosion, that would endanger the user. IEC 62133 is a cross-industry standard for exporting devices containing lithium batteries in accordance with international compliance requirements. IEC 62133-2 specifies requirements and tests for the safe operation of lithium-based batteries [6], while IEC 62133-1 applies to nickel-based batteries.

Some of the battery tests include:

  • Free fall
  • Crush
  • Over charging
  • External short circuit

The batteries must be able to tolerate these tests with no fire or explosion. Each battery will have different safe voltage and current limits, operating/charging temperatures, and number of cells. Clearly, these safety standards levy requirements not only on the battery manufacturer, but also on the device manufacturer (battery integrator).

The lithium-based battery should either have the protection circuit built in or integrated onto the PCB. The choice will depend on the product’s mechanical and cost requirements. A PCB-based solution can be slightly more cost-effective in some cases but shifts the verification burden onto the application integrator. Specifically, the protection circuit must protect against the following conditions (an illustrative threshold sketch follows the list):

  • Overcharge
    • A 3.7 V Li-Ion battery, for example, can typically be safely charged only to a certain level, such as 4.2 V.
  • Over-discharge
    • A 3.7 V Li-Ion battery must not be discharged below a certain voltage. 3 V is a common cutoff voltage for this battery class.
  • Charging too quickly
    • A charge rate is typically recommended that should not be exceeded. 0.5 C or 1 C are common charge rates.
  • Discharging too quickly
    • A maximum discharge current rate as a function of the battery capacity is typically specified. 2 C is a common, but not a universal, parameter value.
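Purely as an illustration of these limits, here is a minimal C# sketch using the example threshold values from the list above. In a real design, this logic lives in a dedicated battery-protection IC or protection circuitry, not in application code, and the exact values must come from the battery datasheet.

using System;

class LiIonLimits
{
    // Example thresholds for a 3.7 V Li-Ion cell, taken from the list above.
    const double MaxChargeVoltage  = 4.2;  // V
    const double CutoffVoltage     = 3.0;  // V
    const double MaxChargeRateC    = 1.0;  // C
    const double MaxDischargeRateC = 2.0;  // C

    static bool WithinSafeLimits(double cellVoltage, double currentA, double capacityAh, bool charging)
    {
        double cRate = Math.Abs(currentA) / capacityAh;
        if (cellVoltage > MaxChargeVoltage) return false;           // Overcharge
        if (cellVoltage < CutoffVoltage) return false;              // Over-discharge
        if (charging && cRate > MaxChargeRateC) return false;       // Charging too quickly
        if (!charging && cRate > MaxDischargeRateC) return false;   // Discharging too quickly
        return true;
    }

    static void Main()
    {
        // 2.0 Ah cell charging at 1.5 A (0.75 C) at 3.9 V -> within limits.
        Console.WriteLine(WithinSafeLimits(cellVoltage: 3.9, currentA: 1.5, capacityAh: 2.0, charging: true));
    }
}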

Exceeding any of these limits can increase the probability of critical device failure and, thereby, endanger the user. Fig. 6 shows a simplified protection circuit that could be integrated into the application PCB.

Figure 6: Simplified Protection Circuit Schematic, like examples in [7]

SUMMARY

Ubiquitous pitfalls exist in designing modern medical devices that are highly reliable, safe, cost-effective, and functionally competitive. As a result, a myriad of tradeoffs must be balanced to deliver a competitive product that can secure market share. We discussed tangible steps to help accomplish that vision. Any questions on this article (or any related topic) may be directed to the author.

REFERENCES

[1] Reliable Design of Medical Devices by Richard C. Fries.

[2] MAX 13202E Datasheet. https://datasheets.maximintegrated.com/en/ds/MAX13202E-MAX13208E.pdf.

[3] PCB Design and Layout Fundaments for EMC by Roger Hu.

[4] PCB Characteristic Impedance Calculator. https://simplonics.com/simulations/.

[5] IEC 80601-2 Standard.

[6] IEC 62133-2 Standard.

[7] Lithium-Ion Cell Protection Examples. https://www.digikey.com/en/maker/blogs/lithium-ion-cell-protection


Tutorial: Stock market analysis using the Hurst Exponent in C#

Disclaimer: This tutorial is for simplified demonstration/educational purposes and not intended for production applications. We cannot be held responsible for any misuse, errors, damages, or losses. Use at your own risk.

1. Overview

As a result of requests from our clients, we’ve decided to publish an article about the Hurst Exponent (H), a widely used econometric measure for stock market studies in investment applications. H indicates the long-term memory of a time series (Y(t)) and is a number between 0 and 1. When H is close to 0.5, the series behaves like a random walk with no long-term memory. Values closer to 1 indicate persistence: increases in Y typically correlate with further increases in Y at future points in time. Values closer to 0 indicate anti-persistence (mean reversion), i.e., long-term switching between high and low values, with a tendency for Y to return to the mean.

There is a wide gamut of implementations in MATLAB and Python, but not many in C#. In this discussion, we describe our implementation in the following sections:

  • Current Capability
  • Implementation
  • Test Results
  • Possible Enhancements & Next Steps
  • Classes & Methods Declaration
  • Future Thinking
  • Appendix – Code Files & References

2. Current Capability

We developed a Hurst Exponent calculation in C# that:

  • Performs an R/S Hurst Exponent (uncorrected) calculation for inputs whose length is an integer power of two
  • Implements a least squares linear regression for the final R/S calculation
  • Reads input data series from a CSV file
  • Handles some exception conditions
  • Was tested against the Python hurst library
  • Passed test conditions when the inputs kind='change' and simplified=True were set [1]
  • Was tested with input sizes of 128, 256, 512, 2048, 4096, and 8192

3. Implementation

Fundamentally, we implemented the Hurst Exponent using the conventional rescaled range (R/S) method. Variants of this method appear in different applications with various assumptions about how the input data is modeled [1]:

  • Change
  • Price
  • Random-Walk

We implemented the change variant, which is described below (the variants can yield different answers for the same input). Support for the other variants is another possible area of improvement. Let’s walk through the algorithm.

Our code exposes the following interface

public Tuple<double, double> calcHurstExp(double[] inputData)

to calculate the Hurst Exponent. Here are additional details of what this function does.

1. First, we determine the sizes of our division chunks. Assume N = 512 elements in the inputData array. In this case, we have the following table (a short generation sketch follows the table).

Division (D) | Chunk Size (Cs)
0            | 512
1            | 256
2            | 128
3            | 64
4            | 32
5            | 16
6            | 8
7            | 4
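Here is the short generation sketch mentioned above. It is a self-contained illustration that simply halves the chunk size for each successive division, assuming N is an integer power of two and stopping at a minimum chunk of 4 samples to match the table:

using System;

class DivisionTable
{
    static void Main()
    {
        int N = 512;  // Length of inputData in this example (an integer power of two)
        int div = 0;
        // Halve the chunk size for each successive division, down to a minimum chunk of 4 samples.
        for (int chunkSize = N; chunkSize >= 4; chunkSize /= 2)
        {
            Console.WriteLine($"Division {div}: chunk size {chunkSize}");
            div++;
        }
    }
}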

2. Next, we loop through each division. For each division, we calculate the normalized R/S value and keep it for the linear regression later (one of the last steps below). For this example, let’s choose D = 2.

var divRS = GetDivR_S(inputData, div);

3. Within each division, we loop through the input array and create N/Cs chunks for analysis. For each double[] chunk, we calculate the R/S value.

double RS = getChunkRS(chunk);

4. To calculate the non-normalized R/S value for a given chunk, we follow these steps (see the sketch after the list):

4.1. Find the mean of the chunk

4.2. Find the standard deviation (S) of the chunk

4.3. Create a mean-centered series

4.4. Find the cumulative deviation of the mean-centered series

4.5. Calculate the range (R) of the cumulative deviation

4.6. Calculate the non re-scaled range (R/S)
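Here is the sketch referenced above: a standalone version of what getChunkRS might look like for steps 4.1 through 4.6. Our actual implementation uses the helper methods listed in Section 6; this version is for illustration only.

using System;
using System.Linq;

static class ChunkRS
{
    // Steps 4.1 - 4.6 for a single chunk (illustrative sketch).
    public static double GetChunkRS(double[] chunk)
    {
        int n = chunk.Length;
        double mean = chunk.Average();                                        // 4.1 Mean of the chunk
        double s = Math.Sqrt(chunk.Sum(x => (x - mean) * (x - mean)) / n);    // 4.2 Standard deviation (S)
        double[] centered = chunk.Select(x => x - mean).ToArray();            // 4.3 Mean-centered series

        // 4.4 Cumulative deviation of the mean-centered series
        double[] cumDev = new double[n];
        double running = 0;
        for (int i = 0; i < n; i++)
        {
            running += centered[i];
            cumDev[i] = running;
        }

        double r = cumDev.Max() - cumDev.Min();                               // 4.5 Range (R)
        return r / s;                                                         // 4.6 Non re-scaled range (R/S)
    }
}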

5. Then, we average all the chunks’ R/S values within a given division

double RS_Div = RSArr.Average();

6. Next, the logarithms Log[RS_Div] and Log[chunk size] need to be determined so we can linearly fit the power-law relationship in the data (any consistent base works; our code uses base 10 via mathBase).

Log_RS_Div_Arr[div] = Math.Log(RS_Div, mathBase);
Log_Size_Div_Arr[div] = Math.Log(chunkSize, mathBase);

7. Finally, using least squares linear regression, we find the slope of the logarithm of R/S with respect to the logarithm of the chunk size. The slope of this line is the Hurst Exponent.

Tuple<double, double> HC = LinearRegression(Log_Size_Div_Arr, Log_RS_Div_Arr);
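For reference, here is a minimal least-squares slope/intercept sketch consistent with the call above (illustrative only; the production version includes additional validation):

using System;

static class Regression
{
    // Least-squares fit y = slope * x + intercept; returns (slope, intercept).
    public static Tuple<double, double> LinearRegression(double[] X, double[] Y)
    {
        int n = X.Length;
        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < n; i++)
        {
            sumX += X[i];
            sumY += Y[i];
            sumXY += X[i] * Y[i];
            sumXX += X[i] * X[i];
        }
        double slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
        double intercept = (sumY - slope * sumX) / n;
        // When X holds log(chunk size) and Y holds log(R/S), the slope is the Hurst Exponent.
        return Tuple.Create(slope, intercept);
    }
}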

4. Test Results

Our implementation was tested against the Python hurst library [1] as well as the MATLAB/Octave example [2]. When our code was tested against the Python code, the following call was used: H, c, data = compute_Hc2(series, kind='change', simplified=False), yielding results within 1-2% [2]. Deviations from identical results are mostly due to the use of different division windows. Our code was significantly faster than the Python implementation. Of course, more rigorous testing will be needed for accuracy tweaking.

Figure 1 – C# Basic Hurst Testing (Simplonics Implemented)

Figure 2 – Hurst Verification in Python based on [1]

Figure 3. Python with modified window sizes based on [1]

Figure 4 – MATLAB/Octave Result based on [2]

Figure 5 – MATLAB/Octave Result based on [2]

5. Possible Enhancements & Next Steps

Even though the C# implementation agrees with several prominent benchmarks, it (as well as some of those benchmarks) fails to work under all circumstances. Therefore, this application is not ready for production deployment until Simplonics implements these enhancements. We outline several steps below that we can follow to make the application more accurate, faster, or more memory-friendly. Some involve resolving current limitations:

  1. Adding support for inputs whose sizes are not integer powers of 2
  2. Theoretical/Empirical R/S Correction, such as Anis-Lloyd/Peters Correction
  3. Ensure program works correctly with different input types
    • Currently, there are some corrections and other algorithms needed for reliable accuracy that are not yet implemented
      • The program sometimes outputs a Hurst indicator slightly above one as a result of not-yet implemented corrective measures.
  4. Optimize Speed
    • This becomes more critical as the size of the input increases. Forms of parallel algorithms can be employed
  5. Utilize other algorithms, such as wavelets, FD4, and others to avoid biases that exist with current R/S calculation method as input size increases

6. Classes & Methods Declaration

C# Files:

  • FileHandler.cs – Opens and reads files, such as CSV files for data inputs

class FileHandler
{
    public string GetTestPath();
    public double[] ReadCSV(string filePath);
}

  • HurstExponent.cs – C# file that performs the Hurst exponent calculation on a given input. The main function is here.

namespace HurstExponential
{
    class HurstExp
    {   
        public double mathBase = 10;      
        public double StdDev(double[] arr, double mean, int N)
        public double Mean(double[] arr, int N)
        public double[] MeanCenteredArr(double[] arr, int N, 
                                        double mean)
        public bool aEqual(double x, double y)
        public double[] CumDevArr(double[] mcArr, int N)
        public double getChunkRS(double[] chunkArr)
        public void PrintArray(double[] X)
        private void assert(bool v)
        public Tuple<int, double[]> GetDivR_S(double[] arr, int div)
        /* Gets a specific division's R/S ratio as an array of each
         * chunk's natural (non re-scaled) R_S value*/
        public double[] Slice(double[] arr, int start, int end)
        private bool CheckForValidInputs(double[] inputData)
        public void Print_RS_Table(double[] Log_RS_Div_Arr,
                                   double[] Log_Size_Div_Arr)
        public Tuple<double, double> calcHurstExp(double[] inputData)
        /*Highest-Level function that calculates the Hurst Exponential
         * Assumes input length is an integer power of 2 */
        
        public Tuple <double, double> LinearRegression(double[] X,
                                                       double[] Y)
        /* Calculating a Least Squares Regression -  
         *Returns slope and yint of linear regression for a best fit curve*/
        class HurstExpWrapper
         {
           static void Main(string[] args)
         }

}
}

  • UnitTest.cs – Unit tests that verify the Hurst exponent calculation on given inputs.
public class UnitTests
{
    bool AlmostEqual(double X, double Y, double t)
    public bool performUnitTest(double expH, double expC,
                  string fn = "pyTest_256.csv", double t = 0.02)
    public bool MainUnitTests()
}

7. Future Thinking

What are your thoughts on these additional concepts that Simplonics can help you realize?

  • Econometric Developments
    • Dickey-Fuller Test
    • Other algorithms
  • C# GUI Implementation
    • For an enhanced customer user experience and integration with your existing code base
  • Network Programming
    • Hosting your financial solution on a globally accessible platform for your different customers to use

8. Appendix:

a. Code Files

Available upon request here

b. References:

[1] https://pypi.org/project/hurst/

[2] http://prac.im.pwr.edu.pl/~hugo/RePEc/wuu/hscode/hurst.m