Blog

How will 5G and healthcare tango and why should I care?

As technology evolves, it becomes difficult to keep up. Fall behind, and your tech becomes obsolete while your competitors pull ahead. Now 5G, the fifth generation of wireless technology, is here.


“But, why should I care about the 5G tango?”

Video and/or health monitoring

Would a parent watching their infant with a monitoring system accept a choppy experience? Or can a vital signs monitoring system have spotty coverage? No, not these days.

It’s important to get ahead of the tango because:

  • Previously difficult problems can be solved now, meaning now is the time to solidify your vision of making the world a better place
    • Significant financial opportunities exist for ancillary features, such as remote control and monitoring
      • A $76 billion revenue opportunity for addressing the 5G healthcare transformation is predicted [2] 
  • Entire industries (your partners & competitors included) are moving, making it easy to fall behind
    • Your customers expect seamless connectivity for ever more demanding services – current tech won’t suffice
  • Legacy technologies will be phased out
    • T-Mobile plans to remove 2G support by the end of 2020 [1]

Virtually all parts of healthcare will be affected, though the telehealth and remote patient monitoring (RPM) segments will be especially affected. The telemedicine market is expected to grow at an annual rate of 16.5% until 2023, signaling ample opportunity to introduce new tech [3].

Not only do we have a technological shift, but also a transformation in patient expectations. Patients expect seamless connectivity regardless of their location; they no longer accept connectivity confined to home Wi-Fi or spotty outside coverage. Clearly, a connectivity solution that considers the various available communication links is critical.

Legacy wireless systems have partially solved that challenge, but medical products have yet to adapt to the 5G (and eventually 6G) revolution. That’s where we can achieve a rich user experience as well as effective diagnostics and treatments.

In the table below, we look at different wireless technologies that have evolved over time.

Tech | Theor. Data Rate | Latency | Application
2G | 50 Kbps | 750 ms | SMS, pictures, MMS
3G | 2 Mbps | 300 ms | Voice, video calling, internet
4G | 100 Mbps | 20 ms | High-res. video streaming
5G | 20 Gbps | 1 ms | AR/VR/ultra-high-res. video
Theoretical data rates and latencies for different wireless generations. Data from [4] and [5]


In practice, the true data rate1 is a function of multiple variables, including:

  • Surrounding devices
    • Similar devices broadcasting at the same frequency can interfere
  • Modulation
  • Transmit Power
  • Weather
  • Other factors

The realized data rate may be only a tenth of the theoretical, but the table nonetheless underscores the growth potential: moving from 4G to 5G yields an effective throughput increase of about 20x.

However, the real benefit isn’t only the data rate. The latency, or time lag between sending and receiving messages, is key – 5G offers a substantial (20x) reduction compared to 4G. That reduction is crucial for Virtual Reality (VR) and telemedicine applications. Vital signs could also be streamed with an error rate below one in a billion, making remote surgical operations possible; 4G, on the other hand, is insufficiently equipped [4].

Therefore, even though legacy systems may be fast enough for some RPM (and other healthcare) applications, in several cases they do not meet the latency requirement.

Now, let’s study the data rate and size requirements for different applications:

Application | Data Size or Data Rate
Image File – PET Scanner | 1 GB (size)
Video Conference | 2 Mbps (speed)
Virtual Reality (Training) | 50 Mbps (speed)
Surgery (4K Camera) | 75 Mbps (speed)
Augmented Reality (6 DoF) (Assisted Surgery) | 5 Gbps (speed)
Data sizes and rates for different applications. Data from [4-7]


Therefore, as the application becomes more demanding, legacy systems become less practical.

So, it’s more than simply device connectivity. It’s about providing access to all for a better, faster, more available healthcare solution.

“What if I just select some 5G chipset and call it good?”

Careful. Select the wrong 5G chipset and you’ll be in a world of hurt. The right choice requires a well-thought-out, forward-looking exercise – we speak from experience on chip selection.

What to do now?

  • What are some practical ways to get ahead of the competition before time is lost?
  • How can these learnings complement an existing strategy and product?
    • Which 5G chipsets should be considered or avoided?
    • What about integration?
  • What about the context of medical regulations?

These are great questions. You could send us a message here and we can stir up some ideas.

Footer

1 The data rate from point A to point B (data rate A→B) is oftentimes asymmetric (data rate A→B ≠ data rate B→A) due to the different transponder frequency bandwidths allocated in each direction. Some texts treat bandwidth as synonymous with data rate. We don’t mix the terms here because bandwidth has multiple meanings (such as a range of frequencies).

Also, the common term download is related to, but different from, data rate. A download is an application-level transfer of data that usually uses acknowledgements in the opposite direction during the transfer. Therefore, download speed is usually a function of the latency as well as the (asymmetric) data rate, and the concept of bandwidth-delay product (BDP) becomes central.
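
As a quick, illustrative sketch of BDP (the link numbers below are assumptions for a 4G-class connection, not measurements):

import math

data_rate_bps = 100e6  # assumed data rate: 100 Mbps
rtt_s = 0.040          # assumed round-trip time: 2 x 20 ms latency
bdp_bytes = data_rate_bps * rtt_s / 8
# ~488 KiB can be "in flight" before the first acknowledgement returns
print(f"BDP ~ {bdp_bytes / 1024:.0f} KiB")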

References

[1] – https://usatcorp.com/anticipated-cellular-carriers-2g-3g-sunset-dates/
[2] – https://www.ericsson.com/en/networks/trending/insights-and-reports/5g-healthcare
[3] – https://www.business.att.com/learn/updates/how-5g-will-transform-the-healthcare-industry.html
[4] – https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6927096/
[5] – https://hpbn.co/mobile-networks/
[6] – https://www.rcrwireless.com/20200204/5g/4-ways-5g-is-transforming-medical-field
[7] – https://www.qualcomm.com/media/documents/files/vr-and-ar-pushing-connectivity-limits.pdf

Tutorial: Medical Device Design with MATLAB Simulink MBSE

We will discuss an emerging system design technique that has been proven to reduce complex product development costs by 55% [1].

INTRODUCTION & WHY?

As product complexity in virtually every industry has increased exponentially, processes to design and prove device correctness have become more involved. Traditional methods for complex system design in highly regulated industries, such as medical devices and aerospace, include the following (simplified) steps1:

1). Manual gathering of requirements
2). Design system and/or architecture
3). Coding and/or development
4). Validation & Verification

Typically, the process outlined above is performed in what is referred to as a waterfall approach, in which each step is completed fully and in sequence. For example, airborne software follows the DO-178C standard, and medical software components follow the life cycle processes outlined in IEC 62304.

Suppose during step (4), validation & verification, a missing requirement is caught. Adding the requirement would require redesign, re-coding, re-verification, and re-validation. This is one reason highly regulated products oftentimes require greater capital investments than consumer electronics.

One seemingly obvious solution is to ensure requirements are fully correct and comprehensive in the early product development stages. In practice, achieving such a feat isn’t always realistic (especially if the product is complex) due to a gamut of reasons including limited foresight, evolving interfaces controlled by external groups, and dynamic regulatory requirements (think General Data Protection Regulation (GDPR)).

Another solution (albeit a poor choice) could be to attempt abridging certain subsets of the process. However, such an endeavor would prove haphazard, error-prone, and likely to reduce product quality. An alternative to the above quandary is not to avoid the process but rather to reduce the burden of re-running the steps through automation; Model-Based System Engineering (MBSE) is one such approach.

WHAT IS MODEL-BASED SYSTEM ENGINEERING (MBSE)?

MBSE is a system design technique that decomposes a system as a representation of simpler elements (via models), connects the models together, and continues to represent complex models in terms of smaller ones. The following terms make up a “system,” as [2] excellently defines:

  • Entities are components that make up a system
  • Attributes are characteristics of entities, like temperature or volume
  • Relationships are associations between attributes and entities based on causality

The key observation is that a system can be broken down hierarchically. Starting with the highest level of abstraction, the top level (system) is designed first and broken into its elements, namely subsystems. The process is performed repeatedly until the lowest level is sufficiently simple and therefore requires no additional decomposition. The definitions of these levels (in progressively greater granularity) are as follows:

  • System
  • Sub-systems
  • Assemblies
  • Sub-assemblies
  • Parts
Fig.1: System Decomposition
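To make the hierarchy concrete, here is a minimal Python sketch (the ventilator breakdown below is hypothetical, not our Simulink model): a system is just a tree whose leaves are parts.

from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    children: list = field(default_factory=list)  # empty for parts, the lowest level

# Hypothetical decomposition: system -> subsystems -> assemblies -> parts
system = Element("Ventilator", [
    Element("Pneumatic Subsystem", [
        Element("Valve Assembly", [Element("Solenoid"), Element("Fitting")])]),
    Element("Alarm Subsystem", [Element("Buzzer"), Element("LED")])])

def levels(e):
    # depth of the tree = number of decomposition levels
    return 1 + max((levels(c) for c in e.children), default=0)

print(levels(system))  # -> 4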

HOW TO PUT MBSE INTO PRACTICE?

We can follow a process similar to the formal System Analysis and Design Technique (SADT). Classically, engineers and scientists use these methods by drawing and breaking down the system on a whiteboard or a piece of paper. However, with advances in computer-aided design (CAD) and modeling, system decomposition can be performed on a computer using tools such as CAMEO or MATLAB Simulink. After the design phase is completed, the model is verified, and the code is automatically generated, significantly reducing the burden of writing software.

Consider the impact in time savings. Instead of having to hand-write a million-line program, the vast majority of the coding could be automated. The savings are further underscored when feature enhancements are considered in the development cycle (typically via a change control process): the effort impact of required changes is greatly reduced, and faster product cycles (weeks instead of years) become possible.

REQUIREMENTS ANALYSIS & GATHERING

For the purposes of this tutorial, we’ll use MATLAB Simulink to design part of a mechanical ventilator medical device. When we start a medical product, we generally gather product requirements, which include regulatory guidelines. IEC 60601 provides additional guidance, such as a medical device’s required alarm volume in decibels and expected operating modes. For this exercise, we will specify a simplified set of requirements, as shown in Fig. 2.

Fig. 2: Requirements Subset

With Simulink modeling, the requirements in Fig. 2 can be linked to the model itself, with the following steps accompanying Fig. 3:

  1. Open the requirements document in Word and highlight the requirements text
  2. Right-click a specific Simulink block
  3. Select Requirements
  4. Select Link to Selection in Word
Fig. 3: Linking Requirements to Model

Fig. 4 shows how to ensure the requirement mapping is successful.

Fig. 4: Linked Requirements to Model

DESIGN INTERFACES & HIGH LEVEL

Let us break our system into three main components:

  1. User Input (Pre)Processor
  2. Tube & Patient Lung
  3. Ventilator
    • Alarm Subsystem
    • Control Micro-controller Unit (MCU) Subsystem
    • Pneumatic Subsystem
      • Sensors
    • Pneumatic Controller

The Ventilator will take user input from the (pre)processor/panel and adjust its output (for example, airflow) to the patient accordingly. Inside the ventilator, a primary control MCU commands the Pneumatic Controller, which drives the Pneumatic Subsystem (which, in turn, includes sensors and pneumatic components). The Alarm Subsystem will trigger audible and visual indicators if certain fault conditions, such as power loss, are detected.

Consistent with the above, the Simulink Model Browser shows the following:

Fig.5: System Decomposition

At the highest level, we design the following interface and major entities comprising the system.

Fig.6: High-level Block Diagram & User Interface Simulation

DESIGN LOW LEVELS

Next, we will dig into the yellow box of Fig. 6 and create a Control Logic MCU, Alarm Subsystem, Pneumatic Subsystem, and Pneumatic Controller.

Fig. 7: 2nd Level Decomposition

We continually decompose the system until the constituent parts are sufficiently simple and thus no longer divisible. Additionally, the primary control logic implementing the requirements of Fig. 2 is shown in Fig. 8.

Fig. 8: 3rd Level Decomposition – Control Sequence of MCU

For tutorial succinctness, we’ll stop at the third level.

SIMULATION RESULTS

Of course, testing the model is paramount; testing phases both before and after automatic code generation are commonly employed. For now, we will test the model itself. As shown in Fig. 9, the system detected the fault condition when the ventilator was turned off abruptly during ventilation mode (shown visually by the red alarm indicator once the system ON input turned red).

Fig. 9: User Turns Machine off during Patient Ventilator – Fault Detected

By running the simulation while the state machine window is open, the control logic’s state machine can be debugged using Simulink’s Stateflow. The blue boxes represent the state machine’s current state.

Fig. 10: Video Debugging & simulation of state machine

For specific plots pertaining to the patient’s flow, pressure, and volume, Fig. 11 shows the simulated readings from the ventilator’s sensors.

Fig. 11: Image of MBSE-designed Ventilator Patient Simulation

And, we conclude testing with a video showing the readings in live-action.

Fig. 12: Video of MBSE-designed Ventilator Patient Simulation

AUTOMATIC C/C++ CODE GENERATION

As mentioned, a prodigious added benefit of MBSE is automatic code generation, which has been shown to reduce product development costs by ~55% [1]. Code generation is an excellent topic for a future blog article.

NEXT STEPS & ENHANCEMENTS

Verification (and coverage) should be performed as well. An example test mathematically integrates the output flow every cycle to ensure the tidal volume is truly delivered to the patient within an error margin. An assertion can be tied to this specific condition for every breathing cycle (often referred to in computer science as an invariant); a sketch follows.
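
Here is a minimal sketch of that invariant in Python (the flow trace, target, and tolerance are hypothetical; a real test would run against the model’s simulated sensor outputs):

import numpy as np

def check_tidal_volume(flow_lpm, dt_s, target_ml, tol_ml):
    # Integrate flow (L/min -> mL/s) over one breathing cycle to get delivered volume
    flow_mls = np.asarray(flow_lpm) * 1000.0 / 60.0
    volume_ml = np.trapz(flow_mls, dx=dt_s)
    # Invariant: the delivered tidal volume must stay within tolerance every cycle
    assert abs(volume_ml - target_ml) <= tol_ml, f"tidal volume off: {volume_ml:.1f} mL"
    return volume_ml

# Hypothetical one-second inspiration at a constant 30 L/min delivers 500 mL
print(check_tidal_volume([30.0] * 101, dt_s=0.01, target_ml=500, tol_ml=25))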

FOOTER COMMENTARY

1 Implementing a commercial product generally requires several additional steps, such as design reviews. In the context of medical products, risk management processes and quality management processes, for example, need to be in place before, during, and after the product development process. ISO 14971 and ISO 13485 standards, although not directly discussed in this article, would, therefore, be required.

2 Depending on the product’s safety classification (design assurance), the MBSE elements (and the design tool itself) would need to be qualified, which means proven to be developed with a sufficiently high confidence level for meeting the product’s requirements.

REFERENCES

[1] MBE Cost Benefits (PTC)

[2] MBSE Primer

ABOUT SIMPLONICS

Simplonics is a leading electronics design consulting and supply firm for companies across the United States in highly regulated industries, including medical devices. Our common services include electrical hardware design, software development, and sensor system design.

Our customers tell us horror stories about ex-vendors who delivered defective systems, resulting in safety concerns, vulnerable software, and other issues. We’ve successfully helped companies out of these situations by consistently delivering superior quality and simultaneously reducing recurring costs by more than 20%.

Are you working on a technical challenge and want a fresh perspective? Feel free to reach out by visiting the Contact Us page and submitting your contact information. Let’s build a great relationship for the future.

Tutorial: How to Detect Cancer Cells with Machine Learning?

The medical device industry has been advancing towards solving diagnostic and treatment problems with machine learning (ML), a data prediction technique. Investing effort to understand this multidisciplinary area for your own application can therefore help you maintain an edge over your competition. To that end, we’ll cover a relevant ML case study by first defining necessary terms, going into some theory, and implementing a Python coding example.

Conventional methods of data prediction use statistical techniques, such as regression, to classify data points or predict future values. Increases in computational power have made running more sophisticated, computation-intensive algorithms practical. In this tutorial, we will study and implement a Support Vector Machine (SVM) to categorize whether tumor cells are cancerous by studying their features; the principles of this tutorial apply to a wide range of classification problems.

DEFINITIONS

Let’s start with a few definitions. Two common types of machine learning technique are supervised learning and unsupervised learning.

Supervised Learning – Used when the correct categories pertaining to input data points are known. The Support Vector Machine (SVM) is an example we’ll be studying here.

Unsupervised Learning – Used when the output targets aren’t known in the given problem. We instead analyze commonalities in the data itself to group similar data points together.

Features (Inputs) – Specific inputs that map to an output class target. A cell’s mean area and mean smoothness are two examples we’ll study here.

Target (Output) – “Correct” answers (determined classes) pertaining to the specific feature inputs.

Training Set – Subset of data used to build the machine learning model. These data points are not used in the testing stage.

Test Set – Subset of data used to determine accuracy of model. These data points are not used in the training stage.

Class – Categories to which the input features pertain.  In this example, Malignant and Benign are the two possible classes for tumor cells. Other applications may have more than two possible classes.

Inference – Using the trained ML model to deduce the class to which a test input pertains.

Margin – In the context of Support Vector Machines, the distance between the closest points of different classes. The support vectors are simply the points closest to the opposing class. During training, the support vectors are computed to determine the hyper-plane (in sufficiently high dimensions). Fortunately, after training, almost all data points can be discarded and only the support vectors retained, resulting in significant storage space reductions.

THEORY

Support Vector Machines (SVMs) are a type of supervised learning algorithm that attempts to find a dividing line/curve (or hyper-plane in higher dimensions) so that unknown data points can be categorized into the appropriate class. It’s best to illustrate with some diagrams.

Fig.1: Data points with Features 1 and 2 Plotted

Clearly, a line between the data points for the two classes (X’s and O’s) would serve as a reasonable divider for the data points. But, what’s the equation of that line? And what does it look like in higher dimensions?

Fig.2: Data points with Features 1 and 2 Plotted after SVM Invocation

The goal is to find where to draw the thick red line above in Fig. 2 so that the margin is maximized. The data points (X’s and O’s above) closest to the thin red lines are called the support vectors.

The example above appears relatively simple and may not require the SVM technique. So, why is SVM useful? It becomes useful when the data points aren’t linearly separable, i.e., they cannot be split by a single flat decision surface. Because we are effectively solving for an equation that separates the data, transforming a low-dimensional, non-linearly separable problem into a higher-dimensional, linearly separable one simplifies the solution. The function used for this purpose is a kernel function, which transforms input data to higher dimensions. Common kernel choices (with their sklearn names) include:

1). Linear (‘linear’)

2). Polynomial (‘poly’)

3). Radial Basis Function (‘rbf’)
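
To see why such a transform helps, here’s a tiny, self-contained sketch (toy data, not our cancer dataset): a 1-D problem that no single threshold can split becomes linearly separable after mapping x → (x, x²).

import numpy as np
from sklearn import svm

# Class 0 sits in the middle, class 1 on the outside: not linearly separable in 1-D
x = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
y = np.array([1, 1, 0, 0, 0, 1, 1])

X2 = np.column_stack([x, x**2])  # map to 2-D: now a horizontal line separates the classes
clf = svm.SVC(kernel='linear').fit(X2, y)
print(clf.predict([[2.5, 2.5**2]]))  # -> [1], an "outside" point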

Mathematically, we can write the SVM training equation, according to [1]:
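
In the standard kernelized dual form (with Lagrange multipliers λᵢ), the training problem is to maximize

$$\sum_{i=1}^{N}\lambda_i \;-\; \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\lambda_i\lambda_j\, t_i t_j\, K(\mathbf{x}_i,\mathbf{x}_j) \qquad \text{[Eq. 1]}$$

subject to $0 \le \lambda_i \le C$ and $\sum_{i=1}^{N}\lambda_i t_i = 0$.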

In Eq. [1] above, K is the kernel function, x is a matrix containing the inputs we’d like to train on, t represents the targets, and the second (kernel) term is what lets the data become linearly separable in higher dimensions. We’ll use the Sklearn [2] library in Python to solve this equation for us. Other packages, such as cvxopt [3], use a form similar to Eq. [1], which has the same form as the Lagrange multiplier solutions.

IMPLEMENTATION

1. Import Libraries

First, we import the sklearn, numpy, matplotlib, and math libraries into our Python program.

from sklearn import svm
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
import math 

2. Load Data

Secondly, we’ll load the breast cancer data set and calculate the number of data points we have; the dataset has 569 samples.

    dataset = load_breast_cancer()
    sampleSize = dataset.data.shape[0] #sample size
    trainSize = math.floor(0.9*sampleSize) #90% of dataset is used for training;
                                           #the remaining 10% is used for testing

3. Select Features

Next, we need to select a couple features to analyze.

    #Choose the fourth and fifth columns as features 1 and 2, respectively
    #(off by one because of zero indexing)
    feat1Index = 3
    feat2Index = feat1Index + 1
    feat1Name = (dataset['feature_names'][feat1Index])
    feat2Name = (dataset['feature_names'][feat2Index])

4. Structure Data for SVM Input

Additionally, we’ll have three sets of variables housing our data to make the example clear. First, we’ll take all of the data; then we’ll designate about 90% of it for training, and the rest will be reserved for testing. For analysis and plotting purposes later, we further split the data depending on whether the target is malignant or benign (XMal and XBen, respectively). The two helper functions used here are defined at the end of the snippet.

    [f1, f2, y] = sliceData(dataset, 0, sampleSize, feat1Index, feat2Index)
    #all data
    X, XBen, XMal = separateFeaturesViaClasses(f1, f2, y)

    [f1Tr, f2Tr, yTr] = sliceData(dataset, 0, trainSize, feat1Index, feat2Index)
    #train data
    XTr, XBenTr, XMalTr = separateFeaturesViaClasses(f1Tr, f2Tr, yTr)

    [f1Te, f2Te, yTe] = sliceData(dataset, trainSize, sampleSize, feat1Index,
                                  feat2Index) #test data
    XTe, XBenTe, XMalTe = separateFeaturesViaClasses(f1Te, f2Te, yTe)

def separateFeaturesViaClasses(f1, f2, y):
    # Creates and returns TWO (2) separate input-features matrices - each
    # pertaining to one of the target classes - as well as ONE (1) input
    # features matrix pertaining to both target classes
    assert len(f1) == len(f2) == len(y)
    #Create scatter plot inputs for each class
    X = [[f1[i], f2[i]] for i in range(len(f1))]
    XBen = np.array([X[i] for i in range(len(f1)) if y[i] == 1]) #Class 1 - Benign
    XMal = np.array([X[i] for i in range(len(f1)) if y[i] == 0]) #Class 2 - Malignant
    return X, XBen, XMal

def sliceData(dataset, start, end, feat1Index, feat2Index):
    #Slices feature and output arrays based on indices
    f1 = dataset.data[start:end, feat1Index]
    f2 = dataset.data[start:end, feat2Index]
    y = dataset.target[start:end] #the outcomes ("correct answers")
    return f1, f2, y

5. Invoke SVM Algorithm

To have Python solve Eq. [1] for us, we’ll need to provide our training data set and correct target labels.

    #Fit the input parameters to an SVM model. Assume a linear kernel.
    #We only provide the training data so we'll have some left for testing
    clf = svm.SVC(kernel='linear')
    clf.fit(XTr, yTr)

6. Analyze Results

We’ll define the accuracy as the ratio of correct test outputs to the total number of test attempts. We’ll see that we got 2 samples wrong out of about 60 test attempts.

    #Now we perform the inference step and analyze accuracy results
    modelOutput = clf.predict(XTe)
    correctOutput = yTe #equivalent to y[trainSize:]
    result = modelOutput == correctOutput
    #get indices of misclassified samples
    wrongIndices = [i for i in range(len(result)) if (result[i] == False)]
    xWrong = np.array(XTe)[wrongIndices]
    accuracy = sum(result)/len(result)
    accuracyStr = "Accuracy is: " + str(round(accuracy*100,2)) + "%"
    print(accuracyStr)

7. Plot Data

Lastly, we plot our data. We also draw the SVM decision line by extracting its slope and intercept from the fitted model.

    # Calculate the SVM decision line for plotting
    w = clf.coef_[0]
    a = -w[0]/w[1] #slope of the decision line, from w.x + b = 0
    xx = np.linspace(650,700)
    TERM = -(clf.intercept_[0]/w[1]) #y-intercept of the decision line
    yy = a*xx + TERM
    plt.plot(xx,yy)
    
    #Plot data points
    plt.scatter(XBenTr[:,0],XBenTr[:,1], label='Benign - Train Data',
                marker='o', color='blue')
    plt.scatter(XBenTe[:,0],XBenTe[:,1], label='Benign - Test Data', 
                marker='o', color='orange')
    plt.scatter(XMalTr[:,0],XMalTr[:,1], label='Malignant - Train Data', 
                marker='x', color='blue')
    plt.scatter(XMalTe[:,0],XMalTe[:,1], label='Malignant - Test Data',
                marker='x', color='orange')
    plt.scatter(xWrong[:,0],xWrong[:,1], label='Incorrect Test Outputs', 
                marker='+',color='red')
    plt.legend()
    plt.xlabel(feat1Name)
    plt.ylabel(feat2Name)
    plt.title("Support Vector Machine Example for Cancer Cell Classification")
    plt.text(400, 0.22, accuracyStr,bbox=dict(facecolor='red', alpha=0.5))
    plt.show()

Below, we have the plot from our work. We achieved about a 96.5% accuracy.

Fig.3: SVM Model Performance – 96.5% Accuracy

NEXT QUESTIONS

In production, we would optimize our accuracy further and consider the computation resources for the training and inference stages. Here are some questions to consider.

  1. How does varying the kernel function affect performance?
  2. How would the code example be modified to accommodate higher dimensions, such as three features? Would that change improve accuracy?
  3. What features are optimal for the above problem?
  4. How does the training and inference time grow with the number of features? Does this agree with theoretical estimates?
  5. What’s the optimal value of gamma and C, as defined in [2]?

REFERENCES

[1] – Machine Learning: An Algorithmic Perspective, 2nd Edition, Stephen Marsland.

[2] – https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html

[3] – https://cvxopt.org/

Let us know when you’d like to discuss how the learning in this tutorial may be applicable to the technical problem you’re trying to solve. Our fresh view of your problem may give you a different, valuable perspective to consider.

How to Design Medical Devices with Dynamic Regulations?

With advances in the technology industry, instantaneous data access is not only ubiquitous, but also expected in the medical device industry. Health delivery and monitoring services are no longer solely in healthcare facilities; these services are becoming more pervasive in the patients’ homes, shifting the medical treatment paradigm and creating opportunities for new disruptive electronic technologies. The shift also engenders challenges to prevent compromise or misuse of patient data. In this blog, we’ll show how one can begin designing a medical device in the context of certain regulations to minimize rework between prototype and production design builds.

Consideration #1: What is a process for designing medical device electronics?

System engineering, which consists of understanding and defining the system requirements (including regulatory and customer expectations), is paramount to the design process. Federal regulations for medical devices (and even aerospace systems) regard traceability as essential. Traceability is a relationship mapping between two or more products of the development process. Increased traceability rigor is required as the criticality of the medical device increases (Class III vs. Class I, for example).

We traditionally design systems from the top down, meaning we look at the overall system perspective and consider all the available requirements from all stakeholders. Requirements are then allocated to appropriate software or hardware designations, broken down into increased specificity, and implemented. Validation asks, “Do we have the right requirements?” while verification asks, “Does the implementation meet the requirements?” This process is called V&V (Validation & Verification).

[Figure: Requirements validation precedes requirements verification over time]

Design and verification occur in opposite directions:

[Figure: Design proceeds top-down, from the highest-level subsystem modules to the lowest; verification proceeds bottom-up, from the lowest-level subsystem modules to the highest]

Design is performed from the top down, which means that high-level software modules are defined before lower-level ones; lower-level modules are combined to make up one or more higher-level modules. Verification works the other way: lower-level modules are verified first to mitigate integration risks, and it’s recommended to perform requirements-based testing before system-level testing.

Consideration #2: What are some landmines to avoid when designing new medical devices?

Functional requirements, such as those pertaining to customer expectations, and regulatory requirements must be considered simultaneously in the design process. Even if the initial goal is only to develop a prototype demonstrating the principle for your stakeholders, neglecting the holistic system perspective – asking who are all of my stakeholders? and what do they care about? – can lead to rework. This is because hardware or software components chosen to meet the functional requirements may not meet regulatory ones. So, for example, embedded computers (and other components) may have to be replaced in the production design. How much additional effort would that inadvertently add to the overall hardware and software development effort? 20%? 40%? From our experience, the cost delta can range from 30% to 50%.

Short-term goals without long-term design strategy context likely increase overall development cost – a common observation among the myriad companies working in this space. Let’s study how certain regulatory rework can be avoided for a medical device example by considering all critical elements sufficiently early in the design process.

Consideration #3: What are some example regulations to consider?

According to the Food and Drug Administration (FDA), regarding medical records, Title 21: Food & Drugs – Part 21 – Protection of Privacy (21.71(5)) states that a “record is to be transferred in a form that is not individually identifiable.” In another statement, 21.70(a) states that “Names and other identifying information are first deleted and under circumstances in which the recipient is unlikely to know the identity of the subject of the record.” [1]

To paraphrase, transmission and storage of customer records must be performed in a way that protects patient confidentiality. Simply transmitting the patient’s name and storing the name and data in one place won’t put federal regulators at ease. Suppose we’re designing a subsystem called the Diagnostic Reporting Function (DRF) of our next medical device, and we want to comply with the Part 21 regulations above. Perhaps we would enforce a requirement on our medical device, such as:

Diagnostic Reporting Function (DRF) shall generate a unique identifier for each heart-rate monitor reading. [REQ-1]

Diagnostic Reporting Function (DRF) shall transfer the generated unique identifier along with the recorded heart-rate monitor reading to the Transmission Endpoint System (TES). [REQ-2]
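
As an illustrative sketch of REQ-1 (hypothetical names; a production implementation would live in the device software and be verified against the requirement), uuid4 is one common way to generate a unique identifier:

import uuid

def make_reading_record(heart_rate_bpm):
    # REQ-1: one unique identifier per reading; note there is no patient identity here
    return {"reading_id": str(uuid.uuid4()), "heart_rate_bpm": heart_rate_bpm}

print(make_reading_record(72))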

[Figure: The medical device’s Diagnostic Reporting Function transmits readings over the internet to the Transmission Endpoint System, where a unique-ID/patient isolator keeps the customer/patient’s identity separate from the medical data]

We won’t get into too much detail on the above figure, but it basically underscores how one may decouple identifiable patient information from patient medical data to make it harder for unauthorized individuals to view sensitive information. Note that transferring patient data within an organization may be acceptable if the Transmission Endpoint System (TES) is operated by a single organization and/or appropriate vendor risks are mitigated [1].

We have now decoupled health records from patient identification, a key regulatory requirement. Clearly this requirement impacts the software development of our example medical device. We also have other regulatory requirements to mitigate other cyber-risks. What about encryption? Sending the output of the medical device in plain text (not encrypted) technically satisfies REQ-1 and REQ-2 above, but it does not satisfy other regulatory requirements or guidelines. So, in the spirit of the V&V figure described previously, we have not yet completed our requirements validation process, since we need more requirements coverage.

Consideration #4:  What questions should we answer in order to develop a more secure medical device?

As an illustration, Remote Patient Monitoring (RPM) patients expect an intuitive user interface (UI) for their monitoring products. For device manufacturers to deliver on this expectation safely, reviewing the overall system architecture is paramount to ensuring that security and privacy risks are mitigated, according to a recent National Institute of Standards and Technology (NIST) article titled Securing Telehealth Remote Patient Monitoring Ecosystem [2].

Based on the NIST Cybersecurity Framework Version 1.1 [3], the same article ([2]) defines a set of Desired Security Characteristics guidelines, pertinent to medical device designers. Let’s pose the guidelines as questions to frame additional requirements for our medical device example (Note that these may be more relevant to the TES mentioned above):

  • Identify
    • How are network assets identified and managed?
    • What are the vendor risks, such as cloud providers or technology developers?
  • Protect
    • How are users identified and authorized appropriately with access control?
    • Is least privilege enforced for every user? 
    • Is data integrity and encryption (including data at rest) enforced?

The protect category especially echoes the principles outlined in the excellent book Zero Trust Networks: Building Secure Systems in Untrusted Networks by Evan Gilman and Doug Barth [4].

  • Detect
    • What are the ways in which cyber threats are detected?
    • How is continuous security monitoring practiced?
    • In what ways are user account behaviors studied with analytics?
    • What type of information do security logs contain?
  • Respond
    • When a cyber threat event is discovered, what are the processes to limit the extent of the damage?
  • Recover
    • After a cyber incident, what are the processes to restore systems affected by the event?
    • How does the cyber incident recovery process handle external and internal parties?

Consideration #5:  What is the impact of missing regulatory requirements in the design process?

Through the guidelines above, we can determine useful considerations to ensure systems are resilient against the most common forms of cyber threats. Neglecting security best practices puts patients (and thus company reputation) in peril; in fact, a hacker attack occurs every 39 seconds on average, according to Michel Cukier of the University of Maryland [5]. To ensure the medical device we’re designing is prepared, let’s look at one question from the above – Is data integrity and encryption (including at rest) enforced?

Data integrity means that the data has not been tampered with. Common ways to check that data has not been altered (mistakenly or intentionally) are checksums and cryptographic hashes; SHA-256 is an example of the latter.
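
Computing such a hash is a one-liner in most languages; here’s a Python sketch with a hypothetical payload:

import hashlib, json

payload = json.dumps({"reading_id": "hypothetical-id", "heart_rate_bpm": 72}).encode()
digest = hashlib.sha256(payload).hexdigest()
print(digest)  # the receiver recomputes this digest; a mismatch means the data was altered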

So, what might some ancillary requirements be? Some examples, though not exhaustive, could be:

DRF shall transmit the computed SHA-256 of the message payload to TES. [REQ-3]

DRF shall encrypt all outbound communication to TES with the Transport Layer Security (TLS) version 1.2 or higher standard using the Advanced Encryption Standard (AES)-256 cipher. [REQ-4]
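
As a sketch of REQ-4’s client side (Python’s standard library shown for illustration; tes.example.com is a hypothetical endpoint, and cipher selection is deployment-specific):

import socket, ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older than TLS 1.2

with socket.create_connection(("tes.example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="tes.example.com") as tls:
        print(tls.version(), tls.cipher())  # negotiated protocol and cipher suite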

If we developed our medical device prototype without considering the full system view of the end product (e.g., only REQs 1-2), we might have chosen a microcontroller unit (MCU) without the relevant encryption capability, resulting in prodigious hardware and software rework – which, in our experience, can increase costs by 30-50% or more.

Fortunately, there is a way to mitigate these types of technical risks, and we can help you accomplish both your short-term and long-term objectives through these disciplined processes. There are, of course, many more risks than we can cover in this tutorial, and as you develop your product, we can help you avoid them. If you’d like us to brainstorm with you, or if you’d like us to cover a topic in a future blog article, you’re welcome to comment or send us a message on our Contact Us page.

References:

[1] FDA Title 21: Food & Drugs – Part 21

[2] Securing Telehealth Remote Patient Monitoring Ecosystem       https://www.nccoe.nist.gov/sites/default/files/library/project-descriptions/hit-th-project-description-final.pdf

[3] NIST Cybersecurity Framework Version 1.1 https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.04162018.pdf

[4] Zero Trust Networks: Building Secure Systems in Untrusted Networks by Evan Gilman and Doug Barth https://www.oreilly.com/library/view/zero-trust-networks/9781491962183/

[5] Hackers Attack every 39 Seconds (NBC News) http://www.nbcnews.com/id/17034719/ns/technology_and_science-security/t/hackers-attack-every-seconds/

Tutorial: How to Solder PCBs?

Disclaimer: This tutorial is for simplified demonstration/educational purposes and is not intended for production applications. We cannot be held responsible for any misuse, errors, damages, or losses. Use at your own risk. Production implementations of our products follow much more stringent quality control processes than those shown in this tutorial, per our clients’ requirements.

Printed Circuit Boards (PCBs) are ubiquitous – they’re in cell phones, TVs, computers – the list goes on. When companies are developing their medical device, home automation, or even aerospace application, prototyping the concept for customer validation prior to large-scale manufacturing is critical. In this scenario, hand-soldering comes in handy for smaller quantities and for functional validation.

In this tutorial, we’ll show how PCBs can be soldered – principles we hope will help you get validation from your clients, just as the circuit shown below helped our client obtain critical feedback from his customers. Speed and quality are key to winning in today’s competitive markets.

It’s best to start by defining our goal, which is to solder all the through-hole and surface mount device (SMD) components for the PCB shown below. The former components penetrate all layers of the board, while the latter reside on only one side of the board.

Assembled PCB (Our Goal)

Our approach will be to solder the SMD devices first. SMD soldering can be accomplished with solder paste, which when heated, will re-flow and solder our components. For this step, it is possible to use a metal cutout, referred to as a stencil, that will apply the solder paste at the SMD pad locations (Step 1A). In this tutorial, we’ll show both; it is possible to get by without a stencil, though not using one usually is a bit messier at the beginning and requires some rework to remove excess solder (Step 1B).

Processing (Batching) Several Boards Simultaneously

1A. Use Stencil to Apply Solderpaste (Optional)

We can choose to apply solderpaste by using a stencil, which is a metal cutout that aligns with our PCB.

PCB Stencil

Next, we place the stencil on top of the PCB and align the cutout exactly with the SMD pads. Taping the PCB and the stencil in place is helpful.

PCB Stencil Aligned on top of PCB (SMD Pads aligning through stencil hole cutout)

Now, we squeeze the solder paste from our syringe and apply it on the stencil.

Solder paste Application

Additionally, we spread the solder paste with a credit-card-like applier.

Solder paste spread with card

Finally, we remove the stencil and reveal the PCB with the applied solder paste.

PCB with Solderpaste

1B. Apply Solder Paste without Stencil

Even without a stencil to cleanly apply solder paste, we’re going to prove that it’s possible to develop a quality circuit. For the purposes of this tutorial, (and to prove our point) we made a mess, as shown below.

Solder paste applied without stencil use

2. Solder SMD Components with Heat Gun

For this video, we will demonstrate SMD soldering without stencil use (from Step 1B). Even with this method, it is still possible to complete the board, although solder bridge defects are more likely and need to be reworked in the next step. It’s also possible to solder SMD components with a soldering iron; some elements of that process are covered in step (3) below.

SMD Soldering with Heatgun (Video Accelerated)

3. Solder Through-hole Components & Remove Solder Bridges

In this step, through-hole components are soldered. The PCB metal contact should be heated first with the soldering iron, and then the pin of the component, preventing what are referred to as “cold solder joints.” In this section, we’ll focus on how to remove solder bridges. The key is to use soldering flux, a chemical that helps remove the metal oxides generated by high heat; this lowers the soldering temperature and time, reducing the probability of solder bridges or thermal damage to the board electronics.

Solder Bridge Removal

4. Clean-up

Soaking the PCB with 99% Isopropyl Alcohol for a few seconds and scrubbing the circuit with a non-static or conductive brush helps clean the PCB. (A regular brush may induce static electricity during cleaning and thus potentially damage semiconductors on the PCB). The clean-up step removes the solder flux applied in previous steps. Since the solder flux is corrosive, failing to remove the flux may damage the PCB over time.

Isopropyl Alcohol for Electronics Cleaning
Conductive Brush for Electronics Cleaning

We demonstrate this cleaning process.

PCB Cleaning with Isopropyl Alcohol & Non-ESD Brush

And finally, drying with anti-static wipes, such as Kimwipes, helps accelerate drying and prevent a post-dry chalk-like residue.

PCB Drying with Non-ESD Wipes

5. Functional Testing

We need to conduct a final functional test after the clean-up step to ensure there are no clear defects. One reason is that during clean-up, solder balls may come loose and get stuck between adjacent, tightly spaced pins, causing a short (similar to a solder bridge). We caught this issue a few times while making the circuits for this blog and were sure to remove them. The pump used for functional testing in the next video is from one of our industry partners, Simply Pumps!

Functional Test

6. Visual Inspection (Final)

Lastly, we ensure the board is aesthetically sound. Any observed defects are noted and corrected. With the process outlined in this tutorial, we have successfully soldered several boards that pass functional test. Happy soldering!

Boards Batch Successfully Soldered & Passed Functional Verification

The datasheet for the motor driver is available here:

Tutorial: Network Protocol Basics

In the previous tutorial, we discussed how to load a mobile app onto an android phone for basic testing. In this tutorial, we’ll go over the basics of network systems and protocols to better understand how that app works. This understanding will help us customize a solution for you to connect your system to others. If you have questions on how to enable your customization, please reach out and we’ll provide a complimentary consultation to see how we can help.

Due to great advances in technology such as the internet and computers, our lives are more connected than ever. For almost any website that you browse, your computer connects to another computer and both sends and receives data. This is a simplified view:

Fig. 1 – Computer/Server Interaction

Right before you began reading this page, your computer or mobile device connected to our server and transmitted bursts of data, in which each burst looks like Fig. 3. Let’s dig deeper. Every computer on the internet has an Internet Protocol (IP) Address, which is used to distinguish it from the rest of the web. At the time of writing, simplonics.com has an IP address of 192.0.78.128, which my computer uses to connect to it. Another important concept is the port abstraction: ports are operating system interfaces that provide a concept similar to mailboxes. When you send mail to an apartment address, which is similar in concept to an IP address, you also have to specify the apartment number, which is analogous to a port in the networking world. HTTPS, which is HTTP plus TLS, uses port 443 by convention.

By “pinging” simplonics.com from a command-line terminal, as shown below, we’re able to find its IP address:

Fig. 2 – Ping server for IP address

But how does your computer know that simplonics.com corresponds to 192.0.78.128? Great question! It turns out that there are servers on the web, called Domain Name System (DNS) servers, that map domain names (e.g., simplonics.com) to IP addresses. There can be a many-to-one mapping between domain names and IP addresses (but not the other way around). Common reasons include reverse-proxying for security enhancements and load balancing, which high-load servers, like Google’s, use to distribute web traffic based on geographical considerations. These topics are advanced and out of scope for this tutorial, but quite interesting.
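
You can also do the lookup programmatically; for example, in Python (the printed address may differ as DNS records change):

import socket
print(socket.gethostbyname("simplonics.com"))  # e.g., 192.0.78.128 at the time of writing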

Fig. 3 – OSI layering with packet representation

OK, so we are building up our understanding of Fig. 3. The IP protocol helps get data from one computer to the other, and the Transmission Control Protocol (TCP) is used to help ensure data reliability.

How does TCP ensure data reliability?
Well, this transport layer protocol is complex, but in a nutshell, the sender retransmits data whenever the receiver does not acknowledge receiving a certain range of numbered packets.

Alright! We successfully sent a packet through the internet with this understanding! What’s next?
Furthermore, Transport Layer Security (TLS) is part of the presentation layer of the OSI model. At the sender side, TLS encrypts each packet using an algorithm called a cipher, and at the receiver side, TLS decrypts each packet; AES is one example cipher. Cryptographic key algorithms come in two primary variants, namely symmetric and asymmetric. The former uses the same key for encryption and decryption, whereas the latter (public-key cryptography) uses separate public and private keys. On the web, TLS typically uses public-key cryptography during the handshake to establish a shared symmetric key, which then encrypts the application data.

Lastly, we’ve arrived at the Hyper-Text Transfer Protocol (HTTP), which is an application layer protocol. This protocol sends the request to “GET” a webpage, for example.

So, in summary,

A Sender:
1. Finds the IP address of the target server with DNS
2. Builds the HTTP request (for example, a “GET”)
3. Encrypts the HTTP payload with TLS
4. Places the encrypted payload in a TCP segment with additional information (ports, sequence numbers)
5. Wraps the TCP segment in an IP packet that includes the source and destination IP addresses
6. Sends the packet to the internet

A Receiver (the target server):
1. Receives the packet
2. Decrypts the payload
3. Extracts the HTTP request
4. Processes the request and sends data back (mirroring the sender’s steps above)
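
To tie the steps together, here’s a minimal sketch in Python (standard library only; the request line and buffer size are illustrative):

import socket, ssl

host = "simplonics.com"
ip = socket.gethostbyname(host)                    # 1. DNS: name -> IP address
ctx = ssl.create_default_context()
with socket.create_connection((ip, 443)) as tcp:   # TCP/IP: reliable stream to port 443
    with ctx.wrap_socket(tcp, server_hostname=host) as tls:  # TLS: encrypt the stream
        request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        tls.sendall(request.encode())              # HTTP: application-layer request
        print(tls.recv(200).decode(errors="replace"))  # first bytes of the response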

This is a high-level view of how the web works, and with this understanding, we are able to build pretty complex, yet modular systems. Please like this page if you found this useful and let us know your thoughts on how we can improve this article.

Tutorial: Stock market analysis using the Hurst Exponent in C#

Disclaimer: This tutorial is for simplified demonstration/educational purposes and not intended for production applications. We cannot be held responsible for any misuse, errors, damages, or losses. Use at your own risk.

1. Overview

As a result of requests from our clients, we’ve decided to publish an article about the Hurst Exponent (H), a widely used econometric measure for stock market studies in investment applications. H indicates the long-term memory of a time series Y(t) by examining the series’ tendency to regress to the mean; H is a number between 0 and 1. When H is close to 0.5, the series behaves like a random walk, with no long-term memory. Values closer to 1 indicate persistence: increases in Y typically correlate with increases in Y at future points in time. Values closer to 0 indicate anti-persistence (mean reversion), with long-term switching between high and low sequence values.

There is a gamut of implementations in MATLAB and Python, but not many in C#. In this discussion, we describe our implementation in the following sections:

  • Current Capability
  • Implementation
  • Test Results
  • Possible Enhancements & Next Steps
  • Classes & Methods Declaration
  • Future Thinking
  • Appendix – Code Files & References

2. Current Capability

We developed a Hurst Exponent calculation in C# that:

  • Performs an R/S Hurst Exponent (uncorrected) calculation for inputs whose lengths are integer powers of two
  • Implements a least squares linear regression for the final R/S calculation
  • Reads input data series from a CSV file
  • Takes care of some exception handling conditions
  • Was tested with Python Hurst Library
  • Passed test conditions when inputs Type==change and simplified==true were set [1]
  • Was tested with input sizes of 128, 256, 512, 2048, 4096, and 8192

3. Implementation

Fundamentally, we implemented the Hurst Exponent by the conventional R/S method. The variants of this method are apparent in different applications with various assumptions on how the input data is modeled [1]:

  • Change
  • Price
  • Random-Walk

We implemented the change variant, which is described below (and can result in different answers based on the variant). Support for different variants is another area of possible improvement we can focus on. Let’s walk through the algorithm.

Our code exposes the following interface

public Tuple<double, double> calcHurstExp(double[] inputData)

to calculate the Hurst Exponent. Here are additional details of what this function does.

1.  First, we determine sizes of our division arrays. Assume N = 512 elements in the inputData array. In our case, we have the following table.

Division (D) | Chunk Size (Cs)
0 | 512
1 | 256
2 | 128
3 | 64
4 | 32
5 | 16
6 | 8
7 | 4

2. Next, we loop through each division. For each division, we calculate the normalized R/S value and keep it for the linear regression later (one of the last steps below). For this example, let’s choose D = 2.

var divRS = GetDivR_S(inputData, div);

3. Furthermore, we need to loop through the input array and create N/Cs chunks for analysis. For each double[] chunk, we need to calculate the R/S value.

double RS = getChunkRS(chunk);

4. To calculate the non-normalized R/S value for a given chunk, we follow these steps:

4.1. Find the mean of the chunk

4.2. Find the standard deviation (S) of the chunk

4.3. Create a mean-centered series

4.4. Find the cumulative deviation of the mean-centered series

4.5. Calculate the range(R) of the cumulative deviation

4.6. Calculate the chunk’s R/S ratio (R divided by S)

5. Furthermore, we average the R/S values of all chunks in a given division

double RS_Div = RSArr.Average();

6. Additionally, Log[RS_Div] and Log[size(chunk)] need to be determined so we can linearly fit the power curve corresponding to the overall data. (The fitted slope is invariant to the logarithm base.)

Log_RS_Div_Arr[div] = Math.Log(RS_Div, mathBase);
Log_Size_Div_Arr[div] = Math.Log(chunkSize, mathBase);

7. Finally, by using the least squares linear regression, we can find the slope of logarithm of R/S with respect to the logarithm of the division size. The slope of this line is the Hurst Exponent.

Tuple<double, double> HC = LinearRegression(Log_Size_Div_Arr, Log_RS_Div_Arr);
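
For cross-checking outside C#, here is a minimal Python sketch of steps 1-7 (assuming, like our C# code, a power-of-two input length and non-constant chunks; this is the uncorrected R/S method, not our full implementation):

import numpy as np

def chunk_rs(chunk):
    s = chunk.std()                       # 4.2: standard deviation (S); assumes S > 0
    z = np.cumsum(chunk - chunk.mean())   # 4.3-4.4: cumulative deviation of mean-centered series
    return (z.max() - z.min()) / s        # 4.5-4.6: range (R) divided by S

def hurst_rs(series, base=10.0):
    series = np.asarray(series, dtype=float)
    log_rs, log_size = [], []
    size = len(series)                    # step 1: division sizes n, n/2, n/4, ...
    while size >= 4:
        chunks = series.reshape(-1, size)              # steps 2-3: N/Cs chunks
        rs = np.mean([chunk_rs(c) for c in chunks])    # step 5: average chunk R/S
        log_rs.append(np.log(rs) / np.log(base))       # step 6: log-log points
        log_size.append(np.log(size) / np.log(base))
        size //= 2
    slope, _ = np.polyfit(log_size, log_rs, 1)         # step 7: least-squares slope = H
    return slope

print(hurst_rs(np.cumsum(np.random.randn(512))))  # a random walk gives H near 0.5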

4. Test Results

Currently, our implementation has been tested against the Python Hurst Library [1] as well as the MATLAB/Octave example [2]. When our code was tested against the Python library, the prototype H, c, data = compute_Hc(series, kind='change', simplified=False) was used, yielding results within 1-2% [2]. Deviations from identical results are mostly due to the use of different division windows. Our code was significantly faster than the Python implementation. Of course, more rigorous testing will be needed for accuracy tweaking.

Figure 1 – C# Basic Hurst Testing (Simplonics Implemented)

Figure 2 – Hurst Verification in Python based on [1]

Figure 3. Python with modified window sizes based on [1]

Figure 4 – MATLAB/Octave Result based on [2]

Figure 5 – MATLAB/Octave Result based on [2]

5. Possible Enhancements & Next Steps

Even though the C# implementation agrees with several prominent benchmarks, it (as well as some of those benchmarks) fails to work under all circumstances. Therefore, this application is not ready for production deployment until Simplonics implements these enhancements. We outline several steps below that we can follow to make the application more accurate, faster, or more memory-friendly. Some involve resolving current limitations:

  1. Adding support for inputs whose sizes are not integer powers of 2
  2. Theoretical/Empirical R/S Correction, such as Anis-Lloyd/Peters Correction
  3. Ensure program works correctly with different input types
    • Currently, there are some corrections and other algorithms needed for reliable accuracy that are not yet implemented
      • The program sometimes outputs a Hurst indicator slightly above one as a result of not-yet implemented corrective measures.
  4. Optimize Speed
    • This becomes more critical as the size of the input increases. Forms of parallel algorithms can be employed
  5. Utilize other algorithms, such as wavelets, FD4, and others to avoid biases that exist with current R/S calculation method as input size increases

6. Classes & Methods Declaration

C# Files:

  • FileHandler.cs – Opens and reads files, such as CSV files for data inputs

class FileHandler
{
    public string GetTestPath();
    public double[] ReadCSV(string filePath);
}

  • HurstExponent.cs – C# file that performs the Hurst exponent calculation on a given input. The main function is here.

namespace HurstExponential
{
    class HurstExp
    {   
        public double mathBase = 10;      
        public double StdDev(double[] arr, double mean, int N)
        public double Mean(double[] arr, int N)
        public double[] MeanCenteredArr(double[] arr, int N, 
                                        double mean)
        public bool aEqual(double x, double y)
        public double[] CumDevArr(double[] mcArr, int N)
        public double getChunkRS(double[] chunkArr)
        public void PrintArray(double[] X)
        private void assert(bool v)
        public Tuple<int, double[]> GetDivR_S(double[] arr, int div)
        /* Gets a specific division's R/S ratio as an array of each
         * chunk's natural (non re-scaled) R_S value*/
        public double[] Slice(double[] arr, int start, int end)
        private bool CheckForValidInputs(double[] inputData)
        public void Print_RS_Table(double[] Log_RS_Div_Arr,
                                   double[] Log_Size_Div_Arr)
        public Tuple<double, double> calcHurstExp(double[] inputData)
        /*Highest-Level function that calculates the Hurst Exponential
         * Assumes input length is an integer power of 2 */
        
        public Tuple <double, double> LinearRegression(double[] X,
                                                       double[] Y)
        /* Calculating a Least Squares Regression -  
         *Returns slope and yint of linear regression for a best fit curve*/
        class HurstExpWrapper
         {
           static void Main(string[] args)
         }

}
}

  • UnitTest.cs– Unit tests that that performs the Hurst exponent calculation on a given input.
public class UnitTests
{
    bool AlmostEqual(double X, double Y, double t)
    public bool performUnitTest(double expH, double expC,
                  string fn = "pyTest_256.csv", double t = 0.02)
    public bool MainUnitTests()
}

7. Future Thinking

What are your thoughts on these additional concepts that Simplonics can help you realize?

  • Econometric Developments
    • Dickey-Fuller Test
    • Other algorithms
  • C# GUI Implementation
    • For an enhanced customer user experience and integration with your existing code base
  • Network Programming
    • Hosting your financial solution on a globally accessible platform for your different customers to use

8. Appendix:

a. Code Files

Available upon request here

b. References:

[1] https://pypi.org/project/hurst/

[2] http://prac.im.pwr.edu.pl/~hugo/RePEc/wuu/hscode/hurst.m

Tutorial: Mobile App Local Deployment

Disclaimer: This tutorial is for simplified demonstration/educational purposes and not intended for production applications. We cannot be held responsible for any misuse, errors, or damages. Use at your own risk.

In this tutorial, we’ll discuss how to load a simple http2demo android application in developer mode. We will use an android phone, but analogous method exist for IOS devices. Internally, the demo performs network system calls that send an HTTP GET request to a server and prints the response on the demo’s screen. This is just a really simple example for pedagogical purposes, and we can build a much more complex product tailored to your needs. 

1. Turn on Developer Mode on your phone.  

    • To do this, go to Settings.
    • Click About Phone several times until the developer activation confirmation message pops up (if developer Options is not yet activated)
    • You should see a tab called Developer Options in Settings. Go into that tab and turn ON Developer Options.

2. Next, connect the phone to your computer via USB. Be careful to select a USB cable capable of charging AND data transfer; cheaper cables may support only charging. Select PTP on the phone (Picture Transfer Protocol).

3. Drag the mobile app APK file onto the phone. 

4. Find the application installer on the phone. You can do that by looking for Files->Installers. You’ll get some warning messages about a non-signed application when installing; but that’s alright for testing.

5. Next, run the application! In this case, the application name is http2demo.

6. Below, you’ll see some screens of this sample application, which is available as a free demo upon request here, with descriptions below. Of course, you can do this process automatically by building and deploying directly with Android Studio (but that requires getting your Gradle files in proper shape). Steps 1-2 are still necessary in this alternate option.

 

 

HTTP Demo Start Screen


Click on the screen to get hidden menu. Click on HTTP Get to connect to a sample server.


Received response from Server

Extra Learnings! – Gradle Files

Gradle files are used to build your Android code. Unfortunately, many demos online have non-working Gradle file configurations. For example, the “Compile” keyword is now deprecated and replaced with “implementation.” Below is a Gradle file we made. Important takeaways are to look at the SDK versions (targetSDKVersion, compileSDkVersion, and minSdkVersion). Android Studio will yell at you if your versions are not correct; however, there are many correct combinations. As a tip, try to keep compileSdkVersion and targetSdkVersion identical to avoid compile errors.

//Tested with Android 7.1.1
apply plugin: 'com.android.application'

android {
compileSdkVersion 28
defaultConfig {
applicationId "com.example.httpdemo2"
minSdkVersion 22
targetSdkVersion 28
versionCode 1
versionName "1.0"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
}

dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation 'com.android.support:appcompat-v7:28.0.0'
implementation 'com.android.support:support-v4:28.0.0'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'com.android.support.test:runner:1.0.2'
androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'
implementation project(':httpcomm') //httpcomm is a module we developed
}

Tutorial: How to Use a Serial Interface To Blink an LED?

1. Overview

Disclaimer: This tutorial is for simplified demonstration/educational purposes and not intended for production applications. We cannot be held responsible for any misuse, errors, or damages. Use at your own risk.

One of the most common questions someone learning electronics may think about is how to get a computer to perform basic operations, such as controlling lights and other systems. A simplified version of this problem is learning how to get an LED blinking from your computer.

There are many ways to get the job done and common solutions typically require micro-controllers. However, is it possible to accomplish this without an understanding of micro-controllers and programming.

Let’s go into the basics of what a serial interface is. A serial interface is a way for the computer to communicate to other systems by sending and receiving a stream of 1’s and 0’s, which dictates what message is sent. For example, for a computer to send the letter ‘A’, it needs to send the ASCII representation (0x41), which is a way for letters to be translated into binary representations. In this case, 0x41 is a hexadecimal representation for ‘A’ and has an associated binary representation of 0b0100_0001. The least significant bit is sent first.

Order of bit transmission: [StartBit] [Least Significant Bit] …[Most Significant Bit] [Stop Bit(s)]

This could look like: [Start Bit] [01 00 00 01] [Stop Bit(s)]

A transmitted 1, a logical high, represents a negative voltage and a 0, a logical low, represents a positive voltage (due to the electrical/logical definition of Serial Interfaces).

2. Setting up Your computer

So, we need to get our computer hooked up. To do this, we need a USB to Serial Adapter.
a device that converts USB signals to a different form that will make it more convenient for us. I happened to use Office Depot’s Ativa, but any one you purchase should work (provided it is compliant to RS-232).

First, plug the USB to Serial to your laptop. We now need to figure out the name of our adapter. On Windows 10, we can open up Device manager to figure it out. Below, we see my device’s name is COM3, which we will need later. Yours may be different, so please check.

Next, we need to open PUTTY (or any terminal that can perform serial communication). Set the Serial Line to the device name and the speed to 75. Make sure to select the connection type of Serial. Note: if you do not select the correct (Baud Rate) speed of 75, the LED will blink very rapidly for your eyes and it will appear that it isn’t blinking, even though it is.

If you click open and you see the image below, you can proceed further. Otherwise, recheck your PUTTY settings above or go to the troubleshooting section at the end of the post.

3. Connect LED!

So now we need to get our hands dirty. Grab three wires and plug them into the adapter’s pins of 2,3, and 5. Pins 2 and 3 are the RECEIVE (Yellow below) and TRANSMIT (RED Wire below) of the computer’s serial interface, respectively. Pin 5 (Black below) is ground (GND). Next, hook up a resistor whose anode is connected to the Transmit Pin (3). Make sure to add a resistor in series with the LED, where the resistor is also connected to ground. Double check the connections to see if they are consistent. If you flip the LED, the blinking example will still work, but will invert. Also, if you short the Receive and Transmit pins of the serial interface, you will do a loop back test in which the computer will print out what it send. A good test indeed!

Here is a closer look at the serial interface.

Next fire up PUTTY based on the setup above and try typing on the keyboard. Every time you type, you should see a blink! And, you will see your character appear on your screen!

4. TroubleShooting Serial Driver Issue

If you couldn’t get the blinky demo working, it could be that the USB to Serial Adapter’s drivers aren’t set up properly. Here is a fix that worked for me. Open up Device Manager again. Go to your device, right click, and then click update driver.

Go for “Browse my computer” for driver software.

Click “Let me pick” below.

Select the appropriate driver and then click Next. If you have a CD, you can also click “Have Disk.” In my case, I just clicked Next.


Now give the demo another try and dig in to troubleshoot if other issues could be the culprit.

Please let us know your thoughts and how we can improve this! Thanks for reading!

We at Simplonics can deliver robust connectivity solutions tailored for your application.