In today’s dynamic world, there are many applications for artificial intelligence, including pattern recognition (vision, speech recognition, fraud detection), intelligent behavior (learning, cognition, recommendation systems), and advanced autonomous and cognitive systems (robots, cars, etc.).
Recently, machine/deep learning has become increasingly important in healthcare, including work in diagnosing diabetic conditions, detecting metastases in cancer pathology, autism subtyping by clustering comorbidities, and large-scale phenotyping from observational data.
Despite these advances, the application of machine/deep learning in healthcare still faces significant challenges. Many of them stem from the desire to make personalized predictions using data generated (and managed) by the medical system, even though that data is collected primarily to support care, not subsequent analysis.
In this article, we present successful state-of-the-art applications of deep-learning techniques applied in healthcare and medicine. As a custom software development company, we are dedicated to the implementation of HIPAA-compliant advanced machine-learning solutions in the healthcare domain that bring our customers a competitive advantage through unique solutions quickly delivered to market.
Deep learning is a branch of machine learning based on complex data representations at a higher degree of abstraction, which are obtained by a chain of learned, nonlinear transformations. Deep-learning methods are usually applied in important areas of artificial intelligence such as computer vision, natural language processing, speech and sound comprehension, and bioinformatics. This learning is based on advanced discriminative and generative deep models, with particular emphasis on practical implementations.
The key elements of deep learning are the classical neural networks, their building elements, regularization techniques, and deep model-specific learning methods. Other approaches involve deep convolution models that can be applied in image classification and natural language processing. Generative deep models can be applied in machine vision and natural language understanding. All these techniques can lead to sequence modeling via deep feedback neural networks and can be applied in the field of robotics and self-driving cars.
Deep-learning methods can be implemented using modern dynamic languages (e.g., Python, Lua, and Julia) and modern deep-learning application frameworks (e.g., Theano, TensorFlow, and Torch).
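The "chain of learned, nonlinear transformations" described above can be illustrated with a minimal, framework-free sketch in plain Python. The weights here are hypothetical, hand-picked numbers; in practice a framework such as TensorFlow would learn them from data:

```python
def relu(x):
    # Nonlinearity applied element-wise between layers
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    # One linear layer: y = W.x + b
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

# Two stacked layers: each applies a linear map followed by a nonlinearity,
# producing progressively more abstract representations of the input.
W1 = [[0.5, -0.2], [0.1, 0.8]]; b1 = [0.0, 0.1]
W2 = [[1.0, -1.0]];             b2 = [0.2]

def forward(x):
    h = relu(dense(x, W1, b1))   # first learned representation
    return dense(h, W2, b2)      # final prediction

print(forward([1.0, 2.0]))
```

Real deep models stack dozens of such layers, but the principle is the same: each layer re-represents the output of the previous one.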
Biomedical imaging (computer vision)
Computer vision (artificial sight) is the ability of computers to recognize images and understand objects. It uses digital cameras, analog-to-digital conversion, and digital signal processing. One type of deep learning, known as a convolutional neural network (CNN), is very suitable for analyzing images used in healthcare, such as MRIs and X-rays.
According to computer science experts at Stanford, CNNs are intrinsically suited to image processing, since they operate efficiently and scale well to large images. Today, some CNNs approach, or even exceed, the accuracy of human diagnosticians in recognizing important features in diagnostic images.
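A CNN's efficiency on images comes from small, shared filters that slide over the input, so the same few weights are reused at every position. A minimal framework-free sketch of a single convolution step, using a hypothetical 3×3 vertical-edge filter:

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (no padding): slide the kernel over the image."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A tiny "scan" with a vertical edge: dark (0) on the left, bright (1) on the right.
image = [[0, 0, 0, 1, 1]] * 4

# A simple vertical-edge filter: responds where intensity changes left to right.
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

feature_map = conv2d(image, kernel)
print(feature_map)  # zero over uniform regions, large values at the edge
```

Trained CNNs learn hundreds of such filters (edges, textures, lesion-like blobs) rather than having them hand-specified, and stack many convolution layers.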
One study in the Annals of Oncology (2018) showed that a convolutional neural network trained to analyze dermatology images identified melanoma with 10% more specificity than human clinicians (https://www.sciencedirect.com/science/article/pii/S0923753419341055). Even when human doctors had additional information on patients, such as age, sex, and the site of the suspect feature on the patient’s body, the CNN outperformed the dermatologists by nearly 7%.
Deep-learning techniques are not only highly accurate, but also very fast. Researchers have designed a deep neural network capable of diagnosing certain neurological brain conditions, such as stroke and brain hemorrhage, 150 times faster than human radiologists. In just one second, the tool processes the image, analyzes its contents, and raises an alert for any problematic clinical finding. In neurological conditions, "time is brain": a rapid response is critical in treating acute neurological illness, so any tool that decreases time to diagnosis may improve patient outcomes.
AI scientists are also applying deep neural networks to create medical images, not just read them. A team from NVIDIA, the Mayo Clinic, and the MGH & BWH Center for Clinical Data Science developed a method that uses generative adversarial networks (GANs), another type of deep learning, to create highly realistic medical images from scratch.
The networks use patterns learned from real scans to generate synthetic CT or MRI images. This randomly generated, diverse data can enhance diagnostic and predictive research without raising concerns about patient privacy or consent.
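The adversarial idea behind such generators can be sketched on toy one-dimensional "data" in plain Python. This is a heavily simplified, hypothetical illustration (a shift-only generator and a logistic discriminator with hand-derived gradients), not the architecture used in the NVIDIA/Mayo work:

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Real "scans" are just numbers drawn around 4.0. The generator must learn
# to shift standard-normal noise onto that distribution.
REAL_MEAN, LR, STEPS = 4.0, 0.05, 3000

b = 0.0           # generator G(z) = z + b (a single learned shift)
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c)

for _ in range(STEPS):
    x_real = random.gauss(REAL_MEAN, 1.0)
    g = random.gauss(0.0, 1.0) + b     # fake sample from the generator

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * g + c)
    w += LR * ((1 - d_real) * x_real - d_fake * g)
    c += LR * ((1 - d_real) - d_fake)

    # Generator step: move b so that fakes fool the updated discriminator.
    d_fake = sigmoid(w * g + c)
    b += LR * (1 - d_fake) * w         # gradient of log D(G(z)) w.r.t. b

print(round(b, 2))  # the learned shift drifts toward REAL_MEAN
```

The two networks improve in lockstep: the discriminator learns to tell real from fake, and that pressure forces the generator's samples toward the real distribution, which is exactly how realistic synthetic scans emerge at scale.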
Natural language processing for patient records
Natural language processing (NLP) is a branch of artificial intelligence that deals with the ability of machines to understand and interpret written or spoken human language. Simply put, the goal of NLP is to teach computers to understand language as well as humans do.
In healthcare, research has shown that deep-learning tools can still struggle to identify important clinical information, establish meaningful relationships within it, and transform those relationships into actions for doctors and patients.
Recent developments show that while deep learning surpasses other machine learning methods for processing unstructured text, there are still steps to be taken for those tools to be successfully applied in electronic health records (EHRs). The implementations are usually based on Google’s latest algorithms, BERT and SMITH (developed in TensorFlow, the Python-based machine learning library). BERT stands for Bidirectional Encoder Representations from Transformers. It is a powerful deep-learning model dedicated to language processing, and it is adopted and implemented by the Google Search engine.
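At the core of BERT-style models is self-attention, which lets every token weigh every other token in both directions ("bidirectional"). A bare-bones sketch of scaled dot-product attention over hypothetical toy vectors, with no learned projection weights:

```python
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(embeddings):
    """Each token's output is a weighted average of ALL tokens' vectors,
    with weights from scaled dot-product similarity (both directions)."""
    d = len(embeddings[0])
    outputs = []
    for q in embeddings:
        # Similarity of this token to every token, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in embeddings]
        weights = softmax(scores)  # weights sum to 1
        outputs.append([sum(w * v[i] for w, v in zip(weights, embeddings))
                        for i in range(d)])
    return outputs

# Toy 2-D "embeddings" for a three-token clinical phrase.
tokens = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
out = self_attention(tokens)
print([[round(v, 3) for v in row] for row in out])
```

Real BERT adds learned query/key/value projections, many attention heads, and deep stacking, but this context-mixing step is what lets the model resolve, for example, which condition a pronoun in a clinical note refers to.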
Researchers have discovered that finding patterns among textual data can increase the accuracy of diagnosis, prediction, and overall performance of the learning system. However, free-text understanding is challenging, as EHRs may be in different languages and use different styles. Providing enough high-quality data to accurately train models is also problematic. Data is sometimes biased towards particular age groups or ethnicities; different characteristics could create models that are not equipped to accurately diagnose a broad variety of patient conditions.
However, deep learning is still a very promising tool that can perform free-text analytics, and a few pioneering developers are finding ways to break through the existing barriers. A team from Google, UC San Francisco, University of Chicago Medicine, and Stanford Medicine developed a deep-learning and natural-language-processing algorithm that analyzed more than 46 billion data points from more than 216,000 EHRs across 2 hospitals.
The tool improved on the accuracy of traditional approaches for identifying unexpected hospital readmissions and for predicting length of stay and patient mortality. As with other deep-learning applications for EHR data, this was achieved without an expert manually selecting important variables. Instead, the model analyzed tens of thousands of data points for each patient, including free-text notes, and identified on its own which data mattered for a particular prediction.
Although this project was only a pilot study, Google researchers claim that the findings could bring huge benefits for hospitals and health systems looking to reduce negative outcomes and become more proactive about delivering critical care.
Predictive analytics in drug discovery and genomics
Deep learning is a great tool for researchers and pharmaceutical businesses looking to discover new patterns in relatively unexplored data sets, since many precision-medicine researchers don't yet know exactly what they should be looking for. Deep-learning models have the potential to comb through candidate compounds and their biochemical ingredients and identify which components are best suited to a particular medicine (https://healthitanalytics.com/features/what-is-deep-learning-and-how-will-it-change-healthcare).
The world of genetic medicine is still new enough that unexpected discoveries may occur, creating an exciting proving ground for innovative approaches to targeted care. The US National Cancer Institute is conducting research within a number of projects focused on leveraging machine learning for cancer discoveries. The application of predictive analytics and molecular modeling could help healthcare researchers understand how and why certain cancers form in certain patients.
Deep-learning technologies can speed up the process of data analysis, reducing the processing time for key components from weeks or months to just a few hours. Researchers at the MIT Computer Science and Artificial Intelligence Laboratory are leveraging deep learning to advance personalized (precision) medicine. The MIT Clinical Machine Learning Group is investigating the use of deep learning in precision medicine to better understand disease processes and design effective treatments for diseases like type 2 diabetes. Its tools offer human clinicians a detailed rationale for each recommendation, fostering trust and allowing providers to act with confidence when they decide to overrule the algorithm.
Google is also a significant participant in clinical decision support – in one particular example, for eye diseases. Its UK-based subsidiary, DeepMind, is developing a commercial deep-learning tool that can identify more than 50 different eye diseases and provide treatment recommendations for each one. In a supporting study published in Nature, DeepMind and Moorfields Eye Hospital found that the tool is just as accurate as a human clinician, and it can speed up the access to care by reducing the time for an exam and diagnosis.
Application in ECG processing
ECGs consist of a set of physiological signals that represent the changes of the electrical activity of the heart over time. The information recorded in an ECG is used to analyze heart functions like heart rate and rhythm, and it is frequently used to identify abnormalities in the electrical system of the heart. There are several databases online combining ECG data with information about arrhythmias and patient condition, available to the community with different research aims and goals.
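Before any deep model sees an ECG, basic signal-processing steps such as R-peak detection are often used to derive heart rate and rhythm. A simplified, framework-free sketch (threshold-plus-refractory-period peak picking on a synthetic trace; real pipelines use more robust detectors such as Pan–Tompkins):

```python
def detect_r_peaks(signal, threshold=0.5, refractory=40):
    """Return indices of samples that exceed the threshold and are local
    maxima, skipping `refractory` samples after each detection."""
    peaks, i = [], 1
    while i < len(signal) - 1:
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1]
                and signal[i] >= signal[i + 1]):
            peaks.append(i)
            i += refractory          # skip past the rest of this QRS complex
        else:
            i += 1
    return peaks

def heart_rate_bpm(peaks, fs):
    """Average beats per minute from consecutive R-R intervals."""
    intervals = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Synthetic trace at fs = 100 Hz: a spike every 80 samples (75 bpm).
fs = 100
signal = [1.0 if i % 80 == 0 else 0.0 for i in range(800)]
peaks = detect_r_peaks(signal)
print(peaks, round(heart_rate_bpm(peaks, fs), 1))
```

Deep models either consume the raw signal directly or work on beats segmented around such detected peaks.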
Deep learning requires large datasets for proper training. In addition, it is important to compare and evaluate different deep-learning algorithms so that researchers can replicate those algorithms and apply them in a standard manner.
One of the most used databases is the American Heart Association’s ECG database. This database contains data about arrhythmias and normal sinus rhythm and includes 154 three-hour ECG recordings from real patients.
Arrhythmia detection and classification is one of the most important functions of an ECG, and the automatic recognition of abnormal heartbeats from a large amount of ECG data is an essential task. Several methods have been proposed to solve the problem of ECG heartbeat classification; recent solutions use convolutional neural network (CNN) and recurrent neural network (RNN) architectures. One CNN-based system, for example, uses a nine-layer network that automatically identifies five categories of heartbeat (non-ectopic, supraventricular ectopic, ventricular ectopic, fusion, and unknown).
Normal and abnormal beats can be classified by implementing long short-term memory (LSTM) and gated recurrent unit architectures, which has enabled the automatic separation of regular and irregular beats in the arrhythmia database run by the Massachusetts Institute of Technology and Beth Israel Hospital (MIT-BIH).
Different RNN models can also be compared for effectiveness and accuracy on ECG data, and mixed architectures can be applied. One automated system combines a CNN with an LSTM-based RNN to classify normal sinus rhythm, atrial premature beats, and premature ventricular contractions from ECG signals in the MIT-BIH arrhythmia database. The study used ECG segments of variable length and achieved around 98% accuracy.
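In such hybrid models, the CNN front end extracts local beat morphology with one-dimensional filters before the RNN models rhythm across beats. A minimal sketch of that first stage, using a hypothetical hand-picked "spike detector" filter (a trained network would learn many such filters):

```python
def conv1d(signal, kernel):
    """Valid 1-D convolution: slide the filter across the ECG segment."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def max_pool(xs, size=2):
    """Downsample by keeping the strongest response in each window."""
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

# A toy beat: flat baseline with a sharp R-wave spike in the middle.
beat = [0.0, 0.0, 0.1, 1.0, 0.1, 0.0, 0.0, 0.0]

# A hypothetical sharp-spike filter (second-difference shape).
kernel = [-1.0, 2.0, -1.0]

features = max_pool(relu(conv1d(beat, kernel)))
print(features)  # strong response only where the R-wave sits
```

The pooled feature sequence, computed per beat, is what an LSTM layer would then consume to capture timing patterns such as premature beats.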
Uses of big data and deep learning are booming thanks to major advances in processing power (e.g., multi-core and parallel processors) and in software tools (e.g., big-data platforms and Python-based deep-learning libraries). Today, deep learning in healthcare is used in applications such as individual disease diagnosis, prognosis, prevention, and prediction, and for designing custom health treatments based on lifestyle.
From a clinical viewpoint, the main focus for the near future is likely to be imaging analytics, because deep learning already has a head start on many useful applications in this area. In the clinical environment, deep-learning algorithms offer the potential for consistently high-quality results.
Another approach that will provide numerous benefits includes consumer-facing technologies such as chatbots, mobile health apps, and virtual assistants like Alexa, Siri, and Google Assistant. Google is aiming to apply the voice recognition technologies already available in Google Assistant, Google Home, and Google Translate to document patient–doctor conversations and help doctors compose quick patient records.
With this high number of promising use cases, strong investment from major players in the industry, and a growing amount of data to support the analytics, deep learning will certainly play a big role in the quest to deliver the highest possible quality care to patients in the decades to come.
Do you want to leverage deep learning in your healthcare practice and stay ahead of the curve? Choose a technology partner that can check and improve your business’s technological readiness and establish critical data privacy and security mechanisms. Contact us today!