
Chapter 3
Biomedical Informatics
Chapter Outline
Preamble
The Rise of Predictive Analytics in Health Care
Moving From Reactive to Proactive Response in Health Care
Medicine and Big Data
An Approach to Predictive Analytics Projects
The Predictive Analytics Process in Health Care
Process Steps in Figure 3.1
Step 1: Problem Definition
Step 2: Identify Available Data Sources
Step 3: Formulate a Hypothesis
Step 4: Data Preprocessing
Step 5: Data Set Design
Step 6: Feature Selection
Step 7: Model Building
Step 8: Model Evaluation
Step 9: Model Implementation
Step 10: Validation of Clinical Utility
Meaningful Use
Translational Bioinformatics
Clinical Decision Support Systems
Hybrid CDSSs
Consumer Health Informatics
Direct-to-Consumer Genetic Testing
Use of Predictive Analytics to Avoid an Undesirable Future
Consumer Health Kiosks
Patient Monitoring Systems
Public Health Informatics
Medical Imaging
Clinical Research Informatics
Intelligent Search Engines
Personalized Medicine
Hospital Optimization
Challenges
Summary
Postscript
References
Further Reading
Preamble

In the past, healthcare decision-making guided by earlier medical information repositories (e.g., the Hippocratic Corpus) was primarily reactive in nature, in that information and experience were marshaled to diagnose and treat existing illnesses and disabilities. In this chapter, we begin to set the stage (as it were) for proactive decisioning in medicine and health care, facilitated by the construction of analytical models that predict future states rather than react to existing healthcare conditions.
The Rise of Predictive Analytics in Health Care

Everyone likes the concept of gazing into a crystal ball to learn what will happen in the future. Chapter 1 discusses an important element of the history of medicine and health care in the Middle Ages, which was centered on mystical seers who vended medical advice. In fact, the concept of the medical casebook arose among those seers, who sought to document the many cases they advised. In our modern age, science has replaced the crystal ball, but it is still based largely upon what happened in the past, generating responses that are reactive to those events rather than proactive. Science can "look" into the future to the extent that it extends trends or events that have happened in the past; the problem is finding
a way to view new information in a future context. However, gaining new insights from old data requires the complicated analysis of many interacting factors in medicine and health care to generate likely scenarios that might happen in the future. This goal has eluded physicians, who are trapped by the perceptions of their own minds. Now, we have a way to do this with computers.

Practical Predictive Analytics and Decisioning Systems for Medicine. DOI:
© 2015 L.A. Winters-Miner, P.S. Bolding, J.M. Hilbe, M. Goldstein, T. Hill, R. Nisbet, N. Walton, G.D. Miner. Published by Elsevier Inc. All rights reserved.
Miner, G. D., Miner, L. A., Goldstein, M., Nisbet, R., Walton, N., Bolding, P., Hilbe, J., & Hill, T. (2014). Practical predictive analytics and decisioning systems for medicine: Informatics accuracy and cost-effectiveness for healthcare administration and delivery including medical research. Elsevier Science & Technology.
Created from snhu-ebooks on 2021-11-27 23:34:36. Copyright © 2014. Elsevier Science & Technology. All rights reserved.
The computer has revolutionized medicine and made possible many advances that have had a tremendous impact on health care and the life expectancy of humans. The computer is becoming an indispensable tool in the practice of health care as it becomes more advanced, and as the volume of information in health care increases exponentially. The
American Medical Informatics Association (AMIA) has formally defined biomedical informatics as:
the interdisciplinary field that studies and pursues the effective uses of biomedical data, information, and knowledge for scientific inquiry, problem solving and decision making, motivated by efforts to improve human health.
Predictive analytics plays a key role in these efforts, and becomes even more important as we advance in this field.
In the 2002 movie Minority Report, Tom Cruise plays a cop who keeps his city crime-free by catching murderers before they have a chance to commit a crime. In principle, this is the next level in law enforcement. This notion makes a good story because it plucks at the heartstrings of many people who are keenly interested in changing what might happen in the future. The police in Minority Report depended on reports of special "precog" people who were able to "see" what the future would be if events or actors in the present were left unchanged. How very much like the seers in the Middle Ages were the precogs of the film. The job of the police was to make the changes necessary to avoid the undesirable future (e.g., arrest the criminal before he commits the crime). We want to take analogous actions regarding our health care. Our challenge is to find some means of precognition in the technology of the present to provide some insights about what might happen in the future. We can't change the future in as direct a manner as the police in Minority Report did, but we can change what might happen in the future if we can predict it with reasonable accuracy. This is the realm of predictive analytics.
Health care is entering an era of development that is very similar in principle to the theme in the film Minority Report.
The theme developing in health care is focused on using predictive analytics to follow a more proactive approach to the
diagnosis and treatment of disease. Homicide is also a health statistic; therefore, the theme followed in Minority Report is
of great interest to healthcare practitioners, at least in principle. In the Public Health Informatics section of this chapter,
you will learn that the methodologies of predictive analytics are quite different from the way the "precog" people were used to "read" the future in Minority Report. In contrast, health care is leveraging the predictive power of artificial intelligence tools to predict probabilities rather than certainties; but these probabilities can be high enough and accurate enough to have significant effects on the prevention of adverse consequences (e.g., sickness and death).
Moving From Reactive to Proactive Response in Health Care

In the past, medicine was primarily a reactive field. When we are faced with a disease, we treat it; if the pain gets worse,
we alleviate it; when someone stops breathing, we resuscitate him. One of the aims of predictive analytics in health care is
to diagnose problems at an early stage of development (or even before they occur at all), before they have had a chance to
take a toll on the human body. However, the role of predictive analytics does not stop once the individual develops the
disease. Another aim of predictive analytics is to guide in selecting and tailoring treatments for individuals by predicting
the course of events that is likely to occur with every treatment option that is available. Of course, these concepts apply
not only to individuals but also to populations, and by using predictive analytics we can foresee public health threats and
take the necessary steps to lessen their burden or prevent them from happening at all.
Medicine and Big Data

Biomedical informatics involves developing techniques to efficiently process and analyze the data, producing summative results that can then be used to improve health outcomes. One of the biggest challenges in biomedical informatics today is developing techniques and tools to process the immense amount of data generated in health care. The
amount of biomedical data available today is tremendous, and it is growing exponentially; it is becoming one of the
most important sources of "Big Data" (data volumes measured in terabytes and petabytes). Its critical importance is measured in terms of the life and death of many people. Vast amounts of biomedical data are being accumulated in many
forms, such as free text, radiographs, photos, gene sequences, microarrays, vital signs, and lab values. We can produce,
transmit, and store more of this data than ever before, and our capacities to store it and our abilities to analyze it are
increasing at staggering rates. For example, the cost of 1 MB of data storage in 1995 was over 4,000 times the cost of the same amount of storage in 2012, and the processing speed of our desktop computers for analyzing it has increased about 30-fold (from 100 MHz to over 3,000 MHz) since 1995. The bottleneck we face now is the limitation in our ability to process and synthesize these large volumes of data.
When working with such large volumes of data, the likelihood of finding associations occurring somewhere in the data
set simply by chance is quite high, and the process of finding the true meaning behind data becomes extremely difficult, if
not seemingly impossible. Imagine the immense number of calculations required by your brain just to perform an action as
simple as throwing a wad of paper into a trash can. Light hits color sensors in your retina, which detect color, brightness,
and depth. These signals are sent to the visual cortex (together with other parts of the brain), which processes every “pixel”
of the digital image they form and identifies every object in your field of vision. After the trash can is identified as the
target in your visual field (and the distance to it estimated), sensory signals from your hand are transmitted to your brain,
providing information about the weight, consistency, and form of the wad of paper. The brain processes millions of pattern
elements in your memory formed by similar signals caused by experiences in the past to calculate the force necessary to
make the wad of paper follow the correct trajectory to the trash can. These signals and calculations are then combined to
activate muscle groups, which receive visual, vestibular, proprioceptive, and tactile sensory input from thousands of neurons
in the arm, shoulder, and hand to coordinate a smooth muscular action to propel the wad of paper to the trash can. Your
nervous system is trained to do this through years of experience with inputs from your various sensory organs. All of these
inputs must be combined together and coordinated in very complex ways to perform this apparently simple task. The time
required to learn how to perform this action can be substantial, and the more experience the individual has in doing it, the
smoother and more accurate is the toss. You will not get it on the first try, but each toss will get closer and closer to hitting
the trash can, as your brain processes new information from each experience. This process is very similar in principle to the
way predictive analytics tools learn to recognize patterns in data, example by example (i.e., row by row in the data file).
Although predictive analytics techniques are not as advanced as are analytical processes in the human brain, the
capabilities of these techniques are growing continuously. They work on principles similar to those that control learning
processes in the brain. Historical data are provided to the predictive analytics tool, functioning as cases (or experiences)
in the past that are used to build a pattern, which is composed into an analytical model. The more historical data (or
“experience”) that are processed in the building (or training) of the model, the better it will perform. As time passes
and new data become available, they can be added to the training data set, and the model can be retrained. So, as each
year passes, your model can become increasingly accurate, as additional experience is available for the training process.
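This training-and-retraining cycle can be sketched in a few lines of Python. The "model" below is deliberately trivial (it predicts the average outcome seen so far for each category), and the category names and admission counts are invented for illustration; the point is only that accumulating more "experience" changes the prediction.

```python
# A deliberately simple "model": predict the average outcome seen so far
# for each category, and retrain as new historical records arrive.

class RetrainableModel:
    def __init__(self):
        self.sums = {}    # category -> sum of observed outcomes
        self.counts = {}  # category -> number of observed outcomes

    def train(self, records):
        """Add historical (category, outcome) pairs to the model's 'experience'."""
        for category, outcome in records:
            self.sums[category] = self.sums.get(category, 0.0) + outcome
            self.counts[category] = self.counts.get(category, 0) + 1

    def predict(self, category):
        """Predict the mean outcome for a category seen in training."""
        return self.sums[category] / self.counts[category]

# Initial training on last year's (hypothetical) admission counts.
model = RetrainableModel()
model.train([("winter", 120), ("winter", 140), ("summer", 60)])
print(model.predict("winter"))  # mean of 120 and 140 -> 130.0

# A year later, retrain with the newly available data.
model.train([("winter", 160)])
print(model.predict("winter"))  # mean of 120, 140, 160 -> 140.0
```

A real predictive analytics tool updates internal weights rather than simple averages, but the workflow is the same: fold each new batch of historical data into the training set and rebuild.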
With the advent of meaningful use (see Chapter 9), there are considerable financial incentives for healthcare organizations to utilize their data stores to improve patient outcomes. Predictive analytics will play a key role in meeting the
goals associated with the concept of meaningful use. This chapter will give a brief overview of the biomedical informatics field, and how predictive analytics can be applied to some of the key areas of informatics. The goal of this chapter
is to create a stepping stone to inspire users of informatics technology to apply predictive analytics to their field in innovative and creative ways. This inspiration may lead them to create tools to improve health care and take on projects
that will make a difference in health care and, ultimately, in people’s lives.
An Approach to Predictive Analytics Projects

There are limitless possibilities when deciding on a predictive analytics project. Predictive analytics is well established
in many areas of business, including customer relationship management (CRM), fraud detection, and sales forecasting,
and, more recently, online retail, where retailers add offerings on the first web page, based on your own personal preferences as shown by what you looked at or purchased in the past, where you live, your gender, and your age. Individuals
each have a personalized storefront showcasing the products they are most likely to buy, based on predictions of their
likely purchases. Predictive analytics are also used to manage employees and schedule their shifts automatically based
on the times buyer traffic is most likely to be the highest. For example, there appears to be a jump in traffic at a Jamba
Juice as temperatures rise (Bellcross, 2012). In the airline industry, predictive analytics are used to schedule flights, and
Wall Street uses these technologies extensively to manage the buying and selling of stock. It is always important to
consider how and where technologies are already being deployed before deploying them in our respective fields, in
order to learn from what has already been done.
In health care we can do similar things, including:
● Tailoring treatments based on how a patient will respond optimally
● Offering additional services that a patient is likely to need and watching for additional symptoms or conditions that
a patient is likely to develop
● Scheduling of nurses, doctors and other staff to match predicted patient volumes
● Efficient purchase and storage of medical supplies according to predicted demand.
44 PART | 1 Historical Perspective and the Issues of Concern for Healthcare Delivery in the 21st Century
The Predictive Analytics Process in Health Care
Regardless of the purpose for which predictive analytics is used, the same process of steps can be followed in the
project. Figure 3.1 shows these key steps required to tackle a predictive analytics project in medicine and health
care. The focus of this methodology is on the directed path of operations that move the researcher from hypothesis
to solution. There are some feedback loops, which represent elements of the learning process, but the flow of operations leads to the desired end point: predictions that can be incorporated into medical decision-making.
Following this methodology section are discussions of some key areas in bioinformatics where predictive analytics
are being applied. There are innumerable examples in these areas, and those discussed are merely examples
selected to inspire innovative thinking and encourage you to initiate projects of your own involving predictive analytics.

Process Steps in Figure 3.1
Step 1: Problem Definition
Define a problem/situation for which advanced notice will change your course of action and steps can be taken to
change an outcome. Initially, choose problems that will have a relatively large impact, but also for which you will have
significant domain support in solving. Once you identify a problem, you may even break it up into parts and tackle a
FIGURE 3.1 A predictive analytics process flow chart: 1. Problem; 2. Data Evaluation; 3. Hypothesis; 4. Preprocessing; 5. Data Partitioning; 6. Feature Selection; 7. Model Building; 8. Model Evaluation; 9. Implementation; 10. Validation of Clinical Utility. A real-time data stream feeds the complete data set, which is partitioned into training and test data; feedback loops run between Model Evaluation and Preprocessing/Feature Selection, and an overall iteration loop runs between Implementation and Data Evaluation. Copyright © 2013 Nephi Walton.
smaller piece of it before taking on the whole challenge. For example, you may be faced with the challenge to predict
census at a children’s hospital. There are many possible factors that can affect hospital census; therefore, one of the first
tasks in solving this problem is to look at factors that have the largest impact on census. The general problem of hospital census could be broken down into various causes of admission, and it might be that the biggest driver of admissions
is respiratory disease, primarily bronchiolitis. The causes of bronchiolitis could be further defined in terms of causal
factors, the most likely of which could be RSV (respiratory syncytial virus). Consequently, the defined problem could
be to predict an RSV outbreak.
Step 2: Identify Available Data Sources
Hospitals have a large store of data; however, ensure that you do not limit your study to data that currently exist in the hospital.
Other valuable data sources include:
● External causal data, like data on adverse weather conditions, which might drive people indoors, thereby promoting
the transmission of viruses which can develop into disease outbreaks.
● Information available from local clinics and urgent care facilities, such as an increase in respiratory complaints,
which can be related to an increase in positive respiratory viral tests, or an increase in emergency room visits.
● Secondary data, such as a spike in medication purchases at retail stores, which can be related to the defined problem.
Other secondary data might include changes in television viewing patterns when children stay home from school, or
an increase in web searches for respiratory symptoms.
You can also exercise your creativity, and do some research in the medical literature to identify other secondary
sources of available data.
After the data sources have been identified, consider the likelihood of access to data from each one. For example, even
though data for retail medication purchases are available, gaining access to that data can be quite difficult and expensive.
You may decide to confine yourself initially to only those data sources that are readily available and can be accessed
within your budget, but be careful not to limit yourself to just those available data sources related to known associations,
because a major part of the predictive analytics process is the discovery of complex relationships that were previously
unknown. Preliminary analysis might identify other useful data which were previously unrecognized.
Step 3: Formulate a Hypothesis
After you have identified the group of available data sources to use, you must formulate a hypothesis. Following the previous example, your hypothesis might be that you can predict RSV outbreaks with meteorological variables and positive viral test counts as inputs to a neural network modeling algorithm. In this case, the specific hypothesis is defined in terms of the methods proposed to test it. But that need not be the case; the appropriate methodology can be selected later.
Step 4: Data Preprocessing
Cleaning and preparing data for predictive analytical modeling can take from 70% to 90% of the total project time to
complete; make your plans accordingly. Data preprocessing includes:
● Integration of data sets from multiple sources
● Filling of missing data elements with imputed values
● Deleting records and variables that are unusable for modeling
● Derivation of new variables to use as predictors
● Modifications in the data structure of the input data set (e.g., balancing data sets with rare targets)
● Data normalization and/or discretization.
Data formats and consistencies must be checked, and corrected where necessary. For example, testing of the same lab
value, such as the level of thyroid stimulating hormone, may show different values and ranges of normal depending on the
lab where it was processed. You might have to recode some variables, and derive other variables that you suspect might be
predictive of the target outcome. Predictive analysis requires values in every row of every variable, or the modeling algorithm may ignore the entire row! Finally, you may need to normalize and/or discretize the data to achieve better performance with your model. Discretizing means creating a separate variable for each unique value in a categorical variable (e.g., A, B, C). Remember the principle "garbage in = garbage out," and make sure your data are clean and consistent.
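A minimal sketch of three of these preprocessing operations (mean imputation of a missing value, a separate indicator variable for each category, and min-max normalization) in plain Python; the field names and lab values are hypothetical.

```python
# Toy patient rows with a missing value, a categorical field, and a
# numeric field to normalize. All field names and values are invented.

rows = [
    {"age_months": 2, "lab": "A", "temp_c": 39.5},
    {"age_months": 3, "lab": "B", "temp_c": None},   # missing value
    {"age_months": 2, "lab": "A", "temp_c": 38.0},
]

# 1. Impute missing temperatures with the mean of the observed values.
observed = [r["temp_c"] for r in rows if r["temp_c"] is not None]
mean_temp = sum(observed) / len(observed)
for r in rows:
    if r["temp_c"] is None:
        r["temp_c"] = mean_temp

# 2. Create one indicator variable per unique value of "lab" (A, B, ...).
labs = sorted({r["lab"] for r in rows})
for r in rows:
    for lab in labs:
        r["lab_" + lab] = 1 if r["lab"] == lab else 0
    del r["lab"]

# 3. Normalize temperature to the [0, 1] range (min-max scaling).
lo = min(r["temp_c"] for r in rows)
hi = max(r["temp_c"] for r in rows)
for r in rows:
    r["temp_norm"] = (r["temp_c"] - lo) / (hi - lo)
```

In practice these steps are performed with a data-preparation library rather than hand-written loops, but the operations are the same.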
Step 5: Data Set Design
After the data set has been prepared at the data element level, you must perform several operations on the data set as a
whole. These operations include the following.
● Data set partitioning. It is important to hold out a portion of your data for evaluating the accuracy of your model
after it is built. Many predictive analytics algorithms will divide the input data set into two sub-sets used for training and testing the model over many iterations through the data set. The training set is input to the algorithm, which evaluates the relative predictive weights associated with each variable (for neural nets), or the selected cut-points in the construction of a decision tree. These parameters are used to compose the predictive model, which is used to predict the target outcome for data in the testing set after the first iteration. The predicted values are compared with the actual values for each record in the testing data set, and an overall error is calculated. One of the training parameters of the algorithm is modified slightly, based on the overall error, and the training data are input to the algorithm again. This process may go through hundreds of iterations until a specified threshold is reached (measured in terms of the number of iterations performed, or a selected minimum error).
Thus, both the training and testing sets of data are used in the training operation. Evaluation of the accuracy should
be performed on a data set not used in the training operation in any way. That means that you should create a third partition (the validation data set) in the data partitioning process, for use in calculating prediction accuracy. Don’t base prediction accuracy on the training set, or even on the testing set, because your model may be over-trained for the specific
training set, and it might fail significantly on any new data set. Some algorithms create all three data sets for you; most other algorithms create only training and testing data sets; and some algorithms don't partition input data sets at all, depending on you to do the partitioning explicitly. Therefore, know your modeling algorithm!
● Balancing of data sets with rare targets. When an infant between the ages of 2 and 3 months presents to the emergency department with a fever, there is an approximately 7% chance that the fever is the result of a serious bacterial infection (Hui et al., 2012). This is a relatively rare event, so only 7% of the data records will have a Target value = 1, indicating a serious bacterial infection, and 93% will have a Target value = 0, indicating another cause, likely a more benign viral infection. It is very easy for a modeling algorithm to build a model that is 93% accurate, just by predicting all the rows as Target = 0; however, that model doesn't help you predict the infants that are at high risk for a serious bacterial illness. In order to do that, you must force the algorithm to focus more on the 7% of the records with the target value of 1, and less on the remaining 93%. There are three ways to do that:
● Delete enough records with Target = 0 to equal the number of records with Target = 1
● Duplicate enough records with Target = 1 to equal the number of records with Target = 0
● Calculate the ones-complement of the proportion of Target = 1 records (1 − the proportion of Target = 1 records), and use that number as a weight to submit to the modeling algorithm, and treat Target = 0 records analogously.
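The three balancing approaches can be sketched on a synthetic data set with the same 7%/93% split used in the example above (the rows themselves are fabricated stand-ins, not the Hui et al. data):

```python
import random

# 100 synthetic rows: 7 with target = 1 (rare event), 93 with target = 0.
rnd = random.Random(0)
rows = [{"target": 1} for _ in range(7)] + [{"target": 0} for _ in range(93)]

positives = [r for r in rows if r["target"] == 1]
negatives = [r for r in rows if r["target"] == 0]

# (a) Undersample: keep only as many target=0 rows as there are target=1 rows.
undersampled = positives + rnd.sample(negatives, len(positives))

# (b) Oversample: duplicate target=1 rows until the classes are equal in size.
reps = len(negatives) // len(positives)
oversampled = (negatives + positives * reps
               + positives[: len(negatives) % len(positives)])

# (c) Weight: give each class the ones-complement of its proportion, so the
#     rare class (7% of rows) gets weight 0.93 and the common class gets 0.07.
p1 = len(positives) / len(rows)
p0 = len(negatives) / len(rows)
weights = {1: 1 - p1, 0: 1 - p0}
```

Undersampling discards information and oversampling repeats it; class weighting keeps every record, which is why many modeling algorithms accept weights directly.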
Step 6: Feature Selection
In this step, we apply the principle of Occam’s razor, which is essentially that if you have two competing theories
that make exactly the same predictions, the simpler one is better. This is particularly important in predictive analytics with machine learning, because having too many features can lead to overfitting. A feature is the name given
to a transformed variable. When a model is overfitted, it conforms very closely to the training data set, including the noise in it (meaningless or inaccurate data that have no correlation to the outcome). As a result, the model
may show very poor performance on any new data it encounters. There are a number of feature selection algorithms specific to the particular methods employed in machine learning. Each of these techniques can help you
select only the features with the highest correlation to your outcome, and can improve your model’s performance.
Aside from the problem of overfitting, it makes no sense to use more data points if you can get equal or better
results with a simpler model.
There is a caveat, however, that must be considered in the choice to use feature selection. Performing feature selection on your entire data set may bias your results, so make sure that you partition the data set before performing feature
selection. If you are going to validate your model on a separate independent data set, then partitioning a third data set is
not necessary. When using an independent data set, make sure it comes from the same population as the training and
testing data partitions. Patient populations can differ markedly at different locations (e.g., Salt Lake City, UT, versus
Detroit, MI), which may introduce a significant bias to your results.
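One of the simplest feature selection techniques is to rank candidate features by the strength of their correlation with the outcome and keep only the top few; this is just one of the many method-specific algorithms mentioned above. The feature names and values below are synthetic illustrations.

```python
# Correlation-based feature selection: rank candidate features by the
# absolute Pearson correlation with the outcome and keep the top k.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

outcome = [0, 0, 1, 1, 1, 0]           # e.g., outbreak week yes/no
features = {
    "temp_drop": [1, 2, 8, 9, 10, 1],  # tracks the outcome closely
    "rainfall":  [5, 1, 6, 2, 7, 3],   # weakly related
    "noise":     [3, 7, 2, 9, 4, 1],   # essentially unrelated
}

ranked = sorted(features,
                key=lambda f: abs(pearson(features[f], outcome)),
                reverse=True)
selected = ranked[:2]                  # keep the two strongest candidates
print(selected)
```

Note that, per the caveat above, a ranking like this should be computed on the training partition only, not on the entire data set.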
Step 7: Model Building
This is where things get exciting and sometimes frustrating. There are many different predictive analytics algorithms that can
be used to build your model. There are many predictive analytics software packages available currently, which contain a
broad choice of modeling algorithms. Your choice may depend on your background, operating system, and budget. Popular
among the choices of these packages are the following:
IBM Modeler
Some of the modeling algorithms include logistic regression models, time series models, decision trees, artificial neural networks (ANNs), support vector machines (SVMs), naïve Bayes (NB), and k-nearest neighbors (KNN). If time
and budget allow, it may be useful to try several different methods and compare their results.
Ensembles of different modeling algorithms may produce more accurate models than are possible with any of the
constituent algorithms. Most predictive analytics packages have modeling options that permit the design of ensemble
models. It is recommended that several different ensembles be tried, before selecting the single algorithm (or group of
them) that works best on your data set.
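The idea of comparing several algorithms and combining them in an ensemble can be sketched with toy stand-ins. The three "algorithms" below (majority class, one-nearest-neighbor, and a midpoint threshold) are invented simplifications, not the library implementations a real project would use; the point is the compare-and-vote workflow.

```python
# Fit several toy classifiers on the same data, then combine their
# predictions with a simple majority vote.

def majority_class(train):
    labels = [y for _, y in train]
    winner = max(set(labels), key=labels.count)
    return lambda x: winner                      # ignores the input entirely

def one_nn(train):
    return lambda x: min(train, key=lambda row: abs(row[0] - x))[1]

def mean_threshold(train):
    m0 = [x for x, y in train if y == 0]
    m1 = [x for x, y in train if y == 1]
    cut = (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2
    return lambda x: 1 if x >= cut else 0

def vote(models, x):
    preds = [m(x) for m in models]
    return max(set(preds), key=preds.count)      # majority vote

def accuracy(predict, rows):
    return sum(predict(x) == y for x, y in rows) / len(rows)

train_rows = [(1.0, 0), (1.2, 0), (2.9, 1), (3.1, 1), (3.3, 1)]
test_rows = [(1.1, 0), (2.8, 1), (3.0, 1)]

models = [fit(train_rows) for fit in (majority_class, one_nn, mean_threshold)]
ensemble = lambda x: vote(models, x)
print(accuracy(ensemble, test_rows))
```

Here the majority-class model alone misclassifies every target = 0 case, but the vote of all three models corrects it, which is the appeal of ensembles.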
Step 8: Model Evaluation
After a predictive analytic model is created, that is not the end of the story. The model needs to be “evaluated” for reliability, sensitivity, and specificity. This model may have been created from small-sized datasets. Any model needs to be evaluated, but especially when the patient numbers that produced the model are small. This can be done using several
methods, such as:
● Use of both a TRAINING and TESTING set of the data; if both sub-sets of data provide about the same accuracy, then the model may be a good one, but it still needs further evaluation.
● Use of a hold-out sample, where part of the dataset is “held out” in a random manner, with the rest being used as
the TRAIN and TEST sets. Then, after the model is created, the hold-out sample is run against the model to see if
the same accuracy scores are obtained and the individual scores seem reasonable.
● Use of V-fold cross-validation; this is a process where the dataset is sub-sampled numerous times (10 times is commonly used in real practice); if the accuracy scores of the V-fold cross-validation are about the same as for the train,
test, and hold-out samples, then the model is probably quite robust.
The above list is not exhaustive, as there are additional measures that can be taken to evaluate the model.
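The mechanics of V-fold cross-validation can be sketched in a few lines. The "model" below is a toy constant predictor (the mean target of the training folds) with mean absolute error as the score; the fold bookkeeping, not the model, is the point.

```python
# Minimal V-fold cross-validation sketch (V = 5): hold out one fold at a
# time, train on the rest, and score the model on the held-out fold.

def v_fold_scores(rows, v, fit, score):
    folds = [rows[i::v] for i in range(v)]        # round-robin fold assignment
    scores = []
    for i in range(v):
        held_out = folds[i]
        training = [r for j, fold in enumerate(folds) if j != i for r in fold]
        model = fit(training)                     # train on the other folds
        scores.append(score(model, held_out))     # score on the held-out fold
    return scores

def fit_mean(rows):
    mean = sum(y for _, y in rows) / len(rows)
    return lambda x: mean                         # constant prediction

def mean_abs_error(model, rows):
    return sum(abs(model(x) - y) for x, y in rows) / len(rows)

rows = [(x, float(x % 2)) for x in range(20)]     # alternating 0/1 targets
scores = v_fold_scores(rows, 5, fit_mean, mean_abs_error)
print(scores)
```

If the V scores are all about the same (as they are here), and they agree with the train, test, and hold-out results, the model is probably quite robust.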
Step 9: Model Implementation
After the model is built and has been evaluated favorably, it can be deployed and tested in the operational systems where
it will be used. This step in predictive analytics can be extremely difficult, because it may require interfacing with other
systems, and collecting and analyzing current data on a daily basis, rather than working on an isolated dump of historical
data. Don’t put too much effort into integrating your model into the clinical workflow permanently until you have completed Step 10 below. The deployed model should not be used until it is proven to provide some clinical utility.
Step 10: Validation of Clinical Utility
You may spend a significant amount of time working on a prediction algorithm to predict admissions for the ER, only
to find that there are no interventions that the hospital is willing or able to take to improve the outcome. You could
build a very powerful predictive modeling solution, with no problem to solve. This is like designing a product that
nobody wants to buy. The ability to deploy the model should be evaluated up front before you start the project. Make
sure as you look at the outcome of your predictions that there are actual interventions that can take place based on the
results. To prove your model makes a difference with the intervention, you must have some way to compare it to existing methods and demonstrate that your predictions actually improve health care.
48 PART | 1 Historical Perspective and the Issues of Concern for Healthcare Delivery in the 21st Century
Miner, G. D., Miner, L. A., Goldstein, M., Nisbet, R., Walton, N., Bolding, P., Hilbe, J., & Hill, T. (2014). Practical predictive analytics and decisioning systems for medicine : Informatics
accuracy and cost-effectiveness for healthcare administration and delivery including medical research. Elsevier Science & Technology.
Created from snhu-ebooks on 2021-11-27 23:34:36. Copyright © 2014. Elsevier Science & Technology. All rights reserved.
The next operation could be labeled as Step 11: Re-evaluate, add more data, and rebuild the model. As emphasized earlier, a given model is not the end of analytical modeling; it is just one step along the way. Models “age” as new data become available. You might be able to improve model performance significantly by adding more data reflecting local demographic or societal changes. For example, you might significantly improve the accuracy of a hospital census model by adding new inputs from seven different viral outbreak models. You can also build similar models using different variables, and combine their results to come up with a better estimate. Remember that with every year that goes by, you have another year’s worth of data for training your model. You could retrain monthly or even weekly, if you like.
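The idea of combining models built on different variables can be illustrated with a simple averaging ensemble. The census figures and the weights below are invented for illustration; in practice the weights might reflect each model's recent accuracy:

```python
def combine_estimates(predictions, weights=None):
    """Blend several models' numeric predictions into one estimate.

    predictions: list of per-model predicted values (e.g., hospital census).
    weights: optional per-model weights; defaults to a simple average.
    """
    if weights is None:
        weights = [1.0] * len(predictions)
    total = sum(weights)
    return sum(p * w for p, w in zip(predictions, weights)) / total

# Three hypothetical census models; the third is weighted most heavily
# because it has been the most accurate recently.
estimate = combine_estimates([310.0, 298.0, 305.0], weights=[1.0, 1.0, 2.0])
print(round(estimate, 1))  # 304.5
```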
Meaningful Use
The term “meaningful use” is difficult to define. In practice, it covers a broad range of topics within the use of the electronic health record (EHR). Predictive analytics plays a critical role in meaningful use; however, this broad topic requires its own chapter. Please see Chapter 9 for further discussion.
Translational Bioinformatics
There are a number of papers available in the literature on the use of predictive analytics in medical research, but its value can only be realized when predictions are expressed in a form that can be used successfully to impact patient care. This process is defined by AMIA as translational bioinformatics, to include:
the development of storage, analytic, and interpretive methods to optimize the transformation of increasingly voluminous biomedical data, and genomic data, into proactive, predictive, preventive, and participatory health. Translational bioinformatics
includes research on the development of novel techniques for the integration of biological and clinical data and the evolution
of clinical informatics methodology to encompass biological observations. The end product of translational bioinformatics is
newly found knowledge from these integrative efforts that can be disseminated to a variety of stakeholders, including biomedical scientists, clinicians, and patients.
The “Tricorder” medical device used in the Star Trek TV and film productions is not a far-fetched idea in bioinformatics; similar tools may be in use in the not too distant future. This device is used in these dramas to scan the human
body, provide information about its health, and make a quick diagnosis and prognosis of treatment. The human body is
a complex system controlled by a complex group of interacting biological signals, of which we are becoming increasingly aware. The actions and reactions of our body are the best indicators of what is happening in the body. However,
this voluminous cascade of signals can be difficult to interpret. The decision of when and where to collect these signals
is problematic in itself. When your body confronts an invader organism, signals are propagated to the white blood cells
that indicate the nature of the invader and where it is, and trigger the appropriate response to defend the body against
the threat. If we can sense and record these signals early, we might be able to prevent a healthcare disaster. In addition
to the uses of these signals as data inputs, relationships between disease and other factors (e.g., genes, proteins, and
adverse healthcare events) can be combined to build powerful predictive models useful in treating the disease. The
process of building such predictive models and expressing the outcomes in terms useful for diagnosis and treatment of
disease is the central goal of translational bioinformatics.
Clinical Decision Support Systems
CDSSs are integrated analysis and deployment systems designed to facilitate decision-making in patient health care. They combine information about the current patient with information about past diagnoses and treatments stored in a database to provide feedback or recommendations that will aid in the decision-making process at the point of care. The Healthcare Information and Management Systems Society (HIMSS) expands this definition to include patients as recipients of information, to permit patients to be active participants in their care. The definition of clinical decision support according to the HIMSS is:
a process for enhancing health-related decisions and actions with pertinent, organized clinical knowledge and patient information to improve health and healthcare delivery. Information recipients can include patients, clinicians and others involved in
patient care delivery; information delivered can include general clinical knowledge and guidance, intelligently processed
patient data, or a mixture of both; and information delivery formats can be drawn from a rich palette of options that includes
data and order entry facilitators, filtered data displays, reference information, alerts, and others.
Therefore, CDSSs, as defined by HIMSS, represent an expression of translational biomedical informatics, which takes
the results of scientific research to the bedside, to directly impact patient care.
CDSSs are separated by Plato’s Problem: the gap between knowledge and experience. Clinical knowledge is a cognitive understanding of a set of known clinical rules and principles, based on the medical literature, which guides our decision-making processes. Experience is an acquired understanding of medical outcomes, gained through years of practice observing various outcomes related to particular conditions; the majority of it cannot be learned sufficiently through reading and acquiring cognitive knowledge. This important distinction arises because there is an immense number of medical subjects in the literature that could be researched and taught. In addition, there are so many variables in the medical decision-making process that outcomes based on knowledge versus those based on experience are often discordant. It is practically impossible to teach physicians all of the knowledge acquired by experience, because the environmental variables are constantly changing; the body and nature of our experiences evolve through time, reflecting particular outcomes under specific conditions. Cognitive knowledge, however, is always associated with a limited scope of outcomes that are out of date, because by necessity there is a time-lag between subject outcomes and the reporting of them. This distinction comes sharply into focus when choosing a physician to remove your kidney in surgery: would you choose one who has mastered a surgical textbook, or one who has the experience of 300 successful operations?
CDSSs can be classified into two types of systems:
● Knowledge-based support systems that are defined by a well-established set of rules that guide decisions, based on
the interpretation of the medical conditions judged in the medical literature to be the best practice.
● Non-knowledge-based systems that do not use a set of defined a priori rules, but instead use artificial intelligence
algorithms to induce the rules through machine learning methods, allowing the system to learn from hundreds or
even thousands of encounters, rebuilding the “model” set of rules as environmental variables change. These systems
can be based on neural networks, genetic algorithms, support vector machines, decision trees, or any other machine
learning technology, which “learns” to recognize patterns in data sets case by case.
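The contrast between the two types can be sketched in a few lines of Python. The fever rule and the tiny "learned" threshold below are invented illustrations of the two approaches, not clinical guidance:

```python
# Knowledge-based: an a priori rule taken from (hypothetical) best practice.
def rule_based_flag(temp_c):
    """Flag fever using a fixed, literature-derived cutoff."""
    return temp_c >= 38.0

# Non-knowledge-based: induce the cutoff from labeled encounters instead.
def learn_threshold(cases):
    """cases: list of (temperature, had_infection) pairs.

    Pick the candidate cutoff that best separates the two outcomes,
    standing in for what a decision tree does on a single variable.
    """
    candidates = sorted({t for t, _ in cases})
    def accuracy(cut):
        return sum((t >= cut) == y for t, y in cases) / len(cases)
    return max(candidates, key=accuracy)

encounters = [(36.8, False), (37.1, False), (37.9, False),
              (38.4, True), (38.9, True), (39.5, True)]
cut = learn_threshold(encounters)
print(cut)  # 38.4: the rule was induced from the data, not prescribed
```

The second system could rebuild its threshold as new encounters accumulate, which is exactly the "learning case by case" behavior described above.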
Hybrid CDSSs
Hybrid CDSSs have been developed to allow the end user to synthesize the results from both knowledge and clinical
experience, and make a clinical decision based on the results of both (examples in the literature include Santelices
et al., 2010).
In such a hybrid system, multiple predicted outcomes are presented to the physician, based on data input from knowledge and experience bases, and furnished with associated probabilities to permit selection of the appropriate decision. As we continue to learn more about cognitive science, and distill this knowledge into principles, we can apply them to improve these “intelligent” systems to help us make the best clinical decisions possible at the time. This practice of continuous incorporation of patient data, cognitive knowledge, and clinical experience is often referred to as “rapid learning.” Rapid learning approaches that continuously update the CDSS as new data become available provide an ability to create decision models that adapt to the availability of new treatments, interventions, and metrics (variables) that can be input to the modeling process. This paradigm is shown in Figure 3.2.
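A minimal sketch of how a hybrid system might pose multiple outcomes with associated probabilities, blending a knowledge-base score with an experience-based model score. The blend weight, the diagnoses, and the probabilities are illustrative assumptions only:

```python
def hybrid_probability(knowledge_p, experience_p, weight=0.5):
    """Blend a rule-derived probability with a learned-model probability."""
    return weight * knowledge_p + (1 - weight) * experience_p

def rank_outcomes(candidates, weight=0.5):
    """candidates: {outcome: (knowledge_p, experience_p)} -> ranked names."""
    scored = [(hybrid_probability(kp, ep, weight), name)
              for name, (kp, ep) in candidates.items()]
    return [name for score, name in sorted(scored, reverse=True)]

# Hypothetical differential for a presentation with fever and cough.
candidates = {
    "viral URI": (0.60, 0.70),
    "pneumonia": (0.30, 0.25),
    "other":     (0.10, 0.05),
}
print(rank_outcomes(candidates))  # ['viral URI', 'pneumonia', 'other']
```

In a rapid-learning setting, the experience-based probabilities (and possibly the blend weight) would be re-estimated as new encounter data arrive.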
Many CDSSs provide information on drug interactions and can generate allergy alerts. These alerts, however, are
very basic, and do not include any information on many other factors, such as dose, time of administration, and the context in which the medications are given. Consequently, many physicians discount these warnings. On a given work day,
it is very common for a physician to dismiss dozens of these warnings, as he or she prescribes medications in the hospital. In some instances physicians become so used to ignoring these warnings that they may accidentally disregard an
important one. It is time to make these alerts more “intelligent,” by using predictive analytics to predict levels at which
problems occur, and to set thresholds to control when alerts will be generated. In addition, these alerts should provide
information about the effectiveness of the drug for the given clinical scenario, and suggest more effective options, if a
suboptimal treatment is selected. Such a system could include analysis of a patient’s antibiotic prescription history
before presenting a list of drugs for choice, or checking its database for any information about the susceptibility of the
patient to a bacterial invasion if the chosen antibiotic does not provide broad enough coverage.
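The thresholding idea can be sketched as follows. The dose-scaled risk formula and the 0.8 alert threshold are invented placeholders for whatever validated predictive model and cutoff an institution would actually deploy:

```python
def should_alert(predicted_risk, threshold=0.8):
    """Fire an alert only when the predicted probability of a clinically
    significant problem exceeds the threshold, instead of alerting on
    every known interaction regardless of dose or context."""
    return predicted_risk >= threshold

# A toy stand-in for a learned risk model: dose- and context-aware.
def interaction_risk(base_risk, dose_mg, max_safe_dose_mg):
    """Scale a base interaction risk by how close the dose is to the
    maximum safe dose (hypothetical formula for illustration)."""
    return min(1.0, base_risk * (dose_mg / max_safe_dose_mg))

low = interaction_risk(0.9, dose_mg=100, max_safe_dose_mg=1000)   # 0.09
high = interaction_risk(0.9, dose_mg=950, max_safe_dose_mg=1000)  # 0.855
print(should_alert(low), should_alert(high))  # False True
```

The same prescription triggers an alert only when context makes it genuinely risky, which is what would reduce the alert fatigue described above.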
Consumer Health Informatics
The AMIA definition of consumer health informatics is:
the field devoted to informatics from multiple consumer or patient views.
The focus shifts from problem resolution to consumers’ understanding of the nature of their health problems, and of the availability of various solutions to fix them. This new focus centers on information structures and programs that empower consumers to manage their own health. These programs can be classified into the following three groups.
1. Patient-Focused Informatics:
● Predicting various elective procedures associated with a given treatment
● Predicting the level and type of information required to support a given treatment
● Predicting treatment schedules, based on patient symptoms and basic measurements
● Recommending various interventions to be made or precautions to be delivered to prevent future ailments.
2. Health Literacy:
● Presenting selected Internet resources to increase patient understanding of the nature of health problems, and
their recommended treatments. Patients will access the Internet anyway, and therefore it is important to direct
them to responsible websites, and screen out those that are inappropriate or present questionable information.
3. Consumer Education.
This shift in the focus of medical informatics addresses the need for healthcare information perceived by consumers
by providing a means for them to acquire it responsibly. Most importantly, this new focus in informatics integrates
consumers’ preferences into health information systems. Consumer informatics stands at the crossroads of other informatics disciplines, such as nursing informatics, public health, health promotion, health education, library science, and communication science. From this central position, consumer informatics can function as a clearinghouse and a provider of valuable consumer-related information to other informatics disciplines.
Direct-to-Consumer Genetic Testing
Related to consumer informatics, direct-to-consumer (DTC) genetic testing is a rapidly growing new industry in which DNA samples are accepted directly from consumers, who in turn receive a report from the analysis of the DNA that gives them information about their risk of developing certain diseases. There has been considerable debate about making such options available to consumers directly, because their lack of domain knowledge and inability to understand the report in the medical context may cause unnecessary worry and stress in individuals whose genetic make-up indicates an increased risk of certain diseases. This stress can cause some people to fall prey to (or even seek) relief from opportunistic marketing schemes that offer unproven treatments, cures, or preventative options for a disease.
[FIGURE 3.2 Illustration of the pathways in a hybrid CDSS (Clinical Decision Support System): Single Patient → Clinical Encounter → Data Prep → Predictive Analytics → Decision Support Model. Hybrid CDSSs have been developed to allow the end user to synthesize the results from both knowledge and clinical experience, and make a clinical decision based on the results of both. Copyright © 2013 Nephi Walton.]
Some of the companies that offer these genetic testing services have recognized this problem, and offer genetic
counseling to anyone who uses their service. Nonetheless, responsible physicians have raised ethical concerns about this practice; they question whether these tests have any positive effect on the health of the population, and consider that they might have an overall negative psychological effect. There is certainly a beneficial place in medicine for genetic
tests, when they are properly administered. For example, testing for mutations in the BRCA1 and BRCA2 genes can be
done by responsible healthcare professionals. People with mutations in these genes may have a very high risk of
disease, and can take preventative measures (such as prophylactic mastectomy) that have been proven to decrease mortality. Most of the thousands of other markers tested in DTC genetic testing are genetic variants that have minimal impact on disease. In order to assign any practical meaning to these markers, they must be analyzed in the context of other information, such as family history, other genetic markers, and patient characteristics. There is a considerable need for research on the effects of DTC genetic testing and the benefits and harms that can arise from it.
Predictive analytics can play a significant role in this field by including many more variables in the analysis, such as
diet, activity, total disease burden, and environmental factors, to build more powerful predictive models and more accurate assessments of risk. In addition to providing a risk of disease, models could be built and deployed that show the
predicted decrease in risk with behavioral modifications, dietary changes, or use of certain medications. Currently, the future of this industry is very uncertain, and many of the major players have halted their business under pressure from the FDA, which has shut down 23andMe, the biggest player in the market. The FDA had intervened prior to shutting down 23andMe, when Pathway Genomics attempted to sell DTC genetic testing kits at Walgreens (Darnovsky and Cussins, 2014). It is hard to determine at this point whether it will soon be possible to walk into a Wal-Mart and obtain
your genetic sequence; the technology for sequencing is certainly in place, but the proper interpretation of the results
and the infrastructure to manage this information is not. Whether or not this industry survives, the same principles and
use of predictive analytics models could certainly be deployed with testing ordered for a patient by a physician.
Use of Predictive Analytics to Avoid an Undesirable Future
Tools designed to predict future health must be used in the proper context. It is well known that health improvements are associated with changes in diet, demographic variables, exercise, and even education level. The opportunity to use a computer to explore various options and outcomes related to modifications of certain factors in their lives could potentially lead people to make appropriate changes. Charles Dickens presented a set of similar situations to Ebenezer Scrooge in A Christmas Carol. Instead of using a computer armed with predictive analytics programs (which, of course, didn’t exist then!), he faced the old miser with the Ghosts of Christmas Past, Present, and Future. He used those literary vehicles to show Scrooge how decisions and events in the past led to his present circumstances, and how, if left unchecked, they would lead inevitably to an undesirable future. Scrooge was shocked! That response led him to change his present actions, in the hope of avoiding what he was convinced would happen otherwise in the future. We can bundle those “ghosts” into a predictive analytical system to bring to reality the Victorian dream (and that, indeed, of everyone) of changing some scary things that otherwise might happen in the future.
Consumer Health Kiosks
How would you like to walk up to a kiosk at a mall or drug store and be able to get a prescription, in the same way as you can get money at an ATM? That day is not far away. In addition to prescribing medications, these kiosks could offer on-the-spot lab testing and vitals measurement, or reassure you that your symptoms are not life-threatening. Predictive analytics will be used in these systems to predict the outcome or severity of a problem, based on the available information compared against past visits. The kiosk may even be able to tell you to see a health provider, recommend that you rush to the emergency room, or advise you to do nothing and just wait for time and nature to fix the problem.
Patient Monitoring Systems
A patient may be hospitalized when there is a significant risk that he or she may take a turn for the worse. Therefore, it is important to keep the patient in a controlled and monitored environment, where plenty of healthcare professionals are available should the need arise. Patients are monitored according to the level of risk assigned to them. For example, various levels of risk might direct that patient vitals be monitored every hour, every four hours, every eight hours, or even continuously. These measurements are necessary in order to assess patient stability, and to confirm that there have been no changes in the monitored values. Most hospitals have devised scoring systems to assess the patient’s status and needs in order to ensure receipt of the proper level of care. For example, patient status
may indicate whether a patient can be watched on the general floor or must be transferred to the intensive care unit
(ICU). Despite the use of these monitoring systems, patients can become unstable very quickly and they may not be in
the appropriate hospital location to receive the adequate level of care for their new condition. An intern might be
presenting a patient to the attending physician outside the room, and a nurse might call out a code for an emergency
response because the patient’s blood pressure has dropped precipitously in a period of seconds, and the patient has
stopped breathing. Instances like these raise the following questions:
● Was there something that could have been done sooner?
— Probably there was, but staff limitations precluded it.
● Could we have picked up a signal of the impending crisis quicker, and taken action to prevent a near collapse and a long stay in the ICU?
— Yes, if a nurse could be dedicated full-time to the room.
● What went wrong in this instance: was the patient not properly assessed?
— The assessment may have been correct at the time it was made, but patient status can change very quickly.
● Why not measure vitals more often on every patient?
The answers to all of these questions are related to limitations in cost and resources. The more monitoring a patient receives, the more time and resources must be dedicated to that patient. This is why a stay on the general hospital ward is much less expensive than a stay in the ICU. Another common question is:
We have continuous monitoring systems available; why not apply them to every patient?
This would be a bad idea for many reasons, one of which is that the more equipment is hooked up to a patient, the more you restrict that patient’s actions. This restriction may require the patient to stay in bed, which is often counterproductive during rehabilitation. Another reason is the excessive cost and time you will have to spend contending with incidental findings and errors on monitoring devices. Because not all signals from monitoring devices are intelligently
processed, they often produce erroneous measurements. It is very common in pediatrics to be called to a room to assess a
patient with a low oxygen saturation, only to find the sensor dangling from the hand, or that the patient was moving so
much that the machine was not picking up a good signal. Even with good signals, there is a broad range of normal
responses and you are liable to encounter values that appear to be outliers if you monitor constantly. One particularly
annoying event that physicians are commonly called to attend is the incidence of bradycardia during sleep. This condition
can be completely normal, and it must be assessed in the context of the given problem and the medications the patient is
taking. It is standard practice in medicine that unnecessary lab work is to be avoided, not just because of the added cost
but also because there is a 5% chance that you may spend hundreds or thousands of dollars chasing an “abnormal” lab
value (based on the mean and standard deviation of the entire population) that is completely normal for this person.
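The 5% figure follows from defining the “normal” reference range to cover about 95% of the healthy population (roughly two standard deviations around the mean), and it compounds quickly across a panel of tests, as this short calculation shows (the independence assumption between tests is a simplification):

```python
def prob_any_abnormal(n_tests, p_normal=0.95):
    """Probability that at least one of n independent tests on a healthy
    person falls outside its 95% reference range."""
    return 1 - p_normal ** n_tests

print(round(prob_any_abnormal(1), 2))   # 0.05
print(round(prob_any_abnormal(20), 2))  # 0.64
```

A 20-test panel on a perfectly healthy patient thus yields at least one statistically “abnormal” value about two-thirds of the time, which is why unnecessary lab work invites expensive false chases.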
In pediatrics, the commonly used Pediatric Early Warning Score (PEWS) is a perfect example of an opportunity for predictive analytics to generate significant improvements in patient monitoring operations. When a pediatric patient receives a very concerning PEWS score (or two consecutive moderately concerning scores), the patient is assessed for transfer to the ICU. When such scores occur, the transfer order is written, the ICU staff are called, and they perform the transfer assessment. Based on the judgment of the ICU staff, the transfer either happens or the physicians are reassured that transfer is unnecessary. While this system works reasonably well in most cases, physicians may find that they must occasionally write PEWS exception orders to keep patients on the floor despite these scores. Why does this happen? Many other variables are not considered in the system; assessment is based on population-based metrics, and not on all characteristics of the individual case. Physicians must compensate for this weakness in the PEWS system by over-riding it.
Predictive analytics can be used to create models that can learn from experience, and apply all the appropriate patient
characteristics to make a more accurate assessment.
ICUs have electronic systems to collect high-frequency measurements and closely monitor critically ill patients. In
most instances, these measurements are stored in large databases, combined with information provided by other EHR
systems. This situation is a prime opportunity for the design of predictive analytics projects, which compare the results
from such systems to existing patient scoring systems and prognostic models. Early work on artificial intelligence in the ICU focused on knowledge-driven techniques, as described in the CDSS section above. More recently, there has been
research on data-driven methods, or experience-based artificial intelligence models. There is ample opportunity right
now to implement predictive analytics systems in the ICU. It is the prime time to take advantage of the vast data stores
that are available in today’s ICUs, and start accounting for other characteristics of the patient that are not assessed in
standard scoring methods.
Current predictions are derived from the analysis of raw signals from various monitors in the ICU. One of the key
problems in these systems is the significant amount of noise in the monitors. The signals from these monitors can be
affected by patient transport, patient movement, bad wiring, poor connections, and device failures, to name a few.
Noise must be filtered out of signals before accurate predictions can be made. Many devices have signals that can be
used in correlation with measurements to determine if the measurements are correct, such as the waveform on a pulse
oximetry monitor. Some algorithms can filter out much of the noise, and thus clarify any valid signal (or the lack of it).
Several of the challenges in making predictions in the ICU involve determining the points in time at which to make the
measurements, and selection of the granularity or resolution of data on which to make predictions, considering the fact
that different monitors capture information at different time resolutions. Physicians must also take into account interventions (e.g., medications given), and the status of the patient (such as bradycardia during sleep, and the rise in blood
pressure when a child is screaming).
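A common first pass at the noise problem is a simple sliding median, which suppresses isolated dropouts (such as a sensor momentarily dangling from the hand) while preserving the underlying trend. The saturation values below are made up, and real bedside filters are considerably more sophisticated:

```python
def median_filter(signal, window=3):
    """Replace each sample with the median of its neighborhood.

    Edge samples use a truncated window. A median is preferred to a mean
    here because a single-sample dropout should not drag the estimate down.
    """
    half = window // 2
    out = []
    for i in range(len(signal)):
        nbhd = sorted(signal[max(0, i - half):i + half + 1])
        out.append(nbhd[len(nbhd) // 2])
    return out

# Pulse-oximetry trace with one transient dropout artifact (the 52).
spo2 = [97, 96, 97, 52, 96, 97, 98]
print(median_filter(spo2))  # [97, 97, 96, 96, 96, 97, 98]
```

The spurious 52 disappears, while a genuine sustained desaturation (several consecutive low samples) would survive the filter and still trigger an alert.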
The major challenge for current medical devices in the ICU is their relative lack of accuracy and portability. Medical
devices are evolving along with informatics technology, and more accurate devices will be developed. These devices will
become smaller, less sensitive to movement, and less prone to errors, allowing the patient to have more mobility. This
mobility will permit more frequent measurements, even permitting some patients to move away from the ICU for a period
of time. As these devices evolve, predictive analytics can be progressively incorporated into them, allowing us to sense signals that may alert us to an impending crisis requiring intervention before it’s too late. The majority of predictive modeling
technology in use currently in ICUs is based on analytical techniques developed for classification and regression (numerical estimation), which consider measurements of variables in a time sequence as independent variables. Information can be
extracted from these variables to permit modeling of a target outcome, based on changes in these variables over time.
Classical time-series analysis considers only the signal present in the outcome, not the predictive signals present in the
time sequence variables. (See Nisbet et al., 2009, for a discussion of this subject.) These time sequence analyses have been
very successful in medical informatics (e.g., for the prediction of the next diabetic episode).
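Turning a time sequence into independent variables for a classifier or regression model usually means deriving lagged and trend features from the measurement window. A minimal sketch follows; the feature names and the heart-rate series are invented for illustration:

```python
def lag_features(series, lags=3):
    """Build one feature row per time step from the preceding values.

    Each row carries the last `lags` measurements plus their mean and the
    most recent change, so an ordinary classifier or regression model can
    use the time sequence as independent variables.
    """
    rows = []
    for t in range(lags, len(series)):
        window = series[t - lags:t]
        rows.append({
            **{f"lag_{k}": window[-k] for k in range(1, lags + 1)},
            "window_mean": sum(window) / lags,
            "last_delta": window[-1] - window[-2],
            "target": series[t],   # value to be predicted at time t
        })
    return rows

heart_rate = [80, 82, 85, 91, 99, 110]
rows = lag_features(heart_rate)
print(rows[0]["lag_1"], rows[0]["target"])  # 85 91
```

Any standard classification or numerical-estimation algorithm can then model the target from these derived columns, which is how the predictive signal in the time sequence (not just in the outcome) gets used.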
Apart from accuracy and portability, another challenge of implementing systems in the ICU is that you are dealing literally with life-and-death situations, in which the stakes are high and there is little room for error. Models
must have a high level of accuracy and high discriminative performance to be acceptable for use in this setting. Such
models should be used not as a “crutch” to replace judgment, but as an additional aid to support it. When these models
are validated against actual improvements in patient care, and reduction in mortality, they may replace older methods
but not until then. Because of the challenges in this new field of technology in the ICU, there are few such systems that
have been deployed there. Although many proof-of-concept studies have been carried out, few systems have been
validated. This is an area that is ripe for the use of predictive analytics to affect patient care directly and significantly.
Public Health Informatics
Public Health Informatics is defined as:
the application of informatics in areas of public health, including surveillance, reporting, and health promotion.
In this discipline, the focus shifts from individuals to groups of people, and may draw on many other fields that can have an effect on the health of a population. Some examples of this broad list of fields include:
● Weather: cold and rainy weather may cause many people to stay in enclosed spaces. This behavior may contribute to viral outbreaks in winter months.
● Safety features: the design of cars, buildings, and toys may affect the general public health of the nation.
● Architecture and layout of streets: the way in which people are forced to move in towns and cities may contribute to accidents and congestion, which affect depression and the general state of human wellness in the vicinity.
● Food cost and growing methods: in large respect, we are what we eat, and the types and amounts of foods that people eat can affect public health in large geographical areas. The price of food can drive people to eat food that is not nutritious, and may contribute to obesity.
● Food and sanitation standards: local outbreaks of disease reduce the general state of public health.
● Social programs: great public interest has been generated about the health effects of obesity, diabetes, and sexually transmitted diseases.
All of these areas of public health provide rich sources of data for use in predictive analytics, which can provide
valuable insights to programs aimed at increasing the general state of wellness in our society. Even the mining of social
networks can provide data for the use of predictive analytics to compare public health problems and status of different
geographical areas.
54 PART | 1 Historical Perspective and the Issues of Concern for Healthcare Delivery in the 21st Century
Miner, G. D., Miner, L. A., Goldstein, M., Nisbet, R., Walton, N., Bolding, P., Hilbe, J., & Hill, T. (2014). Practical predictive analytics and decisioning systems for medicine: Informatics accuracy and cost-effectiveness for healthcare administration and delivery including medical research. Elsevier Science & Technology. Copyright © 2014. Elsevier Science & Technology. All rights reserved.
Public health departments tend always to be strapped for cash, so it is important to predict which areas have the biggest problems, and to prioritize the allocation of resources to the areas with the largest potential impacts. Insurance companies have begun to share their predictive analytics with healthcare providers so they can apply appropriate interventions to cut their costs. It may be possible to build similar relationships between insurance companies and departments of public health in areas where large numbers of insured individuals are concentrated.
Mining of social media is a growing phenomenon in our society. The data available from Twitter, Facebook, and
LinkedIn provide a vast source of information about subjects that people love to share with others, particularly health-related issues. People are becoming increasingly connected through social media; it is hard to find someone who doesn’t
have a mobile device that can upload their photos, thoughts, and whereabouts into the social data cloud. This huge source
of data could be tapped with predictive analytics to take the “pulse” of the general status of public health in society, and
suggest where it is headed. Patterns of medically related complaints can be mined at various times to provide insights
about changes in patterns of disease outbreak, obesity, mental health problems, and educational needs. Social media can
provide information about the health-related interventions that are working, or indicators that show an increase in wellness.
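As a toy sketch of this idea (the keyword list, regions, and posts below are invented for illustration, not drawn from any real feed), a first pass at taking this "pulse" could simply count health-related terms in geotagged posts:

```python
from collections import Counter

# Hypothetical health-related keywords to track in social posts.
HEALTH_TERMS = {"flu", "fever", "cough", "headache"}

def health_mentions_by_region(posts):
    """Count health-keyword mentions per region from (region, text) pairs."""
    counts = Counter()
    for region, text in posts:
        words = set(text.lower().split())
        counts[region] += len(words & HEALTH_TERMS)
    return counts

posts = [
    ("memphis", "Terrible flu and fever this week"),
    ("memphis", "Another cough going around"),
    ("chicago", "Beautiful day at the lake"),
]
print(health_mentions_by_region(posts))  # Counter({'memphis': 3, 'chicago': 0})
```

A real system would add time windows, normalization by posting volume, and far richer language processing than simple keyword matching.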
This pulse of society can be related to geographical area by using GPS coordinates, and applied for prioritizing areas
of high violence. Some police departments (e.g., in Memphis and Chicago) do this now to optimize the allocation of
squad car and surveillance resources. These measures can be extended with predictive analytics to expose factors that
encourage a high incidence of crime in an area. Analyses like these can be orchestrated to “diagnose” the development of conditions that might promote social unrest, depression, or other psychological conditions, and can help to design intervention measures to promote public health.
In recent years, an emphasis on disease prevention has arisen to complement the prior focus on diagnosis and treatment. It is difficult (if not impossible) for a physician to discuss every preventive strategy for every disease that may befall a patient during the short time allotted to a typical office visit. Through the use of predictive analytics we can analyze large populations of people to quantify risks related to public health, and help physicians to develop intervention programs for those patients at highest risk of some ailment or medical condition.
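A minimal sketch of such population-level risk stratification might look like the following; the risk factors, weights, and patient records are purely illustrative assumptions, not clinical guidance:

```python
# Illustrative only: factor names and weights are invented.
RISK_WEIGHTS = {"smoker": 2.0, "bmi_over_30": 1.5, "age_over_65": 1.0}

def risk_score(patient):
    """Sum the weights of each risk factor the patient record flags as true."""
    return sum(w for factor, w in RISK_WEIGHTS.items() if patient.get(factor))

def highest_risk(patients, k=2):
    """Return the k patient ids with the largest risk scores."""
    ranked = sorted(patients, key=risk_score, reverse=True)
    return [p["id"] for p in ranked[:k]]

patients = [
    {"id": "p1", "smoker": True, "bmi_over_30": True},
    {"id": "p2", "age_over_65": True},
    {"id": "p3", "smoker": True, "age_over_65": True, "bmi_over_30": True},
]
print(highest_risk(patients))  # ['p3', 'p1']
```

A real model would learn the weights from outcome data rather than fix them by hand.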
Large companies have very significant financial incentives to prevent injuries. Some companies monitor the incidence of workplace injuries and collect other data related to safety, and provide reports and real-time alerts to permit
timely intervention to prevent injury. Some large companies have used predictive analytics to reduce injury incidence
rates by more than 60%, which in turn has led to increased productivity and decreased workers’ compensation fees
(Schultz, 2012). A research group from Carnegie Mellon University (CMU) was able to build models that can predict
the number of injuries at a worksite with 80–97% accuracy rates (Schultz, 2012).
Biosurveillance is a huge area in public health, stemming primarily from national security interests and the threat of biological weapons. Several syndromic surveillance systems have been installed to detect outbreaks at local and national
levels (Kaydos-Daniels et al., 2013). Purchases of medication at large retailers are measured to assess signals of sickness.
These same techniques can be used to aid in hospital management and in implementing public health measures when a disease outbreak is predicted, which increases public awareness and hinders the spread of disease. This is one of the more
mature areas of predictive analytics, although there is still a significant amount of work to be done.
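A hedged sketch of the underlying idea: flag any day whose medication sales rise several standard deviations above a trailing baseline. The window, threshold, and sales figures below are invented for illustration:

```python
from statistics import mean, stdev

def outbreak_alert(daily_sales, window=7, threshold=3.0):
    """Flag days whose sales exceed the trailing-window mean by more than
    `threshold` standard deviations (a minimal syndromic-surveillance check)."""
    alerts = []
    for i in range(window, len(daily_sales)):
        history = daily_sales[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (daily_sales[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# Hypothetical daily cough-medicine sales; day 9 spikes well above baseline.
sales = [50, 52, 48, 51, 49, 50, 53, 51, 50, 120]
print(outbreak_alert(sales))  # [9]
```

Deployed systems use far more robust baselines (seasonality, day-of-week effects), but the alerting logic is of this general shape.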
Food-borne illness has been reported in the news media several times in the past few years, and recently a meningitis outbreak caused by a contaminated injectable steroid medication has shown that even drugs can carry illness.
Predictive analytics can and will play an important role in predicting where the outbreak is likely to spread, and how it
can be contained.
These are just a few of the applications of predictive analytics in public health. Public health problems generally involve large populations with abundant data, and so are ideally suited to predictive analytics. This is an exciting field with endless possibilities for creating tools that will have a large impact.
Medical Imaging
Image mining is a relatively new but growing area of predictive analytics. Images can be 2D, 3D, static, or moving
(4D). Tools using these technologies are available for:
● Screening people for retinal macular degeneration
● Predicting cardiovascular events by using ultrasound flow imaging to measure pressures, velocities, and turbulence
of flow related to the likelihood of future events
● Finding various problems on images consisting of billions of pixels, representing enormous amounts of spatial data
● CT lymph node analysis to support staging for cancer screening.
Soon we will be able to use complex image-analysis predictive models, replacing more invasive means of staging cancer such as biopsies or removal of lymph nodes for tissue samples. We will be able to plan surgeries and tailor treatments without any invasive procedures.
Face recognition and other biometrics (e.g., eye scanning) are well established as components in security systems.
One of the most mature image analysis technologies is the recognition of specific objects and features in images. Similar
tools are being used for cell recognition and identifying nuclei and other organelles and their features to classify tissue
samples. These morphological features are often indicative of what is happening to the organism as a whole.
Information from these types of image analyses can be combined with physiological information to provide rich new
variable combinations to use in building predictive models.
Clinical Research Informatics
According to the AMIA:
Clinical Research Informatics involves the use of informatics in the discovery and management of new knowledge relating to
health and disease. It includes management of information related to clinical trials and also involves informatics related to secondary research use of clinical data. Clinical research informatics and translational bioinformatics are the primary domains
related to informatics activities to support translational research.
Clinical trials for new medications are expensive, often running into hundreds of millions of dollars. At the beginning of a trial, you should know how many patients you will be able to recruit, and how many are likely to drop out.
Insufficient recruitment can stall the start of a trial, causing severe delays and increased costs, and drop-outs can affect the
reliability of results. Yet thousands of trials have been done, and there is an immense amount of information about these
trials that is available for use. We can mine these data to increase the likelihood of better outcomes before we start such
trials. Knowing what happened in the past in similar trials, we can optimize the study design before any money is
invested, or cancel the trial if the preliminary results appear similar to those obtained previously.
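For example, one simple planning calculation uses a historical dropout rate to size enrollment before any money is invested. This is a deliberate simplification that treats the dropout rate as fixed and known:

```python
import math

def required_enrollment(target_completers, dropout_rate):
    """Recruits needed so that expected completers meet the target,
    assuming a fixed historical dropout rate (an illustrative simplification)."""
    if not 0 <= dropout_rate < 1:
        raise ValueError("dropout_rate must be in [0, 1)")
    return math.ceil(target_completers / (1 - dropout_rate))

# E.g., to finish with 200 evaluable patients at a 20% historical dropout
# rate, plan to enroll 250.
print(required_enrollment(200, 0.20))  # 250
```

Mining past trials would refine both the dropout estimate and the expected accrual rate over time.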
Intelligent Search Engines
Researching a topic in PubMed (or any other online medical literature source) can be quite time-consuming and difficult. Intelligent searching tools can use predictive analytics to present query results of keyword searches based on the
context of the search string. One approach to doing this, semantic mapping, can be incorporated into search engines to
present various strands of meaning for keywords, and permit researchers to search for what they mean, rather than just
what they have entered literally into the search string. These alternate search paths can give researchers more pertinent
and even rather obscure results that are important but would have been missed otherwise.
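A toy illustration of this kind of semantic expansion (the synonym map below is invented; a production system would draw its strands of meaning from a medical ontology or thesaurus):

```python
# Toy semantic map; entries are illustrative assumptions.
SYNONYMS = {
    "heart attack": ["myocardial infarction", "MI"],
    "high blood pressure": ["hypertension"],
}

def expand_query(query):
    """Return the literal query plus its mapped semantic variants."""
    variants = [query]
    for term, alts in SYNONYMS.items():
        if term in query.lower():
            for alt in alts:
                variants.append(query.lower().replace(term, alt))
    return variants

print(expand_query("heart attack recovery"))
# ['heart attack recovery', 'myocardial infarction recovery', 'MI recovery']
```

Each variant can then be searched in parallel, surfacing results the literal string would have missed.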
When people want to learn about a medical condition or a treatment, they rely on large search engines such as
Google to find answers to their questions. Even though Google has improved over time, a significant amount of misinformation may still appear in the results, even on the first page. Researching a topic takes time and effort, even with comprehensive websites like PubMed. This is particularly true for the average consumer who
does not necessarily understand the terminology or context in the articles they find.
Information can be dangerous. It can lead people to spend excessive amounts of money on unproven treatments, and to
neglect getting appropriate medical care, which sometimes leads to death. We could use predictive analytics to analyze
Internet search phrases in the context of demographics, location, and search history to provide more “intelligent” search
results, and report the level of understanding of the result topic by the medical community. Google is already doing some
of this, but there are still significant dangers in following a Google search for information related to disease treatment. For example, criminals can take advantage of people with incurable diseases by offering what they claim are the only “cures” available.
Personalized Medicine
Personalized medicine is a field that has huge potential for the use of prediction and association analyses. Specific treatments can be tailored, based on past experience with other patients. Using exome or full genome analysis, it will soon
be possible to predict how patients will respond to various drugs and therapies. Personalized medicine systems can
include information from image analysis, lab data, demographics, history of adherence to treatment, financial status,
physiological signals, and other data sources to tailor the best treatment to the patient based on predicted probabilities.
This topic will be further discussed in Chapter 13.
Hospital Optimization
Hospital staffing, particularly nurse staffing, is a major issue in many hospitals today. A shortage of nurses can have a
very detrimental effect on patient outcomes, while having too many nurses on shift adds unnecessary healthcare expense,
which translates directly to higher patient costs. By predicting hospital census, the scheduling of nurses with predictive
analytics technology can function to increase scheduling efficiency during times of high need, while eliminating unnecessary shifts. Intelligent scheduling can also be applied to optimize the availability, utilization, and storage of resources.
Certain supplies or medications related to outbreaks must be available when an outbreak hits, but many of these supplies
have relatively short shelf lives, which can be managed by just-in-time replenishment systems. On the other hand, having
too great a quantity of supplies with short shelf lives can unnecessarily increase the operating expenses necessary to maintain them. Many staffing tools that use predictive analytics are available in the general business world; however, in medicine the stakes are higher and the processes and relationships to staffing are very complex. This is an area that is ripe for
analytics, and there are many technologies common in the business world that can be applied to the world of health care.
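As a rough sketch of census-driven staffing (the 1:4 nurse-to-patient ratio and the census forecasts are illustrative assumptions, not clinical standards), a predicted census translates into shift staffing like this:

```python
import math

def nurses_needed(predicted_census, patients_per_nurse=4):
    """Nurses required for one shift at an assumed nurse-to-patient ratio."""
    return math.ceil(predicted_census / patients_per_nurse)

# Hypothetical census forecasts for three upcoming shifts.
forecasts = [30, 18, 25]
print([nurses_needed(c) for c in forecasts])  # [8, 5, 7]
```

The hard part, of course, is the census forecast itself, which a predictive model would supply from admission, discharge, and seasonal patterns.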
As described in the public health section above, many businesses are looking at safety measures that can be recorded
in order to predict accidents, and businesses have been successful in using these measures to predict and thereby prevent
accidents from happening in the work place. In the hospital, accidents, mistakes, or changes in processes can have an
even more dramatic impact, particularly in such high-stakes areas as ICUs. Incidence of morbidity and mortality can be
greatly reduced by modeling outcomes with various hospital measures, and comparing them with actual outcomes,
while increasing patient satisfaction at the same time. This is a very broad and important area where predictive analytics
can be applied to significantly improve the quality and success of health care.
Challenges
The extent of the space and the cost necessary to store biomedical data have been significant issues in the past, but
these issues are becoming increasingly pressing and important now, as very large amounts of data are generated by
existing medical systems. The prospect of storing all of the information in the entire genome of an individual is daunting enough (3 gigabytes in the Human Genome Project), but when related epigenetic effects (changes in gene expression without changes in the DNA nucleotide sequence) and temporal effects related to each gene are considered, the
storage volume required for each person becomes truly gigantic (possibly several terabytes). When you consider the
data storage requirements for all of the people in a hospital census, which turns over many times during a given year,
the storage volume may increase into petabytes. And that is just for one hospital for one year! We are on the brink of a
monumental explosion in data volume in medicine and health care. The creation of advanced compression methods and
algorithms to store and retrieve such information efficiently becomes paramount.
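The arithmetic behind this explosion can be sketched directly; the bed count, turnover rate, and 2-terabyte-per-patient figure below are illustrative assumptions in line with the estimates above:

```python
# Back-of-envelope storage estimate. Units are gigabytes throughout.
GB, TB, PB = 1, 1024, 1024 ** 2

def hospital_storage_gb(beds, turnovers_per_year, per_patient_gb):
    """Yearly storage if every admitted patient's full profile is retained."""
    return beds * turnovers_per_year * per_patient_gb

# E.g., a 500-bed hospital turning over 40 times a year at 2 TB per patient:
total = hospital_storage_gb(500, 40, 2 * TB)
print(round(total / PB, 1), "PB")  # 39.1 PB
```

Even under conservative assumptions the per-hospital total lands in the petabyte range, which is what makes compression and efficient retrieval paramount.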
Privacy and security are also major concerns in the storage and use of any data about individuals. Since the passage
of the Health Insurance Portability and Accountability Act (HIPAA), whose privacy rules took effect in 2003, health providers must make sure that all medical records and related information (e.g., billing records) conform to a set of standards of documentation, handling, and privacy. But these standards are rather broad, and each state can choose the way that information is
protected and made available to individuals. The problem is that there is a wide latitude among the states in regulation
of patient health information. For example, Meingast and colleagues (2006) reported that Alabama had no general statute restricting the disclosure of patient information, while California had extensive regulations of such disclosures. This
wide variability in disclosure regulations among the states provides a high probability of leakage and misuse of patient
healthcare information during transmission across state lines. This problem is worsening rapidly as patient information becomes available in electronic format. It appears that we are still in the “Wild West” of information regulation
in medicine and health care.
Meingast et al. (2006) pose some questions that remain today, even in the wake of the Patient Protections and
Affordable Care Act of 2010:
● Who owns the data?
● How much data should be stored?
● Where should the data be stored?
● To whom should these data be disclosed, and may they be disclosed without the patient’s consent? (Unanswered questions abound in this area.)
● How should the data be secured?
The authors suggest some solutions to these problems, which are still relevant today:
● Define clear specifications for role-based access to healthcare data, in which different rules apply to people in different usage roles
● Define new HIPAA regulations to standardize how healthcare data can be used and transmitted between states
● Institute rules to govern patient privacy in home monitoring programs
● Initiate policies and rules defining how data can be acquired for predictive analytical purposes, and who can have
this access.
Other health-related data can be sourced from the Internet, which raises the question of whether or not it
is ethical to use this information without an individual’s permission. In one case, a medical student happened to look at
the Facebook page of a certain patient, and determined that the patient might be in a high-risk situation at home. This
information led to an intervention, which could have saved the patient’s life. Regardless of the happy ending, this example left lingering doubts about the propriety of such actions.
There is a considerable lack of consistency in terminology and measurements between medical practices and labs,
and even within the same hospital. It is very hard to analyze data compiled from these sources, and we are still quite
far from a universal standard in terminology and measurement. To make things more difficult, many of the measures in
medicine are extremely subjective; you could easily get four different answers from four different physicians if you
asked them to characterize a murmur, for example. You also have to take into consideration the temporal aspect of a
measurement and the context in which it happened, which are not always recorded. We must anticipate significant
challenges in these areas at the beginning of any predictive analytical project.
There is some suggestion that predictive analytics tools used in patient care, specifically CDSSs, should be regulated
similarly to medical devices, requiring stringent acceptance, commissioning, and quality assurance. The CDSS would
have to be validated on local datasets before approval. This can be problematic for rapid learning CDSSs, because they
change constantly as information is gathered from patients. Methods to regulate and thereby ensure patient safety
without losing the advantage of rapid learning will need to be addressed. Interestingly, emerging from the ACA is
PCORI (the Patient-Centered Outcomes Research Institute), a non-profit agency that is committed to developing
transparent networks of medical data among healthcare organizations, including hospitals, clinics, and individual
doctors. One goal of this is to produce CER (comparative effectiveness research) to “really” determine which treatments
and drugs and medical devices are working for both “groups of patients” (grouped by age, race, sex, and other grouping
factors, including genetic predisposition) and “individual patients” (primarily determined by DNA profiles plus other
attributes). To do this will require that HIPAA laws and other regulatory processes, whether under the FDA or elsewhere, be worked through so that they do not inhibit the development of accurate diagnostic and treatment methods.
Only predictive analytics (PA) modeling can produce the accuracy that is needed for these efforts (traditional statistical
P-value Fisherian statistics, for the most part, only work for “groups” or “means” of population groups; modern PA can
pinpoint both groups and, more importantly, individuals). Predictive analytic modeling and decisioning is the only
method that is currently available to make accurate predictions and prescriptions for individuals. Unfortunately, very
little of this is being done; it is currently estimated by some that 99% of statisticians are still using traditional statistics
and have not yet been able to grasp the value of data mining, text mining, and predictive analytic modeling.
Summary
This is a very exciting time for predictive analytics in biomedical informatics. It is at the forefront of medical research,
as we transition from reacting to disease to proactively preventing it. Predictive analytics is in its infancy in this field,
and though many studies show predictions using small single-source data sets, there are few that are based on large
amounts of clinical data available from many sources, such as images, lab values, physiological signals, genetics, and
other patient demographics and characteristics. There are even fewer predictive analytics projects that have been incorporated into clinical practice. This situation provides abundant opportunities in almost every area of informatics for
predictive analytics, and there is ample opportunity to use these tools to make a lasting difference in health care.
Postscript
One of the challenges of medical informatics is providing effective means of communicating healthcare information to various organizations in forms that can be used. A collateral aspect of this communication is the coordination of its use among organizations for various purposes. The primary organization that facilitates this communication and coordination of healthcare information is the Healthcare Information and Management Systems Society (HIMSS).
Chapter 4 will focus on this organization, together with other organizations similar to it.
References
Bellcross, C.A., 2012. A Part-Time Life, as Hours Shrink and Shift. The New York Times.
Darnovsky, M., Cussins, J., 2014. FDA halts 23andMe personal genetic tests. What might this mean for the future of direct-to-consumer testing?
MLO Med. Lab. Obs. 46 (3), 33.
Hui, C., Neto, G., Tsertsvadze, A., Yazdi, F., Tricco, A.C., Tsouros, S., et al., 2012. Diagnosis and Management of Febrile Infants (0–3 Months).
Agency for Healthcare Research and Quality, Rockville, MD (Evidence Report/Technology Assessments, No. 205.) Introduction.
Kaydos-Daniels, S.C.I., Rojas Smith, L., Farris, T.R., 2013. Biosurveillance in outbreak investigations. Biosecur. Bioterror. 11 (1), 20–28.
Meingast, M., Roosta, T., Sastry, S., 2006. Security and Privacy Issues with Health Care Information Technology. Proceedings of the 28th IEEE EMBS
Annual International Conference, New York, NY.
Nisbet, R., Elder, J., IV, Miner, G.D., 2009. Handbook of Statistical Analysis and Data Mining Applications. Elsevier/Academic Press, New York, NY.
Schultz, G., 2012. Using Advanced Analytics to Predict and Prevent Workplace Injuries. Occup. Health Saf. 81 (7), 88, 90–91. Available at: http://
Further Reading
Bellcross, C.A., Page, P.Z., Meaney-Delman, D., 2012. Direct-to-consumer personal genome testing and cancer risk prediction. Cancer J. 18 (4),
Cai, H., Cui, C., Tian, H., Zhang, M., Li, L., 2012. A novel approach to segment and classify regional lymph nodes on computed tomography images.
Comput. Math. Methods Med. 2012, 145926.
Cheng, S.K., Dietrich, M.S., Dilts, D.M., 2011. Predicting accrual achievement: monitoring accrual milestones of NCI-CTEP-sponsored clinical trials.
Clin. Cancer Res. 17 (7), 1947–1955.
Güiza, F., Van Eyck, J., Meyfroidt, G., 2012. Predictive data mining on monitoring data from the intensive care unit. J. Clin. Monit. Comput. 27 (4),
Isariyawongse, B.K., Kattan, M.W., 2012. Prediction tools in surgical oncology. Surg. Oncol. Clin. N. Am. 21 (3), 439–447, viii–ix.
Kamel Boulos, M.N., Sanfilippo, A.P., Corley, C.D., Wheeler, S., 2010. Social Web mining and exploitation for serious applications: Technosocial
Predictive Analytics and related technologies for public health, environmental and national security surveillance. Comput. Methods Programs
Biomed. 100 (1), 16–23.
Lambin, P., van Stiphout, R.G., Starmans, M.H., Rios-Velazquez, E., Nalbantov, G., Aerts, H.J., et al., 2013. Predicting outcomes in radiation
oncology-multifactorial decision support systems. Nat. Rev. Clin. Oncol. 10 (1), 27–40.
Osheroff, J.A., Teich, J.M., Levic, D., Saldana, L., Velasco, F.T., Sittig, D.F., et al., 2012. Improving Outcomes with Clinical Decision Support: An
Implementer’s Guide. second ed. Scottsdale Institute, AMIA, AMDIS and SHM, Chicago, IL.
Phan, J.H., Quo, C.F., Cheng, C., Wang, M.D., 2012. Multiscale integration of -omic, imaging, and clinical data in biomedical informatics. IEEE Rev.
Biomed. Eng. 5, 74–87.
Santelices, L., Wang, Y., Severyn, D., Druzdzel, M., Kormos, R., Antaki, J., 2010. Developing a hybrid decision support model for optimal ventricular
assist device weaning. Ann. Thorac. Surg. 90 (3), 713–720.
Zheng, Y., Hijazi, M.H., Coenen, F., 2012. Automated “disease/no disease” grading of age-related macular degeneration by an image mining
approach. Invest. Ophthalmol. Vis. Sci. pii: iovs.12-9576v1.


Will anyone find out that I used your services?

We have a privacy and confidentiality policy that guides our work. We NEVER share any customer information with third parties. Noone will ever know that you used our assignment help services. It’s only between you and us. We are bound by our policies to protect the customer’s identity and information. All your information, such as your names, phone number, email, order information, and so on, are protected. We have robust security systems that ensure that your data is protected. Hacking our systems is close to impossible, and it has never happened.

How our Assignment  Help Service Works

1.      Place an order

You fill all the paper instructions in the order form. Make sure you include all the helpful materials so that our academic writers can deliver the perfect paper. It will also help to eliminate unnecessary revisions.

2.      Pay for the order

Proceed to pay for the paper so that it can be assigned to one of our expert academic writers. The paper subject is matched with the writer’s area of specialization.

3.      Track the progress

You communicate with the writer and know about the progress of the paper. The client can ask the writer for drafts of the paper. The client can upload extra material and include additional instructions from the lecturer. Receive a paper.

4.      Download the paper

The paper is sent to your email and uploaded to your personal account. You also get a plagiarism report attached to your paper.

smile and order essaysmile and order essay PLACE THIS ORDER OR A SIMILAR ORDER WITH US TODAY AND GET A PERFECT SCORE!!!

order custom essay paper