Electronic Health Records and Big Data
Quietly and without fanfare, Big Data has permeated the daily lives of the urban populace. From Netflix title recommendations and retail nudges on Amazon to search suggestions on Google, companies across industries have harnessed data management and computational power to transform the nature of consumption.
As with any other buzzword, Big Data, too, is replete with equal parts obfuscation and overuse. In reality, big data simply refers to data sets so large that traditional processing tools (think Excel spreadsheets, SQL databases) are ill-equipped to deal with them. The sheer rate of accumulation, variety, size, and amorphousness of the data collected is what renders traditional processing and analytic techniques inadequate. Advances in computer science and analytic techniques (machine learning, natural language processing) have, however, greatly enhanced our ability to deal with large, unstructured data sets that do not fit into relational tables. It is this union of analytic and computing advances that has been the backbone of the Big Data revolution.
Governments and public institutions, however, have lagged significantly in adopting big data and using it to transform their interactions with the citizenry. Health care, among many other domains, is an important avenue where the opportunities to leverage big data are vast. Increasing uptake of Electronic Health Records (EHR) and Health Management Systems (HMS) has generated copious amounts of informative, context-rich data. While it is tempting to attribute the proliferation of Big Data in the private sector to a host of sophisticated factors, the key to its rise can be distilled to something intuitively rudimentary: the perception of data. Hitherto, the role of data in public institutions has been, at best, passive and secondary. In the health sector especially, data has been treated as a by-product of healthcare delivery rather than as a vital asset for improving the efficiency and reach of healthcare services.
Central to the transition of data from refuse to riches is the role of information and data management systems. EHR and HMS carry the potential to significantly advance the mission of delivering quality, efficient health care to all. Coupled with the application of big data analytic techniques, there are many ways in which such a goal can be achieved.
Firstly, big data is a huge asset in expanding the knowledge base and disseminating information within the medical fraternity. Building and transferring knowledge is critical to data-driven decision making and goes a long way toward making optimal choices. The seemingly simple task of keeping abreast of clinical advancements at the nexus of medicine and technology can be rendered very tedious by obstacles such as lack of digitisation, restricted access, complexity, and language barriers. Digitisation of medical references and literature not only solves the issues of access, sharing, and language; with the application of computational analysis techniques, vast amounts of information can also be sorted to prepare sound treatment approaches for the acute illnesses that plague public health. Besides cost savings, standardisation of care is a major benefit of such data-driven clinical decision support tools.
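To make the idea of computationally sorting digitised literature concrete, here is a minimal sketch: routing abstracts to illness categories by keyword overlap. The categories, keyword lists, and abstracts are illustrative placeholders, not a real clinical taxonomy; production systems would use far richer natural language processing.

```python
# Hedged sketch: sorting digitised medical abstracts into illness
# categories by simple keyword overlap. Keyword sets are hypothetical.
CATEGORIES = {
    "tuberculosis": {"tb", "tuberculosis", "sputum", "cough"},
    "diabetes": {"glucose", "insulin", "diabetes", "hba1c"},
}

def classify_abstract(text: str) -> str:
    """Return the category whose keywords overlap most with the text."""
    words = set(text.lower().split())
    scores = {cat: len(words & kws) for cat, kws in CATEGORIES.items()}
    return max(scores, key=scores.get)

print(classify_abstract("Fasting glucose and insulin response in diabetes"))
# → diabetes
```

Even this crude overlap score shows how, once references are digitised, a machine can pre-sort thousands of documents that no clinician could read in full.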
Secondly, EHRs are inundated with quantitative (lab results and medical test values), qualitative (text-based documents, pictures, etc.) and transactional (visit records, medication records) data. Given the wealth of rich data sets they store, there is great scope for interoperability between health and social systems. Integrating traditional longitudinal patient data (medication history, ailment list, family health history) with social determinants of health (income, education, caste/religion, domicile state) offers an insightful, seamless basis for identifying and reducing public-health ills. With such a system in place, patients can not only access personalised care but also become active stakeholders in their own health and well-being. Furthermore, integrating systems biology – the modelling of complex biological systems – with EHR data could pave the way for medication, treatment, and care that is unique to every patient. The day may not be far off when medical professionals use natural language processing to analyse an individual's prescriptions, combine that with machine learning predictions drawn from their hospital visits and medication patterns, and so prognose the timing and nature of the next illness along with preventive care strategies.
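The integration step described above amounts, at its simplest, to a join on a shared patient identifier. The sketch below links hypothetical longitudinal EHR records with social determinants of health; all field names and values are invented for illustration.

```python
# Hedged sketch: attaching social determinants of health to clinical
# records via a shared patient ID. All fields are hypothetical.
ehr_records = [
    {"patient_id": 1, "ailments": ["hypertension"], "visits": 4},
    {"patient_id": 2, "ailments": ["anaemia"], "visits": 2},
]
social_data = {
    1: {"income_band": "low", "district": "A"},
    2: {"income_band": "middle", "district": "B"},
}

def merge_profiles(ehr, social):
    """Return clinical records enriched with social determinants."""
    return [{**rec, **social.get(rec["patient_id"], {})} for rec in ehr]

for profile in merge_profiles(ehr_records, social_data):
    print(profile)
```

Real-world interoperability is, of course, harder: identifiers rarely match cleanly across systems, and privacy law governs what may be linked at all. But the payoff is a single profile from which both clinical and social drivers of ill health are visible.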
Lastly, and most fundamentally, big data promises large-scale analysis of outcomes, patterns, temporal trends, and correlations at a population level. This kind of analysis aids in identifying epidemic outbreaks, predicting high-risk patients, enumerating the frequency of re-admissions, and assigning triage (treatment order) to incoming patients. As a result, management of patients, hospital supply chains, medical manpower, and resource allocation, along with the minimisation of unnecessary costs, are important value additions that big data brings to the health sector and the delivery of care services. Given the paucity of resources, the shortage of skilled medical practitioners, and mounting pressure for fiscal efficiency, the case for using big data to trawl countless data sets for crucial trends and insights is more pressing than ever.
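As a toy illustration of predicting high-risk patients and ordering a triage queue, the sketch below computes an additive risk score from a few features and sorts incoming patients by it. The weights, features, and patients are entirely illustrative, not clinically validated; a real system would fit such weights from historical outcome data.

```python
# Hedged sketch: a toy additive re-admission risk score used to
# order a triage queue. Weights and thresholds are illustrative only.
def risk_score(patient: dict) -> float:
    """Higher score = higher assumed risk of re-admission."""
    score = 0.1 * patient.get("prior_admissions", 0)
    score += 0.3 if patient.get("chronic_condition") else 0.0
    score += 0.2 if patient.get("age", 0) >= 65 else 0.0
    return score

def triage_order(patients):
    """Sort incoming patients so the highest-risk are seen first."""
    return sorted(patients, key=risk_score, reverse=True)

patients = [
    {"name": "P1", "prior_admissions": 1, "chronic_condition": False, "age": 40},
    {"name": "P2", "prior_admissions": 3, "chronic_condition": True, "age": 70},
]
print([p["name"] for p in triage_order(patients)])
# → ['P2', 'P1']
```

The same score-then-sort pattern generalises: swap the hand-set weights for coefficients learned from population-level EHR data and the queue becomes a data-driven triage tool.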
The ubiquitous trifecta of data, technology, and science lends an air of inevitability to the impending Big Data explosion across all sectors and industries. In addition to capital, labour, and raw materials, data is the all-important missing ingredient for producing output in today's information age. However, there is still a gulf between collecting information and leveraging it for actionable outcomes. A paradigm shift away from the descriptive, reporting role of data to one hinged on forecasting, predictive modelling, and decision optimisation is essential to bridge this gulf. Any organisation or institution ready to embrace this shift will be poised to reap the benefits that the evolution of science and information offers.
Thus, there is only one golden rule of Big Data, and it is simple: size doesn't really matter. What matters is what one does with it.
Aruj Shukla is a Data Associate at Ank Aha.