Data and the Fight Against COVID


In this special guest feature, Neal Brenner, Chief Information Officer, SSG LLC, outlines the technology infrastructure state governments need to have in place to combat the Long COVID crisis. Neal specializes in digital transformation and interoperability in the public health sector. He previously led the controls, software, and analytics team as Director of Engineering at XL Hybrids, a Boston cleantech startup, and worked at Texas Instruments as a field applications engineer. Brenner holds a Bachelor of Science in Electrical Engineering and Computer Science from MIT and an MBA from the University of Chicago Booth School of Business, where he was chosen as a Siebel Scholar.

By the end of September 2021, the United States alone had conducted almost 700 million tests for the COVID-19 virus. That’s a lot of data, and data is one of our most important allies in the battle against this pandemic. It has helped us track trends in the virus’s spread, gauge the effectiveness of vaccines and treatments, and decide what resources to commit and where.

Yet our COVID response remains largely reactive. We haven’t been able to interpret the data in ways that empower us to prescribe new preventative measures (beyond masks, social distancing, and PPE), head off outbreaks, or predict incidents of so-called “Long COVID,” in which symptoms linger for weeks or even months.

This failure stems in large part from the scarcity of centralized data collection hubs from which researchers can draw, whether at the health-authority, regional, or state level. We’re missing an opportunity to get ahead of the pandemic because institutions share information inefficiently and ineffectively.

So what’s stopping us?

The first, and perhaps largest, impediment to effectively sharing COVID-related information is the data collection process itself. Systems vary wildly across organizations: some rely on transcribed handwritten notes while others collect data through mobile or station-based devices. This variation is a root cause of the siloing of information, making data inaccessible to other systems.

In the United States, this fragmentation is a natural result of our state-centric organizational structure. While other countries have built reporting systems on a regional or even national basis to facilitate data sharing, in the U.S. data is siloed not just state by state, but department by department within individual states. Varying database technologies, formats, and other protocols across different organizations and subdivisions further inhibit the sharing of information.

Fortunately, these are solvable problems, but we must solve them now to prepare for the next outbreak — and there will be another.

The way forward

We must establish three primary goals for a preemptive approach to pandemic management: standardization, interoperability, and privacy compliance. The first two are the table stakes, the buy-in for an integrated system. The third will ensure compliance with the Health Insurance Portability and Accountability Act (HIPAA), a legal and ethical must for sharing connected personal medical records when aggregate data is not enough. We must meet these goals, and soon, as we transition to battling Long COVID.
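
To make the first and third goals concrete, here is a minimal sketch, in Python, of what a common, de-identified case record might look like. The schema and field names are hypothetical illustrations, not an established standard such as HL7 FHIR, and real HIPAA de-identification involves considerably more than is shown here.

```python
# A minimal, hypothetical common schema for a de-identified case record.
# Field names are illustrative, not an established standard such as HL7
# FHIR, and real HIPAA de-identification involves more than shown here.
from dataclasses import dataclass
from datetime import date

@dataclass
class RawCaseRecord:
    patient_name: str   # direct identifier -- must be stripped before sharing
    birth_date: date    # direct identifier -- generalized to an age band
    zip_code: str
    test_date: date
    test_result: str    # e.g. "positive" / "negative"
    long_covid: bool    # symptoms persisting for weeks or months

@dataclass
class DeidentifiedRecord:
    age_band: str       # e.g. "40-49"; ages 90 and over are pooled
    zip3: str           # first three digits of the ZIP code (coarse geography)
    test_date: date
    test_result: str
    long_covid: bool

def deidentify(rec: RawCaseRecord, today: date) -> DeidentifiedRecord:
    """Strip direct identifiers and generalize quasi-identifiers."""
    age = (today - rec.birth_date).days // 365  # approximate age in years
    decade = (age // 10) * 10
    band = "90+" if age >= 90 else f"{decade}-{decade + 9}"
    return DeidentifiedRecord(
        age_band=band,
        zip3=rec.zip_code[:3],
        test_date=rec.test_date,
        test_result=rec.test_result,
        long_covid=rec.long_covid,
    )
```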

Step By Step

What follows is a simple roadmap for fulfilling the three goals listed above:

1. Establish the scope of the project. Is the data to be collected and analyzed within a health authority, a local county or region, or statewide? Ideally, compatibility and interoperability flow upward: each level of jurisdiction should be able to provide data usable by the jurisdiction senior to it.

2. Create a common technology baseline. This could range from Excel spreadsheets to full-fledged data marts, but it must be uniform across all organizations in a pipeline and must be integrated into a data warehouse or the equivalent (the first sketch after this roadmap shows what such a translation step might look like).

The ranking jurisdiction should be responsible for this data storage facility. Given the possible number of formats in play, interoperability could depend on big data, artificial intelligence, and machine learning tools beyond the budget and expertise of clinics or small hospitals, which need time to overhaul their systems. As we work to upgrade and standardize procedures across all organizations, we must also remember that a hasty rip-and-replace campaign will do more long-term harm than a step-by-step transition.

3. Leverage analytics and artificial intelligence technologies. The top of the pyramid, the organization where the most data is collected, is where the most sophisticated innovations come out to play. Analytics technologies can supply immense predictive power while working backwards toward root causes (the second sketch below gives a toy example). This intersection of big data tools and artificial intelligence is the heart of pattern recognition, inference, and other semi-autonomous, intuitive processes, developing insights not even a muscular number-crunching warehouse can provide.
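
As a sketch of step 2, the snippet below shows one way a source-specific export could be translated into the common record format above and loaded into a shared store. The column mapping and the SQLite stand-in for a data warehouse are illustrative assumptions, not a prescribed stack.

```python
# A sketch of step 2: translate a source-specific CSV export into the
# common schema and load it into a shared store. SQLite stands in for
# a real data warehouse; the column mapping is a made-up example of
# what one clinic's export headers might look like.
import csv
import sqlite3

# Hypothetical mapping from one source's headers to the common schema;
# each upstream organization would register its own mapping.
COLUMN_MAP = {
    "Zip": "zip3",
    "Age Group": "age_band",
    "Result": "test_result",
    "Date of Test": "test_date",
    "Long COVID": "long_covid",
}

def load_export(csv_path: str, conn: sqlite3.Connection) -> None:
    """Normalize one export file into the shared 'cases' table."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS cases
           (zip3 TEXT, age_band TEXT, test_result TEXT,
            test_date TEXT, long_covid INTEGER)"""
    )
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Keep only mapped columns, renamed to the common schema.
            rec = {COLUMN_MAP[k]: v for k, v in row.items() if k in COLUMN_MAP}
            # Assumes this source marks Long COVID as "yes"/"no".
            rec["long_covid"] = 1 if rec["long_covid"].lower() == "yes" else 0
            conn.execute(
                "INSERT INTO cases VALUES "
                "(:zip3, :age_band, :test_result, :test_date, :long_covid)",
                rec,
            )
    conn.commit()
```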
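
And as a sketch of step 3, even simple analytics over the pooled data can flag where an outbreak is accelerating; a production system would use far richer models. The growth threshold and toy counts here are illustrative only.

```python
# A sketch of step 3: flag regions whose weekly positive counts are
# trending sharply upward. The threshold and data are illustrative.
from statistics import linear_regression  # Python 3.10+

def flag_accelerating(weekly_positives: dict[str, list[int]],
                      growth_threshold: float = 5.0) -> list[str]:
    """Return regions whose weekly positive counts trend sharply upward."""
    flagged = []
    for region, counts in weekly_positives.items():
        weeks = list(range(len(counts)))
        slope, _intercept = linear_regression(weeks, counts)
        if slope > growth_threshold:  # more than ~5 extra cases per week
            flagged.append(region)
    return flagged

# Example: region "B" is growing quickly and would be flagged.
print(flag_accelerating({"A": [10, 11, 9, 12], "B": [5, 15, 30, 60]}))
```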

Forewarned is Forearmed

Given the lessons learned as we scramble to get ahead of this pandemic, there is no excuse to be caught flat-footed by the next world health emergency. We must draw a roadmap toward a system that’s up to the challenges we will face tomorrow, bringing hospital and clinic infrastructure into line and establishing standards for interoperability with senior organizations.

Data is our most powerful ally in the fight against epidemics, pandemics, and other health disasters. By understanding the limitations of siloed and fragmented information, and standardizing and streamlining our data collection and processing, we can gain the most value from this incredible resource and maybe even prevent future crises from spiraling out of control.
