Why the Future of Healthcare is Federated AI


In this special guest feature, Akshay Sharma, Executive Vice President of Artificial Intelligence (AI) at Sharecare, highlights the advancements and impact of federated AI and edge computing in the healthcare sector, where they ensure data privacy and expand the breadth of individual, organizational, and clinical knowledge. Sharma joined Sharecare in 2021 as part of its acquisition of doc.ai, the Silicon Valley-based company that accelerated digital transformation in healthcare. At doc.ai, Sharma previously held various leadership positions including CTO and vice president of engineering, a role in which he developed several key technologies that power mobile-based privacy products in healthcare. In addition to his role at Sharecare, Sharma serves as CTO of TEDxSanFrancisco and is also involved in initiatives to decentralize clinical trials. Sharma holds bachelor’s degrees in engineering and engineering in information science from Visvesvaraya Technological University.

Healthcare data is an incredibly valuable asset and ensuring that it’s kept private and secure should be a top priority for everyone. But as the pandemic led to more patient exams and visits being conducted within telehealth environments, it’s become even easier to lose control of that data.

This doesn’t have to be the case. There are better options for ensuring a user’s health data remains private. The future is one in which that health information lives only on the edge (mobile devices).

Right now, federated learning (or federated AI) keeps the user’s data on the device while the applications running on it continue to learn from that data and build a better, more efficient model. HIPAA laws protect patient medical data, but federated learning takes that a step further by never sharing the raw data with outside parties.

Leveraging federated learning is where healthcare can evolve with technology.

Traditional machine learning requires centralizing data to train and build a model. Federated learning, combined with other privacy-preserving techniques, can build models over distributed data without leaking sensitive information. This allows health professionals to be more inclusive and capture more diversity in the data by going to where the data is: with the users.
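To make the mechanics concrete, here is a minimal sketch of a federated averaging loop in Python using NumPy; the linear model, the simulated device datasets, and the round count are illustrative assumptions, not any particular product’s implementation.

```python
# Minimal federated-averaging sketch (illustrative only).
# Each "device" trains a tiny linear model on its own data; only the model
# weights -- never the raw health data -- are sent back for averaging.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few gradient steps on one device's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w                                 # only weights leave the device

def federated_round(global_weights, devices):
    """Average the locally trained weights (FedAvg) without seeing any raw data."""
    updates = [local_update(global_weights, X, y) for X, y in devices]
    return np.mean(updates, axis=0)

# Simulated on-device datasets (in a real deployment these stay on the phones).
rng = np.random.default_rng(0)
devices = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]

weights = np.zeros(3)
for _ in range(10):                          # ten communication rounds
    weights = federated_round(weights, devices)
print(weights)
```

In a real deployment, local_update would run on each phone and only its returned weights would be transmitted; the coordinating server never touches the underlying health data.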

How the Right Data Makes a World of Difference

Right now, nearly everyone is carrying a smartphone that can collect health-based signals, and with federated learning we’ll be able to meet those users where they are. These signals could include photos containing medical information, accelerometer data that captures motion, GPS location data that reveals health-related context, biometrics from connected health devices, integrations with medical records such as Apple Health, and more.

AI-based predictive models can combine the data collected on the smartphone for both prospective and retrospective medical research and provide better health indicators in real time.

Technology in our phones has been providing us with information about air quality for some time, but with federated learning I expect apps to start engaging with users and patients during specific events on a more personal basis. For instance, if a user with asthma is too close to a region experiencing a forest fire, or if someone with seasonal allergies is in an area where the pollen count is high, I fully expect the app to engage with that user and provide tips to mitigate the situation.
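As a toy illustration of that kind of on-device engagement, the sketch below checks locally held signals against simple rules; the thresholds, field names, and data feeds are all hypothetical.

```python
# Toy sketch of on-device engagement rules (thresholds and field names are
# hypothetical). The check runs locally against signals already on the phone,
# so nothing needs to be uploaded to decide whether to nudge the user.
from dataclasses import dataclass

@dataclass
class LocalSignals:
    has_asthma: bool
    has_pollen_allergy: bool
    air_quality_index: int     # e.g. a locally available AQI reading
    pollen_count: int          # grains per cubic meter, hypothetical feed

def engagement_tips(s: LocalSignals) -> list[str]:
    tips = []
    if s.has_asthma and s.air_quality_index > 150:      # smoke / wildfire range
        tips.append("Air quality is poor nearby - consider staying indoors.")
    if s.has_pollen_allergy and s.pollen_count > 90:    # high pollen, arbitrary cutoff
        tips.append("Pollen counts are high today - keep windows closed.")
    return tips

print(engagement_tips(LocalSignals(True, False, air_quality_index=180, pollen_count=40)))
```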

The Importance of Being Privacy First

These insights can’t be provided without a service gleaning that pivotal information from the user. With privacy-preserving techniques (such as differential privacy), this data is only stored locally and on the edge, without being sent to the cloud or leaked to a third party.
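One common way to layer differential privacy onto federated updates is to clip each device’s update and add calibrated noise before anything is shared. The sketch below illustrates the idea; the clipping norm and noise scale are arbitrary stand-ins, not a tuned privacy budget.

```python
# Illustrative sketch of differentially private model updates: clip each
# device's update and add Gaussian noise before it is shared, so that what
# leaves the phone reveals little about any single user's data. The clipping
# norm and noise multiplier below are arbitrary, not tuned privacy parameters.
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))   # bound sensitivity
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise                                     # noisy update leaves the device

raw_update = np.array([0.8, -2.3, 0.5])   # stand-in for a local gradient or weight delta
print(privatize_update(raw_update))
```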

We keep stressing the importance of privacy, but its significance can’t be overstated. Users should own their data and have transparency around where data is sent and shared. Every type of data needs to be authorized for collection, and there must be transparency on how the data will be used.

There’s more to privacy than a mission statement: when health services are built privacy-first, you can bring more participants into the training loop, which gives teams a more diverse pool of users who feel confident sharing access to their private data. More real-time, encompassing health systems, where models learn from a large group of users instead of just a few, will lead to better health outcomes.

The unfortunate truth is that healthcare has become incredibly siloed, and data exchange is often difficult and expensive. For example, EMR data is not stored alongside claims and prescription data, and whether a prescription was even picked up is recorded only in yet other systems. If you then layer in data such as genetics, diet, social determinants of health, and activity, you have a multi-node problem for a single user. There is no single source of the full truth, and centralizing all of it is incredibly hard.

Federated learning provides the perfect opportunity to avoid these barriers. By putting the user/patient in charge of coordinating their health data, you can provide the right opt-ins to learn from their data across these disparate systems. It’s now possible to imagine organizations holding sensitive data applying federated learning to come together and collectively build more efficient and effective models in healthcare.
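As a rough sketch of what user-coordinated opt-ins could look like, the snippet below gates which siloed sources participate in a federated round based on a consent record; the source names and consent structure are invented for illustration.

```python
# Hypothetical sketch of opt-in-driven coordination: the user's consent record
# decides which siloed sources (EMR, claims, pharmacy, wearable) may run a
# local update in a federated round. Source names and consents are invented.
CONSENTS = {"emr": True, "claims": True, "pharmacy": False, "wearable": True}

def sources_for_round(consents: dict[str, bool]) -> list[str]:
    """Return only the data silos the user has explicitly opted into."""
    return [source for source, allowed in consents.items() if allowed]

print(sources_for_round(CONSENTS))   # ['emr', 'claims', 'wearable']
```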

