Securing GenAI in the Enterprise


Opaque Systems released a new whitepaper titled “Securing GenAI in the Enterprise.” Enterprises are champing at the bit to use GenAI to their benefit, but they are stuck: data privacy is the number one factor stalling GenAI initiatives. Concerns about data leaks, malicious use, and ever-changing regulations loom over the exciting world of Generative AI (GenAI), and over large language models (LLMs) in particular.

Opaque Systems, founded by UC Berkeley researchers from RISELab who investigated privacy-preserving data techniques in depth and co-developed confidential computing with Intel, has gone deep on how the next generation of privacy solutions will be implemented.

The paper outlines the problems faced by an industry that generates more data than ever and needs to make that data actionable. The catch: the leading techniques, such as data anonymization and data cleansing, are inadequate and put companies at risk by providing a false sense of security. Here’s the gist of where we are in 2024.

The Promise:

  • LLMs offer immense potential across various industries, from financial analysis to medical diagnosis.
  • LLMs process information and generate content at lightning speed, yet we generally treat them as black boxes, unable to discern how our data is being used and potentially assimilated into the models.
  • All of these initiatives share one linchpin for success: they need to be secure, and the existing tactics for securing that data limit the ability to adopt GenAI in the enterprise.

The Challenge:

  • Training LLMs requires sharing vast amounts of data, raising privacy concerns.
  • On average, companies are witnessing a 32% rise in insider-related incidents each month, translating to about 300 such events annually. This uptick heightens the risk of data leaks and security breaches.
  • Malicious actors could exploit LLMs for harmful purposes.
  • Unlike multi-tenant workloads in other cloud computing infrastructure, shared models can leak one user’s data to other users.
  • Keeping up with data privacy regulations becomes a complex puzzle.
  • Inference attacks are actually able to defeat standard privacy-preserving technology like data anonymization (see the linkage-attack sketch after this list).
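
To see why anonymization alone falls short, consider a classic linkage attack, where an attacker joins an “anonymized” dataset with a public one on shared quasi-identifiers. The sketch below is purely illustrative: the records, field names, and datasets are all hypothetical.

```python
# A minimal sketch of a linkage attack. All records and field names here
# are hypothetical, invented for illustration.

# An "anonymized" dataset: names removed, quasi-identifiers retained.
anonymized = [
    {"zip": "94704", "birth_year": 1980, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "94110", "birth_year": 1975, "sex": "M", "diagnosis": "asthma"},
]

# A public dataset (think voter rolls) carrying the same quasi-identifiers
# alongside real names.
public = [
    {"name": "Alice Smith", "zip": "94704", "birth_year": 1980, "sex": "F"},
    {"name": "Bob Jones", "zip": "94110", "birth_year": 1975, "sex": "M"},
]

def link(anon_rows, public_rows, keys=("zip", "birth_year", "sex")):
    """Join the two datasets on the shared quasi-identifiers."""
    index = {tuple(p[k] for k in keys): p["name"] for p in public_rows}
    for row in anon_rows:
        name = index.get(tuple(row[k] for k in keys))
        if name:
            # The "anonymous" diagnosis now has a name attached to it.
            print(f"{name} -> {row['diagnosis']}")

link(anonymized, public)
# Prints:
#   Alice Smith -> diabetes
#   Bob Jones -> asthma
```

Stripping names buys little if the remaining attributes still uniquely identify a person, which is exactly the false sense of security the paper warns about.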

The Solution: 

  • Confidential computing emerges as a potential solution, shielding data throughout the LLM’s lifecycle.
  • Modern CPUs and GPUs increasingly ship with a hardware root of trust that can process data while it stays encrypted and cryptographically attest to the code handling it.
  • Adopting GenAI through trusted execution environments (TEEs) is the best way to secure LLM fine-tuning, inferencing, and gateways: data stays encrypted end to end, yet the LLM can still process it without risking a breach (a minimal attestation flow is sketched below).
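
As a rough illustration of how that hardware root of trust gets used, the sketch below shows a client releasing its prompt to a TEE-hosted LLM gateway only after verifying an attestation report. It is a minimal sketch under heavy assumptions: the report format, the `EXPECTED_MEASUREMENT` value, and the HMAC stand-in for the vendor’s asymmetric signature chain are all invented for illustration, not any real vendor’s attestation API.

```python
import hashlib
import hmac

# Hash of the enclave binary the client has audited and trusts
# (illustrative value, not a real measurement).
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-llm-gateway-v1").hexdigest()

def verify_attestation(report: dict, vendor_key: bytes) -> bool:
    """Accept the enclave only if (1) the report is signed by the hardware
    root of trust and (2) the measured code matches what was audited.
    Real schemes use asymmetric signatures chained to the CPU vendor;
    HMAC is a stand-in to keep this sketch self-contained."""
    payload = report["measurement"].encode()
    expected_sig = hmac.new(vendor_key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, report["signature"]):
        return False  # signature does not chain back to the root of trust
    return report["measurement"] == EXPECTED_MEASUREMENT

def send_prompt(report: dict, vendor_key: bytes, prompt: str) -> None:
    """Release data to the enclave only after attestation succeeds."""
    if verify_attestation(report, vendor_key):
        # In a real flow the prompt would now be encrypted to a key that
        # exists only inside the enclave; transport is omitted here.
        print("attested: releasing encrypted prompt to the enclave")
    else:
        print("attestation failed: prompt never leaves the client")

# Simulate the enclave side producing a signed report.
vendor_key = b"hardware-root-of-trust-demo-key"
report = {
    "measurement": EXPECTED_MEASUREMENT,
    "signature": hmac.new(
        vendor_key, EXPECTED_MEASUREMENT.encode(), hashlib.sha256
    ).hexdigest(),
}
send_prompt(report, vendor_key, "summarize Q3 revenue by region")
```

The point is the ordering: sensitive data is released only after the hardware proves what code will touch it, which is what lets an LLM compute on the data without the operator, or other tenants, ever seeing it in the clear.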

