Securing GenAI in the Enterprise

Opaque Systems has released a new whitepaper titled “Securing GenAI in the Enterprise.” Enterprises are champing at the bit to use GenAI to their benefit, but they are stuck: data privacy is the number one factor stalling GenAI initiatives. Concerns about data leaks, malicious use, and ever-changing regulations loom over the exciting world of Generative AI (GenAI), specifically large language models (LLMs).

NEW RESEARCH: Growing Database Complexity Will Fuel Significant Skills Gaps in 2024

Increasing complexity, the rapid adoption of emerging technologies, and a growing skills gap are the biggest concerns facing IT leaders in 2024, according to The State of the Database Landscape, a major new survey from end-to-end Database DevOps provider Redgate.

Veritas Survey Finds Workers are Putting Businesses at Risk by Oversharing with GenAI Tools

Our friends over at Veritas just released a new survey revealing that workers are oversharing with generative AI tools, putting businesses at risk. Nearly a third (31%) of global office workers admitted to inputting potentially sensitive information into generative AI tools, such as customer details or employee financials.

Survey: 1 in 3 People are Using AI to Save their Love Lives

To find out exactly how people feel about AI influencing their dating journey, our friends over at Top10.com recently surveyed over one thousand adults about the idea. The results might surprise you!

Survey Shows that More than 90% of Insurers Plan to Increase AI Investment – Top 4 Trends for Insurers in 2024

To glean insights, Gradient AI surveyed more than 100 customers across a diverse range of insurance companies, revealing four noteworthy AI trends shaping the future of the insurance sector.

New OneStream Research Finds 80% of Financial Decision-Makers Believe AI Will Increase Productivity

OneStream, a leader in corporate performance management (CPM) solutions for advancing financial close, consolidation, reporting, planning and forecasting, announced the results of its global “AI-Driven Finance” survey, revealing that the majority (80%) of financial decision-makers believe AI will increase productivity in the office of finance.

Generative AI Models Are Built to Hallucinate: The Question is How to Control Them

In this contributed article, Stefano Soatto, Professor of Computer Science at the University of California, Los Angeles and a Vice President at Amazon Web Services, argues that generative AI models are designed and trained to hallucinate: hallucination is an inherent product of any generative model. Rather than trying to prevent generative AI models from hallucinating, we should be designing AI systems that can control them. Hallucinations are indeed a problem, and a big one, but one that an AI system that includes a generative model as a component can control.

Algorithmiq Demonstrates Path to Quantum Utility with IBM

Algorithmiq, a scaleup developing quantum algorithms to solve the most complex problems in life sciences, has successfully run one of the largest-scale error mitigation experiments to date on IBM’s hardware. This achievement positions the company, together with IBM, as a frontrunner in the race to reach quantum utility for real-world use cases. The experiment ran Algorithmiq’s proprietary error mitigation algorithms on IBM Nazca, a 127-qubit Eagle processor, using 50 active qubits and 98 layers of CNOT gates, for a total of 2,402 CNOT gates. This significant milestone for the field is the result of a collaboration between the two teams, who joined forces in 2022 to pave the way toward a first useful quantum advantage for chemistry.

Want Better AI? Get Input From a Real (Human) Expert

Scientists at the Department of Energy’s Pacific Northwest National Laboratory have put forth a new way to evaluate an AI system’s recommendations. They bring human experts into the loop to review how the machine learning system performed on a set of data. The expert learns which types of data the system typically classifies correctly, and which types lead to confusion and errors. Armed with this knowledge, the experts then offer their own confidence scores on future system recommendations.
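The expert-in-the-loop idea described above can be sketched in a few lines of Python. Note this is an illustrative sketch only, not the PNNL implementation: the data-type names, accuracy figures, and the `score_recommendation` helper are all hypothetical, and the expert's confidence is modeled crudely here as the observed per-type accuracy.

```python
# Hypothetical sketch: an expert reviews how an ML system performed on a
# labeled set, grouped by data type, then assigns per-type confidence
# scores to future recommendations. All names and numbers are illustrative.

# Step 1: observed ML performance per data type (expert's review material).
observed = {
    "clear_daylight_image": {"correct": 95, "total": 100},
    "low_light_image":      {"correct": 60, "total": 100},
    "occluded_image":       {"correct": 40, "total": 100},
}

# Step 2: the expert derives a confidence score per data type; here we
# simply use the observed accuracy as a stand-in for expert judgment.
expert_confidence = {
    dtype: stats["correct"] / stats["total"]
    for dtype, stats in observed.items()
}

def score_recommendation(data_type: str) -> float:
    """Expert's confidence in a future recommendation on this data type.

    Unseen data types fall back to 0.5, i.e. no prior knowledge.
    """
    return expert_confidence.get(data_type, 0.5)

print(score_recommendation("low_light_image"))  # 0.6
```

In practice the expert's score would fold in domain knowledge beyond raw accuracy (e.g. knowing that certain error modes are more costly), but the structure is the same: review performance by data type, then attach a confidence score to each future recommendation.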

New Data on LLM Accuracy

Juan Sequeda, Principal Scientist at data.world, recently published a research paper, “A Benchmark to Understand the Role of Knowledge Graphs on Large Language Model’s Accuracy for Question Answering on Enterprise SQL Databases.” He and his co-authors benchmarked LLM accuracy in answering questions over real business data.