NetSPI Debuts ML/AI Penetration Testing, a Holistic Approach to Securing Machine Learning Models and LLM Implementations

NetSPI, the global leader in offensive security, today debuted its ML/AI Pentesting solution to bring a more holistic and proactive approach to safeguarding machine learning model implementations. The first-of-its-kind solution focuses on two core components: identifying, analyzing, and remediating vulnerabilities in machine learning systems such as Large Language Models (LLMs), and providing grounded, real-world guidance to ensure security is considered from ideation to implementation.

New Survey Findings on LLM Use Cases and Challenges from MLOps Community

We’re excited to share new survey results from our friends at the MLOps community. Their team surveyed more than 100 practitioners to understand challenges related to developing and deploying Large Language Models (LLMs).

insideBIGDATA AI News Briefs – 7/27/2023

Welcome to insideBIGDATA AI News Briefs, our podcast channel bringing you the latest industry insights and perspectives on the field of AI, including deep learning, large language models, generative AI, and transformers. We’re working tirelessly to dig up the most timely and curious tidbits underlying the day’s most popular technologies. We know this field is advancing rapidly, and we want to give you a regular resource that keeps you informed and up to date with the state of the art.

Video Highlights: Generative AI with Large Language Models

At an unprecedented pace, Large Language Models like GPT-4 are transforming the world in general and the field of data science in particular. This two-hour training video presentation by Jon Krohn, Co-Founder and Chief Data Scientist at the machine learning company Nebula, introduces deep learning transformer architectures, including LLMs.

Power to the Data Report Podcast: Large Language Models for Executives

Hello, and welcome to the “Power-to-the-Data Report” podcast, where we cover timely topics of the day from across the Big Data ecosystem. I am your host, Daniel Gutierrez from insideBIGDATA, where I serve as Editor-in-Chief & Resident Data Scientist. Today’s topic is “Large Language Models for Executives.” LLMs represent an important inflection point in the history of computing. After many “AI winters,” we’re finally seeing techniques like generative AI and transformers realize some of the dreams of AI researchers from decades past. This episode presents a high-level view of LLMs for executives, project stakeholders, and enterprise decision makers.

POLL: Which Company Will Lead the LLM Pack?

Since the release of ChatGPT late last year, the world has gone crazy for large language models (LLMs) and generative AI powered by transformers. The biggest players in our industry are now jockeying for prime position in this lucrative space. The news cycle is extremely fast-paced, and the technology is advancing at an incredible rate. Meta’s announcement yesterday that Llama 2, the latest version of its large language model, is being open sourced is a good example.

Brief History of LLMs

The early days of natural language processing saw researchers experiment with many different approaches, including conceptual ontologies and rule-based systems. While some of these methods proved narrowly useful, none yielded robust results. That changed in the 2010s, when NLP research intersected with the then-bustling field of neural networks. That collision laid the groundwork for the first large language models. This post, adapted and excerpted from one on Snorkel.ai entitled “Large language models: their history, capabilities, and limitations,” follows the history of LLMs from that first intersection to their current state.

Generative AI Report: Pilot Taps OpenAI to Launch Pilot GPT

Welcome to the Generative AI Report, a new feature here on insideBIGDATA with a special focus on all the new applications and integrations tied to generative AI technologies. We’ve been receiving so many cool news items relating to applications centered on large language models that we thought it would be a timely service for readers to start a new channel along these lines. A large language model fine-tuned on proprietary data amounts to an AI application, and that is what these innovative companies are creating. The field of AI is accelerating at such a fast rate that we want to help our loyal global audience keep pace.

MosaicML Releases Open-Source MPT-30B LLMs, Trained on H100s to Power Generative AI Applications

MosaicML announced the availability of MPT-30B Base, Instruct, and Chat, the most advanced models in their MPT (MosaicML Pretrained Transformer) series of open-source large language models. These state-of-the-art models – which were trained with an 8k token context window – surpass the quality of the original GPT-3 and can be used directly for inference and/or as starting points for building proprietary models.
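
For readers who want to experiment, here is a minimal sketch of running one of these open-source checkpoints for inference with the Hugging Face transformers library. The model ID, the trust_remote_code flag, and the hardware assumptions reflect how MosaicML has published its MPT checkpoints, and should be checked against the official model cards before running; this is not MosaicML's official example.

```python
# A minimal sketch (not MosaicML's official example) of loading an MPT-30B
# checkpoint for inference with Hugging Face transformers. The model ID below
# is an assumption about how the weights are published; "mosaicml/mpt-30b" and
# "mosaicml/mpt-30b-chat" are the other variants mentioned in the announcement.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mosaicml/mpt-30b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a 30B-parameter model needs substantial GPU memory
    trust_remote_code=True,      # MPT ships a custom model class in its repo
    device_map="auto",           # spread layers across available devices
)

prompt = "Explain in two sentences why an 8k-token context window is useful."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The same checkpoints can also serve as starting points for continued pretraining or fine-tuning on proprietary data, which is the "building proprietary models" path the announcement refers to.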

Research Highlights: LLMs Can Process a Lot More Text Than We Thought

A team of researchers at AI21 Labs, the company behind the generative text AI platforms Human or Not, Wordtune, and Jurassic 2, has identified a new method to overcome a challenge that most Large Language Models (LLMs) grapple with – a limit on how much text they can process before doing so becomes too expensive and impractical.
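
To make that limit concrete, the sketch below shows the usual workaround (explicitly not AI21's new method): splitting a long document into chunks that each fit a fixed token budget. The stand-in tokenizer, the 1,024-token budget, and the input file name are illustrative assumptions.

```python
# Generic illustration of the context-window limit, not AI21's technique:
# naive chunking of a long document into pieces that fit a fixed token budget.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in tokenizer for counting tokens
MAX_TOKENS = 1024  # hypothetical per-request context budget

def chunk_document(text: str, max_tokens: int = MAX_TOKENS) -> list[str]:
    """Split text into consecutive chunks of at most max_tokens tokens each."""
    token_ids = tokenizer.encode(text)
    return [
        tokenizer.decode(token_ids[start:start + max_tokens])
        for start in range(0, len(token_ids), max_tokens)
    ]

long_text = open("report.txt").read()  # hypothetical document longer than the context window
chunks = chunk_document(long_text)
print(f"{len(chunks)} chunks, each within the {MAX_TOKENS}-token budget")
```

Each chunk then has to be sent to the model separately and the answers stitched back together, which is exactly the kind of expense and impracticality the AI21 researchers set out to reduce.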