Lightning AI Releases PyTorch Lightning 2.0 and a New Open Source Library for Lightweight Scaling of Machine Learning Models
Lightning AI, the company accelerating the development of an AI-powered world, today announced the general availability of PyTorch Lightning 2.0, the company’s flagship open source AI framework used by more than 10,000 organizations to quickly and cost-efficiently train and scale machine learning models. The new release introduces a stable API, offers a host of powerful features with a smaller footprint, and is easier to read and debug.
ClearML Study: Friction a Key Challenge for MLOps Tools
ClearML, the open source, end-to-end MLOps platform, released the final set of data to complete its recently released research report, MLOps in 2023: What Does the Future Hold? Polling 200 U.S.-based machine learning decision makers, the report examines key trends, opportunities, and challenges in machine learning and MLOps.
TOP 10 insideBIGDATA Articles for February 2023
In this continuing regular feature, we give all our valued readers a monthly heads-up for the top 10 most viewed articles appearing on insideBIGDATA. Over the past several months, we’ve heard from many of our followers that this feature enables them to catch up with important news and features flowing across our many channels.
Harness Unstructured Data with AI to Improve Investigative Intelligence
In this special guest feature, Jordan Dimitrov, Product Manager, Unstructured Data Analytics, Cognyte, addresses the importance of unstructured data, why AI is an invaluable tool and how to move beyond legacy approaches to data management. Unstructured data comprises the majority of data being used for investigations by governmental organizations today and will play an increasingly vital role in investigative analytics going forward. To ensure a holistic, data-driven intelligence assessment, unstructured data fusion and analysis are essential.
Heard on the Street – 3/8/2023
Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace.
insideBIGDATA Latest News – 3/7/2023
In this regular column, we’ll bring you all the latest industry news centered around our main topics of focus: big data, data science, machine learning, AI, and deep learning. Our industry is constantly accelerating, with new products and services being announced every day. Fortunately, we’re in close touch with vendors from this vast ecosystem, so we’re in a unique position to inform you about all that’s new and exciting. Our massive industry database is growing all the time, so stay tuned for the latest news items describing technology that may make you and your organization more competitive.
#insideBIGDATApodcast: Three Metrics for Measuring Enterprise AI Success
Welcome to the insideBIGDATA series of podcast presentations, a curated collection of topics relevant to our global audience. Topics include big data, data science, machine learning, AI, and deep learning. Today’s guest is Supreet Kaur, Assistant Vice President at Morgan Stanley. In conversation with Emerj CEO Daniel Faggella, Supreet tells business leaders three metrics they need to measure their enterprise AI success.
Walled Garden Data Reliance – Hindrance, Annoyance or Myth?
In this special guest feature, Aman Khanna, ProfitWheel Co-founder, highlights why relying on walled garden data is not best for brands. There needs to be a fundamental shift in how they collect and use third-party data while optimizing their own first-party data pools. If corporate data strategies do not start restructuring now, brands are in for an acute headache down the road, when data access tightens further and signal loss leaves them not knowing who they are advertising to.
Research Highlights: SparseGPT: Prune LLMs Accurately in One-Shot
A new research paper shows that large-scale generative pretrained transformer (GPT) family models can be pruned to at least 50% sparsity in one-shot, without any retraining, at minimal loss of accuracy. This is achieved via a new pruning method called SparseGPT, specifically designed to work efficiently and accurately on massive GPT-family models.
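The headline claim — zeroing out half of a model's weights in one shot, without retraining — can be illustrated with a naive magnitude-pruning sketch. To be clear, this is not the SparseGPT method described in the paper (which selects and compensates for pruned weights layer by layer using approximate second-order information); it is only a minimal baseline showing what "50% sparsity" means for a weight matrix:

```python
def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of entries in a 2-D
    weight matrix (list of rows).

    Naive baseline for illustration only: unlike SparseGPT, it uses no
    second-order information and does not adjust the surviving weights
    to compensate for the pruned ones.
    """
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)          # number of entries to prune
    if k == 0:
        return [row[:] for row in weights]
    threshold = flat[k - 1]                # largest magnitude to prune
    # Ties at the threshold may prune slightly more than k entries.
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]


# Example: prune a 2x2 matrix to 50% sparsity.
pruned = magnitude_prune([[0.1, -2.0], [0.05, 3.0]])
# The two smallest-magnitude entries (0.1 and 0.05) are zeroed,
# leaving [[0.0, -2.0], [0.0, 3.0]].
```

The large-magnitude weights survive untouched; SparseGPT's contribution is making this kind of selection accurate at GPT scale, where naive magnitude pruning degrades model quality noticeably.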
AI from a Psychologist’s Point of View
Researchers at the Max Planck Institute for Biological Cybernetics in Tübingen have examined the general intelligence of the language model GPT-3, a powerful AI tool. Using psychological tests, they studied competencies such as causal reasoning and deliberation, and compared the results with the abilities of humans. Their findings, published in the paper “Using cognitive psychology to understand GPT-3,” paint a heterogeneous picture: while GPT-3 can keep up with humans in some areas, it falls behind in others, probably due to a lack of interaction with the real world.