Research Highlights: Real or Fake Text? We Can Learn to Spot the Difference
A team of researchers at the University of Pennsylvania School of Engineering and Applied Science is seeking to empower tech users to mitigate the risks of AI-generated misinformation. In a peer-reviewed paper presented at the February 2023 meeting of the Association for the Advancement of Artificial Intelligence, the authors demonstrate that people can learn to spot the difference between machine-generated and human-written text.
Lightning AI Releases PyTorch Lightning 2.0 and a New Open Source Library for Lightweight Scaling of Machine Learning Models
Lightning AI, the company accelerating the development of an AI-powered world, today announced the general availability of PyTorch Lightning 2.0, the company’s flagship open source AI framework used by more than 10,000 organizations to quickly and cost-efficiently train and scale machine learning models. The new release introduces a stable API, offers a host of powerful features with a smaller footprint, and is easier to read and debug.
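For readers new to the framework, here is a minimal sketch of the basic LightningModule/Trainer workflow in PyTorch Lightning 2.x. The tiny model and random data are placeholders for illustration only and are not taken from the release announcement.

```python
# Minimal sketch of the PyTorch Lightning 2.x training workflow.
# The model and data below are toy placeholders for illustration only.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import lightning as L


class TinyRegressor(L.LightningModule):
    """A one-layer model wrapped in a LightningModule."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.layer(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


if __name__ == "__main__":
    # Random tensors stand in for a real dataset.
    dataset = TensorDataset(torch.randn(256, 8), torch.randn(256, 1))
    loader = DataLoader(dataset, batch_size=32)

    # The Trainer owns the loop, device placement, and checkpointing.
    trainer = L.Trainer(max_epochs=2, accelerator="auto")
    trainer.fit(TinyRegressor(), loader)
```

Because the Trainer owns the training loop and hardware handling, the same module can move from a laptop to multi-GPU setups largely through configuration rather than code changes.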
Data Science Bows Before Prompt Engineering and Few Shot Learning
In this contributed article, editorial consultant Jelani Harper takes a new look at the GPT phenomenon by exploring how prompt engineering (including prompt stores and databases) coupled with few-shot learning can constitute a significant adjunct to traditional data science.
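To make the idea concrete, below is a minimal sketch of few-shot prompting: a handful of labeled examples drawn from a small prompt store are prepended to the query so the model can infer the task in-context. The example reviews and the call_llm() placeholder are hypothetical, not taken from the article.

```python
# Minimal sketch of few-shot prompt construction from a small "prompt store".
# The examples and the call_llm() placeholder are hypothetical.
FEW_SHOT_EXAMPLES = [
    {"review": "The battery dies within an hour.", "sentiment": "negative"},
    {"review": "Setup took two minutes and it just works.", "sentiment": "positive"},
    {"review": "It does the job, nothing more.", "sentiment": "neutral"},
]


def build_prompt(query: str) -> str:
    """Prepend labeled examples so the model can infer the task from context."""
    lines = ["Classify the sentiment of each review as positive, negative, or neutral.", ""]
    for ex in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {ex['review']}")
        lines.append(f"Sentiment: {ex['sentiment']}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)


if __name__ == "__main__":
    prompt = build_prompt("Shipping was slow but the product is excellent.")
    print(prompt)
    # The assembled prompt would then be sent to a GPT-style completion
    # endpoint, e.g. response = call_llm(prompt)  # call_llm is a stand-in
```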
TOP 10 insideBIGDATA Articles for February 2023
In this continuing regular feature, we give all our valued readers a monthly heads-up for the top 10 most viewed articles appearing on insideBIGDATA. Over the past several months, we’ve heard from many of our followers that this feature helps them catch up with important news and features flowing across our many channels.
Challenges for Startups in Adopting AI and Data Analytics
In this contributed article, Bal Heroor, CEO and Principal at Mactores, believes that by 2027 it will be nearly unavoidable for every business, big or small, to get serious about adopting a high-value data analytics system. While this can be a costly investment, there’s no reason that even a startup can’t be part of the data transformation affecting almost every industry sector today.
Heard on the Street – 3/8/2023
Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace.
#insideBIGDATApodcast: Three Metrics for Measuring Enterprise AI Success
Welcome to the insideBIGDATA series of podcast presentations, a curated collection of topics relevant to our global audience. Topics include big data, data science, machine learning, AI, and deep learning. Today’s guest is Supreet Kaur, Assistant Vice President at Morgan Stanley. In conversation with Emerj CEO Daniel Faggella, Supreet shares three metrics business leaders need in order to measure their enterprise AI success.
Infographic: Is AI the Next Gold Rush?
Our friends over at writerbuddy.ai analyzed over 10,000 AI companies and their funding data between 2015 and 2023. The data was collected from CrunchBase, NetBase Quid, S&P Capital IQ, and NFX. Corporate AI investment has risen consistently over that period, reaching into the billions of dollars.
Research Highlights: SparseGPT: Prune LLMs Accurately in One-Shot
A new research paper shows that large-scale generative pretrained transformer (GPT) family models can be pruned to at least 50% sparsity in one-shot, without any retraining, at minimal loss of accuracy. This is achieved via a new pruning method called SparseGPT, specifically designed to work efficiently and accurately on massive GPT-family models.
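SparseGPT itself relies on an approximate sparse-regression reconstruction to choose which weights to drop; the sketch below is not that algorithm, only a plain one-shot magnitude-pruning pass in PyTorch that illustrates what driving a weight matrix to 50% unstructured sparsity without retraining looks like.

```python
# Not the SparseGPT algorithm (which uses an approximate sparse-regression
# reconstruction); this is a plain one-shot magnitude-pruning sketch showing
# what 50% unstructured sparsity on a weight matrix means.
import torch


def prune_to_sparsity(weight: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    """Zero out the smallest-magnitude entries so roughly `sparsity` of them are zero."""
    k = int(weight.numel() * sparsity)           # number of weights to drop
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = weight.abs() > threshold              # keep only the larger weights
    return weight * mask


if __name__ == "__main__":
    w = torch.randn(512, 512)
    w_sparse = prune_to_sparsity(w, 0.5)
    frac_zero = (w_sparse == 0).float().mean().item()
    print(f"fraction of zero weights: {frac_zero:.3f}")  # ~0.5
```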
AI from a Psychologist’s Point of View
Researchers at the Max Planck Institute for Biological Cybernetics in Tübingen have examined the general intelligence of the language model GPT-3, a powerful AI tool. Using psychological tests, they studied competencies such as causal reasoning and deliberation, and compared the results with the abilities of humans. Their findings, reported in the paper “Using cognitive psychology to understand GPT-3,” paint a heterogeneous picture: while GPT-3 can keep up with humans in some areas, it falls behind in others, probably due to a lack of interaction with the real world.