AI from a Psychologist’s Point of View

Researchers at the Max Planck Institute for Biological Cybernetics in Tübingen have examined the general intelligence of the language model GPT-3, a powerful AI tool. Using psychological tests, they studied competencies such as causal reasoning and deliberation, and compared the results with the abilities of humans. Their findings, reported in the paper “Using cognitive psychology to understand GPT-3,” paint a heterogeneous picture: while GPT-3 can keep up with humans in some areas, it falls behind in others, probably due to a lack of interaction with the real world.
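
The paper’s approach can be approximated in a few lines of code: a canonical vignette from the decision-making literature is sent to the model as a prompt, and the completion is compared with typical human responses. The sketch below is only an illustration of that setup, assuming the pre-1.0 openai Python client and the text-davinci-002 engine; the exact prompts, test items, and engines used in the study may differ.

```python
# Minimal sketch of vignette-based probing of GPT-3 (illustrative only;
# assumes the pre-1.0 openai Python client and an API key in OPENAI_API_KEY).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# A classic decision-making vignette (the "Linda problem"); the study's
# actual test items and phrasing may differ.
prompt = (
    "Linda is 31 years old, single, outspoken, and very bright. She majored "
    "in philosophy and was deeply concerned with issues of discrimination "
    "and social justice.\n"
    "Which is more probable?\n"
    "A) Linda is a bank teller.\n"
    "B) Linda is a bank teller and is active in the feminist movement.\n"
    "Answer:"
)

response = openai.Completion.create(
    model="text-davinci-002",  # GPT-3 engine assumed for illustration
    prompt=prompt,
    max_tokens=8,
    temperature=0,             # deterministic output makes scoring easier
)

print(response["choices"][0]["text"].strip())  # normatively correct answer is "A"
```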

Level AI Introduces the Future of Customer Service With Generative AI Solution, AgentGPT

Level AI, a leader in advanced conversation intelligence solutions for the contact center, announced AgentGPT, a secure generative AI system for customer service teams that is trained on a client’s proprietary customer conversational data. It helps agents handle even the most complex questions, stepping in to answer what can’t be found in the help section or other publicly available resources.

C3 AI Announces Launch of C3 Generative AI Product Suite

C3 AI (NYSE: AI), the Enterprise AI application software company, today announced the launch of the C3 Generative AI Product Suite with the release of its first product — C3 Generative AI for Enterprise Search.

2023 Trends in Artificial Intelligence and Machine Learning: Generative AI Unfolds  

In this contributed article, editorial consultant Jelani Harper offers his perspective on 2023 trends for the boundless potential of generative Artificial Intelligence, the branch of advanced machine learning that analyzes existing content in order to produce strikingly similar new content.

Snorkel AI Accelerates Foundation Model Adoption with Data-centric AI

Snorkel AI, the data-centric AI platform company, today introduced Data-centric Foundation Model Development for enterprises to unlock complex, performance-critical use cases with GPT-3, RoBERTa, T5, and other foundation models. With this launch, enterprise data science and machine learning teams can overcome adaptation and deployment challenges by creating large, domain-specific datasets to fine-tune foundation models, and by using those foundation models to build smaller, specialized models deployable within governance and cost constraints.
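
The workflow described here, labeling raw domain data with a foundation model and then distilling the result into a small deployable model, can be sketched in a few lines. The example below is a hedged illustration rather than Snorkel’s actual product API: it uses a Hugging Face zero-shot classification pipeline as a stand-in foundation model and scikit-learn for the smaller specialized model, with hypothetical texts and labels.

```python
# Sketch of data-centric distillation: auto-label raw text with a foundation
# model, then train a small, cheap-to-serve classifier on those labels.
# Illustrative only; this is not the Snorkel Flow API.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from transformers import pipeline

raw_texts = [
    "The invoice total does not match the purchase order.",
    "Great demo yesterday, looking forward to the rollout.",
    "Payment failed twice, please escalate to billing.",
    "Thanks for the quick turnaround on the contract.",
]
candidate_labels = ["billing issue", "positive feedback"]  # hypothetical domain labels

# 1) Foundation model as a programmatic labeler (zero-shot classification).
labeler = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
weak_labels = [
    labeler(text, candidate_labels=candidate_labels)["labels"][0] for text in raw_texts
]

# 2) Small specialized model trained on the auto-labeled dataset.
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(raw_texts)
small_model = LogisticRegression(max_iter=1000).fit(features, weak_labels)

print(small_model.predict(vectorizer.transform(["Card was charged but order never shipped."])))
```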

The Move Toward Green Machine Learning

A new study suggests tactics for machine learning engineers to cut their carbon emissions. Led by David Patterson, researchers at Google and UC Berkeley found that AI developers can shrink a model’s carbon footprint by as much as a thousand-fold by streamlining model architecture, upgrading to more efficient hardware, and using more efficient data centers.
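
The levers the study identifies reduce to simple arithmetic: training emissions scale with accelerator-hours, per-device power draw, data-center overhead (PUE), and the carbon intensity of the local grid. The sketch below walks through that estimate with purely illustrative numbers, not figures from the paper.

```python
# Back-of-the-envelope training-emissions estimate (illustrative numbers only).
def training_emissions_kg(processor_hours, avg_power_kw, pue, grid_kgco2e_per_kwh):
    """Energy (kWh) times data-center overhead times grid carbon intensity."""
    energy_kwh = processor_hours * avg_power_kw * pue
    return energy_kwh * grid_kgco2e_per_kwh

# Hypothetical baseline: 10,000 accelerator-hours at 0.30 kW in an average facility.
baseline = training_emissions_kg(10_000, 0.30, pue=1.6, grid_kgco2e_per_kwh=0.45)

# Same workload after the paper's levers: a leaner architecture that needs fewer
# hours, more efficient hardware, and a cleaner, better-run data center.
improved = training_emissions_kg(1_000, 0.25, pue=1.1, grid_kgco2e_per_kwh=0.05)

print(f"baseline: {baseline:.0f} kg CO2e, improved: {improved:.0f} kg CO2e, "
      f"reduction: {baseline / improved:.0f}x")
```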

Research Highlights: Emergent Abilities of Large Language Models

In this regular column, we take a look at highlights of breaking research in big data, data science, machine learning, AI, and deep learning. For data scientists, it’s important to stay connected with the research arm of the field in order to understand where the technology is headed. Enjoy!

Gaining the Enterprise Edge in AI Products

In this contributed article, Taggart Bonham, Product Manager of Global AI at F5 Networks, discusses OpenAI’s release last June of GPT-3, its newest text-generating AI model. As seen in the deluge of Twitter demos, GPT-3 works so well that people have used it to generate text-based DevOps pipelines, complex SQL queries, Figma designs, and even code. In the article, Taggart explains how enterprises need to prepare for the AI economy by standardizing their data collection processes across their organizations, much as was done for GPT-3’s training data, so that the data can then be properly leveraged.
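
The SQL demos mentioned above mostly come down to prompt construction: give the model the table schema plus a natural-language question and let it complete the query. The snippet below is a minimal sketch of that pattern, assuming the pre-1.0 openai Python client and an invented two-table schema; it is not the workflow described in the article.

```python
# Sketch of GPT-3 text-to-SQL via prompt construction (hypothetical schema;
# assumes the pre-1.0 openai Python client and an API key in OPENAI_API_KEY).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

schema = (
    "Table orders(order_id, customer_id, order_date, total_usd)\n"
    "Table customers(customer_id, name, region)\n"
)
question = "Total revenue per region in 2020, highest first."

prompt = f"{schema}\n-- Write a SQL query to answer: {question}\nSELECT"

completion = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    max_tokens=128,
    temperature=0,
    stop=[";"],  # stop at the end of the first statement
)

print("SELECT" + completion["choices"][0]["text"])
```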

Have a Goal in Mind: GPT-3, PEGASUS, and New Frameworks for Text Summarization in Healthcare and BFSI

In this contributed article, Dattaraj Rao, Innovation and R&D Architect at Persistent Systems, discusses the rise in interest in neural network language models, specifically the recent Google PEGASUS model. This model not only shows remarkable promise when it comes to text summarization and synthesis, but its non-generalized approach could push industries such as healthcare to embrace NLP much earlier than was once supposed.
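
Readers who want to try PEGASUS directly can do so through the Hugging Face transformers library. The short sketch below assumes the publicly released google/pegasus-xsum checkpoint and an invented clinical-style note; it is not necessarily the configuration or the healthcare pipeline discussed in the article.

```python
# Minimal abstractive summarization with a public PEGASUS checkpoint
# (assumes transformers, torch, and sentencepiece are installed).
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-xsum"  # illustrative public checkpoint
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

document = (
    "The patient presented with a persistent cough and mild fever for ten days. "
    "Chest imaging showed no abnormalities, and symptoms resolved after a "
    "course of supportive care and rest."
)

batch = tokenizer(document, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=4, max_length=48)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```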

Why Humans Still Need to be Involved in Language-Based AI

In this contributed article, Christine Maroti, AI Research Engineer at Unbabel, argues that humans still need to be in the loop in most practical AI applications, especially in nuanced areas such as language. Despite the hype, these algorithms still have major flaws: machines fall short of understanding the meaning and intent behind human conversation, and ethical concerns such as bias in AI are still far from resolved.