insideBIGDATA AI News Briefs – 9/13/2023


Welcome to insideBIGDATA AI News Briefs, our timely new feature bringing you the latest industry insights and perspectives on the field of AI, including deep learning, large language models, generative AI, and transformers. We’re working tirelessly to dig up the most timely and curious tidbits underlying the day’s most popular technologies. We know this field is advancing rapidly, and we want to bring you a regular resource that keeps you informed and up to date. Enjoy!

Intel Shows Competitive AI Inference Performance with MLPerf Inference Benchmark Results – Intel posted impressive competitive AI gains with three products: Habana Gaudi2 accelerators, 4th Gen Xeon Scalable processors, and the Intel Xeon CPU Max Series. These results build on the June MLPerf Training 3.0 GPT-3 benchmark results that validated Gaudi2 as the only viable alternative to Nvidia’s H100, and on Hugging Face performance benchmarks showing Gaudi2 can outperform the H100 on a vision-language AI model.

A few key takeaways from the results:

  • Gaudi2 delivers compelling performance against Nvidia’s H100, which holds only a slight advantage of 1.09x (server) and 1.28x (offline) over Gaudi2.
  • Gaudi2 outperforms Nvidia’s A100 by 2.4x (server) and 2x (offline).
  • The Gaudi2 submission employed the FP8 data type and reached 99.9% accuracy with it.
  • For the GPT-J 100-word summarization task of a news article of approximately 1,000 to 1,500 words, 4th Gen Intel Xeon processors summarized two paragraphs per second in offline mode and one paragraph per second in real-time server mode.  
  • For GPT-J, the Intel Xeon CPU Max Series, which provides up to 64 gigabytes (GB) of high-bandwidth memory, was the only CPU able to achieve 99.9% accuracy.

MLPerf results show that SiMa.ai, delivering ML at the embedded edge, outperformed NVIDIA in the Closed Edge power category. With frames per second per watt as the de facto performance standard for edge AI and ML, these results demonstrate that SiMa.ai’s push-button approach delivers continued leadership in power efficiency without compromising performance.

Tidio did a study on AI hallucinations and what people think of them. Here are some of the findings:

  • About 96% of internet users know of AI hallucinations, and around 86% have personally experienced them.
  • Around 93% are convinced that AI hallucinations can harm users.
  • Only 27% blame users who write prompts for AI hallucinations, while 22% believe it’s the fault of governments who want to push their agenda.
  • About 48% of people would like to see improved user education about AI to fight AI hallucinations, while 47% would vote for stronger regulations and guidelines for developers.

RNDGen is a state-of-the-art random data generator designed to meet the diverse needs of developers, testers, data analysts, and data scientists. More than just another data generator, it’s a comprehensive tool offering over 100 types of dummy data templates. Originally created by the company for internal use, it allows users to generate large amounts of randomized synthetic test data seamlessly. JSON, CSV, SQL, XML, and Excel formats are supported.
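To make the idea concrete, here is a minimal sketch of the kind of template-driven synthetic data such a tool emits. This is not RNDGen’s actual API; the field names and the `generate` helper are illustrative assumptions, using only the Python standard library.

```python
import csv
import io
import json
import random
import string

def random_record(i):
    """Build one dummy record from a few common template fields."""
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return {
        "id": i,
        "name": name,
        "email": f"{name}@example.com",
        "age": random.randint(18, 90),
    }

def generate(n, fmt="json"):
    """Generate n randomized records, serialized as JSON or CSV."""
    records = [random_record(i) for i in range(n)]
    if fmt == "json":
        return json.dumps(records, indent=2)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=records[0].keys())
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```

A real generator like RNDGen adds many more template types (addresses, phone numbers, timestamps) and output formats (SQL, XML, Excel) on top of this same pattern.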

Transformers as Support Vector Machines – This new research paper establishes an equivalence between the optimization geometry of self-attention in transformers and a hard-margin Support Vector Machine (SVM) problem. This equivalence is used to characterize the implicit bias of 1-layer transformers optimized with gradient descent. The main issue is understanding the optimization landscape and implicit bias of transformers; specifically, the intent was to understand how the attention layer selects and composes tokens when trained with gradient descent. The authors show that optimizing the attention layer with vanishing regularization converges in direction to an SVM solution. The concept of “Attention-SVM” (Att-SVM) is introduced, which separates and selects optimal tokens from each input sequence.
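Schematically, the Att-SVM idea can be sketched as a max-margin problem over the attention weights. The notation below is a paraphrase for intuition, not the paper’s exact formulation: for each input sequence $i$ with tokens $\mathbf{x}_{it}$, query $\mathbf{z}_i$, and a locally “optimal” token indexed by $\alpha_i$, the attention parameters $\mathbf{W}$ implicitly solve

```latex
\min_{\mathbf{W}} \ \|\mathbf{W}\|_F
\quad \text{s.t.} \quad
(\mathbf{x}_{i\alpha_i} - \mathbf{x}_{it})^\top \mathbf{W}\, \mathbf{z}_i \ \ge\ 1
\qquad \text{for all } t \neq \alpha_i \text{ and all inputs } i.
```

In words: gradient descent on the attention layer drifts toward the minimum-norm $\mathbf{W}$ that separates the selected token from every other token in each sequence with maximum margin, which is exactly the geometry of a hard-margin SVM.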

AI in TIME – The first-ever Most Influential People in AI List spotlights leaders, innovators, shapers, and thinkers.

IBM Rolls Out GenAI – The new features and models across its watsonx data platform include data generation, a privacy and ethics toolkit, and more. IBM will even reveal the data used to train its models.

d-Matrix NextGen AI Chips – The fast-moving startup designs GenAI-optimized and energy-efficient chips and expects $70M+ in ARR and break-even in 2 years. A Series B round of $110M was just backed by Temasek, Playground Global and Microsoft.

Apple may have shown up late for the AI hype-cycle, but you shouldn’t discount the company just yet. A history of visionary innovation and a massive distribution advantage mean it could soon overcome competitors to become a leader of the pack. In fact, the genesis of Apple’s work on conversational AI was four years ago. Since then, it has been quietly funneling “millions per day” and the personnel resources of four teams into language- and image-model-based features. Additionally, Apple plans to incorporate LLMs into Siri to allow users to automate complex tasks using voice commands. It could also solve some of the privacy, cost, and speed problems prevalent in today’s LLMs with its “edge AI” approach, with models such as face detection running on iPhones rather than servers.

Open Interpreter: Let language models run code on your computer – Open Interpreter is an open-source platform that enables Large Language Models (LLMs) to run code on your local machine. It offers a natural-language interface for a wide range of general tasks such as:

  • Editing and creating photos, videos, PDFs, and more
  • Managing a Chrome browser to perform research
  • Generating, cleaning, and analyzing large data sets

Open Interpreter utilizes a function-calling language model supported by OpenAI. It primarily employs the robust GPT-4 model, but it also allows for other LLMs such as Code Llama or any Hugging Face model.
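The function-calling pattern behind tools like this is simple to sketch: the model replies with a structured call (a tool name plus JSON arguments) instead of free text, and the host program executes it. The `run_python` tool, the message shape, and the `dispatch` helper below are illustrative assumptions modeled on the general function-calling format, not Open Interpreter’s internal API.

```python
import json

# Toolbox the "model" is allowed to call. Open Interpreter's real tool is a
# full code executor; a toy expression evaluator suffices to show the pattern.
TOOLS = {
    "run_python": lambda code: str(eval(code)),  # toy: expressions only
}

def dispatch(model_message):
    """Parse a function-call message (name + JSON arguments) and run the tool."""
    call = model_message["function_call"]
    args = json.loads(call["arguments"])
    return TOOLS[call["name"]](**args)

# A function-calling model emits structured output like this; the host
# executes it and feeds the result back into the conversation.
reply = {"function_call": {"name": "run_python",
                           "arguments": json.dumps({"code": "2 + 2 * 10"})}}
```

Here `dispatch(reply)` evaluates the requested expression and returns the result as a string, which the host would pass back to the model as the next conversational turn.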

Llama 2 is now available to run for free on Graphcore IPUs using a Paperspace Gradient Notebook – Llama 2 is the next frontier of open-source Large Language Models (LLMs) developed by Meta. It is going to be a game changer for adoption and commercialization because of its comparable performance with much larger models and its permissive open-source license that allows its use and distribution in commercial applications. You can try Llama 2-7B and Llama 2-13B on IPU at no cost via the Paperspace free tier, using a Graphcore IPU-Pod4 system. This is a great way to get started, and to get the performance you need, you can scale up to paid IPU-Pod16 systems for faster inference. Also introduced is fine-tuning for another powerful and efficient LLM – Flan-T5 XXL (and Flan-T5 XL, its smaller 3B-parameter relative) – on Graphcore IPUs.

Tencent releases AI model for businesses as competition in China heats up – Chinese tech giant Tencent launched its AI model “Hunyuan” for business use. The news comes days after Baidu revealed a slew of AI-powered applications on Tuesday in the wake of more supportive regulation. Tencent said it had been internally testing its Hunyuan AI model on advertising and fintech.

OpenAI will host a developer conference — its first ever — on November 6, the company announced. At the one-day OpenAI DevDay event, which will feature a keynote address and breakout sessions led by members of OpenAI’s technical staff, OpenAI said in a blog post that it’ll preview “new tools and exchange ideas” — but left the rest to the imagination.

AI Insight Forums – The US Congress is heading back into session, and it is hitting the ground running on AI. We’re going to be hearing a lot about various plans and positions on AI regulation in the coming weeks, kicking off with Senate Majority Leader Chuck Schumer’s first AI Insight Forum on Wednesday. This and planned future forums will bring together some of the top people in AI to discuss the risks and opportunities posed by advances in this technology and how Congress might write legislation to address them. The first of nine 6-hour congressional forums, on Wednesday, will host Sundar Pichai, Mark Zuckerberg, Sam Altman, Elon Musk, Satya Nadella, Jensen Huang, and others to help define the U.S. position on AI.

Sign up for the free insideBIGDATA newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideBIGDATANOW
