Decoding the Next Big Thing in AI: The Inevitable Race for AGI and Ethical Responsibility


Since the iPhone’s launch in 2007, the industry hadn’t witnessed a technology adopted this swiftly, until now. This particular innovation drew responses from almost every tech company globally and became a topic in virtually every meeting, whether tech-related or not. Some people call it generative AI, some consider it a chatbot and some think of it as a large language model (LLM). They’re all correct.

No doubt, ChatGPT took the world by storm, pushing the AI race to its peak in 2023.

While companies, big and small, are figuring out what generative AI means for them, governments around the world are also working hard to see how they can govern the technology before it gets out of control. Most notably, President Joe Biden issued an executive order on responsible AI in October 2023, and the EU recently reached a deal on the AI Act.

Looking at 2024, the race to create artificial general intelligence (AGI), which learns and thinks like humans, will accelerate. While the AI robots from Hollywood are still far from reality, the threat is still there, especially the ability to spread misinformation.

The race to develop AGI is happening at a pivotal time, with a presidential election approaching. It took almost four years for Facebook to admit it played a role in the 2016 election. Now that other social media companies have finally acknowledged their role, AI will be even harder to govern.

Let’s take a look at the 2016 election.

What went wrong in 2016?

  1. Sophistication of misinformation campaigns: After developing its strategy in 2014, Russia created fake accounts to support then-presidential nominee Donald Trump and ran successful social media campaigns to sway the election results, according to Time.
  2. Manipulation of platform algorithms: Russia used social media to its advantage, manufacturing trending topics and hashtags that reached a massive audience, to a degree that hurt Hillary Clinton’s campaign.
  3. Reliance on engagement metrics: Sensitive topics trended on social media because there was no governance. The platforms prioritized clicks so they could serve more ads, putting profit above all else.
  4. Limitations in AI’s understanding of context: The algorithms weren’t mounting a calculated attack, but the episode highlighted that only humans understand the intent behind an event and can combine it with the broader environment to capture the nuances of a piece of information. Capitalizing on engagement purely through AI was dangerous, and it will become more dangerous if the world’s only interest is in commercializing AI for profit.

What should we expect in 2024?

The fallout from social media wasn’t just about politics. There are lessons to be learned, and plenty to expect from AI in 2024 and beyond.

  1. It’s good that the government is paying attention, but attention isn’t all you need: The government finally calling for responsible, accessible AI is a step forward. But as with antitrust lawsuits, governments can only act after the fact, and the impact on big tech is minimal. As OpenAI shows, many innovations start at small startups and become wildly popular overnight. While it’s encouraging to see rules and regulations finally taking shape, history tells us that punishment and regulation can only do so much.
  2. Enabling big tech and small startups to do the right thing: The U.S. government sends billions of dollars of aid to foreign countries to advance their defensive capabilities. Can it do the same with AI? For example, can we encourage or require transparency in all chatbots? Some chatbots, like Microsoft Bing and Google Bard, can cite the sources behind their answers, but OpenAI’s models largely cannot. One can argue that a model’s answers are simply the product of its training, but we need to learn from 2016 and stop misinformation from flowing into new media and affecting people’s lives before it’s too late.
  3. Non-profit over for-profit: OpenAI started as a non-profit, but its epic rise has turned it into a nearly $100 billion for-profit company. While we can’t guarantee the popularity of any new program, if funding for responsible AI reached a level similar to the electric vehicle movement, it would be a huge win for small and medium-sized enterprises, giving them more means to survive. Rules and regulations also need to push big tech to adopt these practices; otherwise, it will be hard to survive in the long run.
  4. More people, not just big tech, need to come forward with the right intention: While companies like Microsoft and Databricks are developing copilots to push the code-quality bar higher, the same effort should extend to education and information sharing. Small players often think they don’t have a chance, but with OpenAI’s GPT Marketplace and the ability to fine-tune models with specific instructions, they must do their due diligence to ensure they’re also doing their part. For example, they can ask questions like “Can we ensure that there’s no bias in a marketing campaign ML model?” or “Can we incorporate responsible AI into an ML pipeline, the way data engineers treat data validation as a must-have step?” (A minimal sketch of that idea follows this list.)
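
That last question, treating a bias check like a data-validation step, can be made concrete. Below is a minimal sketch in Python using pandas and the classic four-fifths rule as an illustrative threshold; the column names, toy data and threshold are assumptions for demonstration only, not any regulatory standard.

```python
# Minimal sketch: a fairness gate wired into an ML pipeline the same way a
# data-validation step would be. Column names and the 0.8 threshold (the
# classic four-fifths rule) are illustrative assumptions.
import pandas as pd

def disparate_impact_check(df: pd.DataFrame,
                           group_col: str = "gender",
                           outcome_col: str = "selected",
                           threshold: float = 0.8) -> bool:
    """Pass if the lowest group selection rate is at least `threshold`
    times the highest group selection rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    ratio = rates.min() / rates.max()
    print(f"Selection rates:\n{rates}\nDisparate impact ratio: {ratio:.2f}")
    return ratio >= threshold

if __name__ == "__main__":
    # Toy scoring output from a hypothetical marketing-campaign model.
    scored = pd.DataFrame({
        "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
        "selected": [1,   0,   1,   0,   1,   1,   0,   0],
    })
    # Failing this check would block deployment, just like failed data validation.
    assert disparate_impact_check(scored), "Bias check failed: block deployment"
```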

What’s in President Biden’s executive order?

The executive order covers many things, ranging from regulating companies to democratizing AI. But let’s analyze the safety aspect of it and how organizations can take advantage of these guidelines to ensure our future generations can use AI responsibly.

1. Require developers of the most powerful AI systems to share their safety test results and other critical information with the U.S. government.

Since the rise of OpenAI, numerous powerful open-source models have been released, like Meta’s Llama 2, MosaicML’s MPT and the Technology Innovation Institute’s Falcon. Not only do the teams behind these models publish their source code, but they also publish research papers or training details on the models.

While it’s unknown whether OpenAI has shared or will share safety test results (it does have a moderation API endpoint, but it’s a black box) and other “critical information” with the U.S. government, or whether there are enough experts to review them, open-source models are transparent from day one.
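
For context, that moderation endpoint is publicly callable even though the model behind it is opaque. A minimal sketch with the openai Python package (v1-style client, assuming an OPENAI_API_KEY is set in the environment) looks roughly like this:

```python
# Sketch: call OpenAI's moderation endpoint and read the per-category flags.
# The endpoint says *whether* content is flagged, but the model behind it is
# a black box, which is exactly the transparency gap discussed above.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.moderations.create(input="Example user message to screen.")
result = response.results[0]

print("Flagged:", result.flagged)
print(result.categories)  # per-category booleans: hate, harassment, violence, ...
```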

For instance, Meta’s Llama 2 research paper devotes nine pages to the safety measures the team took, such as Safety in Pretraining, Safety Fine-Tuning, Red Teaming and Safety Evaluation.

By no means will nine pages address all AI safety issues, but it’s a good start. Researchers with access to resources should validate the claims from these papers and continue to improve them. Opening up the source code and research papers is a baby step towards sustainable, safe AI. 

2. Develop standards, tools and tests to help ensure that AI systems are safe, secure and trustworthy.

Different LLMs expose different ways of interacting with the model. Hugging Face maintains a comprehensive list of how to interact with the industry-leading open-source LLMs. Having said that, there isn’t a standard for developing an LLM or any other AI system, so it won’t be easy to create standard tooling to govern these models. And U.S. companies do not contribute all, or even a majority, of the open-source and commercial models.
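
To make that concrete, here is a rough sketch using the Hugging Face transformers library: the same conversation is rendered into very different prompt formats by different models, which is part of why standard tooling is hard. The model IDs are just examples, and some are gated behind a license acceptance on Hugging Face.

```python
# Sketch: render the same chat messages with two different models' chat
# templates to show how interaction formats diverge across open-source LLMs.
from transformers import AutoTokenizer

messages = [
    {"role": "user", "content": "Summarize the AI executive order in one sentence."}
]

for model_id in ["meta-llama/Llama-2-7b-chat-hf", "HuggingFaceH4/zephyr-7b-beta"]:
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    print(f"--- {model_id} ---\n{prompt}\n")
```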

Nevertheless, MosaicML has open-sourced the scripts and data used to evaluate its MPT models. Still, with only four datasets used across the 58 criteria, it only scratches the surface.

From the work that prominent players are doing, we can see that safety isn’t a priority: there is more interest in jailbreaking the models than in guardrailing them.
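
Guardrailing doesn’t have to start with anything sophisticated. The sketch below is deliberately crude (real systems use trained safety classifiers or moderation models, not keyword lists), but the control flow of screening input before the model ever sees it is the part worth copying; the llm_call hook is a stand-in for whatever endpoint you actually use.

```python
# Deliberately crude input guardrail: screen the prompt first, only then call
# the LLM. Real guardrails use trained classifiers, not regex blocklists.
import re

BLOCKED_PATTERNS = [
    r"\bbuild (a|an)?\s*bomb\b",
    r"\bsynthesize\b.*\b(toxin|nerve agent)\b",
]

def is_disallowed(prompt: str) -> bool:
    return any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PATTERNS)

def guarded_generate(prompt: str, llm_call) -> str:
    """llm_call is whatever function actually hits your model endpoint."""
    if is_disallowed(prompt):
        return "I can't help with that request."
    return llm_call(prompt)

if __name__ == "__main__":
    fake_llm = lambda p: f"[model answer to: {p}]"
    print(guarded_generate("How do I build a bomb?", fake_llm))
    print(guarded_generate("Explain red teaming in one line.", fake_llm))
```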

3. Protect against the risks of using AI to engineer dangerous biological materials.

It’s true that we all worry about biological weapons. The world has done a good job of not deploying them thus far, and we should keep it that way. However, chemical and biological interactions aren’t known until they’re discovered, which makes the constructive goal clear: leverage AI to develop treatments for infectious diseases like COVID. And while COVID itself wasn’t engineered, we cannot guarantee that terrorists operating in lawless regions won’t try to create COVID-like bioweapons with the help of LLMs. This will always be a cat-and-mouse game to see who can do it faster.

4. Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.

When President Biden signed the executive order, the GPT Store was still just an idea at OpenAI. With its release, and assuming it becomes as successful as Apple’s App Store, tackling misinformation will require much greater effort. In other words, we won’t be governing a handful of big companies but people all around the world. Who truly owns the information provided to the GPTs? It’s similar to social media: the platforms will monetize the information you provide, but they won’t take responsibility for the harm that information can cause.
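
On the “authenticating official content” side, the core idea can be sketched with nothing but the Python standard library: an issuer signs a piece of content, and anyone sharing the key can later verify it wasn’t altered. Real provenance efforts (public-key signatures, watermarking of AI-generated media) are far more elaborate, so treat this purely as an illustration of the concept.

```python
# Sketch: HMAC-based content authentication. The issuer signs official content;
# verification detects any tampering. A real deployment would use public-key
# signatures so verifiers don't need the secret, plus proper key management.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-real-key-management-system"  # demo assumption only

def sign(content: bytes) -> str:
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(content), signature)

official = b"Official statement issued by the agency on 2024-01-15."
tag = sign(official)

print(verify(official, tag))                 # True: content is untampered
print(verify(official + b" (edited)", tag))  # False: content was altered
```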

5. Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.

Coding copilots today are still very immature. While they can suggest and debug code, they still lack the ability to comprehend large, advanced programs, not to mention the dependencies involved in installing and running them. Open-source programs also run on top of proprietary software, and attacks often exploit zero-day vulnerabilities in commercial operating systems like Windows or iOS. This will always be a niche field for security researchers.

6. Order the development of a National Security Memorandum that directs further actions on AI and security.

It’s worth noting that governments often license software or AI development to third-party vendors; the Pentagon’s contracts with Microsoft and AWS have been debated for a long time. Like gun rights, this topic will forever be contested. The safest measure is not to develop weapons of mass destruction at all, in the spirit of the UN’s nuclear treaties. Without something of that magnitude, guidelines will always stay on paper.

Now let’s look at how the EU is governing AI.

The EU AI Act is less comprehensive than the U.S. executive order. Its primary goal is to categorize AI applications into different risk categories: 

  • Unacceptable risk 
  • High risk
  • Limited risk 
  • Minimal risk
  • General-purpose

Because the EU is famous for regulating large corporations, the law is a welcome one. However, there is currently no standard for assigning applications to these risk categories, only descriptions of each. Moreover, influential applications like ChatGPT or the GPT Store can cause more harm than “unacceptable risk” applications with no market share. Certification can be done through self-assessment or by a third-party vendor, which means many startups will simply self-assess because they don’t have the funding to hire independent consultants.
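
A startup’s self-assessment could start as something as simple as a checklist in code. The sketch below is a toy: the five tiers come from the Act’s categories listed above, but the yes/no questions and their mapping are hypothetical illustrations, not the Act’s actual criteria.

```python
# Toy self-assessment sketch. The tiers mirror the EU AI Act's categories; the
# questions and their ordering are illustrative assumptions, not legal criteria.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "Unacceptable risk"
    HIGH = "High risk"
    LIMITED = "Limited risk"
    MINIMAL = "Minimal risk"
    GENERAL_PURPOSE = "General-purpose"

def self_assess(uses_social_scoring: bool,
                affects_safety_or_rights: bool,
                interacts_with_humans: bool,
                is_foundation_model: bool) -> RiskTier:
    if uses_social_scoring:
        return RiskTier.UNACCEPTABLE
    if is_foundation_model:
        return RiskTier.GENERAL_PURPOSE
    if affects_safety_or_rights:
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a customer-support chatbot built by a small startup.
print(self_assess(uses_social_scoring=False,
                  affects_safety_or_rights=False,
                  interacts_with_humans=True,
                  is_foundation_model=False))  # RiskTier.LIMITED
```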

With AI advancing by the week, a 2025 enforcement timeline is far-fetched. That’s why enforcement needs to happen at the corporate level, and safety investment should draw more attention.

When we look back at 2016, we tend to blame the technology companies. But things go viral quickly because of the fear of missing out, and the generative AI trend is no different. Tools, and soon regulations, are available to ensure AI is safe. But if companies, small or large, don’t adopt these practices, our society will be vulnerable to bad actors again, and the damage will spread like COVID-19. Only if we prioritize humanity over profit can we avoid using these new tools to make money regardless of the consequences.

About the Author

Jason Yip is a Director of Data and AI at Tredence Inc. He helps Fortune 500 companies implement Data and MLOps strategies on the cloud, serves on the Product Advisory Board at Databricks and is a Solution Architect Champion, placing him among the top 1% of Databricks experts worldwide. Jason also uses his programming skills for philanthropy: he started lite.chess4life.com, an online chess learning platform used by hundreds of thousands of grade school students across the USA, and is a founding member of the Robert Katende Initiative, the organization behind the story of the Disney movie Queen of Katwe.
