Why ChatGPT is a Cyber Threat to Businesses


Cybersecurity expert explains the dangers behind AI-powered chatbots and shares tips on how to stay safe

With the rise of AI-powered chatbots, businesses must be aware of the risks associated with using ChatGPT technology. The technology can be a double-edged sword, offering automation and potential business gains while also presenting unique risks. Hackers may use ChatGPT to manipulate victims, carrying out more sophisticated attacks and making it harder for users to detect potential cyberattacks. To avoid business losses, organizations must understand the risks and re-evaluate the steps they take.

Growing utilization of ChatGPT as a crime tool 

According to NordVPN research, the number of new posts on dark web forums about the AI tool surged from 120 in January to 870 in February — a 625% increase. 

On dark web forums, the number of threads on ChatGPT rose from 37 to 91 in a month, making bot exploitation one of the most popular topics among dark web surfers. For example, forum discussions included information on how bad actors could program ChatGPT to produce basic malware. Later, the hacker community aimed for more radical action, such as taking over control of the chatbot, creating a malfunction, and thus wreaking havoc. Now anyone on the dark web can obtain expertise on such topics as “How to break ChatGPT,” “ChatGPT jailbreak 2.0,” “ChatGPT – progression of malware,” and “ChatGPT as a phishing tool.” 

ChatGPT and social engineering 

Social engineering attacks such as phishing, smishing, or pretexting can easily be used to steal valuable information. ChatGPT can generate convincing dialogues or scripts that manipulate people into revealing confidential information or performing certain tasks. In addition, the poorly worded emails and grammatical errors that often give scams away can be easily avoided with the use of ChatGPT. This allows people from countries where English is rarely spoken to participate in cybercrime on a greater scale and achieve higher success rates.

Leaking sensitive information 

In March, ChatGPT experienced a data breach. A bug exposed users’ personal information, such as chat logs and credit card details. Such events prove that using this platform creates high risks to privacy. Any kind of information added to the chatbot may be at risk of being leaked, especially when ChatGPT or another AI-powered tool is used for marketing purposes or writing emails. Never forget that ChatGPT improves by using the data users give it, including personal or business information and other important data. If hackers breach the system, businesses will suffer damage.

How to protect your business and employees 

Prevention is always cheaper than a breach. Use the following core principles to protect your company:

  1. Educate your employees. As mentioned above, social engineering attacks are becoming increasingly sophisticated, so employee training needs even more attention. One of the most effective approaches is to organize cyberattack simulation exercises and openly discuss the results. 
  2. Have a business resilience plan. Risk assessment is a must for any company – no matter the size or industry. Once you know (and acknowledge) your risks, create a plan to tackle any challenge that might arise.
  3. Invest in prevention, detection, and threat mitigation. Have quality cybersecurity tools that prevent threats and offer network segmentation, identification, and access management. For threat detection, maintain IDS policies and data integrity checks. In order to mitigate threats, have clear post-mortem analysis and backup policies. 
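The data integrity checks mentioned in step 3 can be as simple as keeping trusted file digests and periodically re-checking them. The following is an illustrative sketch only, not a recommendation of a specific tool; the function and variable names are hypothetical:

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def build_baseline(paths):
    """Record a trusted digest for each file while it is known-good."""
    return {str(p): sha256_of(Path(p)) for p in paths}


def find_tampered(baseline):
    """Return files whose current digest no longer matches the baseline."""
    return [p for p, digest in baseline.items()
            if sha256_of(Path(p)) != digest]
```

In practice the baseline would be stored somewhere attackers cannot modify (e.g. offline or on a separate system) and the check would run on a schedule, with any mismatch feeding into the post-mortem and backup procedures described above.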

ChatGPT is a brilliant technology that helps us automate tasks and increase efficiency. However, because most chatbots are available to everyone, bad actors can use the same technology to conduct sophisticated attacks that the human eye can no longer detect. For example, phishing attacks are now perfectly translated and targeted at specific markets. On the dark web, ChatGPT itself is also a target, which may lead to more data breaches and leaks from this tool in the future. The most important thing for both employees and company CEOs is to stay critical, whether about the information they input into the tool or about company-wide procedures. 
