ChatGPT has gone far beyond being something merely “fun”. The popular AI chatbot’s reach and capabilities are growing quickly, and with that technology comes enormous risk. It’s more important than ever for cybersecurity professionals to understand the most common ways AI can be used to wreak havoc, and to develop protocols now to counter the bad actors wielding it.

 

AI Phishing Scams

We’re all familiar, to some degree, with how phishing scams operate: they generate messages that look like legitimate requests from a familiar, “safe” source but actually lead unsuspecting users to click malicious links or share sensitive information with a harmful third party. Today, most users with some internet savvy (or basic training provided through their institution) can spot the majority of phishing schemes, especially since they’re often accompanied by tell-tale “red flags,” even subtle ones, that give them away.

With ChatGPT, however, those red flags could become a thing of the past. Experts in Europe have sounded the alarm that ChatGPT and similar AI bots could eliminate one of the most obvious and commonly used lines of defense against phishing emails: spelling and grammatical errors. Slight slip-ups in language are often a dead giveaway that an email is not legitimate, but these bots let hackers generate grammatically correct, official-sounding emails that mimic human tone and cadence, even in languages other than the hackers’ own.

There is another, more personalized way in which AI could cause more people to fall victim to phishing scams. “Spear-phishing” is a particularly targeted form of phishing, where bad actors attempt to get specific information from specific victims. ChatGPT could make it easier for these individuals to craft tailored, believable emails that are nearly indistinguishable from legitimate messages the recipient might expect to receive. These emails could trick even phishing-savvy individuals, since they don’t look like the types of messages those users have been trained to spot and report. AI-detector technology will have to evolve (and be properly deployed) alongside AI tech itself in order to keep up and protect teams.
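Since flawless language can no longer be treated as a sign of legitimacy, defensive tooling has to lean on signals the text itself can’t fake. As a rough illustration (not any vendor’s product), the Python sketch below flags an email when the links it contains point to domains that don’t match the claimed sender’s domain; the function names and the sample message are hypothetical.

```python
import re
from urllib.parse import urlparse

# Hypothetical heuristic: flag emails whose embedded links point to domains
# that don't match the sender's domain. A toy illustration, not a production
# phishing filter.
URL_PATTERN = re.compile(r"https?://[^\s\"'>]+")

def link_domains(body: str) -> set[str]:
    """Extract the host of every URL found in the email body."""
    domains = set()
    for url in URL_PATTERN.findall(body):
        host = urlparse(url).hostname or ""
        domains.add(host.lower())
    return domains

def looks_suspicious(sender_address: str, body: str) -> bool:
    """Return True if any linked domain fails to match the sender's domain."""
    sender_domain = sender_address.rsplit("@", 1)[-1].lower()
    return any(not d.endswith(sender_domain) for d in link_domains(body))

if __name__ == "__main__":
    email_body = "Your invoice is ready: https://billing.example-payments.ru/view"
    print(looks_suspicious("accounts@example.com", email_body))  # True
```

Real mail filters combine many such signals (sender authentication, link reputation, attachment analysis); the point here is simply that structural checks keep working even when the prose is flawless.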

 

AI-Generated Malicious Code

Although safeguards are supposedly in place to prevent AI from producing hateful or harmful content, plenty of people are already finding loopholes to twist the chatbots to their own ends. Cybersecurity watchdogs have spotted hackers on the dark web discussing how to manipulate AI into recreating malware and attack code. More is sure to come, as AI bots like ChatGPT make it easier for malicious individuals and organizations to devise and test ways to get malware onto their targets’ devices.

Hackers may also be able to use AI to produce polymorphic malware, which mutates regularly to evade detection, and to generate code that grows steadily harder to spot. Remember, AI continues to “learn” as more people use it and add input to the library of “knowledge” it draws from. The more people use ChatGPT and other AI tools for coding, the easier it may become for bad actors to nudge them toward making harmful actions even easier. Cybersecurity experts need to be equally adept with ChatGPT themselves in order to defend effectively against the malware and other threats these bots generate.
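To see why polymorphism undermines signature-based defenses, consider this minimal Python sketch. It is purely illustrative, using harmless strings in place of real payload bytes: two functionally identical samples that differ only by junk padding produce completely different hashes, so a defense keyed to a fixed hash signature misses the mutated copy.

```python
import hashlib

# Illustrative only: harmless strings stand in for payload bytes.
# A "signature" here is just the SHA-256 of a known-bad sample.
original = b"do_the_same_thing()"
mutated  = b"do_the_same_thing()  # junk_padding_42"  # same behavior, new bytes

known_bad_signatures = {hashlib.sha256(original).hexdigest()}

def matches_signature(sample: bytes) -> bool:
    """Static, hash-based check -- the kind of check polymorphic code evades."""
    return hashlib.sha256(sample).hexdigest() in known_bad_signatures

print(matches_signature(original))  # True  -- the known sample is caught
print(matches_signature(mutated))   # False -- a trivial mutation slips past
```

This is why defenders increasingly pair static signatures with behavioral and heuristic detection, which looks at what code does rather than what its bytes happen to be.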

 

Deepfakes and Market Manipulation

One of the most concerning applications of AI and related technologies is the ability to create deepfakes, in which real people appear to say or do things they never actually said or did. Cybersecurity experts are already worried about how ChatGPT and similar tools could be used to manipulate not just individuals, but organizations and even stock markets.

MIT Sloan Management Review suggests a few potential scenarios, including:

  • AI being used to impersonate the voice of an executive and pressure employees to transfer money or otherwise “break rules.”
  • AI being used to poison the data in a system.
  • AI-generated deepfakes being used to leak fake but damaging “information” to the public, leading to a hit to reputation and stock value.

With today’s social media reach, it’s all too easy for AI-generated disinformation to flourish. In fact, it’s already happened. On May 22, 2023, an image that appeared to show black smoke billowing from a serious explosion near the Pentagon began circulating on social media. No legitimate media outlet corroborated the events depicted, and viewing the image for more than a few seconds revealed some of the common flaws of AI imagery, such as sections of the “photo” that warp and blur into each other. Still, even that brief moment of uncertainty was enough to cause an immediate drop in the stock market. Imagine those panics over and over again, driven by “leaks” of AI-created deepfakes designed to take out corporate competitors, embarrass a target of revenge, or harm the economic system as a whole.

Right now, many AI-generated images are readily identifiable. They have an “uncanny valley” feel to them, along with often-humorous flaws like extra limbs or misshapen fingers. As the technology matures, however, AI-generated content will likely become harder to distinguish with the naked eye. That’s why it’s so important for cybersecurity professionals to stay aware of these issues and assemble top teams who are ready and able to develop the strategies needed to protect company assets and public information resources.
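While detection tools mature, one practical mitigation is provenance checking: before acting on a sensational image or clip, verify that the file matches a checksum (or signed manifest) published by the purported source. The Python sketch below shows the bare-bones version of that idea; the file name and published hash are placeholders, and real workflows would rely on signed content credentials rather than a manually shared hash.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a media file in chunks so large videos don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def provenance_ok(path: Path, published_hash: str) -> bool:
    """True only if the local file matches the hash the claimed source published."""
    return sha256_of(path) == published_hash.lower()

if __name__ == "__main__":
    # Placeholder values -- in practice the hash would come from the
    # organization's official channel or a signed content-credential manifest.
    photo = Path("incident_photo.jpg")
    if photo.exists():
        print(provenance_ok(photo, "0" * 64))
```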

By Daniel Midoneck