ChatGPT has generated a lot of buzz over the past few months, but not all of it has been positive. Now, someone claims to have created powerful data-stealing malware using nothing but prompts to ChatGPT.
Who is responsible for this malware?
Forcepoint security researcher Aaron Mulgrew shared how he created the malware using OpenAI's generative chatbot. Although ChatGPT has safeguards designed to stop people from asking it to write malware code, Mulgrew found a loophole.
He prompted ChatGPT to write the code one function at a time, in separate requests. Once he had pieced the individual functions together, he realised he had an undetectable data-stealing executable on his hands.
This is incredibly alarming because Mulgrew created highly dangerous malware without a team of hackers and without writing a single line of code himself.
What does the malware do?
The malware disguises itself as a screensaver app that auto-launches on Windows devices. Once on a device, it combs through all kinds of files, including Word documents, images and PDFs, hunting for data worth stealing. When it finds any, it breaks the data into smaller pieces and hides those pieces inside other images on the device, a technique known as steganography. The doctored images are then uploaded to a Google Drive folder, an exfiltration route unlikely to trigger alerts because traffic to a widely trusted service like Google Drive rarely does. To top it off, Mulgrew was able to refine and harden the code against detection using simple follow-up prompts to ChatGPT.
What does this mean for ChatGPT?
Although this was all done as a private test, it is alarming to see what can be accomplished with ChatGPT. Mulgrew claims to have no advanced coding experience, yet ChatGPT's protections were still not strong enough to block his experiment. Hopefully, those protections are strengthened before a real attacker gets the chance to do what Mulgrew did.