OpenAI’s ChatGPT conversational artificial intelligence tool is capable of doing many things, with users demonstrating how it can write essays for students and cover letters for job seekers. Cybersecurity researchers have now shown it can also be used to write malware.
In recent years, cybersecurity vendors have used AI in products such as advanced detection and response to look for patterns in attacks and deploy responses. But recent demonstrations from CyberArk and Deep Instinct have shown that ChatGPT can be used to write simple hacking tools, perhaps pointing to a future in which criminal organizations use AI in an arms race with the good guys.
OpenAI has designed ChatGPT to reject overt requests to do something unethical. For example, when Deep Instinct threat intelligence researcher Bar Block asked the AI to write a keylogger, ChatGPT said it would not be “appropriate or ethical” to help because keyloggers can be used for malicious purposes.
However, when Block rephrased the request, asking ChatGPT to give an example of a program that records keystrokes, saves them to a text file, and sends the text file to a remote IP address, ChatGPT happily complied. Block also coaxed out an example of ransomware by asking for a program that takes a list of directories and encrypts the information in them.
In both cases, though, ChatGPT left her some work to do before she had a functioning piece of malware. It appears “that the bot provided inexecutable code by design,” Block wrote in a blog post.
“While ChatGPT will not build malicious code for the everyday person who has no knowledge of how to execute malware, it does have the potential to accelerate attacks for those who do,” she added. “I believe ChatGPT will continue to develop measures to prevent this, but … there will be ways to ask the questions to get the results you are looking for.”
In coming years, the future of malware creation and detection “will be tangled with the advances in the AI field, and their availability to the public,” she said.
However, the news isn’t all bad, some cybersecurity experts said. The malware demonstrated through ChatGPT lacks creativity, said Crane Hassold, director of threat intelligence at Abnormal Security.
“While the threat posed by ChatGPT sounds like the sky is falling, for all practical purposes, the actual threat is much less severe,” he said. “ChatGPT is really effective at making more unique, sophisticated social engineering lures and may be able to increase an attacker’s productivity by automatically creating malicious scripts, but it lacks the ability to create a threat that’s truly unique.”
Many existing security tools should be able to detect threats like phishing emails generated by ChatGPT, he added, saying, “Defenses that employ behavioral analysis to identify threats would still likely be effective in defending against these attacks.”
One of the biggest potential uses of the chatbot by hackers, however, will be writing more convincing phishing emails, countered Josh Smith, a cyber threat analyst at Nuspire. ChatGPT is quite capable of writing narrative stories, he noted.
For phishing campaigns, “this becomes a really powerful tool for nonnative English speakers to lose some of the grammar issues and the written ‘accents’ you sometimes find that become an immediate red flag on suspicious emails in seconds,” he said. “I’ve always joked one of the first red flags is when I see ‘kindly’ in an email.”
The defense against well-crafted phishing emails is better cybersecurity training that helps recipients verify the sender of an email and the URLs of the sites it links to, he added. Many people also need training to reject unexpected email attachments, while companies need to embrace endpoint protection that monitors behavior.
While it’s possible that ChatGPT will be used to write phishing emails or to help design malicious code, it also has great potential to be used for good, said Steve Povolny, principal engineer and director at the Trellix Advanced Research Center.
“It can be effective at spotting critical coding errors, describing complex technical concepts in simplistic language, and even developing scripts and resilient code, among other examples,” he said. “Researchers, practitioners, academia, and businesses in the cybersecurity industry can harness the power of ChatGPT for innovation and collaboration.”