The product is thoroughly pizzled.
OpenAI’s release of ChatGPT has gotten a lot of attention lately, and we’ve written about it here at Vulcan Cyber® as well. While it’s certainly a fascinating application of their underlying GPT-3 engine, it’s unlikely to be the major security threat some have made it out to be. At least ChatGPT itself doesn’t represent an existential threat. While it can certainly be used for some social engineering tasks as is, its ability to create code – including code that exploits vulnerabilities – is more akin to the work of a programming student than that of an experienced programmer.
The real security implications of ChatGPT
Of course, as we’ve said, ChatGPT’s real threat is in social engineering. Or, perhaps more accurately, the GPT-3 engine behind the publicly accessible application has some real potential for social engineering. The threat isn’t just in its ability to create plausible hooks for a range of phishing attacks, or scripts for conversations; it’s that all of this can be done programmatically.
For example, a threat actor could take a list of email addresses and, with some relatively simple coding, put together a set of scripts that gather the real name associated with each address. They could then use social media and other sources to learn more about each target – who they work for, where they’re located, where they went to school, etc. Knowing the industry their target works in, they could come up with some broad ideas for a hook. With all that in hand, they could ask the AI for email text that weaves in everything they’ve gathered and is tailored to elicit the response they want.
Yes, threat actors can, and do, do all of that now. Some relatively well-known spear-phishing attacks involved a lot of research on the target to create a suitable hook. The difference here is that the whole thing could be done at scale, and quickly. Add in a bit of machine learning to show which hooks are most likely to evade spam filters, and now we have customized phishing attacks at a scale we didn’t have before.
Or at least, we could have.
New tools like GPTZero can already identify AI-generated text with decent certainty. Much of the initial discussion focused on academia, but there are some solid use cases for something like it in cyber security. We’ll likely see email security companies adopt similar techniques to help recognize malicious email soon, at least where they don’t already deploy ML techniques to identify spam and phishing.
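For the curious, here’s a minimal sketch of the basic idea this class of detectors tends to build on: score how statistically predictable a piece of text looks to a language model, and treat suspiciously “smooth” text as likely machine-generated. This is an illustration, not how any particular product works. It assumes the Hugging Face transformers and torch packages, uses GPT-2 purely as an example scoring model, and the threshold and function names are hypothetical and uncalibrated.

```python
# Perplexity-based heuristic for spotting machine-generated text (sketch only).
# Assumes: pip install torch transformers. GPT-2 and the threshold are
# illustrative; real detectors combine this with burstiness and other signals.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return how 'predictable' the text is to the scoring model (lower = smoother)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return float(torch.exp(out.loss))

def looks_machine_generated(text: str, threshold: float = 30.0) -> bool:
    # Machine-generated text tends to score lower perplexity than human writing.
    # The cutoff here is a placeholder, not a calibrated value.
    return perplexity(text) < threshold

if __name__ == "__main__":
    sample = "Dear customer, your account requires immediate verification to avoid suspension."
    print(perplexity(sample), looks_machine_generated(sample))
```

An email security pipeline could run a check like this alongside its existing spam and phishing classifiers, using the score as one more feature rather than a hard verdict.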
Can ChatGPT really write code?
The biggest fear, and the most hyped, is ChatGPT’s ability to write code. As I’ve mentioned in earlier blogs, I think the fear is overstated given the nature of the GPT-3 engine and ChatGPT itself. While it’s great at holding passable conversations, it’s not designed or optimized for generating code. If you’ve played with it yourself, you’ll see that it can deliver code that works, but that code isn’t especially sophisticated. Any modern anti-virus/anti-malware solution will spot ChatGPT’s code and crush it before it has a chance to execute. It’s not sophisticated, clever, or sneaky enough to get past them.
The future of malware
The challenge with malware isn’t going to come from a conversational AI like ChatGPT. It will come from threat actors leveraging machine learning techniques specifically to defeat existing AV solutions. This is happening now, with malware authors running multiple iterations of their code against multiple AV platforms to help them determine which techniques are effective and which ones aren’t. While the machine learning concepts are the same, the details are different between this kind of adversarial AI and a conversational AI like ChatGPT.
We aren’t yet to the point of AI searching for vulnerabilities and writing malware on its own, while defensive AI constantly updates itself to identify and stop new malware without anyone organic in the loop. We are heading in that direction, but we’re not there yet.
After all, there is an entire range of existing tools that can help developers create and optimize code. It’s not a huge stretch to combine those with other AI tools that can identify vulnerabilities, iterate code, automate testing, and perform the other functions that pull it all together. I doubt anyone would be surprised to find out that it has already happened in a government-sponsored lab somewhere.
Here at Vulcan Cyber we are focused on cyber security, and most of what we have written about ChatGPT, and the potential risks created by conversational AI, relates to that field. This post has been focused on security as well, which raises the question of the title – “The product is thoroughly pizzled.”
The impact of AI
If it wasn’t obvious, there are a lot of SciFi fans at Vulcan Cyber, and that particular reference is to the 1955 science fiction story “Autofac” by luminary author Philip K. Dick. The story was (somewhat loosely) adapted into a one-hour episode of Philip K. Dick’s Electric Dreams, and its theme is the downside of automation taken to the extreme.
How does this relate to cyber security, you ask? It’s part of the broader impact of artificial intelligence across the rest of industry, alongside its security implications. We have seen AI used in everything from the call trees we navigate when we phone a business, to the way Netflix and YouTube recommend videos for us to stream next, to Amazon’s product recommendations. ChatGPT has already made an impact in academia, where students are having it write essays for them, and in business, where people are using it to write reports or even articles that appear in the press. In the art world, it has spurred controversy, with tools like Stable Diffusion creating AI art – and promptly getting embroiled in copyright lawsuits that may alter the landscape without addressing any of the deeper implications of the underlying technology.
Artificial intelligence in its various forms presents a range of technical challenges, which we’ve discussed here, but also has social, cultural, and economic implications that will almost certainly have far reaching effects as the technology evolves.
The fully automated future
For example, let me present a possible scenario that is not, at least conceptually, that far from what the state of the art can do now. Most of the technical pieces of this scenario are already in place. It’s just a natural extension of artificial intelligence, autonomous vehicles, additive manufacturing (i.e., 3D printing), and robotics.
Let’s say you are looking for a new vehicle. You want a new car, but you want it to be the very expression of your own personality, preferences, and whims. Getting the car of your dreams is super easy. Barely an inconvenience. You log into your favorite car creation site and start a conversation with the generative AI that’s behind the scenes running the show. You tell it what you want, describing your perfect car in great detail, and it uses its back-end vehicle generator to iterate through several versions before you settle on just the one you want.
The order is passed first to the engineering AI that designs the mechanicals underneath, with the power, suspension, etc., tuned to your specifications. The finished design is sent to an automatic factory, akin to the Autofac of the aforementioned story or the automated car factory from the movie Minority Report (also based on a PKD story), where within hours a set of 3D printers working in plastics, metals, and other materials produce the parts, which are then assembled by a team of mechanical arms and humanoid robots.
Your now-finished car is loaded onto an automated lorry (the British term for truck just sounds more appropriate here) and driven autonomously to your home, where it is unloaded and left waiting for you in the garage.
All of this was accomplished without a human hand anywhere in the process. The raw materials were acquired by automated machines and robots, then turned into building materials in automated refineries, forges, and the like.
This is amazing. It’s not quite the Replicators from a Federation starship, but it has the same effect. So, all good, right?
Right?
The worst-case scenario
While this could be a very attractive future, I want to explore something of a worst-case scenario. I don’t think this will happen, but it’s a natural extension of what could happen given the current political, financial, and cultural situation.
For example, in our worst-case scenario, the technology exists to order your dream car and have it delivered, but there’s no way you can afford it. Those design jobs? Gone. Manufacturing? Gone. Engineering? Gone. Transportation? Gone. Mining, smelting, refining? All gone. Each taken over by AI control and robotics. The same could happen to most other forms of manufacturing, sales, etc.
We already shop online, but warehouse jobs would be replaced by robots with automated delivery. Want to get food from your favorite fast-food restaurant? A kiosk takes your order, robots prepare the food, and they deliver it to your table. Even the farming is automated, with the meat and other ingredients processed by machine and delivered to the restaurant by automated transportation.
You want to enjoy the convenience of this automated future, but you can’t afford to. Because there are no jobs for humans anymore. The only people who can afford it are those who own a piece of the automated infrastructure. In this worst-case scenario, the benefits of full automation are concentrated in a tiny fraction of the population, while the rest get, well, whatever is left.
Even though the current economic climate suggests some people would very much like to see it go that way, the reality will almost certainly be much less severe. And from the perspective of cyber security, we can expect that even with more and more AI and automation appearing in our space, there will always be a place for human intelligence and intuition.
Taking action on AI-security threats
Artificial intelligence is already affecting society across a broad spectrum, from the softest arts to the hardest sciences. From the perspective of cyber security and risk management, we have already seen its impact and will continue to do so as threat actors find new and creative ways to abuse these emerging technologies, and defenders use the same tools to thwart the new attacks.
We don’t expect the worst-case scenario to happen, but the changes to the cyber-risk landscape are very real and are happening very quickly. Those who are proactive stand the best chance of staying secure.
As Captain Jack said, the 21st century is when everything changes, and you’ve got to be ready.