
The developments in AI that raise security concerns, and more: first officer's blog - week 35

Microsoft's update on their text-to-speech voice system, ChatGPT, and more. Here are the latest stories from the cyber risk world.

Mike Parkin | January 23, 2023

The ongoing voyages of the Federation Support Ship [REDACTED] 

First Officer’s log, Terrestrial date, 20230123. Officer of the Deck reporting.  

After our unexpected diversion to Starbase 998, we finally arrived at the planet [REDACTED]. The captain ordered maximum warp to make up for the lost time that had put us slightly behind schedule. Fortunately, the intermittent issues we’d been brought over to fix had remained intermittent, with only one additional instance occurring during our delay. 

The administration that oversaw [REDACTED]’s planetary communications infrastructure had been experiencing apparently unrelated problems for several months at least, but they were having trouble drawing a pattern out of the events. The incidents seemed to happen at random, in random areas of the planet-spanning network, affecting random systems, seemingly at random intervals, which led their senior Minister of Communications to believe that the events were anything but random. 

With the [REDACTED] in orbit, Lieutenant [REDACTED] and her team beamed down to the communications ministry on the planet’s surface, while several other teams set out to help investigate some of the previous incidents and, more importantly from our perspective at least, to see what sort of data the different sensors had collected on them. 

It was common for us to be called in after a planet or station suffered a network anomaly. Our specialty was cross-system communications and deploying tools that could coordinate across them, and the fact that existing systems didn’t always play nicely with each other, or speak the same language as the ones we’d dealt with before, was often why we were brought in.  

Sometimes it was because the people in charge recognized the problem and were acting proactively. Sometimes it was because someone else told them they had a problem, and if they didn’t fix it they would be looking for a new position – proactively. 

So far, it looked like what we were facing on [REDACTED] would be a straightforward integration with their existing systems, and a bit of training to help them get the most out of it. 

At least we hoped it would. 

Can you hear me now? Are you sure it’s me? 

What happened 

Recently, Microsoft announced that their text-to-speech voice system is not only able to read text with near-human auditory quality, but will also soon be able to mimic any individual’s voice using as little as three seconds of recorded input. While there are many legitimate and worthwhile uses for this capability, it also raises security concerns, given the ability to imitate virtually anyone. 

Why it matters 

Deepfakes have become more of an issue over the last several years, with the technology maturing to the point where it can be difficult to tell the difference between a computer-generated facsimile and the real thing. The old tagline “is it live, or is it Memorex?” has become “is it live, or is it really good CGI?” 

The ability to mimic anyone based on only a few seconds of recorded audio is both impressive and potentially concerning. Systems that rely on voice recognition for security are now faced with a new challenge they weren’t designed to deal with. While it may be easy to adapt them to quickly recognize a synthesized voice, it may not be so easy for humans. There are countless scenarios where you “know” who you’re talking to based simply on their voice. But now, with a little carefully placed background noise, will you be able to trust your own ears? 

What they said 

Lifewire: https://www.lifewire.com/why-experts-say-ai-that-clones-your-voice-could-create-privacy-problems-7096294 

 

They always find a way – and we wish they wouldn’t 

What happened 

Microsoft’s removal of macros from Office, at least temporarily, has had threat actors shifting more actively to LNK (link) files as an attack vector. These files are often used simply as shortcuts, but they can carry additional information that makes them very versatile. Threat actors have learned to leverage that capability and have found ways to modify LNK files so that they bypass normal protections. However, researchers have found ways to use the metadata in these files, or the lack thereof, to help identify threat actors and malicious files.  

Why it matters 

It’s been said that life finds a way, or, in this case, threats find a way. When one door closes, another opens, maybe? Regardless of the metaphor, the point is that threat actors will always find creative ways to get their malicious payloads onto target systems. Often, as is the case here and with macros before, by finding creative abuses of existing functionality. Here, they’re shifting to LNK files because macros are no longer consistently available for them to abuse. 

On one hand, it should be relatively easy for security applications to identify malicious LNK files and block them before they can deliver a payload. But on the other hand, it shows how quickly attackers can adapt to a changing threat landscape. 
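As a concrete, if simplified, illustration: the LNK format is publicly documented in Microsoft’s [MS-SHLLINK] specification, so even a short script can read a shortcut’s header and flag traits attackers commonly abuse. The sketch below is a minimal, hypothetical triage helper, not the detection technique the researchers describe; the specific flags it checks and the conclusions it draws from them are assumptions chosen for illustration.

```python
import struct
import sys
from pathlib import Path

# Fixed values from the published Shell Link (.lnk) header layout ([MS-SHLLINK]).
LNK_HEADER_SIZE = 0x4C
LNK_CLSID = bytes.fromhex("0114020000000000c000000000000046")

# A few LinkFlags bits (DWORD at offset 20) that are interesting for triage.
HAS_LINK_TARGET_ID_LIST = 0x00000001
HAS_ARGUMENTS = 0x00000020
HAS_ICON_LOCATION = 0x00000040


def triage_lnk(path: str) -> list[str]:
    """Return a list of simple, heuristic red flags for a shortcut file."""
    data = Path(path).read_bytes()
    findings: list[str] = []

    if len(data) < LNK_HEADER_SIZE:
        return ["file is too small to contain a valid shell link header"]

    header_size = struct.unpack_from("<I", data, 0)[0]
    clsid = data[4:20]
    link_flags = struct.unpack_from("<I", data, 20)[0]

    # A file posing as a shortcut without the standard header is suspicious by itself.
    if header_size != LNK_HEADER_SIZE or clsid != LNK_CLSID:
        return ["header does not match the standard shell link format"]

    # Legitimate shortcuts rarely need command-line arguments; attackers often
    # hide script-interpreter invocations there.
    if link_flags & HAS_ARGUMENTS:
        findings.append("shortcut passes command-line arguments to its target")

    # A custom icon is frequently used to make a shortcut look like a document.
    if link_flags & HAS_ICON_LOCATION:
        findings.append("shortcut overrides its icon")

    # Most ordinary shortcuts carry a target ID list; its absence is unusual.
    if not link_flags & HAS_LINK_TARGET_ID_LIST:
        findings.append("no link target ID list present")

    return findings


if __name__ == "__main__":
    for finding in triage_lnk(sys.argv[1]):
        print(f"[!] {finding}")
```

At best, something like this surfaces candidates for closer inspection; real detection relies on far richer metadata and context than the header alone. 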

What they said 

SC Magazine: https://www.scmagazine.com/news/malware/threat-actors-said-to-shift-from-malicious-macros-to-lnk-files 

 

GLaDOS versus R. Daneel Olivaw? 

What happened 

OpenAI’s ChatGPT has been receiving a lot of attention recently from the press and from security professionals worldwide. There are concerns that it lowers the barrier to entry for creating malware and can be leveraged for advanced social engineering and phishing campaigns. While the technology points to some intriguing possibilities, both as a threat and as a defense, it is hard to say how this, or other AI technologies, will play out. 

Why it matters 

The recent press coverage of ChatGPT is a mix of thoughtful analysis and some… not so thoughtful analysis. Some of it is just hype, while some leans toward borderline panic. The reality is that ChatGPT has some fascinating capabilities and, depending on how it’s used, can be a very useful tool or a genuine threat. Is the code it writes sophisticated enough to get past existing defenses? Not really. Is it advanced and innovative? Not really. Is it comparable to what you see from a programming student cutting and pasting examples together to form a working program? Yes. Yes, it is. 

That’s not to say it won’t improve over time, or that other, dedicated AI engines won’t be able to take a vulnerability description and turn it into a working exploit. It’s also not saying that a dedicated AI won’t be able to iterate across multiple versions until it has one that does bypass existing defenses. In fact, there’s some evidence that this is already being done.  

The field is rapidly evolving, and there are at least as many people working to use these advanced AIs (stop laughing at that, HAL) for good as there are people trying to bend them towards malicious ends. 

What they said 

DICE Insights: https://www.dice.com/career-advice/chatgpt-raises-cybersecurity-and-a.i.-concerns 

What we said 

Vulcan’s own take 

ChatGPT: An opportunity or a threat? Part 1

ChatGPT: An opportunity or a threat? Part 2

