SIEM detection gaps and more: first officer's blog - week 59

SIEM detection failures, DDoS attacks, and more. Here's what got us talking in the world of cyber risk from the past week.

Mike Parkin | July 03, 2023

The ongoing voyages of the Federation Support Ship USS [REDACTED] 

First Officer’s log, Terrestrial date, 20230703, Officer of the Deck reporting.  

When the Captain’s first words after explaining a situation are “that would be a no,” you can be sure the situation itself was not optimal. Or, in this case, something that would be subject to debate. 

The situation in question being the request from the AI known as “Majel” to take up residence in the [REDACTED]’s main computer core. To be sure, the Lieutenant who’d been dealing with the artificial personality wasn’t trying very hard to make a case to allow the AI to take up residence. They were simply doing due diligence and relaying the request. 

We were already well on our way to the next destination on our sweep through some outer worlds, which was one of the reasons the Captain gave for denying the request. Along with possible interference with our ongoing missions. Strain on the ship’s computer core. Legal ramifications involving sentient computers. The apparent affection the personality had for our officer, and half a dozen other valid and relevant reasons. 

The only reason in favor of the AI moving from their current home to join us was “she asked to.” 

“Understood, Sir. But what do I tell the AI?” 

“It seems she likes you, Lieutenant. Just be gentle.” 

The lieutenant looked resigned to his immediate fate, in spite of some good-natured commentary from other crew members who found it sweet that the AI had taken a liking to him. After all, what could go wrong? He would only be rejecting a request from an artificial intelligence – one that was embedded in the main computer of a Federation heavy cruiser. 

No, no foreseeable issues down the line here. 

Survey says . . . 

What happened 

A recent study has shown gaps in the mapping between SIEM detections and adversary techniques identified by MITRE and documented in the ATT&CK framework. After examining thousands of detection rules and more than a million log sources, the study found that SIEMs could detect only 24% of the techniques shown in ATT&CK. The study also indicated that existing solutions ingest enough data to identify over 90%, though existing rules and configurations do not do so. 

Why it matters 

Existing SIEM solutions ingest a lot of data. And by “a lot” I mean millions of datapoints from multiple sources. Most of them come with a set of solid detection rules out of the box and let their customers add more as needed. It’s similar to the kind of data analysis we do in the risk management space, though focused on incidents and events as they occur rather than on the mitigation, remediation, and prevention that are our world. 

The thing with MITRE ATT&CK, though, is that many of the techniques it has identified and mapped don’t really lend themselves to detection by a SIEM. For example, third-party reconnaissance (T1589 through T1591) isn’t easy to identify at all, let alone with a SIEM. After all, how do you tell a partner, customer, or investor doing due diligence from a threat actor doing reconnaissance? 

While the findings are certainly interesting, the impression I have is that A: a lot of organizations need to fine-tune their SIEM deployments, and B: not everyone does a great job of mapping back to ATT&CK. 

And that is a non-trivial endeavor, as our own MITRE Mapper project has shown. 
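The coverage figure in the study boils down to a simple set calculation: tag each detection rule with the ATT&CK technique IDs it covers, then ask what fraction of in-scope techniques have at least one rule. Here is a minimal sketch of that calculation; the rule names, technique scope, and data layout are all illustrative, not taken from any real SIEM or from the study itself.

```python
# Sketch: measuring ATT&CK coverage of a SIEM rule set (hypothetical data).
# Each rule carries the MITRE ATT&CK technique IDs it is mapped to; coverage
# is the fraction of in-scope techniques covered by at least one rule.

def attack_coverage(rules, techniques_in_scope):
    """Return the fraction of in-scope techniques covered by at least one rule."""
    covered = set()
    for rule in rules:
        covered.update(rule.get("attack_ids", []))
    in_scope = set(techniques_in_scope)
    return len(covered & in_scope) / len(in_scope)

# Hypothetical rule set: only one of the four in-scope techniques has a rule.
rules = [
    {"name": "suspicious-powershell", "attack_ids": ["T1059.001"]},
    {"name": "rdp-bruteforce", "attack_ids": ["T1110"]},
]
scope = ["T1059.001", "T1589", "T1590", "T1591"]
print(f"{attack_coverage(rules, scope):.0%}")  # prints "25%"
```

Note how the three reconnaissance techniques drag the number down even though the rules themselves are fine, which mirrors the point above: some ATT&CK techniques simply have no good SIEM signal to map a rule to.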

What they said 

With alarm bells ringing, this got plenty of attention. 

Return of the DDoS 

What happened 

Incidents of Distributed Denial of Service (DDoS) attacks have increased dramatically over the last year, according to recent data. In many cases, these were Layer 7 attacks against applications rather than the more common traffic-flooding techniques seen previously. In some cases, the attacks have been severe enough to impact operations for major organizations including Microsoft and Google. 

Why it matters 

Originally, most DDoS attacks were simple traffic floods which were problematic but more of an annoyance than anything else. Eventually, attackers moved up the stack to deploy denial of service attacks that took advantage of server behaviors to hammer them with high loads rather than just bucketloads of packets. Those can be much harder to deal with, especially when threat actors manage to bypass the common content delivery networks and target the server directly. 

With the geopolitical situation over the last few years, especially over the last 18 months, this kind of attack to disrupt service has become more common. Unlike common criminal attacks that go after a target for cash, DDoS attacks are usually employed to make a statement by interfering with the target’s operations. Though, to be sure, state actors use ransomware and cybercriminal gangs use DDoS attacks. It’s just a matter of agenda. 

There are defenses against DDoS attacks that can help mitigate the kind of “query of doom” attacks that can be much more effective than a common traffic flood. But in many cases, it takes revisiting the application itself and correcting whatever behavior the attackers are leveraging. 
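One of the standard building blocks in those defenses is per-client rate limiting, which caps how often any one source can hit an expensive endpoint regardless of how cheap each individual request looks. The sketch below shows a sliding-window limiter; the class name, limits, and client IDs are illustrative, and a production deployment would sit behind a CDN or WAF rather than in application code like this.

```python
import time
from collections import defaultdict, deque

# Sketch: a per-client sliding-window rate limiter, a common first line of
# defense against Layer 7 (application-level) floods. Limits are illustrative.

class RateLimiter:
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # client_id -> recent request timestamps

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_id]
        # Discard timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over budget: reject or challenge this request
        q.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=1.0)
results = [limiter.allow("attacker", now=t) for t in (0.0, 0.1, 0.2, 0.3)]
print(results)  # the fourth request inside the window is rejected
```

A limiter like this blunts the volume, but as noted above it doesn’t fix a “query of doom”: if a single request is expensive enough, the real remedy is correcting the application behavior the attackers are leveraging.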

What they said 

Nobody’s in denial about this – see what people are saying

This seems like a suboptimal choice. . . 

What happened 

An issue with Node Package Manager (npm), the GitHub-owned JavaScript package repository, could allow malicious actors to hide code, scripts, and dependencies, because npm does not validate package manifests server side. Software supply chain attacks leveraging code repositories have been on the rise, and some architecture decisions with npm have exacerbated the problem. 

Why it matters 

The specific problem here is that there can be a discrepancy between the package manifest and what’s actually deployed – with the repo server not doing any validation itself or cross-checking dependencies. Users would normally rely on client-side tools to take care of those checks, but not all client-side tools are created equal. This could let malicious actors slip trojan code into a package in a more or less classic software supply chain attack.  
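Since the registry won’t cross-check for you, the client side has to. The core of such a check is just diffing the dependency list the registry manifest advertises against the `package.json` actually inside the downloaded tarball. The sketch below shows that comparison; the package name, helper function, and both input documents are hypothetical stand-ins, not real npm API responses.

```python
# Sketch: cross-checking an npm registry manifest against the package.json
# shipped in the tarball. Because the registry does not validate this server
# side, a client-side diff like this is what surfaces a mismatch.
# Both inputs are illustrative stand-ins for the real documents.

def manifest_mismatches(registry_manifest, tarball_package_json):
    """Return dependency names whose declared versions differ or are hidden."""
    reg = registry_manifest.get("dependencies", {})
    tar = tarball_package_json.get("dependencies", {})
    names = set(reg) | set(tar)
    return sorted(n for n in names if reg.get(n) != tar.get(n))

registry = {"name": "left-pad-ng", "dependencies": {"lodash": "^4.17.21"}}
tarball = {"name": "left-pad-ng",
           "dependencies": {"lodash": "^4.17.21", "evil-helper": "1.0.0"}}
print(manifest_mismatches(registry, tarball))  # the hidden dependency surfaces
```

Anything the function returns is a red flag: either the registry is advertising a dependency the package doesn’t ship, or – the dangerous case – the tarball pulls in something the manifest never mentioned.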

We’ve seen a lot of this lately, between threat actors taking over orphan projects, hijacking projects, typosquatting, creating entire false identities to legitimize projects, etc. And then there’s our own findings on AI Package Hallucinations that add another layer of threat surface – albeit one that relies as much on serendipity as it does targeting. 

So, at the risk of repeating myself, yet again, I’m going to say “You are vetting the code you download before you use it, right? RIGHT?!?” 

What they said 

There was no shortage of attention here.


Want to get ahead of the stories?

Free for risk owners

Set up in minutes to aggregate and prioritize cyber risk across all your assets and attack vectors.

"Ideal for an overwhelmed secops/security team."

Name Namerson
Head of Cyber Security Strategy