Security teams are inundated with incident reports. Yet something feels absent from them: meaningful evidence of sustained, sophisticated advanced threat actor activity. The typical volume of reporting might suggest a constant state of crisis, but paradoxically, there is sparse evidence of confirmed, high-end threat actor operations. This can create a sense that perhaps everything is under control.

Things are certainly not under control. Advanced threat actors, particularly those backed by foreign nations, have so much to gain and so little to lose by targeting critical infrastructure and technology providers. It is implausible that they maintain organizations of engineers and operators numbering in the thousands without active objectives.

Sure, every so often we catch a DPRK plant who slipped through hiring as a software engineer. Occasionally we identify a deep compromise of a single service or software supply chain. Surely there must be more going on.

The absence of evidence is not evidence of absence, but it raises the question: Where are the advanced threat actors?

Are they operating more carefully than we like to assume, with a level of stealth that we aren't doing enough to detect? Are they operating with restraint, pre-positioning quietly rather than conducting noisy operations which would only push companies to improve their security postures? Are they focused on targets other than the ones we expect? Or are we simply underestimating the effectiveness of our threat detections?

Perhaps we're not looking in the right places. I have personally found easy-to-find vulnerabilities which, in the hands of an advanced threat actor, could have achieved any number of their presumed goals. When we performed log dives on these vulnerabilities, there was no evidence of exploitation. Are threat actors intentionally avoiding these vulnerabilities, choosing instead to move stealthily through chains of trust?

I wonder if our security programs have systemic flaws which have led to a situation where threat actors are able to work effectively under the radar.

Distorted Incentives

Internal incentives and metrics are misaligned with the reality of how threat actors operate.

Offensive teams optimize for spectacle. Detection teams, too often, optimize for alert volume rather than impact. Leadership, compliance obligations aside, optimizes for report visibility. Meanwhile real threat actors optimize for simplicity, cost efficiency, and invisibility.

There's not enough organizational incentive to take a step back and think about security holistically. There's often no measurable incentive to develop better threat detections, and detection quality is difficult to measure anyway. The absence of compromise is not as persuasive as a dramatic red team narrative.

Offensive security reports tend to garner more attention from leadership for the simple reason that they're exciting. Detection teams often feel that they're graded on the number of detections rather than their quality or impact.

The result is a quiet misalignment: we reward what is visible, while adversaries are rewarded for what goes unseen.

The Offensive-Defensive Rift

Many security programs place a divide between offensive security teams and defensive teams. We hold ideological positions about how an offensive security team should operate in relation to defensive teams, and these positions lead to an inherently adversarial relationship and a lack of communication between the two.

What if we reframed the relationship? What if offensive security teams were to work more closely with detections teams to develop better detections and more thoroughly test detections?

An Imbalance of Defensive Skills

Detection engineering and threat hunting teams are underrated, underfunded, and under-skilled. The most talented engineers prefer to work in offensive security for the simple reason that it's more exciting. We're left with defensive teams that don't necessarily have the knowledge or skills to push the boundaries.

We won't be able to change talented engineers' preferences, but with closer collaboration between offensive and defensive teams we could perhaps reduce the imbalance.

Security leadership should take a step back and reassess their objectivity when making budget decisions. Is an offensive security team actually making more of an impact than a threat hunting team? Or do its findings simply feel more exciting?

Alert Structure and Fatigue

Defensive teams are far too often focused on individual alerts and fail to see the whole picture. Alerts are singular and isolated. Real compromises are systemic.

Offensive security engineers watch the pattern repeat itself: the SOC identifies one piece of their operation and treats it as an isolated incident rather than escalating it appropriately.

You see it in publicly disclosed breaches such as the Notepad++ breach, in which Hostinger claims that only one server was affected. It's probably a safe bet that the threat actor has Hostinger fully compromised and maintains persistence.

To put this problem in perspective: as an offensive security engineer, my operations have triggered dozens of responses, yet a blue team has never fully evicted me from a network in which I had time to move laterally.

In fact, the only time I've ever unintentionally lost access to a network was because I forgot to renew a server.

Alert fatigue certainly plays a part here. Last week I asked a member of one of my company's response teams how many alerts they handle in a day. It was around noon on a Wednesday, and he said he'd personally already triaged a little under 20.

A properly tiered SOC at least tries to address this by having tier-1 analysts escalate to tier-2, and tier-2 to tier-3, where appropriate. This reduces the disincentive to perform further response after triage, but it also introduces a failure point between each SOC tier.

Perhaps higher SOC tiers should dedicate time to stepping back and looking at certain alerts holistically. Detection engineering teams should consider an approach that treats detections as interconnected rather than emitting them in isolation into a ticketing system or dashboard.
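One way to make "interconnected" concrete is to cluster alerts that share entities (a host, a user) instead of triaging each one alone: many distinct detections firing around the same set of entities looks like a campaign, not noise. Below is a minimal sketch in Python; the `Alert` shape, the entity keys, and the cluster-ranking heuristic are all illustrative assumptions, not the API of any particular SIEM.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Alert:
    id: str
    host: str
    user: str
    rule: str  # name of the detection that fired

def correlate(alerts):
    """Cluster alerts that share a host or user, then rank clusters
    by how many distinct detection rules fired inside them."""
    # Union-find over alert ids: alerts sharing an entity get merged.
    parent = {a.id: a.id for a in alerts}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    by_entity = defaultdict(list)
    for a in alerts:
        by_entity[("host", a.host)].append(a.id)
        by_entity[("user", a.user)].append(a.id)

    for ids in by_entity.values():
        for other in ids[1:]:
            union(ids[0], other)

    clusters = defaultdict(list)
    for a in alerts:
        clusters[find(a.id)].append(a)

    # Many distinct rules around one entity set suggests a campaign.
    return sorted(clusters.values(),
                  key=lambda c: len({a.rule for a in c}),
                  reverse=True)
```

With this shape, a brute-force alert on `web01`, a lateral-movement alert on `web01` by `alice`, and an exfiltration alert by `alice` on another host all land in one cluster, which is exactly the picture a tier-3 analyst needs and a flat alert queue hides.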

Misaligned Offensive Focus

Offensive security teams sometimes chase the wrong victories. Out of personal interest we put substantial effort into complex attack paths and miss the simple ones. We've all been guilty of this: exploiting an advanced attack path when we could achieve the same goals with simple, less costly social engineering.

In highly developed environments, I've noticed that offensive security teams gravitate toward vulnerabilities in legacy, internally developed infrastructure. I've seen team members exploit highly complex chains within internal infrastructure when they could have achieved their objectives just as easily by social engineering the right sysadmin over Slack.

Offensive security teams should re-align with how real threat actors operate.