You Against the World: The Offender's Dilemma


In February, we saw a very large and interesting data leak from I-S00N, a Chinese company offering adversarial services for clients including the Chinese Ministry of Public Security, Ministry of State Security, and People’s Liberation Army. 

The leak offered details of compromises within at least 14 governments, custom hardware snooping devices, iPhone “remote access” capabilities, and even a Twitter disinformation platform able to distribute information en masse through new and/or compromised accounts while conducting extensive monitoring. While the individual tools, services and activities are interesting, the overall profile reveals a concerning truth for global organizations.

Enterprises have a range of options to mimic certain attacker behaviors or hunt for the same vulnerabilities on which attackers will prey. But what the I-S00N revelation demonstrates is that we’re fighting with one hand tied behind our backs, as other countries are weaponizing their private sectors in ways we can’t and won’t. We’ve all heard of the “Defender’s Dilemma”: the good guys need to be right every time, while the bad guys only need to be right once. As more, and more fragmented, offensive security options enter the market for US companies, the leak gives us an interesting look at what could be considered an “Offender’s Dilemma”. Foreign attackers have many more toolsets at their disposal, so we need to be selective about our modeling, our preparation, and how we assess and fortify ourselves. This article looks at four pillars of an offensive playbook (Red Teams, penetration testing, automation and AI, and vulnerability assessment) and, for each, the approaches that give an offensive security program the visibility and reach to make the greatest impact.

Red Teams need to be about Every Team

When most people think of Red Teaming, they envision a team of security experts playing out an attack scenario, either digitally or physically, to see if they can evade detection and achieve a goal by compromising a target asset or assets. While not wrong, that perception is incomplete. Whether for a Red Team or a threat actor, the “attack” is neither the beginning nor the end of contact with a potential victim, and is thus too narrow an activity for an organization to determine the full extent of its own vulnerability and risk.

Prior to emulating an attack scenario, it is absolutely necessary to assess what intelligence you are providing the outside world to inform an attacker, and what human or procedural weaknesses may provide an open door through which an attack can begin. A Red Team’s collection of Open Source Intelligence (OSINT) from company communications, media coverage and even social media can reveal the treasure trove available to a threat actor. Additionally, many companies conduct “security awareness” training separate from Red Team activities, which only provides an assessment against generic scenarios. A Red Team can conduct social engineering campaigns using live OSINT, following scenarios that give a real-world perspective on how an attack is likely to begin and how effective it can be.
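To make the OSINT point concrete, here is a minimal sketch of the kind of harvesting a Red Team (or attacker) automates against public-facing text. The function name, data shapes, and the sample page are illustrative assumptions, not a real Red Team tool; real collection spans many more sources and signal types.

```python
import re

def harvest_osint(text: str) -> dict:
    """Pull low-hanging OSINT out of public-facing text: email
    addresses, which reveal both phishing targets and the company's
    username naming convention."""
    emails = sorted(set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)))
    # Infer the username convention (e.g. first.last) from harvested emails.
    convention = None
    if any("." in e.split("@")[0] for e in emails):
        convention = "first.last"
    return {"emails": emails, "naming_convention": convention}

# Hypothetical press-page snippet standing in for scraped content.
page = """Contact our press team at jane.doe@example.com or
john.smith@example.com for media inquiries."""
print(harvest_osint(page))
```

Even this toy version shows why innocuous public content matters: two contact addresses are enough to guess the email of any employee named on LinkedIn.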

So what then? Understanding how an attack may play out is valuable, but unless you also assess how the organization and its stakeholders are oriented to respond, you have no understanding of the extent of the damage a successful attack can create, or how effective you can be at minimizing impact. Conducting incident response tabletops alongside Red Team exercises provides that full assessment of readiness, and a blueprint for improvement.

Finally, realize that even the security team and its defensive technologies are themselves assets representing potential vulnerability, and should be assessed in kind. Which brings us to the next pillar.


Silos are no-gos

There are no “air gaps” in an enterprise. Every asset, be it an application, a device, an office footprint, or a cloud, is interconnected. So, while testing an individual application or device is important for understanding its individual vulnerability, assets don’t exist in a vacuum. The connections and attributes they share across multiple applications and devices in an environment can represent additional vulnerability, either on their own or as a pathway from a vulnerability in an upstream asset.

It’s for that very reason that an organizational attack surface needs to be understood at the macro and micro level.  We need to be testing individual applications and the overall ecosystem in which they exist.  But we can’t stop there.  There is a third dimension to testing — Time.

Just as no asset is an island, neither is any point in time. Applications are constantly being updated, added or deleted. New employees, business units or even whole companies via M&A are being added. And if a constantly evolving infrastructure weren’t complex enough, new threats and classes of vulnerabilities are being discovered every day, some in new assets, some that have existed in assets for months or even years. For this reason, assessment and testing must be not only comprehensive but also continuous. The reality of this level of change would be overwhelming, if not paralyzing, for any organization without the benefit of automation. But as with anything in security and life, there are benefits and pitfalls.
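One way to make continuous assessment tractable is to diff point-in-time inventories, so the team focuses on what changed rather than re-reviewing everything. The sketch below is a simplified illustration under that assumption; the function name and the hostnames are hypothetical, and real attack surface management tracks far more than hostnames.

```python
def diff_attack_surface(previous: set[str], current: set[str]) -> dict:
    """Compare two point-in-time asset inventories and surface the
    change: new assets need fresh assessment, while removed ones may
    indicate decommissioning (or an outage worth investigating)."""
    return {
        "added": sorted(current - previous),
        "removed": sorted(previous - current),
        "unchanged": len(previous & current),
    }

# Hypothetical inventories from two consecutive discovery scans.
scan_monday = {"app.example.com", "vpn.example.com", "mail.example.com"}
scan_friday = {"app.example.com", "vpn.example.com", "staging.example.com"}
print(diff_attack_surface(scan_monday, scan_friday))
```

The design choice matters: by emitting only the delta, each scan cycle produces a review queue proportional to the rate of change, not to the size of the estate.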

AI and automation must not be autonomous

Automation in all aspects of technology and systems is what drives growth and lets businesses scale. Technology is an amplifier, but it can amplify noise as well as signal. It is therefore critically important to be able to discern not just signal but the right signals, and that requires human intelligence, intuition and, most importantly, validation.

We are also witnessing a step change in technology amplification with advances in Large Language Models (LLMs) and Artificial Intelligence (AI). The ability of AI to develop content, aid in programming, manage high-level processes or detect anomalies in data and systems is astounding. That reality is of course not lost on malicious actors, who are also testing AI’s capabilities, from deepfakes to malware development.

However, no technology, particularly one as nascent as AI, is perfect. The age-old maxim of Garbage In, Garbage Out remains true: AI models and their output are only as valid and effective as the data sources they draw from and the people who maintain them. We at Bishop Fox like to think of enabling technologies as an Iron Man suit with Jarvis. It can supercharge what an analyst or an operator can do, but it still needs a human to see new patterns, spot outliers that break from models and, just as importantly, confirm that the output results in a positive outcome. And even with that validation and extra set of eyes, sometimes we need the help of friends.
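The human-in-the-loop principle can be sketched as a simple routing rule: model confidence may filter obvious noise, but nothing reaches a report without a human marking it validated. Everything here, the `Finding` shape, thresholds, and names, is an illustrative assumption, not any vendor's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    model_confidence: float  # 0.0-1.0, as reported by the model
    validated: bool = False  # set only after human review

def route_findings(findings, auto_dismiss_below=0.2):
    """Route model output: very-low-confidence noise is dismissed,
    but nothing is *reported* until a human sets `validated`."""
    queue, report = [], []
    for f in findings:
        if f.model_confidence < auto_dismiss_below:
            continue  # amplified noise we choose not to chase
        (report if f.validated else queue).append(f)
    return queue, report

findings = [
    Finding("SQLi in login form", 0.9, validated=True),
    Finding("Possible open redirect", 0.6),
    Finding("Banner version mismatch", 0.1),
]
queue, report = route_findings(findings)
```

Note the asymmetry: the model is trusted to discard, never to confirm. Confirmation stays with the operator, which is exactly the Jarvis-in-the-suit division of labor described above.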

A crowd needs leadership

One of the greatest strengths of the security industry is its community. From open-source tools to industry events and resources, strength in numbers is an important asset as new threats are rapidly discovered and weaponized. In this respect, bug bounties are a critically important tool. Whether filling the gaps or finding the needles, community contributions to finding vulnerabilities and developing mitigations are invaluable. And while standardized disclosure processes and rewards are crucial in driving efficiency and communication, a bug bounty program in the wrong environment can present the same overwhelming noise and lack of prioritization as automation. That’s why, in many ways, a bug bounty program at scale needs a strong internal team and vulnerability assessment infrastructure to support it.
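The noise problem a bounty program creates is largely one of duplication and ordering, which the internal team absorbs with triage. A minimal sketch of that step, with an illustrative report shape and severity scale assumed for the example:

```python
def dedupe_reports(reports):
    """Deduplicate incoming bounty reports by the (asset, weakness)
    pair, keeping the first seen, then order what remains by severity
    so the internal team sees each unique issue once, worst first."""
    seen, unique = set(), []
    for r in reports:
        key = (r["asset"], r["weakness"])
        if key in seen:
            continue  # duplicate of a report already in the queue
        seen.add(key)
        unique.append(r)
    return sorted(unique, key=lambda r: -r["severity"])

# Hypothetical incoming reports; severity on a 0-10 scale.
reports = [
    {"asset": "api.example.com", "weakness": "XSS", "severity": 6},
    {"asset": "api.example.com", "weakness": "XSS", "severity": 5},
    {"asset": "vpn.example.com", "weakness": "outdated TLS", "severity": 9},
]
print(dedupe_reports(reports))
```

Even this trivial version illustrates why the internal infrastructure matters: without a canonical asset inventory to key against, the (asset, weakness) pair is ambiguous and deduplication falls apart.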
