Whether you’re trying to figure out if the latest breach impacts you or simply trying to connect different data sources for better reporting, navigating the cybersecurity landscape can be difficult. In this post we’ll break down what we see as four fundamental problems across the industry, how they all stem from a similar source, and what we can do about it.
Hello From Outcome Security
The cybersecurity industry is broken.
Okay, that’s not exactly true. But it reads a lot cleaner than “the cybersecurity industry is a twisting labyrinth where the gaps between competing priorities are so large it’s like trying to garden in the middle of a landslide.”
For a lot (maybe a majority) of cybersecurity professionals, joining a security team is akin to being thrown into a maelstrom of conflicting priorities, inconsistent data, and broken tools, then being told you need to protect your organization from being infiltrated or hacked by the Chinese government, all while being understaffed, underfunded, and in some cases competing with internal priorities to construct something resembling a good cybersecurity posture.
For a lot (maybe a majority) of large organizations, CISOs and other decision makers are flooded with a constant stream of sales calls, e-mails about products, and marketing buzzwords. They come in different flavors (“we have a cutting-edge algorithm”, “we use AI/ML to…”, “our cybersecurity analysts are the best in the world”), but they all claim to be the latest-and-greatest, a new analytic, or a novel way to measure impact.
For a lot (maybe a majority) of mid-tier organizations, there’s a vague notion that threat intelligence and other cybersecurity reporting are things you should be consuming to inform your day-to-day workflows. But you need the best ROI for the products and solutions you purchase, and it’s difficult to do any kind of comparative analysis across solutions to figure out what’s going on.
Now I’m going to say something crazy: cybersecurity actually isn’t supposed to be this difficult. All of the situations I’ve described above affect organizations differently, but they fundamentally stem from the same root issues.
With that in mind, for our inaugural blog post I’d like to present what I’ve affectionately labeled “The Four Horsemen of Cybersecurity,” problems so far-reaching they’re causing catastrophic damage across the industry:
- Expertise Solving the Wrong Problems
- Too Many Products -> Too Much Broken Data
- A Skills Gap That Isn’t a Skills Gap
- An Inability to Solve Problems Collaboratively
In the future we’ll be talking about each of these problems in more depth than we do here, but this post is meant to frame some of the issues that we at Outcome Security see as fundamental to the industry.
The Red Rider: Expertise Solving the Wrong Problems
One of the conversations that I’ll always remember from early in Outcome’s history, before Kaleidoscope was a fully-formed idea, was with a senior engineer at a service provider. I explained some of the issues we saw with collaboration across security teams, the inaccessibility of threat intelligence and cybersecurity data, and the bespoke nature of most products on the market.
The person I was talking to said, “That’s great and it all makes sense. Also, one question – what counts as cybersecurity data?” Their company helps customers deploy and then monitor cloud solutions for data migration, so they had access to what working-level analysts would classify as cybersecurity data. Even so, where the lines in the sand were, or should be, was unclear.
What counts as cybersecurity data?
An industry professional, overwhelmed by the state of cybersecurity tooling
One of the coolest parts of cybersecurity, particularly as a new person getting into the industry, is the sheer volume of subject matter areas that you can explore and specialize in. There’s no shortage of smart people to work with and learn from, and there’s no shortage (and arguably, a glut) of technology that you can apply to a myriad of missions.
There are daily blogs, tweets, and conferences throughout the calendar year dedicated to sharing cutting-edge research. For a lot of positions, independent research is encouraged, and in some cases even required, as part of your job responsibilities. The freedom in many roles to research areas that interest you, and the sheer volume of information at your fingertips, are part of what makes the cybersecurity industry so intoxicating – as long as you want to learn, there’s always something you can dig into.
And now, for the medicine: most of that isn’t practical or functional, and in a lot of cases it just turns the industry into a self-licking ice cream cone where most research only benefits people doing other research and never trickles down to, you know, the people we’re all supposed to be defending. Coming from a team of founders who still consider themselves subject matter experts in deep technical fields, that realization sucks, but it is the realization all the same.
To be clear, I’m not arguing that all of this is wrong wholesale. Lots of research makes products better, and that flexibility is usually what attracts top talent in the first place. But it’s easy for these incentive structures to produce a top-heavy analysis space where your smartest individual contributors aren’t solving the problems where they’d be most effective (because those problems aren’t as exciting), and the analysis that is published instead benefits the top 0.5% of organizations (because the programs of the remaining majority aren’t mature enough yet).
As a general rule, the published content that gets the most traction and attention tends to focus on the flashy problems. But in practice, the large majority of attacks use simple Tactics, Techniques, and Procedures (TTPs) that we as an industry have been seeing for decades. Phishing, remote access via purchased credentials, and the use of commodity Red Team toolkits such as Cobalt Strike are just three examples of techniques that are relatively simple for an attacker to use, but disproportionately effective in letting them breach organizations. None of these techniques is new – every security vendor on the planet has some amount of remediation for each.
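To make that concrete, here is a minimal sketch of how little machinery a first-line detection for one of these TTPs actually requires. It flags “impossible travel” logins, a classic signal that purchased or stolen credentials are in use. All names and thresholds here are our own illustrative choices, not any vendor’s actual logic:

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    timestamp: float  # Unix epoch seconds
    lat: float        # geolocated source IP latitude
    lon: float        # geolocated source IP longitude

def distance_km(a: Login, b: Login) -> float:
    """Great-circle (haversine) distance between two login locations."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # Earth radius ~6,371 km

def impossible_travel(prev: Login, curr: Login, max_kmh: float = 900.0) -> bool:
    """Flag consecutive logins whose implied travel speed exceeds an airliner's."""
    hours = max((curr.timestamp - prev.timestamp) / 3600.0, 1e-6)
    return distance_km(prev, curr) / hours > max_kmh

# Example: the same account logs in from New York, then from Kyiv 30 minutes later.
ny = Login("alice", 0.0, 40.71, -74.01)
kyiv = Login("alice", 1800.0, 50.45, 30.52)
assert impossible_travel(ny, kyiv)  # ~7,500 km in half an hour: flag it
```

The point isn’t that this check is sufficient; it’s that detections for decades-old TTPs really are this well understood.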
So then why are simple attacks like this so effective? It seems unlikely that the answer is “every tool in the world is bad.” What’s more likely is that the assumptions security vendors make don’t align with the reality of an organization’s network or system configuration. So, if a security vendor’s product requires certain system and network configurations to be effective, and the teams consuming the product haven’t made those changes (or don’t know how), how do we bridge the gap?
It is very common for people with large amounts of technical expertise (which cybersecurity is built on) to want to focus on and discuss the problems they find interesting, but because of that sophistication, most analysis coming from threat intelligence shops, EDR vendors, and advanced security teams is not applicable to general consumers.
I could show you the best, most efficient tutorial in the world on how to tie a tie, but it won’t help much if you’re still learning to tie your shoes.
The White Rider: Too Many Products, Too Much Broken Data
In 2021, cybersecurity startups raised over $29 billion in funding, more than the previous two years combined. Everywhere you look there’s a new platform with a shiny dashboard and big claims about AI & ML automating all of your woes away, or cutting-edge threat intelligence analysis that will help you prioritize your defenses better than ever.
Which is great, except also, they don’t work.
Cybersecurity products are heavily incentivized to solve problems at scale, whether they can or not, which causes problems when it comes to exercising deep technical expertise. The first piece of advice you’ll hear when building a new company or product is to focus on as narrow a problem as possible, solve it well, then scale the solution out to make it proportionally easier to help more teams that need your product (or to target more customers, depending on how cynical you are).
From a practical perspective this is obviously effective, but it leads to two common scenarios when you look at more than an individual company:
- Good, quality analysis gets sacrificed at the altar of scalability, meaning your data, analysis, or product isn’t as good as it could be
- Even if you solve your problem perfectly, you’re solving one problem, probably in a vacuum
“Automating the expert” is something that gets thrown around a lot in different forms but the reality is that a lot of analysis across cybersecurity is nuanced and so applying that at scale is difficult. More detail means less scale, more scale means less detail, and trying to meet in the middle is a tough balancing act.
Even for the products that dominate a niche, the day-to-day for your average cybersecurity professional is a plethora of interrelated problems, not just the one your product solves. And unless your product or solution lets me replace an entire team (spoiler: not a single product does), then practically speaking your product is a blinking light that says “this is bad, go do something about it.”
What the blinking light means and what you do about it will change depending on whether you’re a Red Teamer, CTI Analyst, SOC Analyst, Risk Assessor, Malware Analyst, System Administrator, or whatever else, but fundamentally the success of an organization is built on the backs of its people, not a fancy analytic, AI model, or visualization.
This is going to sound puritanical, because there’s a lot more to these companies and teams than just their product, but we are talking about security here. Any time you spend trying to oversell your data, analysis, threat intelligence, or product appearance is time you’re spending directly hurting the person that’s supposed to be consuming your solution. The only thing worse than no security is a false sense of security.
The Black Rider: A Skills Gap That Isn’t a Skills Gap
We’ve been hearing about the cybersecurity skills gap for a decade, which should sound suspicious given the above two points. If there are more products than ever, with all of these new algorithms, analytics, and expertise, why are there never enough cybersecurity candidates?
But do we really have a skills gap? Or is it something else pretending to be a skills gap? What is a skills gap meant to convey in an industry that’s at least partially defined by research and dynamic problem solving?
The knock-on effect of the asymmetry I described with the first two horsemen is that most of the expertise in the industry is focused on problems that won’t impact a majority of organizations, and a majority of analysis is diluted through products that aren’t necessarily built by cybersecurity experts.
But do we really have a skills gap? Or is it something else pretending to be a skills gap?
Outcome Security, just now
This means that when a product finally lands in the hands of a user, it:
- Might not really solve the problem it was marketed around
- May solve only a single problem among the many the user has to deal with every day
- May appear to solve the problem(s) but actually be built on incomplete or incorrect data or other faulty assumptions
All three of these problems are hard to quantify in the best of scenarios, but almost impossible when you have to filter through the entire sales -> decision maker -> working level pipeline. Communication between each of those levels is a lossy process which makes feedback, iteration, and validation of efficacy difficult.
It is easy to pin the root of this problem on management that isn’t technical (which is a massive part of the issue), but it’s deeper than that – if the technical expertise behind a product wants to solve a hard problem, but the product team wants to solve a simpler one at scale, and your marketing & sales teams want to say you do everything, which problem is your product actually solving? And if you’re a customer looking at the product, which part are you supposed to validate, and how?
So, let me reframe the discussion: if it’s true that, collectively, we don’t have enough people to address all of the “cyber” needs across the industry, what else could we possibly expect? If products are targeted at problems more esoteric than an organization’s actual needs, built on analysis that’s been diluted in order to scale, and presented in a way that’s not actionable (because it wasn’t built by seasoned cybersecurity experts) what alternative is there but chaos?
Even organizations with a small number of tools and data sources often have to cross-correlate internal streams with external signals, measurements, and open-source (OSINT) reporting. Because products tend to solve one specific problem or subset of problems, the “glue” between those products gets lost, and those points in the middle are where a cybersecurity team produces the majority of its value. In many ways, it’s easy for a tool to make an analyst’s job harder by requiring them to untangle a Rube Goldberg machine of different data sources and tools before they can do anything useful. And in a lot of cases, all people really need to do is get an alert, validate where the alert came from, and do the thing about it (where “the thing” is an organization-specific workflow to remediate, track, or accept an issue).
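That “glue” is small enough to sketch. Below is a deliberately tiny, hypothetical illustration (every name here is ours, not any product’s API) of the alert-to-workflow plumbing that teams end up rebuilding by hand around each tool:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Dict

class Disposition(Enum):
    REMEDIATE = auto()  # fix it now
    TRACK = auto()      # file it and follow up
    ACCEPT = auto()     # documented, accepted risk

@dataclass
class Alert:
    source: str  # which tool or feed produced the alert
    asset: str   # what it fired on
    detail: str  # the raw finding

def triage(alert: Alert,
           validate: Callable[[Alert], bool],
           decide: Callable[[Alert], Disposition],
           handlers: Dict[Disposition, Callable[[Alert], None]]) -> None:
    """Validate where the alert came from, then run the org-specific workflow."""
    if not validate(alert):  # e.g. is the source tool known and the asset real?
        return
    handlers[decide(alert)](alert)

# Example wiring: everything below is organization-specific policy.
triage(
    Alert("edr-vendor-x", "host-42", "suspicious named pipe"),
    validate=lambda a: a.source in {"edr-vendor-x", "netflow"},
    decide=lambda a: Disposition.TRACK,
    handlers={d: (lambda a: None) for d in Disposition},
)
```

Each of `validate`, `decide`, and the handlers is organization-specific, which is exactly why this middle layer is where teams produce (or lose) most of their value.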
Maybe as an industry we don’t have a skills gap. Maybe instead, organizations are handicapping their teams by making them navigate a hedge maze to access the data they need to do their jobs. If data and tools were easier to use, and in particular helped teams track and accomplish their tasking, maybe there wouldn’t be a skills gap at all.
The Pale Rider: An Inability to Solve Problems Collaboratively
Collectively, as an industry, we’re awful at learning lessons. Despite the volume of tools being created, we’re still making the same mistakes we did 5 years ago, 10 years ago, over and over. This applies on a smaller scale too – the recent Uber breach is a prime example of how some of these concerns can manifest. The MFA attack used to breach Uber, along with its (mis)configurations and remediations, had been front and center in the cybersecurity zeitgeist for the past couple of years.
So how is it possible that a company like Uber, a very tech-first and sophisticated organization, had something like push notifications for MFA enabled, something any industry professional will tell you is a blatant problem? Did they not know about the SolarWinds attacks and everything that followed? Did they know, but lose the remediation effort in the noise of everything else constantly happening in the industry? Or did they know, but thought they had another solution that remediated the risk, when it obviously didn’t?
We’ll never know for sure but we don’t need to – the biggest takeaway from this breach (or any breach) shouldn’t be that company X did Y wrong, it should be that if something like this can slip through the cracks at an organization as sophisticated as Uber, it can happen anywhere.
We need a better, more actionable way to share cybersecurity data. No marketing sizzle, just “here’s what happened, here’s how we know, here’s what you should do about it, here’s how.” Analysis distilled to working-level steps organizations can use without needing a team of cybersecurity experts in the wings independently validating each step.
And once those steps are delivered to an organization, that organization should be able to track not just “did we do it,” but also “how did we resolve it” and “were the recommendations actionable enough,” and, critically, if the alert or recommendation came from a product, whether that alert was valuable and those recommendations correct.
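As one illustrative shape for that kind of record (the field names here are ours, not an industry standard or Kaleidoscope’s actual schema), pairing the shared analysis with the consuming organization’s feedback might look something like this:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SharedFinding:
    what_happened: str    # "here's what happened"
    evidence: str         # "here's how we know"
    recommendation: str   # "here's what you should do about it"
    steps: List[str]      # "here's how" - working-level steps

@dataclass
class TrackedFinding:
    finding: SharedFinding
    source: str                        # the product, feed, or team it came from
    resolved: bool = False             # "did we do it"
    resolution_notes: str = ""         # "how did we resolve it"
    actionable: Optional[bool] = None  # "were the recommendations actionable enough"
    source_feedback: str = ""          # was the alert valuable, were the recommendations correct
```

The structure itself is trivial; the gap is that nothing in today’s sharing pipeline carries the bottom half back to whoever produced the top half.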
Collaboration and sharing information don’t start at the organizational level – before you even start to think about sharing analysis, reporting, recommendations, etc. externally it behooves you to make sure your internal teams and processes are effective and consistent. Each organization is a unique operating environment with different requirements, data sources, and workflows. Similarly, expertise between individuals in this industry is extremely varied meaning that each team or even team member has their own unique flow for how to handle different issues.
But while what works at one team or organization may not work when applied directly to another, there are lots of commonalities across how teams and organizations solve different problems. The lack of cybersecurity tooling that enables teams to collaborate more directly while working on a line of effort (instead of consuming a dashboard or after-action report) is a massive gap in making teams and organizations more efficient.
Both of these issues combine to make it incredibly hard to track macro-level progress as an industry. The sheer volume of new attacks, threat actors, exploits, signatures, etc. makes it almost impossible for an organization to stay on top of everything that they should be doing as the landscape changes underneath them constantly. And when that organization does pick a problem to solve, the nuanced nature of most teams’ workflows makes it hard to capture what they did and why, as well as how beneficial any tools and cybersecurity data feeds were in helping them resolve their issues.
As an industry we’re generally unable to measure the efficacy of different cyber solutions, and because of that we’re too susceptible to chasing shiny baubles that won’t help working-level teams be more effective.
Now What? Salvation or Armageddon?
References to a biblical apocalypse notwithstanding, this post isn’t meant to sound bleak. There’s a lot to do, but all of these issues share a single root cause that we can start to remediate: we do not have a sufficient language to convey cybersecurity data and analysis.
We do not have a sufficient language to convey cybersecurity data and analysis
The real foundation of the cybersecurity industry is the teams of experts that do the analysis, whether that analysis is a red team assessment, malware analysis, threat hunt, or signature sweep. As an industry, we need to enable those professionals to do their job more effectively and efficiently before we can take a genuine look at which parts can be automated away or improved with new technology.
That’s why, with our platform Kaleidoscope, we’re focused on improving the state of the industry by empowering your teams and organizations to be more efficient. By giving you tools, workspaces, and integrations that allow you to not just log your daily tasks but actually do them, and by connecting those processes with the internal and external data sources available to your organization, we can build a better data model for talking about cybersecurity workflows, whether those workflows concern the current crisis in the industry or how your teams are remediating different issues.
We’re all constantly being assaulted with new technology, terminology, buzzwords, and products. It’s extremely easy to get lost or lose sight of what kinds of things you should be prioritizing. And it’s extremely easy to get overwhelmed because pulling on a single thread of any of these problems quickly reveals decades of bad habits and decision making.
So, it might sound like we’re lost in the woods, and maybe we are. But the path forward starts with building a better model for talking about cybersecurity analysis and data, and using that foundation to measure not just the efficacy of our teams and the tools they use, but also the best way for your teams and organizations to consume threat intelligence and other cybersecurity data, so you’re tracking all of the efforts that need to be tracked.
Focus on the outcome, not the chaos.