At the Black Hat Europe conference in December, I sat down with one of our senior security analysts, Paul Stringfellow. In this first part of our conversation, we discuss the complexity of navigating cybersecurity tools and how to define relevant metrics for measuring ROI and risk.
Jon: Paul, how does an end-user organization make sense of everything going on? We’re here at Black Hat, and there’s a wealth of different technologies, options, topics, and categories. In our research, there are 30-50 different security topics: posture management, service management, asset management, SIEM, SOAR, EDR, XDR, and so on. From an end-user organization’s perspective, though, they don’t want to think about 40 or 50 different things. They want to think about 10, 5, or maybe even 3. Your role is to deploy these technologies. How do they want to think about it, and how do you help them translate the complexity we see here into the simplicity they’re looking for?
Paul: I attend events like this because the challenge is so complex and rapidly evolving. I don’t think you can be a modern CIO or security leader without spending time with your vendors and the broader industry. Not necessarily at Black Hat Europe, but you need to engage with your vendors to do your job.
Going back to your point about 40 or 50 vendors, you’re right. The average number of cybersecurity tools in an organization is between 40 and 60, depending on which research you refer to. So, how do you keep up with that? When I come to events like this, I like to do two things, and I’ve added a third since I started working with GigaOm. One is to meet with vendors, because people have asked me to. Two is to go to some presentations. Three is to walk around the expo floor talking to vendors, particularly ones I’ve never met, to see what they do.
I sat in a session yesterday whose title caught my attention: “How to identify the cybersecurity metrics that are going to deliver value to you.” It appealed to me from an analyst’s point of view because part of what we do at GigaOm is create metrics to measure the efficacy of a solution in a given topic. But if you’re deploying technology as part of SecOps or IT operations, you’re gathering a lot of metrics to try to make decisions. One of the issues they talked about in the session was that because we have so many tools, we create so many metrics that there’s an enormous amount of noise. How do you start to work out where the value is?
The long answer to your question is that they suggested something I thought was a really smart approach: step back and think as an organization about what metrics matter. What do you need to know as a business? Doing that allows you to reduce the noise and also potentially reduce the number of tools you’re using to deliver those metrics. If you decide a certain metric no longer has value, why keep the tool that provides it? If it doesn’t do anything other than give you that metric, take it out. I thought that was a really interesting approach. It’s almost like, “We’ve done all this stuff. Now, let’s think about what actually still matters.”
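As a side note, the rationalization Paul describes can be sketched in a few lines of code: map each tool to the metrics it produces, then flag any tool whose outputs no longer appear on the list of metrics that still matter. The tool and metric names below are hypothetical, purely for illustration.

```python
# Hypothetical inventory: which business metrics still matter, and which tool produces each.
metrics_that_matter = {"mean_time_to_detect", "patch_latency", "phishing_failure_rate"}

tool_outputs = {
    "LegacyScannerX": {"open_ports_count"},                        # a metric nobody reviews anymore
    "EDR-Platform":   {"mean_time_to_detect", "endpoints_covered"},
    "AwarenessSuite": {"phishing_failure_rate"},
}

# A tool is a candidate for removal if none of its metrics are on the "still matters" list.
removal_candidates = [
    tool for tool, outputs in tool_outputs.items()
    if not (outputs & metrics_that_matter)
]
print(removal_candidates)  # ['LegacyScannerX']
```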
This is an evolving space, and how we deal with it must evolve, too. You can’t just assume that because you bought something five years ago, it still has value. You probably have three other tools that do the same thing by now. How we approach the threat has changed, and how we approach security has changed. We need to go back to some of these tools and ask, “Do we really need this anymore?”
Jon: So we measure our success with this and, in turn, we’re going to change how we do things.
Paul: Yes, and I think that’s hugely important. I was talking to someone recently about the importance of automation. If we’re going to invest in automation, are we better now, having implemented it, than we were 12 months ago? We’ve spent money on automation tools, and none of them come for free. We’ve been sold on the idea that these tools will solve our problems. One thing I do in my CTO role, outside of my work with GigaOm, is take vendors’ dreams and visions and turn them into the reality of what customers are actually asking for.
Vendors have aspirations that their products will change the world for you, but the reality is what the customer needs at the other end. It’s that kind of consolidation and understanding—being able to measure what happened before we implemented something and what happened after. Can we show improvements, and has that investment had real value?
Jon: Ultimately, here’s my hypothesis: Risk is the only measure that matters. You can break that down into reputational risk, business risk, or technical risk. For example, are you going to lose data? Are you going to compromise data and, therefore, damage your business? Or will you expose data and upset your customers, which could hit you like a ton of bricks? But then there’s the other side: are you spending far more money than you need to in order to mitigate those risks?
So, you get into cost, efficiency, and so on, but is this how organizations are thinking about it? Because that’s my old-school way of viewing it. Maybe it’s moved on.
Paul: I think you’re on the right track. As an industry, we live in a little echo chamber. So when I say “the industry,” I mean the little bit I see, which is just a small part of the whole industry. But within that part, I think we are seeing a shift. In customer conversations, there’s a lot more talk about risk. They’re starting to understand the balance between spending and risk, trying to figure out how much risk they’re comfortable with. You’re never going to eliminate all risk. No matter how many security tools you implement, there’s always the risk of someone doing something stupid that exposes the business to vulnerabilities. And that’s before we even get into AI agents trying to befriend other AI agents to do malicious things—that’s a whole different conversation.
Jon: Like social engineering?
Paul: Yeah, very much so. That’s a different show altogether. But understanding risk is becoming more common. The people I speak to are starting to realize it’s about risk management. You can’t remove every security risk, and you can’t deal with every incident. You need to focus on identifying where the real risks lie for your business. For example, one criticism of CVE scoring is that people look at a CVE with a 9.8 CVSS score and assume it’s a massive risk, but there’s no context around it. They don’t consider whether the CVE has been seen in the wild. If it hasn’t, what’s the likelihood of being the first to encounter it? And if the exploit is so complicated that it hasn’t been seen in the wild, how realistic is it that someone will use it?
It may be such a complicated thing to exploit that nobody ever will, but it has a 9.8, so it shows up on your vulnerability scanner saying, “You really need to deal with this.” The reality is that we’re already seeing a shift away from that contextless view, toward asking whether we’ve actually seen it exploited in the wild.
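To illustrate the point, here is a minimal sketch of the kind of triage Paul is describing, assuming you can enrich scanner output with an exploited-in-the-wild flag (for example, from a known-exploited-vulnerabilities list). The findings and the rule are invented for illustration, not a recommended policy.

```python
# Illustrative: the same CVSS 9.8 looks very different once you ask whether it has been seen in the wild.
findings = [
    {"cve": "CVE-A", "cvss": 9.8, "exploited_in_wild": False},  # complex exploit, never observed
    {"cve": "CVE-B", "cvss": 7.5, "exploited_in_wild": True},   # actively exploited
]

# Triage rule (an assumption for illustration, not a standard): anything actively exploited
# goes to the top of the queue, regardless of raw score; everything else is scheduled normally.
urgent = [f for f in findings if f["exploited_in_wild"]]
scheduled = [f for f in findings if not f["exploited_in_wild"]]

print([f["cve"] for f in urgent])     # ['CVE-B']
print([f["cve"] for f in scheduled])  # ['CVE-A']
```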
Jon: Risk equals probability multiplied by impact. So you’re talking about probability and then, is it going to impact your business? Is it affecting a system used for maintenance once every six months, or is it your customer-facing website? But I’m curious because back in the 90s, when we were doing this hands-on, we went through a wave of risk avoidance, then went to, “We’ve got to stop everything,” which is what you’re talking about, through to risk mitigation and prioritizing risks, and so on.
But with the advancement of the cloud and the rise of new cultures like agile in the digital world, it feels like we swung back toward, “Well, you need to prevent that from happening, lock all the doors, and implement zero trust.” And now we’re seeing the wave of, “Maybe we need to think about this a bit smarter.”
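Jon’s formula can be made concrete with a rough sketch; the probability and impact figures below are invented purely to show the shape of the calculation.

```python
# risk = probability x impact, using Jon's two examples.
systems = {
    "maintenance box (used twice a year)": {"probability": 0.3, "impact": 10_000},     # assumed figures
    "customer-facing website":             {"probability": 0.3, "impact": 2_000_000},  # assumed figures
}

for name, s in systems.items():
    risk = s["probability"] * s["impact"]
    print(f"{name}: expected loss ~ {risk:,.0f}")
# Same vulnerability, same probability of exploitation: the impact term is what makes one of these urgent.
```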
Paul: It’s a really good point, and it’s an interesting parallel you raise. Let’s have a little argument while we’re recording this. Do you mind if I argue with you? I’ll question your definition of zero trust for a moment. Zero trust is often seen as something that tries to stop everything, but that’s probably not true. Zero trust is more of an approach, and technology can help underpin that approach. Anyway, that’s really an argument I’m having with myself.
So, if you take zero trust as an example, it’s a good one. What we used to do was implicit trust: you’d log on, I’d accept your username and password, and everything you did after that, inside the secure bubble, would be considered valid, with no malicious activity. The problem is that when your account is compromised, logging in might be the only non-malicious thing it does. Once logged in, everything your compromised account tries to do is malicious. If we’re doing implicit trust, we’re not being very smart.
Jon: So, the opposite of that would be blocking access entirely?
Paul: That’s not the reality. We can’t just stop people from logging in. Zero trust allows us to let you log on, but not blindly trust everything. We trust you for now, and we continuously evaluate your actions. If you do something that makes us no longer trust you, we act on that. It’s about continuously assessing whether your activities are appropriate or potentially malicious and then acting accordingly.
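Here is a minimal sketch of that continuous-evaluation idea, using a few hypothetical signals (device posture, location, an anomaly score). Real zero-trust platforms do this with far richer policy engines, but the shape is the same: every action is re-evaluated, not just the login.

```python
from dataclasses import dataclass

@dataclass
class ActionContext:
    user: str
    device_compliant: bool   # hypothetical signal: endpoint posture check
    new_location: bool       # hypothetical signal: request from an unusual place
    anomaly_score: float     # hypothetical signal: 0.0 (normal) to 1.0 (highly unusual)

def evaluate(ctx: ActionContext) -> str:
    """Re-evaluated on every request, not just at login (the core zero-trust shift)."""
    if not ctx.device_compliant or ctx.anomaly_score > 0.8:
        return "deny"
    if ctx.new_location or ctx.anomaly_score > 0.5:
        return "step-up-auth"  # trust is conditional: ask for more proof
    return "allow"

# Login itself looked fine; a later action from the same session can still be challenged or blocked.
print(evaluate(ActionContext("paul", device_compliant=True,  new_location=False, anomaly_score=0.1)))  # allow
print(evaluate(ActionContext("paul", device_compliant=True,  new_location=True,  anomaly_score=0.6)))  # step-up-auth
print(evaluate(ActionContext("paul", device_compliant=False, new_location=False, anomaly_score=0.2)))  # deny
```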
Jon: It’s going to be a very disappointing argument because I agree with everything you say. You argued with yourself more than I’m going to be able to, but I think, as you said, the castle defense model—once you’re in, you’re in.
I’m mixing two things there, but the idea is that once you’re inside the castle, you can do whatever you like. That’s changed.
So, what to do about it? Read on in Part 2, where we discuss how to deliver a cost-effective response.