Problem is, defenders just don’t get to experience it.
One of the recurring discussions in infosec is whether defense can be as sexy as offense, or at least perceived as sexy enough to attract more people to become defenders. Maybe not sexy as in a suave and debonair secret agent like James Bond, but sexy as in being in control of the situation.
I’ll tell you this isn’t a topic of discussion at Trustifier, because our tech and security model is definitely sexy. I’ll get into that more in Part 2. I doubt anyone thinks being a defender is not a noble role or duty to take on, but it does seem to have a perception problem. Nate Fick explains here that,
“The industry remains locked in a talent imbalance where creative people flock to become hackers at least partly because defense too often seems like a check-the-box compliance exercise; it has always been more fun to be a pirate than it is to join the Coast Guard.“
While the industry is focused on all that fun, it’s suggested here that attackers are focused on economics in order to score bigger paydays, ignoring industry hoopla and hype and simply figuring out new attack vectors that work. They are innovating, sharing information, creating and selling exploit kits, and renting out botnets and credentials on already-hacked systems.
Security is hard!
Holly Gracefield wrote a thoughtful post, “Security is Hard; Why are you laughing?“, discussing the tensions and differences in perception and reality between blue teams and red teams that may be impeding collaborative efforts to serve the interests of the business. She’s probably right about this,
“If we consider the blue team to only win if the penetration test comes back empty and the red team to have won if any significant security issue is found we’re hammering down the moral of the blue team and rewarding the red team, every time.“
Jerry Bell, who cheers for the defensive side over at the Defensivesecurity.org podcast, also explored some of the issues involved in his post “On The Sexiness of Defense“. He brings up the more immediate gratification, the visible outcome of one’s efforts, and the bragging rights and cool bravado that follow demonstrations of breaking things. He adds that defense is probably the opposite of all these points, and,
“Defense is complicated and often relies on the consistent functioning of a mountain of boring operational processes, like patch management, password management, change management and so on.”
Doesn’t sound very fun or sexy at all, really, although it might be personally satisfying to know that you are good at it. Hopefully your employer recognizes your effort, skills and endurance. Two of the points in Jerry’s post seem especially important.
If you know how to break it…
First, Jerry notes an underlying assumption, the basis for all patch-and-pray activity: that if you break something…,
“… you must know how to defend it. My observation is that many organizations seek out offence to help improve their defence.”
There isn’t much evidence that breaking things translates into building secure systems, so this might be one of the biggest and dumbest assumptions in all of infosec. A breaker needs to find only one of the possible ways to hack a system; what if there are many more?
In a post from a few years back, “The Search For Infosec Minds”, Ian Tibble writes,
“In 2012 we can make a clear distinction between protection skills and breaking-in skills. This is because as of 2012, 99.99… % of business networks are poorly defended. Therefore, what are “breaking-in skills”? So a “hacker” breaks into networks, compromises stuff, and posts it on pastebin.com. The hackers finds pride and confidence in such achievements.”
“However, what is actually required to break into networks? Of the 20000+ paths which were wide open into the network, the hacker chose one of the many paths of least resistance. In most cases, there is no great genius involved here. ”
“The thought process behind hiring a hacker is typically one of “she knows how to break into my network, therefore she can defend against others trying to break in”, but its quite possible that nothing could be further from the truth. In 2012, being a hacker, or possessing “breaking-in skills”, doesn’t actually mean a great deal.
“Protection is a whole different game.”
Problem: The Red Team Always Wins!
In his post, “What’s in a ‘Red Team’ and Why Aren’t Companies Deploying Them?“, Bob Lord says this:
“After all, the company has invested in security teams, products and processes. So the outcome should be a win for the blue team and a failure for the red team. (For those of you who are lost already, a red team is an independent group within a company’s security organization that challenges the effectiveness of its security defenses. The red team performs analysis of systems and process gaps. Then it attacks you, hopefully before a real adversary does.) Let’s set the record straight on this critical aspect of modern security programs…
“The red team always wins! Always.”
Despite this, Bob Lord implies in his post that enterprises are worse off without Red Team exercises. But many enterprises still aren’t convinced the investment is worthwhile, defenders still dread the exercises, and the idea that hacking is cool gets reinforced further, all while no quantifiable statement about an improved state of security can be delivered.
Many security pros take ethical or whitehat hacking, whether in the form of penetration testing or Red Team testing, very seriously, working on their craft and developing a breadth of knowledge and experience, as discussed in the article “So You Want To Be A Penetration Tester“.
The problem is that the odds are stacked heavily against the defender. It may not be as easy as taking candy from a baby, but unless artificial constraints or limitations are placed on a Red Team that really impede it, the Red Team almost always succeeds. Many smart Red Team or pen-testing pros say they can count on a few fingers (or fewer) the times they have failed to reach their goal, out of hundreds or thousands of engagements. A frustration they commonly express is that the target enterprise often fails to correct the weaknesses in its defences that the exercise uncovered.
Do you remember the classic paper “The 6 Dumbest Ideas in Computer Security” by Marcus Ranum? If you don’t know it, or are new to security, it’s a must read. One of the 6 dumb ideas is that hacking is cool.
“The #4th dumbest thing information security practitioners can do is implicitly encourage hackers by lionizing them.”
Marcus also says this in his paper:
“My prediction is that the “Hacking is Cool” dumb idea will be a dead idea in the next 10 years. I’d like to fantasize that it will be replaced with its opposite idea, “Good Engineering is Cool” but so far there is no sign that’s likely to happen.”
Unfortunately, 10 years have passed and the hacking-is-cool notion isn’t dead yet. It’s actually bigger than ever, yet there are also more, and bigger, breaches than ever. I guess even smart guys like Marcus don’t win ’em all, but what does that tell us about the effectiveness of all of this?
A possible explanation
Jerry Bell’s second point offers a simple, possible explanation for why this has happened:
“Penetrating a system is deterministic; we can prove that it happened. We get a sense of satisfaction. Getting a shell probably gives us a bit of a dopamine rush.“
Eureka! By George, I think he’s got it! That’s obviously what’s happening, isn’t it? We have to change the order of things in order to shift that dopamine rush to defenders. That’s the goal. We don’t want breakers to ever feel that little rush.
Marcus delivers a hint about how to go about this:
“Wouldn’t it be more sensible to learn how to design security systems that are hack-proof than to learn how to identify security systems that are dumb?“
Isn’t this really the bottom line? Isn’t a pen test or Red Team victory basically just confirmation that a system’s security is dumb? This is easier to accept once one understands that computer systems were designed to share data, not to be secure, and that current vulnerability-centric defences are problematic.
Red teaming is an artifact of the vuln-centric, whack-a-mole infosec model. To me this raises a question: if a constrained and limited Red Team always wins, isn’t it likely that unconstrained adversaries will always win too? They may not exploit the particular vulnerabilities the Red Team found, but there are always other ways. It’s an “it only takes one” (vulnerability) world. We need to think about things differently.
Perhaps defenders generally won’t win against adversaries until the Blue Team starts winning Red Team challenges.
The implication is that the Red Team’s function would change: Red Team challenges would be used to verify that one’s defensive posture is sound. Wouldn’t it be better if Red Team challenges confirmed that the Blue Team’s work implementing system and network controls has been done correctly and the crown jewels are protected?
Give defenders what they need!
There are numerous posts on the Trustifier blog about “defender advantage“, “stoning Red Teams“, trustworthy computing and more. Who else talks about these things?
Part 2 will look at the different outcomes obtained using Trustifier KSE trustworthy computing and TUX AI technology to sway the advantage to defenders and, in doing so, boost their self-image in the sexiness department.
So give defenders what they want! Dopamine!
Part 2 ——– >