Infosec: where’s the beef?

Wendy’s Restaurants ran a brilliant marketing campaign back in the 1980s that really boosted their popularity and market share. It was personified by the slogan “Where’s the beef?” Like little Clara Peller, who starred in those ads, infosec needs to take a look “under the bun” and insist on the inclusion of something a little meatier.

In other words, where are the security engineering and design, and the OS controls providing mechanisms for process separation, reference monitor capability (per-user authorization controls), kernel rule enforcement, and so on? Systems lack internal controls to enforce security policies and the rules governing the behaviours that take place on them. If there is no means to enforce security policies, does one really have security? We will be relegated to reactive security and whack-a-mole, permanently. As long as all we do is add attack surface, it’s attacker advantage.


Beefing up system security with KSE

For anyone not familiar with KSE (kernel security enforcer), it’s a COTS security subsystem that is easily loaded and “injects” internal controls into computer operating systems, augmenting them so that a paradigm shift in trusted computing base design and implementation is realized.

With KSE in place, the system is changed, not in functionality, but in the nature of the security mechanisms and controls that have become part of its core and are now available for use. Running trust models on previously untrustworthy systems becomes possible with the KSE engine.

KSE delivers trusted computing enforcement through mandatory access controls, labeled multilevel security, multilevel integrity, and multiple domain separation controls, providing the tools to implement and enforce need-to-know security across domains. No one needs to become a serious threat.
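To give a flavour of what those terms mean in practice, here is a minimal, purely illustrative Python sketch of Bell-LaPadula-style labeled MLS checks. It shows the textbook concept only, with invented labels and compartments; it is not KSE’s actual rule language or implementation.

```python
from dataclasses import dataclass

# Hierarchical sensitivity levels plus need-to-know compartments.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass(frozen=True)
class Label:
    level: str               # hierarchical classification
    compartments: frozenset  # need-to-know categories, e.g. {"CRYPTO"}

def dominates(a: Label, b: Label) -> bool:
    """a dominates b: a's level is at least b's, and a holds every
    compartment that b requires."""
    return (LEVELS[a.level] >= LEVELS[b.level]
            and a.compartments >= b.compartments)

# Bell-LaPadula rules: "no read up" and "no write down".
def may_read(subject: Label, obj: Label) -> bool:
    return dominates(subject, obj)

def may_write(subject: Label, obj: Label) -> bool:
    return dominates(obj, subject)

analyst = Label("SECRET", frozenset({"CRYPTO"}))
report = Label("SECRET", frozenset({"CRYPTO", "SIGINT"}))
memo = Label("UNCLASSIFIED", frozenset())

print(may_read(analyst, report))  # False: analyst lacks the SIGINT compartment
print(may_write(analyst, memo))   # False: writing down would leak information
```

The “no write down” rule is what stops a SECRET-cleared process from copying what it has read into an UNCLASSIFIED file, which is the heart of enforceable need-to-know.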


Upgrading the TCB

As long as one is going to look under the bun, one might as well check the state of the TCB as well. The trusted computing base, or TCB, is the part of the system that is supposed to do the security heavy lifting. Some may use the “everything” definition of TCB, such as the one provided by SearchSecurity here:

“The trusted computing base (TCB) is everything in a computing system that provides a secure environment. This includes the operating system and its provided security mechanisms, hardware, physical locations, network hardware and software, and prescribed procedures. Typically, there are provisions for controlling access, providing authorization to specific resources, supporting user authentication, guarding against viruses and other forms of system infiltration, and backup of data. It is assumed that the trusted computing base has been or should be tested or verified.”

Often, though, the TCB is just referred to as a microkernel or security kernel with relevant utilities incorporated. Oh, and that last sentence. You know what they say when you assume: you make an ass of U and… well, just you, actually. I don’t assume anything anymore. And perhaps you really shouldn’t either, according to Gunnar Peterson, in this noteworthy post.

“There is a burgeoning issue, a fatal flaw, in infosec processes that has the potential to result in increased downside risk to companies, and that issue is found at the intersection of assumptions and security products.”

As Brian Snow says in his classic paper, “We Need Assurance!”, not assumptions.

Wikipedia covers the TCB here, and the entry says,

“The careful design and implementation of a system’s trusted computing base is paramount to its overall security.”

This seems reasonable, given the purpose of the TCB: to prevent abuse of system privileges on the rest of the system outside the TCB.

An ideal example would be a TCB that enforces authorization policies, verified by reference monitors, to prevent insider abuse.




Unfortunately, on DAC systems we don’t have reference monitors to help prevent the insider threat, or much real enforcement either.
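To make the contrast concrete, here is a toy reference monitor in Python. Everything in it (the users, resources, and policy table) is hypothetical, and it sketches the general concept rather than any real kernel’s implementation. The point is complete mediation: every access is checked against a policy the requesting subject cannot rewrite.

```python
# Toy reference monitor: every access request is mediated against a
# policy outside the subject's control. All names are illustrative.
POLICY = {
    ("alice", "/payroll/db"): {"read"},
    ("bob",   "/payroll/db"): {"read", "write"},
}

class AccessDenied(Exception):
    pass

def reference_monitor(user: str, resource: str, action: str) -> None:
    """Raise unless the (user, resource) pair explicitly grants the action."""
    allowed = POLICY.get((user, resource), set())
    if action not in allowed:
        raise AccessDenied(f"{user} may not {action} {resource}")

def read_file(user: str, resource: str) -> str:
    reference_monitor(user, resource, "read")  # checked on every access
    return f"<contents of {resource}>"

print(read_file("alice", "/payroll/db"))  # permitted by the policy
# read_file("alice", "/etc/shadow")       # raises AccessDenied
```

On a plain DAC system, by contrast, the owner of a resource can in effect rewrite that policy table at will, which is precisely what makes the insider problem so hard.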




To summarize the Wikipedia entry: the more rigorous definitions of the TCB are found in the TCSEC Orange Book; any software in the TCB must be self-protecting; the TCB should be as compact as possible for auditability; and just because the TCB must be trusted, that doesn’t mean the TCB is trustworthy. Also mentioned is that, occasionally, formal software verification, which uses mathematical proof techniques, can be used to show the absence of bugs. You can read more there if you like.
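Since formal verification comes up in that summary, here is about the smallest possible illustration of what a mathematical proof about code looks like, written in Lean 4. The checker and the property are invented for this example; nothing here is specific to KSE.

```lean
-- Illustrative only: sensitivity levels as natural numbers, a "no read up"
-- checker, and a machine-checked proof that an approval from the checker
-- really does mean the levels are ordered correctly.
def canRead (subjectLevel objectLevel : Nat) : Bool :=
  decide (objectLevel ≤ subjectLevel)

-- Soundness: if canRead returns true, the object's level is dominated.
theorem canRead_sound (s o : Nat) (h : canRead s o = true) : o ≤ s :=
  of_decide_eq_true h
```

The point is that `canRead_sound` is verified by the proof assistant: the property is demonstrated, not assumed.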

Maybe I’m missing the memos, but are there commonplace discussions of these issues for COTS systems? They seem to be eluding me. Perhaps such issues remain reserved for high-security/assurance and national-security domains? We should admit that if we were having these discussions, our likely conclusion would be that for COTS systems, OS controls and enforcement are inadequate.

Recall the Schneier blog comment by Dirk Praet (the topic was Edward Snowden) that I referenced in my previous post here:

“If such a scenario unfolds in a high-security environment with inadequate technical controls on classified data and systems (MLS, MAC, RBAC etc.), even a low-grade nobody can become a serious threat.”
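Of the controls Praet lists, MLS and MAC appeared in the sketches above. The third, RBAC, attaches permissions to roles rather than to individual users. A toy illustration in Python, with hypothetical role and permission names:

```python
# Toy RBAC: users map to roles, roles map to permissions, so a user
# holds a permission only through some role. Names are illustrative.
ROLE_PERMS = {
    "auditor":  {"read_logs"},
    "sysadmin": {"read_logs", "rotate_keys"},
}
USER_ROLES = {"dirk": {"auditor"}}

def has_permission(user: str, perm: str) -> bool:
    """True if any of the user's roles grants the permission."""
    return any(perm in ROLE_PERMS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(has_permission("dirk", "read_logs"))    # True: the auditor role
print(has_permission("dirk", "rotate_keys"))  # False: not a sysadmin
```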


Although it would be useful if they were, such protections are not found as standard components in the COTS (commercial off-the-shelf) systems widely used today. As I said above, KSE does provide these controls to systems. They are added rule enforcement mechanisms made possible by KSE’s algebraic modelling. Any system running KSE becomes a reference monitor, enforcing rules with a beefed-up TCB.



While some technologies may be able to provide parts of this, they all suffer from a lack of capability, complexity of implementation, or both. For example, despite some progress, complexity is still an issue for SELinux; no Linux user can use SELinux without a fair bit of additional training or education, at least not without risking bricking their own system. If that’s the case, what’s in it for a Windows admin?

Let me answer that now. KSE is a user-friendly equivalent of SELinux, and KSE rules work uniformly across platforms. Right now, it takes about five minutes to drop KSE on a target system, and one can be ramping up security with tailored, enforceable security rules in short order. The TUX GUI will automate that and shorten the time even further.

Perhaps one will view this post as simply promoting KSE innovation. Another way to view it is as a frame for “asking for evidence”. As you will see in part 2, the algebraic modelling that delivers these security mechanisms in KSE provides this and other real advantages.

Infosec needs systems that are verifiably trustworthy, and which provide trusted execution environments, in order to enable Defender Advantage. As you will see, the algebraic modelling that provides these mechanisms also verifies them, and that is the biggest advantage.


Part 2: Why use algebra in infosec? →


