The previous post discussed the NIST SP 800-160 document, which has reached its second draft release. Its general purpose is to reduce attack surface so that systems are more inherently secure and resilient in the face of adversaries. Its objective is described by lead author Ron Ross of NIST:
“Increasing the trustworthiness of systems is a significant undertaking that requires a substantial investment in the requirements, architecture, design, and development of systems, components, applications, and networks—and a fundamental cultural change to the current “business as usual” approach. Introducing a disciplined, structured, and standards-based set of systems security engineering activities and tasks provides an important starting point and forcing function to initiate needed change. The ultimate objective is to obtain trustworthy secure systems that are fully capable of supporting critical missions and business operations while protecting stakeholder assets, and to do so with a level of assurance that is consistent with the risk tolerance of those stakeholders.”
Remember the title of the document:
Systems Security Engineering: Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems
The key words here are “trustworthy secure systems”. The goal is to raise the bar for system security and assurance from the current low to the highest state possible: trustworthy computing. The term has been applied to computing systems that are inherently secure, available, and reliable. From Wikipedia:
“The National Security Agency (NSA) defines a trusted system or component as one “whose failure can break the security policy”, and a trustworthy system or component as one “that will not fail“.”
Unfortunately, this term is most commonly associated with a Microsoft marketing campaign. If you think about it even a little: have you ever heard of a Windows system that has been verified as inherently secure?
This assumes a trust model in which statements about security are verifiable and consistent, backed by mathematical models and formal methods. Systems feature security primitives that are mathematical objects; they can’t be bent. This is a far cry from the current practice of granted or assumed trust and estimated risk. With trustworthy computing, the risk is known and controlled. There is a requirement to control “what needs to be controlled” in order to achieve Defender Advantage. Trustworthy computing delivers it.
With trustworthy computing, one does not fall back to discussions of hygiene, best common practices, platitudes, or infosec clichés. One is able to proceed knowing that a system has an inherent, verifiable level of security. This is the mistake made in this article, which starts out well but falls back into the current model.
“However, it should be looked at as strategies, and more of a checklist or starting point that can be utilized by organizations to introduce a cyber hygienic and security-centric culture. It can also assist in creating best practices for entities.”
No. This is simply the wrong objective. We don’t want to perpetuate the model that is failing; we want and need better outcomes. The purpose of inherent security is that everyone can get to work and not suffer disruptions due to hacking, or end up putting out fires, even when users, code, and applications may not themselves be verifiable as trustworthy. It may seem like a new concept, but the goal is to have systems that are “just secure”.
(Appendix) F.4 APPROACHES TO TRUSTWORTHY SYSTEM DEVELOPMENT
As I wrote last post, the move to trustworthy systems security engineering is a good idea, and a necessary one, but shifts in thinking such as those prescribed by NIST SP 800-160 will not be easy. As you will see, KSE by Trustifier will create an easier path to the goal of trustworthy systems and computing. The NIST document appendix listed in this section heading introduces three overarching strategies that need to be applied, individually or in combination, in the development of secure systems. These strategies themselves differentiate trustworthy computing from hygiene and best common practices. I’m focusing on them because they present the biggest technical challenges to trustworthy computing, and their absence is something hygiene can never compensate for.
Section F.4.1 Reference Monitor Concept
The first item to take note of in the appendix is the reference monitor. It is described as follows:
“The reference monitor concept provides an abstract security model of the necessary and sufficient properties that must be achieved by any system mechanism claiming to securely enforce access controls… can also be used to provide assurance that the system has not been corrupted by an insider. The abstract instantiation of the reference monitor concept is an “ideal mechanism” characterized by three properties: the mechanism is tamper-proof (i.e., it is protected from modification so that it always is capable of enforcing the intended access control policy); the mechanism is always invoked (i.e., it cannot be bypassed so that every access to the resources it protects is mediated); and the mechanism can be subjected to analysis and testing to assure that it is correct (i.e., it is possible to validate that the mechanism faithfully enforces the intended security policy and that it is correctly implemented).
“While abstract mechanisms can be ideal, actual systems are not. The reference monitor concept provides an “ideal” toward which system security engineers can strive in the basic design and implementation of the most critical components of their systems, given practical constraints and limitations.”
The reference monitor concept has been a known, recommended security mechanism since 1972. It is essentially an authorization engine that enforces need-to-access and need-to-know rules, and COTS systems do not have one. This is the number one reason why insider threat is a problem today. In NIST’s mind, though, the reference monitor is only an abstract concept. They don’t have a workable model that can be applied across the board, so they are limited to presenting it as an ideal to strive for in system security design.
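To make the abstraction concrete, here is a minimal sketch of the reference monitor concept in Python. This is purely illustrative: the class name, policy format, and example subjects are my own invention, not anything from NIST SP 800-160 or any real implementation. It shows how the three “ideal mechanism” properties map onto code: the policy is frozen after load (tamper-proof), every access decision goes through a single `check()` function (always invoked), and the mediation logic is small enough to inspect (analyzable).

```python
from types import MappingProxyType

class ReferenceMonitor:
    """Toy reference monitor: mediates every (subject, resource, action) request."""

    def __init__(self, policy: dict[tuple[str, str], set[str]]):
        # Property 1 (tamper-proof): freeze the policy so it cannot be
        # modified after construction.
        self._policy = MappingProxyType(
            {key: frozenset(actions) for key, actions in policy.items()}
        )

    def check(self, subject: str, resource: str, action: str) -> bool:
        # Properties 2 and 3 (always invoked, analyzable): a single, small
        # mediation function with default-deny semantics.
        return action in self._policy.get((subject, resource), frozenset())

# Hypothetical need-to-access / need-to-know rules.
monitor = ReferenceMonitor({
    ("alice", "/records/patient-17"): {"read"},
    ("backup", "/records/patient-17"): {"read", "copy"},
})

assert monitor.check("alice", "/records/patient-17", "read")
assert not monitor.check("alice", "/records/patient-17", "write")   # default deny
assert not monitor.check("mallory", "/records/patient-17", "read")  # unknown subject
```

The default-deny stance is the important design choice: anything the policy does not explicitly grant is refused, which is what makes the mechanism's behavior verifiable rather than assumed.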
KSE is a reference monitor. Any system that implements the KSE security sub-system becomes a mathematically verifiable reference monitor.
From TCSEC (Orange Book) days, KSE would be a B3 reference monitor. KSE is a breakthrough design that can be dropped onto systems to deliver this security mechanism where needed.
Section F.4.2 Defense in Depth
While defense in depth remains a major tenet of infosec, note its limitations:
“… there is no theoretical basis to assume that defense in depth alone could achieve a level of trustworthiness greater than that of the individual security components used. That is, a defense in depth strategy is not a substitute for or equivalent to a sound security architecture and design that leverages a balanced application of security concepts and design principles.”
Be assured that the combination of trustworthy security mechanisms and defense in depth can be very powerful.
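The quoted limitation can be sketched in a few lines. In this illustrative example (the layer names and request fields are hypothetical), defense in depth is simply a conjunction of independent checks: a request must pass every layer, but stacking layers never makes any single layer more trustworthy than it already is, which is exactly the NIST point.

```python
# Each layer is an independent check that can veto a request.
def network_layer(request: dict) -> bool:
    # e.g., only traffic from an allowed internal subnet passes
    return request.get("source_ip", "").startswith("10.0.")

def auth_layer(request: dict) -> bool:
    return request.get("authenticated", False)

def authorization_layer(request: dict) -> bool:
    # reads are open to authenticated users; anything else needs admin
    return request.get("action") == "read" or request.get("role") == "admin"

LAYERS = [network_layer, auth_layer, authorization_layer]

def permit(request: dict) -> bool:
    # Defense in depth as conjunction: ANY layer can deny. The composite
    # is only as sound as each individual layer -- a flaw in one layer is
    # not repaired by the presence of the others.
    return all(layer(request) for layer in LAYERS)

print(permit({"source_ip": "10.0.3.7", "authenticated": True, "action": "read"}))    # True
print(permit({"source_ip": "192.168.1.5", "authenticated": True, "action": "read"}))  # False
```

Pairing layers like these with mechanisms that are individually trustworthy is what makes the combination powerful; the layering alone contributes nothing to the trustworthiness of any one check.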
Section F.4.3 Isolation
This section uses the term isolation as a grab-all category that includes physical and network domain separation, and so on. It touches on separation using VMs. It is also referred to as compartmentalization. The topic of OS process separation is important and should be effectively delivered by a separation kernel. You can read more here. The document says this:
“Two forms of isolation are available to system security engineers: logical isolation and physical isolation. The former requires the use of underlying trustworthy mechanisms to create isolated processing environments.”
“Researchers continue to demonstrate that isolation for processing environments can be extremely difficult to achieve, so stakeholders must determine the potential risk of incomplete isolation, the consequences of which can include covert channels and side channels.”
How often does the subject of separation kernels come up in discussions of the internet of things and embedded devices such as medical implants? This mechanism is a definite requirement that is too often left out of the discussion.
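The core idea a separation kernel enforces can be sketched very simply. In this hypothetical model (the compartment names and flow table are mine, not KSE's or NIST's), every subject lives in a compartment, and information may flow between compartments only where the separation policy explicitly allows it; everything else stays isolated by default.

```python
# Illustrative separation policy: explicitly permitted, directional flows.
ALLOWED_FLOWS = {
    ("sensor", "logger"),   # sensor readings may flow to the audit logger
    ("logger", "archive"),  # logger may flow to cold storage
    # note: no entry allows "sensor" <-> "network", so they stay isolated
}

def may_communicate(src_compartment: str, dst_compartment: str) -> bool:
    # Default-deny: isolation holds unless a flow is explicitly permitted.
    return (src_compartment, dst_compartment) in ALLOWED_FLOWS

assert may_communicate("sensor", "logger")
assert not may_communicate("logger", "sensor")    # flows are directional
assert not may_communicate("sensor", "network")   # compartments stay isolated
```

A real separation kernel must of course enforce this mediation below the OS process layer, and (as the NIST text warns) close covert and side channels that a policy table alone says nothing about; the sketch only captures the default-deny flow model.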
The good news is that this capability is also delivered by KSE. KSE on a system, device, or node would qualify to pass EAL4+ under the Common Criteria using the NSA’s canned protection profile for separation kernels.
KSE delivers full separation kernel capability (isolation plus augmentations) when implemented on a system OS.
Also, keep in mind that systems still need some other mechanisms for self-protection of their own security and integrity. KSE looks after these things as well, but this topic is beyond the scope of this post.
The goals and objectives of the NIST SP 800-160 document are worthy. NIST has recognized the need to build more secure systems, and the document is worth reading to understand its overall message. One wonders whether these goals should eventually become security standards for IoT cyber security approval bodies.
However, there is a difference between knowing you really need a reference monitor implementation and building one. If it were easy, these strategies would be in common use today, but they are not. And the problem is a compound one: the goal is not only trustworthy security mechanisms and secure systems; they must also be designed for usability and facilitate security management. They must be pragmatic for widespread use and cost-effective. They must enable the activities of the business, not impede them. A tall order.
KSE delivers on all counts. We’ve been doing this since 2003. It turns out that one can use algebraic modelling to solve very difficult problems that have yet to be resolved using traditional approaches. With KSE on a device, it will enforce all rules pertaining to access to and setting of device settings (anti-tampering), which other devices/nodes it may interact with (network level), and who/what/when has access to any data the system or device holds or generates (need-to-know).
So do you get it? Really get it?
Vendors need to raise the bar and ship product that has been designed and engineered to be trustworthy and inherently secure. That is what trustworthy security and high assurance is. It’s a worthwhile goal.
Do we really want nurses, doctors, and even hospital network admins testing and monitoring devices in their environments in case they can be used as springboards into the main network? Mind you, medical devices should not be gatekeepers for insecure networks either; the networks themselves have to move to trustworthy nodes as well. This is what eliminating attack surface means.
Buyers need to push vendors to adopt trustworthy security engineering principles. We can help vendors: KSE can be embedded into an OS prior to development, its security then activated for QA testing, and the product shipped with greater security. Think this is expensive? It isn’t, really, if you think about it. Reducing the complexity of trustworthy computing with a drop-on security sub-system, rather than building it yourself from scratch, has clear advantages. And aren’t emergency patches, recalls, lawsuits, and reputation-salvaging PR campaigns potentially expensive as well?
I’ve written a fair amount about KSE’s advanced security features, lowered complexity and admin overhead, and the resulting lower total cost of ownership here, here and here, so feel free to explore and learn more if you’re interested. Just know that the NIST document is prescribing the right direction, for the right reasons, and that KSE is a technology innovation that enables you to put these ideal goals and standards into practice.
Next post: KSE is a reference monitor →
Resources & Related Reading