Evaluating Logical Security

Written by: Andrew Jamieson

May 12, 2016 - How to get that 'gut feeling' for logical security?

I feel old. It’s not really my fault, or at least not totally my fault. I blame the computers. You see, working in the computing industry is a great way to make yourself feel old. New technologies, new advances, new vulnerabilities, all these things come around at a pace that makes keeping up difficult for the best of us, but if you don’t keep up, before you know it you’re known as the guy who used to program in COBOL (not that I’m that old!). Of course, some things do seem permanent: take buffer overflows, for example. They’ve been around forever. Haven’t they? Actually, no, and that just serves to make me feel old as well. You see, buffer overflows have not really been around forever; it just feels that way. The first documented evidence of an understanding of how buffer overflows work dates from 1972. The same year I was born, although I am sure that’s just a coincidence.

So, according to my own internal logic, I’ve been around forever. No wonder I feel old.
Of course, it’s actually only 44 years ago. Which is a long time (trust me!), but also quite short by some measures. Take physical security, for example. UL has been evaluating the safety of electrical systems for almost three times that long, and as humans we have a much, much longer relationship with physical security and safety. The whole idea of ‘logical’ security is a new thing, relatively speaking. So, perhaps I am not that old after all.

It is because of our extended relationship with physical security that we’re pretty good at assessing risk in this regard. Don’t get me wrong, we often ignore those risks – just check what’s trending on YouTube for the latest example of that – but we generally know when something is safe or not. Indeed, it’s the fact that someone ignored that internal voice of reason we all have that makes for a viral video. We know how to assess the physical security of things because physical security has been important to us for as long as we’ve been around as a species. When we see Fort Knox, we know it’s a good place to store things we need to keep safe. It looks safe. When we see someone propping one ladder on top of another, we know that’s a bad idea. It does not look safe. This basic ‘gut feel’ evaluation of physical security is burnt into our DNA. Or at least it is for most of us.


The problem is, how do we assess logical security when it’s not something that has been (literally) beaten into us over millennia? How do we get that same ‘gut feel’ that we have for physical systems? There have been attempts. We have schemes such as the Common Vulnerability Scoring System (CVSS), which allows computer vulnerabilities to be ranked so that they can be compared. This scheme takes information about the way a logical flaw is exploited, how much privilege you need, what the vulnerability will get for you if you exploit it, and so on, and converts this into a number. Which is great, but it can only be applied to vulnerabilities that have already been discovered. It does not allow for the assessment of a ‘level’ of security of a system. It does not give us our ‘gut feel’ for how secure a system could be, even if you can’t find any current vulnerabilities.
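
To make that concrete, here is a toy sketch in Python of the kind of conversion CVSS performs: a handful of attributes of a known flaw collapsed into a single, comparable number. The weights and arithmetic below are my own inventions for illustration only; they are not the actual CVSS equations.

```python
# Illustrative only: collapse a few CVSS-style attributes of a known flaw
# into one number so vulnerabilities can be ranked against each other.
# The weights and arithmetic are invented for this sketch; they are NOT
# the real CVSS equations.

WEIGHTS = {
    "attack_vector": {"network": 0.85, "adjacent": 0.62, "local": 0.55, "physical": 0.20},
    "privileges":    {"none": 0.85, "low": 0.62, "high": 0.27},
    "impact":        {"high": 0.56, "low": 0.22, "none": 0.0},
}

def toy_vuln_score(attack_vector: str, privileges: str, impact: str) -> float:
    """How the flaw is reached, the privilege it needs, and what it gets you,
    reduced to one 0-10 figure purely to illustrate the concept."""
    exploitability = WEIGHTS["attack_vector"][attack_vector] * WEIGHTS["privileges"][privileges]
    return round(min(10.0, 10.0 * (exploitability + WEIGHTS["impact"][impact])), 1)

# A remote, unauthenticated, high-impact flaw outranks a local, high-privilege,
# low-impact one -- but both scores describe vulnerabilities we have already
# found, which is exactly the limitation described above.
print(toy_vuln_score("network", "none", "high"))  # higher
print(toy_vuln_score("local", "high", "low"))     # lower
```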

However, as luck would have it I have a potential solution to propose. I know: Who would have thought I’d want to talk about something I’m working on, in a post I’m writing, on a company blog? Wonders will never cease. But humour me – I’m old, remember?

To get to this ‘gut feel’ for logical security, we first need to find a way to describe the logical system(s) we are evaluating. Of course, the problem is that these systems may be quite different – it could be a PC, or a ‘smart’ diaper (yes, really). It could use several different 64-bit multi-core computing systems, or a single 4-bit microprocessor. We need to be able to compare these systems, and others of types we can’t even imagine yet, objectively.

How do we do this? What are the commonalities? Well, what does any computing system do? Compute?! Actually, yes, that’s not such a bad place to start: it computes. Let’s run with that. So, what do you need to ‘compute’? You need inputs, and some form of internal computing method. You also need an output of some kind; otherwise it’s not really a computer, it’s just an entropy sink (even a heater provides heat as an output).

Therefore we can say that every computing system in the world, ever, can be defined by its inputs, outputs, and internal computing mechanisms. This gives us our ‘prototype’, the model we can use to map our risk assessment onto. What about that risk? How do we define that? Well, let’s go back to those interfaces. Every interface of a logical system increases its code base, as the system needs to do something with the data that is sent through that interface (otherwise we’re back to our entropy sink). In computer-hacker-geek-speak – a language I am not totally fluent in, but in which I know enough to order a beer and find my way to the toilet – the code exposed through these interfaces is commonly referred to as the system’s ‘attack surface’. This ‘attack surface’ represents the potential for vulnerabilities in a system; the larger the attack surface, the more risk.
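
If you want to see that prototype written down, here is a minimal sketch in Python. The class and field names are mine, purely for illustration: any system, from a PC to that smart diaper, reduced to its inputs, outputs, and architecture, with the crudest possible attack-surface count.

```python
# A minimal sketch of the 'prototype' above: a computing system described
# only by its inputs, outputs, and internal architecture. All names here
# are my own, for illustration.

from dataclasses import dataclass, field

@dataclass
class Interface:
    name: str          # e.g. "keypad", "NFC", "Ethernet"
    privileged: bool   # does data from this interface reach privileged code?

@dataclass
class SystemModel:
    inputs: list[Interface] = field(default_factory=list)
    outputs: list[Interface] = field(default_factory=list)
    architecture: set[str] = field(default_factory=set)  # e.g. {"MMU", "secure element"}

    def attack_surface(self) -> int:
        # Crudest possible measure: every interface means more code that must
        # handle external data, so every interface widens the attack surface.
        return len(self.inputs) + len(self.outputs)

payment_terminal = SystemModel(
    inputs=[Interface("keypad", True), Interface("NFC", False), Interface("Ethernet", False)],
    outputs=[Interface("display", False)],
    architecture={"MMU", "secure element"},
)
print(payment_terminal.attack_surface())  # 4
```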

So far, there’s nothing new here.
However, that’s not the full picture. The computing architecture of the system can actually reduce these risks: systems can physically separate code and data memory so that one cannot be made to overrun into the other, or the operating system can help to protect memory locations through clever use of software protections. Security assets can be stored in separate processing areas that can’t be read by ‘normal’ code. There are lots of clever things we’ve built into computing systems over the years to help with security, to reduce the impact of increasing attack surfaces. It can go the other way too; the architecture of a system can make what would be a relatively benign flaw in one implementation quite severe in another.

This overall view of a system – assessing both the architecture and the attack surface – is what I am going to call the ‘vulnerability surface’. We just don’t yet have the metrics to apply to get that ‘gut feel’ for the scope of that vulnerability surface.

What I am proposing, then, is a methodology to apply those metrics. We assess a system by enumerating its logical inputs and outputs and its type of computing architecture, and assign points. Each logical interface that a system has subtracts points. If that interface runs at the same level of privilege as an asset we are trying to protect, we apply a multiplier to subtract even more points. If the system has specific architecture features that help protect against attacks, then we add points. This gives us an overview of the ‘Logical Security Posture’ of a system, captured in a single number that summarizes the vulnerability surface of the system. The higher the number, the better. It’s not quite the one-look gut feel we have for physical security; it’s more of a short-hand for a full penetration test that can be applied quickly and without subjectivity.
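
To make the shape of that scoring concrete, here is a minimal sketch in Python. The point values, the privilege multiplier, and the feature bonuses are placeholders I have picked purely for illustration – the real figures are exactly what is still to come.

```python
# A sketch of the scoring described above. All numbers here are placeholders:
# interfaces subtract points, sharing privilege with a protected asset
# multiplies the penalty, and protective architecture features add points back.

INTERFACE_PENALTY = 10      # hypothetical: points lost per logical interface
PRIVILEGE_MULTIPLIER = 3    # hypothetical: extra penalty when an interface shares
                            # its privilege level with a protected asset
FEATURE_BONUS = {           # hypothetical: points gained per architectural protection
    "separate code/data memory": 15,
    "OS memory protection": 10,
    "isolated key storage": 20,
}

def logical_security_posture(interfaces, architecture_features, baseline=100):
    """interfaces: list of (name, shares_privilege_with_asset) tuples.
    Returns one number summarising the vulnerability surface of the system;
    the higher the number, the better the posture."""
    score = baseline
    for name, shares_privilege in interfaces:
        penalty = INTERFACE_PENALTY
        if shares_privilege:
            penalty *= PRIVILEGE_MULTIPLIER
        score -= penalty
    for feature in architecture_features:
        score += FEATURE_BONUS.get(feature, 0)
    return score

# Same interfaces, different architectures: the hardened design scores higher.
ifaces = [("USB", False), ("Wi-Fi", False), ("debug port", True)]
print(logical_security_posture(ifaces, []))                                  # 50
print(logical_security_posture(ifaces, ["separate code/data memory",
                                        "isolated key storage"]))            # 85
```

The shape is what matters here: interfaces pull the number down, sharing privilege with an asset pulls it down faster, and protective architecture pushes it back up.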

What are these numbers, I hear you cry! They’ll be coming in part 2 of this post :)

Disclaimer

These are the personal opinions of UL’s employees and its guests and should not be misunderstood as representing the opinion of UL’s clients, suppliers or other relations.