Let me be clear. I believe people understand what risk is and the THEORY behind risk. But are organizations doing risk impact assessments correctly? In my opinion, no. That's the reason I wrote an RIA SaaS application, which I've just finished.
First off, we all talk about things like Confidentiality, Integrity, and Availability (CIA). But we talk about them as if they, themselves, define risk, and they don't. They are all indicators of what the END RESULT would be IF a threat were to occur. I have one client that refers to CIA as “sensitivity,” but CIA is built into the consequences of a threat occurring, so, if that's the case, why are we asking about it separately? Is CIA a set of requirements, or an end result that you have to plan for?
BTW, if you are asking about things like Integrity, don't you have to be able to define them? I used to think of Confidentiality as Information Classification. But many organizations don't have a classification system, even though they DO understand what would happen if their information were to go into the open. So, rather than asking people what the Confidentiality requirement is, why not simply ask, 'What would happen if this information went public?' Our definitions are, in fact, duplications of determining threat consequences, so why ask the question twice?
Oh, and if we are asking about Integrity, how do you define it? I've asked a LOT of security professionals and none can actually give me a quantitative measurement of it. For the longest time, I thought Integrity was like being pregnant (you're either pregnant or you're not) – you either had integrity or you didn't. But then I realized that what we actually mean by Integrity is accuracy: how important it is for the information being handled to be accurate.
And far too often, we view Risk as consequence alone, rather than likelihood combined with consequence. It's like when I first got into Security back in 2000. The first time I saw an anomaly in the logs, I thought it was a hacker or an attack. But it wasn't. The second time, I thought the same thing and was hyped up to react as if it were a real attack. It took a few more occurrences before I came to realize that things just happen. I now cringe when I hear a news report talking about hacking – it's been so sensationalized that I've become tone-deaf to it.
Many times, we take a single-shot approach to RIAs: do it once and then it's done. That may have come about because RIAs weren't done until the end of the project, but doesn't that, in itself, introduce risk? So, as part of my RIA SaaS solution, I decided to take a different approach – one based on Architecture.
First off, remember what Risk is defined as:
Risk = Likelihood of Threat occurrence × Consequence IF the Threat occurred
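The formula above can be sketched in a few lines of code. This is a minimal illustration, not the author's actual implementation: the 1–5 scales and the low/medium/high thresholds are assumptions made for the example.

```python
# Minimal sketch of Risk = Likelihood x Consequence.
# The 1-5 scales and the bucket thresholds below are illustrative
# assumptions, not a standard or the RIA SaaS product's actual values.

def risk_score(likelihood: int, consequence: int) -> int:
    """Risk = likelihood of threat occurrence x consequence if it occurs.

    Both inputs are assumed to be on a 1 (lowest) to 5 (highest) scale.
    """
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("likelihood and consequence must be 1-5")
    return likelihood * consequence

def risk_level(score: int) -> str:
    """Bucket a 1-25 score into low/medium/high (thresholds are assumed)."""
    if score <= 6:
        return "low"
    if score <= 14:
        return "medium"
    return "high"

# An unlikely threat (2) with a severe consequence (5) scores 10 -> "medium",
# which is the point: consequence alone would have screamed "high".
```

The last comment is the article's argument in miniature: judging by consequence alone overstates the risk of unlikely threats.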
But most solutions aren't fully baked at the beginning of a project. They are concepts, maybe with some pre-chosen products that are going to be put into place, but the architecture hasn't been put together yet. So, rather than taking a 'Wow, the Risk is HUGE on this!' approach, I take more of a Project Management approach to Risk.
If you were to ask a Project Manager what the cost of a project is going to be right when it starts, there's no way they could give you a fixed price. They'll give you an Order of Magnitude: a project will be budgeted at +/- 100% at the beginning because there are so many unknowns. Then, as the project moves forward, the budget gets more and more accurate. It's the same with Risk. An RIA done at the outset of a project is going to be an Order of Magnitude estimate because you don't know enough yet. But, as the project moves on and a solution comes into being, the budget (and the Risk) becomes more stable. It becomes an iterative approach to doing RIAs.
Plus, and this is what really gets me, how in the world can you guess what the potential threats to a project are? Sure, there are some obvious ones – a power outage, for example. But I have a list of around 250 threats to choose from, ranging from deliberate, to accidental, to natural, and I've come to learn that those 250 threats are not even close to being complete.
Looking at threats thus becomes navel-gazing. What I've done with my RIA SaaS is to work from the solution OUTWARDS rather than from the entire world INWARDS. All solutions have some sort of authentication built in now, but different types of authentication carry different levels of risk (yes, they do!). Think about it this way:
- Anonymous vs username / password
- Authentication against a local store vs Active Directory
- Active Directory vs SQL database store
- Active Directory vs SSO
- SSO vs Federation
- Username/Password vs 2 Factor
Each authentication approach has a different level of risk. And there are many more areas of a solution that may have different risk levels: Is it a Web-based app vs Client/Server vs Mobile? Does it have a middleware component (that will add risk)? Is there a Test/Dev environment, a QA environment, both, or neither? How are users set up (provisioned, by the help desk, by the solution group, or Self Service)?
Different components have different risks. But if you logically group the components in a solution and assign a risk score to each, you can add up the scores and NOW you have a true representation of the likelihood of a threat occurring for that solution. And you can set bars that indicate what you consider a low vs medium vs high threat level. You'll have different questions for a new solution vs an upgrade vs a review of existing infrastructure, but you'll actually be able to measure likelihood by scoring the different components. And, as the solution becomes clearer, you can adjust the architecture to lower the likelihood, or adjust the likelihood score as more information becomes available.
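The component-scoring idea above can be sketched as follows. Every score, component name, and threshold here is a hypothetical example chosen to illustrate the mechanism, not a calibrated model or the scores the RIA SaaS actually uses.

```python
# Sketch of working from the solution OUTWARDS: assign a likelihood score
# to each architectural component, sum them, then bucket against adjustable
# "bars". All names and numbers are illustrative assumptions.

AUTH_SCORES = {
    "anonymous": 5,          # no identity at all -> highest likelihood
    "local_store": 4,
    "sql_store": 3,
    "active_directory": 2,
    "sso": 2,
    "federation": 3,
    "two_factor": 1,         # two-factor lowers likelihood the most
}

COMPONENT_SCORES = {
    "web_app": 3,
    "client_server": 2,
    "mobile": 4,
    "middleware": 2,             # middleware adds risk, per the text
    "no_test_env": 3,            # no Test/Dev environment raises likelihood
    "self_service_provisioning": 2,
}

def likelihood_score(auth: str, components: list[str]) -> int:
    """Sum per-component scores to estimate the solution's overall likelihood."""
    return AUTH_SCORES[auth] + sum(COMPONENT_SCORES[c] for c in components)

def likelihood_level(score: int, low: int = 5, high: int = 10) -> str:
    """The low/high 'bars' are the adjustable thresholds described above."""
    if score <= low:
        return "low"
    if score <= high:
        return "medium"
    return "high"
```

Because each component contributes a separate score, swapping anonymous access for two-factor authentication visibly lowers the total, which is exactly the "adjust the architecture to lower likelihood" feedback loop the text describes.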
That brings us to the Consequences part of the definition of Risk. When I worked at BC Hydro a decade ago, they had a Risk Matrix with 5 areas of measurement: Risk to Safety, Risk to the Environment, Risk of Reputational Loss, Risk of Financial Loss, and Risk to Availability. Each area of measurement had specific levels defined. And I've seen numerous similar risk matrices over the years. That tells me that organizations understand how to measure consequences, just not how to measure likelihood. So providing a combination of lows/mediums/highs is much easier here.
Now, if you have a standardized set of risk likelihood measurements (like I talked about above) and a standardized set of consequence measurements (which have been around for a while), doesn't that make it much easier to create a Quantitative measurement of Risk in the RIA? And couldn't you build an application around that using nothing but pull-down boxes?
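Combining the two standardized sets can be sketched like this. The five-level scale, the multiplication, and the area names (borrowed from the BC Hydro-style matrix mentioned above) are illustrative assumptions, not the product's actual scoring model.

```python
# Sketch: combine one standardized likelihood level with per-area consequence
# levels, the way a pull-down-box RIA form might. Levels, areas, and the
# index-product scoring are all illustrative assumptions.

LEVELS = ["very low", "low", "medium", "high", "very high"]

def area_risk(likelihood: str, consequence: str) -> int:
    """Quantitative score for one area: (likelihood rank) x (consequence rank)."""
    return (LEVELS.index(likelihood) + 1) * (LEVELS.index(consequence) + 1)

def overall_risk(likelihood: str, consequences: dict[str, str]) -> dict:
    """Score every consequence area against the solution's likelihood level."""
    scores = {area: area_risk(likelihood, lvl) for area, lvl in consequences.items()}
    return {"per_area": scores, "max": max(scores.values())}

# The kind of pull-down answers the text describes, one per matrix area:
answers = {
    "safety": "low",
    "environment": "very low",
    "reputation": "high",
    "financial": "medium",
    "availability": "medium",
}
```

Taking the maximum per-area score as the headline number is one common convention for matrices like this; a tool could just as easily report every area separately.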
That's what I've done with my RIA SaaS solution (just completed). The total number of questions ranges between 22 and 26, depending on the type of project or area being reviewed. Output? Take it and put it into ServiceNow or Remedy or whatever your ticketing system is and you have a workflow. OR build an automated workflow around the RIA so that it speeds the entire thing up. And THAT deals with a core issue organizations have: how long it takes for an RIA to be done. No more spreadsheets. If the questions aren't filled with Security or Technical jargon, you can just give the RIA to a project person and, if the score reaches a certain level, a security person can be assigned. Add spot checks on the various RIAs and NOW the entire organization is responsible for security AND you've sped up the process.
And isn't that what IT is supposed to be about?
Hope this helps and, if you are interested, I’m looking for Beta Testers of my RIA SaaS solution moving forward …