One of the most difficult problems in securing any business that depends on information technology is working out what “secure enough” looks like.  Every stakeholder will have a different view and offer a different opinion on the ideal outcome.  These opinions come from business decisions about priorities and, hopefully, a risk framework that reaches far beyond the technology walls of the information systems alone.  For example, the UK government started out with CRAMM, from the CCTA (now the Office of Government Commerce), back in 1987.  CRAMM v5.1 is now owned by Siemens, with the OGC touting its Management of Risk approach.  There are also FAIR, the NIST RMF, Intel’s TARA and CERT’s own OCTAVE.  No matter which framework you adopt, one of its most useful outputs will be a justifiable amount of funding set aside for security expenditure.

The right answer is always whatever the consensus says it is.  Failing to reach a consensus is a recipe for disaster – a discussion for a future article.

Having established what the “secure enough” state looks like, the planning and road-mapping to change the operating model begins.  Starting from where you are today, how do you get to your target of “secure enough”?  With unlimited budget and unlimited resources, it just about looks feasible.  Back here on Earth, those resources and that budget need to be well understood, and were hopefully a contributing factor in establishing the “secure enough” target in the first place.  Spending more than the value of the business is dumb, as is doing nothing – assuming you’re not already in the right place.  So how do you budget for your information systems security improvements?

Annual Loss Expectancy – pinning the tail

I’ve always been fond of ALE (Annual Loss Expectancy), though leaning on it too heavily can lead to headaches.  It is the expected loss from a single incident, multiplied by the average number of incidents in a year.  For a typical bank’s computer systems the number of different types of losses can run into many hundreds, but it should remain manageable.  I include some examples in the table below, all fictitious I should add:

Type of Loss                        Amount        Incidence   ALE
Teller steals cash                  £2,450        180         £441,000
Credit/debit card internet fraud*   £46           35,000      £1,610,000
ATM fraud – small                   £100,000      0.5         £50,000
ATM fraud – large                   £800,000      0.2         £160,000
SWIFT fraud**                       £20,000,000   0.004       £80,000

*The UK Cards Association estimated 7.4p of fraud for every £100 spent in the first half of 2014

**Fraud at this level is very unlikely and correspondingly difficult to estimate – insurance is useful in managing these large but improbable risks
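Since each row is just a single-loss amount multiplied by an annual incidence, the table is easy to sanity-check.  A minimal sketch in Python, using the fictitious figures above:

```python
# ALE = single loss expectancy x annual rate of occurrence.
# All figures are the fictitious examples from the table above.
losses = [
    ("Teller steals cash", 2_450, 180),
    ("Credit/debit card internet fraud", 46, 35_000),
    ("ATM fraud - small", 100_000, 0.5),
    ("ATM fraud - large", 800_000, 0.2),
    ("SWIFT fraud", 20_000_000, 0.004),
]

for name, amount, incidence in losses:
    ale = amount * incidence
    print(f"{name}: £{ale:,.0f}")

# Summing the rows gives the headline annual figure.
total = sum(amount * incidence for _, amount, incidence in losses)
print(f"Total ALE: £{total:,.0f}")
```

Note that the total is only as trustworthy as the incidence estimates feeding it – which is exactly the cautionary point below.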


ALE does not come without a cautionary note.  It has been standardised by many methodologies, but the process ultimately boils down to this:

Pin the Donkey's Tail

  1. The consultant lists all the threats they can imagine
  2. They attach probabilities (best guesses) to each
  3. They work out the ALEs
  4. They sum them all up to produce a fairy-tale amount

Following this wonderful work of probability, they tweak the numbers to produce something they feel the board is likely to stomach – perhaps on the advice of an internal auditor who understands the art of what’s possible given the current company balance sheet.

My apologies for this cynical view, but this is what seems to happen, according to those in the know with whom I have discussed ALE.  ALE must never be elevated to a religious belief, but it can be useful for broad brush strokes.  When you attend your OGC M_o_R Practitioner course, as I did some years ago, ask your instructor for some real-world examples – if they dare share!

