
FAIRCON Showcases Quantitative Risk Analysis on the Cusp of Adoption

At FAIRCON 2018, keynote speakers described FAIR as a quantitative risk analysis “movement” to change the way industry measures and manages risk. The trend is driven by deep, ongoing frustration in business and government circles with the seeming inability of increased cybersecurity spending to stop continuing breaches. Business leaders also want a better line of communication with security teams, one that talks about risk in the language of business.

Per FAIR Institute President Nick Sanna and Director Luke Bader: “We’ve been to the White House 4 times this year already. The Federal Government is looking for better ways to justify its actions.” Recently released Securities and Exchange Commission (SEC) cyber disclosure requirements “could have come straight out of a FAIR manual” as they call for companies to report on both the “frequency” and “magnitude” of cybersecurity risks.

The list goes on. The EU General Data Protection Regulation (GDPR) calls for risk assessment. The Department of Homeland Security (DHS) is researching FAIR. Gartner has stated that quantitative risk management is a critical capability. According to a FAIR Institute survey, 30% of the U.S. Fortune 500 are using FAIR in some way. Everyone seems to be jumping on the bandwagon.

Then why is it still so hard to get companies to operationalize a full FAIR-based risk program?

In FAIR Institute Chairman Jack Jones’ opinion: “Companies have a lot of immaturity in the space. They’re addicted to zero-cost risk management, have an innate desire for an ‘easy button,’ and tend to put blind faith in experts or look for shortcuts tied to existing immature models.”

Although grateful for FAIR’s signs of success after a long slog through the wilderness since writing the book on FAIR with Jack Freund, Jones is far from complacent. Paradoxically: “I think adoption will get harder after FAIR has some initial success because people will think they have nailed the risk problem, but they still haven’t. We need to recognize that our problem space is complex and stop looking for easy answers.”

At Security Architects Partners, we’ve done some risk management program reviews and have a few arrows in our backs as well as some success stories. With that in mind, let’s continue the discussion we started in an introductory FAIR blog post last August, where we promised to address some of the common objections to quantitative risk analysis.

Objections to Quantitative Risk Analysis

1. Too Many Variables and Methodologies to Really Determine a Valid Approach

This is probably the easiest objection to address. FAIR has become the go-to standard thanks to the Open Group, which has published “Open FAIR” as the Risk Taxonomy Technical Standard (O-RT) and the Risk Analysis Technical Standard (O-RA).

FAIR has also been through more than 5 years of exhaustive field testing by early practitioners such as Jack Jones with RiskLens. FAIR provides a great set of terms and definitions to use in risk analysis, starting with the definition of Risk itself as “the probable frequency and probable magnitude of future loss.” With the ontology shown in the figure below, FAIR’s methodology is much more specific than vague statements of the past like “Risk is a function of threats, vulnerabilities, and consequences.”
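
To make that definition concrete, here is a minimal Monte Carlo sketch of the top of the ontology: sample a Loss Event Frequency, draw that many loss events for each simulated year, assign each a Loss Magnitude, and summarize the resulting annual loss exposure. All of the distribution parameters below are hypothetical placeholders, not values from the FAIR book or any vendor tool.

```python
# A minimal Monte Carlo sketch of FAIR's top-level decomposition: annual loss
# exposure from Loss Event Frequency and Loss Magnitude. Hypothetical parameters.
import numpy as np

rng = np.random.default_rng(7)
trials = 10_000  # simulated years

# Loss Event Frequency (events/year): assume a calibrated range of roughly
# 0.1 to 2 events/year with 0.5 most likely, modeled as a triangular distribution.
lef = rng.triangular(left=0.1, mode=0.5, right=2.0, size=trials)

# Number of loss events that actually occur in each simulated year.
events = rng.poisson(lam=lef)

# Loss Magnitude per event: assume a skewed lognormal with a median near $100K.
annual_loss = np.array([
    rng.lognormal(mean=11.5, sigma=0.8, size=n).sum() if n else 0.0
    for n in events
])

for p in (10, 50, 90):
    print(f"{p}th percentile annual loss exposure: ${np.percentile(annual_loss, p):,.0f}")
```

A full Open FAIR analysis decomposes these inputs further – threat event frequency, vulnerability, primary and secondary loss – but the arithmetic at the top of the ontology is essentially this.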

Although there are other standards for risk assessment, such as NIST 800-30 or ISO 27005, these focus more on the higher-level processes involved and neither rule out nor specify a method of quantitative risk analysis. FAIR is complementary to them.

2. Organizational Baggage

Security and management staff have long memories, and we often hear clients say: “You’ll have a hard time bringing FAIR in here due to the [previous failed project].” We then try to learn as much as we can about the past effort: exactly what quantitative methodologies and tools were involved, where the gaps materialized, and why the project died. History doesn’t necessarily repeat itself, but it usually rhymes, so it’s important to capture the lessons learned.

The “not invented here” syndrome often crops up as well, with one team starting a risk management project only to find that another group is going about the problem a different way. In our mind, these turf wars are symptomatic of a gap in security governance or top-down commitment. Quantitative risk analysis reporting thrives only when executives refuse a bland diet of overly optimistic, content-free reporting – or dire warnings with extravagant, poorly justified spend requests – and instead expect to understand their cybersecurity situation in business terms of expected losses and recommended investments.

It is important for executives to get enough quantitative information to navigate the uncertain realities of risk. No more heat maps without substance whose presentation falls apart when someone asks, “What’s in that yellow dot, anyway?” Instead of a qualitative risk emperor with no clothes, let’s have solid quantitative risk analysis at the right level of abstraction backed by the ability to drill down.

3. We Don’t Really Have the Data to Get Accurate Quantitative Risk Analysis Estimates

To me, the data objection poses the biggest challenge. Security analysts, however expert, shouldn’t just make up numbers for FAIR variables like probability of action or secondary loss event frequency and expect to be highly accurate. Getting the numbers requires meticulous research; interviewing HR, legal, physical security, and line of business veterans in your organization; subscribing to a data service like one of the ones exhibiting at FAIRCON; or all of the above.

That said, please understand the difference between forecasting risk down to the penny and producing an accurate loss estimate with a useful degree of precision. Jack Jones has a great discussion of accuracy versus precision on page 23 of the FAIR book:

“If we estimated that the loss exposure associated with a [risk] scenario was $501,277.15, that is a precise number. It is right down to the penny. If, however, the actual loss exposure was $200,000, then our estimate was inaccurate… If, instead, we’d estimated the loss exposure to be between $1 and $1,000,000,000, then our estimate was accurate, but the level of precision left something to be desired. For that scenario, an estimate of between $100,000 and $500,000 might represent an appropriate balance between accuracy and precision.”
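
To make the distinction tangible, the short check below plugs in the figures from the quote: an estimate is accurate if the actual loss falls inside it, and its precision is roughly the width of the range.

```python
# Accuracy vs. precision, using the figures from the quote above.
actual_loss = 200_000

estimates = {
    "$501,277.15 (point estimate)": (501_277.15, 501_277.15),
    "$1 to $1,000,000,000": (1, 1_000_000_000),
    "$100,000 to $500,000": (100_000, 500_000),
}

for label, (low, high) in estimates.items():
    accurate = low <= actual_loss <= high   # accurate if the range contains the actual loss
    print(f"{label}: accurate={accurate}, range width=${high - low:,.0f}")
```

Only the last estimate is both accurate and usefully precise, which is exactly the balance Jones argues for.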

Bottom Line

The real questions are: How precise does management need quantitative risk analysis estimates to be, how does it want them communicated, and how should they be used to select and prioritize risk treatment options? What if we could get the loss estimate down to between $100K and $500K for Jack’s scenario? Would that be good enough?

FAIR practitioners maintain that with a modest amount of research and internal collaboration, an organization can build useful data sets that, combined with calibrated estimates, yield loss estimates at a useful degree of precision. I’ll explain how this could be done in the coming weeks. I’m also looking for a way to get some baseline data that I can use to model a few scenarios in the Open Group FAIR analysis tool or the FAIR Institute’s FAIR U Analysis Tool. Finally, I have clients asking me for guidelines on how to make the business case for agile risk management, and FAIR, to executives.
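
As a preview of the calibrated-estimate idea – and not a reproduction of the Open Group or FAIR U tools – here is one common pattern sketched in Python: elicit a 90% confidence interval from a subject matter expert and fit a lognormal distribution to it so the estimate can feed a Monte Carlo run. The $50K to $750K interval below is a hypothetical placeholder.

```python
# Fit a lognormal distribution to a hypothetical calibrated 90% confidence
# interval (5th and 95th percentiles) elicited from a subject matter expert.
import numpy as np

low, high = 50_000, 750_000   # hypothetical SME interval for per-event loss
z90 = 1.645                   # 95th percentile of the standard normal

mu = (np.log(low) + np.log(high)) / 2            # log-space midpoint -> median
sigma = (np.log(high) - np.log(low)) / (2 * z90)

rng = np.random.default_rng(11)
samples = rng.lognormal(mean=mu, sigma=sigma, size=10_000)

print(f"Median per-event loss: ${np.median(samples):,.0f}")
print(f"5th-95th percentile: ${np.percentile(samples, 5):,.0f} to ${np.percentile(samples, 95):,.0f}")
```

Distributions built this way can then feed the frequency-and-magnitude simulation sketched earlier, producing a loss exposure range that can be defended rather than a number pulled out of the air.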

Stay tuned, and we’ll be back with you as soon as we can. Feel free to contact us with any questions and/or to explore opportunities. Risk assessments and risk management program development are among our core subject matter areas; we want to hear your challenges, answer your questions, and learn how we can help.
