Which two factors do you need to account for when correlating an event timeline using a SIEM?

Analysis Process

Jack Freund, Jack Jones, in Measuring and Managing Information Risk, 2015

TEF level

We almost never derive TEF from Contact Frequency and Probability of Action. In fact, in all the years we have been doing FAIR analyses, we can probably count on two hands the number of times we’ve found it useful to go that deep. There are a couple of reasons for this: (1) we rarely have any difficulty making an estimate directly at the TEF level, so why go any deeper, and (2) estimating probability of action effectively can be pretty tough. That said, there have been a couple of times when being able to operate at this deeper level has been tremendously helpful, so leverage it when you need to, but don’t count on needing it often.

URL: https://www.sciencedirect.com/science/article/pii/B9780124202313000063

The FAIR Risk Ontology

Jack Freund, Jack Jones, in Measuring and Managing Information Risk, 2015

Threat event frequency

The probable frequency, within a given time-frame, that threat agents will act in a manner that may result in loss

Those who are just learning FAIR occasionally confuse TEF with LEF, and this is in fact one of the most commonly missed questions on the certification exam. We suppose the fact that the definitions and acronyms are almost identical doesn’t help. That said, once you gain even a little experience using FAIR, this will become second nature to you.

The operative phrase in the TEF definition that distinguishes it from LEF is “may result in loss.” In other words, the key difference between LEF and TEF is that loss may or may not result from Threat Events. For example:

Rolling a pair of dice while gambling is a threat event. Having the dice come up “snake eyes” is a loss event.

A hacker attacking a website is a threat event. If they manage to damage the site or steal information, that would be a loss event.

Pushing a new software release into production is a threat event. Having a problem with the release that results in downtime, data integrity problems, etc., would be a loss event.

Having someone thrust a knife at you would be a threat event. Being cut by the knife would be the loss event.

Note that in the first sentence of each bullet above, loss is not guaranteed; it isn’t until the second sentence that loss is clear. The probability of loss occurring in each threat event is a function of Vulnerability, which we will discuss in detail a little later.

TEF can either be estimated directly or derived from estimates of Contact Frequency (CF) and Probability of Action (PoA). Similar to LEF, TEF is almost always expressed as a distribution using annualized values, for example: Between 0.1 and 0.5 times per year, with a most likely frequency of 0.3 times per year. This example demonstrates that annualized frequencies can be less than one (i.e., the frequency is expected to be less than once per year). The example above could thus be restated: Between once every 10 years and once every other year, but most likely once every 3 years. Similarly, TEF can also be expressed as a probability rather than as a frequency in scenarios where the threat event could occur only once in the time-frame of interest. The factors that drive TEF are CF and PoA (see Figure 3.4).

FIGURE 3.4. FAIR threat event frequency ontology.
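For readers who do want to go that one level deeper, the sketch below (our construction, not the book's) shows one common way to operationalize Figure 3.4: sample Contact Frequency and Probability of Action from (minimum, most likely, maximum) ranges and multiply them to get a TEF distribution. The triangular distributions and the illustrative input ranges are simplifying assumptions; FAIR tooling typically uses calibrated PERT distributions and Monte Carlo simulation.

```python
import random

def tri(rng):
    """Draw from a triangular distribution given a (min, most likely, max) range."""
    lo, ml, hi = rng
    return random.triangular(lo, hi, ml)  # note the (low, high, mode) argument order

def derive_tef(contact_freq, prob_of_action, trials=10_000):
    """Rough sketch: sample TEF as Contact Frequency x Probability of Action."""
    samples = sorted(tri(contact_freq) * tri(prob_of_action) for _ in range(trials))
    # Return approximate 5th percentile, median, and 95th percentile annual TEF.
    return samples[trials // 20], samples[trials // 2], samples[-trials // 20]

# Hypothetical inputs: contact 2-10 times per year (most likely 4 times), with a
# 5-25% (most likely 10%) chance that a contact turns into an act against the asset.
lo, mid, hi = derive_tef((2, 4, 10), (0.05, 0.10, 0.25))
print(f"TEF roughly {lo:.2f} to {hi:.2f} per year, median about {mid:.2f}")
```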

URL: https://www.sciencedirect.com/science/article/pii/B9780124202313000038

Information Security Metrics

Jack Freund, Jack Jones, in Measuring and Managing Information Risk, 2015

Threat visibility

Threat metrics should, unsurprisingly from a FAIR perspective, focus on threat event frequency (TEF) and threat capability. For some threat communities (e.g., insiders of one sort or another), you can also include a metric regarding the number of threat agents, because there is likely to be some correlation between the number of threat agents and the probability of threat events (malicious or not).

Very few organizations really seem to leverage threat metrics. Oh, you’ll often see things about the number of viruses blocked, the number of scans against web systems, and such, but beyond that, organizations tend to underutilize what could be a rich source of intelligence. Later in the book we give SIEM providers a hard time for not leveraging their data very effectively. Today nobody is asking them to be very proficient because common practices regarding threat metrics are usually pretty superficial. If you adopt FAIR as a fundamental component of your organization’s risk management practices, you will inherently evolve your approach to threat metrics.

URL: https://www.sciencedirect.com/science/article/pii/B9780124202313000130

Common Mistakes

Jack Freund, Jack Jones, in Measuring and Managing Information Risk, 2015

Mistaking contact frequency for TEF

This mistake occurs primarily when people are trying to estimate something like the TEF for attacks against an Internet-facing system or web application. Very often, we will see people use an incredibly large TEF estimate because, after all, these systems are on the Internet and everybody knows attacks happen every few seconds on the Internet (or so the thinking goes). Multiply an attack every few seconds across the minutes in a day and the 365 days in a year and you get a really big TEF. Even if you estimate vulnerability to be very low (something under 1%), you still end up with a huge LEF that suggests the organization should have gone out of business a long time ago. So, what's wrong? Well, to a large degree it boils down to the analyst's interpretation of what constitutes a threat event.
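To make the arithmetic behind this mistake concrete, here is a small back-of-the-envelope sketch; the "one attack every 10 seconds" interval and the 1% vulnerability figure are purely illustrative.

```python
# Illustrative only: treat every Internet "contact" as a threat event.
seconds_per_year = 60 * 60 * 24 * 365
attack_interval_seconds = 10                            # hypothetical "attack every few seconds"
naive_tef = seconds_per_year / attack_interval_seconds  # ~3.15 million "threat events" per year

vulnerability = 0.01                                    # "very low," under 1%
naive_lef = naive_tef * vulnerability                   # ~31,500 loss events per year(!)

print(f"{naive_tef:,.0f} threat events/yr -> {naive_lef:,.0f} loss events/yr")
# An organization suffering tens of thousands of loss events a year would have folded
# long ago; the real error is counting contacts as threat events.
```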

By our way of thinking, if a threat event occurs and you are vulnerable to it, a loss event immediately occurs. Yet much of the activity we see on the Internet is better described as contact (i.e., somebody is scanning and poking around looking for things that qualify as particularly interesting by their standards). Very often when those scans encounter your systems and applications, they are looking for evidence of weakness but they aren’t going to necessarily leverage those points of weakness immediately. They’ll report it back to the mother ship, which may or may not result in additional probing and an actual attack later. Yes, we know there is a lot of gray in this area and some scans are more maliciously inclined than others. Nonetheless, when we actually examine Internet logs we see a much smaller number of events that we would qualify as actual threat events.

URL: https://www.sciencedirect.com/science/article/pii/B9780124202313000105

Interpreting Results

Jack Freund, Jack Jones, in Measuring and Managing Information Risk, 2015

Unstable conditions

The chart above might result from a scenario where resistive controls are nonexistent but the threat event frequency is inherently low, i.e., there just isn't much threat activity. This is referred to as an Unstable Risk Condition. It's unstable because the organization can't, or hasn't chosen to, put preventative controls in place to manage loss event frequency. As a result, if the frequency changes, the losses can mount fast. Another way to describe this is to say that the level of risk is highly sensitive to threat event frequency. Examples might include certain weather or geological events. The condition also commonly exists with privileged internal threat communities (e.g., executive administrative assistants, database administrators, etc.). Since most companies model scenarios related to privileged insiders as relatively infrequent occurrences with high impact, these risk scenarios will often be unstable.

Perhaps, given all of the other risk conditions the organization is wrestling with, this just hasn't been a high enough priority. Or perhaps the condition hadn't been analyzed and recognized before. Regardless, it boils down to the organization rolling the dice every day and hoping they don't come up snake-eyes. If you presented the analysis results above without noting that the condition is unstable, management might decide there's no need to introduce additional controls or buy insurance because the odds of it happening are so low. They would be making a poorly informed decision. Designating something as an unstable risk condition is really just a way of providing a more robust view of the situation so decision makers can make better-informed decisions.

URL: https://www.sciencedirect.com/science/article/pii/B9780124202313000075

Controls

Jack Freund, Jack Jones, in Measuring and Managing Information Risk, 2015

Exposure windows and threat event collisions

Unfortunately, despite our best efforts, “stuff” (variance) happens and, thus, we have what we refer to as “windows of exposure.” And, this is where it gets kind of interesting. In the illustration below, assume the horizontal axis represents time, and the vertical axis represents the degree of a system’s vulnerability. Furthermore, let’s assume that when we put an asset into production, we have designed, built, and implemented the asset in compliance with control-related policies and standards. As depicted in Figure 11.17, in a perfect world, the level of vulnerability never varies from the intended state (we are allowed to dream, are we not?).

FIGURE 11.17. Vulnerability over time (Shangri-La version).

In reality, changes in vulnerability are likely to occur at various times throughout the lifetime of the asset. These changes may, as we pointed out earlier, occur as a result of changes to the asset’s controls or changes in threat community capabilities. Regardless, when these changes occur, the asset is operating at an unintended level of vulnerability until the variance is identified and remedied (Figure 11.18).

FIGURE 11.18. Vulnerability over time (real version).

The frequency and duration of these increased exposure windows, and the degree of change in vulnerability during these windows, are major components of what drives the risk proposition of an asset over time. But they aren't the only components.

Figure 11.19 shows an asset’s exposure window characteristics over time relative to a threat event (the red vertical line along the time axis). If threat events don’t happen often, then the probability of a “collision” between the threat event and an increased level of vulnerability in an exposure window is relatively low. Heck, if the threat event frequency is low enough, then we may be able to tolerate relatively wide and/or frequent exposure windows (depending, of course, on what the loss magnitude of such an event might be).

FIGURE 11.19. Threat events and vulnerability windows over time.

If, however, as depicted in Figure 11.20, threat event frequency is higher, then we must be highly proficient at preventing, detecting, and responding to variance to minimize the probability of collisions between threat events and exposure windows.

FIGURE 11.20. Threat events and vulnerability collision.

What this tells us is that to manage variance effectively (and the risk it introduces), we have to understand both an asset’s window of exposure characteristics and the threat event frequency it faces, and then put into place the appropriate variance management capabilities to minimize collisions. Keep in mind, though, that even when an asset’s vulnerability is at the intended level, it still has some amount of vulnerability. As a result, a threat event by a capable enough threat can still result in a loss event at any point along the timeline. The good news is that organizations often have at their disposal (but rarely use) good data on windows of exposure and threat event frequency, which makes this an important area of improvement for most organizations.
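One rough way to internalize the collision idea is to simulate it. The sketch below is our illustration, not the authors': it models threat events as a Poisson process, opens exposure windows at random times with a fixed duration, and estimates the probability that at least one threat event lands inside an open window during a year. All of the input figures are hypothetical.

```python
import random

DAYS = 365.0

def poisson_times(rate_per_year):
    """Event times (in days) from a Poisson process with the given annual rate."""
    times, t, rate_per_day = [], 0.0, rate_per_year / DAYS
    while True:
        t += random.expovariate(rate_per_day)
        if t > DAYS:
            return times
        times.append(t)

def year_has_collision(tef, windows_per_year, window_days):
    """True if any threat event falls inside an open exposure window this year."""
    windows = [(start, start + window_days) for start in poisson_times(windows_per_year)]
    return any(lo <= event <= hi for event in poisson_times(tef) for (lo, hi) in windows)

def collision_probability(tef, windows_per_year, window_days, trials=20_000):
    hits = sum(year_has_collision(tef, windows_per_year, window_days) for _ in range(trials))
    return hits / trials

# Hypothetical figures: four 30-day exposure windows per year.
print(collision_probability(tef=0.5, windows_per_year=4, window_days=30))   # low TEF
print(collision_probability(tef=12.0, windows_per_year=4, window_days=30))  # higher TEF
```

With a low TEF the collision probability stays modest even with fairly wide windows; raise the TEF and it climbs quickly, which is exactly the argument for pairing threat event frequency data with window-of-exposure data.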

TALKING ABOUT RISK

We just read two reports on Internet web security written by two separate information security companies. Both of these reports offered lots of data and statistics about web security: one company [1] focusing on the number of vulnerabilities it observed in the thousands of customer websites it provides security for, and the other company [2] focusing on data associated with attacks against the customer websites with which it works. The report on vulnerabilities stated that 86% of all websites they tested had at least one serious vulnerability, and that the average window of exposure was 193 days. The report on website attacks stated that, on average, websites were subject to high severity attacks once every 15 days.

What’s wrong with this picture? Both of these companies are highly reputable, seem to have access to solid data, and seem to use reasonable statistical methods. Still, something is just not right. These reports would seem to imply that nearly all of those websites with “serious vulnerabilities” should have been compromised, at least given our premise on the relationship between exposure windows and threat event frequency. Of course, maybe they have been compromised. The report doesn’t provide that information.

Here’s our take on the problem. Given that the analyses underlying the reports were done separately and independently, there was no effort to correlate the frequencies of specific vulnerabilities against specific attacks (e.g., the probability of a Structured Query Language (SQL) injection attack hitting a site that has an SQL injection vulnerability). Furthermore, as a chief information security officer (CISO) for three organizations, I (J.J.) have become familiar with the odd situation where an application thought to have a specific vulnerability is subjected to attack but isn’t compromised. Sometimes, this is because the vulnerability is a false positive or there are compensating conditions that keep it from being truly exploited. Other times, it’s because the attack didn’t find that vulnerability among all the web pages and variables on those web pages. Often, the vulnerabilities exist behind an authentication layer and weren’t really accessible to most attacks. And, other times, what someone calls a “Serious Vulnerability” or a “Serious Attack” has to be taken with a grain of salt. Regardless, the point is that we have to exercise a lot more critical thinking and go deeper in our analyses before we can really gain clear and meaningful intelligence from these kinds of data. Don’t get us wrong, though: there was a ton of useful information in both of these reports, and we believe they are absolutely headed in the right direction.

URL: https://www.sciencedirect.com/science/article/pii/B9780124202313000117

Implementing Risk Management

Jack Freund, Jack Jones, in Measuring and Managing Information Risk, 2015

Overall likelihood

The manner in which the NIST method determines overall likelihood of an event results in inaccurate risk ratings in some instances. Specifically, the matrix it uses to combine the likelihood of threat event initiation or occurrence (TEF in FAIR terms) with the likelihood that threat events result in adverse impacts (vulnerability) overlooks a fundamental “law”: the overall likelihood of an adverse outcome cannot exceed the likelihood of the catalyst (threat) event occurring in the first place (see Figure 14.1). To make the point, let’s look at an example. Let’s say that you are trying to understand the risk associated with some malicious actor running off with sensitive information. You’ve brought the appropriate subject matter experts together and come up with a likelihood of threat event initiation or occurrence rating of “low.” Now the team examines the controls and other factors that would drive the likelihood that threat events result in adverse impacts and comes up with an estimate of “high.” Using Table G-5 from the NIST 800-30 standard, you look up the results of combining those two values and arrive at an overall likelihood of “moderate.” So what’s the problem here? Somehow we started off with a threat event likelihood of “low” but ended up with an overall likelihood of “moderate.” How can we have a higher overall likelihood than the likelihood of the threat event itself? Simply stated, logically, we cannot.

FIGURE 14.1. NIST 800-30’s overall likelihood table.

National Institute of Standards and Technology (2012). Guide for conducting risk assessments (NIST Special Publication 800-30 rev 1). Retrieved from http://csrc.nist.gov/publications/nistpubs/800-30-rev1/sp800_30_r1.pdf.

In order to be accurate, the overall likelihood values in the matrix can never be greater than the likelihood of threat event initiation or occurrence values, even when the likelihood that threat events result in adverse impacts is high or very high. Given this upper likelihood limit, many of the overall likelihood values need to be adjusted. The table shown in Figure 14.2 provides an example of what an alternative matrix might look like.

FIGURE 14.2. Alternate form of NIST 800-30’s overall likelihood table.

National Institute of Standards and Technology (2012). Guide for conducting risk assessments (NIST Special Publication 800-30 rev 1). Retrieved from http://csrc.nist.gov/publications/nistpubs/800-30-rev1/sp800_30_r1.pdf.

Of course, absent a quantitative underpinning, it is ambiguous in both the existing matrix and any alternative matrix how much the overall likelihood should drop as the likelihood that threat events result in adverse impacts decreases. Still, the fundamental logic seems indisputable regarding the need to limit overall likelihood values based on the likelihood of threat event initiation or occurrence.
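To make the capping logic concrete, here is a minimal sketch of an adjusted lookup. It is our illustration only: the uncapped combination rule is a placeholder, not NIST's published Table G-5, and the exact adjusted cell values would be a judgment call as noted above.

```python
LEVELS = ["very low", "low", "moderate", "high", "very high"]

def overall_likelihood(threat_event_likelihood, adverse_impact_likelihood):
    """Combine the two NIST 800-30 likelihood inputs, capped so the result never
    exceeds the likelihood of the threat event itself."""
    tev = LEVELS.index(threat_event_likelihood)
    imp = LEVELS.index(adverse_impact_likelihood)
    uncapped = round((tev + imp) / 2)   # placeholder combination rule, not Table G-5
    return LEVELS[min(uncapped, tev)]   # apply the upper limit discussed above

# A "low" threat event likelihood combined with a "high" likelihood of adverse
# impacts now yields "low" rather than "moderate."
print(overall_likelihood("low", "high"))
```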

URL: https://www.sciencedirect.com/science/article/pii/B9780124202313000142

Information Security Risk Assessment: A Practical Approach

Mark Talabis, Jason Martin, in Information Security Risk Assessment Toolkit, 2013

Stage 2: Evaluate Loss Event Frequency

This stage of the FAIR framework is a bit longer than the others. It essentially has five steps. An easy way to look at it is that for each step, you will end up with a value. This value will then be used in either some intermediary computation for the stage or in the final risk computation.

What follows is a brief description of each of the activities. For more details on the specific steps, refer to the FAIR documentation.

FAIR documentation link:

http://www.cxoware.com/resources/

1. Threat Event Frequency (TEF): FAIR defines this as the probable frequency, within a given timeframe, that a threat agent will act against an asset. Basically, this tries to answer the question: how frequently can the attack occur? FAIR uses a 5-point scale with frequency ranges from Very High (>100 times per year) to Very Low (<0.1 times per year) (see Table 2.6).

Table 2.6. TEF Ratings and Description

Rating          Description
Very High (VH)  >100 times per year
High (H)        Between 10 and 100 times per year
Moderate (M)    Between 1 and 10 times per year
Low (L)         Between 0.1 and 1 times per year
Very Low (VL)   <0.1 times per year (less than once every 10 years)

For example, using this table, what would be the Threat Event Frequency for an automated mechanism (e.g., a worm) attacking an externally facing system such as a company website? Using Table 2.6, this would be given a “Very High” rating, as this event could possibly occur more than 100 times a year (due to the number of worms that are in the wild).
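As a quick illustration of how Table 2.6 gets applied, a helper along these lines (our sketch, not part of FAIR; the boundary handling at exactly 1, 10, and 100 times per year is a judgment call) maps an estimated annual frequency to its rating:

```python
def tef_rating(times_per_year):
    """Map an estimated annual frequency to its Table 2.6 TEF rating."""
    if times_per_year > 100:
        return "Very High (VH)"
    if times_per_year >= 10:
        return "High (H)"
    if times_per_year >= 1:
        return "Moderate (M)"
    if times_per_year >= 0.1:
        return "Low (L)"
    return "Very Low (VL)"

print(tef_rating(300))   # worm against an Internet-facing site -> Very High (VH)
print(tef_rating(0.05))  # less than once every 10 years -> Very Low (VL)
```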

2. Threat Capability (Tcap): FAIR defines this as the probable level of force that a threat agent is capable of applying against an asset. It is also a measure of the threat agent’s resources and skill, and how effectively they can be applied to the asset. All this means is that you need to answer the question: what is the capability of the attacker to conduct the attack? (see Table 2.7).

Table 2.7. Threat Capability Table

Rating          Description
Very High (VH)  Top 2% when compared against the overall threat population
High (H)        Top 16% when compared against the overall threat population
Moderate (M)    Average skill and resources (between bottom 16% and top 16%)
Low (L)         Bottom 16% when compared against the overall threat population
Very Low (VL)   Bottom 2% when compared against the overall threat population

While this may seem confusing, all this is trying to do is address the level of skill an attacker would need to have to successfully conduct a given attack. So let’s say we have three threat sources: a secretary, a systems administrator, and a hacker. Who would have the greatest Threat Capability to perform unauthorized activities on a server? It is reasonable to conclude that a systems administrator would probably be within the top 2% of those who could actually carry out this attack, followed by a hacker, and then a secretary.

3. Estimate Control Strength (CS): FAIR defines this as the expected effectiveness of controls, over a given timeframe, as measured against a baseline level of force, or the asset’s ability to resist compromise. In other words, how strong are the controls and protective mechanisms in place to prevent the attack? (see Table 2.8).

Table 2.8. Control Strength Table

Rating          Description
Very High (VH)  Protects against all but the top 2% of an average threat population
High (H)        Protects against all but the top 16% of an average threat population
Moderate (M)    Protects against the average threat agent
Low (L)         Only protects against the bottom 16% of an average threat population
Very Low (VL)   Only protects against the bottom 2% of an average threat population

This is another rather confusing table, but simply put, what we are trying to measure is the strength of the control. If we use the example of the compromise of sensitive data on lost or stolen storage media, an encrypted hard drive would certainly have a much higher control strength (probably at the top 2%) compared to a hard drive that has not been encrypted. Unfortunately, the difficulty with an evaluation like this is the subjectivity in identifying which controls fall into which categories.

4. Derive Vulnerability (Vuln): FAIR defines this as the probability that an asset will be unable to resist the actions of a threat agent. To obtain this value, you use two values from the previous steps: Threat Capability (Tcap) and Control Strength (CS). Deriving the Vuln value is as simple as plotting Tcap against Control Strength and finding the point where the two intersect. This is a fairly logical derivation: vulnerability increases as the attacker’s capability rises relative to the control strength. For example, a system will be more vulnerable to unauthorized access if the threat source is a hacker and a weak control (e.g., lack of password complexity enforcement) is unable to prevent the hacker from gaining access to the system.

5. Derive Loss Event Frequency (LEF): FAIR defines this as the probable frequency, within a given timeframe, that a threat agent will inflict harm upon an asset. To obtain this value, you consider two previously computed values: Threat Event Frequency (TEF) and Vulnerability (Vuln).

Obtaining the LEF is done by simply plotting the TEF and the Vuln and identifying where the two intersect. The concept here is focused on determining how likely a threat source would be able to successfully leverage the vulnerability in a system. For example, if you consider a threat scenario of a worm infecting an unpatched system on the Internet you would have a very high LEF. This is because worms have a high TEF, as there are so many constantly probing the Internet, and the Vuln rating would be high since the control strength would be considered weak due to the lack of patching.

It is important to note that many of the tables in the FAIR documents are suggestions about how to quantify these risk elements, and FAIR allows room for customization.

URL: https://www.sciencedirect.com/science/article/pii/B9781597497350000026

Thinking about Risk Scenarios Using FAIR

Jack Freund, Jack Jones, in Measuring and Managing Information Risk, 2015

Web application risk

Web application vulnerability is a special case of the previous section. There are some unique aspects about it, however, that warrant a short section unto itself.

Similar to vulnerability scanner results in general, we very often see results from web application scanners that don’t stand up to even superficial review. Consequently, organizations are faced with the same choices we mentioned before—aggressive remediation regardless of the cost, setting long remediation timelines, or a lot of missed remediation deadlines. Aggressive remediation of web application vulnerabilities—especially for applications written in-house by the organization—potentially has a more direct effect on the organization’s ability to grow and evolve as a business. Specifically, very often the programmers who are tasked with fixing vulnerable conditions are the same ones who should be developing new business-enabling web application capabilities and features. As a result, the time spent fixing bugs equates to lost business opportunity. This can create a pretty strong tension between the security team and the development team, as the security team is focused on protecting the organization and the development team is focused on growing the business. It also makes it especially important to only fix bugs that really need to be fixed. Ideally, organizations avoid this problem by writing secure code to begin with, but this is sometimes easier said than done given the complexity of some applications, the inevitable variability in developer skills, and the evolution of threat capabilities.

Some important considerations that can help you triage the findings (we’ll call the findings “deficiencies”) that come out of many web application vulnerability scanners include:

Is the web application Internet-facing? If it isn’t, then the TEF should be considerably lower, unless an organization has a pretty unusual internal threat landscape.

Is the deficiency directly accessible or does the attacker have to authenticate to the application first? Obviously, if a deficiency requires authentication, then it is far less likely to be discovered and leveraged through simple means. In other words, the TCap of the threat community is going to have to be higher, and almost any time you raise the TCap, you lower the TEF. There are simply fewer highly skilled and motivated threat agents than there are run-of-the-mill, opportunistic threat agents. When you’re talking about an authenticated attack, you are also talking about a targeted attack, which again lowers the TEF. By the way, if your web application has good logging in place, you might actually be able to acquire decent data regarding the volume of illicit activity that takes place by threat agents who have authenticated to the application. Illicit activity tends to have patterns that, once recognized, can alert you to an account that has been compromised, or that the threat agent set up specifically for malicious purposes.

Speaking of TEF—not all deficiencies experience the same rate of attack, either because they are lower value from the threat agent’s perspective, they are harder to execute successfully, or both. Working with experts in web security threat intelligence, you can have some pretty substantial differentiations in TEF between different deficiencies, which can make prioritization much easier.

Does the deficiency enable the threat agent to compromise a single user account at a time, or the entire customer base? In most cases, you should care much more about any deficiency that enables the threat agent to siphon off the entire contents of the database because of the LM implications. Most of the really damaging web application compromises we’ve heard of are of this latter variety.

Does the deficiency enable the threat agent to gain control over the system the application is running on? These can be very dangerous deficiencies; however, the good news is that many of them are more difficult to execute (higher required TCap, lower TEF).

Just using these criteria can help an organization prioritize its web application deficiencies far more effectively than what natively comes out of the scanner. We’ve also found it very helpful to engage one or more representatives from the development team in performing this kind of triage. It not only helps each team educate the other, but the outcome is (or should be) a jointly agreed upon prioritization. Besides more cost-effective risk management, this also can significantly reduce the tension between the two teams.
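As a purely illustrative sketch, the criteria above could be rolled into a simple triage score; the field names and weights here are hypothetical and would need to be tuned with your own threat intelligence, but the ordering logic (unauthenticated, Internet-facing, bulk-compromise deficiencies first) follows the reasoning in the bullets above.

```python
from dataclasses import dataclass

@dataclass
class Deficiency:
    name: str
    internet_facing: bool          # reachable from the Internet -> higher TEF
    requires_authentication: bool  # higher required TCap -> lower TEF
    bulk_compromise: bool          # whole customer base vs. one account -> higher LM
    system_takeover: bool          # control of the underlying system -> higher LM

def triage_score(d: Deficiency) -> int:
    """Hypothetical weights reflecting the criteria above."""
    score = 3 if d.internet_facing else 0
    score += 0 if d.requires_authentication else 2
    score += 3 if d.bulk_compromise else 0
    score += 2 if d.system_takeover else 0
    return score

findings = [
    Deficiency("SQL injection on login page", True, False, True, False),
    Deficiency("Stored XSS behind admin authentication", True, True, False, False),
]
for f in sorted(findings, key=triage_score, reverse=True):
    print(triage_score(f), f.name)
```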

We would be remiss if we didn’t point out that doing full-fledged FAIR analyses on web application deficiencies enables an organization to make comparisons between the loss exposure a deficiency represents and the cost in person-hours (and perhaps opportunity costs) involved in remediating the deficiency. When an organization is able to do that, it is more explicitly making risk-informed business decisions. At least one web application scanning provider is in the process of integrating FAIR into their product, which will be able to provide automated quantitative loss exposure and cost-to-remediate results for deficiencies they uncover.

On a separate but related topic, we want to state that we’re advocates of continuous (or at least high-frequency) scanning for Internet-facing web applications versus monthly, quarterly, biannual, or annual scanning. As you will learn in the Controls chapter that follows, the time it takes to discover a deficiency can play a huge role in how much vulnerability a deficiency actually represents, particularly in high TEF environments. Furthermore, we also believe strongly in scanning applications in production rather than just in a test environment. There was a time when scanning methods posed a real danger to the stability of web applications, but some scanning providers now have a proven track record of being benign.

Note that web application security is a specialty unto itself, and we highly recommend that organizations either hire, engage, or train-up expertise in this area, even if an organization outsources web application development and doesn’t develop its own.

URL: https://www.sciencedirect.com/science/article/pii/B9780124202313000099

Threat and Vulnerability Management

Evan Wheeler, in Security Risk Management, 2011

Measuring Risks

Back in Chapter 6, we tried to apply the Annualized Loss Expectancy (ALE) formula to assess several risks, but that was where quantitative analysis broke down. There wasn't enough information available to properly calculate the rate of occurrence. Looking at the vulnerability advisory example from this chapter, let's see if FAIR can help us where ALE came up short.

FAIR provides a very comprehensive framework for analyzing information risks, including a step-by-step process for qualifying risks with several reference tables to get you started. The very basic reference tables will be briefly explained in this chapter to illustrate the depth of the FAIR analysis model even in its simplest form. To calculate the Threat Event Frequency (TEF), you could use the simple progression of quantitative frequency ranges from FAIR shown in Table 11.4. Like the qualitative likelihood scale used earlier in this chapter, the TEF scale uses five levels.

Table 11.4. FAIR Threat Event Frequency Table

Level      Description
Very low   <0.1 times per year (less than once every 10 years)
Low        Between 0.1 and 1 times per year
Moderate   Between 1 and 10 times per year
High       Between 10 and 100 times per year
Very high  >100 times per year

The next two variables are aspects of the vulnerability that will affect the Loss Event Frequency: Threat Capability (Tcap) and Control Strength (CS), shown in Tables 11.5 and 11.6, respectively. You may notice that both of these variables were considered components of the likelihood rating in the earlier qualitative model.

Table 11.5. FAIR Threat Capability Table

Level      Description
Very low   Bottom 2% when compared against the overall threat population
Low        Bottom 16% when compared against the overall threat population
Moderate   Average skill and resources (between bottom 16% and top 16%)
High       Top 16% when compared against the overall threat population
Very high  Top 2% when compared against the overall threat population

Table 11.6. FAIR Control Strength Table

Level      Description
Very low   Only protects against the bottom 2% of an average threat population
Low        Only protects against the bottom 16% of an average threat population
Moderate   Protects against the average threat agent
High       Protects against all but the top 16% of an average threat population
Very high  Protects against all but the top 2% of an average threat population

In order to calculate the Vulnerability variable, which together with the Threat Event Frequency will determine the Loss Event Frequency, you will need to compare the capability of the threat source with the strength of the existing controls to prevent the exploitation. This is mapped in a table similar to the risk matrices used earlier for the qualitative models, except in this case a rating of Very Low for control strength is bad and a rating of Very High for threat capability is bad, as shown in Table 11.7. The best case is that the threat capability is rated Very Low and the control strength is rated Very High. The final Loss Event Frequency rating is determined using a second mapping table, shown in Table 11.8, comparing the Vulnerability and Threat Event Frequency values.

Table 11.7. FAIR Vulnerability Table

Threat Capability \ Control Strength   Very Low    Low         Moderate    High       Very High
Very low                               Moderate    Low         Very low    Very low   Very low
Low                                    High        Moderate    Low         Very low   Very low
Moderate                               Very high   High        Moderate    Low        Very low
High                                   Very high   Very high   High        Moderate   Low
Very high                              Very high   Very high   Very high   High       Moderate

Table 11.8. FAIR Loss Event Frequency Table

Threat Event Frequency \ Vulnerability   Very Low    Low         Moderate    High        Very High
Very low                                 Very low    Very low    Very low    Very low    Very low
Low                                      Very low    Very low    Low         Low         Low
Moderate                                 Very low    Low         Moderate    Moderate    Moderate
High                                     Low         Moderate    High        High        High
Very high                                Moderate    High        Very high   Very high   Very high

The last factor for measuring the risk is the Probable Loss Magnitude (PLM) rating. Here too, FAIR takes a practical approach to estimating loss by providing ranges rather than trying to identify a precise value. Table 11.9 shows one possible implementation of this factor. The appropriate ranges will differ depending on the organization, so this table may need to be tweaked for your organization.

Table 11.9. FAIR Probable Loss Magnitude (PLM) Table

Magnitude     Range Low End   Range High End
Very low      $0              $999
Low           $1,000          $9,999
Moderate      $10,000         $99,999
Significant   $100,000        $999,999
High          $1,000,000      $9,999,999
Severe        $10,000,000

Once the PLM and LEF are calculated, the final step is to determine the risk exposure using one last mapping table, shown in Table 11.10. Again, this matrix doesn't differ much from the risk matrix used in Chapter 6 (Figure 6.2), except that FAIR uses several more levels of PLM than are used for Severity or Sensitivity in the qualitative model. When you think about the fundamental differences in the approaches, this makes a lot of sense, as a quantitative model lends itself better to more precise evaluations of these variables than a qualitative model does. In the qualitative model, it is important to make it abundantly clear which level is appropriate, but when you are using hard numbers, it becomes more obvious where a risk falls on the scale.

Table 11.10. FAIR Risk Exposure Table

Probable Loss Magnitude \ Loss Event Frequency   Very Low   Low        Moderate   High       Very High
Very low                                         Low        Low        Moderate   Moderate   Moderate
Low                                              Low        Low        Moderate   Moderate   Moderate
Moderate                                         Low        Moderate   Moderate   High       High
Significant                                      Moderate   Moderate   High       High       Critical
High                                             Moderate   High       High       Critical   Critical
Very high                                        High       High       Critical   Critical   Critical

What FAIR considers the PLM value is equivalent to the risk sensitivity rating and components of the severity rating used throughout this book. So where Sensitivity, Likelihood, and Severity variables were used in our qualitative model to rate the risk exposure, FAIR is using a similar approach with Probable Loss Magnitude, Threat Event Frequency, and Vulnerability, respectively. They are different ways of approaching the problem, but in the end, they account for the same factors. You can find detailed guidance for each of the 10 steps of the basic FAIR workflow on the website of FAIR's author, Jack Jones [3].

Using the FAIR methodology can help you properly qualify many factors that affect the probability and severity of a given risk. You certainly wouldn't want to go to this level of granularity for every vulnerability that is announced; after all, you could have 10 advisories in one day! But this model provides a great way to approach an in-depth analysis of the vulnerabilities that are most highly rated using the quicker qualitative model.

In a way, the value of FAIR is in its granularity; it addresses each factor of risk individually. But, this is also what makes FAIR difficult to implement on a wide scale. It provides too many options for a quick assessment, such as what is needed in a TVM program.

Analyzing a Vulnerability Notification using FAIR

First, let's review a quick summary of the vulnerability advisory that was used earlier in this chapter to demonstrate rating a risk with the qualitative model introduced in the second part of this book. The highlights are as follows:

Affects Adobe Acrobat and Reader

Initial Risk Rating was High

Allow remote attackers to

Execute arbitrary code

Crash the application

Potentially control the system

Currently being exploited in the wild

Uses a heap-based buffer overflow attack

Using the qualitative scales, this risk was rated as a High for desktop systems and Moderate for servers. Now, let's try using the FAIR approach and compare the results. Luckily, both models use the same final four-level scale for risk (Low through Critical), so it should be easy to compare the two.

To begin, we need to determine the Threat Event Frequency (TEF) using the scale in Table 11.4. Based on the information in the advisory (you may want to look back at the full description earlier in this chapter), it would seem that this vulnerability is currently being exploited in the wild, so at a minimum, you can expect a breach to occur at least once in the next year. According to the FAIR model, this puts you in the Moderate TEF rating level (between 1 and 10 times). At this point in the model, you aren't yet considering other controls (for example, user awareness) that might be in place to prevent this breach from occurring.

Next, the Threat Capability (Tcap) needs to be determined. Crafting a PDF document to abuse the buffer overflow in Acrobat is not the kind of skill possessed by just anyone off the street. Some advisories will indicate whether the exploit code is readily available on the Internet, whether someone has published instructions, or if an attack kit or script is being circulated. If that were the case, you would be looking at the lower end of the Tcap scale. Given the information provided in this case, we must assume that none of those resources are yet available to potential attackers. So, for this attack, you are probably looking at advanced, but not elite, hackers and programmers, who would put you in the High Tcap level.

Now, we need to account for the strength of existing controls. A signature to detect and block these malicious PDF files would certainly put the Control Strength rating in at least the High level. But there is no indication about whether that signature exists in this scenario, so our existing controls currently involve relying solely on the awareness of the user community not to open suspicious PDF files. The effectiveness of awareness programs is improving over time as we develop better methods of educating the general user community, but there is still a very high rate of users getting tricked. To be generous to our users, let's rate the control strength as Moderate.

According to our matrices, a TEF rating of Moderate, a Tcap rating of High, and a CS rating of Moderate yield a Vulnerability rating of High and a Loss Event Frequency rating of Moderate.
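To double-check that lookup, here is a small sketch that encodes Tables 11.7 and 11.8 as matrices (levels abbreviated VL, L, M, H, VH) and reproduces the result:

```python
LEVELS = ["VL", "L", "M", "H", "VH"]

# Table 11.7: Vulnerability from Threat capability (rows) and Control Strength
# (columns, ordered VL, L, M, H, VH).
VULN = {
    "VL": ["M",  "L",  "VL", "VL", "VL"],
    "L":  ["H",  "M",  "L",  "VL", "VL"],
    "M":  ["VH", "H",  "M",  "L",  "VL"],
    "H":  ["VH", "VH", "H",  "M",  "L"],
    "VH": ["VH", "VH", "VH", "H",  "M"],
}

# Table 11.8: Loss Event Frequency from TEF (rows) and Vulnerability (columns).
LEF = {
    "VL": ["VL", "VL", "VL", "VL", "VL"],
    "L":  ["VL", "VL", "L",  "L",  "L"],
    "M":  ["VL", "L",  "M",  "M",  "M"],
    "H":  ["L",  "M",  "H",  "H",  "H"],
    "VH": ["M",  "H",  "VH", "VH", "VH"],
}

def derive_lef(tef, tcap, cs):
    vuln = VULN[tcap][LEVELS.index(cs)]
    return vuln, LEF[tef][LEVELS.index(vuln)]

# Advisory example: TEF Moderate, Tcap High, Control Strength Moderate.
print(derive_lef(tef="M", tcap="H", cs="M"))  # ('H', 'M'): Vulnerability High, LEF Moderate
```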

Then, at this point, only one more variable remains, and that is the Probable Loss Magnitude (PLM). FAIR provides a very detailed methodology for looking at various aspects of loss, including productivity, replacement cost, fines, damage to reputation, and so on. This list should sound familiar to you from our discussions of risk sensitivity. We already estimated that, in the worst case scenario, this breach might happen 10 times in one year, which means possibly 10 desktops or servers becoming infected, and with servers likely having a lower probability of being exploited. Assume that the most likely case is that a system is crashed or even damaged such that it needs to be rebuilt, and that this will mostly impact productivity for the desktops. If an employee making $80,000 per year loses one day of work, you might estimate that this comes to around $300–$400 per day of lost productivity. If the breach occurs the maximum expected 10 times during the year, that puts the PLM in the $1,000 through $9,999 range. On the other hand, execution of arbitrary code can have many implications depending on the motivation of the attacker. They could use it to infect other systems on the same network, steal the user's authentication credentials, or even steal sensitive data from the system. You can see where this analysis could start getting even more complicated once you begin to take all of these possibilities into account. For the sake of simplicity, let's say that you determine the worst case scenario PLM is in the Significant range; in this case, your final risk score will be High.
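Continuing the sketch, the productivity arithmetic and the final Table 11.10 lookup can be checked the same way; the 250 working days per year used below is an assumption, not a figure from the text.

```python
# Productivity-loss arithmetic from the paragraph above.
annual_salary = 80_000
working_days = 250                  # assumption; gives ~$320/day, inside the $300-$400 range
daily_cost = annual_salary / working_days
likely_plm = daily_cost * 10        # ten incidents -> ~$3,200, the $1,000-$9,999 ("Low") band

# Table 11.10: risk exposure from PLM (rows) and LEF (columns VL, L, M, H, VH).
RISK = {
    "Very low":    ["Low", "Low", "Moderate", "Moderate", "Moderate"],
    "Low":         ["Low", "Low", "Moderate", "Moderate", "Moderate"],
    "Moderate":    ["Low", "Moderate", "Moderate", "High", "High"],
    "Significant": ["Moderate", "Moderate", "High", "High", "Critical"],
    "High":        ["Moderate", "High", "High", "Critical", "Critical"],
    "Very high":   ["High", "High", "Critical", "Critical", "Critical"],
}
LEF_COLS = ["VL", "L", "M", "H", "VH"]
lef = "M"  # Loss Event Frequency Moderate, derived above

print(round(daily_cost), round(likely_plm))        # 320 3200
print(RISK["Low"][LEF_COLS.index(lef)])            # Moderate (likely-case PLM)
print(RISK["Significant"][LEF_COLS.index(lef)])    # High (worst-case PLM)
```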

To summarize, for the desktops, this vulnerability likely presents a Moderate risk, and in the worst case, a High-risk exposure. This puts the worst case FAIR analysis in line with our earlier qualitative results, but the more likely case according to the FAIR analysis actually comes out at a lower risk rating. You can try this exercise yourself and see what you come up with for the Domain Controller server. Compare that to the earlier result if you want to further examine the relationship between the two models.

URL: https://www.sciencedirect.com/science/article/pii/B9781597496155000116

What can a SIEM detect?

SIEMs can detect lateral movement by correlating data from multiple IT systems. They can also support mobile data security by monitoring data from the mobile workforce and identifying anomalies that might indicate information leakage via a mobile device.

What type of visualization is most suitable for identifying traffic spikes?

A line graph is a good way of showing changes in volume over time, which makes traffic spikes easy to spot.

Which of the following are types of log collection for SIEM?

There are six different types of logs monitored by SIEM solutions:
Perimeter device logs
Windows event logs
Endpoint logs
Application logs
Proxy logs
IoT logs

What is a SIEM and why is it useful?

A SIEM acts like the main hub for your system's logs. It will store all of the information and events about your environment and allow you to see all of the past logs as well, to weigh against your current usage and context. In short, it functions as the main alarm system of your digital business.