A Critical First Look at Stakeholder Specific Vulnerability Categorization (SSVC)
Publish date: Mar 6, 2020
Half as simple, twice as complicated.
This is an initial review of Stakeholder Specific Vulnerability Categorization (SSVC), an alternative proposal intended to replace the currently prevalent Common Vulnerability Scoring System (CVSS). We summarize the proposal, discuss its pros and cons backed by empirical analysis, and conclude with suggestions for improvement.
The current standard metric, CVSS, utilizes eight factors: Attack Vector, Attack Complexity, Privileges Required, User Interaction, Scope, Confidentiality, Integrity, and Availability, which can be condensed into three main categories (Exploitation, Impact, Scope). The output of CVSS is a numerical score in the range [0 … 10]. This score is typically divided into four uneven bands (Low, Medium, High, Critical) that represent the severity of an issue and dictate the urgency of creating a patch and applying it to affected systems.
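For illustration, the score-to-band mapping can be sketched as follows (thresholds per the CVSS v3.1 qualitative severity rating scale; note that v3.1 formally assigns a score of exactly 0.0 its own 'None' band, which is folded into Low here for simplicity):

```python
def cvss_band(score: float) -> str:
    """Map a CVSS v3.1 base score to its qualitative severity band."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score < 4.0:
        return "Low"       # 0.1 - 3.9 (0.0 is formally 'None')
    if score < 7.0:
        return "Medium"    # 4.0 - 6.9
    if score < 9.0:
        return "High"      # 7.0 - 8.9
    return "Critical"      # 9.0 - 10.0

print(cvss_band(9.8))  # Critical
```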
The new proposal has the following distinguishing features.
- It attempts to identify multiple stakeholders (patch developers, patch coordinators, patch appliers) and proposes two distinct decision-tree based solutions that generate two unique sets of decisions for different stakeholders.
- Different factors are considered for each stakeholder.
- For patch developers, the factors are Exploitation, Technical Impact, Utility (Virulence + Value Density), and Safety Impact.
- For patch appliers, the factors are Exploitation, Utility (Virulence + Value Density), Safety Impact, Exposure, and Mission Impact.
- The outcome/decision is one of four possible labels (defer, scheduled, out-of-band, immediate), which are essentially similar to the severity/urgency bands derived from the CVSS score.
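A quick way to see how the two stakeholders' inputs overlap is to treat the factor lists above as sets (a trivial sketch; the factor names are as given in the SSVC paper):

```python
# Factors considered by each SSVC stakeholder, per the lists above.
DEVELOPER_FACTORS = {"Exploitation", "Technical Impact", "Utility", "Safety Impact"}
APPLIER_FACTORS = {"Exploitation", "Utility", "Safety Impact", "Exposure", "Mission Impact"}

# Factors shared by both decision trees:
print(sorted(DEVELOPER_FACTORS & APPLIER_FACTORS))
# ['Exploitation', 'Safety Impact', 'Utility']

# Factors unique to one stakeholder (and thus invisible to the other):
print(sorted(DEVELOPER_FACTORS ^ APPLIER_FACTORS))
# ['Exposure', 'Mission Impact', 'Technical Impact']
```

The three factors unique to one tree are exactly where the two stakeholders' views of the same issue can silently diverge.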
Pros and Cons of SSVC
Let’s discuss the different pros and cons of this proposal.
Transparent and actionable outcomes
- Decision trees make it easier to explain how the outcome was calculated and which factor mattered most in that decision.
- The outcomes translate to distinct and actionable decisions.
- The outcome is essentially similar to CVSS risk bands, so it is easy to adopt in existing workflows.
Difference in priorities among multiple stakeholders
- The proposal results in potentially disjoint priorities among dependent stakeholders. Because different stakeholders use different factors and different decision trees, they can reach wildly different results: the two trees can mark the same issue as ‘scheduled’ in one and ‘immediate’ in the other. A patch applier, upon learning of a security issue’s disclosure, might conclude that the issue needs to be patched immediately, only to find out upon reaching out to the patch developer that a patch will not be released for another 6 months.
- Are the outcomes for the two stakeholders significantly and often different? The short answer is yes and yes. This is a very important drawback of the proposed approach of using two different decision trees. Let’s dig a bit deeper and confirm this empirically in the next section.
Complexity due to additional factors
- More factors are added resulting in more choices and greater chance of variation in opinion between different judges.
- Some factors are subjective and hard to evaluate or guess, and not all stakeholders are in a position to answer all questions. For example, how would a patch developer decide on safety impact, i.e. whether leaving the issue unfixed could cut someone’s finger or kill them?
- What about existing bugs? Do all of them need to be re-evaluated using SSVC? That would be a lot of work, though it might be possible to keep existing CVSS scores for existing bugs and start with the new scoring methodology from a certain cut-off point, e.g. a new year.
- Would there be a central patch coordinator who would assign the factor values and thus decide the recommended outcome for each new bug? This was previously done for CVSS by some well-known parties and vendors, so it can be safely assumed that a similar process could be followed, with some central stakeholders assessing new bugs and publishing assessments under the newly proposed system.
Empirical analysis of the Outcome Divergence problem
To confirm the hypothesis that different stakeholders would reach outcomes that are both significantly and often different from each other, we design an experiment.
- We take the factors and their values as proposed in SSVC paper.
- We take the two decision trees as proposed in SSVC paper.
- We generate all possible values of the input factors.
- For each set of values, we calculate both developer and applier outcomes using the decision trees.
- We assign numerical values (1 = defer … 4 = immediate) to the outcomes.
- We measure the difference between the two stakeholders’ outcomes as the absolute difference between the numerical values of the outcomes.
- We tabulate the difference values and their frequencies, presented below.
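The steps above can be sketched as a small harness. The factor values below are as defined in the SSVC paper (labels abbreviated as used elsewhere in this document); the two tree arguments are callables standing in for the paper's actual decision trees, which are too large to reproduce here, and the toy trees at the bottom are purely hypothetical stand-ins to exercise the harness:

```python
from collections import Counter
from itertools import product

# Factor values as defined in the SSVC paper.
FACTORS = {
    "Exploitation": ["none", "poc", "active"],
    "Utility": ["laborious", "efficient", "super effective"],
    "Technical Impact": ["partial", "total"],
    "Safety Impact": ["none", "minor", "major", "hazardous", "catastrophic"],
    "Exposure": ["small", "controlled", "unavoidable"],
    "Mission Impact": ["none", "degraded", "MEF crippled", "MEF fail", "mission fail"],
}

URGENCY = {"defer": 1, "scheduled": 2, "out-of-band": 3, "immediate": 4}

def divergence(developer_tree, applier_tree):
    """Tally |developer - applier| urgency gaps over every factor combination.

    Each tree argument is a callable mapping a {factor: value} dict to an
    outcome label; each tree simply ignores the factors it does not use.
    """
    gaps = Counter()
    names = list(FACTORS)
    for values in product(*FACTORS.values()):
        case = dict(zip(names, values))
        gap = abs(URGENCY[developer_tree(case)] - URGENCY[applier_tree(case)])
        gaps[gap] += 1
    return gaps

# Hypothetical toy trees (NOT the SSVC trees) just to exercise the harness:
dev = lambda case: "immediate" if case["Exploitation"] == "active" else "defer"
app = lambda case: "defer"
print(divergence(dev, app))  # Counter({0: 900, 3: 450})
```

Note that the six factors admit 3·3·2·5·3·5 = 1350 combinations in total, which is consistent with the 641 identical outcomes reported below amounting to just under half of all cases.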
| Absolute Difference in Stakeholder Outcomes | Frequency |
|---|---|
| 0 = Identical outcomes | 641 |
- More than 50% of instances result in different outcomes for the two stakeholders.
- Below are the 6 combinations that lead to the maximum possible difference between developer and applier outcomes (defer for one stakeholder, immediate for the other):
| Exploitation | Exposure | Mission Impact | Safety Impact | Technical Impact | Utility | Applier Outcome | Developer Outcome |
|---|---|---|---|---|---|---|---|
- The significantly more common scenario is that the outcomes differ by one or two levels.
Recommendations for improvement
Smaller subset of factors
Not all factors contribute significantly to the outcome, so complexity can be reduced by using a smaller subset of them. One possible approach is to use the union of the top three factors from each stakeholder’s decision tree. Information gain can be used to identify the most pertinent factors.
| Applier Factors | Information Gain |
|---|---|
| Developer Factors | Information Gain |
|---|---|
It is evident from this data that factors such as Mission Impact are the most significant attributes and largely determine the final outcome. Perhaps removing Technical Impact would help reduce the complexity.
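As a sketch of how such a ranking can be computed, the following calculates information gain directly from enumerated cases (the toy dataset at the bottom is hypothetical, chosen so that the factor perfectly predicts the outcome and the gain equals the full outcome entropy):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of outcome labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, factor):
    """Reduction in outcome entropy from splitting the cases on one factor.

    `rows` is a list of {factor: value} dicts; `labels` holds the matching
    outcome labels for each row.
    """
    n = len(labels)
    split = {}
    for row, label in zip(rows, labels):
        split.setdefault(row[factor], []).append(label)
    return entropy(labels) - sum(len(s) / n * entropy(s) for s in split.values())

# Hypothetical toy data: Exploitation perfectly predicts the outcome,
# so its information gain equals the full outcome entropy (1 bit here).
rows = [{"Exploitation": v} for v in ("active", "none", "active", "none")]
labels = ["immediate", "defer", "immediate", "defer"]
print(information_gain(rows, labels, "Exploitation"))  # 1.0
```

Running this over all 1350 enumerated SSVC cases, with each tree's outcomes as the labels, yields the per-factor gains tabulated above.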
Combined decision tree
- It could be argued that the decision trees should be altered so that the outcome of the developer tree is never less severe than that of the applier tree. This is a very severe restriction, though, and might not be a practical one.
- It could be argued that the two decision trees should never produce outcomes that differ by more than one level of severity.
- Perhaps getting input from multiple stakeholders does not necessarily require producing separate, distinct outcomes for different stakeholders. Although factors from both the applier and developer stakeholders are considered, a single common decision tree could be used that results in one outcome. Below is one example of a combined decision tree.
```
SafetyImpact = catastrophic
|  Exploitation = active: immediate
|  Exploitation = none
|  |  Exposure = small: out-of-band
|  |  Exposure = controlled: out-of-band
|  |  Exposure = unavoidable: immediate
|  Exploitation = poc: immediate
SafetyImpact = hazardous
|  Exploitation = active: immediate
|  Exploitation = none
|  |  Exposure = small: scheduled
|  |  Exposure = controlled: out-of-band
|  |  Exposure = unavoidable: out-of-band
|  Exploitation = poc
|  |  Utility = laborious: out-of-band
|  |  Utility = efficient: immediate
|  |  Utility = super effective: immediate
SafetyImpact = major
|  Exploitation = active
|  |  Utility = laborious: out-of-band
|  |  Utility = efficient: immediate
|  |  Utility = super effective: immediate
|  Exploitation = none: scheduled
|  Exploitation = poc: out-of-band
SafetyImpact = minor
|  Exploitation = active
|  |  Utility = laborious: out-of-band
|  |  Utility = efficient: out-of-band
|  |  Utility = super effective: immediate
|  Exploitation = none
|  |  Utility = laborious: defer
|  |  Utility = efficient: scheduled
|  |  Utility = super effective: scheduled
|  Exploitation = poc: scheduled
SafetyImpact = none
|  MissionImpact = none: defer
|  MissionImpact = degraded
|  |  Exploitation = active: scheduled
|  |  Exploitation = none: defer
|  |  Exploitation = poc: scheduled
|  MissionImpact = MEF crippled
|  |  Exploitation = active: out-of-band
|  |  Exploitation = none
|  |  |  Exposure = small: defer
|  |  |  Exposure = controlled: defer
|  |  |  Exposure = unavoidable: scheduled
|  |  Exploitation = poc: scheduled
|  MissionImpact = MEF fail
|  |  Exploitation = active: out-of-band
|  |  Exploitation = none: scheduled
|  |  Exploitation = poc: scheduled
|  MissionImpact = mission fail
|  |  Exposure = small
|  |  |  Exploitation = active: out-of-band
|  |  |  Exploitation = none: scheduled
|  |  |  Exploitation = poc: scheduled
|  |  Exposure = controlled: immediate
|  |  Exposure = unavoidable: immediate
```
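One way to make such a tree machine-readable is a nested `{factor: {value: subtree-or-label}}` dictionary. The sketch below transcribes only the SafetyImpact = catastrophic branch of the listing above; the remaining branches are elided, and the generic `decide` walker works for any tree in this shape:

```python
# Nested-dict encoding of the SafetyImpact = catastrophic branch of the
# combined tree shown above. Each level maps one factor to its branches.
COMBINED = {
    "SafetyImpact": {
        "catastrophic": {
            "Exploitation": {
                "active": "immediate",
                "none": {
                    "Exposure": {
                        "small": "out-of-band",
                        "controlled": "out-of-band",
                        "unavoidable": "immediate",
                    }
                },
                "poc": "immediate",
            }
        },
        # ... the remaining SafetyImpact branches are elided here ...
    }
}

def decide(tree, case):
    """Walk a nested {factor: {value: subtree-or-label}} tree to an outcome."""
    while isinstance(tree, dict):
        (factor, branches), = tree.items()  # one factor per level
        tree = branches[case[factor]]
    return tree

print(decide(COMBINED, {"SafetyImpact": "catastrophic",
                        "Exploitation": "none",
                        "Exposure": "small"}))
# out-of-band
```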
Source code for tests
The source code to perform the tests mentioned in this document, along with the generated datasets, is available in the SSVC-Tests repository linked in the references below.
References
- Prioritizing Vulnerability Response: A Stakeholder-Specific Vulnerability Categorization - Webpage, https://resources.sei.cmu.edu/library/asset-view.cfm?assetID=636379
- Prioritizing Vulnerability Response: A Stakeholder-Specific Vulnerability Categorization - Paper, https://resources.sei.cmu.edu/asset_files/WhitePaper/2019_019_001_636391.pdf
- Common Vulnerability Scoring System version 3.1: Specification Document, https://www.first.org/cvss/specification-document
- Information Gain in Decision Trees, https://en.wikipedia.org/wiki/Information_gain_in_decision_trees
- SSVC-Tests Repository - GitHub, https://github.com/secursive/SSVC-Tests