Without intending to be trite, experience plays a very important role in the mitigation of risk. Experience comes into play when you are tasked with prioritizing risks. If you have zero experience in cybersecurity risk management, two critical vulnerabilities carry equal weight and importance. But not all critical vulnerabilities can or will be weaponized and exploited, and not all critical vulnerabilities will result in a breach or security incident. This is the difference between a priori and a posteriori vulnerability management. To be effective at mitigating risk, we need to find ways to make intelligent use of experience in running our infosec programs. We need to use not just our own experience, but also the experience of others. This is a form of collective resilience that is crucial to defending against nation-states, organized crime and, like it or not, bored teenagers such as LAPSUS$ attacking and breaching companies just for the lulz. This blog post aims to help identify some ways in which we can better prioritize our efforts.
A priori (‘from the earlier’) and a posteriori (‘from the later’) are Latin phrases used in philosophy to distinguish types of knowledge, justification, or argument by their reliance on experience. A priori knowledge is independent of experience; examples include mathematics, tautologies, and deduction from pure reason. A posteriori knowledge depends on empirical evidence; examples include most fields of science and aspects of personal knowledge.
Research into patching cadence by Michael Roytman of Kenna Security (since acquired by Cisco), analyzed by the Cyentia Institute’s Wade Baker, has surfaced a sobering metric: most organizations are able to remediate 50% of their vulnerabilities within 30 days of the patch becoming available. That’s not good enough for CISOs and their senior leadership teams to sleep well at night. More vulnerabilities are discovered and disclosed every day, and closing half of them every month is not a winning strategy. Oddly enough, the severity of the vulnerabilities does not influence the patching cadence. A reasonable assumption would be that critical vulns get remediated more quickly than lower-severity vulns. Given that this is not the case, and that the patch cadence cannot be accelerated due to a general inability to acquire additional resources for performing security updates and the requisite QA and testing of those updates, we can see that infosec teams cannot hope to keep pace with the exposures that are a priori critical risks.
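To see why closing half the backlog each month cannot keep pace, consider a toy model. The inflow of 100 new vulnerabilities per month is an illustrative number, not a figure from the research:

```python
# Toy model: each month a fixed number of new vulnerabilities arrive,
# and the team remediates 50% of whatever is currently open.
def simulate_backlog(new_per_month: int, close_rate: float, months: int) -> int:
    """Return the open-vulnerability backlog after `months` months."""
    backlog = 0
    for _ in range(months):
        backlog += new_per_month              # new vulns disclosed this month
        backlog -= int(backlog * close_rate)  # half the backlog gets patched
    return backlog

# The backlog never drains to zero -- it levels off around the monthly
# inflow, so the team is perpetually carrying a month's worth of exposure.
print(simulate_backlog(new_per_month=100, close_rate=0.5, months=24))
```

Run with these numbers, the backlog climbs and then stabilizes at roughly the monthly inflow: the team runs hard just to stand still, which is exactly the treadmill the research describes.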
Rather than mount a mindless brute-force effort to remediate risk, we need to drop our obsession with prioritizing work based on severity alone. Instead, we need to focus on exploitability. If we can only remediate half of the vulns for a given month of Microsoft Patch Tuesday updates and other security fixes, let’s make sure we put our time into patching what matters based on the likelihood that it will be used in a successful attack on our infrastructure, applications, and APIs. We need a solid empirical (aka a posteriori) basis for prioritizing our infosec team’s attention and efforts. How can we patch more judiciously? Meaning, how can we address the most likely paths to breach rather than trying to tackle 100% of critical and high vulnerabilities?
Exploit Prediction Scoring System (EPSS) for Vulnerability Management
Thankfully, this kind of thinking and data-driven machine learning analysis has been in place for a few years now, and EPSS v2 is available as an open model. I first learned about EPSS in 2019 when v1 was published and presented at a security conference where I was giving a talk that was followed by Michael Roytman’s session on EPSS. Since then, the model received a nice “bump” in exploit and incident response data when Cisco acquired Kenna Security and Cisco’s proprietary DFIR (Digital Forensics and Incident Response) data was added to the training of the ML model. The model got a lot better. What does that mean exactly? It means that we can patch less, yet remediate more risk. Remediating exploitable vulns (or those highly likely to become exploitable, based on code snippets and evidence discovered in the wild) helps us address the resource constraints and testing challenges that plague security teams the world over.
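As a sketch of what EPSS-driven prioritization looks like in practice, the snippet below sorts a batch of findings by exploit probability instead of by CVSS severity. The CVE IDs and scores are hard-coded placeholders for illustration, not live EPSS data:

```python
# Prioritize findings by exploit likelihood (EPSS) rather than severity (CVSS).
# All scores below are illustrative placeholders.
findings = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "epss": 0.02},  # critical, but unlikely to be exploited
    {"cve": "CVE-2024-0002", "cvss": 7.5, "epss": 0.91},  # "only" high, but very likely exploited
    {"cve": "CVE-2024-0003", "cvss": 9.1, "epss": 0.44},
]

# Severity-only ordering would send the team after CVE-2024-0001 first.
by_cvss = sorted(findings, key=lambda f: f["cvss"], reverse=True)

# Exploitability-driven ordering surfaces CVE-2024-0002 instead.
by_epss = sorted(findings, key=lambda f: f["epss"], reverse=True)

print([f["cve"] for f in by_epss])
```

In a live pipeline the scores would come from FIRST.org’s public EPSS data feed and API rather than being hard-coded, and the cut-off for “patch now” would be a policy decision tuned to your team’s monthly remediation capacity.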
How Attack Path Analysis Helps
So we’re now in a position to prioritize our vulnerabilities based on an increasingly sophisticated machine learning model. We no longer have to suffer under the mindless rubric of patching all criticals and highs within 30 days (which was demonstrably not happening anyway). And for better or for worse, the new SEC disclosure regulations will require more breaches and security incidents to be disclosed, which adds even more data to infosec teams’ collective awareness of exploited vulns. But this is still just a generic improvement to the work of vulnerability management and remediation of risk for our companies. It is definitely a welcome improvement with regard to prioritization, but there is yet another powerful approach to be brought to bear on the problem: attack path analysis. Experience from historical DFIR case analysis can also be applied to identify the routes that attackers take on their path to successful exploitation.
You’ve undoubtedly already heard the phrase “kill chain” when talking about threat actors and how we need to disrupt their sequence of reconnaissance, weaponization, delivery, exploitation, and exfiltration. Each of these steps in an attack can be met with detection and, with the right tools, disruption. Attack path analysis should be tailored to your organization’s specific application stack and set of libraries, tools, and assets. Mitigation of risk can be achieved not only by going after the vulnerabilities that are exploitable, but also by focusing on the critical junctures that matter in your stack. Optimized mitigation of risk should be based on an enriched view (aka metadata) of your assets, their value, and their importance to the business.
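Attack path analysis can be framed as a graph problem: assets are nodes, lateral-movement opportunities are edges weighted by the likelihood of successful exploitation, and the “most likely path to breach” is the maximum-probability route from an entry point to a crown-jewel asset. The sketch below (asset names and probabilities are invented for illustration) finds that route with Dijkstra’s algorithm over `-log(p)` edge weights, so that minimizing cost maximizes the product of probabilities:

```python
import heapq
import math

# Hypothetical asset graph: edges are (target, probability that an attacker
# successfully moves from source to target, e.g. an EPSS-style estimate).
graph = {
    "internet":    [("web-server", 0.9), ("vpn-gateway", 0.3)],
    "web-server":  [("app-server", 0.6)],
    "vpn-gateway": [("app-server", 0.8)],
    "app-server":  [("database", 0.7)],
    "database":    [],
}

def most_likely_path(graph, start, target):
    """Maximize the product of edge probabilities by minimizing sum of -log(p)."""
    heap = [(0.0, start, [start])]  # (accumulated -log cost, node, path so far)
    settled = {}
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == target:
            return math.exp(-cost), path  # (overall probability, path taken)
        if node in settled and settled[node] <= cost:
            continue
        settled[node] = cost
        for nxt, p in graph[node]:
            heapq.heappush(heap, (cost - math.log(p), nxt, path + [nxt]))
    return 0.0, []

prob, path = most_likely_path(graph, "internet", "database")
print(path, round(prob, 3))
```

In this toy graph the web-server route wins (0.9 × 0.6 × 0.7 ≈ 0.38 versus 0.3 × 0.8 × 0.7 ≈ 0.17 through the VPN), which tells the defender that hardening the web server or the web-to-app hop disrupts the most probable breach path, even though the VPN gateway might look scarier on a severity-only report.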
Example of Hyver Risk Dashboard
In the above Hyver screenshot, you can begin to see the value of risk quantification beyond just the identification of CVEs and assignment of CVSS and EPSS scores. The dashboard speaks a fundamentally different language of risk: dollar value. Effective cybersecurity risk management has been languishing for years without the full support of executive management because they just didn’t understand the crazy “moon language” that infosec professionals speak. But modern governance of risk needs to include an increasingly large dose of cybersecurity risk. And while it is a laudable goal to ask boards of directors to step up and get some cybersecurity acumen, that will take time. Creating and sharing a view of an organization’s risk posture in terms that the senior executives can readily understand is an excellent way to bring everyone into alignment about risk. We must present the options for addressing risks with focused projects to add process capabilities and fundamentally raise the bar on an organization’s maturity around managing risk, both cyber and non-cyber.
Experience matters, whether that means selecting the best candidate from a pool of applicants or selecting tools that deliver insights and actionable intelligence around risk. Your processes only get better when you invest in your people and in your tools. For a lot of the tool choices out there, I am quite vendor-agnostic. You can implement any tool poorly and waste your money chasing a silver-bullet solution that promises the world. But in the same vein, you can implement any tool well, get beyond the 1.0 level of implementation, and reach a 2.0 or 3.0 level of control and observability that amplifies your security posture. This is what we mean when we say we need to accomplish more with less.