Companies are spending more money than ever on cybersecurity. Yet attacks keep growing, and so do the damages and repercussions, both financial and reputational. With the stakes rising, there's no question that when it comes to cyberattacks, companies need to better understand their risks and where and how to invest in reducing them.
Although risk quantification has emerged in recent years as a key concept and buzzword in cybersecurity, what it really means is still not widely understood, much less implemented. True risk quantification in cybersecurity requires a new integrative approach, melding the cyber and technical aspects with the business side of an organization. It goes well beyond meeting compliance requirements, keeping up with the latest tools or filling out the types of questionnaires used to get cyber insurance coverage.
Mapping Out The Costs
The first step is to map out a company’s assets and business activities and how much each is worth in dollars. This includes not just the sets of customer data that could be stolen or encrypted but also, for example, the cost of shutting down an assembly line or a shopping website for an hour, day or week. It also includes the cost of employees sitting idle when a network is down or the extra labor required to do tasks manually rather than digitally during a network outage. Companies need to think about the worst-case scenario if any parts of their businesses are attacked and put a dollar value on it.
In addition, organizations need to map out the third-party costs linked to each business aspect, like the price of reporting requirements for certain types of data breaches and legal fees for possible lawsuits, which can arise from consumers suing over privacy concerns or customers suing over a delayed order. Most companies don't do this comprehensively. Although many talk about a risk-based approach, the step of figuring out the cost of each risk is often skipped. The key here is to describe the possible consequences of interrupted activities in the language of business operations, like "the cost of closing a factory," instead of focusing on vague, unquantifiable dangers like "the threat of ransomware" or a "data breach."
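To make the mapping concrete, here is a minimal sketch of what such a cost map might look like in code. The asset names, dollar figures and outage durations are all hypothetical, and the structure is only one of many ways to record worst-case costs:

```python
from dataclasses import dataclass

@dataclass
class BusinessAsset:
    """One business activity or asset, mapped to its dollar exposure."""
    name: str
    downtime_cost_per_hour: float  # lost revenue, idle staff, manual workarounds
    third_party_costs: float       # breach reporting, legal fees, regulatory exposure

    def worst_case_cost(self, outage_hours: float) -> float:
        """Dollar value of the worst-case scenario for this asset."""
        return self.downtime_cost_per_hour * outage_hours + self.third_party_costs

# Hypothetical example: an online shop down for three days
shop = BusinessAsset("web shop", downtime_cost_per_hour=12_000,
                     third_party_costs=250_000)
print(f"Worst case (72h outage): ${shop.worst_case_cost(72):,.0f}")
```

The point of the exercise is less the code than the discipline: every asset gets a name, an hourly downtime cost and a third-party cost, so the worst case is a number rather than a vague fear.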
Finding The Risks For Each Asset
Once companies understand the business risks, they need to turn to their cybersecurity experts to identify the technical vulnerabilities or threats to each aspect of business operations and figure out the costs of defending against or eliminating them. These teams should also be able to assess the likelihood of each activity or asset being targeted or damaged in an attack. These assessments will determine how much to spend on specific tools and where to apply them.
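Combining the two assessments, the dollar value of the worst case and the likelihood of it happening, yields an expected loss per asset, along the lines of the standard single-loss-expectancy times annual-rate-of-occurrence calculation. A minimal sketch, with entirely hypothetical figures:

```python
def expected_annual_loss(worst_case_cost: float, annual_likelihood: float) -> float:
    """Expected loss per year: worst-case dollar impact times the
    probability (per year) that the attack materializes."""
    return worst_case_cost * annual_likelihood

# Hypothetical figures from the cost mapping and the security team's assessment
assembly_line = expected_annual_loss(2_000_000, 0.05)  # assumed 5% chance per year
customer_db = expected_annual_loss(1_114_000, 0.15)    # assumed 15% chance per year
print(assembly_line, customer_db)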
Keeping up with real-time threats and assets requires constant recalculation. At all times, the focus shouldn't be on absolute risk but rather on relative risk, as this serves as a guideline for how much should be spent to reduce the most likely damages and impact. To optimize efficiency, mapping out the threats and the cost of each one if it materializes should be done on automated platforms. Such systems can make clear the cost of each threat and balance it against the price of reducing it.
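One common way to express that balance is a return-on-security-investment (ROSI) score: the loss a control is expected to avoid, relative to what the control costs. The sketch below ranks three hypothetical controls this way; the names, costs and reduction percentages are illustrative assumptions, not recommendations:

```python
def rosi(expected_loss: float, reduction_pct: float, mitigation_cost: float) -> float:
    """Return on security investment: avoided expected loss
    relative to what the control costs."""
    avoided_loss = expected_loss * reduction_pct
    return (avoided_loss - mitigation_cost) / mitigation_cost

# Hypothetical controls against a $500,000 expected annual loss
controls = {
    "endpoint detection rollout": rosi(500_000, 0.60, 120_000),
    "offline backups": rosi(500_000, 0.40, 40_000),
    "extra penetration testing": rosi(500_000, 0.10, 60_000),
}

# Spend first where the relative return is highest,
# not on the scariest-sounding threat
for name, score in sorted(controls.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: ROSI {score:.2f}")
```

A ranking like this makes the relative-risk argument in terms an executive team can act on: a cheap control with a high score beats an expensive one with a marginal score, and a negative score means the control costs more than the loss it avoids.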
The CISO As A Bridge In An Ongoing Cycle
But, of course, before any actual spending or resource deployment decisions are made, an executive team needs to approve them. It's at this stage that the CISO's role as a bridge between the security and business realms is most critical. The CISO must explain to decision makers what resources are required to reduce specific business risks, and why.
In general, we need to move from a compliance mindset to one that looks at individual cyber risks inside individual companies. By truly quantifying that risk, companies can effectively reduce it. When this is done well, cybersecurity spending—long seen simply as a cost—can finally turn into an investment. Ideally, a company won’t see just a better cybersecurity posture but a return on its investment.