After the recent attack on 23andMe, the company's stated position was that affected users were compromised because they reused passwords that had already been exposed in other breaches. I have to admit I agree that users who reuse passwords and fail to enable available multifactor authentication (MFA) bear responsibility for their compromise. You can argue that organizations should have extra protections, and many do, but B2C organizations risk losing customers who don't want stronger security measures that add friction to using a website. However, none of this means that a company just gives up on protecting its systems.
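As an illustration of one such low-friction protection, a site can screen passwords against known breach corpora at signup or login without adding any steps for the user. The sketch below uses the public Pwned Passwords k-anonymity API; it is purely illustrative and says nothing about what 23andMe did or did not deploy.

```python
# Minimal sketch: reject passwords that appear in known breach corpora by
# querying the Pwned Passwords range API. Only the first 5 characters of the
# SHA-1 hash are sent, so the password itself never leaves your server
# (k-anonymity). Illustrative only; error handling is kept to a minimum.
import hashlib
import requests

def is_breached(password: str) -> bool:
    """Return True if the password appears in the Pwned Passwords corpus."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Each response line is "<HASH_SUFFIX>:<COUNT>"
    return any(line.split(":")[0] == suffix for line in resp.text.splitlines())

if __name__ == "__main__":
    print(is_breached("password123"))  # True: this password is widely breached
```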
While 23andMe may have had 14,000 users compromised through password reuse, the company apparently allowed lateral movement that led to the compromise of data belonging to 6,900,000 users. That goes well beyond the fault of any user, and it is a clear example of where user responsibility ends and corporate responsibility begins. That aside, even if users are 100% the cause of a breach, it is still the job of the cybersecurity team to mitigate the results.
Can We Blame the User?
First, let's talk about when it is okay to blame a user. While there is the typical mantra, "You can't blame the user," the reality is that users do many things they should not do. For example, I investigated a case where a security guard wanted to watch movies on duty and loaded VPN software onto the physical security PC so that he could bypass the corporate controls. The guard ended up downloading malware and causing a major disruption to physical access to the facilities. Yes, you blame that user. I discuss these concepts in detail in my book, You Can Stop Stupid.
There is a concept called a Just Culture. In a Just Culture, users are given the appropriate training to know what to do, the resources to do it correctly, and the time to do a task correctly, and their jobs are not so complicated that they cause unnecessary confusion. If those prerequisites are not in place, the user cannot be considered at fault, assuming there is no malice. Users can make mistakes, but in a Just Culture a user is not blamed for mistakes and is encouraged to report them. However, if there is a clear violation of policy, users can be blamed and disciplined.
The Importance of Security Awareness Programs
This has strong implications for security awareness programs. The first is obvious: users have to be provided with appropriate training, and that training should ensure that people actually understand and apply the material. The second, and more important, implication is that training has to focus on what to do rather than what to be afraid of.
A great deal of awareness training focuses on the mystique of "hackers" and teaches people to be afraid. Good awareness training should focus on how to do things right, not fixate on hackers and fear. This is a critical distinction, and I cover these concepts in detail in my book, Security Awareness for Dummies.
That aside, whether or not the user is to blame, you have to assume that users will fail and cause harm. Anyone who advocates for the "human firewall" is a fool: the concept implies perfection on the part of users, and that will never happen. Knowing there will never be a perfect user, you have to anticipate all possible actions and defend against them. This is why the attack path visualization integrated into Hyver is critical to understanding not just the actions a user can take, but the implications and potential losses resulting from those actions.
The Responsibility of the Organization
You also need to consider that you can blame users as much as you want, but the organization is ultimately responsible for breaches. For example, if a user knowingly clicks on a phishing message and causes a major data breach, regulators will go after the organization, not the user. Regulators are not going to walk into a breached organization and commiserate with your admins about "stupid users." They will only ask what you did to prevent the breach.
Clearly, this leaves infinite possibilities. To account for them, you need to define clear attack paths to see where users can fail and the potential loss associated with those failures, which again is exactly what Hyver was built to do. It doesn't matter why users fail; what matters is seeing the points where a user can give an attacker a further path toward critical roles. At that point, you can start to determine how to mitigate the inevitable. Just because a user fails in some way does not mean that the organization has to experience a loss. Applying attack path visualization and optimizing the choice of countermeasures should account for user failure, regardless of whether users are to "blame" or not.
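To make the idea concrete, here is a minimal sketch of attack path analysis in the abstract: model the environment as a directed graph, enumerate the paths that lead from a user failure to a critical asset, and rank them by estimated loss. Every node name, probability, and dollar figure below is hypothetical, and this is an illustration of the concept rather than Hyver's actual engine.

```python
# Minimal sketch of attack path analysis: model the environment as a directed
# graph, enumerate paths from user-failure events to a critical asset, and
# rank them by expected loss. All nodes, edge probabilities, and dollar
# figures are hypothetical illustrations.
import networkx as nx

G = nx.DiGraph()

# Edges are attacker steps; 'p' is the assumed probability the step succeeds
# given that the previous one did.
G.add_edge("user reuses password", "account takeover", p=0.6)
G.add_edge("user clicks phishing link", "workstation compromise", p=0.3)
G.add_edge("account takeover", "lateral movement", p=0.5)
G.add_edge("workstation compromise", "lateral movement", p=0.4)
G.add_edge("lateral movement", "bulk customer data access", p=0.7)

# Estimated loss if the attacker reaches the critical asset (hypothetical).
LOSS = {"bulk customer data access": 5_000_000}

def ranked_paths(graph, sources, target):
    """Return all paths from any user-failure source to the target,
    ranked by expected loss (path probability * loss at the target)."""
    results = []
    for src in sources:
        for path in nx.all_simple_paths(graph, src, target):
            prob = 1.0
            for a, b in zip(path, path[1:]):
                prob *= graph[a][b]["p"]
            results.append((prob * LOSS[target], prob, path))
    return sorted(results, reverse=True)

if __name__ == "__main__":
    failures = ["user reuses password", "user clicks phishing link"]
    for expected_loss, prob, path in ranked_paths(G, failures, "bulk customer data access"):
        print(f"${expected_loss:,.0f} expected | p={prob:.2f} | " + " -> ".join(path))
```

Even this toy model makes the point: the question is not whether a user will reuse a password or click a link, but how far that failure can travel and what it costs when it does.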
Want to learn more about how Hyver can help prevent data breaches? Schedule a demo.