Back in May, my good friend Eric Cowperthwaite caused a stir with a blog post about security breach victims getting demonized for failing to prevent break-ins. Other industry friends passionately disagreed.
My thinking on the matter continues to evolve, but as is usually the case, it takes me to the middle.
Companies that suffer a breach — Home Depot and Target have been among this year’s biggest poster children — are victims. They don’t set out to put their customers’ data in danger, and they probably thought they were practicing all due diligence until they discovered the intrusions. But they probably also mistook their compliance checklists for real security and failed to turn security into a company-wide mindset, and that makes them enablers for the hackers who beat them.
One friend noted that Target demanded that customers let them scan driver’s licenses to buy things like Nicorette gum, and that at the time of the breach the company didn’t have a CISO or a point person on consumer privacy.
Another friend reminded those participating in the debate that companies don’t truly care about security until they get popped.
They’re both right. And for those reasons, I think companies like Home Depot and Target deserve every drop of criticism. They also won’t do right by their customers and improve security without that criticism.
But is there a middle course, where we continuously remind companies that they were stupid and reckless but do so with some sort of bedside manner? I don’t pretend to have the answer.
But Cowperthwaite made an argument in his post that is absolutely true: People in the security community love to kick a company when it’s down.
When someone messes up, we circle around them and start swinging, throwing one tweet and Facebook post after another.
Some say that’s too bad, that the victims need to hear the hard truth, and that if they can’t take the criticism they need to get over it or get out of business.
That’s true, too.
Here’s the sticking point:
There’s no consensus on what counts as constructive criticism versus name-calling. Some folks see any disagreeable feedback as trolling, which is the wrong conclusion at least half of the time.
I propose two starting points to get us toward that better balance:
- To the security experts on Twitter and Facebook: When opining on a company’s failings, drop certain words. For example, refrain from calling them such things as “bozos,” “buffoons,” “asshats” and “fucktards.” Surely there are cleaner ways to tell them they were asleep at the wheel.
- To those who accuse the critics of trolling them: Get a thicker skin. Critical language doesn’t mean you’re being trolled. Most of it is free advice you can put toward the improvements you need to make, which should be your first concern anyway.
If there’s an even better way, I’m all ears.
Companies will begin to take data breaches seriously when one of two conditions is met.

First, they have an aha moment and recognize that the quarterly balance sheet, managed to drive stock performance, is the wrong metric. Security is a long-term investment that lives by the rules of risk management and probability. If your security team tells you there is a 20 percent probability of a successful attack, investing in preventive and detective mechanisms will rarely show a return on a quarterly basis. It is only over the long haul that these investments pay off, and then it is through avoidance rather than reaction and remediation.

Second, the market needs to demand that these organizations provide trusted environments before entrusting them with payment data or other sensitive information. That will be difficult given the market presence of some of these organizations, but we must find a way to do it nonetheless. Completing a compliance checklist should not be sufficient to earn that trust. We need to ensure that the company or organization meets accepted security standards that have a reasonable probability of success in detecting and preventing malicious activity BEFORE a breach occurs.
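To make the arithmetic behind that first condition concrete, here is a minimal sketch of an expected-loss calculation. The 20 percent probability comes from the example above; every other figure (breach cost, investment size, risk reduction) is invented purely for illustration.

```python
# A toy return-on-security-investment (ROSI) calculation showing why
# breach-prevention spending looks bad on a quarterly statement but can
# pay off over a multi-year horizon. Every number here is hypothetical.

ANNUAL_BREACH_PROBABILITY = 0.20   # the 20 percent figure from the example above
BREACH_COST = 5_000_000            # assumed total cost of one successful breach
UPFRONT_INVESTMENT = 1_500_000     # assumed one-time spend on preventive/detective controls
ANNUAL_RUN_COST = 400_000          # assumed ongoing cost of operating those controls
RISK_REDUCTION = 0.75              # assumed fraction of breach risk the controls remove


def rosi(years: float) -> float:
    """Expected return on the security investment over a horizon of `years`."""
    expected_loss_avoided = ANNUAL_BREACH_PROBABILITY * BREACH_COST * RISK_REDUCTION * years
    total_cost = UPFRONT_INVESTMENT + ANNUAL_RUN_COST * years
    return (expected_loss_avoided - total_cost) / total_cost


for horizon in (0.25, 1, 3, 5):
    print(f"{horizon:>4} year(s): ROSI = {rosi(horizon):+.0%}")
```

With these made-up numbers the return is deeply negative after one quarter and only turns positive around the five-year mark. The specific figures don’t matter; the point is that the cost shows up immediately while the benefit of avoidance accrues slowly, which is exactly what a quarterly view punishes.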
I propose that some measure or metric be developed, similar to an SSAE 16 SOC 3 report (which is based on an SSAE 16 SOC 2 Type 2 audit), that can be published without adverse impact on the company’s security posture. Another option might be a sort of Trust Index similar to the one published by Agari for trustworthy email systems.
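Purely to illustrate the shape such a published metric could take, here is a toy sketch: a weighted score across control areas drawn from an audit. The categories, weights, and scores are all invented for the example; a real index would need to be anchored to the underlying SOC 2 Type 2 evidence.

```python
# A toy illustration of a published "trust index": a weighted 0-100 score
# over control areas. Categories, weights, and scores are invented examples.

CONTROL_WEIGHTS = {
    "access_control": 0.25,
    "encryption": 0.20,
    "monitoring_and_detection": 0.25,
    "incident_response": 0.15,
    "third_party_risk": 0.15,
}


def trust_index(scores):
    """Weighted 0-100 index from per-category audit scores (each 0-100)."""
    return sum(CONTROL_WEIGHTS[area] * scores[area] for area in CONTROL_WEIGHTS)


example_audit = {
    "access_control": 85,
    "encryption": 90,
    "monitoring_and_detection": 60,
    "incident_response": 70,
    "third_party_risk": 50,
}

print(f"Published trust index: {trust_index(example_audit):.0f} / 100")
```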
To be sure, this will require some investment by the organization, since a good risk and vulnerability assessment is not the cheapest engagement. However, if organizations see that the market rewards those who adopt this proposed metric, perhaps they will recompute the calculus needed to show an ROI for good security practice.
Just some thoughts, but I welcome further feedback and discussion.