I love it when someone tells me most of my risk comes from insiders.
In the past week I’ve had the insider breach conversation twice. Fortunately, in both cases the person I was speaking with was quick to listen to my rebuttal (hint: it’s based on actual data).
First though, a tip of the hat to the folks at the New School of Information Security, who have dispelled, and continue to dispel, myths using data.
The most common number I hear is “70% of breaches are due to insiders”, but let’s reduce that to “a significant majority of breaches are caused by insiders”. While we’re at it, let’s define insider breaches (where possible) as malicious action by a current employee (former employees, partners, suppliers and accidents are, in my opinion, distinct from the common claim). Is there any evidence to support the claim?
A quick search of the tubes (“insider breaches” and “information security survey”) for recent commentary turned up a few survey-based studies. Here’s what a few of them reported:
- The Ponemon Institute’s Perceptions About Network Security states that “Fifty-two percent say the breaches were caused by insider abuse”
- PwC’s 2012 Global State of Information Security Survey stated, for the utilities industry, that 41% of incidents are due to employees
- The TELUS Rotman Joint Study on Canadian IT security practices set insider breaches at 22%
(I found a few other reports, but they required me to register for an account, and that’s something I won’t do on general principle. Signing up for a report doesn’t mean I want to talk to you or buy your stuff, so why waste my time and yours?)
None of the surveys really supports that significant-majority claim; when I looked at actual data (breach reports), here’s what I found:
- Data Loss DB shows 11% by malicious insider action
- Verizon’s DBIR reports that 92% of breaches came from external sources
While the data doesn’t support insider breaches being the most common type, they are worthy of media attention (for example, Wikileaks/Manning) and they’re the most visceral, since they betray trust. Perhaps therein lies the reason people believe the claim that most breaches come from insiders: perhaps they’re confusing frequency with impact. One Bradley Manning is possibly far more damaging to the US government than all the .gov website defacements and stolen credit card records combined. Breach data and sources do vary by industry, so it’s not impossible that the data in a specific industry could support the notion of an insider-dominated threat landscape. However, I doubt it would be significant enough to produce the oft-stated global averages.
Neither the surveys nor the reports present a consistent number, but they do give us ranges that are much lower than what we usually hear from vendors. We can easily refute the argument that most breaches are caused by insiders. Granted, user computers may be the means through which the breach passed, but that’s like blaming guns for gun-related crime, and it also obscures negligence on the part of companies for not taking adequate measures to protect against accidents, lock former employees out, and properly restrict partner/vendor access.
A note on surveys
The most common issue with security surveys is that they typically ask respondents for their opinion but are often presented by the media or others as fact. If you ask someone their opinion, it’s not fact; it may resemble and even come close to fact, but it’s still not fact. The only time an opinion survey is fact is if you’re explicitly measuring opinion (“Do you think Senator X is a bad man?”). More important is the design of the question; consider these variants:
- “Where do you think most breaches come from?” (bad guys)
- “Do you think most of your risk comes from insiders?” (yes, that’s what the last vendor told me)
- “Who could do the most harm to your business?” (an angry employee)
- “What number of breaches involved employees?” (most of them, they were the victims)
- “Have you had any breaches caused by insiders?” (yes)
- “How many breaches have you had that were caused by insiders? How many were caused by external parties?” (2, 10)
It’s possible to ask the same person all of those questions and get conflicting answers. The last two questions are the most interesting: the former could allow you to say “X% of respondents report breaches caused by insiders”, while the latter yields “Y% of N reported incidents were caused by insiders”. Obviously the second format is the more meaningful.
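To make the difference concrete, here’s a minimal sketch (with invented respondent numbers, purely for illustration) of how the same raw survey data produces both statistics:

```python
# Hypothetical survey data: per-respondent counts of breaches by source.
# All numbers are invented for illustration only.
respondents = [
    {"insider": 2, "external": 10},
    {"insider": 0, "external": 5},
    {"insider": 1, "external": 3},
    {"insider": 0, "external": 0},
]

# Format 1: "X% of respondents report breaches caused by insiders"
with_insider = sum(1 for r in respondents if r["insider"] > 0)
pct_respondents = 100 * with_insider / len(respondents)

# Format 2: "Y% of N reported incidents were caused by insiders"
insider_total = sum(r["insider"] for r in respondents)
all_total = sum(r["insider"] + r["external"] for r in respondents)
pct_incidents = 100 * insider_total / all_total

print(f"{pct_respondents:.0f}% of respondents had an insider breach")  # 50%
print(f"{pct_incidents:.0f}% of incidents were caused by insiders")    # 14%
```

Same answers, two very different headline numbers: half the respondents were touched by an insider breach, yet insiders account for only a small fraction of the incidents.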
Perhaps cynically, consider who publishes these reports: it’s usually vendors selling stuff (full disclosure: I used to co-author a security survey report for a large Canadian telco). For them, opinion measuring is a powerful tool and an attempt at understanding their customers better; give a salesperson a statistic (real or otherwise) and they’ll use it to overcome objections to adopting their particular product: “X% of companies are investing in this widget…” (implying, or explicitly stating, that you should too). Finally, it doesn’t help that the media get involved and write things like “type X breaches increased 50% this year”, when what that really means is that 2% of respondents reported a type X breach one year and the response rate rose to 3% the next.
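That last trick is just relative versus absolute change; a quick sketch using the same invented 2%-to-3% figures:

```python
# Hypothetical response rates: the share of respondents reporting a
# "type X" breach in two consecutive survey years (invented numbers).
year1, year2 = 0.02, 0.03  # 2%, then 3%

absolute_change = (year2 - year1) * 100          # in percentage points
relative_change = (year2 - year1) / year1 * 100  # the headline percentage

print(f"Absolute change: {absolute_change:.0f} percentage point")
print(f"Relative change: {relative_change:.0f}%")
```

A one-percentage-point move becomes a “50% increase” in the headline, which is technically true but easy to read as far more dramatic than it is.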
A perennial favorite, indeed. On surveys, also check out Cormac Herley et al., “Sex, Lies, and Cyber-crime Surveys”: http://research.microsoft.com/pubs/149886/SexliesandCybercrimeSurveys.pdf
While I think they paint with a somewhat too broad brush (having worked for a respected academic survey research organization, I may be a tad biased), their point is well made and should be heard widely.