Why Are We Only Finding Out About the VeriSign Security Breach Now?


“Key Internet operator VeriSign hit by hackers” read the headline in Reuters yesterday. This is big news because, as the article casually points out, VeriSign Inc. “is ultimately responsible for the integrity of Web addresses ending in .com, .net and .gov.” That means a serious breach could let hackers redirect people to fake sites.
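To see why that matters, here's a minimal, purely illustrative sketch of what happens when your browser looks up a site. The domain name and code are hypothetical examples, not anything from the VeriSign incident; the point is only that the browser trusts whatever answer comes back from the DNS chain that VeriSign's registry sits atop.

```python
import socket

# A browser's first step for "www.example.com" is an ordinary DNS lookup.
# The resolver chain walks from the root servers down through the .com
# registry infrastructure (operated by VeriSign) to the site's own name servers.
ip = socket.gethostbyname("www.example.com")
print(ip)

# The browser then connects to whatever address came back; it has no
# independent way to tell whether the answer is genuine. If the registry
# layer were ever compromised and returned an attacker-controlled address,
# users could be silently routed to an impostor site.
```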

What caught me by surprise was that these previously undisclosed breaches occurred back in 2010. How is it that we’re only hearing about this now? It turns out that VeriSign didn’t reveal this information until an SEC filing in October of 2011. It might not have released the information at all had it not been for an SEC request on October 13 urging all companies to report hacking incidents.

(MORE: SEC Issues New Cybersecurity Guidelines for Publicly Traded Companies)

It then took Reuters a few months to comb through around 2,000 SEC filings released after the October request, which is why you and the rest of the public only heard about this on the morning of February 2, 2012.

Now, as far as we can tell, nothing too terrible happened because of the breach. VeriSign told Reuters that its executives “do not believe these attacks breached the servers that support our Domain Name System network,” although I’d feel a lot better if they used the words “are certain” instead of “do not believe.” VeriSign was also still selling Secure Sockets Layer (SSL) certificates at the time of the attacks, a business it didn’t hand off to Symantec until 2010.

Still, we’re not talking about McDonald’s getting hacked. Even the security breach of Sony’s PlayStation Network, which put millions of people’s personal information at risk, wasn’t this bad. We’re talking about the breach of a company that, in its own marketing materials, boasts that “more than half (56%) of the world’s DNS hosts rely on the Verisign .net and .com infrastructure.” The fact that a company this big and this central to the Internet would wait so long to reveal it had been attacked is unacceptable.

(MORE: Hackers: We Intercepted FBI, Scotland Yard Call)

“It’s always a cat and mouse game between fraudsters and security experts,” says Catalin Cosoi, chief security researcher at BitDefender Labs. “In this particular scenario, advising users into installing state-of-the-art anti-phishing solutions, adjusting anti-malware engines into not skipping files that are signed with a valid certificate and even basic education could have been quite helpful immediately after the attack.”

While you can certainly point a finger at VeriSign for all of this, it doesn’t help that the government has been so lax when it comes to making companies report security breaches. Remember, the SEC only requested that companies offer this information in a guidance document; there are no legal repercussions for not doing so.

As long as it’s not required, companies will continue to hide security breaches from their investors. Only when the security breach involves the possibility of stolen credit card numbers—which the public would find out about anyway after being hit with unfamiliar charges—do companies ever seem to fess up and warn the rest of us.

This needs to change. The more we trust companies with our money and personal information, the more they owe us in terms of telling us when those things are at risk.

MORE: The Basics Behind Google’s New Privacy Policies