
Security Monitoring of FireEye Off-Target During 2013’s Big Retail Breach

Two weeks ago, Bloomberg Businessweek broke this news:

“The biggest retail hack in U.S. history wasn’t particularly inventive…It’s a measure of…how conventional the hackers’ approach [was] that Target was prepared for such an attack…As they uploaded exfiltration malware to move stolen credit card numbers—first to staging points spread around the U.S. to cover their tracks, then into their computers in Russia—FireEye spotted them. Bangalore got an alert and flagged the security team in Minneapolis. And then…Nothing happened.”

I guess that when I wrote “Lessons Learned from the Target Attack” and listed these lessons:

  • Third party vendors are often your weak link
  • Defense-in-depth is the only protection once hackers have infiltrated the network
  • Beware of vulnerabilities in the security software itself

…I should have added something about security monitoring. You could argue that security monitoring is part of defense-in-depth, so I sort of covered it. But what I didn’t know then was that Target had deployed FireEye – a leading vendor in advanced anti-malware sandboxing – and had a round-the-clock security monitoring team in Bangalore watching the logs and alerts from its security software.

According to Bloomberg Businessweek’s sources, not only did FireEye detect the point-of-sale malware and pinpoint the addresses of the servers to which credit card information was initially being exfiltrated, but FireEye could also have automatically deleted or blocked the malware. Again according to unnamed sources, Target had turned automated remediation off. That, in itself, is very common: security automation’s risks to the availability of production systems may outweigh its protection benefits. But leaving remediation as a manual process makes it essential that incident response teams, such as Target’s in Minneapolis, stay on the ball and react to suspicious occurrences.
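
To make the trade-off concrete, here is a minimal sketch of an alert handler with remediation behind a single switch. Every name in it – the Alert fields, block_host, queue_for_analyst – is hypothetical, invented for illustration, and not FireEye’s actual product API.

```python
from dataclasses import dataclass

AUTO_REMEDIATE = False  # reportedly switched off at Target


@dataclass
class Alert:
    severity: str        # e.g. "critical"
    malware_family: str  # e.g. a POS memory scraper
    dest_ip: str         # staging server receiving stolen card data


def block_host(ip: str) -> None:
    """Automated containment: stop traffic to the suspect server."""
    print(f"firewall: blocking {ip}")


def queue_for_analyst(alert: Alert) -> None:
    """Manual path: a human in the SOC must pick this up and act."""
    print(f"SOC queue: {alert.severity} {alert.malware_family} -> {alert.dest_ip}")


def handle(alert: Alert) -> None:
    if AUTO_REMEDIATE:
        # Stronger protection, but a false positive here can take down
        # a production system -- the availability risk described above.
        block_host(alert.dest_ip)
    else:
        # Availability is preserved, but now everything depends on the
        # incident response team actually working the queue.
        queue_for_analyst(alert)


handle(Alert("critical", "POS memory scraper", "198.51.100.7"))
```

With AUTO_REMEDIATE off, as Target reportedly ran it, every critical alert becomes a task for a human analyst – which is exactly why the state of the Minneapolis SOC mattered so much.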
 
The breach was in part a failure of security monitoring, but it wasn’t the usual problem of no one reading the logs. The logs were read in Bangalore. So what was the problem? Were there too many false positives from FireEye? Earlier this year, a FireEye representative at the company’s booth at the 2014 RSA Conference told me that a major focus for the company is now “making alerts more actionable” (hence its purchase of Mandiant last year). Many of the sandboxing vendors I talked to were focused on this problem.
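
What does “making alerts more actionable” look like in practice? One common ingredient is deduplication plus severity ranking, so analysts see a short, prioritized queue instead of a raw stream. Here is a minimal sketch under that assumption; the alert names and severity scale are made up, and none of this is drawn from FireEye or Mandiant products.

```python
from collections import Counter

# Made-up severity scale; real products use their own taxonomies.
SEVERITY_RANK = {"critical": 3, "major": 2, "minor": 1}

# A raw alert stream with duplicates, as a (name, severity) list.
raw_alerts = [
    ("malware.binary", "critical"),
    ("malware.binary", "critical"),   # same detection firing twice
    ("policy.violation", "minor"),
    ("exfil.callback", "major"),
]

# Collapse duplicates, keeping a count so volume is visible at a glance.
counts = Counter(raw_alerts)

# Highest severity first; ties broken by how often the alert fired.
triaged = sorted(
    counts.items(),
    key=lambda item: (SEVERITY_RANK[item[0][1]], item[1]),
    reverse=True,
)

for (name, severity), n in triaged:
    print(f"{severity:>8}  x{n}  {name}")
```

Even a queue this simple puts the repeated critical detection at the top – the kind of alert the Bangalore team did in fact escalate to Minneapolis.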

The Bloomberg article speculates: “It is possible that FireEye was still viewed with some skepticism by its minders at the time of the attack.” But the article also raises the possibility that the Minneapolis security operations center (SOC) was in disarray because “the SOC manager…departed the company in October…leaving a crucial post vacant” (just before the attack started in November).

Bottom line: Something was going wrong in Target’s SOC. We don’t know conclusively yet whether it was a people problem or a technology problem. But we do have a new lesson learned:

  • Make sure incident response is well-staffed and running smoothly