Attackers collaborate; defenders are drowning in a sea of data.
“Can’t take the humans out of the loop.” Human and machine.
These are key points. Here’s why.
This week we analyzed malware believed to be one of the pieces of code used in the NY Times attacks. It’s not automated analysis; it’s automation-assisted analysis. Code is pulled from a machine after enterprise-wide searches for unique ‘indicators of compromise’. This is done in one of a few ways, but the most common is through software that examines each computer for file names, sizes, MD5 hash matches, or other pieces of metadata about the files on the system. If there’s a match, a file, a piece of running memory, and/or a forensic image of the computer is pulled, either locally or remotely.
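The hash-matching step described above can be sketched in a few lines. This is a minimal illustration, not how any of the commercial tools actually work internally; the `IOC_MD5` set here is a hypothetical stand-in for indicators a real tool would receive from a threat feed.

```python
import hashlib
from pathlib import Path

# Hypothetical IOC list. In practice these hashes come from a threat
# feed or an enterprise tool, not a hard-coded set. The value below is
# the well-known MD5 of the EICAR test file, used only as a stand-in.
IOC_MD5 = {
    "44d88612fea8a8f36de82e1278abb02f",
}

def md5_of(path: Path) -> str:
    """Compute the MD5 digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: Path):
    """Yield files under root whose MD5 matches a known indicator."""
    for p in root.rglob("*"):
        if p.is_file() and md5_of(p) in IOC_MD5:
            yield p

# Example usage (scanning a whole tree can take a while):
# for hit in scan(Path("/home")):
#     print(f"possible compromise: {hit}")
```

Real endpoint tools also match on file names, sizes, registry keys, and in-memory artifacts, and they track hashes an attacker can trivially change; hash matching alone is the weakest form of IOC search, which is part of why the volume of results gets so large.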
Here’s the problem. When you run those host-based tools to look for these indicators of compromise (IOCs) across an enterprise, any given company is going to be inundated with results, and most will not be false positives.
So here’s what happens...
- You load a system that can inventory computers, their file systems, and the infrastructure. There are some really nice tools out there that do this work for us: Mandiant’s MIR, Carbon Black, RSA ECAT, and others. (Note that I didn’t call out anti-virus. AV alone is simply not in the same class as these other tools. Signature-based anti-virus alone is no longer enough as a defense in a world where attacks and threats change daily.)
- Every file on every device is identified, and the metadata about those files is collected and sent to a centralized analysis device for aggregation and correlation. (Not all tools require moving data to the centralized analysis machine. Some perform the work locally on the host; others send data back.)
- At the same time, a good team will have some form of network analysis device running on their Internet Point of Presence (iPoP), whether a netflow analysis tool (e.g., Arbor, NetWitness) or full packet capture, thereby aggregating even more data that must later be analyzed.
- Now comes the hard part... every piece of that data must be examined, separating the known good from the suspected bad, our indicators of compromise. Done manually, correlating this data could take months. Thankfully we don’t have to do it manually.
- The results? You’re not going to like this... expect big numbers. In a small or medium sized company without a formalized information security structure, expect every computer to be reported as compromised. In larger enterprises where formal information security processes exist, expect a large percentage of your systems to be flagged at first; as you work the issues, the numbers will level out. Even in enterprises with great process, training, and so on, it is not uncommon to experience a 2-5% daily detection (probable compromise) rate. Many companies with GREAT infosec teams have stated that they are attacked hundreds of thousands of times per week, so these numbers, and a 2-5% targeted compromise rate, are not uncommon. In fact, it’s probably a pretty good number. So imagine this... your company owns 1,000 computers. Giving you the benefit of the doubt, let’s say you have a great, highly trained, very mature infosec team. 20-50 compromised computers per day (7 days/week) could (should) be expected.
- Assuming you have a great team (I’ll assume that you do!), and you have 20 compromised computers every day (140 per week), and each computer has a 100 GB hard drive (yes, I know this is small), not including shares, SAN devices, network attached storage, cloud, etc., you need to be prepared to perform incident response on 140 systems and 14 TB of host-based data per week, or roughly 60 TB per month, plus whatever you’ve collected at the iPoP. That’s like examining more than six Libraries of Congress every month! Now, how might you pull out the good (bad) stuff? Automation (and hopefully run from systems that are not compromised!). You’re going to want to run tools against those systems to help you assess which ones are really compromised, or at a minimum, prioritize your work. How do you do that?
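The back-of-envelope sizing above is worth making explicit, since the inputs (20 detections per day, 100 GB per host) are assumptions, not measurements. Note that 140 systems at 100 GB each works out to 14 TB per week, not 1.4:

```python
# Back-of-envelope incident-response data volume.
# Inputs are the post's assumptions, not measured values.
detections_per_day = 20      # probable compromises per day
days_per_week = 7
gb_per_host = 100            # assumed host drive size

systems_per_week = detections_per_day * days_per_week   # 140 systems
gb_per_week = systems_per_week * gb_per_host             # 14,000 GB
tb_per_week = gb_per_week / 1000                         # 14 TB
tb_per_month = tb_per_week * 52 / 12                     # ~60.7 TB

print(f"{systems_per_week} systems/week, "
      f"{tb_per_week:.0f} TB/week, {tb_per_month:.1f} TB/month")
```

Even at this deliberately conservative drive size, the monthly volume dwarfs what a manual team can triage, which is the argument for automated filtering and prioritization.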
So, both Brett and Dr. Schneck are absolutely correct.
Defenders are drowning in data, both enterprise data that must be analyzed and potential sources of good intelligence and indicators. Defenders MUST gather information from as many good sources as possible. You need sources of known goods and sources of high-confidence bads, and you must share information with your peers and with other industries. Look for sources of high quality indicators that can be placed in your networks to proactively block, drop, and stop attacks before they successfully penetrate your environment. An interesting observation: smaller infosec shops generally try to save money by seeking out open source lists of indicators. Their teams are often technically smart, and they choose to spend time reading the open source lists of bugs. While they can obtain a lot of good information, finding those nuggets requires a lot of time reading, analyzing, and evaluating the information before actual implementation. This is counterproductive. Research time can be reduced significantly by purchasing a membership in an information sharing group like Red Sky (or others), where many members are reading those same lists and talking about them in a private environment. This is a game changer. In fact, a July 2012 McKinsey report found that the average worker spends 28 percent of the workweek managing email. The report suggests that knowledge workers (I’d call infosec pros knowledge workers) using a social environment (like Red Sky?) to exchange knowledge can double the benefits received and increase productivity by 25%!
This goes to the heart of why Red Sky exists. How long before your incident responders burn out? What if you could reduce their workload by 25% by participating in our social environment? What if they could be twice as productive by reducing the cycle times needed to research cause and effect? They can.
- Defenders are drowning in data, and losing the fight. (Hartman)
- Humans are required in the loop to understand the nuances of daily changing threats and attacks. (Schneck)
- Current thinking on how to capture intelligence and information isn’t working. Red Sky Alliance and its public sector portal, Beadwindow, are working. (Stutzman)
Call today for an introduction to our community. With every Red Sky demo, we’ll give you our latest white paper, “How Great Companies Fight Targeted Attacks and APT”. In fewer than 10 pages, this paper outlines, at an executive level, seven things that companies who’ve dealt with, survived, and thrived in the face of targeted attacks have done effectively to defend themselves against targeted and advanced persistent threats.
Until next time,
Have a great week!