20 Critical Security Controls: Control-by-Control
Posted January 20th, 2010 by rybolov

OK, now for the control-by-control analysis of the 20 Critical Security Controls. This is part 2. Look here for the first installment. Read part 3 here.
Critical Control 1: Inventory of Authorized and Unauthorized Devices. This is good: get an automated tool to do IT asset discovery. Actually, you can combine this with Controls 2, 3, 4, 11, and 14 with some of the data center automation software–you know the usual suspects, just ask your ops folks how you get in on their tools. This control suffers from scope problems because it doesn’t translate down to the smaller-system scale: if I have a dozen servers in an application server farm inside of a datacenter, I’ll usually know if anybody adds something. The metric here (detect all new devices in 24 hours) “blows goats” because you don’t know if you’re detecting everything. A better test is for the auditor to do their own discovery scans and compare the results to the list in the permanent discovery tool–that would be validation that the existing toolset does work–with a viable metric of “percentage of devices detected on the network”. The 24-hour metric is more like a functional requirement for an asset discovery tool. And as far as the isolation of unmanaged assets goes, I think it’s a great idea and the way things should be, except for the fact that you just gave us an audit requirement to implement NAC.
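Since I brought up “percentage of devices detected on the network,” here’s roughly what that math looks like. This is just a sketch in Python, assuming you can dump the auditor’s independent scan results and the permanent discovery tool’s inventory out to flat files (the file names are made up for the example):

```python
# A sketch of the "percentage of devices detected on the network" metric.
# Assumes both lists can be exported as plain text, one IP or hostname per
# line; the file names are placeholders.

def load_assets(path):
    """Read one asset identifier per line, skipping blanks and comments."""
    with open(path) as f:
        return {line.strip().lower() for line in f
                if line.strip() and not line.lstrip().startswith("#")}

auditor_scan = load_assets("auditor_discovery_scan.txt")    # independent scan
tool_inventory = load_assets("discovery_tool_export.txt")   # permanent tool

missed = auditor_scan - tool_inventory   # devices the permanent tool never saw
coverage = 100.0 * len(auditor_scan & tool_inventory) / max(len(auditor_scan), 1)

print(f"Devices detected by the permanent tool: {coverage:.1f}%")
for device in sorted(missed):
    print("MISSED:", device)
```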
Critical Control 2: Inventory of Authorized and Unauthorized Software. Sounds like the precursor to whitelisting. I think this is more apropos to the Enterprise unless your system is the end-user computing environment (laptops, desktops). Yes, this control will help with stuff in a datacenter to detect when something’s been pwned but the real value is out at the endpoints. So yes, not happy with the scope of this control. The metric here is as bad as for Control 1 and I’m still not happy with it. Besides, if you allow unauthorized software to be on an IT device for up to 24 hours, odds are you just got pwned. The goal here should be to respond to detected unauthorized software within 24 hours.
Critical Control 3: Secure Configurations for Hardware and Software on Laptops, Workstations, and Servers. This is actually a good idea, provided that you give me a tool to apply the settings automagically, because manual configuration sucks. I think it’s about a dozen different controls all wrapped into one; it’s just trying to do too much in one little control. The time-based metric for this control is really bad; it’s like watching a train wreck. But hey, I’ll offer up my own: percentage of IT assets conforming to the designated configuration. It’s hinted at in the implementation guide; make it officially the metric and this might be a control I can support.
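And since I offered up “percentage of IT assets conforming to the designated configuration,” here’s a rough sketch of how you’d compute it, assuming your configuration-scanning tool can export per-asset pass/fail results to CSV (the file name and column names here are invented):

```python
# A sketch of "percentage of IT assets conforming to the designated
# configuration." Assumes the config-scanning tool exports per-asset results
# as CSV with 'asset' and 'compliant' columns; names are placeholders.
import csv

with open("config_scan_results.csv", newline="") as f:
    rows = list(csv.DictReader(f))

total = len(rows)
conforming = sum(
    1 for r in rows
    if r["compliant"].strip().lower() in ("true", "yes", "pass")
)

print(f"{conforming}/{total} assets conform to the baseline "
      f"({100.0 * conforming / max(total, 1):.1f}%)")
```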
Critical Control 4: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches. This is basically Control 3 for network devices. The comments there also apply here.
Critical Control 5: Boundary Defense. This control is too much stuff crammed into one space. As a result, it’s not concise enough to be implemented–it’s all over the map. In fact, I’ll go as far as to say that this isn’t really one control, it’s a control theme with a ton of controls inside of it. The “audit requirements” here are going to utterly kill me as a security manager because there is so much of a disparity between the control and the actual controls therein.
Critical Control 6: Maintenance, Monitoring, and Analysis of Audit Logs. Some of this control should be part of Controls 3 and 4 because, let’s be honest here, you’re setting up logging on devices the way that the hardening guide says you should. The part that’s needed in this control is aggregation of logs and review of logs–get them off all the endpoints and into a centralized log management solution. This is mentioned as the last “advanced” implementation technique, but if you’re operating a modern Enterprise, I don’t see how you can get the rest of the implementation done without some kind of SIEM piece. I just don’t get the metric here, again with the 24 hours. How about “percentage of devices reporting into the SIEM”? Yeah, that’s the easy money here. The testing of this control makes me do a facepalm: “At a minimum the following devices must be tested: two routers, two firewalls, two switches, ten servers, and ten client systems.” OK, we’ve got a LAN/WAN with 15,000 endpoints and that’s all we’re going to test?
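For the “percentage of devices reporting into the SIEM” metric, here’s a minimal sketch, assuming you can export the asset inventory (one host per line) and a list of SIEM log sources with last-seen timestamps; the file names, columns, and formats are all hypothetical:

```python
# A sketch of "percentage of devices reporting into the SIEM." Assumes an
# asset inventory export (one host per line) and a SIEM export of log sources
# with last-seen timestamps; file names, columns, and formats are invented.
import csv
from datetime import datetime, timedelta

CUTOFF = datetime.now() - timedelta(hours=24)   # "reporting" = logs in the last 24 hours

with open("asset_inventory.txt") as f:
    inventory = {line.strip().lower() for line in f if line.strip()}

reporting = set()
with open("siem_log_sources.csv", newline="") as f:
    for row in csv.DictReader(f):               # columns: host, last_seen (ISO 8601)
        if datetime.fromisoformat(row["last_seen"].strip()) >= CUTOFF:
            reporting.add(row["host"].strip().lower())

pct = 100.0 * len(inventory & reporting) / max(len(inventory), 1)
print(f"Devices reporting into the SIEM in the last 24 hours: {pct:.1f}%")
for host in sorted(inventory - reporting):
    print("SILENT:", host)
```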
Critical Control 7: Application Software Security. You keep using those words, I do not think they mean what you think they mean. Application security is a whole different world, and 20 CSC doesn’t even begin to scratch the surface of it. Oh, but guess what? It’s a tie-in to the 25 Most Dangerous Programming Errors, which is about all this control is: a pointer to a different project. The metric here is very weak because it’s not tied back to the actual control.
Critical Control 8: Controlled Use of Administrative Privileges. This should be part of Controls 3 and 4, along with something about getting an Identity and Access Management system so that you have one ID repository. I know this is a shocker to you, but the metric here sucks.
Critical Control 9: Controlled Access Based on Need to Know. This is a great idea, but as a control it’s too broad to achieve, which is why the 20 CSC were created in the first place. What do we really want here? Network share ACLs are mentioned, which is a control in itself, but the rest of this is hazy and leaves much room for interpretation. Cue “audit requirements” and the part where Rybolov says “If it’s this hazy, it’s not really a standard, it’s a guideline that I shouldn’t be audited against.”
Critical Control 10: Continuous Vulnerability Assessment and Remediation. All in all, not too bad. I would suggest “average time to resolve scan findings” here as a metric, or even something as “hokey” as the FoundScan metric just to gauge overall trends.
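If you want “average time to resolve scan findings,” it’s not hard to compute. A minimal sketch, assuming your scanner or ticketing system can export findings to CSV with opened and closed dates (the column and file names are placeholders):

```python
# A sketch of "average time to resolve scan findings." Assumes the scanner or
# ticketing system can export findings as CSV with 'opened' and 'closed'
# columns in YYYY-MM-DD format; names are placeholders.
import csv
from datetime import date

days_to_close = []
with open("scan_findings.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["closed"].strip():    # only count findings that actually got fixed
            opened = date.fromisoformat(row["opened"].strip())
            closed = date.fromisoformat(row["closed"].strip())
            days_to_close.append((closed - opened).days)

if days_to_close:
    avg = sum(days_to_close) / len(days_to_close)
    print(f"Average time to resolve: {avg:.1f} days "
          f"across {len(days_to_close)} closed findings")
```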
Critical Control 11: Account Monitoring and Control. Haven’t we seen this before? Yep, this should be incorporated into Controls 8, 3, and 4. However, periodic account reviews are awesome if you have the patience to do them.
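If you do have the patience, the review itself can be mostly automated. A bare-bones sketch, assuming you can export the directory accounts and an HR roster of active personnel to flat files (both file names are assumptions):

```python
# A bare-bones periodic account review: diff directory accounts against the
# active personnel roster and flag the leftovers. Both exports are
# assumptions -- substitute whatever your directory and HR systems produce.

def load_ids(path):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

directory_accounts = load_ids("directory_accounts.txt")   # e.g. an AD export
active_personnel = load_ids("hr_active_roster.txt")       # HR system export

orphans = sorted(directory_accounts - active_personnel)
print(f"{len(orphans)} accounts with no matching active person:")
for acct in orphans:
    print(" -", acct)
```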
Critical Control 12: Malware Defenses. OK, this isn’t too bad. Once again, the metric sucks, but I do like some of the testing steps. The way I would test this is to compare our system inventory with my total list of devices. A simple diff later, we have a list of unmanaged devices.
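That “simple diff” is short enough to show. A sketch, assuming your anti-malware console and your device inventory (hi, Control 1) can both export host lists to flat files (the file names are made up):

```python
# The "simple diff": anything on the network that isn't in the anti-malware
# console's inventory is unmanaged. File names stand in for whatever exports
# your AV console and discovery tool actually produce.

def load_devices(path):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

all_devices = load_devices("all_known_devices.txt")        # from Control 1's discovery
managed = load_devices("antimalware_console_export.txt")   # hosts with the agent

unmanaged = sorted(all_devices - managed)
print(f"{len(unmanaged)} unmanaged devices:")
for device in unmanaged:
    print(" -", device)
```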
Critical Control 13: Limitation and Control of Network Ports, Protocols, and Services. Host firewalls were not what I thought of here… I’m thinking more like firewalls and network segmentation, where you have to get change control approval to add a firewall rule. As far as the host setup goes, this should be part of Control 3.
Critical Control 14: Wireless Device Control. Not bad, but this should be dumped into a technical standard that you use like a hardening guide. Metric here still sucks, but I don’t really need to say this again… oh wait, I just did.
Critical Control 15: Data Loss Prevention. Puh-lease. I’ll be the first to admit, I’m a big believer in DLP done right, and that it’s an awesome tool to solve some unique problems. But I don’t think that the market is mature enough to add it into your catalog of controls. Also, this will fall flat on its face if your system is just a web application cluster: DLP addresses the endpoints (desktops, laptops, mobiles) and the outbound gateways (email, web, etc.). The problem with this control is that if you don’t buy and implement a full DLP solution (cue Rich Mogull and his DLP guide), there isn’t anything else that has a similar capability. This is one of those controls where the 800-53 mapping gets really creative–Good Ship Lollipop Creative because we’re tapdancing around the issue that DLP-type solutions aren’t specifically required in 800-53.
These controls don’t have automated ways to implement and test them:
Critical Control 16: Secure Network Engineering. This control is a steaming crater. It’s very much a guideline instead of an auditable standard.
Critical Control 17: Penetration Tests and Red Team Exercises. Not bad. Still too easy to shop around for the bargain-basement penetration test team. But yeah, pretty good overall.
Critical Control 18: Incident Response Capability. Good control. Hard to test/audit except to look at after-incident reports.
Critical Control 19: Data Recovery Capability. Not bad here. Not real COOP/DR/ITCP but about on par with typical controls frameworks.
Critical Control 20: Security Skills Assessment and Appropriate Training to Fill Gaps. Good idea. Hard to implement without something like 8570.10 to give you a matrix by job position. If you want to change the world here, give your own mapping in the control.