Some Words From a FAR

Posted September 9th, 2008 by

FAR: it’s the Federal Acquisition Regulation, and it covers all the buying that the government does.  For contractors, the FAR is a big deal–violate it and you end up blackballed from Government contracts or having to pay back money to your customer, either of which is a very bad thing.

In early August, OMB issued Memo 08-22 (standard .pdf caveat blah blah blah), which gave some of the administrivia about how they want to manage FDCC–how to report it in your FISMA report, what is and isn’t a desktop, and a rough outline of how to validate your level of compliance.

Now I have mixed feelings about FDCC, you all should know that by now, but I think the Government actually did a decent thing here–they added FDCC (and any other NIST secure configuration checklists) to the FAR.

Check this section of M-08-22 out:

On February 28, 2008, revised Part 39 of the Federal Acquisition Regulation (FAR) was published which reads:
PART 39-ACQUISITION OF INFORMATION TECHNOLOGY
1. The authority citation for 48 CFR part 39 continues to read as follows: Authority: 40 U.S.C. 121(c); 10 U.S.C. chapter 137; and 42 U.S.C. 2473(c).
2. Amend section 39.101 by revising paragraph (d) to read as follows:
39.101 Policy.
* * * * *

(d) In acquiring information technology, agencies shall include the appropriate IT security policies and requirements, including use of common security configurations available from the NIST’s website at http://checklists.nist.gov. Agency contracting officers should consult with the requiring official to ensure the appropriate standards are incorporated.

Translated into English, what this means is that the NIST configurations checklists are coded into law for Government IT purchases.

This has a HUGE impact on both the Government and contractors.  For the Government, they just outsourced part of their security to Dell and HP, whether they know it or not.  For the desktop manufacturers, they just signed up to learn how FDCC works if they want some of the Government’s money.

Remember back in the halcyon days of FDCC when I predicted that one of the critical keys to success was being able to buy OEM desktops with the FDCC image pre-loaded?  It’s slowly becoming a reality.

Oh what’s that, you don’t sell desktops?  Well, this applies to all NIST configuration checklists, so as NIST adds to the intellectual property in the checklists program, you get to play too.  Looking at the DISA STIGs as a model, you might end up with a checklist for literally everything.

So as somebody who has no relation to the US Federal Government, you must be asking by now: how can you ride the FDCC wave?  Here’s Rybolov’s plan for secure desktop world domination:

  • Wait for the government to attain 60-80% FDCC implementation
  • Wait for desktops to have an FDCC option for installed OS
  • Review your core applications on the FDCC compatibility list
  • Adopt FDCC as your desktop hardening standard
  • Buy your desktop hardware with the image pre-loaded
  • The FDCC configuration rolls uphill to be the default OS that they sell
  • ?????
  • Profit!
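Validating your level of compliance, by the way, boils down to comparing a machine’s actual settings against the published baseline–which is exactly the comparison an SCAP scanner automates. A toy sketch in Python (the setting names here are made up for illustration, NOT actual FDCC content):

```python
# Toy compliance check: compare a system's actual settings against a
# published baseline. The setting names are hypothetical, not the real
# FDCC checklist content.
baseline = {
    "password_min_length": 12,
    "screen_lock_timeout_minutes": 15,
    "guest_account_enabled": False,
}

actual = {
    "password_min_length": 8,
    "screen_lock_timeout_minutes": 15,
    "guest_account_enabled": False,
}

def find_deviations(baseline, actual):
    """Return {setting: (expected, found)} for every non-compliant setting."""
    return {
        key: (expected, actual.get(key))
        for key, expected in baseline.items()
        if actual.get(key) != expected
    }

deviations = find_deviations(baseline, actual)
print(deviations)  # {'password_min_length': (12, 8)}
```

The real thing expresses the baseline in SCAP content and checks registry keys instead of a dict, but the deviation report at the end is the same idea.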

And the Government security trickle-down effect keeps rolling on….

Cynically, you could say that the OMB memos of late (FDCC, DNSSEC) are very well coached and that OMB doesn’t know anything about IT, much less IT security.  You probably would be right, but seriously: OMB doesn’t get paid to know IT, they get paid to manage and budget, and in this case I see some sound public policy in asking the people who do know what they’re talking about.

While we have our cynical hats on, we might as well give a nod to those FISMA naysayers who have been complaining for years that the law wasn’t technical/specific enough.  Now we have very static checklists, and the power to decide what a secure configuration should be has been taken out of the hands of the techies who would know and given to research and bureaucratic organizations who have no vested interest in making your gear work.

Lighthouse From AFAR photo by Kamoteus.




Posted in FISMA, NIST, What Doesn't Work, What Works | 8 Comments »

Government Pre-Election Slowdown has Started

Posted September 9th, 2008 by

Signs of the pre-election slowdown are around us, and I’m definitely starting to feel it.

For those of you outside the beltway, it breaks down like this:  people aren’t willing to make any long-term decisions or start any long-term projects because they will be overruled in a couple of months, after the elections, as election platforms meet reality.  Typically this happens once most of the political appointees are in place, and I have a feeling that early 2009 is going to be much fun, no matter who wins the presidency.

Now when the current president took charge of the executive branch, he issued a 5-point plan called the President’s Management Agenda.  You can check out the PMA on the OMB website.  And yes, E-Government is one of the 5.  You can expect something similar under the new administration.

As a parting shot, you know it’s a slowdown when you see contracts that will be awarded in November but the work doesn’t start until April.  =)

 

Lame Ducks Frozen in the Ice photo by digitalART2.




Posted in Odds-n-Sods | 1 Comment »

Oh Hey, Secure DNS now Mandatory

Posted August 27th, 2008 by

OMB sneaked this one in on me:  OMB Memo 08-23 requires secure DNS (standard .pdf caveat).  Agencies need to submit a plan by September 5th on how they will accomplish this.  The whole switchover should occur by December 2009.

The interesting thing to me is that OMB is getting bolder in specifying technical solutions.  Part of me wants to scream because public policy people have no business dictating technical solutions–that’s what we have standards boards and RFCs for.
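Real DNSSEC involves public-key signatures (DNSKEY, RRSIG, and DS records), which is well beyond a blog post, but the chain-of-trust idea fits in a few lines. A toy model in Python, with hashes standing in for the real signatures–illustration only, not an implementation:

```python
import hashlib

# Toy model of the DNSSEC chain of trust: each zone publishes a digest
# of its child's key (a DS record in real DNSSEC), so trust flows down
# from the root. Hashes stand in for real public-key signatures.
def digest(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

root_key = "root-key-material"
gov_key = "gov-key-material"
agency_key = "agency-example-key-material"

# Each zone: its own "key" plus the digest of its child's key.
zones = {
    ".":           {"key": root_key,   "child_ds": digest(gov_key)},
    "gov.":        {"key": gov_key,    "child_ds": digest(agency_key)},
    "agency.gov.": {"key": agency_key, "child_ds": None},
}

def validate_chain(path):
    """Walk parent->child and check each child's key against the parent's DS."""
    for parent, child in zip(path, path[1:]):
        if zones[parent]["child_ds"] != digest(zones[child]["key"]):
            return False
    return True

print(validate_chain([".", "gov.", "agency.gov."]))  # True
```

The point of the exercise: a resolver only needs to trust the root, and any tampering with a key anywhere in the chain breaks validation below it.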

From what I hear, some of this is because OMB is starting to be a really bad lame duck.  Think about it: what are the odds that anybody at OMB is going to be around in December 2009?  Completely unofficial word on the street is that OMB is pushing last-minute initiatives because of politics–trying to accomplish things in time for the elections.

Also, I think that OMB is getting tired of NIST’s nonspecificity in their guidance.  NIST’s generic approach is necessary because of their role as a research organization and a producer of methodologies.

The solution to all this?  Well, the way it happens in the rational world is organic standards boards.  Yes, they have their problems (*cough* WAFs anyone? *cough*) but overall, they fill a place.  Inside Government, we don’t have much of that happening–we have the CIO council and the Enterprise Architecture folks, but nothing security-specific.

Lock Up Your Data photo by psd.

Description of the picture, it’s great and needs to be repeated:

The road passes the temptations of Flash and AIR. Those who succumb or who are unfortunate enough to be lured by Silverlight’s Siren find themselves sold down the river of Rich User Experiences and hurled towards lock-in weir. The TiddlyWiki steps may rescue some, who can then join those who stuck to the main path of Javascript and AJAX for their interactions.

The URI scheme is based on DNS, a registry which has weaknesses, meanwhile the ICANN Fracture results from their greedily adding spurious new Top Level Domains such as .mobi, .jobs, .xxx and even .tel, which whilst generating more revenue (for them) causes mass confusion and threatens to break the opacity of URIs.




Posted in Technical | 2 Comments »

Effective Inventory Management

Posted August 20th, 2008 by

So what exactly is a “system”?  After all this time, it’s still probably one of the most misunderstood ways that we manage security in the Government.

The short answer is this:  a system is what you say it is.  The long answer is that it depends on the following factors:

  • Maturity of your agency
  • Budget processes and Exhibit 300s
  • The extent of your common controls
  • Political boundaries between inter-agency organizations
  • Agency missions
  • Amount of highly-regulated data such as PII or financial

Yes, this all gets complicated.  But really, whatever you say is a system is a system; the designation is just for you, so you can manage the enterprise in pieces.  There are 3 main techniques that I use to determine what is a system:

  • As a budget line-item: If it has an Exhibit 300, then it’s a system.  This works better for Plans of Action and Milestones (POA&Ms), but in reality there might not be a 1:1 correlation between systems and Exhibit 300s.
  • As a data type: If it has a particular type of data, then it’s a system.  This works well for special-purpose systems or where a type of data is regulated, such as PII or financial data.
  • As a project or program: if it’s the same people that built it and maintain it, then it’s a system.  This dovetails in nicely with any kind of SDLC or with any kind of outsourcing.
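The three techniques above can be sketched as one decision function. The field names here are hypothetical, not from any real inventory tool:

```python
# A sketch of the three "what is a system?" techniques as one decision
# function. Any asset matching any technique gets designated a system.
def is_system(asset):
    # Budget line-item: an Exhibit 300 of its own makes it a system.
    if asset.get("has_exhibit_300"):
        return True
    # Data type: highly-regulated data (PII, financial) makes it a system.
    if asset.get("data_types", set()) & {"PII", "financial"}:
        return True
    # Project/program: one team built it and maintains it.
    if asset.get("dedicated_program_team"):
        return True
    return False

print(is_system({"has_exhibit_300": True}))          # True
print(is_system({"data_types": {"PII"}}))            # True
print(is_system({"data_types": {"public"}}))         # False
```

In practice you would pick one technique as your primary rule and use the others as tie-breakers, which is exactly the "Exhibit 300=System" whiteboard trick described below in the war story.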

Inventory photo by nutmeg.

Inventory management techniques that work:

  • Fewer systems are better.  Each system incurs overhead in effort and cost.
  • More systems work when you have no idea what is out there, but they will cripple you in the long term because of the overhead.
  • Start with many systems, assess each as its own piece, then consolidate them into a general support system or common controls package.
  • Set a threshold for project size in either pieces of hardware or dollar value.  If the project exceeds that threshold, then it’s a system.
  • Determine if something will be a system when the budget request is made.  Good CISOs realize this and have a place on the investment control board or capital planning investment board.

Guerilla CISO war story time:

Way back when all this was new, one of the agency CISOs would have a roundtable every quarter or so.  I won’t name who, but some of my blog readers know.  Almost every meeting devolved at some point into the time-honored sticking point of “what is a system?”  Everybody wanted to know: if they had “2 servers, 3 PCs, a database, a dog, and a dickfore”, was that a system?  After one too many iterations, the gray-hair in the group would put up “Exhibit 300=System” on the whiteboard before every meeting.  Then when the inevitable conversation of “what is a system?” would come up, he would just point to the board.

And another story:

Several years ago I was working an IT outsourcing contract with an inventory that was determined using the budget line-item technique.  Turned out we had all sorts of systems, some of which didn’t make sense, like the desktop client to manage the local admin account.  One of my first priorities was to consolidate as many systems as I could.  Not that I was altruistic about saving money or anything; it was that the fewer systems I had, the less paperwork needed to be generated. =)   Most of the systems I rolled up into a general support system aimed at basic user connectivity.




Posted in FISMA | No Comments »

Cloud Computing and the Government

Posted August 19th, 2008 by

Post summary: We’re not ready yet culturally.

What spurred this blog post into being is this announcement from ServerVault and Apptis about a Federal Computing Cloud.  I think it’s a pretty ballsy move, and I’ll be watching to see if it works out.

Disclaimer: at one time I managed security for something similar in the managed services world, only it was built account-by-account, with everything being a one-off.  And yeah, we didn’t start our organization the right way, so we had a ton of legacy concepts that we could never shake off, stemming more than anything else from our commercial background and ways of doing things.

Current Theory on Cloud Computing photo by cote.

The way you make money in the managed services world is on standardization and economy-of-scale.  To us mere mortals, it means the following:

  • Standardized OS builds
  • Shared services where it makes sense
  • Shared services as the option of choice
  • Split your people’s time between clients
  • Up-charge for non-standard configurations
  • Refuse one-off configurations on a case-by-case basis

The last 2 were our downfall.  Always eager to please our clients, our senior leadership would agree to whatever one-offs they felt were necessary for client-relationship purposes, but without regard to the increased costs and inefficiency when it came time to implement.

Now for those of you out in the non-Government world, let me bring you to the conundrum of the managed services world:  shared services only works in limited amounts.  Yes, you can manage your infrastructure better than the Government does, but they’ll still not like most of it because, culturally, they expect a custom-built solution that they own.  Yes, it’s as simple as managing the client’s expectations of ownership vs. their cost savings, and I don’t think we’re over that hurdle yet.

And this is the reason: when it comes to security and cloud computing, the problem is that you’re only as technically literate as your auditors are.  If they don’t understand what the solution is and what the controls are around it, you do not have a viable solution for the public sector.

A “long time ago” (9000 years at least), I created the 2 golden rules for shared infrastructure:

  • One customer cannot see another customer.
  • One customer cannot affect another customer’s level of service.
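Golden rule #1 is, at bottom, an access-control invariant: every read is scoped to the requesting customer, with no path around it. A minimal sketch (names are illustrative, not any real product’s API):

```python
# A sketch of golden rule #1 as a scoped read: every request carries a
# customer id, and the data layer only ever returns rows that customer owns.
records = [
    {"owner": "customer_a", "data": "a-billing"},
    {"owner": "customer_b", "data": "b-billing"},
]

def read_records(requesting_customer):
    """One customer cannot see another customer: reads are always scoped."""
    return [r["data"] for r in records if r["owner"] == requesting_customer]

print(read_records("customer_a"))  # ['a-billing']
print(read_records("customer_c"))  # []
```

Rule #2 is the same idea applied to capacity: per-tenant quotas and rate limits, so one customer’s load can’t degrade another customer’s level of service.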

And the side-rules for shared infrastructure in the public sector:

  • We have a huge set of common controls that you get the documentation for.  It will have my name on it, but you don’t have to spend the money to get it done.
  • It’s to my benefit to provide you with transparency in how my cloud operates because otherwise, my solution is invalidated by the auditors.
  • Come to us to design a solution, it’s cheaper for you that way.  I know how to do it effectively and more cheaply because it’s my business to know the economics of my cloud.
  • You have to give up control in some ways in order to get cost savings.
  • There is a line beyond which you cannot change or view anything, because of the 2 golden rules.  The only exception is that I’ll tell you how it’s made, but you can’t see any of the data that goes into my infrastructure.
  • If I let you audit my infrastructure, you’ll want to make changes, which can’t happen because of the 2 golden rules.
  • I’ll be very careful where I put your data because if your mission data spills into my infrastructure, I put myself at extreme risk.

So, are ServerVault and Apptis able to win in their cloud computing venture?  Honestly, I don’t know.  I do think that when somebody finally figures out how to do cloud computing with the Federal Government, it will pay off nicely.

I think Apptis might be spreading themselves fairly thin at this point; rumor has it they were having some problems last year.  I think ServerVault is in their comfort space and didn’t have to do too much for this service offering.

I can’t help but think that there’s something missing in all of this, and that something is partnering with a sponsoring agency on a Line of Business.  FEA comes to mind.




Posted in What Doesn't Work, What Works | 1 Comment »

New SP 800-60 is Out, Categorize Yerselves Mo Better

Posted August 18th, 2008 by

While I was slaving away last week, our friends over at NIST published a new version of SP 800-60.  Go check it out at the NIST Pubs Page.

Now for those of you who don’t know what 800-60 is, go check out my 3-part special on the Business Reference Model (BRM), a primer on how SP 800-60 aligns FIPS-199 with the BRM, and a post on putting it all together with a catalog of controls.

And oh yeah, the obligatory press reference: Government Computer News.

Data Release Show photo by Discos Konfort.

So deep down inside, you have to be asking one question by now:  “Why do we need SP 800-60?”  Well, 800-60 does the following:

  • Level-sets data criticality across the Government:  Provides a frame of reference for determining criticality–i.e., if my data is more important than this but less important than that, then it’s a moderate for criticality.
  • Counters the tendency to rate system criticality higher than it should be:  Everybody wants to rate their system as high criticality because it’s the safe choice for their career.
  • Protection prioritization:  Helps us point out at a national level the systems that need more protection.
  • Is regulations-based:  The criticality ratings reflect laws and standards.  For example, Privacy Act Data is rated higher for confidentiality.
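The mechanics underneath all this are the FIPS-199 high water mark: for each security objective, the system inherits the highest impact level of any information type it handles, and SP 800-60 is the lookup table for those per-type levels. A quick sketch (the information types and levels below are examples, not a real 800-60 lookup):

```python
# FIPS-199 style categorization: the system's impact level for each
# security objective is the high-water mark across all the information
# types it handles.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

info_types = {
    "public web content": {"confidentiality": "low",      "integrity": "moderate", "availability": "low"},
    "privacy act data":   {"confidentiality": "moderate", "integrity": "moderate", "availability": "low"},
}

def high_water_mark(info_types):
    """Take the maximum impact level per objective across all info types."""
    result = {}
    for objective in ("confidentiality", "integrity", "availability"):
        result[objective] = max(
            (t[objective] for t in info_types.values()),
            key=LEVELS.get,
        )
    return result

print(high_water_mark(info_types))
# {'confidentiality': 'moderate', 'integrity': 'moderate', 'availability': 'low'}
```

Note how one moderate-rated information type pulls the whole system up to moderate, which is exactly why the level-setting function of 800-60 matters.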

All things considered, it’s a pretty decent system for Government use.

Now this is where I have a bit of heartburn with GRC tools and data classification in general in the private sector–they classify the wrong things.  Here’s how the vendors (not all of them; there is a ton of variation in implementation) want you to categorize your data:

  • HIPAA-regulated
  • PCI-DSS-regulated
  • SOX-regulated
  • All other data types

How your CISO needs to categorize data to keep the business afloat:

  • Data that gets you paid:  If you’re a business, your #1 priority is getting money.  This is your billing/AR/POS data that needs to keep going.
  • Data that keeps you with a product to sell over the next week:  usually ERP data, stuff whose absence slows down the production line.
  • Data that people want to rip off your customers:  hey, almost all the regulated data (PCI-DSS, HIPAA, etc) fits in here.
  • Data where people will rip you off:  i.e., your internal financial systems.  Typically this is SOX country.
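The contrast becomes concrete if you tag the same data stores both ways and then try to prioritize. The names here are illustrative:

```python
# The same data stores tagged two ways: the compliance view a GRC tool
# wants, and the business-impact view the CISO needs. Names illustrative.
datasets = {
    "point_of_sale":  {"compliance": ["PCI-DSS"], "business": "gets-you-paid"},
    "erp":            {"compliance": [],          "business": "keeps-product-moving"},
    "customer_pii":   {"compliance": ["HIPAA"],   "business": "theft-target"},
    "general_ledger": {"compliance": ["SOX"],     "business": "fraud-target"},
}

def recovery_order(datasets):
    """Prioritize by business impact, not by which regulation applies."""
    priority = ["gets-you-paid", "keeps-product-moving",
                "theft-target", "fraud-target"]
    return sorted(datasets, key=lambda name: priority.index(datasets[name]["business"]))

print(recovery_order(datasets))
# ['point_of_sale', 'erp', 'customer_pii', 'general_ledger']
```

Notice that the ERP data, which carries no compliance tag at all, outranks two regulated data types in the business view–that’s the gap between compliance and risk.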

I guess really it comes down to the differences between compliance and risk, but in this case, one version will keep you from getting fined, the other will keep your business running.




Posted in FISMA, NIST | No Comments »
