When Standards Aren’t Good Enough

Posted May 22nd, 2009

One of the best things about being almost older than dirt is that I’ve seen several cycles within the security community.  Just like fashion and ladies’ hemlines, if you pay attention long enough, you’ll see history repeat itself, or something that closely resembles history.  Time for a short trip “down memory lane…”

In the early days of computer security, all eyes were fixed on Linthicum and the security labs associated with the NSA.  In the late 80’s and early 90’s the NSA evaluation program was notoriously slow; glacial is a word one could use.  Bottom line, the process just wasn’t responsive enough to keep up with the changes and improvements in technology.  Products would spend years in evaluation and come out of the process with their enabling technology nearly obsolete.  It didn’t matter; it was the only game in town until NIST and the Common Criteria labs came onto the scene.  This has worked well, but the reality is that it’s not much better at vetting and moving technology from vendors to users.  The evaluation process takes time, and time means money; it also means that the code submitted for evaluation will most likely be several revisions old by the time it emerges from evaluation.  Granted, the lag may be only 6 months, or it may be a year; regardless, this is far better than before.

So…  practically speaking, if the base version of FooOS submitted for evaluation is, say, Version 5.0.1, several revisions, each solving operational problems affecting the organization, may have been released by the time the evaluation finishes.  We may find that we need to run Version 5.6.10r3 in order to pass encrypted traffic via the network.  Because we encrypt traffic, we must use FIPS 140-2 validated code, but in the example above the validated version of FooOS will not work in our network…  What does the CISO do?  We’ll return to this in a moment; it gets better!

In order to reach levels of FIPS-140 goodness, one vendor in particular has instituted “FIPS Mode.”  What this does is require administration of the box from a position directly in front of the equipment, or at the length of your longest console cable…  Clearly, this is not suitable for organizations with equipment deployed worldwide in locations that lack qualified administrators or network engineers.  Further, having to fly a technician to Burundi to clear sessions on a box every time it becomes catatonic is, at best, ridiculous, and at worst not in accordance with the network concept of operations.  How does the CISO propose a workable, secure solution?


Standard Hill photo by timparkinson.

Now to my point.  (About time, Vlad.)  How does the CISO approach this situation?  Allow me to tell you the approach I’ve taken…

1. Accept the fact that once FooOS has achieved its level of FIPS-140 goodness, the modules of code within the OS that implement cryptographic functionality have most likely not been changed in follow-on versions.  This also means you have to assume the vendor has done a good job of documenting the changes to their baseline in their release notes, and that they HAVE modular code…
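That assumption is at least partially checkable.  Here is a minimal sketch in Python, assuming plain-text release notes and a purely illustrative keyword list, that flags release-note lines worth a closer read before you trust a follow-on version:

```python
import re
import sys

# Keywords that suggest a change touched the cryptographic module.
# Purely illustrative -- tune the list to your vendor's terminology.
CRYPTO_KEYWORDS = [
    r"\bcrypto(graphic)?\b", r"\bcipher\b", r"\bAES\b", r"\b3DES\b",
    r"\bSSH\b", r"\bSSL\b", r"\bTLS\b", r"\bIPsec\b", r"\bIKE\b",
    r"\brandom number\b", r"\bkey (generation|exchange|zeroization)\b",
]
PATTERN = re.compile("|".join(CRYPTO_KEYWORDS), re.IGNORECASE)

def flag_crypto_changes(release_notes_path):
    """Return (line number, text) for release-note lines mentioning crypto."""
    hits = []
    with open(release_notes_path, encoding="utf-8") as notes:
        for lineno, line in enumerate(notes, start=1):
            if PATTERN.search(line):
                hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for lineno, text in flag_crypto_changes(sys.argv[1]):
        print(f"line {lineno}: {text}")
```

A hit doesn’t prove the validated module changed; it tells you which release notes to read closely and what to ask the vendor about.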

2. Delve into the vendor documentation and FIPS-140 itself to find out exactly what “FIPS Mode” is, its benefits, and the requirements behind it.  Much of the written documentation in the standard deals with physical security of the cryptographic module itself (e.g., tamper-evident seals), but most helpful is Table 1.

  • Cryptographic Module Specification (Levels 1-4): Specification of cryptographic module, cryptographic boundary, Approved algorithms, and Approved modes of operation. Description of cryptographic module, including all hardware, software, and firmware components. Statement of module security policy.
  • Cryptographic Module Ports and Interfaces. Levels 1-2: Required and optional interfaces; specification of all interfaces and of all input and output data paths. Levels 3-4: Data ports for unprotected critical security parameters logically or physically separated from other data ports.
  • Roles, Services, and Authentication. Level 1: Logical separation of required and optional roles and services. Level 2: Role-based or identity-based operator authentication. Levels 3-4: Identity-based operator authentication.
  • Finite State Model (Levels 1-4): Specification of finite state model. Required states and optional states. State transition diagram and specification of state transitions.
  • Physical Security. Level 1: Production grade equipment. Level 2: Locks or tamper evidence. Level 3: Tamper detection and response for covers and doors. Level 4: Tamper detection and response envelope; EFP or EFT.
  • Operational Environment. Level 1: Single operator; executable code; Approved integrity technique. Level 2: Referenced PPs evaluated at EAL2 with specified discretionary access control mechanisms and auditing. Level 3: Referenced PPs plus trusted path evaluated at EAL3 plus security policy modeling. Level 4: Referenced PPs plus trusted path evaluated at EAL4.
  • Cryptographic Key Management (Levels 1-4): Key management mechanisms: random number and key generation, key establishment, key distribution, key entry/output, key storage, and key zeroization. Levels 1-2: Secret and private keys established using manual methods may be entered or output in plaintext form. Levels 3-4: Secret and private keys established using manual methods shall be entered or output encrypted or with split-knowledge procedures.
  • EMI/EMC. Levels 1-2: 47 CFR FCC Part 15, Subpart B, Class A (business use); applicable FCC requirements (for radio). Levels 3-4: 47 CFR FCC Part 15, Subpart B, Class B (home use).
  • Self-Tests (Levels 1-4): Power-up tests (cryptographic algorithm tests, software/firmware integrity tests, critical functions tests) and conditional tests.
  • Design Assurance. Level 1: Configuration management (CM); secure installation and generation; design and policy correspondence; guidance documents. Level 2: CM system; secure distribution; functional specification. Level 3: High-level language implementation. Level 4: Formal model; detailed explanations (informal proofs); preconditions and postconditions.
  • Mitigation of Other Attacks (Levels 1-4): Specification of mitigation of attacks for which no testable requirements are currently available.

Summary of Security Requirements from FIPS 140-2, Table 1

Bottom line: some “features” are indeed useful, but this one particular vendor’s one-size-fits-all implementation tends to preclude using the feature at all in some operational scenarios (most notably, the one your humble author is dealing with).  BTW, changing vendors is not an option.

3. Upon analyzing the FIPS requirements against operational needs and (importantly) the environment the equipment operates in, one has to draw the line between “operating in vendor FIPS Mode” and using FIPS 140-2 validated encryption.

4. Document the decision and the rationale.
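For what it’s worth, steps 3 and 4 can be boiled down to a repeatable, documentable test per site.  A toy sketch with made-up site attributes, just to show the shape of the rationale; this is illustrative logic, not any vendor’s:

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    has_local_admin: bool     # qualified hands physically at the equipment?
    needs_remote_admin: bool  # must the box be managed over the network?

def fips_posture(site: Site) -> str:
    """Illustrative decision: vendor 'FIPS Mode' (console-only administration)
    is only workable where someone can actually stand in front of the box;
    everywhere else, run FIPS 140-2 validated crypto with FIPS Mode off and
    document the residual risk."""
    if site.needs_remote_admin and not site.has_local_admin:
        return "FIPS 140-2 validated crypto, vendor FIPS Mode OFF (rationale documented)"
    return "vendor FIPS Mode ON"

# Example run over two hypothetical sites:
for site in (Site("HQ", True, False), Site("Burundi", False, True)):
    print(f"{site.name}: {fips_posture(site)}")
```

The point isn’t the code; it’s that the decision criteria are explicit, so the rationale in step 4 practically writes itself.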

Once again, security professionals have to help managers strike a healthy balance between “enough” security and operational requirements.  You would think that using approved equipment, operating systems, and vendors vetted through the CC evaluation process would be enough.  Reading the standard, we see the official acknowledgement that Your Mileage May Indeed Vary™:

“While the security requirements specified in this standard are intended to maintain the security provided by a cryptographic module, conformance to this standard is not sufficient to ensure that a particular module is secure. The operator of a cryptographic module is responsible for ensuring that the security provided by a module is sufficient and acceptable to the owner of the information that is being protected and that any residual risk is acknowledged and accepted.”  FIPS 140-2 Sec 15, Qualifications

The next paragraph constitutes validation of the approach I’ve embraced:

“Similarly, the use of a validated cryptographic module in a computer or telecommunications system does not guarantee the security of the overall system. The responsible authority in each agency shall ensure that the security of the system is sufficient and acceptable.”  (Emphasis added.)

One could say “it depends,” but you wouldn’t think so at first glance – it’s a Standard, for Pete’s sake!

Then again, nobody said this job would be easy!

Vlad



Posted in Rants, Risk Management, Technical | 4 Comments »

Where For Art Thou, 60-Day Review

Posted May 7th, 2009

April Fools’ Day pranks aside, I’m wondering what happened to the 60-day Cybersecurity Review.  Supposedly, it was turned in to the President on the 17th.  I guess all I can do is sigh and say “So much for transparency in Government.”

I’m trying hard to be understanding here, I really am.  But isn’t the administration pulling the same Comprehensive National Cybersecurity Initiative thing again, telling the professionals out in the private sector that it depends on, “You can’t handle the truth!”

And this is the problem.  Let’s face it, our information sharing from Government to private sector really sucks right now.  I understand why this is: when you have threats and intentions that come from classified sources, if you share that information, you risk losing your sources.  (ref: Ultra and Coventry, although it’s semi-controversial)

Secret Passage photo by electricinca.

Looking back at one of the weaknesses of our information-sharing strategy so far:

  • Most of the critical infrastructure is owned and operated by the private sector.  Government (and the nation at-large) depends on these guys and on the resilience of the IT that they operate.
  • The private sector (or at least critical infrastructure owners and operators) needs the information to protect its infrastructure.
  • Our process for clearing someone to receive sensitive information is to do a criminal records investigation, credit report, and talk to a handful of their friends to find out who they really are.  It takes 6-18 months.  This is not quick.
  • We have some information-sharing going on.  HSIN and Infragard are pretty good so far–we give you a background check and some SBU-type information.  Problem is that they don’t have enough uptake out in the security industry.  If you make/sell security products and services for Government and critical infrastructure, you owe it to yourself to be part of these.
  • I’ve heard people from Silicon Valley talk about how the Government doesn’t listen to them and that they have good ideas.  Yes they do have some ideas, but they’re detached from the true needs because they don’t have the information that they need to build the right products and services, so all they can do is align themselves with compliance frameworks and wonder why the Government doesn’t buy their kit.  It’s epic fail on a macromarket scale.

In my opinion, Government can’t figure out if they are a partner or a regulator.  Choose wisely, it’s hard to be both.

As a regulator, we just establish the standard and, in theory anyway, the private sector folks don’t need to know the reasoning behind the standard.   It’s fairly easy to manage but not very flexible–you don’t get much innovation and new technology if people don’t understand the business case.  This is also a more traditional role for Government to take.

As a partner, we can share information and consequences with the private sector.  It’s more flexible in response but takes much more effort and money to bring them information.  It also takes participation from both sides–Government and private sector.

Now to tie it all off by going back to the 60-Day Cybersecurity Review….  The private sector needs information contained in the review.  Not all of it, mind you, just the parts that they need to do their job.  They need it to help the Government.  They need it to build products that fit the Government’s needs.  They need it to secure their own infrastructure.



Posted in Public Policy, Risk Management | 3 Comments »

The Accreditation Decision and the Authorizing Official

Posted February 10th, 2009

The accreditation decision is one of the key activities in how the US Government secures its systems. It’s also one of the most misunderstood. This slideshow aims to explain the role of the Authorizing Official and to give you some understanding of why and how accreditation decisions are made.

I would like to give a big thanks to Joe Faraone and Graydon McKee who helped out.

The presentation is licensed under Creative Commons, so feel free to download it, email it, and use it in your own training.



Posted in FISMA, NIST, Risk Management, Speaking | 5 Comments »

Could the Titanic have changed course?

Posted January 6th, 2009

Rybolov really struck a note with me (as he usually does) with his blog entry on his decision that S.3474 was a bad thing. It reminds me of a conversation I had with a friend recently. Basically, she asked me why bad things happen even after smart people put their heads together and try to deal with the problem before facing a crisis. Intrigued by her question, I asked her what specifically she was asking about. She shared that she had been thinking about the tragedy of the Titanic sinking.

Of course she was referring to the sinking of the passenger ship RMS Titanic on the evening of 14 April 1912. She made two points: first, that the experts had declared the ship “unsinkable” – how could they have been so wrong? Second, she wondered how the ship could have been so poorly equipped with boats and safety equipment that there was such great loss of life.

The Titanic’s Disaster photo by bobster1985.

Little did she know that I have had an odd fascination with the Titanic disaster since childhood and have read much of the common public material about the event. So, I replied that no expert had ever declared her unsinkable; that was basically something made up by the press and the dark spineless things that hang around the press. However, I added, the designers and owners of the ship had made much of her advanced safety features when she was launched. A critical feature was the inclusion of water-tight bulkheads in her design, something of an advanced and novel feature at the time. What it meant was that you could poke a pretty big hole in the ship, and as long as the hole was not spread over several of these water-tight compartments, she would stay afloat. The problem was that the iceberg she hit (the Titanic, not my friend) ignored all of this and tore a big gash along about a third of the length of the ship.

So, my friend pressed again about the lack of safety equipment, especially lifeboats. I told her that the problem here was that the Titanic did indeed meet all of the safety requirements of the time, and that a big part of the problem was that those requirements had been drafted in 1894, at a time of rapid changes in the size and design of ships of this kind. The regulations indicated that all passenger ships over 10,000 tons required 16 lifeboats, and that’s how many the Titanic had. When the regulations were written there were hardly any ships over 10,000 tons in size; the Titanic, however, was designed to displace over 50,000 tons when fully loaded. The fact was that even fully loaded, her lifeboats could barely hold half of her passengers and crew. What is worse, when the ship did sink, not all of the boats were usable because of the speed and angle at which she went down.

So, the bottom line was that when the Titanic was reviewed by the safety accountants, they took out their checklist and went over the ship with a fine-tooth comb. When the day was done, the ship fully met all the safety criteria and was certified as safe.

This is where I see the parallels between the root causes of the Titanic disaster and the odd situation we find ourselves in today in terms of IT security. Security by checklist – especially by out-of-date checklist – simply doesn’t work. Moreover, the entire mental framework that mixes up accounting practices with security discipline and research is an utter failure. Audits only uncover the most egregious security failures, and they uncover them at a point in time. The result is that audits can be gamed, and even ignored. On the other hand, formal reviews by experienced security professionals are rarely ignored. Sometimes not all of the resources are available to mitigate the vulnerabilities pointed out by the professionals. And sometimes there is debate about the validity of specific observations made by security professionals. But they are rarely ignored.

Interestingly enough, because of the mixed IT security record of many government agencies, Congress is proposing – more audits! It seems to me that what they should be considering is strengthening the management of IT security and moving from security audits, often performed by unqualified individuals and teams, toward security assessments conducted by security professionals. And since professionals would be conducting these proposed assessments, they should be required to comment on the seriousness of deficiencies and on possible mitigation actions. The professionals should also be required to report on the adequacy of funding, staffing, and higher management support. I don’t really see any point in giving a security program a failing grade if the existing program is well managed but subverted and underfunded by the department’s leadership.



Posted in FISMA, NIST, Risk Management, The Guerilla CISO | 4 Comments »

Database Activity Monitoring for the Government

Posted November 11th, 2008

I’ve always wondered why I have yet to meet anyone in the Government using Database Activity Monitoring (DAM) solutions, and yet the Government has some of the largest, most sensitive databases around.  I’m going to try to lay out why I think it’s a great idea for Government to court the DAM vendors.

Volume of PII: The Government owns huge databases that are usually authoritative sources.  While the private sector laments the leaks of Social Security Numbers, let’s stop and think for a minute.  There is A database inside the Social Security Administration that holds everybody’s number and is THE database where SSNs are assigned.  DAM can help here by flagging queries that retrieve large sets of data.

Targeted Privacy Information:  Remember the news reports about people looking at the presidential candidates’ passport information?  Because of the depth of PII that the Government holds about any one individual, it provides a phenomenal opportunity for invasion of someone’s privacy.  DAM can help here by flagging VIPs and sending an alert any time one of them is searched for. (DHS guys, there’s an opportunity for you to host the list under LoB)

Sensitive Information: Some Government databases come from classified sources.  If you were to look at all that information in aggregate, you could determine the classified version of events.  And then there are the classified databases themselves.  Think about Robert Hanssen attacking the Automated Case System at the FBI–a proper DAM implementation would have noticed the activity.  One interesting DAM rule here: queries where the user is also the subject of the query.
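To make the rule ideas above concrete (bulk retrievals, VIP lookups, and self-queries), here is a toy sketch of what such DAM rules might look like.  The event fields (user, subject, row_count) are hypothetical audit-log attributes, not any product’s schema:

```python
# Toy DAM rule engine. Thresholds and the watch list are illustrative.
ROW_COUNT_THRESHOLD = 10_000  # "bulk pull" trigger -- tune per database
VIP_WATCH_LIST = {"candidate_a", "candidate_b"}  # hypothetical VIP subjects

def dam_alerts(event):
    """Return a list of alert strings for one audited query event."""
    alerts = []
    # Rule 1: queries that retrieve large sets of data.
    if event["row_count"] > ROW_COUNT_THRESHOLD:
        alerts.append(f"bulk retrieval: {event['row_count']} rows")
    # Rule 2: queries that touch a VIP's record.
    if event["subject"] in VIP_WATCH_LIST:
        alerts.append(f"VIP record accessed: {event['subject']}")
    # Rule 3: queries where the user is also the subject of the query.
    if event["user"] == event["subject"]:
        alerts.append("user queried their own record")
    return alerts

# Example audit event, e.g. pulled from a database audit trail.
event = {"user": "rhanssen", "subject": "rhanssen", "row_count": 12}
for alert in dam_alerts(event):
    print("ALERT:", alert)
```

Real DAM products do this against live query streams with far richer context, but the rules themselves really are this simple to state.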

Financial Data:  The Government moves huge amounts of money, well into $Trillions.  We’re not just talking internal purchasing controls, it’s usually programs where the Government buys something or… I dunno… “loans” $700B to the financial industry to stay solvent.  All that data is stored in databases.

HR Data:  Being one of the largest employers in the world, the Government is sitting on one of the largest repositories of employee data anywhere.  That’s in a database; DAM can help.

 

Guys, DAM in the Government just makes sense.

 

Problems with the Government adopting/using DAM solutions:

DAM not in catalog of controls: I’ve mentioned this before; it’s the dual-edged nature of a catalog of controls, in that it’s hard to justify any kind of security that isn’t explicitly stated in the catalog.

Newness of DAM:  If it’s new, I can’t justify it to my management and my auditors.  This will get fixed in time, let the hype cycle run itself out.

Historical DAM Customer Base:  It’s the “Look, I’m not a friggin’ bank” problem again.  DAM vendors don’t actively pursue/understand Government clients–they’re usually looking for customers needing help with SOX and PCI-DSS controls.

 

 

London is in Our Database photo by Roger Lancefield.



Posted in Rants, Risk Management, Technical, What Works | 2 Comments »

Imagine that, System Integrators Doing Security Jointly with DoD

Posted September 11th, 2008

First, some links:

Synopsis: DoD wants to know how its system integrators protect the “Controlled Unclassified Information” that it gives them.  Hmm, sounds like the fun posts I’ve done about NISPOM, SBU, and my data types as a managed service provider.

This RFI is interesting to me because basically what the Government is doing is collecting “best practices” on how contractors are protecting non-classified data and then they’ll see what is reasonable.

Faustian Contract photo by skinny bunny.

However, looking at the problem, I don’t see this as much of a safeguards issue as a contracts issue.  Contractors want to do the right thing; they just can’t decide which of these things security is:

  • A service that they should include as part of the work breakdown structure in proposals.  This is good, but can be a problem if you want to keep the solution cheap and drop the security services from the project because the RFP/SOW doesn’t specify what exactly the Government wants by way of security.
  • A cost of doing business that they should reduce as much as possible.  For system integrators, this is key:  perform scope management to keep the Government from bleeding you dry with stupid security managers who don’t understand compensating controls.  Problem with this approach is that the Government won’t get all of what they need because the paranoia level is set by the contractor who wants to save money.

Well, the answer is that security is a little bit of both, but most of all it’s a customer care issue.  The Government wants security, and you want to give it to them in the flavor that they want, but you’re still not a dotorg–you want to get compensated for what you do provide and still make a profit of some sort.

Guess what?  It takes cooperation between the Government and its contractors.  This “Contractor must be compliant with FISMA and NIST Guidelines” paragraph just doesn’t cut it anymore, and what DoD is doing is to research how its contractors are doing their security piece.  Pretty good idea once you think about it.

Now I’m not the sharpest bear in the forest, but it occurs to me that we need this to happen in the civilian agencies, too.  Odds are they’ll just straphang on the DoD efforts. =)



Posted in Outsourcing, Risk Management | No Comments »


