tssci security

Building a security plan

An audit framework for evaluating structured security program frameworks

How many readers implemented a new security plan for 2006 or 2007? How many had clients that implemented a new security program? Which frameworks were involved?

Possible frameworks (Criteria)

Break down your plan

Even if you have no formal plan (e.g. buy SANS! buy Microsoft!), let's try to find out how the security program was implemented. A plan can be broken down into three major aspects -- strategic, operational, and tactical -- plus several areas of specialty, though my list is not exhaustive.

Did you start from a top-down or bottom-up approach? Were you able to implement all three aspects? To what degree? Which area was most successful? Which areas were not successful?

Plan effectiveness and measurement

Did the framework you chose work for you in 2006/2007? Are you going to continue using it? How effective was it? How did you measure the effectiveness? Would you think about moving to a different framework in 2008? Which one? Why?

I read a book by Andrew Jaquith called Security Metrics: Replacing Fear, Uncertainty and Doubt on March 8th of 2007. Until the ISO 27004 standard is completed, this is far and away the best (and possibly only) way of measuring the effectiveness of any security program. Rothman has some great ideas on his blog and in the P-CSO. As a reader of the TS/SCI blog -- I think you should feel obligated to check out securitymetrics: add the blog to your RSS reader and get on the mailing-list if you haven't already.
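In Jaquith's spirit of replacing fear with numbers, even one operational metric beats none. A minimal sketch -- the patch records and the choice of metric here are my own invented illustration, not an example from the book:

```python
from datetime import date

# Hypothetical records: (advisory published, patch deployed) per vulnerability
patch_records = [
    (date(2007, 1, 3), date(2007, 1, 20)),
    (date(2007, 2, 10), date(2007, 2, 14)),
    (date(2007, 3, 5), date(2007, 4, 2)),
]

def mean_time_to_patch(records):
    """Average days between advisory publication and patch deployment."""
    deltas = [(deployed - published).days for published, deployed in records]
    return sum(deltas) / len(deltas)

print(round(mean_time_to_patch(patch_records), 1))  # -> 16.3 (days)
```

Tracked period over period, a falling mean time to patch is exactly the kind of trend line Jaquith argues belongs on a dashboard instead of a risk guess.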

Risk analysis

The ISO 27005 standard is also in progress. The ISO/IEC 27000-series FAQ answers the question of which information security risk analysis methods can be evaluated in the meantime. I have listed some of the obvious risk analysis frameworks from the FAQ, but I don't have experience with many of them.

A comment on SecGuru introduced me to Practical Threat Analysis, a software suite (free to researchers/analysts) that aids in risk analysis and audits. It has modules for ISO 27001 and PCI-DSS 1.1, which include industry-vertical customizations. There couldn't be an easier way to perform nearly-automated risk analysis, or to guide a self-assessment.

Risk analysis for software assurance has been a major interest of mine for the past two years, and here's my sorted list:

It's unsurprising to hear that the security measurement masters (Jaquith and Rothman) both hold distinctive views on risk analysis. Rothman seems to like "risk-adjusted expected loss models" in the P-CSO, yet also calls them "black-magic" and "fear-mongering". His solution appears to revolve around ROSI/ROI methods, which he separates into best-case, worst-case, and middle-road scenarios.
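For concreteness, the expected-loss and ROSI arithmetic behind these methods reduces to a few lines. The dollar figures below are invented for illustration, not taken from the P-CSO:

```python
def ale(sle, aro):
    """Annualized Loss Expectancy: Single Loss Expectancy x Annualized Rate of Occurrence."""
    return sle * aro

def rosi(ale_before, ale_after, control_cost):
    """Return on Security Investment: net risk reduction relative to control cost."""
    return (ale_before - ale_after - control_cost) / control_cost

# Hypothetical: $50k per incident at 0.4 incidents/year; an $8k/year control
# halves the incident rate.
before = ale(50_000, 0.4)   # 20000.0
after = ale(50_000, 0.2)    # 10000.0
print(rosi(before, after, 8_000))  # -> 0.25
```

Rothman's best/worst/middle-road cases amount to running this same calculation three times with optimistic, pessimistic, and median inputs -- which is also why critics call it black magic: the output is only as honest as the guessed inputs.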

Jaquith's view: risk-modeling is incredibly important, but very hard to do. The primary point of his book, however, is that risk management is dead. The Deming Wheel (Plan-Do-Check-Act, or PDCA) is Jaquith's Hamster Wheel of Pain, the antithesis of managing risk and measuring security. I found it interesting that Jaquith's Holy Grail is the Security Balanced Scorecard, which is based on a Six Sigma tool of the same name. Six Sigma's key methodology centers on its primary tool, DMAIC (Define-Measure-Analyze-Improve-Control), clearly inspired by Deming's PDCA. Adam Shostack has also referenced the military strategy OODA Loop (Observe-Orient-Decide-Act) as the basis of a security plan. Most interesting to me are the "elements of a finding" from GAGAS 7.73 (US Government standards for performance audits), which lists Criteria-Condition-Effect-Cause.

Combining programs and putting risk assessment in its place

While Jaquith says that the program categories in his book are based on COBIT, he also mentions a gem entitled Aligning COBIT, ITIL and ISO 17799 for Business Benefit. His approach to risk analysis (i.e. assessing risk) is tied to effectiveness, is forward-looking, and denounces risk management and asset valuation (especially ALE). Instead, his view is similar to mine -- risk assessment is primarily for software assurance and vulnerability management.

I have mentioned products such as Configuresoft and Lumension Security in the past few blog posts. These two vendors supply patch management solutions. I probably should have mentioned other possibilities such as BigFix, Opsware, and ThreatGuard (my bad). Recently, I came across the MITRE OVAL-Compatible program, which lists these and a few other vulnerability management solutions. OVAL support is great for managing risk from vulnerabilities, but I feel obligated to also mention Cassandra (an email service from CERIAS at Purdue), Advisory Check (advchk, which works via RSS), SIGVI (FOSS Enterprise vulnerability database), and OSVDB 2.0. Of course, this "vulnerability management" only applies to "known" vulnerabilities. Let's turn our attention back to unknown vulnerabilities, which are inherently weaknesses in software.

What is the problem we are trying to solve? (Cause)

I'm sure that you'll hate me for a few seconds here as I apparently go off-topic on an already overly long post. I would really like to address some issues with these modern-day security programs. Jaquith brought up some excellent points regarding security programs, risk management, and risk assessment. The root cause of our problems comes down to the laws of vulnerabilities: stop the vulnerabilities; stop the attacks; stop the threats.

Richard Bejtlich introduced some anti-audit sentiments while providing interesting ideas in a recent blog post, Controls Are Not the Solution to Our Problem. In it, he addresses hardened builds that are tested with vulnerability management, pen-tested, and shot into outer space (j/k on this last one). He also addresses topics of visibility and integrity. I can fully see his vision here, and it's quite good from an operational perspective.

For me, I come from a long line of intense operational background. I'm your 1997-2004 equivalent of a Green Beret veteran from Vietnam, only a BGP operator veteran from Silicon Valley (although I prefer the term *Renegade Network Engineer*). My introduction to Unix/network security came much earlier, and it was then that I learned that all security problems are due to vulnerabilities and all vulnerabilities are due to bugs.

However, I often fail where it appears that I most succeed. I had a hard time balancing building things with breaking things, which made me avoid the security industry like the plague (plus all of the other obnoxious reasons). Sometimes I think that I don't have the security gene.

Yet I proselytize developers managing risk by producing significantly fewer vulnerabilities, based on a process so simple in nature -- the CPSL I defined. Why do I think this will work? These developers also lack the security gene. Usually, only people with advanced knowledge and practice in finding vulnerabilities or writing exploits have the skills and genes necessary. These individuals are few and far between. Or are they?

Vulnerability theory (Effect)

Some of my frustrations as an operator have finally come to pass in the security world as well. When my mentor Tom Jackiewicz handed me a dog-eared copy of DNS & BIND, 1st Edition, and said "this is a black art; learn this first", it instinctively pushed me to learn everything about DNS. Weeks later, I found out about the darkest art at the time, BGP routing. My biggest takeaway from both is that after learning them, I realized they weren't really dark arts at all -- they were very easy once you got the theory down.

On Tom's birthday this year (maybe a tribute?), MITRE released a document called Introduction to Vulnerability Theory. In the introduction to this "Introduction" it reads:

Despite the rapid growth of applied vulnerability research and secure software development, these communities have not made much progress in formalizing their techniques, and the "researcher's instinct" can be difficult to describe or teach to non-practitioners. The discipline is much more black magic than science. For example, terminology is woefully garbled and inadequate, and innovations can be missed because they are misdiagnosed as a common issue. MITRE has been developing Vulnerability Theory, which is a vocabulary and framework for discussing and analyzing vulnerabilities at an abstract level, but with more substance and precision than heavily-abused, vague concepts such as "input validation" and "denial of service." Our goal is to improve the research, modeling, and classification of software flaws, and to help bring the discipline out of the dark ages. Our hope is that this presentation will generate significant discussion with the most forward-thinking researchers, educate non-expert researchers, and make people think about vulnerabilities differently.

The document goes through rough definitions and examples. Many are based on artifact labels that identify code sections/locations; in order, these labels are given as Interaction-Crossover-Trigger-Activation. The document introduces its concepts tentatively and seeks feedback throughout.

My favorite part is the "Related Work" section. Some of these have already been mentioned in this post, including the RASQ / Measuring Relative Attack Surfaces work and the Trike model. I've also already mentioned The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities, which apparently is a must-read for at least chapters 1-3, in addition to their accompanying website. I'd also add generic vs. context-specific and design/code visibility as portrayed in Secure Programming with Static Analysis, which also mentions the OWASP Honeycomb Project.

The final reference, the infamous Matt Bishop (author of Introduction to Computer Security and Computer Security: Art and Science), alludes to his research on "Vulnerabilities Analysis":

This project treats vulnerabilities as a collection of conditions required for an attacker to violate the relevant security policy. We're developing a set of conditions that apply to multiple vulnerabilities, which will help us locate previously unknown vulnerabilities, and a language to express the conditions in, which will help us reason about vulnerabilities with respect to security policies.

Types of vulnerability testing

While reading Matt Bishop's research, I also uncovered an interesting presentation entitled Checking Program Security: From Proofs to Scanning via Testing. He talks about different types of security testing (see the comments discussion between Ariel and me in the previous post, Why pen-testing doesn't matter, for more information). From my blog post on Formal Methods and Security, I will change these to read:

Today, most vulnerability testing is done ad-hoc. There are multi-stage attacks such as those described in Building Computer Network Attacks and w3af - A framework to own the Web. This sort of research may bring penetration-testing into the world of science by using formal methods such as hierarchical task networks (HTN) and Markov decision processes (MDP). HTN and MDP enter the realm of AI -- the promised land for all of the scanner vendors (network, application, software, source-code, exploitation engine, et al.).
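To make the MDP idea concrete, here is a toy sketch: an invented three-hop attack graph where value iteration scores each foothold by its discounted probability of eventually reaching the goal. Every state, action, and probability below is made up for illustration; real attack planners like those in the papers above are far richer.

```python
# A toy attack graph treated as a Markov decision process: states are
# attacker footholds, actions are exploits with outcome probabilities.
# Value iteration scores each foothold by the discounted probability of
# eventually reaching the goal. All names and numbers are invented.
actions = {
    "outside":  {"exploit_web": [("dmz", 0.7), ("outside", 0.3)]},
    "dmz":      {"pivot":       [("internal", 0.5), ("dmz", 0.5)]},
    "internal": {"escalate":    [("goal", 0.8), ("internal", 0.2)]},
    "goal":     {},  # absorbing state: objective reached
}
gamma = 0.9  # discount factor: shorter attack paths score higher

V = {s: 0.0 for s in actions}
V["goal"] = 1.0
for _ in range(200):  # value-iteration sweeps to (approximate) convergence
    for s, acts in actions.items():
        if acts:
            V[s] = max(
                gamma * sum(p * V[s2] for s2, p in outcomes)
                for outcomes in acts.values()
            )

for s in ("outside", "dmz", "internal"):
    print(s, round(V[s], 3))
```

The resulting values rank footholds by attack value, which is the kind of output a formalized pen-test planner could use to choose its next action instead of an ad-hoc guess.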

Artificial Intelligence to the rescue

Gödel and Turing would say that no computer can prove every true statement -- a limit that applies to any automated theorem-prover (ATP). I recently read Another Look at Automated Theorem-proving, which demonstrates that these principles still hold: ATP over-promises and under-delivers.

Take an example from that paper, Fermat's Last Theorem. The proof was completed some 357 years after the conjecture was proposed. The media argued that the theorem should have been solvable by a computer.

If an integer n is greater than 2, then the equation a^n + b^n = c^n has no solutions in non-zero integers a, b, and c.

However, this theorem could only be proven through a geometric and inductive argument, requiring a human. An ATP asked to find non-zero integers a, b, and c satisfying a^3 + b^3 = c^3 will simply run forever -- and the Halting problem tells us we cannot, in general, decide in advance that it won't.
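The non-termination is easy to see with a sketch: a naive exhaustive search for a counterexample to the n = 3 case. Unbounded, the loop below would never halt (Wiles' proof guarantees there is nothing to find); the bound exists only so the example terminates:

```python
def search_cubes(bound):
    """Naive search for non-zero integers with a**3 + b**3 == c**3, up to a bound."""
    return [
        (a, b, c)
        for a in range(1, bound)
        for b in range(1, bound)
        for c in range(1, bound)
        if a**3 + b**3 == c**3
    ]

print(search_cubes(50))  # -> [] : no counterexample below the bound
```

An empty result at any bound proves nothing by itself; only the human-constructed proof rules out solutions at every bound, which is exactly the gap between searching and proving.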

Nitesh Dhanjani recently wrote an excellent piece on Illogical Arguments in the Name of Alan Turing. For those in the security industry claiming the Halting problem (or undecidability, another of Turing's results) as the reason why we haven't improved any further, this should be mandatory reading.

See my previous blog post on Formal Methods and Security for some thoughts on how model-checkers and ATP's can come to our rescue. Also expect some future posts on this topic (although I could use some guidance on where to research next).

Chess and the vulnerability problem (Condition)

No, I'm not talking about Brian Chess from Fortify Software (although this may be a good place to start with your budgeting plans for 2008). I'm talking about the game, "chess".

AI may have the best tactics, but only humans can provide the strategy necessary to defeat their machine opponents -- except, of course, in the case of Deep Blue, the chess-playing computer developed by IBM that defeated world champion Garry Kasparov in 1997. Deep Blue cheated on strategy, using an evaluation function derived from some 700k previously analyzed grandmaster games, which additionally required tuning between games. Deep Blue was also tuned specifically to Kasparov's prior games.

Many people who first start to play chess think that it's entirely a strategic game (for the above and various other reasons), focusing very little effort on tactical maneuvers. Experienced chess players will tell you that chess is mostly tactics. Chess "currency" such as time, space, and material eventually becomes a matter of pattern recognition to chess masters.

In the third section above, "Break down your plan", I list three aspects of a security plan: strategic, operational, and tactical. If your security plan is largely tactical, you'll begin to see patterns of vulnerabilities and software weaknesses. If it's too strategic/operational and not backed up with enough tactics, you may end up with some losses. Many security program frameworks do not include enough information on tactics, or they don't address the right tactics.

Criteria, Condition, Effect, Cause, (Recommendation)

Having said all of this, I would encourage people to evaluate their current security program against Gunnar Peterson's Security Architecture Blueprint. I feel his program covers a lot of the tactical, risk management, and measurement issues presented in this post in a very succinct manner. Don't take just my word for it; also see what Augusto Paes De Barros (he suggests combining the P-CSO with the Security Architecture Blueprint), Anton Chuvakin, Alex Hutton, Chris Hoff, and Mike Rothman (under Top Blog Postings, on the first one listed here and the third one listed here) have to say.

It would be interesting to turn Gunnar's security program framework into an ISMS usable by ISO27k. The combined framework could then be further combined with COBIT and ITIL in order to meet a variety of requirements. This would allow for ease of use when dealing with auditors, particularly under the auspices of governance and regulations.

In the Security Architecture Blueprint, I particularly like the inclusion of Assurance, and the pointers to work such as Architectural Risk Analysis. His domain-specific metrics and use of dashboards go above and beyond some of the material in Jaquith's Security Metrics.

I recently came across a presentation Gunnar gave at QCon, the InfoQ conference in San Francisco. While the talk was on SOA and Web Services Security, in the first section he addressed Network, Host, Application, and Data security spending in a way that I thought was truly unique. My take was that typical businesses are spending too much on network security and not enough on data or application security. He appeared to recommend taking half of the network security budget and reallocating it to software assurance and data protections.
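My reading of that recommendation, as arithmetic -- the spending mix below is hypothetical, not a figure from Gunnar's talk:

```python
# Hypothetical spending mix; the rule below is my reading of the talk's
# suggestion: move half of the network security budget into application
# security (software assurance) and data protection, split evenly.
budget = {"network": 500_000, "host": 200_000, "application": 100_000, "data": 50_000}

def reallocate(b):
    shifted = b["network"] / 2
    return {
        "network": b["network"] - shifted,
        "host": b["host"],
        "application": b["application"] + shifted / 2,
        "data": b["data"] + shifted / 2,
    }

new = reallocate(budget)
assert sum(new.values()) == sum(budget.values())  # the total is preserved
print(new)
```

The point is not the split itself but that the total stays constant: this is a reallocation argument, not a plea for a bigger budget.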

One of the most interesting takes from Gunnar is the unanswered question about asset valuation. On May 4, 2006, as a post to the securitymetrics mailing-list, he said:

"Customers and customer relationships, as opposed to a valuation of the amount of gigabytes in the database, have tangible, measurable value to businesses, and their value is much easier to communicate to those who fund the projects. So in an enterprise risk management scenario, their value informs the risk management process ... [For example, consider] a farmer deciding which crop to grow. A farmer interested in short-term profits may grow the same high-yield crop every year, but over time this would burn the fields out. The long-term-focused farmer would rotate the crops and invest in things that build the value of the farm and soil over time. Investing in security on behalf of your customers is like this. The investment made in securing your customer's data builds current and future value for them. Measuring the value of the customer and the relationship helps to target where to allocate security resources".

Gunnar's recommendations here are consistent with Balanced Scorecards, and the Security Architecture Blueprint is directly compatible with Jaquith's Security Metrics. This also allows easy integration with Six Sigma, including other 6S tools such as Voice of the Customer (VOC), KJ Analysis, and SIPOC.

Having a good security plan is critical to the success of your organization, your job, and extends value to your customers. Will you make the right decision on a security program for 2008? Will you be able to say whether your program was effective come 2009? Did you calculate risk correctly? Did you value your assets correctly? What regulations and other requirements do you need to satisfy or comply with? Will your security program allow you to fully meet all of these requirements? Based on these recommendations, this framework should guide you towards building the right security plan in no time.

Posted by dre on Monday, December 10, 2007 in Defense, Hacking, Intelligence, Politics, Security, Tech and Work.
