Interview with authors of The Art of Software Security Assessment

July 9th, 2007
By Stephen Northcutt


Summary: The Leadership Laboratory recently posted a book review of "The Art of Software Security Assessment." The book raises a number of issues that we would love to explore further, and the authors, Mark Dowd, John McDonald, and Justin Schuh, have graciously agreed to an interview. One section, titled "Code Auditing and the Development Life Cycle," served as the basis for the interview.

Q. Gentlemen, how mature do you feel the practice of code auditing is? Are there standards for this sort of work?

McDonald: Practically speaking, I'd argue that it's in its infancy. There is a considerable amount of existing work covering general source code inspection, including, for example, an IEEE Standard for Software Reviews and Audits. However, we in the security community still have a lot of work to do in defining actionable, pragmatic processes and standards for security-oriented code auditing. There are some promising efforts toward secure development standards, which naturally complement code auditing efforts. In particular, readers might be interested in CERT's Secure Coding Standards (http://www.securecoding.cert.org/confluence/display/seccode/CERT+Secure+Coding+Standards), SecureSoftware's CLASP process (http://www.securesoftware.com/process/), and Fortify's Taxonomy of Coding Errors (http://vulncat.fortifysoftware.com/).

Q. Can you talk briefly about your process for performing code audits and what's unique about it?

McDonald: The process we present in the book is based on years of experience performing source code audits, and we designed it to be flexible and pragmatic. We've experimented with a number of techniques, tools, and processes over the years, and some have worked out better than others. While our personal preference lies towards manual code review, we left our process open-ended in order to encompass other effective techniques for finding software vulnerabilities, such as black-box testing, source code analysis tools, and fuzz testing.

There are three different general types of strategies that we apply when auditing applications:

1. Code Comprehension Strategies -- These are centered around reading the code in order to understand how it works, infer its design and purpose, and search for vulnerabilities. This can involve tracing end-to-end data flow throughout an application, or reading sections of code in isolation in order to construct working models of the software and identify problematic idioms.

2. Candidate Point Strategies -- These are centered around using a process or tool to generate a list of potential security issues in the code. These processes start with the generation of a list of candidate points. The auditor then evaluates these candidate points, typically by tracing data-flow backwards through the software.

3. Design Generalization Strategies -- These are techniques for achieving and testing a higher level understanding of the software, which are useful for validating software against design documentation, or deriving the design of the software from the implementation in the absence of accurate documentation.
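
To make the candidate point strategy concrete, here is a minimal, hypothetical C fragment; it is not taken from the book, and the names are invented purely for illustration. An auditor (or tool) would flag the memcpy() call as a candidate point and then trace the length argument backwards to see whether anything on the path bounds it against the destination size:

    /* Hypothetical example for illustration only. The memcpy() call is a
     * candidate point; tracing 'len' backwards shows that nothing bounds it
     * to RECORD_MAX before the copy. */
    #include <stdint.h>
    #include <string.h>

    #define RECORD_MAX 512

    struct record {
        char data[RECORD_MAX];
    };

    /* 'len' arrives from untrusted input (say, a length field in a network
     * packet) and is never compared against RECORD_MAX, so a caller who
     * controls it can overflow r->data. */
    void load_record(struct record *r, const uint8_t *pkt, uint32_t len)
    {
        memcpy(r->data, pkt, len);   /* candidate point: unchecked length */
    }

Tracing the data flow backwards from the sink to the point where the length enters the program is exactly the evaluation step described in the second strategy above.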

Dowd: As we state in the book, we take a somewhat iterative approach to actually reading the code, such as reviewing key parts of the code several times over. Initially, we are trying to gain insight into what a given function is trying to achieve and the way in which the solution is implemented. On subsequent iterations, we attempt to find variations between how the solution is intended to work and how it really works. Throughout Part II of the book, we illustrate many situations that commonly occur when certain functionality is implemented in applications, be it string processing, file accesses, or synchronization of various resources. Hopefully, readers will see that even though these types of problems are typical, unique scenarios can result in unique application vulnerabilities, and that's why this iterative approach is useful. We also state that the process of code auditing is creative, and this is the reason why: while many other code auditors (and static analysis tools) look for specific types of vulnerabilities irrespective of the context of the code, we focus on inferring the intended goal of the code in front of us and try to discover deviations between its intended functionality and its actual functionality.
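
As a hypothetical illustration of that kind of deviation (again, not an example from the book), consider a bounded string copy whose author clearly intended never to write past the destination buffer:

    /* Hypothetical fragment: the intent is a bounded, NUL-terminated copy,
     * but the loop condition lets the terminating NUL land one byte past
     * the end of a BUFSZ-byte destination. */
    #include <stddef.h>

    #define BUFSZ 64

    void copy_name(char *dst, const char *src)
    {
        size_t i;

        for (i = 0; src[i] != '\0' && i <= BUFSZ - 1; i++)
            dst[i] = src[i];
        dst[i] = '\0';   /* if src is BUFSZ or more bytes long, i == BUFSZ here */
    }

On a first pass the bound looks correct; the off-by-one only stands out when you compare the intended invariant (never write past dst[BUFSZ - 1]) against what the loop actually guarantees.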

Q. If you type "code audit" into Google, you get a lot of AdWords results, some for tools and others for services. Do you have any tips for finding qualified code auditors?

Dowd: Code auditing is a pretty specialized trade, and the quality of an audit varies vastly depending on who you get. It's not unlike the pen-testing industry in that sense; there are some run-of-the-mill automated-tool testing houses, and then there are quality code audits from application security specialists. What distinguishes a good auditor (or firm) from a standard one is their knowledge base, experience, and creative thought process. Good auditors are able to exploit their advanced knowledge of security subsystems, language features, and API internals to discover subtle and unique vulnerabilities that automated tools are unable to catch. Therefore, I think the reputation of the auditor (or firm) you're looking at recruiting is key.

Schuh: My answer is going to get a bit long, but I'd like to capture what I've seen from past clients. As Mark said, you'll want to get some background on any prospective companies and auditors. Word-of-mouth recommendations often convey the best real-world measure of experience. To cast a wider net, though, you can use publications and industry recognition as a good measure of reputation. When approaching a company, you may also want to ask for bios on the auditors likely to perform your assessment.

Next, you'll want to ask for a sample report from any auditors you're considering. The quality of this report is extremely important, because it's a large part of what you're paying for. The report should be comprehensive and include sections targeted at the developer, management, and executive levels. The technical content should be clear enough that any developer familiar with the language and platform can follow both the vulnerability details and the recommendations for addressing them.

You also need to get some understanding of the audit process itself. Ask whether they lean toward manual analysis or whether it's more tool-driven. Ask for the names and versions of any commercial tools. For proprietary tools, ask for some explanation of their capabilities and what advantages they have over those on the market. You also want to be wary of any process that's overly tool-driven. After all, you're paying a consultant for their expertise, not to simply click a button and hand you a report. If a good assessment were that easy, all software would be a lot more secure.

Finally, you want to pay attention to what the auditors ask you for. Any competent auditor will need a considerable amount of information before giving you an estimate. At a minimum, you can expect to provide:

  • a description of the application's function and purpose
  • a source-metrics breakdown, including languages and lines of code
  • possibly some source code samples
  • information on the threat environment of the application
  • any available documentation for developers or users

If your auditors don't ask for this, you really need to question how they could successfully scope the cost and duration of your audit.

Q. You mention the waterfall model; is anyone actually using this in industry? If so, since it is a strict interpretation of the SDLC, is it easier to audit?

McDonald: We've encountered a variety of development models, ranging from complete disarray to very heavy, formal top-down methodologies. In our experience, commercial software vendors tend to use the more agile development models, while enterprises tend towards the more formal documentation and up-front design-driven approaches (especially in the financial sector and with outsourced projects). Having the documentation that results from the more formal models can certainly help when planning an audit. It's also very useful to have accurate design documentation when looking for the more abstract high-level vulnerabilities or trying to understand the risks that will face a system in the production environment. That said, the overall quality of the code affects the difficulty of the audit in a more pronounced fashion than the development model that produced that code. If the system is organized in a clear and logical way, and effort is made to make the code maintainable and readable, then it usually lends itself to auditing.

Schuh: I've also found that the development model itself isn't so much a factor, as even methodologies like Agile and XP can generally be treated as an iterative waterfall, from a security perspective. I will add that, in practice, teams that adhere to some form of documented SDLC usually produce code that's much easier to audit and secure.

Q. In my experience, the feasibility, requirements definition, and design phases of the Software Development Life Cycle tend to get the short end of the stick. Do you have any practical advice for a manager who is responsible for a software development project and is concerned that the early phases of the project are being given the bum's rush in favor of diving in and starting to code?

Schuh: Security really needs to be a formal component of the requirements and design process. The developers need to work with the business stakeholders to ensure that security requirements are accurately understood and properly modeled in the design phase, and they need an SDLC process that supports this. Establishing the right developer to business dialog can be difficult, so it often helps to present corner cases that ostensibly fall within proposed (or assumed) security requirements but may violate their intent. This helps increase awareness and gets both the developers and business drivers asking the right questions.

McDonald: You need to have enough of your system on paper at any given time to be able to perform design reviews and maintain threat models throughout the development process. This information can be somewhat high level, but even practitioners of the most extreme agile methods should be able to define the system in broad strokes and refine it as they progress.

Dowd: Howard and Lipner's book, "The Security Development Lifecycle," presents good coverage of how Microsoft addresses these and similar problems with its SDL. We'd also recommend McGraw's "Software Security: Building Security In."

Q. Testing is another part of the SDLC that seems to suffer from a lack of maturity. From your experience can you point to some of the promising practices to improve testing? Who is doing it the best?

McDonald: Just like with development teams, we see varying types of testing efforts within organizations, and some are quite effective. In particular, developer-driven testing can provide a good choke-point for identifying many common implementation vulnerabilities. Some of the more sophisticated software vendors we've worked with devote considerable time to maintaining impressive test harnesses that really exercise their software every time a change is made.
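
As a small sketch of what that developer-driven testing can look like, a harness along the following lines can be built and run on every change; the parse_header() routine and its rules are invented here purely for illustration:

    /* Minimal, hypothetical test harness: a handful of boundary and
     * malformed inputs that are re-exercised on every build. */
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Invented routine under test: accepts only "GET <path>\r\n" request
     * lines shorter than 1024 bytes; returns 0 on success, -1 otherwise. */
    static int parse_header(const char *buf, size_t len)
    {
        if (len < 6 || len >= 1024)
            return -1;
        if (memcmp(buf, "GET ", 4) != 0)
            return -1;
        if (buf[len - 2] != '\r' || buf[len - 1] != '\n')
            return -1;
        return 0;
    }

    int main(void)
    {
        char big[8192];
        memset(big, 'A', sizeof(big));

        assert(parse_header("", 0) == -1);             /* empty input     */
        assert(parse_header("GET /\r\n", 7) == 0);     /* minimal valid   */
        assert(parse_header(big, sizeof(big)) == -1);  /* oversized input */

        puts("header parser tests passed");
        return 0;
    }

Fuzz testing, mentioned earlier in the interview, pushes the same idea further by generating malformed inputs automatically instead of enumerating them by hand.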

Schuh: We're seeing a rapidly growing interest in security-oriented testing. It's certainly a useful tool and we strongly encourage incorporating it into any development process. I recently read "Hunting Security Bugs" by Gallagher, Landauer, and Jeffries; it provides some really great information on test strategies, though it's a bit Windows-centric. There's also "The Art of Software Security Testing," from Addison-Wesley, which we haven't had a chance to explore yet, but it looks promising.

Q. Can you point potential readers to particularly juicy parts of your book on software testing, either chapters or page numbers?

Schuh: Well, we really tried to drive home the point of manual code auditing, so we didn't delve too specifically into testing approaches. That said, the coverage of the assessment process in Chapter 4 is essential for understanding how testing and code auditing can complement each other. There's also a lot of great web-related testing advice throughout Chapters 17 and 18.

Dowd: Our thorough coverage of vulnerabilities should provide QA personnel with the requisite knowledge for designing much more effective tests. Our book can definitely help a tester gain much deeper insight into the coding problems underlying vulnerabilities. In this sense, it should prove to be an extremely valuable reference for a variety of technical professionals involved in software security.

McDonald: We don't really come from the QA culture, but our understanding is that there is an older tradition of sophisticated white-box source-code assisted analysis. This is, in essence, the heart of our book, so there's lots of great information for this type of practitioner.

Q. That's great, I really appreciate you sharing that. What is next for you now that you have completed this book?

Dowd: We've already started planning for a second edition, and have set up a website on software security at http://taossa.com/. The site includes errata, related resources, and suggestions for the second edition. The content is presented as a blog, so it also lets us highlight potentially interesting technical advancements or useful security-related tidbits we've come across in our work.

Schuh: I think we all ended up in shock at how much work this book required. We tried to cover the topic of code auditing as thoroughly as possible, and I think we produced something really valuable. However, we had to make some compromises and cuts if we ever intended to finish. So, as Mark stated, we're already looking to the next edition. We're hoping that the website helps us in that capacity by providing a good venue for reader feedback and discussion.

McDonald: I'm going to keep my head down auditing code. There's a lot of work left to be done, and a lot left to learn. I definitely plan to continue writing about software security, probably focusing on our website in the immediate future. I've also started contributing to CERT's Secure Coding Standards (http://www.securecoding.cert.org/confluence/display/seccode/CERT+Secure+Coding+Standards), which I think is a great resource.

