
Thought Leaders


Ryan Barnett, Director of Application Security Training at Breach Security, Inc.

Stephen Northcutt - June 29th, 2007

Ryan Barnett, Director of Application Security Training for Breach Security, has agreed to be interviewed by the Security Laboratory for this special series on web app security, and we certainly thank him for his time.

Ryan, can you tell us something about yourself? What do you like to do when you are not in front of a computer? Apple or Microsoft? Favorite language to code in?

When I am not working on web security stuff, just about all of my time is spent with my wife Linda and our one-year-old daughter Isabella. As for specifics of what I like to "do," I like to golf and listen to music, and I am a pretty big movie buff and football fan.

You are affiliated with a number of different security organizations. Can you give us a brief overview?

There are three main security organizations that I work with. The first one is actually the SANS Institute![1] I have been an Instructor, Courseware Developer and Local Mentor for a number of years now. My main focus has been teaching classes on web server platform security (Apache) and, more recently, the SEC519 Web Application Security Workshop class. The attendance for these classes has been steadily increasing over the past few years, and the comments from the students have always been very positive, as we cover vulnerability and exploit topics that many developers weren't aware of or weren't quite sure what they meant and what their potential impacts are. Once they get a chance to see these vulnerabilities first hand in the labs, the lightbulb finally turns on.

The second organization is the Web Application Security Consortium (WASC).[2] The main goal of WASC is to be a clearinghouse of webappsec information and to raise public awareness. As a member, I have assisted with the creation of the Threat Classification (TC) document, which outlines all of the various web vulnerabilities and attacks.[3] The OWASP Top Ten was so well received because it was packaged in a very easily consumable size; however, as the name implies, it covers only the top vulnerabilities and is not all-encompassing. The WASC TC outlines all of the attacks, so it is a great companion reference guide when new web applications are initially being designed and security features need to be identified. We are currently in the early stages of a TC version 2 update that will outline the most current vulnerabilities and attacks.

The final organization that I work with is the Center for Internet Security (CIS).[4] I am the Apache Benchmark Project Leader. The goal of the benchmarks is to provide a step-by-step guide showing how to quickly lock down a specific OS or application. Sadly, these benchmarks are needed because most applications are delivered with a default configuration where security settings are not enabled and too much functionality is turned on. The benchmarks and companion scoring tools help administrators quickly secure their host and/or application and ensure continued compliance. We are seeing a real surge in usage of the benchmarks as more and more regulations are pushed out, including PCI.

There is a really interesting WASC Project that you are heading up called the Distributed Open Proxy Honeypot Project. This sounds like an updated version of the honeypot project you ran a while ago that we outlined in the HTTP Elephant on the Table paper.[5] Can you tell us more about it?

That is correct, Stephen. The main difference with this new deployment is the distributed architecture of the honeypot sensors, as I wanted to get a wider view of web attack traffic. One of the voids that we have in the webappsec space is the lack of concrete metrics - not just the number of attacks, but metrics on the details of attacks, such as which web vulnerabilities are really being exploited by attackers (vs. some of the more theoretical ones that are produced in labs but not really exploited in the wild). This sentiment is shared by just about every member of the webappsec community. The real challenge that we face in obtaining this information is that the people who are being compromised through web attacks are not rushing to release that fact. Unless the attack is a web defacement, the public probably won't even know it happened.

So, where does that leave us? Some people have chosen to deploy web application/server honeypots in hopes of catching new web attack data. While there has been some limited success in this area, the problem has really been one of scope and value. If you were to deploy a web honeypot yourself, you would most likely only get traffic from automated bots/worms probing the web for known vulns (can you believe that there are still hosts infected with Code Red that are just scanning away...). Why would a web attacker want to come to your site? This is the lack-of-value issue. Now, if you were to deploy a complex web application honeypot that mimics your real e-commerce website, that is a different story. You would probably get some web attackers who are looking to obtain customer data such as credit card numbers or Social Security numbers. Still, however, the scope is somewhat limited to your own site.

The idea we decided to use was to turn the tables, so to speak. Instead of being the "target" of the attacks, we chose to place our monitoring on an intermediary host - the open proxy server. Almost all web attackers use open proxy servers to help hide their true origin. So, in this type of setup, the web attacks are coming to our host but are destined for other remote targets. Additionally, by deploying many open proxy honeypot systems and then centralizing the logging, we are able to get a much wider view of the web attacks that are occurring around the globe.

We released our first Threat Report [6] in May and the response was tremendous. The data in the report shows that the vast majority of web attacks are automated. It reinforces the "identify a vulnerability and then find a site" vs. "find a site and then look for a vulnerability" mentality, where attackers are looking for the easy kill. This means that it is absolutely critical that organizations stay on top of patch management in order to shorten the window of exposure when new vulnerabilities in public software are disclosed. We are planning a phase 2 release in early July, where we will have updated VMware honeypot images running the latest and greatest ModSecurity software and rulesets. Another change is that we will be using the Breach commercial ModSecurity Management Appliance (MMA) [7] as our central logging host.

What can you share about the web app security market segment, growing, shrinking, becoming more sophisticated?

The application security market is booming as we are seeing a marked increase in industry awareness. Let's talk about vulnerability and attack terminology for a moment. In one of the previous SEC519 classes I was teaching, I asked the class if someone could provide a definition of Session Fixation. One of the students, a developer, answered, "Isn't that a scenario where a user's session ID gets stuck and the application does not delete it properly?" I told him that it sounded like he was describing insufficient session expiration, and he said yes. When no one else offered a definition, I described the Session Fixation scenario and showed how it could be used to conduct a Session Hijacking attack. Almost everyone said a collective "Oh, that is what that term means..." If web developers don't know what these terms mean, how can they be expected to code in defenses against them? This lack of awareness has begun to change, however. In a more recent SEC519 class, I asked what the term Cross-Site Scripting (XSS) meant, and about two-thirds of the class raised their hands and then gave the correct definition!
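
For readers new to the term, here is a minimal sketch of the standard Session Fixation defense that the anecdote above alludes to: discard whatever session the client arrived with and issue a fresh, server-generated session ID at login. It assumes a Java servlet container; the class, parameter and method names are illustrative, not from any particular application.

    // Hypothetical login handler sketch: defeating Session Fixation by
    // invalidating any pre-authentication session and issuing a new one.
    import java.io.IOException;
    import javax.servlet.http.*;

    public class LoginServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            if (authenticate(req.getParameter("user"), req.getParameter("pass"))) {
                // Invalidate the session the client arrived with; an attacker
                // who "fixed" its ID before login can no longer ride on it.
                HttpSession old = req.getSession(false);
                if (old != null) {
                    old.invalidate();
                }
                // Create a fresh session with a new, server-generated ID.
                HttpSession fresh = req.getSession(true);
                fresh.setAttribute("user", req.getParameter("user"));
                resp.sendRedirect("/account");
            } else {
                resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            }
        }

        private boolean authenticate(String user, String pass) {
            // Placeholder only; a real application would check a credential store.
            return false;
        }
    }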

Another major trend that is affecting web application security awareness is PCI. Breach Security is a member of the PCI Security Standards Council and the Security Vendor Alliance. As such, I recently have been participating in PCI panel discussions at security conferences. The sessions have had a fantastic turnout, and the audience is asking very relevant questions to try and identify the core issues they will have to address to meet PCI compliance. Now, I don't want to focus on the whole Compliance vs. Security debate, as that could be argued endlessly, but I would like to point out two of the major issues. The first issue that raised many a discussion in the webappsec community was PCI's mention of complying with the OWASP Top Ten. As Dinis Cruz noted in his article,[8] the OWASP Top Ten was never intended as a compliance document. The second issue was with the version 1.1 update, which included the new section 6.6 stating that organizations need to either have a code review conducted by a third party or deploy a web application firewall. The debate centers around the concept that these two items are not alternatives but, rather, are complementary. There are issues that WAFs can handle that code reviews do not, and vice versa. For organizations that are formulating budgets for PCI compliance this is a relevant topic, as there are certainly different costs associated with each. Budgetary allotments aside, organizations should ideally be doing all three of the following:
  • Conduct code reviews on all web applications and fix the identified issues.
Code reviews should be conducted when applications are initially developed and placed into production, and again whenever there are code changes. Any issues that cannot be fixed immediately should be identified and passed on to the vulnerability scanning and WAF teams for monitoring and remediation.
  • Conduct vulnerability scans on all web applications and fix the identified issues.
Vulnerability scanning should be conducted before an application goes online, at regularly scheduled intervals thereafter, and on demand when code changes are made. Any issues identified should be passed on to the Development and WAF teams for remediation.
  • Deploy a Web Application Firewall in front of all web servers.
A WAF provides persistent, continuous protection. When the WAF identifies issues with the web application, it can report back to both the Development and Vulnerability Scanning teams.

PCI is a step in the right direction as it identifies the need for all three of these items in a fairly concise manner.

In a related question, can you elaborate a bit more on how those technologies can be used together?

Sure. The idea is that these three technologies and processes should not be isolated. The goal is to achieve a symbiotic balance between the inputs and outputs of each process. The greatest benefit is gained when each process's output is used as input to the other two. Think of it from the customer's perspective for a moment. You pay a lot of money to have a code review conducted or vulnerability scans executed. The output of these two processes is normally a big report that identifies all of the vulnerabilities. This is the critical point in the process. What can organizations do with this report information? The vulnerability information should become actionable, used as input both to a code update by the development team and to a virtual patch by the WAF team. The virtual patch on the WAF can serve as a stop-gap measure, providing immediate protection against someone attempting to remotely exploit a specific vulnerability. It is important to note, however, that virtual patches are complementary to actually fixing the code; they are not a replacement. The problem is that code updates are often expensive and very time consuming, so a virtual patch is a quick and easy fix, which is why virtual patches are garnering widespread support. Ivan Ristic and I will actually be doing a joint webcast on this topic as part of the Breach Webinar Series [9] on Wednesday, July 18, 2007.
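
To make the virtual-patch idea concrete: in practice this would be a ModSecurity rule in front of the application, but the same concept can be sketched language-agnostically as a servlet filter. This is an illustration only, under the assumption that a scan has reported SQL injection via an "id" parameter; the class and parameter names are hypothetical.

    // Illustrative "virtual patch" as a servlet filter: until the code is
    // fixed, reject any "id" value that is not a plain integer. This is the
    // stop-gap idea described above, not a replacement for the code fix.
    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletResponse;

    public class VirtualPatchFilter implements Filter {
        public void init(FilterConfig cfg) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse resp,
                             FilterChain chain) throws IOException, ServletException {
            String id = req.getParameter("id");
            // Positive (whitelist) check: only short digit strings pass through.
            if (id != null && !id.matches("\\d{1,10}")) {
                ((HttpServletResponse) resp).sendError(
                    HttpServletResponse.SC_FORBIDDEN, "Request blocked");
                return; // block the exploit attempt before it reaches the app
            }
            chain.doFilter(req, resp);
        }
    }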

What cool new capabilities can we expect to have over the next year or so with regards to Web Application Firewalls?

As mentioned above, Virtual Patching is going to increase in popularity, as it offers a pathway to expedite implementing mitigations for web vulnerabilities at the infrastructure level vs. the more time-intensive normal SDLC channels. To make this process even more efficient, Breach Security is working with vulnerability scanning vendors to implement an automated process for creating ModSecurity Virtual Patching rulesets for issues identified during scanning. Think about it: wouldn't your customers love to get custom Virtual Patching rulesets in the remediation/mitigation sections of their normal vulnerability scan reports?

Another big area for growth will be in what Breach Security is calling Application Defect identification. The idea is that the WAF can monitor outbound data and identify poor coding practices, such as not utilizing the "httponly" or "secure" cookie flags, or misconfigurations such as providing detailed error messages. This information can then be used to feed back into the SDLC process so that web developers can update the code to address the problems.
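
As a small illustration of the coding practice such outbound monitoring would flag when it is missing, here is a hedged sketch of setting the "Secure" and "HttpOnly" cookie attributes. Servlet containers of this era had no setHttpOnly() API, so the Set-Cookie header is written directly; the cookie and class names are illustrative.

    // Sketch: emit a session cookie with the Secure and HttpOnly attributes.
    import javax.servlet.http.HttpServletResponse;

    public class CookieHelper {
        public static void setSessionCookie(HttpServletResponse resp, String value) {
            // Secure: only sent over HTTPS. HttpOnly: hidden from client-side
            // script, which blunts session theft via Cross-Site Scripting.
            resp.setHeader("Set-Cookie",
                "SESSIONID=" + value + "; Path=/; Secure; HttpOnly");
        }
    }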

This is a question I like to ask everyone in this space. One of the unique things about web applications is that one programming error can be referenced in hundreds of instances, often all of them Internet-reachable. What do you think the number one error is, the mistake a programmer can make to guarantee a spot in the hall of shame?

I know you asked for just one, so I will cheat a bit and offer 1A and 1B :)
  • Trusting User Input. Yes, this is related to input validation; however, it is the core problem. The big paradigm shift when coding for the web is for developers to clearly understand that any input that comes from the client should not be trusted. Developers, unfortunately, rely on the client's web browser to enforce certain controls, and this is a fatal mistake, as clients can circumvent these settings. When teaching the SANS SEC519 class, this concept normally hits home once we run the labs with a local web proxy such as WebScarab. Once students see that the client-side JavaScript they were using to enforce input validation can be negated, either by removing it entirely from the response page or by intercepting and changing the request, it sinks in that they need to do all security validation on the server (see the sketch after this list).
  • Inadequate Logging. Web developers need to come to terms with the fact that their web applications will error out and fail. It is therefore paramount that they implement proper logging to provide data for both troubleshooting and incident response. As the old saying goes, Prevention is Ideal but Detection is a Must. One of the main reasons that Web Application Firewalls are so critical is that they can function as web auditing devices and provide detailed logging of the entire web transaction. All too often, the logging mechanisms in web applications are either poor or non-existent.
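
On the first point, a minimal server-side validation sketch in Java follows. The field names and patterns are hypothetical; the technique shown is the positive (whitelist) model: define exactly what is allowed and reject everything else, regardless of any client-side checks.

    // Server-side input validation sketch: never trust what the browser
    // sends, since a local proxy can rewrite both pages and requests.
    import java.util.regex.Pattern;

    public class InputValidator {
        // Whitelist patterns: define the allowed form, reject the rest.
        private static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9_]{3,20}$");
        private static final Pattern ZIPCODE  = Pattern.compile("^\\d{5}(-\\d{4})?$");

        public static boolean isValidUsername(String s) {
            return s != null && USERNAME.matcher(s).matches();
        }

        public static boolean isValidZip(String s) {
            return s != null && ZIPCODE.matcher(s).matches();
        }
    }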

Ryan, the security market continues to change and new threats evolve. What are the hottest trends right now in attacking web applications, and what can we do to prevent them?

While it is true that new attack vectors will always come with new technologies, most of the same old vulnerabilities will continue to be used, just over new transport channels. For example, buffer overflows, command injection and SQL injection can all affect web services. It is just that the attack vector is now carried within an XML XPath parameter instead of a normal variable location. This means that we need to make sure that our defensive applications (such as WAFs) can correctly parse and interpret XML/SOAP data in order to identify these attacks. So while the packaging may be slightly different, the attacks themselves are the same, and although some of the attack techniques will change, our best defense is to properly implement and use our existing technologies and processes. MasterCard's fraud division released a report [10] that listed the top five root causes of account data compromise:
  1. Ineffective Patch Management
  2. No Security Scanning
  3. Weak Network Security
  4. Lack of Real-Time Security Monitoring
  5. SQL Injection

Boy, nothing really new on that list, is there? These are all well-known security best-practice mitigations that everyone should be doing. What is interesting is that the only web-application-specific attack listed is SQL Injection. This means that if attackers can find an alternative, easier way to get the customer's data, they will take it.

What advice do you have for someone in the security field to stay current on web app security? What is your favorite newsgroup, mailing list or other information source? I know you speak at events on a regular basis; where does a software developer go to get the inside scoop on application security?

There are a few really good resources for current information in the webappsec space. First would be either the WASC or OWASP sites. I would also recommend the CGISecurity [11] website, as it always highlights relevant info. As for mailing lists, the WASC Web Security list [12] is probably the best, followed closely by SecurityFocus Web Application Security [13]. For blogs, I make it a daily habit to check out Jeremiah Grossman's [13] and RSnake's over at Ha.ckers.[14] While webappsec is not its daily topic focus, Richard Bejtlich's TaoSecurity Blog [15] is always a great read and has often given me a different perspective on some of the issues I am facing. The best conferences to go to for webappsec material are OWASP and BlackHat; however, SANS is starting to gain ground with some of its newer web security related courses.

What haven't I asked? This is your chance to grab the bully pulpit [22], a platform from which to persuasively advocate an agenda, and drive home the number one point you are trying to make as a thought leader in the industry.

I would urge the industry to embrace the "It Takes a Village" concept for tackling web application security issues. We cannot just push the responsibility for securing web applications over the fence onto developers and say that it is "their" problem. Even if you had a web app that was coded securely and had no identified vulnerabilities, there are still web security infrastructure issues that need to be addressed. For instance, take the idea of a web defacement against a target website. Consider the following scenarios, in which it would "appear" from the customer's perspective that your website was defaced even though your web app had no direct vulnerabilities:
  • Domain Hijacking - This attack exploits the poor confirmation processes used by domain registration sites to update the DNS records for a target site to point to the attacker's DNS systems. The end result is that the DNS resolution returned points not to the real target's site but to the attacker's. [23]
  • DNS Poisoning - This attack targets vulnerable DNS servers and attempts to poison their cache information to point to a new IP address. [24]
  • Caching Proxy Defacement - This attack uses HTTP Response Splitting requests to trick intermediate proxy/caching devices into updating their cache information with data supplied by the attacker. [25]
  • Partner/Affiliate Data Manipulation - These attacks target the trust relationship between partners by manipulating the data exchanged. Here are some specific examples:
With these sample attacks in mind, it is obvious that all facets of enterprise security need to work together to protect the integrity of your website. This means that we need security in layers to help protect our DNS servers, audit our firewall settings, deploy network IDS systems and correlate logging to identify any anomalies. Richard Bejtlich, whom I mentioned in the previous section, had a great recent post on what he is calling "Security Application Instrumentation" that further drives home the point that solely relying on a securely coded application is not sufficient, and why applications such as WAFs provide added value.[29]

Ryan, we appreciate your time and willingness to share your thoughts with the SANS readership community, thank you.

=== All links valid as of June 29, 2007