When I am not working on web security stuff, just about all of my time is spent with my wife Linda and our one-year-old daughter Isabella. As for specifics of what I like to do, I enjoy golf and listening to music, and I am a pretty big movie buff and football fan.
There are three main security organizations that I work with. The first one is actually the SANS Institute.[1] I have been an Instructor, Courseware Developer and Local Mentor for a number of years now. My main focus has been teaching classes on web server platform security (Apache) and, more recently, the SEC519 Web Application Security Workshop class. Attendance for these classes has been steadily increasing over the past few years and the comments from the students have always been very positive, as we cover vulnerability and exploit topics that many developers either weren't aware of or didn't fully understand in terms of what they mean and what their potential impacts are. Once they get a chance to see these vulnerabilities firsthand in the labs, the lightbulb finally turns on.
The second organization is the Web Application Security Consortium (WASC).[2] The main goal of WASC is to be a clearinghouse of webappsec information and to raise public awareness. As a member, I have assisted with the creation of the Threat Classification (TC) document, which outlines all of the various web vulnerabilities and attacks.[3] The OWASP Top Ten was so well received because it was packaged in a very easily consumable size; however, as the name implies, it covers only the top vulnerabilities and is not all-encompassing. The WASC TC outlines all of the attacks, so it is a great companion reference guide when new web applications are initially being designed and security features need to be identified. We are currently in the early stages of a TC version 2 update that will outline the most current vulnerabilities and attacks.
The final organization that I work with is the Center for Internet Security (CIS).[4] I am the Apache Benchmark Project Leader. The goal of the benchmarks is to provide a step-by-step guide showing how to quickly lock down a specific OS or application. Sadly, these benchmarks are needed because most applications are delivered with a default configuration where security settings are not enabled and too much functionality is turned on. The benchmarks and companion scoring tools help administrators to quickly secure their host and/or application and to ensure continued compliance. We are seeing a real surge in usage of the benchmarks as more and more regulations are being pushed out, including PCI.
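To give a flavor of the kinds of settings such a hardening guide walks through, here is a small sample of Apache directives in the same spirit (a sketch only; the directory path is just an example, and the benchmark itself is far more comprehensive):

    # Don't advertise the server version or module list in headers or error pages
    ServerTokens Prod
    ServerSignature Off

    # Disable HTTP TRACE to block cross-site tracing
    TraceEnable Off

    # Tighten the document root: no directory listings, no server-side includes, no overrides
    <Directory "/var/www/html">
        Options -Indexes -Includes
        AllowOverride None
    </Directory>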
That is correct, Stephen. The main difference with this new deployment is the distributed architecture of the honeypot sensors, as I wanted to get a wider view of web attack traffic. One of the voids that we have in the webappsec space is the lack of concrete metrics. Not just the number of attacks, but metrics on the details of the attacks, such as which web vulnerabilities are really being exploited by attackers (vs. some of the more theoretical ones that are produced in labs but not really exploited in the wild). This sentiment is shared by just about every member of the webappsec community. The real challenge that we face in obtaining this information is that the people who are being compromised through web attacks are not rushing to release it. Unless the attack is a web defacement, the public probably won't even know it happened.
So, where does that leave us? Some people have chosen to deploy web application/server honeypots in hopes of catching new web attack data. While there has been some limited success in this area, the problem has really been one of scope and value. If you were to deploy a web honeypot yourself, you would most likely only get traffic from automated bots/worms probing the web for known vulns (can you believe that there are still hosts infected with Code Red that are just scanning away...). Why would a web attacker want to come to your site? This is the lack-of-value issue. Now, if you were to deploy a complex web application honeypot that mimics your real e-commerce website, that is a different story. You would probably get some web attackers who are looking to obtain customer data such as credit card numbers or social security numbers. Still, the scope is somewhat limited to your own site. The idea that we decided to use was to turn the tables, so to speak. Instead of being the "target" of the attacks, we chose to place our monitoring on an intermediary host - the open proxy server. Almost all web attackers use open proxy servers to help hide their true origin. So, in this type of setup, the web attacks are coming to our host but are destined for other remote targets. Additionally, by deploying many open proxy honeypot systems and then centralizing the logging, we are able to get a much wider view of the web attacks that are occurring around the globe.
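For those curious about the mechanics, a bare-bones sensor of this kind could be sketched with an Apache build that has mod_proxy and ModSecurity loaded (the log path below is just an illustrative example, not our actual deployment): the host acts as an open forward proxy while ModSecurity records everything and blocks nothing.

    # Act as an open forward proxy so attackers will route their traffic through this host
    ProxyRequests On
    <Proxy *>
        Order allow,deny
        Allow from all
    </Proxy>

    # ModSecurity: observe and log everything, block nothing
    SecRuleEngine DetectionOnly
    SecAuditEngine On
    SecAuditLogParts ABCFHZ
    SecAuditLog /var/log/httpd/modsec_audit.log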
We released our first Threat Report [6] in May and the response was tremendous. The data in the report shows that the vast majority of web attacks are automated. It reinforces the "identify a vulnerability and then find a site" vs. "find a site and then look for a vulnerability" mentality, where attackers are looking for the easy kill. This means that it is absolutely critical that organizations stay on top of patch management in order to shorten the window of exposure when new vulnerabilities in public software are disclosed. We are planning a phase 2 release in early July where we will have updated VMware honeypot images running the latest and greatest ModSecurity software and rulesets. Another change is that we will be using the commercial Breach ModSecurity Management Appliance (MMA) [7] for our central logging host.
PCI is a step in the right direction as it identifies the need for all three of these items in a fairly concise manner.
Sure. The idea is that these three technologies and processes should not be isolated. The goal is to achieve a symbiotic balance between the inputs and outputs of each process. The greatest benefit is gained when each process's output is used as input to the other two. Think of it from the customer's perspective for a moment. You pay a lot of money to have a code review conducted or to have vulnerability scans executed. The output of these two processes is normally a big report that identifies all of the vulnerabilities. This is the critical point in the process. What can organizations do with this report information? The vulnerability information should become actionable and be used as input both to a code update by the development team and to a virtual patch by the WAF team. The virtual patch on the WAF can be used as a stop-gap measure to provide immediate protection against someone attempting to remotely exploit a specific vulnerability. It is important to note, however, that virtual patches are complementary to actually fixing the code; they are not a replacement. The problem is that code updates are often expensive and very time consuming, so a virtual patch is a quick and easy fix, which is why they are garnering widespread support. Ivan Ristic and I will actually be doing a joint webcast on this topic as part of the Breach Webinar Series [9] on Wednesday, July 18, 2007.
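To make the virtual patch concept concrete, here is a minimal ModSecurity rule sketch (the application path /app/viewprofile.php and the "id" parameter are hypothetical examples, not taken from an actual report) that constrains a parameter reported as vulnerable to SQL injection to numeric values only, until the code itself is fixed:

    # Virtual patch: the scan report flagged SQL injection in the "id" parameter
    # of this page, so reject any request where "id" is not purely numeric.
    <Location /app/viewprofile.php>
        SecRule ARGS:id "!@rx ^[0-9]+$" \
            "phase:2,t:none,log,deny,status:403,msg:'Virtual patch: non-numeric id parameter'"
    </Location>

A rule like this buys the development team time without disturbing legitimate traffic, since it only rejects requests that violate the expected input format.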
As mentioned above, Virtual Patching is going to increase in popularity as it offers a pathway to expedite implementing mitigations for web vulnerabilities at the infrastructure level vs. the more time-intensive normal SDLC channels. In order to make this process even more efficient, Breach Security is working with vulnerability scanning vendors to implement an automated process for creating ModSecurity Virtual Patching rulesets for issues identified during scanning. Think about it: wouldn't your customers love to get custom Virtual Patching rulesets in the remediation/mitigation sections of their normal vulnerability scan reports?
Another big area for growth will be in what Breach Security is calling Application Defect identification. The idea is that the WAF can monitor outbound data and identify poor coding practices, such as not utilizing the "httponly" or "secure" cookie flags, or misconfigurations such as providing detailed error messages. This information can then be fed back into the SDLC process so that web developers can update the code to address the problems.
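As an illustration of how such an outbound check can work, a detection-only ModSecurity rule along the following lines (a sketch of the general approach, not the actual Breach ruleset) could flag any cookie that is set without the HttpOnly flag:

    # Application defect check: flag any Set-Cookie response header missing HttpOnly.
    # Detection only - the finding is logged and fed back to the developers, nothing is blocked.
    SecRule RESPONSE_HEADERS:Set-Cookie "!@rx (?i)httponly" \
        "phase:3,t:none,pass,log,msg:'Application defect: cookie issued without HttpOnly flag'"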
Boy, nothing really new on that list, is there? These are all well-known security best practice mitigations that everyone should be doing. What is interesting is that the only web application-specific attack listed is SQL Injection. This means that if an attacker can find an alternative, easier way to get at the customer's data, they will do it.