
Chris Wysopal, CTO, Veracode

January 27th, 2010
By Stephen Northcutt


Veracode’s CTO and co-founder, Chris Wysopal, is responsible for the company’s software security analysis capabilities. In 2008 he was named one of InfoWorld’s Top 25 CTOs and one of eWeek’s 100 most influential people in IT. One of the original vulnerability researchers and a member of L0pht Heavy Industries, he has testified on Capitol Hill about government computer security and how vulnerabilities are discovered in software. He is the author of “The Art of Software Security Testing,” published by Addison-Wesley, and we thank him for taking the time to be interviewed as part of the Security Laboratory’s Thought Leader series.

We would love to go online and find some of the papers or presentations you have written. Can you point us to three or four?

Detecting Certified Pre-Owned Software
http://www.blackhat.com/presentations/bh-europe-09/Wysopal/BlackHat-Europe-2009-Wysopal-Certified-Pre-Owned-1.00-wp.pdf

Static Detection of Application Backdoors
http://www.veracode.com/images/stories/static-detection-of-backdoors-1.0.pdf

Building Secure Applications: Avoiding the SANS Top 25 Most Dangerous Programming Errors
https://www.sans.org/webcasts/93028.pdf

Application security metrics from the organization on down to the vulnerabilities
http://www.owasp.org/images/e/e6/Application_security_metrics-Chris_Wysopal.ppt


And, please list your top three “must read” papers that are available on the web that you did not write:

The Tao of Windows Buffer Overflow
http://www.cultdeadcow.com/cDc_files/cDc-351/index.html

Dan Geer Keynote, Source Boston Conference, 13 March 2008
http://www.sourceconference.com/2008/sessions/dan-geer-keynote.html

Reflections on Trusting Trust
http://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf


How did you become interested in the field of information security?


I was working as a software engineer at Lotus in 1992, when the commercial internet was created. We began thinking about how to move our networked software to the internet. At the time I was also involved with “The L0pht,” which we called a “hacker think tank.” I realized that if people were going to expose software to anonymous connections and data from the internet, there would be a huge security challenge. At that point I started researching flaws in web applications, and I published one of the first web application advisories in 1996.


Have you worked on security products before the product you are working on today? If so, please list them and describe the highlights of some of these products.

My first “security product” was a tool called netcat for Windows. It allowed a pen tester to make arbitrary TCP and UDP connections to ports and send data. It is a very useful, general-purpose tool. Hobbit wrote the original netcat for Unix and called it the “swiss army knife for network testing”.
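For readers who have never used netcat, the heart of what it does, opening a raw TCP connection, sending arbitrary bytes, and reading the reply, can be sketched in a few lines of Python. This is an illustrative stand-in, not Wysopal’s tool:

```python
import socket

def tcp_send(host: str, port: int, payload: bytes, timeout: float = 5.0) -> bytes:
    """Open a raw TCP connection, send arbitrary bytes, and return the reply.
    This is roughly what `nc host port` does in its simplest use."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(payload)
        sock.settimeout(timeout)
        try:
            return sock.recv(4096)
        except socket.timeout:
            return b""

if __name__ == "__main__":
    # Example: a hand-crafted HTTP request, a classic pen-test use of netcat.
    reply = tcp_send("example.com", 80, b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(reply.decode(errors="replace"))
```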

My second security product was L0phtCrack. This was the first commercial-quality password auditor and the first one for Windows. It started out as a proof of concept to show how weak Windows password storage was. It then became a commercial tool because, in many cases, it is the only way to know whether you have weak, dictionary-word passwords stored on your systems.
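The dictionary-audit idea is simple enough to sketch. The following is not L0phtCrack’s algorithm (which targeted the weak LM/NTLM hash formats specifically); it is a generic illustration that uses SHA-256 as a stand-in hash and a made-up word list:

```python
import hashlib

def sha256_hex(password: str) -> str:
    # Stand-in hash for illustration; L0phtCrack actually attacked LM/NTLM hashes.
    return hashlib.sha256(password.encode()).hexdigest()

def dictionary_audit(stored_hashes: dict[str, str], wordlist: list[str]) -> dict[str, str]:
    """Return {account: cracked_password} for every stored hash that matches
    the hash of a dictionary word."""
    lookup = {sha256_hex(word): word for word in wordlist}
    return {account: lookup[h] for account, h in stored_hashes.items() if h in lookup}

if __name__ == "__main__":
    words = ["password", "letmein", "dragon"]
    hashes = {"alice": sha256_hex("letmein"), "bob": sha256_hex("Xq9!vT2#")}
    print(dictionary_audit(hashes, words))  # -> {'alice': 'letmein'}
```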


Chris, can you give us a story from your L0pht Heavy Industries days? We are trying to preserve early history when we can.

My friend Mudge made a connection with President Clinton’s chief counter-terrorism adviser, Richard Clarke, and we got a group of L0pht guys together for dinner when he was in Boston/Cambridge. This was around 1997. Our discussions would cover the security landscape: we were down in the weeds and he was surveying with a satellite. We did this a couple of times. Then we invited him back to our workspace to give him a tour. That time he brought along a couple of National Security Council colleagues.

We took them around and showed them our server room, our hardware lab, and our software lab, which was stocked with every version of Unix and Windows you could imagine. After the tour wound down, Richard Clarke went off to a corner and was whispering to his colleagues. Mudge confronted them and said, “You are invited guests here. It’s rude to have a separate conversation. Tell us what you are talking about.” Richard then said that they were bowled over by the capability we had created for cyber research with no funding. After all, most of the software and hardware was corporate cast-offs retrieved from dumpsters and flea markets, and the skills were self-taught with free information. The visit caused the US Government to re-evaluate the asymmetric nature of cyber warfare.


What product are you working on today? What are some of its unique characteristics? What differentiates it from the competition?

The product I am working on today is Veracode’s SecurityReview, an online platform for managing application security risk across a portfolio of applications. It finds application security defects through a combination of static analysis, dynamic analysis, and manual testing.

Veracode’s SecurityReview is unique in that it can statically test 100% of the software code in its final executable form, whether compiled C/C++ binaries, Java or .NET bytecode, or interpreted scripting languages. This gives the most accurate view of software security risk. The other unique aspect of Veracode is that we operate in the cloud, so we can analyze software and deliver results to anyone with a web browser, whether they wrote the code or are deploying it.

Besides the technical aspect of analyzing binaries instead of source, the competitive differentiator my customers cite is that we are far easier to use. Since there are no complex on-site tools to install and configure, they can roll out a software security verification process globally with far less time and expense. The fact that they don’t have to tune our analysis software to their different coding styles in order to get low noise and low false-positive rates is also a big time saver.


May I ask you for a definition of static code analysis that my friend’s nine-year-old daughter has a shot at understanding? By the way, she could count backwards in hex at Christmas 2008 and showed me the about:crashes command in Firefox.

Using static analysis to look for flaws in software is much like using visual inspection in a factory to see if a cast part has a crack or a bubble in it. You aren’t running the software and looking at how it behaves. You are inspecting the code to see how it is constructed. This enables a deeper and more comprehensive view of software defects than you can get by diagnosing an executing version.

My father worked in quality control for a jet engine manufacturer. They would never think of assembling a jet engine and running it in a test cell to look for problems without first inspecting every individual part. Static analysis allows that level of quality checking for software.
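To make the inspection analogy concrete, here is a toy static check, nothing like Veracode’s actual technology, that walks a Python syntax tree looking for calls to a few risky functions without ever running the program. The list of “risky” names is purely illustrative:

```python
import ast

RISKY_CALLS = {"eval", "exec", "pickle.loads"}  # illustrative, not exhaustive

def call_name(node: ast.Call) -> str:
    """Render a called name like 'eval' or 'pickle.loads' when possible."""
    func = node.func
    if isinstance(func, ast.Name):
        return func.id
    if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
        return f"{func.value.id}.{func.attr}"
    return ""

def scan(source: str) -> list[tuple[int, str]]:
    """Inspect source without executing it; report (line, call) for risky calls."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and call_name(node) in RISKY_CALLS:
            findings.append((node.lineno, call_name(node)))
    return findings

if __name__ == "__main__":
    code = "import pickle\ndata = pickle.loads(blob)\nresult = eval(user_input)\n"
    print(scan(code))  # -> [(2, 'pickle.loads'), (3, 'eval')]
```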


What do you think the security products in your space will look like in two years, what will they be able to do?

I think we are going to see the code analysis space bifurcate. First, there will be technology built right into the compiler by the compiler makers to let developers quickly find simple secure-coding errors. This will operate much like a spell checker for writing: it will work on one source file at a time and be nearly instantaneous. Second, there will be heavyweight, whole-program analysis that takes into account all the binary components that are linked and all the interactions across the entire codebase. This analysis will be done in the cloud, because there it can take advantage of supercomputer-like resources and do a depth of analysis no workstation-class machine could manage in reasonable time.


You mention supercomputers several times. I think my last supercomputing conference t-shirt turned into a car-wash rag five years ago; what does a supercomputer, as you refer to it, look like today?

Supercomputers today are computer clusters built from “off the shelf” server-class microprocessors. We use clusters of 24-way machines with 256 GB of RAM to do our analysis. That’s the benefit of cloud computing: the service provider is able to afford hardware you couldn’t dream of owning. This allows a fundamentally different level of analysis in the cloud than you could do on your own machine.


Please share your impression of the defensive information community. Are we making progress against the bad guys? Are we losing ground?


It is a speed race that we are currently losing. If software engineers would stop inventing new ways to deliver code in new languages, frameworks, APIs, and platforms, the defense might be able to catch up. But code is delivered on new platforms and in new ways every day. The offense is able to discover new classes of attack, or new spins on old classes of attack, for these new situations faster than we can come up with ways to find the vulnerabilities. The defense needs to find all vulnerabilities and fix or prevent them, while the offense just needs to find one. It will always be faster to find one.

Looking for bad stuff in the packets or data entering our systems will never work. We need to verify that the software and systems we deploy are known good. Putting software on them that has not been verified as known good is deploying a big black box of risk.
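The “known good” idea can be illustrated with a simple hash allowlist. This is a sketch of the concept only, with a placeholder digest; real-world verification relies on code signing and far richer attestation:

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of SHA-256 digests for approved binaries.
KNOWN_GOOD = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",  # placeholder
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_good(path: Path) -> bool:
    """Deploy-time gate: only binaries whose digest is on the allowlist pass."""
    return sha256_of(path) in KNOWN_GOOD
```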


One of your "must read" papers, Reflections on Trusting Trust, is in the course I teach tomorrow; do you think such a thing is possible in today’s environment?


We have seen trojaned compilers in modern times. It happened last year with Delphi: the malware modified the Delphi compiler to insert malicious code each time it compiled a Delphi application. I don’t see why this couldn’t happen to Microsoft Visual Studio or gcc. As an industry we do little static inspection of executable binaries, relying instead on analyzing source or performing black-box testing. Neither of those analysis techniques will detect a subverted development tool chain. At Veracode we are raising the bar and performing static analysis on binaries. We even have a set of malicious-code and malicious-code-indicator scans to find fishy things in binaries. We have found backdoors that the developers of the code didn’t know about because they didn’t inspect at the binary level. In 2004 I wrote an article for USENIX, “Putting Trust in Software Code” [http://www.usenix.org/publications/login/2004-12/pdfs/code.pdf], that discusses this.
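To give a crude flavor of what scanning a binary for “fishy things” might involve, far simpler than Veracode’s actual analysis, one can search the raw bytes for indicators such as hard-coded IP addresses or credential-like strings. The patterns here are illustrative only:

```python
import re
from pathlib import Path

# Illustrative indicator patterns; a real scanner models code, not just strings.
INDICATORS = {
    "hard-coded IPv4 address": re.compile(rb"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "possible credential": re.compile(rb"(?i)(password|passwd|secret)\s*[=:]"),
}

def scan_binary(path: Path) -> list[tuple[str, bytes]]:
    """Flag byte sequences in an executable that warrant a closer look."""
    data = path.read_bytes()
    hits = []
    for label, pattern in INDICATORS.items():
        for match in pattern.finditer(data):
            hits.append((label, match.group(0)))
    return hits
```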


Please share your thoughts concerning the most dangerous threats information security professionals will be facing in the next year to eighteen months.

Attacks are moving to the clients that access the data. If you own the client or steal the credentials, you look like a legitimate user to the heavily fortified back-end system housing the data. This means that the biggest risks are going to be end-user systems, whether a traditional PC or a smartphone.

Another growing risk is going to be the intentional introduction of malicious code into legitimate applications and hardware. I call these “certified pre-owned” products. Attackers of high-end targets are not going to be satisfied with the chances of exploiting a security flaw that might or might not be available when they want to attack. Maliciously inserted code gives a guaranteed attack vector on a highly scrutinized system.


What is your biggest source of frustration as a member of the defensive information community?

My biggest frustration is the mentality of doing the bare minimum to get a checkbox on a compliance requirement and then being shocked when you aren’t secure. Since customers want this, vendors “race to the bare minimum” and come up with solutions that can plausibly get the checkbox but don’t do anything for security.


We like to give our interview candidates a bully pulpit, a chance to share what is on their mind, what makes their heart burn, even if it is totally unrelated to the rest of the interview. Please share the core message you want people to know.

Stop fielding software and systems whose security risk you have no idea of, and stop hoping that an external defensive system will catch attacks or notice abnormal behavior. There is absolutely nothing wrong with defensive systems, but they shouldn’t be the primary security mechanism. A simple example is turning off unused or insecure ports and services on a system AND using a firewall. The firewall shouldn’t be the primary security mechanism; eliminating the insecure services should be. We need to think the same way about software: test for and fix the defects before deploying it. Don’t rely only on antivirus or intrusion detection. Those are great secondary defensive mechanisms. Don’t ignore the primary mechanism of scanning for and removing defects.
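As a tiny illustration of the “eliminate insecure services first” point, a quick localhost connect scan, no substitute for a real audit, can reveal listening services you may have forgotten to turn off:

```python
import socket

def open_ports(host: str = "127.0.0.1", ports: range = range(1, 1025)) -> list[int]:
    """Connect-scan a host for listening TCP ports; review anything unexpected."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.append(port)
    return found

if __name__ == "__main__":
    print("Listening TCP ports:", open_ports())
```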


Please tell us something about yourself, what do you do when you are not in front of a computer?


When I am not in front of my computer I am likely playing with my kids or taking photographs. Many times I combine the two!

Thanks for the opportunity to be part of the leadership program!
