Table of Contents
- What is a Security Thought Leader - Updated November 18th, 2009
- Framework for Security Thought Leader Interview - August 26th, 2009
- Daniel B. Cid, Sucuri - November 21st, 2013
- Dominique Karg, AlienVault - November 20th, 2013
- Lance Spitzner, Securing The Human, founder - Updated November 29th, 2012
- Bill Pfeifer, Juniper Networks - March 4th, 2011
- Chris Pogue, Senior Security Analyst - July 8th, 2010
- John Kanen Flowers - May 26th, 2010
- Kees Leune, Leune Consultancy, LLC - February 13th, 2010
- Joel Yonts, CISO - February 12th, 2010
- Maury Shenk, TMT Advisor, Steptoe & Johnson - January 31st, 2010
- Chris Wysopal, CTO, Veracode - January 27th, 2010
- Amir Ben-Efraim, CEO, Altor Networks - November 25th, 2009
- Ed Hammersla, COO, Trusted Computer Solutions - Updated November 19th, 2009
- Amit Klein, CTO, Trusteer - September 27th, 2009
- An Interview with Ron Gula from Tenable about the role of a vulnerability scanner in protecting sensitive information - Updated August 13th, 2009
- A. N. Ananth, CEO, Prism Microsystems, Inc. - August 7th, 2009
- Jeremiah Grossman, Founder and CTO of WhiteHat Security - Updated April 24th, 2009
- Mike Yaffe, Director of Product Marketing, Core Security Technologies - April 15th, 2009
- Chris Petersen, Chief Technology Officer, LogRhythm - March 13th, 2009
- John Pirc, IBM, ISS Product Line & Services Executive: Security and Intelligent Network - February 17th, 2009
- Leigh Purdie, InterSect Alliance, co-founder of Snare: Evolution of log analysis - January 28th, 2009
- Bill Worley, Chief Technology Officer, Secure64 Software Corporation - December 9th, 2008
- Doug Brown, former Manager of Security Resources, University of North Carolina at Chapel Hill - October 30th, 2008
- Amrit Williams, Chief Technology Officer, BigFix - June 30th, 2008
- Andrew Hay, Q1 Labs - May 13th, 2008
- Gene Schultz, CTO of High Tower - April 4th, 2008
- Tomasz Kojm, original author of ClamAV - April 3rd, 2008
- Bill Johnson, CEO TDI - April 2nd, 2008
- Gene Kim, Tripwire - March 14th, 2008
- Kevin Kenan, Managing Director, K2 Digital Defense - March 14th, 2008
- Leigh Purdie, InterSect Alliance, co-founder of Snare - March 7th, 2008
- Marty Roesch, Sourcefire CEO and Snort creator - February 26th, 2008
- Dr. Anton Chuvakin, Chief Logging Evangelist with LogLogic - January 28th, 2008
- Kishore Kumar, CEO of Pari Networks - Updated January 28th, 2008
- Interview with Dr. Robert Arn, CTO of Itiva - November 1st, 2007
- Interview with Charles Edge - September 15th, 2007
- Ivan Arce, CTO of Core Security Technologies - Updated May 6th, 2009
- Mike Weider, CTO for Watchfire - Updated July 23rd, 2007
- Interview with authors of The Art of Software Security Assessment - Updated July 9th, 2007
- Ryan Barnett, Director of Application Security Training at Breach Security, Inc. - June 29th, 2007
- Dinis Cruz, Director of Advanced Technology, Ounce Labs - June 11th, 2007
- Brian Chess, Chief Scientist for Fortify Software - June 9th, 2007
- Caleb Sima, CTO for SPI Dynamics - Updated May 29th, 2007
- An Interview with David Hoelzer, author of DAD, a log aggregator - May 1st, 2007
Dr. Anton Chuvakin, Chief Logging Evangelist with LogLogic
Stephen Northcutt - January 28th, 2008
Dr. Anton Chuvakin from LogLogic has agreed to be interviewed by the Security Laboratory and we certainly thank him for his time! He is probably the number one authority on system logging in the world, and his employer is probably the leading vendor for logging, so we appreciate this opportunity to share in his insights.
Dr. Chuvakin, there has been a lot of attention on the whole logging space; we have gone from a couple of vendors to probably ten or more. Is this a fad or can we expect this to be something we are focused on for the next ten years or more?
Call me Anton, Stephen, no need to be formal. Log management is here to stay and literally everybody needs it, unlike some other security and IT technologies. Everything produces logs and there is (and always will be!) a need to deal with them. Thus, approaching logs with an open log management platform that enables all possible - current and future! - uses for log data, from regulatory to operational, is the only way to not be buried under the proverbial logjam.
Anton, I like the term logjam, it makes a lot of sense in this context. Do you have any insights as to why the auditors and compliance folks are so focused on log analysis these days?
Logs = accountability. If you are not serious about logs, you are not serious about accountability. Is that the message your organization wants to send? Keep in mind that most recent regulations and mandates actually call for creating, retaining and - yes! - reviewing logs: by ignoring logs you break the law.
Thank you for that, Anton. Can you expand a bit on the statement "ignoring logs = breaking the law"?
Sure, that’s easy: while some of the laws (broadly used to mean "external mandates"), such as the venerable Sarbanes-Oxley, only imply having logs (for example, when they talk about controls and the need to audit them), others are more explicit. For example, FISMA (for federal agencies) mandates having and reviewing logs, and HIPAA (for healthcare) also directly mentions them.
There are a number of approaches to achieve defense in depth, one of these would be to focus on identifying the critical information and making sure it is protected. In the past, this sounded nice, but impractical; who would bother to identify and survey their information? Since 2006, there has been a movement towards digital discovery, so that part of the work is done. Now, if we want to, we can leverage that work to achieve defense in depth for critical information. When I think about log collection and log analysis, that is what comes to mind. Are you seeing more attention to information centric protection architectures? (You can read more about this concept here: http://www.sans.edu/resources/securitylab/321.php)
Recent signs that security is rapidly evolving from network- and system-focused to information-focused indicate that there will be a greater need for fine-grained monitoring and auditing capabilities across the whole organization. How do you achieve that? With logs, you already have that capability! People just need to harness it by deploying log management architectures. Some of the evidence that such evolution is indeed taking place (and does not only exist in the minds of market analysts):
- More attention paid to database security compared to previous years. Database security is mostly about data, not so much about the infrastructure.
- Recent abundance of data loss and data theft incidents that also reminded people that security is not only about "fighting worms", but also about keeping the data safe
- Additionally, identity theft has raised the amount of attention paid to data security: people are starting to understand that malicious hackers are not only (not so much, in fact!) "out to have fun with their networks", but are actually more intent on making a profit off their data…
Anton, for some reason, this is my week to think about logs. I was reading NIST SP 800-92, Guide to Computer Security Log Management, yesterday, and there is a section about common sources of logs. All well and good, but I know there are lots of other sources of logs. If you think about it, everything creates logs; how are organizations going to handle that?
NIST SP 800-92: http://csrc.nist.gov/publications/nistpubs/800-92/SP800-92.pdf
We are looking at a coming increase in the *breadth* of log sources that people care about: it used to be just firewall and IDS logs, then servers, and now it is expanding to all sorts of log sources - databases, applications, etc. Specifically, a few years ago, any firewall or network admin worth his salt would look at least at a simple summary of connections that his baby PIX or Checkpoint was logging. Indeed, firewall log analysis represented a lot of early business for log management vendors. Many firewalls log in standard syslog format, and such logs are easy to collect and review.
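The kind of simple connection summary Anton describes takes only a few lines of script. Here is a minimal sketch in Python; the log line layout and the `conn:` keyword are made-up illustrations, since real PIX and Check Point messages use vendor-specific formats that you would need to match against the actual device output:

```python
import re
from collections import Counter

# Illustrative syslog-style firewall lines; real PIX/Check Point
# messages use vendor-specific formats.
SAMPLE_LOGS = [
    "Jan 28 10:00:01 fw1 conn: allow tcp 10.0.0.5:4312 -> 192.0.2.10:80",
    "Jan 28 10:00:02 fw1 conn: deny  tcp 10.0.0.9:5521 -> 192.0.2.10:22",
    "Jan 28 10:00:03 fw1 conn: allow tcp 10.0.0.5:4313 -> 192.0.2.11:443",
]

# Hypothetical pattern for the illustrative format above.
LINE_RE = re.compile(
    r"conn: (?P<action>allow|deny)\s+(?P<proto>\w+)\s+"
    r"(?P<src>[\d.]+):\d+ -> (?P<dst>[\d.]+):(?P<dport>\d+)"
)

def summarize(lines):
    """Count connections per (action, destination port) pair."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            counts[(m.group("action"), m.group("dport"))] += 1
    return counts

if __name__ == "__main__":
    for (action, dport), n in summarize(SAMPLE_LOGS).most_common():
        print(f"{action} to port {dport}: {n}")
```

Even a rough summary like this (denied connections by port, top talkers) was the bread and butter of early firewall log review.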
Next, even though system administrators always knew to look at logs in case of problems, massive server operating system (both Windows and Unix/Linux flavors) log analysis didn't materialize until more recently. Collecting logs from all critical (and many non-critical) Windows servers, for example, was hindered by the lack of agentless log collection tools, such as LASSO. On the other hand, Unix server log analysis was severely undercut by a total lack of unified format for log content in syslog records.
Similarly, email tracking through email server logs has languished: people only turn to email logs when something goes wrong (email failures) or horribly wrong (an external party subpoenas your logs). Lack of native centralization and, to some extent, complicated log formats slowed down email log analysis initiatives.
Next, database logging wasn't on the radar of most IT folks until, probably, last year. It is emerging now! In fact, IT folks were perfectly happy to leave the extensive logging and data access auditing capabilities of their RDBMSs turned off. It will be all the rage in the very near future. Oracle, MS SQL, DB2 and MySQL all provide excellent logging, if you know how to enable it (and know what to do with the resulting onslaught of data).
In a more remote future, various esoteric log sources will be added into the mix. Custom applications, physical sensors and many other uncommon devices and software want to "be heard" as well! *grin*
So, we observed people typically paying attention to firewall logs first, then server logs, then email and web logs, then databases (this is coming now) and then other applications and even non-IT log sources.
OK, but while all of this sounds nice and important, what is the driver, because this is going to cost serious dinero? What is the business logic behind collecting and storing a gazillion logs?
This might sound obvious, but it is still a major trend: more regulations, governance frameworks and standards will cover logs and logging. Just look at recent PCI, NIST 800-92 and ITIL updates, among others; logs are prominently featured. A typical regulation mandates that organizations have logs, retain them and review them periodically.
What will emerge "after compliance"? More compliance, of course. However, there is one more thing that is emerging right now, and that is directly related to logs: e-discovery. Logs often need to be produced as evidence, and doing that successfully without having a log management platform is next to impossible.
Well, from my point of view it is great to see that forward-looking organizations are intentionally moving away from the hard robust perimeter/ soft chewy inside to something defensible. Amazingly, a big part of this is auditing. I was talking with a security architect from a bank in California last week. He started out looking for a web application firewall, but as his team considered the problem they were trying to solve, they wanted more than just a WAF, they wanted a solution with full on database auditing, because, these days, web solutions are database driven. Are you seeing an increased sensitivity to auditing; collecting and reviewing information?
There is also a trend towards auditing more access and more activity through logs; for example, few of the file server or database vendors used to care much about logging, but now they do. What used to be just about "access to info" is now about "auditable access to info."
Recently, I was involved in some fun discussions on storage security. One of the storage vendors I talked to mentioned that every year they've been in business (since the early 90s), they have had to add one or more audit features to their information access solution: increasing the level of detail, improving the performance of their audit logging, or adding some other audit-related feature.
My response was: "What? You didn't build the data access audit features from the very beginning?" And then I thought: why provide access to any information without having an ability to log each and every successful and failed access?
Having access audit info is useful in so many cases, that not doing it becomes inexcusable and, frankly, stupid. Some of the many uses for such information are:
- Operational troubleshooting: knowing who failed to access the info and why
- Policy audit: who accessed what, with or without authorization?
- Regulatory compliance: legal requirement to have audit data is there to stay
- Incident response: what info got stolen and by whom?
- Information access trending and performance optimization: are we providing quick and reliable access to information?
When thinking about logs, one of the mistakes one can make is take a narrow view: logs for security, logs for compliance, logs for application debugging, etc. The reality is that logs are useful for all of the above and more (much more - all the way to HR and legal) and, thus, need to be approached broadly: with a log management platform.
OK, so you've decided to harness the power of logs. But how? Should you build or buy? The experience of hundreds of organizations and people screams: "Don't build your own!" Log management is not as simple as many think, especially when we are looking at terabytes of logs.
What is your number one pet peeve (wrt logging, please), and number two?
- That people still sometimes treat logs like dirt even after they themselves got burned really badly due to not having logs during the incident. If you don’t respect (i.e., collect + analyze and review) logs, you will suffer: my pet peeve is that soooo many people choose to suffer.
- Sorry, lack of standards in log formats, contents and meaning is only #2 (and we are working on fixing it via CEE standard by MITRE)
What is the biggest mistake, bar none, people make when it comes to logging?
Not turning logging on!
That is a hoot! Do you know, the first time I heard that was in 1995. Dr. Gene Schultz, who is also involved in log management these days, was giving a talk on incident response. I can still remember what he said. "At least turn logging on, even if you don’t look at the logs. That way, when you call someone like me in to help you, we have something to look at." What are some of the other mistakes people commonly make?
Yes, I can spend hours, maybe days, talking about other mistakes, but, sadly, not having logs is still pretty common. This is by far the most damaging and fatal mistake. Other mistakes related to logging I’ve seen over the years are:
- Not logging at all: the most common reason for not having logs is never turning logging on.
- Not looking at the logs: yes, having logs is great, but most of the value comes from actually looking at them (see Top 11 Reasons to Look at Your Logs!)
- Storing logs for too short a time: logs might be gone not because of evil hackers, but because you configured log retention to be too short (see Top 11 Reasons to Collect Logs! )
- Prioritizing the log records before collection: this is a tricky one and comes from "betting wrong" on which logs you’d actually need - don’t bet - grab them all!
- Ignoring the logs from applications: this mistake is coming to light more and more as we move from network and system security to data and information security where most of the useful log data will come from applications, not network gear
- Only looking at what you know is bad. The idea of "just show me what is bad" is attractive, but also happens to be completely wrong for the realm of logs: they are way too complex to just pinpoint "what's wrong." A better approach is to combine this with ignoring what you know is normal and looking at the rest.
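The "ignore what you know is normal, look at the rest" idea (often described as artificial ignorance) can be sketched in a few lines. In this hedged Python illustration, the `KNOWN_NORMAL` patterns and the sample log lines are hypothetical; in practice the whitelist is built up gradually by reviewing what your own systems routinely log:

```python
import re

# Hypothetical "known normal" patterns -- in practice this list is
# grown iteratively from your own systems' routine log output.
KNOWN_NORMAL = [
    re.compile(r"session opened for user \w+"),
    re.compile(r"session closed for user \w+"),
    re.compile(r"CRON.*CMD"),
]

def unusual_lines(lines):
    """Return only the log lines that match no known-normal pattern."""
    return [
        line for line in lines
        if not any(p.search(line) for p in KNOWN_NORMAL)
    ]

logs = [
    "sshd: session opened for user alice",
    "CRON[123]: (root) CMD (run-parts /etc/cron.hourly)",
    "sshd: Failed password for root from 198.51.100.7",
]
for line in unusual_lines(logs):
    print(line)
```

The point is the inversion: instead of enumerating "bad" events (an unwinnable game), you filter out the routine and review whatever is left, which surfaces the unexpected by construction.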
What are the odds that my grandson will still be alive to see the day of a truly useful, implemented standard for logging?
Log standards: will they ever come? Why, yes! They are coming NOW. The work on Common Event Expression (CEE - see http://cee.mitre.org) has begun, and many of the key log producers (i.e., software and platform vendors) and consumers (i.e., log management vendors) are on board. The road will be long and there will be many battles, but we are already walking, not arguing about whether to go. Well, we will have some pieces implemented way before his time! However, I don't think we will ever see a world where every single log looks the same (standard format), is moved via the same mechanisms (standard log transport) and is phrased in a standard way (standard content): there is way too much legacy stuff already built for that to ever happen.
Anton, one of the traditions of the Security Laboratory is to offer people a bully pulpit: a chance to share whatever is burning in your heart related to security.
I said it before and I will say it again - and again - and again - and again - and again - and again - and again - and again - and again - and again *grin*
- turn logging on,
- collect those logs,
- approach logging with a broad platform-based approach - not on a siloed basis.
OK, we really thank you for your time and effort. Last question: can you tell us just a bit about Anton Chuvakin? When you are not in front of a computer, what do you do for fun?
Oh, I have an unusually long list for a purported geek: beyond spending time with my wife, Olga, I enjoy reading, hiking, dancing, playing volleyball, traveling (yes, even for business!), skiing (in winter), kayaking (in summer) and probably a few other things that I enjoy but can't think of at the moment *grin*