For many years, security professionals have advocated the approach of collecting logs from all the places where they’re generated and centralizing them into one or only a few places.
But now, we have much more data and many more sources of security telemetry, including from endpoints, networks, email, IAM, SaaS applications, and cloud assets in multi-cloud infrastructures.
Does centralizing log data still make sense, or should we be thinking about decentralized approaches such as federated data storage or distributed data storage, leveraging security data lakes and other repositories?
The problem is that the centralized approach is becoming much harder as log volumes grow and log sources become more numerous, more varied, and more distributed.
For example, if you operate in multiple public cloud providers, and operate there at scale, it is very likely that you are NOT collecting logs into one place in one cloud. Operational complexity, egress costs, and storage costs all make this a questionable decision for most organizations.
So, what are the pros and cons of each approach?
In this webinar led by Dr. Anton Chuvakin, Security Advisor at Office of the CISO, Google Cloud, we’ll explore key questions such as:
• Why has centralization been a core strategy until now?
• Why might decentralized approaches be more attractive today?
• What are the challenges of decentralized approaches?
• What about normalization?
• How do cloud-native SIEMs like Google Chronicle change how we think about this?
We’ll also review insights from the recent “Third Annual Report on the State of SIEM Detection Risk.” Based on a data-driven analysis of more than 4,000 rules across diverse SIEM platforms in production environments — including Splunk, Microsoft Sentinel, IBM QRadar, and Sumo Logic — the report provides benchmark data on typical data ingestion metrics, MITRE ATT&CK coverage, and rule health in enterprise SOCs.