Compliance is Not an Effective Approach to Cybersecurity
An experiment conducted by Navy CIO Aaron Weis and Naval Postgraduate School command information officer Scott Bischoff had red teams launch frequent, unannounced attacks against their own networks. The experiment demonstrated that the approach “reveals which vulnerabilities are the most dangerous, the easiest for an attacker to exploit with the highest impact—information they wouldn’t have otherwise.” Weis notes that while the Defense Department currently manages cybersecurity as a compliance issue, “Cybersecurity is not a compliance problem.”
For most NewsBites readers, this is a “no duh” moment – even Navy CIO Weis says “We've got…15 to 20 years of track record using a compliance mentality that says it doesn't work…” The same issue has played out in private industry over that period: the vast majority of credit card info breaches occurred at companies that had passed PCI DSS audits. The key is “protect the business/mission first, then convince auditors you are compliant,” and the US DoD needs to focus on the obstacles impeding that change. In civilian federal government, we’ve seen Offices of Inspectors General take the initiative to add active testing (a la targeted threat hunting and pen testing) to their audits, vs. just data calls collecting reams of policy documents for compliance. It is always most effective for security teams to do the right security things before the auditors force the issue!
Well, this is a “water is wet” kind of story, and while it’s a bit embarrassing to hear senior technology leaders say that a compliance-driven mentality is wrong when the rest of the world has been saying the same thing for the past two decades, it is progress. If it moves the Navy in the direction of managing by risk instead of managing by compliance, it’s something we should applaud.
Compliance – configuring security to an accepted baseline and verifying it remains at or above that baseline – is a starting point, not an end state. For many years now, I’ve been involved with audits of FISMA systems against published baselines. Those baselines have been suggesting active monitoring of technical controls for a few years now, and DHS’s CDM program is an example of active monitoring. The problem is you need more than big brother watching; you need your own assessment. About 15 years ago our FISMA audits started to include external pen tests, and about ten years ago internal testing was added, ultimately putting the assessors’ gear live on our internal network. This is both scary and enlightening. There are two excellent lessons here. First, don't wait for a regulator to find your deficiencies; use active means. Remember that auditors have limited scope – you need a plan for *everything*. Second, use a third party to compensate for your biases and question your accepted deficiencies.
If compliance won't get us there, let's focus on what will. Asset inventories, identity management, and patching/vulnerability management all matter. We must also hire reputable penetration testers and give them network diagrams, inside access, and recent vulnerability scans.
The one thing that concerns me here is that this report is making the news at all – that a report like this even has to be published.