John Pescatore - SANS Director of Emerging Security Trends
Patches are like vaccines: Low quality leads to long-lasting resistance and self-inflicted wounds.
This week's Drilldown will focus on an item (included below) from NewsBites Issue 10, in which a Google security researcher reported that roughly a quarter of the zero-day vulnerabilities her team finds are variants of previously known issues that received ineffective or partial security patches.
Malware and intrusion detection systems have long been judged by two key parameters:
- False negatives--declaring a file or executable to be harmless, or a set of network events to be normal/safe traffic when they are actually malicious.
- False positives--declaring a file or executable, or a set of network events to be malicious when they represent legitimate business traffic.
Both errors result in damage or disruption to the business, but false positives can be even more damaging because they are self-inflicted wounds: the security action itself causes the damage/disruption, not attacker activity. False positives often lead quickly to security controls being turned off, and the lost trust is very hard to recover.
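The two error types above can be made concrete with a minimal sketch (purely illustrative; the sample names and dict-based API are assumptions, not any product's interface) that scores a detector's verdicts against ground truth:

```python
# Illustrative sketch: counting false negatives (malicious judged benign)
# and false positives (benign judged malicious) for a hypothetical detector.

def score(ground_truth, verdicts):
    """Both args map sample name -> 'malicious' or 'benign'."""
    fn = sum(1 for s, actual in ground_truth.items()
             if actual == "malicious" and verdicts[s] == "benign")
    fp = sum(1 for s, actual in ground_truth.items()
             if actual == "benign" and verdicts[s] == "malicious")
    return {"false_negatives": fn, "false_positives": fp}

truth    = {"a.exe": "malicious", "b.exe": "benign",    "c.exe": "benign"}
verdicts = {"a.exe": "benign",    "b.exe": "malicious", "c.exe": "benign"}
print(score(truth, verdicts))  # {'false_negatives': 1, 'false_positives': 1}
```

Here a.exe is the false negative (attacker activity missed) and b.exe is the false positive (legitimate business traffic blocked).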
Bad patches combine the worst of both error types. Security demands that patching be done faster than IT wants to move, and a bad patch both fails to close the hole (a false negative) and disrupts the business (the self-inflicted wound of a false positive)--and the disruption then gets blamed on the security process.
Before October 2003, Microsoft had no regular patch schedule. Patches came intermittently--sometimes quickly, sometimes waiting for other releases. The damage done by the Code Red and Nimda attacks in 2001 and Slammer and Blaster in 2003 finally convinced Microsoft that it needed to move to regular, predictable patch releases in order to decrease its own cost as well as reduce its IT customers' pain from constant, unpredictable scrambling to deploy patches.
However, when Microsoft made that transition, its patch quality processes were not very mature, and organizations learned either to wait for other organizations' reports of applications that stopped working after Windows was patched or to perform weeks of expensive patch testing before pushing patches out. The low quality of the Windows patches early on caused lasting distrust of the patches and resulted in slower patching and lower security levels--very analogous to issues surrounding infectious disease vaccinations today.
Bottom line: All RFPs and procurement evaluations for software should require the vendor to provide data on the percentage of patches that had to be recalled or reworked. This should be a highly weighted security evaluation criterion in all software procurements.
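As a sketch of how that recommendation could be applied (the criteria names, weights, and 25% recall figure below are illustrative assumptions, not a standard), patch-recall data from the RFP can feed a weighted procurement score:

```python
# Illustrative sketch: folding a vendor's patch-recall rate into a
# weighted procurement score. Names, weights, and figures are assumed.

def procurement_score(criteria_scores, weights):
    """criteria_scores: criterion -> score in [0, 1]; weights sum to 1.0."""
    return sum(weights[c] * criteria_scores[c] for c in weights)

vendor = {
    "features": 0.9,
    "cost": 0.7,
    # 1.0 minus the fraction of patches recalled/reworked (hypothetical 25%)
    "patch_quality": 1.0 - 0.25,
}
# Patch quality highly weighted, per the recommendation above.
weights = {"features": 0.3, "cost": 0.2, "patch_quality": 0.5}
print(round(procurement_score(vendor, weights), 3))  # 0.785
```

A vendor with frequent patch recalls would see its overall score drop sharply under such a weighting, which is the point of making the criterion highly weighted.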
Better Patches Could Reduce the Number of Zero-days
(February 2 and 3, 2021)
Maddie Stone, a Google security researcher, told an audience at the USENIX Enigma 2021 virtual conference that a quarter of the 24 zero-day vulnerabilities Google's Project Zero team tracked as exploited in the wild last year were variants of security issues that had already been disclosed or had been incompletely patched. In a blog post, Stone writes, "If more vulnerabilities are patched correctly and comprehensively, it will be harder for attackers to exploit 0-days."
[Neely] It's easy to get tunnel vision when a flaw is reported and address only that issue, particularly when facing a disclosure countdown. Security code review has to begin at inception, not after flaws are discovered; reviewing and fixing existing applications is a nearly insurmountable task. One approach is to augment bug-fix procedures with activities to seek out and remediate similar flaws elsewhere in the code, possibly necessitating a second release.
[Pescatore] (Long anecdote coming, so you can skip to the last sentence if not in the mood.) Years ago I worked for the U.S. Secret Service. Part of the job was serving on advance teams and doing "technical security" in places where the protectee would visit or stay overnight. In hotels, we had to get the elevator maintenance guy to come in, inspect the elevators, and recommend which one was the most reliable. On my first solo trip (after training, I had worked in tandem for a few trips), in Denver, the Otis elevator guy said, "I just did a repair and full preventive maintenance on elevator A, but elevator B never fails. You should use B. It always seems like on these new fancy elevators with all the electronics, when I fix something, I also weaken something else that breaks the following week." I felt that elevator B was probably due to fail, ignored the advice, and chose elevator A. The next morning, then Vice President George H. W. Bush got stuck in elevator A when the doors opened three feet above the lobby. Much paperwork ensued. Moral of the story: As we learned with Windows patches in the early days, too many software vendors treat patching as a drain on profits and don't invest in doing it right. Poor patch QA is usually a sign of bigger problems at the vendor, including insufficient maturity in the software life cycle, underinvestment in QA overall, and so on.
Read more in:
Google Project Zero: Deja vu-lnerability | A Year in Review of 0-days Exploited In-The-Wild in 2020
Google Docs: 0day "In the Wild"
Dark Reading: Patch Imperfect: Software Fixes Failing to Shut Out Attackers
ZDNet: Google: Proper patching would have prevented 25% of all zero-days found in 2020
The Register: Rubbish software security patches responsible for a quarter of zero-days last year
Cyberscoop: Bad patching practices are a breeding ground for zero-day exploits, Google warns
Duo: Making 0-Day Hard Is Still Hard