If you are still running vulnerable GoAnywhere MFT software, patching now isn't going to fix your problem. This vulnerability has been known, and publicly discussed, for a while now. If you are still not patched: go straight to incident response and do not waste time patching first.
Many disclosures include notification that they are no longer using the GoAnywhere file transfer service. If you are using GoAnywhere MFT software, make sure it's updated. File transfer services, as well as API gateways, have become increasingly prevalent with the growing use of cloud and outsourced services. Make sure that you know which services are used for these communications, that you've secured them, and that you watch for changes in that security.
The underlying attack utilized a zero-day exploit. Zero-days are next to impossible to defend against until the vendor issues a patch, which it did within a week. Now it is up to users of the software to escalate remediation as part of their patch management process. It often comes down to a race between the evildoer exploiting and the target protecting itself by patching.
The GoAnywhere software we wrote about a few weeks ago is popping up again as more companies get hit with this vulnerability. We haven't seen this system in use in many of the orgs we have tested, but then again, our view of the total install base is fairly small. It is commercial software sold by Fortra, and you would imagine it has been sold to a good number of companies.
Security Week
This update patches a known and already exploited vulnerability for older devices. In addition, more than 60 vulnerabilities are addressed. Interestingly, this update fixes a privilege escalation vulnerability that can be exploited via Apple's Studio Display. The Studio Display uses hardware similar to that of iOS devices, so executing code on the display shouldn't come as a surprise: the device has the hardware to run apps.
This is a busy day for your Apple devices. There is also a firmware update for your Studio Display monitor (you need to be on macOS Ventura 13.3 or later to deploy it), as well as Safari 16.4 for macOS Big Sur and Monterey. With over 30 CVEs in flight, it's not worth waiting for the KEV or other exploit notice; get these updates tested and queued up. The new version of iOS includes improved crash detection for the iPhone 14 (regular and Pro), which should reduce false positives; watchOS 9.4 improves the cover-to-mute alarm silencing function to prevent accidentally disabling the alarm while you're sleeping.
At this point the bare minimum you can do is enable automatic updates and keep your systems patched. I really can't say much more other than: another month, another round of patching.
Only allowing traffic from known good Exchange servers is consistent with Microsoft's ZTA architecture. While they aren't the boss of you and can't tell you what to do, they are the boss of the Exchange Online services and can take actions to prevent interaction with insecure services. The Microsoft bulletin includes the enforcement actions and timetable, which progresses from notification to throttling and ultimately blocking over a 90-day interval if security issues remain unresolved. You can leverage the Exchange Server Health Checker to see how your on-premises Exchange servers stack up, as well as the new mail flow report in the Exchange admin center (EAC) of Exchange Online to detect any unsupported or out-of-date Exchange servers connecting to your Exchange Online tenant to send email.
This could be construed as Microsoft using the dominant position of Exchange to drive more users to use cloud-based Exchange. But, I think we need to see more of this: if you can’t reach minimal security hygiene levels, you can’t connect to the community. This is essentially the needed “walk” that is required to live up to the “zero trust” talk. Since I now have a 2-year-old grandson, I will use this analogy: “If you are still in diapers, you can’t come in the pool.”
This is certainly one way to get the attention of organizations running unpatched and unsupported software. While some will argue this is about increasing sales by forcing users to upgrade, it really is about security of the enterprise. No software product lasts forever: it has to be upgraded and the costs planned as part of the annual IT budget. The phased approach seems very reasonable and moves us a step closer to the concept of ‘collective defense.’
Microsoft is putting the banhammer on admins who don't patch. This is going to be interesting to watch. I suspect there will be a subset of companies that YOLO-admin their on-premises Exchange and may never check the Microsoft emails coming in. You can now add "YOLO admin" to your personal dictionary; you're welcome.
I appreciate GitHub acting quickly to protect users from potential machine-in-the-middle attacks. But this also highlights the need to have procedures in place to quickly swap out crypto keys as needed. Many scripts automating actions against GitHub broke as a result and will likely remain broken until users get around to swapping the respective keys. For SSH in particular, rotating keys isn't quite as straightforward as it is for TLS.
Practitioner's note: if you're already using GitHub over SSH, this will require removing the old host key lines from your known_hosts file (in the .ssh/ folder in your Linux or Windows home directory). Until then, any git actions that use SSH will produce errors identifying the offending lines. Once the old entries are deleted, the new host key will be offered during your next git action. Accept the new key only if it matches one in GitHub's published list: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/githubs-ssh-key-fingerprints
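The known_hosts cleanup above can also be scripted. A minimal sketch in Python (the file path and host name are parameters I've chosen for illustration; in practice `ssh-keygen -R github.com` does the same job and also handles hashed entries):

```python
from pathlib import Path

def drop_host_entries(known_hosts: Path, host: str = "github.com") -> int:
    """Remove stale entries for a host from an OpenSSH known_hosts file.

    Returns the number of lines removed. Note: entries hashed via
    HashKnownHosts won't match a plain host name; use `ssh-keygen -R`
    for those.
    """
    if not known_hosts.exists():
        return 0
    lines = known_hosts.read_text().splitlines(keepends=True)
    # The host field is the first whitespace-delimited token on each
    # line and may itself be a comma-separated list of names/addresses.
    kept = [l for l in lines if host not in l.split(" ")[0].split(",")]
    known_hosts.write_text("".join(kept))
    return len(lines) - len(kept)
```

After running this, the next `git fetch` over SSH prompts you to accept the new host key, which you should compare against GitHub's published fingerprints.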
SOP should be that when a private key is exposed, or otherwise compromised, you revoke and replace the key pair. The GitHub blog explains how to update your copies of their SSH public key, a useful reference for when you change one of your own SSH host keys. This is a good time to make sure that you don't have any private keys stored in your repositories, particularly public-facing ones.
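A first pass at that repository check can be automated by searching for PEM private key headers. A minimal sketch, assuming you only need to sweep the current working tree; keys buried in commit history or stored in non-PEM formats call for a dedicated secret scanner such as gitleaks or trufflehog:

```python
import re
from pathlib import Path

# PEM headers used by OpenSSL- and OpenSSH-format private keys.
KEY_PATTERN = re.compile(
    rb"-----BEGIN (?:RSA |EC |DSA |OPENSSH |ENCRYPTED )?PRIVATE KEY-----"
)

def find_private_keys(repo_root: str) -> list[Path]:
    """Return files under repo_root that appear to contain a private key."""
    hits = []
    for path in Path(repo_root).rglob("*"):
        # Skip git metadata; scan everything else as raw bytes.
        if path.is_file() and ".git" not in path.parts:
            try:
                if KEY_PATTERN.search(path.read_bytes()):
                    hits.append(path)
            except OSError:
                continue  # unreadable file; skip it
    return hits
```

Running this before pushing a repository public is cheap insurance, though it is no substitute for scanning the full history of anything that has already been published.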
Encryption and digital signatures are worse than useless if private keys are not kept private: they provide a dangerous false sense of security. GitHub seems to have handled this well and found the "inadvertent publishing of private information" quickly, but that really means they just got lucky that the error wasn't found first by bad actors. I hope they publish lessons learned on what they changed to greatly reduce the chances of this happening again.
The 2022 Verizon Data Breaches Investigations Report states that 82 percent of data breaches are caused by a human element; other reports claim upwards of 90 percent. GitHub’s actions are both prudent and reflect the potential severity of the risk.
The remedy for the possible compromise of a private key is to replace the key pair. This includes revoking and replacing the corresponding public key. Revocation of key pairs should be routine and should not require "instructions for what users need to do."
Twitter issued a DMCA takedown notice, which GitHub executed within 90 minutes. Now the trick is learning how that information was released. Could you detect employees putting your IP in unauthorized repositories? Do you allow processing of your sensitive information on non-corporate devices (e.g., BYOD)? If so, what do you do to ensure that information remains under your control? The interesting question will be what happens after a disclosure like this. Some companies re-engineer, rendering the exposed code irrelevant; others open-source the entire codebase, relying on measures outside the code for differentiation. It's better to have your response plan ready ahead of time.
We have seen previous source code leaks for major corporations showing up on GitHub and then seemingly taken down hours later. The last sizable one that I saw was the Intel microcode with UEFI, which seemed to have been leaked back in November and was taken down quickly. Since Twitter is a private company whose source code leaked, this would indicate one of two scenarios: a malicious attacker breaking in and stealing source, or an insider threat. The part to watch here is the litigation. Will Twitter compel GitHub to go after the individual or group that posted this? You would think they would use a VPN or otherwise shield their IP, but was that the case?
It’s been a tumultuous six months at Twitter. Widespread employee dissatisfaction can lead to increased insider threats. The theft and release of source code is one example of that threat. It will take time to reestablish the company culture and focus on security. This can only be done by leadership from the top down.
This is one of those areas where citizen privacy rights and law enforcement/intelligence agency needs come into conflict. It may sound like an odd comparison, but this issue is like the desire to make strong encryption unusable by bad actors by essentially making it weak for everyone. Open societies with strong civil liberties often need to lead by declining to use technologies that put those liberties in danger, even when that means law enforcement and intelligence agencies do not have easy access to citizen information.
The intent appears to be limiting spyware such as Pegasus, but as written, agencies will have to develop approved use cases for tools that also have legitimate uses, such as Cellebrite. The trick is that tools used for research and forensics, which facilitate the DHS-required vulnerability disclosure programs, can also be used for nefarious purposes, so we need to achieve a proper balance. Watch for refinements and guidance.
While a step onto the moral high ground, unfortunately the market is much larger than the US. There are many governments willing and able to purchase commercial spyware to augment their intelligence gathering capabilities.
If the intention here is to limit the acquisition of actual commercial spyware, I think that would be a good call, as many, but not all, of these companies tend to be … operating in a gray space, let’s say. If this also applies to offensive tools that can enable webcams and microphones, then I think that would be a rather big mistake for security testing. What constitutes commercial spyware in this case?
There is motivation on the part of governments to misuse tools against their citizens. Of course, the solution is to outlaw the behavior rather than the tools. In the US, we have such laws. One would have no problem with the lawful use of these tools with a warrant: that is, with due process, probable cause, and the permission and supervision of a court with jurisdiction.
White House
Cyberscoop
Gov Infosecurity
Nextgov
SC Magazine
Ars Technica
Kudos to Dish for providing services while recovering, minus a few points for not updating their outage notification. The trick in DR is fully understanding interdependencies, which is difficult as we rapidly deliver new services and capabilities. The days of a full data center restart (to normalize things) are largely gone, particularly with outsourced and cloud services. Work to define both interdependencies as well as strategies to resolve issues where services are unexpectedly "stuck." Make sure you are communicating with customers, internal and external, so they know what to expect, and how to successfully provide feedback if services aren't meeting expectations.
Certainly, Dish Network is struggling with the effects of the ransomware attack. The board will want to review updates to the company's incident response plan and increase investment in its cybersecurity program going forward.
The Dish Network ransomware incident is fascinating: their recovery has been mired in issues while getting less media attention than one would expect.
This is obviously a very extreme action, but it would be good fodder for a tabletop exercise: with both online business and hybrid work environments, what would happen to your users, your IT admins, and your security team if mobile devices were not usable? Especially: what is the impact on your use of those devices for MFA, and is your fallback process secure and scalable?
This is an interesting scenario: the state cuts off Internet and SMS access in a region. What impact would that have on your business? How much of your business is interconnected over public providers that would be offline? What services do you use that rely on SMS at some level? Not just paging and alerting, but also two-factor authentication. Remember when we used to talk about pulling the plug on the Internet? Is that even practical in your current environment? This assessment could make for a very interesting BC/DR tabletop.
Protect OT as if it has both unresolved and unknown vulnerabilities, keeping in mind that these systems are designed for availability and long life with a specific mission, not general computing and security. When updates are available, have a frank conversation with your OT support staff about the realities of applying those updates. You may discover that downtime windows are few and far between. Before bringing plans forward for security scanning of your OT systems, understand their existing security model and the impact such activity has. Read up on the Purdue model (while it's from 1992, it's still relevant) before coming to the table with improvements.
While the report is concerning, it isn’t exactly news. Vulnerabilities have been cropping up in individual OT products over the last decade. Your choice is to air-gap the OT environment from the internet or have a robust network architecture in place to limit attacker access to said environment. The greater threat is perhaps the unscrupulous insider, so double down on both physical and personnel security.
The title of the conference suggests what we all assume: securing single-use, purpose-built devices should not be as difficult, at least by design, as securing general-purpose software like operating systems and browsers. The content of the report suggests that it is.
Another Malicious HTA File Analysis Part 1
https://isc.sans.edu/diary/Another+Malicious+HTA+File+Analysis+Part+1/29674
Apple Updates Everything
https://isc.sans.edu/diary/Apple+Updates+Everything+including+Studio+Display/29682
Update for Windows Snipping Tool
Linus Tech Tips YouTube Hack
https://www.theverge.com/2023/3/23/23653115/linus-tech-tips-youtube-hack-crypto-scam
https://isc.sans.edu/diary/Elon+Musk+Themed+Crypto+Scams+Flooding+YouTube+Today/29434
GitHub Rotates SSH Keys
https://github.blog/2023-03-23-we-updated-our-rsa-ssh-host-key/
MacStealer Malware Exfiltrates Mac Secrets
https://www.uptycs.com/blog/macstealer-command-and-control-c2-malware
redis-py vulnerability leads to mixed up sessions, affects ChatGPT
https://openai.com/blog/march-20-chatgpt-outage
CyberChef Update
https://github.com/gchq/CyberChef/wiki/Character-encoding,-EOL-separators,-and-editor-features