
Reading Room

More than 75,000 unique visitors read papers in the Reading Room every month and it has become the starting point for exploration of topics ranging from SCADA to wireless security, from firewalls to intrusion detection. The SANS Reading Room features over 2,380 original computer security white papers in 90 different categories.

Latest 25 Papers Added to the Reading Room

  • Insider Threat Mitigation Guidance Masters
    by Balaji Balakrishnan - October 9, 2015 in Work Monitoring

    Insider threats are complex and require planning to create multi-year mitigation strategies. Each organization should tailor its approach to meet its unique needs. The goal of this paper is to provide relevant best practices, policies, frameworks and tools available for implementing a comprehensive insider threat mitigation program. Security practitioners can use this paper as a reference and customize their mitigation plans according to their organizations' goals. The first section provides reference frameworks for implementing an insider threat mitigation program with the Intelligence and National Security Alliance (INSA) Insider Threat roadmap, Carnegie Mellon University's Computer Emergency Response Team (CERT) insider threat best practices, CERT insider threat program components, National Institute of Standards and Technology (NIST) Cybersecurity Framework, and other relevant guidance. This section provides an implementation case study of an insider threat mitigation program for a hypothetical organization. The second section of this paper presents example use cases on implementing operational insider threat detection indicators by using a risk scoring methodology and Splunk. A single event might not be considered anomalous, whereas a combination of events assigned a high-risk score by the methodology might be considered anomalous and require further review. A risk scoring method can assign a risk score to each user/identity for each anomalous event. These risk scores are aggregated daily to identify user/identity pairs associated with a high risk score. Further investigation can determine if any insider threat activity was involved. This section explains how to implement a statistical model using standard deviation to find anomalous insider threat events. The goal is to provide implementation examples of different use cases using a risk scoring methodology to implement insider threat monitoring.
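    The standard-deviation scoring idea described above can be sketched as follows. This is a minimal illustration with made-up daily event counts, not the paper's Splunk implementation; the weight and threshold are illustrative assumptions.

```python
from statistics import mean, stdev

# Hypothetical daily file-download counts per user; in the paper's approach
# these would come from Splunk searches over real log data.
daily_counts = {
    "alice": [12, 10, 11, 13, 12, 11, 95],   # spike on the last day
    "bob":   [20, 22, 19, 21, 20, 23, 21],
}

def risk_scores(counts, weight=10):
    """Score each day by how many standard deviations it sits above the mean."""
    mu, sigma = mean(counts), stdev(counts)
    scores = []
    for c in counts:
        z = (c - mu) / sigma if sigma else 0.0
        scores.append(round(max(z, 0.0) * weight, 1))  # only above-mean deviations add risk
    return scores

for user, counts in daily_counts.items():
    scores = risk_scores(counts)
    # Flag days scoring roughly two standard deviations above the user's mean.
    flagged = [(day, s) for day, s in enumerate(scores) if s >= 20]
    print(user, "high-risk days:", flagged)
```

    In a real deployment the per-day scores would be summed across multiple indicator types before comparison against a review threshold, as the paper's aggregation step suggests.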

  • Passing the Sniff (Snort) Test by Matthew Hansen - October 7, 2015 in Mobile Security

    They go by several names: Bloatware. Trialware. Pre-installation-ware. Some of them are completely innocuous. Many are designed to automate harvesting of information from the user. The line between these "unwantedware" and malware is thinning. Whether they arrive in our networks from a less-than-perfect supply chain, or as a natural result of Bring-Your-Own-Device (BYOD) policies, or even as an aggressive customer support "service" from the manufacturer, unwantedware will exist. On the best of days, network defenders will identify, mitigate, and remove said software from their organization in the hopes that it cannot come back. Unfortunately, these herculean efforts are not enough. Users will ignore warnings from the security administrators. Users will pay lip service to the security training their organization provides. Users will rationalize intrusions into their devices through a myriad of worthless excuses: "I'm really boring", or "Anyone who wants to spy on me will have a lot of nothing to do", or "I'm really ugly, turning on my webcam would hurt THEM." Time and again users have shown that they are incapable of understanding the risks involved; they must be trained to dislike being spied on. In this paper we will examine unwanted data exfiltrations initiated by software we are told to trust, be it prepackaged software, chatty smartphone apps, or smart television applications. We will also present methods for detecting said exfiltrations, determining what data is being sent, and alerting the user in a meaningful way.

  • The Industrial Control System Cyber Kill Chain by Michael J. Assante and Robert M. Lee - October 5, 2015 in Industrial Control Systems

    Read this paper to gain an understanding of an adversary's campaign against ICS. The first two parts of the paper introduce the two stages of the ICS Cyber Kill Chain. The third section uses the Havex and Stuxnet case studies to demonstrate the ICS Cyber Kill Chain in action.

  • Practical Security Considerations for Managed Service Provider On-Premise Equipment Masters
    by Mike Yeatman - October 5, 2015 in Best Practices

    Many organizations are not adequately staffed to perform 24x7 monitoring of network, systems infrastructure, and security activities such as vulnerability scanning and penetration testing. Use of a third-party managed service provider to fill this gap is on the rise. It is typical for managed service providers to require the implementation of an on-premise device or appliance at the customer location(s). But, who watches the watcher? Service providers must be sure to fully harden any on-premise device placed on a customer network, and they must take steps to protect their own infrastructure against the propagation of an attack or compromise of the customer network and systems. Customers must be informed and work closely with service providers to assure proper placement of the on-premise device such that it does not become a vector for compromise against the customer network. Collectively, and in accordance with a set of standards and guidelines, all stakeholders involved in the managed services relationship must be sure to set a sustainable benchmark that sufficiently reduces the chances of third-party on-premise equipment becoming the root cause, or a contributing cause, of a security compromise.

  • Practical Approaches for MPTCP Security Masters
    by Joshua Lewis - October 2, 2015 in Intrusion Detection, Intrusion Prevention

    Multi-path TCP (MPTCP) is an emerging IETF standard for providing connection resilience and bandwidth aggregation. MPTCP evolves the existing TCP protocol by allowing multiple TCP flows for a TCP session. This provides exciting new possibilities for mobile devices that can maintain TCP sessions as connection paths are added or dropped, and multi-homed servers that allow TCP sessions to take advantage of a mesh topology. However, current network security monitoring infrastructure solutions cannot appropriately inspect MPTCP connections, leaving significant intrusion detection and data loss blind spots. This paper will discuss practical approaches for MPTCP security.
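    As a minimal illustration of the monitoring gap, the sketch below (a hypothetical example, not from the paper) scans a raw TCP header's option list for option kind 30, which MPTCP uses for its signaling per RFC 6824. A sensor that cannot correlate subflows could at least flag MPTCP-capable connections this way.

```python
MPTCP_OPTION_KIND = 30  # TCP option kind assigned to MPTCP (RFC 6824)

def has_mptcp_option(tcp_header: bytes) -> bool:
    """Return True if the raw TCP header carries an MPTCP option."""
    data_offset = (tcp_header[12] >> 4) * 4   # header length in bytes
    options = tcp_header[20:data_offset]
    i = 0
    while i < len(options):
        kind = options[i]
        if kind == 0:          # End of option list
            break
        if kind == 1:          # NOP is a single byte
            i += 1
            continue
        if kind == MPTCP_OPTION_KIND:
            return True
        if i + 1 >= len(options) or options[i + 1] < 2:
            break              # truncated or malformed option length
        i += options[i + 1]    # skip by the option's stated length
    return False

# A synthetic 28-byte SYN header: NOP, NOP, then an MPTCP option (kind 30).
syn_with_mptcp = b"\x00" * 12 + b"\x70\x02" + b"\x00" * 6 + b"\x01\x01\x1e\x04\x00\x00\x00\x00"
print(has_mptcp_option(syn_with_mptcp))   # True
```

    A full inspection solution would additionally need to reassemble data across subflows, since MPTCP's data sequence numbering lets an endpoint split a payload over several paths.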

  • Automating the Hunt for Hidden Threats Analyst Paper
    by Eric Cole, PhD - October 1, 2015 in Intrusion Detection

    An Analyst Program whitepaper by Dr. Eric Cole. It defines the process of automating the hunt for threats, and discusses how to deploy a continuous threat-hunting process while preparing a team to analyze threats to protect critical processes and data.

  • Forensic Analysis of Industrial Control Systems by Lewis Folkerth - September 25, 2015 in Forensics

    Industrial Control Systems (ICS) contribute to our safety and convenience every day, yet remain unseen and unnoticed. From oil refineries to traffic lights, from the elevators we ride to the electric power plants that keep our lights on, they provide the control and monitoring for our essential services. ICS have served reliably for decades, but a changing technological environment is exposing them to risks they were not designed to handle. Internet connectivity, vulnerability assessment tools, and attacks by criminal and nation-state organizations are part of this changing picture. Along with this higher-risk environment comes the certainty that some of our ICS will be compromised. In order to prevent recurring attacks, security professionals must be able to discover where the compromise originated, how it was carried out, and, if possible, who was responsible. Many types of ICS run on proprietary hardware, so commonly accepted forensic techniques must be adapted for use in an ICS environment. In order to detect a compromise, baseline configurations should be documented. Networks should be monitored for unauthorized access and activity. In addition, a response plan should be in place to maintain service and streamline recovery. Techniques for forensic analysis were adapted and tested on live ICS, resulting in recommendations for successful detection and recovery after an incident. With adequate preparation and the appropriate response planning and execution, it is possible to successfully perform a forensic analysis for an ICS compromise.
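    The baseline-documentation step the paper recommends can be illustrated with a short sketch. The device names and configuration blobs below are made up; the point is only that a recorded fingerprint lets a responder spot configuration drift after a suspected compromise.

```python
import hashlib

def fingerprint(config: bytes) -> str:
    """Hash a device configuration so later comparisons are cheap and exact."""
    return hashlib.sha256(config).hexdigest()

# Baseline captured while the system is known-good (hypothetical devices).
baseline = {
    "plc-01": fingerprint(b"ladder-logic v1.4"),
    "rtu-07": fingerprint(b"telemetry map rev C"),
}

def drifted(current: dict) -> list:
    """Return device names whose current config no longer matches the baseline."""
    return [dev for dev, cfg in current.items()
            if fingerprint(cfg) != baseline.get(dev)]

print(drifted({"plc-01": b"ladder-logic v1.4",
               "rtu-07": b"telemetry map rev C (modified)"}))  # ['rtu-07']
```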

  • Orchestrating Security in the Cloud Analyst Paper
    by Dave Shackleford - September 22, 2015 

    Survey results indicate a strong need to keep security close to the data as it traverses cloud systems. Findings also indicate a need to integrate monitoring capabilities across hybrid environments and to partner with public cloud providers for full-spectrum visibility and response. Learn more in this survey report focusing on cloud security.

  • Fingerprinting Windows 10 Technical Preview Masters
    by Jake Haaksma - September 17, 2015 in Intrusion Detection, Protocols

    Understanding the intricacies of a network is powerful information for security professionals and malicious attackers alike. Operating system (OS) fingerprinting is the process of determining the OS of a remote computer. This can be primarily accomplished by passively sniffing network packets between hosts or actively sending crafted packets to the ports of a target host in order to analyze its response. This paper attempts to fingerprint Windows 10 Technical Preview for the purpose of OS identification and to improve Nmap's OS detection database.
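    A toy passive-fingerprinting heuristic gives a feel for the technique: initial TTL and TCP window size in a SYN differ between OS families, so observed values hint at the sender's OS. The table below is illustrative only and is not Nmap's actual signature engine or database.

```python
SIGNATURES = [
    # (initial TTL, typical SYN window size, OS guess) -- illustrative values
    (128, 8192,  "Windows"),
    (64,  29200, "Linux 3.x+"),
    (64,  65535, "FreeBSD / macOS"),
]

def nearest_initial_ttl(observed_ttl: int) -> int:
    """Round an observed TTL up to the nearest common initial value,
    since each router hop decrements TTL by one."""
    for initial in (32, 64, 128, 255):
        if observed_ttl <= initial:
            return initial
    return 255

def guess_os(ttl: int, window: int) -> str:
    initial = nearest_initial_ttl(ttl)
    for sig_ttl, sig_window, name in SIGNATURES:
        if initial == sig_ttl and window == sig_window:
            return name
    return "unknown"

print(guess_os(ttl=121, window=8192))   # a SYN observed a few hops from the sender
```

    Real fingerprinting engines such as Nmap's probe many more fields (TCP options ordering, IP ID behavior, responses to crafted packets), which is what makes a new OS release like Windows 10 Technical Preview worth profiling.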

  • Security Risk Communication Tools Masters
    by Andrew Baze - September 16, 2015 in Management & Leadership

    The effective communication of risks is a serious challenge faced by every security risk management professional in today's dynamic cybersecurity environment. Business executives expect communication in their language, focusing on financial gain, risk, or loss. Security professionals often speak in technical terms, describing threats or vulnerabilities in the context of confidentiality, integrity and availability. A key challenge is to translate common security metrics into risk statements using the language of business so that executives with limited security knowledge can make the best, risk-informed decisions. One of the reasons security risk management is a unique challenge is because the language of security is often relatively technical. An in-depth security discussion often requires a level of engineering understanding that one should not generally expect of executives. It is the responsibility of the security risk professional to translate relevant risk metrics, details, and descriptions into the language of their business leaders, whose understanding could directly affect the future of the business.

  • NERC CIP Patch Management and Cisco IOS Trains Masters
    by Aaron Prazan - September 14, 2015 in Compliance

    NERC CIP Version 5 is challenging many organizations with mandatory patch management requirements. The requirements are intended to be general for any managed system with a defined source for patches or security updates. However, the picture gets muddier for Cisco network devices, because the vendor issues frequent new versions of the operating system along multiple release trains, not patches to any static version. In addition, the proprietary SCADA systems to which NERC requirements apply do not lend themselves to frequent patching. This paper will describe the patching requirements under NERC CIP and propose a set of processes an entity using such devices in a tightly controlled SCADA control system might use to satisfy them.

  • Combatting Cyber Risks in the Supply Chain Analyst Paper
    by Dave Shackleford - September 9, 2015 

    By some estimates, up to 80% of breaches may originate in the supply chain. Read this paper to get some guidance on best practices to protect your organization from vulnerabilities introduced by your vendors and suppliers.

  • A Network Analysis of a Web Server Compromise Masters
    by Kiel Wadner - September 8, 2015 in Forensics

    Through the analysis of a known scenario, the reader will be given the opportunity to explore a website being compromised. From the initial reconnaissance to gaining root access, each step is viewed at the network level. The benefit of a known scenario is that assumptions about the attacker's reasons are avoided, allowing focus to remain on the technical details of the attack. Steps such as file extraction, timing analysis and reverse engineering an encrypted C2 channel are covered.

  • Retail Security: Third-Party Interaction Analyst Paper
    by Eric Cole, PhD - September 4, 2015 
  • The Sliding Scale of Cyber Security Analyst Paper
    by Robert M. Lee - September 1, 2015 

    The Sliding Scale of Cyber Security is a model for providing a nuanced discussion to the categories of actions and investments that contribute to cyber security.

  • A Proactive Approach to Incident Response Analyst Paper
    by Jake Williams - August 31, 2015 
  • Detecting and Preventing Attacks Earlier in the Kill Chain Masters
    by Chris Velazquez - August 31, 2015 in Getting Started/InfoSec

    Most organizations place a strong focus on intrusion prevention technologies and not enough effort into detective technologies. Prevention of malicious attacks is ideal, but detection is mandatory in combatting cyber threats. Security vendors will only provide blocking signatures when there is a near zero false-positive rate. Because of this, there are signatures that are not implemented, resulting in false negatives from one's security devices. This paper provides a look at tools that can be used to improve the detection of attackers at every phase of their attack. The intelligence learned from these attacks allows one to defend against these known attack vectors. This paper will look at a variety of open-source network IDS capabilities and other analysis tools to look at preventing and detecting attacks earlier in the cyber kill chain.

  • The Fall of SS7 How Can the Critical Security Controls Help? Masters
    by Hassan Mourad - August 31, 2015 in Critical Controls

    For decades, the security of one of the fundamental protocols in telecommunications networks, Signaling System No. 7 (SS7), has been solely based on the mutual trust between the interconnecting operators. Operators relied on their trust in other operators to play by the rules, and the SS7 network has been regarded as a closed trusted network. This notion of trust and security has recently changed after several security researchers announced major vulnerabilities in the SS7 protocol that threaten users' privacy and can lead to user location tracking, fraud, denial of service, or even call interception. In this paper we will discuss each individual attack and examine the possibility of using the critical security controls to protect against such attacks and enhance the security of SS7 interconnections.

  • Breaking the Ice: Gaining Initial Access Masters
    by Phillip Bosco - August 28, 2015 in Best Practices

    While companies are spending an increasing amount of resources on security equipment, attackers are still successful at finding ways to breach networks. This is a compounded problem with many moving parts, due to misinformation within the security industry and companies placing focus on areas of security that yield unimpressive results. A company cannot properly defend and protect against what it does not adequately understand, which tends to be a misunderstanding of its own security defense systems and relevant attacks that cyber criminals commonly use today. These misunderstandings result in attackers bypassing even the most seemingly robust security systems using the simplest methods. The author will outline the common misconceptions within the security industry that ultimately lead to insecure networks. Such misconceptions include a company's misallocation of its security budget, as well as the controversies regarding which methods are most effective at fending off an attacker. Common attack vectors and misconfigurations that are devastating, but are highly preventable, are also detailed.

  • Using Hardware-Enabled Trusted Crypto to Thwart Advanced Threats Analyst Paper
    by John Pescatore - August 28, 2015 
  • Protecting Third Party Applications with RASP Infographic Analyst Paper
    by - August 27, 2015 
  • Deployment of a Flexible Malware Sandbox Environment Using Open Source Software by Jose Ortiz - August 24, 2015 in Incident Handling

    The identification and analysis of malware is one of the many tasks performed by incident handlers. Only a small number of commercial entities provide the technology capable of automating this. Most times these offerings are beyond the reach of small organizations due to the high costs associated with licensing and maintenance. One open source alternative is Cuckoo Sandbox. It is a free software project licensed under GNU GPLv3. It allows the user to analyze and collect data on suspected pieces of malware. The framework installation requires careful configuration by an experienced Linux administrator. The accepted method of deployment is to follow the prescribed steps and test the application until it works. Attempting to scale the sandbox environment beyond a few virtual machines becomes a complicated process due to the maintenance required for multiple Windows configurations. By using techniques borrowed from DevOps methods, a small team of incident handlers can create a sandbox environment that is not only repeatable and consistent, but also scalable. The user can create multiple template profiles, which allow for flexible testing.

  • Preventing data leakage: A risk based approach for controlled use of administrative and access privileges Masters
    by Christoph Eckstein - August 24, 2015 in Data Loss Prevention

    Organizations invest resources to protect their confidential information and intellectual property by trying to prevent data leakage or data loss. They adopt policies and implement technical controls to stop the loss and disclosure of sensitive information by outside attackers as well as inadvertent and malicious insiders. They follow best practices like the Critical Security Controls, specifically Control 12 (Controlled Use of Administrative Privileges) and Control 17 (Data Protection), to prevent the unauthorized leakage and disclosure of sensitive information. One type of data leakage prevention control is an endpoint protection solution that stops file transfers to USB storage devices or file uploads to public websites. However, the larger and more complex the organization, the more users may be granted exceptions to these policies and controls in order for them to be able to fulfill their job-related tasks. The approval of these exceptions is often solely based on the business need for the individual user. This raises the question: how does approving an exception influence the risk of data leakage for an organization? What is the specific data leakage risk for granting an individual user a certain exception? This paper presents a new approach to risk based exception management, which will allow organizations to grant exceptions based on inherent data leakage risk. First, this paper introduces a concept for evaluating and categorizing users based on their access to sensitive information. Then in the second step, a ruleset is defined for granting exceptions based on the categorization of users, which enables individual approvers to make informed decisions regarding exception requests. The overall objective is to lower the data leakage risk for organizations by controlling and limiting exceptions where the access, and thereby the potential loss of information, is the highest.
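    The categorize-then-gate idea described above can be sketched in a few lines. Category names, sensitivity levels, and the approval threshold below are illustrative assumptions, not the paper's actual ruleset.

```python
# Sensitivity classes a user may have access to, from least to most sensitive.
ACCESS_SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

# Rule: exceptions may be granted routinely at or below this level; above it,
# the request needs explicit risk acceptance by management.
AUTO_APPROVE_THRESHOLD = 1

def user_category(accessible_classes):
    """A user's category is the highest sensitivity class they can access."""
    return max(ACCESS_SENSITIVITY[c] for c in accessible_classes)

def exception_decision(accessible_classes):
    level = user_category(accessible_classes)
    if level <= AUTO_APPROVE_THRESHOLD:
        return "grant"
    return "escalate for management risk acceptance"

print(exception_decision(["public", "internal"]))                 # grant
print(exception_decision(["internal", "confidential", "secret"])) # escalate for management risk acceptance
```

    The benefit of an explicit ruleset like this is that approvers no longer judge each request ad hoc; the decision follows from the requester's data access, which is the paper's stated objective.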

  • What Companies need to consider for e-Discovery by Thomas Vines - August 24, 2015 in Legal Issues

    Within the legal environment, Discovery is the process of identifying, locating, preserving, securing, collecting, preparing, reviewing, and producing facts, information, and materials for the purpose of producing/obtaining evidence for utilization in the legal process. Electronic Discovery (e-Discovery) is an extension of these processes into the digital environment and Electronically Stored Information (ESI). Legal departments are ill-prepared to deal with the digital environment of a business. Increasingly they are turning to the company's Information Technology (IT) department in order to identify, locate, preserve, and collect ESI. This is not the break/fix work that is typical in IT operations. This is a new area of Data Governance and Records Information Management. This paper explores the relationships between Executive Management, Legal, Risk Management, IT, and Security in fulfilling the demands and obligations for defensible e-Discovery. This analysis includes a discussion of the Electronic Discovery Reference Model (EDRM) and its integration with the Information Governance Reference Model (IGRM).

  • Paying Attention to Critical Controls Masters
    by Edward Zamora - August 21, 2015 in Critical Controls

    International organizations such as the Australian DSD, the European Commission and the US NSA have developed their lists of top mitigations and actions they consider necessary for organizations and governments to implement. It has been further established by the international information security community that the twenty critical security controls are the top relevant guidelines for implementing and achieving greater security. Many of the controls require the deployment and installation of security software. But is installing software all there is to it? Will an organization be better defended by buying lots of security products? In one particular use case, attackers were able to break through the network defenses of an organization that implemented many of the security controls but did not do so properly. Under a false sense of security, senior leadership woke up to some bad news when they learned that gigabytes of data were stolen from the organization's network after controls were in place. The implementation of security controls should be done with careful planning and attention to detail. This paper covers what the attackers did to circumvent the controls in place in the organization, how the organization could have implemented the critical controls properly to prevent this compromise, and what an organization needs to do to avoid this pitfall.

All papers are copyrighted. No re-posting or distribution of papers is permitted.

Masters - This paper was created by a SANS Technology Institute student as part of their Master's curriculum.