Reading Room

More than 75,000 unique visitors read papers in the Reading Room every month and it has become the starting point for exploration of topics ranging from SCADA to wireless security, from firewalls to intrusion detection. The SANS Reading Room features over 2,600 original computer security white papers in 100 different categories.

Latest 25 Papers Added to the Reading Room

  • Active Defense via a Labyrinth of Deception by Nathaniel Quist - December 5, 2016 in Active Defense

    A network baseline allows for the identification of malicious activity in real time. However, a baseline requires that every listed action is known and accounted for, a nearly impossible task in any production environment due to an ever-changing application footprint, system and application updates, changing project requirements, and, not least of all, unpredictable user behavior. Each obstacle presents a significant challenge to developing and maintaining an accurate, false-positive-free network baseline. To surmount these hurdles, network architects need to design a network free from continuous change, including changing company requirements, untested system or application updates, and the presence of unpredictable users. Creating a static, never-changing environment is the goal. However, this completely removes the functionality of a production network. Or does it? Within this paper, I will detail how this type of static environment, referred to as the Labyrinth, can be placed in front of a production environment and provide real-time defensive measures against hostile and dispersed attacks from both human actors and automated machines. I expect to prove the Labyrinth is capable of detecting changes in its environment in real time. It will provide a listing of dynamic defensive capabilities, such as identifying attacking IP addresses, rogue-process start commands, modifications to registry values, and alterations in system memory, and it will record an attacker's tactics, techniques, and procedures. At the same time, the Labyrinth will add these values to a block list, protecting the production network lying behind it. Successful accomplishment of these goals will prove the viability and sustainability of a Labyrinth in defending network environments (Revelle, 2011).
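
The block-list feedback loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the class and field names are assumptions:

```python
class Labyrinth:
    def __init__(self, decoy_hosts):
        self.decoy_hosts = set(decoy_hosts)  # the static, never-changing decoys
        self.block_list = set()              # attacker IPs fed to the perimeter

    def observe(self, src_ip, dst_ip):
        """Record one connection; any source touching a decoy is treated as
        hostile, since legitimate traffic has no reason to reach the Labyrinth."""
        if dst_ip in self.decoy_hosts:
            self.block_list.add(src_ip)

lab = Labyrinth(decoy_hosts=["10.9.0.5", "10.9.0.6"])
lab.observe("203.0.113.7", "10.9.0.5")    # probe against a decoy host
lab.observe("198.51.100.2", "10.1.0.10")  # ordinary production traffic
```

Because the decoy environment never changes, any observed interaction with it is, by construction, anomalous, which is what makes the real-time block-list claim plausible.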


  • Next Generation of Privacy in Europe and the Impact on Information Security: Complying with the GDPR by Edward Yuwono - December 5, 2016 in Legal Issues

    Human rights have a strong place within Europe, part of this includes the fundamental right to privacy. Over the years, individual privacy has strengthened through various European directives. With the evolution of privacy continuing in Europe through the release of the General Data Protection Regulation (GDPR), how will the latest iteration of European Union (EU) regulation affect organisations and what will information security leaders need to do to meet this change? This paper will explore the evolution of privacy in Europe, the objectives and changes this iteration of EU privacy regulation will provide, what challenges organisations will experience, and how information security could be leveraged to satisfy the regulation.


  • A Black-Box Approach to Embedded Systems Vulnerability Assessment by Michael Horkan - December 5, 2016 in Security Basics, Risk Management

    Vulnerability assessment of embedded systems is becoming more important due to security needs of the ICS/SCADA environment as well as the emergence of the Internet of Things (IoT). Often, these assessments are left to test engineers without intimate knowledge of the device's design, no access to firmware source or tools to debug the device while testing. This gold paper will describe a test lab black-box approach to evaluating an embedded device's security profile and possible vulnerabilities. Open-source tools such as Burp Suite and python scripts based on the Sulley Fuzzing Framework will be employed and described. The health status of the device under test will be monitored remotely over a network connection. I include a discussion of an IoT test platform, implemented for Raspberry Pi, and how to approach the evaluation of IoT using this device as an example.
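
The remote health monitoring the abstract mentions can be reduced to a liveness probe run between fuzz cases. This sketch assumes the device under test exposes a TCP service; host and port values are placeholders:

```python
import socket

def device_alive(host, port, timeout=2.0):
    """Return True if the device under test still accepts TCP connections.
    A crude liveness check of the kind run between fuzzing test cases to
    detect when a malformed input has crashed or hung the device."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A fuzzing harness would call this after each test case and log the last input sent whenever the probe fails.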


  • Insider Threats and the Need for Fast and Directed Response Analyst Paper
    by Dr. Eric Cole - December 1, 2016 in Threats/Vulnerabilities

    As breaches continue to cause significant damage to organizations, security consciousness is shifting from traditional perimeter defense to a holistic understanding of what is causing the damage and where organizations are exposed. Although many attacks are from an external source, attacks from within often cause the most damage. This report looks at how and why insider attacks occur and their implications.


  • Node Router Sensors: What just happened? by Kim Cary - November 22, 2016 in Incident Handling, Logging Technology and Techniques, System Administration

    When an airliner crashes, one of the most important tasks is the recovery of the flight recorder or black box. This device gives precise & objective information about what happened and when before the crash. When an information security incident occurs on a network, it is equally important to have access to precise information about what happened to the victim machine and what it did after any compromise. A network of devices can be designed, economically constructed and managed to automatically capture and make available this type of data to information security incident handlers. In any environment, this complete record of network data comes with legal and ethical concerns regarding its use. Proper technical, legal and ethical operation must be baked into the design and operational procedures for devices that capture information on any network. These considerations are particularly necessary on a college campus, where such operations are subject to public discussion. This paper details the benefits, designs, operational procedures and controls and sample results of the use of "Node Router Sensors" in solving information security incidents on a busy college network.


  • A Checklist for Audit of Docker Containers by Alyssa Robinson - November 22, 2016 in Auditing & Assessment

    Docker and other container technologies are increasingly popular methods for deploying applications in DevOps environments, due to advantages in portability, efficiency in resource sharing and speed of deployment. The very properties that make Docker containers useful, however, can pose challenges for audit, and the security capabilities and best practices are changing rapidly. As adoption of this technology grows, it is, therefore, necessary to create a standardized checklist for audit of Dockerized environments based on the latest tools and recommendations.
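
A single item from such a checklist can be automated against `docker inspect` output. This is a minimal sketch covering three common findings; real checklists (e.g. the CIS Docker Benchmark) cover far more:

```python
import json

def audit_container(inspect_output):
    """Flag a few common audit findings from one container's `docker inspect`
    JSON. Accepts either the parsed dict or the raw JSON string."""
    cfg = json.loads(inspect_output) if isinstance(inspect_output, str) else inspect_output
    host = cfg.get("HostConfig", {})
    findings = []
    if host.get("Privileged"):
        findings.append("container runs privileged")
    if host.get("NetworkMode") == "host":
        findings.append("container shares host network namespace")
    for cap in host.get("CapAdd") or []:
        findings.append("extra capability added: %s" % cap)
    return findings
```

An auditor would run this over `docker inspect $(docker ps -q)` and treat a non-empty result as an exception to investigate.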


  • Security Assurance of Docker Containers by Stefan Winkle - November 22, 2016 in Information Assurance, Cloud Computing, System Administration

    With recent movements like DevOps and the shift toward application security as a service, the IT industry is in the middle of substantial changes in how software is developed and deployed. In the infrastructure space, we see the uptake of lightweight container technology, while application architectures move toward distributed microservices. There has been a recent explosion in the popularity of package managers and distributors like OneGet, NPM, RubyGems and PyPI. More and more, software development depends on small, reusable components written by many different developers and often distributed by infrastructure outside our control. In the midst of all this, we often find application containers like Docker, LXC, and Rocket used to compartmentalize software components. The Notary project, recently introduced in Docker, is built on the assumption that the software distribution pipeline can no longer be trusted. Notary attempts to protect against attacks on the software distribution pipeline by associating trust and separation of duties with Docker containers. In this paper, we explore the Notary service and take a look at security testing of Docker containers.
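
At the bottom of any content-trust scheme sits a digest check: the consumer recomputes the artifact's hash and compares it to a signed value. Notary itself signs TUF metadata rather than raw blobs, so this sketch only illustrates the underlying digest comparison:

```python
import hashlib

def verify_digest(content, expected_digest):
    """Content-addressed verification of the kind Docker image pulls rely on:
    recompute the sha256 digest of the received bytes and compare it to the
    trusted, signed value (format: 'sha256:<hex>')."""
    actual = "sha256:" + hashlib.sha256(content).hexdigest()
    return actual == expected_digest
```

If the distribution pipeline tampers with the bytes in transit, the recomputed digest no longer matches and the artifact is rejected.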


  • BGP Hijinks and Hijacks - Incident Response When Your Backbone Is Your Enemy by Tim Collyer - November 21, 2016 in Incident Handling

    The Border Gateway Protocol (BGP) is used to route packets across the Internet, usually at the level of the Internet backbone where Internet Service Providers (ISPs) pass traffic amongst themselves. Unfortunately, BGP was not designed with security in mind, like many of the protocols used in modern networks such as the Internet. Lack of security within BGP means that traffic is susceptible to misdirection and manipulation through either misconfiguration or malicious intent. Among the traffic manipulation possible within BGP routing is Autonomous System (AS) path injection, in which a new router can insert itself into the routing path of traffic. This can create a man-in-the-middle condition if the path injection is malicious in nature. Differentiation between a malicious incident and mere misconfiguration can be extremely challenging. Even more difficult for an affected company is to conduct incident response during a BGP-related incident. This paper explores the incident response options currently available to security teams to prevent, detect, and where possible, respond should a BGP incident arise.
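
One detection option available to a security team is monitoring route announcements for unexpected origin ASes, the classic symptom of an origin hijack. A minimal sketch, assuming announcements arrive as (prefix, AS-path) tuples from a route collector feed:

```python
def origin_changed(expected_origins, announcements):
    """Return announcements whose origin AS (the last AS in the path) is not
    in the expected set for that prefix. `expected_origins` maps prefix ->
    set of legitimate origin ASNs; unknown prefixes are not flagged."""
    suspicious = []
    for prefix, as_path in announcements:
        origin = as_path[-1]
        if origin not in expected_origins.get(prefix, {origin}):
            suspicious.append((prefix, origin))
    return suspicious
```

This catches origin hijacks but not path injections that preserve the origin, which is why the paper's point about distinguishing malice from misconfiguration remains hard.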


  • Reducing Attack Surface: SANS' Second Survey on Continuous Monitoring Programs Analyst Paper
    by Barbara Filkins - November 14, 2016 in Critical Controls, Management & Leadership

    Continuous monitoring is not a single activity. Rather, it is a set of activities, tools and processes (asset and configuration management, host and network inventories, and continuous vulnerability scanning) that must be integrated and automated all the way down to the remediation workflow. Although CM is shifting focus and slowly improving, it still has a way to go to attain the maturity needed to become a critical part of an organization's business strategy.


  • Auditing Windows installed software through command line scripts by Jonathan Risto - November 14, 2016 in Auditing & Assessment, Critical Controls

    The 20 Critical Controls provide guidance on managing and securing our networks. The second control states there should be a software inventory of the products installed on all devices within the infrastructure. This paper enables the auditor to compare Windows system baseline information against the currently installed software configuration. The command line tools utilized will be discussed, and scripts are provided to simplify and automate these tasks.
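
The core of such a comparison is a set difference between the baseline and the current inventory. A minimal sketch, assuming the inventories have already been collected as lists of product names (e.g. parsed from `wmic product get name` output on Windows):

```python
def inventory_drift(baseline, current):
    """Compare a baseline software inventory against the currently installed
    list, in the spirit of Critical Control 2. Returns products that appeared
    since the baseline and products that disappeared from it."""
    base, cur = set(baseline), set(current)
    return {"added": sorted(cur - base), "removed": sorted(base - cur)}
```

Anything in `added` is unauthorized until proven otherwise; anything in `removed` may indicate tampering with required software.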


  • Network Inspection of Duplicate Packets by Randy Devlin - November 11, 2016 in Intrusion Detection, Intrusion Prevention, IPS

    Network Intrusion Analysis enables a security analyst to review network traffic for protocol conformity and anomalous behavior. The analyst's goal is to detect network intrusion activity in near-real time. The detection provides details as to who the attackers are, the attack type, and potential remediation responses. Is it possible that a network security stack could render the analyst "blind" to detecting intrusions? This paper will review architecture, traffic flow, and inspection processes. Architecture review validates proper sensor placement for inspection. Traffic flow analyzes sources and destinations, approved applications, and known traffic patterns. Inspection process evaluates protocols and packet specific details. The combination of these activities can reveal scenarios that potentially result in limitations of network security inspection and analysis.
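
Duplicate packets, e.g. the same frame delivered from both sides of a tap or re-emitted by an inline device, can be identified by hashing raw frames. A minimal sketch under that assumption:

```python
import hashlib

def find_duplicates(packets):
    """Return the indices of packets whose raw bytes have been seen earlier
    in the capture. `packets` is an iterable of bytes objects; duplicates
    are detected by sha256 digest of the full frame."""
    seen, dupes = set(), []
    for i, pkt in enumerate(packets):
        digest = hashlib.sha256(pkt).hexdigest()
        if digest in seen:
            dupes.append(i)
        else:
            seen.add(digest)
    return dupes
```

A high duplicate rate localized to one sensor is itself a finding about architecture and tap placement, not just noise.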


  • Forcepoint Review: Effective Measure of Defense Analyst Paper
    by Eric Cole, PhD - November 9, 2016 in Intrusion Detection, Firewalls & Perimeter Protection, Intrusion Prevention

    Effective security is all about the quality of the solution, not the quantity of products. Indeed, buying more products can make the problem worse. All of the major breaches over the last several years have had one thing in common: Multiple products were issuing alerts, but there were too many alerts and not enough people charged with monitoring and responding to them. When that is the case, putting more products in place spreads current resources even thinner--the problem gets worse, not better. This paper explains the advantages of an integrated defense-in-depth approach to security and looks at how Forcepoint's integrated solution suite meets the needs of such an approach.


  • The Age of Encryption by Wes Whitteker - November 7, 2016 in Encryption & VPNs

    Over the last few years, there has been an increasing movement toward encrypting Internet communication. Though this movement increases the confidentiality of transmitted information, it also severely limits the ability of security tools to analyze Internet traffic for malicious content. This paper investigates the growth of encrypted Internet traffic (i.e. HTTPS) and its impact on Cybersecurity. This paper also proposes an open source solution for decrypting and inspecting Internet traffic accommodating IPv4 and v6 for both home and small-to-medium sized business (SMB) use.


  • Implementing Full Packet Capture by Matt Koch - November 7, 2016 in Forensics

    Full Packet Capture (FPC) provides a network defender an after-the-fact investigative capability that other security tools cannot provide. Uses include capturing malware samples, network exploits and determining if data exfiltration has occurred. Full packet captures are a valuable troubleshooting tool for operations and security teams alike. Successful implementation requires an understanding of organization-specific requirements, capacity planning, and delivery of unaltered network traffic to the packet capture system.
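
The capacity-planning step mentioned above comes down to simple arithmetic over link speed, utilization, and retention. A sketch with illustrative figures (the numbers are not from the paper):

```python
def capture_storage_tb(link_mbps, utilization, retention_days):
    """Rough storage estimate for a full packet capture system: sustained
    traffic volume over the retention window, in decimal terabytes.
    `utilization` is the average fraction of link capacity in use."""
    bytes_per_day = link_mbps * 1_000_000 / 8 * utilization * 86_400
    return bytes_per_day * retention_days / 1e12

# e.g. a 1 Gbps link at 30% average utilization, retained for 7 days
estimate = capture_storage_tb(1000, 0.3, 7)
```

Real deployments add headroom for bursts and indexing overhead on top of this baseline figure.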


  • Triaging the Enterprise for Application Security Assessments by Stephen Deck - November 4, 2016 in Auditing & Assessment, Critical Controls, Penetration Testing

    Conducting a full array of security tests on all applications in an enterprise may be infeasible due to both time and cost. According to the Center for Internet Security, the purpose of application specific and penetration testing is to discover previously unknown vulnerabilities and security gaps within the enterprise. These activities are only warranted after an organization attains significant security maturity, which results in a large backlog of systems that need testing. When organizations finally undertake the efforts of penetration testing and application security, it can be difficult to choose where to begin. Computing environments are often filled with hundreds or thousands of different systems to test and each test can be long and costly. At this point in the testing process, little information is available about an application beyond the computers involved, the owners, data classification, and the extent to which the system is exposed. With so few variables, many systems are likely to have equal priority. This paper suggests a battery of technical checks that testers can quickly perform to stratify the vast array of applications that exist in the enterprise ecosystem. This process allows the security team to focus efforts on the riskiest systems first.
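
The stratification idea can be sketched as a weighted score over the few variables available early. The weights and field names here are illustrative assumptions, not the paper's actual battery of checks:

```python
def triage_rank(apps):
    """Rank applications for assessment by a simple risk score built from
    exposure, data sensitivity, and the number of quick technical checks
    failed. Each app is a dict; missing fields default to zero."""
    weights = {"internet_facing": 3, "sensitive_data": 2, "failed_checks": 1}
    def score(app):
        return sum(w * app.get(field, 0) for field, w in weights.items())
    return sorted(apps, key=score, reverse=True)
```

The point of the quick checks is exactly to break the ties the paper describes: with only owners and data classification, too many systems score equally.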


  • Out with the Old, In with the New: Replacing Traditional Antivirus Analyst Paper
    by Barbara Filkins - November 2, 2016 in Clients and Endpoints, Firewalls & Perimeter Protection

    Research over the past 10 years indicates that traditional antivirus products are rarely successful in detecting smart malware, unknown malware and malware-less attacks. This doesn't mean that antivirus is "dead." Instead, antivirus is growing up. Today, organizations look to spend their antivirus budget on replacing current solutions with next-generation antivirus (NGAV) platforms that can stop modern attacks. This paper provides a guide to evaluating NGAV solutions.


  • Security in a Converging IT/OT World Analyst Paper
    by Bengt Gregory-Brown and Derek Harp - November 1, 2016 

    In this paper we look at the challenges in securing ICS environments and recommendations for effective ICS security. OT cyber security is a relatively young field with few experts, but a great deal can be judiciously drawn from IT experience. The fundamentals are the same: controlling access to devices and applications; monitoring networks to identify potential issues and direct appropriate responsive action; oversight and periodic reviews of controls and their effectiveness; securing the supply chain; and securing the human factor through awareness training. It is in the design and application of these basics to the particular considerations and technical nature of control systems and process control networks (PCNs) that things diverge the most, and it is here that we will focus.


  • Detecting Penetration Testers on a Windows Network with Splunk Masters
    by Fred Speece - October 31, 2016 in Logging Technology and Techniques

    Through data collection, reports, and alerts, an InfoSec team can have a better idea of what penetration testers are doing and, hopefully, in turn stop real attackers who may get onto their network. This paper discusses the configuration and setup of those alerts and the logging behind them. It also covers the thought process behind each alert and the attack(s) it is trying to defend against. If an InfoSec department picked up this paper before their first penetration test, they would have better visibility into their network and could alert on possible changes that an adversary might make. Splunk should not alert on everything, but it should alert on behavior that is abnormal. This paper targets a Windows-majority network with Active Directory in an organization with an immature security posture, using Splunk as its SIEM.
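
The "alert on abnormal behavior" principle can be illustrated outside Splunk with a small sketch: flag accounts that log on to more distinct hosts than a threshold allows, a common sign of lateral movement. The event format and threshold are assumptions:

```python
from collections import defaultdict

def abnormal_logons(events, max_hosts=3):
    """Return accounts that logged on to more than `max_hosts` distinct
    hosts. `events` is an iterable of (account, host) pairs, e.g. parsed
    from Windows 4624 logon events forwarded to the SIEM."""
    hosts = defaultdict(set)
    for account, host in events:
        hosts[account].add(host)
    return sorted(a for a, h in hosts.items() if len(h) > max_hosts)
```

In Splunk the same question would be asked with a scheduled search over logon events grouped by account; the logic, count distinct hosts per account and compare to a baseline, is identical.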


  • Keys to Effective Anomaly Detection by Matt Bromiley - October 25, 2016 in Data Protection, Hackers, Incident Handling

    Simply put, an anomaly is something that seems abnormal or doesn't fit within an environment. A car with five driving wheels would be an anomaly. In the context of an enterprise network, an anomaly is very much the same: something that does not fit or is out of place. While anomalies in an enterprise network may be indicative of a configuration fault, they are often evidence of something much more worrisome: a malicious presence on the network.


  • The Information We Seek by Jose Ramos - October 25, 2016 in Information Assurance, Data Loss Prevention, Hackers

    Whether you are performing a penetration test, conducting an investigation, or closing in on a target as a skilled attacker, information gathering is the foundation needed to carry out the assessment. Having the right information paves the way for proper enumeration and simplifies attack strategies against a given target. Throughout this paper, we will walk through some strategies used to identify information on both people and networks. Some people claim that all data can be found using Google's search engine; but can third-party tools found in Linux security distributions such as Kali Linux outperform the search engine giant? Maltego and The Harvester yield a wealth of information, but will the results be enough to identify a target? The right tool for the right job is essential when working with any project in life. Let's take a journey through the information gathering process to determine if there is a one-size-fits-all tool, or if a multi-tool approach is needed to gather the essential information on a given target. We will compare and contrast many of the industry tools to determine the proper tool or tools needed to perform an adequate information gathering assessment.


  • Intrusion Detection Through Relationship Analysis by Patrick Neise - October 24, 2016 in Intrusion Detection

    With the average time to detection of a network intrusion in enterprise networks assessed to be 6-8 months, network defenders require additional tools and techniques to shorten detection time. Perimeter, endpoint, and network traffic detection methods today are mainly focused on detecting individual incidents while security incident and event management (SIEM) products are then used to correlate the isolated events. Although proven to be able to detect network intrusions, these methods can be resource intensive in both time and personnel. Through the use of network flows and graph database technologies, analysts can rapidly gain insight into which hosts are communicating with each other and identify abnormal behavior such as a single client machine communicating with other clients via Server Message Block (SMB). Combining the power of tools such as Bro, a network analysis framework, and neo4j, a native graph database that is built to examine data and its relationships, rapid detection of anomalous behavior within the network becomes possible. This paper will identify the tools and techniques necessary to extract relevant network information, create the data model within a graph database, and query the resulting data to identify potential malicious activity.
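
The relationship query at the heart of this approach, "which client machines talk to other clients over SMB?", can be sketched without a graph database. Flow tuples and IP ranges here are illustrative; in practice Bro/Zeek conn logs would feed the graph and neo4j would answer the same question with a relationship query:

```python
from collections import defaultdict

def client_to_client_smb(flows, clients):
    """Build a simple adjacency map from (src, dst, dst_port) flow tuples and
    return client pairs communicating over SMB (TCP 445), behavior the
    paper flags as a likely sign of lateral movement."""
    graph = defaultdict(set)
    for src, dst, dport in flows:
        graph[src].add((dst, dport))
    return sorted((src, dst) for src, peers in graph.items()
                  for dst, dport in peers
                  if dport == 445 and src in clients and dst in clients)
```

The advantage of a native graph store over this dict-of-sets appears at scale, when the query spans multiple hops of relationships rather than a single edge type.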


  • Getting C-Level Support to Ensure a High-Impact SOC Rollout Analyst Paper
    by John Pescatore - October 24, 2016 

    To security professionals, the need for an effective SOC is obvious. But to organizational management, security is just one of many groups asking for financial and personnel resources. Security leaders who simply promise management that a SOC will provide better security or help the company avoid attacks won't get very far. The security team must define and communicate the business benefits of investing in, establishing and optimizing a SOC over the long term.


  • A Secure Approach to Deploying Wireless Networks by Joseph Matthews - October 19, 2016 in Wireless Access

    Enterprise wireless networks are an important component of modern network architecture. They are required to support mobile devices and provide connectivity to various devices where wired connections are not practical or cost prohibitive. But the missing physical control of the medium does require additional precautions to control access to wireless networks. Most books and papers present the problem and the risks, but do not provide a fully secure solution with examples. The 802.11 standard for wireless networks does offer encryption and authentication methods like WPA. But in an enterprise environment, these controls have to be implemented in a scalable and manageable way. This paper presents a hands-on guide to implementing a secure wireless network in an enterprise environment and provides an example of a tested secure solution.


  • From the Trenches: SANS 2016 Survey on Security and Risk in the Financial Sector Analyst Paper
    by G. Mark Hardy - October 18, 2016 in Alternate Payment Systems, Firewalls & Perimeter Protection

    The financial services industry is under a barrage of ransomware and spearphishing attacks that are rising dramatically. These top two attack vectors rely on the user to click something. Organizations enlist email security monitoring, enhanced security awareness training, endpoint detection and response, and firewalls/IDS/IPS to identify, stop and remediate threats. Yet, their preparedness to defend against attacks isn't showing much improvement. Read on to learn more.


  • Detecting Incidents Using McAfee Products by Lucian Andrei - October 10, 2016 in Active Defense, Incident Handling, Tools

    Modern attacks against computer systems call for a combination of multiple solutions to be prevented and detected. This paper analyzes the capacity of commercial tools, with minimal configuration, to detect threats. Traditionally, companies use antivirus software to protect against malware, and a firewall combined with an IDS to protect against network attacks. This paper analyzes the efficacy of three combinations in withstanding application attacks: antivirus alone, antivirus plus host IDS, and antivirus combined with a host IDS plus application whitelisting. Before running the tests, we predicted that the antivirus would block 20% of the attacks, the HIDS would detect an additional 15%, and McAfee Application Control would protect against at least 50% more of the attacks executed by an average attacker using known exploits without much obfuscation of the payload. The success of defensive commercial tools against attacks will justify the investment a company is required to make.
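
The abstract states its predicted detection rates as additive increments over the same attack set, so the combined prediction is a simple sum:

```python
def combined_detection(av=0.20, hids=0.15, whitelisting=0.50):
    """Combine the abstract's predicted incremental detection rates:
    antivirus blocks 20% of attacks, the host IDS detects a further 15%,
    and application whitelisting another 50%. Since each figure is an
    increment over the previous layers, total coverage is the sum."""
    return av + hids + whitelisting

predicted_total = combined_detection()  # 0.85, i.e. 85% of attacks
```

Note this additive model only holds because the increments are defined as attacks missed by the earlier layers; independent detectors would instead combine as 1 - (1-a)(1-b)(1-c).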


All papers are copyrighted. No re-posting or distribution of papers is permitted.

Masters - This paper was created by a SANS Technology Institute student as part of their Master's curriculum.