Intrusion Detection FAQ: What are some acceptable procedures for documentation and detective work that will result in court-admissible evidence?

Digital information retrieval systems are wonderful things. With hardly any effort at all, it is possible to extract specific data from an archive, manipulate it to solve a problem, and commit the changes back into the archive, all from the same terminal. It is this ability to manipulate data that makes proving a computer attack so difficult.

Should circumstances allow the details of the intrusion to see the light of day in a courtroom, it is reasonable to expect that the defense will do everything in its power to cast the shadow of uncertainty and unreliability over the prosecution. Monitoring and detection methods will be scrutinized for weaknesses, the quality of evidence produced will be called into question, and the actual information presented to the court will be suspect because it may have been tainted either during the intrusion or by its handlers. It should therefore follow intuitively that any procedure for collecting evidence be designed with redundancy and integrity in mind. Convenience is not the issue after an attack; a system which unimpeachably supports the victim's case is. The problem of integrity is managed through careful procedural controls, whereas redundant logging is a purely technical issue solved by the addition of an integrated monitoring infrastructure.

According to Cheswick and Bellovin, "...logfiles are legally classified as hearsay. That is, they are not the oral testimony of a witness. In general, hearsay testimony is not admissible in court under the [United States] Federal Rules of Evidence. However there are a number of exceptions that are often applicable to computer-generated output. The most important such exception covers business records...." They go on to point out that logs should, indeed must, be generated as part of normal business operations in order to be truly credible. Log files are suspect if they are ad hoc concoctions, or if those who generate them do not trust their contents enough to use them in daily transactions.

After interviewing the RCMP, it seems that a top-down approach is preferred. Top-down refers to the analysis path: it begins with the most general question (is this a private or a corporate machine?) and descends to minute details about the physical hardware configuration. The path is designed so that non-technical staff may safely execute large parts of it without compromising the validity of the evidence. The logical location of the machine must first be documented to determine how far traces should go. A single private workstation will present far fewer opportunities than a department server with possibly exploitable trust relationships. Ultimately, each iteration of the path will lead to one machine that could contain evidence of illegal activity.

Corporate machines seem more likely to be abused, so this scenario will be covered more thoroughly; the principles apply just as well to personal systems. If compromised, the company has every right to demand that the affected machines be disconnected. The reasons for this are twofold. First, ascertaining the amount of damage becomes considerably more difficult if the machine is not isolated for examination, and the possibility of destroying evidence becomes a very real risk. Second, the corporation faces a certain amount of liability if it knowingly keeps online machines which were used in a crime, even if the machines were victims. If the attack is determined to be local, cataloguing the machine's dimensions of use (authorized users, typical users, usage patterns, and deviations from the acceptable use policy) will play a very important role in the investigation. If at all possible, anyone with means or motive should be denied further access to that machine; in the interests of productivity, it would be wise to replace it (if possible) with another workstation. The reason for denying access is not to single out a possible attacker (that is what the police are for) but to ensure the integrity of the evidence. Now the analysis paths separate. If the attack succeeded only in achieving a local compromise, the following steps should be taken.
  • Sever all connections with the affected hardware and put the machine in a "safe harbor."
  • Photodocument the system state, especially the screen, error messages, and login times.
  • Photograph the system, showing the configuration including memory, expansion card population, mass storage systems, network cards, etc.
  • Check the NVRAM to establish a hard time reference between real-world and internal time.
  • Pull the hard disk out for safekeeping.
If the compromised machine is a server-class device, the rules change slightly, since it is acknowledged that removal of a server is generally too disruptive to be acceptable. Procedures here include:
  • Retaining all relevant logs. If there is an entry in any file within 24 hours of the event, that file is relevant (a sketch of such a sweep follows this list).
  • Building a snapshot of activity, beginning at least 24 hours before the event.
  • Establishing the deviation from normal activity. That is the core of intrusion detection anyway; here the emphasis is on documenting the anomaly.
  • Recording and examining possible paths of travel. Were any trust relationships exploited? By what route was access gained?
  • Determining whether the affected machine trusts any others on the way out of the local net. If so, the list of machines to be examined has just grown.
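
As a minimal sketch of the first item, assuming a UNIX-like filesystem and that the incident time is known, a short script can sweep a log directory for files touched within 24 hours of the event. The directory, window, and event time below are illustrative placeholders, not recommendations:

    import os, time

    EVENT_TIME = time.time() - 3600        # substitute the real incident time here
    WINDOW = 24 * 60 * 60                  # 24 hours on either side of the event
    LOG_ROOT = "/var/log"                  # hypothetical starting point for the sweep

    relevant = []
    for dirpath, dirnames, filenames in os.walk(LOG_ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtime = os.stat(path).st_mtime
            except OSError:
                continue                   # unreadable or vanished file; note it by hand
            if abs(mtime - EVENT_TIME) <= WINDOW:
                relevant.append((path, time.ctime(mtime)))

    for path, stamp in sorted(relevant):
        print(stamp, path)                 # candidates for the evidence archive

Anything flagged by such a sweep should be copied into the evidence archive described later rather than examined in place.
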
Proper logging is therefore a procedural issue; there must be clear policy (preferably in written form) describing what is 'interesting' and should be recorded, and why. The design, development, installation, and testing of the recorders should be documented in order to prove their 'correctness.'

The 'correctness,' or validity, of the recorders, once designed, then becomes a technical issue. Placement of network sensors for an optimal view of interesting traffic, the hardware and software platforms used to implement the sensors, and the security measures used to ensure the recorder's safety and accuracy are but three things to consider. If a particular recorder is designed to monitor transactions with the outside world, a good location to tap would be between the incoming network feed from the ISP, router or firewall and the central distribution switch. The switch prevents the monitor from wasting time recording trivial traffic such as BOOTP requests and Windows browser elections while affording it a clear view of all packets permitted by the firewall and all packets attempting to leave the secure zone. The hardware should be chosen with support-system failure in mind; support systems include central disk arrays, remote analysis stations, and perhaps even the network itself. A modem configured to dial the operator's pager might be able to warn of a network failure or attack even if primary methods such as e-mail have been corrupted or have fallen off-line. Power may be cleaned and smoothed with an uninterruptible power supply, but if the power fails, the entire network goes down and monitoring is no longer necessary. In fact, with modern redundant power supplies to buildings, power failures longer than 30 seconds are quite rare. Of more concern is a power bump interrupting a disk write and thereby corrupting the logs.

Once a suitable physical recorder is in place, it must be carefully examined and tuned. While recording a 10Mbps Ethernet stream is a trivial task, analysis of that stream is a far more useful and far more difficult assignment. This means that the recorder must include some preprocessor mechanism to quickly summarize network events which may require operator intervention or investigation. A packet sniffer watching for incoming connections on port 79 (the finger port) carrying empty or "@host"-style commands might write an entry to an hourly log. Fake daemons listening on ports with well-known vulnerabilities (IMAP on port 143, for example) could raise a more intrusive alert upon detecting suspicious or hostile activity.
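
A fake daemon of the sort described can be very small. The following sketch is an illustration only, not the implementation used by any particular product; the port, log path, and timeout are assumptions. It listens on the IMAP port, records the peer address and whatever it sends first, and never offers real service (binding to a port below 1024 requires root privileges):

    import socket, time

    PORT = 143                             # IMAP; any well-known vulnerable port will do
    LOGFILE = "/var/log/fake-imapd.log"    # hypothetical log location

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(5)

    while True:
        conn, peer = srv.accept()
        conn.settimeout(10)
        try:
            data = conn.recv(1024)         # capture the first thing the caller sends
        except socket.timeout:
            data = b""
        with open(LOGFILE, "a") as log:
            log.write("%s connection from %s:%d first-data=%r\n"
                      % (time.ctime(), peer[0], peer[1], data))
        conn.close()                       # no banner, no service, just a record

A production version would also raise the more intrusive alert mentioned above (a pager call or a message to the central console) rather than only appending to a file.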

It would be wise to collect the preliminary alerts in a central location as well as to keep hard copies for some reasonable amount of time. Centralization facilitates trend analysis, and hard copy can be used to prove that the electronic analysis is legitimate. Trends to examine should include not only the apparent location of the attackers but also the targets; certain assets may be attacked more frequently, either because of their visibility or because of their importance. An example would be a noticeable number of trite probes (port scans, phf attacks, null fingers, etc.) from big-nsp.net against the mail server. The probes trickle in over the span of a week. Normally this is written off as attributable to a plethora of bored idiots with Internet access. Sometime after the close of business on Friday, small-isp.com is logged sending considerably more hostile traffic. The traceroute tool reveals that small-isp.com connects through popXXX.routerYY.big-nsp.net, and the earlier probes also came from something attached to routerYY.big-nsp.net. This suggests that the attack actually began a week earlier and has been hiding in the background noise. This is where good connection-attempt logs prove useful.
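
One way to make that routerYY correlation concrete is to compare the traceroute paths toward the two source networks and look for shared upstream hops. This sketch simply shells out to the system traceroute, which is assumed to be installed; the hostnames are the fictional ones from the example above:

    import re, subprocess

    def hops(host):
        """Return the set of hop addresses traceroute reports on the way to host."""
        out = subprocess.run(["traceroute", "-n", host],
                             capture_output=True, text=True).stdout
        return set(re.findall(r"\d+\.\d+\.\d+\.\d+", out))

    shared = hops("small-isp.com") & hops("big-nsp.net")
    if shared:
        print("common upstream hops:", ", ".join(sorted(shared)))
    else:
        print("no shared path found; the two sources may be unrelated")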

The handling of data once it is collected is another issue. UN*X-style operating systems are very good about timestamping files with access, modification and inode-change dates. Unfortunately, merely looking at a file counts as an access, and access times are often valuable evidence. Consider a machine that has been compromised. The console errors include attempts to mount "/" from 256.72.304.19 at 0235h local. The file access times for the nfsd, exports and mountd manpages show access times close to 0235h. This could reinforce clues to the attacker's location. If the syslog has been wiped, perhaps a Polaroid of the screen showing this attempt would be useful.
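
Those timestamps can be gathered without disturbing them further, since reading a file's attributes (as opposed to its contents) does not update the access time. A small sketch follows; the manpage paths are placeholders and will vary between systems:

    import os, time

    SUSPECT = ["/usr/share/man/man8/nfsd.8",      # illustrative paths; locate the real
               "/usr/share/man/man5/exports.5",   # manpages with 'man -w' on the system
               "/usr/share/man/man8/mountd.8"]

    for path in SUSPECT:
        try:
            st = os.stat(path)
        except OSError as err:
            print(path, "->", err)
            continue
        print(path)
        print("  accessed:", time.ctime(st.st_atime))
        print("  modified:", time.ctime(st.st_mtime))
        print("  changed: ", time.ctime(st.st_ctime))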

Before reading the logfiles, use tar and gzip to archive them. The archive may be stored on removable media, and logfiles can be extracted from it for perusal. The archives can become quite large, and gzip generally produces better results than the 'compress' command; better compression translates into the ability to store more of the filesystem for analysis. GNU tar correctly preserves timestamps on archival and extraction, which allows the incident response team to compare the filesystem data with the logfiles. It also prevents accidental erasure of critical evidence.
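
A minimal archiving sketch using Python's tarfile module follows; the source directories and the removable-media path are placeholders, and GNU tar on the command line serves equally well. The gzip-compressed archive stores each member's modification time, so timestamps survive extraction:

    import tarfile, time

    SOURCES = ["/var/log", "/var/adm"]                 # hypothetical log locations
    ARCHIVE = "/mnt/removable/evidence-logs.tar.gz"    # removable media mount point

    # "w:gz" writes a gzip-compressed tar archive; directories are added recursively
    with tarfile.open(ARCHIVE, "w:gz") as tar:
        for src in SOURCES:
            tar.add(src)

    # verify the archive is readable before trusting it as the working copy
    with tarfile.open(ARCHIVE, "r:gz") as tar:
        for member in tar.getmembers()[:5]:
            print(member.name, time.ctime(member.mtime))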

The FBI's IPCIS squad recommends that after an intrusion a site "appoint one person to handle potential evidence." This person would "Establish a chain-of-custody" and would actively gather information from the appropriate personnel. The idea is to minimize the number of people who could cause the evidence to be tainted. Furthermore, it is a good idea to "have pre-established points of contact for the General Counsel, Emergency Response Personnel, Law Enforcement, etc..." and to establish a working relationship with these agencies. By doing this before an attack, a site or its Information Security Officer will have established something of a command hierarchy, and these agencies will often be able to provide a few proactive suggestions for risk management.

Physical control over the data must be maintained as well. If possible, the compromised disk systems should be removed, carefully packaged, and sealed with tamper-evident packaging. The evidence controller should sign the package as being committed into evidence and should have this witnessed. A bank's safety deposit box may be a good place to store the drives during the investigation; this forestalls the accusation that the evidence may have been tampered with, and the box's access records can be subpoenaed to prove that the disks have not been touched since the attack. It also prevents accidental loss of evidence. Log files and monitor archives should be burned onto CD-ROM, which gives every analyst a secure working copy and prevents accidental erasure. The more copies of the logs there are in circulation, the harder it is to forge entries in those logs.

Returning to the issue of monitoring systems, a critical legal issue comes to light: is it legal to monitor? Paraphrasing the Canadian Criminal Code:
Every one who, fraudulently and without colour of right, ... by means of an electro-magnetic, acoustic, mechanical or other device, intercepts or causes to be intercepted, directly or indirectly, any function of a computer system, ...is guilty of an indictable offence and liable to imprisonment for a term not exceeding ten years, or is guilty of an offence punishable on summary conviction. (from s. 342.1(1)(b))
The answer then hangs on the definition of "fraudulently and without colour of right." Security administrators may be relieved to know that this act does not prohibit monitoring outright; in fact there are indications that it may be illegal not to monitor a system that, if compromised, could endanger human life. Interviews with the RCMP and the Edmonton Police Service indicate that there is no prohibition against the owner of a system, or its appointed custodians, monitoring the system for security and business reasons, at least not in Edmonton, Alberta, Canada. Once again it must be emphasized that local legal counsel be consulted before attempting to monitor a network. The RCMP recommends that users be notified in writing before monitoring begins and that the login banner state that systems are monitored and that continued use signifies consent to monitoring.
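
As an illustration only, with a placeholder company name and wording that should be reviewed by local counsel before deployment, such a banner might read:

    This system is the property of Example Corp. and is monitored for
    security and business purposes. Continued use of this system
    constitutes consent to such monitoring. Unauthorized use is
    prohibited and may be reported to law enforcement.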

When monitors are actually placed, network topology becomes the final arbiter of what is possible. Some networks may be able to simply monitor the gateway, whereas others should be divided into zones for administrative, political, business, or security reasons, with each zone having its own monitor. The monitor's scope will determine not only its usefulness in evidence collection but also what filters may be applied. Consider this example: a university department uses remote financial and business applications in the administration office. The applications are hosted on a Windows NT server and delivered to Linux or Windows workstations via Citrix WinFrame or MetaFrame. The Citrix protocol is cryptographically secured, and the workstation group has its own firewall and mail and file servers. POP, IMAP, finger and FTP are not permitted in this area. The monitor watching the admin group should therefore be configured to ignore Citrix traffic involving these machines and to alarm on detecting the banned services.

The rest of the department, including staff and grad student offices, a library and a semi-public computer room, is permitted to FTP and finger out. Sundry web, FTP, POP3 and IMAP servers are scattered around the general-purpose network. The monitor is attached to a port on the switch, grabbing a copy of any Ethernet frame attempting to leave the local network. This machine is configured to do traffic-rate analysis and to record end-to-end connections. This permits all machines to be monitored without subjecting the main monitor to X11 traffic and without trading admin office privacy for security. The network administrators plan to reorganize the network in the future, making more use of the switch to prevent sniffers from grabbing internal passwords.
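
The admin-zone filtering policy just described can be expressed compactly. In the sketch below the Citrix ICA port (1494) and the banned-service ports are the customary assignments, and the address prefix standing in for the admin office subnet is purely a placeholder:

    # assumed ports: Citrix ICA 1494; banned services FTP 21, finger 79, POP3 110, IMAP 143
    CITRIX_PORTS = {1494}
    BANNED_PORTS = {21, 79, 110, 143}
    ADMIN_NET = "10.1.2."                  # placeholder prefix for the admin office subnet

    def classify(src, dst, dport):
        """Return the monitor's decision for one TCP connection attempt."""
        admin = src.startswith(ADMIN_NET) or dst.startswith(ADMIN_NET)
        if admin and dport in CITRIX_PORTS:
            return "ignore"                # encrypted WinFrame/MetaFrame sessions
        if admin and dport in BANNED_PORTS:
            return "alarm"                 # services the policy forbids in this zone
        return "record"                    # everything else goes to the connection log

    print(classify("10.1.2.17", "198.51.100.5", 1494))   # -> ignore
    print(classify("198.51.100.5", "10.1.2.17", 143))    # -> alarm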

This hypothetical network has a central syslog host. Most of the machines on the network are configured to keep local copies of their syslogs as well as to send a copy to the loghost. This gives the administrators independent records of what each machine is doing, even if a particular machine is cracked. A useful network monitor configuration consists of a pair of fast Pentiums running OpenBSD 2.4 with up to 20GB of disk and 128MB of memory. One machine runs Network Flight Recorder (NFR), while the other runs SHADOW. NFR permits the administrators to query a database for an arbitrary connection, generate logs and graphs, and set off alarms. SHADOW is similar to NFR; though its interface is not as simple, neither are its resource requirements as demanding. Having two machines online at each 'sensor' adds a further dose of reliability to the system: if one machine is compromised, the second can still trigger alarms and maintain recording capability. Furthermore, it is possible to configure a 'stealth' monitor that is difficult to detect yet can still listen to the network.
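
On machines that can run Python, the local-plus-loghost arrangement can be approximated with the standard logging module; most sites would simply let syslogd forward messages instead, and the hostname and file path here are placeholders:

    import logging, logging.handlers

    log = logging.getLogger("app")
    log.setLevel(logging.INFO)

    # the local copy survives even if the network or the loghost is unavailable
    log.addHandler(logging.FileHandler("/var/log/app-local.log"))

    # a second copy goes to the central loghost over the standard syslog port (UDP 514)
    log.addHandler(logging.handlers.SysLogHandler(address=("loghost", 514)))

    log.info("test message recorded both locally and on the loghost")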

My thanks go to all upon whom I have called in preparing this article. This document would not have been possible without the cooperation of the RCMP, especially the Edmonton Division's Det. P. McLelan.

Chris Kuethe ckuethe@math.ualberta.ca
University of Alberta, Mathematical Sciences

References

Frederick B. Cohen, "Protection and Security on the Information Superhighway." John Wiley & Sons, Inc., New York, 1995.

W. R. Cheswick & S. M. Bellovin, "Firewalls and Internet Security." Addison-Wesley, Inc., Reading, MA, 1994.

Network Flight Recorder, Inc., "NFR Corporate Web Site." Washington, DC, 1999. http://www.nfr.net/

US Navy & SANS Institute, "The CIDER Project." Dahlgren, VA, 1999. http://www.nswc.navy.mil/ISSEC/CID/

Michael A. Geist, "The Canadian Internet Law Resource Page" Ottawa, ON. 1998 http://aix1.uottawa.ca/~geist/cilrp.html

Department of Justice Canada, "Department of Justice: Laws", DoJ Web page Ottawa, ON. 1999 http://canada.justice.gc.ca/Loireg/index_en.html

Copyright Act http://canada.justice.gc.ca/STABLE/EN/Laws/Chap/C/C-42.html

Criminal Code of Canada http://canada.justice.gc.ca/STABLE/EN/Laws/Chap/C/C-46.htm

Note: The interesting parts are sections 326, 327, 342.1 and 430(1.1) of the Criminal Code and section 42 of the Copyright Act.

RCMP, "Computer Crime", RCMP Web Site Ottawa, ON: RCMP 1998 http://www.rcmp-grc.gc.ca/html/cpu-cri.htm

FBI, "Federal Bureau of Investigation Home Page", FBI Web Site Washington DC: FBI 1998 http://www.fbi.gov/programs/ipcis/index.htm
http://www.fbi.gov/nipc/compcrime.htm
http://www.fbi.gov/nipc/index.htm
http://www.fbi.gov/faq/fbifaq.htm#53

United States Criminal Code, Title 18 U.S.C. (Section) 1030 http://www.usdoj.gov/criminal/cybercrime/1030_new.html

--------------------------------------------------------------------------------

Disclaimer: While every effort has been made to ensure the accuracy of the information presented here, this document is not a substitute for competent professional advice. This information is presented as a guide on an "as-is" basis; all warranties of fitness for a particular purpose, either implied or otherwise, are hereby disclaimed. This article was not written by a legal professional. Furthermore, this information is based heavily on the laws in effect in Edmonton, Alberta, Canada, as of May 1999. Competent professional advice concerning local laws should be obtained before relying on these procedures.