IDFAQ: How to Examine a Unix Box for Possible Compromise

Identifying a potentially compromised Unix box is somewhat of an arcane art, though there are some simple things to look for.
  • Examine syslog entries, the process table, and the file systems for "odd" messages, processes, or files. Examples include two inetds running, ssh running as EUID root but not UID root, core files for RPC services in /, new setuid/setgid programs, files growing quickly in size, df output not closely matching du, perfmeter/top/BMC Patrol/SNMP monitors not matching vmstat/ps output, and higher than normal outbound network traffic.
  • Check /etc/passwd and /etc/shadow for accounts that don't belong or that should not have passwords.
  • Check /.rhosts, /etc/hosts.equiv, /.ssh/known_hosts, and ~/.rhosts for new entries that don't belong.
  • If you see anything suspicious, install a sniffer on a second host and watch for connections to/from the suspect host; at the same time, back the machine up for later analysis and evidence. Then contact your local CERT for assistance in examining the other hosts in your network and recovering your site.
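
The setuid/setgid check above can be scripted; here is a minimal sketch (the function name and baseline paths are illustrative, not from the original):

```shell
# list_suid DIR: print all setuid/setgid regular files under DIR,
# staying on one filesystem. Compare the output against a list saved
# when the system was known-good to spot new privileged programs.
list_suid() {
    find "$1" -xdev -type f \( -perm -4000 -o -perm -2000 \) \
        2>/dev/null | sort
}

# Illustrative usage:
#   list_suid / > /mnt/cdrom/suid.baseline        # at install time
#   list_suid / | diff /mnt/cdrom/suid.baseline - # during a check
```
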
Always keep an eye on the unseen trust relationships. Who mounts whom via NFS? Who has whom in their .rhosts, .shosts, or hosts.equiv? Who has a .netrc referencing that host?

Who shares a network segment with that host? These hosts are your first ring of targets to verify; then work outward from there. Typically an attacker doesn't compromise just one host; they hop from host to host, hiding their tracks and keeping as many potential back doors open as possible.
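The per-user trust files mentioned above can be enumerated with find; this is a hedged sketch (the function name is mine, and NFS mounts still need to be checked separately, e.g. with showmount or the exports file):

```shell
# find_trust ROOT: locate per-user trust files (.rhosts, .shosts,
# .netrc) and the system-wide hosts.equiv under ROOT. Every file
# found names hosts that can reach this machine, or hosts this
# machine's users reach with stored credentials.
find_trust() {
    find "$1" -xdev -type f \
        \( -name .rhosts -o -name .shosts -o -name .netrc \
           -o -name hosts.equiv \) -print 2>/dev/null
}
```
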

Scott Kennedy produced this first draft. Send any suggestions for improving it to handler@incidents.org with "Unix Compromise" in the subject line.

George Drake adds:

I would add to the things to look for:
Entries for ordinary files or directories in /dev, especially if they are named to resemble legitimate device entries.

Weird file names in /tmp, /var/tmp, or any other world-writable directory. By weird is meant names such as "..." (three periods). If you find such a name and it is actually a directory, then you almost certainly have many other problems on that system. If you are lucky, it and everything in it is owned by an ordinary user and is "just" an IRC server (including backdoors). But we have also seen this pattern with packet sniffers installed, and if that has happened and there are actual logs of traces, the system has been root compromised and should be quickly secured and reinstalled. (See the CERT recommendations on recovery from a root compromise; at the least, do a full reinstall.)
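
A quick way to sweep a world-writable directory for names like these is a find sketch along the following lines (the patterns are examples only; they will also match some legitimate files, so review the output by hand):

```shell
# odd_names DIR: print entries under DIR whose names begin with a
# run of dots ("...", "..x") or contain a space -- common hiding
# places for intruder directories in /tmp and /var/tmp.
odd_names() {
    find "$1" -xdev \( -name '..?*' -o -name '* *' \) \
        -print 2>/dev/null
}
```
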

[There needs to be appropriate warnings interspersed about some of the nasty booby-traps that intruders may install.]

If the data on a system is at all valuable, and especially if it is not recoverable, just halt the system (as with Stop-A on a Sun). Don't unplug it from the net, don't shut it down; just halt. Then be really careful about how you bring the system back up (not connected to the net, of course, and not into multiuser mode; best from a CD or some other media you are certain has not been tampered with).

The idea of watching the compromised system is tempting, but whether that is a responsible course of action depends critically on the nature of the system. If it is "just" a web server, for example, with well backed-up copies of the site, then fine. If it is "just" one of a bunch of workstations in a public computing area, probably fine. If it is a transaction-processing system, not fine. If it is a system being used for scientific data collection or instrument control, probably not fine.

The remarks about unseen trust relationships can't be emphasized too strongly, but even better is to inspect systems for that sort of problem BEFORE you have a cascade of compromised systems to deal with. The examples given can all be found by running a find command on each system; in practice, you can get a good idea of the extent of your problems by running the find on just a few key systems. Users who believe in .rhosts files usually point them every which way: if machine A has one pointing to machine B, it is a really good bet that machine B has one pointing back at A. There is little excuse, though, for continuing to even allow rlogin or rsh (see the SANS ssh project); those services are nothing but additional doors to let unwanted people into your systems.

The single most important preventive action any sysadmin can take is to compile and run Tripwire against each system NOW and make sure there is a copy of the database somewhere it can't be tampered with, such as on a CD or tape not mounted in the system, or even on a drive that is simply disconnected. We had one case in which a system was entered through a user's account and the intruder attempted to install a rootkit. It was built for a different system, however, so it merely left our system broken. The admin was able to restore the system in a few hours with the help of a (six-month-old!) Tripwire database because he could see exactly what had been tampered with. (This clearly would not have been an acceptable procedure had the intruder actually gained root.) Ideally, Tripwire should be run at least daily and the results mailed or otherwise transferred to one or more other systems, where a responsible person will look at them immediately. Routine use of Tripwire takes some work in keeping the config files up to date as the system changes, but it is far less work than is required to totally reinstall the OS on a badly compromised system.
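
As an illustration, a daily run might look something like the crontab fragment below (the binary path and flags vary by Tripwire version, and the destination address is a placeholder):

```shell
# Hypothetical crontab entry: run the integrity check at 04:00 daily
# and mail the report off-host, so a local intruder cannot quietly
# erase it before anyone reads it.
0 4 * * * /usr/sbin/tripwire --check 2>&1 | mail -s "tripwire report" admin@example.com
```
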

Olaf Schreck adds the following information:

George Drake mentioned the value of Tripwire in detecting modifications to a system, and I agree perfectly *as long as* the Tripwire reference data has been collected on a fresh system that has just been put into place. On the other hand, it is dangerous and stupid to collect Tripwire reference data on a system that has been connected to the Internet for a while, or on a system that you suspect to be compromised -- there's no point in trusting the current system files if you can't compare them to the original ones.

Once in a while I'm asked to "check" a system that "shows unusual behavior", and there's no original Tripwire data to compare against. More than once, I was able to detect an intrusion with a simple 'find / -mtime -3 -print' ("show all files whose contents were modified in the last 3 days"). Of course, this will not detect sophisticated attackers, but it *did* detect script kiddie attacks.

Note that this simple "check" can easily be circumvented by resetting the system time with root privileges. That will show up in the system logs, unless the attacker modifies or deletes the logs (again requiring root privileges). So it's not as tamperproof as Tripwire, but it has value.
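
A hedged refinement (my suggestion, not from the original text): check the inode change time (ctime) as well as mtime, since ctime cannot be back-dated with touch(1) the way mtime can; forging it requires resetting the system clock, which tends to leave its own traces.

```shell
# recent_ctime ROOT DAYS: list entries whose inode change time
# (ctime) is within the last DAYS days. A file whose permissions,
# ownership, or contents were touched by an intruder gets a fresh
# ctime even if its mtime was carefully restored.
recent_ctime() {
    find "$1" -xdev -ctime "-$2" -print 2>/dev/null
}
```
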

You may want to get hold of a recent "Rootkit" distribution for your specific OS. This will point you to the binaries that you should monitor *very* closely.

There are "lightweight" alternatives to Tripwire: see http://www.pgci.ca/p_articles.html. Also note that Linux systems using the 'rpm' package management system ({RedHat,SuSE} Linux) can use the built-in RPM mechanisms to validate signatures.
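
For example, on an RPM-based system a verification pass might look like the sketch below (`rpm -V` reports size, checksum, mode, owner, and group changes for each installed file; the guard for non-RPM systems is mine):

```shell
# On an RPM-based system, verify every installed package against the
# package database. Each output line flags a file that differs from
# what the package installed (S=size, 5=checksum, M=mode, U=user,
# G=group, T=mtime).
if command -v rpm >/dev/null 2>&1; then
    rpm -Va || true   # a nonzero exit just means differences exist
    # To narrow the check to the package owning a suspect binary:
    #   rpm -V "$(rpm -qf /bin/ps)"
else
    echo "rpm not present on this system"
fi
```
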