I attended ThotCon 0x1 on Friday, April 23rd, and watched a talk in which the presenters disclosed and demonstrated an exploit embedded in a disk image that triggered arbitrary code execution when the same malicious file was examined with either EnCase or FTK. I'd like to talk a bit about this and its implications, as well as a few things that we, as a community, might want to do in response.
The specific vulnerability in question appeared to exist in the Outside In component, and was not triggered until the malicious file was actually viewed inside EnCase or FTK. The presenters stated that the vulnerability had initially been reported to Guidance and AccessData more than three EnCase versions ago. Thinking back now, I assumed they meant they had given notification before 6.14, but it's possible that they were counting point releases.
When triggered, the exploit appeared to crash EnCase silently, while FTK remained running; the presenters did not do anything further with either tool after exploitation.
While this specific issue is a matter for some concern, more significant is the generic question of what other exploitable vulnerabilities may exist in commonly used forensic tools such as EnCase or FTK. Of particular concern would be any which could affect filesystem parsers, and thus might cause a payload to be automatically executed when a maliciously crafted subject image is first loaded into the tool.
I talked a bit with the presenters after their talk, and they told me that they had done relatively little to search for other vulnerabilities in either tool. They had apparently just fuzzed shared components of FTK and EnCase until they found an exploitable condition. (They never formally confirmed that the vulnerability was in the Outside In library, but that was the consensus among the audience.) My impression was that it hadn't taken them long. They indicated their belief that there could be a plethora of other such issues in the code. I asked them about the possibility of such issues in the filesystem-parsing code as well, and they indicated it was likely, but that they had not tested that code extensively. I didn't get into specifics, but it seems probable that they simply had better tools for fuzzing the file formats.
So what do we do, as users of these tools, as customers of the companies which produce them (or, in the more general case, of the open-source projects which produce similar tools), and as forensic practitioners, to address and mitigate this issue and others like it?
- First, and most specifically, we need to universally view this particular bug with alarm, and call on both Guidance and AccessData to apply pressure on their common vendor to fix the vulnerability which was exploited.
- Second, and more generally, someone needs to be fuzzing commonly used forensic applications. Since this is such specialized niche software, it hasn't received the kind of attention that has been focused on things like network-accessible Windows services to date. The fact that the issue presented appears to have been discovered so easily suggests that this isn't currently being done by either Guidance or AccessData. As illustrated by Paul Craig's 'Hacking Scientists' presentation (audio) at Kiwicon 3 in New Zealand, fuzzing niche applications is an area that's now getting more attention in the industry. Once found, the issues need to be reported to the vendors, and if not promptly addressed, to the public at large. (I agree that public announcement is a debatable point, but there has to be some way to ensure that fixes for exploitable issues are appropriately prioritized.)
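To make the fuzzing point concrete, the kind of file-format fuzzing the presenters described can be sketched in a few lines. The example below is a hypothetical, minimal mutational fuzzer of my own construction, not anything the presenters showed: it flips random bytes in a well-formed seed input and feeds each mutant to a parser, saving any input that causes a crash. The `parse_image` function here is a toy stand-in for whatever parser is under test; actually fuzzing EnCase or FTK would require driving the real application on temp files and detecting crashes externally.

```python
import random

def mutate(seed: bytes, flips: int = 8, rng=None) -> bytes:
    """Return a copy of `seed` with a few randomly chosen bytes replaced."""
    rng = rng or random.Random()
    data = bytearray(seed)
    for _ in range(flips):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(parse, seed: bytes, iterations: int = 1000, rng_seed: int = 0):
    """Run `parse` on mutated copies of `seed`; collect inputs that raise."""
    rng = random.Random(rng_seed)
    crashers = []
    for _ in range(iterations):
        mutant = mutate(seed, rng=rng)
        try:
            parse(mutant)            # in practice: launch the target on a temp file
        except Exception:
            crashers.append(mutant)  # save the test case for later triage
    return crashers

# Toy stand-in parser: trusts a 4-byte length field, as many format bugs do.
def parse_image(data: bytes):
    length = int.from_bytes(data[0:4], "big")
    if length > len(data):
        raise ValueError("length field points past end of buffer")

if __name__ == "__main__":
    seed = b"\x00\x00\x00\x10" + b"A" * 12   # well-formed 16-byte sample
    found = fuzz(parse_image, seed, iterations=500)
    print(f"{len(found)} crashing inputs found")
```

Even something this dumb finds the toy bug almost immediately, which is consistent with the presenters' impression that it didn't take long to shake an exploitable condition out of the shared viewer components.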
- And lastly, in some environments, forensic examination procedures or configurations may need to be modified to take into account the possibility that analyzing an image with a specific tool could cause arbitrary code to be executed on the analysis station. The following are a subset of the attacker actions which might need to be considered. I'm not saying these are likely today, but given the rising capability of today's attackers, and the amount of money sometimes involved, I think it's probable that one or more of them will be attempted eventually, and we should do what we can to be ready:
- Code which seeks to actively mask specific evidence within a case.
- Code which seeks to discredit the current analysis or the analyst performing it in some way. Perhaps hidden HTML code might be introduced into a report, which could be highlighted to indicate negligence or incompetence on the part of the investigator. Or on a network-connected workstation, a tunnel could be established to allow an attacker to perform arbitrary actions such as uploading inappropriate images to the investigator's Internet cache.
- Code which seeks to discredit some other analysis.
- Code which seeks to insert false evidence into this or another case.
- Code which seeks to exfiltrate data about work done on the forensic workstation where it is run, possibly via the network, or via hidden files and/or automatic execution on inserted thumb drives.
- Code which seeks to delay processing of a case, perhaps by doing something as subtle as identifying a time-intensive forensic operation, then inducing a crash in the application and corrupting the results after a long but variable period.
- Code which seeks to identify an isolated forensic network (whether by scanning, or simply by piggybacking on normal usage) and infect other forensic workstations to perform the other actions listed above.
Mitigating forensic procedure/configuration modifications could include any or all of the following, depending on your degree of concern and risk tolerance:
- Running a host-based IPS on the analysis workstation
- Running tools, or configuring the host OS, such that exploitation is more difficult. This includes measures such as enabling DEP and/or ASLR, or migrating to an OS which supports such protections (or supports more of them).
- Keeping forensic workstation patches and installed software versions more current than might otherwise be done. Many examiners keep their forensic workstations entirely off the network, which increases the difficulty of performing updates, and may encourage the attitude that such systems are unlikely to be compromised absent direct execution of a malicious binary. Additionally, some examiners are reluctant to upgrade their forensic tools due to the expense involved.
- Monitoring/examining forensic workstation logs for anomalies.
- Re-imaging forensic workstations completely after each case, and only working one case at a time on each one.
- Verifying all results across multiple completely separate forensic platforms.
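On that last point, cross-verification can be as simple as diffing the file inventories that two tools produce for the same image. The sketch below is a hypothetical illustration, assuming each tool can export a manifest mapping file paths to hash values (the exact export format will vary by tool): any path present in one manifest but not the other, or hashed differently by the two tools, gets flagged for manual review.

```python
def compare_manifests(a: dict, b: dict):
    """Compare two {path: hash} manifests produced by different tools.

    Returns (only_in_a, only_in_b, mismatched). Any non-empty result
    means the two platforms disagree about the image's contents, and
    the discrepancy needs to be investigated by hand.
    """
    only_in_a = sorted(set(a) - set(b))
    only_in_b = sorted(set(b) - set(a))
    mismatched = sorted(p for p in set(a) & set(b) if a[p] != b[p])
    return only_in_a, only_in_b, mismatched

if __name__ == "__main__":
    # Hypothetical manifests exported from two separate analysis platforms.
    tool_one = {"/docs/a.txt": "d41d8cd9", "/docs/b.txt": "9e107d9d"}
    tool_two = {"/docs/a.txt": "d41d8cd9", "/docs/b.txt": "e4d909c2",
                "/docs/c.txt": "45c48cce"}
    missing_one, missing_two, diff = compare_manifests(tool_one, tool_two)
    print("only in tool one:", missing_one)
    print("only in tool two:", missing_two)
    print("hash mismatches: ", diff)
```

A compromised tool that silently masks or alters evidence would show up here as a missing path or a hash mismatch, which is exactly why the comparison has to run across completely separate platforms rather than within one.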
As always, please feel free to leave commentary if you liked this article or want to call me on the carpet for some inaccuracy.