The Meaning of LEAK Records

I’ve been pretty quiet lately, largely due to spending time developing LibForensics.  Currently I’m adding support for reading Microsoft Windows Internet cache containers (a.k.a. index.dat files).  If you’ve ever dealt with index.dat files before, you’ve probably encountered the mysterious “LEAK” record.  The purpose of this blog post is to explain one way that these records are created.

Background Information

In order to understand how LEAK records are created, it is useful to understand the Microsoft Windows Internet API.  The Microsoft Windows Internet API (WinInet) provides applications with the ability to interact with networked resources, usually over FTP and HTTP.  There are several functions in the WinInet API, including functions to provide caching.  Applications can use the WinInet API caching functions to store local (temporary) copies of files retrieved from the network.  The primary reason to use caching is to speed up future network requests by reading a local copy of a file instead of retrieving it from the network.

A cached file is called a “Temporary Internet File” (TIF).  The WinInet API manages TIFs using cache containers, which are files named index.dat.  There are several WinInet API functions to work with entries in cache containers, including creating a URL cache entry, reading locally cached files, and deleting URL cache entries.  The WinInet API also provides a cache scavenger, which periodically runs and cleans up entries that are marked for deletion.

The cache containers (index.dat files) are almost always associated with Microsoft Internet Explorer.  This is likely because Internet Explorer is one of the most commonly used applications that uses the WinInet API caching capabilities.  However, since the WinInet API is available to any end-user application, any application can use the caching capabilities.  This can pose an issue when attributing a specific entry in the cache container to the program that generated it.

Internally a cache container is composed of a header, followed by one or more records.  There are several different types of records, including URL records (which describe cached URLs), and REDR records (for describing redirects).  A cached URL can have an associated TIF, which is described in the appropriate URL record.

LEAK Records

Now that we’ve reviewed index.dat files, we’ll see how to create LEAK records.  However before going further I want to emphasize that this is just one approach to creating LEAK records.  LEAK records may have uses outside of what is described in this post.

For the impatient: A LEAK record can be generated by attempting to delete a URL cache entry (via DeleteUrlCacheEntry) when the associated temporary internet file (TIF) cannot be deleted.

The last paragraph of the MSDN documentation on the cache scavenger discusses what happens when a cache entry is marked for deletion:

The cache scavenger is shared by multiple processes. When one application deletes a cache entry from its process space by calling DeleteUrlCacheEntry, it is normally deleted on the next cycle of the scavenger. However, when the item that is marked for deletion is in use by another process, the cache entry is marked for deletion by the scavenger, but not deleted until the second process releases it.

To summarize, when the cache scavenger runs and encounters an item that is marked for deletion but still in use by another process, the cache entry is not actually deleted.

Another reference to LEAK records can be found at Understanding index.dat Files.  The author describes LEAK as a “Microsoft term for an error”.

Combining these two ideas (deleting a cache entry when it is in use, and LEAK as a term for error), we can come up with a theory: a LEAK record is generated when an error occurs during the deletion of a URL cache entry.  If you’ve ever taken a SANS Security 508 course (Computer Forensics, Investigation, and Response) from me, you’ll probably remember my approach to examinations (and investigations in general): theory (hypothesis) and test.

In order to test the theory, we need to create a series of statements and associated outcomes that would be true if our theory is correct.

At this stage our theory is fairly generic.  To make the theory testable, we need to make it more specific.  This means we will need to determine a series of actions that will result in the generation of a LEAK record.  The first place to look is the MSDN documentation on the WinInet API.  To save time, rather than walking through all the WinInet API functions, I’ll just reference the relevant ones:

  • CreateUrlCacheEntry
  • CommitUrlCacheEntry
  • RetrieveUrlCacheEntryStream
  • DeleteUrlCacheEntry

Looking at this list, there are a few possible ways to generate an error while deleting a URL cache entry:

  1. Create/Commit a URL cache entry, and lock the entry using RetrieveUrlCacheEntryStream.
  2. Create/Commit a URL cache entry and corresponding TIF, and open the TIF.
  3. Create/Commit a URL cache entry and corresponding TIF, and make the TIF read-only.

The general approach is to create (and commit) a URL cache entry, then create a condition that would make deleting the entry fail.

Let’s solidify these into testable theories as “if-then” statements (logical implications) with function calls:

  • IF we create a URL cache entry using CreateUrlCacheEntry and CommitUrlCacheEntry, lock the entry using RetrieveUrlCacheEntryStream, and call DeleteUrlCacheEntry
    • THEN we will see a LEAK record.
  • IF we create a URL cache entry and corresponding TIF using CreateUrlCacheEntry and CommitUrlCacheEntry, open the TIF using open(), and call DeleteUrlCacheEntry
    • THEN we will see a LEAK record.
  • IF we create a URL cache entry and corresponding TIF using CreateUrlCacheEntry and CommitUrlCacheEntry, make the TIF read-only using chmod, and call DeleteUrlCacheEntry
    • THEN we will see a LEAK record.

Theory Testing

The next step is to test our theories.  It is relatively straightforward to translate the if-then statements into code.  In the “Sample Code” section I’ve included a link to a zip file that contains (amongst other things) three Python files, test_leak1.py, test_leak2.py, and test_leak3.py.  Each file implements one of the if-then statements.
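For a sense of what the tests look like, here is a rough sketch of the second if-then statement using ctypes against wininet.dll.  This is not the actual test_leak2.py from the zip file; the URL, file contents, and header string are illustrative, and it uses the wide-character (W) versions of the functions, whereas the output below shows the ANSI (A) versions being used.

import ctypes
from ctypes import wintypes

wininet = ctypes.windll.wininet

class FILETIME(ctypes.Structure):
    _fields_ = [("dwLowDateTime", wintypes.DWORD),
                ("dwHighDateTime", wintypes.DWORD)]

url = "http://rand_test_entry"
tif_path = ctypes.create_unicode_buffer(wintypes.MAX_PATH)

# 1. Create and commit a URL cache entry with a corresponding TIF.
if not wininet.CreateUrlCacheEntryW(url, 0, "txt", tif_path, 0):
    raise ctypes.WinError()
with open(tif_path.value, "w") as f:
    f.write("test content")
headers = "HTTP/1.0 200 OK\r\n\r\n"
if not wininet.CommitUrlCacheEntryW(url, tif_path.value, FILETIME(), FILETIME(),
                                    0x1,  # NORMAL_CACHE_ENTRY
                                    headers, len(headers), "txt", None):
    raise ctypes.WinError()

# 2. Hold the TIF open so it cannot be deleted...
tif = open(tif_path.value, "rb")

# 3. ...then attempt to delete the cache entry, which should leave a LEAK record.
if not wininet.DeleteUrlCacheEntryW(url):
    print("DeleteUrlCacheEntryW failed:", ctypes.WinError())

tif.close()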

Here is the output from running test_leak1.py (in a Windows 2003 virtual machine):

C:\Tools\Python31>python z:\inet_cache\test_leak1.py
Creating URL: http://rand_286715790
Using file: b'C:\Documents and Settings\Administrator\Local Settings\Temporary Internet Files\Content.IE5\81QNCLMB\CAUJ6C3U'
Locking URL: http://rand_286715790
Deleting URL: http://rand_286715790
ERROR: DeleteUrlCacheEntryA failed with error 0x20: The process cannot access the file because it is being used by another process.

The output from test_leak1.py indicates that there was an error during the call to DeleteUrlCacheEntry.  After copying the associated index.dat file to a Linux system, we can find a reference to http://rand_286715790:

xxd -g 1 -u index.dat.leak1
...
000ef00: 55 52 4C 20 02 00 00 00 00 00 00 00 00 00 00 00  URL ............
000ef10: 50 A1 F4 DB 08 32 CA 01 00 00 00 00 00 00 00 00  P....2..........
000ef20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
000ef30: 60 00 00 00 68 00 00 00 02 00 10 10 80 00 00 00  `...h...........
000ef40: 01 00 40 00 00 00 00 00 00 00 00 00 00 00 00 00  ..@.............
000ef50: 2A 3B B3 5A 02 00 00 00 01 00 00 00 2A 3B B3 5A  *;.Z........*;.Z
000ef60: 00 00 00 00 EF BE AD DE 68 74 74 70 3A 2F 2F 72  ........http://r
000ef70: 61 6E 64 5F 32 38 36 37 31 35 37 39 30 00 AD DE  and_286715790...
000ef80: 43 41 55 4A 36 43 33 55 00 BE AD DE EF BE AD DE  CAUJ6C3U........
...

The record is still marked as “URL “.  Further examination of the file shows no additional references to http://rand_286715790.  Here is the output from running test_leak2.py (in a Windows 2003 virtual machine):

C:\Tools\Python31>python z:\inet_cache\test_leak2.py
Creating URL: http://rand_3511348668
Opening file: b'C:\Documents and Settings\Administrator\Local Settings\Temporary Internet Files\Content.IE5\81QNCLMB\CAC23G8H'
Deleting URL: http://rand_3511348668

There was no clear indication that an error occurred.  After copying the index.dat file to a Linux system, we can find a reference to http://rand_3511348668:

xxd -g 1 -u index.dat.leak2
...
000ef00: 4C 45 41 4B 02 00 00 00 00 00 00 00 00 00 00 00  LEAK............
000ef10: 90 70 17 74 0C 32 CA 01 00 00 00 00 00 00 00 00  .p.t.2..........
000ef20: 00 04 00 00 00 00 00 00 00 00 00 00 00 E7 00 00  ................
000ef30: 60 00 00 00 68 00 00 00 02 00 10 10 80 00 00 00  `...h...........
000ef40: 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
000ef50: 2A 3B EB 5D 01 00 00 00 00 00 00 00 2A 3B EB 5D  *;.]........*;.]
000ef60: 00 00 00 00 EF BE AD DE 68 74 74 70 3A 2F 2F 72  ........http://r
000ef70: 61 6E 64 5F 33 35 31 31 33 34 38 36 36 38 00 DE  and_3511348668..
000ef80: 43 41 43 32 33 47 38 48 00 BE AD DE EF BE AD DE  CAC23G8H........
...

This time a LEAK record was created.  Further examination of the file shows no additional references to http://rand_3511348668.  Here is the output from running test_leak3.py (in a Windows 2003 virtual machine):

C:\Tools\Python31>python z:\inet_cache\test_leak3.py
Creating URL: http://rand_1150829499
chmod'ing file: b'C:\Documents and Settings\Administrator\Local Settings\Temporary Internet Files\Content.IE5\81QNCLMB\CAKB2RNB'
Deleting URL: http://rand_1150829499

Again, there was no clear indication that an error occurred.  After copying the index.dat file to a Linux system, we can find a reference to http://rand_1150829499:

xxd -g 1 -u index.dat.leak3
...
000ef00: 4C 45 41 4B 02 00 00 00 00 00 00 00 00 00 00 00  LEAK............
000ef10: 00 2B AF B5 0D 32 CA 01 00 00 00 00 00 00 00 00  .+...2..........
000ef20: 00 04 00 00 00 00 00 00 00 00 00 00 00 E7 00 00  ................
000ef30: 60 00 00 00 68 00 00 00 02 00 10 10 80 00 00 00  `...h...........
000ef40: 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
000ef50: 2A 3B 0A 5F 01 00 00 00 00 00 00 00 2A 3B 0A 5F  *;._........*;._
000ef60: 00 00 00 00 EF BE AD DE 68 74 74 70 3A 2F 2F 72  ........http://r
000ef70: 61 6E 64 5F 31 31 35 30 38 32 39 34 39 39 00 DE  and_1150829499..
000ef80: 43 41 4B 42 32 52 4E 42 00 BE AD DE EF BE AD DE  CAKB2RNB........
...

As with test_leak2.py, a LEAK record was generated. Further examination of the file shows no additional references to  http://rand_1150829499.

Given the results, we can assess the correctness of our theories.  Since test_leak1.py did not generate a LEAK record, while test_leak2.py and test_leak3.py did, we can narrow our original theory to TIFs.  Specifically, a LEAK record is generated when DeleteUrlCacheEntry is called and the associated TIF (temporary internet file) cannot be deleted.

It is also prudent to note that we only ran the tests once.  In all three tests it is possible that there are other (unknown) variables that we did not account for, and in the latter two tests the unknown variables just happened to work in our favor.  To strengthen the theory that LEAK records occur when a TIF can not be deleted, we could run the tests multiple times, as well as attempt other methods to make the TIF file “undeleteable”.

Sample Code

The file test_leak.zip contains code used to implement the testing of theories in this blog post.  The files test_leak1.py, test_leak2.py, and test_leak3.py implement the tests, while inet_cache_lib.py, groups.py, entries.py, and __init__.py are library files used by the test files.  All of the code was designed to run with Python 3.1 on Windows systems, and interfaces with the Windows Internet API via the ctypes module.  The code is licensed under the GPL v3.

You can download the sample code by clicking on the link test_leak.zip.  To install it, unzip the file to a directory of your choosing.

Recovering a FAT filesystem directory entry in five phases

This is the last in a series of posts about five phases that digital forensics tools go through to recover data structures (digital evidence) from a stream of bytes. The first post covered fundamental concepts of data structures, as well as a high level overview of the phases. The second post examined each phase in more depth. This post applies the five phases to recovering a directory entry from a FAT file system.

The directory entry we’ll be recovering is from the Honeynet Scan of the Month #24. You can download the file by visiting the SOTM24 page. The entry we’ll recover is the 3rd directory entry in the root directory (the short name entry for _IMMYJ~1.DOC, istat number 5.)

Location

The first step is to locate the entry. It’s at byte offset 0x2640 (9792 decimal). How do we know this? Well, assuming we know we want the third entry in the root directory, we can calculate the offset using values from the boot sector, as well as the fact that each directory entry is 0x20 (32 decimal) bytes long (this piece of information came from the FAT file system specification.) There is an implicit step that we skipped, recovering the boot sector (so we could use the values). To keep this post to a (semi) reasonable length, we’ll skip this step. It is fairly straightforward though. The calculation to locate the third entry in the root directory of the image file is:

3rd entry in root directory = (bytes per sector) * [(length of reserved area) + [(number of FATs) * (size of one FAT)]] + (offset of 3rd directory entry)

bytes per sector = 0x200 (512 decimal)

length of reserved area = 1 sector

number of FATs = 2

size of one FAT = 9 sectors

size of one directory entry = 0x20 (32 decimal) bytes

offset of 3rd directory entry = size of one directory entry *2 (start at 0 since it’s an offset)

3rd entry in root directory = 0x200 * (1 + (2 * 9)) + (0x20 * 2) = 0x2640 (9792 decimal)
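
The same calculation can be expressed in a few lines of Python (the variable names are mine; the values come from the SOTM24 boot sector as listed above):

bytes_per_sector = 0x200      # 512
reserved_sectors = 1          # length of reserved area
number_of_fats = 2
sectors_per_fat = 9
entry_size = 0x20             # 32 bytes per directory entry
entry_index = 2               # 3rd entry, counting from 0

offset = (bytes_per_sector * (reserved_sectors + number_of_fats * sectors_per_fat)
          + entry_size * entry_index)
print(hex(offset))            # 0x2640 (9792 decimal)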

Using xxd, we can see the hex dump for the 3rd directory entry:

$ xxd -g 1 -u -l 0x20 -s 0x2640 image
0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..

Extraction

Continuing to the extraction phase, we need to extract each field. For a short name directory entry, there are roughly 12 fields (depending on whether you consider the first character of the file name as its own field.) The multibyte fields are stored in little endian, so we’ll need to reverse the bytes that we see in the output from xxd.

To start, the first field we’ll consider is the name of the file. This starts at offset 0 (relative to the start of the data structure) and is 11 bytes long. It’s the ASCII representation of the name.

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
File name = _IMMYJ~1.DOC (_ represents the byte 0xE5)

The next field is the attributes field, which is at offset 12 and 1 byte long. It’s an integer and a bit field, so we’ll examine it further in the decoding phase.

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Attributes = 0x20

Continuing in this manner, we can extract the rest of the fields:

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Reserved = 0x00

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Creation time (hundredths of a second) = 0x68

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Creation time = 0x4638

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Creation date = 0x2D2B

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Access date = 0x2D2B

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
High word of first cluster = 0x0000

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Modification time = 0x754F

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Modification date = 0x2C8F

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Low word of first cluster = 0x0002

0002640: E5 49 4D 4D 59 4A 7E 31 44 4F 43 20 00 68 38 46 .IMMYJ~1DOC .h8F
0002650: 2B 2D 2B 2D 00 00 4F 75 8F 2C 02 00 00 50 00 00 +-+-..Ou.,...P..
Size of file = 0x00005000 (bytes)
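
To tie the extraction phase together, here is a sketch that pulls all twelve fields out of the 32-byte entry with Python’s struct module (the variable names are mine, and “image” is the SOTM24 image file):

import struct

with open("image", "rb") as f:
    f.seek(0x2640)
    raw = f.read(0x20)

# 11s = name, B = attributes, B = reserved, B = creation hundredths of a second,
# then 7 little-endian words (creation time, creation date, access date,
# high word of first cluster, modification time, modification date,
# low word of first cluster), followed by a 4-byte file size.
(name, attrs, reserved, ctime_hundredths,
 ctime, cdate, adate, cluster_hi,
 mtime, mdate, cluster_lo, size) = struct.unpack("<11s3B7HI", raw)

print(hex(attrs), hex(ctime), hex(cdate), hex(size))   # 0x20 0x4638 0x2d2b 0x5000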

Decoding

With the various fields extracted, we can decode the various bit-fields. Specifically the attributes, dates, and times fields. The attributes field is a single byte, with the following bits used to represent the various attributes:

  • Bit 0: Read only
  • Bit 1: Hidden
  • Bit 2: System
  • Bit 3: Volume label
  • Bit 4: Directory
  • Bit 5: Archive
  • Bits 6 and 7: Unused
  • Bits 0, 1, 2, 3: Long name

When decoding the fields in a FAT file system, the right most bit is considered bit 0. To specify a long name entry, bits 0, 1, 2, and 3 would be set. The value we extracted from the example was 0x20 or 0010 0000 in binary. The bit at offset 5 (starting from the right) is set, and represents the “Archive” attribute.

Date fields for a FAT directory entry are encoded in two byte values, and groups of bits are used to represent the various sub-fields. The layout for all date fields (modification, access, and creation) is:

  • Bits 0-4: Day
  • Bits 5-8: Month
  • Bits 9-15: Year

Using this knowledge, we can decode the creation date. The value we extracted was 0x2D2B which is 0010 1101 0010 1011 in binary. The day, month, and year fields are thus decoded as:

0010 1101 0010 1011
Creation day: 01011 binary = 0xB = 11 decimal

0010 1101 0010 1011
Creation month: 1001 binary = 0x9 = 9 decimal

0010 1101 0010 1011
Creation year: 0010110 binary = 0x16 = 22 decimal

A similar process can be applied to the access and modification dates. The value we extracted for the access date was also 0x2D2B, and consequently the access day, month, and year values are identical to the respective fields for the creation date. The value we extracted for the modification date was 0x2C8F (0010 1100 1000 1111 in binary). The decoded day, month, and year fields are:

0010 1100 1000 1111
Modification day: 01111 binary = 0xF = 15 decimal

0010 1100 1000 1111
Modification month: 0100 binary = 0x4 = 4 decimal

0010 1100 1000 1111
Modification year: 0010110 binary = 0x16 = 22 decimal

You might have noticed the year values seem somewhat small (i.e. 22). This is because the value for the year field is an offset starting from the year 1980. This means that in order to properly interpret the year field, the value 1980 (0x7BC) needs to be added to the value of the year field. This is done during the next phase (interpretation).

The time fields in a directory entry, similar to the date fields, are encoded in two byte values, with groups of bits used to represent the various sub-fields. The layout to decode a time field is:

  • Bits 0-4: Seconds
  • Bits 5-10: Minutes
  • Bits 11-15: Hours

Recall that we extracted the value 0x4638 (0100 0110 0011 1000 in binary) for the creation time. Thus the decoded seconds, minutes, and hours fields are:

0100 0110 0011 1000
Creation seconds = 11000 binary = 0x18 = 24 decimal

0100 0110 0011 1000
Creation minutes = 110001 binary = 0x31 = 49 decimal

0100 0110 0011 1000
Creation hours = 01000 binary = 0x8 = 8 decimal

The last value we need to decode is the modification time. The bit-field layout is the same for the creation time. The value we extracted for the modification time was 0x754F (0111 0101 0100 1111 in binary). The decoded seconds, minutes, and hours fields for the modification time are:

0111 0101 0100 1111
Modification seconds = 01111 binary = 0xF = 15 decimal

0111 0101 0100 1111
Modification minutes = 101010 binary = 0x2A = 42 decimal

0111 0101 0100 1111
Modification hours = 01110 binary = 0xE = 14 decimal
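
Putting the decoding rules above into code, here is a small sketch of the two bit-field helpers (the function names are mine):

def decode_date(value):
    return {"day": value & 0x1F,
            "month": (value >> 5) & 0x0F,
            "year": (value >> 9) & 0x7F}

def decode_time(value):
    return {"seconds": value & 0x1F,
            "minutes": (value >> 5) & 0x3F,
            "hours": (value >> 11) & 0x1F}

decode_date(0x2D2B)   # {'day': 11, 'month': 9, 'year': 22}
decode_date(0x2C8F)   # {'day': 15, 'month': 4, 'year': 22}
decode_time(0x4638)   # {'seconds': 24, 'minutes': 49, 'hours': 8}
decode_time(0x754F)   # {'seconds': 15, 'minutes': 42, 'hours': 14}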

Interpretation

Now that we’ve finished extracting and decoding the various fields, we can move into the interpretation phase. The values for the years and seconds fields need to be interpreted. The value of the years field is the offset from 1980 (0x7BC) and the seconds field is the number of seconds divided by two. Consequently, we’ll need to add 0x7BC to each year field and multiply each second field by two. The newly calculated years and seconds fields are:

  • Creation year = 22 + 1980 = 2002
  • Access year = 22 + 1980 = 2002
  • Modification year = 22 + 1980 = 2002
  • Creation seconds = 24 * 2 = 48
  • Modification seconds = 15 * 2 = 30

We also need to calculate the first cluster of the file, which simply requires concatenating the high and the low words. Since the high word is 0x0000, the value for the first cluster of the file is the value of the low word (0x0002).
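
In Python this is a single shift-and-or (a sketch using the values extracted above):

first_cluster = (0x0000 << 16) | 0x0002   # high word shifted left 16 bits, OR low word = 2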

In the next phase (reconstruction) we’ll use Python, so there are a few additional values that are useful to calculate. The first order of business is to account for the hundredths of a second associated with the seconds field for creation time. The value we extracted for the hundredths of a second for creation time was 0x68 (104 decimal). Since this value is greater than 100 we can add 1 to the seconds field of creation time. Our new creation seconds field is:

  • Creation seconds = 48 + 1 = 49

This still leaves four hundredths of a second left over. Since we’ll be reconstructing this in Python, we’ll use the Python time class which accepts values for hours, minutes, seconds, and microseconds. To convert the remaining four hundredths of a second to microseconds multiply by 10000. The value for creation microseconds is:

  • Creation microseconds = 4 * 10000 = 40000
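
Here is a short sketch of the interpretation steps for the creation time stamp, using the values decoded earlier (year offset 22, 24 two-second units, and 0x68 hundredths of a second):

creation_year = 22 + 1980                    # year field is an offset from 1980
creation_seconds = 24 * 2                    # seconds field stores seconds / 2
hundredths = 0x68                            # 104 decimal
creation_seconds += hundredths // 100        # 48 + 1 = 49
creation_microseconds = (hundredths % 100) * 10000   # 4 hundredths -> 40000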

The other calculation is to convert the attributes field into a string. This is purely arbitrary, and is being done for display purposes. So our new attributes value is:

  • Attributes = “Archive”

Reconstruction

This is the final phase of recovering our directory entry. To keep things simple, we’ll reconstruct the data structure as a Python dictionary. Most applications would likely use a Python object, and doing so is a fairly straightforward translation. Here is a snippet of Python code to create a dictionary with the extracted, decoded, and interpreted values (don’t type the >>> or …):

$ python
>>> from datetime import date, time
>>> dirEntry = dict()
>>> dirEntry["File Name"] = "xE5IMMYJ~1DOC"
>>> dirEntry["Attributes"] = "Archive"
>>> dirEntry["Reserved Byte"] = 0x00
>>> dirEntry["Creation Time"] = time(8, 49, 49, 40000)
>>> dirEntry["Creation Date"] = date(2002, 9, 11)
>>> dirEntry["Access Date"] = date(2002, 9, 11)
>>> dirEntry["First Cluster"] = 2
>>> dirEntry["Modification Time"] = time(14, 42, 30)
>>> dirEntry["Modification Date"] = date(2002, 4, 15)
>>> dirEntry["size"] = 0x5000
>>>

If you wanted to print out the values in a (semi) formatted fashion you could use the following Python code:

>>> for key in dirEntry.keys():
...     print("%s == %s" % (key, str(dirEntry[key])))
...

And you would get the following output

Modification Date == 2002-04-15
Creation Date == 2002-09-11
First Cluster == 2
File Name == ?IMMYJ~1DOC
Creation Time == 08:49:49.040000
Access Date == 2002-09-11
Reserved Byte == 0
Modification Time == 14:42:30
Attributes == Archive
size == 20480
>>>

At this point, there are a few additional fields that could have been calculated. For instance, the file name could have been broken into the respective 8.3 (base and extension) components. It might also be useful to calculate the allocation status of the associated file (in this case it would be unallocated). These are left as exercises for the reader ;).

This concludes the 3-post series on recovering data structures from a stream of bytes. Hopefully the example helped clarify the roles and activities of each of the five phases. Realize that the five phases aren’t specific to recovering file system data structures; they apply to network traffic, code, file formats, etc.

The five phases of recovering digital evidence

This is the second post in a series about the five phases of recovering data structures from a stream of bytes (a form of digital evidence recovery). In the last post we discussed what data structures were, how they related to digital forensics, and a high level overview of the five phases of recovery. In this post we’ll examine each of the five phases in finer grained detail.

In the previous post, we defined five phases a tool (or human if they’re that unlucky) goes through to recover data structures. They are:

  1. Location
  2. Extraction
  3. Decoding
  4. Interpretation
  5. Reconstruction

We’ll now examine each phase in more detail…

Location

The first step in recovering a data structure from a stream of bytes is to locate the data structure (or at least the fields of the data structure we’re interested in.) Currently, there are 3 different commonly used methods for location:

  1. Fixed offset
  2. Calculation
  3. Iteration

The first method is useful when the data structure is at a fixed location relative to a defined starting point. This is the case for a FAT boot sector, which is located in the first 512 bytes of a partition. The second method (calculation) uses values from one or more other fields (possibly in other data structures) to calculate the location of a data structure (or field). The last method (iteration) examines “chunks” of data, and attempts to identify if the chunks are “valid”, meaning the (eventual) interpretation of the chunk fits predetermined rules for a given data structure.

These three methods aren’t mutually exclusive, meaning they can be combined and intermixed. It might be the case that locating a data structure requires all three methods. For example, when run against a FAT file system, the ils tool from Sleuthkit first recovers the boot sector, then calculates the start of the data region, and finally iterates over chunks of data in the data region, attempting to validate the chunks as directory entries.

While all three methods require some form of a priori knowledge, the third method (iteration) isn’t necessarily dependent on knowing the fixed offset of a data structure. From a purist perspective, iteration itself really is location. Iteration yields a set of possible locations, as opposed to the first two methods which yield a single location. The validation aspect of iteration is really a combination of the rest of the phases (extraction, decoding, interpretation and reconstruction) combined with post recovery analysis.

Another method for location, less common than the previous three, is location by outside knowledge from some source. This could be a person who has already performed location, or it could be the source that created the data structure (e.g. the operating system). Due to the flexible and dynamic nature of computing devices, this isn’t commonly used, but it is a possible method.

Extraction

Once a data structure (or the relevant fields) has been located, the next step is to extract the fields of the data structure out of the stream of bytes. Realize that the “extracting” is really the application of type information. The information from the stream is the same, but we’re using more information about how to access (and possibly manipulate) the value of the field(s). For example, the string of bytes 0x64 and 0x53 can be extracted as an ASCII string composed of the characters “dS”, or it could be the (big endian) value 0x6453 (25683 decimal). The information remains the same, but how we access and manipulate the values (e.g. concatenation vs. addition) differs. Knowledge of the type of field provides the context for how to access and manipulate the value, which is used during later phases of decoding, interpretation, and reconstruction.

The extraction of a field that is composed of multiple bytes also requires knowledge of the order of the bytes, commonly referred to as the “endianness”, “byte ordering”, or “big vs. little endian”. Take for instance the 16-bit hexadecimal number 0x6453. Since this is a 16-bit number, we would need two bytes (assuming 8-bit bytes) to store this number. So the value 0x6453 is composed of the (two) bytes 0x64 and 0x53.

It’s logical to think that these two bytes would be adjacent to each other in the stream of bytes, and they typically are. The question now is: what is the order of the two bytes in the stream?

0x64, 0x53 (big endian)

or

0x53, 0x64 (little endian)

Obviously the order matters.
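
To make the example concrete, here is the same pair of bytes pulled apart with Python’s struct module (a quick sketch; “H” denotes an unsigned 16-bit value):

import struct

data = b"\x64\x53"
data.decode("ascii")              # 'dS'
struct.unpack(">H", data)[0]      # 25683 (0x6453, big endian)
struct.unpack("<H", data)[0]      # 21348 (0x5364, little endian)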

Decoding

After the relevant fields of a data structure have been located and extracted, it’s still possible further extraction is necessary, specifically for bit fields (e.g. flags, attributes, etc.) The difference between this phase and the extraction phase is that extraction extracts information from a stream of bytes, while decoding extracts information from the output of the extraction phase. Put another way, the output from the extraction phase is used as the input to the decoding phase. Both phases however focus on extracting information. Computation using extracted information is reserved for later phases (interpretation and reconstruction).

Another reason to distinguish this phase from extraction is that most (if not all) computing devices can only read (at least) whole bytes, not individual bits. While a human with a hex dump could potentially extract a single bit, software would need to read (at least) a whole byte and extract the various bits within the byte(s).

There isn’t much that happens at this phase, as much of the activity focuses around accessing various bits.

Interpretation

The interpretation phase takes the output of the decoding phase (or the extraction phase if the decoding phase uses only identity functions) and performs various computations using the information. While extraction and decoding focus on extracting and decoding values, interpretation focuses on computation using the extracted (and decoded) values.

Two examples of interpretation are unit conversion, and the calculation of values used during the reconstruction phase. An example of unit conversion would be converting the seconds field of a FAT time stamp from its native format (seconds/2) to a more logical format (seconds). A useful computation for reconstruction might be to calculate the size of the FAT region (in bytes) for a FAT file system (bytes per sector * size of one FAT structure (in sectors) * number of FAT structures.)
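
As a small illustration of the second example, here is the FAT-region computation using the same boot-sector values quoted in the FAT directory-entry walkthrough (512 bytes per sector, two FATs of nine sectors each):

bytes_per_sector = 512
sectors_per_fat = 9
number_of_fats = 2
fat_region_bytes = bytes_per_sector * sectors_per_fat * number_of_fats   # 9216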

Since this phase is used heavily by the reconstruction phase, it’s not uncommon to see this phase embodied in the code for reconstruction. However this phase is still a logically separate component.

Reconstruction

This is the last phase of recovering digital evidence. Information from previous phases is used to reconstruct a usable representation of the data structure (or at least the relevant fields.) Possible usable representations include:

  • A language specific construct or class (e.g. Python date object or a C integer)
  • Printable text (e.g. output from fsstat)
  • A file (e.g. file carving)

The idea is that the output from this phase can be used for some further analysis (e.g. timeline generation, analyzing email headers, etc.) Some tools might also perform some error checking during reconstruction, failing if the entire data structure cannot be properly recovered. While this might be useful in some automated scenarios, it has the downside of potentially missing useful information when only part of the structure is available or is valid.

At this point, we’ve gone into more detail of each phase and hopefully explained in enough depth the purpose and types of activity that happen in each. The next (and last) post in this series is an example of applying the five phases to recovering a short name directory entry from a FAT file system.

How forensic tools recover digital evidence (data structures)

In a previous post I covered “The basics of how digital forensics tools work.” In that post, I mentioned that one of the steps an analysis tool has to do is to translate a stream of bytes into usable structures. This is the first in a series of three posts that examines this step (translating from a stream of bytes to usable structures) in more detail. In this post I’ll introduce the different phases that a tool (or human if they’re that unlucky) goes through when recovering digital evidence. The second post will go into more detail about each phase. Finally, the third post will show an example of translating a series of bytes into a usable data structure for a FAT file system directory entry.

Data Structures, Data Organization, and Digital Evidence

Data structures are central to computer science, and consequently bear importance to digital forensics. In The Art of Computer Programming, Volume 1: Fundamental Algorithms (3rd Edition), Donald Knuth provides the following definition for a data structure:

Data Structure: A table of data including structural relationships

In this sense, a “table of data” refers to how a data structure is composed. This definition does not imply that arrays are the only data structure (which would exclude other structures such as linked lists.) The units of information that compose a data structure are often referred to as fields. That is to say, a data structure is composed of one or more fields, where each field contains information, and the fields are adjacent (next) to each other in memory (RAM, hard disk, USB drive, etc.)

The information the fields contain falls into one of two categories: the data a user wishes to represent (e.g. the contents of a file), and the structural relationships (e.g. a pointer to the next item in a linked list.) It’s useful to think of the former (data) as data, and the latter (structural relationships) as metadata, although the line between the two is not always clear and depends on the context of interpretation. What may be considered data from one perspective, may be considered metadata from another perspective. An example of this would be a Microsoft Word document, which from a file system perspective is data. However, from the perspective of Microsoft Word, the file contains both data (the text) as well as metadata (the formatting, revision history, etc.)

The design of a data structure not only includes the order of the fields, but also the higher level design goals for the programs which access and manipulate the data structures. For instance, efficiency has long been a desirable aspect of many computer programs. With society’s increased dependence on computers, other higher level design goals such as security, multiple access, etc. have also become desirable. As a result, many data structures contain fields to accommodate these goals.

Another important aspect in computing is how to access and manipulate the data structures and their related fields. Knuth defines this under the term “data organization”:

Data Organization: A way to represent information in a data structure, together with algorithms that access and/or modify the structure.

An example of this would be a field that contains the bytes 0x68, 0x65, 0x6C, 0x6C, and 0x6F. One way to interpret these bytes is as the ASCII string “hello”. In another interpretation, these bytes can be the integer number 448378203247 (decimal). Which one is it? Well there are scenarios where either could be correct. To answer the question of correct interpretation requires information beyond just the data structure and field layout, hence the term data organization. Even with self-describing data structures, information about how to access and manipulate the “self-describing” parts (e.g. type “1” means this is a string) is still needed.
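
The “hello” example above can be reproduced directly with Python’s bytes type (a quick sketch; the same five bytes are read as ASCII text and as a big-endian integer):

data = bytes([0x68, 0x65, 0x6C, 0x6C, 0x6F])
data.decode("ascii")              # 'hello'
int.from_bytes(data, "big")       # 448378203247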

So where does all this information for data organization (and data structures) come from? There are a few common sources. Perhaps the first would be a document from the organization that designed the data structures and the software that accesses and manipulates them. This could be either a formal specification, or one or more informal documents (e.g. entries in a knowledge base.) Another source would be reverse engineering the code that accesses and manipulates the data structures.

If you’ve read through all of this, you might be asking “So how does this relate to digital forensics?” The idea is that data structures are a type of digital evidence. Realize that the term “digital evidence” is somewhat overloaded. In one context, a disk image is digital evidence (i.e. what was collected during evidence acquisition), and in another context, an email extracted from a disk image is digital evidence. This series focuses on the latter, digital evidence extracted from a stream of bytes. Typically this would occur during the analysis phase, although (especially with activities such as verification) this can occur prior to the evidence acquisition phase.

The 5 Phases

Now that we’ve talked about what data structures are and how they relate to digital forensics, let’s see how to put this to use with our forensic tools. What we’re about to do is describe five abstract phases, meaning not all tools implement them directly, and some tools don’t focus on all five phases. These phases can also serve as a methodology for recovering data structures, should you happen to be in the business of writing digital forensic tools.

  1. Location
  2. Extraction
  3. Decoding
  4. Interpretation
  5. Reconstruction

The results of each phase are used as input for the next phase, in a linear fashion.

An example will help clarify each phase. Consider the recovery of a FAT directory entry from a disk image. The first task would be to locate the desired directory entry, which could be accomplished through different mechanisms such as calculation or iteration. The next task is to extract out the various fields of the data structure, such as the name, the date and time stamps, the attributes, etc. After the fields have been extracted, fields where individual bits represent sub fields can be decoded. In the example of the directory entry, this would be the attributes field, which denotes if a file is considered hidden, to be archived, a directory, etc. Once all of the fields have been extracted and decoded, they can be interpreted. For instance, the seconds field of a FAT time stamp is really the seconds divided by two, so the value must be multiplied by two. Finally, the data structure can be reconstructed using the facilities of the language of your choice, such as the time class in Python.

There are a few interesting points to note with recovery of data structures using the above methodology. First, not all tools go through all phases, at least not directly. For instance, file carving doesn’t directly care about data structures. Depending on how you look at it, file carving really does go through all five phases, it just uses an identity function. In addition, file carving does care about (parts of) data structures: it cares about the fields of the data structures that contain “user information”, not about the rest of the fields. In fact, much file carving is done with a built-in assumption about the data structure: that the fields that contain “user information” are stored in contiguous locations.

Another interesting point is the distinction between extraction, decoding, and interpretation. Briefly, extraction and decoding focus on extracting information (from stream of bytes and already extracted bytes respectively), whereas interpretation focuses on computation using extracted and decoded information. The next post will go into these distinctions in more depth.

A third and subtler point comes from the transition of data structures between different types of memory, notably from RAM to a secondary storage device such as a hard disk or USB thumb drive. Not all structural information may make the transition from RAM, and as a result it is lost. For instance, a linked list data structure, which typically contains a pointer field to the next element in the list, may not record the pointer field when being written to disk. More often than not, such information isn’t necessary to read consistent data structures from disk, otherwise the data organization mechanism wouldn’t really be consistent and reliable. However, if an analysis scenario does require such information (it’s theoretically possible), the data structures would have to come directly from RAM, as opposed to after they’ve been written to disk. This problem doesn’t stem from the five phases, but instead stems from a loss of information during the transition from RAM to disk.

In the next post, we’ll cover each phase in more depth, and examine some of the different activities that can occur at each phase.

Evaluating Forensic Tools: Beyond the GUI vs Text Flame War

One of the good old flamewars that comes up every now and again is which category of tools is “better”: graphical, console (e.g. interactive text-based), or command-line?

Each interface mechanism has its pros and cons, and when evaluating a tool, the interface mechanism used can make an impact on the usability of the tool. For instance, displaying certain types of information (e.g. all of the picture files in a specific directory) naturally lends itself to a graphical environment. On the other hand, it’s important to me to be able to use the keyboard to control the tool (using a mouse can often slow you down). The idea that graphical tools “waste CPU cycles” is pretty moot, considering the speed of current processors, and that much forensic work focuses on data sifting and analysis, which is heavily tied to I/O throughput.

Text based tools however do have the issue that paging through large chunks of data can be somewhat tedious (this is where I like using the scrollwheel, the ever-cool two-finger scroll on a Macbook, or even the “less” command.)

To me, there are more important issues than the type (e.g. graphical or text-based) of interface. Specifically some of the things I focus on are:

  • What can the tool do?
  • What are the limitations of the tool?
  • How easy is it to automate the tool (getting data into the tool, controlling execution of the tool, and getting data out of the tool)?

The first two items really focus on the analysis capabilities of the tool (which can be “make-or-break” decisions by themselves), and the last item (really three items rolled into one) focuses on the automation capabilities of the tool.

The automation capabilities are often important because no single tool does everything, and an analyst’s toolkit is composed of a series of tools that have differing capabilities. Being able to automate the different tools in your toolkit (and being able to transfer data between multiple tools) is often a huge time saver.

Many tools have built-in scripting capabilities. For instance ProDiscover has ProScript, EnCase has EnScript, etc. Command line tools can typically be “wrapped” by another language. Autopsy, for example, is a Perl script that wraps around the various Sleuthkit tools. While it is useful to be able to automate the execution of a tool, it’s also useful to be able to automate the import and export of data. Being able to programmatically extract the results of a tool and feed them as input (or further process them) allows you to combine multiple tools in your toolkit.

So to me, when evaluating a forensic tool the capabilities (and limitations) and the ease of automation are (often) more important than the interface.

Two tools to help debug shellcode

Here are two small tools to help debug/analyze shellcode. The goal of both tools is to provide an executable environment for the shellcode. Shellcode is usually intended to run in the context of a running process, and by itself doesn’t provide the environment typically provided by an executable.

The first tool, make_loader.py, is a Python script which takes the name of a file containing shellcode and outputs a compilable C file with the embedded shellcode.  If you compile the output, the resulting executable runs the shellcode.

The second tool, run_shellcode, is a C program (you have to compile it) which, at runtime, loads shellcode from disk into memory (and then transfers execution to the shellcode.)  A neat feature of this tool is that it can be completely controlled by a configuration file, meaning you only need to load the file once into a debugger.  You can examine new shellcode by changing the configuration file.

Both tools allow you to specify if you want to automatically trap to the debugger (typically by an int 3), and to skip over a number of bytes in the file that contains the shellcode.  The automatic debugger trap is nice so you don’t always have to explicitly set a breakpoint.  The skip is nice if the shellcode doesn’t sit at the start of the file and you don’t want to bother stripping out the unnecessary bytes.  Think Wireshark “Follow TCP Stream” with a “Save”.
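
For reference, here is a rough sketch of the make_loader.py idea (this is not the actual tool from the download links below; the C template, the int 3 trap, and the command-line handling are illustrative assumptions):

import sys

C_TEMPLATE = """/* auto-generated shellcode loader (sketch) */
unsigned char shellcode[] = {{ {payload} }};

int main(void)
{{
    {trap}
    /* note: on systems with DEP/NX the buffer may need to be made executable */
    ((void (*)(void))shellcode)();
    return 0;
}}
"""

def make_loader(path, skip=0, trap=False):
    """Return C source that embeds the shellcode stored in 'path'."""
    with open(path, "rb") as f:
        payload = f.read()[skip:]
    payload_bytes = ", ".join("0x%02x" % b for b in payload)
    trap_code = '__asm__("int3");' if trap else ""
    return C_TEMPLATE.format(payload=payload_bytes, trap=trap_code)

if __name__ == "__main__":
    # e.g. python make_loader_sketch.py shellcode.bin > loader.c
    print(make_loader(sys.argv[1]))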

An alternative to these tools is shellcode2exe, although I didn’t feel like installing PHP (and a webserver).

Here are the files….
run_shellcode.c 1.0 make_loader.py 1.0

The basics of how digital forensics tools work

I’ve noticed there is a fair amount of confusion about how forensics tools work behind the scenes. If you’ve taken a course in digital forensics this will probably be “old hat” for you. If on the other hand, you’re starting off in the digital forensics field, this post is meant for you.

There are two primary categories of digital forensics tools, those that acquire evidence (data), and those that analyze the evidence. Typically, “presentation” functionality is rolled into analysis tools.

Acquisition tools, well… acquire data. This is actually the easier of the two categories of tools to write, and there are a number of acquisition tools in existence. There are two ways of storing the acquired data: on a physical disk (disk to disk imaging) and in a file (disk to file imaging). The file that the data is stored in is also referred to as a logical container (or logical evidence container, etc.) There are a variety of logical container formats, with the most popular being DD (a.k.a. raw, as well as split DD) and EWF (Expert Witness, a variant used with EnCase). There are other formats, including sgzip (seekable gzip, used by PyFlag) and AFF (Advanced Forensics Format). Many logical containers allow an examiner to include metadata about the evidence, including cryptographic hash sums, and information about how and where the evidence was collected (e.g. the technician’s name, comments, etc.)

Analysis tools work in two major phases. In the first phase, the tools read in the evidence (data) collected by the acquisition tools as a series of bytes, and translate the bytes into a usable structure. An example of this, would be code that reads in data from a DD file and “breaks out” the different components of a boot sector (or superblock on EXT2/3 file systems). The second phase is where the analysis tool examines the structure(s) that were extracted in the first phase and performs some actual analysis. This could be displaying the data to the screen in a more-human-friendly-format, walking directory structures, extracting unallocated files, etc. An examiner will typically interact with the analysis tool, directing it to analyze and/or produce information about the structures it extracts.
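
Here is a tiny sketch of that first phase: reading raw bytes from a DD image (the filename “image.dd” is an assumed example) and breaking out one field of a FAT boot sector (bytes per sector, a little-endian 16-bit value at offset 11):

import struct

with open("image.dd", "rb") as f:
    boot_sector = f.read(512)

bytes_per_sector = struct.unpack_from("<H", boot_sector, 11)[0]
print(bytes_per_sector)   # typically 512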

Presentation of digital evidence (and conclusions) is an important part of digital forensics, and is ultimately the role of the examiner, not a tool. Tools however can support presentation. EnCase allows an examiner to bookmark items, and ProDiscover allows an examiner to tag “Evidence of Interest”. The items can then be exported as files, to word documents, etc. Some analysis tools have built in functionality to help with creating a report.

Of course, there is a lot more to the implementation of the tools than the simplification presented here, but this is the basics of how digital forensics tools work.

“Forensically Sound Duplicate” (Update)

So after the whirl of feedback I’ve received, we’ve moved discussions of this thread from Richard Bejtlich’s blog to a Yahoo! group. The url for the group is: http://groups.yahoo.com/group/forensically_sound/

We now return this blog to its regularly scheduled programming… 🙂

“Forensically Sound Duplicate”

I was reading Craig Ball’s (excellent) presentations on computer forensics for lawyers at http://www.craigball.com/articles.html. One of the articles mentions a definition for forensically sound duplicate as:

“A ‘forensically-sound’ duplicate of a drive is, first and foremost, one created by a method which does not, in any way, alter any data on the drive being duplicated. Second, a forensically-sound duplicate must contain a copy of every bit, byte and sector of the source drive, including unallocated ’empty’ space and slack space, precisely as such data appears on the source drive relative to the other data on the drive. Finally, a forensically-sound duplicate will not contain any data (except known filler characters) other than which was copied from the source drive.”

There are 3 parts to this definition:

  1. Obtained by a method which does not, in any way, alter any data on the drive being duplicated
  2. That a forensically sound duplicate must contain a copy of every bit, byte and sector of the source drive
  3. That a forensically sound duplicate will not contain any data except filler characters (for bad areas of the media) other than that which was copied from the source media.

This definition seems to fit when performing the traditional disk-to-disk imaging. That is, imaging a hard disk from a non-live system, and writing the image directly to another disk (not a file on the disk, but directly to the disk.)

Picking this definition apart, the first thing I noticed (and subsequently emailed Craig about) was the fact that the first part of the definition is often an ideal. Take for instance imaging RAM from a live system. The act of imaging a live system changes the RAM and consequently the data. The exception would be to use a hardware device that dumps RAM (see “A Hardware-Based Memory Acquisition Procedure for Digital Investigations” by Brian Carrier.)

During the email discussions, Craig pointed out an important distinction between data alteration inherent in the acquisition process (e.g. running a program to image RAM requires the imaging program to be loaded into RAM, thereby modifying the evidence) and data alteration in an explicit manner (e.g. wiping the source evidence as it is being imaged.) Remember, one of the fundamental components of digital forensics is the preservation of digital evidence.

A forensically sound duplicate should be acquired in such a manner that the acquisition process minimizes the data alterations inherent to data acquisition, and not explicitly alter the source evidence. Another way of wording this could be “an accurate representation of the source evidence”. This wording is intentionally broad, allowing one to defend/explain how the acquisition was accurate.

The second part of the definition states that the duplicate should contain every bit, byte, and sector of the source evidence. Similar to the first part of the definition, this is also an ideal. If imaging a hard disk or other physical media, then this part of the definition normally works well. Consider the scenario when a system with multiple terabytes of disk storage contains an executable file with malicious code. If the size of the disk (or other technological restriction) prevents imaging every bit/byte/sector of the disk, then how should the contents of the file be analyzed if simply copying the contents of the file does not make it “forensically sound”? What about network based evidence? According to the folks at the DFRWS 2001 conference (see the “Research Road Map” PDF) there are 3 “types” of digital forensic analysis that can be applied:

– Media analysis (your traditional file system style analysis)
– Code analysis (which can be further abstracted to content analysis)
– Network analysis (analyzing network data)

Since more and more of the latter two types of evidence are starting to come into play (e.g. the recent UBS trial with Keith Jones analyzing a logic bomb), a working definition of “forensically sound duplicate” shouldn’t be restricted to just “media analysis”. Perhaps this can be worded as “a complete representation of the source evidence”. Again, intentionally broad so as to leave room for explanation of circumstances.

The third part of the definition states that the duplicate will not contain any additional data (with the exception of filler characters) other than what was copied from the source medium. This part of the definition rules out “logical evidence containers”, essentially any type of evidence file format that includes any type of metadata (e.g. pretty much anything “non dd”.) Also compressing the image of evidence on-the-fly (e.g. dd piped to gzip piped to netcat) would break this. Really, if the acquisition process introduces data not contained in the source evidence, the newly introduced data should be distinguishable from the duplication of the source evidence.

Now beyond the 3 parts that Craig mentions, there are a few other things to examine. First of all is what components of digital forensics should a working definition of “forensically sound” cover? Ideally just the acquisition process. The analysis component of forensics, while driven by what was and was not acquired, should not be hindered by the definition of “forensically sound”.

Another fact to consider is that a forensic exam should be neutral, and not “favor” one side or the other. This is for several reasons:
– Digital forensic science is a scientific discipline. Science is ideally as neutral as possible (introducing as little bias as possible). Favoring one side or the other introduces bias.
– Often times the analysis (and related conclusions) are used to support an argument for or against some theory. Not examining relevant information that could either prove or disprove a theory (e.g. inculpatory and exculpatory evidence) can lead to incorrect decisions.

So, the question as to what data should and shouldn’t be included in a forensically sound duplicate is hard to define. Perhaps “all data that is relevant and reasonably believed to be relevant.” The latter part could come into play when examining network traffic (especially on a large network). For instance, when monitoring a suspect on the network (sniffing traffic) and I create a filter to only log/extract traffic to and from a system the suspect is on, I am potentially missing other traffic on the network (sometimes this can even be legally required as sniffing network traffic is considered in many places a wiretap). A definition of “forensically sound duplicate” shouldn’t prevent this type of acquisition.

So, working with some of what we have, here is perhaps a (base if nothing else) working definition for “forensically sound”:

“A forensically sound duplicate is a complete and accurate representation of the source evidence.  A forensically sound duplicate is obtained in a manner that may inherently (due to the acquisition tools, techniques, and process) alter the source evidence, but does not explicitly alter the source evidence.  If data not directly contained in the source evidence is included in the duplicate, then the introduced data must be distinguishable from the representation of the source evidence.  The use of the term complete refers to the components of the source evidence that are both relevant, and reasonably believed to be relevant.”

At this point I’m interested in hearing comments/criticisms/etc. so as to improve this definition.  If you aren’t comfortable posting in a public forum, you can email me instead and I’ll anonymize the contents 🙂