ExFAT …. On the road again!


I have been invited by the meet-up group NYC4SEC to do a presentation on the exFAT file system on June 11, 2014, and by HTCIA at their 2014 International Conference on Tuesday, August 26, 2014.

I have been updating the PowerPoint deck to pick up any good stuff to add. After I give the presentations I’ll post the new decks to this blog. I also expect to be adding some new material. I have an SDXC card on order and will be doing some testing and acquisitions that I have been wanting to check out for a while now!

SANS exFAT paper getting noticed


Steve Bunting is an author or co-author of two books in the forensics field that contain sections on exFAT – and gives a plug for my SANS paper on exFAT.

One book is:

Mastering Windows Network Forensics and Investigation

By Steven Anson, Steve Bunting, Ryan Johnson, Scott Pearson

and the other is:

EnCase Computer Forensics — The Official EnCE: EnCase Certified Examiner …

By Steve Bunting

Thanks Steve!


Windows 8


It looks like Windows 7 may have some interesting exFAT stuff. I am in the process of upgrading one of my laptops to Windows 8 and will then check the differences. Windows 8 allows creation of bootable exFAT media, so it will be interesting to look at the boot VBR records.

2011 in review


The WordPress.com stats helper monkeys prepared a 2011 annual report for this blog.


Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 12,000 times in 2011. If it were a concert at Sydney Opera House, it would take about 4 sold-out performances for that many people to see it.

Click here to see the complete report.

Introduction to the Microsoft Extended FAT File System made the AT&T Tech Channel


On April 19th, 2011, I did the exFAT presentation at the Computer Forensics Show in NY (I also did it in 2010). But this time, the CFS originators arranged to have a couple of the tracks recorded for the AT&T Tech Channel. So, if you want to see the recorded session, use this link.

http://techchannel.att.com/play-video.cfm/2011/8/16/Conference-TV-Computer-Forensics-Show:-Introduction-to-exFAT

If that link doesn’t work, go to http://techchannel.att.com/ and enter either “shullich” or “exfat” in the search box to find the presentation.

exFAT Defragmentation


I am starting to see some questions come up regarding defragmentation of an exFAT volume. Looking at Diskeeper today, it doesn’t appear that their product supports the exFAT filesystem.

Keep in mind that defragmentation can be a problem, especially if you set options to run the defrag constantly. exFAT was designed for removable media, such as USB sticks and SD cards, which use flash solid-state memory. Because the chips are gated circuits, they degrade over time as you write to them. Although some gates can withstand 10,000, 100,000, or even millions of writes per gate over the life of the chip, your ability to write files is finite, and you can effectively wear out sections of the chip memory with excessive writes.

Since exFAT is a Microsoft proprietary file system, any program that implements the file system requires a license for the specification from Microsoft. Now, I am not a lawyer, and I am not that well versed in the Microsoft licensing scheme, but the way I think it works, if someone wants to write a defrag program and be legit, they may need the Microsoft license. If they don’t have the license, then you could be at risk – sort of like hiring an unlicensed plumber to work on your pipes.

I see the need for at least two defragmentation scenarios.

The first scenario is the directory, which includes subdirectories and the root directory. In certain cases directory entries are marked “not in use”, which is common when a file is deleted or under special renaming circumstances. exFAT tends to write new directory entries rather than reuse already inactive ones. I believe the reason for this may be to spread the gate changes and reduce wear on the flash memory. However, it is theoretically possible to compress the directory, and if there are many “not in use” entries, a compress could free up clusters. The possible problem with a directory compress is that each file has a “search hash” for quick file searching. This hash is actually yet another Microsoft patent, and if compressing the directory requires recalculating the search hash values, I think this is where Microsoft licensing can be an issue.
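
To make the “not in use” idea concrete, here is a minimal Python sketch that counts in-use versus not-in-use entries in a raw dump of a directory’s clusters (the input file name root_dir.bin is just a placeholder, not output from any particular tool). Each directory entry is 32 bytes, and the high bit of the entry type byte indicates whether the entry is in use:

```python
# Minimal sketch: count in-use vs. "not in use" entries in a raw dump of an
# exFAT directory's clusters. "root_dir.bin" is a placeholder file name.
ENTRY_SIZE = 32  # every exFAT directory entry is 32 bytes

with open("root_dir.bin", "rb") as f:
    data = f.read()

in_use = not_in_use = 0
for off in range(0, len(data) - ENTRY_SIZE + 1, ENTRY_SIZE):
    entry_type = data[off]
    if entry_type == 0x00:      # 0x00 marks the end of the directory
        break
    if entry_type & 0x80:       # high bit set: entry is in use
        in_use += 1
    else:                       # high bit clear: entry marked "not in use"
        not_in_use += 1

print(f"in use: {in_use}, not in use: {not_in_use}")
```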

The other scenario is the defragmentation of the files themselves. Keep in mind that subdirectories are files as well, but if a subdirectory is defragmented – but not compressed – I don’t think the search hash values are affected. The difference between a compress and a defragment is that a compress removes the “not in use” records embedded in the directory file itself, while a defragment just puts the clusters in the proper order and places them together.
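
For illustration, here is how I understand the search hash to be calculated: a 16-bit rotate-and-add over the UTF-16LE bytes of the up-cased file name. Treat this Python sketch as my reading of the patent, not an authoritative implementation (str.upper() stands in for the volume’s up-case table):

```python
def exfat_name_hash(upcased_name: str) -> int:
    """Sketch of the exFAT search hash as I understand it: a 16-bit
    rotate-and-add over the UTF-16LE bytes of the up-cased file name."""
    h = 0
    for byte in upcased_name.encode("utf-16-le"):
        # rotate right one bit, add the next byte, keep 16 bits
        h = ((0x8000 if (h & 1) else 0) + (h >> 1) + byte) & 0xFFFF
    return h

# The hash depends only on the (up-cased) name, not on where the entry
# sits in the directory. str.upper() stands in for the up-case table here.
print(hex(exfat_name_hash("README.TXT".upper())))
```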

exFAT will attempt to write files in an unfragmented manner. By using the allocation bitmap, when a file is allocated, exFAT will try to store it unfragmented. Why? Because it is faster, and faster for two reasons. The more traditional reason, as with any filesystem even back to mainframe days, is that on a physical disk, if a file is fragmented, the physical read/write heads have to move to the physical cylinder/head/sector to read each block. If the file is unfragmented, head movement is minimal or non-existent except to seek to the first sector. Flash memory is electronic random access, so this hardly applies as there is no physical movement. The second reason, and part of the reason why exFAT is supposed to be faster than FAT32, is the cluster run. A cluster run is the chain in the FAT, in which the clusters are chained in a forward singly linked list. This means that to find the next cluster, the FAT is referenced to find the address of the next cluster. FAT12, FAT16 and FAT32 (what I call legacy FAT) all use cluster runs to track allocation of a file. exFAT does not use the FAT to track allocation (the allocation bitmap is used for that), and the cluster run – the cluster chaining – is only needed and used when the file is fragmented. If the file is NOT fragmented, there is no cluster run, the FAT is not written to, and a bit in the stream extension directory record called the “FAT Chain Invalid” flag is set. The extra writing, reading, and management of the cluster run goes away if the file is not fragmented. So, for performance reasons, it is better to keep the file unfragmented so you don’t incur the cost of cluster runs.
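
To show where that flag lives, here is a minimal Python sketch that parses a 32-byte stream extension directory entry (entry type 0xC0; by my reading of the layout, the flags byte is at offset 1, the first cluster at offset 20, and the data length at offset 24) and reports whether the file can be read contiguously or whether the FAT chain has to be followed:

```python
import struct

def parse_stream_extension(entry: bytes) -> dict:
    """Sketch: pull allocation details from a 32-byte exFAT stream
    extension directory entry (entry type 0xC0)."""
    if len(entry) != 32 or entry[0] != 0xC0:
        raise ValueError("not an in-use stream extension entry")
    flags = entry[1]                                   # GeneralSecondaryFlags
    first_cluster, data_length = struct.unpack_from("<IQ", entry, 20)
    return {
        "fat_chain_invalid": bool(flags & 0x02),       # set => no cluster run
        "first_cluster": first_cluster,
        "data_length": data_length,
    }

# Fabricated 32-byte entry purely for illustration: flags 0x03 (allocated,
# FAT chain invalid), first cluster 5, data length 10,000 bytes.
sample = bytes([0xC0, 0x03]) + bytes(18) + struct.pack("<IQ", 5, 10000)
info = parse_stream_extension(sample)
if info["fat_chain_invalid"]:
    print("contiguous: clusters start at", info["first_cluster"], "- no FAT lookups")
else:
    print("fragmented: follow the cluster run in the FAT from", info["first_cluster"])
```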

So, what do you do if you really need to defragment an exFAT volume? Without the tools and utilities today, you are sort of out of luck. But there is a brute force method; it is risky, and it is time-consuming. But if you have to, then you have to.

You may be able to just copy a few files off the drive (I really mean MOVE) to free up clusters and hope to create a big enough hole so that you can build a large contiguous space. Then move them back. The more you move, the higher the probability that you will remove the bottleneck. In the worst case scenario, move everything to a separate drive, reformat the drive that should now be empty because you moved everything off, then move everything back. This may be your only hope right now if you’re stuck.
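
If you do try the brute-force route, it is really just a scripted move-off and move-back. A minimal Python sketch, assuming the exFAT volume is mounted at E:\ and there is room for a staging folder on another drive at D:\staging (both paths are assumptions, and you should have a verified backup first):

```python
import shutil
from pathlib import Path

# Brute-force "defragment": move everything off the exFAT volume, then move
# it back so the files get rewritten into (hopefully) contiguous space.
# E:/ and D:/staging are assumptions -- use your own drives, skip any system
# folders that may be present, and have a verified backup before trying this.
volume = Path("E:/")
staging = Path("D:/staging")
staging.mkdir(parents=True, exist_ok=True)

for item in list(volume.iterdir()):          # move off...
    shutil.move(str(item), str(staging / item.name))

# (optionally reformat the now-empty volume here)

for item in list(staging.iterdir()):         # ...then move back
    shutil.move(str(item), str(volume / item.name))
```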