When is MFT Less Than Perfect?
MFT, like all software, is a compromise. It was originally designed to solve database performance problems by making random writes as fast as possible: at least as fast as random reads on flash media. To do this, it switched from the traditional random writing method, which always kept each piece of data in exactly one place, to a linear writing method. The trade-off is that data is written in the order it is received, and so may not end up in long, contiguous blocks. As a result, the performance of big files may suffer, even though the performance of smaller files will dramatically improve. Whether this splintering does, in fact, occur, and the degree to which it occurs, will depend upon how you use your drive.
Let's consider a worst case first. Suppose you are writing nothing but one-megabyte files, 24 hours a day without pause, and that you have only 25% write efficiency. Here, in principle, each 2 MB write/erase block can accept only about 25% new data, so each of your 1 MB writes will be broken into two, and perhaps three, pieces. If you set up one of the drive-testing software programs to do nothing but 1 MB writes, you will see that this is exactly what happens, and that your effective write speed declines to only one quarter of the drive's actual write speed. Likewise, if this were the only task being performed, and it were being performed all the time, you would be better off using the flash drive in traditional random write mode, because the random write efficiency would be higher than the write efficiency of MFT. All this said, this is a very rare circumstance.
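The worst-case arithmetic above can be sketched in a few lines. This is an illustrative model, not a measurement; the 2 MB erase-block size and the 100 MB/sec raw write speed are assumptions chosen to match the example.

```python
import math

# Illustrative assumptions, not measured values.
ERASE_BLOCK_MB = 2.0      # size of one write/erase block
WRITE_EFFICIENCY = 0.25   # fraction of each block that can accept new data
RAW_WRITE_MB_S = 100.0    # assumed raw linear write speed of the drive

# At 25% efficiency, writing 1 MB of payload costs 1 / 0.25 = 4 MB of
# actual writes, so the effective write speed is a quarter of the raw speed.
amplification = 1.0 / WRITE_EFFICIENCY
effective_mb_s = RAW_WRITE_MB_S / amplification
print(f"write amplification:   {amplification:.0f}x")
print(f"effective write speed: {effective_mb_s:.0f} MB/s")

# Each 2 MB block accepts only 0.5 MB of new data, so a 1 MB file spans
# at least ceil(1 / 0.5) = 2 blocks -- and 3 if it starts mid-block.
pieces = math.ceil(1.0 / (ERASE_BLOCK_MB * WRITE_EFFICIENCY))
print(f"minimum pieces per 1 MB file: {pieces}")
```

Running this reproduces the figures in the text: a 4x write amplification, an effective speed of one quarter of raw, and 1 MB files split into at least two pieces.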
Now consider the same issues from the perspective of personal use on a laptop. Here, too, you are writing big 1 MB files, but you are not doing it very often, and your flash drive is active for only a small percentage of each day. The reason for this inactivity is simple: even if you were using traditional random writes, everything you write in a day could be written back to back in about seven minutes. Whenever your flash drive has free time, MFT builds totally empty write/erase blocks, turning almost all of your free space into truly empty space. Accordingly, when you copy in multiple large blocks of data, these will be contiguous and will write at the maximum speed. Being contiguous, they will also read at effectively the linear read speed of the flash.
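The idle-time consolidation described above can be illustrated with a toy model. This sketch is not MFT's actual algorithm; it only shows the principle that compacting the valid pages of partially stale blocks leaves behind whole, empty blocks ready for contiguous writes. Block and page sizes here are arbitrary toy numbers.

```python
BLOCK_SLOTS = 4  # pages per erase block (toy number)

# Each block is a list of pages; True = valid data, False = stale/free.
# Before consolidation, free space is scattered across partly-used blocks.
blocks = [
    [True, False, True, False],
    [False, False, True, False],
    [True, True, False, False],
]

def consolidate(blocks):
    """Copy all valid pages into as few blocks as possible, leaving
    the remaining blocks completely empty for future linear writes."""
    valid = [page for blk in blocks for page in blk if page]
    compacted = []
    for i in range(0, len(valid), BLOCK_SLOTS):
        chunk = valid[i:i + BLOCK_SLOTS]
        chunk += [False] * (BLOCK_SLOTS - len(chunk))  # pad partial block
        compacted.append(chunk)
    while len(compacted) < len(blocks):               # rest are fully empty
        compacted.append([False] * BLOCK_SLOTS)
    return compacted

empty_before = sum(1 for b in blocks if not any(b))
after = consolidate(blocks)
empty_after = sum(1 for b in after if not any(b))
print(f"fully empty blocks before: {empty_before}, after: {empty_after}")
```

Before consolidation no block is fully empty even though half the space is free; afterward the same five valid pages occupy two blocks and one block is entirely empty, available for a fast contiguous write.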
Having discussed the most common case, let's go back to the worst case and consider how much it slows down the linear read speed, if at all. Here, the fundamental issue is the degree of fragmentation versus the random access time for each fragment. The best-made flash drives have random access times between 0.04 ms and 0.1 ms (i.e., between 40 and 100 microseconds). Suppose you have a 1 MB file that has been broken into 10 pieces, and that the ideal linear read speed is 100 MB/sec, or 10,000 microseconds per megabyte. To this, we must add the random access time for 10 random accesses: 400 to 1,000 microseconds. Working this out, we see that the linear read speed would be cut from 100 MB/sec to between 91 MB/sec and 96 MB/sec, depending upon the random access speed.
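The read-speed arithmetic works out as follows. All figures (100 MB/sec streaming rate, 10 fragments, 40 to 100 microsecond access times) come from the example in the text.

```python
STREAM_US_PER_MB = 10_000   # 100 MB/sec == 10,000 microseconds per MB
FRAGMENTS = 10              # the 1 MB file is broken into 10 pieces

results = {}
for access_us in (40, 100):  # best- and worst-case random access times
    # Total time to read 1 MB = streaming time + one seek per fragment.
    total_us = STREAM_US_PER_MB + FRAGMENTS * access_us
    results[access_us] = 1_000_000 / total_us  # MB per second
    print(f"{access_us:>3} us seek: {results[access_us]:.1f} MB/s")
```

With 40 µs seeks the effective read speed is about 96 MB/sec; with 100 µs seeks it is about 91 MB/sec, matching the 91 to 96 MB/sec range above.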
The general conclusion to be drawn is that while big files suffer what may be a 4% to 10% performance loss, the dramatic gains in small random writes, which occur with much higher frequency, more than compensate for this many times over in most regular environments. That said, it is wise to remember the trade-offs if you are doing something exceptional. For instance, if you are running a video-on-demand server farm, storing video images using normal writing methods makes more sense. Thus, it makes sense to keep separate volumes for your image files and for your data files, and it does not make sense to pay for MFT in such environments when it is only needed for a portion of the space.