Copyright 2011 by EasyCo LLC

Optimizing Drive Life

In the prior section, we pointed out the importance of free space in enhancing speed. In this section, we discuss how free space also extends drive life. Let's begin with the way traditional random writes to flash media dramatically reduce product life.

Each flash drive is made up of a series of erase blocks, typically 2mb in size. To perform a random write, a free erase block is first erased. Next, the unchanged data ahead of the change is copied into the new block. Then the changed data is copied in. Finally, the rest of the unchanged data is copied in, and the original block is marked as available.
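The rewrite cycle just described can be sketched in a few lines of Python. This is a toy model, not MFT's actual code; the 2mb block size and the 4kb update are the illustrative numbers from the text:

```python
# Toy model of the erase-block rewrite described above. The numbers are
# illustrative: a 2mb erase block updated by a single 4kb random write.
ERASE_BLOCK = 2 * 1024 * 1024  # 2mb erase block, in bytes

def rewrite_block(old_block: bytes, offset: int, new_data: bytes) -> bytes:
    """Rewrite a whole erase block to change part of it: copy the
    unchanged leading data, then the change, then the trailing data."""
    assert len(old_block) == ERASE_BLOCK
    return old_block[:offset] + new_data + old_block[offset + len(new_data):]

block = bytes(ERASE_BLOCK)                                    # the old block
updated = rewrite_block(block, offset=4096, new_data=b"\xff" * 4096)

# a 4kb change still costs a full 2mb block write
print(f"write efficiency: {4096 / ERASE_BLOCK:.1%}")          # prints 0.2%
```

The whole 2mb block is written no matter how small the change, which is exactly the inefficiency the next paragraphs quantify.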

The problem here is: what if the data written is only 4kb in size? In a Windows system, three of the four writes performed to update a specific file will always be 4kb in size, because what is being updated is inode controls. (In Linux, 2 out of 3 writes are inode updates.) Another way of thinking about this is that even if your data is a megabyte in size, the average write will only be about 256kb. Similarly, database random updates are always in 4kb or 8kb chunks, email messages average about 11kb in length, and most cached web elements are just a few kilobytes in size.
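That 256kb figure can be checked directly. The sketch below assumes the Windows mix described above, one 1mb data write plus three 4kb metadata writes:

```python
# Average write size under the Windows pattern described above:
# three of every four writes are 4kb inode-control updates, so even a
# 1mb data write averages out near 256kb per write.
KB = 1024
MB = 1024 * KB

writes = [1 * MB] + 3 * [4 * KB]        # one data write, three metadata writes
average = sum(writes) / len(writes)     # bytes per write

print(f"average write size: {average / KB:.0f}kb")  # prints 259kb
```

The exact average is 259kb; the text rounds down to 256kb.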

The write efficiency of a 4kb write is only 4kb/2mb, or 0.2%. Thus, if a manufacturer specifies a linear write level of 50gb a day, then at a 4kb block size the machine can only accept 100 megabytes a day before it exceeds that specification. Similarly, if we compute a theoretical yield by taking the expected erase cycles per cell times the drive size, and dividing by the drive's life in days, we find that some drives do not have a long practical life. Consider the worst case: an MLC drive rated for only 5,000 erase cycles and a 32gb size. Here, the formula would be:

5,000 cycles x 0.2% x 32gb = 10 x 32gb = 320gb of lifetime writes
320gb / (5 x 365 days) = approximately 175mb/day

Here, we come to the conclusion that, used at the block size described, this drive could only accept a total of 320gb of writes, or 175mb/day if we wanted the drive to last for five years. At this specification, the drive would only be suitable for a light-duty environment on a laptop.
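The worst-case formula works out as follows; the sketch uses the text's own inputs (5,000 cycles, 32gb, efficiency rounded to 0.2%, a five-year life):

```python
# Worst-case endurance from the text: 5,000 erase cycles, 32gb drive,
# 4kb writes into 2mb erase blocks (rounded to 0.2% efficiency).
erase_cycles = 5_000
capacity_gb = 32
efficiency = 0.002          # 4kb / 2mb, as rounded in the text
days = 5 * 365              # a five-year service life

total_gb = erase_cycles * capacity_gb * efficiency   # lifetime user writes
per_day_mb = total_gb * 1000 / days                  # daily write budget

print(f"{total_gb:.0f}gb lifetime, {per_day_mb:.0f}mb/day")  # 320gb, 175mb/day
```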

Fortunately, some chip makers such as Toshiba make MLC chips that last four times as long as this 5,000-cycle case. Similarly, a drive that is four times as big has four times the write endurance. Finally, the practical average write is normally at least eight kilobytes and moving towards sixteen. Thus, with best-case multipliers of 4, 4, and 3, the amount of data writeable daily becomes satisfactory for a power laptop user, even if not sufficient for a server. (This said, the power laptop user still has a major problem with timeliness of writing. See MFT for Laptops.)

But MFT dramatically extends life. Consider what happens on a machine with 30% free space. This becomes a product with a write efficiency of 30%, or 150 times the life of the 0.2% case discussed above. With the application of tuned cleanups and statistical distribution, the 30% effectively becomes a number between 40% and 50%, with the higher efficiency occurring under heavier workloads. As a result, a 5,000-cycle 32gb drive can accept 48 terabytes over five years, or 26gb a day, at 30% efficiency, and up to 80 terabytes at the higher practical efficiencies. And a 128gb drive with Toshiba chips can accept three to four complete overwrites of its contents a day.
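Running the same endurance formula across the efficiencies discussed shows how the budget scales; the sketch below uses the 5,000-cycle 32gb drive from the worst case:

```python
# Lifetime writes for the 5,000-cycle 32gb drive at the write
# efficiencies discussed: 0.2% (raw 4kb writes), 30% (MFT with 30%
# free space), and 50% (MFT with tuned cleanup under heavy load).
erase_cycles = 5_000
capacity_gb = 32
days = 5 * 365

results = {}
for efficiency in (0.002, 0.30, 0.50):
    total_tb = erase_cycles * capacity_gb * efficiency / 1000   # terabytes
    per_day_gb = total_tb * 1000 / days
    results[efficiency] = (total_tb, per_day_gb)
    print(f"{efficiency:5.1%}: {total_tb:5.1f}tb lifetime, {per_day_gb:4.1f}gb/day")
```

At 30% efficiency the drive accepts 48tb over five years (26gb a day); at 50%, 80tb.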

Let's give that efficiency a real-world comparison. Suppose we have a 15k rpm SAS drive, the fastest hard drive made, used in a database environment where everything written is in 8kb random blocks. Let's further assume that it is doing 100% writes, 24 hours a day, at saturation. Such a drive can write about 250 IOs a second, so at 8kb that is roughly 2mb a second, or 2mb x 86,400 seconds: about 168gb of data per day per drive, whatever that drive's size. In any real-world case, that is unlikely to happen: minimally, half the time will be spent random reading, and more practically the drive will spend 70% to 80% of its time random reading. The point here is that in practical terms, even the worst-quality MLC flash media, when used with MFT, is able to do any job those SAS drives do today, like for like, and better-quality media can do a lot more work.
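The SAS ceiling quoted above is simple arithmetic, sketched here with the text's own rounding (250 IOs a second at 8kb, taken as 2mb/s):

```python
# Saturated random-write throughput of a 15k rpm SAS drive:
# ~250 IOs/second at 8kb each, writing 24 hours a day.
mb_per_second = 2          # 250 x 8kb, rounded to 2mb/s as in the text
seconds_per_day = 86_400

gb_per_day = mb_per_second * seconds_per_day / 1024

print(f"{gb_per_day:.0f}gb/day at saturation")  # prints 169gb/day
```

The exact figure is 168.75gb/day, which the text rounds to 168gb.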

The reasonable conclusion is that with MFT, drives using MLC chip technology are satisfactory for about 95% of all server applications, and that SLC technology drives, with their capacity to be overwritten 15 to 25 times a day, are suitable for use even in mission-critical 7x24 Enterprise environments.