Copyright 2011 by EasyCo LLC

Journals and Logs

MFT inherently solves the performance problem by converting clusters of random writes into linear writes. This gets written data off your computer very quickly, and because all data is written in FIFO order, it also ensures that data is updated in the order received, making it far less susceptible to disk corruption after a crash. If you wish, you can also set your server to pulse-write all write-committed data every tenth of a second.
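The core idea above can be sketched in a few lines. This is a hypothetical illustration of log-structured writing, not MFT's actual implementation: writes to random block numbers are appended sequentially in arrival (FIFO) order, and a mapping tracks the latest copy of each block.

```python
# Hypothetical sketch of the log-structured idea described above: random
# block writes become a sequential append, so the medium only ever sees
# linear writes, and a crash can lose only the tail of the log.
class LinearWriteLog:
    def __init__(self):
        self.log = []       # sequential log of (block_number, data) records
        self.mapping = {}   # block_number -> index of its latest record

    def write(self, block_number, data):
        # Every write, however random its target block, lands at the
        # end of the log in arrival (FIFO) order.
        self.mapping[block_number] = len(self.log)
        self.log.append((block_number, data))

    def read(self, block_number):
        # Reads follow the mapping to the most recent copy of the block.
        return self.log[self.mapping[block_number]][1]

dev = LinearWriteLog()
for blk, payload in [(900, b"a"), (3, b"b"), (900, b"c")]:
    dev.write(blk, payload)
print(dev.read(900))  # latest copy wins: b'c'
print(dev.read(3))    # b'b'
```

Because the log is strictly append-only, replaying it after a crash reconstructs the blocks in exactly the order the writes were received.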

But if you are a purist, this is not enough. Whether because you have an extremely high transaction volume or very specific audit requirements, you may need transaction logging or journaling that is hard-committed to disk, sector by sector, before the next step is taken. With hard drives, this drives a disk subsystem into the mud: every transaction must be double-written to the log before going to the file system, and there can be absolutely no out-of-order writing. As a result, neither the disk nor the cache pool can be optimized for least seek time or for clustering related sectors on a track. With MFT, by contrast, journaling can be done at low time cost.
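A minimal sketch of what "hard-committed" means in practice, assuming a POSIX system: each journal record is written with a length-and-checksum header and fsync'd before the caller is allowed to proceed. The record format and function name are illustrative, not part of MFT.

```python
import os
import struct
import tempfile
import zlib

def journal_append(fd, record: bytes):
    # Hard-committed journal write: prefix the record with its length and
    # CRC32, write it, and do not return until fsync() confirms the data
    # has reached stable storage.
    header = struct.pack("<II", len(record), zlib.crc32(record))
    os.write(fd, header + record)
    os.fsync(fd)  # the hard commit: blocks until the device acknowledges

# Demo on a temporary file standing in for the journal device.
path = os.path.join(tempfile.mkdtemp(), "journal.bin")
fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
journal_append(fd, b"txn-0001: debit=100 credit=100")
journal_append(fd, b"txn-0002: debit=50 credit=50")
os.close(fd)
```

On a rotating disk, each such fsync costs a seek plus rotational latency; that per-record synchronous wait is exactly the overhead the paragraph describes.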

We have a new solution to this problem of individually committed log items. Preliminary specifications keep the incremental overhead of sector-by-sector committed journals to approximately 1.2 milliseconds. This is roughly half an order of magnitude better than what hard disk drives can achieve, even high-performance 15,000 rpm devices. It means that even very large mission-critical systems can be sped up, and that their purchase, power, and operating costs can be significantly reduced, because many storage arrays today are huge only in order to maximize disk I/O operations.
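The "half an order of magnitude" claim can be checked with rough arithmetic. The seek figure below is a generic hardware assumption, not a vendor specification; only the 1.2 ms journal overhead and the 15,000 rpm speed come from the text.

```python
# Back-of-envelope latency comparison. A 15,000 rpm drive waits, on
# average, half a revolution of rotational latency per committed write,
# plus a short seek (assumed here); the MFT figure is the ~1.2 ms per
# committed record quoted in the text.
rpm = 15_000
half_rev_ms = (60_000 / rpm) / 2        # 2.0 ms average rotational latency
seek_ms = 3.5                           # typical short-seek time (assumed)
hdd_commit_ms = half_rev_ms + seek_ms   # ~5.5 ms per hard-committed write
mft_commit_ms = 1.2                     # figure quoted in the text
print(hdd_commit_ms / mft_commit_ms)    # ~4.6x faster; half an order of
                                        # magnitude is 10**0.5 ~ 3.2x
```

Under these assumptions the advantage comes out between 3x and 5x, consistent with the half-order-of-magnitude claim.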