Flash Drive Performance

This section measures and reports the performance of various Flash SSDs.

Testing Methodology Overview

Testing has two components. The first is a detailed measure of performance at different read and write sizes, based upon a test developed by EasyCo that exercises random reads and writes both on the bare drive and on the drive running with MFT software. The second is a single aggregate value, for comparison purposes, again computed both for the bare drive and for the drive operating in an MFT environment (where this is possible). Both are preceded by general commentary on the drive as well as shopping links indicating where the Flash Drive can be purchased.

Tested Examples - Inclusion of Hard Disks

The tested media include several different models of hard disk drives, included purely for comparison against Flash SSDs. For hard drives, we consider only the performance of an individual disk; we do not consider the impact of RAID configurations such as RAID-5 and RAID-10, although in high-write environments RAID selection can have a profound impact on hard disk performance.

Testing Methodology - Exclusion of Linear Read and Write Tests

While our writeup of each drive reports its general performance, including linear read and write speed (usually taken from the manufacturer's specifications), EasyCo's testing methodology excludes linear read and write testing for three reasons.

The first is that the vast majority of data moving through general-purpose servers and workstations involves small pieces of data. Most processes performed by systems involve random reads and random writes of various sizes. For instance, inode updates, with a typical data size of 4kb, represent about three quarters of the reading and writing on a FAT file system, and about two thirds on common Linux file systems. The average email message is only about 11,000 bytes. Most database systems, such as Oracle and SQL Server, update in 8kb chunks, and other database systems generally operate in 4kb chunks. Finally, most elements sent or received by web servers and browsers are small chunks, if only because of the extensive use of JPG and other graphical elements.

Conversely, linear reads and writes, as they are generally understood, tend to apply to full database tables, programs, master document records, and large sound or video files. While some environments (such as video-on-demand servers) may involve significant or exclusive movement of such documents, on most servers and workstations these should be treated as a special case, with separate volumes or systems for large documents.

The second reason for exclusion is that MFT's benefit lies in increasing random write speeds, so measuring linear performance is in most cases irrelevant.

The third is that while linear reads and writes are well understood for hard disks, some flash disks treat "random linear IO" differently from truly sequential IO that begins at the start of a drive and proceeds all the way to its end. For instance, the SuperTalent MasterDisk MX will write truly sequentially at its rated linear speed of 36 megabytes per second, but when randomly writing 64mb files (which one would ordinarily think of as linear writes) its effective transfer speed falls to about 13 megabytes per second. We will note such exceptions when we find them in testing.

Testing Methodology - Random Read and Write Tests Described

The standard test performed by EasyCo takes a 4gb chunk of a flash disk (or, in the case of arrays, a 4gb chunk of the drive set) and performs a series of random read and write tests against it, beginning with a 512 byte block size and proceeding up to, typically, 4mb. Testing is performed first with a single thread (as would occur on a single-user workstation) and then repeated with 10 and 40 threads. The advantage of reporting multi-threaded activity is that it reveals anomalies that can occur as a result of stacked requests, or that emerge in aggregate when arrays of multiple drives are tested. Normally, in single-drive environments, multi-threaded tests perform only 5% to 10% better than single-threaded ones; but in the case of large arrays, total multi-threaded performance may be ten times that of a single-threaded test.
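As an illustration only, the following is a minimal single-threaded Python sketch of this kind of random read and write test. It is not EasyCo's actual test program; the file path, region size, and operation count are illustrative assumptions, and a real measurement would need to bypass the operating system's page cache (for example with O_DIRECT on Linux), which this simplified version does not do.

import os, random, time

TEST_FILE = "/tmp/iotest.bin"  # illustrative path: a pre-created file on the drive under test
REGION = 4 * 1024**3           # the 4gb test region described above
OPS = 1000                     # operations per measurement

def random_iops(block_size, write=False):
    # Returns random read (or write) IOs per second at the given block size.
    fd = os.open(TEST_FILE, os.O_RDWR)
    buf = os.urandom(block_size)
    blocks = REGION // block_size
    start = time.perf_counter()
    for _ in range(OPS):
        # Seek to a random block-aligned offset inside the test region.
        os.lseek(fd, random.randrange(blocks) * block_size, os.SEEK_SET)
        if write:
            os.write(fd, buf)
        else:
            os.read(fd, block_size)
    if write:
        os.fsync(fd)           # force writes to reach the device before timing stops
    elapsed = time.perf_counter() - start
    os.close(fd)
    return OPS / elapsed

# Sweep block sizes from 512 bytes up to 4mb, doubling each time.
size = 512
while size <= 4 * 1024**2:
    print("%8d bytes: %8.0f read IOPS, %8.0f write IOPS"
          % (size, random_iops(size), random_iops(size, write=True)))
    size *= 2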

In presenting data to you, we generally limit reported performance to the range between 4kb and 1mb. At the low end, smaller block tests are irrelevant because both the standard Windows and Linux file systems fetch data in 4kb chunks. At the high end, sizes greater than 1mb are practically linear reads or writes. In most cases, we preserve the multi-threaded results where they seem relevant.

Testing Methodology - Aggregate Value Methodology Described

We report two aggregate values to you: for the drive without MFT technology and for the drive with MFT technology present.

The core of this value is an abstraction of the Microsoft performance testing protocol for Microsoft Exchange servers. Its essential features are that the average email message is 11kb in size, and that the average Exchange server spends 70% of its activity performing random reads and 30% performing random writes. We approximate the 11kb data size by treating each message as two 8kb operations and one 16kb operation. We chose this mix because it reflects variance and can easily be computed by hand from the values involved. Over a nominal 10,000 operations, the 70/30 read/write split and the 2:1 mix of 8kb to 16kb operations yield 4,666 8kb reads, 2,334 16kb reads, 2,000 8kb writes, and 1,000 16kb writes, so the value can be computed as follows:

result = 10,000 / ( 4666/8k-read-val + 2334/16k-read-val + 2000/8k-write-val + 1000/16k-write-val )

Each of the read- and write-values shown here is the number of IOs that can be completed in one second. With traditional Flash technology, these tend to be very large numbers for random reads and very low numbers for random writes. Each count/IOPS pair produces the number of elapsed seconds required to perform that element of the workload. We then sum these times and use the sum as a divisor against the aggregate of 10,000 operations considered. Consider this from the perspective of a 15,000 rpm SAS drive with a reported rate of 250 IOPS whether reading or writing, and whether dealing with 8kb or 16kb data:

result = 10,000 / ( 4666/250 + 2334/250 + 2000/250 + 1000/250 )
result = 10,000 / ( 18.664 + 9.336 + 8 + 4 )
result = 10,000 / 40
result = 250

Thus, in the example considered, the computed result is 250 aggregate IOPS, which is exactly what we would intuitively expect.

We believe that our measure is a reasonable method of gauging both the absolute and the relative performance of different devices. However, if you prefer a different mix of data sizes because you believe it more precisely reflects your class of computing problem, you can use the same principles to compute your own values.
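For readers who want to adapt the measure, the following short Python sketch implements the formula above. The default mix is the Exchange-style workload described here; replace it to model your own workload. The sample call reproduces the 250 IOPS SAS example.

def aggregate_iops(read_8k, read_16k, write_8k, write_16k,
                   mix=(4666, 2334, 2000, 1000)):
    # mix holds the operation counts for 8kb reads, 16kb reads, 8kb writes,
    # and 16kb writes over a nominal 10,000 operations.
    rates = (read_8k, read_16k, write_8k, write_16k)
    seconds = sum(count / rate for count, rate in zip(mix, rates))
    return sum(mix) / seconds  # aggregate IOPS over the whole mix

print(aggregate_iops(250, 250, 250, 250))  # prints 250.0, matching the SAS example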

Testing Methodology - General Commentary

The third section of our analysis focuses on general commentary about the particular drive evaluated. This includes manufacturers' links and statistics, commentary about discovered issues (such as the SuperTalent anomaly mentioned above), and sources of supply with links to typical market pricing for the product. Our purpose here is to let you choose your own supply channel and brand preferences. Similarly, we report several comparison values, both with and without MFT. The first of these is the cost per gigabyte. The second is the cost per IOPS (input/output operations per second).

The final measure is the cost per IOPS-gigabyte. We created this standard because drives have different sizes and requirements; similarly, to the extent they are used with MFT, the value of the MFT used also changes. IOPS-gigs lets us normalize these differences. Remember that the IOPS-gigs value is only a general figure. For instance, a 3.5" 250gb 7,200 rpm SATA drive and some of our flash combinations can produce near-identical, very low values, even though their total costs and storage capacities vary radically. IOPS-gigs should be used only as a general measure of value, with special care given to what you want to accomplish.
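As a hypothetical illustration: the exact IOPS-gigabyte formula is our reading of the description above (cost divided by the product of aggregate IOPS and capacity), and all drive figures below are invented, but the three value measures could be computed along these lines:

def value_metrics(cost_usd, capacity_gb, aggregate_iops):
    # Assumed definitions: cost per gigabyte, cost per IOPS, and cost per
    # IOPS-gigabyte as cost / (IOPS * gigabytes); the last normalizes for
    # both capacity and speed, as described above.
    return {
        "cost per gigabyte": cost_usd / capacity_gb,
        "cost per IOPS": cost_usd / aggregate_iops,
        "cost per IOPS-gig": cost_usd / (aggregate_iops * capacity_gb),
    }

# Invented example: a $180, 250gb drive delivering 250 aggregate IOPS.
for name, value in value_metrics(180.0, 250, 250).items():
    print("%s: $%.4f" % (name, value))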

Ranking Table

The final element of our analysis is an application that lets you generate rankings based upon these various indices, looking at probable cost and performance in a database setting.