{{distinguish|International Organization for a Participatory Society}}
'''IOPS''' ([[Input/Output]] Operations Per Second, pronounced ''eye-ops'') is a common performance measurement used to [[benchmark (computing)|benchmark]] [[data storage device|computer storage]] devices like [[hard disk drive]]s (HDD), [[solid state drives]] (SSD), and [[storage area network]]s (SAN). As with any benchmark, IOPS numbers published by storage device manufacturers do not guarantee real-world application performance.<ref name=Scott_Lowe>{{cite web |url=http://www.techrepublic.com/blog/datacenter/calculate-iops-in-a-storage-array/2182 |title=Calculate IOPS in a storage array |last=Lowe |first=Scott |publisher=techrepublic.com |date=2010-02-12 |accessdate=2011-07-03}}</ref><ref name="Symantec">{{cite web |url=http://www.symantec.com/connect/articles/getting-hang-iops-v13 |title=Getting The Hang Of IOPS v1.3 |date=2012-08-03 |accessdate=2013-08-15}}</ref>
 
IOPS can be measured with applications such as [[Iometer]] (originally developed by [[Intel]]), [[IOzone]], and [[FIO (software)|FIO]],<ref>{{cite web |url=http://freshmeat.net/projects/fio/|title=Flexible IO Tester |last=Axboe |first=Jens |accessdate=2010-06-04}}(source available at http://git.kernel.dk/)</ref> and is primarily used with [[server (computing)|servers]] to find the best storage configuration.
 
The specific number of IOPS possible in any system configuration will vary greatly, depending upon the variables the tester enters into the program, including the balance of read and write operations, the mix of [[Sequential access|sequential]] and [[Random access|random]] access patterns, the number of worker [[Thread (computer science)|threads]] and queue depth, as well as the data block sizes.<ref name=Scott_Lowe/> There are other factors which can also affect the IOPS results including the system setup, storage drivers, OS background operations, etc. Also, when testing SSDs in particular, there are preconditioning considerations that must be taken into account.<ref name=Kent_Smith>{{cite web |url=http://www.lsi.com/downloads/Public/Flash%20Storage%20Processors/LSI_PRS_FMS2009_F2A_Smith.pdf |title=Benchmarking SSDs: The Devil is in the Preconditioning Details |last= Smith |first=Kent |publisher=SandForce.com |date=2009-08-11 |accessdate=2012-08-28}}</ref>
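The tester variables listed above map directly onto the options of benchmarking tools such as FIO. As an illustrative sketch (not an endorsed test profile; the device path is a placeholder), an FIO job file for a 4&nbsp;KB random workload with a 70/30 read/write mix and a queue depth of 4 might look like:

```ini
; Hypothetical FIO job file covering the variables discussed above.
[randrw-4k]
rw=randrw          ; mix of random reads and writes
rwmixread=70       ; 70/30 read/write balance
bs=4k              ; data block size
iodepth=4          ; queue depth (outstanding I/Os)
numjobs=1          ; number of worker threads
runtime=60
time_based=1
ioengine=libaio
direct=1           ; bypass the OS page cache
filename=/dev/sdX  ; placeholder; running against a real device is destructive
```

Changing any one of these options (block size, mix, queue depth, thread count) can change the reported IOPS figure substantially, which is why published numbers are only comparable when the test profile is disclosed.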
 
==Performance characteristics==
[[Image:Random vs sequential access.svg|thumb|right|Random access compared to sequential access.]]The most common performance characteristics measured are sequential and random operations. Sequential operations access locations on the storage device in a [[Contiguity#Computer science|contiguous manner]] and are generally associated with large data transfer sizes, e.g., 128&nbsp;[[Kilobyte|KB]]. Random operations access locations on the storage device in a non-contiguous manner and are generally associated with small data transfer sizes, e.g., 4&nbsp;KB.
 
The most common performance characteristics are as follows:
 
{| class="wikitable"
|-
! Measurement
! Description
|-
| Total IOPS
| Total number of I/O operations per second (when performing a mix of read and write tests)
|-
| Random Read IOPS
| Average number of random read I/O operations per second
|-
| Random Write IOPS
| Average number of random write I/O operations per second
|-
| Sequential Read IOPS
| Average number of sequential read I/O operations per second
|-
| Sequential Write IOPS
| Average number of sequential write I/O operations per second
|}
 
For HDDs and similar electromechanical storage devices, the random IOPS numbers are primarily dependent upon the storage device's random [[seek time]], whereas for SSDs and similar solid state storage devices, the random IOPS numbers are primarily dependent upon the storage device's internal controller and memory interface speeds. On both types of storage devices the sequential IOPS numbers (especially when using a large block size) typically indicate the maximum sustained bandwidth that the storage device can handle.<ref name=Scott_Lowe/> Often sequential IOPS are reported as a simple [[MB/s#Megabyte per second|MB/s]] number as follows:
 
<blockquote><math>\text{IOPS} \times \text{TransferSizeInBytes} = \text{BytesPerSec}</math>    (with the answer typically converted to [[Megabytes per second|MegabytesPerSec]])
</blockquote>
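As a minimal sketch of this conversion (the figures below are illustrative, not from any vendor datasheet):

```python
def iops_to_mb_per_sec(iops, transfer_size_bytes):
    """Convert an IOPS figure to throughput in decimal megabytes per second."""
    return iops * transfer_size_bytes / 1_000_000

# e.g. 50,000 IOPS at a 4 KB (4096-byte) transfer size:
print(iops_to_mb_per_sec(50_000, 4096))  # 204.8 MB/s
```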
 
Some HDDs will improve in performance as the number of outstanding I/Os (i.e. queue depth) increases. This is usually the result of more advanced controller logic on the drive performing command queuing and reordering, commonly called either [[Tagged Command Queuing]] (TCQ) or [[Native Command Queuing]] (NCQ). Most commodity [[SATA]] drives either cannot do this, or their implementation is so poor that no performance benefit can be seen.{{Citation needed|date=August 2008}} Enterprise-class SATA drives, such as the [[Western Digital Raptor]] and Seagate Barracuda NL, will improve by nearly 100% with deep queues.<ref>{{cite web|url=http://www.storagereview.com/articles/200607/500_6.html |title=SATA in the Enterprise - A 500 GB Drive Roundup &#124; StorageReview.com - Storage Reviews |publisher=StorageReview.com |date=2006-07-13 |accessdate=2013-05-13}}</ref> High-end [[SCSI]] drives, more commonly found in servers, generally show much greater improvement, with the [[Seagate Technology|Seagate]] Savvio exceeding 400 IOPS—more than doubling its performance.{{Citation needed|date=August 2008}}
 
While traditional HDDs have about the same IOPS for read and write operations, most [[Flash memory|NAND flash-based]] SSDs are much slower at writing than at reading, because flash cannot be rewritten in place: a previously written location must first be reclaimed through a procedure called [[Garbage collection (SSD)|garbage collection]].<ref name="IBM_WA">{{cite web |title=Write Amplification Analysis in Flash-Based Solid State Drives |author=Hu, X.-Y. and E. Eleftheriou, R. Haas, I. Iliadis, R. Pletka |year=2009 |publisher=[[IBM]] | id = {{citeseerx|10.1.1.154.8668}} |accessdate=2010-06-02}}</ref><ref name="OCZ_WA">{{cite web |url=http://www.oczenterprise.com/whitepapers/ssds-write-amplification-trim-and-gc.pdf |title=SSDs - Write Amplification, TRIM and GC |author= |date= |work= |publisher=OCZ Technology |accessdate=2010-05-31}}</ref><ref>{{cite web |url=http://www.intel.com/cd/channel/reseller/asmo-na/eng/products/nand/feature/index.htm |title=Intel Solid State Drives |author= |date= |work= |publisher=Intel |accessdate=2010-05-31}}</ref> This has led hardware test sites to provide independently measured results when testing IOPS performance.
 
Newer flash SSD drives such as the Intel X25-E have much higher IOPS than traditional hard disk drives. In a test done by Xssist using IOmeter (4&nbsp;KB random transfers, 70/30 read/write ratio, queue depth 4), the IOPS delivered by the Intel X25-E 64&nbsp;GB G1 started at around 10,000 IOPS, dropped sharply after 8 minutes to 4,000 IOPS, and continued to decrease gradually for the next 42 minutes. From around the 50th minute onwards, IOPS varied between 3,000 and 4,000 for the rest of the 8+ hour test run.<ref>{{cite web |url=http://www.xssist.com/blog/Intel%20X25-E%2064GB%20G1,%204KB%2070%2030%20RW%20Random%20IOPS,%20iometer%20benchmark.htm |title=Intel X25-E 64GB G1, 4KB Random IOPS, iometer benchmark |date=2010-03-27 |accessdate=2010-04-01}}</ref> Even with the drop in random IOPS after the 50th minute, the X25-E still has much higher IOPS than traditional hard disk drives. Some SSDs, including the [[OCZ]] RevoDrive 3 x2 PCIe using the [[SandForce]] controller, have shown much higher sustained write performance that more closely matches the read speed.<ref>{{cite web |url=http://thessdreview.com/our-reviews/ocz-revodrive-3-x2-480-gb-pcie-ssd-review-1-5gb-read1-25gb-write200000-iops-for-699/ |title=OCZ RevoDrive 3 x2 PCIe SSD Review – 1.5GB Read/1.25GB Write/200,000 IOPS As Little As $699 |date=2011-06-28 |accessdate=2011-06-30}}</ref>
 
==Examples==
 
Some commonly accepted averages for random I/O operations, calculated as IOPS = 1/(average seek time + average rotational latency):
{| class="wikitable"
|-
!Device
!Type
!IOPS
!Interface
!Notes
|-
| 7,200 [[Revolutions per minute|rpm]] [[SATA]] drives
| [[Hard disk drive|HDD]]
| ~75-100 IOPS<ref name="Symantec"/>
| SATA 3 [[Gbit/s]]
|
|-
| 10,000 rpm SATA drives
| HDD
| ~125-150 IOPS<ref name="Symantec"/>
| SATA 3 Gbit/s
|
|-
| 10,000 rpm [[Serial Attached SCSI|SAS]] drives
| HDD
| ~140 IOPS<ref name="Symantec"/>
| SAS
|
|-
| 15,000 rpm [[Serial Attached SCSI|SAS]] drives
| HDD
| ~175-210 IOPS<ref name="Symantec"/>
| SAS
|
|-
<!-- | 10,000 rpm [[SATA]] drives, queue depth 24
| HDD
| ~290 IOPS
| SATA 3 Gb/s
| <code>fio -readonly -name iops -rw=randread -bs=512 -runtime=20 -iodepth 24 -filename /dev/sda -ioengine libaio -direct=1</code>
|-  -->
|}
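The figures in the table above can be reproduced from this formula. A rough sketch (the seek times below are typical published values, not measurements of any specific drive):

```python
def estimated_iops(avg_seek_ms, rpm):
    """Estimate random IOPS as 1 / (average seek time + average rotational latency).

    Average rotational latency is half a revolution: 60 / rpm / 2 seconds.
    """
    rotational_latency_s = 60 / rpm / 2
    seek_s = avg_seek_ms / 1000
    return 1 / (seek_s + rotational_latency_s)

# Typical 7,200 rpm SATA drive with a ~9 ms average seek:
print(round(estimated_iops(9, 7200)))     # 76 -- within the ~75-100 range above
# Typical 15,000 rpm SAS drive with a ~3.5 ms average seek:
print(round(estimated_iops(3.5, 15000)))  # 182 -- within the ~175-210 range
```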
 
Solid-state devices:
{| class="wikitable"
|-
!Device
!Type
!IOPS
!Interface
!Notes
|-
| Simple [[Single-level cell|SLC]] [[Solid-state drive|SSD]]
| [[Solid-state drive|SSD]]
| ~400 IOPS {{Citation needed|date=February 2010}}
| SATA 3 Gbit/s
|
|-
| [[Intel#Solid-state drives (SSD)|Intel X25-M G2]] ([[Multi-level cell|MLC]])
| SSD
| ~8,600 IOPS<ref>{{cite web |url=http://www.tomshardware.com/reviews/Intel-x25-m-SSD,2012.html |title=Intel's X25-M Solid State Drive Reviewed                |last1=Schmid |first1=Patrick |last2=Roos |first2=Achim |date=2008-09-08 |accessdate=2011-08-02}}</ref>
| SATA 3 Gbit/s
| Intel's data sheet<ref>http://download.intel.com/design/flash/nand/mainstream/322296.pdf</ref> claims 6,600/8,600 IOPS (80&nbsp;GB/160&nbsp;GB version) and 35,000 IOPS for random 4&nbsp;KB writes and reads, respectively.
|-
| [[Intel#Solid-state drives (SSD)|Intel X25-E]] (SLC)
| SSD
| ~5,000 IOPS<ref>{{cite web|url=http://www.tomshardware.com/reviews/intel-x25-e-ssd,2158.html |title=Intel’s X25-E SSD Walks All Over The Competition : They Did It Again: X25-E For Servers Takes Off |publisher=Tomshardware.com |date= |accessdate=2013-05-13}}</ref>
| SATA 3 Gbit/s
| Intel's data sheet<ref>http://download.intel.com/design/flash/nand/extreme/extreme-sata-ssd-datasheet.pdf</ref> claims 3,300 IOPS and 35,000 IOPS for writes and reads, respectively. 5,000 IOPS are measured for a mix. Intel X25-E G1 has around 3 times higher IOPS compared to the Intel X25-M G2.<ref>{{cite web |url=http://www.xssist.com/blog/%5BSSD%5D_Comparison_of_Intel_X25-E_G1_vs_Intel_X25-M_G2.htm |title=Intel X25-E G1 vs Intel X25-M G2 Random 4&nbsp;KB IOPS, iometer |date=May 2010 |accessdate=2010-05-19}}</ref>
|-
| [[G.Skill]] Phoenix Pro
| SSD
| ~20,000 IOPS<ref name="tweakpc">{{cite web|url=http://www.tweakpc.de/hardware/tests/ssd/gskill_phoenix_pro/s05.php |title=G.Skill Phoenix Pro 120 GB Test - SandForce SF-1200 SSD mit 50K IOPS - HD Tune Access Time IOPS (Diagramme) (5/12) |publisher=Tweakpc.de |date= |accessdate=2013-05-13}}</ref>
| SATA 3 Gbit/s
| [[SandForce]]-1200 based SSD drives with enhanced firmware, states up to 50,000 IOPS, but benchmarking shows for this particular drive ~25,000 IOPS for random read and ~15,000 IOPS for random write.<ref name="tweakpc" />
|-
| [[OCZ Technology|OCZ]] Vertex 3
| SSD
| Up to 60,000 IOPS<ref>http://www.ocztechnology.com/res/manuals/OCZ_Vertex3_Product_Sheet.pdf</ref>
| SATA 6 Gbit/s
| Random Write 4&nbsp;KB (Aligned)
|-
| [[Corsair Memory|Corsair]] Force Series GT
| SSD
| Up to 85,000 IOPS<ref>{{cite web|author=Force Series™ GT 240GB SATA 3 6Gb/s Solid-State Hard Drive |url=http://www.corsair.com/us/ssd/force-series-gt-ssd/force-series-gt-240gb-sata-3-6gbps-solid-state-hard-drive.html |title=Force Series™ GT 240GB SATA 3 6Gb/s Solid-State Hard Drive - Force Series GT - SSD |publisher=Corsair.com |date= |accessdate=2013-05-13}}</ref>
| SATA 6 Gbit/s
| 240&nbsp;GB Drive, 555 MB/s sequential read & 525 MB/s sequential write, Random Write 4&nbsp;KB Test (Aligned)
|-
| [[OCZ Technology|OCZ]] Vertex 4
| SSD
| Up to 120,000 IOPS<ref>{{cite web|url=http://www.ocztechnology.com/ocz-vertex-4-sata-iii-2-5-ssd.html#overview |title=OCZ Vertex 4 SSD 2.5" SATA 3 6Gb/s |publisher=Ocztechnology.com |date= |accessdate=2013-05-13}}</ref>
| SATA 6 Gbit/s
| 256&nbsp;GB Drive, 560 MB/s sequential read & 510 MB/s sequential write, Random Read 4&nbsp;KB Test 90K IOPS, Random Write 4&nbsp;KB Test 85K IOPS
|-
| [[Texas Memory Systems]] RamSan-20
| SSD
| 120,000+ Random Read/Write IOPS<ref>{{cite web|url=http://www.ramsan.com/products/pcie-storage/ramsan-10-20 |title=IBM System Storage - Flash: Overview |publisher=Ramsan.com |date= |accessdate=2013-05-13}}</ref>
| [[PCIe]]
| Includes RAM cache
|-
| [[Fusion-io]] ioDrive
| SSD
| 140,000 Read IOPS, 135,000 Write IOPS<ref>{{cite web|url=http://community.fusionio.com/media/p/459.aspx |title=Home - Fusion-io Community Forum |publisher=Community.fusionio.com |date= |accessdate=2013-05-13}}</ref>
| PCIe
|
|-
| [[Virident Systems]] tachIOn
| SSD
| 320,000 sustained READ IOPS using 4KB blocks and 200,000 sustained WRITE IOPS using 4KB blocks<ref>http://www.theregister.co.uk/2010/06/16/virident_tachion/</ref>
| PCIe
|
|-
| OCZ RevoDrive 3 X2
| SSD
| 200,000 Random Write 4K IOPS<ref>{{cite web|url=http://www.storagereview.com/ocz_revodrive_3_x2_480gb_review |title=OCZ RevoDrive 3 X2 480GB Review &#124; StorageReview.com - Storage Reviews |publisher=StorageReview.com |date=2011-06-28 |accessdate=2013-05-13}}</ref>
| PCIe
|
|-
| Fusion-io ioDrive Duo
| SSD
| 250,000+ IOPS<ref>{{cite web|url=http://community.fusionio.com/media/p/461.aspx |title=Home - Fusion-io Community Forum |publisher=Community.fusionio.com |date= |accessdate=2013-05-13}}</ref>
| PCIe
|
|-
| [[Violin Memory]] Violin 3200
| SSD
| 250,000+ Random Read/Write IOPS<ref>[http://www.violin-memory.com/products/3200-memory-array/ ]{{dead link|date=May 2013}}</ref>
| PCIe /FC/Infiniband/iSCSI
| Flash Memory Array
|-
| WHIPTAIL, ''ACCELA''
| SSD
| 250,000/200,000+ Write/Read IOPS<ref>{{cite web|url=http://www.whiptail.com/products/accela |title=Products |publisher=Whiptail |date= |accessdate=2013-05-13}}</ref>
| Fibre Channel, iSCSI, Infiniband/SRP, NFS, CIFS
| Flash Based Storage Array
|-
| [[DDRdrive]] X1,
| SSD
| 300,000+ (512B Random Read IOPS) and 200,000+ (512B Random Write IOPS)<ref>http://www.ddrdrive.com/ddrdrive_press.pdf</ref><ref>http://www.ddrdrive.com/ddrdrive_brief.pdf</ref><ref>http://www.ddrdrive.com/ddrdrive_bench.pdf</ref><ref>{{cite web|author=Author: Allyn Malventano |url=http://www.pcper.com/article.php?aid=704 |title=DDRdrive hits the ground running - PCI-E RAM-based SSD &#124; PC Perspective |publisher=Pcper.com |date=2009-05-04 |accessdate=2013-05-13}}</ref>
| PCIe
|
|-
| SolidFire ''SF3010/SF6010''
| SSD
| 250,000 4KB Read/Write IOPS<ref>{{cite web|url=http://www.solidfire.com/solution/solidfire-storage-system/ |title=SSD Cloud Storage System - Examples & Specifications |publisher=SolidFire |date= |accessdate=2013-05-13}}</ref>
| iSCSI
| Flash Based Storage Array (5RU)
|-
|[[Texas Memory Systems]] RamSan-720 Appliance
| SSD
| 500,000 Optimal Read, 250,000 Optimal Write 4KB IOPS<ref>https://www.ramsan.com/files/download/798</ref>
| FC / InfiniBand
|
|-
| OCZ Single SuperScale Z-Drive R4 PCI-Express SSD
| SSD
| Up to 500,000 IOPS<ref name="OCZ_Z-Drive">{{cite web |url=http://www.ocztechnology.com/aboutocz/press/2011/445 |title=OCZ Technology Launches Next Generation Z-Drive R4 PCI Express Solid State Storage Systems |date=2011-08-02 |publisher=OCZ |accessdate=2011-08-02}}</ref>
| PCIe
|
|-
| WHIPTAIL, ''INVICTA''
| SSD
| 650,000/550,000+ Read/Write IOPS<ref>{{cite web|url=http://www.whiptail.com/products/invicta |title=Products |publisher=Whiptail |date= |accessdate=2013-05-13}}</ref>
| Fibre Channel, iSCSI, Infiniband/SRP, NFS
| Flash Based Storage Array
|-
| [[Violin Memory]] Violin 6000
| 3RU Flash Memory Array
| 1,000,000+ Random Read/Write IOPS<ref>{{cite web|author=6000 Series Flash Memory Arrays |url=http://www.violin-memory.com/products/6000-flash-memory-array/ |title=Flash Memory Arrays, Enterprise Flash Storage Violin Memory |publisher=Violin-memory.com |date= |accessdate=2013-11-14}}</ref>
| /FC/Infiniband/10Gb(iSCSI)/ PCIe
|
|-
| [[Texas Memory Systems]] RamSan-630 Appliance
| SSD
| 1,000,000+ 4KB Random Read/Write IOPS<ref>{{cite web|url=http://www.ramsan.com/products/rackmount-flash-storage/ramsan-630 |title=IBM flash storage and solutions: Overview |publisher=Ramsan.com |date= |accessdate=2013-11-14}}</ref>
| FC / InfiniBand
|
|-
| Fusion-io ioDrive Octal (single PCI Express card)
| SSD
| 1,180,000+ Random Read/Write IOPS<ref>{{cite web|url=http://www.fusionio.com/products/iodriveoctal/ |title=ioDrive Octal |publisher=Fusion-io |date= |accessdate=2013-11-14}}</ref>
| PCIe
|
|-
| OCZ 2x SuperScale Z-Drive R4 PCI-Express SSD
| SSD
| Up to 1,200,000 IOPS<ref name="OCZ_Z-Drive"/>
| PCIe
|
|-
| [[Texas Memory Systems]] RamSan-70
| SSD
| 1,200,000 Random Read/Write IOPS<ref>{{cite web|url=http://www.ramsan.com/products/pcie-storage/ramsan-70 |title=IBM flash storage and solutions: Overview |publisher=Ramsan.com |date= |accessdate=2013-11-14}}</ref>
| PCIe
| Includes RAM cache
|-
| [[Kaminario]] K2
| Flash/DRAM/Hybrid SSD
| Up to 1,200,000 IOPS SPC-1 IOPS with the K2-D ([[DRAM]])<ref name="The Register">{{cite web|last=Mellor |first=Chris |url=http://www.theregister.co.uk/2012/07/30/kaminario_spc_1/ |title=Chris Mellor, The Register, July 30, 2012: "Million-plus IOPS: Kaminario smashes IBM in DRAM decimation" |publisher=Theregister.co.uk |date=2012-07-30 |accessdate=2013-11-14}}</ref><ref name="Storage Performance Council">http://www.storageperformance.org/results/benchmark_results_spc1/#kaminario_spc1</ref>
| [[Fibre Channel|FC]]
|
|-
| Fusion-io ioDrive2
| SSD
| Up to 9,608,000 IOPS<ref>{{cite web|url=http://www.fusionio.com./press-releases/fusion-io-achieves-more-than-nine-million-iops-from-a-single-iodrive2/ |title=Achieves More Than Nine Million IOPS From a Single ioDrive2 |publisher=Fusion-io |date= |accessdate=2013-11-14}}</ref>
| PCIe
|
|}
 
==See also==
* [[Instructions per second]]
* [[Performance per watt]]
 
==References==
{{reflist|30em}}
 
{{Solid-state Drive}}
 
{{DEFAULTSORT:Iops}}
[[Category:Computer performance]]
[[Category:Data transmission]]
[[Category:Units of frequency]]
