Everyone remembers the arrival of the first truly mass-market SSD products: the enthusiasm, the growth rates, the tens of thousands of beautiful IOPS. Almost an idyll.
Naturally, for the server market (single desktop machines are not considered here) this was a huge step forward: magnetic media had long been the bottleneck when building high-performance solutions. The norm used to be several cabinets of spindles that together managed two or three thousand IOPS, and suddenly a single drive offered a hundredfold or greater performance increase (compared with 15K SAS).
There was an ocean of optimism, but in reality things turned out to be not so smooth. There were compatibility issues, endurance problems when servers were fitted with drives from the cheapest consumer lines, and performance degradation; TRIM support in RAID controllers still raises questions to this day.
SSD technology developed in stages. First came the push on sequential speed, to saturate the interface: with SATA II that happened almost immediately, while conquering SATA III took some time. The next step was building up random-access performance, and decent gains were achieved there as well.
The next thing that drew attention was performance consistency:
Taken from an AnandTech review
On average the numbers are of course high, but with jumps from 30K IOPS down to a couple of dozen it is hard to call this stable; even a spinning disk delivers its performance more consistently.
The first to address this was Intel with its DC S3700 line.
Taken from an AnandTech review
If we zoom in on the right-hand portion of the graph, the variation stays within 20%. Why does this matter?
The behavior of a disk in a RAID array becomes far more predictable, and the controller has a much easier job when all members of the array perform roughly alike. Few people would think of mixing 7.2K and 15K drives in one array, yet an array of SSDs whose instantaneous performance varies a hundredfold is even worse. An application that needs to read or write random data quickly and consistently will also behave more predictably.
SAS drives on SLC (Single Level Cell) memory, with astronomical prices and virtually unlimited endurance, have been around for quite a while. Naturally, they were designed for use in storage systems, where dual-port access is a must. Over time, more affordable products on eMLC memory appeared. Endurance dropped, of course, but it remains very impressive thanks to the large reserve of over-provisioned memory not visible to the user.
An example of a modern SAS SSD
Since they were originally developed for enterprise systems, performance consistency was excellent from the start. Because hard-disk test procedures are of little use for SSDs, the industry consortium Storage Networking Industry Association (SNIA) developed a dedicated methodology, the SNIA Solid State Storage Performance Test. Its key feature is that the drive is first "preconditioned": the goal is to fill all of the available memory, since a smart controller writes not only within the advertised capacity but spreads data across all the flash it has. To make a synthetic test reflect how the drive behaves in a real environment after long continuous operation, it must be denied access to "fresh" memory that has never been written. Only then does the actual testing begin:
Random read
Random write
Random writes show a significant advantage for 12G SAS, but the CPU load for processing the stream is two or more times higher.
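To make the idea of preconditioning concrete, here is a minimal Python sketch of that order of operations. It is only an illustration of the methodology's structure, not the SNIA specification itself: a real test runs against the raw device with direct I/O (using tools like fio), while this toy version writes through the filesystem cache, and the file name and 1 GiB size are placeholders.

```python
import os
import random
import time

PATH = "testfile.bin"       # stand-in target; real tests hit the raw device
CAPACITY = 1 * 1024**3      # 1 GiB as a small stand-in for full drive capacity
BLOCK = 4096                # 4K blocks for the random-write phase

# Phase 1: preconditioning - sequentially fill the whole capacity twice,
# so the controller is left with no "fresh", never-written flash to draw on.
with open(PATH, "wb") as f:
    for _ in range(2):
        f.seek(0)
        for _ in range(CAPACITY // BLOCK):
            f.write(os.urandom(BLOCK))
        f.flush()
        os.fsync(f.fileno())

# Phase 2: only now measure steady-state random writes.
with open(PATH, "r+b") as f:
    ops = 0
    start = time.time()
    while time.time() - start < 60:     # 60-second sample window
        f.seek(random.randrange(CAPACITY // BLOCK) * BLOCK)
        f.write(os.urandom(BLOCK))
        ops += 1
    os.fsync(f.fileno())

print(f"steady-state random write: ~{ops / 60:.0f} IOPS")
```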
The current state of the SAS/SATA SSD market
Drives fall into several groups, each successfully used for particular tasks:
- Consumer drives for read-oriented tasks. An extremely popular option among Russian internet companies that use software arrays. This group includes drives like the Toshiba HG5d, which are positioned for entry-level enterprise workloads (perfectly fine for an OS install or read-dominated tasks). Few of them live long under heavier loads, but what more do you need to be happy?
- Enterprise drives rated at 1-3 full rewrites per day. Positioned for read-intensive workloads or read caching, with a small share of writes. They work well with RAID controllers; some are built for storage systems and have a SAS interface, and the drive cache is always protected by capacitors. Slightly more expensive than the first group.
- Drives rated at 10 full rewrites per day. A versatile workhorse both in servers (where the SATA models are mostly used) and in storage systems. Considerably more expensive than the first group.
- Drives rated at 25 full rewrites per day. The most expensive and the fastest; the large reserve of over-provisioned memory dictates a high price per gigabyte of usable capacity.
Now let's talk about SSDs in unusual form factors, since flash (unlike magnetic platters) can be placed anywhere.
SATA SSDs in DIMM format
Thanks to growing memory module capacities and the efforts of Intel and AMD to increase the amount of memory supported per processor socket, few servers use all the slots on the motherboard. In our experience, even 16 memory modules in a server is a rarity, while the RS130/230 G4 models offer 24 slots.
Lots and lots of memory
Leaving that part of the platform idle is deeply painful and annoying.
What can be done with it?
Empty slots can hold SSDs!
For example, like this one:
An SSD in DIMM format
We currently have several such drives in validation, with capacities reaching 200GB on SLC memory and 480GB on MLC/eMLC.
Technically, this is an ordinary SSD based on the SandForce SF-2281 controller, familiar from many 2.5" drives and very popular in inexpensive disks for read-dominated tasks (the first group). The interface is standard SATA; the memory slot supplies only power. The flash is Toshiba TH58TEG8DDJBA8C (MLC NAND Toggle Mode 2.0, 19nm), rated for 3K P/E cycles, 256 gigabytes in total. The promised Bit Error Rate (BER) is less than 1 in 10^17 bits read (what that means in practice was covered in the previous article on hard disks).
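To put that BER figure in perspective, a quick back-of-the-envelope estimate (the daily read volume below is an assumed number, purely for illustration):

```python
# One unrecoverable error per 10^17 bits read: how much data is that?
bits_per_error = 1e17
bytes_per_error = bits_per_error / 8       # 1.25e16 bytes, i.e. ~12.5 PB
print(f"~{bytes_per_error / 1e15:.1f} PB read per unrecoverable bit error")

# Hypothetical workload: reading the full 256 GB of flash once per day
daily_read_bytes = 256e9
years_per_error = bytes_per_error / daily_read_bytes / 365
print(f"= roughly one error every {years_per_error:.0f} years at that rate")
```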
View of the controller
Installation in a server is quick and easy: just insert the drive into a memory slot (from which it draws power) and run a cable to a SATA port:
Installed in the server
Original solutions
These SSDs use a regular SATA connector, which is not abundant on every board; our RS130 G4, for example, has only two. If needed, you can make a cable that gathers four SSDs into a single mini-SAS or mini-SAS HD connector.
mini-SAS cable
With this option you can build various interesting things, such as:
32 SSDs in a 1U enclosure
That is probably everything about SSDs with standard SAS/SATA interfaces. In the next article we will look at PCIe SSDs and their future, but first a few words about how an SSD's write endurance is determined.
Write endurance
At home, few people worry about a drive's write endurance, but for more serious tasks this value can be critical. The traditional measure has become the number of full drive overwrites per day, Drive Writes Per Day (DWPD), derived from the total amount of data that can be written to the drive (Total Terabytes Written, TBW) divided over the service period (usually 5 years). The best SATA drives are rated at 10 DWPD; the best SAS SSDs reach 45 DWPD.
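The conversion between the two ratings is simple arithmetic; here is a small sketch (the capacity and TBW below are example numbers, not a particular product's specification):

```python
def dwpd(tbw_tb: float, capacity_tb: float, years: float = 5.0) -> float:
    """Drive writes per day implied by a TBW rating over a service period."""
    return tbw_tb / (years * 365 * capacity_tb)

# Example: a 256 GB drive rated for 384 TB written over 5 years
print(f"{dwpd(384, 0.256):.2f} DWPD")   # ~0.82, i.e. roughly 1 DWPD
```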
How is this magic measured? For that we need to dig a little into flash memory theory.
The main peculiarity of flash is that before data can be written (programmed) to a cell, the cell must first be erased. Unfortunately, you cannot erase a single cell: erase operations work on whole blocks (erase blocks), the minimum erasable unit of memory, each consisting of several pages. A page is the minimum area of memory that can be read or written in a single read/write operation.
Hence the notion of the program/erase (P/E) cycle: writing data to one or more pages of a block, and erasing the block, in either order.
From this logically follows the write amplification factor (WAF): the amount of data written to the flash divided by the amount of data the host sent for writing.
What affects the WAF?
The nature of the load:
- sequential or random;
- large or small blocks;
- whether the data is aligned to the block size;
- the type of data (especially for SSDs with compression support).
For example, if the host sends a 4KB write and 16KB (one block) ends up written to the flash, then WAF = 4.
One block of NAND flash memory
Shown here is one NAND block consisting of 64 pages. Assume each page is 2KB in size (four sectors), giving 256 sectors per block, and that every page of the block is occupied by useful data. Now suppose the system overwrites just a few sectors in the block.
Pages to overwrite
To write 8 sectors, we need to:
1. Read the entire block into memory.
2. Change the data in pages 1, 2, and 3.
3. Erase the NAND block.
4. Write the block back from memory.
In total, 256 sectors are erased and rewritten to change just 8, so the WAF is already 32. But these are the horrors of small blocks and unoptimized flash algorithms; when writing large blocks, the WAF will be equal to one.
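The same read-modify-erase-program cycle can be sketched in a few lines of Python; this is a toy model with the block geometry from the figure, not a real flash translation layer:

```python
SECTOR = 512            # bytes per sector
PAGE_SECTORS = 4        # 2 KB page = 4 sectors
PAGES_PER_BLOCK = 64    # 64 pages = 256 sectors per erase block

def waf_for_rewrite(sectors_changed: int) -> float:
    """WAF when changing N sectors inside one fully occupied erase block.

    In this naive model the whole block must be read, erased and
    reprogrammed, so the flash absorbs a full block write regardless of N.
    """
    block_sectors = PAGE_SECTORS * PAGES_PER_BLOCK
    flash_written = block_sectors * SECTOR     # the whole block is rewritten
    host_written = sectors_changed * SECTOR    # what the host actually sent
    return flash_written / host_written

print(waf_for_rewrite(8))     # 32.0 - the example above
print(waf_for_rewrite(256))   # 1.0  - a full-block (large sequential) write
```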
JEDEC (the microelectronics standards consortium) has identified a whole set of factors affecting the life cycle of SSD drives and derived a function for the number of P/E cycles consumed: cycles = (TBW × 2 × WAF) / C, where C is the capacity of the disk and the factor of 2 is a safety margin against the influence of wear on flash reliability.
Rearranged: TBW = (flash capacity × P/E cycles) / (2 × WAF).
As a result, the longevity of any given SSD is determined by the type of load, which you have to assess yourself. Sequential writes are the easiest case; for random operations, the reserve of NAND flash hidden from the user has a much stronger influence.
If you take a drive with 3K P/E cycles per memory cell, then with sequential writes TBW = 384 TB, or about 1 DWPD for a 256GB drive over 5 years. An enterprise workload, per JEDEC, gives a WAF of roughly 5, or about 0.2 DWPD over 5 years.
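Both figures follow directly from the formula above; a quick sanity check in Python (same assumed drive: 256 GB of flash, 3K P/E cycles per cell):

```python
def tbw_tb(capacity_gb: float, pe_cycles: int, waf: float) -> float:
    """JEDEC-style endurance estimate: TBW = capacity * P/E cycles / (2 * WAF)."""
    return capacity_gb * pe_cycles / (2 * waf) / 1000   # GB -> TB

print(tbw_tb(256, 3000, waf=1))   # 384.0 TB -> ~0.8 DWPD over 5 years
print(tbw_tb(256, 3000, waf=5))   # 76.8 TB  -> ~0.16 DWPD over 5 years
```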
Source: habrahabr.ru/company/etegro/blog/217735/