Flash Memory Technology in Big Data Analytics


In this special feature, Les Lorenzo looks at Flash Memory technology and its advantages for Big Data analytics.

Big Data problems, whether in data analysis, financial services, research, life sciences, or visualization, are traditional HPC challenges that need fast, sustained I/O rates and throughput.

Flash memory technology is having a profound impact on storage architectures because of its significant performance advantage and low power consumption. Its performance characteristics have grabbed the attention of enterprises, and the technology represents an area of great innovation and excitement in the data storage world. What is the role of flash memory in Big Data processing today? How can solid state drives (SSDs) improve data throughput in these applications, and at what cost? Are SSD capacities sufficient for Big Data needs? How reliable are they?

Many Big Data analytics workloads produce large volumes of output data that then serve as input for the next step in a program. These volumes of data are written to disk before becoming the next input, so the demands on the storage infrastructure for a high IOPS rate and throughput are significant, and such workloads often run slowly on current hard drive technology. But are solid state drives the answer?

High Performance Analytics workloads tend to consist of two parts: metadata files, which are generally small and accessed at random, and the data itself, which consists of very large sets accessed sequentially.

Are Hybrid Arrays the Answer?

Many storage arrays now support a mixture of hard disk drives and solid state drives, and use software tiering to place data in the appropriate tier. Tier 0 is generally SSD, tier 1 is 10K/15K RPM SAS, and tier 2 is 7200 RPM SATA. Tiering generally places the most recently and frequently accessed files on SSD, but it is not clear that this approach is valid for Big Data, because it targets the random workloads prevalent in commercial applications. A better approach is to store metadata on SSDs while keeping the bulk of the data on hard drives, be they SAS or SATA.

The reason is that small file accesses are frequent, and it is on small, randomly accessed files that HDDs incur the greatest penalties. A 15K RPM SAS drive has an average access time of about 4 ms: an average seek time of 2 ms plus an average rotational delay of 2 ms (with a maximum rotational delay of 4 ms). Each time the arm seeks to a sector on the drive, it must wait for that sector to come around, and a drive spinning at 15K RPM takes 4 ms to complete one rotation. A 7200 RPM SATA drive takes at least twice as long, since its average rotational delay is greater than 4 ms. An SSD, in contrast, has an access time between 0.08 and 0.16 ms and no rotational delay. A 15K RPM SAS drive will never deliver more than 250 IOPS (and typically fewer than 200), while an SSD can routinely deliver in excess of 50,000 read IOPS and at least 10,000 write IOPS. Random I/O on small files generally touches 4K or 16K blocks, but there are lots of them: in the time an HDD completes a single I/O operation, an SSD can complete dozens or even hundreds. If you need lots of IOPS, you must either add lots of HDDs or use SSDs and purchase far fewer drives, and the economics of HDDs no longer look so good in comparison. If your metric is IOPS, SSDs are the best choice.
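To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The seek times, RPM values, and IOPS figures mirror those quoted above; real drives will vary.

```python
# Back-of-the-envelope random-access math for spinning disks vs. SSDs.
# The seek time, RPM, and IOPS figures mirror those quoted above; real drives vary.

def avg_rotational_delay_ms(rpm: float) -> float:
    """Average rotational delay is half of one full rotation."""
    full_rotation_ms = 60_000.0 / rpm   # milliseconds per revolution
    return full_rotation_ms / 2.0

def hdd_random_iops(avg_seek_ms: float, rpm: float) -> float:
    """Rough random-IOPS ceiling: one seek plus half a rotation per I/O."""
    access_ms = avg_seek_ms + avg_rotational_delay_ms(rpm)
    return 1000.0 / access_ms

print(avg_rotational_delay_ms(15_000))               # ~2.0 ms for a 15K RPM SAS drive
print(avg_rotational_delay_ms(7_200))                # ~4.2 ms for a 7200 RPM SATA drive
print(round(hdd_random_iops(2.0, 15_000)))           # ~250 IOPS ceiling for 15K SAS
print(round(50_000 / hdd_random_iops(2.0, 15_000)))  # HDDs needed to match one SSD's read IOPS
```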

But what about sequential files? Here the SSD is again faster, but because these transactions consist of large transfers and the number of seeks is far smaller, the penalty for an HDD is not as great: the drive performs one seek instead of the many required for random operations. Transfer rates now come into play. The effective transfer rate of a 15K SAS drive is approximately 140 MB/s, while SSDs can read at a rate of at least 500 MB/s. SSDs are less efficient on writes, since blocks must be erased before they can be rewritten, but they are still faster than HDDs.
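A similarly rough comparison for large sequential transfers, using the 140 MB/s and 500 MB/s figures above (the 100 GB file size is an arbitrary example):

```python
# Rough sequential-read time for a large file, ignoring seeks and file-system overhead.
FILE_GB = 100    # arbitrary example file size
HDD_MB_S = 140   # effective 15K SAS transfer rate quoted above
SSD_MB_S = 500   # SSD sequential read rate quoted above

hdd_seconds = FILE_GB * 1024 / HDD_MB_S
ssd_seconds = FILE_GB * 1024 / SSD_MB_S
print(f"HDD: {hdd_seconds / 60:.1f} min, SSD: {ssd_seconds / 60:.1f} min")
# The gap (~3.6x) is real, but far smaller than the ~200x gap seen in random I/O.
```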

Cost Factors

What do SSDs cost? Pricing has come down substantially and continues to drop. Enterprise SSDs are now available for less than $1 per GB. The exact price depends on whether SLC or MLC flash is used and on the type of interface, but in all cases prices continue to decline. Hard drives can be had for as little as $0.10 per GB, but raw capacity is not the best basis for comparison. If we look at drive pricing in terms of IOPS, the advantage is clearly with SSDs. Sequential throughput of SSDs is also much greater: reads in excess of 500 MB/s are routine, as are writes above 400 MB/s.
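A quick sketch of the $/GB versus $/IOPS comparison, using the round figures above. The drive capacities are illustrative assumptions, not vendor specifications.

```python
# Comparing drives by $/GB versus $/IOPS, using the round figures quoted above.
# The drive capacities are illustrative assumptions, not vendor specifications.
hdd = {"price_per_gb": 0.10, "capacity_gb": 900, "iops": 200}      # 15K SAS drive
ssd = {"price_per_gb": 1.00, "capacity_gb": 800, "iops": 50_000}   # enterprise SSD

for name, drive in (("HDD", hdd), ("SSD", ssd)):
    price = drive["price_per_gb"] * drive["capacity_gb"]
    print(f"{name}: ${price:.0f} per drive, ${price / drive['iops']:.3f} per IOPS")
# By $/GB the HDD wins by roughly 10x; by $/IOPS the SSD wins by a wide margin.
```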

SSD capacities have also risen. Vendors have continued to increase the density of flash chips, and with 19 nm process geometries now shipping, one can find drives in excess of 1 TB, equal to the largest SAS drives currently available.

Reliability

Much has been written about the limited endurance and short warranties of SSDs. Each erase-and-write cycle causes wear on an SSD; eventually the drive will no longer hold data and must be replaced. There are three types of flash available. SLC, or single-level cell, is the most expensive, highest-performing, and most durable on the market. MLC, or multi-level cell, is relatively inexpensive but short lived. eMLC, or enterprise multi-level cell, sits between the two. Wear-leveling software is used along with over-provisioning to increase endurance. At present SLC flash lasts about 100,000 write cycles, MLC about 3,000 write cycles, and eMLC about 30,000 write cycles. This translates to an approximate lifespan of eight years for SLC drives and about three years for eMLC drives. Standard MLC drives are targeted at consumers, and their lifespan is longer than this calculation suggests because they are not in continuous use.
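As a rough illustration of how rated write cycles translate into lifespan, here is a simple estimate. The capacity, daily write volume, and write-amplification values below are assumptions chosen for illustration; actual lifespans depend heavily on the workload.

```python
# Rough drive-lifetime estimate from rated program/erase cycles.
# Capacity, daily writes, and write amplification below are illustrative assumptions.

def lifetime_years(pe_cycles: int, capacity_gb: float,
                   writes_gb_per_day: float, write_amplification: float = 1.5) -> float:
    total_writable_gb = pe_cycles * capacity_gb / write_amplification
    return total_writable_gb / writes_gb_per_day / 365.0

# 400 GB drives absorbing 5 TB of new writes per day:
for name, cycles in (("SLC", 100_000), ("eMLC", 30_000), ("MLC", 3_000)):
    print(f"{name}: ~{lifetime_years(cycles, 400, 5_000):.1f} years")
# Results vary widely with the assumed duty cycle; the eight- and three-year figures
# above correspond to a particular continuous-use workload.
```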

The following form factors are available:

  • Flash modules inserted directly into RAM DIMM slots. Memory Channel Storage is a very recent and exciting development from Diablo Technologies. Its advantage is that it offers the highest performance: latency is less than 10 microseconds, and each module can deliver about 150K read and 65K write IOPS, so eight modules in a server board can deliver 1.2M read and 520K write IOPS. The disadvantage is that it cannot be shared in a SAN or NAS environment, as it is bound to the server. See the recent RichReport Podcast on this topic.
  • Server-based, PCIe direct attached. These are PCIe cards that plug directly into a server. They have the advantage of relatively low latency, as there are no added latencies from the network, SAN fabric, protocol conversions, SAS/SATA, and so on. For example, an IBM RamSan (TMS) card gets approximately 1.2 million IOPS at 30 microseconds of latency. The disadvantage is that, again, they are bound to a server and cannot be shared.
  • All-flash arrays. These are purpose-built systems with flash modules inside, available from Violin Memory, IBM (TMS), and Skyera, among others. They attach to servers using Fibre Channel, InfiniBand, or Ethernet. They deliver relatively high performance and can be shared in a SAN or NAS environment. For example, the IBM RamSan 820 delivers 500K IOPS and 5 GB/s, with latency between 200 and 400 microseconds.
  • All-flash enclosures. These are becoming quite common. They are available from multiple suppliers and have SSD drives mounted inside traditional enclosures, using standard SAS or SATA interfaces, with some engineering done to improve performance for SSDs. They are relatively easy to deploy and can be shared on a SAN or a NAS. Their disadvantage is being limited by the relatively slow SAS or SATA interfaces, which can often become a bottleneck. They can deliver about 200K IOPS with less than 1 millisecond response time.
  • SSDs in traditional arrays. While this might be the easiest and most economical approach, its performance gains are limited. Expensive and proprietary tiering software from traditional storage suppliers is difficult to tune, creates vendor lock-in, and is complex to manage. Implementation requires careful planning of data layout, and performance improvements only occur when the data is in cache, which is typically how SSDs are used in these hybrid arrays.

Flash in DIMM slots gives latency under ten microseconds; everything else adds latency on top. PCIe adds roughly 60 microseconds, Fibre Channel about 100 microseconds, iSCSI 300 to 400 microseconds, and SAS adds further latency as well. In conclusion, selecting flash memory technology can improve application performance in Big Data analytics, but the placement of that technology is even more important.
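To put those placement figures side by side, here is a small sketch that stacks the added latencies onto a sub-10-microsecond flash access. The iSCSI midpoint is an assumption; the other numbers are those quoted above.

```python
# Rough end-to-end flash latency by placement, stacking the added latencies above.
BASE_FLASH_US = 10  # flash in DIMM slots: roughly ten microseconds or less

added_latency_us = {
    "DIMM (Memory Channel Storage)": 0,
    "PCIe card": 60,
    "Fibre Channel all-flash array": 100,
    "iSCSI": 350,   # quoted as 300-400 us; midpoint assumed here
}

for placement, extra in added_latency_us.items():
    print(f"{placement}: ~{BASE_FLASH_US + extra} us")
# Placement, not the flash itself, dominates the latency an application actually sees.
```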
