With the rise of flash storage deployments in the data centre, either as all-flash systems or as part of hybrid arrays, there is continual speculation in the market (and positioning from vendors) that it is only a matter of time before all hard disk media is eliminated from the data centre and all of our data sits on flash.  Is this a reasonable assumption, or purely wishful thinking from those with a vested interest in selling their flash products?

In recent years storage has been classified through tiering.  Tier 1 was for high-performance, mission-critical applications; tiers 2, 3 and lower managed less important data, perhaps file content, archiving and backup.  The specifics of each tier aren’t hard-and-fast rules; each organisation assigns metrics and service levels based on its own requirements.  Tier 0 crept in with flash as a way to describe those ultra-high-performance applications for which I/O throughput or latency (I/O density) could justify the premium cost of flash over disk.  The industry has matured as MLC devices have replaced SLC; flash prices have fallen and capacities have increased.  There may still be a need for tier 0 today, but most vendors are using flash to attack the tier 1 market and replace disk systems.  At the top end, the latest Intel enterprise-grade SSDs can be purchased (retail) for about $2000/TB, whereas 15K drives are still around 1/7th of that price (depending on source and vendor markup).  This means that, with de-duplication and compression, flash can be competitive with 15K HDDs at the top end of the market, especially when calculating TCO (including power, cooling and floorspace).
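
As a rough illustration of the cost argument (a sketch with assumed figures, not numbers from any vendor), the calculation below applies a data-reduction ratio to the raw flash price quoted above and compares the result with the 15K HDD price:

```python
# Illustrative only: effective cost per usable TB once dedupe/compression
# is applied to flash.  The raw prices follow the rough figures above;
# the data-reduction ratio is an assumption for the sake of the example.

def usable_cost_per_tb(raw_cost_per_tb, data_reduction_ratio=1.0):
    """Cost per TB of usable capacity after data reduction."""
    return raw_cost_per_tb / data_reduction_ratio

flash_raw = 2000.0        # ~$2000/TB enterprise SSD (retail)
hdd_15k_raw = 2000.0 / 7  # 15K HDD at roughly 1/7th the price of flash

print(f"Flash, no reduction:  ${usable_cost_per_tb(flash_raw):,.0f}/TB")
print(f"Flash, 4:1 reduction: ${usable_cost_per_tb(flash_raw, 4):,.0f}/TB")
print(f"15K HDD (raw):        ${hdd_15k_raw:,.0f}/TB")
```

At an assumed 4:1 reduction ratio the gap narrows from 7x to well under 2x, and the remaining difference is what the power, cooling and floorspace savings are expected to close in a TCO comparison.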

Ever since flash was introduced, there’s been an assumption that the technology is inherently unreliable and that its finite write lifetime poses a reliability problem.  It’s clear that with careful management (including knowing how to write to the SSD itself) lifetime isn’t a problem, and as a result vendors such as Pure Storage and SolidFire have been able to offer replacement guarantees for as long as arrays remain under maintenance.  During a discussion last week at Pure Storage’s offices in Mountain View, John Colgrove (co-founder and CTO) told me that the company had seen fewer than a handful of SSD hard failures; in most cases drives experienced firmware “hiccups” that could be resolved by resetting the drive.  These devices may be replaced at the customer’s site but can continue to be used in Pure’s test/dev labs.  This process is analogous to the predictive failure process used by HDD array vendors.
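
To show why the write-lifetime concern rarely bites with sensible management, here is a back-of-envelope endurance calculation.  The DWPD rating, capacity and daily write figures below are hypothetical and not taken from Pure Storage or any other vendor:

```python
# Back-of-envelope SSD endurance estimate using assumed (not vendor) figures.
# DWPD = drive writes per day, the usual enterprise SSD endurance rating.

capacity_tb = 1.0      # assumed drive capacity
dwpd = 0.5             # assumed endurance rating
warranty_years = 5

rated_lifetime_writes_tb = capacity_tb * dwpd * 365 * warranty_years
print(f"Rated lifetime writes: {rated_lifetime_writes_tb:,.0f} TB")  # ~913 TB

# If the array only pushes ~100GB/day of physical writes to this drive
# (after the controller coalesces and reduces data), the rating outlasts
# the warranty period by a wide margin.
daily_writes_tb = 0.1
years_to_rated_limit = rated_lifetime_writes_tb / (daily_writes_tb * 365)
print(f"Years to reach rated limit at 100GB/day: {years_to_rated_limit:.0f}")  # ~25
```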

So, with the introduction of multi-terabyte high-end MLC flash, the days of 15K drives are over.  What about 10K and 7.2K media?  The embedded image shows Micron’s projection for the adoption of MLC and TLC technologies, followed by 3D NAND – a technique that stacks memory cells in layers on the silicon die to increase density compared with today’s “planar” technology.  The benefit of 3D NAND is the ability to increase SSD capacities without shrinking the manufacturing process.  Process geometries are reaching the point where further reductions are increasingly hard to achieve, so 3D NAND becomes the short-to-medium-term solution.

At a macro level, SSD capacities will continue to increase and costs will decrease.  This will put more pressure on the 10K and 7.2K market over time, and we will see data gradually move to a second tier of flash based on cheaper technologies such as 3D NAND.  As far as the all-flash array market goes, it will be interesting to see how this trend develops into new or updated products.  Many all-flash solutions are built entirely on high-end drives; many others are not, and those will adapt easily to a tiered model.

At the bottom end of the tiering model there is still a need for high-capacity solutions.  Flash is unlikely to play in this market for some time, if ever, because other more cost-effective solutions are available.  These include tape (although accessibility is a problem) and the new wave of ultra-capacity HDDs based on SMR (shingled magnetic recording) technology, such as HGST’s new 10TB drive.  SMR increases density by partially overlapping adjacent tracks on the platter, like the shingles on a roof.  The increased density comes at a cost: data can only be recorded on the drive in bands (which could be gigabytes in size), so a read-modify-write approach is needed, in a similar fashion to the way flash works today.  The result is much higher write latency and lower throughput.  The HGST 10TB drive is rated at a sustained write throughput of 68MB/s, less than half its read performance and around 40% of the capability of the equivalent 6TB HDD.
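
To put the 68MB/s figure in context, a quick calculation (using the numbers above; the 6TB drive’s rate is derived from the “around 40%” comparison) shows how long a full sequential write of each drive would take, which is the sort of operation a rebuild or large restore would involve:

```python
# How long does it take to write an entire drive at its sustained rate?
# 68MB/s is the quoted figure for the 10TB SMR drive; ~170MB/s for the
# 6TB drive is derived from the "around 40%" comparison (68 / 0.4).

def full_write_hours(capacity_tb, throughput_mb_s):
    """Hours needed to write the whole drive sequentially."""
    capacity_mb = capacity_tb * 1_000_000
    return capacity_mb / throughput_mb_s / 3600

print(f"10TB SMR drive @ 68MB/s:  {full_write_hours(10, 68):.0f} hours")   # ~41 hours
print(f"6TB HDD       @ 170MB/s:  {full_write_hours(6, 170):.0f} hours")   # ~10 hours
```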

Around five years ago, I predicted hard drives would decrease rather than increase in speed, primarily because of the power savings this would deliver, making large HDD-based archives cheaper to run.  In fact, indirectly, we are seeing slower technology take over as 15K and 10K drives start to be retired.  It’s still possible that 7.2K drives could be slowed down, providing a more “power-optimised” version of high-capacity drives.  The alternative is optical technology, which is on track to reach 1TB per disc.  Optical offers the low-power benefit of tape (you don’t have to keep each disc powered up), with increasing capacities that will start to make it attractive compared to disk.  The added benefit is “nearline” access, which is random once the disc is loaded.  When building archives, both HDDs and optical have the benefit of low data decay (aka bit rot) compared to flash – another reason why flash isn’t going to be suited to large-scale archives.

Flash is here to stay and will be the primary technology for active data.  At the archive end, the choice is still undecided, with a three-way race between tape, SMR HDD and optical.  Each of these technologies has its merits as well as disadvantages; this will be an area to watch.

For more information on flash storage in the enterprise, see Langton Blue’s premium content report available here – Flash Storage in The Enterprise 2015.
