Having worked at Pure for three years now, I have been constantly impressed by how innovative our software engineering is. Customers tell us all the time that they like the simplicity of operation; that all hardware and software upgrades are designed to be non-disruptive and non-performance-impacting; and that the cloud-based management and monitoring framework helps avoid, and often resolve, issues before they have an impact on operations.
Because of these innovations, it is easy to overlook the importance of hardware innovation and the flexible, well-engineered design of Pure Storage arrays. One area of this hardware development, and of “looking around the corner”, is Pure’s plan for NVMe.
So, what is the evolution of NVMe and why is it so important? Most current storage systems, both all-flash arrays and hybrid arrays, typically use SAS (Serial Attached SCSI) links to communicate between their controller processors and flash. While SCSI solved the fundamental problem of letting multiple devices share a single connection, it was designed for disk and has itself started to become a bottleneck. As flash drives grow larger every year, the link to the flash can become the constraint, meaning data cannot enter and exit the drive fast enough.
Taking that further, each connection from CPU core to flash is limited by the SAS host bus adapter and its single, lock-synchronized command queue. This serialization creates a performance bottleneck in the back-end of all-flash arrays. The NVMe (over PCIe) protocol brings massive parallelism to an area where previously there was none: NVMe allows up to roughly 64,000 queues, each up to roughly 64,000 commands deep, giving each CPU core far more direct access to the device. This simply means data can get to and from the SSD with much more efficiency.
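To put rough numbers on that parallelism, here is a back-of-envelope sketch comparing the outstanding-command capacity of a single SCSI/SAS queue with NVMe’s protocol maximums. The figures are specification limits, not what any particular controller exposes in practice:

```python
# Back-of-envelope sketch: outstanding-command capacity per device.
# Queue counts below are protocol maximums; real controllers expose fewer.

def max_outstanding(queues: int, queue_depth: int) -> int:
    """Total commands a device can have in flight at once."""
    return queues * queue_depth

# SCSI/SAS: effectively a single queue, typically ~254 outstanding commands.
sas = max_outstanding(queues=1, queue_depth=254)

# NVMe: up to 65,535 I/O queues, each up to 65,536 commands deep.
nvme = max_outstanding(queues=65_535, queue_depth=65_536)

print(sas)   # 254
print(nvme)  # 4294901760
```

The point is not that any workload issues billions of concurrent commands, but that NVMe removes the single shared queue as the structural choke point between CPU cores and flash.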
The adoption of NVMe in storage arrays will bring several key benefits to data centers. Three of these are higher density, improved performance, and greater consolidation.
Over the last five years, Pure has continued to introduce denser drives. In 2012, 1PB of usable data filled six racks; today the same capacity fits in 5U. With the advent of NVMe, 2017 and beyond will bring even greater densities.
At Pure, when we meet with customers we tell them that the array gets faster and denser the longer they own it. We often have to explain the hardware design to help them understand this, because it is unfamiliar to legacy storage customers who are used to constantly refreshing their arrays. Unlike most arrays, which must be fully replaced every few years, Pure Storage arrays are designed from the ground up to support new technologies.
Another aspect NVMe enables is “performance density”. As mentioned above, as drives get bigger, getting data off the drive itself becomes more of a problem. A colleague of mine likens moving data off a very large flash drive to drinking a lake through a straw. Consider that at the outset of all-flash arrays the average drive size was 256GB; we are now seeing announcements for 8TB and 15TB drives from some vendors, yet the straw, the interface, has not grown with the lake. Maybe not a perfect analogy, but hopefully it illustrates the bottleneck.
For example, today AFAs (all-flash arrays) often require 20 SSDs to achieve the maximum performance the array is capable of. With NVMe, you’ll be able to get exceptional performance from very large flash modules or SSDs (16TB, 32TB, and beyond) without a performance penalty.
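The lake-and-straw arithmetic can be made concrete. The sketch below uses assumed, approximate link rates (SAS-3 at 12Gb/s is roughly 1.2GB/s of usable bandwidth; an NVMe drive on four lanes of PCIe 3.0 is roughly 3.9GB/s) to estimate how long it would take to stream an entire drive over its interface:

```python
# Hypothetical "straw" math: time to stream a full drive over its link.
# Bandwidth figures are rough assumed rates, not measured numbers:
#   SAS-3 (12 Gb/s)      ~= 1.2 GB/s usable
#   NVMe, PCIe 3.0 x4    ~= 3.9 GB/s usable

def hours_to_drain(capacity_tb: float, link_gb_per_s: float) -> float:
    """Hours needed to read a drive's full capacity at the link rate."""
    capacity_gb = capacity_tb * 1000
    return capacity_gb / link_gb_per_s / 3600

print(round(hours_to_drain(0.256, 1.2), 2))  # 0.06  (256 GB over SAS)
print(round(hours_to_drain(16, 1.2), 2))     # 3.7   (16 TB over SAS)
print(round(hours_to_drain(16, 3.9), 2))     # 1.14  (16 TB over NVMe x4)
```

A 16TB drive behind a SAS link takes hours just to read end to end; the drive grew roughly 60x while the straw barely changed. That is the performance-density problem NVMe addresses.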
While exact timing is always a question, just as consumer products such as smartphones drove the market for commodity SSDs, we expect consumer products to drive the adoption of new connection types such as NVMe. NVMe is already widely present in laptops, desktops, and even some large-scale clouds, and as we have seen, the enterprise is usually not far behind.
One well-known industry veteran, Howard Marks, makes the case that the tipping point for NVMe is coming very soon, perhaps as soon as 2018; other experts suggest very broad adoption in 2019.
Our view at Pure is that your array should be designed TODAY for NVMe support, and that you should not be penalized for buying SAS connectivity today when you will need NVMe tomorrow. The great news about Pure Storage arrays is that you pay no penalty.
Our current view is that retrofitting NVMe will be very challenging for legacy storage arrays. Pure Storage has a future-proof plan for our storage; does your vendor? In summary, with the kinds of innovation happening today, it is critical that your storage vendor has a clear, non-disruptive path to supporting NVMe.
Please contact Pure or Viadex with any questions, and join us in the coming weeks for a session where we will walk through the roadmap and why the adoption of NVMe will be so important.