Considerations for Testing All-Flash Array Performance

Virtual servers, cloud-based platforms, and virtual desktops are becoming the standard way to deploy new infrastructures. This trend toward consolidating many different applications onto a single set of systems causes what is sometimes called the “I/O Blender” effect: in the aggregate, the storage workloads become highly random and are spread simultaneously across a large percentage of the storage capacity. At the same time, powerful analytics are driving value for organizations and placing a premium on the ability to collect and analyze huge volumes of data.
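To make the effect concrete, the short Python sketch below (an illustration, not drawn from any particular product) interleaves several perfectly sequential per-VM I/O streams the way a hypervisor might, then measures how sequential the blended stream looks to the array. The VM count, region sizes, and random dispatch model are all assumptions chosen for clarity.

```python
import random

VM_COUNT = 8             # hypothetical number of consolidated VMs
IOS_PER_VM = 10_000      # sequential I/Os each VM issues
REGION_SIZE = 1_000_000  # blocks reserved for each VM's virtual disk

def sequential_fraction(lbas):
    """Fraction of accesses landing on the block right after the previous one."""
    hits = sum(1 for a, b in zip(lbas, lbas[1:]) if b == a + 1)
    return hits / (len(lbas) - 1)

# Each VM reads its own region strictly sequentially.
streams = [[vm * REGION_SIZE + i for i in range(IOS_PER_VM)]
           for vm in range(VM_COUNT)]

# Model the hypervisor's dispatching by randomly interleaving the streams;
# the resulting list of LBAs is what the array actually services.
random.seed(42)
pending = [iter(s) for s in streams]
blended = []
while pending:
    it = random.choice(pending)
    try:
        blended.append(next(it))
    except StopIteration:
        pending.remove(it)

print(f"per-VM stream:  {sequential_fraction(streams[0]):.1%} sequential")
print(f"blended stream: {sequential_fraction(blended):.1%} sequential")
```

Each individual stream is 100% sequential, yet in the blended stream only about 1/VM_COUNT of the transitions remain sequential; the array sees a mostly random pattern, which is exactly what defeats read-ahead and prefetching on spinning disks.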
Traditional storage systems built from spinning disks and caching controllers are a great fit for workloads with manageable working sets and a high percentage of sequential accesses; in those scenarios, traditional architectures provide the best value. But for many deployments dedicated to consolidated infrastructure and high-performance analytics, all-flash arrays are a better fit and deliver a superior ROI because they handle the highly random access demands of consolidated infrastructures in a cost-effective manner.
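As a rough illustration of why controller caches lose their leverage here, the back-of-the-envelope model below approximates the hit rate of a uniformly random workload as cache size over working-set size. All latency and capacity figures are assumptions for the example, not measurements of any product.

```python
def effective_latency_ms(working_set_gb, cache_gb,
                         cache_hit_ms=0.2, disk_miss_ms=8.0):
    """Crude model: a uniformly random workload hits the controller cache
    with probability cache_size / working_set_size (capped at 1.0);
    misses pay a spinning-disk seek. All figures are illustrative."""
    hit_rate = min(1.0, cache_gb / working_set_gb)
    return hit_rate * cache_hit_ms + (1.0 - hit_rate) * disk_miss_ms

# Manageable working set: the cache absorbs nearly every I/O.
print(f"{effective_latency_ms(working_set_gb=48, cache_gb=64):.2f} ms")     # 0.20 ms
# Blended workload touching most of a 10 TB array: the cache barely helps.
print(f"{effective_latency_ms(working_set_gb=10_000, cache_gb=64):.2f} ms")  # 7.95 ms
```

With a working set smaller than the cache, the controller hides the disks almost entirely; once a consolidated workload touches terabytes, effective latency collapses toward raw disk seek time, and that is the gap all-flash arrays close.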
Flash storage isn’t new, but recent improvements in price and performance have made it practical to deploy flash-based storage against demanding enterprise requirements. Flash packaged as standard SAS and SATA drives (solid-state drives, or SSDs) has rapidly improved and matured into a generally trustworthy commodity. Storage vendors have been quick to take advantage, but many simply replace spinning disks with SSDs, which neither exploits flash’s strengths nor compensates for its weaknesses. In the marketplace, some solutions are better at delivering on the promise of flash than others.