What are the specific aspects that an array can and should have to be efficient and TCO (total cost of ownership) friendly? How does ISE meet and exceed these aspects by making the “whole greater than the sum of the parts”? I will deal with these questions across my next few blogs, but first consider the design of a storage product from the ground up. A storage array is a specialized computer system. It has a clear focus on data storage, but it’s also much more than that. A storage array has a few laws it must live by:
- It must protect data from at least a single failure
- It must never lose data after a power failure
- It must withstand a component failure that occurs as a result of a power failure (see law 1)
- Reads and writes must be supported at the proper duty cycle for the array’s tier of storage. ISE, for example, is a Tier 0/1 device, so its duty cycle must be 100%: any workload, at any time, all the time, with low latency and high IOPS/throughput.
So, what makes up an array that meets these “laws” in such a way that it’s not just a small server or even a PC with a bunch of Band-Aids on top (or “perfume on a pig”!)?
Array Hardware and Its Effect on TCO
Given that a storage array typically has two controllers, aspects that make or break TCO include:
1. Are both controllers active at the same time for access to the same data volumes? If not, the system is either active/passive or one in which each controller is active only for its own subset of volumes; both designs cause availability issues and/or software reliability issues, driving cost. An active/passive system typically compensates by throwing heftier hardware at each controller, driving up power to make up for the performance lost in the normal case, when both controllers are operating. Also, where active-active is not handled within the array, multi-pathing driver software must be put into play on the servers, which adds complexity, sometimes costs extra money, and drives up the overall solution cost; either way, storage companies seek to recover development and support costs by hiding them inside high warranty prices.
2. Do both controllers have a communication link with near-zero latency? This matters for failover in case 1 above, but most importantly it is what lets the array serve an application’s write workload at the lowest latency and overall cost. Mirroring write data between controllers is the best method to ensure data integrity in the case of failure, and it also gives the lowest latency across the widest range of host access patterns. True active-active operation in a dual-controller array is possible when this communication link is fast enough. Not only does this allow for faster failover in the event of a controller reboot or failure, it also makes both controllers’ performance additive across all volumes when both are operational. In addition, servers no longer need special drivers to manage multiple paths to the storage.
3. Related to case 2 is how the dynamic random access memory (DRAM) cache is used for writes and how it is protected. A good write-back cache can smooth out most application I/O “outliers” from the standpoint of overall access to the application’s dataset. A small amount of DRAM with non-volatility, combined with a very fast inter-controller communication link, reduces I/O latency to first order. Remember, DRAM is roughly 1000x faster than SSD, which in turn is much faster than HDD for random I/O. Using DRAM in the proper quantity can reduce TCO, but throwing a large amount at the problem without intelligence just drives up cost and power usage. (The latency sketch after this list puts rough numbers on cases 2 and 3.)
4. Good cache algorithms that aggregate I/O, pre-fetch, perform full RAID-stripe writes, atomic writes, parity caching, and so on, are what make a small amount of DRAM so cost-effective: the cache fronts all the back-end storage devices, and the I/O that must ultimately be performed to and from those devices should be issued in the most efficient way possible for each back-end device type. (A sketch of full-stripe coalescing follows this list.)
5. What kinds of back-end device types should be considered: nearline HDD (SATA or SAS), enterprise HDD (10K or 15K), or SSD in drive or plug-in-card form factor? It all depends on the mission of the array. If the mission is price/performance and TCO, then my mind goes to the 10K HDD, plus MLC SSD for some applications. Nearline HDD has its place in very low performance or sequential I/O environments, mainly backup and archive use cases, because its extremely low I/O density prevents efficient utilization of the full capacity behind these typically high-capacity drives. Remember, too, that low-cost, high-capacity drives have a different duty cycle than enterprise drives. For example, throwing multiple sequential workloads at high-capacity drives looks just like a random workload and will kill these drives prematurely, resulting in more service events, slower performance during long rebuilds, potential data loss, and sub-optimal performance.
6. Does the array have the ability to drive I/O to all of its attached capacity? This is a key metric for effective TCO, versus the old adage of $/GB. If an array can utilize ALL the capacity under load, that efficiency drives down TCO. The ability to utilize all the capacity is a function of the data layout, the effective utilization of back-end devices, and how the caching and controller cooperation work. All of this can drive TCO way down or way up depending on how well it’s done. (The drive and utilization arithmetic after this list makes cases 5 and 6 concrete.)
7. Does the array have a warranty greater than three years? If so, it’s either because the technology reduces service events OR it’s a sales tactic. If it’s the former, then it truly drives TCO down as more storage is purchased. If not, then it’s “pay me now or pay me later.” Technology that requires less service is based on a design for reliability and availability that goes far beyond just dealing with errors after they occur. It’s a system approach, similar to Six Sigma, that reduces variation in the system and thus reduces the chance of failure. In an array, that means how the devices are packaged, how the removable pieces are grouped together, and how the software deals with potential faults while keeping the application running without loss of QoS. A system that can do this drives TCO down because customers no longer have to design for failure, or in other words, design around the shortcomings of the array by over-provisioning (as many cloud vendors do). Many cloud providers have designed for failure with mass amounts of over-provisioned storage, n-way mirroring, and so on. The industry has been trained around the shortcomings of array design and error recovery, so those who build their own datacenters simply go for the cheapest design with the cheapest parts. In contrast, a storage system that really does provide magnitudes-greater reliability, availability, capacity utilization, and performance across that capacity can actually change this mindset. However, it takes belief that a design of this nature is possible . . . and it has been done with the ISE from X-IO.
8. Does the array provide real-time tiering that maintains a consistent I/O stream for multiple applications across the largest amount of capacity possible? An array that can do this with the highest I/O and largest capacity, at the lowest cost, wins the TCO battle. Beware of marketing fear, uncertainty, and doubt (FUD) that sounds the same; the architecture and design of the product, and the results it delivers, are what matter.
9. Does an array add features that, under the right circumstances, reduce capacity footprint via de-dupe or compression? If so, I smell snake oil, because in most Tier 1 applications compression and de-dupe just drive up the cost of the controller while delivering dubious results. On paper it might look good for $/GB, but other aspects such as space, power, and utilization suffer. And if it’s done with all SSD in order to artificially claim a lower cost, all the worse.
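To put rough numbers behind cases 2 and 3 above, here is a minimal back-of-the-envelope model in Python. Every figure in it is an assumption of mine for illustration (typical orders of magnitude), not a measurement of ISE or any other product; the point is only that a mirrored write-back acknowledgment over a fast link remains orders of magnitude quicker than committing straight to disk.

```python
# Back-of-the-envelope model for cases 2 and 3. All figures are illustrative
# assumptions (rough, commonly cited orders of magnitude), not measurements
# of ISE or any other product.
DRAM_US = 0.1       # ~100 ns to commit a block to controller DRAM
SSD_US = 100.0      # ~100 us SSD random access (about 1,000x slower than DRAM)
HDD_US = 5000.0     # ~5 ms HDD random access (seek plus rotation)

def mirrored_write_latency(link_us: float) -> float:
    """A host write is acknowledged once the block is in local DRAM AND a
    mirror copy is confirmed in the partner controller's DRAM."""
    return DRAM_US + link_us + DRAM_US

for name, link_us in [("fast inter-controller link", 1.0),
                      ("slow inter-controller link", 50.0)]:
    print(f"write-back ack, {name}: {mirrored_write_latency(link_us):7.1f} us")
print(f"write-through to SSD:                       {SSD_US:7.1f} us")
print(f"write-through to HDD:                       {HDD_US:7.1f} us")
```

Even a generous 50-microsecond mirror hop leaves write-back far ahead of write-through, which is the arithmetic behind mirroring being both the safest and the lowest-latency choice.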
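Case 4’s full-stripe write deserves a concrete sketch as well. Below is a minimal, hypothetical Python model (my own illustration, not ISE’s implementation) of destage planning for a 4+1 RAID-5 layout: a write-back cache that holds dirty blocks long enough to accumulate complete stripes can compute parity once per stripe and skip the pre-reads that partial-stripe read-modify-write updates require.

```python
from collections import defaultdict

STRIPE_BLOCKS = 4   # hypothetical 4+1 RAID-5: four data blocks per stripe

def plan_destage(dirty_blocks: set[int]) -> tuple[list[int], list[int]]:
    """Split dirty cache blocks into full stripes (parity computed once, no
    pre-reads) and partial stripes (read-modify-write required)."""
    stripes = defaultdict(set)
    for lba in dirty_blocks:
        stripes[lba // STRIPE_BLOCKS].add(lba)
    full, partial = [], []
    for stripe_id, blocks in sorted(stripes.items()):
        (full if len(blocks) == STRIPE_BLOCKS else partial).append(stripe_id)
    return full, partial

dirty = {0, 1, 2, 3,   # stripe 0 is complete: one cheap full-stripe write
         8, 9}         # stripe 2 is partial: pays the read-modify-write penalty
full, partial = plan_destage(dirty)
print("full-stripe destages   :", full)     # [0]
print("partial-stripe destages:", partial)  # [2]
```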
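And to make cases 5 and 6 concrete, here is the arithmetic with hypothetical (but typical-order) drive and array figures of my own; substitute real specifications before drawing conclusions from it.

```python
# I/O density per drive type: raw $/GB is not the whole story.
drives = {
    #                 (random IOPS, capacity TB, price $)  - assumed figures
    "nearline 7.2K":  (80,  4.0, 300),
    "enterprise 10K": (150, 0.9, 250),
}
for name, (iops, tb, price) in drives.items():
    print(f"{name:>14}: {iops / tb:6.1f} IOPS/TB, "
          f"${price / (tb * 1000):.3f}/GB raw")

# Effective cost per USABLE gigabyte: an array that can drive I/O against
# all of its capacity is far cheaper than its raw $/GB suggests.
array_price, raw_tb = 100_000, 20.0
for utilization in (0.40, 0.95):
    usable_gb = raw_tb * 1000 * utilization
    print(f"{utilization:.0%} usable under load: "
          f"${array_price / usable_gb:.2f}/usable GB")
```

The nearline drive wins on raw $/GB but loses by roughly 8x on I/O density, which is exactly why its place is backup and archive; and the utilization lines show how an array that can use all of its capacity under load more than halves the effective cost per usable gigabyte.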
Why am I harping on the way that arrays are designed? Because the architecture, and the methods used to deliver performance, capacity utilization, reliability, and availability, are what drive TCO up or down . . . or NOT!
Most arrays today are very wasteful when it comes to the:
- amount of compute power inside the array
- amount of actual usable capacity
- overall reliability (i.e., avoidance of service events)
- availability of the array to the application
Also, adding features such as those noted above, as well as many kinds of replication, makes the performance of the array inconsistent, causing IT architects to over-provision their gear and “work around the SAN.” SANs got a bad name for bloated, framed architectures with big iron, big license fees for every feature on the planet, poor performance, poor reliability, poor capacity utilization, etc., etc., etc . . . A SAN was originally meant simply to put storage on a private network that servers could share. Oh, how things get polluted over time when vendor greed takes over.
As noted before, putting the right amount of compute, against the right amount of storage, will drive costs down in power, space, and application efficiency.
Most arrays also have a “when in doubt, throw it out” mindset when it comes to replaceable components within the system, also known as Field Replaceable Units (FRUs). This leads to more service events, higher warranty costs, potential and real performance loss at the application, and even downtime.
What Makes ISE Tick?
X-IO is now in its second generation of ISE, a balanced storage system that breaks all the molds of the traditional storage system. Unique aspects of ISE and its second generation are:
1. Everything ISE already solves, including two to three times the I/O per HDD of any other array manufacturer.
2. Dual super-capacitor subsystems that can always hold up both controllers for up to 8 minutes, long enough to flush the mirrored write-back cache on both controllers to a small SSD on each controller. This ENDS the reliance on batteries or a UPS to either hold up the cache or hold up the entire array while write-back cache is written out to a set of log disks. Reliability goes up exponentially over a battery (which was already good), the price stays the same, and the data is readily available for server use when power comes back on. (Note: Two super-caps are in each ISE, but only one is necessary for hold-up; two are provided for high availability and no single point of failure. A rough hold-up budget sketch follows this list.)
3. Reliability for the back-end devices in datapacs that is increased tenfold over the first-generation ISE when using the new Hyper ISE 7-series (with additional groupings of HDDs). This extends ISE’s art of deferring service, and it is backed by the 5-year hardware warranty that X-IO extends on all its ISE systems.
4. Unique performance tiering in the Hyper ISE hybrid that allows full use of the HDD capacity with a small percentage of SSD. The new 7-series extends this capability with varying capacities of the Hyper ISE, as well as SSD capacity for application acceleration. (A generic tiering sketch follows this list.)
5. No features that are not necessary for application performance. ISE does NOT do de-duplication, because it’s not necessary if the application does it (which most do); moreover, since we are the only company in the world that allows full utilization of the storage purchased, de-duplication/compression is relegated to where it belongs: data at rest, NOT Tier 1 storage. Likewise, thin provisioning is unnecessary because mainstream operating systems such as Windows and Linux, let alone VMware, already allow volumes to grow and shrink properly, which ISE supports.
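As a rough feasibility check on the hold-up design in item 2: the 8-minute window comes from the text above, but the cache size and SSD flush rate below are assumptions of mine, not X-IO specifications.

```python
# Can the mirrored write-back cache be flushed to the on-controller SSD
# within the super-capacitor hold-up window? Figures are illustrative.
HOLDUP_S = 8 * 60          # up to 8 minutes of hold-up (from the text)
CACHE_GB = 4.0             # assumed dirty write-back cache per controller
SSD_FLUSH_MBPS = 200.0     # assumed sustained sequential SSD write rate

flush_s = CACHE_GB * 1024 / SSD_FLUSH_MBPS
print(f"flush time: {flush_s:.0f} s of a {HOLDUP_S} s window "
      f"({flush_s / HOLDUP_S:.0%} of budget)")   # ~20 s, ~4% of budget
```

Even with conservative assumptions the flush completes with a wide margin, which is what lets super-capacitors replace batteries and UPSes outright.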
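Finally, for item 4, here is a generic sketch of the kind of real-time tiering described: track per-extent access heat and keep only the hottest extents on the small SSD tier, leaving the bulk of capacity on HDD. This is a deliberate over-simplification of the general technique, not X-IO’s algorithm.

```python
from collections import Counter

def hottest_extents(heat: Counter, ssd_slots: int) -> set[int]:
    """Return the hottest extents; these belong on the SSD tier. Real
    implementations damp promotion/demotion to avoid thrashing."""
    return {extent for extent, _ in heat.most_common(ssd_slots)}

heat = Counter()
for extent in [7, 7, 7, 3, 3, 9, 7, 3, 1]:   # simulated extent accesses
    heat[extent] += 1

# Hypothetical: the SSD tier holds only 2 of the many extents (a small %).
print("promote to SSD:", sorted(hottest_extents(heat, 2)))   # [3, 7]
```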