IT shops are always on the lookout for ways to improve application performance without negatively impacting availability or adding cost and complexity. Of course, adding more servers to the environment could do the trick – but that comes at a steep cost and, in many cases, added complexity (more servers means more things to manage and maintain). Many customers then look to their storage infrastructure for performance gains – all-flash arrays, storage tiering or storage caching. Let’s take a look at the differences between the three.
All-flash arrays are getting a lot of attention in the market, and for good reason – they can deliver massive application performance gains (in terms of IOPS and sub-millisecond latency). A chassis full of SSD drives can be very tempting for any IT organization looking to solve performance bottlenecks. However, this comes at a significant cost. SSD drives are approximately 10X the cost/GB of spinning disks (though they have a very low cost/IOPS). Also, not every “performance-hungry” application will benefit from all-flash arrays – in some cases the performance bottleneck is somewhere else in the system. If the IT team is confident that all-flash will meet an application’s performance requirements, and the budget supports expensive, all-flash arrays – they could be a good option.
Another approach to performance improvement is storage tiering. With this approach, multiple types of drives are configured – SSD, 15K HDD, 10K HDD and/or 7.2K HDD – and intelligent software moves data to the most appropriate tier. Typically, all writes go to SSD first to maximize write performance, and data is then moved to lower cost tiers as it ages. Each vendor’s implementation is slightly different (block level migration, volume level migration, scheduling movement by time or priority, etc.), but the common denominator is that the actual application data is moved between the different tiers, and controller software and CPU cycles are consumed controlling that movement.
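To make the tiering idea concrete, here is a minimal sketch of age-based tier placement – writes land on the fastest tier and a background pass demotes blocks as they age. The tier names and thresholds are illustrative assumptions, not any particular vendor’s implementation:

```python
import time

# Illustrative age-based tiering: writes go to SSD first, then a
# background pass demotes each block to a slower tier as it ages.
TIERS = ["ssd", "15k_hdd", "7.2k_hdd"]   # fastest to slowest (assumed)
AGE_THRESHOLDS = [60, 3600]              # seconds before each demotion (assumed)

class TieredStore:
    def __init__(self):
        self.blocks = {}  # block_id -> {"tier": str, "written": float}

    def write(self, block_id, now=None):
        now = time.time() if now is None else now
        # All writes land on the fastest tier (SSD) first
        self.blocks[block_id] = {"tier": "ssd", "written": now}

    def age_out(self, now=None):
        # Background migration: move each block to the tier matching its age.
        # Note: the actual data is moved, consuming controller CPU cycles.
        now = time.time() if now is None else now
        for meta in self.blocks.values():
            age = now - meta["written"]
            tier_idx = sum(age >= t for t in AGE_THRESHOLDS)
            meta["tier"] = TIERS[tier_idx]
```

The key point the sketch captures: in tiering, the block itself migrates between tiers – it lives in exactly one place at a time.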
The latest approach is storage caching, or more specifically, Predictive Storage Caching (which is StorMagic’s approach). This implementation is similar to storage tiering in that multiple types of drives can be utilized, but there are two main differences.
The first difference is that system memory can be utilized as a caching tier for reads. Since adding memory to a server is inexpensive, this is an excellent way to improve storage performance for read-intensive workloads without breaking the bank. The second difference is that since this is a caching approach, the data being moved between the different tiers is actually a copy of the original source data. This approach minimizes the amount of data being moved around the back-end, because only the most active blocks of data actually migrate up the tiers, from HDD to SSD or memory cache. As it turns out, for typical applications, only a small percentage of data is actually read frequently, so it’s extremely cost-effective to keep this data in memory or SSD cache.
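The caching-versus-tiering distinction can be sketched in a few lines: the cache holds copies of hot blocks, so the authoritative data never leaves the backing HDD. The promotion threshold below is an assumption for illustration, not StorMagic’s actual policy:

```python
# Illustrative copy-based read cache: only blocks read often enough get
# copied up; the source data stays on the backing HDD throughout.
PROMOTE_AFTER = 3   # reads before a block counts as "hot" (assumed)

backing = {b: f"data-{b}" for b in range(8)}  # all data lives on HDD
cache, read_counts = {}, {}

def read(block):
    read_counts[block] = read_counts.get(block, 0) + 1
    if block in cache:                # fast path: serve the cached copy
        return cache[block]
    value = backing[block]            # slow path: read from HDD
    if read_counts[block] >= PROMOTE_AFTER:
        cache[block] = value          # copy the hot block up; HDD keeps it
    return value
```

Because only a copy is promoted, dropping a cache entry never requires writing data back down – one reason a caching back-end moves far less data than a tiering one.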
StorMagic’s Predictive Storage Caching is also quite effective in improving performance because of the efficiency of our intelligent, patent-pending caching algorithms. Our software is able to determine which blocks are likely to become active based on previous sequential IO access patterns and “pre-fetch” them from the lower tiers into memory. Our Tracker Module looks at the data over a long period of time to determine the hot data, ensuring the correct blocks remain in cache. Other caching algorithms on the market implement a least recently used (LRU) approach, which looks at a shorter time period and isn’t truly predictive. Transient data can fill up the cache and evict important data too soon.
We’ve also figured out how to address the virtualization “I/O Blender Effect” (which can impact storage performance) by detecting the sequential IO patterns of the individual virtual machines, even when their requests arrive interleaved. And there is a data pinning mode where the data for a specific workload (e.g. VM booting/start of day/VDI boot storm, or end of month database processing) can be preloaded into memory cache ahead of time – the performance for that workload is then similar to an all-flash array.
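Untangling the I/O Blender comes down to tracking each VM’s access stream separately. Here is a minimal sketch under that assumption (the per-VM tracking and prefetch depth are illustrative, not StorMagic’s implementation): by remembering the last block address per VM, sequential runs remain visible even when requests from many VMs interleave, and the next blocks can be pre-fetched:

```python
from collections import defaultdict

PREFETCH_DEPTH = 4              # blocks to fetch ahead (assumed)

last_block = {}                 # vm_id -> last block address seen
prefetched = defaultdict(set)   # vm_id -> blocks pulled into cache early

def on_read(vm_id, block):
    # Track each VM's stream separately, so interleaving from other VMs
    # ("blended" IO) cannot hide a VM's own sequential pattern.
    prev = last_block.get(vm_id)
    if prev is not None and block == prev + 1:
        # Sequential access detected for this VM: pre-fetch the run ahead
        for b in range(block + 1, block + 1 + PREFETCH_DEPTH):
            prefetched[vm_id].add(b)
    last_block[vm_id] = block

# Interleaved reads from two VMs, each sequential on its own:
for vm, blk in [("vm1", 100), ("vm2", 500), ("vm1", 101), ("vm2", 501)]:
    on_read(vm, blk)

print(sorted(prefetched["vm1"]))  # [102, 103, 104, 105]
```

A global view of the same trace (100, 500, 101, 501, …) looks random; splitting it per VM recovers two clean sequential streams.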
The other thing to mention about these three approaches to improving storage performance is packaging. Typically, all-flash arrays and storage tiering solutions require the end user to purchase a complete hardware/software solution – which can be quite expensive. StorMagic’s Predictive Storage Caching approach is a software-defined storage implementation that can run on any x86 server. The IT team simply configures their server with any mix of server memory, SSDs and HDDs and then installs the StorMagic virtual SAN with caching enabled. With our approach, the user can select the exact volumes of data that will be eligible for caching, so the faster tiers focus only on the most important data.
Predictive Storage Caching is a much simpler, more cost-effective approach to storage performance improvement and can save IT departments considerable budget dollars and headaches. To find out more about Predictive Storage Caching, why not check out our recent white paper or webinar recording?