What to Expect in 2017 for Software-Defined Storage

Published on: 23rd March 2017 | 4.4 min read

StorMagic CEO Hans O’Sullivan offers his insight into the trends he expects to shape software-defined storage over the course of 2017.

The storage market is in the midst of a revolution: the biggest shift in storage technology since network-attached storage was introduced over 30 years ago. Software-defined storage (SDS) — a technology that is effectively a storage hypervisor — has turned legacy storage on its head. This transformation from monolithic, proprietary storage arrays to flexible software-based storage solutions leveraging commodity hardware is a necessary response to digital transformation and the rapid growth of data everywhere.

At the same time, software-defined storage is evolving rapidly. Here’s what we expect to see in 2017:

1. Storage budgets aren’t keeping pace with growth.

According to a 451 Research report, most enterprises find that storage demands are outpacing budgets, even as raw storage gets cheaper every year. Organisations will look for more cost-effective ways to store and manage data.

2. Tier 1 storage will come under greater cost scrutiny and data will be relocated.

Tier 1 storage is expensive, but in the past enterprises lacked real alternatives. Much of the data stored there can now be comfortably held in lower-cost storage tiers that still meet the original performance and resilience SLAs. SDS gives organisations additional flexibility to create new capacity at a fraction of the cost of traditional tiered storage, whilst still meeting the Tier 1 storage requirements defined 3 to 5 years ago.
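To make the “fraction of the cost” claim concrete, here is a hypothetical back-of-the-envelope calculation. The per-GB prices, data volume, and relocation fraction below are illustrative assumptions for the sketch, not figures from the article or real quotes:

```shell
# Hypothetical tiering economics: relocating a share of Tier 1 data onto
# SDS capacity built from commodity servers. All numbers are placeholders.
awk 'BEGIN {
    total_gb = 100000         # assumed 100 TB currently on Tier 1
    fraction = 0.7            # assumed share safe to relocate to SDS
    tier1_per_gb = 3.00       # hypothetical Tier 1 array cost per GB
    sds_per_gb   = 0.60       # hypothetical SDS-on-commodity cost per GB
    moved = total_gb * fraction
    printf "savings: $%.0f\n", moved * (tier1_per_gb - sds_per_gb)
}'
# prints: savings: $168000
```

Even with conservative assumptions, the gap between Tier 1 and commodity-backed SDS pricing dominates the result; the exact per-GB figures matter less than the ratio between the tiers.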

3. A breakout year for virtual SANs replacing legacy physical SANs.

VMware, StorMagic and several others have been shipping virtual SANs for some time now. Microsoft’s entry into the market with Windows Server 2016 Storage Spaces Direct will cement virtual SANs as a mainstream technology in the minds of many buyers. Further, the cost and complexity of SAN hardware make it a poor fit for dedicated applications or edge deployments. IT is starting to realize there’s a better way.
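As an illustration of how far this has been commoditized, a minimal Storage Spaces Direct setup on Windows Server 2016 can be sketched in a few PowerShell commands. The node names (SRV1–SRV3), cluster name, and volume details below are hypothetical placeholders; a production deployment involves more validation and networking work than shown here:

```shell
# Hypothetical sketch (PowerShell, Windows Server 2016): pooling the local
# drives of three commodity servers into a virtual SAN with Storage Spaces
# Direct. Server and cluster names are placeholders.

# Validate the candidate nodes, then form a cluster without shared storage
Test-Cluster -Node SRV1, SRV2, SRV3 -Include "Storage Spaces Direct", "Inventory", "Network"
New-Cluster -Name SDS-CLUSTER -Node SRV1, SRV2, SRV3 -NoStorage

# Enable Storage Spaces Direct to pool each node's eligible local disks
Enable-ClusterStorageSpacesDirect

# Carve a resilient, cluster-shared volume out of the pooled capacity
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" `
    -FileSystem CSVFS_ReFS -Size 1TB
```

The point of the sketch is the shape of the workflow — validate, cluster, enable, provision — rather than the exact parameters, which vary with hardware and resiliency requirements.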

4. Commodity server hardware will become the backbone of SDS.

In the early days of SDS, many solutions were built on dedicated appliances. This made more sense when flash was more expensive and storage performance across a pool of servers was less predictable. But appliances can only scale in relatively large increments, and it’s harder for IT to justify new dedicated hardware when commodity servers offer better economics and flexibility, including the chance for IT to repurpose existing hardware.

5. Repurposing industry standard servers will gain momentum.

How many organisations have a batch of 3 to 5 year old servers sitting on the proverbial shelf at any one time? The compute and memory of these machines make them a perfect platform for building software-defined storage arrays at a fraction of the cost of physical SANs — a perfect antidote to point 2.

6. Hyperconverged appliances will be exposed as overkill and inflexible for many use cases.

Being locked into a predefined, limited set of configurations has some advantages, particularly when performance and the ability to scale out in minutes within the data center are the primary needs. Where they are not, however, organisations risk over-provisioning. Using SDS and a virtual SAN to custom-build hyperconverged infrastructure removes that risk and gives IT professionals the freedom to build to their exact needs.

7. SDS deployments at the edge will accelerate.

SDS started in the datacenter, and deployments at the edge will now accelerate as companies look to replace aging legacy infrastructure beyond it. Branch offices are the most obvious candidate, where aging servers and storage arrays can be replaced with flexible, more cost-effective options, but new use cases are also emerging. For example, pre-processing sensor and machine data and factory automation are both a perfect fit for SDS, given the need for redundancy and a small IT footprint.

8. Everyone is talking about object storage, but file and block aren’t going away.

Though object storage is gaining momentum, standard block and file protocols will continue to play a big role, even as data moves off-premise. SDS will play an increasingly important role in modern storage architectures.

9. Hosting and managed service providers will implement SDS.

Many HSPs and MSPs have become regional cloud providers in their own right, and need to support customer workloads that run on block storage (in addition to object storage and file systems). Big storage arrays are clearly too expensive for this. SDS provides an attractive alternative, giving them the flexibility to “roll their own” solutions based on commodity hardware.

10. SDS will unlock desktop virtualization opportunities.

Storage cost and performance have historically been blockers, especially for smaller-scale deployments. SDS enables economical shared storage for desktop virtualization. For example, Citrix XenApp customers can finally move away from direct-attached storage and benefit from the flexibility of shared storage. Beyond that, fixing performance bottlenecks with affordable tiered storage now becomes a reality.

In 2017, we expect SDS to continue to change the economics of existing enterprise storage platforms. The performance, reliability, and scalability that were once the domain of proprietary arrays are becoming “democratised”, such that they can be leveraged anywhere in the organisation at a much lower cost. The market will take greater notice of SDS as organisations consider their storage strategies and the need to drive continual improvement in their IT infrastructure.
