Why Now is the Time for an SDS First Strategy

Published On: 15th August 2017 | 4.1 min read

Server virtualization transformed the datacenter and radically simplified server and application deployment. Server utilization became much more efficient, and provisioning new systems much simpler. Because it brought so many benefits to IT, companies began adopting a “virtualization first” policy as early as 2007, and large organizations like the State of California CIO’s office followed suit in 2011.

Storage is now following the same path. Commodity hardware, combined with a storage software layer, provides sufficient performance and resiliency for most enterprise applications and is far more cost-effective than traditional arrays. Just as companies adopted a “virtualization first” strategy for all new apps and servers, they’ll adopt a “software-defined storage first” strategy for any new storage, or any refresh of existing storage.

The Demise of the Legacy Storage Array

Big iron storage arrays have delivered enterprise performance and reliability for over two decades. But the proprietary model isn’t without flaws — it’s expensive, complex, has to be deployed in large increments, and requires specialized knowledge to run. As data storage needs have grown, companies have struggled to manage storage costs and deliver performance for modern applications.

Startups such as Pure Storage, Nimble Storage, Nutanix, and Tintri (just to name a few!) noticed the challenges customers were facing with both virtualization and growing storage demands. Many of these startups were founded by people who once worked at the big array vendors. But even their appliances were only a first step, because they still tied storage to proprietary hardware. Storage performance and availability are driven far more by software than by any specific hardware platform.

Big arrays and dedicated storage hardware platforms may remain relevant for specific workloads that demand sub-millisecond response time for every transaction. But for most applications, it’s increasingly difficult to justify the cost of proprietary storage platforms — especially when there are other more compelling options.

Software-Defined Storage Grows Up

Five years ago, you could be forgiven for concluding that software-defined storage (SDS) and virtual SANs wouldn’t be good enough for most production applications. Early on, software solutions lacked many of the enterprise features: high availability, compression, deduplication, data resiliency, replication, and performance management. The underlying hardware was expensive, too. You needed higher-end drives or costly SSDs, along with high-end server hardware.

In the past few years, SDS and virtual SAN providers have added those enterprise features and optimized their software to make more efficient use of the hardware. Server hardware has seen drastic improvements in performance along with significant cost declines. Drive technology has evolved as well: there are now many more options for spinning and solid-state disks, and the cost per GB continues to fall.

Economics Seal the Deal

Commodity hardware and standardized software ultimately win over buyers as technology improves and reaches near parity with the features, performance, and reliability of proprietary systems. We’ve seen this happen before with server platforms (x86 servers displacing proprietary servers) and other technologies. Today, enterprises also see the results that Amazon, Google, Facebook and other internet companies can get with commodity hardware. They want to do the same.

SDS is following the same path. The commodity hardware is there, and combined with advances in the software itself, virtual SANs can now meet the needs of most applications. That combination brings storage economics back in line with what companies need and want: the cost advantages and flexibility of virtualization and the cloud, applied to the SAN.

An SDS First Strategy

More companies will begin adopting an SDS first strategy in the next two to three years. A “virtualization first” policy is already commonplace – any new servers or applications are virtualized by default, with an exception process to manage outliers.

The same thing will happen with storage — new storage will need to be based on commodity hardware using a converged or hyperconverged architecture, unless there’s clear evidence that an application needs dedicated storage hardware. After all, why do otherwise when the economics and technology support it?

And since virtual SANs can support the vast majority of workloads, there are fewer and fewer compelling reasons to buy specialized storage — especially big arrays. More demanding workloads will shift to next-generation storage (such as Pure, Nimble, or Tintri), and some companies will choose hyperconverged solutions (e.g. Nutanix, SimpliVity, StorMagic) to consolidate infrastructure in the datacenter and at edge locations for distributed IT.

Only a handful of workloads will still require a dedicated SAN array, or the high performance of enterprise flash storage. The only other thing holding companies back is sheer inertia — you keep buying what you know.

But the market is already moving. The Dell-EMC merger, HPE’s acquisition of both SimpliVity and Nimble, and the financials of other incumbents all make it clear. Within the next five years, you’ll be in the minority if you’re still buying dedicated storage hardware for on-premises workloads.


…now what next?

Examine StorMagic SvSAN in more detail.
