Evolution of Storage: Part 3 – Server consolidation and shared SAN/NAS storage

Published On: 7th September 2017

Storage technologies have been constantly evolving over the past few decades, from dedicated application resources, through server consolidation, to the latest innovations in space saving and performance such as thin provisioning, caching using SSD or RAM, and data deduplication. This technological development has lent itself to the requirements of large data center environments, yet largely ignored the challenges faced by organizations deploying storage at branch or remote locations.

In this five-part series we take a look at how storage solutions have evolved and how data storage vendors are finally addressing the challenges faced by distributed organizations in serving their remote sites with cost-effective and highly available shared storage. Having examined the emergence of shared JBOD arrays, our Evolution of Storage series now considers how the consolidation of these arrays took hold. As organizations grow in size, so does the requirement to retain more and more data. To limit server sprawl and the number of storage islands, the concepts of storage and server consolidation were conceived.

The fundamental principle of server consolidation is to co-locate multiple applications onto fewer but larger servers, reducing the number of servers required and therefore increasing resource utilization. This is possible because modern servers are more powerful, with an abundance of CPU and memory resources. To accommodate rapid data growth, storage arrays grew larger, with connectivity challenges resolved by storage area network (SAN) switches.

SAN storage arrays deliver high performance, high capacity shared storage, along with numerous storage features such as thin provisioning, compression, deduplication, snapshots, replication, tiering and caching. Traditionally these arrays used bespoke, purpose-built hardware and ASICs; while today they use commodity “off-the-shelf” processors, memory and disks, they remain monolithic in design, scaling up capacity within a single system and requiring specialist skills to administer.
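The thin provisioning feature mentioned above can be illustrated at the filesystem level with a sparse file standing in for a thin-provisioned volume. This is only a minimal sketch of the idea, not how a SAN array implements it: logical capacity is promised up front, while physical blocks are allocated only when data is actually written.

```python
import os
import tempfile

# A sparse file stands in for a thin-provisioned volume (illustrative only).
path = os.path.join(tempfile.mkdtemp(), "thin_volume.img")

logical_size = 1 << 30  # promise 1 GiB of logical capacity
with open(path, "wb") as f:
    f.truncate(logical_size)  # no data written, so almost no blocks allocated

st = os.stat(path)
print("logical size (bytes):", st.st_size)          # the full 1 GiB promised
print("physical use (bytes):", st.st_blocks * 512)  # near zero until writes land

# Physical capacity is consumed only where blocks are actually touched.
with open(path, "r+b") as f:
    f.seek(100 * 1024 * 1024)   # write 4 KiB at the 100 MiB offset
    f.write(b"x" * 4096)
```

On a filesystem that supports sparse files (e.g. ext4 or XFS), the reported logical size stays at 1 GiB while the allocated space grows only as writes arrive, which is the over-subscription trade-off thin provisioning makes.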

Limits of storage and server consolidation

Consolidated server and storage solutions are complex and more expensive than dedicated application resources or JBOD arrays, typically requiring more time to design and deploy. Additionally, they have a larger environmental footprint, requiring environmentally controlled equipment rooms and increasing the total cost of ownership of the solution.

Does this approach address the challenges of the distributed enterprise?

The size, cost and complexity of these dedicated storage arrays and switches are ideal for data center environments looking to consolidate hundreds or thousands of servers and applications. However, they are overkill for remote locations, where critical applications must remain on-site, and were never designed with such sites in mind.

One major issue with the storage and server consolidation approach is that it is still possible for a “rogue” application to consume all the available server resources, starving other applications on the same server, as there is no mechanism to isolate applications from one another. This can cause availability issues for critical applications, creating downtime and lost revenue at the remote sites of a distributed enterprise.
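The starvation scenario above can be reduced to a toy model: two applications draw from one shared pool of server resources on a first-come, first-served basis, with no isolation between them. The names `rogue` and `critical` and the `allocate` helper are illustrative inventions, not part of any real system.

```python
def allocate(requests, pool):
    """Grant resource requests first-come, first-served until the pool is empty.

    With no isolation or quotas, whichever application asks first can take
    everything, leaving nothing for the applications behind it.
    """
    grants = {}
    for app, wanted in requests:
        granted = min(wanted, pool)
        grants[app] = granted
        pool -= granted
    return grants

# The "rogue" app requests the entire pool before the critical app gets a turn.
grants = allocate([("rogue", 100), ("critical", 20)], pool=100)
print(grants)  # {'rogue': 100, 'critical': 0} -- the critical app is starved
```

Resource quotas and isolation mechanisms (the subject of the virtualization installment that follows) exist precisely to prevent this first-come, first-served outcome.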

The next installment in the Evolution of Storage series takes a look at server virtualization and the introduction of virtual machines. Subscribe to the StorMagic blog and never miss another article again.
