Storage technologies have been constantly evolving over the past few decades, from dedicated application resources, through server virtualization, to the latest innovations in space saving and performance such as thin provisioning, caching using SSD or RAM, and data deduplication. This technological development has served the requirements of large data center environments well, yet has largely ignored the challenges faced by organizations deploying storage at branch or remote locations.
In this five-part series we take a look at how storage solutions have evolved and how data storage vendors are finally addressing the challenges faced by distributed organizations in serving their remote sites with cost-effective and highly available shared storage. Last time out, we explored the development of server consolidation and now we move on to server virtualization. Server virtualization was widely adopted with two primary objectives: preventing a rogue application from consuming all of a server's resources, and driving server resource utilization even higher.
Server virtualization abstracts the physical server hardware from the operating system using a hypervisor, such as VMware vSphere or Microsoft Hyper-V. This enables multiple servers, or virtual machines (VMs), with differing operating systems and applications to be co-located on the same physical server, while keeping them isolated from one another.
Each VM is allocated a share of the available physical server resources, ensuring that a rogue VM cannot consume all of them. Additionally, more VMs can be run on each physical server, driving up server utilization and reducing the need for additional hardware.
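The share-based idea can be sketched in a few lines. This is an illustrative model only, not any vendor's actual scheduler: each VM gets a proportional entitlement based on its shares, capped by a per-VM limit, so a busy "rogue" VM cannot starve its neighbors.

```python
# Illustrative sketch (not a real hypervisor scheduler): proportional-share
# allocation of a host's CPU capacity among VMs, with per-VM caps.

def allocate_cpu(host_mhz, vms):
    """vms: list of dicts with 'name', 'shares', and 'limit_mhz'."""
    total_shares = sum(vm["shares"] for vm in vms)
    grants = {}
    for vm in vms:
        # Entitlement is proportional to shares, then capped at the VM's limit.
        entitlement = host_mhz * vm["shares"] / total_shares
        grants[vm["name"]] = min(entitlement, vm["limit_mhz"])
    return grants

vms = [
    {"name": "web",   "shares": 2000, "limit_mhz": 4000},
    {"name": "db",    "shares": 4000, "limit_mhz": 8000},
    {"name": "rogue", "shares": 2000, "limit_mhz": 2000},  # capped
]
print(allocate_cpu(16000, vms))
# → {'web': 4000.0, 'db': 8000.0, 'rogue': 2000.0}
```

Real hypervisors also redistribute unused entitlements and apply reservations, but the principle is the same: no single VM can exceed its configured slice of the host.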
Hypervisors offer a number of advanced features that deliver high availability and enable virtual machine mobility, such as VMware vMotion, Distributed Resource Scheduler (DRS), Microsoft Hyper-V Live Migration and Dynamic Optimization. These features require the use of shared storage, which has traditionally been provided in data center environments by dedicated storage arrays.
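The shared-storage dependency is easy to picture: live migration moves a VM's memory and CPU state between hosts, but its virtual disks stay put on a datastore, so both the source and destination host must see that same datastore. The following is a minimal model of that constraint; the host and datastore names are hypothetical and this is not a hypervisor API.

```python
# Minimal model (hypothetical names, not a hypervisor API) of why live
# migration features depend on shared storage: the VM's disks remain on
# one datastore, which both hosts must be able to access.

class Host:
    def __init__(self, name, datastores):
        self.name = name
        self.datastores = set(datastores)

def can_live_migrate(vm_datastore, src, dst):
    # Both hosts need access to the datastore holding the VM's disks.
    return vm_datastore in src.datastores and vm_datastore in dst.datastores

esx1 = Host("esx1", {"shared-san", "esx1-local"})
esx2 = Host("esx2", {"shared-san", "esx2-local"})

print(can_live_migrate("shared-san", esx1, esx2))   # → True
print(can_live_migrate("esx1-local", esx1, esx2))   # → False: local disk only
```

A VM on host-local storage fails the check, which is exactly why features like vMotion and Live Migration have traditionally required a shared array (or, as discussed later in this series, a software-defined equivalent).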
History of server virtualization
The concept of server virtualization is not a new one and has existed in some form since the 1960s, with IBM among its pioneers. It was only in the late 1990s and early 2000s, however, that consumer products such as Microsoft Virtual PC and VMware Virtual Platform (forerunner to vSphere ESXi) became available.
Does this approach address the challenges of the distributed enterprise?
Server virtualization must be combined with shared storage to realize the features the distributed enterprise needs. While dedicated storage arrays are suited to data center environments, they are less than ideal for SMEs and distributed enterprises. As more servers are virtualized, reliance on the underlying storage infrastructure increases, since a single storage failure could affect multiple servers. Reducing this risk would require duplicating the storage infrastructure and replicating data between the copies, considerably increasing the footprint, complexity and cost. This renders the dedicated storage array approach inappropriate for distributed environments.
However, where shared storage can be delivered with a software-based approach, a solution can be found that meets the requirements of SMEs and the distributed enterprise. This innovation, software-defined storage, is discussed in the final installment of the Evolution of Storage series. Subscribe to the StorMagic blog and never miss another article.