Storage technologies have evolved constantly over the past few decades, from the dedicated application resources of the early days to the latest space-saving and performance innovations such as thin provisioning, SSD or RAM caching, and data deduplication. This development has served the requirements of large data center environments well, yet has largely ignored the challenges faced by organizations deploying storage at branch or remote locations.
In this five-part series we look at how storage solutions have evolved and how data storage vendors are finally addressing the challenges distributed organizations face in serving their remote sites with cost-effective, highly available shared storage. In part 1, we go back to the early days of dedicated application resources.
The beginning – Dedicated application resources
In the early days of computing, each application was allocated its own dedicated server and storage resources. This was done for several reasons, one being organizational or budgetary: each department was responsible for its own IT requirements and budgets.
Another reason was that servers of the time lacked the processor, memory and storage resources to accommodate and execute multiple applications. To ensure that every application had sufficient resources, each one was given its own dedicated IT infrastructure.
Limits of dedicated server and storage resources
This approach led to non-standard configurations, with assorted server and storage technologies being introduced, resulting in server sprawl and inefficient management.
Servers had limited scalability options, meaning that applications were constrained by the size of the server in terms of CPU performance, available memory and number of disk slots. To combat this, larger servers were deployed with over-provisioned compute and storage resources to cater for future growth. Invariably, most of these resources remained underutilized, even as other servers and applications became resource constrained.
The biggest failing of this approach was that the solutions contained multiple single points of failure (SPOFs), such as a single server or a lack of data replicas, leading to application downtime or even data loss in the event of a failure.
Does this approach address distributed enterprise challenges?
Despite the above issues, these solutions are simple in design, quick to deploy and relatively inexpensive, as they use commodity servers. Today they could still suit distributed environments where a small IT footprint is required and data loss or server outages are tolerable, for example website hosting or big data applications that maintain multiple replica copies of the data to protect against failures. However, where high availability and critical data are concerned, such as in multi-site environments, these solutions are inappropriate.
The next installment in the Evolution of Storage series takes a look at JBOD arrays and the dawn of shared storage. Subscribe to the StorMagic blog and never miss an article.