Defining the Edge for the Modern Era: Part One

Published On: 31st October 2019 // 3 min read

With the advent of mobile connectivity, bring-your-own-device policies, working from home and the wholesale shift from on-premises infrastructure to the cloud, the very way we consume and work with our data on a day-to-day basis has changed and is continually shifting.

We’re used to these buzzwords being thrown at us from all angles: Big Data, Cloud-Ready, Everyware, Ubiquitous Computing, Edge Computing… but what does all this actually mean, and when it comes down to your own organization and environment, why should you care about any of it?

Let’s take a step back to an era in recent history when the “server” was a physical machine that made lots of noise and required skilled, experienced engineers to keep it running. Terminals were directly connected, sometimes even by a proprietary interface such as a serial cable, and interruptions to service would often affect everyone at once.

Step one of modernization was replacing the proprietary with standardized interfaces. We generally point to Microsoft Windows as a primary driver here, among other tools, as this operating system fundamentally changed the way computers were used and how they interacted with each other. It reduced training requirements and gave application owners and developers a common platform to build on, making their work less bespoke and therefore useful to a greater audience.

Once workers all had common tools that made them scalable in nature, our focus turned to the infrastructure itself. Data was held on commodity servers running off-the-shelf software, making our journey towards the future largely modular in nature. Standards were established; parts became cheaper; skills increased; and innovation thrived. In the world of storage, standardization happened around Fibre Channel connectivity, which allowed storage to move outside the server and be housed in enterprise-class, shared storage solutions such as SANs and NAS appliances. At the tail end of this chapter came the introduction of virtualization, further modularizing services and provisioning, and in turn reducing the hardware required to manage data and workloads in a distributed way.

One of the key requirements of server virtualization was external shared storage – typically a physical SAN. With shared storage, all the virtualized servers in a cluster could access the same data, and this was how VMware was able to deliver advanced functionality like vMotion, load balancing and failover in case a server failed completely. For a long time, this was the only way to build a cluster of virtualized servers: implement expensive, external, enterprise-class storage arrays. As time moved on, these traditional strategies started to be replaced by bigger ideas – and more complexity.

Then came the cloud. Well, it was here all along, but with the connected infrastructure around the world commonly called the “internet”, we began to use this term to define the big data center in the sky that you can’t see or touch: the place those Snapchat messages end up when you send them and switch off your phone; the revolutionary replacement for that loud and complicated room we once called the data center. In reality, the cloud is just someone else’s data center. Why recreate the same basic architecture, modular or otherwise, again and again, if we could just rent space on more professionally managed hardware, removing all the pain in the process? Those who run the magical servers in the sky could scale their infrastructure efficiently and cost-effectively, offering services to those who could not have afforded to enter this space before. The world rejoiced; the future was upon us! This is where we find ourselves right now, here in 2019.

Stay tuned for part two.
