Welcome to the final installment of a three-part blog series providing a technical overview of StorMagic SvSAN. Part one gave a brief overview of SvSAN and outlined how to decide whether hyperconverged infrastructure or a server SAN is right for you. Part two explored SvSAN in more depth, and this final part describes the available deployment models.
While SvSAN is typically deployed as a 2-node cluster, it can also be deployed as a single-node solution or as a multi-node cluster. But how do you know which deployment method suits your edge environment? Here’s a handy guide to help you decide:
A two-server system is the minimum required to provide highly available shared storage with no downtime. By installing SvSAN onto two servers, you create a solution that can tolerate the failure of a single server and provides the shared storage required to enable advanced hypervisor features.
An optional witness can be added to this deployment to prevent split-brain scenarios from occurring.
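To illustrate why a witness matters, here is a minimal sketch of the general quorum idea behind witness-based tie-breaking. This is not SvSAN's actual implementation, just the standard majority-vote principle: when the two mirrored nodes lose contact with each other, only the side that can still reach a majority of votes (node plus witness) keeps serving storage, so the two halves can never both accept writes.

```python
# Illustrative quorum sketch (NOT SvSAN's real algorithm): two storage
# nodes plus one witness hold three votes in total. After a network
# partition, each node counts the votes it can still reach; only a
# strict majority (2 of 3) may continue serving the shared storage.

def may_serve(reachable_votes: int, total_votes: int = 3) -> bool:
    """A node keeps serving only if it sees a strict majority of votes."""
    return reachable_votes > total_votes // 2

# Node A is cut off from node B but can still reach the witness:
print(may_serve(2))  # True  -> A (with the witness) continues serving
print(may_serve(1))  # False -> B, isolated, stops to avoid split-brain
```

Without the third vote, each isolated node would see exactly half the cluster and could not safely decide which side should continue, which is precisely the split-brain scenario the witness eliminates.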
A single-server deployment is the best option for environments where the loss of a server is not critical, or where data is non-critical and can be replaced, such as at test and development sites.
SvSAN can be installed onto a single server to deliver shared iSCSI storage. This is a low-cost solution, and it can also be deployed as a caching device when high performance is required but high availability is not.
This option does not protect against server failure; however, data can be protected by using a RAID controller to provide disk mirroring or parity RAID.
By deploying SvSAN on three or more servers, you can increase storage and compute capacity, while data is still mirrored between any two nodes in the cluster.
This deployment method adds resiliency to solutions with two or more nodes by geographically separating the servers (by up to 3,000km), allowing copies of data to be stored in two discrete locations.
By installing SvSAN in a stretched cluster deployment, you can protect your stored data from risks such as natural disasters (depending on the distance of separation). The witness can be configured to suit various cluster deployments based on your needs; more information can be found in the whitepaper.
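One practical consideration with the distances above is write latency. As a rough sense-check, assuming light propagates through optical fibre at about 200,000 km/s (roughly two-thirds the speed of light in a vacuum, a common approximation), mirroring data across the full 3,000km separation pays at least the following round-trip delay per acknowledged write, before any switching or processing overhead:

```python
# Back-of-the-envelope latency estimate for a stretched cluster.
# Assumption: signal propagation in fibre at ~200,000 km/s; real paths
# are longer than the straight-line distance, so this is a lower bound.

FIBRE_SPEED_KM_PER_S = 200_000  # assumed propagation speed in fibre
SEPARATION_KM = 3_000           # SvSAN's stated maximum separation

one_way_ms = SEPARATION_KM / FIBRE_SPEED_KM_PER_S * 1000
round_trip_ms = 2 * one_way_ms

print(round_trip_ms)  # 30.0 -> at least ~30 ms RTT at full separation
```

This is why the practical separation you choose should be weighed against the latency your workloads can tolerate, not just the disaster-protection benefit.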