Published On: 29th November 2018 · 1.5 min read

Resiliency in the face of server failure or maintenance

A typical SvSAN deployment calls for clusters of just two nodes as the lightest, simplest, most efficient way of delivering highly available virtualized storage. However, a two-node cluster cannot remain highly available if one node is lost through failure or maintenance. The storage stays online on the remaining node, but it is left vulnerable.

Consequently, many organizations choose to deploy additional nodes within each cluster so that high availability is maintained even when one node is lost. Competitive solutions generally require four nodes per cluster to achieve this; StorMagic SvSAN requires only three.

Join Mark Christie, Director of Technical Services at StorMagic, as he walks you through how a three-node SvSAN cluster is protected against both planned maintenance and unplanned server failure. Mark explains why SvSAN delivers highly available shared storage on three nodes when other solutions cannot, including:

  • How SvSAN’s witness provides quorum for all three nodes (see the quorum sketch after this list)
  • How VMs and storage are migrated between nodes in the event of a failure or maintenance
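The webinar covers these points in depth. As a rough, generic illustration of the majority-quorum idea the first bullet refers to (this is not SvSAN’s actual witness implementation, just an assumed sketch of how any majority-vote cluster tolerates a single-member loss):

```python
def has_quorum(online_votes: int, total_votes: int) -> bool:
    """A cluster keeps quorum while a strict majority of votes is reachable."""
    return online_votes > total_votes // 2

# With three voting members, losing any single member still leaves a
# majority (2 of 3), so the cluster can stay available through a
# single-node failure or planned maintenance.
for online in (3, 2, 1):
    status = "quorum held" if has_quorum(online, 3) else "quorum lost"
    print(f"{online} of 3 online -> {status}")
```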

Complete the form opposite to watch the webinar on-demand and download the presentation slides.


