The StorMagic SvSAN FAQ attempts to answer a wide variety of queries relating to StorMagic SvSAN, its deployment and features.

In addition to this FAQ, complete documentation for the deployment of SvSAN is available to read by visiting the SvSAN manual page.

Have you read through the SvSAN data sheet, and the SvSAN Technical Overview white paper, both of which are packed with useful information about SvSAN, its requirements, and capabilities?

If you cannot find an appropriate answer within the SvSAN FAQ, you can contact the StorMagic team at [email protected].

Question Categories

Use the links below to jump to a specific section:

  • General questions
  • Licensing and support
  • SvSAN witness

General questions

StorMagic SvSAN simplifies IT storage. In contrast to the numerous competing solutions in the storage market, StorMagic SvSAN is not complex, expensive or difficult to manage. At its heart is an ambition to give your organization simple virtual storage. It makes the complex simple.

SvSAN is a highly available two-node virtual SAN designed for hyperconverged edge and small datacenter sites. The technology is based on software-defined storage that eliminates the need for physical SANs. It is deployed as a virtual storage appliance (VSA) on top of a hypervisor.

SvSAN enables highly available clusters by mirroring data between two nodes. Its simplicity ensures that only two nodes are needed per site, with deployment, management and witness services capable of being handled remotely at a central location.

SvSAN supports VMware vSphere, Microsoft Hyper-V and Linux KVM hypervisors. It is installed as a virtual storage appliance (VSA) requiring minimal server resources to provide the shared storage necessary to enable advanced hypervisor features such as High-Availability/Failover Cluster, vMotion/Live Migration and VMware Distributed Resource Scheduler (DRS)/Dynamic Optimization.

For full details on the latest version of SvSAN’s hypervisor compatibility, please refer to the SvSAN data sheet.

SvSAN can be deployed as a simple 2-node cluster, with the flexibility to meet changing capacity and performance needs. This is achieved by adding additional capacity to existing servers or by growing the SvSAN cluster, without impacting service availability.

SvSAN mirrors data between VSAs/cluster nodes synchronously, ensuring that the data is stored in two places before the write is acknowledged as complete.

Each side of the mirror (plex) is active, which allows data to be accessed from any plex. In the event that one side of the mirror fails (server failure, storage failure, network failure) data can still be accessed from the surviving plex.

While one side of a mirror is offline, changes to the surviving side are recorded in the metadata journal. Upon recovery, the journal is read to determine which data has changed; that data, along with any new data written, is then copied to the recovered side of the mirror. This is known as a “fast re-synchronization”.

The metadata journal should be at least 20 GB in size, which is capable of handling a very large number of changes. If the metadata journal wraps (fills completely), the system simply reverts to a full mirror re-synchronization.
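
As a simplified illustration of the behavior described above, the following Python sketch mirrors each write to both plexes before acknowledging it, journals blocks written while one plex is offline, and copies only those blocks back on recovery. All class and method names are hypothetical; this is not StorMagic code.

```python
# Illustrative sketch of synchronous mirroring with a journal-backed
# "fast re-synchronization". Hypothetical names, not a StorMagic API.

class Plex:
    """One side of the mirror: a simple block store."""
    def __init__(self):
        self.blocks = {}

    def store(self, block, data):
        self.blocks[block] = data

    def read(self, block):
        return self.blocks[block]


class MirroredVolume:
    def __init__(self):
        self.plexes = [Plex(), Plex()]   # both sides are active
        self.journal = set()             # metadata journal: blocks changed while a plex is offline
        self.offline = set()             # indices of failed plexes

    def write(self, block, data):
        # Synchronous mirroring: the write is acknowledged only once every
        # reachable plex has stored it.
        for i, plex in enumerate(self.plexes):
            if i in self.offline:
                self.journal.add(block)  # remember what the offline side missed
            else:
                plex.store(block, data)
        return "ack"

    def fail(self, plex_index):
        self.offline.add(plex_index)

    def recover(self, plex_index):
        # Fast re-synchronization: copy only the journalled blocks from the
        # surviving plex to the recovered one, instead of a full resync.
        survivor = self.plexes[1 - plex_index]
        recovered = self.plexes[plex_index]
        for block in self.journal:
            recovered.store(block, survivor.read(block))
        self.journal.clear()
        self.offline.discard(plex_index)


vol = MirroredVolume()
vol.write(0, b"hello")        # stored on both plexes before "ack"
vol.fail(1)
vol.write(1, b"world")        # plex 1 is offline, so block 1 goes in the journal
vol.recover(1)                # only block 1 is copied across: a fast re-synchronization
print(vol.plexes[1].read(1))  # b'world'
```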

SvSAN does not itself provide protection against individual drive failures; the server on which the VSA is running is expected to protect against drive failures using hardware RAID.

In cases where there is no RAID, the data is protected by SvSAN with a mirrored copy of the data on another SvSAN node.

The VSA is designed to handle any unexpected power loss. On startup, the VSA performs checks to determine if it was previously shut down in a graceful manner.

If the VSA was previously shut down abnormally, checks are run to ensure all the configuration data is consistent and correct.

A VSA contains dual boot images, to protect against corruption. If the primary boot image becomes corrupted, the VSA can boot from the other image.

SvSAN has been designed to be scalable as far as possible, in that most entities do not have hard limits, but rather are limited by available hardware resources.

The maximum capacity of a virtual disk is 128 petabytes (PB).

The minimum scaling unit to provide highly available shared storage is two nodes, where virtual disks are mirrored between nodes.

A third server is used to provide quorum to protect against split-brain scenarios; this is the SvSAN witness. Read more about it in this white paper. The witness can be located onsite, or remotely over a WAN. If a third node cannot be used, SvSAN can operate in a 2-node configuration using the “Stay up Isolation Policy”.

Beyond two nodes, any number of nodes can be supported. It is possible to create as many mirrored, virtual disks as capacity allows, and each can be mirrored between any pair of nodes. Furthermore, SvSAN can be configured in 3-node clusters. More information on this type of configuration is contained in the corresponding white paper.

Additional questions regarding the SvSAN witness are answered in a separate section below.

The minimum recommended requirements for SvSAN are:

CPU 1 x virtual CPU core¹

  • 2 GHz or higher reserved
Memory 1GB RAM²
Disk 2 x virtual storage devices used by VSA

  • 1 x 512MB Boot device, used to store the VSA boot image and configuration data
  • 1 x 20GB Journal Disk, used to store journaling metadata, log files, etc.
Network 1 x 1Gb Ethernet

  • Multiple interfaces required for resiliency
  • 10Gb Ethernet is supported
  • Jumbo frames supported

¹ When using the SvSAN data encryption feature, 2+ virtual CPUs are recommended.
² Additional RAM may be required when caching is enabled.
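
For a quick pre-deployment sanity check, the minimums above can be encoded in a short script. This is a hypothetical helper based only on the figures listed here, not a StorMagic utility:

```python
# Hypothetical pre-deployment check against the minimum VSA requirements
# listed above; not a StorMagic tool.

MIN_VSA_REQUIREMENTS = {
    "vcpu_cores": 1,          # 2+ recommended when data encryption is enabled
    "cpu_ghz_reserved": 2.0,
    "ram_gb": 1,              # more may be required when caching is enabled
    "boot_disk_mb": 512,
    "journal_disk_gb": 20,
    "nic_gbps": 1,
}

def meets_minimums(host):
    """host: dict with the same keys describing the available resources."""
    return all(host.get(key, 0) >= minimum
               for key, minimum in MIN_VSA_REQUIREMENTS.items())

# Example: a host with 2 cores, 2.4 GHz reserved, 8 GB RAM, a 512 MB boot
# device, a 20 GB journal disk and a 10 GbE NIC passes the check.
print(meets_minimums({"vcpu_cores": 2, "cpu_ghz_reserved": 2.4, "ram_gb": 8,
                      "boot_disk_mb": 512, "journal_disk_gb": 20, "nic_gbps": 10}))
```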

SvSAN can support up to 32 GB of memory.

The following table shows the recommended amount of memory that needs to be allocated to the VSA based upon the memory and SSD cache sizes:

| Memory cache size | SSD cache up to 0GB² | Up to 250GB | Up to 500GB | Up to 1000GB | Up to 1500GB | Up to 2000GB |
|---|---|---|---|---|---|---|
| 0GB¹ | 1GB | 3GB | 3GB | 4GB | 5GB | 6GB |
| 1GB | 3GB | 4GB | 4GB | 5GB | 6GB | 7GB |
| 2GB | 4GB | 5GB | 5GB | 6GB | 7GB | 9GB |
| 3GB | 5GB | 6GB | 6GB | 7GB | 9GB | 10GB |
| 4GB | 6GB | 7GB | 7GB | 9GB | 10GB | 11GB |
| 6GB | 9GB | 9GB | 10GB | 11GB | 12GB | 13GB |
| 8GB | 11GB | 11GB | 12GB | 13GB | 14GB | 15GB |
| 12GB | 15GB | 16GB | 16GB | 17GB | 18GB | 20GB |
| 16GB | 20GB | 20GB | 21GB | 22GB | 23GB | 24GB |
| 20GB | 24GB | 24GB | 25GB | 26GB | 27GB | 28GB |
| 24GB | 28GB | 29GB | 29GB | 31GB | 32GB | |

¹ Memory caching disabled
² SSD caching disabled
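
If it helps to script sizing decisions, the table above can also be expressed as a simple lookup. The values are copied from the table; the function itself is a hypothetical example, not a StorMagic tool:

```python
# Lookup of the recommended VSA memory allocation (GB) from the table above.
# Rows: memory cache size; columns: SSD cache size buckets. Illustrative only.

SSD_BUCKETS_GB = [0, 250, 500, 1000, 1500, 2000]   # "up to" thresholds
VSA_MEMORY_GB = {
    0:  [1, 3, 3, 4, 5, 6],
    1:  [3, 4, 4, 5, 6, 7],
    2:  [4, 5, 5, 6, 7, 9],
    3:  [5, 6, 6, 7, 9, 10],
    4:  [6, 7, 7, 9, 10, 11],
    6:  [9, 9, 10, 11, 12, 13],
    8:  [11, 11, 12, 13, 14, 15],
    12: [15, 16, 16, 17, 18, 20],
    16: [20, 20, 21, 22, 23, 24],
    20: [24, 24, 25, 26, 27, 28],
    24: [28, 29, 29, 31, 32],     # the table has no entry beyond 1500GB SSD cache at this size
}

def recommended_vsa_memory(memory_cache_gb, ssd_cache_gb):
    row = VSA_MEMORY_GB[memory_cache_gb]
    for column, threshold in enumerate(SSD_BUCKETS_GB[:len(row)]):
        if ssd_cache_gb <= threshold:
            return row[column]
    raise ValueError("cache sizes exceed the documented table")

# Example: a 4 GB memory cache with a 400 GB SSD cache needs a 7 GB VSA allocation.
print(recommended_vsa_memory(4, 400))
```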

For more information on SvSAN’s caching abilities, please refer to the caching white paper.

The minimum network bandwidth is 1Gb Ethernet.

SvSAN supports 10Gbps and 40Gbps Ethernet, jumbo frames and network teaming to provide network performance improvements.

Yes, SvSAN will take advantage of all network links the virtual server has configured. SvSAN can be configured to load balance and aggregate the bandwidth of all available network interfaces. These can be used for management, mirroring or iSCSI traffic.

Yes, two virtual switches need to be configured on the virtual servers. SvSAN, by default, is installed and configured with two virtual network interfaces (vNICs). However, more network interfaces may be added.

In the event that all network interfaces used for mirror traffic are unavailable, the mirror traffic will redirect over any remaining management network interfaces.

Yes, an existing un-mirrored target can be converted into a mirrored target and vice-versa. Please refer to the SvSAN manual or contact our support team at [email protected] for detailed steps.

The serial number and current host name are displayed at the start of the configuration wizard.

After setup is completed, you can view the serial number from the console, from the System tab of the WebGUI, or from the VSA view > System tab within the StorMagic Plugin, e.g. https://VSAname.domainname/system/license/

Yes, SvSAN cluster nodes can be located in different locations, for example on different sides of a building, across a campus, or in different cities.

Please refer to the stretch cluster white paper for details on bandwidth and latency requirements, or contact our support team at [email protected].

Yes, in some cases it is possible to skip a single firmware revision, for example upgrading from SvSAN 6.0 directly to SvSAN 6.2; the intermediate upgrade to SvSAN 6.1 is not required.

Please refer to the SvSAN release notes for supported and valid upgrade paths.

However, we recommend that you keep up to date with the latest version of SvSAN firmware to ensure you have access to the latest features and to current security, performance and bug fixes.

Licensing and support

Yes, a free, fully functional evaluation of SvSAN is available to download, enabling organizations to trial and experience the features and benefits of SvSAN before purchasing. For more information and to download an evaluation copy, visit the trial download page on the website.

During the trial period, evaluators can, if desired, receive support and assistance with the first installation and a product demonstration.

SvSAN is sold as a single perpetual license for the total addressable usable VSA storage capacity.

  • Available in 2TB, 6TB, 12TB and Unlimited TB usable storage capacities
  • 1 license required per server/cluster node
  • Pricing based on a single license (2 licenses required for a 2-node cluster)
  • Base SvSAN license contains all the features necessary for highly available shared storage
  • Performance- and security-enhancing add-ons available: Predictive Storage Caching and data encryption
  • Maintenance must be purchased for each SvSAN license. Each add-on also requires its own maintenance to be purchased
  • Maintenance can be purchased in 1, 3 or 5 year increments

To discuss licensing and maintenance requirements and obtain a quote, please contact our sales team at [email protected].

SvSAN is available as a base license with all of the features necessary for highly available shared storage included.

There are also two add-ons available to SvSAN to enhance performance and security.

SvSAN’s performance-enhancing features are collectively known as Predictive Storage Caching, which utilizes patent-pending algorithms to unleash the full power of memory and hybrid disk configurations.

SvSAN’s data encryption feature enables organizations with one to thousands of locations to affordably and efficiently introduce data encryption at each individual site.

The following table provides an overview of the features available with SvSAN:

| Features | Base SvSAN | Predictive Storage Caching | Data Encryption |
|---|---|---|---|
| Synchronous mirroring/high availability | ✔ | | |
| Stretch/metro cluster support | ✔ | | |
| Volume migration | ✔ | | |
| VSA restore (VMware only) | ✔ | | |
| VMware vSphere Storage API (VAAI) support | ✔ | | |
| Centralized management and monitoring | ✔ | | |
| Witness | ✔ | | |
| I/O performance statistics | ✔ | | |
| Multiple VSA GUI deployment and upgrade | ✔ | | |
| PowerShell script generation | ✔ | | |
| Cluster-aware upgrades | ✔ | | |
| Write back caching (SSD) | | ✔ | |
| Predictive read ahead caching (SSD and memory) | | ✔ | |
| Data pinning | | ✔ | |
| Data encryption | | | ✔ |

Yes, upgrading capacity is available at any time.

Software updates and product support require a valid StorMagic Maintenance & Support contract.

StorMagic delivers 24/7, world-class support to ensure customers and partners can quickly and effectively troubleshoot any difficulties that may arise. StorMagic Maintenance & Support provides instant access to knowledge base articles and software updates, both major and minor (including bug fixes and new feature releases), as well as the ability to log technical support requests.

StorMagic Maintenance & Support is available in two levels, Gold or Platinum.

Gold delivers daytime, weekday support, while Platinum offers 24/7 support, 365 days a year. A summary of the maintenance levels is shown in the table below:

| | Gold Support | Platinum Support |
|---|---|---|
| Hours of operation | 8 hours a day¹ (Mon – Fri) | 24 hours a day², 7 days a week |
| Length of service | 1, 3 or 5 years | 1, 3 or 5 years |
| Product updates | Yes | Yes |
| Product upgrades | Yes | Yes |
| Access method | Email | Email + telephone (via platinum engagement form on support.stormagic.com) |
| Response method | Email + WebEx | Email + telephone + WebEx |
| Maximum number of support administrators per contract | 2 | 4 |
| Response time | 4 hours | 1 hour |

¹ Gold Support is only available from 07:00 UTC/DST to 01:00 UTC/DST. If your business hours fall outside this window, you must purchase Platinum Support.
² Global, 24×7 support for Severity 1 – Critical Down & Severity 2 – Degraded issues

SvSAN witness

The SvSAN witness is a quorum service that acts as a tiebreaker, providing a majority vote in the event of a failure that requires a cluster leader election process. This prevents SvSAN clusters from getting into a state known as “split-brain”.

Split-brain is a clustering condition that occurs when cluster nodes lose contact with one another and begin to operate independently. The data on each node starts to diverge and become inconsistent, ultimately leading to data corruption and loss.
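
As a simplified illustration of how a third vote prevents split-brain, the sketch below lets the witness grant its single quorum vote to only one node, so at most one side of a broken mirror keeps a majority. This is illustrative only and is not StorMagic's actual election protocol:

```python
# Simplified illustration of the witness acting as a tiebreaker.
# Not StorMagic's actual election protocol.

class Witness:
    """Grants its single quorum vote to at most one node at a time."""
    def __init__(self):
        self.granted_to = None

    def request_vote(self, node):
        if self.granted_to is None:
            self.granted_to = node       # first requester wins the tiebreak
        return self.granted_to == node

def stays_up(node, peer_reachable, witness):
    """A node keeps serving I/O only if it holds a majority of the three
    votes: its own, its peer's (if reachable), or the witness's."""
    votes = 1                            # its own vote
    if peer_reachable:
        votes += 1                       # both nodes up: no tiebreak needed
    elif witness.request_vote(node):
        votes += 1                       # peer lost: ask the witness to arbitrate
    return votes >= 2

# Mirror link fails: both nodes race to the witness, only one wins, so only
# one side of the mirror continues and the data cannot diverge.
w = Witness()
print(stays_up("node-a", peer_reachable=False, witness=w))  # True  (granted)
print(stays_up("node-b", peer_reachable=False, witness=w))  # False (denied)
```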

The witness has minimal server requirements, as shown below:

CPU 1 x virtual CPU core (1 GHz)
Memory 512MB (reserved)
Disk 512MB
Network 1 x 1Gb Ethernet NIC
When using the witness over a WAN link, use the following for optimal operation:
• Latency of less than 3,000ms; this allows the witness to be located anywhere in the world
• 9Kb/s of available network bandwidth between the VSA and witness (less than 100 bytes of data is transmitted per second)
Operating System The SvSAN witness can be deployed onto a physical server or virtual machine with the following:
• Windows Server 2016 (64-bit)
• Hyper-V Server 2016 (64-bit)
• Raspbian Buster (32-bit)
• vCenter Server Appliance (vCSA)¹
• StorMagic SvSAN Witness Appliance
• Ubuntu 20.04
¹ VMware vSphere 5.5 and higher

NOTE: The witness should be installed onto a server separate from the SvSAN VSA.

SvSAN has multiple deployment options for the witness to suit different requirements. These include:

Shared remote witness – With this option, only the minimum IT infrastructure required for high availability (two servers) is located at the remote site. The witness is located at another, central site (datacenter/HQ) and accessed over a WAN link. A single witness can be shared between hundreds of SvSAN clusters in remote locations.

Local witness – Similar to the previous configuration there are two servers each with an SvSAN VSA installed. This time however, the witness is hosted on a third physical server or virtual machine (outside the SvSAN HA cluster) located at the same site. This configuration is for environments which are totally isolated and have limited or no external network connectivity and need to protect against split-brain scenarios.

Multi-node SvSAN cluster – SvSAN can be deployed in multi-node clusters containing three or more servers. In this deployment scenario one of the SvSAN VSAs acts as a quorum for other cluster members. For example, with a 3-node cluster with VSAs A, B & C:

  1. When a mirror is created between VSAs A & B, VSA C acts as the quorum.
  2. When a mirror is created between VSAs A & C, VSA B acts as the quorum.
  3. When a mirror is created between VSAs B & C, VSA A acts as the quorum.
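
As a sketch of the pattern in the list above, for any mirrored pair the quorum can be provided by a cluster member outside that pair (hypothetical helper, not a StorMagic tool):

```python
# For each mirrored pair in a multi-node SvSAN cluster, any VSA outside the
# pair can provide quorum. Illustrative helper only.

def quorum_candidates(cluster_vsas, mirror_pair):
    return [vsa for vsa in cluster_vsas if vsa not in mirror_pair]

cluster = ["VSA-A", "VSA-B", "VSA-C"]
for pair in [("VSA-A", "VSA-B"), ("VSA-A", "VSA-C"), ("VSA-B", "VSA-C")]:
    print(pair, "-> quorum:", quorum_candidates(cluster, pair))
# ('VSA-A', 'VSA-B') -> quorum: ['VSA-C']
# ('VSA-A', 'VSA-C') -> quorum: ['VSA-B']
# ('VSA-B', 'VSA-C') -> quorum: ['VSA-A']
```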

More information on multi-node SvSAN clusters is available in this white paper.

No witness – The final option is to deploy two servers at the remote site with no witness. In this configuration it is possible to enter a split-brain scenario in the event of loss of network connectivity between servers or if the servers are rebooted simultaneously. To reduce the chance of split-brain occurring, best practices should be followed. These include providing resilient network connections between servers, using quality components and using multiple, redundant power supplies.

No – the witness is an optional infrastructure component, see the “no witness” deployment option in the answer to “What are the typical SvSAN witness deployment options?”.

Yes – with clusters of 3 nodes or more other VSAs in the cluster act as quorums for other pairs of VSAs. See the “multi-node SvSAN cluster” deployment option in the answer to “What are the typical SvSAN witness deployment options?”.

Yes – the witness can be located at a central site and provide quorum for remote SvSAN VSA clusters. See the “shared remote witness” deployment option in the answer to “What are the typical SvSAN witness deployment options?”.

The witness can be installed on servers or VMs running one of the following operating systems:

  • Windows Server 2016 (64-bit)
  • Hyper-V Server 2016 (64-bit)
  • Raspbian Buster (32-bit)
  • vCenter Server Appliance (vCSA)¹
  • StorMagic SvSAN Witness Appliance
  • Ubuntu 20.04

¹ VMware vSphere 5.5 and higher

Yes – the witness can be shared between multiple clusters and mirrors. StorMagic has customers with over 2000 sites each with multiple mirror targets using a centralized witness.

For a thorough run-down of different failure scenarios that the SvSAN witness can protect against, please refer to the witness white paper.

The witness uses the SvSAN discovery service, which utilizes the network port 4174 (TCP/UDP). Please refer to the “Port numbers in use by SvSAN” section of the SvSAN manual for all the ports used by SvSAN.
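
To verify that the discovery port is reachable from a witness host, a generic TCP connectivity test can be used. The port number comes from the answer above; the hostname below is a placeholder and the script is not a StorMagic tool:

```python
# Quick reachability check for the SvSAN discovery service port (4174/TCP),
# e.g. from the witness host towards a VSA. Generic socket test only; the
# UDP side of the service is not exercised by this check.

import socket

def tcp_port_open(host, port=4174, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(tcp_port_open("vsa1.example.local"))  # placeholder hostname
```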

When using the witness over a WAN link, the following network bandwidth and latency recommendations ensure optimal operation:

  • Latency should be less than 3,000ms; this allows the witness to be located nearly anywhere in the world.
  • The amount of data transmitted from the VSA to the witness is small (under 100 bytes per second). It is recommended that there is at least 9Kb/s of available network bandwidth between the VSA and witness.
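
These figures are easy to sanity-check: 100 bytes per second is 800 bits per second, well under the recommended 9 Kb/s. A worked check:

```python
# Worked check of the witness bandwidth recommendation quoted above.

bytes_per_second = 100                    # upper bound on VSA-to-witness traffic
bits_per_second = bytes_per_second * 8    # = 800 b/s
recommended_kbps = 9                      # 9 Kb/s of available bandwidth

print(f"{bits_per_second / 1000:.1f} Kb/s used vs {recommended_kbps} Kb/s recommended")
# 0.8 Kb/s used vs 9 Kb/s recommended
```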

In general, the witness can tolerate very high latencies and has very low network bandwidth requirements. Although networks with such extreme characteristics are rarely encountered in practice, these tolerances show how efficient the witness is.

For more discussion on the bandwidth and latency tolerances of the witness, please refer to the SvSAN witness white paper.