The StorMagic SvSAN FAQ attempts to answer a wide variety of queries relating to StorMagic SvSAN, its deployment and features.

In addition to this FAQ, complete documentation for the deployment of SvSAN is available to read by visiting the SvSAN manual page.

Have you read through the SvSAN data sheet, and the SvSAN Technical Overview white paper, both of which are packed with useful information about SvSAN, its requirements, and capabilities?

If you cannot find an appropriate answer within the SvSAN FAQ, you can contact the StorMagic team at [email protected].

Question Categories

Use the links below to jump to a specific section:

  • General questions
  • Licensing and support
  • SvSAN witness
  • SvSAN data encryption
  • SvSAN remote syslog feature
  • SvSAN with VMware vSphere
  • SvSAN with Microsoft Hyper-V

General questions

StorMagic SvSAN simplifies IT storage. In contrast to the numerous competing solutions in the storage market, StorMagic SvSAN is not complex, expensive or difficult to manage. At its heart is an ambition to give your organization simple virtual storage. It makes the complex simple.

SvSAN is a highly available two-node virtual SAN designed for hyperconverged edge and small datacenter sites. The technology is based on software-defined storage that eliminates the need for physical SANs. It is deployed as a virtual storage appliance (VSA) on top of a hypervisor.

SvSAN enables highly available clusters by mirroring data between two nodes. Its simplicity ensures that only two nodes are needed per site, with deployment, management and witness services capable of being handled remotely at a central location.

SvSAN supports VMware vSphere, Microsoft Hyper-V and Linux KVM hypervisors. It is installed as a virtual storage appliance (VSA) requiring minimal server resources to provide the shared storage necessary to enable advanced hypervisor features such as High-Availability/Failover Cluster, vMotion/Live Migration and VMware Distributed Resource Scheduler (DRS)/Dynamic Optimization.

For full details on the latest version of SvSAN’s hypervisor compatibility, please refer to the SvSAN data sheet.

SvSAN can be deployed as a simple 2-node cluster, with the flexibility to meet changing capacity and performance needs. This is achieved by adding additional capacity to existing servers or by growing the SvSAN cluster, without impacting service availability.

SvSAN mirrors data between VSAs/cluster nodes synchronously, ensuring that the data is stored in two places before being acknowledged as complete.

Each side of the mirror (plex) is active, which allows data to be accessed from either plex. In the event that one side of the mirror fails (server failure, storage failure, network failure), data can still be accessed from the surviving plex.

While one side of a mirror is offline, changes to the surviving side are recorded in the metadata journal. Upon recovery, the journal is read to determine which data has changed; that data, along with any new writes, is then copied to the recovered side of the mirror. This is known as a “fast re-synchronization”.

The metadata journal should be at least 20 GB in size, which is capable of handling a very large number of changes. If the metadata journal wraps (fills up), the system simply reverts to a full mirror re-synchronization.
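
The following minimal Python sketch illustrates the general idea of a journal-driven fast re-synchronization. It is not StorMagic's implementation; the journal structure, block granularity, limit and plex object interfaces are assumptions made purely for illustration:

```python
# Illustrative sketch of journal-driven fast re-synchronization.
# Not SvSAN's implementation: the journal structure, block granularity
# and the surviving/recovered plex objects are assumed for this example.

JOURNAL_LIMIT = 1_000_000   # hypothetical cap on the number of tracked changes

def record_change(journal: set, block_id: int) -> bool:
    """Record a changed block while the other plex is offline.
    Returns False once the journal has wrapped (too many changes to track)."""
    if len(journal) >= JOURNAL_LIMIT:
        return False
    journal.add(block_id)
    return True

def resynchronize(journal: set, wrapped: bool, surviving, recovered) -> None:
    """Copy only the changed blocks to the recovered plex; fall back to a
    full re-synchronization if the journal wrapped."""
    blocks = range(surviving.block_count) if wrapped else sorted(journal)
    for block_id in blocks:
        recovered.write(block_id, surviving.read(block_id))
```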

Where hardware RAID is not possible, such as with edge-specific server models that don’t contain hardware RAID card support, SvSAN provides software RAID functionality. More detail about this can be found in our software RAID blog.

SvSAN does not explicitly provide protection against individual drive failures. The server on which the VSA is running is expected to protect against drive failures using hardware RAID.

In cases where there is no RAID, the data is protected by SvSAN with a mirrored copy of the data on another SvSAN node.

The VSA is designed to handle any unexpected power loss. On startup, the VSA performs checks to determine if it was previously shut down in a graceful manner.

If the VSA was previously shut down abnormally, checks are run to ensure all the configuration data is consistent and correct.

A VSA contains dual boot images, to protect against corruption. If the primary boot image becomes corrupted, the VSA can boot from the other image.

SvSAN has been designed to be scalable as far as possible, in that most entities do not have hard limits, but rather are limited by available hardware resources.

The maximum capacity of a virtual disk is 128 Petabytes.

The minimum scaling unit to provide highly available shared storage is two nodes, where virtual disks are mirrored between nodes.

A third server, the SvSAN witness, is used to provide quorum and protect against split-brain scenarios. Read more about it in this white paper. The witness can be located onsite, or remotely over a WAN. If a third node cannot be used, SvSAN can operate in a 2-node configuration using the “Stay up Isolation Policy”.

Beyond two nodes, any number of nodes can be supported. It is possible to create as many mirrored, virtual disks as capacity allows, and each can be mirrored between any pair of nodes. Furthermore, SvSAN can be configured in 3-node clusters. More information on this type of configuration is contained in the corresponding white paper.

Additional questions regarding the SvSAN witness are answered in a separate section below.

System requirements for SvSAN are provided in the SvSAN data sheet. Please refer to the system requirements table for full details.

The SvSAN VSA can be allocated up to 32 GB of memory.

The following table shows the recommended amount of memory that needs to be allocated to the VSA based upon the memory and SSD cache sizes:

                SSD cache size
Memory cache    Up to 0GB²   Up to 250GB   Up to 500GB   Up to 1000GB   Up to 1500GB   Up to 2000GB
0GB¹            1GB          3GB           3GB           4GB            5GB            6GB
1GB             3GB          4GB           4GB           5GB            6GB            7GB
2GB             4GB          5GB           5GB           6GB            7GB            9GB
3GB             5GB          6GB           6GB           7GB            9GB            10GB
4GB             6GB          7GB           7GB           9GB            10GB           11GB
6GB             9GB          9GB           10GB          11GB           12GB           13GB
8GB             11GB         11GB          12GB          13GB           14GB           15GB
12GB            15GB         16GB          16GB          17GB           18GB           20GB
16GB            20GB         20GB          21GB          22GB           23GB           24GB
20GB            24GB         24GB          25GB          26GB           27GB           28GB
24GB            28GB         29GB          29GB          31GB           32GB

¹ Memory caching disabled
² SSD caching disabled
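
For quick sizing checks, the table can be treated as a simple lookup. The sketch below transcribes a few rows of the table above purely for illustration; the function and variable names are hypothetical and this is not a StorMagic sizing tool:

```python
# Recommended VSA memory (GB), indexed by (memory cache GB, SSD cache column GB).
# Only a few rows of the table above are transcribed here, for illustration.
VSA_MEMORY_GB = {
    (0, 0): 1,   (0, 250): 3,   (0, 500): 3,   (0, 1000): 4,   (0, 1500): 5,   (0, 2000): 6,
    (4, 0): 6,   (4, 250): 7,   (4, 500): 7,   (4, 1000): 9,   (4, 1500): 10,  (4, 2000): 11,
    (16, 0): 20, (16, 250): 20, (16, 500): 21, (16, 1000): 22, (16, 1500): 23, (16, 2000): 24,
}

def recommended_vsa_memory(memory_cache_gb: int, ssd_cache_gb: int) -> int:
    """Round the SSD cache size up to the next table column, then look up the row."""
    columns = (0, 250, 500, 1000, 1500, 2000)
    column = next(c for c in columns if ssd_cache_gb <= c)
    return VSA_MEMORY_GB[(memory_cache_gb, column)]

print(recommended_vsa_memory(4, 800))   # -> 9 (GB)
```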

For more information on SvSAN’s caching abilities, please refer to the caching white paper.

The minimum network bandwidth is 1Gb Ethernet.

SvSAN supports 10Gbps and 40Gbps Ethernet, jumbo frames and network teaming to provide network performance improvements.

Yes, SvSAN will take advantage of all network links the virtual server has configured. SvSAN can be configured to load balance and aggregate the bandwidth of all available network interfaces. These can be used for management, mirroring or iSCSI traffic.

Yes, two virtual switches need to be configured on the virtual servers. SvSAN, by default, is installed and configured with two virtual network interfaces (vNICs). However, more network interfaces may be added.

In the event that all network interfaces used for mirror traffic are unavailable, the mirror traffic will redirect over any remaining management network interfaces.

Yes, an existing un-mirrored target can be converted into a mirrored target and vice-versa. Please refer to the SvSAN manual or contact our support team at [email protected] for detailed steps.

The serial number and current host name are displayed at the start of the configuration wizard.

After setup is complete, you can view the serial number from the console, from the System tab of the WebGUI (e.g. https://VSAname.domainname/system/license/), or from the VSA view > System tab within the StorMagic Plugin.

Yes, SvSAN cluster nodes can be located in different locations: for example, on different sides of a building, across a campus, or in different cities.

Please refer to the stretch cluster white paper for details on bandwidth and latency requirements, or contact our support team at [email protected].

Yes, in some cases it is possible to skip a single firmware revision, for example upgrading from SvSAN 6.1 directly to SvSAN 6.3 without the intermediate upgrade to SvSAN 6.2.

Please refer to the SvSAN release notes for supported and valid upgrade paths.

However, we recommend that you keep up to date with the latest version of SvSAN firmware to ensure you have access to the latest features and current security, performance and bug fixes.

Licensing and support

Yes, a free, fully functional evaluation of SvSAN is available to download, enabling organizations to trial and experience the features and benefits of SvSAN before purchasing. For more information and to download an evaluation copy, visit the trial download page on the website.

During the trial period, evaluators can, if desired, receive support and assistance with the first installation and a product demonstration.

SvSAN is sold as a single perpetual license for the total addressable usable VSA storage capacity.

  • Available in 2TB, 6TB, 12TB and Unlimited TB usable storage capacities
  • 1 license required per server/cluster node
  • Pricing based on a single license (2 licenses required for a 2-node cluster)
  • Base SvSAN license contains all the features necessary for highly available shared storage
  • Performance- and security-enhancing add-ons available: Predictive Storage Caching and data encryption
  • Maintenance must be purchased for each SvSAN license. Each add-on also requires its own maintenance to be purchased
  • Maintenance can be purchased in 1, 3 or 5 year increments

To discuss licensing and maintenance requirements and obtain a quote, please contact our sales team at [email protected].

SvSAN is available as a base license with all of the features necessary for highly available shared storage included.

There are also two add-ons available to SvSAN to enhance performance and security.

SvSAN’s performance-enhancing features are collectively known as Predictive Storage Caching, which utilizes patented algorithms to unleash the full power of memory and hybrid disk configurations.

SvSAN’s data encryption feature enables organizations with one to thousands of locations to affordably and efficiently introduce data encryption at each individual site.

For a full explanation of SvSAN’s features, please visit the SvSAN Features page.

Yes, capacity can be upgraded at any time.

Software updates and product support require a valid StorMagic Maintenance & Support contract.

StorMagic delivers 24/7, world-class support to ensure customers and partners can quickly and effectively troubleshoot any difficulties that may arise. StorMagic Maintenance & Support provides instant access to knowledge base articles and software updates, both major and minor releases (including bug fixes and new feature releases), as well as the ability to log technical support requests.

StorMagic Maintenance & Support is available in two levels, Gold or Platinum.

For full details regarding StorMagic support, including comparisons between support plans, product lifecycle matrixes, severity definitions and business support hours, please refer to the StorMagic support overview document.

SvSAN witness

The SvSAN witness is a quorum service that acts as a tiebreaker, providing a majority vote in the event of a failure that requires a cluster leader election process. This prevents SvSAN clusters from getting into a state known as “split-brain”.

Split-brain is a clustering condition that occurs when cluster nodes lose contact with one another and begin to operate independently. The data on each node starts to diverge and become inconsistent, ultimately leading to data corruption and loss.

System requirements for the SvSAN witness are provided in the SvSAN witness data sheet. Please refer to the system requirements table for full details.

SvSAN has multiple deployment options for the witness to suit different requirements. These include:

Shared remote witness – This keeps the IT infrastructure at the remote site to the minimum needed for high availability (two servers). The witness is located at another central site (datacenter/HQ) and accessed over a WAN link. A single witness can be shared between hundreds of SvSAN clusters in remote locations.

Local witness – Similar to the previous configuration, there are two servers, each with an SvSAN VSA installed. This time, however, the witness is hosted on a third physical server or virtual machine (outside the SvSAN HA cluster) located at the same site. This configuration is for environments that are totally isolated, have limited or no external network connectivity, and need to protect against split-brain scenarios.

Multi-node SvSAN cluster – SvSAN can be deployed in multi-node clusters containing three or more servers. In this deployment scenario, one of the SvSAN VSAs acts as the quorum for other cluster members (a short sketch of this pairing rule follows the examples below). For example, with a 3-node cluster with VSAs A, B & C:

  1. When a mirror is created between VSAs A & B, VSA C acts as the quorum.
  2. When a mirror is created between VSAs A & C, VSA B acts as the quorum.
  3. When a mirror is created between VSAs B & C, VSA A acts as the quorum.
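
The pairing rule can be expressed very simply: the quorum for any mirrored pair is a cluster member that holds neither plex. The following is a minimal, illustrative Python sketch of that rule only; it is not an SvSAN API and the names are hypothetical:

```python
# For a mirror between two VSAs, any other cluster member can act as quorum.
# Purely illustrative; not an SvSAN API.
def quorum_candidates(cluster: set[str], mirror_pair: tuple[str, str]) -> set[str]:
    return cluster - set(mirror_pair)

cluster = {"VSA-A", "VSA-B", "VSA-C"}
print(quorum_candidates(cluster, ("VSA-A", "VSA-B")))   # -> {'VSA-C'}
```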

More information on multi-node SvSAN clusters is available in this white paper.

No witness – The final option is to deploy two servers at the remote site with no witness. In this configuration it is possible to enter a split-brain scenario in the event of loss of network connectivity between servers or if the servers are rebooted simultaneously. To reduce the chance of split-brain occurring, best practices should be followed. These include providing resilient network connections between servers, using quality components and using multiple, redundant power supplies.

No – the witness is an optional infrastructure component, see the “no witness” deployment option in the answer to “What are the typical SvSAN witness deployment options?”.

Yes – with clusters of 3 nodes or more, other VSAs in the cluster act as quorums for other pairs of VSAs. See the “multi-node SvSAN cluster” deployment option in the answer to “What are the typical SvSAN witness deployment options?”.

Yes – the witness can be located at a central site and provide quorum for remote SvSAN VSA clusters. See the “shared remote witness” deployment option in the answer to “What are the typical SvSAN witness deployment options?”.

Operating system compatibility for the SvSAN witness is provided in the SvSAN witness data sheet. Please refer to the system requirements table for all supported operating systems.

Yes – the witness can be shared between multiple clusters and mirrors. StorMagic has customers with over 2000 sites each with multiple mirror targets using a centralized witness.

For a thorough run-down of different failure scenarios that the SvSAN witness can protect against, please refer to the witness white paper.

The witness uses the SvSAN discovery service, which utilizes the network port 4174 (TCP/UDP). Please refer to the “Port numbers in use by SvSAN” section of the SvSAN manual for all the ports used by SvSAN.
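
If witness traffic crosses firewalls, a quick sanity check is to confirm that the discovery port is reachable from the VSA network. The sketch below tests TCP 4174 only (UDP cannot be verified with a plain connect); the witness hostname is a placeholder:

```python
# Quick TCP reachability check for the SvSAN discovery port (4174).
# "witness.example.com" is a placeholder; UDP 4174 must also be permitted,
# but cannot be verified with a plain connect like this.
import socket

def tcp_port_open(host: str, port: int = 4174, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(tcp_port_open("witness.example.com"))
```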

When using the witness over a WAN link the following are network bandwidth and latency recommendations to ensure optimal operation:

  • Latency should be less than 3,000ms, which would allow the witness to be located nearly anywhere in the world.
  • The amount of data transmitted from the VSA to the witness is small (under 100 bytes per second). It is recommended that there is at least 9Kb/s of available network bandwidth between the VSA and witness; 100 bytes per second is roughly 0.8Kb/s, so this allows more than tenfold headroom.

In general, the witness can function over very high latencies and has very low network bandwidth requirements. Although networks with such extreme characteristics are rarely encountered in practice, these tolerances show how efficient the witness is.

For more discussion on the bandwidth and latency tolerances of the witness, please refer to the SvSAN witness white paper.

The witness is architected to support hundreds of mirror targets and clusters. It uses a lightweight protocol requiring minimal system and network resources.

While it is possible to use a single witness for all sites, best practice would be to deploy multiple witnesses at the datacenter/HQ and divide the remote clusters/mirrors among the available witnesses to avoid affecting all remote sites in the event of a witness server failure. Multiple witnesses can be deployed in different regions to ensure network connectivity is available and meets the minimum requirements.
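
As a simple illustration of this best practice, remote-site clusters could be distributed round-robin across a small pool of central witnesses. The sketch below is hypothetical (the site and witness names are placeholders) and is not an SvSAN management command:

```python
# Illustrative only: spread remote-site clusters across a pool of central
# witnesses so the failure of one witness does not affect every site.
def assign_witnesses(sites: list[str], witnesses: list[str]) -> dict[str, str]:
    return {site: witnesses[i % len(witnesses)] for i, site in enumerate(sites)}

sites = [f"site-{n:03d}" for n in range(1, 7)]
print(assign_witnesses(sites, ["witness-dc1", "witness-dc2"]))
```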

The witness does not store any application/customer data; it only records mirror target and cluster state. Data stored on the witness includes:

  • Mirror state (synchronized, re-synching, etc.)
  • iSCSI Qualified Name (IQN)
  • Mirror target name
  • Mirror plex names
  • VSAs the mirror plexes reside on

Yes – the witness can be in a separate subnet.

The witness uses the SvSAN discovery service, which utilizes network port 4174 TCP/UDP. By default, this service does not “traverse” subnets; this keeps broadcast traffic to a minimum. However, it is possible to create static network entries in the routing tables, enabling the witness to reside on a different subnet to the VSA clusters.

Static network entries can be created manually using the WebGUI or through scripting following deployment. They are automatically created when static IP addresses are used for the management address of the VSA.

It is only possible to have a single witness providing quorum for a cluster/mirror target. This is to ensure that all VSAs within a cluster agree on which VSA is the leader. However, the witness has a number of deployment options to ensure its availability:

  1. The witness can be installed onto a VM that can be failed over to another virtual server in the event of an outage.
  2. The witness can be installed onto a standby server/VM. A manual switchover is performed to make the clusters/mirror targets use this “standby” witness in the event of a failure. This operation can be scripted.
  3. In VMware vSphere environments the witness can be installed onto a fault tolerant (FT) virtual machine, ensuring there is no downtime of the witness service.
  4. It is also possible to automate the transition with a separate monitoring device and PowerShell cmdlets.

Yes it can, as long as it is not hosted on the cluster it is providing quorum for.

Yes – it is possible to run the witness on a server/VM hosted by a cloud provider.

The server/virtual machine must meet the minimum server resource specification, and the network latency and bandwidth must be sufficient; both are outlined in the answer to “What are the system requirements for the SvSAN witness?”.

The StorMagic SvSAN Witness Appliance is a restricted version of the SvSAN VSA that is dedicated to providing quorum capabilities only. It has the following benefits:

  • Lightweight system requirements (CPU, memory and disk)
  • Self-contained – does not require another operating system e.g. Microsoft Windows or Linux
  • Quicker deployment
  • Upgraded using the same firmware as the SvSAN VSA

The Witness Appliance is for VMware vSphere ESXi environments only.

SvSAN data encryption

Gartner defines encryption as:
“Encryption is the process of systematically encoding a bit stream before transmission so that an unauthorized party cannot decipher it.”

Encryption should be considered as one aspect of a wider security strategy and is the process of translating data from one form (plaintext) to another (ciphertext). It ensures that if the data falls into an unauthorized party’s hands, the data cannot be accessed without having the correct encryption keys to decrypt the data.

Data encryption protects data when it is stored on disk and can be used to protect data from unauthorized access or equipment theft.

In the event of a disk failure, the failed disks can be disposed of or replaced without fear of data being accessed, as it is encrypted. This eliminates the requirement for data destruction techniques such as “degaussing” of magnetic disks, physical destruction or “disk scrubbing”.

SvSAN’s data encryption feature uses the widely available open source “OpenSSL” library that provides the encryption algorithms.

The current version is OpenSSL 1.0.2u.

SvSAN data encryption uses the XTS-AES-256 cipher to encrypt data.

The crypto key length is 256 bits.

SvSAN data encryption uses an embedded FIPS 140-2 (Level 1) validated cryptographic module (OpenSSL Object Module v2.0, Certificate #1747) running on the SvSAN platform per FIPS 140-2 Implementation Guidance section G.5 guidelines.

It utilizes a robust cryptographic cipher (XTS-AES-256) that is FIPS 140-2 compliant and meets HIPAA, PCI DSS and SOX standards.

However, SvSAN itself is not currently FIPS 140-2 validated.

Gartner defines Enterprise Key Management (EKM) as:
“Enterprise key management (EKM) provides a single, centralized software or network appliance for multiple symmetric encryption or tokenization cryptographic solutions. Critically, it enforces consistent data access policies through encryption and tokenization management. It also facilitates key distribution and secure key storage, and maintains consistent key life cycle management.”

In addition, they also state that:
“EKM products adopt the Key Management Interoperability Protocol (KMIP) standard, sponsored by the Organization for the Advancement of Structured Information Standards (OASIS). EKM solutions can manage any cryptographic solutions that are compliant with KMIP.”

Key Management Interoperability Protocol (KMIP) was introduced by OASIS in 2010 and is a single, standard protocol for communication between key management systems and encryption solutions.

Prior to KMIP, each vendor had its own encryption key management solution, leading to multiple key management solutions (KMS) being used and increasing management overhead.

KMIP provides a standard mechanism for applications, storage arrays, tape libraries, disk drives (Self-Encrypting Drives) and networking equipment to communicate with key management solutions (KMS) from different vendors, reducing the number of key management solutions required.

SvSAN data encryption supports KMIP versions 1.0 to 1.4.

By default, KMIP uses port 5696, as assigned by the IANA (Internet Assigned Numbers Authority). This is the only port that should need to be opened on a firewall between SvSAN and the KMS.
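
KMIP runs over TLS, so a basic way to confirm that the KMS is reachable from the SvSAN network on port 5696 is a simple TLS connection test. The sketch below uses a placeholder KMS hostname and disables certificate verification purely for the reachability check; it is not a recommended production setting, and many KMS deployments will additionally require a client certificate:

```python
# Basic TLS reachability check against a KMIP key manager on port 5696.
# "kms.example.com" is a placeholder. Verification is disabled only for
# this connectivity test; production KMIP sessions use mutual TLS.
import socket
import ssl

def kmip_reachable(host: str, port: int = 5696, timeout: float = 5.0) -> bool:
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() is not None
    except OSError:
        return False

print(kmip_reachable("kms.example.com"))
```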

SvSAN’s data encryption feature is compatible with all KMIP-compliant key managers, including StorMagic SvKMS. This includes software-based KMS solutions and hardware security modules (HSM). An HSM is a hardened, tamper-resistant appliance that is specifically designed for the protection of the cryptographic keys.

StorMagic recommends SvKMS encryption key management for SvSAN users with the data encryption feature enabled. However, the SvSAN manual contains integration guides with many leading KMS solutions from major providers.

Integration guides are included within the SvSAN manual, under the “More guides > Data encryption – SvSAN KMS integration” section.

Availability is the most important requirement of key management. Therefore, it is highly recommended to have at least two or possibly more key management servers to ensure keys are always available.

If it is not possible to contact the KMS to retrieve the cryptographic keys, it is not possible to encrypt or, more importantly, decrypt the data, and the data is effectively lost.

Having multiple key management servers allows the keys to be replicated, and ideally they would be installed in different locations/datacenters to ensure that power outages, floods, fires, etc. do not interrupt availability.

In addition to having multiple servers, good key management best practices should be followed, such as:

  • Take frequent backups of the KMS
  • Don’t store the keys on the storage that the keys are protecting

It is always best to consult with the KMS provider to ensure everything is configured and set up according to their best practices.

No, SvSAN data encryption is software-only and does not require any special encryption cards, RAID cards, FPGAs or ASICs.

However, as encryption can be quite CPU intensive, many modern CPUs provide hardware acceleration instructions that significantly improve encryption performance. These are known as “Advanced Encryption Standard New Instruction” (AES-NI) operations. (https://en.wikipedia.org/wiki/AES_instruction_set)

SvSAN uses the OpenSSL library and the XTS-AES-256 cipher, which in turn uses the AES-NI hardware acceleration instructions if they are present and enabled in the CPU. The AES-NI instructions may need to be enabled in the server BIOS.
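
On Linux hosts, one quick way to check whether the CPU exposes AES-NI to the operating system is to look for the aes flag in /proc/cpuinfo. This minimal sketch is Linux/x86-specific; if the flag is missing on AES-NI capable hardware, check whether the instructions have been disabled in the server BIOS:

```python
# Check whether the CPU advertises AES-NI to the OS (Linux/x86 only).
# If this returns False on AES-NI capable hardware, the feature may have
# been disabled in the server BIOS.
def aes_ni_enabled(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "aes" in line.split()
    return False

print(aes_ni_enabled())
```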

Yes, it is possible to encrypt an existing volume with data already stored on it.

Yes. Changing the encryption keys is possible.

There are many variables that determine the impact on performance when using encryption, including:

  • How “fast” is the CPU?
  • Does the CPU support AES-NI?
    (Most modern processors support AES-NI and newer generations of CPUs have made performance improvements for these instructions over the previous generation, making them faster to perform encryption calculations.)
  • How fast are the underlying disks?
  • How much I/O and amount of data is being generated?

Therefore the answer about performance impact is: “it depends” – and will be specific to each customer environment.

SvSAN remote syslog feature

Syslog is a standard protocol, defined in RFC 5424, that is used by the rsyslog utility to send system log or event messages to a centralized server, allowing logs to be consolidated from many different devices, including servers, storage, network devices (routers, switches, firewalls) and peripherals (printers, scanners). This allows the logs to be used for system monitoring, security auditing and other analysis purposes.

Traditionally, syslog uses the UDP protocol on port 514 but can be configured to use any port.
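
A quick way to confirm that a remote syslog collector is receiving messages on UDP port 514, before pointing the VSAs at it, is to send a test message with Python's standard library. The collector address below is a placeholder; note that SysLogHandler's default framing follows the older BSD syslog convention rather than RFC 5424, so this is a connectivity check only:

```python
# Send a test message to a remote syslog collector over UDP port 514.
# "syslog.example.com" is a placeholder for the collector's address.
import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(address=("syslog.example.com", 514))
logger = logging.getLogger("svsan-syslog-test")
logger.addHandler(handler)
logger.warning("Test message: verifying remote syslog connectivity")
```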

Currently all events are sent from the VSA to the remote syslog server.

The SvSAN remote syslog feature should work with any syslog collector software that can decode the standard syslog message format as defined in RFC 5424.

It has been tested with the following common syslog software packages and works out of the box without requiring any additional message translation:

  • Linux syslog server
  • Nagios
  • Paessler PRTG
  • Graylog

Additional software packages will be tested and added based on customer feedback.

SvSAN with VMware vSphere

Hypervisor version compatibility for SvSAN is provided in the SvSAN data sheet. Please refer to the hypervisor compatibility table for all supported hypervisors.

The plugin provides virtual SAN management, from wizard-based deployment to ongoing administrative tasks, directly within vCenter, whether on premises or in a centralized location.

The StorMagic dashboard within vCenter displays a traffic-light-based health status for all VSAs managed by that vCenter and provides the capability to perform VSA firmware upgrades.

The SvSAN VSA VM presents a console available via the hypervisor tools to enable basic management tasks and ensure network connectivity.

VSAs are managed via the StorMagic vCenter plugin. Alongside this, each VSA presents a web interface that can be accessed from any web browser using the VSA's IP address or hostname.

SvSAN requires a VMware vSphere hypervisor and will run with or without vCenter. SvSAN can provide integrated vSphere management, if a vCenter server is available.

If a vCenter server is not available, SvSAN can be managed via a web interface and the hosts, to provide software-defined SAN functionality. vCenter is required for VM High Availability.

Yes, SvSAN fully supports all TCP/IP networking with architectures utilizing subnets or VLANs to segregate traffic types. Accessing the plugin while using VLANs requires the management interfaces to be configured between the vCenter and the appliances. For example, you could have one VLAN containing the vCenter, Service Console and an SvSAN management interface, and another VLAN containing a VMkernel port and another SvSAN interface, providing a dedicated iSCSI VLAN.

Yes, it is possible to perform a staggered, non-disruptive upgrade of both host and virtual appliance.

SvSAN with Microsoft Hyper-V

Hypervisor version compatibility for SvSAN is provided in the SvSAN data sheet. Please refer to the hypervisor compatibility table for all supported hypervisors.