StorMagic SvSAN FAQ

Find the answers you need within these frequently asked questions

The StorMagic SvSAN FAQ attempts to answer a wide variety of queries relating to StorMagic SvSAN, its deployment and features.

In addition to this FAQ, complete documentation for the deployment of SvSAN is available to read by visiting the SvSAN manual page.

Have you read through the SvSAN data sheet, and the SvSAN Technical Overview white paper, both of which are packed with useful information about SvSAN, its requirements, and capabilities?

If you cannot find an appropriate answer within the SvSAN FAQ, you can contact the StorMagic team at [email protected].

 

General questions

SvSAN is a virtual SAN solution which enables simple and affordable hyperconverged infrastructure without compromising on reliability and performance.

SvSAN is a software-defined storage solution designed to deliver hyperconverged compute and storage infrastructure with two or more low-cost servers. It is uniquely optimized for cost-effective, multi-site data management, enabling continuous, high-speed data access for business-critical applications.

Eliminate the need for physical SANs, remove the worry of downtime for business-critical applications and significantly lower IT operating and acquisition costs. SMEs and large organizations across 72 countries have already chosen SvSAN to modernize their IT infrastructure.

This robust software product is designed for the realities of poor network reliability often found in remote areas, and it delivers highly available hyperconverged infrastructure, even when networks have long latencies and limited throughput.

SvSAN supports the industry-leading hypervisors, VMware vSphere and Microsoft Hyper-V. It is installed as a Virtual Storage Appliance (VSA) requiring minimal server resources to provide the shared storage necessary to enable the advanced hypervisor features such as High-Availability/Failover Cluster, vMotion/Live Migration and VMware Distributed Resource Scheduler (DRS)/Dynamic Optimization.

For full details on hypervisor version compatibility, please refer to the SvSAN data sheet.

SvSAN can be deployed as a simple 2-node cluster, with the flexibility to meet changing capacity and performance needs. This is achieved by adding additional capacity to existing servers or by growing the SvSAN cluster, without impacting service availability.

SvSAN mirrors data between VSAs/cluster nodes synchronously, ensuring that data is stored in two places before a write is acknowledged as complete.

Each side of the mirror (plex) is active, which allows data to be accessed from either plex. In the event that one side of the mirror fails (server failure, storage failure, network failure), data can still be accessed from the surviving plex.

While one side of a mirror is offline, changes to the surviving side are recorded in the metadata journal. Upon recovery, the journal is read to determine which data has changed; that data, along with any new data written, is then copied to the recovered side of the mirror. This is known as a “fast re-synchronization”.

The metadata journal should be at least 20 GB in size, which is capable of recording a very large number of changes. If the journal wraps (fills completely), the system simply reverts to a full mirror re-synchronization.
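
As a conceptual illustration only (not StorMagic's actual implementation), the recovery decision could be sketched in Python as follows, where each plex is modeled as a block-number-to-data map:

```python
from dataclasses import dataclass, field

@dataclass
class Journal:
    dirty_blocks: set = field(default_factory=set)  # blocks written while a plex was offline
    wrapped: bool = False  # True if more changes occurred than the journal could record

def resynchronize(journal: Journal, surviving: dict, recovered: dict) -> None:
    """Bring the recovered plex back in sync with the surviving plex."""
    if journal.wrapped:
        # Journal overflowed: fall back to a full mirror re-synchronization.
        recovered.clear()
        recovered.update(surviving)
    else:
        # Fast re-synchronization: copy only the blocks the journal marked dirty.
        for block in journal.dirty_blocks:
            recovered[block] = surviving[block]
    journal.dirty_blocks.clear()
```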

SvSAN does not explicitly handle protection against drive failures. The server on which the VSA is running is expected to protect against drive failures using hardware RAID.

In cases where there is no RAID, the data is protected by SvSAN with a mirrored copy of the data on another SvSAN node.

The VSA is designed to handle any unexpected power loss. On startup, the VSA performs checks to determine whether it was previously shut down in a graceful manner.

If the VSA was previously shut down abnormally, checks are run to ensure all the configuration data is consistent and correct.

A VSA contains dual boot images, to protect against corruption. If the primary boot image becomes corrupted, the VSA can boot from the other image.
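
Conceptually, dual-image boot fallback works like the sketch below (illustrative only; the image paths and checksum scheme are hypothetical, not SvSAN's actual mechanism):

```python
import hashlib
import pathlib

def pick_boot_image(images):
    """Return the first boot image whose contents still match its recorded checksum.

    `images` is a list of {"path": ..., "sha256": ...} dicts, primary image first.
    """
    for image in images:
        data = pathlib.Path(image["path"]).read_bytes()
        if hashlib.sha256(data).hexdigest() == image["sha256"]:
            return image["path"]
    raise RuntimeError("no intact boot image found")
```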

SvSAN has been designed to be scalable as far as possible, in that most entities do not have hard limits, but rather are limited by available hardware resources.

The maximum capacity of a virtual disk is 128 Petabytes.

The minimum scaling unit to provide highly available shared storage is two nodes, where virtual disks are mirrored between nodes.

A third server is used to provide quorum to protect against split-brain scenarios; this is the SvSAN witness. Read more about it in this white paper. The witness can be located onsite, or remotely over a WAN. If a third node cannot be used, SvSAN can operate in a 2-node configuration using the “Stay up Isolation Policy”.

Beyond two nodes, any number of nodes can be supported. It is possible to create as many mirrored, virtual disks as capacity allows, and each can be mirrored between any pair of nodes.

Additional questions regarding the SvSAN witness are answered in a separate section below.

The minimum recommended requirements for SvSAN are:

CPU        1 x virtual CPU core¹, 2 GHz or higher (reserved)
Memory     1GB RAM²
Disk       2 x virtual storage devices used by the VSA
             • 1 x 512MB boot device, used to store the VSA boot image and configuration data
             • 1 x 20GB journal disk, used to store journaling metadata, log files, etc.
Network    1 x 1Gb Ethernet
             • Multiple interfaces required for resiliency
             • 10Gb Ethernet is supported
             • Jumbo frames supported

¹ When using SvSAN Advanced Edition with Data Encryption enabled, 2+ virtual CPUs are recommended.
² Additional RAM may be required when caching is enabled.

SvSAN can support up to 32 GB of memory per VSA.

The following table shows the recommended amount of memory that needs to be allocated to the VSA based upon the memory and SSD cache sizes:

                   SSD cache size
Memory cache       Up to 0GB²   Up to 250GB   Up to 500GB   Up to 1000GB   Up to 1500GB   Up to 2000GB
0GB¹               1GB          3GB           3GB           4GB            5GB            6GB
1GB                3GB          4GB           4GB           5GB            6GB            7GB
2GB                4GB          5GB           5GB           6GB            7GB            9GB
3GB                5GB          6GB           6GB           7GB            9GB            10GB
4GB                6GB          7GB           7GB           9GB            10GB           11GB
6GB                9GB          9GB           10GB          11GB           12GB           13GB
8GB                11GB         11GB          12GB          13GB           14GB           15GB
12GB               15GB         16GB          16GB          17GB           18GB           20GB
16GB               20GB         20GB          21GB          22GB           23GB           24GB
20GB               24GB         24GB          25GB          26GB           27GB           28GB
24GB               28GB         29GB          29GB          31GB           32GB           -

¹ Memory caching disabled
² SSD caching disabled
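
For convenience, the sizing table can be turned into a simple lookup. The Python sketch below transcribes the values above and rounds each cache size up to the next table tier (the function name and rounding behaviour are our own convenience, not part of SvSAN):

```python
SSD_TIERS = [0, 250, 500, 1000, 1500, 2000]  # GB, column ceilings from the table
VSA_MEMORY = {  # memory cache GB -> recommended VSA memory GB per SSD tier
    0:  [1, 3, 3, 4, 5, 6],
    1:  [3, 4, 4, 5, 6, 7],
    2:  [4, 5, 5, 6, 7, 9],
    3:  [5, 6, 6, 7, 9, 10],
    4:  [6, 7, 7, 9, 10, 11],
    6:  [9, 9, 10, 11, 12, 13],
    8:  [11, 11, 12, 13, 14, 15],
    12: [15, 16, 16, 17, 18, 20],
    16: [20, 20, 21, 22, 23, 24],
    20: [24, 24, 25, 26, 27, 28],
    24: [28, 29, 29, 31, 32, None],  # '-': exceeds the 32 GB VSA memory limit
}

def recommended_vsa_memory_gb(mem_cache_gb: int, ssd_cache_gb: int) -> int:
    """Round both cache sizes up to the nearest table tier and look up the cell."""
    mem_tier = min(t for t in VSA_MEMORY if t >= mem_cache_gb)
    ssd_index = next(i for i, c in enumerate(SSD_TIERS) if ssd_cache_gb <= c)
    result = VSA_MEMORY[mem_tier][ssd_index]
    if result is None:
        raise ValueError("combination not supported by the sizing table")
    return result

print(recommended_vsa_memory_gb(2, 400))  # -> 5 (2GB memory cache, <=500GB SSD cache)
```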

For more information on SvSAN's caching abilities, please refer to the caching white paper.

The minimum network requirement is 1Gb Ethernet.

SvSAN supports 10Gbps and 40Gbps Ethernet, jumbo frames and network teaming to provide network performance improvements.

Yes, SvSAN will take advantage of all network links the virtual server has configured. SvSAN can be configured to load balance and aggregate the bandwidth of all available network interfaces. These can be used for management, mirroring or iSCSI traffic.

Yes, two virtual switches need to be configured on the virtual servers. SvSAN, by default, is installed and configured with two virtual network interfaces (vNICs); however, more network interfaces may be added.

In the event that all network interfaces used for mirror traffic are unavailable, the mirror traffic will redirect over any remaining management network interfaces.

Yes, an existing un-mirrored target can be converted into a mirrored target and vice-versa. Please refer to the SvSAN manual or contact our support team at [email protected] for detailed steps.

The serial number and current host name are displayed at the start of the configuration wizard.

After setup is complete, you can view the serial number from the console, from the System tab of the WebGUI (e.g. https://VSAname.domainname/system/license/), or from the VSA view > System tab within the StorMagic plugin.

Yes, SvSAN cluster nodes can be located in different locations – for example, on different sides of a building, across a campus, or in different cities.

Please refer to the stretch cluster white paper for details on bandwidth and latency requirements, or contact our support team at [email protected].

Yes, it is possible to skip a single firmware revision, for example upgrading from SvSAN 5.1 directly to SvSAN 5.3. The intermediate upgrade to SvSAN 5.2 is not required.

Please refer to the SvSAN release notes for supported and valid upgrade paths.

 

Licensing and support

Yes, a free, fully functional evaluation of SvSAN is available to download, enabling organizations to trial and experience the features and benefits of SvSAN before purchasing. For more information and to download an evaluation copy, visit the trial download page on the website.

During the trial period, evaluators will receive support and assistance with the first installation and a product demonstration.

SvSAN is sold as a perpetual license in pairs of license keys for the total addressable usable VSA storage capacity.

  • Available in 2TB, 6TB, 12TB and Unlimited TB usable storage capacities
  • 1 license required per server/cluster node
  • Pricing based on a 2 node license bundle (single licenses also available)
  • Two editions - Standard and Advanced

SvSAN licenses include the first year of maintenance. Additional maintenance years can be purchased in increments of 2 and 4 years, for 3 and 5 year maintenance terms.

To discuss your maintenance requirements and obtain a quote, please contact our sales team at [email protected].

SvSAN is available in two editions – Standard and Advanced.

The following table provides an overview of the features in each edition:

Feature                                           Standard Edition   Advanced Edition
Synchronous mirroring/high availability           X                  X
Stretch/metro cluster support                     X                  X
Volume migration                                  X                  X
VSA restore¹                                      X                  X
VMware vSphere Storage API (VAAI) support         X                  X
Centralized management and monitoring             X                  X
Witness                                           X                  X
I/O performance statistics                        X                  X
Multiple VSA GUI deployment and upgrade           X                  X
PowerShell script generation                      X                  X
Write back caching (SSD)                                             X
Predictive read ahead caching (SSD and memory)                       X
Data pinning                                                         X
Data encryption                                                      X

¹ VMware only

Yes, the SvSAN edition can be upgraded at any time.

Yes, the licensed storage capacity can be upgraded at any time.

Software updates and product support require a valid StorMagic Maintenance & Support contract.

StorMagic delivers 24/7, world-class support to ensure customers and partners can quickly and effectively troubleshoot any difficulties that may arise. StorMagic Maintenance & Support provides instant access to knowledge base articles and software updates, both major and minor (including bug fixes and new feature releases), as well as the ability to log technical support requests.

StorMagic Maintenance & Support is available in two levels, Gold or Platinum.

Gold delivers daytime, weekday support, while Platinum offers 24/7 support, 365 days a year. A summary of the maintenance levels is shown in the table below:

                                                Gold Support                      Platinum Support
Hours of operation                              8 hours a day¹, Monday - Friday   24 hours a day², 7 days a week
Length of service                               1, 3 or 5 years                   1, 3 or 5 years
Product updates                                 Yes                               Yes
Product upgrades                                Yes                               Yes
Access method                                   Email, Web Chat                   Email, Web Chat, Telephone
Response method                                 Email, Telephone                  Email, Telephone
Remote support / WebEx                          Yes                               Yes
Maximum support administrators per contract     2                                 4
Target response times
- Low                                           12 business hours                 8 business hours
- Medium                                        8 business hours                  4 business hours
- Critical                                      4 business hours                  1 hour (24/7/365)

¹ Gold Support is only available within the timezones of UTC -08:00 to UTC +02:00. If you fall outside of this range, you must purchase Platinum Support.
² Global, 24x7 support for critical issues

 

SvSAN witness

The SvSAN witness is a quorum service that acts as a tiebreaker, providing a majority vote in the event of a failure requiring a cluster leader election process. This prevents SvSAN clusters from getting into a state known as “split-brain”.

Split-brain is a clustering condition that occurs when cluster nodes lose contact with one another and begin to operate independently. The data on each node starts to diverge and become inconsistent, ultimately leading to data corruption and loss.
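
In essence, the witness contributes a third vote so that exactly one side of a partitioned cluster keeps running. Here is a minimal sketch of that majority rule (illustrative only, not SvSAN's actual election protocol):

```python
def may_serve_io(reachable_voters: int, total_voters: int = 3) -> bool:
    """A node continues serving I/O only while it can see a strict majority
    of voters (itself, its mirror partner and the witness)."""
    return reachable_voters > total_voters // 2

# An isolated node sees only itself (1 of 3 votes) and must stand down,
# while its partner plus the witness (2 of 3) carry on - no split-brain.
print(may_serve_io(1))  # False
print(may_serve_io(2))  # True
```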

The witness has minimal server requirements, as shown below:

CPU        1 x virtual CPU core (1 GHz)
Memory     512MB (reserved)
Disk       512MB
Network    1 x 1Gb Ethernet NIC

When using the witness over a WAN link, use the following recommendations for optimal operation:

  • Latency of less than 3000ms; this allows the witness to be located almost anywhere in the world
  • 9Kb/s of available network bandwidth between the VSA and witness (less than 100 bytes of data is transmitted per second)

Operating System: the SvSAN witness can be deployed onto a physical server or virtual machine with the following:

  • Windows Server 2016 (64-bit)
  • Hyper-V Server 2016 (64-bit)
  • Raspbian Jessie (32-bit)¹
  • Raspbian Stretch (32-bit)²
  • vCenter Server Appliance (vCSA)³
  • StorMagic SvSAN Witness Appliance

¹ On Raspberry Pi 1, 2 and 3
² On Raspberry Pi 2, 3 and 3+
³ VMware vSphere 5.5 and higher

NOTE: The witness should be installed onto a server separate from the SvSAN VSA.

SvSAN has multiple deployment options for the witness to suit different requirements. These include:

Shared remote witness – This allows the remote site to run only the minimum IT infrastructure needed for high availability (two servers). The witness is located at another central site (datacenter/HQ) and accessed over a WAN link. A single witness can be shared between thousands of SvSAN clusters in remote locations.

Local witness – Similar to the previous configuration, there are two servers, each with an SvSAN VSA installed. This time, however, the witness is hosted on a third physical server or virtual machine (outside the SvSAN HA cluster) located at the same site. This configuration is for environments which are totally isolated, have limited or no external network connectivity, and need to protect against split-brain scenarios.

Multi-node SvSAN cluster – SvSAN can be deployed in multi-node clusters containing three or more servers. In this deployment scenario one of the SvSAN VSAs acts as a quorum for other cluster members. For example, with a 3-node cluster with VSAs A, B & C:

  1. When a mirror is created between VSAs A & B, VSA C acts as the quorum.
  2. When a mirror is created between VSAs A & C, VSA B acts as the quorum.
  3. When a mirror is created between VSAs B & C, VSA A acts as the quorum.
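
Choosing the quorum for each mirror simply means picking a cluster member that does not hold one of the mirror's plexes, as in this small illustrative sketch:

```python
def quorum_for(mirror_pair: set, cluster: list) -> str:
    """Return a VSA that can act as quorum: any member not hosting a plex."""
    return next(vsa for vsa in cluster if vsa not in mirror_pair)

cluster = ["A", "B", "C"]
print(quorum_for({"A", "B"}, cluster))  # -> C
print(quorum_for({"B", "C"}, cluster))  # -> A
```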

No witness – The final option is to deploy two servers at the remote site with no witness. In this configuration it is possible to enter a split-brain scenario in the event of loss of network connectivity between servers or if the servers are rebooted simultaneously. To reduce the chance of split-brain occurring, best practices should be followed. These include providing resilient network connections between servers, using quality components and using multiple, redundant power supplies.

No – the witness is an optional infrastructure component, see the “no witness” deployment option in the previous answer above.

Yes – with clusters of 3 nodes or more, other VSAs in the cluster act as quorums for other pairs of VSAs. See the “multi-node SvSAN cluster” deployment option in the answer above.

Yes – the witness can be located at a central site and provide quorum for remote SvSAN VSA clusters. See the “shared remote witness” deployment option in the answer above.

The witness can be installed on servers or VMs running one of the following operating systems:

  • Windows Server 2016 (64-bit)
  • Hyper-V Server 2016 (64-bit)
  • Raspbian Jessie (32-bit)¹
  • Raspbian Stretch (32-bit)²
  • vCenter Server Appliance (vCSA)³
  • StorMagic SvSAN Witness Appliance

¹ On Raspberry Pi 1, 2 and 3
² On Raspberry Pi 2, 3 and 3+
³ VMware vSphere 5.5 and higher

Yes – the witness can be shared between multiple clusters and mirrors. StorMagic has customers with over 2000 sites each with multiple mirror targets using a centralized witness.

For a thorough run-down of different failure scenarios that the SvSAN witness can protect against, please refer to the witness white paper.

The witness uses the SvSAN discovery service, which utilizes the network port 4174 (TCP/UDP). Please refer to the “Port numbers in use by SvSAN” section of the SvSAN manual for all the ports used by SvSAN.
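
To confirm that a firewall is not blocking the discovery service, the TCP side of port 4174 can be probed with a few lines of Python (a basic reachability check only; the hostname is a placeholder, and UDP reachability would need a separate check):

```python
import socket

def tcp_port_reachable(host: str, port: int = 4174, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection; True means something is listening and reachable."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(tcp_port_reachable("witness.example.com"))  # hypothetical witness hostname
```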

When using the witness over a WAN link, the following network bandwidth and latency recommendations ensure optimal operation:

  • Latency should be less than 3,000ms; this would allow the witness to be located nearly anywhere in the world.
  • The amount of data transmitted from the VSA to the witness is small (under 100 bytes per second). It is recommended that there is at least 9Kb/s of available network bandwidth between the VSA and witness.
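
The headroom implied by these numbers is easy to check with back-of-the-envelope arithmetic:

```python
payload_bytes_per_s = 100                             # upper bound stated above
payload_kbit_per_s = payload_bytes_per_s * 8 / 1000   # = 0.8 kb/s before framing
recommended_kbit_per_s = 9

print(f"{recommended_kbit_per_s / payload_kbit_per_s:.1f}x headroom")  # ~11x
```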

In general, the witness can tolerate very high latencies and has very low network bandwidth requirements. Networks with such extreme characteristics are rarely encountered in practice, but these tolerances show how efficient the witness is.

For more discussion on the bandwidth and latency tolerances of the witness, please refer to the SvSAN witness white paper.

The witness is architected to support thousands of mirror targets and clusters. It uses a lightweight protocol requiring minimal system and network resources.

While it is possible to use a single witness for all sites, best practice would be to deploy multiple witnesses at the datacenter/HQ and divide the remote clusters/mirrors among the available witnesses to avoid affecting all remote sites in the event of a witness server failure. Multiple witnesses can be deployed in different regions to ensure network connectivity is available and meets the minimum requirements.

The witness does not store any application/customer data; it only records mirror target and cluster state. Data stored on the witness includes:

  • Mirror state (synchronized, re-synching, etc.)
  • iSCSI Qualified Name (IQN)
  • Mirror target name
  • Mirror plex names
  • VSAs the mirror plexes reside on
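
A hypothetical representation of such a record (field names are illustrative, not SvSAN's actual on-disk format):

```python
from dataclasses import dataclass

@dataclass
class WitnessRecord:
    target_name: str        # mirror target name
    target_iqn: str         # iSCSI Qualified Name (IQN)
    plex_locations: dict    # plex name -> VSA the plex resides on
    mirror_state: str       # e.g. "synchronized", "re-synchronizing"
```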

Yes – the witness can be in a separate subnet.

As previously mentioned, the witness uses the SvSAN discovery service, which utilizes network port 4174 TCP/UDP. By default this service does not “traverse” subnets; this ensures that broadcast traffic is kept to a minimum. However, it is possible to create static network entries in the network routing tables, enabling the witness to reside on a different subnet from the VSA clusters.

Static network entries can be created manually using the WebGUI or through scripting following deployment. They are automatically created when static IP addresses are used for the management address of the VSA.

It is only possible to have a single witness providing quorum for a cluster/mirror target. This is to ensure that all VSAs within a cluster agree on which VSA is the leader. However, the witness has a number of deployment options to ensure its availability:

  1. The witness can be installed onto a VM that can be failed over to another virtual server in the event of an outage.
  2. The witness can be installed onto a standby server/VM. A manual switchover is performed to make the clusters/mirror targets use this “standby” witness in the event of a failure. This operation can be scripted.
  3. In VMware vSphere environments the witness can be installed onto a fault tolerant (FT) virtual machine, ensuring there is no downtime of the witness service.

Yes – it is possible to run the witness on a server/VM hosted by a cloud provider.

The server/virtual machine must meet the minimum server resource specification, and the network must satisfy the latency and bandwidth requirements, both of which are outlined in the questions above.

The StorMagic SvSAN Witness Appliance is a restricted version of the SvSAN VSA that is dedicated to providing quorum capabilities only. It has the following benefits:

  • Lightweight system requirements (CPU, memory and disk)
  • Self-contained – does not require another operating system, e.g. Microsoft Windows or Linux
  • Quicker deployment
  • Upgraded using the same firmware as the SvSAN VSA

The Witness Appliance is for VMware vSphere ESXi environments only.

 

SvSAN Data Encryption

Gartner defines encryption as:
“Encryption is the process of systematically encoding a bit stream before transmission so that an unauthorized party cannot decipher it.”

Encryption should be considered one aspect of a wider security strategy. It is the process of translating data from one form (plaintext) to another (ciphertext), ensuring that if the data falls into an unauthorized party's hands, it cannot be accessed without the correct encryption keys to decrypt it.

Data-at-rest encryption protects data when it is stored on disk and can be used to protect data from unauthorized access or equipment theft.

In the event of a disk failure, the failed disks can be disposed of or replaced without fear of the data being accessed, as it is encrypted. This eliminates the requirement for data destruction techniques such as “degaussing” of magnetic disks, physical destruction or “disk scrubbing”.

The SvSAN Data Encryption feature uses the widely available open source “OpenSSL” library that provides the encryption algorithms.

The current version is OpenSSL 1.0.2n.

SvSAN Data Encryption uses the XTS-AES-256 cipher.

The crypto key length is 256 bits.
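
For readers who want to see the cipher in action, the snippet below encrypts one 512-byte sector with XTS-AES-256 using Python's `cryptography` package (which, like SvSAN, is backed by OpenSSL). Note that XTS takes two 256-bit AES keys concatenated, so 64 bytes of key material are supplied. This is an independent illustration, not SvSAN code:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)                 # two 256-bit AES keys concatenated, for XTS
tweak = (42).to_bytes(16, "little")  # per-sector tweak (here: sector number 42)

encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
ciphertext = encryptor.update(b"\x00" * 512) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == b"\x00" * 512
```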

The SvSAN Data Encryption feature uses an embedded FIPS 140-2 (Level 1) validated cryptographic module (OpenSSL Object Module v2.0, Certificate #1747) running on SvSAN platform per FIPS 140-2 Implementation Guidance section G.5 guidelines.

It utilizes a military-grade cryptographic cipher (XTS-AES-256) that is FIPS 140-2 compliant and meets HIPAA, PCI DSS and SOX standards.

However, SvSAN itself is not currently FIPS 140-2 validated.

Gartner defines Enterprise Key Management (EKM) as:
“Enterprise key management (EKM) provides a single, centralized software or network appliance for multiple symmetric encryption or tokenization cryptographic solutions. Critically, it enforces consistent data access policies through encryption and tokenization management. It also facilitates key distribution and secure key storage, and maintains consistent key life cycle management.”

In addition, they also state that:
“EKM products adopt the Key Management Interoperability Protocol (KMIP) standard, sponsored by the Organization for the Advancement of Structured Information Standards (OASIS). EKM solutions can manage any cryptographic solutions that are compliant with KMIP.”

Key Management Interoperability Protocol (KMIP) was introduced by OASIS in 2010 and is a single, standard protocol for communication between key management systems and encryption solutions.

Prior to KMIP, each vendor had its own encryption key management solution, leading to multiple key management solutions (KMS) being used and increased management overhead.

KMIP provides a standard mechanism for applications, storage arrays, tape libraries, disk drives (Self-Encrypting Drives) and networking equipment to communicate with key management solutions (KMS) from different vendors, reducing the number of key management solutions required.

SvSAN Data Encryption supports KMIP versions 1.0 to 1.4.

By default, KMIP uses port 5696, as assigned by the IANA (Internet Assigned Numbers Authority). This is the only port that should need to be opened on a firewall between SvSAN and the KMS.

As SvSAN Data Encryption uses KMIP for encryption key management, it should work with all KMIP-compliant KMS solutions. This includes software-based KMS solutions and hardware security modules (HSMs). An HSM is a hardened, tamper-resistant appliance that is specifically designed for the protection of cryptographic keys.
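
As a flavour of what KMIP interoperability looks like in practice, here is a minimal key create/retrieve round-trip using the third-party PyKMIP client (the hostname and certificate paths are placeholders; this is not part of SvSAN):

```python
# pip install pykmip
from kmip.pie.client import ProxyKmipClient
from kmip import enums

client = ProxyKmipClient(
    hostname="kms.example.com",   # placeholder KMS address
    port=5696,                    # IANA-assigned KMIP port
    cert="client-cert.pem",
    key="client-key.pem",
    ca="ca-cert.pem",
)

with client:
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256)  # new AES-256 key
    managed_key = client.get(key_id)                               # retrieve it by ID
```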

StorMagic has partnered with four leading KMS providers whose solutions have been tested and verified with SvSAN:

  • Fornetix Key Orchestration (solution brief)
  • HyTrust KeyControl
  • Thales Vormetric Data Security Manager
  • Gemalto SafeNet KeySecure

Integration guides for all four solutions are included within the SvSAN manual, under the "Technical notes > Data encryption - KMS integration" section.

Other example KMS solutions include:

  • Cryptsoft
  • Dell/EMC Cloudlink
  • HPE Secure Key Manager
  • IBM Security Key Lifecycle Manager
  • Townsend Security Alliance Key Manager
  • Hancom Secure
  • Cryptomathic Key Management System
  • Oracle Key Manager
  • QuintessenceLabs qCrypt Key and Policy Manager
  • Utimaco

Availability is the most important requirement of key management. Therefore, it is highly recommended to have at least two or possibly more key management servers to ensure keys are always available.

If it is not possible to contact the KMS to retrieve the cryptographic keys, it is not possible to encrypt or, more importantly, decrypt the data, and the data is effectively lost.

Having multiple key management servers allows the keys to be replicated; ideally they would be installed in different locations/datacenters to ensure that power outages, floods, fires, etc. do not interrupt availability.

In addition to having multiple servers, good key management best practices should be followed, such as:

  • Take frequent backups of the KMS
  • Don't store the keys on the storage that the keys are protecting

It is always best to consult with the KMS provider to ensure everything is configured and setup according to their best practices.

No, SvSAN Data Encryption is a software-only encryption solution and does not require any special encryption cards, RAID cards, FPGAs or ASICs.

However, as encryption can be quite CPU intensive, many modern CPUs provide hardware acceleration instructions that significantly improve encryption performance. These are known as “Advanced Encryption Standard New Instructions” (AES-NI) operations. (https://en.wikipedia.org/wiki/AES_instruction_set)

SvSAN Data Encryption uses the OpenSSL library and the XTS-AES-256 cipher, which in turn uses the AES-NI hardware acceleration instructions if they are present and enabled in the CPU. The AES-NI instructions may need to be enabled in the server BIOS.
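
On a Linux host you can verify AES-NI availability by looking for the `aes` CPU flag; a quick, Linux-specific Python check (illustrative only):

```python
def cpu_has_aes_ni(cpuinfo: str = "/proc/cpuinfo") -> bool:
    """Return True if the Linux kernel reports the 'aes' (AES-NI) CPU flag."""
    with open(cpuinfo) as f:
        return any(line.startswith("flags") and "aes" in line.split()
                   for line in f)

print(cpu_has_aes_ni())
```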

Yes, it will be possible to encrypt an existing volume with data stored on it.

In the initial release of the SvSAN Data Encryption feature, the volume will need to be taken offline. However, online encryption of volumes is planned for a future release.

Yes. Changing the encryption keys will be possible. However, in the initial release of the SvSAN Data Encryption feature, this will have to be performed with the volume offline.

Online re-keying of volumes is on the roadmap and scheduled for a future release.

There are many variables that determine the impact on performance when using encryption, including:

  • How “fast” is the CPU?
  • Does the CPU support AES-NI?
    (Most modern processors support AES-NI and newer generations of CPUs have made performance improvements for these instructions over the previous generation, making them faster to perform encryption calculations.)
  • How fast are the underlying disks?
  • How much I/O is being generated, and how much data?

Therefore, the answer about performance impact is “it depends” – it will be specific to each customer environment.
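
One practical way to answer it for a given server is to measure raw single-threaded XTS-AES-256 throughput, for example with the `cryptography` package (a rough indicator only; real SvSAN I/O paths involve much more than the cipher):

```python
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, tweak = os.urandom(64), os.urandom(16)
encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
data = os.urandom(64 * 1024 * 1024)        # 64 MiB of sample data

start = time.perf_counter()
encryptor.update(data)
elapsed = time.perf_counter() - start
print(f"~{len(data) / elapsed / 1e6:.0f} MB/s single-threaded XTS-AES-256")
```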

Yes, deleting the cryptographic keys will instantly make the data on a volume inaccessible.

 

SvSAN with VMware vSphere

SvSAN supports VMware vSphere 5.5, 6.0, 6.5 and 6.7. The latest version of SvSAN, version 6.2, supports VMware vSphere 6.5 and 6.7.

For full details on hypervisor version compatibility please refer to the SvSAN data sheet.

SvSAN integration services are installed on either a Windows or Linux vCenter server. The plugin provides virtual SAN management via wizard-based deployment and ongoing administrative tasks, directly within vCenter on premises or in a centralized location.

The StorMagic dashboard within the vCenter displays traffic light based health status for all VSAs managed by the vCenter and provides the capability to perform VSA firmware upgrades.

The SvSAN VSA VM presents a console available via the hypervisor tools to enable basic management tasks and ensure network connectivity.

VSAs are managed via the StorMagic vCenter plugin. Alongside this, each VSA presents a web interface that can be accessed from any web browser using the VSA IP address or hostname.

SvSAN requires a VMware vSphere hypervisor and will run with or without vCenter. SvSAN can provide integrated vSphere management, if a vCenter server is available.

If a vCenter server is not available, SvSAN can be managed via a web interface and the hosts.

Yes, SvSAN fully supports all TCP/IP networking with architectures utilizing subnets or VLANs to segregate traffic types. Accessing the plugin while using VLANs requires the management interfaces to be configured between the vCenter and appliances. For example, you could have one VLAN containing the vCenter, Service Console and an SvSAN management interface, and another VLAN containing a VMkernel interface and another SvSAN interface, providing a dedicated iSCSI VLAN.

Yes, it is possible to perform a staggered, non-disruptive upgrade of both host and virtual appliance.

Running a virtual vCenter is supported but dependent on configuration. Please contact the StorMagic support team for more information at [email protected].

 

SvSAN with Microsoft Hyper-V

SvSAN supports Windows Server and Hyper-V Server 2012, 2012 R2 and 2016. The latest version of SvSAN, version 6.2, supports Windows Server and Hyper-V Server 2016.

For full details on hypervisor version compatibility please refer to the SvSAN data sheet.

Yes, there is a StorMagic SCOM management pack that provides alerts and system status.

For more information please refer to the SCOM solution brief.