The StorMagic SvHCI FAQ answers a wide variety of questions about StorMagic SvHCI, its deployment, and its features.
In addition to this FAQ, complete documentation covering the deployment of SvHCI is available on the SvHCI manual page.
Have you read the SvHCI data sheet and the SvSAN Technical Overview white paper? Both are packed with useful information about SvHCI, its requirements, and its capabilities.
If you cannot find an appropriate answer within the SvHCI FAQ, you can contact the StorMagic team at [email protected].
Question Categories
Questions are grouped into the following sections: SvHCI Architecture, High Availability, Storage, Networking, Management, Backup & Data Protection, Performance, Pricing & Licensing, Migrating to SvHCI, and Roadmap.
SvHCI Architecture
SvHCI currently supports a maximum of two nodes per cluster. Single-node clusters are also supported. Support for three or more nodes per cluster is on the product roadmap. Furthermore, SvHCI can present storage as an iSCSI target, so any number of additional compute nodes can access the SvHCI cluster.
Yes. It is a complete virtualization platform for x86-based physical servers.
No, SvHCI is very flexible and supports 2-node clusters built from different servers from multiple manufacturers. Full CPU symmetry is not required, and processors can differ slightly. However, SvHCI does require CPUs from the same product family, or CPU clock speeds that have been set to match.
SvHCI supports clusters with nodes that contain different types of storage media; however, care should be taken to ensure that performance is balanced between the nodes.
The complete index of compatible hardware for SvHCI can be found in the Hardware Compatibility List (HCL): https://stormagic.com/doc/svhci/1-3-0/en/Content/SvHCI/supported-hardware.htm
We are working hard to test as many combinations and permutations of hardware, controllers, and drivers as possible. If there are specific components or drivers that you would like to see supported, please contact your StorMagic representative, who will pass on your feedback and add it to our testing and validation effort.
SvHCI 1.3 supports up to 25 virtual machines (VMs) per cluster if all of them require high availability (HA). If HA is not required, then each node can have up to 25 VMs (a total of 50 in a two-node cluster). For environments with a mix of HA and non-HA VMs, you can use this formula:
HA = # of VMs that require high availability
XHA = # of VMs that do not require high availability
2*HA + XHA ≤ 50
SvHCI 2.0 will increase the limit from 25 to 50 VMs per node, which means 50 HA-protected VMs per cluster, or 100 VMs per cluster without HA (the above formula then becomes: 2*HA + XHA ≤ 100).
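As a purely illustrative worked example of the 1.3 formula (the VM counts below are hypothetical, not sizing recommendations): a cluster with 20 HA VMs and 10 non-HA VMs is supported because 2*20 + 10 = 50 ≤ 50, whereas adding five more non-HA VMs (2*20 + 15 = 55) would exceed the limit. The same check can be expressed as a small sketch:

```python
# Illustrative check of the SvHCI 1.3 per-cluster VM limit (2*HA + XHA <= 50).
# The VM counts used below are hypothetical examples, not StorMagic guidance.

def fits_vm_limit(ha_vms: int, non_ha_vms: int, limit: int = 50) -> bool:
    """Return True if the mix of HA and non-HA VMs fits within the cluster limit."""
    return 2 * ha_vms + non_ha_vms <= limit

print(fits_vm_limit(20, 10))             # 2*20 + 10 = 50 -> True (supported in 1.3)
print(fits_vm_limit(20, 15))             # 2*20 + 15 = 55 -> False (exceeds the 1.3 limit)
print(fits_vm_limit(20, 15, limit=100))  # True under the planned SvHCI 2.0 limit
```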
Note: Only VMs that are powered on at runtime count towards the limit; powered-off VMs do not. For further details, see the SvHCI Technical Product Documentation: https://stormagic.com/doc/svhci/1-3-0/en/Content/SvHCI/scalability.htm
SvHCI 1.3 supports the following list of guest operating systems: https://stormagic.com/doc/svhci/1-3-0/en/Content/supported-guest-os.htm
No. We are considering adding this capability to the product as part of our long-term product roadmap.
No, SvHCI contains its own hypervisor based on KVM/QEMU virtualization. If you are happy with your existing hypervisor and are only looking for storage virtualization, consider StorMagic SvSAN, which integrates with ESXi and Hyper-V as a virtual SAN.
Yes, the SvHCI witness can run on a Windows VM or even a Windows laptop or PC. The witness is extremely lightweight and can run almost anywhere. For more information on the witness system requirements refer to the SvHCI Data Sheet: https://stormagic.com/resources/data-sheets/svhci-data-sheet/
Yes. When there’s a new software update available, it can be installed from the user interface and the whole SvHCI stack will be updated at the same time, non-disruptively.
High Availability
StorMagic does not guarantee a specific level of availability for our software. However, we design our software for 100% uptime and have many examples of customers going years without a single instance of downtime.
SvHCI’s mirroring synchronizes all data between the two nodes so that it is always present on both servers. If a server fails, its VM data and datastores are already on the other server, so there is no wait to bring that data back online.
Regarding the restart of VMs on the surviving node, there is a small delay of around 30 seconds. This compares favorably to VMware and Microsoft hypervisors, which take around 2.5 minutes and 4 minutes respectively to restart VMs on another node in the event of a failure.
SvHCI is an active-active 2-node cluster. During configuration, VMs can be selected to run on node A or node B to create a balanced system, and in the event of a node going offline, all the VMs from the failed node will migrate over to the surviving node.
Separating the two servers in an SvHCI cluster within the same rack or datacenter is possible today. Support for greater distances (stretched or “metro” clusters) for DR planning is coming in SvHCI 2.0.
No. Live migration of VMs works with all servers on the SvHCI Hardware Compatibility List (HCL) which is available at the following link: https://stormagic.com/doc/svhci/1-3-0/en/Content/SvHCI/supported-hardware.htm
Please see the Technical Product Documentation section for this topic: https://stormagic.com/doc/svhci/1-3-0/en/Content/SvHCI/add-remove-hardware.htm
Storage
SvHCI supports any physical hardware RAID controller for servers on the SvHCI HCL and therefore supports any RAID configuration through that controller. If there is no physical hardware RAID controller available, SvHCI offers software-enabled RAID 0, 1 and 10. In addition, SvHCI 1.3 introduced support for Intel Virtual RAID on CPU (VROC) embedded storage RAID controllers for RAID 0, 1 and 10.
No, SvHCI does not support connections to external storage at present, though it is on the product development roadmap. However, the reverse is possible: SvHCI can present an iSCSI target that can be used by other compute nodes.
No, external storage cannot be added to SvHCI through an iSCSI connection at present, though it is on the product development roadmap. However, the reverse is possible: SvHCI can present an iSCSI target that can be used by other compute nodes.
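For illustration only, the sketch below shows how a Linux compute node might consume an iSCSI target presented by an SvHCI cluster, using the standard open-iscsi client tool (iscsiadm) driven from Python. The portal address and target IQN are placeholders, and these are generic iSCSI client steps rather than StorMagic-documented procedures.

```python
# Hypothetical sketch: connect a Linux compute node to an iSCSI target exposed
# by an SvHCI cluster, using the standard open-iscsi client tools.
# The portal IP and target IQN below are placeholders, not real SvHCI values.
import subprocess

PORTAL = "192.168.10.20:3260"               # assumed iSCSI portal address
TARGET_IQN = "iqn.2024-01.example:target0"  # assumed target IQN

# Discover the targets advertised by the portal.
subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
    check=True,
)

# Log in to the discovered target; the block device then appears on the host.
subprocess.run(
    ["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", PORTAL, "--login"],
    check=True,
)
```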
No, SvHCI cannot currently provide an NFS connection to external storage. It is being considered as a future roadmap development.
No, SvHCI does not currently support 4K block sizes. 4K block support is planned for release in 2025.
Networking
The minimum network connection to an SvHCI node is a single 1GbE port. However, the higher the network bandwidth between SvHCI nodes, the better. SvHCI supports network interface cards and Ethernet speeds up to 100GbE. For full system requirements, refer to the SvHCI Data Sheet: https://stormagic.com/resources/data-sheets/svhci-data-sheet/
Yes, SvHCI supports back-to-back network cabling.
Management
Yes, SvHCI is managed through a web-based graphical user interface known as the SvHCI Virtualization Manager. This allows users to manage SvHCI at the node or cluster level.
Note: StorMagic’s centralized fleet management product, Edge Control, will be integrated with SvHCI in 2025, allowing management of an organization’s entire fleet of SvHCI nodes and clusters from a single web console.
The SvHCI Virtualization Manager is included and installed on each server running SvHCI. To access the user interface, simply open a web browser and enter the IP address for the specific SvHCI host.
The current SvHCI management functionality is per node or per cluster. Each can be managed from a separate instance of the SvHCI Virtualization Manager.
Note: StorMagic’s centralized fleet management product, Edge Control, will be integrated with SvHCI in 2025, allowing management of an organization’s entire fleet of SvHCI nodes and clusters from a single web console.
The SvHCI Virtualization Manager management console is part of the SvHCI software stack running on each server. The Virtualization Manager process can fail, but it is separate from the core server virtualization service, which may continue running even if the Virtualization Manager is not accessible. However, in a single-node scenario, if the entire server went down, the Virtualization Manager, virtualization service, and VMs would all fail.
No, not in the current version of SvHCI. However, security enhancements are being prioritized as part of the product roadmap.
No, this is not supported in SvHCI at present. It is an item being considered for the development roadmap.
Backup & Data Protection
SvHCI supports agent-based backup with any third-party backup software product. This is achieved by installing a backup agent in any VM that must be backed up. Storing the backup in another storage location is the responsibility of the user. In the event of a disaster, the user can recover the VM from backup.
No, SvHCI does not support agentless backup at this time. Agentless backup integration requires the backup software vendors to work directly with the hypervisor vendor’s engineering team. To date, the only hypervisors that have agentless integration with major backup vendors are from VMware, Microsoft, and Nutanix. This is because they have the largest market share and the largest number of users, so backup vendors prioritize these virtualization platforms over others. All of the other server virtualization vendors offer agent-based backup while they try to get on the roadmaps of the backup software vendors. We highly encourage users to have a full understanding of the pros and cons of the agent-based approach to backup. Read this blog post for more information: https://stormagic.com/company/blog/agent-vs-agentless-backup/
No, the current version of SvHCI (1.3) does not support VM-level snapshots. However, this feature will be included as part of the next release of SvHCI, version 2.0.
Performance
SvHCI currently commits the memory allocated to a VM. This is the safest option, but we understand that competitors can overcommit memory, and therefore it is a development that we are considering as part of our product roadmap.
No, there are no restrictions. SvHCI allows you to run any application inside the guest OS running on a VM. However, we recommend sizing the system environment and its performance correctly for your needs.
No, it is not currently supported in SvHCI, but is on our product development roadmap.
No, SvHCI 1.3 does not have this capability. It is on our roadmap for future versions.
No, SvHCI does not have DRS, but it is part of future roadmap plans.
Pricing & Licensing
SvHCI is licensed by the node (virtualized server) and the amount of storage needed. SvHCI requires one subscription license that enables the hypervisor (virtual networking and management) and one for the storage (2TB, 6TB, 12TB, 24TB, 48TB, and unlimited TB options).
Yes. SvHCI pricing includes the hypervisor and support for the entire software stack, all as a subscription.
No, the licensing model is not based on cores; it is based on nodes and storage capacity, either 2TB, 6TB, 12TB, 24TB, 48TB, or unlimited TB.
SvHCI prices are per node, but high availability (HA) is included as a feature within the software. Purchasing two SvHCI licenses therefore allows the creation of a 2-node high availability cluster.
No. The management GUI for the cluster, i.e. the Virtualization Manager is included in the main SvHCI subscription license SKU.
While SvSAN software is an integral part of SvHCI, it is installed differently than when used with a third-party hypervisor (Hyper-V, VMware ESXi). The end user will need to upgrade their SvSAN perpetual license to an SvSAN subscription license and perform a new bare-metal install of the SvHCI stack.
Migrating to SvHCI
Yes, users can export VMs from any existing virtualization platform and import them to SvHCI. Users must first convert the VMs into a raw disk file and then import them via the “Upload disk image” feature under “Virtual Machines”. For environments with a large number of clusters, StorMagic Technical Services can assist by scripting the VM migration. For more information, refer to the technical product documentation: https://stormagic.com/doc/svhci/1-3-0/en/Content/vm-management.htm#Importing_virtual_machines
Note: A VM Import Utility that will automate and simplify the process of importing VMs from VMware environments to SvHCI is in development and expected to be available in 2025.
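In the meantime, the conversion step can be performed manually. The sketch below assumes the source disk is a VMware VMDK and that the qemu-img utility is available on the machine performing the conversion; the file names are placeholders, and the resulting raw image would then be uploaded via the “Upload disk image” feature described above.

```python
# Hypothetical sketch: convert a VMware VMDK disk to a raw image suitable for
# uploading to SvHCI via "Upload disk image". Assumes qemu-img is installed;
# the file names are placeholders.
import subprocess

source_disk = "exported-vm-disk.vmdk"  # disk exported from the source platform
raw_disk = "exported-vm-disk.raw"      # raw image to upload to SvHCI

subprocess.run(
    ["qemu-img", "convert", "-f", "vmdk", "-O", "raw", source_disk, raw_disk],
    check=True,
)
print(f"Converted {source_disk} -> {raw_disk}")
```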
No, existing VMs must be converted into a raw disk file and uploaded into SvHCI.
Note: A VM Import Utility that will automate and simplify the process of importing VMs from VMware environments to SvHCI is in development and expected to be available in 2025.
Roadmap
No. Like most technology vendors, StorMagic does not make the SvHCI product roadmap publicly available. It is available under NDA and shared only on a case-by-case basis with customers, prospects, and channel or alliance partners. Please contact your StorMagic sales representative to arrange a discussion about the SvHCI roadmap.
Yes, StorMagic operates an ongoing Beta Program for SvHCI that allows new or existing StorMagic customers, channel partners and OEMs to evaluate upcoming product features and provide feedback to the development team. Program participants receive access to time-limited 30-day SvHCI beta software licenses that are fully-featured with unlimited TB storage capacity. The beta licenses must only be used for pre-production, lab deployments with test workloads. Deploying the software in production environments and/or running production workloads on the software will not be supported by StorMagic. To apply for the Beta Program, visit https://stormagic.com/beta