Microsoft NLB in VMware: alternative to multicast and unicast with the SafeKit software
Evidian SafeKit
Microsoft NLB multicast mode
As explained in the VMware knowledge base article on network load balancing (NLB) multicast mode configuration, you must manually configure static ARP resolution on the switch or router for each port that connects to the cluster. Deploying Microsoft NLB multicast mode in an unknown network environment can therefore prove complex and error-prone.
Microsoft NLB unicast mode
With Microsoft NLB unicast mode, you must configure the ESXi/ESX host so that it does not send RARP packets when any of its virtual machines is powered on (typically by setting the Notify Switches option to No on the relevant vSwitch or port group). This is why Microsoft NLB does not work properly in unicast mode with VMware.
Alternative with Evidian SafeKit
The SafeKit virtual IP address requires no special network configuration, so network load balancing can run in any environment. This is an important feature when the solution must be deployed in an unknown infrastructure: unknown switches or routers, physical servers or virtual servers.
How does the SafeKit farm cluster work?
Virtual IP address in a farm cluster
In the figure above, the application runs on 3 servers (3 is an example; it can be 2 or more). Users are connected to a virtual IP address.
The virtual IP address is configured locally on each server in the farm cluster.
The input traffic to the virtual IP address is received by all the servers and split among them by a network filter inside each server's kernel.
SafeKit detects hardware and software failures, reconfigures network filters in the event of a failure, and offers configurable application checkers and recovery scripts.
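To make the checker-and-recovery idea concrete, below is a minimal Python sketch of the concept. It is an illustration only, not SafeKit's implementation: the port number, the polling interval and the restart_app.sh recovery script are hypothetical placeholders.

```python
import socket
import subprocess
import time

CHECK_PERIOD_S = 5     # hypothetical polling interval
SERVICE_PORT = 8080    # hypothetical local application port

def application_alive(port: int) -> bool:
    """Probe the local application with a TCP connect, as a checker would."""
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=2):
            return True
    except OSError:
        return False

def main() -> None:
    while True:
        if not application_alive(SERVICE_PORT):
            # Recovery action: restart the application; a real cluster would
            # also remove the node from the load-balancing filters if the
            # restart keeps failing (hypothetical script name).
            subprocess.run(["./restart_app.sh"], check=False)
        time.sleep(CHECK_PERIOD_S)

if __name__ == "__main__":
    main()
```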
Load balancing in a network filter
The network load balancing algorithm inside the network filter is based on the identity of the client packets (client IP address, client TCP port). Depending on this identity, the filter on exactly one server accepts the packet; the filters on the other servers reject it.
Once a packet is accepted by the filter on a server, only the CPU and memory of this server are used by the application that responds to the request of the client. The output messages are sent directly from the application server to the client.
If a server fails, the farm heartbeat protocol reconfigures the filters in the network load balancing cluster to re-balance the traffic on the remaining available servers.
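The behavior described above can be illustrated with a short Python sketch. This is a conceptual model, not SafeKit's kernel filter: the server names, the choice of SHA-256 and the client key are illustrative assumptions. Because every server evaluates the same deterministic rule on the same packet identity, exactly one server accepts each packet, and dropping a failed server from the list re-balances the keys without any extra coordination.

```python
import hashlib

def owner(client_key: str, live_servers: list[str]) -> str:
    """Deterministically map a client identity to exactly one live server."""
    digest = hashlib.sha256(client_key.encode()).digest()
    return live_servers[int.from_bytes(digest[:4], "big") % len(live_servers)]

def accepts(my_name: str, client_key: str, live_servers: list[str]) -> bool:
    """Each server runs the same test; only the owner accepts the packet."""
    return owner(client_key, live_servers) == my_name

servers = ["server1", "server2", "server3"]
key = "192.0.2.10"  # client IP address used as the load-balancing identity

print(owner(key, servers))  # the single server that accepts this client
# If server2 fails, the heartbeat protocol would shrink the list; the same
# rule then spreads the keys over the two survivors:
print(owner(key, [s for s in servers if s != "server2"]))
```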
Stateful or stateless applications
With a stateful application, there is session affinity: the same client must be connected to the same server across multiple TCP sessions to retrieve its context on the server. In this case, the SafeKit load balancing rule is configured on the client IP address, so the same client is always directed to the same server, while different clients are distributed across the different servers in the farm.
With a stateless application, there is no session affinity: the same client can be connected to different servers in the farm across multiple TCP sessions, because no context is stored locally on a server from one session to the next. In this case, the SafeKit load balancing rule is configured on the identity of the client TCP session. This configuration gives the best distribution of sessions across servers, but it requires a TCP service without session affinity.
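The difference between the two rules comes down to which fields feed the load-balancing identity. A minimal sketch, reusing the idea of the previous example (the field names are illustrative):

```python
def lb_key(client_ip: str, client_port: int, stateful: bool) -> str:
    """Stateful rule: hash on the client IP only, so all TCP sessions of a
    client share one key and land on one server (session affinity).
    Stateless rule: hash on IP and port, so each TCP session gets its own
    key and sessions spread over the whole farm."""
    return client_ip if stateful else f"{client_ip}:{client_port}"

# Two TCP sessions from the same client:
print(lb_key("192.0.2.10", 50001, stateful=True))   # "192.0.2.10"
print(lb_key("192.0.2.10", 50002, stateful=True))   # same key -> same server
print(lb_key("192.0.2.10", 50001, stateful=False))  # "192.0.2.10:50001"
print(lb_key("192.0.2.10", 50002, stateful=False))  # different key -> may go
                                                    # to a different server
```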
Comparison of SafeKit with Traditional High Availability (HA) Clusters
How does SafeKit compare to traditional High Availability (HA) cluster solutions?
| Solutions | Complexity | Comments |
|---|---|---|
| Failover Cluster (Microsoft) | High | Specific Storage (shared storage, SAN) |
| Virtualization (VMware HA) | High | Specific Storage (shared storage, SAN, vSAN) |
| SQL Always-On (Microsoft) | High | Only SQL is redundant, requires SQL Enterprise Edition |
| Evidian SafeKit | Low | Simplest, generic and software-only. Unsuitable for large data replication. |
SafeKit's Advantage in Application Redundancy
SafeKit achieves its low-complexity High Availability through a simple, software-based mirroring mechanism that eliminates the need for expensive, dedicated hardware like a SAN (Storage Area Network). This makes it a highly accessible solution for quickly implementing application redundancy without complex infrastructure changes.

Architectural Differentiators: SafeKit Software-Defined vs. Hardware HA Clusters
Which High Availability Architecture Is Right for You: SafeKit Software Clustering or Traditional Hardware Clustering?
SafeKit differs from traditional HA clusters along the following architectural axes:

- SafeKit software clustering vs. hardware clustering
- SafeKit shared-nothing cluster vs. shared-disk cluster
- Application high availability vs. virtual machine high availability
- SafeKit high availability vs. fault tolerance
- SafeKit synchronous replication vs. asynchronous replication
- SafeKit byte-level file replication vs. block-level disk replication
Summary and Key Takeaways for High Availability
The architectural choice between software clustering (like SafeKit) and hardware clustering (traditional shared-disk/SAN) significantly impacts deployment complexity, operational costs, and recovery effectiveness. The key takeaway from this comparison is the shift toward shared-nothing, application-level HA, which prioritizes rapid application recovery (low RTO) and deployment flexibility (even across remote sites), often resulting in a more streamlined and resilient solution than highly complex, hardware-dependent cluster configurations. For maximum business continuity with simplified management, evaluating a software-based approach is essential.

Key Differentiators of the SafeKit Mirror Cluster
What are the key features and advantages of the SafeKit Mirror Cluster for High Availability (HA)?
The key differentiators of the SafeKit mirror cluster:

- 3 products in 1: replication, monitoring and failover/failback bundled in a single software package.
- Very simple configuration: no SAN and no enterprise OS edition required.
- Synchronous replication: zero data loss in the event of a failure (RPO = 0).
- Fully automated failback: the failed server is resynchronized and reintegrated without manual intervention.
- Replication of any type of data.
- File replication vs. disk replication: byte-level file replication instead of block-level disk replication.
- File replication vs. shared disk: no shared disk required.
- Remote sites and virtual IP address: the cluster can span remote sites.
- Quorum and split brain: handling of quorum and split-brain situations.
- Active/active cluster.
- Uniform high availability solution.
- RTO / RPO: recovery time of around 1 minute or less (RTO) and zero data loss (RPO = 0).
Summary of SafeKit Mirror Cluster Benefits for High Availability
In summary, the SafeKit mirror cluster delivers a compelling high availability solution through its shared-nothing architecture and synchronous file replication. By offering a unified platform that bundles replication, monitoring, and failover/failback mechanisms, it successfully addresses critical enterprise needs like zero data loss (RPO = 0) and fast Recovery Time Objectives (RTO) of around 1 minute or less. Its simplicity, lack of dependency on expensive SANs or enterprise OS editions, and ability to handle remote sites and active-active configurations make it a highly cost-effective and flexible alternative to complex traditional cluster solutions.

Key Differentiators of the SafeKit Farm Cluster
What are the key differentiators of the SafeKit Farm Cluster for load balancing and failover?
The key differentiators of the SafeKit farm cluster:

- No load balancer, dedicated proxy servers or special multicast Ethernet address: load balancing and failover are embedded in the application servers themselves, behind a standard virtual IP address.
- All clustering features: load balancing combined with failure detection, configurable application checkers and recovery scripts.
- Remote sites and virtual IP address: the farm can span remote sites.
- Uniform high availability solution: the farm cluster can be combined with the mirror cluster for full N-tier high availability.
Summary of SafeKit Farm Cluster Benefits for Load Balancing
In conclusion, the SafeKit Farm Cluster provides a unified, software-based approach to load balancing and high availability that dramatically lowers complexity and cost. By embedding load balancing and failover directly into the application server layer using a standard virtual IP address, it avoids the need for external network hardware (load balancers or proxies) and specialized multicast configurations. This integrated approach, coupled with its ability to combine with the mirror cluster for full N-tier HA, makes SafeKit a uniquely simple and comprehensive solution for achieving scalable and resilient application delivery across diverse environments.

SafeKit HA Comparison: Virtual Machine Level vs. Application Level
What are the fundamental differences between SafeKit's VM-based and Application-based High Availability?
| Comparison Feature | VM HA with SafeKit Hyper-V or KVM Module | Application HA with SafeKit Application Modules |
|---|---|---|
| Failover Scope | SafeKit inside 2 hypervisors: replication and failover of the full VM. | SafeKit inside 2 virtual or physical machines: replication and failover at the application level. |
| Data Replicated | Replicates more data (Application + Operating System). | Replicates only application data, leading to smaller data volumes. |
| Recovery Process & Speed (RTO) | Reboot of VM on hypervisor 2 if hypervisor 1 crashes. Recovery time depends on the OS reboot. VM checker and failover mechanism. | Quick recovery time with restart of App on OS2 if server 1 crashes. Typically around 1 minute or less (low RTO). Application checker and software failover. |
| Configuration | Generic solution for any application / OS running in the VM. | Requires a technical understanding of the application itself. |
| Platform Compatibility | Works with Windows/Hyper-V and Linux/KVM but is not compatible with VMware. | Platform agnostic; works with physical or virtual machines, cloud infrastructure, and any hypervisor, including VMware. |
Final Recommendation: VM HA for Generality vs. Application HA for Low RTO
In summary, choosing between SafeKit's VM HA and Application HA depends on the priority. VM HA is a generic solution ideal for environments standardized on Hyper-V or KVM, offering redundancy for the entire operating system stack, though with a potentially longer Recovery Time Objective (RTO) due to the VM reboot. Conversely, Application HA provides superior agility and platform agnosticism, including support for VMware, resulting in a much lower RTO by focusing solely on critical application data replication and restart. For the lowest RTO and maximum deployment flexibility, Application HA is the optimal SafeKit choice.

VM High Availability: SafeKit's SAN-Less vs. Hyper-V/VMware HA
What is the difference between SafeKit VM High Availability and Traditional Shared Storage Clusters (Hyper-V Cluster and VMware HA)?
| SafeKit (with Hyper-V or KVM Module) | Microsoft Hyper-V Cluster & VMware HA (Traditional) |
|---|---|
| No shared disk required - uses synchronous real-time replication instead, ensuring no data loss. | Requires shared disk and a specific external disk bay (SAN). |
| Supports Remote Sites without requiring SAN replication across locations. | Remote sites typically require replicating disk bays across a complex SAN setup. |
| No specific IT skills are required to configure the system (using the hyperv.safe and kvm.safe modules). | Requires specific, high-level IT skills to configure the cluster and SAN infrastructure. |
| Note that the Hyper-V/SafeKit and KVM/SafeKit solutions are limited to replication and failover of 32 VMs. | Note that the Hyper-V built-in replication (Hyper-V Replica) does not qualify as a high availability solution. This is because the replication is asynchronous, which can result in data loss during failures, and it lacks automatic failover and failback capabilities. |
Conclusion: Why SafeKit's SAN-Less HA is a Modern Alternative
The core difference lies in the reliance on shared storage. SafeKit's approach uses synchronous data replication directly between servers, ensuring a zero data loss (RPO = 0) scenario and dramatically simplifying deployment by eliminating the need for expensive SAN hardware and specialized storage skills. While traditional clusters provide robust features, the high complexity, high cost, and shared-disk dependency make them less flexible, especially for remote or multi-site deployments. For organizations prioritizing cost-effectiveness, simplicity, and fast deployment of VM High Availability without investing in complex storage infrastructure, SafeKit's SAN-less model provides a superior and modern alternative.

SafeKit High Availability (HA) Solutions: Quick Installation Guides for Windows and Linux Clusters
This table presents the SafeKit High Availability (HA) solutions, categorized by application and operating environment (Databases, Web Servers, VMs, Cloud). Identify the specific pre‑configured .safe module (e.g., mirror.safe, farm.safe, and others) required for real‑time replication, load balancing, and automatic failover of critical business applications on Windows or Linux. Simplify your HA cluster setup with direct links to quick installation guides, each including a download link for the corresponding .safe module.
A SafeKit .safe module is essentially a pre‑configured High Availability (HA) template that defines how a specific application will be clustered and protected by the SafeKit software. In practice, it contains a configuration file (userconfig.xml) and restart scripts.
| Application Category | HA Scenario (High Availability) | Technology / Product | .safe Module | Installation Guide |
|---|---|---|---|---|
| New Applications | Real-Time Replication and Failover | Windows | mirror.safe | View Guide: Windows Replication |
| New Applications | Real-Time Replication and Failover | Linux | mirror.safe | View Guide: Linux Replication |
| New Applications | Network Load Balancing and Failover | Windows | farm.safe | View Guide: Windows Load Balancing |
| New Applications | Network Load Balancing and Failover | Linux | farm.safe | View Guide: Linux Load Balancing |
| Databases | Replication and Failover | Microsoft SQL Server | sqlserver.safe | View Guide: SQL Server Cluster |
| Databases | Replication and Failover | PostgreSQL | postgresql.safe | View Guide: PostgreSQL Replication |
| Databases | Replication and Failover | MySQL | mysql.safe | View Guide: MySQL Cluster |
| Databases | Replication and Failover | Oracle | oracle.safe | View Guide: Oracle Failover Cluster |
| Databases | Replication and Failover | Firebird | firebird.safe | View Guide: Firebird HA |
| Web Servers | Load Balancing and Failover | Apache | apache_farm.safe | View Guide: Apache Load Balancing |
| Web Servers | Load Balancing and Failover | IIS | iis_farm.safe | View Guide: IIS Load Balancing |
| Web Servers | Load Balancing and Failover | NGINX | farm.safe | View Guide: NGINX Load Balancing |
| VMs and Containers | Replication and Failover | Hyper-V | hyperv.safe | View Guide: Hyper-V VM Replication |
| VMs and Containers | Replication and Failover | KVM | kvm.safe | View Guide: KVM VM Replication |
| VMs and Containers | Replication and Failover | Docker | mirror.safe | View Guide: Docker Container Failover |
| VMs and Containers | Replication and Failover | Podman | mirror.safe | View Guide: Podman Container Failover |
| VMs and Containers | Replication and Failover | Kubernetes K3S | k3s.safe | View Guide: Kubernetes K3S Replication |
| AWS Cloud | Real-Time Replication and Failover | AWS | mirror.safe | View Guide: AWS Replication Cluster |
| AWS Cloud | Network Load Balancing and Failover | AWS | farm.safe | View Guide: AWS Load Balancing Cluster |
| GCP Cloud | Real-Time Replication and Failover | GCP | mirror.safe | View Guide: GCP Replication Cluster |
| GCP Cloud | Network Load Balancing and Failover | GCP | farm.safe | View Guide: GCP Load Balancing Cluster |
| Azure Cloud | Real-Time Replication and Failover | Azure | mirror.safe | View Guide: Azure Replication Cluster |
| Azure Cloud | Network Load Balancing and Failover | Azure | farm.safe | View Guide: Azure Load Balancing Cluster |
| Physical Security / VMS | Real-Time Replication and Failover | Milestone XProtect | milestone.safe | View Guide: Milestone XProtect Failover |
| Physical Security / VMS | Real-Time Replication and Failover | Nedap AEOS | nedap.safe | View Guide: Nedap AEOS Failover |
| Physical Security / VMS | Real-Time Replication and Failover | Genetec (SQL Server) | sqlserver.safe | View Guide: Genetec SQL Failover |
| Physical Security / VMS | Real-Time Replication and Failover | Bosch AMS (Hyper-V) | hyperv.safe | View Guide: Bosch AMS Hyper-V Failover |
| Physical Security / VMS | Real-Time Replication and Failover | Bosch BIS (Hyper-V) | hyperv.safe | View Guide: Bosch BIS Hyper-V Failover |
| Physical Security / VMS | Real-Time Replication and Failover | Bosch BVMS (Hyper-V) | hyperv.safe | View Guide: Bosch BVMS Hyper-V Failover |
| Physical Security / VMS | Real-Time Replication and Failover | Hanwha Vision (Hyper-V) | hyperv.safe | View Guide: Hanwha Vision Hyper-V Failover |
| Physical Security / VMS | Real-Time Replication and Failover | Hanwha Wisenet (Hyper-V) | hyperv.safe | View Guide: Hanwha Wisenet Hyper-V Failover |
| Siemens Products | Real-Time Replication and Failover | Siemens Siveillance suite (Hyper-V) | hyperv.safe | View Guide: Siemens Siveillance HA |
| Siemens Products | Real-Time Replication and Failover | Siemens Desigo CC (Hyper-V) | hyperv.safe | View Guide: Siemens Desigo CC HA |
| Siemens Products | Real-Time Replication and Failover | Siemens Siveillance VMS | SiveillanceVMS.safe | View Guide: Siemens Siveillance VMS HA |
| Siemens Products | Real-Time Replication and Failover | Siemens SiPass (Hyper-V) | hyperv.safe | View Guide: Siemens SiPass HA |
| Siemens Products | Real-Time Replication and Failover | Siemens SIPORT (Hyper-V) | hyperv.safe | View Guide: Siemens SIPORT HA |
| Siemens Products | Real-Time Replication and Failover | Siemens SIMATIC PCS 7 (Hyper-V) | hyperv.safe | View Guide: SIMATIC PCS 7 HA |
| Siemens Products | Real-Time Replication and Failover | Siemens SIMATIC WinCC (Hyper-V) | hyperv.safe | View Guide: SIMATIC WinCC HA |












