Byte-level file replication vs block-level disk replication in a high availability cluster
Overview
This article explores the pros and cons of byte-level file replication vs block-level disk replication in a high availability cluster: the volume of replicated data, the impact on application data organization, the recovery time, and the simplicity of implementation.
The comparative tables below detail the byte-level file replication implemented by SafeKit, a high availability software product.
What is byte-level file replication?
Byte-level file replication (like with SafeKit) means that only modifications inside files are replicated.
Synchronous replication is required in a high availability cluster to achieve zero data loss in the event of a failure; asynchronous replication is suited to backup solutions.
The volume of replicated data is reduced to the information modified by applications inside their files. No extra data is replicated.
There is no impact on the application's data organization. For instance, if an application keeps its data on the system disk, byte-level file replication still works.
Recovery time (RTO) in the event of a failover is reduced to the application restart time on the secondary server's replicated files.
Finally, the solution is very simple to configure: only the paths of the directories to replicate need to be configured.
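To make this concrete, here is a minimal sketch of the idea in Python. It is an illustration only, not SafeKit's actual implementation: the wire format and the /replicated root directory on the secondary are assumptions.

```python
import struct

def encode_update(path: str, offset: int, data: bytes) -> bytes:
    """Primary side: one message per write, carrying only the modified bytes.
    No file-system metadata (free blocks, access times) is sent."""
    name = path.encode()
    return struct.pack("!QH", offset, len(name)) + name + data

def apply_update(message: bytes, root: str = "/replicated") -> None:
    """Secondary side: replay the write at the same offset in the same file."""
    offset, name_len = struct.unpack_from("!QH", message)
    header = struct.calcsize("!QH")
    name = message[header:header + name_len].decode()
    data = message[header + name_len:]
    with open(f"{root}/{name}", "r+b") as f:  # file assumed to already exist
        f.seek(offset)
        f.write(data)
```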
What is block-level disk replication?
Block-level disk replication (like with DRBD) means that the modified blocks of a dedicated disk are replicated.
The volume of replicated data is not reduced to the information modified by applications. Extra data is replicated, such as the metadata for managing the disk (list of free blocks, file system internal information).
There is a strong impact on the organization of application data: all data must be located on the replicated disk. At best, this requires reconfiguring the application; at worst, it is impossible when some of the data to replicate resides on the system disk, because this disk must remain specific to each server.
The recovery time (RTO) increases with the file system recovery procedure that runs on the replicated disk after a failover.
Finally, the solution is not easy to configure, because specific skills are required to set up a dedicated disk with a file system, and application skills are required to relocate the application data onto the replicated disk.
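For contrast, here is a sketch of the block-level granularity (again an illustration, not DRBD's actual implementation; the 4 KiB block size and the `send` callback are assumptions): every touched block travels in full, and metadata updates dirty additional blocks of their own.

```python
BLOCK_SIZE = 4096  # typical file-system block size; an assumption here

def dirty_blocks(offset: int, length: int) -> range:
    """Block numbers covered by a write of `length` bytes at `offset`."""
    first = offset // BLOCK_SIZE
    last = (offset + length - 1) // BLOCK_SIZE
    return range(first, last + 1)

def replicate_block_write(device: str, offset: int, data: bytes, send) -> None:
    """Ship every touched block in full: a 1-byte change costs a whole block,
    and file-system metadata updates (e.g. last access time) dirty blocks
    of their own, inflating the replicated volume."""
    with open(device, "r+b") as dev:
        dev.seek(offset)
        dev.write(data)
        for block in dirty_blocks(offset, len(data)):
            dev.seek(block * BLOCK_SIZE)
            send(block, dev.read(BLOCK_SIZE))
```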
Pros and cons of byte-level file replication vs block-level disk replication
| Cluster with byte-level file replication | Cluster with block-level disk replication |
|---|---|
| Product | |
| SafeKit on Windows and Linux | Disk replication products such as DRBD |
| Application data organization | |
| No impact on application data organization with SafeKit. Just define the directories to replicate in real time. Even directories inside the system disk can be replicated. | Impact on application data organization. Special configuration of the application is needed to put its data on a replicated disk. Data on the system disk cannot be replicated. |
| Data replication | |
| Synchronous byte-level file replication. Replicates the file modification operations generated by application activity. No metadata is replicated. | Replicates all data modified inside a replicated disk. Application data plus metadata are replicated. For instance, the last access time of a file is replicated (it is modified each time the file is read). |
| Complexity of deployment | |
| No: install the software on 2 servers | Yes: requires specific IT skills to configure the OS and the replicated disk |
| Failover | |
| Just restart the application on the second server. | Remount the file system on the replicated disk, run the file system recovery procedure, then restart the application. |
| Failback | |
| Automatic failback. Resynchronization of the data on the secondary server without stopping the application on the primary server. No application failover while the data is not resynchronized. | Not all products offer the same level of failback features. |
| Quorum and split brain | |
| Application executed on a single server after a network isolation (split brain). Coherent data after a split brain. No need for a third machine, a quorum disk, or a special heartbeat line. | Requires a special quorum disk or a third quorum server to manage split brain. |
| Suited for | |
| Software vendors that want to add a simple high availability option to their application | Enterprises with IT skills in clustering |
| VM HA with the SafeKit Hyper-V or KVM module | Application HA with SafeKit application modules |
|---|---|
| SafeKit inside 2 hypervisors: replication and failover of a full VM | SafeKit inside 2 virtual or physical machines: replication and failover at the application level |
| Replicates more data (application + OS) | Replicates only the application data |
| Reboot of the VM on hypervisor 2 if hypervisor 1 crashes. Recovery time depends on the OS reboot. VM checker and failover (virtual machine unresponsive, crashed, or stopped working). | Quick recovery time with restart of the application on OS 2 if server 1 crashes: around 1 minute or less (see RTO/RPO here). Application checker and software failover. |
| Generic solution for any application / OS | Restart scripts to be written in the application modules |
| Works with Windows/Hyper-V and Linux/KVM, but not with VMware | Platform agnostic: works with physical or virtual machines, cloud infrastructure, and any hypervisor, including VMware |
| SafeKit with the Hyper-V module or the KVM module | Microsoft Hyper-V Cluster & VMware HA |
|---|---|
| No shared disk: synchronous real-time replication instead, with no data loss | Shared disk and a specific external disk bay |
| Remote sites: no SAN required for replication | Remote sites: replicated disk bays across a SAN |
| No specific IT skills to configure the system (with hyperv.safe and kvm.safe) | Specific IT skills to configure the system |
| Note that the Hyper-V/SafeKit and KVM/SafeKit solutions are limited to the replication and failover of 32 VMs. | Note that the Hyper-V built-in replication does not qualify as a high availability solution: the replication is asynchronous, which can result in data loss during failures, and it lacks automatic failover and failback capabilities. |
Evidian SafeKit mirror cluster with real-time file replication and failover
- 3 products in 1
- Very simple configuration
- Synchronous replication
- Fully automated failback
- Replication of any type of data
- File replication vs disk replication
- File replication vs shared disk
- Remote sites and virtual IP address
- Quorum and split brain
- Active/active cluster
- Uniform high availability solution
- RTO / RPO
Evidian SafeKit farm cluster with load balancing and failover
- No load balancer, dedicated proxy servers, or special multicast Ethernet address
- All clustering features
- Remote sites and virtual IP address
- Uniform high availability solution
Step 1. Real-time replication
Server 1 (PRIM) runs the application. Clients are connected to a virtual IP address. SafeKit replicates, in real time, the modifications made inside files through the network.
The replication is synchronous, with no data loss on failure, unlike asynchronous replication.
You just have to configure in SafeKit the names of the directories to replicate. There are no prerequisites on disk organization: the directories may even be located on the system disk.
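The contract behind this step can be sketched as follows (an assumed protocol for illustration, not SafeKit's actual wire format): the application's write completes only after the secondary has acknowledged it.

```python
import socket

ACK = b"\x06"

def synchronous_write(apply_locally, modification: bytes, sock: socket.socket) -> None:
    """The write only completes once the secondary has acknowledged it."""
    apply_locally(modification)   # write on the primary
    sock.sendall(modification)    # ship to the secondary in real time
    if sock.recv(1) != ACK:       # block until the secondary confirms
        raise IOError("secondary did not acknowledge the write")
    # An asynchronous scheme would return before the ACK: faster writes,
    # but modifications in flight are lost if the primary crashes (RPO > 0).
```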
Step 2. Automatic failover
When Server 1 fails, Server 2 takes over. SafeKit switches the virtual IP address and automatically restarts the application on Server 2.
The application finds the files replicated by SafeKit up to date on Server 2. It continues to run on Server 2 by modifying its files locally; these modifications are no longer replicated to Server 1 while it is down.
The failover time is equal to the fault-detection time (30 seconds by default) plus the application start-up time.
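As a worked example of that formula (the 20-second application start-up time is an assumption; measure it for your own application):

```python
fault_detection_s = 30   # SafeKit's default fault-detection time
app_startup_s = 20       # assumed; depends on the application

rto_s = fault_detection_s + app_startup_s
print(f"failover time (RTO) ≈ {rto_s} s")   # ≈ 50 s
```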
Step 3. Automatic failback
Failback involves restarting Server 1 after fixing the problem that caused it to fail.
SafeKit automatically resynchronizes the files, updating only the files modified on Server 2 while Server 1 was halted.
Failback takes place without disturbing the application, which can continue running on Server 2.
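A sketch of what such an incremental resynchronization could look like (illustration only; real products track modified file zones rather than copying whole files):

```python
import os
import shutil

def resynchronize(server2_dir: str, server1_dir: str, failure_time: float) -> None:
    """Copy back only files modified on Server 2 since Server 1 failed
    (failure_time is a Unix timestamp). The application keeps running."""
    for root, _dirs, files in os.walk(server2_dir):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) >= failure_time:
                dst = os.path.join(server1_dir, os.path.relpath(src, server2_dir))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)  # only the modified files travel
```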
Step 4. Back to normal
After reintegration, the files are once again in mirror mode, as in step 1. The system is back in high-availability mode, with the application running on Server 2 and SafeKit replicating file updates to Server 1.
If the administrator wishes the application to run on Server 1, a "swap" command can be executed either manually at an appropriate time, or automatically through configuration.
More information on power outage and network isolation in a cluster.
Why a replication of a few terabytes?
Resynchronization time after a failure (step 3); see the sketch after this list:
- 1 Gb/s network ≈ 3 hours for 1 terabyte.
- 10 Gb/s network ≈ 1 hour for 1 terabyte, or less depending on disk write performance.
Alternative
- For a large volume of data, use external shared storage.
- More expensive, more complex.
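A back-of-the-envelope check of those figures (raw transfer time only; disk write performance and protocol overhead add to it):

```python
def resync_hours(data_tb: float, link_gbps: float) -> float:
    """Hours to push `data_tb` terabytes over a `link_gbps` Gb/s network."""
    bits = data_tb * 1e12 * 8            # terabytes -> bits
    return bits / (link_gbps * 1e9) / 3600

print(resync_hours(1, 1))    # ≈ 2.2 h raw on 1 Gb/s, ≈ 3 h with overhead
print(resync_hours(1, 10))   # ≈ 0.2 h on 10 Gb/s, often bounded by disk writes
```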
Why a replication < 1,000,000 files?
- Resynchronization time after a failure (step 3).
- Time to check each file between both nodes; see the sketch after this list.
Alternative
- Put the many files to replicate inside a virtual hard disk / virtual machine.
- Only the files representing the virtual hard disk / virtual machine are then replicated and resynchronized.
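To see why the file count matters independently of the volume, assume a per-file comparison cost of 1 ms between the two nodes (an assumption; it depends on network latency and disk seeks):

```python
files = 1_000_000
check_ms_per_file = 1.0   # assumed cost to compare one file between nodes

minutes = files * check_ms_per_file / 1000 / 60
print(f"≈ {minutes:.0f} minutes of checking before any data is copied")  # ≈ 17
```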
Why a failover ≤ 32 replicated VMs?
- Each VM runs in an independent mirror module.
- Maximum of 32 mirror modules running on the same cluster.
Alternative
- Use an external shared storage and another VM clustering solution.
- More expensive, more complex.
Why a LAN/VLAN network between remote sites?
- Automatic failover of the virtual IP address with 2 nodes in the same subnet.
- Good bandwidth for resynchronization (step 3) and good latency for synchronous replication (typically a round trip of less than 2 ms); see the sketch after this list.
Alternative
- Use a load balancer for the virtual IP address if the 2 nodes are in 2 subnets (supported by SafeKit, especially in the cloud).
- Use backup solutions with asynchronous replication for high-latency networks.
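Why the round-trip time matters: with synchronous replication, each dependent write waits for the secondary's acknowledgment, so the round trip caps the rate of serialized writes. A quick estimate:

```python
def max_serial_writes_per_s(rtt_ms: float) -> float:
    """Upper bound on dependent (serialized) synchronous writes per second."""
    return 1000.0 / rtt_ms

print(max_serial_writes_per_s(2))    # 500 writes/s at a 2 ms LAN/VLAN round trip
print(max_serial_writes_per_s(50))   # 20 writes/s at 50 ms: prefer asynchronous/backup
```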
New application (real-time replication and failover)
- Windows (mirror.safe)
- Linux (mirror.safe)
New application (network load balancing and failover)
Database (real-time replication and failover)
- Microsoft SQL Server (sqlserver.safe)
- PostgreSQL (postgresql.safe)
- MySQL (mysql.safe)
- Oracle (oracle.safe)
- MariaDB (mariadb.safe)
- Firebird (firebird.safe)
Web (network load balancing and failover)
- Apache (apache_farm.safe)
- IIS (iis_farm.safe)
- NGINX (farm.safe)
Full VM or container real-time replication and failover
- Hyper-V (hyperv.safe)
- KVM (kvm.safe)
- Docker (mirror.safe)
- Podman (mirror.safe)
- Kubernetes K3S (k3s.safe)
Amazon AWS
- AWS (mirror.safe)
- AWS (farm.safe)
Google GCP
- GCP (mirror.safe)
- GCP (farm.safe)
Microsoft Azure
- Azure (mirror.safe)
- Azure (farm.safe)
Other clouds
- All Cloud Solutions
- Generic (mirror.safe)
- Generic (farm.safe)
Physical security (real-time replication and failover)
- Milestone XProtect (milestone.safe)
- Nedap AEOS (nedap.safe)
- Genetec SQL Server (sqlserver.safe)
- Bosch AMS (hyperv.safe)
- Bosch BIS (hyperv.safe)
- Bosch BVMS (hyperv.safe)
- Hanwha Vision (hyperv.safe)
- Hanwha Wisenet (hyperv.safe)
Siemens (real-time replication and failover)
- Siemens Siveillance suite (hyperv.safe)
- Siemens Desigo CC (hyperv.safe)
- Siemens Siveillance VMS (SiveillanceVMS.safe)
- Siemens SiPass (hyperv.safe)
- Siemens SIPORT (hyperv.safe)
- Siemens SIMATIC PCS 7 (hyperv.safe)
- Siemens SIMATIC WinCC (hyperv.safe)