
SAN vs NAS shared storage for a failover cluster

Evidian SafeKit

What is the simplest solution for a failover cluster: SAN or NAS shared storage?

SAN shared storage or NAS iSCSI shared storage for a failover cluster

There are several elements that make this architecture complex to implement:

  • on failover, switching the shared storage requires low-level instructions that depend on the storage manufacturer,
  • a recovery procedure must be run on the file system (FS) before the application can be restarted,
  • if the file systems on both nodes access the same raw disk at the same time, the entire file system is corrupted,
  • to avoid such double access, a quorum disk must be configured.

NAS SMB shared storage or NAS NFS shared storage for a failover cluster

There are several elements that make this architecture simple to implement:

  • on failover, switching the shared storage simply means remounting the external file system,
  • no recovery procedure needs to be run on the file system before restarting the application,
  • if both nodes access the same shared file system at the same time, the file system will not be corrupted,
  • however, there is still a risk of a double execution of the same application corrupting its data in the shared storage when the nodes are isolated from each other (see the sketch after this list).
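A common way to reduce this last risk is a split brain check: before starting the application after a failover, a node tests an external witness and stays passive if the witness is unreachable. The Python sketch below illustrates the idea only; the witness address and all function names are hypothetical.

    import subprocess

    WITNESS_IP = "192.168.1.1"  # hypothetical witness, e.g. the default gateway

    def witness_reachable(ip: str, timeout_s: int = 2) -> bool:
        """Return True if a single ping to the witness succeeds (Linux ping syntax)."""
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), ip],
            capture_output=True,
        )
        return result.returncode == 0

    def safe_to_start_application() -> bool:
        # A node that cannot see the witness may be on the isolated side of a
        # network split: starting the application there could lead to a double
        # execution writing to the same shared storage.
        return witness_reachable(WITNESS_IP)

    if __name__ == "__main__":
        if safe_to_start_application():
            print("Witness reachable: this node may become primary.")
        else:
            print("Witness unreachable: staying passive to avoid split brain.")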

Real-time replication and failover with Evidian SafeKit

There are no such issues with SafeKit, because its replication and failover solution does not require shared storage.

However, if SafeKit must manage a shared storage:

  • use NAS SMB or NAS NFS shared storage,
  • put the mount/umount of the external file system in the restart scripts (as sketched below),
  • configure the SafeKit split brain checker to avoid a double execution of the same application accessing the shared storage when the nodes are isolated.
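SafeKit restart scripts are typically plain shell scripts; the following Python sketch only illustrates where the NAS mount and umount calls belong in the start/stop logic. The export path, mount point, service name, and function names are hypothetical placeholders, not SafeKit's API.

    import subprocess

    NFS_EXPORT = "nas.example.com:/export/appdata"  # hypothetical NAS export
    MOUNT_POINT = "/mnt/appdata"                    # hypothetical mount point

    def start_prim() -> None:
        """On the node becoming primary: mount the NAS, then start the application."""
        subprocess.run(["mount", "-t", "nfs", NFS_EXPORT, MOUNT_POINT], check=True)
        subprocess.run(["systemctl", "start", "myapp"], check=True)  # placeholder service

    def stop_prim() -> None:
        """On the node leaving the primary role: stop the application, then unmount."""
        subprocess.run(["systemctl", "stop", "myapp"], check=True)   # placeholder service
        subprocess.run(["umount", MOUNT_POINT], check=True)

Unmounting before the other node mounts ensures that only one node at a time writes to the shared file system.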

How does the SafeKit mirror cluster work with Windows or Linux?

Step 1. Real-time replication

Server 1 (PRIM) runs the Windows or Linux application. Clients are connected to a virtual IP address. SafeKit replicates file modifications in real time over the network.

File replication at byte level in a mirror Windows or Linux cluster

The replication is synchronous, with no data loss on failure, unlike asynchronous replication.

You just have to configure the names of the directories to replicate in SafeKit. There are no prerequisites on disk organization. Directories may even be located on the system disk.
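As a conceptual sketch only (not SafeKit internals), the following Python fragment shows why synchronous replication loses no acknowledged data: a modification is applied locally and shipped to the secondary before the write completes for the application. The send_to_secondary function is a hypothetical placeholder.

    def send_to_secondary(path: str, offset: int, data: bytes) -> None:
        """Hypothetical network call shipping the modified byte range to the
        secondary node; a real implementation would wait for its acknowledgement."""
        ...

    def replicated_write(path: str, offset: int, data: bytes) -> None:
        # 1. Apply the modification locally, at byte granularity.
        with open(path, "r+b") as f:
            f.seek(offset)
            f.write(data)
        # 2. Ship the same byte range to the secondary and wait for its ack.
        send_to_secondary(path, offset, data)
        # Only now does the application see the write as complete: whenever the
        # primary fails, the secondary already holds every acknowledged byte.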

Step 2. Automatic failover

When Server 1 fails, Server 2 takes over. SafeKit switches the virtual IP address and restarts the Windows or Linux application automatically on Server 2.

The application finds the files replicated by SafeKit up to date on Server 2. The application continues to run on Server 2 by locally modifying its files, which are no longer replicated to Server 1.

Failover of Windows or Linux in a mirror cluster

The failover time is equal to the fault-detection time (30 seconds by default) plus the application start-up time.
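A back-of-the-envelope application of this rule, with a hypothetical start-up time:

    detection_time_s = 30   # SafeKit default fault-detection time
    app_startup_s = 45      # hypothetical application start-up time
    failover_time_s = detection_time_s + app_startup_s
    print(f"Expected failover time: {failover_time_s} s")  # 75 s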

Step 3. Automatic failback

Failback involves restarting Server 1 after fixing the problem that caused it to fail.

SafeKit automatically resynchronizes the files, updating only the files modified on Server 2 while Server 1 was halted.
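A naive way to enumerate such files, for illustration only (SafeKit's actual mechanism is internal to the product), is to compare modification times against the moment Server 1 went down:

    import os

    def files_to_resync(root: str, primary_down_since: float) -> list[str]:
        """Return the files under root modified after the primary failed
        (timestamps as POSIX epoch seconds)."""
        modified = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.path.getmtime(path) > primary_down_since:
                    modified.append(path)
        return modified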

Failback in a mirror Windows or Linux cluster

Failback takes place without disturbing the Windows or Linux application, which can continue running on Server 2.

Step 4. Back to normal

After reintegration, the files are once again in mirror mode, as in step 1. The system is back in high-availability mode, with the Windows or Linux application running on Server 2 and SafeKit replicating file updates to Server 1.

Return to normal operation in a mirror Windows or Linux cluster

If the administrator wishes the application to run on Server 1, he/she can execute a "swap" command either manually at an appropriate time, or automatically through configuration.

Choose between redundancy at the application level or at the virtual machine level

Redundancy at the application level

In this type of solution, only the application data is replicated, and only the application is restarted in case of failure.

Application HA - redundancy at the application level

With this solution, restart scripts must be written to restart the application.

We deliver application modules to implement redundancy at the application level (like the mirror module provided in the free trial below). They are preconfigured for well-known applications and databases. You can customize them with your own services, data to replicate, and application checkers. And you can combine application modules to build advanced multi-level architectures.

This solution is platform agnostic and works with applications on physical machines, in virtual machines, and in the cloud. Any hypervisor is supported (VMware, Hyper-V...).

  • Solution for a new application (restart scripts to write): Windows, Linux

Redundancy at the virtual machine level

In this type of solution, the full Virtual Machine (VM) is replicated (application + OS), and the full VM is restarted in case of failure.

VM HA - redundancy at the virtual machine level

The advantage is that there are no restart scripts to write per application and no virtual IP address to define. If you do not know how the application works, this is the best solution.

This solution works with Windows/Hyper-V and Linux/KVM but not with VMware. This is an active/active solution with several virtual machines replicated and restarted between two nodes.

More on the comparison between VM HA and application HA

Typical usage with SafeKit

Why a replication of a few terabytes?

Resynchronization time after a failure (step 3)

  • 1 Gb/s network ≈ 3 hours for 1 terabyte.
  • 10 Gb/s network ≈ 1 hour or less for 1 terabyte, depending on disk write performance (see the estimate below).
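These figures follow from simple throughput arithmetic; the 80% usable-throughput factor below is an assumption for illustration:

    terabytes = 1
    payload_bits = terabytes * 8e12       # data to resend, in bits
    gbps = 1                              # network throughput in Gb/s
    effective_bps = gbps * 1e9 * 0.8      # assume ~80% usable throughput
    hours = payload_bits / effective_bps / 3600
    print(f"{terabytes} TB over {gbps} Gb/s ≈ {hours:.1f} h")  # ≈ 2.8 h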

Alternative

Why a replication < 1,000,000 files?

  • Resynchronization time after a failure (step 3) grows with the number of files.
  • Each file must be checked between both nodes.

Alternative

  • Put the many files to replicate inside a virtual hard disk / virtual machine.
  • Only the files representing the virtual hard disk / virtual machine will then be replicated and resynchronized.

Why a failover ≤ 32 replicated VMs?

  • Each VM runs in an independent mirror module.
  • Maximum of 32 mirror modules running on the same cluster.

Alternative

  • Use an external shared storage and another VM clustering solution.
  • More expensive, more complex.

Why a LAN/VLAN network between remote sites?

Alternative

  • Use a load balancer for the virtual IP address if the 2 nodes are in 2 subnets (supported by SafeKit, especially in the cloud).
  • Use backup solutions with asynchronous replication for high-latency networks.

SafeKit Solutions and Quick Installation Guides

  • New application (real-time replication and failover)
  • New application (network load balancing and failover)
  • Database (real-time replication and failover)
  • Web (network load balancing and failover)
  • Full VM or container real-time replication and failover
  • Amazon AWS
  • Google GCP
  • Microsoft Azure
  • Other clouds
  • Physical security (real-time replication and failover)
  • Siemens (real-time replication and failover)

SafeKit High Availability Differentiators