Architectural Preparations

The Axigen setup uses an active/passive configuration for the back-end tier. You must therefore first prepare your setup from an architectural point of view.

This section contains recommendations only and should be treated as examples. The cluster can function with many naming and storage configurations, but this chapter defines guidelines that will make the cluster configuration and architecture easier to administer and understand in the future.

Naming Conventions

One step in creating your architecture prerequisites is to establish your naming conventions.

An Axigen cluster service instance is configured in a failover domain of two nodes, and each such pair of nodes must be named accordingly. For example, the first Axigen back-end node along with its hot stand-by counterpart may be named axib1 (Axigen Back-end 1), the second axib2 (Axigen Back-end 2), and so on. This name can be used to tag everything that refers to this pair of nodes throughout the cluster: for instance, the failover domain for the axib1 pair may be called failover-axib1, and its Axigen cluster service may be called service-axib1.

Similarly, the OpenLDAP tag may be ldap1, so the failover domain for the active OpenLDAP node and its corresponding hot stand-by node would be named failover-ldap1.

IP Addresses

Each node in the cluster must have a static IP address in the same subnet, for cluster communication purposes. For redundancy, each node should have at least two network interfaces connected through different Ethernet switches. Bonding may be used to link the two network interfaces on each node.
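
As an illustration only, on a Red Hat style system the bonding could be set up with network-scripts files along the lines of the sketch below; the interface names, bonding mode, and addresses are merely examples:

  # /etc/sysconfig/network-scripts/ifcfg-bond0 (example values only)
  DEVICE=bond0
  BOOTPROTO=none
  ONBOOT=yes
  IPADDR=192.0.2.11
  NETMASK=255.255.255.0
  BONDING_OPTS="mode=active-backup miimon=100"

  # /etc/sysconfig/network-scripts/ifcfg-eth0 (a similar file is needed for eth1)
  DEVICE=eth0
  BOOTPROTO=none
  ONBOOT=yes
  MASTER=bond0
  SLAVE=yes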

Each pair of active and passive nodes must have a corresponding floating IP address, which will be set by the cluster software on the active node in the failover domain.

 Please note the system network interface that will be used for the cluster floating IP address; you will be able to specify it in the cluster configuration, as explained below.

DNS

A local DNS server, accessible from the cluster nodes, should be available in the infrastructure. All IP addresses should have a corresponding host name assignment in the local DNS server as an A record.

For example, the nodes of a cluster with three Axigen active nodes, a single active OpenLDAP node, and a hot stand-by may be named in the cluster.local domain as follows (a sample zone file sketch is shown after the list):

  • Static IP addresses:

    • first back-end node: b1.cluster.local

    • second back-end node: b2.cluster.local

    • third back-end node: b3.cluster.local

    • first LDAP node: l1.cluster.local

    • first hot stand-by node: f1.cluster.local

  • Dynamic/Floating IP addresses:

    • first failover domain (b1+f1): axib1.cluster.local

    • second failover domain (b2+f1): axib2.cluster.local

    • third failover domain (b3+f1): axib3.cluster.local

    • fourth failover domain (l1+f1): ldap1.cluster.local
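
For illustration, the corresponding A records in a BIND-style zone file for cluster.local might look like the sketch below; all IP addresses are placeholders:

  ; sketch of the A records in the cluster.local zone (placeholder addresses)
  b1      IN  A   192.0.2.11
  b2      IN  A   192.0.2.12
  b3      IN  A   192.0.2.13
  l1      IN  A   192.0.2.14
  f1      IN  A   192.0.2.15
  axib1   IN  A   192.0.2.21
  axib2   IN  A   192.0.2.22
  axib3   IN  A   192.0.2.23
  ldap1   IN  A   192.0.2.24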

 Please ensure that all the nodes in the cluster have the same /etc/resolv.conf file, containing the correct search and nameserver configuration options.
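
For example, a minimal /etc/resolv.conf shared by all nodes could look like this; the nameserver address is a placeholder for your local DNS server:

  # /etc/resolv.conf (identical on every node; placeholder nameserver address)
  search cluster.local
  nameserver 192.0.2.53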

Storage

The external shared storage must be configured with at least one partition for each failover domain in the cluster. The partition will store the shared data for the corresponding service provided by that failover domain.

Axigen's shared data may be split into multiple virtual entities, each of which may have its own corresponding partition:

  1. Domains

    • This entity is usually the largest in size, because it contains all the mailboxes from all the domains. Its size depends on the mailbox quotas and average mailbox usage.

  2. Logs

    • Depending on your company's log retention rules, the logs can range from under 1 GB to several gigabytes in size.

  3. Queue

    • It contains only the temporary emails stored in the queue. Its size depends heavily on email traffic, but should usually not exceed 1 GB.

  4. The rest of Axigen data

    • This partition will contain the Axigen configuration and filter files, the WebMail and WebAdmin files, and the reporting and domain delegation databases. It is the smallest partition, usually under 200 MB, although it may grow to around 1 GB.

When performing the LUN configuration on the shared storage device, each cluster service should have a separate virtual disk with multiple partitions. This allows each virtual disk to be exposed only to the nodes in its corresponding failover domain.
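
As a rough sketch only, assuming an ext3 file system and purely illustrative device names and partition sizes, one such virtual disk could be partitioned and formatted as follows:

  # example only: partition and format the virtual disk assigned to one Axigen failover domain
  parted --script /dev/sdb mklabel msdos
  parted --script /dev/sdb mkpart primary 1MB 1GB       # Axigen data partition
  parted --script /dev/sdb mkpart primary 1GB 5GB       # logs and queue partition
  parted --script /dev/sdb mkpart primary 5GB 100%      # domains partition (largest)
  mkfs.ext3 /dev/sdb1
  mkfs.ext3 /dev/sdb2
  mkfs.ext3 /dev/sdb3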

Example Configuration

This document uses an example setup configuration, explained below.

IPs and DNS

The example involves four nodes, namely two Axigen active nodes, a single active OpenLDAP node, and their hot stand-by counterpart:

  1. n1.cl.axilab.local: 10.9.9.91 (Axigen Back-end 1)

  2. n2.cl.axilab.local: 10.9.9.92 (Axigen Back-end 2)

  3. n3.cl.axilab.local: 10.9.9.93 (OpenLDAP Back-end)

  4. n4.cl.axilab.local: 10.9.9.94 (Failover)


The cluster service floating IP addresses follow:

  1. axib1.cl.axilab.local: 10.9.9.96

  2. axib2.cl.axilab.local: 10.9.9.97

  3. ldap.cl.axilab.local: 10.9.9.98

All names are registered and available in the DNS server, so there is no need to use the /etc/hosts file.

Storage

For each Axigen cluster service, a storage unit, shared between two nodes in the same failover domain, contains the following partitions:

  1. Axigen data directory

    • Block device: /dev/disk/by-uuid/bd537993-d78a-4bf9-acf7-ed0993edbafb

    • Mount point: /var/clusterfs/axib1-data

    • Structure (see the mount and symlink sketch after this list):

      • axigen/

      • axigen/domains -> ../../axib1-dom/domains

      • axigen/log -> ../../axib1-lq/log

      • axigen/queue -> ../../axib1-lq/queue

  2. Axigen domains

    • Block device: /dev/disk/by-uuid/4407d04d-f48a-456b-b3f4-11a8d0f6e254

    • Mount point: /var/clusterfs/axib1-dom

    • Structure:

      • domains/

  3. Axigen logs and queue

    • Block device: /dev/disk/by-uuid/59efb5e6-bf85-4486-ac75-57e2b40c2e78

    • Mount point: /var/clusterfs/axib1-lq

    • Structure:

      • log/

      • queue/
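
To illustrate how these partitions tie together, the layout above could be reproduced manually with commands along the lines of the sketch below; in normal operation the mounts are performed by the clustering software:

  # mount the three shared partitions of the axib1 service (normally done by the cluster software)
  mount /dev/disk/by-uuid/bd537993-d78a-4bf9-acf7-ed0993edbafb /var/clusterfs/axib1-data
  mount /dev/disk/by-uuid/4407d04d-f48a-456b-b3f4-11a8d0f6e254 /var/clusterfs/axib1-dom
  mount /dev/disk/by-uuid/59efb5e6-bf85-4486-ac75-57e2b40c2e78 /var/clusterfs/axib1-lq

  # create the directories and the relative symbolic links inside the data partition
  mkdir -p /var/clusterfs/axib1-data/axigen
  mkdir -p /var/clusterfs/axib1-dom/domains /var/clusterfs/axib1-lq/log /var/clusterfs/axib1-lq/queue
  cd /var/clusterfs/axib1-data/axigen
  ln -s ../../axib1-dom/domains domains
  ln -s ../../axib1-lq/log log
  ln -s ../../axib1-lq/queue queue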


For each OpenLDAP cluster service, a storage unit may contain a single partition, organized as follows:

  1. OpenLDAP

    • Block device: /dev/disk/by-uuid/28a715bb-8b1d-4acd-959f-a67328f04d16

    • Mount point: /var/clusterfs/ldap

    • Structure:

      • db/

      • cf/

We have used the /dev/disk/by-uuid paths when specifying the block devices, because the shared disk may be detected under a different /dev/XdY name on each of the two nodes. Also, if a kernel or udev change or update affects the device name, the by-uuid path will remain the same on both nodes.
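
The UUIDs of the shared partitions can be listed on each node with standard tools, for example:

  # list the block devices and their UUIDs
  blkid
  ls -l /dev/disk/by-uuid/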

For the ease of manual mount operations, it is recommended that the /etc/fstab file on each node be filled with the cluster shared storage partitions accessible from that node.

Do not set the partitions in /etc/fstab to be mounted at boot, because the mount operations will be performed automatically by the clustering software.
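
As a sketch of what such entries could look like on the axib1 nodes, assuming an ext3 file system, the noauto option keeps the partitions from being mounted at boot:

  # example /etc/fstab entries for the axib1 shared partitions (noauto: not mounted at boot)
  UUID=bd537993-d78a-4bf9-acf7-ed0993edbafb  /var/clusterfs/axib1-data  ext3  noauto,defaults  0 0
  UUID=4407d04d-f48a-456b-b3f4-11a8d0f6e254  /var/clusterfs/axib1-dom   ext3  noauto,defaults  0 0
  UUID=59efb5e6-bf85-4486-ac75-57e2b40c2e78  /var/clusterfs/axib1-lq    ext3  noauto,defaults  0 0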

Fence Device

In this example, the nodes are handled by an APC power switch, available at the address 10.9.9.99. The fenceadm user will be used for power cycling the nodes, if considered necessary by the Red Hat Clustering software.
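
In Red Hat Cluster Suite terms, such a device is typically declared in /etc/cluster/cluster.conf with the fence_apc agent, roughly as in the sketch below; the device name and password are placeholders, and the exact attributes depend on your cluster software version:

  <!-- sketch of an APC fence device declaration in cluster.conf (placeholder name and password) -->
  <fencedevices>
      <fencedevice agent="fence_apc" name="apc1" ipaddr="10.9.9.99" login="fenceadm" passwd="PASSWORD"/>
  </fencedevices>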