Clustering: Hardware Requirements

Servers

The hardware used for all of the solution nodes should match the performance of the following system configuration. This configuration should be treated as a baseline example; more powerful hardware configurations can also be used:

  • CPU: Intel Xeon Quad-Core (>2 GHz, 1333 MHz FSB)

  • RAM: 8 GB Fully Buffered

  • Array Controller (RAID 0/1/1+0/5)

  • 2x 72 GB 15,000 RPM SAS Hot-Plug Hard Drives

  • 2x Gigabit Network Adapters

As an alternative to the example configuration above, compatible hardware parts and systems from any other vendor may be used.

Only reliable, heavy-duty equipment (server-class, rack-mountable) should be deployed within the clustering environment.

Load Balancer

Any Layer 3-7 capable hardware load balancer can be used to provide request and connection balancing. The following device types can be used as a baseline in terms of feature availability and performance; compatible counterparts from a different vendor can be used instead:

  • Nortel Alteon Application Switch series (3408E, 2424E, 2424-SSL E, 2216E, 2208E)

  • Cisco CSS 11500 Content Services Switch series (11506, 11503, 11501)

  • Sun N2000 and N1000 Application Switch series (N1400, N2040, N2120)

Only one load balancer is required for a functional solution, but two must be used to achieve full redundancy for the entire messaging system.

Cluster Storage

The clustered configuration requires all active/passive node pairs in the cluster to access the same storage system. The storage device can be either fiber-optic attached storage (SAN / Storage Area Network) or SCSI-attached storage.

The baseline hardware recommendation for the cluster storage should match the performance of the following configuration. More powerful hardware configurations may be used instead, as well as compatible hardware from other vendors:

  • Storage HDD support: Fiber Channel drives

  • Drive speed: 15,000 RPM

  • RAID support: 1/5/6 (6 is recommended)

  • Host connections: Minimum 4 Gb Fiber Channel / iSCSI

  • Dual controller support

  • Hot-plug support

In addition to the above specifications, the maximum supported HDD count and the maximum raw capacity of the storage need to be scaled appropriately. The raw storage capacity must accommodate the size of the existing mailboxes if a migration is planned. On top of the expected space requirements, at least 25% slack (free) space must be available to accommodate the binary format overhead of the stored data. For example, if the initial storage requirement estimate is 1 TB, the storage must be scaled to accommodate 1.25 TB of data.
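As a quick way to apply the sizing rule above, the following Python sketch estimates the raw capacity to provision from an initial storage estimate. The function name and default slack value simply illustrate the 25% rule described here and are not part of any Axigen tooling.

    def required_raw_capacity_tb(estimated_data_tb, slack=0.25):
        """Raw storage capacity (in TB) to provision, adding the
        recommended 25% slack for binary format overhead."""
        return estimated_data_tb * (1 + slack)

    # Example from the text: a 1 TB estimate requires 1.25 TB of raw capacity.
    print(required_raw_capacity_tb(1.0))  # 1.25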

Each back-end node connected to the shared storage device must have at least one Host Bus Adapter (HBA) installed (two are recommended for full redundancy). Any 4 Gb fiber channel HBA can be used for this purpose.

Fence Devices

Fence devices allow a failed node to be isolated from the storage, so that two nodes can never write to the same partition on the shared storage at the same time. There are two types of fence devices:

  • Remote power switches (allow the cluster software to remotely power down/reboot a node that has failed);

  • I/O barriers (allow the cluster software to block access to the shared storage for a node that has failed)

I/O barriers (fiber channel fabric switches) are preferred over power switches, because the latter hard-reset the failed system to restore functionality. This behavior can complicate later debugging, as the state of the failed system is not preserved.

These components are required for the back-end tier of the solution. At least one fence device port is required for each node in the back-end tier. For example, in a solution with 8 nodes in the back-end tier and fence devices with 4 ports each, at least 2 fence devices are required.
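For illustration only, the short Python sketch below applies this sizing rule (one fence device port per back-end node, or two per node for redundancy); the function and parameter names are hypothetical.

    import math

    def fence_devices_needed(backend_nodes, ports_per_device, ports_per_node=1):
        """Minimum number of fence devices, assuming one port per back-end
        node (use ports_per_node=2 for a fully redundant setup)."""
        return math.ceil(backend_nodes * ports_per_node / ports_per_device)

    # Example from the text: 8 back-end nodes and 4-port devices -> 2 devices.
    print(fence_devices_needed(8, 4))  # 2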

The following requirements must be met by the fiber channel fabric switches to fit the proposed architecture design:

  • Host connections: Minimum 4 Gb Fiber Channel

  • Minimum number of ports: 1 per back-end node (2 to achieve high availability)