
Heartbeat and DRBD can be used together as a cluster solution for just about any application that runs on two servers. The two servers work in active/passive mode: one server works at a time while the other stands by as a backup. DRBD (Distributed Replicated Block Device) is a kernel-level service that synchronizes data between two servers in real time. Heartbeat is an open source program that allows a primary and a backup Linux server to determine whether the other is “alive” and, if the primary isn’t, to fail over resources to the backup. It also manages the high availability IP address and other services on the servers.

In this guide, we will learn how to achieve high availability of MariaDB using Heartbeat and DRBD on an Ubuntu 16.04 server.

Requirements

  • Two nodes with Ubuntu 16.04 server installed.
  • Two network cards installed on each node.
  • Additional unpartitioned hard drive on each node.
  • Non-root user with sudo privileges set up on each node.

Getting Started

Before beginning, you will need to set up an IP address on each node. Use the following IP addresses on each node:

Node1 :

172.16.0.1 on eth0 and 192.168.0.101 on eth1

Node2 :

172.16.0.2 on eth0 and 192.168.0.102 on eth1

IP 192.168.0.103 will be the high availability (floating) IP address.
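The interface configuration itself is not shown in this guide, but as a minimal sketch, assuming the classic ifupdown networking that Ubuntu 16.04 uses by default, Node1's /etc/network/interfaces could look like the following; mirror it on Node2 with its own addresses. Note that the floating IP 192.168.0.103 is managed by Heartbeat itself and should not be configured statically.

# /etc/network/interfaces on Node1 (sketch; adjust to your environment)
auto eth0
iface eth0 inet static
    address 172.16.0.1
    netmask 255.255.255.0

auto eth1
iface eth1 inet static
    address 192.168.0.101
    netmask 255.255.255.0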

Next, you will also need to set up the hostname and hostname resolution on each node so that the nodes can communicate with each other.

On the first node, open the /etc/hosts file and the /etc/hostname file:

sudo nano /etc/hosts

Add the following lines at the end of the file:

172.16.0.1  Node1
172.16.0.2  Node2

sudo nano /etc/hostname

Change the file as shown below:

Node1

Save and close the file when you are finished.

On the second node, open the /etc/hosts file and the /etc/hostname file:

sudo nano /etc/hosts

Add the following lines at the end of the file:

172.16.0.1  Node1
172.16.0.2  Node2

sudo nano /etc/hostname

Change the file as shown below:

Node2
 

Save and close the file when you are finished.

Next, update each node to the latest package versions with the following commands:

sudo apt-get update -y
sudo apt-get upgrade -y

Once your system is updated, restart the system to apply these changes.

Install DRBD and Heartbeat

Next, you will need to install DRBD and Heartbeat on both nodes. Both are available in the default Ubuntu 16.04 repository. You can install them by running the following command on both nodes:

sudo apt-get install drbd8-utils heartbeat -y

Next, start the DRBD and Heartbeat services and enable them to start at boot time:

sudo systemctl start drbd
sudo systemctl start heartbeat
sudo systemctl enable drbd
sudo systemctl enable heartbeat
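At this point the drbd kernel module should be loaded. As a quick sanity check (an addition, not part of the original steps), you can confirm it with:

lsmod | grep drbd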

Configure DRBD and Heartbeat

Next, you will need to set up a DRBD device on each node. Create a single partition on the second, unpartitioned drive /dev/sdb on each node.

You can do this by running the following command on each node:

echo -e 'n\np\n1\n\n\nw' | sudo fdisk /dev/sdb
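The echo pipe answers fdisk's prompts non-interactively: n (new partition), p (primary), 1 (partition number), two empty lines to accept the default first and last sectors, and w (write). To confirm the result before continuing (an optional addition to the original steps), you can list the partition table:

sudo fdisk -l /dev/sdb

You should see a single partition /dev/sdb1 covering the whole disk.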

Next, you will need to configure DRBD on both nodes. You can do this by creating an /etc/drbd.d/r0.res file on each node:

sudo nano /etc/drbd.d/r0.res

Add these lines:

global {
  usage-count no;
}
resource r0 {
  protocol C;
  startup {
    degr-wfc-timeout 60;
  }
  disk {
  }
  syncer {
    rate 100M;
  }
  net {
    cram-hmac-alg sha1;
    shared-secret "aBcDeF";
  }
  on Node1 {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 172.16.0.1:7789;
    meta-disk internal;
  }
  on Node2 {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 172.16.0.2:7789;
    meta-disk internal;
  }
}
 

Save and close the file when you are finished.
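As an optional check (an addition, not part of the original steps), you can have drbdadm parse the new resource file; it prints the configuration back if the syntax is valid:

sudo drbdadm dump r0

Next, open the main Heartbeat configuration file on each node: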

sudo nano /etc/ha.d/ha.cf

Add these lines:

# Check interval
keepalive 1
# Time before host declared dead
deadtime 10
# Additional hold-off delay at boot
initdead 60
# Auto-failback
auto_failback off
# Heartbeat interface
bcast eth1
# Nodes to monitor
node Node1
node Node2

Save and close the file.

Next, open the resources file /etc/ha.d/haresources on each node:

sudo nano /etc/ha.d/haresources

Add these lines:

Node1 192.168.0.103/24 drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/mysql::ext4::noatime
 

Here, Node1 is the hostname of the primary active node, 192.168.0.103 is the floating IP address, /var/lib/mysql is the mount point and /dev/drbd0 is the DRBD device.

Next, you will need to define identical authorization keys on both nodes. You can do this by creating the /etc/ha.d/authkeys file on each node:

sudo nano /etc/ha.d/authkeys

Add the following lines:

auth 1
1 sha1 your-secure-password
 

Here, your-secure-password is your secure password. Use the same password on both nodes.
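If you do not already have a secure password, one possible way to generate a random one (an addition, not from the original steps) is to hash a few random bytes:

dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | cut -d' ' -f1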

Next, create and start the DRBD disk by running the following commands on Node1:

sudo drbdadm create-md r0
sudo systemctl restart drbd
sudo drbdadm outdate r0
sudo drbdadm -- --overwrite-data-of-peer primary all
sudo drbdadm primary r0
sudo mkfs.ext4 /dev/drbd0
sudo chmod 600 /etc/ha.d/authkeys
sudo mkdir /var/lib/mysql
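The initial synchronization can take a while on a large disk. If you would like to follow its progress in real time (an optional addition to the original steps), you can run:

watch -n1 cat /proc/drbd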

Once the DRBD disk is created on Node1, create the DRBD disk on Node2 with the following commands:

sudo drbdadm create-md r0
sudo systemctl restart drbd
sudo chmod 600 /etc/ha.d/authkeys
sudo mkdir /var/lib/mysql

Now, you can verify that the DRBD disk is connected and syncing properly by running the following command:

sudo cat /proc/drbd

If everything is fine, you should see the following output:

version: 8.4.5 (api:1/proto:86-101)
srcversion: F446E16BFEBS8B115AJB14H
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:210413 nr:0 dw:126413 dr:815311 al:35 bm:0 lo:0 pe:11 ua:0 ap:0 ep:1 wo:f oos:16233752
    [>....................] sync'ed: 3.3% (14752/14350)M
    finish: 0:12:23 speed: 12,156 (16,932) K/sec

Next, start Heartbeat on both nodes to enable the failover portion of your setup:

sudo systemctl start heartbeat

Next, verify the mounted DRBD partition with the following command on Node1:

sudo mount | grep drbd

You should see the following output:

/dev/drbd0 on /var/lib/mysql type ext4 (rw,noatime,data=ordered)
 

Next, verify that the floating IP address is bound to Node1 with the following command:

sudo ip addr show | grep 192.168.0.103

You should see the following output:

inet 192.168.0.103/24 brd 192.168.0.255 scope global secondary eth1:0
 

Install and Configure MariaDB

Once everything is configured correctly on both nodes, it is time to install the MariaDB server on both nodes.

Run the following command on both nodes to install the MariaDB server:

sudo apt-get install mariadb-server -y

Next, you will need to disable the MariaDB service on both nodes, since Heartbeat will be responsible for starting it:

sudo systemctl disable mysql

Here, we will use Node1 as the primary, and the databases on Node2 will be created and populated through synchronization with Node1. So you will need to stop the MariaDB service and remove the content inside /var/lib/mysql on Node2. You can do this with the following commands:

sudo systemctl stop mysql
sudo rm -rf /var/lib/mysql/*

Next, you will need to copy the MySQL maintenance configuration file from Node1 to Node2. You can do this by running the following command:

sudo scp /etc/mysql/debian.cnf root@172.16.0.2:/etc/mysql/debian.cnf

Next, you will need to create a root user for remote management of, and access to, the databases on the highly available MySQL instance.

You can do this by running the following command on Node1:

mysql -u root -p

Enter your root password, then create a root user with the following command:

MariaDB [(none)]> CREATE USER 'root'@'192.168.0.%' IDENTIFIED BY 'password';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.0.%' WITH GRANT OPTION;
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> QUIT;

Next, set the bind address for MySQL on both nodes with the following command:

sudo sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/mariadb.conf.d/*.cnf
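The sed command rewrites the bind address in every file under /etc/mysql/mariadb.conf.d/ from the loopback address to 0.0.0.0, so that MySQL listens on all interfaces, including the floating IP. To confirm that the change took effect (an optional addition to the original steps), you can grep for it:

grep -r 'bind-address' /etc/mysql/mariadb.conf.d/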

Initiate Heartbeat for the MariaDB Service

Next, you will need to include the MariaDB service in the Heartbeat configuration on both nodes. You can do this by modifying the /etc/ha.d/haresources file:

sudo nano /etc/ha.d/haresources

Modify the line as shown below:

Node1 192.168.0.103/24 drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/mysql::ext4::noatime mysql
 

Save and close the file when you are finished.

Once the heartbeat is configured, you will need to restart it on both Nodes.

First, restart Heartbeat on Node1:

sudo systemctl restart heartbeat

Next, wait 50 seconds and then restart the Heartbeat service on Node2:

sudo systemctl restart heartbeat

Test Heartbeat and DRBD

Now that everything is configured properly, it is time to perform a series of tests to verify that Heartbeat will actually trigger a transfer from the active server to the passive server when the active server fails for some reason.

First, verify that Node1 is the primary DRBD node with the following command on Node1:

sudo cat /proc/drbd

You should see the following output:

version: 8.4.5 (api:1/proto:86-101)
srcversion: F446E16BFEBS8B115AJB14H
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:22764644 nr:256 dw:529232 dr:22248299 al:111 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

Next, confirm that the DRBD disk is mounted with the following command:

sudo mount | grep drbd

/dev/drbd0 on /var/lib/mysql type ext4 (rw,noatime,data=ordered)
 

Next, verify the MariaDB service with the following command:

sudo systemctl status mysql

Next, access the MariaDB server from a remote machine using the floating IP address and create a test database:

mysql -h 192.168.0.103 -u root -p

MariaDB [(none)]> create database test;
MariaDB [(none)]> quit

Next, restart Heartbeat on Node1:

sudo systemctl restart heartbeat

Now, Heartbeat will interpret this restart as a failure of MariaDB on Node1 and will trigger a failover, making Node2 the primary server.
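As an extra check (an addition to the original steps), the floating IP test used earlier should now show the address on Node2 rather than Node1. Run the following on Node2:

sudo ip addr show | grep 192.168.0.103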

You can confirm that DRBD is now treating Node1 as the secondary server with the following command on Node1:

sudo cat /proc/drbd

You should see the following output:

version: 8.4.5 (api:1/proto:86-101)
srcversion: F446E16BFEBS8B115AJB14H
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:22764856 nr:388 dw:529576 dr:22248303 al:112 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
 

Now, verify that Node2 is the primary DRBD node by running the following command on Node2:

sudo cat /proc/drbd

You should see the following output:

version: 8.4.5 (api:1/proto:86-101)
srcversion: F446E16BFEBS8B115AJB14H
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:412 nr:20880892 dw:20881304 dr:11463 al:7 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
 

Next, check to make sure that MariaDB is running on Node2:

sudo systemctl status mysql

Now, connect to the MariaDB server on Node2 using the floating IP address from the remote machine:

mysql -h 192.168.0.103 -u root -p

Next, look for the test database that we created earlier while Node1 was the primary server:

MariaDB [(none)]> show databases;

You should see the following output:

 +--------------------+
 | Database |
 +--------------------+
 | test |
 | information_schema |
 | lost+found |
 | mysql |
 | performance_schema |
 +--------------------+
 5 rows in set (0.04 sec)
