Consul configuration in a multi Data Center environment

Below I am going to describe how to use Consul health checking across multiple data centers. A picture is worth a thousand words: the diagram below gives a high-level overview of the Consul multi data center architecture.
Consul Multi Data Center Diagram
The highlights covered in detail in this article are:
  • OS required for Consul VMs/Docker installation
  • Consul Installation and configuration
  • Consul GEO(Query) configuration used in Consul multiple Data Center setup
  • Consul and Python API’s
  • Dashboard for Consul data (Host, Service Availability)
Consul from HashiCorp is a fantastic Service Discovery and Health Checking tool. Consul provides the following features out of the box:
  • Service Discovery
  • Health Checking (TCP, HTTP, Script, Docker)
  • KV Store
  • Multi Datacenter
Note: Because of the limited hardware available for my test, I am using Solaris Kernel Zones in the examples below. Using any VM hypervisor, Docker, etc. should return the same or similar results. I hope to update this in the future with a Docker implementation example.

Architecture / Preparing / Installing the OS(Kernel Zones/VMs)

For the Consul configuration below you will need the following items.
  • 6 Consul servers – 3 per DC (can be VMs or Docker’s)
  • 2 Consul clients – 1 per DC (Tip: for maximum HA use at least 2 clients per DC)
  • Your end hosts monitored by Consul (with no Consul agent installed)
You will also need to make sure the following ports are open in your firewall (a sample pf rule set is shown after the list).
  1. Server RPC (Default 8300). This is used by servers to handle incoming requests from other agents. TCP only.
  2. Serf LAN (Default 8301). This is used to handle gossip in the LAN. Required by all agents. TCP and UDP.
  3. Serf WAN (Default 8302). This is used by servers to gossip over the WAN to other servers. TCP and UDP.
  4. HTTP API (Default 8500). This is used by clients to talk to the HTTP API. TCP only.
  5. HTTPS API (Default 443). This is used by clients to talk to the HTTPS API. TCP only.
  6. DNS Interface (Default 8600). Used to resolve DNS queries. TCP and UDP.
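Since pf is used later in this post for NAT, here is a minimal sketch of pf rules that would open the ports above between the Consul nodes; the interface name and source network are assumptions, so adjust them to your environment.
# Hypothetical pf rules to open the Consul ports (adjust interface/network)
int_if     = "eth0"
consul_net = "{ 10.10.1.0/24 }"

# Server RPC (8300), Serf LAN/WAN (8301/8302), HTTP API (8500), DNS (8600)
pass in on $int_if proto tcp from $consul_net to any port { 8300 8301 8302 8500 8600 }
pass in on $int_if proto udp from $consul_net to any port { 8301 8302 8600 }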
The tables below list the names and IP addresses used in this configuration (feel free to replace them to suit your needs).
DC1
 Name       IP Address
 Consul Servers
 cons1      10.10.1.11
 cons2      10.10.1.12
 cons3      10.10.1.13
 Consul Clients
 cl1        10.10.1.51
 cl2        10.10.1.52
 End Hosts
 host1      192.168.1.11
 host2-vip  192.168.1.50
 host2      192.168.1.12
DC2
 Name       IP Address
 Consul Servers
 cons11     10.10.1.111
 cons12     10.10.1.112
 cons13     10.10.1.113
 Consul Clients
 cl1        10.10.1.151
 cl2        10.10.1.152
 End Hosts
 host3      172.16.1.11
 host3-vip  172.16.1.50
 host4      172.16.1.12
Note: An updated post using a more recent version of Consul (version 1.4.2) is available here; the configuration below will work with Consul version 0.9.2 and earlier. As mentioned, since I had limited hardware I used Solaris Kernel Zones to create the environment; the zone configuration and installation steps are below. Note: If you are using a different VM hypervisor or Docker you can skip this section; I hope to update this in the future with a Docker implementation.

Solaris zones installation configuration

First, we need to create the Solaris zone configurations, one for each Consul server and client. Run the following to create the Consul zone configurations.
# Create Consul server zone configuration
for i in {1..3} {11..13};do 
   zonecfg -z cons$i create -t SYSsolaris-kz
done

# Create Consul client zone configuration
for i in {1..2};do 
   zonecfg -z cl$i create -t SYSsolaris-kz
done
Next, generate a zone profile and installation manifest for each zone by running the following.
for i in {1..3} {11..13};do
    sysconfig create-profile -o /tmp/cons${i}_profile.xml
    cp /usr/share/auto_install/manifest/zone_default.xml /tmp/cons${i}_manifest.xml
done

for i in {1..2};do
    sysconfig create-profile -o /tmp/cl${i}_profile.xml
    cp /usr/share/auto_install/manifest/zone_default.xml /tmp/cl${i}_manifest.xml
done
Now we are ready to install the zones. Install the Consul zones by running the following.
# Install Consul server zones
for i in {1..3} {11..13};do   
   zoneadm -z cons$i install -b /root/Downloads/sol-11.3-text-x86.iso -m /tmp/cons${i}_manifest.xml -c /tmp/cons${i}_profile.xml
done

# Install Consul client zones
for i in {1..2};do          
   zoneadm -z cl$i install -b /root/Downloads/sol-11.3-text-x86.iso -m /tmp/cl${i}_manifest.xml -c /tmp/cl${i}_profile.xml
done
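Once the installs complete, you can boot the zones and confirm they are running; a minimal sketch using the zone names created above:
# Boot the Consul server and client zones
for i in {1..3} {11..13};do zoneadm -z cons$i boot; done
for i in {1..2};do zoneadm -z cl$i boot; done

# List all zones and their current state
zoneadm list -cv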
Firewall and NAT (optional)
I also created a NAT rule for all zones (10.10.1.x), used for outgoing traffic to the external interface. Note: The NAT configuration is not a requirement for Consul to work; it is just a limitation in our network environment. To configure firewall NAT in BSD or Solaris, create a rule like the one below.
cat /etc/firewall/pf.conf
# https://www.openbsd.org/faq/pf/nat.html
ext_if = "eth0"    # macro for external interface

pass out on $ext_if from 10.10.1.0/24 to any nat-to 192.168.1.190
Now, let's enable the SMF firewall service.
svcadm enable svc:/network/firewall:default
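Optionally, verify the firewall service came online:
svcs svc:/network/firewall:default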
We are now ready to move on to the Consul installation.

Consul installation

First, we need to download the Consul binary; you do so by running the below (replace with the OS and version you use). I used version 0.9.2.
wget https://releases.hashicorp.com/consul/0.9.2/consul_0.9.2_solaris_amd64.zip?_ga=2.100725697.1177602615.1504105338-325867268.1492723108
Consul is a single binary file, installation is simple, just run the below on all zones (server and client).
cd /usr/bin
unzip -qq consul_0.9.2_solaris_amd64.zip
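To confirm the binary is in place and executable, a quick check:
# Should report Consul v0.9.2
consul version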
Consul user and configuration
Next, we need to create the Consul runtime environment: a consul user and the directories outlined below. To create the consul user with the necessary directories, run the following on all Consul servers and clients.
groupadd consul
useradd -d /var/consul -g consul -m -s /bin/bash -c "Consul App" consul
mkdir -p /etc/consul.d/{bootstrap,server,client}

mkdir /var/consul
chown consul:consul /var/consul
Next, we need to generate a Consul secret (gossip encryption) key; the key will be used for all cluster communications. Note: the same key must be used by every server and client in both data centers, otherwise LAN/WAN gossip will fail. Just run the below to generate one.
consul keygen
G1Y/7ooXzfuyPmyzj2RlDg==
Finally, we need to create the Consul config.json files; you do so by running the below.
Consul config.json for the Consul Servers
Consul Server DC1 – First node config.json
cat <<'EOF' > /etc/consul.d/server/config.json
{
  "bind_addr": "10.10.1.11", 
  "datacenter": "dc1",
  "data_dir": "/var/consul",
  "encrypt": "G1Y/7ooXzfuyPmyzj2RlDg==",
  "log_level": "INFO",
  "enable_debug": true,
  "node_name": "ConsulServer1",
  "server": true,
  "bootstrap_expect": 3,
  "leave_on_terminate": false,
  "skip_leave_on_interrupt": true,
  "rejoin_after_leave": true,
  "disable_update_check": true,
  "performance": {
    "raft_multiplier": 1
  },
  "recursors": ["8.8.8.8"],
  "retry_join": [ 
    "10.10.1.11:8301",
    "10.10.1.12:8301",
    "10.10.1.13:8301"
    ],
  "retry_join_wan": [ 
    "10.10.1.111:8302",
    "10.10.1.112:8302",
    "10.10.1.113:8302"
    ]
}
EOF
Consul Server DC2 – First node config.json
cat <<'EOF' > /etc/consul.d/server/config.json
{
  "bind_addr": "10.10.1.111", 
  "datacenter": "dc2",
  "data_dir": "/var/consul",
  "encrypt": "G1Y/7ooXz1hpPmyzj2RlDg==",
  "log_level": "INFO",
  "enable_debug": true,
  "node_name": "ConsulServer4",
  "server": true,
  "bootstrap_expect": 3,
  "leave_on_terminate": false,
  "skip_leave_on_interrupt": true,
  "rejoin_after_leave": true,
  "disable_update_check": true,
  "performance": {
    "raft_multiplier": 1
  },
  "recursors": ["8.8.8.8"],
  "retry_join": [ 
    "10.10.1.111:8301",
    "10.10.1.112:8301",
    "10.10.1.113:8301"
    ],
  "retry_join_wan": [ 
    "10.10.1.11:8302",
    "10.10.1.12:8302",
    "10.10.1.13:8302"
    ]
}
EOF
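Before starting the agents, you can optionally validate the configuration directory (the consul validate subcommand ships with the 0.9.x releases):
consul validate /etc/consul.d/server/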
Note1: The above config.json is for the first node in each DC. Make sure to change the following fields on the other two nodes (see the sketch after this list).
  1. bind_addr
  2. node_name
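For example, node two in DC1 would differ only in these two fields; node three follows the same pattern with 10.10.1.13 (the ConsulServer2/ConsulServer3 names are assumed from the naming pattern above):
  "bind_addr": "10.10.1.12",
  "node_name": "ConsulServer2",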
Tip: The performance setting (raft_multiplier) defaults to 5; the reason for that (to my understanding) is to accommodate small AWS instance types. For maximum performance set it to 1. Next, to start the Consul servers, run the below: first on the three DC1 nodes, then, once they are up, on the three DC2 nodes. Tip: You can remove the nohup to run in the foreground.
nohup su - consul -c "/usr/bin/consul agent -config-dir /etc/consul.d/server/ -ui -client=0.0.0.0" &
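With the servers started in both data centers, you can confirm LAN and WAN membership from any server node:
# LAN members in the local data center
consul members

# Server members across both data centers (WAN gossip pool)
consul members -wan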
Note: The -ui option in the startup command enables the Web UI; if you don't want the Web UI on the Consul servers, just remove it. Now, let's move on to the Consul client configuration.
Consul config.json for the Consul Clients
Consul Client DC1 – First node config.json
cat <<'EOF' > /etc/consul.d/client/config.json
{
        "bind_addr": "10.10.1.51",
        "datacenter": "dc1",
        "data_dir": "/var/consul",
        "encrypt": "G1Y/7ooXz1hpPmyzj2RlDg==",
        "log_level": "INFO",
        "enable_debug": true,
        "node_name": "Dc1Client1",
        "enable_script_checks": true,
        "server": false,
        "recursors": ["8.8.8.8"],
        "services": [{
                        "id": "http",
                        "name": "Apache",
                        "tags": ["HTTP"],
                        "port": 80,
                        "checks": [{
                                "script": "curl 192.168.1.190 >/dev/null 2>&1",
                                "interval": "10s"
                        }]
                },
                {
                        "id": "db1-prod-dc2",
                        "name": "db1",
                        "tags": ["dc2-db1"],
                        "Service": "db1",
                        "Address":"172.16.1.50",
                        "port": 22,
                        "checks": [
                                    {
                                      "id": "db1-dc2",
                                      "name": "Prod db1 DC2",
                                      "service_id": "db1",
                                      "tcp": "172.16.1.50:3306",
                                      "interval": "2s",
                                      "timeout": "1s"
                                    }
                                  ]
                }
        ],
        "rejoin_after_leave": true,
        "disable_update_check": true,
        "retry_join": [ 
          "10.10.1.11:8301",
          "10.10.1.12:8301",
          "10.10.1.13:8301"
          ]
}
EOF
Consul Client DC2 – First node config.json
cat <<'EOF' > /etc/consul.d/client/config.json
{
        "bind_addr": "10.10.1.151",
        "datacenter": "dc2",
        "data_dir": "/var/consul",
        "encrypt": "G1Y/7ooXz1hpPmyzj2RlDg==",
        "log_level": "INFO",
        "enable_debug": true,
        "node_name": "Dc2Client1",
        "enable_script_checks": true,
        "server": false,
        "recursors": ["8.8.8.8"],
        "services": [{
                        "id": "http",
                        "name": "Apache",
                        "tags": ["HTTP"],
                        "port": 80,
                        "checks": [{
                                "script": "curl 192.168.1.190 >/dev/null 2>&1",
                                "interval": "10s"
                        }]
                },
                {
                        "id": "db1-prod-dc1",
                        "name": "db1",
                        "tags": ["db1-dc1"],
                        "Service": "db1",
                        "Address":"192.168.1.50",
                        "port": 22,
                        "checks": [
                                    {
                                      "id": "db1-dc1",
                                      "name": "Prod db1 DC1",
                                      "service_id": "db1",
                                      "tcp": "192.168.1.50:3306",
                                      "interval": "2s",
                                      "timeout": "1s"
                                    }
                                  ]
                }
        ],
        "retry_join": [
          "10.10.1.111:8301",
          "10.10.1.112:8301",
          "10.10.1.113:8301"
          ],
        "rejoin_after_leave": true,
        "disable_update_check": true
}
EOF
Note: The Address property under services can be used to replace the DNS reply address for this service lookup. To start the Consul client, just run the below. Tip: You can remove the nohup to run in the foreground.
nohup su - consul -c "/usr/bin/consul agent -config-dir /etc/consul.d/client -ui -client=0.0.0.0" &
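Once the client has joined, you can verify that the registered service resolves and that the Address override is what the DNS interface returns; for example, on the DC1 client (db1 is the service name defined above):
# DNS lookup against the local agent's DNS interface (port 8600)
dig @127.0.0.1 -p 8600 db1.service.consul +short

# Service health details via the HTTP API
curl http://127.0.0.1:8500/v1/health/service/db1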
If all is done correctly, you should now have a working Consul cluster. To access the Web UI, just go to any Consul server or client on port 8500. For example, http://10.10.1.11:8500 brings you to the screen below; pick your DC and continue to the node and service selection.
Pick your Data Center
List of Consul services
A failed Consul service
To continue reading part two, on how to configure Consul for multi Data Center, click here.