Consul configuration in a multi Data Center environment
Below I am going to describe how to use Consul health checking across multiple data centers. A picture is worth a thousand words: the diagram below gives a high-level overview of the Consul multi-data-center architecture.

Consul Multi Data Center Diagram

Below are the highlights covered in this article in detail.
- OS required for the Consul VMs/Docker installation
- Consul Installation and configuration
- Consul GEO (query) configuration used in a Consul multi data center setup
- Consul and Python APIs
- Dashboard for Consul data (Host, Service Availability)
- Service Discovery
- Health Checking (TCP, HTTP, Script, Docker)
- KV Store
- Multi Datacenter
Architecture / Preparing / Installing the OS (Kernel Zones/VMs)
For the below Consul configuration you will need the following items.
- 6 Consul servers – 3 per DC (can be VMs or Docker containers)
- 2 Consul clients – 1 per DC. Tip: For maximum HA use at least 2 clients per DC.
- Your end hosts monitored by Consul (with no Consul agent installed)

Consul uses the ports listed below; make sure they are open between all agents, and that the Serf WAN port is also open between the servers in the two DCs.
- Server RPC (Default 8300). This is used by servers to handle incoming requests from other agents. TCP only.
- Serf LAN (Default 8301). This is used to handle gossip in the LAN. Required by all agents. TCP and UDP.
- Serf WAN (Default 8302). This is used by servers to gossip over the WAN to other servers. TCP and UDP.
- HTTP API (Default 8500). This is used by clients to talk to the HTTP API. TCP only.
- HTTPS API (Default 443). This is used by clients to talk to the HTTPS API. TCP only.
- DNS Interface (Default 8600). Used to resolve DNS queries. TCP and UDP.
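As a quick sanity check that these ports are actually open between zones, something like the sketch below can be run from one Consul node against a peer. This is only an illustration: 10.10.1.12 is an example peer taken from the address tables below, and it assumes a netcat binary that supports the -z (scan) and -w (timeout) options.

```
# Probe the Consul TCP ports on a peer node (example peer: 10.10.1.12)
for port in 8300 8301 8302 8500 8600; do
  nc -z -w 2 10.10.1.12 $port && echo "TCP $port open" || echo "TCP $port closed/filtered"
done
```

Note that Serf (8301/8302) and the DNS interface (8600) also use UDP, which a plain TCP probe does not cover.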
DC1 |
---|---
Name | IP Address
Consul Servers |
cons1 | 10.10.1.11
cons2 | 10.10.1.12
cons3 | 10.10.1.13
Consul Clients |
cl1 | 10.10.1.51
cl2 | 10.10.1.52
End Hosts |
host1 | 192.168.1.11
host2-vip | 192.168.1.50
host2 | 192.168.1.12
DC2 |
---|---
Name | IP Address
Consul Servers |
cons11 | 10.10.1.111
cons12 | 10.10.1.112
cons13 | 10.10.1.113
Consul Clients |
cl1 | 10.10.1.151
cl2 | 10.10.1.152
End Hosts |
host3 | 172.16.1.11
host3-vip | 172.16.1.50
host4 | 172.16.1.12
Solaris zones installation and configuration
First, we need to create the Solaris zone configuration; below are the steps to create it for the Consul servers and clients. Run the below to create the Consul zone configurations.

```
# Create the Consul server zone configurations
for i in {1..3} {11..13}; do
  zonecfg -z cons$i create -t SYSsolaris-kz
done

# Create the Consul client zone configurations
for i in {1..2}; do
  zonecfg -z cl$i create -t SYSsolaris-kz
done
```

Next, you need to generate a zone manifest and profile; run the below to do so.
```
# Create sysconfig profiles and copy the default AI manifest for the server zones
for i in {1..3} {11..13}; do
  sysconfig create-profile -o /tmp/cons${i}_profile.xml
  cp /usr/share/auto_install/manifest/zone_default.xml /tmp/cons${i}_manifest.xml
done

# Do the same for the client zones
for i in {1..2}; do
  sysconfig create-profile -o /tmp/cl${i}_profile.xml
  cp /usr/share/auto_install/manifest/zone_default.xml /tmp/cl${i}_manifest.xml
done
```

Now we are ready to install the zones. Install the Consul zones by running the below.
```
# Install the Consul server zones (-m takes the AI manifest, -c the sysconfig profile)
for i in {1..3} {11..13}; do
  zoneadm -z cons$i install -b /root/Downloads/sol-11.3-text-x86.iso \
    -m /tmp/cons${i}_manifest.xml -c /tmp/cons${i}_profile.xml
done

# Install the Consul client zones
for i in {1..2}; do
  zoneadm -z cl$i install -b /root/Downloads/sol-11.3-text-x86.iso \
    -m /tmp/cl${i}_manifest.xml -c /tmp/cl${i}_profile.xml
done
```
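Once the installs complete, the zones still need to be booted. A short sketch using the standard zoneadm subcommands:

```
# Boot the Consul server and client zones
for i in {1..3} {11..13}; do zoneadm -z cons$i boot; done
for i in {1..2}; do zoneadm -z cl$i boot; done

# Verify that all zones show up as installed/running
zoneadm list -cv
```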
Firewall and NAT (optional)
I also created a NAT rule for all zones (10.10.1.x), used for outgoing traffic to the external interface. Note: The NAT configuration is absolutely not a requirement for Consul to work; it is just a limitation in our network environment. To configure firewall NAT in BSD or Solaris, create a rule like the below.

```
cat /etc/firewall/pf.conf
# https://www.openbsd.org/faq/pf/nat.html
ext_if = "eth0"  # macro for the external interface
pass out on $ext_if from 10.10.1.0/24 to any nat-to 192.168.1.190
```

Now, let's enable the SMF firewall service.
```
svcadm enable svc:/network/firewall:default
```

We are now ready to move on to the Consul installation.
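To confirm the firewall service came up with the rules loaded, something like the below should work. This is a sketch: pfctl is the PF control utility referenced by the pf.conf above, and may not be present on every Solaris release.

```
# Check the SMF firewall service state
svcs svc:/network/firewall:default

# List the loaded PF rules (assumes PF/pfctl is available)
pfctl -s rules
```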
Consul installation
First, we need to download the latest Consul binary; you do so by running the below (replace with the OS and version you use). I used version 0.9.2.

```
wget https://releases.hashicorp.com/consul/0.9.2/consul_0.9.2_solaris_amd64.zip?_ga=2.100725697.1177602615.1504105338-325867268.1492723108
```

Consul is a single binary file, so installation is simple; just run the below on all zones (server and client).
```
cd /usr/bin
unzip -qq consul_0.9.2_solaris_amd64.zip
```
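To verify the binary is in place and executable on each zone, Consul's built-in version subcommand is enough:

```
# Should report Consul v0.9.2
consul version
```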
Consul user and configuration
Next, we need to create the Consul running environment: a Consul user and the directories outlined below. To create the consul user with the necessary directories, run the below on all Consul servers and clients.

```
groupadd consul
useradd -d /var/consul -g consul -m -s /bin/bash -c "Consul App" consul
mkdir -p /etc/consul.d/{bootstrap,server,client}
mkdir /var/consul
chown consul:consul /var/consul
```

Next, we need to generate a Consul secret join key; the key is used to encrypt all cluster gossip communication, and the same key must be used by every server and client in both data centers. Just run the below to do so.
```
consul keygen
G1Y/7ooXzfuyPmyzj2RlDg==
```

Finally, we need to create the Consul config.json; you do so by running the below.

Consul config.json for the Consul Servers

Consul Server DC1 – First node config.json
```
cat <<'EOF' > /etc/consul.d/server/config.json
{
  "bind_addr": "10.10.1.11",
  "datacenter": "dc1",
  "data_dir": "/var/consul",
  "encrypt": "G1Y/7ooXzfuyPmyzj2RlDg==",
  "log_level": "INFO",
  "enable_debug": true,
  "node_name": "ConsulServer1",
  "server": true,
  "bootstrap_expect": 3,
  "leave_on_terminate": false,
  "skip_leave_on_interrupt": true,
  "rejoin_after_leave": true,
  "disable_update_check": true,
  "performance": { "raft_multiplier": 1 },
  "recursors": ["8.8.8.8"],
  "retry_join": [ "10.10.1.11:8301", "10.10.1.12:8301", "10.10.1.13:8301" ],
  "retry_join_wan": [ "10.10.1.111:8302", "10.10.1.112:8302", "10.10.1.113:8302" ]
}
EOF
```

Consul Server DC2 – First node config.json
```
cat <<'EOF' > /etc/consul.d/server/config.json
{
  "bind_addr": "10.10.1.111",
  "datacenter": "dc2",
  "data_dir": "/var/consul",
  "encrypt": "G1Y/7ooXzfuyPmyzj2RlDg==",
  "log_level": "INFO",
  "enable_debug": true,
  "node_name": "ConsulServer4",
  "server": true,
  "bootstrap_expect": 3,
  "leave_on_terminate": false,
  "skip_leave_on_interrupt": true,
  "rejoin_after_leave": true,
  "disable_update_check": true,
  "performance": { "raft_multiplier": 1 },
  "recursors": ["8.8.8.8"],
  "retry_join": [ "10.10.1.111:8301", "10.10.1.112:8301", "10.10.1.113:8301" ],
  "retry_join_wan": [ "10.10.1.11:8302", "10.10.1.12:8302", "10.10.1.13:8302" ]
}
EOF
```

Note: The encrypt key must be identical everywhere (use the output of consul keygen above); WAN-joined data centers cannot gossip with mismatched keys. The above config.json is for the first node in each DC. Make sure to replace the fields below on the other two nodes (nodes two and three); see the sketch after this list.
- bind_addr
- node_name
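As a convenience, here is a minimal sketch that stamps out the configs for nodes two and three from the first node's file. It assumes the DC1 addresses and node names from the tables above; adapt the same idea for DC2.

```
# Sketch: derive the DC1 server configs for nodes 2 and 3 from node 1's file.
# Only the bind_addr line is rewritten, so the retry_join lists stay intact.
for n in 2 3; do
  sed -e "/bind_addr/s/10\.10\.1\.11/10.10.1.1$n/" \
      -e "s/ConsulServer1/ConsulServer$n/" \
      /etc/consul.d/server/config.json > /tmp/config.json.cons$n
done
# Copy each /tmp/config.json.cons$n to /etc/consul.d/server/config.json on its node.
```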
nohup su - consul -c "/usr/bin/consul agent -config-dir /etc/consul.d/server/ -ui -client=0.0.0.0" &Note: The above startup enables the Web UI. if you don’t like the Web UI on the Consul servers just remove the -ui option. Now, Lets move to the Consul client configuration. Consul config.json for the Consul Clients Consul Client DC1 – First node config.json
```
cat <<'EOF' > /etc/consul.d/client/config.json
{
  "bind_addr": "10.10.1.51",
  "datacenter": "dc1",
  "data_dir": "/var/consul",
  "encrypt": "G1Y/7ooXzfuyPmyzj2RlDg==",
  "log_level": "INFO",
  "enable_debug": true,
  "node_name": "Dc1Client1",
  "enable_script_checks": true,
  "server": false,
  "recursors": ["8.8.8.8"],
  "services": [
    {
      "id": "http",
      "name": "Apache",
      "tags": ["HTTP"],
      "port": 80,
      "checks": [
        { "script": "curl 192.168.1.190 >/dev/null 2>&1", "interval": "10s" }
      ]
    },
    {
      "id": "db1-prod-dc2",
      "name": "db1",
      "tags": ["dc2-db1"],
      "Address": "172.16.1.50",
      "port": 22,
      "checks": [
        {
          "id": "db1-dc2",
          "name": "Prod db1 DC2",
          "service_id": "db1",
          "tcp": "172.16.1.50:3306",
          "interval": "2s",
          "timeout": "1s"
        }
      ]
    }
  ],
  "rejoin_after_leave": true,
  "disable_update_check": true,
  "retry_join": [ "10.10.1.11:8301", "10.10.1.12:8301", "10.10.1.13:8301" ]
}
EOF
```

Consul Client DC2 – First node config.json
```
cat <<'EOF' > /etc/consul.d/client/config.json
{
  "bind_addr": "10.10.1.151",
  "datacenter": "dc2",
  "data_dir": "/var/consul",
  "encrypt": "G1Y/7ooXzfuyPmyzj2RlDg==",
  "log_level": "INFO",
  "enable_debug": true,
  "node_name": "Dc2Client1",
  "enable_script_checks": true,
  "server": false,
  "recursors": ["8.8.8.8"],
  "services": [
    {
      "id": "http",
      "name": "Apache",
      "tags": ["HTTP"],
      "port": 80,
      "checks": [
        { "script": "curl 192.168.1.190 >/dev/null 2>&1", "interval": "10s" }
      ]
    },
    {
      "id": "db1-prod-dc1",
      "name": "db1",
      "tags": ["db1-dc1"],
      "Address": "192.168.1.50",
      "port": 22,
      "checks": [
        {
          "id": "db1-dc1",
          "name": "Prod db1 DC1",
          "service_id": "db1",
          "tcp": "192.168.1.50:3306",
          "interval": "2s",
          "timeout": "1s"
        }
      ]
    }
  ],
  "retry_join": [ "10.10.1.111:8301", "10.10.1.112:8301", "10.10.1.113:8301" ],
  "rejoin_after_leave": true,
  "disable_update_check": true
}
EOF
```

Note: The Address property under services can be used to replace the DNS reply address for this service lookup. To start the Consul client, just run the below. Tip: You can remove the nohup to run in the foreground.
```
nohup su - consul -c "/usr/bin/consul agent -config-dir /etc/consul.d/client -ui -client=0.0.0.0" &
```

If all was done correctly, you should now have a working Consul cluster. To access the Web UI, just go to any Consul server or client on port 8500. For example, http://10.10.1.11:8500 would bring you to the below screen; pick your DC and continue to node and services selection.

Pick your Data Center

List of Consul services

A failed Consul service
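With both data centers up, a few quick checks confirm that everything joined correctly. The sketch below runs against a DC1 server (10.10.1.11, per the tables above) and uses standard Consul commands and HTTP endpoints; the dig example assumes dig is installed.

```
# LAN members of the local DC
consul members

# WAN pool: Consul servers from both DCs should show up here
consul members -wan

# List the registered services via the HTTP API
curl -s http://10.10.1.11:8500/v1/catalog/services

# Only the passing instances of db1
curl -s 'http://10.10.1.11:8500/v1/health/service/db1?passing'

# Resolve db1 through the Consul DNS interface; the reply should contain
# the service's Address override rather than the client node's IP
dig @10.10.1.11 -p 8600 db1.service.consul
```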
To continue reading part two, on how to configure Consul for multi data center failover, click here.