- A simple HA setup: install Keepalived on two VMs
1. Keepalived basics
- Keepalived operates around a virtual IP (VIP)
- It monitors the master server and, when that node fails, fails the VIP over to a standby server
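- VRRP advertisements go to the multicast address 224.0.0.18 (IP protocol 112), so the current MASTER can be observed directly on the wire; a minimal sketch, assuming tcpdump is installed and the VRRP traffic runs over eth1:
# Run on either node; the advertisement sender is the current MASTER
$ tcpdump -i eth1 host 224.0.0.18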
2. Keepalived architecture
- Create two servers, node 1 and node 2
- Configure node 1 as the MASTER server and node 2 as the BACKUP server
- The two servers health-check each other; when the MASTER server fails, the BACKUP server takes over the VIP so the service keeps running without interruption
- If an IP alias is placed on eth0, the underlying network treats it as IP spoofing and cuts off the VM's traffic, so for this setup you first need to attach an additional interface to each VM and then create the VIP on it
- Complete the prerequisites below before building the HA configuration with Keepalived
3. Prerequisites: three tasks before building HA with Keepalived
- Create a private subnet → the 192.168.100.0/24 range
- Attach an additional interface to node 1 and node 2 → an existing extra interface can also be reused
- Assign the VIP to node 1 and node 2 (a quick verification sketch follows this list)
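- A quick way to confirm these prerequisites took effect on each node (a sketch, assuming a reasonably recent iproute2):
# Brief address listing for the additional interface; run on node1 and node2
$ ip -br addr show eth1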
3.1. Creating the private subnet
- node 1 and node 2 sit on a private subnet in the 192.168.100.0/24 range
3.2. Attaching an additional interface to node 1 and node 2
- Attach an additional interface to node 1 and node 2
- node 1 (MASTER) → 192.168.100.101
- node 2 (BACKUP) → 192.168.100.102
3.3. Assigning the VIP (Virtual IP) to node 1 and node 2
- Assign the VIP (Virtual IP) to node 1 and node 2
- Set the VIP to 192.168.100.250 → added as the eth1:0 interface on both node 1 and node 2
Adding 192.168.100.250 to eth1:0 on node1
# On node1 -> add the VIP temporarily
$ ifconfig eth1:0 192.168.100.250 netmask 255.255.255.0

# On node1 -> add the VIP permanently
$ cp /etc/sysconfig/network-scripts/ifcfg-eth1 /etc/sysconfig/network-scripts/ifcfg-eth1:0
$ vi /etc/sysconfig/network-scripts/ifcfg-eth1:0
DEVICE=eth1:0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.100.250
NETMASK=255.255.255.0

# Restart the network
$ systemctl restart network

# Verify on node1 that the configuration is in place
$ ifconfig
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.100.101  netmask 255.255.255.0  broadcast 192.168.100.255
        inet6 fe80::a00:27ff:fe30:d141  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:30:d1:41  txqueuelen 1000  (Ethernet)
        RX packets 11  bytes 1971 (1.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 22  bytes 2342 (2.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth1:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.100.250  netmask 255.255.255.0  broadcast 192.168.100.255
        ether 08:00:27:30:d1:41  txqueuelen 1000  (Ethernet)
Adding 192.168.100.250 to eth1:0 on node2
# On node2 -> add the VIP temporarily
$ ifconfig eth1:0 192.168.100.250 netmask 255.255.255.0

# On node2 -> add the VIP permanently
$ cp /etc/sysconfig/network-scripts/ifcfg-eth1 /etc/sysconfig/network-scripts/ifcfg-eth1:0
$ vi /etc/sysconfig/network-scripts/ifcfg-eth1:0
DEVICE=eth1:0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.100.250
NETMASK=255.255.255.0

# Restart the network
$ systemctl restart network

# Verify on node2 that the configuration is in place
$ ifconfig
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.100.102  netmask 255.255.255.0  broadcast 192.168.100.255
        inet6 fe80::a00:27ff:fe44:4b29  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:44:4b:29  txqueuelen 1000  (Ethernet)
        RX packets 6  bytes 1107 (1.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 22  bytes 2342 (2.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth1:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.100.250  netmask 255.255.255.0  broadcast 192.168.100.255
        ether 08:00:27:44:4b:29  txqueuelen 1000  (Ethernet)
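- On systems where ifconfig is deprecated, the same alias can be created with the ip command instead; a minimal sketch (the label keeps the eth1:0 name visible in ifconfig output):
# Temporary equivalent of the ifconfig alias above
$ ip addr add 192.168.100.250/24 dev eth1 label eth1:0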
4. Building HA for node 1 and node 2: installing Keepalived and editing its configuration file
- Install Keepalived on node1 and node2
- After installation, edit the Keepalived configuration file
4.1. Keepalived settings → after applying them, start the Keepalived service on node1 and node2
- Set the MASTER server's priority to 200 and the BACKUP server's priority to 100 → the node with the higher priority becomes the MASTER
- The virtual_router_id value must be identical on the MASTER and BACKUP servers → it is left at its default value (51)
- auth_pass is set to a simple value, '1010'
- The auth_pass value must likewise be identical on the MASTER and BACKUP servers
- virtual_ipaddress must be set to the VIP (192.168.100.250); a config sanity-check sketch follows this list
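- After editing, one quick sanity check is to diff the two nodes' config files; a sketch, assuming passwordless SSH from a management host and that the nodes resolve as node1/node2 (hypothetical names):
# Only the state and priority lines should differ between the two nodes
$ diff <(ssh node1 cat /etc/keepalived/keepalived.conf) <(ssh node2 cat /etc/keepalived/keepalived.conf)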
4.2. node 1 (MASTER server)
Installing and running the Keepalived package on the MASTER server
$ yum -y update

# Install the Keepalived package
$ yum -y install keepalived

# Edit the Keepalived configuration file
$ vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1010
    }
    virtual_ipaddress {
        192.168.100.250
    }
}

# Start keepalived
$ systemctl start keepalived
$ systemctl enable keepalived

# Check the keepalived service status
$ systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-07-30 15:40:41 UTC; 11s ago
 Main PID: 1081 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─1081 /usr/sbin/keepalived -D
           ├─1082 /usr/sbin/keepalived -D
           └─1083 /usr/sbin/keepalived -D

Jul 30 15:40:43 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:40:43 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:40:43 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:40:43 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:40:48 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:40:48 master1 Keepalived_vrrp[1083]: VRRP_Instance(VI_1) Sending/queueing gratuitous...250
Jul 30 15:40:48 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:40:48 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:40:48 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:40:48 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Hint: Some lines were ellipsized, use -l to show in full.
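- Whether keepalived actually claimed the VIP on the MASTER can be verified on the interface itself; a minimal sketch (keepalived typically adds the address with a /32 prefix, so it may not appear as eth1:0 in ifconfig output):
# On node1: the VIP should be listed as an additional address on eth1
$ ip addr show eth1 | grep 192.168.100.250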
4.3. node 2 (BACKUP server)
Installing and running the Keepalived package on the BACKUP server
$ yum -y update

# Install the Keepalived package
$ yum -y install keepalived

# Edit the Keepalived configuration file
$ vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1010
    }
    virtual_ipaddress {
        192.168.100.250
    }
}

# Start keepalived
$ systemctl start keepalived
$ systemctl enable keepalived

# Check the keepalived service status
$ systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-07-30 15:40:41 UTC; 11s ago
 Main PID: 1075 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─1075 /usr/sbin/keepalived -D
           ├─1076 /usr/sbin/keepalived -D
           └─1077 /usr/sbin/keepalived -D

Jul 30 15:40:41 master2 Keepalived_vrrp[1077]: Registering Kernel netlink reflector
Jul 30 15:40:41 master2 Keepalived_vrrp[1077]: Registering Kernel netlink command channel
Jul 30 15:40:41 master2 Keepalived_vrrp[1077]: Registering gratuitous ARP shared channel
Jul 30 15:40:41 master2 Keepalived_vrrp[1077]: Opening file '/etc/keepalived/keepalived.conf'.
Jul 30 15:40:41 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) removing protocol VIPs.
Jul 30 15:40:41 master2 Keepalived_vrrp[1077]: Using LinkWatch kernel netlink reflector...
Jul 30 15:40:41 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jul 30 15:40:41 master2 Keepalived_vrrp[1077]: VRRP sockpool: [ifindex(3), proto(112), unicast...1)]
Jul 30 15:40:41 master2 Keepalived_healthcheckers[1076]: Initializing ipvs
Jul 30 15:40:41 master2 Keepalived_healthcheckers[1076]: Opening file '/etc/keepalived/keepaliv...'.
Hint: Some lines were ellipsized, use -l to show in full.
5. First test → checking that Keepalived failover works
- Set up one test server (192.168.100.201) on the same private subnet (the 192.168.100.0/24 range) as node 1 and node 2
- While pinging the VIP (192.168.100.250), shut down node 1 (the MASTER server); a timestamped-ping sketch follows this list
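- Timestamping the ping output makes the failover moment easier to spot; a minimal sketch run on the test server:
# Prefix every ping reply with the wall-clock time
$ ping 192.168.100.250 | while read line; do echo "$(date +%T) $line"; done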
5.1. Ping test from the test server (192.168.100.201) to the VIP (192.168.100.250)
# ping test from the test server (192.168.100.201)
$ ping 192.168.100.250
PING 192.168.100.250 (192.168.100.250) 56(84) bytes of data.
64 bytes from 192.168.100.250: icmp_seq=1 ttl=64 time=0.335 ms
64 bytes from 192.168.100.250: icmp_seq=2 ttl=64 time=0.392 ms
64 bytes from 192.168.100.250: icmp_seq=3 ttl=64 time=0.477 ms
64 bytes from 192.168.100.250: icmp_seq=4 ttl=64 time=0.469 ms
64 bytes from 192.168.100.250: icmp_seq=5 ttl=64 time=0.337 ms
64 bytes from 192.168.100.250: icmp_seq=6 ttl=64 time=0.322 ms
64 bytes from 192.168.100.250: icmp_seq=7 ttl=64 time=0.720 ms <----- fail-over from node1 (MASTER server) to node2 (BACKUP server)
64 bytes from 192.168.100.250: icmp_seq=8 ttl=64 time=0.330 ms
64 bytes from 192.168.100.250: icmp_seq=9 ttl=64 time=0.417 ms
64 bytes from 192.168.100.250: icmp_seq=10 ttl=64 time=0.407 ms
64 bytes from 192.168.100.250: icmp_seq=11 ttl=64 time=0.314 ms
64 bytes from 192.168.100.250: icmp_seq=12 ttl=64 time=0.270 ms
64 bytes from 192.168.100.250: icmp_seq=13 ttl=64 time=0.404 ms
64 bytes from 192.168.100.250: icmp_seq=14 ttl=64 time=0.346 ms
64 bytes from 192.168.100.250: icmp_seq=15 ttl=64 time=0.271 ms
5.2. Checking /var/log/messages on node2 (the BACKUP server) confirms that it took over the VIP (192.168.100.250) from node 1 (the MASTER server)
# check on node2
$ tail -f /var/log/messages
Jul 30 15:50:45 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jul 30 15:50:46 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Entering MASTER STATE
Jul 30 15:50:46 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) setting protocol VIPs.
Jul 30 15:50:46 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:46 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth1 for 192.168.100.250
Jul 30 15:50:46 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:46 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:46 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:46 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:51 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:51 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth1 for 192.168.100.250
Jul 30 15:50:51 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:51 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:51 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:51 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:51:03 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Received advert with higher priority 200, ours 100
Jul 30 15:51:03 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jul 30 15:51:03 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) removing protocol VIPs.
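- The tail end of this log also shows node 1 coming back online: once node 2 receives an advertisement with the higher priority (200 vs. its own 100), it removes the VIP and returns to the BACKUP state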
6. Second test → checking that Keepalived failover works
- Set up one test server (192.168.100.201) on the same private subnet (the 192.168.100.0/24 range) as node 1 and node 2
- While pinging the VIP (192.168.100.250), stop the keepalived service on node 1 (the MASTER server) and watch the traffic
- Then restart the keepalived service on node 1 (the MASTER server) and watch the traffic again; a log-filter sketch follows this list
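- To follow just the VRRP state transitions on either node while this test runs, the log stream can be filtered; a minimal sketch:
# Print only state-transition lines as they are written
$ tail -f /var/log/messages | grep --line-buffered 'VRRP_Instance'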
6.1. Ping test from the test server (192.168.100.201) to the VIP (192.168.100.250)
# ping test from the test server (192.168.100.201)
$ ping 192.168.100.250
PING 192.168.100.250 (192.168.100.250) 56(84) bytes of data.
64 bytes from 192.168.100.250: icmp_seq=1 ttl=64 time=0.332 ms
64 bytes from 192.168.100.250: icmp_seq=2 ttl=64 time=0.250 ms
64 bytes from 192.168.100.250: icmp_seq=3 ttl=64 time=0.443 ms
64 bytes from 192.168.100.250: icmp_seq=4 ttl=64 time=0.517 ms
64 bytes from 192.168.100.250: icmp_seq=5 ttl=64 time=0.373 ms
64 bytes from 192.168.100.250: icmp_seq=6 ttl=64 time=0.481 ms
64 bytes from 192.168.100.250: icmp_seq=7 ttl=64 time=0.442 ms
64 bytes from 192.168.100.250: icmp_seq=8 ttl=64 time=0.382 ms
64 bytes from 192.168.100.250: icmp_seq=9 ttl=64 time=0.445 ms
6.2. Under normal conditions, traffic from the test server arrives only at the VIP (eth1:0) on node1 (the MASTER server) → no traffic reaches node 2 (the BACKUP server)
# traffic arrives only at the VIP on node1 (MASTER server)
$ tcpdump -i eth1:0
16:52:20.057581 IP worker > master1: ICMP echo request, id 19909, seq 1, length 64
16:52:20.057609 IP master1 > worker: ICMP echo reply, id 19909, seq 1, length 64
16:52:20.292151 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
16:52:21.061380 IP worker > master1: ICMP echo request, id 19909, seq 2, length 64
16:52:21.061402 IP master1 > worker: ICMP echo reply, id 19909, seq 2, length 64
16:52:21.293746 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
16:52:22.062916 IP worker > master1: ICMP echo request, id 19909, seq 3, length 64
16:52:22.062938 IP master1 > worker: ICMP echo reply, id 19909, seq 3, length 64
16:52:22.295863 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
# node2 (BACKUP server) stands by -> it only sees the VRRP advertisements from the master
$ tcpdump -i eth1:0
16:02:23.931301 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
6.3. Stop the keepalived service on node1 (the MASTER server) → traffic now arrives at node 2 (the BACKUP server)
$ systemctl stop keepalived
# node1 (MASTER server) stands by -> it only sees the VRRP advertisements from the backup
$ tcpdump -i eth1:0
16:54:00.591586 IP master2 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simp
# /var/log/messages on node1 (MASTER server) -> confirms the HA master shut down
$ tail -f /var/log/messages
Jul 30 16:55:02 master1 kernel: device eth1 left promiscuous mode
Jul 30 16:55:29 master1 systemd: Stopping LVS and VRRP High Availability Monitor...
Jul 30 16:55:29 master1 Keepalived[4960]: Stopping
Jul 30 16:55:29 master1 Keepalived_vrrp[4962]: VRRP_Instance(VI_1) sent 0 priority
Jul 30 16:55:29 master1 Keepalived_vrrp[4962]: VRRP_Instance(VI_1) removing protocol VIPs.
Jul 30 16:55:29 master1 Keepalived_healthcheckers[4961]: Stopped
Jul 30 16:55:30 master1 Keepalived_vrrp[4962]: Stopped
Jul 30 16:55:30 master1 Keepalived[4960]: Stopped Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2
Jul 30 16:55:30 master1 systemd: Stopped LVS and VRRP High Availability Monitor.
# traffic now arrives only at the VIP on node2 (BACKUP server)
$ tcpdump -i eth1:0
16:53:30.532423 IP master2 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
16:53:30.548001 IP worker > master2: ICMP echo request, id 19910, seq 18, length 64
16:53:30.548028 IP master2 > worker: ICMP echo reply, id 19910, seq 18, length 64
16:53:31.549477 IP worker > master2: ICMP echo request, id 19910, seq 19, length 64
16:53:31.549499 IP master2 > worker: ICMP echo reply, id 19910, seq 19, length 64
16:53:31.549530 IP master2 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
16:53:32.549784 IP worker > master2: ICMP echo request, id 19910, seq 20, length 64
16:53:32.549824 IP master2 > worker: ICMP echo reply, id 19910, seq 20, length 64
16:53:32.549972 IP master2 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
# /var/log/messages on node2 (BACKUP server) -> confirms the HA backup became active
$ tail -f /var/log/messages
Jul 30 16:55:29 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jul 30 16:55:30 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Entering MASTER STATE
Jul 30 16:55:30 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) setting protocol VIPs.
Jul 30 16:55:30 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:55:30 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth1 for 192.168.100.250
Jul 30 16:55:30 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:55:30 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:55:30 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:55:30 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:55:35 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:55:35 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth1 for 192.168.100.250
6.4. Restart the keepalived service on node1 (the MASTER server) → traffic arrives at node 1 (the MASTER server) again
$ systemctl restart keepalived
# traffic arrives only at the VIP on node1 (MASTER server) again
$ tcpdump -i eth1:0
16:54:17.656150 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
16:54:18.626524 IP worker > master1: ICMP echo request, id 19910, seq 66, length 64
16:54:18.626563 IP master1 > worker: ICMP echo reply, id 19910, seq 66, length 64
16:54:18.657517 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
16:54:19.628842 IP worker > master1: ICMP echo request, id 19910, seq 67, length 64
16:54:19.628864 IP master1 > worker: ICMP echo reply, id 19910, seq 67, length 64
16:54:19.658916 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
# /var/log/messages on node1 (MASTER server) -> confirms the HA master restarted
$ tail -f /var/log/messages
Jul 30 16:58:51 master1 Keepalived[4985]: Starting Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2
Jul 30 16:58:51 master1 Keepalived[4985]: Opening file '/etc/keepalived/keepalived.conf'.
Jul 30 16:58:51 master1 Keepalived[4986]: Starting Healthcheck child process, pid=4987
Jul 30 16:58:51 master1 systemd: Started LVS and VRRP High Availability Monitor.
Jul 30 16:58:51 master1 Keepalived[4986]: Starting VRRP child process, pid=4988
Jul 30 16:58:51 master1 Keepalived_vrrp[4988]: Registering Kernel netlink reflector
Jul 30 16:58:51 master1 Keepalived_vrrp[4988]: Registering Kernel netlink command channel
Jul 30 16:58:51 master1 Keepalived_vrrp[4988]: Registering gratuitous ARP shared channel
Jul 30 16:58:51 master1 Keepalived_vrrp[4988]: Opening file '/etc/keepalived/keepalived.conf'.
Jul 30 16:58:51 master1 Keepalived_vrrp[4988]: VRRP_Instance(VI_1) removing protocol VIPs.
Jul 30 16:58:51 master1 Keepalived_vrrp[4988]: Using LinkWatch kernel netlink reflector...
Jul 30 16:58:51 master1 Keepalived_vrrp[4988]: VRRP sockpool: [ifindex(3), proto(112), unicast(0), fd(10,11)]
Jul 30 16:58:51 master1 Keepalived_healthcheckers[4987]: Opening file '/etc/keepalived/keepalived.conf'.
Jul 30 16:58:51 master1 Keepalived_vrrp[4988]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jul 30 16:58:52 master1 Keepalived_vrrp[4988]: VRRP_Instance(VI_1) Entering MASTER STATE
Jul 30 16:58:52 master1 Keepalived_vrrp[4988]: VRRP_Instance(VI_1) setting protocol VIPs.
Jul 30 16:58:52 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:52 master1 Keepalived_vrrp[4988]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth1 for 192.168.100.250
Jul 30 16:58:52 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:52 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:52 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:52 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:57 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:57 master1 Keepalived_vrrp[4988]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth1 for 192.168.100.250
Jul 30 16:58:57 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:57 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:57 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:57 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
# node2 (BACKUP server) stands by -> it only sees the VRRP advertisements from the master
$ tcpdump -i eth1:0
16:54:37.679862 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
# /var/log/messages on node2 (BACKUP server) -> confirms the backup handed the MASTER role back
$ tail -f /var/log/messages
Jul 30 16:58:51 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Received advert with higher priority 200, ours 100
Jul 30 16:58:51 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jul 30 16:58:51 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) removing protocol VIPs.
7. Third test → checking that Keepalived failover works
- When the MASTER's network is cut, traffic fails over to the backup as expected; after the MASTER's network comes back up, however, traffic does not automatically return to the master (the keepalived service must be restarted)
- Set up one test server (192.168.100.201) on the same private subnet (the 192.168.100.0/24 range) as node 1 and node 2
- While pinging the VIP (192.168.100.250), bring down the network on node 1 (the MASTER server)
- After some time, bring the network back up and reconfigure the eth1:0 VIP
- Even with the network back up, traffic keeps flowing to the backup → it does not return to the master server (a recovery sketch follows this list)
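- Based on this behavior, a recovery sketch for node1 once its network is back up (re-create the alias as in 7.4 below, then restart keepalived so its priority-200 advertisements reclaim the VIP):
# On node1, after the network has been restarted
$ ifconfig eth1:0 192.168.100.250 netmask 255.255.255.0
$ systemctl restart keepalived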
7.1. Ping test from the test server (192.168.100.201) to the VIP (192.168.100.250)
# ping test from the test server (192.168.100.201)
$ ping 192.168.100.250
PING 192.168.100.250 (192.168.100.250) 56(84) bytes of data.
64 bytes from 192.168.100.250: icmp_seq=1 ttl=64 time=0.335 ms
64 bytes from 192.168.100.250: icmp_seq=2 ttl=64 time=0.392 ms
64 bytes from 192.168.100.250: icmp_seq=3 ttl=64 time=0.477 ms
64 bytes from 192.168.100.250: icmp_seq=4 ttl=64 time=0.469 ms
64 bytes from 192.168.100.250: icmp_seq=5 ttl=64 time=0.337 ms
64 bytes from 192.168.100.250: icmp_seq=6 ttl=64 time=0.322 ms
64 bytes from 192.168.100.250: icmp_seq=7 ttl=64 time=0.720 ms <----- fail-over from node1 (MASTER server) to node2 (BACKUP server)
64 bytes from 192.168.100.250: icmp_seq=8 ttl=64 time=0.330 ms
64 bytes from 192.168.100.250: icmp_seq=9 ttl=64 time=0.417 ms
64 bytes from 192.168.100.250: icmp_seq=10 ttl=64 time=0.407 ms
64 bytes from 192.168.100.250: icmp_seq=11 ttl=64 time=0.314 ms
64 bytes from 192.168.100.250: icmp_seq=12 ttl=64 time=0.270 ms
64 bytes from 192.168.100.250: icmp_seq=13 ttl=64 time=0.404 ms
64 bytes from 192.168.100.250: icmp_seq=14 ttl=64 time=0.346 ms
64 bytes from 192.168.100.250: icmp_seq=15 ttl=64 time=0.271 ms
64 bytes from 192.168.100.250: icmp_seq=16 ttl=64 time=0.314 ms
7.2. Under normal conditions, traffic from the test server arrives only at the VIP (eth1:0) on node1 (the MASTER server) → no traffic reaches node 2 (the BACKUP server)
# traffic arrives only at the VIP on node1 (MASTER server)
$ tcpdump -i eth1:0
15:58:09.214853 IP worker > master1: ICMP echo request, id 1015, seq 1, length 64
15:58:09.214893 IP master1 > worker: ICMP echo reply, id 1015, seq 1, length 64
15:58:10.121214 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
15:58:10.216097 IP worker > master1: ICMP echo request, id 1015, seq 2, length 64
15:58:10.216121 IP master1 > worker: ICMP echo reply, id 1015, seq 2, length 64
15:58:11.122587 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
15:58:11.216543 IP worker > master1: ICMP echo request, id 1015, seq 3, length 64
15:58:11.216564 IP master1 > worker: ICMP echo reply, id 1015, seq 3, length 64
15:58:12.123952 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
15:58:12.218872 IP worker > master1: ICMP echo request, id 1015, seq 4, length 64
15:58:12.218894 IP master1 > worker: ICMP echo reply, id 1015, seq 4, length 64
# node2 (BACKUP server) stands by -> it only sees the VRRP advertisements from the master
$ tcpdump -i eth1:0
16:02:23.931301 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
7.3. Bring down the network on node1 (the MASTER server) → traffic now arrives at node 2 (the BACKUP server)
$ systemctl stop network
# nothing arrives at the VIP on node1 (MASTER server) -> connectivity is cut
$ tcpdump -i eth1:0
# traffic arrives only at the VIP on node2 (BACKUP server)
$ tcpdump -i eth1:0
16:06:43.837509 ARP, Request who-has master2 tell worker, length 46
16:06:43.837526 ARP, Reply master2 is-at 08:00:27:44:4b:29 (oui Unknown), length 28
16:06:43.837837 IP worker > master2: ICMP echo request, id 1015, seq 514, length 64
16:06:43.837854 IP master2 > worker: ICMP echo reply, id 1015, seq 514, length 64
16:06:44.001473 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
16:06:44.843864 IP worker > master2: ICMP echo request, id 1015, seq 515, length 64
16:06:44.843888 IP master2 > worker: ICMP echo reply, id 1015, seq 515, length 64
16:06:45.045735 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
7.4. After restarting the network on node1 (the MASTER server), reconfigure the VIP (192.168.100.250) → traffic still arrives at node 2 (the BACKUP server)
$ systemctl restart network
# bringing up node1's (MASTER server) eth1 interface does not automatically re-create the VIP -> the VIP (eth1:0) on eth1 must be configured manually
$ ifconfig
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.100.101 netmask 255.255.255.0 broadcast 192.168.100.255
inet6 fe80::a00:27ff:fe30:d141 prefixlen 64 scopeid 0x20<link>
ether 08:00:27:30:d1:41 txqueuelen 1000 (Ethernet)
RX packets 625 bytes 59642 (58.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1663 bytes 120386 (117.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
# manually configure the VIP (eth1:0) on eth1
$ ifconfig eth1:0 192.168.100.250 netmask 255.255.255.0
$ ifconfig
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.100.101 netmask 255.255.255.0 broadcast 192.168.100.255
inet6 fe80::a00:27ff:fe30:d141 prefixlen 64 scopeid 0x20<link>
ether 08:00:27:30:d1:41 txqueuelen 1000 (Ethernet)
RX packets 625 bytes 59642 (58.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1679 bytes 121346 (118.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.100.250 netmask 255.255.255.0 broadcast 192.168.100.255
ether 08:00:27:30:d1:41 txqueuelen 1000 (Ethernet)
# node1 (MASTER server) stands by -> it only sees the VRRP advertisements from the backup
$ tcpdump -i eth1:0
16:02:23.931301 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
# traffic arrives only at the VIP on node2 (BACKUP server)
$ tcpdump -i eth1:0
16:06:43.837509 ARP, Request who-has master2 tell worker, length 46
16:06:43.837526 ARP, Reply master2 is-at 08:00:27:44:4b:29 (oui Unknown), length 28
16:06:43.837837 IP worker > master2: ICMP echo request, id 1015, seq 514, length 64
16:06:43.837854 IP master2 > worker: ICMP echo reply, id 1015, seq 514, length 64
16:06:44.001473 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
16:06:44.843864 IP worker > master2: ICMP echo request, id 1015, seq 515, length 64
16:06:44.843888 IP master2 > worker: ICMP echo reply, id 1015, seq 515, length 64
16:06:45.045735 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
7.5. Even after the ping session from the test server (192.168.100.201) to the VIP (192.168.100.250) is ended and started again, the behavior stays the same
# ping test from the test server (192.168.100.201)
$ ping 192.168.100.250
PING 192.168.100.250 (192.168.100.250) 56(84) bytes of data.
64 bytes from 192.168.100.250: icmp_seq=1 ttl=64 time=0.335 ms
64 bytes from 192.168.100.250: icmp_seq=2 ttl=64 time=0.392 ms
64 bytes from 192.168.100.250: icmp_seq=3 ttl=64 time=0.477 ms
64 bytes from 192.168.100.250: icmp_seq=4 ttl=64 time=0.469 ms
64 bytes from 192.168.100.250: icmp_seq=5 ttl=64 time=0.337 ms
64 bytes from 192.168.100.250: icmp_seq=6 ttl=64 time=0.322 ms
64 bytes from 192.168.100.250: icmp_seq=7 ttl=64 time=0.720 ms
64 bytes from 192.168.100.250: icmp_seq=8 ttl=64 time=0.330 ms
64 bytes from 192.168.100.250: icmp_seq=9 ttl=64 time=0.417 ms
64 bytes from 192.168.100.250: icmp_seq=10 ttl=64 time=0.407 ms
64 bytes from 192.168.100.250: icmp_seq=11 ttl=64 time=0.314 ms
64 bytes from 192.168.100.250: icmp_seq=12 ttl=64 time=0.270 ms
64 bytes from 192.168.100.250: icmp_seq=13 ttl=64 time=0.404 ms
64 bytes from 192.168.100.250: icmp_seq=14 ttl=64 time=0.346 ms
64 bytes from 192.168.100.250: icmp_seq=15 ttl=64 time=0.271 ms
64 bytes from 192.168.100.250: icmp_seq=16 ttl=64 time=0.314 ms
64 bytes from 192.168.100.250: icmp_seq=17 ttl=64 time=0.491 ms
64 bytes from 192.168.100.250: icmp_seq=18 ttl=64 time=0.480 ms
64 bytes from 192.168.100.250: icmp_seq=19 ttl=64 time=0.389 ms
64 bytes from 192.168.100.250: icmp_seq=20 ttl=64 time=0.362 ms
64 bytes from 192.168.100.250: icmp_seq=21 ttl=64 time=0.392 ms
64 bytes from 192.168.100.250: icmp_seq=22 ttl=64 time=0.399 ms
64 bytes from 192.168.100.250: icmp_seq=23 ttl=64 time=0.381 ms
64 bytes from 192.168.100.250: icmp_seq=24 ttl=64 time=0.277 ms
64 bytes from 192.168.100.250: icmp_seq=25 ttl=64 time=0.570 ms
[...omitted]
^C
--- 192.168.100.250 ping statistics ---
908 packets transmitted, 889 received, 2% packet loss, time 910037ms
rtt min/avg/max/mdev = 0.151/0.452/6.065/0.216 ms
# ping test again from the test server (192.168.100.201)
$ ping 192.168.100.250
PING 192.168.100.250 (192.168.100.250) 56(84) bytes of data.
64 bytes from 192.168.100.250: icmp_seq=1 ttl=64 time=0.446 ms
64 bytes from 192.168.100.250: icmp_seq=2 ttl=64 time=0.608 ms
64 bytes from 192.168.100.250: icmp_seq=3 ttl=64 time=0.395 ms
64 bytes from 192.168.100.250: icmp_seq=4 ttl=64 time=0.452 ms
64 bytes from 192.168.100.250: icmp_seq=5 ttl=64 time=0.504 ms
64 bytes from 192.168.100.250: icmp_seq=6 ttl=64 time=0.483 ms
7.6. Checking whether traffic returns to the VIP (192.168.100.250) on node 1 (the MASTER server) → it still arrives at node 2 (the BACKUP server)
# node1 (MASTER server) stands by -> it only sees the VRRP advertisements from the backup
$ tcpdump -i eth1:0
16:41:59.321344 IP master2 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
# traffic arrives only at the VIP on node2 (BACKUP server)
$ tcpdump -i eth1:0
16:42:06.024044 IP worker > master2: ICMP echo request, id 19904, seq 45, length 64
16:42:06.024077 IP master2 > worker: ICMP echo reply, id 19904, seq 45, length 64
16:42:06.343284 IP master2 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
16:42:07.033519 IP worker > master2: ICMP echo request, id 19904, seq 46, length 64
16:42:07.033543 IP master2 > worker: ICMP echo reply, id 19904, seq 46, length 64
16:42:07.346173 IP master2 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20