$ influx
# Check the initial databases
> show databases
name: databases
name
----
_internal
# Create a database named fluentdb
> create database fluentdb
# Confirm the database was created
> show databases
name: databases
name
----
_internal
fluentdb
# Switch to the newly created fluentdb database
> use fluentdb
Using database fluentdb
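As a quick sanity check that the new database accepts writes, a point can be inserted and queried back from the same influx shell; a minimal sketch, using a hypothetical measurement named test_cpu (not part of the actual pipeline):
# optional write/read check inside the influx shell
> insert test_cpu,host=server01 value=0.64
> select * from test_cpu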
$ docker images hippo_flask
REPOSITORY TAG IMAGE ID CREATED SIZE
hippo_flask v1 64d54ca5156c About a minute ago 51.7MB
3. Create and verify a container from the hippo_flask:v1 docker image
Create the container
$ docker run --name version1 -d -p 9000:9000 hippo_flask:v1
3d75997676b1f71d80eb72070ac3dbba9695ead6810b8d2a6cefcc1bad83b939
Check the created container
$ docker ps | grep version1
3d75997676b1 hippo_flask:v1 "python3 server.py" 14 seconds ago Up 13 seconds 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp version1
Communication test for the created container
1. Send traffic to the created container with curl to check that it communicates normally
Check communication with the curl command
$ curl localhost:9000/version
version 1
2. Send continuous traffic to the container using a for loop
Run curl in an infinite for loop to observe changes during deployment
$ for (( ; ; )); do curl localhost:9000/version; sleep 1; done
version 1
version 1
version 1
version 1
version 1
version 1
version 1
^C
3. Terminate the created container
# Check the hippo_flask:v1 container to terminate
$ docker ps | grep hippo_flask
3d75997676b1 hippo_flask:v1 "python3 server.py" 2 minutes ago Up 2 minutes 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp version1
# Remove that container
$ docker rm -f 3d75997676b1
# The container no longer appears, so it was removed successfully
$ docker ps | grep hippo_flask
Pushing an image to the GitLab private container registry
1. Log in to the GitLab private container registry
You need to log in to the GitLab container registry → attempt to log in at [server IP]:8001
After running the docker login command, enter the Username and Password to log in
$ docker login [server IP]:8001
Username: root
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
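Once logged in, the locally built image can be tagged with the registry address and pushed; a minimal sketch, assuming the hippo_flask:v1 image built above and a hypothetical root/hippo_flask project path in the registry:
# tag the local image with the private registry address
$ docker tag hippo_flask:v1 [server IP]:8001/root/hippo_flask:v1
# push the tagged image to the GitLab container registry
$ docker push [server IP]:8001/root/hippo_flask:v1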
※ Note → accessing the docker registry over HTTP causes an error
$ docker login [server IP]:8001
Username: root
Password:
Error response from daemon: Get "https://[server IP]:8001/v2/": http: server gave HTTP response to HTTPS client
Fix → change the docker daemon configuration on the server attempting access (add an insecure-registries entry)
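A minimal sketch of that change, assuming the Docker daemon configuration on the client machine lives at /etc/docker/daemon.json:
# allow plain-HTTP access to the private registry
$ vi /etc/docker/daemon.json
{
  "insecure-registries": ["[server IP]:8001"]
}
# restart docker so the setting takes effect
$ systemctl restart docker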
GitLab has a default repository storage location, but when operating with a separately added drive the storage location must be changed to that drive
Specify the desired storage directory under git_data_dirs
Store the data in gitlab_data under the /data directory mounted from /dev/sdb1 (/data/gitlab_data)
If the /data/gitlab_data/ directory does not exist, it must be created
$ mkdir /data/gitlab_data
# Change the repository storage directory location
$ vi /etc/gitlab/gitlab.rb
## ...omitted...
git_data_dirs({
"default" => {
"path" => "/data/gitlab_data"
}
})
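As with other gitlab.rb changes, the new storage path only takes effect after a reconfigure (any existing repository data may need to be copied to /data/gitlab_data beforehand); a sketch:
# apply the changed repository storage location
$ gitlab-ctl reconfigure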
3. Create a domain and apply SSL
When accessing by IP, SSL is not used
A domain is required to use SSL
The crt and key files must be copied into the /etc/gitlab/ssl directory
$ vi /etc/gitlab/gitlab.rb
external_url 'https://[domain]'
nginx['redirect_http_to_https'] = true
nginx['redirect_http_to_https_port'] = 80
nginx['ssl_certificate'] = "/etc/gitlab/ssl/[domain].crt"
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/[domain].key"
# Apply the domain and SSL settings
$ gitlab-ctl reconfigure
Configure a LoadBalancer (LB) as the single entry point for the multi masters → effectively an HA cluster configuration
Configure the LB with Nginx on the server at 192.168.0.100
Use the Nginx docker image for the LB → easier to operate and manage as a docker container
1. Create an nginx configuration file that gives the masters a single entry point
# Create the nginx.conf file so nginx can act as a Load Balancer
$ mkdir /etc/nginx
$ cat << END > /etc/nginx/nginx.conf
events {}
stream {
    upstream stream_backend {
        least_conn;
        server 192.168.0.200:6443;
        server 192.168.0.201:6443;
        server 192.168.0.202:6443;
    }
    server {
        listen 6443;
        proxy_pass stream_backend;
        proxy_timeout 300s;
        proxy_connect_timeout 1s;
    }
}
END
2. Run NGINX as a docker container to operate the LB
# Run the nginx container
$ docker run --name proxy -v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro --restart=always -p 6443:6443 -d nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
b380bbd43752: Pull complete
fca7e12d1754: Pull complete
745ab57616cb: Pull complete
a4723e260b6f: Pull complete
1c84ebdff681: Pull complete
858292fd2e56: Pull complete
Digest: sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36
Status: Downloaded newer image for nginx:latest
4324e6beaef0bd05d76af525fa415c4bcdf34fb807e4280e952108bf0a957630
# Confirm the nginx container is running
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4324e6beaef0 nginx "/docker-entrypoint.…" 54 seconds ago Up 53 seconds 80/tcp, 0.0.0.0:6443->6443/tcp, :::6443->6443/tcp proxy
# Check that communication works
$ curl 192.168.0.100:6443
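The plain-HTTP curl above mainly confirms that the port is reachable; since the upstream servers on 6443 normally speak TLS (e.g. Kubernetes API servers), a TLS request is a more meaningful check. A sketch, assuming the API servers allow anonymous access to /version:
$ curl -k https://192.168.0.100:6443/version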
Description of the main options in the global options (global), default options (defaults), and proxy options (listen) sections (a combined haproxy.cfg sketch follows this list)
global # global options section
daemon → run in background mode
log → syslog configuration
log-send-hostname → hostname configuration
uid → set the process user id by number
user → set the process user id by name
node → name used when two or more processes or servers share the same IP address (HA setting)
maxconn → maximum number of connections per process
defaults # default options section
log → syslog configuration
maxconn → maximum number of connections per process
listen
listen webfarm 10.101.22.76:80 → listen [proxy name] [ip]:[port]
mode http → connection protocol
option httpchk → health check
option log-health-checks → whether to log health checks
option forwardfor → forward client information to the backend
option httpclose → option to turn keep-alive off when it causes problems
cookie SERVERID rewrite → whether to use a cookie to distinguish servers
cookie JSESSIONID prefix → whether to inject server information after the prefix in an HA configuration
balance roundrobin → round-robin distribution method
stats enable → whether the server status page is enabled
stats uri /admin → URI of the server status page
server xvadm01.ncli 10.101.22.18:80 cookie admin_portal_1 check inter 1000 rise 2 fall 5 → real server information (server [hostname] [ip]:[port] cookie [server cookie name] check inter [interval(ms)] rise [checks to mark up] fall [checks to mark down])
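Putting the options above together, a minimal haproxy.cfg sketch; the host name, IPs, and stats URI simply reuse the example values from the list and are not from a real deployment:
global
    daemon
    log 127.0.0.1 local0
    maxconn 4096

defaults
    log global
    mode http
    maxconn 2048

listen webfarm 10.101.22.76:80
    mode http
    option httpchk
    option forwardfor
    balance roundrobin
    cookie SERVERID rewrite
    stats enable
    stats uri /admin
    server xvadm01.ncli 10.101.22.18:80 cookie admin_portal_1 check inter 1000 rise 2 fall 5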
3. balance options
Round robin is the most commonly used load-balancing method, but several others are available → load-balancing algorithms that can be set with this option (see the sketch after this list)
roundrobin → distribute requests sequentially (up to 4128 servers)
static-rr → distribute according to the weight assigned to each server
leastconn → distribute to the server with the fewest active connections
source → distribute by hashing the client IP over the weights of the running servers
uri → distribute by hashing the requested URI over the weights of the running servers (hash by URI length or depth)
url_param → for HTTP GET requests, check whether a specific parameter pattern exists and distribute to the server matching the condition (falls back to round robin if there is no match)
hdr → distribute only when the condition specified with hdr() is present in the HTTP header (falls back to round robin otherwise)
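For example, switching the listen block above from round robin to source-IP-based stickiness only changes the balance line; a sketch (the second server and its IP are hypothetical):
listen webfarm 10.101.22.76:80
    mode http
    balance source
    server xvadm01.ncli 10.101.22.18:80 check
    server xvadm02.ncli 10.101.22.19:80 check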
4. Configure HA for node 1 and node 2 by installing Keepalived and editing its configuration file
Install Keepalived on node1 and node2
After installation, edit the Keepalived configuration file
4.1. Keepalived settings → after applying the settings, start the Keepalived service on node1 and node2
Set the MASTER server's priority to 200 and the BACKUP server's priority to 100 → the server with the higher priority becomes the MASTER
The auth_pass and virtual_router_id values must be identical on the MASTER and BACKUP servers → virtual_router_id is left at its default value
auth_pass is simply set to '1010' here, identical on both the MASTER and BACKUP servers
virtual_ipaddress must be set to the VIP (192.168.100.250)
4.2. node 1 (MASTER server)
Install and run the Keepalived package on the MASTER server
$ yum -y update
# Install the Keepalived package
$ yum -y install keepalived
# Edit the Keepalived configuration file
$ vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1010
    }
    virtual_ipaddress {
        192.168.100.250
    }
}
# Start keepalived
$ systemctl start keepalived
$ systemctl enable keepalived
# Check the keepalived service status
$ systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2021-07-30 15:40:41 UTC; 11s ago
Main PID: 1081 (keepalived)
CGroup: /system.slice/keepalived.service
├─1081 /usr/sbin/keepalived -D
├─1082 /usr/sbin/keepalived -D
└─1083 /usr/sbin/keepalived -D
Jul 30 15:40:43 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:40:43 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:40:43 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:40:43 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:40:48 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:40:48 master1 Keepalived_vrrp[1083]: VRRP_Instance(VI_1) Sending/queueing gratuitous...250
Jul 30 15:40:48 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:40:48 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:40:48 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:40:48 master1 Keepalived_vrrp[1083]: Sending gratuitous ARP on eth1 for 192.168.100.250
Hint: Some lines were ellipsized, use -l to show in full.
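To confirm that the VIP actually landed on the MASTER, the address list on eth1 can be checked; while node 1 is healthy, 192.168.100.250 should appear on node 1 and not on node 2:
# check whether the VIP is bound to eth1
$ ip addr show eth1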
4.3. node 2 (BACKUP server)
Install and run the Keepalived package on the BACKUP server
$ yum -y update
# Install the Keepalived package
$ yum -y install keepalived
# Edit the Keepalived configuration file
$ vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1010
    }
    virtual_ipaddress {
        192.168.100.250
    }
}
# Start keepalived
$ systemctl start keepalived
$ systemctl enable keepalived
# Check the keepalived service status
$ systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2021-07-30 15:40:41 UTC; 11s ago
Main PID: 1075 (keepalived)
CGroup: /system.slice/keepalived.service
├─1075 /usr/sbin/keepalived -D
├─1076 /usr/sbin/keepalived -D
└─1077 /usr/sbin/keepalived -D
Jul 30 15:40:41 master2 Keepalived_vrrp[1077]: Registering Kernel netlink reflector
Jul 30 15:40:41 master2 Keepalived_vrrp[1077]: Registering Kernel netlink command channel
Jul 30 15:40:41 master2 Keepalived_vrrp[1077]: Registering gratuitous ARP shared channel
Jul 30 15:40:41 master2 Keepalived_vrrp[1077]: Opening file '/etc/keepalived/keepalived.conf'.
Jul 30 15:40:41 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) removing protocol VIPs.
Jul 30 15:40:41 master2 Keepalived_vrrp[1077]: Using LinkWatch kernel netlink reflector...
Jul 30 15:40:41 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jul 30 15:40:41 master2 Keepalived_vrrp[1077]: VRRP sockpool: [ifindex(3), proto(112), unicast...1)]
Jul 30 15:40:41 master2 Keepalived_healthcheckers[1076]: Initializing ipvs
Jul 30 15:40:41 master2 Keepalived_healthcheckers[1076]: Opening file '/etc/keepalived/keepaliv...'.
Hint: Some lines were ellipsized, use -l to show in full.
5. First test → verify that Keepalived works as expected
Set up one test server (192.168.100.201) in the same private subnet (192.168.100.0/24) as node 1 and node 2
While running a PING test against the created VIP (192.168.100.250), shut down node 1 (MASTER server)
5.1. Run a PING test from the test server (192.168.100.201) to the VIP (192.168.100.250)
# ping test from the test server (192.168.100.201)
$ ping 192.168.100.250
PING 192.168.100.250 (192.168.100.250) 56(84) bytes of data.
64 bytes from 192.168.100.250: icmp_seq=1 ttl=64 time=0.335 ms
64 bytes from 192.168.100.250: icmp_seq=2 ttl=64 time=0.392 ms
64 bytes from 192.168.100.250: icmp_seq=3 ttl=64 time=0.477 ms
64 bytes from 192.168.100.250: icmp_seq=4 ttl=64 time=0.469 ms
64 bytes from 192.168.100.250: icmp_seq=5 ttl=64 time=0.337 ms
64 bytes from 192.168.100.250: icmp_seq=6 ttl=64 time=0.322 ms
64 bytes from 192.168.100.250: icmp_seq=7 ttl=64 time=0.720 ms <----- fail-over from Node1 (MASTER server) to Node2 (BACKUP server)
64 bytes from 192.168.100.250: icmp_seq=8 ttl=64 time=0.330 ms
64 bytes from 192.168.100.250: icmp_seq=9 ttl=64 time=0.417 ms
64 bytes from 192.168.100.250: icmp_seq=10 ttl=64 time=0.407 ms
64 bytes from 192.168.100.250: icmp_seq=11 ttl=64 time=0.314 ms
64 bytes from 192.168.100.250: icmp_seq=12 ttl=64 time=0.270 ms
64 bytes from 192.168.100.250: icmp_seq=13 ttl=64 time=0.404 ms
64 bytes from 192.168.100.250: icmp_seq=14 ttl=64 time=0.346 ms
64 bytes from 192.168.100.250: icmp_seq=15 ttl=64 time=0.271 ms
5.2. Checking /var/log/messages on node2 (Backup server) shows that it has taken over the VIP (192.168.100.250) from node 1 (Master server)
# check on node2
$ tail -f /var/log/messages
Jul 30 15:50:45 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jul 30 15:50:46 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Entering MASTER STATE
Jul 30 15:50:46 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) setting protocol VIPs.
Jul 30 15:50:46 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:46 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth1 for 192.168.100.250
Jul 30 15:50:46 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:46 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:46 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:46 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:51 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:51 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth1 for 192.168.100.250
Jul 30 15:50:51 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:51 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:51 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:50:51 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 15:51:03 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Received advert with higher priority 200, ours 100
Jul 30 15:51:03 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jul 30 15:51:03 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) removing protocol VIPs.
6. Second test → verify that Keepalived works as expected
Set up one test server (192.168.100.201) in the same private subnet (192.168.100.0/24) as node 1 and node 2
While running a PING test against the created VIP (192.168.100.250), stop the keepalived service on node 1 (MASTER server) and check the traffic
Restart the keepalived service on node 1 (MASTER server) and check the traffic again
6.1. Run a PING test from the test server (192.168.100.201) to the VIP (192.168.100.250)
# ping test from the test server (192.168.100.201)
$ ping 192.168.100.250
PING 192.168.100.250 (192.168.100.250) 56(84) bytes of data.
64 bytes from 192.168.100.250: icmp_seq=1 ttl=64 time=0.332 ms
64 bytes from 192.168.100.250: icmp_seq=2 ttl=64 time=0.250 ms
64 bytes from 192.168.100.250: icmp_seq=3 ttl=64 time=0.443 ms
64 bytes from 192.168.100.250: icmp_seq=4 ttl=64 time=0.517 ms
64 bytes from 192.168.100.250: icmp_seq=5 ttl=64 time=0.373 ms
64 bytes from 192.168.100.250: icmp_seq=6 ttl=64 time=0.481 ms
64 bytes from 192.168.100.250: icmp_seq=7 ttl=64 time=0.442 ms
64 bytes from 192.168.100.250: icmp_seq=8 ttl=64 time=0.382 ms
64 bytes from 192.168.100.250: icmp_seq=9 ttl=64 time=0.445 ms
6.2. Under normal conditions, traffic from the test server arrives only at node1 (Master server)'s VIP (eth1:0) → no traffic is sent to node 2 (Backup server)
# confirm traffic arrives only at node1 (Master server)'s VIP
$ tcpdump -i eth1:0
16:52:20.057581 IP worker > master1: ICMP echo request, id 19909, seq 1, length 64
16:52:20.057609 IP master1 > worker: ICMP echo reply, id 19909, seq 1, length 64
16:52:20.292151 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
16:52:21.061380 IP worker > master1: ICMP echo request, id 19909, seq 2, length 64
16:52:21.061402 IP master1 > worker: ICMP echo reply, id 19909, seq 2, length 64
16:52:21.293746 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
16:52:22.062916 IP worker > master1: ICMP echo request, id 19909, seq 3, length 64
16:52:22.062938 IP master1 > worker: ICMP echo reply, id 19909, seq 3, length 64
16:52:22.295863 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
# node2 (Backup server) stands by -> only sees state checks (VRRP advertisements) from the master
$ tcpdump -i eth1:0
16:02:23.931301 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
6.3. Stop the keepalived service on node1 (Master server) → confirm traffic arrives at node 2 (Backup server)
$ systemctl stop keepalived
# node1 (Master server) stands by -> only sees state checks (VRRP advertisements) from the backup
$ tcpdump -i eth1:0
16:54:00.591586 IP master2 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simp
# /var/log/messages on node1 (Master server) -> confirms the HA Master has stopped
$ tail -f /var/log/messages
Jul 30 16:55:02 master1 kernel: device eth1 left promiscuous mode
Jul 30 16:55:29 master1 systemd: Stopping LVS and VRRP High Availability Monitor...
Jul 30 16:55:29 master1 Keepalived[4960]: Stopping
Jul 30 16:55:29 master1 Keepalived_vrrp[4962]: VRRP_Instance(VI_1) sent 0 priority
Jul 30 16:55:29 master1 Keepalived_vrrp[4962]: VRRP_Instance(VI_1) removing protocol VIPs.
Jul 30 16:55:29 master1 Keepalived_healthcheckers[4961]: Stopped
Jul 30 16:55:30 master1 Keepalived_vrrp[4962]: Stopped
Jul 30 16:55:30 master1 Keepalived[4960]: Stopped Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2
Jul 30 16:55:30 master1 systemd: Stopped LVS and VRRP High Availability Monitor.
# confirm traffic arrives only at node2 (Backup server)'s VIP
$ tcpdump -i eth1:0
16:53:30.532423 IP master2 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
16:53:30.548001 IP worker > master2: ICMP echo request, id 19910, seq 18, length 64
16:53:30.548028 IP master2 > worker: ICMP echo reply, id 19910, seq 18, length 64
16:53:31.549477 IP worker > master2: ICMP echo request, id 19910, seq 19, length 64
16:53:31.549499 IP master2 > worker: ICMP echo reply, id 19910, seq 19, length 64
16:53:31.549530 IP master2 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
16:53:32.549784 IP worker > master2: ICMP echo request, id 19910, seq 20, length 64
16:53:32.549824 IP master2 > worker: ICMP echo reply, id 19910, seq 20, length 64
16:53:32.549972 IP master2 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
# /var/log/messages on node2 (Backup server) -> confirms the HA Backup has taken over
$ tail -f /var/log/messages
Jul 30 16:55:29 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jul 30 16:55:30 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Entering MASTER STATE
Jul 30 16:55:30 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) setting protocol VIPs.
Jul 30 16:55:30 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:55:30 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth1 for 192.168.100.250
Jul 30 16:55:30 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:55:30 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:55:30 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:55:30 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:55:35 master2 Keepalived_vrrp[1077]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:55:35 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth1 for 192.168.100.250
6.4. Restart the keepalived service on node1 (Master server) → confirm traffic returns to node 1 (Master server)
$ systemctl restart keepalived
# confirm traffic arrives only at node1 (Master server)'s VIP again
$ tcpdump -i eth1:0
16:54:17.656150 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
16:54:18.626524 IP worker > master1: ICMP echo request, id 19910, seq 66, length 64
16:54:18.626563 IP master1 > worker: ICMP echo reply, id 19910, seq 66, length 64
16:54:18.657517 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
16:54:19.628842 IP worker > master1: ICMP echo request, id 19910, seq 67, length 64
16:54:19.628864 IP master1 > worker: ICMP echo reply, id 19910, seq 67, length 64
16:54:19.658916 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
# /var/log/messages on node1 (Master server) -> confirms the HA Master has restarted
$ tail -f /var/log/messages
Jul 30 16:58:51 master1 Keepalived[4985]: Starting Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2
Jul 30 16:58:51 master1 Keepalived[4985]: Opening file '/etc/keepalived/keepalived.conf'.
Jul 30 16:58:51 master1 Keepalived[4986]: Starting Healthcheck child process, pid=4987
Jul 30 16:58:51 master1 systemd: Started LVS and VRRP High Availability Monitor.
Jul 30 16:58:51 master1 Keepalived[4986]: Starting VRRP child process, pid=4988
Jul 30 16:58:51 master1 Keepalived_vrrp[4988]: Registering Kernel netlink reflector
Jul 30 16:58:51 master1 Keepalived_vrrp[4988]: Registering Kernel netlink command channel
Jul 30 16:58:51 master1 Keepalived_vrrp[4988]: Registering gratuitous ARP shared channel
Jul 30 16:58:51 master1 Keepalived_vrrp[4988]: Opening file '/etc/keepalived/keepalived.conf'.
Jul 30 16:58:51 master1 Keepalived_vrrp[4988]: VRRP_Instance(VI_1) removing protocol VIPs.
Jul 30 16:58:51 master1 Keepalived_vrrp[4988]: Using LinkWatch kernel netlink reflector...
Jul 30 16:58:51 master1 Keepalived_vrrp[4988]: VRRP sockpool: [ifindex(3), proto(112), unicast(0), fd(10,11)]
Jul 30 16:58:51 master1 Keepalived_healthcheckers[4987]: Opening file '/etc/keepalived/keepalived.conf'.
Jul 30 16:58:51 master1 Keepalived_vrrp[4988]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jul 30 16:58:52 master1 Keepalived_vrrp[4988]: VRRP_Instance(VI_1) Entering MASTER STATE
Jul 30 16:58:52 master1 Keepalived_vrrp[4988]: VRRP_Instance(VI_1) setting protocol VIPs.
Jul 30 16:58:52 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:52 master1 Keepalived_vrrp[4988]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth1 for 192.168.100.250
Jul 30 16:58:52 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:52 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:52 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:52 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:57 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:57 master1 Keepalived_vrrp[4988]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eth1 for 192.168.100.250
Jul 30 16:58:57 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:57 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:57 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
Jul 30 16:58:57 master1 Keepalived_vrrp[4988]: Sending gratuitous ARP on eth1 for 192.168.100.250
# node2 (Backup server) stands by -> only sees state checks (VRRP advertisements) from the master
$ tcpdump -i eth1:0
16:54:37.679862 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
# /var/log/messages on node2 (Backup server) -> confirms the MASTER role has been taken back from the backup
$ tail -f /var/log/messages
Jul 30 16:58:51 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Received advert with higher priority 200, ours 100
Jul 30 16:58:51 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jul 30 16:58:51 master2 Keepalived_vrrp[1077]: VRRP_Instance(VI_1) removing protocol VIPs.
7. Third test → verify that Keepalived works as expected
When the master's network is cut, traffic fails over to the backup OK; after the master's network comes back up, traffic does not automatically return to the master (a keepalived service restart is required)
Set up one test server (192.168.100.201) in the same private subnet (192.168.100.0/24) as node 1 and node 2
While running a PING test against the created VIP (192.168.100.250), bring down the network on node 1 (MASTER server)
After some time, bring the network back up and reconfigure the eth1:0 VIP (see the sketch after this list)
Even after the network is back up, traffic keeps going to the backup → it does not return to the master server
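The exact commands used to cut and restore the MASTER's network are not shown here; a minimal sketch on node 1, assuming the interface is eth1 and using iproute2:
# simulate the network failure on node 1 (MASTER server)
$ ip link set eth1 down
# ...after some time, bring the interface back up
$ ip link set eth1 up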
7.1. Run a PING test from the test server (192.168.100.201) to the VIP (192.168.100.250)
# ping test from the test server (192.168.100.201)
$ ping 192.168.100.250
PING 192.168.100.250 (192.168.100.250) 56(84) bytes of data.
64 bytes from 192.168.100.250: icmp_seq=1 ttl=64 time=0.335 ms
64 bytes from 192.168.100.250: icmp_seq=2 ttl=64 time=0.392 ms
64 bytes from 192.168.100.250: icmp_seq=3 ttl=64 time=0.477 ms
64 bytes from 192.168.100.250: icmp_seq=4 ttl=64 time=0.469 ms
64 bytes from 192.168.100.250: icmp_seq=5 ttl=64 time=0.337 ms
64 bytes from 192.168.100.250: icmp_seq=6 ttl=64 time=0.322 ms
64 bytes from 192.168.100.250: icmp_seq=7 ttl=64 time=0.720 ms <----- fail-over from Node1 (MASTER server) to Node2 (BACKUP server)
64 bytes from 192.168.100.250: icmp_seq=8 ttl=64 time=0.330 ms
64 bytes from 192.168.100.250: icmp_seq=9 ttl=64 time=0.417 ms
64 bytes from 192.168.100.250: icmp_seq=10 ttl=64 time=0.407 ms
64 bytes from 192.168.100.250: icmp_seq=11 ttl=64 time=0.314 ms
64 bytes from 192.168.100.250: icmp_seq=12 ttl=64 time=0.270 ms
64 bytes from 192.168.100.250: icmp_seq=13 ttl=64 time=0.404 ms
64 bytes from 192.168.100.250: icmp_seq=14 ttl=64 time=0.346 ms
64 bytes from 192.168.100.250: icmp_seq=15 ttl=64 time=0.271 ms
64 bytes from 192.168.100.250: icmp_seq=16 ttl=64 time=0.314 ms
7.2. Under normal conditions, traffic from the test server arrives only at node1 (Master server)'s VIP (eth1:0) → no traffic is sent to node 2 (Backup server)
# confirm traffic arrives only at node1 (Master server)'s VIP
$ tcpdump -i eth1:0
15:58:09.214853 IP worker > master1: ICMP echo request, id 1015, seq 1, length 64
15:58:09.214893 IP master1 > worker: ICMP echo reply, id 1015, seq 1, length 64
15:58:10.121214 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
15:58:10.216097 IP worker > master1: ICMP echo request, id 1015, seq 2, length 64
15:58:10.216121 IP master1 > worker: ICMP echo reply, id 1015, seq 2, length 64
15:58:11.122587 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
15:58:11.216543 IP worker > master1: ICMP echo request, id 1015, seq 3, length 64
15:58:11.216564 IP master1 > worker: ICMP echo reply, id 1015, seq 3, length 64
15:58:12.123952 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
15:58:12.218872 IP worker > master1: ICMP echo request, id 1015, seq 4, length 64
15:58:12.218894 IP master1 > worker: ICMP echo reply, id 1015, seq 4, length 64
# node2 (Backup server) stands by -> only sees state checks (VRRP advertisements) from the master
$ tcpdump -i eth1:0
16:02:23.931301 IP master1 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 200, authtype simple, intvl 1s, length 20
7.5. Ending the existing PING session from the test server (192.168.100.201) to the VIP (192.168.100.250) and starting a new one keeps the same behavior
# ping test from the test server (192.168.100.201)
$ ping 192.168.100.250
PING 192.168.100.250 (192.168.100.250) 56(84) bytes of data.
64 bytes from 192.168.100.250: icmp_seq=1 ttl=64 time=0.335 ms
64 bytes from 192.168.100.250: icmp_seq=2 ttl=64 time=0.392 ms
64 bytes from 192.168.100.250: icmp_seq=3 ttl=64 time=0.477 ms
64 bytes from 192.168.100.250: icmp_seq=4 ttl=64 time=0.469 ms
64 bytes from 192.168.100.250: icmp_seq=5 ttl=64 time=0.337 ms
64 bytes from 192.168.100.250: icmp_seq=6 ttl=64 time=0.322 ms
64 bytes from 192.168.100.250: icmp_seq=7 ttl=64 time=0.720 ms
64 bytes from 192.168.100.250: icmp_seq=8 ttl=64 time=0.330 ms
64 bytes from 192.168.100.250: icmp_seq=9 ttl=64 time=0.417 ms
64 bytes from 192.168.100.250: icmp_seq=10 ttl=64 time=0.407 ms
64 bytes from 192.168.100.250: icmp_seq=11 ttl=64 time=0.314 ms
64 bytes from 192.168.100.250: icmp_seq=12 ttl=64 time=0.270 ms
64 bytes from 192.168.100.250: icmp_seq=13 ttl=64 time=0.404 ms
64 bytes from 192.168.100.250: icmp_seq=14 ttl=64 time=0.346 ms
64 bytes from 192.168.100.250: icmp_seq=15 ttl=64 time=0.271 ms
64 bytes from 192.168.100.250: icmp_seq=16 ttl=64 time=0.314 ms
64 bytes from 192.168.100.250: icmp_seq=17 ttl=64 time=0.491 ms
64 bytes from 192.168.100.250: icmp_seq=18 ttl=64 time=0.480 ms
64 bytes from 192.168.100.250: icmp_seq=19 ttl=64 time=0.389 ms
64 bytes from 192.168.100.250: icmp_seq=20 ttl=64 time=0.362 ms
64 bytes from 192.168.100.250: icmp_seq=21 ttl=64 time=0.392 ms
64 bytes from 192.168.100.250: icmp_seq=22 ttl=64 time=0.399 ms
64 bytes from 192.168.100.250: icmp_seq=23 ttl=64 time=0.381 ms
64 bytes from 192.168.100.250: icmp_seq=24 ttl=64 time=0.277 ms
64 bytes from 192.168.100.250: icmp_seq=25 ttl=64 time=0.570 ms
[...omitted]
^C
--- 192.168.100.250 ping statistics ---
908 packets transmitted, 889 received, 2% packet loss, time 910037ms
rtt min/avg/max/mdev = 0.151/0.452/6.065/0.216 ms
# ping test again from the test server (192.168.100.201)
$ ping 192.168.100.250
PING 192.168.100.250 (192.168.100.250) 56(84) bytes of data.
64 bytes from 192.168.100.250: icmp_seq=1 ttl=64 time=0.446 ms
64 bytes from 192.168.100.250: icmp_seq=2 ttl=64 time=0.608 ms
64 bytes from 192.168.100.250: icmp_seq=3 ttl=64 time=0.395 ms
64 bytes from 192.168.100.250: icmp_seq=4 ttl=64 time=0.452 ms
64 bytes from 192.168.100.250: icmp_seq=5 ttl=64 time=0.504 ms
64 bytes from 192.168.100.250: icmp_seq=6 ttl=64 time=0.483 ms
7.6. Confirm that traffic to the VIP (192.168.100.250) still arrives at node 2 (Backup server) and does not return to node 1 (Master server)
# node1 (Master server) stands by -> only sees state checks (VRRP advertisements) from the backup
$ tcpdump -i eth1:0
16:41:59.321344 IP master2 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
# confirm traffic arrives only at node2 (Backup server)'s VIP
$ tcpdump -i eth1:0
16:42:06.024044 IP worker > master2: ICMP echo request, id 19904, seq 45, length 64
16:42:06.024077 IP master2 > worker: ICMP echo reply, id 19904, seq 45, length 64
16:42:06.343284 IP master2 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
16:42:07.033519 IP worker > master2: ICMP echo request, id 19904, seq 46, length 64
16:42:07.033543 IP master2 > worker: ICMP echo reply, id 19904, seq 46, length 64
16:42:07.346173 IP master2 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20
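As noted at the start of this test, traffic only returns to node 1 once its keepalived service is restarted; on node 1 (Master server):
$ systemctl restart keepalived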