Overview of Keepalived Dual-Node Hot Standby
Preface:
In modern IT architecture, high availability is one of the core requirements for business continuity. Keepalived, a lightweight high-availability solution, uses the VRRP protocol to implement a floating IP and health checking, making it easy to build a dual-node hot-standby system and avoid outages caused by a single point of failure. Whether for web services, databases, or load-balancing clusters, Keepalived provides stable redundancy for critical applications with simple configuration and fast failover. This article outlines the principles and deployment essentials of Keepalived dual-node hot standby to help readers quickly grasp its core capabilities.
Contents
Overview
I. Initial keepalived deployment
II. Configuration file
III. Other configuration options
IV. Terminology
V. Advanced usage
1. Introduction
2. Main functions of keepalived
3. How it works at layers 3, 4, and 7
4. Health check methods
4.1 HTTP service status check
4.2 TCP port status check (works for almost any TCP-based service)
4.3 SMTP mail server check
4.4 Custom script check of real_server status
5. State transition notifications
5.1 Instance state notifications
5.2 Virtual server check notifications
VI. Configuration in real projects
Summary
Overview
Keepalived is a high-availability solution for LVS services based on VRRP (Virtual Router Redundancy Protocol), and can be used to avoid single points of failure. An LVS service runs Keepalived on two servers, one as the master (MASTER) and one as the backup (BACKUP), while presenting a single virtual IP to the outside. The master periodically sends a specific message (a heartbeat) to the backup; when the backup stops receiving this message, i.e. the master has gone down, the backup takes over the virtual IP and continues to provide service, thus ensuring high availability.
Keepalived's job is to monitor server state: if a web server dies or malfunctions, Keepalived detects it, removes the faulty server from the pool, and lets the other servers take over its work; once the server is healthy again, Keepalived automatically adds it back to the pool.
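As a concrete illustration, a minimal two-node VRRP setup looks roughly like the sketch below. This is a simplified example of mine, not a configuration from the original text: the interface name ens33 and the VIP 192.168.115.250 match the lab used in the next section, while the VRID and priority values are illustrative assumptions.

! MASTER node (/etc/keepalived/keepalived.conf)
vrrp_instance VI_1 {
    state MASTER
    interface ens33          # NIC that carries the VRRP advertisements
    virtual_router_id 51     # must be identical on both nodes
    priority 100             # the higher priority wins the election
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.115.250      # the floating VIP
    }
}

The BACKUP node uses the same block with state BACKUP and a lower priority (e.g. 90).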
I. Initial keepalived deployment
Installation
[root@web1 ~]# yum install keepalived
Configure and start
[root@web2 keepalived]# systemctl enable --now keepalived.service nginx.service
Verification
[root@web1 keepalived]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:c8:dc:33 brd ff:ff:ff:ff:ff:ff
    inet 192.168.115.111/24 brd 192.168.115.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.115.250/32 scope global ens33
## stop the master and check the backup
[root@web1 keepalived]# systemctl stop keepalived.service
[root@web2 keepalived]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:8a:4a:79 brd ff:ff:ff:ff:ff:ff
    inet 192.168.115.112/24 brd 192.168.115.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.115.250/32 scope global ens33
II. Configuration file
global_defs {
    notification_email {        # email recipients notified on failover, one per line
        sysadmin@fire.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc   # sender address
    smtp_server localhost       # SMTP server address
    smtp_connect_timeout 30     # SMTP connection timeout
    router_id LVS_DEVEL         # identifier of the machine running keepalived
}
vrrp_sync_group VG_1 {          # groups instances that watch multiple networks
    group {
        inside_network          # instance name
        outside_network
    }
    notify_master /path/xx.sh   # script run when switching to MASTER
    notify_backup /path/xx.sh   # script run when switching to BACKUP
    notify_fault "path/xx.sh VG_1"  # script run on fault
    notify /path/xx.sh
    smtp_alert                  # send mail via the address/SMTP server in global_defs
}
When changing state, Keepalived invokes the matching notify script:
- notify_master when entering the MASTER state
- notify_backup when entering the BACKUP state
- notify_fault when an anomaly is detected and the FAULT state is entered
vrrp_instance inside_network {
    state BACKUP            # which node is master and which is backup; ignored when nopreempt is set, in which case priority decides
    interface eth0          # NIC the instance binds to; VRRP heartbeats are sent from this NIC
    dont_track_primary      # ignore VRRP interface errors (unset by default)
    track_interface {       # extra interfaces to monitor; failure of any of them triggers a failover
        eth1
        eth2
    }
    mcast_src_ip            # source address for multicast packets; defaults to the primary IP of the bound NIC
    garp_master_delay       # delay before sending gratuitous ARP after switching to MASTER
    virtual_router_id 50    # VRID; LVS nodes with the same VRID form one group, and the highest priority is elected master
    priority 99             # priority; the higher value wins the MASTER election
    advert_int 10           # advertisement interval in seconds (default 1); the VRRP heartbeat period; must be identical on both nodes (in testing the actual interval is a random value slightly below the configured one)
    nopreempt               # disable preemption; note: only valid on a node whose state is BACKUP, and that node's priority must be higher than the other's
First, nopreempt only takes effect on a node whose state is BACKUP (because it is the BACKUP node that decides whether to become MASTER). Second, to effectively disable auto-failback, either set state BACKUP on all nodes or give the master node a lower priority than the backups. My recommendation is to set every node to state BACKUP and add nopreempt on each; that disables auto-failback. When you later want to promote a specific node to MASTER manually, just remove its nopreempt option, raise its priority above the others, and reload the configuration (after the MASTER role moves over, restore the configuration and reload again). A minimal sketch of this setup follows at the end of this section.
    preempt_delay           # preemption delay, default 5 minutes
    debug                   # debug level
    authentication {        # authentication
        auth_type PASS      # authentication method
        auth_pass 111111    # password (only the first 8 characters are used)
    }
    virtual_ipaddress {     # the VIP(s)
        192.168.202.200
    }
}
virtual_server 192.168.202.200 23 {
    delay_loop 6            # health-check interval (in testing the actual interval is a random value slightly below 6 s)
    lb_algo rr              # LVS scheduling algorithm: rr|wrr|lc|wlc|lblc|sh|dh
    lb_kind DR              # forwarding mode: NAT|DR|TUN
    persistence_timeout 5   # session persistence time
    protocol TCP            # protocol
    persistence_granularity <NETMASK>   # LVS persistence granularity
    virtualhost <string>    # virtual host (Host: header) to check on the web server
    sorry_server <IPADDR> <port>    # fallback server, enabled after all real servers fail
    real_server 192.168.200.5 23 {
        weight 1            # defaults to 1; 0 disables the server
        inhibit_on_failure  # on a failed health check set the weight to 0 instead of removing the server from ipvs
        notify_up <string> | <quoted-string>    # script run when the server is detected up
        notify_down <string> | <quoted-string>  # script run when the server is detected down
        TCP_CHECK {
            connect_timeout 3       # connection timeout
            nb_get_retry 3          # number of retries
            delay_before_retry 3    # delay between retries
            connect_port 23         # port used for the health check
            bindto <ip>             # IP address to check
        }
        HTTP_GET | SSL_GET {
            url {                   # URL to check; several may be specified
                path /
                digest <string>     # digest of the checked content
                status_code 200     # expected HTTP status code, e.g. 200, 301, 302
            }
            connect_port <port>
            bindto <IPADD>
            connect_timeout 5
            nb_get_retry 3
            delay_before_retry 2
        }
        SMTP_CHECK {
            host {
                connect_ip <IP ADDRESS>
                connect_port <port>     # defaults to port 25
                bindto <IP ADDRESS>
            }
            connect_timeout 5
            retry 3
            delay_before_retry 2
            helo_name <string> | <quoted-string>    # argument of the SMTP HELO command, optional
        }
        MISC_CHECK {
            misc_path <string> | <quoted-string>    # path of the external script
            misc_timeout            # script execution timeout
            misc_dynamic            # if set, the exit status dynamically adjusts the server weight: 0 = healthy, no change; 1 = check failed, weight set to 0; 2-255 = healthy, weight set to (exit status - 2)
        }
    }
}
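To make the no-auto-failback recommendation above concrete, here is a minimal sketch of such a pair. The VIP, interface, and VRID match the example above; the priority values are illustrative assumptions of mine:

# node A (/etc/keepalived/keepalived.conf)
vrrp_instance VI_1 {
    state BACKUP            # both nodes declare state BACKUP
    nopreempt               # never preempt a live master
    interface eth0
    virtual_router_id 50
    priority 100            # higher, but irrelevant while nopreempt is set
    advert_int 1
    virtual_ipaddress {
        192.168.202.200
    }
}

Node B is identical except for priority 90. Whichever node currently holds the VIP keeps it until it fails itself, regardless of priority.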
III. Other configuration options
The core of keepalived is making IPVS highly available: it generates ipvs rules to achieve the load-balancing effect.
Definition of a virtual server:
- virtual_server IP port                # define the virtual host by IP address and port
- virtual_server fwmark int             # ipvs firewall mark, for a firewall-mark-based load-balancing cluster
- virtual_server group string           # group several virtual servers and define the group as the virtual service
- lb_algo {rr|wrr|lc|wlc|lblc|lblcr}    # LVS scheduling algorithm
- lb_kind {NAT|DR|TUN}                  # LVS forwarding model
- persistence_timeout                   # duration of persistent connections
- protocol TCP                          # protocol supported by the rule
- sorry_server                          # server that answers when all real_servers have failed
IV. Terminology
Virtual router: consists of one Master router and several Backup routers. Hosts use the virtual router as their default gateway.
VRID: identifier of the virtual router. Routers sharing the same VRID form one virtual router.
Master router: the router in the virtual router that actually forwards packets.
Backup router: a router able to take over the Master's work when the Master fails.
Virtual IP address: the IP address of the virtual router. A virtual router may own one or more IP addresses.
IP address owner: the router whose interface IP address equals the virtual IP address.
Virtual MAC address: each virtual router owns one virtual MAC address. Normally the virtual router answers ARP requests with the virtual MAC address; only under special configuration does it answer with the interface's real MAC address.
Priority: VRRP uses priority to determine each router's role within the virtual router.
Non-preemptive mode: a Backup router working in non-preemptive mode will not become Master as long as the Master has not failed, even if the Backup is later configured with a higher priority.
Preemptive mode: a Backup router working in preemptive mode compares its own priority with the one in received VRRP advertisements; if its priority is higher than the current Master's, it actively preempts and becomes Master; otherwise it stays Backup.
V. Advanced usage
1. Introduction
Keepalived has two main application scenarios: combined with ipvs for load balancing (LVS+Keepalived), or using its own health checking and resource takeover for high availability (dual-node hot standby) with failover.
The following takes Keepalived + MySQL dual-master hot standby as its example and focuses on keepalived's state-transition notification feature, which can significantly strengthen monitoring of a MySQL database.
2. Main functions of keepalived
keepalived uses VRRP (Virtual Router Redundancy Protocol) to implement server hot standby in software. Typically two Linux servers form a hot-standby group (master-backup); at any moment only the master serves traffic, and the master owns a shared virtual IP (VIP) that exists only on the master and is exposed to clients. If keepalived detects that the master has crashed or its service has failed, the backup automatically takes over the VIP and becomes master, and keepalived removes the failed node from the group; once the old master recovers it rejoins the group and, by default, preempts to become master again. This provides failover.
3. How it works at layers 3, 4, and 7
Layer 3: at layer 3, keepalived periodically sends an ICMP packet to the servers in the hot-standby group to decide whether a server has failed; a failed server is removed from the group.
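A layer-3 style check can also be expressed as a custom script hooked in via MISC_CHECK. The sketch below is my own illustration, not part of the original text; the script path and target address are assumptions:

#!/bin/bash
# /etc/keepalived/ping_check.sh -- exit 0 if the peer answers one ICMP echo
# within 2 seconds, exit 1 otherwise (MISC_CHECK treats non-zero as failure).
ping -c 1 -W 2 192.168.1.201 &> /dev/null && exit 0 || exit 1

Referenced from the configuration like so:

MISC_CHECK {
    misc_path "/etc/keepalived/ping_check.sh"
    misc_timeout 5
}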
Layer 4: at layer 4, keepalived judges server health by the state of a TCP port, for example MySQL's port 3306; a failed server is removed from the group.
! Configuration File for keepalived
global_defs {
    notification_email {
        example@163.com
    }
    notification_email_from example@example.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id MYSQL_HA
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 50
    nopreempt           # when the master goes down the backup takes over; the recovered master does not take the VIP back automatically
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123
    }
    virtual_ipaddress {
        192.168.1.200   # virtual IP address
    }
}
virtual_server 192.168.1.200 3306 {
    delay_loop 6
#    lb_algo rr
#    lb_kind NAT
    persistence_timeout 50
    protocol TCP
    real_server 192.168.1.201 3306 {    # monitor port 3306 on this host
        weight 1
        notify_down /etc/keepalived/kill_keepalived.sh  # run this script when port 3306 is detected down (the VIP only floats once keepalived is stopped)
        TCP_CHECK {     # health-check method; adjust to the service (HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK)
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
#        MISC_CHECK {                           ## custom-script health check via MISC_CHECK
#            misc_path "/etc/keepalived/check.sh"   ## check script
#            misc_timeout 10                    ## script execution timeout
#            misc_dynamic                       ## adjust the server weight according to the exit status
#        }
    }
}
Layer 7: at layer 7, keepalived judges whether a program on the server is running correctly according to a user-defined policy; a failed server is removed from the group.
! Configuration File for keepalived
global_defs {
    notification_email {
        example@163.com
    }
    notification_email_from example@example.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id MYSQL_HA
}
vrrp_script check_nginx {
    script /etc/keepalived/check_nginx.sh   # check script
    interval 2                              # execution interval
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 50
    nopreempt           # when the master goes down the backup takes over; the recovered master does not take the VIP back automatically
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123
    }
    virtual_ipaddress {
        192.168.1.200   # virtual IP address
    }
    track_script {      # reference the script in the instance
        check_nginx
    }
}
The script is as follows:
# cat /etc/keepalived/check_nginx.sh
Count1=`netstat -antp |grep -v grep |grep nginx |wc -l`
if [ $Count1 -eq 0 ]; then
    /usr/local/nginx/sbin/nginx
    sleep 2
    Count2=`netstat -antp |grep -v grep |grep nginx |wc -l`
    if [ $Count2 -eq 0 ]; then
        service keepalived stop
    else
        exit 0
    fi
else
    exit 0
fi

# It can also be as simple as:
#!/bin/bash
[[ `ps -C nginx --no-header |wc -l` -eq 0 ]] && exit 1 || exit 0
# return exit status 1 if there is no nginx process
#if [ `ps -C nginx --no-header |wc -l` -eq 0 ];then
#    echo "$(date) nginx pid not found">>/etc/keepalived/keepalived.log
#    #killall keepalived
#fi

# On the keepalived host, run the following script to watch the local nginx process:
vim start_keepalived.sh
#!/bin/bash
while true
do
    if pgrep nginx &> /dev/null;then
        systemctl start keepalived
    fi
    sleep 3
done

# A mysql check script looks like this:
#!/bin/bash
[[ `ps -C mysqld --no-header |wc -l` -eq 0 ]] && exit 1 || exit 0
# return exit status 1 if there is no mysqld process
if [ `ps -C mysqld --no-header |wc -l` -eq 0 ];then
    echo "$(date) mysqld pid not found">>/etc/keepalived/keepalived.log
    killall keepalived
fi
4. Health check methods
4.1 HTTP service status check
HTTP_GET or SSL_GET {
    url {
        path /index.html        # URL to check; several may be listed
        digest 24326582a86bee478bac72d5af25089e     # content digest
        # obtain the digest with: genhash -s IP -p 80 -u http://IP/index.html
        status_code 200         # expected HTTP status code
    }
    connect_port 80             # port to connect to
    connect_timeout 3           # connection timeout
    nb_get_retry 3              # number of retries
    delay_before_retry 2        # delay between retries
}
4.2 TCP port status check (works for almost any TCP-based service)
TCP_CHECK {
    connect_port 80         # health-check port; defaults to the port given after real_server
    connect_timeout 5
    nb_get_retry 3
    delay_before_retry 3
}
4.3 SMTP mail server check
SMTP_CHECK {                # health check for an SMTP mail server
    host {
        connect_ip
        connect_port
    }
    connect_timeout 5
    retry 2
    delay_before_retry 3
    helo_name "mail.domain.com"
}
4.4 Custom script check of real_server status
MISC_CHECK {
    misc_path /script.sh    # path of the external program or script
    misc_timeout 3          # script execution timeout
    !misc_dynamic           # do not adjust the server weight dynamically; if enabled, the exit status dynamically adjusts the real_server weight
#    misc_dynamic           ## adjust the server weight according to the exit status
}
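To make the misc_dynamic exit-code convention concrete, here is a hypothetical check script sketch of mine (the path and the load thresholds are made up) that reports a degraded weight instead of a hard failure:

#!/bin/bash
# /etc/keepalived/weight_check.sh -- hypothetical MISC_CHECK script.
# Exit codes under misc_dynamic: 0 = healthy (weight unchanged),
# 1 = failed (weight set to 0), 2-255 = healthy with weight = exit code - 2.
load=$(awk '{print int($1)}' /proc/loadavg)   # 1-minute load average, truncated
if [ "$load" -ge 10 ]; then
    exit 1          # overloaded: mark the server as failed
elif [ "$load" -ge 5 ]; then
    exit 3          # degraded: keepalived sets the weight to 3 - 2 = 1
else
    exit 12         # healthy: keepalived sets the weight to 12 - 2 = 10
fi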
5. State transition notifications
keepalived's main configuration includes mail notification; by default mail is only sent when a real_server goes down or recovers. Often we also want to know, after a failover on the keepalived master, whether the VIP actually floated to the backup and whether the MySQL server is healthy. We could write a monitoring script, but there is no need: keepalived already has state-change notifications built in, so we simply use them.
The default mail-notification template in the main configuration looks like this:

global_defs                       # Block id
{
    notification_email            # To:
    {
        admin@example1.com
        ...
    }
    # From: from address that will be in header
    notification_email_from admin@example.com
    smtp_server 127.0.0.1         # IP
    smtp_connect_timeout 30       # integer, seconds
    router_id my_hostname         # string identifying the machine,
                                  # (doesn't have to be hostname).
}
5.1 Instance state notifications
a) notify_master: run when the node becomes master
b) notify_backup: run when the node becomes backup
c) notify_fault: run when the node enters the fault state
5.2 Virtual server check notifications
a) notify_up: run when the virtual server comes up
b) notify_down: run when the virtual server goes down
Example:

! Configuration File for keepalived
global_defs {
    notification_email {
        example@163.com
    }
    notification_email_from example@example.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id MYSQL_HA
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 50
    nopreempt           # when the master goes down the backup takes over; the recovered master does not take the VIP back automatically
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123
    }
    virtual_ipaddress {
        192.168.1.200
    }
    notify_master /etc/keepalived/to_master.sh
    notify_backup /etc/keepalived/to_backup.sh
    notify_fault /etc/keepalived/to_fault.sh
}
virtual_server 192.168.1.200 3306 {
    delay_loop 6
    persistence_timeout 50
    protocol TCP
    real_server 192.168.1.201 3306 {
        weight 1
        notify_up /etc/keepalived/mysql_up.sh
        notify_down /etc/keepalived/mysql_down.sh
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
The value after each state parameter can be a bash command or a shell script; write whatever fits your needs. The state scripts used in the example above are as follows:
1. Script run when the server becomes master
yum install -y sendmail mailx
systemctl enable --now sendmail
# cat to_master.sh
#!/bin/bash
Date=$(date +%F" "%T)
IP=$(ifconfig eth0 |grep "inet addr" |cut -d":" -f2 |awk '{print $1}')
Mail="z13516052620@163.com"
echo "$Date $IP change to master." |mail -s "Master-Backup Change Status" $Mail
2. Script run when the server becomes backup
# cat to_backup.sh
#!/bin/bash
Date=$(date +%F" "%T)
IP=$(ifconfig eth0 |grep "inet addr" |cut -d":" -f2 |awk '{print $1}')
Mail="baojingtongzhi@163.com"
echo "$Date $IP change to backup." |mail -s "Master-Backup Change Status" $Mail
3. Script run when the server enters the fault state
# cat to_fault.sh
#!/bin/bash
Date=$(date +%F" "%T)
IP=$(ifconfig eth0 |grep "inet addr" |cut -d":" -f2 |awk '{print $1}')
Mail="baojingtongzhi@163.com"
echo "$Date $IP change to fault." |mail -s "Master-Backup Change Status" $Mail
4. Script run when TCP port 3306 is detected unavailable: it kills keepalived to trigger the failover
# cat mysql_down.sh
#!/bin/bash
Date=$(date +%F" "%T)
IP=$(ifconfig eth0 |grep "inet addr" |cut -d":" -f2 |awk '{print $1}')
Mail="baojingtongzhi@163.com"
pkill keepalived
echo "$Date $IP The mysql service failure,kill keepalived." |mail -s "Master-Backup MySQL Monitor" $Mail
5. Script run when TCP port 3306 is detected available again
# cat mysql_up.sh
#!/bin/bash
Date=$(date +%F" "%T)
IP=$(ifconfig eth0 |grep "inet addr" |cut -d":" -f2 |awk '{print $1}')
Mail="baojingtongzhi@163.com"
echo "$Date $IP The mysql service is recovery." |mail -s "Master-Backup MySQL Monitor" $Mail
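To confirm that notifications can actually leave the machine, a manual test mail with the same mail client installed above is a quick sanity check (my suggestion, not part of the original scripts):

echo "keepalived notify test" | mail -s "Master-Backup Change Status" z13516052620@163.com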
VI. Configuration in real projects
Keepalived high-availability cluster with one master and one backup
Install keepalived and nginx on both web1 and web2
[root@web1 ~]# yum install keepalived nginx
[root@web2 ~]# yum install keepalived nginx
Edit the configuration file on the master, web1
[root@web1 ~]# cd /etc/keepalived/
[root@web1 keepalived]# ls
keepalived.conf.sample
[root@web1 keepalived]# cp keepalived.conf.sample keepalived.conf
Edit the configuration file on the backup, web2
[root@web2 ~]# cd /etc/keepalived/
[root@web2 keepalived]# ls
keepalived.conf.sample
[root@web2 keepalived]# cp keepalived.conf.sample keepalived.conf
[root@web2 keepalived]# vim keepalived.conf
Switch to the master web1 and start keepalived.service
[root@web1 keepalived]# systemctl start keepalived.service
After keepalived.service starts, ip a shows that the ens33 interface now carries several virtual IPs (VIPs), confirming that Keepalived started successfully and assigned the VIPs
[root@web1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:b0:07:74 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.101/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.102/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.103/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feb0:774/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:b0:07:7e brd ff:ff:ff:ff:ff:ff
    inet 192.168.72.176/24 brd 192.168.72.255 scope global dynamic noprefixroute ens34
       valid_lft 1167sec preferred_lft 1167sec
    inet6 fe80::20c:29ff:feb0:77e/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Switch to the backup web2 and start keepalived.service
[root@web2 ~]# systemctl start keepalived
web2 currently shows no VIPs because it is the BACKUP node.
[root@web2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:59:8d:4f brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.2/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe59:8d4f/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:59:8d:59 brd ff:ff:ff:ff:ff:ff
    inet 192.168.72.178/24 brd 192.168.72.255 scope global dynamic noprefixroute ens34
       valid_lft 1600sec preferred_lft 1600sec
    inet6 fe80::20c:29ff:fe59:8d59/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
This completes the one-master/one-backup Keepalived high-availability setup
Simulating a failure:
Manually stop keepalived on web1 and verify that web2 takes over the VIPs
Switch to the master web1
[root@web1 ~]# systemctl stop keepalived.service
Switch to the backup web2 to test the failover
[root@web2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:59:8d:4f brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.2/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.101/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.102/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.103/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe59:8d4f/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:59:8d:59 brd ff:ff:ff:ff:ff:ff
    inet 192.168.72.178/24 brd 192.168.72.255 scope global dynamic noprefixroute ens34
       valid_lft 1139sec preferred_lft 1139sec
    inet6 fe80::20c:29ff:fe59:8d59/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Start the firewall on web1 and web2, then stop keepalived.service on both
[root@web1 ~]# systemctl start firewalld
[root@web1 ~]# systemctl stop keepalived.service
[root@web2 ~]# systemctl start firewalld
[root@web2 ~]# systemctl stop keepalived.service
Start keepalived.service again on web1; the VIPs come back
[root@web1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:b0:07:74 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.101/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.102/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.103/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feb0:774/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:b0:07:7e brd ff:ff:ff:ff:ff:ff
    inet 192.168.72.176/24 brd 192.168.72.255 scope global dynamic noprefixroute ens34
       valid_lft 1507sec preferred_lft 1507sec
    inet6 fe80::20c:29ff:feb0:77e/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Start keepalived.service on web2 as well. Both nodes of the same hot-standby group now hold the same VIPs at the same time: split brain has occurred (the firewall is blocking the VRRP advertisements)
[root@web2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:59:8d:4f brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.2/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.101/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.102/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.103/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe59:8d4f/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:59:8d:59 brd ff:ff:ff:ff:ff:ff
    inet 192.168.72.178/24 brd 192.168.72.255 scope global dynamic noprefixroute ens34
       valid_lft 1184sec preferred_lft 1184sec
    inet6 fe80::20c:29ff:fe59:8d59/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Failure recovery:
Stop the firewall on both web1 and web2
[root@web1 ~]# systemctl stop firewalld
[root@web2 ~]# systemctl stop firewalld
The split-brain condition on web1 is resolved
[root@web1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:b0:07:74 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.101/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.102/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.103/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feb0:774/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:b0:07:7e brd ff:ff:ff:ff:ff:ff
    inet 192.168.72.176/24 brd 192.168.72.255 scope global dynamic noprefixroute ens34
       valid_lft 1730sec preferred_lft 1730sec
    inet6 fe80::20c:29ff:feb0:77e/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
The split-brain condition on web2 is resolved
[root@web2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:59:8d:4f brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.2/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe59:8d4f/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:59:8d:59 brd ff:ff:ff:ff:ff:ff
    inet 192.168.72.178/24 brd 192.168.72.255 scope global dynamic noprefixroute ens34
       valid_lft 1567sec preferred_lft 1567sec
    inet6 fe80::20c:29ff:fe59:8d59/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Verifying high availability with Keepalived and Nginx traffic distribution:
### On web1, start nginx and write content into index.html
[root@web1 ~]# systemctl start nginx
[root@web1 ~]# echo web1 > /usr/share/nginx/html/index.html
### On web2, start nginx and write content into index.html
[root@web2 ~]# systemctl start nginx
[root@web2 ~]# echo web2 > /usr/share/nginx/html/index.html
Visiting the web site at 192.168.100.101 always lands on web1, because web1 is the master and holds the VIP while the backup web2 only waits to take over. As long as web1 is healthy all requests flow to web1; only when web1 fails does the VIP float to web2
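A quick client-side check of this behaviour (run from any host that can reach the VIP; the expected body comes from the index.html written above):

$ curl http://192.168.100.101/
web1

After the failover below, the same command should print web2.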
Now stop keepalived.service on web1 to simulate an outage
[root@web1 ~]# systemctl stop keepalived.service
Switch to web2: it has taken over the VIPs previously held by web1, showing the VIPs have floated to the web2 node
[root@web2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:59:8d:4f brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.2/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.101/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.102/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.103/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe59:8d4f/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:59:8d:59 brd ff:ff:ff:ff:ff:ff
    inet 192.168.72.178/24 brd 192.168.72.255 scope global dynamic noprefixroute ens34
       valid_lft 1667sec preferred_lft 1667sec
    inet6 fe80::20c:29ff:fe59:8d59/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Visiting 192.168.100.101 now shows that once web2 holds the VIPs, all traffic goes to the web2 node; the failover test succeeded and web2 has taken over the service
Now switch back to web1 and restart keepalived.service
[root@web1 ~]# systemctl start keepalived.service
After keepalived restarts on web1, because the master web1 is configured with a higher priority than the backup web2, the VIP 192.168.100.101 is bound to web1's ens33 interface again, together with the other VIPs (192.168.100.102/103); web1 has resumed the MASTER role
[root@web1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:b0:07:74 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.101/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.102/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.103/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feb0:774/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:b0:07:7e brd ff:ff:ff:ff:ff:ff
    inet 192.168.72.176/24 brd 192.168.72.255 scope global dynamic noprefixroute ens34
       valid_lft 1060sec preferred_lft 1060sec
    inet6 fe80::20c:29ff:feb0:77e/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@web1 ~]#
The state above shows that the Keepalived high-availability cluster is configured correctly and fails over between master and backup as expected
Integrating Keepalived with Nginx for a highly available environment
Edit the configuration file on web1
[root@web1 keepalived]# vim keepalived.conf
Write the check script
[root@web1 keepalived]# vim /etc/keepalived/check_nginx.sh
[root@web1 keepalived]# cat /etc/keepalived/check_nginx.sh
Count1=`netstat -antp |grep -v grep |grep nginx |wc -l`
if [ $Count1 -eq 0 ]; then
    systemctl restart nginx
    sleep 2
    Count2=`netstat -antp |grep -v grep |grep nginx |wc -l`
    if [ $Count2 -eq 0 ]; then
        service keepalived stop
    else
        exit 0
    fi
else
    exit 0
fi
Make the script executable
[root@web1 keepalived]# chmod +x check_nginx.sh
[root@web1 keepalived]# ls
check_nginx.sh keepalived.conf keepalived.conf.sample
Copy the script to web2
[root@web1 keepalived]# scp check_nginx.sh 192.168.100.2:/etc/keepalived/
The authenticity of host '192.168.100.2 (192.168.100.2)' can't be established.
ED25519 key fingerprint is SHA256:8qsJ8qXVwW4GcC9bkntVyvJjAgoWaVXcjyQYB6pLCtY.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.100.2' (ED25519) to the list of known hosts.
Authorized users only. All activities may be monitored and reported.
root@192.168.100.2's password:
check_nginx.sh 100% 345 398.4KB/s 00:00
Switch to web2, confirm that the script from web1 arrived, and edit web2's configuration file
[root@web2 ~]# cd /etc/keepalived/
[root@web2 keepalived]# ls
check_nginx.sh keepalived.conf keepalived.conf.sample
[root@web2 keepalived]# vim keepalived.conf
Restart keepalived.service on both web1 and web2
[root@web1 keepalived]# systemctl restart keepalived.service
[root@web2 keepalived]# systemctl restart keepalived.service
Verify on web1 and web2 that the configuration is working
[root@web1 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:b0:07:74 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.101/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.102/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.103/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feb0:774/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:b0:07:7e brd ff:ff:ff:ff:ff:ff
    inet 192.168.72.176/24 brd 192.168.72.255 scope global dynamic noprefixroute ens34
       valid_lft 1547sec preferred_lft 1547sec
    inet6 fe80::20c:29ff:feb0:77e/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@web2 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:59:8d:4f brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.2/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe59:8d4f/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:59:8d:59 brd ff:ff:ff:ff:ff:ff
    inet 192.168.72.178/24 brd 192.168.72.255 scope global dynamic noprefixroute ens34
       valid_lft 1452sec preferred_lft 1452sec
    inet6 fe80::20c:29ff:fe59:8d59/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Simulating a failure:
Stop the nginx service on web1
[root@web1 keepalived]# systemctl stop nginx
### check the service status
[root@web1 keepalived]# systemctl status nginx
○ nginx.service - The nginx HTTP and reverse proxy server
     Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; preset: disabled)
     Active: inactive (dead)
Jul 15 23:26:03 web1 systemd[1]: Starting The nginx HTTP and reverse proxy server...
Jul 15 23:26:04 web1 nginx[4712]: nginx: the configuration file /etc/nginx/nginx.conf synt>
Jul 15 23:26:04 web1 nginx[4712]: nginx: configuration file /etc/nginx/nginx.conf test is >
Jul 15 23:26:04 web1 systemd[1]: Started The nginx HTTP and reverse proxy server.
Jul 15 23:26:29 web1 systemd[1]: Stopping The nginx HTTP and reverse proxy server...
Jul 15 23:26:29 web1 systemd[1]: nginx.service: Deactivated successfully.
Jul 15 23:26:29 web1 systemd[1]: Stopped The nginx HTTP and reverse proxy server.
After Nginx stops on web1, the VIPs (192.168.100.101/102/103) automatically migrate from web1 to web2: failover succeeded
[root@web1 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:b0:07:74 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feb0:774/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:b0:07:7e brd ff:ff:ff:ff:ff:ff
    inet 192.168.72.176/24 brd 192.168.72.255 scope global dynamic noprefixroute ens34
       valid_lft 1779sec preferred_lft 1779sec
    inet6 fe80::20c:29ff:feb0:77e/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Switch to web2; the ip a output shows that web2 has taken over all the VIPs (192.168.100.101/102/103 have appeared) while web1 has released them (only 192.168.100.1 remains)
[root@web2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:59:8d:4f brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.2/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.101/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.102/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.103/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe59:8d4f/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:59:8d:59 brd ff:ff:ff:ff:ff:ff
    inet 192.168.72.178/24 brd 192.168.72.255 scope global dynamic noprefixroute ens34
       valid_lft 1621sec preferred_lft 1621sec
    inet6 fe80::20c:29ff:fe59:8d59/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Visiting 192.168.100.101 still works: with web2 holding the VIPs, all traffic is now directed to the web2 node
However, the script includes a second check of the nginx service: about 2 s after detecting that nginx on web1 has stopped, it restarts nginx on web1, so the service taken over by web2 falls back to web1. Both the script and the high-availability setup are working
Layer-7 high availability based on LVS (DR mode) + Keepalived + Nginx
Edit the configuration file on the lvs1 host
[root@lvs1 keepalived]# vim keepalived.conf
[root@lvs1 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.103
    }
}
virtual_server 192.168.100.103 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    real_server 192.168.100.1 80 {
        weight 1
        HTTP_GET {
            url {
                path /test.html
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.100.2 80 {
        weight 1
        HTTP_GET {
            url {
                path /test.html
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}
Copy the configuration file from lvs1 to lvs2
[root@lvs1 keepalived]# scp keepalived.conf 192.168.100.101:/etc/keepalived/
The authenticity of host '192.168.100.101 (192.168.100.101)' can't be established.
ED25519 key fingerprint is SHA256:8qsJ8qXVwW4GcC9bkntVyvJjAgoWaVXcjyQYB6pLCtY.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.100.101' (ED25519) to the list of known hosts.
Authorized users only. All activities may be monitored and reported.
root@192.168.100.101's password:
keepalived.conf 100% 910 1.6MB/s
Edit the configuration file on lvs2
[root@lvs2 keepalived]# vim keepalived.conf
[root@lvs2 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.103
    }
}
virtual_server 192.168.100.103 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    real_server 192.168.100.1 80 {
        weight 1
        HTTP_GET {
            url {
                path /test.html
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.100.2 80 {
        weight 1
        HTTP_GET {
            url {
                path /test.html
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}
Switch to the web1 host and create a new file with content in the html directory
[root@web1 keepalived]# cd /usr/share/nginx/html/
[root@web1 html]# ls
404.html 50x.html index.html nginx-logo.png
[root@web1 html]# vim test.html
[root@web1 html]# cat test.html
test
Then switch to the web2 host and create the same file in the html directory
[root@web2 keepalived]# cd /usr/share/nginx/html/
[root@web2 html]# ls
404.html 50x.html index.html nginx-logo.png
[root@web2 html]# vim test.html
[root@web2 html]# cat test.html
test
Load the kernel module (ip_vs) on both lvs1 and lvs2.
[root@lvs1 keepalived]# modprobe ip_vs
[root@lvs2 keepalived]# modprobe ip_vs
Start Keepalived on lvs1 and lvs2 and confirm that the LVS rules are in effect.
[root@lvs1 keepalived]# systemctl start keepalived.service
[root@lvs1 keepalived]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.100.103:80 rr persistent 50
  -> 192.168.100.1:80             Route   1      0          0
  -> 192.168.100.2:80             Route   1      0          0
[root@lvs2 keepalived]# systemctl start keepalived.service
[root@lvs2 keepalived]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.100.103:80 rr persistent 50
  -> 192.168.100.1:80             Route   1      0          0
  -> 192.168.100.2:80             Route   1      1          0
Traffic will be distributed round-robin to the two back-end Nginx nodes (192.168.100.1 and 192.168.100.2):
[root@lvs1 keepalived]# curl 192.168.100.1/test.html
test
[root@lvs1 keepalived]# curl 192.168.100.2/test.html
test
Switch to web1
[root@web1 html]# vim /etc/sysctl.conf
[root@web1 html]# sysctl -p
kernel.sysrq = 0
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.tcp_syncookies = 1
kernel.dmesg_restrict = 1
net.ipv6.conf.all.accept_redirects = 0
net.ipv6.conf.default.accept_redirects = 0
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.ens33.arp_filter = 0
net.ipv4.conf.ens33.rp_filter = 0
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.rp_filter = 0
### add a temporary IP
[root@web1 ~]# ifconfig lo:0 192.168.100.103/32
[root@web1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.100.1  netmask 255.255.255.0  broadcast 192.168.100.255
        inet6 fe80::20c:29ff:feb0:774  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:b0:07:74  txqueuelen 1000  (Ethernet)
        RX packets 19682  bytes 1662245 (1.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 35410  bytes 29300791 (27.9 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.72.176  netmask 255.255.255.0  broadcast 192.168.72.255
        inet6 fe80::20c:29ff:feb0:77e  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:b0:07:7e  txqueuelen 1000  (Ethernet)
        RX packets 537  bytes 39137 (38.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 61  bytes 7656 (7.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 114  bytes 5568 (5.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 114  bytes 5568 (5.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.100.103  netmask 0.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
### add a route entry
[root@web1 ~]# route add -host 192.168.100.103 dev lo:0
[root@web1 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.72.2 0.0.0.0 UG 101 0 0 ens34
192.168.72.0 0.0.0.0 255.255.255.0 U 101 0 0 ens34
192.168.100.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33
192.168.100.103 0.0.0.0 255.255.255.255 UH 0 0 0 lo
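The lo:0 address and the host route added above do not survive a reboot. A small script in the style below is a common way to make the DR real-server settings persistent; this is a sketch of mine with an assumed path, not part of the original lab:

#!/bin/bash
# /etc/keepalived/dr_rs.sh -- hypothetical boot-time setup for an LVS-DR real server.
VIP=192.168.100.103
ifconfig lo:0 $VIP netmask 255.255.255.255 up   # bind the VIP on loopback
route add -host $VIP dev lo:0                   # route the VIP via lo:0
# suppress ARP replies/announcements for the VIP so only the director answers ARP
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2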
Switch to web2
[root@web2 html]# vim /etc/sysctl.conf
[root@web2 html]# sysctl -p
kernel.sysrq = 0
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.tcp_syncookies = 1
kernel.dmesg_restrict = 1
net.ipv6.conf.all.accept_redirects = 0
net.ipv6.conf.default.accept_redirects = 0
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.ens33.arp_filter = 0
net.ipv4.conf.ens33.rp_filter = 0
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.rp_filter = 0
### add a temporary IP
[root@web2 html]# ifconfig lo:0 192.168.100.103/32
[root@web2 html]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.100.2  netmask 255.255.255.0  broadcast 192.168.100.255
        inet6 fe80::20c:29ff:fe59:8d4f  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:59:8d:4f  txqueuelen 1000  (Ethernet)
        RX packets 18102  bytes 1658904 (1.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 35844  bytes 31147681 (29.7 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.72.178  netmask 255.255.255.0  broadcast 192.168.72.255
        inet6 fe80::20c:29ff:fe59:8d59  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:59:8d:59  txqueuelen 1000  (Ethernet)
        RX packets 566  bytes 41317 (40.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 68  bytes 8132 (7.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.100.103  netmask 0.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
### add a route entry
[root@web2 html]# route add -host 192.168.100.103 dev lo:0
[root@web2 html]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.72.2 0.0.0.0 UG 101 0 0 ens34
192.168.72.0 0.0.0.0 255.255.255.0 U 101 0 0 ens34
192.168.100.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33
192.168.100.103 0.0.0.0 255.255.255.255 UH 0 0 0 lo
Verify traffic distribution
Simulating failures:
Failure 1
Switch to web1 and move test.html out of the html directory so that lvs1 and lvs2 can no longer probe it
[root@web1 html]# ls
404.html 50x.html index.html nginx-logo.png test.html
[root@web1 html]# mv test.html /opt/
[root@web1 html]# ls
404.html 50x.html index.html nginx-logo.png
Back on lvs1, the messages log shows that the HTTP check on 192.168.100.1:80 failed and Keepalived removed it from the LVS rules
[root@lvs1 ~]# tail -f /var/log/messages
The current ipvsadm -Ln output lists only 192.168.100.2:80, confirming that the node was removed
[root@lvs2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.100.103:80 rr persistent 50
  -> 192.168.100.2:80             Route   1      0          0
Opening the web site at 192.168.100.103 in a browser now lands on web2: failover achieved
Switch to web2 and move test.html out of the html directory so that lvs1 and lvs2 can no longer probe it
[root@web2 ~]# cd /usr/share/nginx/html/
[root@web2 html]# ls
404.html 50x.html index.html nginx-logo.png test.html
[root@web2 html]# mv test.html /opt/
[root@web2 html]# ls
404.html 50x.html index.html nginx-logo.png
Back on lvs1, the messages log shows that the HTTP check on 192.168.100.2:80 failed and Keepalived removed it from the LVS rules
[root@lvs1 ~]# tail -f /var/log/messages
The current ipvsadm -Ln output lists only 192.168.100.1:80, confirming that the node was removed
[root@lvs1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.100.103:80 rr persistent 50
  -> 192.168.100.1:80             Route   1      0          0
Opening the web site at 192.168.100.103 in a browser now lands on web1
Failure recovery:
Move the file back into the html directory on web1 and web2
[root@web1 html]# mv /opt/test.html ./
[root@web1 html]# ls
404.html  50x.html  index.html  nginx-logo.png  test.html
[root@web2 html]# mv /opt/test.html ./
[root@web2 html]# ls
404.html 50x.html index.html nginx-logo.png test.html
[root@web2 html]#
Switch to lvs1 and check the messages log: it shows a client at 192.168.100.200 logging in to lvs1 over SSH, running commands, and logging out, with no errors reported
The LVS rules now show the nodes added back: high availability achieved
[root@lvs1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.100.103:80 rr persistent 50
  -> 192.168.100.1:80             Route   1      0          0
  -> 192.168.100.2:80             Route   1      1          0
Failure 2
Now stop keepalived on lvs1
[root@lvs1 ~]# systemctl stop keepalived.service
Switch to lvs2 and observe that it has taken over the VIP:
[root@lvs2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:9b:f9:1b brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.101/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.103/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe9b:f91b/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:9b:f9:25 brd ff:ff:ff:ff:ff:ff
    inet 192.168.72.183/24 brd 192.168.72.255 scope global dynamic noprefixroute ens34
       valid_lft 1716sec preferred_lft 1716sec
    inet6 fe80::20c:29ff:fe9b:f925/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Visiting the web site at 192.168.100.103 again still works: failover achieved
Configuring sorry_server
Bring up a new CentOS 7.9 host
[root@ding ~]# yum install -y nginx
### start nginx
[root@ding ~]# systemctl start nginx
### write the string sorry into index.html in the html directory
[root@ding ~]# echo sorry > /usr/share/nginx/html/index.html
[root@ding ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:b6:ec:db brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.206/24 brd 192.168.72.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::74bb:b5be:6bf1:18e9/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:b6:ec:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.72.139/24 brd 192.168.72.255 scope global noprefixroute dynamic ens37
       valid_lft 1422sec preferred_lft 1422sec
    inet6 fe80::8ce3:19df:bd0:6a62/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:a2:49:3b brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:a2:49:3b brd ff:ff:ff:ff:ff:ff
Switch to the lvs1 host and verify that the new host is reachable
[root@lvs1 ~]# curl 192.168.100.206
sorry
Now edit the configuration file on lvs1 and add a sorry_server line
[root@lvs1 ~]# vim /etc/keepalived/keepalived.conf
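The added line sits inside the virtual_server 192.168.100.103 80 block; it is the same directive that appears in the full dual-master configuration later in this article:

    sorry_server 192.168.100.206 80    # used only after all real servers have failed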
Edit the configuration file on lvs2 as well, adding the same sorry_server line
[root@lvs2 ~]# vim /etc/keepalived/keepalived.conf
Restart the keepalived service on lvs1 and lvs2
[root@lvs2 ~]# systemctl restart keepalived.service
Switch to the 192.168.100.206 host, add the temporary loopback IP, and add the ARP suppression settings to sysctl.conf and apply them
[root@ding ~]# vim /etc/sysctl.conf
[root@ding ~]# sysctl -p
kernel.sysrq = 0
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.tcp_syncookies = 1
kernel.dmesg_restrict = 1
net.ipv6.conf.all.accept_redirects = 0
net.ipv6.conf.default.accept_redirects = 0
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.ens33.arp_filter = 0
net.ipv4.conf.ens33.rp_filter = 0
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.rp_filter = 0
### add a temporary IP
[root@ding ~]# ifconfig lo:0 192.168.100.103/32
### add a route entry
[root@ding ~]# route add -host 192.168.100.103 dev lo:0
Simulating a failure
Move test.html out of the html directory on both web1 and web2
[root@web1 html]# ls
404.html 50x.html index.html nginx-logo.png test.html
[root@web1 html]# mv test.html /opt/
[root@web1 html]# ls
404.html  50x.html  index.html  nginx-logo.png
[root@web2 html]# ls
404.html 50x.html index.html nginx-logo.png test.html
[root@web2 html]# mv test.html /opt/
[root@web2 html]# ls
404.html 50x.html index.html nginx-logo.png
Now switch to lvs1 and check the log: the sorry_server was successfully added to LVS
[root@lvs1 ~]# tail -f /var/log/messages
The LVS rules now show the sorry_server's IP
[root@lvs1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.100.103:80 rr persistent 50
  -> 192.168.100.206:80           Route   1      0          0
Visiting the web site at 192.168.100.103 now shows the sorry page: fallback achieved
LVS-DR dual-master configuration
Switch to the lvs1 host and edit the configuration file
[root@lvs1 ~]# vim /etc/keepalived/keepalived.conf
[root@lvs1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.103
    }
}
virtual_server 192.168.100.103 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    sorry_server 192.168.100.206 80
    real_server 192.168.100.1 80 {
        weight 1
        HTTP_GET {
            url {
                path /test.html
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.100.2 80 {
        weight 1
        HTTP_GET {
            url {
                path /test.html
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface ens33
    virtual_router_id 52
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.104
    }
}
virtual_server 192.168.100.104 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    sorry_server 192.168.100.206 80
    real_server 192.168.100.1 80 {
        weight 1
        HTTP_GET {
            url {
                path /test.html
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.100.2 80 {
        weight 1
        HTTP_GET {
            url {
                path /test.html
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}
Edit the configuration file on the lvs2 host
[root@lvs2 ~]# vim /etc/keepalived/keepalived.conf
[root@lvs2 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.103
    }
}
virtual_server 192.168.100.103 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    sorry_server 192.168.100.206 80
    real_server 192.168.100.1 80 {
        weight 1
        HTTP_GET {
            url {
                path /test.html
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.100.2 80 {
        weight 1
        HTTP_GET {
            url {
                path /test.html
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface ens33
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.104
    }
}
virtual_server 192.168.100.104 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    sorry_server 192.168.100.206 80
    real_server 192.168.100.1 80 {
        weight 1
        HTTP_GET {
            url {
                path /test.html
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.100.2 80 {
        weight 1
        HTTP_GET {
            url {
                path /test.html
            }
            connect_timeout 3
            retry 3
            delay_before_retry 3
        }
    }
}
Restart keepalived on lvs1 and lvs2
[root@lvs1 ~]# systemctl restart keepalived
[root@lvs2 ~]# systemctl restart keepalived
On lvs1, the ip a output shows VIP 192.168.100.103 bound to the ens33 interface
[root@lvs1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.100.103:80 rr persistent 50
  -> 192.168.100.1:80             Route   1      0          0
  -> 192.168.100.2:80             Route   1      0          0
TCP  192.168.100.104:80 rr persistent 50
  -> 192.168.100.1:80             Route   1      0          0
  -> 192.168.100.2:80             Route   1      0          0
[root@lvs1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:c2:74:d2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.100/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.103/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fec2:74d2/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:c2:74:dc brd ff:ff:ff:ff:ff:ff
    inet 192.168.72.181/24 brd 192.168.72.255 scope global dynamic noprefixroute ens36
       valid_lft 1648sec preferred_lft 1648sec
    inet6 fe80::20c:29ff:fec2:74dc/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
On lvs2, the ip a output shows VIP 192.168.100.104 bound to the ens33 interface
[root@lvs2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.100.103:80 rr persistent 50
  -> 192.168.100.1:80             Route   1      0          0
  -> 192.168.100.2:80             Route   1      0          0
TCP  192.168.100.104:80 rr persistent 50
  -> 192.168.100.1:80             Route   1      0          0
  -> 192.168.100.2:80             Route   1      0          0
[root@lvs2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:9b:f9:1b brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.101/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.104/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe9b:f91b/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:9b:f9:25 brd ff:ff:ff:ff:ff:ff
    inet 192.168.72.183/24 brd 192.168.72.255 scope global dynamic noprefixroute ens34
       valid_lft 1194sec preferred_lft 1194sec
    inet6 fe80::20c:29ff:fe9b:f925/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
On the local Windows machine, go to C:\Windows\System32\drivers\etc, drag the hosts file to the desktop, and edit it
After editing, save the file and drag it back into the etc directory
Accessing the same domain name www.c2505.com, the browser bookmarks show web1 and web2 respectively, verifying that requests were distributed through different LVS nodes (lvs1 and lvs2) to the back-end web1 and web2. The LVS round-robin (rr) scheduling policy is in effect, achieving high availability
Summary:
Through state monitoring and automatic switchover between master and backup nodes, Keepalived delivers highly available services; its strengths are flexible configuration, low resource usage, and seamless integration with the Linux ecosystem. A dual-node hot-standby architecture not only improves fault tolerance but can also be combined with components such as Nginx and LVS to build more sophisticated load-balancing solutions. In real deployments, however, watch for split-brain risk, network latency, and similar pitfalls; only with sound health-check strategies and log monitoring can Keepalived deliver its full value. Even as businesses move toward cloud and distributed architectures, Keepalived remains a reliable choice for keeping critical services stable in traditional architectures.