MySQL High Availability with PXC
1. Introduction to PXC
1.1 Overview
PXC (Percona XtraDB Cluster) is a MySQL high-availability cluster solution. Compared with traditional cluster architectures based on master-slave replication, PXC's most prominent feature is that it eliminates the long-criticized replication lag, achieving near real-time synchronization. All nodes in the cluster are peers of one another. PXC puts data consistency first: a transaction either commits on every node or on none. Its implementation enforces consistency very strictly, which is what lets it guarantee data consistency across the MySQL cluster.
1.2 Advantages and Disadvantages of PXC
Advantages:
- Highly available service.
- Synchronous (parallel) replication with almost no lag.
- Multiple nodes can be read from and written to simultaneously, enabling write scaling. It is still best to shard databases and tables in advance so that each node writes to different tables or databases, instead of relying on Galera to resolve write conflicts.
- New nodes can be provisioned automatically, and deployment is simple.
- Strict data consistency, which makes it especially suitable for e-commerce applications.
- Fully compatible with MySQL.
Disadvantages:
- Replication only supports the InnoDB engine; changes to tables using other storage engines are not replicated.
- Write throughput is bounded by the weakest node, because PXC enforces strong consistency: a change only counts as successful once it has succeeded on all nodes.
- Every table must have a primary key.
- Explicit locking such as LOCK TABLES is not supported.
- Lock conflicts and deadlocks are relatively more frequent.
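The primary-key and storage-engine restrictions above are enforced by `pxc_strict_mode` (configured as ENFORCING later in this walkthrough). A minimal sketch of how they surface in practice, assuming a running PXC node and a root login; the `demo` schema and table names are examples, and the exact error text may vary by version:

```shell
# Show the current strict mode
mysql -u root -p -e "SHOW VARIABLES LIKE 'pxc_strict_mode';"

# Under ENFORCING, DML against a table without an explicit primary key is rejected
# (in PXC 8.0 the CREATE itself succeeds; the INSERT fails)
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS demo;
  CREATE TABLE demo.no_pk (a INT);
  INSERT INTO demo.no_pk VALUES (1);"

# Creating a non-InnoDB table is likewise blocked, since only InnoDB changes replicate
mysql -u root -p -e "CREATE TABLE demo.legacy (id INT PRIMARY KEY) ENGINE=MyISAM;"
```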
1.3 Differences Between PXC and Replication
| Replication | PXC |
| --- | --- |
| Data flows one way: the master handles writes and replicates asynchronously to the slaves; data written on a slave is not replicated back to the master | Data flows both ways: data written on any MySQL node is synchronized to every other node in the cluster |
| Asynchronous replication; consistency between master and slaves cannot be guaranteed | Synchronous replication; a transaction either commits on all cluster nodes or on none of them |
1.4 Ports Commonly Used by PXC
- 3306: the port on which the database serves clients.
- 4444: the port used for SST requests (State Snapshot Transfer, a full data copy).
- 4567: the port used for communication between group members.
- 4568: the port used for IST (Incremental State Transfer).
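On Rocky Linux 8 these ports must be reachable between the nodes. A sketch of opening them with firewalld (assumed to be the active firewall, as on a stock install); run on every node:

```shell
# Open the PXC ports in the default zone, then reload to apply
firewall-cmd --permanent --add-port=3306/tcp   # MySQL client traffic
firewall-cmd --permanent --add-port=4444/tcp   # SST (full state transfer)
firewall-cmd --permanent --add-port=4567/tcp   # Galera group communication
firewall-cmd --permanent --add-port=4568/tcp   # IST (incremental state transfer)
firewall-cmd --reload
```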
2. Building a PXC Cluster
Percona XtraDB Cluster (PXC) is built on the Galera library, a general-purpose multi-master synchronous replication plugin for transactional workloads. It is mainly used to provide strong consistency, keeping the data on all nodes synchronized in real time while allowing simultaneous reads and writes on every node. It improves database reliability, can also support read/write splitting, and is widely regarded as one of the best cluster options for MySQL.
2.1 Environment Preparation
Prepare the following three servers. This walkthrough builds the PXC cluster on Rocky Linux 8:
| IP | Port | Role |
| --- | --- | --- |
| 192.168.93.20 | 3306 | PXC1 |
| 192.168.93.21 | 3306 | PXC2 |
| 192.168.93.22 | 3306 | PXC3 |
Configure hosts resolution (append to /etc/hosts on every node):

```
192.168.93.20 pxc1
192.168.93.21 pxc2
192.168.93.22 pxc3
```
Install PXC on all three nodes:

```shell
yum module disable mysql
yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
percona-release setup pxc-80
yum install percona-xtradb-cluster
```
2.2 Initializing the Database
Edit /etc/my.cnf and configure server_id (use a different value on each node):

```ini
[mysqld]
server-id=1
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
binlog_expire_logs_seconds=604800 # binlog expiration in seconds (~7 days)
```
Change the default password. After the first start of the service, a temporary root password is written to the error log; retrieve it, log in with it, and set a new password:

```shell
[root@Rocky-1 ~]# grep -i password /var/log/mysqld.log
2025-05-12T02:56:24.185472Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: 7ykhy4y6uF,s
```

```sql
mysql> alter user root@localhost identified by 'Mysql@123';
Query OK, 0 rows affected (0.01 sec)
```
2.3 Creating the Cluster Nodes
Edit the configuration file on each node (adjust wsrep_node_name per node: pxc1, pxc2, pxc3):

```ini
# Binary log expiration period is 604800 seconds, which equals 7 days
binlog_expire_logs_seconds=604800

######## wsrep ###############
# Path to Galera library
wsrep_provider=/usr/lib64/galera4/libgalera_smm.so

# Cluster connection URL contains IPs of nodes
# If no IP is found, this implies that a new cluster needs to be created,
# in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://192.168.93.20,192.168.93.21,192.168.93.22

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# Slave threads to use
wsrep_slave_threads=8

wsrep_log_conflicts

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node IP address
#wsrep_node_address=192.168.70.63

# Cluster name
wsrep_cluster_name=pxc-cluster

# If wsrep_node_name is not specified, then system hostname will be used
wsrep_node_name=pxc1

# pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=ENFORCING

# SST method
wsrep_sst_method=xtrabackup-v2
```
Copy the *.pem files from the bootstrap node's MySQL data directory to the other servers.

By default, the system variable pxc_encrypt_cluster_traffic is set to ON, meaning all cluster traffic is protected with certificates, so every node must use the same key and certificate files. During the initialization triggered by systemctl start mysql, default keys and certificates are generated and placed in the data directory (datadir), which leaves each node with its own, mismatched files. Therefore, delete all *.pem files on the out-of-sync nodes and copy the *.pem files from the data directory of one healthy node to them:

```shell
scp /var/lib/mysql/*.pem pxc2:/var/lib/mysql/
scp /var/lib/mysql/*.pem pxc3:/var/lib/mysql/
```

After copying, check that the files are owned by the mysql user and group; if not, change the ownership:

```shell
[root@Rocky-2 mysql]# chown mysql:mysql *.pem
```
2.4 Starting the Cluster
The first node must be started in bootstrap mode:

```shell
systemctl start mysql@bootstrap.service
```

On the second and third nodes, start the database service normally:

```shell
systemctl start mysql
```
Check the cluster status:

```sql
mysql> show status like 'wsrep%';
+----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------+
| Variable_name | Value |
+----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------+
| wsrep_local_state_uuid | b37f8a6e-2edc-11f0-859b-c7d0dd443ae8 |
| wsrep_protocol_version | 11 |
| wsrep_last_applied | 8 |
| wsrep_protocol_application | 4 |
| wsrep_protocol_replicator | 11 |
| wsrep_protocol_GCS | 5 |
| wsrep_last_committed | 8 |
| wsrep_monitor_status (L/A/C) | [ (15, 15), (8, 8), (8, 8) ] |
| wsrep_replicated | 1 |
| wsrep_replicated_bytes | 240 |
| wsrep_repl_keys | 1 |
| wsrep_repl_keys_bytes | 32 |
| wsrep_repl_data_bytes | 141 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 14 |
| wsrep_received_bytes | 1760 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 2 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0.0714286 |
| wsrep_local_cached_downto | 1 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_flow_control_active | false |
| wsrep_flow_control_requested | false |
| wsrep_flow_control_interval | [ 173, 173 ] |
| wsrep_flow_control_interval_low | 173 |
| wsrep_flow_control_interval_high | 173 |
| wsrep_flow_control_status | OFF |
| wsrep_cert_deps_distance | 1 |
| wsrep_apply_oooe | 0 |
| wsrep_apply_oool | 0 |
| wsrep_apply_window | 1 |
| wsrep_apply_waits | 0 |
| wsrep_commit_oooe | 0 |
| wsrep_commit_oool | 0 |
| wsrep_commit_window | 1 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 1 |
| wsrep_cert_bucket_count | 3 |
| wsrep_gcache_pool_size | 3928 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0 |
| wsrep_open_transactions | 0 |
| wsrep_open_connections | 0 |
| wsrep_ist_receive_status | |
| wsrep_ist_receive_seqno_start | 0 |
| wsrep_ist_receive_seqno_current | 0 |
| wsrep_ist_receive_seqno_end | 0 |
| wsrep_incoming_addresses | 192.168.93.22:3306,192.168.93.20:3306,192.168.93.21:3306 |
| wsrep_cluster_weight | 3 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | b37f2827-2edc-11f0-ae01-6a2cff187a02 |
| wsrep_gmcast_segment | 0 |
| wsrep_cluster_capabilities | |
| wsrep_cluster_conf_id | 7 |
| wsrep_cluster_size | 3 |
| wsrep_cluster_state_uuid | b37f8a6e-2edc-11f0-859b-c7d0dd443ae8 |
| wsrep_cluster_status | Primary |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 1 |
| wsrep_provider_capabilities | :MULTI_MASTER:CERTIFICATION:PARALLEL_APPLYING:TRX_REPLAY:ISOLATION:PAUSE:CAUSAL_READS:INCREMENTAL_WRITESET:UNORDERED:PREORDERED:STREAMING:NBO: |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> (modified by Percona <https://percona.com/>) |
| wsrep_provider_version | 4.21(79eda49) |
| wsrep_ready | ON |
| wsrep_thread_count | 9 |
+----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------+
82 rows in set (0.01 sec)
```
You can monitor synchronization status through the performance_schema.pxc_cluster_view view, or with the dstat tool:

```sql
mysql> select * from performance_schema.pxc_cluster_view;
+-----------+--------------------------------------+--------+-------------+---------+
| HOST_NAME | UUID                                 | STATUS | LOCAL_INDEX | SEGMENT |
+-----------+--------------------------------------+--------+-------------+---------+
| pxc3      | 4663933f-2edf-11f0-84fb-bb398399f763 | SYNCED |           0 |       0 |
| pxc1      | b37f2827-2edc-11f0-ae01-6a2cff187a02 | SYNCED |           1 |       0 |
| pxc2      | b37fe5b6-2edf-11f0-9c31-d7116dec32db | SYNCED |           2 |       0 |
+-----------+--------------------------------------+--------+-------------+---------+
3 rows in set (0.00 sec)
```
Check the PXC cluster status information:

```sql
mysql> show status like 'wsrep_cluster%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_cluster_weight       | 3                                    |
| wsrep_cluster_capabilities |                                      |
| wsrep_cluster_conf_id      | 7                                    |
| wsrep_cluster_size         | 3                                    |
| wsrep_cluster_state_uuid   | b37f8a6e-2edc-11f0-859b-c7d0dd443ae8 |
| wsrep_cluster_status       | Primary                              |
+----------------------------+--------------------------------------+
6 rows in set (0.00 sec)
```

```sql
mysql> show status where Variable_name in ('wsrep_cluster_size','wsrep_cluster_status','wsrep_connected','wsrep_ready');
+----------------------+---------+
| Variable_name        | Value   |
+----------------------+---------+
| wsrep_cluster_size   | 3       |
| wsrep_cluster_status | Primary |
| wsrep_connected      | ON      |
| wsrep_ready          | ON      |
+----------------------+---------+
4 rows in set (0.00 sec)
```
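Once every node reports Synced, replication can be verified end to end by writing on one node and reading on another. A sketch, assuming the hostnames from the hosts file above; the `demo` schema and table are made-up names:

```shell
# On pxc1: create a test table (with a primary key, as PXC requires) and insert a row
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS demo;
  CREATE TABLE demo.t1 (id INT PRIMARY KEY, msg VARCHAR(32));
  INSERT INTO demo.t1 VALUES (1, 'hello pxc');"

# On pxc2 or pxc3: the row should already be present, replicated synchronously
mysql -u root -p -e "SELECT * FROM demo.t1;"
```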
2.5 Taking Nodes Offline
PXC allows nodes to be taken offline dynamically, but note that a node must be stopped with the counterpart of the command used to start it.
The first node, started in bootstrap mode, must also be stopped in bootstrap mode:

```shell
systemctl stop mysql@bootstrap.service
```

The other nodes can be stopped normally:

```shell
systemctl stop mysql
```
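If the whole cluster is ever shut down, Galera records which node holds the most recent state, and that node should be the one restarted in bootstrap mode. A sketch of how to check, using Galera's grastate.dat state file in the data directory:

```shell
# Inspect Galera's saved state on each node; the node with safe_to_bootstrap: 1
# (normally the last one shut down cleanly) is the one to start via mysql@bootstrap.service
cat /var/lib/mysql/grastate.dat

# Fields of interest in grastate.dat:
#   uuid:              the cluster state UUID
#   seqno:             last committed sequence number (-1 after an unclean shutdown)
#   safe_to_bootstrap: 1 on the safe bootstrap candidate, 0 elsewhere
```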