
MySQL Native High Availability with MGR (Part 2)


Preface:


Continuing from the previous article on the principles and implementation of the MySQL Group Replication high-availability architecture: MGR runs in single-primary mode by default. So, starting from single-primary mode, how do we switch to multi-primary mode?
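Before doing the switch, you can confirm which mode the group is currently running in by reading two group replication system variables (a quick check added here for convenience; it is not part of the original walkthrough):

-- ON means single-primary mode; OFF together with update-everywhere checks means multi-primary mode
SELECT @@global.group_replication_single_primary_mode,
       @@global.group_replication_enforce_update_everywhere_checks;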

1 Switching from single-primary to multi-primary mode

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 39bcdd1b-dfb5-11eb-bc6a-0050568522f1 | ceshi       |        3306 | ONLINE       | SECONDARY   | 8.0.21         |
| group_replication_applier | 52281b80-dee5-11eb-9289-005056be91d3 | db02        |        3306 | ONLINE       | PRIMARY     | 8.0.21         |
| group_replication_applier | 7e0649b3-ded7-11eb-9dd0-005056bef803 | db01        |        3306 | ONLINE       | SECONDARY   | 8.0.21         |
| group_replication_applier | ed5d5311-dfa6-11eb-8ede-005056be12a2 | xunjian     |        3306 | ONLINE       | SECONDARY   | 8.0.21         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
4 rows in set (0.00 sec)

mysql> SELECT group_replication_switch_to_multi_primary_mode();
+--------------------------------------------------+
| group_replication_switch_to_multi_primary_mode() |
+--------------------------------------------------+
| Mode switched to multi-primary successfully.     |
+--------------------------------------------------+
1 row in set (1.00 sec)

With the command above, the group is now running in multi-primary mode.

1.1 Verifying the switch from single-primary to multi-primary

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 39bcdd1b-dfb5-11eb-bc6a-0050568522f1 | ceshi       |        3306 | ONLINE       | PRIMARY     | 8.0.21         |
| group_replication_applier | 52281b80-dee5-11eb-9289-005056be91d3 | db02        |        3306 | ONLINE       | PRIMARY     | 8.0.21         |
| group_replication_applier | 7e0649b3-ded7-11eb-9dd0-005056bef803 | db01        |        3306 | ONLINE       | PRIMARY     | 8.0.21         |
| group_replication_applier | ed5d5311-dfa6-11eb-8ede-005056be12a2 | xunjian     |        3306 | ONLINE       | PRIMARY     | 8.0.21         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+

Switching the primary while in single-primary mode:

mysql> SELECT group_replication_set_as_primary('7e0649b3-ded7-11eb-9dd0-005056bef803');
+--------------------------------------------------------------------------+
| group_replication_set_as_primary('7e0649b3-ded7-11eb-9dd0-005056bef803') |
+--------------------------------------------------------------------------+
| Primary server switched to: 7e0649b3-ded7-11eb-9dd0-005056bef803         |
+--------------------------------------------------------------------------+
1 row in set (1.02 sec)
1.2 Switching from multi-primary back to single-primary
mysql> SELECT group_replication_switch_to_single_primary_mode();
+---------------------------------------------------+
| group_replication_switch_to_single_primary_mode() |
+---------------------------------------------------+
| Mode switched to single-primary successfully.     |
+---------------------------------------------------+
1 row in set (1.12 sec)

Check the membership after switching from multi-primary back to single-primary:

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 39bcdd1b-dfb5-11eb-bc6a-0050568522f1 | ceshi       |        3306 | ONLINE       | PRIMARY     | 8.0.21         |
| group_replication_applier | 52281b80-dee5-11eb-9289-005056be91d3 | db02        |        3306 | ONLINE       | SECONDARY   | 8.0.21         |
| group_replication_applier | 7e0649b3-ded7-11eb-9dd0-005056bef803 | db01        |        3306 | ONLINE       | SECONDARY   | 8.0.21         |
| group_replication_applier | ed5d5311-dfa6-11eb-8ede-005056be12a2 | xunjian     |        3306 | ONLINE       | SECONDARY   | 8.0.21         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
1.3 Choosing a specific host as the primary when switching from multi-primary back to single-primary
Look up the uuid of the host you want to become the primary:

mysql> select @@server_uuid;
+--------------------------------------+
| @@server_uuid                        |
+--------------------------------------+
| 52281b80-dee5-11eb-9289-005056be91d3 |
+--------------------------------------+
1 row in set (0.00 sec)

Switch to that host as the primary:

mysql> SELECT group_replication_switch_to_single_primary_mode('52281b80-dee5-11eb-9289-005056be91d3');
+-----------------------------------------------------------------------------------------+
| group_replication_switch_to_single_primary_mode('52281b80-dee5-11eb-9289-005056be91d3') |
+-----------------------------------------------------------------------------------------+
| Mode switched to single-primary successfully.                                           |
+-----------------------------------------------------------------------------------------+
1 row in set (1.01 sec)
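After any of these switches, a quick way to confirm which member currently holds the primary role is to filter the membership table (a convenience query, not from the original article):

SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_ROLE
FROM performance_schema.replication_group_members
WHERE MEMBER_ROLE = 'PRIMARY';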
2 Adding a new node from outside the cluster

Next, let's switch on god mode.

2.1 Data recovery methods for a newly joining node

Since MySQL 8.0.17, MySQL ships a clone plugin, and its arrival is like a shot of superpowers for MGR. The first data recovery method for MGR is therefore the clone plugin; it must be loaded on every node in the group, and on the newly joining node as well. The second method is recovering the new node's data by replaying binary logs in the traditional way.

Next, let's look at how each of these two recovery methods works:

2.1.1 Clone-based recovery

1. Every node in the group must have the clone plugin installed, and so must the new node.
2. The group members acting as donor and the newly joining node must run on the same operating system, and all of them must be MySQL 8.0.17 or later.
3. Clone can compress the data it transfers (controlled by clone_enable_compression).

4. Grant the replication user the BACKUP_ADMIN privilege.

mysql> grant backup_admin on *.* to repl_mgr@'%';
Query OK, 0 rows affected (0.01 sec)

5. group_replication_clone_threshold controls how large the GTID gap between the group and the joining node must be before clone is used.

Do not set this value too low: while the new node restarts after the clone finishes, the group keeps accepting new transactions, and if the node finds on restart that its gap from the group again exceeds the threshold, it will run clone again. Pick a reasonable value to avoid this loop.

This parameter must be configured on both the existing group members and the joining node before the new node joins; after setting it (in the configuration file), a restart is required. See also the sketch after the note below.

Note: the clone method wipes all existing data in the new node's data directory.
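Supplementing point 5 above: group_replication_clone_threshold is a dynamic system variable, so it can also be set from the client. A minimal sketch of checking and persisting it (the value 1 mirrors the test setup used later in this article and is not a production recommendation):

-- show the current threshold
SELECT @@global.group_replication_clone_threshold;

-- set it and persist it across restarts (test value, not for production)
SET PERSIST group_replication_clone_threshold = 1;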

2.1.2 Recovery via binary logs

With the traditional binary log recovery method, a set of relay log files for the group_replication_recovery channel is created under the data directory.

MGR chooses the recovery method automatically: if the data gap is large, or the required binary logs are no longer available, it falls back to clone.
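To get a sense of the gap that drives this choice, you can compare GTID sets yourself with the built-in GTID functions. A sketch run on an existing member (the quoted set stands in for the joiner's own gtid_executed; the value shown is simply the example set that appears later in clone_status):

-- run on an existing member: transactions the joiner is still missing
-- (replace the quoted set with the joiner's @@global.gtid_executed)
SELECT GTID_SUBTRACT(@@global.gtid_executed,
                     'ea559632-ded2-11eb-8c25-0050568522f1:1-19') AS missing_gtids;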

Note: if the clone plugin is not available, bring the node's data up to date first (for example with Percona XtraBackup or mysqldump) and only then join it to the group.

3 Adding a new node via clone

Since this feature is available from 8.0.17 onward, let's use it to add a new node. Let's go!

3.1 Environment preparation

Load the clone plugin on all nodes. Since we are currently in single-primary mode, super_read_only is enabled on the secondaries.

3.1.1 Installing the plugin

On the primary node, run:

INSTALL PLUGIN clone SONAME 'mysql_clone.so';

On each secondary node, run:

mysql> stop group_replication;
Query OK, 0 rows affected (4.49 sec)

mysql> set sql_log_bin=0;
Query OK, 0 rows affected (0.00 sec)

mysql> set global super_read_only=0;
Query OK, 0 rows affected (0.00 sec)

mysql> INSTALL PLUGIN clone SONAME 'mysql_clone.so';
Query OK, 0 rows affected (0.01 sec)

mysql> set global super_read_only=1;
Query OK, 0 rows affected (0.00 sec)

mysql> set sql_log_bin=1;
Query OK, 0 rows affected (0.00 sec)

mysql> start group_replication;
Query OK, 0 rows affected (2.54 sec)

On the newly joining node, run:

INSTALL PLUGIN clone SONAME 'mysql_clone.so';
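To confirm the plugin actually loaded on each node, you can check information_schema.PLUGINS (a verification step added here; it is not in the original article):

SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_NAME = 'clone';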
3.1.2 Updating the IP whitelist
Starting from the primary, run the following on every node in the group, one node at a time (all existing members must run it, in turn):

mysql> stop group_replication;
Query OK, 0 rows affected (4.09 sec)

mysql> set global group_replication_ip_whitelist="10.10.119.0/24,10.55.2.152";
Query OK, 0 rows affected (0.00 sec)

mysql> start group_replication;
Query OK, 0 rows affected (2.62 sec)
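To double-check that the new value took effect on each member, read the variable back (a convenience check, not in the original). Note that from MySQL 8.0.22 this variable is renamed group_replication_ip_allowlist; the name used here matches the 8.0.21 servers in this article.

SELECT @@global.group_replication_ip_whitelist;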
3.1.3 Setting parameters
Set the threshold parameter that triggers clone:

group_replication_clone_threshold=1    # not a production value

Parameter description: it defines how large the GTID gap between the group and a newly joining node must be before clone is used. We set it very low here so that the new node is sure to trigger clone.
4 Adding the new node to the group

4.1 Initialize the new node and write its configuration file the same way as for the other secondaries
Once the new node is up, connect to it and run:

CHANGE MASTER TO MASTER_USER='repl_mgr', MASTER_PASSWORD='123' FOR CHANNEL 'group_replication_recovery';
start GROUP_REPLICATION;

Note: repl_mgr needs the BACKUP_ADMIN privilege; check whether this user in your own group has actually been granted it.
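A short sketch of checking, and if necessary granting, that privilege on an existing member (user name and host follow the values used in this article; adjust them to your own environment):

-- verify what the recovery user can do
SHOW GRANTS FOR 'repl_mgr'@'%';

-- if BACKUP_ADMIN is missing, grant it
GRANT BACKUP_ADMIN ON *.* TO 'repl_mgr'@'%';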

4.2 Checking the new node's log:

2021-07-08T06:40:59.633390Z 16 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2021-07-08T06:41:02.656022Z 8 [System] [MY-011511] [Repl] Plugin group_replication reported: 'This server is working as secondary member with primary member address db02:3306.'
2021-07-08T06:41:03.656499Z 0 [Warning] [MY-013469] [Repl] Plugin group_replication reported: 'This member will start distributed recovery using clone. It is due to the number of missing transactions being higher than the configured threshold of 1.'
2021-07-08T06:41:04.657963Z 0 [System] [MY-013471] [Repl] Plugin group_replication reported: 'Distributed recovery will transfer data using: Cloning from a remote group donor.'
2021-07-08T06:41:04.658264Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to ceshi:3306, db02:3306, db01:3306, xunjian:3306 on view 16257187819268871:30.'
2021-07-08T06:41:04.658531Z 25 [System] [MY-011566] [Repl] Plugin group_replication reported: 'Setting super_read_only=OFF.'
2021-07-08T06:41:04.726017Z 25 [Warning] [MY-013460] [InnoDB] Clone removing all user data for provisioning: Started
2021-07-08T06:41:04.957919Z 25 [Warning] [MY-013460] [InnoDB] Clone removing all user data for provisioning: Finished
2021-07-08T06:42:31.245022Z 0 [Warning] [MY-010909] [Server] /data/app/mysql/bin/mysqld: Forcing close of thread 8  user: 'root'.
2021-07-08T06:42:34.666608Z 0 [System] [MY-011504] [Repl] Plugin group_replication reported: 'Group membership changed: This member has left the group.'
2021-07-08T06:42:34.667641Z 0 [System] [MY-011651] [Repl] Plugin group_replication reported: 'Plugin 'group_replication' has been stopped.'
2021-07-08T06:42:35.555729Z 0 [System] [MY-010910] [Server] /data/app/mysql/bin/mysqld: Shutdown complete (mysqld 8.0.21)  MySQL Community Server - GPL.
2021-07-08T06:42:35.978378Z 0 [System] [MY-010116] [Server] /data/app/mysql/bin/mysqld (mysqld 8.0.21) starting as process 18299
2021-07-08T06:42:35.990723Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-07-08T06:42:36.846598Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-07-08T06:42:37.065407Z 0 [ERROR] [MY-013180] [Server] Function 'group_replication' already exists.
2021-07-08T06:42:37.065683Z 0 [Warning] [MY-010736] [Server] Couldn't load plugin named 'group_replication' with soname 'group_replication.so'.
2021-07-08T06:42:37.069374Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /tmp/mysqlx.sock
2021-07-08T06:42:37.173331Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2021-07-08T06:42:37.173653Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2021-07-08T06:42:37.195617Z 0 [Warning] [MY-010604] [Repl] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=ceshi-relay-bin' to avoid this problem.
2021-07-08T06:42:37.243899Z 0 [System] [MY-010931] [Server] /data/app/mysql/bin/mysqld: ready for connections. Version: '8.0.21'  socket: '/tmp/mysql.sock'  port: 3306  MySQL Community Server - GPL.
2021-07-08T06:43:35.250958Z 8 [System] [MY-013587] [Repl] Plugin group_replication reported: 'Plugin 'group_replication' is starting.'
2021-07-08T06:43:35.252111Z 9 [System] [MY-011565] [Repl] Plugin group_replication reported: 'Setting super_read_only=ON.'
2021-07-08T06:43:35.255389Z 8 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Automatically adding IPv4 localhost address to the whitelist. It is mandatory that it is added.'
2021-07-08T06:43:35.255416Z 8 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Automatically adding IPv6 localhost address to the whitelist. It is mandatory that it is added.'
2021-07-08T06:43:35.265604Z 10 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.

The log shows the clone workflow and, at the end, that the clone has completed. You can also check the clone status in this table:

mysql> select * from performance_schema.clone_status\G
*************************** 1. row ***************************
             ID: 1
            PID: 0
          STATE: Completed
     BEGIN_TIME: 2021-07-08 14:41:04.661
       END_TIME: 2021-07-08 14:42:36.846
         SOURCE: db01:3306
    DESTINATION: LOCAL INSTANCE
       ERROR_NO: 0
  ERROR_MESSAGE:
    BINLOG_FILE: binlog.000003
BINLOG_POSITION: 4053
  GTID_EXECUTED: ea559632-ded2-11eb-8c25-0050568522f1:1-19
1 row in set (0.00 sec)
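While a clone is still in progress, performance_schema.clone_progress offers a stage-by-stage view (a supplementary query, not shown in the original):

SELECT STAGE, STATE, ESTIMATE, DATA, NETWORK
FROM performance_schema.clone_progress;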

4.3 Joining the new node to the group:

mysql> set sql_log_bin=0;
Query OK, 0 rows affected (0.00 sec)

mysql> set global group_replication_ip_whitelist="10.10.119.0/24,10.55.2.152";
Query OK, 0 rows affected (0.00 sec)

mysql> set sql_log_bin=1;
Query OK, 0 rows affected (0.00 sec)

mysql> start group_replication;
Query OK, 0 rows affected (3.20 sec)

At this point the new node has formally joined the group. Check the log now:

2021-07-08T06:43:35.250958Z 8 [System] [MY-013587] [Repl] Plugin group_replication reported: 'Plugin 'group_replication' is starting.'
2021-07-08T06:43:35.252111Z 9 [System] [MY-011565] [Repl] Plugin group_replication reported: 'Setting super_read_only=ON.'
2021-07-08T06:43:35.255389Z 8 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Automatically adding IPv4 localhost address to the whitelist. It is mandatory that it is added.'
2021-07-08T06:43:35.255416Z 8 [Warning] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Automatically adding IPv6 localhost address to the whitelist. It is mandatory that it is added.'
2021-07-08T06:43:35.265604Z 10 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2021-07-08T06:43:37.446562Z 8 [System] [MY-011511] [Repl] Plugin group_replication reported: 'This server is working as secondary member with primary member address db02:3306.'
2021-07-08T06:43:38.447100Z 0 [System] [MY-013471] [Repl] Plugin group_replication reported: 'Distributed recovery will transfer data using: Incremental recovery from a group donor'
2021-07-08T06:43:38.447451Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to ceshi:3306, db02:3306, db01:3306, xunjian:3306 on view 16257187819268871:32.'
2021-07-08T06:43:38.462168Z 17 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='db01', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2021-07-08T06:43:38.481712Z 18 [Warning] [MY-010897] [Repl] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2021-07-08T06:43:38.488892Z 18 [System] [MY-010562] [Repl] Slave I/O thread for channel 'group_replication_recovery': connected to master 'repl_mgr@db01:3306',replication started in log 'FIRST' at position 4
2021-07-08T06:43:38.547899Z 17 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='db01', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2021-07-08T06:43:38.569929Z 0 [System] [MY-011490] [Repl] Plugin group_replication reported: 'This server was declared online within the replication group.'

In the log you can see the line:

This server was declared online within the replication group

which confirms the node has joined the group successfully. The group now has four members. Check the membership:

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 39bcdd1b-dfb5-11eb-bc6a-0050568522f1 | ceshi       |        3306 | ONLINE       | SECONDARY   | 8.0.21         |
| group_replication_applier | 52281b80-dee5-11eb-9289-005056be91d3 | db02        |        3306 | ONLINE       | PRIMARY     | 8.0.21         |
| group_replication_applier | 7e0649b3-ded7-11eb-9dd0-005056bef803 | db01        |        3306 | ONLINE       | SECONDARY   | 8.0.21         |
| group_replication_applier | ed5d5311-dfa6-11eb-8ede-005056be12a2 | xunjian     |        3306 | ONLINE       | SECONDARY   | 8.0.21         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+

At this point, the new node has successfully joined the cluster.
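As a final sanity check (not part of the original write-up), performance_schema.replication_group_member_stats shows per-member queue counters, which should stay near zero once the new node has caught up:

SELECT MEMBER_ID,
       COUNT_TRANSACTIONS_IN_QUEUE,
       COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE
FROM performance_schema.replication_group_member_stats;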
