Understanding Group Replication and InnoDB Cluster
In this blog post, I’d like to show some of the relationships between “Group Replication” and “InnoDB Cluster”. My hope is that I will be able to draw meaningful correlations and simplify our way of thinking about these setups.
The MySQL development team released the first GA release of Group Replication in MySQL 5.7.17, which turns the InnoDB storage engine into a distributed database platform. Since then, 4 patch releases of 5.7 have been produced, with various new GR capabilities added in each one. In one of my early blogs, I mentioned the minimal set of SQL commands needed to configure the environment. That blog represents the least that needs to be done to build a Group Replication setup. In fact, if you deviate and over-complicate it instead of staying on the path, the cluster doesn’t come together so easily. …Imagine that.
The steps in brief:
- Have a configuration file in place with all expected Group Replication settings, including the group members listed in the group seeds configuration
- Build a new set of database instance files (with something like mysqld --initialize), start the instance, turn off super_read_only (if on; that still leaves the instance in read_only), and reset the MySQL root user password (with binary logging disabled)
- Configure the replication channel group_replication_recovery using the database user created in step 5
- Enable the group replication bootstrap configuration, then stop & start group replication, and finally disable the bootstrap config
- Create the replication database user that was utilized in step 3 above
- Create additional members using only steps (1), (2), (3)…then just (re)start group replication
Seems easy enough, right? …sigh.
Simplify The Complex
The reality is, it’s a complex system…and it’s complex because it handles, by itself, the even more complex tasks of auto-failover and transaction-safe multi-master writes. Thankfully, the MySQL development team has been building tools to abstract away that complexity. The introduction of the MySQL Shell and the revamped MySQL Router act as the initial set of core tools to take on this task of building out a Group Replication (GR) distributed setup on our behalf (sort of). The tools change the approach so much that when building this way, we call it an InnoDB Cluster (IC).
Building a Cluster, GR vs. IC – Compared
Reminder, GR is a plugin on MySQL that enables distributed write capability on MySQL InnoDB tables. Generally speaking, when configured properly and used in accordance with its limitations, it is a self-contained capability. Let’s look at those 6 “steps in brief” noted above, and compare what’s involved for GR vs. IC:
Steps 1 & 2: Configuring a GR setup vs. InnoDB Cluster (IC)
Step 2 first:
It’s worth noting here that initializing the database instance’s files can be done before or after getting our fully defined configuration in place. This is true for the core GR/IC setups at least…some other settings “may” need to be established before initializing the files.
InnoDB Cluster will require the database instance’s files to be built first, with the root user already protected by an intentionally set password so that the account is unlocked.
You will also need an admin-like user that can connect locally over 127.0.0.1, and another admin-like account (it can have the same name) to connect remotely to the other servers in your cluster.
These extra accounts aren’t “needed” for a GR-only setup! …but they’re useful for practical management of your group either way.
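For reference, creating those admin-like accounts “might” look something like this. The ‘admin’ name, the password, and the 192.168.56.% subnet are simply the examples used throughout this post; adjust them for your environment:

```sql
-- hypothetical admin accounts: one for local Shell work, one for remote management
SET SQL_LOG_BIN=0;  -- keep these account changes out of the binary log / GTIDs
CREATE USER 'admin'@'127.0.0.1' IDENTIFIED BY '#GoodP@ssw0rdhErE';
GRANT ALL PRIVILEGES ON *.* TO 'admin'@'127.0.0.1' WITH GRANT OPTION;
CREATE USER 'admin'@'192.168.56.%' IDENTIFIED BY '#GoodP@ssw0rdhErE';
GRANT ALL PRIVILEGES ON *.* TO 'admin'@'192.168.56.%' WITH GRANT OPTION;
SET SQL_LOG_BIN=1;
```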
Adding super_read_only=ON to your configuration is very important when you are building a GR or IC setup. Both will deal with it being on (as of MySQL 5.7.20), and it will protect your clustered setups.
Just remember to disable it when resetting your root password and adding any initial user accounts.
Interesting read_only fact:
Setting super_read_only=ON also sets read_only=ON.
Setting super_read_only=OFF still leaves read_only=ON.
But setting read_only=OFF will also set super_read_only=OFF.
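A quick sketch of that behavior from a privileged client session:

```sql
SET GLOBAL super_read_only = ON;
SELECT @@GLOBAL.read_only, @@GLOBAL.super_read_only;  -- both are now ON (1, 1)
SET GLOBAL super_read_only = OFF;
SELECT @@GLOBAL.read_only, @@GLOBAL.super_read_only;  -- read_only stays ON (1, 0)
SET GLOBAL read_only = OFF;
SELECT @@GLOBAL.read_only, @@GLOBAL.super_read_only;  -- both are now OFF (0, 0)
```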
Also remember to disable logging with SET SQL_LOG_BIN=0; …otherwise, reset your GTIDs on each server as part of your build process:
SET SQL_LOG_BIN=0;
SET GLOBAL SUPER_READ_ONLY=OFF;
ALTER USER USER() IDENTIFIED BY '#GoodP@ssw0rdhErE';
## if not using the first command above, then use this below after you are done
RESET MASTER;
Step 1: GR’s setup (and a review of the configuration entries):
All members in any MySQL replication topology need a unique “server_id” value.
Many of the other configurations listed here are also required for GR or IC to work properly.
server_id=1 #remember all members in any replication setup needs a unique server_id >0
port=3306
log_bin=mysql-bin # name or no name, binary logging is required
log_slave_updates=ON
binlog_checksum=NONE
enforce_gtid_consistency=ON
gtid_mode=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
transaction_write_set_extraction=XXHASH64
super_read_only=ON
Listed first below is the process of enabling the group_replication plugin so that the GR functionality is available to the server.
Normally with MySQL, each instance has its own unique hash ID (its server UUID), and that’s how we identify which server wrote which GTID transactions. GR members (when clustered) write transactions under a single shared group name (also a UUID), so all group members need the setting below.
plugin_load_add ="group_replication.so"
group_replication_group_name = 550fa9ee-a1f8-4b6d-9bfe-c03c12cd1c72
# generate your own cluster ID on Linux with `uuidgen -t`, all members use this config
Here are a collection of GR configurations that add to the core of how members know “who’s who in the cluster”.
“Group Seeds” are a list of possible candidates that can sponsor a new member coming online to provide it with transactions it doesn’t have.
To note: I have listed all 3 members in my group_seeds configuration below, whereas only the 2 remote members are needed. This is more a matter of form: keeping configuration files different only where it matters. Having the local member’s own entry in its seed configuration doesn’t “seem” to cause any problems.
The “Local Address” is the reference by which the other members will know the current instance.
Both Group Seeds and Local Address identify a network port that XCOM (member communication) occurs on…different from the typical MySQL port of 3306. Choose an XCOM port that works for your network (some networks may have different port numbering rules). It’s a good practice for all members to use the same XCOM port.
The “Start on Boot” configuration is put in for good measure – we don’t want to startup a member without GR’s service coming online. This also aligns with the benefit of having super_read_only=ON in your config file as the GR plugin will manage it.
group_replication_start_on_boot = ON
group_replication_local_address = '192.168.56.128:13306'
group_replication_group_seeds = '192.168.56.127:13306,192.168.56.128:13306,192.168.56.129:13306'
The GR configuration documentation page from the MySQL team discusses all of the configurations needed for your GR setup. It is particularly useful as things evolve beyond the time of this blog.
Step 1: IC’s setup approach
InnoDB Cluster is still dependent on the same configurations as above…as both systems use GR. However, we can use JavaScript or Python commands to configure and implement it all for us!
This URL to the documentation for “Working with InnoDB Cluster” provides many of the commands we would need to use.
The 2 little commands below will be used at the beginning of our setup, and again at the very end.
The core commands to configure most things needed for IC
[root@gr127 gr-files]# mysqlsh
...
## verifies if our instance is missing any substantial GR/IC configuration setups
## this command also needs a 127.0.0.1 host reference
mysql-js> dba.checkInstanceConfiguration('admin@gr127:3306')
...
## if so, then we can use an OS user that has file-system write permission on our my.cnf
## file to launch the MySQL Shell and run this command below (let's see what happens in MySQL 8.0 with this)
mysql-js> dba.configureLocalInstance('admin@gr127:3306')
The core commands in action:
[root@gr127 gr-files]# mysqlsh
MySQL Shell 1.0.11
...
Currently in JavaScript mode. Use \sql to switch to SQL mode and execute queries.
mysql-js> dba.checkInstanceConfiguration('admin@gr127:3306')
Please provide the password for 'admin@gr127:3306':
Validating instance...
The instance 'gr127:3306' is not valid for Cluster usage.
The following issues were encountered:
- Some configuration options need to be fixed.
+----------------------------------+---------------+----------------+--------------------------------------------------+
| Variable | Current Value | Required Value | Note |
+----------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum | CRC32 | NONE | Update the server variable or restart the server |
| transaction_write_set_extraction | OFF | XXHASH64 | Restart the server |
+----------------------------------+---------------+----------------+--------------------------------------------------+
Please fix these issues, restart the server and try again.
....
mysql-js> dba.configureLocalInstance('admin@gr127:3306')
Please provide the password for 'admin@gr127:3306':
Detecting the configuration file...
Found configuration file at standard location: /etc/my.cnf
Do you want to modify this file? [Y|n]: Y
...
Validating instance...
The configuration has been updated but it is required to restart the server.
{
"config_errors": [
{
"action": "restart",
"current": "OFF",
"option": "transaction_write_set_extraction",
"required": "XXHASH64"
}
],
"errors": [],
"restart_required": true,
"status": "error"
}
mysql-js> \q
Bye!
[root@gr127 gr-files]# systemctl restart mysqld
[root@gr127 gr-files]# mysqlsh
MySQL Shell 1.0.11
...
Currently in JavaScript mode. Use \sql to switch to SQL mode and execute queries.
mysql-js> dba.checkInstanceConfiguration('admin@gr127:3306')
Please provide the password for 'admin@gr127:3306':
Validating instance...
The instance 'gr127:3306' is valid for Cluster usage
{
"status": "ok"
}
mysql-js>
Now remember to run these commands first on all the members you’d like to add to your IC setup. That way you’ll be certain that the configurations are ready to get the cluster going when it’s time.
Steps 3, 4 & 5: Configuring a GR setup vs. InnoDB Cluster (IC)
GR’s setup (and a review of the configuration entries):
There is very little that needs to be done to get a GR setup up and running once you’ve got all the members configured.
The activity for “step 3” is to configure the group_replication_recovery channel as below. This is the channel subsequent members will use to retrieve missed transactions from a donor member.
To “initiate” a NEW GR cluster setup, after the first member instance starts up, you need to use the following SQL statements to bootstrap the Group’s clustering capabilities:
-- Step 3: establish the recovery channel for subsequent members
CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='rpl_pass' FOR CHANNEL 'group_replication_recovery';
-- Step 4: bootstrap the initial member
SET group_replication_bootstrap_group=ON;
STOP GROUP_REPLICATION; # GR is set to start on boot by default, so the plugin just needs to be recycled
START GROUP_REPLICATION;
SET group_replication_bootstrap_group=OFF;
-- Step 5: create the user account for the replication setup, something like below
CREATE USER 'rpl_user'@'192.168.56.%' IDENTIFIED BY 'rpl_pass';
GRANT REPLICATION SLAVE ON *.* TO 'rpl_user'@'192.168.56.%';
Then create your replication user account tied to the group_replication_recovery channel. By creating the account after the bootstrap process is done and the GR plugin is restarted, it gets replicated to the rest of the cluster automatically. But its initial usage will be for the remote members to connect to “this” bootstrapped master.
InnoDB Cluster’s cluster setup
As we move forward with building an IC setup, the MySQL Shell continues to handle our cluster configuration and setups.
The cluster is initialized using this one command: cluster=dba.createCluster('icBlog'); Here, the cluster is actually getting bootstrapped in the same way Group Replication would when it initializes its first member…that is at least one of the core things that happens here.
[root@gr127 gr-files]# mysqlsh admin@gr127:3306
Creating a Session to 'admin@gr127:3306'
Enter password:
Your MySQL connection id is 7
Server version: 5.7.21-log MySQL Community Server (GPL)
No default schema selected; type \use <schema> to set one.
MySQL Shell 1.0.11
Copyright (c) 2016, 2017, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type '\help' or '\?' for help; '\quit' to exit.
Currently in JavaScript mode. Use \sql to switch to SQL mode and execute queries.
mysql-js> var cluster=dba.createCluster('icBlog');
A new InnoDB cluster will be created on instance 'admin@gr127:3306'.
The MySQL instance at 'gr127:3306' currently has the super_read_only
system variable set to protect it from inadvertent updates from applications.
You must first unset it to be able to perform any changes to this instance.
For more information see: https://dev.mysql.com/doc/refman/en/server-system-variables.html#sysvar_super_read_only.
Note: there are open sessions to 'gr127:3306'.
You may want to kill these sessions to prevent them from performing unexpected updates:
1 open session(s) of 'admin@gr127'.
Do you want to disable super_read_only and continue? [y|N]: y
Creating InnoDB cluster 'icBlog' on 'admin@gr127:3306'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
mysql-js>
Step 6: Adding subsequent members
GR, adding additional members:
If your other GR members are configured properly (group_replication_group_seeds="IP-ADDRESS:xcom-port,hostname:xcom-port,...") …then you’re most of the way there. If they are already started up, you can either restart the instance or restart the GR plugin service as shown above.
## Either restart the database instance
[root@gr129 gr-files]# systemctl restart mysqld
## Or restart the GR service
[root@gr129 gr-files]# mysql -e "STOP GROUP_REPLICATION; START GROUP_REPLICATION;"
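Either way, you can then confirm that the member actually joined by querying the group membership from any member. This is just a verification sketch, not one of the required steps:

```sql
-- a joining member shows up as RECOVERING while it pulls missed transactions
-- over the group_replication_recovery channel, then flips to ONLINE
SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE
  FROM performance_schema.replication_group_members;
```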
IC, adding additional members:
Continuing to configure an IC cluster, adding members produces output reminiscent of the createCluster() command.
Adding 2 more members to the bootstrapped initial member
mysql-js> cluster.addInstance('admin@gr128:3306');
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Please provide the password for 'admin@gr128:3306':
Adding instance to the cluster ...
The instance 'admin@gr128:3306' was successfully added to the cluster.
mysql-js> cluster.addInstance('admin@gr129:3306');
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Please provide the password for 'admin@gr129:3306':
Adding instance to the cluster ...
The instance 'admin@gr129:3306' was successfully added to the cluster.
mysql-js> cluster.describe()
{
"clusterName": "icBlog",
"defaultReplicaSet": {
"instances": [
{
"host": "gr127:3306",
"label": "gr127:3306",
"role": "HA"
},
{
"host": "gr128:3306",
"label": "gr128:3306",
"role": "HA"
},
{
"host": "gr129:3306",
"label": "gr129:3306",
"role": "HA"
}
],
"name": "default"
}
}
mysql-js> cluster.status()
{
"clusterName": "icBlog",
"defaultReplicaSet": {
"name": "default",
"primary": "gr127:3306",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"gr127:3306": {
"address": "gr127:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"gr128:3306": {
"address": "gr128:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"gr129:3306": {
"address": "gr129:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
}
}
}
mysql-js>
** VERY IMPORTANT FINAL STEP **
It is also VERY VERY important in an InnoDB Clustered environment to go around “configuring” ALL members a 2nd time. Once the clustered setup is functioning, it needs to persist a final configuration setup in each member’s local my.cnf file.
This requires running dba.configureLocalInstance('admin@????:3306') again on each member.
[root@gr127 gr-files]# mysqlsh
MySQL Shell 1.0.11
Copyright (c) 2016, 2017, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type '\help' or '\?' for help; '\quit' to exit.
Currently in JavaScript mode. Use \sql to switch to SQL mode and execute queries.
mysql-js> dba.configureLocalInstance('admin@gr127:3306')
Please provide the password for 'admin@gr127:3306':
Detecting the configuration file...
Found configuration file at standard location: /etc/my.cnf
Do you want to modify this file? [Y|n]: Y
Validating instance...
The instance 'gr127:3306' is valid for Cluster usage
You can now use it in an InnoDB Cluster.
{
"status": "ok"
}
mysql-js> \q
Bye!
[root@gr127 gr-files]# ##### Now to verify the complete set of group_replication setups
[root@gr127 gr-files]# grep ^group_ /etc/my.cnf
group_replication_start_on_boot = ON
group_replication_local_address = gr127:13306
group_replication_group_seeds = 192.168.56.128:16601,192.168.56.129:16601
group_replication = FORCE_PLUS_PERMANENT
group_replication_allow_local_disjoint_gtids_join = OFF
group_replication_allow_local_lower_version_join = OFF
group_replication_auto_increment_increment = 7
group_replication_bootstrap_group = OFF
group_replication_components_stop_timeout = 31536000
group_replication_compression_threshold = 1000000
group_replication_enforce_update_everywhere_checks = OFF
group_replication_flow_control_applier_threshold = 25000
group_replication_flow_control_certifier_threshold = 25000
group_replication_flow_control_mode = QUOTA
group_replication_force_members
group_replication_group_name = 90d6a230-fc9e-11e7-b68e-080027dae810
group_replication_gtid_assignment_block_size = 1000000
group_replication_ip_whitelist = AUTOMATIC
group_replication_member_weight = 50
group_replication_poll_spin_loops = 0
group_replication_recovery_complete_at = TRANSACTIONS_APPLIED
group_replication_recovery_reconnect_interval = 60
group_replication_recovery_retry_count = 10
group_replication_recovery_ssl_ca
group_replication_recovery_ssl_capath
group_replication_recovery_ssl_cert
group_replication_recovery_ssl_cipher
group_replication_recovery_ssl_crl
group_replication_recovery_ssl_crlpath
group_replication_recovery_ssl_key
group_replication_recovery_ssl_verify_server_cert = OFF
group_replication_recovery_use_ssl = OFF
group_replication_single_primary_mode = ON
group_replication_ssl_mode = DISABLED
group_replication_transaction_size_limit = 0
group_replication_unreachable_majority_timeout = 0
I’m hoping that by showing the correlations of how the core Group Replication layer is involved in its configured setup, that it will give more “meaning” to what the InnoDB Cluster shell commands might be doing under the hood.
Thanks to the original author. This blog was reposted from: https://thesubtlepath.com/blog/mysql/understanding-group-replication-and-innodb-cluster/