This chapter covers MySQL InnoDB cluster, which combines various MySQL technologies to enable you to create highly available clusters of MySQL instances.
MySQL InnoDB cluster is a collection of products that work together to provide a complete High Availability solution for MySQL. A group of MySQL servers can be configured to create a cluster using MySQL Shell. In the default single-primary mode, the cluster of servers has a single read-write primary. Multiple secondary servers are replicas of the primary. Creating a cluster with at least three servers ensures a high availability cluster. A client application is connected to the primary via MySQL Router. If the primary fails, a secondary is automatically promoted to the role of primary, and MySQL Router routes requests to the new primary. Advanced users can also configure a cluster to have multiple primaries.
InnoDB cluster does not provide support for MySQL NDB Cluster.
NDB Cluster depends on the NDB storage engine as well as a number of programs specific to NDB Cluster which are not furnished with MySQL Server 5.7; NDB is available only as part of the MySQL NDB Cluster distribution. In addition, the MySQL server binary (mysqld) that is supplied with MySQL Server 5.7 cannot be used with NDB Cluster. For more information about MySQL NDB Cluster, see Chapter 21, MySQL NDB Cluster 7.5 and NDB Cluster 7.6. Section 21.1.5, “MySQL Server Using InnoDB Compared with NDB Cluster”, provides information about the differences between the InnoDB and NDB storage engines.
To provide a highly available database solution, InnoDB cluster uses the following MySQL technologies:
MySQL Shell 1.0.9 or higher. Includes the AdminAPI, which enables you to script the creation and administration of an InnoDB cluster, using either JavaScript or Python.
MySQL Router 2.1.3 or higher. Caches the metadata of the InnoDB cluster and routes read/write client requests to the current primary. If the primary instance becomes unavailable, MySQL Router automatically routes client requests to a promoted secondary (the new primary).
MySQL Server 5.7.17 or higher. This provides the MySQL Group Replication mechanism to allow data to be replicated within the cluster, with built-in failover. For more information on MySQL Group Replication, see Chapter 17, Group Replication, which explains the technical details used by InnoDB cluster.
An overview of how these technologies work together is shown in the following diagram:
For additional information about the AdminAPI available in MySQL Shell, see the JavaScript and Python MySQL Shell reference documentation.
Before installing InnoDB cluster, ensure that the server instances you intend to use meet the following requirements.
When using a sandbox deployment the instances are configured to meet these requirements automatically.
InnoDB cluster uses Group Replication and therefore your server instances must meet the same requirements. See Section 17.7.1, “Group Replication Requirements”.
In addition, the provisioning scripts that MySQL Shell uses to configure servers for use in InnoDB cluster require access to Python (2.7 and above). On Windows, MySQL Shell includes Python and no user configuration is required. On Unix, Python must be found as part of the environment. To check that your system has Python configured correctly issue:
$ /usr/bin/env python
If a Python interpreter starts, no further action is required.
If the previous command fails, create a soft link between /usr/bin/python and your chosen Python binary.
The following methods of installing InnoDB cluster are available:
Downloading and installing the components using the following documentation:
MySQL Server 5.7.17 or higher. For details, see Chapter 2, Installing and Upgrading MySQL.
MySQL Shell 1.0.9 or higher. For details, see Section 19.3.1, “Installing MySQL Shell”.
MySQL Router 2.1.3 or higher. For details, see Installation.
Using the MySQL Installer for Windows. For details, see Section 2.3.3.2.1, “InnoDB Cluster Sandbox Test Setup”.
This section explains how to set up a single-primary InnoDB cluster and configure MySQL Router to achieve high availability. You create and administer your InnoDB clusters using MySQL Shell with the included AdminAPI. Familiarity with MySQL Shell is assumed; see Chapter 18, MySQL Shell User Guide for further information.
This tutorial shows how to use MySQL Shell to create an InnoDB cluster consisting of a MySQL Server instance which provides the seed instance of the InnoDB cluster and holds the initial data set. Two more sandbox MySQL server instances are created and added to the InnoDB cluster. Then MySQL Router is deployed and used to route connections to the InnoDB cluster, and high availability is tested. The following sections walk through these steps.
MySQL Shell includes the AdminAPI, which is accessed through the dba global variable and its associated methods. The dba variable's methods enable you to administer your cluster; for example, you use the dba.deploySandboxInstance() method to add a sandbox MySQL instance.
To list all available dba commands, use the dba.help() method. You can obtain detailed information for a specific method using the general format object.help('methodname'). For example:
mysql-js> dba.help('getCluster')
Retrieves a cluster from the Metadata Store.
SYNTAX
<Dba>.getCluster([name])
WHERE
name: Parameter to specify the name of the cluster to be returned.
DESCRIPTION
If name is not specified, the default cluster will be returned.
If name is specified, and no cluster with the indicated name is found, an error
will be raised.
In addition to this documentation, there is developer documentation for all dba methods in the JavaScript and Python developer documentation.
Initially deploying and using local sandbox instances of MySQL is a good way to start your exploration of InnoDB cluster. You can fully test out InnoDB cluster locally, prior to deployment on your production servers. MySQL Shell has built-in functionality for creating sandbox instances. MySQL Shell creates the sandbox instances correctly configured to work with Group Replication in a locally deployed clustered scenario.
Sandbox instances are only suitable for deploying and running on your local machine. In a production environment the MySQL Server instances would be deployed on various hosts on the network. This is explained later in this guide.
The dba module provides several functions for administration of sandbox instances. For this example setup, you create three sandbox instances. The AdminAPI provides a function for that purpose: dba.deploySandboxInstance().
Start MySQL Shell from a command prompt by issuing the command:
shell> mysqlsh
MySQL Shell provides two scripting languages: JavaScript and Python. Throughout this guide MySQL Shell is used primarily in JavaScript mode. When MySQL Shell starts it is in JavaScript mode by default. You switch into JavaScript mode, Python mode, and SQL mode using the commands \js, \py, and \sql. Ensure you are in JavaScript mode by issuing the \js command, then execute:
mysql-js> dba.deploySandboxInstance(3310)
Semi-colons are not required at the end of the line in JavaScript mode.
The argument passed to deploySandboxInstance() is the TCP port number where the MySQL Server instance listens for connections. By default the sandbox is created in a directory named $HOME/mysql-sandboxes/port on Unix systems. For Microsoft Windows systems the directory is %userprofile%\MySQL\mysql-sandboxes\port.
The root password for the instance is prompted for.
Each instance has its own password. Defining the same password for all sandboxes in this tutorial makes it easier, but remember to use different passwords for each instance on production systems.
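If you are scripting the deployment rather than working interactively, the root password can be supplied as an option instead of at the prompt, as the example script later in this chapter does. A minimal sketch, with a placeholder password:
mysql-js> // Deploy a sandbox on port 3310 and set its root password in the same call
mysql-js> dba.deploySandboxInstance(3310, {password: "sandboxPwd"})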
To add further server instances, use deploySandboxInstance(). For this example sandbox cluster, add two more instances using different port numbers by issuing:
mysql-js> dba.deploySandboxInstance(3320)
mysql-js> dba.deploySandboxInstance(3330)
You now have three MySQL server sandbox instances running on ports 3310, 3320 and 3330.
The next step is to create the InnoDB cluster while connected to the seed MySQL Server instance. The seed instance is the instance that you are connected to via MySQL Shell and that you want to replicate to the other instances. In this example, the sandbox instances are blank instances, therefore we can choose any instance. In a production set up the seed instance would be the one which contains your existing data set and would be replicated to the other instances in the cluster.
Connect MySQL Shell to the seed instance, in this case the one at port 3310:
mysql-js> \connect root@localhost:3310
The syntax \connect is a shortcut for the MySQL Shell connect method shell.connect(). Alternatively, use the following command:
mysql-js> shell.connect('root@localhost:3310')
Use the createCluster() method to create the InnoDB cluster with the currently connected instance as the seed:
mysql-js> var cluster = dba.createCluster('testCluster')
The createCluster() method returns the created cluster; the above statement assigns it to the cluster variable. The parameter passed to the createCluster() method is a symbolic name given to this InnoDB cluster, in this case testCluster. This function deploys the metadata to the selected instance, configures it for Group Replication, and adds the instance as the seed of the new InnoDB cluster.
After validating that the instance is properly configured, it is added to the InnoDB cluster as the seed instance and the replication subsystem is started.
The provided sandbox instances are pre-configured to work with Group Replication, but if you use a pre-existing instance, it is possible that some configuration options might not be set in a compatible way. The createCluster() command ensures that the settings are correct and, if not, it changes their values. If a change requires MySQL Server to be restarted, you are prompted to restart it manually whenever convenient.
In summary, when dba.createCluster() is executed, the following steps are carried out:
The InnoDB cluster Metadata Schema is created (if it does not already exist) or is updated to the latest version. Schema objects or columns are only added, never removed.
The new InnoDB cluster information, including the specified name and password, is inserted into the InnoDB cluster Metadata.
The seed instance is added to the InnoDB cluster.
The seed instance information is inserted into the InnoDB cluster Metadata.
Once you have created a cluster, obtain the cluster instance variable using a command such as:
mysql-js> var cluster = dba.getCluster("testCluster")
You specify the name of the cluster you wish to obtain the instance variable for. If you do not specify the name of the cluster the default cluster is returned.
The next step is to add secondary instances to the InnoDB cluster. Any transactions that were executed by the seed instance are re-executed by each secondary instance as it is added. We use the sandbox instances that were created earlier.
The seed instance in this example was recently created, so it is nearly empty. Therefore, there is little data that needs to be replicated from the seed instance to the secondary instances. In a production environment, where you have an existing database on the seed instance, you could use a tool such as MySQL Enterprise Backup to ensure that the secondaries have matching data before replication starts. This avoids the possibility of lengthy delays while data replicates from the primary to the secondaries. Once the cluster is formed, writes to the primary result in data being replicated to the secondaries.
Add the second instance to the InnoDB cluster:
mysql-js> cluster.addInstance('root@localhost:3320')
The root user's password is prompted for.
Add the third instance:
mysql-js> cluster.addInstance('root@localhost:3330')
The root user's password is prompted for.
At this point you have created a cluster with three instances: a primary, and two secondaries.
Additional details can be logged when adding an instance to a cluster. Pass the verbose option to enable additional logging, so the last example would look like this:
mysql-js> cluster.addInstance('root@localhost:3330', {verbose: true})
You can only specify localhost in addInstance() if the instance is a sandbox instance. This also applies to the implicit addInstance() after issuing createCluster().
In order for client applications to handle failover, they need to be aware of the InnoDB cluster topology. They also need to know whether an instance is the PRIMARY in single-primary mode, or is "R/W" in multi-primary mode. While it is possible for applications to implement that logic, MySQL Router provides this functionality for you and is designed for InnoDB cluster.
The recommended deployment of MySQL Router is on the same host as the application. In this tutorial, everything is running on a single host, so you deploy MySQL Router to the same host.
Assuming MySQL Router is already installed (see Installation), the only required step is to bootstrap it with the location of the InnoDB cluster metadata server. The following does this using all default settings:
shell> mysqlrouter --bootstrap root@localhost:3310 --user=mysqlrouter
You are prompted for the instance password and encryption key for MySQL Router to use. This encryption key is used to encrypt the instance password used by MySQL Router to connect to the cluster. The ports you can use to connect to the InnoDB cluster are also displayed.
Currently only Classic Protocol connections are supported between MySQL Router and InnoDB cluster.
MySQL Router connects to the InnoDB cluster, fetches its metadata and configures itself for use. The generated configuration creates two TCP ports: one for read-write sessions (which redirects connections to "R/W" instances) and one for read-only sessions (which redirects connections to one of the SECONDARY instances).
Once bootstrapped and configured, start MySQL Router (or set up a service for it to start automatically when the system boots):
shell> mysqlrouter &
You can now connect a MySQL client, such as MySQL Shell, to one of the incoming MySQL Router ports and see how the client gets transparently connected to one of the InnoDB cluster instances. To see which instance you are actually connected to, simply query the port status variable.
shell> mysqlsh --uri root@localhost:6442
mysql-js> \sql
Switching to SQL mode... Commands end with ;
mysql-sql> select @@port;
+--------+
| @@port |
+--------+
|   3310 |
+--------+
1 row in set (0.00 sec)
To test if failover works, simulate an unexpected halt by killing the PRIMARY instance using the dba.killSandboxInstance() function and check that one of the other instances takes over automatically.
mysql-js> dba.killSandboxInstance(3310)
Then you can again check which instance you are connected to.
The first SELECT statement fails as the connection to the original PRIMARY was lost. MySQL Shell automatically reconnects for you and when you issue the command again the new port is confirmed.
mysql-js> \sql
Switching to SQL mode... Commands end with ;
mysql-sql> SELECT @@port;
ERROR: 2013 (HY000): Lost connection to MySQL server during query
The global session got disconnected.
Attempting to reconnect to 'root@localhost:6446'...
The global session was successfully reconnected.
mysql-sql> select @@port;
+--------+
| @@port |
+--------+
|   3330 |
+--------+
1 row in set (0.00 sec)
This shows that the InnoDB cluster provided us with automatic failover, that MySQL Router has automatically reconnected us to the new PRIMARY instance, and that we have high availability.
You can bring the instance that you killed back online.
mysql-js> dba.startSandboxInstance(3310)
mysql-js> cluster.rejoinInstance('root@localhost:3310')
mysql-js> cluster.status()
This section explains how to work with clusters, and how to handle common administration tasks.
Use the Cluster object's status() method to check a cluster's status:
mysql-js> cluster.status()
This retrieves the current InnoDB cluster status as seen by the server instance you are connected to and outputs a status report. It is important to understand that the instance's state in the cluster influences the information in the status report. A member which has left the cluster provides a different view of the cluster compared to an instance which belongs to the cluster.
The instance status is one of the following:
ONLINE: The instance is online and participating in the cluster.
OFFLINE: The instance may have lost connection to the other instances.
RECOVERING: The instance is attempting to synchronize with the cluster by pulling in transactions it needs before it can become an ONLINE member.
UNREACHABLE: The instance has lost communication with the cluster.
ERROR: The instance has encountered an error during the recovery phase or while applying a transaction.
(MISSING): The state of an instance which is part of the configured cluster, but is currently unavailable.
The MISSING state is specific to InnoDB cluster; it is not a state generated by Group Replication. MySQL Shell uses this state to indicate instances that are registered in the metadata, but cannot be found in the live cluster view.
Once an instance enters ERROR state, the super_read_only option is set to ON. To leave the ERROR state you must manually configure the instance with super_read_only=OFF.
The mode indicates either R/W (read and writable) or R/O (read only). In single-primary mode, only the instance marked "R/W" can execute transactions that update the database, so it is the PRIMARY. If that instance becomes unreachable for any reason (like an unexpected halt), one of the remaining "R/O" instances automatically takes over its place and becomes the new "R/W" PRIMARY. In multi-primary mode, multiple instances are marked as "R/W" and there is no elected PRIMARY.
To check the status of the InnoDB cluster at a later time, you can get a reference to the InnoDB cluster object by connecting to any of its instances. However, if you want to make changes to the InnoDB cluster, you must connect to a "R/W" instance. For information about how the InnoDB cluster is running, use the status() method:
mysql-js> var cluster = dba.getCluster()
mysql-js> cluster.status()
{ "clusterName": "test", "defaultReplicaSet": { "status": "Cluster tolerant to up to ONE failure.", "topology": { "localhost:3310": { "address": "localhost:3310", "status": "ONLINE", "role": "HA", "mode": "R/W", "leaves": { "localhost:3320": { "address": "localhost:3320", "status": "ONLINE", "role": "HA", "mode": "R/O", "leaves": {} }, "localhost:3330": { "address": "localhost:3330", "status": "ONLINE", "role": "HA", "mode": "R/O", "leaves": {} } } } } } }
As the above output demonstrates, status information includes the InnoDB cluster name, topology, PRIMARY, and more.
To get information about the structure of the InnoDB cluster itself, use the cluster.describe() function:
mysql-js> cluster.describe();
{
"clusterName": "test",
"adminType": "local",
"defaultReplicaSet": {
"name": "default",
"instances": [
{
"name": "localhost:3310",
"host": "localhost:3310",
"role": "HA"
},
{
"name": "localhost:3320",
"host": "localhost:3320",
"role": "HA"
},
{
"name": "localhost:3330",
"host": "localhost:3330",
"role": "HA"
}
]
}
}
The output from this function shows the structure of the InnoDB cluster including all of its configuration information, and so on.
Once a sandbox instance is running, it is possible to change its status at any time using the following:
Stop: dba.stopSandboxInstance()
Start: dba.startSandboxInstance()
Kill: dba.killSandboxInstance()
Kills the MySQL Server instance process on the local host. Useful for simulating an unexpected halt while testing failover.
Delete: dba.deleteSandboxInstance()
Completely removes the sandbox instance from your file system.
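As a sketch of how these functions fit together, the following hypothetical sequence works through the life cycle of the sandbox on port 3310; each command prompts for any input it needs, such as the root password:
mysql-js> dba.stopSandboxInstance(3310)     // clean shutdown of the sandbox
mysql-js> dba.startSandboxInstance(3310)    // bring it back up
mysql-js> dba.killSandboxInstance(3310)     // simulate an unexpected halt
mysql-js> dba.deleteSandboxInstance(3310)   // remove the stopped sandbox from the file system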
You can remove an instance from a cluster at any time should you wish to do so. This can be done with the removeInstance() method, as in the following example:
mysql-js> cluster.removeInstance("192.168.1.1:3306")
If an instance leaves the cluster, for example because it lost connection and did not or could not automatically rejoin the cluster, it may be necessary to rejoin it to the cluster at a later stage. Because the Group Replication configuration is not persisted in the instance's local configuration file, restarting an instance causes it to leave the Replication Group, so you must rejoin the instance to add it back to the cluster.
The command to rejoin an instance to a cluster is cluster.rejoinInstance().
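For example, to rejoin the sandbox instance on port 3310 used earlier in this tutorial (a sketch; the cluster name and instance address depend on your deployment):
mysql-js> var cluster = dba.getCluster('testCluster')
mysql-js> cluster.rejoinInstance('root@localhost:3310')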
In the case where an instance has been configured using dba.configureLocalInstance(), its Group Replication information is persisted to the configuration file, and it rejoins the cluster automatically. More information on this can be found in the section Configuring the Instance.
When MySQL Router is bootstrapped it records the bootstrap server addresses in its configuration. These servers contain metadata used by MySQL Router in order to route correctly. If any additional instances are added to the cluster after bootstrapping MySQL Router, they are automatically detected and used for connection routing. If, however, all of the original metadata servers go offline for some reason, MySQL Router would no longer be able to route correctly. Consider the following line in a mysqlrouter.conf file:
...
bootstrap_server_addresses=mysql://192.168.56.101:3310,mysql://192.168.56.101:3320,mysql://192.168.56.101:3330
...
There are three original metadata servers specified here. Now if two additional servers (call them D and E) were added, you would have a five instance cluster, and MySQL Router routes to these additional instances as required. If the original metadata instances, A, B and C, stopped unexpectedly and left the cluster, you would be left with only instances D and E running. At this point, instances D and E are still alive and form a quorum. So it should be possible to route calls to them. However, as all original metadata servers are down (instances A, B and C), MySQL Router shuts off all routing. In such a situation you can configure MySQL Router manually.
Modify the MySQL Router instance's configuration file to specify the correct metadata servers in the bootstrap_server_addresses option.
Restart the MySQL Router instance; the updated metadata servers are detected and used.
If you want to remove all information associated with a cluster, you can use the cluster.dissolve() method. This removes all metadata and configuration associated with the cluster. Once you have dissolved the cluster you need to create it again from scratch, using dba.createCluster().
After calling cluster.dissolve(), the cluster object is no longer valid.
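A minimal sketch of dissolving and recreating the tutorial cluster while connected to its "R/W" instance; the force option is an assumption about this Shell version and may be needed when instances are still registered in the metadata:
mysql-js> var cluster = dba.getCluster('testCluster')
mysql-js> cluster.dissolve({force: true})   // removes the cluster metadata; the old cluster object is now invalid
mysql-js> var cluster = dba.createCluster('testCluster')   // create the cluster again from scratch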
You can automate cluster configuration with scripts. For example:
shell> mysqlsh -f setup-innodb-cluster.js
Any command line options specified after the script file name are passed to the script and not to MySQL Shell. You can access those options using the os.argv array in JavaScript, or the sys.argv array in Python. In both cases, the first option picked up in the array is the script name.
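As a small illustration, the following lines could appear at the top of a hypothetical JavaScript setup script that takes a port number as its only option; the script name and the port argument are assumptions for the sketch:
// Invoked as: mysqlsh -f deploy-one-sandbox.js 3310
// os.argv[0] holds the script name, so the first user-supplied option is os.argv[1]
var port = parseInt(os.argv[1], 10);
print('Deploying a sandbox on port ' + port + '\n');
dba.deploySandboxInstance(port);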
The contents of an example script file are shown here:
print('MySQL InnoDB cluster sandbox set up\n');
print('==================================\n');
print('Setting up a MySQL InnoDB cluster with 3 MySQL Server sandbox instances.\n');
print('The instances will be installed in ~/mysql-sandboxes.\n');
print('They will run on ports 3310, 3320 and 3330.\n\n');

var dbPass = shell.prompt('Please enter a password for the MySQL root account: ', {type:"password"});

try {
   print('\nDeploying the sandbox instances.');
   dba.deploySandboxInstance(3310, {password: dbPass});
   print('.');
   dba.deploySandboxInstance(3320, {password: dbPass});
   print('.');
   dba.deploySandboxInstance(3330, {password: dbPass});
   print('.\nSandbox instances deployed successfully.\n\n');

   print('Setting up InnoDB cluster...\n');
   shell.connect('root@localhost:3310', dbPass);

   var cluster = dba.createCluster("devCluster");

   print('Adding instances to the cluster.');
   cluster.addInstance({user: "root", host: "localhost", port: 3320, password: dbPass});
   print('.');
   cluster.addInstance({user: "root", host: "localhost", port: 3330, password: dbPass});
   print('.\nInstances successfully added to the cluster.');

   print('\nInnoDB cluster deployed successfully.\n');
} catch(e) {
   print('\nThe InnoDB cluster could not be created.\n\nError: ' + e.message + '\n');
}
When working in a production environment, the MySQL Server instances are running on hosts as part of a network rather than on your local machine as described in previous sections.
The following diagram illustrates the scenario you work with in the following section:
The user account used to administer an instance does not have to be the root account, however the user needs to be assigned full read and write privileges on the Metadata tables in addition to full MySQL administrator privileges (SUPER, GRANT OPTION, CREATE, DROP and so on). To give the user your_user the privileges needed to administer InnoDB cluster issue:
GRANT ALL PRIVILEGES ON mysql_innodb_cluster_metadata.* TO your_user WITH GRANT OPTION;
GRANT RELOAD, SHUTDOWN, PROCESS, FILE, SUPER, REPLICATION SLAVE, REPLICATION CLIENT, CREATE USER ON *.* TO your_user;
GRANT SELECT ON performance_schema.* TO your_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON mysql.* TO your_user;
If only read operations are needed (such as for monitoring purposes), an account with more restricted privileges may be used. To give the user your_user the privileges needed to monitor InnoDB cluster issue:
GRANT SELECT ON mysql_innodb_cluster_metadata.* TO your_user;
GRANT SELECT ON performance_schema.global_status TO your_user;
GRANT SELECT ON performance_schema.replication_* TO your_user;
When working with a production deployment it is a good idea to activate verbose logging for MySQL Shell initially. This is helpful in finding and resolving any issues that may arise when you are preparing the server to work as part of InnoDB cluster. To start MySQL Shell with a verbose logging level type:
shell> mysqlsh --log-level=DEBUG3
The log file is located in ~/.mysqlsh/mysqlsh.log for Unix-based systems. On Microsoft Windows systems it is located in %APPDATA%\MySQL\mysqlsh\mysqlsh.log. See Section 18.6, “MySQL Shell Application Log”.
The cluster.checkInstanceState() function can be used to validate whether an instance can be added to the cluster, that is, whether the instance is consistent with the seed instance: it has not executed any transactions which the cluster has not, and it can be recovered to the same state as the rest of the cluster.
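A minimal sketch of the call, reusing the cluster name and an instance address from the production deployment example later in this section (both are placeholders here); you are prompted for the account password:
mysql-js> var cluster = dba.getCluster('devCluster')
mysql-js> cluster.checkInstanceState('user@139.59.177.11:3306')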
Before creating a cluster from remote instances you need to check that the servers are suitably configured. This can be done using the dba.checkInstanceConfiguration() function. For detailed help on this function you can type dba.help('checkInstanceConfiguration').
The dba.checkInstanceConfiguration() function checks if the server instances are valid for InnoDB cluster usage. The following demonstrates this:
mysql-js> dba.checkInstanceConfiguration('user@139.59.177.10:3306')
Please provide the password for 'user@139.59.177.10:3306':
Validating instance...
The instance '139.59.177.10:3306' is not valid for Cluster usage.
The following issues were encountered:
- Some configuration options need to be fixed.
+----------------------------------+---------------+----------------+--------------------------------------------------+
| Variable | Current Value | Required Value | Note |
+----------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum | CRC32 | NONE | Update the server variable or restart the server |
| enforce_gtid_consistency | OFF | ON | Restart the server |
| gtid_mode | OFF | ON | Restart the server |
| log_bin | 0 | 1 | Restart the server |
| log_slave_updates | 0 | ON | Restart the server |
| master_info_repository | FILE | TABLE | Restart the server |
| relay_log_info_repository | FILE | TABLE | Restart the server |
| transaction_write_set_extraction | OFF | XXHASH64 | Restart the server |
+----------------------------------+---------------+----------------+--------------------------------------------------+
Please fix these issues, restart the server and try again.
{
"config_errors": [
{
"action": "server_update",
"current": "CRC32",
"option": "binlog_checksum",
"required": "NONE"
},
{
"action": "restart",
"current": "OFF",
"option": "enforce_gtid_consistency",
"required": "ON"
},
{
"action": "restart",
"current": "OFF",
"option": "gtid_mode",
"required": "ON"
},
{
"action": "restart",
"current": "0",
"option": "log_bin",
"required": "1"
},
{
"action": "restart",
"current": "0",
"option": "log_slave_updates",
"required": "ON"
},
{
"action": "restart",
"current": "FILE",
"option": "master_info_repository",
"required": "TABLE"
},
{
"action": "restart",
"current": "FILE",
"option": "relay_log_info_repository",
"required": "TABLE"
},
{
"action": "restart",
"current": "OFF",
"option": "transaction_write_set_extraction",
"required": "XXHASH64"
}
],
"errors": [],
"restart_required": true,
"status": "error"
}
mysql-js>
The report shows the configuration changes required for that instance before it can be added to the cluster.
Once the configuration issues have been identified you can reconfigure your server instance manually. Alternatively, if MySQL Shell can be run directly on the machine where the MySQL instance is running, log in to that server and run MySQL Shell there. On the server to be configured run:
shell> mysqlsh --log-level=DEBUG3 --uri=root@localhost
The function you use to configure a server for InnoDB cluster use is dba.configureLocalInstance(). This function runs provisioning scripts for you that modify the MySQL server's configuration file.
The dba.configureLocalInstance() function can only configure servers connected to locally. If you try to run dba.configureLocalInstance() remotely you get the following error:
mysql-js> dba.configureLocalInstance('user@139.59.177.10:3306')
Dba.configureLocalInstance: This function only works with local instances (RuntimeError)
If MySQL Shell is started locally, then output will be similar to:
mysql-js> dba.configureLocalInstance('root@localhost:3306')
Please provide the password for 'root@localhost:3306':
Please specify the path to the MySQL configuration file: /etc/mysql/mysql.conf.d/mysqld.cnf
Validating instance...
The configuration has been updated but it is required to restart the server.
{
"config_errors": [
{
"action": "restart",
"current": "OFF",
"option": "enforce_gtid_consistency",
"required": "ON"
},
{
"action": "restart",
"current": "OFF",
"option": "gtid_mode",
"required": "ON"
},
{
"action": "restart",
"current": "0",
"option": "log_bin",
"required": "1"
},
{
"action": "restart",
"current": "0",
"option": "log_slave_updates",
"required": "ON"
},
{
"action": "restart",
"current": "FILE",
"option": "master_info_repository",
"required": "TABLE"
},
{
"action": "restart",
"current": "FILE",
"option": "relay_log_info_repository",
"required": "TABLE"
},
{
"action": "restart",
"current": "OFF",
"option": "transaction_write_set_extraction",
"required": "XXHASH64"
}
],
"errors": [],
"restart_required": true,
"status": "error"
}
mysql-js>
As with dba.checkInstanceConfiguration(), the configuration requirements are identified, but this time the entered configuration file is modified. For the changes to take effect you need to restart the MySQL Server. For example:
shell> sudo service mysql restart
If dba.configureLocalInstance() is used on an instance that is already a member of a cluster, then its Group Replication configuration information is persisted to the server configuration file and a call to rejoinInstance() is not required in that case. When restarted, the instance is automatically joined to the cluster. This is illustrated in the following example:
shell.connect({host: 'localhost', port: 3333, user: 'root', password: 'somePwd'});

var cluster = dba.createCluster('devCluster');

// Here, configureLocalInstance makes sure the instance is configured for Group Replication
dba.configureLocalInstance('localhost:3334', {password: 'somePwd', mycnfPath: 'some path'})
cluster.addInstance('localhost:3334', {password: 'somePwd'})

dba.configureLocalInstance('localhost:3335', {password: 'somePwd', mycnfPath: 'some path'})
cluster.addInstance('localhost:3335', {password: 'somePwd'})

// A restart here would require using rejoin to put the instance back into the cluster
dba.killSandboxInstance(3335);
dba.startSandboxInstance(3335);
cluster.rejoinInstance('localhost:3335', {password: 'somePwd'})

// Calling configureLocalInstance again, since the instance is already part of the cluster,
// persists the Group Replication server variables
dba.configureLocalInstance('localhost:3335', {password: 'somePwd', mycnfPath: 'some path'})

// On a new restart, the instance automatically joins the cluster (no need to rejoinInstance)
dba.killSandboxInstance(3335);
dba.startSandboxInstance(3335);
Once the server has restarted, you can use MySQL Shell again to check the configuration:
mysql-js> dba.checkInstanceConfiguration('root@localhost:3306')
Please provide the password for 'root@localhost:3306':
Validating instance...
The instance 'localhost:3306' is valid for Cluster usage
{
"status": "ok"
}
mysql-js>
Log in to the remote instance and use MySQL Shell to configure the instance automatically and ensure the configuration changes are persisted.
shell> mysqlsh --uri user@139.59.177.10:3306
Creating a Session to 'user@139.59.177.10:3306'
Enter password: *********
Classic Session successfully established. No default schema selected.
Now create the cluster:
mysql-js> var cluster = dba.createCluster('devCluster');
A new InnoDB cluster will be created on instance 'user@139.59.177.10:3306'.
Creating InnoDB cluster 'devCluster' on 'user@139.59.177.10:3306'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
First, check the instance configuration:
mysql-js> dba.checkInstanceConfiguration('user@139.59.177.10:3306')
Please provide the password for 'user@139.59.177.10:3306':
Validating instance...
The instance '139.59.177.10:3306' is valid for Cluster usage
{
"status": "ok"
}
You can also check the instance state:
mysql-js> cluster.checkInstanceState('user@139.59.177.10:3306')
Please provide the password for 'user@139.59.177.10:3306':
Analyzing the instance replication state...
The instance '139.59.177.10:3306' is valid for the cluster.
The instance is fully recoverable.
{
"reason": "recoverable",
"state": "ok"
}
Check the cluster status:
mysql-js> cluster.status()
{
"clusterName": "devCluster",
"defaultReplicaSet": {
"name": "default",
"status": "Cluster is NOT tolerant to any failures.",
"topology": {}
}
}
You need to add two more instances to the cluster to make it tolerant to a server failure.
Check the configuration of the next instance to add to the cluster:
mysql-js> dba.checkInstanceConfiguration('user@139.59.177.11:3306')
Please provide the password for 'user@139.59.177.10:3306':
Validating instance...
The instance '139.59.177.11:3306' is valid for Cluster usage
{
"status": "ok"
}
The instance can now be added into the cluster:
mysql-js> cluster.addInstance("user@139.59.177.11:3306");
Please provide a password for 'user@139.59.177.11:3306': *****
A new instance will be added to the InnoDB cluster. Depending on the
amount of data on the cluster this might take from a few seconds to
several hours.
Adding instance 139.59.177.11:3306 to the cluster...
The instance '139.59.177.11:3306' was successfully added to the
cluster.
The next instance can now be added into the cluster:
mysql-js> cluster.addInstance("user@139.59.177.12:3306");
Please provide a password for 'user@139.59.177.12:3306': *****
A new instance will be added to the InnoDB cluster. Depending on the
amount of data on the cluster this might take from a few seconds to
several hours.
Adding instance 139.59.177.12:3306 to the cluster...
The instance '139.59.177.12:3306' was successfully added to the
cluster.
Now recheck cluster status.
When using the createCluster(), addInstance(), and rejoinInstance() methods you can optionally specify a list of approved servers that belong to the cluster, referred to as a whitelist. By specifying the whitelist explicitly in this way you can increase the security of your cluster, because only servers in the whitelist can connect to the cluster.
By default, if not specified explicitly, the whitelist is automatically set to the private network addresses that the server has network interfaces on. To configure the whitelist, specify the servers to add with the ipWhitelist option when using the method. For example:
mysql-js> c.addInstance("root:guidev!@localhost:3320", {ipWhitelist: "10.157.120.0/24, 192.168.1.110"})
This configures the instance to only accept connections from servers at addresses 10.157.120.0/24 and 192.168.1.110.
Using the ipWhitelist option configures the group_replication_ip_whitelist system variable on the instance.
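The same option can be passed when the seed instance is configured with createCluster(). A sketch, reusing the addresses from the example above:
mysql-js> var cluster = dba.createCluster('testCluster', {ipWhitelist: "10.157.120.0/24, 192.168.1.110"})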
If an instance (or instances) fails, then a cluster can lose its quorum, which is the ability to vote in a new primary. In this case you can re-establish quorum using the cluster.forceQuorumUsingPartitionOf() method, as shown in the following MySQL Shell example:
// open session to a cluster
mysql-js> cluster = dba.getCluster("devCluster")

// The cluster lost its quorum and its status shows
// "status": "NO_QUORUM"
mysql-js> cluster.forceQuorumUsingPartitionOf("localhost:3310")

Restoring replicaset 'default' from loss of quorum, by using the partition composed of [localhost:3310]

Please provide the password for 'root@localhost:3310': ******
Restoring the InnoDB cluster ...

The InnoDB cluster was successfully restored using the partition from the instance 'root@localhost:3310'.

WARNING: To avoid a split-brain scenario, ensure that all other members of the replicaset
are removed or joined back to the group that was restored.
If your cluster suffers from a complete outage, you can ensure it is reconfigured correctly using dba.rebootClusterFromCompleteOutage(). An example of its use is as follows:
mysql-js> shell.connect('root@localhost:3310');
mysql-js> var cluster = dba.rebootClusterFromCompleteOutage();
This ensures the cluster is correctly reconfigured after a complete outage. It picks the instance the MySQL Shell is connected to as the new seed instance and recovers the cluster based on the existing metadata of that instance.
It is also possible to provide the cluster name as an input parameter:
mysql-js>var cluster = dba.createCluster("devCluster")
... ... mysql-js>var cluster = dba.rebootClusterFromCompleteOutage("devCluster");
If this process fails, and the cluster metadata has become badly corrupted, you may need to drop the metadata and create the cluster again from scratch. You can drop the cluster metadata using dba.dropMetadataSchema().
The dba.dropMetadataSchema() method should only be used as a last resort, when it is not possible to restore the cluster. It cannot be undone.
If changes to the Group Replication configurations are made without using MySQL Shell you need to rescan your cluster. For example, if you create a cluster with three instances, and then without using MySQL Shell you add a new instance to that Group Replication group, the AdminAPI is not aware of that instance. The same would apply if you removed an instance from a Group Replication group without using MySQL Shell. It is necessary to rescan the cluster with cluster.rescan() in such scenarios.
After cluster.rescan() has been run, any newly discovered instances are identified. You are prompted to add each of these instances into your cluster as required, or you can choose to ignore them.
Nodes that no longer belong to the cluster or which are unavailable are also reported. In this case you are prompted to remove the instance, or you can later attempt to add it back into the cluster using a command such as cluster.rejoinInstance('instancex.example.com:3340').
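A minimal sketch of a rescan, assuming you are connected to an "R/W" member of the tutorial cluster:
mysql-js> var cluster = dba.getCluster('testCluster')
mysql-js> cluster.rescan()    // prompts to add newly discovered instances or remove unavailable ones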
If you have an existing deployment of Group Replication and you want to manage it using MySQL Shell, the adoptFromGR option of the dba.createCluster() function can be used.
shell> mysqlsh --uri root@192.168.0.11:3306
Creating a Session to 'root@192.168.0.11:3306'
Enter password: ****
Classic Session successfully established. No default schema selected.
MySQL Shell JavaScript Code:
mysql-js> var cluster = dba.createCluster('prodCluster', {adoptFromGR: true});
A new InnoDB cluster will be created on instance 'root@192.168.0.11:3306'.
Creating InnoDB cluster 'prodCluster' on 'root@192.168.0.11:3306'...
Adding Seed Instance...
Cluster successfully created. Use cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
mysql-js> cluster.describe();
{
"clusterName": "prodCluster",
"adminType": "local",
"defaultReplicaSet": {
"name": "default",
"instances": [
{
"name": "localhost:3306",
"host": "localhost:3306",
"role": "HA"
},
{
"name": "localhost:3307",
"host": "localhost:3307",
"role": "HA"
},
{
"name": "localhost:3308",
"host": "localhost:3308",
"role": "HA"
}
]
}
}
Server instances can be configured to use secure connections. For general information on using SSL with MySQL see Section 6.4, “Using Secure Connections”. This section explains how to configure a cluster to use SSL.
When using createCluster() to set up a cluster, if the server instance provides SSL encryption then it is automatically enabled on the seed instance. Pass the memberSslMode option to the createCluster() method to specify a different SSL mode. The memberSslMode option is a string that configures the SSL mode to be used; it defaults to AUTO. The permitted values are DISABLED, REQUIRED, and AUTO. These modes are defined as follows (an example of passing the option is shown after the list):
Setting createCluster(memberSslMode=DISABLED) ensures SSL encryption is disabled for the seed instance in the cluster.
Setting createCluster(memberSslMode=REQUIRED) ensures SSL encryption is enabled for the seed instance in the cluster. If it cannot be enabled an error is raised.
Setting createCluster(memberSslMode=AUTO) (the default) means SSL encryption is automatically enabled if the server instance supports it, or disabled if the server does not support it.
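In MySQL Shell JavaScript the mode is passed in the options dictionary of createCluster(). A minimal sketch, reusing the testCluster name from the tutorial (the connected instance and its SSL support are assumptions):
mysql-js> var cluster = dba.createCluster('testCluster', {memberSslMode: 'REQUIRED'})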
When you issue the addInstance() and rejoinInstance() commands, SSL encryption on the instance is enabled or disabled based on the setting found for the seed instance. For more control, the addInstance() and rejoinInstance() commands accept the memberSslMode option. The behavior of the commands in this case is as follows (an example is shown after the list):
Setting memberSslMode=DISABLED ensures SSL encryption is disabled for the instance in the cluster.
Setting memberSslMode=REQUIRED forces SSL encryption to be enabled for the instance in the cluster.
Setting memberSslMode=AUTO (the default) means SSL encryption is automatically enabled or disabled based on the setting used by the seed instance (the other members of the cluster) and the available SSL support provided by the instance itself.
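For example, a sketch that adds one of the tutorial's sandbox instances while stating the SSL mode explicitly (AUTO is already the default, so this only makes the choice visible):
mysql-js> cluster.addInstance('root@localhost:3320', {memberSslMode: 'AUTO'})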
When using createCluster() with the adoptFromGR option to adopt an existing Group Replication group, no SSL settings are changed on the adopted cluster: memberSslMode cannot be used with adoptFromGR.
If the SSL settings of the adopted cluster are different from the ones supported by MySQL Shell, in other words SSL for Group Replication recovery and Group Communication, neither setting is modified. This means you are not able to add new instances to the cluster, unless you change the settings manually for the adopted cluster.
MySQL Shell always enables or disables SSL for the cluster for both Group Replication recovery and Group Communication. A verification is performed and an error issued in case those settings are different for the seed instance (for example as the result of a createCluster() using adoptFromGR) when adding a new instance to the cluster. SSL encryption must be enabled or disabled for all instances in the cluster. Verifications are performed to ensure that this invariant holds when adding a new instance to the cluster.
The deploySandboxInstance() command attempts to deploy sandbox instances with SSL encryption support by default. If it is not possible, the server instance is deployed without SSL support. Use the ignoreSslError option set to false to ensure that sandbox instances are deployed with SSL support, issuing an error if SSL support cannot be provided. When ignoreSslError is true, which is the default, no error is issued during the operation if SSL support cannot be provided and the server instance is deployed without SSL support.
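For example, a sketch that refuses to deploy a sandbox without SSL support (the port 3340 is an arbitrary choice for illustration):
mysql-js> dba.deploySandboxInstance(3340, {ignoreSslError: false})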
This section describes the known limitations of InnoDB cluster. As InnoDB cluster uses Group Replication, you should also be aware of its limitations - see Section 17.7.2, “Group Replication Limitations”.
The formatting of results which contain multi-byte characters sometimes does not produce correctly aligned columns. Similarly, non-standard character sets can be corrupted in results.
AdminAPI does not support Unix socket connections. MySQL Shell currently does not prevent you from attempting to use socket connections to a cluster, and attempting to use a socket connection to a cluster can cause unexpected results.
The MySQL Shell help describes an invalid URI: USER[:PASS]@::SOCKET[/DB]. This is invalid because the @ symbol cannot be present if no user information is provided.
Whenever MySQL Shell is started using a connection command parameter such as --uri, --user, --host, --port, and so on, MySQL Shell attempts to create a connection using default values for parameters that have not been specified. If you use the --socket command parameter MySQL Shell should attempt to create a socket connection using the default values for host and user, but it does not.
If a session type is not specified when creating the global session, MySQL Shell provides automatic protocol detection which attempts to first create a NodeSession and if that fails it tries to create a ClassicSession. With an InnoDB cluster that consists of three server instances, where there is one read-write port and two read-only ports, this can cause MySQL Shell to only connect to one of the read-only instances. Therefore it is recommended to always specify the session type when creating the global session.
When adding non-sandbox server instances (instances which you have configured manually rather than using dba.deploySandboxInstance()) to a cluster, MySQL Shell is not able to persist any configuration changes in the instance's configuration file. This leads to one or both of the following scenarios:
The Group Replication configuration is not persisted in the instance's configuration file and upon restart the instance does not rejoin the cluster.
The instance is not valid for cluster usage. Although the instance can be verified with dba.checkInstanceConfiguration(), and MySQL Shell makes the required configuration changes in order to make the instance ready for cluster usage, those changes are not persisted in the configuration file and so are lost once a restart happens.
If only the first scenario happens, the instance does not rejoin the cluster after a restart.
If the second scenario also happens, and you observe that the instance did not rejoin the cluster after a restart, you cannot use the recommended dba.rebootClusterFromCompleteOutage() in this situation to get the cluster back online. This is because the instance loses any configuration changes made by MySQL Shell, and because they were not persisted, the instance reverts to the previous state before being configured for the cluster. This causes Group Replication to stop responding, and eventually the command times out.
To avoid this problem it is strongly recommended to use dba.configureLocalInstance() before adding instances to a cluster in order to persist the configuration changes.
When using dba.configureLocalInstance() or dba.deploySandboxInstance(), server instances are configured with random 32-bit unsigned integers for the server-id system variable. The value of server-id is also used as the default for the auto_increment_offset system variable, which determines the starting point for AUTO_INCREMENT. Group Replication has its own increment mechanism and uses group_replication_auto_increment_increment, which defaults to 7 (the maximum group size). This is used to configure the value of auto_increment_increment.
This can result in server instances configured with values such as:
auto_increment_offset = 214748366 (max 32-bit integer value - 1)
auto_increment_increment = 7
When the next record is inserted (214748366 + 7), this can result in overflow. Similarly, a collision can occur between numbers generated by different servers in the same cluster.
Using MySQL server instances configured with the validate_password plugin and password policy set to STRONG causes InnoDB cluster createCluster() and MySQL Router bootstrap operations to fail. This is because the internal user required for access to the server instance cannot be validated.
The MySQL Router --bootstrap command line option does not accept IPv6 addresses.
The commercial version of MySQL Router does not have the correct setting for AppArmor. A workaround is to edit the AppArmor profile configuration file /etc/apparmor.d/usr.sbin.mysqlrouter and modify the line containing /usr/sbin/mysqld to use the path to MySQL Router, for example /usr/sbin/mysqlrouter.