MySQL InnoDB Cluster

From Michael's Information Zone
Revision as of 07:55, 28 April 2021

Purpose

To attempt installing a three-node cluster.

Process

Started by downloading and installing mysql80-community-release-el7-3.noarch.rpm on each node and on the workstation I used for the admin API.
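The repo install can be done straight from the download URL; the path below is the standard dev.mysql.com download location for this package, so verify it matches your release before running:

```shell
# Install the MySQL 8.0 community Yum repository on each node
# (URL assumed from the standard dev.mysql.com download location)
sudo yum install -y https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
```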

  • Install and configure mysql server on the first node. In my case this is Amazon Linux 2
sudo yum install mysql-community-server
sudo systemctl enable --now mysqld.service
  • Grab the temp password from /var/log/mysqld.log [1] and run secure setup.
    • Root by default is restricted to localhost. I added an account so mysqlsh could connect from my workstation, and also added the permissions needed to configure the cluster admin.
mysql> create user 'root'@'ip address' identified with mysql_native_password BY 'password';
mysql> grant all privileges on *.* to 'root'@'ip address';
mysql> GRANT CLONE_ADMIN, CONNECTION_ADMIN, CREATE USER, EXECUTE, FILE, GROUP_REPLICATION_ADMIN, PERSIST_RO_VARIABLES_ADMIN, PROCESS, RELOAD, REPLICATION CLIENT, REPLICATION SLAVE, REPLICATION_APPLIER, REPLICATION_SLAVE_ADMIN, ROLE_ADMIN, SELECT, SHUTDOWN, SYSTEM_VARIABLES_ADMIN ON *.* TO 'root'@'ip address' WITH GRANT OPTION;
mysql> flush privileges;

[2]
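Pulling the temporary password out of the log and hardening the install looks roughly like this, assuming the default RPM log location:

```shell
# Extract the generated temporary root password (default log location)
sudo grep 'temporary password' /var/log/mysqld.log

# Interactive hardening: change root password, drop anonymous users, etc.
sudo mysql_secure_installation
```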

Configure Nodes with MySQL Shell

  • Installed mysql shell on my Fedora 33 workstation.
sudo dnf install -y mysql-shell
  • Set variables and run setup function.
mysqlsh
MySQL  JS > var values={'password':'<root password>', 'clusterAdmin':'<cluster admin name>', 'clusterAdminPassword':'<cluster password>'}
MySQL  JS > dba.configureInstance('root@node1:3306', values )
  • At this point I exited MySQL Shell with ctrl+d and connected to node1 directly using the shell (this could be reworked from the beginning)
mysqlsh --mysql -u root -h node1
MySQL  node1:3306 ssl  JS > var cluster = dba.createCluster('testcluster')
  • I left the shell open since it had the cluster functions configured. Using a separate ssh session I connected to the other two nodes, repeating the steps above until I got to the cluster config.
  • Now we add the two additional nodes to the cluster from the first shell I already had open. I used the clone option for this.
MySQL  node1:3306 ssl  JS > cluster.addInstance('mysql02.domout.net')
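After both instances are added, the cluster state can be checked without opening an interactive session by using mysqlsh's AdminAPI command-line integration; the account and hostname below follow the names used elsewhere on this page:

```shell
# Report cluster topology and member health (expects all three nodes ONLINE)
mysqlsh dbclusteradmin@node01:3306 -- cluster status
```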

MySQL Router

  • Using the same shell as above, create an account for the mysql router.
cluster.setupRouterAccount('routerusername')
  • On an app server needing access to mysql, install router and bootstrap. In this case I am running on my Fedora Workstation.
  • NOTE : The --user switch specifies the local system user that the router will run as. I tried running it as my own user but ran into many issues.
sudo dnf install -y mysql-router-community
sudo mysqlrouter --bootstrap dbclusteradmin@node01:3306 --account-host=local.hostname --account-create=if-not-exists --account=router_zabbix --user=mysqlrouter --name=routerusername --conf-bind-address 127.0.0.1
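Bootstrapping writes the router configuration (by default under /etc/mysqlrouter/); the packaged systemd unit then runs the router as the mysqlrouter user:

```shell
# Start the bootstrapped router and enable it at boot
sudo systemctl enable --now mysqlrouter.service

# Confirm it is running and listening on the expected ports
sudo systemctl status mysqlrouter.service
```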

## MySQL Classic protocol

- Read/Write Connections: localhost:6446
- Read/Only Connections:  localhost:6447

## MySQL X protocol

- Read/Write Connections: localhost:6448
- Read/Only Connections:  localhost:6449
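An application on this host then connects through the router instead of a node directly; for example, over the classic protocol (app_user here is a hypothetical application account, not one created above):

```shell
# Read/write connection through the router's classic-protocol RW port
mysql -h 127.0.0.1 -P 6446 -u app_user -p

# Read-only connections go to the RO port instead
mysql -h 127.0.0.1 -P 6447 -u app_user -p
```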