MatrixOne Distributed Cluster Deployment
This document describes how to deploy a MatrixOne distributed database from scratch on a private Kubernetes cluster, separating compute and storage resources in a cloud-native manner.
Main Steps
- Deploy Kubernetes cluster
- Deploy object storage MinIO
- Create and connect MatrixOne cluster
Key Concepts
As this document involves many Kubernetes-related terms, we provide brief explanations of the important ones to help you follow the deployment process. If you need to know more about Kubernetes, see the Kubernetes Documentation.
- Pod
Pod is the smallest resource management component in Kubernetes and the smallest resource object for running containerized applications. A Pod represents a process running in the cluster. In simple terms, we can consider a group of applications that provide specific functions as a pod containing one or more container objects that work together to provide services to the outside world.
- Storage Class
Storage Class, abbreviated as SC, describes the characteristics and performance of storage resources. Based on an SC's description, we can intuitively understand the properties of various storage resources and then request storage according to an application's requirements. Administrators can define storage resources as distinct classes, much as storage devices expose different configuration profiles.
- CSI
Kubernetes provides the CSI (Container Storage Interface). Based on this interface, custom CSI plugins can be developed to support specific storage backends, decoupling the storage implementation from Kubernetes itself.
- PersistentVolume
PersistentVolume, abbreviated as PV, represents a storage resource. It mainly defines key information such as storage capacity, access mode, recycling strategy, and backend storage type.
- PersistentVolumeClaim
PersistentVolumeClaim, or PVC, is used as a user's request for storage resources, mainly including the setting of information such as storage space request, access mode, PV selection conditions, and storage category.
- Service
Also called SVC, a Service matches a group of Pods via label selectors and exposes them for external access. Each SVC can be understood as a microservice.
- Operator
A Kubernetes Operator is a way to package, deploy, and manage a Kubernetes application. It builds on the Kubernetes API and custom controllers to automate tasks that an administrator would otherwise perform manually with kubectl.
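Several of these concepts can be made concrete with a small manifest. As a purely illustrative sketch (the names here are hypothetical and not part of this deployment), the following Service selects a group of Pods by label and exposes them as one microservice:

```yaml
# Hypothetical Service: selects Pods labeled app=example and
# exposes their port 6001 inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: example-svc
spec:
  selector:
    app: example       # matches Pods carrying this label
  ports:
    - port: 6001       # port exposed by the Service
      targetPort: 6001 # port the Pod's container listens on
```

Traffic sent to `example-svc:6001` is forwarded to a matching Pod even if individual Pods crash and are replaced, which is exactly how the MatrixOne components are reached later in this document.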
Deployment Architecture
Dependent components
MatrixOne distributed system depends on the following components:
- Kubernetes: As the resource management platform for the entire MatrixOne cluster, components such as Logservice, CN, and TN all run in Pods managed by Kubernetes. In the event of a failure, Kubernetes removes the failed Pod and starts a new one to replace it.
- MinIO: Provides object storage services for the entire MatrixOne cluster; all MatrixOne data is stored in the object storage provided by MinIO.
Additionally, for container management and orchestration on Kubernetes, we need the following plugins:
- Helm: Helm is a package management tool for managing Kubernetes applications, similar to APT for Ubuntu and YUM for CentOS. It is used to manage pre-configured installation package resources called Charts.
- local-path-provisioner: As a plugin that implements the CSI (Container Storage Interface) in Kubernetes, local-path-provisioner is responsible for creating persistent volumes (PVs) for the Pods of the MatrixOne components and MinIO, achieving persistent data storage.
Overall structure
The overall deployment architecture is shown in the following figure:
The overall architecture consists of the following components:
- The bottom layer is three server nodes: the first, host1, is the jump server for installing Kubernetes; the second is the Kubernetes master node (master); and the third is a Kubernetes worker node (node).
- Above that is the installed Kubernetes and Docker environment, which constitutes the cloud-native platform layer.
- Next is a Kubernetes plugin layer managed with Helm, including the local-path-storage plugin implementing the CSI interface, MinIO, and the MatrixOne Operator.
- The topmost layer is the Pods and Services generated by the configuration of these components.
Pod and storage architecture of MatrixOne
MatrixOne creates a series of Kubernetes objects according to the rules of the Operator, and these objects are classified according to components and classified into resource groups, namely CNSet, TNSet, and LogSet.
- Service: The services in each resource group are exposed externally through a Service. The Service hosts the external connection function, ensuring that service can still be provided when a Pod crashes or is replaced. External applications connect through the Service's exposed ports, and the Service forwards connections to the corresponding Pods through internal forwarding rules.
- Pod: A containerized instance of a MatrixOne component, in which MatrixOne's core kernel code runs.
- PVC: Each Pod declares the storage resources it needs through a PVC (Persistent Volume Claim). In our architecture, CN and TN each request a storage resource as a cache, and LogService requires corresponding S3 resources. These requirements are declared through PVCs.
- PV: A PV (Persistent Volume) is an abstract representation of a storage medium and can be regarded as a storage unit. After a PVC is created, software implementing the CSI interface creates a PV and binds it to the PVC.
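To illustrate the PVC/PV mechanism described above (a hypothetical sketch, not a manifest used verbatim by this deployment — the Operator generates the real PVCs), a Pod-side storage request against the local-path class installed later might look like:

```yaml
# Hypothetical PVC: requests 5Gi from the local-path storage class;
# the CSI provisioner creates and binds a matching PV on demand.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-cache
  namespace: mo-hn
spec:
  accessModes:
    - ReadWriteOnce        # a single node may mount the volume read-write
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi
```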
1. Deploying a Kubernetes Cluster
As MatrixOne's distributed deployment relies on a Kubernetes cluster, we need to have one in place. This article will guide you through setting up a Kubernetes cluster using Kuboard-Spray.
Preparing the Cluster Environment
To prepare the cluster environment, you need to do the following:
- Three virtual machines.
- CentOS 7.9 as the operating system (which by default allows root remote login). Two machines are used to deploy Kubernetes and the other MatrixOne dependencies, while the third acts as a jump server to set up the Kubernetes cluster.
- External network access on all three servers, since they need to pull images from the public network.
The specific distribution of the machines is shown below:
| Host | Intranet IP | Extranet IP | Memory | CPU | Disk | Role |
| --- | --- | --- | --- | --- | --- | --- |
| kuboardspray | 10.206.0.6 | 1.13.2.100 | 2G | 2C | 50G | Jump server |
| master0 | 10.206.134.8 | 118.195.255.252 | 8G | 2C | 50G | master, etcd |
| node0 | 10.206.134.14 | 1.13.13.199 | 8G | 2C | 50G | worker |
Deploying Kuboard Spray on a Jump Server
Kuboard Spray is a tool used for visualizing the deployment of Kubernetes clusters. It uses Docker to quickly launch a web application that can visualize the deployment of a Kubernetes cluster. Once the Kubernetes cluster environment has been deployed, the Docker application can be stopped.
Preparing the Jump Server Environment
- Install Docker: A Docker environment is required. Install and start Docker on the jump server with the following commands:

    ```
    curl -sSL https://get.docker.io/ | sh
    # If you are in a network-restricted environment, you can use the following domestic mirror instead:
    curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
    ```
- Start Docker:

    ```
    [root@VM-0-6-centos ~]# systemctl start docker
    [root@VM-0-6-centos ~]# systemctl status docker
    ● docker.service - Docker Application Container Engine
       Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
       Active: active (running) since Sun 2023-05-07 11:48:06 CST; 15s ago
         Docs: https://docs.docker.com
     Main PID: 5845 (dockerd)
        Tasks: 8
       Memory: 27.8M
       CGroup: /system.slice/docker.service
               └─5845 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

    May 07 11:48:06 VM-0-6-centos systemd[1]: Starting Docker Application Container Engine...
    May 07 11:48:06 VM-0-6-centos dockerd[5845]: time="2023-05-07T11:48:06.391166236+08:00" level=info msg="Starting up"
    May 07 11:48:06 VM-0-6-centos dockerd[5845]: time="2023-05-07T11:48:06.421736631+08:00" level=info msg="Loading containers: start."
    May 07 11:48:06 VM-0-6-centos dockerd[5845]: time="2023-05-07T11:48:06.531022702+08:00" level=info msg="Loading containers: done."
    May 07 11:48:06 VM-0-6-centos dockerd[5845]: time="2023-05-07T11:48:06.544715135+08:00" level=info msg="Docker daemon" commit=94d3ad6 graphdriver=overlay2 version=23.0.5
    May 07 11:48:06 VM-0-6-centos dockerd[5845]: time="2023-05-07T11:48:06.544798391+08:00" level=info msg="Daemon has completed initialization"
    May 07 11:48:06 VM-0-6-centos systemd[1]: Started Docker Application Container Engine.
    May 07 11:48:06 VM-0-6-centos dockerd[5845]: time="2023-05-07T11:48:06.569274215+08:00" level=info msg="API listen on /run/docker.sock"
    ```
Once the environment is prepared, Kuboard Spray can be deployed.
Deploying Kuboard Spray
Execute the following command to install Kuboard Spray:
docker run -d \
--privileged \
--restart=unless-stopped \
--name=kuboard-spray \
-p 80:80/tcp \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/kuboard-spray-data:/data \
eipwork/kuboard-spray:latest-amd64
If the image pull fails due to network issues, use the backup address below:
docker run -d \
--privileged \
--restart=unless-stopped \
--name=kuboard-spray \
-p 80:80/tcp \
-v /var/run/docker.sock:/var/run/docker.sock \
-v ~/kuboard-spray-data:/data \
swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard-spray:latest-amd64
After executing the command, open the Kuboard Spray web interface by entering http://1.13.2.100 (the jump server IP address) in a web browser, then log in to the Kuboard Spray interface using the username admin and the default password Kuboard123, as shown below:
After logging in, the Kubernetes cluster deployment can be started.
Visual Deployment of Kubernetes Cluster
After logging into the Kuboard-Spray interface, you can begin visually deploying a Kubernetes cluster.
Importing Kubernetes-related Resource Packages
The installation interface first downloads the Kubernetes cluster's resource package online, which then enables offline installation of the Kubernetes cluster.
- Click Resource Package Management and select the appropriate version of the Kubernetes resource package to download:

    Download the spray-v2.18.0b-2_k8s-v1.23.17_v1.24-amd64 version.

- Click Import > Load Resource Package, select the appropriate download source, and wait for the resource package to finish downloading.
Note
We recommend choosing Docker as the container engine for your K8s cluster. Once Docker is selected as the container engine for K8s, Kuboard-Spray will automatically utilize Docker to run various components of the K8s cluster, including containers on both Master and Worker nodes.
- This will pull the related image dependencies.
- After the image resource package is successfully pulled, return to the Kuboard-Spray web interface. You can see that the corresponding version of the resource package has been imported.
Installing a Kubernetes Cluster
This chapter will guide you through the installation of a Kubernetes cluster.
- Select Cluster Management and choose Add Cluster Installation Plan:
- In the pop-up dialog box, define the name of the cluster, select the version of the resource package that was just imported, and click OK, as shown in the following figure:
Cluster Planning
Based on the predefined roles, the Kubernetes cluster is deployed with a pattern of 1 master + 1 worker + 1 etcd.
After defining the cluster name and selecting the resource package version, click OK, and then proceed to the cluster planning stage.
- Select the corresponding node roles and names:

    - Master node: Select the etcd and control node roles and name it master0. (If you want the master node to participate in the workload, you can also select the worker role. This improves resource utilization but reduces the high availability of Kubernetes.)
    - Worker node: Select only the worker role and name it node0.

- After filling in the roles and node names for each node, fill in the corresponding connection information on the right, as shown in the following figure:
- After filling in all the roles, click Save. You can now prepare to install the Kubernetes cluster.
Installing Kubernetes Cluster
After completing all roles and saving in the previous step, click Execute to start installing the Kubernetes cluster.
- Click OK as shown in the figure below to start installing the Kubernetes cluster:
- When installing the Kubernetes cluster, an ansible script is executed on the corresponding nodes. The overall installation time varies with machine configuration and network; generally, it takes 5 to 10 minutes.

    Note: If an error occurs, check the log to confirm whether the Kuboard-Spray version is mismatched; if it is, replace it with a suitable version.
- After the installation is complete, execute `kubectl get node` on the master node of the Kubernetes cluster:

    ```
    [root@master0 ~]# kubectl get node
    NAME      STATUS   ROLES                  AGE   VERSION
    master0   Ready    control-plane,master   52m   v1.23.17
    node0     Ready    <none>                 52m   v1.23.17
    ```

- The command result shown above indicates that the Kubernetes cluster has been successfully installed.
- Adjust the DNS routing table on each node in Kubernetes. Execute the following command on each machine to find the nameserver entry containing 169.254.25.10 and delete that record. (This record may affect the communication efficiency between Pods; if it does not exist, no change is needed.)

    ```
    vim /etc/resolv.conf
    ```
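If you prefer a non-interactive edit over vim, the deletion can be scripted with sed. The sketch below demonstrates the idea on a throwaway temporary file (the first nameserver IP is a placeholder); on a real node you would target /etc/resolv.conf itself, after backing it up:

```shell
# Demonstrate the deletion on a throwaway copy so it is side-effect free;
# on a real node you would edit /etc/resolv.conf (back it up first).
tmpfile=$(mktemp)
printf 'nameserver 183.60.83.19\nnameserver 169.254.25.10\n' > "$tmpfile"

# Drop every nameserver line that points at 169.254.25.10.
sed -i '/^nameserver 169\.254\.25\.10$/d' "$tmpfile"

remaining=$(cat "$tmpfile")
echo "$remaining"
rm -f "$tmpfile"
```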
2. Deploying Helm
Helm is a package management tool for managing Kubernetes applications. Similar to APT for Ubuntu and YUM for CentOS, Helm provides a convenient way to install, upgrade, and manage Kubernetes applications. It simplifies the application deployment and management process using charts (preconfigured installation package resources).
Before installing MinIO, we need to install Helm first because the MinIO installation process depends on it. Here are the steps to install Helm:
Note: All operations in this section are performed on the master0 node.
- Download the Helm installation package:

    ```
    wget https://get.helm.sh/helm-v3.10.2-linux-amd64.tar.gz
    # If your network access is restricted, you can use the following domestic mirror instead:
    wget https://mirrors.huaweicloud.com/helm/v3.10.2/helm-v3.10.2-linux-amd64.tar.gz
    ```
- Extract and install:

    ```
    tar -zxf helm-v3.10.2-linux-amd64.tar.gz
    mv linux-amd64/helm /usr/local/bin/helm
    ```
- Verify the version to check whether the installation succeeded:

    ```
    [root@k8s01 home]# helm version
    version.BuildInfo{Version:"v3.10.2", GitCommit:"50f003e5ee8704ec937a756c646870227d7c8b58", GitTreeState:"clean", GoVersion:"go1.18.8"}
    ```
The version information shown above indicates that the installation is complete.
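If you want to check the installed version in a script (for example, to gate later steps on a specific Helm release), the version string can be parsed from that output. A sketch operating on a captured sample line (copied from the output above, not re-run):

```shell
# Sample line as produced by `helm version` above.
sample='version.BuildInfo{Version:"v3.10.2", GitCommit:"50f003e5ee8704ec937a756c646870227d7c8b58", GitTreeState:"clean", GoVersion:"go1.18.8"}'

# Extract the value between {Version:" and the closing quote.
version=$(printf '%s' "$sample" | sed 's/.*{Version:"\([^"]*\)".*/\1/')
echo "$version"   # v3.10.2
```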
3. CSI Deployment
CSI is a storage plugin for Kubernetes that provides storage services for MinIO and MatrixOne. This section will guide you through the use of the local-path-provisioner
plugin.
Note: All the commands in this section should be executed on the master0 node.
- Install CSI using the following command line:

    ```
    wget https://github.com/rancher/local-path-provisioner/archive/refs/tags/v0.0.23.zip
    unzip v0.0.23.zip
    cd local-path-provisioner-0.0.23/deploy/chart/local-path-provisioner
    helm install --set nodePathMap[0].paths[0]="/opt/local-path-provisioner",nodePathMap[0].node=DEFAULT_PATH_FOR_NON_LISTED_NODES --create-namespace --namespace local-path-storage local-path-storage ./
    ```
- After a successful installation, the command line should display as follows:

    ```
    root@master0:~# kubectl get pod -n local-path-storage
    NAME                                                        READY   STATUS    RESTARTS   AGE
    local-path-storage-local-path-provisioner-57bf67f7c-lcb88   1/1     Running   0          89s
    ```
Note: After installation, this storageClass will provide storage services in the "/opt/local-path-provisioner" directory on the worker node. You can modify it to another path.
- Set the default storageClass:

    ```
    kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
    ```
- After setting the default, the command line should display as follows:

    ```
    root@master0:~# kubectl get storageclass
    NAME                   PROVISIONER                                               RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    local-path (default)   cluster.local/local-path-storage-local-path-provisioner   Delete          WaitForFirstConsumer   true                   115s
    ```
4. MinIO Deployment
MinIO is used to provide object storage for MatrixOne. This section will guide you through the deployment of a single-node MinIO.
Note: All the commands in this section should be executed on the master0 node.
Installation and Startup
- The command line for installing and starting MinIO is as follows:

    ```
    helm repo add minio https://charts.min.io/
    mkdir minio_ins && cd minio_ins
    helm fetch minio/minio
    ls -lth
    tar -zxvf minio-5.0.9.tgz # This version may change; the actual download shall prevail
    cd ./minio/

    kubectl create ns mostorage

    helm install minio \
    --namespace mostorage \
    --set resources.requests.memory=512Mi \
    --set replicas=1 \
    --set persistence.size=10G \
    --set mode=standalone \
    --set rootUser=rootuser,rootPassword=rootpass123 \
    --set consoleService.type=NodePort \
    --set image.repository=minio/minio \
    --set image.tag=latest \
    --set mcImage.repository=minio/mc \
    --set mcImage.tag=latest \
    -f values.yaml minio/minio
    ```
Note

- `--set resources.requests.memory=512Mi` sets the minimum memory consumption of MinIO.
- `--set persistence.size=10G` sets the storage size of MinIO to 10G.
- `--set rootUser=rootuser,rootPassword=rootpass123` sets the rootUser and rootPassword parameters, which are required later when creating the Secret for the Kubernetes cluster, so use values you can remember.
- If the command has to be re-executed due to network or other issues, uninstall MinIO first:

    ```
    helm uninstall minio --namespace mostorage
    ```
- After a successful installation and start, the command line should display as follows:

    ```
    NAME: minio
    LAST DEPLOYED: Sun May  7 14:17:18 2023
    NAMESPACE: mostorage
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    MinIO can be accessed via port 9000 on the following DNS name from within your cluster:
    minio.mostorage.svc.cluster.local

    To access MinIO from localhost, run the below commands:

      1. export POD_NAME=$(kubectl get pods --namespace mostorage -l "release=minio" -o jsonpath="{.items[0].metadata.name}")
      2. kubectl port-forward $POD_NAME 9000 --namespace mostorage

    Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/

    You can now access MinIO server on http://localhost:9000. Follow the below steps to connect to MinIO server with mc client:

      1. Download the MinIO mc client - https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart
      2. export MC_HOST_minio-local=http://$(kubectl get secret --namespace mostorage minio -o jsonpath="{.data.rootUser}" | base64 --decode):$(kubectl get secret --namespace mostorage minio -o jsonpath="{.data.rootPassword}" | base64 --decode)@localhost:9000
      3. mc ls minio-local
    ```
So far, MinIO has been successfully installed. During the subsequent installation of MatrixOne, MatrixOne will communicate with MinIO directly through the Kubernetes Service (SVC), without additional configuration.
However, if you want to connect to MinIO from localhost, you can execute the following command line to set the POD_NAME variable and forward port 9000 of mostorage:

```
export POD_NAME=$(kubectl get pods --namespace mostorage -l "release=minio" -o jsonpath="{.items[0].metadata.name}")
nohup kubectl port-forward --address 0.0.0.0 $POD_NAME -n mostorage 9000:9000 &
```
- After startup, use http://118.195.255.252:32001/ to log in to the MinIO page and create the object storage information. As shown in the figure below, the account and password are the rootUser and rootPassword set via `--set rootUser=rootuser,rootPassword=rootpass123` in the steps above:
- After logging in, you need to create the object storage information:

    Under Bucket > Create Bucket, fill in the Bucket Name with minio-mo, then click the Create Bucket button at the bottom right.
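A side note on credentials: when the rootUser/rootPassword values are later stored in a Kubernetes Secret, kubectl keeps them base64-encoded, which is why the MinIO NOTES above pipe them through `base64 --decode`. A quick sketch of the encoding round-trip, using the rootuser value from this document:

```shell
# base64 round-trip, as used when reading credentials back out of a Secret.
encoded=$(printf '%s' 'rootuser' | base64)
echo "$encoded"                               # cm9vdHVzZXI=
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"                               # rootuser
```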
5. Deploying a MatrixOne Cluster
This section will guide you through the process of deploying a MatrixOne cluster.
Note: All steps in this section are performed on the master0 node.
Installing the MatrixOne-Operator
MatrixOne Operator is a standalone software tool for deploying and managing MatrixOne clusters on Kubernetes. You can install the latest Operator Release installation package from the project's Release List.

Follow the steps below to install MatrixOne Operator on master0. We will create a separate namespace, `matrixone-operator`, for the Operator.
- Download the latest MatrixOne Operator installation package:

    ```
    wget https://github.com/matrixorigin/matrixone-operator/releases/download/chart-0.8.0-alpha.7/matrixone-operator-0.8.0-alpha.7.tgz
    ```
- Unzip the installation package:

    ```
    tar -xvf matrixone-operator-0.8.0-alpha.7.tgz
    cd /root/matrixone-operator/
    ```
- Define the namespace variable:

    ```
    NS="matrixone-operator"
    ```
- Use Helm to install MatrixOne Operator and create the namespace:

    ```
    helm install --create-namespace --namespace ${NS} matrixone-operator ./ --dependency-update
    ```
- After the installation is successful, use the following command to confirm the installation status:

    ```
    kubectl get pod -n matrixone-operator
    ```

    Ensure that all Pods in the output are in the Running state:

    ```
    [root@master0 matrixone-operator]# kubectl get pod -n matrixone-operator
    NAME                                 READY   STATUS    RESTARTS   AGE
    matrixone-operator-f8496ff5c-fp6zm   1/1     Running   0          3m26s
    ```

    As shown in the output above, the status of the corresponding Pod is normal.
Create a MatrixOne cluster
- First, create the namespace for MatrixOne:

    ```
    NS="mo-hn"
    kubectl create ns ${NS}
    ```
- Customize the `yaml` file of the MatrixOne cluster, writing the following `mo.yaml` file:

    ```
    apiVersion: core.matrixorigin.io/v1alpha1
    kind: MatrixOneCluster
    metadata:
      name: mo
      namespace: mo-hn
    spec:
      # 1. Configuration for tn
      tn:
        cacheVolume: # Disk cache for tn
          size: 5Gi # Modify according to actual disk size and requirements
          storageClassName: local-path # If not specified, the default storage class of the system will be used
        resources:
          requests:
            cpu: 100m # 1000m = 1c
            memory: 500Mi # 1024Mi
          limits: # Note that limits should not be lower than requests and should not exceed the capacity of a single node; usually set limits and requests to be consistent
            cpu: 200m
            memory: 1Gi
        config: | # Configuration for tn
          [dn.Txn.Storage]
          backend = "TAE"
          log-backend = "logservice"
          [dn.Ckp]
          flush-interval = "60s"
          min-count = 100
          scan-interval = "5s"
          incremental-interval = "60s"
          global-interval = "100000s"
          [log]
          level = "error"
          format = "json"
          max-size = 512
        replicas: 1 # The number of TN replicas, which cannot be modified; the current version only supports a setting of 1
      # 2. Configuration for logservice
      logService:
        replicas: 3 # Number of logservice replicas
        resources:
          requests:
            cpu: 100m # 1000m = 1c
            memory: 500Mi # 1024Mi
          limits: # Note that limits should not be lower than requests and should not exceed the capacity of a single node; usually set limits and requests to be consistent
            cpu: 200m
            memory: 1Gi
        sharedStorage: # Configuration for logservice to connect to s3 storage
          s3:
            type: minio # Type of s3 storage to connect to is minio
            path: minio-mo # Path to the minio bucket used by mo, previously created through the console or mc command
            endpoint: http://minio.mostorage:9000 # The svc address and port of the minio service
            secretRef: # Configuration for accessing minio; the secret name is minio
              name: minio
        pvcRetentionPolicy: Retain # Lifecycle policy of the PVC after the cluster is destroyed; Retain means keep, Delete means delete
        volume:
          size: 1Gi # Size of the volume; modify according to actual disk size and requirements
        config: | # Configuration for logservice
          [log]
          level = "error"
          format = "json"
          max-size = 512
      # 3. Configuration for cn
      tp:
        cacheVolume: # Disk cache for cn
          size: 5Gi # Modify according to actual disk size and requirements
          storageClassName: local-path # If not specified, the default storage class of the system will be used
        resources:
          requests:
            cpu: 100m # 1000m = 1c
            memory: 500Mi # 1024Mi
          limits: # Note that limits should not be lower than requests and should not exceed the capacity of a single node; usually set limits and requests to be consistent
            cpu: 200m
            memory: 2Gi
        serviceType: NodePort # cn needs to provide an access entry to the outside, so its svc is set to NodePort
        nodePort: 31429 # NodePort port setting
        config: | # Configuration for cn
          [cn.Engine]
          type = "distributed-tae"
          [log]
          level = "debug"
          format = "json"
          max-size = 512
        replicas: 1
      version: nightly-54b5e8c # The version of the MO image; you can check it on Docker Hub. Generally, cn, tn, and logservice are packaged in the same image, so this one field specifies it for all of them. Specifying separately in each section is also supported, but unless there are special circumstances, use a unified image version.
      # https://hub.docker.com/r/matrixorigin/matrixone/tags
      imageRepository: matrixorigin/matrixone # Image repository address. If the image is pulled locally and the tag has been modified, you can adjust this configuration item.
      imagePullPolicy: IfNotPresent # Image pull policy, consistent with the official configurable values of k8s
    ```
- Execute the following command to create a Secret for accessing MinIO in the namespace `mo-hn`:

    ```
    kubectl -n mo-hn create secret generic minio --from-literal=AWS_ACCESS_KEY_ID=rootuser --from-literal=AWS_SECRET_ACCESS_KEY=rootpass123
    ```

    The username and password use the `rootUser` and `rootPassword` set when creating the MinIO cluster.
Execute the following command to deploy the MatrixOne cluster:
kubectl apply -f mo.yaml
- Please wait patiently for about 10 minutes. If a Pod restarts, continue waiting. When you see the following information, the deployment has succeeded:

    ```
    [root@master0 mo]# kubectl get pods -n mo-hn
    NAME         READY   STATUS    RESTARTS      AGE
    mo-tn-0      1/1     Running   0             74s
    mo-log-0     1/1     Running   1 (25s ago)   2m2s
    mo-log-1     1/1     Running   1 (24s ago)   2m2s
    mo-log-2     1/1     Running   1 (22s ago)   2m2s
    mo-tp-cn-0   1/1     Running   0             50s
    ```
6. Connect to MatrixOne cluster
To connect to the MatrixOne cluster, you need to map the port of the corresponding service to the MatrixOne node. Here are the instructions for connecting to a MatrixOne cluster using `kubectl port-forward`:
- Only allow local access:

    ```
    nohup kubectl port-forward -nmo-hn svc/mo-tp-cn 6001:6001 &
    ```

- Allow a specific machine or all machines to access:

    ```
    nohup kubectl port-forward -nmo-hn --address 0.0.0.0 svc/mo-tp-cn 6001:6001 &
    ```
After specifying Allow local access or Specify a specific machine or all machines to access, you can use the MySQL client to connect to MatrixOne:
```
# Connect to the MySQL server using the 'mysql' command line tool
# Use "kubectl get svc/mo-tp-cn -n mo-hn -o jsonpath='{.spec.clusterIP}'" to get the cluster IP address of the service in the Kubernetes cluster
# The '-h' parameter specifies the hostname or IP address of the MySQL service
# The '-P' parameter specifies the port number of the MySQL service, here 6001
# '-uroot' means logging in as the root user
# '-p111' means the initial password is 111
mysql -h $(kubectl get svc/mo-tp-cn -n mo-hn -o jsonpath='{.spec.clusterIP}') -P 6001 -uroot -p111
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 163
Server version: 8.0.30-MatrixOne-v1.2.2 MatrixOne

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
```
Once the `mysql>` prompt appears, the distributed MatrixOne cluster has been established and connected.
Info
The login account in the above code snippet is the initial account; please change the initial password after logging in to MatrixOne; see Password Management.