Removing Nodes from a TiDB Distributed Database Cluster Deployed Online with TiUP
Check the current status of the cluster nodes:
tiup cluster display hdcluster
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster display hdcluster
Cluster type: tidb
Cluster name: hdcluster
Cluster version: v4.0.8
SSH type: builtin
Dashboard URL:
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
172.16.254.91:9093 alertmanager 172.16.254.91 9093/9094 linux/x86_64 Up /tidb/tidb-data/alertmanager-9093 /tidb/tidb-deploy/alertmanager-9093
172.16.254.91:3000 grafana 172.16.254.91 3000 linux/x86_64 Up - /tidb/tidb-deploy/grafana-3000
172.16.254.101:2379 pd 172.16.254.101 2379/2380 linux/x86_64 Up /tidb/tidb-data/pd-2379 /tidb/tidb-deploy/pd-2379
172.16.254.102:2379 pd 172.16.254.102 2379/2380 linux/x86_64 Up /tidb/tidb-data/pd-2379 /tidb/tidb-deploy/pd-2379
172.16.254.92:2379 pd 172.16.254.92 2379/2380 linux/x86_64 Up|L /tidb/tidb-data/pd-2379 /tidb/tidb-deploy/pd-2379
172.16.254.93:2379 pd 172.16.254.93 2379/2380 linux/x86_64 Up|UI /tidb/tidb-data/pd-2379 /tidb/tidb-deploy/pd-2379
172.16.254.94:2379 pd 172.16.254.94 2379/2380 linux/x86_64 Up /tidb/tidb-data/pd-2379 /tidb/tidb-deploy/pd-2379
172.16.254.91:9090 prometheus 172.16.254.91 9090 linux/x86_64 Up /tidb/tidb-data/prometheus-9090 /tidb/tidb-deploy/prometheus-9090
172.16.254.103:4000 tidb 172.16.254.103 4000/10080 linux/x86_64 Up - /tidb/tidb-deploy/tidb-4000
172.16.254.95:4000 tidb 172.16.254.95 4000/10080 linux/x86_64 Up - /tidb/tidb-deploy/tidb-4000
172.16.254.96:4000 tidb 172.16.254.96 4000/10080 linux/x86_64 Up - /tidb/tidb-deploy/tidb-4000
172.16.254.97:4000 tidb 172.16.254.97 4000/10080 linux/x86_64 Up - /tidb/tidb-deploy/tidb-4000
172.16.254.91:9000 tiflash 172.16.254.91 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /tidb/tidb-data/tiflash-9000 /tidb/tidb-deploy/tiflash-9000
172.16.254.100:20160 tikv 172.16.254.100 20160/20180 linux/x86_64 Up /tidb/tidb-data/tikv-20160 /tidb/tidb-deploy/tikv-20160
172.16.254.104:20160 tikv 172.16.254.104 20160/20180 linux/x86_64 Up /tidb/tidb-data/tikv-20160 /tidb/tidb-deploy/tikv-20160
172.16.254.98:20160 tikv 172.16.254.98 20160/20180 linux/x86_64 Up /tidb/tidb-data/tikv-20160 /tidb/tidb-deploy/tikv-20160
172.16.254.99:20160 tikv 172.16.254.99 20160/20180 linux/x86_64 Up /tidb/tidb-data/tikv-20160 /tidb/tidb-deploy/tikv-20160
Total nodes: 17
Now remove, one at a time, the nodes that were added in the previous article: 2 pd_server nodes, 1 tidb_server node, and 1 tikv_server node.
Remove the tikv_servers node:
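Before scaling in, note the Up|L and Up|UI markers in the PD rows above: 172.16.254.92 currently holds the PD leader and 172.16.254.93 serves the Dashboard, and neither of them is on our removal list. As a sanity check you can confirm the leader with pd-ctl. This is a hedged example: it assumes the tiup ctl:<version> invocation available in TiUP v1.3+ and that 172.16.254.92:2379 is reachable from this host.

# Show which PD member currently holds the leader role
tiup ctl:v4.0.8 pd -u http://172.16.254.92:2379 member leader show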
tiup cluster scale-in hdcluster --node 172.16.254.104:20160
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster scale-in hdcluster --node 172.16.254.104:20160
This operation will delete the 172.16.254.104:20160 nodes in `hdcluster` and all their data.
Do you want to continue? [y/N]: y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.100
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.92
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.93
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.94
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.101
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.102
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.98
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.99
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.97
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.104
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.95
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.96
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.103
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [ Serial ] - ClusterOperate: operation=ScaleInOperation, options={Roles:[] Nodes:[172.16.254.104:20160] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:300 IgnoreConfigCheck:false NativeSSH:false SSHType: CleanupData:false CleanupLog:false RetainDataRoles:[] RetainDataNodes:[]}
The component `tikv` will become tombstone, maybe exists in several minutes or hours, after that you can use the prune command to clean it
+ [ Serial ] - UpdateMeta: cluster=hdcluster, deleted=`''`
+ [ Serial ] - UpdateTopology: cluster=hdcluster
+ Refresh instance configs
- Regenerate config pd -> 172.16.254.92:2379 ... Done
- Regenerate config pd -> 172.16.254.93:2379 ... Done
- Regenerate config pd -> 172.16.254.94:2379 ... Done
- Regenerate config pd -> 172.16.254.101:2379 ... Done
- Regenerate config pd -> 172.16.254.102:2379 ... Done
- Regenerate config tikv -> 172.16.254.98:20160 ... Done
- Regenerate config tikv -> 172.16.254.99:20160 ... Done
- Regenerate config tikv -> 172.16.254.100:20160 ... Done
- Regenerate config tidb -> 172.16.254.95:4000 ... Done
- Regenerate config tidb -> 172.16.254.96:4000 ... Done
- Regenerate config tidb -> 172.16.254.97:4000 ... Done
- Regenerate config tidb -> 172.16.254.103:4000 ... Done
- Regenerate config tiflash -> 172.16.254.91:9000 ... Done
- Regenerate config prometheus -> 172.16.254.91:9090 ... Done
- Regenerate config grafana -> 172.16.254.91:3000 ... Done
- Regenerate config alertmanager -> 172.16.254.91:9093 ... Done
+ [ Serial ] - SystemCtl: host=172.16.254.91 action=reload prometheus-9090.service
Scaled cluster `hdcluster` in successfully
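Unlike stateless components, a TiKV store is not destroyed immediately: as the log message above notes, the store first goes offline while PD migrates its Region replicas to the remaining TiKV nodes, and only then does it become Tombstone. You can watch the drain progress with pd-ctl. A hedged sketch, under the same tiup ctl assumptions as above; the numeric store ID for 172.16.254.104 will differ in your cluster:

# List all stores; the offlining store's region_count should fall to 0 before it turns Tombstone
tiup ctl:v4.0.8 pd -u http://172.16.254.92:2379 store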
Remove the pd_servers nodes:
tiup cluster scale-in hdcluster --node 172.16.254.101:2379
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster scale-in hdcluster --node 172.16.254.101:2379
This operation will delete the 172.16.254.101:2379 nodes in `hdcluster` and all their data.
Do you want to continue? [y/N]: y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.92
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.93
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.94
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.101
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.102
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.98
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.99
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.100
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.104
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.95
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.96
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.97
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.103
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [ Serial ] - ClusterOperate: operation=ScaleInOperation, options={Roles:[] Nodes:[172.16.254.101:2379] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:300 IgnoreConfigCheck:false NativeSSH:false SSHType: CleanupData:false CleanupLog:false RetainDataRoles:[] RetainDataNodes:[]}
Stopping component pd
Stopping instance 172.16.254.101
Stop pd 172.16.254.101:2379 success
Destroying component pd
Destroying instance 172.16.254.101
Destroy 172.16.254.101 success
- Destroy pd paths: [/tidb/tidb-data/pd-2379 /tidb/tidb-deploy/pd-2379/log /tidb/tidb-deploy/pd-2379 /etc/systemd/system/pd-2379.service]
Stopping component node_exporter
Stopping component blackbox_exporter
Destroying monitored 172.16.254.101
Destroying instance 172.16.254.101
Destroy monitored on 172.16.254.101 success
Delete public key 172.16.254.101
Delete public key 172.16.254.101 success
+ [ Serial ] - UpdateMeta: cluster=hdcluster, deleted=`'172.16.254.101:2379'`
+ [ Serial ] - UpdateTopology: cluster=hdcluster
+ Refresh instance configs
- Regenerate config pd -> 172.16.254.92:2379 ... Done
- Regenerate config pd -> 172.16.254.93:2379 ... Done
- Regenerate config pd -> 172.16.254.94:2379 ... Done
- Regenerate config pd -> 172.16.254.102:2379 ... Done
- Regenerate config tikv -> 172.16.254.98:20160 ... Done
- Regenerate config tikv -> 172.16.254.99:20160 ... Done
- Regenerate config tikv -> 172.16.254.100:20160 ... Done
- Regenerate config tikv -> 172.16.254.104:20160 ... Done
- Regenerate config tidb -> 172.16.254.95:4000 ... Done
- Regenerate config tidb -> 172.16.254.96:4000 ... Done
- Regenerate config tidb -> 172.16.254.97:4000 ... Done
- Regenerate config tidb -> 172.16.254.103:4000 ... Done
- Regenerate config tiflash -> 172.16.254.91:9000 ... Done
- Regenerate config prometheus -> 172.16.254.91:9090 ... Done
- Regenerate config grafana -> 172.16.254.91:3000 ... Done
- Regenerate config alertmanager -> 172.16.254.91:9093 ... Done
+ [ Serial ] - SystemCtl: host=172.16.254.91 action=reload prometheus-9090.service
Scaled cluster `hdcluster` in successfully
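PD is a Raft group, so it is worth removing its members one at a time and checking the membership after each step: the cluster here goes from five voting members to four and then to three, keeping a majority available throughout, and each membership change commits cleanly before the next begins. A hedged verification step, same tiup ctl assumptions as above:

# Confirm the remaining PD members after the first removal
tiup ctl:v4.0.8 pd -u http://172.16.254.92:2379 member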
tiup cluster scale-in hdcluster --node 172.16.254.102:2379
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster scale-in hdcluster --node 172.16.254.102:2379
This operation will delete the 172.16.254.102:2379 nodes in `hdcluster` and all their data.
Do you want to continue? [y/N]: y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.92
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.93
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.94
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.102
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.98
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.99
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.100
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.104
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.95
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.96
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.97
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.103
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [ Serial ] - ClusterOperate: operation=ScaleInOperation, options={Roles:[] Nodes:[172.16.254.102:2379] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:300 IgnoreConfigCheck:false NativeSSH:false SSHType: CleanupData:false CleanupLog:false RetainDataRoles:[] RetainDataNodes:[]}
Stopping component pd
Stopping instance 172.16.254.102
Stop pd 172.16.254.102:2379 success
Destroying component pd
Destroying instance 172.16.254.102
Destroy 172.16.254.102 success
- Destroy pd paths: [/tidb/tidb-data/pd-2379 /tidb/tidb-deploy/pd-2379/log /tidb/tidb-deploy/pd-2379 /etc/systemd/system/pd-2379.service]
Stopping component node_exporter
Stopping component blackbox_exporter
Destroying monitored 172.16.254.102
Destroying instance 172.16.254.102
Destroy monitored on 172.16.254.102 success
Delete public key 172.16.254.102
Delete public key 172.16.254.102 success
+ [ Serial ] - UpdateMeta: cluster=hdcluster, deleted=`'172.16.254.102:2379'`
+ [ Serial ] - UpdateTopology: cluster=hdcluster
+ Refresh instance configs
- Regenerate config pd -> 172.16.254.92:2379 ... Done
- Regenerate config pd -> 172.16.254.93:2379 ... Done
- Regenerate config pd -> 172.16.254.94:2379 ... Done
- Regenerate config tikv -> 172.16.254.98:20160 ... Done
- Regenerate config tikv -> 172.16.254.99:20160 ... Done
- Regenerate config tikv -> 172.16.254.100:20160 ... Done
- Regenerate config tikv -> 172.16.254.104:20160 ... Done
- Regenerate config tidb -> 172.16.254.95:4000 ... Done
- Regenerate config tidb -> 172.16.254.96:4000 ... Done
- Regenerate config tidb -> 172.16.254.97:4000 ... Done
- Regenerate config tidb -> 172.16.254.103:4000 ... Done
- Regenerate config tiflash -> 172.16.254.91:9000 ... Done
- Regenerate config prometheus -> 172.16.254.91:9090 ... Done
- Regenerate config grafana -> 172.16.254.91:3000 ... Done
- Regenerate config alertmanager -> 172.16.254.91:9093 ... Done
+ [ Serial ] - SystemCtl: host=172.16.254.91 action=reload prometheus-9090.service
Scaled cluster `hdcluster` in successfully
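With both extra PD nodes gone, the cluster is back to three PD members. Before moving on you can ask PD for an overall health report; a hedged example, assuming pd-ctl's health subcommand, which reports whether each member is healthy:

tiup ctl:v4.0.8 pd -u http://172.16.254.92:2379 health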
Remove the tidb_servers node:
tiup cluster scale-in hdcluster --node 172.16.254.103:4000
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster scale-in hdcluster --node 172.16.254.103:4000
This operation will delete the 172.16.254.103:4000 nodes in `hdcluster` and all their data.
Do you want to continue? [y/N]: y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/hdcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.92
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.93
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.94
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.98
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.99
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.100
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.104
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.95
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.96
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.91
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.97
+ [Parallel] - UserSSH: user=tidb, host=172.16.254.103
+ [ Serial ] - ClusterOperate: operation=ScaleInOperation, options={Roles:[] Nodes:[172.16.254.103:4000] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:300 IgnoreConfigCheck:false NativeSSH:false SSHType: CleanupData:false CleanupLog:false RetainDataRoles:[] RetainDataNodes:[]}
Stopping component tidb
Stopping instance 172.16.254.103
Stop tidb 172.16.254.103:4000 success
Destroying component tidb
Destroying instance 172.16.254.103
Destroy 172.16.254.103 success
- Destroy tidb paths: [/tidb/tidb-deploy/tidb-4000 /etc/systemd/system/tidb-4000.service /tidb/tidb-deploy/tidb-4000/log]
Stopping component node_exporter
Stopping component blackbox_exporter
Destroying monitored 172.16.254.103
Destroying instance 172.16.254.103
Destroy monitored on 172.16.254.103 success
Delete public key 172.16.254.103
Delete public key 172.16.254.103 success
+ [ Serial ] - UpdateMeta: cluster=hdcluster, deleted=`'172.16.254.103:4000'`
+ [ Serial ] - UpdateTopology: cluster=hdcluster
+ Refresh instance configs
- Regenerate config pd -> 172.16.254.92:2379 ... Done
- Regenerate config pd -> 172.16.254.93:2379 ... Done
- Regenerate config pd -> 172.16.254.94:2379 ... Done
- Regenerate config tikv -> 172.16.254.98:20160 ... Done
- Regenerate config tikv -> 172.16.254.99:20160 ... Done
- Regenerate config tikv -> 172.16.254.100:20160 ... Done
- Regenerate config tikv -> 172.16.254.104:20160 ... Done
- Regenerate config tidb -> 172.16.254.95:4000 ... Done
- Regenerate config tidb -> 172.16.254.96:4000 ... Done
- Regenerate config tidb -> 172.16.254.97:4000 ... Done
- Regenerate config tiflash -> 172.16.254.91:9000 ... Done
- Regenerate config prometheus -> 172.16.254.91:9090 ... Done
- Regenerate config grafana -> 172.16.254.91:3000 ... Done
- Regenerate config alertmanager -> 172.16.254.91:9093 ... Done
+ [ Serial ] - SystemCtl: host=172.16.254.91 action=reload prometheus-9090.service
Scaled cluster `hdcluster` in successfully
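TiDB servers are stateless, so this removal completes immediately; the only follow-up is on the client side, where any connection strings or load-balancer pools pointing at 172.16.254.103:4000 must be switched to the remaining TiDB nodes. A hedged connectivity check against one of the survivors (assumes the MySQL client is installed and you know the root password):

mysql -h 172.16.254.95 -P 4000 -u root -p -e "SELECT VERSION();"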
Check the cluster node status again:
tiup cluster display hdcluster
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.3.2/tiup-cluster display hdcluster
Cluster type: tidb
Cluster name: hdcluster
Cluster version: v4.0.8
SSH type: builtin
Dashboard URL:
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
172.16.254.91:9093 alertmanager 172.16.254.91 9093/9094 linux/x86_64 Up /tidb/tidb-data/alertmanager-9093 /tidb/tidb-deploy/alertmanager-9093
172.16.254.91:3000 grafana 172.16.254.91 3000 linux/x86_64 Up - /tidb/tidb-deploy/grafana-3000
172.16.254.92:2379 pd 172.16.254.92 2379/2380 linux/x86_64 Up|L /tidb/tidb-data/pd-2379 /tidb/tidb-deploy/pd-2379
172.16.254.93:2379 pd 172.16.254.93 2379/2380 linux/x86_64 Up|UI /tidb/tidb-data/pd-2379 /tidb/tidb-deploy/pd-2379
172.16.254.94:2379 pd 172.16.254.94 2379/2380 linux/x86_64 Up /tidb/tidb-data/pd-2379 /tidb/tidb-deploy/pd-2379
172.16.254.91:9090 prometheus 172.16.254.91 9090 linux/x86_64 Up /tidb/tidb-data/prometheus-9090 /tidb/tidb-deploy/prometheus-9090
172.16.254.95:4000 tidb 172.16.254.95 4000/10080 linux/x86_64 Up - /tidb/tidb-deploy/tidb-4000
172.16.254.96:4000 tidb 172.16.254.96 4000/10080 linux/x86_64 Up - /tidb/tidb-deploy/tidb-4000
172.16.254.97:4000 tidb 172.16.254.97 4000/10080 linux/x86_64 Up - /tidb/tidb-deploy/tidb-4000
172.16.254.91:9000 tiflash 172.16.254.91 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /tidb/tidb-data/tiflash-9000 /tidb/tidb-deploy/tiflash-9000
172.16.254.100:20160 tikv 172.16.254.100 20160/20180 linux/x86_64 Up /tidb/tidb-data/tikv-20160 /tidb/tidb-deploy/tikv-20160
172.16.254.104:20160 tikv 172.16.254.104 20160/20180 linux/x86_64 Tombstone /tidb/tidb-data/tikv-20160 /tidb/tidb-deploy/tikv-20160
172.16.254.98:20160 tikv 172.16.254.98 20160/20180 linux/x86_64 Up /tidb/tidb-data/tikv-20160 /tidb/tidb-deploy/tikv-20160
172.16.254.99:20160 tikv 172.16.254.99 20160/20180 linux/x86_64 Up /tidb/tidb-data/tikv-20160 /tidb/tidb-deploy/tikv-20160
Total nodes: 14
There are some nodes can be pruned:
Nodes: [172.16.254.104:20160]
You can destroy them with the command: `tiup cluster prune hdcluster`
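Once the TiKV store on 172.16.254.104 has finished migrating its Regions and shows Tombstone (as it does above), run the prune command that the output suggests to destroy the instance and remove it from the topology, then re-run display to confirm the node count drops to 13:

tiup cluster prune hdcluster
tiup cluster display hdcluster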
Scale-in is complete.