Migrating MySQL to GaussDB: Using DUMP/LOAD in GaussDB T to Export, Import, Migrate, and Back Up Data
GaussDB T can export database data to standardized, general-purpose files with the dump command and import such files with load. The load command can ingest files exported from Oracle, MySQL, SQL Server, and other databases; conversely, files produced by dump can be loaded into other databases or other GaussDB instances.
The functionality is similar to Oracle's sqlldr, and the exported files can also be fed to third-party tools for loading into other databases. A file in this generic format looks like this:
[omm@pr7 ~]$ more BMSQL_HISTORY.dmp
150001,1,1,6,1,6,2019-12-26 11:15:43.086000,10,jRNtYWMBOaKB
150002,2,1,6,1,6,2019-12-26 11:15:43.091000,10,Wq66ccQqhdv4U3TuX
150003,3,1,6,1,6,2019-12-26 11:15:43.091000,10,oAMMJOA2wxZx
150004,4,1,6,1,6,2019-12-26 11:15:43.091000,10,RPqp4vCRYiHn
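As the sample above shows, the default dump output is plain delimited text, so any CSV-capable tool can consume it outside the database. A minimal Python sketch, using two of the sample lines above as input (the parsing approach is my own illustration, not part of GaussDB T):

```python
import csv
import io

# Two lines in GaussDB T's default dump format: comma-separated
# fields, newline-terminated rows, no enclosing quote characters.
sample = (
    "150001,1,1,6,1,6,2019-12-26 11:15:43.086000,10,jRNtYWMBOaKB\n"
    "150002,2,1,6,1,6,2019-12-26 11:15:43.091000,10,Wq66ccQqhdv4U3TuX\n"
)

rows = list(csv.reader(io.StringIO(sample)))
print(len(rows))    # number of parsed rows
print(rows[0][0])   # first column of the first row (HIST_ID)
```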
Exporting data with dump
dump can write a table, or the result set of a query, to a file for migration and backup. For a detailed description of the dump command and its parameters, see:
Huawei GaussDB T DUMP: https://www.modb.pro/db/8480
View the help:
SQL> dump -u;
The syntax of data dumper is:
DUMP {TABLE table_name | QUERY "select_query"}
INTO FILE "file_name"
[FILE SIZE 'uint64_file_size']
[{FIELDS | COLUMNS} ENCLOSED BY 'ascii_char' [OPTIONALLY]]
[{FIELDS | COLUMNS} TERMINATED BY 'string']
[{LINES | ROWS} TERMINATED BY 'string']
[CHARSET string]
[ENCRYPT BY 'password'];
Export all rows of the BMSQL_HISTORY table:
dump table BMSQL_HISTORY into file 'BMSQL_HISTORY.dmp';
[omm@pr7 ~]$ zsql steven/"modb123$"@127.0.0.1:1888 -q
connected.
SQL> desc BMSQL_HISTORY
Name                                Null?    Type
----------------------------------- -------- ------------------------------------
HIST_ID                             NOT NULL BINARY_INTEGER
H_C_ID                                       BINARY_INTEGER
H_C_D_ID                                     BINARY_INTEGER
H_C_W_ID                                     BINARY_INTEGER
H_D_ID                                       BINARY_INTEGER
H_W_ID                                       BINARY_INTEGER
H_DATE                                       TIMESTAMP(6)
H_AMOUNT                                     NUMBER(6, 2)
H_DATA                                       VARCHAR(24 BYTE)
SQL> select count(*) from BMSQL_HISTORY;
COUNT(*)
--------------------
329908
1 rows fetched.
SQL> dump table BMSQL_HISTORY into file 'BMSQL_HISTORY.dmp';
5000 rows dumped.
10000 rows dumped.
15000 rows dumped.
...
325000 rows dumped.
329908 rows dumped.
Dump TABLE successfully:
329908 rows are totally dumped.
Check the exported file:
[omm@pr7 ~]$ ls -l BMSQL_HISTORY.dmp
-rw------- 1 omm dbgrp 22851266 Jan 9 18:15 BMSQL_HISTORY.dmp
[omm@pr7 ~]$ more BMSQL_HISTORY.dmp
150001,1,1,6,1,6,2019-12-26 11:15:43.086000,10,jRNtYWMBOaKB
150002,2,1,6,1,6,2019-12-26 11:15:43.091000,10,Wq66ccQqhdv4U3TuX
150003,3,1,6,1,6,2019-12-26 11:15:43.091000,10,oAMMJOA2wxZx
150004,4,1,6,1,6,2019-12-26 11:15:43.091000,10,RPqp4vCRYiHn
150005,5,1,6,1,6,2019-12-26 11:15:43.091000,10,KF2JEs44N7DQF1Q
150006,6,1,6,1,6,2019-12-26 11:15:43.092000,10,HLYGQlJfcx54Nv
150007,7,1,6,1,6,2019-12-26 11:15:43.092000,10,tYZtZ9MVsxUGr13b
150008,8,1,6,1,6,2019-12-26 11:15:43.092000,10,IVH0kTgcptDKmJA0
150009,9,1,6,1,6,2019-12-26 11:15:43.093000,10,JrZ7xLw8Vrq24SorDXv
......
Export only the newest rows of the BMSQL_HISTORY table:
dump query "select HIST_ID,H_DATE,H_DATA from BMSQL_HISTORY where H_DATE>to_date('20191226 112000','yyyymmdd hh24miss')" into file 'BMSQL_HISTORY_20191226.dmp' COLUMNS ENCLOSED BY '''' COLUMNS TERMINATED BY '|';
SQL> select count(*) from BMSQL_HISTORY where H_DATE>to_date('20191226 112000','yyyymmdd hh24miss');
COUNT(*)
--------------------
29908
1 rows fetched.
SQL> dump query "select HIST_ID,H_DATE,H_DATA from BMSQL_HISTORY where H_DATE>to_date('20191226 112000','yyyymmdd hh24miss')"
2 into file 'BMSQL_HISTORY_20191226.dmp'
3 COLUMNS ENCLOSED BY ''''
4 COLUMNS TERMINATED BY '|';
5000 rows dumped.
10000 rows dumped.
15000 rows dumped.
20000 rows dumped.
25000 rows dumped.
29908 rows dumped.
Dump QUERY successfully:
29908 rows are totally dumped.
Check the exported file:
[omm@pr7 ~]$ ls -l BMSQL_HISTORY_20191226.dmp
-rw------- 1 omm dbgrp 1811899 Jan 9 18:17 BMSQL_HISTORY_20191226.dmp
[omm@pr7 ~]$ more BMSQL_HISTORY_20191226.dmp
'330328'|'2019-12-26 11:22:09.908000'|'Uw0bMTwQq lt9m9Xe'
'330330'|'2019-12-26 11:22:09.943000'|'XM4gpUz XrQhdWYUiV'
'330332'|'2019-12-26 11:22:09.942000'|'gi2eaJ EXLbfacgXB'
'330338'|'2019-12-26 11:22:10.197000'|'Uw0bMTwQq E2vLqtD'
'330340'|'2019-12-26 11:22:10.218000'|'XM4gpUz XrQhdWYUiV'
'330342'|'2019-12-26 11:22:10.195000'|'gi2eaJ r0pVEx'
'330343'|'2019-12-26 11:22:10.268000'|'XM4gpUz uQmoDnGj'
'330345'|'2019-12-26 11:22:10.268000'|'Uw0bMTwQq PgBomyJ2u'
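The COLUMNS ENCLOSED BY '''' and COLUMNS TERMINATED BY '|' options map directly onto a standard CSV dialect, so a file like the one above can be read back outside the database. A hedged Python sketch, using one line copied from the output above (the parsing code is illustrative, not part of GaussDB T):

```python
import csv
import io

# One line from the custom-delimited dump: fields enclosed in single
# quotes (ENCLOSED BY '''') and separated by '|' (TERMINATED BY '|').
sample = "'330328'|'2019-12-26 11:22:09.908000'|'Uw0bMTwQq lt9m9Xe'\n"

reader = csv.reader(io.StringIO(sample), delimiter="|", quotechar="'")
hist_id, h_date, h_data = next(reader)
print(hist_id)  # enclosing quotes are stripped by the reader
```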
Importing data with load
When migrating a database or restoring backed-up data, you need to export and import data; GaussDB T provides the LOAD command for imports. For detailed parameter descriptions, see:
Huawei GaussDB T LOAD
https://www.modb.pro/db/8483
Load the data into the backup table BMSQL_HISTORY_BAK:
load data infile "BMSQL_HISTORY.dmp" into table BMSQL_HISTORY_BAK;
SQL> create table BMSQL_HISTORY_BAK as select * from BMSQL_HISTORY where 1=2;
Succeed.
SQL> load data infile "BMSQL_HISTORY.dmp" into table BMSQL_HISTORY_BAK;
15365 rows have been committed.
30798 rows have been committed.
46201 rows have been committed.
...
318853 rows have been committed.
329908 rows have been committed.
Complete the data load.
totally read rows: 329908
ignored rows: 0
loaded rows: 329908
committed rows: 329908
error rows: 0
skip rows: 0
SQL> select count(*) from BMSQL_HISTORY_BAK;
COUNT(*)
--------------------
329908
1 rows fetched.
Load the data exported by DUMP QUERY:
THREADS sets the number of loader threads; NOLOGGING inserts the data without writing redo or undo logs.
SQL> load data infile "BMSQL_HISTORY_20191226.dmp" into table BMSQL_HISTORY_20191216
2 COLUMNS ENCLOSED BY ''''
3 COLUMNS TERMINATED BY '|'
4 THREADS 3
5 NOLOGGING;
12596 rows have been committed.
29908 rows have been committed.
Complete the data load.
totally read rows: 29908
ignored rows: 0
loaded rows: 29908
committed rows: 29908
error rows: 0
skip rows: 0
SQL> select count(*) from BMSQL_HISTORY_20191216;
COUNT(*)
--------------------
29908
1 rows fetched.
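The summary counters that load prints can be checked mechanically after a migration. A minimal sketch, using the summary text copied from the session above (the parsing logic and the consistency condition are my own illustration):

```python
# The load summary as printed by GaussDB T above.
summary = """totally read rows: 29908
ignored rows: 0
loaded rows: 29908
committed rows: 29908
error rows: 0
skip rows: 0"""

# Parse "key: value" lines into a dict of integer counters.
stats = {}
for line in summary.splitlines():
    key, _, value = line.rpartition(":")
    stats[key.strip()] = int(value)

# A clean load has no errors, and every loaded row was committed.
ok = stats["error rows"] == 0 and stats["loaded rows"] == stats["committed rows"]
print(ok)
```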
Reprinted from Modb (墨天轮).