Working with MySQL via DataX

1. Reading from MySQL

Introduction

The MysqlReader plugin reads data from MySQL. Under the hood, MysqlReader connects to the remote MySQL database via JDBC and executes the appropriate SQL to SELECT the data out of the MySQL instance.

Unlike readers for other relational databases, MysqlReader does not support FetchSize.

How it works

In short, MysqlReader connects to the remote MySQL database through a JDBC connector, generates a SELECT statement from the user's configuration, sends it to the remote MySQL database, assembles the returned result set into DataX's abstract record types, and hands the records to the downstream Writer.

When the user configures table, column, and where, MysqlReader stitches them into a SQL statement and sends it to MySQL; when the user configures querySql, MysqlReader sends that statement to MySQL as-is.

The job JSON is as follows:

{
    "job": {
        "setting": {
            "speed": {"channel": 3},
            "errorLimit": {"record": 0, "percentage": 0.02}
        },
        "content": [
            {
                "reader": {
                    "name": "mysqlreader",
                    "parameter": {
                        "username": "root",
                        "password": "123456",
                        "column": ["id", "name"],
                        "splitPk": "id",
                        "connection": [
                            {
                                "table": ["datax_test"],
                                "jdbcUrl": ["jdbc:mysql://192.168.1.123:3306/test"]
                            }
                        ]
                    }
                },
                "writer": {
                    "name": "streamwriter",
                    "parameter": {"print": true}
                }
            }
        ]
    }
}
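For repeatability, a job file like the one above can also be generated from code. The sketch below is plain Python (standard library only); the helper name build_reader_job and the output filename are our own conventions, not part of DataX — the resulting file is what you would pass to datax.py:

```python
import json

def build_reader_job(jdbc_url, user, password, table, columns, split_pk, channels=3):
    """Assemble a minimal DataX job dict: mysqlreader -> streamwriter."""
    return {
        "job": {
            "setting": {
                "speed": {"channel": channels},
                "errorLimit": {"record": 0, "percentage": 0.02},
            },
            "content": [{
                "reader": {
                    "name": "mysqlreader",
                    "parameter": {
                        "username": user,
                        "password": password,
                        "column": columns,
                        "splitPk": split_pk,
                        "connection": [{"table": [table], "jdbcUrl": [jdbc_url]}],
                    },
                },
                "writer": {"name": "streamwriter", "parameter": {"print": True}},
            }],
        }
    }

job = build_reader_job("jdbc:mysql://192.168.1.123:3306/test", "root", "123456",
                       "datax_test", ["id", "name"], "id")
with open("reader_all.json", "w") as f:
    json.dump(job, f, indent=2)
```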

Parameter reference

--jdbcUrl

Description: the JDBC connection information for the source database, given as a JSON array; multiple connection addresses may be listed for one database. A JSON array is used because Alibaba's internal deployments support multi-IP probing: when several addresses are configured, MysqlReader probes each in turn until it finds a usable one, and reports an error if every connection fails. Note that jdbcUrl must be placed inside the connection block. For use outside Alibaba, put a single JDBC URL in the array.

The jdbcUrl follows the official MySQL format and may carry additional connection control parameters.

Required: yes

Default: none
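The multi-address probing can be pictured as a first-reachable scan over the configured list. This is an illustrative Python sketch, not DataX's actual Java code; can_connect is a stand-in for the real JDBC connectivity test:

```python
def pick_first_reachable(jdbc_urls, can_connect):
    """Return the first URL for which can_connect(url) is True.

    Mirrors MysqlReader's behavior of probing the configured jdbcUrl
    list in order, and raising an error if every address fails.
    """
    for url in jdbc_urls:
        if can_connect(url):
            return url
    raise ConnectionError("all configured jdbcUrl addresses failed")

urls = [
    "jdbc:mysql://bad_ip:3306/database",
    "jdbc:mysql://127.0.0.1:bad_port/database",
    "jdbc:mysql://192.168.1.123:3306/test",
]
# Stand-in connectivity check: only the last address is "reachable".
chosen = pick_first_reachable(urls, lambda u: u.endswith("/test"))
```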

--username

Description: the username for the source database

Required: yes

Default: none

--password

Description: the password for the given source-database username

Required: yes

Default: none

--table

Description: the table(s) to synchronize, given as a JSON array, so multiple tables can be extracted at once. When multiple tables are configured, the user must ensure they share the same schema; MysqlReader does not check whether they are logically the same table. Note that table must be placed inside the connection block.

Required: yes

Default: none

--column

Description: the set of columns to synchronize from the configured table, given as a JSON array. Use * to select all columns, e.g. ["*"].

Column pruning is supported: you may export only a subset of the columns.

Column reordering is supported: columns need not be exported in schema order.

Constants are supported, written in MySQL SQL syntax, e.g. ["id", "`table`", "1", "'bazhen.csy'", "null", "to_char(a + 1)", "2.3", "true"], where id is an ordinary column name, `table` is a column name that is a reserved word, 1 is an integer constant, 'bazhen.csy' is a string constant, null is a null value, to_char(a + 1) is an expression, 2.3 is a floating-point number, and true is a boolean value.

Required: yes

Default: none
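The way column, table, and where get stitched into the query that is sent to MySQL can be sketched as follows (a simplification, and the function name is our own). Because column entries are passed through verbatim, reserved-word columns, constants, and expressions all work:

```python
def build_select(columns, table, where=None):
    """Concatenate column/table/where config into a SELECT statement.

    Column entries are passed through verbatim, which is why backquoted
    reserved words, constants, and expressions are all legal entries.
    """
    sql = "select %s from %s" % (",".join(columns), table)
    if where:
        sql += " where %s" % where
    return sql

q = build_select(["id", "`table`", "'bazhen.csy'"], "datax_test", "id < 3")
# q == "select id,`table`,'bazhen.csy' from datax_test where id < 3"
```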

--splitPk

Description: if splitPk is specified, MysqlReader shards the data on that field during extraction, and DataX launches concurrent tasks to synchronize it, which can greatly improve throughput.

It is recommended to use the table's primary key as splitPk, since primary keys are usually evenly distributed, so the resulting shards are less prone to data hot spots.

-- Currently splitPk supports sharding only on integer fields, not floating-point, string, date, or other types. If an unsupported type is specified, MysqlReader reports an error!

-- If splitPk is omitted (not provided, or provided with an empty value), DataX synchronizes the table over a single channel.

Required: no

Default: empty
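To see how an integer splitPk turns into shard predicates, here is an illustrative sketch (not DataX's exact splitting algorithm). It produces predicates of the same shape as those in the execution log, including the trailing IS NULL slice for rows whose split key is NULL:

```python
def split_ranges(pk, min_val, max_val, n):
    """Cut [min_val, max_val] into contiguous ranges on the split key.

    Illustrative only; the last range is closed on both ends, and an
    extra IS NULL predicate catches rows with a NULL split key.
    """
    step = max(1, (max_val - min_val + 1) // n)
    preds, lo = [], min_val
    while lo <= max_val:
        hi = min(lo + step, max_val)
        op = "<=" if hi == max_val else "<"
        preds.append("(%d <= %s AND %s %s %d)" % (lo, pk, pk, op, hi))
        if hi == max_val:
            break
        lo = hi
    preds.append("%s IS NULL" % pk)
    return preds

ranges = split_ranges("id", 1, 5, 4)
```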

--where

Description: the filter condition. MysqlReader builds the SQL from the configured column, table, and where settings and extracts data with that SQL. A common business case is synchronizing only the current day's data, with where set to gmt_create > $bizdate. Note: where may not be set to limit 10; limit is not a valid SQL where clause.

The where condition enables efficient incremental synchronization. If no where is given (the key or its value is missing), DataX synchronizes the full table.

Required: no

Default: none

--querySql

Description: in some business scenarios, where is not expressive enough, and this option lets you supply a custom filtering SQL statement. When it is configured, DataX ignores the table and column settings and filters the data with this SQL directly; for example, to synchronize the result of a multi-table join: select a,b from table_a join table_b on table_a.id = table_b.id

When querySql is configured, MysqlReader ignores the table, column, and where settings; querySql takes precedence over them.

Required: no

Default: none

MysqlReader type conversion

Notes:

--Field types other than those listed in the conversion table are not supported.

--tinyint(1) is treated by DataX as an integer type.

--year is treated by DataX as a string type.

--bit is undefined behavior in DataX.
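The special cases above can be captured as a small lookup. Treat this as a sketch: the internal type names Long and String are our reading of DataX's conversion table, and anything not covered there is simply unsupported:

```python
# Special-case handling called out in the notes above; anything not in
# the conversion table is unsupported by MysqlReader.
SPECIAL_CASES = {
    "tinyint(1)": "Long",       # treated as an integer type
    "year": "String",           # treated as a string type
    "bit": "undefined",         # undefined behavior in DataX
}

def reader_type_for(mysql_type):
    """Look up the special-case DataX type for a MySQL column type."""
    return SPECIAL_CASES.get(mysql_type.lower(), "see conversion table")
```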

Run:

FengZhendeMacBook-Pro:bin FengZhen$ ./datax.py /Users/FengZhen/Desktop/Hadoop/dataX/json/mysql/reader_all.json

DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.

2018-11-18 16:22:04.599 [main] INFO VMInfo - VMInfo# operatingSystem class => sun.management.OperatingSystemImpl
2018-11-18 16:22:04.612 [main] INFO Engine - the machine info =>
    osInfo: Oracle Corporation 1.8 25.162-b12
    jvmInfo: Mac OS X x86_64 10.13.4
    cpu num: 4
    totalPhysicalMemory: -0.00G
    freePhysicalMemory: -0.00G
    maxFileDescriptorCount: -1
    currentOpenFileDescriptorCount: -1
    GC Names [PS MarkSweep, PS Scavenge]

    MEMORY_NAME            | allocation_size | init_size
    PS Eden Space          | 256.00MB        | 256.00MB
    Code Cache             | 240.00MB        | 2.44MB
    Compressed Class Space | 1,024.00MB      | 0.00MB
    PS Survivor Space      | 42.50MB         | 42.50MB
    PS Old Gen             | 683.00MB        | 683.00MB
    Metaspace              | -0.00MB         | 0.00MB

2018-11-18 16:22:04.638 [main] INFO Engine - (the job configuration is echoed back here, with the password masked as ******)
2018-11-18 16:22:04.673 [main] WARN Engine - prioriy set to 0, because NumberFormatException, the value is: null
2018-11-18 16:22:04.678 [main] INFO PerfTrace - PerfTrace traceId=job_-1, isEnable=false, priority=0
2018-11-18 16:22:04.678 [main] INFO JobContainer - DataX jobContainer starts job.
2018-11-18 16:22:04.681 [main] INFO JobContainer - Set jobId = 0
2018-11-18 16:22:05.323 [job-0] INFO OriginalConfPretreatmentUtil - Available jdbcUrl:jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true.
2018-11-18 16:22:05.478 [job-0] INFO OriginalConfPretreatmentUtil - table:[datax_test] has columns:[id,name].
2018-11-18 16:22:05.490 [job-0] INFO JobContainer - jobContainer starts to do prepare ...
2018-11-18 16:22:05.491 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] do prepare work .
2018-11-18 16:22:05.492 [job-0] INFO JobContainer - DataX Writer.Job [streamwriter] do prepare work .
2018-11-18 16:22:05.493 [job-0] INFO JobContainer - jobContainer starts to do split ...
2018-11-18 16:22:05.493 [job-0] INFO JobContainer - Job set Channel-Number to 3 channels.
2018-11-18 16:22:05.618 [job-0] INFO SingleTableSplitUtil - split pk [sql=SELECT MIN(id),MAX(id) FROM datax_test] is running...
2018-11-18 16:22:05.665 [job-0] INFO SingleTableSplitUtil - After split(), allQuerySql=[
select id,name from datax_test where (1 <= id AND id < 2)
select id,name from datax_test where (2 <= id AND id < 3)
select id,name from datax_test where (3 <= id AND id < 4)
select id,name from datax_test where (4 <= id AND id <= 5)
select id,name from datax_test where id IS NULL
].
2018-11-18 16:22:05.666 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] splits to [5] tasks.
2018-11-18 16:22:05.667 [job-0] INFO JobContainer - DataX Writer.Job [streamwriter] splits to [5] tasks.
2018-11-18 16:22:05.697 [job-0] INFO JobContainer - jobContainer starts to do schedule ...
2018-11-18 16:22:05.721 [job-0] INFO JobContainer - Scheduler starts [1] taskGroups.
2018-11-18 16:22:05.744 [job-0] INFO JobContainer - Running by standalone Mode.
2018-11-18 16:22:05.758 [taskGroup-0] INFO TaskGroupContainer - taskGroupId=[0] start [3] channels for [5] tasks.
2018-11-18 16:22:05.765 [taskGroup-0] INFO Channel - Channel set byte_speed_limit to -1, No bps activated.
2018-11-18 16:22:05.766 [taskGroup-0] INFO Channel - Channel set record_speed_limit to -1, No tps activated.
2018-11-18 16:22:05.790 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] attemptCount[1] is started
2018-11-18 16:22:05.795 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[1] attemptCount[1] is started
2018-11-18 16:22:05.796 [0-0-0-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (1 <= id AND id < 2)] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:05.796 [0-0-1-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (2 <= id AND id < 3)] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:05.820 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[2] attemptCount[1] is started
2018-11-18 16:22:05.821 [0-0-2-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (3 <= id AND id < 4)] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:05.981 [0-0-0-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (1 <= id AND id < 2)] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
1	test1
2018-11-18 16:22:06.030 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] is successed, used[241]ms
2018-11-18 16:22:06.033 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[3] attemptCount[1] is started
2018-11-18 16:22:06.034 [0-0-3-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (4 <= id AND id <= 5)] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:06.041 [0-0-2-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (3 <= id AND id < 4)] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
3	test3
2018-11-18 16:22:06.137 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[2] is successed, used[326]ms
2018-11-18 16:22:06.139 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[4] attemptCount[1] is started
2018-11-18 16:22:06.139 [0-0-4-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where id IS NULL] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:06.157 [0-0-1-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (2 <= id AND id < 3)] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2	test2
2018-11-18 16:22:06.243 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[1] is successed, used[449]ms
2018-11-18 16:22:11.295 [0-0-3-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (4 <= id AND id <= 5)] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
4	test4
5	test5
2018-11-18 16:22:11.393 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[3] is successed, used[5360]ms
2018-11-18 16:22:15.784 [job-0] INFO StandAloneJobContainerCommunicator - Total 0 records, 0 bytes | Speed 0B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 0.00%
2018-11-18 16:22:25.166 [0-0-4-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where id IS NULL] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:25.413 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[4] is successed, used[19274]ms
2018-11-18 16:22:25.417 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] completed it's tasks.
2018-11-18 16:22:25.786 [job-0] INFO StandAloneJobContainerCommunicator - Total 5 records, 30 bytes | Speed 3B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2018-11-18 16:22:25.786 [job-0] INFO AbstractScheduler - Scheduler accomplished all tasks.
2018-11-18 16:22:25.787 [job-0] INFO JobContainer - DataX Writer.Job [streamwriter] do post work.
2018-11-18 16:22:25.788 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] do post work.
2018-11-18 16:22:25.788 [job-0] INFO JobContainer - DataX jobId [0] completed successfully.
2018-11-18 16:22:25.791 [job-0] INFO HookInvoker - No hook invoked, because base dir not exists or is a file: /Users/FengZhen/Desktop/Hadoop/dataX/datax/hook
2018-11-18 16:22:25.796 [job-0] INFO JobContainer -
[total cpu info] =>
    averageCpu | maxDeltaCpu | minDeltaCpu
    -1.00%     | -1.00%      | -1.00%
[total gc info] =>
    NAME         | totalGCCount | maxDeltaGCCount | minDeltaGCCount | totalGCTime | maxDeltaGCTime | minDeltaGCTime
    PS MarkSweep | 0            | 0               | 0               | 0.000s      | 0.000s         | 0.000s
    PS Scavenge  | 0            | 0               | 0               | 0.000s      | 0.000s         | 0.000s
2018-11-18 16:22:25.797 [job-0] INFO JobContainer - PerfTrace not enable!
2018-11-18 16:22:25.798 [job-0] INFO StandAloneJobContainerCommunicator - Total 5 records, 30 bytes | Speed 1B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2018-11-18 16:22:25.799 [job-0] INFO JobContainer -
Job start time      : 2018-11-18 16:22:04
Job end time        : 2018-11-18 16:22:25
Total elapsed time  : 21s
Average traffic     : 1B/s
Record write speed  : 0rec/s
Records read        : 5
Read/write failures : 0

The result output can be seen on the console.

2. Reading from MySQL with a filter condition

The job JSON is as follows:

{
    "job": {
        "setting": {
            "speed": {"channel": 1}
        },
        "content": [
            {
                "reader": {
                    "name": "mysqlreader",
                    "parameter": {
                        "username": "root",
                        "password": "123456",
                        "connection": [
                            {
                                "querySql": ["select * from datax_test where id < 3;"],
                                "jdbcUrl": [
                                    "jdbc:mysql://bad_ip:3306/database",
                                    "jdbc:mysql://127.0.0.1:bad_port/database",
                                    "jdbc:mysql://192.168.1.123:3306/test"
                                ]
                            }
                        ]
                    }
                },
                "writer": {
                    "name": "streamwriter",
                    "parameter": {"print": true, "encoding": "UTF-8"}
                }
            }
        ]
    }
}

Run:

FengZhendeMacBook-Pro:bin FengZhen$ ./datax.py /Users/FengZhen/Desktop/Hadoop/dataX/json/mysql/reader_select.json

DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.

2018-11-18 16:31:20.508 [main] INFO VMInfo - VMInfo# operatingSystem class => sun.management.OperatingSystemImpl
2018-11-18 16:31:20.521 [main] INFO Engine - the machine info => (same machine info and memory table as in the first run)
2018-11-18 16:31:20.557 [main] INFO Engine - (the job configuration is echoed back here, with the password masked as ******)
2018-11-18 16:31:20.609 [main] WARN Engine - prioriy set to 0, because NumberFormatException, the value is: null
2018-11-18 16:31:20.612 [main] INFO PerfTrace - PerfTrace traceId=job_-1, isEnable=false, priority=0
2018-11-18 16:31:20.613 [main] INFO JobContainer - DataX jobContainer starts job.
2018-11-18 16:31:20.618 [main] INFO JobContainer - Set jobId = 0
2018-11-18 16:31:21.140 [job-0] WARN DBUtil - test connection of [jdbc:mysql://bad_ip:3306/database] failed, for Code:[MYSQLErrCode-02], Description:[the database service's IP address or port is wrong; check the configured IP address and port, or contact your DBA to confirm that they match the database's actual information]. - Detailed error: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server..
2018-11-18 16:31:21.143 [job-0] WARN DBUtil - test connection of [jdbc:mysql://127.0.0.1:bad_port/database] failed, for Code:[DBUtilErrorCode-10], Description:[failed to connect to the database; check your username, password, database name, IP and port, or ask your DBA for help (mind the network environment)]. - Detailed error: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Cannot load connection class because of underlying exception: 'java.lang.NumberFormatException: For input string: "bad_port"'..
2018-11-18 16:31:21.498 [job-0] INFO OriginalConfPretreatmentUtil - Available jdbcUrl:jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true.
2018-11-18 16:31:21.512 [job-0] INFO JobContainer - jobContainer starts to do prepare ...
2018-11-18 16:31:21.518 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] do prepare work .
2018-11-18 16:31:21.520 [job-0] INFO JobContainer - DataX Writer.Job [streamwriter] do prepare work .
2018-11-18 16:31:21.521 [job-0] INFO JobContainer - jobContainer starts to do split ...
2018-11-18 16:31:21.524 [job-0] INFO JobContainer - Job set Channel-Number to 1 channels.
2018-11-18 16:31:21.546 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] splits to [1] tasks.
2018-11-18 16:31:21.548 [job-0] INFO JobContainer - DataX Writer.Job [streamwriter] splits to [1] tasks.
2018-11-18 16:31:21.587 [job-0] INFO JobContainer - jobContainer starts to do schedule ...
2018-11-18 16:31:21.592 [job-0] INFO JobContainer - Scheduler starts [1] taskGroups.
2018-11-18 16:31:21.597 [job-0] INFO JobContainer - Running by standalone Mode.
2018-11-18 16:31:21.629 [taskGroup-0] INFO TaskGroupContainer - taskGroupId=[0] start [1] channels for [1] tasks.
2018-11-18 16:31:21.639 [taskGroup-0] INFO Channel - Channel set byte_speed_limit to -1, No bps activated.
2018-11-18 16:31:21.639 [taskGroup-0] INFO Channel - Channel set record_speed_limit to -1, No tps activated.
2018-11-18 16:31:21.658 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] attemptCount[1] is started
2018-11-18 16:31:21.667 [0-0-0-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select * from datax_test where id < 3;] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:31:21.814 [0-0-0-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select * from datax_test where id < 3;] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
1	test1
2	test2
2018-11-18 16:31:21.865 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] is successed, used[211]ms
2018-11-18 16:31:21.866 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] completed it's tasks.
2018-11-18 16:31:31.685 [job-0] INFO StandAloneJobContainerCommunicator - Total 2 records, 12 bytes | Speed 1B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2018-11-18 16:31:31.685 [job-0] INFO AbstractScheduler - Scheduler accomplished all tasks.
2018-11-18 16:31:31.686 [job-0] INFO JobContainer - DataX Writer.Job [streamwriter] do post work.
2018-11-18 16:31:31.687 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] do post work.
2018-11-18 16:31:31.687 [job-0] INFO JobContainer - DataX jobId [0] completed successfully.
2018-11-18 16:31:31.688 [job-0] INFO HookInvoker - No hook invoked, because base dir not exists or is a file: /Users/FengZhen/Desktop/Hadoop/dataX/datax/hook
2018-11-18 16:31:31.693 [job-0] INFO JobContainer - (total cpu/gc statistics: all zeros, as in the first run)
2018-11-18 16:31:31.693 [job-0] INFO JobContainer - PerfTrace not enable!
2018-11-18 16:31:31.694 [job-0] INFO StandAloneJobContainerCommunicator - Total 2 records, 12 bytes | Speed 1B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2018-11-18 16:31:31.695 [job-0] INFO JobContainer -
Job start time      : 2018-11-18 16:31:20
Job end time        : 2018-11-18 16:31:31
Total elapsed time  : 11s
Average traffic     : 1B/s
Record write speed  : 0rec/s
Records read        : 2
Read/write failures : 0

3. Reading from MySQL and writing to MySQL

MysqlWriter overview

The MysqlWriter plugin writes data into the target tables of a MySQL primary instance. Under the hood, MysqlWriter connects to the remote MySQL database via JDBC and executes insert into ... (or replace into ...) statements to write the data, committing in batches; the database itself must use the InnoDB engine.

How it works

MysqlWriter takes the protocol records produced by the Reader through the DataX framework and, depending on the configured writeMode, generates either

insert into ... (rows that hit a primary-key or unique-index conflict are not written)

or

replace into ... (behaves like insert into when there is no primary-key or unique-index conflict; on conflict, the new row replaces every field of the existing row) statements to write the data into MySQL. For performance, it uses PreparedStatement + Batch with rewriteBatchedStatements=true, buffering records in a per-thread buffer and issuing a write request only when the buffer reaches a preset threshold.
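The buffer-and-flush behavior described above can be sketched in a few lines. This is an illustrative Python sketch, not DataX's Java implementation; flush_fn stands in for the PreparedStatement executeBatch() call:

```python
class BatchBuffer:
    """Buffer rows and flush them in batches, the way MysqlWriter
    accumulates records before issuing one multi-row write."""

    def __init__(self, flush_fn, batch_size=2048):
        self.flush_fn = flush_fn        # stand-in for executeBatch()
        self.batch_size = batch_size    # flush threshold
        self.rows = []

    def add(self, row):
        self.rows.append(row)
        if len(self.rows) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.rows:
            self.flush_fn(self.rows)
            self.rows = []

batches = []
buf = BatchBuffer(batches.append, batch_size=2)
for row in [(1, "test1"), (2, "test2"), (3, "test3")]:
    buf.add(row)
buf.flush()  # flush the tail, as DataX does at end of task
```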

The job JSON is as follows:

{
    "job": {
        "setting": {
            "speed": {"channel": 3},
            "errorLimit": {"record": 0, "percentage": 0.02}
        },
        "content": [
            {
                "reader": {
                    "name": "mysqlreader",
                    "parameter": {
                        "username": "root",
                        "password": "123456",
                        "column": ["id", "name"],
                        "splitPk": "id",
                        "connection": [
                            {
                                "table": ["datax_test"],
                                "jdbcUrl": ["jdbc:mysql://192.168.1.123:3306/test"]
                            }
                        ]
                    }
                },
                "writer": {
                    "name": "mysqlwriter",
                    "parameter": {
                        "writeMode": "insert",
                        "username": "root",
                        "password": "123456",
                        "column": ["id", "name"],
                        "session": ["set session sql_mode='ANSI'"],
                        "preSql": ["delete from datax_target_test"],
                        "connection": [
                            {
                                "jdbcUrl": "jdbc:mysql://192.168.1.123:3306/test?useUnicode=true&characterEncoding=gbk",
                                "table": ["datax_target_test"]
                            }
                        ]
                    }
                }
            }
        ]
    }
}

Run:

FengZhendeMacBook-Pro:bin FengZhen$ ./datax.py /Users/FengZhen/Desktop/Hadoop/dataX/json/mysql/3.mysql2mysql.json

DataX (DATAX-OPENSOURCE-3.0), From Alibaba !Copyright (C)2010-2017, Alibaba Group. All Rights Reserved.2018-11-18 16:49:13.176 [main] INFO VMInfo - VMInfo# operatingSystem class =>sun.management.OperatingSystemImpl2018-11-18 16:49:13.189 [main] INFO Engine - the machine info =>osInfo: Oracle Corporation1.8 25.162-b12

jvmInfo: Mac OS X x86_6410.13.4cpu num:4totalPhysicalMemory:-0.00G

freePhysicalMemory:-0.00G

maxFileDescriptorCount:-1currentOpenFileDescriptorCount:-1GC Names [PS MarkSweep, PS Scavenge]

MEMORY_NAME| allocation_size |init_size

PS Eden Space| 256.00MB | 256.00MB

Code Cache| 240.00MB | 2.44MB

Compressed Class Space| 1,024.00MB | 0.00MB

PS Survivor Space| 42.50MB | 42.50MB

PS Old Gen| 683.00MB | 683.00MB

Metaspace| -0.00MB | 0.00MB2018-11-18 16:49:13.218 [main] INFO Engine -{"content":[

{"reader":{"name":"mysqlreader","parameter":{"column":["id","name"],"connection":[

{"jdbcUrl":["jdbc:mysql://192.168.1.123:3306/test"],"table":["datax_test"]

}

],"password":"******","splitPk":"id","username":"root"}

},"writer":{"name":"mysqlwriter","parameter":{"column":["id","name"],"connection":[

{"jdbcUrl":"jdbc:mysql://192.168.1.123:3306/test?useUnicode=true&characterEncoding=gbk","table":["datax_target_test"]

}

],"password":"******","preSql":["delete from datax_target_test"],"session":["set session sql_mode='ANSI'"],"username":"root","writeMode":"insert"}

}

}

],"setting":{"errorLimit":{"percentage":0.02,"record":0},"speed":{"channel":3}

}

}2018-11-18 16:49:13.268 [main] WARN Engine - prioriy set to 0, because NumberFormatException, the value is: null

2018-11-18 16:49:13.271 [main] INFO PerfTrace - PerfTrace traceId=job_-1, isEnable=false, priority=0

2018-11-18 16:49:13.272 [main] INFO JobContainer -DataX jobContainer starts job.2018-11-18 16:49:13.280 [main] INFO JobContainer - Set jobId = 0

2018-11-18 16:49:13.991 [job-0] INFO OriginalConfPretreatmentUtil - Available jdbcUrl:jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true.

2018-11-18 16:49:14.147 [job-0] INFO OriginalConfPretreatmentUtil -table:[datax_test] has columns:[id,name].2018-11-18 16:49:14.567 [job-0] INFO OriginalConfPretreatmentUtil -table:[datax_target_test] all columns:[

id,name

].2018-11-18 16:49:14.697 [job-0] INFO OriginalConfPretreatmentUtil -Write data [

insert INTO%s (id,name) VALUES(?,?)

], which jdbcUrl like:[jdbc:mysql://192.168.1.123:3306/test?useUnicode=true&characterEncoding=gbk&yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true]

2018-11-18 16:49:14.698 [job-0] INFO JobContainer - jobContainer starts to doprepare ...2018-11-18 16:49:14.698 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] doprepare work .2018-11-18 16:49:14.699 [job-0] INFO JobContainer - DataX Writer.Job [mysqlwriter] doprepare work .2018-11-18 16:49:14.765 [job-0] INFO CommonRdbmsWriter$Job - Begin to execute preSqls:[delete from datax_target_test]. context info:jdbc:mysql://192.168.1.123:3306/test?useUnicode=true&characterEncoding=gbk&yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true.

2018-11-18 16:49:14.770 [job-0] INFO JobContainer - jobContainer starts to dosplit ...2018-11-18 16:49:14.771 [job-0] INFO JobContainer - Job set Channel-Number to 3channels.2018-11-18 16:49:14.879 [job-0] INFO SingleTableSplitUtil - split pk [sql=SELECT MIN(id),MAX(id) FROM datax_test] isrunning...2018-11-18 16:49:14.926 [job-0] INFO SingleTableSplitUtil - After split(), allQuerySql=[select id,name from datax_test where (1 <= id AND id < 2)select id,name from datax_test where (2 <= id AND id < 3)select id,name from datax_test where (3 <= id AND id < 4)select id,name from datax_test where (4 <= id AND id <= 5)select id,name from datax_test whereid IS NULL

].2018-11-18 16:49:14.926 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] splits to [5] tasks.2018-11-18 16:49:14.928 [job-0] INFO JobContainer - DataX Writer.Job [mysqlwriter] splits to [5] tasks.2018-11-18 16:49:14.974 [job-0] INFO JobContainer - jobContainer starts to doschedule ...2018-11-18 16:49:14.991 [job-0] INFO JobContainer - Scheduler starts [1] taskGroups.2018-11-18 16:49:14.995 [job-0] INFO JobContainer -Running by standalone Mode.2018-11-18 16:49:15.011 [taskGroup-0] INFO TaskGroupContainer - taskGroupId=[0] start [3] channels for [5] tasks.2018-11-18 16:49:15.022 [taskGroup-0] INFO Channel - Channel set byte_speed_limit to -1, No bps activated.2018-11-18 16:49:15.022 [taskGroup-0] INFO Channel - Channel set record_speed_limit to -1, No tps activated.2018-11-18 16:49:15.041 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] attemptCount[1] isstarted2018-11-18 16:49:15.052 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[1] attemptCount[1] isstarted2018-11-18 16:49:15.052 [0-0-0-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (1 <= id AND id < 2)

] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].

2018-11-18 16:49:15.052 [0-0-1-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (2 <= id AND id < 3)

] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].

2018-11-18 16:49:15.057 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[2] attemptCount[1] isstarted2018-11-18 16:49:15.057 [0-0-2-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (3 <= id AND id < 4)

] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].

2018-11-18 16:49:15.175 [0-0-0-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']2018-11-18 16:49:15.215 [0-0-0-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (1 <= id AND id < 2)

] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].

2018-11-18 16:49:15.233 [0-0-2-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']2018-11-18 16:49:19.387 [0-0-1-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']2018-11-18 16:49:19.457 [0-0-2-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (3 <= id AND id < 4)

] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].

2018-11-18 16:49:19.575 [0-0-2-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']2018-11-18 16:49:19.612 [0-0-1-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (2 <= id AND id < 3)

] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].

2018-11-18 16:49:19.687 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[2] is successed, used[4632]ms2018-11-18 16:49:19.693 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[3] attemptCount[1] isstarted2018-11-18 16:49:19.693 [0-0-0-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']2018-11-18 16:49:19.696 [0-0-3-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (4 <= id AND id <= 5)

] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].

2018-11-18 16:49:19.796 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] is successed, used[4761]ms2018-11-18 16:49:19.799 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[4] attemptCount[1] isstarted2018-11-18 16:49:19.799 [0-0-4-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test whereid IS NULL

] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].

2018-11-18 16:49:19.873 [0-0-3-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']2018-11-18 16:49:19.882 [0-0-3-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (4 <= id AND id <= 5)

] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].

2018-11-18 16:49:19.989 [0-0-1-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']2018-11-18 16:49:20.000 [0-0-4-reader] INFO CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test whereid IS NULL

] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].

2018-11-18 16:49:20.074 [0-0-3-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']2018-11-18 16:49:20.107 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[1] is successed, used[5055]ms2018-11-18 16:49:20.142 [0-0-4-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']2018-11-18 16:49:20.212 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[3] is successed, used[522]ms2018-11-18 16:49:25.061 [job-0] INFO StandAloneJobContainerCommunicator - Total 0 records, 0 bytes | Speed 0B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 0.00%

2018-11-18 16:49:25.578 [0-0-4-writer] INFO DBUtil - execute sql:[set session sql_mode='ANSI']2018-11-18 16:49:25.671 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[4] is successed, used[5872]ms2018-11-18 16:49:25.671 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] completed it's tasks.

2018-11-18 16:49:35.064 [job-0] INFO StandAloneJobContainerCommunicator - Total 5 records, 30 bytes | Speed 3B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%

2018-11-18 16:49:35.064 [job-0] INFO AbstractScheduler - Scheduler accomplished all tasks.
2018-11-18 16:49:35.065 [job-0] INFO JobContainer - DataX Writer.Job [mysqlwriter] do post work.
2018-11-18 16:49:35.066 [job-0] INFO JobContainer - DataX Reader.Job [mysqlreader] do post work.
2018-11-18 16:49:35.067 [job-0] INFO JobContainer - DataX jobId [0] completed successfully.
2018-11-18 16:49:35.068 [job-0] INFO HookInvoker - No hook invoked, because base dir not exists or is a file: /Users/FengZhen/Desktop/Hadoop/dataX/datax/hook
2018-11-18 16:49:35.072 [job-0] INFO JobContainer -
[total cpu info] =>
    averageCpu | maxDeltaCpu | minDeltaCpu
    -1.00%     | -1.00%      | -1.00%

[total gc info] =>
    NAME         | totalGCCount | maxDeltaGCCount | minDeltaGCCount | totalGCTime | maxDeltaGCTime | minDeltaGCTime
    PS MarkSweep | 0            | 0               | 0               | 0.000s      | 0.000s         | 0.000s
    PS Scavenge  | 0            | 0               | 0               | 0.000s      | 0.000s         | 0.000s

2018-11-18 16:49:35.072 [job-0] INFO JobContainer - PerfTrace not enable!

2018-11-18 16:49:35.073 [job-0] INFO StandAloneJobContainerCommunicator - Total 5 records, 30 bytes | Speed 1B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%

2018-11-18 16:49:35.074 [job-0] INFO JobContainer -
Job start time            : 2018-11-18 16:49:13
Job end time              : 2018-11-18 16:49:35
Total elapsed time        : 21s
Average throughput        : 1B/s
Record write speed        : 0rec/s
Total records read        : 5
Total read/write failures : 0

Parameter descriptions:

--jdbcUrl

Description: JDBC connection information for the target database. At runtime, DataX appends the following properties to the jdbcUrl you provide: yearIsDateType=false&zeroDateTimeBehavior=convertToNull&rewriteBatchedStatements=true

Notes: 1. Only one jdbcUrl value may be configured per database. This differs from MysqlReader, which supports probing multiple standby instances; the writer does not support multiple masters for the same database (dual-master import scenarios).

2. The jdbcUrl follows the official MySQL specification and may carry additional connection-control properties. For example, to use gbk as the connection encoding, append useUnicode=true&characterEncoding=gbk to the jdbcUrl. See the official MySQL documentation or consult your DBA for details.

Required: yes

Default: none
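As a sketch of how this fits into a writer job (host, database, and table name below are placeholders, not taken from the original job), the connection element might look like this — note that for the writer, unlike the reader, jdbcUrl is a single string rather than an array:

```json
"connection": [{
    "jdbcUrl": "jdbc:mysql://192.168.1.123:3306/test?useUnicode=true&characterEncoding=gbk",
    "table": ["datax_test"]
}]
```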

--username

Description: username for the target database

Required: yes

Default: none

--password

Description: password for the target database user

Required: yes

Default: none

--table

Description: name(s) of the target table(s). Writing to one or more tables is supported. When multiple tables are configured, you must ensure they all share the same schema.

Note: table and jdbcUrl must both be placed inside the connection configuration element.

Required: yes

Default: none

--column

Description: the fields of the target table to write, separated by commas, e.g. "column": ["id","name","age"]. To write all columns in order, use *, e.g. "column": ["*"].

**The column setting is required and must not be left empty!**

Notes: 1. We strongly discourage the "*" configuration, because the job may run incorrectly or fail when the target table's columns or types change.

2. column must not contain any constant values.

Required: yes

Default: none

--session

Description: SQL statements that DataX executes when it obtains a MySQL connection, modifying the session properties of that connection

Required: no

Default: none
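Judging from the `set session sql_mode='ANSI'` statements in the log output above, this job used a session configuration along these lines (a sketch, reconstructed from the log rather than copied from the original job file):

```json
"session": ["set session sql_mode='ANSI'"]
```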

--preSql

Description: standard SQL statements executed before data is written to the target table. If a statement needs to reference the target table name, write it as @table; when the statement is actually executed, the variable is replaced with the real table name. For example, if your job writes to 100 identically structured shard tables (named datax_00, datax_01, ... datax_98, datax_99) and you want to delete existing rows before importing, configure "preSql": ["delete from @table"]; before writing to each table, the corresponding delete from <actual table name> is executed.

Required: no

Default: none
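A minimal sketch of the shard-table scenario described above (table names and URL are illustrative placeholders):

```json
"preSql": ["delete from @table"],
"connection": [{
    "jdbcUrl": "jdbc:mysql://192.168.1.123:3306/test",
    "table": ["datax_00", "datax_01"]
}]
```

With this configuration, @table is substituted per table, so delete from datax_00 runs before writing to datax_00, delete from datax_01 before datax_01, and so on.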

--postSql

Description: standard SQL statements executed after data has been written to the target table (same mechanism as preSql)

Required: no

Default: none

--writeMode

Description: controls whether data is written to the target table with insert into, replace into, or insert into ... on duplicate key update statements

Required: yes

Options: insert/replace/update

Default: insert
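To make the three modes concrete, the generated statements take roughly the following shapes (a sketch; the column list here assumes the id/name columns from the sample job, and the exact SQL DataX emits may differ in detail):

```sql
-- writeMode = insert: plain insert; fails or records an error on key conflict
INSERT INTO datax_test (id, name) VALUES (?, ?);

-- writeMode = replace: on primary/unique key conflict, deletes the old row and inserts the new one
REPLACE INTO datax_test (id, name) VALUES (?, ?);

-- writeMode = update: on key conflict, updates the conflicting row in place
INSERT INTO datax_test (id, name) VALUES (?, ?)
    ON DUPLICATE KEY UPDATE id = VALUES(id), name = VALUES(name);
```

Note that replace into deletes and re-inserts the whole row, so columns not listed in the job fall back to their defaults, whereas update only touches the listed columns.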

--batchSize

Description: number of records submitted in one batch. A larger value greatly reduces the number of network round trips between DataX and MySQL and improves overall throughput, but setting it too high may cause the DataX process to run out of memory (OOM).

Required: no

Default: 1024

Type conversion

Like MysqlReader, MysqlWriter supports most MySQL types, but a few individual types are not supported, so check the types of your columns. In particular, conversion for the bit type is currently undefined.
