TiDB Binlog Deployment

Environment Information

The control machine is 10.152.x.133.

[tidb_servers]
10.152.x.10
10.152.x.11
10.152.x.12

[tikv_servers]
10.152.x.10
10.152.x.11
10.152.x.12

[pd_servers]
10.152.x.10
10.152.x.11
10.152.x.12

## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]
10.152.x.133

[grafana_servers]
10.152.x.133

# node_exporter and blackbox_exporter servers
[monitored_servers]
10.152.x.10
10.152.x.11
10.152.x.12
10.152.x.133

[alertmanager_servers]
10.152.x.133
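The grouped layout above can be sanity-checked mechanically. Here is a small sketch (not part of the TiDB tooling) using Python's configparser, whose `allow_no_value=True` option accepts the bare-hostname lines an Ansible inventory uses:

```python
import configparser

# A reduced copy of the inventory above (only the tidb_servers group).
INVENTORY = """\
[tidb_servers]
10.152.x.10
10.152.x.11
10.152.x.12
"""

# allow_no_value=True accepts bare hostname lines instead of key=value pairs.
parser = configparser.ConfigParser(allow_no_value=True)
parser.read_string(INVENTORY)

hosts = list(parser["tidb_servers"])
print(hosts)  # ['10.152.x.10', '10.152.x.11', '10.152.x.12']
```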

First, generate some test data.

yum install sysbench -y
mysql -uroot -p -h10.152.x.10 -P4000 -e "create database sysbench"
sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-user=root --mysql-password=supersecret --mysql-port=4000 \
--mysql-host=10.152.x.10 \
--mysql-db=sysbench --tables=10 --table-size=5000 --threads=4 \
--events=5000 --report-interval=5 --db-driver=mysql prepare

Deploying Pump

1. Edit the tidb-ansible/inventory.ini file

Set enable_binlog = True to enable binlog for the TiDB cluster:

## binlog trigger
enable_binlog = True

2. Add the IPs of the deployment machines to the pump_servers host group

[pump_servers]
pump1 ansible_host=10.152.x.10
pump2 ansible_host=10.152.x.11
pump3 ansible_host=10.152.x.12

To place Pump in a dedicated directory, use deploy_dir:

pump1 ansible_host=10.152.x.10 deploy_dir=/data1/pump
pump2 ansible_host=10.152.x.11 deploy_dir=/data2/pump
pump3 ansible_host=10.152.x.12 deploy_dir=/data3/pump

By default, Pump retains 7 days of data. To change this, edit the gc variable in tidb-ansible/conf/pump.yml (tidb-ansible/conf/pump-cluster.yml in TiDB 3.0.2 and earlier) and uncomment it.

global:
  # an integer value to control the expiry date of the binlog data, which indicates for how long (in days) the binlog data would be stored
  # must be bigger than 0
  # gc: 7

3. Deploy pump_servers and node_exporters

ansible-playbook deploy.yml --tags=pump -l pump1,pump2,pump3

If no aliases were assigned to the Pump hosts, use the IPs instead:

ansible-playbook deploy.yml --tags=pump -l 10.152.x.10,10.152.x.11,10.152.x.12

In the commands above, do not put a space after the commas; otherwise the command fails.
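To see why: the shell splits arguments on whitespace before ansible-playbook ever sees them, so `-l pump1, pump2` delivers `pump2` as a separate, unexpected argument rather than as part of the limit list. A quick illustration using Python's shlex, which mimics POSIX shell word splitting:

```python
import shlex

# shlex.split mimics how a POSIX shell tokenizes a command line.
wrong = shlex.split("ansible-playbook start.yml --tags=pump -l pump1, pump2")
print(wrong)
# ['ansible-playbook', 'start.yml', '--tags=pump', '-l', 'pump1,', 'pump2']
# '-l' only receives 'pump1,'; 'pump2' becomes a stray positional argument.

right = shlex.split("ansible-playbook start.yml --tags=pump -l pump1,pump2")
print(right)
# ['ansible-playbook', 'start.yml', '--tags=pump', '-l', 'pump1,pump2']
```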

4. Start pump_servers

ansible-playbook start.yml --tags=pump

Check that the process is running:

# ps -ef| grep pump
tidb     26199     1  0 21:05 ?        00:00:00 bin/pump --addr=0.0.0.0:8250 --advertise-addr=pump1:8250 --pd-urls=http://10.152.x.10:2379,http://10.152.x.11:2379,http://10.152.x.12:2379 --data-dir=/DATA1/home/tidb/deploy/data.pump --log-file=/DATA1/home/tidb/deploy/log/pump.log --config=conf/pump.toml

5. Update and roll-restart tidb_servers

This is required for enable_binlog = True to take effect.

ansible-playbook rolling_update.yml --tags=tidb

6. Update the monitoring configuration

ansible-playbook rolling_update_monitor.yml --tags=prometheus

7. Check the Pump service status

Use binlogctl to check the Pump service status. Replace the pd-urls parameter with your cluster's PD addresses. A State of online means Pump started successfully.

$resources/bin/binlogctl -pd-urls=http://10.152.x.10:2379 -cmd pumps
[2019/12/01 21:11:47.655 +08:00] [INFO] [nodes.go:49] ["query node"] [type=pump] [node="{NodeID: knode10-152-x-12:8250, Addr: pump3:8250, State: online, MaxCommitTS: 412930776489787394, UpdateTime: 2019-12-01 21:11:45 +0800 CST}"]
[2019/12/01 21:11:47.655 +08:00] [INFO] [nodes.go:49] ["query node"] [type=pump] [node="{NodeID: knode10-152-x-10:8250, Addr: pump1:8250, State: online, MaxCommitTS: 412930776463572993, UpdateTime: 2019-12-01 21:11:45 +0800 CST}"]
[2019/12/01 21:11:47.655 +08:00] [INFO] [nodes.go:49] ["query node"] [type=pump] [node="{NodeID: knode10-152-x-11:8250, Addr: pump2:8250, State: online, MaxCommitTS: 412930776489787393, UpdateTime: 2019-12-01 21:11:45 +0800 CST}"]
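In a script, this check can be automated by scanning the binlogctl output. A hypothetical helper (not part of binlogctl) that extracts each node's State from log lines like the ones above:

```python
import re

# Two sample lines as printed by binlogctl above.
SAMPLE = '''
[2019/12/01 21:11:47.655 +08:00] [INFO] [nodes.go:49] ["query node"] [type=pump] [node="{NodeID: knode10-152-x-12:8250, Addr: pump3:8250, State: online, MaxCommitTS: 412930776489787394, UpdateTime: 2019-12-01 21:11:45 +0800 CST}"]
[2019/12/01 21:11:47.655 +08:00] [INFO] [nodes.go:49] ["query node"] [type=pump] [node="{NodeID: knode10-152-x-10:8250, Addr: pump1:8250, State: online, MaxCommitTS: 412930776463572993, UpdateTime: 2019-12-01 21:11:45 +0800 CST}"]
'''

# Pull out (NodeID, State) pairs from each "query node" line.
nodes = re.findall(r'NodeID: (\S+?), Addr: \S+, State: (\w+)', SAMPLE)
print(nodes)

# All pumps must report "online" before continuing with the deployment.
assert all(state == 'online' for _, state in nodes)
```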

Alternatively, log in to TiDB and run show pump status:

root@10.152.x.10 21:10:32 [(none)]> show pump status;
+-----------------------+------------+--------+--------------------+---------------------+
| NodeID                | Address    | State  | Max_Commit_Ts      | Update_Time         |
+-----------------------+------------+--------+--------------------+---------------------+
| knode10-152-x-10:8250 | pump1:8250 | online | 412930792991752193 | 2019-12-01 21:12:48 |
| knode10-152-x-11:8250 | pump2:8250 | online | 412930793017966593 | 2019-12-01 21:12:48 |
| knode10-152-x-12:8250 | pump3:8250 | online | 412930793017966594 | 2019-12-01 21:12:48 |
+-----------------------+------------+--------+--------------------+---------------------+
3 rows in set (0.01 sec)

Deploying Drainer

Deploy Drainer so that MySQL serves as a downstream replica of TiDB.

1. Download the tidb-enterprise-tools package

wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.tar.gz
tar -zxvf tidb-enterprise-tools-latest-linux-amd64.tar.gz

2. Back up the TiDB data with Mydumper

$sudo ./tidb-enterprise-tools-latest-linux-amd64/bin/mydumper --ask-password -h 10.152.x.10 -P 4000 -u root --threads=4 --chunk-filesize=64 --skip-tz-utc --regex '^(?!(mysql\.|information_schema\.|performance_schema\.))'  -o /mfw_rundata/dump/ --verbose=3
Enter MySQL Password:
** Message: 21:30:17.767: Server version reported as: 5.7.25-TiDB-v3.0.5
** Message: 21:30:17.767: Connected to a TiDB server
** Message: 21:30:17.771: Skipping locks because of TiDB
** Message: 21:30:17.772: Set to tidb_snapshot '412931068452667405'
** Message: 21:30:17.782: Started dump at: 2019-12-01 21:30:17
** Message: 21:30:17.782: Written master status
** Message: 21:30:17.784: Thread 1 connected using MySQL connection ID 20
** Message: 21:30:17.794: Thread 1 set to tidb_snapshot '412931068452667405'
** Message: 21:30:17.796: Thread 2 connected using MySQL connection ID 21
** Message: 21:30:17.807: Thread 2 set to tidb_snapshot '412931068452667405'
** Message: 21:30:17.809: Thread 3 connected using MySQL connection ID 22
** Message: 21:30:17.819: Thread 3 set to tidb_snapshot '412931068452667405'
** Message: 21:30:17.820: Thread 4 connected using MySQL connection ID 23
** Message: 21:30:17.832: Thread 4 set to tidb_snapshot '412931068452667405'
** Message: 21:30:17.843: Thread 2 dumping data for `sysbench`.`sbtest1`
** Message: 21:30:17.844: Non-InnoDB dump complete, unlocking tables
** Message: 21:30:17.843: Thread 3 dumping data for `sysbench`.`sbtest2`
** Message: 21:30:17.843: Thread 1 dumping data for `sysbench`.`sbtest10`
** Message: 21:30:17.843: Thread 4 dumping data for `sysbench`.`sbtest3`
** Message: 21:30:17.882: Thread 4 dumping data for `sysbench`.`sbtest4`
** Message: 21:30:17.883: Thread 2 dumping data for `sysbench`.`sbtest5`
** Message: 21:30:17.887: Thread 3 dumping data for `sysbench`.`sbtest6`
** Message: 21:30:17.890: Thread 1 dumping data for `sysbench`.`sbtest7`
** Message: 21:30:17.911: Thread 4 dumping data for `sysbench`.`sbtest8`
** Message: 21:30:17.925: Thread 1 dumping data for `sysbench`.`sbtest9`
** Message: 21:30:17.938: Thread 4 dumping schema for `sysbench`.`sbtest1`
** Message: 21:30:17.939: Thread 4 dumping schema for `sysbench`.`sbtest10`
** Message: 21:30:17.941: Thread 4 dumping schema for `sysbench`.`sbtest2`
** Message: 21:30:17.942: Thread 4 dumping schema for `sysbench`.`sbtest3`
** Message: 21:30:17.943: Thread 4 dumping schema for `sysbench`.`sbtest4`
** Message: 21:30:17.944: Thread 4 dumping schema for `sysbench`.`sbtest5`
** Message: 21:30:17.945: Thread 4 dumping schema for `sysbench`.`sbtest6`
** Message: 21:30:17.946: Thread 4 dumping schema for `sysbench`.`sbtest7`
** Message: 21:30:17.947: Thread 4 dumping schema for `sysbench`.`sbtest8`
** Message: 21:30:17.948: Thread 4 dumping schema for `sysbench`.`sbtest9`
** Message: 21:30:17.949: Thread 4 shutting down
** Message: 21:30:18.079: Thread 2 shutting down
** Message: 21:30:18.084: Thread 3 shutting down
** Message: 21:30:18.087: Thread 1 shutting down
** Message: 21:30:18.087: Finished dump at: 2019-12-01 21:30:18

Get the TSO value:

$sudo cat /mfw_rundata/dump/metadata
Started dump at: 2019-12-01 21:30:17
SHOW MASTER STATUS:
	Log: tidb-binlog
	Pos: 412931068452667405
	GTID:
Finished dump at: 2019-12-01 21:30:18

The TSO is 412931068452667405.
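As a sanity check, a TiDB TSO embeds a physical timestamp: the high bits are milliseconds since the Unix epoch, and the low 18 bits are a logical counter. A small sketch decoding the TSO above back to wall-clock time:

```python
from datetime import datetime, timedelta, timezone

def tso_to_datetime(tso: int) -> datetime:
    # High bits: physical time in milliseconds; low 18 bits: logical counter.
    physical_ms = tso >> 18
    return datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(milliseconds=physical_ms)

# The TSO recorded in the dump metadata above.
dt = tso_to_datetime(412931068452667405)
print(dt)  # 2019-12-01 13:30:17.718000+00:00, i.e. 21:30:17 +08:00 -- the dump time
```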

Create a dedicated replication account for Drainer on the downstream MySQL:

CREATE USER IF NOT EXISTS 'drainer'@'%';
ALTER USER 'drainer'@'%' IDENTIFIED BY 'drainer_supersecret';
GRANT INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, EXECUTE, INDEX, SELECT ON *.* TO 'drainer'@'%';

Import the data into MySQL:

$sudo ./tidb-enterprise-tools-latest-linux-amd64/bin/loader -d /mfw_rundata/dump/ -h 10.132.2.143 -u drainer -p drainer_supersecret -P 3308 -t 2 -m '' -status-addr ':8723'
2019/12/01 22:12:55 printer.go:52: [info] Welcome to loader
2019/12/01 22:12:55 printer.go:53: [info] Release Version: v1.0.0-76-gad009d9
2019/12/01 22:12:55 printer.go:54: [info] Git Commit Hash: ad009d917b2cdc2a9cc26bc4e7046884c1ff43e7
2019/12/01 22:12:55 printer.go:55: [info] Git Branch: master
2019/12/01 22:12:55 printer.go:56: [info] UTC Build Time: 2019-10-21 06:22:03
2019/12/01 22:12:55 printer.go:57: [info] Go Version: go version go1.12 linux/amd64
2019/12/01 22:12:55 main.go:51: [info] config: {"log-level":"info","log-file":"","status-addr":":8723","pool-size":2,"dir":"/mfw_rundata/dump/","db":{"host":"10.132.2.143","user":"drainer","port":3308,"sql-mode":"","max-allowed-packet":67108864},"checkpoint-schema":"tidb_loader","config-file":"","route-rules":null,"do-table":null,"do-db":null,"ignore-table":null,"ignore-db":null,"rm-checkpoint":false}
2019/12/01 22:12:55 loader.go:532: [info] [loader] prepare takes 0.000565 seconds
2019/12/01 22:12:55 checkpoint.go:207: [info] calc checkpoint finished. finished tables (map[])
2019/12/01 22:12:55 loader.go:715: [info] [loader][run db schema]/mfw_rundata/dump//sysbench-schema-create.sql[start]
2019/12/01 22:12:55 loader.go:720: [info] [loader][run db schema]/mfw_rundata/dump//sysbench-schema-create.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest10-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest10-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest3-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest3-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest9-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest9-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest2-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest2-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest4-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest4-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest5-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest5-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest7-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest7-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest6-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest6-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest8-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest8-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest1-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest1-schema.sql[finished]
2019/12/01 22:12:55 loader.go:715: [info] [loader][run db schema]/mfw_rundata/dump//test-schema-create.sql[start]
2019/12/01 22:12:55 loader.go:720: [info] [loader][run db schema]/mfw_rundata/dump//test-schema-create.sql[finished]
2019/12/01 22:12:55 loader.go:773: [info] [loader] create tables takes 0.334379 seconds
2019/12/01 22:12:55 loader.go:788: [info] [loader] all data files have been dispatched, waiting for them finished
2019/12/01 22:12:55 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest3.sql[start]
2019/12/01 22:12:55 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest8.sql[start]
2019/12/01 22:12:55 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest8.sql scanned finished.
2019/12/01 22:12:55 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest3.sql scanned finished.
2019/12/01 22:12:56 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest8.sql[finished]
2019/12/01 22:12:56 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest9.sql[start]
2019/12/01 22:12:56 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest3.sql[finished]
2019/12/01 22:12:56 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest4.sql[start]
2019/12/01 22:12:56 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest9.sql scanned finished.
2019/12/01 22:12:56 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest4.sql scanned finished.
2019/12/01 22:12:56 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest4.sql[finished]
2019/12/01 22:12:56 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest7.sql[start]
2019/12/01 22:12:56 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest9.sql[finished]
2019/12/01 22:12:56 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest6.sql[start]
2019/12/01 22:12:56 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest7.sql scanned finished.
2019/12/01 22:12:56 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest6.sql scanned finished.
2019/12/01 22:12:56 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest7.sql[finished]
2019/12/01 22:12:56 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest1.sql[start]
2019/12/01 22:12:57 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest6.sql[finished]
2019/12/01 22:12:57 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest10.sql[start]
2019/12/01 22:12:57 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest1.sql scanned finished.
2019/12/01 22:12:57 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest10.sql scanned finished.
2019/12/01 22:12:57 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest10.sql[finished]
2019/12/01 22:12:57 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest2.sql[start]
2019/12/01 22:12:57 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest1.sql[finished]
2019/12/01 22:12:57 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest5.sql[start]
2019/12/01 22:12:57 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest2.sql scanned finished.
2019/12/01 22:12:57 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest5.sql scanned finished.
2019/12/01 22:12:57 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest2.sql[finished]
2019/12/01 22:12:57 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest5.sql[finished]
2019/12/01 22:12:57 loader.go:791: [info] [loader] all data files has been finished, takes 2.037124 seconds
2019/12/01 22:12:57 main.go:88: [info] loader stopped and exits 

3. Edit the tidb-ansible/inventory.ini file

Add the IP of the deployment machine to the drainer_servers host group. Set initial_commit_ts to the TSO obtained above; it is only used the first time Drainer starts.

[drainer_servers]
drainer_mysql ansible_host=10.152.x.12 initial_commit_ts="412931068452667405"
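The value can be pulled from the mydumper metadata file directly. A small sketch (the helper is illustrative, not part of tidb-ansible):

```python
import re

# Contents of the metadata file written by mydumper (from the backup step).
METADATA = """Started dump at: 2019-12-01 21:30:17
SHOW MASTER STATUS:
\tLog: tidb-binlog
\tPos: 412931068452667405
\tGTID:

Finished dump at: 2019-12-01 21:30:18
"""

# The Pos value is the TSO to use as Drainer's initial_commit_ts.
initial_commit_ts = re.search(r"Pos:\s*(\d+)", METADATA).group(1)
print(f'initial_commit_ts="{initial_commit_ts}"')
# initial_commit_ts="412931068452667405"
```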

Edit the configuration file:

[tidb@knode10-152-x-133 21:33:21 ~/tidb-ansible]
$cp conf/drainer.toml conf/drainer_mysql_drainer.toml
$vim conf/drainer_mysql_drainer.toml

db-type = "mysql"

[syncer.to]
host = "10.132.2.143"
user = "drainer"
password = "drainer_supersecret"
port = 3308

4. Deploy Drainer

ansible-playbook deploy_drainer.yml

5. Start Drainer

ansible-playbook start_drainer.yml

6. Check Drainer status

root@10.152.x.10 22:06:49 [(none)]> show drainer status;
+-----------------------+------------------+--------+--------------------+---------------------+
| NodeID                | Address          | State  | Max_Commit_Ts      | Update_Time         |
+-----------------------+------------------+--------+--------------------+---------------------+
| knode10-152-x-12:8249 | 10.152.x.12:8249 | online | 412931643727675393 | 2019-12-01 22:06:52 |
+-----------------------+------------------+--------+--------------------+---------------------+
1 row in set (0.00 sec)

$resources/bin/binlogctl -pd-urls=http://10.152.x.10:2379 -cmd drainers
[2019/12/01 22:07:27.531 +08:00] [INFO] [nodes.go:49] ["query node"] [type=drainer] [node="{NodeID: knode10-152-x-12:8249, Addr: 10.152.x.12:8249, State: online, MaxCommitTS: 412931651605102594, UpdateTime: 2019-12-01 22:07:26 +0800 CST}"]

