[IoT Database] Getting started with TDengine: a fast, convenient, open-source IoT database that supports cluster deployment, partitioning, topics, and stream processing
Contents
- Preface
- 1. Background on TDengine
- 2. Trying it out quickly with Docker
- 3. Summary
Preface
Original article:
https://blog.csdn.net/freewebsys/article/details/108971807
Reposting without the author's permission is prohibited.
Author's CSDN page: https://blog.csdn.net/freewebsys
Author's Juejin page: https://juejin.cn/user/585379920479288
Author's Zhihu page: https://www.zhihu.com/people/freewebsystem
1. Background on TDengine
Background: In 2019, TAOS Data (涛思数据), the company founded by Jianhui Tao, announced that it would open-source its time-series database product TDengine. The move genuinely surprised the industry: how much attention could a Time-Series Database aimed at IoT, connected vehicles, the industrial internet, and other typical time-series workloads really attract on GitHub? Would opening the core code bring a startup more opportunities, or hold it back? Within three months of launch the project had passed 10,000 stars, and the company went on to close three consecutive funding rounds, gradually validating TAOS Data's strategy.
https://www.163.com/dy/article/H2GLI1PQ0530UH99.html
Official website:
https://www.taosdata.com/
2. Trying it out quickly with Docker
https://docs.taosdata.com/get-started/
https://hub.docker.com/r/tdengine/tdengine
Start the container:
$ docker run -itd \
    -p 6030-6041:6030-6041 \
    -p 6030-6041:6030-6041/udp \
    tdengine/tdengine
Then run the built-in benchmark tool inside the container:
taosBenchmark
[09/07 09:43:40.210544] INFO: taos client version: 3.0.1.0
Press enter key to continue or Ctrl-C to stop
[09/07 09:47:29.304734] INFO: create database: <CREATE DATABASE IF NOT EXISTS test precision 'ms';>
[09/07 09:47:31.314766] INFO: stable meters does not exist, will create one
[09/07 09:47:31.316766] INFO: create stable: <CREATE TABLE IF NOT EXISTS test.meters (ts TIMESTAMP,current float,voltage int,phase float) TAGS (groupid int,location binary(16))>
[09/07 09:47:31.324614] INFO: generate stable<meters> columns data with lenOfCols<80> * prepared_rand<10000>
[09/07 09:47:31.334906] INFO: generate stable<meters> tags data with lenOfTags<54> * childTblCount<10000>
[09/07 09:47:31.338802] INFO: start creating 10000 table(s) with 8 thread(s)
[09/07 09:47:31.340590] INFO: thread[0] start creating table from 0 to 1249
[09/07 09:47:31.341547] INFO: thread[1] start creating table from 1250 to 2499
[09/07 09:47:31.342576] INFO: thread[2] start creating table from 2500 to 3749
[09/07 09:47:31.343289] INFO: thread[3] start creating table from 3750 to 4999
[09/07 09:47:31.344594] INFO: thread[4] start creating table from 5000 to 6249
[09/07 09:47:31.345317] INFO: thread[5] start creating table from 6250 to 7499
[09/07 09:47:31.346115] INFO: thread[6] start creating table from 7500 to 8749
[09/07 09:47:31.350073] INFO: thread[7] start creating table from 8750 to 9999
[09/07 09:47:33.166145] INFO: Spent 1.8280 seconds to create 10000 table(s) with 8 thread(s), already exist 0 table(s), actual 10000 table(s) pre created, 0 table(s) will be auto created
Press enter key to continue or Ctrl-C to stop
[09/07 09:48:25.825688] INFO: record per request (30000) is larger than insert rows (10000) in progressive mode, which will be set to 10000
[09/07 09:48:25.840859] INFO: Estimate memory usage: 11.74MB
Press enter key to continue or Ctrl-C to stop
[09/07 09:48:35.176099] INFO: thread[0] start progressive inserting into table from 0 to 1249
[09/07 09:48:35.176250] INFO: thread[1] start progressive inserting into table from 1250 to 2499
[09/07 09:48:35.176307] INFO: thread[2] start progressive inserting into table from 2500 to 3749
[09/07 09:48:35.176597] INFO: thread[4] start progressive inserting into table from 5000 to 6249
[09/07 09:48:35.176876] INFO: thread[6] start progressive inserting into table from 7500 to 8749
[09/07 09:48:35.176939] INFO: thread[7] start progressive inserting into table from 8750 to 9999
[09/07 09:48:35.178943] INFO: thread[5] start progressive inserting into table from 6250 to 7499
[09/07 09:48:35.182180] INFO: thread[3] start progressive inserting into table from 3750 to 4999
[09/07 09:49:05.349983] INFO: thread[4] has currently inserted rows: 4370000
[09/07 09:49:05.360251] INFO: thread[6] has currently inserted rows: 4500000
[09/07 09:49:05.364481] INFO: thread[0] has currently inserted rows: 4620000
[09/07 09:49:05.370841] INFO: thread[5] has currently inserted rows: 4380000
[09/07 09:49:05.378918] INFO: thread[7] has currently inserted rows: 4510000
[09/07 09:49:05.393132] INFO: thread[2] has currently inserted rows: 4440000
[09/07 09:49:05.400107] INFO: thread[3] has currently inserted rows: 4360000
[09/07 09:49:05.401223] INFO: thread[1] has currently inserted rows: 4470000
[09/07 09:49:35.434374] INFO: thread[1] has currently inserted rows: 8970000
[09/07 09:49:35.440311] INFO: thread[7] has currently inserted rows: 8900000
[09/07 09:49:35.445450] INFO: thread[4] has currently inserted rows: 8880000
[09/07 09:49:35.452617] INFO: thread[5] has currently inserted rows: 8890000
[09/07 09:49:35.452651] INFO: thread[3] has currently inserted rows: 8690000
[09/07 09:49:35.456406] INFO: thread[0] has currently inserted rows: 9030000
[09/07 09:49:35.467113] INFO: thread[6] has currently inserted rows: 8810000
[09/07 09:49:35.493582] INFO: thread[2] has currently inserted rows: 8710000
[09/07 09:49:59.716787] INFO: thread[6] completed total inserted rows: 12500000, 151074.78 records/second
[09/07 09:50:00.189079] INFO: thread[0] completed total inserted rows: 12500000, 150185.50 records/second
[09/07 09:50:00.213614] INFO: thread[1] completed total inserted rows: 12500000, 150167.37 records/second
[09/07 09:50:00.676656] INFO: thread[5] completed total inserted rows: 12500000, 149337.44 records/second
[09/07 09:50:00.883392] INFO: thread[4] completed total inserted rows: 12500000, 148994.40 records/second
[09/07 09:50:01.317899] INFO: thread[3] completed total inserted rows: 12500000, 148216.44 records/second
[09/07 09:50:01.371287] INFO: thread[7] completed total inserted rows: 12500000, 148093.75 records/second
[09/07 09:50:01.409091] INFO: thread[2] completed total inserted rows: 12500000, 148080.71 records/second
[09/07 09:50:01.414528] INFO: Spent 86.232537 seconds to insert rows: 100000000 with 8 thread(s) into test 1159655.08 records/second
[09/07 09:50:01.414588] INFO: insert delay, min: 1.79ms, avg: 67.00ms, p90: 300.71ms, p95: 322.98ms, p99: 343.11ms, max: 419.40ms
MacBook specs used for the test:
CPU: 2.6 GHz 6-core Intel Core i7
Memory: 16 GB 2667 MHz DDR4
The benchmark runs in two steps; press Enter to start each one.
1) First it creates the tables:
1.8280 seconds to create 10000 table(s) with 8 thread(s)
2) Then it inserts 100 million rows (10,000 rows into each of the 10,000 child tables):
Spent 86.232537 seconds to insert rows: 100,000,000 with 8 thread(s) into test, 1159655.08 records/second
That is impressive speed.
The commands are very similar to MySQL, and you can use the interactive CLI:
taos
Welcome to the TDengine Command Line Interface, Client Version:3.0.1.0
Copyright (c) 2022 by TDengine, all rights reserved.
Server is Community Edition.

taos> show databases;
              name              |
=================================
 information_schema             |
 performance_schema             |
 test                           |
Query OK, 3 rows in database (0.008884s)

taos> use test;
Database changed.

taos> show tables;
           table_name           |
=================================
 d1250                          |
 d2500                          |
 d1251                          |
 d2501                          |
 d7500                          |
...
Query OK, 10000 rows in database (0.069536s)

taos> select * from d1;
           ts            | current  | voltage |  phase  |
======================================================================================
 2017-07-14 02:40:00.000 | 10.00000 |     110 | 0.32222 |
 2017-07-14 02:40:00.001 |  9.84000 |     114 | 0.32222 |
 2017-07-14 02:40:00.002 | 10.12000 |     116 | 0.33056 |
 2017-07-14 02:40:00.003 | 10.04000 |     114 | 0.34167 |
 2017-07-14 02:40:00.004 |  9.96000 |     112 | 0.33056 |
...
Query OK, 10000 rows in database (0.012816s)
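Because meters is a super table, a single statement can aggregate across all 10,000 child tables at once. A few query sketches against the benchmark schema above (column and tag names are taken from the "create stable" log line; the `_wstart` pseudo-column and INTERVAL windowing follow the TDengine 3.0 SQL docs, so treat the exact syntax as an assumption to check against your server version):

```sql
-- Aggregate over all child tables through the super table
SELECT AVG(voltage), MAX(current) FROM test.meters;

-- Filter by a tag column (groupid is a tag, not a regular column)
SELECT COUNT(*) FROM test.meters WHERE groupid = 1;

-- Downsample one device into 1-minute time windows
SELECT _wstart, AVG(current) FROM test.d1 INTERVAL(1m);
```

Tag columns like groupid and location are stored once per child table, which is what makes filtering and grouping by device metadata cheap.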
Inserting over JDBC is also supported:
https://docs.taosdata.com/develop/insert-data/sql-writing/
package com.taos.example;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Arrays;
import java.util.List;

public class RestInsertExample {
    private static Connection getConnection() throws SQLException {
        String jdbcUrl = "jdbc:TAOS-RS://localhost:6041?user=root&password=taosdata";
        return DriverManager.getConnection(jdbcUrl);
    }

    private static List<String> getRawData() {
        return Arrays.asList(
                "d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,'California.SanFrancisco',2",
                "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,'California.SanFrancisco',2",
                "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,'California.SanFrancisco',2",
                "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,'California.SanFrancisco',3",
                "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,'California.LosAngeles',2",
                "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,'California.LosAngeles',2",
                "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,'California.LosAngeles',3",
                "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,'California.LosAngeles',3");
    }

    // Builds one multi-table INSERT statement from the raw CSV lines.
    private static String getSQL() {
        StringBuilder sb = new StringBuilder("INSERT INTO ");
        for (String line : getRawData()) {
            String[] ps = line.split(",");
            sb.append("power.").append(ps[0])
              .append(" USING power.meters TAGS(")
              .append(ps[5]).append(", ")   // tag: location
              .append(ps[6])                // tag: groupId
              .append(") VALUES(")
              .append('\'').append(ps[1]).append('\'').append(",") // ts
              .append(ps[2]).append(",")    // current
              .append(ps[3]).append(",")    // voltage
              .append(ps[4]).append(") ");  // phase
        }
        return sb.toString();
    }

    public static void insertData() throws SQLException {
        try (Connection conn = getConnection()) {
            try (Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE DATABASE power KEEP 3650");
                stmt.execute("CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) "
                        + "TAGS (location BINARY(64), groupId INT)");
                String sql = getSQL();
                int rowCount = stmt.executeUpdate(sql);
                System.out.println("rowCount=" + rowCount); // rowCount=8
            }
        }
    }

    public static void main(String[] args) throws SQLException {
        insertData();
    }
}
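For reference, getSQL() concatenates every row into a single multi-table INSERT statement. Tracing the string-building code by hand, the first two clauses come out roughly like this (truncated; `USING ... TAGS(...)` auto-creates the child table if it does not yet exist):

```sql
INSERT INTO power.d1001 USING power.meters TAGS('California.SanFrancisco', 2)
    VALUES('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000)
  power.d1001 USING power.meters TAGS('California.SanFrancisco', 2)
    VALUES('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000)
  ...
```

Batching many rows and many child tables into one statement like this is the main reason a single executeUpdate() can report rowCount=8.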
Topics and stream processing are also supported:
https://docs.taosdata.com/taos-sql/tmq/
CREATE TOPIC [IF NOT EXISTS] topic_name AS subquery;
SHOW TOPICS;
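As a concrete sketch against the benchmark database from earlier (the topic and stream names here are made up, and the CREATE STREAM syntax follows the TDengine 3.0 docs, so verify it against your server version):

```sql
-- A topic that TMQ consumers can subscribe to
CREATE TOPIC IF NOT EXISTS meters_topic AS
    SELECT ts, current, voltage FROM test.meters;
SHOW TOPICS;

-- A continuous stream that downsamples inserts into a new super table
CREATE STREAM IF NOT EXISTS avg_current_stream
    INTO test.avg_current AS
    SELECT _wstart, AVG(current) FROM test.meters INTERVAL(1m);
SHOW STREAMS;
```

A topic turns query results into a Kafka-style subscription, while a stream keeps running its subquery as new data arrives and writes the results into the target super table.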
3. Summary
TDengine really is a solid database. IoT data tends to have a simple structure but arrives in huge volumes, and TDengine is optimized specifically for those characteristics, so it handles massive data sets well.
It also has a company behind it providing commercial support, which is a real plus.