TimescaleDB API Reference

The API consists of six parts:
(1) Part 1: Hypertable management functions
(2) Part 2: Compression functions
(3) Part 3: Aggregate functions
(4) Part 4: Automation policies
(5) Part 5: Analytics
(6) Part 6: Utilities/statistics

The function interfaces are listed below:

add_dimension
add_drop_chunks_policy
add_reorder_policy
add_compress_chunks_policy
alter_job_schedule
alter table (compression)
alter view (continuous aggregate)
attach_tablespace
chunk_relation_size
chunk_relation_size_pretty
compress_chunk
create_hypertable
create index (transaction per chunk)
create view (continuous aggregate)
decompress_chunk
detach_tablespace
detach_tablespaces
drop_chunks
drop view (continuous aggregate)
first
get_telemetry_report
histogram
hypertable_approximate_row_count
hypertable_relation_size
hypertable_relation_size_pretty
indexes_relation_size
indexes_relation_size_pretty
interpolate
last
locf
move_chunk
refresh materialized view (continuous aggregate)
remove_compress_chunks_policy
remove_drop_chunks_policy
remove_reorder_policy
reorder_chunk
set_chunk_time_interval
set_integer_now_func
set_number_partitions
show_chunks
show_tablespaces
time_bucket
time_bucket_gapfill
timescaledb_information.hypertable
timescaledb_information.license
timescaledb_information.compressed_chunk_stats
timescaledb_information.compressed_hypertable_stats
timescaledb_information.continuous_aggregates
timescaledb_information.continuous_aggregate_stats
timescaledb_information.drop_chunks_policies
timescaledb_information.policy_stats
timescaledb_information.reorder_policies
timescaledb.license_key
timescaledb_pre_restore
timescaledb_post_restore

Hypertable management
Part 1: Hypertable management functions
add_dimension()
Add a partitioning dimension to a table:
Add an additional partitioning dimension to a TimescaleDB hypertable. The column selected as the dimension can either use interval partitioning (e.g., for a second time partition) or hash partitioning.

WARNING:The add_dimension command can only be executed after a table has been converted to a hypertable (via create_hypertable), but must similarly be run only on an empty hypertable.

Space partitions: The use of additional partitioning is a very specialized use case. Most users will not need to use it.

Space partitions use hashing: Every distinct item is hashed to one of N buckets. Remember that we are already using (flexible) time intervals to manage chunk sizes; the main purpose of space partitioning is to enable parallel I/O to the same time interval.

Parallel I/O can benefit in two scenarios: (a) two or more concurrent queries should be able to read from different disks in parallel, or (b) a single query should be able to use query parallelization to read from multiple disks in parallel.

Note that queries to multiple chunks can be executed in parallel when TimescaleDB is running on Postgres 11, but PostgreSQL 9.6 or 10 does not support such parallel chunk execution.

Thus, users looking for parallel I/O have two options:

Use a RAID setup across multiple physical disks, and expose a single logical disk to the hypertable (i.e., via a single tablespace).
For each physical disk, add a separate tablespace to the database. TimescaleDB allows you to actually add multiple tablespaces to a single hypertable (although under the covers, a hypertable's chunks are spread across the tablespaces associated with that hypertable).

We recommend a RAID setup when possible, as it supports both forms of parallelization described above (i.e., separate queries to separate disks, single query to multiple disks in parallel). The multiple tablespace approach only supports the former. With a RAID setup, no spatial partitioning is required.

That said, when using space partitions, we recommend using 1 space partition per disk.
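
For example, with two disks each exposed as its own tablespace, one space partition per disk could look like the following (a hedged sketch; it assumes tablespaces disk1 and disk2 already exist, and uses attach_tablespace, documented later in this section):

-- Two space partitions, one per disk/tablespace.
SELECT create_hypertable('conditions', 'time');
SELECT add_dimension('conditions', 'device_id', number_partitions => 2);
SELECT attach_tablespace('disk1', 'conditions');
SELECT attach_tablespace('disk2', 'conditions');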

TimescaleDB does not benefit from a very large number of space partitions (such as the number of unique items you expect in the partition field). A very large number of such partitions leads both to poorer per-partition load balancing (the mapping of items to partitions using hashing), as well as much increased planning latency for some types of queries.
Required Arguments
Name Description
main_table Identifier of hypertable to add the dimension to.
column_name Name of the column to partition by.
Optional Arguments
Name Description
number_partitions Number of hash partitions to use on column_name. Must be > 0.
chunk_time_interval Interval that each chunk covers. Must be > 0.
partitioning_func The function to use for calculating a value’s partition (see create_hypertable instructions).
if_not_exists Set to true to avoid throwing an error if a dimension for the column already exists. A notice is issued instead. Defaults to false.
Returns
Column Description
dimension_id ID of the dimension in the TimescaleDB internal catalog.
schema_name Schema name of the hypertable.
table_name Table name of the hypertable.
column_name Column name of the column to partition by.
created True if the dimension was added, false when if_not_exists is true and no dimension was added.

When executing this function, either number_partitions or chunk_time_interval must be supplied, which will dictate if the dimension will use hash or interval partitioning.

The chunk_time_interval should be specified as follows:

If the column to be partitioned is a TIMESTAMP, TIMESTAMPTZ, or DATE, this length should be specified either as an INTERVAL type or an integer value in microseconds.

If the column is some other integer type, this length should be an integer that reflects the column's underlying semantics (e.g., the chunk_time_interval should be given in milliseconds if this column is the number of milliseconds since the UNIX epoch).

WARNING:Supporting more than one additional dimension is currently experimental. For any production environments, users are recommended to use at most one "space" dimension (in addition to the required time dimension specified in create_hypertable).

Sample Usage

First convert table conditions to hypertable with just time partitioning on column time, then add an additional partition key on location with four partitions:

SELECT create_hypertable('conditions', 'time');
SELECT add_dimension('conditions', 'location', number_partitions => 4);

Convert table conditions to hypertable with time partitioning on time and space partitioning (2 partitions) on location, then add two additional dimensions.

SELECT create_hypertable('conditions', 'time', 'location', 2);
SELECT add_dimension('conditions', 'time_received', chunk_time_interval => interval '1 day');
SELECT add_dimension('conditions', 'device_id', number_partitions => 2);
SELECT add_dimension('conditions', 'device_id', number_partitions => 2, if_not_exists => true);

attach_tablespace()
Attach a tablespace to a table:
Attach a tablespace to a hypertable and use it to store chunks. A tablespace is a directory on the filesystem that allows control over where individual tables and indexes are stored on the filesystem. A common use case is to create a tablespace for a particular storage disk, allowing tables to be stored there. Please review the standard PostgreSQL documentation for more information on tablespaces.

TimescaleDB can manage a set of tablespaces for each hypertable, automatically spreading chunks across the set of tablespaces attached to a hypertable. If a hypertable is hash partitioned, TimescaleDB will try to place chunks that belong to the same partition in the same tablespace. Changing the set of tablespaces attached to a hypertable may also change the placement behavior. A hypertable with no attached tablespaces will have its chunks placed in the database’s default tablespace.

Required Arguments

Name Description
tablespace Name of the tablespace to attach.
hypertable Identifier of the hypertable to attach the tablespace to.

Tablespaces need to be created before being attached to a hypertable. Once created, tablespaces can be attached to multiple hypertables simultaneously to share the underlying disk storage. Associating a regular table with a tablespace using the TABLESPACE option to CREATE TABLE, prior to calling create_hypertable, will have the same effect as calling attach_tablespace immediately following create_hypertable.

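For example, a minimal sketch of that equivalence (the tablespace and column list here are illustrative; the tablespace must already exist, e.g. via CREATE TABLESPACE disk1 LOCATION '...'):

-- Creating the table in the tablespace up front...
CREATE TABLE conditions (time TIMESTAMPTZ, device_id INT, temperature FLOAT) TABLESPACE disk1;
SELECT create_hypertable('conditions', 'time');

-- ...has the same effect as attaching the tablespace right after:
CREATE TABLE conditions (time TIMESTAMPTZ, device_id INT, temperature FLOAT);
SELECT create_hypertable('conditions', 'time');
SELECT attach_tablespace('disk1', 'conditions');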

Optional Arguments
Name Description
if_not_attached Set to true to avoid throwing an error if the tablespace is already attached to the table. A notice is issued instead. Defaults to false.

Sample Usage

Attach the tablespace disk1 to the hypertable conditions:

SELECT attach_tablespace('disk1', 'conditions');
SELECT attach_tablespace('disk2', 'conditions', if_not_attached => true);
WARNING:The management of tablespaces on hypertables is currently an experimental feature.

create_hypertable()

Creates a TimescaleDB hypertable from a PostgreSQL table (replacing the latter), partitioned on time and with the option to partition on one or more other columns (i.e., space). All actions, such as ALTER TABLE, SELECT, etc., still work on the resulting hypertable.

Required Arguments

Name Description
main_table Identifier of the table to convert to a hypertable.
time_column_name Name of the column containing time values as well as the primary column to partition by.

Optional Arguments

Name Description
partitioning_column Name of an additional column to partition by. If provided, the number_partitions argument must also be provided.
number_partitions Number of hash partitions to use for partitioning_column. Must be > 0.
chunk_time_interval Interval in event time that each chunk covers. Must be > 0. As of TimescaleDB v0.11.0, the default is 7 days; for previous versions, the default is 1 month.
create_default_indexes Boolean whether to create default indexes on time/partitioning columns. Default is TRUE.
if_not_exists Boolean whether to print a warning if the table is already a hypertable or raise an exception otherwise. Default is FALSE.
partitioning_func The function to use for calculating a value's partition.
associated_schema_name Name of the schema for the hypertable's internal tables. Default is "_timescaledb_internal".
associated_table_prefix Prefix for the names of the hypertable's internal chunks. Default is "_hyper".
migrate_data Set to true to migrate any existing data in main_table to chunks in the new hypertable. A non-empty table will generate an error without this option. Note that, for large tables, the migration might take a long time. Defaults to false.
time_partitioning_func Function to convert incompatible primary time column values into compatible ones. The function must be IMMUTABLE.

Returns

Column Description
hypertable_id ID of the hypertable in TimescaleDB.
schema_name Schema name of the table converted to a hypertable.
table_name Table name of the table converted to a hypertable.
created True if the hypertable was created, false when if_not_exists is true and no hypertable was created.

TIP:If you use SELECT * FROM create_hypertable(…) you will get the return value formatted as a table with column headings.

WARNING:The use of the migrate_data argument to convert a non-empty table can lock the table for a significant amount of time, depending on how much data is in the table. If you would like finer control over index formation and other aspects of your hypertable, follow these migration instructions instead.

When converting a normal SQL table to a hypertable, pay attention to how you handle constraints. A hypertable can contain foreign keys to normal SQL table columns, but the reverse is not allowed. UNIQUE and PRIMARY constraints must include the partitioning key.

Units

The valid types for the time column are:
Timestamp (TIMESTAMP, TIMESTAMPTZ)
DATE
Integer (SMALLINT, INT, BIGINT)

TIP:The type flexibility of the ‘time’ column allows the use of non-time-based values as the primary chunk partitioning column, as long as those values can increment.

TIP:For incompatible data types (e.g. jsonb) you can specify a function to the time_partitioning_func argument which can extract a compatible data type.

The units of chunk_time_interval should be set as follows:

For time columns having timestamp or DATE types, the chunk_time_interval should be specified either as an interval type or an integral value in microseconds.

For integer types, the chunk_time_interval must be set explicitly, as the database does not otherwise understand the semantics of what each integer value represents (a second, millisecond, nanosecond, etc.). So if your time column is the number of milliseconds since the UNIX epoch, and you wish each chunk to cover 1 day, you should specify chunk_time_interval => 86400000.

In case of hash partitioning (i.e., number_partitions is greater than zero), it is possible to optionally specify a custom partitioning function. If no custom partitioning function is specified, the default partitioning function is used. The default partitioning function calls PostgreSQL’s internal hash function for the given type, if one exists. Thus, a custom partitioning function can be used for value types that do not have a native PostgreSQL hash function. A partitioning function should take a single anyelement type argument and return a positive integer hash value. Note that this hash value is not a partition ID, but rather the inserted value’s position in the dimension’s key space, which is then divided across the partitions.

在哈希分区的情况下(即number_partitions大于零),可以有选择地指定自定义分区函数。 如果未指定自定义分区功能,则使用默认分区功能。 如果存在给定类型,默认的分区函数将调用PostgreSQL的内部哈希函数。 因此,自定义分区函数可用于不具有本地PostgreSQL哈希函数的值类型。 分区函数应采用单个anyelement类型参数,并返回正整数哈希值。 请注意,此哈希值不是分区ID,而是插入值在维的键空间中的位置,然后在分区之间进行划分。

TIP:The time column in create_hypertable must be defined as NOT NULL. If this is not already specified on table creation, create_hypertable will automatically add this constraint on the table when it is executed.

Sample Usage

Convert table conditions to hypertable with just time partitioning on column time:

SELECT create_hypertable('conditions', 'time');

Convert table conditions to hypertable, setting chunk_time_interval to 24 hours.

SELECT create_hypertable('conditions', 'time', chunk_time_interval => 86400000000);
SELECT create_hypertable('conditions', 'time', chunk_time_interval => interval '1 day');

Convert table conditions to hypertable with time partitioning on time and space partitioning (4 partitions) on location:

SELECT create_hypertable('conditions', 'time', 'location', 4);

The same as above, but using a custom partitioning function:

SELECT create_hypertable('conditions', 'time', 'location', 4, partitioning_func => 'location_hash');

Convert table conditions to hypertable. Do not raise a warning if conditions is already a hypertable:

SELECT create_hypertable('conditions', 'time', if_not_exists => TRUE);

Time partition table measurements on a composite column type report using a time partitioning function: Requires an immutable function that can convert the column value into a supported column value:

CREATE TYPE report AS (reported timestamp with time zone, contents jsonb);

CREATE FUNCTION report_reported(report)
RETURNS timestamptz
LANGUAGE SQL
IMMUTABLE AS
'SELECT $1.reported';

SELECT create_hypertable('measurements', 'report', time_partitioning_func => 'report_reported');

Time partition table events, on a column type jsonb (event), which has a top level key (started) containing an ISO 8601 formatted timestamp:

CREATE FUNCTION event_started(jsonb)
RETURNS timestamptz
LANGUAGE SQL
IMMUTABLE AS
$func$SELECT ($1->>'started')::timestamptz$func$;

SELECT create_hypertable('events', 'event', time_partitioning_func => 'event_started');

Best Practices

One of the most common questions users of TimescaleDB have revolves around configuring chunk_time_interval.

Time intervals: The current release of TimescaleDB enables both the manual and automated adaption of its time intervals. With manually-set intervals, users should specify a chunk_time_interval when creating their hypertable (the default value is 1 week). The interval used for new chunks can be changed by calling set_chunk_time_interval().

The key property of choosing the time interval is that the chunk (including indexes) belonging to the most recent interval (or chunks if using space partitions) fit into memory. As such, we typically recommend setting the interval so that these chunk(s) comprise no more than 25% of main memory.

TIP:Make sure that you are planning for recent chunks from all active hypertables to fit into 25% of main memory, rather than 25% per hypertable.

To determine this, you roughly need to understand your data rate. If you are writing roughly 2GB of data per day and have 64GB of memory, setting the time interval to a week would be good. If you are writing 10GB per day on the same machine, setting the time interval to a day would be appropriate. This interval would also hold if data is loaded more in batches, e.g., you bulk load 70GB of data per week, with data corresponding to records from throughout the week.

While it’s generally safer to make chunks smaller rather than too large, setting intervals too small can lead to many chunks, which corresponds to increased planning latency for some types of queries.

TIP:One caveat is that the total chunk size is actually dependent on both the underlying data size and any indexes, so some care might be taken if you make heavy use of expensive index types (e.g., some PostGIS geospatial indexes). During testing, you might check your total chunk sizes via the chunk_relation_size function.

Space partitions: In most cases, it is advised for users not to use space partitions. The rare cases in which space partitions may be useful are described in the add_dimension section.

CREATE INDEX (Transaction Per Chunk)

CREATE INDEX … WITH (timescaledb.transaction_per_chunk, …);

This option extends CREATE INDEX with the ability to use a separate transaction for each chunk it creates an index on, instead of using a single transaction for the entire hypertable. This allows INSERTs and other operations to be performed concurrently during most of the duration of the CREATE INDEX command. While the index is being created on an individual chunk, it functions as if a regular CREATE INDEX were called on that chunk; however, other chunks are completely unblocked.

TIP:This version of CREATE INDEX can be used as an alternative to CREATE INDEX CONCURRENTLY, which is not currently supported on hypertables.

WARNING:If the operation fails partway through, indexes may not be created on all hypertable chunks. If this occurs, the index on the root table of the hypertable will be marked as invalid (this can be seen by running \d+ on the hypertable). The index will still work, and will be created on new chunks, but if you wish to ensure all chunks have a copy of the index, drop and recreate it.

Sample Usage

Anonymous index

CREATE INDEX ON conditions(time, device_id) WITH (timescaledb.transaction_per_chunk);

Other index methods

CREATE INDEX ON conditions(time, location) USING brin
WITH (timescaledb.transaction_per_chunk);

detach_tablespace()

Detach a tablespace from one or more hypertables. This only means that new chunks will not be placed on the detached tablespace. This is useful, for instance, when a tablespace is running low on disk space and one would like to prevent new chunks from being created in the tablespace. The detached tablespace itself and any existing chunks with data on it will remain unchanged and will continue to work as before, including being available for queries. Note that newly inserted data rows may still be inserted into an existing chunk on the detached tablespace since existing data is not cleared from a detached tablespace. A detached tablespace can be reattached if desired to once again be considered for chunk placement.
Required Arguments
Name Description
tablespace Name of the tablespace to detach.

When giving only the tablespace name as argument, the given tablespace will be detached from all hypertables that the current role has the appropriate permissions for. Therefore, without proper permissions, the tablespace may still receive new chunks after this command is issued.
Optional Arguments
Name Description
hypertable Identifier of the hypertable to detach the tablespace from.
if_attached Set to true to avoid throwing an error if the tablespace is not attached to the given table. A notice is issued instead. Defaults to false.

When specifying a specific hypertable, the tablespace will only be detached from the given hypertable and thus may remain attached to other hypertables.
Sample Usage

Detach the tablespace disk1 from the hypertable conditions:

SELECT detach_tablespace('disk1', 'conditions');
SELECT detach_tablespace('disk2', 'conditions', if_attached => true);

Detach the tablespace disk1 from all hypertables that the current user has permissions for:

SELECT detach_tablespace('disk1');

detach_tablespaces()

Detach all tablespaces from a hypertable. After issuing this command on a hypertable, it will no longer have any tablespaces attached to it. New chunks will instead be placed in the database’s default tablespace.
Required Arguments
Name Description
hypertable Identifier of the hypertable to detach tablespaces from.
Sample Usage

Detach all tablespaces from the hypertable conditions:

SELECT detach_tablespaces('conditions');

drop_chunks()

Removes data chunks whose time range falls completely before (or after) a specified time, operating either across all hypertables or for a specific one. Shows a list of the chunks that were dropped in the same style as the show_chunks function.

Chunks are defined by a certain start and end time. If older_than is specified, a chunk is dropped if its end time is older than the specified timestamp. Alternatively, if newer_than is specified, a chunk is dropped if its start time is newer than the specified timestamp. Note that, because chunks are removed if and only if their time range falls fully before (or after) the specified timestamp, the remaining data may still contain timestamps that are before (or after) the specified one.
Required Arguments

Function requires at least one of the following arguments. These arguments have the same semantics as the show_chunks function.
Name Description
older_than Specification of cut-off point where any full chunks older than this timestamp should be removed.
newer_than Specification of cut-off point where any full chunks newer than this timestamp should be removed.
Optional Arguments
Name Description
table_name Hypertable name from which to drop chunks. If not supplied, all hypertables are affected.
schema_name Schema name of the hypertable from which to drop chunks. Defaults to public.
cascade Boolean on whether to CASCADE the drop on chunks, therefore removing dependent objects on chunks to be removed. Defaults to FALSE.
cascade_to_materializations Set to TRUE to also remove chunk data from any associated continuous aggregates. Set to FALSE to only drop raw chunks (while keeping data in the continuous aggregates). Defaults to NULL, which errors if continuous aggregates exist.

The older_than and newer_than parameters can be specified in two ways:

interval type: The cut-off point is computed as now() - older_than and similarly now() - newer_than. An error will be returned if an INTERVAL is supplied and the time column is not one of a TIMESTAMP, TIMESTAMPTZ, or DATE.

timestamp, date, or integer type: The cut-off point is explicitly given as a TIMESTAMP / TIMESTAMPTZ / DATE or as a SMALLINT / INT / BIGINT. The choice of timestamp or integer must follow the type of the hypertable's time column.

WARNING:When using just an interval type, the function assumes that you are removing things in the past. If you want to remove data in the future (i.e., erroneous entries), use a timestamp.

When both arguments are used, the function returns the intersection of the resulting two ranges. For example, specifying newer_than => 4 months and older_than => 3 months will drop all full chunks that are between 3 and 4 months old. Similarly, specifying newer_than => '2017-01-01' and older_than => '2017-02-01' will drop all full chunks between '2017-01-01' and '2017-02-01'. Specifying parameters that do not result in an overlapping intersection between two ranges will result in an error.

TIP:By default, calling drop_chunks on a table that has a continuous aggregate will throw an error. This can be resolved by setting cascade_to_materializations to TRUE, which will cause the corresponding aggregated data to also be dropped.

Sample Usage

Drop all chunks older than 3 months ago:

SELECT drop_chunks(interval '3 months');

Example output:

          drop_chunks

_timescaledb_internal._hyper_3_5_chunk
_timescaledb_internal._hyper_3_6_chunk
_timescaledb_internal._hyper_3_7_chunk
_timescaledb_internal._hyper_3_8_chunk
_timescaledb_internal._hyper_3_9_chunk
(5 rows)

Drop all chunks more than 3 months in the future. This is useful for correcting data ingested with incorrect clocks:

SELECT drop_chunks(newer_than => now() + interval '3 months');

Drop all chunks from hypertable conditions older than 3 months:

SELECT drop_chunks(interval '3 months', 'conditions');

Drop all chunks from hypertable conditions before 2017:

SELECT drop_chunks('2017-01-01'::date, 'conditions');

Drop all chunks from hypertable conditions before 2017, where time column is given in milliseconds from the UNIX epoch:

SELECT drop_chunks(1483228800000, 'conditions');

Drop all chunks from hypertable conditions older than 3 months, including dependent objects (e.g., views):

SELECT drop_chunks(interval '3 months', 'conditions', cascade => TRUE);

Drop all chunks newer than 3 months ago:

SELECT drop_chunks(newer_than => interval '3 months');

Drop all chunks older than 3 months ago and newer than 4 months ago:

SELECT drop_chunks(older_than => interval '3 months', newer_than => interval '4 months', table_name => 'conditions');

Drop all chunks older than 3 months, and delete this data from any continuous aggregates based on it:

SELECT drop_chunks(interval '3 months', 'conditions', cascade_to_materializations => true);

set_chunk_time_interval()

Sets the chunk_time_interval on a hypertable. The new interval is used when new chunks are created but the time intervals on existing chunks are not affected.
Required Arguments
Name Description
main_table Identifier of hypertable to update interval for.
chunk_time_interval Interval in event time that each new chunk covers. Must be > 0.
Optional Arguments
Name Description
dimension_name The name of the time dimension to set the interval for. Only used when the hypertable has multiple time dimensions.

The valid types for the chunk_time_interval depend on the type of hypertable time column:

TIMESTAMP, TIMESTAMPTZ, DATE: The specified chunk_time_interval should be given either as an INTERVAL type (interval '1 day') or as an integer or bigint value (representing some number of microseconds).

INTEGER: The specified chunk_time_interval should be an integer (smallint, int, bigint) value and represent the underlying semantics of the hypertable's time column, e.g., given in milliseconds if the time column is expressed in milliseconds (see create_hypertable instructions).

Sample Usage

For a TIMESTAMP column, set chunk_time_interval to 24 hours.

SELECT set_chunk_time_interval('conditions', interval '24 hours');
SELECT set_chunk_time_interval('conditions', 86400000000);

For a time column expressed as the number of milliseconds since the UNIX epoch, set chunk_time_interval to 24 hours.

SELECT set_chunk_time_interval('conditions', 86400000);

set_number_partitions()

Sets the number of partitions (slices) of a space dimension on a hypertable. The new partitioning only affects new chunks.
Required Arguments
Name Description
main_table Identifier of hypertable to update the number of partitions for.
number_partitions The new number of partitions for the dimension. Must be greater than 0 and less than 32,768.
Optional Arguments
Name Description
dimension_name The name of the space dimension to set the number of partitions for.

The dimension_name needs to be explicitly specified only if the hypertable has more than one space dimension. An error will be thrown otherwise.
Sample Usage

For a table with a single space dimension:

SELECT set_number_partitions('conditions', 2);

For a table with more than one space dimension:

SELECT set_number_partitions('conditions', 2, 'device_id');

set_integer_now_func()

This function is only relevant for hypertables with integer (as opposed to TIMESTAMP/TIMESTAMPTZ/DATE) time values. For such hypertables, it sets a function that returns the now() value (current time) in the units of the time column. This is necessary for running some policies on integer-based tables. In particular, many policies only apply to chunks of a certain age and a function that returns the current time is necessary to determine the age of a chunk.
Required Arguments
Name Description
main_table (REGCLASS) Identifier of the hypertable to set the integer now function for.
integer_now_func (REGPROC) A function that returns the current time value in the same units as the time column.
Optional Arguments
Name Description
replace_if_exists (BOOLEAN) Whether to override the function if one is already set. Defaults to false.
Sample Usage

To set the integer now function for a hypertable with a time column in unix time (number of seconds since the unix epoch, UTC):

CREATE OR REPLACE FUNCTION unix_now() RETURNS BIGINT LANGUAGE SQL STABLE AS
$$ SELECT extract(epoch FROM now())::BIGINT $$;

SELECT set_integer_now_func('test_table_bigint', 'unix_now');

show_chunks()

Get list of chunks associated with hypertables.
Optional Arguments

Function accepts the following arguments. These arguments have the same semantics as the drop_chunks function.
Name Description
hypertable Hypertable name from which to select chunks. If not supplied, all chunks are shown.
older_than Specification of cut-off point where any full chunks older than this timestamp should be shown.
newer_than Specification of cut-off point where any full chunks newer than this timestamp should be shown.

The older_than and newer_than parameters can be specified in two ways:

interval type: The cut-off point is computed as now() - older_than and similarly now() - newer_than. An error will be returned if an INTERVAL is supplied and the time column is not one of a TIMESTAMP, TIMESTAMPTZ, or DATE.

timestamp, date, or integer type: The cut-off point is explicitly given as a TIMESTAMP / TIMESTAMPTZ / DATE or as a SMALLINT / INT / BIGINT. The choice of timestamp or integer must follow the type of the hypertable's time column.

When both arguments are used, the function returns the intersection of the resulting two ranges. For example, specifying newer_than => 4 months and older_than => 3 months will show all full chunks that are between 3 and 4 months old. Similarly, specifying newer_than => '2017-01-01' and older_than => '2017-02-01' will show all full chunks between '2017-01-01' and '2017-02-01'. Specifying parameters that do not result in an overlapping intersection between two ranges will result in an error.

Sample Usage

Get list of all chunks. Returns 0 if there are no hypertables:

SELECT show_chunks();

The expected output:

show_chunks

_timescaledb_internal._hyper_1_10_chunk
_timescaledb_internal._hyper_1_11_chunk
_timescaledb_internal._hyper_1_12_chunk
_timescaledb_internal._hyper_1_13_chunk
_timescaledb_internal._hyper_1_14_chunk
_timescaledb_internal._hyper_1_15_chunk
_timescaledb_internal._hyper_1_16_chunk
_timescaledb_internal._hyper_1_17_chunk
_timescaledb_internal._hyper_1_18_chunk

Get list of all chunks associated with a table:

SELECT show_chunks('conditions');

Get all chunks older than 3 months:

SELECT show_chunks(older_than => interval '3 months');

Get all chunks more than 3 months in the future. This is useful for showing data ingested with incorrect clocks:

SELECT show_chunks(newer_than => now() + interval '3 months');

Get all chunks from hypertable conditions older than 3 months:

SELECT show_chunks('conditions', older_than => interval '3 months');

Get all chunks from hypertable conditions before 2017:

SELECT show_chunks('conditions', older_than => '2017-01-01'::date);

Get all chunks newer than 3 months:

SELECT show_chunks(newer_than => interval '3 months');

Get all chunks older than 3 months and newer than 4 months:

SELECT show_chunks(older_than => interval '3 months', newer_than => interval '4 months');

reorder_chunk() Community edition

Reorder a single chunk’s heap to follow the order of an index. This function acts similarly to the PostgreSQL CLUSTER command, however it uses lower lock levels so that, unlike with the CLUSTER command, the chunk and hypertable are able to be read for most of the process. It does use a bit more disk space during the operation.

This command can be particularly useful when data is often queried in an order different from that in which it was originally inserted. For example, data is commonly inserted into a hypertable in loose time order (e.g., many devices concurrently sending their current state), but one might typically query the hypertable about a specific device. In such cases, reordering a chunk using an index on (device_id, time) can lead to significant performance improvement for these types of queries.

One can call this function directly on individual chunks of a hypertable, but using add_reorder_policy is often much more convenient.
Required Arguments
Name Description
chunk (REGCLASS) Name of the chunk to reorder.
Optional Arguments
Name Description
index (REGCLASS) The name of the index (on either the hypertable or chunk) to order by.
verbose (BOOLEAN) Setting to true will display messages about the progress of the reorder command. Defaults to false.
Returns

This function returns void.
Sample Usage

SELECT reorder_chunk('_timescaledb_internal._hyper_1_10_chunk', 'conditions_device_id_time_idx');

runs a reorder on the _timescaledb_internal._hyper_1_10_chunk chunk using the conditions_device_id_time_idx index.

move_chunk() Enterprise edition

TimescaleDB allows users to move data (and indexes) to alternative tablespaces. This gives the user the ability to move data to more cost-effective storage as it ages. This function acts like a combination of the PostgreSQL CLUSTER command and the PostgreSQL ALTER TABLE … SET TABLESPACE command. However, it uses lower lock levels so that, unlike with these PostgreSQL commands, the chunk and hypertable are able to be read for most of the process. It does use a bit more disk space during the operation.
Required Arguments
Name Description
chunk (REGCLASS) Name of chunk to be moved.
destination_tablespace (Name) Target tablespace for chunk you are moving.
index_destination_tablespace (Name) Target tablespace for index associated with the chunk you are moving.
Optional Arguments
Name Description
reorder_index (REGCLASS) The name of the index (on either the hypertable or chunk) to order by.
verbose (BOOLEAN) Setting to true will display messages about the progress of the move_chunk command. Defaults to false.
Sample Usage

SELECT move_chunk(
chunk => '_timescaledb_internal._hyper_1_4_chunk',
destination_tablespace => 'tablespace_2',
index_destination_tablespace => 'tablespace_3',
reorder_index => 'conditions_device_id_time_idx',
verbose => TRUE
);

Compression Community edition
Part 2: Compression functions
We highly recommend reading the blog post and tutorial about compression before trying to set it up for the first time.

Setting up compression on TimescaleDB requires users to first configure the hypertable for compression and then set up a policy for when to compress chunks.

Advanced usage of compression allows users to compress chunks manually, instead of automatically as they age.

WARNING:Compression is only available for Postgres 10.2+ or 11.
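
Taken together, a minimal sketch of the two-step setup (using the metrics hypertable and the 60-day policy interval from the examples later in this part; illustrative):

-- Step 1: configure the hypertable for compression.
ALTER TABLE metrics SET (timescaledb.compress,
  timescaledb.compress_orderby = 'time DESC',
  timescaledb.compress_segmentby = 'device_id');

-- Step 2: add a policy to compress chunks once they are 60 days old.
SELECT add_compress_chunks_policy('metrics', INTERVAL '60d');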

Restrictions

Version 1.5 does not support altering or inserting data into compressed chunks. The data can be queried without any modifications; however, if you need to backfill or update data in a compressed chunk you will need to decompress the chunk(s) first.
Associated commands

ALTER TABLE
add_compress_chunks_policy
remove_compress_chunks_policy
compress_chunk
decompress_chunk

ALTER TABLE (Compression) Community edition

The ALTER TABLE statement is used to turn on compression and set compression options.

The syntax is:

ALTER TABLE <table_name> SET (timescaledb.compress,
timescaledb.compress_orderby = '<column_name> [ASC | DESC] [ NULLS { FIRST | LAST } ] [, …]',
timescaledb.compress_segmentby = '<column_name> [, …]'
);

Required Options
Name Description
timescaledb.compress Boolean to enable compression
Other Options
Name Description
timescaledb.compress_orderby Order used by compression, specified in the same way as the ORDER BY clause in a SELECT query. The default is the descending order of the hypertable’s time column.
timescaledb.compress_segmentby Column list on which to key the compressed segments. An identifier representing the source of the data such as device_id or tags_id is usually a good candidate. The default is no segment by columns.
Parameters
Name Description
table_name Name of the hypertable that will support compression
column_name Name of the column used to order by and/or segment by
Sample Usage

Configure a hypertable that ingests device data to use compression.

ALTER TABLE metrics SET (timescaledb.compress, timescaledb.compress_orderby = 'time DESC', timescaledb.compress_segmentby = 'device_id');

add_compress_chunks_policy() Community edition

Allows you to set a policy by which the system will compress a chunk automatically in the background after it reaches a given age.
Required Arguments
Name Description
table_name (REGCLASS) Name of the table that the policy will act on.
time_interval (INTERVAL or integer) The age after which the policy job will compress chunks.

The time_interval parameter should be specified differently depending on the type of the time column of the hypertable:

For hypertables with TIMESTAMP, TIMESTAMPTZ, and DATE time columns: the time interval should be an INTERVAL type
For hypertables with integer-based timestamps: the time interval should be an integer type (this requires the integer_now_func to be set).

Sample Usage

Add a policy to compress chunks older than 60 days on the 'cpu' hypertable.

SELECT add_compress_chunks_policy('cpu', INTERVAL '60d');

Add a compress chunks policy to a hypertable with an integer-based time column:

SELECT add_compress_chunks_policy('table_with_bigint_time', BIGINT '600000');

remove_compress_chunks_policy() Community edition

Removes the compression policy from a hypertable. To restart policy-based compression you will need to re-add the policy.
Required Arguments
Name Description
table_name (REGCLASS) Name of the hypertable the policy should be removed from.
Sample Usage

Remove the compression policy from the 'cpu' table:

SELECT remove_compress_chunks_policy('cpu');

compress_chunk() Community edition

The compress_chunk function is used to compress a specific chunk. This is most often used instead of the add_compress_chunks_policy function, when a user wants more control over the scheduling of compression. For most users, we suggest using the policy framework instead.

TIP:You can get a list of chunks belonging to a hypertable using the show_chunks function.

Required Arguments
Name Description
chunk_name (REGCLASS) Name of the chunk to be compressed.
Sample Usage

Compress a single chunk.

SELECT compress_chunk('_timescaledb_internal._hyper_1_2_chunk');

decompress_chunk() Community edition

If you need to modify or add data to a chunk that has already been compressed, you will need to decompress the chunk first. This is especially useful for backfilling old data.

TIP:Prior to decompressing chunks for the purpose of data backfill or updating, you should first stop any compression policy that is active on the hypertable you plan to perform this operation on. Once the update and/or backfill is complete, simply turn the policy back on and the system will recompress your chunks.
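
A hedged sketch of that backfill workflow, using only functions documented in this part (chunk, table, and interval names are illustrative):

-- 1. Stop the active compression policy while backfilling.
SELECT remove_compress_chunks_policy('cpu');

-- 2. Decompress the chunk(s) that will receive backfilled rows.
SELECT decompress_chunk('_timescaledb_internal._hyper_1_2_chunk');

-- 3. Backfill/update the data here (INSERT/UPDATE on the hypertable).

-- 4. Re-add the policy; the system will recompress eligible chunks.
SELECT add_compress_chunks_policy('cpu', INTERVAL '60d');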

Required Arguments
Name Description
chunk_name (REGCLASS) Name of the chunk to be decompressed.
Sample Usage

Decompress a single chunk

SELECT decompress_chunk('_timescaledb_internal._hyper_2_2_chunk');

Continuous Aggregates Community edition
Part 3: Continuous aggregate functions
TimescaleDB allows users to automatically recompute aggregates at predefined intervals and materialize the results. This is suitable for frequently used queries. For a more detailed discussion of this capability, please see using TimescaleDB Continuous Aggregates.

CREATE VIEW
ALTER VIEW
REFRESH MATERIALIZED VIEW
DROP VIEW

CREATE VIEW (Continuous Aggregate) Community edition

The CREATE VIEW statement is used to create continuous aggregates.

The syntax is:

CREATE VIEW <view_name> [ ( column_name [, …] ) ]
WITH ( timescaledb.continuous [, timescaledb.<option> = <value> ] )
AS
<select_query>

<select_query> is of the form:

SELECT <grouping_exprs>, <aggregate_functions>
FROM <hypertable>
[WHERE … ]
GROUP BY time_bucket( <const_value>, <partition_col_of_hypertable> ),
[ <optional grouping exprs> ]
[HAVING …]

Parameters
Name Description
<view_name> Name (optionally schema-qualified) of continuous aggregate view to be created.
<column_name> Optional list of names to be used for columns of the view. If not given, the column names are deduced from the query.
WITH clause This clause specifies options for the continuous aggregate view.
<select_query> A SELECT query that uses the specified syntax.
Required WITH clause options

Name Description
timescaledb.continuous (BOOLEAN) If timescaledb.continuous is not specified, then this is a regular PostgreSQL view.

Optional WITH clause options

Name Description
timescaledb.refresh_lag (same datatype as the bucket_width argument from the time_bucket expression) Refresh lag controls the amount by which the materialization will lag behind the current time. The continuous aggregate view lags behind by bucket_width + refresh_lag. refresh_lag can be set to positive and negative values. Default: twice the bucket width (as specified by the time_bucket expression).
timescaledb.refresh_interval (INTERVAL) Refresh interval controls how often the background materializer is run. Note that if refresh_lag is set to -<bucket_width>, the continuous aggregate will run whenever new data is received, regardless of the refresh_interval value. Default: twice the bucket width (if the datatype of the bucket_width argument from the time_bucket expression is an INTERVAL), otherwise 12 hours.
timescaledb.max_interval_per_job (same datatype as the bucket_width argument from the time_bucket expression) Max interval per job specifies the amount of data processed by the background materializer job when the continuous aggregate is updated. Default: 20 * bucket width.
timescaledb.create_group_indexes (BOOLEAN) Create indexes on the materialization table for the group by columns (specified by the GROUP BY clause of the SELECT query). Default: indexes are created for every group by expression + time_bucket expression pair.
timescaledb.ignore_invalidation_older_than (same datatype as the bucket_width argument from the time_bucket expression) Time interval after which invalidations are ignored. Default: all invalidations are processed.

TIP:Say the continuous aggregate uses time_bucket('2h', time_column) and we want to keep the view up to date with the data. We can do this by modifying the refresh_lag setting: set refresh_lag to -2h, e.g., ALTER VIEW contview SET (timescaledb.refresh_lag = '-2h'); Please refer to the caveats.

Restrictions

SELECT query should be of the form specified in the syntax above.
The hypertable used in the SELECT may not have row-level-security policies enabled.
GROUP BY clause must include a time_bucket expression. The time_bucket expression must use the time dimension column of the hypertable.
time_bucket_gapfill is not allowed in continuous aggs, but may be run in a SELECT from the continuous aggregate view.
In general, aggregates which can be parallelized by PostgreSQL are allowed in the view definition, this includes most aggregates distributed with PostgreSQL. Aggregates with ORDER BY, DISTINCT and FILTER clauses are not permitted.
All functions and their arguments included in SELECT, GROUP BY and HAVING clauses must be immutable.
Queries with ORDER BY are disallowed.
The view is not allowed to be a security barrier view.

TIP:You can find the settings for continuous aggregates and statistics in timescaledb_information views.
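
For example, to inspect those settings and statistics (views listed in the function index at the top of this reference):

SELECT * FROM timescaledb_information.continuous_aggregates;
SELECT * FROM timescaledb_information.continuous_aggregate_stats;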

Sample Usage

Create a continuous aggregate view.

CREATE VIEW continuous_aggregate_view( timec, minl, sumt, sumh )
WITH ( timescaledb.continuous,
timescaledb.refresh_lag = '5 hours',
timescaledb.refresh_interval = '1h' )
AS
SELECT time_bucket('1day', timec), min(location), sum(temperature), sum(humidity)
FROM conditions
GROUP BY time_bucket('1day', timec);

Add additional continuous aggregates on top of the same raw hypertable.

CREATE VIEW continuous_aggregate_view( timec, minl, sumt, sumh )
WITH ( timescaledb.continuous,
timescaledb.refresh_lag = '5 hours',
timescaledb.refresh_interval = '1h' )
AS
SELECT time_bucket('30day', timec), min(location), sum(temperature), sum(humidity)
FROM conditions
GROUP BY time_bucket('30day', timec);

TIP:In order to keep the continuous aggregate up to date with incoming data, the refresh lag can be set to -<bucket_width>. Please note that by doing so, you will incur higher write amplification and performance penalties.

CREATE VIEW continuous_aggregate_view( timec, minl, sumt, sumh )
WITH (timescaledb.continuous,
timescaledb.refresh_lag = '-1h',
timescaledb.refresh_interval = '30m')
AS
SELECT time_bucket('1h', timec), min(location), sum(temperature), sum(humidity)
FROM conditions
GROUP BY time_bucket('1h', timec);

ALTER VIEW (Continuous Aggregate) Community edition

The ALTER VIEW statement can be used to modify the WITH clause options for the continuous aggregate view.

ALTER VIEW <view_name> SET ( timescaledb.<option> = <value> )

Parameters
Name Description
<view_name> Name (optionally schema-qualified) of the continuous aggregate view to be altered.
Sample Usage

Set the max interval processed by a materializer job (that updates the continuous aggregate) to 1 week.

ALTER VIEW contagg_view SET (timescaledb.max_interval_per_job = '1 week');

Set the refresh lag to 1 hour, the refresh interval to 30 minutes and the max interval processed by a job to 1 week for the continuous aggregate.

ALTER VIEW contagg_view SET (timescaledb.refresh_lag = '1h', timescaledb.max_interval_per_job = '1 week', timescaledb.refresh_interval = '30m');

TIP:Only WITH options can be modified using the ALTER statement. If you need to change any other parameters, drop the view and create a new one.

REFRESH MATERIALIZED VIEW (Continuous Aggregate) Community edition

The continuous aggregate view can be manually updated by using the REFRESH MATERIALIZED VIEW statement. A background materializer job will run immediately and update the continuous aggregate.

REFRESH MATERIALIZED VIEW <view_name>

Parameters
Name Description
<view_name> Name (optionally schema-qualified) of the continuous aggregate view to be refreshed.
Sample Usage

Update the continuous aggregate view immediately.

REFRESH MATERIALIZED VIEW contagg_view;

TIP:Note that max_interval_per_job and refresh_lag parameter settings are used by the materialization job when the REFRESH is run. So the materialization (of the continuous aggregate) does not necessarily include all the updates to the hypertable.

DROP VIEW (Continuous Aggregate) Community edition

Continuous aggregate views can be dropped using the DROP VIEW statement.

This deletes the hypertable that stores the materialized data for the continuous aggregate; it does not affect the data in the underlying hypertable from which the continuous aggregate is derived (i.e., the raw data). The CASCADE parameter is required for this command.

DROP VIEW <view_name> CASCADE;

Parameters
Name Description
<view_name> Name (optionally schema-qualified) of the continuous aggregate view to be dropped.
Sample Usage

Drop existing continuous aggregate.

DROP VIEW contagg_view CASCADE;

WARNING:CASCADE will drop those objects that depend on the continuous aggregate, such as views that are built on top of the continuous aggregate view.

Automation policies Enterprise edition
Part 4: Automation policies

TimescaleDB includes an automation framework for allowing background tasks to run inside the database, controllable by user-supplied policies. These tasks currently include capabilities around data retention and data reordering for improving query performance.

The following functions allow the administrator to create/remove/alter policies that schedule administrative actions to take place on a hypertable. The actions are meant to implement data retention or perform tasks that will improve query performance on older chunks. Each policy is assigned a scheduled job which will be run in the background to enforce it.

add_drop_chunks_policy() Enterprise edition
Add a policy to drop chunks:
Create a policy to drop chunks older than a given interval of a particular hypertable on a schedule in the background. (See drop_chunks). This implements a data retention policy and will remove data on a schedule. Only one drop_chunks policy may exist per hypertable.

Required Arguments

Name Description
hypertable (REGCLASS) Name of the hypertable to create the policy for.
older_than (INTERVAL) Chunks fully older than this interval when the policy is run will be dropped.

Optional Arguments

Name Description
cascade (BOOLEAN) Boolean on whether to CASCADE the drop on chunks, therefore removing dependent objects on chunks to be removed. Defaults to false.
if_not_exists (BOOLEAN) Set to true to avoid throwing an error if the drop_chunks_policy already exists. A notice is issued instead. Defaults to false.
cascade_to_materializations (BOOLEAN) Set to TRUE to also remove chunk data from any associated continuous aggregates. Set to FALSE to only drop raw chunks (while keeping data in the continuous aggregates). Defaults to NULL, which errors if continuous aggregates exist.

TIP:When dropping data from the raw hypertable while retaining data on a continuous aggregate, the older_than parameter to drop_chunks has to be longer than the timescaledb.ignore_invalidation_older_than parameter on the continuous aggregate. That is because we cannot process invalidations on data regions where the raw data has been dropped.

WARNING:If a drop chunks policy is set up which does not set cascade_to_materializations to either TRUE or FALSE on a hypertable that has a continuous aggregate, the policy will not drop any chunks.

Returns

Column Description
job_id (INTEGER) TimescaleDB background job id created to implement this policy.

Sample Usage

SELECT add_drop_chunks_policy('conditions', INTERVAL '6 months');

creates a data retention policy to discard chunks greater than 6 months old.

remove_drop_chunks_policy() Enterprise edition

Remove a policy to drop chunks of a particular hypertable.
Required Arguments
Name Description
hypertable (REGCLASS) Name of the hypertable from which to remove the policy.
Optional Arguments
Name Description
if_exists (BOOLEAN) Set to true to avoid throwing an error if the drop_chunks_policy does not exist. Defaults to false.
Sample Usage

SELECT remove_drop_chunks_policy('conditions');

removes the existing data retention policy for the conditions table.

add_reorder_policy() Enterprise edition

Create a policy to reorder chunks on a given hypertable index in the background. (See reorder_chunk). Only one reorder policy may exist per hypertable. Only chunks that are the 3rd from the most recent will be reordered to avoid reordering chunks that are still being inserted into.

TIP:Once a chunk has been reordered by the background worker it will not be reordered again. So if one were to insert significant amounts of data into older chunks that have already been reordered, it might be necessary to manually re-run the reorder_chunk function on older chunks, or to drop and re-create the policy if many older chunks have been affected.

Required Arguments

Name Description
hypertable (REGCLASS) Name of the hypertable to create the policy for.
index_name (NAME) Existing index by which to order rows on disk.

Optional Arguments

Name Description
if_not_exists (BOOLEAN) Set to true to avoid throwing an error if the reorder_policy already exists. A notice is issued instead. Defaults to false.

Returns

Column Description
job_id (INTEGER) TimescaleDB background job id created to implement this policy.

Sample Usage

SELECT add_reorder_policy('conditions', 'conditions_device_id_time_idx');

creates a policy to reorder completed chunks by the existing (device_id, time) index. (See reorder_chunk).

remove_reorder_policy() Enterprise edition

Remove a policy to reorder a particular hypertable.
Required Arguments
Name Description
hypertable (REGCLASS) Name of the hypertable from which to remove the policy.
Optional Arguments
Name Description
if_exists (BOOLEAN) Set to true to avoid throwing an error if the reorder_policy does not exist. A notice is issued instead. Defaults to false.
Sample Usage

SELECT remove_reorder_policy('conditions', if_exists => true);

removes the existing reorder policy for the conditions table if it exists.

alter_job_schedule() Community edition

Policy jobs are scheduled to run periodically via a job run in a background worker. You can change the schedule using alter_job_schedule. To alter an existing job, you must refer to it by job_id. The job_id which implements a given policy and its current schedule can be found in views in the timescaledb_information schema corresponding to different types of policies or in the general timescaledb_information.policy_stats view. This view additionally contains information about when each job was last run and other useful statistics for deciding what the new schedule should be.

TIP:Altering the schedule will only change the frequency at which the background worker checks the policy. If you need to change the data retention interval or reorder by a different index, you'll need to remove the policy and add a new one.

Required Arguments

Name Description
job_id (INTEGER) The id of the policy job being modified.

Optional Arguments

Name Description
schedule_interval (INTERVAL) The interval at which the job runs.
max_runtime (INTERVAL) The maximum amount of time the background job scheduler will allow the job to run before stopping it.
max_retries (INTEGER) The number of times the job will be retried should it fail.
retry_period (INTERVAL) The amount of time the scheduler will wait between failed retries.
if_exists (BOOLEAN) Set to true to avoid throwing an error if the job does not exist. A notice is issued instead. Defaults to false.
next_start (TIMESTAMPTZ) The next time at which to run the job. The job can be paused by setting this value to 'infinity' (and restarted with a value of now()).

Returns

Column Description
schedule_interval (INTERVAL) The interval at which the job runs.
max_runtime (INTERVAL) The maximum amount of time the background job scheduler will allow the job to run before stopping it.
max_retries (INTEGER) The number of times the job will be retried should it fail.
retry_period (INTERVAL) The amount of time the scheduler will wait between failed retries.

Sample Usage

SELECT alter_job_schedule(job_id, schedule_interval => INTERVAL '2 days')
FROM timescaledb_information.reorder_policies
WHERE hypertable = 'conditions';

reschedules the reorder policy job for the conditions table so that it runs every two days.
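A job can also be paused rather than removed, per the next_start parameter above; this sketch sets next_start to 'infinity' for the same reorder policy (pass now() to resume):

SELECT alter_job_schedule(job_id, next_start => 'infinity')
FROM timescaledb_information.reorder_policies
WHERE hypertable = 'conditions';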

Analytics**+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++**
Part 5: Analytics functions

first()

The first aggregate allows you to get the value of one column as ordered by another. For example, first(temperature, time) will return the earliest temperature value based on time within an aggregate group.
Required Arguments
Name Description
value The value to return (anyelement)
time The timestamp to use for comparison (TIMESTAMP/TIMESTAMPTZ or integer type)
Sample Usage

Get the earliest temperature by device_id:

SELECT device_id, first(temp, time)
FROM metrics
GROUP BY device_id;

WARNING:The last and first commands do not use indexes, and instead perform a sequential scan through their groups. They are primarily used for ordered selection within a GROUP BY aggregate, and not as an alternative to an ORDER BY time DESC LIMIT 1 clause to find the latest value (which will use indexes).
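For comparison, a sketch of the index-friendly form the warning refers to, fetching the single latest reading for one device (the device_id = 1 filter is illustrative):

SELECT temp
FROM metrics
WHERE device_id = 1
ORDER BY time DESC
LIMIT 1;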

histogram()

The histogram() function represents the distribution of a set of values as an array of equal-width buckets. It partitions the dataset into a specified number of buckets (nbuckets) spanning the given min and max values.

The return value is an array containing nbuckets+2 buckets, with the middle nbuckets bins for values in the stated range, the first bucket at the head of the array for values under the lower min bound, and the last bucket for values greater than or equal to the max bound. Each bucket is inclusive on its lower bound and exclusive on its upper bound. Therefore, values equal to the min are included in the bucket starting with min, but values equal to the max are in the last bucket. For example, with min 20, max 60, and nbuckets 5 (as in the sample below), the middle buckets cover [20,28), [28,36), [36,44), [44,52), and [52,60), and the returned array has 7 entries.
Required Arguments
Name Description
value A set of values to partition into a histogram
min The histogram’s lower bound used in bucketing (inclusive)
max The histogram’s upper bound used in bucketing (exclusive)
nbuckets The integer value for the number of histogram buckets (partitions)
Sample Usage

A simple bucketing of each device's battery levels from the readings dataset:

SELECT device_id, histogram(battery_level, 20, 60, 5)
FROM readings
GROUP BY device_id
LIMIT 10;

The expected output:

 device_id  |          histogram
------------+------------------------------
 demo000000 | {0,0,0,7,215,206,572}
 demo000001 | {0,12,173,112,99,145,459}
 demo000002 | {0,0,187,167,68,229,349}
 demo000003 | {197,209,127,221,106,112,28}
 demo000004 | {0,0,0,0,0,39,961}
 demo000005 | {12,225,171,122,233,80,157}
 demo000006 | {0,78,176,170,8,40,528}
 demo000007 | {0,0,0,126,239,245,390}
 demo000008 | {0,0,311,345,116,228,0}
 demo000009 | {295,92,105,50,8,8,442}

interpolate() Community edition

The interpolate function does linear interpolation for missing values. It can only be used in an aggregation query with time_bucket_gapfill. The interpolate function call cannot be nested inside other function calls.
Required Arguments
Name Description
value The value to interpolate (int2/int4/int8/float4/float8)
Optional Arguments
Name Description
prev The lookup expression for values before the gapfill time range (record)
next The lookup expression for values after the gapfill time range (record)

Because the interpolation function relies on having values before and after each bucketed period to compute the interpolated value, it might not have enough data to calculate the interpolation for the first and last time bucket if those buckets do not otherwise contain valid values. For example, the interpolation would require looking before the first time bucket period, yet the query's outer time predicate WHERE time > … normally restricts the function to only evaluate values within this time range. Thus, the prev and next expressions tell the function how to look for values outside of the range specified by the time predicate. These expressions will only be evaluated when no suitable value is returned by the outer query (i.e., the first and/or last bucket in the queried time range is empty). The returned record for prev and next needs to be a (time, value) tuple. The datatype of time needs to be the same as the time datatype in the time_bucket_gapfill call. The datatype of value needs to be the same as the value datatype of the interpolate call.
Sample Usage

Get the temperature every day for each device over the last week, interpolating for missing readings:

SELECT
time_bucket_gapfill('1 day', time, now() - interval '1 week', now()) AS day,
device_id,
avg(temperature) AS value,
interpolate(avg(temperature))
FROM metrics
WHERE time > now() - interval '1 week'
GROUP BY day, device_id
ORDER BY day;

          day           | device_id | value | interpolate
------------------------+-----------+-------+-------------
 2019-01-10 01:00:00+01 |         1 |       |
 2019-01-11 01:00:00+01 |         1 |   5.0 |         5.0
 2019-01-12 01:00:00+01 |         1 |       |         6.0
 2019-01-13 01:00:00+01 |         1 |   7.0 |         7.0
 2019-01-14 01:00:00+01 |         1 |       |         7.5
 2019-01-15 01:00:00+01 |         1 |   8.0 |         8.0
 2019-01-16 01:00:00+01 |         1 |   9.0 |         9.0
(7 rows)

Get the average temperature every day for each device over the last 7 days interpolating for missing readings with lookup queries for values before and after the gapfill time range:

SELECT
time_bucket_gapfill('1 day', time, now() - interval '1 week', now()) AS day,
device_id,
avg(temperature) AS value,
interpolate(avg(temperature),
(SELECT (time,temperature) FROM metrics m2 WHERE m2.time < now() - interval '1 week' AND m.device_id = m2.device_id ORDER BY time DESC LIMIT 1),
(SELECT (time,temperature) FROM metrics m2 WHERE m2.time > now() AND m.device_id = m2.device_id ORDER BY time ASC LIMIT 1)
) AS interpolate
FROM metrics m
WHERE time > now() - interval '1 week'
GROUP BY day, device_id
ORDER BY day;

          day           | device_id | value | interpolate
------------------------+-----------+-------+-------------
 2019-01-10 01:00:00+01 |         1 |       |         3.0
 2019-01-11 01:00:00+01 |         1 |   5.0 |         5.0
 2019-01-12 01:00:00+01 |         1 |       |         6.0
 2019-01-13 01:00:00+01 |         1 |   7.0 |         7.0
 2019-01-14 01:00:00+01 |         1 |       |         7.5
 2019-01-15 01:00:00+01 |         1 |   8.0 |         8.0
 2019-01-16 01:00:00+01 |         1 |   9.0 |         9.0
(7 rows)

last()

The last aggregate allows you to get the value of one column as ordered by another. For example, last(temperature, time) will return the latest temperature value based on time within an aggregate group.
Required Arguments
Name Description
value The value to return (anyelement)
time The timestamp to use for comparison (TIMESTAMP/TIMESTAMPTZ or integer type)
Sample Usage

Get the temperature every 5 minutes for each device over the past day:

SELECT device_id, time_bucket('5 minutes', time) AS interval,
last(temp, time)
FROM metrics
WHERE time > now() - interval '1 day'
GROUP BY device_id, interval
ORDER BY interval DESC;

WARNING:The last and first commands do not use indexes, and instead perform a sequential scan through their groups. They are primarily used for ordered selection within a GROUP BY aggregate, and not as an alternative to an ORDER BY time DESC LIMIT 1 clause to find the latest value (which will use indexes).

locf() Community edition

The locf function (last observation carried forward) allows you to carry the last seen value in an aggregation group forward. It can only be used in an aggregation query with time_bucket_gapfill. The locf function call cannot be nested inside other function calls.
Required Arguments
Name Description
value The value to carry forward (anyelement)
Optional Arguments
Name Description
prev The lookup expression for values before gapfill start (anyelement)
treat_null_as_missing Ignore NULL values in locf and only carry non-NULL values forward (see the sketch at the end of this section)

Because the locf function relies on having values before each bucketed period to carry forward, it might not have enough data to fill in a value for the first bucket if it does not contain a value. For example, the function would need to look before this first time bucket period, yet the query's outer time predicate WHERE time > … normally restricts the function to only evaluate values within this time range. Thus, the prev expression tells the function how to look for values outside of the range specified by the time predicate. The prev expression will only be evaluated when no previous value is returned by the outer query (i.e., the first bucket in the queried time range is empty).
Sample Usage

Get the average temperature every day for each device over the last 7 days carrying forward the last value for missing readings:

SELECT
time_bucket_gapfill('1 day', time, now() - interval '1 week', now()) AS day,
device_id,
avg(temperature) AS value,
locf(avg(temperature))
FROM metrics
WHERE time > now() - interval '1 week'
GROUP BY day, device_id
ORDER BY day;

          day           | device_id | value | locf
------------------------+-----------+-------+------
 2019-01-10 01:00:00+01 |         1 |       |
 2019-01-11 01:00:00+01 |         1 |   5.0 |  5.0
 2019-01-12 01:00:00+01 |         1 |       |  5.0
 2019-01-13 01:00:00+01 |         1 |   7.0 |  7.0
 2019-01-14 01:00:00+01 |         1 |       |  7.0
 2019-01-15 01:00:00+01 |         1 |   8.0 |  8.0
 2019-01-16 01:00:00+01 |         1 |   9.0 |  9.0
(7 rows)

Get the average temperature every day for each device over the last 7 days, carrying forward the last value for missing readings with an out-of-bounds lookup:

SELECT
time_bucket_gapfill('1 day', time, now() - interval '1 week', now()) AS day,
device_id,
avg(temperature) AS value,
locf(
avg(temperature),
(SELECT temperature FROM metrics m2 WHERE m2.time < now() - interval '1 week' AND m.device_id = m2.device_id ORDER BY time DESC LIMIT 1)
)
FROM metrics m
WHERE time > now() - interval '1 week'
GROUP BY day, device_id
ORDER BY day;

          day           | device_id | value | locf
------------------------+-----------+-------+------
 2019-01-10 01:00:00+01 |         1 |       |  1.0
 2019-01-11 01:00:00+01 |         1 |   5.0 |  5.0
 2019-01-12 01:00:00+01 |         1 |       |  5.0
 2019-01-13 01:00:00+01 |         1 |   7.0 |  7.0
 2019-01-14 01:00:00+01 |         1 |       |  7.0
 2019-01-15 01:00:00+01 |         1 |   8.0 |  8.0
 2019-01-16 01:00:00+01 |         1 |   9.0 |  9.0
(7 rows)
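As a further sketch, the treat_null_as_missing option described above can be combined with gapfilling so that NULL readings inside the range are also replaced by the last non-NULL value:

SELECT
time_bucket_gapfill('1 day', time, now() - interval '1 week', now()) AS day,
device_id,
locf(avg(temperature), treat_null_as_missing => true)
FROM metrics
WHERE time > now() - interval '1 week'
GROUP BY day, device_id
ORDER BY day;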

time_bucket()

This is a more powerful version of the standard PostgreSQL date_trunc function. It allows for arbitrary time intervals instead of the second, minute, hour, etc. provided by date_trunc. The return value is the bucket’s start time. Below is necessary information for using it effectively.

TIP:TIMESTAMPTZ arguments are bucketed by the time at UTC, so the alignment of buckets is on UTC time. One consequence of this is that daily buckets are aligned to midnight UTC, not local time. If the user wants buckets aligned by local time, the TIMESTAMPTZ input should be cast to TIMESTAMP (such a cast converts the value to local time) before being passed to time_bucket (see example below). Note that along daylight savings time boundaries the amount of data aggregated into a bucket after such a cast is irregular: for example, if the bucket_width is 2 hours, the number of UTC hours bucketed by local time on daylight savings time boundaries can be either 3 hours or 1 hour.

Required Arguments
Name Description
bucket_width A PostgreSQL time interval for how long each bucket is (interval)
time The timestamp to bucket (timestamp/timestamptz/date)
Optional Arguments
Name Description
offset The time interval to offset all buckets by (interval)
origin Buckets are aligned relative to this timestamp (timestamp/timestamptz/date)
For Integer Time Inputs
Required Arguments
Name Description
bucket_width The bucket width (integer)
time The timestamp to bucket (integer)
Optional Arguments
Name Description
offset The amount to offset all buckets by (integer)
Sample Usage

Simple 5-minute averaging:

SELECT time_bucket('5 minutes', time) AS five_min, avg(cpu)
FROM metrics
GROUP BY five_min
ORDER BY five_min DESC LIMIT 10;

To report the middle of the bucket, instead of the left edge:

SELECT time_bucket('5 minutes', time) + '2.5 minutes'
AS five_min, avg(cpu)
FROM metrics
GROUP BY five_min
ORDER BY five_min DESC LIMIT 10;

For rounding, move the alignment so that the middle of the bucket is at the 5 minute mark (and report the middle of the bucket):

SELECT time_bucket('5 minutes', time, '-2.5 minutes') + '2.5 minutes'
AS five_min, avg(cpu)
FROM metrics
GROUP BY five_min
ORDER BY five_min DESC LIMIT 10;

To shift the alignment of the buckets you can use the origin parameter (passed as a timestamp, timestamptz, or date type). In this example, we shift the start of the week to a Sunday (the default is a Monday).

SELECT time_bucket('1 week', timetz, TIMESTAMPTZ '2017-12-31')
AS one_week, avg(cpu)
FROM metrics
WHERE time > TIMESTAMPTZ '2017-12-01' AND time < TIMESTAMPTZ '2018-01-03'
GROUP BY one_week
ORDER BY one_week DESC LIMIT 10;

The value of the origin parameter we used in this example was 2017-12-31, a Sunday within the period being analyzed. However, the origin provided to the function can be before, during, or after the data being analyzed. All buckets are calculated relative to this origin. So, in this example, any Sunday could have been used. Note that because time < TIMESTAMPTZ '2018-01-03' in this example, the last bucket would have only 4 days of data.

Bucketing a TIMESTAMPTZ at local time instead of UTC (see note above):

SELECT time_bucket('2 hours', timetz::TIMESTAMP)
AS two_hour, avg(cpu)
FROM metrics
GROUP BY two_hour
ORDER BY two_hour DESC LIMIT 10;

Note that the above cast to TIMESTAMP converts the time to local time according to the server’s timezone setting.
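For integer time columns (per the integer arguments above) the call pattern is the same; a sketch assuming a hypothetical hypertable events whose time column is a bigint, bucketing into widths of 10:

SELECT time_bucket(10, time) AS bucket, avg(cpu)
FROM events
GROUP BY bucket
ORDER BY bucket DESC LIMIT 10;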

WARNING:For users upgrading from a version before 1.0.0, please note that the default origin was moved from 2000-01-01 (Saturday) to 2000-01-03 (Monday) between versions 0.12.1 and 1.0.0. This change was made to make time_bucket compliant with the ISO standard for Monday as the start of a week. This should only affect multi-day calls to time_bucket. The old behavior can be reproduced by passing 2000-01-01 as the origin parameter to time_bucket.

time_bucket_gapfill() Community edition

The time_bucket_gapfill function works similarly to time_bucket but also activates gap filling for the interval between start and finish. It can only be used in an aggregation query. Values outside of start and finish will pass through, but no gap filling will be done outside of the specified range.
Required Arguments
Name Description
bucket_width A PostgreSQL time interval for how long each bucket is (interval)
time The timestamp to bucket (timestamp/timestamptz/date)
Optional Arguments
Name Description
start The start of the gapfill period (timestamp/timestamptz/date)
finish The end of the gapfill period (timestamp/timestamptz/date)

Starting with version 1.3.0, start and finish are optional arguments and will be inferred from the WHERE clause if not supplied as arguments.
For Integer Time Inputs
Required Arguments
Name Description
bucket_width integer interval for how long each bucket is (int2/int4/int8)
time The timestamp to bucket (int2/int4/int8)
Optional Arguments
Name Description
start The start of the gapfill period (int2/int4/int8)
finish The end of the gapfill period (int2/int4/int8)

Starting with version 1.3.0, start and finish are optional arguments and will be inferred from the WHERE clause if not supplied as arguments (an integer example appears at the end of this section).
Sample Usage

Get the metric value every day over the last 7 days:

SELECT
time_bucket_gapfill('1 day', time) AS day,
device_id,
avg(value) AS value
FROM metrics
WHERE time > now() - interval '1 week' AND time < now()
GROUP BY day, device_id
ORDER BY day;

          day           | device_id | value
------------------------+-----------+-------
 2019-01-10 01:00:00+01 |         1 |
 2019-01-11 01:00:00+01 |         1 |   5.0
 2019-01-12 01:00:00+01 |         1 |
 2019-01-13 01:00:00+01 |         1 |   7.0
 2019-01-14 01:00:00+01 |         1 |
 2019-01-15 01:00:00+01 |         1 |   8.0
 2019-01-16 01:00:00+01 |         1 |   9.0
(7 rows)

Get the metric value every day over the last 7 days, carrying forward the previously seen value if none is available in an interval:

SELECT
time_bucket_gapfill('1 day', time) AS day,
device_id,
avg(value) AS value,
locf(avg(value))
FROM metrics
WHERE time > now() - interval '1 week' AND time < now()
GROUP BY day, device_id
ORDER BY day;

          day           | device_id | value | locf
------------------------+-----------+-------+------
 2019-01-10 01:00:00+01 |         1 |       |
 2019-01-11 01:00:00+01 |         1 |   5.0 |  5.0
 2019-01-12 01:00:00+01 |         1 |       |  5.0
 2019-01-13 01:00:00+01 |         1 |   7.0 |  7.0
 2019-01-14 01:00:00+01 |         1 |       |  7.0
 2019-01-15 01:00:00+01 |         1 |   8.0 |  8.0
 2019-01-16 01:00:00+01 |         1 |   9.0 |  9.0
(7 rows)

Get the metric value every day over the last 7 days, interpolating missing values:

SELECT
time_bucket_gapfill('1 day', time) AS day,
device_id,
avg(value) AS value,
interpolate(avg(value))
FROM metrics
WHERE time > now() - interval '1 week' AND time < now()
GROUP BY day, device_id
ORDER BY day;

          day           | device_id | value | interpolate
------------------------+-----------+-------+-------------
 2019-01-10 01:00:00+01 |         1 |       |
 2019-01-11 01:00:00+01 |         1 |   5.0 |         5.0
 2019-01-12 01:00:00+01 |         1 |       |         6.0
 2019-01-13 01:00:00+01 |         1 |   7.0 |         7.0
 2019-01-14 01:00:00+01 |         1 |       |         7.5
 2019-01-15 01:00:00+01 |         1 |   8.0 |         8.0
 2019-01-16 01:00:00+01 |         1 |   9.0 |         9.0
(7 rows)
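For integer time columns, a sketch with explicit start and finish bounds, assuming a hypothetical events hypertable with a bigint time column:

SELECT
time_bucket_gapfill(10, time, 0, 100) AS bucket,
device_id,
avg(value) AS value
FROM events
WHERE time >= 0 AND time < 100
GROUP BY bucket, device_id
ORDER BY bucket;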

Utilities/Statistics**+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++**
Part 6: Utilities/Statistics functions
timescaledb_information.hypertable

Get information about hypertables.
Available Columns
Name Description
table_schema Schema name of the hypertable.
table_name Table name of the hypertable.
table_owner Owner of the hypertable.
num_dimensions Number of dimensions.
num_chunks Number of chunks.
table_size Disk space used by hypertable
index_size Disk space used by indexes
toast_size Disk space of toast tables
total_size Total disk space used by the specified table, including all indexes and TOAST data
Sample Usage

Get information about all hypertables.

SELECT * FROM timescaledb_information.hypertable;

 table_schema | table_name | table_owner | num_dimensions | num_chunks | table_size | index_size | toast_size | total_size
--------------+------------+-------------+----------------+------------+------------+------------+------------+------------
 public       | metrics    | postgres    |              1 |          5 | 99 MB      | 96 MB      |            | 195 MB
 public       | devices    | postgres    |              1 |          1 | 8192 bytes | 16 kB      |            | 24 kB
(2 rows)

Check whether a table is a hypertable.

SELECT * FROM timescaledb_information.hypertable
WHERE table_schema='public' AND table_name='metrics';

 table_schema | table_name | table_owner | num_dimensions | num_chunks | table_size | index_size | toast_size | total_size
--------------+------------+-------------+----------------+------------+------------+------------+------------+------------
 public       | metrics    | postgres    |              1 |          5 | 99 MB      | 96 MB      |            | 195 MB
(1 row)

timescaledb_information.license

Get information about current license.
Available Columns
Name Description
edition License key type (apache_only, community, enterprise)
expired Expiration status of license key (bool)
expiration_time Time of license key expiration
Sample Usage

Get information about current license.

SELECT * FROM timescaledb_information.license;

  edition   | expired |    expiration_time
------------+---------+------------------------
 enterprise | f       | 2019-02-15 13:44:53-05
(1 row)

timescaledb_information.compressed_chunk_stats

Get statistics about chunk compression.
Available Columns
Name Description
hypertable_name (REGCLASS) the name of the hypertable
chunk_name (REGCLASS) the name of the chunk
compression_status (TEXT) 'Compressed' or 'Uncompressed' depending on the status of the chunk
uncompressed_heap_bytes (TEXT) human-readable size of the heap before compression (NULL if currently uncompressed)
uncompressed_index_bytes (TEXT) human-readable size of all the indexes before compression (NULL if currently uncompressed)
uncompressed_toast_bytes (TEXT) human-readable size of the TOAST table before compression (NULL if currently uncompressed)
uncompressed_total_bytes (TEXT) human-readable size of the entire table (heap+indexes+toast) before compression (NULL if currently uncompressed)
compressed_heap_bytes (TEXT) human-readable size of the heap after compression (NULL if currently uncompressed)
compressed_index_bytes (TEXT) human-readable size of all the indexes after compression (NULL if currently uncompressed)
compressed_toast_bytes (TEXT) human-readable size of the TOAST table after compression (NULL if currently uncompressed)
compressed_total_bytes (TEXT) human-readable size of the entire table (heap+indexes+toast) after compression (NULL if currently uncompressed)
Sample Usage

SELECT * FROM timescaledb_information.compressed_chunk_stats;
-[ RECORD 1 ]------------+---------------------------------------
hypertable_name | foo
chunk_name | _timescaledb_internal._hyper_1_1_chunk
compression_status | Uncompressed
uncompressed_heap_bytes |
uncompressed_index_bytes |
uncompressed_toast_bytes |
uncompressed_total_bytes |
compressed_heap_bytes |
compressed_index_bytes |
compressed_toast_bytes |
compressed_total_bytes |
-[ RECORD 2 ]------------+---------------------------------------
hypertable_name | foo
chunk_name | _timescaledb_internal._hyper_1_2_chunk
compression_status | Compressed
uncompressed_heap_bytes | 8192 bytes
uncompressed_index_bytes | 32 kB
uncompressed_toast_bytes | 0 bytes
uncompressed_total_bytes | 40 kB
compressed_heap_bytes | 8192 bytes
compressed_index_bytes | 32 kB
compressed_toast_bytes | 8192 bytes
compressed_total_bytes | 48 kB

timescaledb_information.compressed_hypertable_stats

Get statistics about hypertable compression.
Available Columns
Name Description
hypertable_name (REGCLASS) the name of the hypertable
total_chunks (INTEGER) the number of chunks used by the hypertable
number_compressed_chunks (INTEGER) the number of chunks used by the hypertable that are currently compressed
uncompressed_heap_bytes (TEXT) human-readable size of the heap before compression (NULL if currently uncompressed)
uncompressed_index_bytes (TEXT) human-readable size of all the indexes before compression (NULL if currently uncompressed)
uncompressed_toast_bytes (TEXT) human-readable size of the TOAST table before compression (NULL if currently uncompressed)
uncompressed_total_bytes (TEXT) human-readable size of the entire table (heap+indexes+toast) before compression (NULL if currently uncompressed)
compressed_heap_bytes (TEXT) human-readable size of the heap after compression (NULL if currently uncompressed)
compressed_index_bytes (TEXT) human-readable size of all the indexes after compression (NULL if currently uncompressed)
compressed_toast_bytes (TEXT) human-readable size of the TOAST table after compression (NULL if currently uncompressed)
compressed_total_bytes (TEXT) human-readable size of the entire table (heap+indexes+toast) after compression (NULL if currently uncompressed)
Sample Usage

SELECT * FROM timescaledb_information.compressed_hypertable_stats;
-[ RECORD 1 ]------------+-----------
hypertable_name | foo
total_chunks | 4
number_compressed_chunks | 1
uncompressed_heap_bytes | 8192 bytes
uncompressed_index_bytes | 32 kB
uncompressed_toast_bytes | 0 bytes
uncompressed_total_bytes | 40 kB
compressed_heap_bytes | 8192 bytes
compressed_index_bytes | 32 kB
compressed_toast_bytes | 8192 bytes
compressed_total_bytes | 48 kB

timescaledb_information.continuous_aggregates

Get metadata and settings information for continuous aggregates.
Available Columns
Name Description
view_name User supplied name for continuous aggregate view
view_owner Owner of the continuous aggregate view
refresh_lag Amount by which the materialization for the continuous aggregate lags behind the current time
refresh_interval Interval between updates of the continuous aggregate materialization
max_interval_per_job Maximum amount of data processed by a materialization job in a single run
materialization_hypertable Name of the underlying materialization table
view_definition SELECT query for continuous aggregate view
Sample Usage

select * from timescaledb_information.continuous_aggregates;
-[ RECORD 1 ]--------------+--------------------------------------------------
view_name | contagg_view
view_owner | postgres
refresh_lag | 2
refresh_interval | 12:00:00
max_interval_per_job | 20
materialization_hypertable | _timescaledb_internal._materialized_hypertable_2
view_definition | SELECT foo.a, +
| count(foo.b) AS countb +
| FROM foo +
| GROUP BY (time_bucket(1, foo.a)), foo.a;

-- description of foo
\d foo
Table "public.foo"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 a      | integer |           | not null |
 b      | integer |           |          |
 c      | integer |           |          |

timescaledb_information.continuous_aggregate_stats

Get information about background jobs and statistics related to continuous aggregates.
Available Columns
Name Description
view_name User supplied name for continuous aggregate.
completed_threshold Completed threshold for the last materialization job.
invalidation_threshold Invalidation threshold set by the latest materialization job
last_run_started_at Start time of the last job
last_run_status Whether the last run succeeded or failed
job_status Status of the materialization job. Valid values are 'Running' and 'Scheduled'
last_run_duration Time taken by the last materialization job
next_scheduled_run Start time of the next materialization job
total_runs The total number of runs of this job
total_successes The total number of times this job succeeded
total_failures The total number of times this job failed
total_crashes The total number of times this job crashed
Sample Usage

select * from timescaledb_information.continuous_aggregate_stats;
-[ RECORD 1 ]----------+------------------------------
view_name | contagg_view
completed_threshold | 1
invalidation_threshold | 1
job_id | 1003
last_run_started_at | 2019-07-03 15:00:26.016018-04
last_run_status | Success
job_status | scheduled
last_run_duration | 00:00:00.039163
next_scheduled_run | 2019-07-03 15:00:56.055181-04
total_runs | 3
total_successes | 3
total_failures | 0
total_crashes | 0

timescaledb_information.drop_chunks_policies

Shows information about drop_chunks policies that have been created by the user. (See add_drop_chunks_policy for more information about drop_chunks policies).
Available Columns
Name Description
hypertable (REGCLASS) The name of the hypertable on which the policy is applied
older_than (INTERVAL) Chunks fully older than this amount of time will be dropped when the policy is run
cascade (BOOLEAN) Whether the policy will be run with the cascade option turned on, which will cause dependent objects to be dropped as well as chunks.
job_id (INTEGER) The id of the background job set up to implement the drop_chunks policy
schedule_interval (INTERVAL) The interval at which the job runs
max_runtime (INTERVAL) The maximum amount of time the job will be allowed to run by the background worker scheduler before it is stopped
max_retries (INTEGER) The number of times the job will be retried should it fail
retry_period (INTERVAL) The amount of time the scheduler will wait between retries of the job on failure
Sample Usage

Get information about drop_chunks policies.

SELECT * FROM timescaledb_information.drop_chunks_policies;

 hypertable | older_than | cascade | job_id | schedule_interval | max_runtime | max_retries | retry_period
------------+------------+---------+--------+-------------------+-------------+-------------+--------------
 conditions | @ 4 mons   | t       |   1001 | @ 1 sec           | @ 5 mins    |          -1 | @ 12 hours
(1 row)

timescaledb_information.reorder_policies

Shows information about reorder policies that have been created by the user. (See add_reorder_policy for more information about reorder policies).
Available Columns
Name Description
hypertable (REGCLASS) The name of the hypertable on which the policy is applied
hypertable_index_name (NAME) The name of the index used to order rows on disk when the policy is run
job_id (INTEGER) The id of the background job set up to implement the reorder policy
schedule_interval (INTERVAL) The interval at which the job runs
max_runtime (INTERVAL) The maximum amount of time the job will be allowed to run by the background worker scheduler before it is stopped
max_retries (INTEGER) The number of times the job will be retried should it fail
retry_period (INTERVAL) The amount of time the scheduler will wait between retries of the job on failure
Sample Usage

Get information about reorder policies.

SELECT * FROM timescaledb_information.reorder_policies;

 hypertable |     hypertable_index_name     | job_id | schedule_interval | max_runtime | max_retries | retry_period
------------+-------------------------------+--------+-------------------+-------------+-------------+--------------
 conditions | conditions_device_id_time_idx |   1000 | @ 4 days          | @ 0         |          -1 | @ 1 day
(1 row)

timescaledb_information.policy_stats

Shows information and statistics about policies created to manage data retention and other administrative tasks on hypertables. (See policies). The statistics include information useful for administering jobs and determining whether they ought to be rescheduled, such as when and whether the background job used to implement the policy succeeded and when it is scheduled to run next.
Available Columns
Name Description
hypertable (REGCLASS) The name of the hypertable on which the policy is applied
job_id (INTEGER) The id of the background job created to implement the policy
job_type (TEXT) The type of policy the job was created to implement
last_run_success (BOOLEAN) Whether the last run succeeded or failed
last_finish (TIMESTAMPTZ) The time the last run finished
last_start (TIMESTAMPTZ) The time the last run started
next_start (TIMESTAMPTZ) The time the next run will start
total_runs (INTEGER) The total number of runs of this job
total_failures (INTEGER) The total number of times this job failed
Sample Usage

Get information about statistics on created policies.

SELECT * FROM timescaledb_information.policy_stats;

 hypertable | job_id |  job_type   | last_run_success |         last_finish          |          last_start          |          next_start          | total_runs | total_failures
------------+--------+-------------+------------------+------------------------------+------------------------------+------------------------------+------------+----------------
 conditions |   1001 | drop_chunks | t                | Fri Dec 31 16:00:01 1999 PST | Fri Dec 31 16:00:01 1999 PST | Fri Dec 31 16:00:02 1999 PST |          2 |              0
(1 row)

timescaledb.license_key
Sample Usage

View current license key.

SHOW timescaledb.license_key;

chunk_relation_size()

Get the relation size of the chunks of a hypertable.
Required Arguments
Name Description
main_table Identifier of hypertable to get chunk relation sizes for.
Returns
Column Description
chunk_id TimescaleDB id of a chunk
chunk_table Table used for the chunk
partitioning_columns Partitioning column names
partitioning_column_types Types of partitioning columns
partitioning_hash_functions Hash functions of partitioning columns
dimensions Partitioning dimension names
ranges Partitioning ranges for each dimension
table_bytes Disk space used by main_table
index_bytes Disk space used by indexes
toast_bytes Disk space of toast tables
total_bytes Disk space used in total
Sample Usage

SELECT * FROM chunk_relation_size('conditions');

or, to reduce the output, a common use is:

SELECT chunk_table, table_bytes, index_bytes, total_bytes
FROM chunk_relation_size('conditions');

The expected output:

                chunk_table                 | table_bytes | index_bytes | total_bytes
--------------------------------------------+-------------+-------------+-------------
 "_timescaledb_internal"."_hyper_1_1_chunk" |    29220864 |    37773312 |    67002368
 "_timescaledb_internal"."_hyper_1_2_chunk" |    59252736 |    81297408 |   140558336
Where chunk_table is the table that contains the data, table_bytes is the size of that table, index_bytes is the size of the indexes of the table, and total_bytes is the size of the table with indexes.
chunk_relation_size_pretty()

Get the relation size of the chunks of a hypertable.
Required Arguments
Name Description
main_table Identifier of hypertable to get chunk relation sizes for.
Returns
Column Description
chunk_id TimescaleDB id of a chunk
chunk_table Table used for the chunk
partitioning_columns Partitioning column names
partitioning_column_types Types of partitioning columns
partitioning_hash_functions Hash functions of partitioning columns
ranges Partitioning ranges for each dimension
table_size Pretty output of table_bytes
index_size Pretty output of index_bytes
toast_size Pretty output of toast_bytes
total_size Pretty output of total_bytes
Sample Usage

SELECT * FROM chunk_relation_size_pretty('conditions');

or, to reduce the output, a common use is:

SELECT chunk_table, table_size, index_size, total_size
FROM chunk_relation_size_pretty('conditions');

The expected output:

                chunk_table                 | table_size | index_size | total_size
--------------------------------------------+------------+------------+------------
 "_timescaledb_internal"."_hyper_1_1_chunk" | 28 MB      | 36 MB      | 64 MB
 "_timescaledb_internal"."_hyper_1_2_chunk" | 57 MB      | 78 MB      | 134 MB

Where chunk_table is the table that contains the data, table_size is the size of that table, index_size is the size of the indexes of the table, and total_size is the size of the table with indexes.
get_telemetry_report()

If background telemetry is enabled, returns the string sent to our servers. If telemetry is not enabled, outputs an INFO message affirming telemetry is disabled and returns a NULL report.
Optional Arguments
Name Description
always_display_report Set to true to always view the report, even if telemetry is disabled
Sample Usage

If telemetry is enabled, view the telemetry report.

SELECT get_telemetry_report();

If telemetry is disabled, view the telemetry report locally.

SELECT get_telemetry_report(always_display_report := true);

hypertable_approximate_row_count()

Get approximate row count for hypertable(s) based on catalog estimates (see the note after the sample output below).
Optional Arguments
Name Description
main_table Hypertable to get row count for. If omitted, all hypertables are returned.
Sample Usage

Get the approximate row count for a single hypertable.

SELECT * FROM hypertable_approximate_row_count('conditions');

Get the approximate row count for all hypertables.

SELECT * FROM hypertable_approximate_row_count();

The expected output:

 schema_name | table_name | row_estimate
-------------+------------+--------------
 public      | conditions |       240000
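Because these numbers come from the catalog statistics, they can drift as data is inserted; running a standard PostgreSQL ANALYZE first refreshes the statistics the estimate is based on:

ANALYZE conditions;
SELECT * FROM hypertable_approximate_row_count('conditions');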

hypertable_relation_size()

Get the relation size of a hypertable, like pg_relation_size(hypertable).
Required Arguments
Name Description
main_table Identifier of hypertable to get relation size for.
Returns
Column Description
table_bytes Disk space used by main_table (like pg_relation_size(main_table))
index_bytes Disk space used by indexes
toast_bytes Disk space of toast tables
total_bytes Total disk space used by the specified table, including all indexes and TOAST data
Sample Usage

SELECT * FROM hypertable_relation_size('conditions');

or, to reduce the output, a common use is:

SELECT table_bytes, index_bytes, toast_bytes, total_bytes
FROM hypertable_relation_size('conditions');

The expected output:

 table_bytes | index_bytes | toast_bytes | total_bytes
-------------+-------------+-------------+-------------
  1227661312 |  1685979136 |      180224 |  2913820672

hypertable_relation_size_pretty()

Get the relation size of a hypertable, like pg_relation_size(hypertable).
Required Arguments
Name Description
main_table Identifier of hypertable to get relation size for.
Returns
Column Description
table_size Pretty output of table_bytes
index_size Pretty output of index_bytes
toast_size Pretty output of toast_bytes
total_size Pretty output of total_bytes
Sample Usage

SELECT * FROM hypertable_relation_size_pretty('conditions');

or, to reduce the output, a common use is:

SELECT table_size, index_size, toast_size, total_size
FROM hypertable_relation_size_pretty('conditions');

The expected output:

 table_size | index_size | toast_size | total_size
------------+------------+------------+------------
 1171 MB    | 1608 MB    | 176 kB     | 2779 MB

indexes_relation_size()

Get sizes of indexes on a hypertable.
Required Arguments
Name Description
main_table Identifier of hypertable to get indexes size for.
Returns
Column Description
index_name Index on hypertable
total_bytes Size of index on disk
Sample Usage

SELECT * FROM indexes_relation_size('conditions');

The expected output:

              index_name               | total_bytes
---------------------------------------+-------------
 public.conditions_device_id_time_idx  |  1198620672
 public.conditions_time_idx            |   487358464

indexes_relation_size_pretty()

Get sizes of indexes on a hypertable.
Required Arguments
Name Description
main_table Identifier of hypertable to get indexes size for.
Returns
Column Description
index_name Index on hypertable
total_size Pretty output of total_bytes
Sample Usage

SELECT * FROM indexes_relation_size_pretty('conditions');

The expected output:

              index_name               | total_size
---------------------------------------+------------
 public.conditions_device_id_time_idx  | 1143 MB
 public.conditions_time_idx            | 465 MB

show_tablespaces()

Show the tablespaces attached to a hypertable.
Required Arguments
Name Description
hypertable Identifier of hypertable to show attached tablespaces for.
Sample Usage

SELECT * FROM show_tablespaces('conditions');

 show_tablespaces
------------------
 disk1
 disk2

timescaledb_pre_restore()

Perform the proper operations to allow a restore of the database via pg_restore to commence. Specifically, this sets the timescaledb.restoring GUC to on and stops any background workers which may have been performing tasks until the timescaledb_post_restore function is run following the restore. See the backup/restore docs for more information.

WARNING:After running SELECT timescaledb_pre_restore() you must run the timescaledb_post_restore function before using the database normally.

Sample Usage

SELECT timescaledb_pre_restore();

timescaledb_post_restore()

Perform the proper operations after a restore of the database has completed. Specifically, this sets the timescaledb.restoring GUC to off and restarts any background workers. See the backup/restore docs for more information.
Sample Usage

SELECT timescaledb_post_restore();
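Taken together, a minimal restore sequence might look like the following sketch (the dump file name and target database are hypothetical; see the backup/restore docs for the full procedure):

SELECT timescaledb_pre_restore();

-- run the restore from the shell, e.g.:
--   pg_restore -Fc -d your_timescale_db dump.bak

SELECT timescaledb_post_restore();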

Dump TimescaleDB meta data

To help when asking for support and reporting bugs, TimescaleDB includes a SQL script that outputs metadata from the internal TimescaleDB tables as well as version information. The script is available in the source distribution in scripts/ but can also be downloaded separately. To use it, run:

psql [your connect flags] -d your_timescale_db < dump_meta_data.sql > dumpfile.txt

and then inspect dumpfile.txt before sending it together with a bug report or support question.
