1. Download Pinpoint

https://github.com/pinpoint-apm/pinpoint

2. Run the HBase script

Initialize the schema (hbase-create.hbase):

D:\hbase-2.3.3\hbase-2.3.3\bin>hbase shell D:/pinpoint-2.1.1/pinpoint-2.1.1/hbase/scripts/hbase-create.hbase
Created table AgentInfo
Took 3.6380 seconds
Created table AgentStatV2
Took 4.4220 seconds
Created table ApplicationStatAggre
Took 2.1830 seconds
Created table ApplicationIndex
Took 0.6380 seconds
Created table AgentLifeCycle
Took 0.6380 seconds
Created table AgentEvent
Took 0.6400 seconds
Created table StringMetaData
Took 0.6450 seconds
Created table ApiMetaData
Took 0.6330 seconds
Created table SqlMetaData_Ver2
Took 0.6350 seconds
Created table TraceV2
Took 4.1810 seconds
Created table ApplicationTraceIndex
Took 0.6360 seconds
Created table ApplicationMapStatisticsCaller_Ver2
Took 0.6350 seconds
Created table ApplicationMapStatisticsCallee_Ver2
Took 1.1680 seconds
Created table ApplicationMapStatisticsSelf_Ver2
Took 1.2470 seconds
Created table HostApplicationMap_Ver2
Took 0.6530 seconds
TABLE
AgentEvent
AgentInfo
AgentLifeCycle
AgentStatV2
ApiMetaData
ApplicationIndex
ApplicationMapStatisticsCallee_Ver2
ApplicationMapStatisticsCaller_Ver2
ApplicationMapStatisticsSelf_Ver2
ApplicationStatAggre
ApplicationTraceIndex
HostApplicationMap_Ver2
SqlMetaData_Ver2
StringMetaData
TraceV2
15 row(s)
Took 0.1600 seconds

D:\hbase-2.3.3\hbase-2.3.3\bin>

3. Pinpoint Collector

java -jar -Dpinpoint.zookeeper.address=localhost pinpoint-collector-boot-2.1.1.jar

IP of HBase's ZooKeeper: 127.0.0.1, default port 2181

D:\pinpoint>java -jar -Dpinpoint.zookeeper.address=127.0.0.1 pinpoint-collector-boot-2.2.0.jar
01-06 14:11:33.711 INFO  ProfileApplicationListener          : onApplicationEvent-ApplicationEnvironmentPreparedEvent
01-06 14:11:33.712 INFO  ProfileApplicationListener          : spring.profiles.active:[release]
Tomcat started on port(s): 8081 (http) with context path ''
01-06 14:13:33.033 [           main] INFO  c.n.p.c.CollectorApp                     : Started CollectorApp in 124.488 seconds (JVM running for 154.548)

Default configuration

  • pinpoint-collector-root.properties - contains configurations for the collector. Check the following values against the agent’s configuration options:

    • collector.receiver.base.port (agent’s profiler.collector.tcp.port - default: 9994/TCP)
    • collector.receiver.stat.udp.port (agent’s profiler.collector.stat.port - default: 9995/UDP)
    • collector.receiver.span.udp.port (agent’s profiler.collector.span.port - default: 9996/UDP)
pinpoint.zookeeper.address=localhost

# base data receiver config  ---------------------------------------------------------------------
collector.receiver.base.ip=0.0.0.0
collector.receiver.base.port=9994

# number of tcp worker threads
collector.receiver.base.worker.threadSize=8
# capacity of tcp worker queue
collector.receiver.base.worker.queueSize=1024
# monitoring for tcp worker
collector.receiver.base.worker.monitor=true

collector.receiver.base.request.timeout=3000
collector.receiver.base.closewait.timeout=3000
# 5 min
collector.receiver.base.ping.interval=300000
# 30 min
collector.receiver.base.pingwait.timeout=1800000

# stat receiver config  ---------------------------------------------------------------------
collector.receiver.stat.udp=true
collector.receiver.stat.udp.ip=0.0.0.0
collector.receiver.stat.udp.port=9995
collector.receiver.stat.udp.receiveBufferSize=4194304
## required linux kernel 3.9 & java 9+
collector.receiver.stat.udp.reuseport=false
## If not set, follow the cpu count automatically.
#collector.receiver.stat.udp.socket.count=1

# Should keep in mind that TCP transport load balancing is per connection.(UDP transport loadbalancing is per packet)
collector.receiver.stat.tcp=false
collector.receiver.stat.tcp.ip=0.0.0.0
collector.receiver.stat.tcp.port=9995
collector.receiver.stat.tcp.request.timeout=3000
collector.receiver.stat.tcp.closewait.timeout=3000
# 5 min
collector.receiver.stat.tcp.ping.interval=300000
# 30 min
collector.receiver.stat.tcp.pingwait.timeout=1800000

# number of udp statworker threads
collector.receiver.stat.worker.threadSize=8
# capacity of udp statworker queue
collector.receiver.stat.worker.queueSize=64
# monitoring for udp stat worker
collector.receiver.stat.worker.monitor=true

# span receiver config  ---------------------------------------------------------------------
collector.receiver.span.udp=true
collector.receiver.span.udp.ip=0.0.0.0
collector.receiver.span.udp.port=9996
collector.receiver.span.udp.receiveBufferSize=4194304
## required linux kernel 3.9 & java 9+
collector.receiver.span.udp.reuseport=false
## If not set, follow the cpu count automatically.
#collector.receiver.span.udp.socket.count=1

# Should keep in mind that TCP transport load balancing is per connection.(UDP transport loadbalancing is per packet)
collector.receiver.span.tcp=false
collector.receiver.span.tcp.ip=0.0.0.0
collector.receiver.span.tcp.port=9996
collector.receiver.span.tcp.request.timeout=3000
collector.receiver.span.tcp.closewait.timeout=3000
# 5 min
collector.receiver.span.tcp.ping.interval=300000
# 30 min
collector.receiver.span.tcp.pingwait.timeout=1800000

# number of udp spanworker threads
collector.receiver.span.worker.threadSize=32
# capacity of udp spanworker queue
collector.receiver.span.worker.queueSize=256
# monitoring for udp span worker
collector.receiver.span.worker.monitor=true

# configure l4 ip address to ignore health check logs
# support raw address and CIDR address (Ex:10.0.0.1,10.0.0.1/24)
collector.l4.ip=

# change OS level read/write socket buffer size (for linux)
#sudo sysctl -w net.core.rmem_max=
#sudo sysctl -w net.core.wmem_max=
# check current values using:
#$ /sbin/sysctl -a | grep -e rmem -e wmem

# number of agent event worker threads
collector.agentEventWorker.threadSize=4
# capacity of agent event worker queue
collector.agentEventWorker.queueSize=1024

# Determines whether to register the information held by com.navercorp.pinpoint.collector.monitor.CollectorMetric to jmx
collector.metric.jmx=false
collector.metric.jmx.domain=pinpoint.collector.metrics

# -------------------------------------------------------------------------------------------------
# The cluster related options are used to establish connections between the agent, collector, and web in order to send/receive data between them in real time.
# You may enable additional features using this option (Ex : RealTime Active Thread Chart).
# -------------------------------------------------------------------------------------------------
# Usage : Set the following options for collector/web components that reside in the same cluster in order to enable this feature.
# 1. cluster.enable (pinpoint-web.properties, pinpoint-collector-root.properties) - "true" to enable
# 2. cluster.zookeeper.address (pinpoint-web.properties, pinpoint-collector-root.properties) - address of the ZooKeeper instance that will be used to manage the cluster
# 3. cluster.web.tcp.port (pinpoint-web.properties) - any available port number (used to establish connection between web and collector)
# -------------------------------------------------------------------------------------------------
# Please be aware of the following:
# 1. If the network between web, collector, and the agents are not stable, it is advisable not to use this feature.
# 2. We recommend using the cluster.web.tcp.port option. However, in cases where the collector is unable to establish connection to the web, you may reverse this and make the web establish connection to the collector.
#    In this case, you must set cluster.connect.address (pinpoint-web.properties); and cluster.listen.ip, cluster.listen.port (pinpoint-collector-root.properties) accordingly.
cluster.enable=true
cluster.zookeeper.address=${pinpoint.zookeeper.address}
cluster.zookeeper.sessiontimeout=30000
cluster.listen.ip=
cluster.listen.port=-1

#collector.admin.password=
#collector.admin.api.rest.active=
#collector.admin.api.jmx.active=

collector.spanEvent.sequence.limit=10000

# Flink configuration
flink.cluster.enable=false
flink.cluster.zookeeper.address=${pinpoint.zookeeper.address}
flink.cluster.zookeeper.sessiontimeout=3000
  • pinpoint-collector-grpc.properties - contains configurations for gRPC.

    • collector.receiver.grpc.agent.port (agent’s profiler.transport.grpc.agent.collector.port and profiler.transport.grpc.metadata.collector.port - default: 9991/TCP)
    • collector.receiver.grpc.stat.port (agent’s profiler.transport.grpc.stat.collector.port - default: 9992/TCP)
    • collector.receiver.grpc.span.port (agent’s profiler.transport.grpc.span.collector.port - default: 9993/TCP)
# gRPC
# Agent
collector.receiver.grpc.agent.enable=true
collector.receiver.grpc.agent.ip=0.0.0.0
collector.receiver.grpc.agent.port=9991
# Executor of Server
collector.receiver.grpc.agent.server.executor.thread.size=8
collector.receiver.grpc.agent.server.executor.queue.size=256
collector.receiver.grpc.agent.server.executor.monitor.enable=true
# Executor of Worker
collector.receiver.grpc.agent.worker.executor.thread.size=16
collector.receiver.grpc.agent.worker.executor.queue.size=1024
collector.receiver.grpc.agent.worker.executor.monitor.enable=true

# Stat
collector.receiver.grpc.stat.enable=true
collector.receiver.grpc.stat.ip=0.0.0.0
collector.receiver.grpc.stat.port=9992
# Executor of Server
collector.receiver.grpc.stat.server.executor.thread.size=4
collector.receiver.grpc.stat.server.executor.queue.size=256
collector.receiver.grpc.stat.server.executor.monitor.enable=true
# Executor of Worker
collector.receiver.grpc.stat.worker.executor.thread.size=16
collector.receiver.grpc.stat.worker.executor.queue.size=1024
collector.receiver.grpc.stat.worker.executor.monitor.enable=true
# Stream scheduler for rejected execution
collector.receiver.grpc.stat.stream.scheduler.thread.size=1
collector.receiver.grpc.stat.stream.scheduler.period.millis=1000
collector.receiver.grpc.stat.stream.call.init.request.count=100
collector.receiver.grpc.stat.stream.scheduler.recovery.message.count=100

# Span
collector.receiver.grpc.span.enable=true
collector.receiver.grpc.span.ip=0.0.0.0
collector.receiver.grpc.span.port=9993
# Executor of Server
collector.receiver.grpc.span.server.executor.thread.size=4
collector.receiver.grpc.span.server.executor.queue.size=256
collector.receiver.grpc.span.server.executor.monitor.enable=true
# Executor of Worker
collector.receiver.grpc.span.worker.executor.thread.size=32
collector.receiver.grpc.span.worker.executor.queue.size=1024
collector.receiver.grpc.span.worker.executor.monitor.enable=true
# Stream scheduler for rejected execution
collector.receiver.grpc.span.stream.scheduler.thread.size=1
collector.receiver.grpc.span.stream.scheduler.period.millis=1000
collector.receiver.grpc.span.stream.call.init.request.count=100
collector.receiver.grpc.span.stream.scheduler.recovery.message.count=100
  • hbase.properties - contains configurations to connect to HBase.

    • hbase.client.host (default: localhost)
    • hbase.client.port (default: 2181)
hbase.client.host=${pinpoint.zookeeper.address}
hbase.client.port=2181

# hbase default:/hbase
hbase.zookeeper.znode.parent=/hbase

# hbase namespace to use default:default
hbase.namespace=default

# ==================================================================================
# hbase client thread pool option
hbase.client.thread.max=32
hbase.client.threadPool.queueSize=5120
# prestartAllCoreThreads
hbase.client.threadPool.prestart=false

# warmup hbase connection cache
hbase.client.warmup.enable=false

# enable hbase async operation. default: false
hbase.client.async.enable=false
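Values like hbase.client.host=${pinpoint.zookeeper.address} use Spring-style placeholder substitution against other properties. A minimal Python sketch of how such a .properties file with ${...} references resolves (this is an illustration only, not Pinpoint's actual loader):

```python
import re

def load_properties(text):
    """Parse Java-style key=value lines, skipping blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    # Resolve ${placeholder} references against the parsed keys.
    pattern = re.compile(r"\$\{([^}]+)\}")
    for key, value in props.items():
        props[key] = pattern.sub(lambda m: props.get(m.group(1), m.group(0)), value)
    return props

sample = """
pinpoint.zookeeper.address=localhost
hbase.client.host=${pinpoint.zookeeper.address}
hbase.client.port=2181
"""
resolved = load_properties(sample)
print(resolved["hbase.client.host"])  # localhost
```

This mirrors why overriding a single -Dpinpoint.zookeeper.address value repoints both the cluster and the HBase client settings at once.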

When Using Released Binary (Recommended)

  • You can override any configuration value with the -D option. For example, via command-line parameters:

    • java -jar -Dspring.profiles.active=release -Dpinpoint.zookeeper.address=localhost -Dhbase.client.port=1234 pinpoint-collector-boot-2.1.1.jar
  • Alternatively, extract the configuration file from the jar, edit it, and put it back into the jar.

Pinpoint Collector provides two profiles: release and local (default)

4. Pinpoint Web

java -jar -Dpinpoint.zookeeper.address=localhost pinpoint-web-boot-2.1.1.jar

IP of HBase's ZooKeeper: 127.0.0.1, default port 2181

Default configuration

There are 2 configuration files used for Pinpoint Web: pinpoint-web-root.properties, and hbase.properties.

  • hbase.properties - contains configurations to connect to HBase.

    • hbase.client.host (default: localhost)
    • hbase.client.port (default: 2181)

When Using Released Binary (Recommended)

  • You can override any configuration value with the -D option. For example, via command-line parameters:

    • java -jar -Dspring.profiles.active=release -Dpinpoint.zookeeper.address=localhost -Dhbase.client.port=1234 pinpoint-web-boot-2.1.1.jar
  • Alternatively, extract the configuration file from the jar, edit it, and put it back into the jar.
    • config/web.properties

Pinpoint Web provides two profiles: release (default) and local.

http://localhost:8080/main

D:\pinpoint>java -jar -Dpinpoint.zookeeper.address=127.0.0.1 pinpoint-web-boot-2.2.0.jar
01-06 14:21:54.862 INFO  ProfileApplicationListener          : onApplicationEvent-ApplicationEnvironmentPreparedEvent
01-06 14:21:54.864 INFO  ProfileApplicationListener          : spring.profiles.active:[release]
01-06 14:21:54.866 INFO  ProfileApplicationListener          : pinpoint.profiles.active:release
01-06 14:21:54.867 INFO  ProfileApplicationListener          : PropertiesPropertySource pinpoint.profiles.active=release
01-06 14:21:54.868 INFO  ProfileApplicationListener          : PropertiesPropertySource logging.config=classpath:profiles/release/log4j2.xml
com.navercorp.pinpoint.agent.plugin.proxy.nginx.NginxRequestType@59a67c3a
01-06 14:24:01.001 [           main] INFO  c.n.p.w.c.LogConfiguration               -- LogConfiguration{logLinkEnable=false, logButtonName='', logPageUrl='', disableButtonMessage=''}
01-06 14:24:14.014 [           main] INFO  o.a.c.h.Http11NioProtocol                -- Starting ProtocolHandler ["http-nio-8080"]
01-06 14:24:14.014 [           main] INFO  o.s.b.w.e.t.TomcatWebServer              -- Tomcat started on port(s): 8080 (http) with context path ''
01-06 14:24:14.014 [           main] INFO  c.n.p.w.WebApp                           -- Started WebApp in 145.568 seconds (JVM running for 176.405)

  • hbase-root.properties: configures which data source pinpoint-web reads the collected data from; here only the HBase ZooKeeper address is specified
  • jdbc-root.properties: connection and authentication configuration for pinpoint-web's own MySQL database
  • sql directory: pinpoint-web stores some of its own data in MySQL, so the table schema must be initialized (just run the two .sql scripts)

Pinpoint's alarm feature requires a MySQL service

If you want to use Pinpoint's alarm feature, MySQL is required; otherwise, after clicking the gear icon at the top right of the pinpoint web page, some functions (editing users, user groups, alarms, etc.) will fail with an exception like the one shown in the screenshot:

Note: check jdbc-root.properties; create the database first, then create the tables.

5. Pinpoint Agent (test application)

Starting via Tomcat

1. Use the test application provided under D:\pinpoint-2.1.1\pinpoint-2.1.1\quickstart\testapp
2. Enter the testapp directory and run mvn install -Dmaven.test.skip=true to build the app
3. Edit the current Tomcat's bin/catalina.bat and add the startup parameters

set CATALINA_OPTS=-javaagent:D:/tomcat/apache-tomcat-8.0.36/pinpoint-agent-2.2.0/pinpoint-bootstrap-2.2.0.jar -Dpinpoint.agentId=pp202101061457 -Dpinpoint.applicationName=MyTomcatPP

Default configuration: pinpoint-root.config

Edit pinpoint-agent/pinpoint.config and set profiler.collector.ip=127.0.0.1

This IP address must match the address of the machine where the Pinpoint collector is installed. pinpoint-root.config sits in the same directory as pinpoint-bootstrap-$VERSION.jar.
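On the agent side, a minimal sketch of the corresponding pinpoint.config entries for a local collector (the values here are assumptions for this local setup; profiler.transport.module is the switch in the standard agent config that selects between the THRIFT and GRPC transports):

```properties
# transport selection in the standard agent config (GRPC or THRIFT)
profiler.transport.module=GRPC
# thrift transport target
profiler.collector.ip=127.0.0.1
# grpc transport target
profiler.transport.grpc.collector.ip=127.0.0.1
```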

THRIFT

  • profiler.collector.ip (default: 127.0.0.1)
  • profiler.collector.tcp.port (collector’s collector.receiver.base.port - default: 9994/TCP)
  • profiler.collector.stat.port (collector’s collector.receiver.stat.udp.port - default: 9995/UDP)
  • profiler.collector.span.port (collector’s collector.receiver.span.udp.port - default: 9996/UDP)

GRPC

  • profiler.transport.grpc.collector.ip (default: 127.0.0.1)
  • profiler.transport.grpc.agent.collector.port (collector’s collector.receiver.grpc.agent.port - default: 9991/TCP)
  • profiler.transport.grpc.metadata.collector.port (collector’s collector.receiver.grpc.agent.port - default: 9991/TCP)
  • profiler.transport.grpc.stat.collector.port (collector’s collector.receiver.grpc.stat.port - default: 9992/TCP)
  • profiler.transport.grpc.span.collector.port (collector’s collector.receiver.grpc.span.port - default: 9993/TCP)
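The agent-side ports above must match what the collector actually listens on. The TCP ports (9991-9994 by default) can be sanity-checked with a plain socket connect; the UDP ports (9995/9996) cannot be probed this way. A minimal Python sketch, where a throwaway local listener stands in for a running collector:

```python
import socket

def tcp_port_open(host, port, timeout=1.0):
    """Return True if a TCP connect to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Demo against a throwaway local listener (a stand-in for a collector port).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

reachable = tcp_port_open("127.0.0.1", port)
print(reachable)                # True
server.close()
```

In a real check you would call tcp_port_open("collector-host", 9991) and so on for each gRPC port.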

The Tomcat logs show that the pinpoint agent has been loaded.

Starting as a jar

java -javaagent:D:/tomcat/apache-tomcat-8.0.36/pinpoint-agent-2.2.0/pinpoint-bootstrap-2.2.0.jar -Dpinpoint.agentId=pp202101191756 -Dpinpoint.applicationName=testapp-2.1.1 -jar D:/pinpoint-2.1.1/pinpoint-2.1.1/quickstart/testapp/target/pinpoint-quickstart-testapp-2.1.1.jar

http://localhost:8082/

Collector log screenshot

Web log screenshot

Web UI screenshot

Building from source

mvnw clean install -DskipTests=true -s E:\maven\apache-maven-3.5.0-bin\apache-maven-3.5.0\conf\settings.xml

Be sure to specify the settings file; otherwise mvnw uses its bundled Maven, which will not use the Aliyun mirror source. The downloaded jars all end up under \.m2\repository.

Even if an environment variable points to a locally installed Maven, it has no effect: mvnw ignores it.

Cluster deployment

Pinpoint supports cluster deployment; you only need to configure the ZooKeeper address.
Cluster mode is on by default: cluster.enable=true

cluster.enable=true
cluster.zookeeper.address=localhost
cluster.zookeeper.sessiontimeout=30000

Problems

Problem 1

After a recent deployment of Pinpoint, the agent started and its registration showed up in the web UI, but the service call information, JVM metrics, and other data were not being collected.

Root cause:
The Pinpoint collector listens on ports 9994, 9995, and 9996.
Ports 9995 and 9996 use UDP, but the firewall rules we had previously opened for those ports were for TCP.

This also surfaces another issue: telnet uses TCP, so telnetting to a UDP-listening port will not connect. Use nc instead.

Solution

nc -u 127.0.0.1 8080    (sender)

nc -l -u 8080    (server)

If the sender does not pin its source port, then after the sender is closed and reopened, the server may stop receiving its data. The cause is that the sender's source port is random: without pinning it, the next session will likely use a different port, and once nc has established a receiving session it will not accept data sent from other source ports.

Pinning the sender's source port fixes this:

nc -u -p 1122 127.0.0.1 8080

When you need to test whether a UDP connection works, use the nc command.

Basic usage

Install nc on server A:
yum -y install nc
Install nc on client B:
yum -y install nc

Test

On server A, use nc to listen on UDP port 8888:
nc -ulp 8888

From client B, send data in UDP mode to A's port 8888:
nc -u <server A's IP> 8888

Result:
If A receives what client B sends, UDP on server A is working.

Checking whether a UDP port is reachable

The system under test has IP 192.168.50.66 and port 8888, with UDP port 8888 open:
System under test: nc -ulp 8888
Client: nc -zvu 192.168.50.66 8888
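What nc is doing on each side can be mirrored with a short, self-contained Python sketch using UDP sockets (ports here are OS-chosen for the demo; a fixed port such as 1122 would be passed to the client's bind() to mimic nc -p):

```python
import socket

# "Server" side: the UDP equivalent of `nc -ulp 8888` (OS-chosen port here).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server_port = server.getsockname()[1]
server.settimeout(2.0)

# "Client" side: like `nc -u <host> <port>`; bind() pins the source port,
# which is what `nc -u -p 1122 ...` does (0 lets the OS choose for the demo).
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))
src_port = client.getsockname()[1]
client.sendto(b"ping", ("127.0.0.1", server_port))

# The server sees both the payload and the sender's source port.
data, addr = server.recvfrom(1024)
client.close()
server.close()
print(data)   # b'ping'
```

The server-side recvfrom shows why the unpinned case breaks: the receiver associates the session with the sender's source port, so a restarted sender with a new random port looks like a different peer.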

Problem 2: cleaning up expired data

1. When an application or service is taken offline, how do we delete the corresponding application and agent? [Official docs](https://naver.github.io/pinpoint/1.7.3/faq.html#how-do-i-delete-application-name-andor-agent-id-from-hbase)
    Once application names and agent IDs are registered, they remain in HBase until their TTL expires (1 year by default).

They can be deleted via the pinpoint web admin API:

/admin/removeApplicationName.pinpoint?applicationName=$APPLICATION_NAME&password=$PASSWORD
/admin/removeAgentId.pinpoint?applicationName=$APPLICATION_NAME&agentId=$AGENT_ID&password=$PASSWORD

Note that the value of the password parameter is your admin.password property, defined in pinpoint-web.properties. Leaving it blank lets you call the admin APIs without the password parameter.
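A sketch of building the removal URL from a script. The endpoint path and parameter names are the ones listed above; the base URL (localhost:8080) and the helper name are assumptions for illustration:

```python
from urllib.parse import urlencode

# Base URL is an assumption; adjust to your pinpoint-web host/port.
BASE = "http://localhost:8080"

def remove_agent_url(application_name, agent_id, password=""):
    """Build the /admin/removeAgentId.pinpoint URL described above."""
    params = {"applicationName": application_name, "agentId": agent_id}
    if password:  # omit entirely when admin.password is left blank
        params["password"] = password
    return f"{BASE}/admin/removeAgentId.pinpoint?{urlencode(params)}"

url = remove_agent_url("MyTomcatPP", "pp202101061457", "secret")
print(url)
```

The same pattern applies to /admin/removeApplicationName.pinpoint, dropping the agentId parameter.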

2. Clean up the data in HBase by shortening the TTL (for AgentStatV2 and TraceV2). Deleting data from the AgentStatV2 and TraceV2 tables (call-stack data) is probably the safest option. For details see: https://www.cnblogs.com/FireLL/p/11612522.html
/home/yeemiao/hbase-1.2.11/bin/hbase shell
#check the table's schema and status
describe  'TraceV2'
#disable the table
disable 'TraceV2'
#change the table's TTL, in seconds
alter 'TraceV2' , {NAME=>'S',TTL=>'3888000'}
#major compaction: purge deleted, expired, and excess-version cells
major_compact  'TraceV2'
#re-enable the table
enable 'TraceV2'
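The TTL passed to alter is in seconds, so a quick arithmetic check helps avoid off-by-a-factor mistakes (3888000 s is 45 days; the 1-year default is roughly 31536000 s):

```python
# TTL values in the HBase `alter` command are in seconds.
def days_to_ttl_seconds(days):
    """Convert a retention period in days to an HBase TTL in seconds."""
    return days * 24 * 3600

print(days_to_ttl_seconds(45))    # 3888000 -- the value used above
print(days_to_ttl_seconds(365))   # 31536000 -- roughly the 1-year default
```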
