Linux Performance Tools: Disk (Part 4)
Disks
4.1 iostat
iostat(1) summarizes per-disk I/O statistics, providing metrics for workload characterization, utilization, and saturation.
The name “iostat” is short for “I/O statistics”, although it might have been better to call it “diskiostat” to reflect the type of I/O it reports.
root@ubuntu:~# iostat
Linux 5.4.0-59-generic (ubuntu)   2021-01-06   _x86_64_   (2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.96    0.07    1.04    0.57    0.00   97.37

Device             tps    kB_read/s    kB_wrtn/s    kB_dscd/s    kB_read    kB_wrtn    kB_dscd
loop0 0.01 0.06 0.00 0.00 338 0 0
loop1 0.01 0.06 0.00 0.00 348 0 0
loop10 0.00 0.00 0.00 0.00 8 0 0
loop2 0.01 0.18 0.00 0.00 1063 0 0
loop3 0.01 0.18 0.00 0.00 1072 0 0
loop4 0.01 0.06 0.00 0.00 341 0 0
loop5 0.01 0.18 0.00 0.00 1058 0 0
loop6 0.01 0.06 0.00 0.00 342 0 0
loop7 0.01 0.06 0.00 0.00 351 0 0
loop8 0.01 0.06 0.00 0.00 336 0 0
loop9 1.78 1.83 0.00 0.00 10784 0 0
sda 7.13 174.03 41.48 0.00 1023155 243849 0
%user     | Percentage of CPU time spent in user mode
%nice     | Percentage of CPU time spent in user mode with nice priority
%system   | Percentage of CPU time spent in kernel (system) mode
%iowait   | Percentage of time the CPU was idle while waiting for outstanding I/O
%steal    | Percentage of time the virtual CPU waited involuntarily while the hypervisor serviced another virtual processor
%idle     | Percentage of time the CPU was idle
tps       | Transfers (I/O requests) issued to the device per second (IOPS)
kB_read/s | Amount of data read from the device, in kilobytes per second
kB_wrtn/s | Amount of data written to the device, in kilobytes per second
kB_dscd/s | Amount of data discarded for the device, in kilobytes per second
kB_read   | Total amount of data read, in kilobytes
kB_wrtn   | Total amount of data written, in kilobytes
kB_dscd   | Total amount of data discarded, in kilobytes
If %iowait is persistently high, the disk is likely an I/O bottleneck.
If %idle is high, the CPU is mostly idle.
If %idle is high but the system still responds slowly, the CPU may be waiting on memory allocation; consider adding memory.
If %idle stays below 10%, CPU capacity is the scarcest resource in the system and should be addressed first.
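A quick workload characterization can also be derived directly from these columns: throughput divided by tps gives the average I/O size. A minimal awk sketch, using the sda line from the sample output above (the line is embedded so the sketch runs without iostat; on a live system, pipe iostat output in instead):

```shell
# Average I/O size for sda, using numbers copied from the sample above.
# Field positions assume the default iostat device-line layout:
#   Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
echo "sda 7.13 174.03 41.48 0.00 1023155 243849 0" |
awk '$1 == "sda" {
    # avg I/O size = (read KB/s + written KB/s) / transfers per second
    printf "avg I/O size: %.1f KB\n", ($3 + $4) / $2
}'
```

Here (174.03 + 41.48) / 7.13 ≈ 30.2 KB per I/O, i.e. mostly small-to-medium transfers.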
4.2 PSI
I/O pressure (Pressure Stall Information)
# cat /proc/pressure/io
some avg10=63.11 avg60=32.18 avg300=8.62 total=667212021
full avg10=60.76 avg60=31.13 avg300=8.35 total=622722632
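The two lines mean different things: "some" is the share of wall-clock time during which at least one task was stalled on I/O, while "full" is the share during which all non-idle tasks were stalled at once; avg10/avg60/avg300 are running averages over 10/60/300-second windows, and total is cumulative stall time in microseconds. A small awk sketch to pull out the 10-second averages (a copy of the sample above is embedded so it runs anywhere; on a real system, kernel 4.20+ with PSI enabled, read /proc/pressure/io directly):

```shell
# Extract avg10 from PSI-formatted output; split fields on '=' and ' '.
cat > psi_io.txt <<'EOF'
some avg10=63.11 avg60=32.18 avg300=8.62 total=667212021
full avg10=60.76 avg60=31.13 avg300=8.35 total=622722632
EOF
awk -F'[= ]' '{ print $1, "avg10:", $3 }' psi_io.txt
```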
4.3 pidstat
The Linux pidstat(1) tool prints CPU usage by default and includes a -d option for disk I/O statistics.
root@ubuntu:~# pidstat -d 2
Linux 5.4.0-59-generic (ubuntu)   2021-01-06   _x86_64_   (2 CPU)

16:44:08      UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s iodelay  Command
16:44:14        0       353      0.00     70.00      0.00       0  systemd-journal
16:44:16        0       353      0.00     24.00      0.00       0  systemd-journal
16:44:18        0       313      0.00      2.00      0.00       0  jbd2/sda5-8
16:44:18        0       353      0.00      2.00      0.00       0  systemd-journal
16:44:20      104       762      0.00      2.00      0.00       0  rsyslogd
16:44:24        0       313      0.00      6.00      0.00       0  jbd2/sda5-8
16:44:24        0       353      0.00      2.00      0.00       0  systemd-journal
[...]
Among these columns, kB_ccwr/s means:
Number of kilobytes whose writing to disk has been cancelled by the task. This may occur when the task truncates some dirty pagecache. In this case, some IO which another task has been accounted for will not be happening.
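To find the heaviest writers over a whole run, the per-interval lines can be aggregated by command. A sketch under the assumption of the default `pidstat -d` column order (time, UID, PID, kB_rd/s, kB_wr/s, kB_ccwr/s, iodelay, Command); a few lines from the sample above are embedded so it runs without pidstat:

```shell
# Sum kB_wr/s per command from saved pidstat -d output; on a live
# system, capture with e.g. `pidstat -d 2 10 > pidstat.txt` first.
cat > pidstat.txt <<'EOF'
16:44:14    0  353  0.00  70.00  0.00  0  systemd-journal
16:44:16    0  353  0.00  24.00  0.00  0  systemd-journal
16:44:18    0  313  0.00   2.00  0.00  0  jbd2/sda5-8
16:44:20  104  762  0.00   2.00  0.00  0  rsyslogd
EOF
awk '{ wr[$8] += $5 }
     END { for (c in wr) printf "%-16s %7.2f kB_wr/s\n", c, wr[c] }' pidstat.txt | sort
```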
4.4 perf
List the block-related perf events:
[root@localhost ~]# perf list 'block:*'

List of pre-defined events (to be used in -e):

  block:block_bio_backmerge                          [Tracepoint event]
  block:block_bio_bounce                             [Tracepoint event]
  block:block_bio_complete                           [Tracepoint event]
  block:block_bio_frontmerge                         [Tracepoint event]
  block:block_bio_queue                              [Tracepoint event]
  block:block_bio_remap                              [Tracepoint event]
  block:block_dirty_buffer                           [Tracepoint event]
  block:block_getrq                                  [Tracepoint event]
  block:block_plug                                   [Tracepoint event]
  block:block_rq_complete                            [Tracepoint event]
  block:block_rq_insert                              [Tracepoint event]
  block:block_rq_issue                               [Tracepoint event]
  block:block_rq_remap                               [Tracepoint event]
  block:block_rq_requeue                             [Tracepoint event]
  block:block_sleeprq                                [Tracepoint event]
  block:block_split                                  [Tracepoint event]
  block:block_touch_buffer                           [Tracepoint event]
  block:block_unplug                                 [Tracepoint event]

Metric Groups:
Disk I/O latency:
perf record -e block:block_rq_issue,block:block_rq_complete -a sleep 20
perf script --header > disk.txt
### This produces a trace file for analysis, but the latency still has to be computed by hand, for example by post-processing the text with awk or similar tools.
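One way to do that by hand: pair each block_rq_issue with the block_rq_complete for the same device and sector, and subtract timestamps. A hedged awk sketch follows; the two embedded lines are hypothetical stand-ins for disk.txt (real content comes from the perf commands above), and the assumed field layout (timestamp ending in ':' before the tracepoint name, sector before '+') matches common perf script output but can vary between perf versions:

```shell
# Two hypothetical perf-script lines stand in for the real disk.txt;
# on a real system generate it with `perf record`/`perf script` as above.
cat > disk.txt <<'EOF'
 jbd2/sda5-8   313 [000]  1234.000100: block:block_rq_issue: 8,0 WS 16384 () 206940176 + 32 [jbd2/sda5-8]
     swapper     0 [001]  1234.000400: block:block_rq_complete: 8,0 WS () 206940176 + 32 [0]
EOF
awk '
{
    # Locate the tracepoint field on each line; the timestamp precedes
    # it (with a trailing colon) and the device follows it. The sector
    # is the field just before the "+".
    ev = ""
    for (i = 1; i <= NF; i++) {
        if ($i ~ /^block:block_rq_(issue|complete):$/) {
            ev = $i; ts = $(i-1); sub(/:$/, "", ts); dev = $(i+1)
        }
        if ($i == "+") sector = $(i-1)
    }
}
ev ~ /issue/ { start[dev, sector] = ts }
ev ~ /complete/ && (dev, sector) in start {
    printf "dev %s sector %s latency %.3f ms\n", dev, sector, (ts - start[dev, sector]) * 1000
    delete start[dev, sector]
}' disk.txt
```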
4.5 biolatency-bpfcc
This BCC tool gives an intuitive view of block I/O latency as a histogram.
usage: biolatency-bpfcc [-h] [-T] [-Q] [-m] [-D] [-F] [interval] [count]

Summarize block device I/O latency as a histogram

positional arguments:
  interval            output interval, in seconds
  count               number of outputs

optional arguments:
  -h, --help          show this help message and exit
  -T, --timestamp     include timestamp on output
  -Q, --queued        include OS queued time in I/O time
  -m, --milliseconds  millisecond histogram
  -D, --disks         print a histogram per disk device
  -F, --flags         print a histogram per set of I/O flags

examples:
    ./biolatency            # summarize block I/O latency as a histogram
    ./biolatency 1 10       # print 1 second summaries, 10 times
    ./biolatency -mT 1      # 1s summaries, milliseconds, and timestamps
    ./biolatency -Q         # include OS queued time in I/O time
    ./biolatency -D         # show each disk device separately
    ./biolatency -F         # show I/O flags separately
The -F option breaks down I/O latency per set of I/O flags, which is very handy in practice.
4.6 biosnoop-bpfcc
Trace all block device I/O and print a summary line per I/O
usage: biosnoop-bpfcc [-h] [-Q]

Trace block I/O

optional arguments:
  -h, --help   show this help message and exit
  -Q, --queue  include OS queued time

examples:
    ./biosnoop       # trace all block I/O
    ./biosnoop -Q    # include OS queued time
root@ubuntu:test_mem# biosnoop-bpfcc -Q
TIME(s) COMM PID DISK T SECTOR BYTES QUE(ms) LAT(ms)
0.000000 ? 0 R 0 8 0.00 0.57
2.017184 ? 0 R 0 8 0.00 1.44
3.168129 jbd2/sda5-8 313 sda W 206940176 16384 0.00 0.18
3.168304 jbd2/sda5-8 313 sda W 206940208 4096 0.00 0.13
4.031720 ? 0 R 0 8 0.00 0.31
[...]
4.7 biostacks.bt
biostacks(8) is a bpftrace tool that traces the block I/O request time (from OS enqueue to device completion) with the I/O initialization stack trace.
root@ubuntu:test_mem# biostacks.bt
Attaching 5 probes...
cannot attach kprobe, probe entry may not exist
Warning: could not attach probe kprobe:blk_start_request, skipping.
Tracing block I/O with init stacks. Hit Ctrl-C to end.
^C

@usecs[
    blk_account_io_start+1
    generic_make_request+207
    submit_bio+72
    submit_bh_wbc+386
    submit_bh+19
    journal_submit_commit_record.part.0+475
    jbd2_journal_commit_transaction+4754
    kjournald2+182
    kthread+260
    ret_from_fork+53
]:
[4K, 8K)               1 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|

@usecs[
    blk_account_io_start+1
    blk_attempt_plug_merge+274
    blk_mq_make_request+863
    generic_make_request+207
    submit_bio+72
    submit_bh_wbc+386
    submit_bh+19
    jbd2_journal_commit_transaction+1484
    kjournald2+182
    kthread+260
    ret_from_fork+53
]:
[4K, 8K)               1 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[...]
4.8 blktrace
blktrace - generate traces of the I/O traffic on block devices.
[root@localhost ~]# blktrace -d /dev/sdc1 -o - | blkparse -i -
  8,32   4        1     0.000000000 1385858  A   W 3470177952 + 8 <- (8,33) 3470175904
  8,33   4        2     0.000000992 1385858  Q   W 3470177952 + 8 [kworker/u113:3]
  8,33   4        3     0.000007305 1385858  G   W 3470177952 + 8 [kworker/u113:3]
  8,33   4        4     0.000008454 1385858  P   N [kworker/u113:3]
  8,32   4        5     0.000015076 1385858  A   W 9638099512 + 8 <- (8,33) 9638097464
  8,33   4        6     0.000015273 1385858  Q   W 9638099512 + 8 [kworker/u113:3]
  8,33   4        7     0.000016198 1385858  G   W 9638099512 + 8 [kworker/u113:3]
  8,32   4        8     0.000027067 1385858  A  WM 3036678400 + 8 <- (8,33) 3036676352
  8,33   4        9     0.000027267 1385858  Q  WM 3036678400 + 8 [kworker/u113:3]
  8,33   4       10     0.000028193 1385858  G  WM 3036678400 + 8 [kworker/u113:3]
  8,32   4       11     0.000033291 1385858  A   W 302800272 + 8 <- (8,33) 302798224
  8,33   4       12     0.000033485 1385858  Q   W 302800272 + 8 [kworker/u113:3]
  8,33   4       13     0.000034269 1385858  G   W 302800272 + 8 [kworker/u113:3]
  8,33   4       14     0.000035653 1385858  U   N [kworker/u113:3] 4
  8,33   4       15     0.000036376 1385858  I   W 3470177952 + 8 [kworker/u113:3]
  8,33   4       16     0.000036877 1385858  I   W 9638099512 + 8 [kworker/u113:3]
  8,33   4       17     0.000037129 1385858  I  WM 3036678400 + 8 [kworker/u113:3]
  8,33   4       18     0.000037273 1385858  I   W 302800272 + 8 [kworker/u113:3]
  8,33   4       19     0.000042762 1385858  D   W 3470177952 + 8 [kworker/u113:3]
  8,33   4       20     0.000050812 1385858  D   W 9638099512 + 8 [kworker/u113:3]
  8,33   4       21     0.000052794 1385858  D  WM 3036678400 + 8 [kworker/u113:3]
  8,33   4       22     0.000054746 1385858  D   W 302800272 + 8 [kworker/u113:3]
  8,33  12        1     0.000427923       0  C   W 302800272 + 8 [0]
  8,33  12        2     0.000438597       0  C  WM 3036678400 + 8 [0]
  8,33  12        3     0.000441113       0  C   W 3470177952 + 8 [0]
  8,33  12        4     0.000443239       0  C   W 9638099512 + 8 [0]
^C
CPU4 (8,33):
 Reads Queued:           0,        0KiB  Writes Queued:           4,       16KiB
 Read Dispatches:        0,        0KiB  Write Dispatches:        4,       16KiB
 Reads Requeued:         0               Writes Requeued:         0
 Reads Completed:        0,        0KiB  Writes Completed:        0,        0KiB
 Read Merges:            0,        0KiB  Write Merges:            0,        0KiB
 Read depth:             0               Write depth:             4
 IO unplugs:             1               Timer unplugs:           0
CPU12 (8,33):
 Reads Queued:           0,        0KiB  Writes Queued:           0,        0KiB
 Read Dispatches:        0,        0KiB  Write Dispatches:        0,        0KiB
 Reads Requeued:         0               Writes Requeued:         0
 Reads Completed:        0,        0KiB  Writes Completed:        4,       16KiB
 Read Merges:            0,        0KiB  Write Merges:            0,        0KiB
 Read depth:             0               Write depth:             4
 IO unplugs:             0               Timer unplugs:           0

Total (8,33):
 Reads Queued:           0,        0KiB  Writes Queued:           4,       16KiB
 Read Dispatches:        0,        0KiB  Write Dispatches:        4,       16KiB
 Reads Requeued:         0               Writes Requeued:         0
 Reads Completed:        0,        0KiB  Writes Completed:        4,       16KiB
 Read Merges:            0,        0KiB  Write Merges:            0,        0KiB
 IO unplugs:             1               Timer unplugs:           0

Throughput (R/W): 0KiB/s / 0KiB/s
Events (8,33): 26 entries
Skips: 0 forward (0 - 0.0%)
4.9 bpftrace
# Count block I/O tracepoint events:
bpftrace -e 'tracepoint:block:* { @[probe] = count(); }'

# Summarize block I/O size as a histogram:
bpftrace -e 't:block:block_rq_issue { @bytes = hist(args->bytes); }'

# Count block I/O request user stack traces:
bpftrace -e 't:block:block_rq_issue { @[ustack] = count(); }'
bpftrace -e 't:block:block_rq_insert { @[ustack] = count(); }'

# Count block I/O type flags:
bpftrace -e 't:block:block_rq_issue { @[args->rwbs] = count(); }'

# Trace block I/O errors with device and I/O type:
bpftrace -e 't:block:block_rq_complete /args->error/ { printf("dev %d type %s error %d\n", args->dev, args->rwbs, args->error); }'

# Count SCSI opcodes:
bpftrace -e 't:scsi:scsi_dispatch_cmd_start { @opcode[args->opcode] = count(); }'

# Count SCSI result codes:
bpftrace -e 't:scsi:scsi_dispatch_cmd_done { @result[args->result] = count(); }'

# Count SCSI driver functions:
bpftrace -e 'kprobe:scsi* { @[func] = count(); }'