The Impact of Direct Path Reads on Delayed Block Cleanout
Applies to:
Oracle Server - Enterprise Edition - Version 11.1.0.6 to 11.1.0.7. This problem can occur on any platform.
Symptoms
After migrating an 11g database from a standalone instance to a 4-node RAC, a noticeable increase in 'direct path read' waits was observed at times. Here are the cache sizes and Top 5 events:

Cache Sizes                    Begin        End
~~~~~~~~~~~               ----------  ----------
Buffer Cache:                 3,232M      3,616M   Std Block Size:      8K
Shared Pool Size:             6,736M      6,400M   Log Buffer:      8,824K

Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                           Avg
                                                          wait   % DB
Event                               Waits     Time(s)    (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
DB CPU                                            13,916          42.1
direct path read                  1,637,344       13,359      8   40.4 User I/O
db file sequential read              47,132        1,111     24    3.4 User I/O
DFS lock handle                     301,278        1,028      3    3.1 Other
db file parallel read                14,724          554     38    1.7 User I/O

Changes
Migrated from a standalone database to a 4-node RAC. Moved from Unix file system storage to ASM. Using Automatic Shared Memory Management (ASMM). The setting of db_cache_size in spfile/pfile is low compared to normal workload requirements.
Cause
There have been changes in 11g to the heuristics that choose between direct path reads and reads through the buffer cache for serial table scans. In 10g, serial scans of "large" tables went through the cache by default; this is no longer the case. In 11g, the decision to read via direct path or through the cache is based on the size of the table, the buffer cache size, and various other statistics. Direct path reads are faster than scattered reads and have less impact on other processes because they avoid latches.
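As a rough, version-dependent illustration (the exact heuristic is internal and undocumented), the cache-relative cutoff can be inspected through the hidden parameter _small_table_threshold; a commonly cited rule of thumb for early 11g releases is that serial scans of segments several times larger than this threshold become candidates for direct path reads:

```sql
-- Requires SYSDBA access; hidden (underscore) parameters are undocumented
-- and their meaning can change between releases.
SELECT p.ksppinm  AS name,
       v.ksppstvl AS value
  FROM x$ksppi  p,
       x$ksppsv v
 WHERE p.indx = v.indx
   AND p.ksppinm = '_small_table_threshold';
```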
Solution
When using Automatic Shared Memory Management (ASMM) with the buffer cache low limit set well below normal workload requirements, 11g may choose to do serial direct path read scans (typically just after startup) for large tables that do not fit in the SGA. Once ASMM has grown the buffer cache to meet demand, 11g may stop doing serial direct path reads for those same tables. If you want to prevent this, note the buffer cache and shared pool requirements of a normal workload and set their low limits in the spfile/pfile close to those values:

db_cache_size
shared_pool_size
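A minimal sketch of that setting, using hypothetical sizes loosely based on the AWR figures above (substitute the values observed for your own normal workload); under ASMM these parameters act as floors, not fixed sizes:

```sql
-- Hypothetical floor values; adjust to your workload's steady-state needs.
ALTER SYSTEM SET db_cache_size    = 3200M SCOPE=BOTH;
ALTER SYSTEM SET shared_pool_size = 6400M SCOPE=BOTH;
```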
Below we test how direct path reads affect delayed block cleanout:
SQL> create table tv as select rownum rn, rpad('A',600,'Z') rp from dual
  2  connect by level <= 300000;

Table created.

Open a new session A:

SQL> set linesize 200 pagesize 1400
SQL> select count(*) from tv;

  COUNT(*)
----------
    300000

SQL> select vm.sid, vs.name, vm.value
  2    from v$mystat vm, v$sysstat vs
  3   where vm.statistic# = vs.statistic#
  4     and vs.name in ('cleanouts only - consistent read gets',
  5                     'session logical reads',
  6                     'physical reads',
  7                     'physical reads direct');

       SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
        25 session logical reads                                                 27281
        25 physical reads                                                        27273
        25 physical reads direct                                                 27273
        25 cleanouts only - consistent read gets                                     0

-- The query clearly used direct path reads.

SQL> update tv set rn = rn + 1;    -- bulk update

SQL> alter system flush buffer_cache;
-- Flush the buffer cache to set up the delayed block cleanout scenario, then commit.

System altered.

SQL> commit;

Commit complete.

Open a new session B:

SQL> set linesize 200 pagesize 1400
SQL> select count(*) from tv;

  COUNT(*)
----------
    300000

SQL> select vm.sid, vs.name, vm.value
  2    from v$mystat vm, v$sysstat vs
  3   where vm.statistic# = vs.statistic#
  4     and vs.name in ('cleanouts only - consistent read gets',
  5                     'session logical reads',
  6                     'physical reads',
  7                     'physical reads direct', 'redo size');

       SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
        25 session logical reads                                                 54554
        25 physical reads                                                        27273
        25 physical reads direct                                                 27273
        25 redo size                                                                 0
        25 cleanouts only - consistent read gets                                 27273

-- The direct path read query performed delayed block cleanout, but generated no redo.

SQL> select count(*) from tv;

  COUNT(*)
----------
    300000

SQL> select vm.sid, vs.name, vm.value
  2    from v$mystat vm, v$sysstat vs
  3   where vm.statistic# = vs.statistic#
  4     and vs.name in ('cleanouts only - consistent read gets',
  5                     'session logical reads',
  6                     'physical reads',
  7                     'physical reads direct', 'redo size');

       SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
        25 session logical reads                                                109104
        25 physical reads                                                        54546
        25 physical reads direct                                                 54546
        25 redo size                                                                 0
        25 cleanouts only - consistent read gets                                 54546
The second query again used direct path reads, performed the same number of delayed block cleanout operations, and still generated no redo. This shows that the cleanout performed under direct path read applies only to the block images read from disk into the PGA; the actual blocks are never modified, so no redo is produced. Next we use an ordinary serial full table scan; setting event 10949 prevents the use of direct path reads (this event is discussed in more detail elsewhere).
SQL> ALTER SESSION SET EVENTS '10949 TRACE NAME CONTEXT FOREVER';

Session altered.

SQL> select count(*) from tv;

  COUNT(*)
----------
    300000

SQL> select vm.sid, vs.name, vm.value
  2    from v$mystat vm, v$sysstat vs
  3   where vm.statistic# = vs.statistic#
  4     and vs.name in ('cleanouts only - consistent read gets',
  5                     'session logical reads',
  6                     'physical reads',
  7                     'physical reads direct', 'redo size');

       SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
        25 session logical reads                                                163662
        25 physical reads                                                        81819
        25 physical reads direct                                                 54546
        25 redo size                                                           1966560
        25 cleanouts only - consistent read gets                                 81819

SQL> select count(*) from tv;

  COUNT(*)
----------
    300000

SQL> select vm.sid, vs.name, vm.value
  2    from v$mystat vm, v$sysstat vs
  3   where vm.statistic# = vs.statistic#
  4     and vs.name in ('cleanouts only - consistent read gets',
  5                     'session logical reads',
  6                     'physical reads',
  7                     'physical reads direct', 'redo size');

       SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
        25 session logical reads                                                190947
        25 physical reads                                                        95673
        25 physical reads direct                                                 54546
        25 redo size                                                           1966560
        25 cleanouts only - consistent read gets                                 81819
The first buffered full table scan performed the same number of delayed block cleanouts as the direct path read case, and this time generated a large amount of redo; we are back in the classic delayed block cleanout scenario. The subsequent read no longer needed to clean any blocks or generate redo, since we were now reading a "clean" table segment.

From these tests we can see the impact that 11g's much wider use of direct path reads can have on segments awaiting delayed block cleanout. Because a direct path read never actually modifies a block, the cleanout work must be repeated on every such read; yet since no dirty blocks are produced, there is nothing to write back and no redo is generated. The extra load therefore shows up mostly as CPU time.
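One practical takeaway, sketched below using the same event 10949 setting as in the test above: after a bulk update and commit, forcing a single buffered full scan performs the cleanout (and its redo generation) once, leaving a clean segment for subsequent direct path reads. The hint and event usage here are illustrative, not a prescribed fix:

```sql
-- Illustrative only: force one buffered scan so delayed block cleanout
-- happens once, instead of being repeated by every direct path read.
ALTER SESSION SET EVENTS '10949 trace name context forever';
SELECT /*+ FULL(tv) */ COUNT(*) FROM tv;
ALTER SESSION SET EVENTS '10949 trace name context off';
```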