Introduction: Parallel execution belongs in the big-data domain and suits OLAP systems; it is most widely used where tasks can be divided, data can be split into blocks, and system resources are plentiful. This article focuses on six topics: the principles of parallel execution, practical application, performance comparison, parallel direct-path loading, index attributes, and a summary of characteristics. The tests below are my own notes, guided in part by the book "Let Oracle Run Faster 2" by Tan Huaiyuan, to whom I express my thanks, along with the friends who helped us, among them zhixiang and yangqiaojie.

1. A brief summary of the characteristics of OLTP and OLAP systems

A: OLTP and OLAP are the two kinds of systems we meet most often in day-to-day production databases. Simply put, an OLTP system is built around many short transactions, and memory efficiency determines database efficiency.

An OLAP system is built around long-running operations over large data sets, and SQL execution efficiency determines database efficiency. That is why the "parallel" technique belongs to the OLAP domain.

2. How parallel execution works and where it applies

A: Parallel is the opposite of serial: a large data set is split into n smaller chunks, n processes are started to handle the n chunks at the same time, and finally a parallel coordinator integrates the results and returns them to the user. In practice a parallel execution also involves communication among the parallel processes (inter-process interaction). As noted above, parallel execution is a big-data technique suited to OLAP rather than OLTP, because SQL statements in an OLTP system usually already execute very efficiently.
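As a rough sketch of the mechanism described above (the table name big_table is illustrative; v$px_session and v$pq_sesstat are standard Oracle dynamic performance views):

```sql
-- ask the optimizer to scan the table with 4 parallel slaves
select /*+ parallel(t 4) */ count(*) from big_table t;

-- from another session, watch the coordinator (QC) and its slave processes:
-- qcsid identifies the coordinator session, server_set/server# the slaves
select qcsid, sid, server_group, server_set, server#
from   v$px_session;

-- per-session parallel statistics, e.g. "Queries Parallelized"
select * from v$pq_sesstat;
```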

3. Testing the practical application of parallel execution and its rules

(1) Using parallelism on the indexed table leo_t, where it does not take effect

Create a table

LS@LEO> create table leo_t as select rownum id ,object_name,object_type from dba_objects;

Create an index on the table's id column

LS@LEO> create index leo_t_idx on leo_t(id);

Gather statistics on table leo_t

LS@LEO> execute dbms_stats.gather_table_stats(ownname=>'LS',tabname=>'LEO_T',method_opt=>'for all indexed columns size 2',cascade=>TRUE);

Set a degree of parallelism of 4 on the table

LS@LEO> alter table leo_t parallel 4;

Turn on autotrace (explain plan and statistics)

LS@LEO> set autotrace trace explain stat

LS@LEO> select * from leo_t where id=100;        the data is fetched via the index, and parallel execution is not used

Execution Plan

----------------------------------------------------------

Plan hash value: 2049660393

-----------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |

-----------------------------------------------------------------------------------------

| 0 | SELECT STATEMENT | | 1 | 28 | 2 (0)| 00:00:01 |

| 1 | TABLE ACCESS BY INDEX ROWID| LEO_T | 1 | 28 | 2 (0)| 00:00:01 |

|* 2 | INDEX RANGE SCAN | LEO_T_IDX | 1 | | 1 (0)| 00:00:01 |

-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

2 - access("ID"=100)

Statistics

----------------------------------------------------------

1 recursive calls

0 db block gets

4 consistent gets        4 consistent reads, i.e. only 4 data blocks processed

0 physical reads

0 redo size

544 bytes sent via SQL*Net to client

381 bytes received via SQL*Net from client

2 SQL*Net roundtrips to/from client

0 sorts (memory)

0 sorts (disk)

1 rows processed

Note: we enabled parallelism on this table, but it did not take effect because the CBO used the B-tree index to locate the row directly by rowid (B-tree indexes suit columns with low duplication). Only 4 consistent reads occurred; index access here is so efficient and so cheap that there is no need for parallelism.
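If we wanted to see parallelism engage on this query anyway, one option (a sketch only; you would not normally do this for a 4-block read) is to steer the optimizer away from the index so the table's degree of parallelism can apply:

```sql
-- force a full scan so the parallel attribute can be used;
-- full() and parallel() are standard Oracle hints
select /*+ full(t) parallel(t 4) */ * from leo_t t where id = 100;
```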

(2) Reading a parallel execution plan

LS@LEO> select object_type,count(*) from leo_t group by object_type;        group count by object type

35 rows selected.

Execution Plan (parallel)

----------------------------------------------------------

Plan hash value: 852105030

------------------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |

------------------------------------------------------------------------------------------------------------------

| 0 | SELECT STATEMENT | | 10337 | 111K| 6 (17)| 00:00:01 | | | |

| 1 | PX COORDINATOR | | | | | | | | |

| 2 | PX SEND QC (RANDOM) | :TQ10001 | 10337 | 111K| 6 (17)| 00:00:01 | Q1,01 | P->S | QC (RAND) |

| 3 | HASH GROUP BY | | 10337 | 111K| 6 (17)| 00:00:01 | Q1,01 | PCWP | |

| 4 | PX RECEIVE | | 10337 | 111K| 6 (17)| 00:00:01 | Q1,01 | PCWP | |

| 5 | PX SEND HASH | :TQ10000 | 10337 | 111K| 6 (17)| 00:00:01 | Q1,00 | P->P | HASH |

| 6 | HASH GROUP BY | | 10337 | 111K| 6 (17)| 00:00:01 | Q1,00 | PCWP | |

| 7 | PX BLOCK ITERATOR | | 10337 | 111K| 5 (0)| 00:00:01 | Q1,00 | PCWC | |

| 8 | TABLE ACCESS FULL| LEO_T | 10337 | 111K| 5 (0)| 00:00:01 | Q1,00 | PCWP | |

------------------------------------------------------------------------------------------------------------------

Statistics

----------------------------------------------------------

44 recursive calls

0 db block gets

259 consistent gets        259 consistent reads, i.e. 259 data blocks processed

0 physical reads

0 redo size

1298 bytes sent via SQL*Net to client

403 bytes received via SQL*Net from client

4 SQL*Net roundtrips to/from client

1 sorts (memory)

0 sorts (disk)

35 rows processed

ps -ef | grep oracle        the background process list also shows the parallel slave processes (ora_p000 through ora_p004) that were started; the query coordinator itself is the session's own server process

oracle 25075 1 0 22:58 ? 00:00:00 ora_p000_LEO

oracle 25077 1 0 22:58 ? 00:00:00 ora_p001_LEO

oracle 25079 1 0 22:58 ? 00:00:00 ora_p002_LEO

oracle 25081 1 0 22:58 ? 00:00:00 ora_p003_LEO

oracle 25083 1 0 22:58 ? 00:00:00 ora_p004_LEO

Note: a SELECT with grouping processes a large data set (259 consistent reads occurred), so splitting the data blocks across parallel slaves improves efficiency, and Oracle therefore used parallel execution. To interpret the parallel plan, read it from the bottom up; wherever the PX (parallel execution) keyword appears, parallel execution is being used:

1. First, the table is scanned in full

2. The parallel slaves access the data blocks as an iterator (PX BLOCK ITERATOR) and hand the scan results to the parent operation for hash grouping

3. The parent parallel processes perform the hash group operation on the data passed up by the child processes

4. The child parallel processes (PX SEND HASH) send out the data they have processed; "child" and "parent" are relative terms: we call the sending end the child process and the receiving end the parent process

5. The parent parallel processes (PX RECEIVE) receive the processed data

6. The data is sent in random order to the query coordinator QC (query coordinator), which integrates the results (the group count by object type)

7. Finally, the QC returns the integrated result to the user

The IN-OUT column, unique to parallel execution plans, indicates the direction of data flow between operations:

Parallel to Serial (P->S): a parallel operation sends data to a serial operation, typically the parallel results being sent to the query coordinator QC for final assembly

Parallel to Parallel (P->P): a parallel operation sends data to another parallel operation, generally the data exchange between parent and child parallel processes.

Parallel Combined with Parent (PCWP): the operation is executed by the same slave process as its parent operation, which is also parallel.

Parallel Combined with Child (PCWC): the operation is executed by the same slave process as its child operation, which is also parallel.

Serial to Parallel (S->P): a serial operation sends data to a parallel operation; this appears when the select part of the statement runs serially

(3) Five commonly used parallel initialization parameters

parallel_min_percent 50%        the minimum percentage of the requested degree of parallelism that must be available for the SQL to run; if this threshold is not met, Oracle raises ORA-12827

parallel_adaptive_multi_user TRUE        dynamically adjusts the SQL degree of parallelism according to system load, to obtain the best execution performance

parallel_instance_group        controls which instances the parallel slaves may run on

parallel_max_servers 100        the number of parallel slave processes in the whole instance cannot exceed this value

parallel_min_servers 0        the number of parallel slave processes started at database startup; if the requested degree of parallelism is below this value, the coordinator allocates slaves from this pre-started pool, and if it is above it, the coordinator starts additional slaves to meet the demand
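These settings can be inspected and, where session-modifiable, adjusted as in the following sketch (the values shown above are examples, not recommendations):

```sql
-- inspect all parallel-related initialization parameters
select name, value from v$parameter where name like 'parallel%';

-- session-level example: refuse to run a parallel statement unless at
-- least 50% of the requested slaves are available (else ORA-12827)
alter session set parallel_min_percent = 50;
```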

(4) Testing parallel query performance for DML using the hint method

First, when can parallel execution be used?

1. Object attribute: the parallel keyword is specified when the object is created, and remains in effect long-term

2. Per-SQL enforcement: a hint in the SQL statement requests parallelism and is effective only for that statement, constraining how that SQL executes; this test uses the hint method

LS@LEO> select /*+ parallel(leo_t 4) */ count(*) from leo_t where object_name in (select /*+ parallel(leo_t1 4) */ object_name from leo_t1);

Execution Plan

----------------------------------------------------------

Plan hash value: 3814758652

-------------------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |

-------------------------------------------------------------------------------------------------------------------

| 0 | SELECT STATEMENT | | 1 | 94 | 16 (0)| 00:00:01 | | | |

| 1 | SORT AGGREGATE | | 1 | 94 | | | | | |

| 2 | PX COORDINATOR | | | | | | | | |

| 3 | PX SEND QC (RANDOM) | :TQ10002 | 1 | 94 | | | Q1,02 | P->S | QC (RAND) |

| 4 | SORT AGGREGATE | | 1 | 94 | | | Q1,02 | PCWP | |

|* 5 | HASH JOIN SEMI | | 10337 | 948K| 16 (0)| 00:00:01 | Q1,02 | PCWP | |

| 6 | PX RECEIVE | | 10337 | 282K| 5 (0)| 00:00:01 | Q1,02 | PCWP | |

| 7 | PX SEND HASH | :TQ10000 | 10337 | 282K| 5 (0)| 00:00:01 | Q1,00 | P->P | HASH |

| 8 | PX BLOCK ITERATOR | | 10337 | 282K| 5 (0)| 00:00:01 | Q1,00 | PCWC | |

| 9 | TABLE ACCESS FULL| LEO_T | 10337 | 282K| 5 (0)| 00:00:01 | Q1,00 | PCWP | |

| 10 | PX RECEIVE | | 10700 | 689K| 11 (0)| 00:00:01 | Q1,02 | PCWP | |

| 11 | PX SEND HASH | :TQ10001 | 10700 | 689K| 11 (0)| 00:00:01 | Q1,01 | P->P | HASH |

| 12 | PX BLOCK ITERATOR | | 10700 | 689K| 11 (0)| 00:00:01 | Q1,01 | PCWC | |

| 13 | TABLE ACCESS FULL| LEO_T1 | 10700 | 689K| 11 (0)| 00:00:01 | Q1,01 | PCWP | |

-------------------------------------------------------------------------------------------------------------------

The subquery table leo_t1 is scanned in parallel, then the main-query table leo_t is scanned; the results are sent in random order to the query coordinator QC for integration, and the final result is returned to the user

Predicate Information (identified by operation id):

---------------------------------------------------

5 - access("OBJECT_NAME"="OBJECT_NAME")

Note

-----

- dynamic sampling used for this statement

Statistics

----------------------------------------------------------

28 recursive calls

0 db block gets

466 consistent gets        466 consistent reads, i.e. 466 data blocks processed

0 physical reads

0 redo size

413 bytes sent via SQL*Net to client

381 bytes received via SQL*Net from client

2 SQL*Net roundtrips to/from client

2 sorts (memory)

0 sorts (disk)

1 rows processed

(5) Parallel DDL test

We use the 10046 event to generate a trace file. Level 12 covers SQL parse, execute, and fetch, commits and rollbacks, together with wait events; it is the highest level and includes everything from the lower levels

About the 10046 event: it is an important Oracle event for performance analysis. When activated, it tells the Oracle kernel to trace the session's activity in real time and write it into a trace file. The useful information includes how the SQL is parsed, how bind variables are used, and the wait events occurring in the session. The 10046 event has several levels, each recording a different amount of detail; note that the levels are downward-compatible, i.e. a higher-level trace contains all the information of the levels below it.

Enable the 10046 event: alter session set events '10046 trace name context forever,level 12';

Disable the 10046 event: alter session set events '10046 trace name context off';

Note: Oracle provides the tkprof utility to format the trace file and filter out the useful information

LS@LEO> alter session set events '10046 trace name context forever,level 12';

Session altered.

Table object attribute: the degree of parallelism is specified directly at creation time. As we will see, the trace file below lists the performance figures for the parse, execute, and fetch calls, followed by the wait events; the PX wait events among them show that the statement was executed in parallel

LS@LEO> create table leo_t2 parallel 4 as select * from dba_objects;

Table created.

Format the trace file

[oracle@secdb1 udump]$ pwd

/u01/app/oracle/admin/LEO/udump

[oracle@secdb1 udump]$ tkprof leo_ora_20558.trc leo.txt sys=no

TKPROF: Release 10.2.0.1.0 - Production on Sat Aug 4 14:54:21 2012

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Output

create table leo_t2 parallel 4 as select * from dba_objects

call count cpu elapsed disk query current rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 1 0.01 0.03 0 0 1 0

Execute 1 0.41 4.26 199 2985 1176 10336

Fetch 0 0.00 0.00 0 0 0 0

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 2 0.42 4.29 199 2985 1177 10336

Misses in library cache during parse: 1

Optimizer mode: ALL_ROWS

Parsing user id: 27

Elapsed times include waiting on following events:

Event waited on Times Max. Wait Total Waited

---------------------------------------- Waited ---------- ------------

os thread startup 7 0.21 0.43

PX Deq: Join ACK 5 0.01 0.05

PX qref latch 2 0.01 0.01

PX Deq: Parse Reply 4 0.17 0.23

enq: PS - contention 1 0.00 0.00

PX Deq: Execute Reply 12 1.01 2.24

rdbms ipc reply 3 0.13 0.33

db file scattered read 3 0.00 0.00

log file sync 2 0.00 0.00

PX Deq: Signal ACK 4 0.01 0.01

SQL*Net message to client 1 0.00 0.00

SQL*Net message from client 1 0.00 0.00

********************************************************************************

Index object attribute: creating an index in parallel can greatly speed up execution, provided the system has resources to spare; otherwise it can be counterproductive

Mechanism: the index build is divided into 4 parts handled by 4 parallel slaves; the processed data is sent in random order to the QC, which integrates the results, and the QC finally returns the result to the user, completing the SQL operation

Create a B-tree index

LS@LEO> create index leo2_t_index on leo_t2(object_id) parallel 4;

Index created.

Rebuild the index

LS@LEO> alter index leo2_t_index rebuild parallel 4;

Index altered.

Output

create index leo2_t_index on leo_t2(object_id) parallel 4

call count cpu elapsed disk query current rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 2 0.02 0.06 0 3 0 0

Execute 2 0.11 4.72 80 632 471 0

Fetch 0 0.00 0.00 0 0 0 0

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 4 0.14 4.79 80 635 471 0

Misses in library cache during parse: 2

Optimizer mode: ALL_ROWS

Parsing user id: 27

Elapsed times include waiting on following events:

Event waited on Times Max. Wait Total Waited

---------------------------------------- Waited ---------- ------------

os thread startup 10 0.04 0.25

PX Deq: Join ACK 10 0.01 0.02

enq: PS - contention 4 0.00 0.00

PX qref latch 37 0.09 0.37

PX Deq: Parse Reply 7 0.01 0.06

PX Deq: Execute Reply 81 1.96 3.15

PX Deq: Table Q qref 3 0.24 0.24

log file sync 2 0.00 0.00

PX Deq: Signal ACK 6 0.00 0.01

latch: session allocation 1 0.01 0.01

SQL*Net message to client 2 0.00 0.00

SQL*Net message from client 2 0.00 0.00

********************************************************************************

alter index leo2_t_index rebuild parallel 4

call count cpu elapsed disk query current rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 2 0.02 0.09 0 54 6 0

Execute 2 0.03 0.83 122 390 458 0

Fetch 0 0.00 0.00 0 0 0 0

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 4 0.05 0.93 122 444 464 0

Misses in library cache during parse: 2

Optimizer mode: ALL_ROWS

Parsing user id: 27

Elapsed times include waiting on following events:

Event waited on Times Max. Wait Total Waited

---------------------------------------- Waited ---------- ------------

enq: PS - contention 3 0.00 0.00

PX Deq: Parse Reply 3 0.00 0.00

PX Deq: Execute Reply 84 0.06 0.40

PX qref latch 3 0.08 0.09

PX Deq: Table Q qref 4 0.00 0.01

log file sync 5 0.00 0.00

PX Deq: Signal ACK 7 0.00 0.00

reliable message 2 0.00 0.00

enq: RO - fast object reuse 2 0.00 0.00

db file sequential read 2 0.00 0.00

rdbms ipc reply 4 0.00 0.00

SQL*Net message to client 2 0.00 0.00

SQL*Net message from client 2 0.00 0.00

********************************************************************************

(6) Parallel DML test

Prerequisite: Oracle places restrictions on parallel DML. Parallel DML must be enabled at the session level; otherwise Oracle will not run the DML in parallel even if the SQL requests it

Furthermore, for delete, update, and merge operations Oracle parallelizes only partitioned tables (starting as many parallel slaves as there are partitions); it does not run these operations in parallel on ordinary tables

LS@LEO> alter session enable parallel dml;        enable parallel DML for the session

Session altered.

My table leo_t1 is an ordinary table; liusheng_hash is a partitioned table (with 10 partitions)

LS@LEO> explain plan for delete /*+ parallel(leo_t1 2) */ from leo_t1;

Explained.

LS@LEO> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT        even with a degree of parallelism set, Oracle does not run DML on an ordinary table in parallel; the plan still shows a serial full table scan

--------------------------------------------------------------------------------------------------------------------------------------

Plan hash value: 3964128955

---------------------------------------------------------------------

| Id | Operation | Name | Rows | Cost (%CPU)| Time |

---------------------------------------------------------------------

| 0 | DELETE STATEMENT | | 10700 | 40 (0)| 00:00:01 |

| 1 | DELETE | LEO_T1 | | | |

| 2 | TABLE ACCESS FULL| LEO_T1 | 10700 | 40 (0)| 00:00:01 |

---------------------------------------------------------------------

Note

-----

- dynamic sampling used for this statement

LS@LEO> explain plan for delete /*+ parallel(liusheng_hash 2) */ from liusheng_hash;

Explained.

LS@LEO> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT        Oracle does run DML on a partitioned table in parallel; the IN-OUT column also shows the parallel full table scan

--------------------------------------------------------------------------------------------------------------------------------------

Plan hash value: 1526574995

----------------------------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |

----------------------------------------------------------------------------------------------------------------------------

| 0 | DELETE STATEMENT | | 10996 | 26 (0)| 00:00:01 | | | | | |

| 1 | PX COORDINATOR | | | | | | | | | |

| 2 | PX SEND QC (RANDOM) | :TQ10000 | 10996 | 26 (0)| 00:00:01 | | | Q1,00 | P->S | QC (RAND) |

| 3 | DELETE | LIUSHENG_HASH | | | | | | Q1,00 | PCWP | |

| 4 | PX BLOCK ITERATOR | | 10996 | 26 (0)| 00:00:01 | 1 | 10 | Q1,00 | PCWC | |

| 5 | TABLE ACCESS FULL| LIUSHENG_HASH | 10996 | 26 (0)| 00:00:01 | 1 | 10 | Q1,00 | PCWP | |

----------------------------------------------------------------------------------------------------------------------------

Note

-----

- dynamic sampling used for this statement

LS@LEO> explain plan for update /*+ parallel(liusheng_hash 4) */ liusheng_hash set object_name=object_name||' ';

Explained.

LS@LEO> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT        the same applies to updates

--------------------------------------------------------------------------------------------------------------------------------------

Plan hash value: 225854777

------------------------------------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |

------------------------------------------------------------------------------------------------------------------------------------

| 0 | UPDATE STATEMENT | | 10996 | 708K| 13 (0)| 00:00:01 | | | | | |

| 1 | PX COORDINATOR | | | | | | | | | | |

| 2 | PX SEND QC (RANDOM) | :TQ10000 | 10996 | 708K| 13 (0)| 00:00:01 | | | Q1,00 | P->S | QC (RAND) |

| 3 | UPDATE | LIUSHENG_HASH | | | | | | | Q1,00 | PCWP | |

| 4 | PX BLOCK ITERATOR | | 10996 | 708K| 13 (0)| 00:00:01 | 1 | 10 | Q1,00 | PCWC | |

| 5 | TABLE ACCESS FULL| LIUSHENG_HASH | 10996 | 708K| 13 (0)| 00:00:01 | 1 | 10 | Q1,00 | PCWP | |

------------------------------------------------------------------------------------------------------------------------------------

Note

-----

- dynamic sampling used for this statement

Next, the parallel insert test. For insert, parallelism is meaningful only for insert into ... select ...; a single-row insert into ... values ... gains nothing from it

LS@LEO> explain plan for insert /*+ parallel(leo_t1 4) */ into leo_t1 select /*+ parallel(leo_t2 4) */ * from leo_t2;

Explained.

LS@LEO> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT        the insert and the select each use parallelism separately; the two are independent and do not interfere with each other

--------------------------------------------------------------------------------------------------------------------------------------

Plan hash value: 1922268564

-----------------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |

-----------------------------------------------------------------------------------------------------------------

| 0 | INSERT STATEMENT | | 10409 | 1799K| 11 (0)| 00:00:01 | | | |

| 1 | PX COORDINATOR | | | | | | | | |

| 2 | PX SEND QC (RANDOM) | :TQ10001 | 10409 | 1799K| 11 (0)| 00:00:01 | Q1,01 | P->S | QC (RAND) |

| 3 | LOAD AS SELECT | LEO_T1 | | | | | Q1,01 | PCWP | |

| 4 | PX RECEIVE | | 10409 | 1799K| 11 (0)| 00:00:01 | Q1,01 | PCWP | |

| 5 | PX SEND ROUND-ROBIN| :TQ10000 | 10409 | 1799K| 11 (0)| 00:00:01 | Q1,00 | P->P | RND-ROBIN |

| 6 | PX BLOCK ITERATOR | | 10409 | 1799K| 11 (0)| 00:00:01 | Q1,00 | PCWC | |

| 7 | TABLE ACCESS FULL| LEO_T2 | 10409 | 1799K| 11 (0)| 00:00:01 | Q1,00 | PCWP | |

-----------------------------------------------------------------------------------------------------------------

Note

-----

- dynamic sampling used for this statement

The following insert has no parallel hint on the select, so let us check whether the select part runs serially

LS@LEO> explain plan for insert /*+ parallel(leo_t1 4) */ into leo_t1 select * from leo_t2;

Explained.

LS@LEO> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT        the IN-OUT column (inter-process data flow) shows S->P, Serial to Parallel: a serial operation (the full table scan) sends data to a parallel operation; the select part here is serial, which is why this appears

--------------------------------------------------------------------------------------------------------------------------------------

Plan hash value: 2695467291

------------------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |

------------------------------------------------------------------------------------------------------------------

| 0 | INSERT STATEMENT | | 10409 | 1799K| 40 (0)| 00:00:01 | | | |

| 1 | PX COORDINATOR | | | | | | | | |

| 2 | PX SEND QC (RANDOM) | :TQ10001 | 10409 | 1799K| 40 (0)| 00:00:01 | Q1,01 | P->S | QC (RAND) |

| 3 | LOAD AS SELECT | LEO_T1 | | | | | Q1,01 | PCWP | |

| 4 | BUFFER SORT | | | | | | Q1,01 | PCWC | |

| 5 | PX RECEIVE | | 10409 | 1799K| 40 (0)| 00:00:01 | Q1,01 | PCWP | |

| 6 | PX SEND ROUND-ROBIN| :TQ10000 | 10409 | 1799K| 40 (0)| 00:00:01 | | S->P | RND-ROBIN |

| 7 | TABLE ACCESS FULL | LEO_T2 | 10409 | 1799K| 40 (0)| 00:00:01 | | | |

------------------------------------------------------------------------------------------------------------------

Note

-----

- dynamic sampling used for this statement

The following insert has no parallel hint on the insert itself; let us see what the effect is

LS@LEO> explain plan for insert into leo_t1 select /*+ parallel(leo_t2 4) */ * from leo_t2;

Explained.

LS@LEO> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT        TABLE ACCESS FULL - PCWP shows the full scan ran in parallel, and PX SEND QC (RANDOM) - P->S shows a parallel operation sending data to a serial one: the select ran in parallel while the subsequent insert ran serially

--------------------------------------------------------------------------------------------------------------------------------------

Plan hash value: 985193522

--------------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |

--------------------------------------------------------------------------------------------------------------

| 0 | INSERT STATEMENT | | 10409 | 1799K| 11 (0)| 00:00:01 | | | |

| 1 | PX COORDINATOR | | | | | | | | |

| 2 | PX SEND QC (RANDOM)| :TQ10000 | 10409 | 1799K| 11 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |

| 3 | PX BLOCK ITERATOR | | 10409 | 1799K| 11 (0)| 00:00:01 | Q1,00 | PCWC | |

| 4 | TABLE ACCESS FULL| LEO_T2 | 10409 | 1799K| 11 (0)| 00:00:01 | Q1,00 | PCWP | |

--------------------------------------------------------------------------------------------------------------

Note

-----

- dynamic sampling used for this statement

(7) Three ways to enable parallelism

1. Hint method: effective temporarily, per statement

LS@LEO> set autotrace trace exp

LS@LEO> select /*+ parallel(leo_t1 4) */ * from leo_t1;

LS@LEO> select /*+ parallel(leo_t1 4) */ count(*) from leo_t1;

Execution Plan (hint method)

----------------------------------------------------------

Plan hash value: 2648044456

--------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |

--------------------------------------------------------------------------------------------------------

| 0 | SELECT STATEMENT | | 1 | 11 (0)| 00:00:01 | | | |

| 1 | SORT AGGREGATE | | 1 | | | | | |

| 2 | PX COORDINATOR | | | | | | | |

| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |

| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |

| 5 | PX BLOCK ITERATOR | | 10700 | 11 (0)| 00:00:01 | Q1,00 | PCWC | |

| 6 | TABLE ACCESS FULL| LEO_T1 | 10700 | 11 (0)| 00:00:01 | Q1,00 | PCWP | |

--------------------------------------------------------------------------------------------------------

Note

-----

- dynamic sampling used for this statement

2. alter table definition method: effective long-term

LS@LEO> alter table leo_t1 parallel 4;

Table altered.

LS@LEO> select count(*) from leo_t1;

Execution Plan (object definition method)

----------------------------------------------------------

Plan hash value: 2648044456

--------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |

--------------------------------------------------------------------------------------------------------

| 0 | SELECT STATEMENT | | 1 | 11 (0)| 00:00:01 | | | |

| 1 | SORT AGGREGATE | | 1 | | | | | |

| 2 | PX COORDINATOR | | | | | | | |

| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |

| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |

| 5 | PX BLOCK ITERATOR | | 10700 | 11 (0)| 00:00:01 | Q1,00 | PCWC | |

| 6 | TABLE ACCESS FULL| LEO_T1 | 10700 | 11 (0)| 00:00:01 | Q1,00 | PCWP | |

--------------------------------------------------------------------------------------------------------

Note

-----

- dynamic sampling used for this statement

3. alter session force parallel: force a degree of parallelism

LS@LEO> alter table leo_t1 parallel 1;        first set the table's degree of parallelism back to 1

Table altered.

LS@LEO> alter session force parallel query parallel 4;        then force a degree of parallelism of 4 for the session

Session altered.

LS@LEO> select count(*) from leo_t1;

Execution Plan (SQL forced to run with degree of parallelism 4)

----------------------------------------------------------

Plan hash value: 2648044456

--------------------------------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |

--------------------------------------------------------------------------------------------------------

| 0 | SELECT STATEMENT | | 1 | 11 (0)| 00:00:01 | | | |

| 1 | SORT AGGREGATE | | 1 | | | | | |

| 2 | PX COORDINATOR | | | | | | | |

| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |

| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |

| 5 | PX BLOCK ITERATOR | | 10700 | 11 (0)| 00:00:01 | Q1,00 | PCWC | |

| 6 | TABLE ACCESS FULL| LEO_T1 | 10700 | 11 (0)| 00:00:01 | Q1,00 | PCWP | |

--------------------------------------------------------------------------------------------------------

Note

-----

- dynamic sampling used for this statement

(8) /*+ append */ direct-path load

Direct-path load: the data bypasses the db_buffer_cache memory area and is written directly into the data files; in practice it is appended after the end of the segment, rather than inserted into free space found within the segment

LS@LEO> create table leo_t3 as select * from dba_objects;        create table leo_t3

Table created.

LS@LEO> insert /*+ append*/ into leo_t3 select * from dba_objects;        direct-path load

10337 rows created.

LS@LEO> create table leo_t4 as select * from leo_t3 where rownum<10000;        create table leo_t4

Table created.

LS@LEO> select segment_name,extent_id,bytes from user_extents where segment_name='LEO_T4';        table leo_t4 occupies 17 extents (extent_id 0 through 16)

SEGMENT_NAME EXTENT_ID BYTES

--------------------------------------------------------------------------------- ---------- ----------

LEO_T4 0 65536

LEO_T4 1 65536

LEO_T4 2 65536

LEO_T4 3 65536

LEO_T4 4 65536

LEO_T4 5 65536

LEO_T4 6 65536

LEO_T4 7 65536

LEO_T4 8 65536

LEO_T4 9 65536

LEO_T4 10 65536

LEO_T4 11 65536

LEO_T4 12 65536

LEO_T4 13 65536

LEO_T4 14 65536

LEO_T4 15 65536

LEO_T4 16 1048576

LS@LEO> delete from leo_t4;        delete all rows

9999 rows deleted.

LS@LEO> commit;

Commit complete.

LS@LEO> select segment_name,extent_id,bytes from user_extents where segment_name='LEO_T4';        Why does the table still occupy 17 extents after the delete? Because Oracle does not physically remove the data on a delete; it merely marks the rows as unusable. This also explains why disk space is not reclaimed after a delete.

SEGMENT_NAME EXTENT_ID BYTES

--------------------------------------------------------------------------------- ---------- ----------

LEO_T4 0 65536

LEO_T4 1 65536

LEO_T4 2 65536

LEO_T4 3 65536

LEO_T4 4 65536

LEO_T4 5 65536

LEO_T4 6 65536

LEO_T4 7 65536

LEO_T4 8 65536

LEO_T4 9 65536

LEO_T4 10 65536

LEO_T4 11 65536

LEO_T4 12 65536

LEO_T4 13 65536

LEO_T4 14 65536

LEO_T4 15 65536

LEO_T4 16 1048576

LS@LEO> insert into leo_t4 select * from leo_t3 where rownum<10000;        conventional load: Oracle searches the segment for free space to insert into; note that it reuses the original 17 extents

9999 rows created.

LS@LEO> select segment_name,extent_id,bytes from user_extents where segment_name='LEO_T4';

SEGMENT_NAME EXTENT_ID BYTES

--------------------------------------------------------------------------------- ---------- ----------

LEO_T4 0 65536

LEO_T4 1 65536

LEO_T4 2 65536

LEO_T4 3 65536

LEO_T4 4 65536

LEO_T4 5 65536

LEO_T4 6 65536

LEO_T4 7 65536

LEO_T4 8 65536

LEO_T4 9 65536

LEO_T4 10 65536

LEO_T4 11 65536

LEO_T4 12 65536

LEO_T4 13 65536

LEO_T4 14 65536

LEO_T4 15 65536

LEO_T4 16 1048576

LS@LEO> delete from leo_t4;        delete all rows

9999 rows deleted.

LS@LEO> commit;

Commit complete.

LS@LEO> select count(*) from leo_t4;        the row count is 0

COUNT(*)

----------

0

LS@LEO> select segment_name,extent_id,bytes from user_extents where segment_name='LEO_T4';        the table still occupies 17 extents; the blocks contain data, but it can be overwritten, so we regard them as free blocks

SEGMENT_NAME EXTENT_ID BYTES

--------------------------------------------------------------------------------- ---------- ----------

LEO_T4 0 65536

LEO_T4 1 65536

LEO_T4 2 65536

LEO_T4 3 65536

LEO_T4 4 65536

LEO_T4 5 65536

LEO_T4 6 65536

LEO_T4 7 65536

LEO_T4 8 65536

LEO_T4 9 65536

LEO_T4 10 65536

LEO_T4 11 65536

LEO_T4 12 65536

LEO_T4 13 65536

LEO_T4 14 65536

LEO_T4 15 65536

LEO_T4 16 1048576

LS@LEO> insert /*+ append */ into leo_t4 select * from leo_t3 where rownum<10000;        direct-path load: Oracle inserted the new data directly into 20 new extents instead of using the free blocks in the original 17, confirming that it does not search the segment for free blocks

9999 rows created.

LS@LEO> commit;        only after the commit does Oracle move the HWM (high-water mark) above the newly loaded blocks; without a commit the HWM is not moved, so the change is not visible in the data dictionary (the 20 extra extents are not shown). If the transaction rolls back, the HWM does not need to move, as if nothing had happened

Commit complete.

LS@LEO> select segment_name,extent_id,bytes from user_extents where segment_name='LEO_T4';

SEGMENT_NAME EXTENT_ID BYTES

--------------------------------------------------------------------------------- ---------- ----------

LEO_T4 0 65536

LEO_T4 1 65536

LEO_T4 2 65536

LEO_T4 3 65536

LEO_T4 4 65536

LEO_T4 5 65536

LEO_T4 6 65536

LEO_T4 7 65536

LEO_T4 8 65536

LEO_T4 9 65536

LEO_T4 10 65536

LEO_T4 11 65536

LEO_T4 12 65536

LEO_T4 13 65536

LEO_T4 14 65536

LEO_T4 15 65536

LEO_T4 16 65536

LEO_T4 17 65536

LEO_T4 18 65536

LEO_T4 19 65536

LEO_T4 20 65536

LEO_T4 21 65536

LEO_T4 22 65536

LEO_T4 23 65536

LEO_T4 24 65536

LEO_T4 25 65536

LEO_T4 26 65536

LEO_T4 27 65536

LEO_T4 28 65536

LEO_T4 29 65536

LEO_T4 30 65536

LEO_T4 31 65536

LEO_T4 32 65536

LEO_T4 33 65536

LEO_T4 34 65536

LEO_T4 35 65536

LEO_T4 36 65536

37 rows selected.

(9) /*+ append */ direct-path load and redo

LS@LEO> create table leo_t5 as select object_id,object_name from dba_objects;        create table leo_t5

Table created.

LS@LEO> create table leo_t6 as select object_id,object_name from dba_objects;        create table leo_t6

Table created.

LS@LEO> alter table leo_t5 logging;        set the table to LOGGING mode so it generates redo

Table altered.

LS@LEO> truncate table leo_t5;        truncate the table

Table truncated.

LS@LEO> set autotrace trace stat;        turn on statistics reporting

insert into leo_t5 select * from leo_t6;        conventional load

LS@LEO>

10340 rows created.

Statistics

----------------------------------------------------------

197 recursive calls

185 db block gets

92 consistent gets

60 physical reads

37128 redo size        37128 bytes of redo generated

664 bytes sent via SQL*Net to client

571 bytes received via SQL*Net from client

4 SQL*Net roundtrips to/from client

3 sorts (memory)

0 sorts (disk)

10340 rows processed

LS@LEO> rollback;        roll back

Rollback complete.

LS@LEO> insert /*+ append */ into leo_t5 select * from leo_t6;        direct-path load

10340 rows created.

Statistics

----------------------------------------------------------

111 recursive calls

180 db block gets

79 consistent gets

21 physical reads

36640 redo size        36640 bytes of redo generated

664 bytes sent via SQL*Net to client

585 bytes received via SQL*Net from client

4 SQL*Net roundtrips to/from client

2 sorts (memory)

0 sorts (disk)

10340 rows processed

Summary: the conventional load and the direct-path load generated roughly the same amount of redo. Redo is generated whenever the underlying data blocks change; both methods modify data blocks, and the redo is kept as the basis for recovery, so with the table in LOGGING mode there is no great difference.
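The table above was deliberately kept in LOGGING mode. Where the reduced-redo benefit of a direct-path load matters, the table can be switched to NOLOGGING first; a sketch (the size of the redo drop depends on archivelog mode and should be verified on your own system):

```sql
alter table leo_t5 nologging;                 -- allow minimal-redo direct path
insert /*+ append */ into leo_t5 select * from leo_t6;
commit;                                       -- required before reading leo_t5 again
alter table leo_t5 logging;                   -- restore the original attribute
```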

(10) Direct-path load and indexes

LS@LEO> set autotrace trace stat;

LS@LEO> insert /*+ append */ into leo_t5 select * from leo_t6;        direct-path load; the table has no index

10340 rows created.

Statistics

----------------------------------------------------------

111 recursive calls

175 db block gets

81 consistent gets

15 physical reads

36816 redo size        36816 bytes of redo generated

664 bytes sent via SQL*Net to client

585 bytes received via SQL*Net from client

4 SQL*Net roundtrips to/from client

2 sorts (memory)

0 sorts (disk)

10340 rows processed

LS@LEO> create index leo_t5_index on leo_t5(object_id);        create an index on the table

Index created.

LS@LEO> rollback;        roll back

Rollback complete.

LS@LEO> insert /*+ append */ into leo_t5 select * from leo_t6;        direct-path load; the table now has an index

10340 rows created.

Statistics

----------------------------------------------------------

120 recursive calls

193 db block gets

85 consistent gets

22 physical reads

37344 redo size        37344 bytes of redo generated

664 bytes sent via SQL*Net to client

585 bytes received via SQL*Net from client

4 SQL*Net roundtrips to/from client

3 sorts (memory)

0 sorts (disk)

10340 rows processed

Summary: with the index in place, the direct-path load generated somewhat more redo than without it. My test data set is small, so the increase is not obvious; if a real production load generates a large amount of redo, consider dropping the index first, loading the data, and then recreating (rebuilding) the index
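The drop-load-recreate pattern suggested above can be sketched as follows (the degree of parallelism and NOLOGGING are illustrative choices, not requirements):

```sql
drop index leo_t5_index;                      -- avoid index-maintenance redo

insert /*+ append */ into leo_t5 select * from leo_t6;
commit;

-- recreate the index afterwards, in parallel and with minimal redo
create index leo_t5_index on leo_t5(object_id) parallel 4 nologging;
alter index leo_t5_index noparallel;          -- reset the attribute once built
```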

(11) Direct-path load and parallelism

Direct-path loading and parallelism can be combined, which can greatly improve SQL execution efficiency

LS@LEO> alter session enable parallel dml;        enable parallel DML for the session

Session altered.

LS@LEO> alter session set events '10046 trace name context forever,level 12';        use a trace file to track the SQL performance figures

Session altered.

LS@LEO> insert /*+ append parallel(leo_t5,2) */ into leo_t5 select * from leo_t6;        direct-path load + parallel insert

10340 rows created.

LS@LEO> rollback;

Rollback complete.

LS@LEO> insert /*+ parallel(leo_t5,2) */ into leo_t5 select * from leo_t6;        parallel insert

10340 rows created.

LS@LEO> rollback;

Rollback complete.

LS@LEO> insert /*+ append */ into leo_t5 select * from leo_t6;        direct-path load only

10340 rows created.

LS@LEO> rollback;

Rollback complete.

LS@LEO> insert into leo_t5 select * from leo_t6; plain insert, no special features

10340 rows created.

LS@LEO> commit; commit

Commit complete.

[oracle@secdb1 udump]$ tkprof leo_ora_20558.trc leo.txt sys=no format the raw trace file with tkprof

TKPROF: Release 10.2.0.1.0 - Production on Sun Aug 5 22:13:38 2012

Copyright (c) 1982, 2005, Oracle. All rights reserved.

insert /*+ append parallel(leo_t5,2) */ into leo_t5 select * from leo_t6

call count cpu elapsed disk query current rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 1 0.00 0.01 0 1 0 0

Execute 1 0.03 2.51 8 46 67 10340

Fetch 0 0.00 0.00 0 0 0 0

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 2 0.04 2.53 8 47 67 10340

Misses in library cache during parse: 1

Optimizer mode: ALL_ROWS

Parsing user id: 27

Rows Row Source Operation

------- ---------------------------------------------------

2 PX COORDINATOR (cr=46 pr=0 pw=0 time=2201632 us)

0 PX SEND QC (RANDOM) :TQ10001 (cr=0 pr=0 pw=0 time=0 us)

0 LOAD AS SELECT (cr=0 pr=0 pw=0 time=0 us)

0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)

0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)

0 PX SEND ROUND-ROBIN :TQ10000 (cr=0 pr=0 pw=0 time=0 us)

10340 TABLE ACCESS FULL LEO_T6 (cr=42 pr=0 pw=0 time=1356361 us)

insert /*+ parallel(leo_t5,2) */ into leo_t5 select * from leo_t6

call count cpu elapsed disk query current rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 1 0.00 0.01 0 1 0 0

Execute 1 0.02 1.66 7 44 64 10340

Fetch 0 0.00 0.00 0 0 0 0

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 2 0.02 1.67 7 45 64 10340

Misses in library cache during parse: 1

Optimizer mode: ALL_ROWS

Parsing user id: 27

Rows Row Source Operation

------- ---------------------------------------------------

2 PX COORDINATOR (cr=44 pr=0 pw=0 time=1209712 us)

0 PX SEND QC (RANDOM) :TQ10001 (cr=0 pr=0 pw=0 time=0 us)

0 LOAD AS SELECT (cr=0 pr=0 pw=0 time=0 us)

0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)

0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)

0 PX SEND ROUND-ROBIN :TQ10000 (cr=0 pr=0 pw=0 time=0 us)

10340 TABLE ACCESS FULL LEO_T6 (cr=42 pr=0 pw=0 time=186185 us)

insert /*+ append */ into leo_t5 select * from leo_t6

call count cpu elapsed disk query current rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 1 0.00 0.00 0 1 0 0

Execute 1 0.06 0.24 62 113 373 10340

Fetch 0 0.00 0.00 0 0 0 0

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 2 0.06 0.24 62 114 373 10340

Misses in library cache during parse: 1

Optimizer mode: ALL_ROWS

Parsing user id: 27

Rows Row Source Operation

------- ---------------------------------------------------

1 LOAD AS SELECT (cr=113 pr=62 pw=39 time=241775 us)

10340 TABLE ACCESS FULL LEO_T6 (cr=42 pr=0 pw=0 time=62104 us) no parallel operation in this plan

insert into leo_t5 select * from leo_t6

call count cpu elapsed disk query current rows

------- ------ -------- ---------- ---------- ---------- ---------- ----------

Parse 1 0.00 0.00 0 43 0 0

Execute 1 0.14 0.54 100 101 1022 10340

Fetch 0 0.00 0.00 0 0 0 0

------- ------ -------- ---------- ---------- ---------- ---------- ----------

total 2 0.15 0.55 100 144 1022 10340

Misses in library cache during parse: 1

Optimizer mode: ALL_ROWS

Parsing user id: 27

Rows Row Source Operation

------- ---------------------------------------------------

10340 TABLE ACCESS FULL LEO_T6 (cr=42 pr=0 pw=0 time=744520 us) only a full table scan

Summary: insert /*+ append parallel(leo_t5,2) */ into leo_t5 select * from leo_t6 and insert /*+ parallel(leo_t5,2) */ into leo_t5 select * from leo_t6 produce identical execution plans. When a parallel insert runs with parallel DML enabled, Oracle uses direct-path loading by default, so the append hint is redundant.

Note: after alter session disable parallel dml; Oracle disables parallel DML and the parallel hint no longer takes effect. At that point insert /*+ append parallel(leo_t5,2) */ and insert /*+ append */ should produce the same plan: a plain direct-path load with no parallelism.

(12) Direct-path load and SQL*Loader

SQL*Loader (sqlldr) is the text-loading utility we use most often; it bulk-loads formatted text files into the database. Here we compare the performance of conventional-path, direct-path, and parallel direct-path loads.

-rwxrwxrwx 1 oracle oinstall 283 Aug 9 00:11 leo_test.ctl control file

-rwxrwxrwx 1 oracle oinstall 8983596 Aug 8 20:57 leo_test.data data file: 100,000 rows, 9 fields

-rwxrwxrwx 1 oracle oinstall 2099 Aug 9 00:15 leo_test.log log file
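The notes never show the control file itself. Judging from the column list and "Term |" delimiters printed in the log below, it presumably looks something like this (a hypothetical sketch reconstructed from the log, not the actual leo_test.ctl):

```
LOAD DATA
INFILE '/home/oracle/leo_test.data'
APPEND
INTO TABLE leo_test_sqlload
FIELDS TERMINATED BY '|'
TRAILING NULLCOLS
(
  START_TIME   DATE 'YYYY-MM-DD HH24:MI:SS',
  END_TIME     DATE 'YYYY-MM-DD HH24:MI:SS',
  PROTOCOL,
  PRIVATE_IP,
  PRIVATE_PORT,
  SRC_IP,
  SRC_PORT,
  DEST_IP,
  DEST_PORT
)
```

The APPEND keyword here matches the "Insert option in effect for this table: APPEND" line in the log: new rows are added after existing data rather than replacing it.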

1. Conventional-path load: 100,000 rows -> table LEO_TEST_SQLLOAD

sqlldr userid=ls/ls control=leo_test.ctl conventional-path load

LS@LEO> select count(*) from leo_test_sqlload;

COUNT(*)

----------

100000

SQL*Loader: Release 10.2.0.1.0 - Production on Thu Aug 9 00:14:15 2012

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Control File: leo_test.ctl

Data File: /home/oracle/leo_test.data

Bad File: leo_test.bad rejected records are written here

Discard File: none specified

(Allow all discards)

Number to load: ALL

Number to skip: 0

Errors allowed: 50

Bind array: 64 rows, maximum of 256000 bytes

Continuation: none specified

Path used: Conventional conventional path: rows travel through the buffer cache on their way into the table

Table LEO_TEST_SQLLOAD, loaded from every logical record.

Insert option in effect for this table: APPEND new rows are appended after existing data rather than overwriting it

TRAILING NULLCOLS option in effect

Column Name Position Len Term Encl Datatype column definitions

------------------------------ ---------- ----- ---- ---- ---------------------

START_TIME FIRST * | DATE YYYY-MM-DD HH24:MI:SS

END_TIME NEXT * | DATE YYYY-MM-DD HH24:MI:SS

PROTOCOL NEXT * | CHARACTER

PRIVATE_IP NEXT * | CHARACTER

PRIVATE_PORT NEXT * | CHARACTER

SRC_IP NEXT * | CHARACTER

SRC_PORT NEXT * | CHARACTER

DEST_IP NEXT * | CHARACTER

DEST_PORT NEXT * | CHARACTER

Table LEO_TEST_SQLLOAD:

100000 Rows successfully loaded. 100,000 rows loaded successfully

0 Rows not loaded due to data errors.

0 Rows not loaded because all WHEN clauses were failed.

0 Rows not loaded because all fields were null.

Space allocated for bind array: 148608 bytes(64 rows)

Read buffer bytes: 1048576

Total logical records skipped: 0

Total logical records read: 100000

Total logical records rejected: 0

Total logical records discarded: 0

Run began on Thu Aug 09 00:14:15 2012

Run ended on Thu Aug 09 00:15:21 2012

Elapsed time was: 00:01:05.60 elapsed: 65.6 seconds

CPU time was: 00:00:00.81

2. Direct-path load: 100,000 rows -> table LEO_TEST_SQLLOAD1

LS@LEO> select df.tablespace_name "Tablespace",totalspace "Total MB",freespace "Free MB",round((1-freespace/totalspace)*100,2) "Used %"

from (select tablespace_name,round(sum(bytes)/1024/1024) totalspace from dba_data_files group by tablespace_name) df,

(select tablespace_name,round(sum(bytes)/1024/1024) freespace from dba_free_space group by tablespace_name) fs

where df.tablespace_name=fs.tablespace_name order by df.tablespace_name ;

Tablespace Total MB Free MB Used %

------------------------------ ---------- ------------- ----------

CTXSYS 32 27 15.63

EXAMPLE 200 199 .5

SYSAUX 325 266 18.15

SYSTEM 325 84 74.15

UNDOTBS 200 189 5.5

USERS 600 501 16.5 space usage before table leo_test_sqlload1 is loaded

sqlldr userid=ls/ls control=leo_test1.ctl data=leo_test.data log=leo_test1.log direct=true direct-path load of 100,000 rows

LS@LEO> select count(*) from leo_test_sqlload1;

COUNT(*)

----------

100000 (3M)

LS@LEO> select df.tablespace_name "Tablespace",totalspace "Total MB",freespace "Free MB",round((1-freespace/totalspace)*100,2) "Used %"

from

(select tablespace_name,round(sum(bytes)/1024/1024) totalspace from dba_data_files group by tablespace_name) df,

(select tablespace_name,round(sum(bytes)/1024/1024) freespace from dba_free_space group by tablespace_name) fs

where df.tablespace_name=fs.tablespace_name order by df.tablespace_name ;

Tablespace Total MB Free MB Used %

------------------------------ ---------- ------------- ----------

CTXSYS 32 27 15.63

EXAMPLE 200 199 .5

SYSAUX 325 266 18.15

SYSTEM 325 84 74.15

UNDOTBS 200 189 5.5

USERS 600 498 17 after loading 100,000 rows the table consumed 3 MB

SQL*Loader: Release 10.2.0.1.0 - Production on Thu Aug 9 01:07:52 2012

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Control File: leo_test1.ctl

Data File: leo_test.data 100,000 rows, 9 fields

Bad File: leo_test.bad

Discard File: none specified

(Allow all discards)

Number to load: ALL

Number to skip: 0

Errors allowed: 50

Continuation: none specified

Path used: Direct direct path: data bypasses the buffer cache and the SQL engine and is written straight into the table

Table LEO_TEST_SQLLOAD1, loaded from every logical record.

Insert option in effect for this table: APPEND new rows are appended after existing data rather than overwriting it

TRAILING NULLCOLS option in effect

Column Name Position Len Term Encl Datatype column definitions

------------------------------ ---------- ----- ---- ---- ---------------------

START_TIME FIRST * | DATE YYYY-MM-DD HH24:MI:SS

END_TIME NEXT * | DATE YYYY-MM-DD HH24:MI:SS

PROTOCOL NEXT * | CHARACTER

PRIVATE_IP NEXT * | CHARACTER

PRIVATE_PORT NEXT * | CHARACTER

SRC_IP NEXT * | CHARACTER

SRC_PORT NEXT * | CHARACTER

DEST_IP NEXT * | CHARACTER

DEST_PORT NEXT * | CHARACTER

Table LEO_TEST_SQLLOAD1:

100000 Rows successfully loaded. 100,000 rows loaded, consuming 3 MB of disk space

0 Rows not loaded due to data errors.

0 Rows not loaded because all WHEN clauses were failed.

0 Rows not loaded because all fields were null.

Date cache:

Max Size: 1000

Entries : 65

Hits : 199935

Misses : 0

Bind array size not used in direct path.

Column array rows : 5000

Stream buffer bytes: 256000

Read buffer bytes: 1048576

Total logical records skipped: 0

Total logical records read: 100000

Total logical records rejected: 0

Total logical records discarded: 0

Total stream buffers loaded by SQL*Loader main thread: 26

Total stream buffers loaded by SQL*Loader load thread: 17

Run began on Thu Aug 09 01:07:52 2012

Run ended on Thu Aug 09 01:07:56 2012

Elapsed time was: 00:00:03.53 elapsed: 3.53 seconds, about 94.6% less than the conventional load's 65.6 seconds

CPU time was: 00:00:00.25

Summary: direct-path loading is much faster than conventional loading. When system load is light and resources are plentiful, consider direct-path (direct=true) for bulk imports: it reduces I/O and memory overhead while improving load throughput.
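The saving quoted above comes straight from the two elapsed times in the SQL*Loader logs:

```python
# Elapsed times from the two SQL*Loader logs above.
conventional_s = 65.60  # conventional-path load
direct_s = 3.53         # direct-path load

saved_pct = (conventional_s - direct_s) / conventional_s * 100
print(f"direct path saved {saved_pct:.1f}% of the elapsed time")
# prints: direct path saved 94.6% of the elapsed time
```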

3. Parallel direct-path load: 100,000 rows -> table LEO_TEST_SQLLOAD2

Tablespace Total MB Free MB Used %

------------------------------ ---------- ------------- ----------

USERS 600 498 17 tablespace state before the load

sqlldr userid=ls/ls control=leo_test2.ctl data=leo_test.data log=leo_test2.log direct=true parallel=true parallel direct-path load of 100,000 rows

LS@LEO> select count(*) from leo_test_sqlload2;

COUNT(*)

----------

100000 (8M)

Tablespace Total MB Free MB Used %

------------------------------ ---------- ------------- ----------

USERS 600 490 18.33 after loading 100,000 rows the table consumed 8 MB

SQL*Loader: Release 10.2.0.1.0 - Production on Thu Aug 9 07:25:00 2012

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Control File: leo_test2.ctl

Data File: leo_test.data 100,000 rows, 9 fields

Bad File: leo_test.bad

Discard File: none specified

(Allow all discards)

Number to load: ALL

Number to skip: 0

Errors allowed: 50

Continuation: none specified

Path used: Direct - with parallel option. parallel plus direct path combined

Table LEO_TEST_SQLLOAD2, loaded from every logical record.

Insert option in effect for this table: APPEND new rows are appended after existing data rather than overwriting it

TRAILING NULLCOLS option in effect

Column Name Position Len Term Encl Datatype column definitions

------------------------------ ---------- ----- ---- ---- ---------------------

START_TIME FIRST * | DATE YYYY-MM-DD HH24:MI:SS

END_TIME NEXT * | DATE YYYY-MM-DD HH24:MI:SS

PROTOCOL NEXT * | CHARACTER

PRIVATE_IP NEXT * | CHARACTER

PRIVATE_PORT NEXT * | CHARACTER

SRC_IP NEXT * | CHARACTER

SRC_PORT NEXT * | CHARACTER

DEST_IP NEXT * | CHARACTER

DEST_PORT NEXT * | CHARACTER

Table LEO_TEST_SQLLOAD2:

100000 Rows successfully loaded. 100,000 rows loaded, consuming 8 MB of disk space

0 Rows not loaded due to data errors.

0 Rows not loaded because all WHEN clauses were failed.

0 Rows not loaded because all fields were null.

Date cache:

Max Size: 1000

Entries : 65

Hits : 199935

Misses : 0

Bind array size not used in direct path.

Column array rows : 5000

Stream buffer bytes: 256000

Read buffer bytes: 1048576

Total logical records skipped: 0

Total logical records read: 100000

Total logical records rejected: 0

Total logical records discarded: 0

Total stream buffers loaded by SQL*Loader main thread: 26

Total stream buffers loaded by SQL*Loader load thread: 17

Run began on Thu Aug 09 07:25:00 2012

Run ended on Thu Aug 09 07:25:13 2012

Elapsed time was: 00:00:12.77 elapsed: 12.77 seconds, noticeably slower than the serial direct-path load's 3.53 seconds

CPU time was: 00:00:00.98

Summary: on this data set the parallel direct-path load was actually slower than the serial direct-path load (12.77 seconds versus 3.53 seconds) and used more space (8 MB versus 3 MB), likely because coordination overhead dominates on only 100,000 rows and each parallel slave loads into its own extents above the high-water mark. Parallel is not automatically better than serial; it depends first on the workload type and then on available resources. On genuinely large data sets with resources to spare, combining parallel execution with direct-path loading can raise throughput dramatically. Thinking about how a technique serves the business matters more than the technique itself.
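The log figures make the trade-off on this small test concrete:

```python
# Elapsed times and space usage from the SQL*Loader runs above.
direct_s, parallel_direct_s = 3.53, 12.77   # seconds
direct_mb, parallel_direct_mb = 3, 8        # MB consumed in USERS

slowdown = parallel_direct_s / direct_s
extra_space = parallel_direct_mb - direct_mb
print(f"parallel direct load took {slowdown:.1f}x the serial direct time")
print(f"and used {extra_space} MB more space")
# prints: parallel direct load took 3.6x the serial direct time
#         and used 5 MB more space
```

For a load this small, the startup and coordination cost of the parallel slaves swamps any benefit; the technique pays off only at much larger volumes.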

(13) Effect of SQL*Loader direct-path loading on indexes

The question here is whether an index on the target table remains usable after SQL*Loader loads data into it.

Non-constraint index: after a direct-path load, SQL*Loader maintains the index, and it stays VALID.

Constraint-backed index (e.g. primary key, unique): after a direct-path load the data is inserted, but the index can be left UNUSABLE and must then be rebuilt.

1. Non-constraint index: the direct-path load maintains the index, so it does not become invalid

LS@LEO> select count(*) from leo_test_sqlload1; the table holds 100,000 rows

COUNT(*)

----------

100000

LS@LEO> create index leo_test_sqlload1_index on leo_test_sqlload1(private_ip); create a B-tree index on private_ip

Index created.

LS@LEO> select status from user_indexes where table_name='LEO_TEST_SQLLOAD1'; check that the index is valid

STATUS

--------

VALID

sqlldr userid=ls/ls control=leo_test1.ctl data=leo_test.data log=leo_test1.log direct=true the direct-path load maintains the index afterwards

SQL*Loader: Release 10.2.0.1.0 - Production on Thu Aug 9 15:27:03 2012

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Load completed - logical record count 100000. 100,000 rows loaded successfully

LS@LEO> select count(*) from leo_test_sqlload1; the table now holds 200,000 rows

COUNT(*)

----------

200000

LS@LEO> select status from user_indexes where table_name='LEO_TEST_SQLLOAD1'; still VALID: SQL*Loader automatically maintains non-constraint indexes

STATUS

--------

VALID

2. Constraint-backed index (e.g. primary key, unique): after the direct-path load the data is in the table, but the index is left UNUSABLE and must be rebuilt

LS@LEO> create table leo_test_sqlload3

(

START_TIME date,

END_TIME date,

PROTOCOL varchar(20),

PRIVATE_IP varchar(20),

PRIVATE_PORT varchar(20) constraint pk_leo_test_sqlload3 primary key , we create the table with a primary key on this column

SRC_IP varchar(20),

SRC_PORT varchar(20),

DEST_IP varchar(20),

DEST_PORT varchar(20)

);

Table created.

LS@LEO> select * from leo_test_sqlload3; no data in the table yet

no rows selected

sqlldr userid=ls/ls control=leo_test3.ctl data=leo_test1.data log=leo_test3.log direct=true

SQL*Loader: Release 10.2.0.1.0 - Production on Thu Aug 9 15:49:10 2012

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Load completed - logical record count 100. 100 rows loaded successfully

LS@LEO> select * from leo_test_sqlload3; the data is loaded, but the primary-key index on PRIVATE_PORT is now unusable, because every PRIVATE_PORT value is identical

START_TIME END_TIME PR PRIVATE_IP PRIV SRC_IP SRC_PORT DEST_IP DEST

---------------------- ---------------------- -- ------------ ---- ------------ -------- ------------ ----

2012-08-08 20:59:54 2012-08-08 21:00:28 6 2886756061 1111 3395517721 45031 3419418065 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886900807 1111 3395507143 51733 3658060738 80

2012-08-08 20:59:55 2012-08-08 21:00:28 6 2886917742 1111 3395507159 43516 2071873572 80

2012-08-08 20:59:55 2012-08-08 21:00:28 6 2886917742 1111 3395507159 43534 2071873572 80

2012-08-08 20:59:55 2012-08-08 21:00:28 6 2886917742 1111 3395507159 43523 2071873572 80

2012-08-08 21:00:14 2012-08-08 21:00:28 6 2886832065 1111 3395507109 51442 2099718013 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886794376 1111 3395507104 57741 2071819251 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886758392 1111 3395517723 56875 1007173560 80

2012-08-08 21:00:22 2012-08-08 21:00:28 6 2886862137 1111 3395517760 17744 3626142915 7275

2012-08-08 21:00:25 2012-08-08 21:00:28 6 2886741689 1111 3395517708 14954 2007469330 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886891044 1111 3395517787 23626 1872834975 443

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886790049 1111 3395507100 54215 1884995806 80

2012-08-08 21:00:15 2012-08-08 21:00:28 6 2886771544 1111 3395507083 32261 1872832004 80

2012-08-08 21:00:24 2012-08-08 21:00:28 6 2886796616 1111 3395517729 18634 2007467546 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886839912 1111 3395507117 10102 1850510469 5242

2012-08-08 21:00:23 2012-08-08 21:00:28 6 2886742978 1111 3395517709 28276 1021181676 80

2012-08-08 21:00:16 2012-08-08 21:00:28 6 2886792600 1111 3395507103 15204 974546887 80

2012-08-08 21:00:23 2012-08-08 21:00:28 6 2886890096 1111 3395517786 30741 1884983225 80

2012-08-08 21:00:00 2012-08-08 21:00:28 6 2886743885 1111 3395517710 18678 1884968358 80

2012-08-08 21:00:16 2012-08-08 21:00:28 6 2886792600 1111 3395507103 15237 974547338 80

2012-08-08 21:00:10 2012-08-08 21:00:28 6 2886828509 1111 3395507106 30179 2007493616 80

2012-08-08 21:00:25 2012-08-08 21:00:28 6 2886811814 1111 3395517743 34249 2072702869 80

2012-08-08 20:59:57 2012-08-08 21:00:28 6 2886780595 1111 3395507091 63169 1872834775 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886745283 1111 3395517711 38566 1863134645 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886852868 1111 3395507129 19216 989566331 80

2012-08-08 21:00:22 2012-08-08 21:00:28 6 2886758076 1111 3395517723 37910 3061190502 80

2012-08-08 21:00:22 2012-08-08 21:00:28 6 2886758076 1111 3395517723 37886 2079006794 80

2012-08-08 21:00:25 2012-08-08 21:00:28 6 2886788330 1111 3395507099 15078 460553383 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886756269 1111 3395517721 57538 2008813541 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886906371 1111 3395507148 65509 1884961048 80

2012-08-08 20:59:51 2012-08-08 21:00:28 6 2886893244 1111 3395517789 27585 2071802397 995

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886810351 1111 3395517742 10465 1971814472 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886908390 1111 3395507150 58599 3419418057 80

2012-08-08 21:00:11 2012-08-08 21:00:28 6 2886811967 1111 3395517743 43433 2099759129 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886908416 1111 3395507150 60161 1027056891 80

2012-08-08 21:00:24 2012-08-08 21:00:28 6 2886794472 1111 3395507104 63499 1872769542 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886859643 1111 3395507135 41589 1008470934 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886908926 1111 3395507151 26758 1027061456 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886844821 1111 3395507121 48598 989542829 80

2012-08-08 21:00:14 2012-08-08 21:00:28 6 2886811914 1111 3395517743 40207 2071819051 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886776231 1111 3395507087 57398 1027061476 80

2012-08-08 21:00:21 2012-08-08 21:00:28 6 2886895128 1111 3395507138 31084 1020918811 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886896369 1111 3395507139 41560 2071819499 80

2012-08-08 21:00:15 2012-08-08 21:00:28 6 2886866997 1111 3395517764 53220 1008528500 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886733364 1111 3395517700 27617 1850417510 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886763900 1111 3395507076 21749 2072679568 80

2012-08-08 21:00:24 2012-08-08 21:00:28 6 2886848688 1111 3395507125 24485 460553373 80

2012-08-08 20:59:50 2012-08-08 21:00:28 6 2886866792 1111 3395517764 40930 2072313366 80

2012-08-08 20:59:55 2012-08-08 21:00:28 6 2886917742 1111 3395507159 43536 2071873572 80

2012-08-08 20:59:55 2012-08-08 21:00:28 6 2886917742 1111 3395507159 43542 2071873572 80

2012-08-08 20:59:53 2012-08-08 21:00:28 6 2886801934 1111 3395517734 17623 2007483189 8080

2012-08-08 20:59:55 2012-08-08 21:00:28 6 2886917742 1111 3395507159 43537 2071873572 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886886283 1111 3395517782 58048 2071816694 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886735314 1111 3395517702 16591 2071799544 80

2012-08-08 20:59:55 2012-08-08 21:00:28 6 2886917742 1111 3395507159 43524 2071873572 80

2012-08-08 21:00:20 2012-08-08 21:00:28 6 2886849684 1111 3395507126 20262 2008825959 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886872604 1111 3395517770 5537 3419418056 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886853794 1111 3395507130 10753 2099722272 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886755008 1111 3395517720 45872 1883357744 80

2012-08-08 21:00:21 2012-08-08 21:00:28 6 2886895128 1111 3395507138 31121 2078933535 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886864839 1111 3395517762 51804 1850417452 80

2012-08-08 21:00:19 2012-08-08 21:00:28 6 2886858061 1111 3395507134 10700 2071819372 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886776231 1111 3395507087 57410 1027061476 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886858854 1111 3395507134 58306 1020914578 80

2012-08-08 21:00:21 2012-08-08 21:00:28 6 2886774805 1111 3395507086 35831 1883303354 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886794557 1111 3395507105 4593 3708103499 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886747135 1111 3395517713 21641 2099740446 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886863802 1111 3395517761 53630 1863145458 5224

2012-08-08 21:00:22 2012-08-08 21:00:28 6 2886911235 1111 3395507153 37254 2095615735 21

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886860043 1111 3395507136 1581 294986889 5223

2012-08-08 20:59:56 2012-08-08 21:00:28 6 2886780595 1111 3395507091 63161 1883302610 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886732547 1111 3395517699 42653 294986856 5223

2012-08-08 20:59:54 2012-08-08 21:00:28 6 2886734208 1111 3395517701 14230 2007484922 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886866964 1111 3395517764 51273 2072105082 80

2012-08-08 21:00:00 2012-08-08 21:00:28 6 2886780595 1111 3395507091 63144 1872834775 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886914262 1111 3395507156 26777 2072104968 80

2012-08-08 20:59:54 2012-08-08 21:00:28 6 2886734208 1111 3395517701 14273 2007484922 80

2012-08-08 21:00:25 2012-08-08 21:00:28 6 2886847997 1111 3395507124 47084 2021394494 80

2012-08-08 21:00:21 2012-08-08 21:00:28 6 2886785128 1111 3395507096 15002 294986849 5223

2012-08-08 21:00:25 2012-08-08 21:00:28 6 2886783177 1111 3395507094 26001 2072101596 443

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886735924 1111 3395517702 53178 1850417918 80

2012-08-08 21:00:09 2012-08-08 21:00:28 6 2886837532 1111 3395507114 59353 2071819198 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886891515 1111 3395517787 51880 1884983223 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886737305 1111 3395517704 8009 1872834975 443

2012-08-08 21:00:16 2012-08-08 21:00:28 6 2886755910 1111 3395517721 35947 2918544417 80

2012-08-08 21:00:27 2012-08-08 21:00:28 6 2886771117 1111 3395507083 6645 1884960474 80

2012-08-08 21:00:20 2012-08-08 21:00:28 6 2886785801 1111 3395507096 55430 2099718013 80

2012-08-08 21:00:24 2012-08-08 21:00:28 6 2886756061 1111 3395517721 45056 3419418065 80

2012-08-08 21:00:14 2012-08-08 21:00:28 6 2886771706 1111 3395507083 41990 1883302599 80

2012-08-08 20:59:55 2012-08-08 21:00:28 6 2886917742 1111 3395507159 43511 2071873572 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886853131 1111 3395507129 34983 296567345 443

2012-08-08 20:59:55 2012-08-08 21:00:28 6 2886917742 1111 3395507159 43538 2071873572 80

2012-08-08 21:00:23 2012-08-08 21:00:28 6 2886857519 1111 3395507133 42212 460553373 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886886465 1111 3395517783 4972 989566680 80

2012-08-08 21:00:25 2012-08-08 21:00:28 6 2886753976 1111 3395517719 47964 1884981528 80

2012-08-08 20:59:56 2012-08-08 21:00:28 6 2886809185 1111 3395517741 4537 2071872692 80

2012-08-08 21:00:26 2012-08-08 21:00:28 6 2886840353 1111 3395507117 36547 1027051331 80

2012-08-08 21:00:20 2012-08-08 21:00:28 6 2886840637 1111 3395507117 53634 1872832059 80

2012-08-08 21:00:19 2012-08-08 21:00:28 6 2886876032 1111 3395517773 19163 1884968518 80

2012-08-08 21:00:19 2012-08-08 21:00:28 6 2886876032 1111 3395517773 19158 1884968518 80

100 rows selected.

LS@LEO> select index_name,index_type,status from user_indexes where table_name='LEO_TEST_SQLLOAD3'; the primary-key index we created is now UNUSABLE

INDEX_NAME INDEX_TYPE STATUS

------------------------------ --------------------------- --------

PK_LEO_TEST_SQLLOAD3 NORMAL UNUSABLE

3. A parallel + direct-path SQL*Loader run against a table with a constraint-backed index fails outright; adding skip_index_maintenance=true lets the load complete by skipping index maintenance, after which the index is UNUSABLE and must be rebuilt manually

create table leo_test_sqlload4 define a table with a primary key

(

START_TIME date,

END_TIME date,

PROTOCOL varchar(20),

PRIVATE_IP varchar(20),

PRIVATE_PORT varchar(20) constraint pk_leo_test_sqlload4 primary key ,

SRC_IP varchar(20),

SRC_PORT varchar(20),

DEST_IP varchar(20),

DEST_PORT varchar(20)

);

sqlldr userid=ls/ls control=leo_test4.ctl data=leo_test1.data log=leo_test4.log direct=true parallel=true parallel + direct-path load

SQL*Loader: Release 10.2.0.1.0 - Production on Thu Aug 9 16:19:25 2012

Copyright (c) 1982, 2005, Oracle. All rights reserved.

SQL*Loader-951: Error calling once/load initialization error during load initialization

ORA-26002: Table LS.LEO_TEST_SQLLOAD4 has index defined upon it. the table has an index defined on it, so the parallel direct-path load fails

LS@LEO> select index_name,index_type,status from user_indexes where table_name='LEO_TEST_SQLLOAD4';

INDEX_NAME INDEX_TYPE STATUS

------------------------------ --------------------------- --------

PK_LEO_TEST_SQLLOAD4 NORMAL VALID the index is still valid at this point

sqlldr userid=ls/ls control=leo_test4.ctl data=leo_test1.data log=leo_test4.log direct=true parallel=true skip_index_maintenance=true;

SQL*Loader: Release 10.2.0.1.0 - Production on Thu Aug 9 16:30:52 2012

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Load completed - logical record count 100. with skip_index_maintenance=true index maintenance is skipped and the 100 rows load successfully

LS@LEO> select count(*) from leo_test_sqlload4;

COUNT(*)

----------

100

LS@LEO> select index_name,index_type,status from user_indexes where table_name='LEO_TEST_SQLLOAD4';

INDEX_NAME INDEX_TYPE STATUS

------------------------------ --------------------------- --------

PK_LEO_TEST_SQLLOAD4 NORMAL UNUSABLE after the load the index is UNUSABLE and must be rebuilt manually

Summary: before loading data with SQL*Loader, always check whether the target table has indexes and what kind they are. No tool is perfect: skipping index maintenance buys load performance at the cost of unusable indexes, while maintaining the indexes adds overhead. Choose the trade-off carefully and deliberately.
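Both constraint-index cases above end with an index that must be rebuilt by hand. A minimal sketch, using the table and index names from the example (note that for this particular data the rebuild itself would still fail, since the loaded rows contain duplicate PRIVATE_PORT values that violate the primary key; the duplicates have to be cleaned up first):

```sql
-- Find unusable indexes on the freshly loaded table
select index_name, status
  from user_indexes
 where table_name = 'LEO_TEST_SQLLOAD4'
   and status = 'UNUSABLE';

-- Rebuild the index; raises ORA-01452 if duplicate keys remain
alter index pk_leo_test_sqlload4 rebuild;
```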

Source:

Blog:http://space.itpub.net/26686207
