Author | Luo Guilin: technical engineer at Enmotech (云和恩墨) with more than 8 years of Oracle database experience, including maintenance and performance tuning for large national telecom, provincial finance, and provincial public-security systems. Proficient in Oracle database administration, tuning, and troubleshooting; specializes in SQL tuning and Oracle RAC maintenance and management.


This article is based on session 155 of the Enmotech online lecture series (恩墨大讲堂). To watch the recording, click "Read the original" at the end of the post.


Continued from the previous article: Cross-Platform and Cross-Version Migration with XTTS — An Operations Guide.


7 XTTS Post-Migration Checks


7.1 Change Users' Default Tablespace

Set each user's default tablespace to match the source database:

@default_tablespace.sql

Run on the source to generate the script:

spool default_tablespace.sql

select 'alter user '||username||' default tablespace '||default_tablespace||';'
  from dba_users
 where default_tablespace in ('DATATBS');

spool off

Grant tablespace quotas:

@unlimited_tablespace.sql

On the source:

select 'alter user '||username||' quota unlimited on '||default_tablespace||';'
  from dba_users
 where default_tablespace in ('DATATBS');

 

7.2 Recompile Database Objects in Parallel


exec utl_recomp.recomp_parallel(32);

set echo off feedback off timing off verify off
set pagesize 0 linesize 500 trimspool on trimout on
set heading off

spool compile.sql

select 'alter '||
       decode(object_type, 'SYNONYM',
              decode(owner, 'PUBLIC', 'PUBLIC SYNONYM '||object_name,
                            'SYNONYM '||owner||'.'||object_name)||' compile;',
              decode(object_type, 'PACKAGE BODY', 'PACKAGE', object_type)||
              ' '||owner||'.'||object_name||' compile '||
              decode(object_type, 'PACKAGE BODY', 'BODY;', ';'))
  from dba_objects
 where status <> 'VALID'
 order by owner, object_name;

spool off

@compile.sql

After recompilation, the production environment had no invalid objects.
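The claim above can be verified with a quick query; in a clean environment it should return no rows:

```sql
-- Expect no rows once recompilation is complete
select owner, object_type, count(*)
  from dba_objects
 where status <> 'VALID'
 group by owner, object_type;
```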

7.3 Compare Database Objects and Data


Run the object-comparison scripts: create a database link, then compare objects between the two databases. Here we mainly compared procedures, functions, triggers, views, indexes, tables, and so on.

Create a DB link to the production environment:

CREATE DATABASE LINK TEST_COMPARE CONNECT TO SYSTEM IDENTIFIED BY xxx USING 'xxxx:1521/xxxx';

Compare object counts with the following script:

SELECT OWNER, OBJECT_NAME, OBJECT_TYPE

FROM DBA_OBJECTS@TEST_COMPARE

WHERE OBJECT_NAME NOT LIKE 'BIN%'

AND OBJECT_NAME NOT LIKE 'SYS_%'

AND OWNER IN ('LUOKLE')

MINUS

SELECT OWNER, OBJECT_NAME, OBJECT_TYPE

FROM DBA_OBJECTS

WHERE OBJECT_NAME NOT LIKE 'BIN%'

AND OBJECT_NAME NOT LIKE 'SYS_%'

AND OWNER IN ('LUOKLE');

Source:

select object_type, count(*)
  from dba_objects
 where owner in (<schema list>)
 group by object_type;

Target:

select object_type, count(*)
  from dba_objects
 where owner in (<schema list>)
 group by object_type;

Missing indexes are usually ones that were not stored in the transported tablespaces, so they must be recreated; missing tables are likely temporary tables, which must be created manually.

Create them with scripts such as:

CREATE INDEX "LUOKLE"."IDX_XXX" ON "LUOKLE"."BI_XXXX" (...) TABLESPACE DATATBS parallel 8;

alter index "LUOKLE"."IDX_XX" noparallel;

CREATE GLOBAL TEMPORARY TABLE "LUOKLE"."TEMP_PAY_BATCH_CREATE_INSTR"
(   "BATCH_ID" NUMBER,
    "STATUS"   CHAR(1)
) ON COMMIT PRESERVE ROWS;

Compare data using a hash function

Create a table on each side to hold the hash values:

create table system.get_has_value (
  dbname     varchar2(20),
  owner      varchar2(30),
  table_name varchar2(100),
  value      varchar2(100),
  error      varchar2(2000));

Build the list of tables to verify, split into ten groups by a cycling sequence:

create sequence system.sequence_checkout_table start with 1 increment by 1 order cycle maxvalue 10 nocache;

CREATE TABLE SYSTEM.checkout_table as
select sys_context('USERENV','INSTANCE_NAME') dbnme,
       owner, table_name,
       system.sequence_checkout_table.NEXTVAL groupid
  from dba_tables
 where owner = 'LUOKLE';

The result:

14:05:21 SQL> SELECT owner, groupid, COUNT (*)
14:05:31   2    FROM SYSTEM.checkout_table
14:05:32   3   GROUP BY owner, groupid, dbnme ORDER BY owner, groupid;

OWNER                             GROUPID   COUNT(*)
------------------------------ ---------- ----------
LUOKLE                                  1         32
LUOKLE                                  2         31
LUOKLE                                  3         31
LUOKLE                                  4         31
LUOKLE                                  5         31
LUOKLE                                  6         31
LUOKLE                                  7         31
LUOKLE                                  8         31
LUOKLE                                  9         31
LUOKLE                                 10         31

Create the hash procedure:

grant select on sys.dba_tab_columns to system;

CREATE OR REPLACE PROCEDURE SYSTEM.get_hv_of_data (
   avc_owner    VARCHAR2,
   avc_table    VARCHAR2)
AS
   lvc_sql_text    VARCHAR2 (30000);
   ln_hash_value   NUMBER;
   lvc_error       VARCHAR2 (100);
BEGIN
   SELECT 'select /*+parallel(a,25)*/sum(dbms_utility.get_hash_value('
          || column_name_path
          || ',0,power(2,30)) ) from '
          || owner
          || '.'
          || table_name
          || ' a '
     INTO lvc_sql_text
     FROM (SELECT owner,
                  table_name,
                  column_name_path,
                  ROW_NUMBER ()
                     OVER (PARTITION BY table_name
                           ORDER BY table_name, curr_level DESC)
                     column_name_path_rank
             FROM (    SELECT owner,
                              table_name,
                              column_name,
                              rank,
                              LEVEL AS curr_level,
                              LTRIM (
                                 SYS_CONNECT_BY_PATH (column_name, '||''|''||'),
                                 '||''|''||')
                                 column_name_path
                         FROM (  SELECT owner,
                                        table_name,
                                        '"' || column_name || '"' column_name,
                                        ROW_NUMBER ()
                                           OVER (PARTITION BY table_name
                                                 ORDER BY table_name, column_name)
                                           rank
                                   FROM dba_tab_columns
                                  WHERE owner = UPPER (avc_owner)
                                    AND table_name = UPPER (avc_table)
                                    AND data_type IN ('TIMESTAMP(3)',
                                                      'INTERVAL DAY(3) TO SECOND(0)',
                                                      'TIMESTAMP(6)',
                                                      'NVARCHAR2',
                                                      'CHAR',
                                                      'BINARY_DOUBLE',
                                                      'NCHAR',
                                                      'DATE',
                                                      'RAW',
                                                      'VARCHAR2',
                                                      'NUMBER')
                               ORDER BY table_name, column_name)
                   CONNECT BY table_name = PRIOR table_name
                          AND rank - 1 = PRIOR rank))
    WHERE column_name_path_rank = 1;

   EXECUTE IMMEDIATE lvc_sql_text INTO ln_hash_value;

   lvc_sql_text :=
      'insert into system.get_has_value(owner,table_name,value) values(:x1,:x2,:x3)';
   EXECUTE IMMEDIATE lvc_sql_text USING avc_owner, avc_table, ln_hash_value;
   COMMIT;
   DBMS_OUTPUT.put_line (
      avc_owner || '.' || avc_table || '      ' || ln_hash_value);
EXCEPTION
   WHEN NO_DATA_FOUND
   THEN
      lvc_error := 'NO DATA FOUND';
      lvc_sql_text :=
         'insert into system.get_has_value(owner,table_name,error) values(:x1,:x2,:x3)';
      EXECUTE IMMEDIATE lvc_sql_text USING avc_owner, avc_table, lvc_error;
      COMMIT;
   WHEN OTHERS
   THEN
      -- record the error text in the error column (the value column is too small for SQLERRM)
      lvc_sql_text :=
         'insert into system.get_has_value(owner,table_name,error) values(:x1,:x2,:x3)';
      EXECUTE IMMEDIATE lvc_sql_text USING avc_owner, avc_table, SQLERRM;
      COMMIT;
END;
/

sqlplus system/oracle <<EOF
set heading off linesize 170 pagesize 0 feedback off echo off trimout on trimspool on termout off verify off
exit;
EOF

nohup ./check_source.sh LUOKLE 1  >./source_LUOKLE_cd_1.log  2>&1 &
nohup ./check_source.sh LUOKLE 2  >./source_LUOKLE_cd_2.log  2>&1 &
nohup ./check_source.sh LUOKLE 3  >./source_LUOKLE_cd_3.log  2>&1 &
nohup ./check_source.sh LUOKLE 4  >./source_LUOKLE_cd_4.log  2>&1 &
nohup ./check_source.sh LUOKLE 5  >./source_LUOKLE_cd_5.log  2>&1 &
nohup ./check_source.sh LUOKLE 6  >./source_LUOKLE_cd_6.log  2>&1 &
nohup ./check_source.sh LUOKLE 7  >./source_LUOKLE_cd_7.log  2>&1 &
nohup ./check_source.sh LUOKLE 8  >./source_LUOKLE_cd_8.log  2>&1 &
nohup ./check_source.sh LUOKLE 9  >./source_LUOKLE_cd_9.log  2>&1 &
nohup ./check_source.sh LUOKLE 10 >./source_LUOKLE_cd_10.log 2>&1 &
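The ten background launches can also be generated from a loop, which avoids copy/paste mistakes in the per-group log names. A minimal sketch (it only prints the commands; `check_source.sh` is the site-specific wrapper from the text):

```shell
#!/bin/sh
# Sketch: emit one nohup launch command per hash group, each with its own log.
# This only echoes the commands rather than running them.
for g in $(seq 1 10); do
  echo "nohup ./check_source.sh LUOKLE $g >./source_LUOKLE_cd_${g}.log 2>&1 &"
done
```

Pipe the output through `sh` (or drop the `echo`) to actually launch the jobs.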

The worker script, checkdata_source.sh:

date

sqlplus system/oracle <<EOF
set heading off linesize 170 pagesize 0 feedback off
spool source_check_$1_$2.sql
SELECT 'exec system.get_hv_of_data(''' || owner || ''',''' || table_name || ''')'
  FROM system.checkout_table
 WHERE owner = UPPER ('$1')
   AND groupid = $2
   AND table_name NOT IN (SELECT table_name
                            FROM dba_tables
                           WHERE owner = UPPER ('$1')
                             AND iot_type IS NOT NULL)
   AND table_name IN (SELECT table_name
                        FROM (  SELECT table_name, COUNT (*)
                                  FROM dba_tab_columns
                                 WHERE owner = UPPER ('$1')
                                   AND data_type IN ('TIMESTAMP(3)',
                                                     'INTERVAL DAY(3) TO SECOND(0)',
                                                     'TIMESTAMP(6)',
                                                     'NVARCHAR2',
                                                     'CHAR',
                                                     'BINARY_DOUBLE',
                                                     'NCHAR',
                                                     'DATE',
                                                     'RAW',
                                                     'VARCHAR2',
                                                     'NUMBER')
                              GROUP BY table_name
                                HAVING COUNT (*) > 0))
 ORDER BY table_name;
spool off
set serveroutput on
@source_check_$1_$2.sql
exit;
EOF

date

Running the hash scripts against all tables under LUOKLE on Linux took 30 minutes. Of the 311 tables, 52 produced no hash; analysis showed these were empty tables.
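Because the procedure records 'NO DATA FOUND' (or the error text) instead of a hash for such tables, they can be listed directly from the results table; a minimal check, assuming the system.get_has_value table created above:

```sql
-- Tables for which no hash was computed; empty tables show error = 'NO DATA FOUND'
select owner, table_name, error
  from system.get_has_value
 where value is null;
```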

SQL> select count(*) from LUOKLE.XXXX;

COUNT(*)

----------

0

7.4 Compare Object Privileges

Compare object-level privileges between the source and target databases; if they differ, run the grant_tab_privs.log generated on the source against the target.

Re-grant the SELECT and DML privileges on objects to the users:

@grant_tab_privs.sql

On the source:

select 'grant ' || privilege || ' on ' || owner || '.' || table_name || ' to ' || grantee || ';'
  from dba_tab_privs
 where (grantee in (select username from dba_users where default_tablespace in ('DATATBS'))
    or  owner   in (select username from dba_users where default_tablespace in ('DATATBS')))
   and privilege in ('SELECT','DELETE','UPDATE','INSERT')
   and grantable = 'NO'
union
select 'grant ' || privilege || ' on ' || owner || '.' || table_name || ' to ' || grantee || ' with grant option;'
  from dba_tab_privs
 where (grantee in (select username from dba_users where default_tablespace in ('DATATBS'))
    or  owner   in (select username from dba_users where default_tablespace in ('DATATBS')))
   and privilege in ('SELECT','DELETE','UPDATE','INSERT')
   and grantable = 'YES';

7.5 Gather Statistics


To keep the automatic statistics job from running at the same time and consuming system resources, disable the automatic collection task beforehand.

exec DBMS_AUTO_TASK_ADMIN.DISABLE(client_name => 'auto optimizer stats collection',operation => NULL,window_name => NULL);

Check histogram information:

select count(*),owner,table_name,column_name from dba_tab_histograms
group by owner,table_name,column_name
having count(*) > 2;

Run the gather manually:

exec DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','TRUE');  -- enable concurrent statistics collection

exec dbms_stats.gather_database_stats(estimate_percent => dbms_stats.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE 1', options => 'GATHER', degree => 8, granularity => 'all', cascade => TRUE);

(method_opt => 'FOR ALL COLUMNS SIZE REPEAT' can be used instead to preserve existing histogram settings.)

Monitor the concurrently running gather jobs:

select * from dba_scheduler_jobs where schedule_type = 'IMMEDIATE' and state = 'RUNNING';

Gather data dictionary statistics:

exec DBMS_STATS.GATHER_DICTIONARY_STATS(degree=>16);

Gather fixed-object statistics:

EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

Re-enable automatic collection:

exec DBMS_AUTO_TASK_ADMIN.ENABLE(client_name => 'auto optimizer stats collection',operation => NULL,window_name => NULL);

exec DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','false');

The test run:

13:23:41 SQL> select count(*),owner,table_name,column_name from dba_tab_histograms

13:23:45   2  where owner='LUOKLE'

13:23:46   3  group by owner,table_name,column_name

13:23:46   4  having count(*) > 2;

no rows selected

Elapsed: 00:00:00.10

13:28:06 SQL> exec dbms_stats.gather_database_stats(estimate_percent =>dbms_stats.AUTO_SAMPLE_SIZE,METHOD_OPT=>'FOR ALL COLUMNS SIZE 1',options=> 'GATHER',degree=>8, granularity =>'all',cascade=> TRUE);

PL/SQL procedure successfully completed.

Elapsed: 00:26:51.34

13:55:05 SQL>

The full-database statistics gather took 26 minutes.

13:57:14 SQL> exec DBMS_STATS.GATHER_DICTIONARY_STATS(degree=>16);

PL/SQL procedure successfully completed.

Elapsed: 00:00:18.94

7.6 Adjust the job Parameter


show parameter job_queue_processes;

alter system set job_queue_processes=100 scope=both;

8 XTTS Migration Test Timings (20 TB)

9 XTTS Migration Test Issues


expdp \'/ as sysdba\' directory=xtts dumpfile=expdp_LUOKLE_meta0822.dmp logfile=expdp_LUOKLE_meta0822.log CONTENT=metadata_only SCHEMAS=LUOKLE    -- export started at 15:06

ORA-39014: One or more workers have prematurely exited.

ORA-39029: worker 1 with process name "DW00" prematurely terminated

ORA-31671: Worker process DW00 had an unhandled exception.

ORA-04030: out of process memory when trying to allocate 3704 bytes (kkoutlCreatePh,kkotbi : kkotbal)

ORA-06512: at "SYS.KUPW$WORKER", line 1887

ORA-06512: at line 2

The metadata export produced large numbers of ORA-04030 errors in the background. Analysis traced the problem to AMM; switching to manually managed memory (disabling AMM) resolved it.

(On 10g, set sga_target to 0.)
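A minimal sketch of the workaround; the sizes below are placeholders, not recommendations, and must be sized for the actual host:

```sql
-- Hedged sketch: fall back from AMM to manual memory management (11g).
alter system set memory_target=0 scope=spfile;
alter system set sga_target=8g scope=spfile;
alter system set pga_aggregate_target=4g scope=spfile;
-- On 10g, the analogous change is sga_target=0 with manually sized SGA components.
-- Restart the instance for the spfile-only changes to take effect.
```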

Errors in file /oracle/app/oracle/diag/rdbms/LUOKLE/orcl1/trace/orcl1_ora_13107324.trc  (incident=28001):

ORA-04030: out of process memory when trying to allocate 32808 bytes (TCHK^cadd45dc,kggec.c.kggfa)

Analysis showed the AIX stack limit was set too small; raising the limit resolved the problem.
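A sketch of the kind of change involved on AIX (commands shown as comments; the exact limits are site-specific):

```shell
# Inspect the current soft stack limit for the oracle user:
#   ulimit -s
# Raise it to unlimited with AIX chuser, then log in again for it to apply:
#   chuser stack=-1 stack_hard=-1 oracle
```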

ERROR IN CONVERSION ORA-19624: operation failed, retry possible
ORA-19505: failed to identify file "/aix_xtts/oradata2/f8rdl6vi_1_1"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-19600: input file is backup piece (/aix_xtts/oradata2/f8rdl6vi_1_1)
ORA-19601: output file is backup piece (/aix_xtts/incr/xib_f8rdl6vi_1_1_6_8_10_12_14_16_18_20_22_)
CONVERTED BACKUP PIECE /aix_xtts/incr/xib_f8rdl6vi_1_1_6_8_10_12_14_16_18_20_22_
PL/SQL procedure successfully completed.

ERROR IN CONVERSION ORA-19624: operation failed, retry possible
ORA-19505: failed to identify file "/aix_xtts/oradata2/f9rdl70m_1_1"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-19600: input file is backup piece (/aix_xtts/oradata2/f9rdl70m_1_1)
ORA-19601: output file is backup piece (/aix_xtts/incr/xib_f9rdl70m_1_1_7_9_11_13_15_17_19_21_23_)
CONVERTED BACKUP PIECE /aix_xtts/incr/xib_f9rdl70m_1_1_7_9_11_13_15_17_19_21_23_

Analysis showed the incremental backups had not been placed in the expected directory.

ORA-19504: failed to create file "/xtts/incr/xib_f9rdl70m_1_1_7_9_11_13_15_17_19_21_23_"
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 13: Permission denied
Additional information: 1
ORA-19600: input file is backup piece (/xtts/oradata2/f9rdl70m_1_1)
ORA-19601: output file is backup piece (/xtts/incr/xib_f9rdl70m_1_1_7_9_11_13_15_17_19_21_23_)
CONVERTED BACKUP PIECE /xtts/incr/xib_f9rdl70m_1_1_7_9_11_13_15_17_19_21_23_
PL/SQL procedure successfully completed.

ERROR IN CONVERSION ORA-19624: operation failed, retry possible
ORA-19504: failed to create file "/xtts/incr/xib_f8rdl6vi_1_1_6_8_10_12_14_16_18_20_22_"
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 13: Permission denied
Additional information: 1
ORA-19600: input file is backup piece (/xtts/oradata2/f8rdl6vi_1_1)
ORA-19601: output file is backup piece (/xtts/incr/xib_f8rdl6vi_1_1_6_8_10_12_14_16_18_20_22_)
CONVERTED BACKUP PIECE /xtts/incr/xib_f8rdl6vi_1_1_6_8_10_12_14_16_18_20_22_

Permissions on the NFS directory prevented reads and writes; fixing the permissions resolved the problem.

NFS issue:

mount: 1831-008 giving up on:

192.168.1.100:/xtts

vmount: Operation not permitted.

# nfso -p -o nfs_use_reserved_ports=1

Setting nfs_use_reserved_ports to 1

Setting nfs_use_reserved_ports to 1 in nextboot file

# mount -o cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,timeo=600 10.20.28.21:/xtts /aix_xtts

10 Summary

XTTS supports cross-platform and cross-version migration and is fairly straightforward to operate. Because the downtime window is short, the migration can be completed with relatively little pain; for large-volume cross-platform, cross-version migrations it is recommended as the first-choice approach.

When running an XTTS migration, keep the number of incremental rounds small. More rounds mean less data in each incremental backup, and less data means a shorter final outage, but every extra round is another chance for something to go wrong. In practice, one incremental backup followed by the final migration is typical; sometimes the final migration is done directly after the initial copy.

11 Appendix: Using the xttdriver.pl Script

See: 11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1).

Description of Perl Script xttdriver.pl Options

The following describes the options available for the main supporting script xttdriver.pl.

-S prepare source for transfer

-S option is used only when Prepare phase method is dbms_file_transfer.

Prepare step is run once on the source system during Phase 2A with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.  This step creates files xttnewdatafiles.txt and getfile.sql.

-G get datafiles from source

-G option is used only when Prepare phase method is dbms_file_transfer.

Get datafiles step is run once on the destination system during Phase 2A with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.  The -S option must be run beforehand and files xttnewdatafiles.txt and getfile.sql transferred to the destination system.

This option connects to the destination database and runs script getfile.sql.  getfile.sql invokes dbms_file_transfer.get_file() subprogram for each datafile to transfer it from the source database directory object (defined by parameter srcdir) to the destination database directory object (defined by parameter dstdir) over a database link (defined by parameter srclink).

-p prepare source for backup

-p option is used only when Prepare phase method is RMAN backup.

Prepare step is run once on the source system during Phase 2B with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.

This step connects to the source database and runs the xttpreparesrc.sql script once for each tablespace to be transported, as configured in xtt.properties.  xttpreparesrc.sql does the following:

1. Verifies the tablespace is online, in READ WRITE mode, and contains no offline datafiles.

2. Identifies the SCN that will be used for the first iteration of the incremental backup step and writes it into file $TMPDIR/xttplan.txt.

3. Creates the initial datafile copies on the destination system in the location specified by the parameter dfcopydir set in xtt.properties.  These datafile copies must be transferred manually to the destination system.

4. Creates RMAN script $TMPDIR/rmanconvert.cmd that will be used to convert the datafile copies to the required endian format on the destination system.

-c convert datafiles

-c option is used only when Prepare phase method is RMAN backup.

Convert datafiles step is run once on the destination system during Phase 2B with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.

This step uses the rmanconvert.cmd file created in the Prepare step to convert the datafile copies to the proper endian format.  Converted datafile copies are written on the destination system to the location specified by the parameter storageondest set in xtt.properties.

-i create incremental

Create incremental step is run one or more times on the source system with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.

This step reads the SCNs listed in $TMPDIR/xttplan.txt and generates an incremental backup that will be used to roll forward the datafile copies on the destination system.

-r rollforward datafiles

Rollforward datafiles step is run once for every incremental backup created with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.

This step connects to the incremental convert instance using the parameters cnvinst_home and cnvinst_sid, converts the incremental backup pieces created by the Create Incremental step, then connects to the destination database and rolls forward the datafile copies by applying the incremental for each tablespace being transported.

-s determine new FROM_SCN

Determine new FROM_SCN step is run one or more times with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.
This step calculates the next FROM_SCN, records it in the file xttplan.txt, then uses that SCN when the next incremental backup is created in step 3.1. It reports the mapping of the new FROM_SCN to wall clock time to indicate how far behind the changes in the next incremental backup will be.

-e generate Data Pump TTS command

Generate Data Pump TTS command step is run once on the destination system with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.

This step creates the template of a Data Pump Import command that uses a network_link to import metadata of objects that are in the tablespaces being transported.

-d debug

-d option enables debug mode for xttdriver.pl and RMAN commands it executes.  Debug mode can also be enabled by setting environment variable XTTDEBUG=1.
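Pulling the options above together, a typical RMAN-backup-method run (per Doc ID 1389592.1) proceeds roughly as follows; environment setup and file transfers between hosts are omitted:

```shell
# On the source (ORACLE_HOME/ORACLE_SID set to the source DB):
#   perl xttdriver.pl -p     # prepare: initial datafile copies + rmanconvert.cmd
# On the destination:
#   perl xttdriver.pl -c     # convert the copies to the destination endian format
# Repeat until cutover:
#   perl xttdriver.pl -i     # source: incremental backup from the recorded SCNs
#   perl xttdriver.pl -r     # destination: roll forward the datafile copies
#   perl xttdriver.pl -s     # source: determine the next FROM_SCN
# At cutover:
#   perl xttdriver.pl -e     # destination: generate the Data Pump TTS import template
```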

