Original article on 墨天轮: https://www.modb.pro/db/22393

Overview: notes on the problems encountered while patching a 19.3 RAC (19.3 to 19.5, then 19.5 to 19.6).

A freshly installed 19.3 RAC needs patching. The latest RU at the time was 19.6; worried that the newest might be unstable, I chose the next-newest, 19.5. The first cluster went smoothly, but the following ones hit assorted problems, large and small, which I record here.

19.3 has a fairly serious CRS-6015 error. It is a bug, and it was only fixed in 19.6. After patching four clusters to 19.5 I had to re-patch them all to 19.6, which was painful, so I strongly recommend going straight to 19.6.

a) Download the RU 19.6 patch: p30463609_190000_Linux-x86-64.zip, which bundles the GI, DB, and OJVM cumulative patches.

b) Patch order: GI -> DB -> OJVM.

c) A home already on 19.5 can be patched directly to 19.6; no uninstall or rollback is needed first.

d) The GI and DB patches are applied as root; only the OJVM patch is applied as the oracle user.
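One more pre-flight worth doing: RU READMEs require a fairly recent OPatch, so confirm the OPatch version in both homes first (refresh it from patch p6880880 if it is older than the README's stated minimum). A quick check, using the home paths from this article:

#Check OPatch in both homes before patching
[grid@xydb8node1 ~]$ /u01/app/19.3.0/grid/OPatch/opatch version
[oracle@xydb8node1 ~]$ /u01/app/oracle/product/19.3.0/db_1/OPatch/opatch version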

I. Patch installation procedure

1. Check the environment:

Since these were fresh installs I skip the checks here; the procedure is described in the patch's README.html.
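If you want a concrete pre-check beyond reading the README, OPatch's standard conflict check can be run against each sub-patch directory once the bundle is unzipped (step 2 below). A sketch using this article's paths; the sub-patch numbers are the ones lspatches lists in step 4:

#Conflict pre-check for the GI home against one sub-patch of the RU (repeat per sub-patch)
[grid@xydb8node1 ~]$ /u01/app/19.3.0/grid/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /tmp/ru19.5/30116789/30125133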

2. Unzip the patch bundle

I downloaded the GI RU, which contains both the GI and DB patches, and unzipped it under /tmp.

[root@xydb8node1 ~]# unzip p30116789_190000_Linux-x86-64.zip -d /tmp/ru19.5

[root@xydb8node1 ~]# chmod -R 777 /tmp/ru19.5

3. Apply the GI patch first [finish node 1, then do node 2], using opatchauto.

Use the GI home's opatchauto for GI and the Oracle home's opatchauto for the DB home. Remember that both are run as root, and always invoke the command by its full path; relying on PATH and switching it between homes is an easy way to make mistakes.
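Optionally, opatchauto can validate everything first without touching the home; a dry run I would suggest before the real apply (same patch path as below):

#Optional dry run: checks prerequisites and conflicts without modifying the home
[root@xydb8node1 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /tmp/ru19.5/30116789 -analyze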

[root@xydb8node1 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /tmp/ru19.5/30116789

4. Check that the GI patch succeeded

[grid@xydb8node1 ~]$ /u01/app/19.3.0/grid/OPatch/opatch lspatches

30125133;Database Release Update : 19.5.0.0.191015 (30125133)

30122167;ACFS RELEASE UPDATE 19.5.0.0.0 (30122167)

30122149;OCW RELEASE UPDATE 19.5.0.0.0 (30122149)

29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)

OPatch succeeded.
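Besides lspatches, I would also confirm the stack is healthy on the patched node before moving on to node 2 (standard crsctl checks, not part of the original write-up):

#Confirm clusterware is fully up on the patched node
[grid@xydb8node1 ~]$ /u01/app/19.3.0/grid/bin/crsctl check crs
[grid@xydb8node1 ~]$ /u01/app/19.3.0/grid/bin/crsctl stat res -t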

5. Apply the DB patch [node 1 first, then node 2], using opatchauto.

[root@xydb8node1 ~]# /u01/app/oracle/product/19.3.0/db_1/OPatch/opatchauto apply /tmp/ru19.5/30116789 -oh /u01/app/oracle/product/19.3.0/db_1

6. Check that the DB patch succeeded

[oracle@xydb8node1 ~]$ /u01/app/oracle/product/19.3.0/db_1/OPatch/opatch lspatches

30125133;Database Release Update : 19.5.0.0.191015 (30125133)

30122149;OCW RELEASE UPDATE 19.5.0.0.0 (30122149)

OPatch succeeded.
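lspatches only shows the binary side; to confirm the SQL changes also landed in the database (opatchauto runs datapatch for you when the database is up), a hedged check against dba_registry_sqlpatch:

#Verify the RU is recorded in the database's SQL patch registry
[oracle@xydb8node1 ~]$ sqlplus -s / as sysdba <<'EOF'
select patch_id, patch_type, status, description from dba_registry_sqlpatch order by action_time;
EOF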

7. Apply the OJVM patch [node 1 first, then node 2]

[root@xydb8node1 ~]# cd /tmp/ru19.6/30463609/30484981/

[root@xydb8node1 30484981]# /u01/app/oracle/product/19.3.0/db_1/OPatch/opatch apply

#Answer y at the prompts. (The paths above are from my later 19.6 run; 30484981 is the OJVM sub-patch inside the 19.6 bundle.)
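Since OJVM is applied with plain opatch rather than opatchauto, the SQL portion is not run for you. Once the binaries are in on all nodes, bring the database up and run datapatch from one node (standard procedure for OJVM RUs, not spelled out in the original post):

#Load the OJVM SQL changes into the database (one node is enough)
[oracle@xydb8node1 ~]$ /u01/app/oracle/product/19.3.0/db_1/OPatch/datapatch -verbose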

8. Rollback procedure

#GI rollback

/u01/app/19.3.0/grid/OPatch/opatchauto rollback /tmp/grid_path/30116789 -oh /u01/app/19.3.0/grid

#DB rollback

/u01/app/oracle/product/19.3.0/db_1/OPatch/opatchauto rollback /tmp/grid_path/30116789 -oh /u01/app/oracle/product/19.3.0/db_1
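The OJVM patch, having been applied with plain opatch, is likewise rolled back with plain opatch as the oracle user, followed by another datapatch run; a sketch assuming the 19.6 OJVM sub-patch id used above:

#OJVM rollback (as oracle; re-run datapatch afterwards)
[oracle@xydb8node1 ~]$ /u01/app/oracle/product/19.3.0/db_1/OPatch/opatch rollback -id 30484981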

9. Wrap-up

Starting with node 1 or node 2 both work; there is no rule that node 1 must go first, it is just habit. Patching can run into all sorts of permission and other problems; the ones I hit are written up below so that whoever comes next steps into fewer holes.

II. Errors encountered

Error No. 1

Patch: /tmp/grid_path/30116789/30122149

Log: /u01/app/oracle/product/19.3.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-03-09_17-44-51PM_1.log

Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: ApplySession failed in system modification phase... 'ApplySession::apply failed: java.io.IOException: oracle.sysman.oui.patch.PatchException: java.io.FileNotFoundException: /u01/app/oraInventory/ContentsXML/oui-patch.xml (Permission denied)'

After fixing the cause of failure Run opatchauto resume

]

OPATCHAUTO-68061: The orchestration engine failed.

OPATCHAUTO-68061: The orchestration engine failed with return code 1

OPATCHAUTO-68061: Check the log for more details.

OPatchAuto failed.

OPatchauto session completed at Mon Mar  9 17:45:31 2020

Time taken to complete the session 1 minute, 16 seconds

opatchauto failed with error code 42

Problem description:

Insufficient permission reported while applying the DB patch. I did not dig into the exact cause; 19c patching trips over all kinds of permission problems.

Solution:

[root@xydb8node1 ~]# chmod 777 /u01/app/oraInventory/ContentsXML/oui-patch.xml

#resume continues the installation from the point where the previous run failed.

[root@xydb8node1 ~]# /u01/app/oracle/product/19.3.0/db_1/OPatch/opatchauto resume
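chmod 777 is the blunt instrument; a tighter fix that should equally unblock opatchauto (my assumption, based on the central inventory normally being owned by the grid installation owner with the oinstall group) is to set matching ownership with mode 660:

#Less permissive alternative: match the inventory owner/group (assumed grid:oinstall here)
[root@xydb8node1 ~]# chown grid:oinstall /u01/app/oraInventory/ContentsXML/oui-patch.xml
[root@xydb8node1 ~]# chmod 660 /u01/app/oraInventory/ContentsXML/oui-patch.xml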

Error No. 2

2020-03-10 11:18:18.961 [CSSDMONITOR(150856)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 150856

2020-03-10T11:18:19.092125+08:00

Errors in file /u01/app/grid/diag/crs/xydb8node2/crs/trace/ohasd.trc  (incident=41):

CRS-6015 [] [] [] [] [] [] [] [] [] [] [] []

Incident details in: /u01/app/grid/diag/crs/xydb8node2/crs/incident/incdir_41/ohasd_i41.trc

2020-03-10 11:18:19.081 [OHASD(147218)]CRS-6015: Oracle Clusterware has experienced an internal error. Details at (:CLSGEN00100:) {0:0:2} in /u01/app/grid/diag/crs/xydb8node2/crs/trace/ohasd.trc.

2020-03-10 11:18:19.106 [OHASD(147218)]CRS-8505: Oracle Clusterware OHASD process with operating system process ID 147218 encountered internal error CRS-06015

Trace file: /u01/app/grid/diag/crs/xydb8node2/crs/trace/ohasd.trc

An excerpt of the errors from the trace:

2020-03-10 11:18:19.057 :CRSSHARED:4034262784: [     INFO] [F-ALGO]{0:0:2} getIpcPath returning (ADDRESS=(PROTOCOL=IPC)(KEY=OHASD_UI_SOCKET))

2020-03-10 11:18:19.058 :GIPCXCPT:4038465280:  gipcInternalConnectSync: failed sync request, addr 0x7f9c9405c720 [000000000000b814] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x4 }, ret gipcretConnectionRefused (29)

2020-03-10 11:18:19.058 :GIPCXCPT:4038465280:  gipcConnectSyncF [EvmConConnect : evmgipcio.c : 235]: EXCEPTION[ ret gipcretConnectionRefused (29) ]  failed sync connect endp 0x7f9c9405b2a0 [000000000000b80d] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=00000000-00000000-0))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', numPend 0, numReady 0, numDone 0, numDead 1, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7f9c9405e350, sendp 0x7f9c9405e100 status 13flags 0xa108871a, flags-2 0x0, usrFlags 0x30020 }, addr 0x7f9c9405c720 [000000000000b814] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x4 }, flags 0x8000000

2020-03-10 11:18:19.058 :UiServer:4034262784: [     INFO] {0:0:2} GIPC address: clsc://(ADDRESS=(PROTOCOL=IPC)(KEY=OHASD_UI_SOCKET))

2020-03-10 11:18:19.058 :    GIPC:4034262784:  sgipcnDSBindHelper: file /var/tmp/.oracle/sOHASD_UI_SOCKET_lock is locked by PID 147162

2020-03-10 11:18:19.058 :GIPCXCPT:4034262784:  gipcmodNetworkProcessBind: failed to bind endp 0x7f9c8c000950 [000000000000b819] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=00000000-00000000-0))', remoteAddr '', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x562c5e71a240, ready 0, wobj 0x7f9c8c03b390, sendp 0x7f9c8c03b140 status 13flags 0xa1000712, flags-2 0x0, usrFlags 0x20 }, addr 0x7f9c8c039460 [000000000000b81b] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x5 }

2020-03-10 11:18:19.058 :GIPCXCPT:4034262784:  gipcmodNetworkProcessBind: slos op  :  sgipcnDSBindHelper

2020-03-10 11:18:19.058 :GIPCXCPT:4034262784:  gipcmodNetworkProcessBind: slos dep :  Resource temporarily unavailable (11)

2020-03-10 11:18:19.058 :GIPCXCPT:4034262784:  gipcmodNetworkProcessBind: slos loc :  lockf

2020-03-10 11:18:19.058 :GIPCXCPT:4034262784:  gipcmodNetworkProcessBind: slos info:  failed to grab a lock for (/var/tmp/.oracle/sOHASD_UI_SOCKET_lock)

2020-03-10 11:18:19.058 :GIPCXCPT:4034262784:  gipcListenF [initServerSocket : clsSocket.cpp : 584]: EXCEPTION[ ret gipcretAddressInUse (20) ]  failed to listen on endp 0x7f9c8c000950 [000000000000b819] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=00000000-00000000-0))', remoteAddr '', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0x562c5e71a240, ready 0, wobj 0x7f9c8c03b390, sendp 0x7f9c8c03b140 status 13flags 0xa1000712, flags-2 0x0, usrFlags 0x20 }, flags 0x0

2020-03-10 11:18:19.058 :UiServer:4034262784: [    ERROR] {0:0:2} SS(0x7f9c8c000eb0)GIPC Fatal Listen Error. gipc ret: gipcretAddressInUse. Address=clsc://(ADDRESS=(PROTOCOL=IPC)(KEY=OHASD_UI_SOCKET))

2020-03-10 11:18:19.059 : CLSCEVT:4038465280: (:CLSCE0047:)clsce_publish_internal 0x7f9c94038da0 EvmConnCreate failed with status = 13, try = 0

2020-03-10 11:18:19.060 :GIPCXCPT:4038465280:  gipcInternalConnectSync: failed sync request, addr 0x7f9c9405c7e0 [000000000000b834] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x4 }, ret gipcretConnectionRefused (29)

2020-03-10 11:18:19.060 :GIPCXCPT:4038465280:  gipcConnectSyncF [EvmConConnect : evmgipcio.c : 235]: EXCEPTION[ ret gipcretConnectionRefused (29) ]  failed sync connect endp 0x7f9c9405b360 [000000000000b82d] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=00000000-00000000-0))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', numPend 0, numReady 0, numDone 0, numDead 1, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7f9c9405e330, sendp 0x7f9c9405e0e0 status 13flags 0xa108871a, flags-2 0x0, usrFlags 0x30020 }, addr 0x7f9c9405c7e0 [000000000000b834] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x4 }, flags 0x8000000

2020-03-10 11:18:19.061 : CLSCEVT:4038465280: (:CLSCE0047:)clsce_publish_internal 0x7f9c94038da0 EvmConnCreate failed with status = 13, try = 1

2020-03-10 11:18:19.061 :  CRSEVT:4038465280: [     INFO] {0:0:2} ClusterPubSub::publish Error posting to event stream. Connection will be retried on next publish [4]

2020-03-10 11:18:19.081 :  CRSRPT:4038465280: [     INFO] {0:0:2} ClusterConnectException caught CRS_SERVER_STATE_CHANGE for xydb8node2

Trace file /u01/app/grid/diag/crs/xydb8node2/crs/trace/ohasd.trc

Oracle Database 19c Clusterware Release 19.0.0.0.0 - Production

Version 19.6.0.0.0 Copyright 1996, 2019 Oracle. All rights reserved.

DDE: Flood control is not active

2020-03-10T11:18:19.092594+08:00

Incident 41 created, dump file: /u01/app/grid/diag/crs/xydb8node2/crs/incident/incdir_41/ohasd_i41.trc

CRS-6015 [] [] [] [] [] [] [] [] [] [] [] []

2020-03-10 11:18:19.107 :GIPCXCPT:421820160:  gipcInternalConnectSync: failed sync request, addr 0x7f9d10022eb0 [000000000000b867] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x4 }, ret gipcretConnectionRefused (29)

2020-03-10 11:18:19.107 :GIPCXCPT:421820160:  gipcConnectSyncF [EvmConConnect : evmgipcio.c : 235]: EXCEPTION[ ret gipcretConnectionRefused (29) ]  failed sync connect endp 0x7f9d10021a30 [000000000000b860] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=00000000-00000000-0))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', numPend 0, numReady 0, numDone 0, numDead 1, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7f9d1005d250, sendp 0x7f9d1005d000 status 13flags 0xa108871a, flags-2 0x0, usrFlags 0x30020 }, addr 0x7f9d10022eb0 [000000000000b867] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x4 }, flags 0x8000000

2020-03-10 11:18:19.108 : CLSCEVT:421820160: (:CLSCE0047:)clsce_publish_internal 0x562c5e45bb90 EvmConnCreate failed with status = 13, try = 0

2020-03-10 11:18:19.108 :GIPCXCPT:421820160:  gipcInternalConnectSync: failed sync request, addr 0x7f9d10022e70 [000000000000b878] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x4 }, ret gipcretConnectionRefused (29)

2020-03-10 11:18:19.109 :GIPCXCPT:421820160:  gipcConnectSyncF [EvmConConnect : evmgipcio.c : 235]: EXCEPTION[ ret gipcretConnectionRefused (29) ]  failed sync connect endp 0x7f9d10021a10 [000000000000b871] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=00000000-00000000-0))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', numPend 0, numReady 0, numDone 0, numDead 1, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7f9d1005d230, sendp 0x7f9d1005cfe0 status 13flags 0xa108871a, flags-2 0x0, usrFlags 0x30020 }, addr 0x7f9d10022e70 [000000000000b878] { gipcAddress : name 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=SYSTEM.evm.acceptor.auth)(GIPCID=00000000-00000000-0))', objFlags 0x0, addrFlags 0x4 }, flags 0x8000000

2020-03-10 11:18:19.110 : CLSCEVT:421820160: (:CLSCE0047:)clsce_publish_internal 0x562c5e45bb90 EvmConnCreate failed with status = 13, try = 1

2020-03-10 11:18:19.171 : CRSCOMM:4059477760: [     INFO]  IpcL: Accepted connection 45931 from user root member number 3

Symptom:

The cluster installs normally, but after installation, rebooting one of the nodes can leave it unable to come back up; the CRS alert log throws CRS-6015 together with "gipcInternalConnectSync: failed sync request" errors.

Solution:

A MOS search shows this is a bug. In my testing it is not fixed in 19.5 but is fixed in the latest RU, 19.6, so for a freshly installed 19.3 RAC I recommend upgrading straight to 19.6.
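Until the node is on 19.6, a workaround often suggested for this class of symptom (my own addition, not from the original post; the trace above shows the OHASD_UI_SOCKET lock still held by a leftover process) is to clear the stale IPC socket files while the stack is fully down:

#With clusterware confirmed down on the affected node, clear stale socket files and retry
[root@xydb8node2 ~]# /u01/app/19.3.0/grid/bin/crsctl stop crs -f
[root@xydb8node2 ~]# rm -rf /var/tmp/.oracle/*
[root@xydb8node2 ~]# /u01/app/19.3.0/grid/bin/crsctl start crs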

Error No. 3

[root@xydb7node1 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /tmp/ru19.6/30463609/30501910 -oh /u01/app/19.3.0/grid

OPatchauto session is initiated at Tue Mar 10 15:37:44 2020

OPATCHAUTO-72083: Performing bootstrap operations failed.

OPATCHAUTO-72083: The bootstrap execution failed because failed to detect Grid Infrastructure setup due to null.

OPATCHAUTO-72083: Fix the reported problem and re-run opatchauto.

OPatchauto session completed at Tue Mar 10 15:38:07 2020

Time taken to complete the session 0 minute, 23 seconds

opatchauto bootstrapping failed with error code 255.

Analysis:

This error appears when the shell session drops in the middle of an otherwise normal patch run and you then re-execute the original command.

Solution:

Do not re-run the earlier command; use resume instead. As shown below, the session then picks up and carries on normally.

[root@xydb7node1 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto resume

OPatchauto session is initiated at Tue Mar 10 15:40:33 2020

Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2020-03-10_03-40-34PM.log

Resuming existing session with id E7W9

Start applying binary patch on home /u01/app/19.3.0/grid

Binary patch applied successfully on home /u01/app/19.3.0/grid

Checking shared status of home.....

Starting CRS service on home /u01/app/19.3.0/grid

Error No. 4

Failed to start CRS service on home /u01/app/19.3.0/grid

Execution of [GIStartupAction] patch action failed, check log for more details. Failures:

Patch Target : xydb7node1->/u01/app/19.3.0/grid Type[crs]

Details: [

---------------------------Patching Failed---------------------------------

Command execution failed during patching in home: /u01/app/19.3.0/grid, host: xydb7node1.

Command failed:  /u01/app/19.3.0/grid/perl/bin/perl -I/u01/app/19.3.0/grid/perl/lib -I/u01/app/19.3.0/grid/OPatch/auto/dbtmp/bootstrap_xydb7node1/patchwork/crs/install -I/u01/app/19.3.0/grid/OPatch/auto/dbtmp/bootstrap_xydb7node1/patchwork/xag /u01/app/19.3.0/grid/OPatch/auto/dbtmp/bootstrap_xydb7node1/patchwork/crs/install/rootcrs.pl -postpatch

Command failure output:

Using configuration parameter file: /u01/app/19.3.0/grid/OPatch/auto/dbtmp/bootstrap_xydb7node1/patchwork/crs/install/crsconfig_params

The log of current session can be found at:

/u01/app/grid/crsdata/xydb7node1/crsconfig/crs_postpatch_xydb7node1_2020-03-10_03-41-09PM.log

2020/03/10 15:41:20 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'

CRS-4123: Starting Oracle High Availability Services-managed resources

CRS-41053: checking Oracle Grid Infrastructure for file permission issues

PRVG-2032 : Group of file "/etc/oracleafd.conf" did not match the expected value on node "xydb7node1". [Expected = "oinstall(1001)" ; Found = "asmadmin(1005)"]

PRVH-0116 : Path "/u01/app/19.3.0/grid/crs/install/cmdllroot.sh" with permissions "rw-r--r--" does not have execute permissions for the owner, file's group, and others on node "xydb7node1".

PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/cmdllroot.sh" did not match the expected value on node "xydb7node1". [Expected = "grid(1002)" ; Found = "root(0)"]

PRVG-2032 : Group of file "/u01/app/19.3.0/grid/crs/install/cmdllroot.sh" did not match the expected value on node "xydb7node1". [Expected = "oinstall(1001)" ; Found = "root(0)"]

PRVH-0111 : Path "/u01/app/19.3.0/grid/lib/libagtsh.so" with permissions "rwxr-x---" does not have read permissions for others on node "xydb7node1".

PRVH-0113 : Path "/u01/app/19.3.0/grid/lib/libagtsh.so" with permissions "rwxr-x---" does not have execute permissions for others on node "xydb7node1".

PRVH-0111 : Path "/u01/app/19.3.0/grid/lib/libagtsh.so.1.0" with permissions "rwxr-x---" does not have read permissions for others on node "xydb7node1".

PRVH-0113 : Path "/u01/app/19.3.0/grid/lib/libagtsh.so.1.0" with permissions "rwxr-x---" does not have execute permissions for others on node "xydb7node1".

PRVH-0111 : Path "/u01/app/19.3.0/grid/lib/clntshcore.map" with permissions "rw-r-----" does not have read permissions for others on node "xydb7node1".

PRVH-0111 : Path "/u01/app/19.3.0/grid/lib/clntsh.map" with permissions "rw-r-----" does not have read permissions for others on node "xydb7node1".

PRVH-0111 : Path "/u01/app/19.3.0/grid/lib/libocci.so" with permissions "rwxr-x---" does not have read permissions for others on node "xydb7node1".

PRVH-0113 : Path "/u01/app/19.3.0/grid/lib/libocci.so" with permissions "rwxr-x---" does not have execute permissions for others on node "xydb7node1".

PRVH-0111 : Path "/u01/app/19.3.0/grid/lib/libocci.so.19.1" with permissions "rwxr-x---" does not have read permissions for others on node "xydb7node1".

PRVH-0113 : Path "/u01/app/19.3.0/grid/lib/libocci.so.19.1" with permissions "rwxr-x---" does not have execute permissions for others on node "xydb7node1".

CRS-4124: Oracle High Availability Services startup failed.

CRS-4000: Command Start failed, or completed with errors.

2020/03/10 15:46:52 CLSRSC-117: Failed to start Oracle Clusterware stack

After fixing the cause of failure Run opatchauto resume

]

OPATCHAUTO-68061: The orchestration engine failed.

OPATCHAUTO-68061: The orchestration engine failed with return code 1

OPATCHAUTO-68061: Check the log for more details.

OPatchAuto failed.

OPatchauto session completed at Tue Mar 10 15:46:54 2020

Time taken to complete the session 6 minutes, 21 seconds

opatchauto failed with error code 42

Analysis:

This is another file permission problem; set the permissions as the messages demand. Checking the GI home directly with lspatches showed it was already at 19.6, so it would probably have worked without the change, but I fixed the permissions as requested anyway.

Solution:

Correct the ownership of the two flagged files and resume again; after that you are quite likely to hit the CRS-6015 error (see Error No. 5).

[root@xydb7node1 ~]# chown grid:oinstall /etc/oracleafd.conf

[root@xydb7node1 ~]# chown grid:oinstall /u01/app/19.3.0/grid/crs/install/cmdllroot.sh

[root@xydb7node1 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto resume
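If the resume still complains about cmdllroot.sh, note that PRVH-0116 above also flags a missing execute bit; adding it is a harmless extra step (mine, not part of the original fix):

#PRVH-0116: give cmdllroot.sh its execute permissions back
[root@xydb7node1 ~]# chmod 755 /u01/app/19.3.0/grid/crs/install/cmdllroot.sh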

Error No. 5


[root@xydb7node1 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto resume

OPatchauto session is initiated at Tue Mar 10 16:06:47 2020

Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2020-03-10_04-06-47PM.log

Resuming existing session with id E7W9

Checking shared status of home.....

Starting CRS service on home /u01/app/19.3.0/grid

=====> After the resume, the session hangs here indefinitely; checking the alert log shows the following errors:

2020-03-10 16:07:22.095 [OHASD(126635)]CRS-6015: Oracle Clusterware has experienced an internal error. Details at (:CLSGEN00100:) {0:0:2} in /u01/app/grid/diag/crs/xydb7node1/crs/trace/ohasd.trc.

2020-03-10T16:07:22.106550+08:00

Errors in file /u01/app/grid/diag/crs/xydb7node1/crs/trace/ohasd.trc  (incident=9):

CRS-6015 [] [] [] [] [] [] [] [] [] [] [] []

Incident details in: /u01/app/grid/diag/crs/xydb7node1/crs/incident/incdir_9/ohasd_i9.trc

2020-03-10 16:07:22.120 [OHASD(126635)]CRS-8505: Oracle Clusterware OHASD process with operating system process ID 126635 encountered internal error CRS-06015

2020-03-10 16:10:51.606 [OHASD(89349)]CRS-5828: Could not start agent '/u01/app/19.3.0/grid/bin/orarootagent_root'. Details at (:CRSAGF00130:) {0:0:2} in /u01/app/grid/diag/crs/xydb7node1/crs/trace/ohasd.trc.

2020-03-10 16:10:51.638 [OHASD(89349)]CRS-5828: Could not start agent '/u01/app/19.3.0/grid/bin/oraagent_grid'. Details at (:CRSAGF00130:) {0:0:2} in /u01/app/grid/diag/crs/xydb7node1/crs/trace/ohasd.trc.

2020-03-10 16:12:51.684 [OHASD(89349)]CRS-5828: Could not start agent '/u01/app/19.3.0/grid/bin/cssdagent_root'. Details at (:CRSAGF00130:) {0:0:2} in /u01/app/grid/diag/crs/xydb7node1/crs/trace/ohasd.trc.

2020-03-10 16:12:51.705 [OHASD(89349)]CRS-5828: Could not start agent '/u01/app/19.3.0/grid/bin/cssdmonitor_root'. Details at (:CRSAGF00130:) {0:0:2} in /u01/app/grid/diag/crs/xydb7node1/crs/trace/ohasd.trc.

Analysis:

Reaching this step means the GI patch binaries were applied successfully; the session is stuck starting the CRS stack. To let opatchauto finish cleanly rather than killing it with Ctrl+C, I found a way to help it restart CRS from the outside. (The hang itself is the CRS-6015 bug discussed above, so I won't analyse it again here.)

Solution:

Open a second shell session, stop HAS, then start it again. Concretely:

[root@xydb7node1 ~]# crsctl stop has -f

[root@xydb7node1 ~]# crsctl start has
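From the second session you can watch the stack come back before switching to the hung opatchauto window (standard crsctl checks; my addition, not in the original post):

#Watch HAS and its initial resources come back up
[root@xydb7node1 ~]# crsctl check has
[root@xydb7node1 ~]# crsctl stat res -t -init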

After a short wait, the GI patch completed successfully, as shown below:

CRS service started successfully on home /u01/app/19.3.0/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:xydb7node1

CRS Home:/u01/app/19.3.0/grid

Version:19.0.0.0.0

Summary:

==Following patches were SKIPPED:

Patch: /tmp/ru19.6/30463609/30501910/30489227

Reason: This patch is already been applied, so not going to apply again.

Patch: /tmp/ru19.6/30463609/30501910/30489632

Reason: This patch is already been applied, so not going to apply again.

Patch: /tmp/ru19.6/30463609/30501910/30557433

Reason: This patch is already been applied, so not going to apply again.

Patch: /tmp/ru19.6/30463609/30501910/30655595

Reason: This patch is already been applied, so not going to apply again.

OPatchauto session completed at Tue Mar 10 16:26:39 2020

Time taken to complete the session 19 minutes, 53 seconds

Summary

Those are the problems I ran into across this whole round of 19.3 RAC patching; I hope the notes help someone. I had not expected the CRS-6015 bug to be fixed only in 19.6, with the earlier RUs leaving it alone; for Oracle, which I had always thought of as polished, that was a surprise, and a reminder to keep learning. Finally, thanks to the experts on the team for their strong support.
