The RAC environment consists of two OEL 6.3 virtual machines on VMware vSphere, running an Oracle 11.2.0.4 database.

Node 2 was found to be down.
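A quick way to confirm the cluster's view of the failure from the surviving node is to query the clusterware directly (a sketch of generic checks; output varies per installation):

[root@racnode1 ~]# olsnodes -s                 # lists each node as Active / Inactive
[root@racnode1 ~]# crsctl check cluster -all   # checks the CRS/CSS/EVM daemons on every reachable node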

1. Check the alert log on node 1

[root@racnode1 racnode1]# pwd

/u01/apps/grid/gridhome/11.2.0/grid/log/racnode1

[root@racnode1 racnode1]# tail -1000 alertracnode1.log

2014-04-15 09:41:20.815:

[crsd(27311)]CRS-2765:Resource 'ora.net1.network' has failed on server 'racnode1'.

2014-04-15 09:41:42.760:

[cssd(26972)]CRS-1612:Network communication with node racnode2 (2) missing for 50% of timeout interval.  Removal of this node from cluster in 15.000 seconds

2014-04-15 09:41:50.763:

[cssd(26972)]CRS-1611:Network communication with node racnode2 (2) missing for 75% of timeout interval.  Removal of this node from cluster in 7.000 seconds

2014-04-15 09:41:54.764:

[cssd(26972)]CRS-1610:Network communication with node racnode2 (2) missing for 90% of timeout interval.  Removal of this node from cluster in 3.000 seconds

2014-04-15 09:41:57.766:

[cssd(26972)]CRS-1607:Node racnode2 is being evicted in cluster incarnation 291818318; details at (:CSSNM00007:) in /u01/apps/grid/gridhome/11.2.0/grid/log/racnode1/cssd/ocssd.log.

2014-04-15 09:42:06.052:

[cssd(26972)]CRS-1625:Node racnode2, number 2, was manually shut down

2014-04-15 09:42:06.059:

[cssd(26972)]CRS-1601:CSSD Reconfiguration complete. Active nodes are racnode1 .

2014-04-15 09:42:06.950:

[crsd(27311)]CRS-5504:Node down event reported for node 'racnode2'.

2014-04-15 09:42:24.882:

[crsd(27311)]CRS-2773:Server 'racnode2' has been removed from pool 'Generic'.

2014-04-15 09:42:24.882:

[crsd(27311)]CRS-2773:Server 'racnode2' has been removed from pool 'ora.pera'.

[root@racnode1 racnode1]#

Node 2 was evicted at 2014-04-15 09:41:57.
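The countdown in the alert log is consistent with the Linux default CSS misscount (network heartbeat timeout) of 30 seconds: the 50% warning at 09:41:42 with 15 seconds remaining, 75% at 09:41:50, and 90% at 09:41:54 imply that network heartbeats from node 2 stopped being received around 09:41:27. Assuming the default has not been tuned, the configured value can be verified with:

[root@racnode1 ~]# crsctl get css misscount    # expect 30 (seconds) unless it has been changed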

2. Examine ocssd.log

# more /u01/apps/grid/gridhome/11.2.0/grid/log/racnode1/cssd/ocssd.log |grep "2014-04-15 09:41"

2014-04-15 09:41:23.620: [    CSSD][906479360]clssnmSendingThread: sending status msg to all nodes

2014-04-15 09:41:23.621: [    CSSD][906479360]clssnmSendingThread: sent 4 status msgs to all nodes

2014-04-15 09:41:28.758: [    CSSD][906479360]clssnmSendingThread: sending status msg to all nodes

2014-04-15 09:41:28.758: [    CSSD][906479360]clssnmSendingThread: sent 5 status msgs to all nodes

2014-04-15 09:41:33.145: [GIPCHGEN][919283456] gipchaInterfaceFail: marking interface failing 0x7fc01821d4c0 { host '', haName 'CSS_racnode-cluster', local (nil), ip '172.168.1.11:52955', subnet '172.168.1.0', mask '255.255.255.0', mac '00-50-56-a1-7b-e2', ifname 'eth1', numRef 1, numFail 0, idxBoot 0, flags 0x184d }

2014-04-15 09:41:33.760: [    CSSD][906479360]clssnmSendingThread: sending status msg to all nodes

2014-04-15 09:41:33.760: [GIPCHGEN][920860416] gipchaInterfaceFail: marking interface failing 0x7fc024040a20 { host 'racnode2', haName 'CSS_racnode-cluster', local 0x7fc01821d4c0, ip '172.168.1.12:60678', subnet '172.168.1.0', mask '255.255.255.0', mac '', ifname '', numRef 0, numFail 0, idxBoot 4, flags 0x6 }

2014-04-15 09:41:33.760: [    CSSD][906479360]clssnmSendingThread: sent 5 status msgs to all nodes

2014-04-15 09:41:33.760: [GIPCHGEN][920860416] gipchaInterfaceDisable: disabling interface 0x7fc01821d4c0 { host '', haName 'CSS_racnode-cluster', local (nil), ip '172.168.1.11:52955', subnet '172.168.1.0', mask '255.255.255.0', mac '00-50-56-a1-7b-e2', ifname 'eth1', numRef 0, numFail 1, idxBoot 0, flags 0x19cd }

2014-04-15 09:41:33.760: [GIPCHGEN][920860416] gipchaInterfaceDisable: disabling interface 0x7fc024040a20 { host 'racnode2', haName 'CSS_racnode-cluster', local 0x7fc01821d4c0, ip '172.168.1.12:60678', subnet '172.168.1.0', mask '255.255.255.0', mac '', ifname '', numRef 0, numFail 0, idxBoot 4, flags 0x86 }

2014-04-15 09:41:33.761: [GIPCHALO][920860416] gipchaLowerCleanInterfaces: performing cleanup of disabled interface 0x7fc024040a20 { host 'racnode2', haName 'CSS_racnode-cluster', local 0x7fc01821d4c0, ip '172.168.1.12:60678', subnet '172.168.1.0', mask '255.255.255.0', mac '', ifname '', numRef 0, numFail 0, idxBoot 4, flags 0xa6 }

2014-04-15 09:41:33.885: [GIPCHDEM][920860416] gipchaWorkerCleanInterface: performing cleanup of disabled interface 0x7fc01821d4c0 { host '', haName 'CSS_racnode-cluster', local (nil), ip '172.168.1.11:52955', subnet '172.168.1.0', mask '255.255.255.0', mac '00-50-56-a1-7b-e2', ifname 'eth1', numRef 0, numFail 0, idxBoot 0, flags 0x19ed }

2014-04-15 09:41:34.145: [GIPCHDEM][919283456] gipchaDaemonInfRequest: sent local interfaceRequest,  hctx 0x1e22410 [0000000000000010] { gipchaContext : host 'racnode1', name 'CSS_racnode-cluster', luid '8bbba732-00000000', numNode 1, numInf 0, usrFlags 0x0, flags 0x63 } to gipcd

2014-04-15 09:41:37.993: [GIPCHALO][920860416] gipchaLowerProcessNode: no valid interfaces found to node for 5240 ms, node 0x7fc018226540 { host 'racnode2', haName 'CSS_racnode-cluster', srcLuid 8bbba732-d4368a71, dstLuid 0fc9914c-ec29f220 numInf 0, contigSeq 1787047, lastAck 1786995, lastValidAck 1787046, sendSeq [1786996 : 1787014], createTime 1683284, sentRegister 1, localMonitor 1, flags 0x2408 }

2014-04-15 09:41:38.762: [    CSSD][906479360]clssnmSendingThread: sending status msg to all nodes

2014-04-15 09:41:38.762: [    CSSD][906479360]clssnmSendingThread: sent 5 status msgs to all nodes

2014-04-15 09:41:40.146: [GIPCHDEM][919283456] gipchaDaemonInfRequest: sent local interfaceRequest,  hctx 0x1e22410 [0000000000000010] { gipchaContext : host 'racnode1', name 'CSS_racnode-cluster', luid '8bbba732-00000000', numNode 1, numInf 0, usrFlags 0x0, flags 0x63 } to gipcd

2014-04-15 09:41:42.760: [    CSSD][908056320]clssnmPollingThread: node racnode2 (2) at 50% heartbeat fatal, removal in 15.000 seconds

2014-04-15 09:41:42.760: [    CSSD][908056320]clssnmPollingThread: node racnode2 (2) is impending reconfig, flag 2294796, misstime 15000

2014-04-15 09:41:42.760: [    CSSD][908056320]clssnmPollingThread: local diskTimeout set to 27000 ms, remote disk timeout set to 27000, impending reconfig status(1)

2014-04-15 09:41:42.764: [    CSSD][906479360]clssnmSendingThread: sending status msg to all nodes

2014-04-15 09:41:42.764: [    CSSD][906479360]clssnmSendingThread: sent 4 status msgs to all nodes

2014-04-15 09:41:43.764: [GIPCHALO][920860416] gipchaLowerProcessNode: no valid interfaces found to node for 11010 ms, node 0x7fc018226540 { host 'racnode2', haName 'CSS_racnode-cluster', srcLuid 8bbba732-d4368a71, dstLuid 0fc9914c-ec29f220 numInf 0, contigSeq 1787047, lastAck 1786995, lastValidAck 1787046, sendSeq [1786996 : 1787025], createTime 1683284, sentRegister 1, localMonitor 1, flags 0x2408 }

2014-04-15 09:41:43.822: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526103/1203750934

2014-04-15 09:41:43.906: [    CSSD][914388736]clssnmvDHBValidateNcopy: node 2, racnode2, has a disk HB, but no network HB, DHB has rcfg 291818318, wrtcnt, 1193997, LATS 1203751024, lastSeqNo 1175771, uniqueness 1396323841, timestamp 1397526102/1203310914

2014-04-15 09:41:45.102: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526105/1203752214

2014-04-15 09:41:45.772: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526105/1203752884

2014-04-15 09:41:46.147: [GIPCHDEM][919283456] gipchaDaemonInfRequest: sent local interfaceRequest,  hctx 0x1e22410 [0000000000000010] { gipchaContext : host 'racnode1', name 'CSS_racnode-cluster', luid '8bbba732-00000000', numNode 1, numInf 0, usrFlags 0x0, flags 0x63 } to gipcd

2014-04-15 09:41:46.181: [    CSSD][914388736]clssnmvDHBValidateNcopy: node 2, racnode2, has a disk HB, but no network HB, DHB has rcfg 291818318, wrtcnt, 1193999, LATS 1203753294, lastSeqNo 1193997, uniqueness 1396323841, timestamp 1397526104/1203313164

2014-04-15 09:41:46.782: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526106/1203753894

2014-04-15 09:41:47.120: [    CSSD][914388736]clssnmvDHBValidateNcopy: node 2, racnode2, has a disk HB, but no network HB, DHB has rcfg 291818318, wrtcnt, 1194001, LATS 1203754234, lastSeqNo 1193999, uniqueness 1396323841, timestamp 1397526106/1203314834

2014-04-15 09:41:47.622: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526107/1203754734

2014-04-15 09:41:47.765: [    CSSD][906479360]clssnmSendingThread: sending status msg to all nodes

2014-04-15 09:41:47.765: [    CSSD][906479360]clssnmSendingThread: sent 5 status msgs to all nodes

2014-04-15 09:41:48.004: [    CSSD][914388736]clssnmvDHBValidateNcopy: node 2, racnode2, has a disk HB, but no network HB, DHB has rcfg 291818318, wrtcnt, 1194002, LATS 1203755124, lastSeqNo 1194001, uniqueness 1396323841, timestamp 1397526107/1203315684

2014-04-15 09:41:48.507: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526108/1203755624

2014-04-15 09:41:48.774: [    CSSD][914388736]clssnmvDHBValidateNcopy: node 2, racnode2, has a disk HB, but no network HB, DHB has rcfg 291818318, wrtcnt, 1194003, LATS 1203755894, lastSeqNo 1194002, uniqueness 1396323841, timestamp 1397526108/1203316514

2014-04-15 09:41:49.213: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526109/1203756324

2014-04-15 09:41:49.766: [GIPCHALO][920860416] gipchaLowerProcessNode: no valid interfaces found to node for 17010 ms, node 0x7fc018226540 { host 'racnode2', haName 'CSS_racnode-cluster', srcLuid 8bbba732-d4368a71, dstLuid 0fc9914c-ec29f220 numInf 0, contigSeq 1787047, lastAck 1786995, lastValidAck 1787046, sendSeq [1786996 : 1787037], createTime 1683284, sentRegister 1, localMonitor 1, flags 0x2408 }

2014-04-15 09:41:49.893: [    CSSD][914388736]clssnmvDHBValidateNcopy: node 2, racnode2, has a disk HB, but no network HB, DHB has rcfg 291818318, wrtcnt, 1194004, LATS 1203757004, lastSeqNo 1194003, uniqueness 1396323841, timestamp 1397526109/1203317334

2014-04-15 09:41:49.903: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526109/1203757014

2014-04-15 09:41:50.528: [    CSSD][914388736]clssnmvDHBValidateNcopy: node 2, racnode2, has a disk HB, but no network HB, DHB has rcfg 291818318, wrtcnt, 1194005, LATS 1203757644, lastSeqNo 1194004, uniqueness 1396323841, timestamp 1397526109/1203317964

2014-04-15 09:41:50.548: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526110/1203757664

2014-04-15 09:41:50.763: [    CSSD][908056320]clssnmPollingThread: node racnode2 (2) at 75% heartbeat fatal, removal in 7.000 seconds

2014-04-15 09:41:50.875: [    CSSD][914388736]clssnmvDHBValidateNcopy: node 2, racnode2, has a disk HB, but no network HB, DHB has rcfg 291818318, wrtcnt, 1194006, LATS 1203757994, lastSeqNo 1194005, uniqueness 1396323841, timestamp 1397526110/1203318664

2014-04-15 09:41:51.113: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526111/1203758224

2014-04-15 09:41:51.396: [    CSSD][914388736]clssnmvDHBValidateNcopy: node 2, racnode2, has a disk HB, but no network HB, DHB has rcfg 291818318, wrtcnt, 1194007, LATS 1203758514, lastSeqNo 1194006, uniqueness 1396323841, timestamp 1397526110/1203319204

2014-04-15 09:41:51.692: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526111/1203758804

2014-04-15 09:41:52.148: [GIPCHDEM][919283456] gipchaDaemonInfRequest: sent local interfaceRequest,  hctx 0x1e22410 [0000000000000010] { gipchaContext : host 'racnode1', name 'CSS_racnode-cluster', luid '8bbba732-00000000', numNode 1, numInf 0, usrFlags 0x0, flags 0x63 } to gipcd

2014-04-15 09:41:52.294: [    CSSD][914388736]clssnmvDHBValidateNcopy: node 2, racnode2, has a disk HB, but no network HB, DHB has rcfg 291818318, wrtcnt, 1194008, LATS 1203759404, lastSeqNo 1194007, uniqueness 1396323841, timestamp 1397526111/1203319754

2014-04-15 09:41:52.305: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526112/1203759424

2014-04-15 09:41:52.767: [    CSSD][906479360]clssnmSendingThread: sending status msg to all nodes

2014-04-15 09:41:52.767: [    CSSD][906479360]clssnmSendingThread: sent 5 status msgs to all nodes

2014-04-15 09:41:52.772: [    CSSD][914388736]clssnmvDHBValidateNcopy: node 2, racnode2, has a disk HB, but no network HB, DHB has rcfg 291818318, wrtcnt, 1194009, LATS 1203759884, lastSeqNo 1194008, uniqueness 1396323841, timestamp 1397526112/1203320364

2014-04-15 09:41:52.974: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526112/1203760084

2014-04-15 09:41:53.793: [    CSSD][914388736]clssnmvDHBValidateNcopy: node 2, racnode2, has a disk HB, but no network HB, DHB has rcfg 291818318, wrtcnt, 1194010, LATS 1203760904, lastSeqNo 1194009, uniqueness 1396323841, timestamp 1397526112/1203321034

2014-04-15 09:41:53.934: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526113/1203761044

2014-04-15 09:41:54.401: [    CSSD][914388736]clssnmvDHBValidateNcopy: node 2, racnode2, has a disk HB, but no network HB, DHB has rcfg 291818318, wrtcnt, 1194012, LATS 1203761514, lastSeqNo 1194010, uniqueness 1396323841, timestamp 1397526114/1203322404

2014-04-15 09:41:54.494: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526114/1203761604

2014-04-15 09:41:54.764: [    CSSD][908056320]clssnmPollingThread: node racnode2 (2) at 90% heartbeat fatal, removal in 3.000 seconds, seedhbimpd 1

2014-04-15 09:41:55.044: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526115/1203762154

2014-04-15 09:41:55.134: [    CSSD][914388736]clssnmvDHBValidateNcopy: node 2, racnode2, has a disk HB, but no network HB, DHB has rcfg 291818318, wrtcnt, 1194013, LATS 1203762254, lastSeqNo 1194012, uniqueness 1396323841, timestamp 1397526114/1203322964

2014-04-15 09:41:55.640: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526115/1203762754

2014-04-15 09:41:55.768: [GIPCHALO][920860416] gipchaLowerProcessNode: no valid interfaces found to node for 23010 ms, node 0x7fc018226540 { host 'racnode2', haName 'CSS_racnode-cluster', srcLuid 8bbba732-d4368a71, dstLuid 0fc9914c-ec29f220 numInf 0, contigSeq 1787047, lastAck 1786995, lastValidAck 1787046, sendSeq [1786996 : 1787048], createTime 1683284, sentRegister 1, localMonitor 1, flags 0x2408 }

2014-04-15 09:41:56.498: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526116/1203763614

2014-04-15 09:41:57.145: [    CSSD][914388736]clssnmvDHBValidateNcopy: node 2, racnode2, has a disk HB, but no network HB, DHB has rcfg 291818318, wrtcnt, 1194015, LATS 1203764254, lastSeqNo 1194013, uniqueness 1396323841, timestamp 1397526116/1203324554

2014-04-15 09:41:57.645: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526117/1203764764

2014-04-15 09:41:57.764: [    CSSD][908056320]clssnmPollingThread: Removal started for node racnode2 (2), flags 0x23040c, state 3, wt4c 0

2014-04-15 09:41:57.764: [    CSSD][908056320]clssnmMarkNodeForRemoval: node 2, racnode2 marked for removal

2014-04-15 09:41:57.765: [    CSSD][908056320]clssnmDiscHelper: racnode2, node(2) connection failed, endp (0x3938), probe(0x7fc000000000), ninf->endp 0x3938

2014-04-15 09:41:57.765: [    CSSD][908056320]clssnmDiscHelper: node 2 clean up, endp (0x3938), init state 5, cur state 5

2014-04-15 09:41:57.765: [GIPCXCPT][908056320] gipcInternalDissociate: obj 0x7fc024056a70 [0000000000003938] { gipcEndpoint : localAddr 'gipcha://racnode1:nm2_racnode-cluster/da4b-8718-50e0-b51', remoteAddr 'gipcha://racnode2:7b5c-4672-87e8-565', numPend 1, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7fc02402dd90, sendp (nil)flags 0x138606, usrFlags 0x0 } not associated with any container, ret gipcretFail (1)

2014-04-15 09:41:57.765: [GIPCXCPT][908056320] gipcDissociateF [clssnmDiscHelper : clssnm.c : 3485]: EXCEPTION[ ret gipcretFail (1) ]  failed to dissociate obj 0x7fc024056a70 [0000000000003938] { gipcEndpoint : localAddr 'gipcha://racnode1:nm2_racnode-cluster/da4b-8718-50e0-b51', remoteAddr 'gipcha://racnode2:7b5c-4672-87e8-565', numPend 1, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 1, wobj 0x7fc02402dd90, sendp (nil)flags 0x138606, usrFlags 0x0 }, flags 0x0

2014-04-15 09:41:57.765: [    CSSD][904902400]clssnmDoSyncUpdate: Initiating sync 291818318

2014-04-15 09:41:57.765: [    CSSD][904902400]clssscCompareSwapEventValue: changed NMReconfigInProgress  val 1, from -1, changes 7

2014-04-15 09:41:57.765: [    CSSD][904902400]clssnmDoSyncUpdate: local disk timeout set to 27000 ms, remote disk timeout set to 27000

2014-04-15 09:41:57.765: [    CSSD][904902400]clssnmDoSyncUpdate: new values for local disk timeout and remote disk timeout will take effect when the sync is completed.

2014-04-15 09:41:57.765: [    CSSD][903325440]clssnmDiscEndp: gipcDestroy 0x3938

2014-04-15 09:41:57.765: [    CSSD][904902400]clssnmDoSyncUpdate: Starting cluster reconfig with incarnation 291818318

2014-04-15 09:41:57.765: [    CSSD][904902400]clssnmSetupAckWait: Ack message type (11)

2014-04-15 09:41:57.765: [    CSSD][904902400]clssnmSetupAckWait: node(1) is ALIVE

2014-04-15 09:41:57.765: [    CSSD][904902400]clssnmSendSync: syncSeqNo(291818318), indicating EXADATA fence initialization complete

2014-04-15 09:41:57.765: [    CSSD][904902400]List of nodes that have ACKed my sync: NULL

2014-04-15 09:41:57.765: [    CSSD][904902400]clssnmSendSync: syncSeqNo(291818318)

2014-04-15 09:41:57.765: [    CSSD][903325440]clssnmHandleSync: Node racnode1, number 1, is EXADATA fence capable

2014-04-15 09:41:57.765: [    CSSD][904902400]clssnmWaitForAcks: Ack message type(11), ackCount(1)

2014-04-15 09:41:57.765: [    CSSD][903325440]clssscUpdateEventValue: NMReconfigInProgress  val 1, changes 8

2014-04-15 09:41:57.765: [    CSSD][903325440]clssnmHandleSync: local disk timeout set to 27000 ms, remote disk timeout set to 27000

2014-04-15 09:41:57.765: [    CSSD][903325440]clssnmHandleSync: initleader 1 newleader 1

2014-04-15 09:41:57.765: [    CSSD][903325440]clssnmQueueClientEvent:  Sending Event(2), type 2, incarn 291818317

2014-04-15 09:41:57.765: [    CSSD][903325440]clssnmQueueClientEvent: Node[1] state = 3, birth = 291818316, unique = 1396323475

2014-04-15 09:41:57.765: [    CSSD][903325440]clssnmQueueClientEvent: Node[2] state = 5, birth = 291818317, unique = 1396323841

2014-04-15 09:41:57.765: [    CSSD][903325440]clssnmHandleSync: Acknowledging sync: src[1] srcName[racnode1] seq[11] sync[291818318]

2014-04-15 09:41:57.765: [    CSSD][903325440]clssnmSendAck: node 1, racnode1, syncSeqNo(291818318) type(11)

2014-04-15 09:41:57.765: [    CSSD][903325440]clssnmHandleAck: Received ack type 11 from node racnode1, number 1, with seq 0 for sync 291818318, waiting for 0 acks

2014-04-15 09:41:57.765: [    CSSD][904902400]clssnmSendSync: syncSeqNo(291818318), indicating EXADATA fence initialization complete

2014-04-15 09:41:57.765: [    CSSD][941274880]clssgmStartNMMon: node 1 active, birth 291818316

2014-04-15 09:41:57.765: [    CSSD][941274880]clssgmStartNMMon: node 2 active, birth 291818317

2014-04-15 09:41:57.765: [    CSSD][941274880]NMEVENT_SUSPEND [00][00][00][06]

2014-04-15 09:41:57.765: [    CSSD][941274880]clssgmCompareSwapEventValue: changed CmInfo State  val 5, from 11, changes 21

2014-04-15 09:41:57.765: [    CSSD][941274880]clssgmSuspendAllGrocks: Issue SUSPEND

2014-04-15 09:41:57.765: [    CSSD][904902400]List of nodes that have ACKed my sync: 1

2014-04-15 09:41:57.765: [    CSSD][904902400]clssnmWaitForAcks: done, syncseq(291818318), msg type(11)

2014-04-15 09:41:57.765: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(IGPERAperaXDB) count(2) master(1) event(2), incarn 2, mbrc 2, to member 1, events 0x0, state 0x0

2014-04-15 09:41:57.765: [    CSSD][904902400]clssnmSetMinMaxVersion:node1  product/protocol (11.2/1.4)

2014-04-15 09:41:57.766: [    CSSD][904902400]clssnmSetMinMaxVersion: properties common to all nodes: 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17

2014-04-15 09:41:57.766: [    CSSD][904902400]clssnmSetMinMaxVersion: min product/protocol (11.2/1.4)

2014-04-15 09:41:57.766: [    CSSD][904902400]clssnmSetMinMaxVersion: max product/protocol (11.2/1.4)

2014-04-15 09:41:57.766: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(IG+ASMSYS$USERS) count(2) master(1) event(2), incarn 2, mbrc 2, to member 1, events 0x0, state 0x0

2014-04-15 09:41:57.766: [    CSSD][904902400]clssnmNeedConfReq: No configuration to change

2014-04-15 09:41:57.766: [    CSSD][904902400]clssnmDoSyncUpdate: Terminating node 2, racnode2, misstime(30000) state(5)

2014-04-15 09:41:57.766: [    CSSD][904902400]clssnmDoSyncUpdate: Wait for 0 vote ack(s)

2014-04-15 09:41:57.766: [    CSSD][904902400]clssnmCheckDskInfo: Checking disk info...

2014-04-15 09:41:57.766: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(IGPERASYS$USERS) count(2) master(1) event(2), incarn 2, mbrc 2, to member 1, events 0x0, state 0x0

2014-04-15 09:41:57.766: [    CSSD][904902400]clssnmCheckSplit: Node 2, racnode2, is alive, DHB (1397526116, 1203324554) more than disk timeout of 27000 after the last NHB (1397526085, 1203293884)

2014-04-15 09:41:57.766: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(crs_version) count(3) master(0) event(2), incarn 3, mbrc 3, to member 0, events 0x20, state 0x0

2014-04-15 09:41:57.766: [    CSSD][904902400]clssnmCheckDskInfo: My cohort: 1

2014-04-15 09:41:57.766: [    CSSD][904902400]clssnmRemove: Start

2014-04-15 09:41:57.766: [    CSSD][904902400](:CSSNM00007:)clssnmrRemoveNode: Evicting node 2, racnode2, from the cluster in incarnation 291818318, node birth incarnation 291818317, death incarnation 291818318, stateflags 0x234000 uniqueness value 1396323841

2014-04-15 09:41:57.766: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(crs_version) count(3) master(0) event(2), incarn 3, mbrc 3, to member 1, events 0x0, state 0x0

2014-04-15 09:41:57.767: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(CRF-) count(4) master(0) event(2), incarn 4, mbrc 4, to member 0, events 0x38, state 0x0

2014-04-15 09:41:57.767: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(CRF-) count(4) master(0) event(2), incarn 4, mbrc 4, to member 1, events 0x38, state 0x0

2014-04-15 09:41:57.767: [ default][904902400]kgzf_gen_node_reid2: generated reid cid=a2998261b3ccff8abf36841a04ffe27b,icin=291818316,nmn=2,lnid=291818317,gid=0,gin=0,gmn=0,umemid=0,opid=0,opsn=0,lvl=node hdr=0xfece0100

2014-04-15 09:41:57.767: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(CLSN.ONSPROC.MASTER) count(2) master(1) event(2), incarn 2, mbrc 2, to member 1, events 0xa0, state 0x0

2014-04-15 09:41:57.767: [    CSSD][904902400]clssnmrFenceSage: Fenced node racnode2, number 2, with EXADATA, handle 0

2014-04-15 09:41:57.767: [    CSSD][904902400]clssnmSendShutdown: req to node 2, kill time 1203764884

2014-04-15 09:41:57.767: [    CSSD][904902400]clssnmsendmsg: not connected to node 2

2014-04-15 09:41:57.767: [    CSSD][904902400]clssnmSendShutdown: Send to node 2 failed

2014-04-15 09:41:57.767: [    CSSD][904902400]clssnmWaitOnEvictions: Start

2014-04-15 09:41:57.767: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(DB+ASM) count(2) master(0) event(2), incarn 2, mbrc 2, to member 0, events 0x68, state 0x0

2014-04-15 09:41:57.767: [    CSSD][904902400]clssnmWaitOnEvictions: node 2, undead 1, EXADATA fence handle 0 kill reqest id 0, last DHB (1397526116, 1203324554, 1201720), seedhbimpd TRUE

2014-04-15 09:41:57.767: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(DG+ASM) count(2) master(0) event(2), incarn 2, mbrc 2, to member 0, events 0x0, state 0x0

2014-04-15 09:41:57.767: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(IG+ASMSYS$BACKGROUND) count(2) master(1) event(2), incarn 2, mbrc 2, to member 1, events 0x0, state 0x0

2014-04-15 09:41:57.767: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(DBPERA) count(2) master(0) event(2), incarn 2, mbrc 2, to member 0, events 0x68, state 0x0

2014-04-15 09:41:57.767: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(VT+ASM) count(2) master(1) event(2), incarn 6, mbrc 2, to member 1, events 0x60, state 0x0

2014-04-15 09:41:57.767: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(IGPERASYS$BACKGROUND) count(2) master(1) event(2), incarn 2, mbrc 2, to member 1, events 0x0, state 0x0

2014-04-15 09:41:57.767: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(DG_FRA) count(2) master(1) event(2), incarn 5, mbrc 2, to member 1, events 0x4, state 0x0

2014-04-15 09:41:57.767: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(DG+ASM0) count(2) master(0) event(2), incarn 2, mbrc 2, to member 0, events 0x0, state 0x0

2014-04-15 09:41:57.768: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(GR+GCR1) count(4) master(0) event(2), incarn 26, mbrc 4, to member 0, events 0x280, state 0x0

2014-04-15 09:41:57.768: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(GR+GCR1) count(4) master(0) event(2), incarn 26, mbrc 4, to member 2, events 0x280, state 0x0

2014-04-15 09:41:57.768: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(DG_CRS) count(2) master(0) event(2), incarn 2, mbrc 2, to member 0, events 0x4, state 0x0

2014-04-15 09:41:57.768: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(DGPERA-) count(2) master(0) event(2), incarn 2, mbrc 2, to member 0, events 0x0, state 0x0

2014-04-15 09:41:57.768: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(DGPERA0) count(2) master(0) event(2), incarn 2, mbrc 2, to member 0, events 0x0, state 0x0

2014-04-15 09:41:57.768: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(DG_CRS1) count(2) master(1) event(2), incarn 5, mbrc 2, to member 1, events 0x4, state 0x0

2014-04-15 09:41:57.768: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(DG_DATA) count(2) master(1) event(2), incarn 5, mbrc 2, to member 1, events 0x4, state 0x0

2014-04-15 09:41:57.768: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(CLSN.RLB.pera.MASTER) count(2) master(1) event(2), incarn 2, mbrc 2, to member 1, events 0xa0, state 0x0

2014-04-15 09:41:57.768: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(CLSFRAME) count(2) master(1) event(2), incarn 2, mbrc 2, to member 1, events 0x8, state 0x0

2014-04-15 09:41:57.768: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(EVMDMAIN) count(2) master(1) event(2), incarn 2, mbrc 2, to member 1, events 0x8, state 0x0

2014-04-15 09:41:57.768: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(CRSDMAIN) count(2) master(1) event(2), incarn 2, mbrc 2, to member 1, events 0x8, state 0x0

2014-04-15 09:41:57.768: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(EVMDMAIN2) count(2) master(1) event(2), incarn 2, mbrc 2, to member 1, events 0x8, state 0x0

2014-04-15 09:41:57.769: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(IGPERAALL) count(2) master(1) event(2), incarn 2, mbrc 2, to member 1, events 0x0, state 0x0

2014-04-15 09:41:57.769: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(CTSSGROUP) count(2) master(1) event(2), incarn 2, mbrc 2, to member 1, events 0x8, state 0x0

2014-04-15 09:41:57.769: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(CLSN.AQPROC.pera.MASTER) count(2) master(2) event(2), incarn 2, mbrc 2, to member 1, events 0xa0, state 0x0

2014-04-15 09:41:57.769: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(IGPERApera) count(2) master(1) event(2), incarn 2, mbrc 2, to member 1, events 0x0, state 0x0

2014-04-15 09:41:57.769: [    CSSD][941274880]clssgmQueueGrockEvent: groupName(ocr_racnode-cluster) count(2) master(1) event(2), incarn 2, mbrc 2, to member 1, events 0x78, state 0x0

2014-04-15 09:41:57.769: [    CSSD][941274880]clssgmSuspendAllGrocks: done

2014-04-15 09:41:57.769: [    CSSD][941274880]clssgmCompareSwapEventValue: changed CmInfo State  val 2, from 5, changes 22

2014-04-15 09:41:57.769: [    CSSD][941274880]clssgmUpdateEventValue: ConnectedNodes  val 291818317, changes 7

2014-04-15 09:41:57.769: [    CSSD][941274880]clssgmCleanupNodeContexts():  cleaning up nodes, rcfg(291818317)

2014-04-15 09:41:57.769: [    CSSD][941274880]clssgmCleanupNodeContexts():  successful cleanup of nodes rcfg(291818317)

2014-04-15 09:41:57.769: [GIPCHAUP][920860416] gipchaUpperDisconnect: initiated discconnect umsg 0x7fc024054c60 { msg 0x7fc0240565c8, ret gipcretRequestPending (15), flags 0x2 }, msg 0x7fc0240565c8 { type gipchaMsgTypeDisconnect (5), srcCid 00000000-000038ed, dstCid 00000000-0000059b }, endp 0x7fc0240553a0 [00000000000038ed] { gipchaEndpoint : port 'nm2_racnode-cluster/da4b-8718-50e0-b51b', peer 'racnode2:7b5c-4672-87e8-565a', srcCid 00000000-000038ed,  dstCid 00000000-0000059b, numSend 29, maxSend 100, groupListType 2, hagroup 0x2021010, usrFlags 0x4000, flags 0x21c }

2014-04-15 09:41:57.769: [    CSSD][941274880]clssgmStartNMMon:  completed node cleanup

2014-04-15 09:41:57.769: [    CSSD][909633280]clssgmUpdateEventValue: HoldRequest  val 1, changes 5

2014-04-15 09:41:57.775: [    CSSD][906479360]clssnmSendingThread: sending status msg to all nodes

2014-04-15 09:41:57.775: [    CSSD][906479360]clssnmSendingThread: sent 5 status msgs to all nodes

2014-04-15 09:41:57.867: [    CSSD][904902400]clssnmWaitOnEvictions: node 2, undead 1, EXADATA fence handle 0 kill reqest id 0, last DHB (1397526116, 1203324554, 1201720), seedhbimpd TRUE

2014-04-15 09:41:57.934: [    CSSD][914388736]clssnmvDHBValidateNcopy: node 2, racnode2, has a disk HB, but no network HB, DHB has rcfg 291818318, wrtcnt, 1194016, LATS 1203765044, lastSeqNo 1194015, uniqueness 1396323841, timestamp 1397526117/1203325474

2014-04-15 09:41:57.934: [    CSSD][914388736]clssnmvDiskEvict: Kill block write, file /dev/asm-diskb flags 0x00010004, kill block unique 1396323841, stamp 1203764884/1203764884

2014-04-15 09:41:57.934: [    CSSD][904902400]clssnmWaitOnEvictions: node 2, undead 1, EXADATA fence handle 0 kill reqest id 0, last DHB (1397526117, 1203325474, 1201721), seedhbimpd TRUE

2014-04-15 09:41:58.035: [    CSSD][904902400]clssnmWaitOnEvictions: node 2, undead 1, EXADATA fence handle 0 kill reqest id 0, last DHB (1397526117, 1203325474, 1201721), seedhbimpd TRUE

2014-04-15 09:41:58.135: [    CSSD][904902400]clssnmWaitOnEvictions: node 2, undead 1, EXADATA fence handle 0 kill reqest id 0, last DHB (1397526117, 1203325474, 1201721), seedhbimpd TRUE

2014-04-15 09:41:58.149: [GIPCHDEM][919283456] gipchaDaemonInfRequest: sent local interfaceRequest,  hctx 0x1e22410 [0000000000000010] { gipchaContext : host 'racnode1', name 'CSS_racnode-cluster', luid '8bbba732-00000000', numNode 1, numInf 0, usrFlags 0x0, flags 0x63 } to gipcd

2014-04-15 09:41:58.205: [    CSSD][917706496]clssnmvDiskPing: Writing with status 0x3, timestamp 1397526118/1203765314

2014-04-15 09:41:58.235: [    CSSD][904902400]clssnmWaitOnEvictions: node 2, undead 1, EXADATA fence handle 0 kill reqest id 0, last DHB (1397526117, 1203325474, 1201721), seedhbimpd TRUE

2014-04-15 09:41:58.335: [    CSSD][904902400]clssnmWaitOnEvictions: node 2, undead 1, EXADATA fence handle 0 kill reqest id 0, last DHB (1397526117, 1203325474, 1201721), seedhbimpd TRUE

2014-04-15 09:41:58.435: [    CSSD][904902400]clssnmWaitOnEvictions: node 2, undead 1, EXADATA fence handle 0 kill reqest id 0, last DHB (1397526117, 1203325474, 1201721), seedhbimpd TRUE

2014-04-15 09:41:58.535: [    CSSD][904902400]clssnmWaitOnEvictions: node 2, undead 1, EXADATA fence handle 0 kill reqest id 0, last DHB (1397526117, 1203325474, 1201721), seedhbimpd TRUE

2014-04-15 09:41:58.551: [    CSSD][916129536]clssnmvDiskKillCheck: not evicted, file /dev/asm-diskb flags 0x00000000, kill block unique 0, my unique 1396323475

2014-04-15 09:41:58.636: [    CSSD][904902400]clssnmWaitOnEvictions: node 2, undead 1, EXADATA fence handle 0 kill reqest id 0, last DHB (1397526117, 1203325474, 1201721), seedhbimpd TRUE
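The log tells a consistent story: from 09:41:33 GIPC marks the private interconnect interface eth1 (172.168.1.11) as failing and disables it, and from 09:41:43 onward clssnmvDHBValidateNcopy repeatedly reports that node 2 "has a disk HB, but no network HB". Node 2 was therefore still alive and writing its disk heartbeat to the voting disk (/dev/asm-diskb) while the private network between the nodes was down, so CSSD resolved the potential split-brain by evicting node 2 (clssnmrRemoveNode, CSSNM00007); the shutdown request could not even be delivered ("clssnmsendmsg: not connected to node 2").

Since everything points at the interconnect rather than storage, the private network path should be checked before bringing node 2 back. A sketch of generic checks, reusing the interface name and addresses from the log above:

[root@racnode1 ~]# oifcfg getif                     # confirm eth1/172.168.1.0 is registered as cluster_interconnect
[root@racnode1 ~]# ip addr show eth1                # link state of the private NIC on this node
[root@racnode1 ~]# ethtool eth1                     # "Link detected: yes" expected on the VMware vNIC
[root@racnode1 ~]# ping -c 3 -I eth1 172.168.1.12   # reachability of node 2's private IP

Because these VMs run on VMware vSphere, the virtual switch / port group carrying the 172.168.1.0/24 interconnect network is also worth checking on the ESXi side.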
