
Showing posts from 2011

How to perform an out-of-place manual upgrade from 11.2.0.1 to 11.2.0.2

Solution: The 11.2.0.2 patch set software is a full release. The 11.2.0.2 OUI does not update the files of an existing 11.2.0.1 ORACLE_HOME; it performs a new installation. There are two methods: Out-of-Place (Oracle's recommended approach) and In-Place. But here we need to understand what exactly each method is and how it worked in prior releases.

DB upgrade 8i to 10g (Out-of-Place): we install the Oracle 10g binaries at a new location, then use DBUA or the catupgrd.sql script to upgrade the database. This is an out-of-place migration (i.e. a new Oracle home is created, from the inventory perspective).

DB upgrade 10.2.0.1 to 10.2.0.4 (In-Place): here we do not install the 10.2.0.4 patchset in a new home (Oracle will not let us do that, as it is a patch), but select the existing home to upgrade. We also keep a backup of the original 10.2.0.1 home, as we may have to fall back.

As 11.2.0.2 is not a patch but a full install, it does not allow us to install binaries in the existing 11.2.0.1 home (i.e. In-Place). But as I mentioned above, we can perform the upgrade out-of-place.
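As a rough illustration, the out-of-place flow looks like the sketch below. The new home path and the exact steps are assumptions for the example, not from the original post; always run the pre-upgrade checks and take a backup before starting.

$ # 1. Install the 11.2.0.2 software into a NEW Oracle home (software-only install)
$ ./runInstaller        # choose "Install database software only"; new home e.g. /u01/app/oracle/product/11.2.0.2/dbhome_1 (assumed path)

$ # 2. Run the pre-upgrade information tool from the new home against the running 11.2.0.1 database
$ sqlplus / as sysdba @/u01/app/oracle/product/11.2.0.2/dbhome_1/rdbms/admin/utlu112i.sql

$ # 3. Point the environment at the NEW home and upgrade with DBUA
$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/dbhome_1
$ export PATH=$ORACLE_HOME/bin:$PATH
$ dbua

$ # Manual alternative: STARTUP UPGRADE from the new home, then
$ # sqlplus / as sysdba @?/rdbms/admin/catupgrd.sql
$ # followed by @?/rdbms/admin/utlrp.sql to recompile invalid objects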

CRS daemons not up after rebooting node1 on a two-node RAC

Please check the crsctl.* logs in the /tmp directory; you will see errors like the ones below.

$ cat crsctl.2717
Failure -2 opening file handle for (c6t0d1)   --------->> Voting disk not accessible from node1
Failure 1 checking the CSS voting device.     --------->> Voting disk not accessible from node1

ocssd1.log on node1:

$ more /u01/crs/oracle/product/10.1.0/crs/css/log/ocssd1.log
2008-09-09 12:06:43.683 [5] >TRACE:   clssgmClientConnectMsg: Connect from con(600000000080fca0), proc(6000000000b497b0)  pid()
2008-09-09 12:08:32.836 [5] >TRACE:   clssgmClientConnectMsg: Connect from con(6000000000810120), proc(6000000000b168e0)  pid()
2008-09-09 12:08:32.848 [5] >TRACE:   clsc_receive: (6000000000b3f590) Connection failed, transport error (507, 0, 0)
2008-09-09 12:08:32.848 [5] >TRACE:   clscreceive: (6000000000810120) Physical connection (6000000000b3f590) not active, rc 11
2008-09-09 12:08:32.848 [5] >TRACE:   clscreceive: (60000000008105a0) Physical connection (6000000000b3f590) not ac
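A quick way to confirm this diagnosis is to check whether the voting device can actually be opened from node1. The device name below is taken from the error above, but the exact /dev path, owner and permissions vary by platform, so treat this as a sketch and adapt it to your environment.

$ # List the voting disk(s) configured for CSS (command available in 10gR2 and later)
$ crsctl query css votedisk

$ # Verify the raw device exists and is readable by the CRS/oracle owner on node1
$ ls -l /dev/rdsk/c6t0d1                                  # assumed device path; check owner, group and mode
$ dd if=/dev/rdsk/c6t0d1 of=/dev/null bs=8192 count=1     # should complete without an I/O or permission error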

CRS-0215: Could not start resource 'ora..vip' during vipca in RAC 10g

VIPCA fails during a RAC 10g installation. The crs_stat -t output shows the VIP is OFFLINE, and starting it explicitly gives the error: CRS-0215: Could not start resource 'ora.dbtest2.vip'.

Example:

$ crs_stat -t
Name            Type         Target    State     Host
------------------------------------------------------------
ora....st2.gsd  application  ONLINE    ONLINE    dbtest2
ora....st2.ons  application  ONLINE    ONLINE    dbtest2
ora....st2.vip  application  ONLINE    OFFLINE

# ./srvctl start nodeapps -n dbtest2
dbtest2:ora.dbtest2.vip:Interface eri0 checked failed (host=dbtest2)
dbtest2:ora.dbtest2.vip:Failed to start VIP 10.11.11.198 (host=dbtest2)
dbtest2:ora.dbtest2.vip:Interface eri0 checked failed (host=dbtest2)
dbtest2:ora.dbtest2.vip:Failed to start VIP 10.11.11.198 (host=dbtest2)
CRS-1006: No more members to consider
CRS-0215: Could not start resource 'ora.dbtest2.vip'.
CRS-0210: Could not find resource ora.dbtest2.LISTEN
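When the VIP fails with "Interface ... checked failed", the usual suspects are the public interface name, the VIP address and the netmask recorded for the nodeapps. A minimal check-and-fix sketch follows; the interface (eri0) and VIP (10.11.11.198) come from the output above, while the subnet and netmask are assumed, so verify them against your own network configuration first.

$ # Check which interfaces Clusterware knows about
$ oifcfg getif
eri0  10.11.11.0  global  public      # the public interface should match the VIP's subnet (assumed subnet)

$ # Confirm the interface is actually up and plumbed at OS level
$ ifconfig eri0

# If the recorded VIP/netmask/interface is wrong, correct it and restart nodeapps (run as root)
# srvctl modify nodeapps -n dbtest2 -A 10.11.11.198/255.255.255.0/eri0    # assumed netmask
# srvctl start nodeapps -n dbtest2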

Rebuild a corrupt ASM disk

I want to share my experience from adding a new node (node3) to an existing two-node RAC environment (node1 and node2). Due to a wrong disk mapping on node3, one ASM disk group got corrupted. We were using external redundancy.

Our existing RAC nodes were node1 and node2. During root.sh, while the node3 entry was being added to the OCR, the OCR disk (/dev/rhdisk5) on node3 was not pointing to the actual OCR disk (/dev/rhdisk5) on node1 and node2. It was pointing to an ASM disk (/dev/rhdisk3) that was part of the ASMDB2 disk group on node1 and node2.

ANALYSIS
--------------
Check: dd if=/dev/rhdisk5 bs=8192 count=1 | od -x
On node3 this showed different contents from the same disk on node1 and node2, so the OCR device on node3 was not pointing to the correct OCR disk used by node1 and node2.

Solution:
For the ASM disk part: I identified all the disk paths of the ASMDB2 disk group; these were /dev/rhdisk3, /dev/rhdisk2 and /dev/rhdisk1 (by querying v$asm_disk).
Note: we should know all the disks used by ASMDB2. Care should be taken at thi
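For anyone repeating this kind of check, the two building blocks are comparing raw device headers across nodes and mapping device paths to disk groups inside ASM. The device names below match the ones in the post; the SQL is a generic v$asm_disk / v$asm_diskgroup join I am adding for illustration, not taken from the original.

$ # Compare the first block of the device on each node; the output must be identical on all nodes
$ dd if=/dev/rhdisk5 bs=8192 count=1 | od -x | head

-- From the ASM instance, list which device paths belong to which disk group
SQL> select g.name disk_group, d.path, d.header_status
       from v$asm_disk d, v$asm_diskgroup g
      where d.group_number = g.group_number
      order by g.name, d.path;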