ORACLE 10.2.0.5 RAC upgrade to 11.2.0.3 (Part 2): Migrating the OCR and VOTING DISK to ASM
ORACLE 10.2.0.5 RAC upgrade to 11.2.0.3 (Part 3): DATABASE upgrade
Before planning an upgrade to 11.2.0.3, be sure to read Things to Consider Before Upgrading to 11.2.0.3 Grid Infrastructure/ASM [ID 1363369.1]. That note tells you which patches your source version requires before upgrading to 11.2.0.3:
| Source Oracle Clusterware version | Minimum source version | Required PSU or one-off patch |
|---|---|---|
| 10.2.0.4 | n/a | PATCH 8705958 – 10.2.0.4.2 CRS PSU |
| 10.2.0.5 | n/a | PATCH 9952245 – 10.2.0.5.2 CRS PSU |
| 11.1.0.7 | n/a | PATCH 11724953 – 11.1.0.7.7 CRS PSU |
| 11.2.0.1 | n/a | PATCH 9413827, or PATCH 9706490 on AIX, Solaris or HP |
| 11.2.0.2 | 11.2.0.2 GI PSU1 (which includes database PSU1) | PATCH 12539000 (see note below) |

Note on PATCH 12539000: the patch must be applied to the GI home and is recommended for 11.2.0.2 database homes. It is available on top of all DB PSUs that are part of the corresponding GI PSU. It is recommended to apply it with "opatch auto" using the latest 11.2 opatch (patch 6880880) as root:

# $GRID_HOME/OPatch/opatch auto <PATH_TO_PATCH_DIRECTORY> -oh <GRID_HOME>
# $GRID_HOME/OPatch/opatch auto <PATH_TO_PATCH_DIRECTORY> -oh <DB_HOME>

When opatch asks the following question, enter 'yes' without quotes:

Enter 'yes' if you have unzipped this patch to an empty directory to proceed (yes/no): yes

If opatch auto fails, apply it manually:

1. Stop all databases manually as the database user:
   <DB_HOME>/bin/srvctl stop database -d <dbname>
2. Unlock the GI home as root:
   For a GI cluster environment: # $GRID_HOME/crs/install/rootcrs.pl -unlock
   For GI standalone (Oracle Restart): # $GRID_HOME/crs/install/roothas.pl -unlock
3. Apply the patch to the GI and database homes:
   As the grid user: $ $GRID_HOME/OPatch/opatch napply -oh <GRID_HOME> -local <UNZIPPED_PATH_LOCATION>/12539000
   As the database user: $ <DB_HOME>/OPatch/opatch napply -oh <DB_HOME> -local <UNZIPPED_PATH_LOCATION>/12539000
4. Lock the GI home as root (the same script applies to both GI cluster and GI standalone environments):
   # $GRID_HOME/rdbms/install/rootadd_rdbms.sh
5. Start all databases manually as the database user:
   <DB_HOME>/bin/srvctl start database -d <dbname>

As of 1/15/2012, the "manual steps to apply" section of the patch readme on top of PSU1/2/3 is wrong, and the patch readme on top of PSU4/5 (11.2.0.2.4/5) is wrong.
My 10.2.0.5 RAC environment looks like this:
[oracle@rac1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1
ora.rac1.vip   application    0/0    0/0    ONLINE    ONLINE    rac1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2
ora.rac2.vip   application    0/0    0/0    ONLINE    ONLINE    rac2
ora.racdb.db   application    0/0    0/1    ONLINE    ONLINE    rac1
ora....b1.inst application    0/5    0/0    ONLINE    ONLINE    rac1
ora....b2.inst application    0/5    0/0    ONLINE    ONLINE    rac2
The OCR and voting disk are configured as follows:
[oracle@rac1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     521836
         Used space (kbytes)      :       3816
         Available space (kbytes) :     518020
         ID                       : 1024751843
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File not configured
         Cluster registry integrity check succeeded
[oracle@rac1 ~]$
[oracle@rac1 ~]$ crsctl query css votedisk
 0.     0    /dev/raw/raw2
located 1 votedisk(s).
1. Configure the prerequisites for the 11g RAC installation.
Install the required packages:
rpm -Uvh binutils-2.*
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh glibc-devel-2.*
rpm -Uvh glibc-headers-2.*
rpm -Uvh ksh-2*
rpm -Uvh libaio-0.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh make-3.*
rpm -Uvh sysstat-7.*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
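With this many packages it is easy to miss one on a node. A small sketch (the helper name `missing_pkgs` is my own) that compares the output of `rpm -qa` against the list above and prints whatever is absent:

```shell
# Report which required packages are missing, given `rpm -qa` output on stdin.
missing_pkgs() {
  required="binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh libaio \
libaio-devel libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel"
  installed=$(cat)              # full `rpm -qa` listing from stdin
  for p in $required; do
    # a package is present if some line starts with "name-<digit>"
    echo "$installed" | grep -q "^$p-[0-9]" || echo "$p"
  done
}
```

Run it on each node as `rpm -qa | missing_pkgs`; an empty result means every package in the list is installed.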
On both nodes, set the kernel parameters in /etc/sysctl.conf:
kernel.shmall = 2097152
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048586
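After editing the file, load the values with `sysctl -p`. To double-check that the file actually contains what the pre-upgrade check will look for, a minimal sketch (the helper `check_param` is my own) that compares one entry against its expected value:

```shell
# check_param <conf-file> <parameter> <expected-value>
# Reads the last matching "name = value" line and compares it to the expectation.
check_param() {
  conf=$1; name=$2; expected=$3
  actual=$(grep "^$name[[:space:]]*=" "$conf" | tail -1 \
    | cut -d= -f2- | tr -s ' \t' ' ' | sed 's/^ //;s/ *$//')
  if [ "$actual" = "$expected" ]; then
    echo "$name ok"
  else
    echo "$name MISMATCH (found: ${actual:-unset})"
  fi
}
```

Multi-valued parameters work too, e.g. `check_param /etc/sysctl.conf kernel.sem "250 32000 100 128"`.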
Set the shell limits in /etc/security/limits.conf:
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
Add the following to /etc/pam.d/login:
session required pam_limits.so
Add the required groups and update the oracle user's group memberships:
[root@rac1 Server]# id oracle
uid=200(oracle) gid=200(oinstall) groups=200(oinstall),201(dba)
[root@rac1 Server]# groupadd -g 1020 asmadmin
[root@rac1 Server]# groupadd -g 1021 asmdba
[root@rac1 Server]# groupadd -g 1022 asmoper
[root@rac1 Server]# groupadd -g 1032 oper
[root@rac1 oracle]# usermod -g oinstall -G dba,oper,asmadmin,asmdba,asmoper oracle
[root@rac1 oracle]# id oracle
uid=200(oracle) gid=200(oinstall) groups=200(oinstall),201(dba),1020(asmadmin),1021(asmdba),1022(asmoper),1032(oper)
Create the directories:
[root@rac1 ~]# mkdir -p /oracle/app/11.2.0/grid
[root@rac1 ~]# mkdir -p /oracle/app/oracle/product/11.2.0/db_1
[root@rac1 ~]# chown -R oracle:oinstall /oracle
Disable the NTP service:
#service ntpd stop
#chkconfig ntpd off
#mv /etc/ntp.conf /etc/ntp.conf.orig
#rm /var/run/ntpd.pid
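Cluster Time Synchronization Service (CTSS) takes over only when NTP is fully removed, and the later cluvfy run checks for leftovers. A quick sketch (the helper `ntp_leftovers` is my own; it takes an alternate root directory so it can be exercised safely) that flags anything still in place:

```shell
# Print a line for each NTP artifact that would make the checks complain.
# Pass "" (or nothing) to inspect the real system, or a directory to inspect a copy.
ntp_leftovers() {
  root=${1:-}
  [ -e "$root/etc/ntp.conf" ] && echo "ntp.conf still present"
  [ -e "$root/var/run/ntpd.pid" ] && echo "ntpd.pid still present"
  return 0
}
```

An empty result on both nodes means CTSS can run in active mode.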
Set up the oracle user's environment variables. Because this is an 11g R2 RAC upgrade, creating a separate grid user would make the upgrade fail, so we only configure the oracle user. To let the oracle user switch between environments easily, we first create a base profile holding the common settings, and then create two files, grid_env and db_env, holding the settings that differ, so switching is just a matter of sourcing one of them.
[oracle@rac1 ~]$ cp .bash_profile .11bash_profile
[oracle@rac1 ~]$ vi .11bash_profile
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=rac1.localdomain; export ORACLE_HOSTNAME
ORACLE_BASE=/oracle/app/oracle; export ORACLE_BASE
GRID_HOME=/oracle/app/11.2.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/11.2.0/db_1; export DB_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_SID=racdb1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi

alias grid_env='. /home/oracle/grid_env'
alias db_env='. /home/oracle/db_env'
grid_env contains:
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
db_env contains:
ORACLE_SID=racdb1; export ORACLE_SID
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
Upload the installation media and install the cvuqdisk package, which ships in the media under grid/rpm:
[root@rac2 tmp]# rpm -Uvh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
2. Run the pre-installation verification.
[oracle@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -n rac1,rac2 -rolling -src_crshome /oracle/app/oracle/product/10.2.0/crs_1 -dest_crshome /oracle/app/11.2.0/grid -dest_version 11.2.0.3.0 -fixup -fixupdir /home/oracle/fixupscript -verbose

Performing pre-checks for cluster services setup

Checking node reachability...
Check: Node reachability from node "rac1"
  Destination Node   Reachable?
  rac2               yes
  rac1               yes
Result: Node reachability check passed from node "rac1"

Checking user equivalence...
Check: User equivalence for user "oracle"
  Node Name   Status
  rac2        passed
  rac1        passed
Result: User equivalence check passed for user "oracle"

Checking CRS user consistency
Result: CRS user consistency check successful

Checking node connectivity...
Checking hosts config file...
  Node Name   Status
  rac2        passed
  rac1        passed
Verification of the hosts config file successful

Interface information for node "rac2"
  Name  IP Address      Subnet        Gateway  Def. Gateway  HW Address         MTU
  eth0  192.168.56.136  192.168.56.0  0.0.0.0  10.10.10.1    08:00:27:B5:0E:93  1500
  eth0  192.168.56.138  192.168.56.0  0.0.0.0  10.10.10.1    08:00:27:B5:0E:93  1500
  eth1  10.10.10.102    10.10.10.0    0.0.0.0  10.10.10.1    08:00:27:F9:CC:36  1500

Interface information for node "rac1"
  Name  IP Address      Subnet        Gateway  Def. Gateway  HW Address         MTU
  eth0  192.168.56.135  192.168.56.0  0.0.0.0  10.10.10.1    08:00:27:96:59:13  1500
  eth0  192.168.56.137  192.168.56.0  0.0.0.0  10.10.10.1    08:00:27:96:59:13  1500
  eth1  10.10.10.101    10.10.10.0    0.0.0.0  10.10.10.1    08:00:27:35:C4:E6  1500

Check: Node connectivity for interface "eth0"
  Source                Destination           Connected?
  rac2[192.168.56.136]  rac2[192.168.56.138]  yes
  rac2[192.168.56.136]  rac1[192.168.56.135]  yes
  rac2[192.168.56.136]  rac1[192.168.56.137]  yes
  rac2[192.168.56.138]  rac1[192.168.56.135]  yes
  rac2[192.168.56.138]  rac1[192.168.56.137]  yes
  rac1[192.168.56.135]  rac1[192.168.56.137]  yes
Result: Node connectivity passed for interface "eth0"

Check: TCP connectivity of subnet "192.168.56.0"
  Source               Destination          Connected?
  rac1:192.168.56.135  rac2:192.168.56.136  passed
  rac1:192.168.56.135  rac2:192.168.56.138  passed
  rac1:192.168.56.135  rac1:192.168.56.137  passed
Result: TCP connectivity check passed for subnet "192.168.56.0"

Check: Node connectivity for interface "eth1"
  rac2[10.10.10.102]  rac1[10.10.10.101]  yes
Result: Node connectivity passed for interface "eth1"

Check: TCP connectivity of subnet "10.10.10.0"
  rac1:10.10.10.101  rac2:10.10.10.102  passed
Result: TCP connectivity check passed for subnet "10.10.10.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "10.10.10.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed

Checking multicast communication...
Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "10.10.10.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.10.10.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.

Checking OCR integrity...
Check for compatible storage device for OCR location "/dev/raw/raw1"...
Checking OCR device "/dev/raw/raw1" for sharedness...
OCR device "/dev/raw/raw1" is shared...
Checking size of the OCR location "/dev/raw/raw1" ...
Size check for OCR location "/dev/raw/raw1" successful...
OCR integrity check passed

Checking ASMLib configuration.
  rac2  passed
  rac1  passed
Result: Check for ASMLib configuration passed.

Check: Total memory
  rac2  1.9642GB (2059632.0KB)  1.5GB (1572864.0KB)  passed
  rac1  1.9642GB (2059632.0KB)  1.5GB (1572864.0KB)  passed
Result: Total memory check passed

Check: Available memory
  rac2  1.2185GB (1277644.0KB)  50MB (51200.0KB)  passed
  rac1  1.1651GB (1221676.0KB)  50MB (51200.0KB)  passed
Result: Available memory check passed

Check: Swap space
  rac2  4.0064GB (4200988.0KB)  2.9463GB (3089448.0KB)  passed
  rac1  4.0064GB (4200988.0KB)  2.9463GB (3089448.0KB)  passed
Result: Swap space check passed

Check: Free disk space for "rac2:/oracle/app/11.2.0/grid,rac2:/tmp"
  Path                     Node  Mount point  Available  Required  Status
  /oracle/app/11.2.0/grid  rac2  /            8.3916GB   7.5GB     passed
  /tmp                     rac2  /            8.3916GB   7.5GB     passed
Result: Free disk space check passed for "rac2:/oracle/app/11.2.0/grid,rac2:/tmp"

Check: Free disk space for "rac1:/oracle/app/11.2.0/grid,rac1:/tmp"
  Path                     Node  Mount point  Available  Required  Status
  /oracle/app/11.2.0/grid  rac1  /            6.4582GB   7.5GB     failed
  /tmp                     rac1  /            6.4582GB   7.5GB     failed
Result: Free disk space check failed for "rac1:/oracle/app/11.2.0/grid,rac1:/tmp"

Check: User existence for "oracle"
  rac2  passed  exists(200)
  rac1  passed  exists(200)
Checking for multiple users with UID value 200
Result: Check for multiple users with UID value 200 passed
Result: User existence check passed for "oracle"

Check: Group existence for "oinstall"
  rac2  passed  exists
  rac1  passed  exists
Result: Group existence check passed for "oinstall"

Check: Membership of user "oracle" in group "oinstall" [as Primary]
  Node  User Exists  Group Exists  User in Group  Primary  Status
  rac2  yes          yes           yes            yes      passed
  rac1  yes          yes           yes            yes      passed
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed

Check: Run level
  rac2  5  3,5  passed
  rac1  5  3,5  passed
Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"
  rac2  hard  65536  65536  passed
  rac1  hard  65536  65536  passed
Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"
  rac2  soft  1024  1024  passed
  rac1  soft  1024  1024  passed
Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"
  rac2  hard  16384  16384  passed
  rac1  hard  16384  16384  passed
Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"
  rac2  soft  2047  2047  passed
  rac1  soft  2047  2047  passed
Result: Soft limits check passed for "maximum user processes"

There are no oracle patches required for home "/oracle/app/oracle/product/10.2.0/crs_1".
There are no oracle patches required for home "/oracle/app/11.2.0/grid".

Check: System architecture
  rac2  x86_64  x86_64  passed
  rac1  x86_64  x86_64  passed
Result: System architecture check passed

Check: Kernel version
  rac2  2.6.18-164.el5  2.6.18  passed
  rac1  2.6.18-164.el5  2.6.18  passed
Result: Kernel version check passed

Check: Kernel parameter for "semmsl"
  rac2  250  250  250  passed
  rac1  250  250  250  passed
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns"
  rac2  32000  32000  32000  passed
  rac1  32000  32000  32000  passed
Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm"
  rac2  100  100  100  passed
  rac1  100  100  100  passed
Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni"
  rac2  128  128  128  passed
  rac1  128  128  128  passed
Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax"
  rac2  68719476736  68719476736  1054531584  passed
  rac1  68719476736  68719476736  1054531584  passed
Result: Kernel parameter check passed for "shmmax"

Check: Kernel parameter for "shmmni"
  rac2  4096  4096  4096  passed
  rac1  4096  4096  4096  passed
Result: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "shmall"
  rac2  2097152  2097152  2097152  passed
  rac1  2097152  2097152  2097152  passed
Result: Kernel parameter check passed for "shmall"

Check: Kernel parameter for "file-max"
  rac2  6815744  6815744  6815744  passed
  rac1  6815744  6815744  6815744  passed
Result: Kernel parameter check passed for "file-max"

Check: Kernel parameter for "ip_local_port_range"
  rac2  between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
  rac1  between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
Result: Kernel parameter check passed for "ip_local_port_range"

Check: Kernel parameter for "rmem_default"
  rac2  262144  262144  262144  passed
  rac1  262144  262144  262144  passed
Result: Kernel parameter check passed for "rmem_default"

Check: Kernel parameter for "rmem_max"
  rac2  4194304  4194304  4194304  passed
  rac1  4194304  4194304  4194304  passed
Result: Kernel parameter check passed for "rmem_max"

Check: Kernel parameter for "wmem_default"
  rac2  262144  262144  262144  passed
  rac1  262144  262144  262144  passed
Result: Kernel parameter check passed for "wmem_default"

Check: Kernel parameter for "wmem_max"
  rac2  1048586  1048586  1048576  passed
  rac1  1048586  1048586  1048576  passed
Result: Kernel parameter check passed for "wmem_max"

Check: Kernel parameter for "aio-max-nr"
  rac2  1048576  1048576  1048576  passed
  rac1  1048576  1048576  1048576  passed
Result: Kernel parameter check passed for "aio-max-nr"

Check: Package existence for "make"
  rac2  make-3.81-3.el5  make-3.81  passed
  rac1  make-3.81-3.el5  make-3.81  passed
Result: Package existence check passed for "make"

Check: Package existence for "binutils"
  rac2  binutils-2.17.50.0.6-12.el5  binutils-2.17.50.0.6  passed
  rac1  binutils-2.17.50.0.6-12.el5  binutils-2.17.50.0.6  passed
Result: Package existence check passed for "binutils"

Check: Package existence for "gcc(x86_64)"
  rac2  gcc(x86_64)-4.1.2-46.el5  gcc(x86_64)-4.1.2  passed
  rac1  gcc(x86_64)-4.1.2-46.el5  gcc(x86_64)-4.1.2  passed
Result: Package existence check passed for "gcc(x86_64)"

Check: Package existence for "libaio(x86_64)"
  rac2  libaio(x86_64)-0.3.106-3.2  libaio(x86_64)-0.3.106  passed
  rac1  libaio(x86_64)-0.3.106-3.2  libaio(x86_64)-0.3.106  passed
Result: Package existence check passed for "libaio(x86_64)"

Check: Package existence for "glibc(x86_64)"
  rac2  glibc(x86_64)-2.5-42  glibc(x86_64)-2.5-24  passed
  rac1  glibc(x86_64)-2.5-42  glibc(x86_64)-2.5-24  passed
Result: Package existence check passed for "glibc(x86_64)"

Check: Package existence for "compat-libstdc++-33(x86_64)"
  rac2  compat-libstdc++-33(x86_64)-3.2.3-61  compat-libstdc++-33(x86_64)-3.2.3  passed
  rac1  compat-libstdc++-33(x86_64)-3.2.3-61  compat-libstdc++-33(x86_64)-3.2.3  passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"

Check: Package existence for "elfutils-libelf(x86_64)"
  rac2  elfutils-libelf(x86_64)-0.137-3.el5  elfutils-libelf(x86_64)-0.125  passed
  rac1  elfutils-libelf(x86_64)-0.137-3.el5  elfutils-libelf(x86_64)-0.125  passed
Result: Package existence check passed for "elfutils-libelf(x86_64)"

Check: Package existence for "elfutils-libelf-devel"
  rac2  elfutils-libelf-devel-0.137-3.el5  elfutils-libelf-devel-0.125  passed
  rac1  elfutils-libelf-devel-0.137-3.el5  elfutils-libelf-devel-0.125  passed
Result: Package existence check passed for "elfutils-libelf-devel"

Check: Package existence for "glibc-common"
  rac2  glibc-common-2.5-42  glibc-common-2.5  passed
  rac1  glibc-common-2.5-42  glibc-common-2.5  passed
Result: Package existence check passed for "glibc-common"

Check: Package existence for "glibc-devel(x86_64)"
  rac2  glibc-devel(x86_64)-2.5-42  glibc-devel(x86_64)-2.5  passed
  rac1  glibc-devel(x86_64)-2.5-42  glibc-devel(x86_64)-2.5  passed
Result: Package existence check passed for "glibc-devel(x86_64)"

Check: Package existence for "glibc-headers"
  rac2  glibc-headers-2.5-42  glibc-headers-2.5  passed
  rac1  glibc-headers-2.5-42  glibc-headers-2.5  passed
Result: Package existence check passed for "glibc-headers"

Check: Package existence for "gcc-c++(x86_64)"
  rac2  gcc-c++(x86_64)-4.1.2-46.el5  gcc-c++(x86_64)-4.1.2  passed
  rac1  gcc-c++(x86_64)-4.1.2-46.el5  gcc-c++(x86_64)-4.1.2  passed
Result: Package existence check passed for "gcc-c++(x86_64)"

Check: Package existence for "libaio-devel(x86_64)"
  rac2  libaio-devel(x86_64)-0.3.106-3.2  libaio-devel(x86_64)-0.3.106  passed
  rac1  libaio-devel(x86_64)-0.3.106-3.2  libaio-devel(x86_64)-0.3.106  passed
Result: Package existence check passed for "libaio-devel(x86_64)"

Check: Package existence for "libgcc(x86_64)"
  rac2  libgcc(x86_64)-4.1.2-46.el5  libgcc(x86_64)-4.1.2  passed
  rac1  libgcc(x86_64)-4.1.2-46.el5  libgcc(x86_64)-4.1.2  passed
Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)"
  rac2  libstdc++(x86_64)-4.1.2-46.el5  libstdc++(x86_64)-4.1.2  passed
  rac1  libstdc++(x86_64)-4.1.2-46.el5  libstdc++(x86_64)-4.1.2  passed
Result: Package existence check passed for "libstdc++(x86_64)"

Check: Package existence for "libstdc++-devel(x86_64)"
  rac2  libstdc++-devel(x86_64)-4.1.2-46.el5  libstdc++-devel(x86_64)-4.1.2  passed
  rac1  libstdc++-devel(x86_64)-4.1.2-46.el5  libstdc++-devel(x86_64)-4.1.2  passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat"
  rac2  sysstat-7.0.2-3.el5  sysstat-7.0.2  passed
  rac1  sysstat-7.0.2-3.el5  sysstat-7.0.2  passed
Result: Package existence check passed for "sysstat"

Check: Package existence for "ksh"
  rac2  ksh-20080202-14.el5  ksh-20060214  passed
  rac1  ksh-20080202-14.el5  ksh-20060214  passed
Result: Package existence check passed for "ksh"

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed

Check: Current group ID
Result: Current group ID check passed

Starting check for consistency of primary group of root user
  rac2  passed
  rac1  passed
Check for consistency of root user's primary group passed

Check: Package existence for "cvuqdisk"
  rac2  cvuqdisk-1.0.9-1  cvuqdisk-1.0.9-1  passed
  rac1  cvuqdisk-1.0.9-1  cvuqdisk-1.0.9-1  passed
Result: Package existence check passed for "cvuqdisk"

Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running
Result: Clock synchronization check using Network Time Protocol(NTP) passed

Checking Core file name pattern consistency...
Core file name pattern consistency check passed.

Checking to make sure user "oracle" is not in "root" group
  rac2  passed  does not exist
  rac1  passed  does not exist
Result: User "oracle" is not part of "root" group. Check passed

Check default user file creation mask
  rac2  0022  0022  passed
  rac1  0022  0022  passed
Result: Default user file creation mask check passed

Checking consistency of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "localdomain" as found on node "rac2"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
  rac2  passed
  rac1  passed
The DNS response time for an unreachable node is within acceptable limit on all nodes
File "/etc/resolv.conf" is consistent across nodes

UDev attributes check for OCR locations started...
Checking udev settings for device "/dev/raw/raw1"
  Device  Owner  Group     Permissions  Result
  raw[1]  root   oinstall  640          passed
Node: rac1
PRVF-5184 : Check of following Udev attributes of "rac1:/dev/raw/raw1" failed: "[Owner: Found='oracle' Expected='root']"
Result: UDev attributes check failed for OCR locations

UDev attributes check for Voting Disk locations started...
Checking udev settings for device "/dev/raw/raw2"
  Device    Owner   Group     Permissions  Result
  raw[2-5]  oracle  oinstall  640          passed
  raw[2-5]  oracle  oinstall  640          passed
Result: UDev attributes check passed for Voting Disk locations

Check: Time zone consistency
Result: Time zone consistency check passed

Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Clusterware version consistency passed

Pre-check for cluster services setup was unsuccessful.
Checks did not pass for the following node(s):
        rac1
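Output this long is easier to triage by filtering for failures before reading it line by line. A minimal sketch (the helper `cluvfy_failures` is my own; the patterns are the markers cluvfy uses in its output):

```shell
# Keep only the lines of a cluvfy run that signal a problem.
cluvfy_failures() {
  grep -E 'failed|FAILED|PRVF-|unsuccessful'
}
# Usage: ./runcluvfy.sh stage -pre crsinst ... -verbose | tee cluvfy.log
#        cluvfy_failures < cluvfy.log
```

Note that `grep` exits non-zero when nothing matches, which conveniently doubles as a pass/fail exit code.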
Two problems show up here:
a. My disk does not have enough free space; 7.5 GB is required.
b. On node 1, /dev/raw/raw1 is not owned by root; node 2 is fine:
Node: rac1
PRVF-5184 : Check of following Udev attributes of "rac1:/dev/raw/raw1" failed: "[Owner: Found='oracle' Expected='root']"
This requires changing the udev settings and restarting the udev service; restarting udev does not affect CRS.
[root@rac1 ~]# /sbin/udevcontrol reload_rules
[root@rac1 ~]# /sbin/start_udev
Starting udev: [ OK ]
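The rule change itself depends on how your raw devices are bound. On RHEL 5 the permissions for raw devices typically live in a rules file under /etc/udev/rules.d; a hedged example of what the entries might look like (the file name 60-raw.rules and the KERNEL match strings are assumptions, adjust them to your own binding scheme):

```
# /etc/udev/rules.d/60-raw.rules (example; adjust to your environment)
# OCR device: cluvfy expects owner root, group oinstall, mode 0640
ACTION=="add", KERNEL=="raw1", OWNER="root", GROUP="oinstall", MODE="0640"
# Voting disk: owner oracle, group oinstall, mode 0640
ACTION=="add", KERNEL=="raw2", OWNER="oracle", GROUP="oinstall", MODE="0640"
```

After editing, reload the rules and restart udev as shown above, then confirm with `ls -l /dev/raw/raw1`.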
3. Perform the installation and upgrade as the oracle user.
First source the new profile and switch to the grid environment before launching the installer:
[oracle@rac1 ~]$ source .11bash_profile
[oracle@rac1 ~]$ grid_env
[oracle@rac1 tmp]$ echo $ORACLE_HOME
/oracle/app/11.2.0/grid
[oracle@rac1 tmp]$ echo $ORACLE_SID
+ASM1
In the installer, choose the third option: Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management.
Run rootupgrade.sh on node 1:
[root@rac1 oracle]# /oracle/app/11.2.0/grid/rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /oracle/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Replacing Clusterware entries in inittab
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Then run rootupgrade.sh on node 2:
[root@rac2 grid]# /oracle/app/11.2.0/grid/rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /oracle/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Replacing Clusterware entries in inittab
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Start upgrade invoked..
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Oracle Clusterware operating version was successfully set to 11.2.0.3.0
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
If the udev attribute problem is not fixed beforehand, the prerequisite check fails with the error below. So for a clean, error-free installation, make sure the udev attributes are corrected in advance.
PRVF-5184 : Check of following Udev attributes of "rac1:/dev/raw/raw1" failed: "[Owner: Found='oracle' Expected='root']"
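The PRVF-5184 message says the check expects /dev/raw/raw1 (the OCR device) to be owned by root rather than oracle. On this kind of setup the ownership is typically fixed in the udev rules for the raw devices; the entries below are only a sketch of what such a rule could look like, since the actual rules file name, raw-device bindings, and group/mode are not shown in this post and only the expected owner comes from the error message:

```
# Sketch of entries in a udev rules file such as /etc/udev/rules.d/60-raw.rules
# (file name, group and mode are assumptions; the expected owner 'root' for the
# OCR raw device is taken from the PRVF-5184 message above).
ACTION=="add", KERNEL=="raw1", OWNER="root",   GROUP="oinstall", MODE="0640"
ACTION=="add", KERNEL=="raw2", OWNER="oracle", GROUP="oinstall", MODE="0640"
```

After changing the rules, the devices have to be re-triggered (or the node rebooted) for the new ownership to take effect before rerunning the check.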
4. Check after the upgrade: the OCR and voting disk are still on raw devices.
[oracle@rac1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     517828
         Used space (kbytes)      :       6212
         Available space (kbytes) :     511616
         ID                       : 1423844012
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
                                    Device/File not configured
         Cluster registry integrity check succeeded
[oracle@rac1 ~]$ crsctl query css votedisk
 0.     0    /dev/raw/raw2
located 1 votedisk(s).
[oracle@rac1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.DATA.dg    ora....up.type 0/5    0/     ONLINE    ONLINE    rac1
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    rac1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    rac2
ora....N2.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    rac1
ora....N3.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    rac1
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    rac1
ora.cvu        ora.cvu.type   0/5    0/0    ONLINE    ONLINE    rac2
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE
ora....network ora....rk.type 1/5    0/     ONLINE    ONLINE    rac1
ora.oc4j       ora.oc4j.type  0/1    0/2    ONLINE    ONLINE    rac2
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    rac1
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    0/5    0/0    OFFLINE   OFFLINE
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1
ora.rac1.vip   ora....t1.type 0/0    0/0    ONLINE    ONLINE    rac1
ora....SM2.asm application    1/5    0/0    ONLINE    ONLINE    rac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    0/5    0/0    OFFLINE   OFFLINE
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2
ora.rac2.vip   ora....t1.type 0/0    0/0    ONLINE    ONLINE    rac2
ora.racdb.db   application    0/0    0/1    ONLINE    ONLINE    rac1
ora....b1.inst application    1/5    0/0    ONLINE    ONLINE    rac1
ora....b2.inst application    1/5    0/0    ONLINE    ONLINE    rac2
ora....ry.acfs ora....fs.type 0/5    0/     ONLINE    ONLINE    rac1
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    rac2
ora.scan2.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    rac1
ora.scan3.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    rac1
[oracle@rac1 ~]$ crsctl query crs softwareversion
CRS software version on node [rac1] is [11.2.0.3.0]
[oracle@rac1 ~]$ crsctl query crs activeversion
CRS active version on the cluster is [11.2.0.3.0]
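If the post-upgrade check is to be scripted rather than eyeballed, the version can be pulled out of the crsctl output with a small amount of shell. This is only a sketch, with the live crsctl call stubbed out by the exact line captured above so the parsing itself can be shown:

```shell
# Parse the active version out of `crsctl query crs activeversion` output.
# The command output is stubbed with the line captured in this post; on a
# real cluster you would use: out=$(crsctl query crs activeversion)
out='CRS active version on the cluster is [11.2.0.3.0]'

# Extract the text between the square brackets
ver=$(printf '%s\n' "$out" | sed -n 's/.*\[\(.*\)\].*/\1/p')
echo "$ver"    # 11.2.0.3.0

if [ "$ver" = "11.2.0.3.0" ]; then
    echo "cluster is at the expected active version"
fi
```

Checking the active version (not just the software version) matters because the active version only advances once rootupgrade.sh has completed on every node.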
Note: right after the upgrade, do not remove the 10g environment variables yet. At this point the 11g CRS clusterware has already registered the database in the cluster by default, but the 11g srvctl cannot stop the 10g database; it fails with the following error:
[oracle@rac1 ~]$ srvctl stop database -d racdb
PRCD-1027 : Failed to retrieve database racdb
PRCD-1027 : Failed to retrieve database racdb
PRKP-1088 : Failed to retrieve configuration of cluster database racdb
PRKR-1078 : Database racdb of version 10.2.0.0.0 cannot be administered using current version of srvctl. Instead run srvctl from /oracle/app/oracle/product/10.2.0/db_1
At this point we have to switch back to the 10g environment variables before the database can be stopped.
[oracle@rac1 ~]$ cp .bash_profile .10bash_profile
[oracle@rac1 ~]$ source .10bash_profile
[oracle@rac1 ~]$ srvctl stop database -d racdb
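The PRKR-1078 rule above boils down to "run srvctl from the home that matches the database's registered version". Until the database itself is upgraded, a small wrapper can make that explicit; this is a hypothetical helper (the function name is invented, and the home paths are the ones shown in the logs above), shown here echoing the command it would run instead of executing it:

```shell
# Hypothetical wrapper: pick the srvctl that matches the database's
# registered version, per the PRKR-1078 message above. The home paths
# come from this post's output; srvctl_for is an invented name.
srvctl_for () {
    db_version=$1; shift
    case $db_version in
        10.*) home=/oracle/app/oracle/product/10.2.0/db_1 ;;
        11.*) home=/oracle/app/11.2.0/grid ;;
        *)    echo "unknown version: $db_version" >&2; return 1 ;;
    esac
    # Echo the command for illustration; real use would be:
    #   "$home/bin/srvctl" "$@"
    echo "$home/bin/srvctl $*"
}

srvctl_for 10.2.0.0.0 stop database -d racdb
# → /oracle/app/oracle/product/10.2.0/db_1/bin/srvctl stop database -d racdb
```

Once the database home is upgraded in part three of this series, the 10g branch (and the saved .10bash_profile) can be retired.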