The system configuration used is Oracle Linux 6.4 64-bit with VirtualBox 4.3.6: this is a 2-node (ol6twcn1/ol6twcn2) cluster running Grid Infrastructure (GI) 12.1.0.1.
The documentation I have used is Performing Rolling Upgrade of Oracle Grid Infrastructure.
All shell script steps have been run with the oracle account, which is the GI software owner (and also the Oracle Database software owner), unless otherwise stated.
$ ls -al linux*
-rw-r--r-- 1 oracle oinstall 1747043545 Jul 26 15:15 linuxamd64_12102_grid_1of2.zip
-rw-r--r-- 1 oracle oinstall  646972897 Jul 26 15:18 linuxamd64_12102_grid_2of2.zip
$ unzip linuxamd64_12102_grid_1of2.zip
$ unzip linuxamd64_12102_grid_2of2.zip
$ mkdir /u01/app/12.1.0.2
$ chown oracle:dba /u01/app/12.1.0.2
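If you want the same empty Grid home directory ready on the second node, it can be created from the first node over SSH (a small sketch, relying on the passwordless SSH user equivalence the upgrade already requires):

$ ssh ol6twcn2 "mkdir -p /u01/app/12.1.0.2 && chown oracle:dba /u01/app/12.1.0.2"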
This step took 5 minutes in my environment.
$ cd grid
$ ./runcluvfy.sh stage -pre crsinst -upgrade -src_crshome /u01/app/12.1.0.1/grid/ -dest_crshome /u01/app/12.1.0.2/grid -dest_version 12.1.0.2.0
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "ol6twcn1"
Checking user equivalence...
User equivalence check passed for user "oracle"
Check: Grid Infrastructure home writeability of path /u01/app/12.1.0.2/grid
Grid Infrastructure home check passed
Checking CRS user consistency
CRS user consistency check successful
Checking network configuration consistency.
Check for network configuration consistency passed.
Checking ASM disk size consistency
All ASM disks are correctly sized
Package existence check passed for "cvuqdisk"
Checking if default discovery string is being used by ASM
ASM discovery string "/dev/asm*" is not the default discovery string
Checking if ASM parameter file is in use by an ASM instance on the local node
ASM instance is using parameter file "+DATA/ol6twc/ASMPARAMETERFILE/registry.253.851203457" on node "ol6twcn1" on which upgrade is requested.
Checking OLR integrity...
Check of existence of OLR configuration file "/etc/oracle/olr.loc" passed
Check of attributes of OLR configuration file "/etc/oracle/olr.loc" passed
WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.
OLR integrity check passed
Checking OCR integrity...
Disks "+DATA" are managed by ASM.
OCR integrity check passed
Total memory check failed
Check failed on nodes:
ol6twcn2,ol6twcn1
Available memory check passed
Swap space check passed
Free disk space check passed for "ol6twcn2:/usr,ol6twcn2:/var,ol6twcn2:/etc,ol6twcn2:/u01/app/12.1.0.1/grid/,ol6twcn2:/sbin,ol6twcn2:/tmp"
Free disk space check passed for "ol6twcn1:/usr,ol6twcn1:/var,ol6twcn1:/etc,ol6twcn1:/u01/app/12.1.0.1/grid/,ol6twcn1:/sbin,ol6twcn1:/tmp"
Check for multiple users with UID value 54321 passed
User existence check passed for "oracle"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "oracle" in group "oinstall" [as Primary] passed
Membership check for user "oracle" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
There are no oracle patches required for home "/u01/app/12.1.0.1/grid/".
There are no oracle patches required for home "/u01/app/12.1.0.1/grid/".
Source home "/u01/app/12.1.0.1/grid/" is suitable for upgrading to version "12.1.0.2.0".
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
PRVG-1206 : Check cannot be performed for configured value of kernel parameter "panic_on_oops" on node "ol6twcn1"
PRVG-1206 : Check cannot be performed for configured value of kernel parameter "panic_on_oops" on node "ol6twcn2"
Kernel parameter check passed for "panic_on_oops"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "nfs-utils"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
Core file name pattern consistency check passed.
User "oracle" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes
"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes
checking DNS response from all servers in "/etc/resolv.conf"
Check for integrity of file "/etc/resolv.conf" passed
UDev attributes check for OCR locations started...
UDev attributes check passed for OCR locations
Time zone consistency check passed
Clusterware version consistency passed.
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking daemon "avahi-daemon" is not configured and running
Daemon not configured check passed for process "avahi-daemon"
Daemon not running check passed for process "avahi-daemon"
Starting check for /dev/shm mounted as temporary file system ...
Check for /dev/shm mounted as temporary file system passed
Starting check for /boot mount ...
Check for /boot mount passed
Starting check for zeroconf check ...
Check for zeroconf check passed
Pre-check for cluster services setup was unsuccessful on all the nodes.
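As the OLR warning above suggests, the OLR contents themselves can be verified with ocrcheck run as root from the current 12.1.0.1 Grid home (this check is not part of the captured output):

# /u01/app/12.1.0.1/grid/bin/ocrcheck -local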
I have ignored the following prerequisite failures:
INFO: ------------------List of failed Tasks------------------
INFO: *********************************************
INFO: Physical Memory: This is a prerequisite condition to test whether the system has at least 4GB (4194304.0KB) of total physical memory.
INFO: Severity:IGNORABLE
INFO: OverallStatus:VERIFICATION_FAILED
INFO: *********************************************
INFO: OS Kernel Parameter: panic_on_oops: This is a prerequisite condition to test whether the OS kernel parameter "panic_on_oops" is properly set.
INFO: Severity:IGNORABLE
INFO: OverallStatus:WARNING
INFO: -----------------End of failed Tasks List----------------
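The root script log later reports "CLSRSC-363: User ignored prerequisites during installation"; in a silent installation, ignorable failures like these are typically bypassed by adding the -ignorePrereq flag to runInstaller (an assumption about how this run proceeded, as the flag does not appear in the command shown later):

$ ./runInstaller -silent -ignorePrereq -responseFile /stage/ugc.rsp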
1. You need about 7 GB of free disk space in the GI Oracle Home file system (a quick way to verify this and the next two points is sketched after this list), otherwise you will get:
[FATAL] [INS-32021] Insufficient disk space on this volume for the selected Oracle home.
   CAUSE: The selected Oracle home was on a volume without enough disk space.
   ACTION: Choose a location for Oracle home that has enough space (minimum of 7,065MB) or free up space on the existing volume.
2. Because GI 12.1.0.2 will create a new internal database in ASM, you need about 5 GB free in the ASM diskgroup, otherwise you will get:
[FATAL] [INS-43100] Insufficient space available in the ASM diskgroup DATA.
   ACTION: Add additional disks to the diskgroup such that the total size should be at least 4,424 MB.
3. I also needed to have the cluster node names (ol6twcn1, ol6twcn2) resolvable in DNS to avoid:
INFO: Task resolv.conf Integrity: This task checks consistency of file /etc/resolv.conf file across nodes
INFO: Severity:CRITICAL
INFO: OverallStatus:OPERATION_FAILED
INFO: -----------------------------------------------
INFO: Verification Result for Node:ol6twcn2
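A quick way to verify these three points before launching the installer (a sketch, not from the original session; +ASM1 is assumed to be the local ASM instance name):

$ df -h /u01                 # ~7 GB free needed on the file system holding the new Grid home
$ . oraenv                   # answer +ASM1 at the prompt (assumed local ASM instance name)
$ asmcmd lsdg DATA           # check Free_MB/Usable_file_MB of the DATA diskgroup (~5 GB needed)
$ nslookup ol6twcn1          # both node names must resolve in DNS
$ nslookup ol6twcn2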
$ cp grid/response/grid_install.rsp ugc.rsp
In ugc.rsp I have modified the following parameters (make sure to set oracle.install.crs.config.clusterNodes
as documented in grid_install.rsp to the public host name and the corresponding VIP name for each cluster node):
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/12.1.0.2/grid
oracle.install.asm.OSDBA=dba
oracle.install.asm.OSOPER=dba
oracle.install.asm.OSASM=dba
oracle.install.crs.config.ClusterType=STANDARD
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.clusterNodes=ol6twcn1:ol6twcn1-vip,ol6twcn2:ol6twcn2-vip
This step took about 25 minutes in my environment; make sure to wait until you get the message "Successfully Setup Software.":
$ ./runInstaller -silent -responseFile /stage/ugc.rsp
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 415 MB. Actual 11847 MB Passed
Checking swap space: must be greater than 150 MB. Actual 3949 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-07-26_05-03-19PM. Please wait ...
[WARNING] [INS-41808] Possible invalid choice for OSASM Group.
CAUSE: The name of the group you selected for the OSASM group is commonly used to grant other system privileges (For example: asmdba, asmoper, dba, oper).
ACTION: Oracle recommends that you designate asmadmin as the OSASM group.
[WARNING] [INS-41809] Possible invalid choice for OSDBA Group.
CAUSE: The group name you selected as the OSDBA for ASM group is commonly used for Oracle Database administrator privileges.
ACTION: Oracle recommends that you designate asmdba as the OSDBA for ASM group, and that the group should not be the same group as an Oracle Database OSDBA group.
[WARNING] [INS-41810] Possible invalid choice for OSOPER Group.
CAUSE: The group name you selected as the OSOPER for ASM group is commonly used for Oracle Database administrator privileges.
ACTION: Oracle recommends that you designate asmoper as the OSOPER for ASM group, and that the group should not be the same group as an Oracle Database OSOPER group.
[WARNING] [INS-41813] OSDBA for ASM, OSOPER for ASM, and OSASM are the same OS group.
CAUSE: The group you selected for granting the OSDBA for ASM group for database access, and the OSOPER for ASM group for startup and shutdown of Oracle ASM, is the same group as the OSASM group, whose members have SYSASM privileges on Oracle ASM.
ACTION: Choose different groups as the OSASM, OSDBA for ASM, and OSOPER for ASM groups.
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oracle/oraInventory/logs/installActions2014-07-26_05-03-19PM.log
ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oracle/oraInventory/logs/installActions2014-07-26_05-03-19PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
You can find the log of this install session at:
/u01/app/oracle/oraInventory/logs/installActions2014-07-26_05-03-19PM.log
The installation of Oracle Grid Infrastructure 12c was successful.
Please check '/u01/app/oracle/oraInventory/logs/silentInstall2014-07-26_05-03-19PM.log' for more details.
As a root user, execute the following script(s):
1. /u01/app/12.1.0.2/grid/rootupgrade.sh
Execute /u01/app/12.1.0.2/grid/rootupgrade.sh on the following nodes:
[ol6twcn1, ol6twcn2]
Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.
Successfully Setup Software.
As install user, execute the following script to complete the configuration.
1. /u01/app/12.1.0.2/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=
Note:
1. This script must be run on the same host from where installer was run.
2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).
On the first cluster node this step took about 12 minutes:
# /u01/app/12.1.0.2/grid/rootupgrade.sh
Check /u01/app/12.1.0.2/grid/install/root_ol6twcn1.localdomain_2014-07-26_17-27-53.log for the output of root script
[root@ol6twcn1 ~]#
[root@ol6twcn1 ~]# cat /u01/app/12.1.0.2/grid/install/root_ol6twcn1.localdomain_2014-07-26_17-27-53.log
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/12.1.0.2/grid
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2014/07/26 17:27:56 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2014/07/26 17:28:29 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2014/07/26 17:28:34 CLSRSC-464: Starting retrieval of the cluster configuration data
2014/07/26 17:28:43 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2014/07/26 17:28:43 CLSRSC-363: User ignored prerequisites during installation
2014/07/26 17:28:55 CLSRSC-515: Starting OCR manual backup.
2014/07/26 17:28:58 CLSRSC-516: OCR manual backup successful.
2014/07/26 17:29:02 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2014/07/26 17:29:02 CLSRSC-482: Running command: '/u01/app/12.1.0.1/grid/bin/crsctl start rollingupgrade 12.1.0.2.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2014/07/26 17:29:09 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/12.1.0.1/grid -oldCRSVersion 12.1.0.1.0 -nodeNumber 1 -firstNode true -startRolling false'
ASM configuration upgraded in local node successfully.
2014/07/26 17:29:14 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2014/07/26 17:29:14 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2014/07/26 17:30:43 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2014/07/26 17:34:02 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2014/07/26 17:38:06 CLSRSC-472: Attempting to export the OCR
2014/07/26 17:38:06 CLSRSC-482: Running command: 'ocrconfig -upgrade oracle oinstall'
2014/07/26 17:38:11 CLSRSC-473: Successfully exported the OCR
2014/07/26 17:38:17 CLSRSC-486:
At this stage of upgrade, the OCR has changed.
Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2014/07/26 17:38:18 CLSRSC-541:
To downgrade the cluster:
1. All nodes that have been upgraded must be downgraded.
2014/07/26 17:38:18 CLSRSC-542:
2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2014/07/26 17:38:18 CLSRSC-543:
3. The downgrade command must be run on the node ol6twcn2 with the '-lastnode' option to restore global configuration data.
2014/07/26 17:38:44 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/07/26 17:38:57 CLSRSC-474: Initiating upgrade of resource types
2014/07/26 17:39:16 CLSRSC-482: Running command: 'upgrade model -s 12.1.0.1.0 -d 12.1.0.2.0 -p first'
2014/07/26 17:39:16 CLSRSC-475: Upgrade of resource types successfully initiated.
2014/07/26 17:39:23 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
#
Note that GI has been stopped and restarted.
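Before moving to the second node, the restarted stack on the first node can be checked quickly with crsctl from the new home (a sketch, not part of the captured output):

$ /u01/app/12.1.0.2/grid/bin/crsctl check crs
$ /u01/app/12.1.0.2/grid/bin/crsctl query crs softwareversion ol6twcn1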
I have run the same script on the second cluster node with the root account and it took a little less time (9 minutes):
# /u01/app/12.1.0.2/grid/rootupgrade.sh
Check /u01/app/12.1.0.2/grid/install/root_ol6twcn2.localdomain_2014-07-26_17-42-30.log for the output of root script
[root@ol6twcn2 ~]# cat /u01/app/12.1.0.2/grid/install/root_ol6twcn2.localdomain_2014-07-26_17-42-30.log
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2014/07/26 17:42:34 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2014/07/26 17:43:06 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2014/07/26 17:43:08 CLSRSC-464: Starting retrieval of the cluster configuration data
2014/07/26 17:43:15 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2014/07/26 17:43:15 CLSRSC-363: User ignored prerequisites during installation
ASM configuration upgraded in local node successfully.
2014/07/26 17:43:30 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2014/07/26 17:45:13 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2014/07/26 17:45:46 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2014/07/26 17:48:54 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/07/26 17:49:08 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2014/07/26 17:49:08 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/crsctl set crs activeversion'
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2014/07/26 17:50:27 CLSRSC-479: Successfully set Oracle Clusterware active version
2014/07/26 17:50:35 CLSRSC-476: Finishing upgrade of resource types
2014/07/26 17:50:44 CLSRSC-482: Running command: 'upgrade model -s 12.1.0.1.0 -d 12.1.0.2.0 -p last'
2014/07/26 17:50:44 CLSRSC-477: Successfully completed upgrade of resource types
2014/07/26 17:51:12 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Note that GI has been stopped and restarted.
$ crsctl query crs softwareversion
Oracle Clusterware version on node [ol6twcn2] is [12.1.0.2.0]
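The software version reflects what is installed node by node; the cluster-wide active version, which only switches to 12.1.0.2.0 after rootupgrade.sh completes on the last node, can be checked as well (not part of the captured output):

$ crsctl query crs activeversion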
$ crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE ol6twcn1 STABLE
ONLINE ONLINE ol6twcn2 STABLE
ora.FRA.dg
ONLINE ONLINE ol6twcn1 STABLE
ONLINE ONLINE ol6twcn2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE ol6twcn1 STABLE
ONLINE ONLINE ol6twcn2 STABLE
ora.asm
ONLINE ONLINE ol6twcn1 Started,STABLE
ONLINE ONLINE ol6twcn2 Started,STABLE
ora.net1.network
ONLINE ONLINE ol6twcn1 STABLE
ONLINE ONLINE ol6twcn2 STABLE
ora.ons
ONLINE ONLINE ol6twcn1 STABLE
ONLINE ONLINE ol6twcn2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE ol6twcn2 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE ol6twcn1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE ol6twcn1 STABLE
ora.MGMTLSNR
1 OFFLINE OFFLINE STABLE
ora.cdbrac.db
1 ONLINE ONLINE ol6twcn1 Open,STABLE
2 ONLINE ONLINE ol6twcn2 Open,STABLE
ora.cvu
1 ONLINE ONLINE ol6twcn1 STABLE
ora.oc4j
1 ONLINE ONLINE ol6twcn2 STABLE
ora.ol6twcn1.vip
1 ONLINE ONLINE ol6twcn1 STABLE
ora.ol6twcn2.vip
1 ONLINE ONLINE ol6twcn2 STABLE
ora.scan1.vip
1 ONLINE ONLINE ol6twcn2 STABLE
ora.scan2.vip
1 ONLINE ONLINE ol6twcn1 STABLE
ora.scan3.vip
1 ONLINE ONLINE ol6twcn1 STABLE
--------------------------------------------------------------------------------
Note that there is one new resource, ora.MGMTLSNR, which is currently offline.
I have created a configuration file p.r containing the ASM instance SYS password (S_ASMPASSWORD) and the ASMSNMP password (S_ASMMONITORPASSWORD):
$ cat p.r
oracle.assistants.asm|S_ASMPASSWORD=oracle12c
oracle.assistants.asm|S_ASMMONITORPASSWORD=oracle12c
And I have run:
$ /u01/app/12.1.0.2/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/stage/p.r
The output has a lot of warnings that you can only ignore. It also shows the creation of an internal database with DBCA:
INFO: Command /u01/app/12.1.0.2/grid/bin/dbca -silent -createDatabase -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName DATA -datafileJarLocation /u01/app/12.1.0.2/grid/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck -oui_internal
INFO: Command /u01/app/12.1.0.2/grid/bin/dbca -silent -createPluggableDatabase -sourceDB -MGMTDB -pdbName ol6twc -createPDBFrom RMANBACKUP -PDBBackUpfile /u01/app/12.1.0.2/grid/assistants/dbca/templates/mgmtseed_pdb.dfb -PDBMetadataFile /u01/app/12.1.0.2/grid/assistants/dbca/templates/mgmtseed_pdb.xml -createAsClone true -internalSkipGIHomeCheck -oui_internal
and it tells you to check another log file:
You can see the log file: /u01/app/12.1.0.2/grid/cfgtoollogs/oui/configActions2014-07-26_05-56-01-PM.log
I have checked that in this second log file each plug-in has "successfully been performed":
$ grep "plug-in" /u01/app/12.1.0.2/grid/cfgtoollogs/oui/configActions2014-07-26_05-56-01-PM.log The plug-in Update CRS flag in Inventory is running The plug-in Update CRS flag in Inventory has successfully been performed The plug-in Oracle Net Configuration Assistant is running The plug-in Oracle Net Configuration Assistant has successfully been performed The plug-in Creating Container Database for Oracle Grid Infrastructure Management Repository is running The plug-in Creating Container Database for Oracle Grid Infrastructure Management Repository has successfully been performed The plug-in Setting up Oracle Grid Infrastructure Management Repository is running The plug-in Setting up Oracle Grid Infrastructure Management Repository has successfully been performed The plug-in MGMT Configuration Assistant is running The plug-in MGMT Configuration Assistant has successfully been performed The plug-in Oracle Cluster Verification Utility is running The plug-in Oracle Cluster Verification Utility has successfully been performed $
$ crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE ol6twcn1 STABLE
ONLINE ONLINE ol6twcn2 STABLE
ora.FRA.dg
ONLINE ONLINE ol6twcn1 STABLE
ONLINE ONLINE ol6twcn2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE ol6twcn1 STABLE
ONLINE ONLINE ol6twcn2 STABLE
ora.asm
ONLINE ONLINE ol6twcn1 Started,STABLE
ONLINE ONLINE ol6twcn2 Started,STABLE
ora.net1.network
ONLINE ONLINE ol6twcn1 STABLE
ONLINE ONLINE ol6twcn2 STABLE
ora.ons
ONLINE ONLINE ol6twcn1 STABLE
ONLINE ONLINE ol6twcn2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE ol6twcn2 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE ol6twcn1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE ol6twcn1 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE ol6twcn1 169.254.143.222 192.
168.43.111,STABLE
ora.cdbrac.db
1 ONLINE ONLINE ol6twcn1 Open,STABLE
2 ONLINE ONLINE ol6twcn2 Open,STABLE
ora.cvu
1 ONLINE ONLINE ol6twcn1 STABLE
ora.mgmtdb
1 ONLINE ONLINE ol6twcn1 Open,STABLE
ora.oc4j
1 ONLINE ONLINE ol6twcn2 STABLE
ora.ol6twcn1.vip
1 ONLINE ONLINE ol6twcn1 STABLE
ora.ol6twcn2.vip
1 ONLINE ONLINE ol6twcn2 STABLE
ora.scan1.vip
1 ONLINE ONLINE ol6twcn2 STABLE
ora.scan2.vip
1 ONLINE ONLINE ol6twcn1 STABLE
ora.scan3.vip
1 ONLINE ONLINE ol6twcn1 STABLE
--------------------------------------------------------------------------------
Note that ora.MGMTLSNR is now online on the first cluster node and that a new resource, ora.mgmtdb, is also online on the first cluster node: ora.mgmtdb is a database, but Clusterware does not consider it a database resource.
Its resource type is not the same as that of the cluster database:
$ crsctl status resource ora.cdb.db
NAME=ora.cdb.db
TYPE=ora.database.type
TARGET=ONLINE , ONLINE
STATE=ONLINE on ol6twcn1, ONLINE on ol6twcn2

$ crsctl status resource ora.mgmtdb
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on ol6twcn1

$
Likewise, ora.MGMTLSNR is not considered by Clusterware to be a listener resource:
$ crsctl status resource ora.LISTENER.lsnr
NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE , ONLINE
STATE=ONLINE on ol6twcn1, ONLINE on ol6twcn2

$ crsctl status resource ora.MGMTLSNR
NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on ol6twcn1

$
This new database instance is named -MGMTDB (note the leading '-'!) on the first cluster node. Set the right environment with:
$ . oraenv
ORACLE_SID = [cdbrac1] ? -MGMTDB
The Oracle base remains unchanged with value /u01/app/oracle
And connect with SQL*Plus:
SYS@-MGMTDB>select name, cdb from v$database;

NAME      CDB
--------- ---
_MGMTDB   YES

SYS@-MGMTDB>show parameter cluster

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     FALSE
cluster_database_instances           integer     1
cluster_interconnects                string

SYS@-MGMTDB>show parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      _mgmtdb

SYS@-MGMTDB>show parameter instance_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_name                        string      -MGMTDB
The ora.MGMTLSNR listener is specific to the -MGMTDB database instance:
$ lsnrctl status MGMTLSNR

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 26-JUL-2014 18:27:17

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR)))
STATUS of the LISTENER
------------------------
Alias                     MGMTLSNR
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                26-JUL-2014 18:08:29
Uptime                    0 days 0 hr. 18 min. 48 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ol6twcn1/mgmtlsnr/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=MGMTLSNR)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.43.111)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=169.254.143.222)(PORT=1521)))
Services Summary...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "ol6twc" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
A new IP address has been allocated for this listener.
The Grid Infrastructure Management Repository (GIMR) database is now mandatory in Oracle GI 12.1.0.2.
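The repository database and its listener are managed like other cluster resources and can be checked with srvctl from the new Grid home (a sketch, assuming the 12.1.0.2 environment is set; not from the original session):

$ srvctl status mgmtdb       # status and hosting node of the -MGMTDB instance
$ srvctl config mgmtdb       # configuration of the management repository database
$ srvctl status mgmtlsnr     # status of the MGMTLSNR listener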