Configuration of Oracle RAC 11g on IBM AIX using IBM GPFS 3.5

This tutorial explains how to configure Oracle RAC using three nodes, and covers the concepts, prerequisites, hardware and software configuration, and troubleshooting tips for errors encountered during configuration. It covers everything from setting up IBM® General Parallel File System (IBM GPFS™) and configuring the grid, through installing the database software, to finally creating the database instance. Considering the demand for this configuration from various customers, this tutorial can be very useful for understanding Oracle RAC and setting up a cluster.

Tejaswini Kaujalgi (tejaswini@in.ibm.com), Staff Software Engineer, IBM

Tejaswini Kaujalgi works as a Staff Software Engineer in the IBM AIX UPT Release team, Bangalore. She has been working on AIX, Oracle, IBM PowerHA®, Security, and VIOS components on IBM Power Systems for more than six years at IBM India Software Labs. She has also worked on various customer configurations using Lightweight Directory Access Protocol (LDAP), Kerberos, role-based access control (RBAC), PowerHA and AIX on Power Systems. She has co-authored the IBM developerWorks® article on Introduction to PowerHA and Configuration of Oracle RAC 11g on IBM AIX using IBM GPFS 3.5. She has contributed to two IBM Redbooks® (Power Systems Enterprise Servers with PowerVM Virtualization and RAS and IBM CSM to IBM Systems Director Transformation Guide) on topics related to virtualization and high availability. She is an IBM certified Power Systems administrator.



05 February 2014

Introduction

Oracle Real Application Clusters (RAC) is Oracle's clustering software that allows a single database to run across multiple servers. This tutorial explains how to configure Oracle RAC using GPFS 3.5 and covers the concepts, prerequisites, and hardware and software configuration, along with screen captures.

We divide this tutorial into five parts.

We will use three logical partitions (LPARs): zag02, zag03, and zag04 for this setup.


Part 1: System preparation

  1. File set installation: After IBM AIX® (we are using AIX 7.1) is installed on the nodes, install the following file sets on all three LPARs.

    Table 1. File set installation
    File sets                                     Functionality
    dsm.core, dsm.dsh                             Required for the distributed shell (dsh) to work.
    openssh.base.client, openssh.base.server,     Required for Secure Shell (SSH) to work.
    openssl.base, openssl.man.en_US
    vnc-3.3.3r2-5.aix5.1.ppc.rpm                  Required for opening a VNC session to the nodes.
    rsct.basic.rte, rsct.compat.clients.rte,      Other required file sets.
    bos.adt.base, bos.adt.lib, bos.adt.libm,
    bos.perf.libperfstat, bos.perf.proctools

    After the file sets are installed, start configuring the systems.

  2. Ensure that the dsh between the systems is working fine.

    Distributed shell (dsh) makes it easy to run a command on all the cluster nodes at once. For dsh to work, you only need to install the dsm file sets on the node from which you run the commands, typically the first cluster node; you do not need to install them on the remaining nodes.

    Perform the following steps:

    # cat /.wcoll
    zag02
    zag03
    zag04
    
    # echo 'export WCOLL=/.wcoll' >> /.profile
    # export WCOLL=/.wcoll

    Test the dsh functionality using the date command.

    # dsh date
    zag02: Mon Dec  9 02:28:14 CST 2013
    zag03: Mon Dec  9 02:28:14 CST 2013
    zag04: Mon Dec  9 02:28:14 CST 2013
  3. Ensure that the remote shell (rsh) between the nodes is working fine. Oracle uses rsh and rcp to copy files from one node to the other.
    # dsh 'echo "+ +" > /.rhosts'
    # dsh chmod 600 /.rhosts
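
    A quick way to confirm that the rsh equivalence is in place is to run a remote command against the other nodes from zag02, for example:

    # rsh zag03 date
    # rsh zag04 date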
  4. Ensure that SSH is working fine between the nodes.

    Log on to each node and generate a public key file (id_rsa.pub) for both the Oracle and the root users.

    As the Oracle user, run the following command:

    # ssh-keygen -t rsa

    As the root user, run the following command:

    # ssh-keygen -t rsa

    On the first node (zag02):

    Append the root and Oracle users' public keys from all the nodes to both the root and the Oracle user's authorized_keys files.

    # dsh cat /.ssh/id_rsa.pub >> /.ssh/authorized_keys
    # dsh cat /.ssh/id_rsa.pub >> ~oracle/.ssh/authorized_keys
    # dsh cat ~oracle/.ssh/id_rsa.pub >> /.ssh/authorized_keys
    # dsh cat ~oracle/.ssh/id_rsa.pub >> ~oracle/.ssh/authorized_keys

    Append the root and Oracle users' authorized keys from node2 and node3 (zag03 and zag04 in this setup) to the corresponding authorized_keys files of the local node.

    # rsh node2 cat /.ssh/authorized_keys >> /.ssh/authorized_keys
    # rsh node3 cat /.ssh/authorized_keys >> /.ssh/authorized_keys
    
    # rsh node2 cat /home/oracle/.ssh/authorized_keys >> /home/oracle/.ssh/authorized_keys
    # rsh node3 cat /home/oracle/.ssh/authorized_keys >> /home/oracle/.ssh/authorized_keys

    Now, on the first node, the root user and the Oracle user have all combinations of public keys in their authorized_keys files. Copy the files to node2 and node3.

    # rcp /.ssh/authorized_keys node2:/.ssh/authorized_keys
    # rcp /.ssh/authorized_keys node3:/.ssh/authorized_keys
    # rcp /home/oracle/.ssh/authorized_keys node2:/home/oracle/.ssh/authorized_keys
    # rcp /home/oracle/.ssh/authorized_keys node3:/home/oracle/.ssh/authorized_keys

    SSH requires appropriate ownership of the authorized_keys files and no read or write permission for group and others.

    # dsh chown root /.ssh/authorized_keys
    # dsh chmod 600 /.ssh/authorized_keys
    # dsh chown oracle:dba /home/oracle/.ssh/authorized_keys
    # dsh chmod 600 /home/oracle/.ssh/authorized_keys
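
    To confirm that password-less SSH now works in every direction, a quick check such as the following can be run from each node, as both the root and the Oracle users (using the host names of this setup):

    # for node in zag02 zag03 zag04; do ssh $node date; done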
  5. Modify the /etc/hosts file.

    Add the host name details used in the setup to the /etc/hosts file as shown below. Configure the IP addresses on the respective adapters (for example, by using the smitty chinet fast path or the chdev command). We have used ent0 to configure the public IP and ent1 and ent2 to configure the private IPs.

    9.3.66.106 zag02.upt.austin.ibm.com zag02
    9.3.66.107 zag03.upt.austin.ibm.com zag03
    9.3.66.108 zag04.upt.austin.ibm.com zag04
    
    9.3.66.109 ha-vip.upt.austin.ibm.com ha-vip
    9.3.66.110 ha-vip1.upt.austin.ibm.com ha-vip1
    9.3.66.111 ha-vip2.upt.austin.ibm.com ha-vip2
    
    10.33.1.1 zag02e2 zag02e2
    10.33.1.2 zag02e3 zag02e3
    
    10.33.1.3 zag03e2 zag03e2
    10.33.1.4 zag03e3 zag03e3
    
    10.33.1.5 zag04e2 zag04e2
    10.33.1.6 zag04e3 zag04e3
    
    9.3.66.112 hacl.upt.austin.ibm.com

    First, configure name resolution to use the local /etc/hosts file before BIND/DNS:

    # dsh 'echo "hosts=local, bind">>/etc/netsvc.conf'
  6. After this is done, you need to create the Oracle user and the related groups.
    # dsh 'mkgroup -A id=1000 dba'
    # dsh 'mkgroup -A id=1001 oinstall'
    # dsh 'mkuser id="1000" pgrp="dba" groups="dba,oinstall,staff" oracle'
    # dsh chuser capabilities="CAP_PROPAGATE,CAP_BYPASS_RAC_VMM,CAP_NUMA_ATTACH" root
    # dsh chuser capabilities="CAP_PROPAGATE,CAP_BYPASS_RAC_VMM,CAP_NUMA_ATTACH" oracle
    # dsh cp -p /.rhosts /home/oracle/.rhosts
    # dsh chown oracle:dba /home/oracle/.rhosts
    # dsh chmod 600 /home/oracle/.rhosts
    # dsh chmod 600 /.rhosts

    On each node, as the root user, set the Oracle user's password to 'oracle'. When the user first logs in, the user is prompted to set a new password because the password is flagged as expired. Make sure that you log in directly as the Oracle user (and not with su - oracle from root) to change the password.
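
    To confirm that the user attributes were applied on all the nodes, a quick check with the lsuser command can be used:

    # dsh 'lsuser -a pgrp groups capabilities oracle'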

  7. Set the ulimit.

    Add the following attributes under the oracle and root user stanzas in /etc/security/limits.

    core = 2097151
    cpu = -1
    data = -1
    rss = -1
    stack = -1
    nofiles = -1
    stack_hard = -1
    
    After editing the file on the first node (zag02), copy it to the other nodes.

    # rcp -p /etc/security/limits zag03:/etc/security/limits
    # rcp -p /etc/security/limits zag04:/etc/security/limits
  8. Specify the paging space size. According to Oracle Database Installation Guide 11g Release 2 (11.2) for IBM AIX on IBM Power Systems™ (64-bit) E17162-03, PDF page 40, the specifications shown in Table 2 are the paging space requirements.

    Table 2. Paging space recommendations
    RAM                 Paging space recommendation
    4 GB to 8 GB        2 times the size of RAM
    8 GB to 32 GB       1.5 times the size of RAM
    More than 32 GB     32 GB
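
    To check the current paging space and increase it if it falls short of these recommendations, the lsps and chps commands can be used. The following is a minimal sketch, assuming the default paging space logical volume hd6 and adding eight logical partitions to it:

    # dsh lsps -a
    # chps -s 8 hd6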
  9. Then, configure some of the AIX tunables.
    # dsh chdev -l sys0 -a maxuproc=4096

    The following network options can be set by using the no command and the virtual memory options by using the vmo command. Cross-check that they have the required values; a sample command sequence follows the lists below.

    ipqmaxlen: should be 512
    rfc1323: should be 1
    sb_max: should be 1310720
    tcp_recvspace: should be 65536
    tcp_sendspace: should be 65536
    udp_recvspace: should be 655360
    udp_sendspace: should be 65536

    Virtual Memory Manager (VMM) options:

    minperm%: should be 3
    maxperm%: should be 90
    maxclient%: should be 90
    lru_file_repage: should be 0
    strict_maxclient: should be 1
    strict_maxperm: should be 0

    sys0 options:

    ncargs: should be >= 128
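
    The following is a hedged sketch of commands that apply and verify these values on all the nodes. The ipqmaxlen option is a reboot-type tunable (set with no -r), the other network options can be set dynamically and made persistent with no -p, and most of the VMM options are restricted tunables that already default to the required values on AIX 7.1, so they are only displayed here for verification:

    # dsh no -r -o ipqmaxlen=512
    # dsh no -p -o rfc1323=1 -o sb_max=1310720
    # dsh no -p -o tcp_recvspace=65536 -o tcp_sendspace=65536
    # dsh no -p -o udp_recvspace=655360 -o udp_sendspace=65536
    # dsh vmo -a -F
    # dsh lsattr -El sys0 -a ncargs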

    After the system configuration part is complete, the next step is to set up file systems. We will use GPFS in our environment.


Part 2: GPFS installation

The IBM GPFS cluster setup process can be divided into the following tasks: installing the GPFS software, creating a GPFS cluster, creating network shared disks (NSDs), and finally creating GPFS file systems on these NSDs.

Installing the GPFS software

  1. Make sure that the root user has rsh (or, for better security, SSH) equivalence among the cluster nodes. By default, GPFS uses the rsh and rcp commands to run remote commands and to copy files.
  2. Make the GPFS software file sets available on a Network File System (NFS) mount point that is accessible to all the cluster nodes, and NFS mount this directory on all cluster nodes. We are using GPFS 3.5 in this setup.
  3. Install GPFS on each node by using the installp command. The /mnt directory has all the GPFS file sets mounted over NFS. At the end, the GPFS file sets should be installed successfully, with output similar to the following on all the nodes.
    #dsh installp -a -d /mnt -X -Y ALL
    
     gpfs.docs.data              3.5.0.0         SHARE       APPLY       SUCCESS    
     gpfs.docs.data              3.5.0.1         SHARE       APPLY       SUCCESS    
     gpfs.base                   3.5.0.0         USR         APPLY       SUCCESS    
     gpfs.base                   3.5.0.0         ROOT        APPLY       SUCCESS    
     gpfs.msg.en_US              3.5.0.0         USR         APPLY       SUCCESS    
     gpfs.msg.en_US              3.5.0.1         USR         APPLY       SUCCESS    
     gpfs.base                   3.5.0.1         USR         APPLY       SUCCESS    
     gpfs.base                   3.5.0.1         ROOT        APPLY       SUCCESS
  4. Verify the GPFS software installation on each node by using the following command.
    #lslpp -l gpfs*

Creating the GPFS cluster

  1. Set the PATH environment variable to include the GPFS commands.
    #export PATH=$PATH:/usr/lpp/mmfs/bin
  2. Create a file listing node descriptors, one per line for each cluster node, in the following format.
    NodeName:NodeDesignations:AdminNodeName
    #cat nodefile
    zag02:quorum
    zag03:quorum
    zag04:quorum
  3. Use the mmcrcluster command to create the GPFS cluster definition, supplying the name of the node file created in the previous step. Also, define the primary and secondary cluster configuration server nodes where the GPFS configuration data will be maintained.
    #mmcrcluster -N nodefile -p zag02 -s zag03 -A
    Thu Jun 23 17:46:54 PDT 2011: mmcrcluster: Processing node zag02
    Thu Jun 23 17:46:55 PDT 2011: mmcrcluster: Processing node zag03
    Thu Jun 23 17:46:55 PDT 2011: mmcrcluster: Processing node zag04
    mmcrcluster: Command successfully completed
    mmcrcluster: Warning: Not all nodes have proper GPFS license designations.
        Use the mmchlicense command to designate licenses as needed.
    mmcrcluster: Propagating the cluster configuration data to all
      affected nodes.  This is an asynchronous process.
  4. Use the mmchlicense command to designate the proper license for each cluster node.
    #mmchlicense server --accept -N zag02,zag03,zag04
    The following nodes will be designated as possessing GPFS server licenses:
            zag02
            zag03
            zag04
    mmchlicense: Command successfully completed
    mmchlicense: Propagating the cluster configuration data to all
      affected nodes.  This is an asynchronous process.
  5. As we are using GPFS with Oracle, set the following tuning parameters.
    #mmchconfig prefetchThreads=100
    mmchconfig: Command successfully completed
    mmchconfig: Propagating the cluster configuration data to all
      affected nodes.  This is an asynchronous process.
    
    #mmchconfig worker1Threads=450
    mmchconfig: Command successfully completed
    mmchconfig: Propagating the cluster configuration data to all
      affected nodes.  This is an asynchronous process.
  6. Enabling the usePersistentReserve option allows GPFS to recover faster when a node fails. The disks used for the NSDs must support the SCSI-3 persistent reserve feature for GPFS to be able to use this option.
    #mmchconfig usePersistentReserve=yes
    Verifying GPFS is stopped on all nodes ...
    mmchconfig: Command successfully completed
    mmchconfig: Propagating the cluster configuration data to all
      affected nodes.  This is an asynchronous process.
    
    #mmchconfig failureDetectionTime=10
    Verifying GPFS is stopped on all nodes ...
    mmchconfig: Command successfully completed
    mmchconfig: Propagating the cluster configuration data to all
      affected nodes.  This is an asynchronous process.
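
    Before enabling persistent reserve, it can be worth checking the current AIX reservation policy of the shared disks (hdisk2 is one of the disks used for the NSDs later in this setup); this is only a verification sketch:

    #dsh lsattr -El hdisk2 -a reserve_policy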

Creating NSDs and GPFS file systems

Each physical or virtual disk that is to be used with GPFS has to be prepared as an NSD by using the mmcrnsd command. The command expects an input file containing one disk descriptor per line for each of the disks to be processed as NSDs. The disk descriptor has the following format:

DiskName:ServerList::DiskUsage:FailureGroup:DesiredName:StoragePool

DiskName   - the block device name under /dev for the disk to be used.
ServerList - a comma-separated list of NSD servers. If all cluster nodes have direct
             access to the disk, this field can be left empty; otherwise, a server
             list has to be specified.

In our environment, all the cluster nodes have direct access to the disks, so we left the ServerList field blank.

#cat diskfile
hdisk2:::::gfsdb:
hdisk3:::::gfsvote1:
hdisk4:::::gfsvote2:
hdisk5:::::gfsvote3:
hdisk6:::::gfsocr1:
hdisk7:::::gfsocr2:
hdisk8:::::gfsocr3:

Use the mmcrnsd command to create the NSDs.

#mmcrnsd -F diskfile
mmcrnsd: Processing disk hdisk2
mmcrnsd: Processing disk hdisk3
mmcrnsd: Processing disk hdisk4
mmcrnsd: Processing disk hdisk5
mmcrnsd: Processing disk hdisk6
mmcrnsd: Processing disk hdisk7
mmcrnsd: Processing disk hdisk8

mmcrnsd: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.

#mmlsnsd

File system   Disk name    NSD servers                                    
---------------------------------------------------------------------------
(free disk)   gfsdb       (directly attached)      
(free disk)   gfsvote1    (directly attached)      
(free disk)   gfsvote2    (directly attached)      
(free disk)   gfsvote3    (directly attached)      
(free disk)   gfsocr1     (directly attached)
(free disk)   gfsocr2     (directly attached)  
(free disk)   gfsocr3     (directly attached)

The mmcrnsd command rewrites the disk descriptor file so that it can be used as an input file for the mmcrfs command.

#cat diskfile
# hdisk2:::::gfsdb:
gfsdb:::dataAndMetadata:-1::
# hdisk3:::::gfsvote1:
gfsvote1:::dataAndMetadata:-1::
# hdisk4:::::gfsvote2:
gfsvote2:::dataAndMetadata:-1::
# hdisk5:::::gfsvote3:
gfsvote3:::dataAndMetadata:-1::
# hdisk6:::::gfsocr1:
gfsocr1:::dataAndMetadata:-1::
# hdisk7:::::gfsocr2:
gfsocr2:::dataAndMetadata:-1::
# hdisk8:::::gfsocr3:
gfsocr3:::dataAndMetadata:-1::

The tiebreakerDisks option allows the GPFS cluster to run with as few as one quorum node, provided that it has access to a majority of the tiebreaker NSDs.

#mmchconfig tiebreakerDisks="gfsdb;gfsvote1"
Verifying GPFS is stopped on all nodes ...
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.

Before a GPFS file system can be created by using the mmcrfs command, the GPFS cluster daemons need to be up and running.

#mmstartup -a
Thu Jun 23 17:52:13 PDT 2011: mmstartup: Starting GPFS ...
#mmgetstate -a
 Node number  Node name        GPFS state
------------------------------------------
       1      zag02        active
       2      zag03        active
       3      zag04        active

Use the mmcrfs command to create a GPFS file system on a previously created NSD.

This command expects an NSD descriptor as input, which can be taken from the disk
descriptor file rewritten by mmcrnsd in the previous step.
#mmcrfs gfsdb "gfsdb:::dataAndMetadata:-1::" -A yes -T /gfsdb
The following disks of gfsdb will be formatted on node zag02:
    gfsdb: size 524288000 KB
Formatting file system ...
Disks up to size 4.5 TB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
Completed creation of file system /dev/gfsdb.
mmcrfs: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.

Similarly, create the other file systems (gfsvote1 to gfsvote3 and gfsocr1 to gfsocr3).

Because the file systems were created with the automount option (-A yes), all GPFS file systems are mounted automatically whenever the GPFS daemon starts, either at system restart or when the mmstartup command is run.

#dsh /usr/lpp/mmfs/bin/mmmount all
zag02: Thu Jun 23 18:39:09 PDT 2011: mmmount: Mounting file systems ...
zag03: Thu Jun 23 18:39:09 PDT 2011: mmmount: Mounting file systems ...
zag04: Thu Jun 23 18:39:09 PDT 2011: mmmount: Mounting file systems ...


#dsh "mount | grep mmfs; echo"
zag02:  /dev/gfsdb       /gfsdb      mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsdb
zag02:  /dev/gfsvote1    /gfsvote1   mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsvote1
zag02:  /dev/gfsvote2    /gfsvote2   mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsvote2
zag02:  /dev/gfsvote3    /gfsvote3   mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsvote3
zag02:  /dev/gfsocr1     /gfsocr1    mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsocr1
zag02:  /dev/gfsocr2     /gfsocr2    mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsocr2
zag02:  /dev/gfsocr3     /gfsocr3    mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsocr3

zag03:  /dev/gfsdb       /gfsdb      mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsdb
zag03:  /dev/gfsvote1    /gfsvote1   mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsvote1
zag03:  /dev/gfsvote2    /gfsvote2   mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsvote2
zag03:  /dev/gfsvote3    /gfsvote3   mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsvote3
zag03:  /dev/gfsocr1     /gfsocr1    mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsocr1
zag03:  /dev/gfsocr2     /gfsocr2    mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsocr2
zag03:  /dev/gfsocr3     /gfsocr3    mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsocr3

zag04:  /dev/gfsdb       /gfsdb      mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsdb
zag04:  /dev/gfsvote1    /gfsvote1   mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsvote1
zag04:  /dev/gfsvote2    /gfsvote2   mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsvote2
zag04:  /dev/gfsvote3    /gfsvote3   mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsvote3
zag04:  /dev/gfsocr1     /gfsocr1    mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsocr1
zag04:  /dev/gfsocr2     /gfsocr2    mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsocr2
zag04:  /dev/gfsocr3     /gfsocr3    mmfs   Aug 29 13:05 rw,mtime,atime,dev=gfsocr3
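
Before moving on to the grid installation, it can be useful to review the cluster definition, the configuration settings, and the new file systems with a few read-only commands, for example:

#mmlscluster
#mmlsconfig
#mmlsfs gfsdb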

Part 3: Oracle Clusterware (grid) installation

Oracle Clusterware enables servers to communicate with each other, so that they appear to function as a collective unit. This combination of servers is commonly known as a cluster. Although the servers are stand-alone servers, each server has additional processes that communicate with other servers. In this way, the separate servers appear as if they are one system to applications and end users. Oracle Clusterware provides the infrastructure necessary to run Oracle RAC.

We have mounted the Oracle 11.2.0.3 installation images from our centralized server, and they are available at /images. The following screen captures provide a step-by-step procedure on how to install the grid software.

  1. Run the following commands from the VNC session of zag02 as the Oracle user.
    cd /images/Oracle/11.2.0.3/grid
    ./runInstaller
    Figure 1. Download software updates
    Figure 2. Installation option
    Figure 3. Installation type
    Figure 4. Cluster node information
  2. Specify the cluster name, SCAN name, and the port number. The SCAN name and its corresponding IP address must be present in /etc/hosts.
    Figure 5. Adapter information

    We have used en1 for our public IP and en2 and en3 for our private IPs as shown in Figure 5.

    Figure 6. Storage option
  3. Select Shared File System because we are using GPFS in our configuration.
    Figure 7. OCR storage
  4. Specify the OCR storage path as shown in Figure 7.
    Figure 8. Selecting the OS group
  5. On each LPAR, create a local enhanced journaled file system (JFS2) mounted at /grid, 20 GB in size, with 777 permissions, and run chown oracle:dba /grid. Mount it and verify the ownership and permissions for the Oracle user; a sketch of the commands is shown below.
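
    A minimal sketch of the commands to create this file system, assuming enough free space is available in rootvg:

    # crfs -v jfs2 -g rootvg -m /grid -a size=20G -A yes
    # mount /grid
    # chown oracle:dba /grid
    # chmod 777 /grid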
    Figure 9. Oracle base path and software location path
  6. Specify the Oracle base path and the software location as shown in Figure 9.
    Figure 10. Minimum Installation and Configuration check
  7. After the installation is complete, verify whether the grid is installed successfully by using the following commands.
    # /grid/bin/crsctl check cluster -all
    ***************************************
    zag02:
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    ***************************************
    zag03:
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    ***************************************
    zag04:
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    ***************************************
    
    # ./srvctl status nodeapps
    VIP zag02vip is enabled
    VIP zag02vip is running on node: zag02
    VIP zag03vip is enabled
    VIP zag03vip is running on node: zag03
    VIP zag04vip is enabled
    VIP zag04vip is running on node: zag04
    Network is enabled
    Network is running on node: zag02
    Network is running on node: zag03
    Network is running on node: zag04
    GSD is disabled
    GSD is not running on node: zag02
    GSD is not running on node: zag03
    GSD is not running on node: zag04
    ONS is enabled
    ONS daemon is running on node: zag02
    ONS daemon is running on node: zag03
    ONS daemon is running on node: zag04

Part 4: Database installation

The following screen captures provide a step-by-step procedure on how to install the database software.

  1. Run the following commands from the VNC session of zag02 as the Oracle user. Again, we are using the Oracle images mounted from our centralized server.
    cd /images/Oracle/11.2.0.3/database
    ./runInstaller
    Figure 11. Install options
  2. Select Install database software only as the installation option (as shown in Figure 11).
    Figure 12. Type of database installation
  3. Select Oracle Real Application Clusters database installation as the database installation type (as shown in Figure 12).
    Figure 13. Database Edition
  4. Select Enterprise Edition as the database edition (as shown in Figure 13).
    Figure 14. Oracle base and software location
  5. Specify the path of the Oracle base and software location (as shown in Figure 14).
    Figure 15. Checks
  6. The installer performs prerequisite checks and reports any errors that it finds. Fix all the errors and rerun the checks, and make sure that no errors are reported.

Part 5: Database creation

After the database installation is complete, the next step is to create the database instance.
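
These screens come from the Oracle Database Configuration Assistant (DBCA). A minimal way to start it as the Oracle user, assuming the Oracle home path used later in this tutorial (/oradb/db), is:

$ export ORACLE_HOME=/oradb/db
$ $ORACLE_HOME/bin/dbca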

Figure 16. Selecting the option to create a database
Figure 17. Specifying the type of the database

In this example, the General Purpose or Transaction Processing option is selected, as shown in Figure 17.

Figure 18. Selecting the database configuration type

Select Policy-Managed, as shown in Figure 18.

Figure 19. Choosing a password
Figure 20. Specifying the storage type and the database file location
Figure 21. Selecting the recovery options for the database

The following screen captures show the configuration of memory, sizing, character sets, and the connection mode. We chose to keep the default values.

Figure 22. Memory and sizing configuration
Figure 23. Character sets configuration
Figure 24. Connection mode configuration
Figure 25. Selecting the database creation option
Figure 26. Database creation process
Figure 27. The database creation successfully completed

After the database is created successfully, we can verify the status of the database instances by using the srvctl command.

#export ORACLE_HOME=/oradb/db
#srvctl status database -d uptdb
Instance uptdb_1 is running on node zag02
Instance uptdb_2 is running on node zag03
Instance uptdb_3 is running on node zag04
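
As an additional sanity check (a sketch only, assuming the instance names shown above), you can connect to one of the instances with SQL*Plus as the Oracle user and query the cluster-wide instance view:

$ export ORACLE_HOME=/oradb/db
$ export ORACLE_SID=uptdb_1
$ $ORACLE_HOME/bin/sqlplus / as sysdba
SQL> select instance_name, host_name, status from gv$instance;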
