Friday, April 12, 2019

RAC : 11gR2 Clusterware Startup Sequence

Here is a brief explanation of how the Clusterware stack is brought up, step by step.

In Unix and Linux operating systems, there is a master daemon process, init, that starts up additional system background processes. The init process first spawns the init.ohasd process, which in turn starts the Oracle High Availability Services Daemon (OHASD). The OHASD daemon then spawns the additional Clusterware processes at each startup level, as shown next:

1. When a node of an Oracle Clusterware cluster starts or restarts, OHASD is started by platform-specific means. OHASD is the root process for bringing up Oracle Clusterware. OHASD has access to the OLR (Oracle Local Registry) stored on the local file system; the OLR provides the data needed to complete OHASD initialization.

2. OHASD brings up GPNPD and CSSD. CSSD has access to the GPNP Profile stored on the local file system. This profile contains the following vital bootstrap data:
        a. ASM Diskgroup Discovery String
        b. ASM SPFILE location (Diskgroup name)
        c. Name of the ASM Diskgroup containing the Voting Files

3. CSSD locates the Voting Files on the ASM disks via well-known pointers in the ASM disk headers, so it is able to complete initialization and start or join an existing cluster.

4. With CSSD initialized and operating, OHASD starts an ASM instance. The ASM instance uses special code to locate the contents of the ASM SPFILE, assuming it is stored in a Diskgroup.

5. With an ASM instance operating and its Diskgroups mounted, access to Clusterware’s OCR is available to CRSD.

6. OHASD starts CRSD with access to the OCR in an ASM Diskgroup.

7. Clusterware completes initialization and brings up other services under its control.
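A minimal way to observe this startup chain on a running Linux node is sketched below; the exact init mechanics, file paths, and resource names vary by platform and Grid Infrastructure version, so treat this only as an illustrative sketch (run as root on the node):

    # Entry that (re)spawns init.ohasd on older Linux releases
    # (newer releases use Upstart or systemd units instead)
    grep ohasd /etc/inittab

    # Confirm the stack processes described in steps 1-7 are running
    ps -ef | grep -E 'ohasd|cssd|crsd|evmd|gpnpd' | grep -v grep

    # Resources managed directly by OHASD (the "lower stack"):
    # ora.cssd, ora.ctssd, ora.asm, ora.crsd, ora.evmd, ora.gpnpd, ...
    crsctl stat res -t -init

    # The OLR read by OHASD in step 1: its location is recorded in
    # /etc/oracle/olr.loc and ocrcheck -local reports its integrity
    cat /etc/oracle/olr.loc
    ocrcheck -local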

When Clusterware starts, three files are involved.

1. OLR – The first file to be read and opened. This file is local and contains information about where the voting disk is stored as well as the information needed to start ASM (e.g. the ASM discovery string).

2. VOTING DISK – The second file to be opened and read; access to it depends only on the OLR being available.

ASM starts only after CSSD; if CSSD is offline (for example, because the voting files are missing), ASM will not start.
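This dependency can be checked from the profile of the ora.asm resource in the OHASD (-init) stack; on a typical 11gR2 installation the START_DEPENDENCIES attribute includes a hard dependency on ora.cssd (the attribute contents are version-specific, so this is only a sketch):

    # Show the dependency attributes of the ASM resource in the lower stack
    crsctl stat res ora.asm -init -p | grep -i dependencies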

How are Voting Disks stored in ASM?

Voting disks are placed directly on the ASM disks: Oracle Clusterware stores each voting file on a disk within the disk group designated to hold the Voting Files.

Oracle Clusterware does not rely on the ASM instance to access the Voting Files, which means Clusterware does not need the Diskgroup to be mounted in order to read and write them on the ASM disk. It is possible to check for the existence of voting files on an ASM disk using the VOTING_FILE column of V$ASM_DISK.

So, although the voting files do not depend on the Diskgroup being mounted in order to be accessed, that does not mean the Diskgroup is not needed: the Diskgroup and the voting files remain linked through their configuration.
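The statements above can be verified with a couple of commands; the sqlplus and kfed calls below are a sketch (the device path passed to kfed is just a placeholder for one of your ASM disks):

    # List the voting files and the ASM disks on which they reside
    crsctl query css votedisk

    # From the ASM instance (environment set for +ASMn):
    # VOTING_FILE = Y marks disks that hold a voting file copy
    sqlplus -S / as sysasm <<'EOF'
    SELECT path, voting_file FROM v$asm_disk;
    EOF

    # The well-known pointers in the ASM disk header (see step 3 above) can be
    # inspected with kfed; non-zero vfstart/vfend values indicate a voting file
    kfed read /dev/oracleasm/disks/DATA01 | grep -iE 'vfstart|vfend'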

3. OCR – Finally, the ASM instance starts and mounts all Diskgroups; the Clusterware Daemon (CRSD) then opens and reads the OCR, which is stored in a Diskgroup.
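As with the OLR, the OCR location and integrity can be checked directly on a node (ocrcheck must be run as root to report full integrity information):

    # OCR location(s) as registered on the local node (Linux)
    cat /etc/oracle/ocr.loc

    # Reports the OCR location (e.g. an ASM Diskgroup such as +DATA) and its integrity
    ocrcheck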

So, once ASM has started, it does not depend on the OCR or OLR to stay online; ASM depends only on CSSD (and therefore the voting disk) being online.

There is an exclusive mode in which ASM can be started without the full clustered CSSD stack, but it is intended only for restoring the OCR or the voting files.
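For reference, on 11.2.0.2 and later this exclusive mode is typically started as shown below (the -nocrs option does not exist on 11.2.0.1); it should only be used for OCR or voting file restore, and the stack must be restarted normally afterwards:

    # As root: start CSS in exclusive (non-clustered) mode plus ASM, without CRSD
    crsctl start crs -excl -nocrs

    # ... perform the OCR / voting file restore ...

    # Stop the exclusive-mode stack and restart the clusterware normally
    crsctl stop crs -f
    crsctl start crs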
As per Oracle Documentation

The full description, the (really unreadable) diagram, and any updates to this can be found in MOS Document 1053147.1.

Level 1: OHASD Spawns:
  • cssdagent – Agent responsible for spawning CSSD.
  • orarootagent – Agent responsible for managing all root owned ohasd resources.
  • oraagent – Agent responsible for managing all oracle owned ohasd resources.
  • cssdmonitor – Monitors CSSD and node health (along with the cssdagent).
Level 2: OHASD rootagent spawns:
  • CRSD – Primary daemon responsible for managing cluster resources.
  • CTSSD – Cluster Time Synchronization Services Daemon
  • Diskmon
  • ACFS (ASM Cluster File System) Drivers
Level 2: OHASD oraagent spawns:
  • MDNSD – Multicast DNS daemon, used for DNS lookup
  • GIPCD – Used for inter-process and inter-node communication
  • GPNPD – Grid Plug & Play Profile Daemon
  • EVMD – Event Monitor Daemon
  • ASM – Resource for monitoring ASM instances
Level 3: CRSD spawns:
  • orarootagent – Agent responsible for managing all root owned crsd resources.
  • oraagent – Agent responsible for managing all oracle owned crsd resources.
Level 4: CRSD rootagent spawns:
  • Network resource – To monitor the public network
  • SCAN VIP(s) – Single Client Access Name Virtual IPs
  • Node VIPs – One per node
  • ACFS Registry – For mounting ASM Cluster File System
  • GNS VIP (optional) – VIP for GNS
Level 4: CRSD oraagent spawns:
  • ASM Resource – ASM Instance(s) resource
  • Diskgroup – Used for managing/monitoring ASM diskgroups.
  • DB Resource – Used for monitoring and managing the DB and instances
  • SCAN Listener – Listener for single client access name, listening on SCAN VIP
  • Listener – Node listener listening on the Node VIP
  • Services – Used for monitoring and managing services
  • ONS – Oracle Notification Service
  • eONS – Enhanced Oracle Notification Service
  • GSD – For 9i backward compatibility
  • GNS (optional) – Grid Naming Service – Performs name resolution
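On a running cluster, the upper-stack resources spawned through CRSD at levels 3 and 4 can be listed alongside the lower stack; the resource names (databases, listeners, SCAN) will differ per installation:

    # CRSD-managed cluster resources: VIPs, SCAN, listeners, ASM, diskgroups,
    # databases, services, ONS, GSD, GNS ...
    crsctl stat res -t

    # Compare with the OHASD-managed lower stack shown earlier
    crsctl stat res -t -init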





