Sunday, July 6, 2014

Oracle Cluster Registry (OCR) in 11gR2


Oracle Cluster Registry (OCR) is a critical component of Oracle RAC.

Ø  OCR records cluster configuration information.  If it fails, the entire clustered Oracle RAC environment is affected and an outage may result.
Ø  It is the central repository for CRS, storing metadata, configuration and state information for all resources defined in the clusterware.
Ø  It is the cluster registry that maintains application resources and their availability within the RAC environment.
Ø  It also stores information about the CRS daemons and cluster-managed applications.
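Because the OCR is this critical, it is worth verifying it regularly. A quick, read-only check (run as root on any node) is the ocrcheck utility:

[root@rac1 ~]# ocrcheck

It reports the OCR version, total and used space, the configured OCR locations and the result of the integrity check; when run as root it also performs a logical corruption check.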


What is stored in OCR

Now that we have an introduction to the OCR, let us see what is stored in the OCR file.

Ø  Node membership information, i.e., which nodes are part of the cluster.
Ø  Software version
Ø  Location of 11g voting disk
Ø  Server pools
Ø  Status of cluster resources such as RAC databases, listeners, instances and services.

·         Server up/down
·         Network up/down
·         Database up/down
·         Instance up/down
·         Listener up/down
Ø  Configuration of the cluster resources like RAC databases, listeners, instances and services.
·         Dependencies,
·         Management Policies (automatic/manual)
·         Callout scripts
·         Retries
Ø  Cluster database instance to node mapping
Ø  ASM instance, disk groups, etc
Ø  CRS application resource profiles such as VIP address, service, etc.
Ø  Database service characteristics, e.g., preferred/available nodes, TAF policy, load balancing goal, etc.
Ø  Information about clusterware processes
Ø  Information about interaction and management of 3rd party applications controlled by CRS
Ø  Communication settings on which the clusterware daemons or background processes listen.
Ø  Information about OCR backups.
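These keys can also be dumped to a readable text file with the ocrdump utility (run as root); the output path below is only an example.

[root@rac1 ~]# ocrdump /tmp/ocrdump.txt
[root@rac1 ~]# ocrdump -stdout -xml | more

The first command writes the registry contents to /tmp/ocrdump.txt, while the second streams them to the screen in XML format.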



Let us see the contents of the OCR file by taking a manual backup and examining it.
[root@rac1 ~]# ocrconfig -manualbackup

rac2     2014/01/18 01:03:40     /u01/app/grid/product/11.2.0.3/grid_1/cdata/cluster01/backup_20140118_010340.ocr

[root@rac2 ~]#  strings /u01/app/grid/product/11.2.0.3/grid_1/cdata/cluster01/backup_20140118_010340.ocr| grep -v type |grep ora!

ora!LISTENER!lsnr
ora!host02!vip
rora!rac1!vip
;ora!oc4j
6ora!LISTENER_SCAN3!lsnr
ora!LISTENER_SCAN2!lsnr
ora!LISTENER_SCAN1!lsnr
ora!scan3!vip
ora!scan2!vip
ora!scan1!vip
ora!gns
ora!gns!vip
ora!registry!acfs
ora!DATA!dg
dora!asm
_ora!eons
ora!ons
ora!gsd
ora!net1!network


Who is updating OCR

It is updated and maintained by many client applications.

Ø  CSSd during startup of the cluster – to update the status of the servers.
Ø  CSSd during node addition/deletion – to add/delete nodes.
Ø  CRSd – to record the status of nodes during failure/reconfiguration.
Ø  OUI – Oracle Universal Installer, during installation/upgrade/node addition/deletion.
Ø  srvctl – to manage the cluster and RAC databases (see the example after this list).
Ø  crsctl – the cluster control utility, to manage cluster/local resources.
Ø  Enterprise Manager (EM)
Ø  Database Configuration Assistant (DBCA)
Ø  Database upgrade Assistant (DBUA)
Ø  Network Configuration Assistant (NETCA)
Ø  ASM configuration assistant (ASMCA)
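As a simple illustration of a client updating the OCR, adding a service with srvctl writes a new resource profile into the registry, and crsctl can read that profile back. The database name orcl, service name rptsvc and instance names below are only examples.

[oracle@rac1 ~]$ srvctl add service -d orcl -s rptsvc -r orcl1 -a orcl2
[oracle@rac1 ~]$ crsctl stat res ora.orcl.rptsvc.svc -p

The first command records the preferred (orcl1) and available (orcl2) instances for the service in the OCR; the second prints the resource profile that was stored.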

How updates are performed in the OCR

1.       Each node in the cluster keeps a copy of the OCR in memory for better performance, and each node is responsible for updating the OCR if required.
2.       The CRSd process is responsible for reading and writing to the OCR files, as well as for refreshing the local OCR cache and the caches on the other nodes of the cluster.
3.       Oracle uses a distributed shared-cache architecture during cluster management to optimize queries against the cluster repository, while each node maintains a copy of the OCR in memory.
4.       Oracle Clusterware uses a background process to access the OCR cache.
5.       Only one CRSd process (designated as the master) in the cluster performs any read/write activity. If the CRSd master process reads any new information, it refreshes the local OCR cache and the OCR caches on the other nodes of the cluster.
6.       As the OCR cache is distributed across all nodes in the cluster, OCR clients such as srvctl, crsctl, etc. communicate directly with the OCR process on the local node to get the required information.
7.       Clients communicate via the local CRSd process for any updates to the physical OCR binary file.

During the above process, the OCRCONFIG command cannot modify OCR configuration information for nodes that are shut down or on which Oracle Clusterware is not running.  So, avoid shutting down nodes while modifying the OCR using the ocrconfig command; otherwise, a repair must be performed on the stopped node before it can be brought online to join the cluster.

The ocrconfig -repair command changes the OCR configuration only on the node from which it is executed.
Example: if the OCR mirror was relocated to /dev/raw/raw2 from rac1 while rac2 was down, then run ocrconfig -repair ocrmirror /dev/raw/raw2 on rac2, while the CRS stack is down on that node, to repair its OCR configuration. A sketch of this sequence follows.
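A minimal sketch of that repair on the stopped node, using the article's example location (the exact -repair arguments depend on your release and storage):

[root@rac2 ~]# crsctl stop crs
[root@rac2 ~]# ocrconfig -repair ocrmirror /dev/raw/raw2
[root@rac2 ~]# crsctl start crs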


Purpose of OCR

Ø  Oracle Clusterware reads ocr.loc for the location of the registry and to know which application resources need to be started and the nodes on which to start them.
Ø  Used to bootstrap CSS with port information, the nodes in the cluster and other similar information.
Ø  CRSd and the other clusterware daemons define and manage the resources managed by the clusterware. Resources have profiles that define metadata about them, and this metadata is stored in the OCR. CRS reads the OCR, manages the application resources, starts, stops and monitors them, and manages their failover.
Ø  Maintains and tracks the information pertaining to the definition, availability and current state of the services.
Ø  Implements the workload balancing and continuous availability features of services.
Ø  Generates events during state changes.
Ø  Maintains configuration profiles of resources in the OCR.
Ø  Records the currently known state of the cluster at regular intervals and provides it when requested by client applications such as srvctl, crsctl, etc. (see the example below).
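For instance, srvctl answers such requests from the registry: srvctl config shows the stored configuration of a resource and srvctl status shows its currently known state, while crsctl stat res -t gives a cluster-wide view. The database name orcl is only illustrative.

[oracle@rac1 ~]$ srvctl config database -d orcl
[oracle@rac1 ~]$ srvctl status database -d orcl
[grid@rac1 ~]$ crsctl stat res -t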




How the information is stored in OCR

Ø  OCR uses a file-based repository to store configuration information in a series of key-value pairs, using a directory tree-like structure.
Ø  It contains information pertaining to all tiers in the clustered database.
Ø  Various parameters are stored as name-value pairs used and maintained at different levels of the architecture.

Each tier is managed and administered by a daemon process at a different level with appropriate privileges.
E.g., all system-level resources or application definitions require root (superuser) privileges to start or stop, while those defined at the database level require DBA privileges.
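The tree-like key structure can be browsed directly with ocrdump, which accepts a starting key name. For instance, the following dumps only the SYSTEM subtree, under which the clusterware-level keys live (any other key name can be substituted):

[root@rac1 ~]# ocrdump -stdout -keyname SYSTEM | more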



Where and how should the OCR be stored?

Ø  The location of the OCR is recorded in a file on each individual node of the cluster; the path varies with the flavor of UNIX. On Linux it is /etc/oracle/ocr.loc (see the example after this list).

Ø  OCR must reside on shared storage that is accessible by all nodes in the cluster. It cannot be stored on raw devices as in 10g, since they are deprecated in 11gR2. That leaves a cluster file system (CFS) or ASM; CFS can be costly and may not be an option, so ASM is usually the better choice. The OCR and voting disks can be placed in any disk group, including one dedicated exclusively to them.

Ø  OCR is striped and mirrored, similar to other database files (mirroring applies when the disk group has normal or high redundancy; with external redundancy, protection is left to the storage).

Ø  OCR is replicated across the underlying disks of the disk group, so failure of a disk does not bring down the disk group.

Ø  In 11gR2, we can have up to 5 OCR copies.

Ø  Since it is in a shared location, it can be administered from any node, irrespective of the node on which the registry was created.

Ø  A small disk of around 300MB-500MB is a good choice.
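For example, on Linux the pointer file can be inspected directly; the +DATA disk group shown here is only an illustration of what ocrconfig_loc might contain.

[root@rac1 ~]# cat /etc/oracle/ocr.loc
ocrconfig_loc=+DATA
local_only=FALSE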



Utilities to manage OCR

Add an OCR file
—————-
Add an OCR file to an ASM diskgroup called +DATA
ocrconfig -add +DATA

Moving the OCR
————–
Move an existing OCR file to another location :
ocrconfig -replace /u01/app/oracle/ocr -replacement +DATA

Removing an OCR location
————————
- requires that at least one other OCR location remain online.
ocrconfig -delete +DATA
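
Exporting the OCR
-----------------
ocrconfig can also take a logical export of the registry, which is useful before configuration changes such as adding or deleting nodes. The target path below is just an example; the export can later be brought back with ocrconfig -import while the clusterware stack is down.
ocrconfig -export /backup/ocr_export.dmp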



OCR Backups
ü  Oracle Clusterware 11g Release 2 backs up the OCR automatically every four hours on a schedule that is dependent on when the node started (not clock time).
ü  OCR backups are made available in $GRID_HOME/cdata/<cluster name> directory on the node performing the backups.
ü  One node, known as the master node, is dedicated to these backups, but if the master node goes down, another node may become the master. Hence, backups could be spread across nodes due to outages.
These backups are named as follows:
- 4-hour backups (3 max) – backup00.ocr, backup01.ocr and backup02.ocr
- Daily backups (2 max) – day.ocr and day_.ocr
- Weekly backups (2 max) – week.ocr and week_.ocr

ü  It is recommended that OCR backups be placed in a shared location, which can be configured using the ocrconfig -backuploc <new location> command.
ü  Oracle Clusterware maintains the last three 4-hour backups, overwriting the older ones. Thus, you will have three 4-hour backups: the current one, one four hours old and one eight hours old.
ü  Therefore no additional clean-up tasks are required of the DBA.
ü  Oracle Clusterware will also take a backup at the end of the day. The last two of these backups are retained.
ü  Finally, at the end of each week Oracle will perform another backup, and again the last two of these backups are retained. You should make sure that your routine file system backups include the OCR backup location.
ü  Note that RMAN does not backup the OCR.

You can use the ocrconfig command to view the current OCR backups:

# ocrconfig -showbackup auto
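To complement -showbackup, here is a broad sketch of restoring from one of those backups. The backup path is illustrative, and when the OCR is stored in ASM the documented 11gR2 procedure additionally involves starting the stack in exclusive mode on one node, so treat this as an outline rather than a full runbook.

# crsctl stop crs        (as root, on every node)
# ocrconfig -restore /u01/app/grid/product/11.2.0.3/grid_1/cdata/cluster01/backup00.ocr
# ocrcheck
# crsctl start crs       (on every node)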
