Wednesday, July 16, 2014

Indexing null values


In this demo, we will see how to index NULL values. We will also see how the optimizer changes the execution plan when we index a nullable column on its own versus together with a NOT NULL column or a constant.

            1.       Create a table with the records from dba_tables view.
SQL> create table testnull as select * from dba_tables;

Table created.

2.       Note that the columns PCT_FREE and PCT_INCREASE have NULL values.
SQL> select count(*) from testnull where pct_free is null;

  COUNT(*)
----------
        66

SQL> select count(*) from testnull where pct_increase is null;

  COUNT(*)
----------
      2775

3.       Let us create an index on the PCT_FREE column.
SQL> create index pctfree_null_idx on testnull(pct_free);

Index created.

                4.       Gather the stats for the table and index (using cascade=true)
SQL>  exec dbms_stats.gather_table_stats(ownname=>'TEST',tabname=>'TESTNULL',estimate_percent=>NULL,cascade=>true,method_opt=>'FOR ALL COLUMNS SIZE 1');

PL/SQL procedure successfully completed.


5.       Now search for the rows where PCT_FREE is null.
SQL> select * from testnull where pct_free is null;

66 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 623426927

------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          |    66 | 15906 |    30   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TESTNULL |    66 | 15906 |    30   (0)| 00:00:01 |
------------------------------------------------------------------------------

As you can see, the index is ignored and the query goes for a full table scan: the PCT_FREE values we are searching for are NULL, and entries whose entire key is NULL are not stored in a B-tree index at all.
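
One quick way to confirm this (a sketch, assuming you are connected as the owning schema, TEST in this demo, and that the statistics gathered in step 4 are current) is to compare the row count of the table with the number of entries recorded for the single-column index:

SQL> select count(*) from testnull;                                              -- all rows in the table
SQL> select num_rows from user_indexes where index_name = 'PCTFREE_NULL_IDX';    -- entries actually stored in the index

The difference between the two counts should be the 66 rows where PCT_FREE is NULL, which is exactly why a full table scan is the only way to find them.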


6.       Now, let us try a hint and see whether the query uses the index.
SQL> select /* + INDEX(tn,pctfree_null_idx) */ * from testnull where pct_free is null;

66 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 623426927

------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          |    66 | 15906 |    30   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TESTNULL |    66 | 15906 |    30   (0)| 00:00:01 |
------------------------------------------------------------------------------

Even with the hint, the query does not use the index and still goes for a full table scan. No hint can help here, because the rows we want are simply not present in the single-column index. (Note also that a hint must start with /*+ with no space after the asterisk and must reference the query's actual table or alias name; even with valid hint syntax, the result here is the same.)

7.       Now, let us create a concatenated (composite) index on the nullable column together with a NOT NULL column.
SQL> create index conc_idx on testnull(pct_free,owner);

Index created.

  
8.       Now, let us query and see.
SQL> select * from testnull where pct_free is null;

66 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1448583841

----------------------------------------------------------------------------------------
| Id  | Operation                   | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |          |    66 | 15906 |    10   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TESTNULL |    66 | 15906 |    10   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | CONC_IDX |    66 |       |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

We can see that the query now goes for an INDEX RANGE SCAN on the composite index. Since OWNER is a NOT NULL column, every row has an entry in the index, so the rows with a NULL PCT_FREE can be located through it.

9.       Let us drop this index and create a composite index on PCT_FREE and PCT_INCREASE, both of which contain NULL values.
SQL> drop index conc_idx;

Index dropped.

SQL> create index conc_idx_nulls on testnull(pct_free,pct_increase) compute statistics;

Index created.


10.   Query now and check.
SQL>  select * from testnull where pct_free is null;

66 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 623426927

------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          |    66 | 15906 |    30   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TESTNULL |    66 | 15906 |    30   (0)| 00:00:01 |
------------------------------------------------------------------------------

The query still goes for a full table scan: both indexed columns are nullable, so a row with NULL in both columns would have no index entry, and the optimizer cannot rely on the index to find every row where PCT_FREE is null.

11.   Now we will create another index with just a constant (a single space character) tagged on at the end.

SQL> create index conc_idx_i on testnull(pct_free,' ') compute statistics;

Index created.

SQL>  select * from testnull where pct_free is null;

66 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3944464689

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |            |    66 | 15906 |     6   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TESTNULL   |    66 | 15906 |     6   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | CONC_IDX_I |    66 |       |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Now it goes for an index range scan: the constant second column is never NULL, so every row has an index entry.


12.   Drop the index with the space and create a new concatenated index with any other constant.
SQL> drop index conc_idx_i ;

Index dropped.

SQL>  create index conc_idx_i1 on testnull(pct_free,'i') compute statistics;

Index created.

13.   Query again and see that it goes for an index range scan.
SQL> select * from testnull where pct_free is null;

66 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1942626316

-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |    66 | 15906 |     6   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TESTNULL    |    66 | 15906 |     6   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | CONC_IDX_I1 |    66 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Conclusion

1.       The index is not used when we create an index only on nullable columns, because rows whose entire key is NULL are not stored in a B-tree index.
2.       The index is used when we create a concatenated index combining the nullable column with a NOT NULL column.

3.       The index is also used when we append just a space or any other constant to the nullable column, as shown in steps 11 to 13.
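
As a compact recap of the workaround (a sketch only, reusing the demo table; the index name below is hypothetical), appending a constant to the nullable column gives every row an index entry, which the IS NULL query can then use. The resulting plan can be verified with EXPLAIN PLAN:

SQL> create index pf_const_idx on testnull(pct_free, ' ');
SQL> explain plan for select * from testnull where pct_free is null;
SQL> select * from table(dbms_xplan.display);     -- should show an INDEX RANGE SCAN on PF_CONST_IDX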

Sunday, July 6, 2014

Voting Disks in Oracle 11gR2 RAC


Voting disks are an important component of Oracle Clusterware. Clusterware uses the voting disk to determine which nodes are members of the cluster. Since ASM can be used to store these files, they are also referred to as VOTING FILES.
The primary function of voting disks is to manage node membership and prevent split-brain syndrome, in which two or more instances attempt to control the RAC database.
-  These files can be stored either in ASM or on shared storage.
-  If they are stored in ASM, no manual configuration is needed; the files are created according to the redundancy of the ASM disk group.
-  On a shared storage system, we need to configure these files manually, with redundancy set up for high availability.
-  We must have an odd number of disks.
-  Oracle recommends a minimum of 3 and a maximum of 5. In 10g, Clusterware supports up to 32 voting disks; 11gR2 supports up to 15.
-  A node must be able to access more than half of the voting disks at any time. For example, with 5 voting disks a node must be able to access at least 3 of them. If it cannot access that minimum number of voting disks, it is evicted/removed from the cluster.
-  All nodes in the RAC cluster register their heartbeat information in the voting disks/files. The RAC heartbeat is the polling mechanism sent over the cluster interconnect to ensure that all RAC nodes are available.

How Voting Happens
The CKPT process updates the control file every 3 seconds in an operation known as the heartbeat. CKPT writes to a single block that is local to each node/instance, so no coordination between instances is required. This block is called the checkpoint progress record.

All members of the cluster attempt to obtain a lock on the control file record for updating. The instance that obtains the lock tallies the votes from all members. The group membership must then conform to the decided (voted) membership before GCS/GES is allowed to proceed with reconfiguration. The vote result is stored in the same block as the heartbeat, the control file checkpoint progress record.



What are the NETWORK and DISK HEARTBEATS and how are they registered in the VOTING DISKS/FILES

            1.       All nodes in the RAC cluster register their heartbeat information in the voting disks/files. The RAC heartbeat is the polling mechanism sent over the cluster interconnect to ensure that all RAC nodes are available. Voting disks/files are like an attendance register in which the nodes mark their attendance (heartbeats).

            2.       The CSSD process on every node makes entries in the voting disk to ascertain the membership of the node. While marking their own presence, the nodes also register information about their ability to communicate with the other nodes in the voting disk. This is called the NETWORK HEARTBEAT.

            3.       The CSSD process on each RAC node maintains its heartbeat in a block (one OS block in size) in the hot block of the voting disk, at a specific offset. The written block has a header area with the node name. The heartbeat counter increments every second on every write call. Thus the heartbeats of the various nodes are recorded at different offsets in the voting disk. This is called the DISK HEARTBEAT.

            4.       In addition to maintaining its own disk block, each CSSD process also monitors the disk blocks maintained by the CSSD processes of the other nodes in the cluster. Healthy nodes continuously exchange network and disk heartbeats; a break in the heartbeats indicates a possible error scenario.

            5.       If the disk is not updated within a short timeout period, the node is considered unhealthy and may be rebooted to protect the database. In this case, a message to this effect is written to the node's KILL BLOCK. Each node reads its kill block once per second; if the kill block has been overwritten with such a message, the node commits suicide (reboots itself).

            6.       During a reconfiguration (a node leaving or joining), CSSD monitors the heartbeat information of all nodes and determines which nodes have a disk heartbeat, including those with no network heartbeat. If no disk heartbeat is detected, the node is considered dead.



What Information is stored in VOTING DISK/FILE
It contains two types of data:

Static data: information about the nodes in the cluster
Dynamic data: disk heartbeat logging

It holds the important details of cluster node membership, such as
a.       which nodes are part of the cluster,
b.      which node is leaving the cluster, and
c.       which node is joining the cluster.

Purpose of the Voting disk, or why the Voting disk is needed

Voting disks are used by the clusterware for health checks.

-  Used by CSS to determine which nodes are currently members of the cluster.
-  Used, in concert with other cluster components such as CRS, to shut down, fence, or reboot one or more nodes whenever network communication is lost between nodes within the cluster, in order to prevent the split-brain condition in which two or more instances attempt to control the RAC database, thus protecting the database.
-  Used by CSS to arbitrate (take an authorized decision) with peers that it cannot see over the private interconnect in the event of an outage, allowing it to salvage (rescue from loss) the largest fully connected sub-cluster for further operation. During this operation, node membership (NM) makes an entry in the voting disk to record its vote on availability, and the other instances in the cluster do the same. Having 3 voting disks configured also provides a method to determine who in the cluster should survive.
Example: if the eviction of one of the nodes becomes necessary because it is unresponsive, the node that holds 2 of the voting disks will start evicting the other node. NM alternates its action between the heartbeat and the voting disk to determine the availability of the other nodes in the cluster.

Possible scenarios in Voting disks

As we now know, the voting disk is used by CSSD. It holds both the network and disk heartbeats of all nodes, and a break in a heartbeat results in the eviction of the node from the cluster. The following scenarios are possible with missing heartbeats.

1.       The network heartbeat is successful, but the disk heartbeat is missed.
2.       The disk heartbeat is successful, but the network heartbeat is missed.
3.       Both heartbeats fail.

When a cluster has many nodes, a few more scenarios are possible.

1.       The nodes split into N sets, communicating within each set but not with the members of the other sets.
2.       Just one node goes unhealthy. The nodes with quorum (the minimum number of nodes required to keep the cluster valid) will maintain active membership of the cluster, and the other node(s) will be fenced/rebooted.

Why should we have an ODD number of voting disks?

A node must be able to access more than half of the voting disks at any time.
Example.

a.       Consider a 2-node cluster with an even number of voting disks, say 2.
b.      Node 1 is able to access voting disk 1.
c.       Node 2 is able to access voting disk 2.
d.      From the above, we see that there is no common disk where the clusterware can check the heartbeat of both nodes.
e.      If we have 3 voting disks and both nodes are able to access more than half, i.e., 2 voting disks, there will be at least one disk that is accessed by both nodes. The clusterware can use this disk to check the heartbeat of the nodes.
f.        A node that cannot do so will be evicted from the cluster by another node that has access to more than half of the voting disks, to maintain the integrity of the cluster.

Where voting disks are stored

They can be stored in
a.       Raw devices
b.      A cluster file system supported by Oracle RAC, such as OCFS, Sun Cluster, or Veritas Cluster File System
c.       ASM disks (in 11gR2)

When the voting disk is stored in ASM, a question arises: how can the voting file on ASM be accessed when we want to add a new node to the cluster?

The answer:
Oracle ASM reserves several blocks at a fixed location on every Oracle ASM disk used for storing voting files. As a result, Oracle Clusterware can access the voting disks in ASM even if the ASM instance is down, and CSS can continue to maintain the cluster even if ASM has failed. The physical location of the voting files on the ASM disks is fixed, i.e., the cluster stack does not rely on a running ASM instance to access the files.

If the voting disk is stored in ASM, the multiplexing of the voting disk is decided by the redundancy of the disk group, as shown below.

Redundancy of the diskgroup   # of copies of voting disk   Minimum # of disks in the diskgroup
External                      1                            1
Normal                        3                            3
High                          5                            5




Commands to check the voting disk

crsctl query css votedisk    -- for checking the voting file locations
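
Two related health checks can be run alongside it (a sketch; run as the grid infrastructure owner or root):

crsctl check css               -- confirms that Cluster Synchronization Services is online on the local node
crsctl check cluster -all      -- checks the clusterware stack on all nodes of the cluster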

When to take voting disk backup
1.       After a fresh installation
2.       After adding/deleting a node


Voting disk backup  (In 10g)

dd if=<voting-disk-path> of=<backup/path>

Voting disk restore (In 10g)

dd  if=<backup/path>  of=<voting disk path>


In 11gR2, the voting files are backed up automatically as part of the OCR backup. Oracle recommends NOT using the dd command to back up or restore voting disks in 11gR2, as this can lead to the loss of the voting disk.

Add/delete vote disk

crsctl add css votedisk <path>      -- adds a new voting disk
crsctl delete css votedisk <path>   -- deletes a voting disk
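
Note that the add/delete syntax above is for voting files kept outside ASM. When the voting files are stored in an ASM disk group, the usual 11gR2 approach is to move or multiplex them in a single step (a sketch; +DATA is an illustrative disk group name):

crsctl replace votedisk +DATA   -- relocates the voting files into +DATA; the number of copies then follows the disk group redundancy shown in the table above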



Oracle Local Registry (OLR) in RAC - 11gR2


In 11gR2, in addition to the OCR, we have another component called the OLR, which is installed on each node in the cluster. It is a local registry for node-specific purposes. The OLR is not shared with the other nodes in the cluster. It is installed and configured when the clusterware is installed.

Why the OLR is used and why it was introduced
In 10g, the OCR cannot be stored in ASM, and Oracle uses the OCR to start up the clusterware. But what happens when the OCR is stored in ASM, as in 11g?
The OCR must be accessible to find out which resources need to be started. But if the OCR is on ASM, it cannot be read until ASM (which is itself a resource of the node, and whose configuration is stored in the OCR) is up.

 To answer this, Oracle introduced a component called OLR.
-  It is the first file used to start up the clusterware when the OCR is stored on ASM.
-  Information about the resources that need to be started on a node is stored in an OS file called the ORACLE LOCAL REGISTRY (OLR).
-  Since the OLR is an OS file, it can be accessed by various processes on the node for read/write irrespective of the status of the cluster (up/down).
-  When a node joins the cluster, the OLR on that node is read and the various resources, including ASM, are started on the node.
-  Once ASM is up, the OCR becomes accessible and is used from then on to manage all the cluster nodes. If the OLR is missing or corrupted, the clusterware cannot be started on that node.

Where is the OLR located
It is located at $GRID_HOME/cdata/<hostname>.olr. The location of the OLR is stored in /etc/oracle/olr.loc, which is used by OHASD.
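
To confirm the registered location on a node, the pointer file itself can be inspected (a sketch; the path applies to Linux and the values shown are illustrative):

# cat /etc/oracle/olr.loc
olrconfig_loc=/u01/app/grid/11.2.0.3/product/grid_1/cdata/rac1.olr
crs_home=/u01/app/grid/11.2.0.3/product/grid_1
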
What does OLR contain
The OLR stores
·         Clusterware version info.
·         Clusterware configuration
·         Configuration of the various resources that need to be started on the node, etc.


To see the contents of the OLR file, we can use the following commands and see the resources.

[root@rac1 ~]# ocrconfig -local -manualbackup
host01     2014/03/16 01:20:27     /u01/app/grid/11.2.0.3/product/grid_1/cdata/rac1/backup_20140316_012027.olr

[root@rac1 ~]# strings /u01/app/grid/11.2.0.3/product/grid_1/cdata/rac1/backup_20140316_012027.olr |grep -v type |grep ora!

ora!drivers!acfs
ora!crsd
ora!asm
ora!evmd
ora!ctssd
ora!cssd
ora!cssdmonitor
ora!diskmon
ora!gpnpd
ora!gipcd
ora!mdnsd


OLR administration
Checking the status of the OLR file on each node.
$ ocrcheck -local

OCRDUMP is used to dump the contents of the OLR to the terminal
$ ocrdump -local -stdout

We can export and import the OLR file using OCRCONFIG

$ ocrconfig -local -export <export_file_name>
$ ocrconfig -local -import <file_name>

We can even repair the OLR file if it is corrupted.

$ ocrconfig -local -repair -olr <filename>

The OLR is backed up at the end of an installation or an upgrade. After that, we need to back up the OLR manually; automatic backups are not supported for the OLR.

$ ocrconfig -local -manualbackup

Viewing the contents of a backup file

$ ocrdump -local -backupfile <olr_backup_file_name>

To change the OLR backup location
$ ocrconfig -local -backuploc <new_backup_location>

To restore the OLR
$ crsctl stop crs
$ ocrconfig -local -restore <file_name>
$ ocrcheck -local
$ crsctl start crs
$ cluvfy comp olr   -- to check the integrity of the restored OLR file




Oracle Cluster Registry (OCR) in 11gR2


The Oracle Cluster Registry (OCR) is a critical component of Oracle RAC.

-  The OCR records cluster configuration information. If it fails, the entire clustered environment of Oracle RAC is affected and an outage is a possible result.
-  It is the central repository for CRS, which stores its metadata, configuration, and state information for all the clusters defined in the clusterware.
-  It is the cluster registry that maintains application resources and their availability within the RAC environment.
-  It also stores information about the CRS daemons and the cluster-managed applications.


What is stored in OCR

Having introduced the OCR, let us now see what is stored in the OCR file.

-  Node membership information, i.e., which nodes are part of the cluster
-  Software version
-  Location of the 11g voting disk
-  Server pools
-  Status of cluster resources such as RAC databases, listeners, instances, and services:

·         Server up/down
·         Network up/down
·         Database up/down
·         Instance up/down
·         Listener up/down
-  Configuration of cluster resources such as RAC databases, listeners, instances, and services:
·         Dependencies,
·         Management Policies (automatic/manual)
·         Callout scripts
·         Retries
-  Cluster database instance-to-node mapping
-  ASM instances, disk groups, etc.
-  CRS application resource profiles such as VIP addresses, services, etc.
-  Database service characteristics, e.g., preferred/available nodes, TAF policy, load balancing goal, etc.
-  Information about clusterware processes
-  Information about the interaction with and management of third-party applications controlled by CRS
-  Communication settings on which the clusterware daemons or background processes listen
-  Information about OCR backups
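
Two quick ways to inspect this information on a running cluster (a sketch; run as root or the grid infrastructure owner):

# ocrcheck              -- shows the OCR version, total/used space, file locations and an integrity check
# ocrdump -stdout       -- dumps the OCR key-value tree to the terminal in readable form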



Let us see the contents of the OCR file.
[root@rac1 ~]# ocrconfig -manualbackup

rac2     2014/01/18 01:03:40     /u01/app/grid/product/11.2.0.3/grid_1/cdata/cluster01/backup_20140118_010340.ocr

[root@rac2 ~]#  strings /u01/app/grid/product/11.2.0.3/grid_1/cdata/cluster01/backup_20140118_010340.ocr| grep -v type |grep ora!

ora!LISTENER!lsnr
ora!host02!vip
rora!rac1!vip
;ora!oc4j
6ora!LISTENER_SCAN3!lsnr
ora!LISTENER_SCAN2!lsnr
ora!LISTENER_SCAN1!lsnr
ora!scan3!vip
ora!scan2!vip
ora!scan1!vip
ora!gns
ora!gns!vip
ora!registry!acfs
ora!DATA!dg
dora!asm
_ora!eons
ora!ons
ora!gsd
ora!net1!network


Who is updating OCR

It is updated and maintained by many client applications.

-  CSSd during cluster startup – to update the status of the servers
-  CSSd during node addition/deletion – to add/delete nodes
-  CRSd – to record the status of nodes during failure/reconfiguration
-  OUI – Oracle Universal Installer, during installation/upgrade/addition/deletion
-  srvctl – to manage clusters and RAC databases
-  crsctl – the cluster control utility, to manage cluster/local resources
-  Enterprise Manager (EM)
-  Database Configuration Assistant (DBCA)
-  Database Upgrade Assistant (DBUA)
-  Network Configuration Assistant (NETCA)
-  ASM Configuration Assistant (ASMCA)

How updates are performed in the OCR

1.       Each node in the cluster keeps a copy of the OCR in memory for better performance, and each node is responsible for updating the OCR if required.
2.       The CRSd process is responsible for reading and writing the OCR files, as well as for refreshing the local OCR cache and the caches on the other nodes in the cluster.
3.       Oracle uses a distributed shared-cache architecture for cluster management to optimize queries against the cluster repository; at the same time, each node maintains a copy of the OCR in memory.
4.       Oracle Clusterware uses background processes to access the OCR cache.
5.       Only one CRSd process (designated as the master) in the cluster performs any read/write activity. If the CRSd master process reads any new information, it refreshes the local OCR cache and the OCR caches on the other nodes of the cluster.
6.       As the OCR cache is distributed across all nodes in the cluster, OCR clients such as srvctl, crsctl, etc. communicate directly with the OCR process on the local node to get the required information.
7.       Clients communicate via the local CRSd process for any updates to the physical OCR binary file.

While this is going on, the OCRCONFIG command cannot modify the OCR configuration information for nodes that are shut down or on which Oracle Clusterware is not running. So we should avoid shutting down nodes while modifying the OCR using the ocrconfig command; otherwise we need to perform a repair on the stopped node before it can be brought online to join the cluster.

The ocrconfig -repair command changes the OCR configuration only on the node from which the command is executed.
Example: if the OCR mirror was relocated to /dev/raw/raw2 while rac2 was down, then run the command ocrconfig -repair ocrmirror /dev/raw/raw2 on rac2, while the CRS stack is down on that node, to repair its OCR configuration.


Purpose of OCR

-  Oracle Clusterware reads ocr.loc for the location of the registry and to know which application resources need to be started and the nodes on which to start them.
-  Used to bootstrap CSS with port information, the nodes in the cluster, and other similar information.
-  The function of CRSd and the other clusterware daemons is to define and manage the resources managed by the clusterware. Resources have profiles that define metadata about them, and this metadata is stored in the OCR. CRS reads the OCR and manages the application resources: it starts, stops, and monitors them and manages their failover.
-  Maintains and tracks information pertaining to the definition, availability, and current state of services.
-  Implements the workload balancing and continuous availability features of services.
-  Generates events during state changes.
-  Maintains the configuration profiles of resources in the OCR.
-  Records the currently known state of the cluster at regular intervals and provides it when requested by client applications such as srvctl, crsctl, etc.




How the information is stored in OCR

-  The OCR uses a file-based repository to store configuration information in a series of key-value pairs, using a directory-tree-like structure.
-  It contains information pertaining to all tiers of the clustered database.
-  The various parameters are stored as name-value pairs, used and maintained at different levels of the architecture.

Each tier is managed and administered by a daemon process running at that level with the appropriate privileges.
E.g., all system-level resources or application definitions require root (superuser) privileges to start or stop,
while those defined at the database level require DBA privileges.



Where and how should OCR be stored?

-  The location of the OCR is recorded in a file on each individual node of the cluster; the path varies with the flavor of Unix. On Linux it is /etc/oracle/ocr.loc (see the sketch after this list).

-  The OCR must reside on a shared disk that is accessible by all nodes in the cluster. It can no longer be stored on raw devices as in 10g, since raw storage is deprecated. That leaves a cluster file system (CFS) or ASM; CFS can be costly and may not be an option, so ASM is usually the better choice. The OCR and voting disks can be kept in disk groups exclusively their own.

-  The OCR is striped and mirrored (when the disk group has normal or high redundancy), similar to other database files.

-  The OCR is replicated across the underlying disks of the disk group, so the failure of one disk does not bring down the disk group.

-  In 11gR2, we can have up to 5 OCR copies.

-  Since it is in a shared location, it can be administered from any node, irrespective of the node on which the registry was created.

-  A small disk of around 300 MB to 500 MB is a good choice.
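
For example, on Linux the registry pointer file mentioned in the first bullet looks roughly like this (a sketch; the disk group name is illustrative):

# cat /etc/oracle/ocr.loc
ocrconfig_loc=+DATA
local_only=FALSE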



Utilities to manage OCR

Add an OCR file
—————-
Add an OCR file to an ASM diskgroup called +DATA
ocrconfig -add +DATA

Moving the OCR
————–
Move an existing OCR file to another location:
ocrconfig -replace /u01/app/oracle/ocr -replacement +DATA

Removing an OCR location
————————
- requires that at least one other OCR location remains online
ocrconfig -delete +DATA



OCR Backups
-  Oracle Clusterware 11g Release 2 backs up the OCR automatically every four hours, on a schedule that depends on when the node was started (not on clock time).
-  OCR backups are placed in the $GRID_HOME/cdata/<cluster name> directory on the node performing the backups.
-  One node, known as the master node, is dedicated to these backups, but if the master node goes down some other node may become the master. Backups could therefore be spread across nodes due to outages.
These backups are named as follows:
- 4-hour backups (3 max): backup00.ocr, backup01.ocr, and backup02.ocr
- Daily backups (2 max): day.ocr and day_.ocr
- Weekly backups (2 max): week.ocr and week_.ocr

-  It is recommended to place OCR backups in a shared location, which can be configured using the ocrconfig -backuploc <new location> command.
-  Oracle Clusterware maintains the last three 4-hour backups, overwriting the older ones. Thus you will have three 4-hour backups: the current one, one four hours old, and one eight hours old.
-  Therefore no additional clean-up tasks are required of the DBA.
-  Oracle Clusterware will also take a backup at the end of the day. The last two of these backups are retained.
-  Finally, at the end of each week Oracle performs another backup, and again the last two of these backups are retained. You should make sure that your routine file system backups back up the OCR location.
-  Note that RMAN does not back up the OCR.

You can use the ocrconfig command to view the current OCR backups:

# ocrconfig -showbackup auto
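
To restore the OCR from one of these backups, the general 11gR2 sequence is as follows (a sketch only; the backup file name is an example, and the clusterware stack must be stopped on every node first):

# crsctl stop crs                                              -- run on each node to stop the clusterware stack
# ocrconfig -restore $GRID_HOME/cdata/cluster01/backup00.ocr   -- restore from the chosen backup
# crsctl start crs                                             -- run on each node to restart the stack
# ocrcheck                                                     -- verify OCR integrity
# cluvfy comp ocr -n all -verbose                              -- verify the OCR across all nodes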