Thursday, June 14, 2012

RMAN TABLESPACE POINT IN TIME RECOVERY
Issue: In the production database a wrong batch was executed and updated a number of tables. The application team immediately asked to revert the committed changes from all of those tables. One option is to go for flashback queries, but with such a long list of tables that would be a tough task. The other way is a database point-in-time recovery, but that would lose a lot of other data and is therefore not acceptable.
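
For a single table, a flashback query (or FLASHBACK TABLE, if row movement is enabled on the table) would look something like the sketch below; the table name and timestamp are purely illustrative. Repeating this for a long list of tables is what makes the approach impractical here.

-- Flashback query sketch (table name and timestamp are illustrative):
SELECT * FROM app_owner.orders
AS OF TIMESTAMP TO_TIMESTAMP('2012-06-13 21:55:00','YYYY-MM-DD HH24:MI:SS');

-- Or, if row movement is enabled on the table:
FLASHBACK TABLE app_owner.orders
TO TIMESTAMP TO_TIMESTAMP('2012-06-13 21:55:00','YYYY-MM-DD HH24:MI:SS');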

Solution: Oracle provides RMAN TABLESPACE POINT IN TIME RECOVERY (TSPITR), which can quickly recover one or more tablespaces in a database to an earlier time without affecting the rest of the tablespaces and objects in the database.

Let's try to learn this. To simulate the case, I will create a table with 1000 rows and then re-execute the batch (just like the wrong batch), which will insert another 1000 rows into the table; those are the rows that need to be rolled back. Also note that TSPITR (TABLESPACE POINT IN TIME RECOVERY) uses the last backup, so I have taken a database backup before starting the test.

Now creating a separate tablespace and user for testing.

SQL> select * from v$version;

BANNER                                                                         
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production         
PL/SQL Release 11.2.0.1.0 - Production                                         
CORE  11.2.0.1.0  Production                                                     
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production                         
NLSRTL Version 11.2.0.1.0 - Production                                         

22:28:29 SQL> create tablespace rmantest datafile 'C:\APP\JAYVA\ORADATA\TARGET\rman.dbf' size 2G;

Tablespace created.

22:29:44 SQL> create user rman identified by rman default tablespace rmantest;

User created.

22:30:30 SQL> grant dba to rman;

Grant succeeded.

SQL> conn rman/rman
Connected.
SQL> create table t1 (x number);

Table created.

SQL> begin
  2  for i in 1..1000 loop
  3  insert into t1 values (i);
  4  end loop;
  5  commit;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> select count(1) from t1;

  COUNT(1)
----------
      1000

SQL> select to_char(sysdate,'YYYY-MM-DD:HH24:MI:SS') from dual;

TO_CHAR(SYSDATE,'YY                                                            
-------------------                                                            
2012-06-13:21:55:00

SQL> select count(1) from t1;

  COUNT(1)                                                                     
----------                                                                      
      1000                                                                     

SQL> begin
  2  for i in 1..1000 loop
  3  insert into t1 values (i);
  4  end loop;
  5  commit;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> select count(1) from t1;

  COUNT(1)                                                                     
----------                                                                      
      2000                                                                     

SQL>
SQL>
SQL> select to_char(sysdate,'YYYY-MM-DD:HH24:MI:SS') from dual;

TO_CHAR(SYSDATE,'YY                                                             
-------------------                                                            
2012-06-13:22:01:18                                                            

SQL> create table t2 (x number);

Table created.

So we now have a table that had 1000 rows, into which the wrong batch inserted another 1000 rows. Later I also created one more table, T2.

Before starting TSPITR, we have to check for objects whose CREATION_TIME is after the target time for TSPITR.

SQL> SELECT OWNER, NAME, TABLESPACE_NAME,
  2         TO_CHAR(CREATION_TIME, 'YYYY-MM-DD:HH24:MI:SS')
  3         FROM TS_PITR_OBJECTS_TO_BE_DROPPED
  4  WHERE TABLESPACE_NAME IN ('RMANTEST')
  5  AND CREATION_TIME > TO_DATE('2012-06-13:21:55:00','YYYY-MM-DD:HH24:MI:SS')
  6  ORDER BY TABLESPACE_NAME, CREATION_TIME;

OWNER                          NAME
------------------------------ ------------------------------
TABLESPACE_NAME                TO_CHAR(CREATION_TI
------------------------------ -------------------
RMAN                           T2
RMANTEST                       2012-06-13:22:13:21


SQL>

Table T2 was created after the time to which we are doing the TSPITR, so it will be dropped when we start the process; if it is needed, it can be backed up first (a quick export sketch is shown below). It is not important for me, so I will proceed with the recovery process.
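
If T2 did need to be preserved, a quick Data Pump export before starting TSPITR would be enough; a minimal sketch (credentials, directory and file names are illustrative):

expdp system/<password> tables=RMAN.T2 directory=DATA_PUMP_DIR dumpfile=t2_before_tspitr.dmp logfile=t2_before_tspitr.log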

There are 3 modes of TSPITR:

Fully automated: the default; RMAN manages the entire process, including the auxiliary instance, using only the auxiliary destination you supply.
Customized automated: based on the automated process, but you customize details such as the location of the auxiliary set files or the auxiliary instance parameters (a sketch of a customized run follows this list).
Manual: the auxiliary instance is created and managed entirely by the DBA.
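
For the customized automated mode, the RECOVER TABLESPACE command is wrapped in a RUN block so that details such as datafile locations can be controlled with SET NEWNAME. A minimal sketch, with paths purely illustrative:

RMAN> RUN
{
  # Place an auxiliary set datafile explicitly instead of letting RMAN pick an OMF name
  SET NEWNAME FOR DATAFILE 1 TO 'D:\AUX\system01.dbf';
  RECOVER TABLESPACE rmantest
    UNTIL TIME "to_date('2012-06-13:21:55:00','YYYY-MM-DD:HH24:MI:SS')"
    AUXILIARY DESTINATION 'D:\AUX';
}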

Key terms in TSPITR:

Target instance: the instance containing the tablespace that needs recovery.
Target time: the point in time to which recovery is needed.
Auxiliary instance: the database instance used by TSPITR to perform the restore and recovery.
Auxiliary destination: the location where the auxiliary instance creates its files.

Before starting the recovery process, let's see what steps RMAN follows:

- Creates the auxiliary instance (including the pfile, etc.)
- Mounts the auxiliary instance
- Takes the candidate tablespace offline
- Restores the basic tablespaces (UNDO, SYSTEM, SYSAUX) and then the required tablespace
- Applies archived logs (completes recovery)
- Opens the auxiliary database
- Performs an export of the objects residing in the tablespace
- Shuts down the auxiliary instance
- Imports the objects into the target database
- Removes the auxiliary instance and cleans up

Now starting the process:

Recovery Manager: Release 11.2.0.1.0 - Production on Wed Jun 13 22:18:40 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

RMAN>
connected to target database: TARGET (DBID=3177713851)

RMAN> recover tablespace RMANTEST until time "to_date('2012-06-13:21:55:00','YYYY-MM-DD:HH24:MI:SS')" auxiliary destination 'C:\Users\jayva\Desktop\pending';

RMAN>
Starting recover at 13-JUN-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point-in-time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1

Creating automatic instance, with SID='ghas'

initialization parameters used for automatic instance:
db_name=TARGET
db_unique_name=ghas_tspitr_TARGET
compatible=11.2.0.0.0
db_block_size=8192
db_files=200
sga_target=280M
processes=50
db_create_file_dest=C:\Users\jayva\Desktop\pending
log_archive_dest_1='location=C:\Users\jayva\Desktop\pending'
#No auxiliary parameter file used


starting up automatic instance TARGET

Oracle instance started

Total System Global Area     292933632 bytes

Fixed Size                     1374164 bytes
Variable Size                100665388 bytes
Database Buffers             184549376 bytes
Redo Buffers                   6344704 bytes
Automatic instance created
Running TRANSPORT_SET_CHECK on recovery set tablespaces
TRANSPORT_SET_CHECK completed successfully

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('2012-06-13:21:55:00','YYYY-MM-DD:HH24:MI:SS')";
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log
sql 'alter system archive log current';
# avoid unnecessary autobackups for structural changes during TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';
}
executing Memory Script

executing command: SET until clause

Starting restore at 13-JUN-12
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=59 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece C:\APP\JAYVA\FLASH_RECOVERY_AREA\TARGET\TARGET\BACKUPSET\2012_06_13\O1_MF_NCSNF_TAG20120613T214941_7XKHK95D_.BKP
channel ORA_AUX_DISK_1: piece handle=C:\APP\JAYVA\FLASH_RECOVERY_AREA\TARGET\TARGET\BACKUPSET\2012_06_13\O1_MF_NCSNF_TAG20120613T214941_7XKHK95D_.BKP tag=TAG20120613T214941
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=C:\USERS\JAYVA\DESKTOP\PENDING\TARGET\CONTROLFILE\O1_MF_7XKK67HZ_.CTL
Finished restore at 13-JUN-12

sql statement: alter database mount clone database

sql statement: alter system archive log current

sql statement: begin dbms_backup_restore.AutoBackupFlag(FALSE); end;

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('2012-06-13:21:55:00','YYYY-MM-DD:HH24:MI:SS')";
plsql <<<-- tspitr_2
declare
  sqlstatement       varchar2(512);
  offline_not_needed exception;
  pragma exception_init(offline_not_needed, -01539);
begin
  sqlstatement := 'alter tablespace '||  'RMANTEST' ||' offline immediate';
  krmicd.writeMsg(6162, sqlstatement);
  krmicd.execSql(sqlstatement);
exception
  when offline_not_needed then
    null;
end; >>>;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  2 to new;
set newname for clone tempfile  1 to new;
set newname for datafile  6 to
 "C:\APP\JAYVA\ORADATA\TARGET\RMAN.DBF";
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 3, 2, 6;
switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

sql statement: alter tablespace RMANTEST offline immediate

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to C:\USERS\JAYVA\DESKTOP\PENDING\TARGET\DATAFILE\O1_MF_TEMP_%U_.TMP in control file

Starting restore at 13-JUN-12
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to C:\USERS\JAYVA\DESKTOP\PENDING\TARGET\DATAFILE\O1_MF_SYSTEM_%U_.DBF
channel ORA_AUX_DISK_1: restoring datafile 00003 to C:\USERS\JAYVA\DESKTOP\PENDING\TARGET\DATAFILE\O1_MF_UNDOTBS1_%U_.DBF
channel ORA_AUX_DISK_1: restoring datafile 00002 to C:\USERS\JAYVA\DESKTOP\PENDING\TARGET\DATAFILE\O1_MF_SYSAUX_%U_.DBF
channel ORA_AUX_DISK_1: restoring datafile 00006 to C:\APP\JAYVA\ORADATA\TARGET\RMAN.DBF
channel ORA_AUX_DISK_1: reading from backup piece C:\APP\JAYVA\FLASH_RECOVERY_AREA\TARGET\TARGET\BACKUPSET\2012_06_13\O1_MF_NNNDF_TAG20120613T214941_7XKHDY7D_.BKP
channel ORA_AUX_DISK_1: piece handle=C:\APP\JAYVA\FLASH_RECOVERY_AREA\TARGET\TARGET\BACKUPSET\2012_06_13\O1_MF_NNNDF_TAG20120613T214941_7XKHDY7D_.BKP tag=TAG20120613T214941
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:02:25
Finished restore at 13-JUN-12

datafile 1 switched to datafile copy
input datafile copy RECID=5 STAMP=785888577 file name=C:\USERS\JAYVA\DESKTOP\PENDING\TARGET\DATAFILE\O1_MF_SYSTEM_7XKK6QW8_.DBF
datafile 3 switched to datafile copy
input datafile copy RECID=6 STAMP=785888577 file name=C:\USERS\JAYVA\DESKTOP\PENDING\TARGET\DATAFILE\O1_MF_UNDOTBS1_7XKK6QVF_.DBF
datafile 2 switched to datafile copy
input datafile copy RECID=7 STAMP=785888577 file name=C:\USERS\JAYVA\DESKTOP\PENDING\TARGET\DATAFILE\O1_MF_SYSAUX_7XKK6QX0_.DBF

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('2012-06-13:21:55:00','YYYY-MM-DD:HH24:MI:SS')";
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
sql clone "alter database datafile  6 online";
# recover and open resetlogs
recover clone database tablespace  "RMANTEST", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  1 online

sql statement: alter database datafile  3 online

sql statement: alter database datafile  2 online

sql statement: alter database datafile  6 online

Starting recover at 13-JUN-12
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 97 is already on disk as file C:\APP\JAYVA\FLASH_RECOVERY_AREA\TARGET\TARGET\ARCHIVELOG\2012_06_13\O1_MF_1_97_7XKK6K4D_.ARC
archived log file name=C:\APP\JAYVA\FLASH_RECOVERY_AREA\TARGET\TARGET\ARCHIVELOG\2012_06_13\O1_MF_1_97_7XKK6K4D_.ARC thread=1 sequence=97
media recovery complete, elapsed time: 00:00:04
Finished recover at 13-JUN-12

database opened

contents of Memory Script:
{
# make read only the tablespace that will be exported
sql clone 'alter tablespace  RMANTEST read only';
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
C:\Users\jayva\Desktop\pending''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
C:\Users\jayva\Desktop\pending''";
}
executing Memory Script

sql statement: alter tablespace  RMANTEST read only

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''C:\Users\jayva\Desktop\pending''

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''C:\Users\jayva\Desktop\pending''

Performing export of metadata...
   EXPDP> Starting "SYS"."TSPITR_EXP_ghas": 
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Master table "SYS"."TSPITR_EXP_ghas" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_ghas is:
   EXPDP>   C:\USERS\JAYVA\DESKTOP\PENDING\TSPITR_GHAS_39076.DMP
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace RMANTEST:
   EXPDP>   C:\APP\JAYVA\ORADATA\TARGET\RMAN.DBF
   EXPDP> Job "SYS"."TSPITR_EXP_ghas" successfully completed at 22:24:47
Export completed


contents of Memory Script:
{
# shutdown clone before import
shutdown clone immediate
# drop target tablespaces before importing them back
sql 'drop tablespace  RMANTEST including contents keep datafiles';
}
executing Memory Script

database closed
database dismounted
Oracle instance shut down

sql statement: drop tablespace  RMANTEST including contents keep datafiles

Performing import of metadata...
   IMPDP> Master table "SYS"."TSPITR_IMP_ghas" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_ghas": 
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   IMPDP> Job "SYS"."TSPITR_IMP_ghas" successfully completed at 22:25:24
Import completed


contents of Memory Script:
{
# make read write and offline the imported tablespaces
sql 'alter tablespace  RMANTEST read write';
sql 'alter tablespace  RMANTEST offline';
# enable autobackups after TSPITR is finished
sql 'begin dbms_backup_restore.AutoBackupFlag(TRUE); end;';
}
executing Memory Script

sql statement: alter tablespace  RMANTEST read write

sql statement: alter tablespace  RMANTEST offline

sql statement: begin dbms_backup_restore.AutoBackupFlag(TRUE); end;

Removing automatic instance
Automatic instance removed
auxiliary instance file C:\USERS\JAYVA\DESKTOP\PENDING\TARGET\DATAFILE\O1_MF_TEMP_7XKKD0J6_.TMP deleted
auxiliary instance file C:\USERS\JAYVA\DESKTOP\PENDING\TARGET\ONLINELOG\O1_MF_3_7XKKCT2J_.LOG deleted
auxiliary instance file C:\USERS\JAYVA\DESKTOP\PENDING\TARGET\ONLINELOG\O1_MF_2_7XKKCR09_.LOG deleted
auxiliary instance file C:\USERS\JAYVA\DESKTOP\PENDING\TARGET\ONLINELOG\O1_MF_1_7XKKCP4T_.LOG deleted
auxiliary instance file C:\USERS\JAYVA\DESKTOP\PENDING\TARGET\DATAFILE\O1_MF_SYSAUX_7XKK6QX0_.DBF deleted
auxiliary instance file C:\USERS\JAYVA\DESKTOP\PENDING\TARGET\DATAFILE\O1_MF_UNDOTBS1_7XKK6QVF_.DBF deleted
auxiliary instance file C:\USERS\JAYVA\DESKTOP\PENDING\TARGET\DATAFILE\O1_MF_SYSTEM_7XKK6QW8_.DBF deleted
auxiliary instance file C:\USERS\JAYVA\DESKTOP\PENDING\TARGET\CONTROLFILE\O1_MF_7XKK67HZ_.CTL deleted
Finished recover at 13-JUN-12

RMAN>
RMAN>
RMAN>

Recovery Manager complete.

Below is the relevant part of the alert log:
alter tablespace RMANTEST offline immediate
Completed: alter tablespace RMANTEST offline immediate
Wed Jun 13 22:25:09 2012
drop tablespace  RMANTEST including contents keep datafiles
Completed: drop tablespace  RMANTEST including contents keep datafiles
Wed Jun 13 22:25:18 2012
DM00 started with pid=36, OS id=13420, job SYS.TSPITR_IMP_ghas
Wed Jun 13 22:25:19 2012
DW00 started with pid=37, OS id=12164, wid=1, job SYS.TSPITR_IMP_ghas
Plug in tablespace RMANTEST with datafile
  'C:\APP\JAYVA\ORADATA\TARGET\RMAN.DBF'
Wed Jun 13 22:25:26 2012
alter tablespace  RMANTEST read write
Completed: alter tablespace  RMANTEST read write
alter tablespace  RMANTEST offline
Completed: alter tablespace  RMANTEST offline

Once the process is complete, check the datafile status.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select tablespace_name,status from dba_data_files where tablespace_name='RM
ANTEST';

TABLESPACE_NAME                STATUS
------------------------------ ---------
RMANTEST                       AVAILABLE

SQL>
SQL>
SQL> conn rman/rman
Connected.
SQL> select count(1) from t1;
select count(1) from t1
                     *
ERROR at line 1:
ORA-00376: file 6 cannot be read at this time
ORA-01110: data file 6: 'C:\APP\JAYVA\ORADATA\TARGET\RMAN.DBF'


SQL> select tablespace_name,status,file_name from dba_data_files where tablespac
e_name='RMANTEST';

TABLESPACE_NAME                STATUS
------------------------------ ---------
FILE_NAME
--------------------------------------------------------------------------------

RMANTEST                       AVAILABLE
C:\APP\JAYVA\ORADATA\TARGET\RMAN.DBF


SQL> alter tablespace rmantest online;

Tablespace altered.

SQL> select count(1) from t1;

  COUNT(1)
----------
      1000

SQL>

So we have rolled back the changes exactly where required.

It also has some limitations:

The database must be in ARCHIVELOG mode.
A full database backup taken before the target time must be available.
TSPITR does not recover optimizer statistics, so fresh statistics must be gathered afterwards.
Once a tablespace has been recovered with TSPITR, older backups of that tablespace can no longer be used for it, so a full backup is recommended after the process completes (a quick sketch of both follow-up steps is shown below).
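
As a follow-up to the last two points, a minimal sketch of gathering statistics on the recovered schema and taking a fresh backup of the tablespace (schema and tablespace names are from this test; adjust as needed):

SQL> exec DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'RMAN', cascade => TRUE);

RMAN> backup tablespace rmantest;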

For more information, please refer to the Oracle documentation.

Suggestions are always welcome :)

Monday, June 11, 2012

Result cache

Result caching existed in earlier versions as well, but from Oracle 11g it has been enhanced to a great level, so let's try to learn more about this feature. The result cache does exactly what its name implies: it caches the results of queries and puts them into a slice of the shared pool. If you have a query that is executed often and reads data that rarely changes, this feature can increase performance significantly. When the query executes, Oracle first looks in the result cache to see if a previous result for this query is cached. If so, it retrieves those results instead of reading all the blocks and computing the result again. The initial execution runs in the normal time, but subsequent executions will seem nearly instantaneous.

Important points to understand:

Oracle can cache the results of both SQL and PL/SQL (a PL/SQL sketch follows this list).
Memory allocated for the result cache is taken from the shared pool.
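
On the PL/SQL side, a function can be declared with the RESULT_CACHE clause so that its return value is cached per set of input parameters; a minimal sketch (the function name is illustrative, and the table is the one used later in this post):

CREATE OR REPLACE FUNCTION get_row_count RETURN NUMBER
  RESULT_CACHE   -- in 11.2 dependencies are tracked automatically (RELIES_ON is optional)
IS
  l_cnt NUMBER;
BEGIN
  SELECT COUNT(1) INTO l_cnt FROM JAY.RESULTCACHE1;
  RETURN l_cnt;
END;
/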

Let's look at the important parameters used to administer the result cache:

SQL>  select name,value from v$parameter where name like 'result_cache%';

NAME                           VALUE
------------------------------ -------------------------
result_cache_mode              MANUAL
result_cache_max_size          47677440
result_cache_max_result        5
result_cache_remote_expiration 0

Let's understand these parameters:

result_cache_mode: The default value is MANUAL, which means you have to pass a hint in the query to use the result cache. The other option is FORCE, which means all query results are cached, provided they fit in the cache. In FORCE mode, the NO_RESULT_CACHE hint has to be passed for queries whose results you do not want cached.
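
A quick illustration of the two modes and the corresponding hints (a sketch; the query itself is illustrative):

-- Default MANUAL mode: opt a query in with the RESULT_CACHE hint
SELECT /*+ RESULT_CACHE */ COUNT(1) FROM JAY.RESULTCACHE1;

-- FORCE mode: everything that fits is cached; opt a query out with NO_RESULT_CACHE
ALTER SESSION SET result_cache_mode = FORCE;
SELECT /*+ NO_RESULT_CACHE */ COUNT(1) FROM JAY.RESULTCACHE1;
ALTER SESSION SET result_cache_mode = MANUAL;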

result_cache_max_size: The parameter that defines the size of the result cache. Keep in mind that the result cache is part of the shared pool and its maximum size is capped at 75% of the shared pool.

result_cache_max_result: Specifies the percentage of RESULT_CACHE_MAX_SIZE that any single query result set can occupy (this prevents one big result set from pushing all the smaller ones out of the cache).

result_cache_remote_expiration: Defines the number of minutes for which results that depend on remote objects remain valid. Set it to a positive value only if the remote database is accessed mostly for reading.

I will test the feature with a table whose data is static and does not change often.

10:21:02 SQL> set autotrace on
10:21:08 SQL> select count(1) from JAY.RESULTCACHE1;

  COUNT(1)
----------
  16207591


Execution Plan
----------------------------------------------------------
Plan hash value: 821388131

-----------------------------------------------------------------------------
| Id  | Operation          | Name           | Rows  | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                |     1 | 16648   (2)| 00:03:20 |
|   1 |  SORT AGGREGATE    |                |     1 |            |          |
|   2 |   TABLE ACCESS FULL| RESULTCACHE1 |    16M| 16648   (2)| 00:03:20 |
-----------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
         84  recursive calls
          0  db block gets
      60637  consistent gets
      60494  physical reads
          0  redo size
        529  bytes sent via SQL*Net to client
        523  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
          1  rows processed

10:22:14 SQL>

Consistent gets and physical reads are the two figures I want to watch. To test the feature from a cold start, I restarted the instance.
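
On a test system, restarting is not strictly necessary; the relevant caches can also be flushed (a sketch, advisable only on a test instance):

SQL> alter system flush buffer_cache;
SQL> exec DBMS_RESULT_CACHE.FLUSH;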

SQL> select /*+ result_cache */ count(1) from JAY.RESULTCACHE1;

  COUNT(1)
----------
  16207591


Execution Plan
----------------------------------------------------------
Plan hash value: 821388131

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 | 16648   (2)| 00:03:20 |
|   1 |  RESULT CACHE       | cbchvkkjudm1v8shgd97yfvwt4 |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| RESULTCACHE1             |    16M| 16648   (2)| 00:03:20 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(JAY.RESULTCACHE1); attributes=(single-row); name="select /*+ result_cache */ count(1) from JAY.RESULTCACHE1"

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
          4  recursive calls
          0  db block gets
      60581  consistent gets
      60394  physical reads
          0  redo size
        529  bytes sent via SQL*Net to client
        523  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>

For the first execution we still see consistent gets and physical reads. Now let's check the second execution.

SQL> select /*+ result_cache */ count(1) from JAY.RESULTCACHE1;

  COUNT(1)
----------
  16207591


Execution Plan
----------------------------------------------------------
Plan hash value: 821388131

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 | 16648   (2)| 00:03:20 |
|   1 |  RESULT CACHE       | cbchvkkjudm1v8shgd97yfvwt4 |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| RESULTCACHE1             |    16M| 16648   (2)| 00:03:20 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(JAY.RESULTCACHE1); attributes=(single-row); name="select /*+ result_cache */ count(1) from JAY.RESULTCACHE1"

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        529  bytes sent via SQL*Net to client
        523  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Below is the overall result after checking a couple more times:

Elapsed: 00:00:14.85
consistent gets: 60581
physical reads: 60394

Elapsed: 00:00:00.06
consistent gets: 0
physical reads: 0

Elapsed: 00:00:00.04
consistent gets: 0
physical reads: 0


Many people confuse the result cache with the buffer pool and the keep pool, but they are definitely different things.

The result cache exists in the shared pool and has nothing to do with the buffer pool: the buffer pool caches data blocks, whereas the result cache caches query results.



I checked with the query below to confirm that the result cache sits inside the shared pool:

SQL> select * from v$sgastat where pool='shared pool' and name like 'Result%';

POOL         NAME                                BYTES
------------ ------------------------------ ----------
shared pool  Result Cache: State Objs             2896
shared pool  Result Cache                       163976
shared pool  Result Cache: Memory Mgr              200
shared pool  Result Cache: Cache Mgr               208
shared pool  Result Cache: Bloom Fltr             2048

SQL>


Oracle provides the DBMS_RESULT_CACHE package to administer the result cache. The package can be used for:
STATUS: displays the current status of the result cache.
FLUSH: flushes the cache.
MEMORY_REPORT: prints a memory usage report.


SQL> select dbms_result_cache.status from dual;

STATUS
--------------------------------------------------------------------------------
ENABLED

The above confirms that the result cache parameter is set to a non-zero value, i.e. the cache is enabled.


SQL> show parameter result_cache_max_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
result_cache_max_size                big integer 46560K
SQL>


SQL> set serveroutput on size 99999
SQL> exec DBMS_RESULT_CACHE.MEMORY_REPORT(TRUE)
R e s u l t   C a c h e   M e m o r y   R e p o r t
[Parameters]
Block Size          = 1K bytes
Maximum Cache Size  = 46560K bytes (46560 blocks)
Maximum Result Size = 2328K bytes (2328 blocks)
[Memory]
Total Memory = 169328 bytes [0.004% of the Shared Pool]
... Fixed Memory = 5352 bytes [0.000% of the Shared Pool]
....... Memory Mgr = 200 bytes
....... Cache Mgr  = 208 bytes
....... Bloom Fltr = 2K bytes
....... State Objs = 2896 bytes
... Dynamic Memory = 163976 bytes [0.003% of the Shared Pool]
....... Overhead = 131208 bytes
........... Hash Table    = 64K bytes (4K buckets)
........... Chunk Ptrs    = 24K bytes (3K slots)
........... Chunk Maps    = 12K bytes
........... Miscellaneous = 28808 bytes
....... Cache Memory = 32K bytes (32 blocks)
........... Unused Memory = 30 blocks
........... Used Memory = 2 blocks
............... Dependencies = 1 blocks (1 count)
............... Results = 1 blocks
................... SQL     = 1 blocks (1 count)

PL/SQL procedure successfully completed.

SQL>




SQL> exec DBMS_RESULT_CACHE.flush;

PL/SQL procedure successfully completed.

SQL>


When to use and when not to use:

Any change to a dependent object invalidates the entire cached result, which then needs to be flushed; from version 11.2 onwards Oracle manages this automatically.

In RAC, each instance manages its own result cache, but the invalidation of a cached result on one instance is propagated to all the other instances.

Very heavy access to the cache serializes on the result cache latch and can itself become a performance issue, so this should be monitored as well.
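
Whether cached results are still usable or have been invalidated by DML on a dependent object can be checked in V$RESULT_CACHE_OBJECTS (a sketch):

SQL> select type, status, name from v$result_cache_objects where type = 'Result';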




SQL> select * from v$latchname where name like 'Result Cache%'
  2  ;

    LATCH# NAME                                                                   HASH
---------- ---------------------------------------------------------------- ----------
       405 Result Cache: RC Latch                                           1054203712
       406 Result Cache: SO Latch                                            986859868
       407 Result Cache: MB Latch                                            995186388

SQL>



The background process responsible for this feature can also be seen:

SQL> select description from V$BGPROCESS where name like 'RC%';

DESCRIPTION
----------------------------------------------------------------
Result Cache: Background

SQL>

There is more to learn about this feature. As always, suggestions and corrections are welcome.