lundi 11 juin 2012

How to know which options are installed on a database

SQL> SELECT status, version, comp_name FROM dba_registry;

vendredi 24 février 2012

Oracle11g: Methodology to trace a SQL request

Just a method, kept here for reference, to trace a query or a set of queries during testing and know what is happening in the database :

Clean the buffer cache :
SQL> alter system flush buffer_cache;
Set the instance in trace mode (system-wide here) :
SQL> alter system set events 'sql_trace level=12' ;
Do your test...

Take the instance out of trace mode :
SQL> alter system set events 'sql_trace off' ;

Now look for the traces using the adrci tool :
$> adrci
ADRCI: Release 11.2.0.1.0 - Production on Fri Feb 24 11:33:23 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/opt/oracle"
adrci> show home
ADR Homes:
diag/tnslsnr/vgerndpud133/listener_ota4g
diag/tnslsnr/vgerndpud133/ota4g
diag/tnslsnr/vgerndpud133/listener
diag/rdbms/test/TEST
adrci> set home diag/rdbms/test/TEST
adrci> show tracefiles %ora% -rt
   24-FEB-12 10:56:00  diag/rdbms/test/TEST/trace/TEST_ora_5194.trc
   24-FEB-12 10:55:20  diag/rdbms/test/TEST/trace/TEST_ora_19558.trc
   24-FEB-12 10:55:16  diag/rdbms/test/TEST/trace/TEST_ora_25982.trc
   21-FEB-12 17:36:29  diag/rdbms/test/TEST/trace/TEST_ora_5093.trc
   21-FEB-12 17:36:24  diag/rdbms/test/TEST/trace/TEST_ora_5050.trc
   21-FEB-12 17:35:49  diag/rdbms/test/TEST/trace/TEST_ora_5007.trc
   21-FEB-12 17:35:33  diag/rdbms/test/TEST/trace/TEST_ora_4992.trc
   21-FEB-12 17:26:49  diag/rdbms/test/TEST/trace/TEST_ora_4202.trc
   21-FEB-12 17:21:27  diag/rdbms/test/TEST/trace/TEST_ora_3741.trc
   21-FEB-12 17:21:21  diag/rdbms/test/TEST/trace/TEST_ora_3700.trc
   12-JAN-12 17:44:07  diag/rdbms/test/TEST/incident/incdir_117854/TEST_ora_8736_i117854.trc
   12-JAN-12 10:31:11  diag/rdbms/test/TEST/incident/incdir_117846/TEST_ora_18962_i117846.trc
adrci> view TEST_ora_5194.trc
Trace file /opt/oracle/diag/rdbms/test/TEST/trace/TEST_ora_5194.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning option
ORACLE_HOME = /opt/oracle/11.2.0
System name:    Linux
Node name:      vgerndpud133
Release:        2.6.18-92.el5
Version:        #1 SMP Tue Apr 29 13:16:12 EDT 2008
Machine:        i686
Instance name: TEST
Redo thread mounted by this instance: 1
Oracle process number: 25
Unix process pid: 5194, image: oracle@vgerndpud133


*** 2012-02-24 10:53:49.389
*** SESSION ID:(135.251) 2012-02-24 10:53:49.389
*** CLIENT ID:() 2012-02-24 10:53:49.389
*** SERVICE NAME:(SYS$USERS) 2012-02-24 10:53:49.389
*** MODULE NAME:(emagent_SQL_oracle_database) 2012-02-24 10:53:49.389
*** ACTION NAME:(UserLocks) 2012-02-24 10:53:49.389

WAIT #1: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1330077229389510
WAIT #1: nam='SQL*Net message from client' ela= 186 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1330077229390167
CLOSE #1:c=0,e=10,dep=0,type=3,tim=1330077229390205
=====================
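Once the relevant trace file has been identified with adrci, tkprof can be used to produce a readable summary of it (a minimal sketch; the output file name and the sort option are arbitrary choices) :
Linux> tkprof /opt/oracle/diag/rdbms/test/TEST/trace/TEST_ora_5194.trc /tmp/TEST_ora_5194_report.txt sys=no waits=yes sort=exeela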

For more information on events for tracing purposes, see: How To Use The New 11g Events++ Syntax For Easier SQL Tracing Of Datapump Operations? [ID 813737.1]
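As an approximate illustration of that events++ syntax (an assumption drawn from the note above; the exact filter keywords, such as ospid, should be double-checked against it), tracing can be scoped to a single server process instead of the whole system :
SQL> alter system set events 'sql_trace {process: ospid = 5194} level=12';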

How to trace a fast user session :
See below an extract from dba-village: http://www.dba-village.com/village/dvp_tips.TipDetails?TipIdA=3764
If you have to trace a "fast" user session (for example, one initiated by clicking on a button in the GUI of an application), you don't have enough time to query V$SESSION to initiate a trace for the session.

A very easy way is to use a database "on logon" trigger; here is an example:

CREATE OR REPLACE TRIGGER ON_LOGON_SCOTT
AFTER LOGON ON DATABASE
WHEN ( USER = 'SCOTT' )
BEGIN
execute immediate
'alter session set events ''10046 trace name context forever, level 12''';
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
/

Remarks:
1) The “exception” block is of utmost importance to avoid big trouble if something goes wrong with the trigger.

2) The trigger can contain more conditions (for example: if sysdate between XXX and YYY); see the sketch after these remarks.

3) Once the trigger is enabled, all new sessions of the user will be traced. To suspend this behaviour momentarily, just perform:
ALTER TRIGGER ON_LOGON_SCOTT DISABLE;

4) Suggestion: when not necessary anymore, drop the trigger instead of keeping it “disabled” forever (maybe somebody would re-enable it by mistake):
DROP TRIGGER ON_LOGON_SCOTT; 
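As an illustration of remark 2, here is a variant of the same trigger that only enables tracing during a hypothetical business-hours window (a sketch; adjust the condition to your needs):

CREATE OR REPLACE TRIGGER ON_LOGON_SCOTT
AFTER LOGON ON DATABASE
WHEN ( USER = 'SCOTT' )
BEGIN
  -- only trace sessions opened between 08:00 and 18:59 (hypothetical condition)
  IF TO_CHAR(SYSDATE, 'HH24') BETWEEN '08' AND '18' THEN
    execute immediate
      'alter session set events ''10046 trace name context forever, level 12''';
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    NULL;
END;
/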


See also : http://www.oracle-base.com/articles/misc/sql-trace-10046-trcsess-and-tkprof.php
http://www.dba-village.com/village/dvp_tips.TipDetails?TipIdA=3764
http://guyharrison.typepad.com/oracleguy/2006/09/10g_tracing_qui.html
http://psoug.org/reference/dbms_monitor.html
http://dbasolutions.wikispaces.com/How+to+set+trace+on+SQLPLUS

Trace Analyzer :https://support.oracle.com/epmos/faces/ui/km/DocumentDisplay.jspx?_afrLoop=367664814786373&id=224270.1&_afrWindowMode=0&_adf.ctrl-state=4et2sref2_4

 List of Oracle event codes :
Code  Description
10000  Controlfile debug event, name 'control_file'
10001  Controlfile crash event1
10002  Controlfile crash event2
10003  Controlfile crash event3
10004  Controlfile crash event4
10005  Trace latch operations for debugging
10006  Testing - block recovery forced
10007  Log switch debug crash after new log select, thread %s
10008  Log switch debug crash after new log header write, thread %s
10009  Log switch debug crash after old log header write, thread %s
10010  Begin Transaction
10011  End Transaction
10012  Abort Transaction
10013  Instance Recovery
10014  Roll Back to Save Point
10015  Undo Segment Recovery
10016  Undo Segment extend
10017  Undo Segment Wrap
10018  Data Segment Create
10019  Data Segment Recovery
10020  Partial link restored to linked list (KSG)
10021  Latch cleanup for state objects (KSS)
10022  Trace ktsgsp
10023  Create Save Undo Segment
10024  Write to Save Undo
10025  Extend Save Undo Segment
10026  Apply Save Undo
10027  Latch cleanup for enqueue locks (KSQ)
10028  Latch cleanup for enqueue resources (KSQ)
10029  Session logon (KSU)
10030  Session logoff (KSU)
10031  Sort debug event (S*)
10032  Sort statistics (SOR*)
10033  Sort run information (SRD*/SRS*)
10034  Access path analysis (APA*)
10035  Parse SQL statement (OPIPRS)
10036  Create remote row source (QKANET)
10037  Allocate remote row source (QKARWS)
10038  Dump row source tree (QBADRV)
10039  Type checking (OPITCA)
10040  Dirty cache list
10041  Dump undo records skipped
10042  Trap error during undo application
10043  Check consistency of owner/waiter/converter lists in KSQ
10044  Free list undo operations
10045  Free list update operations - ktsrsp, ktsunl
10046  Enable SQL statement timing
10047  Trace switching of sessions
10048  Undo segment shrink
10049  Protect library cache memory heaps
10050  Sniper trace
10051  Trace OPI calls
10052  Don't clean up obj$
10053  CBO Enable optimizer trace
10054  Trace UNDO handling in MLS
10055  Trace UNDO handing
10056  Dump analyze stats (kdg)
10057  Suppress file names in error messages
10058  Use table scan cost in tab$.spare1
10059  Simulate error in logfile create/clear
10060  CBO Enable predicate dump
10061  Disable SMON from cleaning temp segment
10062  Disable usage of OS Roles in osds
10063  Disable usage of DBA and OPER privileges in osds
10064  Thread enable debug crash level %s, thread %s
10065  Limit library cache dump information for state object dump
10066  Simulate failure to verify file
10067  Force redo log checksum errors - block number
10068  Force redo log checksum errors - file number
10069  Trusted Oracle test event
10070  Force datafile checksum errors - block number
10071  Force datafile checksum errors - file number
10072  Protect latch recovery memory
10073  Have PMON dump info before latch cleanup
10074  Default trace function mask for kst
10075  CBO Disable outer-join to regular join conversion
10076  CBO Enable cartesian product join costing
10077  CBO Disable view-merging optimization for outer-joins
10078  CBO Disable constant predicate elimination optimization
10079  Trace data sent/received via SQL*Net
10080  Dump a block on a segment list which cannot be exchanged
10081  Segment High Water Mark has been advanced
10082  Free list head block is the same as the last block
10083  A brand new block has been requested from space management
10084  Free list becomes empty
10085  Free lists have been merged
10086  CBO Enable error if kko and qka disagree on oby sort
10087  Disable repair of media corrupt data blocks
10088  CBO Disable new NOT IN optimization
10089  CBO Disable index sorting
10090  Invoke other events before crash recovery
10091  CBO Disable constant predicate merging
10092  CBO Disable hash join
10093  CBO Enable force hash joins
10094  Before resizing a data file
10095  Dump debugger commands to trace file
10096  After the cross instance call when resizing a data file
10097  After generating redo when resizing a data file
10098  After the OS has increased the size of a data file
10099  After updating the file header with the new file size
10100  After the OS has decreased the size of a data file
10101  Atomic redo write recovery
10102  Switch off anti-joins
10103  CBO Disable hash join swapping
10104  Dump hash join statistics to trace file
10105  CBO Enable constant pred trans and MPs w WHERE-clause
10106  CBO Disable evaluating correlation pred last for NOT IN
10107  CBO Always use bitmap index
10108  CBO Don't use bitmap index
10109  CBO Disable move of negated predicates
10110  CBO Try index rowid range scans
10111  Bitmap index creation switch
10112  Bitmap index creation switch
10113  Bitmap index creation switch
10114  Bitmap index creation switch
10115  CBO Bitmap optimization use maximal expression
10116  CBO Bitmap optimization switch
10117  CBO Disable new parallel cost model
10118  CBO Enable hash join costing
10119  QKA Disable GBY sort elimination
10120  Generate relative file # different from absolute
10121  CBO Don't sort bitmap chains
10122  Disable transformation of count(col) to count(*)
10123  QKA Disable Bitmap And-EQuals
10124  Force creation of segmented arrays by kscsAllocate
10125  Disable remote sort elimination
10126  Debug oracle java xa
10127  Disable remote query block operation
10128  Dump Partition Pruning Information
10129  Alter histogram lookup for remote queries
10130  Sort disable readaheads
10131  Use v$sql_plan code path for explain plan
10132  Dump plan after compilation
10133  Testing for SQL Memory Management
10134  Tracing for SQL Memory Management for session
10135  CBO do not count 0 rows partitions
10136  CBO turn off fix for bug 1089848
10137  CBO turn off fix for bug 1344111
10138  CBO turn off fix for bug 1577003
10139  CBO turn off fix for bug 1386119
10140  CBO turn off fix for bug 1332980
10141  CBO disable additional keys for inlist in bitmap optimization
10142  CBO enable dynamic selectivity estimation
10143  CBO force dynamic selectivity estimation (if enabled)
10145  Test auditing network errors
10146  Enable Oracle TRACE collection
10148  Use pre-7.3.3 random generator
10149  Allow the creation of constraints with illegal date constants
10150  Import exceptions
10151  Force duplicate dependency removal
10152  CBO don't consider function costs in plans
10153  Switch to use public synonym if private one does not translate
10154  Switch to disallow synonyms in DDL statements
10155  CBO disable generation of transitive OR-chains
10156  CBO disable index fast full scan
10157  CBO disable index access path for in-list
10158  CBO preserve predicate order in post-filters
10159  CBO disable order-by sort pushdown into domain indexes
10160  CBO disable use of join index
10161  CBO recursive semi-join on/off-switch
10162  CBO join-back elimination on/off-switch
10163  CBO join-back elimination on/off-switch
10164  CBO disable subquery-adjusted cardinality fix
10165  Mark session to be aborted during shutdown normal
10166  Trace long operation statistics updates
10167  CBO use old index MIN/MAX optimization
10168  CBO disable single-table predicate predicate generation
10169  CBO disable histograms for multi partitions
10170  CBO use old bitmap costing
10171  CBO disable transitive join predicates
10172  CBO force hash join back
10173  CBO no constraint-based join-back elimination
10174  View join-back elimination switch
10175  CBO star transformation switch
10176  CBO colocated join switch
10177  CBO colocated join switch
10178  CBO turn off hash cluster filtering through memcmp
10179  CBO turn off transitive predicate replacement
10180  Temp table transformation print error messages
10181  CBO disable multi-column in-list processing
10182  CBO disable generation of implied predicates
10183  CBO disable cost rounding
10184  CBO disable OR-exp if long inlist on bitmap column
10185  CBO force index joins
10186  CBO disable index join
10187  CBO additional index join switch
10188  CBO additional index join switch
10189  CBO turn off FFS null fix
10190  Analyze use old frequency histogram collection and density
10191  Avoid conversion of in-lists back to OR-expanded form
10192  Nopushdown when number of groups exceed number of rows
10193  Force repeatable sampling with specified seed
10194  CBO disable new LIKE selectivity heuristic
10195  CBO don't use check constraints for transitive predicates
10196  CBO disable index skip scan
10197  CBO force index skip scan
10198  Check undo record
10199  Set parameter in session
10200  Consistent read buffer status
10201  Consistent read undo application
10202  Consistent read block header
10203  Block cleanout
10204  Signal recursive extend
10205  Row cache debugging
10206  Transaction table consistent read
10207  Consistent read transactions' status report
10208  Consistent read loop check
10209  Enable simulated error on controlfile
10210  Check data block integrity
10211  Check index block integrity
10212  Check cluster integrity
10213  Crash after controlfile write
10214  Simulate write errors on controlfile
10215  Simulate read errors on controlfile
10216  Dump controlfile header
10217  Debug sequence numbers
10218  Dump uba of applied undo
10219  Monitor multi-pass row locking
10220  Show updates to the transaction table
10221  Show changes done with undo
10222  Row cache
10223  Transaction layer - turn on verification codes
10224  Index block split/delete trace
10225  Free/used extent row cache
10226  Trace CR applications of undo for data operations
10227  Verify (multi-piece) row structure
10228  Trace application of redo by kcocbk
10229  Simulate I/O error against datafiles
10230  Check redo generation by copying before applying
10231  Skip corrupted blocks on _table_scans_
10232  Dump corrupted blocks symbolically when kcbgotten
10233  Skip corrupted blocks on index operations
10234  Trigger event after calling kcrapc to do redo N times
10235  Check memory manager internal structures
10236  Library cache manager
10237  Simulate ^C (for testing purposes)
10238  Instantiation manager
10239  Multi-instance library cache manager
10240  Dump dba's of blocks that we wait for
10241  Remote SQL execution tracing/validation
10242  Suppress OER 2063 (for testing distrib w/o different error log)
10243  Simulated error for test %s of K2GTAB latch cleanup
10244  Make tranids in error msgs print as 0.0.0 (for testing)
10245  Simulate lock conflict error for testing PMON
10246  Print trace of PMON actions to trace file
10247  Turn on scgcmn tracing. (VMS ONLY)
10248  Turn on tracing for dispatchers
10249  Turn on tracing for multi-stated servers
10250  Trace all allocate and free calls to the topmost SGA heap
10251  Check consistency of transaction table and undo block
10252  Simulate write error to data file header
10253  Simulate write error to redo log
10254  Trace cross-instance calls
10255  Pl/sql parse checking
10256  Turn off shared server load balancing
10257  Trace shared server load balancing
10258  Force shared servers to be chosen round-robin
10259  Get error message text from remote using explicit call
10260  Trace calls to SMPRSET (VMS ONLY)
10261  Limit the size of the PGA heap
10262  Don't check for memory leaks
10263  Don't free empty PGA heap extents
10264  Collect statistics on context area usage (x$ksmcx)
10265  Keep random system generated output out of error messages
10266  Trace OSD stack usage
10267  Inhibit KSEDMP for testing
10268  Don't do forward coalesce when deleting extents
10269  Don't do coalesces of free space in SMON
10270  Debug shared cursors
10271  Distributed transaction after COLLECT
10272  Distributed transaction before PREPARE
10273  Distributed transaction after PREPARE
10274  Distributed transaction before COMMIT
10275  Distributed transaction after COMMIT
10276  Distributed transaction before FORGET
10277  Cursor sharing (or not) related event (used for testing)
10278  Internal testing
10279  Simulate block corruption in kdb4chk
10280  Internal testing - segmentation fault during crash recovery
10281  Maximum time to wait for process creation
10282  Inhibit signalling of other backgrounds when one dies
10283  Simulate asynch I/O never completing
10284  Simulate zero/infinite asynch I/O buffering
10285  Simulate controlfile header corruption
10286  Simulate controlfile open error
10287  Simulate archiver error
10288  Do not check block type in ktrget
10289  Do block dumps to trace file in hex rather than formatted
10290  Kdnchk - checkvalid event - not for general purpose use.
10291  Die in tbsdrv to test controlfile undo
10292  Dump uet entries on a 1561 from dtsdrv
10293  Dump debugging information when doing block recovery
10294  Enable PERSISTENT DLM operations on non-compliant systems
10295  Die after file header update during cf xact
10296  Disable ORA-379
10297  Customize dictionary object number cache
10298  Ksfd i/o tracing
10299  Trace prefetch tracking decisions made by CKPT
10300  Disable undo compatibility check at database open
10301  Enable LCK timeout table consistency check
10302  Trace create or drop internal trigger
10303  Trace loading of library cache for internal triggers
10304  Trace replication trigger
10305  Trace updatable materialized view trigger
10306  Trace materialized view log trigger
10307  Trace RepCat execution
10308  Replication testing event
10309  Trigger Debug event
10310  Trace synchronous change table trigger
10311  Disable Flashback Table Timestamp checking
10312  Allow disable to log rows into the mapping table
10319  Trace PGA statistics maintenance
10320  Enable data layer (kdtgrs) tracing of space management calls
10321  Datafile header verification debug failure.
10323  Before committing an add datafile command
10324  Enable better checking of redo logs errors
10325  Trace control file record section expand and shrink operations
10326  Clear logfile debug crash at %s, log %s
10327  Simulate ORA-00235 error for testing
10328  Disable first-to-mount split-brain error, for testing
10329  Simulate lost write, test detection by two-pass recovery
10330  Clear MTTR statistics in checkpoint progress record
10331  Simulate resilvering during recovery
10332  Force ALTER SYSTEM QUIESCE RESTRICTED command to fail
10336  Do remote object transfer using remote SQL
10337  Enable padding owner name in slave sql
10340  Buffer queues sanity check for corrupted buffers
10341  Simulate out of PGA memory in DBWR during object reuse
10342  Raise unknown exception in ACQ_ADD when checkpointing
10343  Raise an out of memory exception-OER 4031 in ACQ_ADD
10344  Simulate kghxal returning 0 in ACQ_ADD but no exception
10345  Validate queue when linking or unlinking a buffer
10346  Check that all buffers for checkpoint have been written
10347  Dump active checkpoint entries and checkpoint buffers
10348  Test abnormal termination of process initiating file checkpoint
10349  Do not allow ckpt to complete
10350  Simulate more than one object & tsn id in object reuse
10351  Size of slots
10352  Report direct path statistics
10353  Number of slots
10354  Turn on direct read path for parallel query
10355  Turn on direct read path for scans
10356  Turn on hint usage for direct read
10357  Turn on debug information for direct path
10359  Turn off updates to control file for direct writes
10360  Enable dbwr consistency checking
10365  Turn on debug information for adaptive direct reads
10370  Parallel query server kill event
10371  Disable TQ hint
10372  Parallel query server kill event proc
10373  Parallel query server kill event
10374  Parallel query server interrupt (validate lock value)
10375  Turn on checks for statistics rollups
10376  Turn on table queue statistics
10377  Turn off load balancing
10378  Force hard process/range affinity
10379  Direct read for rowid range scans (unimplemented)
10380  Kxfp latch cleanup testing event
10381  Kxfp latch cleanup testing event
10382  Parallel query server interrupt (reset)
10383  Auto parallelization testing event
10384  Parallel dataflow scheduler tracing
10385  Parallel table scan range sampling method
10386  Parallel SQL hash and range statistics
10387  Parallel query server interrupt (normal)
10388  Parallel query server interrupt (failure)
10389  Parallel query server interrupt (cleanup)
10390  Trace parallel query slave execution
10391  Trace PX granule allocation/assignment
10392  Parallel query debugging bits
10393  Print parallel query statistics
10394  Generate a fake load to test adaptive and load balancing
10395  Adjust sample size for range table queues
10396  Circumvent range table queues for queries
10397  Suppress verbose parallel coordinator error reporting
10398  Enable timeouts in parallel query threads
10399  Trace buffer allocation
10400  Turn on system state dumps for shutdown debugging
10401  Turn on IPC (ksxp) debugging
10402  Turn on IPC (skgxp) debugging
10403  Fake CPU number for default degree of parallelism
10404  Crash dbwr after write
10405  Emulate broken mirrors
10406  Enable datetime TIMESTAMP, INTERVAL datatype creation
10407  Enable datetime TIME datatype creation
10408  Disable OLAP builtin window function usage
10410  Trigger simulated communications errors in KSXP
10411  Simulate errors in IMR
10412  Trigger simulated errors in CGS/CM interface
10425  Enable global enqueue service open event trace
10426  Enable global enqueue service convert event trace
10427  Enable global enqueue service traffic controller event trace
10428  Enable tracing of global enqueue service distributed resource
10429  Enable tracing of global enqueue service IPC calls
10430  Enable tracing of global enqueue service AST calls
10431  Enable verification messages on pi consistency
10432  Enable tracing of global cache service fusion calls
10433  Global enqueue service testing event
10434  Enable tracing of global enqueue service multiple LMS
10435  Enable tracing of global enqueue service deadlock detection
10450  Signal ctrl-c in kdddca (drop column) after n rows
10500  Turn on traces for SMON
10510  Turn off SMON check to offline pending offline rollback segment
10511  Turn off SMON check to cleanup undo dictionary
10512  Turn off SMON check to shrink rollback segments
10515  Turn on event to use physical cleanout
10550  Signal error during create as select/create index after n rows
10560  Block type '%s'
10561  Block type '%s', data object# %s
10562  Error occurred while applying redo to data block (file# %s, block# %s)
10563  Test recovery had to corrupt data block (file# %s, block# %s) in order to proceed
10564  Tablespace %s
10565  Another test recovery session is active
10566  Test recovery has used all the memory it can use
10567  Redo is inconsistent with data block (file# %s, block# %s)
10568  Failed to allocate recovery state object: out of SGA memory
10570  Test recovery complete
10571  Test recovery canceled
10572  Test recovery canceled due to errors
10573  Test recovery tested redo from change %s to %s
10574  Test recovery did not corrupt any data block
10575  Give up restoring recovered datafiles to consistent state: out of memory
10576  Give up restoring recovered datafiles to consistent state: some error occurred
10577  Can not invoke test recovery for managed standby database recovery
10578  Can not allow corruption for managed standby database recovery
10579  Can not modify control file during test recovery
10580  Can not modify datafile header during test recovery
10581  Can not modify redo log header during test recovery
10582  The control file is not a backup control file
10583  Can not recovery file %s renamed as missing during test recovery
10584  Can not invoke parallel recovery for test recovery
10585  Test recovery can not apply redo that may modify control file
10586  Test recovery had to corrupt 1 data block in order to proceed
10587  Invalid count for ALLOW n CORRUPTION option
10588  Can only allow 1 corruption for normal media/standby recovery
10589  Test recovery had to corrupt %s data blocks in order to proceed
10590  Kga (argus debugger) test flags
10591  Kga (argus debugger) test flags
10592  Kga (argus debugger) test flags
10593  Kga (argus debugger) test flags
10594  Kga (argus debugger) test flags
10595  Kga (argus debugger) test flags
10596  Kga (argus debugger) test flags
10597  Kga (argus debugger) test flags
10598  Kga (argus debugger) test flags
10599  Kga (argus debugger) test flags
10600  Check cursor frame allocation
10601  Turn on debugging for cursor_sharing (literal replacement)
10602  Cause an access violation (for testing purposes)
10603  Cause an error to occur during truncate (for testing purposes)
10604  Trace parallel create index
10605  Enable parallel create index by default
10606  Trace parallel create index
10607  Trace index rowid partition scan
10608  Trace create bitmap index
10609  Trace for array index insertion
10610  Trace create index pseudo optimizer
10611  Causes migration to fail - testing only
10612  Prints debug information for auto-space managed segments
10613  Prints debug information for auto-space managed segments
10614  Operation not allowed on this segment
10615  Invalid tablespace type for temporary tablespace
10616  Operation not allowed on this tablespace
10617  Cannot create rollback segment in this tablespace
10618  Operation not allowed on this segment
10619  Avoid assertions when possible
10620  Operation not allowed on this segment
10621  Data block does not belong to the segment
10622  Test/trace online index (re)build
10623  Enable Index range scan Prefetch - testing only
10650  Disable cache-callback optimisation
10651  Incorrect file number block number specified
10666  Do not get database enqueue name
10667  Cause sppst to check for valid process ids
10690  Set shadow process core file dump type (Unix only)
10691  Set background process core file type (Unix only)
10700  Alter access violation exception handler
10701  Dump direct loader index keys
10702  Enable histogram data generation
10703  Simulate process death during enqueue get
10704  Print out information about what enqueues are being obtained
10705  Print Out Tracing information for every I/O done by ODSs
10706  Print out information about global enqueue manipulation
10707  Simulate process death for instance registration
10708  Print out Tracing information for skxf multi instance comms
10709  Enable parallel instances in create index by default
10710  Trace bitmap index access
10711  Trace bitmap index merge
10712  Trace bitmap index or
10713  Trace bitmap index and
10714  Trace bitmap index minus
10715  Trace bitmap index conversion to rowids
10716  Trace bitmap index compress/decompress
10717  Trace bitmap index compaction trace for index creation
10718  Event to disable automatic compaction after index creation
10719  Trace bitmap index dml
10720  Trace db scheduling
10721  Internal testing - temp table transformation
10722  Set parameters for CPU frequency calculation (debug)
10723  Internal testing - release buffer for buffer cache shrink
10730  Trace row level security policy predicates
10731  Dump SQL for CURSOR expressions
10740  Disables fix for bug 598861
10750  Test rollback segment blksize guessing for index array insert
10800  Disable Smart Disk scan
10801  Enable Smart Disk trace
10802  Reserved for Smart Disk
10803  Write timing statistics on cluster database recovery scan
10804  Reserved for ksxb
10806  Switch to 7.3 mode when detaching sessions
10807  Disable user id check when switching to a global transaction
10810  Trace snapshot too old
10811  Trace block cleanouts
10812  Trace Consistent Reads
10830  Trace group by sort row source
10841  Default uninitialized character set form to SQLCS_IMPLICIT
10850  Enable time manager tracing
10851  Allow Drop command to drop queue tables
10852  Enable dumping of the AQ statistics hash table
10853  Event for AQ statistics latch cleanup testing
10856  Disable AQ propagator from using streaming
10857  Force AQ propagator to use two-phase commit
10858  Crash the AQ propagator at different stages of commit
10859  Disable updates of message retry count
10860  Event for AQ admin disable new name parser
10861  Disable storing extended message properties
10862  Resolve default queue owner to current user in enqueue/dequeue
10900  Extent manager fault insertion event #%s
10902  Disable seghdr conversion for ro operation
10903  Force tablespaces to become locally managed
10904  Allow locally managed tablespaces to have user allocation
10905  Do cache verification (kcbcxx) on extent allocation
10906  Unable to extend segment after insert direct load
10907  Trace extent management events
10908  Trace temp tablespace events
10909  Trace free list events
10924  Import storage parse error ignore event
10925  Trace name context forever
10926  Trace name context forever
10927  Trace name context forever
10928  Trace name context forever
10929  Trace name context forever
10930  Trace name context forever
10931  Trace name context forever
10932  Trace name context forever
10933  Trace name context forever
10934  Reserved. Used only in version 7.x.
10935  Reserved. Used only in version 7.x.
10936  Trace name context forever
10937  Trace name context forever
10938  Trace name context forever
10939  Trace name context forever
10940  Trace name context forever
10941  Trace name context forever
10943  Trace name context forever
10944  Trace name context forever
10945  Trace name context forever
10975  Trace execution of parallel propagation
10976  Internal package related tracing
10977  Trace event for RepAPI
10979  Trace flags for join index implementation
10980  Prevent sharing of parsed query during Materialized View query generation
10981  Dscn computation-related event in replication
10982  Event to turn off CDC-format MV Logs
10983  Event to enable Create_Change_Table debugging
10984  Subquery materialized view-related event
10985  Event for NULL refresh of materialized views
10986  Do not use HASH_AJ in refresh
10987  Event for the support of caching table with object feature
10988  Event to get exclusive lock during materialized view refresh in IAS
10989  Event to internally create statistics MV
10999  Do not get database enqueue name
10999  End Pseudo-error debugging events


 How to trace my session :
First, get my session id :
SQL> select distinct sid from v$mystat;

       SID
----------
 29
Update the trace file name so that it contains a specific word. The trace file will be easier to identify because it will have the following format : <SID>_ora_<pid>_<my_word>.trc
SQL> alter session set tracefile_identifier="FMOSSE_SESSION";

Session altered.
So the trace file will have the following format : <SID>_ora_<pid>_FMOSSE_SESSION.trc
Under the sys user, set my session in trace mode:
SQL> exec dbms_monitor.session_trace_enable( session_id=>29, waits=>TRUE, binds=>TRUE);
Do your stuff here... Deactivate the trace for the session :
SQL> exec dbms_monitor.session_trace_disable( session_id=>29);
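To locate the generated trace file from the traced session itself, V$DIAG_INFO can be queried (a minimal sketch, run from the traced session):
SQL> SELECT value FROM v$diag_info WHERE name = 'Default Trace File';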
Generate a more readable trace file using tkprof:
Linux> tkprof FME_ora_2270_FMOSSE_SESSION.trc  /tmp/mytrace.txt explain=sys/<password for sys> sys=no waits=yes

lundi 6 février 2012

Oracle11g : Monitoring table or index



To monitor a table :
SQL> ALTER TABLE <my_table> monitoring;
To monitor an index :
SQL> ALTER INDEX <my_index> monitoring;
To flush statistics in the dba monitoring table :
SQL> exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
To see monitoring info in dba tables :
SQL> select * from dba_tab_modifications;
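To narrow the output to a single schema, the view can also be filtered (a minimal sketch; SCOTT is just a placeholder):
SQL> select table_owner, table_name, inserts, updates, deletes, timestamp from dba_tab_modifications where table_owner = 'SCOTT';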
To stop monitoring, use:
SQL> ALTER TABLE <my_table> NOMONITORING;

mercredi 11 janvier 2012

rlwrap : Command History and Auto-Completion for SQL*Plus on Unix/Linux


see http://www.idevelopment.info/data/Oracle/DBA_tips/SQL_PLUS/SQLPLUS_8.shtml
or see http://matthieucornillon.fr/2011/05/rlwrap-pour-sqlplus-auto-completion-historique-etc/

Oracle 11g: automatic startup/shutdown

To configure an instance for automatic startup and shutdown, edit the "/etc/oratab" file and for each instance define the following line :
SID:<value_of_the_oracle_home>:[Y|N]
For example, to register the DB11G instance for automatic startup :
DB11G:/opt/oracle/11.2.0:Y
Next, create a file called "/etc/init.d/dbora" as the root user, containing the following :
#!/bin/sh
# chkconfig: 345 99 10
# description: Oracle auto start-stop script.
#
ORACLE_HOME=/opt/oracle/11.2.0
ORACLE_OWNER=oracle

if [ ! -f $ORACLE_HOME/bin/dbstart ]
then
    echo "Oracle startup: cannot start"
    exit
fi

case "$1" in
    'start')
        # Start the Oracle databases:
        # The following command assumes that the oracle login 
        # will not prompt the user for any values
        su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/lsnrctl start"
        su - $ORACLE_OWNER -c $ORACLE_HOME/bin/dbstart
        touch /var/lock/subsys/dbora
        ;;
    'stop')
        # Stop the Oracle databases:
        # The following command assumes that the oracle login 
        # will not prompt the user for any values
        su - $ORACLE_OWNER -c $ORACLE_HOME/bin/dbshut
        su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/lsnrctl stop"
        rm -f /var/lock/subsys/dbora
        ;;
esac
Assign the dbora file to the dba group :
$> chgrp dba dbora
Use the chmod command to set the privileges to 750:
$> chmod 750 /etc/init.d/dbora
Associate the dbora service with the appropriate run levels and set it to auto-start using the following command:
$> chkconfig --add dbora
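Once registered, the script can also be exercised manually (assuming a Red Hat style service command, consistent with the chkconfig usage above):
$> service dbora start
$> service dbora stop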

links :
For Oracle 10.2 see http://docs.oracle.com/cd/B19306_01/server.102/b15658/strt_stp.htm#CFAHAHGA
For Oracle 11.2 see http://docs.oracle.com/cd/E11882_01/server.112/e10839/strt_stp.htm#BABGDGHF

Oracle 11g: Exp/imp how to get metadata


The idea is to use the exp and imp utilities to extract only the DDL in order to duplicate a schema.
To do this, first export the schema using exp :
$> export ORACLE_HOME=/opt/oracle/10.2.0
$> export PATH=$PATH:$ORACLE_HOME/bin
$> export ORACLE_SID=MYDB
$> export NLS_LANG=AMERICAN_AMERICA.UTF8
$> exp userid=system file=export.dmp log=export.log owner=<schemaname_to_export> consistent=y

Import the dump file but specify that the metadata (DDL) will be redirected to a file :
$> imp userid=system file=export.dmp indexfile=metadata.txt fromuser=<schemaname_to_import>

lundi 5 décembre 2011

Oracle11g: Flashback database


Flashback Database allows a user to rewind an entire database to a point in the past.
Flashback Database uses specific flashback logs that contain the database blocks that have been modified. These logs are stored in the Fast Recovery Area.

Configuring a database for flashback database
  1.  The database must be in archivelog mode
    SQL> ALTER DATABASE ARCHIVELOG;
  2.  The target retention policy must be configured
    SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=3000 SCOPE=BOTH;
    In this example, the retention is configured for 3000 minutes. There is no guarantee on the retention target; it depends on the amount of data stored in the flashback logs and on the size of the fast recovery area. If space is needed in the fast recovery area, flashback logs may be deleted.

  3.  Flashback Database must be explicitly activated
    SQL> ALTER DATABASE FLASHBACK ON;
    note: This activation must be done when the database is mounted in exclusive mode (NOT OPENED!). A complete configuration sequence is sketched below.
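Putting the three steps together, a minimal configuration sequence could look like this (a sketch; it assumes a single-instance database that is not yet in archivelog mode):
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=3000 SCOPE=BOTH;
SQL> ALTER DATABASE FLASHBACK ON;
SQL> ALTER DATABASE OPEN;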

How to flashback a database

To perform a flashback, the database must be in MOUNT EXCLUSIVE state :
SQL> FLASHBACK DATABASE TO TIMESTAMP (sysdate - interval '1' DAY);
SQL> FLASHBACK DATABASE TO SCN #;
SQL> FLASHBACK DATABASE TO RESTORE POINT <restore_point>;
SQL> FLASHBACK DATABASE TO SEQUENCE #;
Once the flashback is done, the database must be reopened with the RESETLOGS option.
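A complete worked sequence, reusing the first example above (a sketch; pick the FLASHBACK variant that matches your target):
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO TIMESTAMP (sysdate - interval '1' DAY);
SQL> ALTER DATABASE OPEN RESETLOGS;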

Flashback operations can be followed through the V$SESSION_LONGOPS view.

note : Flashback Database cannot be used if
  •  The control file has been restored or re-created
  •  A tablespace has been dropped
  •  The size of a file has been reduced
  •  The restoration is before a previous use of RESETLOGS (in this case use TO BEFORE RESETLOGS)
How to monitor a flashback database
To monitor disk space usage :
SQL> SELECT ESTIMATED_FLASHBACK_SIZE, FLASHBACK_SIZE FROM V$FLASHBACK_DATABASE_LOG;
To monitor the SCN window :
SQL> SELECT OLDEST_FLASHBACK_SCN, OLDEST_FLASHBACK_TIME FROM V$FLASHBACK_DATABASE_LOG;
To monitor flashback statistics :
SQL>SELECT BEGIN_TIME, END_TIME, FLASHBACK_DATA, DB_DATA, REDO_DATA, ESTIMATED_FLASHBACK_SIZE FROM V$FLASHBACK_DATABASE_STAT;
The V$FLASHBACK_DATABASE_STAT view contains 24 hours of statistics. Each row corresponds to one hour of flashback activity. This view is useful to estimate the size required by the flashback logs with respect to the retention target defined by the administrator.
FLASHBACK_DATA is the number of bytes of flashback data and REDO_DATA the number of bytes of redo generated for the period.
DB_DATA is the number of bytes of data blocks read and written.

The V$RECOVERY_FILE_DEST view gives information on fast recovery area disk usage (size quota, used space, reclaimable space and number of files for each location of the fast recovery area).
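For example (a minimal sketch), fast recovery area usage can be checked with:
SQL> SELECT NAME, SPACE_LIMIT, SPACE_USED, SPACE_RECLAIMABLE, NUMBER_OF_FILES FROM V$RECOVERY_FILE_DEST;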

Guaranteed restore point
With Flashback Database, it is possible to define a guaranteed restore point, which ensures that the fast recovery area keeps all the information needed to restore the database to that specific restore point (in this case, no loss of flashback logs).
To define a guaranteed restore point, you can issue:
SQL> CREATE RESTORE POINT <restore_point> GUARANTEE FLASHBACK DATABASE;
To create this kind of restore point, the following prerequisites are mandatory:
  •  The COMPATIBLE parameter must be greater than or equal to 10.2
  •  The database must be in archivelog mode
  •  Archive logs must be available before the creation of the guaranteed restore point
  •  The Fast recovery area must be configured and available
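Existing restore points, their guarantee status and their storage usage can be listed through V$RESTORE_POINT, and a restore point can be dropped when no longer needed (a minimal sketch):
SQL> SELECT NAME, SCN, TIME, GUARANTEE_FLASHBACK_DATABASE, STORAGE_SIZE FROM V$RESTORE_POINT;
SQL> DROP RESTORE POINT <restore_point>;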

Oracle11g : Flashback Data Archive [Oracle Total Recall]


Oracle Total Recall records all table modifications in dedicated tablespaces associated with a FLASHBACK ARCHIVE area.

To create a flashback archive you need the FLASHBACK ARCHIVE ADMINISTER privilege. With this privilege you can create a flashback archive :
SQL> CREATE FLASHBACK ARCHIVE fda TABLESPACE fda_tbs QUOTA 100M RETENTION 5 YEARS;

To register a table with a flashback archive you need the FLASHBACK ARCHIVE privilege.
SQL> GRANT FLASHBACK ARCHIVE on fda to <user>;
With this privilege you can register a table with a specific flashback archive :
SQL> ALTER TABLE my_table FLASHBACK ARCHIVE fda;

Once a flashback archive is registered for a table, you can query the table in the past :
SQL> SELECT * FROM my_table AS OF TIMESTAMP to_timestamp( '05/12/2001 10:41:00', 'DD/MM/YYYY HH24:MI:SS');

note : To make Flashback Data Archive fda the default Flashback Data Archive:
SQL> ALTER FLASHBACK ARCHIVE fda SET DEFAULT;
note : to modify the retention policy
SQL> ALTER FLASHBACK ARCHIVE fda MODIFY RETENTION 2 YEARS;
note : to purge data older than a specific time
SQL> ALTER FLASHBACK ARCHIVE fda PURGE BEFORE TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' DAY);
note : to drop an archive
SQL> DROP FLASHBACK ARCHIVE fda;
note : common DDL requests are supported by flashback archive, but for more complex ones it is necessary to disassociate the table before performing the modification. Total Recall provides the DISASSOCIATE_FBA and REASSOCIATE_FBA procedures of the DBMS_FLASHBACK_ARCHIVE package for this purpose (see the sketch below).
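A minimal sketch of that disassociate/reassociate cycle (the schema and table names are placeholders):
SQL> exec DBMS_FLASHBACK_ARCHIVE.DISASSOCIATE_FBA('FMOSSE', 'MY_TABLE');
-- perform the complex DDL on MY_TABLE here
SQL> exec DBMS_FLASHBACK_ARCHIVE.REASSOCIATE_FBA('FMOSSE', 'MY_TABLE');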

note :
  •  Automatic undo management must be enabled.
  •  The tablespace in which flashback data archive is created must have Automatic Segment Space Management activated.
  •  You cannot drop a table that is managed by Total Recall, but you can truncate it.

vendredi 2 décembre 2011

Oracle11g: Flashback Transaction Query, Flashback Transaction


Flashback Transaction allows the user to roll back an entire transaction.
This technology relies on undo data and on the available redo logs.

Pre-requisite
You need to :
  1.  Activate supplemental log data
  2.  Activate supplemental log data primary key
  3.  Grant execute on DBMS_FLASHBACK package
  4.  Grant the SELECT ANY TRANSACTION privilege to the user

These can be done like this :
SQL> alter database add supplemental log data;
SQL> alter database add supplemental log data (primary key) columns;
SQL> grant execute on DBMS_FLASHBACK to <user>;
SQL> grant SELECT ANY TRANSACTION to <user>;

Transaction back-out
See the excellent example from this Oracle blog to perform a transaction backout.
See also the Oracle documentation, and the sketch below.
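For reference, a minimal sketch of a backout call through DBMS_FLASHBACK.TRANSACTION_BACKOUT (the XID is a hypothetical value taken, for example, from the VERSIONS_XID pseudo-column or from FLASHBACK_TRANSACTION_QUERY; the backout then has to be committed):

DECLARE
  v_xids SYS.XID_ARRAY := SYS.XID_ARRAY('09001100B2040000');  -- hypothetical transaction id
BEGIN
  DBMS_FLASHBACK.TRANSACTION_BACKOUT( numtxns => 1, xids => v_xids, options => DBMS_FLASHBACK.NOCASCADE);
END;
/
COMMIT;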

jeudi 1 décembre 2011

Oracle 11g: Flashback Query, Flashback Query Version, Flashback Table


The flashback technology uses undo data to retrieve information (except for Flashback Database). Ensure that the UNDO_RETENTION parameter is set correctly to guarantee your flashback operations over a specific period of time.


FlashBack Query
Flashback Query allows the user to query the database and see what the data looked like at a specific point in the past.

To query the past you can write :
SQL> SELECT employee_id, salary from HR.employees AS OF TIMESTAMP <time_stamp in past>;
or using a specific SCN number :
SQL> SELECT employee_id, salary from HR.employees AS OF SCN #;
note: It is not possible to flashback query a table if a DDL request has been performed after your targeted SCN or TIMESTAMP.


FlashBack Query Version
Flashback query version allows the user to display all the modifications performed on a set of data between two timestamps or two SCNs.
SQL> select VERSIONS_XID, employee_id, salary from HR.employees VERSIONS BETWEEN TIMESTAMP <T1> AND <T2>;
or using SCN
SQL> select VERSIONS_XID, employee_id, salary from HR.employees VERSIONS BETWEEN SCN # AND #;

VERSIONS_XID is a pseudo-column that gives the identifier of the transaction responsible for the row modification (deleted rows are also included).

note: Only committed rows are taken into account.
note: Do not use the query version to query :
  •  External tables
  •  Temporary tables
  •  Views
  •  V$ views
note: It is not possible to perform a flashback query version if a DDL request has been performed between the two targeted timestamps or SCNs.


FlashBack Table
Flashback Table restores a table to its state at a specific time or SCN.
You must :
  1.  be granted FLASHBACK TABLE or FLASHBACK ANY TABLE
  2.  have the SELECT, UPDATE, DELETE and ALTER privileges on the targeted table
  3.  enable ROW MOVEMENT on the targeted tables
    SQL> ALTER TABLE <table_name> ENABLE ROW MOVEMENT;

To flashback a table at a specific time in the past :
SQL> FLASHBACK TABLE <table_name> TO TIMESTAMP to_timestamp( '01/12/2011 17:35:00', 'DD/MM/YYYY HH24:MI:SS');
note: Flashback Table cannot be performed on tables in the SYSTEM tablespace, views, V$ views or external tables. It also cannot be performed if a DDL request has been performed after the targeted point in time.
note: Flashback table generates redo and undo data.

It is also possible to flash back a table to a specific SCN or restore point.
SQL> FLASHBACK TABLE <table_name> TO SCN #;
SQL> FLASHBACK TABLE <table_name> TO RESTORE POINT <restore_point_name>;

mardi 29 novembre 2011

Oracle 11g: Database block recovery


There are some prerequisites for database block recovery :
  1.  The database must be in ARCHIVELOG mode
  2.  The database must be mounted or opened

The rman utility can use the following components for block recovery purposes:
  1.  A full database backup or an incremental level 0 backup can be used for block recovery
  2.  Archived redo logs, if available
  3.  Flashback database logs, if available

To check which blocks need recovery, you can look at:
  1.  The V$DATABASE_BLOCK_CORRUPTION view (a sample query is sketched after this list)
  2.  Results returned by the following RMAN commands and related tools :
    •  LIST FAILURE
    •  VALIDATE {DATABASE|TABLESPACE|DATAFILE}
    •  BACKUP DATABASE VALIDATE
    •  ANALYZE {TABLE|INDEX}
    •  Trace files (for their location, look at V$DIAG_INFO)
    •  DBVERIFY utility
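For example (a minimal sketch), a VALIDATE run populates V$DATABASE_BLOCK_CORRUPTION, which can then be queried:
RMAN> VALIDATE DATABASE;
SQL> SELECT FILE#, BLOCK#, BLOCKS, CORRUPTION_TYPE FROM V$DATABASE_BLOCK_CORRUPTION;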

To recover a block: after identifying the block number and the associated file number of the corrupted block to repair, you can issue:
RMAN> RECOVER DATAFILE 4 BLOCK 2;
RMAN> RECOVER
  DATAFILE 1 BLOCK 9
  DATAFILE 4 BLOCK 254
  DATAFILE 5 BLOCK 454;
or, to repair all blocks registered in v$database_block_corruption :
RMAN> RECOVER CORRUPTION LIST;

vendredi 18 novembre 2011

Oracle 11g: Managing SQL Plan

Oracle says about sql plan management :
"SQL plan management prevents performance regressions resulting from sudden changes to the execution plan of a SQL statement by providing components for capturing, selecting, and evolving SQL plan information."

During the life cycle of our product, an event sometimes causes the execution plan of a SQL request to change and introduces regressions in SQL performance. From the tuner/profiler point of view, it is relevant to be aware of this modification and to check that the new SQL execution plan is correct.
This section will give some inputs to achieve this goal.

How to register SQL Plan

SQL plan management provides a mechanism that records and evaluates the execution plans of SQL statements over time, and builds SQL plan baselines.
Each SQL business request will be linked to a SQL plan baseline that contains a history of all the execution plans built by Oracle over time for this request.

As a starting point, we need to capture the execution plan of each SQL request and add it to its SQL plan baseline.
This can be done automatically during a product run, or one by one by selecting a specific SQL request.

Automatic SQL plan capture

To enable automatic plan capture, set the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES initialization parameter to TRUE. By default, this parameter is set to FALSE.
1) Set the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES parameter to TRUE
alter system set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=TRUE;

2) Run all your product use cases to capture all the SQL execution plans

3) Restore the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES parameter to its default value.
alter system set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=FALSE;

4) Look at the DBA_SQL_PLAN_BASELINES view to see all the SQL plans related to the schema you need to monitor:
select sql_handle, plan_name, enabled, accepted, fixed,OPTIMIZER_COST,sql_text from dba_sql_plan_baselines where PARSING_SCHEMA_NAME='FMOSSE';
SQL_HANDLE                     PLAN_NAME                      ENA ACC FIX OPTIMIZER_COST SQL_TEXT
------------------------------ ------------------------------ --- --- --- -------------- --------------------------------------------------------------------------------
SYS_SQL_495711156939d306       SQL_PLAN_4kpsj2pnmmns68e3968ca YES YES NO               0 delete from FMOSSE.S_PROPERTY where PROPERTYENTRY = :1
SYS_SQL_4a52f8b0626b170b       SQL_PLAN_4nnrsq1j6q5sb659ff4e8 YES YES NO               1 UPDATE S_ACCOUNT SET ACCOUNTSTATUS = :1  WHERE (ACCOUNTID = :2 )
SYS_SQL_4c596a52d15cd98f       SQL_PLAN_4sqbaab8ptqcg657f43ee YES YES NO               2 SELECT COUNT(FILEID) FROM S_FILE
SYS_SQL_4db332738e676fb7       SQL_PLAN_4vctkff76fvxr5c9cc6bc YES YES NO               2 select FOLDERID, FOLDERNAME, FOLDERDATECREATION, FOLDERIDPARENT, FOLDERSHARED, F
SYS_SQL_4e84758c387d2c28       SQL_PLAN_4x13pjhw7ub181e49339d YES YES NO               2 delete from FMOSSE.S_FILE
SYS_SQL_4fc3800bd9da7784       SQL_PLAN_4zhw01gcxnxw4291c9d40 YES YES NO               1 SELECT FOLDERID, FOLDERSHARED, FOLDERNAME, FOLDERSIZE, FOLDERDATECREATION, FOLDE
SYS_SQL_513d44ab9e0b94d8       SQL_PLAN_52ga4pfg0r56sacc4c75e YES YES NO               9 select count(*), sum(f.filesize), round(avg(f.filesize), 2), max(f.filesize), mi
SYS_SQL_51a8ae22d631730b       SQL_PLAN_53a5f4bb32wsb7b701d8a YES YES NO               3 select PROPERTYENTRY, PROPERTYVALUE from FMOSSE.S_PROPERTY order by PROPERTYENTR
SYS_SQL_51e053d0f1bfb831       SQL_PLAN_53s2mu3svzf1jc7624a29 YES YES NO               4 SELECT t0.ACCOUNTID, t0.ACCOUNTNAME, COUNT(t3.FILEID), TO_CHAR(NVL(SUM(t3.FILESI
SYS_SQL_521f93c697ccfbb2       SQL_PLAN_547wmsubwtyxk29b1bccc YES YES NO               2 select arf.accountid from s_accountresourcefolder arf inner join s_folder f on f
SYS_SQL_56ceb93fab2da70d       SQL_PLAN_5dmpt7ypkv9sd42ba1be5 YES YES NO               0 DELETE FROM S_ACCOUNT WHERE (ACCOUNTID = :1 )

Manual SQL plan capture

1) To capture a SQL execution plan manually, take a look at the V$SQL view and select the request you need to monitor :
SELECT sql_id, sql_fulltext FROM V$SQL WHERE sql_text LIKE '%SELECT COUNT(FILEID)%' and PARSING_SCHEMA_NAME='FMOSSE';
SQL_ID        SQL_FULLTEXT
------------- --------------------------------------------------------------------------------
1xjd9vbxt5f7w SELECT COUNT(FILEID) FROM S_FILE

2) Using the SQL_ID create a SQL plan baseline for the statement.
variable cnt number;
EXECUTE :cnt :=DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id=>'1xjd9vbxt5f7w');

3) Check the DBA_SQL_PLAN_BASELINES view:
select B.SQL_HANDLE, B.PLAN_NAME, B.SQL_TEXT, B.ORIGIN, B.PARSING_SCHEMA_NAME, B.ENABLED, B.ACCEPTED, B.FIXED from DBA_SQL_PLAN_BASELINES B, V$SQL S WHERE S.SQL_PLAN_BASELINE=B.PLAN_NAME AND S.SQL_ID='1xjd9vbxt5f7w';
SQL_HANDLE                     PLAN_NAME                      SQL_TEXT                          ORIGIN         PARSIN ENA ACC FIX
------------------------------ ------------------------------ --------------------------------- -------------- ------ --- --- ---
SYS_SQL_4c596a52d15cd98f       SQL_PLAN_4sqbaab8ptqcg657f43ee SELECT COUNT(FILEID) FROM S_FILE  AUTO-CAPTURE   FMOSSE YES YES NO

SQL Plan Baseline activation

Now we need to tell the optimizer to refer to the SQL plan baseline defined for each request :
alter system set optimizer_use_sql_plan_baselines=TRUE;

For a given query, the optimizer first checks whether a baseline exists, then compares the cost of the execution plan stored in the baseline with the cost of the execution plan of the current query. If the cost has changed and the execution plan of the current query is better than the one in the baseline, the new execution plan is added to the baseline (as non-accepted, pending validation by the DBA, as illustrated below).

To illustrate this:
1) First I create a table and populate it with some data.
create table test (id NUMBER, VALUE VARCHAR2(64)) NOLOGGING;
declare
  i number;
begin
  for i in 1..300000 loop
    insert /*+ APPEND */ into test values( i, 'LABEL# ' || i);
  end loop;
  commit;
end;
/
2) Then I perform a select :
alter system set cursor_sharing=force;
select * from test where id=215445;

3) Using the V$SQL view, I find the sql_id of the new request and load its execution plan into the SQL plan baseline:
variable cnt number;
EXECUTE :cnt :=DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id=>'05abfh82j5z85');

4) Now I display the baseline:
select SQL_HANDLE, PLAN_NAME, OPTIMIZER_COST, SQL_TEXT, ORIGIN, PARSING_SCHEMA_NAME, ENABLED, ACCEPTED, FIXED from DBA_SQL_PLAN_BASELINES;
SQL_HANDLE                     PLAN_NAME                      OPTIMIZER_COST SQL_TEXT                                                                      ORIGIN          PARSING_SCHEMA_NAME            ENA ACC FIX
------------------------------ ------------------------------ -------------- -------------------------------------------------------------------------------- -------------- ------------------------------ --- --- ---
SYS_SQL_d223cef6c0ff72ee       SQL_PLAN_d48yfyv0gywrf97bbe3d0            104 select * from test where id=:"SYS_B_0"                                        MANUAL-LOAD    TEST                            YES YES NO
5) Now I create a constraint on the id column, and therefore an associated index:
alter table test add constraint PK_TEST_ID primary key( id);
Then I run another select with a different id value.

6) Look at the baseline again:
select SQL_HANDLE, PLAN_NAME, OPTIMIZER_COST, SQL_TEXT, ORIGIN, PARSING_SCHEMA_NAME, ENABLED, ACCEPTED, FIXED from DBA_SQL_PLAN_BASELINES;
SQL_HANDLE                     PLAN_NAME                      OPTIMIZER_COST SQL_TEXT                                                                      ORIGIN          PARSING_SCHEMA_NAME            ENA ACC FIX
------------------------------ ------------------------------ -------------- -------------------------------------------------------------------------------- -------------- ------------------------------ --- --- ---
SYS_SQL_d223cef6c0ff72ee       SQL_PLAN_d48yfyv0gywrf97bbe3d0            104 select * from test where id=:"SYS_B_0"                                        MANUAL-LOAD    TEST                            YES YES NO
SYS_SQL_d223cef6c0ff72ee       SQL_PLAN_d48yfyv0gywrf7ad59d6c              2 select * from test where id=:"SYS_B_0"                                        AUTO-CAPTURE   TEST                            YES NO  NO
What we can see is that a new SQL plan has been added to the baseline with a better optimizer cost (2 instead of 104).
We can also see that the first SQL plan was added manually to the baseline (ORIGIN=MANUAL-LOAD) and that the second one was added by Oracle itself (ORIGIN=AUTO-CAPTURE).
Because we added a new index, the SQL plan of the request changed due to a database structure modification. In this case the new plan has been added to the baseline with ENABLED=YES (it can be chosen by the optimizer) but with ACCEPTED=NO, because the DBA has not yet validated the new SQL plan.
If we want to force the use of the first plan, we need to set that plan as FIXED (an alternative, evolving the baseline to accept the new plan, is sketched after this example):
variable cnt number;
exec :cnt :=DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
SQL_HANDLE => 'SYS_SQL_d223cef6c0ff72ee',
PLAN_NAME => 'SQL_PLAN_d48yfyv0gywrf97bbe3d0',
ATTRIBUTE_NAME => 'fixed',
ATTRIBUTE_VALUE => 'YES');
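Alternatively, instead of fixing the old plan, the new unaccepted plan can be verified and accepted by evolving the baseline; a minimal sketch using DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE (the function returns a text report):
variable report clob;
exec :report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE( sql_handle => 'SYS_SQL_d223cef6c0ff72ee', verify => 'YES', commit => 'YES');
print :report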

How to overwrite an existing execution plan

Sometimes during a profiling session we need to overwrite the execution plan of a specific request because we have improved its performance.
To do that we need :

1) to disable the current sql plan in the baseline:
variable cnt number;
exec :cnt :=DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
    SQL_HANDLE => 'SYS_SQL_d223cef6c0ff72ee',
    PLAN_NAME => 'SQL_PLAN_d48yfyv0gywrf97bbe3d0',
    ATTRIBUTE_NAME => 'enabled',
    ATTRIBUTE_VALUE => 'NO'
);
2) to register the new plan under the same sql_handle, using the sql_id and plan_hash_value of the new request (values found in the V$SQL view):
variable cnt number;
exec :cnt :=DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
    SQL_ID => '72fb94sq0karh',
    PLAN_HASH_VALUE => 1357081020,
    SQL_HANDLE => 'SYS_SQL_d223cef6c0ff72ee'
);
3) Ensure that the new plan has been accepted in the baseline :
select SQL_HANDLE, PLAN_NAME, OPTIMIZER_COST, SQL_TEXT, ORIGIN, PARSING_SCHEMA_NAME, ENABLED, ACCEPTED, FIXED from DBA_SQL_PLAN_BASELINES;
SQL_HANDLE                     PLAN_NAME                      OPTIMIZER_COST SQL_TEXT                                                                      ORIGIN          PARSING_SCHEMA_NAME            ENA ACC FIX
------------------------------ ------------------------------ -------------- -------------------------------------------------------------------------------- -------------- ------------------------------ --- --- ---
SYS_SQL_d223cef6c0ff72ee       SQL_PLAN_d48yfyv0gywrf97bbe3d0            2 select * from test where id=:"SYS_B_0"                                        MANUAL-LOAD    TEST                            YES YES NO

How to display an execution plan registered under the baseline

To view the plans stored in the SQL plan baseline for a given statement, use the DISPLAY_SQL_PLAN_BASELINE function of the DBMS_XPLAN package:
select * from table(
    dbms_xplan.display_sql_plan_baseline(
        sql_handle=>'SYS_SQL_d223cef6c0ff72ee',
        format=>'basic'));
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------
SQL handle: SYS_SQL_d223cef6c0ff72ee
SQL text: select * from test where id=:"SYS_B_0"
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_d48yfyv0gywrf7ad59d6c         Plan id: 2060819820
Enabled: YES     Fixed: NO      Accepted: NO      Origin: AUTO-CAPTURE
--------------------------------------------------------------------------------


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3803811902

--------------------------------------------------
| Id  | Operation                   | Name       |
--------------------------------------------------
|   0 | SELECT STATEMENT            |            |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST       |
|   2 |   INDEX UNIQUE SCAN         | PK_TEST_ID |
--------------------------------------------------

--------------------------------------------------------------------------------

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan name: SQL_PLAN_d48yfyv0gywrf97bbe3d0         Plan id: 2545673168
Enabled: YES     Fixed: NO      Accepted: YES     Origin: MANUAL-LOAD
--------------------------------------------------------------------------------

Plan hash value: 1357081020

----------------------------------
| Id  | Operation         | Name |
----------------------------------
|   0 | SELECT STATEMENT  |      |
|   1 |  TABLE ACCESS FULL| TEST |

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------

34 rows selected.

Oracle documentation

http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/optplanmgmt.htm

Oracle 11g: How to restore a backup


Commands used by RMAN are :

    - RESTORE: used to retrieve a set of files from a backup.
RESTORE {DATABASE | TABLESPACE name [,name] | DATAFILE name [,name]}
You can restore the database to a specific SCN, time, restore point or redo log sequence number (a quick pre-restore check is sketched just after this list).

    - RECOVER: applies the changes recorded in incremental backups, archived logs and redo logs to the files previously restored.
RECOVER {DATABASE | TABLESPACE name [,name] | DATAFILE name [,name]}
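Before running an actual restore, you can check which backups RMAN would use and whether they are readable (an optional quick check):
RMAN> RESTORE DATABASE PREVIEW SUMMARY;
RMAN> RESTORE DATABASE VALIDATE;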

How to restore/recover an entire database
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
note: If you have lost your online redo logs, you can tell RMAN not to apply redo logs during the recovery process:
RMAN> RECOVER DATABASE NOREDO;


How to restore/recover a tablespace
If the tablespace is not a system tablespace, the database can stay open during the restoration; otherwise the database must be in MOUNT state.
RMAN> sql 'alter tablespace users offline immediate';
RMAN> RESTORE TABLESPACE USERS;
RMAN> RECOVER TABLESPACE USERS;
RMAN> sql 'alter tablespace users online';

How to restore/recover a datafile
If the datafile does not belong to a system tablespace, the database can stay open during the restoration; otherwise the database must be in MOUNT state.
In any case, the datafile must be taken offline before the restoration.
RMAN> sql 'alter database datafile 4 offline immediate';
RMAN> RESTORE DATAFILE 4;
RMAN> RECOVER DATAFILE 4;
RMAN> sql 'alter database datafile 4 online';

How to restore/recover a database until the last successful transaction
note: RMAN has no UNTIL CANCEL clause (that syntax belongs to user-managed recovery in SQL*Plus). With RMAN, a complete recovery without any UNTIL clause applies all available redo and recovers up to the last committed transaction:
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;

How to restore/recover a database at a specific point in time
The following example restores the database to its state of 10 minutes ago:
RMAN> RECOVER DATABASE UNTIL TIME 'sysdate-10/60/24';
If you use a specific time, it is necessary to define the NLS_LANG and NLS_DATE_FORMAT environment variables. They will be used by RMAN to decode the date and time passed for the restoration.
$> export NLS_LANG=american_america.us7ascii
$> export NLS_DATE_FORMAT='DD/MM/YYYY HH24:MI:SS'
$> rman target / catalog rcatowner/rcatowner@rcat_db
RMAN> RESTORE DATABASE UNTIL TIME '18/11/2011 10:40:00';
RMAN> RECOVER DATABASE UNTIL TIME '18/11/2011 10:40:00';
RMAN> ALTER DATABASE OPEN RESETLOGS;
or using the SET command :
$> export NLS_LANG=american_america.us7ascii
$> export NLS_DATE_FORMAT='DD/MM/YYYY HH24:MI:SS'
$> rman target / catalog rcatowner/rcatowner@rcat_db
RMAN> run {
SET UNTIL TIME '18/11/2011 10:40:00';
RESTORE DATABASE;
RECOVER DATABASE;
}
RMAN> ALTER DATABASE OPEN RESETLOGS;

remark: If you want to verify that the point in time is correct by checking the restored data, you can open the database in READ ONLY mode. This way, the redo logs are not reset and you can still change the point in time if it is not the right one.
RMAN> ALTER DATABASE OPEN READ ONLY;
Once the point in time is correct you can open the database and reset the redo logs.
RMAN> ALTER DATABASE OPEN RESETLOGS;
This remark also applies to a restoration at a specific SCN or sequence number.

How to restore/recover a database at a specific SCN
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> run {
SET UNTIL SCN #;
RESTORE DATABASE;
RECOVER DATABASE;
}
RMAN> ALTER DATABASE OPEN RESETLOGS;
remark: "SET UNTIL" DOES NOT WORK WITH A TABLESPACE! It is not possible to restore only one tablespace to a specific time in the past this way; the database would not open.

remark: If the SCN you need to reach does not belong to the current incarnation of the database, you first need to reset the database to the incarnation just before your SCN and then perform the recovery. Here is an example of restoring the database to SCN 5726167:
RMAN> run {
2> set until scn 5726167;
3> restore database;
4> recover database;
5> }

executing command: SET until clause

Starting restore at 18-11-11 13:28:10
using channel ORA_DISK_1
using channel ORA_SBT_TAPE_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 11/18/2011 13:28:10
RMAN-20208: UNTIL CHANGE is before RESETLOGS change

RMAN> list incarnation;
List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1       1       TEST     2058702964       PARENT  1          13-08-09 23:00:48
2       2       TEST     2058702964       PARENT  754488     30-08-11 17:42:15
3       3       TEST     2058702964       PARENT  5571932    16-11-11 14:39:08
4       4       TEST     2058702964       ORPHAN  5576815    16-11-11 16:11:53
5       5       TEST     2058702964       ORPHAN  5577397    16-11-11 16:18:46
6       6       TEST     2058702964       ORPHAN  5598121    16-11-11 16:37:33
7       7       TEST     2058702964       PARENT  5576413    16-11-11 18:02:08
8       8       TEST     2058702964       CURRENT 5726169    18-11-11 11:41:56

RMAN> shutdown immediate;
RMAN> startup mount;
RMAN> reset database to incarnation 7;
database reset to incarnation 7

RMAN> run {
2> set until scn 5726167;
3> restore database;
4> recover database;
5> }

RMAN> ALTER DATABASE OPEN RESETLOGS;
note: RESET DATABASE needs to alter the control file, so the database must be in MOUNT state but not open. With an open database you will receive the following exception: ORA-19910: can not change recovery target incarnation in control file

How to restore/recover a database at a specific sequence number
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> run {
SET UNTIL SEQUENCE #;
RESTORE DATABASE;
RECOVER DATABASE;
}
RMAN> ALTER DATABASE OPEN RESETLOGS;

How to restore a control file
To restore a control file the database must be in NOMOUNT state:
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE;
RMAN> ALTER DATABASE MOUNT;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
If you want to restore from an autobackup and you have an rman catalog :
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
If you don't have a catalog, you need to define the DBID:
RMAN> SET DBID 1090778584;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;

How to restore a parameter file
To restore a parameter file, the database must be in NOMOUNT state:
RMAN> STARTUP FORCE NOMOUNT;
RMAN> RESTORE SPFILE FROM AUTOBACKUP;
RMAN> STARTUP FORCE;
You can change the name of the parameter file :
RMAN> RESTORE SPFILE TO <parameter_file_name> FROM AUTOBACKUP;
Using SQL, it is possible to generate the parameter file with:
SQL> CREATE PFILE [='pfile_name']
FROM { { SPFILE [='spfile_name']} | MEMORY};

or

SQL> CREATE SPFILE [='spfile_name']
FROM { { PFILE [='pfile_name']} | MEMORY};
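For example (the file name below is just an illustration):
SQL> CREATE PFILE='/tmp/initTEST.ora' FROM SPFILE;
SQL> CREATE SPFILE FROM PFILE='/tmp/initTEST.ora';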

How to restore/recover a database at a specific incarnation
RMAN> LIST INCARNATION;
RMAN> RESET DATABASE TO INCARNATION #;

jeudi 17 novembre 2011

Oracle 11g: Restore points


A restore point gives a name to a point in time.

You can create a restore point at the current time :
SQL> CREATE RESTORE POINT <name_of_restore_point>;
or create a point in the past:
SQL> CREATE RESTORE POINT <name_of_restore_point> AS OF SCN #;

The restore point is stored in the control file within the period of retention defined by the CONTROL_FILE_RECORD_KEEP_TIME parameter.
If you want the restore point to never expire, add the PRESERVE keyword to the previous commands.
SQL> CREATE RESTORE POINT <name_of_restore_point> PRESERVE;

You can see all restore points created through the database view V$RESTORE_POINT or use RMAN:
RMAN> LIST RESTORE POINT ALL;
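For example, from SQL*Plus, a simple query showing the main attributes of each restore point:
SQL> SELECT name, scn, time, preserved, guarantee_flashback_database
     FROM v$restore_point;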

You can delete a restore point using:
SQL> DROP RESTORE POINT <name_of_restore_point>;
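A restore point can then be used as the target of an incomplete recovery. A minimal sketch, assuming a restore point named before_upgrade was created beforehand:
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> run {
SET UNTIL RESTORE POINT before_upgrade;
RESTORE DATABASE;
RECOVER DATABASE;
}
RMAN> ALTER DATABASE OPEN RESETLOGS;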

Oracle 11g: How to trace in alert log file


To write a message in the alert log file:
SQL> exec dbms_system.ksdwrt(2, 'This is a message sent at ' || TO_CHAR(SYSDATE, 'dd/mm/yyyy HH24"H"MI'));
note : For more information about dbms_system package see http://psoug.org/reference/dbms_system.html
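To check that the message actually reached the alert log, you can tail it with adrci (the diag home below is a placeholder, adjust it to your database):
$> adrci
adrci> set home diag/rdbms/<db_name>/<instance_name>
adrci> show alert -tail 20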

mardi 15 novembre 2011

Oracle 11g: Monitoring RMAN backups


If you are connected as SYSDBA

The following dynamic views give information about backup performance and statistics:

View Name            Description
V$BACKUP_SET         Backup sets created
V$BACKUP_PIECE       All backup pieces created
V$DATAFILE_COPY      All image copies created
V$BACKUP_FILES       All files created during backups
V$BACKUP_DATAFILE    All datafiles backed up; gives information for monitoring incremental backups
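For example, a quick overview of the most recent backup sets with a simple query on V$BACKUP_SET:
sqlplus / as sysdba
SQL> select recid, backup_type, incremental_level, pieces,
            start_time, completion_time, elapsed_seconds
     from v$backup_set
     order by completion_time desc;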


If you are connected as the catalog owner

View Name            Description
RC_BACKUP_SET        Backup sets created
RC_BACKUP_PIECE      All backup pieces created
RC_DATAFILE_COPY     All image copies created
RC_BACKUP_FILES      All files created during backups
RC_BACKUP_DATAFILE   All datafiles backed up; gives information for monitoring incremental backups

note: The following SQL query gives information about the number of blocks read for an incremental level 1 backup.
sqlplus / as sysdba
SQL> select file#,
         avg(datafile_blocks),
         avg(blocks_read),
         avg(blocks_read/datafile_blocks) * 100 as PCT_READ_FOR_BACKUP,
         avg(blocks)
  from v$backup_datafile
  where used_change_tracking = 'YES'
        and incremental_level > 0
  group by file#;
Incremental level 1 backups are interesting as long as the value of the PCT_READ_FOR_BACKUP column stays below 50%. Above this value, it is recommended to perform a level 0 incremental backup.

jeudi 10 novembre 2011

Oracle 11g: RMAN catalog resynchronization


To synchronize the target database control file with the catalog :
RMAN> RESYNC CATALOG;

To check whether backups on the file system have been deleted but are still referenced in the RMAN catalog:
    a) for backupset
RMAN> crosscheck backupset;
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=143 device type=DISK
using channel ORA_SBT_TAPE_1
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=43mraiho_1_1 RECID=59 STAMP=766855736
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=44mrail9_1_1 RECID=60 STAMP=766855849
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=4cmraj7q_1_1 RECID=62 STAMP=766856442
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=4dmral4o_1_1 RECID=63 STAMP=766858392
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=4lmranr6_1_1 RECID=65 STAMP=766861158
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=4mmranun_1_1 RECID=66 STAMP=766861271
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=4nmrao37_1_1 RECID=67 STAMP=766861415
Crosschecked 7 objects
For a specific device type
RMAN> crosscheck backupset device type sbt;
    b) for image copy
RMAN> crosscheck copy;

The crosscheck checks whether the backup still exists on the file system; if not, the backup piece is marked as EXPIRED.

To list expired backupset :
RMAN> list expired backupset;
RMAN> list expired backupset summary;

To list expired copy :
RMAN> list expired copy;

To delete expired backupset or copy:
RMAN> delete expired backupset;
RMAN> delete expired copy;


It is also possible to check for obsolete backups. An obsolete backup is a backup that is no longer needed to guarantee the retention policy.
To check for obsolete backups:
RMAN> report obsolete;
To delete obsolete backups:
RMAN> delete obsolete;
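As a reminder, the retention policy itself is an RMAN configuration setting; it can be defined either as a recovery window or as a redundancy count (the values below are only examples):
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
or
RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;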

Oracle11g: Backing up backups with RMAN


The goal of this section is to show how to transfer a set of backupsets to tape (image copies cannot be created on tape, only on disk).

Transferring a copy of the database, datafile or tablespace to tape

First of all, for this example we need to configure the sbt device type to point to a location on the file system in order to simulate a tape.
Here we want to ensure that, when the sbt channel is used, all data is written to disk under the '/database/TEST/flash_recovery_area/TEST/backupset/tape' location:
RMAN> configure channel device type sbt
parms='SBT_LIBRARY=oracle.disksbt, ENV=(BACKUP_DIR=/database/TEST/flash_recovery_area/TEST/backupset/tape)';

new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=oracle.disksbt, ENV=(BACKUP_DIR=/database/TEST/flash_recovery_area/TEST/backupset/tape)';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

Now we want to create a backupset that is a copy of all the image copies, sent to tape using the sbt channel (backup copy of {database|datafile|tablespace}).
RMAN> backup as backupset device type sbt copy of database delete all input;
Starting backup at 10-NOV-11
using channel ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: starting full datafile backup set
channel ORA_SBT_TAPE_1: including datafile copy of datafile 00001 in backup set
input file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_system_7cqqswb2_.dbf
channel ORA_SBT_TAPE_1: including datafile copy of datafile 00002 in backup set
input file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_sysaux_7cqqtpj6_.dbf
channel ORA_SBT_TAPE_1: including datafile copy of datafile 00003 in backup set
input file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_undotbs1_7cqqvgpl_.dbf
channel ORA_SBT_TAPE_1: including datafile copy of datafile 00004 in backup set
input file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_users_7cqqvoro_.dbf
channel ORA_SBT_TAPE_1: including datafile copy of datafile 00005 in backup set
input file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_rcat_ts_7cqqvs26_.dbf
channel ORA_SBT_TAPE_1: starting piece 1 at 10-NOV-11
channel ORA_SBT_TAPE_1: finished piece 1 at 10-NOV-11
piece handle=44mrail9_1_1 comment=API Version 2.0,MMS Version 8.1.3.0
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:45
deleted datafile copy
datafile copy file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_system_7cqqswb2_.dbf RECID=26 STAMP=766855645
deleted datafile copy
datafile copy file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_sysaux_7cqqtpj6_.dbf RECID=27 STAMP=766855673
deleted datafile copy
datafile copy file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_undotbs1_7cqqvgpl_.dbf RECID=28 STAMP=766855681
deleted datafile copy
datafile copy file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_users_7cqqvoro_.dbf RECID=29 STAMP=766855687
deleted datafile copy
datafile copy file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_rcat_ts_7cqqvs26_.dbf RECID=30 STAMP=766855689
Finished backup at 10-NOV-11
remark: Using DELETE ALL INPUT ensures that all the copies moved to tape are deleted from disk.

remark: To back up a specific backup, you can use the tag syntax:
RMAN> backup device type sbt copy of database from tag 'MY_BACKUP' Tag 'TAPE_BACKUP' delete all input;

remark: To back up all archived logs to tape:
RMAN> backup device type sbt archivelog all delete all input;

Starting backup at 10-NOV-11
current log archived
using channel ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: starting archived log backup set
channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=267 RECID=96 STAMP=766855623
input archived log thread=1 sequence=268 RECID=97 STAMP=766855624
input archived log thread=1 sequence=269 RECID=98 STAMP=766855625
input archived log thread=1 sequence=270 RECID=99 STAMP=766855627
input archived log thread=1 sequence=271 RECID=101 STAMP=766855693
input archived log thread=1 sequence=272 RECID=102 STAMP=766861270
channel ORA_SBT_TAPE_1: starting piece 1 at 10-NOV-11
channel ORA_SBT_TAPE_1: finished piece 1 at 10-NOV-11
piece handle=4mmranun_1_1 tag=TAG20111110T170111 comment=API Version 2.0,MMS Version 8.1.3.0
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:01
channel ORA_SBT_TAPE_1: deleting archived log(s)
archived log file name=/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_267_7cqqsqln_.arc RECID=96 STAMP=766855623
archived log file name=/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_267_7cqocxbx_.arc RECID=91 STAMP=766853133
archived log file name=/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_267_7cqo77yt_.arc RECID=89 STAMP=766852984
archived log file name=/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_268_7cqqsrps_.arc RECID=97 STAMP=766855624
archived log file name=/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_268_7cqocyh3_.arc RECID=92 STAMP=766853134
archived log file name=/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_268_7cqocw57_.arc RECID=90 STAMP=766853132
archived log file name=/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_269_7cqqssvk_.arc RECID=98 STAMP=766855625
archived log file name=/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_269_7cqolrj4_.arc RECID=94 STAMP=766853352
archived log file name=/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_269_7cqok20g_.arc RECID=93 STAMP=766853298
archived log file name=/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_270_7cqqsv0o_.arc RECID=99 STAMP=766855627
archived log file name=/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_270_7cqqspg8_.arc RECID=95 STAMP=766855622
archived log file name=/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_271_7cqqvxrq_.arc RECID=101 STAMP=766855693
archived log file name=/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_271_7cqqvwjz_.arc RECID=100 STAMP=766855692
archived log file name=/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_272_7cqxb5w9_.arc RECID=102 STAMP=766861270
Finished backup at 10-NOV-11

remark: To back up specific datafiles to tape:
RMAN> backup device type sbt copy of datafile 4,5 delete input;
Starting backup at 10-NOV-11
released channel: ORA_DISK_1
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=146 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: WARNING: Oracle Test Disk API
channel ORA_SBT_TAPE_1: starting full datafile backup set
channel ORA_SBT_TAPE_1: including datafile copy of datafile 00004 in backup set
input file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_users_7cqcf30k_.dbf
channel ORA_SBT_TAPE_1: including datafile copy of datafile 00005 in backup set
input file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_rcat_ts_7cqprkm1_.dbf
channel ORA_SBT_TAPE_1: starting piece 1 at 10-NOV-11
channel ORA_SBT_TAPE_1: finished piece 1 at 10-NOV-11
piece handle=4omrao7v_1_1 comment=API Version 2.0,MMS Version 8.1.3.0
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:01
deleted datafile copy
datafile copy file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_users_7cqcf30k_.dbf RECID=21 STAMP=766842932
deleted datafile copy
datafile copy file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_rcat_ts_7cqprkm1_.dbf RECID=25 STAMP=766854561
Finished backup at 10-NOV-11

How to delete a copy of a backup from tape

To delete a copy of a backupset from tape, you can write:
RMAN> delete backupset <bs_key> device type sbt;

Transferring a copy of a backupset to tape

To copy a backupset to tape you can use the following syntax:
RMAN> backup device type sbt BACKUPSET <bs_key>;
To copy all backupsets:
RMAN> backup device type sbt BACKUPSET ALL;

Transferring the recovery area to tape

It is possible to back up the recovery area to tape (this is not possible with device type disk):
RMAN> backup RECOVERY AREA;
note: The backup of the recovery area does not back up files that have already been backed up. Use the FORCE option to ensure that all files are backed up each time BACKUP RECOVERY AREA is called:
RMAN> BACKUP device type sbt RECOVERY AREA FORCE;

It is also possible to back up all the files in the recovery area, even those that are not database files:
RMAN> backup RECOVERY FILES;

Oracle 11g: RMAN backups


This section explains how to perform backups of a database or datafile using RMAN.
Files generated by RMAN can be either:
  - an image copy (a strict binary copy of database files)
  - a set of backupsets (a set of binary files that contain several pieces of the database). A backupset does not contain empty blocks or blocks allocated above the high-water mark.

How to backup the entire database as backupsets

To back up the database as a backupset you can issue the following command:
$> export ORACLE_SID=TEST

$> rman target / catalog rcatowner/rcatowner@rcat_db

RMAN> BACKUP AS BACKUPSET database;
Starting backup at 10-NOV-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/database/TEST/system/system01.dbf
input datafile file number=00005 name=/database/TEST/rcat/rcat01.dbf
input datafile file number=00002 name=/database/TEST/system/sysaux01.dbf
input datafile file number=00003 name=/database/TEST/undo/undotbs01.dbf
channel ORA_DISK_1: starting piece 1 at 10-NOV-11
channel ORA_DISK_1: finished piece 1 at 10-NOV-11
piece handle=/database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_nnndf_TAG20111110T104845_7cq7hxmg_.bkp tag=TAG20111110T104845 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00004 name=/database/TEST/users/users01.dbf
channel ORA_DISK_1: starting piece 1 at 10-NOV-11
channel ORA_DISK_1: finished piece 1 at 10-NOV-11
piece handle=/database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_nnndf_TAG20111110T104845_7cq7kbgs_.bkp tag=TAG20111110T104845 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 10-NOV-11
channel ORA_DISK_1: finished piece 1 at 10-NOV-11
piece handle=/database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_ncsnf_TAG20111110T104845_7cq7kgol_.bkp tag=TAG20111110T104845 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 10-NOV-11
All datafiles, the control file and the associated SPFILE are backed up as backupsets.

You can list the backupset to see its content :
RMAN> list backupset;

List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2661    Full    1.19G      DISK        00:00:41     10-NOV-11
        BP Key: 2664   Status: AVAILABLE  Compressed: NO  Tag: TAG20111110T104845
        Piece Name: /database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_nnndf_TAG20111110T104845_7cq7hxmg_.bkp
  List of Datafiles in backup set 2661
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  1       Full 5141811    10-NOV-11 /database/TEST/system/system01.dbf
  2       Full 5141811    10-NOV-11 /database/TEST/system/sysaux01.dbf
  3       Full 5141811    10-NOV-11 /database/TEST/undo/undotbs01.dbf
  5       Full 5141811    10-NOV-11 /database/TEST/rcat/rcat01.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2662    Full    33.46M     DISK        00:00:01     10-NOV-11
        BP Key: 2665   Status: AVAILABLE  Compressed: NO  Tag: TAG20111110T104845
        Piece Name: /database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_nnndf_TAG20111110T104845_7cq7kbgs_.bkp
  List of Datafiles in backup set 2662
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  4       Full 5141833    10-NOV-11 /database/TEST/users/users01.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2663    Full    9.61M      DISK        00:00:01     10-NOV-11
        BP Key: 2666   Status: AVAILABLE  Compressed: NO  Tag: TAG20111110T104845
        Piece Name: /database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_ncsnf_TAG20111110T104845_7cq7kgol_.bkp
  SPFILE Included: Modification time: 10-NOV-11
  SPFILE db_unique_name: TEST
  Control File Included: Ckp SCN: 5141834      Ckp time: 10-NOV-11
Note that the backup type is FULL and that the backup was written to disk.

You can check that no image copies were generated:
RMAN> list copy;

specification does not match any datafile copy in the repository
specification does not match any control file copy in the repository
specification does not match any archived log in the repository

remark: You can delete the backupsets using the following commands:
RMAN> delete backupset 2661,2662,2663;

using channel ORA_DISK_1

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
2664    2661    1   1   AVAILABLE   DISK        /database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_nnndf_TAG20111110T104845_7cq7hxmg_.bkp
2665    2662    1   1   AVAILABLE   DISK        /database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_nnndf_TAG20111110T104845_7cq7kbgs_.bkp
2666    2663    1   1   AVAILABLE   DISK        /database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_ncsnf_TAG20111110T104845_7cq7kgol_.bkp

Do you really want to delete the above objects (enter YES or NO)? y
deleted backup piece
backup piece handle=/database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_nnndf_TAG20111110T104845_7cq7hxmg_.bkp RECID=33 STAMP=766838925
deleted backup piece
backup piece handle=/database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_nnndf_TAG20111110T104845_7cq7kbgs_.bkp RECID=34 STAMP=766838970
deleted backup piece
backup piece handle=/database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_ncsnf_TAG20111110T104845_7cq7kgol_.bkp RECID=35 STAMP=766838974
Deleted 3 objects
or using the Tag value
RMAN> delete backupset Tag 'TAG20111110T104845';


remark: You can override the default tag like this:
RMAN> backup as backupset database tag 'MY_FIRST_BACKUP';
and check the tag value
RMAN> list backupset summary;

List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
2833    B  F  A DISK        10-NOV-11       1       1       NO         MY_FIRST_BACKUP
2834    B  F  A DISK        10-NOV-11       1       1       NO         MY_FIRST_BACKUP
2835    B  F  A DISK        10-NOV-11       1       1       NO         MY_FIRST_BACKUP

remark: You can also back up all the archived logs at the same time as the database:
RMAN> backup as backupset database plus archivelog;
...
RMAN> list backupset;


List of Backup Sets
===================


BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
586     73.50K     DISK        00:00:00     10-NOV-11
        BP Key: 590   Status: AVAILABLE  Compressed: NO  Tag: TAG20111110T114303
        Piece Name: /database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_annnn_TAG20111110T114303_7cqboq9j_.bkp

  List of Archived Logs in backup set 586
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    260     5144748    10-NOV-11 5145114    10-NOV-11
  1    261     5145114    10-NOV-11 5145118    10-NOV-11
  1    262     5145118    10-NOV-11 5145125    10-NOV-11
  1    263     5145125    10-NOV-11 5145128    10-NOV-11
  1    264     5145128    10-NOV-11 5145131    10-NOV-11
  1    265     5145131    10-NOV-11 5145199    10-NOV-11

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
587     Full    1.19G      DISK        00:00:45     10-NOV-11
        BP Key: 591   Status: AVAILABLE  Compressed: NO  Tag: TAG20111110T114304
        Piece Name: /database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_nnndf_TAG20111110T114304_7cqborsw_.bkp
  List of Datafiles in backup set 587
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  1       Full 5145217    10-NOV-11 /database/TEST/system/system01.dbf
  2       Full 5145217    10-NOV-11 /database/TEST/system/sysaux01.dbf
  3       Full 5145217    10-NOV-11 /database/TEST/undo/undotbs01.dbf
  5       Full 5145217    10-NOV-11 /database/TEST/rcat/rcat01.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
588     Full    33.46M     DISK        00:00:02     10-NOV-11
        BP Key: 592   Status: AVAILABLE  Compressed: NO  Tag: TAG20111110T114304
        Piece Name: /database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_nnndf_TAG20111110T114304_7cqbq5sc_.bkp
  List of Datafiles in backup set 588
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  4       Full 5145240    10-NOV-11 /database/TEST/users/users01.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
589     Full    9.67M      DISK        00:00:02     10-NOV-11
        BP Key: 593   Status: AVAILABLE  Compressed: NO  Tag: TAG20111110T114304
        Piece Name: /database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_ncsnf_TAG20111110T114304_7cqbqb0p_.bkp
  SPFILE Included: Modification time: 10-NOV-11
  SPFILE db_unique_name: TEST
  Control File Included: Ckp SCN: 5145241      Ckp time: 10-NOV-11

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
613     7.00K      DISK        00:00:00     10-NOV-11
        BP Key: 616   Status: AVAILABLE  Compressed: NO  Tag: TAG20111110T114356
        Piece Name: /database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_annnn_TAG20111110T114356_7cqbqdhm_.bkp

  List of Archived Logs in backup set 613
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    266     5145199    10-NOV-11 5145247    10-NOV-11
remark: You can also delete all archived logs once they have been backed up:
RMAN> backup as backupset database plus archivelog delete input;

How to backup the entire database as image copies

To back up the database as image copies, use the following command:
RMAN> backup as copy database;
Starting backup at 10-NOV-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/database/TEST/system/system01.dbf
output file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_system_7cqcc9jo_.dbf tag=TAG20111110T115433 RECID=18 STAMP=766842890
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=/database/TEST/system/sysaux01.dbf
output file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_sysaux_7cqcd2th_.dbf tag=TAG20111110T115433 RECID=19 STAMP=766842916
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/database/TEST/undo/undotbs01.dbf
output file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_undotbs1_7cqcdvz1_.dbf tag=TAG20111110T115433 RECID=20 STAMP=766842927
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/database/TEST/users/users01.dbf
output file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_users_7cqcf30k_.dbf tag=TAG20111110T115433 RECID=21 STAMP=766842932
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=/database/TEST/rcat/rcat01.dbf
output file name=/database/TEST/flash_recovery_area/TEST/datafile/o1_mf_rcat_ts_7cqcf64t_.dbf tag=TAG20111110T115433 RECID=22 STAMP=766842934
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
copying current control file
output file name=/database/TEST/flash_recovery_area/TEST/controlfile/o1_mf_TAG20111110T115433_7cqcf79c_.ctl tag=TAG20111110T115433 RECID=23 STAMP=766842935
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 10-NOV-11
channel ORA_DISK_1: finished piece 1 at 10-NOV-11
piece handle=/database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_nnsnf_TAG20111110T115433_7cqcf8gw_.bkp tag=TAG20111110T115433 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 10-NOV-11
You can check the image copy using:
RMAN> list copy;

specification does not match any archived log in the repository
List of Datafile Copies
=======================

Key     File S Completion Time Ckp SCN    Ckp Time
------- ---- - --------------- ---------- ---------------
671     1    A 10-NOV-11       5145845    10-NOV-11
        Name: /database/TEST/flash_recovery_area/TEST/datafile/o1_mf_system_7cqcc9jo_.dbf
        Tag: TAG20111110T115433

672     2    A 10-NOV-11       5145891    10-NOV-11
        Name: /database/TEST/flash_recovery_area/TEST/datafile/o1_mf_sysaux_7cqcd2th_.dbf
        Tag: TAG20111110T115433

673     3    A 10-NOV-11       5145901    10-NOV-11
        Name: /database/TEST/flash_recovery_area/TEST/datafile/o1_mf_undotbs1_7cqcdvz1_.dbf
        Tag: TAG20111110T115433

674     4    A 10-NOV-11       5145904    10-NOV-11
        Name: /database/TEST/flash_recovery_area/TEST/datafile/o1_mf_users_7cqcf30k_.dbf
        Tag: TAG20111110T115433

675     5    A 10-NOV-11       5145906    10-NOV-11
        Name: /database/TEST/flash_recovery_area/TEST/datafile/o1_mf_rcat_ts_7cqcf64t_.dbf
        Tag: TAG20111110T115433

List of Control File Copies
===========================

Key     S Completion Time Ckp SCN    Ckp Time
------- - --------------- ---------- ---------------
677     A 10-NOV-11       5145906    10-NOV-11
        Name: /database/TEST/flash_recovery_area/TEST/controlfile/o1_mf_TAG20111110T115433_7cqcf79c_.ctl
        Tag: TAG20111110T115433

remark: If you check the backupsets, you will see that the SPFILE has been backed up as a backupset and not as an image copy.
RMAN> list backupset;

List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
667     Full    80.00K     DISK        00:00:00     10-NOV-11
        BP Key: 668   Status: AVAILABLE  Compressed: NO  Tag: TAG20111110T115433
        Piece Name: /database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_nnsnf_TAG20111110T115433_7cqcf8gw_.bkp
  SPFILE Included: Modification time: 10-NOV-11
  SPFILE db_unique_name: TEST

How to define a new location for backups

To define a specific location as the backup destination, use the following syntax:
RMAN> BACKUP AS BACKUPSET FORMAT '/u01/oradata/backups/%F' database;

How to backup only a datafile

To back up only a datafile you can use the BACKUP DATAFILE syntax. First of all, you need to identify the datafile to back up, either by its location in the file system or by its identifier.
a) by the datafile id
RMAN> report schema;
Report of database schema for database with db_unique_name TEST

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    760      SYSTEM               YES     /database/TEST/system/system01.dbf
2    730      SYSAUX               NO      /database/TEST/system/sysaux01.dbf
3    100      UNDOTBS1             YES     /database/TEST/undo/undotbs01.dbf
4    37       USERS                NO      /database/TEST/users/users01.dbf
5    15       RCAT_TS              NO      /database/TEST/rcat/rcat01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    43       TEMP                 32767       /database/TEST/temp/temp01.dbf

RMAN> backup as backupset datafile 4;
Starting backup at 10-NOV-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00004 name=/database/TEST/users/users01.dbf
channel ORA_DISK_1: starting piece 1 at 10-NOV-11
channel ORA_DISK_1: finished piece 1 at 10-NOV-11
piece handle=/database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_nnndf_TAG20111110T143410_7cqnpl6d_.bkp tag=TAG20111110T143410 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 10-NOV-11
or using a file name:
RMAN> backup as backupset datafile '/database/TEST/users/users01.dbf';
Starting backup at 10-NOV-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00004 name=/database/TEST/users/users01.dbf
channel ORA_DISK_1: starting piece 1 at 10-NOV-11
channel ORA_DISK_1: finished piece 1 at 10-NOV-11
piece handle=/database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_nnndf_TAG20111110T143720_7cqnwk2r_.bkp tag=TAG20111110T143720 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 10-NOV-11
How to backup a tablespace

To back up only a tablespace you can use the BACKUP TABLESPACE syntax.
RMAN> BACKUP AS BACKUPSET TABLESPACE USERS;
Starting backup at 10-NOV-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00004 name=/database/TEST/users/users01.dbf
channel ORA_DISK_1: starting piece 1 at 10-NOV-11
channel ORA_DISK_1: finished piece 1 at 10-NOV-11
piece handle=/database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_nnndf_TAG20111110T144000_7cqo1jtn_.bkp tag=TAG20111110T144000 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 10-NOV-11
How to backup archivelog

RMAN> backup as backupset archivelog all;
Starting backup at 10-NOV-11
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=267 RECID=89 STAMP=766852984
channel ORA_DISK_1: starting piece 1 at 10-NOV-11
channel ORA_DISK_1: finished piece 1 at 10-NOV-11
piece handle=/database/TEST/flash_recovery_area/TEST/backupset/2011_11_10/o1_mf_annnn_TAG20111110T144305_7cqo79b4_.bkp tag=TAG20111110T144305 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 10-NOV-11
or
RMAN> backup as copy archivelog all;
Starting backup at 10-NOV-11
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=267 RECID=89 STAMP=766852984
output file name=/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_267_7cqocxbx_.arc RECID=91 STAMP=766853133
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=268 RECID=90 STAMP=766853132
output file name=/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_268_7cqocyh3_.arc RECID=92 STAMP=766853134
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
Finished backup at 10-NOV-11

If you need to back up a specific archived log, you can write:
RMAN> backup as copy archivelog like '/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_269_7cqok20g_.arc';
Starting backup at 10-NOV-11
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=145 device type=DISK
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=269 RECID=93 STAMP=766853298
output file name=/database/TEST/flash_recovery_area/TEST/archivelog/2011_11_10/o1_mf_1_269_7cqolrj4_.arc RECID=94 STAMP=766853352
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
Finished backup at 10-NOV-11

How to perform incremental backups

The starting point for incremental backups must be a level 0 incremental database backup. See below how to perform an incremental level 0 backup:
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;

Once a level 0 backup has been performed, it is then possible to perform incremental backups. There are 2 types of incremental backups:
    a) differential incremental backup
RMAN> backup incremental level 1 DATABASE;
A differential incremental backup contains the blocks that have changed since the most recent level 1 or level 0 incremental backup.
    b) cumulative incremental backup
RMAN> backup incremental level 1 CUMULATIVE DATABASE;
A cumulative incremental backup contains the blocks that have changed since the last level 0 backup.

remark: To speed up incremental backups, it is possible to enable the block change tracking feature, which records in a file all the blocks that have changed since the last backup. To do this, you can use the following syntax:
SQL> ALTER DATABASE {ENABLE|DISABLE} BLOCK CHANGE TRACKING [USING FILE '...'] [REUSE];
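For example (the tracking file path below is only an illustration), and the feature status can then be checked in V$BLOCK_CHANGE_TRACKING:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/database/TEST/bct/change_tracking.f';
SQL> SELECT status, filename, bytes FROM v$block_change_tracking;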

How to perform backups with multisections

To optimize the backup of big files, it is possible to back up a file in multiple sections. This way, a big file is split into several smaller sections that can be backed up in parallel through multiple channel allocations or parallelism.
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4;
RMAN> BACKUP as backupset INCREMENTAL LEVEL 0 DATABASE SECTION SIZE=25M;
or
RMAN> RUN {
ALLOCATE CHANNEL disk1 device type disk;
ALLOCATE CHANNEL disk2 device type disk;
ALLOCATE CHANNEL disk3 device type disk;
ALLOCATE CHANNEL disk4 device type disk;
BACKUP as backupset DATABASE SECTION SIZE=25M;
}