Monday, August 18, 2014
COMMIT_WAIT, COMMIT_LOGGING & COMMIT_WRITE
For a good explanation, see : http://antognini.ch/2012/04/commit_wait-and-commit_logging/
Recommended values : COMMIT_WAIT=WAIT; COMMIT_LOGGING=BATCH
Monday, April 28, 2014
How to see hidden parameters in Oracle
To see hidden parameters in Oracle :
set echo off lines 149 pages 9999 feed off
clear col
clear break
clear compute
ttitle off
btitle off
COLUMN Param FORMAT a42 wrap head 'Underscore Parameter'
COLUMN Descr FORMAT a75 wrap head 'Description'
COLUMN SessionVal FORMAT a7 head 'Value|Session'
COLUMN InstanceVal FORMAT a7 head 'Value|Instnc'
ttitle skip 1 center 'All Underscore Parameters' skip 2

SELECT a.ksppinm Param
     , b.ksppstvl SessionVal
     , c.ksppstvl InstanceVal
     , a.ksppdesc Descr
FROM   x$ksppi a, x$ksppcv b, x$ksppsv c
WHERE  a.indx = b.indx
AND    a.indx = c.indx
AND    a.ksppinm LIKE '/_%' escape '/'
ORDER  BY 1;
Friday, April 25, 2014
Thursday, April 24, 2014
How to disable AUTO TASKS under Oracle 11g
Run the following commands :
set lines 180 pages 1000
BEGIN
  DBMS_AUTO_TASK_ADMIN.disable(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/
BEGIN
  DBMS_AUTO_TASK_ADMIN.disable(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/
BEGIN
  DBMS_AUTO_TASK_ADMIN.disable(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);
END;
/
To see the auto task status, use the following command :
col client_name for a40
col attributes for a60
select client_name, status, attributes, service_name from dba_autotask_client;

CLIENT_NAME                              STATUS   ATTRIBUTES                                                   SERVICE_NAME
---------------------------------------- -------- ------------------------------------------------------------ ----------------------------------------------------------------
auto optimizer stats collection          DISABLED ON BY DEFAULT, VOLATILE, SAFE TO KILL
auto space advisor                       DISABLED ON BY DEFAULT, VOLATILE, SAFE TO KILL
sql tuning advisor                       DISABLED ONCE PER WINDOW, ON BY DEFAULT, VOLATILE, SAFE TO KILL
Thursday, January 9, 2014
How to find the number of rows stored in a table's blocks
To find the number of rows per block for a specific table :
SQL> SELECT DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid), COUNT(*) NUMBER_ROWS_PER_BLOCK
     FROM <my_table_name>
     GROUP BY DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid)
     ORDER BY COUNT(*) DESC;

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) NUMBER_ROWS_PER_BLOCK
------------------------------------ ---------------------
                                 355                   127
                                2142                   127
                                 363                   127
                                5550                   127
                                4934                   127
                                5614                   127
                                5552                   127
                                 785                   127
                                 789                   127

And to find the file id :
SQL> select dbms_rowid.rowid_relative_fno(rowid) FILE#,
            dbms_rowid.rowid_block_number(rowid) BLOCK#,
            count(1) NUMBER_ROWS_PER_BLOCK
     from <my_table_name>
     group by dbms_rowid.rowid_relative_fno(rowid), dbms_rowid.rowid_block_number(rowid)
     order by 1, 2
SQL> /

     FILE#     BLOCK# NUMBER_ROWS_PER_BLOCK
---------- ---------- ---------------------
         5       2024                   100
         5       2025                   115
         5       2026                   115

To dump the content of a block to the trace file, use the following command :
SQL> alter system dump datafile 5 block min 2024 block max 2024;

Now, to find the trace file, you need to find the process id of your current session :
SQL> select p.spid from v$session s, v$process p where s.sid = (select distinct sid from v$mystat) and p.addr = s.paddr;
Deadlock detected
Just a note, so I don't forget, on how to analyze a deadlock detected on TM locks.
Having this trace :
DEADLOCK DETECTED ( ORA-00060 )
[Transaction Deadlock]
The following deadlock is not an ORACLE error. It is a deadlock due to user error in the design of an application or from issuing incorrect ad-hoc SQL. The following information may aid in determining the deadlock:

Deadlock graph:
                      ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name         process session holds waits  process session holds waits
TM-0000cfcc-00000000      248     668    SX   SSX      258    1512    SX   SSX
TM-0000cfcc-00000000      258    1512    SX   SSX      248     668    SX   SSX
session 668: DID 0001-00F8-00028FD4     session 1512: DID 0001-0102-00011BA0
session 1512: DID 0001-0102-00011BA0    session 668: DID 0001-00F8-00028FD4

The lock is a TM lock: two sessions holding a SX (subexclusive) lock each try to acquire the higher SSX (share-subexclusive) lock on the same table. A TM lock ensures that the structure of a table cannot be altered while the session holding the lock is updating one or more rows in the table.
Just to remember, from the Oracle documentation @http://docs.oracle.com/cd/E11882_01/server.112/e41084/ap_locks001.htm#SQLRF55502 :
The types of row and table locks are summarized here. For a more complete discussion of the types of row and table locks, see Oracle Database Concepts.

Row Locks (TX) :
----------------
A row lock, also called a TX lock, is a lock on a single row of a table. A transaction acquires a row lock for each row modified by one of the following statements: INSERT, UPDATE, DELETE, MERGE, and SELECT ... FOR UPDATE. The row lock exists until the transaction commits or rolls back.
When a transaction obtains a row lock for a row, the transaction also acquires a table lock for the table in which the row resides. The table lock prevents conflicting DDL operations that would override data changes in a current transaction.

Table Locks (TM) :
------------------
A transaction automatically acquires a table lock (TM lock) when a table is modified with the following statements: INSERT, UPDATE, DELETE, MERGE, and SELECT ... FOR UPDATE. These DML operations require table locks to reserve DML access to the table on behalf of a transaction and to prevent DDL operations that would conflict with the transaction. You can explicitly obtain a table lock using the LOCK TABLE statement, as described in "Manual Data Locking".
A table lock can be held in any of the following modes:
* A row share lock (RS), also called a subshare table lock (SS), indicates that the transaction holding the lock on the table has locked rows in the table and intends to update them. An SS lock is the least restrictive mode of table lock, offering the highest degree of concurrency for a table.
* A row exclusive lock (RX), also called a subexclusive table lock (SX), indicates that the transaction holding the lock has updated table rows or issued SELECT ... FOR UPDATE. An SX lock allows other transactions to query, insert, update, delete, or lock rows concurrently in the same table. Therefore, SX locks allow multiple transactions to obtain simultaneous SX and SS locks for the same table.
* A share table lock (S) held by one transaction allows other transactions to query the table (without using SELECT ... FOR UPDATE) but allows updates only if a single transaction holds the share table lock. Multiple transactions may hold a share table lock concurrently, so holding this lock is not sufficient to ensure that a transaction can modify the table.
* A share row exclusive table lock (SRX), also called a share-subexclusive table lock (SSX), is more restrictive than a share table lock. Only one transaction at a time can acquire an SSX lock on a given table. An SSX lock held by a transaction allows other transactions to query the table (except for SELECT ... FOR UPDATE) but not to update the table.
* An exclusive table lock (X) is the most restrictive mode of table lock, allowing the transaction that holds the lock exclusive write access to the table. Only one transaction can obtain an X lock for a table.

From Tom Kyte :
=> Row Locks (TX)
Row-level locks are primarily used to prevent two transactions from modifying the same row. When a transaction needs to modify a row, a row lock is acquired.
=> Table Locks (TM)
Table-level locks are primarily used to do concurrency control with concurrent DDL operations, such as preventing a table from being dropped in the middle of a DML operation.

From another Oracle doc, but more confusing :
The type of lock definition. The values are:
ROWS_S (SS)   : row share lock
ROW-X (SX)    : row exclusive lock
SHARE (S)     : share lock
S/ROW-X (SSX) : share row exclusive lock
NONE          : lock requested but not yet obtained

From Jonathan Lewis @http://jonathanlewis.wordpress.com/2010/06/21/locks/ :
Lock Modes :
Filed under: Infrastructure, Locks — Jonathan Lewis @ 7:05 pm GMT Jun 21, 2010
I usually think about locks (and the meaning of the lmode and request columns in v$lock) in terms of numbers, and never seem to remember which number goes with which name – apart from mode 6 = exclusive. So I’ve finally put up a little list somewhere I’ll always be able to find it if I need it.

Value  Name(s)                     Table method (TM lock)
0      No lock                     n/a
1      Null lock (NL)              Used during some parallel DML operations (e.g. update) by the pX slaves while the QC is holding an exclusive lock.
2      Sub-share (SS)              Until 9.2.0.5/6 "select for update"
       Row-share (RS)              Since 9.2.0.1/2 used at opposite end of RI during DML
                                   Lock table in row share mode
                                   Lock table in share update mode
3      Sub-exclusive (SX)          Update (also "select for update" from 9.2.0.5/6)
       Row-exclusive (RX)          Lock table in row exclusive mode
                                   Since 11.1 used at opposite end of RI during DML
4      Share (S)                   Lock table in share mode
                                   Can appear during parallel DML with id2 = 1, in the PX slave sessions
                                   Common symptom of "foreign key locking" (missing index) problem
                                   Note that bitmap indexes on the child DON'T address the locking problem
5      Share sub-exclusive (SSX)   Lock table in share row exclusive mode
       Share row exclusive (SRX)   Less common symptom of "foreign key locking" but likely to be more frequent if the FK constraint is defined with "on delete cascade."
6      Exclusive (X)               Lock table in exclusive mode
                                   create index -- duration and timing depend on options used
                                   insert /*+ append */

(See Lock Horror for notes on the mode 3 lock during RI in 11g.)

It’s a minor irritant, of course, that some of the lock modes have two names and two abbreviations – and some documents use one name and the other abbreviation (e.g. talking about row-exclusive locks, then using the SX abbreviation). The various lock trace files seem to favour SS, SX, and SSX over RS, RX and SRX.

The change to mode 3 for “select for update” is described in Metalink (MOS) in note 3646162.8 – which references a bug of the same number describing a false deadlock issue in RAC; and it may still be possible to reverse the change in Oracle 11 (according to bug note 4969880) by setting the “_fix_control” hidden parameter to ’4969880:ON’.

The requirement for the ‘mode 2 at opposite end of RI’ may relate to problems of checking referential integrity while executing parallel DML (unpublished bug 2435787 may apply, but I have a note to myself about parallel DML resulting in deadlocks although I can’t find anything about this on Metalink). I also have a note about a session in 9.2 getting two TM locks on the same object during parallel dml, one in mode 1 with id1 = 0 and one in mode 4 with id1 = 1. I haven’t looked at this for a long time, though.

Seen from different discussions on the internet :
Lock type and lock mode :
* TYPE=TX, LMODE=6, REQUEST=4 = session 2 is attempting to insert a row with the same primary key value or unique index value as a row just inserted by session 1
* TYPE=TX, LMODE=6, REQUEST=4 = ITL problem or bitmap index fragment problem
* TYPE=TX, LMODE=6, REQUEST=6 = session 2 is attempting to update a row that is locked by session 1's transaction (likely session 1 already updated the row)
* TYPE=TM, LMODE=3, REQUEST=5 = possible missing index on a foreign key column
From Charles Hooper — June 22, 2010

and

Here is the way I remember them (the 'Sub' is explained in dbmslock.sql comments):
S and X are Shared and eXclusive locks concerning the whole table (if talking about TM locks)
SS/RS and SX/RX are Shared and eXclusive locks concerning only a Subset of the table (just some Rows)
SSX/SRX is a Shared lock for the whole table + an eXclusive lock concerning only a Subset of the table (just some Rows)
From Franck Pachot — June 22, 2010

What to keep in mind :
A row exclusive table lock (TM-SX, subexclusive) generally indicates that the transaction holding the lock has made one or more updates to rows in the table.
A share row exclusive table lock (TM-SSX) is more restrictive than a subexclusive (SX) table lock. Only one transaction at a time can acquire a SSX lock on a given table. It allows other transactions to query or lock specific rows using SELECT with the FOR UPDATE clause, but not to update the table, and it prevents other transactions from obtaining row exclusive table locks on the same table.

How to know which object is locked :
Look at TM-0000cfcc-00000000. TM indicates a table lock and 0000cfcc is the locked resource (the object_id in hexadecimal). To find the resource :
SQL> select to_number('0000cfcc', 'xxxxxxxxxx') from dual;

TO_NUMBER('0000CFCC','XXXXXXXXXX')
----------------------------------
                             53196

SQL> select * from dba_objects where object_id=53196;

OWNER OBJECT_NAME          SUBOBJECT_NAME    OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE CREATED   LAST_DDL_ TIMESTAMP  STATUS  T G S
----- -------------------- ----------------- --------- -------------- ----------- --------- --------- ---------- ------- - - -
TEST  MY_TABLE_NAME                              53196          54978 TABLE       17-OCT-13 04-DEC-13 2013-10-17 VALID   N N N
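The hex-to-decimal step can also be checked outside the database; a minimal sketch in Python, using the resource name from the trace above:

```python
# The TM resource name embeds the object_id in hexadecimal:
# TM-0000cfcc-00000000 -> object_id = 0x0000cfcc
resource = "TM-0000cfcc-00000000"
object_id_hex = resource.split("-")[1]   # "0000cfcc"
object_id = int(object_id_hex, 16)       # decimal value to look up in dba_objects
print(object_id)  # 53196
```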
Thursday, December 19, 2013
How to recover from : CRS-0184: Cannot communicate with the CRS daemon
Yesterday I faced an issue where all CRS commands failed on a cluster with the following error :
“CRS-0184: Cannot communicate with the CRS daemon”
I didn't want to reinstall a complete clusterware and lose all the clusterware configuration.
To solve the problem, I used the Perl scripts located in the $ORAGRID_HOME/crs/install/ directory.
I found that the file crsconfig_params contains the global configuration of my cluster.
This file is generated at the first installation.
It is used as input by some Perl scripts, especially roothas.pl and rootcrs.pl.
So first, I tried on each node of my cluster to reset the complete OCR/OLR configuration files.
To do that, I executed the following commands :
$ORAGRID_HOME/crs/install/roothas.pl -deconfig -force
and
$ORAGRID_HOME/crs/install/rootcrs.pl -deconfig -force
These scripts remove all the configuration done for the Oracle High Availability Services and also for the Cluster Ready Services.
The -force option tells the script not to use the clusterware services (since they are not available).
The -deconfig option removes the content of the clusterware configuration files.
To recreate the cluster, I used the following command, which by default takes the crsconfig_params file as input :
$ORAGRID_HOME/crs/install/rootcrs.pl
The magic is done.
You can then restart your cluster using the following command :
$ORAGRID_HOME/bin/crsctl start crs
See below the content of the crsconfig_params file :
SILENT=false
ORACLE_OWNER=ora11grid
ORA_DBA_GROUP=dba
ORA_ASM_GROUP=dba
LANGUAGE_ID=AMERICAN_AMERICA.AL32UTF8
TZ=Europe/Paris
ISROLLING=true
REUSEDG=false
ASM_AU_SIZE=1
USER_IGNORED_PREREQ=true
ORACLE_HOME=/opt/ora11grid
ORACLE_BASE=/opt/oracle
OLD_CRS_HOME=
JREDIR=/opt/ora11grid/jdk/jre/
JLIBDIR=/opt/ora11grid/jlib
VNDR_CLUSTER=true
OCR_LOCATIONS=/ocr/ocr1/ocr1.data,/ocr/ocr2/ocr2.data,/ocr/ocr3/ocr3.data
CLUSTER_NAME=mycluster_1_2
HOST_NAME_LIST=node1,node2
NODE_NAME_LIST=node1,node2
PRIVATE_NAME_LIST=
VOTING_DISKS=/voting/vot1/vot1.data,/voting/vot2/vot2.data,/voting/vot3/vot3.data
#VF_DISCOVERY_STRING=%s_vfdiscoverystring%
ASM_UPGRADE=false
ASM_SPFILE=
ASM_DISK_GROUP=
ASM_DISCOVERY_STRING=
ASM_DISKS=
ASM_REDUNDANCY=
CRS_STORAGE_OPTION=2
CSS_LEASEDURATION=400
CRS_NODEVIPS='node1-vip/255.255.252.0/eth0,node2-vip/255.255.252.0/eth0'
NODELIST=node1,node2
NETWORKS="eth0"/10.10.xx.xx:public,"bond0"/192.168.164.0:cluster_interconnect
SCAN_NAME=mycluster_1_2-scan
SCAN_PORT=1521
GPNP_PA=
OCFS_CONFIG=
# GNS consts
GNS_CONF=false
GNS_ADDR_LIST=
GNS_DOMAIN_LIST=
GNS_ALLOW_NET_LIST=
GNS_DENY_NET_LIST=
GNS_DENY_ITF_LIST=
#### Required by OUI add node
NEW_HOST_NAME_LIST=
NEW_NODE_NAME_LIST=
NEW_PRIVATE_NAME_LIST=
NEW_NODEVIPS='node1-vip/255.255.252.0/eth0,node2-vip/255.255.252.0/eth0'
###############
# OCR constants
#GPNPCONFIGDIR is handled differently in dev (T_HAS_WORK for all)
#GPNPGCONFIGDIR in dev expands to T_HAS_WORK_GLOBAL
GPNPCONFIGDIR=$ORACLE_HOME
GPNPGCONFIGDIR=$ORACLE_HOME
OCRLOC=
OLRLOC=
OCRID=
CLUSTER_GUID=
CLSCFG_MISSCOUNT=
####IPD/OS
CRFHOME="/opt/ora11grid"
Tuesday, December 17, 2013
How to ping an Oracle RAC instance through the RAC private interface with a specific MTU
Sometimes it is interesting to check that the Oracle private network interface supports a specific MTU.
Use the ping command as follows :
On a Linux system :
/bin/ping -s <MTU> -c 2 -I <private_source_ip> <remote_nodename_priv>
/bin/traceroute -s <private_source_ip> -r -F <remote_nodename_priv> <MTU>

On a Solaris system :
/usr/sbin/ping -i <private_source_ip> -s <remote_nodename_priv> <MTU>
/usr/sbin/traceroute -s <private_source_ip> -r -F <remote_nodename_priv> <MTU>
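One caveat worth remembering: on Linux, ping -s sets the ICMP payload size, not the full packet size. An IPv4 packet carries 20 bytes of IP header plus 8 bytes of ICMP header, so to fill a link MTU exactly the payload must be MTU minus 28 (assuming IPv4 without IP options). A quick sketch of the arithmetic:

```python
IP_HEADER = 20    # IPv4 header, no options
ICMP_HEADER = 8   # ICMP echo request header

def ping_payload(mtu):
    """ICMP payload size that fills an IPv4 packet of exactly `mtu` bytes."""
    return mtu - IP_HEADER - ICMP_HEADER

print(ping_payload(1500))  # 1472 for a standard Ethernet MTU
print(ping_payload(9000))  # 8972 for jumbo frames, common on a RAC interconnect
```

Combined with -M do on Linux ping (forbid fragmentation), a ping of this payload size only succeeds if every hop really supports the target MTU.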
Thursday, November 28, 2013
[Oracle] How to monitor real SGA memory usage
From something read on the net :
select name, round(sum(mb),1) mb, round(sum(inuse),1) inuse
from (
  select case when name = 'buffer_cache' then 'db_cache_size'
              when name = 'log_buffer'   then 'log_buffer'
              else pool
         end name,
         bytes/1024/1024 mb,
         case when name = 'buffer_cache'
              then (bytes - (select count(*) from v$bh where status = 'free')
                          * (select value from v$parameter where name = 'db_block_size')
                   )/1024/1024
              when name <> 'free memory'
              then bytes/1024/1024
         end inuse
  from v$sgastat
)
group by name;
Wednesday, November 27, 2013
[ORACLE] Allow remote access to the database as SYSDBA
Check that :
The remote_os_authent parameter is set to TRUE
The remote_login_passwordfile parameter is set to EXCLUSIVE or SHARED
To connect as SYS, check whether the user is declared in the password file :
select * from v$pwfile_users;
If it is empty, check whether the password file exists under $ORACLE_HOME/dbs.
If it does not exist, create it :
orapwd file=orapw<SID> password=<password> entries=5
If the user does not appear in the list, add it with :
grant sysdba to <user>;
[ORACLE] To list the patchsets applied on a database
select * from registry$history;
Friday, December 14, 2012
[Oracle] To generate a random value as a hex string
See : http://www.akadia.com/services/ora_important_part_4.html#Loading%20Data%20using%20External%20Tables
CREATE OR REPLACE FUNCTION secure_rand
RETURN VARCHAR2 IS
seedval RAW(80) := HEXTORAW('72DD046BF9892A3544B7587475FDF5A0'
|| 'B8F6C44F5C33B57C8156E5CBE92A8477'
|| 'F4F8FCDE5A21236CA1D7938C4D5E47A8'
|| 'D7BBC407DB6DB8EB7695BA5565218C4F'
|| 'D66D5C41523FDCBA8D92CDBD8DC75C54');
BEGIN
RETURN RAWTOHEX(DBMS_OBFUSCATION_TOOLKIT.DES3GETKEY(seed => seedval));
END;
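As an aside, DBMS_OBFUSCATION_TOOLKIT is deprecated in recent Oracle releases (DBMS_CRYPTO is the documented replacement). Outside the database, the same idea — a cryptographically random hex string — is a one-liner; a sketch in Python, where the 20-byte length is an arbitrary choice for illustration:

```python
import secrets

# 20 random bytes from the OS cryptographic RNG,
# rendered as 40 lowercase hex characters
value = secrets.token_hex(20)
print(len(value))  # 40
```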
Monday, June 11, 2012
How to know the options installed on a database
SQL> SELECT status, version, comp_name FROM dba_registry;
Friday, February 24, 2012
Oracle 11g: Methodology to trace a SQL query
Just a method to keep somewhere how to perform some tracing when testing a query or a set of queries, to know what's happening in the database :
First, get my session id :
Clean the buffer cache :
SQL> alter system flush buffer_cache;

Set the session in trace mode :
SQL> alter system set events 'sql_trace level=12';

Do your test...
Set the session out of the trace mode :
SQL> alter system set events 'sql_trace off' ;
Now look for the traces using the adrci tool :
&>: adrci

ADRCI: Release 11.2.0.1.0 - Production on Fri Feb 24 11:33:23 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
ADR base = "/opt/oracle"
adrci> show home
ADR Homes:
diag/tnslsnr/vgerndpud133/listener_ota4g
diag/tnslsnr/vgerndpud133/ota4g
diag/tnslsnr/vgerndpud133/listener
diag/rdbms/test/TEST
adrci> set home diag/rdbms/test/TEST
adrci> show tracefiles %ora% -rt
   24-FEB-12 10:56:00  diag/rdbms/test/TEST/trace/TEST_ora_5194.trc
   24-FEB-12 10:55:20  diag/rdbms/test/TEST/trace/TEST_ora_19558.trc
   24-FEB-12 10:55:16  diag/rdbms/test/TEST/trace/TEST_ora_25982.trc
   21-FEB-12 17:36:29  diag/rdbms/test/TEST/trace/TEST_ora_5093.trc
   21-FEB-12 17:36:24  diag/rdbms/test/TEST/trace/TEST_ora_5050.trc
   21-FEB-12 17:35:49  diag/rdbms/test/TEST/trace/TEST_ora_5007.trc
   21-FEB-12 17:35:33  diag/rdbms/test/TEST/trace/TEST_ora_4992.trc
   21-FEB-12 17:26:49  diag/rdbms/test/TEST/trace/TEST_ora_4202.trc
   21-FEB-12 17:21:27  diag/rdbms/test/TEST/trace/TEST_ora_3741.trc
   21-FEB-12 17:21:21  diag/rdbms/test/TEST/trace/TEST_ora_3700.trc
   12-JAN-12 17:44:07  diag/rdbms/test/TEST/incident/incdir_117854/TEST_ora_8736_i117854.trc
   12-JAN-12 10:31:11  diag/rdbms/test/TEST/incident/incdir_117846/TEST_ora_18962_i117846.trc
adrci> view TEST_ora_5194.trc

Trace file /opt/oracle/diag/rdbms/test/TEST/trace/TEST_ora_5194.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning option
ORACLE_HOME = /opt/oracle/11.2.0
System name:    Linux
Node name:      vgerndpud133
Release:        2.6.18-92.el5
Version:        #1 SMP Tue Apr 29 13:16:12 EDT 2008
Machine:        i686
Instance name: TEST
Redo thread mounted by this instance: 1
Oracle process number: 25
Unix process pid: 5194, image: oracle@vgerndpud133

*** 2012-02-24 10:53:49.389
*** SESSION ID:(135.251) 2012-02-24 10:53:49.389
*** CLIENT ID:() 2012-02-24 10:53:49.389
*** SERVICE NAME:(SYS$USERS) 2012-02-24 10:53:49.389
*** MODULE NAME:(emagent_SQL_oracle_database) 2012-02-24 10:53:49.389
*** ACTION NAME:(UserLocks) 2012-02-24 10:53:49.389

WAIT #1: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1330077229389510
WAIT #1: nam='SQL*Net message from client' ela= 186 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1330077229390167
CLOSE #1:c=0,e=10,dep=0,type=3,tim=1330077229390205
=====================
For more reference on event for tracing purposes : How To Use The New 11g Events++ Syntax For Easier SQL Tracing Of Datapump Operations? [ID 813737.1]
How to trace fast user session :
See here under an extract from dba-village @http://www.dba-village.com/village/dvp_tips.TipDetails?TipIdA=3764
If you have to trace a "fast" user session (for example, one initiated by clicking on a button on the GUI of an application), you don't have enough time to query V$SESSION to initiate a trace for the session. A very easy way is to use a database "on logon" trigger; here is an example:

CREATE OR REPLACE TRIGGER ON_LOGON_SCOTT
AFTER LOGON ON DATABASE
WHEN ( USER = 'SCOTT' )
BEGIN
  execute immediate 'alter session set events ''10046 trace name context forever, level 12''';
EXCEPTION
  WHEN OTHERS THEN NULL;
END;

Remarks:
1) The "exception" block is of utmost importance to avoid big trouble if something goes wrong with the trigger.
2) The trigger can contain more conditions (for example: if sysdate between XXX and YYY)
3) Once the trigger is enabled, all new sessions of the user will be traced. To suspend this behaviour momentarily, just perform: ALTER TRIGGER ON_LOGON_SCOTT DISABLE;
4) Suggestion: when not necessary anymore, drop the trigger instead of keeping it "disabled" forever (maybe somebody would re-enable it by mistake): DROP TRIGGER ON_LOGON_SCOTT;

See also :
http://www.oracle-base.com/articles/misc/sql-trace-10046-trcsess-and-tkprof.php
http://www.dba-village.com/village/dvp_tips.TipDetails?TipIdA=3764
http://guyharrison.typepad.com/oracleguy/2006/09/10g_tracing_qui.html
http://psoug.org/reference/dbms_monitor.html
http://dbasolutions.wikispaces.com/How+to+set+trace+on+SQLPLUS
Trace Analyzer : https://support.oracle.com/epmos/faces/ui/km/DocumentDisplay.jspx?_afrLoop=367664814786373&id=224270.1&_afrWindowMode=0&_adf.ctrl-state=4et2sref2_4

List of Oracle event codes :
Code Description 10000 Controlfile debug event, name 'control_file' 10001 Controlfile crash event1 10002 Controlfile crash event2 10003 Controlfile crash event3 10004 Controlfile crash event4 10005 Trace latch operations for debugging 10006 Testing - block recovery forced 10007 Log switch debug crash after new log select, thread %s 10008 Log switch debug crash after new log header write, thread %s 10009 Log switch debug crash after old log header write, thread %s 10010 Begin Transaction 10011 End Transaction 10012 Abort Transaction 10013 Instance Recovery 10014 Roll Back to Save Point 10015 Undo Segment Recovery 10016 Undo Segment extend 10017 Undo Segment Wrap 10018 Data Segment Create 10019 Data Segment Recovery 10020 Partial link restored to linked list (KSG) 10021 Latch cleanup for state objects (KSS) 10022 Trace ktsgsp 10023 Create Save Undo Segment 10024 Write to Save Undo 10025 Extend Save Undo Segment 10026 Apply Save Undo 10027 Latch cleanup for enqueue locks (KSQ) 10028 Latch cleanup for enqueue resources (KSQ) 10029 Session logon (KSU) 10030 Session logoff (KSU) 10031 Sort debug event (S*) 10032 Sort statistics (SOR*) 10033 Sort run information (SRD*/SRS*) 10034 Access path analysis (APA*) 10035 Parse SQL statement (OPIPRS) 10036 Create remote row source (QKANET) 10037 Allocate remote row source (QKARWS) 10038 Dump row source tree (QBADRV) 10039 Type checking (OPITCA) 10040 Dirty cache list 10041 Dump undo records skipped 10042 Trap error during undo application 10043 Check consistency of owner/waiter/converter lists in KSQ 10044 Free list undo operations 10045 Free list update operations - ktsrsp, ktsunl 10046 Enable SQL statement timing 10047 Trace switching of sessions 10048 Undo segment shrink 10049 Protect library cache memory heaps 10050 Sniper trace 10051 Trace OPI calls 10052 Don't clean up obj$ 10053 CBO Enable optimizer trace 10054 Trace UNDO handling in MLS 10055 Trace UNDO handing 10056 Dump analyze stats (kdg) 10057 Suppress file names in 
error messages 10058 Use table scan cost in tab$.spare1 10059 Simulate error in logfile create/clear 10060 CBO Enable predicate dump 10061 Disable SMON from cleaning temp segment 10062 Disable usage of OS Roles in osds 10063 Disable usage of DBA and OPER privileges in osds 10064 Thread enable debug crash level %s, thread %s 10065 Limit library cache dump information for state object dump 10066 Simulate failure to verify file 10067 Force redo log checksum errors - block number 10068 Force redo log checksum errors - file number 10069 Trusted Oracle test event 10070 Force datafile checksum errors - block number 10071 Force datafile checksum errors - file number 10072 Protect latch recovery memory 10073 Have PMON dump info before latch cleanup 10074 Default trace function mask for kst 10075 CBO Disable outer-join to regular join conversion 10076 CBO Enable cartesian product join costing 10077 CBO Disable view-merging optimization for outer-joins 10078 CBO Disable constant predicate elimination optimization 10079 Trace data sent/received via SQL*Net 10080 Dump a block on a segment list which cannot be exchanged 10081 Segment High Water Mark has been advanced 10082 Free list head block is the same as the last block 10083 A brand new block has been requested from space management 10084 Free list becomes empty 10085 Free lists have been merged 10086 CBO Enable error if kko and qka disagree on oby sort 10087 Disable repair of media corrupt data blocks 10088 CBO Disable new NOT IN optimization 10089 CBO Disable index sorting 10090 Invoke other events before crash recovery 10091 CBO Disable constant predicate merging 10092 CBO Disable hash join 10093 CBO Enable force hash joins 10094 Before resizing a data file 10095 Dump debugger commands to trace file 10096 After the cross instance call when resizing a data file 10097 After generating redo when resizing a data file 10098 After the OS has increased the size of a data file 10099 After updating the file header with the new 
file size 10100 After the OS has decreased the size of a data file 10101 Atomic redo write recovery 10102 Switch off anti-joins 10103 CBO Disable hash join swapping 10104 Dump hash join statistics to trace file 10105 CBO Enable constant pred trans and MPs w WHERE-clause 10106 CBO Disable evaluating correlation pred last for NOT IN 10107 CBO Always use bitmap index 10108 CBO Don't use bitmap index 10109 CBO Disable move of negated predicates 10110 CBO Try index rowid range scans 10111 Bitmap index creation switch 10112 Bitmap index creation switch 10113 Bitmap index creation switch 10114 Bitmap index creation switch 10115 CBO Bitmap optimization use maximal expression 10116 CBO Bitmap optimization switch 10117 CBO Disable new parallel cost model 10118 CBO Enable hash join costing 10119 QKA Disable GBY sort elimination 10120 Generate relative file # different from absolute 10121 CBO Don't sort bitmap chains 10122 Disable transformation of count(col) to count(*) 10123 QKA Disable Bitmap And-EQuals 10124 Force creation of segmented arrays by kscsAllocate 10125 Disable remote sort elimination 10126 Debug oracle java xa 10127 Disable remote query block operation 10128 Dump Partition Pruning Information 10129 Alter histogram lookup for remote queries 10130 Sort disable readaheads 10131 Use v$sql_plan code path for explain plan 10132 Dump plan after compilation 10133 Testing for SQL Memory Management 10134 Tracing for SQL Memory Management for session 10135 CBO do not count 0 rows partitions 10136 CBO turn off fix for bug 1089848 10137 CBO turn off fix for bug 1344111 10138 CBO turn off fix for bug 1577003 10139 CBO turn off fix for bug 1386119 10140 CBO turn off fix for bug 1332980 10141 CBO disable additional keys for inlist in bitmap optimization 10142 CBO enable dynamic selectivity estimation 10143 CBO force dynamic selectivity estimation (if enabled) 10145 Test auditing network errors 10146 Enable Oracle TRACE collection 10148 Use pre-7.3.3 random generator 10149 
Allow the creation of constraints with illegal date constants 10150 Import exceptions 10151 Force duplicate dependency removal 10152 CBO don't consider function costs in plans 10153 Switch to use public synonym if private one does not translate 10154 Switch to disallow synonyms in DDL statements 10155 CBO disable generation of transitive OR-chains 10156 CBO disable index fast full scan 10157 CBO disable index access path for in-list 10158 CBO preserve predicate order in post-filters 10159 CBO disable order-by sort pushdown into domain indexes 10160 CBO disable use of join index 10161 CBO recursive semi-join on/off-switch 10162 CBO join-back elimination on/off-switch 10163 CBO join-back elimination on/off-switch 10164 CBO disable subquery-adjusted cardinality fix 10165 Mark session to be aborted during shutdown normal 10166 Trace long operation statistics updates 10167 CBO use old index MIN/MAX optimization 10168 CBO disable single-table predicate predicate generation 10169 CBO disable histograms for multi partitions 10170 CBO use old bitmap costing 10171 CBO disable transitive join predicates 10172 CBO force hash join back 10173 CBO no constraint-based join-back elimination 10174 View join-back elimination switch 10175 CBO star transformation switch 10176 CBO colocated join switch 10177 CBO colocated join switch 10178 CBO turn off hash cluster filtering through memcmp 10179 CBO turn off transitive predicate replacement 10180 Temp table transformation print error messages 10181 CBO disable multi-column in-list processing 10182 CBO disable generation of implied predicates 10183 CBO disable cost rounding 10184 CBO disable OR-exp if long inlist on bitmap column 10185 CBO force index joins 10186 CBO disable index join 10187 CBO additional index join switch 10188 "CBO additional index join switch 10189 CBO turn off FFS null fix 10190 Analyze use old frequency histogram collection and density 10191 Avoid conversion of in-lists back to OR-expanded form 10192 Nopushdown 
when number of groups exceed number of rows 10193 Force repeatable sampling with specified seed 10194 CBO disable new LIKE selectivity heuristic 10195 CBO don't use check constraints for transitive predicates 10196 CBO disable index skip scan 10197 CBO force index skip scan 10198 Check undo record 10199 Set parameter in session 10200 Consistent read buffer status 10201 Consistent read undo application 10202 Consistent read block header 10203 Block cleanout 10204 Signal recursive extend 10205 Row cache debugging 10206 Transaction table consistent read 10207 Consistent read transactions' status report 10208 Consistent read loop check 10209 Enable simulated error on controlfile 10210 Check data block integrity 10211 Check index block integrity 10212 Check cluster integrity 10213 Crash after controlfile write 10214 Simulate write errors on controlfile 10215 Simulate read errors on controlfile 10216 Dump controlfile header 10217 Debug sequence numbers 10218 Dump uba of applied undo 10219 Monitor multi-pass row locking 10220 Show updates to the transaction table 10221 Show changes done with undo 10222 Row cache 10223 Transaction layer - turn on verification codes 10224 Index block split/delete trace 10225 Free/used extent row cache 10226 Trace CR applications of undo for data operations 10227 Verify (multi-piece) row structure 10228 Trace application of redo by kcocbk 10229 Simulate I/O error against datafiles 10230 Check redo generation by copying before applying 10231 Skip corrupted blocks on _table_scans_ 10232 Dump corrupted blocks symbolically when kcbgotten 10233 Skip corrupted blocks on index operations 10234 Trigger event after calling kcrapc to do redo N times 10235 Check memory manager internal structures 10236 Library cache manager 10237 Simulate ^C (for testing purposes) 10238 Instantiation manager 10239 Multi-instance library cache manager 10240 Dump dba's of blocks that we wait for 10241 Remote SQL execution tracing/validation 10242 Suppress OER 2063 (for 
testing distrib w/o different error log) 10243 Simulated error for test %s of K2GTAB latch cleanup 10244 Make tranids in error msgs print as 0.0.0 (for testing) 10245 Simulate lock conflict error for testing PMON 10246 Print trace of PMON actions to trace file 10247 Turn on scgcmn tracing. (VMS ONLY) 10248 Turn on tracing for dispatchers 10249 Turn on tracing for multi-stated servers 10250 Trace all allocate and free calls to the topmost SGA heap 10251 Check consistency of transaction table and undo block 10252 Simulate write error to data file header 10253 Simulate write error to redo log 10254 Trace cross-instance calls 10255 Pl/sql parse checking 10256 Turn off shared server load balancing 10257 Trace shared server load balancing 10258 Force shared servers to be chosen round-robin 10259 Get error message text from remote using explicit call 10260 Trace calls to SMPRSET (VMS ONLY) 10261 Limit the size of the PGA heap 10262 Don't check for memory leaks 10263 Don't free empty PGA heap extents 10264 Collect statistics on context area usage (x$ksmcx) 10265 Keep random system generated output out of error messages 10266 Trace OSD stack usage 10267 Inhibit KSEDMP for testing 10268 Don't do forward coalesce when deleting extents 10269 Don't do coalesces of free space in SMON 10270 Debug shared cursors 10271 Distributed transaction after COLLECT 10272 Distributed transaction before PREPARE 10273 Distributed transaction after PREPARE 10274 Distributed transaction before COMMIT 10275 Distributed transaction after COMMIT 10276 Distributed transaction before FORGET 10277 Cursor sharing (or not) related event (used for testing) 10278 Internal testing 10279 Simulate block corruption in kdb4chk 10280 Internal testing - segmentation fault during crash recovery 10281 Maximum time to wait for process creation 10282 Inhibit signalling of other backgrounds when one dies 10283 Simulate asynch I/O never completing 10284 Simulate zero/infinite asynch I/O buffering 10285 Simulate 
controlfile header corruption 10286 Simulate controlfile open error 10287 Simulate archiver error 10288 Do not check block type in ktrget 10289 Do block dumps to trace file in hex rather than fromatted 10290 Kdnchk - checkvalid event - not for general purpose use. 10291 Die in tbsdrv to test controlfile undo 10292 Dump uet entries on a 1561 from dtsdrv 10293 Dump debugging information when doing block recovery 10294 Enable PERSISTENT DLM operations on non-compliant systems 10295 Die after file header update durning cf xact 10296 Disable ORA-379 10297 Customize dictionary object number cache 10298 Ksfd i/o tracing 10299 Trace prefetch tracking decisions made by CKPT 10300 Disable undo compatibility check at database open 10301 Enable LCK timeout table consistency check 10302 Trace create or drop internal trigger 10303 Trace loading of library cache for internal triggers 10304 Trace replication trigger 10305 Trace updatable materialized view trigger 10306 Trace materialized view log trigger 10307 Trace RepCat execution 10308 Replication testing event 10309 Trigger Debug event 10310 Trace synchronous change table trigger 10311 Disable Flashback Table Timestamp checking 10312 Allow disable to log rows into the mapping table 10319 Trace PGA statistics maintenance 10320 Enable data layer (kdtgrs) tracing of space management calls 10321 Datafile header verification debug failure. 
10323 Before committing an add datafile command 10324 Enable better checking of redo logs errors 10325 Trace control file record section expand and shrink operations 10326 Clear logfile debug crash at %s, log %s 10327 Simulate ORA-00235 error for testing 10328 Disable first-to-mount split-brain error, for testing 10329 Simulate lost write, test detection by two-pass recovery 10330 Clear MTTR statistics in checkpoint progress record 10331 Simulate resilvering during recovery 10332 Force ALTER SYSTEM QUIESCE RESTRICTED command to fail 10336 Do remote object transfer using remote SQL 10337 Enable padding owner name in slave sql 10340 Buffer queues sanity check for corrupted buffers 10341 Simulate out of PGA memory in DBWR during object reuse 10342 Raise unknown exception in ACQ_ADD when checkpointing 10343 Raise an out of memory exception-OER 4031 in ACQ_ADD 10344 Simulate kghxal returning 0 in ACQ_ADD but no exception 10345 Validate queue when linking or unlinking a buffer 10346 Check that all buffers for checkpoint have been written 10347 Dump active checkpoint entries and checkpoint buffers 10348 Test abnormal termination of process initiating file checkpoint 10349 Do not allow ckpt to complete 10350 Simulate more than one object & tsn id in object reuse 10351 Size of slots 10352 Report direct path statistics 10353 Number of slots 10354 Turn on direct read path for parallel query 10355 Turn on direct read path for scans 10356 Turn on hint usage for direct read 10357 Turn on debug information for direct path 10359 Turn off updates to control file for direct writes 10360 Enable dbwr consistency checking 10365 Turn on debug information for adaptive direct reads 10370 Parallel query server kill event 10371 Disable TQ hint 10372 Parallel query server kill event proc 10373 Parallel query server kill event 10374 Parallel query server interrupt (validate lock value) 10375 Turn on checks for statistics rollups 10376 Turn on table queue statistics 10377 Turn off load 
balancing 10378 Force hard process/range affinity 10379 Direct read for rowid range scans (unimplemented) 10380 Kxfp latch cleanup testing event 10381 Kxfp latch cleanup testing event 10382 Parallel query server interrupt (reset) 10383 Auto parallelization testing event 10384 Parallel dataflow scheduler tracing 10385 Parallel table scan range sampling method 10386 Parallel SQL hash and range statistics 10387 Parallel query server interrupt (normal) 10388 Parallel query server interrupt (failure) 10389 Parallel query server interrupt (cleanup) 10390 Trace parallel query slave execution 10391 Trace PX granule allocation/assignment 10392 Parallel query debugging bits 10393 Print parallel query statistics 10394 Generate a fake load to test adaptive and load balancing 10395 Adjust sample size for range table queues 10396 Circumvent range table queues for queries 10397 Suppress verbose parallel coordinator error reporting 10398 Enable timeouts in parallel query threads 10399 Trace buffer allocation 10400 Turn on system state dumps for shutdown debugging 10401 Turn on IPC (ksxp) debugging 10402 Turn on IPC (skgxp) debugging 10403 Fake CPU number for default degree of parallelism 10404 Crash dbwr after write 10405 Emulate broken mirrors 10406 Enable datetime TIMESTAMP, INTERVAL datatype creation 10407 Enable datetime TIME datatype creation 10408 Disable OLAP builtin window function usage 10410 Trigger simulated communications errors in KSXP 10411 Simulate errors in IMR 10412 Trigger simulated errors in CGS/CM interface 10425 Enable global enqueue service open event trace 10426 Enable global enqueue service convert event trace 10427 Enable global enqueue service traffic controller event trace 10428 Enable tracing of global enqueue service distributed resource 10429 Enable tracing of global enqueue service IPC calls 10430 Enable tracing of global enqueue service AST calls 10431 Enable verification messages on pi consistency 10432 Enable tracing of global cache service fusion 
calls 10433 Global enqueue service testing event 10434 Enable tracing of global enqueue service muliple LMS 10435 Enable tracing of global enqueue service deadlock detetction 10450 Signal ctrl-c in kdddca (drop column) after n rows 10500 Turn on traces for SMON 10510 Turn off SMON check to offline pending offline rollback segment 10511 Turn off SMON check to cleanup undo dictionary 10512 Turn off SMON check to shrink rollback segments 10515 Turn on event to use physical cleanout 10550 Signal error during create as select/create index after n rows 10560 Block type '%s' 10561 Block type '%s', data object# %s 10562 Error occurred while applying redo to data block (file# %s, block# %s) 10563 Test recovery had to corrupt data block (file# %s, block# %s) in order to proceed 10564 Tablespace %s 10565 Another test recovery session is active 10566 Test recovery has used all the memory it can use 10567 Redo is inconsistent with data block (file# %s, block# %s) 10568 Failed to allocate recovery state object: out of SGA memory 10570 Test recovery complete 10571 Test recovery canceled 10572 Test recovery canceled due to errors 10573 Test recovery tested redo from change %s to %s 10574 Test recovery did not corrupt any data block 10575 Give up restoring recovered datafiles to consistent state: out of memory 10576 Give up restoring recovered datafiles to consistent state: some error occurred 10577 Can not invoke test recovery for managed standby database recovery 10578 Can not allow corruption for managed standby database recovery 10579 Can not modify control file during test recovery 10580 Can not modify datafile header during test recovery 10581 Can not modify redo log header during test recovery 10582 The control file is not a backup control file 10583 Can not recovery file %s renamed as missing during test recovery 10584 Can not invoke parallel recovery for test recovery 10585 Test recovery can not apply redo that may modify control file 10586 Test recovery had to corrupt 1 
data block in order to proceed 10587 Invalid count for ALLOW n CORRUPTION option 10588 Can only allow 1 corruption for normal media/standby recovery 10589 Test recovery had to corrupt %s data blocks in order to proceed 10590 Kga (argus debugger) test flags 10591 Kga (argus debugger) test flags 10592 Kga (argus debugger) test flags 10593 Kga (argus debugger) test flags 10594 Kga (argus debugger) test flags 10595 Kga (argus debugger) test flags 10596 Kga (argus debugger) test flags 10597 Kga (argus debugger) test flags 10598 Kga (argus debugger) test flags 10599 Kga (argus debugger) test flags 10600 Check cursor frame allocation 10601 Turn on debugging for cursor_sharing (literal replacement) 10602 Cause an access violation (for testing purposes) 10603 Cause an error to occur during truncate (for testing purposes) 10604 Trace parallel create index 10605 Enable parallel create index by default 10606 Trace parallel create index 10607 Trace index rowid partition scan 10608 Trace create bitmap index 10609 Trace for array index insertion 10610 Trace create index pseudo optimizer 10611 Causes migration to fail - testing only 10612 Prints debug information for auto-space managed segments 10613 Prints debug information for auto-space managed segments 10614 Operation not allowed on this segment 10615 Invalid tablespace type for temporary tablespace 10616 Operation not allowed on this tablespace 10617 Cannot create rollback segment in this tablespace 10618 Operation not allowed on this segment 10619 Avoid assertions when possible 10620 Operation not allowed on this segment 10621 Data block does not belong to the segment 10622 Test/trace online index (re)build 10623 Enable Index range scan Prefetch - testing only 10650 Disable cache-callback optimisation 10651 Incorrect file number block number specified 10666 Do not get database enqueue name 10667 Cause sppst to check for valid process ids 10690 Set shadow process core file dump type (Unix only) 10691 Set background process 
core file type (Unix only) 10700 Alter access violation exception handler 10701 Dump direct loader index keys 10702 Enable histogram data generation 10703 Simulate process death during enqueue get 10704 Print out information about what enqueues are being obtained 10705 Print Out Tracing information for every I/O done by ODSs 10706 Print out information about global enqueue manipulation 10707 Simulate process death for instance registration 10708 Print out Tracing information for skxf multi instance comms 10709 Enable parallel instances in create index by default 10710 Trace bitmap index access 10711 Trace bitmap index merge 10712 Trace bitmap index or 10713 Trace bitmap index and 10714 Trace bitmap index minus 10715 Trace bitmap index conversion to rowids 10716 Trace bitmap index compress/decompress 10717 Trace bitmap index compaction trace for index creation 10718 Event to disable automatic compaction after index creation 10719 Trace bitmap index dml 10720 Trace db scheduling 10721 Internal testing - temp table transformation 10722 Set parameters for CPU frequency calculation (debug) 10723 Internal testing - release buffer for buffer cache shrink 10730 Trace row level security policy predicates 10731 Dump SQL for CURSOR expressions 10740 Disables fix for bug 598861 10750 Test rollback segment blksize guessing for index array insert 10800 Disable Smart Disk scan 10801 Enable Smart Disk trace 10802 Reserved for Smart Disk 10803 Write timing statistics on cluster database recovery scan 10804 Reserved for ksxb 10806 Switch to 7.3 mode when detaching sessions 10807 Disable user id check when switching to a global transaction 10810 Trace snapshot too old 10811 Trace block cleanouts 10812 Trace Consistent Reads 10830 Trace group by sort row source 10841 Default un-inintialized charact set form to SQLCS_IMPLICIT 10850 Enable time manager tracing 10851 Allow Drop command to drop queue tables 10852 Enable dumping of the AQ statistics hash table 10853 Event for AQ statistics 
latch cleanup testing 10856 Disable AQ propagator from using streaming 10857 Force AQ propagator to use two-phase commit 10858 Crash the AQ propagator at different stages of commit 10859 Disable updates of message retry count 10860 Event for AQ admin disable new name parser 10861 Disable storing extended message properties 10862 Resolve default queue owner to current user in enqueue/dequeue 10900 Extent manager fault insertion event #%s 10902 Disable seghdr conversion for ro operation 10903 Force tablespaces to become locally managed 10904 Allow locally managed tablespaces to have user allocation 10905 Do cache verification (kcbcxx) on extent allocation 10906 Unable to extend segment after insert direct load 10907 Trace extent management events 10908 Trace temp tablespace events 10909 Trace free list events 10924 Import storage parse error ignore event 10925 Trace name context forever 10926 Trace name context forever 10927 Trace name context forever 10928 Trace name context forever 10929 Trace name context forever 10930 Trace name context forever 10931 Trace name context forever 10932 Trace name context forever 10933 Trace name context forever 10934 Reserved. Used only in version 7.x. 10935 Reserved. Used only in version 7.x. 
10936 Trace name context forever 10937 Trace name context forever 10938 Trace name context forever 10939 Trace name context forever 10940 Trace name context forever 10941 Trace name context forever 10943 Trace name context forever 10944 Trace name context forever 10945 Trace name context forever 10975 Trace execution of parallel propagation 10976 Internal package related tracing 10977 Trace event for RepAPI 10979 Trace flags for join index implementation 10980 Prevent sharing of parsed query during Materialized View query generation 10981 Dscn computation-related event in replication 10982 Event to turn off CDC-format MV Logs 10983 Event to enable Create_Change_Table debugging 10984 Subquery materialized view-related event 10985 Event for NULL refresh of materialized views 10986 Do not use HASH_AJ in refresh 10987 Event for the support of caching table with object feature 10988 Event to get exclusive lock during materialized view refresh in IAS 10989 Event to internally create statistics MV 10999 Do not get database enqueue name 10999 End Pseudo-error debugging events

How to trace my session :
First, get my session id :
SQL> select distinct sid from v$mystat;

       SID
----------
        29

Update the trace file name to contain a specific word; the trace file will be easier to identify because it will have the following format : <SID>_ora_<pid>_<my_word>.trc
SQL> alter session set tracefile_identifier="FMOSSE_SESSION";

Session altered.

So the trace file will have the following format : <SID>_ora_<pid>_FMOSSE_SESSION.trc

Under the sys user, set the session in trace mode :
SQL> exec dbms_monitor.session_trace_enable( session_id=>29, waits=>TRUE, binds=>TRUE);

Run the workload you want to trace, then deactivate tracing for the session :
SQL> exec dbms_monitor.session_trace_disable( session_id=>29);

Generate a more readable trace file using tkprof :
Linux> tkprof FME_ora_2270_FMOSSE_SESSION.trc /tmp/mytrace.txt explain=sys/<password for sys> sys=no waits=yes
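As a quick illustration of the naming convention above, here is a small Python sketch (not part of Oracle, the helper name is made up) that builds the expected trace file name from the instance name, the process id and the TRACEFILE_IDENTIFIER value :

```python
def trace_file_name(instance: str, pid: int, identifier: str = "") -> str:
    """Build the expected <SID>_ora_<pid>[_<identifier>].trc file name.

    Illustration only: mirrors the naming convention described above,
    where TRACEFILE_IDENTIFIER appends a suffix to the trace file name.
    """
    name = f"{instance}_ora_{pid}"
    if identifier:
        name += f"_{identifier}"
    return name + ".trc"

# With tracefile_identifier set to FMOSSE_SESSION, as in the example:
print(trace_file_name("FME", 2270, "FMOSSE_SESSION"))
# prints FME_ora_2270_FMOSSE_SESSION.trc
```

This matches the file name passed to tkprof in the command above.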
lundi 6 février 2012
Oracle11g : Monitoring table or index
To monitor a table :
SQL> ALTER TABLE <my_table> monitoring;

To monitor an index :
SQL> ALTER INDEX <my_index> monitoring;

To flush the monitoring statistics to the DBA views :
SQL> exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

To see the monitoring information in the DBA views :
SQL> select * from dba_tab_modifications;

To stop monitoring, use :

SQL> ALTER TABLE <my_table> nomonitoring;
mercredi 11 janvier 2012
Oracle 11g: automatic startup/shutdown
To configure an instance for automatic startup and shutdown, edit the "/etc/oratab" file and for each instance define the following line :
SID:<value_of_the_oracle_home>:[Y|N]

For example, to register the DB11G instance for automatic startup :
DB11G:/opt/oracle/11.2.0:Y

Next, create a file called "/etc/init.d/dbora" as the root user, containing the following :
#!/bin/sh
# chkconfig: 345 99 10
# description: Oracle auto start-stop script.
#
ORACLE_HOME=/opt/oracle/11.2.0
ORACLE_OWNER=oracle

if [ ! -f $ORACLE_HOME/bin/dbstart ]
then
    echo "Oracle startup: cannot start"
    exit
fi

case "$1" in
'start')
    # Start the Oracle databases:
    # The following command assumes that the oracle login
    # will not prompt the user for any values
    su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/lsnrctl start"
    su - $ORACLE_OWNER -c $ORACLE_HOME/bin/dbstart
    touch /var/lock/subsys/dbora
    ;;
'stop')
    # Stop the Oracle databases:
    # The following command assumes that the oracle login
    # will not prompt the user for any values
    su - $ORACLE_OWNER -c $ORACLE_HOME/bin/dbshut
    su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/lsnrctl stop"
    rm -f /var/lock/subsys/dbora
    ;;
esac

Set the dbora file in the dba group :
$> chgrp dba /etc/init.d/dbora

Use the chmod command to set the privileges to 750 :
$> chmod 750 /etc/init.d/dbora

Associate the dbora service with the appropriate run levels and set it to auto-start using the following command :
$> chkconfig --add dbora
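To illustrate the /etc/oratab format described above (SID:ORACLE_HOME:Y|N), here is a hypothetical Python sketch of the parsing that a script like dbstart performs when deciding which instances to auto-start (the helper name and input are made up for illustration):

```python
def parse_oratab(text: str):
    """Yield (sid, oracle_home, autostart) tuples from oratab content.

    Comment lines (#) and blank lines are skipped; the third field
    ('Y' or 'N') is the auto-start flag, as described above.
    """
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split(":")
        if len(parts) < 3:
            continue  # malformed line, ignore
        sid, home, flag = parts[0], parts[1], parts[2]
        entries.append((sid, home, flag.upper() == "Y"))
    return entries

oratab = """
# /etc/oratab sample
DB11G:/opt/oracle/11.2.0:Y
TESTDB:/opt/oracle/11.2.0:N
"""
print(parse_oratab(oratab))
```

Only DB11G would be started automatically in this sample, since its flag is Y.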
links :
For Oracle 10.2 see http://docs.oracle.com/cd/B19306_01/server.102/b15658/strt_stp.htm#CFAHAHGA
For Oracle 11.2 see http://docs.oracle.com/cd/E11882_01/server.112/e10839/strt_stp.htm#BABGDGHF
Oracle 11g: Exp/imp how to get metadata
The idea is to use the exp and imp utilities to extract only the DDL, in order to duplicate a schema.
To do this, first export the schema using exp :
$> export ORACLE_HOME=/opt/oracle/10.2.0
$> export PATH=$PATH:$ORACLE_HOME/bin
$> export ORACLE_SID=MYDB
$> export NLS_LANG=AMERICAN_AMERICA.UTF8
$> exp userid=system file=export.dmp log=export.log owner=<schemaname_to_export> consistent=y
Import the dump but specify that the metadata (DDL) must be redirected to a file :

$> imp userid=system file=export.dmp indexfile=metadata.txt fromuser=<schemaname_to_import>
lundi 5 décembre 2011
Oracle11g: Flashback database
Flashback database allows a user to rewind an entire database to a point in the past.
Flashback database uses dedicated flashback logs that contain the database blocks that have been modified. These logs are stored in the Fast Recovery Area.
Configuring a database for flashback database
- The database must be in archivelog
SQL> ALTER DATABASE ARCHIVELOG;
- The target retention policy must be configured
SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=3000 SCOPE=BOTH;
In this example, the retention is configured for 3000 minutes. There is no guarantee on the retention target; it depends on the amount of data stored in the flashback logs and on the size of the fast recovery area. If space is needed in the fast recovery area, flashback logs may be lost.
- Flashback Database must be explicitly activated
SQL> ALTER DATABASE FLASHBACK ON;
note: This activation must be done when the database is mounted in an exclusive mode (NOT OPENED!).
How to flashback a database
To perform a flashback, the database must be in MOUNT EXCLUSIVE state :
SQL> FLASHBACK DATABASE TO TIMESTAMP (sysdate - interval '1' DAY);
SQL> FLASHBACK DATABASE TO SCN #;
SQL> FLASHBACK DATABASE TO RESTORE POINT <restore_point>;
SQL> FLASHBACK DATABASE TO SEQUENCE #;

Once the flashback is done, the database must be reopened with the RESETLOGS option.
Flashback operations can be followed through the V$SESSION_LONGOPS view.
note : Flashback database cannot be used if :
- The control file has been restored or re-created
- A tablespace has been dropped
- The size of a file has been reduced
- The restore target is before a previous use of RESETLOGS (in this case, use TO BEFORE RESETLOGS)
To monitor disk space usage :
SQL> SELECT ESTIMATED_FLASHBACK_SIZE, FLASHBACK_SIZE FROM V$FLASHBACK_DATABASE_LOG;

To monitor the SCN window :
SQL> SELECT OLDEST_FLASHBACK_SCN, OLDEST_FLASHBACK_TIME FROM V$FLASHBACK_DATABASE_LOG;

To monitor flashback statistics :
SQL> SELECT BEGIN_TIME, END_TIME, FLASHBACK_DATA, DB_DATA, REDO_DATA, ESTIMATED_FLASHBACK_SIZE FROM V$FLASHBACK_DATABASE_STAT;

The v$flashback_database_stat view keeps 24 hours of statistics; each row corresponds to one hour of flashback activity. This view is useful to estimate the size required for the flashback logs in relation to the retention target defined by the administrator. FLASHBACK_DATA is the number of bytes of flashback data and REDO_DATA the number of bytes of redo generated for the period; DB_DATA is the number of bytes of data blocks read and written.
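The sizing exercise described above can be sketched as simple arithmetic: take the hourly FLASHBACK_DATA figures collected from V$FLASHBACK_DATABASE_STAT and extrapolate the average rate over the retention window. This is a rough illustration under stated assumptions, not Oracle's internal formula (the helper name is made up):

```python
def flashback_space_estimate(hourly_flashback_bytes, retention_minutes):
    """Estimate flashback log space for a retention target.

    hourly_flashback_bytes: per-hour FLASHBACK_DATA values (bytes), as
    returned by V$FLASHBACK_DATABASE_STAT (up to 24 rows).
    Rough extrapolation: average hourly rate * retention window.
    """
    if not hourly_flashback_bytes:
        return 0
    avg_per_hour = sum(hourly_flashback_bytes) / len(hourly_flashback_bytes)
    retention_hours = retention_minutes / 60.0
    return int(avg_per_hour * retention_hours)

# 24 hours averaging 100 MB of flashback data per hour, with the
# retention target of 3000 minutes (50 hours) used earlier in this post:
mb = 1024 * 1024
print(flashback_space_estimate([100 * mb] * 24, 3000) // mb)  # prints 5000 (MB)
```

Such an estimate can then be compared against ESTIMATED_FLASHBACK_SIZE and the fast recovery area quota.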
The V$RECOVERY_FILE_DEST view gives information on fast recovery area disk usage (size quota, used space, reclaimable space and number of files for each location of the fast recovery area).
Guaranteed restore point
With flashback database, it is possible to define a guaranteed restore point, which ensures that the fast recovery area keeps all the information needed to restore the database to that specific restore point (no flashback logs are lost in this case). To define a guaranteed restore point, you can issue :
SQL> CREATE RESTORE POINT <restore_point> GUARANTEE FLASHBACK DATABASE;

To create this kind of restore point, the following prerequisites are mandatory :
- The COMPATIBLE parameter must be greater than or equal to 10.2
- The database must be in archivelog
- Archive logs must be available from before the creation of the guaranteed restore point
- The Fast recovery area must be configured and available
Oracle11g : Flashback Data Archive [Oracle Total Recall]
Oracle Total Recall records all table modifications in dedicated tablespaces associated with a FLASHBACK ARCHIVE area.
To create a flashback archive you need the FLASHBACK ARCHIVE ADMINISTER privilege. With this privilege you can create a flashback archive :
SQL> CREATE FLASHBACK ARCHIVE fda TABLESPACE fda_tbs QUOTA 100M RETENTION 5 YEARS;
To register a table with a flashback archive you need the FLASHBACK ARCHIVE privilege.
SQL> GRANT FLASHBACK ARCHIVE ON fda TO <user>;

With this privilege you can register a table with a specific flashback archive :
SQL> ALTER TABLE my_table FLASHBACK ARCHIVE fda;
With a flashback archive registered for a table, you can query the table as it was in the past :
SQL> SELECT * FROM my_table AS OF TIMESTAMP to_timestamp( '05/12/2001 10:41:00', 'DD/MM/YYYY HH24:MI:SS');
note : To make Flashback Data Archive fda the default Flashback Data Archive:
SQL> ALTER FLASHBACK ARCHIVE fda SET DEFAULT;

note : to modify the retention policy :
SQL> ALTER FLASHBACK ARCHIVE fda MODIFY RETENTION 2 YEARS;

note : to purge data older than a specific time :
SQL> ALTER FLASHBACK ARCHIVE fda PURGE BEFORE TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' DAY);

note : to drop an archive :
SQL> DROP FLASHBACK ARCHIVE fda;

note : common DDL statements are supported on tables with a flashback archive, but for more complex ones it is necessary to disassociate the table before proceeding with the modification. Total Recall provides the DISASSOCIATE_FBA and REASSOCIATE_FBA procedures of the DBMS_FLASHBACK_ARCHIVE package for this purpose.
note :
- Automatic undo management must be enabled.
- The tablespace in which flashback data archive is created must have Automatic Segment Space Management activated.
- You cannot drop a table that is managed by Total Recall, but you can truncate it.
vendredi 2 décembre 2011
Oracle11g: Flashback Transaction Query, Flashback Transaction
Flashback transaction allows the user to roll back an entire transaction.
This technology is based on the availability of undo data and redo logs.
Pre-requisite
You need to :
- Activate supplemental log data
- Activate supplemental log data primary key
- Grant execute on DBMS_FLASHBACK package
- Grant the SELECT ANY TRANSACTION privilege to the user
These can be done like this :
SQL> alter database add supplemental log data;
SQL> alter database add supplemental log data (primary key) columns;
SQL> grant execute on DBMS_FLASHBACK to <user>;
SQL> grant SELECT ANY TRANSACTION to <user>;
Transaction back-out
See the excellent example from this oracle blog to perform a transaction backout.
See also the Oracle documentation.
jeudi 1 décembre 2011
Oracle 11g: Flashback Query, Flashback Query Version, Flashback Table
The flashback technology uses undo data to retrieve information (except for flashback database). Ensure that the UNDO_RETENTION parameter is set correctly to guarantee your flashback operations over a specific period of time.
FlashBack Query
Flashback query allows a user to query the database and see the data as it was at a specific point in the past.
To perform a query in the past you can write :
SQL> SELECT employee_id, salary from HR.employees AS OF TIMESTAMP <time_stamp in past>;

or using a specific SCN number :
SQL> SELECT employee_id, salary from HR.employees AS OF SCN #;

note: It is not possible to flashback query a table if a DDL statement has been performed after the targeted SCN or TIMESTAMP.
FlashBack Query Version
Flashback query version allows the user to display all the modifications performed on a set of rows between two timestamps or two SCNs :
SQL> select VERSIONS_XID, employee_id, salary from HR.employees VERSIONS BETWEEN TIMESTAMP <T1> AND <T2>;

or using SCNs :
SQL> select VERSIONS_XID, employee_id, salary from HR.employees VERSIONS BETWEEN SCN # AND #;
VERSIONS_XID is a pseudo-column that identifies the transaction responsible for the row modification (deleted rows are also included).
note: Only committed rows are taken into account.
note: Flashback query version cannot be used on :
- External tables
- Temporary tables
- Views
- V$ views
FlashBack Table
Flashback table restores a table to a specific time or SCN.
You must :
- be granted FLASHBACK TABLE or FLASHBACK ANY TABLE
- have the SELECT, UPDATE, DELETE and ALTER privileges on the targeted table
- enable ROW MOVEMENT on the targeted table :
SQL> ALTER TABLE <table_name> ENABLE ROW MOVEMENT;
To flashback a table at a specific time in the past :
SQL> FLASHBACK TABLE <table_name> TO TIMESTAMP to_timestamp( '01/12/2011 17:35:00', 'DD/MM/YYYY HH24:MI:SS');

note: Flashback table cannot be performed on tables in the SYSTEM tablespace, views, V$ views or external tables. It also cannot be performed if a DDL statement has been executed after the targeted point in time.
note: Flashback table generates redo and undo data.
It is also possible to flash back a table to a specific SCN or restore point.
SQL> FLASHBACK TABLE <table_name> TO SCN #;
SQL> FLASHBACK TABLE <table_name> TO RESTORE POINT <restore_point_name>;
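Putting the steps above together, a minimal end-to-end sequence could look like this (a sketch; the table name and the 10-minute window are illustrative):

```sql
SQL> ALTER TABLE hr.employees ENABLE ROW MOVEMENT;

-- Return the table to its state of 10 minutes ago
SQL> FLASHBACK TABLE hr.employees TO TIMESTAMP SYSTIMESTAMP - INTERVAL '10' MINUTE;

-- or, if a restore point was created beforehand:
SQL> FLASHBACK TABLE hr.employees TO RESTORE POINT before_batch;
```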
mardi 29 novembre 2011
Oracle 11g: Database block recovery
There are some prerequisites for database block recovery :
- The database must be in ARCHIVELOG mode
- The database must be mounted or opened
The rman utility can use the following components for block recovery purposes:
- A full database backup or an incremental level 0 backup can be used for block recovery
- Archived redo logs, if available
- Flashback database logs, if available
To check which blocks need recovery, you can look at:
- The V$DATABASE_BLOCK_CORRUPTION view
- Results returned by the following rman commands :
- LIST FAILURE
- VALIDATE {DATABASE|TABLESPACE|DATAFILE}
- BACKUP DATABASE VALIDATE
- ANALYZE {TABLE|INDEX}
- Trace files (for location look at the V$DIAG_INFO)
- DBVERIFY utility
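For example, after a VALIDATE or BACKUP ... VALIDATE run, the corruption view can be queried directly (these columns belong to the standard V$DATABASE_BLOCK_CORRUPTION view):

```sql
SQL> SELECT file#, block#, blocks, corruption_change#, corruption_type
     FROM v$database_block_corruption;
```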
To recover a block : after identifying the block number and associated file number of the corrupted block to repair, you can issue:
RMAN> RECOVER DATAFILE 4 BLOCK 2;
RMAN> RECOVER DATAFILE 1 BLOCK 9 DATAFILE 4 BLOCK 254 DATAFILE 5 BLOCK 454;
or, to repair all blocks registered in V$DATABASE_BLOCK_CORRUPTION :
RMAN> RECOVER CORRUPTION LIST;
vendredi 18 novembre 2011
Oracle 11g: Managing SQL Plan
Oracle says about sql plan management :
"SQL plan management prevents performance regressions resulting from sudden changes to the execution plan of a SQL statement by providing components for capturing, selecting, and evolving SQL plan information."
During the life cycle of a product, an event sometimes causes the SQL execution plan of a statement to change and introduces performance regressions. From the tuner/profiler point of view, it is relevant to be aware of such a modification and to check that the new SQL execution plan is correct.
This section will give some inputs to achieve this goal.
How to register SQL Plan
SQL plan management provides a mechanism that records and evaluates the execution plans of SQL statements over time, and builds SQL plan baselines.
Each business SQL statement will be linked to a SQL plan baseline that contains the history of all execution plans built by Oracle over time for this statement.
As a starting point, we need to capture the execution plan of each SQL statement and add it to the SQL plan baseline.
This can be done automatically during a product run, or one by one by selecting a specific SQL statement.
Automatic SQL plan capture
To enable automatic plan capture, set the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES initialization parameter to TRUE. By default, this parameter is set to FALSE.
1) Set the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES parameter to TRUE
alter system set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=TRUE;
2) Run all your product use cases to capture all SQL execution plans
3) Restore the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES parameter to its default value.
alter system set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=FALSE;
4) Look at the DBA_SQL_PLAN_BASELINES table to see all sql plans in relation to the schema you need to monitor:
select sql_handle, plan_name, enabled, accepted, fixed, OPTIMIZER_COST, sql_text from dba_sql_plan_baselines where PARSING_SCHEMA_NAME='FMOSSE';

SQL_HANDLE                     PLAN_NAME                      ENA ACC FIX OPTIMIZER_COST SQL_TEXT
------------------------------ ------------------------------ --- --- --- -------------- --------------------------------------------------------------------------------
SYS_SQL_495711156939d306       SQL_PLAN_4kpsj2pnmmns68e3968ca YES YES NO               0 delete from FMOSSE.S_PROPERTY where PROPERTYENTRY = :1
SYS_SQL_4a52f8b0626b170b       SQL_PLAN_4nnrsq1j6q5sb659ff4e8 YES YES NO               1 UPDATE S_ACCOUNT SET ACCOUNTSTATUS = :1 WHERE (ACCOUNTID = :2 )
SYS_SQL_4c596a52d15cd98f       SQL_PLAN_4sqbaab8ptqcg657f43ee YES YES NO               2 SELECT COUNT(FILEID) FROM S_FILE
SYS_SQL_4db332738e676fb7       SQL_PLAN_4vctkff76fvxr5c9cc6bc YES YES NO               2 select FOLDERID, FOLDERNAME, FOLDERDATECREATION, FOLDERIDPARENT, FOLDERSHARED, F
SYS_SQL_4e84758c387d2c28       SQL_PLAN_4x13pjhw7ub181e49339d YES YES NO               2 delete from FMOSSE.S_FILE
SYS_SQL_4fc3800bd9da7784       SQL_PLAN_4zhw01gcxnxw4291c9d40 YES YES NO               1 SELECT FOLDERID, FOLDERSHARED, FOLDERNAME, FOLDERSIZE, FOLDERDATECREATION, FOLDE
SYS_SQL_513d44ab9e0b94d8       SQL_PLAN_52ga4pfg0r56sacc4c75e YES YES NO               9 select count(*), sum(f.filesize), round(avg(f.filesize), 2), max(f.filesize), mi
SYS_SQL_51a8ae22d631730b       SQL_PLAN_53a5f4bb32wsb7b701d8a YES YES NO               3 select PROPERTYENTRY, PROPERTYVALUE from FMOSSE.S_PROPERTY order by PROPERTYENTR
SYS_SQL_51e053d0f1bfb831       SQL_PLAN_53s2mu3svzf1jc7624a29 YES YES NO               4 SELECT t0.ACCOUNTID, t0.ACCOUNTNAME, COUNT(t3.FILEID), TO_CHAR(NVL(SUM(t3.FILESI
SYS_SQL_521f93c697ccfbb2       SQL_PLAN_547wmsubwtyxk29b1bccc YES YES NO               2 select arf.accountid from s_accountresourcefolder arf inner join s_folder f on f
SYS_SQL_56ceb93fab2da70d       SQL_PLAN_5dmpt7ypkv9sd42ba1be5 YES YES NO               0 DELETE FROM S_ACCOUNT WHERE (ACCOUNTID = :1 )
Manual SQL plan capture
1) To capture a SQL execution plan manually, take a look at the V$SQL view and select the statement you need to monitor :
SELECT sql_id, sql_fulltext FROM V$SQL WHERE sql_text LIKE '%SELECT COUNT(FILEID)%' and PARSING_SCHEMA_NAME='FMOSSE';

SQL_ID        SQL_FULLTEXT
------------- --------------------------------------------------------------------------------
1xjd9vbxt5f7w SELECT COUNT(FILEID) FROM S_FILE
2) Using the SQL_ID create a SQL plan baseline for the statement.
variable cnt number;
EXECUTE :cnt :=DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id=>'1xjd9vbxt5f7w');
3) Check the DBA_SQL_PLAN_BASELINES table
select B.SQL_HANDLE, B.PLAN_NAME, B.SQL_TEXT, B.ORIGIN, B.PARSING_SCHEMA_NAME, B.ENABLED, B.ACCEPTED, B.FIXED from DBA_SQL_PLAN_BASELINES B, V$SQL S WHERE S.SQL_PLAN_BASELINE=B.PLAN_NAME AND S.SQL_ID='1xjd9vbxt5f7w';

SQL_HANDLE                     PLAN_NAME                      SQL_TEXT                          ORIGIN         PARSIN ENA ACC FIX
------------------------------ ------------------------------ --------------------------------- -------------- ------ --- --- ---
SYS_SQL_4c596a52d15cd98f       SQL_PLAN_4sqbaab8ptqcg657f43ee SELECT COUNT(FILEID) FROM S_FILE  AUTO-CAPTURE   FMOSSE YES YES NO
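Plans can also be loaded in bulk from a SQL tuning set instead of from the cursor cache (a sketch; the tuning set name MY_STS is illustrative):

```sql
variable cnt number;
EXECUTE :cnt := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(sqlset_name => 'MY_STS');
```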
SQL Plan Baseline activation
Now we need to tell the optimizer to refer to the SQL plan baselines defined for each statement :
alter system set optimizer_use_sql_plan_baselines=TRUE;
For a given query, the optimizer first looks whether a baseline exists and compares the cost of the execution plan from the baseline with the cost of the execution plan of the current query. If the cost has changed and the execution plan of the query is better than the execution plan of the baseline, the new execution plan is added to the baseline.
To illustrate this:
1) I create first a table and populate some data in it.
create table test (id NUMBER, value VARCHAR2(30)) NOLOGGING;

declare
  i number;
begin
  for i in 1..300000 loop
    insert /*+ APPEND */ into test values( i, 'LABEL# ' || i);
  end loop;
  commit;
end;
/
2) Then I perform a select :
alter system set cursor_sharing=force;
select * from test where id=215445;
3) Using the V$SQL view, I find the sql_id of the new statement and load its execution plan into the SQL plan baseline:
variable cnt number;
EXECUTE :cnt :=DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id=>'05abfh82j5z85');
4) Now I display the baseline:
select SQL_HANDLE, PLAN_NAME, OPTIMIZER_COST, SQL_TEXT, ORIGIN, PARSING_SCHEMA_NAME, ENABLED, ACCEPTED, FIXED from DBA_SQL_PLAN_BASELINES;

SQL_HANDLE                     PLAN_NAME                      OPTIMIZER_COST SQL_TEXT                                ORIGIN         PARSING_SCHEMA_NAME ENA ACC FIX
------------------------------ ------------------------------ -------------- --------------------------------------- -------------- ------------------- --- --- ---
SYS_SQL_d223cef6c0ff72ee       SQL_PLAN_d48yfyv0gywrf97bbe3d0            104 select * from test where id=:"SYS_B_0"  MANUAL-LOAD    TEST                YES YES NO

5) Now, I create a constraint on the id column and thus an associated index:
alter table test add constraint PK_TEST_ID primary key( id);
Do another select with a different id value.
6) Look again for the baseline:
select SQL_HANDLE, PLAN_NAME, OPTIMIZER_COST, SQL_TEXT, ORIGIN, PARSING_SCHEMA_NAME, ENABLED, ACCEPTED, FIXED from DBA_SQL_PLAN_BASELINES;

SQL_HANDLE                     PLAN_NAME                      OPTIMIZER_COST SQL_TEXT                                ORIGIN         PARSING_SCHEMA_NAME ENA ACC FIX
------------------------------ ------------------------------ -------------- --------------------------------------- -------------- ------------------- --- --- ---
SYS_SQL_d223cef6c0ff72ee       SQL_PLAN_d48yfyv0gywrf97bbe3d0            104 select * from test where id=:"SYS_B_0"  MANUAL-LOAD    TEST                YES YES NO
SYS_SQL_d223cef6c0ff72ee       SQL_PLAN_d48yfyv0gywrf7ad59d6c              2 select * from test where id=:"SYS_B_0"  AUTO-CAPTURE   TEST                YES NO  NO

What we can see is that a new SQL plan has been added to the baseline with a better optimizer cost (2 instead of 104).
We can also see that the first SQL plan has been added manually to the baseline (ORIGIN=MANUAL-LOAD) and that the second one has been added by Oracle itself (ORIGIN=AUTO-CAPTURE).
As we added a new index, the SQL plan of the statement changed due to a database structure modification. In this case the new plan has been added to the baseline with ENABLED=YES (it can be chosen by the optimizer) but with ACCEPTED=NO because the DBA has not yet validated the new SQL plan.
If we want to force the use of the first plan we need to set the plan as FIXED :
variable cnt number;
exec :cnt :=DBMS_SPM.ALTER_SQL_PLAN_BASELINE( SQL_HANDLE => 'SYS_SQL_d223cef6c0ff72ee', PLAN_NAME => 'SQL_PLAN_d48yfyv0gywrf97bbe3d0', ATTRIBUTE_NAME => 'fixed', ATTRIBUTE_VALUE => 'YES');
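As an alternative to fixing the old plan, the unaccepted plan can be verified and, if it performs better, accepted with DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE (a sketch reusing the handle from the example above; the function returns a CLOB report):

```sql
set long 10000
variable rpt clob;
exec :rpt := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(sql_handle => 'SYS_SQL_d223cef6c0ff72ee');
print :rpt
```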
How to overwrite an existing execution plan
Sometimes during a profiling session we need to overwrite the execution plan of a specific statement because we have improved its performance.
To do that we need :
1) to disable the current sql plan in the baseline:
variable cnt number;
exec :cnt :=DBMS_SPM.ALTER_SQL_PLAN_BASELINE( SQL_HANDLE => 'SYS_SQL_d223cef6c0ff72ee', PLAN_NAME => 'SQL_PLAN_d48yfyv0gywrf97bbe3d0', ATTRIBUTE_NAME => 'enabled', ATTRIBUTE_VALUE => 'NO' );
2) register the new plan under the same sql_handle but with the sql_id and plan_hash_value of the new statement (values found in the V$SQL view):
variable cnt number;
exec :cnt :=DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE( SQL_ID => '72fb94sq0karh', PLAN_HASH_VALUE => 1357081020, SQL_HANDLE => 'SYS_SQL_d223cef6c0ff72ee' );
3) Ensure that the new plan has been accepted in the baseline :
select SQL_HANDLE, PLAN_NAME, OPTIMIZER_COST, SQL_TEXT, ORIGIN, PARSING_SCHEMA_NAME, ENABLED, ACCEPTED, FIXED from DBA_SQL_PLAN_BASELINES;

SQL_HANDLE                     PLAN_NAME                      OPTIMIZER_COST SQL_TEXT                                ORIGIN         PARSING_SCHEMA_NAME ENA ACC FIX
------------------------------ ------------------------------ -------------- --------------------------------------- -------------- ------------------- --- --- ---
SYS_SQL_d223cef6c0ff72ee       SQL_PLAN_d48yfyv0gywrf97bbe3d0              2 select * from test where id=:"SYS_B_0"  MANUAL-LOAD    TEST                YES YES NO
How to display an execution plan registered under the baseline
To view the plans stored in the SQL plan baseline for a given statement, use the DISPLAY_SQL_PLAN_BASELINE function of the DBMS_XPLAN package:
select * from table( dbms_xplan.display_sql_plan_baseline( sql_handle=>'SYS_SQL_d223cef6c0ff72ee', format=>'basic'));

--------------------------------------------------------------------------------
SQL handle: SYS_SQL_d223cef6c0ff72ee
SQL text: select * from test where id=:"SYS_B_0"
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_d48yfyv0gywrf7ad59d6c  Plan id: 2060819820
Enabled: YES  Fixed: NO  Accepted: NO  Origin: AUTO-CAPTURE
--------------------------------------------------------------------------------
Plan hash value: 3803811902

--------------------------------------------------
| Id  | Operation                   | Name       |
--------------------------------------------------
|   0 | SELECT STATEMENT            |            |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST       |
|   2 |   INDEX UNIQUE SCAN         | PK_TEST_ID |
--------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_d48yfyv0gywrf97bbe3d0  Plan id: 2545673168
Enabled: YES  Fixed: NO  Accepted: YES  Origin: MANUAL-LOAD
--------------------------------------------------------------------------------
Plan hash value: 1357081020

----------------------------------
| Id  | Operation         | Name |
----------------------------------
|   0 | SELECT STATEMENT  |      |
|   1 |  TABLE ACCESS FULL| TEST |
----------------------------------

34 rows selected.
Oracle documentation
http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/optplanmgmt.htm
Oracle 11g: How to restore a backup
Commands used by RMAN are :
- RESTORE Used to retrieve a set of files from a backup.
RESTORE {DATABASE | TABLESPACE name [,name] | DATAFILE name [,name]}
You can restore the database to a specific SCN, time, restore point or redo log sequence number.
- RECOVER Applies modifications registered inside incremental backups, archived log and redo logs to all the files previously restored.
RECOVER {DATABASE | TABLESPACE name [,name] | DATAFILE name [,name]}
How to restore/recover an entire database
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
note: In case you have lost your online redo logs, you can tell RMAN not to apply redo logs during the recovery process :
RMAN> RECOVER DATABASE NOREDO;
How to restore/recover a tablespace
If the tablespace is not a system tablespace, the database can remain open during the restoration; otherwise the database must be in MOUNT state.
RMAN> sql 'alter tablespace users offline immediate';
RMAN> RESTORE TABLESPACE USERS;
RMAN> RECOVER TABLESPACE USERS;
RMAN> sql 'alter tablespace users online';
How to restore/recover a datafile
If the datafile does not belong to a system tablespace, the database can remain open during the restoration; otherwise the database must be in MOUNT state.
In any case, the datafile must be taken offline before the restoration.
RMAN> sql 'alter database datafile 4 offline immediate';
RMAN> RESTORE DATAFILE 4;
RMAN> RECOVER DATAFILE 4;
RMAN> sql 'alter database datafile 4 online';
How to restore/recover a database until the last successful transaction
RMAN> RESTORE DATABASE UNTIL CANCEL;
RMAN> RECOVER DATABASE UNTIL CANCEL;
How to restore/recover a database at a specific point in time
The following example restores the database as it was 10 minutes ago :
RMAN> RECOVER DATABASE UNTIL TIME 'sysdate-10/60/24';
If you use a specific time, it is then necessary to define the NLS_LANG and NLS_DATE_FORMAT environment variables. These parameters are used by RMAN to decode the date and time passed for the restoration.
$> export NLS_LANG=american_america.us7ascii
$> export NLS_DATE_FORMAT='DD/MM/YYYY HH24:MI:SS'
$> rman target / catalog rcatowner/rcatowner@rcat_db
RMAN> RESTORE DATABASE UNTIL TIME '18/11/2011 10:40:00';
RMAN> RECOVER DATABASE UNTIL TIME '18/11/2011 10:40:00';
RMAN> ALTER DATABASE OPEN RESETLOGS;
or using the SET command :
$> export NLS_LANG=american_america.us7ascii
$> export NLS_DATE_FORMAT='DD/MM/YYYY HH24:MI:SS'
$> rman target / catalog rcatowner/rcatowner@rcat_db
RMAN> run {
  SET UNTIL TIME '18/11/2011 10:40:00';
  RESTORE DATABASE;
  RECOVER DATABASE;
}
RMAN> ALTER DATABASE OPEN RESETLOGS;
remark : If you want to check that the point in time is correct by verifying the restored data, open the database in READ ONLY mode. This way the redo logs are not reset and you can move your point in time if it is not the right one.
RMAN> ALTER DATABASE OPEN READ ONLY;
Once the point in time is correct you can open the database and reset the redo logs.
RMAN> ALTER DATABASE OPEN RESETLOGS;
This remark also applies to restorations at a specific SCN or sequence number.
How to restore/recover a database at a specific SCN
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> run {
  SET UNTIL SCN #;
  RESTORE DATABASE;
  RECOVER DATABASE;
}
RMAN> ALTER DATABASE OPEN RESETLOGS;
remark : "SET UNTIL" does not work with a tablespace! It is not possible to restore only one tablespace to a specific time in the past; the database will not open.
remark : If the SCN you need to reach is not part of the current incarnation of the database, you need to reset the database to the incarnation just before your SCN and then perform the recovery. Here is an example of restoring the database at SCN 5726167:
RMAN> run {
2> set until scn 5726167;
3> restore database;
4> recover database;
5> }

executing command: SET until clause

Starting restore at 18-11-11 13:28:10
using channel ORA_DISK_1
using channel ORA_SBT_TAPE_1

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 11/18/2011 13:28:10
RMAN-20208: UNTIL CHANGE is before RESETLOGS change

RMAN> list incarnation;

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- ------- ---------- ----------
1       1       TEST     2058702964       PARENT  1          13-08-09 23:00:48
2       2       TEST     2058702964       PARENT  754488     30-08-11 17:42:15
3       3       TEST     2058702964       PARENT  5571932    16-11-11 14:39:08
4       4       TEST     2058702964       ORPHAN  5576815    16-11-11 16:11:53
5       5       TEST     2058702964       ORPHAN  5577397    16-11-11 16:18:46
6       6       TEST     2058702964       ORPHAN  5598121    16-11-11 16:37:33
7       7       TEST     2058702964       PARENT  5576413    16-11-11 18:02:08
8       8       TEST     2058702964       CURRENT 5726169    18-11-11 11:41:56

RMAN> shutdown immediate;
RMAN> startup mount;
RMAN> reset database to incarnation 7;

database reset to incarnation 7

RMAN> run {
2> set until scn 5726167;
3> restore database;
4> recover database;
5> }
RMAN> ALTER DATABASE OPEN RESETLOGS;

note : The RESET DATABASE command needs to alter the control file, so the database must be mounted but not open. With an open database you will receive the following error: ORA-19910: can not change recovery target incarnation in control file
How to restore/recover a database at a specific sequence number
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> run {
  SET UNTIL SEQUENCE #;
  RESTORE DATABASE;
  RECOVER DATABASE;
}
RMAN> ALTER DATABASE OPEN RESETLOGS;
How to restore a control file
To restore a control file the database must be in NOMOUNT state:
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE;
RMAN> ALTER DATABASE MOUNT;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
If you want to restore from an autobackup and you have an RMAN catalog :
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
If you don't have a catalog, you need to set the DBID :
RMAN> SET DBID 1090778584;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
How to restore a parameter file
To restore a parameter file, the database must be in NOMOUNT state:
RMAN> STARTUP FORCE NOMOUNT;
RMAN> RESTORE SPFILE FROM AUTOBACKUP;
RMAN> STARTUP FORCE;
You can change the name of the parameter file :
RMAN> RESTORE SPFILE TO <parameter_file_name> FROM AUTOBACKUP;
Using SQL, it is possible to generate the parameter file using :
SQL> CREATE PFILE [='pfile_name'] FROM { SPFILE [='spfile_name'] | MEMORY };
or
SQL> CREATE SPFILE [='spfile_name'] FROM { PFILE [='pfile_name'] | MEMORY };
How to restore/recover a database at a specific incarnation
RMAN> LIST INCARNATION;
RMAN> RESET DATABASE TO INCARNATION #;
jeudi 17 novembre 2011
Oracle 11g: Restore points
A restore point gives a name to a point in time.
You can create a restore point at the current time :
SQL> CREATE RESTORE POINT <name_of__restore_point>;
or create a point in the past :
SQL> CREATE RESTORE POINT <name_of__restore_point> AS OF SCN #;
The restore point is stored in the control file within the period of retention defined by the CONTROL_FILE_RECORD_KEEP_TIME parameter.
If you want the restore point to never expire, add the PRESERVE keyword to the previous commands.
SQL> CREATE RESTORE POINT <name_of__restore_point> PRESERVE;
You can see all restore points created through the database view V$RESTORE_POINT or use RMAN:
RMAN> LIST RESTORE POINT ALL;
You can delete a restore point using:
SQL> DROP RESTORE POINT <name_of__restore_point>;
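Besides PRESERVE, a restore point can also be created as guaranteed, which retains all flashback logs required to flash the whole database back to that point (this assumes a fast recovery area is configured; the name is illustrative):

```sql
SQL> CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;
```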
Oracle 11g: How to trace in alert log file
To write a message in the alert log file :
exec dbms_system.ksdwrt(2, 'This is a message sent at ' || TO_CHAR(SYSDATE, 'dd/mm/yyyy HH24"H"MI'));
note : For more information about the dbms_system package see http://psoug.org/reference/dbms_system.html
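The first argument of ksdwrt selects the destination. dbms_system is an undocumented package, but the commonly reported convention is the following (to be verified on your release):

```sql
exec dbms_system.ksdwrt(1, 'written to the session trace file');
exec dbms_system.ksdwrt(2, 'written to the alert log');
exec dbms_system.ksdwrt(3, 'written to both');
```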
mardi 15 novembre 2011
Oracle 11g: Monitoring RMAN backups
If you are connected as SYSDBA
The following dynamic views give information about backups performance and statistics:
View Name | Description
---|---
V$BACKUP_SET | Backup sets created
V$BACKUP_PIECE | All backup pieces created
V$DATAFILE_COPY | All image copies created
V$BACKUP_FILES | All files created during backups
V$BACKUP_DATAFILE | All datafiles backed up; also gives information for monitoring incremental backups
If you are connected as the catalog owner
View Name | Description
---|---
RC_BACKUP_SET | Backup sets created
RC_BACKUP_PIECE | All backup pieces created
RC_DATAFILE_COPY | All image copies created
RC_BACKUP_FILES | All files created during backups
RC_BACKUP_DATAFILE | All datafiles backed up; also gives information for monitoring incremental backups
note: The following SQL query gives information about the number of blocks read for an incremental level 1 backup.
sqlplus / as sysdba
SQL> select file#, avg(datafile_blocks), avg(blocks_read), avg(blocks_read/datafile_blocks) * 100 as PCT_READ_FOR_BACKUP, avg(blocks) from v$backup_datafile where used_change_tracking = 'YES' and incremental_level > 0 group by file#;
Incremental level 1 backups are interesting as long as the value of the PCT_READ_FOR_BACKUP column stays below 50%. Above this value, it is recommended to perform a level 0 incremental backup.
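The used_change_tracking = 'YES' filter above assumes that block change tracking is enabled; if it is not, it can be activated and checked as follows (the tracking file path is illustrative):

```sql
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/bct.chg';
SQL> SELECT status, filename FROM v$block_change_tracking;
```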
jeudi 10 novembre 2011
Oracle 11g: RMAN catalog resynchronization
To synchronize the target database control file with the catalog :
RMAN> RESYNC CATALOG;
To check if backups on file system are deleted but still referenced in the rman catalog:
a) for backupset
RMAN> crosscheck backupset;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=143 device type=DISK
using channel ORA_SBT_TAPE_1
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=43mraiho_1_1 RECID=59 STAMP=766855736
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=44mrail9_1_1 RECID=60 STAMP=766855849
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=4cmraj7q_1_1 RECID=62 STAMP=766856442
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=4dmral4o_1_1 RECID=63 STAMP=766858392
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=4lmranr6_1_1 RECID=65 STAMP=766861158
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=4mmranun_1_1 RECID=66 STAMP=766861271
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=4nmrao37_1_1 RECID=67 STAMP=766861415
Crosschecked 7 objects

For a specific device type :
RMAN> crosscheck backupset device type sbt;
b) for image copy
RMAN> crosscheck copy;
The crosscheck checks whether the backup still exists on the file system; if not, the backup piece is marked as EXPIRED.
To list expired backupset :
RMAN> list expired backupset; RMAN> list expired backupset summary;
To list expired copy :
RMAN> list expired copy;
To delete expired backupset or copy:
RMAN> delete expired backupset; RMAN> delete expired copy;
It is also possible to check for obsolete backups. An obsolete backup is a backup that is no longer needed to guarantee the retention policy.
To check for obsolete backup :
RMAN> report obsolete;
To delete obsolete backups :
RMAN> delete obsolete;
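What is considered obsolete is driven by the configured retention policy, which can be displayed and changed like this (the 7-day recovery window is illustrative):

```sql
RMAN> SHOW RETENTION POLICY;
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
```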