Mihajlo Tekic


_direct_read_decision_statistics_driven, _small_table_threshold and direct path reads on partitioned tables in 11.2.0.3 (Part 2)

Thu, 2014-04-10 01:30
This is a continuation of my last post regarding direct path reads on partitioned tables in Oracle 11.2.0.3.

To recap, the behavior I observed is that direct path reads will be performed if the number of blocks for all partitions that will be accessed exceeds the _small_table_threshold value. That is, if a table consists of 10 partitions, each having 100 blocks, and a query goes after two of the partitions, direct path reads will be performed if _small_table_threshold is lower than 200.

Also, regardless of how much data has been cached (in the buffer cache) for each of the partitions, if direct path reads are to be performed, all partition segments will be scanned directly. So, it is an all-or-nothing situation.

I also indicated that the _direct_read_decision_statistics_driven parameter was set to TRUE (default) for the tests done in my earlier post.

What is _direct_read_decision_statistics_driven anyway? According to the parameter description, it enables the direct path read decision to be based on optimizer statistics. If the parameter is set to FALSE, Oracle will use segment headers to determine how many blocks the segment has (read Tanel Poder’s blogpost for more information).
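
If you want to check what these two parameters are set to in your own sandbox, the usual approach is to query the undocumented x$ksppi/x$ksppcv views as SYS. A minimal sketch (my addition, not from the original post; x$ views are unsupported, so treat accordingly):

SELECT x.ksppinm name, y.ksppstvl value
FROM x$ksppi x, x$ksppcv y
WHERE x.indx = y.indx
AND x.ksppinm IN ('_direct_read_decision_statistics_driven',
                  '_small_table_threshold');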

Let’s see how queries that access table partitions (full scan) behave if the _direct_read_decision_statistics_driven parameter is set to FALSE in 11.2.0.3. My expectation was that the behavior should be the same as if it were set to TRUE. I thought that once Oracle gets the information about the number of blocks in each of the partitions, it would use the same calculation as if the parameter were set to TRUE. Let’s see.

But, before moving forward, a small disclaimer: do not perform these tests in production or any other important environment. Changing undocumented parameters should be done only under the guidance of Oracle Support. The information presented here is for demonstration purposes only.

I will use the same table, TEST_PART, that I used in my earlier post.

I started with flushing the buffer cache (to make sure none of the partitions had blocks in the cache).

I set the _direct_read_decision_statistics_driven parameter to FALSE and ran a query that selects data from the PART_1 partition only. Each of the partitions contains 4000 rows stored in 65 blocks, plus one segment header block.

_small_table_threshold in my sandbox environment was set to 117.


SQL> alter session set "_direct_read_decision_statistics_driven"=FALSE;

Session altered.

SQL> SELECT count(1) FROM test_part WHERE col1 in (1);

COUNT(1)
----------
4000


As expected, no direct path reads were performed (I used my sese.sql script that scans v$sesstat for statistics that match a given keyword):


SQL> @sese direct

no rows selected


Now let’s see what happens with a query that accesses the first two partitions. Remember, if the _direct_read_decision_statistics_driven parameter is set to TRUE, this query would perform direct path reads because the number of blocks in both partitions, 130 (2x65), exceeds the _small_table_threshold value (117).


SQL> select count(1) from test_part where col1 in (1,2);

COUNT(1)
----------
8000

SQL> @sese direct

no rows selected


No direct reads. Definitely different compared to when _direct_read_decision_statistics_driven was set to TRUE.

How about a query that accesses three partitions:


SQL> select count(1) from test_part where col1 in (1,2,3);

COUNT(1)
----------
12000

SQL> @sese direct

no rows selected


Still no direct path reads.

How about if we access all 7 partitions:


SQL> select count(1) from test_part where col1 in (1,2,3,4,5,6,7);

COUNT(1)
----------
28000

SQL> @sese direct

no rows selected


No direct path reads.

So what is going on? It seems that when _direct_read_decision_statistics_driven is set to FALSE, Oracle makes the decision on a partition-by-partition basis. If the number of blocks in a partition is less than or equal to _small_table_threshold, the buffer cache will be used; otherwise, direct path reads will be performed.

What if some of the partitions were already cached in the buffer cache?

In the next test I’ll:
  • Flush the buffer cache again
  • Set _direct_read_decision_statistics_driven to FALSE
  • Run a query that accesses the first two partitions
  • Decrease the value for _small_table_threshold to 60
  • Run a query that accesses the first three partitions.
  • Check if direct path reads were performed and how many
With this test I’d like to see if Oracle will utilize the buffer cache when the segment data is cached but the number of blocks in the partition is greater than _small_table_threshold.


SQL> alter system flush buffer_cache;

System altered.

SQL> alter session set "_direct_read_decision_statistics_driven"=FALSE;

Session altered.

SQL> select count(1) from test_part where col1 in (1,2);

COUNT(1)
----------
8000

SQL> @sese direct

no rows selected


At this point, the PART_1 and PART_2 partitions should be entirely in the buffer cache. If you want, you can query X$KCBOQH to confirm this (from a different session, logged in as SYS).


SQL> conn /as sysdba
Connected.
SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
2 from X$KCBOQH b, dba_objects o
3 where b.obj#=o.data_object_id
4 and o.object_name='TEST_PART'
5 group by o.subobject_name, b.obj#
6 order by 1;

SUBOBJECT_NAME                       OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1                             146024             66
PART_2                             146025             66


As expected, both partitions are in the buffer cache.

Now let’s decrease _small_table_threshold to 60 and run a query that scans the first three partitions:


SQL> alter session set "_small_table_threshold"=60;

Session altered.

SQL> alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> select count(1) from test_part where col1 in (1,2,3);

COUNT(1)
----------
12000

SQL> alter session set events '10046 trace name context off';

SQL> @sese direct

       SID         ID NAME                                                    VALUE
---------- ---------- -------------------------------------------------- ----------
         9         76 STAT.consistent gets direct                                65
         9         81 STAT.physical reads direct                                 65
         9        380 STAT.table scans (direct read)                              1


Here they are: 65 direct path reads and one table scan (direct read), which means one of the partitions was scanned using direct path reads. Which one? Yes, you are right, the one that is not in the buffer cache (PART_3 in this example).

If you query X$KCBOQH again you can see that only one block of PART_3 is in the cache. That is the segment header block.


SQL> conn /as sysdba
Connected.
SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
2 from X$KCBOQH b, dba_objects o
3 where b.obj#=o.data_object_id
4 and o.object_name='TEST_PART'
5 group by o.subobject_name, b.obj#
6 order by 1;

SUBOBJECT_NAME                       OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1                             146024             66
PART_2                             146025             66
PART_3                             146026              1 <===


This means that when _direct_read_decision_statistics_driven is set to FALSE in 11.2.0.3, Oracle uses a totally different calculation compared to the one used when the parameter is set to TRUE (see my earlier post).

Moreover, it seems Oracle examines each of the partitions separately (which I initially expected to be the case even when _direct_read_decision_statistics_driven is set to TRUE) and applies the rules described in Alex Fatkulin’s blogpost. That is, if any of the following is true, Oracle will scan the data in the buffer cache; otherwise direct path reads will be performed (a rough sketch follows the list):
  •  the number of blocks in the segment is less than or equal to _small_table_threshold
  •  at least 50% of the segment data blocks are in the buffer cache
  •  at least 25% of the data blocks are dirty
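
To make the rules concrete, here is a rough sketch of my own (not the actual kernel logic, and not from Alex’s post) that applies the first two rules to TEST_PART using the data dictionary and X$KCBOQH; run it as SYS. The value 117 is this environment's _small_table_threshold, and the dirty-blocks rule is omitted:

-- rough illustration only: the access method each partition would likely get
SELECT p.partition_name,
       p.blocks,
       NVL(c.num_buf, 0) cached_blocks,
       CASE
         WHEN p.blocks <= 117                     THEN 'BUFFER CACHE'  -- rule 1
         WHEN NVL(c.num_buf, 0) >= 0.5 * p.blocks THEN 'BUFFER CACHE'  -- rule 2
         ELSE 'DIRECT PATH READ'
       END expected_access
FROM dba_tab_partitions p
LEFT JOIN (SELECT o.subobject_name, SUM(b.num_buf) num_buf
           FROM x$kcboqh b, dba_objects o
           WHERE b.obj# = o.data_object_id
           AND o.object_name = 'TEST_PART'
           GROUP BY o.subobject_name) c
  ON c.subobject_name = p.partition_name
WHERE p.table_name = 'TEST_PART';
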
The conclusion so far is that in 11.2.0.3, you may observe different behavior for queries that access table partitions using FTS if you decide to change the _direct_read_decision_statistics_driven parameter.

I will stop here. I ran the same tests against 11.2.0.4 and 12.1.0.1 and noticed some differences in behavior compared to what I just described for 11.2.0.3. I will post those results in the next few days.

Stay tuned...



_small_table_threshold and direct path reads on partitioned tables in 11.2.0.3

Tue, 2014-04-08 00:33

I was troubleshooting a performance problem a few days ago. The database on which the problem was experienced had recently been migrated from Oracle 10.2.0.4 to Oracle 11.2.0.3.

Long story short, the problem was described as follows: the performance of a query that scans two or more partitions in a table is much worse than the combined performance of queries accessing each of the partitions separately.

After a short investigation I narrowed the problem down to “direct path reads” being the culprit.

As you know, due to the adaptive direct read feature introduced in 11g, full table scans may utilize the PGA instead of the buffer cache, which was always used in earlier versions.

There are a few good articles on this change in behavior, among which I personally favor Tanel’s blogpost and hacking session and the post by Alex Fatkulin. You could also check MOS Note 793845.1.

What I observed in 11.2.0.3.0 was quite surprising and a bit different from what I had read so far. I know that there are different parameters/variables that influence the decision whether or not direct path reads should be used. I tried to be careful and not fall into any of these traps.

Please note all the tests were done in a sandbox environment. I advise against trying these tests in any production environment.

The database version was 11.2.0.3.0.

_serial_direct_read = auto

_direct_read_decision_statistics_driven = TRUE

_small_table_threshold = 117


I used ASSM and a tablespace with a uniform extent size (64K).

As you may know, the _small_table_threshold parameter is set to about 2% of the size of the buffer cache. On my test machine I have a pretty small buffer cache, 5892 buffers (117 is 1.98% of 5892):


SQL> SELECT name,block_size,buffers FROM v$buffer_pool;

NAME                                               BLOCK_SIZE    BUFFERS
-------------------------------------------------- ---------- ----------
DEFAULT                                                  8192       5892
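
As a quick sanity check (my addition; the exact rounding Oracle applies is not documented, truncation just happens to match this environment):

SELECT buffers, trunc(buffers * 0.02) approx_small_table_threshold
FROM v$buffer_pool
WHERE name = 'DEFAULT';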


To simplify the problem, I will use a partitioned table, TEST_PART, containing 7 partitions.


CREATE TABLE test_part
(
col1 NUMBER NOT NULL
, col2 VARCHAR2(100)
)
PARTITION BY LIST (col1)
(
PARTITION PART_1 VALUES (1)
, PARTITION PART_2 VALUES (2)
, PARTITION PART_3 VALUES (3)
, PARTITION PART_4 VALUES (4)
, PARTITION PART_5 VALUES (5)
, PARTITION PART_6 VALUES (6)
, PARTITION PART_7 VALUES (7)
) ;


Each of the 7 partitions will be populated with 4000 rows (for a total of 65 blocks allocated per partition) using the following SQL statement:


INSERT INTO test_part (SELECT mod(rownum,7)+1, rpad('A',100,'A') FROM dual CONNECT BY rownum<=28000);


I will collect stats using the statement below:


exec dbms_stats.gather_table_stats(user,'TEST_PART');


As you can see from the output below, each of the partitions has 65 blocks below the HWM:


SQL> select table_name, partition_name, num_rows, blocks, avg_row_len
  2  from user_tab_partitions
  3  where table_name='TEST_PART';

TABLE_NAME                     PARTITION_NAME                   NUM_ROWS     BLOCKS AVG_ROW_LEN
------------------------------ ------------------------------ ---------- ---------- -----------
TEST_PART                      PART_1                               4000         65         104
TEST_PART                      PART_2                               4000         65         104
TEST_PART                      PART_3                               4000         65         104
TEST_PART                      PART_4                               4000         65         104
TEST_PART                      PART_5                               4000         65         104
TEST_PART                      PART_6                               4000         65         104
TEST_PART                      PART_7                               4000         65         104

7 rows selected.



Observation #1 -
_small_table_threshold is applied to the total number of blocks the query is expected to scan (considering all partition segments that will be accessed)

As you can see, the number of blocks in each of the partitions (65) is lower than the _small_table_threshold value (117). Therefore a query that accesses only one of the partitions uses the buffer cache to store the segment blocks.


SQL> select count(1) from test_part where col1 in (1);

COUNT(1)
----------
4000


I will use my sese.sql script to check the values of specific session statistics. It simply scans v$sesstat for the current session and a given keyword. If there are statistics that contain the specified keyword and their value is greater than 0, they will be reported. As you can see, no direct path reads were performed.


SQL> @sese direct

no rows selected


I expected the next query to utilize the buffer cache as well. It scans two partitions. As you know, each of the partitions has 65 blocks, which is less than the _small_table_threshold value (117), hence I thought I wouldn't see any direct path reads.


SQL> select count(1) from test_part where col1 in (1,2);

COUNT(1)
----------
8000

However, direct path reads were performed. Moreover, even though one of the partitions I previously scanned was already in the buffer cache, both partitions were scanned using direct path reads. As shown in the output below, two segments were fully scanned using direct reads, for a total of 130 direct reads (2x65).


SQL> @sese direct

       SID         ID NAME                                                    VALUE
---------- ---------- -------------------------------------------------- ----------
         7         76 STAT.consistent gets direct                               130
         7         81 STAT.physical reads direct                                130
         7        380 STAT.table scans (direct read)                              2

Let’s see what happens when I increase _small_table_threshold to 130 and re-run the last query.


SQL> alter session set "_small_table_threshold"=130;

Session altered.

SQL> select count(1) from test_part where col1 in (1,2);

COUNT(1)
----------
8000

SQL> @sese direct

       SID         ID NAME                                                    VALUE
---------- ---------- -------------------------------------------------- ----------
         7         76 STAT.consistent gets direct                               130
         7         81 STAT.physical reads direct                                130
         7        380 STAT.table scans (direct read)                              2


The number of direct path reads stayed the same, which means no new direct path reads were performed (the statistics are cumulative for the session).

How about if we add one more partition to the equation now (_small_table_threshold=130):


SQL> select count(1) from test_part where col1 in (1,2,3);

COUNT(1)
----------
12000

SQL> @sese direct

       SID         ID NAME                                                    VALUE
---------- ---------- -------------------------------------------------- ----------
         7         76 STAT.consistent gets direct                               325
         7         81 STAT.physical reads direct                                325
         7        380 STAT.table scans (direct read)                              5


Now, since we scan 3 partitions, that is 195 blocks (3x65), which again exceeds the threshold, Oracle went back to direct path reads: the statistic numbers went up by 195 (130+195=325), with three new table/segment scans.

Therefore, it seems the logic behind the decision whether or not to perform direct path reads is:

IF SUM(blocks of all partitions that are accessed) > _small_table_threshold THEN
     perform direct path reads for all partitions that are accessed
ELSE
     utilize buffer cache
END IF

Again, just to remind you this behavior is specific to 11.2.0.3.


Observation #2 -
The percentage of cached blocks per partition is not relevant


This brings me to the second observation. If you query X$KCBOQH.NUM_BUF for the partition segments (read Tanel’s blogpost or watch his hacking session), you can see that even though partitions PART_1 and PART_2 were in the cache, Oracle still performed direct path reads for all three partitions:


SQL> conn /as sysdba
Connected.

SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
2 from X$KCBOQH b, dba_objects o
3 where b.obj#=o.data_object_id
4 and o.object_name='TEST_PART'
5 group by o.subobject_name, b.obj#
6 order by 1;

SUBOBJECT_NAME                       OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1                             146024             66
PART_2                             146025             66
PART_3                             146026              1


I captured the output above right after the last test. As you can see, the PART_1 and PART_2 segments are completely in the buffer cache, 66 blocks each (65 data blocks plus 1 segment header block). PART_3, however, has only one block in the cache, and that is most likely the segment header block.

But, even when all 3 partitions were fully loaded in the buffer cache, Oracle still performed direct path reads:


SQL> conn *****/*****
Connected.
SQL> select count(1) from test_part where col1 in (3);

COUNT(1)
----------
4000

SQL> conn /as sysdba
Connected.

SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
2 from X$KCBOQH b, dba_objects o
3 where b.obj#=o.data_object_id
4 and o.object_name='TEST_PART'
5 group by o.subobject_name, b.obj#
6 order by 1;

SUBOBJECT_NAME                       OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1                             146024             66
PART_2                             146025             66
PART_3                             146026             66

SQL> conn *****/*****
Connected.
SQL> @sese direct

no rows selected

SQL> select count(1) from test_part where col1 in (1,2,3);

COUNT(1)
----------
12000

SQL> @sese direct

       SID         ID NAME                                                    VALUE
---------- ---------- -------------------------------------------------- ----------
         7         76 STAT.consistent gets direct                               195
         7         81 STAT.physical reads direct                                195
         7        380 STAT.table scans (direct read)                              3

SQL>

I will stop with this post here. Tomorrow I will publish another post showing what difference _direct_read_decision_statistics_driven can make for partitioned tables in 11.2.0.3.
(Update: Part 2 - what difference does _direct_read_decision_statistics_driven=FALSE make)

I will repeat the same tests in 11.2.0.4 and 12.1.0.1 and see if the behavior is any different.

Stay tuned.
 

Find the enabled events using ORADEBUG EVENTDUMP

Sat, 2014-03-15 09:46

Just a quick note on how to use ORADEBUG to find events that are enabled at the system or session level.

Starting from Oracle 10.2 you can use ORADEBUG EVENTDUMP to get all events enabled at either the session or system level (not sure if this is available in 10.1).


The synopsis is:


oradebug eventdump <level>


Where level is either system, process or session:
 


SQL> oradebug doc event action eventdump
eventdump
- list events that are set in the group
Usage
-------
eventdump( group < system | process | session >)


For demonstration purposes I will set three events, two on session and one on system level.
 


SQL> alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> alter session set events '942 trace name errorstack';

Session altered.

SQL> alter system set events 'trace[px_control][sql:f54y11t3njdpj]';

System altered.


Now, let's check which events are enabled at the SESSION or SYSTEM level using ORADEBUG EVENTDUMP:

SQL> oradebug setmypid

SQL> oradebug eventdump session;
trace [RDBMS.PX_CONTROL] [sql:f54y11t3njdpj]
sql_trace level=8
942 trace name errorstack

SQL> oradebug eventdump system
trace [RDBMS.PX_CONTROL] [sql:f54y11t3njdpj]


As you may already know, ORADEBUG requires the SYSDBA privilege. In order to check events set for another session, you can attach to that session's process using oradebug setospid or oradebug setorapid.
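
For example, a minimal sketch (the SID below is hypothetical): look up the target session's server process id, then attach to it before running eventdump.

-- find the OS pid of the server process serving SID 123
SELECT p.spid
FROM v$session s, v$process p
WHERE s.paddr = p.addr
AND s.sid = 123;

-- then, from the SYSDBA session:
-- oradebug setospid <spid returned above>
-- oradebug eventdump session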

This note was more for my own reference. I hope someone else finds it useful too.

Fun with global temporary tables in Oracle 12c

Sun, 2014-03-02 22:07

A few months ago I wrote a post about 12c session-specific statistics for global temporary tables (link). A long-awaited feature, no matter what.

Recently I had some discussions on the same subject with members of my team.

One interesting observation was the behavior of transaction-specific GTTs with session-specific statistics enabled. What attracted our interest was the fact that the data in the global temporary table is not deleted after the DBMS_STATS package is invoked.

Prior to 12c, a call to DBMS_STATS would result in an implicit commit. This would wipe out the content of a transaction-specific global temporary table.

I’ll digress here a bit. Yes, I know, who would call DBMS_STATS to collect statistics on a transaction-specific GTT knowing the data in the table will be lost? Well, things change a bit in 12c.

In Oracle 12c, no implicit commit is performed when DBMS_STATS.GATHER_TABLE_STATS is invoked on a transaction-specific GTT with session-specific statistics enabled, thus letting users take advantage of session-specific statistics for this type of GTT.

This behavior is documented in Oracle documentation.

I’ll try to shed some more light on this behavior through a couple of examples.

For this purpose I’ll start with three tables. T1 and T2 are transaction-specific temporary tables. T3 is a regular table. By default, in 12c, session-specific statistics are used.



CREATE GLOBAL TEMPORARY TABLE t1 (id NUMBER);

CREATE GLOBAL TEMPORARY TABLE t2 (id NUMBER);

CREATE TABLE t3 (id NUMBER);



Scenario #1 – Insert 5 rows into each of the three tables and observe the state of the data after DBMS_STATS is invoked on a transaction-specific GTT.



SQL> INSERT INTO t1 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.

SQL> INSERT INTO t2 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.

SQL> INSERT INTO t3 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.

SQL> exec DBMS_STATS.GATHER_TABLE_STATS(user,'T1');
PL/SQL procedure successfully completed.

SQL> SELECT count(1) FROM t1;
COUNT(1)
----------
5


As you can see, the data in T1 is still present. Furthermore, if you open another session you can also see that T3 has no rows. This means a commit was not performed when session-specific statistics were collected for T1.
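
As a side check (my addition, not part of the original test): in 12c the *_TAB_STATISTICS views expose a SCOPE column, so you can confirm that the statistics just gathered on T1 are session-specific rather than shared:

SELECT table_name, num_rows, scope
FROM user_tab_statistics
WHERE table_name = 'T1';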

Scenario #2 – Insert 5 rows into each of the three tables and collect statistics only on the regular table, T3.


SQL> INSERT INTO t1 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.

SQL> INSERT INTO t2 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.

SQL> INSERT INTO t3 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.

SQL> exec DBMS_STATS.GATHER_TABLE_STATS(user,'T3');
PL/SQL procedure successfully completed.

SQL> SELECT count(1) FROM t1;
COUNT(1)
----------
0


As you can see, in this scenario an implicit commit was performed, which resulted in the data in T1 being purged.

Hope this helps … :-)

Cheers!





Playing with Oracle Spatial - An important consideration when using SDO_NN operator

Fri, 2013-11-15 23:27

I've been playing with Oracle Spatial quite a bit lately and ran into some interesting things that I plan to write about in the next couple of posts.

This post covers what I think is an important consideration one should make when using the SDO_NN spatial operator with the sdo_num_res parameter.

But first, just briefly on the operator itself. The SDO_NN operator is used to identify the nearest neighbors of a particular geometry. The full operator synopsis can be found in the official documentation.

Please note, the database I use in the examples below is Oracle 11.2.0.3.0 Enterprise Edition with no additional patches applied and default Oracle Spatial settings.

Let's consider the following example: there is a table RESTAURANTS having only an id and a geometry column (point).

 CREATE TABLE restaurants
(
id NUMBER NOT NULL,
point MDSYS.SDO_GEOMETRY
);

ALTER TABLE restaurants ADD CONSTRAINT restaurants_pk PRIMARY KEY (id);


Let's create a spatial index on the restaurants.point column:

 INSERT INTO user_sdo_geom_metadata 
( TABLE_NAME
, COLUMN_NAME
, DIMINFO
, SRID)
VALUES
( 'restaurants'
, 'POINT'
, SDO_DIM_ARRAY(
SDO_DIM_ELEMENT('X', 0, 10000000, 0.005),
SDO_DIM_ELEMENT('Y', 0, 10, 0.005)
)
, null -- SRID
);

CREATE INDEX restaurants_sidx ON restaurants(point) INDEXTYPE IS MDSYS.SPATIAL_INDEX;


For simplicity let's say there are 500,000 restaurants distributed on a single line (x-axis of two dimensional coordinate system, y=0, x=1,2,3,...500,000).

INSERT INTO restaurants 
(SELECT rownum --id
, SDO_GEOMETRY(
2001 -- 2 dimensional point
, null -- SDO SRID
, SDO_POINT_TYPE(rownum,0,null) --x=rownum, y=0
, null
, null
)
FROM dual CONNECT BY rownum<=500000
);

In order to find the first 5 nearest neighbors to point (10,0) one could use the following spatial query:

SQL> SELECT p.id, sdo_nn_distance(1) distance
2 FROM restaurants p
3 WHERE SDO_NN(p.point,
4 SDO_GEOMETRY(
5 2001 -- 2 dimensional point
6 , null
7 , SDO_POINT_TYPE(10,0,null)
8 , null
9 , null
10 ),
11 'sdo_num_res=5',
12 1
13 )='TRUE'
14* ORDER BY 2;

        ID   DISTANCE
---------- ----------
        10          0
         9          1
        11          1
        12          2
         8          2

SQL>

Now suppose each restaurant has 5 menu items that are stored in the MENU_ITEMS table. The table has 4 columns: a unique identifier, id; a reference to the RESTAURANTS table, restaurants_id; an identifier that identifies an item within the restaurant menu, item_number; and lastly a description, a 100-character column that I use in this example to make the table a bit bigger.

CREATE TABLE menu_items
(id NUMBER NOT NULL,
restaurants_id NUMBER NOT NULL,
item_number NUMBER NOT NULL,
description varchar2(100)
);

ALTER TABLE menu_items ADD CONSTRAINT menu_items_pk PRIMARY KEY (id);

ALTER TABLE menu_items ADD CONSTRAINT menu_items_rest_fk FOREIGN KEY (restaurants_id) REFERENCES restaurants(id);

CREATE INDEX menu_items_restaurants_idx ON menu_items(restaurants_id);


INSERT INTO menu_items
(SELECT rownum
, b.rn
, a.rn
, rpad(rownum,100,'A')
FROM
(SELECT rownum rn FROM dual CONNECT BY rownum<=5) a, --5 menu items per rest.
(SELECT rownum rn FROM dual CONNECT BY rownum<=500000) b
);

commit;

Suppose you want to get the menu items of the 5 nearest restaurants to a particular location, (10,0) for example. One way would be with the following spatial query (but first, of course, collect the necessary statistics):

BEGIN
DBMS_STATS.GATHER_TABLE_STATS(ownname=>user,tabname=>'restaurants',cascade=>TRUE);
DBMS_STATS.GATHER_TABLE_STATS(ownname=>user,tabname=>'menu_items',cascade=>TRUE, method_opt=>'FOR ALL INDEXED COLUMNS SIZE AUTO');
END;
/

SELECT t1.id
, t2.id
, t2.restaurants_id
, t2.item_number
FROM restaurants t1
, menu_items t2
WHERE t1.id=t2.restaurants_id
AND SDO_NN(t1.point,
SDO_GEOMETRY(
2001 -- 2 dimensional point
, null
, SDO_POINT_TYPE(10,0,null)
, null
, null
),
'sdo_num_res=5',
1
)='TRUE'
ORDER BY t1.id, t2.item_number;


The query produces the desired result. However, let's look at how it performed. I re-ran the query, this time using the gather_plan_statistics hint to get the execution statistics for the query.

SQL>SELECT /*+ gather_plan_statistics */ t1.id, t2.id, t2.restaurants_id, t2.item_number
2 FROM restaurants t1
3 , menu_items t2
4 WHERE t1.id=t2.restaurants_id
5 AND SDO_NN(t1.point,
6 SDO_GEOMETRY(
7 2001 -- 2 dimensional point
8 , null
9 , SDO_POINT_TYPE(10,0,null)
10 , null
11 , null
12 ),
13 'sdo_num_res=5',
14 1
15 )='TRUE'
16 ORDER BY t1.id, t2.item_number;

        ID         ID RESTAURANTS_ID ITEM_NUMBER
---------- ---------- -------------- -----------
         8          8              8           1
         8     500008              8           2
         8    1000008              8           3
         8    1500008              8           4
         8    2000008              8           5
         9          9              9           1
         9     500009              9           2
         9    1000009              9           3
         9    1500009              9           4
         9    2000009              9           5
        10         10             10           1
        10     500010             10           2
        10    1000010             10           3
        10    1500010             10           4
        10    2000010             10           5
        11         11             11           1
        11     500011             11           2
        11    1000011             11           3
        11    1500011             11           4
        11    2000011             11           5
        12         12             12           1
        12     500012             12           2
        12    1000012             12           3
        12    1500012             12           4
        12    2000012             12           5

25 rows selected.


The query took 1.32 seconds to complete, as you can see from the execution statistics:

SQL> SELECT * FROM TABLE(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID gwpqub3k0awqm, child number 0
-------------------------------------
SELECT /*+ gather_plan_statistics */ t1.id, t2.id, t2.restaurants_id,
t2.item_number FROM restaurants t1 , menu_items t2 WHERE
t1.id=t2.restaurants_id AND SDO_NN(t1.point, SDO_GEOMETRY(
2001 -- 2 dimensional point , null ,
SDO_POINT_TYPE(10,0,null) , null , null ),
'sdo_num_res=5', 1 )='TRUE' ORDER BY t1.id, t2.item_number

Plan hash value: 2076547507


PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 25 |00:00:01.32 | 41540 | 41465 | | | |
| 1 | SORT ORDER BY | | 1 | 24678 | 25 |00:00:01.32 | 41540 | 41465 | 2048 | 2048 | 2048 (0)|
|* 2 | HASH JOIN | | 1 | 24678 | 25 |00:00:01.32 | 41540 | 41465 | 1517K| 1517K| 886K (0)|
| 3 | TABLE ACCESS BY INDEX ROWID| RESTAURANTS | 1 | 5000 | 5 |00:00:00.01 | 64 | 0 | | | |
|* 4 | DOMAIN INDEX | RESTAURANTS_SIDX | 1 | | 5 |00:00:00.01 | 63 | 0 | | | |
| 5 | TABLE ACCESS FULL | MENU_ITEMS | 1 | 2500K| 2500K|00:00:00.62 | 41476 | 41465 | | | |
------------------------------------------------------------------------------------------------------------------------------------------------


PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("T1"."ID"="T2"."RESTAURANTS_ID")
4 - access("MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='
TRUE')


29 rows selected.

SQL>


The problem

Well, one could say that 1.32 seconds is not that bad. However, if you look at the plan more carefully, you can notice that the CBO used a HASH JOIN to join the RESTAURANTS and MENU_ITEMS tables and as a result performed a full table scan on the MENU_ITEMS table. Now imagine if the MENU_ITEMS table were quite big (suppose you had stored the menu items of all restaurants across the US)?!

What made the CBO pick a HASH JOIN? Look at the estimated (5,000) vs. actual (5) rows selected from the RESTAURANTS table. We were only after the 5 nearest neighbors; we used sdo_num_res=5 in our SDO_NN call. One could expect the CBO to recognize that fact and estimate a cardinality of 5. But no, the CBO applied a selectivity of 1% (500,000 restaurants x 1% = 5,000).

Why selectivity of 1%?

SDO_NN is an operator that is bound to the implementation of the function NN, which is part of the PRVT_IDX package owned by MDSYS.

By default, statistics for the PRVT_IDX package are defined by the SDO_STATISTICS type. That is, the Extensible Optimizer feature is used to define how cardinality and cost are calculated for the functions/procedures defined in the package (if no statistics are associated, the CBO uses a default selectivity of 1%).
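
You can see this association in the DBA_ASSOCIATIONS dictionary view; a minimal sketch (my addition; run as a privileged user):

SELECT object_owner, object_name, object_type,
       statstype_schema, statstype_name
FROM dba_associations
WHERE object_owner = 'MDSYS'
AND object_name = 'PRVT_IDX';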

It seems the logic implemented in the ODCIStatsSelectivity() function is not good enough to detect that we are only after the first N rows, as defined by the SDO_NUM_RES parameter.

You can clearly see this in a 10053 trace file, in the section where the MDSYS.SDO_STATISTICS.ODCIStatsSelectivity procedure is used to calculate the selectivity.



MDSYS.SDO_STATISTICS.ODCIStatsFunctionCost returned:
CPUCost : 100000000000000
IOCost : 100000000000000
NetworkCost : -1
Calling user-defined selectivity function...
predicate: "MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='TRUE'
declare
sel number;
obj0 "MDSYS"."SDO_GEOMETRY" := "MDSYS"."SDO_GEOMETRY"(NULL, NULL, NULL, NULL, NULL);

begin
:1 := "MDSYS"."SDO_STATISTICS".ODCIStatsSelectivity(
sys.ODCIPREDINFO('MDSYS',
'PRVT_IDX',
'NN',
45),
sel,
sys.ODCIARGDESCLIST(sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(2, 'RESTAURANTS', 'TEST', '"POINT"', NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL)),
:3,
:4
, obj0, "MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL), :5,
sys.ODCIENV(:6,:7,:8,:9));
if sel IS NULL then
:2 := -1.0;
else
:2 := sel;
end if;
exception
when others then
raise;
end;
Bind :3 Value 'TRUE'
Bind :4 Value 'TRUE'
Bind :5 Value 'sdo_num_res=5'
ODCIEnv Bind :6 Value 0
ODCIEnv Bind :7 Value 0
ODCIEnv Bind :8 Value 0
ODCIEnv Bind :9 Value 3
MDSYS.SDO_STATISTICS.ODCIStatsSelectivity returned selectivity: 1.00000000%
Table: RESTAURANTS Alias: T1
Card: Original: 500000.000000 Rounded: 5000 Computed: 5000.00 Non Adjusted: 5000.00




If no statistics are associated, the CBO would also use a selectivity of 1%. So what is the difference? Under what circumstances could one take advantage of the Extensible Optimizer feature and generate better selectivity when the SDO_NN operator is used along with the sdo_num_res parameter? I couldn't find any article or documentation piece that would help me answer these questions.

What I do know, however, is that it shouldn't be that difficult to implement a piece of logic that would make the SDO_STATISTICS.ODCIStatsSelectivity procedure generate more accurate numbers.

As a matter of fact, I played a bit with this.

Disclaimer: Please be aware the following examples are for demonstration purposes only and shouldn't be used in an actual, supported, database environment. The logic I show below is quite simple and is for illustration only. Please contact Oracle Support if you want to implement something similar in your environment.

I created a simple type named SDO_STATISTICS_CUSTOM under MDSYS schema using the Extensible Optimizer interface (link)

I implemented ODCIStatsSelectivity procedure as shown below (one more time, this implementation is to demonstrate that with appropriate logic ODCIStatsSelectivity can return desired values):



STATIC FUNCTION ODCIStatsSelectivity (
  pred_info IN  SYS.ODCIPredInfo,
  sel       OUT NUMBER,
  args      IN  SYS.ODCIArgDescList,
  p_start   IN  VARCHAR2,
  p_stop    IN  VARCHAR2,
  geometry1 IN  MDSYS.SDO_GEOMETRY,
  geometry2 IN  MDSYS.SDO_GEOMETRY,
  param     IN  VARCHAR2,
  env       IN  SYS.ODCIEnv
) RETURN NUMBER IS
BEGIN

  -- custom logic starts here
  IF param IS NULL THEN
    sel := 1;
  ELSIF lower(substr(param, 1, instr(param, '=') - 1)) = 'sdo_num_res' THEN
    -- selectivity (%) = sdo_num_res / num_rows * 100
    SELECT decode(num_rows, NULL, 1,
             decode(to_number(trim(substr(param, instr(param, '=') + 1))), 0, 1,
                    to_number(trim(substr(param, instr(param, '=') + 1))) / num_rows) * 100)
      INTO sel
      FROM dba_tables
     WHERE owner = args(3).TABLESCHEMA
       AND table_name = args(3).TABLENAME;

    IF sel > 100 THEN
      sel := 100;
    END IF;
  END IF;
  -- custom logic ends here

  RETURN ODCIConst.success;
END ODCIStatsSelectivity;



The logic I chose is simple: if the sdo_num_res parameter is defined, get the number of rows from the statistics available for the table and set the selectivity variable (sel) to (sdo_num_res/num_rows)*100.

Let's see how it works:


SQL> EXPLAIN PLAN FOR
2 SELECT t1.id, t2.id, t2.restaurants_id, t2.item_number
3 FROM restaurants t1
4 , menu_items t2
5 WHERE t1.id=t2.restaurants_id
6 AND SDO_NN(t1.point,
7 SDO_GEOMETRY(
8 2001 -- 2 dimensional point
9 , null
10 , SDO_POINT_TYPE(10,0,null)
11 , null
12 , null
13 ),
14 'sdo_num_res=5',
15 1
16 )='TRUE'
17* ORDER BY t1.id, t2.item_number
SQL> /

Explained.

SQL> set line 200 pagesize 9999
SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2341889131

------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 25 | 1000 | 36 (3)| 00:00:01 |
| 1 | SORT ORDER BY | | 25 | 1000 | 36 (3)| 00:00:01 |
| 2 | NESTED LOOPS | | 25 | 1000 | 35 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| RESTAURANTS | 5 | 80 | 0 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | RESTAURANTS_SIDX | | | 0 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| MENU_ITEMS | 5 | 120 | 7 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | MENU_ITEMS_RESTAURANTS_IDX | 5 | | 2 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - access("MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,N
ULL),NULL,NULL),'sdo_num_res=5',1)='TRUE')
6 - access("T1"."ID"="T2"."RESTAURANTS_ID")

20 rows selected.


As you can see, the cardinality is correctly calculated, which led the CBO to produce a better execution plan. With this plan in place, the query completes in less than 1 cs.

If you look into the 10053 trace file you can also see that the calculated selectivity is 0.001%:




MDSYS.SDO_STATISTICS_CUSTOM.ODCIStatsFunctionCost returned:
CPUCost : 10000
IOCost : 10000
NetworkCost : 10000
Calling user-defined selectivity function...
predicate: "MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='TRUE'
declare
sel number;
obj0 "MDSYS"."SDO_GEOMETRY" := "MDSYS"."SDO_GEOMETRY"(NULL, NULL, NULL, NULL, NULL);

begin
:1 := "MDSYS"."SDO_STATISTICS_CUSTOM".ODCIStatsSelectivity(
sys.ODCIPREDINFO('MDSYS',
'PRVT_IDX',
'NN',
45),
sel,
sys.ODCIARGDESCLIST(sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(2, 'RESTAURANTS', 'TEST', '"POINT"', NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL)),
:3,
:4
, obj0, "MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL), :5,
sys.ODCIENV(:6,:7,:8,:9));
if sel IS NULL then
:2 := -1.0;
else
:2 := sel;
end if;
exception
when others then
raise;
end;
Bind :3 Value 'TRUE'
Bind :4 Value 'TRUE'
Bind :5 Value 'sdo_num_res=5'
ODCIEnv Bind :6 Value 0
ODCIEnv Bind :7 Value 0
ODCIEnv Bind :8 Value 0
ODCIEnv Bind :9 Value 3
MDSYS.SDO_STATISTICS_CUSTOM.ODCIStatsSelectivity returned selectivity: 0.00100000%


Anyway, since the current implementation of the SDO_STATISTICS type returns a selectivity of 1%, the next logical question is what one can do to work around the problem.

Workaround

The problem can be addressed by giving the CBO that extra bit of information, that is, how many rows we expect to get back after applying the SDO_NN operator. To do this, one could consider using the cardinality hint, or rewriting the query to add one more predicate, ROWNUM<=5, that tells the CBO how many rows we expect to select from the RESTAURANTS table (there may be other strategies available too).

Use cardinality hint:

SELECT /*+ cardinality(t1 5) gather_plan_statistics */ t1.id, t2.id, t2.restaurants_id, t2.item_number
FROM restaurants t1
, menu_items t2
WHERE t1.id=t2.restaurants_id
AND SDO_NN(t1.point,
SDO_GEOMETRY(
2001 -- 2 dimensional point
, null
, SDO_POINT_TYPE(10,0,null)
, null
, null
),
'sdo_num_res=5',
1
)='TRUE'
ORDER BY t1.id, t2.item_number;


The execution statistics in this case look like:

Plan hash value: 2341889131


PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 25 |00:00:00.01 | 92 | 5 | | | |
| 1 | SORT ORDER BY | | 1 | 25 | 25 |00:00:00.01 | 92 | 5 | 2048 | 2048 | 2048 (0)|
| 2 | NESTED LOOPS | | 1 | 25 | 25 |00:00:00.01 | 92 | 5 | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| RESTAURANTS | 1 | 5 | 5 |00:00:00.01 | 55 | 0 | | | |
|* 4 | DOMAIN INDEX | RESTAURANTS_SIDX | 1 | | 5 |00:00:00.01 | 54 | 0 | | | |
| 5 | TABLE ACCESS BY INDEX ROWID| MENU_ITEMS | 5 | 5 | 25 |00:00:00.01 | 37 | 5 | | | |
|* 6 | INDEX RANGE SCAN | MENU_ITEMS_RESTAURANTS_IDX | 5 | 5 | 25 |00:00:00.01 | 12 | 0 | | | |
----------------------------------------------------------------------------------------------------------------------------------------------------------

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - access("MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='TRUE')
6 - access("T1"."ID"="T2"."RESTAURANTS_ID")


29 rows selected.


Rewrite the query and use a ROWNUM<=n predicate, where n is the number associated with the sdo_num_res parameter:

SELECT /*+ gather_plan_statistics */ t1.id, t2.id, t2.restaurants_id, t2.item_number
FROM (SELECT *
FROM restaurants
WHERE
SDO_NN(point,
SDO_GEOMETRY(
2001 -- 2 dimensional point
, null
, SDO_POINT_TYPE(10,0,null)
, null
, null
),
'sdo_num_res=5',
1
)='TRUE'
AND rownum<=5
) t1
, menu_items t2
WHERE t1.id=t2.restaurants_id
ORDER BY t1.id, t2.item_number;


… and the execution statistics :

Plan hash value: 3570932640

----------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 25 |00:00:00.01 | 101 | | | |
| 1 | SORT ORDER BY | | 1 | 25 | 25 |00:00:00.01 | 101 | 2048 | 2048 | 2048 (0)|
| 2 | NESTED LOOPS | | 1 | | 25 |00:00:00.01 | 101 | | | |
| 3 | NESTED LOOPS | | 1 | 25 | 25 |00:00:00.01 | 76 | | | |
| 4 | VIEW | | 1 | 5 | 5 |00:00:00.01 | 64 | | | |
|* 5 | COUNT STOPKEY | | 1 | | 5 |00:00:00.01 | 64 | | | |
| 6 | TABLE ACCESS BY INDEX ROWID| RESTAURANTS | 1 | 5 | 5 |00:00:00.01 | 64 | | | |
|* 7 | DOMAIN INDEX | RESTAURANTS_SIDX | 1 | | 5 |00:00:00.01 | 63 | | | |
|* 8 | INDEX RANGE SCAN | MENU_ITEMS_RESTAURANTS_IDX | 5 | 5 | 25 |00:00:00.01 | 12 | | | |
| 9 | TABLE ACCESS BY INDEX ROWID | MENU_ITEMS | 25 | 5 | 25 |00:00:00.01 | 25 | | | |
----------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - filter(ROWNUM<=5)
7 - access("MDSYS"."SDO_NN"("POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='TRUE')
8 - access("T1"."ID"="T2"."RESTAURANTS_ID")


34 rows selected.


As you can see, the Oracle CBO made correct cardinality estimates in the last two examples and used the NESTED LOOPS method to join the two tables, which resulted in a performance improvement.

I hope you got some valuable information out of this post.

CURSOR_SHARING=SIMILAR available in Oracle 12c, or not?

Sun, 2013-07-28 00:05
It was announced before Oracle Database 12c was released that CURSOR_SHARING=SIMILAR would be deprecated (MOS Note 1169017.1). Moreover, according to the same note, the ability to set this value would be removed.

And indeed, when I looked into the 12c documentation, I found EXACT and FORCE being the only available values, the former being the default (link).

I decided to play around a bit and immediately figured out that SIMILAR could still be assigned to CURSOR_SHARING.

test[12,133]> ALTER SESSION SET CURSOR_SHARING=SIMILAR;

Session altered. 

When I tried to assign an invalid value to CURSOR_SHARING using SQL*Plus, I got an error message saying that EXACT, SIMILAR and FORCE are the acceptable values for the CURSOR_SHARING parameter.

test[12,133]> ALTER SESSION SET CURSOR_SHARING=foo;
ERROR:
ORA-00096: invalid value FOO for parameter cursor_sharing, must be 
from among SIMILAR, EXACT, FORCE 

There are a couple of reasons I can think of why SIMILAR is still allowed. The first is some sort of backward compatibility (so that applications that explicitly set this value don't break when migrated to 12c); another is that CURSOR_SHARING=SIMILAR may still be a valid option if the COMPATIBLE parameter is set to an earlier release where the value was supported (I didn't have time to play with the COMPATIBLE parameter).

Anyway, my main question was how Oracle will behave if CURSOR_SHARING is set to SIMILAR in 12c.

I ran a quick test. I created a table named TAB1 and then executed one sub-optimally shareable statement 5 times, passing a different literal value for each execution.

When CURSOR_SHARING is set to SIMILAR and a suboptimal statement using literals is executed, Oracle would not share any of the existing child cursors and would perform a hard parse.

CREATE TABLE tab1 AS (SELECT * FROM user_objects);

ALTER SESSION SET CURSOR_SHARING=SIMILAR;

SELECT COUNT(1) FROM tab1 WHERE object_id>1;

SELECT COUNT(1) FROM tab1 WHERE object_id>2;

SELECT COUNT(1) FROM tab1 WHERE object_id>3;

SELECT COUNT(1) FROM tab1 WHERE object_id>4;

SELECT COUNT(1) FROM tab1 WHERE object_id>5; 

For each of these statements the literals were replaced with bind variables, hence the SQL that was parsed has sql_id c73v21bgp4956 and text like the one below:

SELECT COUNT(:"SYS_B_0") FROM tab1 WHERE object_id>:"SYS_B_1"

After running the statements above, I inspected the content of the v$sqlarea, v$sql and v$sql_shared_cursor views to see if Oracle did any hard parsing.
 
set line 200
col sql_text format a80
SELECT sql_id, sql_text, executions, child_number FROM v$sql WHERE sql_id='c73v21bgp4956';

SQL_ID        SQL_TEXT                                                                         EXECUTIONS CHILD_NUMBER
------------- -------------------------------------------------------------------------------- ---------- ------------
c73v21bgp4956 SELECT COUNT(:"SYS_B_0") FROM tab1 WHERE object_id>:"SYS_B_1"                              5            0


As you can see, Oracle reused the same child cursor 5 times, which means that even though CURSOR_SHARING was set to SIMILAR, it behaved as if it were set to FORCE.

I ran the same example in an 11.2.0.2 database. This time Oracle behaved as expected for CURSOR_SHARING set to SIMILAR and used 5 different child cursors.
 
set line 200
col sql_text format a80
SELECT sql_id, sql_text, executions, child_number FROM v$sql WHERE sql_id='c73v21bgp4956';

SQL_ID        SQL_TEXT                                                                         EXECUTIONS CHILD_NUMBER
------------- -------------------------------------------------------------------------------- ---------- ------------
c73v21bgp4956 SELECT COUNT(:"SYS_B_0") FROM tab1 WHERE object_id>:"SYS_B_1"                              1            0
c73v21bgp4956 SELECT COUNT(:"SYS_B_0") FROM tab1 WHERE object_id>:"SYS_B_1"                              1            1
c73v21bgp4956 SELECT COUNT(:"SYS_B_0") FROM tab1 WHERE object_id>:"SYS_B_1"                              1            2
c73v21bgp4956 SELECT COUNT(:"SYS_B_0") FROM tab1 WHERE object_id>:"SYS_B_1"                              1            3
c73v21bgp4956 SELECT COUNT(:"SYS_B_0") FROM tab1 WHERE object_id>:"SYS_B_1"                              1            4


When I checked v$sql_shared_cursor I found that all of the child cursors had Y in the HASH_MATCH_FAILED column, which is expected when a suboptimal statement is executed with CURSOR_SHARING=SIMILAR.
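
The check itself looked roughly like this (v$sql_shared_cursor has dozens of reason columns; only the relevant one is selected here):

SELECT child_number, hash_match_failed
FROM v$sql_shared_cursor
WHERE sql_id = 'c73v21bgp4956'
ORDER BY child_number;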


Conclusion


Even though SIMILAR is still an acceptable value for the CURSOR_SHARING initialization parameter, it looks like Oracle ignores it and behaves as if FORCE were used. (This may not be true if the COMPATIBLE parameter is set to an earlier release, like 11.2.0.2 for example; I didn't have time to test this scenario.)


Hope this helps.

ORA-12514 during switchover using Data Guard Broker (Update)

Fri, 2013-07-26 21:34
This is just a short update to an earlier post about getting ORA-12514 while performing a switchover using the Data Guard broker.

There was a comment asking whether or not the _DGMGRL static service is still required when performing a switchover in 11.2 and onwards.

In order for the broker to be able to successfully start an instance during a switchover operation, a static service needs to be registered with the listener. Starting from 11.2.0.1, however, this service doesn't have to be "<db_unique_name>_DGMGRL.<db_domain>". Oracle introduced a new instance-level property, StaticConnectIdentifier. As a value, this property accepts a valid net service name (defined in tnsnames.ora) or a full connect identifier. Therefore, starting from 11.2.0.1 you have the flexibility to use any service, which still needs to be statically registered with the listener.
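
If you want to see what the property is currently set to before changing it, DGMGRL can display it directly; a small sketch (my addition, using one of the database names from the example below):

DGMGRL> show database 'dg112a' 'StaticConnectIdentifier';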

Here is an example:

There are two databases, db112a (primary) and db112b (standby), hosted on hosta and hostb respectively.

Below is the content of the tnsnames.ora. It is identical for both hosts:



DB112A=
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = hosta)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = DB112A)
)
)

DB112B=
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = hostb)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = DB112B)
)
)


listener.ora on HOSTA has the following content:


SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = db112a)
(ORACLE_HOME = /oracle/product/11.2/dbms)
(SID_NAME = db112a)
)
)

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = hosta)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
)


listener.ora for HOSTB has db112b service statically registered:


SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = db112b)
(ORACLE_HOME = /oracle/product/11.2/dbms)
(SID_NAME = db112b)
)
)

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = hostb)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
)


Now the only thing left is to set the StaticConnectIdentifier property for each of the databases.


DGMGRL> connect sys
Password:
Connected.
DGMGRL> show configuration

Configuration - db112

Protection Mode: MaxPerformance
Databases:
dg112a - Primary database
dg112b - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL>

DGMGRL> edit database dg112a set property StaticConnectIdentifier='db112a';
DGMGRL> edit database dg112b set property StaticConnectIdentifier='db112b';


And you should be all set to perform a switchover:


DGMGRL> switchover to dg112b
Performing switchover NOW, please wait...
New primary database "dg112b" is opening...
Operation requires shutdown of instance "dg112a" on database "dg112a"
Shutting down instance "dg112a"...
ORACLE instance shut down.
Operation requires startup of instance "dg112a" on database "dg112a"
Starting instance "dg112a"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "dg112b"

DGMGRL> show configuration

Configuration - db112

Protection Mode: MaxPerformance
Databases:
dg112b - Primary database
dg112a - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS


Should you not want to deal with StaticConnectIdentifier, you still need to have "<db_unique_name>_DGMGRL.<db_domain>" statically registered in order for the broker to be able to start the instance.

Useful resources:

MOS Note # 1305019.1 - 11.2 Data Guard Physical Standby Switchover Best Practices using the Broker


Hope this helps.

Oracle 12c - Session-specific statistics for global temporary tables

Wed, 2013-07-03 23:11
It’s been more than a week since Oracle Database 12c officially became available. I spent the very first night reading the documentation, the New Features chapter in particular.

New version, new features, improvements, enhancements, some more exciting than others. One, however, intrigued me a bit: session-specific statistics for global temporary tables. From time to time I am involved in optimizing queries that use global temporary tables, so I am well aware of the challenges that come with them.

As the name suggests, temporary tables contain temporary data. The data is visible only to the current session and persists either until the next commit or until the session is terminated. Either way, two sessions may populate the same temporary table with different data and concurrently run identical queries against it.

As we all know, queries are executed using an execution plan generated by the optimizer. The efficiency of the execution plan depends upon the information the optimizer has about the data at the time the query is parsed. In my opinion, in order to get good execution plans (I am deliberately avoiding saying optimal plans), accurate statistics are the most important piece of information you want to feed the optimizer with.

However, when temporary tables are involved, it is often a challenge for the optimizer to generate efficient execution plan(s) that will be appropriate for all use cases.

In some cases, when temporary tables are populated with data having similar characteristics, it is a common practice to have appropriate statistics collected and fixed (dynamic sampling may also fit in this case) so the optimizer produces efficient and stable execution plan(s), one size fits all; a sketch of this practice follows below.
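
Here is a sketch of that fix-the-statistics practice (my illustration; TEMP_OBJECTS is the GTT created later in this post, and the numbers are made-up, representative values):

BEGIN
  -- set representative statistics on the GTT ...
  dbms_stats.set_table_stats(ownname => user,
                             tabname => 'TEMP_OBJECTS',
                             numrows => 1000,
                             numblks => 10);
  -- ... and lock them so they are not overwritten
  dbms_stats.lock_table_stats(user, 'TEMP_OBJECTS');
END;
/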

Often, however, temporary tables do not have statistics at all, simply because it is difficult to find ones that will cover all use cases.

Consider the following example:

There is table MY_OBJECTS created using the following CTAS statement:

 CREATE TABLE my_objects AS  
SELECT * FROM all_objects
, (SELECT * FROM dual CONNECT BY rownum<=10);

A table MY_TYPES that contains all distinct data types derived from ALL_OBJECTS

 CREATE TABLE my_types AS  
SELECT DISTINCT object_type FROM all_objects;

A table T that has the same structure as MY_OBJECTS

 CREATE TABLE t AS   
SELECT * FROM my_objects WHERE 1=2;

And a global temporary table TEMP_OBJECTS that has only one column which will accept OBJECT_IDs

 CREATE GLOBAL TEMPORARY TABLE temp_objects   
(object_id NUMBER NOT NULL)
ON COMMIT PRESERVE ROWS;

Collect statistics for MY_OBJECTS and MY_TYPES tables

 EXEC dbms_stats.gather_table_stats(ownname=>'TEST',tabname=>'MY_OBJECTS',cascade=>true);  
EXEC dbms_stats.gather_table_stats(ownname=>'TEST',tabname=>'MY_TYPES',cascade=>true);

Let's suppose there are two sessions that load some data in TEMP_OBJECTS and then execute the following insert statement:

 INSERT INTO t  
SELECT /*+ gather_plan_statistics */ o.*
FROM my_objects o
, my_types t
, temp_objects tt
WHERE o.object_type=t.object_type
AND o.object_id=tt.object_id;

The first session (SID 22) inserts one record in TEMP_OBJECTS and then executes the insert statement from above by invoking insert_1.sql script.


test[22,39]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1);

1 row created.

test[22,39]> commit;

Commit complete.

test[22,39]> @insert_1

10 rows created.



Let's check the execution plan being used. It is based on a MERGE JOIN CARTESIAN, which is somewhat acceptable in this case.

 test[22,39]> @explain_last  

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 0  
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id

Plan hash value: 3579371359

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:02.56 | 14067 | 14028 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:02.56 | 14067 | 14028 | | | |
|* 2 | HASH JOIN | | 1 | 10 | 10 |00:00:00.04 | 14036 | 14028 | 1519K| 1519K| 1521K (0)|
| 3 | MERGE JOIN CARTESIAN | | 1 | 38 | 38 |00:00:00.02 | 6 | 2 | | | |
| 4 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1 | 1 |00:00:00.01 | 3 | 0 | | | |
| 5 | BUFFER SORT | | 1 | 38 | 38 |00:00:00.02 | 3 | 2 | 73728 | 73728 | |
| 6 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.02 | 3 | 2 | | | |
| 7 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:02.09 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE" AND "O"."OBJECT_ID"="TT"."OBJECT_ID")

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


30 rows selected.



The session with SID 251 kicks in a moment later, loading 1000 rows into TEMP_OBJECTS and running the same insert statement. Since the same SQL statement has already been parsed, this session reuses the existing cursor. However, in this case the plan being used is one that should be avoided, since the Cartesian join operation may severely impact performance:

 test[251,65]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1000);

1000 rows created.

test[251,65]> @insert_1

10000 rows created.

test[251,65]> @explain_last

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 0
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id

Plan hash value: 3579371359

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:03.87 | 15495 | 14026 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:03.87 | 15495 | 14026 | | | |
|* 2 | HASH JOIN | | 1 | 10 | 10000 |00:00:00.22 | 14037 | 14026 | 3162K| 2024K| 3101K (0)|
| 3 | MERGE JOIN CARTESIAN | | 1 | 38 | 38000 |00:00:00.73 | 7 | 0 | | | |
| 4 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1 | 1000 |00:00:00.02 | 4 | 0 | | | |
| 5 | BUFFER SORT | | 1000 | 38 | 38000 |00:00:00.21 | 3 | 0 | 73728 | 73728 | |
| 6 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | | | |
| 7 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:01.97 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE" AND "O"."OBJECT_ID"="TT"."OBJECT_ID")

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


30 rows selected.


As expected, the same cursor was reused by both sessions.

 sys[263,185]> select plan_hash_value, child_number, CHILD_ADDRESS, executions, invalidations, is_shareable, is_obsolete from v$sql where sql_id='8a1c7phuna9vn';  

PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS EXECUTIONS INVALIDATIONS I I
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 2 0 Y N

sys[263,185]>


As you can see, plan stability is one of the biggest challenges when dealing with queries that depend on temporary tables.

There are several strategies that I am aware of for attacking this problem. They include: use of stored outlines; fixed statistics; baselines; hints (the CARDINALITY hint, for example); periodically invalidating plans; using separate workspaces (schemas) for different types of workloads; and a couple of others that are more or less effective. Each of these comes with its own limitations.
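For illustration, a hint-based version of the insert statement used in this post could look like the sketch below. It relies on the undocumented CARDINALITY hint, and the row count of 1000 is just an assumption for this example, so treat it as a sketch rather than a recommendation:

 INSERT INTO t
 SELECT /*+ cardinality(tt 1000) */ o.*
 FROM my_objects o
 , my_types t
 , temp_objects tt
 WHERE o.object_type=t.object_type
 AND o.object_id=tt.object_id;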

With Oracle 12c, users have the ability to gather session-specific statistics for global temporary tables. This feature seems to address the challenges above nicely.

When session-specific statistics are used, each session collects “private” statistics and has the CBO generate execution plans/cursors based on them. These cursors are not shared with other sessions. Also, a cursor that has been generated using session-specific statistics is invalidated when statistics are re-collected within the same session.

Session-specific statistics are enabled when the GLOBAL_TEMP_TABLE_STATS preference is set to SESSION.

 exec dbms_stats.set_table_prefs(ownname=>'TEST', tabname=>'TEMP_OBJECTS',pname=>'GLOBAL_TEMP_TABLE_STATS',pvalue=>'SESSION');  

Once the preference is set, all one needs to do is gather stats, and session-specific statistics will be collected.

And that is all. Now let's see which plans are generated for each of the sessions if we repeat the example from above with session statistics in use.

The session with SID 251 loads 1000 rows into the TEMP_OBJECTS table, collects stats and executes the same insert statement.

 test[251,69]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1000);

1000 rows created.

test[251,69]> exec dbms_stats.gather_table_stats(ownname=>'TEST', tabname=>'TEMP_OBJECTS');

PL/SQL procedure successfully completed.

test[251,69]> @insert_1

10000 rows created.

Now the execution plan used is based on a HASH JOIN rather than a MERGE JOIN CARTESIAN operation:

 test[251,69]> @explain_last  

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 1
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id

Plan hash value: 4256520316

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:02.71 | 15514 | 14026 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:02.71 | 15514 | 14026 | | | |
|* 2 | HASH JOIN | | 1 | 9858 | 10000 |00:00:00.07 | 14037 | 14026 | 1696K| 1696K| 1583K (0)|
| 3 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | | | |
|* 4 | HASH JOIN | | 1 | 9858 | 10000 |00:00:00.04 | 14034 | 14026 | 2293K| 2293K| 1607K (0)|
| 5 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1000 | 1000 |00:00:00.01 | 4 | 0 | | | |
| 6 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:01.64 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE")
4 - access("O"."OBJECT_ID"="TT"."OBJECT_ID")

Note
-----
- Global temporary table session private statistics used


30 rows selected.

Note the “Note” - Global temporary table session private statistics used.

If you check v$sqlarea or v$sql you will see there are two child cursors for the same SQL ID.


 sys[263,185]> select plan_hash_value, child_number, CHILD_ADDRESS, executions, invalidations, is_shareable, is_obsolete from v$sql where sql_id='8a1c7phuna9vn';  

PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS EXECUTIONS INVALIDATIONS I I
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 2 0 Y N
4256520316 1 00000000625A3928 1 0 Y N

sys[263,185]>


It is worth mentioning that sessions that use global statistics will keep using them, without interfering with the session-specific stats and cursors.

For example, a session with SID 26 loads 1000 rows but doesn't collect stats on the TEMP_OBJECTS table. The insert statement in this case will use the globally available cursor (child cursor 0):

   
test[26,145]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1000);

1000 rows created.

test[26,145]> commit;

Commit complete.

test[26,145]> @insert_1

10000 rows created.

test[26,145]> @explain_last

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 0
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id

Plan hash value: 3579371359

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:03.71 | 15513 | 14026 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:03.71 | 15513 | 14026 | | | |
|* 2 | HASH JOIN | | 1 | 10 | 10000 |00:00:00.13 | 14039 | 14026 | 3162K| 2024K| 3114K (0)|
| 3 | MERGE JOIN CARTESIAN | | 1 | 38 | 38000 |00:00:00.64 | 9 | 0 | | | |
| 4 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1 | 1000 |00:00:00.01 | 6 | 0 | | | |
| 5 | BUFFER SORT | | 1000 | 38 | 38000 |00:00:00.19 | 3 | 0 | 73728 | 73728 | |
| 6 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | | | |
| 7 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:01.88 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE" AND "O"."OBJECT_ID"="TT"."OBJECT_ID")

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


30 rows selected.

sys[263,185]> select plan_hash_value, child_number, CHILD_ADDRESS, executions, invalidations, is_shareable, is_obsolete from v$sql where sql_id='8a1c7phuna9vn';

PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS EXECUTIONS INVALIDATIONS I I
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 3 0 Y N
4256520316 1 00000000625A3928 1 0 Y N


Let's suppose the session with SID 251 modifies the TEMP_OBJECTS table: it deletes its content and loads only 1 row, followed by refreshing the session-specific statistics. In this case it is expected that the plan with hash value 3579371359 will be used.

 test[251,69]> DELETE FROM temp_objects;  

1000 rows deleted.

test[251,69]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1);

1 row created.

test[251,69]> commit;

Commit complete.

test[251,69]> exec dbms_stats.gather_table_stats(ownname=>'TEST', tabname=>'TEMP_OBJECTS');

PL/SQL procedure successfully completed.

test[251,69]>



If you check the cursor state (valid/invalid) you can see the session private cursor (child cursor 1) was just invalidated.

 PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS  EXECUTIONS INVALIDATIONS I I  
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 3 0 Y N
4256520316 1 00000000625A3928 1 1 Y N

test[251,69]> @insert_1

10 rows created.

test[251,69]> @explain_last

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 1
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id

Plan hash value: 3579371359

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:02.43 | 14051 | 14026 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:02.43 | 14051 | 14026 | | | |
|* 2 | HASH JOIN | | 1 | 10 | 10 |00:00:00.01 | 14037 | 14026 | 1519K| 1519K| 1426K (0)|
| 3 | MERGE JOIN CARTESIAN | | 1 | 38 | 38 |00:00:00.01 | 7 | 0 | | | |
| 4 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1 | 1 |00:00:00.01 | 4 | 0 | | | |
| 5 | BUFFER SORT | | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | 73728 | 73728 | |
| 6 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | | | |
| 7 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:02.05 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE" AND "O"."OBJECT_ID"="TT"."OBJECT_ID")

Note
-----
- Global temporary table session private statistics used


30 rows selected.


As expected, the plan with hash value 3579371359 was used.

 PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS  EXECUTIONS INVALIDATIONS I I  
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 3 0 Y N
3579371359 1 00000000625A3928 1 1 Y N


You could also use the v$sql_shared_cursor view to observe how the cursors are created and shared.
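For example, a minimal sketch (assuming 11.2.0.2 or later, where the REASON column summarizes why a child cursor was not shared):

 SELECT sql_id, child_number, reason
 FROM v$sql_shared_cursor
 WHERE sql_id='8a1c7phuna9vn';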
To track the statistics themselves, one can use the [DBA|USER|ALL]_TAB_STATISTICS views. These views have a SCOPE column that indicates whether the statistics are shared or session-specific:

 test[251,69]> SELECT owner, table_name, num_rows, last_analyzed, scope FROM dba_tab_statistics WHERE table_name='TEMP_OBJECTS';  

OWNER TABLE_NAME NUM_ROWS LAST_ANAL SCOPE
--------------- --------------- ---------- --------- -------
TEST TEMP_OBJECTS SHARED
TEST TEMP_OBJECTS 1 30-JUN-13 SESSION


Finally, something to think about is the possibility of increased hard parsing. As you may have observed above, each parse made after the stats are gathered is a hard parse. Therefore, depending on your workload, you may see more hard parsing.
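A simple way to keep an eye on this is to watch the parse statistics; a minimal sketch:

 SELECT name, value
 FROM v$sysstat
 WHERE name IN ('parse count (total)', 'parse count (hard)');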


Review: uCertify PrepKit 1Z0-047

Sun, 2009-01-25 23:09

I was asked by uCertify to review their preparation kit product.

I am currently preparing for the Oracle 1Z0-047 Oracle Database: SQL Certified Expert exam, and I thought this could be a good chance for me to test my current knowledge, so I chose to review their preparation kit for exactly this exam. (Link)

To be honest, this is the first GUI-based prep kit I've ever tried. So far I have been using the official Oracle documentation as my primary source, along with some of the books (exam study guides) that were available.

My study strategy is to read the official documentation and practice, practice, practice.

From my experience, the toughest part of preparing for an exam is answering the question: “Am I ready enough?”

Usually I go through the questions available in some of the preparation books, plus I make up even more questions based on different scenarios. I also use the OTN forums to find discussions on challenging topics where I can test my knowledge.

With that in mind, the uCertify prep kit is just another handy resource that one can use to test his/her knowledge before actually taking the exam.

It is a solid product that provides more than three hundred questions combined into one diagnostic test, one final test, four practice tests and a quiz. The user also has the ability to create custom tests using the questions he/she finds interesting.



There are also a number of questions that are pretty challenging. For all questions, users have the option to read explanations of the answers, including references to the Oracle documentation or other resources.

For each question, there is an option to start a discussion with other users, to send feedback to uCertify and to write personal notes and tags.

The feedback and discussion features are really nice. I found some answers that I did not agree with and used this feature to provide feedback to uCertify. I tested the discussion feature as well and it works nicely.

The exam objectives are well covered. There are also questions specific to 11g version of the database.

The software has an update option so the user can download the latest updates and fixes.

The graphical user interface is very nice and the navigation is great. Each of the test results can be saved and used for later reference. There are also features like flash cards, a quiz, study notes and articles, with a good number of notes and articles available.



Finally, there is a readiness report that can help the user answer the question: “Am I ready for the real thing?”

In general, this is a nice product that can be very helpful when preparing for an exam (they have various prep kits available (Link)). However, I don't want anybody to get the impression that I think this is the only resource one could use to prepare for and pass an exam. Whenever I was asked for advice on what resources one should (must) use to prepare for an exam, my answer was always straightforward: the official documentation, period. (The official trainings are also very valuable, but sometimes they are a must-do regardless of someone's recommendations :-))

This product and all the others available out there (books, prep kits, study guides) are just additional resources for learning and practicing that one may consider using when preparing for an exam.

Save some disk space - One Reminder and a Cool Windows Command

Sat, 2009-01-17 00:13
CPU Reminder

It's that time of the year when the January CPU is released. I'd like to remind you to think about cleaning up obsolete backups from the $ORACLE_HOME/.patch_storage directory (the ones that are no longer needed for rollback purposes).

Starting with 10g R2, Oracle backs up the affected libraries along with the rest of the affected files. Some of the libraries can be pretty big, so after several CPUs the amount of disk space consumed by these backups can be significant.
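On Unix/Linux, a quick way to check how much space these backups currently occupy (a sketch; adjust the path to your installation):

 $ du -sh $ORACLE_HOME/.patch_storage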

To avoid wasting disk space unnecessarily, you can use the OPatch utility with the util cleanup option to remove those backups that are no longer needed.

Below are the syntax and the options available when using the OPatch tool to clean up old backups:



SYNTAX
opatch util cleanup [-invPtrLoc <path to oraInst.loc>]
                    [-jre <LOC>] [-oh <ORACLE_HOME>]
                    [-silent] [-report]
                    [-ps <patch ID with timestamp>; this will
                     be located under ORACLE_HOME/.patch_storage/]

OPTIONS
-invPtrLoc
Used to locate the oraInst.loc file. Needed when the
installation used the -invPtrLoc flag. This should be
the path to the oraInst.loc file.

-jre
This option tells OPatch to use JRE (java) from the
specified location instead of the default location
under Oracle Home. Both -jdk and -jre options cannot
be specified together. OPatch will display error in
that case.

-oh
The oracle home to work on. This takes precedence over
the environment variable ORACLE_HOME.

-ps
This option is used to specify the Patch ID with timestamp.
This Patch ID with timestamp should be the same as in
.patch_storage directory.

A directory by this name will be present under
ORACLE_HOME/.patch_storage. If this directory is specified
and is valid, then the contents specified in the description
will be cleaned up only for this patch. Otherwise, all patch
related directories will be acted upon by this utility.

-silent
In silent mode, the cleanup always takes place.

-report
Prints the operations without actually executing them.


Make sure you specify the patch ID (the -ps parameter) of the patch backup you want to remove.
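For example, you could first preview what would be removed with -report and then run the cleanup for real (the patch ID with timestamp below is hypothetical; pick yours from the directory names under ORACLE_HOME/.patch_storage):

 $ opatch util cleanup -ps 6646853_Jan_10_2008_21_50_34 -report
 $ opatch util cleanup -ps 6646853_Jan_10_2008_21_50_34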

Cool Windows Command

A few days ago I learned about the forfiles command, which can be used to select a set of files using some criteria (like date modified) and run a command against each of them (like delete). (Something like find -exec in Unix.)

I found it very useful for cleaning up trace and log files that are no longer needed.

It is very handy, and I am really surprised that Microsoft finally came up with something like this. Before, I used to write batch scripts to implement the logic that forfiles provides.

It is available on Windows Vista, Windows 2003 Server and Windows 2008 Server.

You should really take a look if you didn’t know about this one already.
You can find more about forfiles (the syntax and a few examples) in this TechNet note.

Example:

Task: delete all trace files that are older than 30 days.


c:\app\oracle\diag\rdbms\db11g\db11g\trace>forfiles /s /m *.trc /d -30 /c "cmd /c
echo @FILE @FDATE"

"db11g_ora_13200.trc" 12/18/2008
"db11g_ora_18716.trc" 12/18/2008
"db11g_ora_18768.trc" 12/18/2008
"db11g_ora_18892.trc" 12/18/2008
"db11g_ora_3004.trc" 12/18/2008
"db11g_ora_4428.trc" 12/18/2008
"db11g_ora_6256.trc" 12/18/2008
"db11g_ora_6444.trc" 12/18/2008
"db11g_ora_6480.trc" 12/18/2008
"db11g_ora_6504.trc" 12/18/2008
"db11g_ora_6844.trc" 12/18/2008
"db11g_ora_6912.trc" 12/18/2008
"db11g_ora_6928.trc" 12/18/2008
"db11g_ora_7044.trc" 12/18/2008

c:\app\oracle\diag\rdbms\db11g\db11g\trace>forfiles /s /m *.trc /d -30 /c "cmd /c
del @FILE"


c:\app\oracle\diag\rdbms\db11g\db11g\trace>forfiles /s /m *.trc /d -30 /c "cmd /c
echo @FILE @FDATE"
ERROR: No files found with the specified search criteria.



Pretty handy, isn’t it?

ORA-12514 during switchover using Data Guard Broker

Wed, 2009-01-14 22:45
I've seen, on several occasions, questions asked about the Data Guard broker being unable to automatically restart the databases during a switchover, failing with ORA-12514 - TNS:listener does not currently know of service requested in connect descriptor.

This is most likely because the special service db_unique_name_DGMGRL has not been registered properly with the listener.

This is one of the requirements when configuring Data Guard broker.

From the Oracle documentation (Reference Oracle® Data Guard Broker 10g Release 2 (10.2)):

To enable DGMGRL to restart instances during the course of broker operations, a service with a specific name must be statically registered with the local listener of each instance. The value for the GLOBAL_DBNAME attribute must be set to a concatenation of db_unique_name_DGMGRL.db_domain. For example, in the LISTENER.ORA file:

LISTENER = (DESCRIPTION =
(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=host_name)
(PORT=port_num))))
SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(SID_NAME=sid_name)
(GLOBAL_DBNAME=db_unique_name_DGMGRL.db_domain)
(ORACLE_HOME=oracle_home)))



I think the main reason this prerequisite gets overlooked is that nothing about it is mentioned in the Oracle 10g Data Guard Concepts and Administration Guide, in the chapters on standby database configuration.

So once the initial configuration is set up and works fine, the listener prerequisites for setting up the Data Guard broker can easily be overlooked.

Of course, this applies only if the Data Guard broker is configured manually. If one uses OEM Grid Control to set up and configure the Data Guard configuration, OEM will make the necessary changes.

Another point I want to make here is the importance of practicing various switchover and failover scenarios. Not only will you gain more experience and feel more comfortable doing the same thing in a real situation, but you will also learn about any hidden configuration holes that you may have overlooked or missed in your configuration steps.

Here is an example of this problem and its solution:

For this example I used two Oracle 10.2.0.1 databases: dg1db running as the primary database and dg2db running as a physical standby database.


[oracle@dg1 ~]$ dgmgrl
DGMGRL for Linux: Version 10.2.0.1.0 - Production

Copyright (c) 2000, 2005, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/***
Connected.
DGMGRL> show configuration

Configuration
Name: dg-test
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
dg1db - Primary database
dg2db - Physical standby database

Current status for "dg-test":
SUCCESS


Now let's try the switchover. As you can see, the role transition was done successfully, but the database startup that should have happened at the end failed with ORA-12514: TNS:listener does not currently know of service requested in connect descriptor.


DGMGRL> switchover to dg2db;
Performing switchover NOW, please wait...
Operation requires shutdown of instance "dg1db" on database "dg1db"
Shutting down instance "dg1db"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires shutdown of instance "dg2db" on database "dg2db"
Shutting down instance "dg2db"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "dg1db" on database "dg1db"
Starting instance "dg1db"...
Unable to connect to database
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Failed.
You are no longer connected to ORACLE
Please connect again.
Unable to start instance "dg1db"
You must start instance "dg1db" manually
Operation requires startup of instance "dg2db" on database "dg2db"
You must start instance "dg2db" manually
Switchover succeeded, new primary is "dg2db"
DGMGRL>


After starting the databases manually, I checked the status of the Data Guard configuration and it was SUCCESS.


DGMGRL> show configuration

Configuration
Name: dg-test
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
dg1db - Physical standby database
dg2db - Primary database

Current status for "dg-test":
SUCCESS

DGMGRL>


Now I will make the necessary changes to the listeners and try the switchover in the opposite direction. After the change, listener.ora should look like the following (pay attention to GLOBAL_DBNAME=dg1db_DGMGRL.localdomain):


[oracle@dg1 admin]$ more listener.ora
# listener.ora Network Configuration File: /u01/app/oracle/product/10.2.0/ora10g/network/admin/listener.ora
# Generated by Oracle configuration tools.

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = dg1db_DGMGRL.localdomain)
(ORACLE_HOME = /u01/app/oracle/product/ora10g)
(SID_NAME= dg1db)
)
)

LISTENER =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = dg1.localdomain)(PORT = 1521))
)

[oracle@dg1 admin]$ lsnrctl reload
[oracle@dg1 admin]$ lsnrctl status

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 13-JAN-2009 17:02:35

Copyright (c) 1991, 2005, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dg1.localdomain)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 13-JAN-2009 08:08:17
Uptime 0 days 8 hr. 54 min. 18 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/10.2.0/ora10g/network/admin/listener.ora
Listener Log File /u01/app/oracle/product/10.2.0/ora10g/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dg1.localdomain)(PORT=1521)))
Services Summary...
Service "dg1db.localdomain" has 1 instance(s).
Instance "dg1db", status READY, has 1 handler(s) for this service...
Service "dg1db_DGB.localdomain" has 1 instance(s).
Instance "dg1db", status READY, has 1 handler(s) for this service...
Service "dg1db_DGMGRL.localdomain" has 1 instance(s).
Instance "dg1db", status UNKNOWN, has 1 handler(s) for this service...
Service "dg1db_XPT.localdomain" has 1 instance(s).
Instance "dg1db", status READY, has 1 handler(s) for this service...
The command completed successfully

I did the same thing with the second listener.
Now the switchover completes without any problems:
DGMGRL> show configuration

Configuration
Name: dg-test
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
dg1db - Physical standby database
dg2db - Primary database

Current status for "dg-test":
SUCCESS

DGMGRL> switchover to dg1db
Performing switchover NOW, please wait...
Operation requires shutdown of instance "dg2db" on database "dg2db"
Shutting down instance "dg2db"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires shutdown of instance "dg1db" on database "dg1db"
Shutting down instance "dg1db"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "dg2db" on database "dg2db"
Starting instance "dg2db"...
ORACLE instance started.
Database mounted.
Operation requires startup of instance "dg1db" on database "dg1db"
Starting instance "dg1db"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "dg1db"
DGMGRL>

DGMGRL> show configuration

Configuration
Name: dg-test
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
dg1db - Primary database
dg2db - Physical standby database

Current status for "dg-test":
SUCCESS

DGMGRL>


Cheers,
Mihajlo Tekic

HAPPY NEW YEAR !!!

Thu, 2009-01-01 16:31
HAPPY NEW YEAR TO EVERYONE !!!

I WISH YOU AND YOUR FAMILIES HEALTHY, HAPPY AND SUCCESSFUL NEW YEAR !!!



The picture above is of my daughter Monika.

She is four months old and is the best thing that happened to me in 2008.

ORA-23375 when adding new master site in multi-master environment

Sun, 2008-08-31 17:58
If your database global name contains a word that belongs to the list of Oracle Database reserved words, and you are planning to add this database as a new master site in a multi-master environment, you may want to consider changing the database global name so you can avoid ORA-23375 when executing the DBMS_REPCAT.ADD_MASTER_DATABASE procedure.

This is especially important for databases whose global name contains the country internet code of any of the following countries: Austria (AT), Belarus (BY), India (IN), Iceland (IS), etc. AT, BY, IN and IS are among the database reserved words.
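A quick sanity check before creating any links is to look at the global name of the database you are about to add:

 SQL> select global_name from global_name;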

You may experience problems even before you try to add the new master site. The creation of the database link to the new master site may fail with:
ORA-02084: database name is missing a component
However, in some cases you may not get this error, for instance when the AT keyword is used, or if you put the database link name in double quotes.

(The environment used in the examples below is DB1.MYDOMAIN.COM (master definition site, release 10.2.0.2) and DB11G.MYDOMAIN.IN (master site, release 11.1.0.6).)

SQL> create database link DB11G.MYDOMAIN.IN
2 connect to rep_admin
3 identified by "rep_password"
4 using 'DB11G';
create database link DB11G.MYDOMAIN.IN
*
ERROR at line 1:
ORA-02084: database name is missing a component

SQL> create database link "DB11G.MYDOMAIN.IN"
2 connect to rep_admin
3 identified by "rep_password"
4 using 'db11g';

Database link created.

SQL> create database link DB11G.MYDOMAIN.AT
2 connect to rep_admin
3 identified by "rep_password"
4 using 'DB11G';

Database link created.



But even if the database link creation succeeds, the attempt to add the new master site will definitely fail with:
ORA-23375: feature is incompatible with database version at global database name


SQL> begin
2 dbms_repcat.add_master_database(
3 gname=>'TEST_GROUP',
4 master=>'"DB11G.MYDOMAIN.IN"',
5 use_existing_objects =>true,
6 copy_rows =>false,
7 propagation_mode=>'ASYNCHRONOUS');
8 end;
9 /
begin
*
ERROR at line 1:
ORA-23375: feature is incompatible with database version at DB11G.MYDOMAIN.IN
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_REPCAT_MAS", line 2159
ORA-06512: at "SYS.DBMS_REPCAT", line 146
ORA-06512: at line 2


SQL>

SQL> begin
2 dbms_repcat.add_master_database(
3 gname=>'TEST_GROUP',
4 master=>'DB11G.MYDOMAIN.AT',
5 use_existing_objects =>true,
6 copy_rows =>false,
7 propagation_mode=>'ASYNCHRONOUS');
8 end;
9 /

begin
*
ERROR at line 1:
ORA-23375: feature is incompatible with database version at DB11G.MYDOMAIN.AT
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_REPCAT_MAS", line 2159
ORA-06512: at "SYS.DBMS_REPCAT", line 146
ORA-06512: at line 2

If you turn on the 10046 trace event, you can notice that the error occurs when the DBMS_REPCAT_RPC.REPLICATION_VERSION_RC and DBMS_REPCAT_UTL2.REPLICATION_VERSION procedures are invoked against the new master database.

PARSE ERROR #39:len=73 dep=1 uid=61 oct=47 lid=61 tim=1191209094695404 err=6550
BEGIN sys.dbms_repcat_rpc.replication_version_rc@DB11G.MYDOMAIN.IN (:rv);END;
=====================
PARSE ERROR #39:len=72 dep=1 uid=61 oct=47 lid=61 tim=1191209094702737 err=6550
BEGIN sys.dbms_repcat_utl2.replication_version@DB11G.MYDOMAIN.IN (:rv);END;

To solve this problem, the global name of the new master database should be changed. Once the global name is changed to a value that does not contain any reserved word, you should be able to add the database to the replication environment.

In the example below, I will try to add a new master site whose global name was changed to DB11G.FOO.BAR.

SQL>conn sys/***@DB11G as sysdba
Connected.
SQL> alter database rename global_name to DB11G.FOO.BAR;

Database altered.

SQL> conn rep_admin/rep_password@DB1
Connected.
SQL>
SQL> create database link DB11G.FOO.BAR
2 connect to rep_admin
3 identified by "rep_password"
4 using 'DB11G';

Database link created.

SQL> begin
2 dbms_repcat.add_master_database(
3 gname=>'TEST_GROUP',
4 master=>'DB11G.FOO.BAR',
5 use_existing_objects =>true,
6 copy_rows =>false,
7 propagation_mode=>'ASYNCHRONOUS');
8 end;
9 /

PL/SQL procedure successfully completed.


SQL> column dblink format a30
SQL> select DBLINK, MASTERDEF, MASTER
2 from dba_repsites;

DBLINK M M
------------------------------ - -
DB1.MYDOMAIN.COM Y Y
DB11G.FOO.BAR N Y

There are a few ways to find the Oracle PL/SQL reserved words (see the example after this list):
  1. V$RESERVED_WORDS

  2. Oracle documentation (PL/SQL Reserved Words and Keywords). Check your version specific documentation available on http://tahiti.oracle.com

  3. Issue HELP RESERVED WORDS (PL/SQL) from Sql*Plus (if Sql*Plus help is installed).
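For instance, checking the country codes mentioned above against V$RESERVED_WORDS (a sketch; the KEYWORD column is available in 10g and later):

 SQL> select keyword
 2 from v$reserved_words
 3 where keyword in ('AT','BY','IN','IS');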

Fast-Start Failover - It is reliable

Sun, 2008-08-10 20:45
Last Friday there was a post on the OTN forums that caught my attention. The OP was wondering “Is Data Guard Buggy?”, with regard to the Fast-Start Failover (FSFO) feature that provides the ability to automatically fail over to the standby database if the primary database is unavailable for a certain time.

He had some concerns about FSFO being unreliable, very difficult to implement, and not actually working properly.

The OP got some immediate responses from OTN users. I couldn't agree more with Joseph Meeks's comment on the topic.

In my opinion, FSFO is a very nice feature that plays a big role in Oracle's Maximum Availability Architecture. There might be some valid reasons not to implement it, but if automatic failover is a requirement, FSFO is the way to go. Should one have any problems implementing it, the best way to go is to get Oracle Support involved.

In this post, I'd like to show that implementing FSFO should not be a difficult task once you have the Data Guard environment set up.

I configured a Data Guard physical standby environment on my laptop. Due to hardware limitations, I'll have the observer running on the same machine as the standby database. Keep in mind this is just an example; in practice, the primary database, the standby database and the observer should run on different hosts. This example also answers one of the questions the OP asked: is it possible to set it all up on one machine? The answer is that it is possible, as shown in the example :-), but it is not the right way to go.

I use the DGMGRL utility in the example.

So, I configured a Data Guard environment where DB1.MYDOMAIN.COM is the primary database and STDB.MYDOMAIN.COM is the physical standby. Both databases are 10.2.x.
By default, the protection mode of the Data Guard configuration is set to MAXIMUM PERFORMANCE.

DGMGRL> connect sys/***@db1
Connected.
DGMGRL> show configuration

Configuration
Name: DRTest
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
db1 - Primary database
stdb - Physical standby database

Current status for "DRTest":
SUCCESS


Enabling Fast-Start Failover requires the following pre-requisites to be met:
Flashback Database feature is enabled on both the primary and the standby database.
The protection mode of the configuration must be set to MAXIMUM AVAILABILITY
tnsnames.ora in the ORACLE_HOME where the observer runs must be set up so it can reach both databases, the primary and the standby (a sample is sketched after this list).
DGMGRL must be available on the observer host.
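For instance, the observer's tnsnames.ora would contain entries for both databases, roughly like the sketch below (host and service names follow the test environment used in this post; adjust them to yours):

 DB1 =
 (DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = miki-laptop)(PORT = 1521))
 (CONNECT_DATA = (SERVICE_NAME = db1.mydomain.com))
 )

 STDB =
 (DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = miki-laptop)(PORT = 1521))
 (CONNECT_DATA = (SERVICE_NAME = stdb.mydomain.com))
 )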

Enable Flashback Database
I will enable the Flashback Database feature on both databases. This assumes that the flash recovery area is configured.
The Flashback Database feature makes it easy to reinstate the failed primary database as the new standby database.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

SQL> startup mount
ORACLE instance started.

Total System Global Area 285212672 bytes
Fixed Size 1261348 bytes
Variable Size 100663516 bytes
Database Buffers 176160768 bytes
Redo Buffers 7127040 bytes
Database mounted.
SQL> alter database flashback on;

Database altered.

SQL> alter database open;

Database altered.


Set the protection mode to MAXIMUM AVAILABILITY
Fast-Start Failover can be enabled only if the protection mode is set to MAXIMUM AVAILABILITY.
LogXptMode has to be set to 'SYNC' on both databases.

DGMGRL> edit database 'db1'
> set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database 'stdb'
> set property LogXptMode='SYNC';
Property "logxptmode" updated

DGMGRL> edit configuration set protection mode as MaxAvailability;
Succeeded.
DGMGRL> show configuration

Configuration
Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: DISABLED
Databases:
db1 - Primary database
stdb - Physical standby database

Current status for "DRTest":
SUCCESS

SQL> conn sys/***@db1 as sysdba
Connected.
SQL> select protection_mode, protection_level from v$database;

PROTECTION_MODE PROTECTION_LEVEL
-------------------- --------------------
MAXIMUM AVAILABILITY MAXIMUM AVAILABILITY

SQL> conn sys/***@stdb as sysdba
Connected.
SQL> select protection_mode, protection_level from v$database;

PROTECTION_MODE PROTECTION_LEVEL
-------------------- --------------------
MAXIMUM AVAILABILITY MAXIMUM AVAILABILITY

SQL>


I ensured that the tnsnames.ora files are set correctly and that DGMGRL is installed.

Now, since all the prerequisites are met, let's move forward and enable FSFO.
Before enabling it, make sure each of the databases in the configuration has a fast-start failover target set. This is achieved by setting the FastStartFailoverTarget property.


DGMGRL> edit database 'db1' set property faststartfailovertarget='stdb';
Property "faststartfailovertarget" updated
DGMGRL> edit database 'stdb' set property faststartfailovertarget='db1';
Property "faststartfailovertarget" updated


Another important property that has to be set is FastStartFailoverThreshold. It specifies the amount of time (in seconds) the observer attempts to reconnect to the primary database before starting the fast-start failover to the standby database. The default value is 30 seconds. In the example I set it to 120 seconds.


DGMGRL> edit configuration set property FastStartFailoverThreshold=120;
Property "faststartfailoverthreshold" updated


Now let's enable fast-start failover:


DGMGRL> ENABLE FAST_START FAILOVER;
Enabled.


If you check the configuration at this moment you may find the following warning: ORA-16608: one or more databases have warnings.
If you check the status of one of the databases, you can see this warning: ORA-16819: Fast-Start Failover observer not started


DGMGRL> show configuration

Configuration
Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
db1 - Primary database
stdb - Physical standby database
- Fast-Start Failover target

Current status for "DRTest":
Warning: ORA-16608: one or more databases have warnings


DGMGRL> show database verbose 'db1'

Database
Name: db1
Role: PRIMARY
Enabled: YES
Intended State: ONLINE
Instance(s):
db1

Properties:
InitialConnectIdentifier = 'db1.mydomain.com'
LogXptMode = 'SYNC'
Dependency = ''
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '180'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'auto'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '2'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = 'stdb, db1'
LogFileNameConvert = 'stdb, db1'
FastStartFailoverTarget = 'stdb'
StatusReport = '(monitor)'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
HostName = 'miki-laptop'
SidName = 'db1'
LocalListenerAddress = '(ADDRESS=(PROTOCOL=tcp)(HOST=miki-laptop)(PORT=1521))'
StandbyArchiveLocation = 'dgsby_db1'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.dbf'
LatestLog = '(monitor)'
TopWaitEvents = '(monitor)'

Current status for "db1":
Warning: ORA-16819: Fast-Start Failover observer not started


So let's start the observer. I will repeat again: the observer should run on a different host; however, for the sake of this example it will run on the same machine as the databases.
In order to start the observer, one should start the DGMGRL utility and log in to the Data Guard configuration.
Once logged in, issue the START OBSERVER command. This will start the observer.
Optionally, you can set a log file destination when invoking the DGMGRL utility and specify a name for the observer configuration file (for more information check the Data Guard Command-Line Interface Reference).
Once the observer is started, control is not returned to the user until the observer is stopped.
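For instance, a sketch (the log and observer file names here are hypothetical):

 $ dgmgrl -logfile /home/oracle/observer.log sys/***@db1
 DGMGRL> start observer file='/home/oracle/fsfo.dat';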


DGMGRL for Linux: Version 10.2.0.3.0 - Production

Copyright (c) 2000, 2005, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/***@db1
Connected.
DGMGRL> start observer;
Observer started


So let's check the configuration now.


$ dgmgrl
DGMGRL for Linux: Version 10.2.0.3.0 - Production

Copyright (c) 2000, 2005, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/***@db1
Connected.
DGMGRL> show configuration verbose;

Configuration
Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
db1 - Primary database
stdb - Physical standby database
- Fast-Start Failover target

Fast-Start Failover
Threshold: 120 seconds
Observer: miki-laptop

Current status for "DRTest":
SUCCESS

DGMGRL>


As of this moment my configuration has FSFO enabled.

Now let's test whether FSFO really works.
One should be aware of the conditions that must be satisfied for the observer to attempt FSFO.
The full list of conditions can be found in the Data Guard Broker user guide under 5.5.2.1 What Happens When Fast-Start Failover and the Observer Are Running?

I will issue SHUTDOWN ABORT on the primary database (SHUTDOWN NORMAL/IMMEDIATE/TRANSACTIONAL would not trigger the failover).
Once I crash the database, the configuration status returns Error: ORA-16625: cannot reach the database. Be aware that since the primary database is down, the only way to check the configuration is to connect using the standby database credentials.


SQL> conn sys/***@db1 as sysdba
Connected.
SQL> shutdown abort
ORACLE instance shut down.
SQL>

DGMGRL> connect sys/***@stdb
Connected.
DGMGRL> show configuration

Configuration
Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
db1 - Primary database
stdb - Physical standby database
- Fast-Start Failover target

Current status for "DRTest":
Error: ORA-16625: cannot reach the database


After waiting two minutes (the FSFO threshold was set to 120 seconds), I checked the observer log file and found that it had started the failover.


[W000 08/09 17:58:49.75] Observer started.

18:05:38.31 Saturday, August 09, 2008
Initiating fast-start failover to database "stdb"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "stdb"
18:06:16.82 Saturday, August 09, 2008



DGMGRL> show configuration verbose

Configuration

Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
db1 - Physical standby database (disabled)
- Fast-Start Failover target
stdb - Primary database

Fast-Start Failover

Threshold: 120 seconds
Observer: miki-laptop

Current status for "DRTest":

Warning: ORA-16608: one or more databases have warnings

DGMGRL>


So the observer started the fast-start failover and successfully converted the former standby into the new primary database.

Once the failover is done, the observer will attempt to automatically reinstate the former primary database as the new physical standby. Certain requirements have to be met for the automatic reinstatement to take place; if they are not, the database can be reinstated manually, as sketched below.
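For reference, a manual reinstatement would look roughly like the sketch below: start the failed primary in MOUNT mode, then reinstate it while connected through DGMGRL to the new primary:

 SQL> startup mount

 DGMGRL> connect sys/***@stdb
 Connected.
 DGMGRL> reinstate database 'db1';

Once the reinstatement is successful, you can see that the former primary database became the new physical standby: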

DGMGRL> show configuration

Configuration

Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:

db1 - Physical standby database
- Fast-Start Failover target
stdb - Primary database

Current status for "DRTest":

SUCCESS

DGMGRL>


Finally, you can do a switchover to perform the role change and place the configuration back in its initial state.

The example above shows that enabling fast-start failover is pretty straightforward. There are many good documents with information on how to enable the FSFO feature. Some of them are listed below:

Oracle® Data Guard Broker 10g Release 2 (10.2) - 5.5 Fast-Start Failover
What Happens When Fast-Start Failover and the Observer Are Running?

7.6 Scenario 5: Enabling Fast-Start Failover and the Observer

Metalink Note#359555.1 IMPLEMENTING FAST-START FAILOVER IN 10GR2 DATAGUARD BROKER ENVIRONMENT

Oracle Maximum Availability Architecture – MAA

I hope I didn't miss anything.

Cheers!

Metalink, SCM, the error and the good stuff

Thu, 2008-07-31 23:45
Yesterday I got an e-mail from Oracle Support letting me know about the new Metalink interface which is supposed to go live this Fall:


Dear MetaLink Customer,

Oracle is committed to consistently improving your customer support experience. In the fall of 2008, MetaLink will have a new user interface. To help you prepare for the transition, you may now preview MetaLink's new user interface and provide valuable feedback about its features.

******


I clicked on the link that was supposed to redirect me to Metalink, and I got one very fancy schmancy login page. Well, actually, it is the Software Configuration Manager (SCM).

I put in my login credentials and ... I got an error message "IO Error Error #2032"



WOW ... :-)

Well, OK, the problem was fixed later that day.

Regardless of the problem I experienced, I must say that I've been using SCM for quite a while and I am pretty impressed with its functionality. It is really much easier to create and manage service requests using the configurations you have registered with SCM. Oracle Support engineers have all the information they need about the configuration of the server and the database.

Searching the knowledge base looks improved as well. Now you can have your search results visible in the left panel of the screen while, at the same time, you read the content of the selected note. This makes navigation much easier.

The Service Requests section has a new design too.

All in all: improved functionality, better navigation, good design, and some new features too ... Good job!!!

Useful Links:

http://csm.oracle.com/
http://www.oracle.com/support/premier/software-configuration-manager.html

1Z0-043 ... Passed

Sun, 2008-07-27 23:32
I passed the 1Z0-043 exam a week ago. While waiting for Oracle to process my hands-on course requirement, I am thinking about what the next step should be. Certainly OCM is on my list, but in the meantime I'd like to go for the SQL Expert certificate.

I have been studying very hard, especially over the last month. I started with Oracle 10g Certification and the Oracle Database 10g OCP Certification All-In-One Exam Guide, but I spent most of the time reading the official documentation and, of course, practicing a lot.

The books I mentioned are good; they cover the basics, but I found they are not quite sufficient to prepare you for the exam. Some of the material is not up to date with the exam topics. The sample tests that come with the books contain some questions that are out of the scope of the exam, and some of the questions have answers that are wrong or arguable.

It is very important to practice a lot while preparing for the exam. It is not very difficult to set up a test environment for each of the topics. It was slightly complicated to create a test environment for ASM on my Ubuntu Linux, but Grégory Guillou's post How To Set Up Oracle ASM on Ubuntu Gutsy Gibbon helped me with that.

Finally, while preparing for the exam, I improved my skills significantly and learned some new stuff that I had no prior experience with (Resource Manager, for instance).

Blocked Patch files cause OPatch to fail with error code = 170

Thu, 2007-11-01 12:28
I had to apply a CPU patch to a 9.2.0.7 Oracle Home on a Windows 2003 Server box.

The OPatch utility was failing with the following error:


The patch directory area must be a number.

ERROR: OPatch failed because of problems in patch area.
OPatch returns with error code = 170


Well, there are quite a few reasons why you may get this error when you try to apply a patch using OPatch.
Many of them are covered in the following Metalink notes:

Note:369555.1
Note:294350.1
Note:418479.1


But I couldn't find anything about the one I came across recently.

I tried everything described in the notes above. Nothing worked, until I realized that all of the patch files had somehow been blocked.

Although the patch files had all the necessary permissions, especially read, the OPatch utility couldn't read them because of the lock set by the OS.

If you check the file properties you can see the message:
"This file came from another computer and might be blocked to help protect this computer".



The reason these files were blocked was that I had copied them from a different machine.

A blocked file can be unblocked by pressing the Unblock button.
The odd thing was that only one file could be unblocked at a time.



So I just deleted the patch directory, unblocked the patch zip file, and extracted it again.

I hadn't had a similar issue before. I believe this is a new "security" feature of Windows 2003 Server.

Query to determine the clustering factor before you create the index

Sun, 2007-09-16 10:33
I found the following query useful for determining the clustering factor of indexes that have not been created yet.

select count(1) clustering_factor
from
(
select dbms_rowid.rowid_block_number(rowid) block_no,
list of the indexed columns,
count(1) num_rows,
LAG(dbms_rowid.rowid_block_number(rowid))
over (order by list of the indexed columns) prev_block
from table_name
group by dbms_rowid.rowid_block_number(rowid),
list of the indexed columns
order by list of the indexed columns
)
where block_no<>prev_block or prev_block is null


Let's take a look at the following example:

SQL> create table test
2 tablespace example
3 as select
4 mod(rownum,8) col1,
5 lpad('x',1000) col2
6 from all_objects
7 where rownum<=1000;


Using the following query, we can see that there are about 7 rows per block (block size 8192), which makes records with the same COL1 value scattered across different blocks:

SQL> select num_rows_in_block, count(1) blocks
2 from
3 (
4 select block_no, count(1) num_rows_in_block
5 from
6 (
7 select dbms_rowid.rowid_block_number(rowid) block_no
8 from test
9 )
10 group by block_no
11 )
12* group by num_rows_in_block
SQL> /

NUM_ROWS_IN_BLOCK BLOCKS
----------------- ----------
6 1
7 142

Let's see how big the clustering factor of an index built on TEST(COL1) would be.

SQL> select count(1) clustering_factor
2 from
3 (
4 select dbms_rowid.rowid_block_number(rowid) block_no,
5 col1,
6 count(1) num_rows,
7 LAG(dbms_rowid.rowid_block_number(rowid))
8 over (order by col1) prev_block
9 from test
10 group by dbms_rowid.rowid_block_number(rowid), col1
11 order by col1
12 )
13* where block_no<>prev_block or prev_block is null
SQL> /

CLUSTERING_FACTOR
-----------------
1000

The result was pretty obvious.
Now let's add another column, COL3, to the table TEST and set its value to mod(col1,4).
This will make the blocks contain at most two records with the same value of COL3 (a quick check follows the next listing).

SQL> alter table test
2 add col3 number;

Table altered.

SQL> update test set col3=mod(col1,4);

1000 rows updated.

SQL> commit;

Commit complete.

SQL>
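As the quick check promised above, we can verify that no block holds more than two rows with the same COL3 value (with about 7 rows per block and COL3 cycling through 4 distinct values, the expected maximum is 2); a sketch:

SQL> select max(cnt) max_rows_same_col3_in_block
2 from
3 (
4 select dbms_rowid.rowid_block_number(rowid) block_no,
5 col3, count(1) cnt
6 from test
7 group by dbms_rowid.rowid_block_number(rowid), col3
8 );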

Now let's see what the clustering factor of an index created on the COL3 column would be:

SQL> select count(1) clustering_factor
2 from
3 (
4 select dbms_rowid.rowid_block_number(rowid) block_no,
5 col3,
6 count(1) num_rows,
7 LAG(dbms_rowid.rowid_block_number(rowid))
8 over (order by col3) prev_block
9 from test
10 group by dbms_rowid.rowid_block_number(rowid), col3
11 order by col3
12 )
13* where block_no<>prev_block or prev_block is null
SQL> /

CLUSTERING_FACTOR
-----------------
572

Now let's create the actual indexes and see what their clustering factors would be:

SQL> create index test_col1_idx on test(col1);

Index created.

SQL> create index test_col3_idx on test(col3);

Index created.

SQL> select index_name, clustering_factor
2 from user_indexes
3* where table_name='TEST'
SQL> /

INDEX_NAME CLUSTERING_FACTOR
------------------------------ -----------------
TEST_COL1_IDX 1000
TEST_COL3_IDX 572

Excellent insights into the importance of the clustering factor for cost calculation can be found in Jonathan Lewis's book Cost-Based Oracle Fundamentals.

Let's go back to the very first query in this post. Why do I think this query is useful?
Because sometimes, even though some columns look like a perfect combination to create an index on, the index might not be used by the optimizer because the clustering factor is big.
The query can give you an initial clue as to how effective an index could be.

Run Flashback commands only from Sql*Plus 10.1.x or newer

Fri, 2006-12-15 23:24
I was getting ORA-08186: invalid timestamp specified each time I tried to run a flashback versions query (FVQ).

Well ... take a look at the following example.

First, I wanted to make sure that the format I was using was the correct one.


1* select to_char(systimestamp,'DD-MON-RR HH.MI.SSXFF AM') from dual
SQL> /

TO_CHAR(SYSTIMESTAMP,'DD-MON-RR
-------------------------------
14-DEC-06 10.27.26.622829 AM


Now, when I tried to run the FVQ, I got "ORA-30052: invalid lower limit snapshot expression". That was an expected result, since my lower limit did not fall within the (SYSDATE-UNDO_RETENTION, SYSDATE] range (the UNDO_RETENTION parameter was set to 900).
But you will agree that Oracle successfully processed the timestamp values I used in this query.
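For reference, the earliest usable lower limit can be computed from the current time and UNDO_RETENTION (a sketch; 900 is the UNDO_RETENTION value in this environment):

SQL> select systimestamp - numtodsinterval(900, 'SECOND') earliest_lower_limit
2 from dual;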


SQL> ed
Wrote file afiedt.buf

1 select comm
2 from scott.emp
3 versions between timestamp
4 to_timestamp('14-DEC-06 09.45.00.000000 AM','DD-MON-RR HH.MI.SSXFF AM') and
5 to_timestamp('14-DEC-06 10.00.00.000000 AM','DD-MON-RR HH.MI.SSXFF AM')
6 where
7* empno = 7369
SQL> /
from scott.emp
*
ERROR at line 2:
ORA-30052: invalid lower limit snapshot expression

So I modified the lower limit to fit in the right range, and I got ORA-08186: invalid timestamp specified!?!?

SQL> ed
Wrote file afiedt.buf

1 select comm
2 from scott.emp
3 versions between timestamp
4 to_timestamp('14-DEC-06 10.20.00.000000 AM','DD-MON-RR HH.MI.SSXFF AM') and
5 to_timestamp('14-DEC-06 11.00.00.000000 AM','DD-MON-RR HH.MI.SSXFF AM')
6 where
7* empno = 7369
SQL> /
from scott.emp
*
ERROR at line 2:
ORA-08186: invalid timestamp specified

After some time spent trying to resolve this issue (I didn't dare open an SR about it :-)), I remembered I had had similar problems a while ago while testing some flashback features (flashback table to before drop) on Sql*Plus 9.2.x ... and I was using Sql*Plus 9.2 again.

I tried the same example on Sql*Plus 10.1.0.2

… and everything worked well.

Some books might make you think wrong

Sat, 2006-11-11 10:59
It is amazing how confusing some of these books can be.
You'd better double-check any information that differs from what you already know.
Currently I am reading a book that is supposed to prepare me for an OCP exam.
There are questions and answers after each chapter, and I found a few answers that I think are not correct.
Don't get me wrong, I still think the book I am reading is a good one.

One example (I changed the usernames and the sequence name used in the original question):
Question:

The following SQL statement will allow user test_usr to perform which operations on the sequence scott.test_sq?
GRANT ALL ON scott.test_sq to test_usr;


A) Select the next value from scott.test_sq
B) Alter sequence scott.test_sq to change the next value
C) Change the number of sequence numbers that will be cached in memory
D) Both A and C
E) All of the above

The answer provided in the book is D.

But, I think it is E. Why?


SQL> conn scott/*****
Connected.
scott@ora10g> create sequence test_sq
2 start with 1
3 increment by 1;

Sequence created.

scott@ora10g> select test_sq.nextval from dual;

NEXTVAL
----------
1
scott@ora10g> conn sys as sysdba
Connected.
sys@ora10g> create user test_usr identified by test_usr
2 default tablespace users
3 temporary tablespace temp;

User created.

sys@ora10g> grant connect, resource to test_usr;

Grant succeeded.

sys@ora10g> grant all on scott.test_sq to test_usr;

Grant succeeded.

sys@ora10g> conn test_usr/test_usr
Connected.
test_usr@ora10g> select scott.test_sq.nextval from dual;

NEXTVAL
----------
2

test_usr@ora10g> alter sequence scott.test_sq
2 increment by 10;

Sequence altered.

test_usr@ora10g> select scott.test_sq.nextval from dual;

NEXTVAL
----------
12

test_usr@ora10g> alter sequence scott.test_sq
2 increment by 1;

Sequence altered.

test_usr@ora10g> conn scott/*****
Connected.
scott@ora10g> select test_sq.nextval from dual;

NEXTVAL
----------
13

scott@ora10g>
