Feed aggregator

Is there a way to find the occupied sizes of all tables (and of all columns of each table) in an Oracle DB?

Tom Kyte - 25 min 55 sec ago
I need the sizes of all columns of a table, and the same details for all user tables in the DB. -- Query to find # of rows and sizes of all data in Table1: select count(1), sum(length(column1)), sum(length(column2)), sum(length(column3)), sum(length(column4)), sum(length(column5)) from TABLE1; Should I construct a similar query for every table and gather the info, OR is there a way to automatically pull all table sizes?
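
A minimal sketch of one way to pull this for every table at once, using the segment size plus the optimizer's average column lengths from the USER_* views (it assumes statistics are reasonably fresh, so the per-row figure is an estimate rather than an exact byte count):

-- Approximate per-table data volume from segment size plus stats-based row length
select t.table_name,
       t.num_rows,
       s.bytes                                  as segment_bytes,
       t.num_rows * nvl(c.total_avg_row_len, 0) as approx_data_bytes
from   user_tables t
       left join user_segments s
              on s.segment_name = t.table_name
             and s.segment_type = 'TABLE'
       left join (select table_name, sum(avg_col_len) as total_avg_row_len
                  from   user_tab_columns
                  group  by table_name) c
              on c.table_name = t.table_name
order  by s.bytes desc nulls last;
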
Categories: DBA Blogs

"Compute" inside a "group by"

Tom Kyte - 25 min 55 sec ago
Hi. Does anyone know how to add a "Compute" inside a "group by" already set up through the Actions button in an Interactive Report? Is it possible to do it? Has someone done something similar? Thank you
Categories: DBA Blogs

Operations preserving row order

Tom Kyte - 25 min 55 sec ago
Hi Tom, a fan of your work. Have a question: are there any operations in Oracle preserving row order? For example, can I expect that <code> select * from (select tag from test order by tag) </code> Will return in sorted order? Or if a pipelined table function produces a dozen rows in certain order, can I use "select * from table(f())" to see them in the same order? Will a cursor read rows from a pipelined function in the same order they are piped? Basically, looking for exceptions to the general rule "any operation destroys row order".
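
A minimal sketch of the one construct whose ordering is actually guaranteed, an ORDER BY on the outermost query; it assumes the TEST table from the question and a pipelined function f() returning a collection of scalars (so COLUMN_VALUE applies):

-- Only an ORDER BY at the top level of the statement guarantees the order rows come back in.
-- An ORDER BY in an inline view, or the order a pipelined function pipes rows, is not a contract.
select t.tag
from   (select tag from test) t        -- any inner ordering may be discarded by the optimizer
order  by t.tag;                       -- this outer ORDER BY is the only guarantee

select column_value
from   table(f())                      -- rows from the pipelined function...
order  by column_value;                -- ...still need an explicit ORDER BY if their order matters
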
Categories: DBA Blogs

Extract and delete records from table by only one session

Tom Kyte - 25 min 55 sec ago
Hello, we have a table with records that contain values that are filled in batch and should be consumed by the client application. The various sessions should read a record and delete it, and it must be guaranteed that one record is used by only one session. We tried select for update but had locking problems. Is there another way to "consume" the records and be sure that only one session gets a given record? Regards Andreas
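
A minimal sketch of the usual pattern for this (short of moving to Advanced Queuing): SELECT ... FOR UPDATE SKIP LOCKED, so concurrent sessions never contend for the same row. The table and column names (work_queue, payload) are made up for illustration:

-- Each session locks rows nobody else has locked, consumes them, deletes them, commits.
-- SKIP LOCKED means two sessions can never claim the same row.
declare
  cursor c_work is
    select rowid as rid, payload
    from   work_queue
    for update skip locked;
begin
  for r in c_work loop
    -- process r.payload here ...
    delete from work_queue where rowid = r.rid;
    exit;   -- take a single row per call; drop the EXIT to drain everything that is unlocked
  end loop;
  commit;
end;
/
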
Categories: DBA Blogs

Batch Processing vs Stream Processing

Online Apps DBA - 4 hours 51 min ago

In today’s digital economy, data is the new currency, but it is still a struggle to keep pace with the changes in enterprise data and the growing business demands for information. While businesses can agree that cloud-based technologies are key to ensuring data management, security, privacy, and process compliance across enterprises, there’s still a hot […]

The post Batch Processing vs Stream Processing appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Azure Stream Analytics

Online Apps DBA - 5 hours 42 min ago

If you have incoming live streaming data that you wish to store or get insights by transforming it, or report on it with Power BI, but confused with the right Azure Service to process this data, then Azure Stream Analytics is the one you are looking for. Azure Stream Analytics is the perfect solution when […]

The post Azure Stream Analytics appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Periodically Test Recovery Procedures

Tom Kyte - Mon, 2020-11-23 14:46
Hello Team, I need to document our RMAN backup and restore best practices, and I'm struggling to answer the question: how often (months, years, etc.) should we test our restore procedures? (What is the maximum acceptable time between restores, at least a "RESTORE ... VALIDATE"?) Thanks. Here is an old document: https://docs.oracle.com/cd/B12037_01/server.101/b10726/configbp.htm#1007459
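
Between full restore rehearsals, a validation-only run can be scheduled much more frequently, since it reads the backup pieces and confirms they are restorable without writing any datafiles; a minimal RMAN sketch (connected to the target) might be:

# validation only: nothing is actually restored
RESTORE DATABASE VALIDATE;
RESTORE ARCHIVELOG ALL VALIDATE;

# lists which backups would be used for a restore, without reading them all
RESTORE DATABASE PREVIEW;
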
Categories: DBA Blogs

How to extract specific tags from a CLOB column storing XML

Tom Kyte - Mon, 2020-11-23 14:46
I have a clob column that as different tags in it, like the example below, I am trying to get the comments tag of all the rows, one of them is returning null, I am assuming it is because it has the word "comments" more than once, this is the query I am using: <b>select d.d1_activity_id, dbms_lob.substr(d.bo_data_area, dbms_lob.getlength(d.bo_data_area), 1) as DCLOB, extractValue(xmlparse(contentt d.bo_data_area),'comments' ) AS comnt from d1_activity d where dbms_lob.instr(d.bo_data_area,'comments') > 0 </b> This is an example of the data we have in that column: <code><comments>C2M Test Exchange Meter</comments><instructions>C2M Test Exchange Meter</instructions><replyToExternalSystem>D1YS</replyToExternalSystem><retryDetails><numberOfRetries>0</numberOfRetries><isToDoEntrySuppressed>false</isToDoEntrySuppressed></retryDetails><contactDetails/><connectNewDevice>D1CN</connectNewDevice><oldDeviceId>061840493997</oldDeviceId> <isFieldActivityCompleted>D1NO</isFieldActivityCompleted><isAppointmentNecessary>N</isAppointmentNecessary><appointmentWindow/><comments>C2M Test for M-Exchange Orch to PragmaCad</comments><instructions>C2M Test for M-Exchange Orch to PragmaCad</instructions><isMeasurementFound>D1NO</isMeasurementFound><replyToExternalSystem>D1YS</replyToExternalSystem><retryDetails><numberOfRetries>0</numberOfRetries><isToDoEntrySuppressed>false</isToDoEntrySuppressed></retryDetails><allowParentTransition>true</allowParentTransition><overrideRestrictions>D1NA</overrideRestrictions><fieldWorkSystemAddress><address1>3456 BOWDEN CIR W</address1><address4>15305034560000&gt;&lt;193954</address4><crossStreet>6249</crossStreet><city>JACKSONVILLE</city><county>DUVAL</county><postal>32216</postal><country>USA</country><state>FL</state><geocodeLatitude>0.000000</geocodeLatitude><geocodeLongitude>0.000000</geocodeLongitude></fieldWorkSystemAddress><contactDetails/> <updateSpecificActivity>D1YS</updateSpecificActivity><updateableItems><comments>Editing comments</comments><instructions>Editing comments</instructions><startDateTime>2020-10-27-00.00.00</startDateTime></updateableItems><isAppointmentNecessary>N</isAppointmentNecessary><appointmentWindow/><allowParentTransition>true</allowParentTransition><replyToExternalSystem>D1YS</replyToExternalSystem><retryDetails><numberOfRetries>0</numberOfRetries><isToDoEntrySuppressed>false</isToDoEntrySuppressed></retryDetails> </code>
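
A minimal sketch of one way to return every <comments> value instead of only the first, using XMLTABLE; because the column holds an XML fragment with several top-level elements, it is wrapped in a dummy root element first. Table and column names are taken from the question, and every selected row is assumed to parse as well-formed XML once wrapped:

select d.d1_activity_id,
       x.comment_text
from   d1_activity d,
       xmltable('/r/comments'
                passing xmltype('<r>' || d.bo_data_area || '</r>')
                columns comment_text varchar2(4000) path 'text()') x
where  dbms_lob.instr(d.bo_data_area, 'comments') > 0;

-- use '//comments' instead of '/r/comments' if the nested <comments> inside
-- <updateableItems> should be returned as well
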
Categories: DBA Blogs

IMPDP statement with multiple where and table clause

Tom Kyte - Mon, 2020-11-23 14:46
I have more than 20 tables to restore from an exported dumpfile, so my question is how to import the 20 tables in one impdp statement so my DBA can save time. Here I post two different impdp statements which contain different where clauses and different tables to be imported. <code>impdp username/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=dmp_file_name.dmp LOGFILE=dmp_log_file.txt TABLES=HR.emp_log query=HR.emp_log:\"where dept_id in ( select a.dept_id from HR.remote_data_emp_log a where a.log_date = '31-DEC-2019' ) \" impdp username/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=dmp_file_name.dmp LOGFILE=dmp_log_file.txt TABLES=HR.dept_log query=HR.dept_log:\"where dept_id in ( select a.dept_id from HR.remote_data_dept_log a where a.dept_log_date = '31-DEC-2019' ) \"</code>
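
A minimal sketch of how the two runs could be collapsed into a single impdp call via a parameter file (a hypothetical tables.par), with one table-qualified QUERY clause per table; names and paths are copied from the question and this is untested, so treat it as a starting point:

DIRECTORY=DATA_PUMP_DIR
DUMPFILE=dmp_file_name.dmp
LOGFILE=dmp_log_file.txt
TABLES=HR.emp_log,HR.dept_log
QUERY=HR.emp_log:"where dept_id in (select a.dept_id from HR.remote_data_emp_log a where a.log_date = '31-DEC-2019')"
QUERY=HR.dept_log:"where dept_id in (select a.dept_id from HR.remote_data_dept_log a where a.dept_log_date = '31-DEC-2019')"

Saved as tables.par, the whole set is then imported in one run with: impdp username/password parfile=tables.par
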
Categories: DBA Blogs

Questions about on commit refresh Fast MVs

Tom Kyte - Mon, 2020-11-23 14:46
Team: Here is my testcase used for the below demo. this was from 18c database. Questions: Q1 - why this error "ORA-10980" is reported in this trace file, what was the problem with my testcase ? Q2 - all three delete statements having the predicate like " where rid1/rid2 in (...) " is not using the index on either of the columns why ? Q3 - please see the " insert into T1_T2_MV..." where it joins mlog$_t2 with T2 - with the hint HASH_SJ - optimizer is still not accessing T2 based on ROWID on nested loops for card=2, instead it make use of HASH join here. what else could be done here to avoid the full scan on T2. <code>create table t1 as select a.*,rownum r from all_objects a, all_users where rownum <=2000000; create table t2 as select * from t1; alter table t1 add constraint t1_pk primary key(r); alter table t2 add constraint t2_pk primary key(r); create materialized view log on t1 with primary key,rowid,sequence (object_type, object_name,created,last_ddl_time,timestamp,status) including new values; create materialized view log on t2 with primary key,rowid,sequence (object_type, object_name,created,last_ddl_time,timestamp,status) including new values; create materialized view t1_t2_mv build immediate refresh fast on demand enable query rewrite as select t1.object_type, t1.object_name, t1.created, t1.last_ddl_time, t1.timestamp, t1.status, 1 as umarker, t1.rowid as rid1, t1.rowid as rid2 from t1 where owner in ('PUBLIC','APEX_200100','ORDSYS','MDSYS','XDB','SYSTEM','CTXSYS') union all select t1.object_type, t1.object_name, t1.created, t2.last_ddl_time, t2.timestamp, t1.status, 2 as umarker, t1.rowid as rid1, t2.rowid as rid2 from t1 , t2 where t1.r = t2.r and t1.owner ='SYS'; create index t1_t2_mv_idx_01 on t1_t2_mv( rid1 ) nologging; create index t1_t2_mv_idx_02 on t1_t2_mv( rid2 ) nologging; update t2 set object_type = lower(object_type) where rownum =1; delete from t1 where rownum <=5; commit; exec dbms_stats.gather_table_stats(user,'mlog$_t1',no_invalidate=>false); exec dbms_stats.gather_table_stats(user,'mlog$_t2',no_invalidate=>false); demo@XEPDB1> select count(*) from mlog$_t1; COUNT(*) ---------- 5 demo@XEPDB1> select count(*) from mlog$_t2; COUNT(*) ---------- 2 demo@XEPDB1> @tkfilename.sql D:\APP\VNAMEIT\ORA18C_XE\diag\rdbms\xe\xe\trace\xe_ora_8468.trc demo@XEPDB1> @tktrace.sql PL/SQL procedure successfully completed. demo@XEPDB1> set timing on demo@XEPDB1> exec dbms_mview.refresh('T1_T2_MV','F'); PL/SQL procedure successfully completed. Elapsed: 00:02:19.47 demo@XEPDB1> exit and the TKPROF show's this: The following statements encountered a error during parse: select t1.object_type, t1.object_name, t1.created, t1.last_ddl_time, t1.timestamp, t1.status, 1 as umarker, t1.rowid as rid1, t1.rowid as rid2 from t1 where owner in ('PUBLIC','APEX_200100','ORDSYS','MDSYS','XDB','SYSTEM','CTXSYS') union all select t1.object_type, t1.object_name, t1.created, t2.last_ddl_time, t2.timestamp, t1.status, 2 as umarker, t1.rowid as rid1, t2.rowid as rid2 from t1 , t2 where t1.r = t2.r and t1.owner ='SYS' Error encountered: ORA-10980 -------------------------------------------------------------------------------- select t1.object_type, t1.object_name, t1.created, t1.last_ddl_time, t1.timestamp, t1.status, 1 as umarker, t1.rowid as rid1, t1.rowid as rid2 from t1 where owner in ('PUBLIC','APEX_200100','ORDSYS','MDSYS','XDB','SYSTEM','CTXSYS') union all select t1.object_type, t1.object_name, t1.created, t2.last_ddl_time, t2.timestamp, t1.status, 2 as umarker, t1...
Categories: DBA Blogs

Parallel Window Consolidator Calls The Stops

Dominic Brooks - Mon, 2020-11-23 04:59

Sharing observations of a performance issue arising out of testing a 19.6 upgrade from 11.2.0.4 (the bug possibly dates from 12c onwards, judging by the fixes which work).

Sharing this one in particular because, going by anecdotal evidence (forums, blogs, support), it doesn't seem that common, it doesn't appear to be fixed yet, and progress with support has been disappointing, even though the circumstances don't seem particularly niche.

We’ve had a few issues with a number of parallel queries just “hanging” forever.

Issue seems to be related to WINDOW CONSOLIDATOR and the parallel distribution method.

Haven’t been able to find a good matching bug via Oracle Support (nor happy with the progress of SR with them) but found a good match on a Lothar Flatz blog post which led me to the PQ_DISTRIBUTE_WINDOW as cause.

The SQL contains parallelism plus a join to a view, and the view contains an analytic function.
The SQL executes in under a second "normally".

Without a fix, the SQL will just hang for a long time. The longest I’ve left it is 3 days but given the wait states of the sessions involved, there’s no reason to think it would stop of its own accord.

Jumping ahead to the workarounds which work, any of the following (sketched as actual statements just after this list):

  • Turn off fix_control 13345888 via alter session or opt_param
  • Turn off “_adaptive_window_consolidator_enabled” via alter session/system or opt_param
  • Hint a PQ distribution method other than 2. 3 doesn’t work here so 1 is the other option – PQ_DISTRIBUTE_WINDOW(@query_block 1)
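
A rough sketch of those options as statements; the dummy queries are only there to show hint placement, and the query block name is whatever your own plan reports, so treat these as templates:

-- 1. turn off the fix control for the session
alter session set "_fix_control" = '13345888:0';

-- 2. or turn off adaptive window consolidation
alter session set "_adaptive_window_consolidator_enabled" = false;

-- 3. or keep it at statement level: opt_param for either parameter, or force distribution method 1
select /*+ opt_param('_fix_control' '13345888:0') */ count(*) from dual;
select /*+ PQ_DISTRIBUTE_WINDOW(@"SEL$1" 1) */ count(*) from dual;
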

Here we see some evidence from just now, running for nearly 30 minutes:

select s.inst_id i,s.sid,s.serial#,s.module
 ,      s.blocking_session blks,s.blocking_instance bi,s.final_blocking_session f_blk_s, s.final_blocking_instance fbi
 ,      CASE WHEN state != 'WAITING' THEN 'WORKING' ELSE 'WAITING' END AS state
 ,      CASE WHEN state != 'WAITING' THEN 'On CPU / runqueue' ELSE event END AS sw_event, s.seconds_in_wait wait
 ,      s.sql_id,s.sql_exec_id,to_char(s.sql_exec_start,'DD-MON-YYYY HH24:MI:SS') exec_start
 from   gv$session    s
 where  sql_id = '5xyrp7v4djagv';

         I        SID    SERIAL# MODULE                     BLKS         BI    F_BLK_S        FBI STATE   SW_EVENT                        WAIT SQL_ID        SQL_EXEC_ID EXEC_START                   
---------- ---------- ---------- -------------------- ---------- ---------- ---------- ---------- ------- ------------------------- ---------- ------------- ----------- -----------------------------
         2       2805      15947 SQL Developer                                                    WAITING PX Deq: Execution Msg           1492 5xyrp7v4djagv    33554434 26-OCT-2020 12:18:59         
         2       3095       3608 SQL Developer                                                    WAITING PX Deq: Execution Msg           1492 5xyrp7v4djagv    33554434 26-OCT-2020 12:18:59         
         2       3367      28066 SQL Developer                                                    WAITING PX Deq: Execution Msg           1492 5xyrp7v4djagv    33554434 26-OCT-2020 12:18:59         
         2       5610       8452 SQL Developer                                                    WAITING PX Deq: Table Q Normal          1492 5xyrp7v4djagv    33554434 26-OCT-2020 12:18:59         
         2       5885      54481 SQL Developer              9672          2       9672          2 WAITING PX Deq: Execute Reply           1492 5xyrp7v4djagv    33554434 26-OCT-2020 12:18:59         
         2       8828      57832 SQL Developer                                                    WAITING PX Deq: Execution Msg           1492 5xyrp7v4djagv    33554434 26-OCT-2020 12:18:59         
         2       9111      37143 SQL Developer                                                    WAITING PX Deq: Execution Msg           1492 5xyrp7v4djagv    33554434 26-OCT-2020 12:18:59         
         2       9383      50792 SQL Developer                                                    WAITING PX Deq: Execution Msg           1492 5xyrp7v4djagv    33554434 26-OCT-2020 12:18:59         
         2       9672      29993 SQL Developer                                                    WAITING PX Deq: Execution Msg           1492 5xyrp7v4djagv    33554434 26-OCT-2020 12:18:59         

Real-time SQL Monitoring report looks per below (note Duration compared to any other progress/metrics)… note that in later versions of Oracle, RTSM has a habit of timing out and reporting DONE(ERROR) even though the SQL is still going:



 Global Information
 ------------------------------
 Status              :  EXECUTING                  
 Instance ID         :  2                          
 SQL ID              :  5xyrp7v4djagv              
 SQL Execution ID    :  33554434                   
 Execution Started   :  10/26/2020 12:18:59        
 First Refresh Time  :  10/26/2020 12:18:59        
 Last Refresh Time   :  10/26/2020 12:18:59        
 Duration            :  1592s                      
 Module/Action       :  SQL Developer/-            
 Service             :  LNTDH8U.UK.DB.COM          
 Program             :  SQL Developer              


Global Stats
=========================================
| Elapsed |   Cpu   |  Other   | Buffer |
| Time(s) | Time(s) | Waits(s) |  Gets  |
=========================================
|    0.00 |    0.00 |     0.00 |      3 |
=========================================

Parallel Execution Details (DOP=4 , Servers Allocated=8)
==========================================================================================
|      Name      | Type  | Server# | Elapsed |   Cpu   |  Other   | Buffer | Wait Events |
|                |       |         | Time(s) | Time(s) | Waits(s) |  Gets  | (sample #)  |
==========================================================================================
| PX Coordinator | QC    |         |    0.00 |    0.00 |     0.00 |      3 |             |
| p000           | Set 1 |       1 |         |         |          |        |             |
| p001           | Set 1 |       2 |         |         |          |        |             |
| p002           | Set 1 |       3 |         |         |          |        |             |
| p003           | Set 1 |       4 |         |         |          |        |             |
| p004           | Set 2 |       1 |         |         |          |        |             |
| p005           | Set 2 |       2 |         |         |          |        |             |
| p006           | Set 2 |       3 |         |         |          |        |             |
| p007           | Set 2 |       4 |         |         |          |        |             |
==========================================================================================

SQL Plan Monitoring Details (Plan Hash Value=3844894891)
===============================================================================================================================================================================================
| Id   |                           Operation                           |              Name              |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Activity | Activity Detail |
|      |                                                               |                                | (Estim) |      | Active(s) | Active |       | (Actual) |   (%)    |   (# samples)   |
===============================================================================================================================================================================================
|    0 | SELECT STATEMENT                                              |                                |         |      |           |        |     1 |          |          |                 |
| -> 1 |   PX COORDINATOR                                              |                                |         |      |         1 |     +0 |     1 |        0 |   100.00 | Cpu (1)         |
|    2 |    PX SEND QC (RANDOM)                                        | :TQ20006                       |      2M |  445 |           |        |       |          |          |                 |
|    3 |     HASH JOIN                                                 |                                |      2M |  445 |           |        |       |          |          |                 |
|    4 |      PART JOIN FILTER CREATE                                  | :BF0000                        |     232 |  441 |           |        |       |          |          |                 |
|    5 |       PX RECEIVE                                              |                                |     232 |  441 |           |        |       |          |          |                 |
|    6 |        PX SEND BROADCAST                                      | :TQ20005                       |     232 |  441 |           |        |       |          |          |                 |
|    7 |         VIEW                                                  |                                |     232 |  441 |           |        |       |          |          |                 |
|    8 |          WINDOW CONSOLIDATOR BUFFER                           |                                |     232 |  441 |           |        |       |          |          |                 |
|    9 |           PX RECEIVE                                          |                                |     232 |  441 |           |        |       |          |          |                 |
|   10 |            PX SEND HASH                                       | :TQ20004                       |     232 |  441 |           |        |       |          |          |                 |
|   11 |             WINDOW BUFFER                                     |                                |     232 |  441 |           |        |       |          |          |                 |
|   12 |              HASH JOIN                                        |                                |     232 |  441 |           |        |       |          |          |                 |
|   13 |               NESTED LOOPS                                    |                                |     505 |    4 |           |        |       |          |          |                 |
|   14 |                TABLE ACCESS BY GLOBAL INDEX ROWID             | D_B                            |       1 |    3 |           |        |       |          |          |                 |
|   15 |                 PX RECEIVE                                    |                                |       1 |    2 |           |        |       |          |          |                 |
|   16 |                  PX SEND HASH (BLOCK ADDRESS)                 | :TQ20002                       |       1 |    2 |           |        |       |          |          |                 |
|   17 |                   PX SELECTOR                                 |                                |         |      |           |        |       |          |          |                 |
|   18 |                    INDEX UNIQUE SCAN                          | D_B_PK                         |       1 |    2 |           |        |       |          |          |                 |
|   19 |                INDEX FULL SCAN                                | D_R_UK01                       |     505 |    1 |           |        |       |          |          |                 |
|   20 |               PX RECEIVE                                      |                                |     108 |  437 |           |        |       |          |          |                 |
|   21 |                PX SEND BROADCAST                              | :TQ20003                       |     108 |  437 |           |        |       |          |          |                 |
|   22 |                 VIEW                                          | V_V2                           |     108 |  437 |           |        |       |          |          |                 |
|   23 |                  WINDOW CONSOLIDATOR BUFFER                   |                                |     108 |  437 |           |        |       |          |          |                 |
|   24 |                   PX RECEIVE                                  |                                |     108 |  437 |           |        |       |          |          |                 |
|   25 |                    PX SEND HASH                               | :TQ20001                       |     108 |  437 |           |        |       |          |          |                 |
|   26 |                     WINDOW SORT                               |                                |     108 |  437 |           |        |       |          |          |                 |
|   27 |                      NESTED LOOPS                             |                                |     108 |  436 |           |        |       |          |          |                 |
|   28 |                       NESTED LOOPS                            |                                |     108 |    4 |           |        |       |          |          |                 |
|   29 |                        TABLE ACCESS BY GLOBAL INDEX ROWID     | D_B                            |       1 |    3 |           |        |       |          |          |                 |
|   30 |                         PX RECEIVE                            |                                |       1 |    2 |           |        |       |          |          |                 |
|   31 |                          PX SEND HASH (BLOCK ADDRESS)         | :TQ20000                       |       1 |    2 |           |        |       |          |          |                 |
|   32 |                           PX SELECTOR                         |                                |         |      |           |        |       |          |          |                 |
|   33 |                            INDEX UNIQUE SCAN                  | D_B_PK                         |       1 |    2 |           |        |       |          |          |                 |
|   34 |                        INDEX RANGE SCAN                       | D_BS_UK01                      |     108 |    1 |           |        |       |          |          |                 |
|   35 |                       PX COORDINATOR                          |                                |         |      |           |        |       |          |          |                 |
|   36 |                        PX SEND QC (RANDOM)                    | :TQ10002                       |       1 |    4 |           |        |       |          |          |                 |
|   37 |                         BUFFER SORT                           |                                |      2M |      |           |        |       |          |          |                 |
|   38 |                          VIEW                                 | V_V1                           |       1 |    4 |           |        |       |          |          |                 |
|   39 |                           UNION ALL PUSHED PREDICATE          |                                |         |      |           |        |       |          |          |                 |
|   40 |                            TABLE ACCESS BY GLOBAL INDEX ROWID | D_S                            |       1 |    2 |           |        |       |          |          |                 |
|   41 |                             BUFFER SORT                       |                                |         |      |           |        |       |          |          |                 |
|   42 |                              PX RECEIVE                       |                                |       1 |    1 |           |        |       |          |          |                 |
|   43 |                               PX SEND HASH (BLOCK ADDRESS)    | :TQ10000                       |       1 |    1 |           |        |       |          |          |                 |
|   44 |                                PX SELECTOR                    |                                |         |      |           |        |       |          |          |                 |
|   45 |                                 INDEX UNIQUE SCAN             | D_S_PK                         |       1 |    1 |           |        |       |          |          |                 |
|   46 |                            TABLE ACCESS BY INDEX ROWID        | D_S                            |       1 |    2 |           |        |       |          |          |                 |
|   47 |                             BUFFER SORT                       |                                |         |      |           |        |       |          |          |                 |
|   48 |                              PX RECEIVE                       |                                |       1 |    1 |           |        |       |          |          |                 |
|   49 |                               PX SEND HASH (BLOCK ADDRESS)    | :TQ10001                       |       1 |    1 |           |        |       |          |          |                 |
|   50 |                                PX SELECTOR                    |                                |         |      |           |        |       |          |          |                 |
|   51 |                                 INDEX UNIQUE SCAN             | D_S                            |       1 |    1 |           |        |       |          |          |                 |
|   52 |      PX BLOCK ITERATOR ADAPTIVE                               |                                |    8255 |    3 |           |        |       |          |          |                 |
|   53 |       TABLE ACCESS STORAGE FULL                               | F_FACT                         |    8255 |    3 |           |        |       |          |          |                 |
===============================================================================================================================================================================================

ASH says it's not doing anything (as we'd expect, and consistent with the RTSM report above).


select count(*), min(sample_time), max(sample_time), sysdate from gv$active_session_history where sql_id = '5xyrp7v4djagv';

  COUNT(*) MIN(SAMPLE_TIME)             MAX(SAMPLE_TIME)             SYSDATE             
---------- ---------------------------- ---------------------------- --------------------
         1 26-OCT-20 12.18.59.828000000 26-OCT-20 12.18.59.828000000 26-OCT-2020 12:50:29

Miscellaneous Slave Info


select decode(px.qcinst_id, NULL, 'QC',
              ' - ' || lower(substr(pp.server_name, length(pp.server_name) - 4, 4)))  "Username",
       decode(px.qcinst_id, NULL, 'QC', '(Slave)')                                    "QC/Slave",
       to_char(px.server_set)                                                         "SlaveSet",
       to_char(s.sid)                                                                 "SID",
       to_char(px.inst_id)                                                            "Slave INST",
       decode(px.qcinst_id, NULL, to_char(s.sid), px.qcsid)                           "QC SID",
       to_char(px.qcinst_id)                                                          "QC INST",
       px.req_degree                                                                  "Req. DOP",
       px.degree                                                                      "Actual DOP"
from   gv$px_session px,
       gv$session    s,
       gv$px_process pp
where  px.sid     = s.sid (+)
and    px.serial# = s.serial# (+)
and    px.inst_id = s.inst_id (+)
and    px.sid     = pp.sid (+)
and    px.serial# = pp.serial# (+)
and    s.sql_id   = '5xyrp7v4djagv'
order by 6, 1 desc
/

Usernam QC/Slav SlaveSet   SID       Slave INST QC SID     QC INST      Req. DOP  Actual DOP
------- ------- ---------- --------- ---------- ---------- -------- ------------- -----------
QC      QC                 5885       2          5885         
 - p007 (Slave) 2          9672       2          5885       2                   4          4
 - p006 (Slave) 2          3367       2          5885       2                   4          4
 - p005 (Slave) 2          9395       2          5885       2                   4          4
 - p004 (Slave) 2          3095       2          5885       2                   4          4
 - p003 (Slave) 1          9111       2          5885       2                   4          4
 - p002 (Slave) 1          2805       2          5885       2                   4          4
 - p001 (Slave) 1          8828       2          5885       2                   4          4
 - p000 (Slave) 1          5610       2          5885       2                   4          4

Announcement: New Oracle Webinars Scheduled For February 2021 !!

Richard Foote - Sun, 2020-11-22 22:46
  After much badgering from a number of you (you know who you all are), I’m pleased to finally announce the scheduling of 2 new webinars for February 2021 !! As usual, places are very strictly limited as I only run small classes to give every attendee the opportunity to get the most from the […]
Categories: DBA Blogs

Configuring Transparent Data Encryption -- 1 : For a Tablespace

Hemant K Chitale - Sat, 2020-11-21 05:49

 Oracle allows TDE (Transparent Data Encryption) for specific (i.e. selected) columns or a full Tablespace.

Here is a quick demo of TDE for a Tablespace.

First, I set up a target tablespace with some data.



SQL> connect hemant/hemant
Connected.
SQL> create tablespace TDE_TARGET_TBS datafile '/opt/oracle/oradata/HEMANT/TDE_TARGET_TBS.dbf' size 100M;

Tablespace created.

SQL>
SQL> create table TDE_TARGET_TABLE
2 (id_col number,
3 data_col varchar2(50))
4 tablespace TDE_TARGET_TBS
5 /

Table created.

SQL> insert into TDE_TARGET_TABLE
2 select rownum,
3 'MY DATA CONTENT : ' || rownum
4 from dual
5 connect by level < 1001
6 /

1000 rows created.

SQL> commit;

Commit complete.

SQL>


oracle19c>dd if=/opt/oracle/oradata/HEMANT/TDE_TARGET_TBS.dbf ibs=8192 skip=128 > /tmp/dump_of_TDE_TARGET_TBS.TXT
12673+0 records in
202768+0 records out
103817216 bytes (104 MB) copied, 1.10239 s, 94.2 MB/s
oracle19c>
oracle19c>strings -a /tmp/dump_of_TDE_TARGET_TBS.TXT |grep 'CONTENT' |head -5
MY DATA CONTENT : 944,
MY DATA CONTENT : 945,
MY DATA CONTENT : 946,
MY DATA CONTENT : 947,
MY DATA CONTENT : 948,
oracle19c>strings -a /tmp/dump_of_TDE_TARGET_TBS.TXT |grep 'CONTENT' | wc -l
1000
oracle19c>


I inserted 1000 rows with the text "MY DATA CONTENT" and it is visible as plain-text when I dump the datafile.   

Note how not all the inserted rows appear in physical order -- the "last" 57 "records" (i.e. rows) seem to appear before the first "record" (row), as I show in this video recording of a viewing of the dump. Never assume physical ordering of data in a datafile or of retrieved output (to order the results of a SELECT statement, *always* use the ORDER BY clause).






So, I now intend to encrypt the tablespace.

Step 1 : Specify the ENCRYPTION WALLET LOCATION
In earlier releases, this is specified in the sqlnet.ora file like this :

ENCRYPTION_WALLET_LOCATION=
(SOURCE=
(METHOD=FILE)
(METHOD_DATA=
(DIRECTORY=/home/oracle/wallet))) -- or this could be any other folder, or defaulting to $ORACLE_BASE/admin/db_unique_name/wallet


However, in 19c, Oracle recommends using the KEYSTORE_CONFIGURATION attribute of the TDE_CONFIGURATION initialization parameter after setting the WALLET_ROOT.


SQL> show parameter tde

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
one_step_plugin_for_pdb_with_tde boolean FALSE
tde_configuration string
SQL> show parameter wallet

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
ssl_wallet string
wallet_root string
SQL>
SQL> alter system set wallet_root='/opt/oracle/product/19c/dbhome_1/TDE_WALLETS' scope=SPFILE;

System altered.

SQL>
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1207958960 bytes
Fixed Size 8895920 bytes
Variable Size 318767104 bytes
Database Buffers 872415232 bytes
Redo Buffers 7880704 bytes
Database mounted.
Database opened.
SQL>
SQL> show parameter wallet

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
ssl_wallet string
wallet_root string /opt/oracle/product/19c/dbhome
_1/TDE_WALLETS
SQL>
SQL> alter system set tde_configuration='KEYSTORE_CONFIGURATION=FILE' scope=SPFILE;

System altered.

SQL>
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1207958960 bytes
Fixed Size 8895920 bytes
Variable Size 318767104 bytes
Database Buffers 872415232 bytes
Redo Buffers 7880704 bytes
Database mounted.
Database opened.
SQL> show parameter wallet

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
ssl_wallet string
wallet_root string /opt/oracle/product/19c/dbhome
_1/TDE_WALLETS
SQL> show parameter tde

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
one_step_plugin_for_pdb_with_tde boolean FALSE
tde_configuration string KEYSTORE_CONFIGURATION=FILE
SQL>


In the case of this database, since I had not configured these parameters earlier, I had to do a manual restart for them to take effect. (Note: "WALLET_ROOT" must already be configured before "TDE_CONFIGURATION" can be set, which is why I had to do an additional restart between setting the two parameters.)
I have deliberately configured WALLET_ROOT to a non-default/standard location.


Step 2 : Create the KEYSTORE (under WALLET_ROOT)

This is where I actually create the Wallet. The syntax specifies the KEYSTORE location, but it can default to WALLET_ROOT since I have already defined it. I can also create an Auto-Login Keystore.

SQL> administer key management create keystore identified by mysecretpassword;

keystore altered.

SQL> !ls -l /opt/oracle/product/19c/dbhome_1/TDE_WALLETS/tde
total 4
-rw-------. 1 oracle oinstall 2555 Nov 21 18:38 ewallet.p12

SQL>
SQL> administer key management create LOCAL auto_login keystore
2 from keystore '/opt/oracle/product/19c/dbhome_1/TDE_WALLETS/tde'
3 identified by mysecretpassword
4 /

keystore altered.

SQL> !ls -l /opt/oracle/product/19c/dbhome_1/TDE_WALLETS/tde
total 8
-rw-------. 1 oracle oinstall 2600 Nov 21 18:49 cwallet.sso
-rw-------. 1 oracle oinstall 2555 Nov 21 18:38 ewallet.p12

SQL>
SQL> select * from v$encryption_wallet;

WRL_TYPE
--------------------
WRL_PARAMETER
------------------------------------------------------------------------------------------------------------------------------------
STATUS WALLET_TYPE WALLET_OR KEYSTORE FULLY_BAC CON_ID
------------------------------ -------------------- --------- -------- --------- ----------
FILE
/opt/oracle/product/19c/dbhome_1/TDE_WALLETS/tde/
OPEN_NO_MASTER_KEY LOCAL_AUTOLOGIN SINGLE NONE UNDEFINED 0


SQL>


In this case, "ewallet.p12" is the Password Protected Keystore and "cwallet.sso" is the Auto-Login Keystore (created LOCALly only, not for remote servers/clients).


Step 3 : OPEN the Keystore  (only if it is NOT already OPEN) 

I can see that the Keystore is already OPEN (from the query on v$encryption_wallet) but I could attempt OPENing it with :


SQL> administer key management set keystore open
2 identified by mysecretpassword
3 /
administer key management set keystore open
*
ERROR at line 1:
ORA-28354: Encryption wallet, auto login wallet, or HSM is already open


SQL>
SQL> !oerr ora 28354
28354, 0000, "Encryption wallet, auto login wallet, or HSM is already open"
// *Cause: Encryption wallet, auto login wallet, or HSM was already opened.
// *Action: None.
//

SQL> select * from v$encryption_wallet;

WRL_TYPE
--------------------
WRL_PARAMETER
------------------------------------------------------------------------------------------------------------------------------------
STATUS WALLET_TYPE WALLET_OR KEYSTORE FULLY_BAC CON_ID
------------------------------ -------------------- --------- -------- --------- ----------
FILE
/opt/oracle/product/19c/dbhome_1/TDE_WALLETS/tde/
OPEN_NO_MASTER_KEY LOCAL_AUTOLOGIN SINGLE NONE UNDEFINED 0


SQL>


Step 4 : Setup the Master Encryption Key

Now, I set up the Master Key (and also back up the existing key file).


SQL> administer key management set key
2 using tag 'For_Tablespace_TDE'
3 force keystore -- because I am using LOCAL_AUTOLOGIN
4 identified by mysecretpassword
5 with backup using 'tde_key_backup'
6 /

keystore altered.

SQL>
SQL> !ls -l /opt/oracle/product/19c/dbhome_1/TDE_WALLETS/tde
total 20
-rw-------. 1 oracle oinstall 4232 Nov 21 19:04 cwallet.sso
-rw-------. 1 oracle oinstall 2555 Nov 21 19:04 ewallet_2020112111043216_tde_key_backup.p12
-rw-------. 1 oracle oinstall 4171 Nov 21 19:04 ewallet.p12

SQL>
SQL> select * from v$encryption_wallet;

WRL_TYPE
--------------------
WRL_PARAMETER
------------------------------------------------------------------------------------------------------------------------------------
STATUS WALLET_TYPE WALLET_OR KEYSTORE FULLY_BAC CON_ID
------------------------------ -------------------- --------- -------- --------- ----------
FILE
/opt/oracle/product/19c/dbhome_1/TDE_WALLETS/tde/
OPEN LOCAL_AUTOLOGIN SINGLE NONE NO 0


SQL>
SQL> select key_id, creation_time, keystore_type, tag from v$encryption_keys;

KEY_ID
------------------------------------------------------------------------------
CREATION_TIME KEYSTORE_TYPE
--------------------------------------------------------------------------- -----------------
TAG
------------------------------------------------------------------------------------------------------------------------------------
AaHRyuP8yE+ivzI/hZHcOdoAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
21-NOV-20 07.04.32.347090 PM +08:00 SOFTWARE KEYSTORE
For_Tablespace_TDE


SQL>


Now I am ready to encrypt my Tablespace.


Step 5 :  Online Encryption of Tablespace
This is a 19c feature; earlier versions required Offline Encryption.
This method creates a new datafile containing the encrypted data.

SQL> alter tablespace TDE_TARGET_TBS
2 encryption online
3 using 'AES192'
4 encrypt file_name_convert = ('/opt/oracle/oradata/HEMANT/TDE_TARGET_TBS.dbf','/opt/oracle/oradata/HEMANT/TDE_TARGET_TBS_encrypted.dbf')
5 /

Tablespace altered.

SQL>

oracle19c>pwd
/opt/oracle/oradata/HEMANT
oracle19c>ls -l TDE*
-rw-r-----. 1 oracle oinstall 104865792 Nov 21 19:42 TDE_TARGET_TBS_encrypted.dbf
oracle19c>
oracle19c>dd if=/opt/oracle/oradata/HEMANT/TDE_TARGET_TBS_encrypted.dbf ibs=8192 skip=128 > /tmp/dump_of_TDE_TARGET_TBS_encrypted.TXT
12673+0 records in
202768+0 records out
103817216 bytes (104 MB) copied, 0.414517 s, 250 MB/s
oracle19c>
oracle19c>strings -a /tmp/dump_of_TDE_TARGET_TBS_encrypted.TXT |grep 'CONTENT' |head -5
oracle19c>

SQL> l
1 select t.name, e.ts#, e.encryptionalg, e.encryptedts, e.key_version, e.status
2 from v$tablespace t, v$encrypted_tablespaces e
3* where t.ts#=e.ts#
SQL> /

NAME TS# ENCRYPT ENC KEY_VERSION STATUS
------------------------------ ---------- ------- --- ----------- ----------
TDE_TARGET_TBS 8 AES192 YES 1 NORMAL

SQL>



Oracle has replaced the TDE_TARGET_TBS.dbf file with the TDE_TARGET_TBS_encrypted.dbf file. The new file does NOT contain any plain-text "CONTENT" values.

For subsequent Tablespaces, Steps 1 to 4 would not be required.
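
Once the keystore and master key exist, encrypting further tablespaces is just the SQL. For example, a brand new tablespace can be created encrypted from the start; this is only a sketch, with the name, path and size purely illustrative:

-- Steps 1 to 4 (wallet root, TDE configuration, keystore, master key) already done;
-- only the tablespace-level commands remain.
create tablespace TDE_ANOTHER_TBS
  datafile '/opt/oracle/oradata/HEMANT/TDE_ANOTHER_TBS.dbf' size 100M
  encryption using 'AES192'
  default storage (encrypt);

-- and an existing tablespace needs only the Step 5 online conversion:
-- alter tablespace SOME_TBS encryption online using 'AES192' encrypt
--   file_name_convert = ('SOME_TBS.dbf', 'SOME_TBS_encrypted.dbf');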



Categories: DBA Blogs

Upgrade DB from 11.2 to 19.8 Using dbua silent

Michael Dinh - Fri, 2020-11-20 19:58

There was a debate as to whether the dbua parameter -useGRP UPGRADE19C is necessary, where UPGRADE19C is the name of the guaranteed restore point created prior to upgrading the database.

Although it's not necessary, it is beneficial because it allows dbua to automate the restore process.

When -useGRP UPGRADE19C is used, restore.sh is created to restore the database using the specified guaranteed restore point.

If -useGRP is not used, then dbua will not create the restore.sh script. While I have not personally tested this, I did check for the restore.sh script after a recent upgrade and did not find one.

Why not use dbua to its full potential?

DEMO:

--- 11.2 database:
 [oracle@ol7-112-dg1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
 There are no Interim patches installed in this Oracle Home.

 [oracle@ol7-112-dg1 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory|grep Database
 Oracle Database 11g                                                  11.2.0.4.0
 [oracle@ol7-112-dg1 ~]$

--- 19c database:
 [oracle@ol7-112-dg1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
 31305087;OCW RELEASE UPDATE 19.8.0.0.0 (31305087)
 31281355;Database Release Update : 19.8.0.0.200714 (31281355)
 OPatch succeeded.
 [oracle@ol7-112-dg1 ~]$

--- Copy emremove.sql from 19c to 11.2 DB home:
 cp -fv /u01/app/oracle/product/19.3.0.0/db_1/rdbms/admin/emremove.sql /u01/app/oracle/product/11.2.0.4/dbhome_1/rdbms/admin

--- Remove EM and OLAP:
 set echo on serveroutput on
 @?/rdbms/admin/emremove.sql
 @?/olap/admin/catnoamd.sql
 @?/rdbms/admin/utlrp.sql

--- Create guarantee restore point UPGRADE19C:
 [oracle@ol7-112-dg1 ~]$ sqlplus / as sysdba
 SQL*Plus: Release 11.2.0.4.0 Production on Sat Nov 21 00:23:56 2020
 Copyright (c) 1982, 2013, Oracle.  All rights reserved.
 Connected to:
 Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
 With the Partitioning, OLAP, Data Mining and Real Application Testing options

 SQL> @/sf_working/sql/restore_point_upgrade19c.sql
 SQL> drop restore point UPGRADE19C;
 drop restore point UPGRADE19C
 *
 ERROR at line 1:
 ORA-38780: Restore point 'UPGRADE19C' does not exist.

 SQL> alter system set db_recovery_file_dest_size=1m scope=both sid='*';
 System altered.

 SQL> alter system set db_recovery_file_dest_size=9000m scope=both sid='*';
 System altered.

 SQL> select sum(flashback_size)/1024/1024/1024 gb from v$flashback_database_log;
         GB
"----------"

 SQL> select flashback_on from v$database;
 FLASHBACK_ON
"----------"
 NO

 SQL> create restore point UPGRADE19C guarantee flashback database;
 Restore point created.

 SQL> select flashback_on from v$database;
 FLASHBACK_ON
"------------------"
 RESTORE POINT ONLY

 SQL> select name, time, guarantee_flashback_database from v$restore_point order by 1,2;
 NAME                           TIME                                     GUA
"------------------------------ ---------------------------------------- ---"
 UPGRADE19C                     21-NOV-20 12.24.19.000000000 AM          YES

 SQL> select sum(flashback_size)/1024/1024/1024 gb from v$flashback_database_log;
         GB
 .048828125

 SQL> exit
 Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
 With the Partitioning, OLAP, Data Mining and Real Application Testing options
 [oracle@ol7-112-dg1 ~]$

--- Upgrade DB using dbua silent: -useGRP UPGRADE19C
 [oracle@ol7-112-dg1 ~]$ echo $ORACLE_SID $ORACLE_HOME
 testdb /u01/app/oracle/product/11.2.0.4/dbhome_1
 [oracle@ol7-112-dg1 ~]$ ./run_dbua.sh

 /u01/app/oracle/product/19.3.0.0/db_1/bin/dbua -silent \
 -sid testdb \
 -oracleHome /u01/app/oracle/product/11.2.0.4/dbhome_1 \
 -useGRP UPGRADE19C \
 -recompile_invalid_objects TRUE \
 -upgradeTimezone TRUE \
 -emConfiguration NONE \
 -skipListenersMigration \
 -createListener FALSE \
 -upgrade_parallelism 8 
 Logs directory:  /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM
 Performing Pre-Upgrade Checks…
 PRE- and POST- FIXUP ACTIONS
 /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/upgrade.xml
 /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/preupgrade_fixups.sql
 /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/postupgrade_fixups.sql
 [WARNING] [DBT-20060] One or more of the pre-upgrade checks on the database have resulted into warning conditions that require manual intervention. It is recommended that you address these warnings as suggested before proceeding.
    ACTION: Refer to the pre-upgrade results location for details: /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb
 12% complete
 15% complete
 25% complete
 77% complete
 87% complete
 Database upgrade has been completed successfully, and the database is ready to use.
 100% complete
 [oracle@ol7-112-dg1 ~]$

--- DBUA Logs:
 [oracle@ol7-112-dg1 ~]$ ls -l /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/
 total 76340
 -rw-r-----. 1 oracle oinstall        0 Nov 21 00:27 Backup.log
 -rw-r-----. 1 oracle oinstall 48860899 Nov 21 01:03 catupgrd0.log
 -rw-r-----. 1 oracle oinstall  6740107 Nov 21 01:03 catupgrd1.log
 -rw-r-----. 1 oracle oinstall  3759694 Nov 21 01:03 catupgrd2.log
 -rw-r-----. 1 oracle oinstall  5391694 Nov 21 01:03 catupgrd3.log
 -rw-r-----. 1 oracle oinstall  2974948 Nov 21 01:03 catupgrd4.log
 -rw-r-----. 1 oracle oinstall  2127696 Nov 21 01:03 catupgrd5.log
 -rw-r-----. 1 oracle oinstall  3975631 Nov 21 01:03 catupgrd6.log
 -rw-r-----. 1 oracle oinstall  3411705 Nov 21 01:03 catupgrd7.log
 -rw-------. 1 oracle oinstall      528 Nov 21 00:28 catupgrd_catcon_7841.lst
 -rw-r-----. 1 oracle oinstall        0 Nov 21 00:54 catupgrd_datapatch_upgrade.err
 -rw-r-----. 1 oracle oinstall     1306 Nov 21 01:01 catupgrd_datapatch_upgrade.log
 -rw-r-----. 1 oracle oinstall    38676 Nov 21 01:03 catupgrd_stderr.log
 -rw-r-----. 1 oracle oinstall        1 Nov 21 00:27 checksBuffer.tmp
 -rw-r-----. 1 oracle oinstall    41134 Nov 21 00:27 components.properties
 -rwxr-xr-x. 1 oracle oinstall      320 Nov 21 00:27 createSPFile_testdb.sql
 -rw-r-----. 1 oracle oinstall    15085 Nov 21 00:27 dbms_registry_extended.sql
 -rwxr-xr-x. 1 oracle oinstall      120 Nov 21 00:27 grpOpen_testdb.sql
 -rw-r-----. 1 oracle oinstall      942 Nov 21 00:27 init.ora
 -rw-r-----. 1 oracle oinstall       69 Nov 21 00:28 Migrate_Sid.log
 drwxr-x---. 3 oracle oinstall       21 Nov 21 00:27 oracle
 -rw-r-----. 1 oracle oinstall    10409 Nov 21 01:04 Oracle_Server.log
 -rw-r-----. 1 oracle oinstall    14051 Nov 21 00:27 parameters.properties
 -rw-r-----. 1 oracle oinstall     8580 Nov 21 00:27 postupgrade_fixups.sql
 -rw-r-----. 1 oracle oinstall      301 Nov 21 01:10 PostUpgrade.log
 -rw-r-----. 1 oracle oinstall     7884 Nov 21 00:27 preupgrade_driver.sql
 -rw-r-----. 1 oracle oinstall     8514 Nov 21 00:27 preupgrade_fixups.sql
 -rw-r-----. 1 oracle oinstall      443 Nov 21 00:28 PreUpgrade.log
 -rw-r-----. 1 oracle oinstall    99316 Nov 21 00:27 preupgrade_messages.properties
 -rw-r-----. 1 oracle oinstall   457732 Nov 21 00:27 preupgrade_package.sql
 -rw-r-----. 1 oracle oinstall     1464 Nov 21 00:27 PreUpgradeResults.html
 -rwxr-xr-x. 1 oracle oinstall       42 Nov 21 00:27 shutdown_testdb.sql
 -rw-r-----. 1 oracle oinstall    94342 Nov 21 01:10 sqls.log
 -rwxr-xr-x. 1 oracle oinstall       35 Nov 21 00:27 startup_testdb.sql
 -rwxr-xr-x. 1 oracle oinstall     2070 Nov 21 00:27 testdb_restore.sh
 drwxr-x---. 3 oracle oinstall       24 Nov 21 00:27 upgrade
 -rw-r-----. 1 oracle oinstall     5287 Nov 21 01:10 UpgradeResults.html
 -rw-r-----. 1 oracle oinstall     2920 Nov 21 01:09 UpgradeTimezone.log
 -rw-r-----. 1 oracle oinstall    11264 Nov 21 00:27 upgrade.xml
 -rw-r-----. 1 oracle oinstall     1583 Nov 21 01:04 upg_summary_CDB_Root.log
 -rw-r-----. 1 oracle oinstall      115 Nov 21 01:07 Utlprp.log
 [oracle@ol7-112-dg1 ~]$

--- Script testdb_restore.sh:
 [oracle@ol7-112-dg1 sql]$ cat /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/testdb_restore.sh
 #!/bin/sh

# -- Run this Script to Restore Oracle Database Instance testdb
 echo -- Bringing up the database from the source oracle home
 ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1; export ORACLE_HOME
 LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0.4/dbhome_1/lib:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH
 ORACLE_SID=testdb; export ORACLE_SID
 /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/sqlplus /nolog @/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/shutdown_testdb.sql
 echo -- Bringing down the database from the new oracle home
 ORACLE_HOME=/u01/app/oracle/product/19.3.0.0/db_1; export ORACLE_HOME
 LD_LIBRARY_PATH=/u01/app/oracle/product/19.3.0.0/db_1/lib:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH
 ORACLE_SID=testdb; export ORACLE_SID
 /u01/app/oracle/product/19.3.0.0/db_1/bin/sqlplus /nolog @/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/shutdown_testdb.sql
 echo -- Removing database instance from new oracle home …
 echo You should Remove this entry from the /etc/oratab: testdb:/u01/app/oracle/product/19.3.0.0/db_1:N
 echo -- Bringing up the database from the source oracle home
 unset LD_LIBRARY_PATH; unset LD_LIBRARY_PATH_64; unset SHLIB_PATH; unset LIB_PATH
 LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0.4/dbhome_1/lib:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH
 ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1; export ORACLE_HOME
 ORACLE_SID=testdb; export ORACLE_SID
 rm /u01/app/oracle/product/19.3.0.0/db_1/dbs/spfiletestdb.ora
 echo You should Add this entry in the /etc/oratab: testdb:/u01/app/oracle/product/11.2.0.4/dbhome_1:Y
 cd /u01/app/oracle/product/11.2.0.4/dbhome_1
 /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/sqlplus /nolog @/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/createSPFile_testdb.sql
 /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/sqlplus /nolog @/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/grpOpen_testdb.sql
 RESTORE_RESULT=$?
 echo -- Execution of restore script for the database TESTDB completed.
 exit $(($RESTORE_RESULT|$?))
 [oracle@ol7-112-dg1 sql]$

--- grpOpen_testdb.sql: flashback database to restore point UPGRADE19C;
 [oracle@ol7-112-dg1 ~]$ grep -i upgrade19c /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/*.sql
 /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/grpOpen_testdb.sql:flashback database to restore point UPGRADE19C;
 [oracle@ol7-112-dg1 ~]$
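
For reference, a hand-run equivalent of that restore (if the generated script were not available) would be roughly the following from the source 11.2 home; this is only a sketch, and the dbua-generated scripts above are the path actually used here:

-- from the 11.2 home, with ORACLE_SID=testdb
startup mount;
flashback database to restore point UPGRADE19C;
alter database open resetlogs;

-- once satisfied, drop the guaranteed restore point to release the flashback logs
drop restore point UPGRADE19C;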

--- Restore database back to 11.2:
 [oracle@ol7-112-dg1 ~]$ /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/testdb_restore.sh
 -- Bringing up the database from the source oracle home
 SQL*Plus: Release 11.2.0.4.0 Production on Sat Nov 21 04:28:33 2020
 Copyright (c) 1982, 2013, Oracle.  All rights reserved.
 Connected to an idle instance.
 ORACLE instance shut down.
 Disconnected
 -- Bringing down the database from the new oracle home
 SQL*Plus: Release 19.0.0.0.0 - Production on Sat Nov 21 04:28:33 2020
 Version 19.8.0.0.0
 Copyright (c) 1982, 2019, Oracle.  All rights reserved.
 Connected.
 ORACLE instance shut down.
 Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
 Version 19.8.0.0.0
 -- Removing database instance from new oracle home …
 You should Remove this entry from the /etc/oratab: testdb:/u01/app/oracle/product/19.3.0.0/db_1:N
 -- Bringing up the database from the source oracle home
 You should Add this entry in the /etc/oratab: testdb:/u01/app/oracle/product/11.2.0.4/dbhome_1:Y
 SQL*Plus: Release 11.2.0.4.0 Production on Sat Nov 21 04:28:38 2020
 Copyright (c) 1982, 2013, Oracle.  All rights reserved.
 Connected to an idle instance.
 ORACLE instance started.
 Total System Global Area 1603411968 bytes
 Fixed Size                  2253664 bytes
 Variable Size             520096928 bytes
 Database Buffers         1073741824 bytes
 Redo Buffers                7319552 bytes
 File created.
 ORA-01507: database not mounted
 ORACLE instance shut down.
 Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
 With the Partitioning, OLAP, Data Mining and Real Application Testing options
 SQL*Plus: Release 11.2.0.4.0 Production on Sat Nov 21 04:28:46 2020
 Copyright (c) 1982, 2013, Oracle.  All rights reserved.
 Connected to an idle instance.
 ORACLE instance started.
 Total System Global Area 1603411968 bytes
 Fixed Size                  2253664 bytes
 Variable Size             520096928 bytes
 Database Buffers         1073741824 bytes
 Redo Buffers                7319552 bytes
 Database mounted.
 Flashback complete.
 Database altered.
 Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
 With the Partitioning, OLAP, Data Mining and Real Application Testing options
 -- Execution of restore script for the database TESTDB completed.
 [oracle@ol7-112-dg1 ~]$

ZigBee@Linux: Getting Data from ZigBee Devices via MQTT to InfluxDB and Grafana

Dietrich Schroff - Fri, 2020-11-20 15:56

Having integrated ZigBee sensors with my Linux Raspberry Pi, I set up some monitoring tasks on the Raspberry Pi.

  1. Monitoring my raspberry pi:
    There is a very nice tutorial:
    https://medium.com/@andreea.sonda31/monitor-raspberry-pi-resources-and-parameters-with-grafana-board-part-1-ab0567303e8
    Or even better: Just use this from grafana:
    https://grafana.com/grafana/dashboards/10578
    1. add deb https://packages.grafana.com/oss/deb stable main to a file in /etc/apt/sources.list.d/
    2. apt install grafana telegraf influxdb
    3. configure telegraf for your influxdb
    4. import the json from the grafana.com-link above



  2. Monitoring my Fritz.Box with Grafana:
    https://grafana.com/grafana/dashboards/713 
    and follow the given tutorial https://fetzerch.github.io/2014/08/23/fritzcollectd/
After these steps I have the following infrastructure running:
  1. zigbee2mqtt --> MQTT -->FHEM


  2. Fritz.box --> collectd --> InfluxDB --> Grafana

  3. raspberry --> telegraf --> InfluxDB --> Grafana


For 2 and 3 it is very easy to create graphics, and the presentation looks a little bit prettier than 1 (imho).

AND there is only one frontend to configure. So what about the following chain for my zigbee sensors:

  1. zigbee2mqtt --> MQTT -->telegraf --> InfluxDB --> Grafana 

Looks like some more steps, but the telegraf --> InfluxDB --> Grafana chain is already there for monitoring my raspberry pi.

So I only had to add the following to /etc/telegraf/telegraf.conf:

[[inputs.mqtt_consumer]]
   servers = ["tcp://127.0.0.1:1883"]
   topics = [
     "zigbee2mqtt/0x00158d000542239e",
     "zigbee2mqtt/0x00158d00044a6378",
     "zigbee2mqtt/0x00158d0003f0faad",
     "zigbee2mqtt/0x00158d00044a72a2",
   ]
   data_format = "json"

And after that I was able to use the data in Grafana:


 


OLAP - How can I check that the database uses it

Tom Kyte - Fri, 2020-11-20 13:26
Hello, this is my first post, please be understanding :). I have questions about OLAP: 1. In Oracle 11.2, OLAP was free (in Oracle 12.2, OLAP requires a license)? 2. What happens when I import a database that uses OLAP into an environment without OLAP? 3. How do I check whether the actual database uses OLAP? <code>select name, first_usage_date, last_usage_date from dba_feature_usage_statistics where name like '%OLAP%' and first_usage_date is not null; </code> Is that enough? 4. How do I check that the database is a warehouse? Thank you. Regards, Krzysztof
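
A minimal sketch of a couple of additional checks commonly used alongside dba_feature_usage_statistics: whether the OLAP option is linked in at all, whether its components are installed, and whether any analytic workspaces exist:

-- Is the OLAP option available in this database at all?
select parameter, value from v$option where parameter like '%OLAP%';

-- Are the OLAP components installed and valid?
select comp_id, comp_name, status from dba_registry where comp_id in ('APS', 'XOQ', 'AMD');

-- Have any analytic workspaces actually been created?
select owner, aw_name from dba_aws;
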
Categories: DBA Blogs

How to audit all Select and DML by a user?

Tom Kyte - Fri, 2020-11-20 13:26
Good afternoon, how can we audit all SELECT and DML statements issued by a user? I tried this: <code>AUDIT ALL BY JCANTU;</code> Then I ran a few selects, but they didn't appear in the audit trail, so I ended up just doing a SQL trace. Is AUDIT ALL supposed to create an audit record when I select from a table, showing that I performed a select and which table was selected? Thanks,
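A minimal sketch of one way to cover queries with traditional (pre-unified) auditing, assuming audit_trail is set to DB and that JCANTU from the question is the user to audit; the SELECT TABLE option is not part of the ALL shortcut (which is consistent with what was observed), so it has to be audited explicitly, and the records then appear in DBA_AUDIT_TRAIL.
<code>
-- audit_trail must be DB (or DB,EXTENDED); changing it requires an instance restart
select value from v$parameter where name = 'audit_trail';

-- Audit queries and DML issued by the user, one record per statement
audit select table, insert table, update table, delete table by jcantu by access;

-- Review the audit records
select username, obj_name, action_name, timestamp
  from dba_audit_trail
 where username = 'JCANTU'
 order by timestamp;
</code>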
Categories: DBA Blogs

Performing sum of all matched substrings from a string using regular expression

Tom Kyte - Fri, 2020-11-20 13:26
I have a database table named T_USAGE with a column named GENERAL_1. The GENERAL_1 field contains the value below. 14348860:1T:24:|120|1120|2000*14348860:1T:24:|120|1220|3000*14348860:1T:24:|120|1120|879609299148 I have to sum the substrings enclosed between a | (pipe) and a * (asterisk). In the above value there are two such substrings (2000, 3000). Using the regexp_substr() function, I am able to identify only the first substring. <code>select regexp_substr('input', '\|([0-9])+\*') test from dual;</code> How do I identify all occurrences and perform the addition? Please provide a SQL query if possible. Expected output = (2000 + 3000) = 5000.
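A minimal sketch of one way to do this on Oracle 11g or later, relying on REGEXP_COUNT and the subexpression argument of REGEXP_SUBSTR; the literal below stands in for the GENERAL_1 column, so treat it as an illustration of the technique rather than a finished query against T_USAGE.
<code>
with t as (
  select '14348860:1T:24:|120|1120|2000*14348860:1T:24:|120|1220|3000*14348860:1T:24:|120|1120|879609299148' as general_1
    from dual
)
-- one generated row per |digits* occurrence; subexpression 1 strips the delimiters
select sum(to_number(regexp_substr(general_1, '\|(\d+)\*', 1, level, null, 1))) as total
  from t
connect by level <= regexp_count(general_1, '\|\d+\*');
-- TOTAL = 5000
</code>
For the real table with many rows, the same pattern would need to be applied per row (for example with a LATERAL join in 12c or a correlated approach), since a bare CONNECT BY against a multi-row table multiplies the rows.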
Categories: DBA Blogs

RMAN : how to restore a dropped tablespace if no catalog and no PITR

Tom Kyte - Fri, 2020-11-20 13:26
Hello experts, I am on 12.2, multi-tenant architecture, no RMAN catalog, control file autobackup enabled. I have a problem restoring a dropped tablespace with RMAN. I created it and made a complete backup of my container with the PDB and the tbs. <code>SQL> CREATE TABLESPACE ZZTBS DATAFILE '/u01/app/oracle/oradata/orcl12c/orcl/zztbs.dbf' size 10m EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO; RMAN> BACKUP DATABASE PLUS ARCHIVELOG; ... Starting backup at 02-NOV-20 using channel ORA_DISK_1 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00010 name=/u01/app/oracle/oradata/orcl12c/orcl/sysaux01.dbf input datafile file number=00011 name=/u01/app/oracle/oradata/orcl12c/orcl/undotbs01.dbf input datafile file number=00009 name=/u01/app/oracle/oradata/orcl12c/orcl/system01.dbf input datafile file number=00012 name=/u01/app/oracle/oradata/orcl12c/orcl/users01.dbf input datafile file number=00016 name=/u01/app/oracle/oradata/orcl12c/orcl/zztbs.dbf input datafile file number=00013 name=/u01/app/oracle/oradata/orcl12c/orcl/APEX_1991375173370654.dbf input datafile file number=00014 name=/u01/app/oracle/oradata/orcl12c/orcl/APEX_1993195660370985.dbf channel ORA_DISK_1: starting piece 1 at 02-NOV-20 channel ORA_DISK_1: finished piece 1 at 02-NOV-20 piece handle=/u01/app/oracle/fast_recovery_area/orcl12c/ORCL12C/49BFF8A6BB912582E0530100007F8BE4/backupset/2020_11_02/o1_mf_nnndf_TAG20201102T102548_ht097xb2_.bkp tag=TAG20201102T102548 comment=NONE ... </code> We see that the backup is OK: BS Key 2, file 16, and, most important, the Name column is filled with the datafile of my tbs. <code>RMAN> list backup; List of Backup Sets =================== ... BS Key Type LV Size Device Type Elapsed Time Completion Time ------- ---- -- ---------- ----------- ------------ --------------- 2 Full 1.41G DISK 00:00:34 02-NOV-20 BP Key: 2 Status: AVAILABLE Compressed: NO Tag: TAG20201102T102548 Piece Name: /u01/app/oracle/fast_recovery_area/orcl12c/ORCL12C/49BFF8A6BB912582E0530100007F8BE4/backupset/2020_11_02/o1_mf_nnndf_TAG20201102T102548_ht097xb2_.bkp List of Datafiles in backup set 2 Container ID: 3, PDB Name: ORCL File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name ---- -- ---- ---------- --------- ----------- ------ ---- 9 Full 2166604 02-NOV-20 NO /u01/app/oracle/oradata/orcl12c/orcl/system01.dbf 10 Full 2166604 02-NOV-20 NO /u01/app/oracle/oradata/orcl12c/orcl/sysaux01.dbf 11 Full 2166604 02-NOV-20 NO /u01/app/oracle/oradata/orcl12c/orcl/undotbs01.dbf 12 Full 2166604 02-NOV-20 NO /u01/app/oracle/oradata/orcl12c/orcl/users01.dbf 13 Full 2166604 02-NOV-20 NO /u01/app/oracle/oradata/orcl12c/orcl/APEX_1991375173370654.dbf 14 Full 2166604 02-NOV-20 NO /u01/app/oracle/oradata/orcl12c/orcl/APEX_1993195660370985.dbf 16 Full 2166604 02-NOV-20 NO /u01/app/oracle/oradata/orcl12c/orcl/zztbs.dbf </code> I drop my tbs. <code>SQL> drop tablespace ZZTBS INCLUDING CONTENTS AND DATAFILES; Tablespace dropped.</code> The problem is that, after the drop, there is no longer any reference to my tbs in the control file. So, when I use RMAN connected to the PDB, I get an error message saying that it does not know my tbs. 
<code>RMAN> LIST BACKUP OF TABLESPACE ZZTBS; RMAN-00571: =========================================================== RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS =============== RMAN-00571: =========================================================== RMAN-03002: failure of list command at 11/02/2020 10:28:10 RMAN-20202: Tablespace not...
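Not a full answer, but a sketch of a check that may still help here: the control file generally keeps its backup records (subject to CONTROL_FILE_RECORD_KEEP_TIME) even after the tablespace entry itself is removed, so querying the backup views by file number from the root container may still locate the backup piece that RMAN refuses to list by tablespace name. The file number 16 is taken from the LIST BACKUP output above; this only finds the backup, it is not a restore procedure.
<code>
-- Run in the root container (CDB$ROOT); file# 16 was the dropped tablespace's datafile
select bp.handle,
       bd.file#,
       bd.checkpoint_change#,
       bd.completion_time
  from v$backup_datafile bd
  join v$backup_piece    bp
    on bp.set_stamp = bd.set_stamp
   and bp.set_count = bd.set_count
 where bd.file# = 16;
</code>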
Categories: DBA Blogs

How to prevent sqlldr from aborting with WHEN clause

Tom Kyte - Fri, 2020-11-20 13:26
The file names will be the same.. Bad file has ONLY the 1000 record. The good file has many. I can load the 1st record for both the good file and the bad file but the good file aborts because it skips all the other record types. I was using the WHEN clause to load JUST the 1000 record. This is an example of a bad file <code>1000,payment file failed,002 - Duplicate File.</code> ===================================== This is an example of a good file <code>1000,1.0,TEMPSUA,10142020071021,10162020172131 4000,1.0,814605760,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,, 4000,1.0,814605770,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,, 4000,1.0,814605780,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,, 4000,1.0,814605790,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,, 4000,1.0,814605810,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,, 4000,1.0,814605820,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,, 4000,1.0,814605830,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,, 4000,1.0,814605840,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,, 4000,1.0,814605850,Failure,TSFMS100000248581445,101 - Payable: Unique Payable Identifier field is required.,101 - Payable: Total Payable Amount field is required.,,,,,,,,, 5000,9,0,9</code> I have a table with 1 header record and multiple detail records. A good file and a bad file will have the same file name. Only the header record will alert us to a failed file. I have a 'PRETEST' table that I only want to load the header record into. Then I have a 'PRETEST' script to test the header record. I use a WHEN clause and only load 1 record but the step aborts because it creates a discard file with all the other good records. How can I prevent this from aborting? Thank you Sherry Borden <code>SQL*Loader: Release 19.0.0.0.0 - Production on Wed Nov 18 10:44:55 2020 Version 19.3.0.0.0 Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved. Control File: C:\AppWorx\sql\F_JPM_TF_RESPONSE_PRETEST_LOAD.ctl Data File: 3140121.tmp Bad File: 3140121.bad Discard File: 3140121.dis (Allow all discards) Number to load: ALL Number to skip: 0 Errors allowed: 50 Bind array: 250 rows, maximum of 1048576 bytes Continuation: none specified Path used: Conventional Table TEMPLE_FINANCE.JPM_SUA_RESPONSE_PRETEST, loaded when JPM_REC_TYPE = 0X31303030(character '1000') Insert option in effect for this table: REPLACE TRAILING NULLCOLS option in effect Column Name Position Len Term Encl Datatype ------------------------------ ---------- ----- ---- ---- --------------------- JPM_REC_TYPE FIRST * , O(") CHARACTER JPM_S_OR_F NEXT * , O(") CHARACTER JPM_ERRCODE_DESCRIP NEXT * , O(") CHARACTER Record 2: Discarded - failed all WHEN clauses. 
Record 3: Discarded - failed all WHEN clauses. Record 4: Discarded - failed all WHEN clauses. Record 5: Discarded - fa...
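One alternative worth sketching, since it sidesteps the discard handling entirely: read the header with an external table instead of SQL*Loader, using a LOAD WHEN clause so that only the 1000 record is visible and the detail records are simply skipped rather than written to a discard file. The directory object, table name, column lengths, and file name below are assumptions for illustration only.
<code>
-- DATA_DIR is a hypothetical directory object pointing at the folder holding the response files
create table jpm_sua_response_pretest_ext (
  jpm_rec_type        varchar2(10),
  jpm_s_or_f          varchar2(100),
  jpm_errcode_descrip varchar2(400)
)
organization external (
  type oracle_loader
  default directory data_dir
  access parameters (
    records delimited by newline
    load when (1:4) = '1000'
    fields terminated by ',' optionally enclosed by '"'
    missing field values are null
  )
  location ('3140121.tmp')
)
reject limit unlimited;

-- The PRETEST check then becomes a plain query (or an INSERT ... SELECT into the PRETEST table)
select jpm_rec_type, jpm_s_or_f, jpm_errcode_descrip
  from jpm_sua_response_pretest_ext;
</code>
With this approach there is no SQL*Loader discard file or warning exit code involved in the step at all.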
Categories: DBA Blogs
