Tom Kyte

These are the most recently asked questions on Ask Tom

Troubleshooting heavy hash joins

Thu, 2021-01-21 09:26
Hello Chris, hello Connor. I know that hash join performance can suffer a lot if the hash table has to spill to disk, but I have no idea how to check whether that is happening and whether it is the reason a query is slow. Admittedly, I do not know much about memory usage in Oracle or how to track it. Could you please point me toward some articles or data dictionary views that would help? Unfortunately, I have neither DBA access nor access to trace files.
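For reference, a minimal sketch of the kind of check being asked about, assuming SELECT access to the V$ performance views (which may require a grant): V$SQL_WORKAREA records, per work area of a cursor, how many executions ran optimally in memory versus one-pass or multi-pass, i.e. spilling to temp.

<code>
-- Non-zero ONEPASS/MULTIPASSES executions mean the hash table spilled to disk;
-- LAST_TEMPSEG_SIZE shows the temp space used by the last execution.
select operation_type,
       round(last_memory_used / 1024 / 1024) as last_mem_mb,
       optimal_executions,
       onepass_executions,
       multipasses_executions,
       last_tempseg_size
from   v$sql_workarea
where  sql_id = '&sql_id'
and    operation_type like 'HASH%';
</code>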
Categories: DBA Blogs

Spooling data to .csv file via SQL Plus

Thu, 2021-01-21 09:26
It is probably apparent from my job title that my role is far from a DBA. I am, however, a frequent user of Oracle SQL Developer. I work in mass appraisal and use SQL Developer to select, update, insert, delete, etc., on a regular basis. I would like to bring some automation to some of the select statements that I run most frequently and spool the results to a .csv file. An example of this is a select statement for identifying recently sold properties that need to be reviewed. The following command [using the Windows Command Prompt] has been stored as a scheduled task [using Windows Task Scheduler]: <code>G:\>sqlplus username/password@connection @G:\SALES_VALIDATION\bat_files\weekly_salesval_rev.sql</code> Weekly_salesval_rev.sql is the script containing the spool commands and select statement. I have included a link so you can view the script. What command [or commands] can I incorporate so that the data will be formatted appropriately [and include column headings] for review in Excel? While there should be approximately 21 columns of data, the results are currently displayed in a scattered fashion within the first 3 columns of the .csv file. If you need any other detail, please let me know. Also, if you would suggest or recommend other approaches to automating frequently run SQL select statements, let me know and I would be glad to look into those alternatives. Thank you for your time and help! Berkley Rose Product Versions: SQL Plus Release 12.2.0.1.0 Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 Oracle SQL Developer 4.0.3.16 Windows Version 10.0.17763.1577
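A hedged sketch of the usual fix, given that the question lists SQL*Plus 12.2: that release added SET MARKUP CSV, which emits properly quoted comma-separated output with column headings, so the spooled file opens cleanly in Excel. The file path and query below are placeholders.

<code>
-- weekly_salesval_rev.sql (sketch; substitute the real query and spool path)
set markup csv on quote on
set feedback off
set echo off
spool G:\SALES_VALIDATION\output\weekly_salesval_rev.csv
select * from recently_sold_properties;  -- placeholder query
spool off
exit
</code>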
Categories: DBA Blogs

Find sku_no values from the table which do not have any records with ven_type='P'

Thu, 2021-01-21 09:26
Hi Tom and Team, I have a very simple table with the following columns: site_no, sku_no, vendor_id, ven_type. A sku_no can have multiple vendor_id values associated with it. A vendor_id can have ven_type = 'P' (primary) or 'S' (secondary). I want to find all those sku_no values from the table that do not have any ven_type = 'P' records. Kindly help me write a query for this. Please let me know if the explanation is not clear or if you want me to provide any other details. Thank you.
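A minimal sketch of one way to write it; the table name sku_vendors is an assumption, since the question does not give one:

<code>
-- SKUs with no 'P' row at all (table name is a placeholder)
select sku_no
from   sku_vendors
group  by sku_no
having max(case when ven_type = 'P' then 1 else 0 end) = 0;
</code>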
Categories: DBA Blogs

Table with LONG data type not being freed

Wed, 2021-01-20 15:06
Hi Tom, We are dealing with a system which has some legacy tables with a LONG column. One of these tables contains critical data (email bodies and attachments, chat transcripts, etc.) in the LONG column. Unfortunately we cannot remove or even change this table in any way; the system is heavily dependent on it. Two years ago we introduced the following solution:

- A new table with the same columns, but CLOB instead of LONG
- A job running every minute, which copies newly inserted records from the original table into the new table and converts LONG to CLOB
- A daily job which deletes already-copied records from the old table

When this solution was introduced, we did a manual cleanup of the original table:

- Stop the system which uses the table
- Create a new table with the exact same definition (LONG column)
- Copy over all records which had to be kept
- Rename the original table with an "_OLD" postfix and the new table to the original table name
- Drop the old table

However, after 2 years, the customer where this database is located wanted to drop old partitions (note: the problematic table is NOT partitioned), and while checking disk space, noticed that the original table is still growing to be very large (this was the original problem 2 years ago, too). Our expectation from the above solution was that while Oracle will not free up the disk space after the DELETE because of the LONG column, it will still reuse existing blocks, so the table will not grow after a certain point. However, this is not what is happening; the table keeps growing, day by day. Currently it has about 13,000 records, but over 19 million blocks. About 18,000 records are added during the day, and each of those is deleted at night. I tried to reproduce the problem using the SQL below, but I'm not sure it's fully correct; it produces a different result, which is what we were expecting: after we delete and insert again, the used block count is not increasing.

<code>
-- Helper proc
CREATE OR REPLACE PROCEDURE show_space (
  PI_TABLE_NAME IN VARCHAR2
) AS
  l_unformatted_blocks NUMBER;
  l_unformatted_bytes  NUMBER;
  l_fs1_blocks         NUMBER;
  l_fs1_bytes          NUMBER;
  l_fs2_blocks         NUMBER;
  l_fs2_bytes          NUMBER;
  l_fs3_blocks         NUMBER;
  l_fs3_bytes          NUMBER;
  l_fs4_blocks         NUMBER;
  l_fs4_bytes          NUMBER;
  l_full_blocks        NUMBER;
  l_full_bytes         NUMBER;

  PROCEDURE p (p_label IN VARCHAR2, p_num IN NUMBER) IS
  BEGIN
    dbms_output.put_line(rpad(p_label, 40, '.') || p_num);
  END;
BEGIN
  dbms_space.space_usage(
    'CCQ', PI_TABLE_NAME, 'TABLE',
    l_unformatted_blocks, l_unformatted_bytes,
    l_fs1_blocks, l_fs1_bytes,
    l_fs2_blocks, l_fs2_bytes,
    l_fs3_blocks, l_fs3_bytes,
    l_fs4_blocks, l_fs4_bytes,
    l_full_blocks, l_full_bytes);

  p('Unformatted Blocks ', l_unformatted_blocks);
  p('FS1 Blocks (0-25)  ', l_fs1_blocks);
  p('FS2 Blocks (25-50) ', l_fs2_blocks);
  p('FS3 Blocks (50-75) ', l_fs3_blocks);
  p('FS4 Blocks (75-100)', l_fs4_blocks);
  p('Full Blocks        ', l_full_blocks);
END;
/

-- Dummy table, exact same definition as original table
CREATE TABLE test_long (
  qwkey     NUMBER(38, 0) NOT NULL,
  tablename VARCHAR2(90 BYTE),
  fieldname VARCHAR2(90 BYTE),
  textkey   NUMBER(38, 0),
  text      LONG
);

ALTER TABLE test_long ADD CONSTRAINT pk_test_long PRIMARY KEY (qwkey) USING INDEX;

-- Original stats
BEGIN
  dbms_stats.gather_table_stats(ownname => '"CCQ"', tabname => '"TEST_LONG"', estimate_percent => 1);
END;

BEGIN
  show_space('TEST_LONG');
END;

/* Output: Unformatted...
</code>
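For reference, the kind of copy job the question describes can use the built-in TO_LOB operator, which converts a LONG column to a CLOB inside an INSERT ... SELECT (it is not allowed in ordinary queries). A minimal sketch, where the target table name test_clob is an assumption:

<code>
-- Hypothetical target: same columns as test_long, but CLOB instead of LONG
CREATE TABLE test_clob (
  qwkey     NUMBER(38, 0) NOT NULL,
  tablename VARCHAR2(90 BYTE),
  fieldname VARCHAR2(90 BYTE),
  textkey   NUMBER(38, 0),
  text      CLOB
);

-- TO_LOB converts the LONG on the way in
INSERT INTO test_clob (qwkey, tablename, fieldname, textkey, text)
SELECT qwkey, tablename, fieldname, textkey, TO_LOB(text)
FROM   test_long;
</code>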
Categories: DBA Blogs

Historical question about the definition of the constraining table in the Oracle documentation

Wed, 2021-01-20 15:06
Hello, AskTom! Excuse me for the "idle" question, but none of my colleagues could answer this. Many, many years ago, back in Oracle 7, there was a "constraining error": ORA-04094 "table is constraining, trigger may not modify it". This restriction was relaxed in Oracle 8i. In the Oracle (8i and later) documentation you can read: "constraining error prevented a row trigger from modifying a table when the parent statement implicitly read that table to enforce a foreign key constraint", and that is absolutely clear to me. But the Oracle 7 documentation says: "A constraining table is a table that a triggering statement might need to read either directly, for a SQL statement, or indirectly, for a declarative referential integrity constraint". I never understood the first part of that sentence, "read directly, for a SQL statement": how can a read of the table "constrain" the table in the triggering statement, if the SQL statement reads the data in the table in the state it had at the moment in time BEFORE the row trigger could make any changes to the table being read? My opinion is that it was a mistake in the Oracle documentation, and a correct definition of the constraining table would be "a constraining table is a table that a triggering statement might need to read indirectly, for a declarative referential integrity constraint", without the first part.
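To make the scenario concrete, a minimal sketch of the classic constraining-table case (all object names are made up): the foreign key on EMP_T forces an INSERT to read DEPT_T, so DEPT_T is the constraining table; a row trigger on EMP_T that reads it was the ORA-04094 situation in Oracle 7, while from 8i onward this compiles and runs.

<code>
CREATE TABLE dept_t (deptno NUMBER PRIMARY KEY);
CREATE TABLE emp_t (
  empno  NUMBER PRIMARY KEY,
  deptno NUMBER REFERENCES dept_t
);

-- Row trigger reading the constraining table DEPT_T
CREATE OR REPLACE TRIGGER emp_t_bir
BEFORE INSERT ON emp_t
FOR EACH ROW
DECLARE
  l_cnt PLS_INTEGER;
BEGIN
  SELECT COUNT(*) INTO l_cnt
  FROM   dept_t
  WHERE  deptno = :new.deptno;  -- raised ORA-04094 in Oracle 7
END;
/
</code>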
Categories: DBA Blogs

Need help working with PL/SQL FOR LOOP

Wed, 2021-01-20 15:06
We have batch_table with column batch_id having values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. Running sample code for demonstration:

<code>
PACKAGE XXX
PROCEDURE YYY IS
  -- variables...
BEGIN
  FOR i IN (SELECT batch_id FROM batch_table) LOOP
    -- Function call
    IF get_running_threads('XXEXTRACT_DATA') <= l_max_allowed_threads THEN
      l_request_id := fnd_request.submit_request (
                        application => g_application_name,
                        program     => 'XXEXTRACT_DATA',
                        description => 'XXEXTRACT_DATA',
                        start_time  => SYSDATE,
                        sub_request => FALSE,
                        argument1   => i.batch_id);
    ELSE
      BEGIN
        -- WAIT LOGIC
        -- Wait till previous concurrent program completes successfully.
        -- Call API FND_CONCURRENT.WAIT_FOR_REQUEST
        DBMS_LOCK.sleep(10);
      END;
    END IF;
  END LOOP;
END YYY;
END XXX;
</code>

My question is: batch_ids 1, 2, 3 are successfully processed by calling the concurrent program XXEXTRACT_DATA. When the FOR loop starts processing batch_id 4, control goes to the ELSE part. The next iteration then starts with batch_id 5, so batch_id 4 is never processed. How can I process batch_id 4? Is this possible to achieve without reopening the cursor more than once? Please respond.
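A hedged sketch of one fix, reusing only names from the question: move the wait into a loop that runs before the submit, so every batch_id waits for a free slot instead of being skipped.

<code>
FOR i IN (SELECT batch_id FROM batch_table) LOOP
  -- Block here until a thread slot frees up; no batch_id is ever skipped
  WHILE get_running_threads('XXEXTRACT_DATA') > l_max_allowed_threads LOOP
    DBMS_LOCK.sleep(10);
  END LOOP;

  l_request_id := fnd_request.submit_request (
                    application => g_application_name,
                    program     => 'XXEXTRACT_DATA',
                    description => 'XXEXTRACT_DATA',
                    start_time  => SYSDATE,
                    sub_request => FALSE,
                    argument1   => i.batch_id);
END LOOP;
</code>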
Categories: DBA Blogs

Choosing a view based on result of view

Tue, 2021-01-19 20:46
Hi all, I am struggling with a view. The outcome of the view should be the rows of one of 3 different views with the same fields. Can someone point me in the right direction how to think, please? :) Very basically, this is my problem:

<code>
/* MAINVIEW */
SELECT * FROM
  (SELECT * FROM VIEW1)  -- returns n rows
/* IF VIEW1 IS EMPTY THEN SELECT ALL FROM VIEW2 */
  (SELECT * FROM VIEW2)  -- returns n rows
/* IF VIEW2 IS EMPTY THEN SELECT VIEW3 (=HAS ALWAYS DATA) */
  (SELECT * FROM VIEW3)  -- returns 1 row
</code>

I don't need full code, just a hint in the right direction ;) . Thanks in advance.
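A minimal sketch of one common pattern, assuming the three views are column-compatible: guard each branch with NOT EXISTS so only the first non-empty view contributes rows.

<code>
SELECT * FROM view1
UNION ALL
SELECT * FROM view2
WHERE  NOT EXISTS (SELECT 1 FROM view1)
UNION ALL
SELECT * FROM view3
WHERE  NOT EXISTS (SELECT 1 FROM view1)
AND    NOT EXISTS (SELECT 1 FROM view2);
</code>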
Categories: DBA Blogs

Question about sequence with lower nextval than column

Tue, 2021-01-19 20:46
Hi, I have a sequence named s1 that starts with 1, increments by 1, and ends with 999. I am inserting the sequence values into a table named e. The e table contains an eno (PK) column: insert into e values (s1.nextval); I inserted 9 rows, so the sequence's current value is 9, but the value 10 was already inserted directly in the backend. When I try insert into e values (s1.nextval); I get a PK violation error. In this case I want to insert 11 in place of 10.
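A hedged sketch of one way out, assuming a release where ALTER SEQUENCE ... RESTART is supported (documented in 18c): move the sequence just past the highest existing key, so the next NEXTVAL skips the clashing value.

<code>
DECLARE
  l_max e.eno%TYPE;
BEGIN
  SELECT MAX(eno) INTO l_max FROM e;
  EXECUTE IMMEDIATE 'alter sequence s1 restart start with ' || (l_max + 1);
END;
/

INSERT INTO e VALUES (s1.nextval);  -- now inserts 11
</code>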
Categories: DBA Blogs

The JSON query results do not return the full path

Tue, 2021-01-19 20:46
Hi, I'm trying to evaluate JSON queries. E.g., I have the following JSON string in the table:

<code>
{
  "WO": [
    {
      "id": "1",
      "name": "WO1",
      "action": "add",
      "items": [
        { "id": "1", "author": "Tom Ask" },
        { "id": "2", "author": "Lee Frik" }
      ]
    },
    {
      "id": "3",
      "name": "WO1",
      "action": "add",
      "items": [
        { "id": "1", "author": "John A" },
        { "id": "2", "author": "Jean Pontus" }
      ]
    }
  ]
}
</code>

<code>select d.wo_data.WO.items.id from workorder_json d;</code>

I get the following results:

<code>["Tom Ask","Lee Frik","John A","Jean Pontus"]</code>

What I hope for is a return with the full path, like the following:

<code>
{
  "WO": [
    "items": [
      { "author": "Tom Ask" },
      { "author": "Lee Frik" }
    ],
    "items": [
      { "author": "John A" },
      { "author": "Jean Pontus" }
    ]
  ]
}
</code>

Is there any way to return it like the above?
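A hedged sketch of one alternative, assuming the JSON column is wo_data as in the query above: JSON_TABLE with a NESTED PATH keeps each item tied to its parent WO, which is usually what a "full path" requirement boils down to. The output column names are assumptions.

<code>
-- One row per item, with the parent WO id carried alongside
select jt.wo_id, jt.item_id, jt.author
from   workorder_json d,
       json_table(
         d.wo_data, '$.WO[*]'
         columns (
           wo_id varchar2(10) path '$.id',
           nested path '$.items[*]'
           columns (
             item_id varchar2(10) path '$.id',
             author  varchar2(50) path '$.author'
           )
         )
       ) jt;
</code>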
Categories: DBA Blogs

Advanced Queueing

Mon, 2021-01-18 08:26
Environment: Oracle 18XE 64-bit for Windows. I have a question about dequeueing an array of messages from a persistent queue. It's simple point-to-point messaging. The queue is single-consumer, without propagation. I registered my PL/SQL callback function. I need to know the exact size of the array of messages to dequeue in every call of my callback function from the Oracle AQ internal job, and I found only one legal way to do it: register the callback with the <b>qosflags</b> parameter of <b>sys.aq$_reg_info</b> equal to <i><u>dbms_aq.NTFN_QOS_PAYLOAD</u></i>. Here is the registration PL/SQL block:

<code>
declare
  v_qosflags number := dbms_aq.NTFN_QOS_PAYLOAD;
  r_info     SYS.AQ$_REG_INFO;
begin
  r_info := SYS.AQ$_REG_INFO(
              'STERN.FOUNDERS_QUEUE',
              DBMS_AQ.NAMESPACE_AQ,
              'plsql://stern.dosomecalc',
              HEXTORAW('FF')
            );
  r_info.qosflags            := v_qosflags;
  r_info.ntfn_grouping_class := dbms_aq.NTFN_GROUPING_CLASS_TIME;
  r_info.ntfn_grouping_value := 60;
  r_info.ntfn_grouping_type  := dbms_aq.NTFN_GROUPING_TYPE_SUMMARY;

  DBMS_AQ.REGISTER(SYS.AQ$_REG_INFO_LIST(r_info), 1);
end;
</code>

Here is the declaration of the callback procedure. It is a standard declaration:

<code>
create or replace procedure dosomecalc (
  context  RAW,
  reginfo  SYS.AQ$_REG_INFO,
  descr    SYS.AQ$_DESCRIPTOR,
  payload  RAW,
  payloadl NUMBER
)
</code>

Now, thanks to the <b>qosflags</b> parameter initialized with <i><u>dbms_aq.NTFN_QOS_PAYLOAD</u></i>, my callback function is registered in such a way that I can always see the real number of messages to dequeue in the callback session. It can be evaluated by counting the <b>descr.msgid_array</b> part of the descr parameter. Without setting <b>qosflags</b> to some value during registration, this part of the descr parameter always arrives empty in the callback procedure call. Once I know the real size of the message array, I can use it in the dequeuing call inside my callback function:

<code>
dbms_aq.dequeue_array(..., array_size => descr.msgid_array.count, ...);
</code>

Then, after analyzing the contents of the descr parameter, I found an <b>ntfnsRecdInGrp</b> element in it, and decided that <b>ntfnsRecdInGrp</b> is always equal to <b>descr.msgid_array.count</b> and is just there for the programmer's convenience, duplicating <b>descr.msgid_array.count</b>. The AQ documentation says:

<code>
msgid_array    - Group notification message ID list
ntfnsRecdInGrp - Notifications received in group
</code>

That is why I decided they are equal by value. It was my mistake. When I use the callback with an array size equal to <b>descr.msgid_array.count</b>, everything is OK. With <b>ntfnsRecdInGrp</b>, no. Sometimes <b>descr.msgid_array.count</b> and <b>ntfnsRecdInGrp</b> are equal to each other, sometimes not. Now the question is: what is the meaning of the <b>ntfnsRecdInGrp</b> part of the descr parameter? Why is it not the same as <b>msgid_array.count</b>?
Categories: DBA Blogs

How to allow in-progress client work to survive database corruptions that don't require a database restart.

Mon, 2021-01-18 08:26
Good morning, It seems like no time is spent during development efforts to ensure that an application can survive database corruptions that don't require a database restart, such as a datafile corruption or even a block corruption. If a datafile becomes corrupt, for instance, the DBA can restore and recover it, but I don't think the clients or batch processes using the application that access the corrupt datafile will survive the operation and/or the 15-60 minutes it may take to recover. I assume the application developers would need to do additional work on their end to even have a chance. Is there any Oracle documentation or a book you can point me to that an organization can use to make its applications bulletproof against database corruptions that don't require shutting down the database? I would hate to have to reinvent this wheel by figuring out all possible Oracle error codes, timeout settings, etc. This seems like something that lots of application development efforts should address, but my guess is that 98%+ of the time, nothing is done to try to have in-progress client work survive database corruptions that don't require a database restart. Thank you
Categories: DBA Blogs

audit once per session (unified auditing)

Mon, 2021-01-18 08:26
Hi, is it possible to audit (unified auditing) access to tables only once per session? I don't need to catch every select from audited tables in a session; I just want to know whether a table was queried at least once. Currently this generates a huge amount of audit data, of which only a fraction is needed. Thanks, Julius
Categories: DBA Blogs

Understanding reused values for sql_id, address, hash_value, plan_hash_value in v$sqlarea

Thu, 2021-01-14 12:46
Good evening, I have a SQL statement with the following information in v$sqlarea:

<code>
select sql_id, address, hash_value, plan_hash_value
from   v$sqlarea
where  sql_text = <string to identify my query>;

sql_id       |address         |hash_value|plan_hash_value
cv65zdurrtfus|00000000FCAA9560|2944187224|3149222761
</code>

I remove this object from the shared pool with the following command, because I want to recompute the execution plan for my SQL statement:

<code>exec sys.dbms_shared_pool.purge('00000000FCAA9560,2944187224','c');</code>

I rerun my previous select statement on v$sqlarea and it returns 0 rows, so I'm happy with that. Then I execute my original SQL, and last I rerun my select statement on v$sqlarea; it returns one row with the same values:

<code>
sql_id       |address         |hash_value|plan_hash_value
cv65zdurrtfus|00000000FCAA9560|2944187224|3149222761
</code>

I was wondering how identical IDs were generated; I was expecting new values, even though in the end I have the expected result. Thanks for your feedback. Simon
Categories: DBA Blogs

Get date filter from a table in Oracle?

Thu, 2021-01-14 12:46
I would like to know how to take a date column from one table and use it as a date filter for another, large-volume table. I have the following query that currently uses a hard-coded date in the filter criteria and completes in about 10 to 15 minutes:

<code>
select a, b, datec, sum(c)
from   table1
where  datec = date '2021-01-12'
group  by a, b, datec;
</code>

I'm trying to replace the hard-coded date with a date from another table called table2. It's a small table with 1600 rows that just returns the latest cycle completion date (one value), which is typically today's date minus one day, except for holidays when the cycle doesn't run. table1 is a view and it returns millions of rows. I tried the following queries in order to get the date value into the filter condition:

<code>
select a, b, datec, sum(c)
from   table1 t1, table2 t2
where  t1.datec = t2.pdate
and    t2.prcnm = 'TC'
group  by a, b, datec;

select a, b, datec, sum(c)
from   table1 t1
inner  join table2 t2 on datec = t2.pdate and t2.prcnm = 'TC'
group  by a, b, datec;

select a, b, datec, sum(c)
from   table1 t1
where  t1.datec = (SELECT t2.pdate FROM table2 t2 WHERE prcnm = 'TC')
group  by a, b, datec;
</code>

I also tried this hint:

<code>
select a, b, datec, sum(c)
from   table1 t1
where  t1.datec = (SELECT /*+ PRECOMPUTE_SUBQUERY */ t2.pdate FROM table2 t2 WHERE prcnm = 'TC')
group  by a, b, datec;
</code>

The above queries take too long and eventually fail with this error message: "parallel query server died unexpectedly". I am not even able to get 10 rows returned when I use the date from table2. I confirmed that table2 returns only one date, not multiple dates. Can you please help me understand why the query works when a hard-coded date is used, but not when a date from another table is used? Thank you.
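A hedged workaround sketch, reusing only names from the question: fetch the single cycle date into a bind variable first, then pass it to the big query, so the optimizer sees a simple bind rather than a join or subquery against table2.

<code>
variable pdate varchar2(10)

begin
  select to_char(t2.pdate, 'YYYY-MM-DD')
  into   :pdate
  from   table2 t2
  where  t2.prcnm = 'TC';
end;
/

select a, b, datec, sum(c)
from   table1
where  datec = to_date(:pdate, 'YYYY-MM-DD')
group  by a, b, datec;
</code>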
Categories: DBA Blogs

Requirements to set up an Oracle Directory for WRITE access

Thu, 2021-01-14 12:46
We have several existing Oracle Directories set up to allow reading CSV files that work fine, and a couple of them work OK for writing new files. I have been trying to add a new Directory definition pointing to a different path and cannot get it to work. I am in a corporate environment where I don't have access to the system accounts, cannot see the instance startup file, and don't have direct access to the Linux operating system, so I don't know what setup was done for the previous Directories. One of the existing Directories that works for both read and write is defined as: <code>CREATE OR REPLACE DIRECTORY RED AS '/red/dev';</code> For the above directory, the following test code works fine to create an output file: <code>DECLARE v_file UTL_FILE.FILE_TYPE; BEGIN v_file := UTL_FILE.FOPEN(location => 'RED', filename => 'test.csv', open_mode => 'w', max_linesize => 32767); UTL_FILE.PUT_LINE(v_file, 'A,123'); UTL_FILE.FCLOSE(v_file); END; </code> I want to write some files to a subdirectory under the above path, and have found that Oracle will only allow WRITE to a named Oracle Directory, for security reasons. A new Directory I want to create is defined as: <code>CREATE OR REPLACE DIRECTORY RED_OUTPUT AS '/red/dev/OUTPUT';</code> But changing the code above to use RED_OUTPUT as the "location" or directory results in "ORA-29283: invalid file operation: cannot open file". The '/red/dev/OUTPUT' directory exists on the external NAS filesystem and appears to have the same permissions as the parent '/red/dev' directory (as best I can tell by using Windows Explorer to look at the directory security properties). I have read various posts online indicating things like the Oracle instance must be restarted after defining a new Oracle Directory, or that every path specified by an Oracle Directory must have a separate "mount point" on the Oracle Linux server, but I don't have easy access to do those things. The RED_OUTPUT directory can currently be used to READ an existing file if I copy one to that location using Windows Explorer. What is likely the issue with not being able to WRITE to this new RED_OUTPUT directory, and are any of these additional steps (restart, mounting, etc.) necessary to make this work?
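A hedged sketch of checks that can be run without OS access (the views are standard; the grant at the end would need a DBA): confirm the directory definition as the database sees it, and confirm your account actually holds WRITE on the new directory object, since READ and WRITE are granted per directory.

<code>
-- Definition as the database sees it (path case matters on Linux)
select directory_name, directory_path
from   all_directories
where  directory_name in ('RED', 'RED_OUTPUT');

-- Privileges granted on the directory objects
select grantee, table_name, privilege
from   all_tab_privs
where  table_name in ('RED', 'RED_OUTPUT');

-- If WRITE is missing, a DBA would need to run something like:
-- grant read, write on directory RED_OUTPUT to <your_user>;
</code>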
Categories: DBA Blogs

Is there a view that a DBA can query to find out if "ORA-02393: exceeded call limit on CPU usage" occurred

Thu, 2021-01-14 12:46
Greetings, I've seen that when the "cpu_per_call" resource limit is reached, ORA-02393 is sent to SQL*Plus. Is there a view that a DBA can query to find out whether "ORA-02393: exceeded call limit on CPU usage" occurred for applications using the database, since it isn't written to the alert log? Thanks, John
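For context, a hedged sketch of how the limit in question is typically set up (profile and user names are made up); the open question is where the resulting ORA-02393 events can be seen afterwards.

<code>
-- cpu_per_call is in hundredths of a second: 100 = 1 CPU second per call
create profile cpu_limited limit cpu_per_call 100;
alter user app_user profile cpu_limited;

-- resource_limit must be TRUE for profile limits to be enforced
alter system set resource_limit = true;
</code>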
Categories: DBA Blogs

DIFFERENCE BETWEEN ANALYZE AND DBMS_STATS

Thu, 2021-01-14 12:46
What is the difference between ANALYZE and DBMS_STATS?
Categories: DBA Blogs

How to pass a parameter to a GET Handler in APEX?

Thu, 2021-01-14 12:46
Hello, I created a PL/SQL function that returns a list of open balances as a table result, where all amounts are converted to the currency provided as an input parameter: <code>function my_pkg.my_func (pi_currency in NUMBER default NULL) return amount_tab pipelined; </code> I created an Oracle REST Data Service with only a GET handler: <code>select * from table(my_pkg.my_func(:to_currency)) ;</code> I tested it with Advanced REST Client and it works as expected with an additional header for the to_currency parameter. In APEX I declared a REST Data Source for the above REST service, then I made an APEX page with an IG region based on that REST source, and it works well as long as I do not provide a parameter, i.e. while to_currency is null. When I try to populate <b>{"to_currency":"USD"}</b> in the External Filter attribute, the application crashes. I googled the problem but found nothing. Is there any other standard way to pass a non-column parameter to the GET handler in APEX, or should I write my own procedure to call the REST service, e.g. by using APEX_EXEC? Thank you and best regards, Alex
Categories: DBA Blogs

blob to clob on ORDS Handler Definition

Thu, 2021-01-14 12:46
Hi! I'm trying to send a POST request with this JSON:

<code>
{
  "id": 12344444,
  "email": "ppppoddddddppp@gmail.com",
  "first_name": "",
  "last_name": "",
  "billing": {
    "first_name": "22222",
    "last_name": "",
    "company": "",
    "address_1": "",
    "address_2": "",
    "city": "",
    "postcode": "",
    "country": "",
    "state": "",
    "email": "",
    "phone": ""
  }
}
</code>

I'm trying to use apex_json to extract information like "company" that is nested in "billing". I read the following guide: https://oracle-base.com/articles/misc/apex_json-package-generate-and-parse-json-documents-in-oracle#parsing-json and it works, but not inside an ORDS Handler Definition. I'm trying to use the following code, but it is not inserting the data and returning "201":

<code>
DECLARE
  l_json_payload clob;
  l_blob_body    blob := :body;
  l_dest_offset  integer := 1;
  l_src_offset   integer := 1;
  l_lang_context integer := dbms_lob.default_lang_ctx;
  l_warning      PLS_INTEGER := DBMS_LOB.warn_inconvertible_char;
BEGIN
  if dbms_lob.getlength(l_blob_body) = 0 then
    :status_code := 400; --error
    :errmsg := 'Json is empty';
    return;
  end if;

  dbms_lob.createTemporary(lob_loc => l_json_payload, cache => false);
  dbms_lob.converttoclob(
    dest_lob     => l_json_payload,
    src_blob     => l_blob_body,
    amount       => dbms_lob.lobmaxsize,
    dest_offset  => l_dest_offset,
    src_offset   => l_src_offset,
    blob_csid    => dbms_lob.default_csid,
    lang_context => l_lang_context,
    warning      => l_warning);

  APEX_JSON.parse(l_json_payload);

  INSERT INTO ACCOUNTS (
    wp_id, name, email, f_name, l_name, wp_role, wp_username,
    woo_is_paying_customer, woo_billing_first_name
  ) VALUES (
    :id,
    :first_name || ' ' || :last_name,
    :email,
    :first_name,
    :last_name,
    :role,
    :username,
    decode(:is_paying_customer, 'false', 'N', 'Y'),
    APEX_JSON.get_varchar2(p_path => 'billing.first_name')
  );

  :status_code := 201; --created
EXCEPTION
  WHEN OTHERS THEN
    :status_code := 400; --error
    :errmsg := SQLERRM;
END;
</code>

Update: after testing, the problem is in this line:

<code>l_blob_body blob := :body;</code>

When I include it, nothing is inserted into the database.

Update 2: after more testing, I realized that it is not possible to combine :body with other bind values, so APEX_JSON.get_varchar2(p_path => 'billing.first_name') should be used instead. So the problem was solved.
Categories: DBA Blogs

MATERIALIZED VIEW Performance Issue!

Thu, 2021-01-14 12:46
I have created an MV on a UAT server. My MV uses a query with remote connectivity to PROD, with select-only rights on the referenced tables; each table has around a million (10 lakh) rows, but after aggregation the query output is only 139-150 rows. The query alone, without the MView, takes 60 seconds, but when I use:

<code>
CREATE MATERIALIZED VIEW
NOCOMPRESS NOLOGGING
BUILD IMMEDIATE
USING INDEX
REFRESH FORCE ON DEMAND NEXT null
USING DEFAULT LOCAL ROLLBACK SEGMENT
USING ENFORCED CONSTRAINTS
DISABLE QUERY REWRITE
AS "query"
</code>

the MView creation takes one hour, and after that each refresh takes 20-30 minutes. This is not acceptable, because the data feeds a dashboard with a 3-minute delay, which is the window the MV refresh has to fit into. I don't have privileges to check anything on the PROD DB, but on UAT I have sufficient access. I have tried many options and nothing worked, so please help me find a solution, or, if there is no solution, the reason behind this. In addition, when my MView refreshes, the explain plan shows "INSERT /*+ BYPASS_RECURSIVE_CHECK */ INTO abc". I am really stuck here and have tried hard to resolve this or to find a reason I can explain to the relevant team. What I have tried: 1. CREATE TABLE with the same query, which took less than a minute. 2. A plain INSERT statement, which also works fine, taking the same time. 3. The MV refresh option with atomic_refresh => false, but it didn't help. Please let me know if you need any more info. Note: my MV query uses PROD tables (approx. 4 tables) over a DB link from UAT. The PROD server has a separate user which has been given select rights on the tables below:

<code>
select count(*) from abc@prod;  -- 800000
select count(*) from abc1@prod; -- 700000
select count(*) from abc2@prod; -- 200000
</code>
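For reference, a sketch of the refresh variant mentioned in point 3 (the MV name is a placeholder): with atomic_refresh => false, a complete refresh truncates the MV and reloads it with a direct-path insert, instead of a delete followed by a conventional insert.

<code>
-- Complete, non-atomic refresh: truncate + direct-path insert
exec dbms_mview.refresh('MY_MV', method => 'C', atomic_refresh => false);
</code>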
Categories: DBA Blogs
