Wednesday, 19 October 2011

Transaction Management with LogMiner and Flashback Data Archive

LogMiner is an often ignored yet very powerful tool in the Oracle Database. It is used to extract DML statements from the redo log files—the original SQL that caused the transaction and even the SQL that can undo the transactions. (For an introduction to LogMiner and how it works, refer to my Oracle Magazine article "Mining for Clues.") Until now, this powerful tool was generally under-appreciated due to the lack of a simpler interface. In Oracle Database 11g, however, Oracle Enterprise Manager has a graphical interface to extract transactions from the redo logs using LogMiner,


which makes it very easy to use the tool to examine and roll back transactions. (Note: As in earlier versions, you can continue to use the DBMS_LOGMNR package to perform command line-driven log mining if you wish.)
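For comparison, a command line session with DBMS_LOGMNR typically follows the pattern sketched below. This is only a minimal outline: the archived log file path is hypothetical, and I'm assuming the online catalog is used as the LogMiner dictionary.

begin
   -- register the log file(s) to mine; the path here is hypothetical
   dbms_logmnr.add_logfile (
        logfilename => '/u01/oradata/arch/log_0001.arc',
        options     => dbms_logmnr.new
   );
   -- start the miner, using the online catalog as the dictionary
   dbms_logmnr.start_logmnr (
        options => dbms_logmnr.dict_from_online_catalog
   );
end;
/

select username, xid, sql_redo, sql_undo
from v$logmnr_contents
where username = 'SCOTT';

exec dbms_logmnr.end_logmnr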
Let's see an example of how the graphical approach works. To enable log mining, you need only minimal supplemental logging enabled for the database, or at least for the table. Flashback Transaction, however, requires primary key logging. To enable both for the entire database, issue the following commands:

SQL> alter database add supplemental log data;
 
Database altered.
 
SQL> alter database add supplemental log data (primary key) columns;
 
Database altered.
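If you want to confirm that the settings took effect, a quick sanity check against V$DATABASE (both columns should now show YES) is:

SQL> select supplemental_log_data_min, supplemental_log_data_pk
  2  from v$database;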

Now, consider the following statements issued by an application against your database:
SQL> insert into res values (100002,sysdate,12,1);
 
1 row created.
 
SQL> commit;
 
Commit complete.

SQL> update res set hotel_id = 13 where res_id = 100002;

1 row updated.
 
SQL> commit;
 
Commit complete.

SQL> delete res where res_id = 100002;
 
1 row deleted.
 
SQL> commit;
 
Commit complete. 
 
Note the statements carefully: each one is followed by a commit statement, which means that each statement is a transaction. Now let's see how you can examine those transactions in LogMiner using Oracle Database 11g Database Control.
In Enterprise Manager, from the Database homepage, go to the tab labeled Availability.
 

 Click View and Manage Transactions, listed under Manage. This brings up the main LogMiner interface, as shown below: 
 

You can enter specific ranges of time or SCNs to search for transactions. In the screen above, I have entered a time range in the Query Time Range section. In the Query Filter, I have restricted the search to SCOTT's transactions, because that was the account used to perform all the DMLs. In the Advanced Query section, you can enter any additional filter. After all the fields are filled in, click Continue.
This kicks off the log mining process, which searches through the redo logs (both online and archived, if needed) and finds the transactions issued by the user SCOTT. After the process is complete, you will see the results screen.
The top portion of the results screen looks like this:


The results show that the search found two transactions by SCOTT, which changed two records.
The bottom portion of the screen shows the details of those transactions. Here is a partial view of the screen. You can see the transactions shown as "1 ins" (meaning "1 insert statement"). The leftmost column shows the transaction identifiers (XID), a number that uniquely identifies a transaction.
 
If you click on that transaction identifier, you can see the details of that transaction as shown in the screen below:

As you can see, you can use Database Control to search for and examine transactions. Click the buttons Previous Transaction and Next Transaction to scroll through all the transactions found by the search.

Use Cases

How can you use this feature? Well, in a number of ways. The most important use may be to find out "who" did "what." If you don't have auditing enabled for performance reasons, or simply haven't retained the audit trail, all you have to do is search for the evidence in the LogMiner interface by mining the redo logs—online as well as archived ones. In the search screen, you can enter additional filtering conditions in the Advanced Query field under Query Filter.
Suppose you want to locate a transaction in which the record for RES_ID = 100002 was inserted, deleted, or updated. You can search for a specific value in the redo stream using the function column_present in the dbms_logmnr package, as shown below:


This search will retrieve all the transactions that touched the value 100002 in the RES_ID column of the RES table under the SCOTT schema.
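If you prefer the command line, an equivalent search against V$LOGMNR_CONTENTS is sketched below, using column_present together with its companion function mine_value. This is my own approximation of what the graphical filter does, not a literal transcript of it:

select xid, sql_redo
from v$logmnr_contents
where dbms_logmnr.column_present(redo_value, 'SCOTT.RES.RES_ID') = 1
and   dbms_logmnr.mine_value(redo_value, 'SCOTT.RES.RES_ID') = '100002';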
You can also use this feature to find the DDL commands issued against the database. To do that, select the radio button View DDL Only in the Query Filter section.
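At the command line, you can approximate the same filter by restricting the OPERATION column, e.g.:

select sql_redo
from v$logmnr_contents
where operation = 'DDL'
and   username = 'SCOTT';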

Backout of Selected Transactions

When you examine a transaction, what might you want to do with it? One possibility—perhaps the very reason you are looking into the transaction in the first place—is that the transaction was made in error and you want to undo it. That's fairly simple: if the transaction is an insert, you just have to delete it; if it is an update, then the undo is updating the row back to the original value.
However, look at the transactions shown in the example carefully. The first transaction inserts a row. The second one updates the row just inserted, and the third one deletes that very row. The first one (the insert) is the transaction you want to back out. But here is a problem: the row has already been deleted by the later transactions, so what is the undo transaction supposed to be in this case?
This is where the dependent transaction viewing feature in Oracle Database 11g comes in handy. Click Flashback Transaction. After several searches, it will present a screen similar to the one below:



This screen shows you the dependent transactions, with their updates and deletes, as well. Now when you back out the transaction, you can back out the dependents too. To do so, select the Cascade radio button in the table below and click OK.

 
It will show you the transactions to be backed out; click the Transaction IDs to see what SQL statements Oracle will issue to undo each transaction.

 
For instance, to undo the insert, it has to issue a delete, as shown above. If you click on the next transaction (just below it), you will see the details of what needs to be done to back that one out:

 
You get the idea. Click Submit and all these transactions will be rolled back, in one sweep. This is the cleanest way to undo a transaction and its dependents.

Command Line Interface

What if you don't have access to Enterprise Manager, or perhaps you want this done through a script? The package DBMS_FLASHBACK, which was also present in Oracle Database 10g, has a new procedure called TRANSACTION_BACKOUT. This procedure is overloaded, so you have to pass the values to the named parameters, as shown below.
declare
   trans_arr xid_array;
begin
   -- the transaction identifiers (XIDs) found earlier, e.g. via LogMiner
   trans_arr := xid_array('030003000D040000','F30003000D04010');
   dbms_flashback.transaction_backout (
        numtxns         => 2,                      -- number of XIDs passed in
        xids            => trans_arr,
        options         => dbms_flashback.cascade  -- back out dependents too
   );
end;
/

The type xid_array is also new in Oracle Database 11g. It exists to allow passing a set of transaction identifiers to the procedure.
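A couple of usage notes: besides CASCADE, the package documents other option constants—NOCASCADE (the default, which raises an error if dependent transactions exist), NOCASCADE_FORCE, and NONCONFLICT_ONLY. Also, TRANSACTION_BACKOUT performs the compensating DML but does not commit it; you inspect the result and then issue the commit (or rollback) yourself.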


Other LogMiner Improvements

If you have been using XMLType as a data type—and you have more reasons to use it in Oracle Database 11g—you will be happy to see that XML data is mined as well in LogMiner. It shows up in both the SQL_REDO and SQL_UNDO columns.
You can set an option called SKIP_CORRUPTION while starting LogMiner that skips the corrupt blocks in the redo logs. So you can still salvage valid data from the redo logs even if they are partially corrupt. Here is how you can use the enhanced syntax:
begin
   -- assumes the log files to mine have already been registered
   -- with dbms_logmnr.add_logfile
   dbms_logmnr.start_logmnr(
        options => dbms_logmnr.skip_corruption
   );
end;
/
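The options flags are additive, so—as a sketch—you could combine corruption skipping with the online catalog dictionary like this (again assuming the log files have already been added):

begin
   dbms_logmnr.start_logmnr(
        options => dbms_logmnr.skip_corruption +
                   dbms_logmnr.dict_from_online_catalog
   );
end;
/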


Flashback Data Archive

Oracle9i Database Release 2 introduced the proverbial time machine in the form of Flashback Query, which allows you to select the pre-changed version of the data. For example, had you changed a value from 100 to 200 and committed, you could still select the value as of two minutes earlier, even though the change was committed. This technology used the pre-change data from the undo segments. In Oracle Database 10g, this facility was enhanced with the introduction of Flashback Versions Query, which lets you track the changes made to a row as long as the changes are still present in the undo segments.
However, there was a little problem: when the database is restarted, the undo data is cleaned out and the pre-change values disappear. Even if the database is not restarted, the data may be aged out of the undo segments to make room for new changes.
Since pre-11g flashback operations depend on undo data, which is available only for a short duration, you can't really use them over an extended period of time or for more permanent records, such as for auditing. As a workaround, we resorted to writing triggers to make more permanent records of the changes to the database.
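As a refresher, a Flashback Versions Query on the TRANS table used later in this article would look something like this; the pseudocolumns are standard, but the 15-minute window is just an illustration:

select versions_startscn, versions_endscn, versions_operation, txn_amt
from trans
     versions between timestamp systimestamp - interval '15' minute
     and systimestamp
where trans_id = 2;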
Well, don't despair. In Oracle Database 11g, Flashback Data Archive combines the best of both worlds: it offers the simplicity and power of flashback queries but does not rely on transient storage like undo. Rather, it records changes in a more permanent location: the Flashback Data Archive.
Let's look at an example. (Note: you need Automatic Undo Management enabled for Flashback Data Archive to work.) First, you create a Flashback Data Archive, as shown below:
SQL> create flashback archive near_term
  2  tablespace far_near_term
  3  retention 1 month
  4  /
 
Flashback archive created.

For the time being, disregard the significance of the clause "retention"; we will revisit it later. (The archive is the location where the changes will be recorded.) The archive is created in the tablespace far_near_term.

Assume you have to record changes to a table called TRANS. All you need to do is enable the Flashback Data Archive status of the table to start recording the changes in that archive.
SQL> alter table trans flashback archive near_term;
 
Table altered.

This puts the table into Flashback Data Archive mode. All changes to the rows of the table will now be tracked permanently. Let's see a demonstration.

First, select a specific row of the table.
SQL> select txn_amt from trans where trans_id = 2;
 
   TXN_AMT
----------
  19325.67
 
SQL> update trans set txn_amt = 2000 where trans_id = 2;
 
1 row updated.
 
SQL> commit;
 
Commit complete.

Now, if you select the row, it will always show 2000 in this column. To find out the original value as of a certain time, you can use Flashback Query, as shown below:
 
select txn_amt
from trans
as of timestamp to_timestamp ('07/18/2007 12:39:00','mm/dd/yyyy hh24:mi:ss')
where trans_id = 2;
 
   TXN_AMT
----------
  19325.67
Now, after some time, when the undo data has been purged out of the undo segments, query the flashback data again:
 
select txn_amt
from trans
as of timestamp to_timestamp ('07/18/2007 12:39:00','mm/dd/yyyy hh24:mi:ss')
where trans_id = 2;

It comes back with the result: 19325.67. The undo is gone, so where did the data come from?
Let's ask Oracle. You can do that using autotrace and examining the execution plan:
                               
SQL> set autotrace traceonly explain
SQL> select txn_amt
  2  from trans
  3  as of timestamp to_timestamp ('07/18/2007 12:39:00','mm/dd/yyyy hh24:mi:ss')
  4  where trans_id = 2;

Execution Plan
----------------------------------------------------------
Plan hash value: 535458644

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name               | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                    |     2 |    52 |    10  (10)| 00:00:01 |       |       |
|   1 |  VIEW                     |                    |     2 |    52 |    10  (10)| 00:00:01 |       |       |
|   2 |   UNION-ALL               |                    |       |       |            |          |       |       |
|*  3 |    FILTER                 |                    |       |       |            |          |       |       |
|   4 |     PARTITION RANGE SINGLE|                    |     1 |    52 |     3   (0)| 00:00:01 |     1 |     1 |
|*  5 |      TABLE ACCESS FULL    | SYS_FBA_HIST_68909 |     1 |    52 |     3   (0)| 00:00:01 |     1 |     1 |
|*  6 |    FILTER                 |                    |       |       |            |          |       |       |
|*  7 |     HASH JOIN OUTER       |                    |     1 |  4053 |    10  (10)| 00:00:01 |       |       |
|*  8 |      TABLE ACCESS FULL    | TRANS              |     1 |    38 |     6   (0)| 00:00:01 |       |       |
|   9 |      VIEW                 |                    |     2 |  8030 |     3   (0)| 00:00:01 |       |       |
|* 10 |       TABLE ACCESS FULL   | SYS_FBA_TCRV_68909 |     2 |  8056 |     3   (0)| 00:00:01 |       |       |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(NULL IS NOT NULL)
   5 - filter("TRANS_ID"=2 AND "ENDSCN">161508784336056 AND "ENDSCN"<=1073451 AND ("STARTSCN" IS NULL
              OR "STARTSCN"<=161508784336056))
   6 - filter("F"."STARTSCN"<=161508784336056 OR "F"."STARTSCN" IS NULL)
   7 - access("T".ROWID=("F"."RID"(+)))
   8 - filter("T"."VERSIONS_STARTSCN" IS NULL AND "T"."TRANS_ID"=2)
  10 - filter(("ENDSCN" IS NULL OR "ENDSCN">1073451) AND ("STARTSCN" IS NULL OR "STARTSCN"<1073451))

Note
-----
   - dynamic sampling used for this statement
This output answers the riddle "where did the data come from?": it came from the table SYS_FBA_HIST_68909, which is a location in the Flashback Data Archive you defined earlier for that table. You can check the table, but it's not supported by Oracle for you to peek directly at the data there. Anyway, I don't see a reason you would want to do that.
The data inside the archive is kept, but for how long? This is where the retention period comes into play. The data is kept up to that period; after that, when new data comes in, the older data is purged. You can also purge it yourself, e.g.

alter flashback archive near_term purge before scn 1234567;
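A timestamp-based variant is available too; for instance, to purge everything older than a day, or simply everything:

alter flashback archive near_term
  purge before timestamp (systimestamp - interval '1' day);

alter flashback archive near_term purge all;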
 

Managing Flashback Archives

You can add more than one tablespace to an archive. Conversely, you can remove a tablespace from one, too. If you plan to use a tablespace that holds other user data as well, you run the risk of filling the tablespace with Flashback Data Archive data and leaving no space for the user data. To reduce that risk, you can place a quota on how much space the archive can take within the tablespace. You can set the quota by:

alter flashback archive near_term modify tablespace far_near_term quota 10M;
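The add and remove operations mentioned above follow the same pattern; here is a sketch with a hypothetical tablespace name:

alter flashback archive near_term add tablespace fba_extra quota 500M;

alter flashback archive near_term remove tablespace fba_extra;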

You can check which tables have Flashback Data Archive turned on by querying the dictionary view:

SQL> select * from user_flashback_archived_tables;
 
TABLE_NAME                     OWNER_NAME
------------------------------ ------------------
FLASHBACK_ARCHIVE_NAME
-------------------------------------------------
TRANS                          ARUP
NEAR_TERM

You can find out about the archives by querying the dictionary view:
 
SQL> select * from flashback_archives;
 
FLASHBACK_ARCHI FLASHBACK_ARCHIVE# RETENTION_IN_DAYS  PURGE_SCN STATUS
--------------- ------------------ ----------------- ---------- -------
NEAR_TERM                        1                30    1042653
MED_TERM                         2               365    1042744
LONG_TERM                        3              1825    1042838
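For reference, the MED_TERM and LONG_TERM archives shown above could have been created along these lines (the tablespace names are hypothetical):

create flashback archive med_term
  tablespace far_med_term
  retention 1 year;

create flashback archive long_term
  tablespace far_long_term
  retention 5 year;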


Using multiple archives lets you apply them effectively in different situations. For instance, a hotel company's database may need one year of reservation data but three years of payment data. So you can define multiple archives with different retention policies and assign them to the tables. Or, if you have a uniform retention policy, you can define just one archive and make it the default:
 
alter flashback archive near_term set default;

When you no longer need an archive for a table, you can turn it off with:
 
 alter table trans no flashback archive;

As you can see, you just enabled a powerful change-recording system without writing a single line of code.

Differences vs. Regular Auditing

How does Flashback Data Archive differ from regular auditing? First of all, the latter requires the audit_trail parameter to be set to DB or DB_EXTENDED, and the trails are written to the table named AUD$ in the SYSTEM tablespace. Flashback Data Archives can be defined on any tablespace (or on more than one, even on parts of tablespaces where user data exists) and thus can be placed on cheaper storage.
Second, auditing is based on autonomous transactions, which carry some performance overhead. Flashback Data Archives are written by a dedicated background process called FBDA, so there is less impact on performance.
Finally, Flashback Data Archives can be purged at regular intervals automatically. Audit trails must be manually maintained.

Use Cases

Flashback Data Archive is handy for many purposes. Here are some ideas:
  • To record how data changed, for auditing purposes
  • To enable an application to undo changes (correct mistakes)
  • To debug how data has been changed
  • To comply with regulations that require that data must not be changed after some time. Flashback Data Archives are not regular tables, so they can't be changed by typical users.
  • To record audit trails on cheaper storage, thereby allowing longer retention at lower cost

 
