Channel: Deiby Gomez's Activities

Oracle Database 12.2 - How to track index usage


By Deiby Gómez

Introduction

Several articles have been written about how to track index usage, and there are several scripts to determine which indexes are being used after monitoring for a while. In versions prior to Oracle Database 12cR2 there is the clause “ALTER INDEX (…) MONITORING USAGE” that can be used for this. However, Oracle 12.2 introduced two new views that automatically monitor index usage:

V$INDEX_USAGE_INFO: V$INDEX_USAGE_INFO keeps track of index usage since the last flush. A flush occurs every 15 minutes. After each flush, ACTIVE_ELEM_COUNT is reset to 0 and LAST_FLUSH_TIME is updated to the current time.

DBA_INDEX_USAGE: DBA_INDEX_USAGE displays cumulative statistics for each index.

With these two new views, Oracle automatically tracks the usage of indexes. There are several columns in DBA_INDEX_USAGE that can be used to find out how many accesses the indexes have received and how many rows they have returned, and, even better, there are buckets to create histograms for accesses and rows returned. The most recent time that each index was used is also recorded.
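
Because the statistics are flushed only every 15 minutes, it can be handy to check when the last flush happened before querying DBA_INDEX_USAGE. A minimal sketch, using the two V$INDEX_USAGE_INFO columns described above:

SQL> select active_elem_count, last_flush_time from v$index_usage_info;

If ACTIVE_ELEM_COUNT is still greater than 0, some recently used indexes have not yet been flushed into DBA_INDEX_USAGE.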

In the following example, I will create a table with three columns, with one index on each column. Then I will run some queries against the table in order to use the indexes, and we will confirm that Oracle 12.2 does indeed track the usage.

Creating the table

SQL> create table dgomez.table1 (id number, val1 varchar2(20), val2 varchar2(20));

Table created.

Creating an index on each column

SQL> create index dgomez.idx_id on dgomez.table1(id);

Index created.

 

SQL> create index dgomez.idx_val1 on dgomez.table1(val1);

Index created.

 

SQL> create index dgomez.idx_val2 on dgomez.table1(val2);

Index created.

Perform some INSERTs in the table

While the INSERT statements also touch the indexes (index entries must be created in the B-tree), this doesn’t count as an “access”.

SQL> insert into dgomez.table1 values (1,'a','b');
SQL> insert into dgomez.table1 values (2,'b','c');
SQL> insert into dgomez.table1 values (3,'c','d');
SQL> insert into dgomez.table1 values (4,'d','e');
SQL> insert into dgomez.table1 values (5,'e','f');
SQL> insert into dgomez.table1 values (6,'f','g');
SQL> insert into dgomez.table1 values (7,'g','h');
SQL> insert into dgomez.table1 values (8,'h','i');
SQL> insert into dgomez.table1 values (9,'i','j');
SQL> insert into dgomez.table1 values (10,'j','k');
SQL> insert into dgomez.table1 values (11,'k','l');
SQL> commit;

Executing some queries

I will execute some queries with autotrace enabled to confirm that each query uses an index; each of these counts as an “access”. Also pay attention to how many rows each query returns, since this count is also monitored by Oracle. At the end, we will list how many accesses each index received and how many rows it returned, and we will confirm whether the data displayed is correct.
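
For reference, autotrace can be enabled in SQL*Plus before running the queries; a minimal sketch (ON EXPLAIN is enough here, since we only need the execution plan):

SQL> set autotrace on explain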

Using the index IDX_ID:

SQL> select id from dgomez.table1 where id>1;

10 rows selected.

---------------------------------------------------------------------------
| Id | Operation        | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
| 0  | SELECT STATEMENT |        | 10    | 130   | 1 (0)      | 00:00:01 |
|* 1 | INDEX RANGE SCAN | IDX_ID | 10    | 130   | 1 (0)      | 00:00:01 |
---------------------------------------------------------------------------

SQL> select id from dgomez.table1 where id>0;

11 rows selected.

---------------------------------------------------------------------------
| Id | Operation        | Name   | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
| 0  | SELECT STATEMENT |        | 11   | 143   | 1 (0)      | 00:00:01 |
|* 1 | INDEX RANGE SCAN | IDX_ID | 11   | 143   | 1 (0)      | 00:00:01 |
---------------------------------------------------------------------------

Using the index IDX_VAL1:

SQL> select val1 from dgomez.table1 where val1 !='a';

10 rows selected.

-----------------------------------------------------------------------------
| Id | Operation        | Name     | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
| 0  | SELECT STATEMENT |          | 10   | 120   | 1 (0)      | 00:00:01 |
|* 1 | INDEX FULL SCAN  | IDX_VAL1 | 10   | 120   | 1 (0)      | 00:00:01 |
-----------------------------------------------------------------------------

SQL> select val1 from dgomez.table1 where val1 !='z';

11 rows selected.

-----------------------------------------------------------------------------
| Id | Operation        | Name     | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
| 0  | SELECT STATEMENT |          | 11   | 132   | 1 (0)      | 00:00:01 |
|* 1 | INDEX FULL SCAN  | IDX_VAL1 | 11   | 132   | 1 (0)      | 00:00:01 |
-----------------------------------------------------------------------------

Using the index IDX_VAL2:

SQL> select val2 from dgomez.table1 where val2 !='b';

10 rows selected.

-----------------------------------------------------------------------------
| Id | Operation        | Name     | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
| 0  | SELECT STATEMENT |          | 10   | 120   | 1 (0)      | 00:00:01 |
|* 1 | INDEX FULL SCAN  | IDX_VAL2 | 10   | 120   | 1 (0)      | 00:00:01 |
-----------------------------------------------------------------------------

SQL> select val2 from dgomez.table1 where val2 !='z';

11 rows selected.

-----------------------------------------------------------------------------
| Id | Operation        | Name     | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
| 0  | SELECT STATEMENT |          | 11   | 132   | 1 (0)      | 00:00:01 |
|* 1 | INDEX FULL SCAN  | IDX_VAL2 | 11   | 132   | 1 (0)      | 00:00:01 |
-----------------------------------------------------------------------------

Confirming the information captured

Now let’s take a look at the information captured by Oracle. In the previous part of this demo I executed two queries against each index. The first query always returned 10 rows, and the second query returned 11 rows; this means each index has returned 21 rows in total. Now let’s confirm these values:

SQL> select name, total_access_count, total_exec_count, total_rows_returned, last_used from DBA_INDEX_USAGE where owner='DGOMEZ';

NAME      TOTAL_ACCESS_COUNT TOTAL_EXEC_COUNT TOTAL_ROWS_RETURNED LAST_USED
--------- ------------------ ---------------- ------------------- ---------------------
IDX_ID                     2                2                 21   07-16-2017 18:58:43
IDX_VAL1                   2                2                 21   07-16-2017 18:58:43
IDX_VAL2                   2                2                 21   07-16-2017 18:58:43

 

Fortunately, the information about every query I executed was captured, but it seems not all SELECTs are captured, as Franck Pachot explains in this article. I also saw that if the queries are executed by SYS, the index usage is not captured.

The following output shows how many accesses the index has received:

SQL> select name, bucket_1_access_count, bucket_2_10_access_count, bucket_11_100_access_count, bucket_101_1000_access_count from DBA_INDEX_USAGE where owner='DGOMEZ';

NAME      BUC_1_ACC_CT BUC_2_10_ACC_CT BUC_11_100_ACC_CT BUC_101_1000_ACC_CT
--------- ------------ --------------- ----------------- -------------------
IDX_ID               0               1                 1                  0
IDX_VAL1             0               1                 1                  0
IDX_VAL2             0               1                 1                  0

 

The definition of the column “BUCKET_11_100_ACCESS_COUNT” is “The index has been accessed between 11 and 100 times.” At first glance this definition seems incorrect, because I executed the same query only twice for each index; I didn’t execute any query that accessed the index between 11 and 100 times.

So apparently this column actually counts row-level accesses, not operations. Since the first SELECT accessed the index 10 times (it returned 10 rows), bucket_2_10_access_count was increased by one. It is the same for the second query, which accessed the index 11 times because it returned 11 rows; bucket_11_100_access_count was increased by one.

But… Wait! TOTAL_ACCESS_COUNT says every index was accessed only two times in total. So, there are some inconsistent definitions here:

  • Either there were two accesses of every index because I executed two SELECT operations that touched the index, in which case TOTAL_ACCESS_COUNT is correct but BUCKET_11_100_ACCESS_COUNT is not, because I didn’t execute any query that used the index more than 10 times and fewer than 101 times.
  • Or BUCKET_11_100_ACCESS_COUNT is correct and it doesn’t count operations (SELECTs in this case) but rather every access to the b-tree nodes of the index, in which case the definition of TOTAL_ACCESS_COUNT is wrong.

In the following output we can confirm that the rows-returned buckets also received the expected information: bucket_2_10_rows_returned corresponds to the first query, which returned 10 rows for every index, and bucket_11_100_rows_returned corresponds to the second query, which returned 11 rows for every index.

SQL> select name, bucket_2_10_rows_returned, bucket_11_100_rows_returned, bucket_101_1000_rows_returned from DBA_INDEX_USAGE where owner='DGOMEZ';

NAME      BUC_2_10_RW_RETD BUC_11_100_RW_RETD BUC_101_1000_RW_RETD
--------- ---------------- ------------------ ---------------------
IDX_ID                  10                 11                     0
IDX_VAL1                10                 11                     0
IDX_VAL2                10                 11                     0

Conclusion

Oracle has been introducing new views that provide very useful information to DBAs, so that they can administer their databases properly and diagnose problems before they become reactive ones. For several years scripts, third-party tools, the ALTER INDEX monitoring clause, and so on were used to track index usage, but this has changed: Oracle now performs this tracking automatically, without a performance overhead.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter LinkedIn


Invisible Columns in Oracle 12c


Starting in Oracle 12.1.0.1 there are several new features (more than 500, I have heard), and one of the good ones for developers is "Invisible Columns". Invisible columns allow a developer to create a table with some special columns that are not shown to everybody using the table. To get the value of such a column, whoever performs DML against the table must specify the column name explicitly; otherwise the table behaves as if it didn't have that column. This is useful when an application has changed but some users are still using the former "structure" of the table: the new users know that they must specify the new columns explicitly, while the old users can keep using the former structure without issues. I will show you a couple of examples in this article so you get to know all the "properties" of invisible columns.

To begin, you have to know that invisible columns can be created at table creation time; the column definition syntax has changed a little bit and now accepts an optional VISIBLE or INVISIBLE keyword after the datatype.

Now let's create a table with invisible columns:

SQL> create table dgomez.TableWithInvisibleColumns (
col1 varchar2 (20) visible,
col2 varchar2 (20) invisible); 

Table created.

Now let's  see how DMLs work with Invisible Columns:

 

Insert Operations: 

In an insert operation, if we don't specify the invisible column explicitly but still try to use it, we get an error. For example, in the following statement I am not listing the column "col2" (our invisible column) explicitly, yet I am trying to use it because I am inserting two values:

SQL> insert into dgomez.TableWithInvisibleColumns values ('b','b');
insert into dgomez.TableWithInvisibleColumns values ('b','b')
*
ERROR at line 1:
ORA-00913: too many values

SQL>

The correct way to use the invisible column is the following, naming "col2" explicitly; this lets Oracle know that we are aware of the invisible column and indeed want to use it:

SQL> insert into dgomez.TableWithInvisibleColumns (col1, col2) values ('a','a');

1 row created.

SQL>

 

Select Operations:

A select operation works the same way: if we want to get the values of the invisible columns, we have to specify their names in the SELECT statement. For example, in the following statement we are trying to get all the columns from the table "dgomez.TableWithInvisibleColumns", but only one column is returned. Even specifying "*" is not a guarantee for Oracle that we are aware of the invisible column, so Oracle returns only the "visible" columns.

SQL> select * from dgomez.TableWithInvisibleColumns;

COL1
--------
a

If we want to get the values of the invisible columns, we have to specify their names, as in the following example:

SQL> select col1, col2 from dgomez.TableWithInvisibleColumns;

COL1  COL2
----- -----
a     a

SQL>

Are the values stored physically into the table?

Yes. Invisible columns are not the same as "Virtual Columns"; those are totally different. With virtual columns, the value (or the function that produces the value) is stored as metadata of the column, but the value is not stored physically (indexes are a different case, as you can read in my last article). With invisible columns the value is in fact stored physically; only the visibility is managed as metadata, but the data is there.
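
The dump below was taken from the data block that stores the row we inserted earlier. A sketch of how such a dump can be produced; the file and block numbers come from DBMS_ROWID (6 and 227 are the values seen in this demo's output), and an ALTER SYSTEM CHECKPOINT beforehand makes sure the row has been written to disk:

SQL> select dbms_rowid.rowid_relative_fno(rowid) as file_no,
            dbms_rowid.rowid_block_number(rowid) as block_no
     from dgomez.TableWithInvisibleColumns;

SQL> alter system checkpoint;

SQL> alter system dump datafile 6 block 227;

The dump is written to the session's trace file.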


data_block_dump,data header at 0x7f340fe60264
===============
tsiz: 0x1f98
hsiz: 0x14
pbl: 0x7f340fe60264
76543210
flag=--------
ntab=1
nrow=1
frre=-1
fsbo=0x14
fseo=0x1f91
avsp=0x1f7b
tosp=0x1f7b
0xe:pti[0] nrow=1 offs=0
0x12:pri[0] offs=0x1f91
block_row_dump:
tab 0, row 0, @0x1f91
tl: 7 fb: --H-FL-- lb: 0x1 cc: 2
col 0: [ 1] 61  
--> In ascii 'a'
col 1: [ 1] 61  
--> In ascii 'a' (This is the value of Invisible Column)
end_of_block_dump
End dump data blocks tsn: 4 file#: 6 minblk 227 maxblk 227

Metadata of the Invisible Columns:

So, what if I am not just another user of the table? What if I am the DBA and I want to know which columns are invisible and which are not? There should be a way to know this. The first thought would be a "DBA_" view, but which one? We might think that DBA_TAB_COLUMNS has that information, so we run "DESC DBA_TAB_COLUMNS", but unfortunately there is no column called "VISIBLE" or "VISIBILITY" or anything like that. Oracle didn't add a new column to describe the visibility of every column in a table; instead, DBA_TAB_COLUMNS carries this information in a column that already exists: COLUMN_ID. When a column has NULL in COLUMN_ID, that column is invisible, as in the following example:


SQL> select table_name, column_name, column_id from dba_tab_columns where owner='DGOMEZ' and table_name='TABLEWITHINVISIBLECOLUMNS';

TABLE_NAME                COLUMN_NAME  COLUMN_ID
------------------------- ------------ ----------
TABLEWITHINVISIBLECOLUMNS COL1         1
TABLEWITHINVISIBLECOLUMNS COL2

SQL>

We clearly see that the column "COL2" has a NULL COLUMN_ID, which means that COL2 is invisible.
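
Another place to look, as an assumption worth verifying on your release, is the DBA_TAB_COLS view, where invisible columns are expected to show up flagged as hidden but user-generated:

SQL> select column_name, column_id, hidden_column, user_generated
     from dba_tab_cols
     where owner='DGOMEZ' and table_name='TABLEWITHINVISIBLECOLUMNS';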

 

Adding Invisible Columns:

We can create invisible columns not only at table creation time; we can also add them afterwards with "ALTER TABLE". In the following example I add an invisible column and, at the same time, confirm another property of invisible columns: virtual columns can also be invisible:

SQL> alter table dgomez.TableWithInvisibleColumns add (col3 invisible as (col1||col2) virtual ) ;

Table altered.

 

Does the structure of the table show the invisible columns?

To answer this question, let's describe the table. Usually we use "DESCRIBE" to have a quick look at the table's structure:

SQL> desc dgomez.TableWithInvisibleColumns;

Name   Null?  Type
------ ------ ----------------------------
COL1          VARCHAR2(20)

SQL>

But as we can see, the "DESCRIBE" command doesn't show any information about the invisible columns. Now let's extract the structure, this time using "DBMS_METADATA":

SQL> select dbms_metadata.get_ddl('TABLE','TABLEWITHINVISIBLECOLUMNS','DGOMEZ') from dual;

DBMS_METADATA.GET_DDL('TABLE','TABLEWITHINVISIBLECOLUMNS','DGOMEZ')
--------------------------------------------------------------------------------

CREATE TABLE "DGOMEZ"."TABLEWITHINVISIBLECOLUMNS"
( "COL2" VARCHAR2(20) INVISIBLE,
"COL3" VARCHAR2(40) INVISIBLE GENERATED ALWAYS AS ("COL1"||"COL2") VIRTUAL ,
"COL1" VARCHAR2(20)
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS"

SQL>

There is a very interesting thing here. Do you remember the order in which we created the columns of that table? At table creation I put "COL1" as the first column and "COL2" as the second column; after that I added a third column (COL3) via "ALTER TABLE". But look at how DBMS_METADATA returns the DDL of the table: all the invisible columns are placed at the beginning. If you use that DDL to create new tables and later decide to make those columns VISIBLE, the column order will be different from the original table's DDL.

 

Are indexes supported on Invisible Columns?

The answer is yes. Here are a couple of examples:

SQL> create index dgomez.Index1OnInvisibleColumn on dgomez.TableWithInvisibleColumns (col2);

Index created.

SQL> create index dgomez.Index2OnInvisibleColumn on dgomez.TableWithInvisibleColumns (col2,col3);

Index created.

 

Are Partition Keys supported on Invisible Columns?

This is interesting as well: when creating a partitioned table, we can select an invisible column as the partition key:

SQL> create table dgomez.Table3WithInvisibleColumns (
col1 varchar2 (20),
col2 varchar2 (20) invisible)
partition by hash (col2)
partitions 2;

Table created.

 

How to change the visibility of a column?

To finish this article, I will show you how to change a column from invisible to visible and from visible to invisible:

SQL> alter table dgomez.Table3WithInvisibleColumns modify (col2 visible);

Table altered.

SQL> alter table dgomez.Table3WithInvisibleColumns modify (col2 invisible);

Table altered.

SQL>


Oracle Database 12.2 Statement-level Refresh for Materialized Views


By Deiby Gómez

 

Introduction:

Materialized views have been used for many years, and they are improved by Oracle with every database version and release. Up to Oracle Database 12cR1, materialized views supported the following refresh options:

  • ON DEMAND: You can control the time of refresh of the materialized views.
    • COMPLETE: Refreshes by recalculating the defining query of the materialized view.
    • FAST: Refreshes by incrementally applying changes to the materialized view. For local materialized views, it chooses the refresh method that is estimated by the optimizer to be most efficient. The refresh methods considered are log-based FAST and FAST_PCT.
    • FAST_PCT: Refreshes by recomputing the rows in the materialized view affected by changed partitions in the detail tables.
    • FORCE: Attempts a fast refresh. If that is not possible, it does a complete refresh.
  • ON COMMIT: Whenever a transaction that has updated the tables on which a materialized view is defined commits, those changes are automatically reflected in the materialized view. The only disadvantage is that the time required to complete the commit will be slightly longer because of the extra processing involved.

Starting with Oracle 12cR2, Materialized views can be refreshed ON STATEMENT.

  • ON STATEMENT: With this refresh mode, any changes to the base tables are immediately reflected in the materialized view. There is no need to commit the transaction or maintain materialized view logs on the base tables. If the DML statements are subsequently rolled back, then the corresponding changes made to the materialized view are also rolled back.

The option “ON STATEMENT” was introduced in the refresh clause of the CREATE MATERIALIZED VIEW syntax.
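
A minimal sketch of how it is used (fact_table, dim_table and their columns are hypothetical names used only for illustration):

CREATE MATERIALIZED VIEW mv_sketch
  REFRESH FAST ON STATEMENT
AS
SELECT f.rowid AS rid, f.amount, d.name
FROM fact_table f, dim_table d
WHERE f.dim_id = d.dim_id;

Note that, as the restrictions below require, the fact table's ROWID is included in the SELECT list.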

To use an ON STATEMENT materialized view, the following restrictions must be met:

  • They are for materialized join views only.
  • Base tables referenced in the materialized view defining query must be connected in a join graph of star/snowflake shape.
  • An existing non-ON-STATEMENT materialized view cannot be converted to REFRESH ON STATEMENT.
  • Altering an existing ON STATEMENT materialized view is not allowed.
  • An ON STATEMENT materialized view cannot be created under SYS.
  • An ON STATEMENT materialized view needs to be fast refreshable. You must specify the clause ‘REFRESH FAST’ in the CREATE MATERIALIZED VIEW command; materialized view logs are not required.
  • The defining query needs to include the ROWID column of the fact table in the SELECT list.
  • Be careful with UPDATE operations, because these are not supported on any dimension table; they will make the ON STATEMENT materialized view unusable.
  • TRUNCATE operations on a base table are not supported. They will make the ON STATEMENT materialized view unusable.
  • The defining query should NOT include:
    • invisible column
    • ANSI join syntax
    • complex defining query
    • (inline) view as base table
    • composite primary key
    • long/LOB column

Every type of refresh mode has its own restrictions, and it is difficult to memorize every single restriction for every refresh mode. If you are getting errors like “ORA-12052: cannot fast refresh materialized view”, it’s likely that you are overlooking a restriction. To make this task easier, you can always visit the note Materialized View Fast Refresh Restrictions and ORA-12052 (Doc ID 222843.1), where you will find every single restriction for all the refresh modes.

So enough of the basic concepts of materialized views; it’s time for an example. In the following example I am using Oracle Database Enterprise Edition 12.2.0.1 and creating four tables. Then I will create two materialized views, one ON COMMIT and one ON STATEMENT. I will insert some rows in each of the four tables without committing them. We will query the ON STATEMENT materialized view, analyze the result, and then we will commit the data to finally query the ON COMMIT materialized view and its result.

Creating the tables:

SQL> CREATE TABLE employee (
employee_id number,
name varchar2(20),
phone number,
position varchar2(20),
CONSTRAINT employee_pk PRIMARY KEY (employee_id));

Table created.

SQL> CREATE TABLE department (
department_id number,
name varchar2(20),
CONSTRAINT department_pk PRIMARY KEY (department_id));

Table created.

SQL> CREATE TABLE product (
product_id number,
name varchar2(20),
price number(*,2),
CONSTRAINT product_pk PRIMARY KEY (product_id));

Table created.

SQL> CREATE TABLE purchase (
purchase_code number,
department_id number,
employee_id number,
product_id number,
amount number,
purchase_date date,
CONSTRAINT purchase_pk PRIMARY KEY (purchase_code),
FOREIGN KEY (department_id) REFERENCES department (department_id),
FOREIGN KEY (employee_id) REFERENCES employee (employee_id),
FOREIGN KEY (product_id) REFERENCES product (product_id));

Table created.

 

The advantage of ON STATEMENT materialized views is that there is no need to create materialized view logs in order to create them:

SQL> CREATE MATERIALIZED VIEW onstatement_purchases
REFRESH FAST ON STATEMENT
AS
SELECT p.rowid rid, e.name, p.purchase_code, pr.product_id, p.amount
FROM department d, employee e, purchase p, product pr
WHERE d.department_id=p.department_id and
pr.product_id=p.product_id and
e.employee_id=p.employee_id;

Materialized view created.

One of the disadvantages of using ON COMMIT materialized views is that materialized view logs must be created with “INCLUDING NEW VALUES” and “WITH ROWID” as well as including all the columns that will be referenced inside the materialized view.

CREATE MATERIALIZED VIEW LOG ON purchase WITH PRIMARY KEY,ROWID, SEQUENCE(department_id,employee_id,product_id,amount,purchase_date) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON department WITH PRIMARY KEY,ROWID, SEQUENCE(name) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON employee WITH PRIMARY KEY,ROWID, SEQUENCE(name,phone,position ) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON product WITH PRIMARY KEY,ROWID, SEQUENCE(name,price) INCLUDING NEW VALUES;

 

Creating the ON COMMIT materialized view:

SQL> CREATE MATERIALIZED VIEW oncommit_purchases
REFRESH FAST ON COMMIT
AS
SELECT e.name, p.purchase_code, pr.product_id, p.amount
FROM department d, employee e, purchase p, product pr
WHERE d.department_id=p.department_id and
pr.product_id=p.product_id and
e.employee_id=p.employee_id
group by e.name, p.purchase_code, pr.product_id, p.amount;

Materialized view created.

 

Verifying the refresh mode of each materialized view:

SQL> select owner, mview_name, REFRESH_MODE from dba_mviews where owner='DGOMEZ';

OWNER      MVIEW_NAME                REFRESH_MODE
---------- ------------------------- ------------
DGOMEZ     ONCOMMIT_PURCHASES        COMMIT
DGOMEZ     ONSTATEMENT_PURCHASES     STATEMENT

Now I will insert some rows without committing them:

SQL> Insert into employee values (1,'Jose',55555555,'Manager');

1 row created.

SQL> Insert into department values (1,'Sales');

1 row created.

SQL> Insert into product values (1,'Soda',100.50);

1 row created.

SQL> insert into purchase values (1,1,1,1,100,sysdate);

1 row created.

I will query the materialized view onstatement_purchases, and we will see that it was populated even though the data has not been committed yet:

SQL> select name, purchase_code, product_id, amount from onstatement_purchases;

NAME                 PURCHASE_CODE PRODUCT_ID AMOUNT
-------------------- ------------- ---------- ----------
Jose                             1          1       100

 

However the ON COMMIT materialized view oncommit_purchases is empty:

SQL> select name, purchase_code, product_id, amount from oncommit_purchases;

no rows selected

 

I will commit the rows:

SQL> commit;

Commit complete.

 

As soon as the rows are committed, the ON COMMIT materialized view is populated:

SQL> select name, purchase_code, product_id, amount from oncommit_purchases;

NAME                 PURCHASE_CODE PRODUCT_ID AMOUNT
-------------------- ------------- ---------- ----------
Jose                             1          1        100

 

Conclusion:

Materialized views are frequently used to improve the performance of complex queries and are very popular. Oracle has been improving them, and with the introduction of ON STATEMENT materialized views, DBAs have one more option they can use to meet client requirements or solve performance issues. In this article we looked at some basic concepts of materialized views and at two examples: an ON STATEMENT materialized view, which was populated without committing the data, and an ON COMMIT materialized view, which needed the commit instruction to get populated.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

 

Oracle EM 13c Database's historic data without DBA_HIST*


By Deiby Gómez

Introduction

Data changes frequently in OLTP environments, and Oracle has to be aware of those changes, or at least try to detect them, in order to adjust the optimizer and execute statements in the best possible way. To do so, Oracle generates several metrics from the system, the sessions, the services, and so on, and it also gathers statistics automatically via AUTOTASK.

There is a huge amount of information generated by the metrics, which is captured mainly in AWR repository tables. The information generated by the metrics is very important because by using it the database administrators can perform troubleshooting and capacity planning, analyze the workload over a period of time, and so on.  When there are no performance issues, database administrators mostly think about capacity planning in order to understand how the database is growing over time.  In the past, this information was used to size the new hardware that they had to buy every two or three years, but with Oracle Cloud, that’s a thing of the past. Nowadays this information is used to understand different aspects of the growth of the business.

Businesses impose several different requirements; for example, a business might want to know  about the increase in users consuming their services or products; the DBA would want to know about increased space requirements, increase in physical writes, and so on. These are among several scenarios where historical data is needed to create complex and customized reports.

When we think about historical data, our first thought is AWR/ASH; however, there is another alternative that few DBAs use: the repository views of Enterprise Manager. These views have hundreds of different metrics that are captured automatically by Enterprise Manager and can be used to create customized reports as complex as we could want. Just imagine, hundreds of metrics to play with!

As per the Oracle "Database Licensing Information" documentation (I didn’t find other sources on this), the following views also require the Oracle Diagnostics Pack. If this license cannot be acquired, you can use the STATSPACK tables instead.

MGMT$METRIC_DETAILS: The MGMT$METRIC_DETAILS view displays a rolling 7 day window of individual metric samples. These are the metric values for the most recent sample that has been loaded into the Management Repository plus any earlier samples that have not been aggregated into hourly statistics.

MGMT$METRIC_CURRENT: The MGMT$METRIC_CURRENT view displays information on the most recent metric values that have been loaded into the Management Repository.

MGMT$METRIC_HOURLY: The MGMT$METRIC_HOURLY view displays metric statistics information that has been aggregated from the individual metric samples into hourly time periods. For example, if a metric is collected every 15 minutes, the 1 hour rollup would aggregate the 4 samples into a single hourly value by averaging the 4 individual samples together. The current hour of statistics may not be immediately available from this view. The timeliness of the information provided from this view is dependent on when the query against the view was executed and when the hourly rollup table was last refreshed.

MGMT$METRIC_DAILY: The MGMT$METRIC_DAILY view displays metric statistics that have been aggregated from the samples collected over the previous twenty-four hour time period. The timeliness of the information provided from this view is dependent on when the query against the view was executed and when the hourly rollup table was last refreshed.

MGMT$TARGET_TYPE:  MGMT$TARGET_TYPE displays metric descriptions for a given target name and target type. This information is available for the metrics for the managed targets that have been loaded into the Management Repository. Metrics are specific to the target type.
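
MGMT$METRIC_DETAILS is not demonstrated later in this article, so here is a minimal sketch for pulling the raw samples of a single metric; the column names are an assumption based on the other repository views and should be verified against your EM release:

SELECT target_name,
       metric_name,
       metric_column,
       collection_timestamp,
       value
FROM   mgmt$metric_details
WHERE  metric_name='instance_throughput'
       and metric_column='iombs_ps'
ORDER BY collection_timestamp;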

You can build reports as complex as you want. In this article I will show you some basic examples that you can take as a starting point. You can also read my article “Creación de un reporte simple usando Information Publisher Report”, where you will learn how to use Information Publisher to build nice reports.

List all the metrics available in Enterprise Manager Repository Views

With this query you can list all the metrics that you can use to build your reports. This query will return hundreds of rows, each row for one specific metric:

SELECT distinct metric_name,
       metric_column,
       metric_label
FROM   MGMT$METRIC_DAILY
ORDER BY 1,2,3;

All the metrics for all the database targets

With this query you list all the metrics available for one specific type of target, in this case the type ‘oracle_database’:

SELECT t.target_name target_name,
       t.metric_name,
       m.metric_column metric_column,
       to_char(m.rollup_timestamp,'YYYY-MM-DD HH24') as TIME,
       sum(m.average/1024) as value
FROM   mgmt$metric_hourly M,
       mgmt$target_type T
WHERE  t.target_type='oracle_database'
       and m.target_guid=t.target_guid
       and m.metric_guid=t.metric_guid
GROUP BY  t.target_name,
          t.metric_name,
          m.metric_column,
          m.rollup_timestamp
ORDER BY 1,2,3;

Once you know which metrics are available to build reports, you can proceed to create a basic report.

Current value for the metric iombs_ps

Let’s start with something basic: learning the current value for one specific metric. In this example, we’ll learn the value of the metric “iombs_ps”, which is part of the category “instance_throughput”.

This query uses the view mgmt$metric_current:

SQL> SELECT t.target_name target_name,
     t.metric_name,
     m.metric_column metric_column,
     to_char(m.collection_timestamp,'YYYY-MM-DD HH24:MI') as TIME,
     m.value as value
FROM mgmt$metric_current M,
     mgmt$target_type T
WHERE t.target_type='oracle_database'
      and m.target_guid=t.target_guid
      and m.metric_guid=t.metric_guid
      and t.metric_name='instance_throughput'
      and t.metric_column='iombs_ps'
      ORDER BY 1,2,3;

TARGET_NAME  METRIC_NAME         METRIC_COLUMN TIME             VALUE
------------ ------------------- ------------- ---------------- --------
cloud1       instance_throughput iombs_ps      2017-08-20 20:32 378

Historic data for the metric iombs_ps per hour

Now I will use the historic data for the same metric for the last 24 hours, and then I will build a chart with Google Charts to see the behavior of this metric across time. This query uses the view mgmt$metric_hourly.

SQL> SELECT t.target_name target_name,
            t.metric_name,
            m.metric_column metric_column,
            to_char(m.rollup_timestamp,'YYYY-MM-DD HH24') as TIME,
            sum(m.average/1024) as value
FROM        mgmt$metric_hourly M,
            mgmt$target_type T
WHERE       t.target_type='oracle_database'
            and m.target_guid=t.target_guid
            and m.metric_guid=t.metric_guid
            and t.metric_name='instance_throughput'
            and t.metric_column='iombs_ps'
GROUP BY t.target_name,
         t.metric_name,
         m.metric_column,
         m.rollup_timestamp
ORDER BY 1,2,3; 

TARGET_NAME  METRIC_NAME          METRIC_COLUMN   MONTH_TIMESTA VALUE
------------ -------------------- --------------- ------------- ----------
cloud1       instance_throughput  iombs_ps        2017-08-19 00 296
cloud1       instance_throughput  iombs_ps        2017-08-19 01 374
cloud1       instance_throughput  iombs_ps        2017-08-19 02 362
cloud1       instance_throughput  iombs_ps        2017-08-19 03 360
cloud1       instance_throughput  iombs_ps        2017-08-19 04 378
cloud1       instance_throughput  iombs_ps        2017-08-19 05 378
cloud1       instance_throughput  iombs_ps        2017-08-19 06 378
cloud1       instance_throughput  iombs_ps        2017-08-19 07 362
cloud1       instance_throughput  iombs_ps        2017-08-19 08 360
cloud1       instance_throughput  iombs_ps        2017-08-19 09 362
cloud1       instance_throughput  iombs_ps        2017-08-19 10 360
cloud1       instance_throughput  iombs_ps        2017-08-19 11 359
cloud1       instance_throughput  iombs_ps        2017-08-19 12 362
cloud1       instance_throughput  iombs_ps        2017-08-19 13 361
cloud1       instance_throughput  iombs_ps        2017-08-19 14 370
cloud1       instance_throughput  iombs_ps        2017-08-19 15 378
cloud1       instance_throughput  iombs_ps        2017-08-19 16 378
cloud1       instance_throughput  iombs_ps        2017-08-19 17 378
cloud1       instance_throughput  iombs_ps        2017-08-19 18 161
cloud1       instance_throughput  iombs_ps        2017-08-19 19 161
cloud1       instance_throughput  iombs_ps        2017-08-19 20 175
cloud1       instance_throughput  iombs_ps        2017-08-19 21 178
cloud1       instance_throughput  iombs_ps        2017-08-19 22 179
cloud1       instance_throughput  iombs_ps        2017-08-19 23 164
cloud1       instance_throughput  iombs_ps        2017-08-19 24 160

 

Now I will use Google Charts to chart the data (the chart itself is not reproduced here). Interpreting a graphic is easier than looking only at numbers; in the chart we can see that something happened after 17:00, because the I/O throughput decreased.

Historic data for the metric iombs_ps per day

Our last report example will use the view mgmt$metric_daily to create a report on the same metric, but daily. You can add more WHERE clauses to filter the period of time and also you can play with the values MAXIMUM and MINIMUM.

SQL> SELECT t.target_name target_name,
            t.metric_name,
            m.metric_column metric_column,
            to_char(m.rollup_timestamp,'YYYY-MM-DD') as TIME,
            sum(m.average/1024) as value
FROM        mgmt$metric_daily M,
            mgmt$target_type T
WHERE       t.target_type='oracle_database'
            and m.target_guid=t.target_guid
            and m.metric_guid=t.metric_guid
            and t.metric_name='instance_throughput'
            and t.metric_column='iombs_ps'
GROUP BY t.target_name, t.metric_name, m.metric_column, m.rollup_timestamp
ORDER BY 1,2,3; 

TARGET_NAME  METRIC_NAME          METRIC_COLUMN   MONTH_TIME VALUE
------------ -------------------- --------------- ---------- ----------
cloud1       instance_throughput  iombs_ps        2017-08-13 377
cloud1       instance_throughput  iombs_ps        2017-08-14 360
cloud1       instance_throughput  iombs_ps        2017-08-15 367
cloud1       instance_throughput  iombs_ps        2017-08-16 378
cloud1       instance_throughput  iombs_ps        2017-08-17 378
cloud1       instance_throughput  iombs_ps        2017-08-18 378
cloud1       instance_throughput  iombs_ps        2017-08-19 378
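
For instance, to restrict the daily report to a recent window, a date filter on the same rollup_timestamp column can be added to the WHERE clause; a sketch (the 30-day cutoff is arbitrary):

SQL> SELECT t.target_name target_name,
            m.metric_column metric_column,
            to_char(m.rollup_timestamp,'YYYY-MM-DD') as TIME,
            sum(m.average/1024) as value
FROM        mgmt$metric_daily M,
            mgmt$target_type T
WHERE       t.target_type='oracle_database'
            and m.target_guid=t.target_guid
            and m.metric_guid=t.metric_guid
            and t.metric_name='instance_throughput'
            and t.metric_column='iombs_ps'
            and m.rollup_timestamp >= sysdate - 30
GROUP BY t.target_name, m.metric_column, m.rollup_timestamp
ORDER BY 3;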

 


Conclusion

In this article I have shown you one more historical data source that you can use to understand the behavior of your business, through the hundreds of metrics available in the Enterprise Manager repository views. There are views for the current value of a metric, the hourly value, and the daily value, and you can play with values like the MAXIMUM in a day (or in an hour), MINIMUM, or AVERAGE. You can create very complex queries to analyze different problems across time, and then you can chart the data and get nice graphics that you can present to the board.

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

Why Certifications Are Important


By Deiby Gómez

Introduction:

Ever since I started my career in Oracle technology I’ve always wanted to deliver the best support to my clients. I have wanted to solve problems quickly. I am not afraid of new challenges; I am not afraid to start looking into a problem I have never seen before. On the contrary, I am happy to look into unfamiliar problems because they are opportunities to learn. Following that approach, and to honor my commitment to my clients, I started to look into the Oracle certification program. I began to learn what Oracle University was about and the paths to get certified.

I started my career with Oracle Database 11g. Then, because of the clients I was doing work for, I extended my knowledge to 10g and even 9i; the oldest version I worked with was 8i, but with few tickets on it. At the moment, the newest version of Oracle is 12c and all the certifications are already available for it. You can even get certified on a specific release, like OCP on 12cR2. I recommend that you get certified on the most recent versions of the technology you are interested in.

Anyhow, since I came into Oracle technology on 11g, my path to certification went through the 11g Associate, Professional, and Master levels.

 

So I worked hard to pass the following exams. This should give you an idea of the time it would take to progress through the certifications:

  • 1Z0-051: Oracle Database 11g SQL Fundamentals I – January 2011
  • 1Z0-052: Oracle Database 11g Administration I – March 2011
  • 1Z0-053: Oracle Database 11g: Administration II – May 2011
  • 1Z0-402: Enterprise Linux Fundamentals – May 2011
  • 1Z0-451: Oracle Service Oriented Architecture Foundation Practitioner – August 2012
  • 1Z0-027: Oracle Exadata X3 and X4 Administration – August 2013
  • 1Z0-058: Oracle RAC 11g Release 2 and Grid Infrastructure Administration – December 2013
  • 1Z0-060: Upgrade to Oracle Database 12c – February 2014
  • 1Z0-093: Oracle Database 11g Certified Master Exam (OCM) – February 2015
  • 1Z0-432: Oracle Real Application Clusters 12c Essentials – September 2015
  • 1Z0-029: Oracle Database 12c Certified Master Upgrade Exam– April 2016
  • 1Z0-066: Oracle Database 12c: Data Guard Administration – December 2016

Additionally, I became an Oracle ACE in 2013 and an Oracle ACE Director in 2015. I also was a technical reviewer of the book "Oracle Database 12c Release 2 Multitenant" and a co-author of the book "Oracle Database 12c Release 2 Testing Tools and Techniques for Performance and Scalability".

After all this hard work, I can tell you why certifications are important.

Of course, this is a personal opinion. At the beginning of my career I started getting certifications frequently in order to get a salary hike (like most people who are starting a career), but after two certifications I changed my thinking and started to enjoy the path, because it was aligned with what I wanted to deliver: to fix problems quickly and deliver excellence to my clients, which is the right approach. It's all about enjoying the journey!

When preparing for a certification, you have to build several environments, practice installations and different RMAN scenarios, and test every Oracle Database feature and ASM feature. You find errors and investigate how to fix them. While investigating the problems you will read blogs, Metalink notes, whitepapers, Oracle Press books, Oracle University manuals and even videos on YouTube! You will spend several hours and days in front of a computer practicing. You’ll study so hard that when you are in front of the computer actually taking the exam, it’s anticlimactic – just a set of questions that you already know how to answer. You’ll feel like it’s a time sink to sit in front of that laptop taking the exam because you already know you’ve got the knowledge. Yes, you do have the knowledge, but you still have to pass the exam to prove it. And once that certification is in hand, it is proof of all the preparation and hard work that helps you deliver better support to your clients.

So the advantages I can highlight from the perspective of a consultant are:

  • Preparing for the exam increases your knowledge.
  • You get faster at fixing problems.
  • You face so many issues while practicing that sometimes, just by hearing or seeing the symptoms, you already know where the problem is likely to be.
  • You acquire friends and colleagues through forums, blogs and Oracle events around the world.
  • You can get better jobs.
  • You can deliver your clients a better quality of support.
  • Because of your credentials, you are sometimes invited to community projects (to be a speaker, to co-author a book, to help with a blog, to contribute to an open source project, etc.).
  • Depending on where you are, yes, you may get that pay raise.
  • You get a profile in www.youracclaim.com
  • If you become an OCM you also get a special profile in Oracle OCMs list.
  • You get less stressed, because with the knowledge you’ve acquired preparing for certification there will be fewer things that you don’t know, and less reason to fear making errors.
  • Since your knowledge has increased, you also can help your colleagues.
  • You get respect from newbies. [:)]

And perhaps much more! But those are just the advantages for consultants. There’s another beneficiary of your certifications; namely, the company you are working for. I became part of Nuvola Consulting Group in 2016 and since then we’ve gotten several clients on board (YAY!). Still, I can tell you why certifications are important for organizations:

  • Companies promote your certifications to prove that they have good consultants.
  • For partnerships: when you are looking to become a partner of another company, the other company will look into your consultants and their certifications.
  • Companies use your certifications to prove that they can work with a specific technology or product very well (Amazon AWS, Oracle DB, Tuning, SOA, etc.).
  • It’s less risky to hire certified consultants, since the chance that they make mistakes is lower than with a consultant who doesn’t have certifications. Of course there are also consultants without certifications who have a lot of experience, but in those cases they have to demonstrate that experience from past performance, unless the person is well known and very well recommended by others we already know.
  • Companies can charge a higher hourly rate for support or consulting when the consultants are certified.
  • Having several certified consultants is very helpful when the company wants to get on board with a big prospective customer or get a very good contract. Generally large enterprises want companies with certified consultants to provide them services.
  • Having certified consultants helps a firm compete with other companies in the same industry.

In Guatemala, for example, the country where I am currently living, I have observed that certifications are more important to hiring companies in the IT industry than a bachelor’s degree. For non-IT companies it may be different, but in Latin American IT companies this is common. And over the years I have seen many students starting early in their college years and getting certified to increase their expertise in a single technology (let's say Java, etc.). I’m included in this group, because I started working with Oracle technology professionally before completing university. The IT industry wants people who are highly specialized in a single technology or product and ready to get involved in projects.

Conclusion:

Certifications are important for consultants and also for the companies we work for. The industry wants specialized people. The IT industry is growing fast, with some of the largest companies in the world today being in IT, and they’re demanding certified people. This is an opportunity that you have to take advantage of: get certified!

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter LinkedIn

Prepare yourself for passing the Oracle Certified Master 12c requirements!


By Deiby Gómez

Introduction

In 2013 I began my preparation for becoming Oracle Certified Master (OCM) 11g. I was already OCP (Oracle Certified Professional) 11g and OCP 12c, so to get to the next level, I made myself a schedule of reading and practice. The OCM exam is no joke—it takes a lot of knowledge, as well as speed in working to solve problems, to pass it. Later in this article I’ll share my study and practice schedule, which you can use to prepare for the exam yourself.

But first, some background. There are three levels of Oracle Database Certifications (for any version 11g or 12c):

  • Associate (OCA)
  • Professional (OCP)
  • Master (OCM)

The Professional level requires Associate as a prerequisite, and Master requires Professional. So your certification quest starts at the Associate level. You have to pass two exams in order to achieve it. Next, you have to get your professional certification, which requires you to pass an exam and also take an Oracle University course. Once you are an OCP you can start your journey to become an OCM.

For OCM, you have to take two courses from Oracle University, and then you have to pass one more exam. This exam is different from the ones for the first two levels of certification, because it does not consist of multiple-choice questions and it’s not online like the exams for OCA and OCP. To find out where to take it, you need to look at the Oracle Certified Master Exam Worldwide Schedule. There are only a few countries where you can take this exam.

This exam is for real DBAs! It is 100% practice, rather than answering questions.

Basically you have to be prepared for anything and you have to do everything as fast as you can, because you have a limited time for each problem.

That is the path to OCM for 11g. If you want to start directly toward certification in the 12c version, the path is analogous, going through the 12c Associate, Professional, and Master levels.

 

Some months after I passed the OCM 11g exam, the OCM 12c exam was released, so I decided to take it as well. When I was preparing my OCM 12c I created the following schedule, which you can use, too, for your own preparation.

I focused my preparation in two main areas: Knowledge and Speed.  

Hours to develop knowledge

The hours I allotted to increasing my knowledge I spent reading everything I could about each topic: blogs, Metalink notes, forums, books, videos, etc. Within that time I also practiced every topic on a virtual machine, at least twice. For example, if the topic was “install database software”, I read everything about that topic and then I installed the software at least two times. During these hours I was also reading every single option of every single command. Yes! It was fun. I also tried to memorize as much syntax as I could. Once I knew how to do everything related to a topic and had gained considerable knowledge of the syntax and concepts, I moved on to the hours for getting faster.

 

Hours to increase speed: During these hours, I didn’t have to read more because I already knew how to do the things I was focusing on. This was time I set aside to practice and practice and practice and yes, practice.  I tried to get as fast as I could.

So here is the schedule I used:

 

Topic (hours to read and practice for knowledge / hours to improve speed)

General Database and Network Administration (40 / 14)
  • Create and manage pluggable databases (16 / 4)
  • Administer users, roles, and privileges (4 / 2)
  • Configure the network environment to allow connections to multiple databases (4 / 2)
  • Administer database configuration files (8 / 2)
  • Configure shared server (4 / 2)
  • Manage network file directories (4 / 2)

Manage Database Availability (60 / 18)
  • Install the EM Cloud Control agent (24 / 8)
  • Configure recovery catalog (8 / 2)
  • Configure RMAN (8 / 2)
  • Perform a full database backup (4 / 2)
  • Configure and monitor Flashback Database (16 / 4)

Data Warehouse Management (56 / 23)
  • Manage database links (4 / 2)
  • Manage a fast refreshable materialized view (16 / 4)
  • Create a plug-in tablespace by using the transportable tablespace feature (16 / 4)
  • Optimize star queries (4 / 2)
  • Configure parallel execution (4 / 2)
  • Apply a patch (4 / 2)
  • Configure Automatic Data Optimization, In-Row Archiving, and Temporal Validity (8 / 4)
  • Manage external tables (8 / 3)

Data Management (60 / 16)
  • Manage additional buffer cache (4 / 2)
  • Optimize space usage for the LOB data (8 / 2)
  • Manage an encrypted tablespace (8 / 2)
  • Manage schema data (8 / 2)
  • Manage partitioned tables (8 / 2)
  • Set up fine-grained auditing (8 / 2)
  • Configure the database to retrieve all previous versions of the table rows (16 / 4)

Performance Management (68 / 27)
  • Configure the Resource Manager (16 / 12)
  • Tune SQL statements (8 / 3)
  • Use real application testing (16 / 3)
  • Manage SQL Plan baselines (8 / 3)
  • Capture performance statistics (8 / 3)
  • Tune an instance - configure and manage the result cache, control CPU use for Oracle instances, configure and manage "In Memory" features (12 / 3)
  • Manage extended statistics (8 / 2)
  • Create and manage partitioned indexes (8 / 2)

Data Guard (56 / 26)
  • Administer a Data Guard environment (12 / 4)
  • Create a physical standby database (16 / 8)
  • Configure a standby database for testing (4 / 4)
  • Configure a standby database to apply redo (8 / 2)
  • Configure a standby database to use for reporting (4 / 2)
  • Configure fast start failover (4 / 2)
  • Manage extended statistics (4 / 2)
  • Manage DDL in a Data Guard environment (4 / 2)

Grid Infrastructure (80 / 34)
  • Install Oracle Grid Infrastructure (16 / 8)
  • Create ASM Disk Groups (8 / 4)
  • Create and manage ASM instances (8 / 4)
  • Configure ASM Cloud File System (ACFS) (8 / 4)
  • Administer Oracle Clusterware (16 / 6)
  • Manage Flex Clusters and Flex ASM (12 / 4)
  • Manage Flex Clusters and Flex ASM (12 / 4)

Real Application Cluster Database (40 / 9)
  • Install Oracle Database software (8 / 3)
  • Create a Real Application Clusters (RAC) database (8 / 2)
  • Configure Database Services (16 / 2)
  • Administer Oracle RAC databases on one or more cluster nodes (8 / 2)

Using this schedule, I tried to practice four hours every day after my job, and I dedicated my weekends to this effort completely (16 hours) so I was able to get prepared in about three months. Depending on the time you have to commit to your own effort, your ‘mileage may vary’.

In addition to my schedule, you can also use the following books for your preparation. One of them is from Kamran Agayev, an 11g OCM and a good friend.

Oracle Certified Master 11g Study Guide by Kamran Agayev.

 

OCM: Oracle Database 10g Administrator Certified Master Exam Guide by Nilesh Kakkad

Once you have passed your OCM exam, you will receive an Oracle Certified Master card.

 

 

Conclusion

Getting prepared for the OCM is not easy; it takes time, and without good preparation you likely will not pass the exam. This exam is no joke; it is serious, and you should be well prepared in every area before scheduling it. In this article I’ve provided a preparation plan you can follow to take it and become an Oracle Certified Master. Best of luck!

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

Solving Communication problems between DB and ASM instances


By Deiby Gómez 

Introduction

Most of the time I write how-to articles or introduce a new feature of Oracle Database. Those articles contain new information that is good to know and helps people fix issues or use a feature, but this time I am writing about a situation I have actually faced. It's good for readers and beginners to know the little details of how an issue was fixed, what daily work is like for another DBA, or simply to read an entertaining story. In this article I will tell the story of a problem a customer had a long time ago; the root cause is not frequent (I hope!), but if we don't understand the relevant concepts we could spend several hours trying to find a root cause that is easy to identify when our concepts are solid.

Infrequent, but it can happen

A long time ago I received a call from a customer saying that there were some errors in the database instance. Interestingly, the database was executing DMLs properly without any issue. I asked the customer whether these errors appeared only with one specific operation, like an INSERT or a CREATE <something>, and he said that he was running a script received from the application team to create several tablespaces with their datafiles. When he ran the script he received the following errors:

ORA-01119: error in creating database file '+DATA'

ORA-17502: ksfdcre:4 Failed to create file +DATA

ORA-27300: OS system dependent operation:open failed with status: 2

ORA-27301: OS failure message: No such file or directory

ORA-27302: failure occurred at: sskgmsmr_7

First, you can see that the set of errors says that there is a directory or file that doesn't exist at the OS level; on the other hand, it points to the ASM disk group, which in this case is “+DATA”. So this is confusing, because the file the database is looking for is either in ASM or in the OS. I did a quick check of the ASM instance and it was OK: there were no errors in its alert log and all the disks were healthy. On the database side, however, there seemed to be some issues, specifically with the CREATE TABLESPACE sentences that the customer had in the script provided by the application team.
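For context, the failing statements in the customer's script looked something like the following sketch (the tablespace name and size are hypothetical, not the customer's real ones); each of them returned the ORA-01119/ORA-17502 stack shown above:

SQL> create tablespace app_data datafile '+DATA' size 500m autoextend on;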

So, the clues were:

  • No issues with the ASM Instance
  • DMLs were being executed successfully in the database instance.
  • CREATE TABLESPACE statements fail in the database instance.
  • ASM and OS are both involved in a “file” or “directory” that doesn’t exist. 

With these four clues to go by, you should be on the right track if your concepts are solid. The root cause you would be thinking about involves the file that the database instance uses to communicate with the ASM instance. This file is named "ab_<ASM SID>.dat" and it is located in $ORACLE_HOME/dbs. You need to know that this file exists and what its function is. This file rarely has issues, or rarely causes problems, but sometimes it happens.

Let’s define this file:

What is the "ab_<ASM SID>.dat" file? This file is used by the database instance to message an ASM instance. When the database instance needs to send a message to the ASM instance, it reads this file in order to find the information required to connect to the ASM instance. The file lives in $ORACLE_HOME/dbs. If it doesn't exist, the database will not be able to connect to the ASM instance and you will receive an error. This file is important because it is involved in the database instance's everyday work.
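A quick way to check whether the file is present is simply to list it; the SID shown in the comment is only an example (on a standalone server it is typically +ASM, on RAC nodes +ASM1, +ASM2, and so on):

$ ls -l $ORACLE_HOME/dbs/ab_*.dat     # e.g. ab_+ASM.dat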

Some time ago I wrote an article with several tests showing which database operations require this file and which do not. You can read the details here.

The conclusion of that earlier article indicates:

  • Tablespace creation – required
  • Datafile creation – required
  • Table creation – not required
  • DML operations – not required
  • Drop tablespace – not required
  • Delete datafile – not required
  • Startup database instance – required
  • Shutdown database instance – not required

Well, taking that into account, to solve this customer's issue I listed all the files in $ORACLE_HOME/dbs and the root cause was confirmed: the file "ab_<ASM SID>.dat" did not exist in the directory. I asked the customer if he had moved the file somewhere else or deleted it, and he said that the day before, the junior DBA had been "cleaning" logs and traces that were using space and that could be deleted. I think one of those files that "could be deleted" was "ab_<ASM SID>.dat". As I said before, this situation happens rarely. Solving the problem is not a big deal: we have to reboot the ASM instance, but in order to do that we have to reboot the database instance as well. After rebooting the ASM instance the file was recreated and the database was able to use it. The script that the customer had was executed successfully and all the CREATE TABLESPACE operations were successful.
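On a standalone server the restart sequence described above could look roughly like the following sketch; the exact commands depend on how the stack is managed (Oracle Restart, srvctl, and so on), and the instance names are only examples:

SQL> -- connected to the database instance as sysdba
SQL> shutdown immediate;

$ export ORACLE_SID=+ASM
$ sqlplus / as sysasm
SQL> shutdown immediate;
SQL> startup;     -- ab_<ASM SID>.dat is recreated when the ASM instance starts

$ ls -l $ORACLE_HOME/dbs/ab_*.dat     # confirm the file is back

$ export ORACLE_SID=orcl              # back to the database instance (SID is an example)
$ sqlplus / as sysdba
SQL> startup;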

Conclusion

Sometimes there are issues whose root cause is very rare, and in order to determine it quickly we have to have all our concepts solid; otherwise, we might spend several hours trying to figure out what’s going on, reading notes and so on.

In this case, it was very important to identify the clues; we had four clues here, and they pointed us to the right root cause. Sometimes the customer is stressed and under pressure and wants us to fix the problem fast, but DBAs have to stay calm: we have to extract the clues (symptoms), think about the root cause, create a hypothesis and work to prove it. To shorten diagnostic time, make sure you're on solid ground conceptually, which you can do by practicing various scenarios while you are getting prepared for a certification.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the "SELECT Journal Editor's Choice Award 2016". Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

Block Corruption in an Oracle Database


By Deiby Gómez

Introduction

Block corruption is a common topic when we are dealing with any software that stores data. In Oracle Database there are several types of logical structures that are mapped to physical files named datafiles, and each datafile is divided into data blocks.

 

A block can have logical or physical corruption. A corrupt block is a block that has been changed so that it differs from what Oracle expects to find. A logical corruption is a block that has a valid checksum but whose content is corrupt; for example, a row locked by a non-existent transaction, an amount of used space that does not match the block size, avsp bad, and so on. Logical corruption can cause ORA-00600 errors, depending on which content inside the block is corrupted. A physical corruption is also called a media corruption: the database does not recognize the block at all. Here the problem is not related to the content but to the physical location or structure itself; for example, a bad header, a fractured or incomplete block, an invalid block checksum, a misplaced block, zeroed-out blocks, a header and footer that do not match, or an incorrect key data block structure such as the data block address (DBA).

Detecting, monitoring and fixing corrupt blocks is an important task that we have to take care of regularly and frequently. A corrupt block not only means a problem with the block; it also means that there is data that may be lost, and that is very important for the business.

The Problem

The problem with corrupted blocks is that we don't know they are corrupted until we try to use them. Of course, this applies to a scenario where we are not running any proactive checks to detect corrupt blocks. For example, a table block can be corrupted and there is no way to know it until someone performs a SELECT or any other operation that reads that block. Once the block is read, Oracle will know it is corrupted and an ORA-00600, ORA-27047 or ORA-01578 will be returned to the user.

A long time ago a customer called me saying that they were trying to execute a SELECT from the application, and whenever the SELECT was executed the application got an ORA-01578. I identified the block number and the datafile number and I fixed it. At that time, the user was able to continue working for the rest of the day. However, the next day the same customer called me again saying that they were receiving more ORA-01578 errors. This time I confirmed that the corrupted block was in a different datafile than the block I had fixed the day before. This made me think that there could be more corrupted blocks. I executed dbverify against the full database and saw that it had several corrupted blocks. However, the last RMAN backup hadn't reported any corrupted blocks. We engaged a sysadmin and he detected that the storage was having issues that day. Fortunately we detected the storage problem quickly and no data was lost. But if these kinds of issues are not detected properly, the data can be compromised. In this example we have been talking about a physical problem, but there are other cases where it is more difficult to detect the problem, especially when it is a logical corruption.

How to avoid it

Using Oracle ASM: Oracle recommends using ASM as the storage for the database. ASM has three types of redundancy: External, Normal and High. If we are using Normal or High, ASM keeps one mirror copy (Normal) or two mirror copies (High) of every block, and whenever it finds a corrupt block it automatically restores it from one of those mirror copies. I have written an article that explains in detail how Oracle recovers a block from its mirror copy, in case you want to read it: Data block recovering process using Normal Redundancy
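As an illustration only (the disk group name and disk paths below are made up), a Normal-redundancy disk group, which keeps one mirror copy of the data, could be created from the ASM instance like this:

SQL> create diskgroup data normal redundancy
  2  failgroup fg1 disk '/dev/asm-disk1', '/dev/asm-disk2'
  3  failgroup fg2 disk '/dev/asm-disk3', '/dev/asm-disk4';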

Using parameter db_block_checking: This parameter controls whether block checking is done for transaction-managed blocks. Early detection of corruption is useful and normally has only a small performance impact. However, for some types of applications, setting DB_BLOCK_CHECKING (for example to TRUE or FULL) can produce considerable overhead; it all depends on the application, so testing the change in a test environment is recommended. The immediate overhead is the CPU cost of checking the block contents after each change, but a secondary effect is that blocks are held for longer periods of time, so other sessions needing the current block image may have to wait longer. The actual overhead on any system depends heavily on the application profile and data layout.

Using parameter db_block_checksum: This parameter determines whether DBWn and the direct loader calculate a checksum (a number calculated from all the bytes stored in the block) and store it in the cache header of every data block when writing it to disk. Checksums are verified when a block is read, but only if this parameter is TYPICAL or FULL and the last write of the block stored a checksum. In FULL mode, Oracle also verifies the checksum before a change such as an UPDATE or DELETE statement and recomputes it after the change is applied. In addition, Oracle gives every log block a checksum before writing it to the current log. Checksums allow Oracle to detect corruption caused by underlying disks, storage systems, or I/O systems. If set to FULL, DB_BLOCK_CHECKSUM also catches in-memory corruptions and stops them from making it to disk. Turning on this feature in TYPICAL mode causes only an additional 1% to 2% overhead; in FULL mode it causes 4% to 5% overhead.
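A minimal sketch of how these two parameters could be inspected and changed; the values shown are just examples, and the overhead should be tested before changing them in production:

SQL> show parameter db_block_check

SQL> alter system set db_block_checking = 'FULL' scope=both;
SQL> alter system set db_block_checksum = 'TYPICAL' scope=both;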

Dbfsize: Can be used to check the consistency of Block 0.
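dbfsize is run from the operating system against a single file; the datafile path below is only an example:

$ dbfsize /u01/oradata/ORCL/system01.dbf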

Dbverify: Can be used to check Oracle datafiles for signs of corruption and give some degree of confidence that a datafile is free from corruption. It opens files in read-only mode and so cannot change the contents of the file being checked. It checks that the datafile has a valid header. Each data block in the file has a special "wrapper" which identifies the block, and this "wrapper" is checked for correctness. Dbverify also checks that DATA (TABLE) and INDEX blocks are internally consistent and, from 8.1.6 onwards, that various other block types (such as rollback segment blocks) are internally consistent.
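A typical dbverify run against a single datafile could look like this; the file path, block size and log file name are only examples:

$ dbv file=/u01/oradata/ORCL/users01.dbf blocksize=8192 logfile=users01_dbv.log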

RMAN VALIDATE command:  You can use the VALIDATE command to manually check for physical and logical corruptions in database files. This command performs the same types of checks as BACKUP VALIDATE. By default, RMAN does not check for logical corruption. If you specify CHECK LOGICAL on the BACKUP command, however, then RMAN tests data and index blocks for logical corruption, such as corruption of a row piece or index entry.

RMAN> validate check logical database;

RMAN > validate database;

RMAN > validate backupset 11;

RMAN > validate datafile 2 block 11;
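Any corrupt blocks found by VALIDATE are recorded in V$DATABASE_BLOCK_CORRUPTION, so a quick follow-up check after the commands above could be:

SQL> select file#, block#, blocks, corruption_type
  2  from v$database_block_corruption;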

I have written some other articles related to RMAN and corrupt blocks, in case you want to read more about this topic.

Conclusion 

Perform proactive tasks to detect or avoid physical and logical corruption; if the corruption is detected in time, the solution can be executed easily. Oracle offers several tools that we can use to detect, monitor, and fix corruption in blocks. It is important to be aware of these types of problems so that our data is not compromised.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the "SELECT Journal Editor's Choice Award 2016". Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

