
Introduction to Application Containers in Oracle Database 12cR2


NOTE: This article was written using Oracle Public Cloud.

  


Introduction:

Developers and end users are the roles that use the database the most. Developers keep fixing code, maintaining legacy applications, creating new applications or building new versions of existing ones. There are a lot of tasks involved in these activities: creating new databases for new applications, cloning the data of a production database, re-creating packages for new versions of an application and, if several customers use those applications, keeping every customer's copy in sync with the new code and data, or refreshing it for the new version. Developers and DBAs work together; Oracle knows that, and that's why every version of Oracle Database introduces functions, packages and features that help not only DBAs but also developers. Oracle Database 12.2 introduces a new feature called "Application Container" that helps developers a great deal with these day-to-day tasks. With Application Containers, developers can create Applications; every Application has its own data and version, and developers decide which database gets which version of an Application and when to refresh its data. The objects and data are maintained in only one place, not in every database of the organization, and all the dependent databases are synchronized from that central location. There are also three levels of "Sharing" for the data; some of them allow data to be stored in each PDB. This is what we will discuss in this article: how to create Applications and how to sync them with the PDBs.


What is an Application Container? An Application Container is composed of one Application Root, zero or more Application Pluggable Databases (also known as Application Tenants), zero or one Application Seed, and zero or more Applications.
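A quick way to see all of these components in a CDB is to query v$pdbs. The following is a minimal sketch; it assumes the APPLICATION_SEED column is available alongside the APPLICATION_ROOT and APPLICATION_PDB columns that are used later in this article:

SQL> -- Sketch: list application container members per container (12.2).
SQL> select con_id, name, open_mode, application_root, application_pdb, application_seed
  2  from v$pdbs
  3  order by con_id;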


Creating an Application Root:

An Application Root is a special Pluggable Database where the "Applications" are installed. Developers maintain the objects and data only in this Pluggable Database, and later they can sync the Application PDBs with those objects and data. There may be only one Application Root per Application Container. Using the different "Sharing" levels of the data, we can also store some data in each PDB.

In order to create an Application Root you have to be connected as SYS or another user with the required privileges:

SQL> show user
USER is "SYS"

You have to be connected to CDB$ROOT:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

Then you create the Application Root. As you can see below, the syntax is very similar to creating a normal Pluggable Database; the difference is the addition of the clause "as application container".

SQL> create pluggable database AppRoot as application container admin user pdbadmin identified by xxxx;

Pluggable database created.

Opening the Application Root:

SQL> alter pluggable database AppRoot open;

Pluggable database altered.

Confirming the Application Root was created successfully:

SQL> select con_id, name , open_mode, application_root from v$pdbs where application_root='YES';

CON_ID     NAME       OPEN_MODE  APP
---------- ---------- ---------- ---
5          APPROOT    READ WRITE YES

From these steps you may have noticed that we only added "as application container". So does that mean we created an Application Container or an Application Root? This can be confusing, but in the end it is simple. I prefer to see it this way: "When we create an Application Root, an Application Container is created by default, because an Application Root cannot exist alone"; or, if you prefer: "When we create an Application Container, an Application Root is created by default". You can pick your preferred definition :)

Creating an Application Pluggable Database

An Application PDB, or Application Tenant, is a special Pluggable Database that can get metadata and data from the Application Root and can also have its own metadata and data, depending on how the "Application" was created (we will discuss this "depends" later). Application PDBs belong to one and only one Application Root; that's why, when you create an Application PDB, you must be connected to an Application Root. So far you have seen that an Application Root can have zero or many Application PDBs, but an Application PDB belongs to only one Application Root.

The first step to create an "Application PDB" is to be connected to an Application Root:

SQL> alter session set container=AppRoot;

Session altered.

Verify you are connected to the Application Root:

SQL> show con_name

CON_NAME
------------------------------
APPROOT

Creating an Application PDB is exactly the same as creating a normal PDB; the only difference is that now we are connected to an Application Root:

SQL> create pluggable database AppPDB1 admin user apppdb1admin identified by xxxx;

Pluggable database created.

Opening the Application Tenant:

SQL> alter pluggable database AppPDB1 open;

Pluggable database altered.

Verifying the Application PDB was created successfully:

SQL> select con_id, name , open_mode, application_root, application_pdb from v$pdbs;

CON_ID     NAME     OPEN_MODE  APPLICATION_ROOT APPLICATION_PDB
---------- -------- ---------- ---------------- ---------------
5          APPROOT  READ WRITE YES              NO
6          APPPDB1  READ WRITE NO               YES

So far we have created one Application Container containing one Application Root and one Application PDB. But there is no Application yet. That is the next step.

Creating an Application

An Application is composed of objects and data. Every object can be created with one of three levels of "Sharing": Metadata-Linked, Data-Linked and Extended Data-Linked. Depending on which level of "Sharing" we use to create an object, its data will be shared from the Application Root or stored in each container.

Applications can be created only in an Application Root.

SQL> show con_name

CON_NAME
---------------
APPROOT

To install an Application you have to declare that you are starting the installation, specifying the name of the Application and its version. You can have several Applications in an Application Container, as long as their names are different within that Application Container.

SQL> alter pluggable database application MyApp begin install '1.0';

Pluggable database altered.

After declaring that you are installing an Application, all subsequent statements are marked as part of the installation; this is where you create all the objects and data:

SQL> create user test identified by xxxx;

User created.

SQL> grant connect, resource, unlimited tablespace to test;

Grant succeeded.

Metadata-Linked: A metadata link shares the database object’s metadata, but its data is unique to each container.

SQL> create table test.metadataLinkedTable SHARING=METADATA (name varchar2(20));

Table created.

SQL> insert into test.metadataLinkedTable values ('Guatemala');

1 row created.

SQL> commit;

Commit complete.

Data-Linked: A data link shares the database object, and its data is the same for all containers in the application container. Its data is stored only in the application root.

SQL> create table test.dataLinkedTable SHARING=DATA (name varchar2(20));

Table created.

SQL> insert into test.dataLinkedTable values ('Costa Rica');

1 row created.

SQL> commit;

Commit complete.

Extended Data-Linked: An extended data link shares the database object, and its data in the application root is the same for all containers in the application container. However, each application PDB in the application container can also store data that is unique to that application PDB. Personally, I like to call this "Row-Linked" because some rows are stored in the Application PDB and some others in the Application Root; basically, you are sharing a set of rows from the Application Root.

SQL> create table test.extendedDataLinkedTable SHARING=EXTENDED DATA (name varchar2(20));

Table created.

SQL> insert into test.extendedDataLinkedTable values ('Nicaragua');

1 row created.

SQL> commit;

Commit complete.

To finish the installation of the Application, execute the following statement specifying the Application's name and version:

SQL> alter pluggable database application MyApp end install '1.0';

Pluggable database altered.


Excellent! So far we have created an Application Container containing one Application Root, one Application Tenant and one Application with three tables: one metadata-linked table, one data-linked table and one extended data-linked ("row-linked") table.

The Application PDBs don't see the Application yet, because the synchronization is not automatic, as we can see below:

Checking whether the Application PDB "AppPDB1" has the objects of the Application "MyApp":

SQL> alter session set container=AppPDB1;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
APPPDB1

SQL> select * from test.metadataLinkedTable;
select * from test.metadataLinkedTable
*
ERROR at line 1:
ORA-00942: table or view does not exist

Synchronizing Application PDBs

In order to sync an "Application" to an "Application PDB" you have to open a session in that specific Application PDB:

SQL> alter session set container=AppPDB1;

Session altered.

SQL> show con_name

CON_NAME
------------
APPPDB1

Then execute the following statement, specifying the Application's name:

SQL> alter pluggable database application MyApp sync;

Pluggable database altered.

After executing the application sync, we are able to see the objects and data, depending on how the SHARING clause was used:

SQL> select * from test.metadataLinkedTable;

NAME
--------------------
Guatemala

SQL> select * from test.dataLinkedTable ;

NAME
--------------------
Costa Rica

SQL> select * from test.extendedDataLinkedTable;

NAME
--------------------
Nicaragua

Now let's see the difference between the sharing levels. In order to explain this I have to do some more inserts into the tables. All these inserts will be executed from the Application PDB "AppPDB1":

SQL> show con_name

CON_NAME
------------------------------
APPPDB1

Insert #1:

SQL> insert into test.metadataLinkedTable values ('Mexico');

1 row created.

Insert #2:

SQL> insert into test.dataLinkedTable values ('Canada');
insert into test.dataLinkedTable values ('Canada')
*
ERROR at line 1:
ORA-65097: DML into a data link table is outside an application action

Insert #3:

SQL> insert into test.extendedDataLinkedTable values ('USA');

1 row created.

SQL> commit;

Commit complete.

Explanation of Insert #1: This insert was executed against a metadata-linked table. The insert was accepted from the Application PDB, and the row is stored in that Application PDB. In addition, every row inserted while the Application was being installed gets its own copy in each synchronized Application PDB, because with this sharing level the rows are unique to each Application PDB. We can confirm that by checking the ROWID:

SQL> alter session set container=AppRoot;

Session altered.

SQL> show con_name

CON_NAME
----------
APPROOT

SQL> select c.con_id, p.name PDB_NAME, dbms_rowid.rowid_to_absolute_fno(t.rowid,'TEST','METADATALINKEDTABLE') file_num, t.name from test.metadataLinkedTable t, v$datafile c, v$pdbs p where c.file#=dbms_rowid.rowid_to_absolute_fno(t.rowid,'TEST','METADATALINKEDTABLE') and c.con_id=p.con_id;

CON_ID     PDB_NAME   FILE_NUM   NAME
---------- ---------- ---------- ----------
5          APPROOT    38         Guatemala

This means that the row is stored in datafile #38, and that datafile belongs to the container "AppRoot", which in this case is the Application Root.

SQL> alter session set container=AppPDB1;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
APPPDB1

SQL> select c.con_id, p.name PDB_NAME, dbms_rowid.rowid_to_absolute_fno(t.rowid,'TEST','METADATALINKEDTABLE') file_num, t.name from test.metadataLinkedTable t, v$datafile c, v$pdbs p where c.file#=dbms_rowid.rowid_to_absolute_fno(t.rowid,'TEST','METADATALINKEDTABLE') and c.con_id=p.con_id;

CON_ID     PDB_NAME FILE_NUM   NAME
---------- -------- ---------- --------------------
6          APPPDB1  41         Guatemala
6          APPPDB1  41         Mexico

And now you see that the same row, "Guatemala", was also stored in a different datafile, in this case datafile #41, which belongs to the PDB called "AppPDB1" (an Application PDB). Additionally, the row "Mexico" was inserted in the same datafile. This confirms that with this level of "Sharing" each container has its own data.

As you can see, there are two rows with the same value, "Guatemala": one stored in "AppRoot" and the other stored in "AppPDB1". This is because each container keeps its own copy of the row.

Explanation of Insert #2: In this case we tried to insert a row into a data-linked table and we received an error. In a table using SHARING=DATA (data-linked), rows can only be inserted in the Application Root, so that the Application PDBs can later be synchronized; no rows are accepted in the individual Application PDBs.
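If you really need to load rows into a data-linked table, the DML has to run in the Application Root inside an application action. The following is a minimal sketch, run from the Application Root, where the upgrade version numbers '1.0' to '1.1' are chosen only for illustration:

SQL> -- Wrap the DML in an application upgrade so it becomes part of the application.
SQL> alter pluggable database application MyApp begin upgrade '1.0' to '1.1';
SQL> insert into test.dataLinkedTable values ('Canada');
SQL> commit;
SQL> alter pluggable database application MyApp end upgrade to '1.1';
SQL> -- Then, connected to each Application PDB, pick up the change:
SQL> alter pluggable database application MyApp sync;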

Explanation of Insert #3: This insert was executed against an extended data-linked table (row-linked). The insert was accepted from the Application PDB, and the row is stored in that specific Application PDB because the INSERT was executed inside the Application PDB; if we had executed the INSERT from the Application Root, the row would have been stored in the Application Root and shared with the Application PDBs. I tried to confirm this by using ROWID, but I found that ROWID cannot be used against a row-linked table; the following error is returned:

ORA-02031: no ROWID for fixed tables or for external-organized tables

So you can use the following query to confirm that some rows are returned from "Application Root" and some others from the local "Application PDB":

SQL> select con_id, owner, table_name, common_data from cdb_tables where table_name='EXTENDEDDATALINKEDTABLE';

CON_ID     OWNER  TABLE_NAME                COMMON_DATA
---------- ------ ------------------------- -----------
6          TEST   EXTENDEDDATALINKEDTABLE   YES

The meaning of the column COMMON_DATA is the following:

SQL> select owner, table_name, column_name, comments from cdb_col_comments where column_name like 'COMMON_DATA%' and table_name='CDB_TABLES' and con_id=1;

OWNER  TABLE_NAME COLUMN_NAME     COMMENTS
------ ---------- --------------- -----------------------------------------
SYS    CDB_TABLES COMMON_DATA     Whether the table is enabled for fetching
                                  common data from Root

SYS    CDB_TABLES COMMON_DATA_MAP Whether the table is enabled for use with
                                  common_data_map database property

I had to get the definition from the data dictionary because, at the time of writing, those columns are not documented in the 12.2 public Oracle Database documentation (Database Reference); I already sent an email to Oracle asking why :)

Conclusion:

So far you have seen an introduction to "Application Containers". We created an Application Container (which created an Application Root by default), then we created an Application PDB and installed an Application with three tables. Finally, we inserted some rows and saw how the "Sharing" levels work.



How to solve user errors with Oracle Flashback 12cR2 and its enhancements


Introduction:

Flashback is a technology introduced in Oracle Database 10g to provide fixes for user errors. For example, one of the most common issues it can solve is when a DELETE operation was executed without a proper WHERE clause. Another case: a user has dropped a table but after some time that table is required. And the worst-case error: the data of a complete database has been logically corrupted. There are several use cases for Flashback technology, all of them focused on recovering objects and data or simply reverting data from the past. Flashback technology is not a replacement for other recovery methods such as RMAN hot backups, cold backups or datapump export/import; Flashback technology is a complement. While RMAN is the main tool to recover and restore physical data, Flashback technology is used for logical corruptions. For instance, it cannot be used to restore a datafile, while RMAN is the perfect tool for that purpose. Also, be careful when NOLOGGING operations are used; Flashback Database cannot restore changes through NOLOGGING.

Flashback Technology includes several "Flashback Operations", among them Flashback Drop, Flashback Table, Flashback Query, Flashback Version, Flashback Transaction and Flashback Database. They use different data sources to restore/revert user data from the past. The following table shows which data source is used for which Flashback operation:

Flashback Operation       Data Source
---------------------     --------------
Flashback Database        Flashback Logs
Flashback Drop            Recycle bin
Flashback Table           Undo Data
Flashback Query           Undo Data
Flashback Version         Undo Data
Flashback Transaction     Undo Data
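A quick way to check whether these data sources are available in your own database is shown in the sketch below (adjust the checks to your environment):

SQL> select flashback_on from v$database;                          -- Flashback Logs
SQL> select value from v$parameter where name = 'recyclebin';      -- Recycle bin
SQL> select value from v$parameter where name = 'undo_retention';  -- Undo data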

In this article, we will focus on Flashback Database, a feature that is able to "flash back" a complete database to a point in the past. Flashback Database has the following use cases:

  • Taking a database to an earlier SCN: This is really useful when a new version of an application needs to be tested and all the changes made for the testing discarded afterwards. In this case, a new environment (for testing or dev) must be created that contains the data in the production database at a specific time in the past.
  • Recovery through resetlogs: Flashback Database can revert (logically) a database to a specific date in the past, even if that specific date precedes that of a RESETLOGS operation.
  • Activating a Physical Standby Database: With Oracle Database 10g, Flashback Database can be used in a Physical Standby. The Physical Standby can be opened in read-write for testing purposes and when the activity completes, the database can be reverted to the time before the Physical Standby was activated.
  • Creating a Snapshot Standby: In 11g, Snapshot Standby was introduced. The concept is basically to automate all the steps involved in activating (opening in read-write) a Physical Standby in version 10g, then later make it Physical Standby again (with recovery). This "automated" conversion of a Physical Standby into a “Snapshot Standby” uses Flashback Database transparently to the DBA. 
  • Configuring Fast Start Failover: To configure Fast Start Failover in Data Guard Broker, Flashback Database is required.
  • Reinstating a Physical Standby: Data Guard broker uses Flashback Database to reinstate a former primary after Failover operations. Read more about reinstating a database in the following articles: Role Operations with Snapshot Standby 12c, Role Operations involving two 12c Standby Databases.
  • Upgrade testing: A Physical Standby can be used to test an upgrade; in this case, the Physical Standby is opened in read-write and upgraded. Applications can be tested with the upgraded database and when the activity completes the Physical Standby can be reverted to the former version using Flashback Database. The Transient Logical Standby method for upgrades also involves Flashback Database.

How Flashback Database works:

When blocks are modified in the Buffer Cache, some of the before-change block images are stored in the Flashback Buffer and subsequently written to the Flashback Logs by the RVWR process. All kinds of blocks are captured: index blocks, table blocks, undo blocks, segment headers, etc. When a Flashback Database operation is performed, Oracle uses the target time and checks its Flashback Logs to find which ones contain the required block images with SCNs right before the target time. Then Oracle restores the appropriate data blocks from the Flashback Logs to the datafiles, applies redo records to reach the exact target time and, when the database is opened with resetlogs, rolls back the uncommitted changes using undo data, to finally have a consistent database ready to be used.
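To see how far back Flashback Database can currently take you, and how much space the Flashback Logs are using, you can query v$flashback_database_log; this is a simple monitoring sketch:

SQL> select oldest_flashback_scn, oldest_flashback_time,
  2         retention_target, flashback_size, estimated_flashback_size
  3  from v$flashback_database_log;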

Flashback Database Enhancements:

Flashback Database has had several enhancements since it was introduced, with the biggest ones in 12.1 and 12.2. In Oracle Database 12.1, Flashback Database supported Container Databases (CDBs) in the Multitenant architecture, but Flashback Database at the PDB level was not possible. Oracle Database 12cR2 adds support at the PDB level, enabled by another good feature introduced in 12.2 called "Local Undo". Local Undo lets you create an undo tablespace in each Pluggable Database and use it to store undo data locally for that specific PDB; it must be enabled at the CDB level. However, if the CDB is not running in Local Undo mode, Flashback Pluggable Database can still be used, but the mechanism is totally different: in Shared Undo mode, Flashback Pluggable Database needs an auxiliary instance in which the required tablespaces are restored and recovered to perform the Flashback Database operation, and a switch is then performed between the current tablespaces and the newly restored-and-recovered tablespaces in the required Pluggable Database.

NOTE: All the examples in this article were created using Oracle Public Cloud:

Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production
PL/SQL Release 12.2.0.1.0 - Production
CORE 12.2.0.1.0 Production
TNS for Linux: Version 12.2.0.1.0 - Production
NLSRTL Version 12.2.0.1.0 – Production

Enabling Flashback:

Local Undo is used in this example:

SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME        PROPERTY_VALUE
-------------------- ---------------
LOCAL_UNDO_ENABLED   TRUE

To read more about Local Undo and Shared Undo the following articles are recommended: Oracle DB 12.2 Local Undo: PDB undo tablespace creation, How to Enable and Disable Local Undo in Oracle 12.2.

Flashback cannot be enabled at the PDB level in 12.1 and 12.2.0.1; it must be enabled at the CDB level. Before you can enable Flashback in your CDB you have to ensure that enough space is available to store the Flashback Logs. Oracle recommends the following generic formula to size your fast recovery area:

Target FRA = (Current FRA)+[DB_FLASHBACK_RETENTION_TARGET x 60 x Peak Redo Rate (MB/sec)]
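As a worked example with hypothetical figures: a retention target of 1440 minutes (one day) and a peak redo rate of 2 MB/sec add roughly 1440 x 60 x 2 MB = 172,800 MB (about 169 GB) on top of the current FRA usage. The corresponding parameters would then be set like this (the sizes are illustrative only):

SQL> alter system set db_flashback_retention_target=1440;
SQL> alter system set db_recovery_file_dest_size=200G;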

After setting up the FRA space properly, Flashback may be enabled:

SQL> alter database flashback on;

Database altered.

 

Creating a table and some rows

To test the result of the Flashback Database operation, I will create a table with some rows in it; that data will be used to verify that the flashback successfully reverted the database to a past point in time.

SQL> alter session set container=nuvolapdb2;

Session altered.

SQL> create table deiby.piece (piece_name varchar2(20));

Table created.

SQL> insert into deiby.piece values ('King');

SQL> insert into deiby.piece values ('Queen');

SQL> insert into deiby.piece values ('Rook');

SQL> insert into deiby.piece values ('Bishop');

SQL> insert into deiby.piece values ('Knight');

SQL> insert into deiby.piece values ('Pawn');

SQL> commit;

Commit complete.

SQL> select * from deiby.piece;

PIECE_NAME
--------------------
King
Queen
Rook
Bishop
Knight
Pawn

6 rows selected.

 

Restore Point creation

To perform Flashback Database a restore point, a guaranteed restore point, an SCN or a timestamp is required. In this example a normal restore point is used.

SQL> create restore point before_resetlogs for pluggable database nuvolapdb2;

Restore point created.

SQL> SELECT name, pdb_restore_point, scn, time FROM V$RESTORE_POINT;

NAME              PDB SCN        TIME
----------------- --- ---------- -------------------------------
BEFORE_RESETLOGS  YES 3864200    09-JAN-17 08.12.56.000000000 PM

 

Truncating and dropping the table

Now let's assume a user error: a DBA, developer, or end user truncates a table and then drops it. This is a simple example, but you can make this "logical error" as complex as you want, as long as a physical error is not involved and NOLOGGING is not used.

Truncating the table:

SQL> truncate table deiby.piece;

Table truncated. 

Drop the table with purge:

SQL> drop table deiby.piece purge;

Table dropped.

 

Open the database with resetlogs:

To make it more interesting, I will perform an incomplete recovery (until a specific SCN) so that a resetlogs operation is required:

RMAN> recover pluggable database nuvolapdb2 until scn 3864712;
Starting recover at 09-JAN-17
current log archived
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 09-JAN-17

Opening the Pluggable Database with resetlogs:

RMAN> alter pluggable database nuvolapdb2 open resetlogs;

Statement processed

We can verify that indeed a new incarnation was created for the PDB; the incarnation query is shown below, after the PDB is closed.

Flashback the database

Now it's time for the magic, the new feature introduced in Oracle Database 12.2 called "Flashback Pluggable Database". To use Flashback Database at Pluggable Database level, the PDB must first be closed.

SQL> alter pluggable database nuvolapdb2 close;

Pluggable database altered.

SQL> select con_id, db_incarnation# db_inc#, pdb_incarnation# pdb_inc#, status,incarnation_scn from v$pdb_incarnation where con_id=4;

CON_ID     DB_INC#    PDB_INC#   STATUS  INCARNATION_SCN
---------- ---------- ---------- ------- ---------------
4          1          5          CURRENT 3864712
4          1          0          PARENT  1

Then Flashback PDB may be used:

SQL> flashback pluggable database nuvolapdb2 to restore point before_resetlogs;

Flashback complete. 

After a Flashback PDB operation, the PDB must be opened with resetlogs:

SQL>  alter pluggable database nuvolapdb2 open resetlogs;

Pluggable database altered.

Verifying the data

Once the Flashback PDB operation has completed successfully, the data that existed before the truncate, drop and resetlogs (and even earlier, if you want) can be queried:

SQL> alter session set container=nuvolapdb2;

Session altered. 

SQL> select * from deiby.piece;

PIECE_NAME
--------------------
King
Queen
Rook
Bishop
Knight
Pawn

6 rows selected. 

A quick look at the incarnations shows that a new incarnation was created for the PDB (incarnation #6) and the former incarnation was marked ORPHAN (incarnation #5).

SQL> select con_id, db_incarnation# db_inc#, pdb_incarnation# pdb_inc#, status,INCARNATION_SCN from v$pdb_incarnation where con_id=4;

CON_ID     DB_INC#    PDB_INC#   STATUS  INCARNATION_SCN
---------- ---------- ---------- ------- ---------------
4          1          6          CURRENT 3864201
4          1          0          PARENT  1
4          1          5          ORPHAN  3864712

Conclusion:

Flashback Database has several use cases and is a very useful feature that DBAs should keep "in their pocket", ready to use when they need to revert a database to a time in the past. It allows you to test upgrades, activate a physical standby, undo user errors, and test applications, all without worry. I'm sure that Oracle will keep improving this feature; perhaps the next version will let us enable Flashback at the PDB level, among other functions. For now, the enhancements made in 12.1 and 12.2 are enough to work with non-CDBs, CDBs and PDBs.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala and holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. He currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the SELECT Journal Editor's Choice Award 2016. Deiby has been a speaker at Collaborate in Las Vegas, USA, and at Oracle OpenWorld in San Francisco, USA, and São Paulo, Brazil. Twitter | LinkedIn

 

Taking Care of Block Change Tracking While Duplicating Databases


By Deiby Gomez

Before Oracle 12c, pluggable databases didn't exist; whenever we wanted to clone a database we had to use either RMAN Backup and Restore or RMAN Duplicate (active or from a backup location). There are several use cases to consider when cloning databases, from low-criticality cases like cloning a database into a new, empty server to highly critical cases like cloning a production database on the same production server. Whatever the use case, most of the time a DBA considers only four kinds of files when planning to restore a database: spfile, controlfiles, datafiles and redo logs.

But what if the database uses Block Change Tracking? The database has several other files that we should also consider: for instance, the password file (for physical standby databases), the flashback logs (to use Flashback Database) and, of course, the block change tracking file (in the use case of this article, for RMAN duplicates and RMAN restores). In the following image we can see the "big picture" of an Oracle database:

 

 

But why is Block Change Tracking important to consider? Well, let me tell you a story. Several years ago I was working on duplicating a big database into a new server. The database was 11.2.0.3 and I was using RMAN Duplicate from Backup pieces. You can read more about RMAN duplicates in my previous articles:

 

I was preparing the environment on the target server: I made sure all the directories referenced by the spfile existed, that there was enough space to create the new database, and that the permissions on the directories were correct. I had approximately one day to perform this duplication, but the database was around 400 GB, so I knew it was going to take time. I decided to specify a different path (DB_FILE_NAME_CONVERT) for the datafiles, since the directory structure on the target server was different from the directory structure on the source server. I remember that day because it was a hard day for me, mostly because I had to deal with the time window (one day) and I was running out of time. Well, I prepared everything and then I launched the RMAN duplicate from backup pieces. It took several hours to restore the datafiles and a couple of hours more to recover them, but while recovering the datafiles I hit the following bug:

Bug 18371441 : RMAN DUPLICATE FAILS TO CREATE BCT FILE

The messages I received from RMAN were similar to the following:

RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/.../o1_mf_1_90_9m65pp1g_.arc'
ORA-00283: recovery session canceled due to errors
ORA-19755: could not open change tracking file
ORA-19750: change tracking file: '/oradata/.../blockchangetracking.dbf'
ORA-17503: ksfdopn:2 Failed to open file /oradata/l.../blockchangetracking.dbf

 

It was kind of strange for me to see that the block change tracking file was involved in an “alter database recover logfile” operation. I tried to complete the recovery of the database but was unsuccessful because the database reached the problematic SCN. I had to recover the database for a previous SCN, and the only way to fix this was to raise the RMAN duplicate again from the beginning. Here was where I started to feel the stress because I had only a few hours available to complete this task.

So, some advice: always check whether your database uses a block change tracking file if you are on one of the versions impacted by this bug.

The bug seems to occur when Block Change Tracking is enabled in the source database and a datafile was autoextended. When the redo sequence in which the datafile size was changed is applied to the auxiliary instance, the duplicate fails while trying to create the block change tracking file, leaving the database in an inconsistent state in the middle of a recovery process.
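The second trigger, autoextension, is easy to check from the data dictionary; the query below is a minimal sketch (the block change tracking check itself is shown later in this article):

SQL> select file_name, autoextensible from dba_data_files where autoextensible = 'YES';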

The bug seems to be confirmed in the following versions:

The bug has been fixed in the following:

 

I forgot to tell you that that day I also hit the following bug :)

Bug 11744544 - Set newname for database does not apply to block change tracking file (Doc ID 11744544.8)

Yes… it was a hard day for me. :(

 

So, I decided to disable the block change tracking file before applying the recovery. There are two ways to do this:

  • Restore the datafiles and recover until just before the SCN that requires the block change tracking file, or
  • Duplicate the database with UNTIL SCN

 

Similar to the following example:

duplicate database to 'db2' backup location '/home/oracle/backup' NOFILENAMECHECK UNTIL SCN <scn#>
DB_FILE_NAME_CONVERT=('/oradata/db1/','/oradata/db2/');

 

In the end I was able to duplicate the source database. It took me some extra hours, but the work was accepted by the customer and everybody was happy. :)

So the advice from this article is to always check whether the database is using block change tracking. There are several bugs related to block change tracking while duplicating databases with RMAN, and even while restoring and recovering databases. If the source database is using block change tracking, I advise that you disable it in the target database before the restore and recover; by doing so you will avoid the bugs related to it. This procedure is also recommended in the note: Rman Duplicate fail ORA-19755, Tries Open The Block Change Tracking File of Source DB (Doc ID 1098638.1).

You can use the following statement to check whether a database is using Block Change Tracking:

SQL> select * from V$BLOCK_CHANGE_TRACKING;

STATUS     FILENAME             BYTES      CON_ID
---------- -------------------- ---------- ----------
ENABLED    /home/oracle/bct.dbf 11599872   0

 

To disable block change tracking (in the target database) before applying the redo logs, you can use the following statement:

SQL> alter database disable block change tracking;

Database altered.

 

To enable block change tracking after successfully restoring or duplicating the new database you can use the following statement:

SQL> alter database enable block change tracking using file '/home/oracle/bct.dbf';

Database altered.

 

Now let me tell you a little bit more about some other scenarios that I also tested:

 

Using RMAN Backup / Restore – Source server not the same as Target Server

Usually, when we restore a database from a backup on another server, there are several activities involved: performing an RMAN backup of the source database with all the archived logs, transferring those backup files to the other server, preparing the target server with the spfile, restoring the controlfile, renaming all the files to a new directory in case the directory structure differs between the source and target servers, restoring the datafiles, applying the recovery, and finally opening the database with resetlogs. In this scenario, if we are duplicating a database that uses block change tracking and bug 18371441 is not present, all you have to do is ensure that the directory where the source block change tracking file is located also exists on the target server; otherwise you will receive the following error:

Executing: alter database enable block change tracking using file '/home/oracle/blockchangetracking/bct.dbf'
ORACLE error from auxiliary database: ORA-19751: could not create the change tracking file
ORA-19750: change tracking file: '/home/oracle/blockchangetracking/bct.dbf'
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 1

 

Using RMAN Backup / Restore – Source server the same as Target Server

If you are duplicating a database that uses block change tracking and you are not hitting bug 18371441, then don't worry about overwriting the current block change tracking file of the source database. Even if you didn't take the block change tracking file into consideration, when the RMAN duplicate tries to recreate the block change tracking file of the target database you will receive the following error:

Executing: alter database enable block change tracking using file '/home/oracle/blockchangetracking/bct.dbf'
ORACLE error from auxiliary database: ORA-19751: could not create the change tracking file
ORA-19750: change tracking file: '/home/oracle/blockchangetracking/bct.dbf'
ORA-27038: created file already exists
Additional information: 1

 

This error says that the new block change tracking file was not created because it "already exists". We know that the existing block change tracking file doesn't belong to the new database, but that is fine; as long as it was not overwritten, we are OK. We can safely enable block change tracking in the new database using a different file name, for example:
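A minimal sketch (the new file name is hypothetical; pick any path that does not clash with the source database):

SQL> alter database enable block change tracking using file '/home/oracle/blockchangetracking/bct_db2.dbf';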

 

Using RMAN Restore – Same Server

If a new database is being created by restoring an existing RMAN backup and the source database is on the same server where the target database will be stored, I suggest creating a controlfile "to trace" and then creating the controlfile from that trace file, after reviewing it to make sure that all the datafile paths are different from the paths of the original database. When the controlfile is recreated, block change tracking is automatically disabled. I confirmed this; a sketch of generating the trace script is shown below, followed by the CREATE CONTROLFILE statement I actually ran:
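Generating the trace script can be done as follows (the output path is hypothetical):

SQL> alter database backup controlfile to trace as '/tmp/create_controlfile_db2.sql';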

SQL> CREATE CONTROLFILE set DATABASE "DB2" RESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '/oradata/db2/redo01.log'  SIZE 50M BLOCKSIZE 512,
  GROUP 2 '/oradata/db2/redo02.log'  SIZE 50M BLOCKSIZE 512,
  GROUP 3 '/oradata/db2/redo03.log'  SIZE 50M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
  '/oradata/db2/system01.dbf',
  '/oradata/db2/sysaux01.dbf',
  '/oradata/db2/undotbs01.dbf',
  '/oradata/db2/users01.dbf'
CHARACTER SET WE8MSWIN1252
;

Control file created.

SQL> select * from V$BLOCK_CHANGE_TRACKING;

STATUS     FILENAME             BYTES
---------- -------------------- ----------
DISABLED

 

 

Conclusion:

This article presented a real use case involving a file that DBAs usually think is unimportant but that actually matters a lot. Block Change Tracking can be used in any database and is recommended for huge databases where backups take a long time. DBAs should pay attention to whether BCT is in use, because there are bugs, especially from 11.2.0.3 up to 12.1, like the ones presented in this article, that could make a DBA run out of time in a maintenance window. It is always recommended to review whether BCT is used and to proceed accordingly while restoring or duplicating a database. This article showed how to check whether BCT is in use and provided recommendations on how to proceed to avoid the well-known bugs.

How to rename an ASM Diskgroup


Oracle ASM was introduced in Oracle Database 10g, and several enhancements have been added with every version since. Nowadays ASM is the most common filesystem used by database administrators to store database files, and it is also highly recommended by Oracle. That said, maintenance tasks on ASM disks and ASM diskgroups are very frequent. In this article we will focus on only one of those tasks: renaming an ASM diskgroup. This sounds easy to perform, but we will see that it is not; it needs careful execution by the DBA, especially because it requires downtime. Whenever we need downtime it is necessary to coordinate with other areas of the company, such as the application team and sometimes the sysadmins. It is also highly recommended to have a backup of the database before proceeding. While renaming a diskgroup only the headers of the disks are modified, not the data; but as a best practice, and if you don't like headaches like me, it's better to have a backup.

In this article we will perform the following activity: we already have an ASM diskgroup called "DATA" and we will rename it to "DATA2".

 

The first step is to know which databases will be impacted if we unmount the ASM diskgroup "DATA". To find out, we can query the view v$asm_client, which shows which database instances are using the diskgroup we want to rename. First, let's find the group number of the diskgroup "DATA":

SQL> select group_number, name from v$asm_diskgroup where name='DATA';

GROUP_NUMBER NAME
------------ ------------------------------
1            DATA

SQL> select group_number, instance_name, db_name, status from v$asm_client where group_number=1;

GROUP_NUMBER INSTANCE_NAME   DB_NAME  STATUS
------------ --------------- -------- ------------
1            +ASM            +ASM     CONNECTED
1            orcl            orcl     CONNECTED

OK, we have found that there is one database instance using the diskgroup. Now it's time to review that database instance, because we have to shut it down.

[oracle@a1 ~]$ ps -ef |grep pmon
grid 3759 1 0 Oct19 ? 00:00:08 asm_pmon_+ASM
oracle 3851 1 0 Oct19 ? 00:00:09 ora_pmon_orcl
oracle 12038 12016 0 12:36 pts/2 00:00:00 grep pmon
[oracle@a1 ~]$

I will review where the datafiles of this database are located. This step is important, because not all databases keep their datafiles in a single ASM diskgroup; to avoid any surprises later I am checking the location of the datafiles. In this case, all the datafiles are located in the same ASM diskgroup, "DATA".


SQL> select name from v$datafile;

NAME
-------------------------------------------
+DATA/orcl/datafile/system.262.912909191
+DATA/orcl/datafile/sysaux.257.912909191
+DATA/orcl/datafile/undotbs1.261.912909191
+DATA/orcl/datafile/users.271.912909191
+DATA/orcl/datafile/tbs1.279.918100673
+DATA/orcl/datafile/tbs2.256.918102673

Now it's time to check out some information about the disks of the diskgroup "DATA":

SQL> select group_number, state, name, label, path from v$asm_disk where group_number=1;

GROUP_NUMBER STATE    NAME       PATH
------------ -------- ---------- -----------------------------
1            NORMAL   DATA_0002  /dev/oracleasm/disks/ASMDISK3
1            NORMAL   DATA_0001  /dev/oracleasm/disks/ASMDISK2
1            NORMAL   DATA_0000  /dev/oracleasm/disks/ASMDISK1

We see that three disks will be involved in the activity. As I said before, only the headers are modified, not the data.

Shutting down the database: This step is required because the ASM Diskgroup DATA must be unmounted. 

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
orcl

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

Unmount the ASM Diskgroup:

With the "grid" user, which is the owner of the Grid Infrastructure, we check the current status of the ASM diskgroup:

[grid@a1 ~]$ asmcmd lsdg
State   Type   Rebal Sector Block AU     Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED NORMAL Y     512    4096 1048576 115262   95528   51489           22019   0    N   DATA/
[grid@a1 ~]$

And now I will proceed to unmount it:

[grid@a1 ~]$ asmcmd umount DATA
[grid@a1 ~]$ asmcmd lsdg
[grid@a1 ~]$

Once the ASM diskgroup has been unmounted, we can proceed to rename it using the tool "renamedg".

 

Renaming the ASM Diskgroup:

To rename the ASM diskgroup we will use the tool "renamedg". As with most Oracle tools, "-help" will tell us a lot of useful information about how to use it. I recommend taking a couple of minutes to read the description of every option.

[grid@a1 ~]$ renamedg -help

Parsing parameters..
phase                Phase to execute,
                      (phase=ONE|TWO|BOTH), default BOTH

dgname               Diskgroup to be renamed

newdgname            New name for the diskgroup

config               intermediate config file

check                just check-do not perform actual operation,
                      (check=TRUE/FALSE), default FALSE

confirm              confirm before committing changes to disks,
                      (confirm=TRUE/FALSE), default FALSE

clean                ignore errors,
                      (clean=TRUE/FALSE), default TRUE

asm_diskstring       ASM Diskstring (asm_diskstring='discoverystring',
                      'discoverystring1' ...)

verbose              verbose execution,
                      (verbose=TRUE|FALSE), default FALSE

keep_voting_files    Voting file attribute,
                      (keep_voting_files=TRUE|FALSE), default FALSE

[grid@a1 ~]$

The most important thing to know about this tool is that it works in two phases.

  • Phase one: This phase generates a configuration file to be used in phase two.
  • Phase two: This phase uses the configuration file to perform the renaming of the disk group.

That said, I recommend running "renamedg" with the option "check=true" first. This way it will not write anything to the headers of the ASM disks; it will only perform phase one, which creates the configuration file, and it will verify the steps of phase two without actually performing them.

 

Running "renamedg" with "check=true":

[grid@a1 ~]$ renamedg phase=both dgname=DATA newdgname=DATA2 asm_diskstring='/dev/oracleasm/disks/' check=true verbose=true

Parsing parameters..

Parameters in effect:

Old DG name : DATA
New DG name : DATA2
Phases :
Phase 1
Phase 2
Discovery str : /dev/oracleasm/disks/
Check : TRUE
Clean : TRUE
Raw only : TRUE
renamedg operation: phase=both dgname=DATA newdgname=DATA2 asm_diskstring=/dev/oracleasm/disks/ check=true verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:/dev/oracleasm/disks/
Identified disk UFS:/dev/oracleasm/disks/ASMDISK1 with disk number:0 and timestamp (33017185 1812365312)
Identified disk UFS:/dev/oracleasm/disks/ASMDISK2 with disk number:1 and timestamp (33017185 1812365312)
Identified disk UFS:/dev/oracleasm/disks/ASMDISK3 with disk number:2 and timestamp (33017186 -1487072256)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:/dev/oracleasm/disks/
Identified disk UFS:/dev/oracleasm/disks/ASMDISK1 with disk number:0 and timestamp (33017185 1812365312)
Identified disk UFS:/dev/oracleasm/disks/ASMDISK2 with disk number:1 and timestamp (33017185 1812365312)
Identified disk UFS:/dev/oracleasm/disks/ASMDISK3 with disk number:2 and timestamp (33017186 -1487072256)
Checking if the diskgroup is mounted or used by CSS
Checking disk number:0
Checking disk number:1
Checking disk number:2
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for /dev/oracleasm/disks/ASMDISK1
Leaving the header unchanged
Looking for /dev/oracleasm/disks/ASMDISK2
Leaving the header unchanged
Looking for /dev/oracleasm/disks/ASMDISK3
Leaving the header unchanged
Completed phase 2
Terminating kgfd context 0x7fc8de5cb0a0
[grid@a1 ~]$

There are some important messages in this output. The message "Leaving the header unchanged" means that the disks were not modified: only phase one was performed (creating a config file), and the disks were checked without being changed. That's because we executed "renamedg" with the option "check=true".

After executing it we will see the config file created in the same directory where we ran "renamedg"; since we didn't specify a name for the config file, the default name "renamedg_config" is used:

[grid@a1 ~]$ ls -ltr renamedg_config
-rw-r--r-- 1 grid oinstall 123 Oct 20 12:54 renamedg_config
[grid@a1 ~]$

Let's take a look into the config file created by the phase one:

[grid@a1 ~]$ cat renamedg_config
/dev/oracleasm/disks/ASMDISK1 DATA DATA2
/dev/oracleasm/disks/ASMDISK2 DATA DATA2
/dev/oracleasm/disks/ASMDISK3 DATA DATA2
[grid@a1 ~]$

It seems that only the disks of the ASM diskgroup DATA are listed, in this case three disks; the second column seems to be the current name of the ASM diskgroup (DATA) and the third column seems to be the new name of the ASM diskgroup (DATA2).

 

Performing the ASM Diskgroup renaming: Since we already executed phase one, we will re-run "renamedg" for phase two only, using the config file generated by phase one:

[grid@a1 ~]$ renamedg dgname=DATA newdgname=DATA2 asm_diskstring='/dev/oracleasm/disks/' verbose=true phase=two config='/home/grid/renamedg_config'

Parsing parameters..

Parameters in effect:

Old DG name : DATA
New DG name : DATA2
Phases :
Phase 2
Discovery str : /dev/oracleasm/disks/
Clean : TRUE
Raw only : TRUE
renamedg operation: dgname=DATA newdgname=DATA2 asm_diskstring=/dev/oracleasm/disks/ verbose=true phase=two config=/home/grid/renamedg_config
Executing phase 2
Looking for /dev/oracleasm/disks/ASMDISK1
Modifying the header
Looking for /dev/oracleasm/disks/ASMDISK2
Modifying the header
Looking for /dev/oracleasm/disks/ASMDISK3
Modifying the header
Completed phase 2
Terminating kgfd context 0x7f7b3673c0a0
[grid@a1 ~]$

It takes just a few seconds to complete.

 

Mounting the ASM Diskgroup: The next step is to mount the ASM diskgroup; don't forget that you have to mount it using the new name, because it has already been renamed.

[grid@a1 ~]$ asmcmd mount DATA2
[grid@a1 ~]$ asmcmd lsdg
State   Type  Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED NORMAL Y   512   4096   1048576   115262   95528   51489   22019  0   N   DATA2/
[grid@a1 ~]$

After validating that the ASM diskgroup is again in MOUNTED status, we can proceed with the post-renaming steps.

 

Renaming the Spfile:

The first post-renaming step is to modify the spfile reference that the database instance uses, so we can open our database. In this case the database instance uses a pfile in $ORACLE_HOME/dbs, but that pfile is just a pointer to an spfile stored inside the ASM diskgroup "DATA". Since the new diskgroup name is "DATA2", we have to update that pointer:

[oracle@a1 ~]$ cat $ORACLE_HOME/dbs/initorcl.ora
SPFILE='+DATA/orcl/spfileorcl.ora'
[oracle@a1 ~]$
[oracle@a1 ~]$ vi $ORACLE_HOME/dbs/initorcl.ora
[oracle@a1 ~]$ cat $ORACLE_HOME/dbs/initorcl.ora
SPFILE='+DATA2/orcl/spfileorcl.ora'
[oracle@a1 ~]$

Once we have made that change, we can start the database instance in NOMOUNT state:

[oracle@a1 ~]$ sqlplus / as sysdba

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 1870647296 bytes
Fixed Size 2254304 bytes
Variable Size 503319072 bytes
Database Buffers 1358954496 bytes
Redo Buffers 6119424 bytes
SQL>

 

Modifying the Control File location in the spfile:

We have started the database instance, but before proceeding to mount it we have to do another step: change the location of the controlfiles inside the spfile. To do so, I am creating a temporary pfile from the current spfile:

SQL> create pfile='/home/oracle/stagePfile.ora' from spfile;

File created.

SQL>

I will update the controlfile location to point to the new ASM diskgroup:

[oracle@a1 ~]$ cat /home/oracle/stagePfile.ora|grep DATA
*.control_files='+DATA/orcl/controlfile/current.275.912909297'
[oracle@a1 ~]$

[oracle@a1 ~]$ vi /home/oracle/stagePfile.ora
[oracle@a1 ~]$ cat /home/oracle/stagePfile.ora|grep DATA
*.control_files='+DATA2/orcl/controlfile/current.275.912909297'
[oracle@a1 ~]$

Once the change is done, in order to create an spfile from the temporary pfile we have to shut down the instance again, re-create the spfile from the temporary pfile, and then start the database instance up to the MOUNT state:

[oracle@a1 ~]$ sqlplus / as sysdba

SQL> shutdown immediate;
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> create spfile='+DATA2/orcl/spfileorcl.ora' from pfile='/home/oracle/stagePfile.ora';

File created.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1870647296 bytes
Fixed Size 2254304 bytes
Variable Size 503319072 bytes
Database Buffers 1358954496 bytes
Redo Buffers 6119424 bytes
Database mounted.

Renaming the Database Files:

So far we have renamed the spfile and updated the controlfile locations. The last step is to rename every file used by the database: redo logs, datafiles, temporary files and, in case you are using a block change tracking file, that file as well. To rename these files, I have used the following query to generate the statements that do the work:

SQL> set head off
SQL> select 'alter database rename file '''||name||''' to '''||replace(name, 'DATA','DATA2')||''';' from v$datafile
union
select 'alter database rename file '''||member||''' to '''||replace(member, 'DATA','DATA2')||''';' from v$logfile
union
select 'alter database rename file '''||name||''' to '''||replace(name, 'DATA','DATA2')||''';' from v$tempfile;


alter database rename file '+DATA/orcl/onlinelog/group_1.259.916424605' to '+DATA2/orcl/onlinelog/group_1.259.916424605';
alter database rename file '+DATA/orcl/onlinelog/group_2.266.916424607' to '+DATA2/orcl/onlinelog/group_2.266.916424607';
alter database rename file '+DATA/orcl/onlinelog/group_3.270.916424607' to '+DATA2/orcl/onlinelog/group_3.270.916424607';
alter database rename file '+DATA/orcl/tempfile/temp.263.912909305' to '+DATA2/orcl/tempfile/temp.263.912909305';
alter database rename file '+DATA/orcl/datafile/sysaux.257.912909191' to '+DATA2/orcl/datafile/sysaux.257.912909191';
alter database rename file '+DATA/orcl/datafile/system.262.912909191' to '+DATA2/orcl/datafile/system.262.912909191';
alter database rename file '+DATA/orcl/datafile/tbs1.279.918100673' to '+DATA2/orcl/datafile/tbs1.279.918100673';
alter database rename file '+DATA/orcl/datafile/tbs2.256.918102673' to '+DATA2/orcl/datafile/tbs2.256.918102673';
alter database rename file '+DATA/orcl/datafile/undotbs1.261.912909191' to '+DATA2/orcl/datafile/undotbs1.261.912909191';
alter database rename file '+DATA/orcl/datafile/users.271.912909191' to '+DATA2/orcl/datafile/users.271.912909191';

10 rows selected.

The statements to rename every file used by the database have been generated; all we have to do is execute them:


SQL> alter database rename file '+DATA/orcl/datafile/system.262.912909191' to '+DATA2/orcl/datafile/system.262.912909191';

Database altered.

SQL> alter database rename file '+DATA/orcl/datafile/sysaux.257.912909191' to '+DATA2/orcl/datafile/sysaux.257.912909191';

Database altered.

SQL> alter database rename file '+DATA/orcl/datafile/undotbs1.261.912909191' to '+DATA2/orcl/datafile/undotbs1.261.912909191';

Database altered.

SQL> alter database rename file '+DATA/orcl/datafile/users.271.912909191' to '+DATA2/orcl/datafile/users.271.912909191';

Database altered.

SQL> alter database rename file '+DATA/orcl/datafile/tbs1.279.918100673' to '+DATA2/orcl/datafile/tbs1.279.918100673';

Database altered.

SQL> alter database rename file '+DATA/orcl/datafile/tbs2.256.918102673' to '+DATA2/orcl/datafile/tbs2.256.918102673';

Database altered.

SQL> alter database rename file '+DATA/orcl/onlinelog/group_1.259.916424605' to '+DATA2/orcl/onlinelog/group_1.259.916424605';

Database altered.

SQL> alter database rename file '+DATA/orcl/onlinelog/group_2.266.916424607' to '+DATA2/orcl/onlinelog/group_2.266.916424607';

Database altered.

SQL> alter database rename file '+DATA/orcl/onlinelog/group_3.270.916424607' to '+DATA2/orcl/onlinelog/group_3.270.916424607';

Database altered.

SQL> alter database rename file '+DATA/orcl/tempfile/temp.263.912909305' to '+DATA2/orcl/tempfile/temp.263.912909305';

Database altered.

 

Opening the database in read-write:

Once all the files have been renamed, we are ready to open the database normally and verify that it is in READ WRITE mode:
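The OPEN statement itself is not captured in the listing below; it is simply the standard command:

SQL> alter database open;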

SQL> set head on
SQL> select name , open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
ORCL      READ WRITE

SQL>


Oracle Database 12c Deferred Global Index


By Deiby Gómez

Introduction:

Before Oracle 12.1.0.1, whenever a table partition was dropped, all the global indexes created on that table became UNUSABLE unless we specified the clause UPDATE INDEXES in the ALTER TABLE (…) DROP PARTITION statement. If we didn't specify the clause UPDATE INDEXES, the index had to be rebuilt. Why? Look at the following picture:

In this picture we have a table with 2 columns and 5 rows, and an index created on that table. The index was created on the column “ID” of the table, and its index entries include the row address (ROWID). The ROWID is an address to the physical location of a row (which datafile, which block, which position in the block).

 

Now, think what would happen if we dropped table partition 2, which includes the values “Argentina” and “Colombia”. It would result in something like the following:

 

We see that the values “Argentina” and “Colombia” don’t exist anymore because they were deleted when the partition was dropped. However, we see on the left side that the index entries related to the values of the dropped table partition still exist in the index. They are now orphaned index entries, because the rows in the table to which they point don’t exist anymore. So in this situation what the index needs is: maintenance.

For small indexes the performance overhead related to the index maintenance could be acceptable, but when we are using big tables and create global indexes on them, our indexes could also be big (GBs). Rebuilding an index impacts performance since it uses several IO resources to scan the table and create B-tree (or bitmap) branch and leaf nodes. The performance problem increases if the table is large, as we said previously. On the other hand, if we use the UPDATE INDEXES clause we get the following benefits:

  • The indexes are updated with the base table operation. You are not required to update later or rebuild the indexes.
  • The global indexes are more highly available, because they are not marked UNUSABLE. These indexes can be used with a small performance overhead while the index completes its maintenance (all the orphaned entries are fixed).
  • You don’t have to determine which indexes are dependent on the table from which we are dropping the partition. All the dependent indexes get maintained automatically without DBA intervention.

There is a trick you can use to drop a table partition without making the global indexes unusable; however, that trick is not always possible. You can read about this trick in my article “The trick of dropping a table partition without impact the Global Index”.

Oracle has fixed this downside of performing the index maintenance immediately after dropping a table partition: Oracle Database 12.1.0.1 introduced the feature “Deferred Global Index”, which allows you to drop a table partition, keep the dependent global indexes available and in USABLE state, and leave the maintenance for later. The DBA can decide when the index maintenance is performed via the job “PMO_DEFERRED_GIDX_MAINT_JOB”, which exists by default. Oracle is able to use an index that has orphaned entries to retrieve rows from a table without compromising the reliability of the rows returned.

To explain how this works, I will drop a table partition in a database version 11.2.0.3 and we will see that when we specify UPDATE INDEXES, the index maintenance is performed immediately. For large indexes this could cause performance degradation in the database.

 

Dropping a table partition in Oracle Database 11.2.0.3:

Creating a table for testing purposes:

SQL> create table dgomez.table1 (col1 varchar2(20))
partition by list (col1)
(partition dgomez_table1_p1 VALUES ('guatemala'),
partition dgomez_table1_p2 VALUES ('brasil'),
partition dgomez_table1_p3 VALUES ('colombia'));

Table created.

 Inserting some rows into the table:

SQL> insert into dgomez.table1 values ('guatemala');
SQL> insert into dgomez.table1 values ('brasil');
SQL> insert into dgomez.table1 values ('colombia');
SQL> commit;

Gathering stats so that we can query the right metadata:

 

SQL> exec dbms_stats.gather_table_stats('DGOMEZ','TABLE1');

PL/SQL procedure successfully completed.

 

Checking how many rows there are in each table partition. One row was created in each partition: the partition “DGOMEZ_TABLE1_P1” has the value “guatemala”, the next partition the value “brasil”, and the last partition the value “colombia”.

 

SQL> select TABLE_NAME, PARTITION_NAME,NUM_ROWS from dba_tab_partitions where TABLE_OWNER='DGOMEZ' and table_name='TABLE1'

TABLE_NAME PARTITION_NAME       NUM_ROWS
---------- -------------------- ----------
TABLE1     DGOMEZ_TABLE1_P1     1
TABLE1     DGOMEZ_TABLE1_P2     1
TABLE1     DGOMEZ_TABLE1_P3     1

Creating a Global Index:

 SQL> create index dgomez.index1 on dgomez.table1 (col1) global ;

Index created.

Looking at the index internals, we can see that the values were successfully indexed.

Leaf block dump
===============
header address 140176766509668=0x7f7d725fa264
kdxcolev 0
KDXCOLEV Flags = - - -
kdxcolok 0
kdxcoopc 0x80: opcode=0: iot flags=--- is converted=Y
kdxconco 2
kdxcosdc 0
kdxconro 3
kdxcofbo 42=0x2a
kdxcofeo 7967=0x1f1f
kdxcoavs 7925
kdxlespl 0
kdxlende 0
kdxlenxt 0=0x0
kdxleprv 0=0x0
kdxledsz 0
kdxlebksz 8032
row#0[8012] flag: ------, lock: 0, len=20
col 0; len 6; (6): 62 72 61 73 69 6c -- brasil
col 1; len 10; (10): 00 01 2d f8 01 00 05 32 00 00
row#1[7990] flag: ------, lock: 0, len=22
col 0; len 8; (8): 63 6f 6c 6f 6d 62 69 61 -- colombia
col 1; len 10; (10): 00 01 2d f9 01 00 09 32 00 00
row#2[7967] flag: ------, lock: 0, len=23
col 0; len 9; (9): 67 75 61 74 65 6d 61 6c 61 -- guatemala
col 1; len 10; (10): 00 01 2d f7 01 00 01 32 00 00
----- end of leaf block dump -----
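For reference, a leaf block dump like the one above can be produced with a tree dump of the index followed by a block dump; a minimal sketch (the object_id, datafile number and block number below are placeholders for illustration, and the output is written to the session trace file):

SQL> select object_id from dba_objects where owner='DGOMEZ' and object_name='INDEX1';

SQL> alter session set events 'immediate trace name treedump level 12345';  -- 12345 = the object_id returned above

SQL> alter system dump datafile 4 block 170;  -- file# and block# of the leaf block, taken from the treedump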

 

Now we will drop a table partition using the UPDATE INDEXES clause and we will review the result:

SQL> alter table dgomez.table1 drop partition dgomez_table1_p1 update indexes;

Table altered.

Looking at the Index internals after dropping a table partition allows us to see that the index maintenance was performed immediately. The index entry “guatemala” was marked with the tag “D”, which means the index entry was “deleted” and it can be reused by another index entry. If you want to know more about what this tag means you can read my presentation “Oracle Indexes: From the Concept to Internals”.

Leaf block dump
===============
header address 140655176049252=0x7fecd5cde264
kdxcolev 0
KDXCOLEV Flags = - - -
kdxcolok 0
kdxcoopc 0x80: opcode=0: iot flags=--- is converted=Y
kdxconco 2
kdxcosdc 0
kdxconro 3
kdxcofbo 42=0x2a
kdxcofeo 7967=0x1f1f
kdxcoavs 7925
kdxlespl 0
kdxlende 1
kdxlenxt 0=0x0
kdxleprv 0=0x0
kdxledsz 0
kdxlebksz 8032
row#0[8012] flag: ------, lock: 0, len=20
col 0; len 6; (6): 62 72 61 73 69 6c -- brasil
col 1; len 10; (10): 00 01 2d f8 01 00 05 32 00 00
row#1[7990] flag: ------, lock: 0, len=22
col 0; len 8; (8): 63 6f 6c 6f 6d 62 69 61 -- colombia
col 1; len 10; (10): 00 01 2d f9 01 00 09 32 00 00
row#2[7967] flag: ---D--, lock: 2, len=23
col 0; len 9; (9): 67 75 61 74 65 6d 61 6c 61 -- guatemala
col 1; len 10; (10): 00 01 2d f7 01 00 01 32 00 00
----- end of leaf block dump -----

In this case the index maintenance resulted in only one index entry marked as “D”. But think about large indexes: how long would index maintenance take to fix the orphaned entries in a global index of 600GB? How many sessions would be impacted by the performance degradation of this operation? We will see how Oracle enhancements have fixed this situation in the next section.

 

Dropping a table partition in Oracle Database 12.1.0.2:

Creating a table for testing purposes:

SQL> create table dgomez.table1 (col1 varchar2(20))
partition by list (col1)
(partition dgomez_table1_p1 VALUES ('guatemala'),
partition dgomez_table1_p2 VALUES ('brasil'),
partition dgomez_table1_p3 VALUES ('colombia'));

Table created.

Inserting some rows into the table: 

SQL> insert into dgomez.table1 values ('guatemala');
SQL> insert into dgomez.table1 values ('brasil');
SQL> insert into dgomez.table1 values ('colombia');
SQL> commit;

Gathering stats so that we can query the right metadata:

SQL> exec dbms_stats.gather_table_stats('DGOMEZ','TABLE1');

PL/SQL procedure successfully completed.

 

Checking how many rows there are in each table partition: 

SQL> select TABLE_NAME, PARTITION_NAME,NUM_ROWS from dba_tab_partitions where TABLE_OWNER='DGOMEZ' and table_name='TABLE1'

TABLE_NAME PARTITION_NAME       NUM_ROWS
---------- -------------------- ----------
TABLE1     DGOMEZ_TABLE1_P1     1
TABLE1     DGOMEZ_TABLE1_P2     1
TABLE1     DGOMEZ_TABLE1_P3     1

 

Creating a Global Index:

SQL> create index dgomez.index1 on dgomez.table1 (col1) global ;

Index created.

 

Looking at the index internals. The three values were successfully indexed. 

Leaf block dump
===============
header address 140082935337572=0x7f6799999264
kdxcolev 0
KDXCOLEV Flags = - - -
kdxcolok 0
kdxcoopc 0x80: opcode=0: iot flags=--- is converted=Y
kdxconco 2
kdxcosdc 0
kdxconro 3
kdxcofbo 42=0x2a
kdxcofeo 7967=0x1f1f
kdxcoavs 7925
kdxlespl 0
kdxlende 0
kdxlenxt 0=0x0
kdxleprv 0=0x0
kdxledsz 0
kdxlebksz 8032
row#0[8012] flag: -------, lock: 0, len=20
col 0; len 6; (6): 62 72 61 73 69 6c -- brasil
col 1; len 10; (10): 00 02 11 a2 01 80 08 8e 00 00
row#1[7990] flag: -------, lock: 0, len=22
col 0; len 8; (8): 63 6f 6c 6f 6d 62 69 61 -- colombia
col 1; len 10; (10): 00 02 11 a3 01 80 0c 8e 00 00
row#2[7967] flag: -------, lock: 0, len=23
col 0; len 9; (9): 67 75 61 74 65 6d 61 6c 61 -- guatemala
col 1; len 10; (10): 00 02 11 a1 01 80 04 8e 00 00
----- end of leaf block Logical dump -----

 

I will enable autotrace on my session to see that the index is indeed used:
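The autotrace setup itself isn’t shown in the transcript; assuming the session has the required privileges (PLUSTRACE role or DBA), it would be along these lines:

SQL> set autotrace on explain

With that in place, the query and its plan look like this: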

SQL> select * from dgomez.table1 where col1='brasil';

COL1
--------------------
brasil

Execution Plan
----------------------------------------------------------
Plan hash value: 1498168486

---------------------------------------------------------------------------
| Id | Operation        | Name   | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
| 0  | SELECT STATEMENT |        | 1    | 7     | 1 (0)      | 00:00:01 |
|* 1 | INDEX RANGE SCAN | INDEX1 | 1    | 7     | 1 (0)      | 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("COL1"='brasil')

 

Look at the end of the output above: only one predicate was applied, COL1=’brasil’. It is important to note this because we will come back to it later.

Reviewing whether the index has orphaned entries: in Oracle 12.1.0.2 the column ORPHANED_ENTRIES was introduced. This column indicates whether an index has orphaned index entries; if it does, the index is marked as a candidate for maintenance on the next execution of the job “PMO_DEFERRED_GIDX_MAINT_JOB”. In this case the index doesn’t have orphaned index entries, as we can see with the following query:

SQL> select index_name, status,orphaned_entries from dba_indexes where owner='DGOMEZ' and index_name='INDEX1';

INDEX_NAME STATUS   ORP
---------- -------- ---
INDEX1     VALID    NO

Now I will proceed to drop a table partition using the UPDATE INDEXES clause:

 

SQL> alter table dgomez.table1 drop partition dgomez_table1_p1 update indexes;

Table altered.

 

Immediately after dropping the table partition, we can see that the indexes that depend on the table have orphaned index entries:

SQL> select index_name, status,orphaned_entries from dba_indexes where owner='DGOMEZ' and index_name='INDEX1';

INDEX_NAME STATUS   ORP
---------- -------- ---
INDEX1     VALID    YES

 

However, the behavior in Oracle 12.1.0.2 is different: if we look at the index internals, the index entries have not been touched as they were in Oracle 11.2.0.3.

Leaf block dump
===============
header address 140533172339300=0x7fd06dd10264
kdxcolev 0
KDXCOLEV Flags = - - -
kdxcolok 0
kdxcoopc 0x80: opcode=0: iot flags=--- is converted=Y
kdxconco 2
kdxcosdc 0
kdxconro 3
kdxcofbo 42=0x2a
kdxcofeo 7967=0x1f1f
kdxcoavs 7925
kdxlespl 0
kdxlende 0
kdxlenxt 0=0x0
kdxleprv 0=0x0
kdxledsz 0
kdxlebksz 8032
row#0[8012] flag: -------, lock: 0, len=20
col 0; len 6; (6): 62 72 61 73 69 6c -- brasil
col 1; len 10; (10): 00 02 11 a2 01 80 08 8e 00 00
row#1[7990] flag: -------, lock: 0, len=22
col 0; len 8; (8): 63 6f 6c 6f 6d 62 69 61 -- colombia
col 1; len 10; (10): 00 02 11 a3 01 80 0c 8e 00 00
row#2[7967] flag: -------, lock: 0, len=23
col 0; len 9; (9): 67 75 61 74 65 6d 61 6c 61 -- guatemala
col 1; len 10; (10): 00 02 11 a1 01 80 04 8e 00 00
----- end of leaf block Logical dump -----

This means that Index maintenance was not performed immediately. This leads us to the question: Is the index usable if it has orphaned index entries? Yes, it is! I will run another query to confirm it:

SQL> select * from dgomez.table1 where col1='guatemala';

COL1
--------------------
guatemala


Execution Plan
----------------------------------------------------------
Plan hash value: 1498168486

---------------------------------------------------------------------------
| Id | Operation        | Name   | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
| 0  | SELECT STATEMENT |        | 1    | 7     | 1 (0)      | 00:00:01 |
|* 1 | INDEX RANGE SCAN | INDEX1 | 1    | 7     | 1 (0)      | 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("COL1"='guatemala')
filter(TBL$OR$IDX$PART$NUM("DGOMEZ"."TABLE1",0,8,0,"TABLE1".ROWID)=1)

 

The query was able to use the index “INDEX1” even though the index has orphaned index entries and, more interestingly, the values were returned correctly. This confirms that an index with orphaned entries can be used to retrieve rows from a table without compromising the reliability of the data returned. But how? This is a good question; you can see at the end of the AUTOTRACE output that a new filter was used. This filter uses the undocumented function “TBL$OR$IDX$PART$NUM”, which is in fact responsible for the magic here. Unfortunately, there is not much documentation on it, but I will provide some insights:

 

Definition:

The TBL$OR$IDX$PART$NUM function is used to find out which partition a particular row belongs to. This function is undocumented. It has the format TBL$OR$IDX$PART$NUM(PARTITIONED_TABLE_NAME,0,d#,p#,COLUMN_NAME).

I tried to use the function without knowing how it worked, but it was unsuccessful:

SQL> select TBL$OR$IDX$PART$NUM("DGOMEZ"."TABLE1",0,8,0,'ROWID') from dual;
select TBL$OR$IDX$PART$NUM("DGOMEZ"."TABLE1",0,8,0,'ROWID') from dual
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [evapnum:dim8rowid], [], [], [], [],
[], [], [], [], [], [], []

 

What I saw from my tests is that the function changes its behavior depending on the value of the parameter “d#”. For some values of “d#” the function requires even more arguments. Unfortunately, which value maps to which behavior is not documented.

With some traces I was able to capture the following SQL statements that were used while the ALTER TABLE (…) DROP PARTITION was executed with the clause UPDATE INDEXES:

 

select count(*) from SYS.INDEX_ORPHANED_ENTRY_V$ ioe where ioe.index_object_id = :1 and ioe.type != 'H' and NOT EXISTS (select * from SYS.INDEX_ORPHANED_ENTRY_V$ ioe2

select text from view$ where rowid=:1

select u.name, o.name, o.namespace, o.type#, decode(bitand(i.property,1024),0,0,1), o.obj# from ind$ i,obj$ o,user$ u where i.obj#=:1 and o.obj#=i.bo# and o.owner#=u.user#

select u.name, o.name, o.namespace, o.type#, decode(bitand(i.property,1024),0,0,1), o.obj# from ind$ i,obj$ o,user$ u where i.obj#=:1 and o.obj#=i.bo# and o.owner#=u.user#

select u.name, o.name, o.namespace, o.type#, decode(bitand(i.property,1024),0,0,1), o.obj# from ind$ i,obj$ o,user$ u where i.obj#=:1 and o.obj#=i.bo# and o.owner#=u.user#

delete from index_orphaned_entry$ where indexobj#=:1 and           tabpartdobj# = :2

delete from index_orphaned_entry$ where indexobj#=:1

delete from superobj$ where subobj# = :1

delete from tab_stats$ where obj#=:1

 

It seems that SYS.INDEX_ORPHANED_ENTRY_V$ and SYS.INDEX_ORPHANED_ENTRY$ are used to track which indexes have orphaned entries. What I was not able to see is how Oracle tracks exactly which index entries are orphaned. Somewhere Oracle keeps information about which index entries must be marked as “D” when it performs the index maintenance.

 

Looking at the tables related to orphaned index entries:

SQL> desc SYS.INDEX_ORPHANED_ENTRY_V$
Name              Null? Type
----------------- -------- ----------------------------
INDEX_OWNER       NOT NULL VARCHAR2(128)
INDEX_NAME        NOT NULL VARCHAR2(128)
INDEX_SUBNAME              VARCHAR2(128)
INDEX_OBJECT_ID   NOT NULL NUMBER
TABLE_OWNER       NOT NULL VARCHAR2(128)
TABLE_NAME        NOT NULL VARCHAR2(128)
TABLE_SUBNAME              VARCHAR2(128)
TABLE_OBJECT_ID   NOT NULL NUMBER


SQL> select * from SYS.INDEX_ORPHANED_ENTRY_V$;

INDEX_OWNE INDEX_NAME INDEX_SUBN INDEX_OBJECT_ID TABLE_OWNE TABLE_NAME TABLE_SUBN TABLE_OBJECT_ID T
---------- ---------- ---------- --------------- ---------- ---------- ---------- --------------- -
DGOMEZ     INDEX1                136728          DGOMEZ     TABLE1                136724          O

SQL> desc sys.index_orphaned_entry$;
Name          Null? Type
------------- -------- --------------------
INDEXOBJ#     NOT NULL NUMBER
TABPARTDOBJ#  NOT NULL NUMBER
HIDDEN                 VARCHAR2(1)

SQL> select * from index_orphaned_entry$;

INDEXOBJ#  TABPARTDOBJ# H
---------- ------------ -
136728     136727       O

 

I will drop another table partition in order to see whether the filter that uses the function TBL$OR$IDX$PART$NUM changes.

SQL> alter table dgomez.table1 drop partition dgomez_table1_p3 update indexes;

Table altered.

 

Executing a query with Autotrace:

SQL> select * from dgomez.table1 where col1='colombia';

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 1498168486

---------------------------------------------------------------------------
| Id | Operation        | Name   | Rows  | Bytes      | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0  | SELECT STATEMENT |        | 1     | 8          | 1 (0)      | 00:00:01 |
|* 1 | INDEX RANGE SCAN | INDEX1 | 1     | 8          | 1 (0)      | 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("COL1"='colombia')
filter(TBL$OR$IDX$PART$NUM("DGOMEZ"."TABLE1",0,8,0,"TABLE1".ROWID)=1)

 

We see that the filter didn’t change. This is even more interesting. We have confirmed that the filter doesn’t depend on how many table partitions we drop.

Reviewing when the next execution of the job that performs index maintenance will be:

SQL> select job_name, enabled,run_count, to_char(NEXT_RUN_DATE,'mm-dd-yyyy hh24:mi') from dba_scheduler_jobs where job_name='PMO_DEFERRED_GIDX_MAINT_JOB'

JOB_NAME                       ENABL RUN_COUNT TO_CHAR(NEXT_RUN
------------------------------ ----- ---------- ----------------
PMO_DEFERRED_GIDX_MAINT_JOB    TRUE  2          03-26-2017 02:00

Reviewing what the job executes:

SQL> select source, PROGRAM_OWNER, PROGRAM_NAME from dba_scheduler_jobs where job_name='PMO_DEFERRED_GIDX_MAINT_JOB'

SOURCE     PROGRAM_OWNER        PROGRAM_NAME
---------- -------------------- ------------------------------
           SYS                  PMO_DEFERRED_GIDX_MAINT

Looking at the procedure that is executed by the job:

SQL>select program_action from dba_SCHEDULER_PROGRAMS where program_name='PMO_DEFERRED_GIDX_MAINT'

PROGRAM_ACTION
--------------------------------------------------------------------------------
dbms_part.cleanup_gidx_internal(noop_okay_in => 1);

 

Checking the functions included in the package DBMS_PART:

SQL> DESC DBMS_PART
PROCEDURE CLEANUP_GIDX
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
SCHEMA_NAME_IN     VARCHAR2 IN DEFAULT
TABLE_NAME_IN      VARCHAR2 IN DEFAULT
PROCEDURE CLEANUP_GIDX_INTERNAL
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
SCHEMA_NAME_IN     VARCHAR2   IN DEFAULT
TABLE_NAME_IN      VARCHAR2   IN DEFAULT
ORPHANS_ONLY_IN    NUMBER(38) IN DEFAULT
NOOP_OKAY_IN NUMBER(38) IN DEFAULT
PROCEDURE CLEANUP_ONLINE_OP
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
SCHEMA_NAME        VARCHAR2 IN DEFAULT
TABLE_NAME         VARCHAR2 IN DEFAULT
PARTITION_NAME     VARCHAR2 IN DEFAULT

 

Performing an index maintenance manually:

SQL> exec dbms_part.cleanup_gidx_internal(noop_okay_in => 1);

PL/SQL procedure successfully completed.
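If you only want to clean up the global indexes of a specific table rather than the whole database, the CLEANUP_GIDX procedure listed in the DESC output above takes the schema and table name; a minimal sketch using the parameter names shown by DESC:

SQL> exec dbms_part.cleanup_gidx(schema_name_in => 'DGOMEZ', table_name_in => 'TABLE1');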

 

Checking out if the index has orphaned entries:

SQL> select index_name, status,orphaned_entries from dba_indexes where owner='DGOMEZ' and index_name='INDEX1'

INDEX_NAME           STATUS   ORP
-------------------- -------- ---
INDEX1               VALID    NO

 

We can see that after the index maintenance, the index internals changed; now we have only one index entry. It is interesting to note that the two orphaned index entries were not marked as “D” as in 11.2.0.3; they were in fact deleted from the index.

Leaf block dump
===============
header address 140442250240612=0x7fbb426fe264
kdxcolev 0
KDXCOLEV Flags = - - -
kdxcolok 0
kdxcoopc 0x80: opcode=0: iot flags=--- is converted=Y
kdxconco 2
kdxcosdc 0
kdxconro 1
kdxcofbo 38=0x26
kdxcofeo 7967=0x1f1f
kdxcoavs 7974
kdxlespl 0
kdxlende 0
kdxlenxt 0=0x0
kdxleprv 0=0x0
kdxledsz 0
kdxlebksz 8032
row#0[8012] flag: -------, lock: 0, len=22
col 0; len 8; (8): 63 6f 6c 6f 6d 62 69 61 -- colombia
col 1; len 10; (10): 00 02 11 a3 01 80 0c 8e 00 00
----- end of leaf block Logical dump ---- 

Conclusion:

Starting with Oracle Database 12.1.0.1 we can drop table partitions without worrying about index maintenance (at least not immediately). The database keeps working normally, without performance overhead, because the index maintenance is not performed immediately. The DBA can decide when the index maintenance will be performed via a scheduled job and, even better, has the option to perform the index maintenance manually. We also saw a comparison of this behavior between 11.2.0.3 and 12.1.0.2, and we looked at index internals so that we could see how the concept is linked to the internals.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in Sao Paulo, Brazil. Twitter | LinkedIn

Introduction to Oracle SQL Plan Directives in Oracle Database 12.2


By Deiby Gómez

 

Introduction:

An execution plan is composed of the steps that the optimizer performs in order to process a SQL statement. The Oracle Optimizer always tries to find the best execution plan for a SQL statement, taking into consideration several things, such as access paths, parallelism, statistics, histograms, bind variables, database parameters, etc. However, there are situations when the optimizer doesn’t create the right execution plan because the information it used to create the “best plan” was not correct, not up to date, or not sufficient; an example of this (but not the only reason) is when statistics are stale and the data has changed considerably in the tables involved in the SQL statement. In such cases, the SQL statement will be executed with an execution plan that the optimizer thinks is the best, but actually it is not.

Oracle has designed several features that make the optimizer aware that there is “something wrong” with the actual execution plan; once the optimizer is aware of that it takes “feedback” and then creates a new execution plan, or takes some others action to “adapt” itself to the environment or data change. The set of features that make the optimizer “adapt” itself to the changes in the environment (for instance, database parameters) or in the data (for instance, skewed data) are called “adaptive features”. In Oracle Database 12.1.0.1 probably the most popular words were “adaptive” and “multi-tenant”; it was the first version with several new features that included the word “adaptive”.  For instance, adaptive index compression, adaptive query optimization, adaptive plans, adaptive joins, adaptive parallel and several additional adaptive things!

“Adaptive Features” comprise two categories: “Adaptive Plans” and “Adaptive Statistics”.

 

In 12.1.0.1 all the “adaptive” features were controlled by the database parameter “optimizer_adaptive_features”; however, in 12.2.0.1 that changed and now the database parameter “optimizer_adaptive_features” has been broken up into two new database parameters: optimizer_adaptive_plans and optimizer_adaptive_statistics. Each parameter controls a category of Adaptive Features. The database parameter “optimizer_adaptive_features” doesn’t exist in 12.2.0.1. 

The definition of the parameter in 12.1.0.1:

  • optimizer_adaptive_features enables or disables all of the adaptive optimizer features, including adaptive plan (adaptive join methods and bitmap pruning), automatic re-optimization, SQL plan directives, and adaptive distribution methods.

The definition of the parameters in 12.2.0.1:

  • optimizer_adaptive_plans controls adaptive plans. Adaptive plans are execution plans built with alternative choices that are decided at runtime based on statistics collected as the query executes.
  • optimizer_adaptive_statistics controls adaptive statistics. Some query shapes are too complex to rely on base table statistics alone, so the optimizer augments these statistics with adaptive statistics.

Oracle SQL Plan Directives are part of the category “Adaptive Statistics”. Basically, they are notes that the optimizer writes and stores in the database to “adapt” itself to environment or data changes. For example, if the optimizer sees that the actual rows are considerably different from the estimated rows, then the optimizer writes a note to “remember” what happened, so that in the next execution of the same SQL statement (or one with the same query expressions) the optimizer can take actions to fix it. These notes taken by the optimizer are called “SQL Plan Directives”. SQL Plan Directives are not tied to a specific SQL_ID; they are based on a query expression rather than defined at the SQL statement level. This makes the SQL Plan Directives usable for other SQL_IDs as long as the query expression is the same. SQL Plan Directives can be queried using the views DBA_SQL_PLAN_DIR_OBJECTS and DBA_SQL_PLAN_DIRECTIVES.

If you want to see a comparison between SQL Plan Directives in 12.1 and 12.2 you can read this good article written by Mauro Pagano.

Oracle automatically handles everything related to SQL Plan Directives; it creates and maintains them. The only operations Oracle allows on SQL Plan Directives are the following:

  • Flush the SQL Plan Directives to disk.
  • Delete a SQL Plan Directive.
  • Export a SQL Plan Directive.
  • Import a SQL Plan Directive.

How to flush the SQL Plan Directives to disk: When a SQL Plan Directive is created, it is created only in memory. Oracle flushes all the new SQL Plan Directives to disk every 15 minutes. However, if you want to flush the SQL Plan Directives manually you can use the following statement:

BEGIN
  DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE;
END;
/

Delete a SQL Plan Directive: To delete a SQL Plan Directive you can use the following SQL statement where the only value requested is the ID of the SQL Plan Directive:

SQL> exec dbms_spd.drop_sql_plan_directive ('<SPD ID>');

PL/SQL procedure successfully completed.

Export and Import SQL Plan Directives: SQL Plan Directives are transported to other databases following the same method that we use to transfer SQL Tuning Sets. This article doesn’t cover those steps, but for more details you can see the Metalink Note: How to Transport SQL Plan Directives (SPD) From One Database to Another (Doc ID 2064227.1)
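Roughly, the flow uses a staging table, in the same spirit as SQL Tuning Sets. The sketch below is only an outline (the staging table name is an example, and the parameter usage should be verified against the MOS note):

-- On the source database: create a staging table and pack the directives into it
SQL> exec DBMS_SPD.CREATE_STGTAB_DIRECTIVE(table_name => 'SPD_STAGE', table_owner => 'DGOMEZ');
SQL> variable n number
SQL> exec :n := DBMS_SPD.PACK_STGTAB_DIRECTIVE(table_name => 'SPD_STAGE', table_owner => 'DGOMEZ');

-- Move the table DGOMEZ.SPD_STAGE to the target database (for example with Data Pump), then unpack it there:
SQL> exec :n := DBMS_SPD.UNPACK_STGTAB_DIRECTIVE(table_name => 'SPD_STAGE', table_owner => 'DGOMEZ');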

Now after covering these useful concepts, let’s do an example!

In this example I am using Oracle Database 12.2.0.1 Enterprise Edition. I have the table dgomez.employee:

SQL> desc dgomez.employee
Name       Null?   Type
--------- -------- ----------------------------
AGE                NUMBER
NAME               VARCHAR2(20)
COUNTRY            VARCHAR2(20)

In the table I have only one row with the data of one employee.
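The DDL and the seed row aren’t shown in the article; a minimal sketch of how the table could have been created and populated (column types taken from the DESC output above):

-- sketch only; the author's exact DDL is not shown
create table dgomez.employee (age number, name varchar2(20), country varchar2(20));
insert into dgomez.employee values (21, 'Deiby', 'Guatemala');
commit;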

select /*+gather_plan_statistics*/ *
from dgomez.employee e
where e.country='Guatemala' and e.age=21;

AGE        NAME        COUNTRY
---------- ----------- --------------------
21         Deiby       Guatemala

 

You can see that the Estimated Rows (E-Rows) is the same as the value of Actual Rows (A-Rows):

select * from table(dbms_xplan.display_cursor(format=>'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID bmx5dfgyzm2ag, child number 0
-------------------------------------
select /*+gather_plan_statistics*/ * from dgomez.employee e where
e.country='Guatemala' and e.age=21

Plan hash value: 2119105728

-----------------------------------------------------------------------------------------------
| Id | Operation         | Name     | Starts | E-Rows | A-Rows | A-Time      | Buffers | Reads |
-----------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |          |      1 |        |      1 | 00:00:00.01 |       7 |     6 |
|* 1 | TABLE ACCESS FULL | EMPLOYEE |      1 |      1 |      1 | 00:00:00.01 |       7 |     6 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(("E"."COUNTRY"='Guatemala' AND "E"."AGE"=21))

19 rows selected.

Now let’s query the view v$sql; this view has a column called “IS_REOPTIMIZABLE”, which indicates whether the cursor will be re-optimized on its next execution.

We can see that the SQL statement we executed has not been marked as reoptimizable:

select sql_id, child_number, is_reoptimizable, sql_text from v$sql where sql_text like '%dgomez%' and sql_text not like '%insert%';

SQL_ID        CHILD_NUMBER I SQL_TEXT
------------- ------------ - ----------------------------------------
bmx5dfgyzm2ag            0 N select /*+gather_plan_statistics*/ * fro
                             m dgomez.employee e where e.country='Gua
                             temala' and e.age=21

PL/SQL procedure successfully completed.

Now I will insert several other employees in order to create a difference between the estimated rows and the actual rows, and then execute the SQL statement again. I would like to highlight that I had to execute this SQL statement four times to make it reoptimizable; in some other cases I had to execute it more times, and in some others fewer times.
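The insert statements themselves aren’t shown; they would be along these lines (the names are taken from the result set below):

insert into dgomez.employee values (21, 'Jose', 'Guatemala');
insert into dgomez.employee values (21, 'Maria', 'Guatemala');
-- ...one insert per additional employee listed below...
commit;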

select /*+gather_plan_statistics*/ *
from dgomez.employee e
where e.country='Guatemala' and e.age=21;

AGE NAME    COUNTRY
-- -------- --------------------
21 Jose     Guatemala
21 Maria    Guatemala
21 Josh     Guatemala
21 Julio    Guatemala
21 Pedro    Guatemala
21 Marvin   Guatemala
21 Oscar    Guatemala
21 Mauricio Guatemala
21 Gabriel  Guatemala
21 Jonathan Guatemala
21 Lucrecia Guatemala
21 Alex     Guatemala
21 Alvaro   Guatemala
21 Alan     Guatemala
21 Deiby    Guatemala

15 rows selected.

We can see now that there is a difference between the Actual Rows and the Estimated Rows:


select * from table(dbms_xplan.display_cursor(format=>'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------
SQL_ID bmx5dfgyzm2ag, child number 0
-------------------------------------
select /*+gather_plan_statistics*/ * from dgomez.employee e where
e.country='Guatemala' and e.age=21

Plan hash value: 2119105728

----------------------------------------------------------------------------------------
| Id | Operation         | Name     | Starts | E-Rows | A-Rows | A-Time      | Buffers |
----------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |          |      1 |        |     15 | 00:00:00.01 |       8 |
|* 1 | TABLE ACCESS FULL | EMPLOYEE |      1 |      1 |     15 | 00:00:00.01 |       8 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(("E"."COUNTRY"='Guatemala' AND "E"."AGE"=21))

19 rows selected.

The SQL statement was finally marked as reoptimizable. This is because the optimizer saw that there was a difference between the estimated rows and the actual rows.

SQL_ID        CHILD_NUMBER I SQL_TEXT
------------- ------------ - ----------------------------------------
bmx5dfgyzm2ag            0 Y select /*+gather_plan_statistics*/ * fro
                             m dgomez.employee e where e.country='Gua
                             temala' and e.age=21

PL/SQL procedure successfully completed.

Also, the optimizer wrote some “notes” (SQL Plan Directives) to remember in the next execution that there was something wrong with the estimated rows:

select o.directive_id id, owner, o.object_name, o.object_type, d.state, d.reason, d.notes from DBA_SQL_PLAN_DIR_OBJECTS o, DBA_SQL_PLAN_DIRECTIVES d where o.OWNER='DGOMEZ' and o.directive_id=d.directive_id;

ID                   OWNER  OBJECT_NAM OBJECT_TYP STATE  REASON                               NOTES
-------------------- ------ ---------- ---------- ------ ------------------------------------ --------------------------------------------------------
14767378624474121740 DGOMEZ EMPLOYEE   COLUMN     USABLE SINGLE TABLE CARDINALITY MISESTIMATE <spd_note>
                                                                                                <internal_state>NEW</internal_state>
                                                                                                <redundant>NO</redundant>
                                                                                                <spd_text>{EC(DGOMEZ.EMPLOYEE)[AGE, COUNTRY]}</spd_text>
                                                                                              </spd_note>

14767378624474121740 DGOMEZ EMPLOYEE   COLUMN     USABLE SINGLE TABLE CARDINALITY MISESTIMATE <spd_note>
                                                                                                <internal_state>NEW</internal_state>
                                                                                                <redundant>NO</redundant>
                                                                                                <spd_text>{EC(DGOMEZ.EMPLOYEE)[AGE, COUNTRY]}</spd_text>
                                                                                              </spd_note>

14767378624474121740 DGOMEZ EMPLOYEE   TABLE      USABLE SINGLE TABLE CARDINALITY MISESTIMATE <spd_note>
                                                                                                <internal_state>NEW</internal_state>
                                                                                                <redundant>NO</redundant>
                                                                                                <spd_text>{EC(DGOMEZ.EMPLOYEE)[AGE, COUNTRY]}</spd_text>
                                                                                              </spd_note>

 

The TYPE of this directive is DYNAMIC_SAMPLING:

SQL> select o.directive_id id, d.type from DBA_SQL_PLAN_DIR_OBJECTS o, DBA_SQL_PLAN_DIRECTIVES d where o.OWNER='DGOMEZ' and o.directive_id=d.directive_id;

ID                   TYPE
-------------------- -----------------------
14767378624474121740 DYNAMIC_SAMPLING
14767378624474121740 DYNAMIC_SAMPLING
14767378624474121740 DYNAMIC_SAMPLING

You can see several rows returned, but if you look at the “ID” column, you will see that only one SQL Plan Directive was created.

Now let’s execute the SQL statement again and see what happens:

select /*+gather_plan_statistics*/ *
from dgomez.employee e
where e.country='Guatemala' and e.age=21;

AGE NAME     COUNTRY
--- -------- --------------------
21  Jose     Guatemala
21  Maria    Guatemala
21  Josh     Guatemala
21  Julio    Guatemala
21  Pedro    Guatemala
21  Marvin   Guatemala
21  Oscar    Guatemala
21  Mauricio Guatemala
21  Gabriel  Guatemala
21  Jonathan Guatemala
21  Lucrecia Guatemala
21  Alex     Guatemala
21  Alvaro   Guatemala
21  Alan     Guatemala
21  Deiby    Guatemala

15 rows selected.

The SQL Plan Directive was used, as well as Dynamic Sampling Statistics, which made the optimizer fix the difference between estimated rows and actual rows. With the help of SQL Plan Directives, the optimizer was able to adapt itself to the change; in this case, a change in the data (several more rows were inserted). In this example Dynamic Sampling Statistics was used, but SQL Plan Directives can remind the optimizer to take other actions in addition to Dynamic Sampling Statistics. (At least, the feature was designed to have more TYPEs, but at this time DYNAMIC_SAMPLING (and its sub-type DYNAMIC_SAMPLING_RESULT) is the only TYPE that exists, as Mauro Pagano explains in his presentation.)

PLAN_TABLE_OUTPUT

--------------------------------------------------------------------------------------
SQL_ID bmx5dfgyzm2ag, child number 0
-------------------------------------
select /*+gather_plan_statistics*/ * from dgomez.employee e where
e.country='Guatemala' and e.age=21

Plan hash value: 2119105728

----------------------------------------------------------------------------------------
| Id | Operation         | Name     | Starts | E-Rows | A-Rows | A-Time      | Buffers |
----------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |          |      1 |        |     15 | 00:00:00.01 |       8 |
|* 1 | TABLE ACCESS FULL | EMPLOYEE |      1 |     15 |     15 | 00:00:00.01 |       8 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(("E"."COUNTRY"='Guatemala' AND "E"."AGE"=21))

Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement

24 rows selected.

And after reoptimizing the SQL Statement, the query is now marked as not reoptimizable:

SQL_ID        CHILD_NUMBER I SQL_TEXT
------------- ------------ - ----------------------------------------
bmx5dfgyzm2ag            0 N select /*+gather_plan_statistics*/ * fro
                             m dgomez.employee e where e.country='Gua
                             temala' and e.age=21

PL/SQL procedure successfully completed.

New notes were recorded: the internal state of the existing SQL Plan Directive changed, and an additional directive was created.

select o.directive_id id, owner, o.object_name, o.object_type, d.state, d.reason, d.notes from DBA_SQL_PLAN_DIR_OBJECTS o, DBA_SQL_PLAN_DIRECTIVES d where o.OWNER='DGOMEZ' and o.directive_id=d.directive_id;

ID                   OWNER  OBJECT_NAM OBJECT_TYP STATE  REASON                               NOTES
-------------------- ------ ---------- ---------- ------ ------------------------------------ --------------------------------------------------------
14767378624474121740 DGOMEZ EMPLOYEE   COLUMN     USABLE SINGLE TABLE CARDINALITY MISESTIMATE <spd_note>
                                                                                                <internal_state>MISSING_STATS</internal_state>
                                                                                                <redundant>NO</redundant>
                                                                                                <spd_text>{EC(DGOMEZ.EMPLOYEE)[AGE, COUNTRY]}</spd_text>
                                                                                              </spd_note>

14767378624474121740 DGOMEZ EMPLOYEE   COLUMN     USABLE SINGLE TABLE CARDINALITY MISESTIMATE <spd_note>
                                                                                                <internal_state>MISSING_STATS</internal_state>
                                                                                                <redundant>NO</redundant>
                                                                                                <spd_text>{EC(DGOMEZ.EMPLOYEE)[AGE, COUNTRY]}</spd_text>
                                                                                              </spd_note>

14767378624474121740 DGOMEZ EMPLOYEE   TABLE      USABLE SINGLE TABLE CARDINALITY MISESTIMATE <spd_note>
                                                                                                <internal_state>MISSING_STATS</internal_state>
                                                                                                <redundant>NO</redundant>
                                                                                                <spd_text>{EC(DGOMEZ.EMPLOYEE)[AGE, COUNTRY]}</spd_text>
                                                                                              </spd_note>

7617691850148384040  DGOMEZ EMPLOYEE   TABLE      USABLE VERIFY CARDINALITY ESTIMATE          <spd_note>
                                                                                                <internal_state>NEW</internal_state>
                                                                                                <redundant>NO</redundant>
                                                                                                <spd_text>{(DGOMEZ.EMPLOYEE, num_rows=15) - (SQL_ID:2zbnc0ugm2qzy, T.CARD=15[-2 -2])}</spd_text>
                                                                                              </spd_note>

 

This last SQL Plan Directive is of type “DYNAMIC_SAMPLING_RESULT”:

SQL> select to_char(o.directive_id) id, d.type from DBA_SQL_PLAN_DIR_OBJECTS o, DBA_SQL_PLAN_DIRECTIVES d where o.OWNER='DGOMEZ' and o.directive_id=d.directive_id;

ID                                     TYPE
-------------------------------------- -----------------------
14767378624474121740                   DYNAMIC_SAMPLING
14767378624474121740                   DYNAMIC_SAMPLING
14767378624474121740                   DYNAMIC_SAMPLING
7617691850148384040                    DYNAMIC_SAMPLING_RESULT

 

How to disable SQL Plan Directives: If you want to disable only SQL Plan Directives you can set the following parameters to ‘0’:

This will stop creation of new SQL Plan Directives:

SQL> alter system set "_sql_plan_directive_mgmt_control"=0;

System altered.

This will stop using existing SQL Plan Directives:

SQL> alter system set "_optimizer_dsdir_usage_control"=0;

System altered.

You can also disable SQL Plan Directives indirectly by setting the following parameters (a sketch follows the list):

  • optimizer_adaptive_reporting_only = “TRUE”.
  • optimizer_features_enable < 12.1.0.1
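A minimal sketch of the indirect approach (the optimizer_features_enable value is just an example; any value below 12.1.0.1 has the same effect):

SQL> alter system set optimizer_adaptive_reporting_only = TRUE;

SQL> alter system set optimizer_features_enable = '11.2.0.4';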

How to disable all the features in “Adaptive Statistics”:

SQL> alter system set optimizer_adaptive_statistics=false;

System altered.

How to disable all the features in “Adaptive Plans”:

SQL> alter system set optimizer_adaptive_plans=false;

System altered.

How to enable SQL Plan Directives:

  • The parameter _sql_plan_directive_mgmt_control must not be set to 0.
  • The parameter _optimizer_dsdir_usage_control must not be set to 0.
  • The parameter optimizer_adaptive_statistics must be set to “TRUE”.
  • The parameter optimizer_adaptive_reporting_only must be set to “FALSE”.
  • The parameter optimizer_features_enable must be set to a value >= 12.1.0.1.

 

Conclusion:

Oracle Database has been improving its features with every version, and in 12c several adaptive features were introduced. SQL Plan Directives are notes that help the optimizer remember things on the next execution; this allows the optimizer to adapt to changes. We saw a step-by-step example in this article, and we explained how to enable and disable SQL Plan Directives and the other database parameters related to adaptive features.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in Sao Paulo, Brazil. Twitter | LinkedIn

Oracle Database Resource Manager for pluggable databases


By Deiby Gómez

Introduction:

Oracle Database Resource Manager enables you to limit database resources, and it has been enhanced in this regard with every new version; for example, in 12.1.0.1, Database Resource Manager added support for the multi-tenant architecture. One of the most common performance issues that DBAs face when consolidating several databases into a single container database by using pluggable databases is that all the PDBs try to use as much of the resources as they can; the PDBs compete for resources. Even more critical, sessions inside low-importance PDBs can make a critical PDB slow. This can be a serious performance problem that impacts multiple users and applications.

Oracle Database Resource Manager lets you specify how much of the resources you want to assign for every pluggable database; you can assign more resources to those PDBs that are more important and minimal resources for those with low importance. This prevents the low-importance PDBs from impacting the high-importance PDBs.

Within a PDB, or within a non-CDB as well, all the sessions compete for the resources assigned to that database. Sessions of low importance can end up using most of the resources, impacting the more important sessions. When Database Resource Manager is not used, a single session inside a PDB might use 95% of the resources that the whole CDB has assigned, making all the rest of the PDBs severely slow. Inside a PDB or inside a non-CDB, Database Resource Manager enables us to assign resources to Consumer Groups, which are groups of users. You can group the high-importance users into one Consumer Group and the low-importance users into another, and assign resources to them accordingly.

 

New Features in Oracle Database 12.2.0.1:

  • SESSION_PGA_LIMIT: The maximum amount of PGA in MB that sessions in a Consumer Group can allocate before being terminated (a sketch of setting it follows this list).
  • Oracle Enterprise Manager Database Express (EM Express) supports Database Resource Manager.
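For example, the new SESSION_PGA_LIMIT is set through a plan directive. A minimal sketch, assuming the plan and Consumer Group created later in this article already exist and that the 12.2 session_pga_limit parameter of DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE is used (the 100 MB value is arbitrary):

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan              => 'NUVOLA_PDB1_PLAN',   -- plan created later in this article
    group_or_subplan  => 'NUVOLA_CG1',         -- Consumer Group created later in this article
    comment           => 'Limit PGA per session',
    session_pga_limit => 100);                 -- MB
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/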

 

Example:

In this article we will see how to set up a Database Resource Manager configuration for a Container Database with two pluggable databases. The version that we will use for this example is Oracle Database 12.2.0.1 Enterprise Edition.

  • CDB Name: CDB1
  • PDB Names: NPDB1 and NPDB2

We will assign the following resources:

 

Assigning resources at Container Database Level:

Go to the “Home Page” of the Container Database (CDB); in this case the CDB’s name is “CDB1”. In “Administration”-->“Resource Manager” you will see the following page.

 

Click on the option “CDB Resource Plans” and you will see the following page:

 

Click on the button “Create”. We will create a new CDB Resource Plan for our two pluggable databases, NPDB1 and NPDB2.

 

In this page, you have to specify the name of the CDB Resource Plan, a description, and a set of resources. The resources that you can assign are Shares, Utilization Limit %, and Parallel Server Limit %. However, there are more resources that you can assign. This is a downside of Oracle Cloud Control, because it is not synchronized with the resource options provided by the Oracle Database version in use (in this case 12.2.0.1). For instance, if we were using Oracle Database 11.2.0.4, then Oracle Cloud Control should show us the resource options that 11.2.0.4 offers, and if we were using 12.1.0.2, then it should show us the resource options that that version offers, and so on.

Unfortunately, Oracle Cloud Control 13.2 (in this example) and in previous versions offers us only the basic resource options.  Of course, you can always change the SQL Statement, but that is another story.

Click on the button “Add/Remove” to add the pluggable databases NPDB1 and NPDB2. You will see the following page, where you have to transfer from the left side to the right side those pluggable databases you want to assign resources to. In this example, both pluggable databases were selected. Click on the button “Assign”.

 

Then you will see the two PDBs listed; then we can assign them resources. In this example we have assigned some percentages and numbers to both PDBs and we have selected the option “Activate this Plan”.

 

 

If you click on the button “Show SQL” you can see the SQL Statement that will be used to create the Resource Plan and to assign the resources. Click on the button “Return”.

 

 

Click on the button “OK”. This will create the Resource Plan.
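The SQL that Cloud Control generates behind the scenes is roughly equivalent to the following sketch, built from the values used in this example (it is not the exact statement shown by the “Show SQL” button, and it must be run while connected to CDB$ROOT):

SQL> exec DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
SQL> BEGIN
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
         plan    => 'NUVOLACDBPLAN',
         comment => 'Resource Plan for Nuvola CDB');
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
         plan                  => 'NUVOLACDBPLAN',
         pluggable_database    => 'NPDB1',
         shares                => 3,
         utilization_limit     => 30,
         parallel_server_limit => 30);
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
         plan                  => 'NUVOLACDBPLAN',
         pluggable_database    => 'NPDB2',
         shares                => 7,
         utilization_limit     => 70,
         parallel_server_limit => 70);
       DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
       DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
     END;
     /
SQL> alter system set resource_manager_plan = 'NUVOLACDBPLAN';  -- corresponds to “Activate this Plan”

We can confirm the result with the queries below.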

 

SQL> select plan_id, plan, comments from dba_cdb_rsrc_plans where plan='NUVOLACDBPLAN';

 

   PLAN_ID PLAN            COMMENTS
---------- --------------- ------------------------------
     73553 NUVOLACDBPLAN   Resource Plan for Nuvola CDB

    

SQL> show parameters resource

 

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------------
resource_limit                       boolean     TRUE
resource_manage_goldengate           boolean     FALSE
resource_manager_cpu_allocation      integer     1
resource_manager_plan                string      NUVOLACDBPLAN

    

SQL> select plan, pluggable_database, shares, utilization_limit, parallel_server_limit, memory_min, memory_limit from DBA_CDB_RSRC_PLAN_DIRECTIVES where plan='NUVOLACDBPLAN';

 

PLAN            PLUGGABLE_DATABASE            SHARES UTILIZATION_LIMIT PARALLEL_SERVER_LIMIT MEMORY_MIN MEMORY_LIMIT
--------------- ----------------------------- ------ ----------------- --------------------- ---------- ------------
NUVOLACDBPLAN   ORA$DEFAULT_PDB_DIRECTIVE          1               100                   100
NUVOLACDBPLAN   ORA$AUTOTASK                                        90                   100
NUVOLACDBPLAN   NPDB1                              3                30                    30
NUVOLACDBPLAN   NPDB2                              7                70                    70

 

As I said before, in Cloud Control 13.2 you cannot specify all the options available. There are two options that are not shown for Oracle Database 12.2.0.1; they are the following:

  • memory_limit: This parameter is applicable only to Oracle Exadata storage for configuring the Database Smart Flash Cache.
  • memory_min: This parameter is applicable only to Oracle Exadata storage for configuring the Database Smart Flash Cache.

When I was reading the documentation I thought that these parameters were to limit the usage of the SGA for the PDBs, but it seems that they work only for Exadata. Perhaps in upcoming versions? I hope!

If you want to add a value for those resources you have to modify the CDB Plan Directive manually, as I show you below:

 

SQL> exec DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

PL/SQL procedure successfully completed.

 

 

SQL> BEGIN
       DBMS_RESOURCE_MANAGER.UPDATE_CDB_PLAN_DIRECTIVE(
         plan                      => 'NUVOLACDBPLAN',
         pluggable_database        => 'NPDB1',
         new_shares                => 3,
         new_utilization_limit     => 30,
         new_parallel_server_limit => 30,
         new_memory_limit          => 30,
         new_memory_min            => 30);
     END;
     /

PL/SQL procedure successfully completed.

 

SQL> BEGIN
       DBMS_RESOURCE_MANAGER.UPDATE_CDB_PLAN_DIRECTIVE(
         plan                      => 'NUVOLACDBPLAN',
         pluggable_database        => 'NPDB2',
         new_shares                => 7,
         new_utilization_limit     => 70,
         new_parallel_server_limit => 70,
         new_memory_limit          => 70,
         new_memory_min            => 70);
     END;
     /

PL/SQL procedure successfully completed.

 

SQL> exec DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();

PL/SQL procedure successfully completed.

SQL> exec DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();

PL/SQL procedure successfully completed.

 

Then you can query if the values were set correctly:

 

SQL> select plan, pluggable_database, shares, utilization_limit, parallel_server_limit, memory_min, memory_limit from DBA_CDB_RSRC_PLAN_DIRECTIVES where plan='NUVOLACDBPLAN';

 

PLAN            PLUGGABLE_DATABASE            SHARES UTILIZATION_LIMIT PARALLEL_SERVER_LIMIT MEMORY_MIN MEMORY_LIMIT
--------------- ----------------------------- ------ ----------------- --------------------- ---------- ------------
NUVOLACDBPLAN   ORA$DEFAULT_PDB_DIRECTIVE          1               100                   100
NUVOLACDBPLAN   ORA$AUTOTASK                                        90                   100
NUVOLACDBPLAN   NPDB1                              3                30                    30         30           30
NUVOLACDBPLAN   NPDB2                              7                70                    70         70           70

 

Assigning resources at pluggable database level:

Now go back to the “Home page” of the Container Database “CDB1”: “Administration” --> “Resource Manager”. This time we will select the option “Consumer Groups”. Consumer Groups receive the resources assigned by Resource Plan Directives.

 

 

 

We will see the following page where we have to select in which pluggable database we want to create the Consumer Groups. In this example, we will select the PDB called “NPDB1”. Click on the button “Continue”.

 

 

In the following page, all the Consumer Groups will be listed. There are several already created by default. We will click on the button “Create”.

 

 

In the following page we have to specify the name of the Consumer Group and a description. The Consumer Group will be called “nuvola_cg1”. Click on the button “Add” so that we can add the users that will be in this Consumer Group.

 

The users that exist in the PDB will be listed. In this example, I have filtered the users by the string “Nuvola”. The users “NUVOLA_USER_1” and “NUVOLA_USER_2” will be added to this Consumer Group. Click on the button “Select”.

 

 

We will repeat the last two steps to create another Consumer Group. This time it will be called “nuvola_cg2”. For this Consumer Group, the users “NUVOLA_USER_3” and “NUVOLA_USER_4” will be added.

 

 

You will see the two Consumer Groups created in the following page:
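If you prefer not to use Cloud Control, something equivalent can be done manually with DBMS_RESOURCE_MANAGER. The following is only a sketch (it reuses the names of this example and assumes you are connected to the PDB “NPDB1”); it creates one Consumer Group and maps two users to it:

SQL> BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  -- Create the consumer group
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'NUVOLA_CG1',
    comment        => 'Consumer group for NUVOLA_USER_1 and NUVOLA_USER_2');
  -- Map the users to the consumer group by Oracle user name
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.ORACLE_USER,
    value          => 'NUVOLA_USER_1',
    consumer_group => 'NUVOLA_CG1');
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.ORACLE_USER,
    value          => 'NUVOLA_USER_2',
    consumer_group => 'NUVOLA_CG1');
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

In practice you would also grant the users the privilege to switch to the group with DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP; Cloud Control takes care of that detail for you.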

Now there is only one step pending: to create the Resource Plan Directive. A Resource Plan Directive specifies how the resources will be assigned to Consumer Groups. For this, go back to the “Home page” of the pluggable database “NPDB1” and navigate to “Administration” --> “Resource Manager”. This time we will select the option “Plans”.

 

 

There are some Resource Plans already created by default. We will click on the button “Create”.

 

 

On this page we have to specify the name of the PDB Resource Plan and a description; in this case the plan will be called “NUVOLA_PDB1_PLAN”. Also, in the second part of the page, “Resource Allocations”, we will define the “Directive”, which indicates how much of each resource will be assigned to each Consumer Group. Click on the button “Add/Remove” to add the Consumer Groups that we created previously.

 

 

Select the two Consumer Groups that we created; in this example they are “NUVOLA_CG1” and “NUVOLA_CG2”. Click on the button “OK”.

 

 

The two Consumer Groups will be added and then we can assign resources to them. In this section, it is important to note that only the following resources can be added:

 

General Tab:

  • Shares
  • Utilization Limit %

 

Parallelism Tab:

  • Bypass Queue
  • Max Degree of Parallelism
  • Parallel Server Limit
  • Parallel Statement Queue Timeout

 

Runaway Query Tab:

  • Elapsed time Limit (Secs)
  • CPU Time limit (Secs)
  • IO Limit (MBs)
  • IO Request Limit (Requests)
  • Action

 

Idle Time Tab:

  • Max idle time
  • Max idle time if blocking another session

 

However, as with the CDB Resource Plan, there are other resources that can be specified manually but that are not present in Cloud Control; for instance, the UNDO limit. Again, this is a downside of Oracle Cloud Control. If you want to see all the resources that you can specify, review the documentation: http://docs.oracle.com/database/122/ARPLS/DBMS_RESOURCE_MANAGER.htm#ARPLS73823

 

Click on tab “General”. Specify the resources that you want and click on the button “OK”.

 

 

 

Click on tab “Parallelism”. Specify the resources that you want and click on the button “OK”.

 

 

 

Click on tab “Runaway Query”. Specify the resources that you want and click on the button “OK”.

 

 

Click on tab “Idle Time”. Specify the resources that you want and click on the button “OK”.

 

 

We can review with SQL statements whether the PDB Resource Plan, its Directive, and the Consumer Groups were created successfully. We will log in to the PDB called “NPDB1”:

 

SQL> alter session set container=npdb1;

Session altered.

 

We can review a couple of Resources using the view DBA_RSRC_PLAN_DIRECTIVES:

 

SQL> select plan, group_or_subplan, max_idle_time, max_utilization_limit, parallel_queue_timeout, utilization_limit from dba_rsrc_plan_directives where plan like '%NUVOLA%';

PLAN             GROUP_OR_SUBPLA MAX_IDLE_TIME MAX_UTILIZATION_LIMIT
---------------- --------------- ------------- ---------------------
NUVOLA_PDB1_PLAN NUVOLA_CG1                500                    30
NUVOLA_PDB1_PLAN NUVOLA_CG2                250                    70
NUVOLA_PDB1_PLAN OTHER_GROUPS                                      0
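If you also want to double-check the Consumer Groups themselves and the users mapped to them, a quick sketch (it simply queries the data dictionary views and assumes the names used in this example) would be:

SQL> select consumer_group, comments from dba_rsrc_consumer_groups where consumer_group like 'NUVOLA%';

SQL> select attribute, value, consumer_group from dba_rsrc_group_mappings where consumer_group like 'NUVOLA%';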

As I said before, Oracle Cloud Control doesn’t show all the options for resources. For instance, if you want to assign an Undo limit or a PGA limit to a Consumer Group, then you have to modify the Plan Directive manually, as I show below:

 

 

SQL> exec DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

PL/SQL procedure successfully completed.

 

SQL> BEGIN

DBMS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(

plan=>'NUVOLA_PDB1_PLAN',

group_or_subplan=>'NUVOLA_CG1',

new_undo_pool=>100,

new_session_pga_limit=>120);

END;

/

 

PL/SQL procedure successfully completed.

 

SQL> exec DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();

PL/SQL procedure successfully completed.

 

SQL> exec DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();

PL/SQL procedure successfully completed.

 

 

SQL> select plan, group_or_subplan, undo_pool, max_idle_time, max_utilization_limit from dba_rsrc_plan_directives where plan like '%NUVOLA%';

PLAN             GROUP_OR_SUBPLA  UNDO_POOL MAX_IDLE_TIME MAX_UTILIZATION_LIMIT
---------------- --------------- ---------- ------------- ---------------------
NUVOLA_PDB1_PLAN NUVOLA_CG1             100           500                    30
NUVOLA_PDB1_PLAN NUVOLA_CG2                            250                    70
NUVOLA_PDB1_PLAN OTHER_GROUPS                                                  0

 

Conclusion:

If you are consolidating non-CDB databases into a CDB with several PDBs, it is highly recommended that you implement Database Resource Manager. In this article I presented a step-by-step example that you can use as a recipe to implement a Database Resource Manager configuration and assign resources properly across PDBs and across Consumer Groups.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle Open World in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

Oracle Database 12cR2 new feature: Application Root Replica


By Deiby Gómez

 

Introduction:

In my previous articles we have seen concepts like “Application Containers” and “Proxy PDB”, which are new in Oracle Database 12cR2. With Application Containers, you can install applications in an Application Root and synchronize the application (metadata without data) to Application PDBs. On the other hand, a Proxy PDB provides location transparency; this is useful when we want to access data or objects remotely from another Container Database (CDB). An advantage of a Proxy PDB is that we don’t have to copy all the data to the remote CDB in order to access the objects and their data; however, this is also a disadvantage. If something goes wrong with the Application Root in the Master Application Container, all the remote Proxy PDBs in other CDBs will be broken. To avoid this, we would probably want to have a physical replica of all the objects and data in another remote Container Database. Here is where a new feature called “Application Root Replica”, also introduced in 12.2.0.1.0, is helpful.

Application Root Replica is a physical replica of a master Application Root but in another remote Container Database. This lets us synchronize applications in an Application Container across different and remote Container Databases without using solutions like RMAN, Data Pump, or remote cloning. 

There are two methods to create an Application Root Replica:

  1. Create an empty application container and then synchronize the application.
  2. Clone the master application root.

In this article, I will show you a use-case example.

 

Preparation of the Environment:

With these steps I will create the environment described in the following image. I already have the two Container Databases, CDB1 and CDB2. So I will start by creating the Application Root “AppRoot” and the Application PDB “AppPDB1” in CDB1. I will create an application in “AppRoot” and I will sync that application to “AppPDB1”.  Then I will create the Application Root “AppRoot2” and the Application PDB “AppPDB2” in CDB2.

 

Creating an Application Root named “AppRoot”:

SQL> create pluggable database AppRoot as application container admin user pdbadmin identified by nuvola;

 

Pluggable database created.

 

SQL> alter pluggable database AppRoot open;

 

Pluggable database altered.

 

Creating the Application PDB named “AppPDB1”:

SQL> alter session set container=AppRoot;

 

Session altered.

 

SQL> show con_name

 

CON_NAME

------------------------------

APPROOT

 

SQL> create pluggable database AppPDB1 admin user pdbadmin identified by nuvola;

 

Pluggable database created.

 

SQL>  alter pluggable database AppPDB1 open;

 

Pluggable database altered.

 

Installing the application named “MyApp” in the Application Root “AppRoot” in CDB1:

 

SQL> alter pluggable database application MyApp begin install '1.0';

 

Pluggable database altered.

 

SQL> create table c##dgomez.dataLinkedTable SHARING=DATA   (name varchar2(20));

 

Table created.

 

SQL> insert into c##dgomez.dataLinkedTable values ('Guatemala');

 

1 row created.

 

SQL> commit;

 

Commit complete.

 

SQL> alter pluggable database application MyApp end install '1.0';

 

Pluggable database altered.

 

Synchronizing the Application PDB “AppPDB1”:

SQL> alter session set container=AppPDB1;

 

Session altered.

 

SQL> alter pluggable database application MyApp sync;

 

Pluggable database altered.

 

Confirming that the table and data were synchronized:

 SQL>  select * from c##dgomez.dataLinkedTable;

 

NAME

--------------------

Guatemala

 

In the Container Database “CDB2” I will create the Application Root named “AppRoot2”

SQL> create pluggable database AppRoot2 as application container admin user pdbadmin identified by nuvola;

 

Pluggable database created.

 

SQL> alter pluggable database AppRoot2 open;

 

Pluggable database altered.

 

Creating the Application PDB “AppPDB2” in CDB2:

SQL> alter session set container=AppRoot2;

 

Session altered.

 

SQL> show con_name

 

CON_NAME

------------------------------

APPROOT2

 

SQL> create pluggable database AppPDB2 admin user pdbadmin identified by nuvola;

 

Pluggable database created.

 

SQL>  alter pluggable database AppPDB2 open;

 

Pluggable database altered.

 

Confirming that the table c##dgomez.dataLinkedTable doesn’t exist in “AppPDB2”. This is just to confirm that the environment we have created matches the previous image.

 

SQL> alter session set container=AppPDB2;

 

Session altered.

 

SQL> select * from c##dgomez.dataLinkedTable;

select * from c##dgomez.dataLinkedTable

                        *

ERROR at line 1:

ORA-00942: table or view does not exist

 

The problem:

At this time we have two CDBs: CDB1, which has an Application Container with one application installed, and CDB2. I also want to have that application in the Application Container that has already been created in CDB2, and I would like to be able to synchronize all the data whenever the “master” application receives any change. In the past, we would have used a full backup and restore with RMAN, or perhaps an export and import with Data Pump, or even a materialized view. In 12.1.0.2.0 we would use “Remote PDB Cloning”. However, none of these solutions is ideal!

The solution:

The best solution to this problem is “Application Root Replica”. An Application Root Replica is a physical replica of one Application Root in another CDB. In this case our Master Application Root is “AppRoot” in CDB1, and the Application Root Replica is “AppRoot2” in CDB2. The Application Root Replica uses a Proxy PDB to synchronize the data with the Master Application Root. In the following image you can see that the Proxy PDB is created in CDB1; this is because the Proxy PDB will be seen as a normal PDB in the Application Container in CDB1, which means that the Proxy PDB will get the data (via synchronization) from the Master Application Root. Since the “Referenced PDB” of that Proxy PDB is “AppRoot2”, it is as if “AppRoot2” were physically located in CDB1. This is the concept of a Proxy PDB, and this is how “AppRoot2” can get all the data from “AppRoot”. Once the Application Root “AppRoot2” gets synchronized with the Application Root “AppRoot” through the Proxy PDB, we will have to synchronize the Application PDB “AppPDB2” in CDB2.

 

 

In the Application Root “AppRoot” in CDB1:

SQL> alter session set container=AppRoot;

 

Session altered.

 

SQL> show con_name

 

CON_NAME

------------------------------

APPROOT

 

Since the Proxy PDB needs a database link, I will create one first:

SQL>  CREATE DATABASE LINK link_to_AppRoot CONNECT TO c##dgomez IDENTIFIED BY nuvola USING '192.168.1.22:1521/approot2';

 

Database link created.

 

Note that the database link connects to the Application Root “AppRoot2” in CDB2.

 

Creating the Proxy PDB in CDB1:

 

SQL> create pluggable database ProxyPDB AS PROXY FROM approot2@link_to_AppRoot;

 

Pluggable database created.

 

SQL> alter pluggable database ProxyPDB open;

 

Pluggable database altered.

 

Unfortunately Proxy PDB doesn’t support OS Authentication, so I have to open a session to “ProxyPDB” in CDB1 using password authentication:

[oracle@nuvola2 apex]$ sqlplus sys/manager1@'192.168.1.22:1521/ProxyPDB' as sysdba

 

The following step will synchronize the “Proxy PDB”, which automatically will fill up the “Application Root Replica” called “AppRoot2” in CDB2:

SQL> alter pluggable database application MyApp sync;

 

Pluggable database altered.

 

If we connect to the application root replica “AppRoot2” in CDB2 we will see that the application is there as well as its data, physically.

SQL> show con_name

 

CON_NAME

------------------------------

APPROOT2

 

SQL> select app_name, app_version from dba_app_versions where app_name='MYAPP';

 

APP_NAME          APP_VERSION

-------------------- ------------------------------

MYAPP             1.0

 

So the application “MyApp” has been synchronized to the Application Root [Replica] “AppRoot2”. It’s time to synchronize all the Application PDBs in the Application Container in CDB2: 

SQL> alter session set container=AppPDB2;

 

Session altered.

 

SQL> show con_name

 

CON_NAME

------------------------------

APPPDB2

 

SQL> alter pluggable database application MyApp sync;

 

Pluggable database altered.

 

We can confirm that the application “MyApp” was successfully replicated from AppRoot to ProxyPDB in CDB1, from ProxyPDB in CDB1 to AppRoot2 in CDB2, and from AppRoot2 to AppPDB2 in CDB2:

SQL> select * from c##dgomez.dataLinkedTable;

 

NAME

--------------------

Guatemala

 

Starting now, we only have to keep performing “SYNC” operations to replicate the data through all of the configuration that involves both Container Databases.
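For example, a future change would follow exactly the same pattern. The following is only a sketch reusing this article's names (the version '1.1' and the inserted value are hypothetical): the change is made once in the master Application Root and then propagated with SYNC.

In the master Application Root “AppRoot” (CDB1):

SQL> alter pluggable database application MyApp begin upgrade '1.0' to '1.1';
SQL> insert into c##dgomez.dataLinkedTable values ('Mexico');
SQL> commit;
SQL> alter pluggable database application MyApp end upgrade to '1.1';

Then in “AppPDB1” (CDB1), in “ProxyPDB” (CDB1) and in “AppPDB2” (CDB2):

SQL> alter pluggable database application MyApp sync;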

 

Conclusion:

We have seen through this article how to synchronize application data in an Application Container across Container Databases without using backup and recovery operations with RMAN, export and import with Data Pump, or Remote PDB Cloning. When we are working with Application Containers, both the Proxy PDB and the Application Root Replica are useful for replicating our installed applications to other Container Databases.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle Open World in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn


How to run SQL Statements across Pluggable Databases with catcon.pl


By Deiby Gómez

 

Introduction:

Beginning with Oracle Database 12.1.0.1.0, DBAs started to work with Pluggable Databases. There were some large migrations of several databases from 10g/11g to 12c where they were consolidated into a new Oracle Container Database with several Pluggable Databases. However, running operations in several Pluggable Databases became a problem, since people had to log in to every Pluggable Database and run the required script or SQL statement there. To save people from spending too much time on this kind of work, Oracle introduced the Perl script “catcon.pl”. Basically, catcon.pl receives either a script or the text of a SQL statement and executes it in the Pluggable Databases that we specify, even in PDB$SEED and CDB$ROOT, depending on which flags of catcon.pl are used. In the following image we see a script received by catcon.pl; catcon.pl executes the script in all the Pluggable Databases, and also in CDB$ROOT and PDB$SEED when the flag “-S” is not used.

 

Using catcon.pl considerably reduces the time spent on running scripts across several databases. One of its advantages is that you can filter the pluggable databases where you want to execute the script or SQL Statement by using “-C” for exclusion of pluggable databases and “-c” for inclusion of pluggable databases. You can also specify the order of the pluggable databases where the script or SQL statement has to be executed.

In this article we will use the environment described in the previous image. I will start creating the three pluggable databases and the scripts that will be executed across the PDBs:

SQL> create pluggable database PDB1 admin user pdbadmin identified by nuvola;

 

Pluggable database created.

 

SQL> create pluggable database PDB2 admin user pdbadmin identified by nuvola;

 

Pluggable database created.

 

SQL> create pluggable database PDB3 admin user pdbadmin identified by nuvola;

 

Pluggable database created.

 

SQL> alter pluggable database all open;

 

Pluggable database altered.

 

SQL> show pdbs;

 

    CON_ID CON_NAME                OPEN MODE  RESTRICTED

---------- ------------------------------ ---------- ----------

        2 PDB$SEED                 READ ONLY  NO

        3 PDB1                     READ WRITE NO

        4 PDB2                     READ WRITE NO

        5 PDB3                     READ WRITE NO

 

Creating the Script #1:

The following script contains a CREATE TABLE statement, an INSERT statement, a commit and a SELECT statement. All these operations use the same table, C##DGOMEZ.COUNTRY.

[oracle@nuvola2 ~]$ pwd

/home/oracle

 

[oracle@nuvola2 ~]$ vi script.sql

[oracle@nuvola2 ~]$ cat script.sql

show con_name;

create table c##dgomez.country (name varchar2(20));

insert into c##dgomez.country values ('Guatemala');

commit;

select * from c##dgomez.country ;

[oracle@nuvola2 admin]$

 

Creating the Script #2:

This script doesn’t create any table; instead, it only inserts rows in the table C##DGOMEZ.COUNTRY

[oracle@nuvola2 admin]$ cat /home/oracle/script2.sql

insert into c##dgomez.country values ('Canada');

commit;

[oracle@nuvola2 admin]$

 

Running catcon.pl without “-S” flag:

When the flag “-S” is not used, catcon.pl executes the script or the SQL statement in all the containers, including CDB$ROOT and PDB$SEED. Also, all the objects created by catcon.pl are created as “ORACLE_MAINTAINED”, which means that they are treated as objects owned by Oracle and cannot be modified by database users. I don’t recommend using this method to create objects for the business or for our application schema; this method is intended for scripts for patching, migrations, or any other task that touches the data dictionary or other aspects owned by Oracle.

Moving to the directory where catcon.pl is located:

[oracle@nuvola2 ~]$ cd $ORACLE_HOME/rdbms/admin

 

Executing catcon.pl. The flag “-d” specifies where the script is located. The flag “-l” specifies the directory where all the logs will be created. The flag “-b” specifies the prefix name of the log files that will be generated, and finally we pass the name of the script that will be executed by catcon.pl.

[oracle@nuvola2 admin]$  $ORACLE_HOME/perl/bin/perl catcon.pl -d /home/oracle -l /home/oracle/catcon_logs -b catcon-example script.sql

 

As you can see, the script was executed and it created the objects as “ORACLE_MAINTAINED”. The script was executed in CDB$ROOT and also in PDB$SEED. In this example, the script failed in PDB$SEED because the schema c##dgomez didn’t exist within the PDB, and catcon.pl couldn’t create the table.

SQL> select con_id, owner, object_name, object_type, ORACLE_MAINTAINED from cdb_objects where owner='C##DGOMEZ';

 

    CON_ID OWNER      OBJECT_NAM OBJECT_TYP ORACLE_MAINTAIN

---------- ---------- ---------- ---------- ---------------

        1 C##DGOMEZ  COUNTRY     TABLE     Y

        3 C##DGOMEZ  COUNTRY     TABLE     Y

        4 C##DGOMEZ  COUNTRY     TABLE     Y

        5 C##DGOMEZ  COUNTRY     TABLE     Y

 

Running catcon.pl with “-S” flag

I recommend using this flag when you are running either a script or SQL Statement that create objects for your business application schema like the Script #1 or the Script #2 that I created in this article. In other words, when you are running operations not related to patching, upgrades, or to the data dictionary. When the flag “-S” is used, catcon.pl doesn’t execute the script in CDB$ROOT or in PDB$SEED.

[oracle@nuvola2 ~]$ cd $ORACLE_HOME/rdbms/admin

 

[oracle@nuvola2 admin]$  $ORACLE_HOME/perl/bin/perl catcon.pl -d /home/oracle -l /home/oracle/catcon_logs -S  -b catcon-example script.sql

 

catcon: ALL catcon-related output will be written to [/home/oracle/catcon_logs/catcon-example_catcon_26297.lst]

catcon: See [/home/oracle/catcon_logs/catcon-example*.log] files for output generated by scripts

catcon: See [/home/oracle/catcon_logs/catcon-example_*.lst] files for spool files, if any

catcon.pl: completed successfully

[oracle@nuvola2 admin]$

 

The logs will be generated in the directory “/home/oracle/catcon_logs” with the prefix “catcon-example” as it was specified:

[oracle@nuvola2 admin]$ ls -ltr /home/oracle/catcon_logs/

total 12

-rw-r--r-- 1 oracle oinstall  419 May  7 05:57 catcon-example_catcon_26297.lst

-rw-r--r-- 1 oracle oinstall 3371 May  7 05:58 catcon-example0.log

-rw-r--r-- 1 oracle oinstall 1922 May  7 05:58 catcon-example1.log

[oracle@nuvola2 admin]$

 

The script was executed only in the pluggable databases. It was not executed in CDB$ROOT nor PDB$SEED and the table was created as non-Oracle maintained:

SQL> select con_id, owner, object_name, object_type, ORACLE_MAINTAINED from cdb_objects where owner='C##DGOMEZ';

 

    CON_ID OWNER      OBJECT_NAM OBJECT_TYP ORACLE_MAINTAINED

---------- ---------- ---------- ---------- -----------------

        3 C##DGOMEZ  COUNTRY     TABLE     N

        4 C##DGOMEZ  COUNTRY     TABLE     N

        5 C##DGOMEZ  COUNTRY     TABLE     N

 

We can verify that the table was created and the rows inserted in every PDB:

SQL> select con_id, name from containers(C##DGOMEZ.COUNTRY) ;

 

    CON_ID NAME

---------- --------------------

        1 Guatemala

        3 Guatemala

        4 Guatemala

        5 Guatemala

 

NOTE: I manually created the table in CDB$ROOT, just to make the CONTAINERS clause work.

In the following example I am using the flag “-c”, which is useful when we want to use “inclusion”. We have to provide the list of the PDBs where the script will be executed; in this example, the script will be executed only in PDB1 and PDB3. I will use Script #2, which performs only an INSERT operation.

[oracle@nuvola2 admin]$ $ORACLE_HOME/perl/bin/perl catcon.pl -d /home/oracle -l /home/oracle/catcon_logs -S -c 'PDB1 PDB3' -b catcon-example script2.sql

 

catcon: ALL catcon-related output will be written to [/home/oracle/catcon_logs/catcon-example_catcon_27384.lst]

catcon: See [/home/oracle/catcon_logs/catcon-example*.log] files for output generated by scripts

catcon: See [/home/oracle/catcon_logs/catcon-example_*.lst] files for spool files, if any

catcon.pl: completed successfully

[oracle@nuvola2 admin]$

 

We can verify whether the script was executed in only PDB1 and PDB3 by querying the table c##dgomez.country:

[oracle@nuvola2 admin]$ sqlplus / as sysdba

 

SQL> select con_id, name from containers(C##DGOMEZ.COUNTRY) ;

 

    CON_ID NAME

---------- --------------------

        1 Guatemala

        3 Guatemala

        3 Canada

        4 Guatemala

        5 Guatemala

        5 Canada

 

8 rows selected.
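For completeness, the exclusion flag “-C” works the other way around: it runs the script in every PDB except the ones listed. A sketch of such an invocation (not executed in this article, reusing the same directories and Script #2) would be:

[oracle@nuvola2 admin]$ $ORACLE_HOME/perl/bin/perl catcon.pl -d /home/oracle -l /home/oracle/catcon_logs -S -C 'PDB2' -b catcon-exclusion script2.sql

With this, PDB1 and PDB3 would receive the INSERT while PDB2 would be skipped.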

 

Conclusion:

When the multi-tenant architecture was introduced, the Perl script catcon.pl was also introduced to help run scripts in multiple pluggable databases. In this article we saw some examples where different flags of catcon.pl were used, such as the flags to include or exclude PDBs and the flag that controls whether the objects are created as Oracle-maintained or as part of our own application schema. We also saw that the list of PDBs (and therefore their order) can be provided explicitly. The Perl script catcon.pl is certainly useful to avoid wasting too much time executing the same task in every PDB.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle Open World in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

Oracle Database 12cR2 new feature: Container Maps


By Deiby Gómez

Introduction:

Oracle introduced a new cool concept called “Application Containers” in 12cR2 (12.2.0.1.0). I have already written about this topic in the article “Introduction to Application Containers in Oracle Database 12cR2”, where you can find an introduction to the topic and see a couple of examples. Since version 8.0, Oracle Database has had the partitioning feature, which helps you access data faster, and since then there have been several enhancements: new types of partitions, more objects that support partitioning, etc. In Oracle Database version 12.1.0.2 Oracle introduced the “CONTAINERS” clause, a very useful clause that can be used to execute queries across several Pluggable Databases. You can filter which PDB you want to get the data from by the CON_ID column. You can read more about the CONTAINERS clause in the articles “New CONTAINERS Clause in 12.1.0.2 - Common Perspective” and “New CONTAINERS Clause in 12.1.0.2 - Local Perspective”. The downside of using the “CONTAINERS” clause is that you have to hard-code the value of the CON_ID column. If the CON_ID changes because of a PDB unplug and plug-in, you would be getting data from the wrong PDB; or if you remove the PDB, your queries will simply fail. There should be a way to use the “CONTAINERS” clause without hard-coding the CON_ID, and, even better, why not combine it with partitioning? Basically, this is what Oracle was thinking, and then the following insight occurred:

What if we use Pluggable Databases as partitions?

What if the PDB name is used instead of the CON_ID?

Thanks to this insight, “Container Maps” was introduced in Oracle 12.2.0.1.0. Unfortunately, at present, “Container Maps” are not available to use with normal Pluggable Databases. “Container Maps” can be used only with Application Containers (Application Root + Application PDBs).  

The illustration below shows how “Container Maps” work. In it, you see an end user executing a query and filtering the data by country=’GUATEMALA’. Internally, Oracle uses Application PDBs as partitions, where each Application PDB represents the data of a specific region (North, Central, South). After determining in which “partition” (Application PDB) the rows with country=’GUATEMALA’ are located, Oracle proceeds to query the table stored in that specific Application PDB; in this case, the Application PDB named “CENTRAL”. Of course, the table can also be partitioned as always, using all the enhancements in Oracle partitioning up to version 12.2.0.1.0.

 

In the following example we will explain step-by-step how to use “Container Maps”.

Create an Application Root:

First, I will create an “Application Container”, an Application Root named “Nuvola”, and three “Application PDBs” named “NORTH”, “CENTRAL” and “SOUTH”. If you want to read more about Application Containers you can read my article Introduction to Application Containers in Oracle Database 12cR2.

Creating the Application Root:

SQL> create pluggable database Nuvola as application container admin user pdbadmin identified by Nuvola1; 

Pluggable database created.

SQL> alter pluggable database Nuvola open;

Pluggable database altered.

 

In order to create “Application PDB” you must be connected to the “Application Root”:

SQL> alter session set container=Nuvola;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
NUVOLA

 

Creating the Application PDB named “North”:

SQL> create pluggable database north admin user app1admin identified by Nuvola1;

Pluggable database created.

 

Creating the Application PDB named “Central”:

SQL> create pluggable database central admin user app1admin identified by Nuvola1;

Pluggable database created.

 

Creating the Application PDB named “South”:

SQL> create pluggable database south admin user app1admin identified by Nuvola1;

Pluggable database created.

 

Opening all the Application PDBs:

SQL> alter pluggable database all open;

Pluggable database altered.

 

Creating the container map table:

A container map is a simple table that contains the information about which “partitions” (Application PDBs) are used and which column is used to address the data; in this case, the column “country”. The type of partitioning used here is “BY LIST”. Note that the name of each “partition” matches exactly the name of the corresponding Application PDB.

SQL> CREATE TABLE c##dgomez.containermap (
country VARCHAR2(30) NOT NULL)
PARTITION BY LIST (country) (
PARTITION north VALUES ('CANADA','USA'),
PARTITION central VALUES ('GUATEMALA','NICARAGUA'),
PARTITION south VALUES ('ARGENTINA','BRAZIL'));

Table created.

 

Now we set the “Application Root” to use the “Container Map”:

SQL> ALTER PLUGGABLE DATABASE SET CONTAINER_MAP='C##DGOMEZ.CONTAINERMAP'; 

Pluggable database altered.

  

Create an application with data

Now we will create an application and insert some data. This is just to show a couple of SELECT examples, so that you can see how the data is retrieved transparently from the “partitions” (Application PDBs) based on the column “country”.

Start to install the application:

SQL> ALTER PLUGGABLE DATABASE APPLICATION Application_Nuvola BEGIN INSTALL '1.0'; 

Pluggable database altered.

 

It is not mandatory to use “SHARING=METADATA”. I am using this because all I want to share among the Application PDBs is the metadata (the objects, without data). The data will be physically stored into each Application PDB.

SQL> CREATE TABLE c##dgomez.revenue SHARING=METADATA (
country VARCHAR2(30),
revenue number);

Table created.

 

The following clauses are mandatory in order to use “Container Maps”:

SQL> ALTER TABLE c##dgomez.revenue ENABLE CONTAINER_MAP;

Table altered.

 

SQL>  ALTER TABLE c##dgomez.revenue ENABLE CONTAINERS_DEFAULT;

Table altered.

 

And finally, we will end the application installation:

SQL> ALTER PLUGGABLE DATABASE APPLICATION Application_Nuvola END INSTALL '1.0';

Pluggable database altered.

 

Verifying if the table is enabled to use Container Maps:

 We can double check whether the tables where the data will be stored are enabled to use Container Maps by querying the view DBA_TABLES and its new column “CONTAINER_MAP”:

SQL> select owner, table_name, CONTAINER_MAP from dba_tables where table_name='REVENUE';

OWNER      TABLE_NAME CONTAINER_MAP
---------- ---------- ---------------
C##DGOMEZ  REVENUE    YES

 

Inserting data to query using Container Map:

In order to complete our example, I will insert some data into each Application PDB. This is only to show how Container Maps work. After there is data inserted, I will proceed to perform a couple of SELECT statements that will automatically use the Container Map (in the next section of this article):

SQL> alter session set container=north;

Session altered.

SQL> ALTER PLUGGABLE DATABASE APPLICATION Application_Nuvola  SYNC;

Pluggable database altered.

SQL> insert into c##dgomez.revenue values ('CANADA',1000);

SQL> insert into c##dgomez.revenue values ('USA',2000);

SQL> commit; 

SQL>  alter session set container=central;

Session altered.

SQL> ALTER PLUGGABLE DATABASE APPLICATION Application_Nuvola SYNC;

Pluggable database altered.

SQL> insert into c##dgomez.revenue values ('GUATEMALA',3000);

SQL> insert into c##dgomez.revenue values ('NICARAGUA',4000);

SQL> commit;

SQL> alter session set container=south;

Session altered.

SQL> ALTER PLUGGABLE DATABASE APPLICATION Application_Nuvola SYNC;

Pluggable database altered. 

SQL> insert into c##dgomez.revenue values ('ARGENTINA',5000);

SQL> insert into c##dgomez.revenue values ('BRAZIL',6000);

SQL> commit;

 

Executing queries using PDBs as partitions:

Now, time for the magic. I will connect to the “Application Root” and from it I will execute two queries. You can see that the SELECT statements don’t have any filter on the column CON_ID nor on the Application PDB name. We are just getting data from a simple table (C##DGOMEZ.REVENUE), but because the table has the Container Map enabled, Oracle determines in which “partition” (Application PDB) the value “GUATEMALA” is stored and then queries the table “C##DGOMEZ.REVENUE” in that specific Application PDB.

SQL> alter session set container=nuvola;

Session altered.

SQL> select country, revenue from c##dgomez.revenue where country='GUATEMALA';

COUNTRY        REVENUE
-------------- ----------
GUATEMALA      3000

 

We can also use the country ‘CANADA” and Oracle will perform the same mechanism:

SQL> select country, revenue from c##dgomez.revenue where country='CANADA';

COUNTRY     REVENUE
----------- ----------
CANADA      1000 

 

Conclusion:

We saw in this article a new, cool concept that combines the CONTAINERS clause, partitioning, and Application Containers. DBAs and developers will be able to take advantage of Container Maps, particularly for reports that have to get data across several Application PDBs, without having to rewrite the code and without having to add new clauses to the SELECT statement, taking advantage of Application PDBs as if they were partitions.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle Open World in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

Oracle Database 12cR2 new feature: Lockdown Profiles


By Deiby Gómez

 

Introduction:

In the past, roles, system privileges, and table privileges were used to control the functionalities allowed to database users. However, roles and privileges don’t have enough granularity to effectively restrict what work a user may do.  For example, you can grant the privilege “ALTER SYSTEM” to a user, but with that, you are allowing that user to change any database parameter. “ALTER SYSTEM” is not granular enough to enable the user to change some database parameters but not others. Even worse, there is no way to allow a user to change a specific database parameter with a range or list of values but disable another range or list of values. This functionality has been requested by DBAs for years and finally Oracle has heard us.

Oracle has introduced several new features in its newest version, 12.2.0.1. One of the most important features is “Lockdown Profiles”. Lockdown Profiles provides the granularity we were talking about. With this feature you can enable and disable database functions, features and options. It even lets you specify a range or list of values that may be used.

 

About Lockdown Profiles creation

Lockdown Profiles can be created only in Container Databases, and you must be connected to CDB$ROOT. If you try to create a lockdown profile in a non-container database you will receive the following error:

SQL> CREATE LOCKDOWN PROFILE WANNACRY_PROFILE;
CREATE LOCKDOWN PROFILE WANNACRY_PROFILE
*
ERROR at line 1:
ORA-65090: operation only allowed in a container database

 

If you try to create a lockdown profile while connected to a PDB you will get the following error:

SQL> CREATE LOCKDOWN PROFILE WANNACRY_PROFILE;
CREATE LOCKDOWN PROFILE WANNACRY_PROFILE
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database

 

How to create a Lockdown Profile

Connect to CDB$ROOT:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

 

Execute the CREATE LOCKDOWN PROFILE statement:

SQL> CREATE LOCKDOWN PROFILE WANNACRY_PROFILE;

Lockdown Profile created.

 

Unfortunately, you cannot specify which functionality to enable or disable along with the CREATE LOCKDOWN PROFILE statement. To do this, you have to use the ALTER LOCKDOWN PROFILE statement separately.

 

Enabling or disabling functionalities:

There are three functionalities that you can disable:

  • FEATURE: Allows you to enable or disable database features. See the Oracle Database documentation for the full list of features that you can indicate.
  • OPTION: The two options you can either enable or disable are “DATABASE QUEUING” and “PARTITIONING”.
  • STATEMENT: You can either enable or disable the statements “ALTER DATABASE”, “ALTER PLUGGABLE DATABASE”, “ALTER SESSION”, and “ALTER SYSTEM”. You can specify granular options along with these statements.

In the three functionalities, you can also use clauses like ALL and EXCEPT, which allows you to include or exclude a set of features instead of specifying them one by one.
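For instance, a sketch of the ALL/EXCEPT form (not used in this article's example; double-check the exact syntax and feature names in the SQL Language Reference) could disable every feature except network access in one statement:

SQL> ALTER LOCKDOWN PROFILE WANNACRY_PROFILE DISABLE FEATURE ALL EXCEPT = ('NETWORK_ACCESS');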

In the following example we will disable two features, one option, and one statement.

The first statement we will disable is changing the parameter “nls_date_format” through an ALTER SYSTEM statement:

SQL>  ALTER LOCKDOWN PROFILE WANNACRY_PROFILE DISABLE STATEMENT = ('ALTER SYSTEM') CLAUSE = ('SET')  OPTION= ('nls_date_format');

Lockdown Profile altered.

 

The next example is similar to the previous one, but here we are specifying a minimum value and a maximum value. All the values between are allowed, while all the values outside of this range are disallowed.

SQL> ALTER LOCKDOWN PROFILE WANNACRY_PROFILE DISABLE STATEMENT = ('ALTER SYSTEM') CLAUSE = ('SET') OPTION = ('parallel_max_servers') MINVALUE = '10' MAXVALUE = '39';

Lockdown Profile altered.

 

In the next example I am disabling the feature “COMMON_USER_CONNECT”. This prevents common users from connecting to pluggable databases directly. All common users must first connect to CDB$ROOT and then jump to any Pluggable Database.

SQL> ALTER LOCKDOWN PROFILE WANNACRY_PROFILE DISABLE FEATURE = ('COMMON_USER_CONNECT'); 

Lockdown Profile altered.

 

The last example disables the option “PARTITIONING”, which means I cannot use any operations that rely on partitioning.

SQL> ALTER LOCKDOWN PROFILE WANNACRY_PROFILE DISABLE OPTION = ('PARTITIONING'); 

Lockdown Profile altered.

 

Reviewing Lockdown Profiles information:

Once the lockdown profile has been created and you have enabled or disabled the required functionalities, you can review all the information using the view DBA_LOCKDOWN_PROFILES:

SQL> select rule_type, rule, clause, clause_option, option_value, min_value, max_value, status from DBA_LOCKDOWN_PROFILES where profile_name='WANNACRY_PROFILE';

RULE_TYPE  RULE                CLAUSE CLAUSE_OPTION        OPTION_VAL MIN MAX STATUS
---------- ------------------- ------ -------------------- ---------- --- --- -------
FEATURE    COMMON_USER_CONNECT                                                 DISABLE
OPTION     PARTITIONING                                                        DISABLE
STATEMENT  ALTER SYSTEM        SET    NLS_DATE_FORMAT                          DISABLE
STATEMENT  ALTER SYSTEM        SET    PARALLEL_MAX_SERVERS            10  39   DISABLE

 

Enable Lockdown Profile:

As we have seen, I created the lockdown profile directly without specifying whether I want that lockdown profile in one specific PDB, or in all the PDBs, etc., I just created it. Don’t worry about it: The creation of a lockdown profile doesn’t mean it is enabled by default. Lockdown profile works like a Database Resource Manager Plan; you can create as many as you want, but only one is enabled and it must be enabled explicitly. And enabling a lockdown profile is similar to enabling a Database Resource Manager Plan; it is enabled by a database parameter.

So far we have created the lockdown profile “WANNACRY_PROFILE” and we have customized it but we haven’t enabled it yet.  You can enable a lockdown profile in one specific PDB, in a set of them or in all PDBs. If you want to enable the lockdown profile in all the PDBs you have to be connected to CDB$ROOT and set the database parameter “pdb_lockdown” to the name of your lockdown profile; in this case, “WANNACRY_PROFILE”. If you want to enable the lockdown profile in a specific PDB, first you have to connect to the specific PDB and then you have to set the database parameter “pdb_lockdown”. 
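For reference, enabling it for every PDB at once is just a matter of setting the same parameter while connected to CDB$ROOT. A sketch (not used in the example that follows) would be:

SQL> alter session set container=CDB$ROOT;
SQL> alter system set pdb_lockdown='WANNACRY_PROFILE' scope=both;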

In the following example we have a CDB called “db12c” with two PDBs, one named “PDB1” and the second one named “PDB2”. We will enable the lockdown profile “WANNACRY_PROFILE” only in “PDB1”.

Checking out that the parameter is not set in any container:

SQL> select con_id, name, value from gv$system_parameter where name='pdb_lockdown';

CON_ID     NAME            VALUE
---------- --------------- ----------
0          pdb_lockdown

 

Connecting to “PDB1”:

SQL> show con_name

CON_NAME
------------------------------
PDB1

 

Set the database parameter pdb_lockdown:

SQL> alter system set pdb_lockdown='WANNACRY_PROFILE';

System altered.

 

Verifying that the parameter is set only in “PDB1” (CON_ID=3):

SQL> select con_id, name, value from gv$system_parameter where name='pdb_lockdown';

CON_ID     NAME VALUE
---------- -------------- ------------------------------
0          pdb_lockdown
3          pdb_lockdown   WANNACRY_PROFILE

 

Confirming whether the functionalities were successfully disabled:

Testing to change the parameter nls_date_format:

Connecting to “PDB1”:

SQL> show con_name

CON_NAME

------------------------------

PDB1

 

I am using a common user with “alter system” privileges:

SQL> show user

USER is "C##DGOMEZ"

 

As you can see, even though the user has the “alter system” privilege, it is not allowed to change the database parameter because of the lockdown profile.

SQL> alter system set nls_date_format='mm-dd-yyyy' scope=spfile;
alter system set nls_date_format='mm-dd-yyyy' scope=spfile
*
ERROR at line 1:
ORA-01031: insufficient privileges

 

Testing the feature 'COMMON_USER_CONNECT'. Without the lockdown profile, I was able to connect directly to a PDB with a common user, however now it is not allowed because of the lockdown profile:

[oracle@nuvola2 ~]$ sqlplus c##dgomez/dgomez@192.168.1.22:1521/pdb1 

ERROR:

ORA-01017: invalid username/password; logon denied

Testing the parameter parallel_max_servers. The range we specified in the lockdown profile was [10,39]. As we explained before, all the values outside of this range are disabled, while the values within the range are allowed.

SQL> alter system set parallel_max_servers=9;
alter system set parallel_max_servers=9
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> alter system set parallel_max_servers=10;

System altered.

SQL> alter system set parallel_max_servers=39;

System altered.

SQL> alter system set parallel_max_servers=40;
alter system set parallel_max_servers=40
*
ERROR at line 1:
ORA-01031: insufficient privileges

 

How to drop a lockdown profile:

Dropping a lockdown profile is easy. You just have to execute the following statement from CDB$ROOT. You don’t have to reset or clear the parameter pdb_lockdown in all the PDBs that are using this lockdown profile (although I strongly think it should not be this way). When you execute this statement, all the PDBs using the lockdown profile will automatically stop using the settings provided by it.

DROP LOCKDOWN PROFILE WANNACRY_PROFILE;

 

Conclusion:

In this article, I outlined the required steps to create a new lockdown profile, explained which kinds of functionality we can enable and disable, and provided several examples. I added comments to help you quickly understand how to use lockdown profiles and take advantage of them. This is very important in an era where security is of utmost value and finer granularity is needed to restrict users to only those tasks necessary for their role.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle Open World in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

Oracle Database 12cR2 new feature: Proxy PDB


By Deiby Gómez

 

Introduction:

The need to communicate with external systems and exchange data made Oracle develop a way to connect to different Oracle databases to execute operations. Traditionally, whenever we wanted to bring data in from a different database, we used a Database Link. Then, in 12.1.0.1.0, Oracle introduced a major new multi-tenant architecture. With Multitenant, a database can be either a Container Database or a non-Container Database. If we decide to create a new database as a Container (CDB), we can create Pluggable Databases connected to that CDB. However, DBAs still needed to use Database Links to exchange data between the pluggable databases within a Container.

The newest version of Oracle Database, 12.2.0.1.0, introduces a feature called “Proxy PDB”. A Proxy PDB is physically an empty PDB that has the minimum tablespaces required (SYSTEM, SYSAUX, UNDO), created in one CDB, that references a remote Pluggable Database in a different CDB. All the operations (DDLs and DMLs) that are executed within the Proxy PDB are sent to the referenced Pluggable Database and remotely executed in it, except for the operations ALTER PLUGGABLE DATABASE and ALTER DATABASE. This is why it is called “Proxy”.

The benefit of a Proxy PDB is that it’s exactly as if the referenced PDB was in the local CDB, but the data is stored remotely and the operations are executed remotely in the referenced Pluggable Database. For instance, if we have Database Resource Manager active in the local CDB, the current Resource Manager Plan also applies to the Proxy PDB. Another example is the CONTAINERS clause, which allows retrieval of data from all the Pluggable Databases; this clause also works for a Proxy PDB. For all operations, the Proxy PDB will be seen as a normal PDB.

The image below sets up our example. It shows two containers, CDB1 and CDB2.  The remote container is shown at the top of the illustration: CDB1. The local CDB is shown at the bottom of the illustration: CDB2. Each container has two pluggable databases within it, designated as PDB1 and PDB2. The PDB2 in the local container is a Proxy PDB that references the PDB2 within CDB1.  

In the illustration we see a user connected to the CDB$ROOT of CDB2 who is executing a query using the CONTAINERS clause across all the PDBs that belong to CDB2. The data returned includes “Guatemala”, which is physically stored in the referenced PDB, that is, the PDB2 within CDB1. The row with the value “Guatemala” is returned because the query was sent to the referenced PDB and executed there. (The referenced PDB can be either a normal PDB or an Application PDB. In this example the referenced PDB is a normal PDB.)

 

 

To create a Proxy PDB there are some prerequisites (a quick way to verify them is sketched right after this list):

  • The CDB that contains the referenced PDB must be in local undo mode.
  • The CDB that contains the referenced PDB must be in ARCHIVELOG mode.
  • The referenced PDB must be in open read/write mode when the proxy PDB is created.
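The following sketch shows one way to verify (and, if needed, fix) these prerequisites on the CDB that contains the referenced PDB; note that switching to local undo mode requires a restart in UPGRADE mode, so treat this only as a reference:

SQL> select log_mode from v$database;                 -- must return ARCHIVELOG

SQL> select property_value from database_properties
     where property_name = 'LOCAL_UNDO_ENABLED';      -- must return TRUE

-- Only if local undo is not enabled yet:
SQL> shutdown immediate
SQL> startup upgrade
SQL> alter database local undo on;
SQL> shutdown immediate
SQL> startup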

We will go through the example presented in the above image. First I will connect to CDB1 and create the PDB1 and PDB2 Pluggable Databases, and then I will jump to CDB2 to create its PDB1 and then the Proxy PDB called PDB2. Once everything is completed I will perform the query with the CONTAINERS clause from CDB2, my local container.

 

Preparation in CDB1:

I will create the PDB1 and PDB2 in CDB1:

SQL> create pluggable database pdb1 admin user pdbadmin identified by nuvola;

Pluggable database created.

SQL> create pluggable database pdb2 admin user pdbadmin identified by nuvola;

Pluggable database created.

 

Opening PDB1 and PDB2:

SQL> alter pluggable database all open;

Pluggable database altered.

 

One of the prerequisites is that the referenced PDB is in read/write; in this example both are in read/write:

SQL> select name, open_mode from v$pdbs;

NAME       OPEN_MODE
---------- ----------
PDB$SEED   READ ONLY
PDB1       READ WRITE
PDB2       READ WRITE

 

Another prerequisite is that the user that connects to the referenced PDB has to be a common user:

SQL> select username, common from dba_users where username='C##DGOMEZ';

USERNAME   COM
---------- ---
C##DGOMEZ  YES

Another prerequisite is that the remote CDB, in this case CDB1, has to be configured with Local Undo:

SQL>SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME        PROPERTY_VALUE
-------------------- --------------------
LOCAL_UNDO_ENABLED   TRUE

In the previous image, you can see that there is a table with one row inserted. I will load these rows into PDB1 and PDB2 in CDB1 to make this environment match the image:

SQL> alter session set container=pdb1;

Session altered.

SQL> create table c##dgomez.country (name varchar2(25));

Table created.

SQL> insert into c##dgomez.country values ('Brazil');

1 row created.

SQL> commit;

Commit complete.

 

The PDB2 of CDB1 will be our referenced PDB. In the image you can see that the value in the referenced PDB is “Guatemala”:

SQL> alter session set container=pdb2;

Session altered.

SQL> create table c##dgomez.country (name varchar2(25));

Table created.

SQL> insert into c##dgomez.country values ('Guatemala');

1 row created.

SQL> commit;

Commit complete.

 

The work in CDB1 is done. Two PDBs were created, the table was created, and the rows were inserted. Now it’s time to configure CDB2 and create the Proxy PDB.

 

Preparation in CDB2:

We will start from the CDB$ROOT:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

 

I will create a common user in order to perform the example with the CONTAINERS clause. For more information about the CONTAINERS clause you can read my article “New CONTAINERS Clause in 12.1.0.2 - Common Perspective”.

SQL> create user c##dgomez identified by nuvola container=all;

User created.

SQL> grant connect, resource, unlimited tablespace to c##dgomez container=all;

Grant succeeded.

 

I will create the same table in CDB$ROOT in CDB2 and insert a row in order to follow the example in the image:

SQL> create table c##dgomez.country (name varchar2(25));

Table created.

SQL> insert into c##dgomez.country values ('USA');

1 row created. 

SQL> commit;

Commit complete.

 

Creating the PDB1 in CDB2:

SQL> create pluggable database pdb1 admin user pdbadmin identified by nuvola;

Pluggable database created.

 

Opening the PDB1 of CDB2:

SQL> alter pluggable database pdb1 open;

Pluggable database altered.

 

Creating the table country in the PDB1 of CDB2:

SQL> alter session set container=pdb1;

Session altered.

SQL> create table c##dgomez.country (name varchar2(25));

Table created.

SQL> insert into c##dgomez.country values ('Canada');

1 row created. 

SQL> commit; 

Commit complete.

 

Creation of “Proxy PDB”:

Well, so far everything we have done has only been to build the environment of the example in the image shown at the beginning of this article. We have not yet seen how a “Proxy PDB” works; I have only provided concepts and some prerequisites. The next statement creates a database link in the CDB$ROOT of CDB2. The database link is required only at the time of the Proxy PDB creation. Once the Proxy PDB has been created, the database link is no longer required; the Proxy PDB connects directly to the referenced PDB without using the database link.

Note that the database link references a common user in the PDB that will be the referenced PDB, in this case PDB2 of CDB1.

SQL> CREATE DATABASE LINK link_to_pdb2_in_cdb1 CONNECT TO c##dgomez IDENTIFIED BY nuvola USING '192.168.1.22:1521/pdb2';

Database link created.

 

Note that the database link uses the common user in CDB1; this was one of the prerequisites I mentioned before. The database link connects to the PDB2 in CDB1 since this will be our referenced PDB.

Once the database link is created, the next step is to create the Proxy PDB.

SQL> create pluggable database pdb2 AS PROXY FROM pdb2@link_to_pdb2_in_cdb1;

Pluggable database created.

 

And that’s it! The Proxy PDB was created successfully. I will proceed to open it in read/write to start using it:

SQL> alter pluggable database pdb2 open; 

Pluggable database altered.

 

Now it’s time to test how Proxy PDB works! Since the example in this article is based on the CONTAINERS clause, I will connect to the CDB$ROOT of CDB2 using password authentication and execute a query:

[oracle@nuvola2 ~]$ sqlplus c##dgomez/nuvola@'192.168.1.22:1521/cdb2'

SQL> show con_name

CON_NAME

------------------------------

CDB$ROOT

 

Note that the query from CDB$ROOT of CDB2 returns the value “Guatemala”; this is because of the Proxy PDB. The value “Guatemala” is not stored in the PDB2 of CDB2 (the Proxy PDB) but, as I said before, the Proxy PDB behaves transparently for all DDLs and DMLs; it is as if a normal PDB were there.

SQL> select name from containers(c##dgomez.country);

NAME
-------------------------
USA
Canada
Guatemala

There is a limitation on Proxy PDBs: they don’t support OS authentication. If you log in to CDB2 with OS authentication and try to run a query from the PDB2, you will get no data. This is because the Proxy PDB will not be able to connect to the referenced PDB and get the data from it. Proxy PDB supports only password authentication.

[oracle@nuvola2 ~]$  sqlplus  / as sysdba

SQL> show con_name

CON_NAME
-----------------------------
CDB$ROOT

SQL> select name from containers(c##dgomez.country);

NAME
-------------------------
USA
Canada 

If we connect with OS authentication to the PDB2 in CDB2 and we try to execute a query, the query will fail, saying that the password used is not correct. Of course, we know that there was not a password provided since we used OS authentication.

[oracle@nuvola2 ~]$ sqlplus / as sysdba 

SQL> alter session set container=pdb2; 

Session altered.

SQL> select * from c##dgomez.country;

select * from c##dgomez.country

                        *

ERROR at line 1:

ORA-01017: invalid username/password; logon denied

ORA-02063: preceding line from PROXYPDB$DBLINK

 

When we use password authentication the Selects works well:

[oracle@nuvola2 ~]$ sqlplus c##dgomez/nuvola@'192.168.1.22:1521/cdb2'

SQL>  alter session set container=pdb2;

Session altered.

SQL> select * from c##dgomez.country;

NAME
-------------------------
Guatemala

Now I will test an INSERT operation in the Proxy PDB, but since it is a Proxy, the operation will be executed in the referenced PDB, which means that the row will be stored in the referenced PDB:

SQL> insert into c##dgomez.country values ('Costa Rica');

1 row created.

SQL> commit;

Commit complete.

 

In PDB2 of CDB1, I will verify if the row was inserted there:

SQL> select name from v$database;

NAME

---------

DB12C

SQL> alter session set container=pdb2; 

Session altered.

SQL> select * from c##dgomez.country;

NAME
-------------------------
Guatemala
Costa Rica

This confirms that the Proxy PDB sends SELECTs and also INSERTS (DDLs+DMLs) to be processed inside the referenced PDB.

 

Conclusion:

We have seen that a Proxy PDB is a special PDB that receives operations (DDLs and DMLs) in a local CDB but sends all the operations to its referenced PDB, and processes the operations remotely within the referenced PDB. This brings us the advantage of “location transparency”. Location transparency means that it doesn’t matter where the data is located physically; when we use Proxy PDBs, we can present a PDB in other CDBs as if the PDB that physically stores all the data were there. All the operations will be processed remotely. Data can be used everywhere in 12.2.0.1.0 without actually having the data physically in all the sites.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala. He holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn 

Oracle 12cR2 RMAN New Feature: UNTIL AVAILABLE REDO


By Deiby Gómez

Introduction:

Oracle has introduced several new features in Oracle Database 12.2.0.1.0, and RMAN is no exception. Most DBAs would agree that one of the difficult tasks whenever a database needs to be restored is to calculate the SCN or the sequence to use in the “RECOVER DATABASE UNTIL (…)” operation, in order to apply as many archived logs as possible and recover as much data as possible. Every DBA has different methods to discover the target SCN or target sequence: some use the “PREVIEW” clause, others the view v$log, others the RMAN “LIST” commands, and so on. The problem is that when the calculation is not correct, and the database being restored is huge (let’s say 8 TB), an error in the “RECOVER” phase might force us to restore the whole database from scratch. In Oracle Database 12.2.0.1.0 the clause “UNTIL AVAILABLE REDO” is available. As its name indicates, this clause makes all the required calculations to recover the database up to the last available archived log. This is a really cool feature, since all the DBA has to do is catalog all the available archived logs and use “UNTIL AVAILABLE REDO” in the “RECOVER DATABASE” phase, and Oracle will do all the work. This also lets us avoid human error in the calculations.
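At a high level, the workflow looks like the following sketch (the archived log location is a hypothetical path; the full walkthrough with real output follows in the next sections):

RMAN> catalog start with '/u01/app/oracle/arch/';
RMAN> restore database;
RMAN> recover database until available redo;
RMAN> alter database open resetlogs;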

In order to show how this feature works I will use an empty database with the table DGOMEZ.COUNTRY; currently it has no rows.  This database is in archivelog mode.
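If you want to reproduce the setup, you can confirm the archivelog mode with a quick check like this (a minimal sketch; for this test it should return ARCHIVELOG):

SQL> select log_mode from v$database;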

 

Performing a backup:

RMAN> backup database;

Starting backup at 07-MAY-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=53 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/others/db1/DB1/datafile/o1_mf_system_djyxzjxt_.dbf
input datafile file number=00003 name=/others/db1/DB1/datafile/o1_mf_sysaux_djyy0ynm_.dbf
input datafile file number=00004 name=/others/db1/DB1/datafile/o1_mf_undotbs1_djyy23sy_.dbf
input datafile file number=00007 name=/others/db1/DB1/datafile/o1_mf_users_djyy24y4_.dbf
channel ORA_DISK_1: starting piece 1 at 07-MAY-17
channel ORA_DISK_1: finished piece 1 at 07-MAY-17
piece handle=/others/db1/fra/DB1/backupset/2017_05_07/o1_mf_nnndf_TAG20170507T155509_djyywznq_.bkp tag=TAG20170507T155509 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:38
Finished backup at 07-MAY-17

Starting Control File and SPFILE Autobackup at 07-MAY-17
piece handle=/others/db1/fra/DB1/autobackup/2017_05_07/o1_mf_s_943372550_djyyy6vo_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 07-MAY-17

I will insert a row with the value ‘Guatemala’ into the table, the row will be committed and a new archived log will be generated:

SQL> insert into dgomez.country values ('Guatemala');

1 row created.

SQL> commit;

Commit complete.

SQL> alter system switch logfile;

System altered.

 

A second row with the value ‘Canada’ will be inserted into the table, the row will be committed and a new archived log will be generated:

SQL> insert into dgomez.country values ('Canada');

1 row created.

SQL> commit;

Commit complete.

SQL> alter system switch logfile;

System altered.

 

A last row with the value ‘Colombia’ will be inserted into the table, the row will be committed and a new archived log will be generated:

SQL> insert into dgomez.country values ('Colombia');

1 row created 

SQL> commit;

Commit complete.

SQL> alter system switch logfile; 

System altered.

 

You can see that there were three archived logs created. This is because for every row that was inserted we executed a switch of the log file, and that resulted in the creation of a new archived log.

[oracle@nuvola2 2017_05_07]$ ls -ltr

total 155072

-rw-r----- 1 oracle dba 158784512 May  7 15:59 o1_mf_1_1_djyz5fgk_.arc

-rw-r----- 1 oracle dba      2560 May  7 16:00 o1_mf_1_2_djyz6dyd_.arc

-rw-r----- 1 oracle dba      3072 May  7 16:00 o1_mf_1_3_djyz723j_.arc

[oracle@nuvola2 2017_05_07]$

 

Confirming the three rows are in the table:

SQL> select * from dgomez.country;

NAME

--------------------

Guatemala

Canada

Colombia

 

Basically what I have done is what the following picture explains.  Initially the database was empty. The row with the value ‘Guatemala’ was inserted and then I generated an archived log (#1). I repeated these steps with the value ‘Canada’ and ‘Colombia’ respectively.

 

First Test – Using all the archived logs generated:

The first test that I will perform is to use these three newly generated archived logs to recover the database. For this I will simulate that all the datafiles of the existing database were deleted and we have to restore and recover the database.

Shutting down the existing database:

SQL> shutdown immediate;

Database closed.

Database dismounted.

ORACLE instance shut down.

 

Mounting the database:

SQL> startup mount;

ORACLE instance started.

 

Total System Global Area  843055104 bytes

Fixed Size              8626288 bytes

Variable Size         322965392 bytes

Database Buffers      507510784 bytes

Redo Buffers            3952640 bytes

Database mounted.

 

Deleting datafiles and online logs in order to simulate a storage damage:

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/DB1/datafile/*

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/DB1/onlinelog/*

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/fra/DB1/onlinelog/*

 

Restoring the database:

RMAN> restore database;

Starting restore at 07-MAY-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=37 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /others/db1/DB1/datafile/o1_mf_system_djyxzjxt_.dbf
channel ORA_DISK_1: restoring datafile 00003 to /others/db1/DB1/datafile/o1_mf_sysaux_djyy0ynm_.dbf
channel ORA_DISK_1: restoring datafile 00004 to /others/db1/DB1/datafile/o1_mf_undotbs1_djyy23sy_.dbf
channel ORA_DISK_1: restoring datafile 00007 to /others/db1/DB1/datafile/o1_mf_users_djyy24y4_.dbf
channel ORA_DISK_1: reading from backup piece /others/db1/fra/DB1/backupset/2017_05_07/o1_mf_nnndf_TAG20170507T155509_djyywznq_.bkp
channel ORA_DISK_1: piece handle=/others/db1/fra/DB1/backupset/2017_05_07/o1_mf_nnndf_TAG20170507T155509_djyywznq_.bkp tag=TAG20170507T155509
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 07-MAY-17

 

Recovering the database: Here is where the magic happens. All we have to do is use the “UNTIL AVAILABLE REDO” clause and Oracle will automatically apply all the archived logs that are registered in its control file, or in a recovery catalog if one is used. There is no need to perform calculations for the target SCN.

RMAN> recover database until available redo;

Starting recover at 07-MAY-17
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 1 is already on disk as file /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc
archived log for thread 1 with sequence 2 is already on disk as file /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_2_djyz6dyd_.arc
archived log for thread 1 with sequence 3 is already on disk as file /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_3_djyz723j_.arc
archived log file name=/others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc thread=1 sequence=1
archived log file name=/others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_2_djyz6dyd_.arc thread=1 sequence=2
archived log file name=/others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_3_djyz723j_.arc thread=1 sequence=3
warning: attempt media recovery until thread 1, sequence 4
Finished recover at 07-MAY-17

We can see that the three archived logs were applied automatically and there were no errors.

Opening the database in resetlogs:

SQL> alter database open resetlogs; 

Database altered.

 

Verification of the data:

SQL> select * from dgomez.country;

NAME

--------------------

Guatemala

Canada

Colombia

 

Since the three rows are there, we can confirm that Oracle indeed applied the three archived logs automatically, without our having to specify any target SCN or target sequence.

 

Second Test – Deleting the last two archived logs:

The test that I will perform now is with the last two archived logs deleted and only the first archived log available. I will again use the UNTIL AVAILABLE REDO clause and Oracle should be able to discover that the maximum time to which the database can be recovered is right after the first row was inserted (with the value ‘Guatemala’).  

Shutting down the existing database:

SQL> shutdown immediate;

Database closed.

Database dismounted.

ORACLE instance shut down.

 

Mounting the database:

SQL> startup mount;

ORACLE instance started.

 

Total System Global Area  843055104 bytes

Fixed Size              8626288 bytes

Variable Size         322965392 bytes

Database Buffers      507510784 bytes

Redo Buffers            3952640 bytes

Database mounted.

 

Deleting datafiles and online logs in order to simulate a storage damage:

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/DB1/datafile/*

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/DB1/onlinelog/*

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/fra/DB1/onlinelog/*

 

Confirming that our three archived logs are there:

[oracle@nuvola2 2017_05_07]$ ls -ltr  /others/db1/fra/DB1/archivelog/2017_05_07/*

-rw-r----- 1 oracle dba 158784512 May  7 15:59 /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc

-rw-r----- 1 oracle dba      2560 May  7 16:00 /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_2_djyz6dyd_.arc

-rw-r----- 1 oracle dba      3072 May  7 16:00 /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_3_djyz723j_.arc

 

Deleting the last two archived logs that were generated:

[oracle@nuvola2 2017_05_07]$ rm -rf  /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_3_djyz723j_.arc

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_2_djyz6dyd_.arc

 

Confirming that only the first archived log is available now:

[oracle@nuvola2 2017_05_07]$ ls -ltr  /others/db1/fra/DB1/archivelog/2017_05_07/*

-rw-r----- 1 oracle dba 158784512 May  7 15:59 /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc

[oracle@nuvola2 2017_05_07]$

 

The following image explains what we are doing. We deleted the last two generated archived logs in order to test whether Oracle is aware of it and whether it automatically handles the situation and applies all the redo data in the first archived log. If Oracle performs its job well, at the end we will see only one row inserted, with the value ‘Guatemala’.

 

Restoring the database:

RMAN> restore database;

Starting restore at 07-MAY-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=44 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /others/db1/DB1/datafile/o1_mf_system_djyznwbl_.dbf
channel ORA_DISK_1: restoring datafile 00003 to /others/db1/DB1/datafile/o1_mf_sysaux_djyznwby_.dbf
channel ORA_DISK_1: restoring datafile 00004 to /others/db1/DB1/datafile/o1_mf_undotbs1_djyznwc9_.dbf
channel ORA_DISK_1: restoring datafile 00007 to /others/db1/DB1/datafile/o1_mf_users_djyznwcn_.dbf
channel ORA_DISK_1: reading from backup piece /others/db1/fra/DB1/backupset/2017_05_07/o1_mf_nnndf_TAG20170507T155509_djyywznq_.bkp
channel ORA_DISK_1: piece handle=/others/db1/fra/DB1/backupset/2017_05_07/o1_mf_nnndf_TAG20170507T155509_djyywznq_.bkp tag=TAG20170507T155509
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:22
Finished restore at 07-MAY-17

 

Recovering the database:

RMAN> recover database until available redo;

Starting recover at 07-MAY-17
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 1 is already on disk as file /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc
archived log file name=/others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc thread=1 sequence=1
warning: attempt media recovery until thread 1, sequence 2
Finished recover at 07-MAY-17

 

You can see that Oracle automatically discovered that only one archived log is available and automatically calculated the target sequence for the database to be recovered.

Opening the database with resetlogs:

RMAN> alter database open resetlogs; 

Statement processed

 

Confirming the data:

RMAN> select * from dgomez.country;

NAME               

--------------------

Guatemala          

 

We can see that the result is correct. Since only the first archived log was applied, only the row with the value ‘Guatemala’ exists in the table.

 

Conclusion:

Definitely the ‘UNTIL AVAILABLE REDO’ clause is something DBAs have been waiting for, since it eliminates the time spent calculating the target SCN or sequence and also removes the risk of human error in the calculations, which might result in having to restore the entire database from scratch. That might be acceptable for small databases, but for huge, multi-terabyte databases it is not. Oracle has made our life easier.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala. He holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn 

How to analyze Undo statistics to proactively avoid undo space issues


By Deiby Gómez

Introduction

In my previous articles I explained two very important concepts about undo data: one is how Oracle manages the retention time, and the other is how Oracle reuses the undo extents. You can also check my presentation "How to avoid ORA-01555" if you want to know more about that error. In this article, I will show you how the view V$UNDOSTAT can give you useful information about how everything is going with the undo data in your database. First, let me give you a short definition of two views:

V$UNDOSTAT: Each row in the view keeps statistics collected in the instance for a 10-minute interval. The rows are in descending order by the BEGIN_TIME column value. Each row belongs to the time interval marked by (BEGIN_TIME, END_TIME). Each column represents the data collected for the particular statistic in that time interval. The first row of the view contains statistics for the (partial) current time period. The view contains a total of 576 rows, spanning a 4 day cycle.

DBA_HIST_UNDOSTAT: This view contains snapshots of V$UNDOSTAT. Basically it has the history of V$UNDOSTAT.

As you can see, the main view is V$UNDOSTAT; the other is just its history. There are several columns in the view. Here are the ones we’ll focus on:

UNDOBLKS: Represents the total number of undo blocks consumed. You can use this column to obtain the consumption rate of undo blocks, and thereby estimate the size of the undo tablespace needed to handle the workload on your system

TXNCOUNT: Identifies the total number of transactions executed within the period

UNXPBLKREUCNT: Number of unexpired undo blocks reused by transactions

EXPBLKRELCNT: Number of expired undo blocks stolen from other undo segments

ACTIVEBLKS: Total number of blocks in the active extents of the undo tablespace for the instance at the sampled time in the period

UNEXPIREDBLKS: Total number of blocks in the unexpired extents of the undo tablespace for the instance at the sampled time in the period

EXPIREDBLKS: Total number of blocks in the expired extents of the undo tablespace for the instance at the sampled time in the period.

NOSPACEERRCNT: Identifies the number of times space was requested in the undo tablespace and there was no free space available. That is, all of the space in the undo tablespace was in use by active transactions. The corrective action is to add more space to the undo tablespace.

By using these columns, there are some interesting combinations that every DBA can use to tune undo data generation. If we combine UNDOBLKS and TXNCOUNT, for instance, we can find out the consumption rate of undo blocks per transaction.  Use the following query:

select min(UNDOBLKS/TXNCOUNT), avg(UNDOBLKS/TXNCOUNT), max(UNDOBLKS/TXNCOUNT) from V$UNDOSTAT;

select BEGIN_TIME, END_TIME, UNDOBLKS/TXNCOUNT from V$UNDOSTAT;

You can also combine UNDOBLKS, the Undo tablespace’s block size, and the retention time in order to learn how many MB you will need for your undo tablespace’s size to match with a specific retention time.
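As a minimal sketch of that calculation (assuming the undo tablespace uses the instance db_block_size and that V$UNDOSTAT covers a representative workload window):

SQL> -- undo_retention (seconds) x undo blocks consumed per second x block size, expressed in MB
SQL> select (ur * (ups * dbs)) / 1024 / 1024 as undo_mb_needed
     from (select value as ur  from v$parameter where name = 'undo_retention'),
          (select sum(undoblks) / sum((end_time - begin_time) * 86400) as ups from v$undostat),
          (select value as dbs from v$parameter where name = 'db_block_size');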

And even more interesting, we can extract the data from V$UNDOSTAT in a CSV format and create line charts in order to understand the undo behavior of our databases.
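For example, a minimal sketch of that extraction (SQL*Plus in 12.2 supports CSV markup; the spool file name is just an example):

SQL> set markup csv on
SQL> spool undostat.csv
SQL> select begin_time, end_time, nospaceerrcnt, activeblks, unexpiredblks, expiredblks
     from v$undostat order by begin_time;
SQL> spool off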

Let’s see how this would work. As an example, I have created a 12.2.0.1 EE database, where I have generated some workload with SLOB. SLOB was configured to perform 95% UPDATEs and 5% SELECTs, with WORK_UNIT=8192, 5 SLOB schemas and 5 threads per schema, in order to generate a lot of undo data. 

For each chart that I will show, SLOB was running for around 60 minutes. This means that we will have 6 rows in V$UNDOSTAT, since every row is a sample of 10 mins.

Before you study the charts, I really recommend that you first read these two articles to master the two principal concepts:

How does Oracle reuse the Expired and Unexpired undo extents?

Undo retention time with autoextend=on and autoextend=off

Let’s begin. The following charts use the columns: NOSPACEERRCNT, ACTIVEBLKS, UNEXPIREDBLKS, EXPIREDBLKS (but you can build more complex charts using the others columns of V$UNDOSTAT).

First type of workload 

The chart below characterizes an OLTP database; the database is receiving transactions (because there are active undo extents) but the transactions seem to happen infrequently since most of the undo extents are "expired" and the active extents have not increased enough to require reusing expired/unexpired extents.

If you have your undo data behavior looking like this chart, you would say your database is healthy from an undo space perspective. This would be a "perfect" environment. In this chart, there is no reason to be worried regarding undo space.

 

First Workload Example

Second type of workload

This workload is quite different. In the previous chart, the higher line was “Expired Blocks” and the lower line was “Unexpired Blocks”; however, in this second chart this is reversed. Now we can see that the higher line is “Unexpired Blocks”. This means that the database is receiving the workload and the undo retention time is high enough to keep the undo data of the completed transactions (unexpired extents) stored.

Here, you have to review whether there are Unexpired extents that are being reused by new transactions. This happens more frequently when the line of Unexpired extents is getting close to the line of the active extents (the next two charts). If you see that “UNXPBLKREUCNT” has a value greater than one, you probably should tune undo retention. If the undo retention has the value that you require, then you can increase the size of your undo tablespace; otherwise, unexpired extents will be overwritten by other transactions if Oracle requires it. In that case you would see some ORA-01555 in your SELECT operations.
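A quick way to check this, as a minimal sketch based on the columns described above:

SQL> select begin_time, end_time, unxpblkreucnt, expblkrelcnt, nospaceerrcnt
     from v$undostat
     where unxpblkreucnt > 0
     order by begin_time;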

In the chart below, however, there is no reason to be worried regarding space.

Second Workload Example

Third type of workload

The chart below is very similar to the previous one; however, in this chart the line of “Unexpired extents” is closer to the line of Active extents. This behavior increases the probability of getting ORA-01555 in your SELECT operations. If you want to avoid ORA-01555, you can increase the undo retention time or increase the size of the undo tablespace.

In this chart, there is no reason to be worried regarding space, only about ORA-01555, but you should look a little bit deeper, because if you don’t pay attention, your database might reach the status of either of the two charts we’ll be looking at later on.

Third Workload Example

Fourth type of workload 

This chart indicates a worse situation than the two previous charts. Here, the number of transactions is increased such that the number of active undo extents has also increased, and started to overwrite (reuse) some unexpired undo extents.

In a database with this undo behavior there will surely be some SELECTs failing with ORA-01555, and space issues will be around the corner. I recommend in this case that you make a deep analysis of why unexpired undo extents have started to be reused.

If you just ignore the status shown in this chart, your database will at some point reach the behavior shown in the next chart. There will be space problems and your transactions (INSERT, UPDATE, DELETE) will start failing because there is no free space in the undo tablespace to be assigned for new extents.


Fourth Workload Example

Fifth type of workload

You should avoid having your database in this status as much as possible. In this status, some transactions (INSERT, UPDATE, DELETE) have already started to fail because there was no free space in the undo tablespace to create new active undo extents. You should definitely increase the size of some datafiles of the undo tablespace.


Fifth Workload Example
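To get out of the situation described above, you can grow an existing undo datafile or add a new one; a hedged sketch (the file names and sizes are hypothetical):

SQL> alter database datafile '/u01/app/oracle/oradata/db1/undotbs01.dbf' resize 8G;

SQL> alter tablespace undotbs1 add datafile '/u01/app/oracle/oradata/db1/undotbs02.dbf'
     size 4G autoextend on next 512M maxsize 16G;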

I’ve just shown you five charts created from the view V$UNDOSTAT, which allows you to chart up to 4 days of historic data. You could use DBA_HIST_UNDOSTAT if you want to chart several days further in the past.
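As a minimal sketch of the same kind of query against the AWR history (DBA_HIST_UNDOSTAT is populated from AWR snapshots, so the usual Diagnostic Pack licensing considerations apply):

SQL> select begin_time, end_time, undoblks, txncount, unxpblkreucnt, nospaceerrcnt
     from dba_hist_undostat
     where begin_time > sysdate - 7
     order by begin_time;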

Determining the proper undo tablespace size

Oracle provides the function dbms_undo_adv.required_undo_size, which you can use to determine the proper undo tablespace size to comply with a specific undo retention time.

SQL> SELECT 'The Required undo tablespace size using Statistics In Memory is ' || dbms_undo_adv.required_undo_size(128) || ' MB' required_undo_size FROM dual;

REQUIRED_UNDO_SIZE

--------------------------------------------------------------------------------

The Required undo tablespace size using Statistics In Memory is 79 MB

You can use this function as a starting point, but I recommend that you set the size of the undo tablespace based on your analysis of the behavior and historic statistics of your undo data.

Conclusion

In this article I demonstrated that the view V$UNDOSTAT has very useful information that you can review, or even better, that you can chart. You can build charts as complex as you want in order to analyze the behavior of your database from the undo usage perspective and then make decisions to properly tune undo retention time and undo tablespace size.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala. He holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn 

Oracle Database 12.2 - How to track index usage


By Deiby Gómez

Introduction

Several articles have been written about how to track the usage of indexes, and there are several scripts to determine which indexes are being used after monitoring for a while. In versions prior to Oracle Database 12cR2 there is the clause “ALTER INDEX (…) MONITORING USAGE” that can be used for this. However, Oracle 12.2 introduced two new views that automatically monitor index usage:

V$INDEX_USAGE_INFO: V$INDEX_USAGE_INFO keeps track of index usage since the last flush. A flush occurs every 15 minutes. After each flush, ACTIVE_ELEM_COUNT is reset to 0 and LAST_FLUSH_TIME is updated to the current time.

DBA_INDEX_USAGE: DBA_INDEX_USAGE displays cumulative statistics for each index.

With these two new views, Oracle automatically tracks the usage of indexes. There are several columns in dba_index_usage that can be used to find out how many accesses the indexes have received, how many rows they have returned, and, even better, there are buckets to create histograms for accesses and rows returned. The most recent time that the index was used is also recorded.  
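For comparison, the pre-12.2 approach mentioned above looked roughly like this (a sketch with a hypothetical index name; it only tells you whether the index was used, not how often):

SQL> alter index dgomez.idx_demo monitoring usage;
SQL> -- run the workload for a while, then check (connected as the index owner):
SQL> select index_name, monitoring, used, start_monitoring from v$object_usage;
SQL> alter index dgomez.idx_demo nomonitoring usage;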

In the following example, I will create a table with three columns, with one index in every column. Then I will run some queries against the table in order to use the indexes, and we will confirm that indeed Oracle 12.2 tracks the usage.

Creating the table

SQL> create table dgomez.table1 (id number, val1 varchar2(20), val2 varchar2(20));

Table created.

Creating an Index in each column

SQL> create index dgomez.idx_id on dgomez.table1(id);

Index created.

 

SQL> create index dgomez.idx_val1 on dgomez.table1(val1);

Index created.

 

SQL> create index dgomez.idx_val2 on dgomez.table1(val2);

Index created.

Perform some INSERTs in the table

While the INSERT statements also affect the indexes (index entries must be created in the b-tree), this doesn’t count as an “access”.

SQL> insert into dgomez.table1 values (1,'a','b');

SQL> insert into dgomez.table1 values (2,'b','c');

SQL> insert into dgomez.table1 values (3,'c','d');

SQL> insert into dgomez.table1 values (4,'d','e');

SQL> insert into dgomez.table1 values (5,'e','f');

SQL> insert into dgomez.table1 values (6,'f','g');

SQL> insert into dgomez.table1 values (7,'g','h');

SQL> insert into dgomez.table1 values (8,'h','i');

SQL> insert into dgomez.table1 values (9,'i','j');

SQL> insert into dgomez.table1 values (10,'j','k');

SQL> insert into dgomez.table1 values (11,'k','l');

SQL> commit;

Executing some queries

I will execute some queries. I have enabled autotrace to confirm that each query is using the index; this counts as an “access”. Also pay attention to how many rows each query returns, since this count is also monitored by Oracle. At the end, we will list how many accesses each index has received and how many rows it has returned, and we will confirm whether the data displayed is correct.
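If you want to reproduce the execution plans shown below, autotrace can be enabled in SQL*Plus like this (a minimal sketch):

SQL> set autotrace on explain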

Using the index IDX_ID:

SQL> select id from dgomez.table1 where id>1;

10 rows selected.

 

---------------------------------------------------------------------------

| Id  | Operation      | Name   | Rows  | Bytes | Cost (%CPU)| Time       |

---------------------------------------------------------------------------

|   0 | SELECT STATEMENT |       |    10 |   130 | 1   (0)     | 00:00:01  |

|*  1 |  INDEX RANGE SCAN| IDX_ID |    10 |   130 |      1   (0)| 00:00:01 |

---------------------------------------------------------------------------

 

SQL> select id from dgomez.table1 where id>0;

 

11 rows selected.

 

---------------------------------------------------------------------------

| Id  | Operation      | Name   | Rows  | Bytes | Cost (%CPU)| Time       |

---------------------------------------------------------------------------

|   0 | SELECT STATEMENT |        |    11 |   143 |      1   (0)| 00:00:01  |

|*  1 |  INDEX RANGE SCAN| IDX_ID |    11 |   143 |      1   (0)| 00:00:01  |

---------------------------------------------------------------------------

 

Using the index IDX_VAL1:

SQL> select val1 from dgomez.table1 where val1 !='a';

 

10 rows selected.

 

-----------------------------------------------------------------------------

| Id  | Operation      | Name     | Rows  | Bytes | Cost (%CPU)| Time     |

-----------------------------------------------------------------------------

|   0 | SELECT STATEMENT |          |  10 | 120   |        1   (0)| 00:00:01 |

|*  1 |  INDEX FULL SCAN | IDX_VAL1 |  10 | 120   |        1   (0)| 00:00:01 |

-----------------------------------------------------------------------------

 

SQL> select val1 from dgomez.table1 where val1 !='z';

 

11 rows selected.

 

-----------------------------------------------------------------------------

| Id  | Operation     | Name      | Rows  | Bytes | Cost (%CPU)| Time     |

-----------------------------------------------------------------------------

|   0 | SELECT STATEMENT |          |  11 | 132    |        1   (0)| 00:00:01 |

|*  1 |  INDEX FULL SCAN | IDX_VAL1 |  11 | 132    |        1   (0)| 00:00:01 |

-----------------------------------------------------------------------------

 

Using the index IDX_VAL2:

 SQL> select val2 from dgomez.table1 where val2 !='b';

 

10 rows selected.

 

-----------------------------------------------------------------------------

| Id  | Operation      | Name     | Rows  | Bytes | Cost (%CPU)| Time     |

-----------------------------------------------------------------------------

|   0 | SELECT STATEMENT |          |  10 | 120    |        1   (0)| 00:00:01 |

|*  1 |  INDEX FULL SCAN |IDX_VAL2 |  10 | 120    |        1   (0)| 00:00:01 |

-----------------------------------------------------------------------------

 

SQL> select val2 from dgomez.table1 where val2 !='z';

 

11 rows selected.

 

-----------------------------------------------------------------------------

| Id  | Operation      | Name     | Rows  | Bytes | Cost (%CPU)| Time     |

-----------------------------------------------------------------------------

|   0 | SELECT STATEMENT |          |  11 | 132    |        1   (0)| 00:00:01 |

|*  1 |  INDEX FULL SCAN | IDX_VAL2 |  11 | 132    |        1   (0)| 00:00:01 |

-----------------------------------------------------------------------------

Confirming the information captured

Now let’s take a look into the information captured by Oracle. In the previous part of this demo I executed each query two times in order to use every index twice. The first query always returned 10 rows for every index, and the second query returned 11 rows for every index; this means in total the index has returned 21 rows. Now let’s confirm these values:

SQL>

select name, total_access_count, total_exec_count, total_rows_returned, last_used from DBA_INDEX_USAGE where owner='DGOMEZ';

 

NAME     TOTAL_ACCESS_COUNT  TOTAL_EXEC_COUNT TOTAL_ROWS_RETURNED             LAST_USED

--------- ------------------ ---------------- ------------------- ---------------------

IDX_ID                     2                2                  21  07-16-2017 18:58:43

IDX_VAL1                   2                2                  21  07-16-2017 18:58:43

IDX_VAL2                   2                2                  21  07-16-2017 18:58:43

 

Fortunately, the information about every query I executed was captured, but it seems that not all SELECTs are captured, as Franck Pachot explains in this article.

The following output shows how many accesses the index has received:

SQL> select name, bucket_1_access_count, bucket_2_10_access_count, bucket_11_100_access_count, bucket_101_1000_access_count  from DBA_INDEX_USAGE where owner='DGOMEZ';

 

NAME      BUC_1_ACC_CT BUC_2_10_ACC_CT BUC_11_100_ACC_CT BUC_101_1000_ACC_CT

--------- ------------ --------------- ----------------- -------------------

IDX_ID              0               1                 1                    0

IDX_VAL1            0               1                 1                   0

IDX_VAL2            0               1                 1                   0

 

The definition of the column “BUCKET_11_100_ACCESS_COUNT” is “The index has been accessed between 11 and 100 times”. At first look it seems that this definition is not correct, because I just executed the same query two times for each index. I didn’t execute a query that accessed the index between 11 and 100 times.

So apparently what this column actually captures is accesses, not operations. Since the first SELECT operation accessed the index 10 times because it returned 10 rows, bucket_2_10_access_count was increased by one. It is the same for the second query, which accessed the index 11 times because it returned 11 rows; bucket_11_100_access_count was increased by one.

But… Wait! TOTAL_ACCESS_COUNT says every index was accessed only two times in total. So, there are some inconsistent definitions here:

  • Either there were two accesses of every index because I executed two SELECT operations that touched the index, in which case TOTAL_ACCESS_COUNT is correct but BUCKET_11_100_ACCESS_COUNT is not correct, because I didn’t execute any query more than 10 times and fewer than 101 times. 
  • Or, the BUCKET_11_100_ACCESS_COUNT is correct and it doesn’t count the operations (SELECTs in this case) but instead counts every access to the b-tree nodes of the index; in which case the definition of TOTAL_ACCESS_COUNT is wrong.

In the following output we can confirm that every bucket received the correct information. For example, for the bucket bucket_2_10_rows_returned there is 1 execution; this is because the first query always returned 10 rows in every index. The bucket bucket_11_100_rows_returned always has the right value (1 execution) since the second query we executed against every index always returned 11 rows.

SQL> select name, bucket_2_10_rows_returned, bucket_11_100_rows_returned, bucket_101_1000_rows_returned from DBA_INDEX_USAGE where owner='DGOMEZ';

 

NAME      BUC_2_10_RW_RETD BUC_11_100_RW_RETD  BUC_101_1000_RW_RETD

--------- ---------------- ------------------ ---------------------

IDX_ID                 10                11                      0

IDX_VAL1               10                11                      0

IDX_VAL2               10                11                      0

Conclusion

Oracle has been introducing new views that provide very useful information to DBAs, so that DBAs can properly administer their databases and diagnose problems before they become reactive issues. For several years scripts, third-party tools, ALTER INDEX clauses, etc., were used to track index usage, but this has changed: now Oracle performs this tracking automatically, without performance overhead.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala. He holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn 


Oracle EM 13c Database's historic data without DBA_HIST*


By Deiby Gómez

Introduction

Data changes frequently in OLTP environments, and Oracle has to be aware of those changes, or at least try to detect them, in order to adjust the optimizer and execute statements in the best possible way. To do so, Oracle generates several metrics from the system, the sessions, the services, etc., and it also gathers statistics automatically via AUTOTASK.

There is a huge amount of information generated by the metrics, which is captured mainly in AWR repository tables. The information generated by the metrics is very important because by using it the database administrators can perform troubleshooting and capacity planning, analyze the workload over a period of time, and so on.  When there are no performance issues, database administrators mostly think about capacity planning in order to understand how the database is growing over time.  In the past, this information was used to size the new hardware that they had to buy every two or three years, but with Oracle Cloud, that’s a thing of the past. Nowadays this information is used to understand different aspects of the growth of the business.

Businesses impose several different requirements; for example, a business might want to know  about the increase in users consuming their services or products; the DBA would want to know about increased space requirements, increase in physical writes, and so on. These are among several scenarios where historical data is needed to create complex and customized reports.

When we think about historical data, our first thought is AWR/ASH; however, there is another alternative that few DBAs use: the repository views of Enterprise Manager. These views have hundreds of different metrics that are captured automatically by Enterprise Manager and can be used to create customized reports as complex as we could want. Just imagine, hundreds of metrics to play with!

As per Oracle "Database Licensing Information" (I didn’t find other sources of information on this), the following views also require Oracle Diagnostic Pack. If this license cannot be acquired you can use the STATSPACK tables.

MGMT$METRIC_DETAILS: The MGMT$METRIC_DETAILS view displays a rolling 7 day window of individual metric samples. These are the metric values for the most recent sample that has been loaded into the Management Repository plus any earlier samples that have not been aggregated into hourly statistics.

MGMT$METRIC_CURRENT: The MGMT$METRIC_CURRENT view displays information on the most recent metric values that have been loaded into the Management Repository.

MGMT$METRIC_HOURLY: The MGMT$METRIC_HOURLY view displays metric statistics information that has been aggregated from the individual metric samples into hourly time periods. For example, if a metric is collected every 15 minutes, the 1 hour rollup would aggregate the 4 samples into a single hourly value by averaging the 4 individual samples together. The current hour of statistics may not be immediately available from this view. The timeliness of the information provided from this view is dependent on when the query against the view was executed and when the hourly rollup table was last refreshed.

MGMT$METRIC_DAILY: The MGMT$METRIC_DAILY view displays metric statistics that have been aggregated from the samples collected over the previous twenty-four hour time period. The timeliness of the information provided from this view is dependent on when the query against the view was executed and when the hourly rollup table was last refreshed.

MGMT$TARGET_TYPE:  MGMT$TARGET_TYPE displays metric descriptions for a given target name and target type. This information is available for the metrics for the managed targets that have been loaded into the Management Repository. Metrics are specific to the target type.

You can build reports as complex as you want. In this article I will show you some basic examples that you can take as a starting point. You can also read my article “Creación de un reporte simple usando Information Publisher Report”, where you will learn how to use Information Publisher to build nice reports.

List all the metrics available in Enterprise Manager Repository Views

With this query you can list all the metrics that you can use to build your reports. This query will return hundreds of rows, each row for one specific metric:

SELECT distinct metric_name,
metric_column,
metric_label,
metric_column
FROM MGMT$METRIC_DAILY
ORDER BY 1,2,3;

All the metrics for all the database targets

With this query you list all the metrics available for one specific type of target, in this case the type ‘oracle_database’:

SELECT t.target_name target_name,
       t.metric_name,
       m.metric_column metric_column,
       to_char(m.rollup_timestamp,'YYYY-MM-DD HH24') as TIME,
       sum(m.average/1024) as value
FROM   mgmt$metric_hourly M,
       mgmt$target_type T
WHERE  t.target_type='oracle_database'
       and m.target_guid=t.target_guid
       and m.metric_guid=t.metric_guid
GROUP BY  t.target_name,
          t.metric_name,
          m.metric_column,
          m.rollup_timestamp
ORDER BY 1,2,3;

Once you know which metrics are available to build reports, you can proceed to create a basic report.

Current value for the metric iombs_ps

Let’s start with something basic: learning the current value for one specific metric. In this example, we’ll learn the value of the metric “iombs_ps”, which is part of the category “instance_throughput”.

This query uses the view mgmt$metric_current:

SQL> SELECT t.target_name target_name,
     t.metric_name,
     m.metric_column metric_column,
     to_char(m.collection_timestamp,'YYYY-MM-DD HH24:MI') as TIME,
     m.value as value
FROM mgmt$metric_current M,
     mgmt$target_type T
WHERE t.target_type='oracle_database'
      and m.target_guid=t.target_guid
      and m.metric_guid=t.metric_guid
      and t.metric_name='instance_throughput'
      and t.metric_column='iombs_ps'
      ORDER BY 1,2,3;

TARGET_NAME  METRIC_NAME         METRIC_COLUMN TIME             VALUE
------------ ------------------- ------------- ---------------- --------
cloud1       instance_throughput iombs_ps      2017-08-20 20:32 378

Historic data for the metric iombs_ps per hour

Now I will use the historic data for the same metric for the last 24 hours and then I will build a chart with Google Chart to see the behavior of this metric over time. This query uses the view mgmt$metric_hourly.

SQL> SELECT t.target_name target_name,
            t.metric_name,
            m.metric_column metric_column,
            to_char(m.rollup_timestamp,'YYYY-MM-DD HH24') as TIME,
            sum(m.average/1024) as value
FROM        mgmt$metric_hourly M,
            mgmt$target_type T
WHERE       t.target_type='oracle_database'
            and m.target_guid=t.target_guid
            and m.metric_guid=t.metric_guid
            and t.metric_name='instance_throughput'
            and t.metric_column='iombs_ps'
GROUP BY t.target_name,
         t.metric_name,
         m.metric_column,
         m.rollup_timestamp
ORDER BY 1,2,3; 

TARGET_NAME  METRIC_NAME          METRIC_COLUMN   MONTH_TIMESTA VALUE
------------ -------------------- --------------- ------------- ----------
cloud1       instance_throughput  iombs_ps        2017-08-19 00 296
cloud1       instance_throughput  iombs_ps        2017-08-19 01 374
cloud1       instance_throughput  iombs_ps        2017-08-19 02 362
cloud1       instance_throughput  iombs_ps        2017-08-19 03 360
cloud1       instance_throughput  iombs_ps        2017-08-19 04 378
cloud1       instance_throughput  iombs_ps        2017-08-19 05 378
cloud1       instance_throughput  iombs_ps        2017-08-19 06 378
cloud1       instance_throughput  iombs_ps        2017-08-19 07 362
cloud1       instance_throughput  iombs_ps        2017-08-19 08 360
cloud1       instance_throughput  iombs_ps        2017-08-19 09 362
cloud1       instance_throughput  iombs_ps        2017-08-19 10 360
cloud1       instance_throughput  iombs_ps        2017-08-19 11 359
cloud1       instance_throughput  iombs_ps        2017-08-19 12 362
cloud1       instance_throughput  iombs_ps        2017-08-19 13 361
cloud1       instance_throughput  iombs_ps        2017-08-19 14 370
cloud1       instance_throughput  iombs_ps        2017-08-19 15 378
cloud1       instance_throughput  iombs_ps        2017-08-19 16 378
cloud1       instance_throughput  iombs_ps        2017-08-19 17 378
cloud1       instance_throughput  iombs_ps        2017-08-19 18 161
cloud1       instance_throughput  iombs_ps        2017-08-19 19 161
cloud1       instance_throughput  iombs_ps        2017-08-19 20 175
cloud1       instance_throughput  iombs_ps        2017-08-19 21 178
cloud1       instance_throughput  iombs_ps        2017-08-19 22 179
cloud1       instance_throughput  iombs_ps        2017-08-19 23 164
cloud1       instance_throughput  iombs_ps        2017-08-19 24 160

 

Now I will use Google Chart to chart the data. We can see that interpreting a graphic is easier than looking only at numbers. In this graphic we can see that something happened around 17:00 because the IO throughput decreased:

Historic data for the metric iombs_ps per day

Our last report example will use the view mgmt$metric_daily to create a report on the same metric, but daily. You can add more WHERE clauses to filter the period of time and also you can play with the values MAXIMUM and MINIMUM.

SQL> SELECT t.target_name target_name,
            t.metric_name,
            m.metric_column metric_column,
            to_char(m.rollup_timestamp,'YYYY-MM-DD') as TIME,
            sum(m.average/1024) as value
FROM        mgmt$metric_daily M,
            mgmt$target_type T
WHERE       t.target_type='oracle_database'
            and m.target_guid=t.target_guid
            and m.metric_guid=t.metric_guid
            and t.metric_name='instance_throughput'
            and t.metric_column='iombs_ps'
GROUP BY t.target_name, t.metric_name, m.metric_column, m.rollup_timestamp
ORDER BY 1,2,3; 

TARGET_NAME  METRIC_NAME          METRIC_COLUMN   MONTH_TIME VALUE
------------ -------------------- --------------- ---------- ----------
cloud1       instance_throughput  iombs_ps        2017-08-13 377
cloud1       instance_throughput  iombs_ps        2017-08-14 360
cloud1       instance_throughput  iombs_ps        2017-08-15 367
cloud1       instance_throughput  iombs_ps        2017-08-16 378
cloud1       instance_throughput  iombs_ps        2017-08-17 378
cloud1       instance_throughput  iombs_ps        2017-08-18 378
cloud1       instance_throughput  iombs_ps        2017-08-19 378
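As mentioned above, you can add WHERE clauses to narrow the time window and report the MAXIMUM and MINIMUM columns instead of the average; a hedged sketch based on the same query (the 30-day window is just an example):

SQL> SELECT t.target_name,
            to_char(m.rollup_timestamp,'YYYY-MM-DD') as time,
            min(m.minimum) as min_value,
            max(m.maximum) as max_value
     FROM   mgmt$metric_daily M,
            mgmt$target_type T
     WHERE  t.target_type='oracle_database'
            and m.target_guid=t.target_guid
            and m.metric_guid=t.metric_guid
            and t.metric_name='instance_throughput'
            and t.metric_column='iombs_ps'
            and m.rollup_timestamp >= sysdate - 30
     GROUP BY t.target_name, to_char(m.rollup_timestamp,'YYYY-MM-DD')
     ORDER BY 1,2;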

 


Conclusion

In this article I have shown you one more historic data source that you can use to understand the behavior of your business, through the hundreds of metrics that are available in the Enterprise Manager Repository Views. You have views to see the current value of the metrics, the hourly value, or the daily value, and you can play with values like the MAXIMUM in a day (or in an hour), MINIMUM, or AVERAGE. You can create very complex queries to analyze different problems across time, and then you can chart the data and get nice graphics that you can present to the board.

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala. He holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn 

Why Certifications Are Important


By Deiby Gómez

Introduction:

Ever since I started my career in Oracle technology I’ve always wanted to deliver the best support to my clients. I have wanted to solve the problems quickly. I am not afraid of new challenges, I am not afraid to start looking into a problem that I have never seen before; on the contrary, I am happy to look into unfamiliar problems because they are opportunities to learn. Following that approach, and to comply with my commitment with my clients, I started to look into Oracle certification program. I began to learn what Oracle University was about, and the paths to get certified.

I started my career with Oracle Database 11g. Then, because of the clients I was doing work for, I extended my knowledge to 10g and even 9i; the oldest version I worked with was 8i, though with only a few tickets on it. At the moment, the newest version of Oracle is 12c and all the certifications are already available for 12c. You can even get certified on a specific release, like OCP on 12cR2. I recommend that you get certified on the most recent versions of the technology you are interested in.

Anyhow, since I came into Oracle technology on 11g my path to get certified was the following:

 

So I worked hard to pass the following exams. This should give you an idea of the time it would take to progress through the certifications:

  • 1Z0-051: Oracle Database 11g SQL Fundamentals I – January 2011
  • 1Z0-052: Oracle Database 11g Administration I – March 2011
  • 1Z0-053: Oracle Database 11g: Administration II – May 2011
  • 1Z0-402: Enterprise Linux Fundamentals – May 2011
  • 1Z0-451: Oracle Service Oriented Architecture Foundation Practitioner – August 2012
  • 1Z0-027: Oracle Exadata X3 and X4 Administration – August 2013
  • 1Z0-058: Oracle RAC 11g Release 2 and Grid Infrastructure Administration – December 2013
  • 1Z0-060: Upgrade to Oracle Database 12c – February 2014
  • 1Z0-093: Oracle Database 11g Certified Master Exam (OCM) – February 2015
  • 1Z0-432: Oracle Real Application Clusters 12c Essentials – September 2015
  • 1Z0-029: Oracle Database 12c Certified Master Upgrade Exam– April 2016
  • 1Z0-066: Oracle Database 12c: Data Guard Administration – December 2016

Additionally, I became an Oracle ACE in 2013 and an Oracle ACE Director in 2015. I also was a technical reviewer of the book "Oracle Database 12c Release 2 Multitenant" and a co-author of the book "Oracle Database 12c Release 2 Testing Tools and Techniques for Performance and Scalability".

After all this hard work, I can tell you why certifications are important.

Of course, this is a personal opinion. At the beginning of my career I started getting certifications frequently in order to get a salary hike (like most people who are starting a career), but after two certifications I changed my thinking and started to enjoy the path, because it was aligned with what I wanted to deliver: to fix problems quickly and deliver excellence to my clients, which is the right approach. It's all about enjoying the journey!

When preparing for a certification, you have to build several environments, practice installations and different rman scenarios, test every Oracle database feature and ASM feature. You find errors, and investigate how to fix those errors. While investigating the problems you will read blogs, Metalink notes, whitepapers, Oracle Press books, Oracle University manuals and even videos on YouTube!  You will spend several hours and days in front of a computer practicing. You’ll study so hard that when you are in front of the computer actually taking the exam, it’s anticlimactic – just a set of some questions that you already know how to answer. You’ll feel like it’s a time sink to sit in front of that laptop taking the exam because you already know you’ve got the knowledge. Yes, you do have the knowledge, but you still have to pass the exam to prove it. And once that certification is in hand, it is proof of all the preparation and hard work that help you deliver better support to your clients. 

So the advantages I can highlight from the perspective of a consultant are:

  • Preparing for the exam increases your knowledge.
  • You get faster at fixing problems.
  • You face so many issues while practicing that sometimes just by hearing or seeing the symptoms you already know where the problem is.
  • You acquire friends and colleagues through forums, blogs and Oracle events around the world.
  • You can get better jobs.
  • You can deliver your clients a better quality of support.
  • Because of your credentials, you are sometimes invited to community projects (to be a speaker, co-author a book, help on a blog, contribute to an open source project, etc.).
  • Depending on where you are, yes, you may get that pay raise.
  • You get a profile in www.youracclaim.com
  • If you become an OCM you also get a special profile in Oracle OCMs list.
  • You get less stressed, because with the knowledge you’ve acquired preparing for certification there will be fewer things that you don’t know, and less reason to fear making errors.
  • Since your knowledge has increased, you also can help your colleagues.
  • You get respect from newbies. [:)]

And perhaps much more! But those are just the advantages for consultants. There’s another beneficiary of your certifications; namely, the company you are working for. I became part of Nuvola Consulting Group in 2016 and since then we’ve gotten several clients on board (YAY!). Still, I can tell you why certifications are important for organizations:

  • Companies promote your certifications to prove that they have good consultants.
  • For partnerships: when you are looking to partner with another company, the other company will look into your consultants and their certifications. 
  • Companies use your certifications to prove that they can work with a specific technology or product very well (Amazon AWS, Oracle DB, Tuning, SOA, etc.).
  • It’s better to hire certified consultants, because the risk that they make mistakes is lower than with consultants who don’t have certifications. Of course there are also consultants without certifications who have a lot of experience, but in those cases they have to demonstrate that experience from past performance, unless the person is well-known and very well recommended by others that we already know.
  • Companies can charge a higher hourly rate for support or consulting when the consultants are certified.
  • Having several certified consultants is very helpful when the company wants to get on board with a big prospective customer or get a very good contract. Generally large enterprises want companies with certified consultants to provide them services.
  • Having certified consultants helps a firm compete with other companies in the same industry.

In Guatemala, for example, the country where I am currently living, I have observed that certifications are more important to hiring companies in the IT industry than a bachelor’s degree. For non-IT companies it may be different, but in Latin American IT companies this is common. And over the years I have seen many students starting early in their college years and getting certified to increase their expertise in a single technology (let's say Java, etc.). I’m included in this group, because I started working with Oracle technology professionally before completing university. The IT industry wants people who are highly specialized in a single technology or product and ready to get involved in projects. 

Conclusion:

Certifications are important for consultants and also for the companies we work for. The industry wants specialized people. The IT industry is growing fast, with some of the largest companies in the world today being in IT, and they’re demanding certified people. This is an opportunity that you have to take advantage of: get certified!

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala. He holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn 

Prepare yourself for passing the Oracle Certified Master 12c requirements!


By Deiby Gómez

Introduction

In 2013 I began my preparation for becoming Oracle Certified Master (OCM) 11g. I was already OCP (Oracle Certified Professional) 11g and OCP 12c, so to get to the next level, I made myself a schedule of reading and practice. The OCM exam is no joke—it takes a lot of knowledge, as well as speed in working to solve problems, to pass it. Later in this article I’ll share my study and practice schedule, which you can use to prepare for the exam yourself.

But first, some background. There are three levels of Oracle Database Certifications (for any version 11g or 12c):

  • Associate (OCA)
  • Professional (OCP)
  • Master (OCM)

The Professional level requires Associate as a prerequisite, and Master requires Professional. So your certification quest starts at the Associate level. You have to pass two exams in order to achieve it. Next, you have to get your professional certification, which requires you to pass an exam and also take an Oracle University course. Once you are an OCP you can start your journey to become an OCM.

For OCM, you have to take two courses from Oracle University, and then you have to pass one more exam. This exam is different from the ones for the first two levels of certification, because it does not consist of multiple-choice questions and it is not delivered online like the exams for OCA and OCP. To find out where to take it, you need to look at the Oracle Certified Master Exam Worldwide Schedule. There are only a few countries where you can take this exam.

This exam is for real DBAs! It is 100% practice, rather than answering questions.

Basically you have to be prepared for anything and you have to do everything as fast as you can, because you have a limited time for each problem.

Above is the path to OCM for 11g. If you want to start directly toward certification in the 12c version, the path is as follows:

 

Some months after I passed the OCM 11g exam, the OCM 12c exam was released, so I decided to take it as well. When I was preparing my OCM 12c I created the following schedule, which you can use, too, for your own preparation.

I focused my preparation on two main areas: Knowledge and Speed.

Hours to develop knowledge

The hours I allotted to increasing my knowledge I spent reading everything I could about each topic: blogs, Metalink notes, forums, books, videos, etc. Within that time I also practiced every topic on a virtual machine, at least twice. For example, if the topic was “install database software”, I read everything about that topic and then I installed the software at least two times. During these hours I was also reading every single option of every single command :) Yes! It was fun. I also tried to memorize as much syntax as I could. Once I knew how to do everything related to a topic and had solid knowledge of the syntax and concepts, I moved on to the hours dedicated to getting faster.

 

Hours to increase speed

During these hours, I didn’t have to read any more because I already knew how to do the things I was focusing on. This was time I set aside to practice and practice and practice and, yes, practice. I tried to get as fast as I could.

So here is the schedule I used:

 

Each topic lists the hours I spent on reading and practice (Knowledge) / the hours I spent improving speed:

General Database and Network Administration: 40 / 14
  • Create and manage pluggable databases: 16 / 4
  • Administer users, roles, and privileges: 4 / 2
  • Configure the network environment to allow connections to multiple databases: 4 / 2
  • Administer database configuration files: 8 / 2
  • Configure shared server: 4 / 2
  • Manage network file directories: 4 / 2

Manage Database Availability: 60 / 18
  • Install the EM Cloud Control agent: 24 / 8
  • Configure recovery catalog: 8 / 2
  • Configure RMAN: 8 / 2
  • Perform a full database backup: 4 / 2
  • Configure and monitor Flashback Database: 16 / 4

Data Warehouse Management: 56 / 23
  • Manage database links: 4 / 2
  • Manage a fast refreshable materialized view: 16 / 4
  • Create a plug-in tablespace by using the transportable tablespace feature: 16 / 4
  • Optimize star queries: 4 / 2
  • Configure parallel execution: 4 / 2
  • Apply a patch: 4 / 2
  • Configure Automatic Data Optimization, In-Row Archiving, and Temporal Validity: 8 / 4
  • Manage external tables: 8 / 3

Data Management: 60 / 16
  • Manage additional buffer cache: 4 / 2
  • Optimize space usage for the LOB data: 8 / 2
  • Manage an encrypted tablespace: 8 / 2
  • Manage schema data: 8 / 2
  • Manage partitioned tables: 8 / 2
  • Set up fine-grained auditing: 8 / 2
  • Configure the database to retrieve all previous versions of the table rows: 16 / 4

Performance Management: 68 / 27
  • Configure the Resource Manager: 16 / 12
  • Tune SQL statements: 8 / 3
  • Use real application testing: 16 / 3
  • Manage SQL Plan baselines: 8 / 3
  • Capture performance statistics: 8 / 3
  • Tune an instance (configure and manage result cache, control CPU use for Oracle instances, configure and manage "In Memory" features): 12 / 3
  • Manage extended statistics: 8 / 2
  • Create and manage partitioned indexes: 8 / 2

Data Guard: 56 / 26
  • Administer a Data Guard environment: 12 / 4
  • Create a physical standby database: 16 / 8
  • Configure a standby database for testing: 4 / 4
  • Configure a standby database to apply redo: 8 / 2
  • Configure a standby database to use for reporting: 4 / 2
  • Configure fast start failover: 4 / 2
  • Manage extended statistics: 4 / 2
  • Manage DDL in a Data Guard environment: 4 / 2

Grid Infrastructure: 80 / 34
  • Install Oracle Grid Infrastructure: 16 / 8
  • Create ASM Disk Groups: 8 / 4
  • Create and manage ASM instances: 8 / 4
  • Configure ASM Cloud File System (ACFS): 8 / 4
  • Administer Oracle Clusterware: 16 / 6
  • Manage Flex Clusters and Flex ASM: 12 / 4
  • Manage Flex Clusters and Flex ASM: 12 / 4

Real Application Cluster Database: 40 / 9
  • Install Oracle Database software: 8 / 3
  • Create a Real Application Clusters (RAC) database: 8 / 2
  • Configure Database Services: 16 / 2
  • Administer Oracle RAC databases on one or more cluster nodes: 8 / 2

Using this schedule, I tried to practice four hours every day after my job, and I dedicated my weekends to this effort completely (16 hours) so I was able to get prepared in about three months. Depending on the time you have to commit to your own effort, your ‘mileage may vary’.

In addition to my schedule, you can also use the following books for your preparation. One of them is from Kamran Agayev, an 11g OCM and a good friend.

Oracle Certified Master 11g Study Guide by Kamran Aghayev.

 

OCM: Oracle Database 10g Administrator Certified Master Exam Guide by Nilesh Kakkad

Once you have passed your OCM exam you will receive a card like this:

 

 

Conclusion

Getting prepared for the OCM is not easy; it takes time. And without good preparation, you likely will not pass the exam. This exam is no joke, it is serious, and you should be well prepared in every area before scheduling it. In this article I’ve provided a preparation plan you can follow to get ready to take it and become an Oracle Certified Master. Best of luck!

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala. He holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

Solving Communication problems between DB and ASM instances


By Deiby Gómez 

Introduction

Most of the time I write how-to articles or introduce a new feature of Oracle Database. Those articles contain new information that is good to know and helps people fix issues or use a function or feature, but this time I am writing about a situation I actually faced. It’s good for readers and beginners to know those little details about how an issue was fixed, what daily work is like for another DBA, or just to read a funny story. In this article I will tell a story about a problem a customer had a long time ago; the root cause is not frequent (I hope!), but if we don’t understand the relevant concepts we could spend several hours trying to find a root cause that would be easy to identify when our concepts are solid.

Infrequent, but it can happen

A long time ago I received a call from a customer saying that there were some errors in the database instance. Interestingly, the database was executing DMLs properly without any issue. I asked the customer if these errors appeared only with one specific operation, such as an INSERT or a CREATE <something>, and he said that he was running a script received from the application team to create several tablespaces with their datafiles. When he ran the script he received the following errors:

ORA-01119: error in creating database file '+DATA'

ORA-17502: ksfdcre:4 Failed to create file +DATA

ORA-27300: OS system dependent operation:open failed with status: 2

ORA-27301: OS failure message: No such file or directory

ORA-27302: failure occurred at: sskgmsmr_7

First, you can see that the set of errors says there is a directory or file that doesn’t exist in the OS; on the other hand, it points to the ASM disk group, which in this case is “+DATA”. This is confusing, because either the file that the database is looking for is in ASM or it is in the OS. I did a quick check of the ASM instance and it was OK: there were no errors in the alert log and all the disks were healthy. On the database side, however, there seemed to be some issues, specifically with the CREATE TABLESPACE statements that the customer had in the script provided by the application team.

So, the clues were:

  • No issues with the ASM Instance
  • DMLs were being executed successfully in the database instance.
  • CREATE TABLESPACE statements fail in the database instance.
  • ASM and OS are both involved in a “file” or “directory” that doesn’t exist. 

With these four clues to go by, you should be on the right track if your concepts are solid. The root cause you would be thinking about involves the file that the database instance uses to communicate with the ASM instance. This file is named "ab_<ASM SID>.dat" and it is located in $ORACLE_HOME/dbs. You need to know that this file exists and what its function is. It rarely has issues or causes problems... but sometimes it happens.

Let’s define this file:

What is the "ab_<ASM SID>.dat" file? It is used by the database instance to send messages to the ASM instance. When the database instance needs to contact the ASM instance, it reads this file to find the information required to connect to the ASM instance. The file lives in $ORACLE_HOME/dbs. If it doesn't exist, the database will not be able to connect to the ASM instance for certain operations and you will receive an error. This file is important because several database instance operations depend on it.
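
As a quick sanity check from SQL*Plus on the database server, you can verify that the file is present in the database home. This is a minimal sketch; the ASM SID "+ASM" is an assumption (on RAC it is usually "+ASM1", "+ASM2", and so on):

SQL> host ls -l $ORACLE_HOME/dbs/ab_+ASM.dat

If the file is missing, ls reports "No such file or directory", which is exactly the OS failure message wrapped in the ORA-27301 error shown above.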

Some time ago I wrote an article with several tests showing which database statements require this file and which do not. You can read the details here.

The conclusion of that earlier article indicates:

  • Tablespace creation – required
  • Datafile creation – required
  • Table creation – not required
  • DML operations – not required
  • Drop tablespace – not required
  • Delete datafile – not required
  • Startup database instance – required
  • Shutdown database instance – not required

Well, taking that into account, to solve this customer’s issue, I listed all the files in $ORACLE_HOME/dbs and the root cause was confirmed: the file "ab_<ASM SID>.dat" did not exist in the directory. I asked the customer if he had moved the file somewhere else or deleted it, and he said that the day before the junior DBA had been “cleaning” logs and traces that were using space and that could be deleted. Apparently, one of those files that “could be deleted” was "ab_<ASM SID>.dat". As I said before, this situation happens rarely. Solving the problem is not a big deal; what we have to do is reboot the ASM instance, but in order to do that we have to reboot the database instance as well. After rebooting the ASM instance the file was recreated and the database was able to use it. The script that the customer had was executed successfully and all the CREATE TABLESPACE operations were successful.
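
For reference, here is a minimal sketch of that restart sequence on a single-instance environment (the shutdown mode is an assumption; in a RAC or Oracle Restart configuration you would typically drive this with srvctl instead):

-- From the database instance:
SQL> shutdown immediate

-- From the ASM instance (sqlplus / as sysasm from the Grid Infrastructure home):
SQL> shutdown immediate
SQL> startup

-- Back on the database instance:
SQL> startup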

Conclusion

Sometimes there are issues whose root cause is very rare, and in order to determine it quickly we have to have all our concepts solid; otherwise, we might spend several hours trying to figure out what’s going on, reading notes and so on.

In this case, it was very important to identify the clues. We had four clues here, and they pointed us to the right root cause. Sometimes the customer is stressed and under pressure and wants us to fix the problem fast, but DBAs have to stay calm: we have to extract the clues (the symptoms), think about the root cause, create a hypothesis, and work to prove it. To shorten diagnostic time, make sure you’re on solid ground conceptually, which you can do by practicing various scenarios while preparing for a certification.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala. He holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

Oracle Database 12cR2 new feature: Container Maps


By Deiby Gómez

Introduction:

Oracle introduced a cool new concept called “Application Containers” in 12cR2 (12.2.0.1.0). I have already written about this topic in the article “Introduction to Application Containers in Oracle Database 12cR2”, where you can find an introduction and a couple of examples. Since version 8.0, Oracle Database has had the partitioning feature, which helps you access data faster, and since then there have been several enhancements: new types of partitions, more objects that support partitioning, etc. In Oracle Database 12.1.0.2 Oracle introduced the “CONTAINERS” clause, a very useful clause that can be used to execute queries across several Pluggable Databases; you filter which PDB you want to get the data from with the CON_ID column. You can read more about the CONTAINERS clause in the articles “New CONTAINERS Clause in 12.1.0.2 - Common Perspective” and “New CONTAINERS Clause in 12.1.0.2 - Local Perspective”. The downside of the “CONTAINERS” clause is that you have to hard-code the value of the CON_ID column. If the CON_ID changes because of a PDB unplug and plug-in, you would be getting data from the wrong PDB; and if you remove the PDB, your queries will simply fail. There should be a way to use the “CONTAINERS” clause without hard-coding the CON_ID and, even better, why not combine it with partitioning? Basically, this is what Oracle was thinking when the following insight occurred:

What if we use Pluggable Databases as partitions?

What if the PDB name is used instead of the CON_ID?

Thanks to this insight, “Container Maps” was introduced in Oracle 12.2.0.1.0. Unfortunately, at present, “Container Maps” are not available to use with normal Pluggable Databases. “Container Maps” can be used only with Application Containers (Application Root + Application PDBs).  
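
For comparison, a query that uses the CONTAINERS clause directly has to hard-code the container ID, roughly as in the sketch below (the table name and the CON_ID value are hypothetical, used only for illustration):

SQL> select country, revenue from containers(c##dgomez.sales) where con_id = 3;

If the PDB holding that data is later unplugged and plugged back in under a different CON_ID, this query silently reads the wrong container; a Container Map avoids the problem by resolving the target Application PDB from the data itself.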

The illustration below shows how “Container Maps” work. In it, you see an end user executing a query and filtering the data by country=’GUATEMALA’. Internally, Oracle uses Application PDBs as partitions, where each Application PDB holds the data of a specific region (North, Central, South). After determining in which “partition” (Application PDB) the rows with country=’GUATEMALA’ are located, Oracle proceeds to query the table stored in that specific Application PDB; in this case, the Application PDB named “CENTRAL”. Of course, the table can also be partitioned as usual, using all the enhancements to Oracle partitioning up to version 12.2.0.1.0.

 

In the following example we will explain step-by-step how to use “Container Maps”.

Create an Application Root:

First, I will create an “Application Container”, an Application Root named “Nuvola”, and three “Application PDBs” named “NORTH”, “CENTRAL” and “SOUTH”. If you want to read more about Application Containers you can read my article Introduction to Application Containers in Oracle Database 12cR2.

Creating the Application Root:

SQL> create pluggable database Nuvola as application container admin user pdbadmin identified by Nuvola1; 

Pluggable database created.

SQL> alter pluggable database Nuvola open;

Pluggable database altered.

 

In order to create “Application PDB” you must be connected to the “Application Root”:

SQL> alter session set container=Nuvola;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
NUVOLA

 

Creating the Application PDB named “North”:

SQL> create pluggable database north admin user app1admin identified by Nuvola1;

Pluggable database created.

 

Creating the Application PDB named “Central”:

SQL> create pluggable database central admin user app1admin identified by Nuvola1;

Pluggable database created.

 

Creating the Application PDB named “South”:

SQL> create pluggable database south admin user app1admin identified by Nuvola1;

Pluggable database created.

 

Opening all the Application PDBs:

SQL> alter pluggable database all open;

Pluggable database altered.

 

Creating the container map table:

A container map is a simple table that records which “partitions” (Application PDBs) are used and which column is used to route the data; in this case, the column “country”. The type of partitioning used here is BY LIST. Note that the names of the “partitions” match exactly the names of the Application PDBs.

SQL> CREATE TABLE c##dgomez.containermap (
country VARCHAR2(30) NOT NULL)
PARTITION BY LIST (country) (
PARTITION north VALUES ('CANADA','USA'),
PARTITION central VALUES ('GUATEMALA','NICARAGUA'),
PARTITION south VALUES ('ARGENTINA','BRAZIL'));

Table created.

 

Now we set the “Application Root” to use the “Container Map”:

SQL> ALTER PLUGGABLE DATABASE SET CONTAINER_MAP='C##DGOMEZ.CONTAINERMAP'; 

Pluggable database altered.

  

Create an application with data

Now we will create an application and insert some data. This is just to show a couple of SELECT examples, so that you can see how the data is retrieved transparently from the “partitions” (Application PDBs) based on the column “country”.

Start to install the application:

SQL> ALTER PLUGGABLE DATABASE APPLICATION Application_Nuvola BEGIN INSTALL '1.0'; 

Pluggable database altered.

 

It is not mandatory to use “SHARING=METADATA”. I am using it because all I want to share among the Application PDBs is the metadata (the objects, without data). The data will be physically stored in each Application PDB.

SQL> CREATE TABLE c##dgomez.revenue SHARING=METADATA (
country VARCHAR2(30),
revenue number);

Table created.
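
For reference, the other sharing levels use the same syntax. As a hedged sketch (the table below is hypothetical and not part of this example), a table whose rows should be stored once in the Application Root and shared with all the Application PDBs would be created with SHARING=DATA instead:

SQL> CREATE TABLE c##dgomez.regions SHARING=DATA (
region VARCHAR2(30));

There is also SHARING=EXTENDED DATA, which shares the rows stored in the Application Root while still allowing each Application PDB to store its own additional rows.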

 

The following clauses are mandatory in order to use “Container Maps”:

SQL> ALTER TABLE c##dgomez.revenue ENABLE CONTAINER_MAP;

Table altered.

 

SQL>  ALTER TABLE c##dgomez.revenue ENABLE CONTAINERS_DEFAULT;

Table altered.

 

And finally, we will end the application installation:

SQL> ALTER PLUGGABLE DATABASE APPLICATION Application_Nuvola END INSTALL '1.0';

Pluggable database altered.

 

Verifying if the table is enabled to use Container Maps:

 We can double check whether the tables where the data will be stored are enabled to use Container Maps by querying the view DBA_TABLES and its new column “CONTAINER_MAP”:

SQL> select owner, table_name, CONTAINER_MAP from dba_tables where table_name='REVENUE';

OWNER      TABLE_NAME CONTAINER_MAP
---------- ---------- ---------------
C##DGOMEZ  REVENUE    YES

 

Inserting data to query using Container Map:

In order to complete our example, I will insert some data into each Application PDB. This is only to show how Container Maps work. Once the data is inserted, I will perform a couple of SELECT statements that automatically use the Container Map (in the next section of this article):

SQL> alter session set container=north;

Session altered.

SQL> ALTER PLUGGABLE DATABASE APPLICATION Application_Nuvola  SYNC;

Pluggable database altered.

SQL> insert into c##dgomez.revenue values ('CANADA',1000);

SQL> insert into c##dgomez.revenue values ('USA',2000);

SQL> commit; 

SQL>  alter session set container=central;

Session altered.

SQL> ALTER PLUGGABLE DATABASE APPLICATION Application_Nuvola SYNC;

Pluggable database altered.

SQL> insert into c##dgomez.revenue values ('GUATEMALA',3000);

SQL> insert into c##dgomez.revenue values ('NICARAGUA',4000);

SQL> commit;

SQL> alter session set container=south;

Session altered.

SQL> ALTER PLUGGABLE DATABASE APPLICATION Application_Nuvola SYNC;

Pluggable database altered. 

SQL> insert into c##dgomez.revenue values ('ARGENTINA',5000);

SQL> insert into c##dgomez.revenue values ('BRAZIL',6000);

SQL> commit;

 

Executing queries using PDBs as partitions:

Now, time for the magic. I will connect to the “Application Root” and from it I will execute two queries. You can see that the SELECT statements don’t filter by the CON_ID column or by the Application PDB name. We are just getting data from a simple table (C##DGOMEZ.REVENUE), but because the Container Map is enabled, Oracle determines in which “partition” (Application PDB) the value ‘GUATEMALA’ is stored and then queries the table C##DGOMEZ.REVENUE in that specific Application PDB.

SQL> alter session set container=nuvola;

Session altered.

SQL> select country, revenue from c##dgomez.revenue where country='GUATEMALA';

COUNTRY        REVENUE
-------------- ----------
GUATEMALA      3000

 

We can also use the country ‘CANADA’ and Oracle will apply the same mechanism:

SQL> select country, revenue from c##dgomez.revenue where country='CANADA';

COUNTRY     REVENUE
----------- ----------
CANADA      1000 

 

Conclusion:

In this article we saw a cool new concept that combines the CONTAINERS clause, partitioning, and Application Containers. DBAs and developers will be able to take advantage of Container Maps, particularly for reports that have to get data from several Application PDBs, without having to rewrite code or add new clauses to the SELECT statement, using Application PDBs as if they were partitions.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala. He holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn
