Oracle Database 12cR2 new feature: Lockdown Profiles


By Deiby Gómez

 

Introduction:

In the past, roles, system privileges, and table privileges were used to control the functionality available to database users. However, roles and privileges don’t have enough granularity to effectively restrict what work a user may do. For example, you can grant the privilege “ALTER SYSTEM” to a user, but with that you are allowing that user to change any database parameter. “ALTER SYSTEM” is not granular enough to let the user change some database parameters but not others. Even worse, there is no way to allow a user to set a specific database parameter to a range or list of values while disallowing other values. This functionality has been requested by DBAs for years, and finally Oracle has heard us.

Oracle has introduced several new features in its newest version, 12.2.0.1. One of the most important is “Lockdown Profiles”. Lockdown Profiles provide the granularity we were just talking about: with this feature you can enable and disable database functions, features, and options, and you can even specify a range or list of values that may be used.

 

About Lockdown Profiles creation

Lockdown Profiles can be created only in Container Databases, and you must be connected to CDB$ROOT. If you try to create a lockdown profile in a non-container database you will receive the following error:

SQL> CREATE LOCKDOWN PROFILE WANNACRY_PROFILE;
CREATE LOCKDOWN PROFILE WANNACRY_PROFILE
*
ERROR at line 1:
ORA-65090: operation only allowed in a container database

 

If you try to create a lockdown profile while connected to a PDB you will get the following error:

SQL> CREATE LOCKDOWN PROFILE WANNACRY_PROFILE;
CREATE LOCKDOWN PROFILE WANNACRY_PROFILE
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database

 

How to create a Lockdown Profile

Connect to CDB$ROOT:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

 

Execute the CREATE LOCKDOWN PROFILE statement:

SQL> CREATE LOCKDOWN PROFILE WANNACRY_PROFILE;

Lockdown Profile created.

 

Unfortunately, you cannot specify which functionality to enable or disable in the CREATE LOCKDOWN PROFILE statement itself. To do this, you have to use a separate ALTER LOCKDOWN PROFILE statement.

 

Enabling or disabling functionalities:

There are three types of functionality that you can enable or disable:

  • FEATURE: Allows you to enable or disable database features. The full list of features you can specify is in the Oracle documentation.
  • OPTION: The two options you can either enable or disable are “DATABASE QUEUING” and “PARTITIONING”.
  • STATEMENT: You can either enable or disable the statements “ALTER DATABASE”, “ALTER PLUGGABLE DATABASE”, “ALTER SESSION”, and “ALTER SYSTEM”. You can specify granular options along with these statements.

For all three, you can also use the ALL and EXCEPT clauses, which allow you to include or exclude a set of items instead of specifying them one by one.
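
For example, a rule that disables every feature except the network-access bundle could look roughly like this (a sketch; NETWORK_ACCESS is one of the documented feature bundles and is used here only as an illustration):

SQL> ALTER LOCKDOWN PROFILE WANNACRY_PROFILE DISABLE FEATURE ALL EXCEPT = ('NETWORK_ACCESS');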

In the following examples we will disable one feature, one option, and two statement rules.

The first rule disables changing the parameter “nls_date_format” through ALTER SYSTEM SET:

SQL>  ALTER LOCKDOWN PROFILE WANNACRY_PROFILE DISABLE STATEMENT = ('ALTER SYSTEM') CLAUSE = ('SET')  OPTION= ('nls_date_format');

Lockdown Profile altered.

 

The next example is similar to the previous one, but here we specify a minimum value and a maximum value. All the values within the range are allowed, while all the values outside of it are disallowed.

SQL> ALTER LOCKDOWN PROFILE WANNACRY_PROFILE DISABLE STATEMENT = ('ALTER SYSTEM') CLAUSE = ('SET') OPTION = ('parallel_max_servers') MINVALUE = '10' MAXVALUE = '39';

Lockdown Profile altered.

 

In the next example I am disabling the feature “COMMON_USER_CONNECT”. Disabling this feature prevents common users from connecting to pluggable databases directly; all common users must first connect to CDB$ROOT and then jump to the pluggable database.

SQL> ALTER LOCKDOWN PROFILE WANNACRY_PROFILE DISABLE FEATURE = ('COMMON_USER_CONNECT'); 

Lockdown Profile altered.

 

The last example disables the option “PARTITIONING”, which means I cannot use any operation that relies on partitioning.

SQL> ALTER LOCKDOWN PROFILE WANNACRY_PROFILE DISABLE OPTION = ('PARTITIONING'); 

Lockdown Profile altered.

 

Reviewing Lockdown Profiles information:

Once the lockdown profile has been created and you have enabled or disabled the required functionalities, you can review all the information using the view DBA_LOCKDOWN_PROFILES:

SQL> select rule_type, rule, clause, clause_option, option_value , min_value, max_value, status from DBA_LOCKDOWN_PROFILES where profile_name='WANNACRY_PROFILE' ;

RULE_TYPE  RULE                CLAUS    CLAUSE_OPTION        OPTION_VAL MIN MA STATUS
---------- ------------------- -------- -------------------- ---------- --- -- ----------
FEATURE    COMMON_USER_CONNECT DISABLE
OPTION     PARTITIONING        DISABLE
STATEMENT  ALTER SYSTEM        SET      NLS_DATE_FORMAT      MM-DD-YYYY         DISABLE
STATEMENT  ALTER SYSTEM        SET      PARALLEL_MAX_SERVERS            10  39  DISABLE

 

Enable Lockdown Profile:

As we have seen, I created the lockdown profile directly, without specifying whether I want it in one specific PDB or in all the PDBs; I just created it. Don’t worry about that: creating a lockdown profile doesn’t mean it is enabled by default. A lockdown profile works like a Database Resource Manager plan: you can create as many as you want, but only one is active at a time and it must be enabled explicitly, via a database parameter.

So far we have created the lockdown profile “WANNACRY_PROFILE” and we have customized it but we haven’t enabled it yet.  You can enable a lockdown profile in one specific PDB, in a set of them or in all PDBs. If you want to enable the lockdown profile in all the PDBs you have to be connected to CDB$ROOT and set the database parameter “pdb_lockdown” to the name of your lockdown profile; in this case, “WANNACRY_PROFILE”. If you want to enable the lockdown profile in a specific PDB, first you have to connect to the specific PDB and then you have to set the database parameter “pdb_lockdown”. 
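
For instance, enabling the profile for every PDB would look roughly like this from CDB$ROOT (a sketch; it is the same ALTER SYSTEM shown below for a single PDB, just executed while connected to the root):

SQL> alter system set pdb_lockdown='WANNACRY_PROFILE';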

In the following example we have a CDB called “db12c” with two PDBs, one named “PDB1” and the second one named “PDB2”. We will enable the lockdown profile “WANNACRY_PROFILE” only in “PDB1”.

Checking that the parameter is not set in any container:

SQL> select con_id, name, value from gv$system_parameter where name='pdb_lockdown';

CON_ID     NAME            VALUE
---------- --------------- ----------
0          pdb_lockdown

 

Connecting to “PDB1”:

SQL> show con_name

CON_NAME
------------------------------
PDB1

 

Set the database parameter pdb_lockdown:

SQL> alter system set pdb_lockdown='WANNACRY_PROFILE';

System altered.

 

Verifying that the parameter is set only in “PDB1” (CON_ID=3):

SQL> select con_id, name, value from gv$system_parameter where name='pdb_lockdown';

CON_ID     NAME            VALUE
---------- --------------- ------------------------------
0          pdb_lockdown
3          pdb_lockdown    WANNACRY_PROFILE

 

Confirming whether the functionalities were successfully disabled:

Testing to change the parameter nls_date_format:

Connecting to “PDB1”:

SQL> show con_name

CON_NAME
------------------------------
PDB1

 

I am using a common user with “alter system” privileges:

SQL> show user

USER is "C##DGOMEZ"

 

As you can see, even though the user has the “alter system” privilege, it is not allowed to change the database parameter because of the lockdown profile:

SQL> alter system set nls_date_format='mm-dd-yyyy' scope=spfile;
alter system set nls_date_format='mm-dd-yyyy' scope=spfile
*
ERROR at line 1:
ORA-01031: insufficient privileges

 

Testing the feature 'COMMON_USER_CONNECT'. Without the lockdown profile I was able to connect directly to a PDB with a common user; now it is no longer allowed because of the lockdown profile:

[oracle@nuvola2 ~]$ sqlplus c##dgomez/dgomez@192.168.1.22:1521/pdb1 

ERROR:
ORA-01017: invalid username/password; logon denied

Testing the parameter parallel_max_servers. The range we specified in the lockdown profile was [10,39]. As we explained before, all the values outside of this range are disallowed, while the values within it are allowed.

SQL> alter system set parallel_max_servers=9;
alter system set parallel_max_servers=9
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> alter system set parallel_max_servers=10;

System altered.

SQL> alter system set parallel_max_servers=39;

System altered.

SQL> alter system set parallel_max_servers=40;
alter system set parallel_max_servers=40
*
ERROR at line 1:
ORA-01031: insufficient privileges

 

How to drop a lockdown profile:

Dropping a lockdown profile is easy. You just have to execute the following statement from CDB$ROOT. You don’t have to reset or clean the parameter pdb_lockdown in the PDBs that are using the lockdown profile (although I strongly think it should not be this way). When you execute this statement, all the PDBs using the lockdown profile automatically stop applying its settings.

DROP LOCKDOWN PROFILE WANNACRY_PROFILE;
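
If you nevertheless want to clean up the parameter afterwards, resetting it from the affected container should work; a minimal sketch:

SQL> alter system reset pdb_lockdown;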

 

Conclusion:

In this article, I outlined the steps required to create a new lockdown profile, explained which kinds of functionality we can enable and disable, and provided several examples, with comments to help you quickly understand how to use lockdown profiles and take advantage of them. This is very important in an era where security is of utmost value and finer granularity is needed to restrict people to only those tasks necessary for their role.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides Consulting Services for Oracle Products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle Open World in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn


Oracle Database 12cR2 new feature: Proxy PDB


By Deiby Gómez

 

Introduction:

The need to communicate with external systems and exchange data drove Oracle to develop a way to connect to different Oracle databases and execute operations. Traditionally, whenever we wanted to bring data in from a different database, we used a database link. In 12.1.0.1.0, Oracle introduced the major new multitenant architecture. With Multitenant, a database can be either a Container Database or a non-Container Database, and if we decide to create a new database as a Container (CDB) we can create Pluggable Databases connected to that CDB. However, DBAs still needed to use database links to exchange data between the pluggable databases within a Container.

The newest version, Oracle Database 12.2.0.1.0, introduces a feature called “Proxy PDB”. A Proxy PDB is physically an empty PDB, with only the minimum required tablespaces (SYSTEM, SYSAUX, UNDO), created in one CDB and referencing a remote Pluggable Database in a different CDB. All the operations (DDL and DML) executed within the Proxy PDB are sent to the referenced Pluggable Database and executed remotely in it, except for ALTER PLUGGABLE DATABASE and ALTER DATABASE. This is why it is called a “Proxy”.

The benefit of a Proxy PDB is that it’s exactly as if the referenced PDB was in the local CDB, but the data is stored remotely and the operations are executed remotely in the referenced Pluggable Database. For instance, if we have Database Resource Manager active in the local CDB, the current Resource Manager Plan also applies to the Proxy PDB. Another example is the CONTAINERS clause, which allows retrieval of data from all the Pluggable Databases; this clause also works for a Proxy PDB. For all operations, the Proxy PDB will be seen as a normal PDB.

The image below sets up our example. It shows two containers, CDB1 and CDB2.  The remote container is shown at the top of the illustration: CDB1. The local CDB is shown at the bottom of the illustration: CDB2. Each container has two pluggable databases within it, designated as PDB1 and PDB2. The PDB2 in the local container is a Proxy PDB that references the PDB2 within CDB1.  

In the illustration we see a user connected to the CDB$ROOT of CDB2 who is executing a query using the CONTAINERS clause across all the PDBs that belong to CDB2. The data returned includes “Guatemala”, which is physically stored in the referenced PDB, that is, the PDB2 within CDB1. The row with the value “Guatemala” is returned because the query was sent to the referenced PDB and executed there. (The referenced PDB can be either a normal PDB or an application PDB. In this example the referenced PDB is a normal PDB.)

 

 

To create a Proxy PDB there are some prerequisites:

  • The CDB that contains the referenced PDB must be in local undo mode.
  • The CDB that contains the referenced PDB must be in ARCHIVELOG mode (a quick check is sketched after this list).
  • The referenced PDB must be in open read/write mode when the proxy PDB is created.
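
A minimal way to verify the ARCHIVELOG prerequisite on the remote CDB (a sketch; it should return ARCHIVELOG, and the local-undo and open-mode checks are shown later in this article):

SQL> select log_mode from v$database;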

We will go through the example presented in the above image. First I will connect to CDB1 and create the PDB1 and PDB2 Pluggable Databases, and then I will jump to CDB2 to create its PDB1 and then the Proxy PDB called PDB2. Once everything is completed I will perform the query with the CONTAINERS clause from CDB2, my local container.

 

Preparation in CDB1:

I will create the PDB1 and PDB2 in CDB1:

SQL> create pluggable database pdb1 admin user pdbadmin identified by nuvola;

Pluggable database created.

SQL> create pluggable database pdb2 admin user pdbadmin identified by nuvola;

Pluggable database created.

 

Opening PDB1 and PDB2:

SQL> alter pluggable database all open;

Pluggable database altered.

 

One of the prerequisites is that the referenced PDB is in read/write; in this example both are in read/write:

SQL> select name, open_mode from v$pdbs;

NAME       OPEN_MODE
---------- ----------
PDB$SEED   READ ONLY
PDB1       READ WRITE
PDB2       READ WRITE

 

Another prerequisite is that the user that connects to the referenced PDB has to be a common user:

SQL> select username, common from dba_users where username='C##DGOMEZ';

USERNAME   COM
---------- ---
C##DGOMEZ  YES

Another prerequisite is that the remote CDB, in this case CDB1, has to be configured with Local Undo:

SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME        PROPERTY_VALUE
-------------------- --------------------
LOCAL_UNDO_ENABLED   TRUE
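
If LOCAL_UNDO_ENABLED returned FALSE, local undo can be switched on; roughly (a sketch, and it requires bouncing the CDB in UPGRADE mode):

SQL> shutdown immediate;
SQL> startup upgrade;
SQL> alter database local undo on;
SQL> shutdown immediate;
SQL> startup;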

In the previous image, you can see that each container has the country table with one row inserted. I will now create the table and insert the corresponding row in PDB1 and PDB2 of CDB1 so that the environment matches the image:

SQL> alter session set container=pdb1;

Session altered.

SQL> create table c##dgomez.country (name varchar2(25));

Table created.

SQL> insert into c##dgomez.country values ('Brazil');

1 row created.

SQL> commit;

Commit complete.

 

The PDB2 of CDB1 will be our referenced PDB. In the image you can see that the value in the referenced PDB is “Guatemala”:

SQL> alter session set container=pdb2;

Session altered.

SQL> create table c##dgomez.country (name varchar2(25));

Table created.

SQL> insert into c##dgomez.country values ('Guatemala');

1 row created.

SQL> commit;

Commit complete.

 

The work in CDB1 is done. Two PDBs were created, the table was created and the rows were inserted. Now it’s time to configure CDB2 and create the Proxy PDB.

 

Preparation in CDB2:

We will start from the CDB$ROOT:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

 

I will create a common user in order to perform the example with the CONTAINERS clause. For more information about the CONTAINERS clause you can read my article “New CONTAINERS Clause in 12.1.0.2 - Common Perspective”.

SQL> create user c##dgomez identified by nuvola container=all;

User created.

SQL> grant connect, resource, unlimited tablespace to c##dgomez container=all;

Grant succeeded.

 

I will create the same table in CDB$ROOT in CDB2 and insert a row in order to follow the example in the image:

SQL> create table c##dgomez.country (name varchar2(25));

Table created.

SQL> insert into c##dgomez.country values ('USA');

1 row created. 

SQL> commit;

Commit complete.

 

Creating the PDB1 in CDB2:

SQL> create pluggable database pdb1 admin user pdbadmin identified by nuvola;

Pluggable database created.

 

Opening the PDB1 of CDB2:

SQL> alter pluggable database pdb1 open;

Pluggable database altered.

 

Creating the table country in the PDB1 of CDB2:

SQL> alter session set container=pdb1;

Session altered.

SQL> create table c##dgomez.country (name varchar2(25));

Table created.

SQL> insert into c##dgomez.country values ('Canada');

1 row created. 

SQL> commit; 

Commit complete.

 

Creation of “Proxy PDB”:

Well, so far everything we have done has only built the environment for the example in the image shown at the beginning of this article. We have not yet seen how a Proxy PDB works; I have only provided concepts and some prerequisites. The next statement creates a database link in the CDB$ROOT of CDB2. The database link is required only at the time of the Proxy PDB creation. Once the Proxy PDB has been created, the database link is no longer required; the Proxy PDB connects directly to the referenced PDB without using the database link.

Note that the database link connects directly as a common user to the PDB that will be the referenced PDB, in this case PDB2 of CDB1.

SQL> CREATE DATABASE LINK link_to_pdb2_in_cdb1 CONNECT TO c##dgomez IDENTIFIED BY nuvola USING '192.168.1.22:1521/pdb2';

Database link created.

 

Note that the database link uses the common user in CDB1; this was one of the prerequisites I mentioned before. The database link connects to the PDB2 in CDB1, since this will be our referenced PDB.

Once the database link is created, the next step is to create the Proxy PDB.

SQL> create pluggable database pdb2 AS PROXY FROM pdb2@link_to_pdb2_in_cdb1;

Pluggable database created.

 

And that’s it! The Proxy PDB was created successfully. I will proceed to open it in read/write to start using it:

SQL> alter pluggable database pdb2 open; 

Pluggable database altered.

 

Now it’s time to test how Proxy PDB works! Since the example in this article is based on the CONTAINERS clause, I will connect to the CDB$ROOT of CDB2 using password authentication and execute a query:

[oracle@nuvola2 ~]$ sqlplus c##dgomez/nuvola@'192.168.1.22:1521/cdb2'

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

 

Note that the query from CDB$ROOT of CDB2 returns the value “Guatemala”; this is because of the Proxy PDB. The value “Guatemala” is not stored in the PDB2 of CDB2 (the Proxy PDB) but, as I said before, the Proxy PDB behaves transparently for all DDL and DML, as if a normal PDB were there.

SQL> select name from containers(c##dgomez.country);

NAME
-------------------------
USA
Canada
Guatemala

There is a limitation on Proxy PDBs: they don’t support OS authentication. If you log in to CDB2 with OS authentication and try to run a query involving the PDB2, the rows stored in the referenced PDB will be missing. This is because the Proxy PDB is not able to connect to the referenced PDB and get the data from it. Proxy PDBs support only password authentication.

[oracle@nuvola2 ~]$  sqlplus  / as sysdba

SQL> show con_name

CON_NAME
-----------------------------
CDB$ROOT

SQL> select name from containers(c##dgomez.country);

NAME
-------------------------
USA
Canada 

If we connect with OS authentication to the PDB2 in CDB2 and we try to execute a query, the query will fail, saying that the password used is not correct. Of course, we know that there was not a password provided since we used OS authentication.

[oracle@nuvola2 ~]$ sqlplus / as sysdba 

SQL> alter session set container=pdb2; 

Session altered.

SQL> select * from c##dgomez.country;

select * from c##dgomez.country
                        *
ERROR at line 1:
ORA-01017: invalid username/password; logon denied
ORA-02063: preceding line from PROXYPDB$DBLINK

 

When we use password authentication, the SELECT works well:

[oracle@nuvola2 ~]$ sqlplus c##dgomez/nuvola@'192.168.1.22:1521/cdb2'

SQL>  alter session set container=pdb2;

Session altered.

SQL> select * from c##dgomez.country;

NAME
-------------------------
Guatemala

Now I will test an INSERT operation in the Proxy PDB, but since it is a Proxy, the operation will be executed in the referenced PDB, which means that the row will be stored in the referenced PDB:

SQL> insert into c##dgomez.country values ('Costa Rica');

1 row created.

SQL> commit;

Commit complete.

 

In PDB2 of CDB1, I will verify if the row was inserted there:

SQL> select name from v$database;

NAME
---------
DB12C

SQL> alter session set container=pdb2; 

Session altered.

SQL> select * from c##dgomez.country;

NAME
-------------------------
Guatemala
Costa Rica

This confirms that the Proxy PDB sends SELECTs and also INSERTS (DDLs+DMLs) to be processed inside the referenced PDB.

 

Conclusion:

We have seen that a Proxy PDB is a special PDB that receives operations (DDL and DML) in a local CDB but sends them all to its referenced PDB, where they are processed remotely. This brings us the advantage of “Location Transparency”. Location transparency means that it doesn’t matter where the data is physically located; with Proxy PDBs, we can present a PDB in other CDBs as if the PDB that physically stores the data were there, and all the operations are processed remotely. Data can be used everywhere in 12.2.0.1.0 without actually having the data physically at every site.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides Consulting Services for Oracle Products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle Open World in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

Oracle Database 12cR2 new feature: Application Root Replica


By Deiby Gómez

 

Introduction:

In my previous articles we have seen concepts like “Application Containers” and “Proxy PDB”, which are new in Oracle Database 12cR2. With Application Containers, you can install applications in an Application Root and synchronize the application (its metadata and, depending on the sharing mode, its data) to Application PDBs. On the other hand, a Proxy PDB provides location transparency; this is useful when we want to access data or objects remotely from another Container Database (CDB). An advantage of a Proxy PDB is that we don’t have to copy all the data to the remote CDB in order to access the objects and their data; however, this is also a disadvantage. If something goes wrong with the Application Root in the master Application Container, all the remote Proxy PDBs in other CDBs will be broken. To avoid this, we would probably want to have a physical replica of all the objects and data in another remote Container Database. Here is where a new feature called “Application Root Replica”, also introduced in 12.2.0.1.0, is helpful.

Application Root Replica is a physical replica of a master Application Root but in another remote Container Database. This lets us synchronize applications in an Application Container across different and remote Container Databases without using solutions like RMAN, Data Pump, or remote cloning. 

There are two methods to create an Application Root Replica:

  1. Create an empty application container and then synchronize the application.
  2. Clone the master application root.

In this article, I will show you a use-case example.

 

Preparation of the Environment:

With these steps I will create the environment described in the following image. I already have the two Container Databases, CDB1 and CDB2. So I will start by creating the Application Root “AppRoot” and the Application PDB “AppPDB1” in CDB1. I will create an application in “AppRoot” and I will sync that application to “AppPDB1”.  Then I will create the Application Root “AppRoot2” and the Application PDB “AppPDB2” in CDB2.

 

Creating an Application Root named “AppRoot”:

SQL> create pluggable database AppRoot as application container admin user pdbadmin identified by nuvola;

Pluggable database created.

SQL> alter pluggable database AppRoot open;

Pluggable database altered.

 

Creating the Application PDB named “AppPDB1”:

SQL> alter session set container=AppRoot;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
APPROOT

 

SQL> create pluggable database AppPDB1 admin user pdbadmin identified by nuvola; 

Pluggable database created.

SQL>  alter pluggable database AppPDB1 open;

Pluggable database altered.

 

Installing the application named “MyApp” in the Application Root “AppRoot” in CDB1:

 

SQL> alter pluggable database application MyApp begin install '1.0';

Pluggable database altered.

SQL> create table c##dgomez.dataLinkedTable SHARING=DATA   (name varchar2(20));

Table created.

SQL> insert into c##dgomez.dataLinkedTable values ('Guatemala');

1 row created.

SQL> commit;

Commit complete.

SQL> alter pluggable database application MyApp end install '1.0';

Pluggable database altered.

 

Synchronizing the Application PDB “AppPDB1”:

SQL> alter session set container=AppPDB1;

Session altered.

SQL> alter pluggable database application MyApp sync;

Pluggable database altered.

 

Confirming that the table and data were synchronized:

SQL>  select * from c##dgomez.dataLinkedTable;

NAME
--------------------
Guatemala

 

In the Container Database “CDB2” I will create the Application Root named “AppRoot2”:

SQL> create pluggable database AppRoot2 as application container admin user pdbadmin identified by nuvola;

Pluggable database created.

SQL> alter pluggable database AppRoot2 open;

Pluggable database altered.

 

Creating the Application PDB “AppPDB2” in CDB2:

SQL> alter session set container=AppRoot2;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
APPROOT2

 

SQL> create pluggable database AppPDB2 admin user pdbadmin identified by nuvola;

Pluggable database created.

SQL>  alter pluggable database AppPDB2 open;

Pluggable database altered.

 

Confirming that the table c##dgomez.dataLinkedTable doesn’t exist in “AppPDB2”. This is just to confirm that the environment we have created matches the previous image.

 

SQL> alter session set container=AppPDB2;

Session altered.

SQL> select * from c##dgomez.dataLinkedTable;

select * from c##dgomez.dataLinkedTable
                        *
ERROR at line 1:
ORA-00942: table or view does not exist

 

The problem:

At this time we have two CDBs. One, CDB1, has an Application Container with one application installed. However, I also want that application in the Application Container that has already been created in CDB2, and I would like to be able to synchronize all the data whenever the “master” application receives any change. In the past, we would have used a full backup and restore with RMAN, or perhaps an export and import with Data Pump, or even a materialized view. In 12.1.0.2.0 we would use “Remote PDB Cloning”. However, none of these solutions is ideal.

The solution:

The best solution to this problem is an “Application Root Replica”. An Application Root Replica is a physical replica of one Application Root in another CDB. In this case our master Application Root is “AppRoot” in CDB1, and the Application Root Replica is “AppRoot2” in CDB2. The Application Root Replica uses a Proxy PDB to synchronize the data with the master Application Root. In the following image you can see that the Proxy PDB is created in CDB1; this is because the Proxy PDB is seen as a normal PDB in the Application Container in CDB1, which means that the Proxy PDB will get the data (via synchronization) from the master Application Root. Since the referenced PDB of that Proxy PDB is “AppRoot2”, it is as if “AppRoot2” were physically located in CDB1. This is the concept of a Proxy PDB, and this is how “AppRoot2” can get all the data from “AppRoot”. Once the Application Root “AppRoot2” gets synchronized with the Application Root “AppRoot” through the Proxy PDB, we will have to synchronize the Application PDB “AppPDB2” in CDB2.

 

 

In the Application Root “AppRoot” in CDB1:

SQL> alter session set container=AppRoot; 

Session altered.

SQL> show con_name

CON_NAME
------------------------------
APPROOT

 

Since the Proxy PDB needs a database link, I will create it first:

SQL>  CREATE DATABASE LINK link_to_AppRoot CONNECT TO c##dgomez IDENTIFIED BY nuvola USING '192.168.1.22:1521/approot2';

Database link created.

 

Note that the database link connects to the Application Root “AppRoot2” in CDB2.

Creating the Proxy PDB in CDB1:

SQL> create pluggable database ProxyPDB AS PROXY FROM approot2@link_to_AppRoot;

Pluggable database created.

SQL> alter pluggable database ProxyPDB open;

Pluggable database altered.

 

Unfortunately Proxy PDB doesn’t support OS Authentication, so I have to open a session to “ProxyPDB” in CDB1 using password authentication:

[oracle@nuvola2 apex]$ sqlplus sys/manager1@'192.168.1.22:1521/ProxyPDB' as sysdba

 

The following step will synchronize the “Proxy PDB”, which automatically will fill up the “Application Root Replica” called “AppRoot2” in CDB2:

SQL> alter pluggable database application MyApp sync;

Pluggable database altered.

 

If we connect to the application root replica “AppRoot2” in CDB2 we will see that the application is there as well as its data, physically.

SQL> show con_name

CON_NAME
------------------------------
APPROOT2

 

SQL> select app_name, app_version from dba_app_versions where app_name='MYAPP';

APP_NAME             APP_VERSION
-------------------- ------------------------------
MYAPP                1.0

 

So the application “MyApp” has been synchronized to the Application Root [Replica] “AppRoot2”. It’s time to synchronize all the Application PDBs in the Application Container in CDB2: 

SQL> alter session set container=AppPDB2;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
APPPDB2

 

SQL> alter pluggable database application MyApp sync;

Pluggable database altered.

 

We can confirm that the application “MyApp” was successfully replicated from AppRoot to ProxyPDB in CDB1, from ProxyPDB in CDB1 to AppRoot2 in CDB2, and from AppRoot2 to AppPDB2 in CDB2:

SQL> select * from c##dgomez.dataLinkedTable;

NAME
--------------------
Guatemala

 

From now on, we only have to keep performing “SYNC” operations to replicate the data across the whole configuration involving both Container Databases.

 

Conclusion:

We have seen in this article how to synchronize application data in an Application Container across Container Databases without using backup and recovery operations with RMAN, export and import with Data Pump, or remote PDB cloning. When we are working with Application Containers, both the Proxy PDB and the Application Root Replica are useful for replicating our installed applications to other Container Databases.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides Consulting Services for Oracle Products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle Open World in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

How to run SQL Statements across Pluggable Databases with catcon.pl


Introduction:

Beginning with Oracle Database 12.1.0.1.0, DBAs started to work with Pluggable Databases. There were large migrations of databases from 10g/11g to 12c in which they were consolidated into a new Container Database with several Pluggable Databases. However, running operations in several Pluggable Databases became a problem, since people had to log in to every Pluggable Database and run the required script or SQL statement there. To save people from spending too much time on this kind of work, Oracle introduced the Perl script “catcon.pl”. Basically, catcon.pl receives either a script or the text of a SQL statement and executes it in the Pluggable Databases that we specify, even in PDB$SEED and CDB$ROOT, depending on which flags of catcon.pl are used. In the following image we see a script received by catcon.pl; catcon.pl executes it in the Pluggable Databases, and also in CDB$ROOT and PDB$SEED when the flag “-S” is not used.

 

Using catcon.pl considerably reduces the time spent running scripts across several databases. One of its advantages is that you can filter the pluggable databases where the script or SQL statement is executed, using “-C” to exclude pluggable databases and “-c” to include them. You can also control the order of the pluggable databases in which the script or SQL statement is executed.
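
For example, to run a script in every PDB except PDB2, an invocation like the following could be used (a sketch patterned on the inclusion example shown later in this article; “-C” takes the exclusion list, and the order in which names are listed with “-c” also defines the execution order):

[oracle@nuvola2 admin]$ $ORACLE_HOME/perl/bin/perl catcon.pl -d /home/oracle -l /home/oracle/catcon_logs -S -C 'PDB2' -b catcon-example script.sql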

In this article we will use the environment described in the previous image. I will start by creating the three pluggable databases and the scripts that will be executed across the PDBs:

SQL> create pluggable database PDB1 admin user pdbadmin identified by nuvola; 

Pluggable database created.

SQL> create pluggable database PDB2 admin user pdbadmin identified by nuvola;

Pluggable database created.

SQL> create pluggable database PDB3 admin user pdbadmin identified by nuvola;

Pluggable database created.

SQL> alter pluggable database all open;

Pluggable database altered.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           READ WRITE NO
         5 PDB3                           READ WRITE NO

 

Creating the Script #1:

The following script contains a CREATE TABLE statement, an INSERT statement, a commit and a SELECT statement. All these operations use the same table, C##DGOMEZ.COUNTRY.

[oracle@nuvola2 ~]$ pwd

/home/oracle

 

[oracle@nuvola2 ~]$ vi script.sql

[oracle@nuvola2 ~]$ cat script.sql

show con_name;

create table c##dgomez.country (name varchar2(20));

insert into c##dgomez.country values ('Guatemala');

commit;

select * from c##dgomez.country ;

[oracle@nuvola2 admin]$

 

Creating the Script #2:

This script doesn’t create any table; instead, it only inserts rows in the table C##DGOMEZ.COUNTRY

[oracle@nuvola2 admin]$ cat /home/oracle/script2.sql

insert into c##dgomez.country values ('Canada');

commit;

[oracle@nuvola2 admin]$

 

Running catcon.pl without “-S” flag:

When the flag “-S” is not used, catcon.pl executes the script or SQL statement in all the containers, including CDB$ROOT and PDB$SEED. In addition, all the objects created by catcon.pl are marked as “ORACLE_MAINTAINED”, which means they are owned by Oracle and cannot be modified by a regular database user. I don’t recommend using this method to create objects for the business or our application schemas; it is meant for scripts for patching, migration, or any other task that touches the data dictionary or other Oracle-owned structures.

Moving to the directory where catcon.pl is located:

[oracle@nuvola2 ~]$ cd $ORACLE_HOME/rdbms/admin

 

Executing catcon.pl: the flag “-d” specifies where the script is located, the flag “-l” specifies the directory where the logs will be created, the flag “-b” specifies the prefix for the log file names, and the final value is the name of the script that catcon.pl will execute.

[oracle@nuvola2 admin]$  $ORACLE_HOME/perl/bin/perl catcon.pl -d /home/oracle -l /home/oracle/catcon_logs -b catcon-example script.sql

 

As you can see, the script was executed and it created the objects as “ORACLE_MAINTAINED”. The script was executed in CDB$ROOT and also in PDB$SEED. In this example, the script failed in PDB$SEED because the schema c##dgomez didn’t exist within the PDB, and catcon.pl couldn’t create the table.

SQL> select con_id, owner, object_name, object_type, ORACLE_MAINTAINED from cdb_objects where owner='C##DGOMEZ';

    CON_ID OWNER      OBJECT_NAM OBJECT_TYP ORACLE_MAINTAIN
---------- ---------- ---------- ---------- ---------------
         1 C##DGOMEZ  COUNTRY    TABLE      Y
         3 C##DGOMEZ  COUNTRY    TABLE      Y
         4 C##DGOMEZ  COUNTRY    TABLE      Y
         5 C##DGOMEZ  COUNTRY    TABLE      Y

 

Running catcon.pl with “-S” flag

I recommend using this flag when you are running a script or SQL statement that creates objects for your business application schemas, like Script #1 or Script #2 in this article; in other words, when you are running operations not related to patching, upgrades, or the data dictionary. When the flag “-S” is used, catcon.pl doesn’t execute the script in CDB$ROOT or in PDB$SEED.

[oracle@nuvola2 ~]$ cd $ORACLE_HOME/rdbms/admin

[oracle@nuvola2 admin]$  $ORACLE_HOME/perl/bin/perl catcon.pl -d /home/oracle -l /home/oracle/catcon_logs -S  -b catcon-example script.sql

catcon: ALL catcon-related output will be written to [/home/oracle/catcon_logs/catcon-example_catcon_26297.lst]

catcon: See [/home/oracle/catcon_logs/catcon-example*.log] files for output generated by scripts

catcon: See [/home/oracle/catcon_logs/catcon-example_*.lst] files for spool files, if any

catcon.pl: completed successfully

[oracle@nuvola2 admin]$

 

The logs will be generated in the directory “/home/oracle/catcon_logs” with the prefix “catcon-example” as it was specified:

[oracle@nuvola2 admin]$ ls -ltr /home/oracle/catcon_logs/

total 12

-rw-r--r-- 1 oracle oinstall  419 May  7 05:57 catcon-example_catcon_26297.lst

-rw-r--r-- 1 oracle oinstall 3371 May  7 05:58 catcon-example0.log

-rw-r--r-- 1 oracle oinstall 1922 May  7 05:58 catcon-example1.log

[oracle@nuvola2 admin]$

 

The script was executed only in the pluggable databases. It was not executed in CDB$ROOT or PDB$SEED, and the table was created as non-Oracle-maintained:

SQL> select con_id, owner, object_name, object_type, ORACLE_MAINTAINED from cdb_objects where owner='C##DGOMEZ';

    CON_ID OWNER      OBJECT_NAM OBJECT_TYP ORACLE_MAINTAINED
---------- ---------- ---------- ---------- -----------------
         3 C##DGOMEZ  COUNTRY    TABLE      N
         4 C##DGOMEZ  COUNTRY    TABLE      N
         5 C##DGOMEZ  COUNTRY    TABLE      N

 

We can verify that the table was created and the rows inserted in every PDB:

SQL> select con_id, name from containers(C##DGOMEZ.COUNTRY) ;

    CON_ID NAME
---------- --------------------
         1 Guatemala
         3 Guatemala
         4 Guatemala
         5 Guatemala

 

NOTE: I manually created the table in CDB$ROOT, just to make the CONTAINERS clause work.
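
For reference, the manual creation in CDB$ROOT looked roughly like this (a sketch; it simply repeats the statements from script.sql while connected to the root):

SQL> create table c##dgomez.country (name varchar2(20));
SQL> insert into c##dgomez.country values ('Guatemala');
SQL> commit;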

In the following example I am using the flag “-c”, which is useful when we want inclusion; we provide the list of the PDBs where the script will be executed. In this example, the script will be executed only in PDB1 and PDB3, and I will use Script #2, which performs only an INSERT operation.

[oracle@nuvola2 admin]$ $ORACLE_HOME/perl/bin/perl catcon.pl -d /home/oracle -l /home/oracle/catcon_logs -S -c 'PDB1 PDB3' -b catcon-example script2.sql

catcon: ALL catcon-related output will be written to [/home/oracle/catcon_logs/catcon-example_catcon_27384.lst]

catcon: See [/home/oracle/catcon_logs/catcon-example*.log] files for output generated by scripts

catcon: See [/home/oracle/catcon_logs/catcon-example_*.lst] files for spool files, if any

catcon.pl: completed successfully

[oracle@nuvola2 admin]$

 

We can verify that the script was executed only in PDB1 and PDB3 by querying the table c##dgomez.country:

[oracle@nuvola2 admin]$ sqlplus / as sysdba

SQL> select con_id, name from containers(C##DGOMEZ.COUNTRY) ;

 

    CON_ID NAME
---------- --------------------
         1 Guatemala
         3 Guatemala
         3 Canada
         4 Guatemala
         5 Guatemala
         5 Canada

6 rows selected.

 

Conclusion:

When the multitenant architecture was introduced, the Perl script catcon.pl was also introduced to help run scripts in multiple pluggable databases. In this article we saw examples using different flags of catcon.pl, such as the flags to include or exclude PDBs (which also let you control the order in which the PDBs are processed) and the flag that determines whether a script runs in CDB$ROOT and PDB$SEED as an Oracle-maintained operation or only in the PDBs to create objects for our application schemas. The Perl script catcon.pl is certainly useful for avoiding the waste of time of executing the same task in every PDB one by one.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides Consulting Services for Oracle Products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle Open World in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

Oracle 12cR2 RMAN New Feature: UNTIL AVAILABLE REDO


By Deiby Gómez

Introduction:

Oracle has introduced several new features in its new version, Oracle Database 12.2.0.1.0, and RMAN is no exception. Most DBAs would agree that one of the difficult tasks whenever a database needs to be restored is calculating the SCN or the sequence to use in the “RECOVER DATABASE UNTIL (…)” operation, in order to apply as many archived logs as possible and recover as much data as possible. Every DBA has a different method to find the target SCN or the target sequence: some use the “PREVIEW” clause, others the view v$log, others the RMAN “LIST” commands, and so on. The problem is that when the calculation is not correct, and the database being restored is huge (let’s say 8 TB), an error in the “RECOVER” phase might force us to restore the whole database from scratch. Oracle Database 12.2.0.1.0 introduces the clause “UNTIL AVAILABLE REDO”. As its name indicates, this clause makes all the required calculations to recover the database up to the last available archived log. This is a really cool feature, since all the DBA has to do is catalog all the available archived logs and use “UNTIL AVAILABLE REDO” in the “RECOVER DATABASE” phase, and Oracle will do all the work. This also lets us avoid human error in the calculations.

In order to show how this feature works I will use an empty database with the table DGOMEZ.COUNTRY; currently it has no rows.  This database is in archivelog mode.

 

Performing a backup:

RMAN> backup database;

Starting backup at 07-MAY-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=53 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/others/db1/DB1/datafile/o1_mf_system_djyxzjxt_.dbf
input datafile file number=00003 name=/others/db1/DB1/datafile/o1_mf_sysaux_djyy0ynm_.dbf
input datafile file number=00004 name=/others/db1/DB1/datafile/o1_mf_undotbs1_djyy23sy_.dbf
input datafile file number=00007 name=/others/db1/DB1/datafile/o1_mf_users_djyy24y4_.dbf
channel ORA_DISK_1: starting piece 1 at 07-MAY-17
channel ORA_DISK_1: finished piece 1 at 07-MAY-17
piece handle=/others/db1/fra/DB1/backupset/2017_05_07/o1_mf_nnndf_TAG20170507T155509_djyywznq_.bkp tag=TAG20170507T155509 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:38
Finished backup at 07-MAY-17

Starting Control File and SPFILE Autobackup at 07-MAY-17
piece handle=/others/db1/fra/DB1/autobackup/2017_05_07/o1_mf_s_943372550_djyyy6vo_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 07-MAY-17

I will insert a row with the value ‘Guatemala’ into the table, the row will be committed and a new archived log will be generated:

SQL> insert into dgomez.country values ('Guatemala');

1 row created.

SQL> commit;

Commit complete.

SQL> alter system switch logfile;

System altered.

 

A second row with the value ‘Canada’ will be inserted into the table, the row will be committed and a new archived log will be generated:

SQL> insert into dgomez.country values ('Canada');

1 row created.

SQL> commit;

Commit complete.

SQL> alter system switch logfile;

System altered.

 

A last row with the value ‘Colombia’ will be inserted into the table, the row will be committed and a new archived log will be generated:

SQL> insert into dgomez.country values ('Colombia');

1 row created 

SQL> commit;

Commit complete.

SQL> alter system switch logfile; 

System altered.

 

You can see that there were three archived logs created. This is because for every row that was inserted we executed a switch of the log file, and that resulted in the creation of a new archived log.

[oracle@nuvola2 2017_05_07]$ ls -ltr

total 155072

-rw-r----- 1 oracle dba 158784512 May  7 15:59 o1_mf_1_1_djyz5fgk_.arc

-rw-r----- 1 oracle dba      2560 May  7 16:00 o1_mf_1_2_djyz6dyd_.arc

-rw-r----- 1 oracle dba      3072 May  7 16:00 o1_mf_1_3_djyz723j_.arc

[oracle@nuvola2 2017_05_07]$

 

Confirming the three rows are in the table:

SQL> select * from dgomez.country;

NAME
--------------------
Guatemala
Canada
Colombia

 

Basically what I have done is what the following picture explains.  Initially the database was empty. The row with the value ‘Guatemala’ was inserted and then I generated an archived log (#1). I repeated these steps with the value ‘Canada’ and ‘Colombia’ respectively.

 

First Test – Using all the archived logs generated:

The first test that I will perform is to use these three newly generated archived logs to recover the database. For this I will simulate that all the datafiles of the existing database were deleted and we have to restore and recover the database.

Shutting down the existing database:

SQL> shutdown immediate;

Database closed.

Database dismounted.

ORACLE instance shut down.

 

Mounting the database:

SQL> startup mount;

ORACLE instance started.

 

Total System Global Area  843055104 bytes
Fixed Size              8626288 bytes
Variable Size         322965392 bytes
Database Buffers      507510784 bytes
Redo Buffers            3952640 bytes
Database mounted.

 

Deleting datafiles and online logs in order to simulate a storage damage:

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/DB1/datafile/*

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/DB1/onlinelog/*

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/fra/DB1/onlinelog/*

 

Restoring the database:

RMAN> restore database;

Starting restore at 07-MAY-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=37 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /others/db1/DB1/datafile/o1_mf_system_djyxzjxt_.dbf
channel ORA_DISK_1: restoring datafile 00003 to /others/db1/DB1/datafile/o1_mf_sysaux_djyy0ynm_.dbf
channel ORA_DISK_1: restoring datafile 00004 to /others/db1/DB1/datafile/o1_mf_undotbs1_djyy23sy_.dbf
channel ORA_DISK_1: restoring datafile 00007 to /others/db1/DB1/datafile/o1_mf_users_djyy24y4_.dbf
channel ORA_DISK_1: reading from backup piece /others/db1/fra/DB1/backupset/2017_05_07/o1_mf_nnndf_TAG20170507T155509_djyywznq_.bkp
channel ORA_DISK_1: piece handle=/others/db1/fra/DB1/backupset/2017_05_07/o1_mf_nnndf_TAG20170507T155509_djyywznq_.bkp tag=TAG20170507T155509
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 07-MAY-17

 

Recovering the database: here is where the magic happens. All we have to do is use the “UNTIL AVAILABLE REDO” clause, and Oracle will automatically apply all the archived logs registered in its control file or in a recovery catalog, if a catalog is used. There is no need to calculate the target SCN.

RMAN> recover database until available redo;

Starting recover at 07-MAY-17
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 1 is already on disk as file /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc
archived log for thread 1 with sequence 2 is already on disk as file /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_2_djyz6dyd_.arc
archived log for thread 1 with sequence 3 is already on disk as file /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_3_djyz723j_.arc
archived log file name=/others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc thread=1 sequence=1
archived log file name=/others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_2_djyz6dyd_.arc thread=1 sequence=2
archived log file name=/others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_3_djyz723j_.arc thread=1 sequence=3
warning: attempt media recovery until thread 1, sequence 4
Finished recover at 07-MAY-17

We can see that the three archived logs were applied automatically and there were no errors.

Opening the database in resetlogs:

SQL> alter database open resetlogs; 

Database altered.

 

Verification of the data:

SQL> select * from dgomez.country;

NAME
--------------------
Guatemala
Canada
Colombia

 

Since the three rows are there, we can confirm that Oracle indeed applied the three archived logs automatically, without our having to specify any target SCN or target sequence.

 

Second Test – Deleting the last two archived logs:

The test that I will perform now is with the last two archived logs deleted and only the first archived log available. I will again use the UNTIL AVAILABLE REDO clause and Oracle should be able to discover that the maximum time to which the database can be recovered is right after the first row was inserted (with the value ‘Guatemala’).  

Shutting down the existing database:

SQL> shutdown immediate;

Database closed.

Database dismounted.

ORACLE instance shut down.

 

Mounting the database:

SQL> startup mount;

ORACLE instance started.

 

Total System Global Area  843055104 bytes
Fixed Size              8626288 bytes
Variable Size         322965392 bytes
Database Buffers      507510784 bytes
Redo Buffers            3952640 bytes
Database mounted.

 

Deleting datafiles and online logs in order to simulate a storage damage:

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/DB1/datafile/*

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/DB1/onlinelog/*

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/fra/DB1/onlinelog/*

 

Confirming that our three archived logs are there:

[oracle@nuvola2 2017_05_07]$ ls -ltr  /others/db1/fra/DB1/archivelog/2017_05_07/*

-rw-r----- 1 oracle dba 158784512 May  7 15:59 /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc

-rw-r----- 1 oracle dba      2560 May  7 16:00 /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_2_djyz6dyd_.arc

-rw-r----- 1 oracle dba      3072 May  7 16:00 /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_3_djyz723j_.arc

 

Deleting the last two archived logs that were generated:

[oracle@nuvola2 2017_05_07]$ rm -rf  /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_3_djyz723j_.arc

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_2_djyz6dyd_.arc

 

Confirming that only the first archived log is available now:

[oracle@nuvola2 2017_05_07]$ ls -ltr  /others/db1/fra/DB1/archivelog/2017_05_07/*

-rw-r----- 1 oracle dba 158784512 May  7 15:59 /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc

[oracle@nuvola2 2017_05_07]$

 

The following image explains what we are doing. We deleted the last two generated archived logs in order to test whether Oracle is aware of it, automatically handles the situation, and applies all the redo data in the first archived log. If Oracle performs its job well, at the end we will see only one row inserted, with the value 'Guatemala'.

 

Restoring the database:

RMAN> restore database;

Starting restore at 07-MAY-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=44 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /others/db1/DB1/datafile/o1_mf_system_djyznwbl_.dbf
channel ORA_DISK_1: restoring datafile 00003 to /others/db1/DB1/datafile/o1_mf_sysaux_djyznwby_.dbf
channel ORA_DISK_1: restoring datafile 00004 to /others/db1/DB1/datafile/o1_mf_undotbs1_djyznwc9_.dbf
channel ORA_DISK_1: restoring datafile 00007 to /others/db1/DB1/datafile/o1_mf_users_djyznwcn_.dbf
channel ORA_DISK_1: reading from backup piece /others/db1/fra/DB1/backupset/2017_05_07/o1_mf_nnndf_TAG20170507T155509_djyywznq_.bkp
channel ORA_DISK_1: piece handle=/others/db1/fra/DB1/backupset/2017_05_07/o1_mf_nnndf_TAG20170507T155509_djyywznq_.bkp tag=TAG20170507T155509
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:22
Finished restore at 07-MAY-17

 

Recovering the database:

RMAN> recover database until available redo;

Starting recover at 07-MAY-17
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 1 is already on disk as file /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc
archived log file name=/others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc thread=1 sequence=1
warning: attempt media recovery until thread 1, sequence 2
Finished recover at 07-MAY-17

 

You can see that Oracle discovered that only one archived log is available and automatically calculated the target sequence up to which the database could be recovered.

Opening the database with resetlogs:

RMAN> alter database open resetlogs; 

Statement processed

 

Confirming the data:

RMAN> select * from dgomez.country;

NAME
--------------------
Guatemala

 

We can see that the result is correct. Since only the first archived log was applied, only the row with the value ‘Guatemala’ exists in the table.

 

Conclusion:

Definitely the ‘UNTIL AVAILABLE REDO’ clause is something DBAs have been waiting for, since it eliminates time spent calculating the target SCN or sequence and also removes the risk of human error in the calculations that in might result in having to restore the entire database from scratch. That would be acceptable for small databases, but for huge, multi-terabyte databases it’s not acceptable.  Oracle has made our life easier.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides Consulting Services for Oracle Products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle Open World in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

How to analyze Undo statistics to proactively avoid undo space issues


By Deiby Gómez

Introduction

In my previous articles I explained two very important concepts about undo data: how Oracle manages the retention time, and how Oracle reuses the undo extents. You can also check my presentation "How to avoid ORA-01555" if you want to know more about that error. In this article, I will show you how the view V$UNDOSTAT can give you useful information about how undo is behaving in your database. First, let me give a short definition of two views:

V$UNDOSTAT: Each row in the view keeps statistics collected in the instance for a 10-minute interval. The rows are in descending order by the BEGIN_TIME column value. Each row belongs to the time interval marked by (BEGIN_TIME, END_TIME). Each column represents the data collected for the particular statistic in that time interval. The first row of the view contains statistics for the (partial) current time period. The view contains a total of 576 rows, spanning a 4 day cycle.

DBA_HIST_UNDOSTAT: This view contains snapshots of V$UNDOSTAT. Basically, it holds the history of V$UNDOSTAT.

As you can see, the main view is V$UNDOSTAT; the other is just its history. There are several columns in the view. Here are the ones we’ll focus on:

UNDOBLKS: Represents the total number of undo blocks consumed. You can use this column to obtain the consumption rate of undo blocks, and thereby estimate the size of the undo tablespace needed to handle the workload on your system

TXNCOUNT: Identifies the total number of transactions executed within the period

UNXPBLKREUCNT: Number of unexpired undo blocks reused by transactions

EXPBLKRELCNT: Number of expired undo blocks stolen from other undo segments

ACTIVEBLKS: Total number of blocks in the active extents of the undo tablespace for the instance at the sampled time in the period

UNEXPIREDBLKS: Total number of blocks in the unexpired extents of the undo tablespace for the instance at the sampled time in the period

EXPIREDBLKS: Total number of blocks in the expired extents of the undo tablespace for the instance at the sampled time in the period.

NOSPACEERRCNT: Identifies the number of times space was requested in the undo tablespace and there was no free space available. That is, all of the space in the undo tablespace was in use by active transactions. The corrective action is to add more space to the undo tablespace.

By using these columns, there are some interesting combinations that every DBA can use to tune undo data generation. If we combine UNDOBLKS and TXNCOUNT, for instance, we can find out the consumption rate of undo blocks per transaction.  Use the following query:

select min(UNDOBLKS/NULLIF(TXNCOUNT,0)), avg(UNDOBLKS/NULLIF(TXNCOUNT,0)), max(UNDOBLKS/NULLIF(TXNCOUNT,0)) from V$UNDOSTAT;

select BEGIN_TIME, END_TIME, UNDOBLKS/NULLIF(TXNCOUNT,0) from V$UNDOSTAT;

You can also combine UNDOBLKS, the undo tablespace’s block size, and the retention time to work out how many MB your undo tablespace needs in order to match a specific retention time, as in the sketch below.
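
A minimal sketch of that calculation, assuming the undo tablespace uses the database default block size (db_block_size) and taking the peak undo generation rate observed in V$UNDOSTAT:

-- ur = undo_retention (seconds), ups = peak undo blocks per second, dbs = block size (bytes)
SQL> SELECT (ur * ups * dbs) / 1024 / 1024 AS required_undo_mb
     FROM (SELECT TO_NUMBER(value) AS ur  FROM v$parameter WHERE name = 'undo_retention'),
          (SELECT MAX(undoblks / ((end_time - begin_time) * 86400)) AS ups FROM v$undostat),
          (SELECT TO_NUMBER(value) AS dbs FROM v$parameter WHERE name = 'db_block_size');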

And even more interesting, we can extract the data from V$UNDOSTAT in a CSV format and create line charts in order to understand the undo behavior of our databases.
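
If you are on a 12.2 SQL*Plus client, one quick way to do this (a sketch; SET MARKUP CSV is a 12.2 SQL*Plus option, and the spool file name is just an example) is:

SQL> set markup csv on
SQL> spool undostat.csv
SQL> select begin_time, end_time, undoblks, txncount, activeblks, unexpiredblks, expiredblks, nospaceerrcnt
     from v$undostat
     order by begin_time;
SQL> spool off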

Let’s see how this would work. As an example, I have created a 12.2.0.1 EE database and generated a workload with SLOB. SLOB was configured to perform 95% UPDATEs and 5% SELECTs, with WORK_UNIT=8192, 5 SLOB schemas, and 5 threads per schema, in order to generate a lot of undo data. 

For each chart that I will show, SLOB was running for around 60 minutes. This means that we will have 6 rows in V$UNDOSTAT, since every row is a sample of 10 mins.

Before you study the charts, I really recommend that you first read these two articles to master the two principal concepts:

How does Oracle reuse the Expired and Unexpired undo extents?

Undo retention time with autoextend=on and autoextend=off

Let’s begin. The following charts use the columns: NOSPACEERRCNT, ACTIVEBLKS, UNEXPIREDBLKS, EXPIREDBLKS (but you can build more complex charts using the others columns of V$UNDOSTAT).

First type of workload 

The chart below characterizes an OLTP database; the database is receiving transactions (because there are active undo extents) but the transactions seem to happen infrequently since most of the undo extents are "expired" and the active extents have not increased enough to require reusing expired/unexpired extents.

If you have your undo data behavior looking like this chart, you would say your database is healthy from an undo space perspective. This would be a "perfect" environment. In this chart, there is no reason to be worried regarding undo space.

 

First Workload Example

Second type of workload

This workload is quite different. In the previous chart, the higher line was “Expired Blocks” and the lower line was “Unexpired Blocks”; in this second chart it is reversed: now the higher line is “Unexpired Blocks”. This means that the database is receiving the workload and the undo retention time is high enough to keep the undo data of the completed transactions (unexpired extents) stored.

Here, you have to review whether there are Unexpired extents that are being reused by new transactions. This happens more frequently when the line of Unexpired extents is getting close to the line of the active extents (the next two charts). If you see that “UNXPBLKREUCNT” has a value greater than one, you probably should tune undo retention. If the undo retention has the value that you require, then you can increase the size of your undo tablespace; otherwise, unexpired extents will be overwritten by other transactions if Oracle requires it. In that case you would see some ORA-01555 in your SELECT operations.

In the chart below, however, there is no reason to be worried regarding space.

Second Workload Example

Third type of workload

The chart below is very similar to the previous one; however, in this chart the line of “Unexpired extents” is closer to the line of Active extents. This behavior increases the probability of getting ORA-01555 in your SELECT operations. If you want to avoid ORA-01555, you can increase the undo retention time or increase the size of the undo tablespace.

In this chart, there is no reason to be worried regarding space, only about ORA-01555, but you should look a little bit deeper, because if you don’t pay attention your database might reach the status of either of the two charts we’ll be looking at later on.

Third Workload Example

Fourth type of workload 

This chart indicates a worse situation than the two previous charts. Here, the number of transactions has increased such that the number of active undo extents has also increased, and Oracle has started to overwrite (reuse) some unexpired undo extents.

In a database with this undo behavior there will surely be some SELECTs failing with ORA-01555, and space issues will be around the corner. I recommend in this case that you make a deep analysis of why unexpired undo extents have started to be reused.

If you simply ignore the status shown in this chart, your database will at some point reach the behavior shown in the next chart. There will be space problems, and your transactions (INSERT, UPDATE, DELETE) will start failing because there is no free space in the undo tablespace to be assigned for new extents.


Fourth Workload Example

Fifth type of workload

You should avoid having your database in this status as much as possible. In this status, some transactions (INSERT, UPDATE, DELETE) have already started to fail because there was no free space in the undo tablespace to create new active undo extents. You should definitely increase the size of some datafiles of the undo tablespace.


Fifth Workload Example

I’ve just shown you five charts created from the view V$UNDOSTAT, which allows you to chart up to 4 days of historic data. You can use DBA_HIST_UNDOSTAT if you want to chart further back in the past, as in the sketch below.
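
A minimal sketch of such a query, assuming the AWR retention covers the period you want (add filters on DBID and INSTANCE_NUMBER if the repository holds more than one database or instance):

SQL> select trunc(begin_time) day,
            sum(undoblks) undo_blocks,
            sum(txncount) transactions,
            sum(nospaceerrcnt) no_space_errors
     from dba_hist_undostat
     where begin_time > sysdate - 30
     group by trunc(begin_time)
     order by 1;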

Determining the proper undo tablespace size

Oracle provides the function dbms_undo_adv.required_undo_size, which you can use to determine the proper undo tablespace size to comply with a specific undo retention time.

SQL> SELECT 'The Required undo tablespace size using Statistics In Memory is ' || dbms_undo_adv.required_undo_size(128) || ' MB' required_undo_size FROM dual;

REQUIRED_UNDO_SIZE

--------------------------------------------------------------------------------

The Required undo tablespace size using Statistics In Memory is 79 MB

You can use this function as a starting point, but I recommend that you set the size of the undo tablespace based on your analysis of the behavior and historic statistics of your undo data.

Conclusion

In this article I demonstrated that the view V$UNDOSTAT has very useful information that you can review, or even better, that you can chart. You can build charts as complex as you want in order to analyze the behavior of your database from the undo usage perspective and then make decisions to properly tune undo retention time and undo tablespace size.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala who holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016” and has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn 

Oracle Database 12.2 - How to track index usage


By Deiby Gómez

Introduction

Several articles have been written about how to track the usage of indexes, and there are several scripts to determine which indexes are being used after monitoring for a while. In versions prior to 12cR2 of Oracle Database, the clause “ALTER INDEX (…) MONITORING USAGE” can be used for this. However, Oracle 12.2 introduced two new views that automatically monitor index usage:

V$INDEX_USAGE_INFO: V$INDEX_USAGE_INFO keeps track of index usage since the last flush. A flush occurs every 15 minutes. After each flush, ACTIVE_ELEM_COUNT is reset to 0 and LAST_FLUSH_TIME is updated to the current time.

DBA_INDEX_USAGE: DBA_INDEX_USAGE displays cumulative statistics for each index.

With these two new views, Oracle automatically tracks the usage of indexes. There are several columns in DBA_INDEX_USAGE that can be used to find out how many accesses the indexes have received and how many rows they have returned, and, even better, there are buckets to create histograms for accesses and rows returned. The most recent time each index was used is also recorded.  
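
For example, a quick look at the flush information mentioned above (a minimal sketch; it only selects the two columns described in the definition of V$INDEX_USAGE_INFO):

SQL> select active_elem_count, last_flush_time from v$index_usage_info;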

In the following example, I will create a table with three columns, with one index in every column. Then I will run some queries against the table in order to use the indexes, and we will confirm that indeed Oracle 12.2 tracks the usage.

Creating the table

SQL> create table dgomez.table1 (id number, val1 varchar2(20), val2 varchar2(20));

Table created.

Creating an Index in each column

SQL> create index dgomez.idx_id on dgomez.table1(id);

Index created.

 

SQL> create index dgomez.idx_val1 on dgomez.table1(val1);

Index created.

 

SQL> create index dgomez.idx_val2 on dgomez.table1(val2);

Index created.

Perform some INSERTs in the table

While the INSERT statements also impact the indexes (index entries must be created in the b-tree), this doesn’t count as an “access”.

SQL> insert into dgomez.table1 values (1,'a','b');
SQL> insert into dgomez.table1 values (2,'b','c');
SQL> insert into dgomez.table1 values (3,'c','d');
SQL> insert into dgomez.table1 values (4,'d','e');
SQL> insert into dgomez.table1 values (5,'e','f');
SQL> insert into dgomez.table1 values (6,'f','g');
SQL> insert into dgomez.table1 values (7,'g','h');
SQL> insert into dgomez.table1 values (8,'h','i');
SQL> insert into dgomez.table1 values (9,'i','j');
SQL> insert into dgomez.table1 values (10,'j','k');
SQL> insert into dgomez.table1 values (11,'k','l');
SQL> commit;

Executing some queries

I will execute some queries. I have enabled autotrace to confirm that the query is using the index. This counts as an “access”. Also pay attention to how many rows each query has returned, since this count is also monitored by Oracle. At the end, we will list how many accesses and how many rows each index has returned and we will confirm whether the data displayed is correct.

Using the index IDX_ID:

SQL> select id from dgomez.table1 where id>1;

10 rows selected.

---------------------------------------------------------------------------
| Id | Operation        | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
| 0  | SELECT STATEMENT |        | 10    | 130   | 1 (0)      | 00:00:01 |
|* 1 | INDEX RANGE SCAN | IDX_ID | 10    | 130   | 1 (0)      | 00:00:01 |
---------------------------------------------------------------------------

SQL> select id from dgomez.table1 where id>0;

11 rows selected.

---------------------------------------------------------------------------
| Id | Operation        | Name   | Rows | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
| 0  | SELECT STATEMENT |        | 11   | 143   | 1 (0)      | 00:00:01 |
|* 1 | INDEX RANGE SCAN | IDX_ID | 11   | 143   | 1 (0)      | 00:00:01 |
---------------------------------------------------------------------------

Using the index IDX_VAL1:

SQL> select val1 from dgomez.table1 where val1 !='a';

10 rows selected.

-----------------------------------------------------------------------------
| Id | Operation        | Name     | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
| 0  | SELECT STATEMENT |          | 10   | 120   | 1 (0)      | 00:00:01 |
|* 1 | INDEX FULL SCAN  | IDX_VAL1 | 10   | 120   | 1 (0)      | 00:00:01 |
-----------------------------------------------------------------------------

SQL> select val1 from dgomez.table1 where val1 !='z';

11 rows selected.

-----------------------------------------------------------------------------
| Id | Operation        | Name     | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
| 0  | SELECT STATEMENT |          | 11   | 132   | 1 (0)      | 00:00:01 |
|* 1 | INDEX FULL SCAN  | IDX_VAL1 | 11   | 132   | 1 (0)      | 00:00:01 |
-----------------------------------------------------------------------------

Using the index IDX_VAL2:

SQL> select val2 from dgomez.table1 where val2 !='b';

10 rows selected.

-----------------------------------------------------------------------------
| Id | Operation        | Name     | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
| 0  | SELECT STATEMENT |          | 10   | 120   | 1 (0)      | 00:00:01 |
|* 1 | INDEX FULL SCAN  | IDX_VAL2 | 10   | 120   | 1 (0)      | 00:00:01 |
-----------------------------------------------------------------------------

SQL> select val2 from dgomez.table1 where val2 !='z';

11 rows selected.

-----------------------------------------------------------------------------
| Id | Operation        | Name     | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
| 0  | SELECT STATEMENT |          | 11   | 132   | 1 (0)      | 00:00:01 |
|* 1 | INDEX FULL SCAN  | IDX_VAL2 | 11   | 132   | 1 (0)      | 00:00:01 |
-----------------------------------------------------------------------------

Confirming the information captured

Now let’s take a look at the information captured by Oracle. In the previous part of this demo I executed two queries against each index, so every index was used twice. The first query always returned 10 rows for every index, and the second query returned 11 rows for every index; this means each index has returned 21 rows in total. Now let’s confirm these values:

SQL>select name, total_access_count, total_exec_count, total_rows_returned, last_used from DBA_INDEX_USAGE where owner='DGOMEZ';

NAME      TOTAL_ACCESS_COUNT TOTAL_EXEC_COUNT TOTAL_ROWS_RETURNED LAST_USED
--------- ------------------ ---------------- ------------------- ---------------------
IDX_ID                     2                2                 21   07-16-2017 18:58:43
IDX_VAL1                   2                2                 21   07-16-2017 18:58:43
IDX_VAL2                   2                2                 21   07-16-2017 18:58:43

 

Fortunately, the information about every query I executed was captured, but it seems not all SELECTs are captured, as Franck Pachot explains in this article. I also saw that if the queries are executed by SYS, the index usage is not captured. 

The following output shows how many accesses the index has received:

SQL> select name, bucket_1_access_count, bucket_2_10_access_count, bucket_11_100_access_count, bucket_101_1000_access_count from DBA_INDEX_USAGE where owner='DGOMEZ';

NAME      BUC_1_ACC_CT BUC_2_10_ACC_CT BUC_11_100_ACC_CT BUC_101_1000_ACC_CT
--------- ------------ --------------- ----------------- -------------------
IDX_ID               0               1                 1                  0
IDX_VAL1             0               1                 1                  0
IDX_VAL2             0               1                 1                  0

 

The definition of the column “BUCKET_11_100_ACCESS_COUNT” is “The index has been accessed between 11 and 100 times.” At first look it seems that this definition is not correct, because I executed only two queries against each index. I didn’t execute any query that accessed the index between 11 and 100 times.

So apparently this column actually counts row-level accesses, not operations. Since the first SELECT operation accessed the index 10 times (because it returned 10 rows), bucket_2_10_access_count was increased by one. It is the same for the second query, which accessed the index 11 times because it returned 11 rows; bucket_11_100_access_count was increased by one.

But… Wait! TOTAL_ACCESS_COUNT says every index was accessed only two times in total. So, there are some inconsistent definitions here:

  • Either there were two accesses of every index because I executed two SELECT operations that touched the index, in which case TOTAL_ACCESS_COUNT is correct but BUCKET_11_100_ACCESS_COUNT is not correct, because I didn’t execute any query more than 10 times and fewer than 101 times. 
  • Or, the BUCKET_11_100_ACCESS_COUNT is correct and it doesn’t count the operations (SELECTs in this case) but instead counts every access to the b-tree nodes of the index; in which case the definition of TOTAL_ACCESS_COUNT is wrong.

In the following output we can confirm that every bucket received the correct information. For example, for the bucket bucket_2_10_rows_returned there is 1 execution; this is because the first query always returned 10 rows in every index. The bucket bucket_11_100_rows_returned always has the right value (1 execution) since the second query we executed against every index always returned 11 rows.

SQL> select name, bucket_2_10_rows_returned, bucket_11_100_rows_returned, bucket_101_1000_rows_returned from DBA_INDEX_USAGE where owner='DGOMEZ';

NAME      BUC_2_10_RW_RETD BUC_11_100_RW_RETD BUC_101_1000_RW_RETD
--------- ---------------- ------------------ ---------------------
IDX_ID                  10                 11                     0
IDX_VAL1                10                 11                     0
IDX_VAL2                10                 11                     0

Conclusion

Oracle has been introducing new views that provide very useful information to DBAs, so that they can properly administer their databases and diagnose problems proactively. For several years scripts, third-party tools, ALTER INDEX clauses, and so on were used to track index usage, but that has changed: Oracle now performs this tracking automatically, without a performance overhead.  

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala who holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016” and has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn 

Invisible Columns in Oracle 12c


Starting in Oracle 12.1.0.1 there are several new features (more than 500, I have heard), and one of the good features for developers is "Invisible Columns". Invisible columns allow a developer to create a table with some special columns. These special columns are not shown to everybody using the table; to get the value of such a column, whoever is performing DML against the table must specify the name of the column explicitly, otherwise the table behaves as if it didn't have that column. This is useful when an application has changed but some users are still using the former "structure" of the table. In this case invisible columns can be used: the new users know that they must specify the new columns explicitly, while the old users can keep using the former structure without issues. I will show you a couple of examples in this article so that you get to know all the "properties" of invisible columns. 

To begin, you have to know that invisible columns can be created at table creation time; the column syntax has changed a little bit, as shown in the following picture:

Now let's create a table with invisible columns:

SQL> create table dgomez.TableWithInvisibleColumns (
col1 varchar2 (20) visible,
col2 varchar2 (20) invisible); 

Table created.

Now let's  see how DMLs work with Invisible Columns:

 

Insert Operations: 

In an insert operation, if we don't explicitly specify the invisible column but still try to use it, we will get an error. For example, in the following statement I am not explicitly specifying the column "col2", which is our invisible column, yet I am trying to use it because I am inserting two values:

SQL> insert into dgomez.TableWithInvisibleColumns values ('b','b');
insert into dgomez.TableWithInvisibleColumns values ('b','b')
*
ERROR at line 1:
ORA-00913: too many values

SQL>

The correct way to use the invisible column is as follows, specifying "col2" explicitly; that lets Oracle know that we are aware of the invisible column and indeed want to use it:

SQL> insert into dgomez.TableWithInvisibleColumns (col1, col2) values ('a','a');

1 row created.

SQL>

 

Select Operations:

It is the same for a select operation: if we want to get the values of the invisible columns, we have to specify their names in the SELECT statement. For example, in the following statement we are trying to get all the columns from the table "dgomez.TableWithInvisibleColumns", yet only one column is returned. This is because specifying "*" does not guarantee to Oracle that we are aware of the invisible column, so Oracle returns only the "visible" columns. 

SQL> select * from dgomez.TableWithInvisibleColumns;

COL1
--------
a

If we want to get the values of the invisible columns, we have to specify their names, as in the following example:

SQL> select col1, col2 from dgomez.TableWithInvisibleColumns;

COL1  COL2
----- -----
a     a

SQL>

Are the values stored physically in the table?

Yes. Invisible columns are not the same as "Virtual Columns"; these are totally different. With virtual columns, the value (or the expression that produces it) is stored as metadata of the column, but the value itself is not stored physically (indexes are a different story, as you can read in my last article). With invisible columns, however, the value is in fact stored physically. Only the visibility of the column is managed as metadata; the data is there. 
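
To check this yourself you can dump the block that holds the row; a minimal sketch, assuming the row lives in datafile 6, block 227 (the file and block reported in the dump trailer below). DBMS_ROWID tells you the actual values for your own table, and the dump is written to the session's trace file:

SQL> select dbms_rowid.rowid_relative_fno(rowid) file#,
            dbms_rowid.rowid_block_number(rowid) block#
     from dgomez.TableWithInvisibleColumns;

SQL> alter system dump datafile 6 block 227;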


data_block_dump,data header at 0x7f340fe60264
===============
tsiz: 0x1f98
hsiz: 0x14
pbl: 0x7f340fe60264
76543210
flag=--------
ntab=1
nrow=1
frre=-1
fsbo=0x14
fseo=0x1f91
avsp=0x1f7b
tosp=0x1f7b
0xe:pti[0] nrow=1 offs=0
0x12:pri[0] offs=0x1f91
block_row_dump:
tab 0, row 0, @0x1f91
tl: 7 fb: --H-FL-- lb: 0x1 cc: 2
col 0: [ 1] 61  
--> In ascii 'a'
col 1: [ 1] 61  
--> In ascii 'a' (This is the value of Invisible Column)
end_of_block_dump
End dump data blocks tsn: 4 file#: 6 minblk 227 maxblk 227

Metadata of the Invisible Columns:

So, what if I am not just another user of the table? What if I am the DBA of that table and I want to know which columns are invisible and which are not? There should be a way to know this. The first thought would be a "DBA_" view, but which one? We might expect the view DBA_TAB_COLUMNS to have that information, but if we run "DESC DBA_TAB_COLUMNS" we see that there is no column called "VISIBLE" or "VISIBILITY" or anything like that. This is because Oracle didn't add a new column to describe the visibility of every column in a table; instead, DBA_TAB_COLUMNS exposes the information through a column that already exists: "COLUMN_ID". When a column has NULL as the value of "COLUMN_ID", that column is invisible, as in the following example:


SQL> select table_name, column_name, column_id from dba_tab_columns where owner='DGOMEZ' and table_name='TABLEWITHINVISIBLECOLUMNS';

TABLE_NAME                COLUMN_NAME  COLUMN_ID
------------------------- ------------ ----------
TABLEWITHINVISIBLECOLUMNS COL1         1
TABLEWITHINVISIBLECOLUMNS COL2

SQL>

We clearly see that the column "COL2" has a NULL value for COLUMN_ID, which means that COL2 is invisible.  
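
As a side note (a sketch worth verifying on your own version), DBA_TAB_COLS, unlike DBA_TAB_COLUMNS, also exposes a HIDDEN_COLUMN flag; for a user-created invisible column it shows HIDDEN_COLUMN = 'YES' together with USER_GENERATED = 'YES':

SQL> select column_name, column_id, hidden_column, user_generated
     from dba_tab_cols
     where owner='DGOMEZ' and table_name='TABLEWITHINVISIBLECOLUMNS';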

 

Adding Invisible Columns:

Invisible columns can be created not only at table creation time; we can also add them after the table is created by using "ALTER TABLE". In the following example I will show you how to add an invisible column, and I will also confirm another property of invisible columns: virtual columns can be invisible, too:

SQL> alter table dgomez.TableWithInvisibleColumns add (col3 invisible as (col1||col2) virtual ) ;

Table altered.

 

Does the structure of the table have the invisible columns' information?

To answer this question, let's describe the table. Usually we use "DESCRIBE" to have a quick look at the table's structure:

SQL> desc dgomez.TableWithInvisibleColumns;

Name   Null?  Type
------ ------ ----------------------------
COL1          VARCHAR2(20)

SQL>

But as we can see, the "DESCRIBE" command doesn't show any information about the invisible columns. Now let's extract the structure using "DBMS_METADATA":

SQL> select dbms_metadata.get_ddl('TABLE','TABLEWITHINVISIBLECOLUMNS','DGOMEZ') from dual;

DBMS_METADATA.GET_DDL('TABLE','TABLEWITHINVISIBLECOLUMNS','DGOMEZ')
--------------------------------------------------------------------------------

CREATE TABLE "DGOMEZ"."TABLEWITHINVISIBLECOLUMNS"
( "COL2" VARCHAR2(20) INVISIBLE,
"COL3" VARCHAR2(40) INVISIBLE GENERATED ALWAYS AS ("COL1"||"COL2") VIRTUAL ,
"COL1" VARCHAR2(20)
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS"

SQL>

There is a very interesting thing here. Do you remember how we created the columns in that table? At table creation I put "COL1" as the first column and "COL2" as the second column. After that I added a third column (COL3) via "ALTER TABLE". But look at how DBMS_METADATA returns the DDL of that table: all the invisible columns are placed at the beginning. If you use that DDL to create new tables and later decide to make those columns VISIBLE, the order of the columns will be different from the original table's DDL. 

 

Are indexes supported on Invisible Columns?

The answer is yes. Here are a couple of examples:

SQL> create index dgomez.Index1OnInvisibleColumn on dgomez.TableWithInvisibleColumns (col2);

Index created.

SQL> create index dgomez.Index2OnInvisibleColumn on dgomez.TableWithInvisibleColumns (col2,col3);

Index created.

 

Are Partition Keys supported on Invisible Columns?

This is interesting as well: when creating a partitioned table, we can select an invisible column as the partition key:

SQL> create table dgomez.Table3WithInvisibleColumns (
col1 varchar2 (20),
col2 varchar2 (20) invisible)
partition by hash (col2)
partitions 2;

Table created.

 

How to change the visibility of a column?

To finish this article, I will show you how to change a column from "invisible" to "visible" and from "visible" to "invisible":

SQL> alter table dgomez.Table3WithInvisibleColumns modify (col2 visible);

Table altered.

SQL> alter table dgomez.Table3WithInvisibleColumns modify (col2 invisible);

Table altered.

SQL>



Oracle Database 12.2 Statement-level Refresh for Materialized Views


By Deiby Gómez

 

Introduction:

Materialized views have been used for several years, and they are being improved by Oracle with every database version or release. Up to Oracle Database 12cR1, Oracle materialized views supported the following refresh modes:

  • ON DEMAND:You can control the time of refresh of the materialized views.
    • COMPLETE: Refreshes by recalculating the defining query of the materialized view.
    • FAST: Refreshes by incrementally applying changes to the materialized view. For local materialized views, it chooses the refresh method that is estimated by the optimizer to be most efficient; the refresh methods considered are log-based FAST and FAST_PCT.
    • FAST_PCT: Refreshes by recomputing the rows in the materialized view affected by changed partitions in the detail tables.
    • FORCE: Attempts a fast refresh. If that is not possible, it does a complete refresh.
  • ON COMMIT: Whenever a transaction commits which has updated the tables on which a materialized view is defined, those changes are automatically reflected in the materialized view. The only disadvantage is that the time required to complete the commit will be slightly longer because of the extra processing involved.

Starting with Oracle 12cR2, Materialized views can be refreshed ON STATEMENT.

  • ON STATEMENT: With this refresh mode, any changes to the base tables are immediately reflected in the materialized view. There is no need to commit the transaction or maintain materialized view logs on the base tables. If the DML statements are subsequently rolled back, then the corresponding changes made to the materialized view are also rolled back.

In the following graphic we can see that in the syntax the option “ON STATEMENT” was introduced:

To use an ON STATEMENT materialized view the following restrictions must be cleared:

  • They are for materialized join views only.
  • Base tables referenced in the materialized view defining query must be connected in a join graph of star/snowflake shape.
  • An existing non-ON-STATEMENT materialized view cannot be converted to REFRESH ON STATEMENT.
  • Altering an existing ON STATEMENT materialized view is not allowed.
  • An ON STATEMENT materialized view cannot be created under SYS.
  • An ON STATEMENT materialized view needs to be fast refreshable. You must specify the clause ‘REFRESH FAST’ in the CREATE MATERIALIZED VIEW command. Materialized view logs are not required.
  • The defining query needs to include the ROWID column of the fact table in the SELECT list.
  • Be careful with UPDATE operations, because these are not supported on any dimension table. An UPDATE will make the ON STATEMENT materialized view unusable.
  • TRUNCATE operations on a base table are not supported. They will make the ON STATEMENT materialized view unusable.
  • The defining query should NOT include:
    • invisible column
    • ANSI join syntax
    • complex defining query
    • (inline) view as base table
    • composite primary key
    • long/LOB column

Every refresh mode has its own restrictions, and it is difficult to memorize every single restriction for every refresh mode. If you are getting errors like “ORA-12052: cannot fast refresh materialized view”, it’s likely that you are violating a restriction. To make this task easier, you can always visit the note Materialized View Fast Refresh Restrictions and ORA-12052 (Doc ID 222843.1), where you will find every single restriction for all the refresh modes; you can also ask the database itself, as sketched below.
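
A minimal sketch of that check using DBMS_MVIEW.EXPLAIN_MVIEW, which reports the refresh capabilities of an existing materialized view (or of a defining query passed in as text); it assumes MV_CAPABILITIES_TABLE has been created in your schema with the utlxmv.sql script and uses the onstatement_purchases view created later in this article:

SQL> @?/rdbms/admin/utlxmv.sql

SQL> exec DBMS_MVIEW.EXPLAIN_MVIEW('DGOMEZ.ONSTATEMENT_PURCHASES');

SQL> select capability_name, possible, msgtxt
     from mv_capabilities_table
     order by seq;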

So enough of the basic concepts of materialized views; it’s time for an example. In the following example I am using Oracle Database Enterprise Edition 12.2.0.1 and creating four tables. Then I will create two materialized views, one ON COMMIT and one ON STATEMENT. I will insert some rows in each of the four tables without committing them. We will query the ON STATEMENT materialized view, analyze the result, and then we will commit the data to finally query the ON COMMIT materialized view and its result.

Creating the tables:

SQL> CREATE TABLE employee (
employee_id number,
name varchar2(20),
phone number,
position varchar2(20),
CONSTRAINT employee_pk PRIMARY KEY (employee_id));

Table created.

SQL> CREATE TABLE department (
department_id number,
name varchar2(20),
CONSTRAINT department_pk PRIMARY KEY (department_id));

Table created.

SQL> CREATE TABLE product (
product_id number,
name varchar2(20),
price number(*,2),
CONSTRAINT product_pk PRIMARY KEY (product_id));

Table created.

SQL> CREATE TABLE purchase (
purchase_code number,
department_id number,
employee_id number,
product_id number,
amount number,
purchase_date date,
CONSTRAINT purchase_pk PRIMARY KEY (purchase_code),
FOREIGN KEY (department_id) REFERENCES department (department_id),
FOREIGN KEY (employee_id) REFERENCES employee (employee_id),
FOREIGN KEY (product_id) REFERENCES product (product_id));

Table created.

 

The advantage of ON STATEMENT materialized views is that there is no need to create materialized view logs in order to create them:

SQL> CREATE MATERIALIZED VIEW onstatement_purchases
REFRESH FAST ON STATEMENT
AS
SELECT p.rowid rid, e.name, p.purchase_code, pr.product_id, p.amount
FROM department d, employee e, purchase p, product pr
WHERE d.department_id=p.department_id and
pr.product_id=p.product_id and
e.employee_id=p.employee_id;

Materialized view created.

One of the disadvantages of using ON COMMIT materialized views is that materialized view logs must be created with “INCLUDING NEW VALUES” and “WITH ROWID” as well as including all the columns that will be referenced inside the materialized view.

CREATE MATERIALIZED VIEW LOG ON purchase WITH PRIMARY KEY,ROWID, SEQUENCE(department_id,employee_id,product_id,amount,purchase_date) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON department WITH PRIMARY KEY,ROWID, SEQUENCE(name) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON employee WITH PRIMARY KEY,ROWID, SEQUENCE(name,phone,position ) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON product WITH PRIMARY KEY,ROWID, SEQUENCE(name,price) INCLUDING NEW VALUES;

 

Creating the ON COMMIT materialized view:

SQL> CREATE MATERIALIZED VIEW oncommit_purchases
REFRESH FAST ON COMMIT
AS
SELECT e.name, p.purchase_code, pr.product_id, p.amount
FROM department d, employee e, purchase p, product pr
WHERE d.department_id=p.department_id and
pr.product_id=p.product_id and
e.employee_id=p.employee_id
group by e.name, p.purchase_code, pr.product_id, p.amount;

Materialized view created.

 

Verifying the refresh mode of each materialized view:

SQL> select owner, mview_name, REFRESH_MODE from dba_mviews where owner='DGOMEZ'

OWNER      MVIEW_NAME                REFRESH_MODE
---------- ------------------------- ------------
DGOMEZ     ONCOMMIT_PURCHASES        COMMIT
DGOMEZ     ONSTATEMENT_PURCHASES     STATEMENT

Now I will insert some rows without committing them:

SQL> Insert into employee values (1,'Jose',55555555,'Manager');

1 row created.

SQL> Insert into department values (1,'Sales');

1 row created.

SQL> Insert into product values (1,'Soda',100.50);

1 row created.

SQL> insert into purchase values (1,1,1,1,100,sysdate);

1 row created.

Now I will query the materialized view onstatement_purchases, and we will see that it was already populated:

SQL> select name, purchase_code, product_id, amount from onstatement_purchases;

 

NAME                 PURCHASE_CODE PRODUCT_ID AMOUNT
-------------------- ------------- ---------- ----------
Jose                             1          1       100

 

However the ON COMMIT materialized view oncommit_purchases is empty:

SQL> select name, purchase_code, product_id, amount from oncommit_purchases;

no rows selected

 

I will commit the rows:

SQL> commit;

Commit complete.

 

As soon as the rows are committed, the ON COMMIT materialized view is populated:

SQL> select name, purchase_code, product_id, amount from oncommit_purchases;

NAME                 PURCHASE_CODE PRODUCT_ID AMOUNT
-------------------- ------------- ---------- ----------
Jose                             1          1        100

 

Conclusion:

Materialized views are frequently used to improve the performance of complex queries and are very popular. Oracle has been improving them, and with the introduction of ON STATEMENT materialized views, DBAs have one more option they can use to meet client requirements or solve performance issues. In this article we looked at some basic concepts of materialized views, and two examples: an ON STATEMENT materialized view, where we saw that the materialized view was populated without committing the data, and an ON COMMIT materialized view, which needed the commit instruction to be populated.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala who holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016” and has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn 

 

Oracle EM 13c Database's historic data without DBA_HIST*


By Deiby Gómez

Introduction

Data changes frequently in OLTP environments, and Oracle has to be aware of those changes, or at least try to detect them, in order to adjust the optimizer and execute statements in the best possible way. To do so, Oracle generates several metrics from the system, the sessions, the services, and so on, and it also gathers statistics automatically via AUTOTASK.

There is a huge amount of information generated by the metrics, which is captured mainly in AWR repository tables. The information generated by the metrics is very important because by using it the database administrators can perform troubleshooting and capacity planning, analyze the workload over a period of time, and so on.  When there are no performance issues, database administrators mostly think about capacity planning in order to understand how the database is growing over time.  In the past, this information was used to size the new hardware that they had to buy every two or three years, but with Oracle Cloud, that’s a thing of the past. Nowadays this information is used to understand different aspects of the growth of the business.

Businesses impose several different requirements; for example, a business might want to know  about the increase in users consuming their services or products; the DBA would want to know about increased space requirements, increase in physical writes, and so on. These are among several scenarios where historical data is needed to create complex and customized reports.

When we think about historical data, our first thought is AWR/ASH; however, there is another alternative that few DBAs use: the repository views of Enterprise Manager. These views have hundreds of different metrics that are captured automatically by Enterprise Manager and can be used to create customized reports as complex as we could want. Just imagine, hundreds of metrics to play with!

As per the Oracle "Database Licensing Information" documentation (I didn’t find other sources of information on this), the following views also require the Oracle Diagnostics Pack. If this license cannot be acquired, you can use the STATSPACK tables instead.

MGMT$METRIC_DETAILS: The MGMT$METRIC_DETAILS view displays a rolling 7 day window of individual metric samples. These are the metric values for the most recent sample that has been loaded into the Management Repository plus any earlier samples that have not been aggregated into hourly statistics.

MGMT$METRIC_CURRENT: The MGMT$METRIC_CURRENT view displays information on the most recent metric values that have been loaded into the Management Repository.

MGMT$METRIC_HOURLY: The MGMT$METRIC_HOURLY view displays metric statistics information that has been aggregated from the individual metric samples into hourly time periods. For example, if a metric is collected every 15 minutes, the 1 hour rollup would aggregate the 4 samples into a single hourly value by averaging the 4 individual samples together. The current hour of statistics may not be immediately available from this view. The timeliness of the information provided from this view is dependent on when the query against the view was executed and when the hourly rollup table was last refreshed.

MGMT$METRIC_DAILY: The MGMT$METRIC_DAILY view displays metric statistics that have been aggregated from the samples collected over the previous twenty-four hour time period. The timeliness of the information provided from this view is dependent on when the query against the view was executed and when the hourly rollup table was last refreshed.

MGMT$TARGET_TYPE:  MGMT$TARGET_TYPE displays metric descriptions for a given target name and target type. This information is available for the metrics for the managed targets that have been loaded into the Management Repository. Metrics are specific to the target type.

You can build reports as complex as you want. In this article I will show you some basic examples that you can take as a starting point. You can also read my article “Creación de un reporte simple usando Information Publisher Report”, where you will learn how to use Information Publisher to build nice reports.

List all the metrics available in Enterprise Manager Repository Views

With this query you can list all the metrics that you can use to build your reports. This query will return hundreds of rows, each row for one specific metric:

SELECT distinct metric_name,
metric_column,
metric_label
FROM MGMT$METRIC_DAILY
ORDER BY 1,2,3;

All the metrics for all the database targets

With this query you list all the metrics available for one specific type of target, in this case the type ‘oracle_database’:

SELECT t.target_name target_name,
       t.metric_name,
       m.metric_column metric_column,
       to_char(m.rollup_timestamp,'YYYY-MM-DD HH24') as TIME,
       sum(m.average/1024) as value
FROM   mgmt$metric_hourly M,
       mgmt$target_type T
WHERE  t.target_type='oracle_database'
       and m.target_guid=t.target_guid
       and m.metric_guid=t.metric_guid
GROUP BY  t.target_name,
          t.metric_name,
          m.metric_column,
          m.rollup_timestamp
ORDER BY 1,2,3;

Once you know which metrics are available to build reports, you can proceed to create a basic report.

Current value for the metric iombs_ps

Let’s start with something basic: learning the current value for one specific metric. In this example, we’ll learn the value of the metric “iombs_ps”, which is part of the category “instance_throughput”.

This query uses the view mgmt$metric_current:

SQL> SELECT t.target_name target_name,
     t.metric_name,
     m.metric_column metric_column,
     to_char(m.collection_timestamp,'YYYY-MM-DD HH24:MI') as TIME,
     m.value as value
FROM mgmt$metric_current M,
     mgmt$target_type T
WHERE t.target_type='oracle_database'
      and m.target_guid=t.target_guid
      and m.metric_guid=t.metric_guid
      and t.metric_name='instance_throughput'
      and t.metric_column='iombs_ps'
      ORDER BY 1,2,3;

TARGET_NAME  METRIC_NAME         METRIC_COLUMN TIME             VALUE
------------ ------------------- ------------- ---------------- --------
cloud1       instance_throughput iombs_ps      2017-08-20 20:32 378

Historic data for the metric iombs_ps per hour

Now I will use the historic data for the same metric for the last 24 hours, and then I will build a chart with Google Chart to see the behavior of this metric over time. This query uses the view mgmt$metric_hourly.

SQL> SELECT t.target_name target_name,
            t.metric_name,
            m.metric_column metric_column,
            to_char(m.rollup_timestamp,'YYYY-MM-DD HH24') as TIME,
            sum(m.average/1024) as value
FROM        mgmt$metric_hourly M,
            mgmt$target_type T
WHERE       t.target_type='oracle_database'
            and m.target_guid=t.target_guid
            and m.metric_guid=t.metric_guid
            and t.metric_name='instance_throughput'
            and t.metric_column='iombs_ps'
GROUP BY t.target_name,
         t.metric_name,
         m.metric_column,
         m.rollup_timestamp
ORDER BY 1,2,3; 

TARGET_NAME  METRIC_NAME          METRIC_COLUMN   MONTH_TIMESTA VALUE
------------ -------------------- --------------- ------------- ----------
cloud1       instance_throughput  iombs_ps        2017-08-19 00 296
cloud1       instance_throughput  iombs_ps        2017-08-19 01 374
cloud1       instance_throughput  iombs_ps        2017-08-19 02 362
cloud1       instance_throughput  iombs_ps        2017-08-19 03 360
cloud1       instance_throughput  iombs_ps        2017-08-19 04 378
cloud1       instance_throughput  iombs_ps        2017-08-19 05 378
cloud1       instance_throughput  iombs_ps        2017-08-19 06 378
cloud1       instance_throughput  iombs_ps        2017-08-19 07 362
cloud1       instance_throughput  iombs_ps        2017-08-19 08 360
cloud1       instance_throughput  iombs_ps        2017-08-19 09 362
cloud1       instance_throughput  iombs_ps        2017-08-19 10 360
cloud1       instance_throughput  iombs_ps        2017-08-19 11 359
cloud1       instance_throughput  iombs_ps        2017-08-19 12 362
cloud1       instance_throughput  iombs_ps        2017-08-19 13 361
cloud1       instance_throughput  iombs_ps        2017-08-19 14 370
cloud1       instance_throughput  iombs_ps        2017-08-19 15 378
cloud1       instance_throughput  iombs_ps        2017-08-19 16 378
cloud1       instance_throughput  iombs_ps        2017-08-19 17 378
cloud1       instance_throughput  iombs_ps        2017-08-19 18 161
cloud1       instance_throughput  iombs_ps        2017-08-19 19 161
cloud1       instance_throughput  iombs_ps        2017-08-19 20 175
cloud1       instance_throughput  iombs_ps        2017-08-19 21 178
cloud1       instance_throughput  iombs_ps        2017-08-19 22 179
cloud1       instance_throughput  iombs_ps        2017-08-19 23 164
cloud1       instance_throughput  iombs_ps        2017-08-19 24 160

 

Now I will use Google Chart to chart the data. We can see that interpreting a graphic is easier than looking only at numbers. In this graphic we can see that something happened around 17:00 because the IO throughput decreased:

Historic data for the metric iombs_ps per day

Our last report example will use the view mgmt$metric_daily to create a report on the same metric, but daily. You can add more WHERE clauses to filter the period of time, and you can also play with the MAXIMUM and MINIMUM values (see the sketch after the output below).

SQL> SELECT t.target_name target_name,
            t.metric_name,
            m.metric_column metric_column,
            to_char(m.rollup_timestamp,'YYYY-MM-DD') as TIME,
            sum(m.average/1024) as value
FROM        mgmt$metric_daily M,
            mgmt$target_type T
WHERE       t.target_type='oracle_database'
            and m.target_guid=t.target_guid
            and m.metric_guid=t.metric_guid
            and t.metric_name='instance_throughput'
            and t.metric_column='iombs_ps'
GROUP BY t.target_name, t.metric_name, m.metric_column, m.rollup_timestamp
ORDER BY 1,2,3; 

TARGET_NAME  METRIC_NAME          METRIC_COLUMN   MONTH_TIME VALUE
------------ -------------------- --------------- ---------- ----------
cloud1       instance_throughput  iombs_ps        2017-08-13 377
cloud1       instance_throughput  iombs_ps        2017-08-14 360
cloud1       instance_throughput  iombs_ps        2017-08-15 367
cloud1       instance_throughput  iombs_ps        2017-08-16 378
cloud1       instance_throughput  iombs_ps        2017-08-17 378
cloud1       instance_throughput  iombs_ps        2017-08-18 378
cloud1       instance_throughput  iombs_ps        2017-08-19 378
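
As mentioned above, here is a minimal sketch of the same report limited to the last 30 days and showing the daily MINIMUM and MAXIMUM columns instead of the average (assuming those aggregate columns are populated for this metric in your repository):

SQL> SELECT t.target_name target_name,
            to_char(m.rollup_timestamp,'YYYY-MM-DD') as TIME,
            min(m.minimum) as min_value,
            max(m.maximum) as max_value
     FROM   mgmt$metric_daily M,
            mgmt$target_type T
     WHERE  t.target_type='oracle_database'
            and m.target_guid=t.target_guid
            and m.metric_guid=t.metric_guid
            and t.metric_name='instance_throughput'
            and t.metric_column='iombs_ps'
            and m.rollup_timestamp >= sysdate - 30
     GROUP BY t.target_name, m.rollup_timestamp
     ORDER BY 1,2;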

 


Conclusion

In this article I have shown you one more historic data source that you can use to understand the behavior of your business, through the hundreds of metrics that are available in the Enterprise Manager Repository Views. You have views to see the current value of the metrics, the hourly value, or the daily value, and you can play with values like the MAXIMUM in a day (or in an hour), the MINIMUM, or the AVERAGE. You can create very complex queries to analyze different problems across time, and then you can chart the data and get nice graphics that you can present to the board.

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala who holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016” and has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn 

Why Certifications Are Important


By Deiby Gómez

Introduction:

Ever since I started my career in Oracle technology I’ve always wanted to deliver the best support to my clients. I have wanted to solve problems quickly. I am not afraid of new challenges, and I am not afraid to start looking into a problem that I have never seen before; on the contrary, I am happy to look into unfamiliar problems because they are opportunities to learn. Following that approach, and to comply with my commitment to my clients, I started to look into the Oracle certification program. I began to learn what Oracle University was about, and the paths to get certified.

I started my career with Oracle Database 11g. Then, because of the clients I was doing work for, I extended my knowledge to 10g and even 9i; the oldest version I worked with was 8i, though with few tickets on it. At the moment, the newest version of Oracle is 12c and all the certifications are already available for it. You can even get certified on a specific release, like OCP on 12cR2. I recommend that you get certified on the most recent versions of the technology you are interested in.

Anyhow, since I came into Oracle technology on 11g my path to get certified was the following:

 

So I worked hard to pass the following exams. This should give you an idea of the time it would take to progress through the certifications:

  • 1Z0-051: Oracle Database 11g SQL Fundamentals I – January 2011
  • 1Z0-052: Oracle Database 11g Administration I – March 2011
  • 1Z0-053: Oracle Database 11g: Administration II – May 2011
  • 1Z0-402: Enterprise Linux Fundamentals – May 2011
  • 1Z0-451: Oracle Service Oriented Architecture Foundation Practitioner – August 2012
  • 1Z0-027: Oracle Exadata X3 and X4 Administration – August 2013
  • 1Z0-058: Oracle RAC 11g Release 2 and Grid Infrastructure Administration – December 2013
  • 1Z0-060: Upgrade to Oracle Database 12c – February 2014
  • 1Z0-093: Oracle Database 11g Certified Master Exam (OCM) – February 2015
  • 1Z0-432: Oracle Real Application Clusters 12c Essentials – September 2015
  • 1Z0-029: Oracle Database 12c Certified Master Upgrade Exam– April 2016
  • 1Z0-066: Oracle Database 12c: Data Guard Administration – December 2016

Additionally, I became an Oracle ACE in 2013 and an Oracle ACE Director in 2015. I also was a technical reviewer of the book "Oracle Database 12c Release 2 Multitenant" and a co-author of the book "Oracle Database 12c Release 2 Testing Tools and Techniques for Performance and Scalability".

After all this hard work, I can tell you why certifications are important.

Of course, this is a personal opinion. At the beginning of my career I started getting certifications frequently in order to get a salary hike (like most people who are starting a career), but after two certifications I changed my thinking and started to enjoy the path, because it was aligned with what I wanted to deliver: to fix problems quickly and deliver excellence to my clients, which is the right approach. It's all about enjoying the journey!

When preparing for a certification, you have to build several environments, practice installations and different RMAN scenarios, and test every Oracle Database feature and ASM feature. You find errors, and investigate how to fix those errors. While investigating the problems you will read blogs, Metalink notes, whitepapers, Oracle Press books, Oracle University manuals and even videos on YouTube! You will spend several hours and days in front of a computer practicing. You’ll study so hard that when you are in front of the computer actually taking the exam, it’s anticlimactic: just a set of questions that you already know how to answer. You’ll feel like it’s a time sink to sit in front of that laptop taking the exam because you already know you’ve got the knowledge. Yes, you do have the knowledge, but you still have to pass the exam to prove it. And once that certification is in hand, it is proof of all the preparation and hard work that helps you deliver better support to your clients. 

So the advantages I can highlight from the perspective of a consultant are:

  • Preparing for the exam increases your knowledge.
  • You get faster at fixing problems.
  • You face so many issues while practicing that sometimes, just by hearing or seeing the symptoms, you already know where the problem is.
  • You acquire friends and colleagues through forums, blogs and Oracle events around the world.
  • You can get better jobs.
  • You can deliver your clients a better quality of support.
  • Because of your credentials, you are sometimes invited to community projects (to be a speaker, co-author a book, help with a blog, contribute to an open source project, etc.).
  • Depending on where you are, yes, you may get that pay raise.
  • You get a profile in www.youracclaim.com
  • If you become an OCM you also get a special profile in Oracle OCMs list.
  • You get less stressed, because with the knowledge you’ve acquired preparing for certification there will be fewer things that you don’t know, and less reason to fear making errors.
  • Since your knowledge has increased, you also can help your colleagues.
  • You get respect from newbies. :)

And perhaps much more! But those are just the advantages for consultants. There’s another beneficiary of your certifications; namely, the company you are working for. I became part of Nuvola Consulting Group in 2016 and since then we’ve gotten several clients on board (YAY!). Still, I can tell you why certifications are important for organizations:

  • Companies promote your certifications to prove that they have good consultants.
  • For partnerships: when you are looking to become a partner of another company, the other company will look into your consultants and their certifications. 
  • Companies use your certifications to prove that they can work with a specific technology or product very well (Amazon AWS, Oracle DB, Tuning, SOA, etc.).
  • It’s less risky to hire certified consultants, since the chance that they make mistakes is lower than with a consultant who doesn’t have certifications. Of course, there are also consultants without certifications who have a lot of experience, but in those cases they have to demonstrate that experience through past performance, unless the person is well known and highly recommended by others we already know.
  • Companies can charge a higher hourly rate for support or consulting when the consultants are certified.
  • Having several certified consultants is very helpful when the company wants to get on board with a big prospective customer or get a very good contract. Generally large enterprises want companies with certified consultants to provide them services.
  • Having certified consultants helps a firm compete with other companies in the same industry.

In Guatemala, for example, the country where I am currently living, I have observed that certifications are more important to hiring companies in the IT industry than a bachelor’s degree. For non-IT companies it may be different, but in Latin American IT companies this is common. And over the years I have seen many students starting early in their college years and getting certified to increase their expertise in a single technology (let's say Java, for example). I’m included in this group, because I started working with Oracle technology professionally before completing university. The IT industry wants people who are highly specialized in a single technology or product and ready to get involved in projects. 

Conclusion:

Certifications are important for consultants and also for the companies we work for. The industry wants specialized people. The IT industry is growing fast, with some of the largest companies in the world today being IT companies, and they’re demanding certified people. This is an opportunity you have to take advantage of: get certified!

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala. He holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle Open World in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

Prepare yourself for passing the Oracle Certified Master 12c requirements!


By Deiby Gómez

Introduction

In 2013 I began my preparation for becoming Oracle Certified Master (OCM) 11g. I was already OCP (Oracle Certified Professional) 11g and OCP 12c, so to get to the next level, I made myself a schedule of reading and practice. The OCM exam is no joke—it takes a lot of knowledge, as well as speed in working to solve problems, to pass it. Later in this article I’ll share my study and practice schedule, which you can use to prepare for the exam yourself.

But first, some background. There are three levels of Oracle Database Certifications (for any version 11g or 12c):

  • Associate (OCA)
  • Professional (OCP)
  • Master (OCM)

The Professional level requires Associate as a prerequisite, and Master requires Professional. So your certification quest starts at the Associate level. You have to pass two exams in order to achieve it. Next, you have to get your professional certification, which requires you to pass an exam and also take an Oracle University course. Once you are an OCP you can start your journey to become an OCM.

For OCM, you have to take two courses at Oracle University, and then you have to pass one more exam. This exam is different from the ones for the first two levels of certification: it does not consist of multiple-choice questions, and it is not delivered online like the OCA and OCP exams. To find out where to take it, you need to look at the Oracle Certified Master Exam Worldwide Schedule. There are only a few countries where you can take this exam.

This exam is for real DBAs! It is 100% hands-on practice rather than answering questions.

Basically you have to be prepared for anything and you have to do everything as fast as you can, because you have a limited time for each problem.

Above is the path to OCM for 11g. If you want to start directly toward certification in the 12c version, the path is as follows:

 

Some months after I passed the OCM 11g exam, the OCM 12c exam was released, so I decided to take it as well. When I was preparing my OCM 12c I created the following schedule, which you can use, too, for your own preparation.

I focused my preparation on two main areas: Knowledge and Speed.

Hours to develop knowledge

The hours I allotted to increasing my knowledge I spent reading everything I could about each topic: blogs, Metalink notes, forums, books, videos, etc. Within that time I also practiced every topic on a virtual machine, at least twice. For example, if the topic was “install database software”, I read everything about that topic and then installed the software at least two times. During these hours I was also reading every single option of every single command. Yes! It was fun. I also tried to memorize as much syntax as I could. Once I knew how to do everything related to a topic and had gained considerable knowledge about the syntax and concepts, I moved on to the hours for getting faster.

 

Hours to increase speed: During these hours, I didn’t have to read any more because I already knew how to do the things I was focusing on. This was time I set aside to practice and practice and practice and, yes, practice. I tried to get as fast as I could.

So here is the schedule I used:

 

Topic | Time (hrs) to read and practice (Knowledge) | Time (hrs) to improve speed
---------------------------------------------------------------------------------
General Database and Network Administration | 40 | 14
  Create and manage pluggable databases | 16 | 4
  Administer users, roles, and privileges | 4 | 2
  Configure the network environment to allow connections to multiple databases | 4 | 2
  Administer database configuration files | 8 | 2
  Configure shared server | 4 | 2
  Manage network file directories | 4 | 2

Manage Database Availability | 60 | 18
  Install the EM Cloud Control agent | 24 | 8
  Configure recovery catalog | 8 | 2
  Configure RMAN | 8 | 2
  Perform a full database backup | 4 | 2
  Configure and monitor Flashback Database | 16 | 4

Data Warehouse Management | 56 | 23
  Manage database links | 4 | 2
  Manage a fast refreshable materialized view | 16 | 4
  Create a plug-in tablespace by using the transportable tablespace feature | 16 | 4
  Optimize star queries | 4 | 2
  Configure parallel execution | 4 | 2
  Apply a patch | 4 | 2
  Configure Automatic Data Optimization, In-Row Archiving, and Temporal Validity | 8 | 4
  Manage external tables | 8 | 3

Data Management | 60 | 16
  Manage additional buffer cache | 4 | 2
  Optimize space usage for the LOB data | 8 | 2
  Manage an encrypted tablespace | 8 | 2
  Manage schema data | 8 | 2
  Manage partitioned tables | 8 | 2
  Set up fine-grained auditing | 8 | 2
  Configure the database to retrieve all previous versions of the table rows | 16 | 4

Performance Management | 68 | 27
  Configure the Resource Manager | 16 | 12
  Tune SQL statements | 8 | 3
  Use real application testing | 16 | 3
  Manage SQL Plan baselines | 8 | 3
  Capture performance statistics | 8 | 3
  Tune an instance: configure and manage result cache, control CPU use for Oracle instances, configure and manage "In Memory" features | 12 | 3
  Manage extended statistics | 8 | 2
  Create and manage partitioned indexes | 8 | 2

Data Guard | 56 | 26
  Administer a Data Guard environment | 12 | 4
  Create a physical standby database | 16 | 8
  Configure a standby database for testing | 4 | 4
  Configure a standby database to apply redo | 8 | 2
  Configure a standby database to use for reporting | 4 | 2
  Configure fast start failover | 4 | 2
  Manage extended statistics | 4 | 2
  Manage DDL in a Data Guard environment | 4 | 2

Grid Infrastructure | 80 | 34
  Install Oracle Grid Infrastructure | 16 | 8
  Create ASM Disk Groups | 8 | 4
  Create and manage ASM instances | 8 | 4
  Configure ASM Cloud File System (ACFS) | 8 | 4
  Administer Oracle Clusterware | 16 | 6
  Manage Flex Clusters and Flex ASM | 12 | 4
  Manage Flex Clusters and Flex ASM | 12 | 4

Real Application Cluster Database | 40 | 9
  Install Oracle Database software | 8 | 3
  Create a Real Application Clusters (RAC) database | 8 | 2
  Configure Database Services | 16 | 2
  Administer Oracle RAC databases on one or more cluster nodes | 8 | 2

Using this schedule, I tried to practice four hours every day after my job, and I dedicated my weekends to this effort completely (16 hours) so I was able to get prepared in about three months. Depending on the time you have to commit to your own effort, your ‘mileage may vary’.

In addition to my schedule, you can also use the following books for your preparation. One of them is from Kamran Agayev, an 11g OCM and a good friend.

Oracle Certified Master 11g Study Guide by Kamran Agayev.

 

OCM: Oracle Database 10g Administrator Certified Master Exam Guide by Nilesh Kakkad

Once you have passed your OCM exam you will receive a card like this:

 

 

Conclusion

Getting prepared for the OCM is not easy; it takes time, and without good preparation you likely will not pass the exam. This exam is no joke: it is serious, and you should be well prepared in every area before scheduling it. In this article I’ve provided a preparation plan you can follow to get ready for the exam and become an Oracle Certified Master. Best of luck!

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala. He holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle Open World in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

Block Corruption in an Oracle Database


By Deiby Gómez

Introduction

Block corruption is a common topic when we are dealing with any software that stores data. In Oracle Database, several types of logical structures are mapped to a physical file named a “datafile”, and each datafile is divided into data blocks (which in turn map to operating-system blocks).

 

A block can have logical or physical corruption. A corrupt block is a block that has been changed so that it differs from what Oracle expects to find. In a logical corruption the block has a valid checksum but its content is corrupt; for example, a row is locked by a non-existent transaction, the amount of space used is not equal to the block size, avsp is bad, and so on. Logical corruption can cause an ORA-600, depending on which content inside the block is corrupted. A physical corruption, also called a media corruption, occurs when the database does not recognize the block at all: the problem is not the content but the physical location or structure itself; for example, a bad header, a fractured/incomplete block, an invalid block checksum, a misplaced block, zeroed-out blocks, a header and footer that do not match, or an incorrect key data block structure such as the data block address (DBA).

Detecting, monitoring and fixing corrupt blocks is an important task that we have to take care of regularly and frequently. A corrupt block not only means a problem with the block itself; it also means that data may be lost, which is very important for the business.

The Problem

The problem with corrupted blocks is that we don’t know they are corrupted until we try to use them. Of course, this applies to a scenario where we are not executing any proactive activity to detect corrupt blocks. For example, a table block can be corrupted and there is no way to know it until someone performs a SELECT or any other operation that reads that block. Once the block is read, Oracle will know the block is corrupted, and an ORA-00600, ORA-27047 or ORA-01578 will be returned to the user.

A long time ago a customer called me saying that they were trying to execute a SELECT from the application, and every time the SELECT ran the application got an ORA-01578. I identified the block # and the datafile # and fixed it, and the user was able to keep working for the rest of the day. The next day, however, the same customer called again saying they were receiving more ORA-01578 errors. This time I confirmed that the corrupted block was in a different datafile from the block I had fixed the day before, which made me think there could be more corrupted blocks. I executed dbverify against the full database and saw that it had several corrupted blocks, even though the last RMAN backup hadn’t reported any. We engaged a sysadmin, who found that the storage had been having issues that day. Fortunately we detected the storage problem quickly and no data was lost. But if these kinds of issues are not detected promptly, the data can be compromised. In this example we have been talking about a physical problem, but there are other cases where the problem is harder to detect, especially when it is a logical corruption.

How to avoid it

Using Oracle ASM: Oracle recommends using ASM as the storage for the database. ASM has three types of redundancy: External, Normal and High. If we use Normal or High redundancy, Oracle keeps one copy (Normal) or two copies (High) of every block. These are called “mirror” copies, and whenever Oracle finds a corrupt block it automatically restores it from one of its mirror copies. I have written an article that explains in detail how Oracle recovers a block from its mirror copy, in case you want to read it: Data block recovering process using Normal Redundancy

Using the parameter db_block_checking: This parameter controls whether block checking is done for transaction-managed blocks. Early detection of corruption is useful and usually has only a small performance impact. However, for some types of applications, having DB_BLOCK_CHECKING enabled can add considerable overhead; it all depends on the application, so testing the change in a test environment is recommended. The immediate cost is the CPU overhead of checking a block’s contents after each change, but a secondary effect is that blocks are held for longer periods of time, so other sessions needing the current block image may have to wait longer. The actual overhead on any system depends heavily on the application profile and data layout.

Using the parameter db_block_checksum: This parameter determines whether DBWn and the direct loader calculate a checksum (a number computed from all the bytes stored in the block) and store it in the cache header of every data block when writing it to disk. Checksums are verified when a block is read, but only if this parameter is TYPICAL or FULL and the last write of the block stored a checksum. In FULL mode, Oracle also verifies the checksum before a change (such as an UPDATE or DELETE statement) and recomputes it after the change is applied. In addition, Oracle gives every log block a checksum before writing it to the current log. Checksums allow Oracle to detect corruption caused by underlying disks, storage systems, or I/O systems. If set to FULL, DB_BLOCK_CHECKSUM also catches in-memory corruptions and stops them from reaching disk. Turning on this feature in TYPICAL mode causes only an additional 1% to 2% overhead; in FULL mode it causes 4% to 5% overhead.
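
For illustration, here is a minimal sketch of how both parameters can be reviewed and changed; the values shown are only an example and, as noted above, any change should be tested first (both parameters are dynamic, so ALTER SYSTEM is enough):

SQL> show parameter db_block_check

-- Example values only: db_block_checking accepts OFF | LOW | MEDIUM | FULL,
-- and db_block_checksum accepts OFF | TYPICAL | FULL.
SQL> alter system set db_block_checking = MEDIUM scope=both;
SQL> alter system set db_block_checksum = FULL scope=both;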

Dbfsize: Can be used to check the consistency of Block 0.

Dbverify: Can be used to check Oracle datafiles for signs of corruption and gives some degree of confidence that a datafile is free from corruption. It opens files in read-only mode and so cannot change the contents of the file being checked. It checks that the datafile has a valid header. Each data block in the file has a special "wrapper" that identifies the block, and this "wrapper" is checked for correctness. Dbverify also checks that DATA (TABLE) and INDEX blocks are internally consistent and, from 8.1.6 onwards, that various other block types (such as rollback segment blocks) are internally consistent.
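
For illustration, a minimal dbverify run against a single datafile might look like the following (the datafile path and block size are assumptions; adjust them to your environment):

$ dbv file=/u01/app/oracle/oradata/db12c/users01.dbf blocksize=8192 logfile=dbv_users01.log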

RMAN VALIDATE command:  You can use the VALIDATE command to manually check for physical and logical corruptions in database files. This command performs the same types of checks as BACKUP VALIDATE. By default, RMAN does not check for logical corruption. If you specify CHECK LOGICAL on the BACKUP command, however, then RMAN tests data and index blocks for logical corruption, such as corruption of a row piece or index entry.

RMAN> validate check logical database;
RMAN> validate database;
RMAN> validate backupset 11;
RMAN> validate datafile 2 block 11;
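
Any corruption found by VALIDATE is recorded in the view V$DATABASE_BLOCK_CORRUPTION, and block media recovery can then repair those blocks from existing backups. A minimal sketch (the datafile and block numbers below are illustrative only):

SQL> select file#, block#, blocks, corruption_type from v$database_block_corruption;

RMAN> recover datafile 2 block 11;
RMAN> recover corruption list;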

I have written some other articles related to RMAN and corrupt blocks, in case you want to read more about the issue.

Conclusion 

Perform proactive tasks to detect or avoid physical and logical corruption; if the corruption is detected in time, the solution can be executed easily. Oracle offers several tools that we can use to detect, monitor, and fix corruption in a block. It is important to be aware of these types of problems so that our data is not compromised.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala. He holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle Open World in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

What Oracle 12cR2 brought us in 2017 and what Oracle brings us for 2018


By Deiby Gómez (OCM 11g, MAA OCM 12c and Oracle ACE Director)

The year 2017 brought us a lot of good Oracle stuff: Oracle Database 12cR2 was released for on-premises deployments, and the new Autonomous Database 18c was announced at Oracle Open World in San Francisco. In this article I will bring to your attention some of the best new features of Oracle Database 12cR2 and what to expect in 2018 from Oracle 18c and the Oracle Autonomous Database.

New Features in Oracle Database 12cR2

Materialized Views: Statement-Level Refresh: Oracle introduced the “ON STATEMENT” clause to refresh materialized views. With this refresh mode, any changes to the base tables are immediately reflected in the materialized view; there is no need to commit the transaction or maintain materialized view logs on the base tables. If the DML statements are subsequently rolled back, then the corresponding changes made to the materialized view are also rolled back.

For more information about this feature you can read the following article: https://www.toadworld.com/platforms/oracle/b/weblog/archive/2017/08/28/oracle-database-12-2-statement-level-refresh-for-materialized-views
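
As a minimal sketch of the syntax, assuming two hypothetical tables ORDERS and CUSTOMERS that satisfy the fast-refresh requirements for a materialized join view (note that the base table rowids are included in the select list):

SQL> create materialized view mv_customer_orders
       refresh fast on statement
       as select o.rowid o_rid, c.rowid c_rid, c.cust_name, o.order_id, o.amount
          from   orders o, customers c
          where  o.customer_id = c.customer_id;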

Oracle Database 12.2 – How to track index usage: Oracle introduced two views, V$INDEX_USAGE_INFO and DBA_INDEX_USAGE. With these two new views, Oracle automatically tracks the usage of indexes. Several columns in DBA_INDEX_USAGE can be used to find out how many accesses the indexes have received and how many rows they have returned; even better, there are buckets for building histograms of accesses and rows returned. The most recent time each index was used is also recorded.

For more information about this feature you can read the following article: https://www.toadworld.com/platforms/oracle/b/weblog/archive/2017/07/25/oracle-database-12-2-how-to-track-index-usage
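
For example, a quick way to see which indexes have actually been used (a sketch based on the documented columns of DBA_INDEX_USAGE):

SQL> select owner, name, total_access_count, total_rows_returned, last_used
     from   dba_index_usage
     order  by total_access_count desc;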

Oracle 12cR2 RMAN New Feature: UNTIL AVAILABLE REDO: In Oracle Database 12.2.0.1.0 the clause “UNTIL AVAILABLE REDO” is available. As its name indicates, this clause makes all the required calculations to recover the database up to the last available archived log. This is a really cool feature, since all the DBA has to do is catalog all the available archivelogs and use “UNTIL AVAILABLE REDO” in the “RECOVER DATABASE” phase, and Oracle will do all the work. This also lets us avoid human error in the calculations.

For more information about this feature you can read the following article:  https://www.toadworld.com/platforms/oracle/b/weblog/archive/2017/06/02/oracle-12cr2-rman-new-feature-until-available-redo
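
A rough sketch of such a restore, assuming the available archived logs have been copied to a hypothetical /u01/backups/arch directory:

RMAN> catalog start with '/u01/backups/arch/' noprompt;
RMAN> run {
        restore database;
        recover database until available redo;
        alter database open resetlogs;
      }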

Oracle Database 12cR2 new feature: Lockdown Profiles: One of the most important features is “Lockdown Profiles”. Lockdown Profiles provides the granularity we were talking about. With this feature you can enable and disable database functions, features and options. It even lets you specify a range or list of values that may be used.

For more information about this feature you can read the following article:  https://www.toadworld.com/platforms/oracle/b/weblog/archive/2017/05/24/oracle-database-12cr2-new-feature-lockdown-profiles

Oracle Database 12cR2 new feature: Proxy PDB: A Proxy PDB is physically an empty PDB that has the minimum tablespaces required (SYSTEM, SYSAUX, UNDO), created in one CDB that references a remote Pluggable Database in a different CDB. All the operations (DDLs & DMLs) that are executed within the Proxy PDB are sent to the referenced Pluggable Database and remotely executed in it, except for the operations ALTER PLUGGABLE DATABASE and ALTER DATABASE.

For more information about this feature you can read the following article:   https://www.toadworld.com/platforms/oracle/b/weblog/archive/2017/05/24/oracle-database-12xr2-new-feature-proxy-pdb

Introduction to Application Containers in Oracle Database 12cR2: This new feature helps developers a lot with day-to-day tasks. With "Application Containers", developers can create applications; every application can have its own data and version, and developers decide which database should have which version of the same application and when to refresh the data. With "Application Containers" developers keep the objects and data in only one place, rather than in every database in the organization, and synchronize all the dependent databases from that principal side.

For more information about this feature you can read the following article: https://www.toadworld.com/platforms/oracle/w/wiki/11740.introduction-to-application-containers-in-oracle-database-12cr2

Oracle Database 12cR2 new feature: Application Root Replica: Application Root Replica is a physical replica of a master Application Root but in another remote Container Database. This lets us synchronize applications in an Application Container across different and remote Container Databases without using solutions like RMAN, Data Pump, or remote cloning.

For more information about this feature you can read the following article: https://www.toadworld.com/platforms/oracle/b/weblog/archive/2017/05/24/oracle-database-12cr2-new-feature-application-root-replica

Oracle Database 12cR2 new feature: Container Maps: Container Maps allow us to use PDBs as if they were partitions. With PDBs as partitions, we can query data across all the PDBs in the CDB by filtering the data by a key.

For more information about this feature you can read the following article: https://www.toadworld.com/platforms/oracle/b/weblog/archive/2017/05/24/oracle-database-12cr2-new-feature-container-maps

Introduction to Oracle SQL Plan Directives in Oracle Database 12.2: Oracle SQL Plan Directives are part of the “Adaptive Statistics” category. Basically, they are notes that the optimizer writes and stores in the database to “adapt” itself to the environment or to data changes.

For more information about this feature you can read the following article: https://www.toadworld.com/platforms/oracle/b/weblog/archive/2017/04/13/introduction-to-oracle-sql-plan-directives-in-oracle-database-12-2

Oracle DB 12.2 Local Undo: PDB undo tablespace creation: Local Undo is a new kind of undo configuration for Multitenant Architecture and it is a new feature introduced in 12.2.0.1.0. When we say "Local Undo" basically we are saying that every Pluggable Database will have its own Undo Tablespace.

For more information about this feature you can read the following articles:

https://www.toadworld.com/platforms/oracle/b/weblog/archive/2016/11/24/oracle-database-12-2-local-undo-pdb-undo-tablespace-creation

https://www.toadworld.com/platforms/oracle/w/wiki/11733.how-to-enable-and-disable-local-undo-in-oracle-12-2
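
Switching an existing CDB to local undo mode is done in UPGRADE mode; a minimal sketch of the sequence looks like this:

SQL> shutdown immediate
SQL> startup upgrade
SQL> alter database local undo on;
SQL> shutdown immediate
SQL> startup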

How to solve user errors with Oracle Flashback 12cR2 and its enhancements: Flashback Database has had several enhancements since it was introduced, with the biggest in 12.1 and 12.2. In Oracle Database 12.1, Flashback Database supported Container Databases (CDBs) in the Multitenant Architecture; however, Flashback Database at the PDB level was not possible. Oracle Database 12cR2 added Flashback Database support at the PDB level.

For more information about this feature you can read the following article: https://www.toadworld.com/platforms/oracle/b/weblog/archive/2017/01/10/how-to-solve-user-errors-with-oracle-flashback-12cr2-and-its-enhancements
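
A minimal sketch, assuming local undo and Flashback Database are already enabled and the PDB is named PDB1:

SQL> alter session set container=pdb1;
SQL> create restore point before_change;

SQL> alter session set container=cdb$root;
SQL> alter pluggable database pdb1 close immediate;
SQL> flashback pluggable database pdb1 to restore point before_change;
SQL> alter pluggable database pdb1 open resetlogs;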

Near Zero Downtime PDB Relocation in Oracle Database 12cR2: Two features that I really like are "Hot Cloning" and "Online Relocation". Basically these are the same local and remote cloning capabilities as in 12.1.0.2, but now they can be done online: the source PDB can remain in read/write mode.

For more information about this feature you can read the following article: http://www.toadworld.com/platforms/oracle/w/wiki/11750.near-zero-downtime-pdb-relocation-in-oracle-database-12cr2

What’s coming for 2018?

Well, at Oracle Open World 2017 Larry Ellison introduced the world’s first self-driving database. People have been using the terms “Autonomous Database” and “Oracle 18c” interchangeably, but they are different concepts; the best definition I have found is from Maria Colgan in this article: “The Autonomous Database is a Cloud service running on top of Oracle Database 18c along with additional services to provide performance and availability SLAs.”

Oracle 18c itself is just the software with several new features; in itself it is not an “autonomous database”. Oracle 18c has not yet been released for on-premises databases; however, I took some notes at the OOW session about the autonomous database delivered by Juan Loaiza. The following are some of those notes:

The difference between Automated and Autonomous:

  • The customer can choose to just use automation or hand over all management to Oracle Cloud Operations for Autonomous operation.
  • If the customer hands over management to Oracle then:
    • Database and OS Administrator Privileges are not needed and not provided
    • Exception and failure cases are handled by Oracle Experts
  • The payoff is huge – eliminate generic tasks, reduce labor, reduce costs, and reduce errors, while increasing security and availability.

Autonomous Database removes generic tasks:

DBAs will have more time to innovate and improve the business.

  • Tasks Specific to business
    • Architecture, planning, data modeling
    • Data security and lifecycle management
    • Application related tuning
    • End-to-end service level management

Automatically Diagnoses Performance

  • Automatic Database Diagnostic Monitor (ADDM)
    • Automatically diagnoses the root cause of performance issues
  • Active Workload Repository (AWR)
    • Automatically keeps detailed performance and resource utilization history
  • Real-Time SQL Monitoring
    • Automatically diagnoses how resources are used in SQL statements
  • Many database algorithms self-optimize – caching, locking, storage indexes, offload, etc.
  • Automatic SQL re-tuning using machine learning

The autonomous database subscription includes:

  • Data Encryption
  • Diagnostics Pack
  • Tuning Pack
  • Real Application Testing
  • Data Masking, Redaction and Subsetting
  • Hybrid Columnar Compression
  • Database Vault
  • Database in Memory (subset) – In Autonomous Data Warehouse
  • Advanced Analytics (subset) – In Autonomous Data Warehouse

Conclusion:

Since Oracle Database 12cR1, released in 2013, Oracle has been introducing functionality that supports Cloud adoption; in Oracle Database 12cR2 that functionality was improved even more, with operations made “online” so that no interruption is needed, and new features were introduced to fully support Cloud adoption. Starting with Oracle 18c and the Autonomous Database, Oracle wants to offer companies a self-driving database-as-a-service in Oracle Cloud – definitely a futuristic service that can be used only in Oracle Cloud. Several new things are coming in 2018; I’m looking forward to seeing them!

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala. He holds the Oracle Certified Master 11g, Oracle Certified Master 12c and Maximum Availability Architecture Oracle Certified Master certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle Open World in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

Oracle Database 12cR2 new feature: Lockdown Profiles


By Deiby Gómez

 

Introduction:

In the past, roles, system privileges, and table privileges were used to control the functionalities allowed to database users. However, roles and privileges don’t have enough granularity to effectively restrict what work a user may do.  For example, you can grant the privilege “ALTER SYSTEM” to a user, but with that, you are allowing that user to change any database parameter. “ALTER SYSTEM” is not granular enough to enable the user to change some database parameters but not others. Even worse, there is no way to allow a user to change a specific database parameter with a range or list of values but disable another range or list of values. This functionality has been requested by DBAs for years and finally Oracle has heard us.

Oracle has introduced several new features in its newest version, 12.2.0.1. One of the most important features is “Lockdown Profiles”. Lockdown Profiles provides the granularity we were talking about. With this feature you can enable and disable database functions, features and options. It even lets you specify a range or list of values that may be used.

 

About Lockdown Profiles creation

Lockdown Profiles can be created only in Container Databases, and you must be connected to CDB$ROOT. If you try to create a lockdown profile in a non-container database you will receive the following error:

SQL> CREATE LOCKDOWN PROFILE WANNACRY_PROFILE;
CREATE LOCKDOWN PROFILE WANNACRY_PROFILE
*
ERROR at line 1:
ORA-65090: operation only allowed in a container database

 

If you try to create a lockdown profile while connected to a PDB you will get the following error:

SQL> CREATE LOCKDOWN PROFILE WANNACRY_PROFILE;
CREATE LOCKDOWN PROFILE WANNACRY_PROFILE
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database

 

How to create a Lockdown Profile

Connect to CDB$ROOT:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

 

Execute the CREATE LOCKDOWN PROFILE sentence:

SQL> CREATE LOCKDOWN PROFILE WANNACRY_PROFILE;

Lockdown Profile created.

 

Unfortunately, you cannot specify which functionality to enable or disable along with the CREATE LOCKDOWN PROFILE sentence. To do this, you have to use the ALTER LOCKDOWN PROFILE sentence separately.

 

Enabling or disabling functionalities:

There are three functionalities that you can disable:

  • FEATURE: Allows you to enable or disable database features. To see the full list of features that you can indicate, check here.
  • OPTION: The two options you can either enable or disable are “DATABASE QUEUING” and “PARTITIONING”.
  • STATEMENT: You can either enable or disable the statements “ALTER DATABASE”, “ALTER PLUGGABLE DATABASE”, “ALTER SESSION”, and “ALTER SYSTEM”. You can specify granular options along with these statements.

In the three functionalities, you can also use clauses like ALL and EXCEPT, which allows you to include or exclude a set of features instead of specifying them one by one.

In the following example we will disable two features, one option, and one statement.

The statement that we will disable is to change the parameter “nls_date_format” in an ALTER SYSTEM statement:

SQL>  ALTER LOCKDOWN PROFILE WANNACRY_PROFILE DISABLE STATEMENT = ('ALTER SYSTEM') CLAUSE = ('SET')  OPTION= ('nls_date_format');

Lockdown Profile altered.

 

The next example is similar to the previous one, but here we are specifying a minimum value and a maximum value. All the values between are allowed, while all the values outside of this range are disallowed.

SQL> ALTER LOCKDOWN PROFILE WANNACRY_PROFILE DISABLE STATEMENT = ('ALTER SYSTEM') CLAUSE = ('SET') OPTION = ('parallel_max_servers') MINVALUE = '10' MAXVALUE = '39';

Lockdown Profile altered.

 

In the next example I am disabling the feature “COMMON_USER_CONNECT”. Disabling this feature prevents common users from connecting to pluggable databases directly; they must first connect to CDB$ROOT and then switch to the Pluggable Database.

SQL> ALTER LOCKDOWN PROFILE WANNACRY_PROFILE DISABLE FEATURE = ('COMMON_USER_CONNECT'); 

Lockdown Profile altered.

 

The last example disables the option “PARTITIONING”, which means I cannot use any operations that rely on partitioning.

SQL> ALTER LOCKDOWN PROFILE WANNACRY_PROFILE DISABLE OPTION = ('PARTITIONING'); 

Lockdown Profile altered.

 

Reviewing Lockdown Profiles information:

Once the lockdown profile has been created and you have enabled or disabled the required functionalities, you can review all the information using the view DBA_LOCKDOWN_PROFILES:

SQL> select rule_type, rule, clause, clause_option, option_value , min_value, max_value, status from DBA_LOCKDOWN_PROFILES where profile_name='WANNACRY_PROFILE' ;

RULE_TYPE  RULE                CLAUSE   CLAUSE_OPTION        OPTION_VALUE MIN MAX STATUS
---------- ------------------- -------- -------------------- ------------ --- --- -------
FEATURE    COMMON_USER_CONNECT                                                    DISABLE
OPTION     PARTITIONING                                                           DISABLE
STATEMENT  ALTER SYSTEM        SET      NLS_DATE_FORMAT                           DISABLE
STATEMENT  ALTER SYSTEM        SET      PARALLEL_MAX_SERVERS              10  39  DISABLE

 

Enable Lockdown Profile:

As we have seen, I created the lockdown profile directly without specifying whether I want that lockdown profile in one specific PDB, or in all the PDBs, etc., I just created it. Don’t worry about it: The creation of a lockdown profile doesn’t mean it is enabled by default. Lockdown profile works like a Database Resource Manager Plan; you can create as many as you want, but only one is enabled and it must be enabled explicitly. And enabling a lockdown profile is similar to enabling a Database Resource Manager Plan; it is enabled by a database parameter.

So far we have created the lockdown profile “WANNACRY_PROFILE” and we have customized it but we haven’t enabled it yet.  You can enable a lockdown profile in one specific PDB, in a set of them or in all PDBs. If you want to enable the lockdown profile in all the PDBs you have to be connected to CDB$ROOT and set the database parameter “pdb_lockdown” to the name of your lockdown profile; in this case, “WANNACRY_PROFILE”. If you want to enable the lockdown profile in a specific PDB, first you have to connect to the specific PDB and then you have to set the database parameter “pdb_lockdown”. 

In the following example we have a CDB called “db12c” with two PDBs, one named “PDB1” and the second one named “PDB2”. We will enable the lockdown profile “WANNACRY_PROFILE” only in “PDB1”.

Checking out that the parameter is not set in any container:

SQL> select con_id, name, value from gv$system_parameter where name='pdb_lockdown';

CON_ID     NAME            VALUE
---------- --------------- ----------
0          pdb_lockdown

 

Connecting to “PDB1”:

SQL> show con_name

CON_NAME
------------------------------
PDB1

 

Set the database parameter pdb_lockdown:

SQL> alter system set pdb_lockdown='WANNACRY_PROFILE';

System altered.

 

Verifying that the parameter is set only in “PDB1” (CON_ID=3):

SQL> select con_id, name, value from gv$system_parameter where name='pdb_lockdown';

CON_ID     NAME VALUE
---------- -------------- ------------------------------
0          pdb_lockdown
3          pdb_lockdown   WANNACRY_PROFILE

 

Confirming whether the functionalities were successfully disabled:

Testing to change the parameter nls_date_format:

Connecting to “PDB1”:

SQL> show con_name

CON_NAME

------------------------------

PDB1

 

I am using a common user with “alter system” privileges:

SQL> show user

USER is "C##DGOMEZ"

 

As you see, even if the user has “alter system” privilege it is not allowed to change the database parameter because of the lockdown profile.

SQL> alter system set nls_date_format='mm-dd-yyyy' scope=spfile;
alter system set nls_date_format='mm-dd-yyyy' scope=spfile
*
ERROR at line 1:
ORA-01031: insufficient privileges

 

Testing the feature 'COMMON_USER_CONNECT'. Without the lockdown profile, I was able to connect directly to a PDB with a common user, however now it is not allowed because of the lockdown profile:

[oracle@nuvola2 ~]$ sqlplus c##dgomez/dgomez@192.168.1.22:1521/pdb1 

ERROR:

ORA-01017: invalid username/password; logon denied

Testing the parameter parallel_max_servers. The range we specified in the lockdown profile was [10,39]. As explained before, all the values outside this range are disallowed, while the values inside it are allowed.

SQL> alter system set parallel_max_servers=9;
alter system set parallel_max_servers=9
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> alter system set parallel_max_servers=10;

System altered.

SQL> alter system set parallel_max_servers=39;

System altered.

SQL> alter system set parallel_max_servers=40;
alter system set parallel_max_servers=40
*
ERROR at line 1:
ORA-01031: insufficient privileges

 

How to drop a lockdown profile:

Dropping a lockdown profile is easy. You just have to execute the following sentence from CDB$ROOT. You don’t have to reset or clear the parameter pdb_lockdown in the PDBs that are using this lockdown profile (although I strongly think it should not work this way). When you execute this sentence, all the PDBs using the lockdown profile automatically stop using the settings it provided.

DROP LOCKDOWN PROFILE WANNACRY_PROFILE;

 

Conclusion:

In this article, I outlined the steps required to create a new lockdown profile, explained which kinds of functionality we can enable and disable, and provided several examples with comments to help you quickly understand how to use lockdown profiles and take advantage of them. This is very important in an era where security is of utmost value and finer granularity is needed to restrict people to only those tasks necessary for their role.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala. He holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle Open World in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn


Oracle Database 12cR2 new feature: Proxy PDB


By Deiby Gómez

 

Introduction:

The need to communicate with external systems and exchange data made Oracle develop a way to connect to different Oracle databases and execute operations on them. Traditionally, whenever we wanted to bring data in from a different database, we used a database link. In 12.1.0.1.0 Oracle introduced a major new multitenant architecture: a database can be either a Container Database (CDB) or a non-Container Database, and if we create a Container Database we can create Pluggable Databases attached to it. However, DBAs still needed to use database links to exchange data between the pluggable databases within a Container.

The newest version, Oracle Database 12.2.0.1.0, introduces a feature called “Proxy PDB”. A Proxy PDB is physically an empty PDB, containing only the minimum required tablespaces (SYSTEM, SYSAUX, UNDO), created in one CDB and referencing a remote Pluggable Database in a different CDB. All the operations (DDLs and DMLs) that are executed within the Proxy PDB are sent to the referenced Pluggable Database and executed remotely in it, except for the operations ALTER PLUGGABLE DATABASE and ALTER DATABASE. This is why it is called a “Proxy”.

The benefit of a Proxy PDB is that it’s exactly as if the referenced PDB was in the local CDB, but the data is stored remotely and the operations are executed remotely in the referenced Pluggable Database. For instance, if we have Database Resource Manager active in the local CDB, the current Resource Manager Plan also applies to the Proxy PDB. Another example is the CONTAINERS clause, which allows retrieval of data from all the Pluggable Databases; this clause also works for a Proxy PDB. For all operations, the Proxy PDB will be seen as a normal PDB.

The image below sets up our example. It shows two containers, CDB1 and CDB2.  The remote container is shown at the top of the illustration: CDB1. The local CDB is shown at the bottom of the illustration: CDB2. Each container has two pluggable databases within it, designated as PDB1 and PDB2. The PDB2 in the local container is a Proxy PDB that references the PDB2 within CDB1.  

In the illustration we see a user connected to the CDB$ROOT of CDB2 who is executing a query using the CONTAINERS clause across all the PDBs that belong to CDB2. The data returned includes “Guatemala”, which is physically stored in the referenced PDB, that is, the PDB2 within CDB1. The row with the value “Guatemala” is returned because the query was sent to the referenced PDB and executed there. (The referenced PDB can be either a normal PDB or an application PDB. In this example the referenced PDB is a normal PDB.)

 

 

To create a Proxy PDB there are some prerequisites:

  • The CDB that contains the referenced PDB must be in local undo mode.
  • The CDB that contains the referenced PDB must be in ARCHIVELOG mode.
  • The referenced PDB must be in open read/write mode when the proxy PDB is created.

We will go through the example presented in the above image. First I will connect to CDB1 and create the PDB1 and PDB2 Pluggable Databases, and then I will jump to CDB2 to create its PDB1 and then the Proxy PDB called PDB2. Once everything is completed I will perform the query with the CONTAINERS clause from CDB2, my local container.

 

Preparation in CDB1:

I will create the PDB1 and PDB2 in CDB1:

SQL> create pluggable database pdb1 admin user pdbadmin identified by nuvola;

Pluggable database created.

SQL> create pluggable database pdb2 admin user pdbadmin identified by nuvola;

Pluggable database created.

 

Opening PDB1 and PDB2:

SQL> alter pluggable database all open;

Pluggable database altered.

 

One of the prerequisites is that the referenced PDB is in read/write; in this example both are in read/write:

SQL> select name, open_mode from v$pdbs;

NAME       OPEN_MODE
---------- ----------
PDB$SEED   READ ONLY
PDB1       READ WRITE
PDB2       READ WRITE

 

Another prerequisite is that the user that connects to the referenced PDB has to be a common user:

SQL> select username, common from dba_users where username='C##DGOMEZ';

USERNAME   COM
---------- ---
C##DGOMEZ  YES

Another prerequisite is that the remote CDB, in this case CDB1, has to be configured with Local Undo:

SQL>SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME        PROPERTY_VALUE
-------------------- --------------------
LOCAL_UNDO_ENABLED   TRUE

In the previous image, you can see that there is a table with 1 row inserted. I will load these rows into the PDB1 and the PDB2 in CDB1 to make this environment match with the image:

SQL> alter session set container=pdb1;

Session altered.

SQL> create table c##dgomez.country (name varchar2(25));

Table created.

SQL> insert into c##dgomez.country values ('Brazil');

1 row created.

SQL> commit;

Commit complete.

 

The PDB2 of CDB1 will be our referenced PDB. In the image you can see that the value in the referenced PDB is “Guatemala”:

SQL> alter session set container=pdb2;

Session altered.

SQL> create table c##dgomez.country (name varchar2(25));

Table created.

SQL> insert into c##dgomez.country values ('Guatemala');

1 row created.

SQL> commit;

Commit complete.

 

The work in CDB is done. Two PDBs were created, the table was created and the rows were inserted. Now it’s time to configure CDB2 and create the Proxy PDB.

 

Preparation in CDB2:

We will start from the CDB$ROOT:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

 

I will create a common user in order to perform the example with the CONTAINERS clause. For more information about the CONTAINERS clause you can read my article “New CONTAINERS Clause in 12.1.0.2 - Common Perspective”.

SQL> create user c##dgomez identified by nuvola container=all;

User created.

SQL> grant connect, resource, unlimited tablespace to c##dgomez container=all;

Grant succeeded.

 

I will create the same table in CDB$ROOT in CDB2 and insert a row in order to follow the example in the image:

SQL> create table c##dgomez.country (name varchar2(25));

Table created.

SQL> insert into c##dgomez.country values ('USA');

1 row created. 

SQL> commit;

Commit complete.

 

Creating the PDB1 in CDB2:

SQL> create pluggable database pdb1 admin user pdbadmin identified by nuvola;

Pluggable database created.

 

Opening the PDB1 of CDB2:

SQL> alter pluggable database pdb1 open;

Pluggable database altered.

 

Creating the table country in the PDB1 of CDB2:

SQL> alter session set container=pdb1;

Session altered.

SQL> create table c##dgomez.country (name varchar2(25));

Table created.

SQL> insert into c##dgomez.country values ('Canada');

1 row created. 

SQL> commit; 

Commit complete.

 

Creation of “Proxy PDB”:

Well, so far everything we have done is only to build the environment in the example in the image shown at the beginning of this article. We have not seen how “Proxy PDB” works; I have only provided concepts and some prerequisites.  The next sentence creates a database link in the CDB$ROOT of CDB2. The database link is required only at the time of the Proxy PDB creation. Once the Proxy PDB has been created the database link is no longer required; Proxy PDB connects directly to the referenced PDB without using the database link.  

Note that the database link points directly to a common user in the PDB that will be the referenced PDB, in this case PDB2 of CDB1.

SQL> CREATE DATABASE LINK link_to_pdb2_in_cdb1 CONNECT TO c##dgomez IDENTIFIED BY nuvola USING '192.168.1.22:1521/pdb2';

Database link created.

 

Note that the database link uses the common user in CDB1; this was one of the prerequisites I mentioned before. The database link connects to the PDB2 in CDB1, since this will be our referenced PDB.

Once the database link is created, the next step is to create the Proxy PDB.

SQL> create pluggable database pdb2 AS PROXY FROM pdb2@link_to_pdb2_in_cdb1;

Pluggable database created.

 

And that’s it! The Proxy PDB was created successfully. I will proceed to open it in read/write to start using it:

SQL> alter pluggable database pdb2 open; 

Pluggable database altered.

 

Now it’s time to test how Proxy PDB works! Since the example in this article is based on the CONTAINERS clause, I will connect to the CDB$ROOT of CDB2 using password authentication and execute a query:

[oracle@nuvola2 ~]$ sqlplus c##dgomez/nuvola@'192.168.1.22:1521/cdb2'

SQL> show con_name

CON_NAME

------------------------------

CDB$ROOT

 

Note that the query from CDB$ROOT of CDB2 returns the value “Guatemala”; this is because of the Proxy PDB. The value “Guatemala” is not stored in the PDB2 of CDB2 (the Proxy PDB) but, as I said before, the Proxy PDB behaves transparently for all DDLs and DMLs, as if a normal PDB were there.

SQL> select name from containers(c##dgomez.country);

NAME
-------------------------
USA
Canada
Guatemala

There is a limitation on Proxy PDBs: they don’t support OS authentication. If you log in to CDB2 with OS authentication and try to run a query from the PDB2, you will get no data, because the Proxy PDB will not be able to connect to the referenced PDB and get the data from it. Proxy PDBs support only password authentication.

[oracle@nuvola2 ~]$  sqlplus  / as sysdba

SQL> show con_name

CON_NAME
-----------------------------
CDB$ROOT

SQL> select name from containers(c##dgomez.country);

NAME
-------------------------
USA
Canada 

If we connect with OS authentication to the PDB2 in CDB2 and try to execute a query, the query fails, saying that the password used is not correct. Of course, we know that no password was provided, since we used OS authentication.

[oracle@nuvola2 ~]$ sqlplus / as sysdba 

SQL> alter session set container=pdb2; 

Session altered.

SQL> select * from c##dgomez.country;
select * from c##dgomez.country
              *
ERROR at line 1:
ORA-01017: invalid username/password; logon denied
ORA-02063: preceding line from PROXYPDB$DBLINK

 

When we use password authentication, the SELECT works well:

[oracle@nuvola2 ~]$ sqlplus c##dgomez/nuvola@'192.168.1.22:1521/cdb2'

SQL>  alter session set container=pdb2;

Session altered.

SQL> select * from c##dgomez.country;

NAME
-------------------------
Guatemala

Now I will test an INSERT operation in the Proxy PDB, but since it is a Proxy, the operation will be executed in the referenced PDB, which means that the row will be stored in the referenced PDB:

SQL> insert into c##dgomez.country values ('Costa Rica');

1 row created.

SQL> commit;

Commit complete.

 

In PDB2 of CDB1, I will verify if the row was inserted there:

SQL> select name from v$database;

NAME

---------

DB12C

SQL> alter session set container=pdb2; 

Session altered.

SQL> select * from c##dgomez.country;

NAME
-------------------------
Guatemala
Costa Rica

This confirms that the Proxy PDB sends SELECTs and also INSERTS (DDLs+DMLs) to be processed inside the referenced PDB.

 

Conclusion:

We have seen that a Proxy PDB is a special PDB that receives operations (DDLs and DMLs) in a local CDB but sends all of them to its referenced PDB, where they are processed remotely. This brings us the advantage of “location transparency”: it doesn’t matter where the data is physically located. When we use Proxy PDBs, we can present a PDB in other CDBs as if the PDB that physically stores the data were there, and all the operations will be processed remotely. In 12.2.0.1.0, data can be used everywhere without actually having the data physically in all the sites.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala. He holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle Open World in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

Oracle Database 12cR2 new feature: Application Root Replica


By Deiby Gómez

 

Introduction:

In my previous articles we have seen concepts like “Application Containers” and “Proxy PDB”, which are new in Oracle Database 12cR2. With Application Containers, you can install applications in an Application Root and synchronize the application (its metadata and, optionally, its data) to Application PDBs. On the other hand, a Proxy PDB provides location transparency; this is useful when we want to access data or objects remotely from another Container Database (CDB). An advantage of a Proxy PDB is that we don’t have to copy all the data to the remote CDB in order to access the objects and their data; however, this is also a disadvantage: if something goes wrong with the Application Root in the master Application Container, all the remote Proxy PDBs in other CDBs will be broken. To avoid this, we would probably want a physical replica of all the objects and data in another remote Container Database. This is where a new feature called “Application Root Replica”, also introduced in 12.2.0.1.0, is helpful.

Application Root Replica is a physical replica of a master Application Root but in another remote Container Database. This lets us synchronize applications in an Application Container across different and remote Container Databases without using solutions like RMAN, Data Pump, or remote cloning. 

There are two methods to create an Application Root Replica:

  1. Create an empty application container and then synchronize the application.
  2. Clone the master application root.

In this article, I will show you a use-case example.

 

Preparation of the Environment:

With these steps I will create the environment described in the following image. I already have the two Container Databases, CDB1 and CDB2. So I will start by creating the Application Root “AppRoot” and the Application PDB “AppPDB1” in CDB1. I will create an application in “AppRoot” and I will sync that application to “AppPDB1”.  Then I will create the Application Root “AppRoot2” and the Application PDB “AppPDB2” in CDB2.

 

Creating an Application Root named “AppRoot”:

SQL> create pluggable database AppRoot as application container admin user pdbadmin identified by nuvola;

Pluggable database created.

SQL> alter pluggable database AppRoot open;

Pluggable database altered.

 

Creating the Application PDB named “AppPDB1”:

SQL> alter session set container=AppRoot;

Session altered.

SQL> show con_name

CON_NAME

------------------------------

APPROOT

 

SQL> create pluggable database AppPDB1 admin user pdbadmin identified by nuvola; 

Pluggable database created.

SQL>  alter pluggable database AppPDB1 open;

Pluggable database altered.

 

Installing the application named “MyApp” in the Application Root “AppRoot” in CDB1:

 

SQL> alter pluggable database application MyApp begin install '1.0';

Pluggable database altered.

SQL> create table c##dgomez.dataLinkedTable SHARING=DATA   (name varchar2(20));

Table created.

SQL> insert into c##dgomez.dataLinkedTable values ('Guatemala');

1 row created.

SQL> commit;

Commit complete.

SQL> alter pluggable database application MyApp end install '1.0';

Pluggable database altered.

 

Synchronizing the Application PDB “AppPDB1”:

SQL> alter session set container=AppPDB1;

Session altered.

SQL> alter pluggable database application MyApp sync;

Pluggable database altered.

 

Confirming that the table and data were synchronized:

SQL>  select * from c##dgomez.dataLinkedTable;

NAME

--------------------

Guatemala

 

In the Container Database “CDB2” I will create the Application Root named “AppRoot2”

SQL> create pluggable database AppRoot2 as application container admin user pdbadmin identified by nuvola;

Pluggable database created.

SQL> alter pluggable database AppRoot2 open;

Pluggable database altered.

 

Creating the Application PDB “AppPDB2” in CDB2:

SQL> alter session set container=AppRoot2;

Session altered.

SQL> show con_name

CON_NAME

------------------------------

APPROOT2

 

SQL> create pluggable database AppPDB2 admin user pdbadmin identified by nuvola;

Pluggable database created.

SQL>  alter pluggable database AppPDB2 open;

Pluggable database altered.

 

Confirming that the table c##dgomez.dataLinkedTable doesn’t exist in “AppPDB2”. This is just to confirm that the environment we have created matches with the previous image.

 

SQL> alter session set container=AppPDB2;

Session altered.

SQL> select * from c##dgomez.dataLinkedTable;
select * from c##dgomez.dataLinkedTable
              *
ERROR at line 1:
ORA-00942: table or view does not exist

 

The problem:

At this time we have two CDBs. CDB1 has an Application Container with one application installed. However, I also want that application in the Application Container that has already been created in CDB2, and I would like to be able to synchronize all the data whenever the “master” application receives any change. In the past, we would have used a full backup and restore with RMAN, an export and import with Data Pump, or even a materialized view; in 12.1.0.2.0 we would use “Remote PDB Cloning”. However, none of these solutions is ideal!

The solution:

The best solution to this problem is an “Application Root Replica”: a physical replica of one Application Root in another CDB. In this case our master Application Root is “AppRoot” in CDB1, and the Application Root Replica is “AppRoot2” in CDB2. The Application Root Replica uses a Proxy PDB to synchronize the data with the master Application Root. In the following image you can see that the Proxy PDB is created in CDB1; this is because the Proxy PDB will be seen as a normal PDB in the Application Container in CDB1, which means that the Proxy PDB will get the data (via synchronization) from the master Application Root. Since the referenced PDB of that Proxy PDB is “AppRoot2”, it is as if “AppRoot2” were physically located in CDB1. This is the concept of a Proxy PDB, and this is how “AppRoot2” gets all the data from “AppRoot”. Once the Application Root “AppRoot2” is synchronized with the Application Root “AppRoot” through the Proxy PDB, we have to synchronize the Application PDB “AppPDB2” in CDB2.

 

 

In the Application Root “AppRoot” in CDB1:

SQL> alter session set container=AppRoot; 

Session altered.

SQL> show con_name

CON_NAME

------------------------------

APPROOT

 

A Proxy PDB requires a database link, so the first step is to create one:

SQL>  CREATE DATABASE LINK link_to_AppRoot CONNECT TO c##dgomez IDENTIFIED BY nuvola USING '192.168.1.22:1521/approot2';

Database link created.

 

Note that the database link connects to the Application Root “AppRoot2” in CDB2.

Creating the Proxy PDB in CDB1:

SQL> create pluggable database ProxyPDB AS PROXY FROM approot2@link_to_AppRoot;

Pluggable database created.

SQL> alter pluggable database ProxyPDB open;

Pluggable database altered.

 

Unfortunately, Proxy PDBs don’t support OS authentication, so I have to open a session to “ProxyPDB” in CDB1 using password authentication:

[oracle@nuvola2 apex]$ sqlplus sys/manager1@'192.168.1.22:1521/ProxyPDB' as sysdba

 

The following step synchronizes “ProxyPDB”, which will automatically fill up the Application Root Replica called “AppRoot2” in CDB2:

SQL> alter pluggable database application MyApp sync;

Pluggable database altered.

 

If we connect to the Application Root Replica “AppRoot2” in CDB2, we will see that the application is physically there, along with its data.
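The connection step itself is not shown here; a minimal sketch, assuming the same host, port, and service name ('approot2') already used for the database link, and assuming the SYS password matches the one used earlier:

[oracle@nuvola2 ~]$ sqlplus sys/manager1@'192.168.1.22:1521/approot2' as sysdba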

SQL> show con_name

CON_NAME

------------------------------

APPROOT2

 

SQL> select app_name, app_version from dba_app_versions where app_name='MYAPP';

APP_NAME          APP_VERSION

-------------------- ------------------------------

MYAPP             1.0

 

So the application “MyApp” has been synchronized to the Application Root [Replica] “AppRoot2”. It’s time to synchronize all the Application PDBs in the Application Container in CDB2: 

SQL> alter session set container=AppPDB2;

Session altered.

SQL> show con_name

CON_NAME

------------------------------

APPPDB2

 

SQL> alter pluggable database application MyApp sync;

Pluggable database altered.

 

We can confirm that the application “MyApp” was successfully replicated from AppRoot to ProxyPDB in CDB1, from ProxyPDB in CDB1 to AppRoot2 in CDB2, and from AppRoot2 to AppPDB2 in CDB2:

SQL> select * from c##dgomez.dataLinkedTable;

NAME

--------------------

Guatemala

 

From now on, we only have to keep performing “SYNC” operations to replicate the data through the entire configuration spanning both Container Databases.
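For example, here is a minimal sketch of that recurring cycle after a new change is applied to the application in “AppRoot” in CDB1, reusing the connections shown earlier in this article.

In CDB1 (password authentication to the Proxy PDB, as before):

[oracle@nuvola2 ~]$ sqlplus sys/manager1@'192.168.1.22:1521/ProxyPDB' as sysdba
SQL> alter pluggable database application MyApp sync;

In CDB2 (connected to its root, as earlier):

SQL> alter session set container=AppPDB2;
SQL> alter pluggable database application MyApp sync;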

 

Conclusion:

We have seen in this article how to synchronize application data in an Application Container across Container Databases without using backup and recovery operations with RMAN, export and import with Data Pump, or Remote PDB Cloning. When working with Application Containers, both Proxy PDBs and Application Root Replicas are useful for replicating our installed applications to other Container Databases.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

How to run SQL Statements across Pluggable Databases with catcon.pl


Introduction:

Beginning with Oracle Database 12.1.0.1.0, DBAs started to work with Pluggable Databases. There were some large migrations of several databases from 10g/11g to 12c in which the databases were consolidated into a new Oracle container database as several Pluggable Databases. However, running operations in several Pluggable Databases became a problem, since people had to log in to every Pluggable Database and run the required script or SQL statement there. To save people from spending so much time on this kind of work, Oracle introduced the Perl script “catcon.pl”. Basically, catcon.pl receives either a script or the text of a SQL statement and executes it in the Pluggable Databases that we specify, and even in PDB$SEED and CDB$ROOT, depending on which catcon.pl flags are used. In the following image we see a script received by catcon.pl; catcon.pl executes the script in all the Pluggable Databases, and also in CDB$ROOT and PDB$SEED when the flag “-S” is not used.

 

Using catcon.pl considerably reduces the time spent on running scripts across several databases. One of its advantages is that you can filter the pluggable databases where you want to execute the script or SQL Statement by using “-C” for exclusion of pluggable databases and “-c” for inclusion of pluggable databases. You can also specify the order of the pluggable databases where the script or SQL statement has to be executed.

In this article we will use the environment described in the previous image. I will start creating the three pluggable databases and the scripts that will be executed across the PDBs:

SQL> create pluggable database PDB1 admin user pdbadmin identified by nuvola; 

Pluggable database created.

SQL> create pluggable database PDB2 admin user pdbadmin identified by nuvola;

Pluggable database created.

SQL> create pluggable database PDB3 admin user pdbadmin identified by nuvola;

Pluggable database created.

SQL> alter pluggable database all open;

Pluggable database altered.

SQL> show pdbs;

    CON_ID CON_NAME                OPEN MODE  RESTRICTED

---------- ------------------------------ ---------- ----------

        2 PDB$SEED                 READ ONLY  NO

        3 PDB1                     READ WRITE NO

        4 PDB2                     READ WRITE NO

        5 PDB3                     READ WRITE NO

 

Creating the Script #1:

The following script contains a CREATE TABLE statement, an INSERT statement, a commit and a SELECT statement. All these operations use the same table, C##DGOMEZ.COUNTRY.

[oracle@nuvola2 ~]$ pwd

/home/oracle

 

[oracle@nuvola2 ~]$ vi script.sql

[oracle@nuvola2 ~]$ cat script.sql

show con_name;

create table c##dgomez.country (name varchar2(20));

insert into c##dgomez.country values ('Guatemala');

commit;

select * from c##dgomez.country ;

[oracle@nuvola2 admin]$

 

Creating the Script #2:

This script doesn’t create any table; instead, it only inserts rows in the table C##DGOMEZ.COUNTRY

[oracle@nuvola2 admin]$ cat /home/oracle/script2.sql

insert into c##dgomez.country values ('Canada');

commit;

[oracle@nuvola2 admin]$

 

Running catcon.pl without “-S” flag:

When the flag “-S” is not used, catcon.pl executes the script or SQL statement in all the containers, including CDB$ROOT and PDB$SEED. Also, all the objects created by catcon.pl are created as “ORACLE_MAINTAINED”, which means they are owned by Oracle and cannot be modified by any database user. I don’t recommend using this method to create objects for the business or for our application schema; it is intended for running scripts for patching, migration, or any other task that touches the data dictionary or other Oracle-owned structures.

Moving to the directory where catcon.pl is located:

[oracle@nuvola2 ~]$ cd $ORACLE_HOME/rdbms/admin

 

Executing catcon.pl. The flag “-d” specifies where the script is located. The flag “-l” specifies the directory where the logs will be created. The flag “-b” specifies the prefix for the names of the generated log files, and the final value is the name of the script that catcon.pl will execute.

[oracle@nuvola2 admin]$  $ORACLE_HOME/perl/bin/perl catcon.pl -d /home/oracle -l /home/oracle/catcon_logs -b catcon-example script.sql

 

As you can see, the script was executed and it created the objects as “ORACLE_MAINTAINED”. The script was run against CDB$ROOT and also against PDB$SEED. In this example, the run failed in PDB$SEED because the schema c##dgomez didn’t exist within that container, so catcon.pl couldn’t create the table there.

SQL> select con_id, owner, object_name, object_type, ORACLE_MAINTAINED from cdb_objects where owner='C##DGOMEZ';

    CON_ID OWNER      OBJECT_NAM OBJECT_TYP ORACLE_MAINTAIN

---------- ---------- ---------- ---------- ---------------

        1 C##DGOMEZ  COUNTRY     TABLE     Y

        3 C##DGOMEZ  COUNTRY     TABLE     Y

        4 C##DGOMEZ  COUNTRY     TABLE     Y

        5 C##DGOMEZ  COUNTRY     TABLE     Y

 

Running catcon.pl with “-S” flag

I recommend using this flag when you are running a script or SQL statement that creates objects for your business application schema, like Script #1 or Script #2 created in this article; in other words, when you are running operations not related to patching, upgrades, or the data dictionary. When the flag “-S” is used, catcon.pl doesn’t execute the script in CDB$ROOT or in PDB$SEED.

[oracle@nuvola2 ~]$ cd $ORACLE_HOME/rdbms/admin

[oracle@nuvola2 admin]$  $ORACLE_HOME/perl/bin/perl catcon.pl -d /home/oracle -l /home/oracle/catcon_logs -S  -b catcon-example script.sql

catcon: ALL catcon-related output will be written to [/home/oracle/catcon_logs/catcon-example_catcon_26297.lst]

catcon: See [/home/oracle/catcon_logs/catcon-example*.log] files for output generated by scripts

catcon: See [/home/oracle/catcon_logs/catcon-example_*.lst] files for spool files, if any

catcon.pl: completed successfully

[oracle@nuvola2 admin]$

 

The logs will be generated in the directory “/home/oracle/catcon_logs” with the prefix “catcon-example” as it was specified:

[oracle@nuvola2 admin]$ ls -ltr /home/oracle/catcon_logs/

total 12

-rw-r--r-- 1 oracle oinstall  419 May  7 05:57 catcon-example_catcon_26297.lst

-rw-r--r-- 1 oracle oinstall 3371 May  7 05:58 catcon-example0.log

-rw-r--r-- 1 oracle oinstall 1922 May  7 05:58 catcon-example1.log

[oracle@nuvola2 admin]$

 

The script was executed only in the pluggable databases. It was not executed in CDB$ROOT or PDB$SEED, and the table was created as non-Oracle-maintained:

SQL> select con_id, owner, object_name, object_type, ORACLE_MAINTAINED from cdb_objects where owner='C##DGOMEZ'

    CON_ID OWNER      OBJECT_NAM OBJECT_TYP ORACLE_MAINTAINED

---------- ---------- ---------- ---------- -----------------

        3 C##DGOMEZ  COUNTRY     TABLE     N

        4 C##DGOMEZ  COUNTRY     TABLE     N

        5 C##DGOMEZ  COUNTRY     TABLE     N

 

We can verify that the table was created and the rows inserted in every PDB:

SQL> select con_id, name from containers(C##DGOMEZ.COUNTRY) ;

    CON_ID NAME

---------- --------------------

        1 Guatemala

        3 Guatemala

        4 Guatemala

        5 Guatemala

 

NOTE: I manually created the table in CDB$ROOT, just to make the CONTAINERS clause work.

In the following example I am using the flag “-c”, which is useful when we want inclusion. We have to provide the list of the PDBs where the script will be executed; in this example, the script will be executed only in PDB1 and PDB3. I will use Script #2, which performs only an INSERT operation.

[oracle@nuvola2 admin]$ $ORACLE_HOME/perl/bin/perl catcon.pl -d /home/oracle -l /home/oracle/catcon_logs -S -c 'PDB1 PDB3' -b catcon-example script2.sql

catcon: ALL catcon-related output will be written to [/home/oracle/catcon_logs/catcon-example_catcon_27384.lst]

catcon: See [/home/oracle/catcon_logs/catcon-example*.log] files for output generated by scripts

catcon: See [/home/oracle/catcon_logs/catcon-example_*.lst] files for spool files, if any

catcon.pl: completed successfully

[oracle@nuvola2 admin]$

 

We can verify that the script was executed only in PDB1 and PDB3 by querying the table c##dgomez.country:

[oracle@nuvola2 admin]$ sqlplus / as sysdba

SQL> select con_id, name from containers(C##DGOMEZ.COUNTRY) ;

 

    CON_ID NAME

---------- --------------------

        1 Guatemala

        3 Guatemala

        3 Canada

        4 Guatemala

        5 Guatemala

        5 Canada

8 rows selected.
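Similarly, the “-C” flag mentioned earlier handles exclusion. A minimal sketch, not executed as part of this article’s run (the excluded PDB and the log prefix are illustrative):

[oracle@nuvola2 admin]$ $ORACLE_HOME/perl/bin/perl catcon.pl -d /home/oracle -l /home/oracle/catcon_logs -S -C 'PDB2' -b catcon-exclusion script2.sql

This would run script2.sql in all the pluggable databases except PDB2 (and, because of “-S”, not in CDB$ROOT or PDB$SEED either).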

 

Conclusion:

When the multitenant architecture was introduced, the Perl script catcon.pl was also introduced to help run scripts across multiple pluggable databases. In this article we saw examples using different catcon.pl flags: the flags to include or exclude PDBs, the default mode that runs a script as if it were provided by Oracle, and the “-S” mode for creating objects in our application schema. The Perl script catcon.pl is certainly useful for avoiding the time wasted executing the same task in every PDB.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

Oracle 12cR2 RMAN New Feature: UNTIL AVAILABLE REDO


By Deiby Gómez

Introduction:

Oracle has introduced several new features in its new version, Oracle Database 12.2.0.1.0, and RMAN is no exception. Most DBAs would agree that one of the difficult tasks whenever a database needs to be restored is calculating the SCN or the sequence to use in the “RECOVER DATABASE UNTIL (…)” operation, in order to apply as many archived logs as possible and recover as much data as possible. Every DBA has a different method for discovering the target SCN or sequence: some use the “PREVIEW” clause, others the view v$log, others the RMAN “LIST” commands, and so on. The problem is that when the calculation is not correct, and the database being restored is huge (say 8 TB), an error in the “RECOVER” phase might force us to restore the whole database from scratch. In Oracle Database 12.2.0.1.0 the clause “UNTIL AVAILABLE REDO” is available. As its name indicates, this clause makes all the required calculations to recover the database up to the last available archived log. This is a really useful feature, since all the DBA has to do is catalog all the available archived logs and use “UNTIL AVAILABLE REDO” in the “RECOVER DATABASE” phase, and Oracle will do all the work. This also avoids human error in the calculations.
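In practice, the whole flow can be condensed into a few commands. A minimal sketch, assuming the archived logs sit under a Fast Recovery Area path like the one used later in this article (the CATALOG step is only needed for archived logs not already registered in the control file):

RMAN> catalog start with '/others/db1/fra/DB1/archivelog/' noprompt;
RMAN> restore database;
RMAN> recover database until available redo;
RMAN> alter database open resetlogs;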

In order to show how this feature works I will use an empty database with the table DGOMEZ.COUNTRY; currently it has no rows.  This database is in archivelog mode.

 

Performing a backup:

RMAN> backup database;

Starting backup at 07-MAY-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=53 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/others/db1/DB1/datafile/o1_mf_system_djyxzjxt_.dbf
input datafile file number=00003 name=/others/db1/DB1/datafile/o1_mf_sysaux_djyy0ynm_.dbf
input datafile file number=00004 name=/others/db1/DB1/datafile/o1_mf_undotbs1_djyy23sy_.dbf
input datafile file number=00007 name=/others/db1/DB1/datafile/o1_mf_users_djyy24y4_.dbf
channel ORA_DISK_1: starting piece 1 at 07-MAY-17
channel ORA_DISK_1: finished piece 1 at 07-MAY-17
piece handle=/others/db1/fra/DB1/backupset/2017_05_07/o1_mf_nnndf_TAG20170507T155509_djyywznq_.bkp tag=TAG20170507T155509 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:38
Finished backup at 07-MAY-17

Starting Control File and SPFILE Autobackup at 07-MAY-17
piece handle=/others/db1/fra/DB1/autobackup/2017_05_07/o1_mf_s_943372550_djyyy6vo_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 07-MAY-17

I will insert a row with the value ‘Guatemala’ into the table; the row will be committed, and a new archived log will be generated:

SQL> insert into dgomez.country values ('Guatemala');

1 row created.

SQL> commit;

Commit complete.

SQL> alter system switch logfile;

System altered.

 

A second row, with the value ‘Canada’, will be inserted into the table; the row will be committed, and a new archived log will be generated:

SQL> insert into dgomez.country values ('Canada');

1 row created.

SQL> commit;

Commit complete.

SQL> alter system switch logfile;

System altered.

 

A final row, with the value ‘Colombia’, will be inserted into the table; the row will be committed, and a new archived log will be generated:

SQL> insert into dgomez.country values ('Colombia');

1 row created 

SQL> commit;

Commit complete.

SQL> alter system switch logfile; 

System altered.

 

You can see that there were three archived logs created. This is because for every row that was inserted we executed a switch of the log file, and that resulted in the creation of a new archived log.

[oracle@nuvola2 2017_05_07]$ ls -ltr

total 155072

-rw-r----- 1 oracle dba 158784512 May  7 15:59 o1_mf_1_1_djyz5fgk_.arc

-rw-r----- 1 oracle dba      2560 May  7 16:00 o1_mf_1_2_djyz6dyd_.arc

-rw-r----- 1 oracle dba      3072 May  7 16:00 o1_mf_1_3_djyz723j_.arc

[oracle@nuvola2 2017_05_07]$
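The same information can also be confirmed from inside the database; a small sketch using V$ARCHIVED_LOG (output omitted):

SQL> select sequence#, name from v$archived_log order by sequence#;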

 

Confirming the three rows are in the table:

SQL> select * from dgomez.country;

NAME

--------------------

Guatemala

Canada

Colombia

 

Basically what I have done is what the following picture explains.  Initially the database was empty. The row with the value ‘Guatemala’ was inserted and then I generated an archived log (#1). I repeated these steps with the value ‘Canada’ and ‘Colombia’ respectively.

 

First Test – Using all the archived logs generated:

The first test that I will perform is to use these three newly generated archived logs to recover the database. For this I will simulate that all the datafiles of the existing database were deleted and we have to restore and recover the database.

Shutting down the existing database:

SQL> shutdown immediate;

Database closed.

Database dismounted.

ORACLE instance shut down.

 

Mounting the database:

SQL> startup mount;

ORACLE instance started.

 

Total System Global Area  843055104 bytes

Fixed Size              8626288 bytes

Variable Size         322965392 bytes

Database Buffers      507510784 bytes

Redo Buffers            3952640 bytes

Database mounted.

 

Deleting datafiles and online logs in order to simulate storage damage:

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/DB1/datafile/*

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/DB1/onlinelog/*

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/fra/DB1/onlinelog/*

 

Restoring the database:

RMAN> restore database;

Starting restore at 07-MAY-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=37 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /others/db1/DB1/datafile/o1_mf_system_djyxzjxt_.dbf
channel ORA_DISK_1: restoring datafile 00003 to /others/db1/DB1/datafile/o1_mf_sysaux_djyy0ynm_.dbf
channel ORA_DISK_1: restoring datafile 00004 to /others/db1/DB1/datafile/o1_mf_undotbs1_djyy23sy_.dbf
channel ORA_DISK_1: restoring datafile 00007 to /others/db1/DB1/datafile/o1_mf_users_djyy24y4_.dbf
channel ORA_DISK_1: reading from backup piece /others/db1/fra/DB1/backupset/2017_05_07/o1_mf_nnndf_TAG20170507T155509_djyywznq_.bkp
channel ORA_DISK_1: piece handle=/others/db1/fra/DB1/backupset/2017_05_07/o1_mf_nnndf_TAG20170507T155509_djyywznq_.bkp tag=TAG20170507T155509
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 07-MAY-17

 

Recovering the database: Here is where the magic happens. All we have to do is use the “UNTIL AVAILABLE REDO” clause, and Oracle automatically applies all the archived logs registered in its control file (or in a recovery catalog, if one is used). There is no need to calculate the target SCN.

RMAN> recover database until available redo;

Starting recover at 07-MAY-17
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 1 is already on disk as file /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc
archived log for thread 1 with sequence 2 is already on disk as file /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_2_djyz6dyd_.arc
archived log for thread 1 with sequence 3 is already on disk as file /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_3_djyz723j_.arc
archived log file name=/others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc thread=1 sequence=1
archived log file name=/others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_2_djyz6dyd_.arc thread=1 sequence=2
archived log file name=/others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_3_djyz723j_.arc thread=1 sequence=3
warning: attempt media recovery until thread 1, sequence 4
Finished recover at 07-MAY-17

We can see that the three archived logs were applied automatically and there were no errors.

Opening the database in resetlogs:

SQL> alter database open resetlogs; 

Database altered.

 

Verification of the data:

SQL> select * from dgomez.country;

NAME

--------------------

Guatemala

Canada

Colombia

 

Since the three rows are there, we can confirm that Oracle indeed applied the three archived logs automatically, without our having to specify any target SCN or target sequence.

 

Second Test – Deleting the last two archived logs:

The test that I will perform now has the last two archived logs deleted and only the first archived log available. I will again use the UNTIL AVAILABLE REDO clause, and Oracle should be able to discover that the latest point to which the database can be recovered is right after the first row (the one with the value ‘Guatemala’) was inserted.

Shutting down the existing database:

SQL> shutdown immediate;

Database closed.

Database dismounted.

ORACLE instance shut down.

 

Mounting the database:

SQL> startup mount;

ORACLE instance started.

 

Total System Global Area  843055104 bytes

Fixed Size              8626288 bytes

Variable Size         322965392 bytes

Database Buffers      507510784 bytes

Redo Buffers            3952640 bytes

Database mounted.

 

Deleting datafiles and online logs in order to simulate storage damage:

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/DB1/datafile/*

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/DB1/onlinelog/*

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/fra/DB1/onlinelog/*

 

Confirming that our three archived logs are there:

[oracle@nuvola2 2017_05_07]$ ls -ltr  /others/db1/fra/DB1/archivelog/2017_05_07/*

-rw-r----- 1 oracle dba 158784512 May  7 15:59 /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc

-rw-r----- 1 oracle dba      2560 May  7 16:00 /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_2_djyz6dyd_.arc

-rw-r----- 1 oracle dba      3072 May  7 16:00 /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_3_djyz723j_.arc

 

Deleting the last two archived logs that were generated:

[oracle@nuvola2 2017_05_07]$ rm -rf  /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_3_djyz723j_.arc

[oracle@nuvola2 2017_05_07]$ rm -rf /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_2_djyz6dyd_.arc

 

Confirming that only the first archived log is available now:

[oracle@nuvola2 2017_05_07]$ ls -ltr  /others/db1/fra/DB1/archivelog/2017_05_07/*

-rw-r----- 1 oracle dba 158784512 May  7 15:59 /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc

[oracle@nuvola2 2017_05_07]$

 

The following image explains what we are doing. We deleted the last two generated archived logs in order to test whether Oracle is aware of this and automatically handles the situation by applying all the redo data in the first archived log. If Oracle performs its job well, at the end we will see only one inserted row, the one with the value ‘Guatemala’.

 

Restoring the database:

RMAN> restore database;

Starting restore at 07-MAY-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=44 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /others/db1/DB1/datafile/o1_mf_system_djyznwbl_.dbf
channel ORA_DISK_1: restoring datafile 00003 to /others/db1/DB1/datafile/o1_mf_sysaux_djyznwby_.dbf
channel ORA_DISK_1: restoring datafile 00004 to /others/db1/DB1/datafile/o1_mf_undotbs1_djyznwc9_.dbf
channel ORA_DISK_1: restoring datafile 00007 to /others/db1/DB1/datafile/o1_mf_users_djyznwcn_.dbf
channel ORA_DISK_1: reading from backup piece /others/db1/fra/DB1/backupset/2017_05_07/o1_mf_nnndf_TAG20170507T155509_djyywznq_.bkp
channel ORA_DISK_1: piece handle=/others/db1/fra/DB1/backupset/2017_05_07/o1_mf_nnndf_TAG20170507T155509_djyywznq_.bkp tag=TAG20170507T155509
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:22
Finished restore at 07-MAY-17

 

Recovering the database:

RMAN> recover database until available redo;

Starting recover at 07-MAY-17
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 1 is already on disk as file /others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc
archived log file name=/others/db1/fra/DB1/archivelog/2017_05_07/o1_mf_1_1_djyz5fgk_.arc thread=1 sequence=1
warning: attempt media recovery until thread 1, sequence 2
Finished recover at 07-MAY-17

 

You can see that Oracle discovered that only one archived log was available and automatically calculated the target sequence up to which the database could be recovered.

Opening the database with resetlogs:

RMAN> alter database open resetlogs; 

Statement processed

 

Confirming the data:

RMAN> select * from dgomez.country;

NAME               

--------------------

Guatemala          

 

We can see that the result is correct. Since only the first archived log was applied, only the row with the value ‘Guatemala’ exists in the table.

 

Conclusion:

The ‘UNTIL AVAILABLE REDO’ clause is definitely something DBAs have been waiting for, since it eliminates the time spent calculating the target SCN or sequence and removes the risk of a human error in the calculations that might force us to restore the entire database from scratch. That might be acceptable for small databases, but for huge, multi-terabyte databases it is not. Oracle has made our life easier.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn

How to analyze Undo statistics to proactively avoid undo space issues


By Deiby Gómez

Introduction

In my previous articles I explained two very important concepts about undo data: how Oracle manages the retention time, and how Oracle reuses undo extents. You can also check my presentation "How to avoid ORA-01555" if you want to know more about that error. In this article, I will show you how the view V$UNDOSTAT can give you useful information about how things are going with the undo data in your database. First, let me give brief definitions of two views:

V$UNDOSTAT: Each row in the view keeps statistics collected in the instance for a 10-minute interval. The rows are in descending order by the BEGIN_TIME column value. Each row belongs to the time interval marked by (BEGIN_TIME, END_TIME). Each column represents the data collected for the particular statistic in that time interval. The first row of the view contains statistics for the (partial) current time period. The view contains a total of 576 rows, spanning a 4 day cycle.

DBA_HIST_UNDOSTAT: This view contains snapshots of V$UNDOSTAT. Basically, it holds the history of V$UNDOSTAT.

As you can see, the main view is V$UNDOSTAT; the other is just its history. There are several columns in the view. Here are the ones we’ll focus on:

UNDOBLKS: Represents the total number of undo blocks consumed. You can use this column to obtain the consumption rate of undo blocks, and thereby estimate the size of the undo tablespace needed to handle the workload on your system

TXNCOUNT: Identifies the total number of transactions executed within the period

UNXPBLKREUCNT: Number of unexpired undo blocks reused by transactions

EXPBLKRELCNT: Number of expired undo blocks stolen from other undo segments

ACTIVEBLKS: Total number of blocks in the active extents of the undo tablespace for the instance at the sampled time in the period

UNEXPIREDBLKS: Total number of blocks in the unexpired extents of the undo tablespace for the instance at the sampled time in the period

EXPIREDBLKS: Total number of blocks in the expired extents of the undo tablespace for the instance at the sampled time in the period.

NOSPACEERRCNT: Identifies the number of times space was requested in the undo tablespace and there was no free space available. That is, all of the space in the undo tablespace was in use by active transactions. The corrective action is to add more space to the undo tablespace.

These columns can be combined in some interesting ways that every DBA can use to tune undo data generation. If we combine UNDOBLKS and TXNCOUNT, for instance, we can find out the consumption rate of undo blocks per transaction. Use the following queries:

select min(UNDOBLKS/TXNCOUNT), avg(UNDOBLKS/TXNCOUNT), max(UNDOBLKS/TXNCOUNT) from V$UNDOSTAT;

select BEGIN_TIME, END_TIME, UNDOBLKS/TXNCOUNT from V$UNDOSTAT;

You can also combine UNDOBLKS, the undo tablespace’s block size, and the retention time in order to estimate how many MB your undo tablespace needs in order to satisfy a specific retention time.
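A commonly used sketch of that calculation is shown below. It multiplies the configured undo_retention by the undo block consumption rate per second and by the block size (approximated here with db_block_size), so treat the result as an estimate rather than an exact requirement:

SQL> select round((ur.value * us.ups * bs.value) / 1024 / 1024) as required_undo_mb
       from (select value from v$parameter where name = 'undo_retention') ur,
            (select sum(undoblks) / sum((end_time - begin_time) * 86400) as ups
               from v$undostat) us,
            (select value from v$parameter where name = 'db_block_size') bs;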

And even more interesting, we can extract the data from V$UNDOSTAT in a CSV format and create line charts in order to understand the undo behavior of our databases.
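For example, a minimal SQL*Plus sketch to spool those columns to a CSV file (SET MARKUP CSV ON requires the 12.2 SQL*Plus client; the file name is arbitrary):

SQL> set markup csv on
SQL> spool /tmp/undostat.csv
SQL> select begin_time, end_time, nospaceerrcnt, activeblks, unexpiredblks, expiredblks
       from v$undostat order by begin_time;
SQL> spool off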

Let’s see how this would work. As an example, I have created a 12.2.0.1 EE database, where I have loaded some workload with SLOB. The SLOB was configured to perform 95% UPDATES and 5% SELECTs, a WORK_UNIT=8192, 5 SLOB schemas and 5 threads per schema in order to generate a lot of undo data. 

For each chart that I will show, SLOB was running for around 60 minutes. This means that we will have 6 rows in V$UNDOSTAT, since every row is a sample of 10 mins.

Before you study the charts, I really recommend that you first read these two articles to master the two principal concepts:

How does Oracle reuse the Expired and Unexpired undo extents?

Undo retention time with autoextend=on and autoextend=off

Let’s begin. The following charts use the columns NOSPACEERRCNT, ACTIVEBLKS, UNEXPIREDBLKS, and EXPIREDBLKS (but you can build more complex charts using the other columns of V$UNDOSTAT).

First type of workload 

The chart below characterizes an OLTP database; the database is receiving transactions (because there are active undo extents) but the transactions seem to happen infrequently since most of the undo extents are "expired" and the active extents have not increased enough to require reusing expired/unexpired extents.

If you have your undo data behavior looking like this chart, you would say your database is healthy from an undo space perspective. This would be a "perfect" environment. In this chart, there is no reason to be worried regarding undo space.

 

First Workload Example

Second type of workload

This workload is quite different. In the previous chart, the higher line was “Expired Blocks” and the lower line was “Unexpired Blocks”; in this second chart, that is reversed, and the higher line is “Unexpired Blocks”. This means that the database is receiving the workload and the undo retention time is high enough to keep the undo data of completed transactions (unexpired extents) stored.

Here, you have to review whether there are Unexpired extents that are being reused by new transactions. This happens more frequently when the line of Unexpired extents is getting close to the line of the active extents (the next two charts). If you see that “UNXPBLKREUCNT” has a value greater than one, you probably should tune undo retention. If the undo retention has the value that you require, then you can increase the size of your undo tablespace; otherwise, unexpired extents will be overwritten by other transactions if Oracle requires it. In that case you would see some ORA-01555 in your SELECT operations.
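Where that retention tuning is warranted, a minimal sketch is shown below; the 3600-second value is purely illustrative and should come from your own analysis:

SQL> show parameter undo_retention
SQL> alter system set undo_retention=3600 scope=both;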

In the chart below, however, there is no reason to be worried regarding space.

Second Workload Example

Third type of workload

The chart below is very similar to the previous one; however, in this chart the line of “Unexpired extents” is closer to the line of Active extents. This behavior increases the probability of getting ORA-01555 in your SELECT operations. If you want to avoid ORA-01555, you can increase the undo retention time or increase the size of the undo tablespace.

In this chart, there is no reason to be worried regarding space, only about ORA-01555; but you should look a little deeper, because if you don’t pay attention, your database might reach the status of either of the two charts we’ll look at next.

Third Workload Example

Fourth type of workload 

This chart indicates a worse situation than the two previous charts. Here, the number of transactions has increased to the point that the number of active undo extents has also increased and has started to overwrite (reuse) some unexpired undo extents.

In a database with this undo behavior there will surely be some SELECTs failing with ORA-01555, and space issues will be around the corner. In this case I recommend that you analyze deeply why unexpired undo extents have started to be reused.

If you just ignore the status shown in this chart, your database will at some point reach the behavior shown in the next chart. There will be space problems, and your transactions (INSERT, UPDATE, DELETE) will start failing because there is no free space in the undo tablespace to be assigned for new extents.


Fourth Workload Example

Fifth type of workload

You should avoid having your database in this status as much as possible. In this status, some transactions (INSERT, UPDATE, DELETE) have already started to fail because there was no free space in the undo tablespace to create new active undo extents. You should definitely increase the size of some of the undo tablespace’s datafiles, as sketched below.
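A minimal sketch of that corrective action, assuming the default undo tablespace name UNDOTBS1; the datafile path and target size are illustrative placeholders, not values from this article’s environment:

SQL> select file_name, bytes/1024/1024 as mb, autoextensible
       from dba_data_files where tablespace_name = 'UNDOTBS1';

SQL> alter database datafile '/path/to/undotbs01.dbf' resize 10G;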


Fifth Workload Example

I’ve just shown you five charts created from the view V$UNDOSTAT, which lets you chart up to 4 days of historical data. You could use DBA_HIST_UNDOSTAT if you want to chart further back in the past.
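A starting-point sketch for querying the history view, which exposes the same statistic columns keyed by AWR snapshot:

SQL> select begin_time, end_time, undoblks, txncount, nospaceerrcnt
       from dba_hist_undostat order by begin_time;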

Determining the proper undo tablespace size

Oracle provides the function dbms_undo_adv.required_undo_size, which you can use to determine the proper undo tablespace size to comply with a specific undo retention time.

SQL> SELECT 'The Required undo tablespace size using Statistics In Memory is ' || dbms_undo_adv.required_undo_size(128) || ' MB' required_undo_size FROM dual;

REQUIRED_UNDO_SIZE

--------------------------------------------------------------------------------

The Required undo tablespace size using Statistics In Memory is 79 MB

You can use this function as a starting point, but I recommend that you set the size of the undo tablespace based on your analysis of the behavior and historic statistics of your undo data.

Conclusion

In this article I demonstrated that the view V$UNDOSTAT has very useful information that you can review, or even better, that you can chart. You can build charts as complex as you want in order to analyze the behavior of your database from the undo usage perspective and then make decisions to properly tune undo retention time and undo tablespace size.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn
