Introduction
Oracle introduced Pluggable Databases in Oracle 12.1.0.1 back in 2013, and since then it has kept enhancing the Multitenant feature set. In Oracle 12.1.0.1 we had features to convert non-CDBs to PDBs and some features to copy and move pluggable databases. In Oracle 12.1.0.2 we saw several features for cloning (mostly remotely). However, before 12.2 all those features to create, copy, clone and move pluggable databases required downtime, since the source PDB had to be read-only, and downtime is a bad word for companies. To read how to clone, move and create PDBs in 12.1 you can read the following set of articles:
- How to create a PDB from a remote Non-CDB
- How to clone a PDB from a remote CDB
- PDB Subset Cloning from a Remote non-CDB
- Creating a PDB with no data using PDB Metadata Cloning
- How to move a non-CDB into a PDB with DBMS_PDB
Beginning with Oracle Database 12.2.0.1 those features were enhanced and the downtime went away; the key words now are "hot" and "online". Two features that I really like are "Hot Cloning" and "Online Relocation". Basically it is the same functionality as in 12.1.0.2 for cloning locally and remotely, but now it can be done online: the source PDB can stay in read-write. First let me tell you the difference between Hot Cloning and Online Relocation.
Hot Cloning: This feature allows you to "clone" a pluggable database either locally or remotely without putting the source PDB in read-only. It can stay in read-write, receiving DMLs the whole time.
Online Relocation: This feature allows you to "relocate" a pluggable database to another CDB with near zero downtime. After the PDB is transferred, the source PDB in the source CDB is removed; that's why it is called "relocation".
In this article we will discuss the feature "Near Zero Downtime PDB Relocation". This feature depends on another new feature introduced in Oracle 12.2 called "Local Undo". If you don't know what Local Undo is, you can read some of my articles about it:
- ¿A bug after configuring Local Undo in Oracle 12.2?
- Oracle DB 12.2 Local Undo: PDB undo tablespace creation
- How to Enable and Disable Local Undo in Oracle 12.2
Online PDB Relocation uses a database link that is created in the target CDB pointing to the CDB$ROOT of the source CDB. There are some privileges that we have to grant, but that is discussed later in the examples. Once the database link is created, the statement "CREATE PLUGGABLE DATABASE" is executed with the clause "RELOCATE" and the optional clause "AVAILABILITY (NORMAL|MAX|HIGH)". When RELOCATE is used, Oracle creates an identical pluggable database in the target CDB while the source PDB is still open in read-write. While the new PDB in the target CDB is being created, you can execute your DMLs against the source PDB as if nothing was happening; that's why it is called "online". When the statement completes, you will have two identical PDBs, one in the source CDB and another in the target CDB. During this time the source PDB keeps generating redo, which will be applied when the final "switch" is performed. That switch happens when the new PDB in the target CDB is opened in read-write: the source PDB is paused while the pending redo is applied to the new PDB, and once both are fully synchronized, Oracle applies undo data in the new PDB to roll back the uncommitted transactions that were running in the source PDB. Once the undo is applied, the source PDB is deleted (all its datafiles) and the clients' sessions can be redirected to the new PDB. Even if new sessions are being created during this short step, Oracle can redirect them to the new PDB if the clause "AVAILABILITY" was used. With this feature, PDBs can now be relocated from one CDB to another with near zero downtime.
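In short, the whole operation boils down to two statements executed on the target CDB. Here is a minimal sketch using placeholder names (the real names of my environment are used later in the examples):
SQL> -- Copy phase: creates an identical PDB in the target CDB while the source stays in read-write
SQL> create pluggable database mypdb from mypdb@dblink_to_source_root relocate availability max;
SQL> -- Relocation phase: opening the new PDB applies the pending redo, rolls back uncommitted
SQL> -- transactions with undo, removes the source PDB's datafiles and moves the service
SQL> alter pluggable database mypdb open;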
In this article I will explain step by step how this feature works.
First, let me show you the environment that I am using:
source CDB: NuvolaCG
target CDB: Nuvola2
source PDB: pdbsource
database version: 12.2.0.1 (in both CDBs)
Both CDBs are running on Oracle Public Cloud (EE)
The article has the following sections:
- Preparation
- Copy-Phase
- Relocation-Phase
- Known Issues
- Conclusion
Preparation
Create a common user in source CDB:
SQL> create user c##deiby identified by deiby container=all;
User created.
Granting privileges in source CDB:
SQL> grant connect, sysoper, create pluggable database to c##deiby container=all;
Grant succeeded.
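If you want to double-check those grants, the following queries (just a sketch, output omitted) should show them for C##DEIBY in every container:
SQL> -- CREATE PLUGGABLE DATABASE is a system privilege
SQL> select grantee, privilege, con_id from cdb_sys_privs where grantee = 'C##DEIBY';
SQL> -- CONNECT is a role
SQL> select grantee, granted_role, con_id from cdb_role_privs where grantee = 'C##DEIBY';
SQL> -- SYSOPER shows up in the password file
SQL> select username, sysoper, con_id from v$pwfile_users where username = 'C##DEIBY';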
Create a database link in target CDB pointing to CDB$ROOT of source CDB:
SQL> create database link dblinktosource connect to c##deiby identified by deiby using 'NUVOLACG';
Database link created.
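Here 'NUVOLACG' is a TNS alias that resolves to the CDB$ROOT service of the source CDB. As a hedged sketch (host, port and service name below are assumptions for illustration, not taken from my environment), the tnsnames.ora entry on the target host could look like this, and a quick select over the link confirms connectivity:
NUVOLACG =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nuvoladb.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = nuvolacg.example.com))
  )
SQL> select * from dual@dblinktosource;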
Both the source CDB and the target CDB are in archivelog mode and both have Local Undo enabled:
SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';
PROPERTY_NAME PROPERTY_V
-------------------- ----------
LOCAL_UNDO_ENABLED TRUE
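If Local Undo were not enabled yet, it can only be switched on while the CDB is in UPGRADE mode. This is a minimal sketch of the checks and the switch (run as SYSDBA in CDB$ROOT of each CDB):
SQL> -- archivelog check
SQL> select log_mode from v$database;
SQL> -- enabling Local Undo requires UPGRADE mode
SQL> shutdown immediate
SQL> startup upgrade
SQL> alter database local undo on;
SQL> shutdown immediate
SQL> startup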
Copy Phase
Before the relocation process, the service your clients connect to is running in the source CDB:
[oracle@NuvolaDB Hash]$ lsnrctl service
Service "pdbsource.gtnuvolasa.oraclecloud.internal" has 1 instance(s).
Instance "NuvolaCG", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[oracle@NuvolaDB Hash]$
In another terminal, I created a session in the source PDB and performed two INSERT operations, one committed and one left uncommitted. With this I will show you how committed and uncommitted transactions are handled:
SID SERIAL# USERNAME MACHINE
------ ---------- ---------- --------------------
367 44360 DGOMEZ NuvolaDB.compute-gtn
SQL> insert into test values ('Guatemala');
SQL> commit;
Commit complete.
SQL> insert into test values ('USA');
1 row created.
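For completeness, the demo table itself is not shown in the article; it is just a small one-column table owned by DGOMEZ, so something along the lines of this sketch (the exact column definition is an assumption):
SQL> create table dgomez.test (value varchar2(20));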
Start the copy-phase:
In this phase the datafiles will be created in the target CDB and some redo will also be applied:
SQL> create pluggable database pdbsource from pdbsource@dblinktosource keystore IDENTIFIED BY "Nuv0la#1" relocate availability max;
Pluggable database created.
- pdbsource (before FROM) is the name of the new PDB; it must be the same name as the source PDB.
- pdbsource (after FROM) is the name of the source PDB.
- dblinktosource is the name of the database link.
- keystore identified by - not needed on-premises (see the simplified statement right after this list), but this is Oracle Public Cloud.
- relocate is the clause that makes this operation a PDB relocation.
- availability max - redirects new connections to the new PDB.
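On-premises, where there is no mandatory keystore, the same relocation would be started with a simpler statement, something like this sketch:
SQL> create pluggable database pdbsource from pdbsource@dblinktosource relocate availability max;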
You can see that the status of the source PDB is now "RELOCATING":
SQL> select pdb_name, status from cdb_pdbs;
PDB_NAME STATUS
-------------------- ----------
PDB$SEED NORMAL
PDBSOURCE RELOCATING
In this phase some redo is also applied, as you can see below in the alert log:
create pluggable database pdbsource from pdbsource@dblinktosource keystore identified by * relocate availability max
Pluggable Database PDBSOURCE with pdb id - 3 is created as UNUSABLE.
local undo-1, localundoscn-0x000000000011f500
Applying media recovery for pdb-4099 from SCN 1206636 to SCN 1206649
thr-1, seq-39, logfile-/u03/app/oracle/fast_recovery_area/NUVOLACG/foreign_archivelog/PDBSOURCE/2017_02_12/o1_mf_1_39_2572397703_.arc, los-1199468, nxs-18446744073709551615
PDBSOURCE(3):Media Recovery Start
PDBSOURCE(3):Serial Media Recovery started
PDBSOURCE(3):Media Recovery Log /u03/app/oracle/fast_recovery_area/NUVOLACG/foreign_archivelog/PDBSOURCE/2017_02_12/o1_mf_1_39_2572397703_.arc
PDBSOURCE(3):Incomplete Recovery applied until change 1206649 time 02/12/2017 08:20:48
PDBSOURCE(3):Media Recovery Complete (Nuvola2)
Completed: create pluggable database pdbsource from pdbsource@dblinktosource keystore identified by * relocate availability max
Once this phase has completed there will be two PDBs, one in the source CDB and another in the target CDB. The source PDB can still receive transactions; the transactions executed after the copy-phase generate redo data, which will be applied in the relocation-phase.
Relocation Phase
In the terminal where I have the session created, I will perform a couple more INSERTs. Be aware that these statements were executed after the copy-phase, to show you that the source PDB can still receive DMLs; that's why it is called "online":
SQL> rollback;
Rollback complete.
SQL> insert into test values ('Canada');
SQL> commit;
Commit complete.
SQL> insert into test values ('Nicaragua');
1 row created.
The relocation phase happens when you open the new PDB in the target CDB in read-write. The source PDB is paused, the new PDB is opened, and the source PDB is closed and its datafiles are deleted. (You may have to execute this statement twice; check the Known Issues section at the end of this article.)
SQL> alter pluggable database pdbsource open;
Pluggable database altered.
After the relocation phase is completed, in the source CDB you are still able to see the source PDB, but only its metadata; its datafiles were physically removed and the status of the PDB is now "RELOCATED":
SQL> select pdb_name, status from cdb_pdbs;
PDB_NAME STATUS
------------ ----------
PDB$SEED NORMAL
PDBSOURCE RELOCATED
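If you want to get rid of that leftover metadata entry in the source CDB, a regular DROP PLUGGABLE DATABASE should be enough; this is a sketch of the cleanup I would expect to work (the datafiles are already gone, so the default KEEP DATAFILES form is the safe one):
SQL> -- run in CDB$ROOT of the source CDB
SQL> drop pluggable database pdbsource keep datafiles;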
And in the target CDB you will see the new PDB, opened in read-write and ready to receive sessions:
SQL> select pdb_name, status from cdb_pdbs;
PDB_NAME STATUS
----------- ----------
PDB$SEED NORMAL
PDBSOURCE NORMAL
Now let's confirm that the PDB relocation was indeed online. The value 'Guatemala' was committed. The value 'USA' was rolled back (after the copy-phase). The value 'Canada' was committed, and the value 'Nicaragua' was left uncommitted. So only 'Guatemala' and 'Canada' should be present in the new PDB, since all the uncommitted transactions were rolled back in the relocation-phase:
SQL> alter session set container=pdbsource;
Session altered.
SQL> select * from dgomez.test;
VALUE
--------------------
Guatemala
Canada
In this phase the redo generated after the copy-phase is applied and all the uncommitted transactions are rolled back using undo data. There are also some validations, and the service is relocated as well, as the alert log shows:
alter pluggable database pdbsource open
PDBSOURCE(3):Deleting old file#6 from file$
PDBSOURCE(3):Deleting old file#7 from file$
PDBSOURCE(3):Deleting old file#8 from file$
PDBSOURCE(3):Deleting old file#9 from file$
PDBSOURCE(3):Adding new file#6 to file$(old file#6)
PDBSOURCE(3):Adding new file#7 to file$(old file#7)
PDBSOURCE(3):Adding new file#8 to file$(old file#8)
PDBSOURCE(3):Adding new file#9 to file$(old file#9)
PDBSOURCE(3):Successfully created internal service pdbsource.gtnuvolasa.oraclecloud.internal at open
****************************************************************
Post plug operations are now complete.
Pluggable database PDBSOURCE with pdb id - 3 is now marked as NEW.
****************************************************************
PDBSOURCE(3):Pluggable database PDBSOURCE dictionary check beginning
PDBSOURCE(3):Pluggable Database PDBSOURCE Dictionary check complete
PDBSOURCE(3):Database Characterset for PDBSOURCE is US7ASCII
Pluggable database PDBSOURCE opened read write
Completed: alter pluggable database pdbsource open
In the middle of this alert log output, Oracle also applies a semi-patching step.
Now the service that our customers use to connect is running in the new CDB. This is totally transparent to the customers; you don't have to send them a new connection string.
[oracle@NuvolaDB trace]$ lsnrctl service
Service "pdbsource.gtnuvolasa.oraclecloud.internal" has 2 instance(s).
Instance "Nuvola2", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
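To illustrate that transparency, a client can keep connecting with the same service-based connect string as before; the sketch below uses a placeholder host name, not the real one:
SQL> connect dgomez/<password>@//nuvoladb.example.com:1521/pdbsource.gtnuvolasa.oraclecloud.internal
SQL> show con_name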
Known Issues
Privileges:
The documentation says the common user needs "SYSDBA" or "SYSOPER"; however, I did a couple of tests with "SYSDBA" and it didn't work, I received the error "ORA-01031: insufficient privileges". That's why I granted SYSOPER in the Preparation section.
Target CDB in Shared Undo Mode:
The target CDB doesn't need to be in Local Undo mode; in that case the new PDB being relocated will be converted to "Shared Undo". But you could have an issue here: if your source PDB is receiving several DMLs, when you try to open the new PDB in read-write you may get a message saying "unrecovered txns found". In that case you must clear those unrecovered transactions yourself and then re-execute "alter pluggable database open".
alter pluggable database pdbsource open
Applying media recovery for pdb-4099 from SCN 1207258 to SCN 1207394
thr-1, seq-39, logfile-/o1_mf_1_39_2572397703_.arc, los-1199468, nxs-18446744073709551615
PDBSOURCE(3):Media Recovery Start
PDBSOURCE(3):Serial Media Recovery started
PDBSOURCE(3):Media Recovery Log /o1_mf_1_39_2572397703_.arc
PDBSOURCE(3):Incomplete Recovery applied until change 1207394 time 02/12/2017 08:41:11
PDBSOURCE(3):Media Recovery Complete (Nuvola2)
PDBSOURCE(3):Zero unrecovered txns found while converting pdb(3) to shared undo mode, recovery not necessary
PDB PDBSOURCE(3) converted to shared undo mode, scn: 0x000000008a2f90c0
Applying media recovery for pdb-4099 from SCN 1207394 to SCN 1207446
PDBSOURCE(3):Media Recovery Start
PDBSOURCE(3):Serial Media Recovery started
PDBSOURCE(3):Media Recovery Log /u03/app/oracle/fast_recovery_area/NUVOLACG/foreign_archivelog/PDBSOURCE/2017_02_12/o1_mf_1_39_2572397703_.arc
PDBSOURCE(3):Incomplete Recovery applied until change 1207446 time 02/12/2017 08:41:19
PDBSOURCE(3):Media Recovery Complete (Nuvola2)
In my case I had a session opened with active transactions (the third terminal I was using to perform DMLs), so I just killed that session :)
SQL> alter system kill session '367,44360' immediate;
System altered.
After killing that session, the new PDB opened successfully.
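If you need to find which sessions are still holding active transactions in the source PDB before killing them, a query like this sketch (run inside the source PDB) can help:
SQL> -- sessions with an active transaction; v$transaction joins v$session through TADDR
SQL> select s.sid, s.serial#, s.username, s.machine from v$session s join v$transaction t on s.taddr = t.addr;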
Open in Read Write:
After the copy-phase the new PDB must be opened in read-write; if you try to open the new PDB in any other mode right after the copy-phase you will get an error:
SQL> alter pluggable database pdbsource open read only;
alter pluggable database pdbsource open read only
*
ERROR at line 1:
ORA-65085: cannot open pluggable database in read-only mode
Another name for the new PDB:
The name of the target PDB must be the same as the name of the source PDB; if you try to use another name you will get an error:
SQL> create pluggable database relocatedPDB from pdbsource@dblinktosource relocate availability max;
create pluggable database relocatedPDB from pdbsource@dblinktosource relocate availability max
*
ERROR at line 1:
ORA-65348: unable to create pluggable database
Deadlock in the first "alter pluggable database open":
The new PDB has to be opened twice, because the first open fails due to a bug (undocumented so far):
SQL> alter pluggable database pdbsource open;
alter pluggable database pdbsource open
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
An investigation shows that this deadlock is due to a row cache lock. I found some bugs already documented for 12.2 that occur while opening a database; there is no workaround for them other than applying a patch. However, the bugs I found are not specific to PDB relocation. Here is an extract of the trace generated by the deadlock:
-------------------------------------------------------------------------------
Oracle session identified by:
{
instance: 1 (nuvola2.nuvola2)
os id: 18582
process id: 8, oracle@NuvolaDB (TNS V1-V3)
session id: 10
session serial #: 32155
pdb id: 3 (PDBSOURCE)
}
is waiting for 'row cache lock' with wait info:
{
p1: 'cache id'=0x0
p2: 'mode'=0x0
p3: 'request'=0x5
time in wait: 0.186033 sec
timeout after: never
wait id: 2670
blocking: 0 sessions
current sql: alter pluggable database pdbsource open
wait history:
* time between current wait and wait #1: 0.000327 sec
1. event: 'db file sequential read'
time waited: 0.000260 sec
wait id: 2669 p1: 'file#'=0x1e
p2: 'block#'=0xeb2
p3: 'blocks'=0x1
* time between wait #1 and #2: 0.000824 sec
2. event: 'db file sequential read'
time waited: 0.000235 sec
wait id: 2668 p1: 'file#'=0x1e
p2: 'block#'=0xbff
p3: 'blocks'=0x1
* time between wait #2 and #3: 0.002020 sec
3. event: 'db file sequential read'
time waited: 0.000250 sec
wait id: 2667 p1: 'file#'=0x1e
p2: 'block#'=0xd4e
p3: 'blocks'=0x1
}
and is blocked by the session at the start of the chain.
-------------------------------------------------------------------------------
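As mentioned above, in my case the workaround was simply to execute the open a second time, and the retry completed the relocation:
SQL> alter pluggable database pdbsource open;
Pluggable database altered.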
There is another "deadlock" in Oracle 12.2 related to Local Undo and Shared Undo. If you want to read the workaround, read this article: ¿A bug in Local Undo mode in Oracle 12.2?
Closing the source PDB:
Closing the source PDB right after the copy-phase and before the relocation-phase: I did it just for fun, and the open of the new PDB then failed with an ORA-65020 coming from the remote side ;) Don't do that...
SQL> alter pluggable database pdbsource open;
alter pluggable database pdbsource open
*
ERROR at line 1:
ORA-17628: Oracle error 65020 returned by remote Oracle server
ORA-65020: pluggable database already closed
Conclusion:
Oracle Database 12.2 brings several new features to work with Pluggable Databases totally online, by taking advantage of redo data and undo data generated locally (Local Undo mode). It's time to relocate PDBs! Enjoy!
About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala. He holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the "SELECT Journal Editor's Choice Award 2016". Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil.