
How to Enable and Disable Local Undo in Oracle 12.2


Article written by Deiby Gómez.

Local Undo is a new kind of undo configuration for the Multitenant Architecture, introduced in 12.2.0.1.0. A couple of weeks ago the 12.2.0.1.0 documentation was released, along with the binaries in Oracle Public Cloud, and several DBAs around the world started to play with the new features. "Local Undo" basically means that every Pluggable Database has its own Undo Tablespace; in my environment, the Pluggable Databases "NuvolaPDB1", "NuvolaPDB2", "NuvolaPDB3", and also PDB$SEED each get their own Undo tablespace.

This is a big change compared with the multitenant undo configuration in 12.1. In 12.1 only CDB$ROOT has its own Undo tablespace and all the Pluggable Databases "share" that undo tablespace, which is why the former multitenant undo configuration is called "Shared Undo". To summarize, starting in 12.2.0.1.0 we can choose between "Local Undo" and "Shared Undo". In this article I will show you step by step how to configure Local Undo in a Multitenant Database and also how to deconfigure it.
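
Before making any change, you can check which undo mode the CDB is currently running; the following quick check (the same DATABASE_PROPERTIES query is used again later in this article to confirm the switch) returns FALSE while Shared Undo is in place:

SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME        PROPERTY_VALUE
-------------------- --------------------
LOCAL_UNDO_ENABLED   FALSE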

NOTE: This article was written using Oracle Database 12.2.0.1.0 Enterprise Edition Extreme Performance (Oracle Public Cloud).

The environment I am using is the following:

  • a CDB Database called "NuvolaCG".
  • 4 Pluggable Databases:
    • NuvolaPDB1 (con_id=3)
    • NuvolaPDB2 (con_id=4)
    • NuvolaPDB3 (con_id=5)
    • NuvolaPDB4 (con_id=6)

Currently my environment is using "Shared Undo". In a Shared Undo configuration, all the pluggable databases use (share) the same Undo Tablespace, which is owned by CDB$ROOT. For example, in the following query result you can see that all my PDBs are using the same undo tablespace, called "UNDOTBS1", and that the owner of that undo tablespace is CDB$ROOT (con_id=1):


SQL> select s.con_id   fromwhichpdb, s.username usersession, r.con_id undo_owner, r.tablespace_name current_undo,  segment_name segmentused
from v$session s,
v$transaction t,
cdb_rollback_segs r
where s.taddr=t.addr
and t.xidusn=r.segment_id(+)
and t.con_id=r.con_id
and t.ses_addr=s.saddr
order by 1;  

FROMWHICHPDB USERSESSION  UNDO_OWNER CURRENT_UNDO SEGMENTUSED
------------ ------------ ---------- ------------ ------------------------------
           3 USERA        1          UNDOTBS1     _SYSSMU3_1251228189$
           4 USERB        1          UNDOTBS1     _SYSSMU9_3256821283$
           5 USERC        1          UNDOTBS1     _SYSSMU1_307601955$
           6 USERD        1          UNDOTBS1     _SYSSMU7_442620111$

How to configure Local Undo:

Shutdown the database:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Start the database up in upgrade mode:

SQL> startup upgrade;
ORACLE instance started.

Total System Global Area 5452595200 bytes
Fixed Size            8804328 bytes
Variable Size         1090521112 bytes
Database Buffers     4345298944 bytes
Redo Buffers            7970816 bytes
Database mounted.
Database opened.

Enable Local Undo:

SQL> alter database local undo on;

Database altered.

Shutdown the database:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Start the database up normally:

SQL> startup;
ORACLE instance started.

Total System Global Area 5452595200 bytes
Fixed Size            8804328 bytes
Variable Size         1090521112 bytes
Database Buffers     4345298944 bytes
Redo Buffers            7970816 bytes
Database mounted.
Database opened.

Confirm the new undo configuration is "Local Undo":

SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME        PROPERTY_VALUE
-------------------- --------------------
LOCAL_UNDO_ENABLED   TRUE

Now let's open all the Pluggable Databases:

SQL> alter pluggable database all open;

Pluggable database altered.

NOTE: if you get the error "ORA-00060: deadlock resolved" here, you can read my last article, where you can find the solution.

As you can see below, now all the Pluggable Databases have their own Undo Tablespace; by default the undo tablespace is called "UNDO_1".

SQL> select pdb.name PDB_NAME, tbs.name TABLESPACE_NAME  from v$tablespace tbs, v$pdbs pdb where tbs.con_id=pdb.con_id and tbs.name like 'UNDO%' order by 1;

PDB_NAME    TABLESPACE_NAME
----------- ------------------------------
NUVOLAPDB1  UNDO_1
NUVOLAPDB2  UNDO_1
NUVOLAPDB3  UNDO_1
NUVOLAPDB4  UNDO_1
PDB$SEED    UNDO_1

NOTE: If you want to know how those Undo Tablespaces were created in every Pluggable Database you can read my article called "How Undo Tablespace is created in Local Undo Config".

I executed a couple of DMLs just to use undo segments in each Pluggable Database, and now you can see that every Pluggable Database is using its own Undo Tablespace. For example, the session started in NuvolaPDB1 (con_id=3) is using the undo segment "_SYSSMU8_3241223907$", which belongs to the tablespace "UNDO_1" owned by NuvolaPDB1 (con_id=3).
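
For reference, the kind of uncommitted DML I left running in each Pluggable Database looks roughly like the following sketch (the connect string and the table t_undo_demo are just placeholders; any INSERT/UPDATE/DELETE without a COMMIT keeps an undo segment in use):

[oracle@NuvolaDB ~]$ sqlplus usera/password@//localhost:1521/NuvolaPDB1

SQL> create table t_undo_demo (id number);

Table created.

SQL> insert into t_undo_demo values (1);

1 row created.

-- no commit yet, so the open transaction keeps an undo segment allocated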

SQL> select s.con_id   fromwhichpdb, s.username usersession, r.con_id undo_owner, r.tablespace_name current_undo,  segment_name segmentused
from v$session s,
v$transaction t,
cdb_rollback_segs r
where s.taddr=t.addr
and t.xidusn=r.segment_id(+)
and t.con_id=r.con_id
and t.ses_addr=s.saddr
order by 1;

FROMWHICHPDB USERSESSION  UNDO_OWNER CURRENT_UNDO  SEGMENTUSED
------------ ------------ ---------- ------------- ------------------------------
           3 USERA        3          UNDO_1        _SYSSMU8_3241223907$
           4 USERB        4          UNDO_1        _SYSSMU9_2687006412$
           5 USERC        5          UNDO_1        _SYSSMU4_2039586447$
           6 USERD        6          UNDO_1        _SYSSMU7_3889563214$


It is important to know that if you try to drop an undo tablespace while Local Undo is in use you will get an error:


SQL> alter session set container=NuvolaPDB1;

Session altered.

SQL> drop tablespace UNDO_1 including contents and datafiles;
drop tablespace UNDO_1 including contents and datafiles
*
ERROR at line 1:
ORA-30013: undo tablespace 'UNDO_1' is currently in use


How to disable Local Undo:

Shutdown the database:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Start the database up in upgrade mode:

SQL> startup upgrade;
ORACLE instance started.

Total System Global Area 5452595200 bytes
Fixed Size            8804328 bytes
Variable Size         1090521112 bytes
Database Buffers     4345298944 bytes
Redo Buffers            7970816 bytes
Database mounted.
Database opened.

Disable Local Undo:

SQL> alter database local undo off;

Database altered.

Shutdown the database:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Start the database up normally:

SQL> startup;
ORACLE instance started.

Total System Global Area 5452595200 bytes
Fixed Size            8804328 bytes
Variable Size         1090521112 bytes
Database Buffers     4345298944 bytes
Redo Buffers            7970816 bytes
Database mounted.
Database opened.
 
Confirm Shared Undo is used (Local Undo is false):

SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE  PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME         PROPERTY_VALUE
-------------------- --------------------
LOCAL_UNDO_ENABLED   FALSE


How to delete Undo Tablespaces after switching from Local Undo to Shared Undo:

There is an important thing you should know when you switch from Local Undo back to Shared Undo. Since you used Local Undo, every Pluggable Database had its own Undo Tablespace; however, when you enable "Shared Undo" those undo tablespaces are not removed, which means they will still be there and you have to decide whether to leave them or remove them.

SQL> select pdb.name PDB_NAME, tbs.name TABLESPACE_NAME  from v$tablespace tbs, v$pdbs pdb where tbs.con_id=pdb.con_id and tbs.name like 'UNDO%' order by 1;

PDB_NAME    TABLESPACE_NAME
----------- ------------------------------
NUVOLAPDB1  UNDO_1
NUVOLAPDB2  UNDO_1
NUVOLAPDB3  UNDO_1
NUVOLAPDB4  UNDO_1
PDB$SEED    UNDO_1

If you decide to remove them, you have two options. The first option is to use "catcon.pl" against all the Pluggable Databases, as I show you below:

[oracle@NuvolaDB ~]$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -u sys/Nuvola1 -c 'NuvolaPDB1 NuvolaPDB2 NuvolaPDB3 NuvolaPDB4' -s  -b DropUndoPDBs -- --x'drop tablespace UNDO_1 including contents and datafiles;'
catcon: ALL catcon-related output will be written to [/home/oracle/DropUndoPDBs_catcon_13739.lst]
catcon: See [/home/oracle/DropUndoPDBs*.log] files for output generated by scripts
catcon: See [/home/oracle/DropUndoPDBs_*.lst] files for spool files, if any
catcon.pl: completed successfully
[oracle@NuvolaDB ~]$

The second option is to connect to every Pluggable Database manually and drop the undo tablespace; this could take more time than using catcon. I recommend catcon, it's easy and fast. The following statements should be executed in every Pluggable Database you have:

SQL> alter session set container=NuvolaPDB1;

Session altered.

SQL>  drop tablespace undo_1 including contents and datafiles;

Tablespace dropped.

In both options, using catcon.pl or dropping the undo tablespaces manually, you have to do the following if you also want to remove the Undo tablespace from PDB$SEED:

SQL> alter session set "_oracle_script"=true;

Session altered.

SQL> alter pluggable database pdb$seed close;

Pluggable database altered.

SQL> alter pluggable database pdb$seed open read write;

Pluggable database altered.

SQL> alter session set container=pdb$seed;

Session altered.

SQL>  drop tablespace UNDO_1 including contents and datafiles;

Tablespace dropped.

SQL> alter session set container=cdb$root;

Session altered.

SQL> alter pluggable database pdb$seed close;

Pluggable database altered.

SQL> alter pluggable database pdb$seed open read only;

Pluggable database altered.

SQL> alter session set "_oracle_script"=false;

Session altered.

After all these steps, your database will finally look as if nothing happened. Of course, you can see that disabling Local Undo and reverting all the changes takes more time than enabling Local Undo.

SQL> select pdb.name PDB_NAME, tbs.name TABLESPACE_NAME  from v$tablespace tbs, v$pdbs pdb where tbs.con_id=pdb.con_id and tbs.name like 'UNDO%' order by 1;

no rows selected

Conclusion:

  • Enabling Local Undo creates all the undo tablespaces automatically in every PDB including PDB$SEED.
  • Disabling Local Undo doesn't remove the undo tablespaces automatically.
  • You need to bounce your database either to enable Local Undo or to disable it.
  • Local Undo is strongly recommended. It gives more isolation to Pluggable Databases.
  • The former Undo configuration (<12.2) is called "Shared Undo".





OTN Appreciation Day: Multitenant


Supporting Tim's idea, I am writing this post to say that my favorite feature of Oracle Database is "Multitenant". The Oracle Multitenant Architecture was introduced in Oracle Database 12.1.0.1 (12c). Indeed, when I was learning Oracle 12c in 2012 I was impressed with this new architecture and how Oracle designed it. With this new architecture several new concepts appeared in 12c, for example "Common Users" and "Local Users"; "Privileges granted commonly" and "Privileges granted locally"; "Container Database", "Pluggable Database" and "Seed". As you can see, from Oracle 11g to Oracle 12c there were big and good changes; it was not the same when we upgraded our knowledge from 10g to 11g, where several features were introduced but not a new architecture. I probably like this architecture more than the other features because I have been reading about it since 2012: I wrote articles for almost every database feature introduced in 12.1.0.1 and 12.1.0.2 (just put a feature name followed by my name into a Google search and you will find an article), I delivered several Multitenant sessions at Oracle events around the world, I was part of the Beta Test Program in 2015 for 12cR2 Multitenant, and I was also one of the technical reviewers of the book written by Franck Pachot, Vit Spinka and Anton Els called "Oracle Database 12c Release 2 Multitenant (1st Edition)", published by Oracle Press; the other technical reviewer was Arup Nanda. So imagine how much I love this new architecture after 4 years of studying and working with it!

There are several benefits of Using Oracle Multitenant:

  • Provisioning
  • Consolidation
  • Better Usage of Resources
  • Managing databases "all in one". 
  • Flexibility to move databases 
  • Database Isolation
  • Database Scalability
  • Common Operations (Operations across all the databases)
  • a lot more. 

Oracle Database has more than 500 new features in 12.1.0.1 and more than 600 new features in 12.1.0.2, and Oracle Database 12cR2 is coming soon with hundreds of new features... What's your favorite one?


Extents Allocation: Round-Robin


DBAs perform many tasks related to datafiles and tablespaces, such as creating them, but also creating segments. Every segment, as you know, has extents, and those extents are ultimately allocated in datafiles as blocks. Usually we perform those tasks without thinking about what happens behind the scenes: what happens with the extents and the datafiles, and how extents are allocated. That is what you will read in this article. I will show you a few examples so you can understand how extents are allocated in datafiles; I will analyze only Locally Managed Tablespaces. If you have been reading my articles you already know that I like to write them as a "Concept" followed by "The example/Internals". Well, let me give you the concept of this article:

"Extents are allocated in datafiles in round-robin Fashion".

Yes, in round-robin fashion. Some people could think that first one datafile is filled up and then the next datafile starts to get filled up, but it doesn't work that way. To explain this, let's go to the examples.

In this example I will create a Locally Managed Tablespace named "tbslocal" and I will use the table "dgomez".

Note: These examples were run on the following versions; the behavior is the same and the concept applies to all of them.

10.2.0.1.0
11.2.0.4.0
12.1.0.1.0

Extent Management Local Uniform:

Creating the tablespace:

SQL> create tablespace tbslocal datafile size 10m, size 10m, size 10m
extent management local uniform size 64k;

Tablespace created.

Creating our Segment:

SQL> create table dgomez (id number, value varchar2(40)) tablespace TBSLOCAL;

Table created.

When you create the table segment, one extent is allocated.

SQL> select file_id, extent_id,blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2 ;

FILE_ID     EXTENT_ID  BLOCKS      KB
---------- ---------- ---------- ----------
8           0         8           64

Let's add 2 more extents to the segment manually:

SQL> alter table dgomez allocate extent;

Table altered.

SQL> alter table dgomez allocate extent;

Table altered.

And now let's check the result:

SQL> select file_id, extent_id,blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2 ;

FILE_ID     EXTENT_ID    BLOCKS     KB
---------- ---------- ---------- ----------
8      0       8       64
7      1       8       64
6      2       8       64

As you can see our extents were allocated in round-robin fashion. But do you remember what kind of tablespace we created? In this example I was using "EXTENT MANAGEMENT LOCAL UNIFORM". There is a little difference between "EXTENT MANAGEMENT LOCAL UNIFORM" and "EXTENT MANAGEMENT LOCAL AUTOALLOCATE" and that difference is what we will see in the following example:

Extent Management Local Autoallocate: With Autoallocate, Oracle tries to understand what kind of segments are being created in the tablespace; it analyzes the data and, based on that, decides the extent sizes, so the next extent could be bigger than the last one. That's why we see a small difference using "Autoallocate".

SQL> drop tablespace tbslocal including contents and datafiles;

Tablespace dropped.

SQL> create tablespace tbslocal datafile size 10m, size 10m, size 10m
extent management local autoallocate;

Tablespace created.

SQL> create table dgomez (id number, value varchar2(40)) tablespace TBSLOCAL;

Table created.

First extent created:

SQL> select file_id, extent_id,blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2 ;

FILE_ID     EXTENT_ID     BLOCKS     KB
---------- ---------- ---------- ----------
8       0       8       64

Allocating a new extent manually:

SQL> alter table dgomez allocate extent;

Table altered.

SQL> select file_id, extent_id,blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2 ;

FILE_ID EXTENT_ID BLOCKS KB
---------- ---------- ---------- ----------
8    0     8     64
8    1     8     64

It was allocated in the same file? Doesn't Oracle allocate extents in a round-robin fashion when we're using "EXTENT MANAGEMENT LOCAL AUTOALLOCATE"? Perhaps we are missing something here, because Oracle is expensive enough not to be missing this feature, you know... ;)

Let's create another extent manually:

SQL> alter table dgomez allocate extent;

Table altered.

SQL> select file_id, extent_id,blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2 ;

FILE_ID EXTENT_ID BLOCKS KB
---------- ---------- ---------- ----------
8    0     8     64
8    1     8     64
8    2     8     64

No, the same behavior; we are still not seeing round-robin. Ok, let me try one last time with 13 more extents:

Creation of 13 extents (13 iterations):

SQL> alter table dgomez allocate extent;

Table altered.

SQL> select file_id, extent_id,blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2 ;

FILE_ID EXTENT_ID BLOCKS KB
---------- ---------- ---------- ----------
8 0 8 64
8 1 8 64
8 2 8 64
8 3 8 64
8 4 8 64
8 5 8 64
8 6 8 64
8 7 8 64
8 8 8 64
8 9 8 64
8 10 8 64
8 11 8 64
8 12 8 64
8 13 8 64
8 14 8 64
8 15 8 64

16 rows selected.

At this point people could think that with "EXTENT MANAGEMENT LOCAL AUTOALLOCATE" one datafile is filled up first and then another datafile, and so on. But wait, something happens with extent 16:

SQL> alter table dgomez allocate extent;

Table altered.

SQL> select file_id, extent_id,blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2 ;

FILE_ID EXTENT_ID BLOCKS KB
---------- ---------- ---------- ----------
8 0 8 64
8 1 8 64
8 2 8 64
8 3 8 64
8 4 8 64
8 5 8 64
8 6 8 64
8 7 8 64
8 8 8 64
8 9 8 64
8 10 8 64
8 11 8 64
8 12 8 64
8 13 8 64
8 14 8 64
8 15 8 64
7 16 128 1024

17 rows selected.

Finally!
Now it looks like Oracle is using another datafile in the tablespace. I will let Oracle make me happy again:

4 iterations:

SQL> alter table dgomez allocate extent;

Table altered.

SQL> select file_id, extent_id,blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2 ;

FILE_ID EXTENT_ID BLOCKS KB
---------- ---------- ---------- ----------
8 0 8 64
8 1 8 64
8 2 8 64
8 3 8 64
8 4 8 64
8 5 8 64
8 6 8 64
8 7 8 64
8 8 8 64
8 9 8 64
8 10 8 64
8 11 8 64
8 12 8 64
8 13 8 64
8 14 8 64
8 15 8 64
7 16 128 1024
6 17 128 1024
8 18 128 1024
7 19 128 1024
6 20 128 1024

21 rows selected.

Fine, our extents started to be allocated in a round-robin fashion. But does that mean that round-robin starts at extent 16? Not at all. It doesn't depend on extent 16, as we will see later in the article.

"Hey Deiby, but then the extents are not allocated 'evenly'" (evenly is ASM's word for sure).
Are you sure? Do you know how many blocks are in each datafile?

SQL> select file_id, sum(blocks) from dba_extents where file_id in (6,7,8) group by file_id;

FILE_ID SUM(BLOCKS)
---------- -----------
6 256
8 256
7 256

The 16 small extents in file 8 plus its later 1 MB extent add up to 256 blocks, which is the same amount allocated in each of the other datafiles.

Now let's go back and answer the question: does round-robin always start after extent 16? No.

SQL> drop tablespace tbslocal including contents and datafiles;

Tablespace dropped.

SQL> create tablespace tbslocal datafile size 10m, size 10m, size 10m
extent management local autoallocate;

Tablespace created.

SQL> create table dgomez (id number, value varchar2(40)) tablespace TBSLOCAL;

Table created.

SQL> select file_id, extent_id,blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2 ;

FILE_ID EXTENT_ID BLOCKS KB
---------- ---------- ---------- ----------
8 0 8 64

Creation of 2M Extent manually:

SQL> alter table dgomez allocate extent (size 2m);

Table altered.

SQL> select file_id, extent_id,blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2 ;

FILE_ID EXTENT_ID BLOCKS KB
---------- ---------- ---------- ----------
8 0 8 64
7 1 128 1024
6 2 128 1024

As you can see, Oracle didn't create a 2M extent; instead, Oracle created 2 extents of 1M. But there is another interesting thing: starting with the second extent, Oracle allocated the extents in a round-robin fashion. So does it depend on extent 16? No, it depends on the number of blocks already allocated. I could say that after 128 blocks have been allocated Oracle starts to use round-robin, but we should research this further. Let's confirm that we are now using round-robin:

SQL> alter table dgomez allocate extent (size 3m);

Table altered.

SQL> select file_id, extent_id,blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2 ;

FILE_ID EXTENT_ID BLOCKS KB
---------- ---------- ---------- ----------
8 0 8 64
7 1 128 1024
6 2 128 1024
8 3 128 1024
7 4 128 1024
6 5 128 1024

6 rows selected.

Confirmed!.

Disadvantage:

Round-robin is not perfect. Unfortunately Oracle only creates extents using round-robin; it is not aware of the size of every filesystem, so it cannot spread the extents evenly. For example:

  • Using ASM: with 2 disks (1G and 100G), ASM spreads the extents in proportion to the size of each disk, so for every extent created on the 1G disk many more are created on the 100G disk. This is very good because every disk, regardless of its size, ends up with the same percentage of usage.
  • Using a filesystem: with 2 disks (1G and 100G), you can create one datafile on the 1G disk and another datafile on the 100G disk, but Oracle will always allocate one extent in each datafile in turn, so when the 1G disk is full the 100G disk will have only about 1G used.

As always, I strongly recommend using ASM for our datafiles.

SQL> drop tablespace tbslocal including contents and datafiles;

Tablespace dropped.

SQL> create tablespace tbslocal datafile size 10m, size 100m
extent management local uniform size 64k;

Tablespace created.

SQL> create table dgomez (id number, value varchar2(40)) tablespace TBSLOCAL;

Table created.

SQL> alter table dgomez allocate extent;

Table altered.

SQL> alter table dgomez allocate extent;

Table altered.

SQL> alter table dgomez allocate extent;

Table altered.


SQL> select file_id, extent_id,blocks, bytes/1024 KB from dba_extents where file_id in (6,7,8) order by 2 ;

FILE_ID    EXTENT_ID  BLOCKS     KB
---------- ---------- ---------- ----------
7          0          8          64
6          1          8          64
7          2          8          64
6          3          8          64

SQL>
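
To put a number on the imbalance described above, a query along these lines (a sketch; it assumes the demo tablespace is still called TBSLOCAL) shows how much of each datafile is actually used:

SQL> select e.file_id, d.bytes/1024/1024 file_mb,
            sum(e.bytes)/1024/1024 used_mb,
            round(100*sum(e.bytes)/d.bytes,2) pct_used
     from dba_extents e, dba_data_files d
     where e.file_id = d.file_id
     and d.tablespace_name = 'TBSLOCAL'
     group by e.file_id, d.bytes
     order by 1;

With datafiles of very different sizes on a filesystem, the smaller file shows a much higher percentage used than the bigger one, which is exactly the problem ASM avoids.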


SERVER=NONE in V$SESSION using Shared Server


When you are dealing with Shared Server there are some things you have to know in order to understand what is going on in your database. Last night I was talking with a friend who asked me, "Hey, why is my shared server configuration not working?" He said that he had read in the Oracle Documentation that the only parameter needed to configure shared server is "shared_servers". Indeed, he was right, so why was his configuration not working? Well, I asked him why he thought it wasn't, and he said the following:

"I see the SERVER column in the v$session view says NONE".

Maybe this is a well-known value of the SERVER column in v$session for all the top DBAs in the world; however, there are also people who don't know it, which is why I decided to write a little blog post about it.

My answer was simple: "Your configuration is fine, but your session is not doing anything; that's why you see NONE in the SERVER column. If the configuration were wrong you would see DEDICATED".

After that conversation I started to write this blog post, and below are my notes for you:

Current configuration:

SQL> show parameters dispatcher

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
dispatchers string (PROTOCOL=TCP) (SERVICE=ORCLXDB)
max_dispatchers integer

SQL> show parameter shared_servers

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
max_shared_servers integer
shared_servers integer 1
SQL>
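
If you are setting this up from scratch, the configuration shown above boils down to something like the following sketch (the dispatcher service name is the one used in this environment; both parameters are dynamic, so no restart is needed):

SQL> alter system set shared_servers=1;

System altered.

SQL> alter system set dispatchers='(PROTOCOL=TCP) (SERVICE=ORCLXDB)';

System altered.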

Currently there is 1 Dispatcher and 1 Shared Server:

SQL> select pname, addr,pid from v$process where pname like 'D0%' or pname like 'S0%';

PNAME ADDR PID
----- ---------------- ----------
D000 000000009F4B9558 20  <--- Dispatcher
S000 000000009F4BB6C8 22  <--- Shared Server

Note: Keep in mind the PID number because we will use it later with oradebug

My dispatcher is accepting connections:

SQL> select name, status, accept, messages, breaks,owned from v$dispatcher;

NAME STATUS ACC MESSAGES BREAKS OWNED
---- ---------- --- -------- ------ -----
D000 WAIT YES 0 0 0

And my shared server is "new" with no work currently:

SQL> select name, paddr, status, messages from v$shared_server

NAME PADDR STATUS MESSAGES
---- ---------------- --------------- --------
S000 000000009F4BB6C8 WAIT(COMMON) 0

SQL>

Let's create a session using "shared"

[oracle@a1 ~]$ sqlplus dgomez/dgomez@a1.oracle.com:1521/orclxdb:shared

Now OWNED says "1" and MESSAGES says "28", which means there is activity; you can check the documentation for v$dispatcher for further information...

SQL> select name, status, accept, messages, breaks,owned from v$dispatcher;

NAME STATUS ACC MESSAGES BREAKS OWNED
---- ---------- --- -------- ------ -----
D000 WAIT YES 28 0 1

Now let's see the SERVER column in v$SESSION:

SQL> select username, server, service_name , PADDR from v$session where username='DGOMEZ';

USERNAME SERVER SERVICE_NAME PADDR
------------------------------ --------- ---------------------------------------------------------------- ----------------
DGOMEZ NONE ORCLXDB 000000009F4B9558

Here is what my friend was saying... the value NONE. Let's confirm what I told him:

Confirmation #1: The process for my session is the dispatcher.

SQL> select NAME, STATUS, ACCEPT, MESSAGES, OWNED, PADDR from v$dispatcher where PADDR=(Select paddr from v$session where username='DGOMEZ');

NAME STATUS ACC MESSAGES OWNED PADDR
---- ---------- --- -------- ----- ----------------
D000 WAIT YES 2920 1 000000009F4B9558

SQL>

Confirmation #2: D000 also shows 1 current connection.

[grid@a1 ~]$ lsnrctl services

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 02-OCT-2014 01:41:35

Copyright (c) 1991, 2013, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "ORCLXDB" has 1 instance(s).
Instance "orcl", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:1 refused:0 current:1 max:1022 state:ready
DISPATCHER <machine: a1.oracle.com, pid: 6824>
(ADDRESS=(PROTOCOL=tcp)(HOST=a1.oracle.com)(PORT=31843))
Service "orcl" has 1 instance(s).
Instance "orcl", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully

Ok, so far we know that the dispatcher process is managing our session, but then why "NONE"? As I said, it is because our session is not doing any work. I will use oradebug to suspend S000; that way we can see that while my session is doing something, the SERVER column in V$SESSION shows "SHARED":

SQL> oradebug setorapid 22   <-- do you remember the PID of the Shared Server process?
Oracle pid: 22, Unix process pid: 6828, image: oracle@a1.oracle.com (S000)
SQL>

Session with dgomez user: SQL> select * from user_tables;

SQL> oradebug suspend
Statement processed.

SQL> select username, server, service_name , PADDR from v$session where username='DGOMEZ';

USERNAME SERVER SERVICE_NAME PADDR
------------------------------ --------- ---------------------------------------------------------------- ----------------
DGOMEZ SHARED ORCLXDB 000000009F4BB6C8  

The column has "SHARED" and the address"000000009F4BB6C8" is the address of the Shared Server Process. As you can see this value is showed only while the Shared Server is working:

SQL> select name, paddr, status, messages from v$shared_server;

NAME PADDR STATUS MESSAGES
---- ---------------- --------------- --------
S000 000000009F4BB6C8 EXEC 4363
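
Don't forget to let the suspended shared server process continue once you are done with the test; a quick cleanup sketch from the same oradebug session:

SQL> oradebug resume
Statement processed.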


Oracle Internals: Adding a column with default value


For this example I will use the following table:

SQL> create table dgomez.t1 (id number, value varchar2(20)) tablespace tbs1;

Table created.

Let's insert 2 rows:

SQL> insert into dgomez.t1 values (1,'deiby');

1 row created.

SQL> insert into dgomez.t1 values (2,'mauricio');

1 row created.

SQL> commit;

Commit complete.

Now, I will show you how the table block looks:

data_block_dump,data header at 0x7fde229bc064
===============
tsiz: 0x1f98
hsiz: 0x16
pbl: 0x7fde229bc064
76543210
flag=--------
ntab=1
nrow=2
frre=-1
fsbo=0x16
fseo=0x1f7d
avsp=0x1f67
tosp=0x1f67
0xe:pti[0] nrow=2 offs=0
0x12:pri[0] offs=0x1f8c
0x14:pri[1] offs=0x1f7d
block_row_dump:
tab 0, row 0, @0x1f8c
tl: 12 fb: --H-FL-- lb: 0x1 cc: 2
col 0: [ 2] c1 02<<----1
col 1: [ 5] 64 65 69 62 79<<----Deiby
tab 0, row 1, @0x1f7d
tl: 15 fb: --H-FL-- lb: 0x1 cc: 2
col 0: [ 2] c1 03<<----2
col 1: [ 8] 6d 61 75 72 69 63 69 6f<<----Mauricio
end_of_block_dump
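
In case you want to reproduce these dumps, here is a sketch of how such a block dump can be generated (the file and block numbers depend on your environment, and the dump is written to a trace file in the diagnostic trace directory):

SQL> select dbms_rowid.rowid_relative_fno(rowid) file#,
            dbms_rowid.rowid_block_number(rowid) block#
     from dgomez.t1 where rownum = 1;

SQL> alter system checkpoint;

SQL> alter system dump datafile <file#> block <block#>;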

Now, I will alter the table adding a column with default value specifying "not null":

SQL> alter table dgomez.t1 add value2 varchar2(20) default 'oraworld' not null;

Table altered.

Let's see how Oracle managed this DDL internally:

data_block_dump,data header at 0x7fde229bc064
===============
tsiz: 0x1f98
hsiz: 0x16
pbl: 0x7fde229bc064
76543210
flag=--------
ntab=1
nrow=2
frre=-1
fsbo=0x16
fseo=0x1f7d
avsp=0x1f67
tosp=0x1f67
0xe:pti[0] nrow=2 offs=0
0x12:pri[0] offs=0x1f8c
0x14:pri[1] offs=0x1f7d
block_row_dump:
tab 0, row 0, @0x1f8c<<--- The row has only 2 columns physically. 
tl: 12 fb: --H-FL-- lb: 0x1 cc: 2
col 0: [ 2] c1 02<<----1
col 1: [ 5] 64 65 69 62 79<<----Deiby
tab 0, row 1, @0x1f7d<<--- The row has only 2 columns physically. 
tl: 15 fb: --H-FL-- lb: 0x1 cc: 2
col 0: [ 2] c1 03<<----2
col 1: [ 8] 6d 61 75 72 69 63 69 6f<<----Mauricio
end_of_block_dump

As you can see, the default value 'oraworld' wasn't physically inserted into the new column for the rows already created. Why? Because Oracle stores the default value in the dictionary; let me show you:

SQL> select table_name, column_name, data_default from dba_tab_columns where table_name='T1';

TABLE_NAME COLUMN_NAME DATA_DEFAULT
---------- ----------- --------------------
T1          ID
T1          VALUE
T1          VALUE2          'oraworld'

Whenever Oracle needs the value in that column, it simply reviews the dictionary and returns the default data. Using this feature of Oracle Database you will be able to alter any table, adding a column with a default value, in seconds regardless of the table size. Even if the table has 1TB of data the alter table will take only seconds.
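
A quick way to see this behavior in action: selecting the new column returns the default for the pre-existing rows even though, physically, those rows still contain only two columns. You should see something like the following:

SQL> select id, value, value2 from dgomez.t1;

        ID VALUE      VALUE2
---------- ---------- --------------------
         1 deiby      oraworld
         2 mauricio   oraworld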

I will insert another row with a non-default value, and let's check whether the rows already created change in any way:

SQL> insert into dgomez.t1 values (3,'robles','nodefault');
SQL> commit;

data_block_dump,data header at 0x7fde229bc064
===============
tsiz: 0x1f98
hsiz: 0x18
pbl: 0x7fde229bc064
76543210
flag=--------
ntab=1
nrow=3
frre=-1
fsbo=0x18
fseo=0x1f66
avsp=0x1f4e
tosp=0x1f4e
0xe:pti[0] nrow=3 offs=0
0x12:pri[0] offs=0x1f8c
0x14:pri[1] offs=0x1f7d
0x16:pri[2] offs=0x1f66
block_row_dump:
tab 0, row 0, @0x1f8c<<-- Row didn't have any change.
tl: 12 fb: --H-FL-- lb: 0x1 cc: 2
col 0: [ 2] c1 02
col 1: [ 5] 64 65 69 62 79
tab 0, row 1, @0x1f7d<<-- Row didn't have any change.
tl: 15 fb: --H-FL-- lb: 0x1 cc: 2
col 0: [ 2] c1 03
col 1: [ 8] 6d 61 75 72 69 63 69 6f
tab 0, row 2, @0x1f66
tl: 23 fb: --H-FL-- lb: 0x2 cc: 3
col 0: [ 2] c1 04
col 1: [ 6] 72 6f 62 6c 65 73
col 2: [ 9] 6e 6f 64 65 66 61 75 6c 74<<---Non Default Value
end_of_block_dump

As you can see the rows already created weren't modified and the default value is stored physically only for new rows.

What happens with the upcoming rows?

SQL> insert into dgomez.t1 (id, value) values (4,'robles');
SQL> insert into dgomez.t1 (id, value) values (5,'jose');

Now we will look at the table block and you will see that, for the new rows, the new column has the data physically stored:

data_block_dump,data header at 0x7fde229bc064
===============
tsiz: 0x1f98
hsiz: 0x1c
pbl: 0x7fde229bc064
76543210
flag=--------
ntab=1
nrow=5
frre=-1
fsbo=0x1c
fseo=0x1f3c
avsp=0x1f20
tosp=0x1f20
0xe:pti[0] nrow=5 offs=0
0x12:pri[0] offs=0x1f8c
0x14:pri[1] offs=0x1f7d
0x16:pri[2] offs=0x1f66
0x18:pri[3] offs=0x1f50
0x1a:pri[4] offs=0x1f3c
block_row_dump:
tab 0, row 0, @0x1f8c
tl: 12 fb: --H-FL-- lb: 0x0 cc: 2
col 0: [ 2] c1 02
col 1: [ 5] 64 65 69 62 79
tab 0, row 1, @0x1f7d
tl: 15 fb: --H-FL-- lb: 0x0 cc: 2
col 0: [ 2] c1 03
col 1: [ 8] 6d 61 75 72 69 63 69 6f
tab 0, row 2, @0x1f66
tl: 23 fb: --H-FL-- lb: 0x0 cc: 3
col 0: [ 2] c1 04
col 1: [ 6] 72 6f 62 6c 65 73
col 2: [ 9] 6e 6f 64 65 66 61 75 6c 74
tab 0, row 3, @0x1f50
tl: 22 fb: --H-FL-- lb: 0x1 cc: 3
col 0: [ 2] c1 05
col 1: [ 6] 72 6f 62 6c 65 73
col 2: [ 8] 6f 72 61 77 6f 72 6c 64<<--- default value inserted physically
tab 0, row 4, @0x1f3c
tl: 20 fb: --H-FL-- lb: 0x2 cc: 3
col 0: [ 2] c1 06
col 1: [ 4] 6a 6f 73 65
col 2: [ 8] 6f 72 61 77 6f 72 6c 64<<--- default value inserted physically
end_of_block_dump

Be careful when you don't specify "not null"

When you don't specify "not null" in the "alter table add column" with default value, the default value is inserted physically for all the rows already created:

SQL> drop table dgomez.t1 purge;
SQL> create table dgomez.t1 (id number, value varchar2(20)) tablespace tbs1;
SQL> insert into dgomez.t1 values (1,'deiby');
SQL> insert into dgomez.t1 values (2,'mauricio');
SQL> commit;

I will alter the table, adding a column with a default value but without specifying "not null":

SQL> alter table dgomez.t1 add value2 varchar2(20) default 'oraworld';

Table altered.

Let me show you the table block:

data_block_dump,data header at 0x7fde229bc064
===============
tsiz: 0x1f98
hsiz: 0x16
pbl: 0x7fde229bc064
76543210
flag=--------
ntab=1
nrow=2
frre=-1
fsbo=0x16
fseo=0x1f50
avsp=0x1f55
tosp=0x1f55
0xe:pti[0] nrow=2 offs=0
0x12:pri[0] offs=0x1f68
0x14:pri[1] offs=0x1f50
block_row_dump:
tab 0, row 0, @0x1f68
tl: 21 fb: --H-FL-- lb: 0x2 cc: 3
col 0: [ 2] c1 02
col 1: [ 5] 64 65 69 62 79
col 2: [ 8] 6f 72 61 77 6f 72 6c 64<<--Default value inserted for all rows
tab 0, row 1, @0x1f50
tl: 24 fb: --H-FL-- lb: 0x2 cc: 3
col 0: [ 2] c1 03
col 1: [ 8] 6d 61 75 72 69 63 69 6f
col 2: [ 8] 6f 72 61 77 6f 72 6c 64<<--Default value inserted for all rows
end_of_block_dump

So be careful, because if your table has 1TB of data, can you imagine how long the "alter table add column" will take?

More articles about this feature: 

Otimização de comandos DDL - Portuguese, Dr. Mohamed Houri (ACE) and Alex Zaballa (ACE, OCM)
Optimización de operaciones DDL - Spanish, Dr. Mohamed Houri (ACE) and Deiby Gómez (ACE)


Oracle 12c Global and Session Sequences


In Oracle 12c we have two new keywords to create sequences: SESSION and GLOBAL, as documented in the Oracle Database 12c SQL Reference.

The definitions of  these two types of Sequences are simple:

SESSION: The values of the sequence are unique for every session in the database. This means that in every session the sequence will start from the beginning (the first value); all the values are isolated across sessions.

GLOBAL: The values of the sequence are shared across sessions. In other words, every session takes the next value from the sequence, not a value from the beginning. But there is another interesting thing here: it is also shared "across Primary-Standby databases". This means that if you query the sequence from a read-only physical standby database you will receive the next value of the sequence; that's why it is called "global". Basically it is global because it spans sessions but also involves the physical standby.
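
For reference, the syntax is simply the regular CREATE SEQUENCE statement with the new keyword at the end; a minimal sketch (the sequence names are placeholders, and the CACHE/NOORDER part matters later in this article if the sequence will also be used from an Active Data Guard standby):

SQL> create sequence session_demo_seq session;

SQL> create sequence global_demo_seq global cache 20 noorder;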

I like to show examples, so let's start with the basics. For these examples I will use three sessions; I will use colors to identify them more easily.

  • Sessions from Primary (black color and blue color).
  • Sessions from Standby (green color).

Session Sequences:

Session #1 in Primary Database: 

[oracle@db12102 ~]$ sqlplus user1/user1@db1

SQL> create sequence session_seq session;

Sequence created.

SQL> select sequence_name, cache_size, increment_by from user_sequences;

SEQUENCE_NAME   CACHE_SIZE INCREMENT_BY
--------------- ---------- ------------
SESSION_SEQ     20         1

SQL> SELECT SYS_CONTEXT ('USERENV','SESSION_USER') session_user, SYS_CONTEXT ('USERENV','DATABASE_ROLE') db_role, SYS_CONTEXT ('USERENV','SESSIONID') session_id FROM DUAL;

SESSION_US DB_ROLE    SESSION_ID
---------- ---------- ----------
USER1      PRIMARY    2500123

SQL> select session_seq.nextval from dual;

NEXTVAL
----------
1

SQL> select session_seq.nextval from dual;

NEXTVAL
----------
2

As you see the values of the sequence started from the beginning, in this case the value "1" because we created the sequence with all the default values for the options. 

Session #2 in Primary:

Now let's Open another session with the same user :

[oracle@db12102 ~]$ sqlplus user1/user1@db1

SQL> SELECT SYS_CONTEXT ('USERENV','SESSION_USER') session_user, SYS_CONTEXT ('USERENV','DATABASE_ROLE') db_role, SYS_CONTEXT ('USERENV','SESSIONID') session_id FROM DUAL;

SESSION_US DB_ROLE    SESSION_ID
---------- ---------- ----------
USER1      PRIMARY    2510101


SQL> select session_seq.nextval from dual;

NEXTVAL
----------
1

SQL> select session_seq.nextval from dual;

NEXTVAL
----------
2

SQL> select session_seq.nextval from dual;

NEXTVAL
----------
3

The values started again from the beginning (value 1) because all the values already used by session #1 were isolated to that session; since this is a new session, the sequence behaves as if it had never been used. Another thing you have to know is that the values are not stored: when the session is closed (or killed), everything starts again from the beginning, which makes sense.

Session #3 in Standby Database:

[oracle@db12102 ~]$ sqlplus user1/user1@db1s

SQL> SELECT SYS_CONTEXT ('USERENV','SESSION_USER') session_user, SYS_CONTEXT ('USERENV','DATABASE_ROLE') db_role, SYS_CONTEXT ('USERENV','SESSIONID') session_id FROM DUAL;

SESSION_USER    DB_ROLE           SESSION_ID
--------------- ----------------- ----------
USER1           PHYSICAL STANDBY  4294967295

SQL> select session_seq.nextval from dual;

NEXTVAL
----------
1

SQL> select session_seq.nextval from dual;

NEXTVAL
----------
2

And just to show you that we can also use the sequence from the standby database (db1s), I am including this example; but as you already saw (and expected), since the session from the standby is a "new session", the values of the sequence started from the beginning.

Global Sequences with NO ORDER and CACHE:

Now it gets more interesting because we are creating a global sequence using cache and no order for the values. Let's see why these options are important when we are working with global sequences:

Session #1 from Primary:

[oracle@db12102 ~]$ sqlplus user1/user1@db1

SQL> create sequence global_seq global;

Sequence created.

SQL> select sequence_name, cache_size, increment_by, order_flag from user_sequences;

SEQUENCE_NAME   CACHE_SIZE INCREMENT_BY ORDER
--------------- ---------- ------------ -----
GLOBAL_SEQ      20        1            N

SQL> SELECT SYS_CONTEXT ('USERENV','SESSION_USER') session_user, SYS_CONTEXT ('USERENV','DATABASE_ROLE') db_role, SYS_CONTEXT ('USERENV','SESSIONID') session_id FROM DUAL;

SESSION_US DB_ROLE    SESSION_ID
---------- ---------- ----------
USER1      PRIMARY    2530102

SQL> select global_seq.nextval from dual;

NEXTVAL
----------
1

SQL> select global_seq.nextval from dual;

NEXTVAL
----------
2

Since this is the first session that uses the sequence the values started from "1", but let's see what happens when we query the sequence from a second session:

Session #2 from Primary:

[oracle@db12102 ~]$ sqlplus user1/user1@db1

SQL> SELECT SYS_CONTEXT ('USERENV','SESSION_USER') session_user, SYS_CONTEXT ('USERENV','DATABASE_ROLE') db_role, SYS_CONTEXT ('USERENV','SESSIONID') session_id FROM DUAL;

SESSION_USER    DB_ROLE    SESSION_ID
--------------- ---------- -------------
USER1           PRIMARY    2530103

SQL> select global_seq.nextval from dual;

NEXTVAL
----------
3

SQL> select global_seq.nextval from dual;

NEXTVAL
----------
4

Yes, as you can see, the behavior of the sequence is the same as that of sequences in older versions of Oracle Database: the values are shared across sessions, and those values are stored so that new sessions take new values from the sequence. The most interesting part is when we have a read-only Physical Standby, as in the following example:

Session #3 from Standby:

[oracle@db12102 ~]$ sqlplus user1/user1@db1s

SQL> SELECT SYS_CONTEXT ('USERENV','SESSION_USER') session_user, SYS_CONTEXT ('USERENV','DATABASE_ROLE') db_role, SYS_CONTEXT ('USERENV','SESSIONID') session_id FROM DUAL;

SESSION_USER    DB_ROLE          SESSION_ID
--------------- ---------------- ---------------
USER1           PHYSICAL STANDBY 4294967295

SQL> select global_seq.nextval from dual;

NEXTVAL
----------
21

SQL> select global_seq.nextval from dual;

NEXTVAL
----------
22

SQL>

This is a third session, opened from a read-only Physical Standby Database. Look at the values returned from the sequence: the first value returned was "21". This is because the sequence was created with a cache size of "20", which means that 20 values were loaded in the Primary Database instance, and here in the Physical Standby instance a new set of values was taken starting with "21". The interesting part is that the sequence values are now coordinated across Primary and Standby databases.
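
If you want to see where the next cached range will start, you can check USER_SEQUENCES on the primary; a sketch (LAST_NUMBER is the first value that has not been cached yet, which is why the standby session started at 21):

SQL> select sequence_name, cache_size, last_number from user_sequences where sequence_name = 'GLOBAL_SEQ';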

Global Sequences with NO ORDER and NO CACHE:

Now let's change the options; let's use no order and no cache and see what happens:

Session #1 from Primary:

[oracle@db12102 ~]$ sqlplus user1/user1@db1

SQL> create sequence global_nocache_seq global nocache;

Sequence created.

SQL> SELECT SYS_CONTEXT ('USERENV','SESSION_USER') session_user, SYS_CONTEXT ('USERENV','DATABASE_ROLE') db_role, SYS_CONTEXT ('USERENV','SESSIONID') session_id FROM DUAL;

SESSION_US DB_ROLE    SESSION_ID
---------- ---------- ----------
USER1      PRIMARY    2540100

SQL>
SQL> select sequence_name, cache_size, increment_by , order_flag from user_sequences;

SEQUENCE_NAME        CACHE_SIZE INCREMENT_BY ORDER
-------------------- ---------- ------------ -----
GLOBAL_NOCACHE_SEQ   0          1            N

SQL> select global_nocache_seq.nextval from dual;

NEXTVAL
----------
1

SQL> select global_nocache_seq.nextval from dual;

NEXTVAL
----------
2

Session #2 from Standby:

[oracle@db12102 ~]$ sqlplus user1/user1@db1s

SQL> SELECT SYS_CONTEXT ('USERENV','SESSION_USER') session_user, SYS_CONTEXT ('USERENV','DATABASE_ROLE') db_role, SYS_CONTEXT ('USERENV','SESSIONID') session_id FROM DUAL;

SESSION_USER    DB_ROLE           SESSION_ID
--------------- ----------------- ---------------
USER1           PHYSICAL STANDBY  4294967295

SQL> select global_nocache_seq.nextval from dual;
select global_nocache_seq.nextval from dual
*
ERROR at line 1:
ORA-03179: NOCACHE or ORDER sequences cannot be accessed from Active Data Guard standby


Look at that error: it clearly says that we cannot use this sequence from the read-only Physical Standby, because the sequence was created with NOCACHE. This is important and I wanted to show it to you, which is why I created this scenario: if you want to use sequences from a read-only Physical Standby Database you must create them using CACHE and NO ORDER, otherwise you will get errors on the Standby site. Just to confirm what I am saying I will create one last scenario, this time using CACHE but also ORDER.

Global Sequences with  ORDER and CACHE:

Session #1 from Primary:

SQL> SELECT SYS_CONTEXT ('USERENV','SESSION_USER') session_user, SYS_CONTEXT ('USERENV','DATABASE_ROLE') db_role, SYS_CONTEXT ('USERENV','SESSIONID') session_id FROM DUAL;

SESSION_USER    DB_ROLE         SESSION_ID
--------------- --------------- ---------------
USER1           PRIMARY         2630100

SQL> create sequence global_order_seq global order;

Sequence created.

SQL> select sequence_name, cache_size, increment_by , order_flag from user_sequences;

SEQUENCE_NAME        CACHE_SIZE INCREMENT_BY ORDER
-------------------- ---------- ------------ -----
GLOBAL_ORDER_SEQ     20         1            Y


SQL> select global_order_seq.nextval from dual;

NEXTVAL
----------
1

SQL> select global_order_seq.nextval from dual;

NEXTVAL
----------
2

Session #2 from Standby:

[oracle@db12102 ~]$ sqlplus user1/user1@db1s

SQL> SELECT SYS_CONTEXT ('USERENV','SESSION_USER') session_user, SYS_CONTEXT ('USERENV','DATABASE_ROLE') db_role, SYS_CONTEXT ('USERENV','SESSIONID') session_id FROM DUAL;

SESSION_USER    DB_ROLE           SESSION_ID
--------------- ----------------- ---------------
USER1           PHYSICAL STANDBY  4294967295

SQL> select global_order_seq.nextval from dual;
select global_order_seq.nextval from dual
*
ERROR at line 1:
ORA-03179: NOCACHE or ORDER sequences cannot be accessed from Active Data Guard standby

As you see, the result is the same. This confirms we must use CACHE and NO ORDER if we want to use the sequences from the read-only Physical Standby.


X Convención de Informática UMG San Marcos


Note: This article was originally written in Spanish. If you are reading it in another language, it is because of ToadWorld's automatic translator.

Last Saturday I had the opportunity to attend the "X Convención de Informática", organized by the Universidad Mariano Galvez de San Marcos.

At this convention I had the opportunity to participate as a speaker. I shared with the students many anecdotes from my time at university, as well as the problems I ran into throughout my career and how I solved them. I had the pleasure of giving them many recommendations and letting them know that, although the Computer Science and Systems Engineering degree is not easy, it is a degree worth studying; the effort is considerable but the journey is worth it.

At the end of the talks, we took pictures with the students.

Special thanks to Marlon García and David Velasquez for the invitation to participate in such a nice event, and also special thanks to the Universidad Mariano Galvez de San Marcos for making these events possible.



How to rename an ASM Diskgroup


Oracle ASM was introduced in Oracle Database 10g and, since then, several enhancements have been introduced with every version. Nowadays ASM is the most common filesystem used by Database Administrators to store database files, and it is also highly recommended by Oracle. That said, maintenance tasks on ASM Disks and ASM Diskgroups are very frequent. In this article we will focus on only one maintenance task: renaming an ASM Diskgroup. This task sounds easy to perform, but we will see that it's not; it needs careful execution by the DBA, especially because it requires downtime. Whenever we need downtime it is necessary to coordinate with the other areas of the company, like the application team and sometimes the sysadmins. It is also highly recommended to have a backup of the database before proceeding. While renaming a diskgroup only the headers of the disks are modified, not the data; but as a best practice, and if you don't like headaches like me, it's better to have a backup.

In this article we will perform the following activity: we already have one ASM diskgroup called "DATA" and we will rename it to "DATA2".

 

The first step is to know which databases will be impacted if we unmount the ASM Diskgroup "DATA". To know this, we can query the view "v$asm_client", which will show us which database instances are using the diskgroup that we want to rename. To do that, first let's find the ASM Diskgroup number for the diskgroup "DATA":

SQL> select group_number, name from v$asm_diskgroup where name='DATA';

GROUP_NUMBER NAME
------------ ------------------------------
1            DATA

SQL> select group_number, instance_name, db_name, status from v$asm_client where group_number=1;

GROUP_NUMBER INSTANCE_NAME   DB_NAME  STATUS
------------ --------------- -------- ------------
1            +ASM            +ASM     CONNECTED
1            orcl            orcl     CONNECTED

Ok, we have found that there is one database instance using the diskgroup. Now it's time to review that database instance because we have to shut it down. 

[oracle@a1 ~]$ ps -ef |grep pmon
grid 3759 1 0 Oct19 ? 00:00:08 asm_pmon_+ASM
oracle 3851 1 0 Oct19 ? 00:00:09 ora_pmon_orcl
oracle 12038 12016 0 12:36 pts/2 00:00:00 grep pmon
[oracle@a1 ~]$

I will review where the datafiles of this database are located. This step is important because not all databases have their datafiles in the same ASM Diskgroup; to avoid any surprises later I am checking the location of the datafiles. In this case, all the datafiles are located in the same ASM Diskgroup, "DATA".


SQL> select name from v$datafile;

NAME
-------------------------------------------
+DATA/orcl/datafile/system.262.912909191
+DATA/orcl/datafile/sysaux.257.912909191
+DATA/orcl/datafile/undotbs1.261.912909191
+DATA/orcl/datafile/users.271.912909191
+DATA/orcl/datafile/tbs1.279.918100673
+DATA/orcl/datafile/tbs2.256.918102673

Now it's time to check out some information about the disks of the diskgroup "DATA":

SQL> select group_number, state, name, label, path from v$asm_disk where group_number=1;

GROUP_NUMBER STATE    NAME       PATH
------------ -------- ---------- -----------------------------
1            NORMAL   DATA_0002  /dev/oracleasm/disks/ASMDISK3
1            NORMAL   DATA_0001  /dev/oracleasm/disks/ASMDISK2
1            NORMAL   DATA_0000  /dev/oracleasm/disks/ASMDISK1

We see that three disks will be involved in the activity. As I said before, only the headers are modified, not the data.
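
If you are curious about what exactly will be changed, kfed (shipped with Grid Infrastructure) can read a disk header and show the diskgroup name recorded there; a sketch, assuming kfed is in the grid user's PATH (the kfdhdb.grpname field holds the diskgroup name):

[grid@a1 ~]$ kfed read /dev/oracleasm/disks/ASMDISK1 | grep grpname

Before the rename this should report DATA; after the rename it should report DATA2.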

Shutting down the database: This step is required because the ASM Diskgroup DATA must be unmounted. 

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
orcl

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

Unmount the ASM Diskgroup:

With the user "grid" which is the owner of the Grid Infrastructure we check out the current status of the ASM Diskgroup:

[grid@a1 ~]$ asmcmd lsdg
State   Type   Rebal Sector Block AU     Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED NORMAL Y     512    4096 1048576 115262   95528   51489           22019   0    N   DATA/
[grid@a1 ~]$

And now I will proceed to unmount it:

[grid@a1 ~]$ asmcmd umount DATA
[grid@a1 ~]$ asmcmd lsdg
[grid@a1 ~]$

Once the ASM Diskgroup has been unmounted we can proceed to rename the diskgroup using the tool "renamedg".

 

Renaming the ASM Diskgroup:

To perform the renaming of the ASM Diskgroup we will use the tool "renamedg". As with most Oracle tools, "-help" will tell us a lot of useful information about how to use it. I recommend taking a couple of minutes to read the description of every option.

[grid@a1 ~]$ renamedg -help

Parsing parameters..
phase                Phase to execute,
                      (phase=ONE|TWO|BOTH), default BOTH

dgname               Diskgroup to be renamed

newdgname            New name for the diskgroup

config               intermediate config file

check                just check-do not perform actual operation,
                      (check=TRUE/FALSE), default FALSE

confirm              confirm before committing changes to disks,
                      (confirm=TRUE/FALSE), default FALSE

clean                ignore errors,
                      (clean=TRUE/FALSE), default TRUE

asm_diskstring       ASM Diskstring (asm_diskstring='discoverystring',
                      'discoverystring1' ...)

verbose              verbose execution,
                      (verbose=TRUE|FALSE), default FALSE

keep_voting_files    Voting file attribute,
                      (keep_voting_files=TRUE|FALSE), default FALSE

[grid@a1 ~]$

The most important thing to know about this tool is that it works in two phases.

  • Phase one: This phase generates a configuration file to be used in phase two.
  • Phase two: This phase uses the configuration file to perform the renaming of the disk group.

That said, I recommend running "renamedg" with the option "check=true" first; doing so, it will not write anything to the headers of the ASM Disks. It will only perform phase one, which is the creation of the configuration file, and it will check the steps of phase two without really performing them.

 

Running "renamedg" with "check=true":

[grid@a1 ~]$ renamedg phase=both dgname=DATA newdgname=DATA2 asm_diskstring='/dev/oracleasm/disks/' check=true verbose=true

Parsing parameters..

Parameters in effect:

Old DG name : DATA
New DG name : DATA2
Phases :
Phase 1
Phase 2
Discovery str : /dev/oracleasm/disks/
Check : TRUE
Clean : TRUE
Raw only : TRUE
renamedg operation: phase=both dgname=DATA newdgname=DATA2 asm_diskstring=/dev/oracleasm/disks/ check=true verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:/dev/oracleasm/disks/
Identified disk UFS:/dev/oracleasm/disks/ASMDISK1 with disk number:0 and timestamp (33017185 1812365312)
Identified disk UFS:/dev/oracleasm/disks/ASMDISK2 with disk number:1 and timestamp (33017185 1812365312)
Identified disk UFS:/dev/oracleasm/disks/ASMDISK3 with disk number:2 and timestamp (33017186 -1487072256)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:/dev/oracleasm/disks/
Identified disk UFS:/dev/oracleasm/disks/ASMDISK1 with disk number:0 and timestamp (33017185 1812365312)
Identified disk UFS:/dev/oracleasm/disks/ASMDISK2 with disk number:1 and timestamp (33017185 1812365312)
Identified disk UFS:/dev/oracleasm/disks/ASMDISK3 with disk number:2 and timestamp (33017186 -1487072256)
Checking if the diskgroup is mounted or used by CSS
Checking disk number:0
Checking disk number:1
Checking disk number:2
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for /dev/oracleasm/disks/ASMDISK1
Leaving the header unchanged
Looking for /dev/oracleasm/disks/ASMDISK2
Leaving the header unchanged
Looking for /dev/oracleasm/disks/ASMDISK3
Leaving the header unchanged
Completed phase 2
Terminating kgfd context 0x7fc8de5cb0a0
[grid@a1 ~]$

There are some important messages in the output: the message "Leaving the header unchanged" means that the disks were not modified. Only phase one was performed (creating a config file) and the disks were reviewed without changes. That's because we executed "renamedg" with the option "check=true".

After executing it we will see the config file created in the same directory from which we executed "renamedg"; since we didn't specify a name for the config file, the default name "renamedg_config" is used:

[grid@a1 ~]$ ls -ltr renamedg_config
-rw-r--r-- 1 grid oinstall 123 Oct 20 12:54 renamedg_config
[grid@a1 ~]$

Let's take a look into the config file created by the phase one:

[grid@a1 ~]$ cat renamedg_config
/dev/oracleasm/disks/ASMDISK1 DATA DATA2
/dev/oracleasm/disks/ASMDISK2 DATA DATA2
/dev/oracleasm/disks/ASMDISK3 DATA DATA2
[grid@a1 ~]$

Only the disks of the ASM Diskgroup DATA are listed, three in this case. The second column appears to be the current name of the ASM Diskgroup (DATA) and the third column the new name of the ASM Diskgroup (DATA2). 

 

Performing the ASM Diskgroup renaming: Since we already executed phase one, we will re-execute "renamedg", this time only for phase two and using the config file generated by phase one:

[grid@a1 ~]$ renamedg dgname=DATA newdgname=DATA2 asm_diskstring='/dev/oracleasm/disks/' verbose=true phase=two config='/home/grid/renamedg_config'

Parsing parameters..

Parameters in effect:

Old DG name : DATA
New DG name : DATA2
Phases :
Phase 2
Discovery str : /dev/oracleasm/disks/
Clean : TRUE
Raw only : TRUE
renamedg operation: dgname=DATA newdgname=DATA2 asm_diskstring=/dev/oracleasm/disks/ verbose=true phase=two config=/home/grid/renamedg_config
Executing phase 2
Looking for /dev/oracleasm/disks/ASMDISK1
Modifying the header
Looking for /dev/oracleasm/disks/ASMDISK2
Modifying the header
Looking for /dev/oracleasm/disks/ASMDISK3
Modifying the header
Completed phase 2
Terminating kgfd context 0x7f7b3673c0a0
[grid@a1 ~]$

It takes just a few seconds to complete. 

 

Mounting the ASM Diskgroup: The next step is to mount the ASM Diskgroup. Don't forget that you have to mount it using the new name, because it has already been renamed. 

[grid@a1 ~]$ asmcmd mount DATA2
[grid@a1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Block  AU       Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  Y      512     4096   1048576  115262    95528    51489            22019           0              N             DATA2/
[grid@a1 ~]$

After validating that the ASM Diskgroup is again in status "MOUNTED", we can proceed with the post-renaming steps. 

 

Renaming the Spfile:

The first of the post-renaming steps is to modify the spfile reference that the database instance uses in order to open our database. In this case the database instance uses a pfile in "$ORACLE_HOME/dbs", but that pfile is only a pointer to an spfile stored inside the ASM Diskgroup "DATA". Since the new diskgroup name is "DATA2", we have to update that information:

[oracle@a1 ~]$ cat $ORACLE_HOME/dbs/initorcl.ora
SPFILE='+DATA/orcl/spfileorcl.ora'
[oracle@a1 ~]$
[oracle@a1 ~]$ vi $ORACLE_HOME/dbs/initorcl.ora
[oracle@a1 ~]$ cat $ORACLE_HOME/dbs/initorcl.ora
SPFILE='+DATA2/orcl/spfileorcl.ora'
[oracle@a1 ~]$

Once we have made that change, we can start the database instance up in NOMOUNT state:

[oracle@a1 ~]$ sqlplus / as sysdba

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 1870647296 bytes
Fixed Size 2254304 bytes
Variable Size 503319072 bytes
Database Buffers 1358954496 bytes
Redo Buffers 6119424 bytes
SQL>

 

Modifying the Control File location in the spfile:

We have already started the database instance, but before proceeding to mount it we have to do another step: change the location of the control files inside the spfile. To do so, I am creating a temporary pfile from the current spfile:

SQL> create pfile='/home/oracle/stagePfile.ora' from spfile;

File created.

SQL>

I will modify the current location of the control files with the new ASM Diskgroup:

[oracle@a1 ~]$ cat /home/oracle/stagePfile.ora|grep DATA
*.control_files='+DATA/orcl/controlfile/current.275.912909297'
[oracle@a1 ~]$

[oracle@a1 ~]$ vi /home/oracle/stagePfile.ora
[oracle@a1 ~]$ cat /home/oracle/stagePfile.ora|grep DATA
*.control_files='+DATA2/orcl/controlfile/current.275.912909297'
[oracle@a1 ~]$

Once the change is done, in order to create an spfile from the temporary pfile we have to shut down the instance again, re-create the spfile from the temporary pfile, and then start the database instance up to the MOUNT state:

[oracle@a1 ~]$ sqlplus / as sysdba

SQL> shutdown immediate;
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> create spfile='+DATA2/orcl/spfileorcl.ora' from pfile='/home/oracle/stagePfile.ora';

File created.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1870647296 bytes
Fixed Size 2254304 bytes
Variable Size 503319072 bytes
Database Buffers 1358954496 bytes
Redo Buffers 6119424 bytes
Database mounted.

Renaming the Database Files:

So far we have renamed the spfile and updated the control file locations. The last step is to rename every file used by the database: redo logs, datafiles, temporary files and, in case you are using a block change tracking file, that one as well. In order to rename these files, I have used the following query to generate the statements that do the work:

SQL> set head off
SQL> select 'alter database rename file '''||name||''' to '''||replace(name, 'DATA','DATA2')||''';' from v$datafile
union
select 'alter database rename file '''||member||''' to '''||replace(member, 'DATA','DATA2')||''';' from v$logfile
union
select 'alter database rename file '''||name||''' to '''||replace(name, 'DATA','DATA2')||''';' from v$tempfile;


alter database rename file '+DATA/orcl/onlinelog/group_1.259.916424605' to '+DATA2/orcl/onlinelog/group_1.259.916424605';
alter database rename file '+DATA/orcl/onlinelog/group_2.266.916424607' to '+DATA2/orcl/onlinelog/group_2.266.916424607';
alter database rename file '+DATA/orcl/onlinelog/group_3.270.916424607' to '+DATA2/orcl/onlinelog/group_3.270.916424607';
alter database rename file '+DATA/orcl/tempfile/temp.263.912909305' to '+DATA2/orcl/tempfile/temp.263.912909305';
alter database rename file '+DATA/orcl/datafile/sysaux.257.912909191' to '+DATA2/orcl/datafile/sysaux.257.912909191';
alter database rename file '+DATA/orcl/datafile/system.262.912909191' to '+DATA2/orcl/datafile/system.262.912909191';
alter database rename file '+DATA/orcl/datafile/tbs1.279.918100673' to '+DATA2/orcl/datafile/tbs1.279.918100673';
alter database rename file '+DATA/orcl/datafile/tbs2.256.918102673' to '+DATA2/orcl/datafile/tbs2.256.918102673';
alter database rename file '+DATA/orcl/datafile/undotbs1.261.912909191' to '+DATA2/orcl/datafile/undotbs1.261.912909191';
alter database rename file '+DATA/orcl/datafile/users.271.912909191' to '+DATA2/orcl/datafile/users.271.912909191';

10 rows selected.

The statements to rename every file used by the database have been generated; all we have to do now is execute them:


SQL> alter database rename file '+DATA/orcl/datafile/system.262.912909191' to '+DATA2/orcl/datafile/system.262.912909191';

Database altered.

SQL> alter database rename file '+DATA/orcl/datafile/sysaux.257.912909191' to '+DATA2/orcl/datafile/sysaux.257.912909191';

Database altered.

SQL> alter database rename file '+DATA/orcl/datafile/undotbs1.261.912909191' to '+DATA2/orcl/datafile/undotbs1.261.912909191';

Database altered.

SQL> alter database rename file '+DATA/orcl/datafile/users.271.912909191' to '+DATA2/orcl/datafile/users.271.912909191';

Database altered.

SQL> alter database rename file '+DATA/orcl/datafile/tbs1.279.918100673' to '+DATA2/orcl/datafile/tbs1.279.918100673';

Database altered.

SQL> alter database rename file '+DATA/orcl/datafile/tbs2.256.918102673' to '+DATA2/orcl/datafile/tbs2.256.918102673';

Database altered.

SQL> alter database rename file '+DATA/orcl/onlinelog/group_1.259.916424605' to '+DATA2/orcl/onlinelog/group_1.259.916424605';

Database altered.

SQL> alter database rename file '+DATA/orcl/onlinelog/group_2.266.916424607' to '+DATA2/orcl/onlinelog/group_2.266.916424607';

Database altered.

SQL> alter database rename file '+DATA/orcl/onlinelog/group_3.270.916424607' to '+DATA2/orcl/onlinelog/group_3.270.916424607';

Database altered.

SQL> alter database rename file '+DATA/orcl/tempfile/temp.263.912909305' to '+DATA2/orcl/tempfile/temp.263.912909305';

Database altered.

 

Opening the database in read-write:

Once all the files have been renamed, we are ready to open the database normally (ALTER DATABASE OPEN) and verify it:

SQL> set head on
SQL> select name , open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
ORCL      READ WRITE

SQL>


Invisible Columns in Oracle 12c

$
0
0

Starting in Oracle 12.1.0.1 there are several new features (more than 500, I have heard), and one of the good features for developers is "Invisible Columns". Invisible Columns allow a developer to create a table with some special columns. These special columns are not shown to everybody who is using the table; in order to get the value of such a column, whoever is performing DML against the table must specify the name of the column explicitly, otherwise the table behaves as if it didn't have that column. This is useful when an application has changed but some users are still using the former "structure" of the table. In this case Invisible Columns can be used: the new users know that they must specify the new columns explicitly, while the old users can keep using the former structure without issues. I will show you a couple of examples in this article so you get to know all the "properties" around Invisible Columns. 

To begin, you have to know that invisible columns can be created at table creation time; the column syntax changes only slightly, as you can see in the following example:

Now let's create a table with invisible columns:

SQL> create table dgomez.TableWithInvisibleColumns (
col1 varchar2 (20) visible,
col2 varchar2 (20) invisible); 

Table created.

Now let's  see how DMLs work with Invisible Columns:

 

Insert Operations: 

If we don't specify the invisible column explicitly in an insert operation but still try to use it, we will get an error. For example, in the following statement I am not specifying the column "col2" (our invisible column) explicitly, but I am trying to use it because I am inserting two values:

SQL> insert into dgomez.TableWithInvisibleColumns values ('b','b');
insert into dgomez.TableWithInvisibleColumns values ('b','b')
*
ERROR at line 1:
ORA-00913: too many values

SQL>

The correct way to use the invisible column is the following, specifying "col2" explicitly; that lets Oracle know that we are aware of the invisible column and that we indeed want to use it:

SQL> insert into dgomez.TableWithInvisibleColumns (col1, col2) values ('a','a');

1 row created.

SQL>

 

Select Operations:

Select operations behave the same way: if we want to get the values of the invisible columns, we have to specify the names of the invisible columns in the SELECT statement. For example, in the following query we are trying to get all the columns from the table "dgomez.TableWithInvisibleColumns", but only one column is returned. Even if we specify "*", that is not a guarantee for Oracle that we are aware of the invisible column, so Oracle returns only the "visible" columns. 

SQL> select * from dgomez.TableWithInvisibleColumns;

COL1
--------
a

If we want to get the values of the invisible columns we have to specify the names, as the following example:

SQL> select col1, col2 from dgomez.TableWithInvisibleColumns;

COL1  COL2
----- -----
a     a

SQL>

Are the values stored physically into the table?

Yes. Invisible columns are not the same as "Virtual Columns"; those are totally different. With Virtual Columns, the value (or the expression that produces the value) is stored as metadata of the column, but the value itself is not stored physically (indexes behave differently, as you can read in my last article). With Invisible Columns the value is in fact stored physically; only the visibility of the column is managed as metadata, but the data is there. 
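To see this at the block level, we can dump the data block that stores the row. This is a minimal sketch of how such a dump can be produced; the file number (6) and block number (227) are taken from the trailer of the dump shown below, and the dump output itself ends up in a trace file:

SQL> select dbms_rowid.rowid_to_absolute_fno(rowid,'DGOMEZ','TABLEWITHINVISIBLECOLUMNS') file#,
            dbms_rowid.rowid_block_number(rowid) block#
     from dgomez.TableWithInvisibleColumns;

SQL> alter system dump datafile 6 block 227;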


data_block_dump,data header at 0x7f340fe60264
===============
tsiz: 0x1f98
hsiz: 0x14
pbl: 0x7f340fe60264
76543210
flag=--------
ntab=1
nrow=1
frre=-1
fsbo=0x14
fseo=0x1f91
avsp=0x1f7b
tosp=0x1f7b
0xe:pti[0] nrow=1 offs=0
0x12:pri[0] offs=0x1f91
block_row_dump:
tab 0, row 0, @0x1f91
tl: 7 fb: --H-FL-- lb: 0x1 cc: 2
col 0: [ 1] 61  
--> In ascii 'a'
col 1: [ 1] 61  
--> In ascii 'a' (This is the value of Invisible Column)
end_of_block_dump
End dump data blocks tsn: 4 file#: 6 minblk 227 maxblk 227

Metadata of the Invisible Columns:

So, what if I am not just one more user of the table? What if I am the DBA of that table and I want to know which columns are invisible and which are not? There should be a way to know this. The first thought would be a "DBA_" view, but which one? We might expect DBA_TAB_COLUMNS to have that information, but after a "DESC DBA_TAB_COLUMNS" we see there is no column called "VISIBLE" or "VISIBILITY" or anything like that. Oracle didn't add a new column to describe the visibility of every column in a table; the view DBA_TAB_COLUMNS does have the information, but it is handled through a column that already exists: COLUMN_ID. When a column has NULL as the value of COLUMN_ID, that column is invisible, as in the following example:


SQL> select table_name, column_name, column_id from dba_tab_columns where owner='DGOMEZ' and table_name='TABLEWITHINVISIBLECOLUMNS';

TABLE_NAME                COLUMN_NAME  COLUMN_ID
------------------------- ------------ ----------
TABLEWITHINVISIBLECOLUMNS COL1         1
TABLEWITHINVISIBLECOLUMNS COL2

SQL>

We clearly see that the column "COL2" has a NULL value, that means that COL2 is Invisible.  

 

Adding Invisible Columns:

Invisible columns can be created not only at table creation time; we can also add them afterwards using "ALTER TABLE". In the following example I will show you how to add an invisible column, and I will also confirm another property of invisible columns: virtual columns can be invisible too:

SQL> alter table dgomez.TableWithInvisibleColumns add (col3 invisible as (col1||col2) virtual ) ;

Table altered.

 

Does the structure of the table show the invisible columns?

To answer this question, let's describe the table. Usually we use "DESCRIBE" to have a quick look at the table's structure:

SQL> desc dgomez.TableWithInvisibleColumns;

Name   Null?  Type
------ ------ ----------------------------
COL1          VARCHAR2(20)

SQL>
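As an aside, SQL*Plus has a setting that makes DESCRIBE list invisible columns as well. This is a minimal sketch, assuming the SET COLINVISIBLE option available in the 12c SQL*Plus client:

SQL> set colinvisible on
SQL> desc dgomez.TableWithInvisibleColumns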

The plain DESCRIBE output, then, gives no hint about invisible columns. Now let's extract the structure using "DBMS_METADATA":

SQL> select dbms_metadata.get_ddl('TABLE','TABLEWITHINVISIBLECOLUMNS','DGOMEZ') from dual;

DBMS_METADATA.GET_DDL('TABLE','TABLEWITHINVISIBLECOLUMNS','DGOMEZ')
--------------------------------------------------------------------------------

CREATE TABLE "DGOMEZ"."TABLEWITHINVISIBLECOLUMNS"
( "COL2" VARCHAR2(20) INVISIBLE,
"COL3" VARCHAR2(40) INVISIBLE GENERATED ALWAYS AS ("COL1"||"COL2") VIRTUAL ,
"COL1" VARCHAR2(20)
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS"

SQL>

There is a very interesting detail here. Do you remember the order in which we created the columns of that table? At table creation I put "COL1" as the first column and "COL2" as the second column; after that I added a third column (COL3) via "ALTER TABLE". But look at how DBMS_METADATA returns the DDL of that table: all the invisible columns are placed at the beginning. If you use that DDL to create new tables and later decide to make those columns VISIBLE, the order of the columns will be different from the original table's DDL. 

 

Are indexes supported on Invisible Columns?

The answer is yes, we can create indexes on them. Here are a couple of examples:

SQL> create index dgomez.Index1OnInvisibleColumn on dgomez.TableWithInvisibleColumns (col2);

Index created.

SQL> create index dgomez.Index2OnInvisibleColumn on dgomez.TableWithInvisibleColumns (col2,col3);

Index created.

 

Are Partition Keys supported on Invisible Columns?

This is interesting as well: when creating a partitioned table, we can select an invisible column as the partition key:

SQL> create table dgomez.Table3WithInvisibleColumns (
col1 varchar2 (20),
col2 varchar2 (20) invisible)
partition by hash (col2)
partitions 2;

Table created.

 

How to change the visibility of a column?

To finish this article, I will show you how to change a column from invisible to visible and from visible to invisible:

SQL> alter table dgomez.Table3WithInvisibleColumns modify (col2 visible);

Table altered.

SQL> alter table dgomez.Table3WithInvisibleColumns modify (col2 invisible);

Table altered.

SQL>


Oracle Developer Tour 2016 - Guatemala

$
0
0

On Monday, November 21, 2016, the first Oracle Developer Tour took place in Guatemala. This is the only event that brings together speakers holding Java Master certifications and titles such as Java Champion. The event was attended by developers from all over Guatemala who work with Oracle development technologies such as ADF, APEX, Oracle PL/SQL, Java and Forms, among others. The Oracle Developer Tour was born out of the need to serve a segment of experts that was not being covered by the Oracle Technology Network Tour (OTN Tour), which takes place in August. The OTN Tour visits 11 Latin American countries and is focused on Infrastructure, Database, Middleware and Engineered Systems experts, but it does not cover development with Oracle technology. On the other hand, for several years the "Apex Tour" had been held, an event similar to the OTN Tour but specialized in APEX topics. The people who started the Apex Tour took the initiative to expand it to more development technologies such as Forms & Reports, PL/SQL and Java, and that is how the Oracle Developer Tour came about: an event born in 2016 that was held in 7 Latin American countries, among them Argentina, Brazil, Mexico, Colombia, Guatemala, Costa Rica and Panama.

In Guatemala the event was a complete success; we had international speakers and two Java Champions:

The agenda of the event was the following:

Some photos from the event:

A special thanks to our sponsors Tomitribe and Certificatic, and to everyone who supported us at the event. See you at the Oracle Developer Tour 2017!

  

Oracle DB 12.2 Local Undo: PDB undo tablespace creation

$
0
0

The documentation for 12cR2 was released and with that several people are learning all the new features. Some questions are starting to come up; for example, when configuring Local Undo (the new undo configuration in Oracle 12cR2), at what moment are the undo tablespaces created for the Pluggable Databases? That is the question I will address in this article. 

Firstly let me show you the Environment I am using for this example:

  • Oracle 12.2.0.1.0 Enterprise Edition Extreme Performance (Oracle Cloud). 
  • Two Pluggable Databases already created: NuvolaPDB1 and NuvolaPDB2.

The first step is of course to configure Local Undo in the Container Database:

Configuring Oracle Local Undo:

SQL> shutdown immediate;
SQL> startup upgrade;

SQL> alter database local undo on;

Database altered.

SQL> shutdown immediate;
SQL> startup;

At this point the Container Database has been started up and PDB$SEED is in read only, by default. The other two Pluggable Databases didn't open automatically, because they don't have a saved state (a feature introduced in 12.1.0.2), so they remain mounted. The point here (and it is the key) is that the only PDB that has been opened is PDB$SEED.
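For reference, the saved-state feature mentioned above is a one-liner per PDB (a minimal sketch, not executed in this walkthrough, since the whole point here is that the PDBs stay closed): once a PDB has been opened, its state can be saved so that it reopens automatically on the next CDB startup:

SQL> alter pluggable database NuvolaPDB1 open;
SQL> alter pluggable database NuvolaPDB1 save state;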

SQL> select name, open_mode from v$pdbs

NAME       OPEN_MODE
---------- ----------
PDB$SEED   READ ONLY
NUVOLAPDB1 MOUNTED
NUVOLAPDB2 MOUNTED

Undo tablespace creation in PDB$SEED:

The answer to this article's question is: in a "Local Undo" configuration, the undo tablespace for a Pluggable Database is created when that Pluggable Database is opened for the first time. Oracle likes this style for the Multitenant Architecture; whenever a Pluggable Database opens for the first time, Oracle performs some housekeeping. You can read more about this in my article "The semi-patching of a PDB right after creation". In this case, the first opening of a PDB right after configuring Local Undo is when the undo tablespace is created in that specific Pluggable Database. 

The proof of that is that since only the PDB$SEED has been opened after configuring Local Undo, only that PDB got its undo tablespace created:

Completed: ALTER DATABASE MOUNT
ALTER DATABASE OPEN
PDB PDB$SEED(2)converted to local undo mode, scn: 0x000000008a2f8eb0
PDB$SEED(2):Autotune of undo retention is turned on.
PDB$SEED(2):Undo initialization finished serial:0 start:344096331 end:344096559 diff:228 ms (0.2 seconds)
PDB$SEED(2):Database Characterset for PDB$SEED is US7ASCII
PDB$SEED(2):Opatch validation is skipped for PDB PDB$SEED (con_id=0)
PDB$SEED(2):Opening pdb with no Resource Manager plan active
PDB$SEED(2):CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 116391936 AUTOEXTEND ON NEXT 3145728 MAXSIZE 10307919872 ONLINE
PDB$SEED(2):[2595] Successfully onlined Undo Tablespace 3.
PDB$SEED(2):Completed: CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 116391936 AUTOEXTEND ON NEXT 3145728 MAXSIZE 10307919872 ONLINE
PDB$SEED(2):JIT: pid 2595 requesting stop
PDB$SEED(2):Autotune of undo retention is turned on.
PDB$SEED(2):Endian type of dictionary set to little
PDB$SEED(2):Undo initialization finished serial:0 start:344102100 end:344102100 diff:0 ms (0.0 seconds)
PDB$SEED(2):Database Characterset for PDB$SEED is US7ASCII
PDB$SEED(2):Opatch validation is skipped for PDB PDB$SEED (con_id=0)
PDB$SEED(2):Opening pdb with no Resource Manager plan active
Completed: ALTER DATABASE OPEN

In the next query we can see that only PDB$SEED has "UNDO_1" tablespace created:

SQL> select pdb.name PDB_NAME, tbs.name TABLESPACE_NAME from v$tablespace tbs, v$pdbs pdb where tbs.con_id=pdb.con_id order by 1;

PDB_NAME    TABLESPACE_NAME
----------- ------------------------------
NUVOLAPDB1  TEMP
NUVOLAPDB1  SYSAUX
NUVOLAPDB1  SYSTEM
NUVOLAPDB2  TEMP
NUVOLAPDB2  SYSTEM
NUVOLAPDB2  SYSAUX
PDB$SEED    UNDO_1
PDB$SEED    TEMP
PDB$SEED    SYSAUX
PDB$SEED    SYSTEM

11 rows selected.

Undo tablespace creation in other PDBs:

Now let's open another Pluggable Database, NuvolaPDB1:

SQL> alter pluggable database NuvolaPDB1 open;

Pluggable database altered.

Since this is the first time NuvolaPDB1 is opened after configuring Local Undo, its undo tablespace is created in that opening:

alter pluggable database NuvolaPDB1 open
NUVOLAPDB1(3):Endian type of dictionary set to little
PDB NUVOLAPDB1(3) converted to local undo mode, scn: 0x000000008a2f90c0
NUVOLAPDB1(3):Autotune of undo retention is turned on.
NUVOLAPDB1(3):Undo initialization finished serial:0 start:345249262 end:345249847 diff:585 ms (0.6 seconds)
NUVOLAPDB1(3):Database Characterset for NUVOLAPDB1 is US7ASCII
NUVOLAPDB1(3):Opatch validation is skipped for PDB NUVOLAPDB1 (con_id=0)
NUVOLAPDB1(3):Opening pdb with no Resource Manager plan active
NUVOLAPDB1(3):CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 116391936 AUTOEXTEND ON NEXT 3145728 MAXSIZE 10307919872 ONLINE
NUVOLAPDB1(3):[2595] Successfully onlined Undo Tablespace 3.
NUVOLAPDB1(3):Completed: CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 116391936 AUTOEXTEND ON NEXT 3145728 MAXSIZE 10307919872 ONLINE
Pluggable database NUVOLAPDB1 opened read write
Completed: alter pluggable database NuvolaPDB1 open

Now PDB$SEED and NuvolaPDB1 are the only PDBs that have an undo tablespace. NuvolaPDB2 is still pending, simply because it has not been opened yet:

SQL> select pdb.name PDB_NAME, tbs.name TABLESPACE_NAME from v$tablespace tbs, v$pdbs pdb where tbs.con_id=pdb.con_id order by 1;

PDB_NAME     TABLESPACE_NAME
------------ -----------------
NUVOLAPDB1   TEMP
NUVOLAPDB1   UNDO_1
NUVOLAPDB1   SYSAUX
NUVOLAPDB1   SYSTEM
NUVOLAPDB2   TEMP
NUVOLAPDB2   SYSTEM
NUVOLAPDB2   SYSAUX
PDB$SEED     UNDO_1
PDB$SEED     TEMP
PDB$SEED     SYSAUX
PDB$SEED     SYSTEM

11 rows selected.
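As an additional check (a minimal sketch; DATABASE_PROPERTIES exposes the LOCAL_UNDO_ENABLED property in 12.2), we can also confirm that the CDB is indeed running in Local Undo mode:

SQL> select property_name, property_value
     from database_properties
     where property_name = 'LOCAL_UNDO_ENABLED';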


catcdb.sql and the util.pm issue in Oracle Database 12.2

$
0
0

A couple of days ago I was playing with Oracle Database 12.2.0.1.0 Enterprise Edition Extreme Performance (released last week in the Cloud) and I tried to create a new Container Database. As per the 12.2 documentation (released a couple of weeks ago), we have to execute the "catcdb.sql" script right after creating a Container Database (CDB). This script is located in $ORACLE_HOME/rdbms/admin.

So after executing the CREATE DATABASE statement (using the new Local Undo, by the way) I executed the script as shown below: 


SQL> @?/rdbms/admin/catcdb.sql
SQL>
SQL> Rem The script relies on the caller to have connected to the DB
SQL>
SQL> Rem This script invokes catcdb.pl that does all the work, so we just need to
SQL> Rem construct strings for $ORACLE_HOME/rdbms/admin and
SQL> Rem $ORACLE_HOME/rdbms/admin/catcdb.pl
SQL>
SQL> Rem $ORACLE_HOME
SQL> column oracle_home new_value oracle_home noprint
SQL> select sys_context('userenv', 'oracle_home') as oracle_home from dual;

SQL>
SQL> Rem OS-dependent slash
SQL> column slash new_value slash noprint
SQL> select sys_context('userenv', 'platform_slash') as slash from dual;

SQL>
SQL> Rem $ORACLE_HOME/rdbms/admin
SQL> column rdbms_admin new_value rdbms_admin noprint
SQL> select '&&oracle_home'||'&&slash'||'rdbms'||'&&slash'||'admin' as rdbms_admin from dual;
old 1: select '&&oracle_home'||'&&slash'||'rdbms'||'&&slash'||'admin' as rdbms_admin from dual
new 1: select '/u01/app/oracle/product/12.2.0/dbhome_1'||'/'||'rdbms'||'/'||'admin' as rdbms_admin from dual

SQL> Rem $ORACLE_HOME/rdbms/admin/catcdb.pl
SQL> column rdbms_admin_catcdb new_value rdbms_admin_catcdb noprint
SQL> select '&&rdbms_admin'||'&&slash'||'catcdb.pl' as rdbms_admin_catcdb from dual;
old 1: select '&&rdbms_admin'||'&&slash'||'catcdb.pl' as rdbms_admin_catcdb from dual
new 1: select '/u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin'||'/'||'catcdb.pl' as rdbms_admin_catcdb from dual

SQL> SQL> host perl -I &&rdbms_admin &&rdbms_admin_catcdb --logDirectory &&1 --logFilename &&2
Enter value for 1: /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin
Enter value for 2: /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin/catcdb.pl

The script asked me to type two values. The documentation doesn't say that catcdb.pl will request any input, so I found this strange.

I spent a few seconds figuring out which values I should provide for the substitution variables &&1 and &&2, but looking at the lines above I saw that the values had already been built by the script; for some reason it was just not using them properly. Anyway, I used the values shown in those lines and hit Enter. The result was an error, the following one:

Can't locate util.pm in @INC (you may need to install the util module) (@INC contains: /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin /u01/app/oracle/product/12.2.0/dbhome_1/perl/lib/site_perl/5.22.0/x86_64-linux-thread-multi /u01/app/oracle/product/12.2.0/dbhome_1/perl/lib/site_perl/5.22.0 /u01/app/oracle/product/12.2.0/dbhome_1/perl/lib/5.22.0/x86_64-linux-thread-multi /u01/app/oracle/product/12.2.0/dbhome_1/perl/lib/5.22.0 .) at /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin/catcdb.pl line 35.
BEGIN failed--compilation aborted at /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin/catcdb.pl line 35.

Different thoughts came to my mind; the first one was that perhaps I had provided wrong values, but since these inputs are not documented I didn't know what else to type. After some investigation I found that the values were correct (so now you don't have to spend time on this [:)] ) and that the real problem was the "util.pm" Perl module. During my investigation I took a look into the "catcdb.pl" file at line #35, as the error says:

vi /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin/catcdb.pl

Line #32   use Cwd;
Line #33   use File::Spec;
Line #34   use Data::Dumper;
Line #35   use util qw(trim, splitToArray);
Line #36   use catcon qw(catconSqlplus);

I searched that perl module in my filesystem:

[oracle@NuvolaDB $]$ find $ORACLE_HOME -name util.pm | wc -l
0

But interestingly I didn't find any, then I tried (just for fun) to search "Util" instead of "util":

[oracle@NuvolaDB ~]$ find $ORACLE_HOME -name Util.pm | wc -l
5

I had 5 results [:O]

[oracle@NuvolaDB ~]$ find $ORACLE_HOME -name Util.pm
/u01/app/oracle/product/12.2.0/dbhome_1/perl/lib/site_perl/5.22.0/HTTP/Headers/Util.pm
/u01/app/oracle/product/12.2.0/dbhome_1/perl/lib/5.22.0/x86_64-linux-thread-multi/Hash/Util.pm
/u01/app/oracle/product/12.2.0/dbhome_1/perl/lib/5.22.0/x86_64-linux-thread-multi/Sub/Util.pm
/u01/app/oracle/product/12.2.0/dbhome_1/perl/lib/5.22.0/x86_64-linux-thread-multi/Scalar/Util.pm
/u01/app/oracle/product/12.2.0/dbhome_1/perl/lib/5.22.0/x86_64-linux-thread-multi/List/Util.pm
[oracle@NuvolaDB ~]$

I changed line #35 in the catcdb.pl file, replacing "util" with "Util":

vi /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin/catcdb.pl

Line #32     use Cwd;
Line #33     use File::Spec;
Line #34     use Data::Dumper;
Line #35     use Util qw(trim, splitToArray);
Line #36     use catcon qw(catconSqlplus);

Then I re-executed catcdb.pl and got the same error. But since I was pretty sure that Util.pm existed, I thought: perhaps I have to include the directory where Util.pm is located? So I added to the PATH environment variable the directory where one of the Util.pm files was located; it didn't work [:(] . Then I thought: should I move to that directory? Perhaps Oracle is not locating Util.pm through an environment variable but through "." (the current directory). I decided to move there:

[oracle@NuvolaDB ~]$ cd /u01/app/oracle/product/12.2.0/dbhome_1/perl/lib/5.22.0/x86_64-linux-thread-multi/Hash/

I re-executed catcdb.pl again and guess what? it worked [:)]

SQL> host perl -I &&rdbms_admin &&rdbms_admin_catcdb --logDirectory &&1 --logFilename &&2

Enter value for 1: /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin
Enter value for 2: /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin/catcdb.pl
Enter new password for SYS: Nuvola1
Enter new password for SYSTEM: Nuvola1
Enter temporary tablespace name: temp
No options to container mapping specified, no options will be installed in any containers
....
....
catcon.pl: completed successfully

NOTE: if you get the error "Can't locate Term/ReadKey.pm in @INC", this article is for you: "How to create a CDB with sqlplus and 12c Documentation"

The script took quite a while (more than 1 hour) to complete, but that is another story... So, to fix the util.pm issue, do the following:

  • Change Line #35 in catcdb.pl replacing "util" by "Util"
  • Move to the directory where "Util.pm" is located and execute catcdb.pl from there (or see the alternative sketch below)
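Alternatively (an untested sketch, based only on how Perl's @INC include path works and on the Hash/Util.pm location found above), the directory containing Util.pm could be added to the include path in the host command itself instead of changing directories:

SQL> host perl -I /u01/app/oracle/product/12.2.0/dbhome_1/perl/lib/5.22.0/x86_64-linux-thread-multi/Hash -I &&rdbms_admin &&rdbms_admin_catcdb --logDirectory &&1 --logFilename &&2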


¿A bug after configuring Local Undo in Oracle 12.2?

$
0
0

While playing with the Local Undo configuration in my Oracle Cloud environment, using the Oracle Database 12.2.0.1.0 Enterprise Edition Extreme Performance binaries, I ran into the following strange scenario. I searched for documentation about it in several places and didn't find anything, perhaps because the 12.2 Cloud binaries were released only a few days ago. Is it a bug? I don't know; that's why I am sharing my thoughts here, because I hit the behavior and I also found the workaround. If you hit this, let's say, "bug", you can apply the "workaround" and you will be fine.

Let me tell you a little bit more about my environment and how to reproduce this behavior:

  1. I am using Oracle Database 12.2.0.1.0 EE Extreme Performance (Oracle Cloud).
  2. I created a CDB with SQL statements without configuring Local Undo. I mean, I created the CDB with Local Undo OFF.
  3. I created 2 Pluggable Databases: NuvolaPDB1 and NuvolaPDB2

So at this point the CDB is running with Local Undo off and the two Pluggable Databases exist.

After that I started to run the steps to reproduce the scenario:

Configuring Local Undo:

SQL> shutdown immediate;
SQL> startup upgrade;
SQL> alter database local undo on;
SQL> shutdown immediate;
SQL> startup;

Interestingly, all the commands completed successfully and no errors were returned in the terminal. BUT! When I looked at the alert log, the following error appeared:

PDB$SEED(2):Undo initialization finished serial:0 start:358935410 end:358935501 diff:91 ms (0.1 seconds)
PDB$SEED(2):CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 125829120 AUTOEXTEND ON NEXT 3145728 MAXSIZE 10307919872 ONLINE
PDB$SEED(2):ORA-00060: deadlock resolved; details in file /u01/app/oracle/diag/rdbms/nuvolacg/NuvolaCG/trace/NuvolaCG_ora_25220.trc
PDB$SEED(2):ORA-60 signalled during: CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 125829120 AUTOEXTEND ON NEXT 3145728 MAXSIZE 10307919872 ONLINE...
PDB$SEED(2):Automatic creation of undo tablespace failed with error 604 60
Could not open PDB$SEED error=604
2016-11-24T05:17:17.630435+00:00
Errors in file /u01/app/oracle/diag/rdbms/nuvolacg/NuvolaCG/trace/NuvolaCG_ora_25220.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-00060: deadlock detected while waiting for resource

It is also interesting that PDB$SEED was left in READ WRITE, which is certainly a sign that something went wrong:

SQL> select name, open_mode from v$pdbs;

NAME       OPEN_MODE
---------- ----------
PDB$SEED   READ WRITE
NUVOLAPDB1 MOUNTED
NUVOLAPDB2 MOUNTED

So in order to leave the things in peace with Oracle (because I don't like to fight with it) I will put PDB$SEED in read only again, since that should be the default status:

SQL> alter pluggable database pdb$seed close;

Pluggable database altered.

SQL> alter pluggable database pdb$seed open read only;

Pluggable database altered.

I verified whether at least the undo tablespace in PDB$SEED had been created:

SQL> select pdb.name PDB_NAME, tbs.name TABLESPACE_NAME from v$tablespace tbs, v$pdbs pdb where tbs.con_id=pdb.con_id and pdb.name='PDB$SEED' order by 1

PDB_NAME      TABLESPACE_NAME
------------- ------------------
PDB$SEED    TEMP
PDB$SEED    SYSTEM
PDB$SEED    SYSAUX

3 rows selected.

OK, so after some investigation I found the trick. This is not documented, which is why I believe this behavior is a bug; otherwise it should be documented and Oracle should clearly state: "If you already have Pluggable Databases created and you enable Local Undo, all the already created Pluggable Databases must be opened in upgrade mode the first time right after configuring Local Undo". But that is not the case:

SQL> alter pluggable database pdb$seed close;

Pluggable database altered.

SQL> alter pluggable database pdb$seed open upgrade;

Pluggable database altered.

Opening the PDB in upgrade mode is the workaround. I confirmed it by checking the log: when I opened the PDB in upgrade mode, Oracle was able to create the undo tablespace:

alter pluggable database pdb$seed open upgrade
PDB$SEED(2):Autotune of undo retention is turned on.
PDB$SEED(2):Undo initialization finished serial:0 start:359616842 end:359616849 diff:7 ms (0.0 seconds)
PDB$SEED(2):CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE  SIZE 125829120 AUTOEXTEND ON NEXT 3145728 MAXSIZE 10307919872 ONLINE
PDB$SEED(2):[27995] Successfully onlined Undo Tablespace 3.
PDB$SEED(2):Completed: CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE  SIZE 125829120 AUTOEXTEND ON NEXT 3145728 MAXSIZE 10307919872 ONLINE
Pluggable database PDB$SEED opened in upgrade mode
Completed: alter pluggable database pdb$seed open upgrade

After verifying that everything is fine with PDB$SEED, I had to put it back in read only. Of course, none of these steps should have to be done by the DBA; PDB$SEED must open normally, without deadlocks, issues or errors. That's the normal behavior.

SQL> alter pluggable database pdb$seed close;

Pluggable database altered.

SQL> alter pluggable database pdb$seed open read only;

It is interesting that the PDBs created before configuring Local Undo don't open; we can try as many times as we want and we will get the same result, as I show you below:

SQL> alter pluggable database NuvolaPDB1 open;
alter pluggable database NuvolaPDB1 open
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 2
ORA-00060: deadlock detected while waiting for resource

As you can see, with other PDBs (not PDB$SEED) the error is returned in the terminal, so it is easy to see that something wrong is happening. With PDB$SEED we didn't receive any error; if I hadn't looked at the alert log, I wouldn't have realized that there was something wrong with PDB$SEED. Let's take a look at the alert log to confirm it is the same issue I had with PDB$SEED:

alter pluggable database NuvolaPDB1 open
NUVOLAPDB1(3):Undo initialization finished serial:0 start:360066830 end:360066892 diff:62 ms (0.1 seconds)
NUVOLAPDB1(3):CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE  SIZE 125829120 AUTOEXTEND ON NEXT 3145728 MAXSIZE 10307919872 ONLINE
NUVOLAPDB1(3):ORA-00060: deadlock resolved; details in file /u01/app/oracle/diag/rdbms/nuvolacg/NuvolaCG/trace/NuvolaCG_ora_27995.trc
NUVOLAPDB1(3):ORA-60 signalled during: CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE  SIZE 125829120 AUTOEXTEND ON NEXT 3145728 MAXSIZE 10307919872 ONLINE...
NUVOLAPDB1(3):Automatic creation of undo tablespace failed with error 604 60
ORA-604 signalled during: alter pluggable database NuvolaPDB1 open...

As I said before, it doesn't matter how many times we close and open the PDB, the result will be the same until we apply "the workaround":


SQL> alter pluggable database NuvolaPDB1 close;

Pluggable database altered.

SQL> alter pluggable database NuvolaPDB1 open;
alter pluggable database NuvolaPDB1 open
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 2
ORA-00060: deadlock detected while waiting for resource


SQL>  alter pluggable database NuvolaPDB1 close;

Pluggable database altered.

SQL> alter pluggable database NuvolaPDB1 open;
alter pluggable database NuvolaPDB1 open
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 2
ORA-00060: deadlock detected while waiting for resource

... until we apply "the workaround":


SQL> alter pluggable database NuvolaPDB1 close;

Pluggable database altered.

SQL> alter pluggable database NuvolaPDB1 open upgrade;

Pluggable database altered.

SQL> alter pluggable database NuvolaPDB1 close;

Pluggable database altered.

SQL> alter pluggable database NuvolaPDB1 open;

Pluggable database altered.

Another thing I confirmed is that the, let's say, "bug" happens only for Pluggable Databases created before configuring Local Undo, because for new Pluggable Databases the opening succeeds:


SQL> create pluggable database "NuvolaPDB4" ADMIN USER pdb4admin IDENTIFIED BY "Nuvola1";

Pluggable database created.

SQL> alter pluggable database NuvolaPDB4 open;

Pluggable database altered.

Some more comments:

  • No, this is not a "one-time" bug. I was able to replicate this scenario 3 times, recreating everything from scratch. This makes me think that more people may hit this behavior.
  • Will the on-premises 12.2.0.1.0 binaries get this fixed? Maybe, I don't know. But I already reported this behavior to some Product Managers.
  • Is this critical, will I lose data? No, this impacts only the undo tablespace creation. Apply the workaround and you will be fine.


Introduction to Application Containers in Oracle Database 12cR2

$
0
0

NOTE: This article was written using Oracle Public Cloud.

  


Introduction:

Developers and end users are the roles that use the database the most. Developers keep fixing code, maintaining legacy applications, creating new applications or creating new versions of the same applications. There are a lot of tasks involved in these activities, such as creating new databases for new applications, cloning the data of a production database, re-creating packages for new versions of the applications and, if we have several customers using those applications, syncing those customers' applications with the new data or refreshing them to the new version. Developers and DBAs work together; Oracle knows that, and that's why with every version of Oracle Database several functions, packages and features are introduced to help not only DBAs but also developers. In Oracle Database 12.2 a new feature called "Application Containers" was introduced, and it helps developers a lot with these day-to-day tasks. With Application Containers, developers can create Applications; every Application has its own data and version, and developers decide which database should have which version of the same Application and when to refresh the data. With Application Containers the developers keep the objects and data in only one place, not in every database in the organization, and sync all the dependent databases from that central place. There are also three levels of "Sharing" for that data; some of them allow storing the data in each PDB. This is what we will discuss in this article: how to create applications and how to sync them with the PDBs.


What is an Application Container? An Application Container is composed of one Application Root, zero or more Application Pluggable Databases (also known as Application Tenants), zero or one Application Seed and zero or more Applications.


Creating an Application Root:

An Application Root is a special Pluggable Database where the "Applications" are installed. Developers maintain the objects and data only in this Pluggable Database and later they can sync the Application PDBs with those objects and data. There may be only one Application Root per Application Container. Using the different "Sharing" levels, we can also store some data in each PDB.

In order to create an Application Root you have to be connected as SYS or another user with the required privileges:

SQL> show user
USER is "SYS"

You have to be connected to CDB$ROOT:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

Then you create the Application Root. As you can see below, the syntax to create an Application Root is very similar to creating a normal Pluggable Database; the difference is the addition of the clause "as application container".

SQL> create pluggable database AppRoot as application container admin user pdbadmin identified by xxxx;

Pluggable database created.

Opening the Application Root:

SQL> alter pluggable database AppRoot open;

Pluggable database altered.

Confirming the Application Root was created successfully:

SQL> select con_id, name , open_mode, application_root from v$pdbs where application_root='YES';

CON_ID     NAME       OPEN_MODE  APP
---------- ---------- ---------- ---
5          APPROOT    READ WRITE YES

From these steps you would realize that we added "as Application Container". So does that mean we created an Application Container or an Application Root? Well, this could be confusing but at the end it is simple. I prefer to see it this way: "When we create an Application Root, by default one Application Container is created because an Application Root cannot exist alone", or if you want... you can see it this way: "When we create an Application Container by default an Application Root is created". You can pick your preferred definition :)

Creating an Application Pluggable Database

An Application PDB, or Application Tenant, is a special Pluggable Database that can get metadata and data from the Application Root and can also have its own metadata and data; it depends on how the "Application" was created (we will discuss this "depends" later). Basically, Application PDBs are the databases that belong to one and only one Application Root; that's why when you create an Application PDB you must be connected to an Application Root. So an Application Root can have zero or many Application PDBs, but an Application PDB belongs to only one Application Root.

The first step to create an "Application PDB" is to be connected to an Application Root:

SQL> alter session set container=AppRoot;

Session altered.

Verify you are connected to the Application Root:

SQL> show con_name

CON_NAME
------------------------------
APPROOT

The creation of an Application PDB is exactly the same as creating a normal PDB; the only difference is that now we are connected to an Application Root:

SQL> create pluggable database AppPDB1 admin user apppdb1admin identified by xxxx;

Pluggable database created.

Opening the Application Tenant:

SQL> alter pluggable database AppPDB1 open;

Pluggable database altered.

Verifying the Application PDB was created successfully:

SQL> select con_id, name , open_mode, application_root, application_pdb from v$pdbs;

CON_ID     NAME     OPEN_MODE  APPLICATION_ROOT APPLICATION_PDB
---------- -------- ---------- ---------------- ---------------
5          APPROOT  READ WRITE YES              NO
6          APPPDB1 READ WRITE NO               YES

So far we have created one Application Container containing one Application Root and one Application PDB. But there is no Application yet; that is the next step.

Creating an Application

An Application is composed of objects and data. Every object can be created with one of three levels of "Sharing": Metadata-Linked, Data-Linked and Extended Data-Linked. Depending on which level of "Sharing" we use to create the objects, the objects and data will be shared from the Application Root or stored in each container.

Applications can be created only in an Application Root.

SQL> show con_name

CON_NAME
---------------
APPROOT

To install an Application you have to declare that you are starting to install it, and you must specify the name and the version of the Application. You can have several Applications in an Application Container as long as their names are different inside that Application Container. 

SQL> alter pluggable database application MyApp begin install '1.0';

Pluggable database altered.

After declaring that you are installing an "Application", all the following statements are marked as part of the installation; this is where you start to create all the objects and data:

SQL> create user test identified by xxxx;

User created.

SQL> grant connect, resource, unlimited tablespace to test;

Grant succeeded.

Metadata-Linked: A metadata link shares the database object’s metadata, but its data is unique to each container.

SQL> create table test.metadataLinkedTable SHARING=METADATA (name varchar2(20));

Table created.

SQL> insert into test.metadataLinkedTable values ('Guatemala');

1 row created.

SQL> commit;

Commit complete.

Data-Linked: A data link shares the database object, and its data is the same for all containers in the application container. Its data is stored only in the application root.

SQL> create table test.dataLinkedTable SHARING=DATA (name varchar2(20));

Table created.

SQL> insert into test.dataLinkedTable values ('Costa Rica');

1 row created.

SQL> commit;

Commit complete.

Extended Data-Linked: An extended data link shares the database object, and its data in the application root is the same for all containers in the application container. However, each application PDB in the application container can store data that is unique to that application PDB. Personally, I like to call this "Row-Linked" because some rows are stored in the Application PDB and some others in the Application Root; basically you are sharing a set of rows from the Application Root. 

SQL> create table test.extendedDataLinkedTable SHARING=EXTENDED DATA (name varchar2(20));

Table created.

SQL> insert into test.extendedDataLinkedTable values ('Nicaragua');

1 row created.

SQL> commit;

Commit complete.

To finish the installation of the Application, the following statement has to be executed, specifying the Application's name and version:

SQL> alter pluggable database application MyApp end install '1.0';

Pluggable database altered.


Excellent! So far we have created an "Application Container" containing 1 Application Root, 1 Application Tenant and 1 Application with 3 Tables: 1 metadata-linked Table, 1 data-linked Table and 1 Row-Linked Table.

The Application PDBs don't see the Application yet. This is because the synchronization is not automatic, as we can see below:

Checking out if the Application PDB "AppPDB1" has the objects of the Application "MyApp":

SQL> alter session set container=AppPDB1;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
APPPDB1

SQL> select * from test.metadataLinkedTable;
select * from test.metadataLinkedTable
*
ERROR at line 1:
ORA-00942: table or view does not exist

Synchronizing Application PDBs

In order to sync an "Application" to an "Application PDB" you have to open a session in that specific Application PDB:

SQL> alter session set container=AppPDB1;

Session altered.

SQL> show con_name

CON_NAME
------------
APPPDB1

Then execute the following sentence specifying the Application's name:

SQL> alter pluggable database application MyApp sync;

Pluggable database altered.

After executing the Application sync, we are able to see the objects and data, depending on how the SHARING clause was used:

SQL> select * from test.metadataLinkedTable;

NAME
--------------------
Guatemala

SQL> select * from test.dataLinkedTable ;

NAME
--------------------
Costa Rica

SQL> select * from test.extendedDataLinkedTable;

NAME
--------------------
Nicaragua

Now let's see the difference between the sharing levels. In order to explain this I have to do some more inserts into the tables. All these inserts will be executed from the Application PDB "AppPDB1":

SQL> show con_name

CON_NAME
------------------------------
APPPDB1

Insert #1:

SQL> insert into test.metadataLinkedTable values ('Mexico');

1 row created.

Insert #2:

SQL> insert into test.dataLinkedTable values ('Canada');
insert into test.dataLinkedTable values ('Canada')
*
ERROR at line 1:
ORA-65097: DML into a data link table is outside an application action

Insert #3:

SQL> insert into test.extendedDataLinkedTable values ('USA');

1 row created.

SQL> commit;

Commit complete.

Explanation of Insert #1: This insert was executed against a metadata-linked table; the insert was accepted from the Application PDB and the row is stored locally in that Application PDB. Also, for every row inserted while the Application was being installed there will be a copy in each synchronized Application PDB, because the rows are unique to each Application PDB. We can confirm that by checking the ROWID:

SQL> alter session set container=AppRoot;

Session altered.

SQL> show con_name

CON_NAME
----------
APPROOT

SQL> select c.con_id, p.name PDB_NAME, dbms_rowid.rowid_to_absolute_fno(t.rowid,'TEST','METADATALINKEDTABLE') file_num, t.name from test.metadataLinkedTable t, v$datafile c, v$pdbs p where c.file#=dbms_rowid.rowid_to_absolute_fno(t.rowid,'TEST','METADATALINKEDTABLE') and c.con_id=p.con_id;

CON_ID     PDB_NAME   FILE_NUM   NAME
---------- ---------- ---------- ----------
5          APPROOT    38         Guatemala

This means that the row is stored in datafile #38, and that datafile belongs to the container called "AppRoot", which in this case is the Application Root.

SQL> alter session set container=AppPDB1;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
APPPDB1

SQL> select c.con_id, p.name PDB_NAME, dbms_rowid.rowid_to_absolute_fno(t.rowid,'TEST','METADATALINKEDTABLE') file_num, t.name from test.metadataLinkedTable t, v$datafile c, v$pdbs p where c.file#=dbms_rowid.rowid_to_absolute_fno(t.rowid,'TEST','METADATALINKEDTABLE') and c.con_id=p.con_id;

CON_ID     PDB_NAME     FILE_NUM   NAME
---------- -------- ---------- --------------------
6          APPPDB1  41         Guatemala
6          APPPDB1  41         Mexico

And now you see that the same row "Guatemala" was also stored in a different datafile, in this case datafile #41, which belongs to the PDB called "AppPDB1", an Application PDB. Additionally, the row "Mexico" was also inserted in the same datafile. This confirms that with this level of "Sharing" each container has its own data. 

As you see there are two rows with the same value "Guatemala", one inserted in "AppRoot" and other stored in "AppPDB1", this is because every row here is stored in each container.

Explanation of Insert #2: In this case we tried to insert a row into a Data-Linked table and we received an error. This is because in a table using SHARING=DATA (Data-Linked) only rows inserted in the Application Root, as part of an application action, are allowed; those rows are later synced to the Application PDBs. No rows are accepted in the individual Application PDBs.
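For completeness, here is a minimal sketch of how such a row could be added: the insert has to run in the Application Root inside an application action (an application patch in this sketch; the patch number and MINIMUM VERSION value are illustrative), and the Application PDBs then see the new row on their next sync:

SQL> alter session set container=AppRoot;
SQL> alter pluggable database application MyApp begin patch 1 minimum version '1.0';
SQL> insert into test.dataLinkedTable values ('Canada');
SQL> commit;
SQL> alter pluggable database application MyApp end patch 1;

and then, from each Application PDB: alter pluggable database application MyApp sync;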

Explanation of Insert #3: This insert was executed against an extended data-linked table (Row-Linked). The insert was accepted from the Application PDB and the row is stored in that specific Application PDB, because the INSERT was executed inside the Application PDB; if we had executed the INSERT from the Application Root, the row would have been stored in the Application Root and shared with the Application PDBs. I tried to confirm this using the ROWID, but ROWID cannot be used against a row-linked table; the following error is returned:

ORA-02031: no ROWID for fixed tables or for external-organized tables

So you can use the following query to confirm that some rows are returned from "Application Root" and some others from the local "Application PDB":

SQL> select con_id, owner, table_name, common_data from cdb_tables where table_name='EXTENDEDDATALINKEDTABLE'

CON_ID     OWNER  TABLE_NAME                COMMON_DATA
---------- ------ ------------------------- -----------
6          TEST   EXTENDEDDATALINKEDTABLE   YES

The meaning of the column COMMON_DATA is the following:

SQL> select owner, table_name, column_name, comments from cdb_COL_COMMENTS where column_name like 'COMMON_DATA%' and table_name='CDB_TABLES' and con_id=1

OWNER  TABLE_NAME COLUMN_NAME     COMMENTS
------ ---------- --------------- -----------------------------------------
SYS    CDB_TABLES COMMON_DATA     Whether the table is enabled for fetching
                                  common data from Root

SYS    CDB_TABLES COMMON_DATA_MAP Whether the table is enabled for use with
                                  common_data_map database property

I had to get the definition from the data dictionary because those columns are not yet documented in the 12.2 public Oracle Database documentation (Database Reference book); I already sent an email to Oracle asking why :)

Conclusion:

So far you have seen an introduction to "Application Containers". We created an "Application Container"; by default an "Application Root" was created, then we created an "Application PDB" and we installed an application with three tables. Finally, we inserted some rows and saw how the "Sharing" levels work. 



How to solve user errors with Oracle Flashback 12cR2 and its enhancements

$
0
0

Introduction:

Flashback is a technology introduced in Oracle Database 10g to provide fixes for user errors. For example, one of the most common issues it can solve is when a DELETE operation was executed without a proper WHERE clause. Another case: a user has dropped a table but after some time that table is required. And the worst-case error: the data of a complete database has been logically corrupted. There are several use cases for Flashback technology, all of them focused on recovering objects and data or simply reverting data from the past. Flashback technology is not a replacement for other recovery methods such as RMAN hot backups, cold backups or datapump export/import; Flashback technology is a complement. While RMAN is the main tool to recover and restore physical data, Flashback technology is used for logical corruptions. For instance, it cannot be used to restore a datafile, while RMAN is the perfect tool for that purpose. Also, be careful when NOLOGGING operations are used; Flashback Database cannot restore changes through NOLOGGING.

Flashback Technology includes several "Flashback Operations", among them Flashback Drop, Flashback Table, Flashback Query, Flashback Version, Flashback Transaction and Flashback Database. They use different data sources to restore/revert user data from the past. The following table shows which data source is used for which Flashback operation:

Flashback Operation       Data Source

Flashback Database        Flashback Logs
Flashback Drop            Recycle Bin
Flashback Table           Undo Data
Flashback Query           Undo Data
Flashback Version         Undo Data
Flashback Transaction     Undo Data

In this article, we will focus on Flashback Database, a feature that is able to "flash back" a complete database to a point in the past. Flashback Database has the following use cases:

  • Taking a database to an earlier SCN: This is really useful when a new version of an application needs to be tested and all the changes made for the testing discarded afterwards. In this case, a new environment (for testing or dev) must be created that contains the data in the production database at a specific time in the past.
  • Recovery through resetlogs: Flashback Database can revert (logically) a database to a specific date in the past, even if that specific date precedes that of a RESETLOGS operation.
  • Activating a Physical Standby Database: With Oracle Database 10g, Flashback Database can be used in a Physical Standby. The Physical Standby can be opened in read-write for testing purposes and when the activity completes, the database can be reverted to the time before the Physical Standby was activated.
  • Creating a Snapshot Standby: In 11g, Snapshot Standby was introduced. The concept is basically to automate all the steps involved in activating (opening in read-write) a Physical Standby in version 10g, then later make it Physical Standby again (with recovery). This "automated" conversion of a Physical Standby into a “Snapshot Standby” uses Flashback Database transparently to the DBA. 
  • Configuring Fast Start Failover: To configure Fast Start Failover in Data Guard Broker, Flashback Database is required.
  • Reinstating a Physical Standby: Data Guard broker uses Flashback Database to reinstate a former primary after Failover operations. Read more about reinstating a database in the following articles: Role Operations with Snapshot Standby 12c, Role Operations involving two 12c Standby Databases.
  • Upgrade testing: A Physical Standby can be used to test an upgrade; in this case, the Physical Standby is opened in read-write and upgraded. Applications can be tested with the upgraded database and when the activity completes the Physical Standby can be reverted to the former version using Flashback Database. The Transient Logical Standby method for upgrades also involves Flashback Database.

How Flashback Database works:

When blocks are modified in the Buffer Cache, some of the before-the-change block images are stored in the Flashback Buffer and subsequently stored physically in the Flashback Logs by the RVWR process. All blocks are captured: index blocks, table blocks, undo blocks, segment headers, etc. When a Flashback Database operation is performed, Oracle uses the target time and checks out its Flashback Logs to find which Flashback Logs have the required block images with the SCN right before the target time. Then Oracle restores the appropriate data blocks from Flashback Logs to the Datafiles, applies redo records to reach the exact target time, and when the Database is opened with resetlogs, the changes that were not committed are rolled back using undo data to finally have a consistent database ready to be used.
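
If you want to observe this mechanism, the view V$FLASHBACK_DATABASE_LOG exposes the current size of the Flashback Logs, the retention target and the oldest SCN/time you can flash back to. A minimal sketch (output omitted because it depends entirely on your workload):

SQL> select oldest_flashback_scn, oldest_flashback_time, retention_target, flashback_size, estimated_flashback_size from v$flashback_database_log;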

Flashback Database Enhancements:

Flashback Database has had several enhancements since it was introduced, with the biggest ones arriving in 12.1 and 12.2. In Oracle Database 12.1, Flashback Database supported Container Databases (CDBs) in the Multitenant Architecture; however, Flashback Database at the PDB level was not possible. Oracle Database 12cR2 added Flashback Database support at the PDB level. This was enabled thanks to another good feature introduced in Oracle Database 12.2 called "Local Undo". Local Undo allows you to create an undo tablespace in each Pluggable Database and use it to store undo data locally for that specific PDB. Local Undo must be enabled at the CDB level. However, if the CDB is not running in Local Undo mode, Flashback Pluggable Database can still be used, but the mechanism is totally different: in Shared Undo mode, Flashback Pluggable Database needs an auxiliary instance in which the required tablespaces are restored and recovered to perform the Flashback Database operation, and a switch is then performed between the current tablespaces and the newly restored-and-recovered tablespaces in the required Pluggable Database.

NOTE: All the examples in this article were created using Oracle Public Cloud:

Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production
PL/SQL Release 12.2.0.1.0 - Production
CORE 12.2.0.1.0 Production
TNS for Linux: Version 12.2.0.1.0 - Production
NLSRTL Version 12.2.0.1.0 – Production

Enabling Flashback:

Local Undo is used in this example:

SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME        PROPERTY_VALUE
-------------------- ---------------
LOCAL_UNDO_ENABLED   TRUE

To read more about Local Undo and Shared Undo the following articles are recommended: Oracle DB 12.2 Local Undo: PDB undo tablespace creation, How to Enable and Disable Local Undo in Oracle 12.2.

Flashback cannot be enabled at the PDB level in 12.1 and 12.2.0.1; it must be enabled at the CDB level. Before you can enable Flashback in your CDB you have to ensure that enough space is available to store the Flashback Logs. Oracle recommends using the following generic formula to size your Fast Recovery Area:

Target FRA = (Current FRA)+[DB_FLASHBACK_RETENTION_TARGET x 60 x Peak Redo Rate (MB/sec)]
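
As a quick worked example (all figures below are hypothetical): with 100 GB currently allocated in the FRA, DB_FLASHBACK_RETENTION_TARGET = 1440 minutes (one day) and a peak redo rate of 2 MB/sec, the formula gives 100 GB + (1440 x 60 x 2 MB), roughly 269 GB:

SQL> -- hypothetical figures: 100 GB current FRA, 1440 min retention, 2 MB/s peak redo
SQL> select 100 + (1440 * 60 * 2)/1024 as target_fra_gb from dual;

TARGET_FRA_GB
-------------
       268.75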

After setting up the FRA space properly, Flashback may be enabled:

SQL> alter database flashback on; 

Database altered.
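
If you want to double-check that the feature is now active, V$DATABASE reports the flashback status; a quick sketch, which should return YES at this point:

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES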

 

Creating a table and some rows

To test the result of the Flashback Database operation, I will create a table with some rows in it; that data will be used to flash back the database and verify that it was successfully reverted to a past time.

SQL> alter session set container=nuvolapdb2;

Session altered.

SQL> create table deiby.piece (piece_name varchar2(20));

Table created.

SQL> insert into deiby.piece values ('King');

SQL> insert into deiby.piece values ('Queen');

SQL> insert into deiby.piece values ('Rook');

SQL> insert into deiby.piece values ('Bishop');

SQL> insert into deiby.piece values ('Knight');

SQL> insert into deiby.piece values ('Pawn');

SQL> commit;

Commit complete.

SQL> select * from deiby.piece;

PIECE_NAME
--------------------
King
Queen
Rook
Bishop
Knight
Pawn

6 rows selected.

 

Restore Point creation

To perform Flashback Database, a restore point, a guaranteed restore point, an SCN or a timestamp is required. In this example a normal restore point is used.

SQL> create restore point before_resetlogs for pluggable database nuvolapdb2;

Restore point created.

SQL> SELECT name, pdb_restore_point, scn, time FROM V$RESTORE_POINT;

NAME              PDB SCN        TIME
----------------- --- ---------- -------------------------------
BEFORE_RESETLOGS  YES 3864200    09-JAN-17 08.12.56.000000000 PM
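
If the restore point must survive the flashback retention policy, a guaranteed restore point can be created instead. A sketch at the CDB level with a hypothetical name (remember that a guaranteed restore point keeps flashback logs until it is explicitly dropped, so watch your FRA space):

SQL> create restore point before_upgrade guarantee flashback database;

Restore point created.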

 

Truncating and dropping the table

Now let's assume a user error: a DBA, developer, or end user truncates a table and then drops it. This is a simple example, but you can make this "logical error" as complex as you want, as long as no physical error is involved and NOLOGGING operations are not used.

Truncating the table:

SQL> truncate table deiby.piece;

Table truncated. 

Drop the table with purge:

SQL> drop table deiby.piece purge;

Table dropped.

 

Open the database with resetlogs:

To make it more interesting, I will simulate a recovery-until-time operation in order to perform a resetlogs operation:

RMAN> recover pluggable database nuvolapdb2 until scn 3864712;
Starting recover at 09-JAN-17
current log archived
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 09-JAN-17

Opening the Pluggable Database with resetlogs:

RMAN> alter pluggable database nuvolapdb2 open resetlogs;

Statement processed

We can verify that indeed a new incarnation was created for the PDB; the incarnation query shown below (after the PDB is closed) confirms it.

Flashback the database

Now it's time for the magic: the new feature introduced in Oracle Database 12.2 called "Flashback Pluggable Database". To use Flashback Database at the Pluggable Database level, the PDB must first be closed.

SQL> alter pluggable database nuvolapdb2 close;

Pluggable database altered.

SQL> select con_id, db_incarnation# db_inc#, pdb_incarnation# pdb_inc#, status,incarnation_scn from v$pdb_incarnation where con_id=4;

CON_ID     DB_INC#    PDB_INC#   STATUS  INCARNATION_SCN
---------- ---------- ---------- ------- ---------------
4          1          5          CURRENT 3864712
4          1          0          PARENT  1

Then Flashback PDB may be used:

SQL> flashback pluggable database nuvolapdb2 to restore point before_resetlogs;

Flashback complete. 

After a Flashback PDB operation, the PDB must be opened with resetlogs:

SQL>  alter pluggable database nuvolapdb2 open resetlogs;

Pluggable database altered.

Verifying the data

Once the Flashback PDB has completed successfully, the data that existed before the truncate, drop and resetlogs (and even more if you want) can be queried:

SQL> alter session set container=nuvolapdb2;

Session altered. 

SQL> select * from deiby.piece;

PIECE_NAME
--------------------
King
Queen
Rook
Bishop
Knight
Pawn

6 rows selected. 

A quick look at the incarnations will show that a new incarnation was created for the PDB (Incarnation #6) and the former Incarnation was made orphan (Incarnation #5).

SQL> select con_id, db_incarnation# db_inc#, pdb_incarnation# pdb_inc#, status,INCARNATION_SCN from v$pdb_incarnation where con_id=4;

CON_ID     DB_INC#    PDB_INC#   STATUS  INCARNATION_SCN
---------- ---------- ---------- ------- ---------------
4          1          6          CURRENT 3864201
4          1          0          PARENT  1
4          1          5          ORPHAN  3864712

Conclusion:

Flashback Database has several use cases and is a very useful feature that DBAs should keep "in their pocket", ready to use when they need to revert a database to a time in the past. It allows you to test upgrades, activate a physical standby, undo user errors, and test applications, all without worry. I'm sure Oracle will keep improving this feature; perhaps in the next version we will gain the ability to enable Flashback at the PDB level, among other functions. For now, the enhancements made in 12.1 and 12.2 are enough to work with non-CDBs, CDBs and PDBs.

 

About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil. Twitter | LinkedIn 

 

Changing DB Parameters for Oracle Database on Amazon RDS


Nowadays, a DBA has to know how to administer a database not only on-premise but also in the Cloud. Cloud is not the future, it's the present, and several databases are being moved from on-premise to the Cloud. Since we are in the transition phase, a DBA has to know how to shut down, start, create tablespaces, change database parameters, and perform several other tasks on Amazon RDS, Oracle Public Cloud, Microsoft Azure, other providers and, of course, on-premise. Every cloud provider has many things in common with the others, but each also has its own way to perform certain tasks. In this article I will cover how to change the values of database parameters in Amazon RDS. If you are an experienced on-premise Oracle DBA you may think that changing database parameters is the easiest thing, and indeed it is, but it works differently on Amazon RDS, and you will see why.

I have broken up the article in the following sections:

  • The on-premise approach
  • The research 
  • How to change DB parameters in Amazon RDS using AWS Management Console
  • How to change DB parameters in Amazon RDS using CLI
  • Conclusion

For this article I have used the following environment:

Database Name: Oracle
Database Type: Amazon RDS - Single Instance
Database version: 12.1.0.2

The on-premise approach

An experienced on-premise DBA would want to execute an "ALTER SYSTEM SET"; after all, the user that Amazon RDS provides has the "DBA" role, and the "ALTER SYSTEM" privilege is included in that role. Let's follow that approach and see what happens:

SQL> alter system set statistics_level='BASIC' scope=spfile;
alter system set statistics_level='BASIC'
*
ERROR at line 1:
ORA-01031: insufficient privileges

So, here is the first problem. If we were using an on-premise database with a user that has the "DBA" role, the same statement works:

SQL> alter system set statistics_level='BASIC' scope=spfile;

System altered.

The only difference is that the first database was on Amazon RDS and the second was on-premise, so this difference must hold the reason for the "insufficient privileges" error. Now let's move on to the research part.

The research 

The first question we have to clarify is why a user with the "DBA" role doesn't have the privilege to execute "ALTER SYSTEM". The reason is that in Amazon RDS, the "DBA" role has two fewer privileges than in an on-premise database:

In Amazon RDS:

SQL> select count(*) from dba_sys_privs where grantee='DBA' and privilege like 'ALTER%';

COUNT(*)
----------
32

On-premise database (same version, 12.1.0.2):

SQL> select count(*) from dba_sys_privs where grantee='DBA' and privilege like 'ALTER%';

COUNT(*)
----------
34

I found that the two privileges that were removed are:

  1. ALTER DATABASE
  2. ALTER SYSTEM

So that's the reason why we cannot execute "ALTER SYSTEM SET" in our Oracle database on Amazon RDS.
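
A quick way to check this on your own instance is the query below (a sketch; on an on-premise 12.1.0.2 database it returns both privileges, while on Amazon RDS it should return no rows):

SQL> select privilege from dba_sys_privs where grantee='DBA' and privilege in ('ALTER SYSTEM','ALTER DATABASE');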

Now you may think: why not use SYS? And here is the second difference between an Oracle database on Amazon RDS and on-premise: an Oracle Database on Amazon RDS doesn't allow you to use the SYS and SYSTEM users, as per the Amazon documentation:

"The SYS user, SYSTEM user, and other administrative accounts are locked and cannot be used."

I also recommend reading the following notes:

    • Oracle Database Support for Amazon AWS EC2 (Doc ID 2174134.1)
    • Amazon RDS Support for AIA (Doc ID 2003294.1)

The Amazon documentation says two things: the accounts are locked, and they cannot be used. 

The first claim is not accurate; neither SYS nor SYSTEM is actually locked:

SQL> select username, account_status from dba_users where username in ('SYS','SYSTEM')

USERNAME      ACCOUNT_STATUS
------------- -----------------
SYSTEM        OPEN
SYS           OPEN

But the Amazon documentation is correct when it says they cannot be used, and you will see why :)

I tried to change the password of SYS just for fun and the output was this:

SQL> alter user sys identified by Manager1;

alter user sys identified by Manager1
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20900: ALTER USER SYS not allowed.
ORA-06512: at "RDSADMIN.RDSADMIN", line 208
ORA-06512: at line 2

That was my first meeting with RDSADMIN.RDSADMIN. 

When I saw "RDSADMIN.RDSADMIN" I thought: "I can change the password of SYS, but this function RDSADMIN.RDSADMIN is not allowing it". I mean, without this function that sentence should work. Of course this is a customized function created by Amazon RDS, it is not created by default by Oracle Database.

Then I took a look at the triggers that call it:

 SQL> select owner, trigger_name, trigger_type, status, triggering_event, trigger_body from dba_triggers where owner='RDSADMIN' and triggering_event like '%DDL%';

OWNER    TRIGGER_NAME     TRIGGER_TYPE STATUS   TRIGG TRIGGER_BODY
-------- ---------------  ------------ -------- ----- ------------
RDSADMIN RDS_DDL_TRIGGER  BEFORE EVENT ENABLED  DDL   BEGIN
                                                      rdsadmin.secure_ddl;
                                                      END;

RDSADMIN RDS_DDL_TRIGGER2 BEFORE EVENT ENABLED  DDL   BEGIN
                                                      rdsadmin.secure_ddl;
                                                      END;

First of all, I don't understand why there are two triggers with the same trigger type, the same triggering event, the same status (enabled), calling exactly the same procedure (rdsadmin.secure_ddl); it seems as if the Amazon RDS developers were playing on production. Who knows!

Anyway, I tried to disable that trigger; after all, my user has the "DBA" role, right? :)

SQL> alter trigger RDSADMIN.RDS_DDL_TRIGGER disable;
alter trigger RDSADMIN.RDS_DDL_TRIGGER disable
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20900: RDS restricted DDL found: ALTER TRIGGER SYS.RDS_DDL_TRIGGER
ORA-06512: at "RDSADMIN.RDSADMIN", line 407
ORA-06512: at line 2

Again our friend RDSADMIN.RDSADMIN....

I did some research on the "RDSADMIN.RDSADMIN" package and, after several tests with different inputs (and some cups of coffee), I observed the following:

  • It is a customized package, included in every Oracle Database on Amazon RDS, that is in charge of verifying what is allowed and what is not. If you are SYS, SYSTEM or RDSADMIN, these validations are likely not applied; but if you are any other user, several validations are performed on the statement you are trying to execute. 
  • It doesn't allow you to grant ALTER SYSTEM, ALTER DATABASE, GRANT ANY PRIVILEGE, DATAPUMP_EXP_FULL_DATABASE, DATAPUMP_IMP_FULL_DATABASE, IMP_FULL_DATABASE or EXP_FULL_DATABASE. If you try to do it, you will get the error "ORA-20997". 
  • It validates which objects are being touched by the statement you are executing, and if those objects belong to the schemas SYS, SYSTEM or RDSADMIN it cancels your statement and you receive the error "ORA-20900", as happened when I tried to disable the trigger.
  • It doesn't allow you to alter or drop the RDSADMIN schema. 
  • It doesn't allow you to change the password of the SYS, SYSTEM and RDSADMIN users.
  • It doesn't allow you to revoke any privilege from the RDSADMIN user. (This is the first time that, with a user that has the DBA role, I felt so unprivileged.)
  • It doesn't allow you to ALTER or DROP the tablespace RDSADMIN. (Yes, there is a tablespace called RDSADMIN created by default, and also a user profile called RDSADMIN. RDSADMIN is everywhere!)
  • It doesn't allow you to add datafiles to any tablespace specifying a full path. You must let OMF handle that (see the sketch after this list).
  • It allows you to create new tablespaces. Finally!
  • It allows you to compile packages, procedures, functions, triggers and views. Only to compile.
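
For example, creating a tablespace works on RDS as long as you let OMF choose the file location (db_create_file_dest is already set on RDS). A minimal sketch; the tablespace name and sizes are hypothetical:

SQL> create tablespace apps_data datafile size 1g autoextend on maxsize 10g;

Tablespace created.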

So when someone asks you why it is not possible to change a db parameter with "ALTER SYSTEM", you will know what to say :)

How to change DB parameters in Amazon RDS using AWS Management Console

In order to change a DB parameter on Amazon RDS, you must use a "Parameter Group". I like the concept behind this because it is a cloud-oriented concept, which is good. A Parameter Group, as the name says, is a set of parameters identified by a name. The good thing about this is that a Parameter Group can be shared by several Oracle database instances. Instead of managing the parameters of every single database instance separately, as on-premise, Amazon allows you to create one single group and re-use that group across several database instances. That way you keep your databases standardized. You can have a Parameter Group for all your Dev databases, another Parameter Group for all your Test databases, and so on.

You can set a Parameter Group for your database when you are creating the Oracle RDS instance.

Or you can create a Parameter Group any time after the database creation; in that case, at creation time Amazon assigns a default Parameter Group, usually called "default.oracle-ee-12.1". The problem is that the default Parameter Group cannot be modified: you cannot change the value of any parameter inside it.

 

Since the default Parameter Group cannot be modified, we must create another one. To do so, go to AWS Management Console -> Parameter Groups, click on the "Create Parameter Group" button and follow the instructions:

You have to select a "Parameter Group Family", which basically indicates for which kind of database engine and version you are creating the Parameter Group (for example, oracle-ee-12.1).

Provide a name for the Parameter Group and a description as well, then click on the "Create" button.

Once the Parameter Group is created, you will be able to modify the values of its parameters.

For a non-default Parameter Group, go to AWS Management Console -> Parameter Groups -> [your non-default Parameter Group] and then click on the "Edit Parameters" button. A page will open where you will find all the parameters and be able to change their values:

Once you have modified all the required parameters, click on the "Save Changes" button. Be aware that every parameter is either "dynamic" or "static". All the dynamic parameters are applied immediately, regardless of the "Apply Immediately" setting; for the static parameters you will have to reboot the Oracle RDS instance.

If you want to know whether your change was applied immediately or will be applied after a reboot, you can check AWS Management Console -> Parameter Groups -> [your non-default Parameter Group] -> Recent Events (button).

How to change DB parameters in Amazon RDS using CLI

Amazon provides a command-line tool (the AWS CLI). It allows you to make changes faster than through the AWS Management Console. 

To list the Parameter Groups:

Deibys-MacBook-Pro$ aws rds describe-db-parameter-groups
{
   "DBParameterGroups": [
      {
         "DBParameterGroupArn": "arn:aws:rds:us-west-2:062377963666:pg:default.oracle-ee-12.1",
         "DBParameterGroupName": "default.oracle-ee-12.1",
         "DBParameterGroupFamily": "oracle-ee-12.1",
         "Description": "Default parameter group for oracle-ee-12.1"
      }
   ]
}
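
A custom Parameter Group can also be created directly from the CLI instead of the console. A sketch; the description is made up, and the group name matches the one used later in this article:

Deibys-MacBook-Pro$ aws rds create-db-parameter-group --db-parameter-group-name nuvolaparameters --db-parameter-group-family oracle-ee-12.1 --description "Custom parameters for Nuvola databases"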

To List All the Parameter of a Parameter Group:

Deibys-MacBook-Pro$ aws rds describe-db-parameters --db-parameter-group-name default.oracle-ee-12.1
{
   "Parameters": [
      {
         "ApplyMethod": "pending-reboot",
         "Description": "_allow_level_without_connect_by",
         "DataType": "boolean",
         "AllowedValues": "TRUE,FALSE",
         "Source": "engine-default",
         "IsModifiable": true,
         "ParameterName": "_allow_level_without_connect_by",
         "ApplyType": "dynamic"
      },
      {
         "ApplyMethod": "pending-reboot",
         "Description": "_always_semi_join",
         "DataType": "string",
         "AllowedValues": "CHOOSE,OFF,CUBE,NESTED_LOOPS,MERGE,HASH",
         "Source": "engine-default",
         "IsModifiable": true,
         "ParameterName": "_always_semi_join",
         "ApplyType": "dynamic"
      },
     {.......}
   ]
}

The output can have a different format:

Deibys-MacBook-Pro$ aws rds describe-db-parameters --db-parameter-group-name default.oracle-ee-12.1 --output table

or

Deibys-MacBook-Pro$ aws rds describe-db-parameters --db-parameter-group-name default.oracle-ee-12.1 --output text

 

To filter one single parameter in a Parameter Group:

Deibys-MacBook-Pro$ aws rds describe-db-parameters --db-parameter-group-name default.oracle-ee-12.1 --output text --query "Parameters[?ParameterName=='statistics_level']"

or

Deibys-MacBook-Pro$ aws rds describe-db-parameters --db-parameter-group-name default.oracle-ee-12.1 --output table --query "Parameters[?ParameterName=='statistics_level']"

 

And finally, you change a parameter value with the following command:

Deibys-MacBook-Pro$ aws rds modify-db-parameter-group --db-parameter-group-name nuvolaparameters --parameters "ParameterName=statistics_level,ParameterValue=basic,ApplyMethod=pending-reboot"
{
     "DBParameterGroupName": "nuvolaparameters"
}
Deibys-MacBook-Pro$
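
Since the change above was queued with ApplyMethod=pending-reboot, the instance has to be rebooted for it to take effect, and the Parameter Group must be attached to the instance if it is not already. A sketch with a hypothetical instance identifier:

Deibys-MacBook-Pro$ aws rds modify-db-instance --db-instance-identifier nuvoladb --db-parameter-group-name nuvolaparameters --apply-immediately

Deibys-MacBook-Pro$ aws rds reboot-db-instance --db-instance-identifier nuvoladb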

Conclusion

To finish the article, I would say that using a Parameter Group is a good approach for on-cloud databases, because a group can be shared across several databases and that helps standardization. Both the AWS Management Console and the CLI allow a fast way to change parameters.

I didn't like the RDSADMIN.RDSADMIN package at first, because I am a DBA and I am used to having control of everything inside the database. But I understand Amazon's security perspective, and I understand that RDS is exactly that, a "Relational Database Service", which means others have the control and others maintain the database. It wouldn't make sense that, with a "DaaS", I still had to maintain the database myself. By using a DaaS I can focus on other areas like SQL tuning, instance tuning, reporting for the board of directors, capacity planning, monitoring, and so on. So in the end, the package is fine.

I think Amazon should ask whether we want to apply the changes in memory, in the spfile, or both, instead of applying them immediately whenever possible. +1 for Oracle Public Cloud :)

Always create your own Parameter Group before creating a database, so that you can use it from the beginning. The problem with using the default Parameter Group is that when you need to change a parameter value you will have to create a new Parameter Group, stop the database and assign the new Parameter Group, and that means downtime, especially when your database is already used by clients. 


About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil.

      

Near Zero Downtime PDB Relocation in Oracle Database 12cR2


Introduction

Oracle introduced Pluggable Databases in Oracle 12.1.0.1, back in 2013, and since then it has kept enhancing its Multitenant features. In Oracle 12.1.0.1 we had features to convert non-CDBs to PDBs and some features to copy and move pluggable databases. In Oracle 12.1.0.2 we saw several features for cloning (mostly remotely). However, before 12.2 all those features to create, copy, clone and move pluggable databases needed downtime, since the source PDB had to be read-only. Downtime is a bad word for companies. To learn how to clone, move and create PDBs in 12.1 you can read the following set of articles:

Beginning with Oracle Database 12.2.0.1 those features were enhanced and the downtime was replaced with the words "hot" or "online". Two features that I really like are "Hot Cloning" and "Online Relocation". Basically they are the same features as in 12.1.0.2 for cloning locally and remotely, but now they can be done online; the source PDB can stay in read-write. First let me tell you the difference between Hot Cloning and Online Relocation.

Hot Cloning: This feature allows you to "clone" a pluggable database either locally or remotely without putting the source PDB in read-only. It can stay in read-write, receiving DMLs.


Online Relocation: This feature allows you to "relocate" a pluggable database to another CDB with near zero downtime. After the PDB is transferred, the source PDB in the source CDB is removed; that's why it is called "relocation".

In this article we will discuss the feature "Near Zero Downtime PDB Relocation". This feature needs another new feature introduced in Oracle 12.2 called "Local Undo". If you don't know what Local Undo is, you can read some of my articles about it:

Online PDB Relocation uses a database link created in the target CDB and pointing to the CDB$ROOT of the source CDB. There are some privileges we have to grant, but that is covered later in the examples. Once the database link is created, the statement "CREATE PLUGGABLE DATABASE" is executed with the clause "RELOCATE" and the optional clause "AVAILABILITY (NORMAL|MAX|HIGH)". When the RELOCATE clause is used, Oracle creates an identical pluggable database in the target CDB while the source PDB is still open in read-write. While the new PDB in the target CDB is being created you can keep executing your DMLs as if nothing were happening against the source PDB; that's why it is called "online". When the statement completes, you will have two identical PDBs, one in the source CDB and another in the target CDB. During this time the source PDB keeps generating redo, which will be applied when the final "switch" is performed. That switch happens when the new PDB in the target CDB is opened in read-write: the source PDB is paused while the pending redo is applied to the new PDB, and once they are both totally synchronized, Oracle applies undo data in the new PDB to roll back the uncommitted transactions that were running in the source PDB. Once the undo is applied, the source PDB is deleted (all its datafiles) and the clients' sessions can be redirected to the new PDB. Even if new sessions are being created during this short step, Oracle can redirect them to the new PDB if the clause "AVAILABILITY" was used. With this good feature, PDBs can now be relocated from one CDB to another with near zero downtime.

In this article I will explain step by step how this feature works.

First of all, let me show you the environment I am using:

source CDB: NuvolaCG
target CDB: Nuvola2
source PDB: sourcepdb
database version: 12.2.0.1 (in both CDBs)
Both CDBs are running on Oracle Public Cloud (EE)

The article has the following sections:

  • Preparation
  • Copy-Phase
  • Relocation-Phase
  • Known Issues
  • Conclusion

Preparation

Create a common user in source CDB:

SQL> create user c##deiby identified by deiby container=all;

User created.

Granting privileges in source CDB:

SQL> grant connect, sysoper, create pluggable database to c##deiby container=all;

Grant succeeded.

Create a database link in target CDB pointing to CDB$ROOT of source CDB:

SQL> create database link dblinktosource connect to c##deiby identified by deiby using 'NUVOLACG';

Database link created.

Both the source CDB and the target CDB are in archivelog mode and have Local Undo enabled:

SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME        PROPERTY_V
-------------------- ----------
LOCAL_UNDO_ENABLED   TRUE
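
Archivelog mode can be confirmed in both CDBs as well; a quick sketch, which should return ARCHIVELOG on each side:

SQL> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG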

Copy Phase

Before the relocation process, the service where your clients are connecting is running in the source CDB:

[oracle@NuvolaDB Hash]$ lsnrctl service
Service "pdbsource.gtnuvolasa.oraclecloud.internal" has 1 instance(s).
   Instance "NuvolaCG", status READY, has 1 handler(s) for this service...
      Handler(s):
         "DEDICATED" established:0 refused:0 state:ready
            LOCAL SERVER
The command completed successfully
[oracle@NuvolaDB Hash]$

In another terminal, I created a session in the source PDB and executed two INSERT operations, one committed and one not committed. With this I will show you how committed and uncommitted transactions are handled:

SID    SERIAL#    USERNAME   MACHINE 
------ ---------- ---------- -------------------- 
367    44360      DGOMEZ    NuvolaDB.compute-gtn

SQL> insert into test values ('Guatemala');

SQL> commit;

Commit complete.

SQL> insert into test values ('USA');

1 row created.


Start the copy-phase:

In this phase the datafiles will be created in the new CDB and some redo also will be applied:

SQL> create pluggable database pdbsource from pdbsource@dblinktosource keystore IDENTIFIED BY "Nuv0la#1" relocate availability max;

Pluggable database created.

pdbsource (the new PDB) must have the same name as the source PDB.
pdbsource (after FROM) is the name of the source PDB.
dblinktosource is the name of the database link.
keystore identified by - not needed on-premise, but this is Oracle Public Cloud.
relocate - the clause that makes this operation a PDB relocation.
availability max - redirects new connections to the new PDB.

You can see that the status of the source PDB is now "RELOCATING":

SQL> select pdb_name, status from cdb_pdbs

PDB_NAME             STATUS
-------------------- ----------
PDB$SEED             NORMAL
PDBSOURCE            RELOCATING

In this phase some redo is also applied, as you can see below:

create pluggable database pdbsource from pdbsource@dblinktosource keystore identified by * relocate availability max
Pluggable Database PDBSOURCE with pdb id - 3 is created as UNUSABLE.
local undo-1, localundoscn-0x000000000011f500
Applying media recovery for pdb-4099 from SCN 1206636 to SCN 1206649
thr-1, seq-39, logfile-/u03/app/oracle/fast_recovery_area/NUVOLACG/foreign_archivelog/PDBSOURCE/2017_02_12/o1_mf_1_39_2572397703_.arc, los-1199468, nxs-18446744073709551615
PDBSOURCE(3):Media Recovery Start
PDBSOURCE(3):Serial Media Recovery started
PDBSOURCE(3):Media Recovery Log /u03/app/oracle/fast_recovery_area/NUVOLACG/foreign_archivelog/PDBSOURCE/2017_02_12/o1_mf_1_39_2572397703_.arc
PDBSOURCE(3):Incomplete Recovery applied until change 1206649 time 02/12/2017 08:20:48
PDBSOURCE(3):Media Recovery Complete (Nuvola2)
Completed: create pluggable database pdbsource from pdbsource@dblinktosource keystore identified by * relocate availability max


Once this phase has completed there will be two PDBs, one in the source CDB and another in the target CDB. The source PDB can still receive transactions; the transactions executed after the copy phase generate redo data, which will be applied in the relocation phase.

Relocation Phase

In the terminal where I have the session created, I will perform a couple more INSERTs. Be aware that these statements were executed after the copy phase, to show you that the source PDB can still receive DMLs; that's why it is called "online":

SQL> rollback;

Rollback complete.

SQL> insert into test values ('Canada');

SQL> commit;

Commit complete.

SQL> insert into test values ('Nicaragua');

1 row created.


The relocation phase happens when you open the new target PDB in read-write. The source PDB is paused, the new PDB is opened, and the source PDB is closed and its datafiles are deleted. (You may have to execute the OPEN twice; check the Known Issues section at the end of this article.)

SQL> alter pluggable database pdbsource open;

Pluggable database altered.

After the relocation phase is completed, in the source CDB you are still able to see the source PDB, but only its metadata; its datafiles were physically removed and the status of the PDB is "RELOCATED":

SQL> select pdb_name, status from cdb_pdbs

PDB_NAME     STATUS
------------ ----------
PDB$SEED     NORMAL
PDBSOURCE    RELOCATED

And in the target CDB you will see the new PDB, opened in read-write and ready to receive sessions:

SQL> select pdb_name, status from cdb_pdbs

PDB_NAME    STATUS
----------- ----------
PDB$SEED    NORMAL
PDBSOURCE   NORMAL

Now let's confirm that the PDB relocation was indeed online:

The value 'Guatemala' was committed. The value 'USA' was rolled back (after the copy phase). The value 'Canada' was committed, and the value 'Nicaragua' was neither committed nor rolled back. So only "Guatemala" and "Canada" should be present in the new PDB, since all the uncommitted transactions were rolled back in the relocation phase:

SQL> alter session set container=pdbsource;

Session altered.

SQL> select * from dgomez.test;

VALUE
--------------------
Guatemala
Canada

In this phase the redo generated after the copy phase is applied and all the uncommitted transactions are rolled back using undo data. There are some validations, and the service is relocated as well:

alter pluggable database pdbsource open
PDBSOURCE(3):Deleting old file#6 from file$
PDBSOURCE(3):Deleting old file#7 from file$
PDBSOURCE(3):Deleting old file#8 from file$
PDBSOURCE(3):Deleting old file#9 from file$
PDBSOURCE(3):Adding new file#6 to file$(old file#6)
PDBSOURCE(3):Adding new file#7 to file$(old file#7)
PDBSOURCE(3):Adding new file#8 to file$(old file#8)
PDBSOURCE(3):Adding new file#9 to file$(old file#9)
PDBSOURCE(3):Successfully created internal service pdbsource.gtnuvolasa.oraclecloud.internal at open
****************************************************************
Post plug operations are now complete.
Pluggable database PDBSOURCE with pdb id - 3 is now marked as NEW.
****************************************************************
PDBSOURCE(3):Pluggable database PDBSOURCE dictionary check beginning
PDBSOURCE(3):Pluggable Database PDBSOURCE Dictionary check complete
PDBSOURCE(3):Database Characterset for PDBSOURCE is US7ASCII
Pluggable database PDBSOURCE opened read write
Completed: alter pluggable database pdbsource open

In the middle of the log lines above, Oracle also applies a semi-patching step. 

Now the service that our customers use to connect is running in the new CDB. This is totally transparent to customers; you don't have to send them a new connection string.

[oracle@NuvolaDB trace]$ lsnrctl service
Service "pdbsource.gtnuvolasa.oraclecloud.internal" has 2 instance(s).
   Instance "Nuvola2", status READY, has 1 handler(s) for this service...
      Handler(s):
         "DEDICATED" established:0 refused:0 state:ready
            LOCAL SERVER

Known Issues

Privileges:

The documentation says "SYSDBA" or "SYSOPER", however I did a couple of tests with "SYSDBA" and it didn't work. I received the error: "ORA-01031: insufficient privileges" 

 

Target CDB in Shared Mode:

The target CDB doesn't need to be in Local Undo mode; in that case the PDB being relocated will be converted to "Shared Undo". But you could have an issue here: if your source PDB is receiving several DMLs, when you try to open the new PDB in read-write you may get a message saying "unrecovered txns found". In that case you must clear those unrecovered transactions yourself and then re-execute "alter pluggable database open".

alter pluggable database pdbsource open
Applying media recovery for pdb-4099 from SCN 1207258 to SCN 1207394
thr-1, seq-39, logfile-/o1_mf_1_39_2572397703_.arc, los-1199468, nxs-18446744073709551615
PDBSOURCE(3):Media Recovery Start
PDBSOURCE(3):Serial Media Recovery started
PDBSOURCE(3):Media Recovery Log /o1_mf_1_39_2572397703_.arc
PDBSOURCE(3):Incomplete Recovery applied until change 1207394 time 02/12/2017 08:41:11
PDBSOURCE(3):Media Recovery Complete (Nuvola2)
PDBSOURCE(3):Zero unrecovered txns found while converting pdb(3) to shared undo mode, recovery not necessary
PDB PDBSOURCE(3) converted to shared undo mode, scn: 0x000000008a2f90c0
Applying media recovery for pdb-4099 from SCN 1207394 to SCN 1207446
PDBSOURCE(3):Media Recovery Start
PDBSOURCE(3):Serial Media Recovery started
PDBSOURCE(3):Media Recovery Log /u03/app/oracle/fast_recovery_area/NUVOLACG/foreign_archivelog/PDBSOURCE/2017_02_12/o1_mf_1_39_2572397703_.arc
PDBSOURCE(3):Incomplete Recovery applied until change 1207446 time 02/12/2017 08:41:19
PDBSOURCE(3):Media Recovery Complete (Nuvola2)

In my case I had a session opened with active transactions (the third terminal I was using to perform DMLs, the session with SID 367 and serial# 44360 shown earlier).

I just killed the session :)

SQL> alter system kill session '367,44360' immediate;

System altered.

After killing that session, the new PDB opened successfully.

 

Open in Read Write:

After the copy phase the new PDB must be opened in read-write; if you try to open the new PDB in any other mode right after the copy phase you will get errors:

SQL> alter pluggable database pdbsource open read only;
alter pluggable database pdbsource open read only
*
ERROR at line 1:
ORA-65085: cannot open pluggable database in read-only mode

 

Another name for the new PDB:

The name of the target PDB must be the same as the name of the source PDB; if you try to use another name you will get an error:

SQL> create pluggable database relocatedPDB from pdbsource@dblinktosource relocate availability max;
create pluggable database relocatedPDB from pdbsource@dblinktosource relocate availability max
*
ERROR at line 1:
ORA-65348: unable to create pluggable database


 

Deadlock in the first "alter pluggable database open":

The new PDB has to be opened twice because the first opening fails due to a bug (undocumented so far):

SQL> alter pluggable database pdbsource open;

alter pluggable database pdbsource open
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource

An investigation shows that this deadlock is due to a row cache lock. I found some bugs already documented for 12.2 when opening a database; there is no workaround for them, only applying a patch. However, those bugs I found are not about PDB relocation. Here is an extract of the trace generated by the deadlock:

-------------------------------------------------------------------------------
Oracle session identified by:
{
             instance: 1 (nuvola2.nuvola2)
                os id: 18582
           process id: 8, oracle@NuvolaDB (TNS V1-V3)
           session id: 10
     session serial #: 32155 
               pdb id: 3 (PDBSOURCE)
}
is waiting for 'row cache lock' with wait info:
{
                   p1: 'cache id'=0x0
                   p2: 'mode'=0x0
                   p3: 'request'=0x5
         time in wait: 0.186033 sec
        timeout after: never
              wait id: 2670
             blocking: 0 sessions
          current sql: alter pluggable database pdbsource open
         wait history:
           * time between current wait and wait #1: 0.000327 sec
           1.       event: 'db file sequential read'
              time waited: 0.000260 sec
                  wait id: 2669 p1: 'file#'=0x1e
                                              p2: 'block#'=0xeb2
                                              p3: 'blocks'=0x1
           * time between wait #1 and #2: 0.000824 sec
           2.       event: 'db file sequential read'
              time waited: 0.000235 sec
                  wait id: 2668 p1: 'file#'=0x1e
                                              p2: 'block#'=0xbff
                                              p3: 'blocks'=0x1
           * time between wait #2 and #3: 0.002020 sec
           3.       event: 'db file sequential read'
              time waited: 0.000250 sec
                  wait id: 2667 p1: 'file#'=0x1e
                                              p2: 'block#'=0xd4e
                                              p3: 'blocks'=0x1
}
and is blocked by the session at the start of the chain.
-------------------------------------------------------------------------------

There is another "Deadlock" in Oracle 12.2 related to Local Undo and Shared Undo. If you want to read the workaround read this article: ¿A bug in Local Undo mode in Oracle 12.2?

 

Closing the source PDB:

Closing the source PDB right after the copy phase and before the relocation phase:

I did it just for fun and I got an ORA-65020 ;). But don't do that...

SQL> alter pluggable database pdbsource open;
alter pluggable database pdbsource open
*
ERROR at line 1:
ORA-17628: Oracle error 65020 returned by remote Oracle server
ORA-65020: pluggable database already closed

Conclusion:

Oracle Database 12.2 brings several new features to work with Pluggable Databases totally online, by taking advantage of redo data and undo data generated locally (Local Undo mode). It's time to relocate PDBs! Enjoy!


About the Author: Deiby Gómez is an Oracle ACE Director from Guatemala; he holds the Oracle Certified Master 11g and Oracle Certified Master 12c certifications. Deiby Gómez currently works for Nuvola Consulting Group, a Guatemalan company that provides consulting services for Oracle products. He is the winner of the “SELECT Journal Editor's Choice Award 2016”. Deiby has been a speaker at Collaborate in Las Vegas, USA, at Oracle OpenWorld in San Francisco, USA, and in São Paulo, Brazil.

      

¿A bug after configuring Local Undo in Oracle 12.2?


While playing with the Local Undo configuration in my Oracle Cloud environment, using the Oracle Database 12.2.0.1.0 Enterprise Edition Extreme Performance binaries, I found the following strange scenario. I searched for documentation about this in several places and didn't find any, perhaps because the 12.2 Cloud binaries were released only a few days ago. Is it a bug? I don't know; that's why I am sharing my thoughts here, because I hit the behavior and I also found the workaround. If you hit this, let's say, "bug", you can apply the "workaround" and you will be fine.

Let me tell you a little bit more about my environment and how to reproduce this behavior:

  1. I am using Oracle Database 12.2.0.1.0 EE Extreme Performance (Oracle Cloud).
  2. I created a CDB with SQL statements without configuring Local Undo; that is, I created the CDB with Local Undo OFF.
  3. I created 2 Pluggable Databases: NuvolaPDB1 and NuvolaPDB2

So at this point you should have a CDB with two Pluggable Databases, still running without Local Undo.

After that I started to run the steps to reproduce the scenario:

Configuring Local Undo:

SQL> shutdown immediate;
SQL> startup upgrade;
SQL> alter database local undo on;
SQL> shutdown immediate;
SQL> startup;

It is interesting that all the commands completed successfully and no errors were returned in the terminal. BUT! When I looked at the alert log, the following error appeared:

PDB$SEED(2):Undo initialization finished serial:0 start:358935410 end:358935501 diff:91 ms (0.1 seconds)
PDB$SEED(2):CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 125829120 AUTOEXTEND ON NEXT 3145728 MAXSIZE 10307919872 ONLINE
PDB$SEED(2):ORA-00060: deadlock resolved; details in file /u01/app/oracle/diag/rdbms/nuvolacg/NuvolaCG/trace/NuvolaCG_ora_25220.trc
PDB$SEED(2):ORA-60 signalled during: CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 125829120 AUTOEXTEND ON NEXT 3145728 MAXSIZE 10307919872 ONLINE...
PDB$SEED(2):Automatic creation of undo tablespace failed with error 604 60
Could not open PDB$SEED error=604
2016-11-24T05:17:17.630435+00:00
Errors in file /u01/app/oracle/diag/rdbms/nuvolacg/NuvolaCG/trace/NuvolaCG_ora_25220.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-00060: deadlock detected while waiting for resource

It is also interesting that PDB$SEED was left in READ WRITE, which is certainly proof that something went wrong:

SQL> select name, open_mode from v$pdbs;

NAME       OPEN_MODE
---------- ----------
PDB$SEED   READ WRITE
NUVOLAPDB1 MOUNTED
NUVOLAPDB2 MOUNTED

So, in order to leave things in peace with Oracle (because I don't like to fight with it), I put PDB$SEED back in read only, since that should be its default status:

SQL> alter pluggable database pdb$seed close;

Pluggable database altered.

SQL> alter pluggable database pdb$seed open read only;

Pluggable database altered.

I verified whether at least the undo tablespace in PDB$SEED was created:

SQL> select pdb.name PDB_NAME, tbs.name TABLESPACE_NAME from v$tablespace tbs, v$pdbs pdb where tbs.con_id=pdb.con_id and pdb.name='PDB$SEED' order by 1

PDB_NAME      TABLESPACE_NAME
------------- ------------------
PDB$SEED    TEMP
PDB$SEED    SYSTEM
PDB$SEED    SYSAUX

3 rows selected.

OK, so after some investigation I found the trick. This is not documented, which is why I believe this behavior is a bug; otherwise it should be documented, and Oracle should clearly state: "If you already have Pluggable Databases created when you enable Local Undo, all of those Pluggable Databases must be opened in upgrade mode the first time, right after configuring Local Undo". But that's not the case:

SQL> alter pluggable database pdb$seed close;

Pluggable database altered.

SQL> alter pluggable database pdb$seed open upgrade;

Pluggable database altered.

Opening the PDB in upgrade mode is the workaround. I confirmed it by checking the log: when I opened the PDB in upgrade mode, Oracle was able to create the undo tablespace:

alter pluggable database pdb$seed open upgrade
PDB$SEED(2):Autotune of undo retention is turned on.
PDB$SEED(2):Undo initialization finished serial:0 start:359616842 end:359616849 diff:7 ms (0.0 seconds)
PDB$SEED(2):CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE  SIZE 125829120 AUTOEXTEND ON NEXT 3145728 MAXSIZE 10307919872 ONLINE
PDB$SEED(2):[27995] Successfully onlined Undo Tablespace 3.
PDB$SEED(2):Completed: CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE  SIZE 125829120 AUTOEXTEND ON NEXT 3145728 MAXSIZE 10307919872 ONLINE
Pluggable database PDB$SEED opened in upgrade mode
Completed: alter pluggable database pdb$seed open upgrade

After verifying that everything was fine with PDB$SEED, I had to put it back in read only. Of course, none of these steps should have to be done by the DBA; PDB$SEED must open normally, without deadlocks, issues or errors. That's the normal behavior.

SQL> alter pluggable database pdb$seed close;

Pluggable database altered.

SQL> alter pluggable database pdb$seed open read only;

It is interesting that the PDBs created before configuring Local Undo don't open; we can try as many times as we want and we will get the same result, as shown below:

SQL> alter pluggable database NuvolaPDB1 open;
alter pluggable database NuvolaPDB1 open
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 2
ORA-00060: deadlock detected while waiting for resource

As you can see, with other PDBs (not PDB$SEED) the error is returned in the terminal, so it is easy to know that something wrong is happening. With PDB$SEED we didn't receive any error; if I hadn't looked at the alert log, I wouldn't have realized that something was wrong with PDB$SEED. Let's take a look into the alert log to confirm it is the same issue I had with PDB$SEED:

alter pluggable database NuvolaPDB1 open
NUVOLAPDB1(3):Undo initialization finished serial:0 start:360066830 end:360066892 diff:62 ms (0.1 seconds)
NUVOLAPDB1(3):CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE  SIZE 125829120 AUTOEXTEND ON NEXT 3145728 MAXSIZE 10307919872 ONLINE
NUVOLAPDB1(3):ORA-00060: deadlock resolved; details in file /u01/app/oracle/diag/rdbms/nuvolacg/NuvolaCG/trace/NuvolaCG_ora_27995.trc
NUVOLAPDB1(3):ORA-60 signalled during: CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE  SIZE 125829120 AUTOEXTEND ON NEXT 3145728 MAXSIZE 10307919872 ONLINE...
NUVOLAPDB1(3):Automatic creation of undo tablespace failed with error 604 60
ORA-604 signalled during: alter pluggable database NuvolaPDB1 open...

As I said before, it doesn't matter how many times we close and open the PDB; the result will be the same until we apply "the workaround":


SQL> alter pluggable database NuvolaPDB1 close;

Pluggable database altered.

SQL> alter pluggable database NuvolaPDB1 open;
alter pluggable database NuvolaPDB1 open
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 2
ORA-00060: deadlock detected while waiting for resource


SQL>  alter pluggable database NuvolaPDB1 close;

Pluggable database altered.

SQL> alter pluggable database NuvolaPDB1 open;
alter pluggable database NuvolaPDB1 open
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 2
ORA-00060: deadlock detected while waiting for resource

... until we apply "the workaround":


SQL> alter pluggable database NuvolaPDB1 close;

Pluggable database altered.

SQL> alter pluggable database NuvolaPDB1 open upgrade;

Pluggable database altered.

SQL> alter pluggable database NuvolaPDB1 close;

Pluggable database altered.

SQL> alter pluggable database NuvolaPDB1 open;

Pluggable database altered.

Another thing I confirmed is that this, let's say, "bug" happens only for the Pluggable Databases that already existed before configuring Local Undo, because for new Pluggable Databases the open succeeds:


SQL> create pluggable database "NuvolaPDB4" ADMIN USER pdb4admin IDENTIFIED BY "Nuvola1";

Pluggable database created.

SQL> alter pluggable database NuvolaPDB4 open;

Pluggable database altered.

Some more comments:

  • No, this is not a "one-time" bug. I was able to replicate this scenario 3 times, recreating everything from scratch. This makes me think that more people could hit this behavior.
  • Will the on-premise 12.2.0.1.0 binaries get this fixed? Maybe, I don't know. But I have already reported this behavior to some Product Managers.
  • Is this critical? Will I lose data? No, this impacts only the undo tablespace creation. Apply the workaround and you will be fine.


How to Enable and Disable Local Undo in Oracle 12.2


Article written by Deiby Gómez.

Local Undo is a new kind of undo configuration for the Multitenant Architecture and it is a new feature introduced in 12.2.0.1.0. A couple of weeks ago the documentation of 12.2.0.1.0 was released, along with the binaries in Oracle Public Cloud, and several DBAs around the world started to play with the new features. When we say "Local Undo" we basically mean that every Pluggable Database will have its own Undo tablespace, similar to the following image where the Pluggable Databases "NuvolaPDB1", "NuvolaPDB2", "NuvolaPDB3", and also PDB$SEED, each have their own Undo tablespace.

This was a big change compared with the multitenant undo configuration in 12.1. In 12.1 only CDB$ROOT has its own Undo tablespace and all the Pluggable Databases "shared" that undo tablespace; that's why the former multitenant undo configuration is called "Shared Undo". To summarize, starting in 12.2.0.1.0 we have "Local Undo" or "Shared Undo". In this article I will show you step by step how to configure Local Undo in a Multitenant Database and also how to deconfigure it.

NOTE: This article was written using Oracle Database 12.2.0.1.0 Enterprise Edition Extreme Performance (Oracle Public Cloud).

The environment I am using is the following:

  • a CDB Database called "NuvolaCG".
  • 4 Pluggable Databases:
    • NuvolaPDB1 (con_id=3)
    • NuvolaPDB2 (con_id=4)
    • NuvolaPDB3 (con_id=5)
    • NuvolaPDB4 (con_id=6)

Currently the configuration my environment is using is "Shared Undo". In a Shared Undo configuration, all the pluggable databases use (Share)  the same Undo Tablespace, the Undo Tablespace is owned by CDB$ROOT. For example, in the following query result you can see that all my PDBs are using the same undo tablespace called "UNDOTBS1" and you can see that the owner of that undo tablespace is the CDB$ROOT (con_id=1):


SQL> select s.con_id   fromwhichpdb, s.username usersession, r.con_id undo_owner, r.tablespace_name current_undo,  segment_name segmentused
from v$session s,
v$transaction t,
cdb_rollback_segs r
where s.taddr=t.addr
and t.xidusn=r.segment_id(+)
and t.con_id=r.con_id
and t.ses_addr=s.saddr
order by 1;  

FROMWHICHPDB USERSESSION  UNDO_OWNER CURRENT_UNDO SEGMENTUSED
------------ ------------ ---------- ------------ ------------------------------
           3 USERA        1          UNDOTBS1     _SYSSMU3_1251228189$
           4 USERB        1          UNDOTBS1     _SYSSMU9_3256821283$
           5 USERC        1          UNDOTBS1     _SYSSMU1_307601955$
           6 USERD        1          UNDOTBS1     _SYSSMU7_442620111$

How to configure Local Undo:

Shutdown the database:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Start the database up in upgrade mode:

SQL> startup upgrade;
ORACLE instance started.

Total System Global Area 5452595200 bytes
Fixed Size            8804328 bytes
Variable Size         1090521112 bytes
Database Buffers     4345298944 bytes
Redo Buffers            7970816 bytes
Database mounted.
Database opened.

Enable Local Undo:

SQL> alter database local undo on;

Database altered.

Shutdown the database:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Start the database up normally:

SQL> startup;
ORACLE instance started.

Total System Global Area 5452595200 bytes
Fixed Size            8804328 bytes
Variable Size         1090521112 bytes
Database Buffers     4345298944 bytes
Redo Buffers            7970816 bytes
Database mounted.
Database opened.

Confirm the new undo configuration is "Local Undo":

SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE  PROPERTY_NAME = 'LOCAL_UNDO_ENABLED'

PROPERTY_NAME        PROPERTY_VALUE
-------------------- --------------------
LOCAL_UNDO_ENABLED   TRUE

Now let's open all the Pluggable Databases:

SQL> alter pluggable database all open;

Pluggable database altered.

NOTE: If you get the error "ORA-00060: deadlock resolved" here, you can read my last article, where you can find the solution.

As you can see below, every Pluggable Database now has its own undo tablespace; by default the undo tablespace is called "UNDO_1".

SQL> select pdb.name PDB_NAME, tbs.name TABLESPACE_NAME  from v$tablespace tbs, v$pdbs pdb where tbs.con_id=pdb.con_id and tbs.name like 'UNDO%' order by 1;

PDB_NAME    TABLESPACE_NAME
----------- ------------------------------
NUVOLAPDB1  UNDO_1
NUVOLAPDB2  UNDO_1
NUVOLAPDB3  UNDO_1
NUVOLAPDB4  UNDO_1
PDB$SEED    UNDO_1
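
If you also want to see where the data files of these new local undo tablespaces were created, a query along the following lines, run from CDB$ROOT, lists them (the file paths depend entirely on your storage layout, so I omit the output here):

SQL> select pdb.name PDB_NAME, df.file_name DATAFILE_NAME
from cdb_data_files df, v$pdbs pdb
where df.con_id=pdb.con_id
and df.tablespace_name like 'UNDO%'
order by 1;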

NOTE: If you want to know how those Undo Tablespaces were created in every Pluggable Database you can read my article called "How Undo Tablespace is created in Local Undo Config".

I executed a couple of DMLs just to use undo segments in each Pluggable Database, and now you can see that every Pluggable Database is using its own undo tablespace. For example, the session started in NuvolaPDB1 (con_id=3) is using the undo segment called "_SYSSMU8_3241223907$", which is part of the tablespace "UNDO_1" owned by NuvolaPDB1 (con_id=3).
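
The DMLs themselves were nothing special; the sketch below shows the kind of statement I left uncommitted in each PDB (the schema "usera" and table "t1" are placeholders for illustration only, not objects from this environment):

SQL> alter session set container=NuvolaPDB1;

Session altered.

SQL> update usera.t1 set c1 = c1 + 1;
SQL> -- no COMMIT yet: the open transaction keeps an undo segment of UNDO_1 in use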

SQL> select s.con_id   fromwhichpdb, s.username usersession, r.con_id undo_owner, r.tablespace_name current_undo,  segment_name segmentused
from v$session s,
v$transaction t,
cdb_rollback_segs r
where s.taddr=t.addr
and t.xidusn=r.segment_id(+)
and t.con_id=r.con_id
and t.ses_addr=s.saddr
order by 1;

FROMWHICHPDB USERSESSION  UNDO_OWNER CURRENT_UNDO  SEGMENTUSED
------------ ------------ ---------- ------------- ------------------------------
           3 USERA        3          UNDO_1        _SYSSMU8_3241223907$
           4 USERB        4          UNDO_1        _SYSSMU9_2687006412$
           5 USERC        5          UNDO_1        _SYSSMU4_2039586447$
           6 USERD        6          UNDO_1        _SYSSMU7_3889563214$


It is important to know that if you try to drop an undo tablespace while Local Undo is in use, you will get an error:


SQL> alter session set container=NuvolaPDB1;

Session altered.

SQL> drop tablespace UNDO_1 including contents and datafiles;
drop tablespace UNDO_1 including contents and datafiles
*
ERROR at line 1:
ORA-30013: undo tablespace 'UNDO_1' is currently in use
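
This is expected: you cannot drop the undo tablespace that the PDB is actively using. If you ever do need to replace a PDB's local undo tablespace (not something the steps in this article require), the usual approach, sketched here assuming Oracle Managed Files are in use, is to create a new undo tablespace, switch the PDB to it, and only then drop the old one:

SQL> create undo tablespace UNDO_2 datafile size 100M autoextend on;
SQL> alter system set undo_tablespace=UNDO_2;
SQL> -- once no active transactions reference UNDO_1 anymore, it can be dropped
SQL> drop tablespace UNDO_1 including contents and datafiles;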


How to disable Local Undo:

Shutdown the database:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Start the database up in upgrade mode:

SQL> startup upgrade;
ORACLE instance started.

Total System Global Area 5452595200 bytes
Fixed Size            8804328 bytes
Variable Size         1090521112 bytes
Database Buffers     4345298944 bytes
Redo Buffers            7970816 bytes
Database mounted.
Database opened.

Disable Local Undo:

SQL> alter database local undo off;

Database altered.

Shutdown the database:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Start the database up normally:

SQL> startup;
ORACLE instance started.

Total System Global Area 5452595200 bytes
Fixed Size            8804328 bytes
Variable Size         1090521112 bytes
Database Buffers     4345298944 bytes
Redo Buffers            7970816 bytes
Database mounted.
Database opened.
 
Confirm Shared Undo is used (Local Undo is false):

SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE  PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME         PROPERTY_VALUE
-------------------- --------------------
LOCAL_UNDO_ENABLED   FALSE


How to delete the Undo Tablespaces after switching from Local Undo to Shared Undo:

There is an important thing you should know when you switch from Local Undo back to Shared Undo. Because you used Local Undo, every Pluggable Database had its own undo tablespace; however, when you enable "Shared Undo" those undo tablespaces are not removed, which means they will still be there and you have to make a decision: either leave them or remove them.

SQL> select pdb.name PDB_NAME, tbs.name TABLESPACE_NAME  from v$tablespace tbs, v$pdbs pdb where tbs.con_id=pdb.con_id and tbs.name like 'UNDO%' order by 1;

PDB_NAME    TABLESPACE_NAME
----------- ------------------------------
NUVOLAPDB1  UNDO_1
NUVOLAPDB2  UNDO_1
NUVOLAPDB3  UNDO_1
NUVOLAPDB4  UNDO_1
PDB$SEED    UNDO_1

If you decide to remove them, you have two options. The first option is to use "catcon.pl" against all the Pluggable Databases, as I show you below:

[oracle@NuvolaDB ~]$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -u sys/Nuvola1 -c 'NuvolaPDB1 NuvolaPDB2 NuvolaPDB3 NuvolaPDB4' -s  -b DropUndoPDBs -- --x'drop tablespace UNDO_1 including contents and datafiles;'
catcon: ALL catcon-related output will be written to [/home/oracle/DropUndoPDBs_catcon_13739.lst]
catcon: See [/home/oracle/DropUndoPDBs*.log] files for output generated by scripts
catcon: See [/home/oracle/DropUndoPDBs_*.lst] files for spool files, if any
catcon.pl: completed successfully
[oracle@NuvolaDB ~]$

The second option is to connect to every Pluggable Database manually and drop the undo tablespace; this can take more time than using catcon. I recommend catcon: it's easy and fast. The following statements would have to be executed in every Pluggable Database you have:

SQL> alter session set container=NuvolaPDB1;

Session altered.

SQL>  drop tablespace undo_1 including contents and datafiles;

Tablespace dropped.

With both options, using catcon.pl or dropping the undo tablespaces manually, you have to do the following if you also want to remove the undo tablespace from PDB$SEED:

SQL> alter session set "_oracle_script"=true;

Session altered.

SQL> alter pluggable database pdb$seed close;

Pluggable database altered.

SQL> alter pluggable database pdb$seed open read write;

Pluggable database altered.

SQL> alter session set container=pdb$seed;

Session altered.

SQL>  drop tablespace UNDO_1 including contents and datafiles;

Tablespace dropped.

SQL> alter session set container=cdb$root;

Session altered.

SQL> alter pluggable database pdb$seed close;

Pluggable database altered.

SQL> alter pluggable database pdb$seed open read only;

Pluggable database altered.

SQL> alter session set "_oracle_script"=false;

Session altered.

After all these steps, your database will finally look as if nothing had happened. Of course, you can see that disabling Local Undo and reverting all the changes takes more time than enabling Local Undo.

SQL> select pdb.name PDB_NAME, tbs.name TABLESPACE_NAME  from v$tablespace tbs, v$pdbs pdb where tbs.con_id=pdb.con_id and tbs.name like 'UNDO%' order by 1;

no rows selected

Conclusion:

  • Enabling Local Undo creates all the undo tablespaces automatically in every PDB including PDB$SEED.
  • Disabling Local Undo doesn't remove the undo tablespaces automatically.
  • You need to bounce your database either to enable Local Undo or to disable it.
  • Local Undo is strongly recommended; it gives more isolation to Pluggable Databases.
  • The former Undo configuration (<12.2) is called "Shared Undo".
