Friday, April 12, 2019

The New Features of Oracle Active Data Guard 12c



Here are the major ones:

1. Far Sync. This feature allows you to fail over to a standby database even if it is located at a remote site thousands of miles away. You can do this with zero data loss and without negatively affecting the performance of the primary database. Far Sync also keeps the configuration simple and your costs down.

How does Far Sync work?

The principle is fairly simple: the Far Sync instance receives redo synchronously from the primary database and then forwards it asynchronously to up to 29 remote destinations.

The far sync instance is not a standard database; it contains only a specific control file, an spfile, and standby redo logs. It should be placed near the primary database to guarantee low network latency during synchronous replication. Be careful, though, not to place it in the same geographical location as the primary: if the primary database experiences a geographical disaster, your Far Sync instance will be impacted too, and some data could be lost.
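As a rough sketch of how the pieces fit together (the service names, DB_UNIQUE_NAMEs, and file path below are invented for illustration): the far sync control file is created from the primary, the primary ships redo synchronously to the far sync instance, and the far sync instance forwards it asynchronously to the terminal standby.

-- On the primary: create the control file the far sync instance will use
-- (the path is illustrative)
ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/u01/app/oracle/fs1.ctl';

-- On the primary: ship redo synchronously to the far sync instance
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=fs1 SYNC AFFIRM DB_UNIQUE_NAME=fs1
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';

-- On the far sync instance: forward redo asynchronously to the remote standby
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=stby ASYNC DB_UNIQUE_NAME=stby
   VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)';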


In case of an outage on the primary database, the standard failover procedure applies, and the far sync instance guarantees that no data is lost during the failover. The far sync instance itself cannot be used as the failover target. Note that it is possible to create a far sync instance on both sites so that the configuration survives a role change.

2. Fast Sync. Data Guard Fast Sync (SYNC NOAFFIRM) in Oracle Database 12c provides an easy way of improving performance in synchronous zero data loss configurations. It enables all sessions waiting for a remote RFS write to proceed as soon as the write is submitted, not when the write completes. This offers the best performance because it reduces the overall SYNC remote write wait.

Data Guard maximum availability supports the use of the NOAFFIRM redo transport attribute. The standby database returns a receipt acknowledgment to its primary database as soon as redo is received in memory; it does not wait for the Remote File Server (RFS) process to write the redo to a standby redo log file.
 
The NOAFFIRM attribute therefore increases primary database performance when SYNC redo transport is used. Fast Sync isolates the primary database in a maximum availability configuration from any performance impact caused by slow I/O at the standby database.

This feature gives you the easiest way to improve database performance in synchronous no-data-loss setups. The standby acknowledges the primary as soon as the redo arrives in its memory, without waiting for the disk I/O to the standby redo log file. Fast Sync thus effectively reduces the total round-trip time between standby and primary, resulting in better primary database performance. It also prevents fluctuations and outliers in standby I/O performance from impacting application response time, and it can make it practical to increase the distance between the primary and any Data Guard synchronous destination for greater geographic protection.

You should know, however, that there is a small chance of data loss when you use Fast Sync: it can only occur if the primary and the standby fail within milliseconds of each other, before the acknowledged redo has been written to disk on the standby. Oracle considers such double failures extremely rare.
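Configuration-wise, Fast Sync is just the NOAFFIRM attribute on a SYNC destination (the service name and DB_UNIQUE_NAME below are illustrative); compare it with the classic maximum availability setting, SYNC AFFIRM:

-- Fast Sync: the primary proceeds once the standby has the redo in memory
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=stby SYNC NOAFFIRM DB_UNIQUE_NAME=stby
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';
-- The classic zero-data-loss acknowledgment would instead use: SYNC AFFIRM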

3. Real-time Cascade. Real-time cascade allows a standby database to immediately forward redo to another standby database while the redo is still being written to the standby redo log file. Before 12c, redo was only cascaded after the standby redo log file had been archived locally.
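If I read the 12c documentation correctly, real-time cascade is requested simply by specifying ASYNC on the cascading standby's destination pointing to the terminal standby (the names below are made up); omitting ASYNC gives the pre-12c archive-and-forward behavior.

-- On the cascading standby: forward redo in real time to the terminal standby
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=stby2 ASYNC DB_UNIQUE_NAME=stby2
   VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)';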

4. SYSDG. SYSDG is an administrative privilege whose holder can perform only basic Data Guard administration tasks. SYSDG privileges are limited to STARTUP, SHUTDOWN, ALTER DATABASE, ALTER SESSION, ALTER SYSTEM, CREATE RESTORE POINT, CREATE SESSION, DROP RESTORE POINT, FLASHBACK DATABASE, SELECT ANY DICTIONARY, DELETE, and EXECUTE.
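Granting and using the privilege looks like this (the user name is illustrative):

GRANT SYSDG TO dgadmin;
-- Connect with the new administrative privilege from SQL*Plus:
CONNECT dgadmin AS SYSDG
-- V$PWFILE_USERS shows who holds SYSDG in the password file:
SELECT username, sysdg FROM v$pwfile_users;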

Other minor features that were introduced in Oracle Active Data Guard 12c:
Data Manipulation Language (DML) operations are now permitted on global temporary tables on the standby.
You no longer have to add the USING CURRENT LOGFILE clause when you initiate real-time apply.
You can now use sequences in a Data Guard environment.
You no longer need to shut down all other primary database instances when you switch over from an Oracle Real Application Clusters primary database to a physical standby.
Application Continuity is now supported as well.
Besides these new features, the Data Guard broker commands have been extended. One of them is, in my opinion, very useful: the "validate database" command checks whether a database is ready for a role transition, as shown below.
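For example, from the broker command-line interface (the database name is illustrative); the command reports whether the named database is ready for switchover or failover:

DGMGRL> VALIDATE DATABASE 'stby';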

 
PHYSICAL STANDBY DATABASE RELATED PROCESSES


On the Primary Database:

LGWR : The log writer process on the primary database writes entries from the redo log buffer to the online redo log file. When the current online redo log file is full, it triggers the archiver process to start archiving. In some configurations, the log writer also writes redo entries to the standby redo log file of the standby database; in that arrangement LGWR acts as the redo transport agent used to achieve the higher data protection modes.

LNS : The LogWriter Network Service (LNS) reads the redo being flushed from the redo buffers by LGWR and sends it over the network to the standby database. The main purpose of the LNS process is to free the LGWR process from the redo transport role.

ARCH : The archiver processes archive the online redo log (ORL) files to archived log files. Up to 30 ARCH processes can exist, and these processes are also used to fulfill gap resolution requests. Note that one ARCH process has a special role: it is dedicated to local redo log archiving only and never communicates with a standby database.

Fetch Archive Log (FAL) : The FAL mechanism has two components, the FAL client and the FAL server, both used for archive gap resolution. If the Managed Recovery Process (MRP) on the standby site detects an archive gap sequence, it initiates a fetch request through the FAL client on the standby site. This, in turn, requests the FAL server process on the primary database to re-transmit the archived log files needed to resolve the gap.


On the Standby Database:

RFS : The main objective of the Remote File Server process is to receive redo transmitted from the primary site over the network and write it to the standby redo log (SRL) files.

ARCH : The archiver processes on the standby site perform the same functions as on the primary site, except that here an ARCH process generates archived log files from the SRLs.

MRP : The managed recovery process coordinates media recovery management. Remember that a physical standby is in perpetual recovery mode.

Logical Standby Process (LSP) - The LSP applies redo records from archived redo logs to the logical standby database. The Oracle LogMiner engine is used by the logical standby process for SQL Apply: using LogMiner, the LSP re-creates the SQL statements that were executed on the primary database from the redo logs. These statements are then applied to the standby database to keep it current with the primary database.
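On a physical standby, a convenient way to watch most of these processes (RFS, ARCH, MRP, and friends) at work is the v$managed_standby view:

-- Run on the standby: one row per managed recovery / transport process
SELECT process, status, thread#, sequence#, block#
  FROM v$managed_standby
 ORDER BY process;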




Basically, we can categorize a physical standby database configuration into three major components:

1) Data Guard Redo Transport Services
– To transfer the redo that is generated by the primary database to the standby database.

2) Data Guard Apply Services  
– To receive and apply the redo sent by Redo Transport Services to the standby database.

3) Data Guard Role Management Services
– To assist with database role changes in switchover and failover scenarios; this service works in the background.
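The health of the first two services can be checked on the standby with the v$dataguard_stats view, which reports how far the standby is behind in receiving and applying redo:

SELECT name, value
  FROM v$dataguard_stats
 WHERE name IN ('transport lag', 'apply lag');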




Oracle 12c RAC New Features



Oracle Flex ASM

This reduces the overhead on the database server by allowing the ASM instance to run remotely. Instances can also use a remote ASM node during planned or unplanned downtime, and metadata requests can be served by non-local ASM instances. In addition, this feature removes the single point of failure of having only one ASM instance per node for storage management: if a local ASM instance fails, database instances can now connect directly to any of the surviving remote ASM instances in the cluster.
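One way to see which database instances a given ASM instance is serving (and, under Flex ASM, clients from other nodes can show up here) is the v$asm_client view:

-- Run from the ASM instance
SELECT instance_name, db_name, status
  FROM v$asm_client;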

ASM Disk Scrubbing

This monitors all the disks in the ASM disk groups and discovers logical corruptions. Previously, such corruptions were typically discovered only when an RMAN backup job ran. Disk scrubbing tries to automatically repair these logical corruptions without the DBA even knowing!
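Scrubbing can also be triggered manually; a minimal sketch, assuming a disk group named DATA (the file path is illustrative):

-- Check the whole disk group with minimal I/O impact
ALTER DISKGROUP data SCRUB POWER LOW;
-- Or scrub and repair a single file
ALTER DISKGROUP data SCRUB FILE '+DATA/orcl/datafile/users.259.123456789'
  REPAIR POWER HIGH;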

Shared Password file in ASM

A single password file can now be stored in an ASM disk group and shared by all nodes, so there is no need to keep individual copies for each instance.

ASM Multiple Diskgroup Rebalance and Disk Resync Enhancements

Resync power limit – resync operations can now be given a power level, just like rebalance operations, and multiple disk groups can be rebalanced concurrently.
Disk resync checkpoint – an interrupted resync resumes from the last checkpoint instead of starting over, giving faster recovery from instance failures.
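A sketch of the resync power limit (disk group and disk names are illustrative):

-- Bring a disk back online and resync it with an explicit power level
ALTER DISKGROUP data ONLINE DISK data_0001 POWER 16;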

Grid Infrastructure Rolling Migration Support for One-off Patches

When applying a one-off patch to an ASM instance, the databases that it serves can be repointed to use a different ASM instance, so patching can proceed in a rolling fashion.

Oracle Clusterware Flex Cluster

This feature may appear similar to Flex ASM, but it is not the same thing. It is a new type of cluster introduced in Oracle 12c, built on two main components: Hub Nodes and Leaf Nodes.

Hub nodes are the nodes you see in the 11g RAC architecture today: every node is a full-fledged cluster member with the required Clusterware software, shared storage with a voting disk, an interconnect network, and so on. Leaf nodes, on the other hand, are lightweight nodes with no shared storage and minimal Clusterware software; each leaf node connects to a hub node.

Grid Home Server

This new feature allows you to keep a single golden Oracle Home on one of the nodes and make all other nodes clients of that golden home. You only have to patch the single golden Oracle Home, and the rest pick it up from there.

Application Continuity

This helps minimize application downtime caused by temporary failures in the infrastructure or the database servers. It sits between the application and the database, working at the JDBC driver layer. If a recoverable failure occurs, it is recovered automatically and transparently to the application, which observes only a small latency delay in its transactions. Additionally, Oracle guarantees the successful completion of in-flight transactions, eliminating the chance of duplicate transactions.

Returning to the Flex Cluster: the purpose of leaf nodes is to host application servers and other software on the Oracle 12c Clusterware infrastructure. Leaf nodes do not run any database instances, and if a leaf node goes down there is no impact on the hub nodes. This gives you the flexibility to run leaf nodes on virtual machines while hub nodes run on physical machines.

IPv6 Support

IPv6 was supported in Oracle Database 11g Release 2, but only for standalone databases. In 12c, clients can also connect to databases in RAC environments using the IPv6 protocol; the interconnect, however, still supports only IPv4. This feature helps customers meet PCI, SOX, and other security compliance standards.

Multiple SCANs Per Subnet

You can now configure multiple SCANs per subnet, per cluster. This is obviously intended to provide redundancy.

GHCTL

The new ghctl utility is intended to improve the patching process.

OUI auto-runs root.sh

The Oracle Universal Installer (OUI) can now execute the root.sh script on all nodes, so you no longer have to run it on each node manually.

Deprecated Features

While it is good to be familiar with the new features, it is equally if not more important to be aware of which RAC features are being deprecated, so that plans can be made to move away from them and alternatives can be chosen.

Oracle Restart

The Oracle Restart feature, which was provided as part of Oracle Grid Infrastructure, has been deprecated and will be de-supported in a future release.

RAW/Block Storage Devices

Oracle Database 12c and Oracle Clusterware 12c no longer support raw storage devices. Such files must be moved to Oracle ASM before upgrading to Oracle Clusterware 12c.

Thursday, April 11, 2019

Oracle Database 12c – RMAN New Features




Oracle Database 12c has new enhancements and additions in Recovery Manager (RMAN). Recovery Manager continues to enhance and extend the reliability, efficiency, and availability of Oracle database backup and recovery. In this article series, I will explain the new features and how they will help the Oracle community.

Part I will cover:
SQL Interface Improvements
SYSBACKUP Privilege
Support for multitenant container and pluggable databases
DUPLICATE enhancements

Part II will cover:
Multisection Backup Improvements
Restoring and Recovering Files Over Network
Storage Snapshot Optimization
Active Database Duplication Improvements

Part III will cover:
Cross-Platform Backup and Restore Improvements
Recovering Tables and Table Partitions using RMAN Backups
Unified auditing and RMAN

Oracle Database 12c New Features

During this Oracle Database 12c new features article series, I shall be extensively exploring some of the most important new additions and enhancements introduced in the areas of database administration, RMAN, high availability, and performance tuning.


Part I covers:
1. Online migration of an active data file
2. Online table partition or sub-partition migration
3. Invisible column
4. Multiple indexes on the same column
5. DDL logging
6. Temporary undo ins and outs
7. New backup user privilege
8. How to execute SQL statement in RMAN
9. Table level recovery in RMAN
10. Restricting PGA size

Part 2 covers:
1. Table partition maintenance enhancements
2. Database upgrade improvements
3. Restore/Recover data file over the network
4. Data Pump enhancements
5. Real-time ADDM
6. Concurrent statistics gathering

Part 3 covers:
1. Additions/Enhancements in ASM
2. Additions/Enhancements in Grid Infrastructure
3. Additions/Enhancements in Real Application Cluster (database)

Part 4 covers:
1. How to truncate a master table while child tables contain data
2. Limiting ROWS for Top-N query results
3. Miscellaneous SQL*Plus enhancements
4. Session level sequences
5. WITH clause improvements
6. Extended data types

Thursday, December 21, 2017

Oracle Mutex

Mutexes are operating-system objects that control access to shared memory structures. They are similar to latches, which will be covered in the following chapters, in that both are serialization mechanisms used to control access to shared data structures within the Oracle SGA.
Serialization via mutexes is required to avoid database objects being read while they are modified and to provide the consistency demanded by the relational ACID model (Atomicity, Consistency, Isolation, and Durability).

Mutexes can be used and defined in various ways. Each data structure within Oracle that is protected by a mutex can have its own: a parent cursor may have its own mutex, and each child cursor may have its own as well. A structure can even be protected by multiple mutexes, each guarding a different part of it. While latches and mutexes are both serialization mechanisms that protect data, they differ in the following ways.

Mutexes are smaller and faster than latches because a mutex get involves fewer instructions than a latch get. Mutexes also take up less memory than latches, and they suffer less contention, which gives finer-grained protection of data structures and more flexibility than latches.
Another key feature of mutexes is that they can be referenced by many sessions concurrently in shared mode. A mutex also plays a dual role: it acts as a serialization mechanism, like a latch, and as a pin, preventing an object from aging out of the Oracle memory buffers. Since latches and mutexes are independent mechanisms, a process can hold both a latch and a mutex at the same time.

Oracle 10g Release 2 and later replaced some latch mechanisms with mutexes, claiming they are faster and more efficient than the traditional locking mechanisms. To improve cursor execution and hard parse times in the library cache, mutexes replace library cache latches and library cache pins. Oracle claims that mutexes are faster and use less CPU, which is important for CPU-bound databases where large data buffers remove I/O as the primary source of contention.
Oracle also claims that a mutex allows improved concurrency over the older latch mechanism because the code path is shorter.
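To see which mutexes a database is actually sleeping on, the v$mutex_sleep view summarizes sleeps by mutex type and code location:

-- Mutex contention summary, worst offenders first
SELECT mutex_type, location, sleeps, wait_time
  FROM v$mutex_sleep
 ORDER BY sleeps DESC;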

Mutating Table Problems with DELETE CASCADE


A mutating table is a table that is currently being modified by an UPDATE, DELETE, or INSERT statement, or a table that might need to be updated by the effects of a declarative DELETE CASCADE referential integrity constraint. The restrictions on such a table apply only to the session that issued the statement in progress.

We suddenly had a "strange" error message in one of our applications:
ORA-04091: table APP is mutating, trigger/function may not see it
ORA-06512: at "TRG_GET_ID_APP", line 4
ORA-04088: error during execution of trigger 'TRG_GET_ID_APP'.

For all row triggers that are fired as the result of a DELETE CASCADE, there is an important restriction regarding mutating tables, which prevents a trigger from seeing an inconsistent set of data: the SQL statements of a trigger cannot read from (query) or modify a mutating table of the triggering statement.
Example 1
Select from a mutating table in a row trigger (tested on Oracle 8.1.7)
We want to explain this situation with an example. We have two tables, "A" and "B": "A" is the master table and "B" the detail table. We specified a foreign key between "B" and "A" with the ON DELETE CASCADE option.
Here are the CREATE statements
drop table B;
drop table A;
create table A (
  ida    number   not null,
  vala   varchar2(10),
  primary key(ida));
create table B (
  idb    number,
  valb   varchar2(10),
  foreign key (idb) references A (ida) on delete cascade)
/
create or replace trigger b_br
after delete on B
for each row
declare
    n integer;
begin
  select count(*) into n from A;
  dbms_output.put_line('there are ' || n || ' rows in A');
  dbms_output.put_line('after statement on B');
  dbms_output.new_line;
end;
/
insert into A values(1,'Table A');
insert into A values(2,'Table A');
insert into B values(1,'Table B');
insert into B values(1,'Table B');
commit;
set serveroutput on;
delete from A where idA = 1;
ERROR at line 1:
ORA-04091: table SCOTT.A is mutating, trigger/function may not see
ORA-06512: at "SCOTT.B_BR", line 4
ORA-04088: error during execution of trigger 'SCOTT.B_BR'
Notice what happens: the triggering statement ("delete from A where idA = 1") deletes the row from A, the delete cascades to B, and the AFTER row trigger B_BR is fired. A statement in the trigger body then attempts to query the original table A. However, because table A is mutating due to the CASCADE DELETE foreign key, this query is not allowed by Oracle: a runtime error occurs, the effects of the trigger body and the triggering statement are rolled back, and control is returned to the user or application.
Solution: use a statement trigger instead of a row trigger
If you remove the line "FOR EACH ROW" from the trigger above, it becomes a statement trigger; the table is no longer mutating when the trigger fires, and the trigger outputs the correct data.
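For clarity, here is the statement-level version (the trigger body is unchanged; only the FOR EACH ROW clause is gone):

create or replace trigger b_br
after delete on B
declare
    n integer;
begin
  -- A is no longer mutating when a statement trigger fires
  select count(*) into n from A;
  dbms_output.put_line('there are ' || n || ' rows in A');
  dbms_output.put_line('after statement on B');
  dbms_output.new_line;
end;
/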
SQL> delete from A where idA = 1;

there are 1 rows in A
after statement on B

1 row deleted.
SQL> select count(*) from B;

   COUNT(*)
----------
         0
It is not always possible to change a row trigger to a statement trigger. In that case, the Oracle manual proposes the following solution:
Example 2
Cascading Update with Oracle 8i
Before Oracle8i, a "constraining error" prevented a row trigger from modifying a table when the parent statement implicitly read that table to enforce a foreign key constraint. As of Oracle8i, there is no constraining error, and checking of the foreign key is deferred until at least the end of the parent statement. The mutating-table error still prevents the trigger from reading or modifying the table that the parent statement is modifying.
This allows most foreign key constraint actions to be implemented via the obvious after-row trigger, provided the constraint is not self-referential. Update cascade, update set null, update set default, delete set default, inserting a missing parent, and maintaining a count of children can all be implemented easily.
However, there is one problem with multirow updates. Let's demonstrate it with an example.
create table P (
  p1 number not null,
  primary key (p1));
create table F (
  f1 number,
  foreign key (f1) references P (p1) on delete cascade);
create trigger pu_ar
after update on p
for each row
begin
  if (:new.p1 != :old.p1) then
    update f
       set f1 = :new.p1
     where f1 = :old.p1;
  end if;
end;
/
insert into p values (1);
insert into p values (2);
insert into p values (3);
insert into f values (1);
insert into f values (2);
insert into f values (3);
commit;
update p set p1 = p1+1;
commit;
SQL> select * from p;
        P1
----------
         2  <===== OK
         3  <===== OK
         4  <===== OK
SQL> select * from f;
        F1
----------
         4   <===== Wrong !
         4   <===== Wrong !
         4   <===== Wrong !
The statement first updates (1) to (2) in p, and the trigger updates (1) to (2) in f, leaving two rows of value (2) in f. Then the statement updates (2) to (3) in p, and the trigger updates both rows of value (2) to (3) in f. Finally, the statement updates (3) to (4) in p, and the trigger updates all three rows in f from (3) to (4). The relationship of the data in p and f is lost.
To avoid this problem, you must forbid updating a primary key to a value that already exists if that key has any child rows. This can be accomplished with the following trigger.
create or replace trigger pu_ar
after update on P
for each row
declare
  echildexists   exception;
  -- check if child table have child records with this new
  -- key, this is not allowed.
  cursor curs_exists is
  select 'x'
    from F
   where f1 = :new.p1;
  rtf  curs_exists%rowtype;
begin
  if(:new.p1 <> :old.p1) then
    open curs_exists;
    fetch curs_exists into rtf;
    if (curs_exists%found) then
      close curs_exists;
      raise echildexists;
    end if;
    close curs_exists;
    update f
       set f1 = :new.p1
     where f1 = :old.p1;
  end if;
exception
  when echildexists then
    raise_application_error (
    -20102,'error: this primary key: '
    ||to_char(:new.p1)
    ||' exists and has child rows in f, this triggers
    again an update and so on ...');
end pu_ar;
/
Test the update
SQL> update p set p1 = p1+1;
 
ERROR at line 1:
ORA-20102: error: this primary key: 2 exists and has child
rows in f, this triggers again an update and so on ...
ORA-06512: at "SCOTT.PU_AR", line 34
ORA-04088: error during execution of trigger 'SCOTT.PU_AR'
Example 3
Using a temporary table
If you need to update a mutating table, then you could use a temporary table, a PL/SQL table, or a package variable to bypass these restrictions. For example, in place of a single AFTER row trigger that updates the original table, resulting in a mutating table error, you may be able to use two triggers - an AFTER row trigger that updates a temporary table, and an AFTER statement trigger that updates the original table with the values from the temporary table.
The next example, taken from the real world, shows this. The table CUG can only contain records of the following types:
  • A: Type = 1
  • B: Type = 2 (Leader for C or D)
  • C: Type = 3 (Lead by B)
  • D: Type = 4 (Lead by B)
Note that types C and D must be led by a type B.
Create table CUG
drop table CUG cascade constraints;
create table CUG (
  id_cug      number(12) not null primary key,
  id_B        number(12) not null,
  type        number(1),
foreign key (id_B) references CUG (id_cug) 
on delete cascade
);
Next we create a temporary table to avoid the "Mutating Table Problem".
drop table CUGTMP;
create global temporary table CUGTMP (
  id_B        number(12),
  type        number(1))
on commit delete rows;
The following trigger checks new rows (Inserts) in CUG
create or replace trigger bi_r
before insert on CUG
for each row
declare
  l_type     CUG.type%type;
begin
  if (:new.type in (3,4)) then
    select type into l_type from CUG
     where id_cug = :new.id_B;
  end if;
  if (l_type != 2) then
     raise_application_error(-20002,
     'C and D CUGs must have a leading B');
  end if;
end;
/
The following Trigger saves the new values for id_B in the temporary table.
create or replace trigger au_r
after update of id_B on CUG
for each row
begin
  insert into CUGTMP (id_B,type)
  values (:new.id_B,:new.type);
end;
/
The following Trigger finally checks, that C and D CUGs belong to a B CUG.
create or replace trigger au_s
after update of id_B on CUG
declare
  l_id_B        number(12);
  l_typeCD      number(1);
  l_typeB       number(1);
  cursor cur_cugtmp is
  select id_B,type
   from CUGTMP;
begin
  open cur_cugtmp;
  loop
    fetch cur_cugtmp into l_id_B,l_typeCD;
    exit when cur_cugtmp%notfound;
     select type into l_typeB from CUG
      where id_cug = l_id_B;
    if (l_typeB != 2) then
       raise_application_error(-20002,
       'C and D CUGs must have a leading B');
    end if;
  end loop;
  close cur_cugtmp;
end;
/
Test insert and update
insert into CUG (id_cug,id_B,type)
  values (0,0,0);
insert into CUG (id_cug,id_B,type)
  values (1,0,2);
insert into CUG (id_cug,id_B,type)
  values (2,0,2);
insert into CUG (id_cug,id_B,type)
  values (3,1,3);
insert into CUG (id_cug,id_B,type)
  values (4,2,3);
insert into CUG (id_cug,id_B,type)
  values (5,1,4);
insert into CUG (id_cug,id_B,type)
  values (6,2,4);
commit;
SQL> select * from CUG;
    ID_CUG       ID_B       TYPE
---------- ---------- ----------
         0          0          0
         1          0          2
         2          0          2
         3          1          3
         4          2          3
         5          1          4
         6          2          4
Now we want CUGs 3, 4, 5, and 6 to change their leadership to CUG 2.
SQL> update CUG set id_B = 2 where id_cug in (3,4,5,6);

4 rows updated.
SQL> select * from cug;
    ID_CUG       ID_B       TYPE
---------- ---------- ----------
         0          0          0
         1          0          2
         2          0          2
         3          2          3
         4          2          3
         5          2          4
         6          2          4
Next we delete the "Leader" with ID_CUG = 2. All children must be deleted automatically by the DELETE CASCADE.
SQL> delete from CUG where id_cug = 2;
SQL> select * from cug;
    ID_CUG       ID_B       TYPE
---------- ---------- ----------
         0          0          0
         1          0          2
Everything looks fine. Cool, isn't it?
