Sunday, March 16, 2014

Oracle 12c New Features



01. Pluggable Databases Through Database Consolidation:

Oracle is doing everything it can to jump on the cloud bandwagon. With 12c, Oracle tries to address the problem of multitenancy through this feature. There is a radical change in the core database architecture with the introduction of Container Databases (CDB) and Pluggable Databases (PDB). Memory and background processes are owned by the container database; the container holds the metadata while the PDBs hold the user data. You can create up to 253 PDBs, including the seed PDB.
In a large setup, it is common to see 20 or 30 different instances running in a production environment. With that many instances it is a maintenance nightmare, as each instance has to be separately:

•Upgraded
•Patched
•Monitored
•Tuned
•RAC-enabled
•Adjusted
•Backed up
•Protected with Data Guard

With the Pluggable Databases feature, you only have to do all of this for ONE instance. Prior to 12c, without this feature, you would have to create separate schemas, and there is always a threat to security, however much isolation you build in. There are namespace conflicts: there can only be one public synonym of a given name. With PDBs you can have a separate HR or SCOTT schema for each PDB, separate EMP and DEPT tables, and separate public synonyms. Additionally, two PDBs can talk to each other through the regular database link feature. There is no longer a high startup cost for creating a database; the shift is from one instance per database to one instance for many databases. The developer community can remain oblivious to all this and continue to use a PDB as if it were a traditional database, but for DBAs the world will look like it has changed a lot.
Another cool feature is that you can allocate a CPU percentage to each PDB.
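As a minimal sketch, creating and opening a new PDB from the seed looks like the following (the PDB name, admin user, and file paths here are illustrative):

```sql
-- Connect to the root container as a common user with CREATE PLUGGABLE DATABASE privilege
CREATE PLUGGABLE DATABASE pdb1
  ADMIN USER pdb1admin IDENTIFIED BY password
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/', '/u01/oradata/cdb1/pdb1/');

-- A newly created PDB starts in MOUNTED state and has to be opened
ALTER PLUGGABLE DATABASE pdb1 OPEN;

-- Switch the current session into the new PDB
ALTER SESSION SET CONTAINER = pdb1;
```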

02. Redaction Policy:
This is one of the top features in Oracle 12c. Data redaction, in simple terms, means masking of data. You can set up a data redaction policy; for example, the SSN column in an EMPLOYEE table can be masked. This is called redaction. From SQL Developer you can do this by going to the table: Employee -> right-click on Security Policy -> click on New -> click on Redaction Policy -> enter SSN.
When you do a SELECT * FROM employee, the SSN column will show up masked.
The new data masking uses a package called DBMS_REDACT. It is an extension of the FGAC and VPD features present in earlier versions.
With a policy in place, users who need to view the data will be able to see it, while other users will see only the redacted values.
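The same policy can be created directly with the DBMS_REDACT package; the schema, table, and policy names below are illustrative:

```sql
-- Fully redact the SSN column; expression '1=1' means the policy always applies
BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'HR',
    object_name   => 'EMPLOYEE',
    policy_name   => 'redact_ssn',
    column_name   => 'SSN',
    function_type => DBMS_REDACT.FULL,
    expression    => '1=1');
END;
/
```

Users holding the EXEMPT REDACTION POLICY privilege still see the real data.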

03. Top-N Queries with FETCH FIRST:
With the release of Oracle Database 12c, Oracle has introduced new SQL syntax to simplify fetching the first few rows: the "Fetch First X Rows only" clause.
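A minimal sketch against the familiar EMP sample table (previously this required a subquery with ROWNUM):

```sql
-- Top 5 earners
SELECT ename, sal
FROM   emp
ORDER  BY sal DESC
FETCH FIRST 5 ROWS ONLY;

-- Rows 6 through 10, useful for pagination
SELECT ename, sal
FROM   emp
ORDER  BY sal DESC
OFFSET 5 ROWS FETCH NEXT 5 ROWS ONLY;
```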

04. Adaptive Query Optimization and Online Statistics Gathering:
This feature helps the optimizer make runtime adjustments to an execution plan, which leads to better statistics. For statements like CTAS (Create Table As Select) and IAS (Insert As Select), the statistics are gathered online so that they are available immediately.

05. Restore a Table easily through RMAN:
Earlier, if you had to restore a particular table, you had to do all sorts of things, like restoring a tablespace or doing an export and import. The new RECOVER TABLE command in RMAN simplifies this task.
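A sketch of the RMAN syntax; the table name, SCN, and auxiliary destination below are illustrative. RMAN spins up a temporary auxiliary instance under the given destination to perform the point-in-time recovery:

```sql
RMAN> RECOVER TABLE scott.emp
      UNTIL SCN 1853267
      AUXILIARY DESTINATION '/u01/aux'
      REMAP TABLE scott.emp:emp_restored;
```

The REMAP TABLE clause recovers the data into a new table name, so the current table is left untouched.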

06. Increased Size Limit for VARCHAR2, NVARCHAR2 and RAW Data Types:
The previous limit on these data types was 4K. In 12c, it has been increased to 32,767 bytes. Up to 4K, the data is stored inline. I am sure everyone will be happy with this small and cute enhancement.
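Extended sizes have to be switched on through the MAX_STRING_SIZE parameter. Note that this is a one-way change and, per the documentation, involves restarting in upgrade mode and running utl32k.sql; a sketch:

```sql
-- With the database restarted in UPGRADE mode:
ALTER SYSTEM SET MAX_STRING_SIZE = EXTENDED SCOPE = SPFILE;
-- then run @?/rdbms/admin/utl32k.sql and restart normally

-- Afterwards, columns larger than 4,000 bytes are allowed:
CREATE TABLE notes (body VARCHAR2(32767));
```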


07. Inline PL/SQL Functions and Procedures:
The inline feature is extended in Oracle 12c. In addition to views, we can now have PL/SQL procedures and functions as inline constructs, declared in the WITH clause of a query. The query can be written as if it were calling a real stored function, but the functions do not actually exist in the database; you will not find them in ALL_OBJECTS. I think this will be a very good feature for developers to explore, as there is no stored code that needs to be compiled.
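A minimal sketch of the WITH-clause function syntax (the function and column names are made up for illustration):

```sql
-- The function exists only for the duration of this one statement
WITH
  FUNCTION with_tax(p_sal NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN p_sal * 1.1;
  END;
SELECT ename, with_tax(sal) AS sal_with_tax
FROM   emp
/
```

In SQL*Plus the statement has to be terminated with a slash, since it contains semicolons of its own.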

08. Generated as Identity/Sequence Replacement:
You can now create a column with the GENERATED AS IDENTITY clause. That's it. Doing this is equivalent to creating a separate sequence and calling sequence.NEXTVAL for each row. This is another handy and neat feature that will help the developer community. It is also called the no-sequence auto-increment primary key.
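A sketch of an identity column backing a primary key; the table name is illustrative:

```sql
CREATE TABLE customers (
  id   NUMBER GENERATED ALWAYS AS IDENTITY,  -- Oracle creates and manages the backing sequence
  name VARCHAR2(100),
  CONSTRAINT customers_pk PRIMARY KEY (id)
);

-- No sequence.NEXTVAL needed; the id is assigned automatically
INSERT INTO customers (name) VALUES ('Scott');
```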

09. Multiple Indexes on a Single Column:
Prior to 12c, a column couldn't be in more than one index. In 12c, you can include a column in a B-tree index as well as a bitmap index. But please note that only one of the indexes is usable at a given time.
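For multiple indexes to coexist on the same column they have to differ in some characteristic (such as index type), and only one may be visible; a sketch with illustrative names:

```sql
CREATE INDEX emp_deptno_btree ON emp (deptno);

-- The second index on the same column must be created INVISIBLE
CREATE BITMAP INDEX emp_deptno_bmap ON emp (deptno) INVISIBLE;

-- Swap which index the optimizer is allowed to use
ALTER INDEX emp_deptno_btree INVISIBLE;
ALTER INDEX emp_deptno_bmap  VISIBLE;
```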

10. Online Migration of Table Partition or Sub Partition:
You can very easily migrate a partition or sub-partition from one tablespace to another. Similar to how online migration was achieved for a non-partitioned table in prior releases, a table partition or sub-partition can now be moved to another tablespace online or offline. When the ONLINE clause is specified, all DML operations can continue without interruption on the partition or sub-partition involved. In contrast, no DML operations are allowed if the partition or sub-partition is moved offline.
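A sketch of the online move; the table, partition, and tablespace names are illustrative:

```sql
-- Move a partition online; UPDATE INDEXES keeps global indexes usable during the move
ALTER TABLE sales MOVE PARTITION sales_q1_2014
  TABLESPACE new_ts
  UPDATE INDEXES ONLINE;
```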
11. Temporary Undo:
Prior to 12c, undo records generated by temporary tables were stored in the undo tablespace. With the temporary undo feature in 12c, those undo records can be stored in the temporary tablespace instead. The benefits are reduced undo tablespace usage and reduced redo log space consumption.
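Temporary undo can be switched on per session or system-wide; a minimal sketch:

```sql
-- Enable temporary undo for the current session (ALTER SYSTEM works too)
ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;
```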

12. Database Archiving:
This feature enables archiving rows within a table by marking them as inactive. The inactive rows remain in the database and can be optimized using compression, but they are not visible to the application and are skipped during a full table scan (FTS).
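A sketch of In-Database Archiving; the table and predicate are illustrative:

```sql
-- Enable row archival; this adds a hidden ORA_ARCHIVE_STATE column to the table
ALTER TABLE orders ROW ARCHIVAL;

-- Mark old rows inactive; they disappear from normal queries
UPDATE orders SET ora_archive_state = '1'
WHERE  order_date < DATE '2010-01-01';

-- Make archived rows visible again for this session
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;
```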

Advanced Replication and Streams are deprecated; they are being replaced by Oracle GoldenGate.
Invisible Columns:
You can now have an invisible column in a table. When a column is defined as invisible, it will not appear in generic queries such as SELECT *; it has to be selected by name.
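A minimal sketch; the table and column names are illustrative:

```sql
CREATE TABLE accounts (
  id     NUMBER,
  secret VARCHAR2(30) INVISIBLE
);

-- SELECT * and DESCRIBE do not show SECRET; it must be referenced explicitly
INSERT INTO accounts (id, secret) VALUES (1, 'hidden');
SELECT id, secret FROM accounts;
```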
PGA Aggregate Limit setting:
In 12c, you can set a hard limit on total PGA usage with the new PGA_AGGREGATE_LIMIT parameter (automatic PGA management must be enabled). Setting it helps avoid excessive PGA consumption:
SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=2G;
SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=0; --disables the hard limit
DDL Logging:
DDL statements are automatically recorded in an XML/log file if ENABLE_DDL_LOGGING is set to TRUE:
SQL> ALTER SYSTEM|SESSION SET ENABLE_DDL_LOGGING=TRUE;

Source: Internet


Tuesday, March 11, 2014

Checkpoints in Oracle

Information about checkpoints in oracle.

PURPOSE OF CHECKPOINTS
   Database blocks are temporarily stored in the database buffer cache. As blocks are read, they are kept in the buffer cache so that if any user accesses them later, they are available in memory and need not be read from disk again. When we update a row, the buffer in the cache corresponding to the block containing that row is updated in memory, and a record of the change is kept in the redo log buffer. On commit, the changes we made are written to disk, thereby making them permanent. But where are those changes written? To the datafiles containing the data blocks? No! The changes are recorded in the online redo log files by flushing the contents of the redo log buffer to them. This is called write-ahead logging. If the instance crashed right now, the buffer cache would be wiped out, but on restarting the database, Oracle would apply the changes recorded in the redo log files to the datafiles.
    Why doesn't Oracle write the changes to the datafiles right away when we commit a transaction? The reason is simple: if it wrote directly to the datafiles, it would have to physically locate each data block in the datafile first and then update it, which means that after committing, the user would have to wait until DBWR found the block and wrote it before issuing the next command. This would bring performance down drastically. That is where the redo logs come in. Writes to the redo logs are sequential: LGWR dumps the contents of the redo log buffer to the log files sequentially and synchronously, so the user does not have to wait long. Moreover, DBWR always writes in units of Oracle blocks, whereas LGWR writes only the changes made; hence write-ahead logging also improves performance by reducing the amount of data written synchronously. When are the changes applied to the data blocks in the datafiles? They are written by DBWR asynchronously in response to certain triggering events. These events are called checkpoints.
  A checkpoint is a synchronization event at a specific point in time that causes some or all dirty blocks to be written to disk, thereby guaranteeing that blocks dirtied prior to that point in time get written.
  Whenever dirty blocks are written to datafiles, it allows Oracle:
- to reuse a redo log: a redo log can't be reused until DBWR writes all the dirty blocks protected by that logfile to disk. If we attempt to reuse it before DBWR has finished its checkpoint, we get the message "Checkpoint not complete" in the alert log.
- to reduce instance recovery time: as the memory available to a database instance increases, buffer caches as large as several million buffers become possible. The checkpoint must advance frequently to limit recovery time, since infrequent checkpoints combined with large buffer caches can lengthen crash recovery significantly.
- to free buffers for reads: dirty blocks can't be used to read new data into them until they are written to disk, so DBWR writes dirty blocks from the buffer cache to make room in the cache.
Type of Checkpoints  in Oracle :
- Full checkpoint
- Thread checkpoint
- File checkpoint
- Parallel Query checkpoint
- Object checkpoint
- Log switch checkpoint
- Incremental checkpoint

Whenever a checkpoint is triggered :
- DBWR writes some /all dirty blocks to datafiles
- CKPT process updates the control file and datafile headers
                     
 FULL CHECKPOINT
- Writes block images to  the database for all dirty buffers from all instances.
- Statistics updated
  . DBWR checkpoints
  . DBWR checkpoint buffers written
  . DBWR thread checkpoint buffers written
- Caused by :
  . Alter system checkpoint [global]
  . Alter database begin backup
  . Alter database close
  . Shutdown [immediate]
- Controlfile and datafile headers are updated
  . Checkpoint_change#
                 
  THREAD CHECKPOINT
- Writes block images to the database for all dirty buffers from one instance
- Statistics updated
  . DBWR checkpoints
  . DBWR checkpoint buffers written
  . DBWR thread checkpoint buffers written
- Caused by :
  . Alter system checkpoint local
- Controlfile and datafile headers are updated
  . Checkpoint_change#

FILE CHECKPOINT
  When a tablespace is put into backup mode or taken offline, Oracle writes all the dirty blocks from the tablespace to disk before changing the state of the tablespace.
- Writes block images to the database for all dirty buffers for all files of a tablespace from all instances
- Statistics updated
  . DBWR checkpoints
  . DBWR tablespace checkpoint buffers written
  . DBWR checkpoint buffers written
- Caused by :
  . Alter tablespace xxx offline
  . Alter tablespace xxx begin backup
  . Alter tablespace xxx read only
- Controlfile and datafile headers are updated
  . Checkpoint_change#
                  
PARALLEL QUERY CHECKPOINT
   Parallel query often results in direct path reads (Full tablescan or index fast full scan). This means that blocks are read straight into the session’s PGA, bypassing the data cache; but that means if there are dirty buffers in the data cache, the session won’t see the most recent versions of the blocks unless they are copied to disk before the query starts – so parallel queries start with a checkpoint.
- Writes block images to the database for all dirty buffers belonging to objects accessed by the query from all instances.
- Statistics updated
  . DBWR checkpoints
  . DBWR checkpoint buffers written
- Caused by :
  . Parallel Query
  . Parallel Query component of Parallel DML (PDML) or Parallel DDL (PDDL)
- Mandatory for consistency
- Controlfile and datafile headers are updated
  . Checkpoint_change#
OBJECT CHECKPOINT
  When an object is dropped/truncated, the session initiates an object checkpoint telling DBWR to copy any dirty buffers for that object to disk and the state of those buffers is changed to free.
- Writes block images to the database for all dirty buffers belonging to an object from all instances.
- Statistics updated
  . DBWR checkpoints
  . DBWR object drop buffers written
- Caused by dropping or truncating a segment:
  . Drop table XXX
  . Drop table XXX Purge
  . Truncate table xxx
  . Drop index xxx
- Mandatory for media recovery purposes
- Controlfile and datafile headers are updated
  . Checkpoint_change#
               
LOG SWITCH CHECKPOINT

- Writes the contents of the dirty buffers whose information is protected by the redo log to the database.
- Statistics updated
  . DBWR checkpoints
  . DBWR checkpoint buffers written
  . background checkpoints started
  . background checkpoints completed
- Caused by log switch
- Controlfile and datafile headers are updated
  . Checkpoint_change#
                  
INCREMENTAL CHECKPOINT
  Prior to Oracle 8i, the only well-known checkpoint was the log switch checkpoint. Whenever LGWR filled an online logfile, DBWR would go into a frenzy writing data blocks to disk, and when it had finished, Oracle would update each datafile header block with the SCN to show that the file was up to date to that point in time.
   Oracle 8i introduced incremental checkpointing, which triggers DBWR to write some dirty blocks from time to time so as to advance the checkpoint and reduce instance recovery time.
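In later releases the pace of incremental checkpointing is typically driven by FAST_START_MTTR_TARGET; as an illustrative sketch (the 120-second target is just an example), you can set a target and watch checkpoint progress through V$INSTANCE_RECOVERY:

```sql
-- Ask Oracle to aim for roughly 120 seconds of crash recovery time
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 120;

-- Observe the estimated versus target recovery times
SELECT recovery_estimated_ios,
       estimated_mttr,
       target_mttr,
       writes_mttr
FROM   v$instance_recovery;
```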
Incremental checkpointing has been implemented using two algorithms :
- Ageing algorithm
- LRU/TCH algorithm
                                               
AGEING ALGORITHM
This strategy involves writing changed blocks that have been dirty for the longest time and is called aging writes. The algorithm relies on the CKPT Q running through the cache, with buffers being linked to the end of this list the first time they are made dirty.
   . The LRU list contains all the buffers: free, pinned and dirty. Whenever a buffer in the LRU list is dirtied, it is placed on the CKPT Q as well, i.e. a buffer can simultaneously have pointers in both the LRU list and the CKPT Q, but the buffers in the CKPT Q are arranged in the order in which they were dirtied. Thus, the checkpoint queue contains dirty blocks in the order of the SCN at which they were dirtied.
  Every 3 seconds DBWR wakes up and checks whether there are enough dirty buffers in the CKPT Q that need to be written to satisfy the instance recovery requirement.
If those many or more dirty buffers are not found,
   DBWR goes to sleep
else (dirty buffers found)
  .CKPT target RBA is calculated based on
   - The most recent RBA
   - log_checkpoint_interval
   - log_checkpoint_timeout
   - fast_start_mttr_target
   - fast_start_io_target
   - 90% of the size of the smallest redo log file
  . DBWR walks the CKPT Q from the low end (buffers dirtied earliest), collecting buffers for writing to disk until it reaches a buffer that is more recent than the target RBA. These buffers are placed in the write list-main.
  . DBWR walks the write list-main and checks all the buffers
    – If changes made to the buffer have already been written to redo log files
        . Move those buffers to write-aux list
      else
        . Trigger LGWR to write changes to those buffers to redo logs
        . Move those buffers to write-aux list
  . Write buffers from write-aux list to disk
  . Update checkpoint RBA in SGA
  . Delink those buffers from CKPT Q
  . Delink those buffers from write-aux list
- Statistics Updated :
   . DBWR checkpoint buffers written
- Controlfile updated every 3 secs by CKPT
   . Checkpoint progress record
  As sessions link buffers to one end of the list, DBWR can effectively unlink buffers from the other end and copy them to disk. To reduce contention between DBWR and foreground sessions, there are two linked lists in each working set so that foreground sessions can link buffers to one while DBWR is unlinking them from the other.
                                                          
LRU/TCH ALGORITHM
 LRU/TCH algorithm writes the cold dirty blocks to disk that are on the point of being pushed out of cache.
  As per ageing algorithm, DBWR will wake up every 3 seconds to flush dirty blocks to disk. But if blocks get dirtied at a fast pace during those 3 seconds and a server process needs some free buffers, some buffers need to be flushed to the disk to make room. That’s when LRU/TCH algorithm is used to write those dirty buffers which are on the cold end of the LRU list.
   Whenever a server process needs some free buffers to read data, it scans the LRU list from its cold end to look for free buffers.
While searching
  If unused buffers found
    Read blocks from disk into the buffers and link them to the corresponding hash bucket
  if it finds some clean buffers (contain data but not dirtied or dirtied and have been flushed to disk),
      if they are the candidates to be aged out (low touch count)
          Read blocks from disk into the buffers and link them to the corresponding hash bucket
      else (have been accessed recently and should not be aged out)
         Move them to MRU end depending upon its touch count.
  If it finds dirty buffers (they are already in CKPT Q),
     Delink them from LRU list
     Link them  to the write-main list (Now these buffers are in CKPT Q and write-main list)
  The server process scans a threshold no. of buffers (_db_block_max_scan_pct = 40(default)). If it does not find required no. of free buffers,
   It triggers DBWR to write the dirty blocks in the write-main list to disk
 . DBWR walks the write list-main and checks all the buffers
    – If changes made to the buffer have already been written to redo log files
           . Move those buffers to write-aux list
       else
          . Trigger LGWR to write changes to those buffers to redo logs
          . Move those buffers to write-aux list
  . Write buffers from write-aux list to disk
  . Delink those buffers from CKPT Q and write-aux list
  . Link those buffers to LRU list as free buffers
 Note that
- In this algorithm, the dirty blocks are delinked from the LRU list before linking them to the write-main list, in contrast to the ageing algorithm, where blocks can simultaneously be in both the CKPT Q and the LRU list.
- In this algorithm, checkpoint is not advanced because it may be possible that the dirty blocks on the LRU end may actually not be the ones which were dirtied earliest. They may be there because the server process did not move them to the MRU end earlier. There might be blocks present in CKPT Q which were dirtied earlier than the blocks in question.


References book :
Oracle Core: Essential Internals for DBAs and Developers by Jonathan Lewis
 

Saturday, March 8, 2014

RACcheck (RAC Configuration Audit Tool)


RACcheck is a RAC Configuration Audit tool designed to check various important configuration settings within a Real Application Clusters (RAC), Oracle Clusterware (CRS), Automatic Storage Management (ASM) and Grid Infrastructure environment. The tool audits configuration settings within the following categories:
• OS kernel parameters
• OS packages
• Many other OS configuration settings important to RAC
• CRS/Grid Infrastructure
• RDBMS
• ASM
• Database parameters

• Many other database configuration settings important to RAC
• 11.2.0.3 upgrade readiness assessment

Please note that this script has to be executed by the database owner (in my case, oracle).

$ ./raccheck
$ ./raccheck -v              -v : returns the version of the tool.
$ ./raccheck -a              -a : performs all checks, best practice and Database/Clusterware patch/OS recommendations.
$ ./raccheck -a -o v         -o : invokes various optional functionality.
$ ./raccheck -a -o verbose
$ ./raccheck -a -c ASM       -c : checks individual components; here, only ASM component checks.
$ ./raccheck -a -c ASM -o verbose
$ ./raccheck -a -c ACFS      -c : checks individual components; here, only ACFS component checks.
$ ./raccheck -b              -b : best practice recommendations only.
$ ./raccheck -p              -p : database/clusterware patch recommendations only.
$ ./raccheck -f              -f : offline; performs analysis for all.
$ ./raccheck -m              -m : suppresses the Maximum Availability Architecture (MAA) Scorecard, which is enabled by default in RACcheck.
$ ./raccheck -u -o pre       -u : performs pre-upgrade readiness checks for 11.2.0.3.
$ ./raccheck -u -o post      -u : performs post-upgrade readiness checks for 11.2.0.3.
$ ./raccheck -s              -s : runs RACcheck silently (unattended) when the Oracle user can sudo to root without a password.
$ ./raccheck -S              -S : runs RACcheck silently (unattended) without the root password and without root-privileged collections/audits.

Related Links :  

New features in Oracle 9i/10g/11g Real Application Cluster