Friday, August 5, 2016

Change default export path for BI Discoverer

Whenever a user exports the results of a Discoverer report, the report is saved to the default export path, which is taken from pref.txt. To change this behavior, set the value of the DefaultExportPath parameter in the pref.txt file:

10g:$ORACLE_HOME/discoverer/util/pref.txt
11g:$MW_HOME/asinst_1/config/PreferenceServer/Discoverer_asinst_1/pref.txt

To save exported files to the client's default home directory, set DefaultExportPath to "". Otherwise, set a value such as "c:\temp".

After changing the value, run applypreferences.sh (or applypreferences.bat on Windows), which is located under the following directories:

10g:$ORACLE_HOME/discoverer/util
11g:$MW_HOME/asinst_1/Discoverer/Discoverer_asinst_1/util
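As a minimal sketch, the edit itself can be scripted. The file location, the sample value and the exact pref.txt line format below are hypothetical, so verify them against your own pref.txt first:

```shell
#!/bin/sh
# Sketch only: update DefaultExportPath in a copy of pref.txt with sed.
# The path and the existing value below are hypothetical examples.
PREF=/tmp/pref.txt

# A sample line as it might appear in pref.txt
printf 'DefaultExportPath = "c:\\old"\n' > "$PREF"

# Set an empty value so exports go to the client's home directory
sed -i 's|^DefaultExportPath.*|DefaultExportPath = ""|' "$PREF"
grep '^DefaultExportPath' "$PREF"
```

After the edit, applypreferences.sh (or applypreferences.bat) still has to be run, as described above, for the change to take effect.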

Monday, June 13, 2016

Connected to idle instance while database is up

While the database is up, whenever I try to connect to it with SQL*Plus, I get the "Connected to an idle instance" message. The ORACLE_SID environment variable appears correct when checked against the pmon process:

[bash$]# ps -ef | grep pmon
oracle   30607     1  0 Jun09 ?        00:01:35 ora_pmon_testdb

[bash$]# echo $ORACLE_SID
testdb

But the ORACLE_HOME environment variable is not the same as the one the instance was started with. The oracle user that owns the database has ORACLE_HOME set to

/oracle/product/11.2.0/dbhome_2

but the instance was started with

/oracle/product/11.2.0/dbhome_2/

To check which environment variables were in use when the Oracle instance started, the following commands give the details. They are run as the root user.

First, the PID of the process is determined at the OS level:
[bash$]# ps -ef | grep pmon

Then the environment of the pmon process is read:

[bash$]# cat /proc/<pid of process taken above>/environ

The output of the above command gives all the environment variables of the pmon process; ORACLE_SID, ORACLE_HOME and ORA_CRS_HOME can be seen in the output. In my case, the ORACLE_HOME variable ended with a trailing slash. After setting ORACLE_HOME with the trailing slash for the oracle user, I was able to log in to the instance.

Tuesday, May 31, 2016

Restore 11g Full Backup To 12c

RMAN can restore a backup taken on an older database version into a newer version. The important point is that after the restore and recover process, the database must not be opened normally; a manual upgrade must be performed. This method can be used for out-of-place upgrades. In my case, I have a full backup from 11g that will be restored into a 12c home on a new host.

Before starting, the pre-upgrade checks should be completed on the old version, and the post-upgrade steps on the new one, for a successful upgrade.

Basically ,

  1. Copy preupgrd.sql and utluppkg.sql files from new 12c $ORACLE_HOME/rdbms/admin to old 11g $ORACLE_HOME/rdbms/admin
  2. Execute preupgrd.sql under 11g to detect pre-upgrade issues that must be resolved.
  3. Backup full database 
  4. Copy backup files and archive logs to new host 
  5. Copy current init file and password file to new 12c $ORACLE_HOME/dbs
  6. Startup database in nomount mode
  7. Restore controlfile.
  8. Mount the database.
  9. Restore and recover database
  10. Shutdown database
  11. Open database in upgrade mode
  12. SQL> startup upgrade
    In this step I encountered the following error:

    ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

    ORA-1589 signalled during: ALTER DATABASE OPEN MIGRATE...

    To get past this error, the following command can be used:

    alter database open resetlogs upgrade 
  13. For 12c, the new catctl.pl Perl script is executed:
    cd $ORACLE_HOME/rdbms/admin
    $ORACLE_HOME/perl/bin/perl catctl.pl -n  6 -l $ORACLE_HOME/diagnostics catupgrd.sql
     
  14. Run the post-upgrade status tool $ORACLE_HOME/rdbms/admin/utlu121s.sql to display a summary of the upgrade process. Any errors reported in the output must be examined.
  15. Check the upgrade log file located at $ORACLE_HOME/diagnostics/catupgrd0.log for "BEGIN catuppst.sql" to verify that catuppst.sql ran during the upgrade. If it did not run, execute it manually.
  16. Run utlrp.sql to recompile invalid objects.
  17. Change the compatible parameter in init.ora to "12.0.0".
  18. Shutdown database
  19. Recreate password file
  20. Startup database
After these steps the database version is 12c and it has been upgraded successfully. If you have the downtime and the necessary disk space, this method can be used for upgrading to new versions.
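The restore and recover part of the steps above can be sketched roughly as follows. This is a sketch only: the backup location, controlfile piece name and the sequence to recover to are hypothetical and depend on your own backup.

```
-- Sketch only; adjust paths and the UNTIL clause for your backup.
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM '/backup/ctl_file_backup';
RMAN> ALTER DATABASE MOUNT;
RMAN> CATALOG START WITH '/backup/';
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE UNTIL SEQUENCE <last_archived_seq> THREAD 1;
RMAN> SHUTDOWN IMMEDIATE;
-- Then, from SQL*Plus under the 12c home:
SQL> startup upgrade   -- or: alter database open resetlogs upgrade;
```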

Friday, May 6, 2016

Using SSL web service certificates in Oracle EBS

In this blog post, I would like to give some basic information about using SSL certificates in Oracle EBS applications. The requirement came from a new development that retrieves data from a web service over SSL. To connect securely, the distributed certificate must be imported into the keystore that your program will use for the connection.

The keystore (cacerts) located in $AF_JRE_TOP/lib/security was used for this. To import the related certificates into this truststore, $AF_JRE_TOP/bin/keytool is used.

Change directory to $AF_JRE_TOP/bin and import the certificate:

cd $AF_JRE_TOP/bin
./keytool -keystore ../lib/security/cacerts -importcert -file "your_cert_file" -alias "your_alias"

The default keystore password is: changeit

After answering "Trust this certificate?" with yes, your certificate will be imported.

You can query the certificate with the following command:

./keytool -keystore ../lib/security/cacerts -list | grep "your_alias"

The output should show your certificate information. The keystore can now be used by any code that makes web service calls.
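Before importing, it can also help to inspect the certificate file itself with openssl, to confirm its subject, issuer and expiry. A sketch, assuming openssl is installed; the CERT path is a placeholder, and the self-signed demo certificate is generated only so the example runs standalone:

```shell
#!/bin/sh
# Sketch: inspect a PEM certificate before importing it into cacerts.
# CERT is a placeholder; point it at the certificate file you received.
CERT=${CERT:-/tmp/demo_cert.pem}

# For demonstration only: create a throwaway self-signed cert if none exists.
[ -f "$CERT" ] || openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout /tmp/demo_key.pem -out "$CERT" -days 1 -subj "/CN=demo" 2>/dev/null

# Show subject, issuer and expiry date before trusting it.
openssl x509 -in "$CERT" -noout -subject -issuer -enddate
```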

Thursday, April 21, 2016

OGG-01028 Redo sequence no longer available in online logs

Oracle GoldenGate was set up between two DWH databases, both running in noarchivelog mode. The purpose is to transfer processed data from the live system to Test, to avoid running the same ETL process on the Test system.

But there was a missed point: both source and target systems are in noarchivelog mode, so no archived logs exist for the extract process to read. Because of this, the following error occurred when the log switch frequency increased during the ETL run and the GoldenGate extract could no longer capture transactions from the old redo log.

By default, GoldenGate reads the online redo logs; once they have all been switched, it falls back to reading the archived logs.

For a noarchivelog-mode database, my solution was to increase the size of the redo logs according to the switch frequency. I increased each redo log from 4GB to 16GB and have not seen the error since.

Error :
OGG-01028  Oracle GoldenGate Capture for Oracle, efact.prm:  Redo sequence 14368 no longer available in online logs for thread 1 without archiving enabled, SQL <SELECT MAX(sequence#)  FROM v$log WHERE thread# = :ora_thread>, error retrieving redo file name for sequence 14368, archived = 1, use_alternate = 0.
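To apply the same fix, you can first check the switch frequency and then replace the redo log groups with larger ones. A rough SQL sketch; the group number and file path are hypothetical and must match your own layout:

```sql
-- Check log switch frequency per hour
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
  FROM v$log_history
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY 1;

-- Add larger groups, then drop the old ones once they are INACTIVE
ALTER DATABASE ADD LOGFILE GROUP 11 ('/u01/oradata/redo11.log') SIZE 16G;
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;  -- only when v$log shows INACTIVE
```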

Thursday, March 31, 2016

Tips & Tricks 5 : Insert data over dblink from partition

Inserting data over a dblink from a specified partition gave the following error:

insert into testtbl select * from testtbl@TESTLINK partition (p1);

ORA-02070: database TESTLINK does not support extended partition name in this context

Briefly, the partition clause is not supported over a dblink. To work around this issue, a view that uses the partition clause is created at the remote site:

Remote site : create view testtblp1_v as select * from testtbl partition (p1);
Local site    :  insert into testtbl select * from testtblp1_v@TESTLINK;
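An alternative, when you know the partition key, is to filter on the key columns so the remote optimizer can prune to the partition without naming it. A sketch; the key column and its bounds are hypothetical:

```sql
-- Instead of naming partition p1, select its key range explicitly;
-- the remote side can then prune to that partition on its own.
insert into testtbl
select * from testtbl@TESTLINK
 where part_key_col >= DATE '2016-01-01'
   and part_key_col <  DATE '2016-02-01';
```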

Monday, March 21, 2016

Foreign Data Wrapper for PostgreSQL Part 2 : Create Server , User Mapping and Foreign Table

After the installation of the foreign data wrapper, which is described in this blog post, we must create a server, a user mapping and a foreign table in order to access Oracle data from the PostgreSQL database.
Creating the server provides the connection information of the foreign data source that the foreign data wrapper uses for access:

create server oracle_SERVER foreign data wrapper oracle_fdw options (dbserver '//10.1.X.X/DB_INSTANCE_NAME');

The next step, creating the user mapping, maps a PostgreSQL user to the foreign server:

create user mapping for $POSTGRESQL_USER server oracle_SERVER options(user 'oracle_user',password 'oracle_user_password');

$POSTGRESQL_USER is the PostgreSQL database user that is mapped to the foreign server.

The foreign server's owner can create a user mapping for any user of the server. A user who has the USAGE privilege on the foreign server can also create a user mapping for his own user.

Finally, the foreign table is created with the following command:

create foreign table foreign_table (user_id numeric,user_name text) server oracle_SERVER options (schema 'oracle_schema_name',table 'oracle_table');

With the psql interface you can now select from foreign_table.

user_id and user_name are table columns identical to those of the Oracle table.
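Putting it together, granting access to another user and querying the foreign table look like this; the role name report_user is a placeholder:

```sql
-- Allow another PostgreSQL user to use the foreign server
GRANT USAGE ON FOREIGN SERVER oracle_SERVER TO report_user;

-- Query the foreign table as you would a local one
SELECT user_id, user_name FROM foreign_table WHERE user_id < 100;
```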

Source :
http://www.postgresql.org/docs/9.1/static/sql-createserver.html
http://www.postgresql.org/docs/9.1/static/sql-createusermapping.html
http://www.postgresql.org/docs/9.1/static/sql-createforeigntable.html

Wednesday, March 9, 2016

Installation of Oracle Foreign Data Wrapper for PostgreSQL

If you are working on both Oracle and PostgreSQL databases, sometimes you need to access one database from the other. My project involves accessing an Oracle database from a PostgreSQL database, which led me to research how to do it, and I found foreign data wrappers. A foreign data wrapper provides access to data that is not stored in the PostgreSQL database.

The following link contains all the FDW information for PostgreSQL.

To install Oracle FDW, the Oracle Instant Client rpm packages and the oracle_fdw extension are needed.

Install the oracle-instantclient-basic, oracle-instantclient-sqlplus and oracle-instantclient-devel packages for your version.

After the installation, the following environment variables are set:

export PATH=$PATH:/usr/pgsql-9.3/bin
export ORACLE_HOME=/usr/lib/oracle/11.2/client64
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$PATH:$ORACLE_HOME/bin:$LD_LIBRARY_PATH

Extract the downloaded oracle_fdw.master.zip file and change into the directory. The software installation is done with the following commands:

$make
$make install

Now the oracle_fdw shared library has been installed in the PostgreSQL library directory, and oracle_fdw.control and the SQL files are in the PostgreSQL extension directory.

The following step installs the extension in a database, as a superuser:

$psql -d {Your database name}
yourdb=# CREATE EXTENSION oracle_fdw;

ERROR:  could not load library "/usr/pgsql-9.3/lib/oracle_fdw.so": libclntsh.so.11.1: cannot open shared object file: No such file or directory
  
The above error can be seen while creating the extension, because of a library that appears missing but actually exists. To solve it, the Oracle library path is added to the system library list:

$echo /usr/lib/oracle/11.2/client64/lib > /etc/ld.so.conf.d/oracle.conf
$ldconfig

After this library arrangement, executing the same command to install the extension completed successfully:

yourdb=# CREATE EXTENSION oracle_fdw;
CREATE EXTENSION

Monday, March 7, 2016

enq: TX - row lock contention (TX mode 4)

After a new project went live, we saw excessive "enq: TX - row lock contention" waits on a 2-node RAC system, as the Top Activity graphs of both nodes showed.

(Top Activity screenshots for Node 1 and Node 2.)

These waits prevented users from working properly. The SQL causing the lock wait was a simple insert statement, and the P1 field of v$session for the waiting sessions indicated that the lock type was a shared mode 4 TX wait. Common causes for mode 4 are:

a. Unique index
b. Foreign key
c. Bitmap index
  
For sessions waiting on this event, v$session gives the object causing the wait:

select row_wait_obj#,row_wait_file#,row_wait_block#,row_wait_row#
  from gv$session
where event='enq: TX - row lock contention' and state='WAITING'

The output of the SQL gave me the object, which was a bitmap index; after dropping the index, the problem went away. The explanation for this behavior is that each entry in a bitmap index can cover multiple rows in the table. If two sessions want to update rows covered by the same index key entry, the second session waits for the first session's transaction to complete (by commit or rollback), waiting on the TX lock in mode 4.
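To identify the object by name rather than by number, row_wait_obj# can be joined to dba_objects; a sketch:

```sql
-- Map the waiting sessions' row_wait_obj# to an object name and type
select s.inst_id, s.sid, o.owner, o.object_name, o.object_type
  from gv$session s
  join dba_objects o on o.object_id = s.row_wait_obj#
 where s.event = 'enq: TX - row lock contention'
   and s.state = 'WAITING';
```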