This blog is mostly about Oracle Database, Oracle EBS, and other Oracle tools. I will also try to share my experiences with other databases. Thanks for reading, and your feedback is appreciated.
Thursday, August 9, 2018
I have just released a Kafka Oracle source connector for importing changed data from an Oracle database into Kafka. The main logic is based on the Oracle LogMiner solution, and all details and instructions can be found at the following URL.
Thursday, April 19, 2018
Create Materialized View Fails With ORA-30372
The ORA-30372 error is raised when a user tries to refresh a materialized view. Detailed explanation of the error:
Oracle Error: ORA-30372
Error Description:
Fine grain access policy conflicts with materialized view
Error Cause:
A fine grain access control procedure has applied a non-null policy to the query for the materialized view.
Action:
In order for the materialized view to work correctly, any fine grain access control procedure in effect for the query must return a null policy when the materialized view is being created or refreshed. This may be done by ensuring that the usernames for the creator, owner, and invoker of refresh procedures for the materialized view all receive a null policy by the user-written fine grain access control procedures.
If any table used in the materialized view has a VPD policy, that policy must return NULL while the view is being created or refreshed; otherwise this error is raised. To solve it, Oracle suggests having the policy function return a NULL policy for the user who creates or refreshes the materialized view, but that can cause security issues if this user normally has data access restrictions.
There is another solution: the following check can be put into the security policy function so that the policy returns NULL only during a refresh.
if dbms_mview.i_am_a_refresh then
return null;
end if;
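As a minimal sketch of this approach (the function name, context, and predicate below are hypothetical), a VPD policy function using the check might look like this:
create or replace function xx_cust_policy (
  p_schema in varchar2,
  p_object in varchar2
) return varchar2
is
begin
  -- return a NULL predicate during materialized view create/refresh
  if dbms_mview.i_am_a_refresh then
    return null;
  end if;
  -- otherwise apply the normal row-level restriction (hypothetical predicate)
  return 'cust_region = sys_context(''XX_CTX'',''REGION'')';
end;
/
The function is then registered against the base table with dbms_rls.add_policy as usual; no change is needed on the materialized view itself.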
Wednesday, April 4, 2018
SQL Profile ignores hints
Hints can be a powerful way to make the optimizer choose a proper execution plan so that the application performs well. But sometimes the hints embedded in a SQL statement become harmful and force a wrong execution plan. In that case the development team can fix the SQL by removing the hints, but if that cannot be done immediately, the DBA can solve the issue by telling the optimizer to ignore the hints.
alter session set "_optimizer_ignore_hints"=true
This command ignores all hints for the duration of the session, which may solve the problem but can also cause new trouble. At the level of a single SQL statement, hints can be ignored with the import_sql_profile procedure in the dbms_sqltune package.
The SQL below is used as a sample. The hint originally told the optimizer not to use an index, but the hint has since lost its validity and the index should now be used for the query.
SQL> explain plan for select /*+no_index(s) */ count(1) from xdba_dcyear s where cust_name like 'ER%';
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 3491557762
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| T
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 252 | 2 (0)| 0
| 1 | SORT AGGREGATE | | 1 | 252 | |
|* 2 | TABLE ACCESS FULL| XDBA_DCYEAR | 20 | 5040 | 2 (0)| 0
--------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("CUST_NAME" LIKE 'ER%')
Note
-----
- dynamic sampling used for this statement (level=2)
18 rows selected
Executed in 0.076 seconds
Now the dbms_sqltune.import_sql_profile procedure is used with the attribute "IGNORE_OPTIM_EMBEDDED_HINTS".
begin
dbms_sqltune.import_sql_profile(
name => 'SQLPROF1',
category => 'DEFAULT',
sql_text => 'select /*+no_index(s) */ count(1) from xdba_dcyear s where cust_name like ''ER%''',
profile => sqlprof_attr('IGNORE_OPTIM_EMBEDDED_HINTS'));
end;
/
SQL> explain plan for select /*+no_index(s) */ count(1) from xdba_dcyear s where cust_name like 'ER%';
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 2253536563
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 252 | 1 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 252 | | |
|* 2 | INDEX RANGE SCAN| X1 | 20 | 5040 | 1 (0)| 00:00:01 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("CUST_NAME" LIKE 'ER%')
filter("CUST_NAME" LIKE 'ER%')
Note
-----
- dynamic sampling used for this statement (level=2)
- SQL profile "SQLPROF1" used for this statement
20 rows selected
Executed in 0.126 seconds
As a result, the no_index hint is ignored and the index is used according to the explain plan. The usage of the "SQLPROF1" profile can also be observed in the explain plan notes.
In order to remove the SQL profile, the following procedure is used.
begin
dbms_sqltune.drop_sql_profile('SQLPROF1');
end;
/
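To check whether a SQL profile exists and is enabled, the dba_sql_profiles dictionary view can be queried; a small sketch:
select name, category, status, created
from dba_sql_profiles
where name = 'SQLPROF1';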
Note: the dbms_sqltune package is licensed under the Oracle Tuning Pack. This must be considered.
Wednesday, March 21, 2018
CRS-4228: Value of attribute ')' is missing
After changing the disk group of a database that is registered with Cluster Ready Services, the start and stop dependency attributes that reference the old disk group must be updated to keep the configuration correct.
$ crsctl status res ora.test.db -p
NAME=ora.test.db
TYPE=ora.database.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
...
START_DEPENDENCIES=hard(ora.TESTRECO.dg,ora.TESTDATADG.dg) weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,global:ora.gns) pullup(ora.TESTRECO.dg,ora.TESTDATADG.dg)
STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.TESTRECO.dg,shutdown:ora.TESTDATADG.dg)
...
Disk group TESTDATADG is no longer used and all data files have been moved to another disk group named TESTDATADG2.
Changing the attributes that reference the old disk group so that they point to the new disk group name with the following command,
$ crsctl modify res ora.test.db -attr START_DEPENDENCIES="hard(ora.TESTRECO.dg.ora, ora.TESTDATADG2.dg).........."
gives the CRS-4228 error.
The comma is used as a delimiter between attributes, so the string after the comma, "ora.TESTDATADG2.dg)", is treated as a separate attribute or resource that does not actually exist. To prevent this error, single quotes are placed around the attribute value, like this:
$ crsctl modify res ora.test.db -attr START_DEPENDENCIES="'hard(ora.TESTRECO.dg.ora, ora.TESTDATADG2.dg)..........'"
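After the modification, the new dependencies can be verified with a command like the one below (resource name taken from the example above):
$ crsctl status res ora.test.db -p | grep DEPENDENCIES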
Thursday, March 15, 2018
SOA Gateway soagenerate.sh gives error server.java.io.FileNotFoundException
After a clone of a live E-Business Suite system, the existing web services on SOA Gateway must be regenerated and redeployed before they can be used for test purposes. To do this, a file is prepared that contains all web service interface names, one per line.
vi /tmp/services.lst
XX_TEST_PKG
XX_TEST2_PKG
soagenerate.sh under $FND_TOP/bin is executed to generate the WSDLs:
$FND_TOP/bin/soagenerate.sh
Enter the single or multiple generate as S/M :
M
Enter the filename of the interface to be generated :
/tmp/services.lst
If you encounter the following error,
Wed Feb 21 07:54:56 EET 2018 : oracle.apps.fnd.soa.provider.DBConnection.getJDBCConnection : Obtained new Jdbc Connection via WebAppsContext
Wed Feb 21 07:54:56 EET 2018 : oracle.apps.fnd.soa.provider.DBConnection.getJDBCConnection : Obtained new Jdbc Connection via WebAppsContext
Wed Feb 21 07:54:56 EET 2018 : oracle.apps.fnd.soa.provider.util.IRepAccess.getClassId : Class Id for irep_name XX_TEST_PKG = 4283
Wed Feb 21 07:54:56 EET 2018 : oracle.apps.fnd.soa.ui.ProviderAccess.generateWSDL : Generating Service : 4283
Wed Feb 21 07:54:56 EET 2018 : oracle.apps.fnd.soa.provider.util.ServerAccess.sendMessageToServer : Sending Message : Generate
Wed Feb 21 07:54:56 EET 2018 : oracle.apps.fnd.soa.provider.util.ServerAccess.sendMessageToServer : For Class Id : 4283
Wed Feb 21 07:54:56 EET 2018 : oracle.apps.fnd.soa.provider.util.ServerAccess.sendMessageToServer : Encoded ClassId : 4283
Wed Feb 21 07:54:56 EET 2018 : oracle.apps.fnd.soa.provider.util.ServerAccess.sendMessageToServer : SOA_SERVER_URL : http://ebstest.localdomain.com.tr:8010
Wed Feb 21 07:54:56 EET 2018 : oracle.apps.fnd.soa.provider.util.ServerAccess.sendMessageToServer : Host : http://ebstest.localdomain.com.tr:8010
Wed Feb 21 07:54:56 EET 2018 : oracle.apps.fnd.soa.provider.util.ServerAccess.sendMessageToServer : Port : 8010
Wed Feb 21 07:54:56 EET 2018 : oracle.apps.fnd.soa.provider.util.ServerAccess.sendMessageToServer : ticket is null : false
Wed Feb 21 07:54:56 EET 2018 : oracle.apps.fnd.soa.provider.util.ServerAccess.sendMessageToServer : Error while sending message to server.java.io.FileNotFoundException: http://ebstest.localdomain.com.tr:8010/webservice
s/SOAProvider/EbizAuth?Generate=4283&soa_ticket=EgsTINbmlZ7ofmxqvGD-gw..
you must reset the ASADMIN password, as explained in this earlier blog post:
http://erdemcer.blogspot.com.tr/2014/04/oracle-ebs-soa-gateway-testquery-does.html
After resetting the ASADMIN password, re-execute soagenerate.sh.
As the last step, redeploy the interfaces:
$FND_TOP/bin/soadeploy.sh
Enter the single or multiple deploy as S/M :
M
Enter the filename of the interface to be deployed :
/tmp/services.lst
Monday, March 5, 2018
ORA-27154: post/wait create failed / ORA-27301: OS failure message: No space left on device
After a SAN migration of the database everything went fine and the system ran successfully for a day. Then, because of the multipath configuration, the database crashed due to a DBWR failure. After the storage failure was fixed, the error below occurred whenever we tried to open the database, even though there had been no configuration change on the database itself.
ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device
ORA-27302: failure occurred at: sskgpcreates
Mon Mar 05 12:32:54 2018
According to the related support notes, an inadequate semaphore setting can prevent the OS from accommodating the request. Setting the semaphore parameter to a higher value resolved the issue.
In /etc/sysctl.conf
Before change
kernel.sem = 250 32000 100 128
After change
kernel.sem = 250 32000 100 256
After changing the setting, execute the "sysctl -p" command as root to apply it.
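A quick way to inspect the semaphore limits before and after the change (standard Linux commands; the values shown will differ per system):
# current limits: SEMMSL SEMMNS SEMOPM SEMMNI
cat /proc/sys/kernel/sem
ipcs -ls
# reload /etc/sysctl.conf
sysctl -p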
Friday, February 23, 2018
OBIEE Error ADF_FACES-60097 ADF_FACES-60096
When selecting "Coreapplication" in enterprise manager of OBIEE following error occured,
[2018-02-22T21:17:13.868+02:00] [AdminServer] [NOTIFICATION] [] [oracle.bi.management.adminservices] [tid: DmsThread-2] [userId: <WLS Kernel>] [ecid: 7326609a41c1cc90:101d0126:161bee9fb6b:-8000-000000000000007d,1:16628]
[APP: biadminservices#11.1.1] BIInstanceStatusMetricMBeanImpl; getServiceStatus: currentPrincipal is null
[2018-02-22T21:17:14.512+02:00] [AdminServer] [NOTIFICATION] [OBI-SYSMAN-1230] [oracle.bi.management.sysmancommon] [tid: DmsThread-2] [userId: <WLS Kernel>] [ecid: 7326609a41c1cc90:101d0126:161bee9fb6b:-8000-00000000000000
7d,1:16628] [APP: biadminservices#11.1.1] Summary of current system component statuses; Up: 6; Down: 0; Up(HA): 0; Down(HA): 0; Error: 0
[2018-02-22T21:17:14.514+02:00] [AdminServer] [NOTIFICATION] [] [oracle.bi.management.adminservices] [tid: DmsThread-3] [userId: <WLS Kernel>] [ecid: 7326609a41c1cc90:101d0126:161bee9fb6b:-8000-000000000000007d,1:16626] [A
PP: biadminservices#11.1.1] BIInstanceStatusMetricMBeanImpl; getServiceStatus: currentPrincipal is null
[2018-02-22T21:17:15.158+02:00] [AdminServer] [NOTIFICATION] [OBI-SYSMAN-1230] [oracle.bi.management.sysmancommon] [tid: DmsThread-3] [userId: <WLS Kernel>] [ecid: 7326609a41c1cc90:101d0126:161bee9fb6b:-8000-00000000000000
7d,1:16626] [APP: biadminservices#11.1.1] Summary of current system component statuses; Up: 6; Down: 0; Up(HA): 0; Down(HA): 0; Error: 0
[2018-02-22T21:17:15.236+02:00] [AdminServer] [WARNING] [] [oracle.mds] [tid: [ACTIVE].ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: weblogic] [ecid: 7326609a41c1cc90:101d0126:161bee9fb6b:
-8000-0000000000000137,0] [APP: em] A call to flushChanges on the current MDSSession does not specify the correct transaction key
[2018-02-22T21:17:15.241+02:00] [AdminServer] [ERROR] [ADFC-50018] [oracle.adfinternal.controller.application.AdfcExceptionHandler] [tid: [ACTIVE].ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)']
[userId: weblogic] [ecid: 7326609a41c1cc90:101d0126:161bee9fb6b:-8000-0000000000000137,0] [APP: em] ADFc: No exception handler was found for an application exception.
[[javax.faces.FacesException: java.io.IOException: Stream closed
[2018-02-22T21:17:15.243+02:00] [AdminServer]
[WARNING] [] [oracle.adfinternal.view.faces.lifecycle.LifecycleImpl] [tid: [ACTIVE].ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: weblogic] [ecid: 7326609a41c1cc90:101d0126:161bee9fb6b:-8000-0000000000000137,0] [APP: em] ADF_FACES-60098:Faces lifecycle receives unhandled exceptions in phase RENDER_RESPONSE 6[[javax.faces.FacesException: java.io.IOException: Stream closed
The error can be observed in the AdminServer diagnostic logs.
According to Metalink, if the system ran out of disk space at some point, some files may have been corrupted, which causes this issue. In order to solve it:
1. Stop all OBIEE services and the WebLogic server
2. Find 0-byte files under the $DOMAIN_HOME/sysman/mds directory
cd $DOMAIN_HOME/sysman/mds
find . -name "*" -size 0
3. Delete these files (see the sketch after this list)
4. Start WebLogic and the OBIEE services
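As a minimal sketch, steps 2 and 3 can be combined into a single command (run it only after the services are stopped, and keep a backup if unsure):
cd $DOMAIN_HOME/sysman/mds
find . -type f -size 0 -exec rm -f {} \;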
Now "Coreapplication" should be accessible again and deleted files will be recreated.
Saturday, February 17, 2018
Execute ASM Rebalance Once While Adding & Dropping Disks
Sometimes it is required to add and drop disks sequentially in ASM environments, for example during a disk group migration between SANs or disk arrays. In this situation, by default, a rebalance operation executes after each add and drop statement.
alter diskgroup DATA add disk
'ASMNEWDDISK_1',
'ASMNEWDDISK_2',
'ASMNEWDDISK_3' rebalance power 4;
alter diskgroup DATA drop disk
'ASMOLDDDISK_1',
'ASMOLDDDISK_2',
'ASMOLDDDISK_3' rebalance power 4;
With the statements above, the rebalance operation executes twice, which is unnecessary. In order to rebalance in a single operation, the following statement can be used.
alter diskgroup DATA add disk
'ASMNEWDDISK_1',
'ASMNEWDDISK_2',
'ASMNEWDDISK_3'
drop disk
'ASMOLDDDISK_1',
'ASMOLDDDISK_2',
'ASMOLDDDISK_3' rebalance power 4;
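The progress of the single rebalance operation can then be monitored through the v$asm_operation view; a small sketch:
select group_number, operation, state, power, sofar, est_work, est_minutes
from v$asm_operation;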
Monday, February 12, 2018
Open Standby Database due to Archive Gap
To renew TEST or development environments, a clone of the standby database can be used. In my last renewal of the TEST environment, I used the last backup of the production system, the same backup that had been used to build the standby database: the production system was backed up and a standby control file was created, and after the standby database was successfully created, the same backup sets were used to restore the new TEST environment to another location.
When the command "alter database activate standby database" was issued to open the standby database as an active database, the error below was encountered.
alter database activate standby database
ALTER DATABASE ACTIVATE [PHYSICAL] STANDBY DATABASE (XXX)
Begin: Standby Redo Logfile archival
End: Standby Redo Logfile archival
Signalling error 1152 for datafile 1!
Beginning standby crash recovery.
Serial Media Recovery started
Managed Standby Recovery starting Real Time Apply
Media Recovery Waiting for thread 1 sequence 318427
Fetching gap sequence in thread 1, gap sequence 318427-318427
FAL[client]: Error fetching gap sequence, no FAL server specified
Wed Jan 24 23:54:19 2018
FAL[client]: Error fetching gap sequence, no FAL server specified
Wed Jan 24 23:54:29 2018
FAL[client]: Error fetching gap sequence, no FAL server specified
Wed Jan 24 23:54:39 2018
FAL[client]: Error fetching gap sequence, no FAL server specified
Wed Jan 24 23:54:49 2018
FAL[client]: Error fetching gap sequence, no FAL server specified
Wed Jan 24 23:54:59 2018
FAL[client]: Error fetching gap sequence, no FAL server specified
Wed Jan 24 23:55:09 2018
FAL[client]: Error fetching gap sequence, no FAL server specified
Standby crash recovery need archive log for thread 1 sequence 318427 to continue.
Please verify that primary database is transporting redo logs to the standby database.
Wait timeout: thread 1 sequence 318427
Standby crash recovery aborted due to error 16016.
ORA-16016: archived log for thread 1 sequence# 318427 unavailable
Recovery interrupted!
Completed standby crash recovery.
Signalling error 1152 for datafile 1!
ORA-1152 signalled during: alter database activate standby database...
The database requested archived log sequence 318427, could not fetch it from the primary system, and finally timed out.
Actually, the database had already been restored and recovered with the last backup sets and archived logs, and at this point it could be used as the TEST environment.
Since the control file was a standby control file, which is what caused this situation, I decided to recreate it as a primary control file.
I got the control file creation script by using the command below,
"alter database backup controlfile to trace"
then created the control file from that output with the RESETLOGS option and opened the database.
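A minimal sketch of that sequence (the trace file path is hypothetical, and the generated script must be reviewed and edited before it is run):
-- dump the CREATE CONTROLFILE script to a file
alter database backup controlfile to trace as '/tmp/create_ctl.sql' resetlogs;
-- after editing /tmp/create_ctl.sql:
shutdown immediate
startup nomount
@/tmp/create_ctl.sql
alter database open resetlogs;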
Wednesday, February 7, 2018
txkWfClone.sh Timed out
While executing the first AutoConfig run on the cloned system, txkWfClone.sh fails and AutoConfig reports that it did not complete successfully:
AutoConfig could not successfully execute the following scripts:
Directory: /testapps/R12/inst/apps/TEST_testapp/admin/install
txkWfClone.sh INSTALL_SETUP -1
AutoConfig is exiting with status 1
From the log file:
AutoConfig Setup Phase
Running Setup Process 7 of 8 for AD_TOP
Executing script in InstantiateFile:
/testapps/R12/inst/apps/TEST_testapp/admin/install/txkWfClone.sh
script returned:
****************************************************
Timed out( 3750000 ): Interrupted Exception
If the Workflow tables have high row counts, this error can be observed. The tables below can be reviewed for row counts:
select count(*) from wf_resources where name = 'WF_WEB_AGENT' and language = 'US';
select count(*) from WF_ITEM_ATTRIBUTE_VALUES;
select count(*) from WF_ITEM_ATTRIBUTES;
select count(*) from wf_activity_attr_values;
select count(*) from wf_process_activities;
select count(*) from wf_activity_attributes;
select count(*) from wf_notification_attributes;
select count(*) from wf_message_attributes;
Firstly "Purge Obsolete Workflow RunTime Data" concurrent program can be executed on source system to get less data for clone system.
In my situation , i have already restored source system for clone which is nearly 15 TB and no chance to run related concurrent program before system configuration.
To solve my issue , after getting error from adcfgclone script , I have executed txkWfClone.sh as single process from command line and completed adcfgclone successfully.
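A rough sketch of that standalone run (the directory is taken from the log above; whether the script prompts for the APPS password can vary by release, so treat that as an assumption):
cd /testapps/R12/inst/apps/TEST_testapp/admin/install
sh txkWfClone.sh
# supply the APPS password if prompted, then rerun AutoConfig, e.g. adautocfg.sh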
Friday, January 19, 2018
Change database JDBC URL from deployed ear file
When I tried to change the database JDBC URL of a deployed application from the WebLogic console, it gave the error below:
Console encountered the following error java.lang.IllegalArgumentException: [J2EE Deployment SPI:260140]Unable to remove descriptor bean
weblogic.j2ee.descriptor.wl.JDBCPropertyBeanImpl@179efc1([MYAPP]/JDBCDriverParams/Properties/Properties[user]) due to 'Unable to remove bean since not defined in plan'. The remover should first check to see if the bean is removable in the deployment plan prior to removing it.
Of course there is a way to solve this on the WebLogic side, but I preferred to change the database URL directly in the EAR file by editing the related jdbc.xml descriptor.
Action plan:
- Stop WebLogic
- Locate the EAR file of the application
- Back up the EAR file
- Unzip the EAR file
- unzip XXXX_myapp.ear
- Change the url entity in META-INF/MYAPP-jdbc.xml to the desired URL
- Save the MYAPP-jdbc.xml file
- Using jar, update the EAR file
- jar uvf XXXX_myapp.ear META-INF/MYAPP-jdbc.xml
- Start WebLogic
You can observe the change from the WebLogic console.
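As a compact sketch of the same action plan (file and descriptor names follow the example above; the sed pattern is only an illustration of editing the URL):
cp XXXX_myapp.ear XXXX_myapp.ear.bak
unzip XXXX_myapp.ear META-INF/MYAPP-jdbc.xml
sed -i 's|@oldhost:1521/OLDDB|@newhost:1521/NEWDB|' META-INF/MYAPP-jdbc.xml
jar uvf XXXX_myapp.ear META-INF/MYAPP-jdbc.xml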
Friday, January 12, 2018
Capture DDL of Dumpfile
Sometimes you are given dump files to import into your system, but if you do not know the source system structure, the import can be troublesome: without knowing the source tablespaces and schemas, the import will produce errors.
To solve this and perform a successful import, impdp is called with the sqlfile parameter to extract all DDL statements. The output contains all the DDL, from which the necessary statements can be captured.
impdp $DBUSER directory=$DIRECTORY dumpfile=$DUMPFILE.dmp sqlfile=impfile.sql
After executing the statement above, impfile.sql is generated. For example, the tablespaces can be listed with
grep TABLESPACE impfile.sql
which lists all "TABLESPACE" references. You can create the tablespaces according to the output, or use the "remap_tablespace" parameter of impdp to map them onto tablespaces that exist in your system.
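A small sketch of the remap approach (the source and target tablespace/schema names are hypothetical):
impdp $DBUSER directory=$DIRECTORY dumpfile=$DUMPFILE.dmp remap_tablespace=SRC_DATA:USERS remap_schema=SRCUSER:TESTUSER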
Labels:
DDL,
grep,
impdp,
remap_tablespace,
schema,
sqlfile,
tablespace
Thursday, January 4, 2018
ORA-01378 The logical block size error
For some reason, the client asked me to open the production database in another environment until a specific time. After a successful restore and recovery, clearing a log file gave the error below:
ORA-01378: The logical block size (4096) of file +RECO is not compatible with the disk sector size (media sector size is 512 and host sector size is 512)
The production system's redo log files have a 4K block size, and clearing such a log file on 512-byte storage is not possible.
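To see which redo log groups use the 4K block size, v$log can be queried (the BLOCKSIZE column is available in recent releases):
select group#, thread#, blocksize, bytes/1024/1024 as size_mb, status
from v$log;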
Recreating the control file without the 4K redo log statements solved the issue. Before recreating the control file, I had added new redo log files (groups 11, 12, and 13) to the system. The old control file script:
create controlfile reuse database "testdb" noresetlogs force logging archivelog
maxlogfiles
max...
...
logfile
group 2('+RECO/TESTDB/log2a','+RECO/TESTDB/log2b') size 100M blocksize 4096,
group 11('+RECO/TESTDB/log11a','+RECO/TESTDB/log11b') size 100M blocksize 512,
group 12('+RECO/TESTDB/log12a','+RECO/TESTDB/log12b') size 100M blocksize 512,
group 13('+RECO/TESTDB/log13a','+RECO/TESTDB/log13b') size 100M blocksize 512
datafile
...
...
New control file script:
create controlfile reuse database "testdb" noresetlogs force logging archivelog
maxlogfiles
max...
...
logfile
group 11('+RECO/TESTDB/log11a','+RECO/TESTDB/log11b') size 100M blocksize 512,
group 12('+RECO/TESTDB/log12a','+RECO/TESTDB/log12b') size 100M blocksize 512,
group 13('+RECO/TESTDB/log13a','+RECO/TESTDB/log13b') size 100M blocksize 512
datafile
...
...