Michael Dinh


Side Notes

"Good artists copy, great artists steal." (Pablo Picasso)

See also: "migrate database with imp exp" by Laurent Schneider, September 1, 2005.

The above is one good alternative.

What I have is not yet perfect, if there is such a thing as perfection.

Schema/DB Migration.

Oracle 12c NON System Related schemas.

select 'SCHEMAS='||username from dba_users where ORACLE_MAINTAINED='N' order by 1;

Oracle 11.2.0.4.0 NON System Related schemas.

select 'SCHEMAS='||name from EXU8USR where name not in ('SYS','SYSTEM') order by 1;

Gather info from source.

OBJECT_TYPE

select OWNER,OBJECT_TYPE,STATUS,count(*)

from dba_objects

-- where owner in (select name from EXU8USR where name not in ('SYS','SYSTEM'))

where owner in ('DEMO','HR')

group by OWNER,OBJECT_TYPE,STATUS

order by OWNER,OBJECT_TYPE,STATUS

;

DB_LINK

col db_link for a100

select OWNER,DB_LINK from dba_db_links

;

DBA_DIRECTORIES

col DDL for a100

select 'grant '||privilege||' on directory '||owner||'.'||table_name||' to '||grantee DDL

from dba_tab_privs

where grantee in (select name from EXU8USR where name not in ('SYS','SYSTEM'))

and table_name in (select directory_name from dba_directories)

order by grantee

;

PUBLIC DBA_SYNONYMS

select 'create public synonym ' ||synonym_name|| ' for ' ||table_owner|| '.' ||table_name|| ';' DDL

from dba_synonyms

where table_owner in (select name from EXU8USR where name not in ('SYS','SYSTEM'))

and owner = 'PUBLIC'

;

select owner, count(*)

from dba_synonyms

where table_owner in (select name from EXU8USR where name not in ('SYS','SYSTEM'))

group by owner order by 1

;

AUDITS

select * from dba_stmt_audit_opts where user_name in ('DEMO','HR')

union

select * from dba_priv_audit_opts where user_name in ('DEMO','HR')

;

check_import.sql

Disable DB Jobs for DB Migration

JOB_QUEUE_PROCESSES

https://docs.oracle.com/cd/E18283_01/server.112/e17110/initparams108.htm

As of 11g Release 2 (11.2)

If the value of JOB_QUEUE_PROCESSES is set to 0, then DBMS_JOB jobs and Oracle Scheduler jobs will not run on the instance.

show parameter JOB_QUEUE_PROCESSES

alter system set job_queue_processes=0 scope=both sid='*';

exec dbms_scheduler.set_scheduler_attribute('SCHEDULER_DISABLED','TRUE');

show parameter JOB_QUEUE_PROCESSES

select * from DBA_SCHEDULER_GLOBAL_ATTRIBUTE;

Take note of DEFAULT_TIMEZONE from DBA_SCHEDULER_GLOBAL_ATTRIBUTE; it will be needed when jobs are re-enabled on the target.
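Before starting the export, it is worth confirming that nothing is still executing. This check is my addition, not from the original steps:

```sql
-- Confirm no scheduler jobs are still running after disabling the queue.
-- Expect zero rows before proceeding with the export.
select job_name, state from dba_scheduler_running_jobs;
```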

Perform full export with metadata only.

You might ask, why are there two versions of flashback_time?

There's a bug out there where systimestamp does not work.

flashback_time=systimestamp

#flashback_time="TO_TIMESTAMP(TO_CHAR(systimestamp,'DD-MM-YYYY HH24:MI:SS'),'DD-MM-YYYY HH24:MI:SS')"

expdp_full_meta.par

# EXP-10 Usernames Which Cannot Be Exported (Doc ID 217135.1)

# $ORACLE_HOME/rdbms/admin/catexp.sql the view EXU8USR

# select name from exu8usr order by 1;

# GRANTs On SYS Owned Objects Missing at Target Database (Doc ID 1911151.1)

# After Full Import All Object Privileges Granted By SYS Are Lost (Doc ID 97902.1)

directory=DATA_PUMP_DIR

userid="/ as sysdba"

flashback_time=systimestamp

#flashback_time="TO_TIMESTAMP(TO_CHAR(systimestamp,'DD-MM-YYYY HH24:MI:SS'),'DD-MM-YYYY HH24:MI:SS')"

metrics=Y

reuse_dumpfiles=Y

dumpfile=full_meta.dmp

logfile=expdp_full.log

content=METADATA_ONLY

FULL=Y
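For reference, each parameter file in this post is run by passing it to the Data Pump client with parfile; the connection comes from the userid line inside the file. A sketch (assuming expdp is on PATH and the par file is in the current directory; shown here as an echoed string, run the command directly in practice):

```shell
# Sketch: build the expdp invocation for the parameter file above.
# Remove the echo and run the command directly against the source database.
cmd="expdp parfile=expdp_full_meta.par"
echo "$cmd"
```

The same pattern applies to every par file below, e.g. `impdp parfile=impdp_schema.par`.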

Perform schema export

ARROW1:(SYS@HAWKA):PRIMARY> select 'SCHEMAS='||name from EXU8USR where name not in ('SYS','SYSTEM') order by 1;

'SCHEMAS='||NAME

--------------------------------------

SCHEMAS=DEMO

SCHEMAS=GGS_ADMIN

SCHEMAS=HR

SCHEMAS=OUTLN

ARROW1:(SYS@HAWKA):PRIMARY>

expdp_schema.par

directory=DATA_PUMP_DIR

userid="/ as sysdba"

flashback_time=systimestamp

metrics=Y

reuse_dumpfiles=Y

dumpfile=schema.dmp

logfile=expdp_schema.log

schemas=DEMO

schemas=HR

Perform import

Don’t forget to comment out sqlfile when performing the actual import.

Precreate tablespace

impdp_full01_tbs.par

directory=DATA_PUMP_DIR

userid="/ as sysdba"

metrics=Y

dumpfile=full_meta.dmp

logfile=impdp_full_tbs.log

include=TABLESPACE

sqlfile=tbs.sql

Check for RESIZE

grep -i resize tbs.sql

  '/oradata/HAWKA/datafile/o1_mf_users_d4gohzod_.dbf' RESIZE 16785408;

If there is no RESIZE then, consider yourself lucky.

At this point, here are the options if there are RESIZE operations:

  1. Edit tbs.sql to remove the RESIZE operations and properly size the datafiles for the tablespace.
  2. Run the revised tbs.sql.
  3. Perform the import and resize the affected datafiles afterwards.
  4. If you want to be precise, compare datafile sizes with the source and resize only the ones that differ.
  5. Create SQL scripts to create the tablespaces yourself and don’t use Data Pump.

Using DBMS_METADATA.get_ddl('TABLESPACE', &tablespace_name) does not work and will still result in the same symptoms.
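Option 1 above can be sketched as a small helper, assuming each RESIZE statement sits on its own line in the generated file (as in the grep output earlier):

```shell
# Sketch: write a copy of the Data Pump-generated SQL file with the
# datafile RESIZE statements filtered out. Review the result before running.
strip_resize() {
    grep -vi 'resize' "$1" > "$2"
}
```

Usage: `strip_resize tbs.sql tbs_noresize.sql`, then review and run tbs_noresize.sql.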

Create Guaranteed Restore Point

alter database flashback on;

select name,database_role,open_mode,flashback_on,log_mode from v$database;

create restore point test_import guarantee flashback database;

col name for a20

col scn for 999999999999

col time for a32

select name, scn, time, database_incarnation#, guarantee_flashback_database, storage_size

from v$restore_point

where guarantee_flashback_database='YES';

Precreate Role and Profile

impdp_full02_meta.par

directory=DATA_PUMP_DIR

userid="/ as sysdba"

metrics=Y

dumpfile=full_meta.dmp

logfile=impdp_full02_meta.log

include=PASSWORD_VERIFY_FUNCTION

include=PROFILE

include=ROLE

#sqlfile=full02_meta.sql

Import Schema

impdp_schema.par

directory=DATA_PUMP_DIR

userid="/ as sysdba"

metrics=Y

dumpfile=schema.dmp

logfile=impdp_schema.log

table_exists_action=REPLACE
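After each export or import run, it is worth scanning the log for errors before moving to the next step. A minimal sketch (my addition; the log file names come from the par files in this post):

```shell
# Sketch: count ORA- error lines in a Data Pump log file.
# A nonzero count means the log deserves a closer look.
count_ora_errors() {
    grep -c 'ORA-' "$1"
}
```

Usage: `count_ora_errors impdp_schema.log` from the DATA_PUMP_DIR directory.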

Flashback Database To Restore Point

When something goes wrong.

SQL>

shu immediate;

startup mount;  

RMAN>

flashback database to restore point test_import;

sql 'alter database open resetlogs';

Import Metadata

impdp_full03_meta.par

directory=DATA_PUMP_DIR

userid="/ as sysdba"

metrics=Y

dumpfile=full_meta.dmp

logfile=impdp_full03_meta.log

include=AUDIT

include=CONTEXT

include=DB_LINK

include=DIRECTORY

include=GRANT

include=PUBLIC_SYNONYM/SYNONYM

include=RESOURCE_COST

include=SYSTEM_PROCOBJACT

include=TRUSTED_DB_LINK

#sqlfile=full03_meta.sql

Grant SYS Object Privileges

# GRANTs On SYS Owned Objects Missing at Target Database (Doc ID 1911151.1)

# After Full Import All Object Privileges Granted By SYS Are Lost (Doc ID 97902.1)
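Per the MOS notes above, object privileges granted by SYS are lost after a full import. A sketch of a generator query, run on the SOURCE database, to capture them for replay on the target (my addition; the grantee subquery matches the schema list used earlier in this post):

```sql
-- Generate GRANT statements for object privileges granted by SYS,
-- to be spooled on the source and replayed on the target after import.
select 'grant '||privilege||' on "'||owner||'"."'||table_name||'" to '||grantee||';' ddl
from dba_tab_privs
where grantor = 'SYS'
and grantee in (select name from EXU8USR where name not in ('SYS','SYSTEM'))
order by grantee, table_name;
```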

Enable DB Jobs

alter system set job_queue_processes=100 scope=both sid='*';

exec dbms_scheduler.set_scheduler_attribute('SCHEDULER_DISABLED','FALSE');

exec dbms_scheduler.set_scheduler_attribute(attribute=>'DEFAULT_TIMEZONE',value=>'US/Mountain');

Drop Restore Point

drop restore point test_import;

Export and Import table data

You might ask, why is this necessary? It is not.

For a specific requirement, the data in one table needs to be refreshed at a later point in time after the import, so I might as well document the process.

expdp_table_data.par

directory=DATA_PUMP_DIR

userid="/ as sysdba"

flashback_time=systimestamp

metrics=Y

reuse_dumpfiles=Y

dumpfile=table_data.dmp

logfile=expdp_table_data.log

content=DATA_ONLY

tables=(DEMO.T)

Example:

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

     Estimated 1 TABLE_DATA objects in 2 seconds

Total estimation using BLOCKS method: 13 MB

. . exported "DEMO"."T"                                  10.40 MB  117081 rows

impdp_table_data.par

directory=DATA_PUMP_DIR

userid="/ as sysdba"

metrics=Y

dumpfile=table_data.dmp

logfile=impdp_table_data.log

table_exists_action=TRUNCATE

data_options=SKIP_CONSTRAINT_ERRORS

tables=(DEMO.T)

Example:

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

. . imported "DEMO"."T"                                  10.40 MB  117081 rows