1 of 119

GoldenGate

Rudaiba Adnin

1

2 of 119

Contents

  • Architecture Components
  • Supported Topology
  • Installation
  • Navigation
  • Commands
  • Setting up VMs
  • Configuration
    • Unidirectional Implementation
    • Bi-directional Implementation
    • Testing in Some Specific Cases

2

3 of 119

GoldenGate

  • Oracle GoldenGate is a comprehensive software package for enabling the replication of data in heterogeneous data environments
  • Source and Target tables need not have the same structure to set up replication
  • GoldenGate can also be configured to replicate DDL changes

3

4 of 119

GoldenGate Architecture Components

4

5 of 119

Golden Gate Workflow

5

Source Database

Capture (Local Extract)

Data Pump

Source Trail

Network

Collector

Remote Trail

Deliver (Replicate)

Target Database

Manager

Manager

6 of 119

Source Database

  • It’s the database to be replicated
  • It can be any Golden Gate supported database
  • Source database can be Oracle, MySQL, DB2, Sybase, Teradata, etc.

6

Source Database

7 of 119

Manager Process

  • A Manager process runs on each of the source and target servers
  • It manages all GG processes
  • The manager process must be started first; it then starts the other GG processes
  • Maintains communication ports
  • Starts/stops extract and replicat processes
  • Generates reports for processes
  • Contains control parameters
  • Manages trail files
  • It also manages reports and produces log files
  • Manager can be considered the parent process

7

8 of 119

Capture (Extract) Process

  • Also known as the extract process
  • Captures inserted, updated, or deleted data; it can even capture DDL changes
  • Only committed changes are captured
  • In Oracle, it captures data from the redo logs
  • Writes the captured data to source trail files
  • We can have multiple extract processes
  • Each extract process can work on different tables independently
  • It can capture changes from: the entire database, a schema, a table, a single column, or rows matching a WHERE clause
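As a sketch, this scoping is expressed through the TABLE clause in the extract parameter file; the table and column names below are hypothetical:

```
TABLE hr.*;                                                       -- every table in the HR schema
TABLE hr.employees, COLS (emp_id, salary), WHERE (salary > 1000); -- selected columns and rows
```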

8

Source Database

Capture (Extract) Process

9 of 119

Extract Process

  • Types of extract process
    • Change Synchronization Extract: In this method of replication, the extract process continuously captures data changes (DML and DDL) from the source database to keep the source and target databases in a consistent state. It is the sole method for implementing continuous replication between the source and target databases.
    • Initial Load Extract: For the initial load method of replication, extract captures a static set of data directly from the source tables or objects.
  • The process of copying the already existing rows from source to target is known as the initial load
  • A separate extract can be configured to perform the initial load

9

Source Database

Capture (Extract) Process

10 of 119

Classic capture vs Integrated capture

  • The main difference between the classic and integrated capture modes is that in classic capture the extract reads the Oracle database online redo log files/archive log files to capture changes, while in integrated capture the database log mining server reads the redo log files and captures changes in the form of LCRs (Logical Change Records), which are then accessed by the GoldenGate extract process.
  • Integrated capture supports more data and storage types than classic capture, and the support is more transparent. It can handle the absence of a log file caused by disk corruption, hardware failure, or operator error, assuming that additional copies of the archived and online logs are available
  • Oracle GoldenGate 21c does not support classic Extract.

10

Classic capture

Integrated capture

11 of 119

Source Trail

  • The extract process writes all captured transactions to source trail files
  • Data is written in large blocks
  • Trail files are ready for distribution
  • The size of source trail files is configurable
  • GoldenGate can remove old source trail files automatically
  • The Logdump utility is used to examine trail files

11

Capture (Extract) Process

Source Trail Files

12 of 119

Data Pump

  • Reads from the source trail files
  • Sends the records over the network and creates the target (remote) trail
  • Its default mode is pass-through (PASSTHRU), in which data is sent unchanged
  • It can also be configured to transform or filter the data
  • It is a type of secondary extract process
  • An optional GG component
  • We can have multiple data pump processes
  • The Data Pump acts as a secondary extract process: it reads records from the local trail files and delivers them to the target system's trail files through the collector

12

Data Pump

Source Trail

13 of 119

Data Pump

13

Source Database

Capture (Extract) Process

Data Pump Process

Source Trail

Target Database

Target Database

Target Database

Data Pump Process

Data Pump Process

14 of 119

What can go wrong if we do not have Data Pump process?

  • In case of network issues, the data pump waits until the network is re-established, then resumes sending trail data to the target
  • Configure multiple data pumps in a one-to-many replication, so that when one target goes down, only the corresponding data pump is affected
  • The data pump is optional; extract can send data directly. In that case, if a network issue occurs, the extract process also stops writing to the source trail files, causing missing transactions in them. With a data pump in place, a network issue does not stop extract from writing to the source trail files. The data pump can also transform or compress data to address network throughput issues.

14

Data Pump Process

Source Trail Files

15 of 119

Network

  • Source trail data is sent over a TCP/IP network
  • The collector handles incoming connection requests
  • Data can be compressed before being sent
  • Network issues lead to replication delay
  • The network is set up between the source and target servers
  • The collector is auto-started by default by the manager process and does not require configuration; it is a background process
  • The collector writes records to the remote trail

15

Data Pump Process

Collector Process

Network

16 of 119

Collector Process

  • When a connection request is sent from the source extract, the collector process on the target system scans for an available port, maps the requesting connection to it, and sends the port details back to the manager for assignment to the requesting extract process.
  • The collector receives the data sent by the source extract process and writes it to the trail files on the target system.
  • There is one collector process on the target system per extract process on the source system, i.e., a one-to-one mapping between extract and collector processes.

16

17 of 119

Delivery (Replicat) Process

  • Reads the trail on the target system
  • Applies the changes to the target database
  • Changes are applied in the same order as they were committed on the source
  • Data can be filtered or transformed using replicat as well
  • There are two types of replicat:
    • Initial load replicat
    • Change synch replicat
  • Multiple delivery processes can be configured
  • Each can be managed independently

17

Deliver (Replicate) Process

Remote Trail Files

18 of 119

Classic Replicat vs Integrated Replicat

18

Integrated Replicat

Classic Replicat

  • Classic Replicat connects directly to the target database via OCI (Oracle Call Interface) and delivers the SQL transactions.
  • Integrated Replicat, by contrast, only reads the GoldenGate trail files, constructs logical change records (LCRs), and then delivers the LCRs to the Inbound Server.
  • The Inbound Server runs inside the Oracle database; it collects the LCRs, calculates dependencies, groups transactions, and then applies the transactions to the target database. The replicat process still performs data filtering, mapping, and transformations but doesn't apply the DML and DDL operations itself.

19 of 119

Target Database

  • Similar to the source database
  • A replica of the source database
  • Any GoldenGate-supported database
  • Can be from the same vendor as the source or a different one

19

Target Database

Deliver (Replicate) Process

20 of 119

Golden Gate Workflow

20

Source Database

Capture (Local Extract)

Data Pump

Source Trail

Network

Collector

Remote Trail

Deliver (Replicate)

Target Database

Manager

Manager

21 of 119

Initial Load & Change Sync

21

22 of 119

Two Important Terms

  • Change Synchronization
    • Configure normal extract and replicat and apply changes from source to target
  • Initial Load
    • Loading existing data from source to target before starting change synchronization
    • Can be performed using GG initial load or the database's native utilities such as export/import (when source and target are from the same vendor)

22

23 of 119

23

24 of 119

24

14 rows

0 rows

Source Table

Target Table

Initial Load

  • Create a blank table on the target
  • Set up GG initial load to match the source table with the target table (here, after the initial load the target table will have 14 rows)
  • Configure change synch
  • Delete the initial load processes (initial load extract/replicat)
  • Configure change synch before the initial load
  • Do not start replicat yet
  • Start the initial load and wait until it completes
  • Then, start replicat

Change Synch

25 of 119

25

Capture (Local Extract)

Data Pump

Source Trail

Network

Collector

Remote Trail

Deliver (Replicat)

Source Table

Target Table

Initial Load

Do not start the replicat. Wait until the initial load completes before starting the replicat

26 of 119

26

Capture (Local Extract)

Data Pump

Source Trail

Network

Collector

Remote Trail

Deliver (Replicat)

Source Table

Target Table

Initial Load (takes 4 hrs)

Configure change sync before the initial load. Transactions occurring during these 4 hours will be captured while the initial load runs.

27 of 119

27

Capture (Local Extract)

Data Pump

Source Trail

Network

Collector

Remote Trail

Deliver (Replicat)

Source Table

Target Table

Start the replicat after the initial load (4 hrs). Transactions that happened during these 4 hours will then be replicated to the target.

Initial Load (takes 4 hrs)

28 of 119

28

How do you define an initial load extract? How do you tell an extract is for the initial load?

  • ADD EXTRACT INITLE, SOURCEISTABLE
  • ADD REPLICAT INITLR, SPECIALRUN
  • INFO INITLE
  • INFO INITLR
  • Start the change synch extract and data pump on the source. This will start capturing changes while we perform the initial load. Do not start replicat at this point.
  • Now start the initial load extract. This will automatically start the initial load replicat on the target
  • After the initial load completes, we can delete the initial load extract and replicat as they are no longer needed

29 of 119

29

Initial Load Extract

  • Add the initial load extract process by executing the following command. Here, SOURCEISTABLE informs Oracle GoldenGate that the extract process is the initial load extract and the entire table content needs to be captured for replication.
  • GGSCI> ADD EXTRACT INITEXT1, SOURCEISTABLE
  • Edit the extract parameter file.
    • GGSCI> EDIT PARAMS INITEXT1
    • EXTRACT INITEXT1
    • RMTHOST node2.ravin-pc.com, MGRPORT 7809
    • RMTTASK REPLICAT, GROUP INITREP1
    • USERID tiger, PASSWORD tiger123_
    • TABLE TIGER.ORDER_DTL;

Initial Load Replicat

  • Create an initial load replicat called INITREP1 on the target as shown here.
  • GGSCI> ADD REPLICAT INITREP1, SPECIALRUN
  • GGSCI> EDIT PARAMS INITREP1
  • Edit the parameter file
    • REPLICAT INITREP1
    • BULKLOAD
    • ASSUMETARGETDEFS
    • DISCARDFILE /app/ggs/fox/dirrpt/INITREP1.dsc, MEGABYTES 599, append
    • USERID fox, PASSWORD fox123_
    • MAP TIGER.*, TARGET FOX.*;

30 of 119

Supported Topology by Golden Gate

30

31 of 119

31

32 of 119

Unidirectional Replication

Any database change on the source is replicated to the target.

  • To maintain a hot standby database for failover purposes
  • For query offloading
  • For zero-downtime database upgrades or migration

32

Source Database

Target Database

33 of 119

Bidirectional Replication

Both the systems are active and receive transaction changes.

  • Active-active high availability system
  • Load sharing
  • Disaster tolerance
  • Zero-downtime upgrades

33

Primary Database

Secondary Database

34 of 119

One-to-many Replication

Source database is synchronized across multiple target databases.

  • When source data is to be distributed across multiple target databases
  • When multiple standby databases serve as backup

34

Source Database

Target DatabaseA

Target DatabaseB

35 of 119

Many-to-one Replication

Data is sent from multiple source databases to one target database.

  • Needed in data-warehousing system
  • Effectively generate reports using real-time data

35

Target Database

Source DatabaseA

Source DatabaseB

36 of 119

Peer-to-peer Replication

Changes made on one site are propagated to all other sites.

  • Common among industries globally in many business scenarios
  • Localized data access
  • Load sharing
  • High availability
  • Disaster Tolerance

36

Database A

Database B

Database D

Database C

37 of 119

Cascading Replication

Oracle GoldenGate propagates data changes from the source database to a second database, and then on to a third database.

  • The target system does not have a direct connection to the source
  • Limits network activity
  • Servers are geographically far apart

37

Source DatabaseA

Second Database

Target DatabaseA

38 of 119

GoldenGate Pre-installation Tasks

38

39 of 119

Design Consideration and Requirements

  • Oracle GoldenGate supports up to 5,000 extracts and replicats.
  • Each extract needs a minimum of 25 MB to 55 MB for processing
  • Only about 400 MB of disk space is needed for the GoldenGate binaries
  • A good rule of thumb is to have at least 2 GB of free disk space for trail files
  • The default manager port is 7809, but it is good to have a range of ports available
  • You need read-write permissions on the GoldenGate repositories
  • If you have more than one database instance on the same server, then you need to set ORACLE_HOME and ORACLE_SID for each GoldenGate process using SETENV statements in the parameter file of each Oracle GoldenGate process.
  • Oracle GoldenGate processes use the database shared libraries. You will need to set the shared library variable for this purpose. Set LD_LIBRARY_PATH if you are using HP-UX, Linux, or Sun Solaris OS.

39

40 of 119

Design Consideration and Requirements

  • Oracle GoldenGate uses redo logs to capture transactions for replication. For this purpose, you need supplemental logging enabled on your source database so it can add additional undo information to the redo logs, which can be used to locate rows when ROWID information is unavailable.
  • Enabling supplemental logging in Oracle databases creates a supplemental log group for each table; this log group contains the list of columns for which supplemental logging will be captured.
  • Setting up force logging is optional; you can set only minimal supplemental logging and add table/schema-level supplemental logging from the GoldenGate Software Command Interface (GGSCI). This is especially recommended when only a few tables/schemas are to be replicated, as it reduces logging overhead for the entire database.
  • Supported data-types
  • Supported operations

40

41 of 119

Replication Setup

  • Using a well-designed and intuitive naming convention while designing the overall architecture of your GoldenGate real-time replication configuration will serve multiple benefits.
  • Know the type of replication you need.
  • Are you planning to replicate only data changes or are structural changes also involved?
  • Know the various capture process groups into which you will divide the tables to be replicated. This grouping can be based on application-specific factors such as transactional load, criticality of the tables, and so on. Keep in mind that if one channel goes down for any reason, all the tables in that channel are affected. Also, loading all the tables, or a large number of tables, into the same channel may delay replication under heavy transactional load.

41

42 of 119

Database Privileges for GoldenGate Users

42

43 of 119

Install Golden Gate

43

44 of 119

Installing Golden Gate

44

Create directory

  • mkdir -p /u01/GG_SOFT
  • mkdir -p /u01/app/GG_HOME

Unzip the downloaded zip file in /u01/GG_SOFT

  • unzip 213000_fbo_ggs_Linux_x64_Oracle_shiphome.zip

Update the bashrc file (Setting up environment variables) and run

export GG_HOME=/u01/app/GG_HOME

export PATH=$GG_HOME:$PATH

export LD_LIBRARY_PATH=$GG_HOME/lib:$LD_LIBRARY_PATH

  • . .bashrc

45 of 119

45

Go to location

  • cd /u01/GG_SOFT/fbo_ggs_Linux_x64_Oracle_shiphome/ Disk1/response/

Update response file oggcore.rsp for silent installation

  • INSTALL_OPTION=ORA21c
  • SOFTWARE_LOCATION=/u01/app/GG_HOME
  • START_MANAGER=TRUE
  • MANAGER_PORT=7809
  • DATABASE_LOCATION=/u01/app/oracle/product/version/db_1
  • INVENTORY_LOCATION=/u01/app/oraInventory
  • UNIX_GROUP_NAME=oinstall

RUN the installer

  • Go to location /u01/GG_SOFT/
  • Run the cmd
    • ./runInstaller -silent -showProgress -waitforcompletion -responseFile /u01/GG_SOFT/fbo_ggs_Linux_x64_Oracle_shiphome/Disk1/response/oggcore.rsp

46 of 119

Installing Golden Gate

46

47 of 119

Create the Golden Gate Schema Owner

A new user should be created to own the GoldenGate database objects in both the source and target servers.

  • This user performs maintenance on the Oracle GoldenGate database objects. On each server create the GoldenGate schema owner gg01
  • On the source server create a new tablespace for the GoldenGate objects. Ensure that AUTOEXTEND is enabled. Set the new tablespace as the default for the GoldenGate user

On each server set the GGSCHEMA in the global parameter file.

  • Use ALLOWNONVALIDATEDKEYS to allow Extract, Replicat, and GGSCI commands to use a non-validated primary key or an invalid key as a unique identifier. Note that if a key proves to be non-valid and the table on which it is defined contains more than one record with the same key value, Oracle GoldenGate might choose the wrong target row to update

47

[oracle@source]$ sqlplus / as sysdba

SQL> CREATE USER gg01 IDENTIFIED BY gg01;

SQL> GRANT CONNECT, RESOURCE, DBA TO gg01;

[oracle@source]$ ggsci

GGSCI (source) 1> EDIT PARAMS ./GLOBALS
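A minimal GLOBALS file for this setup might look as follows, assuming the gg01 schema owner created above (include ALLOWNONVALIDATEDKEYS only if the risk described above is acceptable):

```
GGSCHEMA gg01
ALLOWNONVALIDATEDKEYS
```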

48 of 119

Create GoldenGate Subdirectories

On each server create subdirectories for GoldenGate using the CREATE SUBDIRS command.

48

[oracle@source ]$ ggsci

GGSCI (source) 1> CREATE SUBDIRS

[oracle@target ]$ ggsci

GGSCI (target) 1> CREATE SUBDIRS

49 of 119

Enable GoldenGate Replication

Connect to database and enable goldengate replication.

[oracle@source]$ sqlplus / as sysdba

  • SQL> Alter system set enable_goldengate_replication=True;
  • SQL> Show parameter enable_goldengate_replication

49

50 of 119

Run Role Setup Script in the Source Server

  • On the source server run the Role setup script. Specify the GoldenGate schema name when prompted.
  • On the source server grant GGS_GGSUSER_ROLE to the GoldenGate user

50

[oracle@source]$ sqlplus / as sysdba

  • SQL> @role_setup
  • SQL> GRANT GGS_GGSUSER_ROLE TO gg01;

51 of 119

Navigating Golden Gate

51

52 of 119

GG Process Status

52

Status

Meaning

STARTING

Process just started, not ready yet

RUNNING

Active

STOPPED

Process is normally stopped

ABENDED

Process is terminated, or abnormally ended

53 of 119

GG Process Status

53

54 of 119

GG Process Status

54

  • Time since checkpoint is the time elapsed since the checkpoint file was last updated.
  • By default the checkpoint file of each process is updated every 10 seconds while the process is in running status.
  • So under normal circumstances this value will not exceed 10 seconds.
  • When a process is stopped or abended, you will notice that this timing continuously increases, as the checkpoint files are no longer being updated.
  • Lag at checkpoint is the lag measured when the checkpoint file was last updated.
  • Since the checkpoint file is updated every 10 seconds, Lag at checkpoint is the lag the process had at the moment of its most recent checkpoint update.

55 of 119

GG Process Status

55

  • Only the PORT parameter is mandatory for getting your manager process running.
  • PORT 7809 tells GoldenGate to communicate via TCP/IP port 7809. You can also set up a dynamic port list by adding: DYNAMICPORTLIST 7809-7879
  • LAGINFOMINUTES specifies how often lag information is reported to the error log file ggserr.log.
  • AUTOSTART automatically starts the specified extract/replicat processes when the manager starts. Here we have specified that all processes (channels) whose names start with E start with the manager. This is especially useful during system reboots.
    • PORT 7809
    • AUTOSTART E*
    • LAGINFOMINUTES 5
    • LAGCRITICALMINUTES 5

56 of 119

info <extract_name>, showch

56

If you run "info <extract_name>, showch", it will show you the recovery checkpoint and the current checkpoint for the redo log reading position. That output contains SCN and timestamp information. The recovery checkpoint tracks the oldest open transaction Extract is reading from. The current checkpoint tracks the SCN of the last committed transaction Extract finished processing before stopping.

RBA (Relative Byte Address)

The relative byte address (RBA) is the location within the trail file that indicates the current transaction. It is basically a marker within the trail file identifying the location of the transaction.

57 of 119

GG Directory Structure

57

Directory

Contained Files

dirrpm

Parameter files

dirpcs

Process status files

dirchk

Checkpoint files

dirdef

Database definition files

dirdat

Stores the trail files

dirtmp

Temporary files

dirrpt

Report files/ Process specific alert logs

[oracle@source]$ ggsci

GGSCI (source) 1> CREATE SUBDIRS

58 of 119

dirrpt

58

  • If a process abends or there is any issue starting it, view these report files. They work as process-specific alert logs.
  • Processname.rpt (report files)
  • Processname.dsc (discard files; for example, if some rows do not meet a specific condition, those rows are discarded and kept in these discard files)
  • Older reports are aged with reverse numbering as new reports are generated.
  • Should be viewed from the GGSCI prompt

[oracle@source]$ ggsci

GGSCI (source) 1> View report EX1

  • The last lines of the report are the most important.

59 of 119

dirrpt

Abended alert in EX1.rpt

Running status in EX1.rpt

59

60 of 119

dirrpt

Abended alert in dp1.rpt

60

61 of 119

GoldenGate Commands

61

62 of 119

Data Transformation Using Column Conversion Functions (Used with extract and replicat parameter files)

62

  • Condition Analysis Functions- IF/CASE/EVAL
  • Date Functions- DATE/DATENOW/DATEDIFF
  • Math Functions-COMPUTE
  • String Functions- STRCAT/STRCMP
  • Miscellaneous Functions- BEFORE/AFTER/BEFOREAFTER
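To illustrate, a MAP statement in a replicat parameter file could combine several of these functions; the table and column names here are hypothetical:

```
MAP hr.emp, TARGET hr.emp_hist,
COLMAP (USEDEFAULTS,
  full_name = @STRCAT(first_name, ' ', last_name),
  pay_band  = @IF (@COMPUTE(salary * 12) > 100000, 'HIGH', 'LOW'));
```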

63 of 119

GGSCI COMMANDS

63

  • ! - Executes the previous (most recent) command.
  • FC - Displays and edits the previously executed command and re-executes it.
  • INFO ALL - Displays status and lag information for all processes (manager, extract, replicat).
  • OBEY - Runs a batch of GoldenGate commands from a file.
  • SHELL - Executes shell commands from the GGSCI prompt.
  • VIEW GGSEVT - Displays the GoldenGate error log (ggserr.log).

64 of 119

GGSCI COMMANDS

64

  • Manager Process Commands-INFO MANAGER, SEND MANAGER, START MANAGER, STOP MANAGER, STATUS MANAGER
  • Extract and Replicat Commands- ADD EXTRACT, ALTER EXTRACT, INFO EXTRACT, CLEANUP EXTRACT, DELETE EXTRACT, KILL EXTRACT, ADD REPLICAT, ALTER REPLICAT, INFO REPLICAT, CLEANUP REPLICAT , DELETE REPLICAT, KILL REPLICAT
  • Trail Files- ADD EXTRAIL, ALTER EXTRAIL, DELETE EXTRAIL, INFO EXTRAIL, ADD RMTTRAIL, ALTER RMTTRAIL, DELETE RMTTRAIL, INFO RMTTRAIL
  • TRANDATA commands- ADD SCHEMATRANDATA, ADD TRANDATA, DELETE SCHEMATRANDATA, DELETE TRANDATA, INFO SCHEMATRANDATA, INFO TRANDATA
  • Checkpointtable commands-ADD CHECKPOINTTABLE, CLEANUP CHECKPOINTTABLE, DELETE CHECKPOINTTABLE, INFO CHECKPOINTTABLE

65 of 119

Native non-GGSCI COMMANDS

65

DEFGEN

  • If your source and target table structures differ, you need to generate a definitions file at the source using the DEFGEN utility and use this file on the target

LOGDUMP

  • The LOGDUMP utility allows you to view transactions and records within the trail files and gather statistical information. You can also save the data within the trail files

66 of 119

Native non-GGSCI COMMANDS

66

Trail File Structure

67 of 119

Configure GoldenGate

67

68 of 119

Prepare Databases

Enable Forced Logging in source database

  • Forced logging should be enabled on the source database to ensure that all changes are written to the redo logs.
  • The documentation recommends performing a redo log switch to ensure that the change is applied in all subsequent redo.

Enable Minimal Supplemental Logging so that rows updated on Source can be uniquely identified at Target and updated

  • If minimal supplemental logging is not enabled, the extract process fails to start
  • Minimal supplemental logging only needs to be configured on the source database. It may be prudent to enable it on the target database as well.

68

[oracle@source]$ sqlplus / as sysdba

SQL> SELECT force_logging FROM v$database;

SQL> ALTER DATABASE FORCE LOGGING;

SQL> ALTER SYSTEM SWITCH LOGFILE;

[oracle@source]$ sqlplus / as sysdba

SQL> SELECT supplemental_log_data_min FROM v$database;

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

69 of 119

Prepare Test Environment

Create Test User

  • In both databases create a user (schema) called HR.
  • Grant permissions to the new user.
  • Create test tables DEPT1 and DEPT2
  • Insert some rows

69

[oracle@source]$ sqlplus / as sysdba

SQL> CREATE USER hr IDENTIFIED BY hr;

SQL> GRANT CONNECT, RESOURCE, DBA TO hr;
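The test tables themselves can be anything simple; a hypothetical structure for DEPT1 and DEPT2 might be:

```
SQL> CREATE TABLE hr.dept1 (dept_id NUMBER PRIMARY KEY, dept_name VARCHAR2(30));
SQL> CREATE TABLE hr.dept2 (dept_id NUMBER PRIMARY KEY, dept_name VARCHAR2(30));
SQL> INSERT INTO hr.dept1 VALUES (10, 'FINANCE');
SQL> INSERT INTO hr.dept2 VALUES (20, 'SALES');
SQL> COMMIT;
```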

70 of 119

Configure Checkpoint Table

70

71 of 119

Configure VMs

71

72 of 119

Prepare VMs for Test Purpose

  • In the network settings, select bridged adapter
  • Update the MAC address (set a different MAC address for each VM so that each VM gets a different IP)
  • Update /etc/hosts with the IP address of that particular VM
  • Update /etc/hostname with the name you want to give that VM
  • Ping between the two machines to check that they can communicate

72

73 of 119

Configure GoldenGate

Unidirectional (source to target)

73

74 of 119

74

EX1

DP1

REP1

75 of 119

75

76 of 119

Configure Process Parameter files in Source server

76

77 of 119

Configure Supplemental Logging for Replicated Tables

On the source server configure supplemental logging for all tables that will be replicated.

Here, we will replicate all tables of HR schema.

Supplemental logging can be configured by any user that has privileges to modify the underlying database table.

[oracle@source]$ ggsci

GGSCI (source) 1> DBLOGIN USERID hr@//localhost:1521, PASSWORD oracle

GGSCI (source) 2> ADD TRANDATA hr.*

77

78 of 119

Configure Parameter Files in source server

  • Manager Process
  • Extract Process
  • Datapump Process

78

79 of 119

Configure Manager Parameters

On both servers configure the MGR parameters:

[oracle@source]$ ggsci

GGSCI (source) 1> EDIT PARAMS MGR

The above command creates the file /u01/app/GG_HOME/dirprm/mgr.prm

Add the following parameters to the parameter file:

PORT 7809

DYNAMICPORTLIST 7810-7820

  • Save and close the parameter file

79

80 of 119

Configure Extract Parameters

On the source server, create the parameter file for Extract ex1:

[oracle@source]$ ggsci

GGSCI (source) 1> EDIT PARAMS ex1

The above command creates the file /u01/app/GG_HOME/dirprm/ex1.prm

Add the following parameters to the new file:

  • Process name
  • GoldenGate admin user details
  • EXTTRAIL file location
  • Tables to replicate
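Putting those items together, a minimal ex1.prm might look like the following sketch (the credentials assume the gg01 admin user created earlier; adjust to your environment):

```
EXTRACT ex1
USERID gg01@//localhost:1521, PASSWORD gg01
EXTTRAIL /u01/app/GG_HOME/dirdat/ex
TABLE hr.*;
```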

80

81 of 119

Configure Datapump Parameters

On the source server create the parameter file for Data Pump process dp1:

[oracle@source]$ ggsci

GGSCI (source) 1> EDIT PARAMS dp1

The above command creates the file /u01/app/GG_HOME/dirprm/dp1.prm

Add the following parameters to the new file:

  • Process name
  • GoldenGate admin user details
  • Remote host IP address and manager port
  • Remote trail file location
  • Tables to replicate
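A minimal dp1.prm might look like the following sketch; the host name "target" is a placeholder for your target server, and the credentials assume the gg01 admin user from earlier:

```
EXTRACT dp1
USERID gg01@//localhost:1521, PASSWORD gg01
RMTHOST target, MGRPORT 7809
RMTTRAIL /u01/app/GG_HOME/dirdat/rt
TABLE hr.*;
```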

81

82 of 119

Add Processes in Source server

Add the Extract Process: On the source server add the Extract process (ex1)

GGSCI (source) 1> ADD EXTRACT ex1, TRANLOG, BEGIN NOW

Add the Extract Trail: On the source server add the Extract trail (/u01/app/GG_HOME/dirdat/ex)

GGSCI (source) 2>ADD EXTTRAIL /u01/app/GG_HOME/dirdat/ex, EXTRACT ex1

Add the Data Pump Process: On the source server add the Data Pump process (dp1)

GGSCI (source) 3> ADD EXTRACT dp1, EXTTRAILSOURCE /u01/app/GG_HOME/dirdat/ex

Add the Data Pump Trail: On the source server add the Data Pump trail (/u01/app/GG_HOME/dirdat/rt). This trail is created on the target server. However, the name is required in order to set up the Data Pump process on the source server.

GGSCI (source) 4> ADD RMTTRAIL /u01/app/GG_HOME/dirdat/rt, EXTRACT dp1

82

83 of 119

Add Processes in Source server

83

84 of 119

Configure Process Parameter files in Target server

84

85 of 119

Configure Checkpoint Table

  • The checkpoint table should be created in the target database.
  • On the target server log in as the GG01 user and add the checkpoint table.
  • The name of the checkpoint table must be added to the GLOBALS file on the target server.
  • If an unexpected failure happens, the checkpoint file or database table ensures that the extract and replicat processes restart from the point of failure, avoiding re-capture and re-apply of transactions.

85

[oracle@target]$ ggsci

GGSCI (target) 1> DBLOGIN USERID gg01@//localhost:1521, PASSWORD gg01

GGSCI (target) 1> ADD CHECKPOINTTABLE gg01.checkpointtable

GGSCI (target) 1> Edit params ./GLOBALS
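The GLOBALS entry that registers the checkpoint table is a single line; a sketch matching the table added above:

```
CHECKPOINTTABLE gg01.checkpointtable
```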

86 of 119

Configure Parameter Files in Target Server

  • Manager Process
  • Replication Process

86

87 of 119

Configure Replicat Parameters

  • On the target, create the parameter file for replication process rep1:
    • [oracle@target]$ ggsci
    • GGSCI (target) 1> EDIT PARAMS rep1
  • The above command creates the file /u01/app/GG_HOME/dirprm/rep1.prm
  • The ASSUMETARGETDEFS parameter directs Oracle GoldenGate to assume that the data definitions of the source and target objects are identical, and to refer to the target definitions when metadata is needed for the source data.
  • Add the following parameters to the new file:
    • Process name
    • Goldengate admin details
    • If table structure same or not
    • Source table to map, target table to map
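A minimal rep1.prm might look like the following sketch (the credentials assume the gg01 admin user from earlier; the discard file path and size are illustrative):

```
REPLICAT rep1
USERID gg01@//localhost:1521, PASSWORD gg01
ASSUMETARGETDEFS
DISCARDFILE /u01/app/GG_HOME/dirrpt/rep1.dsc, APPEND, MEGABYTES 100
MAP hr.*, TARGET hr.*;
```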

87

88 of 119

Add Replicat

On the target server add the Replication process (rep1)

[oracle@target]$ ggsci

GGSCI (target) 1> ADD REPLICAT rep1, EXTTRAIL /u01/app/GG_HOME/dirdat/rt

88

89 of 119

Start Processes on Both Servers

First start the GoldenGate manager and replicat processes on the target followed by the GoldenGate manager and extract processes on the source system.

89

90 of 119

Source

90

On the source server, start the GoldenGate manager:

[oracle@source]$ ggsci

GGSCI (source) 1> START MANAGER

GGSCI (source) 2> START EXTRACT ex1

GGSCI (source) 10> INFO EXTRACT ex1

On the source server, start the Data Pump (dp1):

GGSCI (source) 3> START EXTRACT dp1

GGSCI (source) 5> INFO EXTRACT dp1

91 of 119

Target

91

On the target server,

start the Replicat process (rep1):

[oracle@target]$ ggsci

GGSCI (target) 1> START REPLICAT rep1

GGSCI (target) 2> INFO REPLICAT rep1

92 of 119

Configure GoldenGate

Bidirectional

92

93 of 119

93

EXA

DPA

REPB

REPA

EXB

DPB

94 of 119

94

95 of 119

95

Some key considerations before setting up bidirectional replication using Oracle GoldenGate.

  • GoldenGate can replicate sequence values, but bidirectional sequence replication is not supported.
  • Both systems in an active-active configuration should use the same time zone, so that timestamp-based conflicts can be resolved.
  • The TRUNCATE statement cannot be configured bidirectionally. It can be configured to originate from one machine to the other, but not vice versa: all truncates should originate from one machine only and be replicated to the other.
  • If triggers on tables generate DML operations, modify the triggers to handle local DML and replicated DML separately, in order to avoid conflicts between local DML and replicated triggered DML. Replicat in integrated mode allows you to handle triggers.
  • Do not replicate database-generated values such as sequences. To avoid conflicts, you can use even sequence values on one database and odd values on the other.
  • Tables should have a primary or unique key to identify rows; this is needed for resolving conflicts.
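The even/odd sequence approach can be sketched in SQL; the sequence name is hypothetical:

```sql
-- On serverA: generate only odd values
CREATE SEQUENCE hr.emp_seq START WITH 1 INCREMENT BY 2;

-- On serverB: generate only even values
CREATE SEQUENCE hr.emp_seq START WITH 2 INCREMENT BY 2;
```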

96 of 119

Add trandata in both serverA and serverB

Add checkpoint table in both serverA and serverB


  • The checkpoint table should be created in both databases. Log in as the GG01 user and add the checkpoint table.
  • The name of the checkpoint table must be added to the GLOBALS file on both servers.
  • On the source server, configure supplemental logging for all tables that will be replicated.
  • Here, we will replicate all tables of the HR schema.
  • Supplemental logging can be configured by any user that has privileges to modify the underlying database table.
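As a rough sketch, these steps map to GGSCI commands like the following, run on both servers; the checkpoint table name is an assumed example:

```
GGSCI> DBLOGIN USERID gg01, PASSWORD <password>
GGSCI> ADD CHECKPOINTTABLE gg01.checkpoint
GGSCI> EDIT PARAMS ./GLOBALS
-- add the line: CHECKPOINTTABLE gg01.checkpoint
GGSCI> ADD TRANDATA hr.*
```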

97 of 119

Configure Parameter Files in ServerA


98 of 119

Configure Parameter Files in ServerA

  • Manager Process
  • Extract Process for serverA
  • Datapump Process for serverA
  • Replicat process for serverB


99 of 119

Configure Parameter Files in ServerA

Use the HANDLECOLLISIONS parameter to control whether or not Replicat tries to resolve duplicate-record and missing-record errors when applying SQL on the target.

When Oracle GoldenGate applies the replicated changes, HANDLECOLLISIONS provides Replicat with error-handling logic for these collisions.

The EXCLUDEUSER (or EXCLUDEUSERID) option of the TRANLOGOPTIONS parameter explicitly prevents re-capture of the transactions applied by the specified user.

This requires using a dedicated Oracle GoldenGate user, so that transactions applied by this user are eliminated, avoiding sending the transaction back to the originating system.
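A minimal sketch of how these parameters fit together in the serverA parameter files, assuming a dedicated GoldenGate user gg01 and illustrative trail and alias names:

```
-- Extract EXA: skip transactions applied by the GoldenGate user
EXTRACT exa
USERIDALIAS gg_serverA
TRANLOGOPTIONS EXCLUDEUSER gg01
EXTTRAIL ./dirdat/ea
TABLE hr.*;

-- Replicat REPB: resolve duplicate- and missing-record collisions
REPLICAT repb
USERIDALIAS gg_serverA
ASSUMETARGETDEFS
HANDLECOLLISIONS
MAP hr.*, TARGET hr.*;
```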


100 of 119

Add processes

  • Extract process EXA
  • Datapump process DPA
  • Replicat process REPB


101 of 119

Configure Parameter Files in ServerB


102 of 119

Configure Parameter Files in ServerB

  1. Manager Process
  2. Extract Process for serverB
  3. Datapump Process for serverB
  4. Replicat process for serverA


103 of 119

Add processes

  • Extract process EXB
  • Datapump process DPB
  • Replicat process REPA


104 of 119

Testing in Some Specific Cases


105 of 119

Suggestions for Testing


  • Test DELETE and UPDATE operations that use a WHERE clause
  • Character Set Testing
    • Add a CLOB field and insert Bangla/multibyte character values
  • Add a BLOB data field and check replication of an image/file

106 of 119

Checking UPDATE and DELETE operations


107 of 119


108 of 119

Checking CLOB datatype


109 of 119


110 of 119

Instructions


  • Create Table on both servers:
    • create table test2 (
        id int not null primary key,
        clobtest CLOB);

  • Insert Values on both servers:
    • Insert into test2 values (1,'প্রভা');
  • Update the GoldenGate parameter files (update the table name)
  • Add trandata in the source server for table test2:
    • ADD TRANDATA hr.test2
  • Start the GoldenGate processes
  • Now, check the replication
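To confirm the replication, a query like this on the other server should show the multibyte CLOB value intact:

```sql
-- Run on the receiving server after the processes are started
SELECT id, clobtest FROM test2 ORDER BY id;
```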

111 of 119

Checking BLOB datatype


112 of 119


113 of 119

Instructions


  • Create Table on both servers:
    • CREATE TABLE tab1 (
        id int not null primary key,
        blob_data BLOB);

  • Create a directory object (BLOB_DIR) for the BLOB files to insert:
    • CREATE OR REPLACE DIRECTORY BLOB_DIR AS '/u01/userhome/oracle/Desktop';
    • GRANT all ON DIRECTORY BLOB_DIR TO public;
  • Insert Values on both servers:
    • Using PL/SQL (see next slide)
  • Update the GoldenGate parameter files (update the table name)
  • Add trandata in the source server for table tab1:
    • ADD TRANDATA hr.tab1
  • Start the GoldenGate processes
  • Now, check the replication by exporting the BLOB file using PL/SQL (see next slide)

114 of 119

Insert files as BLOB datatype

DECLARE
  l_bfile        BFILE;
  l_blob         BLOB;
  l_dest_offset  INTEGER := 1;
  l_src_offset   INTEGER := 1;
BEGIN
  -- Create the row with an empty BLOB locator, then load the file into it
  INSERT INTO tab1 (id, blob_data)
  VALUES (2, empty_blob())
  RETURNING blob_data INTO l_blob;

  l_bfile := BFILENAME('BLOB_DIR', 'sunflower.jpg');
  DBMS_LOB.fileopen(l_bfile, DBMS_LOB.file_readonly);
  DBMS_LOB.loadblobfromfile (
    dest_lob    => l_blob,
    src_bfile   => l_bfile,
    amount      => DBMS_LOB.lobmaxsize,
    dest_offset => l_dest_offset,
    src_offset  => l_src_offset);
  DBMS_LOB.fileclose(l_bfile);
END;
/

Export BLOB file to other files

DECLARE
  l_file      UTL_FILE.FILE_TYPE;
  l_buffer    RAW(32767);
  l_amount    BINARY_INTEGER := 32767;
  l_pos       INTEGER := 1;
  l_blob      BLOB;
  l_blob_len  INTEGER;
BEGIN
  SELECT blob_data
  INTO   l_blob
  FROM   tab1
  WHERE  id = 1;

  l_blob_len := DBMS_LOB.getlength(l_blob);

  -- Write the BLOB to the OS file cat2.jpg in chunks of up to 32767 bytes
  l_file := UTL_FILE.fopen('BLOB_DIR', 'cat2.jpg', 'wb', 32767);
  WHILE l_pos <= l_blob_len LOOP
    DBMS_LOB.read(l_blob, l_amount, l_pos, l_buffer);
    UTL_FILE.put_raw(l_file, l_buffer, TRUE);
    l_pos := l_pos + l_amount;
  END LOOP;
  UTL_FILE.fclose(l_file);
EXCEPTION
  WHEN OTHERS THEN
    IF UTL_FILE.is_open(l_file) THEN
      UTL_FILE.fclose(l_file);
    END IF;
    RAISE;
END;
/

115 of 119

Supported Datatypes


116 of 119

Data types, objects and operations that are supported by Oracle GoldenGate

There are different types of replication support:

  • Support by Capturing from Redo
  • Procedural Replication Support


  • Most data types are supported (SUPPORT_MODE=FULL), which means that Oracle GoldenGate captures the changes from the redo.
  • In some unique cases the information cannot be captured from the redo, but it can be fetched with a connection to the database (SUPPORT_MODE=ID KEY).
  • From Oracle GoldenGate 21c (21.1.0) onward, DML on tables whose DBA_GOLDENGATE_SUPPORT_MODE.SUPPORT_MODE is NONE is automatically skipped.
  • Data types that are not readable in the redo logs must be fetched by the Extract process during its processing. The method for fetching these records is controlled by the FETCHOPTIONS parameter.
  • Other changes can be replicated with procedural replication (SUPPORT_MODE=PLSQL), which requires additional parameter settings for Extract.
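The support mode for specific tables can be checked directly in the database, for example:

```sql
-- Which replication support mode applies to each table in the HR schema
SELECT owner, object_name, support_mode
FROM   dba_goldengate_support_mode
WHERE  owner = 'HR';
```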

117 of 119

Data types, objects and operations that are supported by Oracle GoldenGate


  • Oracle GoldenGate supports tables with these partitioning attributes:
    • Range partitioning
    • Hash Partitioning
    • Interval Partitioning
    • Composite Partitioning
    • Virtual Column-Based Partitioning
    • Reference Partitioning
    • List Partitioning
  • Oracle GoldenGate supports tables with virtual columns, but does not capture change data for these columns or apply change data to them: the database does not write virtual columns to the transaction log, and Oracle Database does not permit DML on virtual columns. For the same reason, initial-load data cannot be applied to a virtual column. You can, however, map the data from virtual columns to non-virtual target columns.
  • Oracle GoldenGate does not consider a unique key or index that contains virtual columns when identifying rows.
  • Oracle GoldenGate supports Transparent Data Encryption (TDE).

118 of 119

Preparing the database for Oracle GoldenGate

  • Configuring Connections for Extract and Replicat Processes
  • Configuring Logging Properties


  • Capture and integrated Replicat require a dedicated server connection in the tnsnames.ora file.
  • Oracle GoldenGate can connect to a database instance without using the network listener with a Bequeath connect descriptor added in tnsnames.ora.
  • Oracle GoldenGate relies on the redo logs to capture the data that it needs to replicate source transactions. The Oracle redo logs on the source system must be configured properly before starting Oracle GoldenGate processing.
  • Oracle strongly recommends putting the Oracle source database into forced logging mode. This ensures that no source data in the Extract configuration gets missed.
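The logging prerequisites above correspond to standard SQL commands such as these, run by a privileged user on the source database:

```sql
-- Put the source database into forced logging mode
ALTER DATABASE FORCE LOGGING;
-- Enable minimal supplemental logging, which Extract requires
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- Switch the logfile so the change applies to new redo
ALTER SYSTEM SWITCH LOGFILE;
```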

119 of 119

Thank You
