
Database migration checklist with Q Replication


This document replaces the Database migration checklist with Q Replication page on the IBM Data Replication Community Wiki.

Preparation

  • Develop migration scenario plan

    • Gather the latest customer environment info

      • Platform information, db2look output, db2set registry settings, database manager (dbm cfg) and database (db cfg) configuration parameters, and so on

    • Decide on the basic replication configuration and any modifications to the standard deployment, for example:

      • Remote capture to reduce CPU usage on the source

      • Using a remote MQ server

  • Analyze the db2look output

    • Identify replication source tables.

    • Determine if any of the following apply:

      • If any tables lack a primary key or unique index

        • Create a unique index at the target if possible. If that is not possible, exclude those tables from replication and export/load their data from the old to the new database on switching day (see the catalog query sketch after this list)

      • If any trigger(s) are defined on the source tables

        • Drop the triggers at the target and re-create them on switching day

      • If any identity column or sequence is defined

        • ALTER TABLE or ALTER SEQUENCE statements (restarting at the next value) might be needed on switching day if the application requires continuous numbering

      • If any immediate or deferred MQTs (materialized query tables) are defined

        • Exclude these tables from replication and run REFRESH TABLE on switching day, once the base tables are ready at the target

      • If any generated always columns are defined

        • This affects the fallback procedure, because replication cannot apply changes into a GENERATED ALWAYS column

  • MQ planning; determine:

    • Queue manager MAXMSGL: determine the largest row size that will be sent across the queues. The default of 4 MB is normally sufficient (see the catalog query sketch after this list).

    • Disk space for largest transaction being replicated

    • Queue space for spilling requirements while target tables are being loaded
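
The catalog queries below are a minimal sketch of how the key and row-size checks above can be approached. The schema filter is an illustrative assumption, and the row-width figure is only a rough upper bound (LOB data, compression, and MQ message headers are not accounted for).

    -- Source tables with no primary key or unique index: candidates to
    -- index at the target or to exclude from replication
    SELECT t.TABSCHEMA, t.TABNAME
    FROM SYSCAT.TABLES t
    WHERE t.TYPE = 'T'
      AND t.TABSCHEMA NOT LIKE 'SYS%'
      AND NOT EXISTS (SELECT 1
                      FROM SYSCAT.INDEXES i
                      WHERE i.TABSCHEMA = t.TABSCHEMA
                        AND i.TABNAME   = t.TABNAME
                        AND i.UNIQUERULE IN ('P', 'U'));

    -- Approximate maximum row width per table, to compare against the
    -- queue manager and queue MAXMSGL settings (default 4 MB)
    SELECT TABSCHEMA, TABNAME, SUM(LENGTH) AS APPROX_MAX_ROW_BYTES
    FROM SYSCAT.COLUMNS
    WHERE TABSCHEMA NOT LIKE 'SYS%'
    GROUP BY TABSCHEMA, TABNAME
    ORDER BY APPROX_MAX_ROW_BYTES DESC;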

Replication design

  • Evaluate common Replication settings

  • Decide how many replication queue maps are required between the source and target

    • Note that only subscriptions applied through the same queue map are transactionally consistent

  • Choose a HAS_LOADPHASE value (the query sketch after this list shows how to review these subscription options in the control tables):

    • 'I' - Replication loads tables

    • 'E' - User loads tables and notifies replication

    • 'N' – Tables are in sync before replication starts

  • Choose a CONFLICT_ACTION value:

    • 'I' - Ignore conflicts (e.g. an UPDATE for a row that doesn't exist)

    • 'F' - Force changes to target table (e.g. turn an UPDATE into an INSERT)

  • Choose an ERROR_ACTION value, which is applied when an unrecoverable error occurs:

    • 'Q' - Stop processing the queue

    • 'S' - Stop the program

    • 'D' - Stop the subscription and ignore future changes (will require table refresh later)

  • Decide whether reverse replication is needed for the fallback plan

    • For migration scenarios, two unidirectional subscriptions normally suffice, provided updates do not occur on both sides at the same time

  • Plan exception table pruning

    • Decide how to handle conflicts reported during the period that is covered both by roll-forward recovery and by Q Capture

  • Choose a monitoring tool for the load phase to check that the load completes successfully

  • Develop a replication definition script (for example, to generate ASNCLP scripts by reading SYSIBM.SYSTABLES and SYSIBM.SYSCOLUMNS)

    • Or use ASNCLP's wildcard statements to select groups of tables.

  • Customize the sample MQ scripts for the user environment.
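
Once the Q subscriptions have been defined, the chosen options can be double-checked directly in the Q Apply control tables. The query below is a sketch that assumes the default control-table schema ASN; adjust the schema name to match the environment.

    -- Review load, conflict, and error options per Q subscription
    -- (run at the Q Apply server)
    SELECT SUBNAME, RECVQ, HAS_LOADPHASE, CONFLICT_ACTION, ERROR_ACTION, STATE
    FROM ASN.IBMQREP_TARGETS
    ORDER BY RECVQ, SUBNAME;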

Pre-setup / evaluation

  • Consider any special DB2 requirements (such as large table space usage in the new database)

  • Install Replication Server if it is not already installed

  • Install MQ or the MQ client

  • Define MQ objects

  • Set up test Q Subscriptions

  • First synchronization: bring the source and target DDL into sync, using FlashCopy or backup and restore

  • Start Q Capture and Q Apply

  • Confirm the 'active-active' state between source and target (this confirms that replication is ready; the data at the source and target is not yet in sync at this point). See the query sketch after this list.
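
The queries below are a sketch of one way to confirm that the subscriptions have reached the active ('A') state on both sides. The database names SRCDB and TGTDB and the control-table schema ASN are placeholders.

    -- At the Q Capture (source) server
    CONNECT TO SRCDB;
    SELECT SUBNAME, STATE FROM ASN.IBMQREP_SUBS;

    -- At the Q Apply (target) server
    CONNECT TO TGTDB;
    SELECT SUBNAME, STATE FROM ASN.IBMQREP_TARGETS;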

Execution

  • Second synchronization: bring the source and target data into sync (by FlashCopy / backup and restore, or by a full refresh when the subscriptions are started)

  • Migrate the database at the target if you chose the FlashCopy / backup-and-restore approach

  • Enable large table spaces (ALTER and REORG); this is not possible in V8.2 but is possible from V9 onward

  • Create unique indexes at the target where possible.

    • If that is not possible, exclude those tables from replication and export/load their data from old to new on switching day.

  • Start Q Capture and Q Apply and let replication catch up

  • Run RUNSTATS on the target tables once their state becomes active-active (spillsub/resumesub might be used); see the sketch after this list.

  • Bind the applications when the target tables are ready to use.
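
As a sketch of the statistics and unique-index steps above, the following DB2 CLP commands assume a placeholder database TGTDB and table APPSCHEMA.ORDERS; application packages can be rebound afterward, for example with db2rbind.

    CONNECT TO TGTDB;

    -- Collect statistics on a replicated target table once it is active
    RUNSTATS ON TABLE APPSCHEMA.ORDERS WITH DISTRIBUTION AND INDEXES ALL;

    -- Create a unique index at the target for a table that has none at the
    -- source (possible only if the target data is actually unique)
    CREATE UNIQUE INDEX APPSCHEMA.ORDERS_UX1 ON APPSCHEMA.ORDERS (ORDER_ID);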

Switching day

  • Stop source application

  • Compare data between the old and new databases if a data integrity check is required (a tool such as asntdiff can be used if the schema is the same)

  • Stop subscriptions for the migrating application

  • At the target, re-create the triggers, alter tables if necessary, restart sequences and identity columns at the required values, refresh tables to populate MQTs, and so on (see the sketch after this list)

  • Optional: start reverse replication if required for fallback capability

  • Resume the applications at the new database

  • Clean up the Q Replication configuration after the verification period
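
The following sketch illustrates typical switching-day adjustments at the target; the table, column, sequence, MQT, and subscription names and the restart value are placeholders. The asntdiff comparison is run from the operating system command line against the Q subscription definition.

    -- Optional data comparison, run from the OS command line, for example:
    --   asntdiff DB=SRCDB SCHEMA=ASN WHERE="SUBNAME = 'ORDERS0001'"

    CONNECT TO TGTDB;

    -- Carry identity and sequence counters forward so numbering continues
    ALTER TABLE APPSCHEMA.ORDERS ALTER COLUMN ORDER_ID RESTART WITH 1000001;
    ALTER SEQUENCE APPSCHEMA.ORDER_SEQ RESTART WITH 1000001;

    -- Repopulate an MQT that was excluded from replication
    REFRESH TABLE APPSCHEMA.ORDERS_SUMMARY;

    -- Re-create the triggers that were dropped at the target before
    -- replication started (for example, by running the saved trigger DDL)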

[{"Business Unit":{"code":"BU059","label":"IBM Software w\/o TPS"},"Product":{"code":"SSTRGZ","label":"InfoSphere Data Replication"},"Component":"","Platform":[{"code":"PF002","label":"AIX"},{"code":"PF012","label":"IBM i"},{"code":"PF016","label":"Linux"},{"code":"PF051","label":"Linux on IBM Z Systems"},{"code":"PF033","label":"Windows"}],"Version":"All Versions","Edition":"","Line of Business":{"code":"LOB10","label":"Data and AI"}}]

Document Information

Modified date:
03 December 2019

UID

ibm11105041