Descriptions of asnqapp parameters

These descriptions provide detail on the asnqapp parameters, their defaults, and why you might want to change the default in your environment.

activate (z/OS)

Default: activate=1140.0

Method of changing: When Q Apply starts

The activate parameter specifies the level of functionality that you want to enable for a Q Apply program. Under the Q Replication delivery model that began with Version 11.4 on z/OS®, you have the option of enabling or disabling new functions by using this parameter. This parameter is supported on z/OS only.

For example, you might install a PTF on z/OS that contains new functions, and you want to enable the functions that are included in the PTF. You would start Q Apply with the activate parameter and set its value to the newly available functional level. The initial function level for Version 11.4 is 1140.0. The first function level that includes new features is 1140.100. The function level for V10.2.1 is 1021.0.

The value that you set with this parameter is stored in the CURRENT_LEVEL column in the IBMQREP_APPLYPARMS table. The limit for activate is the value of the POSSIBLE_LEVEL column of the IBMQREP_APPLYPARMS table, which indicates the maximum functional level that can be set for Q Apply. If the level of the control tables does not support the functional level that you specify with the activate parameter, the ASN0734E message is issued and Q Apply does not start.
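The relationship between the requested level and the POSSIBLE_LEVEL ceiling can be sketched as follows. This is illustrative Python, not product code; the numeric major.minor comparison rule is an assumption based on the level values shown above (1021.0, 1140.0, 1140.100).

```python
# Hypothetical sketch of the functional-level check described above.
# POSSIBLE_LEVEL comes from the IBMQREP_APPLYPARMS table; the
# comparison logic here is an assumption for illustration.

def check_activate(requested: str, possible_level: str) -> bool:
    """Return True if the requested functional level can be activated."""
    def as_tuple(level: str):
        major, minor = level.split(".")
        return (int(major), int(minor))
    return as_tuple(requested) <= as_tuple(possible_level)

# Q Apply would start with activate=1140.100 only if the control tables
# support at least that level; otherwise message ASN0734E is issued.
print(check_activate("1140.100", "1140.100"))  # True
print(check_activate("1140.100", "1140.0"))    # False
```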

apply_server

Default (z/OS): None

Default (Linux®, UNIX, Windows): apply_server=value of DB2DBDFT environment variable, if it is set

The apply_server parameter identifies the database or subsystem where a Q Apply program runs, and where its control tables are stored. The Q Apply server must be the same database or subsystem that contains the targets.

z/OS: For data sharing, provide the group attach name instead of a subsystem name so that you can run the replication job in any LPAR.

apply_schema

Default: apply_schema=ASN

The apply_schema parameter lets you distinguish between multiple instances of the Q Apply program on a Q Apply server.

The schema identifies one Q Apply program and its control tables. Two Q Apply programs with the same schema cannot run on a server.

A single Q Apply program can create multiple browser threads. Each browser thread reads messages from a single receive queue. Because of this, you do not need to create multiple instances of the Q Apply program on a server to divide the flow of data that is being applied to targets.

On z/OS, no special characters are allowed in the Q Apply schema except for the underscore (_).

apply_path

Default: None

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The apply_path parameter specifies the directory where a Q Apply program stores its work files and log file. By default, the path is the directory where you start the program. You can change this path.

z/OS
Because the Q Apply program is a POSIX application, the default path depends on how you start the program:
  • If you start a Q Apply program from a USS command line prompt, the path is the directory where you started the program.
  • If you start a Q Apply program using a started task or through JCL, the default path is the home directory in the USS file system of the user ID that is associated with the started task or job.

To change the path, you can specify either a path name or a high-level qualifier (HLQ), such as //QAPP. When you use an HLQ, sequential files are created that conform to the naming conventions for z/OS sequential data sets. The sequential data sets are relative to the user ID that is running the program. Otherwise, these file names are similar to the names that are stored in an explicitly named directory path, with the HLQ concatenated as the first part of the file name, for example, sysadm.QAPPV9.filename. Using an HLQ might be convenient if you want the Q Apply log and LOADMSG files to be system-managed (SMS).
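The two naming behaviors can be sketched in Python as follows. The qualifier order (user ID, then HLQ, then file name) is inferred from the sysadm.QAPPV9.filename example above and should be treated as illustrative rather than authoritative:

```python
# Sketch (not product code) of where a Q Apply work file lands,
# based on the apply_path description above.

def qapply_file_name(apply_path: str, userid: str, base_name: str) -> str:
    if apply_path.startswith("//"):
        hlq = apply_path[2:]
        # Sequential data set relative to the running user ID,
        # with the HLQ as the first part of the file name portion.
        return f"{userid}.{hlq}.{base_name}"
    # Otherwise apply_path is an ordinary directory path.
    return f"{apply_path}/{base_name}"

print(qapply_file_name("//QAPPV9", "sysadm", "filename"))  # sysadm.QAPPV9.filename
print(qapply_file_name("/u/qapp", "sysadm", "filename"))   # /u/qapp/filename
```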

If you want the Q Apply started task to write to a .log data set whose high-level qualifier differs from the user ID that is executing the task (for example, TSOUSER), you must specify a single quotation mark (') as an escape character when using the SYSIN format for input parameters to the started task. For example, to use the high-level qualifier JOESMITH, the user ID TSOUSER that runs the Q Apply program must have RACF® authority to write data sets with the high-level qualifier JOESMITH, as in the following example:

//SYSIN    DD  *
 APPLY_PATH=//'JOESMITH
/*     
Windows
If you start a Q Apply program as a Windows service, by default the program starts in the \SQLLIB\bin directory.

You can set the apply_path parameter when you start the Q Apply program, or you can change its saved value in the IBMQREP_APPLYPARMS table. You cannot alter this parameter while the Q Apply program is running.

applycmd_interval

Default: applycmd_interval=3000 milliseconds

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The applycmd_interval parameter specifies how often the Q Apply program reads the IBMQREP_APPLYCMD table to look for inserts that prompt the running Q Apply program to execute a specific command. The minimum value is 1000 milliseconds.

applydelay

Default: applydelay=0 seconds

Method of changing: When Q Apply starts

The applydelay parameter controls the amount of time in seconds that the Q Apply program waits before replaying each transaction at the target. The delay is based on the source commit time of the transaction. Q Apply delays applying transactions until the current time reaches or exceeds the source transaction commit time plus the value of applydelay. Changes at the source database are captured and sent to the receive queue, where they wait during the delay period.

This parameter can be used, for example, to maintain multiple copies of a source database at different points in time for failover in case of problems at the source system. For example, if a user accidentally deletes data at the primary system, a copy of the database exists where the data is still available.

The applydelay parameter has no effect on the applyupto or autostop parameters.

Important: If you plan to use the applydelay parameter, ensure that the receive queue has enough space to hold messages that accumulate during the delay period.
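The eligibility rule described above can be sketched in a few lines of Python. This is an illustration of the timing arithmetic only, not product code:

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch of the applydelay rule: a transaction is eligible
# to be applied only when the current time reaches or exceeds the source
# commit time plus the applydelay value.

def is_eligible(source_commit: datetime, applydelay_seconds: int,
                now: datetime) -> bool:
    return now >= source_commit + timedelta(seconds=applydelay_seconds)

commit = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
# With applydelay=3600, the transaction waits on the receive queue
# until at least 13:00 UTC.
print(is_eligible(commit, 3600, datetime(2024, 1, 1, 12, 30, tzinfo=timezone.utc)))  # False
print(is_eligible(commit, 3600, datetime(2024, 1, 1, 13, 0, tzinfo=timezone.utc)))   # True
```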

applyupto

Default: None

Method of changing: When Q Apply starts

The applyupto parameter identifies a timestamp that instructs the Q Apply program to stop after processing transactions that were committed at the source on or before one of the following times:

  • A specific timestamp that you provide
  • The CURRENT_TIMESTAMP keyword, which signifies the time that the Q Apply program started

You can optionally specify the WAIT or NOWAIT keywords to control when Q Apply stops:

WAIT (default)
Q Apply does not stop until it receives and processes all transactions up to the specified GMT timestamp or the value of CURRENT_TIMESTAMP, even if the receive queue becomes empty.
NOWAIT
Q Apply stops after it processes all transactions on the receive queue, even if it has not seen a transaction with a commit timestamp that matches or exceeds the specified GMT timestamp or the value of CURRENT_TIMESTAMP.

The applyupto parameter applies to all browser threads of a Q Apply instance. Each browser thread stops when it reads a message on its receive queue with a source commit timestamp that matches or exceeds the specified time. The Q Apply program stops when all of its browser threads determine that all transactions with a source commit timestamp prior to and including the applyupto timestamp have been applied. All transactions with a source commit time greater than the specified GMT timestamp stay on the receive queue and are processed the next time the Q Apply program runs.

The timestamp must be specified in Greenwich mean time (GMT) in a full or partial timestamp format. The full timestamp uses the following format: YYYY-MM-DD-HH.MM.SS.mmmmmm. For example, 2007-04-10-10.35.30.555555 is the GMT timestamp for April 10th, 2007, 10:35 AM, 30 seconds, and 555555 microseconds.

You can specify the partial timestamp in one of the following formats:

YYYY-MM-DD-HH.MM.SS
For example, 2007-04-10-23.35.30 is the partial GMT timestamp for April 10th, 2007, 11:35 PM, 30 seconds.
YYYY-MM-DD-HH.MM
For example, 2007-04-10-14.30 is the partial GMT timestamp for April 10th, 2007, 2:30 PM.
YYYY-MM-DD-HH
For example, 2007-04-10-01 is the partial GMT timestamp for April 10th, 2007, 1:00 AM.
HH.MM
For example, 14.55 is the partial GMT timestamp for today at 2:55 PM.
HH
For example, 14 is the partial GMT timestamp for today at 2 PM.
The HH.MM format can be helpful if you schedule a task to start the Q Apply program every day at 1 AM Pacific Standard Time (PST) and you want to stop the program after it processes the transactions that were committed at the source with a GMT timestamp on or before 4 AM PST (12.00 GMT). For example, run the following task at 1 AM PST and set the applyupto parameter to end the task at 4 AM PST:
asnqapp apply_server=MYTESTSERVER apply_schema=ASN applyupto=12.00

During daylight saving time, the difference between GMT and local time might change depending on your location. For example, the Pacific time zone is GMT-8 hours during the fall and winter, and GMT-7 hours during daylight saving time in the spring and summer.
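The GMT arithmetic in the scheduling example above can be sketched as follows; the offsets are the standard (-8) and daylight (-7) Pacific offsets mentioned in this section:

```python
from datetime import datetime, timedelta

# Sketch: convert a local stop time to the HH.MM GMT value that the
# applyupto parameter expects. utc_offset_hours is the local offset
# from GMT (for example, -8 for Pacific Standard Time).

def applyupto_value(local_stop: datetime, utc_offset_hours: int) -> str:
    gmt = local_stop - timedelta(hours=utc_offset_hours)
    return gmt.strftime("%H.%M")

# 4 AM Pacific Standard Time (GMT-8) -> applyupto=12.00
print(applyupto_value(datetime(2024, 1, 15, 4, 0), -8))  # 12.00
# 4 AM Pacific Daylight Time (GMT-7) -> applyupto=11.00
print(applyupto_value(datetime(2024, 7, 15, 4, 0), -7))  # 11.00
```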

Restriction: You cannot specify both the autostop parameter and the applyupto parameter.

You might want to set the heartbeat interval to a value greater than zero so that the Q Apply program can determine whether the time value that is specified in the applyupto parameter has passed.

arm

Default: None

Method of changing: When Q Apply starts

You can use the arm=identifier parameter on z/OS to specify a unique identifier that the Automatic Restart Manager uses to automatically restart a stopped Q Apply instance. The alphanumeric value that you supply is appended to the ARM element name that Q Apply generates for itself: ASNQAxxxxyyyy (where xxxx is the data-sharing group attach name and yyyy is the Db2® member name). You can specify a string of any length for the arm parameter, but the Q Apply program concatenates only up to three characters to the current name. If necessary, the Q Apply program pads the name with blanks to create a unique 16-byte name.
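The name construction described above can be sketched as follows. The group attach name DSNG and member name DB1A are hypothetical; only the ASNQA prefix, three-character truncation, and 16-byte padding come from the text:

```python
# Hedged sketch of the ARM element-name construction described above.
# "ASNQA" + group attach name (4) + Db2 member name (4) is 13 bytes;
# up to 3 characters of the arm value are appended, and the result is
# padded with blanks to 16 bytes if necessary.

def arm_element_name(group_attach: str, member: str, arm_value: str) -> str:
    base = f"ASNQA{group_attach}{member}"
    return (base + arm_value[:3]).ljust(16)

print(repr(arm_element_name("DSNG", "DB1A", "A01")))     # 'ASNQADSNGDB1AA01'
print(repr(arm_element_name("DSNG", "DB1A", "LONGID")))  # truncated to 3 chars
```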

autostop

Default: autostop=n

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The autostop parameter lets you direct a Q Apply program to automatically stop when there are no transactions to apply. By default (autostop=n), a Q Apply program keeps running when queues are empty and waits for transactions to arrive.

Typically, the Q Apply program is run as a continuous process whenever the target database is active, so in most cases you would keep the default (autostop=n). Set autostop=y only for scenarios where the Q Apply program is run at set intervals, such as when you synchronize infrequently connected systems, or in test scenarios.

If you set autostop=y, the Q Apply program shuts down after all receive queues are emptied once. When the browser thread for each receive queue detects that the queue has no messages, the thread stops reading from the queue. After all threads stop, the Q Apply program stops. Messages might continue to arrive on queues for which the browser thread has stopped, but the messages will collect until you start the Q Apply program again.

Restriction: You cannot specify both the autostop parameter and the applyupto parameter.

buffered_inserts

Default: buffered_inserts=n

Method of changing: When Q Apply starts

Linux, UNIX, Windows: The buffered_inserts parameter specifies whether the Q Apply program uses buffered inserts, which can improve performance in some partitioned databases that are dominated by INSERT operations. If you specify buffered_inserts=y, Q Apply internally binds the appropriate files with the INSERT BUF option. This bind option enables the coordinator node in a partitioned database to accumulate inserted rows in buffers rather than forwarding them immediately to their destination partitions. When a buffer fills, or when another SQL statement is encountered (an UPDATE or DELETE, an INSERT to a different table, or a COMMIT or ROLLBACK), all of the rows in the buffer are sent together to the destination partition.

You might see additional performance gains by combining the use of buffered inserts with the commit_count parameter.

When buffered inserts are enabled, Q Apply does not perform exception handling. Any conflict or error prompts Q Apply to stop reading from the queue. To recover past the point of an exception, you must restart message processing on the queue and start Q Apply with buffered_inserts=n.

caf (z/OS)

Default: caf=n

Method of changing: When Q Apply starts

By default (caf=n), the Q Apply program uses Resource Recovery Services (RRS) connect on Db2 for z/OS. You can override this default and prompt Q Apply to use the Call Attach Facility (CAF) by specifying caf=y.

If RRS is not available, Q Apply switches to CAF and issues a message to warn that the program was not able to connect because RRS is not started.

chkdep_noncondccd

Default: chkdep_noncondccd=y

Method of changing: When Q Apply starts

The chkdep_noncondccd parameter specifies whether the Q Apply program performs key dependency analysis on noncondensed CCD target tables. By default (chkdep_noncondccd=y), Q Apply performs this analysis, sometimes causing rows to be applied serially rather than in parallel if key dependencies are detected. When the cardinality of a replication key column is low, too much serialization can occur because of the dependency checking. You can specify chkdep_noncondccd=n to disable Q Apply dependency checking for noncondensed CCD targets. With this setting, all serialization is removed for this target type.

Note: Prior to Version 11.4, the replication administration tools created noncondensed CCD tables with the first column assigned as a replication key column (IS_KEY of Y). This setting might cause excessive dependency checking, and you might see improved performance by specifying chkdep_noncondccd=n.

commit_count

Default: commit_count=1

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The commit_count parameter specifies the number of transactions that each Q Apply agent thread applies to the target table within a commit scope. By default, the agent threads commit after each transaction that they apply.

By increasing commit_count and grouping more transactions within the commit scope, you might see improved performance.

Recommendation: Use a higher value for commit_count only with row-level locking. This parameter requires careful tuning when used with a large number of agent threads because it could cause lock escalation resulting in lock timeouts and deadlock retries.

commit_count_unit

Default: commit_count_unit=t

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The commit_count_unit parameter specifies whether the commit_count parameter uses the number of transactions applied or the number of rows applied as the unit for determining the size of a commit scope.

By default (commit_count_unit=t), Q Apply agent threads use the number of transactions that were applied to the target table to determine the size of a batch (when to commit). If you specify commit_count_unit=r, the agent threads count the number of rows to commit within a commit scope. The agent threads track the total number of rows for all uncommitted transactions. When this number reaches or exceeds the value of COMMIT_COUNT, the agent threads commit all pending transactions within the commit scope. With commit_count_unit=r, Q Apply agents count all rows that they try to apply, even if a row is not applied because of a conflict or if a row is diverted to a spill queue.
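The row-counting behavior of commit_count_unit=r can be sketched as follows. This is an illustration of the batching rule only; the real agent threads also commit any remaining transactions at other boundaries, which this sketch omits:

```python
# Illustrative sketch of commit_count_unit=r: agent threads keep a
# running total of rows across uncommitted transactions and commit
# the whole batch once the total reaches or exceeds commit_count.

def commit_points(transactions, commit_count):
    """transactions: list of row counts, one per applied transaction.
    Returns the transaction indexes after which a commit is issued."""
    points, pending_rows = [], 0
    for i, rows in enumerate(transactions):
        pending_rows += rows
        if pending_rows >= commit_count:
            points.append(i)
            pending_rows = 0
    return points

# With commit_count=100 and unit=r, commits happen after the
# transactions whose cumulative row counts reach 100.
print(commit_points([40, 70, 10, 120, 5], 100))  # [1, 3]
```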

deadlock_retries

Default: deadlock_retries=3

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The deadlock_retries parameter specifies how many times the Q Apply program tries to reapply changes to target tables when it encounters an SQL deadlock or lock timeout. The default is three tries. This parameter also controls the number of times that the Q Apply program tries to insert, update, or delete rows from its control tables after an SQL deadlock.

After the limit is reached, if deadlocks persist the browser thread stops. You might want to set a higher value for deadlock_retries if applications are updating the target database frequently and you are experiencing a high level of contention. Or, if you have a large number of receive queues and corresponding browser threads, a higher value for deadlock_retries might help resolve possible contention in peer-to-peer and other multidirectional replication environments, as well as at control tables such as the IBMQREP_DONEMSG table.

Restriction: You cannot set deadlock_retries to a value that is lower than the default of 3.

dftmodelq

Default: None

Method of changing: When Q Apply starts

By default, the Q Apply program uses IBMQREP.SPILL.MODELQ as the name of the model queue that it uses to create spill queues for the loading process. To specify a different default model queue name, use the dftmodelq parameter. The following list summarizes the behavior of the parameter:

If you specify dftmodelq when you start Q Apply
For each Q subscription, Q Apply will check to see if you specified a model queue name for the Q subscription by looking at the value of the MODELQ column in the IBMQREP_TARGETS control table:
  • If the value is NULL or IBMQREP.SPILL.MODELQ, then Q Apply will use the value that you specify for the dftmodelq parameter.
  • If the column contains any other non-NULL value, then Q Apply will use the value in the MODELQ column and will ignore the value that you specify for the dftmodelq parameter.
If you do not specify dftmodelq when you start Q Apply
Q Apply will use the value of the MODELQ column in the IBMQREP_TARGETS table. If the value is NULL, Q Apply will default to IBMQREP.SPILL.MODELQ.
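The resolution rules listed above can be sketched directly in Python. Here, modelq_column stands for the MODELQ value from IBMQREP_TARGETS (None for NULL) and dftmodelq for the startup parameter (None if not specified):

```python
DEFAULT_MODELQ = "IBMQREP.SPILL.MODELQ"

# Sketch of the model-queue resolution rules described above.

def resolve_modelq(modelq_column, dftmodelq):
    if dftmodelq is not None:
        # dftmodelq was specified at startup.
        if modelq_column is None or modelq_column == DEFAULT_MODELQ:
            return dftmodelq
        return modelq_column          # explicit per-subscription name wins
    # dftmodelq not specified: fall back to the column, then the default.
    return modelq_column if modelq_column is not None else DEFAULT_MODELQ

print(resolve_modelq(None, "MY.MODELQ"))          # MY.MODELQ
print(resolve_modelq("SUB.MODELQ", "MY.MODELQ"))  # SUB.MODELQ
print(resolve_modelq(None, None))                 # IBMQREP.SPILL.MODELQ
```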

eif_conn1 (z/OS)

Default: None

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The eif_conn1 parameter specifies connection information for the primary event server for the Event Interface Facility (EIF) in Tivoli® NetView® Monitoring for GDPS®. Use this parameter in conjunction with enabling Q Apply event notification (event_gen=y).

You specify the connection information in the format address(port) where address is the host name or IPv4 address of the event server and (port) is the port that the Event Receiver monitors for incoming events.

eif_conn2 (z/OS)

Default: None

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The eif_conn2 parameter specifies connection information for the backup event server for EIF. Use this parameter in conjunction with enabling Q Apply event notification (event_gen=y).

You specify the connection information in the format address(port) where address is the host name or IPv4 address of the event server and (port) is the port that the Event Receiver monitors for incoming events.
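A small parser for the address(port) format used by eif_conn1 and eif_conn2 can be sketched as follows; the host name eifhost.example.com and port 5529 are hypothetical examples:

```python
import re

# Sketch: split an eif_conn1/eif_conn2 value of the form address(port)
# into its host and port components.

def parse_eif_conn(value: str):
    m = re.fullmatch(r"(.+)\((\d+)\)", value)
    if not m:
        raise ValueError(f"expected address(port), got {value!r}")
    return m.group(1), int(m.group(2))

print(parse_eif_conn("eifhost.example.com(5529)"))  # ('eifhost.example.com', 5529)
print(parse_eif_conn("192.0.2.10(5529)"))           # ('192.0.2.10', 5529)
```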

eif_hbint (z/OS)

Default: eif_hbint=10000 milliseconds (10 seconds)

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The eif_hbint parameter determines how often a Q Apply program sends "heartbeat" messages to the EIF server to indicate that it is running and monitoring event conditions. The default and minimum values are 10000 milliseconds (10 seconds).

Heartbeat messages are for EIF only and do not go to the console or IBMQREP_APPEVENTS table. These messages are unconditionally generated, even if other events are also generated within the heartbeat interval. The only exception is if Q Apply stops processing messages on the receive queue for the specified replication queue map.

event_gen

Default: event_gen=n

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The event_gen parameter specifies whether the Q Apply program creates a separate thread to check in-memory monitoring data and issue events when certain conditions occur or thresholds are exceeded. By default, Q Apply does not generate events, but you can invoke this function by specifying event_gen=y.

You can use events to speed the response to certain conditions, for example the latency of replicated transactions exceeding a desired level or a problem that forces Q Apply to stop reading from a receive queue. You define the events for which you want to receive notification by inserting rows in the IBMQREP_APPEVTDEFS control table. You can specify whether the events are sent to the console, the IBMQREP_APPEVENTS control table, or sent to the Event Interface Facility (EIF) over a TCP/IP IPv4 socket for use by Tivoli NetView Monitoring for GDPS as part of GDPS Continuous Availability.

Note: If event_gen=y, you must also run with term=y.

event_interval

Default: event_interval=3000 milliseconds (3 seconds)

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The event_interval parameter determines how often a Q Apply program collects end-to-end replication latency values for generating events. The default is every 3000 milliseconds (3 seconds), and the minimum value is 1000 milliseconds (1 second). A longer interval might provide more data on which to base responses to events and reduces data-collection overhead; a shorter interval allows faster responses. Determine a value for this parameter based on the types of events that are defined for your environment.

event_limit

Default: event_limit=10080 minutes (7 days)

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The event_limit parameter specifies how old rows in the IBMQREP_APPEVENTS table must be before they are eligible for pruning. By default, rows that are older than 10080 minutes (7 days) are pruned. At most five rows are inserted at each event interval. Adjust the event limit based on your needs.

gdps_total_num_cg_override

Default: None

Method of changing: When Q Apply starts

The gdps_total_num_cg_override parameter enables you to override the field consistency_group_total in EIF messages. The value for this parameter corresponds to the number of consistency groups (replication queue maps) that are defined in the workload for GDPS Continuous Availability. You should specify the gdps_total_num_cg_override parameter if the GDPS workload spans more than one multiple consistency group. Otherwise, the number of consistency groups that are reported to GDPS in EIF messages for a workload is the number of consistency groups in the multiple consistency group.

ignbaddata

Default: None

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

Note: This parameter applies only if the Q Apply program uses International Components for Unicode (ICU) for code page conversion (if the code page of the source database and the code page that Q Apply uses are different).

The ignbaddata parameter specifies whether the Q Apply program checks for illegal characters in data from the source and continues processing even if it finds illegal characters.

If you specify ignbaddata=y, Q Apply checks for illegal characters and takes the following actions if any are found:

  • Does not apply the row with the illegal characters.
  • Inserts a row into the IBMQREP_EXCEPTIONS table that contains a hexadecimal representation of the illegal characters.
  • Continues processing the next row and does not follow the error action that is specified for the Q subscription.

A value of n prompts Q Apply to not check for illegal characters and not report exceptions for illegal characters. With this option, the row might be applied to the target table if Db2 does not reject the data. If the row is applied, Q Apply continues processing the next row. If the bad data prompts an SQL error, Q Apply follows the error action that is specified for the Q subscription and reports an exception.
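The kind of check that ignbaddata=y enables can be sketched as follows. The use of Python's built-in codecs here is only an analogy for ICU code page conversion, and the return values are illustrative:

```python
# Illustrative sketch: flag source bytes that are not legal in the
# conversion code page and report them in hexadecimal, similar to the
# hex representation stored in the IBMQREP_EXCEPTIONS table.

def find_illegal_bytes(data: bytes, codepage: str = "utf-8"):
    try:
        data.decode(codepage)
        return None                 # row is clean; it can be applied
    except UnicodeDecodeError as err:
        bad = data[err.start:err.end]
        return bad.hex().upper()    # report as hex; skip the row

print(find_illegal_bytes(b"hello"))          # None
print(find_illegal_bytes(b"abc\xff\xfede"))  # FF
```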

import_commit_count

Default: import_commit_count=0

Method of changing: When Q Apply starts

The import_commit_count parameter specifies the number of rows after which the Db2 IMPORT utility commits changes to the target table during the loading process. This parameter applies only to automatic loads for Db2 targets that use the IMPORT utility (LOAD_TYPE 2 or 102 in the IBMQREP_TARGETS table).

By default (import_commit_count=0), only one commit is issued after all rows have been inserted and no intermediate commits are issued. In some cases you might see improved loading performance by specifying that the IMPORT utility commit after a certain number of rows are inserted.

Q Apply passes the value of this parameter to the commitcount parameter of the IMPORT utility. When a number n is specified, the utility performs a COMMIT after every n records are imported.

inhibit_supp_log

Default: inhibit_supp_log=y if a file transfer queue is defined for the receive queue; n if not

Methods of changing: When Q Apply starts

The inhibit_supp_log parameter specifies whether the Q Apply program runs in Db2 maintenance mode so that Db2 does not produce supplemental logs at the target database for transactions that are applied by Q Apply.

When Q Apply connects to Db2, by default it identifies itself as a replication program by calling the Db2 stored procedure SYSPROC.ADMIN_SET_MAINT_MODE, which turns off supplemental logging for the changes that Q Apply makes to column-organized tables in the database. Calling this stored procedure also allows Q Apply to perform privileged operations on the database, such as inserting values into generated always columns, which Q Apply needs in order to replicate the values that come from the source system for these columns. Calling this stored procedure requires the user ID under which Q Apply connects to the database to have DBADM authority.

If you specify inhibit_supp_log=y, the Q Apply program runs in maintenance mode and Db2 does not produce these supplemental logs. One implication of suppressing logging for Q Apply operations is that the target database cannot be used as a source database for cascading replication into a third system: a Q Capture program that runs at the target does not see the changes that are made by Q Apply. You can enable cascading replication and request that Q Apply not run in maintenance mode by specifying inhibit_supp_log=n.

If a file transfer queue is defined for the receive queue to enable replication of column-organized tables, the default for this parameter is inhibit_supp_log=y.

insert_bidi_signal

Default: insert_bidi_signal=y

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The insert_bidi_signal parameter specifies whether the Q Capture and Q Apply programs use signal inserts to prevent recapture of transactions in bidirectional replication.

By default (insert_bidi_signal=y), the Q Apply program inserts P2PNORECAPTURE signals into the IBMQREP_SIGNAL table to instruct the Q Capture program at its same server not to recapture applied transactions at this server.

When there are many bidirectional Q subscriptions, the number of signal inserts can affect replication performance. By specifying insert_bidi_signal=n, the Q Apply program does not insert P2PNORECAPTURE signals. Instead, you insert Q Apply's AUTHTKN information into the IBMQREP_IGNTRAN table, which instructs the Q Capture program at the same server to not capture any transactions that originated from the Q Apply program, except for inserts into the IBMQREP_SIGNAL table.

Note: The option insert_bidi_signal=n is not valid for unidirectional or peer-to-peer replication. Use this setting only for bidirectional replication.

For improved performance when you use insert_bidi_signal=n, update the IBMQREP_IGNTRAN table to change the value of the IGNTRANTRC column to N (no tracing). This change prevents the Q Capture program from inserting a row into the IBMQREP_IGNTRANTRC table for each transaction that it does not recapture.

loadcopy_path

Default: None

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

You must specify this parameter when:

  • The target database is the primary server in a Db2 High Availability Disaster Recovery (HADR) configuration.
  • You take online backups of the target database and replication subscriptions use an initial load with load from cursor or export/load.

If loadcopy_path is not specified, Q Apply uses the NONRECOVERABLE load option when the target table for a subscription is loaded by the Db2 load utility. If tables are reloaded while an online backup or an HADR takeover is taking place, these tables become inaccessible. Should this situation occur, you can run the following query to check which tables are inaccessible:

DB2 SELECT SUBSTR(TABSCHEMA,1,18) AS TABSCHEMA, SUBSTR(TABNAME,1,18) AS TABNAME, AVAILABLE FROM SYSIBMADM.ADMINTABINFO WHERE AVAILABLE='N'

To recover, you need to drop and recreate the affected tables at the target and reinitialize the subscriptions. To avoid this situation in the future, stop Q Apply, specify loadcopy_path either in IBMQREP_APPLYPARMS or as a startup parameter, and then start Q Apply.

For HADR
You can use the loadcopy_path parameter instead of the DB2_LOAD_COPY_NO_OVERRIDE registry variable when the Q Apply server is the primary server in an HADR configuration and tables on the primary server are loaded by the Q Apply program calling the Db2 load utility. HADR sends log files to the standby site, but when a table on the primary server is loaded by the Db2 load utility, the inserts are not logged. Setting loadcopy_path to an NFS directory that is accessible from both the primary and standby servers prompts Q Apply to start the load utility with the option to create a copy of the loaded data in the specified path. The standby server in the HADR configuration then looks for the copied data in this path.
For initial load with load from cursor or export/load (LOAD_TYPE 1 or 3)
A Db2 backup image that is taken online might not be usable for the tables that were loaded by Q Apply with load from cursor or export/load during the period from the start of the online backup to the end of rollforward. Without loadcopy_path, you get an SQL1477N error when you access those tables, because Q Apply uses a NONRECOVERABLE load by default. To avoid this error, set loadcopy_path to a local directory if the target database is local. If the target database is remote, in which case the ROLLFORWARD command requires remote access to the copy file that is created by the load utility, specify an NFS directory. For more about NONRECOVERABLE load, see the section about "Whether to keep a copy of the changes made" in the Db2 Load overview topic.

load_data_buff_sz

Default: load_data_buff_sz=8

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

Use with multidimensional clustering (MDC) tables: Specifies the number of 4KB pages for the Db2 LOAD utility to use as buffered space for transferring data within the utility during the initial loading of the target table. This parameter applies only to automatic loads using the Db2 LOAD utility.

By default, the Q Apply program starts the utility with the option to use a buffer of 8 pages, which is also the minimum value for this parameter. Load performance for MDC targets can be significantly improved by specifying a much higher number of pages.

logmarkertz

Default: logmarkertz=gmt

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The logmarkertz parameter controls the time zone that the Q Apply program uses when it inserts source commit data into the IBMSNAP_LOGMARKER column of consistent-change data (CCD) tables or point-in-time (PIT) tables. By default (logmarkertz=gmt), Q Apply inserts a timestamp in Greenwich mean time (GMT) to record when the data was committed at the source. You can specify logmarkertz=local and Q Apply inserts a timestamp in the local time of the Q Capture server.

Existing rows in CCD or PIT targets that were generated before the use of logmarkertz=local are not converted by Q Apply and remain in GMT unless you manually convert them.

The logmarkertz parameter does not affect stored procedure targets. The src_commit_timestamp IN parameter for stored procedure targets always uses GMT-based timestamps.

logreuse

Default: logreuse=n

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

Each Q Apply program keeps a log file that tracks its work history, such as when it starts and stops reading from queues, changes parameter values, prunes control tables, or encounters errors.

By default, the Q Apply program adds to the existing log file when the program restarts. This default lets you keep a history of the program's actions. If you don't want this history or want to save space, set logreuse=y. The Q Apply program clears the log file when it starts, then writes to the blank file.

The log is stored by default in the directory where the Q Apply program is started, or in a different location that you set using the apply_path parameter.

z/OS: The log file name is apply_server.apply_schema.QAPP.log. For example, SAMPLE.ASN.QAPP.log. Also, if apply_path is specified with slashes (//) to use a High Level Qualifier (HLQ), the file naming conventions of z/OS sequential data set files apply, and apply_schema is truncated to eight characters.

Linux, UNIX, Windows: The log file name is db2instance.apply_server.apply_schema.QAPP.log. For example, DB2.SAMPLE.ASN.QAPP.log.

logstdout

Default: logstdout=n

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

By default, a Q Apply program writes its work history only to the log. You can change the logstdout parameter if you want to see program history on the standard output (stdout) in addition to the log.

Error messages and some log messages (initialization, stop, subscription activation, and subscription deactivation) go to both the standard output and the log file regardless of the setting for this parameter.

You can specify the logstdout parameter when you start a Q Apply program with the asnqapp command. If you use the Replication Center to start a Q Apply program, this parameter is not applicable.

max_parallel_loads

Default: max_parallel_loads=1 (z/OS); 15 (Linux, UNIX, Windows)

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The max_parallel_loads parameter specifies the maximum number of automatic load operations of target tables that Q Apply can start at the same time for a given receive queue. The default for max_parallel_loads differs depending on the platform of the target server:

z/OS
On z/OS the default is one load at a time because of potential issues with the DSNUTILU or DSNUTILS (deprecated) stored procedure that Q Apply uses to call the Db2 LOAD utility. If you plan to set values higher than max_parallel_loads=1, ensure that your Workload Manager (WLM) policy allows more than one load at a time.
Linux, UNIX, Windows
On Linux, UNIX, and Windows the default is 15 parallel loads.

mcgsync_location (z/OS)

Default: None

Methods of changing: When Q Apply starts

The mcgsync_location parameter enables you to synchronize Q Apply programs across multiple data-sharing groups. It specifies the location name or location alias of the database or subsystem where the synchronization control tables ASN.IBMQREP_MCGSYNC and ASN.IBMQREP_MCGPARMS are stored. This parameter is required only if you are using the synchronized apply function and the Q Apply programs that are part of the synchronization group reside in different Db2 data sharing groups.

When you specify mcgsync_location, Q Apply uses the location to access these two control tables by using three-part names, for example mcgsync_location.ASN.IBMQREP_MCGSYNC and mcgsync_location.ASN.IBMQREP_MCGPARMS.

For more information, see Synchronizing Q Apply programs across multiple data-sharing groups.

monitor_interval

Default: monitor_interval=60000 milliseconds (1 minute) on z/OS; 30000 milliseconds (30 seconds) on Linux, UNIX, and Windows

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The monitor_interval parameter tells a Q Apply program how often to insert performance statistics into the IBMQREP_APPLYMON and IBMQREP_MCGMON tables. You can view these statistics by using the Q Apply Throughput and Latency windows.

You can adjust the monitor_interval based on your needs:

If you want to monitor a Q Apply program's activity at a more granular level, shorten the monitor interval
For example, you might want to see the statistics for the number of messages on queues broken down by each 10 seconds rather than one-minute intervals.
Lengthen the monitor interval to view Q Apply performance statistics over longer periods
For example, if you view latency statistics for a large number of one-minute periods, you might want to average the results to get a broader view of performance. Seeing the results averaged for each half hour or hour might be more useful in your replication environment.
Important for Q Replication Dashboard users: When possible, you should synchronize the Q Apply monitor_interval parameter with the dashboard refresh interval (how often the dashboard retrieves performance information from the Q Capture and Q Apply monitor tables). The default refresh interval for the dashboard is 10 seconds (10000 milliseconds). If the value of monitor_interval is higher than the dashboard refresh interval, the dashboard refreshes when no new monitor data is available.
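To review the collected statistics directly, you can query the monitor table from the Db2 command line. This is a sketch: the database alias TARGETDB and schema ASN are placeholders, and the exact column names depend on your control-table version, so verify them against your IBMQREP_APPLYMON layout:

```shell
# Assumption: TARGETDB is the Q Apply server alias and ASN is the Q Apply schema.
db2 connect to TARGETDB
db2 "SELECT MONITOR_TIME, END2END_LATENCY, APPLY_LATENCY, TRANS_APPLIED
     FROM ASN.IBMQREP_APPLYMON
     ORDER BY MONITOR_TIME DESC
     FETCH FIRST 10 ROWS ONLY"
```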

monitor_limit

Default: monitor_limit=10080 minutes (7 days)

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The monitor_limit parameter specifies how old the rows must be in the IBMQREP_APPLYMON and IBMQREP_MCGMON tables before the rows are eligible for pruning.

By default, rows that are older than 10080 minutes (7 days) are pruned. The IBMQREP_APPLYMON table provides statistics about a Q Apply program's activity. A row is inserted at each monitor interval. You can adjust the monitor limit based on your needs:

Increase the monitor limit to keep statistics
If you want to keep records of the Q Apply program's activity beyond one week, set a higher monitor limit.
Lower the monitor limit if you look at statistics frequently.
If you monitor the Q Apply program's activity on a regular basis, you probably do not need to keep one week of statistics and can set a lower monitor limit.

You can set the monitor_limit parameter when you start the Q Apply program or while the program is running. You can also change its saved value in the IBMQREP_APPLYPARMS table.
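For example, to change the saved value in the control table so that only one day of statistics is kept, you might issue an SQL UPDATE. This is a sketch; TARGETDB and the ASN schema are placeholders, and the new value takes effect when Q Apply next reads the table:

```shell
db2 connect to TARGETDB
# 1440 minutes = 1 day of monitor statistics before rows become prunable.
db2 "UPDATE ASN.IBMQREP_APPLYPARMS SET MONITOR_LIMIT = 1440"
```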

multi_row_insert

Default: multi_row_insert=y

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The multi_row_insert parameter controls whether the Q Apply program uses multi-row insert SQL statements.

Inserting replicated rows in batches of up to 100 rows can reduce CPU consumption at the target server and increase throughput. Performance improvements are likely to be greater for tables with fewer columns, smaller row sizes, and fewer indexes. All of the rows that are part of a multi-row insert must be consecutive insert statements against the same table and in the same transaction.

If an insert fails for any row in the rowset, Db2 rolls back all of the changes in the rowset. Q Apply then switches to single-row insert mode to process all of the rows in the rowset, and the row that caused the error is retried and handled with the error action and conflict action that are specified for the Q subscription.

You can allocate additional memory for Q Apply to use in performing multi-row inserts by increasing the value of the MRI_MEMORY_LIMIT column in the IBMQREP_RECVQUEUES table. The default value is 1024 KB per agent thread. A larger value for this parameter can enable the Q Apply program to group more rows into each multi-row insert operation. The allocation for MRI_MEMORY_LIMIT is separate from the overall memory that Q Apply uses, which is set in the MEMORY_LIMIT column in IBMQREP_RECVQUEUES.
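For example, you might update the receive queue row to give each agent thread more memory for grouping rows into multi-row inserts. This is a sketch; the database alias, ASN schema, and queue name are placeholders:

```shell
db2 connect to TARGETDB
# Raise the multi-row-insert memory from the default 1024 KB to 4096 KB
# for one receive queue (the queue name is hypothetical).
db2 "UPDATE ASN.IBMQREP_RECVQUEUES SET MRI_MEMORY_LIMIT = 4096
     WHERE RECVQ = 'ASN.QM1_TO_QM2.DATAQ'"
```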

The following restrictions apply:

  • Peer-to-peer replication is not eligible.
  • Stored procedure and temporal table targets are not eligible.
  • LOB and XML columns are not eligible.
  • Non-key columns can have expressions but key columns cannot have expressions.
  • Self-referencing tables are not eligible.
  • When a source table is mapped to multiple target tables that are replicated through the same receive queue, Q Apply might not be able to use multi-row inserts even if multi_row_insert=y is set and the inserts are consecutive in the source transaction.
Note: Unsubscribed TIMESTAMP columns with a default value of CURRENT TIMESTAMP are likely to be given identical values in multi-row insert mode. To use the multi-row insert option, subscribe to the TIMESTAMP columns with default values.

nickname_commit_ct

Default: nickname_commit_ct=10

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

Federated targets: The nickname_commit_ct parameter specifies the number of rows after which the Db2 IMPORT utility commits changes to nicknames that reference a federated target table during the loading process. This parameter applies only to automatic loads for federated targets that use the IMPORT utility.

By default, Q Apply specifies that the IMPORT utility commits changes every 10 rows during the federated loading process. You might see improved load performance by raising the value of nickname_commit_ct. For example, a setting of nickname_commit_ct=100 would lower the CPU overhead by reducing interim commits. However, more frequent commits protect against problems that might occur during the load, enabling the IMPORT utility to roll back a smaller number of rows if a problem occurs.

The nickname_commit_ct parameter is a tuning parameter used to improve Db2 IMPORT performance by reducing the number of commits for federated targets.

nmi_enable (z/OS)

Default: nmi_enable=n

Method of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The nmi_enable parameter specifies whether the Q Apply program is enabled to provide a Network Management Interface (NMI) for monitoring Q Replication statistics from IBM® Tivoli NetView Monitoring for GDPS. The NMI client application must be on the same z/OS system as the Q Apply program. By default (nmi_enable=n), the interface is not enabled.

When you specify nmi_enable=y, the Q Apply program acts as an NMI server and listens on the socket that is specified by the nmi_socket_name parameter for client connection requests and data requests. Q Apply can support multiple client connections, and has a dedicated thread to interact with NMI clients. The thread responds to requests in the order that they arrived.

nmi_socket_name (z/OS)

Default: nmi_socket_name=/var/sock/group-attach-name_apply-schema_asnqapp

Method of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The nmi_socket_name parameter specifies the name of the AF_UNIX socket where the Q Apply program listens for requests for statistical information from NMI client applications. You can specify this parameter to change the socket name that the program automatically generates. The socket file is generated in the directory /var/sock. The socket name is constructed by combining the file path, group attach name, Q Apply schema name, and the program name (asnqapp). An example socket name is /var/sock/V91A_ASN_asnqapp.

To use this parameter you must set nmi_enable=y.

After a Q Apply program is started with either a default or a user-defined NMI socket name, the name cannot be changed dynamically.

oracle_empty_str

Default: oracle_empty_str=y for Oracle targets (must be set manually), n for Db2 targets on Linux, UNIX, Windows, and z/OS

Methods of changing: When Q Apply starts, IBMQREP_APPLYPARMS table

The oracle_empty_str parameter specifies whether the Q Apply program replaces an empty string in VARCHAR columns with a space.

Db2 allows empty strings in VARCHAR columns. When a source Db2 VARCHAR column is mapped to an Oracle target, or to a Db2 server that is running with Oracle compatibility mode, the empty string is converted to a NULL value. The operation fails when the target column is defined with NOT NULL semantics.

With oracle_empty_str=y, Q Apply replaces the empty string with a one-character space in the application code page just before applying the data to the target and after any code page conversion. If you are using SQL expressions in any Q subscriptions, take the following considerations into account:

  • When expressions are defined on nonkey and key columns, Q Apply transforms the data for dependency analysis before applying it to the target. Therefore, Q Apply looks for NULL values in the transformed data, and if found replaces the NULL value with a space.
  • When expressions are defined only on nonkey columns, Q Apply looks for NULL values in the nontransformed data (the source data after codepage conversion), and if found replaces the NULL value with a space. Any SQL expressions are evaluated on the space value.

You can also specify oracle_empty_str=t and Q Apply replaces an empty string with a space only if the target column is defined as NOT NULL.

p2p_2nodes

Default: p2p_2nodes=n

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The p2p_2nodes parameter allows the Q Apply program to optimize performance in a peer-to-peer configuration with only two active servers by not logging conflicting deletes in the IBMQREP_DELTOMB table. Use the setting p2p_2nodes=y only for peer-to-peer replication with two active servers.

By default, the Q Apply program records conflicting DELETE operations in the IBMQREP_DELTOMB table. With p2p_2nodes=y the Q Apply program does not use the IBMQREP_DELTOMB table. This avoids any unnecessary contention on the table or slowing of Q Apply without affecting the program's ability to correctly detect conflicts and ensure data convergence.

Important: The Q Apply program does not automatically detect whether a peer-to-peer configuration has only two active servers. Ensure that the option p2p_2nodes=y is used only for a two-server peer-to-peer configuration. Using the option for configurations with more than two active servers might result in incorrect conflict detection and data divergence.

prune_batch_size

Default: prune_batch_size=1000

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The prune_batch_size parameter specifies the number of rows that are deleted from the IBMQREP_DONEMSG table (and corresponding messages deleted from the receive queue) in one commit scope. The default is 1000 rows. The minimum value is 2.

The IBMQREP_DONEMSG table is an internal table that the Q Apply program uses to record all transaction and administrative messages that are received. Until they are deleted, the records in this table help ensure that messages are not processed more than once (for example, after a system failure). During regular execution, Q Apply follows the value of prune_batch_size when it deletes rows from the table and messages from the receive queue.

Q Apply follows the value set for this parameter regardless of the setting for the prune_method parameter.

prune_interval

Default: prune_interval=300 seconds (5 minutes)

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The prune_interval parameter determines how often a Q Apply program looks for old rows to delete from the IBMQREP_APPLYMON, IBMQREP_APPLYTRACE, IBMQREP_MCGMON, and IBMQREP_APPEVENTS tables. By default, a Q Apply program looks for rows to prune every 300 seconds (5 minutes).

Your pruning frequency depends on how quickly these control tables grow, and what you intend to use them for:

Shorten the prune interval to manage monitor tables
A shorter prune interval might be necessary if the IBMQREP_APPLYMON table is growing too quickly because of a shortened monitor interval. If this table is not pruned often enough, it can exceed its table space limit, which forces a Q Apply program to stop. However, if the table is pruned too often or during peak times, pruning can interfere with application programs that run on the same system.
Lengthen the prune interval for record keeping
You might want to keep a longer history of a Q Apply program's performance by pruning the IBMQREP_APPLYTRACE and IBMQREP_APPLYMON tables less frequently.

The prune interval works in conjunction with the trace_limit and monitor_limit parameters, which determine when data is old enough to prune. For example, if the prune_interval is 300 seconds and the trace_limit is 10080 minutes, a Q Apply program will try to prune every 300 seconds. If the Q Apply program finds any rows in the IBMQREP_APPLYTRACE table that are older than 10080 minutes (7 days), it prunes them.

The prune_interval parameter does not affect pruning of the IBMQREP_DONEMSG table. Pruning of this table is controlled by the prune_method and prune_batch_size parameters.

prune_method

Default: prune_method=2

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The prune_method parameter specifies the method that the Q Apply program uses to delete unneeded rows from the IBMQREP_DONEMSG table. By default (prune_method=2), Q Apply prunes groups of rows based on the prune_batch_size value. A separate prune thread records which messages were applied, and then issues a single range-based DELETE.

When you specify prune_method=1, Q Apply prunes rows from the IBMQREP_DONEMSG table one at a time. First Q Apply queries the table to see if data from a message was applied, then it deletes the message from the receive queue, and then prunes the corresponding row from IBMQREP_DONEMSG by issuing an individual SQL statement.

pwdfile

Default: pwdfile=apply_path/asnpwd.aut

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The pwdfile parameter specifies the name of the encrypted password file that the Q Apply program uses to connect to the Q Capture server. This connection is required only when a Q subscription specifies an automatic load that uses the EXPORT utility. When you use the asnpwd command to create the password file, the default file name is asnpwd.aut. If you create the password file with a different name or change the name, you must change the pwdfile parameter to match. The Q Apply program looks for the password file in the directory specified by the apply_path parameter.
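For example, you might create the password file with the asnpwd command before starting Q Apply. This is a sketch; the path, database alias, user ID, and password are placeholders, so check the asnpwd syntax for your version:

```shell
# Create an empty encrypted password file in the Q Apply path.
asnpwd init using /qapply/asnpwd.aut
# Add credentials for the Q Capture server (alias, ID, and password are examples).
asnpwd add alias CAPDB id repluser password "passw0rd" using /qapply/asnpwd.aut
```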

z/OS: No password file is required.

You can set the pwdfile parameter when you start the Q Apply program, and you can change its saved value in the IBMQREP_APPLYPARMS table. You cannot change the value while the Q Apply program is running.

report_exception

Default: report_exception=y

Methods of changing: When Q Apply starts

The report_exception parameter controls whether the Q Apply program inserts data into the IBMQREP_EXCEPTIONS table when a conflict or SQL error occurs at the target table but the row is applied anyway because the conflict action for the Q subscription is F (force). By default (report_exception=y), Q Apply inserts details into the IBMQREP_EXCEPTIONS table for each row that causes a conflict or SQL error at the target, regardless of whether the row was applied. If you specify report_exception=n, Q Apply does not insert data into the IBMQREP_EXCEPTIONS table when a row causes a conflict but is applied. With report_exception=n, Q Apply continues to insert data about rows that were not applied.

When report_exception=n, the Q Apply program also tolerates codepage conversion errors when writing SQL text into the IBMQREP_EXCEPTIONS table and continues normal processing.

richklvl

Default: richklvl=2

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The richklvl parameter specifies the level of referential integrity checking. By default (richklvl=2), the Q Apply program checks for RI-based dependencies between transactions to ensure that dependent rows are applied in the correct order.

If you specify richklvl=5, Q Apply checks for RI-based dependencies when a key value is updated in the parent table, a row is updated in the parent table, or a row is deleted from the parent table.

A value of 0 prompts Q Apply to not check for RI-based dependencies.

When a transaction cannot be applied because of a referential integrity violation, the Q Apply program automatically retries the transaction until it is applied in the same order that it was committed at the source table.

spill_commit_count

Default: spill_commit_count=10

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The spill_commit_count parameter specifies how many rows are grouped together in a commit scope by the Q Apply spill agents that apply data that was replicated during a load operation. Increasing the number of rows that are applied before a COMMIT is issued can improve performance by reducing the I/O resources that are associated with frequent commits. Balance the potential for improvement with the possibility that fewer commits might cause lock contention at the target table and the IBMQREP_SPILLEDROW control table.

skiptrans

Default: None

Method of changing: When Q Apply starts

The skiptrans parameter specifies that the Q Apply program should not apply one or more transactions from one or more receive queues based on their transaction ID.

Stopping the program from applying transactions is useful in unplanned situations, for example:

  • Q Apply receives an error while applying a row of a transaction and either stops or stops reading from the receive queue. On startup, you might want Q Apply to ignore the entire transaction in error.
  • After the failover from a disaster recovery situation, you might want to ignore a range of transactions on the receive queue from the failover node to the fallback node.
For details on how to specify the transaction identifier or range of identifiers, see Prompting a Q Apply program to ignore transactions.

You can also prompt the Q Capture program to ignore transactions. This action would be more typical when you can plan which transactions do not need to be replicated.

Note: Ignoring a transaction that was committed at the source server typically causes divergence between tables at the source and target. You might need to use the asntdiff and asntrep utilities to synchronize the tables.

startallq

Default: startallq=n (z/OS); y (Linux, UNIX, Windows)

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The startallq parameter specifies how Q Apply processes receive queues when it starts. With startallq=y, Q Apply puts all receive queues in active state and begins reading from them when it starts. When you specify startallq=n, Q Apply processes only the active receive queues when it starts.

You can use startallq=y to avoid having to issue the startq command for inactive receive queues after the Q Apply program starts. You can use startallq=n to keep disabled queues inactive when you start Q Apply.
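For example, if you start Q Apply with startallq=n, you can later activate an individual queue with the startq command. This is a sketch; the server, schema, and queue names are placeholders:

```shell
# Activate one inactive receive queue while Q Apply is running.
asnqacmd apply_server=TARGETDB apply_schema=ASN startq="ASN.QM1_TO_QM2.DATAQ"
```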

term

Default: term=y

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The term parameter controls whether a Q Apply program keeps running when Db2 or the queue manager are unavailable.

By default (term=y), the Q Apply program terminates when Db2 or the queue manager are unavailable. You can change the default (term=n) if you want a Q Apply program to keep running while Db2 or the queue manager are unavailable. When Db2 or the queue manager are available, Q Apply begins applying transactions where it left off without requiring you to restart the program.

Restriction: The setting term=n is not supported for federated targets.
Note: Regardless of the setting for term, if the MQ sender or receiver channels stop, the Q Apply program keeps running because it cannot detect channel status. This situation causes replication to stop because the two queue managers cannot communicate. If you find that replication has stopped without any messages from the Q Replication programs, check for MQ errors. For example, check the channel status by using the MQ DISPLAY CHSTATUS command.
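For example, you might check the channel status from the MQ command console. This is a sketch; the queue manager name QMGR1 and channel name QM1.TO.QM2 are placeholders:

```shell
# Open the MQSC console for the target-side queue manager and display
# the status of the channel between the two queue managers.
runmqsc QMGR1 <<'EOF'
DISPLAY CHSTATUS(QM1.TO.QM2)
EOF
```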

tolerate_lsn_trunc

Default: tolerate_lsn_trunc=n

Methods of changing: When Q Apply starts

The tolerate_lsn_trunc parameter specifies whether the Q Apply program truncates 16-byte log sequence numbers (LSN) to 10 bytes before applying them to consistent-change data (CCD) tables or stored procedure targets. Q Apply uses 16-byte LSNs for source Q Capture programs at v10.2.1 and later. This parameter enables Q Apply to tolerate 10-byte LSN data while you migrate your CCD tables to the longer data length. If you start Q Apply with tolerate_lsn_trunc=y, Q Apply truncates the LSN data to 10 bytes before inserting it into the IBMSNAP_COMMITSEQ and IBMSNAP_INTENTSEQ columns of the CCD table and the SRC_CMT_LSN field of stored procedure targets.

After all CCD target tables or stored procedure targets are modified to handle 16-byte LSNs, you can run Q Apply with tolerate_lsn_trunc=n.

Note: This parameter is not valid when Q Apply is involved in a three-tier data distribution scenario that involves one or more CCD tables at the middle tier. The value of the SQL_CAP_SCHEMA column in the IBMQREP_APPLYPARMS table must be null for this parameter to work.

trace_ddl

Default: trace_ddl=n

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The trace_ddl parameter specifies whether, when DDL operations at the source database are replicated, the SQL text of the operation that the Q Apply program performs at the target database is logged. By default (trace_ddl=n), Q Apply does not log the SQL text. If you specify trace_ddl=y, Q Apply issues an ASN message to the Q Apply log file, standard output, and IBMQREP_APPLYTRACE table with the text of the SQL statement. The SQL text is truncated to 1024 characters.

trace_limit

Default: trace_limit=10080 minutes (7 days)

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The trace_limit parameter specifies how long rows remain in the IBMQREP_APPLYTRACE table before the rows can be pruned.

The Q Apply program inserts all informational, warning, and error messages into the IBMQREP_APPLYTRACE table. By default, rows that are older than 10080 minutes (7 days) are pruned at each pruning interval. Modify the trace limit depending on your need for audit information.

use_applycmd_table

Default: use_applycmd_table=n

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The use_applycmd_table parameter specifies whether the Q Apply program reads the IBMQREP_APPLYCMD table to look for user-inserted commands to be processed. If you specify use_applycmd_table=y, Q Apply reads the IBMQREP_APPLYCMD table every n milliseconds. The frequency is determined by the value of the applycmd_interval parameter. The Q Apply initial thread reads from this table and asynchronously executes any commands that were inserted, by using the existing asnqacmd (Linux, UNIX, Windows) or MODIFY (z/OS) command-processing infrastructure.

warntxlatency

Default: warntxlatency=0

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The warntxlatency parameter specifies whether the Q Apply program issues warning messages when the apply latency of a transaction exceeds a threshold. Apply latency is the time that elapses between getting a message from the receive queue and committing the transaction at the target table. You enable warnings and set the warning threshold, in milliseconds, by specifying an integer value greater than 0.

If you set warntxlatency to 10, for example, Q Apply would issue the ASN7878W and ASN7879W messages whenever the apply latency of any transaction exceeds 10 milliseconds. You use the warntxevts and warntxreset parameters to control the number of warnings and the reset interval for the warnings. Q Apply also issues summary messages at the end of each reset period that include transactions that exceeded the latency threshold.

By default (warntxlatency=0), warning messages for apply latency are not enabled.
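Because warntxlatency can be changed while Q Apply is running, you might enable warnings dynamically with the chgparms option of the asnqacmd command. This is a sketch; the server and schema names are placeholders:

```shell
# Warn when the apply latency of any transaction exceeds 10 milliseconds.
asnqacmd apply_server=TARGETDB apply_schema=ASN chgparms warntxlatency=10
```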

warntxevts

Default: warntxevts=10

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The warntxevts parameter specifies the maximum number of latency warning messages that the Q Apply program issues during a reset interval. By default (warntxevts=10), Q Apply issues warning messages for the first 10 transactions that exceed the latency threshold. Any transactions beyond 10 that exceed the threshold do not prompt individual warning messages but are instead included in summary messages at the end of the reset period.

Allowed values range from 0 to the maximum integer value for Db2. This option is ignored if WARNTXLATENCY is set to 0.

warntxreset

Default: warntxreset=300000 milliseconds (5 minutes)

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The warntxreset parameter specifies a time interval in milliseconds that you set for warning messages about apply transaction latency. At the end of the interval the Q Apply program issues summary messages ASN7881W and ASN7882W and resets its latency counters if any transactions exceeded the latency threshold.

By default (warntxreset=300000), Q Apply resets the latency counters after 5 minutes if any transactions exceed the threshold.

Allowed values range from 60000 (1 minute) to the maximum integer value for Db2. This option is ignored if WARNTXLATENCY is set to 0. This option must have a value greater than WARNTXLATENCY.