Question & Answer
Question
What are the Db2 diagnostic and transaction log files and tips to maintain them?
Answer
The log files that Db2 generates and uses are described below.
Log files that Db2 writes in a circular or rotating fashion generally do not need maintenance, although in some cases the log file size may require tuning.
In this document, the term "log file" is not limited to the Db2 transaction log files referenced in documentation such as the Knowledge Center.
Environment variables referenced: INSTANCE_DIR, DIAGPATH, DBPATH and DASHOME
$INSTANCE_DIR | UNIX/Linux: <the home directory of the Db2 instance owner>/sqllib Windows: %DB2INSTPROF%\%DB2INSTANCE% Note: Confirm the value of DB2INSTPROF by using the db2set -all command. |
$DIAGPATH | The value of the diagpath database manager configuration parameter. If diagpath is not set, the default location depends on the operating system and, on Windows, on the Windows version. |
$DBPATH | 1. Check the values of "Local database directory" and "Catalog database partition number" by running the list db directory command. 2. Check the name of the "Database directory" (such as SQL00001) under the corresponding NODE000n subdirectory. 3. $DBPATH is the combination of the values from steps 1 and 2 and the instance name: ["Local database directory"]/[Db2 instance name]/NODE000["Catalog database partition number"]/["Database directory"] For example (hypothetical values): /home/db2inst1/db2inst1/NODE0000/SQL00001. In V10.1 ESE or pureScale, a "Member name" such as MEMBER0000 is appended after the "Database directory" (such as SQL00001).
Note: The current value of $DBPATH for the active databases can be obtained by running the "list active databases" command.
|
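As a quick illustration, assuming a configured Db2 instance with a database named SAMPLE (a placeholder name), the database path can be checked from the command line:

```shell
# Activate the database so it appears in the active database list.
# SAMPLE is a hypothetical database name.
db2 activate database SAMPLE

# The "Database path" field in this output corresponds to $DBPATH
# for each active database.
db2 list active databases
```

These commands require a working Db2 environment sourced for the instance owner.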
$DASHOME | DASHOME is the home directory of the DAS (Db2 Administration Server) owner. Linux/UNIX: the home directory of the DAS instance owner Windows: %DB2INSTPROF%\<DAS instance name> |
Transaction log
|
|
Output path | Active log path: the logpath and mirrorlogpath database configuration parameters. Archive log path: the logarchmeth1/logarchmeth2 database configuration parameters, or the path specified in the userexit program. |
Description |
Db2 generates transaction log entries to record all changes to a database. These entries may be needed during crash recovery, or during rollforward recovery after restoring a backup, to ensure consistency.
In circular logging mode, transaction logs are reused and the number of log files does not exceed the sum of the logprimary and logsecond database configuration parameters.
In archive logging mode, the transaction log files need to be maintained.
To confirm which logging mode the database is using, check the logarchmeth1 and logarchmeth2 database configuration parameters. When both are set to OFF, the database is using circular logging mode; otherwise it is using archive logging mode.
|
References |
logarchmeth1 - Primary log archive method configuration parameter
logarchmeth2 - Secondary log archive method configuration parameter
The following technote is a good reference on how to maintain transaction log files:
When transaction logs are full, please refer to the following technote:
|
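As a sketch of the check described above, assuming a database named SAMPLE (a placeholder), the logging mode can be confirmed from the Db2 command line:

```shell
# Show the log-related database configuration parameters.
# If both LOGARCHMETH1 and LOGARCHMETH2 are OFF, the database uses
# circular logging; otherwise it uses archive logging.
db2 get db cfg for SAMPLE | grep -i -e logarchmeth -e logprimary -e logsecond

# Example: switch to archive logging on a disk path (hypothetical path).
# Note: after this change the database enters backup-pending state and
# a full backup is required before it can be used again.
db2 update db cfg for SAMPLE using LOGARCHMETH1 "DISK:/db2/archive_logs/"
```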
Recovery history file | |
Output path |
$DBPATH/db2rhist.*.
|
Description |
The recovery history file is created for each database and is used to record database related operations such as backup, restore, altering table space, archiving log file, etc.
If backups using the BACKUP DATABASE command are not performed, the recovery history file should be maintained periodically using the PRUNE HISTORY command.
The entries in the recovery history file that are older than the timestamp passed to the PRUNE HISTORY command will be removed. How long to keep entries in the recovery history file depends on the business requirements.
Taking periodic backups using the BACKUP DATABASE command will automatically prune the entries in the recovery history file.
When the BACKUP DATABASE command is run, entries for backups older than the number specified in the num_db_backups database configuration parameter are marked as expired. Related entries for those backups, such as table space backups and load copy backups, are also marked as expired.
The next time the BACKUP DATABASE command is run, the entries that are marked as expired are removed.
|
References |
It is recommended to keep the recovery history file from becoming too large, as a large file may impact the performance of certain database operations:
|
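As a minimal sketch of manual pruning, assuming a database named SAMPLE (a placeholder name):

```shell
# Connect to the database (SAMPLE is a hypothetical name).
db2 connect to SAMPLE

# Remove recovery history entries older than the given timestamp
# (here: entries before 31 December 2019; format is yyyymmdd, and a
# longer yyyymmddhhmmss form is also accepted).
db2 prune history 20191231

# Check the parameters that govern automatic expiry when periodic
# backups are taken.
db2 get db cfg for SAMPLE | grep -i -e num_db_backups -e rec_his_retentn
```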
Db2 diagnostic log | |
Output path |
$DIAGPATH/db2diag.log
|
Description |
The Db2 diagnostic log (db2diag.log) file records diagnostic information such as errors or warnings encountered by the database manager.
The Db2 diagnostic log files are primarily intended for use by IBM Software Support for troubleshooting purposes.
Starting in V9.7, one can set the DIAGSIZE database manager configuration parameter to limit the total size of the Db2 diagnostic log and administration notification log files. Setting the parameter to a non-zero value and restarting the instance results in a set of rotating log files being used instead of appending indefinitely to the single (default) log file.
(In Windows environments, the administration notification log cannot be limited by the DIAGSIZE parameter because it is written to the Windows event log.)
Please refer to following technote for additional details:
|
Reference | Db2 diagnostic (db2diag) log files |
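To illustrate the DIAGSIZE approach above, a minimal sketch (the value 512 is an arbitrary example in MB):

```shell
# Limit the combined size of the rotating diagnostic log files to 512MB.
# The change takes effect after the instance is restarted.
db2 update dbm cfg using DIAGSIZE 512
db2stop
db2start

# Alternatively, archive the current db2diag.log on demand:
# db2diag -A renames the current file with a timestamp suffix and
# starts a fresh db2diag.log.
db2diag -A
```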
Administration notification log | |
Output path | Linux/Unix: $DIAGPATH/[instance_name].nfy Windows: application event log |
Description |
The administration notification log is the repository from which information about numerous database administration and maintenance activities can be obtained. A database administrator can use this information to diagnose problems, tune the database, or simply monitor the database.
One can configure the retention policy for Windows application event log to make the application log cycle through separate log files. Please refer to the following document published by Microsoft for additional details:
|
Reference |
Db2 Administration Server (DAS) diagnostic log | |
Output path | Linux/UNIX: $DASHOME/das/dump/db2dasdiag.log Windows: $DASHOME\dump\db2dasdiag.log |
Description |
This log file contains diagnostic information about errors and warnings recorded by the DAS.
NOTE: Db2 Administration Server (DAS) functionality has been deprecated in V9.7.
|
References |
Error log files for instance start/stop | |
Output path | Error log file for instance start: Linux/Unix: $INSTANCE_DIR/ctrl/db2start.timestamp.PID.errlog Windows: $INSTANCE_DIR\ctrl\db2start.exe.timestamp.PID.errlog Error log file for instance stop: Linux/Unix: $INSTANCE_DIR/ctrl/db2stop.timestamp.PID.errlog Windows: $INSTANCE_DIR\ctrl\db2stop.exe.timestamp.PID.errlog NOTE: prior to V10.1, the error log files will be created in the log subdirectory under $INSTANCE_DIR |
Description |
The error log file is created only when the instance encounters errors while starting or stopping. No error log files are created if the instance starts or stops successfully.
|
Reference | N/A |
Governor log files | |
Output path | Linux/Unix: $INSTANCE_DIR/log Windows: $INSTANCE_DIR\log |
Description |
The governor monitors the behavior of applications that run against a database and can change that behavior, depending on the rules specified in the governor configuration file.
Whenever a governor daemon performs an action, it writes an entry to this log file. Log entries are only written when the governor is running.
The base name for the log file is specified when the governor is started with the db2gov command. For a partitioned database, the partition number is automatically added to the governor log file base name. NOTE: The governor utility is deprecated since V9.7.
|
References | Governor log files Starting the governor |
Db2 audit log | |
Output path |
$INSTANCE_DIR/security/auditdata directory
One can modify the "Db2 audit log" file destination by specifying the datapath option in the db2audit command.
One can modify the "Archived audit log" file destination by specifying the archivepath option in the db2audit command.
To confirm the current setting, the db2audit describe command can be run.
The file name of audit log is:
where nnn is the 3-digit database partition number.
|
Description |
The Db2 audit facility generates and permits the maintenance of an audit trail for a series of predefined database events. The records generated by this facility are kept in the audit log files.
Do NOT remove the Db2 audit log file manually. The audit log file is truncated when its content is archived by using the db2audit archive command.
The file name for an archived audit log is:
where nnn is the 3-digit database partition number and timestamp reflects when it was archived.
The archived audit log files are not removed automatically; they can be deleted using OS commands as needed.
One can extract the audit data from the archived audit log files by using the db2audit extract command. The output file name can be specified with the extract file option; the default file name is db2audit.out. |
References | Storage and analysis of audit logs Audit log file names [DB2 LUW] How to configure auditing in DB2 9.5 and later (IM-10-0AF) |
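A sketch of the archive-then-extract workflow described above (the database name and archived file name are placeholders):

```shell
# Show the current audit configuration, including the data path and
# the archive path.
db2audit describe

# Archive the current audit log for a database; this truncates the
# active audit log. SAMPLE is a hypothetical database name.
db2audit archive database SAMPLE

# Extract the archived records into a text file. The archived file
# name pattern below is illustrative; use the actual file name
# reported by the archive step.
db2audit extract file audit_report.out from files db2audit.db.SAMPLE.log.0.*
```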
Q Capture log (*.QCAP.log) | |
Output path |
db2instance.capture_server.capture_schema.QCAP.log
under the directory specified in the capture_path option in the asnqcap command.
Default is the current directory where the asnqcap command is run.
|
Description |
By default, the Q Capture log file is appended to indefinitely, even if the Q Capture program is restarted. It is recommended to keep the default setting so that the diagnostic history remains available to support if an issue occurs.
Specifying the logreuse=y option when starting the Q Capture program deletes and recreates the log file. The following are use cases for this option:
|
References |
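As a minimal sketch of the logreuse option above (server and schema names are placeholders):

```shell
# Start Q Capture, deleting and recreating its *.QCAP.log file instead
# of appending to it. capture_server and capture_schema values are
# hypothetical.
asnqcap capture_server=SAMPLE capture_schema=ASN logreuse=y
```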
Q Apply log (*.QAPP.log) | |
Output path | db2instance.apply_server.apply_schema.QAPP.log This is under the directory specified in the apply_path option in the asnqapp command. Default is the current directory where asnqapp command is run. |
Description |
By default, the Q Apply log file is appended to indefinitely, even if the Q Apply program is restarted. It is recommended to keep the default setting so that the diagnostic history remains available to support if an issue occurs.
Specifying the logreuse=y option when starting the Q Apply program deletes and recreates the log file. The following are use cases for this option:
|
References |
SQL Replication Capture log (*.CAP.log) | |
Output path | db2instance.capture_server.capture_schema.CAP.log This is under the directory specified in capture_path option in asncap command. Default is the current directory where the asncap command is run. |
Description |
By default, the SQL Replication Capture log file is appended to indefinitely, even if the SQL Replication Capture program is restarted. It is recommended to keep the default setting so that the diagnostic history remains available to support if an issue occurs.
Specifying the logreuse=y option when starting the SQL Replication Capture program deletes and recreates the log file. The following are use cases for this option:
|
References |
SQL Replication Apply log (*.APP.log) | |
Output path | db2instance.control_server.apply_qualifier.APP.log This is under the directory specified in apply_path option in asnapply command. Default is the current directory where the asnapply command is run. |
Description |
By default, the SQL Replication Apply log file is appended to indefinitely, even if the SQL Replication Apply program is restarted. It is recommended to keep the default setting so that the diagnostic history remains available to support if an issue occurs.
Specifying the logreuse=y option when starting the SQL Replication Apply program deletes and recreates the log file. The following are use cases for this option:
|
References |
Replication alert monitor log | |
Output path |
db2instance.monitor_server.monitor_qualifier.MON.LOG
under the directory specified in the monitor_path option in the asnmon command. Default is the current directory where the asnmon command is run.
|
Description |
By default, the replication alert monitor log file is appended to indefinitely, even if the alert monitor program is restarted. It is recommended to keep the default setting so that the diagnostic history remains available to support if an issue occurs.
Specifying the logreuse=y option when starting the alert monitor program deletes and recreates the log file. The following are use cases for this option:
Additionally, trace information can be written to stdout by specifying the debug=yes option when starting the alert monitor program. The following example starts monitoring and redirects the trace output to the monitor.trc file:
asnmon monitor_server=mondb monitor_qual=MON1 debug=yes > monitor.trc |
References |
Event monitor files (WRITE TO FILE option) | |
Output path |
$DBPATH/db2event/[event_monitor_name] directory
|
Description |
An event monitor collects specific events that occur during database use.
The collected events can be recorded to a file, a pipe, or a table. If the event monitor is configured to write to files, collecting and recording events stops when the output files have reached their configured maximum size.
The size of the files for each event monitor can be specified using the MAXFILESIZE option in the CREATE EVENT MONITOR statement.
Default for this option is:
One can limit the maximum number of event monitor files in the destination directory by specifying the MAXFILES option in the CREATE EVENT MONITOR statement. The default is NONE, i.e., the number of files is unlimited.
Run the following query to get the current settings:
select EVMONNAME, MAXFILES, MAXFILESIZE from SYSCAT.EVENTMONITORS |
References | Event monitor file management |
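A sketch of capping event monitor output with the options above (the monitor name, database name, and output path are hypothetical; MAXFILESIZE is in 4KB pages):

```shell
# Connect to the database (SAMPLE is a placeholder name).
db2 connect to SAMPLE

# Create a file event monitor limited to at most 5 files of
# 1000 4KB pages each; the target directory must exist.
db2 "CREATE EVENT MONITOR stmtmon FOR STATEMENTS \
     WRITE TO FILE '/tmp/stmtmon' MAXFILES 5 MAXFILESIZE 1000"
db2 "SET EVENT MONITOR stmtmon STATE 1"

# Verify the configured limits.
db2 "select EVMONNAME, MAXFILES, MAXFILESIZE from SYSCAT.EVENTMONITORS"
```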
Statistics log | |
Output path |
$DIAGPATH/events/db2optstats.n.log (default)
n is the file number starting from 0.
|
Description |
The statistics log records all statistics collection activities, both manual and automatic, against a database. By default, the log rotates through 5 files with a total size of 15MB.
One can modify the total log size, the number of log files, the destination directory, and the file name by setting the DB2_OPTSTATS_LOG registry variable. |
References | Automatic statistics collection activity logging DB2_OPTSTATS_LOG |
Self-tuning memory manager (STMM) log | |
Output path |
$DIAGPATH/stmmlog/stmm.n.log
where n is a 1-digit value (0-4). |
Description |
These files log the self-tuning activities of the Db2 Self-Tuning Memory Manager (STMM).
Five files of 10MB each are created, and the activities are logged to these files in a rotating manner.
The size and number of the log files cannot be changed. |
Reference | N/A |
Syncpoint manager (SPM) log | |
Output path |
$INSTANCE_DIR/spmlog directory
|
Description |
When the spm_name database manager configuration parameter is specified and is not NULL (which is the default), the syncpoint manager (SPM) starts when the Db2 instance starts.
SPM is used for managing two-phase commit between Db2 for LUW and host databases such as Db2 for z/OS and Db2 for i. The SPM log files record two-phase-commit transaction manager activities and are used for possible recovery.
Three SPM log files are created in the directory specified by the spm_log_path database manager configuration parameter; their size (in number of 4KB pages) is specified by the spm_log_file_sz parameter. These log files are used in a rotating manner, and the number of log files cannot be changed. |
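The SPM-related settings described above can be inspected with a simple check:

```shell
# List the syncpoint manager parameters (spm_name, spm_log_path,
# spm_log_file_sz, spm_max_resync) from the database manager
# configuration.
db2 get dbm cfg | grep -i spm
```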
References |
Db2 event log | |
Output path | $DIAGPATH/db2eventlog.nnn where the 3-digit nnn is the dbpartitionnum value for the node in the db2nodes.cfg file. |
Description | The Db2 event log file records infrastructure-level events that occur in the database manager between a db2start and a db2stop. The event log is maintained in a circular buffer whose size is defined by the DB2_EVENT_LOG_CONFIG registry variable. The default size is 6MB, and the file does not need to be maintained manually. |
References |
Additional Information:
The log files listed below may be created in the DIAGPATH when a problem occurs (for example, when the database manager crashes). These files are not created when the system is operating normally.
These files, especially system core files, can be very large. Check whether any of these files have been created when you encounter a problem.
Related Information
First occurrence data capture (FODC)
Document Information
Modified date:
01 December 2020
UID
ibm11073238