Advanced Log Space Management
When running Db2® 11.5.4 and later, use Advanced Log Space Management (ALSM) to reduce your likelihood of hitting transaction log full conditions. The feature is enabled by setting the DB2_ADVANCED_LOG_SPACE_MGMT registry variable to ON.
Overview
Transaction log full conditions are typically caused by long running transactions or operations that prevent active log files from being reused. Examples include:
- A LOAD operation, including an ADMIN_MOVE_TABLE operation that makes use of LOAD.
- A CREATE INDEX operation.
- An indoubt transaction.
- An idle application that has not issued a COMMIT or ROLLBACK after modifying the database.
With ALSM, the Db2 transaction manager identifies long running transactions that could be causing a log full condition and extracts (by copying) their log data from the active log files into separate extraction log files dedicated for the transaction. This allows the original log files to be removed to free up disk space, so that new active log files can be created.
When enabled, ALSM will periodically check the log space usage and the active transactions to evaluate if there is any benefit in extracting log data. There are cases when extraction provides limited or no benefit. For example, if log archival is not working properly, all log files will remain in the active log path. In this case extraction would provide no benefit because the extracted data would only duplicate the log data already present in the corresponding log files, and it would not be possible to free up such log files.
Another example is insufficient disk space. ALSM is designed to never interfere with the disk space needed by the configured active and secondary log files. This evaluation process is called "throttling", and the complete list of the throttle reasons is described in detail below. The throttling conditions are also evaluated periodically during an active extraction process in order to determine whether it is still beneficial to continue extracting. In summary, ALSM has been designed to prevent most log full conditions, but not all.
In general, ALSM has a relatively low performance overhead, resulting in low intrusiveness to an ongoing workload. If the current transactional workload requires a high amount of system resources, ALSM will gradually yield to the ongoing workload so that the transactional processing can use the system resources it needs. The success of a running workload is prioritized over extracting log data.
Apart from the small performance overhead, ALSM is designed to be resilient to database stability issues. If an extraction log file becomes unavailable, the original active log file is used instead. If that log file has already been archived, it is retrieved before being used.
Process model
All ALSM extraction logic is contained within a single-threaded engine dispatchable unit (EDU), db2loggx. If configured to run, the log extraction EDU is started as a background agent during database activation, or during a user or utility connection that drives an operation such as crash recovery or database rollforward.
Once started, db2loggx takes periodic samples of the active log path and other relevant logging parameters to determine if extraction should begin. Because of this sampling, the log extraction EDU consumes a small amount of system resources even when no extraction is running. However, the impact of this overhead on performance is negligible.
When conditions permit extraction, db2loggx initiates a single log stream scan through all relevant log files. Log records from the log stream scan are used to populate the log extraction files as necessary. When the extraction is finished, the log stream scan is closed and the EDU returns to the default sampling state. The log extraction EDU continues to exist until the database is deactivated. When db2loggx is terminated, deactivation performs a full clean-up of memory resources and of extraction files that are no longer needed.
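The control flow described above can be sketched as a simple loop. This is a conceptual illustration only; `sample_state`, `should_extract`, and `run_extraction_scan` are hypothetical placeholders, not Db2 interfaces:

```python
import time

def loggx_loop(sample_state, should_extract, run_extraction_scan,
               interval_s: float = 1.0, stop=lambda: False):
    """Conceptual sketch of the db2loggx control flow: periodically sample
    logging state; when conditions permit, run one extraction scan, then
    return to sampling. All callables are placeholders."""
    while not stop():
        state = sample_state()          # sample active log path + parameters
        if should_extract(state):       # throttling decides if a scan starts
            run_extraction_scan(state)  # single scan over relevant log files
        time.sleep(interval_s)
```

The real EDU also reacts to database deactivation and error states, which this sketch omits.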
Extraction log files
Extraction log files are located in the active log path specified by the logpath database configuration parameter. An extraction file can be thought of as a self-contained log file holding data for only a single transaction, one that has the potential to cause a transaction log full error. The parent active log file typically contains many more log records, including log records for other transactions that have already ended.
Extraction log files use disk space outside of the configured upper limit on the number of log files (logprimary and logsecond). ALSM is designed not to interfere with the log space configured for the database. Extraction log files are not managed by log archiving; they are removed from the active log path only once the transaction contained in the particular extraction log file finishes.
There are three types of extraction log files:
- X<log file number>_TID<tid>_<tidLogStreamId>.LOG
An extraction transaction ID (TID) file. This file contains extracted log records from the log file <log file number> for a single transaction identified by <tid>_<tidLogStreamId>. If available, the TID file is used for rollback, currently committed and all recovery purposes, including crash recovery and database rollforward. There is one TID file per active log file per transaction ID.
- X<log file number>.TMP
A temporary metadata file describing transactions and log records extracted from the log file <log file number>. This file gets created while active extraction is in progress and has not yet completed.
- X<log file number>.META
A permanent metadata file describing transactions and log records extracted from the log file <log file number>. This file gets created by renaming the TMP file described above after extraction has finished processing the current log file.
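The three naming patterns above can be parsed mechanically. The following sketch is illustrative only; it is not a Db2 API, and the exact character set of the <tid> field is an assumption:

```python
import re

# Patterns mirroring the three extraction file types described in the text.
# The assumption that <tid> is hexadecimal is ours, not from the docs.
TID_RE = re.compile(r"^X(?P<lognum>\d+)_TID(?P<tid>[0-9A-Fa-f]+)_(?P<stream>\d+)\.LOG$")
TMP_RE = re.compile(r"^X(?P<lognum>\d+)\.TMP$")
META_RE = re.compile(r"^X(?P<lognum>\d+)\.META$")

def classify_extraction_file(name: str):
    """Return (kind, log file number) for an extraction file name, or None
    if the name is not one of the three extraction file patterns."""
    for kind, rx in (("TID", TID_RE), ("TMP", TMP_RE), ("META", META_RE)):
        m = rx.match(name)
        if m:
            return kind, int(m.group("lognum"))
    return None
```

For example, an ordinary active log file such as S0000123.LOG does not match any of the three patterns.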
Advanced throttling
ALSM needs to have a defined set of criteria to select qualifying log records and control when extraction starts and finishes. This process is called throttling. The criteria are examined and applied to the log extraction scan during the following events:
- Extraction scan start
This event is triggered periodically at predefined intervals. The lifetime of the event is closely tied to the lifetime of the ALSM EDU, db2loggx. Normally, the event starts to occur after database activation and continues to be triggered until database deactivation. In error cases that disable the extraction scan, the event ceases to occur as soon as db2loggx is terminated; the database may still be active at that point.
- Extraction scan restart
This event is triggered by the extraction scan every time the scanner reads a transaction log record; it occurs only on an active database with an ongoing extraction scan. After a new log record is read, the log record is examined to determine whether extraction should proceed.
- New active log file
This event is triggered by the extraction scan every time the scanner transitions to a new active transaction log file; it occurs only on an active database with an ongoing extraction scan.
Throttling ensures the extraction process does not impact other critical database operations. Special consideration is given to the amount of disk space consumed by the extraction process. Extraction log files must never occupy too much space in the active log path, to avoid contributing to a database outage. Another goal for throttling is to prevent cases where extraction would provide limited or no benefit.
If a transaction log full error still occurs while throttling is in effect, the throttle reason is printed to the Db2 diagnostics log at the point where the error occurred, for problem determination. See the Problem determination section below.
For information regarding advanced throttling, see log_extraction_throttle_reason - Reason for extraction throttling monitor element.
- When throttling occurs
Throttling can occur for any of the following reasons:
- Throttling not enabled
Throttling is not available, for example because ALSM has been disabled.
Throttle reason: n/a
- Disk full
During runtime and recovery undo, throttle extraction if the extraction log files would interfere with the upper limit of the configured number of log files for the database. This rule ensures that the disk space needed for extraction files is not taken away from the configured log space for the database. Running out of disk space in the active log path during regular logging activities could result in an unexpected database shutdown, therefore this condition overrules any other throttle reason.
This rule accounts for the logprimary and logsecond database configuration parameters, and ensures that extraction log files never breach this space. There are special cases when Db2 may end up with more primary log files than specified by logprimary (for example, with extraction running), or with fewer primary log files (for example, when the database is in the process of being activated and log files are being allocated asynchronously). Additionally, the logsecond parameter can change dynamically. The rule accounts for all such cases.
When log spooling is enabled on an HADR standby database, extraction is throttled to protect disk space for log spooling. If the hadr_spool_limit database configuration parameter is set to a fixed value, this rule ensures that extraction does not take away any disk space needed for log spooling. If hadr_spool_limit is set to AUTOMATIC and disk space is limited, this rule forces extraction to share disk space between itself and log spooling. If hadr_spool_limit is set to -1, this rule does not throttle extraction.
Throttle reason: DISK_FULL
- Distance from active log file
Throttle extraction if the log file being considered for extraction is the current active log file for writing. Extraction only works on closed log files.
Throttle reason: CURRENT_ACTIVE_FILE
- Log archiving
Throttle extraction if log archiving has not been enabled for the database, or if the log file being considered for extraction has not yet been archived. Active log files waiting to be archived will always stay in the active log path, and extraction would only duplicate disk space without any added benefit.
Throttle reason: LOG_ARCHIVING
- Log space usage
Throttle extraction when consumed active log space is below a calculated threshold. This rule is designed to save system resources and ensures that extraction only starts when the database is close to running out of the configured log space.
Throttle reason: DB_LOG_SPACE_USED
- Extraction ratio
Throttle extraction if the sum of the extracted data exceeds a calculated percentage limit of the configured log space. The purpose of this rule is to prevent extraction for very large transactions, which would largely duplicate what is already in the active log files.
Throttle reason: EXTRACTION_RATIO
- New extraction zone
Do not extract data if a new extraction zone has been detected and the extraction log files before this zone are no longer needed. An extraction zone is the range of log records, from start to end, that the extraction scan needs to process.
Throttle reason: NEW_EXTRACTION_ZONE
- Buffer pool flush needed
Throttle extraction if the currently processed log record belongs to a page that has not yet been flushed to disk. Log records for pages that have not been flushed from the buffer pool will always be needed for recovery purposes, and thus would always need to be extracted. This behavior can be influenced by the PAGE_AGE_TRGT_MCR and PAGE_AGE_TRGT_GCR database configuration parameters.
Throttle reason: SLOW_BP_FLUSH
- Previous extraction error
Throttle extraction if the currently processed log file needs to be skipped. Upon encountering certain types of errors, the extraction scanner may decide to skip the currently processed log file and restart the scan in the next log file.
Throttle reason: SCAN_ERROR
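Two of the rules above, disk full and extraction ratio, can be illustrated with a simplified model. This is a conceptual sketch under stated assumptions (4 KB log pages, an invented 50% ratio limit); the real Db2 calculations also handle AUTOMATIC settings, asynchronous log file allocation, and HADR spooling:

```python
PAGE_SIZE = 4096  # Db2 log pages are 4 KB

def disk_full_throttled(disk_free_bytes, logprimary, logsecond,
                        logfilsiz_pages, log_files_on_disk):
    """DISK_FULL sketch: extraction files may only use space left over
    after the configured log space (logprimary + logsecond files of
    logfilsiz pages) is fully reserved."""
    file_bytes = logfilsiz_pages * PAGE_SIZE
    configured = (logprimary + logsecond) * file_bytes
    # Space still owed to configured log files not yet allocated on disk:
    reserve = max(0, configured - log_files_on_disk * file_bytes)
    # Throttle unless at least one more file's worth of space remains
    # beyond the reserve (illustrative safety margin, not Db2's exact rule).
    return disk_free_bytes - reserve < file_bytes

def extraction_ratio_throttled(extracted_bytes, logprimary, logsecond,
                               logfilsiz_pages, limit_pct=50.0):
    """EXTRACTION_RATIO sketch: stop extracting once the total extracted
    data exceeds a percentage of the configured log space. limit_pct is
    an invented illustrative threshold, not the value Db2 uses."""
    configured = (logprimary + logsecond) * logfilsiz_pages * PAGE_SIZE
    return extracted_bytes > configured * limit_pct / 100.0
```

The point of the sketch is the ordering of priorities: the disk-full reserve is computed first and is never given up to extraction, matching the statement that DISK_FULL overrules other throttle reasons.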
Interactions
- Crash recovery
The feature has full support for crash recovery. If a database outage happens while extraction files are present, the extraction files will be used during the redo and undo phase of the subsequent crash recovery. This helps speed up the recovery process as previously archived log files do not need to be retrieved back into the active log path. If the extraction files are unusable, for example due to a disk error, previously archived log files will be retrieved as usual and crash recovery will use the retrieved log files instead. Extraction during crash recovery is controlled by throttling rules similar to the runtime rules. ALSM will make every effort not to fail due to a transaction log or disk full error during crash recovery, therefore new extraction files may be produced during the redo and undo phase of crash recovery.
- Online traditional backup and restore (Starting from version 11.5.6)
- BACKUP
When log files are included in a backup image, Db2 will include a range of logs suitable to recover the database to a point in time that represents the end of the backup operation. For a database-level recovery, this represents the minimum point in time where the database can be guaranteed to be consistent.
If extraction log files are available at the time of the backup, Db2 will select the optimal combination of extraction and active log files to be included in the image that will guarantee the recovery point promise made above.
Extraction log files will not be included in the backup image if the backup is taken offline or if the EXCLUDE LOGS parameter is specified on the BACKUP command.
If an error is encountered while attempting to back up an extraction log file, Db2 will stop backing up extraction log files for the log stream in question. It will instead back up active log files beginning at the failing extent.
Note that when ALSM is configured, new backup objects are included in an online backup image. The following restrictions are in place:
- Db2 11.5.3 and earlier cannot restore an online backup image taken in Db2 11.5.6 or later.
- Db2 11.5.4 and 11.5.5 can restore an online backup image taken in Db2 11.5.6 or later. When restoring, new objects are ignored.
- RESTORE
For a database-level restore operation, if the backup image contains extraction log files, the extraction logs will be restored to the active log directory, regardless of the setting of the LOGTARGET parameter.
No extraction log files will be restored for tablespace-level restore operations, or for restore types that do not restore data. This even includes restore operations of the form RESTORE DB ... LOGS.
For a multi-step RESTORE operation, such as an incremental restore or a REBUILD, Db2 will ensure that the correct log extraction files are restored so that they can be used by the subsequent rollforward operation. In practice, this will be the target image for an incremental restore, which will typically be the oldest image restored as part of a database rebuild.
If an extraction log file cannot be restored, the restore operation will continue. However, Db2 will not be able to validate that the correct range of extraction log files are present at the start of the subsequent ROLLFORWARD operation so it will delete all extraction log files on the log stream in question. You must then ensure that the database's log archive is available to the rollforward utility so that active log files can be used. If the log archive is not available or the rollforward operation is performed using the NORETRIEVE option, you will need to retrieve the recovery logs manually.
- Online traditional backup and restore (Prior to version 11.5.6)
During a traditional online backup, extraction log files will not be included in the backup image. For the INCLUDE LOGS parameter, if logs are needed in the backup image they will be retrieved from the archive if not found locally. With extraction, this could increase the range of active log files that need to be included in the backup image, resulting in larger backup image sizes.
Following a restore, if log files are needed for rollforward purposes and cannot be found locally, the necessary log files will be retrieved from the archive as usual.
- Online native snapshot backup and restore
If the INCLUDE LOGS parameter is specified with the BACKUP command, the active log paths containing the extraction log files will be included in the target snapshot image.
- Database rollforward
The feature has full support for database rollforward. If extraction files exist at the beginning of a database rollforward operation, the extraction files are used during the redo and undo phase of the database rollforward operation. This helps speed up the recovery process as previously archived log files do not need to be retrieved back into the active log path. If the extraction files are unusable (for example due to a disk error), previously archived log files will be retrieved as usual (unless the NORETRIEVE option is specified), and database rollforward will use the retrieved log files instead. If the NORETRIEVE option is specified, the database rollforward operation will not retrieve any log files from the archives.
Extraction during database rollforward is controlled by throttling rules similar to the runtime rules. ALSM will make every effort not to fail due to a transaction log or disk full error during database rollforward, therefore new extraction files may be produced during the redo and undo phase of database rollforward. The number of log and extraction files present in the active log path during a database rollforward operation may be different from runtime because rollforward log file retrieval requirements are different from the runtime requirements.
- High Availability and Disaster Recovery (HADR)
Starting from Db2 11.5.6, ALSM has full support for databases using HADR. For a primary database, log extraction follows the same runtime rules for throttling and for the case where an extraction log file becomes inaccessible. If a primary database is configured for ALSM, the standby should match and be configured to use ALSM as well.
For a standby database, log extraction behaves independently from the primary. Standby log extraction behavior is based on the configured and physical environment under which it runs. Log extraction follows the same database rollforward rules for throttling and for when an extraction log file becomes inaccessible.
Log spooling is handled differently for a standby database. Based on the hadr_spool_limit database configuration parameter, log extraction will be throttled to protect disk space for log spooling. If hadr_spool_limit is set to a fixed value, this rule will ensure that extraction will not take away any disk space needed for log spooling. If hadr_spool_limit is set to AUTOMATIC and disk space is limited, this rule will force extraction to share disk space between itself and log spooling. If hadr_spool_limit is set to -1, this rule will not throttle extraction.
Note that it is recommended to use a shared archive between the primary and standby databases. In the rare case that an extraction log file is unusable (for example due to a disk error), the previously archived log file will be retrieved from the archives and used for takeover. If the archived log file is not accessible, the operation will fail, requiring manual user intervention.
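The three hadr_spool_limit cases above can be summarized in a tiny decision helper. This is a sketch; the mode names are invented labels for this illustration, not Db2 terms:

```python
def spooling_throttle_mode(hadr_spool_limit):
    """Map the hadr_spool_limit setting on an HADR standby to how
    extraction is throttled, per the three cases described in the text."""
    if hadr_spool_limit == -1:
        return "NO_THROTTLE"   # rule does not throttle extraction
    if hadr_spool_limit == "AUTOMATIC":
        return "SHARE_DISK"    # share disk space with log spooling
    return "RESERVE_FIXED"     # protect the fixed spool space
```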
- Table space rollforward
ALSM provides no benefit to a table space rollforward operation because the log records needed by a table space rollforward operation are usually older than the log file containing the head of the log (for example, HeadExtentID). ALSM does not extract any data before this point. If an archived log file is needed for a table space rollforward operation, the log file will be retrieved from the archive as usual.
- Mirrored database (db2inidb)
The feature has full support for a mirrored database. If the db2inidb command is issued with the AS SNAPSHOT option, the database will need to undergo crash recovery. Before crash recovery begins, the on-disk extraction files are verified for completeness and correctness: the verification ensures that the correct number of extraction files is present and that they are readable, consistent, and usable for crash recovery purposes. Likewise, if the db2inidb command is issued with the AS STANDBY or AS MIRROR option, which places the database in a rollforward pending state, the same extraction file verification is performed.
- Currently committed
ALSM has full support for the currently committed feature. If the committed version of the row is not present in the log buffer, the row data must be retrieved from the corresponding log file. If there is an extraction log file containing the data of interest, the data will be read from the extraction log file. In case of an error reading from the extraction log, for example due to a disk problem, the parent active log file will not be retrieved. Instead, the behavior will revert to the default wait for the row or table lock.
- SET WRITE SUSPEND
If a SET WRITE SUSPEND command with the INCLUDE LOGS parameter is issued while log extraction is running, the log extraction will be paused. While I/O writes are suspended for the database, existing extraction files will not be written to, new extraction files will not be created, and none of the existing extraction files will be deleted. The extraction will resume after the SET WRITE RESUME command is issued. Issuing SET WRITE SUSPEND with the EXCLUDE LOGS parameter has no impact on log extraction because both features can co-exist.
- Databases configured with a MIRRORLOGPATH
Starting from Db2 11.5.5, Advanced Log Space Management has basic support for databases configured with a MIRRORLOGPATH. For such databases, log extraction will take place as long as log data can be read from the active log files found in either the active or mirror log path and the extracted log data can be written to the extraction log files found under the active log path.
If the active log path becomes inaccessible then the log extraction scan may become throttled due to a SCAN_ERROR state and a transaction log full error could occur. The log extraction scan will resume once the active log path error is fixed and the log extraction zone moves up (see section Previous extraction error). If a rollback, crash recovery or database rollforward needs to read from an extraction log file found under the active log path that is inaccessible then the parent log file will be retrieved from the archives and used to read the log data. If a currently committed scanner needs to read from an extraction log file found under the active log path that is inaccessible then the parent log file will not be retrieved. Instead, the behavior will revert to the default wait for the row or table lock.
- Infinite logging
ALSM has full support for databases configured with infinite logging (LOGSECOND = -1). Required log data for inflight transactions will be found locally, thus ensuring fast and reliable rollback and crash recovery that does not need to retrieve log files from the archives. This is an improvement of infinite logging behavior prior to ALSM.
Infinite logging guarantees no transaction log full condition. In cases where the extraction scan is slow or idled due to a throttle reason, such as extraction ratio, an active log file may need to be reused for new log data before it can be extracted from. This happens when the configured number of log files has been reached. In this case, required log data for inflight transactions may not be found locally, and so for rollback or crash recovery a log file may need to be retrieved from the archives. To avoid this, set LOG_DISK_CAP to -1 or to a large enough value that additional active log files can be created. For more details, see log_disk_cap - Active log space disk capacity configuration parameter.
- MAX_LOG
The MAX_LOG database configuration parameter specifies if there is a limit to the percentage of the primary log space that a transaction can consume, and what that limit is. This parameter will continue to work as before with ALSM on.
- NUM_LOG_SPAN
The NUM_LOG_SPAN database configuration parameter specifies whether there is a limit to how many log files one transaction can span, and what that limit is. This parameter should be revisited to ensure it is not set to a value that would cause a transaction to be rolled back before extraction is able to take place on this transaction.
- db2ReadLog API
Extraction log files are not supported because the db2ReadLog API requires reading all log records from the starting LRI provided.
- Online index create
Extraction log files are not supported because the online index create (OLIC) operation requires reading all log records.
- db2flsn
The db2flsn tool supports using the extraction log files to map an LSN to a log file number. If an active log file does not exist locally but an extraction log file does, the tool uses the information from the extraction log file for the mapping.
- db2fmtlog
The db2fmtlog tool has been updated to display contents from the extraction log files. See db2fmtlog – Format and display log file information command.
- Disk space
ALSM consumes additional disk space to store the extraction log files. ALSM works best when there is more physical disk space than what is required by the configured active log space (LOGPRIMARY, LOGSECOND, and LOGFILSIZ database configuration parameters). It is recommended to have at least 20% more disk space than the configured amount. Over time, use the monitoring tools described later in this topic to adjust LOGPRIMARY, LOGSECOND and the physical amount of disk space made available to best fit your typical workloads.
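As a worked example of the 20% guideline above, the configured active log space can be computed from the three parameters and a headroom fraction applied. This is a rule-of-thumb sketch that assumes 4 KB log pages and no other consumers of the log filesystem:

```python
def recommended_log_disk_bytes(logprimary, logsecond, logfilsiz_pages,
                               headroom=0.20):
    """Provision at least `headroom` (20% by default) more disk space than
    the configured active log space (LOGPRIMARY + LOGSECOND files of
    LOGFILSIZ 4 KB pages) to leave room for ALSM extraction files."""
    configured = (logprimary + logsecond) * logfilsiz_pages * 4096
    return int(configured * (1.0 + headroom))
```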
Limitations/Restrictions
ALSM is not supported in the following configurations:
- Databases configured with circular logging or log retention logging (ALSM requires at least one of the logarchmeth1 or logarchmeth2 database configuration parameters to be set to a value other than OFF or LOGRETAIN).
- Databases in an IBM Db2 pureScale® environment.
- Prior to Db2 11.5.6, databases configured with the High Availability and Disaster Recovery (HADR) feature.
In these situations, log extraction will not take place and transaction log full errors can still occur as before.
Monitoring
- MON_GET_TRANSACTION_LOG
- MON_GET_UNIT_OF_WORK
- MON_GET_UNIT_OF_WORK_DETAILS
- db2pd -logs
Monitor the state of extraction through the two new fields:
- Extraction Status
- Current Log to Extract
Example 1: Is ALSM enabled?
db2pd -db sample -logs
Extraction Status Active
Current Log to Extract 1038
This illustrates that log extraction is enabled. The Extraction Status could also report Idle or Recovery. The Db2 diagnostics log will also have a message indicating that log extraction has been enabled.
If log extraction is not enabled, the output shows:
Extraction Status n/a
Current Log to Extract n/a
The Db2 diagnostic log will also have a message indicating why log extraction is not enabled.
Example 2: Determining the extraction ratio
SELECT log_extraction_written_bytes,
log_extraction_processed_bytes
FROM TABLE(MON_GET_TRANSACTION_LOG(-1)) as t
LOG_EXTRACTION_WRITTEN_BYTES LOG_EXTRACTION_PROCESSED_BYTES
---------------------------- ------------------------------
16589 647632
This shows that about 3% of the active log data has been extracted, which fits the ideal model that would show a benefit from using ALSM.
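The ratio quoted above comes directly from the two monitor elements. A convenience sketch (not a Db2 function) of the computation:

```python
def extraction_ratio_pct(written_bytes, processed_bytes):
    """Percentage of scanned log data that was actually extracted, computed
    from the LOG_EXTRACTION_WRITTEN_BYTES and LOG_EXTRACTION_PROCESSED_BYTES
    monitor elements. A low ratio means only a small, long running slice of
    the workload needed extraction, which is the ideal case for ALSM."""
    if processed_bytes == 0:
        return 0.0
    return 100.0 * written_bytes / processed_bytes
```

With the values from the example, the ratio is roughly 2.6%, which the text rounds up to about 3%.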
Example 3: Disk space consumption of extraction
SELECT log_extraction_processed_bytes AS processed_bytes,
log_extraction_written_bytes AS written_bytes,
log_extraction_disk_space_used_total AS disk_space_used_total,
log_extraction_disk_space_used_total_top AS disk_space_used_total_top
FROM TABLE(MON_GET_TRANSACTION_LOG(-1)) as t
PROCESSED_BYTES WRITTEN_BYTES DISK_SPACE_USED_TOTAL DISK_SPACE_USED_TOTAL_TOP
--------------- ------------- --------------------- -------------------------
266882 165 35165 54461
Problem determination
ADM1560W Unable to read from extraction log file "<log-file-name>".
Explanation:
Not able to read transaction log data from this file, either
because the file is missing, or its content is not correct.
This may result in retrieving of active log file(s) from archive.
User response:
Investigate the cause of the failure by reviewing the Db2 diagnostic (db2diag) log file.
Although using ALSM reduces your likelihood of hitting transaction log full conditions, it is still possible, under certain conditions where the extraction scan is throttled, for a transaction log full error to occur.
- Reason: Log archiving failure
At the point of the transaction log full error the Db2 diagnostic log will show:
Active log S0001038.LOG has not been archived yet.
Active log S0001038.LOG has not been extracted from yet.
Current log extraction information:
loggxLastProcessedLsn = 0000000000072FEE
loggxLastProcessedLso = 78454802
logExtractionCurrentExtNum = 1038
logExtractionState = IDLE
logExtractionFlushLsn = 0000000000000000
throttleReason = LOG_ARCHIVING
If you run the db2pd -logs command, it will show that "Extraction Status" is set to "Active", but log archive method 1 is in an error state on transaction log file 1038. Extraction is also currently trying to extract from the same file.
Logs:
Current Log Number 1047
Pages Written 0
Cur Commit Disk Log Reads 0
Cur Commit Total Log Reads 0
Method 1 Archive Status Failure
Method 1 Next Log to Archive 1047
Method 1 First Failure 1038
Method 2 Archive Status n/a
Method 2 Next Log to Archive n/a
Method 2 First Failure n/a
Extraction Status Active
Current Log to Extract 1038
Log Chain ID 0
Current LSO 78605624
Current LSN 0x00000000000735A6
Searching backwards from the transaction log full error in the Db2 diagnostic log brings you to the reason for the log archiving failure. Look into the archiving issue and resolve, at which point extraction will resume.
- Reason: Disk full
At the point of the transaction log full error the Db2 diagnostic log will show:
Active log S0001051.LOG has not been extracted from yet.
Current log extraction information:
loggxLastProcessedLsn = 0000000000073801
loggxLastProcessedLso = 78666799
logExtractionCurrentExtNum = 1051
logExtractionState = IDLE
logExtractionFlushLsn = 0000000000000000
throttleReason = DISK_FULL
If you run the db2pd -logs command, it will show that "Extraction Status" is set to "Active", but "Current Log to Extract" is equal to the first active log in the active log path, which usually indicates that extraction is stalled.
Logs:
Current Log Number 1060
Method 1 Archive Status Success
Method 1 Next Log to Archive 1060
Method 1 First Failure n/a
Extraction Status Active
Current Log to Extract 1051
Current LSO 78818610
Current LSN 0x0000000000073E38
StartLSN StartLSO State Filename
0000000000073802 78666801 0x00000000 S0001051.LOG
Look into resolving the disk space issue, at which point extraction will resume.
- Reason: Scan error
At the point of the transaction log full error the Db2 diagnostic log will show:
Active log S0001079.LOG has not been extracted from yet.
Current log extraction information:
loggxScanStartExtNum = 1079
loggxScanStartLsn = 0000000000074AC5
loggxMinLsnToStartOnError = 0000000000074AF3
logExtractionCurrentExtNum = 1079
logExtractionState = ERROR
logExtractionFlushLsn = 0000000000000000
throttleReason = SCAN_ERROR
Searching backwards from the transaction log full error in the Db2 diagnostic log brings you to the reason for the scan error listed in the error return code.
MESSAGE :ZRC=0xFFFFFFFF=-1
Log extraction scan error.
Function = sqlpshrScanNext
File Array Element 0 = 1073
Head Extent = 1050
Group Head Extent = 1050
loggxScanStartExtNum = 1079
loggxScanStartLsn = 0000000000074AC5
loggxMinLsnToStartOnError = 0000000000074AF3
loggxLastProcessedExtNum = 1079
loggxLastProcessedLsn = 0000000000074AF1
loggxLastProcessedLso = 79139424
loggxLastProcessedByte = 79139471
logExtractionCurrentExtNum = 1079
logExtractionPendingReadLso = 79139471
logExtractionReadLso = 79123332
Note that if the scan error is due to reading from or writing to an active log path, you may need to ensure that all active log paths are accessible and have adequate storage.
However, if this error is related to something that cannot be resolved and becomes problematic, contact IBM Support. Once the scan error is resolved, the scan will resume.
- Reason: Slow buffer pool flush
At the point of the transaction log full error the Db2 diagnostic log will show:
Active log S0001060.LOG has not been extracted from yet.
Current log extraction information:
loggxLastProcessedLsn = 0000000000073801
loggxLastProcessedLso = 78666799
logExtractionCurrentExtNum = 1060
logExtractionState = IDLE
logExtractionFlushLsn = 0000000000074801
throttleReason = SLOW_BP_FLUSH
If you run the db2pd -logs command, it will show you that "Extraction Status" is set to "Active", but "Current Log to Extract" is equal to the first active log in the active log path, which usually indicates that extraction is stalled.
Since the throttle reason indicates a slow buffer pool flushing condition, run the following to get the LSN of the log record belonging to the oldest dirty page in the buffer pool.
db2pd -db sample -dirtypages | grep minbuflsn
minbuflsn : 0000000000073802
Supplying this LSN to db2flsn will determine which log file contains this log record.
db2flsn -db sample 0000000000073802
Given LSN is in log file S0001060.LOG
The log file returned matches the log file that still needs to be extracted from.
Slow buffer pool flushing can result from, for example, a database misconfiguration (the PAGE_AGE_TRGT_MCR and PAGE_AGE_TRGT_GCR database configuration parameters), or from a transaction that frequently accesses and updates the same page without giving the database an opportunity to flush the page. In some cases, a manual FLUSH BUFFERPOOLS statement may help.
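Conceptually, the db2flsn lookup used above maps an LSN onto the log file whose start LSN most closely precedes it. A sketch of that mapping, with illustrative values (the StartLSN figures in the test are invented for the example, not taken from the output above):

```python
import bisect

def file_for_lsn(start_lsns, lsn):
    """Conceptual sketch of the db2flsn lookup. start_lsns is a sorted list
    of (start_lsn, file_name) pairs; each log file covers LSNs from its
    StartLSN up to (but excluding) the next file's StartLSN."""
    starts = [s for s, _ in start_lsns]
    i = bisect.bisect_right(starts, lsn) - 1
    if i < 0:
        raise ValueError("LSN precedes the first known log file")
    return start_lsns[i][1]
```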