You can make better use of your resources and improve concurrency
by understanding the effects of the parameters that Db2 uses to control locks.
Before you begin
Some
performance problems might seem to be locking problems even though
they are really problems somewhere else in the system. For example,
a table space scan of a large table can result in timeout situations.
Similarly, when tasks are waiting or swapped out, and the unit of
work is not committed, the tasks continue to hold locks. When a system
is heavily loaded, contention for processing, I/O, and storage can
also cause waiting.
Therefore, you might consider the following
approaches before you take specific actions to tune locks:
- Resolve overall system, subsystem, and application performance
problems to ensure that you not only eliminate locking symptoms, but
also correct other underlying performance problems.
- Reduce the number of threads or initiators.
- Increase the priority of Db2 tasks
on the system.
- Increase the amount of processor resources, I/O, and real memory.
About this task
You
might not need to do anything about Db2 locks.
Explicit lock requests are not necessary to prevent concurrent applications
from reading or modifying uncommitted data. Applications acquire implicit
locks under the control of Db2 to
preserve data integrity. However, locks can sometimes result in performance
problems from contention situations, such as suspension, timeout,
and deadlock.
You can sometimes prevent such situations
by considering concurrency when you design your system and subsystem
options, databases, and applications.
Procedure
To achieve acceptable concurrency in your Db2 subsystems, you can follow certain
basic recommendations. The recommendations described here are
starting points for improving concurrency; detailed analysis
of your data design and applications might be required to achieve
the best possible concurrency:
- Bind
most applications with the ISOLATION(CS) and CURRENTDATA(NO) options.
These options enable Db2 to release locks early and avoid taking locks
in many cases.
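For example, a package might be bound with these options through a DSN BIND subcommand such as the following sketch, in which the collection name MYCOLL and member name MYPGM are hypothetical placeholders:

```
BIND PACKAGE(MYCOLL) MEMBER(MYPGM) -
     ISOLATION(CS) CURRENTDATA(NO)
```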
- Use the REORG utility to keep your data
organized.
Doing
so can prevent the additional lock and unlock requests for situations
such as updates to compressed and varying-length rows, and auto-release
locks for pseudo-deleted index entries and keys. You
can enable automated cleanup of pseudo-deleted index entries by setting
the INDEX_CLEANUP_THREADS subsystem parameter to a non-zero value.
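As an illustration, a REORG utility control statement that reorganizes a table space while allowing concurrent access might look like the following sketch, where MYDB.MYTS is a hypothetical database and table space name:

```
REORG TABLESPACE MYDB.MYTS SHRLEVEL CHANGE
```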
- Use LOCKSIZE
ANY or PAGE as a design default. Consider LOCKSIZE ROW only when applications
encounter significant lock contention, including deadlock and timeout.
LOCKSIZE
ANY is the default for CREATE TABLESPACE. It allows Db2 to choose the lock size, and Db2 usually chooses LOCKSIZE PAGE and LOCKMAX
SYSTEM for non-LOB/non-XML table spaces. For LOB
table spaces, Db2 chooses LOCKSIZE
LOB and LOCKMAX SYSTEM. Similarly,
for XML table spaces, Db2 chooses
LOCKSIZE XML and LOCKMAX SYSTEM.
Page-level locking generally results in fewer requests to lock and unlock
data for sequential access and manipulation, which translates to reduced CPU cost. Page-level
locking is also more likely to result in sequentially inserted rows in the same data page.
In data sharing environments, row-level
locking with MAXROWS=1 can suffer from data page p-locks. However,
page-level locking can avoid those data page p-locks when MAXROWS=1 is used.
Row-level locking provides better concurrency because the locks are more granular. However, the
cost of each lock and unlock request is roughly the same for both page and row-level locking.
Therefore, row-level locking is likely to incur additional CPU cost.
Row-level
locking might also result in more data page latch contention. Sequentially inserted rows, by
concurrent threads, are less likely to be in the same data page under row-level locking.
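To make the design default explicit, a table space definition that lets Db2 choose the lock size might be written as in the following sketch, where MYTS and MYDB are hypothetical names (LOCKSIZE ANY and LOCKMAX SYSTEM are also the defaults, so stating them is purely illustrative):

```sql
CREATE TABLESPACE MYTS IN MYDB
  LOCKSIZE ANY
  LOCKMAX SYSTEM;
```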
- Reduce locking
contention on the catalog and directory for data definition, bind,
and utility operations.
You can use the following approaches
to reduce this type of contention:
- Reduce the number of objects per database.
- Group data definition statements from the same database within
the same commit scope, apart from data manipulation statements, and
commit frequently.
- Assign a unique authorization ID and private database to each
user.
- Avoid using LOCK TABLE statements and statements that use RR isolation
to query the catalog.
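For instance, grouping data definition statements for one database in a single commit scope, apart from data manipulation statements, might look like the following sketch, in which T1, T2, and MYDB1 are hypothetical names:

```sql
-- DDL for objects in the same database, grouped in one commit
-- scope and kept separate from data manipulation statements
CREATE TABLE T1 (C1 INTEGER) IN DATABASE MYDB1;
CREATE TABLE T2 (C1 INTEGER) IN DATABASE MYDB1;
COMMIT;
```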
- Specify the TRACKMOD
NO and MEMBER CLUSTER options when you create table spaces.
These
options can reduce p-lock and page latch contention on space map pages
during heavy inserts into GBP-dependent table spaces. TRACKMOD
NO cannot be used when incremental image copies are used for the table
spaces.
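A table space created with both options might be defined as in the following sketch, where MYTS and MYDB are hypothetical names (remember that TRACKMOD NO is incompatible with incremental image copies):

```sql
CREATE TABLESPACE MYTS IN MYDB
  MEMBER CLUSTER
  TRACKMOD NO;
```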
- Use the RELEASE(DEALLOCATE)
bind option to avoid the cost of repeatedly releasing and reacquiring
locks for applications that use frequent commit points for repeated
access to the same table spaces.
- Use the RELEASE(COMMIT) bind option
for plans or packages that are used less frequently to avoid excessive
increases to the EDM pool storage.
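The two RELEASE choices might be applied per package as in the following sketch of DSN BIND subcommands, in which the collection and member names are hypothetical: a frequently executed package keeps its locks across commits, and a rarely used one releases them at commit.

```
BIND PACKAGE(MYCOLL) MEMBER(HOTPGM)  RELEASE(DEALLOCATE)
BIND PACKAGE(MYCOLL) MEMBER(RAREPGM) RELEASE(COMMIT)
```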
- For
mixed INSERT, UPDATE, and DELETE workloads, consider the LOCKSIZE PAGE
and MAXROWS 1 options to reduce page latch contention on data pages.
Do not use LOCKSIZE ROW for such mixed workloads, regardless of whether
MEMBER CLUSTER is used.
MAXROWS
1 is recommended only when high levels of lock or latch contention
are encountered. The trade-off is a potential increase in getpage
and read and write I/O operations. When MAXROWS 1 is used, the number
of pages required to contain the data might increase by a factor equal
to the number of rows that previously fit on a page. For example, if 20 rows fit in a single page, then
the result is a 20 times increase in the number of pages used. Another
result is a significantly reduced buffer pool hit ratio.
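A table space defined for such a mixed workload might look like the following sketch, where MYTS and MYDB are hypothetical names:

```sql
CREATE TABLESPACE MYTS IN MYDB
  LOCKSIZE PAGE
  MAXROWS 1;
```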
What to do next
For Db2 subsystems
that are members of data sharing groups, additional recommendations apply. For information about improving concurrency in
data sharing groups, see Improving concurrency in data sharing environments.