<!--$Id: tune.so,v 11.25 2008/01/24 00:38:39 sarette Exp $-->
<!--Copyright (c) 1997,2008 Oracle.  All rights reserved.-->
<!--See the file LICENSE for redistribution information.-->
<html>
<head>
<title>Berkeley DB Reference Guide: Transaction tuning</title>
<meta name="description" content="Berkeley DB: An embedded database programmatic toolkit.">
<meta name="keywords" content="embedded,database,programmatic,toolkit,btree,hash,hashing,transaction,transactions,locking,logging,access method,access methods,Java,C,C++">
</head>
<body bgcolor=white>
<a name="2"><!--meow--></a><a name="3"><!--meow--></a>
<table width="100%"><tr valign=top>
<td><b><dl><dt>Berkeley DB Reference Guide:<dd>Berkeley DB Transactional Data Store Applications</dl></b></td>
<td align=right><a href="../transapp/reclimit.html"><img src="../../images/prev.gif" alt="Prev"></a><a href="../toc.html"><img src="../../images/ref.gif" alt="Ref"></a><a href="../transapp/throughput.html"><img src="../../images/next.gif" alt="Next"></a>
</td></tr></table>
<p align=center><b>Transaction tuning</b></p>
<p>There are a few different issues to consider when tuning the performance
of Berkeley DB transactional applications.  First, you should review
<a href="../../ref/am_misc/tune.html">Access method tuning</a>, as the
tuning issues for access method applications are applicable to
transactional applications as well.  The following are additional tuning
issues for Berkeley DB transactional applications:</p>
<br>
<b>access method</b><ul compact><li>Highly concurrent applications should use the Queue access method, where
possible, as it provides a finer granularity of locking than the other
access methods.  Otherwise, applications usually see better concurrency
when using the Btree access method than when using either the Hash or
Recno access methods.</ul>
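<p>For example, a highly concurrent application might create its primary
database using the Queue access method.  The following is a minimal sketch,
assuming an already-opened transactional environment handle, dbenv; the
filename and record length are illustrative:</p>
<blockquote><pre>#include &lt;db.h&gt;

/*
 * open_queue_db --
 *	Open a Queue database in an existing transactional environment.
 */
int
open_queue_db(DB_ENV *dbenv, DB **dbpp)
{
	DB *dbp;
	int ret;

	if ((ret = db_create(&amp;dbp, dbenv, 0)) != 0)
		return (ret);
	/* Queue databases store fixed-length records; 128 bytes is illustrative. */
	if ((ret = dbp-&gt;set_re_len(dbp, 128)) != 0)
		goto err;
	if ((ret = dbp-&gt;open(dbp, NULL,
	    "queue.db", NULL, DB_QUEUE, DB_CREATE | DB_AUTO_COMMIT, 0)) != 0)
		goto err;
	*dbpp = dbp;
	return (0);

err:	(void)dbp-&gt;close(dbp, 0);
	return (ret);
}</pre></blockquote>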
<b>record numbers</b><ul compact><li>Using record numbers outside of the Queue access method will often slow
down concurrent applications, as record numbers limit the degree of
concurrency available in the database.  Both the Recno access method and
the Btree access method with retrieval by record number configured can
slow applications down.</ul>
<b>Btree database size</b><ul compact><li>When using the Btree access method, applications supporting concurrent
access may see excessive numbers of deadlocks in small databases.  There
are two different approaches to resolving this problem.  First, as the
Btree access method uses page-level locking, decreasing the database
page size can result in fewer lock conflicts.  Second, in the case of
databases that are cyclically growing and shrinking, turning off reverse
splits (with <a href="../../api_c/db_set_flags.html#DB_REVSPLITOFF">DB_REVSPLITOFF</a>) can leave the database with enough
pages that there will be fewer lock conflicts.</ul>
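<p>For example, both the page size and the reverse-split behavior are set on
the database handle before it is opened.  The following is a sketch; the
512-byte page size is an illustrative assumption, not a recommendation:</p>
<blockquote><pre>#include &lt;db.h&gt;

/*
 * configure_small_btree --
 *	Reduce lock conflicts in a small, concurrently accessed Btree
 *	database.  Must be called before DB-&gt;open.
 */
int
configure_small_btree(DB *dbp)
{
	int ret;

	/* Smaller pages mean fewer records under each page-level lock. */
	if ((ret = dbp-&gt;set_pagesize(dbp, 512)) != 0)
		return (ret);
	/* Keep pages allocated as the database shrinks and regrows. */
	return (dbp-&gt;set_flags(dbp, DB_REVSPLITOFF));
}</pre></blockquote>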
<b>read locks</b><ul compact><li>Performing all read operations outside of transactions or at
<a href="../../ref/transapp/read.html">snapshot isolation</a> can often
significantly increase application throughput.  In addition, limiting
the lifetime of non-transactional cursors will reduce the length of
time locks are held, thereby improving concurrency.</ul>
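<p>For example, a read-only transaction can be run at snapshot isolation.
The following sketch assumes the database was opened with the
DB_MULTIVERSION flag so that multiversion concurrency control is
available:</p>
<blockquote><pre>#include &lt;db.h&gt;

/*
 * snapshot_read --
 *	Perform a single read at snapshot isolation; such readers do not
 *	block, and are not blocked by, writers.
 */
int
snapshot_read(DB_ENV *dbenv, DB *dbp, DBT *key, DBT *data)
{
	DB_TXN *txn;
	int ret, t_ret;

	if ((ret = dbenv-&gt;txn_begin(dbenv, NULL, &amp;txn, DB_TXN_SNAPSHOT)) != 0)
		return (ret);
	ret = dbp-&gt;get(dbp, txn, key, data, 0);
	if ((t_ret = txn-&gt;commit(txn, 0)) != 0 &amp;&amp; ret == 0)
		ret = t_ret;
	return (ret);
}</pre></blockquote>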
<b><a href="../../api_c/env_set_flags.html#DB_DIRECT_DB">DB_DIRECT_DB</a>, <a href="../../api_c/env_log_set_config.html#DB_LOG_DIRECT">DB_LOG_DIRECT</a></b><ul compact><li>Consider using the <a href="../../api_c/env_set_flags.html#DB_DIRECT_DB">DB_DIRECT_DB</a> and <a href="../../api_c/env_log_set_config.html#DB_LOG_DIRECT">DB_LOG_DIRECT</a> flags.
On some systems, avoiding caching in the operating system can improve
write throughput and allow the creation of larger Berkeley DB caches.</ul>
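<p>For example, both flags are set on the environment handle before the
environment is opened.  Whether direct I/O helps is system-specific, so
treat this sketch as something to measure, not a default:</p>
<blockquote><pre>#include &lt;db.h&gt;

/*
 * set_direct_io --
 *	Ask Berkeley DB to bypass the operating system buffer cache for
 *	database and log I/O.  Must be called before DB_ENV-&gt;open.
 */
int
set_direct_io(DB_ENV *dbenv)
{
	int ret;

	if ((ret = dbenv-&gt;set_flags(dbenv, DB_DIRECT_DB, 1)) != 0)
		return (ret);
	return (dbenv-&gt;log_set_config(dbenv, DB_LOG_DIRECT, 1));
}</pre></blockquote>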
<b><a href="../../api_c/db_open.html#DB_READ_UNCOMMITTED">DB_READ_UNCOMMITTED</a>, <a href="../../api_c/db_cursor.html#DB_READ_COMMITTED">DB_READ_COMMITTED</a></b><ul compact><li>Consider decreasing the level of isolation of transaction using the
<a href="../../api_c/db_open.html#DB_READ_UNCOMMITTED">DB_READ_UNCOMMITTED</a> or <a href="../../api_c/db_cursor.html#DB_READ_COMMITTED">DB_READ_COMMITTED</a> flags for
transactions or cursors or the <a href="../../api_c/db_open.html#DB_READ_UNCOMMITTED">DB_READ_UNCOMMITTED</a> flag on
individual read operations.  The <a href="../../api_c/db_cursor.html#DB_READ_COMMITTED">DB_READ_COMMITTED</a> flag will
release read locks on cursors as soon as the data page is nolonger
referenced.  This is also called <i>degree 2 isolation</i>.  This
will tend to block write operations for shorter periods for applications
that do not need to have repeatable reads for cursor operations.
<p>The <a href="../../api_c/db_open.html#DB_READ_UNCOMMITTED">DB_READ_UNCOMMITTED</a> flag will allow read operations to
potentially return data which has been modified but not yet committed,
and can significantly increase application throughput in applications
that do not require data be guaranteed to be permanent in the database.
This is also called <i>degree 1 isolation</i>, or <i>dirty
reads</i>.</p></ul>
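<p>For example, the following sketch performs a cursor read at degree 2
isolation and a single read at degree 1 isolation.  The dirty read assumes
the database was opened with the <a href="../../api_c/db_open.html#DB_READ_UNCOMMITTED">DB_READ_UNCOMMITTED</a> flag:</p>
<blockquote><pre>#include &lt;db.h&gt;

/*
 * relaxed_reads --
 *	Read at reduced isolation levels within an existing transaction.
 */
int
relaxed_reads(DB *dbp, DB_TXN *txn, DBT *key, DBT *data)
{
	DBC *dbc;
	int ret, t_ret;

	/*
	 * Degree 2: read locks are released as soon as the underlying
	 * page is no longer referenced by the cursor.
	 */
	if ((ret = dbp-&gt;cursor(dbp, txn, &amp;dbc, DB_READ_COMMITTED)) != 0)
		return (ret);
	ret = dbc-&gt;get(dbc, key, data, DB_SET);
	if ((t_ret = dbc-&gt;close(dbc)) != 0 &amp;&amp; ret == 0)
		ret = t_ret;
	if (ret != 0)
		return (ret);

	/* Degree 1: may return modified but uncommitted data. */
	return (dbp-&gt;get(dbp, txn, key, data, DB_READ_UNCOMMITTED));
}</pre></blockquote>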
<b><a href="../../api_c/dbc_get.html#DB_RMW">DB_RMW</a></b><ul compact><li>If there are many deadlocks, consider using the <a href="../../api_c/dbc_get.html#DB_RMW">DB_RMW</a> flag to
immediately acquire write locks when reading data items that will
subsequently be modified.  Although this flag may increase contention
(because write locks are held longer than they would otherwise be), it
may decrease the number of deadlocks that occur.</ul>
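<p>For example, a read-modify-write cycle within a transaction can acquire
its write lock up front; the helper below is an illustrative sketch:</p>
<blockquote><pre>#include &lt;db.h&gt;

/*
 * read_modify_write --
 *	Read a record that will be updated in the same transaction,
 *	acquiring the write lock immediately to avoid a deadlock-prone
 *	lock upgrade later.
 */
int
read_modify_write(DB *dbp, DB_TXN *txn, DBT *key, DBT *data)
{
	int ret;

	if ((ret = dbp-&gt;get(dbp, txn, key, data, DB_RMW)) != 0)
		return (ret);
	/* ... modify the returned data here ... */
	return (dbp-&gt;put(dbp, txn, key, data, 0));
}</pre></blockquote>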
<b><a href="../../api_c/env_set_flags.html#DB_TXN_WRITE_NOSYNC">DB_TXN_WRITE_NOSYNC</a>, <a href="../../api_c/env_set_flags.html#DB_TXN_NOSYNC">DB_TXN_NOSYNC</a></b><ul compact><li>By default, transactional commit in Berkeley DB implies durability, that is,
all committed operations will be present in the database after recovery
from any application or system failure.  For applications not requiring
that level of certainty, specifying the <a href="../../api_c/env_set_flags.html#DB_TXN_NOSYNC">DB_TXN_NOSYNC</a> flag will
often provide a significant performance improvement.  In this case, the
database will still be fully recoverable, but some number of committed
transactions might be lost after application or system failure.  The
<a href="../../api_c/env_set_flags.html#DB_TXN_WRITE_NOSYNC">DB_TXN_WRITE_NOSYNC</a> flag is a middle ground: the log is written, but not
flushed, at commit, so committed transactions can only be lost if the
system itself fails.</ul>
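<p>For example, relaxed durability is configured on the environment handle;
this sketch shows <a href="../../api_c/env_set_flags.html#DB_TXN_NOSYNC">DB_TXN_NOSYNC</a>, and <a href="../../api_c/env_set_flags.html#DB_TXN_WRITE_NOSYNC">DB_TXN_WRITE_NOSYNC</a> is set the
same way:</p>
<blockquote><pre>#include &lt;db.h&gt;

/*
 * relax_durability --
 *	Trade durability of the most recently committed transactions for
 *	commit throughput.  Call before opening the environment.
 */
int
relax_durability(DB_ENV *dbenv)
{
	/* Do not synchronously flush the log on transaction commit. */
	return (dbenv-&gt;set_flags(dbenv, DB_TXN_NOSYNC, 1));
}</pre></blockquote>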
<b>access databases in order</b><ul compact><li>When modifying multiple databases in a single transaction, always access
physical files, and databases within physical files, in the same order
where possible.  In addition, avoid returning to a physical file or
database, that is, avoid accessing a database, moving on to another
database and then returning to the first database.  This can
significantly reduce the chance of deadlock between threads of
control.</ul>
<b>large key/data items</b><ul compact><li>Transactional protections in Berkeley DB are guaranteed by before- and
after-image physical logging.  This means applications modifying large
key/data items also write large log records, and, in the case of the
default transaction commit, threads of control must wait until those
log records have been flushed to disk.  Applications supporting
concurrent access should try to keep key/data items small wherever
possible.</ul>
<b>mutex selection</b><ul compact><li>During configuration, Berkeley DB selects a mutex implementation for the
architecture. Berkeley DB normally prefers blocking-mutex implementations over
non-blocking ones.  For example, Berkeley DB will select POSIX pthread mutex
interfaces rather than assembly-code test-and-set spin mutexes because
pthread mutexes are usually more efficient and less likely to waste CPU
cycles spinning without getting any work accomplished.
<p>For some applications and systems (generally highly concurrent
applications on large multiprocessor systems), Berkeley DB makes the wrong
choice.  In some cases, better performance can be achieved by
configuring with the
<a href="../../ref/build_unix/conf.html#--with-mutex">--with-mutex</a>
argument and selecting a different mutex implementation than the one
selected by Berkeley DB.  When a test-and-set spin mutex implementation is
selected, it may be useful to tune the number of spins made before
yielding the processor and sleeping.  For more information, see the
<a href="../../api_c/mutex_set_tas_spins.html">DB_ENV-&gt;mutex_set_tas_spins</a> method.</p>
<p>Finally, Berkeley DB may put multiple mutexes on individual cache lines.  When
tuning Berkeley DB for large multiprocessor systems, it may be useful to tune
mutex alignment using the
<a href="../../api_c/mutex_set_align.html">DB_ENV-&gt;mutex_set_align</a> method.</p></ul>
<b><a href="../../ref/build_unix/conf.html#--enable-posixmutexes">--enable-posixmutexes</a></b><ul compact><li>By default, the Berkeley DB library will only select the POSIX pthread mutex
implementation if it supports mutexes shared between multiple processes.
If your application does not share its database environment between
processes and your system's POSIX mutex support was not selected because
it did not support inter-process mutexes, you may be able to increase
performance and transactional throughput by configuring with the
<a href="../../ref/build_unix/conf.html#--enable-posixmutexes">--enable-posixmutexes</a> argument.</ul>
<b>log buffer size</b><ul compact><li>Berkeley DB internally maintains a buffer of log writes.  By default, the
buffer is written to disk at transaction commit, or whenever it is
filled.  If it is consistently being filled before transaction
commit, it will be written multiple times per transaction, costing
application performance.  In these cases, increasing the size of the
log buffer can increase application throughput.</ul>
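<p>For example, the log buffer size is set on the environment handle before
the environment is opened; the 1MB value below is illustrative:</p>
<blockquote><pre>#include &lt;db.h&gt;

/*
 * set_log_buffer --
 *	Enlarge the in-memory log buffer so that it is less likely to fill
 *	before transaction commit.  Call before DB_ENV-&gt;open.
 */
int
set_log_buffer(DB_ENV *dbenv)
{
	return (dbenv-&gt;set_lg_bsize(dbenv, 1024 * 1024));
}</pre></blockquote>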
<b>log file location</b><ul compact><li>If the database environment's log files are on the same disk as the
databases, the disk arms will have to seek back-and-forth between the
two.  Placing the log files and the databases on different disk arms
can often increase application throughput.</ul>
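<p>For example, the log directory is configured on the environment handle
before the environment is opened; the path below is an illustrative
assumption:</p>
<blockquote><pre>#include &lt;db.h&gt;

/*
 * set_log_directory --
 *	Place log files on a disk separate from the databases.
 *	Call before DB_ENV-&gt;open.
 */
int
set_log_directory(DB_ENV *dbenv)
{
	return (dbenv-&gt;set_lg_dir(dbenv, "/logdisk/myapp"));
}</pre></blockquote>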
<b>trickle write</b><ul compact><li>In some applications, the cache is sufficiently active and dirty that
readers frequently need to write a dirty page in order to have space in
which to read a new page from the backing database file.  You can use
the <a href="../../utility/db_stat.html">db_stat</a> utility (or the statistics returned by the
<a href="../../api_c/memp_stat.html">DB_ENV-&gt;memp_stat</a> method) to see how often this is happening in your
application's cache.  In this case, using a separate thread of control
and the <a href="../../api_c/memp_trickle.html">DB_ENV-&gt;memp_trickle</a> method to trickle-write pages can often increase
the overall throughput of the application.</ul>
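<p>For example, a dedicated thread of control might call
<a href="../../api_c/memp_trickle.html">DB_ENV-&gt;memp_trickle</a> in a loop.  The 20 percent target and one-second
interval below are illustrative and should be tuned against the cache
statistics:</p>
<blockquote><pre>#include &lt;db.h&gt;
#include &lt;unistd.h&gt;

/*
 * trickle_thread --
 *	Keep a percentage of the cache clean so that readers rarely have
 *	to write dirty pages themselves.
 */
void *
trickle_thread(void *arg)
{
	DB_ENV *dbenv = arg;
	int nwrote;

	for (;;) {
		(void)dbenv-&gt;memp_trickle(dbenv, 20, &amp;nwrote);
		sleep(1);
	}
	/* NOTREACHED */
}</pre></blockquote>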
<br>
<table width="100%"><tr><td><br></td><td align=right><a href="../transapp/reclimit.html"><img src="../../images/prev.gif" alt="Prev"></a><a href="../toc.html"><img src="../../images/ref.gif" alt="Ref"></a><a href="../transapp/throughput.html"><img src="../../images/next.gif" alt="Next"></a>
</td></tr></table>
<p><font size=1>Copyright (c) 1996,2008 Oracle.  All rights reserved.</font>
</body>
</html>