As explained in the Prioritized Cache page, eXtremeDB provides a number of C API functions to monitor and manage the runtime cache. These APIs and important information regarding their usage are described in the following sections.
Applications can influence how long certain pages remain in memory by assigning a cache priority to database objects. This is done by setting the following parameters in the db_params structure passed to the mco_db_open_dev() function: index_caching_priority for indexes, allocation_bitmap_priority for the memory allocator bitmap pages, and object_caching_priority for data object pages (excluding BLOBs). The default value of zero means that the caching priority for all objects is the same. An integer value greater than zero can be specified; the higher the value, the longer the index, allocation bitmap, or object pages remain in the cache.
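For example, the priorities might be set as follows. This is a minimal sketch, not a complete open sequence: the database name, dictionary function, and device array are illustrative, and the exact field spellings should be verified against the mco_db_params_t declaration in the SDK headers.

```c
/* Sketch: assign caching priorities before opening the database.
 * Field names follow the parameters described above; verify against mco.h. */
mco_db_params_t db_params;
mco_db_params_init(&db_params);               /* fill in defaults */
db_params.index_caching_priority     = 2;     /* index pages stay cached longest */
db_params.allocation_bitmap_priority = 1;     /* bitmap pages evicted after indexes */
db_params.object_caching_priority    = 0;     /* default priority for object pages */
/* ... set up cache devices in devs[], then: */
rc = mco_db_open_dev("mydb", mydb_get_dictionary(), devs, n_devs, &db_params);
```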
Using the preset object priority as a baseline, the generated function <classname>_set_caching_priority() can be called to adjust the relative priorities of specific classes. For example, large and rarely accessed classes can be assigned a lower priority, while small, frequently accessed classes can be assigned a higher priority. The caching priority assigned at runtime is stored in the database and is used until it is explicitly overwritten.
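A sketch of adjusting class priorities at runtime. The class names here are illustrative, and the exact argument list of the generated function (assumed here to take a connection handle and the new priority) should be verified against the generated interface header.

```c
/* Sketch: tune relative class priorities (higher = stays cached longer).
 * "AuditLog" and "HotCounter" are illustrative class names. */
rc = AuditLog_set_caching_priority(db, 0);    /* large, rarely scanned class */
rc = HotCounter_set_caching_priority(db, 3);  /* small, frequently accessed class */
```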
Other memory initialization factors that can affect overall performance are the sizes specified for the cache and
disk_max_database_size. These are explained in the following sections.
Cache Size
The memory address and size for the cache are specified in the devices parameter devs passed to the mco_db_open_dev() function. The memory can be either shared memory or local memory. (It must be shared memory if two or more processes are to share the database.) Generally a larger cache will improve application performance, but the frequency of updates to persistent media (the flushing of cache pages) is more important for performance. How database updates are written to persistent media is determined by the Transaction Commit Policy.
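A sketch of declaring a single local-memory device to serve as the cache. The mco_device_t field names below follow common eXtremeDB samples and are assumptions to be verified against the SDK headers; a shared-memory (named) device would be used instead when multiple processes share the database.

```c
/* Sketch: one conventional-memory device assigned as the page pool (cache).
 * Field and constant names should be verified against the SDK headers. */
mco_device_t dev;
dev.type       = MCO_MEMORY_CONV;           /* local (conventional) memory */
dev.assignment = MCO_MEMORY_ASSIGN_CACHE;   /* this device holds the cache */
dev.size       = 256 * 1024 * 1024;         /* 256 MB cache */
dev.dev.conv.ptr = malloc(dev.size);        /* application-supplied memory */
rc = mco_db_open_dev("mydb", mydb_get_dictionary(), &dev, 1, &db_params);
```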
Maximum Database Size
The eXtremeDB runtime uses the value of the disk_max_database_size element of the mco_db_params_t parameter passed to mco_db_open_dev() to allocate the "dirty pages bitmap". The bitmap is allocated in cache at the time the cache is created. The bitmap size can be roughly calculated as: disk_max_database_size / page_size / 8.
The application can set disk_max_database_size = MCO_INFINITE_DATABASE_SIZE to indicate that the maximum size of the database is unknown. In this case, the size of the bitmap is set to 1/16 of the size of the cache. The runtime can also be configured with the 'extendable bitmap' option, allowing for unlimited database size. If the runtime is configured with the extendable bitmap, disk_max_database_size = MCO_INFINITE_DATABASE_SIZE is specified, and in this scenario the bitmap is allocated in eXtremeDB heap space.
Reserve Page Pool
As explained in the Prioritized Cache page, the database runtime provides a page pool reservation mechanism that facilitates out-of-memory error handling for the cache. The size of the reserve pool is calculated internally by the runtime based on the value of the mco_db_params.max_active_pages parameter (the default value is 32) and the number of currently active connections to the database runtime.
It is possible to disable this mechanism by setting the MCO_DB_DISABLE_PAGE_POOL_RESERVE bit in the database parameters.
Connection Cache
The connection cache is enabled by default. Two functions, mco_disk_enable_connection_cache() and mco_disk_reset_connection_cache(), are provided to allow applications control over the connection cache:

mco_bool mco_disk_enable_connection_cache(mco_db_h con, mco_bool enabled);
MCO_RET mco_disk_reset_connection_cache(mco_db_h con);
The first function enables or disables the connection cache, depending on the value passed in the enabled parameter, and returns the current state of the connection cache. The second function commits (resets) the connection cache to the database.
These two functions address a scenario with many connections and long-lasting transactions. In this scenario, the connection cache could cause the page pool to run out of free pages (a new transaction allocates its own connection cache, but long transactions prevent those pages from being released back to the shared page pool). To address this, the connection cache could be turned off or reset often. Under normal circumstances, the application does not need to control the connection cache.
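In the long-transaction scenario described above, the cache might be controlled as in the following sketch. The prototypes come from this page; the MCO_YES / MCO_NO constant names are assumed, and error handling is omitted.

```c
/* Sketch: disable the connection cache while long transactions run,
 * then restore its previous state afterwards. */
mco_bool was_enabled = mco_disk_enable_connection_cache(con, MCO_NO);

/* ... execute the long-running transactions ... */

if (was_enabled)
    mco_disk_enable_connection_cache(con, MCO_YES);

/* Alternatively, keep the cache on but flush it back periodically: */
rc = mco_disk_reset_connection_cache(con);
```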
In-memory Page allocation
The minimum and maximum number of pages held by the per-connection allocator in MVCC mode are determined by the min_conn_local_pages and max_conn_local_pages parameters in the database parameters (mco_db_params_t). The MVCC transaction manager optimizes access to the shared memory pool by pre-allocating a number of pages at once and assigning these pages to the connection. The default value for min_conn_local_pages is 256 pages and for max_conn_local_pages is 512. The min/max value assignments represent a tradeoff between accessing a shared resource more frequently and allocating extra memory. Changing these default values can be effective if there are well-defined object allocation and deallocation patterns in the application.
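For an application with bursty allocation patterns, the defaults might be raised as in this sketch (field names as described above; values are illustrative):

```c
/* Sketch: tune per-connection page pre-allocation for MVCC mode.
 * Larger values reduce trips to the shared pool at the cost of memory. */
mco_db_params_t db_params;
mco_db_params_init(&db_params);
db_params.min_conn_local_pages = 512;   /* default is 256 */
db_params.max_conn_local_pages = 1024;  /* default is 512 */
```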
Obtaining runtime cache statistics
The mco_disk_get_cache_info() function allows applications to obtain runtime disk manager cache statistics, including cache hits and cache misses. A cache hit occurs when the address or data required by the database runtime is found in the cache and does not require retrieval from the storage media. This information could, for example, be used to fine-tune the application's caching policies (see the Prioritized Cache page).
Saving and Loading the Cache
The mco_disk_save_cache() function allows applications to save the disk manager cache to persistent storage, and mco_disk_load_cache() can be called to load a previously saved image of the cache from that storage.
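A possible usage pattern is to persist the warm cache at shutdown and restore it at startup so the application begins with a pre-warmed page pool. The parameter lists below, including the file-path argument and the path itself, are assumptions for illustration; consult the function reference for the actual signatures.

```c
/* Sketch: save the cache image at shutdown... */
mco_disk_save_cache(con, "/var/lib/mydb/cache.img");

/* ...and restore it after the database is reopened at startup. */
mco_disk_load_cache(con, "/var/lib/mydb/cache.img");
```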