In-Memory Database Systems: Myths and Facts
From the database experts at McObject
Many software vendors offer in-memory database systems (IMDSs), described as accelerating data management by holding all records in main memory. But many database management systems have long employed caching; several vendors offer something called “memory tables”; and RAM-disks and Flash-based solid state drives (SSDs) are available for use with databases. Do IMDSs really add anything unique? In fact, the distinction between these technologies and true in-memory database systems is significant, and can be critical to project success.
Myth 1: In-memory database performance can be obtained through caching.
Caching is the process whereby on-disk databases keep frequently accessed records in memory for faster access. However, caching speeds up only the retrieval of information, or “database reads.” Any database write – that is, an update to an existing record or the creation of a new one – must still be written through the cache to disk. So the performance benefit applies to only a subset of database tasks.
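The write-through behavior described above can be sketched in a few lines. This is an illustrative toy, not McObject code: a Python dict stands in for disk storage, and the counter shows that while repeated reads can be served from memory, every write still reaches the backing store.

```python
# Toy write-through cache in front of a dict that stands in for disk storage.
# Reads may hit the cache; writes always reach the backing store.
class WriteThroughCache:
    def __init__(self, backing_store):
        self.cache = {}
        self.backing = backing_store   # stands in for the on-disk database
        self.disk_writes = 0           # counts writes that reach "disk"

    def read(self, key):
        if key in self.cache:          # cache hit: no disk access needed
            return self.cache[key]
        value = self.backing[key]      # cache miss: fetch from "disk"
        self.cache[key] = value
        return value

    def write(self, key, value):
        self.cache[key] = value
        self.backing[key] = value      # write-through: "disk" is always touched
        self.disk_writes += 1

disk = {"a": 1}
db = WriteThroughCache(disk)
db.read("a")       # miss: fetched from the backing store, then cached
db.read("a")       # hit: served from memory
db.write("a", 2)   # the write still reaches the backing store
```

However the read path is tuned, `disk_writes` grows with every update, which is why caching alone cannot deliver in-memory write performance.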
Caching is also inflexible. While the user has some control over cache size, the data stored there is chosen automatically, usually by a variant of a least-recently-used (LRU) or least-frequently-used (LFU) replacement algorithm. The user cannot designate certain records as important enough to always be cached, and it is typically impossible to cache the entire database.
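A minimal sketch of one such policy, least-recently-used replacement, makes the inflexibility concrete: eviction is decided entirely by access order, and the application has no way to pin a record it knows is important. (This is a generic illustration, not any particular DBMS's cache.)

```python
from collections import OrderedDict

# Toy fixed-capacity least-recently-used (LRU) cache. Eviction is automatic;
# the application cannot mark a record as "always keep".
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()      # insertion order tracks recency

    def get(self, key):
        if key not in self.data:
            return None                # cache miss
        self.data.move_to_end(key)     # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put("x", 1)
cache.put("y", 2)
cache.get("x")     # "x" becomes most recently used
cache.put("z", 3)  # over capacity: "y" is evicted, whether we wanted it or not
```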
In addition, managing the cache imposes substantial overhead. Selecting records and then adding them to, or removing them from, the cache consumes memory and CPU cycles. When the cache memory buffers fill up, some portion of the data is written to the file system (logical I/O). Each logical I/O takes a time interval usually measured in microseconds. Eventually, the file system buffers also fill up, and data must be written to the hard disk (at which point logical I/O implicitly becomes physical I/O). Physical I/O is usually measured in milliseconds; its performance burden is therefore several orders of magnitude greater than that of logical I/O.
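The microseconds-versus-milliseconds gap is easy to quantify with back-of-envelope arithmetic. The figures below are assumed round numbers for illustration, not measurements from the paper:

```python
# Back-of-envelope comparison of logical vs. physical I/O latency.
# The constants are illustrative round numbers, not benchmark results.
LOGICAL_IO_US = 5        # ~5 microseconds per logical (buffered) I/O
PHYSICAL_IO_US = 5_000   # ~5 milliseconds per physical (disk) I/O

ratio = PHYSICAL_IO_US / LOGICAL_IO_US
print(f"physical I/O is roughly {ratio:.0f}x slower than logical I/O")
```

With these assumptions, a single physical I/O costs as much as a thousand logical ones, which is why an operation that spills from the file system buffers to disk dominates response time.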
In-memory databases store all data in main memory, keeping every record available for instant access. By eliminating cache management as well as logical and physical I/O, an IMDS will always turn in better performance than an on-disk DBMS with caching.
This white paper goes into further detail, explains the key differences, and replaces IMDS myths with facts.