White Papers on Database Management
“It is a capital mistake to theorize before one has data.” – Sir Arthur Conan Doyle, Author
Shared Data in Asymmetric Multiprocessing (AMP) Configurations
Heterogeneous multicore systems are becoming ever more popular in automotive and industrial applications due to their high performance and energy efficiency. This paper discusses the design and challenges of a shared data implementation for AMP configurations, explores implementation options for synchronizing access to shared data, and presents example use cases.
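One family of synchronization options the paper explores can be sketched as a spinlock held in memory that both cores can see. The sketch below is illustrative only — it assumes the cores share a coherent memory region and both support C11 atomic compare-and-swap on it; the type and function names are hypothetical, not McObject's implementation.

```c
#include <stdatomic.h>

/* Hypothetical spinlock placed in a memory region shared between
 * AMP cores. Assumes cache-coherent shared memory and atomic
 * compare-and-swap support on both cores. */
typedef struct { atomic_int locked; } shm_spinlock_t;

void shm_lock(shm_spinlock_t *l) {
    int expected = 0;
    /* spin until we atomically swap 0 -> 1 (acquire the lock) */
    while (!atomic_compare_exchange_weak(&l->locked, &expected, 1))
        expected = 0;   /* CAS overwrote 'expected'; reset and retry */
}

void shm_unlock(shm_spinlock_t *l) {
    atomic_store(&l->locked, 0);   /* release */
}
```

On hardware without coherent shared memory, the same role would be played by a hardware semaphore or inter-core mailbox, which is part of the design space the paper covers.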
Real-time Deterministic Database Management
This paper will discuss the objectives of deterministic, predictable database management in the context of real-time application design. We will introduce extensions to a conventional database management system (DBMS) transaction scheduler that add scheduling semantics, enforcing database transaction priorities and deadline scheduling. We will then focus on the practical aspects of the design and demonstrate its use in several real-life application patterns in a variety of real-time operating system (RTOS) environments.
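The kind of priority- and deadline-aware scheduling the paper describes can be sketched as a simple selection rule over pending transactions: pick the highest-priority transaction, breaking ties by earliest deadline. The structure and field names below are illustrative assumptions, not the product's scheduler.

```c
#include <stddef.h>

/* Hypothetical pending-transaction descriptor (illustrative fields). */
typedef struct {
    int  id;
    long deadline_us;   /* absolute deadline, microseconds */
    int  priority;      /* larger value = higher priority */
} txn_t;

/* Choose the next transaction to run: highest priority first,
 * earliest deadline among equal priorities (EDF tie-break). */
const txn_t *next_txn(const txn_t *q, size_t n) {
    const txn_t *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (!best ||
            q[i].priority > best->priority ||
            (q[i].priority == best->priority &&
             q[i].deadline_us < best->deadline_us))
            best = &q[i];
    }
    return best;
}
```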
Benchmarking In-memory & On-Disk Databases with Hard-Disk, SSD and Memory-Tier NAND Flash
In-memory database systems (IMDSs) accelerate data management. To provide data durability, IMDSs offer transaction logging, in which changes to the database are recorded on persistent media. But critics object that logging re-introduces the storage-related latency of on-disk DBMSs. Will an IMDS with transaction logging still outperform a traditional DBMS? Will the type of storage—hard disk drive vs. solid state drive vs. state-of-the-art memory-tier products—affect the results? McObject’s original research answers these and related questions.
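The logging mechanism at issue can be reduced to two steps: append a change record to a log file, then flush it to the device before the transaction is considered committed. This minimal sketch uses POSIX I/O; the record format and function names are hypothetical, and real transaction logs add checksums, sequence numbers, and recovery logic.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Append one change record to the transaction log. */
int log_append(int fd, const void *rec, size_t len) {
    return write(fd, rec, len) == (ssize_t)len ? 0 : -1;
}

/* Commit point: force the appended records to persistent media.
 * This fsync() is the storage-related latency critics point to. */
int log_commit(int fd) {
    return fsync(fd);
}
```

The benchmarking question in the paper is essentially how expensive that `fsync()`-style durability point is on each class of storage device.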
Pipelining Vector-Based Statistical Functions for In-Memory Analytics
Columnar data handling accelerates time series analysis (including market data analysis) by maximizing the proportion of relevant data brought into CPU cache with each fetch. As explained in this white paper, McObject’s eXtremeDB for HPC (formerly the Financial Edition) delivers columnar data handling, and builds on it with pipelining technology that enables multiple vector-based statistical functions to work on a given data sequence (time series) within CPU cache, without the need to “materialize” interim results as output in main memory. This eliminates the latency caused by back-and-forth transfers between CPU cache and memory.
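The idea of keeping interim results in cache can be illustrated with a fused loop: instead of first materializing a full array of one function's output (e.g., price differences) and then passing it to a second function (e.g., a mean), the two operations are composed tile by tile so intermediate values never leave registers or cache. This is a generic sketch of the technique, not eXtremeDB's pipelining API.

```c
#include <stddef.h>

#define TILE 256   /* tile sized to fit comfortably in L1/L2 cache */

/* Fused pipeline: difference each adjacent pair, then average the
 * differences, in one pass over cache-sized tiles. The interim
 * "diff" vector is never written out to main memory. */
double pipelined_mean_diff(const double *x, size_t n) {
    double sum = 0.0;
    for (size_t base = 1; base < n; base += TILE) {
        size_t end = base + TILE < n ? base + TILE : n;
        for (size_t i = base; i < end; i++)
            sum += x[i] - x[i - 1];   /* interim value stays in a register */
    }
    return n > 1 ? sum / (double)(n - 1) : 0.0;
}
```

The unfused equivalent would allocate an (n-1)-element interim array, write it to memory, and read it back — exactly the round trip the pipelining approach avoids.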
Database Persistence, Without The Performance Penalty
Some applications require higher data durability than in-memory storage provides. What if DRAM could be made persistent? AgigA Tech’s AGIGARAM non-volatile DIMM (NVDIMM) delivers that capability. McObject benchmarked the eXtremeDB In-Memory Database System using AGIGARAM as storage, including “pulling the plug” mid-execution, and comparing the NVDIMM to transaction logging as a solution for data durability/recoverability. This paper presents the benchmark tests and results.
In-Memory Database Systems: Myths and Facts
In the past decade, software vendors have emerged to offer in-memory database systems (IMDSs), described as accelerating data management by holding all records in main memory. But is this new? For years, database management systems have employed caching. Several vendors offer something called “memory tables.” RAM-disks and — more recently — Flash-based solid state drives (SSDs) are available for use with databases. Do IMDSs really add anything unique? In fact, the distinction between these technologies and true in-memory database systems is significant, and can be critical to project success. This paper explains the key differences, replacing IMDS myths with facts.
Will the Real IMDS Please Stand Up?
In-memory database systems (IMDSs) have changed the software landscape, enabling “smarter” embedded applications and sparking mergers and acquisitions involving the largest technology companies. But IMDSs’ popularity has sparked a flurry of products falsely claiming to be in-memory database systems. Understanding the distinction is critical to determining the performance, cost and ultimately the success or failure of a solution. This white paper examines specific products, seeking to answer the question, “is it really an in-memory database system?”
IBM White Paper – Powering the Financial Industry
STAC testing confirms that deploying McObject’s eXtremeDB in an environment of Power Systems and IBM FlashSystem can provide financial institutions with the application performance they need to capture and keep competitive advantage, increase profitability and move confidently into what IBM calls the “Cognitive Era.”
Gaining an Extreme Performance Advantage
Financial systems, telecommunications and Cloud-based software-as-a-service (SaaS) are a few of the application types that bump up against the performance limits of database management system (DBMS) software. In-memory database systems (IMDSs) eliminate much of the latency associated with traditional DBMSs, but some applications require higher data durability (i.e., recoverability if volatile memory is disrupted). As a solution, IMDSs offer transaction logging – but critics object that logging re-introduces storage-related latency.
NoSQL, Object Caching & IMDSs: Alternatives for Highly Scalable Data Management
Has the traditional relational database management system (RDBMS) reached its limits in today’s high volume, highly scalable applications? Arguably the RDBMS imposes a bottleneck in such environments; this widely held view can be seen in current enthusiasm over NoSQL solutions. McObject’s white paper examines RDBMS limits and the technologies that are suggested to replace or supplement it, including NoSQL (actually an umbrella term for numerous software categories), object caching solutions (such as Memcached), and in-memory database systems (IMDSs). Characteristics discussed include persistence, performance, scalability, recoverability, data integrity, and database developer tools.
Terabyte-Plus In-Memory Database System (IMDS) Benchmark
In-memory database systems (IMDSs) hold out the promise of breakthrough performance for time-sensitive, data-intensive tasks. Yet IMDSs’ compatibility with very large databases (VLDBs) has been largely uncharted. This benchmark analysis fills the information gap and pushes the boundaries of IMDS size and performance. Using McObject’s 64-bit eXtremeDB-64, the application creates a 1.17 Terabyte, 15.54 billion row database on a 160-core Linux-based SGI® Altix® 4700 server. It measures time required for database provisioning, backup and restore. In SELECT, JOIN and SUBQUERY tests, benchmark results range as high as 87.78 million query transactions per second. The report also examines efficiency in utilizing all of the test bed system’s 160 processors. The full report includes complete database schema, relevant application source code and additional analysis.
Data Management in Set-Top Box Electronic Programming Guides
The electronic programming guide (EPG) enables digital television users to search, filter and customize program listings and even control access to content. These capabilities entail significant data management, and a handful of vendors have incorporated commercial, off-the-shelf (COTS) databases in their set-top boxes. This report presents lessons learned in such projects, mapping emerging digital TV standards, set-top box data management requirements, and typical data objects and interrelationships. Sample code and embedded database schema focus on efficiencies gained by implementing EPG data management using an in-memory database.
SQL or Navigational Database APIs: Which Best Fits Embedded Systems?
For embedded systems developers, the choice of database application programming interfaces (APIs) often boils down to the high-level SQL language and Call Level Interface, and navigational APIs integrated with C++ and other languages. Which API is best? This paper examines the familiarity and ease-of-use often cited as benefits of SQL. A sample application is implemented with SQL and then with a navigational API, to explore the issues of programming ease, maintainability, determinism and learning curve. Special attention is given to the significance of SQL optimizers in evaluating embedded database APIs.
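The contrast between the two API styles can be sketched side by side: SQL expresses the lookup declaratively and leaves the access path to an optimizer, while a navigational API positions a cursor directly on an index and reads the row. The example below is generic and hypothetical — a binary search over a sorted in-memory array stands in for a database index, and the names reflect no particular product's API.

```c
#include <stddef.h>

/* SQL style (declarative; access path chosen by the optimizer):
 *   SELECT price FROM quotes WHERE symbol_id = 42;
 */

typedef struct { int symbol_id; double price; } quote_t;

/* Navigational style: position a "cursor" on the index by key,
 * then read the row directly. 'idx' must be sorted by symbol_id. */
const quote_t *find_quote(const quote_t *idx, size_t n, int key) {
    size_t lo = 0, hi = n;                  /* binary search on the key */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (idx[mid].symbol_id < key) lo = mid + 1; else hi = mid;
    }
    return (lo < n && idx[lo].symbol_id == key) ? &idx[lo] : NULL;
}
```

The navigational form is deterministic — the developer knows exactly which index is traversed and at what cost — which is one of the trade-offs the paper weighs against SQL's familiarity.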
In-memory vs. RAM-Disk Databases: A Linux-based Benchmark
A new type of DBMS, the in-memory database system (IMDS), claims breakthrough performance and availability via memory-only processing. But doesn’t database caching, or using a RAM-disk, achieve the same result with a traditional (disk-based) database? This benchmark tests eXtremeDB against a widely used embedded database, in both disk-based and RAM-disk modes. Deployment on RAM-disk boosts the traditional database by as much as 74 percent, but it still lags the IMDS substantially. Read about the architectural reasons for this disparity.
The Role of In-memory Database Systems for Routing Table Management in IP Routers
Core Internet bandwidth grows at triple the rate of CPU power, but high-value applications depend on managing much more data traffic at the network’s edge. This requires rapid evolution of routing table management (RTM) software within IP routers. This paper examines using in-memory database systems (IMDSs) to add RTM development flexibility, data integrity and fault tolerance. It provides performance examples on Linux and Windows 2000. This embedded database solution adds to vendors’ ability to produce new generations of routers faster and at less cost, improving their competitive position.
Re-inventing Data Management For Intelligent Devices
Intelligent devices such as set-top boxes, consumer electronics, and networking gear are adding software “smarts” and managing larger volumes of more complex data – a challenge typically met with embedded database management systems (DBMSs). But traditional databases, with roots in business processing, present CPU and memory requirements that are too expensive for price-sensitive high-tech gear. This paper examines the emerging on-device database requirements, and looks at one in-memory database, eXtremeDB, developed in response to these needs.
Portability Techniques for Embedded Systems
Whether an embedded systems database is developed for a specific application or as a commercial product, portability matters. Most embedded data management code is still “homegrown,” and when external forces drive an operating system or hardware change, data management code portability saves significant development time. This is especially important since increasingly, hardware’s lifespan is shorter than firmware’s. For database vendors, compatibility with the dozens of hardware designs, operating systems and compilers used in embedded systems provides a major marketing advantage.
Distributed Database Systems and Edge/Fog/Cloud Computing
A distributed database system is one in which the data belonging to a single logical database is distributed across two or more physical databases. Beyond that simple definition, there are a confusing number of possibilities for when, how, and why the data is distributed. Some are applicable to edge and/or fog computing, others to fog and/or cloud computing, and some across the entire spectrum of edge, fog and cloud computing.
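One of the distribution policies in that design space — hash partitioning — can be sketched in a few lines: each record's key deterministically selects which physical database (shard) holds it. This is a generic illustration under stated assumptions (a 64-bit integer key and a fixed shard count); replication and range partitioning, which the paper also covers, work differently.

```c
#include <stdint.h>

/* Hash partitioning sketch: mix the key (splitmix64 finalizer) so
 * nearby keys spread evenly, then reduce to a shard index.
 * Every node computes the same shard for the same key. */
unsigned shard_for_key(uint64_t key, unsigned n_shards) {
    key ^= key >> 30; key *= 0xbf58476d1ce4e5b9ULL;
    key ^= key >> 27; key *= 0x94d049bb133111ebULL;
    key ^= key >> 31;
    return (unsigned)(key % n_shards);
}
```

A fixed modulus like this forces wholesale data movement when `n_shards` changes, which is why elastic deployments often layer consistent hashing on top of the same idea.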
Exploring Code Size and Footprint, and In-memory Database Techniques to Minimize Footprint
The terms ‘code size’ and ‘footprint’ are often used interchangeably. But they are not the same; code size is a subset of footprint. This paper will explain the distinction and its relevance, then describe some of the techniques employed within eXtremeDB to minimize footprint.
A Kernel Mode Database System for High Performance Applications
Typically viewed as the lowest-level software abstraction layer, the kernel is responsible for resource allocation, scheduling, low-level hardware interfaces, networking, security and other integral tasks. Certain software categories, such as security applications (access control systems, firewalls, etc.) and operating system monitors, commonly place their functions in the operating system kernel and need local, high-performance data sorting, storage and retrieval.