MckoiDDB is a distributed database system designed for applications that need low latency database queries over datasets that span a cluster of servers in a network. MckoiDDB provides the system for managing the storage of data over the network, together with a client API for accessing and updating that data in a way that is familiar and intuitive to database developers. MckoiDDB can scale up or down while remaining online as network resources are added or removed, and its adaptable design can represent many different logical data models (tabular, graph, file system, etc.).
MckoiDDB is intended to be installed on high-speed private networks, or on instances provided by cloud service providers.
MckoiDDB is a transactional system allowing for complex and consistent data model design. MckoiDDB implements Multiversion Concurrency Control (MVCC), which means a MckoiDDB transaction is a fully isolated snapshot view with strong consistency guarantees. MVCC transactions ensure that your application can never observe a version of the database containing partially updated or partially committed data, or any indeterminate state, within a given partition.
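The snapshot behavior described above can be sketched in a few lines. This is an illustrative model of MVCC in general, not the MckoiDDB API: each committed version of the store is immutable, and a transaction keeps the version that was current when it began.

```python
# Minimal sketch of MVCC snapshot isolation (illustrative only; the class
# and method names are hypothetical, not MckoiDDB's).

class Snapshot:
    def __init__(self, data):
        self._data = dict(data)  # an isolated, consistent view

    def get(self, key):
        return self._data.get(key)

class VersionedStore:
    def __init__(self):
        self.versions = [{}]  # version 0 is the empty database

    def begin(self):
        # A transaction sees the latest committed version at start time.
        return Snapshot(self.versions[-1])

    def commit(self, updates):
        # A commit publishes a brand-new version; readers holding older
        # snapshots are unaffected and never see a partial update.
        new_version = dict(self.versions[-1])
        new_version.update(updates)
        self.versions.append(new_version)

store = VersionedStore()
store.commit({"balance": 100})
txn = store.begin()            # snapshot sees balance == 100
store.commit({"balance": 50})  # a concurrent writer commits
assert txn.get("balance") == 100  # the older snapshot is unchanged
```

A reader holding `txn` may see an older version of the data, but always a complete and consistent one.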
Under the hood, MckoiDDB's versioning system allows a database client to make extensive changes to a locally visible version of the database which, when complete, is committed to produce a new snapshot visible to all. MckoiDDB implements Optimistic Concurrency Control, meaning any conflicting changes made by concurrent database clients are detected at commit time. This design allows MckoiDDB to be lock-free (and therefore deadlock-free).
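The commit-time conflict check at the heart of optimistic concurrency control can be sketched as follows. This is a generic illustration under assumed names (`OCCStore`, a write-set per transaction), not MckoiDDB's implementation:

```python
# Sketch of optimistic concurrency control: no locks are taken; instead,
# each transaction records the version it started from, and at commit time
# its writes are checked against everything committed since then.

class OCCStore:
    def __init__(self):
        self.data = {}
        self.version = 0
        self.commit_log = []  # list of (version, keys_written)

    def begin(self):
        return {"start": self.version, "writes": {}}

    def commit(self, txn):
        # Conflict: a concurrent commit touched a key this txn also wrote.
        for v, keys in self.commit_log:
            if v > txn["start"] and keys & set(txn["writes"]):
                return False  # detected at commit time; caller may retry
        self.version += 1
        self.data.update(txn["writes"])
        self.commit_log.append((self.version, set(txn["writes"])))
        return True

store = OCCStore()
t1 = store.begin(); t1["writes"]["x"] = 1
t2 = store.begin(); t2["writes"]["x"] = 2
assert store.commit(t1) is True
assert store.commit(t2) is False  # t2 conflicts with t1's committed write
```

Because no transaction ever waits on another, there is nothing to deadlock on; the cost is that a conflicting transaction must be retried.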
MckoiDDB is a low latency database appropriate for interactive applications with high aggregate query throughput, such as web applications. High throughput is achieved through decentralization: all data managed by MckoiDDB is replicated over multiple servers by default, so when any piece of data is requested there are multiple servers it can be fetched from, and query load is distributed among them.
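The load-spreading effect of replication can be shown with a small sketch. The placement map and server names below are hypothetical, and a real system would weigh replica choice by load or locality rather than picking at random:

```python
# Illustrative sketch of spreading read load across replicas (not the
# MckoiDDB scheduler): each block of data lives on several servers, and a
# client picks one of them per request.

import random

# Hypothetical placement: block id -> servers holding a replica of it.
replicas = {
    "block-17": ["server-a", "server-b", "server-c"],
}

def pick_replica(block_id):
    # Any replica can serve the read, so successive requests for the same
    # block are naturally distributed over the servers that hold it.
    return random.choice(replicas[block_id])

assert pick_replica("block-17") in {"server-a", "server-b", "server-c"}
```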
Reading data in a MckoiDDB database is handled in an independent, decentralized way by each database client. Updating a database is harder: as experts in scalable database infrastructure will rightly explain, a reliable, highly decentralized system that supports a strongly consistent state is an impossible goal (see the CAP theorem). To work within these limits, MckoiDDB has settled on an architecture in which only a single function must be processed serially when a commit happens. This function, called the Consensus Function, decides whether a proposed version of the database is allowed to commit. When it is, the changes are merged into the current snapshot using a number of tools that can logically move data very efficiently. While this method will eventually reach a scalability limit, MckoiDDB also provides a way to partition a database instance into multiple instances (a feature called sharding), allowing the application developer to sacrifice cross-partition consistency for scalability when needed. In MckoiDDB, partitions are first-class concepts that can be created, populated, and destroyed extremely efficiently.
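The role the text assigns to the Consensus Function can be sketched as a single serialized decision point. This is a conceptual model with assumed names and an artificially strict accept rule; the real function merges non-conflicting changes rather than rejecting every stale proposal:

```python
# Conceptual sketch of a serialized commit decision point. Because every
# proposal passes through one function, one at a time, the accept/reject
# decision is strongly consistent even though reads are decentralized.

def consensus_function(current_version, proposal):
    # Simplified rule: accept only proposals built against the latest
    # version. (A real system would merge compatible changes instead.)
    return proposal["base"] == current_version

def run_commit_loop(proposals):
    current = 0
    results = []
    for p in proposals:  # proposals are processed serially
        if consensus_function(current, p):
            current += 1
            results.append(("accepted", current))
        else:
            results.append(("rejected", current))
    return results

out = run_commit_loop([{"base": 0}, {"base": 0}, {"base": 1}])
assert out == [("accepted", 1), ("rejected", 1), ("accepted", 2)]
```

Only this decision point is serialized; everything before it (building the proposed version) and around it (serving reads) remains decentralized, which is why the bottleneck it creates is small.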
MckoiDDB has been carefully engineered to be extensible. Different data model APIs are supported as add-on packages, and new data models can be developed independently. The main release includes two data models: a Simple Database API that supports File and Table data structures with concurrent consistency rules, and a graph database model with a simple object database API. We intend to support other structured models in the future, such as the relational (SQL) data model.