As of version 4.7, the DEX high-performance graph database includes high-availability features, best suited for applications with a large request load. DEX high availability (DEXHA) enables multiple replicas to work together, providing greater scalability for DEX applications.
This chapter covers the architecture of the DEXHA features, the configuration required to enable them, and examples of typical usage scenarios.
DEXHA allows horizontal scaling of read operations, while writes are managed centrally. Future work on DEXHA will provide fault tolerance and master re-election.
DEXHA is a software feature which is enabled through the license; the free evaluation version of DEX does not include it by default. More information about the licenses can be found in the 'Introduction' section.
DEXHA provides a horizontally scalable architecture that allows DEX-based applications to handle larger read-mostly workloads.
DEXHA has been designed to minimize the developer's effort when moving from a single-node installation to a multi-node HA-enabled installation. In fact, it does not require any change in the user application: it is simply a matter of configuration.
To achieve this, several DEX slave databases work as replicas of a single DEX master database, as seen in the figure below. Thus, read operations can be performed locally on each node and write operations are replicated and synchronized through the master.
Figure 7.1 shows all components in a basic DEXHA installation:
The master is responsible for receiving write requests from a slave and redirecting them to the other slave instances. At the same time, the master itself also plays the role of a slave.
Only a single node of the cluster can be configured to be the master. The election of the master is automatically done by the coordinator service when the system starts.
The master is in charge of the synchronization of write operations with the slaves. To do this task it manages a history log where all writes are serialized. The size of this log is limited, and it can be configured by the user.
Slaves are exact replicas of the master database; they can therefore locally perform read operations on their own without requiring synchronization.
However, write operations must be synchronized with the master in order to preserve data consistency. These writes are eventually propagated from the master to the other slaves, so the result of a write operation is not immediately visible on all slaves. By default, these synchronizations take place during a write operation; optionally, a polling interval can be configured by the user to force periodic synchronization.
It is not mandatory to have a slave in the architecture, as the master can work standalone.
Coordinator service: Apache ZooKeeper
A ZooKeeper cluster is required to perform the coordination tasks, such as the election of the master when the system starts.
The size of the ZooKeeper cluster depends on the number of DEX instances. In any case, the size of the ZooKeeper cluster must be an odd number, because ZooKeeper requires a majority quorum: an ensemble of 2n+1 servers tolerates the failure of up to n of them.
DEX v4.7 works with Apache ZooKeeper v3.4. All our tests have been performed using v3.4.3.
As DEX is an embedded graph database, a user application is required for each instance. As it has already been mentioned, moving to DEXHA mode does not require any update in the user application.
Note that the user application can be developed for all the platforms and languages supported by DEX: for the current version, it can run on Windows, Linux or MacOSX, using Java, .NET or C++.
The load balancer redirects the requests to each of the running applications (instances).
The load balancer is not part of the DEX technology, therefore it must be provided by the user.
In order to achieve horizontal scalability, this redistribution of the application requests must be done efficiently. A round-robin approach is a good starting solution, but depending on the application requirements smarter strategies may be required. In fact, using existing third-party solutions is advisable.
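As an illustration of the round-robin approach mentioned above, the following sketch dispatches requests over a fixed list of backends. It is a hypothetical example, not part of DEX or of any particular load balancer; the backend addresses are invented for the illustration.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin dispatcher over a fixed list of backends."""

    def __init__(self, backends):
        # cycle() yields the backends in circular order, forever.
        self._backends = cycle(backends)

    def next_backend(self):
        # Each call returns the next backend in circular order.
        return next(self._backends)

# Hypothetical DEX application instances behind the balancer.
balancer = RoundRobinBalancer(["192.168.1.3:8080", "192.168.1.5:8080"])
print(balancer.next_backend())  # 192.168.1.3:8080
print(balancer.next_backend())  # 192.168.1.5:8080
print(balancer.next_backend())  # 192.168.1.3:8080
```

A production balancer would also track backend health and remove failed instances from the rotation, which is one reason a third-party solution is preferable.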
More information about load-balancing strategies and available solutions can be found in [this article][URL-Load_balancing].
Now that the pieces of the architecture are clear, let us see how DEXHA behaves in typical operations and scenarios using these components.
The first time a DEX instance starts, it registers itself with the coordinator service. The first instance to register becomes the master; if a master already exists, the instance becomes a slave.
As all DEX slave databases are replicas of the DEX master database, slaves can answer read operations by performing the operation locally. They do not need to synchronize with the master.
In order to preserve data consistency, write operations require slaves to be synchronized with the master: the slave sends the write to the master, the master serializes it in its history log, and the write is then propagated to the remaining slaves.
If two slaves perform a write operation on the same object at the same time, it may result in a lost update, just as may happen in a single-instance DEX installation when two different sessions write to the same object at the same time.
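The interplay between the master's bounded history log and slave synchronization can be sketched with a simplified in-memory model. This is illustrative only: all names are invented, everything runs in one process rather than over a network, and the log is bounded by an entry count instead of the time window DEXHA actually uses.

```python
class Master:
    """Toy model: serializes writes into a bounded history log."""

    def __init__(self, history_limit):
        self.history_limit = history_limit  # max retained writes (stand-in for the time window)
        self.log = []                       # list of (position, write)
        self.position = 0                   # total writes ever accepted

    def accept_write(self, write):
        self.position += 1
        self.log.append((self.position, write))
        # Drop entries that fall outside the retention window.
        if len(self.log) > self.history_limit:
            self.log.pop(0)

    def writes_since(self, position):
        """Return pending writes for a slave, or None if it fell too far behind."""
        if self.log and position < self.log[0][0] - 1:
            return None  # slave's state predates the history log: request rejected
        return [w for (p, w) in self.log if p > position]


class Slave:
    """Toy model: applies writes pulled from the master."""

    def __init__(self):
        self.position = 0
        self.applied = []

    def sync(self, master):
        pending = master.writes_since(self.position)
        if pending is None:
            raise RuntimeError("slave too far behind; full resync required")
        self.applied.extend(pending)
        self.position += len(pending)

master = Master(history_limit=10)
slave = Slave()
master.accept_write("add-node")
master.accept_write("add-edge")
slave.sync(master)
print(slave.applied)  # ['add-node', 'add-edge']
```

The rejection branch models what happens when a slave has not synchronized within the master's history window: the master can no longer replay the missing writes, so the slave's request is refused.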
Slave goes down
A failure in a slave under normal conditions does not affect the rest of the system. However, if the slave goes down in the middle of a write operation, the behavior of the rest of the system depends on the use of transactions:
Slave goes up
When a DEX instance goes up, it registers itself with the coordinator. The instance will become a slave if there is already a master in the cluster.
If polling is enabled for the slave, it will immediately synchronize with the master to receive all pending writes. On the other hand, if polling is disabled, the slave will synchronize when a write is requested (as explained previously).
This is a first version of DEXHA, so although it is fully operational, some important functionality required to assure complete high availability of the system is not yet available. Subsequent versions will focus on the following features:
Master goes down
A failure in the master leaves the system non-operational. In future versions this scenario will be handled correctly by automatically converting one of the slaves into the master.
A failure during the synchronization of a write operation between a master and a slave leaves the system non-operational. For instance, a slave could fail during the performance of a write operation enclosed in a transaction, or there could be a general network error.
This scenario requires that the master should be able to abort (rollback) a transaction. As DEX does not offer that functionality, these scenarios cannot currently be solved. DEXHA will be able to react when DEX implements the required functionality.
A complete installation includes all the elements previously described in the architecture: DEX (with the DEXHA configuration), the coordination service (ZooKeeper) and the load balancer. The last one is beyond the scope of this document because, as previously stated, it is the developer's decision which one best fits their specific system.
DEXHA is included in all distributed DEX packages, so it is not necessary to install any extra package to make the application HA-enabled; it is only a matter of configuration. DEX can be downloaded as usual from Sparsity's website. Use DEX to develop your application, and visit the DEX documentation site to learn how to use it.
DEXHA requires Apache ZooKeeper as the coordination service. The latest version of ZooKeeper (v3.4.3 at the time of writing) should be downloaded from the Apache website. Once downloaded, it must be installed on all the nodes of the cluster where the coordination service will run. Please note that Apache ZooKeeper requires Java to work; we recommend consulting the Apache ZooKeeper documentation for details about its requirements.
The configuration of Apache ZooKeeper can be a complex task, so we refer the user to the Apache ZooKeeper documentation for more detailed instructions.
This section does, however, cover the configuration of the basic parameters to be used with DEXHA, to serve as an introduction for the configuration of the ZooKeeper.
Basic ZooKeeper configuration can be performed in the `$ZOOKEEPER_HOME/conf/zoo.cfg` file. This configuration file must be installed on each of the nodes which is part of the coordination cluster.
`clientPort`: The port that listens for client connections; this is the port to which clients attempt to connect.
`dataDir`: The location where ZooKeeper stores the in-memory database snapshots and, unless otherwise specified, the transaction log of database updates. Please be aware that the device where the log is located strongly affects performance; a dedicated transaction log device is key to consistently good performance.
`tickTime`: The length of a single tick, which is the basic time unit used by ZooKeeper, measured in milliseconds. It is used to regulate heartbeats and timeouts. For example, the minimum session timeout is two ticks.
`server.x=[hostname]:nnnnn[:nnnnn]`: There must be one parameter of this type for each server in the ZooKeeper ensemble. When a server starts, it determines which server number it is by looking for the `myid` file in the data directory. This file contains the server number in ASCII, and it should match the `server.x` of this setting. Please take into account that the list of ZooKeeper servers used by the clients must exactly match the list in each one of the ZooKeeper servers.
For each server there are two port numbers `nnnnn`. The first port is mandatory: it is used by the ZooKeeper servers, acting as followers, to connect to the leader. The second one is only used when the leader election algorithm requires it. To test multiple servers on a single machine, different ports should be used for each server.
This is an example of a valid `$ZOOKEEPER_HOME/conf/zoo.cfg` configuration file:

```
tickTime=2000
dataDir=/var/lib/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888
```
As previously explained, enabling HA in a DEX-based application does not require any update of the user's application nor the use of any extra packages. Instead, just a few variables must be defined in the DEX configuration.
`dex.ha`: Enables or disables HA mode.
Default value: false
`dex.ha.ip`: IP address and port for the instance. This must be given as `ip:port`.
Default value: localhost:7777
`dex.ha.coordinators`: Comma-separated list of the ZooKeeper instances. For each instance, the IP address and the port must be given as `ip:port`. Moreover, the port must correspond to the `clientPort` given in the ZooKeeper configuration file.
Default value: ""
`dex.ha.sync`: Synchronization polling interval. If 0, polling is disabled and synchronization is only performed when the slave receives a write request; otherwise, this parameter sets the frequency with which slaves poll the master for pending writes. The polling timer is reset whenever the slave receives a write request, since at that moment it is (once again) synchronized.
The time is given in time-units: `<X>` is a number followed by an optional character representing the unit: `D` for days, `H` for hours, `M` for minutes, `s` for seconds, `m` for milliseconds and `u` for microseconds. If no unit character is given, seconds are assumed.
Default value: 0
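The time-unit format described above can be made concrete with a small parser. This is a hypothetical helper written for illustration, not part of the DEX API; it converts a value such as `600s` or `12H` into microseconds.

```python
# Multipliers to microseconds for each unit character described above.
# Note that the units are case-sensitive: 'M' is minutes, 'm' is milliseconds.
UNITS = {
    "D": 24 * 60 * 60 * 1_000_000,  # days
    "H": 60 * 60 * 1_000_000,       # hours
    "M": 60 * 1_000_000,            # minutes
    "s": 1_000_000,                 # seconds
    "m": 1_000,                     # milliseconds
    "u": 1,                         # microseconds
}

def parse_time_unit(value):
    """Parse a DEXHA-style time value like '600s' or '12H' into microseconds."""
    if value and value[-1] in UNITS:
        number, unit = value[:-1], value[-1]
    else:
        number, unit = value, "s"  # no unit character: seconds are assumed
    return int(number) * UNITS[unit]

print(parse_time_unit("600s"))  # 600000000
print(parse_time_unit("12H"))   # 43200000000
print(parse_time_unit("30"))    # 30000000 (seconds assumed)
```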
`dex.ha.master.history`: The history log is limited to a certain period of time: writes older than that period are removed from the log, and the master will not accept requests from slaves that have fallen behind it. For example, with `12H`, the master stores in the history log all write operations performed during the previous 12 hours, and rejects requests from any slave which has not been updated in the last 12 hours.
This time is given in time-units, as with the previous variable.
Default value: 1D
Please take into account that slaves should synchronize before the master's history log expires. This will happen naturally if the write ratio of the user's application is high enough; otherwise, you should set a polling value, which must be shorter than the master's history log time.
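For instance, a configuration along the following lines keeps the polling interval well inside the history window (the values here are chosen purely for illustration):

```
dex.ha.sync=1H
dex.ha.master.history=12H
```

With these settings a slave polls the master every hour, so it can never fall more than one hour behind, comfortably within the 12-hour history log.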
These variables must be defined in the DEX configuration file (`dex.cfg`) or set using the `DexConfig` class. More details on how to configure DEX can be found on the documentation site.
Figure 7.2 is an example of a simple DEXHA installation containing:
HAProxy is a free, fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. Check their documentation site for more details about the installation and configuration of this balancer.
The configuration file for the example would look like this:
```
global
    daemon
    maxconn 500

defaults
    mode http
    timeout connect 10000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend dex

backend dex
    server s1 192.168.1.3:8080
    server s2 192.168.1.5:8080

listen admin
    bind *:8080
    stats enable
```
In this example, the `$ZOOKEEPER_HOME/conf/zoo.cfg` configuration file for the ZooKeeper server would be:

```
tickTime=2000
dataDir=$ZOOKEEPER_HOME/var
clientPort=2181
initLimit=10
syncLimit=5
```
Please note that, as this is a single-node ZooKeeper cluster, the `server.x` variable is not necessary.
The DEX configuration file for the first instance (the master) would look like this:
```
dex.ha=true
dex.ha.ip=192.168.1.3:7777
dex.ha.coordinators=192.168.1.2:2181
dex.ha.sync=600s
dex.ha.master.history=24H
```
And this would be the content for the file in the second instance (the slave):
```
dex.ha=true
dex.ha.ip=192.168.1.5:7777
dex.ha.coordinators=192.168.1.2:2181
dex.ha.sync=600s
dex.ha.master.history=24H
```
The only difference between these two files is the value of the `dex.ha.ip` variable.
As seen in the ['Architecture' chapter][doc:Architecture], the role of the master is given to the first instance to start. Therefore, to make sure the master is the instance designated in the example, the ZooKeeper server must be started first, then the first DEX instance (the intended master), and finally the second instance (the slave).
Likewise, to shut down the system it is highly recommended that the slaves are stopped first, followed by the master.