To achieve high availability with Deskpro, you need to create a server infrastructure capable of handling failure. This is less a Deskpro problem than a general architecture problem.
There are four main places where you need to handle redundancy for HA. We’ll go over each in this document and provide links to further reading.
MySQL Database
To make MySQL highly available, you need some kind of replication and failover strategy.
The two most common options:
- MySQL’s own master-slave replication is easy to configure and well understood. When a failure occurs, you need a way to promote a slave to be the new master. There are many possible solutions to this, such as mysqlfailover.
- There is also a more automated clustering technology called Galera Cluster, which automates some of the tedious aspects of MySQL replication and failover. It is a more “all in one” solution that doesn’t require as much custom scripting to handle provisioning and failover. Note that although Galera is a multi-master cluster, Deskpro itself does not yet support multi-master setups. If you choose to use Galera, please refer to the article Using Deskpro With Galera Cluster for instructions.
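As a rough sketch of the master-slave approach, the slave is pointed at the master with `CHANGE MASTER TO` (the hostname, credentials, and log coordinates below are placeholders, and the master is assumed to already have binary logging and a replication user configured):

```
-- Run on the slave (hypothetical host/credentials):
CHANGE MASTER TO
  MASTER_HOST = 'db-master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;

-- Verify replication is running:
SHOW SLAVE STATUS\G
```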
When a failure occurs and a new master is chosen, you need a way to update Deskpro to use the new master server. You can of course choose to do this manually by editing Deskpro’s configuration files to change the database host. But there are ways to automate this too.
For example, you can combine a failover script with a MySQL proxy such as ProxySQL. With a proxy like ProxySQL in place, Deskpro connects to the proxy, and the proxy dynamically determines the best host to route the connection to.
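A minimal sketch of registering database backends via ProxySQL’s admin interface (hostnames are placeholders; your failover script would update this table when a new master is promoted):

```
-- In the ProxySQL admin interface (port 6032 by default):
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (0, 'db-master.example.com', 3306),
       (0, 'db-slave.example.com', 3306);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```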
Web Servers
You should create multiple web servers with identical configuration (i.e. every Deskpro config file points at the same database). A load balancer such as HAProxy can be installed in front of your servers to distribute requests. HAProxy is able to detect a failed server and remove it from the rotation.
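A minimal HAProxy configuration sketch for this setup (addresses and names are placeholders). The `check` keyword enables health checks, so a failed web server is removed from the rotation automatically:

```
frontend helpdesk
    bind *:80
    default_backend deskpro_web

backend deskpro_web
    balance roundrobin
    option httpchk GET /
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```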
Deskpro does not normally store any important application state in the filesystem aside from configuration. That means web servers are stateless and can be added/removed at will.
Updating Deskpro becomes slightly more difficult because you need to make sure file updates are made to every server.
Proxies
Proxies can be useful both for managing database connections (e.g. with ProxySQL) and web requests (e.g. with HAProxy). But when you employ a proxy, the proxy itself becomes a single point of failure: if the proxy server crashes, everything breaks.
Proxy servers “do less” and are generally expected to be more stable, but you still need a strategy for failing over to a redundant standby proxy in the event of a problem.
How you do this will depend on your provider. For example, many providers will allow you to obtain a floating IP address that you can attach/detach on-the-fly.
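On self-managed infrastructure, one common way to move a floating IP between an active proxy and its standby is VRRP via keepalived (this is a general technique, not Deskpro-specific; the interface and addresses below are placeholders). The standby proxy runs the same configuration with `state BACKUP` and a lower priority:

```
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.0.0.100    # the floating IP clients connect to
    }
}
```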
Blobs / File Attachments
Deskpro will store blobs (file attachments) in the database by default. However, storing blobs in the database is inefficient and we often suggest moving blobs into a more suitable store.
In a HA setup, web servers become “dumb” (necessary so you can add/remove them at will). This means it’s not possible to write blobs to the local filesystem.
There are two other options:
- You can create a shared filesystem used by every web server node. For example, you could use NFS to create a networked filesystem. If you go this route, you just configure Deskpro to write blobs to your shared filesystem. From the Deskpro point of view, it’s just a regular filesystem.
- You can use Deskpro’s Amazon S3 storage mechanism where blobs are stored in the cloud.
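For the shared-filesystem option, the NFS export would be mounted identically on every web server, e.g. via an `/etc/fstab` entry like the following (server name and paths are hypothetical):

```
# /etc/fstab on each web server
nfs-server.example.com:/export/deskpro-blobs  /mnt/deskpro-blobs  nfs  defaults  0  0
```

Deskpro would then be configured to store blobs under the mount point (here, `/mnt/deskpro-blobs`) on every node.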
Elasticsearch
Elasticsearch includes clustering features right out of the box. You just need to make sure you provision enough nodes to achieve high availability.
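For example, a minimal `elasticsearch.yml` sketch for a three-node cluster, assuming an Elasticsearch version that uses Zen discovery (node names and hosts are placeholders):

```
# elasticsearch.yml on each node
cluster.name: deskpro-search
node.name: es-node-1
discovery.zen.ping.unicast.hosts: ["es-node-1", "es-node-2", "es-node-3"]
# With 3 master-eligible nodes, require a quorum of 2 to avoid split-brain:
discovery.zen.minimum_master_nodes: 2
```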