Active / Active High Availability Setup for OpenVPN Access Server



Introduction

For your convenience, OpenVPN Access Server offers an active / passive high availability mode out of the box that satisfies most high availability needs. In some cases, however, you may want to leverage the computing power that would otherwise remain unused in that setup. An active / active HA configuration is especially suitable when the link between the servers is not multicast / broadcast compatible (e.g. Amazon AWS), but both servers can reach the same network resources (e.g. network subnets, services, etc.). Organizations needing more than two instances in their high availability setup may also find this deployment helpful.

Please be advised that unlike the built-in failover mode, which requires only one license purchase per deployment, you will need to purchase a separate license key for each node you want to add to an active / active cluster. Also, this setup will only work if you use open source based client software, since the Connect Client that comes with the Access Server has security requirements that prevent it from automatically switching between servers.

Prerequisites

Before you begin, you will need the latest version of GlusterFS installed on your servers. If you are using our Hyper-V or ESXi virtual appliances (version 2.0.5 or above), you can simply install it with the apt-get install glusterfs-server command. You can also find the latest version of GlusterFS at www.gluster.org.

Please note that the instructions provided in this article are designed for servers within a trusted segment, not for servers communicating across an untrusted link such as the Internet. Similar to the Network File System (NFS), the GlusterFS protocol is not encrypted, which can lead to accidental disclosure of information about your OpenVPN users if it is used over an untrusted link. For such deployments, you are encouraged to add a secondary encryption layer to your GlusterFS connection, such as an IPsec transport or an OpenVPN point-to-point connection; setting up these encryption layers is not covered in this tutorial. For more information about security and the firewall rules needed for GlusterFS, please visit the GlusterFS troubleshooting page.

IMPORTANT: In order to establish quorum, it is important that you have at least three GlusterFS servers available for GlusterFS peering (not all GlusterFS servers have to be part of an OpenVPN Access Server active / active cluster). Otherwise, your HA cluster may become unstable when one of your peers goes down unexpectedly.
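
For example, on the Ubuntu-based virtual appliances mentioned above, a typical installation would look like the following (package availability and names may vary slightly between distribution releases):

apt-get update
apt-get install glusterfs-server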

Server Preparation

In order to have an active / active HA setup, you will first need to install OpenVPN Access Server on all of the machines that will be part of your HA deployment. Configure each of these servers as you normally would, as if it were an independent server. All servers should be set up in the same manner, with the exception that users and groups only have to be created on one server of your choosing (all other servers can be left unset or empty). Please do not use PAM as the authentication method if you would like to share your user and group data between the servers in your HA cluster. To avoid routing conflicts, you must set up a different VPN subnet for each node inside your cluster when using Layer 3 routing mode. For example, if the VPN subnet for node 1 is 192.168.100.0/24, node 2 of your cluster must not use the same subnet. If your address space is limited, you can subdivide a single subnet for your nodes to use (e.g. 192.168.100.0/25 for node 1 and 192.168.100.128/25 for node 2), as sketched below.
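
As a rough sketch of that subdivision, each node's VPN subnet can be set in the Admin Web UI under VPN Settings, or from the command line with sacli; the vpn.daemon.0.client.network and vpn.daemon.0.client.netmask_bits keys used here are assumed to apply to your default daemon configuration and may differ between Access Server versions:

On node 1:
/usr/local/openvpn_as/scripts/sacli --key "vpn.daemon.0.client.network" --value "192.168.100.0" ConfigPut
/usr/local/openvpn_as/scripts/sacli --key "vpn.daemon.0.client.netmask_bits" --value "25" ConfigPut

On node 2:
/usr/local/openvpn_as/scripts/sacli --key "vpn.daemon.0.client.network" --value "192.168.100.128" ConfigPut
/usr/local/openvpn_as/scripts/sacli --key "vpn.daemon.0.client.netmask_bits" --value "25" ConfigPut

Changes made with ConfigPut take effect the next time the Access Server service is started or restarted.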

Once this is complete, SSH or console into all of your servers and issue the following commands (on GlusterFS quorum servers, skip the first command):

/etc/init.d/openvpnas stop
mkdir -p /usr/local/openvpn_as/etc/db_remote
mkdir /gfs/

On the server where you have created users and groups, issue the following commands (if you are using LDAP or RADIUS and you have no pre-created local users, you can run these on any one of your servers):

cd /usr/local/openvpn_as/etc/db/
mv certs.db userprop.db ../db_remote/

On the same server, add all of your other HA machines by peering with them using GlusterFS. To do so, use a command in the following format (repeat this command until all of your peers have been added):
gluster peer probe <hostname or IP address of HA machine or quorum server>

For example, if you have another HA machine with the IP address 9.0.0.1, issue the command:
gluster peer probe 9.0.0.1

Do note that you do not have to probe your own machine. If your HA setup only has two machines, the probe command only needs to be issued for the other peer. If you are using a hostname (instead of an IP address), you should enter this hostname in the /etc/hosts file to avoid excessive lookups. Afterwards, the following message should appear:

peer probe: success

If this is not the case, please refer to the GlusterFS troubleshooting guide for firewall rules and tips.

To check and make sure the peering connection is working correctly, issue the gluster peer status command. A screen similar to the following should appear:

Number of Peers: 1

Hostname: 9.0.0.1
Port: 24007
Uuid: ed2e61e1-420c-46fc-9b85-bc66bf04c0ef
State: Peer in Cluster (Connected)

After you have verified that all the peers are properly added, add the network volume by using the following command syntax (note that this time you will have to include the server you are currently on - all IP addresses or hostnames entered here should be reachable by all other hosts; in other words, do not use localhost or 127.0.0.1 in this statement):

gluster volume create openvpnas replica <num of HA nodes> transport tcp <ip/host1>:/usr/local/openvpn_as/etc/db_remote <ip/host2>:/usr/local/openvpn_as/etc/db_remote <ip/hostn>:/usr/local/openvpn_as/etc/db_remote

For example, if you have two nodes in your HA cluster, with IP addresses 9.0.0.1 and 9.0.0.2 respectively, enter this command on the same server from which you started the GlusterFS probing process (remember to include the machine you are on in this statement):

gluster volume create openvpnas replica 2 transport tcp 9.0.0.1:/usr/local/openvpn_as/etc/db_remote 9.0.0.2:/usr/local/openvpn_as/etc/db_remote

Following successful creation, the following message should appear:

volume create: openvpnas: success: please start the volume to access data

Start the volume by using the command:

gluster volume start openvpnas

After the volume is started, make sure the volume works by mounting it locally with the command:

mount.glusterfs 127.0.0.1:/openvpnas /gfs/

Running ls -la /gfs/ should show two files in the directory: certs.db and userprop.db. If you do not see them, stop and check your work, as the volume might not have been set up correctly.

Following a successful mount, add the mount command above to the /etc/rc.local file, just before the exit 0 line, on all nodes in the cluster as well as on the quorum servers, and then save the file (see the example below). To make sure the mount is activated upon boot, restart the server and then rerun the ls -la /gfs/ command to confirm that the two aforementioned files appear in the folder. Do not proceed unless the mounts are coming up successfully.
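
For reference, a minimal /etc/rc.local on one of the nodes could then look like this (your file may already contain other startup commands, which should be kept):

#!/bin/sh -e
# Mount the shared GlusterFS volume used by OpenVPN Access Server
mount.glusterfs 127.0.0.1:/openvpnas /gfs/
exit 0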

Server Configuration

Once you have restarted the nodes and are confident that the mount points are set up correctly, go ahead and stop the Access Server service on all of your nodes again by running the following command (you do not have to do this on any quorum servers):

/etc/init.d/openvpnas stop

Afterwards, open the /usr/local/openvpn_as/etc/as.conf file with a text editor such as nano, and change the certs_db and user_prop_db lines so that they read as follows:

certs_db=sqlite:////gfs/certs.db
user_prop_db=sqlite:////gfs/userprop.db

Note that there are four slashes after the colon (:) that follows the word sqlite. After making this change, start the servers again by issuing the following command:

/etc/init.d/openvpnas start

The servers should then start and use the new shared database. If you have previously created an admin user on the main server, you should now be able to use that username and password to log in to all nodes in the cluster. If any of your nodes are not coming up, make sure that the /gfs/ volume is mounted correctly and that the GlusterFS service has been started successfully, as shown below.
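
As a quick check (the glusterfs-server init script name used here is the one found on Debian/Ubuntu-based systems and may differ on other distributions), you can run:

mount | grep /gfs
/etc/init.d/glusterfs-server start
mount.glusterfs 127.0.0.1:/openvpnas /gfs/

The first command confirms that the volume is mounted; the last two start the GlusterFS service and remount the volume if it is missing.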

After this has been verified, log in to the administration interface on all of the nodes, then select Advanced VPN Settings in the left navigation bar. Scroll to the bottom of the page, where you will find the Additional OpenVPN Config Directives (Advanced) section. In the Client Config Directives text box, enter the following (adjust the port numbers and omit protocol entries as appropriate if you have changed these from the defaults):

-remote *
remote [ip addr 1] 443 tcp
remote [ip addr 1] 1194 udp
remote [ip addr 2] 443 tcp
remote [ip addr 2] 1194 udp
remote [ip addr n] 443 tcp
remote [ip addr n] 1194 udp

For example, for a cluster setup with IP addresses 9.0.0.1 and 9.0.0.2, enter the following on every node in the cluster (repeat the remote lines until all nodes in the cluster are represented, and enter this same information on all nodes inside the cluster):
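
Following the template above, the two-node example would read:

-remote *
remote 9.0.0.1 443 tcp
remote 9.0.0.1 1194 udp
remote 9.0.0.2 443 tcp
remote 9.0.0.2 1194 udp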

Tip 1: If you would like your clients to randomly choose a server entry instead of the order you have specified, add the line remote-random (by itself) to the Client Config Directives text box above.

Tip 2: To assign priority to a server entry, repeat the entry multiple times in the text box above. This is especially useful when you have a server with larger capacity and when operating in conjunction with remote-random.

Tip 3: Whenever possible, use UDP instead of TCP for your VPN connections. Using TCP on unstable or high latency links may result in connection instabilities or performance degradations. To increase the priority for UDP links, specify the udp lines multiple times until you reach the desired ratio.

For example, the following client config directives demonstrate the tips listed above:

-remote *
remote-random
remote 9.0.0.1 1194 udp
remote 9.0.0.1 1194 udp
remote 9.0.0.1 1194 udp
remote 9.0.0.1 443 tcp
remote 9.0.0.2 1194 udp
remote 9.0.0.2 1194 udp
remote 9.0.0.2 443 tcp
remote 9.0.0.3 1194 udp
remote 9.0.0.3 1194 udp
remote 9.0.0.3 1194 udp
remote 9.0.0.3 1194 udp
remote 9.0.0.3 1194 udp

Since remote-random picks among the 12 remote lines above with equal probability, each line represents a 1-in-12 (roughly 8.3%) chance, so there is a:

25% chance of using 9.0.0.1 UDP port 1194
8.3% chance of using 9.0.0.1 TCP port 443
16.7% chance of using 9.0.0.2 UDP port 1194
8.3% chance of using 9.0.0.2 TCP port 443
41.7% chance of using 9.0.0.3 UDP port 1194

for each VPN connection attempt. If a particular server fails, the client will automatically pick another server to use based on these probability ratios.