The Load Balancing Service in vCHS

In this blog post I am going to describe the capabilities of the Load Balancing service in vCHS.

This post isn't going to focus on a specific use case (although I may refer to various software and solutions as examples). Instead, I will focus on the technical capabilities.

I'd like to think of this article as a foundation that describes the capabilities, the flexibility, and the consumption principles of the load balancing service. I can then refer back to this post when I discuss specific use cases in the future. You may also use this article as a how-to guide applied to your own use cases.

Background

Let's start from the main plumbing. This picture illustrates our starting point:

In a nutshell, we have an on-premise data center as well as a subscription to a vCHS virtual data center. In addition, there is a user connecting from the Internet (it could be a partner, a customer, or an employee on the road).

Please note that 1.1.1.1, 2.2.2.2, and 3.3.3.3 stand in for real public IP addresses, which I have obfuscated for security reasons.

The two virtual machines on the "Front-End Network" in vCHS are the VMs we need to load balance. For simplicity, we will refer to these two VMs as Web1 (192.168.109.5) and Web2 (192.168.109.6). These could be Microsoft SharePoint front-end servers or any other kind of web servers. Later, I will configure load balancing rules that only target port 80 (the same exercise would apply to port 443).
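
By the way, if you want to reproduce this exercise with throwaway VMs, any web server will do. For instance, here is a tiny stand-in (just an illustrative sketch, not part of the vCHS setup) that answers on port 80 and identifies which back end served the request; this will come in handy later to observe the round-robin behavior:

```python
# Run one copy on Web1 and one on Web2; each reply names its server,
# which makes the round-robin behavior easy to observe later on.
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

class WhoAmIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = f"served by {socket.gethostname()}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Binding to port 80 typically requires elevated privileges.
    HTTPServer(("0.0.0.0", 80), WhoAmIHandler).serve_forever()
```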

IPSec VPN Configuration

First things first: we will configure an IPSec VPN between the on-premise data center and the vCHS virtual data center. You may want or need to do this because all traffic originating from your data center needs to be encrypted, or simply because your internal end users do not even have access to the Internet.

Setting up a VPN is optional but it is useful, in the context of this article, to demonstrate the flexibility of the load balancing features in vCHS.

Describing how to set up an IPSec VPN is beyond the scope of this article. See this good article from Chris Colotti if you want to know how to do that.

The end result is depicted in the picture below:

Now that we are done with the VPN, we are able to reach the 192.168.109.0/24 network from the 192.168.0.0/24 network (and vice versa). Please note that you will need to configure the firewalls properly. We will discuss firewall rule configuration later, towards the end of the post.
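
Once the tunnel is up, a quick way to verify reachability from a machine on the 192.168.0.0/24 network is a simple TCP probe against the two web servers. This is just an illustrative sketch (it assumes a firewall rule that allows direct traffic to the back ends, such as the diagnostic rule discussed towards the end of the post):

```python
import socket

# Web servers on the vCHS "Front-End Network" (as used in this article)
BACKENDS = ["192.168.109.5", "192.168.109.6"]

def tcp_probe(host, port=80, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for ip in BACKENDS:
    state = "reachable" if tcp_probe(ip) else "unreachable"
    print(f"{ip}:80 is {state} across the VPN tunnel")
```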

Load Balancing Configuration

Let's now get into how we can configure the Load Balancing service. This is the meat of the post.

Please note that, at the time of this writing, the load balancing configuration of the vCHS Gateway needs to be done in the vCloud Director interface.

You can open the Gateway properties in the vCHS portal and click on the "Manage Advanced Gateway Settings" button. This will open the vCD interface in the proper context.

The first step is to configure a so-called load balancing Pool. As hinted above, we will configure a pool that includes the two web servers (Web1 and Web2) to be balanced on port 80. We will use the Round-Robin algorithm in this exercise (see the sketch after the list below).

For the record, these are the algorithms we support (straight from the vCNS manual):

IP_HASH: Selects a server based on a hash of the source and destination IP address of each packet.

LEAST_CONNECTED: Distributes client requests to multiple servers based on the number of connections already on the server. New connections are sent to the server with the fewest connections.

ROUND_ROBIN: Each server is used in turn according to the weight assigned to it. This is the smoothest and fairest algorithm when the server's processing time remains equally distributed.

URI: The left part of the URI (before the question mark) is hashed and divided by the total weight of the running servers. The result designates which server will receive the request. This ensures that a URI is always directed to the same server as long as no server goes up or down.
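
To make the Round-Robin description concrete, here is a tiny, self-contained illustration of the selection logic applied to our two web servers. This is not vCHS code, just the algorithm: both members get an equal weight of 1, which degenerates to plain round-robin.

```python
from itertools import cycle

# Each (server, weight) pair; with equal weights this is exactly the
# plain round-robin behavior we use for Web1 and Web2.
POOL = [("192.168.109.5", 1), ("192.168.109.6", 1)]

def weighted_round_robin(pool):
    """Yield servers in turn, each repeated according to its weight."""
    expanded = [ip for ip, weight in pool for _ in range(weight)]
    return cycle(expanded)

picker = weighted_round_robin(POOL)
for request in range(4):
    print(f"request {request + 1} -> {next(picker)}")
# request 1 -> 192.168.109.5
# request 2 -> 192.168.109.6
# request 3 -> 192.168.109.5
# request 4 -> 192.168.109.6
```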

This is how the WebPool we have just created looks in the vCloud Director interface:

This is a detail of the pool configuration that lists its members:

Now that we have a Pool, we need to define a VIP (Virtual IP, or Virtual Server). When you hit the VIP, the Edge Gateway will balance the requests (using the round-robin algorithm) across Web1 and Web2 on the back end.

This is how I configured my VIP (called Web-VPN) in the vCloud Director interface:

Note the 192.168.109.200 address. That is the Virtual Server (or VIP) that points to the Pool we created above. Also note that we apply this configuration on the TechServices-GW-Default-routed network (which is what the pictures in this article label as the "Front-End Network").

How did I pick the .200 address? In order to pick a valid VIP you need to know a couple of things. First, you need to know the configuration of the subnet on that network. Second, you need to know the IP Range that has been assigned to that network (this is the pool of IPs that vCD will use to assign addresses automatically to VMs connected to this network). The VIP you choose must be a valid IP inside the subnet, but it also needs to fall outside of the IP Range pool to avoid conflicts.

The good news is that you can easily find all of this information in the vCHS portal:

The subnet specification is 192.168.109.0/24 and the IP Range is 192.168.109.3-192.168.109.100.
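
If you want to sanity-check a candidate VIP programmatically, the constraint is easy to express. Here is a minimal sketch, using the subnet and IP Range shown above (keep in mind you should also avoid addresses already in use on the segment, such as the gateway's own interface):

```python
import ipaddress

SUBNET = ipaddress.ip_network("192.168.109.0/24")
# Static IP Range that vCD uses for automatic VM assignment (from the portal)
RANGE_START = ipaddress.ip_address("192.168.109.3")
RANGE_END = ipaddress.ip_address("192.168.109.100")

def is_valid_vip(candidate):
    """A VIP must live inside the subnet but outside the vCD IP Range."""
    ip = ipaddress.ip_address(candidate)
    inside_subnet = ip in SUBNET
    inside_pool = RANGE_START <= ip <= RANGE_END
    return inside_subnet and not inside_pool

print(is_valid_vip("192.168.109.200"))  # True  -> usable as a VIP
print(is_valid_vip("192.168.109.50"))   # False -> collides with the IP Range
```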

Also note the 172-16-TechServices network (aka the "Private Network" in the drawings of this post). This network is routed via the same Gateway, and it represents a segment that can be the source of other connections to the web pool (for example, a network segment that hosts virtual desktops in the vCHS virtual data center).

For the purpose of this blog post, we are ignoring the TechServices-Default-Isolated network that exists in the virtual data center (as the name implies this is not even attached to the Gateway).

When we are done with this configuration, this is what happens from a load balancing perspective:

This picture should clarify the flow of the load balancing traffic both from the on-premise data center to the pool (via VPN) as well as from the Private Network (in vCHS) to the same pool:

Users connecting from the on-premise data center can connect to http://192.168.109.200 and will get their requests balanced between 192.168.109.5 and 192.168.109.6.

Similarly, virtual machines running in other networks inside the virtual data center (like virtual machines on the "Private Network") will experience the same behavior. Note that the Edge Gateway will automatically route traffic from 172.16.0.0/16 to 192.168.109.0/24. In fact, the virtual machine with IP 172.16.0.3 will be able to connect to http://192.168.109.200 and be redirected to the servers in the WebPool without any further configuration.

Ultimately, we also want to enable access to the same front-end web servers for users coming in from the Internet. The nice thing is that you don't need to configure another pool or re-configure the WebPool we already created. You only need to create another VIP and bind it to the same back-end pool.

This is how the new VIP (or Virtual Server) looks in the vCloud Director interface:

I am now using 2.2.2.2 as the VIP address. I don't have any additional public IP left on this Gateway that I can consume, so I decided to use the Edge's own IP address. That is the same IP I used to configure the VPN. If I had other public IPs available, I could have configured one of them here. Again, remember that 2.2.2.2 is a dummy IP I am using in this article; it represents the actual (obfuscated) IP address I used in our tests.

Note also that, this time, I have applied this configuration to the d0p1-extnetwork (which is the Edge Gateway Internet connection). And of course, I have bound this VIP to the WebPool I created previously.
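
As an aside, if you prefer automation over the vCD UI, the same pool / dual-VIP arrangement can in principle be expressed as an EdgeGatewayServiceConfiguration payload posted to the Gateway's configureServices action in the vCloud API. The sketch below is a rough, hypothetical illustration only: the session token, the gateway and network hrefs, and the second VIP's name are placeholders, the schema details should be double-checked against the vCloud API reference for your version, and keep in mind that configureServices replaces the gateway's existing service configuration.

```python
import requests  # third-party: pip install requests

# Hypothetical placeholders; both would come from a prior vCloud API
# login and query, and are NOT real values.
TOKEN = "<session-token>"
GW = "https://<vchs-api-host>/api/admin/edgeGateway/<gateway-id>"

def virtual_server(name, ip, interface_href):
    # One VirtualServer entry bound to the same WebPool; schema assumed.
    return f"""
    <VirtualServer>
      <IsEnabled>true</IsEnabled>
      <Name>{name}</Name>
      <Interface href="{interface_href}"/>
      <IpAddress>{ip}</IpAddress>
      <ServiceProfile>
        <IsEnabled>true</IsEnabled>
        <Protocol>HTTP</Protocol>
        <Port>80</Port>
        <Persistence><Method/></Persistence>
      </ServiceProfile>
      <Logging>false</Logging>
      <Pool>WebPool</Pool>
    </VirtualServer>"""

payload = f"""<?xml version="1.0" encoding="UTF-8"?>
<EdgeGatewayServiceConfiguration xmlns="http://www.vmware.com/vcloud/v1.5">
  <LoadBalancerService>
    <IsEnabled>true</IsEnabled>
    <Pool>
      <Name>WebPool</Name>
      <ServicePort>
        <IsEnabled>true</IsEnabled>
        <Protocol>HTTP</Protocol>
        <Algorithm>ROUND_ROBIN</Algorithm>
        <Port>80</Port>
      </ServicePort>
      <Member><IpAddress>192.168.109.5</IpAddress><Weight>1</Weight></Member>
      <Member><IpAddress>192.168.109.6</IpAddress><Weight>1</Weight></Member>
    </Pool>
    {virtual_server("Web-VPN", "192.168.109.200", "<front-end-network-href>")}
    {virtual_server("Web-Internet", "2.2.2.2", "<external-network-href>")}
  </LoadBalancerService>
</EdgeGatewayServiceConfiguration>"""

# Caution: this call replaces the existing service configuration, so a
# real script should read the current config, modify it, and write it back.
resp = requests.post(
    GW + "/action/configureServices",
    data=payload,
    headers={
        "x-vcloud-authorization": TOKEN,
        "Content-Type": "application/vnd.vmware.admin"
                        ".edgeGatewayServiceConfiguration+xml",
    },
)
resp.raise_for_status()  # on success the API returns a Task to poll
```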

When all is done, this is what happens to a user that comes in from the Internet:

When a user on the Internet connects to http://2.2.2.2 the Edge Gateway will balance those requests to Web1 (192.168.109.5) and Web2 (192.168.109.6) in the back.
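
A quick way to observe the round-robin behavior from the outside is to issue a handful of HTTP requests against the VIP and look at which back end answered. Here is a tiny sketch (it assumes each web server returns something that identifies it, such as the stand-in server shown earlier; each request opens a new connection, so the Edge balances them individually):

```python
import urllib.request

VIP = "http://2.2.2.2/"  # the external VIP (obfuscated, as noted above)

for i in range(4):
    with urllib.request.urlopen(VIP, timeout=5) as resp:
        body = resp.read(200).decode(errors="replace")
    # With round-robin, consecutive requests should alternate
    # between Web1 and Web2.
    print(f"request {i + 1}: {body.strip()[:60]}")
```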

Firewall Configuration

None of the above would work without the proper firewall configuration in place. Sure, you could disable all firewall services and everything would connect to everything, but this isn't how it works in real life.

So let's see how you can configure firewall rules to make the above load balancing configuration work properly while maintaining full control over who accesses what. Note that the configuration I describe below isn't to be taken as a best practice. Your needs may vary, and the information below only represents an example of what you can potentially do.

The configuration of the on-premise security infrastructure in my tests is fairly easy. I have simply configured my local firewall to deny all inbound connections and allow all outbound connections. This means that everything coming in will be rejected and everything going out will be allowed. Of course, you can narrow down what goes out based on your specific needs; for example, you may only want traffic on port 80, or traffic directed to a specific IP address (e.g. 192.168.109.200), to be allowed out. This is totally up to you.

The configuration for the vCHS virtual data center is slightly more interesting. First and foremost note that the Edge Gateway is the end-point of the VPN tunnel as well as the (virtual) entity that provides the load balancing service, as we have seen.

In addition to all that, the Edge Gateway is also the place where security rules get enforced. It's fair to see the Gateway as "the center of the universe" when it comes to network and security services in vCHS.

There are four firewall rules I have configured on my vCHS Gateway (one of which is optional) for the purpose of allowing internal and external users to connect to a pool of balanced web servers.

Firewall rules can be configured in the vCHS Portal, and this is what they look like in our tests (a small sketch that models these rules follows the list). Note that other rules exist, but they are used for other purposes independent of this load balancing exercise:

(to-LB-Pool-from-VPN) - From 192.168.0.0/24:Any To 192.168.109.5-192.168.109.6:Any

I used this rule temporarily while setting up the environment. It opens traffic on any protocol from all on-premise IPs directly to the two web servers in the pool. This was helpful for diagnosing problems (such as checking the VPN status by pinging Web1 and Web2 directly). Note that this rule is now disabled and is not used to allow inbound traffic to the VIP(s).

(to-LB-from-Internet) - From external:Any To 2.2.2.2:80 <TCP Protocol>

This rule allows Internet users to hit the externally published VIP.

(to-LB-from-VPN) - From 192.168.0.0/24:Any To 192.168.109.200:80 <TCP Protocol>

This rule allows VPN users to hit the internally published VIP.

(to-LB-from-172) - From 172.16.0.0/16:Any To 192.168.109.200:80 <TCP Protocol>

This rule allows virtual machines on the Private Network in the vCHS virtual data center to hit the internally published VIP.
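
To summarize the intent of these rules, here is a small, self-contained model of the rule set and its matching logic. This is just an illustration, not vCHS code: I am assuming first-match evaluation and a default-deny policy, and I am modeling the "external:Any" source as 0.0.0.0/0.

```python
import ipaddress

# (name, enabled, source, dest_range, dest_port) mirroring the rules above;
# dest_range is an inclusive (start, end) tuple, dest_port None means "Any".
RULES = [
    ("to-LB-Pool-from-VPN", False, "192.168.0.0/24",
     ("192.168.109.5", "192.168.109.6"), None),
    ("to-LB-from-Internet", True, "0.0.0.0/0",
     ("2.2.2.2", "2.2.2.2"), 80),
    ("to-LB-from-VPN", True, "192.168.0.0/24",
     ("192.168.109.200", "192.168.109.200"), 80),
    ("to-LB-from-172", True, "172.16.0.0/16",
     ("192.168.109.200", "192.168.109.200"), 80),
]

def allowed(src, dst, port):
    """First-match evaluation with an implicit default deny (assumed)."""
    s = ipaddress.ip_address(src)
    d = ipaddress.ip_address(dst)
    for name, enabled, src_net, (lo, hi), p in RULES:
        if not enabled:
            continue
        in_src = s in ipaddress.ip_network(src_net)
        in_dst = ipaddress.ip_address(lo) <= d <= ipaddress.ip_address(hi)
        if in_src and in_dst and (p is None or p == port):
            return name
    return None

print(allowed("192.168.0.10", "192.168.109.200", 80))  # to-LB-from-VPN
print(allowed("172.16.0.3", "192.168.109.200", 80))    # to-LB-from-172
print(allowed("198.51.100.7", "2.2.2.2", 80))          # to-LB-from-Internet
print(allowed("192.168.0.10", "192.168.109.5", 22))    # None (default deny)
```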

Conclusions

In this post we have explored how the load balancing service can be configured to load balance a set of front-end servers. We demonstrated how the same "pool" can be bound to different VIPs, and how these VIPs can be used as targets for users coming in from the on-premise data center, from the Internet, or from other networks inside the same virtual data center.

What stands out, for good or bad, is the fact that the operational model is very similar to what a customer would experience in traditional enterprise deployments. All the elements we discussed (load balancers, firewall devices and VPN end-points) already exist in traditional data centers. vCHS "only" virtualizes what was physical in the past. Did I say Software-Defined Data Center?

Massimo.