The primary role of load balancing is to distribute load evenly among all servers, but here we will pursue the security-related angle. Splitting the data stream into a number of separate streams and directing each of them to a different virtual server is one way of preventing traffic-analysis attacks. Since co-location attacks, described elsewhere in this tutorial, are shown to be a possible means of compromising a single server, splitting the data stream among many distinct (and, due to the cloud's inherent architecture, possibly physically separate) computers adds to data security.
The unit responsible for distributing load among virtual servers in the cloud is the load balancer. It connects to a number of virtual servers, which can be brought up or down depending on the current demand for servicing incoming connections.
Upon arrival of a connection-request packet, the balancer decides which of the available servers should process it and redirects the connection to that server. Since, by design, all servers are identical in terms of the content they store, the decision is transparent to the requesting side.
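The routing step above can be sketched with a minimal round-robin policy. The server addresses and class name below are hypothetical, chosen only for illustration; a real balancer would also track server health and a changing pool.

```python
import itertools

# Hypothetical back-end pool; in a real cloud this list would change
# as virtual servers are brought up or down.
SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

class RoundRobinBalancer:
    """Routes each incoming connection to the next server in turn."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def route(self, request):
        # All servers hold the same content, so which one is chosen
        # is invisible to the requesting client.
        return next(self._cycle)

balancer = RoundRobinBalancer(SERVERS)
targets = [balancer.route(f"req-{i}") for i in range(4)]
print(targets)  # wraps back to the first server on the fourth request
```

Round-robin is only one possible policy; the routing factors listed below give other criteria the decision could use.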
The routing decision can be made based on a number of factors:
This all works well until the client and server need to establish a secure communication channel using SSL/TLS. If there is a potentially large number of servers behind the balancer, the client may first establish the channel with one server (which, as you remember, requires negotiating a symmetric key), only to have a subsequent connection redirected to another server that knows nothing of the key and thus cannot even decipher the traffic!
There are two general solutions: common session memory and the "sticky sessions" approach.
All the back-end servers use the same cache memory for storing session-specific data, which introduces additional traffic and complexity in the back-end network.
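The common-session-memory idea can be sketched as follows. Here a plain dictionary stands in for the shared cache (in practice an external key-value store); the class and key names are hypothetical, used only to show that any server can resume a session negotiated by another.

```python
# A plain dict stands in for the shared session cache that all
# back-end servers read and write (assumption: an external store
# would play this role in a real deployment).
shared_sessions = {}

class Server:
    def __init__(self, name, session_store):
        self.name = name
        self.sessions = session_store  # shared, not per-server

    def handshake(self, client_id, key):
        # The server that performs the TLS handshake stores the
        # negotiated symmetric key in the shared cache.
        self.sessions[client_id] = key

    def handle(self, client_id):
        # Any other server can look the key up and continue the
        # session, even though it never saw the handshake.
        key = self.sessions.get(client_id)
        return f"{self.name} decrypts with {key}" if key else None

a = Server("server-a", shared_sessions)
b = Server("server-b", shared_sessions)
a.handshake("client-1", "k-123")
print(b.handle("client-1"))
```

The price of this transparency is exactly the extra back-end traffic mentioned above: every session write and lookup crosses the internal network.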
This solution, in turn, comes in two flavours, as follows.
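Whatever the flavour, the core idea of sticky routing can be sketched by hashing a client identifier so that repeat connections always land on the same back-end, keeping the TLS session key on one machine. The function and server names below are assumptions made for illustration.

```python
import hashlib

# Hypothetical back-end pool.
SERVERS = ["srv-0", "srv-1", "srv-2"]

def sticky_route(client_ip, servers):
    # Hash the client's address deterministically, so the same client
    # is always mapped to the same server and its negotiated TLS key
    # never needs to leave that machine.
    digest = hashlib.sha256(client_ip.encode()).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]

first = sticky_route("203.0.113.7", SERVERS)
second = sticky_route("203.0.113.7", SERVERS)
assert first == second  # repeat requests stick to one server
```

Note that hashing on the source IP is only one flavour of stickiness; balancers can also stick on a session cookie, with different trade-offs when clients sit behind shared NAT addresses.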
Differences
There are substantial differences between the two approaches, which we now discuss briefly.