I have assembled a high-availability system, as the following illustration suggests:
DNS RR -> Balancer1
                    \
                     \
                      HAProxy1 ---> Backend Servers
                      HAProxy2 ---> Backend Servers
                      HAProxy3 ---> Backend Servers
                     /
                    /
DNS RR -> Balancer2
In a few words: two load balancers with a VIP receive the requests from clients
and distribute them between 3 HAProxy servers, which handle SSL offloading and back-end balancing.
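For reference, each HAProxy node in a setup like this might terminate TLS and balance to the backends with a configuration fragment along these lines (a minimal sketch; the certificate path, health-check endpoint, and server addresses are placeholder assumptions, not taken from my setup):

```
# /etc/haproxy/haproxy.cfg (illustrative fragment)
frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/site.pem   # SSL offload happens here
    default_backend app_servers

backend app_servers
    balance roundrobin
    option httpchk GET /health        # hypothetical health-check endpoint
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
```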
My problem now is the DNS RR. It has its perks, but I'm looking for a better solution to distribute
the clients between Balancer1 and Balancer2. Any suggestions?
PS: GeoDNS is not an option.
Answer
You could put a CDN in front as the user-facing layer, then use the CDN's features to load balance across your Balancer hosts. That may still involve DNS RR, but the CDN's configuration is known and managed, so you can be confident the CDN will respond properly to backend changes.
As an example, you could use the Akamai CDN to route user requests, and then use Akamai Global Traffic Manager (GTM) to control which origins Akamai uses. GTM has 'failover' and 'round robin' functions, and Akamai's health checks will manage which origins are considered available. Akamai can also retry requests if it hits an error talking to your origin.
Amazon CloudFront + Route 53 'weighted' records + Route 53 health checks accomplishes the same thing.
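Conceptually, weighted routing with health checks picks among the healthy endpoints in proportion to their weights, which is why it degrades more gracefully than plain DNS RR. A rough sketch of that behavior in Python (the endpoint names and weights here are made-up assumptions, not real records):

```python
import random

# Hypothetical balancer records: name -> weight and health status
records = {
    "balancer1.example.com": {"weight": 60, "healthy": True},
    "balancer2.example.com": {"weight": 40, "healthy": True},
}

def resolve(records):
    """Return one endpoint, weighted-random among healthy records,
    mimicking how weighted DNS records plus health checks behave."""
    healthy = {name: r for name, r in records.items() if r["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy endpoints")
    names = list(healthy)
    weights = [healthy[n]["weight"] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

# If balancer1 fails its health check, all traffic shifts to balancer2.
records["balancer1.example.com"]["healthy"] = False
print(resolve(records))  # -> balancer2.example.com
```

The key difference from plain DNS RR: a failed balancer is removed from rotation by the health check rather than continuing to receive a share of clients until its TTL expires.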
This works even if your content is not cacheable, as a CDN does not have to be used exclusively for cacheable content. It has the benefit of bringing the user into an ecosystem you control near the 'edge', and troubleshooting CDN->origin connections is much easier than troubleshooting unknown-user->origin connections.
This route also gives you a measure of DoS protection, since you can apply filters at the edge.