I have a number of standalone Java web applications that currently run on different ports and URLs. I would like to expose all these apps behind a single port (443) and map the different public URLs to the individual internal URLs/ports. I am thinking clients would hit Nginx acting as a reverse proxy.
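Roughly, the Nginx mapping I have in mind would look something like this (the hostnames and internal ports below are just placeholders for my actual apps, and SSL is assumed to terminate before this hop, so Nginx listens on plain HTTP behind the ELB):

```nginx
# Placeholder hostnames and ports; SSL is assumed to be terminated
# upstream (at the ELB), so Nginx listens on a plain-HTTP port.
upstream app_one {
    server 127.0.0.1:8080;   # first Java app's internal port
}

upstream app_two {
    server 127.0.0.1:8081;   # second Java app's internal port
}

server {
    listen 80;
    server_name app-one.example.com;

    location / {
        proxy_pass http://app_one;
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name app-two.example.com;

    location / {
        proxy_pass http://app_two;
        proxy_set_header Host $host;
    }
}
```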
I also need these apps to be accessible only via SSL, and I plan on running everything in an AWS VPC with SSL terminating at the AWS ELB before traffic hits the reverse proxy.
This seems like a pretty standard stack. Is there any reason not to do this? Any reason I should terminate SSL at the reverse proxy (Nginx or other) instead of at the AWS ELB?
thanks
Answer
In some setups there are security aspects to consider when deciding where to terminate SSL:
- Which node do you want to trust with your certificate?
- How much communication will happen behind the SSL termination point and thus remain unprotected?
You also have to consider the technical aspects of what is possible and what is not:
- A load balancer that does not terminate SSL cannot insert X-Forwarded-For headers, so the backend will not know the client's IP address unless you use DSR-based (direct server return) load balancing (see the sketch after this list).
- A frontend that does not terminate SSL cannot dispatch to different backends based on the requested domain name unless the client supports SNI.
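For illustration, here is a minimal sketch of the headers a proxying hop can add once it handles decrypted traffic; the hostname, port, and upstream address are placeholders, not part of your setup:

```nginx
# Sketch only: placeholder names and ports. This hop can set
# X-Forwarded-* headers because it sees the request after SSL
# has been terminated (here, at the ELB in front of it).
server {
    listen 80;
    server_name app-one.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;                              # internal Java app
        proxy_set_header Host $host;                                   # preserve the requested hostname
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;   # append to the client IP chain
        proxy_set_header X-Forwarded-Proto https;                      # tell the app the public scheme
    }
}
```

Note that the Java apps then need to read the client address from the X-Forwarded-For header rather than from the socket.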