

Introduction

I’ll start with a little background on how the ngx_http_proxy_module module works, since it implements all the functionality discussed below. Suppose you have some services on a local or virtual network that have no direct access from the Internet, and you want to provide one. You could forward the necessary ports on the gateway, or invent something else, but the easiest approach is to configure a single entry point to all of your services in the form of an Nginx server and proxy the various requests from it to the right servers.

I will illustrate with specific examples of where I use it. For clarity and simplicity, here they are in order:

  1. Earlier I wrote about setting up chat servers – Matrix and Mattermost. In those articles I touched only in passing on proxying chat requests with Nginx, without going into detail. The idea is that you set up these chats on any virtual server, place them in a closed network perimeter without unnecessary access, and simply proxy requests to them. The requests go through Nginx, which faces the external Internet and accepts all incoming connections.
  2. Say you have a large server running many containers, for example Docker, with many different services. You install one more container with plain Nginx and configure it to proxy requests to those containers. The containers themselves bind only to the server’s local interface, so they are completely closed from the outside while you can still control access to them flexibly.
  3. Another popular example. Suppose you have a server with the Proxmox hypervisor, or any other. You configure a gateway on one of the virtual machines and create a local network of virtual machines only, with no outside access to it. In that local network you make your gateway VM the default gateway for all the virtual machines. You place various services on the virtual servers in this network and don’t worry about their firewall settings – the whole network is unreachable from the Internet anyway. Access to the services is proxied through Nginx installed on the gateway, or on a separate virtual machine with the necessary ports forwarded to it.
  4. My personal example. I have a Synology server at home, and I want easy access to it over https from a browser by domain name. Nothing could be simpler. I configure the Nginx server to obtain a free Let’s Encrypt certificate, configure proxying of requests to my home IP, and on my home gateway forward them to the Synology server. At the same time I can use the firewall to restrict access to the server to the single IP on which Nginx runs. As a result, nothing needs to be done on the Synology itself. It doesn’t even know it is being accessed over https on the standard port 443.
  5. Say you have a large project broken into components that live on different servers. For example, the forum lives on a separate server under the /forum path of the main domain. You simply configure proxying of all requests to the /forum address to that separate server. In the same way you can easily move all images to another server and proxy requests for them. In other words, you can create any location and redirect its requests to other servers.
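The container scenario (case 2) can be sketched like this. This is only an illustration under my own assumptions: the container name, domain, and port 8080 are made up, and the container is assumed to publish its port on the loopback interface only.

```nginx
# Assumed setup: the app container publishes 127.0.0.1:8080 only,
# e.g. "docker run -p 127.0.0.1:8080:80 ...", so it is unreachable
# from outside. Nginx is the single entry point.
server {
    listen 80;
    server_name app.example.com;    # illustrative domain

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```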

I hope it is now clear in general what this is about. There are many use cases; I have listed the most common ones that came to mind and that I use myself. Of the advantages I find most useful in my own work, I will note two:

  • You can configure https access to services without any trouble and without touching the services themselves at all. You obtain and use certificates on the Nginx server and terminate the https connection there, while Nginx passes the traffic on to the service servers, which can run over plain http and know nothing about https.
  • You can very easily change the addresses you proxy requests to. Say you have a site whose requests are proxied to a separate server, and you have prepared an update or a migration and debugged everything on a new server. All you then have to do on the Nginx server is change the old server’s address to the new one where requests should be redirected, and that’s it. If something goes wrong, you can quickly roll everything back.
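The second advantage can be sketched with an upstream block; migrating the site then comes down to editing one line and reloading Nginx. The upstream name here is my own illustration, not something from the original configs.

```nginx
# Switching the backend = changing one line here and reloading Nginx.
upstream blog_backend {
    server 192.168.13.31;    # old server; replace with the new address after migration
}

server {
    listen 80;
    server_name blog.zeroxzed.ru;

    location / {
        proxy_pass http://blog_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```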

That’s it for the theory; now let’s move on to configuration examples. In my examples I will use the following notation:

blog.zeroxzed.ru – test site domain name
Nginx_srv – name of the external server with Nginx installed
blog_srv – local server with the site we proxy connections to
94.142.141.246 – external IP of Nginx_srv
192.168.13.31 – IP address of blog_srv
77.37.224.139 – IP address of the client I will visit the site from

Configuring proxy_pass in Nginx

Let’s consider the simplest example. I will use my technical domain zeroxzed.ru in this and subsequent examples. Say we have a site, blog.zeroxzed.ru. A DNS record has been created that points to the IP address of the server where Nginx is installed – Nginx_srv. We will proxy all requests from this server to another server on the LAN, blog_srv, where the site actually lives. Here is the config for the server section.

server {
    listen 80;
    server_name blog.zeroxzed.ru;
    access_log /var/log/nginx/blog.zeroxzed.ru-access.log;
    error_log /var/log/nginx/blog.zeroxzed.ru-error.log;

    location / {
        proxy_pass http://192.168.13.31;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Now go to http://blog.zeroxzed.ru. We should land on blog_srv, where some web server must also be running; in my case it is Nginx as well. You should see the same content as when you open http://192.168.13.31 on the local network. If something does not work, first check that everything is fine at the address specified in the proxy_pass directive.

Let’s see the logs on both servers. On Nginx_srv I see my request:

77.37.224.139 - - [19/Jan/2018:15:15:40 +0300] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"

Checking blog_srv:

94.142.141.246 - - [19/Jan/2018:15:15:40 +0300] "GET / HTTP/1.0" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko" "77.37.224.139"

As we can see, the request first came to Nginx_srv and was redirected to blog_srv, where it arrived with the sender address 94.142.141.246 – the Nginx_srv address. The real client IP address is visible only at the very end of the log line. This is inconvenient, because PHP’s REMOTE_ADDR variable will not return the real client IP address, and it is often needed. We will fix this shortly, but first let’s create a test page in the site root on blog_srv to check the client’s IP address, with the following content:

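The page itself is just a one-liner that prints the address PHP sees in REMOTE_ADDR (a minimal sketch; the exact original file is not shown here, but this is the standard way to check it):

```php
<?php
// Print the client address as the web server on blog_srv sees it.
echo $_SERVER['REMOTE_ADDR'];
```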

Let’s call it myip.php. Go to http://blog.zeroxzed.ru/myip.php and check how the server determines our address. It won’t 🙂 It will show the Nginx_srv address. Let’s fix that and teach Nginx to pass the real client IP address to the server.

Passing the real client IP address to Nginx with proxy_pass

In the previous example we actually already pass the real client IP address with the proxy_set_header directive, which puts the real client IP into the X-Real-IP header. Now we need the receiving side, blog_srv, to do the reverse substitution – replace the sender address information with the address given in the X-Real-IP header. Add the following parameters to the server section:

set_real_ip_from 94.142.141.246;
real_ip_header X-Real-IP;

The complete server section on blog_srv in the simplest case looks like this:

server {
    listen 80 default_server;
    server_name blog.zeroxzed.ru;
    root /usr/share/nginx/html;
    set_real_ip_from 94.142.141.246;
    real_ip_header X-Real-IP;

    location / {
        index index.php index.html index.htm;
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_intercept_errors on;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_ignore_client_abort off;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}

Save the config, reload it, and open http://blog.zeroxzed.ru/myip.php again. You should see your real IP address, and it will also appear in the web server log on blog_srv.

Passing real ip to nginx proxy_pass

Next, consider more complex configurations.

Proxying https through Nginx with proxy_pass

If your site runs over https, it is enough to configure ssl only on Nginx_srv, as long as you are not worried about the traffic between Nginx_srv and blog_srv – it can travel over an unencrypted protocol. I will look at an example with a free Let’s Encrypt certificate; this is one more case where I use proxy_pass. It is very convenient to set up automatic issuing of all necessary certificates on a single server. I covered the Let’s Encrypt setup in detail separately. Here we will assume that you already have certbot installed and everything is ready to obtain a new certificate, which will then renew automatically.

To do this, we add one more location on Nginx_srv – /.well-known/acme-challenge/. The full server section of our test site at the moment of obtaining the certificate looks like this:

server {
    listen 80;
    server_name blog.zeroxzed.ru;
    access_log /var/log/nginx/blog.zeroxzed.ru-access.log;
    error_log /var/log/nginx/blog.zeroxzed.ru-error.log;

    location /.well-known/acme-challenge/ {
        root /web/sites/blog.zeroxzed.ru/www/;
    }

    location / {
        proxy_pass http://192.168.13.31;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Reload the Nginx config and obtain the certificate. After that, the config changes to the following:

server {
    listen 80;
    server_name blog.zeroxzed.ru;
    access_log /var/log/nginx/blog.zeroxzed.ru-access.log;
    error_log /var/log/nginx/blog.zeroxzed.ru-error.log;
    return 301 https://$server_name$request_uri; # redirect plain http requests to https
}

server {
    listen 443 ssl http2;
    server_name blog.zeroxzed.ru;
    access_log /var/log/nginx/blog.zeroxzed.ru-ssl-access.log;
    error_log /var/log/nginx/blog.zeroxzed.ru-ssl-error.log;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/blog.zeroxzed.ru/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.zeroxzed.ru/privkey.pem;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;

    location /.well-known/acme-challenge/ {
        root /web/sites/blog.zeroxzed.ru/www/;
    }

    location / {
        proxy_pass http://192.168.13.31;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Check what happened.

Proxying https with proxy pass in nginx

Our site now works over https even though we never touched the server the site actually lives on. For a regular website this may not matter much, but if you proxy requests not to an ordinary site but to some non-standard service that is hard to switch to https, this can be a good solution.

Proxying a specific directory or files

Consider another example. Say your forum lives at http://blog.zeroxzed.ru/forum/, and you want to move it to a separate web server to improve performance. To do this, add one more location to the previous config.

location /forum/ {
    proxy_pass http://192.168.13.31;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_redirect default;
}
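One nuance worth knowing here: with the config above, the backend receives the request with the /forum/ prefix intact. If the forum on the backend lives at its root instead, a URI can be appended to proxy_pass so Nginx strips the matched prefix (a sketch; the address is the same illustrative one used throughout):

```nginx
location /forum/ {
    # The trailing "/" after the address makes Nginx replace the
    # matched "/forum/" prefix with "/" before passing the request,
    # so /forum/index.php arrives at the backend as /index.php.
    proxy_pass http://192.168.13.31/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```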

Another popular solution: serve images from one server and everything else from another. In my example the images will live on the same server as Nginx, and the rest of the site on another one. Then we need roughly the following configuration of locations.

location / {
    proxy_pass http://192.168.13.31;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
}

location ~ \.(gif|jpg|png)$ {
    root /web/sites/blog.zeroxzed.ru/www/images;
}

For all of this to work correctly, the site itself must be able to place its images properly. There are many ways to organize this: you can mount network shares on the server in various ways, or the developers can change the code to control where images are placed. Either way, it requires a holistic approach to working with the site.

There are many directives for managing proxied connections, all described in the corresponding Nginx documentation. I am not a great Nginx configuration specialist; I mostly use my own ready-made configs, often without digging into the details if the problem can be solved right away. I look at what others do, write things down for myself, and try to figure them out.

If needed, pay particular attention to the proxy_cache caching directives. With a properly tuned cache you can significantly improve a website’s responsiveness, but this is a subtle matter that has to be configured separately for each case; there are no ready-made recipes.
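Just to show the shape of such a setup, here is a minimal proxy_cache sketch. The cache path, zone name, sizes, and timings are all illustrative and would need tuning for a real site:

```nginx
# In the http {} context: where cached responses are stored on disk
# and a shared memory zone for cache keys (name "blogcache" is made up).
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=blogcache:10m
                 max_size=1g inactive=60m;

# In the proxying location:
location / {
    proxy_pass http://192.168.13.31;
    proxy_cache blogcache;
    proxy_cache_valid 200 302 10m;    # cache successful responses for 10 minutes
    proxy_cache_valid 404 1m;         # cache "not found" responses briefly
}
```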

Read more about the comprehensive configuration of Nginx in a separate large article with my personal examples.

Conclusion

Don’t like the article and want to teach me how to administer? Please do – I like to learn. The comments are at your disposal. Tell me how to do it right!

That’s all from me. I have not covered one more possible option, where you proxy an https site and pass the traffic to the backend over https as well. I have no ready-made example at hand to test, and I did not want to write a config blindly. In theory there is nothing complicated about it: configure Nginx on both servers with the same certificate. But I won’t promise that everything will just work in that case – there may be nuances with such proxying. I usually don’t have to do it.

As I wrote at the beginning, I mostly proxy requests from a single external web server into a closed network perimeter that nobody has access to. In that case I don’t need https for requests to the backend. The backend doesn’t even have to be a separate server – it can just as well be a container on the same machine.
