How To Set Up an Nginx Proxy and Cache for DigitalOcean Spaces on Ubuntu 16.04


DigitalOcean Spaces is an object storage service compatible with the S3 API. In this guide we'll show you how to use Nginx to proxy requests for objects in your Space. Nginx will receive HTTP(S) requests from your users and pass them along to the Spaces service, which will send the results back through Nginx.

Some reasons you might want to place an Nginx proxy in front of Spaces include:

  • add a custom domain
  • add your own caching
  • use your own SSL certificate
  • use different access control mechanisms
  • cache assets in a datacenter that is nearer to your users

In this guide, we’ll set up Nginx to respond to requests on our own domain (with optional Let’s Encrypt SSL certificates) and forward those requests to a Space with public assets. We’ll then add caching to speed up subsequent responses for frequently-accessed objects.


Prerequisites

To complete this guide, you should have the following:

  • An Ubuntu 16.04 server with Nginx installed, as explained in our tutorial How To Install Nginx on Ubuntu 16.04
  • A domain name pointed at your server, as described in How To Set Up a Host Name with DigitalOcean. We will use assets.example.com as a placeholder domain throughout this tutorial
  • A DigitalOcean Space. You can learn how to create a new Space by reading An Introduction to DigitalOcean Spaces.

    You’ll need to know the URL of your Space. You can find it by navigating to your Space in the DigitalOcean control panel. The URL is directly beneath the Space name in the UI. This is highlighted in the screenshot below:

    DigitalOcean Spaces UI with the Space's URL highlighted. The URL is directly beneath the name of the Space in the UI's header

    You’ll also need a file uploaded to your Space to test things out with. The aforementioned Spaces tutorial shows how you can upload files using the Spaces web GUI. We will use example.png for this guide.

Setting Up the Proxy

A default install of Nginx on Ubuntu will return a Welcome to Nginx placeholder page for all requests. We need to add some new configuration to tell Nginx to do something different with requests to our domain.

To do this, open a new configuration file in /etc/nginx/sites-available:

  • sudo nano /etc/nginx/sites-available/assets.example.com

This will open a blank file in the nano text editor. Paste in the following configuration, making sure to replace the highlighted portions with your own domain name and Spaces URL:


server {
    listen 80;
    listen [::]:80;
    server_name assets.example.com;

    location / {
        proxy_pass             https://example-name.nyc3.digitaloceanspaces.com;
        proxy_hide_header      Strict-Transport-Security;
    }
}
Save the file and exit the editor when you are done. This is a standard Nginx server block. First we tell it to listen on port 80 on both IPv4 and IPv6, and specify the server_name that Nginx should respond to.

Next we create a location block. Any configuration directives within this block (between the { and } braces) will apply only to specific URLs. In this case, we specify /, the root URL, so all locations will be matched by this block.

The proxy_pass directive tells Nginx to pass requests along to the specified server. The proxy_hide_header line strips the Strict-Transport-Security header before passing the response back to the client. Spaces uses this header to force all connections over to HTTPS. Passing this header along to your users could have unintended consequences if your site is accessible on both HTTP and HTTPS connections.
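
The effect of proxy_hide_header can be pictured with a small Python sketch. This is purely illustrative of the behavior (Nginx implements this internally, not in Python): a named header is dropped from the upstream response before it is relayed to the client.

```python
# Illustrative model of proxy_hide_header: drop a named header from
# an upstream response before relaying it to the client.

HIDDEN_HEADERS = {"strict-transport-security"}

def relay_headers(upstream_headers):
    """Return the headers to send to the client, minus hidden ones."""
    return {
        name: value
        for name, value in upstream_headers.items()
        if name.lower() not in HIDDEN_HEADERS
    }

# Sample upstream response headers, like those Spaces returns:
upstream = {
    "Content-Type": "image/png",
    "Content-Length": "81173",
    "Strict-Transport-Security": "max-age=15552000; includeSubDomains",
}

client_headers = relay_headers(upstream)
print(sorted(client_headers))  # Strict-Transport-Security is gone
```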

Now that our configuration is set, we need to enable it. This is done by creating a link to the configuration file in the /etc/nginx/sites-enabled/ directory:

  • sudo ln -s /etc/nginx/sites-available/assets.example.com /etc/nginx/sites-enabled/

To check our configuration syntax, run nginx -t as root:

  • sudo nginx -t


nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Finally, reload Nginx to pick up the new configuration:

  • sudo systemctl reload nginx

With our configuration file set up, let’s test the proxy.

Testing the Proxy

We can test the proxy connection with curl on the command line. curl -I will return only the HTTP headers of a response. This is enough to determine that things are working well.

First, fetch an object directly from your Space using its URL. We will use our example.png file:

  • curl -I https://example-name.nyc3.digitaloceanspaces.com/example.png


HTTP/1.1 200 OK
Content-Length: 81173
Accept-Ranges: bytes
Last-Modified: Tue, 28 Nov 2017 21:19:37 GMT
ETag: "7b2d05a5bd1bfeebcac62990daeafd14"
x-amz-request-id: tx000000000000000000000-005a1edfcd-afba2-nyc3a
Content-Type: image/png
Date: Wed, 29 Nov 2017 16:26:53 GMT
Strict-Transport-Security: max-age=15552000; includeSubDomains; preload

We can see by the 200 OK on the first line of the output that this was a successful request. The server returned the size of the file (Content-Length), the file type (Content-Type), and some other date- and cache-related information.
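
If you want to inspect such headers programmatically rather than by eye, the curl -I output is easy to parse. Here's a minimal Python sketch (the sample text is an abbreviated copy of the response above):

```python
# Parse `curl -I` style output: a status line, then "Name: value" lines.

def parse_head(raw):
    """Return (status_line, headers_dict) from curl -I output."""
    lines = [l for l in raw.strip().splitlines() if l]
    status = lines[0]                       # e.g. "HTTP/1.1 200 OK"
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return status, headers

sample = """\
HTTP/1.1 200 OK
Content-Length: 81173
Content-Type: image/png
ETag: "7b2d05a5bd1bfeebcac62990daeafd14"
"""

status, headers = parse_head(sample)
print(status)                   # HTTP/1.1 200 OK
print(headers["Content-Type"])  # image/png
```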

Now fetch the same file through the proxy:

  • curl -I http://assets.example.com/example.png


HTTP/1.1 200 OK
Server: nginx/1.10.3 (Ubuntu)
Date: Wed, 29 Nov 2017 16:27:24 GMT
Content-Type: image/png
Content-Length: 81173
Connection: keep-alive
Accept-Ranges: bytes
Last-Modified: Tue, 28 Nov 2017 21:19:37 GMT
ETag: "7b2d05a5bd1bfeebcac62990daeafd14"
x-amz-request-id: tx00000000000000000a045-005a1edfec-a89a3-nyc3a

The response is mostly the same. The major change is a Server header that identifies Nginx. If your output is similar, your proxy is working correctly!

In the next step, we will set up caching to reduce bandwidth usage between the proxy and Spaces, and to speed up response times.

Setting Up Caching

To cache responses, Nginx needs a place to store keys, metadata, and the actual response content. We'll set up a cache directory in the system's /tmp directory. To do so, we will add a configuration snippet in a new file in /etc/nginx/conf.d/. Open that file now:

  • sudo nano /etc/nginx/conf.d/example-cache.conf

Paste in the following line, then save and close the file:


proxy_cache_path /tmp/example-cache/ levels=1:2 keys_zone=example-cache:16m max_size=10g inactive=60m use_temp_path=off;

This line defines some characteristics of the cache. Let's run through the options:

  • /tmp/example-cache/ is the path to the cache.
  • levels=1:2 sets up a two-level hierarchy of directories to store cached content. Putting too many files in a single directory can cause speed and reliability problems, so Nginx will split files between multiple directories based on this option.
  • keys_zone=example-cache:16m names our cache and sets aside 16 megabytes of memory to store keys in. This should be enough memory to store data for over 100,000 keys.
  • max_size=10g limits the size of the cache to 10 gigabytes. You can adjust this to suit your usage and storage requirements.
  • inactive=60m means Nginx will delete cached files after 60 minutes if they haven't been accessed in that time (even if the file is still valid and unexpired). If you have a lot of infrequently-accessed objects, you may want to try increasing this.
  • use_temp_path=off instructs Nginx to write temporary files to the cache directory, potentially avoiding the need to copy files between filesystems, which could hurt performance.
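
To make the levels=1:2 layout concrete: Nginx names each cache file after the MD5 hash of its cache key (by default derived from the request), using the last hex character of the hash as the first-level directory and the two characters before it as the second level. A short Python sketch of the resulting path:

```python
import hashlib

def cache_file_path(cache_dir, cache_key):
    """Compute where Nginx with levels=1:2 would store a cached response.

    The cache key is hashed with MD5; the last hex char names the
    first-level directory, the two chars before it the second level.
    """
    h = hashlib.md5(cache_key.encode()).hexdigest()
    return f"{cache_dir}/{h[-1]}/{h[-3:-1]}/{h}"

# assets.example.com is the placeholder domain used in this guide.
print(cache_file_path("/tmp/example-cache", "http://assets.example.com/example.png"))
```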

Now that we've defined a cache, we need to enable it in our server block and set some additional options. Open your site's config file again:

  • sudo nano /etc/nginx/sites-available/assets.example.com

Add the following to the end of your location / block (after the proxy_hide_header directive, but before the closing } bracket):


. . .
        proxy_cache            example-cache;
        proxy_cache_valid      200 60m;
        proxy_cache_use_stale  error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_revalidate on;
        proxy_cache_lock       on;

        proxy_ignore_headers   Set-Cookie;
        add_header             X-Cache-Status $upstream_cache_status;
. . .

Save and close the file. Let's go through these configuration options one by one:

  • proxy_cache tells Nginx which cache to use. In this case we specify example-cache, which we just set up in the example-cache.conf file.
  • proxy_cache_valid instructs Nginx to consider any 200 response valid for 60 minutes. This means that after the proxy successfully fetches a file from Spaces, for the next 60 minutes Nginx will use its cached copy without ever asking Spaces for an update. Note that if your objects have a Cache-Control header set, the header's value will override this configuration.
  • proxy_cache_use_stale allows Nginx to return a stale (expired) response if the Spaces server ever times out, returns an error, or if the cached response is in the process of being updated.
  • proxy_cache_revalidate enables the proxy to revalidate cached files using conditional GET requests. This means that when a cached file expires, and Nginx needs to check Spaces for changes, Nginx will use the If-Modified-Since or If-None-Match headers to only fetch the object if it has indeed changed. If it hasn't been updated, Spaces will return a 304 Not Modified response and Nginx will mark the existing cached response as valid again.
  • proxy_cache_lock puts a hold on subsequent requests for an object while the proxy is fetching it from the backend server. When the first request is complete, the other requests will then be served from the cache.
  • proxy_ignore_headers Set-Cookie ignores cookies, which can interfere with caching.
  • add_header X-Cache-Status... adds a header with information about whether the request was served from the cache (HIT) or not (MISS). If the request was in the cache but was expired, you will see (REVALIDATED) instead.
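
The conditional-GET flow that proxy_cache_revalidate relies on can be sketched as follows. This is a simplified model of the HTTP exchange, not Nginx internals: the proxy presents the cached ETag in If-None-Match, and the origin answers 304 Not Modified if nothing has changed:

```python
# Simplified model of revalidation with If-None-Match / 304 Not Modified.

def origin(request_headers, current_etag, body):
    """A toy origin server: answer 304 if the client's ETag is still current."""
    if request_headers.get("If-None-Match") == current_etag:
        return 304, None          # Not Modified: no body is re-sent
    return 200, body              # full response with the object body

# First fetch: nothing is cached yet, so no conditional header is sent.
status, body = origin({}, '"abc123"', b"...png bytes...")
assert status == 200

# Later, the cached copy has expired; revalidate with the stored ETag.
status, body = origin({"If-None-Match": '"abc123"'}, '"abc123"', b"...png bytes...")
print(status)  # 304 -- the cached copy is still valid, no re-download
```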

We're now ready to verify that our configuration has no errors, and if that's successful, reload Nginx:

  • sudo nginx -t
  • sudo systemctl reload nginx

With caching set up, we can test again to make sure that the cache is working as expected.

Testing the Cache

To verify that the cache is working, we can use curl again and look at the X-Cache-Status header:

  • curl -I http://assets.example.com/example.png


HTTP/1.1 200 OK
Server: nginx/1.10.3 (Ubuntu)
Date: Wed, 29 Nov 2017 18:40:28 GMT
Content-Type: image/png
Content-Length: 81173
Connection: keep-alive
Last-Modified: Tue, 28 Nov 2017 21:19:37 GMT
ETag: "7b2d05a5bd1bfeebcac62990daeafd14"
x-amz-request-id: tx000000000000000013841-005a1eff1b-a89e4-nyc3a
X-Cache-Status: MISS
Accept-Ranges: bytes

The first request should be a MISS. Try it a second time:

  • curl -I http://assets.example.com/example.png


HTTP/1.1 200 OK
Server: nginx/1.10.3 (Ubuntu)
Date: Wed, 29 Nov 2017 18:40:53 GMT
Content-Type: image/png
Content-Length: 81173
Connection: keep-alive
Last-Modified: Tue, 28 Nov 2017 21:19:37 GMT
ETag: "7b2d05a5bd1bfeebcac62990daeafd14"
x-amz-request-id: tx000000000000000013841-005a1eff1b-a89e4-nyc3a
X-Cache-Status: HIT
Accept-Ranges: bytes

A HIT! We're now proxying and caching objects from Spaces. In the next step, we'll set up SSL certificates to secure communication with our proxy.
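
If you'd like to watch only the cache status while testing, a small shell helper can pull that one header out of the response. This is a sketch; assets.example.com is the placeholder domain used in this guide:

```shell
# Extract the X-Cache-Status header value from curl -I style output.
extract_cache_status() {
  grep -i '^x-cache-status:' | awk '{print $2}' | tr -d '\r'
}

# Against a live proxy you would run:
#   curl -sI http://assets.example.com/example.png | extract_cache_status

# Demonstrate with a captured response:
printf 'HTTP/1.1 200 OK\nX-Cache-Status: HIT\n' | extract_cache_status
```

Running this repeatedly against the proxy should print MISS on the first request and HIT afterwards.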

Setting Up TLS/SSL

Though this step is optional, it's recommended that your website and assets are made available over a secure HTTPS connection. You can learn how to download and install free certificates from the Let's Encrypt certificate authority by reading our tutorial How To Set Up Let's Encrypt with Nginx Server Blocks on Ubuntu 16.04.


Conclusion

In this guide we created an Nginx configuration to proxy requests for objects to the Spaces service. We then added caching to improve performance, and optionally a TLS/SSL certificate to improve security and privacy.

The settings shown here are a good starting point, but you may want to optimize some of the cache parameters based on your own traffic patterns and needs. The Nginx documentation, specifically the ngx_http_proxy_module page, provides more detailed information on the available configuration options.
