How To Back Up Logs to Object Storage with Logrotate and S3cmd on Ubuntu 16.04


The log files generated by your servers and applications are filled with information that is potentially helpful when debugging software, investigating security incidents, and creating insightful metrics and statistics.

A common logging strategy today is to centralize all of this information using a log aggregation solution such as the Elastic Stack or Graylog. This is great for real-time analysis and short- to medium-term historical investigations, but often it is not feasible to retain long-term data in these systems due to storage constraints or other server resource issues.

A common solution for these long-term storage needs is archiving logs with an object storage service. The logs can remain available indefinitely for later analysis, legal retention requirements, or backup purposes.

In this tutorial, we will use Logrotate on an Ubuntu 16.04 server to send syslog logs to an object storage service. This technique can be applied to any logs managed by Logrotate.


Prerequisites
To complete this tutorial, you will need the following:

  • An Ubuntu 16.04 server, with a sudo-enabled non-root user, as described in Initial Server Setup with Ubuntu 16.04. The configurations in this tutorial should broadly work on most Linux distributions, but may require some adaptation.
  • You should be familiar with Logrotate and how the default setup is configured on Ubuntu 16.04. Please read How To Manage Logfiles with Logrotate on Ubuntu 16.04 for more information.
  • You will need to know the following details about your object storage service:

    • Access Key
    • Secret Key
    • Server (or “Endpoint”) URL
    • Bucket Name

    If you are using DigitalOcean Spaces, you can read How To Create a DigitalOcean Space and API Key to create a new bucket and retrieve the above information.

When you have finished the prerequisites, SSH into your server to get started.

Step 1 — Installing S3cmd

We will be using a tool called S3cmd to send our logs to any S3-compatible object storage service. Before installing S3cmd, we need to install some tools to help us install Python programs (S3cmd is written in Python):

  • sudo apt-get update
  • sudo apt-get install python-setuptools

Next, change to a directory you can write to, then download the S3cmd .tar.gz file:

  • cd /tmp
  • curl -LO

Note: You can check to see if a newer version of S3cmd is available on their Releases page on GitHub. If you find a new version, copy the .tar.gz URL and substitute it in the curl command above.

When the download has finished, unzip and unpack the file using the tar utility:
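The tar invocation itself is missing above; a minimal version, assuming the release archive was downloaded into the current directory, would be:

```shell
# Unpack the downloaded archive; this creates a directory such as s3cmd-2.0.1/
tar xzf s3cmd-*.tar.gz
```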

Then, change into the resulting directory and install the software using sudo:

  • cd s3cmd-*
  • sudo python setup.py install

Test the install by asking s3cmd for its version information:

  • s3cmd --version


s3cmd version 2.0.1

If you see similar output, S3cmd has been installed successfully. Next, we will configure S3cmd to connect to our object storage service.

Step 2 — Configuring S3cmd

S3cmd has an interactive configuration process that can create the configuration file we need to connect to our object storage server. The root user will need access to this configuration file, so we will start the configuration process using sudo and place the configuration file in the root user's home directory:

  • sudo s3cmd --configure --config=/root/logrotate-s3cmd.config

The interactive configuration will begin. Where appropriate, you may accept the default answers (in brackets) by pressing ENTER. We will walk through the options below, with suggested answers for a Space in DigitalOcean's NYC3 region. Substitute the S3 endpoint and bucket template as needed for other DigitalOcean datacenters or other object storage providers:

  • Access Key: your-access-key
  • Secret Key: your-secret-key
  • Default Region [US]: ENTER
  • S3 Endpoint []: nyc3.digitaloceanspaces.com
  • DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: %(bucket)s.nyc3.digitaloceanspaces.com
  • Encryption password: ENTER, or specify a password to encrypt
  • Path to GPG program [/usr/bin/gpg]: ENTER
  • Use HTTPS protocol [Yes]: ENTER
  • HTTP Proxy server name: ENTER, or fill out your proxy information
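The answers end up in an INI-style file. As a rough sketch of what s3cmd writes (values are placeholders, NYC3 endpoints are assumed, and only the keys relevant here are shown), /root/logrotate-s3cmd.config will contain entries along these lines:

```ini
[default]
access_key = your-access-key
secret_key = your-secret-key
host_base = nyc3.digitaloceanspaces.com
host_bucket = %(bucket)s.nyc3.digitaloceanspaces.com
use_https = True
```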

At this point, s3cmd will summarize your responses, then ask you to test the connection. Press y then ENTER to begin the test:


Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

After the test, you will be prompted to save the settings. Again, type y then ENTER to do so. The configuration file will be written to the location we specified earlier with the --config command line option.

In the next step, we will set up Logrotate to use S3cmd to upload our logs.

Step 3 — Setting Up Logrotate to Send Rotated Logs to Object Storage

Logrotate is a powerful and flexible system for managing the rotation and compression of log files. Ubuntu uses it by default to maintain all of the system logs found in /var/log.

For this tutorial, we will update the configuration to send the syslog log to object storage whenever it is rotated.

First, open the Logrotate configuration file for rsyslog, the system log processor:

  • sudo nano /etc/logrotate.d/rsyslog

There will be two configuration blocks. We are interested in the first one, which applies to /var/log/syslog:


/var/log/syslog
{
        rotate 7
        daily
        missingok
        notifempty
        delaycompress
        compress
        postrotate
                invoke-rc.d rsyslog rotate > /dev/null
        endscript
}
. . .

This configuration specifies that /var/log/syslog will be rotated daily (daily), with seven old logs being kept (rotate 7). It will not produce an error if the log file is missing (missingok), and it won't rotate the log if it's empty (notifempty). Rotated logs will be compressed (compress), but not the most recent one (delaycompress). Finally, the postrotate script tells rsyslog to switch to the new log file after the old one has been rotated away.
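To make the rotation sequence concrete, here is a small simulation of one cycle in a scratch directory. The file names and the mktemp scratch directory are our own illustration; logrotate performs the real steps against /var/log/syslog itself:

```shell
# Simulate one daily rotation of a syslog-style file in a scratch directory
dir=$(mktemp -d)
echo "a log line" > "$dir/syslog"   # the live log
mv "$dir/syslog" "$dir/syslog.1"    # logrotate renames the old log aside
: > "$dir/syslog"                   # the logging daemon reopens a fresh, empty log
gzip "$dir/syslog.1"                # compression produces syslog.1.gz
ls "$dir"                           # lists: syslog  syslog.1.gz
```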

Before we add our new configuration directives, delete the delaycompress line, highlighted above. We want our old logs to be compressed immediately before sending them to object storage.

Next, add the following lines to the end of the configuration block (outside of the postrotate . . . endscript block but inside of the closing } bracket):


. . .
        dateext
        dateformat -%Y-%m-%d-%s
        lastaction
                HOSTNAME=`hostname`
                /usr/local/bin/s3cmd sync --config=/root/logrotate-s3cmd.config /var/log/syslog*.gz "s3://your-bucket-name/$HOSTNAME/"
        endscript
. . .

Be sure to substitute the correct bucket name in the highlighted portion above. These options turn on date-based filename extensions (dateext) so we can timestamp our log files. We then set the format of these extensions with dateformat. The files will end up with filenames like syslog-2017-11-07-1510091490.gz: year, month, date, then a timestamp. The timestamp ensures that we can ship two log files on the same day without the filenames conflicting. This is necessary if we ever have to force a log rotation for some reason (more on that in the next step).
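Since dateformat uses strftime-style codes, you can preview the exact suffix with the date command (a quick check of our own, not a step from the tutorial):

```shell
# Preview the dateext suffix produced by "dateformat -%Y-%m-%d-%s":
# a leading dash, year, month, day, then the Unix timestamp
date +-%Y-%m-%d-%s
```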

The lastaction script runs after all of the log files have been compressed. It sets a variable with the server's hostname, then uses s3cmd sync to sync all of the syslog files up to your object storage bucket, placing them in a folder named after the hostname. Note that the final slash in "s3://your-bucket-name/$HOSTNAME/" is significant. Without it, s3cmd would treat /$HOSTNAME as a single file, not a directory full of log files.

Save and close the configuration file. The next time Logrotate does its daily run, /var/log/syslog will be moved to a date-based filename, compressed, and uploaded.

We can force this to happen immediately to test that it's working correctly:

  • sudo logrotate /etc/logrotate.conf --verbose --force


rotating pattern: /var/log/syslog
. . .
considering log /var/log/syslog
  log needs rotating
. . .
running last action script
switching euid to 0 and egid to 0
upload: '/var/log/syslog-2017-11-08-1510175806.gz' -> 's3://example-bucket/example-hostname/syslog-2017-11-08-1510175806.gz'  [1 of 1]
 36236 of 36236   100% in    0s   361.16 kB/s  done
Done. Uploaded 36236 bytes in 1.0 seconds, 35.39 kB/s.

This will output a lot of information for all of the log files. The portions relevant to the syslog log and our upload are excerpted above. Your output should look similar, with some evidence of a successful upload. You may have more files being uploaded if the server isn't brand new.

Next, we will optionally set up a service to help us upload logs before system shutdowns.

Step 4 — Sending Logs On Shutdown

This step is optional, and only necessary if you are configuring ephemeral servers that are frequently being shut down and destroyed. If this is the case, you could lose up to a day of logs every time you destroy a server.

To fix this, we need to force Logrotate to run one last time as the system shuts down. We will do this by creating a systemd service that runs the logrotate command when it is stopped.

First, open a new service file in a text editor:

  • sudo nano /etc/systemd/system/logrotate-shutdown.service

Paste in the following service definition:


[Unit]
Description=Archive logs before shutdown
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=true
ExecStop=/usr/sbin/logrotate /etc/logrotate.conf --force

[Install]
WantedBy=multi-user.target


This file defines a service that does nothing when started (it has no ExecStart declaration) and runs logrotate (with the --force option) when stopped. It will run before the network connection is shut down, thanks to the After= line.

Save the file and exit your text editor, then start and enable the service using systemctl:

  • sudo systemctl start logrotate-shutdown.service
  • sudo systemctl enable logrotate-shutdown.service

Check the status of the new service:

  • sudo systemctl status logrotate-shutdown.service


● logrotate-shutdown.service - Archive logs before shutdown
   Loaded: loaded (/etc/systemd/system/logrotate-shutdown.service; enabled; vendor preset: enabled)
   Active: active (exited) since Wed 2017-11-08 20:00:05 UTC; 8s ago

Nov 08 20:00:05 example-host systemd[1]: Started Archive logs before shutdown.

We want to see that it is active. The fact that it has exited is okay; that is due to it having no ExecStart command.

You can test that your new service is working either by stopping it manually:

  • sudo systemctl stop logrotate-shutdown.service

or by rebooting your system:

  • sudo reboot

Either method will trigger the logrotate command and upload a new log file. Now, barring an ungraceful shutdown, you will lose no logs when destroying a server.

Note: Many cloud platforms do not perform a graceful shutdown when a server is being destroyed or powered off. You will need to test this functionality with your particular setup, and either configure it for graceful shutdowns or find another solution for triggering a final log rotation.


Conclusion
In this tutorial we installed S3cmd, configured it to connect to our object storage service, and configured Logrotate to upload log files whenever it rotates /var/log/syslog. We then set up a systemd service to run logrotate --force on shutdown, to make sure we don't lose any logs when destroying ephemeral servers.

To learn more about the configuration options available for Logrotate, refer to its manual page by entering man logrotate on the command line. More information about S3cmd can be found on its website.
