Copy your server logs to Amazon S3 using Logrotate and s3cmd

Hold on Cowboy

This blog post is pretty old. Be careful with the information you find in here. It's likely dead, dying, or wildly inaccurate.

You want to keep those server logs, right? I’ve had customers ask for analytics data from last year, and by George, Google Analytics doesn’t cover everything that hits the server.

What you’ll need

  • logrotate (installed on most systems; installing it is beyond the scope of this article)
  • s3cmd (on a RedHat-based server you can install it from the s3tools yum repo, which is easy enough)
  • An Amazon S3 account (I hope this goes without saying)
  • Logs you want to rotate (in this case Nginx)
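For the RedHat install, s3tools.org publishes a yum repo file that goes in /etc/yum.repos.d/. A trimmed sketch of what it looks like; the exact baseurl and gpgkey depend on your RHEL/CentOS release, so treat these values as assumptions and grab the real file from s3tools.org:

```ini
# /etc/yum.repos.d/s3tools.repo -- sketch; baseurl/gpgkey are assumptions
# that vary by distro release (download the real file from s3tools.org)
[s3tools]
name=Tools for managing Amazon S3
baseurl=http://s3tools.org/repo/RHEL_6/
enabled=1
gpgcheck=1
gpgkey=http://s3tools.org/repo/RHEL_6/repodata/repomd.xml.key
```

With the repo in place, a plain yum install s3cmd pulls in the package.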

Setting up s3cmd

  • After you get it installed, you’ll want to run the configuration step (probably as root)
    • s3cmd --configure
    • This will ask you for your API key and API secret
    • It will also ask whether you want to encrypt the files on disk and/or during transfer (HTTPS)
  • After you get it configured, try running s3cmd ls. That should list your buckets
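Under the hood, s3cmd --configure writes your answers to ~/.s3cfg. A trimmed sketch of the keys relevant to this setup (key names follow the s3cmd config format; the values here are placeholders, not real credentials):

```ini
# ~/.s3cfg -- trimmed sketch with placeholder values
[default]
access_key = YOUR-API-KEY
secret_key = YOUR-API-SECRET
use_https = True          # encrypt during transfer
gpg_passphrase = secret   # used if you chose on-disk (GPG) encryption
```

If s3cmd ls fails later, this file is the first place to check for a mistyped key.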

Getting the logrotate set up

  • Go to the logrotate dir: cd /etc/logrotate.d/
  • Edit the nginx file (vim nginx) to look like
/var/log/nginx/*log {
    rotate 10
    compress
    sharedscripts
    postrotate
        /etc/init.d/nginx reopen_logs
        nice /usr/bin/s3cmd sync /var/log/nginx/*.gz s3://<YOUR-S3-BUCKET-NAME>/nginx/
    endscript
}
  • This will sync all .gz files to a directory called nginx in your S3 bucket
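If you want to see what that postrotate step will copy before pointing it at a real bucket, you can rehearse the same *.gz glob locally. This sketch stands a temp directory in for the S3 bucket; all of the paths and filenames are made up for the demonstration:

```shell
# Local rehearsal of the sync step (sketch): a temp dir stands in
# for the bucket so you can see which files the *.gz glob picks up.
set -e
workdir=$(mktemp -d)
mkdir -p "$workdir/logs" "$workdir/bucket/nginx"

# Pretend logrotate has already compressed two rotated logs.
echo "old entries"   | gzip > "$workdir/logs/access.log-20130325.gz"
echo "newer entries" | gzip > "$workdir/logs/access.log-20130326.gz"
touch "$workdir/logs/access.log"   # the live log: not a .gz, so not copied

# Mirrors the s3cmd command's glob: only *.gz files are transferred.
cp "$workdir"/logs/*.gz "$workdir/bucket/nginx/"

ls "$workdir/bucket/nginx"
```

For the real config, logrotate -d /etc/logrotate.d/nginx does a debug dry run that prints what it would do without touching your logs.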

Now wasn’t that simple?

Important notes

  • I’m using dates as suffixes on my access files, e.g. access.log-20130326.gz. If you use numbers instead, each rotation renames every archive, so doing a sync could really mess up your backups. To change this, edit your logrotate.conf file and add the dateext option, which makes the date the suffix.
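The note above is worth seeing concretely. With numeric suffixes, rotation shifts every archive down by one, so the same filename holds different content on successive runs, and a sync keyed on filename overwrites your earlier uploads. A small simulation (all names and contents invented for illustration):

```shell
# Why numeric suffixes clash with sync: each rotation renames every
# archive, so "access.log.1" means something different every day.
set -e
d=$(mktemp -d)
echo "march 25 traffic" > "$d/access.log.1"

# Next rotation: yesterday's file shifts to .2, today's becomes .1.
mv "$d/access.log.1" "$d/access.log.2"
echo "march 26 traffic" > "$d/access.log.1"

# "access.log.1" now names different content than before the rotation,
# so a filename-based sync would clobber the previous upload.
cat "$d/access.log.1"
```

Date-stamped names like access.log-20130326.gz never get renamed, so each sync only ever adds new objects.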