Copy your server logs to Amazon S3 using Logrotate and s3cmd

You want to keep those server logs, right? I’ve had customers ask for analytical data from last year, and by George, Google Analytics doesn’t cover everything on the server.

What you’ll need:

* logrotate (installed on most systems; installing it is beyond the scope of this article)
* s3cmd (easy enough to install on a RedHat-based server via their yum.repos.d file)
* An Amazon S3 account (I hope this goes without saying)
* Logs you want to rotate (in this case Nginx)

Setting up s3cmd

After you get it installed, you’ll want to run the configuration (probably as root):

```shell
s3cmd --configure
```

This will ask you for your API KEY and API SECRET. It will also ask whether you want to encrypt the data on disk and during transfer (HTTPS). After you get it configured, try running:

```shell
s3cmd ls
```

That should list your buckets.

Getting logrotate set up

Go to the logrotate dir: cd /etc/logrotate.
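For what it’s worth, the pieces above can be wired together in a single logrotate config that ships each rotated log off to S3 — a minimal sketch, assuming a bucket named my-log-bucket and the stock Nginx log location (both are placeholders, not from the original post):

```
# /etc/logrotate.d/nginx -- hypothetical sketch; bucket name and paths are examples
/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    sharedscripts
    postrotate
        # Tell Nginx to reopen its log files after rotation
        [ -f /var/run/nginx.pid ] && kill -USR1 $(cat /var/run/nginx.pid)
    endscript
    lastaction
        # Push the compressed, rotated logs up to S3
        s3cmd put /var/log/nginx/*.gz s3://my-log-bucket/$HOSTNAME/
    endscript
}
```

The lastaction script runs once after all the logs in the block have been rotated, so the upload happens a single time per run rather than once per log file.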

Installing Node.js on CentOS using Nave

So I needed to install Node.js on CentOS, but I was hard pressed to find a yum repo that was up to date. Then I stumbled upon Nave. This is really easy: it’s just a bash script that you run to install node and npm.

```shell
# Get the shell script
wget https://raw.github.com/isaacs/nave/master/nave.sh
chmod +x ./nave.sh
./nave.sh usemain 0.8.8
node -v
# Will print
# v0.8.8
```

Life could not be easier.
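Nave has a handful of other subcommands worth knowing — a quick sketch (subcommand names as documented by the Nave project; the version number is just an example):

```shell
./nave.sh ls            # list installed versions
./nave.sh install 0.8.8 # install a version without activating it
./nave.sh use 0.8.8     # open a subshell running that version
./nave.sh usemain 0.8.8 # install into /usr/local for all users
```

The difference between use and usemain is the key one: use sandboxes the version in a subshell, while usemain makes it the system-wide node.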

Problems using VirtualBox to access sites on the net

Background

I’ve set up my VirtualBox to access my localhost (OS X) by following instructions similar to this post, using NAT. Everything was great until I started testing the staging site on the net (not my localhost). Most things worked fine, but uploading larger images would just fail in IE. Uploads worked fine on localhost but failed on stage, so I initially thought it was a configuration issue with the staging server’s web server.

MongoDB "Couldn't Send Query"

I would get random “Couldn’t send query” error messages from my MongoDB-based app GoScouter. It was random and quite frustrating. I was not able to find anything definite online about the error, other than that it might be a bug in the PHP MongoDB driver. I was using version 1.2.7, so I did a pecl upgrade mongo on the web server (using PHP-FPM) and this seemed to fix the problem. Now I’m running version 1.
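The upgrade itself amounts to a couple of commands — a sketch, assuming a stock pecl install and a PHP-FPM service like the setup described (the service name may differ on your distro):

```shell
# Check which version of the MongoDB driver is currently loaded
php -r 'echo phpversion("mongo"), PHP_EOL;'

# Upgrade the driver, then restart PHP-FPM so the new extension is picked up
pecl upgrade mongo
service php-fpm restart
```

Restarting PHP-FPM is the step that is easy to forget: until the workers are recycled, they keep the old extension loaded.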

Qwest DSL and OpenDNS... opting out of the DNS override.

While reconfiguring the network at the church, I noticed that my DNS queries were not getting curbed by OpenDNS. Browsing to ‘bad’ sites should give you a block page from OpenDNS, but nothing was tripping it… not even 4chan :S Then I realized that Qwest is doing some DNS override where they actually do the DNS query themselves.

The Solution

You need to opt out of this “wonderful” service from Qwest. http://www.
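You can check whether your queries are actually reaching OpenDNS from the command line — a sketch (208.67.222.222 is one of OpenDNS’s public resolvers, and debug.opendns.com is their diagnostic hostname; both are from OpenDNS’s own documentation, not the original post):

```shell
# Ask OpenDNS directly for its diagnostic TXT record.
# If the reply contains "server ..." entries, your query reached OpenDNS;
# if the ISP is intercepting port 53, you may get a different answer entirely.
dig txt debug.opendns.com @208.67.222.222 +short
```

This is a quick way to tell an ISP-level DNS override apart from a simple misconfiguration on the router.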

Nginx SSL certificate error message "key values mismatch"

When setting up an SSL certificate and chain file for Nginx, you need to combine them into one file. If you combine them in the wrong order, you’ll get a message similar to the following.

SSL_CTX_use_PrivateKey_file(" ... /") failed (SSL: error:0B080074:x509 certificate routines: X509_check_private_key:key values mismatch)

This means you either didn’t combine them or you combined them in the wrong order. To combine the two, just do something like this.
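The combination is a single cat — server certificate first, then the chain (the file names below are examples):

```shell
# Server certificate first, then the intermediate chain; Nginx reads them in order.
cat example.com.crt intermediate-chain.crt > example.com.combined.crt
```

The ssl_certificate directive then points at the combined file, while ssl_certificate_key stays pointed at the private key.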

Understanding the Nginx map directive

When switching to Nginx I needed a variable that signified whether the site was in HTTP or HTTPS mode. So I found this little bit of code.

```nginx
map $scheme $fastcgi_https { ## Detect when HTTPS is used
    default off;
    https on;
}
```

This works great, but I really didn’t understand it until now, so let’s take it line by line.

Line 1

```nginx
map $scheme $fastcgi_https { ## Detect when HTTPS is used
```

This is where all the glory happens.
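For context, the mapped variable usually ends up being handed to PHP so the application knows it is behind HTTPS — a sketch (the location block and socket address are illustrative, not from the post):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    # "on" when $scheme is https, "off" otherwise, per the map directive
    fastcgi_param HTTPS $fastcgi_https;
    fastcgi_pass 127.0.0.1:9000;
}
```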

The SetEnv equivalent in Nginx for setting environment variables.

If you need to pass some environment variables to your application from Nginx, you’ll need to specify them in the config file like so.

```nginx
fastcgi_param APPLICATION_ENV staging;
```

So, for example, a fuller config for a Zend Framework application:

```nginx
server {
    listen 443 default ssl;
    listen 80 default;

    ssl_certificate /etc/pki/tls/certs/cert.crt;
    ssl_certificate_key /etc/pki/tls/private/cert.key;
    keepalive_timeout 70;

    root /var/www/mysite/public;
    access_log /var/log/nginx/mysite.access.log main;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }
}
```
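In practice the fastcgi_param line sits inside the PHP location block, next to the other FastCGI parameters — a sketch (the socket address and script handling are examples, not from the original config):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param APPLICATION_ENV staging;  # the SetEnv equivalent
    fastcgi_pass 127.0.0.1:9000;
}
```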

Error Description: 0 on Nginx with Drupal and Secure Pages

A curious error message popped up after I moved my Drupal sites to Nginx. It would only happen on AJAX-type changes, but not all of them. I first noticed it when I went to change the author of a node. Then, when I tried to change a View, I received this error message. After checking the logs on Nginx and watching what happened using Firebug’s Net panel, it occurred to me that Drupal was making an HTTP “OPTIONS” request.

Nginx + fcgi (fastcgi) + PHP + APC + Zend Framework + CentOS

My newest toy I’ve been playing with is Nginx, getting PHP to work over FastCGI. The performance gain over Apache + mod_php is quite staggering. Here are a few things to watch out for.

Getting PHP v5.2

Some of my apps are not ready to make the jump to PHP 5.3 just yet, so I needed to get a hold of PHP 5.2. In the past I’d just install Zend Server CE and it would have all the PHP bells and whistles I could ever need.