Gone DigitalOcean

In order to try something new (and save money) I switched my personal "utility" server from EC2 to DigitalOcean. As I was running just a t1.micro instance anyway, I selected DigitalOcean's cheapest offering, which costs $5/month (512 MB RAM, 20 GB SSD, 1 TB transfer), plus $1/month for regular backups. (They somehow back up the whole server while it is running, so I'm not sure how consistent such a backup can possibly be, but at least it's cheap.)

DigitalOcean offers just servers, and the whole experience isn't, of course, quite as polished as with Amazon. Nor can you, say, access S3 (or other AWS services) at quite the same speed as from within Amazon's network. But for my use case it's a good fit.


Serving per IAM user S3 data

This isn't meant for the public-facing web, but for a closed environment where each client needs to be individually addressable (common application code, individual data). Each client has a local web server plus locally stored AWS credentials, and can therefore be fed content specific to that client. The bootstrap script is minimalistic by design, with as few moving parts as possible.

AWS credentials file (init.json below):

    init({
      "region": "eu-west-1",
      "common_bucket": "loadres",
      "private_bucket": "697ad820240c48929dce15c25cee8591",
      "access_key": "AKIAILZCSDJEFUN3L53Q",
      "secret_key": "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"
    })
    

index.html:

    <!DOCTYPE html>
    <html>
      <head>
        <meta charset="utf-8">
        <title>loadres</title>
        <script src="sha1.js"></script> <!-- https://github.com/lmorchard/S3Ajax/blob/master/js/sha1.js -->
        <script>

          // the authenticated S3 URL maker function, without STS specific parts:
          // http://www.async.fi/2012/07/s3-query-string-authentication-and-aws-security-token-service/
          var s3url = function(region, bucket, key, access_key, secret_key) {
            var expires = Math.floor(((new Date()).getTime()/1000) + 3600);
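            // string to sign for S3 query string authentication: HTTP verb, empty
            // Content-MD5, empty Content-Type, Expires, then the canonicalized resource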
            var string_to_sign = [
              'GET\n\n\n',
              expires, '\n',
              '/', bucket, '/', key
            ].join('');
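            // the sha1.js used here appears to produce unpadded base64, hence the appended '='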
            var signature = b64_hmac_sha1(secret_key, string_to_sign) + '=';
            var url = 'https://s3-' + region + '.amazonaws.com/' + bucket + '/' + key
              + '?AWSAccessKeyId=' + encodeURIComponent(access_key)
              + '&Signature=' + encodeURIComponent(signature)
              + '&Expires=' + expires;
            return url;
          };

          var init = function(settings) {
            var head = document.getElementsByTagName('head')[0];

            // expose the settings globally, since the prod.js loaded below refers to them
            window.settings = settings;

            // inject prod.css
            var css = document.createElement('link');
            css.setAttribute('rel', 'stylesheet');
            css.setAttribute('href', s3url(settings['region'], settings['common_bucket'], 'prod.css', settings['access_key'], settings['secret_key']));
            head.appendChild(css);

            // inject prod.js
            var js = document.createElement('script');
            js.setAttribute('src', s3url(settings['region'], settings['common_bucket'], 'prod.js', settings['access_key'], settings['secret_key']));
            head.appendChild(js);
          };
        </script>

        <!-- load AWS region and bucket info, plus credentials; this script calls init() (above) -->
        <script src="init.json"></script>
      </head>
      <body></body>
    </html>
    

Now, in the loaded prod.js file, we bring in the application code that fetches the data specific to this client (with a little repetition of the signing logic):

    var expires = Math.floor(((new Date()).getTime()/1000) + 3600);
    var string_to_sign = [
      'GET\n\n\n',
      expires, '\n',
      '/', settings['private_bucket'], '/', 'data.txt'
    ].join('');
    var signature = b64_hmac_sha1(settings['secret_key'], string_to_sign) + '=';
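    // note: the URL is relative, so the request goes to the local web server,
    // which proxies it to S3 (see the Nginx config below)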
    var url = '/' + settings['private_bucket'] + '/' + 'data.txt'
      + '?AWSAccessKeyId=' + encodeURIComponent(settings['access_key'])
      + '&Signature=' + encodeURIComponent(signature)
      + '&Expires=' + expires;

    var r = new XMLHttpRequest();
    r.open('GET', url, true);
    r.onreadystatechange = function () {
      if(r.readyState != 4 || r.status != 200) return;
      alert("Success: " + r.responseText);
    };
    r.send();
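
Since the signing code above is a near copy of the s3url() function in index.html, one could fold it into a shared helper that signs paths for the local proxy. A minimal sketch of that idea (signed_path() is a made-up name; everything else comes from the snippets above):

    // same signing logic as s3url(), but returning a proxy-relative path
    // instead of a full S3 URL (signed_path is a hypothetical helper name)
    var signed_path = function(settings, bucket, key) {
      var expires = Math.floor(((new Date()).getTime()/1000) + 3600);
      var string_to_sign = ['GET\n\n\n', expires, '\n', '/', bucket, '/', key].join('');
      var signature = b64_hmac_sha1(settings['secret_key'], string_to_sign) + '=';
      return '/' + bucket + '/' + key
        + '?AWSAccessKeyId=' + encodeURIComponent(settings['access_key'])
        + '&Signature=' + encodeURIComponent(signature)
        + '&Expires=' + expires;
    };

    var r = new XMLHttpRequest();
    r.open('GET', signed_path(settings, settings['private_bucket'], 'data.txt'), true);
    r.onreadystatechange = function () {
      if(r.readyState != 4 || r.status != 200) return;
      alert("Success: " + r.responseText);
    };
    r.send();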
    

To make this work without CORS, the local web server doubles as a proxy for the S3 requests. In the Nginx config:

    location /697ad820240c48929dce15c25cee8591 {
      # the signed URL is already a path-style S3 URL (/bucket/key?query string),
      # so it can be proxied to the regional endpoint as-is
      proxy_pass https://s3-eu-west-1.amazonaws.com/697ad820240c48929dce15c25cee8591;
    }
    


S3 query string authentication and AWS Security Token Service

Getting this right took some tweaking, so:

    // http://docs.amazonwebservices.com/AmazonS3/latest/dev/RESTAuthentication.html#RESTAuthenticationQueryStringAuth
    // http://docs.amazonwebservices.com/STS/latest/APIReference/Welcome.html

    var access_key = '…', secret_key = '…', session_token = '…';
    var bucket = '…', key = '…';

    var expires = Math.floor(((new Date()).getTime()/1000) + 3600);

    // as with plain query string authentication, but the temporary session token
    // is added to the string to sign as the x-amz-security-token amz header
    var string_to_sign = [
        'GET\n\n\n',
        expires, '\n',
        'x-amz-security-token:', session_token, '\n',
        '/', bucket, '/', key
    ].join('');

    // https://github.com/lmorchard/S3Ajax/blob/master/js/sha1.js
    var signature = b64_hmac_sha1(secret_key, string_to_sign) + '=';

    // key-relative URL; the session token must also be passed as a query parameter
    var url = key
        + '?AWSAccessKeyId=' + encodeURIComponent(access_key)
        + '&Signature=' + encodeURIComponent(signature)
        + '&Expires=' + expires
        + '&x-amz-security-token=' + encodeURIComponent(session_token);
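
The resulting url can then be requested like any other signed S3 URL. A minimal usage sketch, assuming path-style access through a regional endpoint (the eu-west-1 endpoint here is only an example, not something specified above):

    // resolve the key-relative url against a path-style bucket URL
    // (assumption: the bucket lives in eu-west-1)
    var full_url = 'https://s3-eu-west-1.amazonaws.com/' + bucket + '/' + url;

    var r = new XMLHttpRequest();
    r.open('GET', full_url, true);
    r.onreadystatechange = function () {
      if(r.readyState != 4 || r.status != 200) return;
      console.log(r.responseText);
    };
    r.send();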
    


Gone static

Update 2: It looks like page load times, at least as reported by Pingdom, went up from what they initially were. In my own testing cached pages still load in something like 150 to 250 milliseconds, but Pingdom disagrees. I don't know if this is regular CloudFront performance fluctuation, some kind of impedance mismatch between Pingdom and CloudFront, or "something else".

Update: That really did the trick, and the estimated -90% page load time wasn't that far off.

Having been fed up with the wastefulness (resource-wise) and general slowness of the MySQL/PHP/WordPress/CloudFlare setup for some time, I have now moved this site to S3/CloudFront. The site is generated from an XML file (which I derived from a WordPress export dump) with a Python script that is hosted here. Commenting is obviously impossible, but if you for some reason need to contact me, you'll find contact details on your left.


Daily MySQL database dump, back up to S3

I'm in the process of, or at least planning on, ditching MySQL/WordPress/CloudFlare and moving to a static site hosted on S3/CloudFront. At the moment AWS Route 53 does not support S3 or CloudFront as an Alias Target, so moving to S3/CloudFront means that I have to have an A record pointing to a web server somewhere, which in turn redirects requests to the actual site's CloudFront CNAME (a rough sketch of such a redirect follows below). I do have such a server (running Nginx), but the same thing could just as well be achieved with a service such as Arecord.net. This redirect means that there's no way to run the site without the www. prefix, which I can live with. Also, at the moment, no SSL support is available, but I'm sure I can live with that too: WordPress is simply slow and, most of all, a big waste of resources. Getting rid of all the dynamic parts (seriously, it's not like there are a lot of commenters around here) will make this thing run fast, at least compared to current page load times. My tests show that CloudFront returns cached pages in less than 200 ms.
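
For illustration only, that apex-domain redirect could be sketched in a few lines of Node.js (the actual setup uses Nginx; example.com and the plain HTTP port are placeholders):

    // redirect any request for the bare domain to the www. host,
    // which is a CNAME for the CloudFront distribution (example.com is a placeholder)
    var http = require('http');

    http.createServer(function (req, res) {
      res.writeHead(301, { 'Location': 'http://www.example.com' + req.url });
      res.end();
    }).listen(80);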

So, I'm killing one extra server in the near future and putting these snippets here for my own possible future use.

~/.my.cnf:

    [client]
    user = username
    password = password
    host = hostname

    [mysql]
    database = dbname

<dir>/wp-db-backup.sh:

    #!/bin/sh

    # timestamped dump file name
    DBFILE="<dir>/dbname-`/bin/date +%s`.gz"

    # dump and compress the database, upload the file to S3, then remove the local copy
    /usr/bin/mysqldump --quick dbname | /bin/gzip -c >$DBFILE
    /usr/bin/s3cmd put $DBFILE s3://bucketname/
    /bin/rm $DBFILE

crontab:

    # run daily at 03:45, at the lowest scheduling priority
    45 3 * * * /usr/bin/nice -n 20 <dir>/wp-db-backup.sh >/dev/null 2>&1
