November 21, 2012
This isn't meant for the public-facing web, but for a closed environment where each client must be individually addressable (common application code, individual data). Each client has a local web server plus locally stored AWS credentials, and can therefore be fed content specific to it. The bootstrap script is minimalistic by design, with as few moving parts as possible.
AWS credentials file (init.json below):
init({
    "region": "eu-west-1",
    "common_bucket": "loadres",
    "private_bucket": "697ad820240c48929dce15c25cee8591",
    "access_key": "AKIAILZCSDJEFUN3L53Q",
    "secret_key": "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"
})
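How init.json ends up on each client is out of scope here, but the provisioning step could be as simple as this kind of sketch (Python; the values and the registry they would come from are of course hypothetical). Note that the file is JSONP rather than plain JSON: it wraps the settings object in a call to init(), which index.html defines below.
#!/usr/bin/env python
# hypothetical provisioning step: write the per-client init.json
# (the values would come from wherever clients are registered)
import json

settings = {
    'region': 'eu-west-1',
    'common_bucket': 'loadres',
    'private_bucket': '697ad820240c48929dce15c25cee8591',
    'access_key': '...',
    'secret_key': '...',
}
with open('init.json', 'w') as f:
    f.write('init(' + json.dumps(settings, indent=4) + ')')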
index.html:
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>loadres</title>
    <script src="sha1.js"></script> <!-- https://github.com/lmorchard/S3Ajax/blob/master/js/sha1.js -->
    <script>
        // the authenticated S3 URL maker function, without the STS-specific parts:
        // http://www.async.fi/2012/07/s3-query-string-authentication-and-aws-security-token-service/
        var s3url = function(region, bucket, key, access_key, secret_key) {
            var expires = Math.floor(((new Date()).getTime()/1000) + 3600);
            var string_to_sign = [
                'GET\n\n\n',
                expires, '\n',
                '/', bucket, '/', key
            ].join('');
            var signature = b64_hmac_sha1(secret_key, string_to_sign) + '=';
            var url = 'https://s3-' + region + '.amazonaws.com/' + bucket + '/' + key
                + '?AWSAccessKeyId=' + encodeURIComponent(access_key)
                + '&Signature=' + encodeURIComponent(signature)
                + '&Expires=' + expires;
            return url;
        };
        var init = function(settings) {
            // stash settings globally; prod.js (loaded below) refers to it
            window.settings = settings;
            var head = document.getElementsByTagName('head')[0];
            // inject prod.css
            var css = document.createElement('link');
            css.setAttribute('rel', 'stylesheet');
            css.setAttribute('href', s3url(settings['region'], settings['common_bucket'], 'prod.css', settings['access_key'], settings['secret_key']));
            head.appendChild(css);
            // inject prod.js
            var js = document.createElement('script');
            js.setAttribute('src', s3url(settings['region'], settings['common_bucket'], 'prod.js', settings['access_key'], settings['secret_key']));
            head.appendChild(js);
        };
    </script>
    <!-- load AWS region and bucket info, plus credentials; this script calls init() (above) -->
    <script src="init.json"></script>
</head>
<body></body>
</html>
Now in the loaded prod.js file we would bring in the application code that would fetch data specific to this client (a little repetition here):
// settings was stashed on window by init() in index.html
var expires = Math.floor(((new Date()).getTime()/1000) + 3600);
var string_to_sign = [
    'GET\n\n\n',
    expires, '\n',
    '/', settings['private_bucket'], '/', 'data.txt'
].join('');
var signature = b64_hmac_sha1(settings['secret_key'], string_to_sign) + '=';
var url = '/' + settings['private_bucket'] + '/' + 'data.txt'
    + '?AWSAccessKeyId=' + encodeURIComponent(settings['access_key'])
    + '&Signature=' + encodeURIComponent(signature)
    + '&Expires=' + expires;
var r = new XMLHttpRequest();
r.open('GET', url, true);
r.onreadystatechange = function() {
    if(r.readyState != 4 || r.status != 200) return;
    alert("Success: " + r.responseText);
};
r.send();
To make this work without CORS, we're using a local proxy to handle the S3 requests; note that the URL built in prod.js above is relative, so the request goes to the client's local web server. In the Nginx config:
location /697ad820240c48929dce15c25cee8591 {
    # the matched prefix is replaced by the URI part of proxy_pass,
    # so the /bucket/key path is passed to S3 unchanged
    proxy_pass https://s3-eu-west-1.amazonaws.com/697ad820240c48929dce15c25cee8591;
}
Tagged with: aws iam s3
Categorised as: snippet
July 24, 2012
Getting this right took some tweaking, so:
// http://docs.amazonwebservices.com/AmazonS3/latest/dev/RESTAuthentication.html#RESTAuthenticationQueryStringAuth
// http://docs.amazonwebservices.com/STS/latest/APIReference/Welcome.html
var access_key = '…', secret_key = '…', session_token = '…';
var expires = Math.floor(((new Date()).getTime()/1000) + 3600);
var string_to_sign = [
    'GET\n\n\n',
    expires, '\n',
    'x-amz-security-token:', session_token, '\n',
    '/', bucket, '/', key
].join('');
// https://github.com/lmorchard/S3Ajax/blob/master/js/sha1.js
var signature = b64_hmac_sha1(secret_key, string_to_sign) + '=';
var url = key
    + '?AWSAccessKeyId=' + encodeURIComponent(access_key)
    + '&Signature=' + encodeURIComponent(signature)
    + '&Expires=' + expires
    + '&x-amz-security-token=' + encodeURIComponent(session_token);
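For reference, the same string-to-sign and signature can be cross-checked in Python (a minimal Python 2 sketch, standard library only; bucket, key and credential values are placeholders). One detail worth noting: b64_hmac_sha1() in sha1.js leaves its base64 output unpadded, which is why the JavaScript appends '=' by hand; Python's base64 module pads for us.
#!/usr/bin/env python
# cross-check of the S3 query-string signature with an STS session token
import base64, hmac, hashlib, time, urllib

access_key, secret_key, session_token = '...', '...', '...'
bucket, key = 'bucketname', 'data.txt'

expires = int(time.time()) + 3600
string_to_sign = 'GET\n\n\n%d\nx-amz-security-token:%s\n/%s/%s' % (
    expires, session_token, bucket, key)
# HMAC-SHA1 over the string-to-sign, base64 encoded (padding included here)
signature = base64.b64encode(
    hmac.new(secret_key, string_to_sign, hashlib.sha1).digest())
url = '%s?AWSAccessKeyId=%s&Signature=%s&Expires=%d&x-amz-security-token=%s' % (
    key, urllib.quote_plus(access_key), urllib.quote_plus(signature),
    expires, urllib.quote_plus(session_token))
print url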
Tagged with: s3 javascript
Categorised as: snippet
February 11, 2012
I'm in the process of, or at least planning on, ditching MySQL/WordPress/CloudFlare and moving to a static site hosted on S3/CloudFront. At the moment, as AWS Route 53 does not support S3 or CloudFront as an Alias Target, moving to S3/CloudFront means that I have to have an A record pointing to a web server somewhere, which in turn redirects the request to the actual site's CloudFront CNAME. I do have such a server (running Nginx), but the same thing could just as well be achieved with a service such as Arecord.net.
This redirect means that there's no way to run the site without the www. prefix, which I can live with. Also, no SSL support is available at the moment, but I'm sure I can live with that too, as WordPress is simply slow and, most of all, a big waste of resources. Getting rid of all the dynamic parts (seriously, it's not like there are a lot of commenters around here) will make this thing run fast, at least compared to current page load times. My tests show that CloudFront returns cached pages in less than 200 ms.
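The redirecting server block itself is trivial; a minimal Nginx sketch, assuming this site with www.async.fi as the CNAME pointing at CloudFront:
server {
    listen 80;
    server_name async.fi;
    # send the bare domain to the www. host, which is a CNAME for CloudFront
    rewrite ^ http://www.async.fi$request_uri? permanent;
}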
So, I'm killing one extra server in the near future and putting these snippets here for my own possible future use.
~/.my.cnf:
[client]
user = username
password = password
host = hostname
[mysql]
database = dbname
<dir>/wp-db-backup.sh:
#!/bin/sh
DBFILE="<dir>/dbname-`/bin/date +%s`.gz"
/usr/bin/mysqldump --quick dbname | /bin/gzip -c >$DBFILE
/usr/bin/s3cmd put $DBFILE s3://bucketname/
/bin/rm $DBFILE
crontab:
45 3 * * * /usr/bin/nice -n 20 <dir>/wp-db-backup.sh >/dev/null 2>&1
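And for the (hopefully never needed) restore direction, something along these lines should do; the timestamped object name is an example of what the backup script produces, and mysql picks up its credentials from the same ~/.my.cnf:
#!/bin/sh
# fetch a dump from the bucket and pipe it back into MySQL
/usr/bin/s3cmd get s3://bucketname/dbname-1329000000.gz /tmp/dbname.gz
/bin/gunzip -c /tmp/dbname.gz | /usr/bin/mysql dbname
/bin/rm /tmp/dbname.gz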
Tagged with: mysql mysqldump s3 s3cmd
Categorised as: snippet
December 12, 2011
With the setup being along the dev → test → prod lines, to correctly manage database migrations we first set things up at dev:
manage.py syncdb --noinput
manage.py convert_to_south <app>
manage.py createsuperuser
At this point the South migrations are pushed to the repository and pulled in at test:
manage.py syncdb --noinput
manage.py migrate
manage.py migrate <app> 0001 --fake
manage.py createsuperuser
Now, back at dev, after a change to one of the models:
manage.py schemamigration <app> --auto
manage.py migrate <app>
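For illustration, the kind of model change meant here could be as small as adding a field (a hypothetical app with a hypothetical Entry model):
# <app>/models.py: schemamigration --auto picks up the added field
from django.db import models

class Entry(models.Model):
    title = models.CharField(max_length=200)
    # the new field; null=True lets South add the column to existing rows
    published = models.DateTimeField(null=True, blank=True)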
And, after push/pull, at test:
manage.py migrate <app>
Tagged with: django south
Categorised as: snippet
October 23, 2011
Using Django and Tastypie, we automagically respond to SNS subscription requests. After that part is handled, the notification messages start coming in, and those are used to trigger an SQS polling cycle (we try to do a thorough job there, which may seem like overkill, but it's not). A received SQS message is parsed and its contents are passed to an external program that forks and exits, which keeps the request from blocking.
from django.conf import settings
from tastypie import fields, http
from tastypie.resources import Resource
from tastypie.bundle import Bundle
from tastypie.authentication import Authentication
from tastypie.authorization import Authorization
from tastypie.throttle import BaseThrottle
import boto.sqs
from boto.sqs.message import Message
from urlparse import urlparse
import base64, httplib, tempfile, subprocess, time, json, os, sys, syslog

# http://django-tastypie.readthedocs.org/en/latest/non_orm_data_sources.html
class NotificationObject(object):
    def __init__(self, initial=None):
        self.__dict__['_data'] = {}
        if hasattr(initial, 'items'):
            self.__dict__['_data'] = initial
    def __getattr__(self, name):
        return self._data.get(name, None)
    def __setattr__(self, name, value):
        self.__dict__['_data'][name] = value

class NotificationResource(Resource):
    sns_messageid = fields.CharField(attribute='MessageId')
    sns_timestamp = fields.CharField(attribute='Timestamp')
    sns_topicarn = fields.CharField(attribute='TopicArn')
    sns_type = fields.CharField(attribute='Type')
    sns_unsubscribeurl = fields.CharField(attribute='UnsubscribeURL')
    sns_subscribeurl = fields.CharField(attribute='SubscribeURL')
    sns_token = fields.CharField(attribute='Token')
    sns_message = fields.CharField(attribute='Message')
    sns_subject = fields.CharField(attribute='Subject')
    sns_signature = fields.CharField(attribute='Signature')
    sns_signatureversion = fields.CharField(attribute='SignatureVersion')
    sns_signingcerturl = fields.CharField(attribute='SigningCertURL')

    class Meta:
        resource_name = 'notification'
        object_class = NotificationObject
        fields = ['sns_messageid']
        list_allowed_methods = ['post']
        authentication = Authentication()
        authorization = Authorization()

    def get_resource_uri(self, bundle_or_obj):
        return ''

    def obj_create(self, bundle, request=None, **kwargs):
        bundle.obj = NotificationObject(initial={'MessageId': '', 'Timestamp': '', 'TopicArn': '', 'Type': '', 'UnsubscribeURL': '', 'SubscribeURL': '', 'Token': '', 'Message': '', 'Subject': '', 'Signature': '', 'SignatureVersion': '', 'SigningCertURL': ''})
        bundle = self.full_hydrate(bundle)
        o = urlparse(bundle.data['SigningCertURL'])
        if not o.hostname.endswith('.amazonaws.com'):
            return bundle
        topicarn = bundle.data['TopicArn']
        if topicarn != settings.SNS_TOPIC:
            return bundle
        if not self.verify_message(bundle):
            return bundle
        if bundle.data['Type'] == 'SubscriptionConfirmation':
            self.process_subscription(bundle)
        elif bundle.data['Type'] == 'Notification':
            self.process_notification(bundle)
        return bundle

    def process_subscription(self, bundle):
        syslog.syslog('SNS Subscription ' + bundle.data['SubscribeURL'])
        o = urlparse(bundle.data['SubscribeURL'])
        conn = httplib.HTTPSConnection(o.hostname)
        conn.putrequest('GET', o.path + '?' + o.query)
        conn.endheaders()
        response = conn.getresponse()
        subscription = response.read()

    def process_notification(self, bundle):
        sqs = boto.sqs.connect_to_region(settings.SQS_REGION)
        queue = sqs.lookup(settings.SQS_QUEUE)
        retries = 5
        while True:
            if retries < 1:
                break
            retries -= 1
            time.sleep(5)
            messages = queue.get_messages(10, visibility_timeout=60)
            if len(messages) < 1:
                continue
            for message in messages:
                try:
                    m = json.loads(message.get_body())
                    m['return_sns_region'] = settings.SNS_REGION
                    m['return_sns_topic'] = settings.SNS_TOPIC
                    m['return_sqs_region'] = settings.SQS_REGION
                    m['return_sqs_queue'] = settings.SQS_QUEUE
                    process = subprocess.Popen(['/usr/bin/nice', '-n', '15', os.path.dirname(os.path.normpath(os.sys.modules[settings.SETTINGS_MODULE].__file__)) + '/process.py', base64.b64encode(json.dumps(m))], shell=False)
                    process.wait()
                except:
                    e = sys.exc_info()[1]
                    syslog.syslog(str(e))
                queue.delete_message(message)

    def verify_message(self, bundle):
        message = u''
        if bundle.data['Type'] == 'SubscriptionConfirmation':
            message += 'Message\n'
            message += bundle.data['Message'] + '\n'
            message += 'MessageId\n'
            message += bundle.data['MessageId'] + '\n'
            message += 'SubscribeURL\n'
            message += bundle.data['SubscribeURL'] + '\n'
            message += 'Timestamp\n'
            message += bundle.data['Timestamp'] + '\n'
            message += 'Token\n'
            message += bundle.data['Token'] + '\n'
            message += 'TopicArn\n'
            message += bundle.data['TopicArn'] + '\n'
            message += 'Type\n'
            message += bundle.data['Type'] + '\n'
        elif bundle.data['Type'] == 'Notification':
            message += 'Message\n'
            message += bundle.data['Message'] + '\n'
            message += 'MessageId\n'
            message += bundle.data['MessageId'] + '\n'
            if bundle.data['Subject'] != '':
                message += 'Subject\n'
                message += bundle.data['Subject'] + '\n'
            message += 'Timestamp\n'
            message += bundle.data['Timestamp'] + '\n'
            message += 'TopicArn\n'
            message += bundle.data['TopicArn'] + '\n'
            message += 'Type\n'
            message += bundle.data['Type'] + '\n'
        else:
            return False
        o = urlparse(bundle.data['SigningCertURL'])
        conn = httplib.HTTPSConnection(o.hostname)
        conn.putrequest('GET', o.path)
        conn.endheaders()
        response = conn.getresponse()
        cert = response.read()
        # ok; attempt to use m2crypto failed, using the openssl command line tool instead
        file_cert = tempfile.NamedTemporaryFile(mode='w', delete=False)
        file_sig = tempfile.NamedTemporaryFile(mode='w', delete=False)
        file_mess = tempfile.NamedTemporaryFile(mode='w', delete=False)
        file_cert.write(cert)
        file_sig.write(bundle.data['Signature'])
        file_mess.write(message)
        file_cert.close()
        file_sig.close()
        file_mess.close()
        # see: https://async.fi/2011/10/sns-verify-sh/
        verify_process = subprocess.Popen(['/usr/local/bin/sns-verify.sh', file_cert.name, file_sig.name, file_mess.name], shell=False)
        verify_process.wait()
        if verify_process.returncode == 0:
            return True
        return False
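The sns-verify.sh script is described in the linked post; as a rough sketch of what such a helper does with the openssl command line tool (this is a guess at its contents, not a copy): extract the public key from the signing certificate, base64-decode the signature, and verify it against the reconstructed message.
#!/bin/sh
# usage: sns-verify.sh <cert file> <base64 signature file> <message file>
# exits 0 when the signature checks out
CERT=$1; SIG=$2; MESS=$3
openssl x509 -in "$CERT" -pubkey -noout > /tmp/sns-pub.$$.pem || exit 1
base64 -d "$SIG" > /tmp/sns-sig.$$.bin || exit 1
openssl dgst -sha1 -verify /tmp/sns-pub.$$.pem -signature /tmp/sns-sig.$$.bin "$MESS"
RET=$?
rm -f /tmp/sns-pub.$$.pem /tmp/sns-sig.$$.bin
exit $RET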
That process.py would be something like:
#!/usr/bin/env python
import boto.sqs
from boto.sqs.message import Message
import base64, json, os, sys, syslog

if len(sys.argv) != 2:
    sys.exit('usage: %s <base64 encoded json object>' % (sys.argv[0], ))
m = json.loads(base64.b64decode(sys.argv[1]))

# http://code.activestate.com/recipes/66012-fork-a-daemon-process-on-unix/
try:
    pid = os.fork()
    if pid > 0:
        sys.exit(0)
except OSError, e:
    print >>sys.stderr, "fork #1 failed: %d (%s)" % (e.errno, e.strerror)
    sys.exit(1)
os.chdir("/")
os.setsid()
os.umask(0)
try:
    pid = os.fork()
    if pid > 0:
        sys.exit(0)
except OSError, e:
    sys.exit(1)

syslog.syslog(sys.argv[0] + ': ' + str(m))
# ...
That is, process.py gets the received (and augmented) SQS message, Base64 encoded, as its only command line argument; it then forks, the parent exits, and the child carries on with whatever it's supposed to do on its own. Control returns to NotificationResource, so the request doesn't block unnecessarily.
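The sending side is outside the scope of this post, but for testing the pipeline end to end, a hypothetical producer using the same boto-era API could look like this (region, queue name, topic ARN and payload are all made up):
#!/usr/bin/env python
# queue a work item in SQS, then publish an SNS notification
# to wake up NotificationResource's polling cycle
import json
import boto.sns, boto.sqs
from boto.sqs.message import Message

sqs = boto.sqs.connect_to_region('eu-west-1')
queue = sqs.lookup('workqueue')
m = Message()
m.set_body(json.dumps({'task': 'example', 'id': 123}))
queue.write(m)

sns = boto.sns.connect_to_region('eu-west-1')
sns.publish('arn:aws:sns:eu-west-1:123456789012:worktopic', 'new work available')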
Tagged with: aws django python rest sns sqs tastypie
Categorised as: snippet