S3 client-upload parameter generation with Python

# http://aws.amazon.com/articles/1434   
def S3UploadParams(bucket_name, object_name, expiry, maxsize, redirect):
    import boto, base64, hmac, hashlib
    from time import time, gmtime, strftime

    # Sign the base64-encoded policy document with the AWS secret key
    # (HMAC-SHA1), as required for browser-based S3 POST uploads
    def SignS3Upload(policy_document):
        policy = base64.b64encode(policy_document)
        return base64.b64encode(hmac.new(
                boto.config.get('Credentials', 'aws_secret_access_key'),
                policy,
                hashlib.sha1
                ).digest())

    def GenerateS3PolicyString(bucket_name, object_name, expiry, maxsize, redirect):
        policy_template = ('{ "expiration": "%s",'
                           ' "conditions": ['
                           ' {"bucket": "%s"},'
                           ' ["eq", "$key", "%s"],'
                           ' {"acl": "private"},'
                           ' {"success_action_redirect": "%s"},'
                           ' ["content-length-range", 0, %s]'
                           ' ] }')
        return policy_template % (
            strftime("%Y-%m-%dT%H:%M:%SZ", gmtime(time() + expiry)),
            bucket_name,
            object_name,
            redirect,
            maxsize
            )
    
    params = {
        'key': object_name,
        'AWSAccessKeyId': boto.config.get('Credentials', 'aws_access_key_id'),
        'acl': 'private',
        'success_action_redirect': redirect,
        }

    policy = GenerateS3PolicyString(bucket_name, object_name, expiry, maxsize, redirect)
    params['policy'] = base64.b64encode(policy)

    signature = SignS3Upload(policy)
    params['signature'] = signature

    return params
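
For completeness, here is a sketch of how the returned params might be dropped into a browser upload form, following the POST-upload scheme of the linked article. The bucket, key, sizes, and URLs below are made-up placeholders, and like the code above this assumes Python 2. Note that S3 requires the file field to come after the policy fields:

# Hypothetical usage of S3UploadParams; all names are placeholders
params = S3UploadParams('mybucket', 'uploads/example.jpg', 3600,
                        10 * 1024 * 1024, 'http://example.com/done')

form = '<form action="https://mybucket.s3.amazonaws.com/" method="post" enctype="multipart/form-data">\n'
for name, value in params.items():
    form += '  <input type="hidden" name="%s" value="%s">\n' % (name, value)
form += '  <input type="file" name="file">\n'   # file must be the last field
form += '  <input type="submit" value="Upload">\n</form>'
print form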


KinectVision.com code

This is old, from December 2010 it seems, but it's preserved here in case the machine goes titsup. Quick, dirty and ugly, but it works most of the time. First, the capture program:

#include <libusb-1.0/libusb.h>
#include "libfreenect.h"
#include "libfreenect_sync.h"
#include <stdio.h>
#include <stdlib.h>

/*
  No error checking performed whatsoever; dealing with it later (or not).
 */
int main(int argc, char** argv)
{
  uint16_t * depth;  /* filled in by the sync wrapper's own buffer */
  uint32_t timestamp;
  int index = 0;
  freenect_depth_format fmt = FREENECT_DEPTH_11BIT;

  uint8_t * depth8 = (uint8_t *)malloc(FREENECT_FRAME_PIX);
  int i;

  /* Capture one Kinect depth frame; the sync API hands back a
     pointer to a buffer it manages, so no malloc is needed here */
  freenect_sync_get_depth((void**)&depth, &timestamp, index, fmt);

  /* Convert the captured frame to an 8-bit greyscale image.  The
     hyperbolic mapping spreads nearby depth values out; the result
     is reduced modulo 256, so distant objects show up as bands */
  for(i = 0; i < FREENECT_FRAME_PIX; i++) {
    depth8[i] = (uint32_t)((2048 * 256) / (2048.0 - depth[i])) & 0xff;
  }

  /* Write raw greyscale image to stdout  */
  fwrite(depth8, FREENECT_FRAME_PIX, 1, stdout);

  return 0;
}
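
To eyeball a single frame instead of uploading it, the raw output can be converted with the same Netpbm tools the uploader below relies on (pnmtopng assumed available):

./capkinect | rawtopgm 640 480 >frame.pgm
pnmtopng frame.pgm >frame.png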

Makefile:

all:		capkinect

clean:
		rm -f capkinect.o capkinect

capkinect.o:	capkinect.c
	gcc -g -I/usr/local/include/libfreenect/ -c capkinect.c -o capkinect.o

capkinect:	capkinect.o
	gcc -g capkinect.o -L/usr/local/lib/ -lfreenect_sync -o capkinect

Uploader:

#!/bin/sh

INPUT=`mktemp`
AVG=`mktemp`
TEMP=`mktemp`
OUTPUT=`mktemp --directory`

#COLORMAP="black-#45931c"
COLORMAP="black-white"

# initial average frame
capkinect | rawtopgm 640 480 | pnmcut 8 8 624 464 | pgmtoppm $COLORMAP >$AVG

while true; do

    #echo "input: $INPUT avg: $AVG temp: $TEMP output: $OUTPUT colormap: $COLORMAP"

    capkinect | rawtopgm 640 480 | pnmcut 8 8 624 464 | pgmtoppm $COLORMAP >$INPUT

    FILENAME=$OUTPUT/`date +%s.%N`

    # exponential moving average: each new frame contributes 3.5%
    ppmmix 0.035 $AVG $INPUT >$FILENAME.ppm

    cp $FILENAME.ppm $AVG

    cjpeg -greyscale -quality 65 <$FILENAME.ppm >$FILENAME.jpg

    echo "user=XXXX:AAAA" | curl --digest -K - -F "file=@$FILENAME.jpg" http://kinectvision.com/depth

    rm $FILENAME.ppm $FILENAME.jpg

    sleep 1

done

Server-end PHP script that receives and serves frames:

<?php

$latest_path = $_SERVER["DOCUMENT_ROOT"] . "/incoming/latest";

if($_SERVER["REQUEST_METHOD"] == "POST") {

  if(!isset($_FILES["file"]["name"])) {
    exit();
  }
  // basename() keeps a crafted filename from escaping the incoming dir
  $name = basename($_FILES["file"]["name"]);
  if(move_uploaded_file($_FILES["file"]["tmp_name"], $_SERVER["DOCUMENT_ROOT"] . "/incoming/" . $name)) {
    file_put_contents($latest_path, $name);
  }

} elseif($_SERVER["REQUEST_METHOD"] == "HEAD") {

  $latest = file_get_contents($latest_path);
  header("X-KinectVision-Latest: " . $latest);

} elseif($_SERVER["REQUEST_METHOD"] == "GET") {

  $latest_name = file_get_contents($latest_path);
  $latest = $_SERVER["DOCUMENT_ROOT"] . "/incoming/" . $latest_name;
  header("Content-Type: image/jpeg");
  header("X-KinectVision-Latest: " . $latest_name);

  if(isset($_GET["width"]) && intval($_GET["width"]) > 0 && intval($_GET["width"]) < 624) {
    $width = intval($_GET["width"]);
    $f = popen("djpeg -pnm -fast -greyscale $latest | pnmscalefixed -width=$width | cjpeg -greyscale -quality 65", "r");
    while(!feof($f)) {
      echo fread($f, 1024);
    }
    fclose($f);
  } else {
    echo file_get_contents($latest);
  }

}
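
From the client side, the endpoint can be exercised with curl (-I issues the HEAD request; the width parameter triggers the server-side rescale):

# ask which frame is current without downloading it
curl -I http://kinectvision.com/depth

# fetch the latest frame scaled down to 320 pixels wide
curl -o frame.jpg "http://kinectvision.com/depth?width=320"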

(Don't know if it's the Suffusion theme or something else that kills all the newlines in these listings. They're there, I can assure you; they're just not visible.)


Debian SSD tips

  • Do not use swap. This may be overly cautious these days, as the drives have fancy wear-leveling schemes and whatnot, but if you're not tight on memory it should not hurt. And if memory is an issue, then to avoid performance problems perhaps you should upgrade it in the first place.
  • Do use the "noop" I/O scheduler (see the sketch after this list):
      • apt-get install grub
      • add GRUB_CMDLINE_LINUX="elevator=noop" to /etc/default/grub
      • update-grub
      • after a reboot, /sys/block/sda/queue/scheduler should read "[noop] anticipatory deadline cfq"
That last part about the scheduler ensures that the default disk I/O scheduler, which rearranges reads and writes to boost IOPS on traditional spinning platters and is therefore just bad for SSD performance, is not used. With "noop", reads and writes happen in the order they are issued.
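
To try the switch before committing to the GRUB change, the scheduler can also be flipped at runtime (sda assumed below; substitute your SSD's device name):

# show the available schedulers; the active one is in brackets
cat /sys/block/sda/queue/scheduler
# switch to noop for this boot only, as root; the GRUB setting
# above is what makes the change survive reboots
echo noop >/sys/block/sda/queue/scheduler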


Site optimizations

Performance-wise, setting up Amazon CloudFront ("Custom Origin") in addition to WP Minify and WP Super Cache improved site response times a lot. Offloading static content to Amazon not only made those offloaded files load faster (because of Amazon's faster tubes) but also reduced stress on our feeble-ish server on page load, so the document itself is returned faster. Load time is also more repeatable. Good stuff!

Note that CloudFront makes HTTP/1.0 requests, and Apache may take some convincing to gzip those 1.0 responses.
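
One trick, sketched from the mod_deflate documentation rather than battle-tested here: newer Apache releases understand a force-gzip environment variable that overrides the Accept-Encoding check, and SetEnvIf can key it off the request protocol. Treat this as an assumption to verify against your Apache version:

# Force mod_deflate to compress responses to CloudFront's HTTP/1.0
# requests; only sensible when the CDN side always handles gzip
SetEnvIf Request_Protocol "^HTTP/1\.0$" force-gzip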


Working ntp.conf for the Pool

driftfile /var/lib/ntp/ntp.drift

statsdir /var/log/ntpstats/

statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable

# From http://support.ntp.org/bin/view/Servers/StratumTwoTimeServers
# each of these ISO: FR, Notify?: No
server ntp1.kamino.fr iburst
server ntp1.doowan.net iburst
server ntp.duckcorp.org iburst
server itsuki.fkraiem.org iburst
server time.zeroloop.net iburst

restrict 127.0.0.1
restrict ::1
restrict default kod notrap nomodify nopeer noquery
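
Once ntpd has been restarted with this config, the peers can be sanity-checked like so (Debian's init script name assumed; a '*' in ntpq's first column marks the peer currently synced to):

/etc/init.d/ntp restart
ntpq -p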

Pool stats for the servers can be found here.
