WireGuard Notes http://www.async.fi/2020/12/wireguard-notes/ Sun, 20 Dec 2020 20:00:00 +0000 Joni Kähärä http://www.async.fi/2020/12/wireguard-notes/ After being first exposed to WireGuard, it took a while before its simple-yet-powerful nature started to dawn on me. By "simple" I mean simple from an end-user's perspective; the implementation, while supposedly a few orders of magnitude less hairy than e.g. OpenVPN or IPsec, isn't something that I can claim to have grasped. But the ascetic, no bells and whistles, do one thing well approach is something that resonates and got me to invest some time to play with the thing to see how it works in some common scenarios.

Getting packets to travel from A to B to C and back doesn't happen automagically with WireGuard. In fact there's no magic going on behind the scenes to begin with, which can be A Good Thing™ or not, depending on circumstances. The only thing that WireGuard provides is making packets flow between the two ends of a link in a secure manner; everything else is left to be taken care of by whatever means is appropriate. For setting things up there's the wg program, which helps with key generation and with configuring the WireGuard interface, and further the wg-quick script, which does a bit more and sets up the interfaces, peers and routes based on an INI-format file. The latter also ships with a handy systemd service for running things permanently. These tools can be installed on Debian-like systems by saying apt install wireguard-tools. Getting the thing running on Raspberry Pis requires a few extra steps, see here.
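
For example, generating a keypair for a peer goes like this (a minimal sketch; the key file names are arbitrary):

umask 077
wg genkey | tee peer_private.key | wg pubkey > peer_public.key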

Before going into the example configuration, a refresher about configuring cryptokey routing. When peers are configured, the "allowed IPs" settings for the peers mean that:

  • when a packet is sent to the WireGuard interface (e.g. wg0), the packet gets passed to the peer that has a prefix matching the packet's destination address
  • which means that the prefixes can't overlap between the peers
  • when a packet is received from a peer, if the peer has a prefix matching the packet's source address, the packet is allowed in
  • when either sending to or receiving from a peer, if no matching prefix is found for a packet, the packet is dropped
Or like it says in the cryptokey routing section linked to above:
In other words, when sending packets, the list of allowed IPs behaves as a sort of routing table, and when receiving packets, the list of allowed IPs behaves as a sort of access control list.
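
Once an interface is up, the effective cryptokey routing table can be inspected with wg itself, for example (assuming the interface is called wg0):

wg show wg0 allowed-ips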

Connecting two sites, both behind NAT

This example connects two sites that are both behind NAT, which requires that there's a publicly accessible host running in between:

+-----------+      +-----------+       +-----------+
|           |      |           |       |           |
|    NAT    +------+ WireGuard +-------+ Network 1 |
|           |      |           |       |           |
+-----+-----+      +-----------+       +-----------+
      |
      |
      |           +-------------+
      |           |             |
      +---------->+ example.com +<-----------+
                  |             |            |
                  +-------------+            |
                                             |
                                             |
+-----------+      +-----------+       +-----+-----+
|           |      |           |       |           |
| Network 2 +------+ WireGuard +-------+    NAT    |
|           |      |           |       |           |
+-----------+      +-----------+       +-----------+

Suppose that the public host's network is 10.0.0.0/24, Network 1 is 192.168.1.0/24, Network 2 is 172.16.1.0/24, and that the WireGuard network is 10.1.1.0/24. All hosts on both behind-NAT networks should be able to see each other.

The WireGuard hosts on the behind-NAT networks connect to example.com:51820, which has the following configuration:

[Interface]
Address = 10.1.1.1/32
ListenPort = 51820
PrivateKey = kKELbYxqmwHGUyjdHiVhQ/lzyiLep2kLgAocLF4CR3Q=

[Peer]
PublicKey = D+GcHTk8uRiggEj79IhbbsLWHSdZynYjUVPWcP8aJFg=
AllowedIPs = 10.1.1.11/32,192.168.1.0/24

[Peer]
PublicKey = up9LDZjYw8/LHH29ZQdp7Mg9bB+LIE7T4OsYLlEXLng=
AllowedIPs = 10.1.1.12/32,172.16.1.0/24

WireGuard host on Network 1 (192.168.1.0/24):

[Interface]
Address = 10.1.1.11/32
PrivateKey = 4DQYFpL2kkVd/rjEYLTES8Ah6K2BMOrH504TXRQyv0E=
Table = off
PostUp = ip -4 route add 10.1.1.0/24 dev %i
PostUp = ip -4 route add 172.16.1.0/24 dev %i

[Peer]
PublicKey = 2LbLqgg0hGjsQ+Y15l+mPhEtGN53Uhvzj8n9dpxVqDQ=
AllowedIPs = 10.1.1.0/24,192.168.1.0/24,172.16.1.0/24
Endpoint = example.com:51820
PersistentKeepalive = 25

WireGuard host on Network 2 (172.16.1.0/24):

[Interface]
Address = 10.1.1.12/32
PrivateKey = CPZBHHLywkMqgW70MIgnvJRculKKGyYaBP7rIUJbpXs=
Table = off
PostUp = ip -4 route add 10.1.1.0/24 dev %i
PostUp = ip -4 route add 192.168.1.0/24 dev %i

[Peer]
PublicKey = 2LbLqgg0hGjsQ+Y15l+mPhEtGN53Uhvzj8n9dpxVqDQ=
AllowedIPs = 10.1.1.0/24,192.168.1.0/24,172.16.1.0/24
Endpoint = example.com:51820
PersistentKeepalive = 25

Routing is disabled with Table = off and routes are added manually in PostUp, because otherwise wg-quick would set up routes for the local network's traffic, which is included in AllowedIPs.

All three hosts should enable forwarding:

sysctl -w net.ipv4.ip_forward=1
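
To make the setting survive reboots, it can also be dropped into sysctl configuration (the file name here is arbitrary):

echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-wireguard.conf
sysctl --system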

In order to route traffic to WireGuard from other hosts on the networks, those hosts need a route. If fiddling with the gateway isn't an option, the routes need to be set on each host separately, for example (assuming the WireGuard hosts on each network are .1.10):

# Network 1
ip -4 route add 10.1.1.0/24 via 192.168.1.10
ip -4 route add 172.16.1.0/24 via 192.168.1.10

# Network 2
ip -4 route add 10.1.1.0/24 via 172.16.1.10
ip -4 route add 192.168.1.0/24 via 172.16.1.10
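
If the hosts use Debian-style ifupdown, one way to persist these routes is with post-up lines in /etc/network/interfaces; a sketch for a host on Network 1, assuming its interface is eth0 and configured via DHCP:

iface eth0 inet dhcp
      post-up ip -4 route add 10.1.1.0/24 via 192.168.1.10
      post-up ip -4 route add 172.16.1.0/24 via 192.168.1.10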

Then, supposing that each of the configurations above is saved as /etc/wireguard/wg0.conf on its respective host, one can say wg-quick up wg0. To make the WireGuard configuration come up automatically at boot, the systemd service should be enabled:

systemctl enable wg-quick@wg0.service
systemctl daemon-reload
systemctl start wg-quick@wg0.service
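
After bringing everything up, the link can be verified from, say, the WireGuard host on Network 1 (addresses as in the example above; 172.16.1.10 assumes the Network 2 WireGuard host is .1.10 as mentioned earlier):

wg show wg0
ping -c 3 10.1.1.1
ping -c 3 172.16.1.10
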
]]>
Building virtual machines with vmbuilder http://www.async.fi/2016/02/building-virtual-machines-with-vmbuilder/ Fri, 12 Feb 2016 20:00:00 +0000 Joni Kähärä http://www.async.fi/2016/02/building-virtual-machines-with-vmbuilder/ After installing qemu-kvm, libvirt-bin, bridge-utils and ubuntu-vm-builder, set up a bridge that virtual machines can attach to:

auto lo
iface lo inet loopback

auto enp2s0
iface enp2s0 inet manual

auto br0
iface br0 inet static
      address 192.168.0.8
      netmask 255.255.255.0
      gateway 192.168.0.1
      nameserver 127.0.0.1
      bridge_ports enp2s0
      bridge_stp off
      bridge_fd 0
      bridge_maxwait 0

Then /etc/init.d/networking restart.
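
The bridge can then be checked with the tools installed above (output will vary):

brctl show br0
ip addr show br0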

Install apt-cacher and give it the following config in /etc/apt-cacher/apt-cacher.conf:

group = www-data
user = www-data
daemon_addr = 192.168.0.8
path_map = ubuntu archive.ubuntu.com/ubuntu
allowed_hosts = 127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
ubuntu_release_names = trusty
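
A quick sanity check that apt-cacher answers and maps the path as configured above (not strictly necessary, but cheap):

curl -I http://192.168.0.8:3142/ubuntu/dists/trusty/Release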

The vm build script, using the local cache:

#!/bin/sh

if [ $# -lt 2 ]; then
    echo "usage: HOSTNAME IP"
    exit 1
else
    HOSTNAME=$1
    IP=$2
fi

firstboot=`mktemp`
echo "#!/bin/bash" >>$firstboot
echo "rm -f /etc/resolvconf/resolv.conf.d/{original,base,head,tail}" >>$firstboot
echo "reboot" >>$firstboot
chmod +x $firstboot

qemu-img create -f qcow2 -o preallocation=falloc $HOME/vms/$HOSTNAME-rootdisk 4096M

vmbuilder kvm ubuntu \
  --suite trusty \
  --verbose \
  --libvirt qemu:///system \
  --destdir $HOME/vms/$HOSTNAME/ \
  --install-mirror http://192.168.0.8:3142/ubuntu \
  --mirror http://192.168.0.8:3142/ubuntu \
  --raw $HOME/vms/$HOSTNAME-rootdisk \
  --rootsize 4096 \
  --swapsize 0 \
  --mem 128 \
  --cpus 1 \
  --hostname $HOSTNAME \
  --bridge br0 \
  --ip $IP\
  --mask 255.255.255.0 \
  --gw 192.168.0.1 \
  --dns 192.168.0.8 \
  --lang en_US.UTF-8 \
  --timezone UTC \
  --user ubuntu \
  --name Ubuntu \
  --pass ubuntu \
  --ssh-user-key $HOME/.ssh/id_rsa.pub \
  --addpkg linux-image-generic \
  --addpkg openssh-server \
  --addpkg sudo \
  --firstboot $firstboot

rm $firstboot
rmdir $HOME/vms/$HOSTNAME/

virsh autostart $HOSTNAME
virsh start $HOSTNAME
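
Assuming the script is saved as, say, build-vm.sh (the name is arbitrary), building and logging into a new machine goes roughly like this:

./build-vm.sh testserver 192.168.0.100
ssh ubuntu@192.168.0.100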

Edit: To inspect and edit the virtual machine disk image, use guestfish, part of the libguestfs project:

$ sudo apt-get install libguestfs-tools
$ guestfish --rw --add testserver-rootdisk

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

><fs> run
100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:00
><fs> list-filesystems
/dev/sda1: ext4
><fs> mount /dev/sda1 /
><fs> emacs /etc/resolv.conf
><fs> exit

Edit: Disk image pre-allocation can be done with QEMU-provided tools, which may or may not be a more kosher approach. Must investigate.

qemu-img create -f qcow2 -o preallocation=falloc $HOME/vms/$HOSTNAME-rootdisk 32768M

Edit: Fix nameserver enforcement in build script.

]]>
OH2EWL http://www.async.fi/2016/01/oh2ewl/ Sun, 17 Jan 2016 14:45:00 +0000 Joni Kähärä http://www.async.fi/2016/01/oh2ewl/ It's been a slow year and a half on this blog, but I'm finally a licensed amateur radio operator. The call is OH2EWL.

]]>
Salt Notes http://www.async.fi/2014/06/salt-notes/ Tue, 03 Jun 2014 18:00:00 +0000 Joni Kähärä http://www.async.fi/2014/06/salt-notes/ I decided to go for Salt when picking a solution that would help me automate server management. Here are some things that required some figuring out.

Including keys in pillar data

Using Git as an example; the deploy key is set in the GitHub repo's settings:

sites:
  example.com:
    gitsource: git+ssh://git@github.com/you/your_repo.git
    gitidentity: |
      -----BEGIN RSA PRIVATE KEY-----
      <Deploy key goes here – mind the indentation!>
      -----END RSA PRIVATE KEY-----
    

Using the above in states:

{% if 'gitsource' in args and 'gitidentity' in args %}
/etc/deploy-keys/{{ site }}:
  file.directory:
    - makedirs: True
    - require:
      - pkg: nginx
    - watch_in:
      - service: nginx

/etc/deploy-keys/{{ site }}/identity:
  file.managed:
    - mode: 600
    - contents_pillar: sites:{{ site }}:gitidentity
    - require:
      - pkg: nginx
    - watch_in:
      - service: nginx

{{ args.gitsource }}:
  git.latest:
    - identity: /etc/deploy-keys/{{ site }}/identity
    - target: /var/www/{{ site }}
    - rev: master
    - force: True
    - require:
      - pkg: nginx
    - watch_in:
      - service: nginx
{% endif %}
    

Swap

Using a swap file here because DigitalOcean instances, at least the small ones that I've tested, don't include any swap.

/swapfile:
  cmd.run:
    - name: "fallocate -l 1024M /swapfile && chmod 600 /swapfile && mkswap /swapfile"
    - unless: test -f /swapfile
  mount.swap:
    - require:
      - cmd: /swapfile
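
A quick way to verify the result on the minions, using Salt's cmd.run from the master (just a sanity check):

salt '*' cmd.run 'swapon -s && free -m'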
    

Logentries

The "agent" of the excellent Logentries log gathering service doesn't use a config file, and instead relies on the le tool that is used to set thing up. After config changes, the Logentries daemon must be restarted (that last restart part can likely be streamlined but I couldn't get a hard service restart to work otherwise).

logentries:
  pkgrepo.managed:
    - name: deb http://rep.logentries.com/ trusty main
    - dist: trusty
    - file: /etc/apt/sources.list.d/logentries.list
    - keyid: C43C79AD
    - keyserver: pgp.mit.edu
  pkg:
    - latest

logentries_registered:
  cmd.run:
    - unless: le whoami
    - name: le register --force --account-key={{ pillar['logentries']['account_key'] }} --hostname={{ grains.id }} --name={{ grains.id }}-`date +'%Y-%m-%dT%H:%M:%S'`
    - require:
      - pkg: logentries
    - require_in:
      - pkg: logentries-daemon

logentries_follow:
  cmd.run:
    - name: |
        le follow /var/log/syslog
        le follow /var/log/auth.log
        le follow /var/log/salt/minion
{% for site, args in pillar.get('sites', {}).items() %}
        le follow /var/log/nginx/{{ site }}.access.log
        le follow /var/log/nginx/{{ site }}.error.log
{% endfor %}
    - require:
      - pkg: logentries
    - require_in:
      - pkg: logentries-daemon

logentries-daemon:
  pkg:
    - latest

logentries_daemon_stop:
  service.dead:
    - name: logentries
    - require:
      - pkg: logentries-daemon
    - require_in:
      - logentries_daemon_start

logentries_daemon_start:
  service.running:
    - name: logentries
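
With the pillar data and states in place, everything gets applied from the master in the usual way:

salt '*' state.highstate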
    
]]>
Django and Docker: a Marriage Made in Heaven http://www.async.fi/2014/01/django-and-docker-a-marriage-made-in-heaven/ Fri, 03 Jan 2014 13:45:00 +0000 Joni Kähärä http://www.async.fi/2014/01/django-and-docker-a-marriage-made-in-heaven/ Not especially Django-specific, but good info nevertheless:

]]>
Basic MRTG config http://www.async.fi/2013/12/basic-mrtg-config/ Tue, 31 Dec 2013 09:30:00 +0000 Joni Kähärä http://www.async.fi/2013/12/basic-mrtg-config/ It's a mess, but making things work took some effort, and a lot of copypasting from numerous sources, so I'm recording this here.

/etc/mrtg.cfg:

WorkDir: /var/www/mrtg
WriteExpires: Yes
Language: english
Title[^]: Traffic Analysis for Yourserver
Options[_]: growright, bits

LoadMIBs: /usr/share/mibs/netsnmp/UCD-SNMP-MIB, /usr/share/mibs/ietf/TCP-MIB, /usr/share/mibs/ietf/UDP-MIB, /usr/share/mibs/ietf/HOST-RESOURCES-MIB

Target[localhost.cpu]:ssCpuRawUser.0&ssCpuRawUser.0:yoursnmpcommunity@localhost+ ssCpuRawSystem.0&ssCpuRawSystem.0:yoursnmpcommunity@localhost+ ssCpuRawNice.0&ssCpuRawNice.0:yoursnmpcommunity@localhost
RouterUptime[localhost.cpu]: yoursnmpcommunity@localhost
MaxBytes[localhost.cpu]: 100
Title[localhost.cpu]:  CPU Load
PageTop[localhost.cpu]: <H1> Active CPU Load %</H1>
Unscaled[localhost.cpu]: ymwd
ShortLegend[localhost.cpu]: %
YLegend[localhost.cpu]: %
Legend1[localhost.cpu]: %
Legend2[localhost.cpu]:
Legend3[localhost.cpu]:
Legend4[localhost.cpu]:
LegendI[localhost.cpu]: Active
LegendO[localhost.cpu]:
Options[localhost.cpu]: growright,nopercent

Target[localhost.mem]: .1.3.6.1.4.1.2021.4.6.0&.1.3.6.1.4.1.2021.4.6.0:yoursnmpcommunity@localhost
PageTop[localhost.mem]: <H1> Free Memory</H1>
Options[localhost.mem]: nopercent,growright,gauge,noo
Title[localhost.mem]:  Free Memory
MaxBytes[localhost.mem]: 1000000
kMG[localhost.mem]: k,M,G,T,P,X
YLegend[localhost.mem]: bytes
ShortLegend[localhost.mem]: bytes
LegendI[localhost.mem]: Free Memory:
LegendO[localhost.mem]:
Legend1[localhost.mem]: Free memory, not including swap, in bytes

Target[localhost.swap]: memAvailSwap.0&memAvailSwap.0:yoursnmpcommunity@localhost
PageTop[localhost.swap]: <H1> Swap Available</H1>
Options[localhost.swap]: growright,gauge,nopercent
Title[localhost.swap]:  Swap Available
MaxBytes[localhost.swap]: 1073741824
kMG[localhost.swap]: k,M,G,T,P,X
YLegend[localhost.swap]: bytes
ShortLegend[localhost.swap]: bytes
LegendI[localhost.swap]: Swap Available:
LegendO[localhost.swap]:
Legend1[localhost.swap]: Swap memory available

Target[localhost.rootdisk]: dskPercent.1&dskPercent.1:yoursnmpcommunity@localhost
PageTop[localhost.rootdisk]: <H1> Used Root Disk</H1>
Title[localhost.rootdisk]:  Used Root Disk
MaxBytes[localhost.rootdisk]: 100
Options[localhost.rootdisk]: nopercent,growright,gauge
Unscaled[localhost.rootdisk]: ymwd
ShortLegend[localhost.rootdisk]: %
YLegend[localhost.rootdisk]: %

Target[localhost.eth0]: /95.85.11.102:yoursnmpcommunity@localhost
MaxBytes[localhost.eth0]: 12500000
Title[localhost.eth0]:  eth0
PageTop[localhost.eth0]: <h1> eth0</h1>
ShortLegend[localhost.eth0]: b/s
YLegend[localhost.eth0]: b/s

Target[localhost.eth1]: /10.129.5.83:yoursnmpcommunity@localhost
MaxBytes[localhost.eth1]: 12500000
Title[localhost.eth1]:  eth1
PageTop[localhost.eth1]: <h1> eth1</h1>
ShortLegend[localhost.eth1]: b/s
YLegend[localhost.eth1]: b/s

Target[localhost.udpin]: udpInDatagrams.0&udpInDatagrams.0:yoursnmpcommunity@localhost
PageTop[localhost.udpin]: <H1> Incoming UDP pkts per minute</H1>
Title[localhost.udpin]:  Incoming UDP pkts per minute
MaxBytes[localhost.udpin]: 1000000
ShortLegend[localhost.udpin]: p/m
YLegend[localhost.udpin]: p/m
LegendI[localhost.udpin]: Incoming
LegendO[localhost.udpin]:
Options[localhost.udpin]: nopercent,growright,perminute

Target[localhost.udpout]: udpOutDatagrams.0&udpOutDatagrams.0:yoursnmpcommunity@localhost
PageTop[localhost.udpout]: <H1> Outgoing UDP pkts per minute</H1>
Title[localhost.udpout]:  Outgoing UDP pkts per minute
MaxBytes[localhost.udpout]: 1000000
ShortLegend[localhost.udpout]: p/m
YLegend[localhost.udpout]: p/m
LegendI[localhost.udpout]:
LegendO[localhost.udpout]: Outgoing
Options[localhost.udpout]: nopercent,growright,perminute

Target[localhost.tcpconns]: tcpCurrEstab.0&tcpCurrEstab.0:yoursnmpcommunity@localhost
Title[localhost.tcpconns]:  TCP Connections
PageTop[localhost.tcpconns]: <H1> TCP Connections</H1>
MaxBytes[localhost.tcpconns]: 10000000000
ShortLegend[localhost.tcpconns]: conns
YLegend[localhost.tcpconns]: conns
LegendI[localhost.tcpconns]: Incoming
LegendO[localhost.tcpconns]: Outgoing
Legend1[localhost.tcpconns]: Established incoming connections
Legend2[localhost.tcpconns]: Established outgoing connections
Options[localhost.tcpconns]: nopercent,gauge,growright

Target[localhost.tcpnewconns]: tcpPassiveOpens.0&tcpActiveOpens.0:yoursnmpcommunity@localhost
Title[localhost.tcpnewconns]:  New TCP Connections
PageTop[localhost.tcpnewconns]: <h1> New TCP Connections / minute</h1>
MaxBytes[localhost.tcpnewconns]: 1000000000
ShortLegend[localhost.tcpnewconns]: conns/min
YLegend[localhost.tcpnewconns]: conns/min
LegendI[localhost.tcpnewconns]: Incoming
LegendO[localhost.tcpnewconns]: Outgoing
Legend1[localhost.tcpnewconns]: New inbound connections
Legend2[localhost.tcpnewconns]: New outbound connections
Options[localhost.tcpnewconns]: growright,nopercent,perminute
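
To actually produce graphs, mrtg has to run periodically (on Debian the package normally installs a cron job for this, but running it by hand works for testing), and indexmaker generates a summary page:

env LANG=C /usr/bin/mrtg /etc/mrtg.cfg
indexmaker /etc/mrtg.cfg > /var/www/mrtg/index.html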
    
]]>
Running a Django app with Gunicorn and Upstart http://www.async.fi/2013/12/running-a-django-app-with-gunicorn-and-upstart/ Tue, 31 Dec 2013 07:55:00 +0000 Joni Kähärä http://www.async.fi/2013/12/running-a-django-app-with-gunicorn-and-upstart/ Skimmed from here, I mostly just changed the last line that starts Gunicorn from using gunicorn_django to plain gunicorn, as recommended by the Django overlords.

/etc/init/yourapp.conf:

description "Your App"
author "Your Name "

start on (net-device-up and local-filesystems)
stop on shutdown
respawn

script
    export HOME="/root/of/django/app" # i.e. where "manage.py" can be found
    export PATH="$PATH:/root/of/django/app/env/bin" # "env" is our virtualenv
    export DJANGO_SETTINGS_MODULE="settings"
    export LANG="en_US.UTF-8"
    cd $HOME
    exec $HOME/env/bin/gunicorn -b 127.0.0.1:8000 -w 1 --log-file /var/log/gunicorn/yourapp.log app
end script
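
Once the file is in place, the job can be controlled with Upstart directly, and a quick check that the app answers on the configured port looks like this:

sudo start yourapp
sudo status yourapp
curl -I http://127.0.0.1:8000/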

app.py:

import os, sys

sys.path.insert(0, '/root/of/django/app/')
path = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if path not in sys.path:
    sys.path.append(path)

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
]]>
Basic iptables setup (Ubuntu) http://www.async.fi/2013/12/basic-iptables-setup-ubuntu/ Mon, 30 Dec 2013 18:20:00 +0000 Joni Kähärä http://www.async.fi/2013/12/basic-iptables-setup-ubuntu/ Accept anything coming in from 127.0.0.1:

iptables -A INPUT -i lo -j ACCEPT
    

Accept "related" ("packet is starting a new connection, but is associated with an existing connection") and "established" ("packet is associated with a connection which has seen packets in both directions") packets:

iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    

SSH; set the port (XXXXX) to 22 if you're running the default, which you perhaps should not do as the script kiddies will not leave you alone. If this is changed to something non-default, do not forget to change the port in /etc/ssh/sshd_config (the Port configuration directive) and make both changes together. Otherwise you will be locked out.

iptables -A INPUT -p tcp -m tcp --dport XXXXX -j ACCEPT
    

HTTP and HTTPS:

iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
    

NTP, because we're part of the NTP Pool Project:

iptables -A INPUT -p udp -m udp --dport 123 -j ACCEPT
    

Log dropped packets, but not too much:

iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables DROP: " --log-level 7
    

Do the actual dropping:

iptables -A INPUT -j DROP
    

In order to save these settings on shutdown, create the following file in /etc/network/if-post-down.d/iptables:

#!/bin/sh
iptables-save -c > /etc/iptables.rules
exit 0
    

To restore the settings on boot, create the following file in /etc/network/if-pre-up.d/iptables:

#!/bin/sh
iptables-restore < /etc/iptables.rules
exit 0
    

And make the two executable:

chmod +x /etc/network/if-post-down.d/iptables /etc/network/if-pre-up.d/iptables
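
The resulting ruleset can be reviewed with the following (the counters and line numbers help when debugging rule order):

iptables -L INPUT -v -n --line-numbers
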
    
]]>
Gone DigitalOcean http://www.async.fi/2013/12/gone-digitalocean/ Sun, 29 Dec 2013 20:45:00 +0000 Joni Kähärä http://www.async.fi/2013/12/gone-digitalocean/ In order to try something new (and save money) I switched my personal "utility" server from EC2 to DigitalOcean. As I was running just a t1.micro instance anyway, I selected DigitalOcean's cheapest offering, which costs $5/month (512MB/20GB(SSD)/1TB). Plus $1/month for regular backups (they somehow back up the whole server while it is running, so I'm not sure how consistent said backup can possibly be, but at least it's cheap).

DigitalOcean offers just servers, and the whole experience isn't, of course, quite as polished as with Amazon. Nor can you, say, access S3 (or other services) at quite the same speed as from within Amazon's network. But for my use case it's a good fit.

]]>
Autossh with Ubuntu Upstart http://www.async.fi/2013/07/autossh-with-ubuntu-upstart/ Wed, 31 Jul 2013 18:30:00 +0000 Joni Kähärä http://www.async.fi/2013/07/autossh-with-ubuntu-upstart/ Like the title says, the point of this post is getting autossh up when Ubuntu boots. Place this in e.g. /etc/init/autossh.conf, after which you will be able to say things like sudo start autossh and sudo stop autossh.

    description "autossh tunnel"
    author "Joni Kähärä "
	
    start on (local-filesystems and net-device-up IFACE=eth0 and net-device-up IFACE=eth1) # assuming we have multiple interfaces
    stop on runlevel [016]
	
    respawn
    respawn limit 5 60
	
    exec autossh -M 0 -N -R 10000:192.168.1.1:22 -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -o "StrictHostKeyChecking=no" -o "BatchMode=yes" -i /home/user/.ssh/id_rsa username@hostname
    

The “start on” line ensures that autossh won’t start before all network interfaces are up, and “stop on” will stop autossh (if it’s running) on runlevels 0, 1 and 6 (halt, single user and reboot, respectively). The “respawn limit” line ensures that if autossh respawns more than 5 times within 60 seconds, it will not be started again. Note that the plain “respawn” line is still needed to actually respawn the process. Of the command line options only the first one (-M 0) is for autossh; the rest are regular ssh options.

  • -M 0 denotes that autossh should not set up its own monitoring channel and should instead rely on ssh terminating itself when it decides that the connection’s been lost (see the ServerAlive* options below).
  • -N means “Do not execute a remote command”, i.e. just set up the connection and port forward.
  • -R 10000:192.168.1.1:22 means that we want TCP port 10000 on the remote host forwarded to port 22 on local host (192.168.1.1).
  • -o "ServerAliveInterval 60" send “keepalive” messages every 60 seconds
  • -o "ServerAliveCountMax 3" terminate ssh if three consecutive ServerAliveInterval inquiries fail (and thus respawn)
  • -o "StrictHostKeyChecking=no" don’t fail if remote server’s identity changed
  • -o "BatchMode=yes" don’t attempt to use a passphrase if public key login fails
  • -i /home/user/.ssh/id_rsa the private key we’ll use for the tunnel
  • username@hostname connect to this host with this username
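
With the -R option above, the tunnel can then be tested from the remote end; port 10000 there leads back to the local machine's sshd (the username is whatever account exists on the local machine):

ssh -p 10000 someuser@localhost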

]]>
Amazon DynamoDB basics with Boto http://www.async.fi/2013/04/amazon-dynamodb/ Fri, 26 Apr 2013 10:00:00 +0000 Joni Kähärä http://www.async.fi/2013/04/amazon-dynamodb/ The code assumes that Boto credentials have been set up.

      import boto.dynamodb
      from boto.dynamodb.condition import *
      
      connection =  boto.dynamodb.connect_to_region('eu-west-1')
      table = connection.get_table('table')
      
      id = '1'
      timestamp = 1234
      attrs = {
        'key1': 'value1',
        'key2': set(['value2', 'value3'])
      }
      
      # create
      item = table.new_item(hash_key=id, range_key=timestamp, attrs=attrs)
      item.put()
      
      # read
      item = table.get_item(hash_key=id)      
      key2 = list(item['key2'])
      
      # update
      item['key1'] = 'foo'
      item['key3'] = 'bar'
      item.put()
      
      # query
      table.query(hash_key=id, range_key_condition=LT(1500))
      
      # scan
      table.scan(scan_filter={'key1': EQ('foo')})
      
      # delete
      item = table.get_item(hash_key=id)
      item.delete()
      
    
]]>
Serving per IAM user S3 data http://www.async.fi/2012/11/serving-per-iam-user-s3-data/ Wed, 21 Nov 2012 20:00:00 +0000 Joni Kähärä http://www.async.fi/2012/11/serving-per-iam-user-s3-data/ This isn't meant for the public-facing web, but for a closed environment where it is necessary that each client is individually addressable (common application code, individual data). Each client has a local web server plus locally stored AWS credentials, and can therefore be fed content specific to that client. The bootstrap script is minimalistic by design, with as few moving parts as possible.

AWS credentials file (init.json below):

    init({
      "region": "eu-west-1",
      "common_bucket": "loadres",
      "private_bucket": "697ad820240c48929dce15c25cee8591",
      "access_key": "AKIAILZCSDJEFUN3L53Q",
      "secret_key": "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"
    })
    

index.html:

    <!DOCTYPE html>
    <html>
      <head>
        <meta charset="utf-8">
        <title>loadres</title>
        <script src="sha1.js"></script> <!-- https://github.com/lmorchard/S3Ajax/blob/master/js/sha1.js -->
        <script>

          // the authenticated S3 URL maker function, without STS specific parts:
          // http://www.async.fi/2012/07/s3-query-string-authentication-and-aws-security-token-service/
          var s3url = function(region, bucket, key, access_key, secret_key) {
            var expires = Math.floor(((new Date()).getTime()/1000) + 3600);
            var string_to_sign = [
              'GET\n\n\n',
              expires, '\n',
              '/', bucket, '/', key
            ].join('');
            var signature = b64_hmac_sha1(secret_key, string_to_sign) + '=';
            var url = 'https://s3-' + region + '.amazonaws.com/' + bucket + '/' + key
              + '?AWSAccessKeyId=' + encodeURIComponent(access_key)
              + '&Signature=' + encodeURIComponent(signature)
              + '&Expires=' + expires;
            return url;
          };

          var init = function(settings) {
            var head = document.getElementsByTagName('head')[0];

            // inject prod.css
            var css = document.createElement('link');
            css.setAttribute('rel', 'stylesheet');
            css.setAttribute('href', s3url(settings['region'], settings['common_bucket'], 'prod.css', settings['access_key'], settings['secret_key']));
            head.appendChild(css);

            // inject prod.js
            var js = document.createElement('script');
            js.setAttribute('src', s3url(settings['region'], settings['common_bucket'], 'prod.js', settings['access_key'], settings['secret_key']));
            head.appendChild(js);
          }
        </script>

        <!-- load AWS region and bucket info, plus credentials; this script calls init() (above) -->
        <script src="init.json"></script>
      </head>
      <body></body>
    </html>
    

Now in the loaded prod.js file we would bring in the application code that would fetch data specific to this client (a little repetition here):

    var expires = Math.floor(((new Date()).getTime()/1000) + 3600);
    var string_to_sign = [
      'GET\n\n\n',
      expires, '\n',
      '/', settings['private_bucket'], '/', 'data.txt'
    ].join('');
    var signature = b64_hmac_sha1(settings['secret_key'], string_to_sign) + '=';
    var url = '/' + settings['private_bucket'] + '/' + 'data.txt'
      + '?AWSAccessKeyId=' + encodeURIComponent(settings['access_key'])
      + '&Signature=' + encodeURIComponent(signature)
      + '&Expires=' + expires;

    var r = new XMLHttpRequest();
    r.open('GET', url, true);
    r.onreadystatechange = function () {
      if(r.readyState != 4 || r.status != 200) return;
      alert("Success: " + r.responseText);
    };
    r.send();
    

To make this work without CORS, we're using a local proxy to handle S3 requests. In Nginx config:

    location /697ad820240c48929dce15c25cee8591 {
      rewrite  ^//697ad820240c48929dce15c25cee8591/(.*)$ /$1 break;
      proxy_pass https://s3-eu-west-1.amazonaws.com/697ad820240c48929dce15c25cee8591;
    }
    
]]>
Berlin/August 2012 http://www.async.fi/2012/08/berlin-august-2012/ Tue, 28 Aug 2012 21:00:00 +0000 Joni Kähärä http://www.async.fi/2012/08/berlin-august-2012/ Here are some bits of information that I think might be useful for others travelling to this destination.

Fonic prepaid

I arranged, beforehand, two Fonic Prepaid SIMs from this guy. Please note that while during the initial activation at fonic.de you can enter any German address in your details (I gave our rented apartment's address), they will send a letter to that address, and if the letter returns as non-deliverable (as happened in our case) the prepaid will be put in the “nur erreichbar” (reachable-only) state. This means that you can receive calls and SMS, but cannot place calls, send SMS or activate any of the flat rate Internet options. After contacting Fonic support, they asked for the original PIN numbers printed on the SIM packaging, and also a copy (scan) of my Finnish passport. After this both SIMs were usable once again and I was able to activate the “Tagesflat” Internet option while in Germany.

In addition to top-up vouchers, transferring money to your prepaid account is possible with IBAN bank transfer; see instructions here.

7-Tage-Karte

These 7-day public transport tickets can be purchased from machines in S-Bahn/U-Bahn stations and at bus/tram stops, or from a Spätkauf (late-night shop). As far as I know, the machines only accept cash. We got these for all zones (i.e. ABC, which covers Berlin and some neighbouring towns), which cost a little over 30 euros per week.

Berlin public transportation works excellently and we took a cab only occasionally (cab rides were reasonably priced too, though).

Öffi Directions

This Android app will, given two stations (or addresses), present you with a selection of best routes using any mix of the above mentioned means of transportation, often in combinations you wouldn’t have thought of by just looking at the route maps and time tables. Can not be recommended enough.

Places to visit

While in no way an exhaustive list, here are some places we went to and that I find worth mentioning.

Teufelsberg

The now abandoned joint NSA/GCHQ listening station. There was some dude claiming to represent the current owner at the gate collecting a five euro admission per person and requiring our signatures in order to avoid any legal hassles should we hurt ourselves during our exploration.

There were people in the actual radomes but we just wandered around the site as climbing didn’t seem like the thing to do at the time. There is a staircase inside the main building leading up but the door leading to that staircase was locked. The key to that lock is supposedly held by the people arranging guided tours of the site.

Fernsehturm

You should book tickets in advance; this way you're guaranteed the next free window table at the restaurant above the observation deck (we were there on a Saturday night at around eight, the place was pretty packed and we had to wait for the table for around 15 minutes). The restaurant is expensive-ish but ok, though the food is nothing to write home about. But then again people come here for the view. Old-school, professional waitresses and live music.

Sightseeing flight with Manuel

I spotted this activity from Gidsy. Our pilot Manuel flew us from the Bienenfarm field on a small Cessna plane. Good photos.

Berliner Unterwelten

These tours of underground Berlin are arranged by a non-profit organisation. We took the “Cold War” tour, which took us first to a big war-era bomb shelter near Gesundbrunnen station and then to a modern facility at Pankstraße station. Both facilities are of course decommissioned (there wouldn't be tours if they weren't), but the guide (a Brit) gave a rather lively presentation about what life in the facilities would have been like had they ever been put to use, and about the effects of nuclear war in general.

Türkenmarkt

Tuesdays and Fridays. Looking at the prices, you could easily stock a week's supply of vegetables here for ten euros. At the south-east end of the market there is a fresh pasta and gnocchi vendor with an excellent product (same vendor at Hackescher Markt on Saturdays).

]]>
S3 query string authentication and AWS Security Token Service http://www.async.fi/2012/07/s3-query-string-authentication-and-aws-security-token-service/ Tue, 24 Jul 2012 20:00:00 +0000 Joni Kähärä http://www.async.fi/2012/07/s3-query-string-authentication-and-aws-security-token-service/ Getting this right took some tweaking, so:

// http://docs.amazonwebservices.com/AmazonS3/latest/dev/RESTAuthentication.html#RESTAuthenticationQueryStringAuth
// http://docs.amazonwebservices.com/STS/latest/APIReference/Welcome.html

var access_key = '…', secret_key = '…', session_token = '…';

var expires = Math.floor(((new Date()).getTime()/1000) + 3600);
var string_to_sign = [
    'GET\n\n\n',
    expires, '\n',
    'x-amz-security-token:', session_token, '\n',
    '/', bucket, '/', key
].join('');

// https://github.com/lmorchard/S3Ajax/blob/master/js/sha1.js
var signature = b64_hmac_sha1(secret_key, string_to_sign) + '=';

var url = key
    + '?AWSAccessKeyId=' + encodeURIComponent(access_key)
    + '&Signature=' + encodeURIComponent(signature)
    + '&Expires=' + expires
    + '&x-amz-security-token=' + encodeURIComponent(session_token);
    
]]>
Python/HTTP Basic Authentication http://www.async.fi/2012/06/python-http-basic-authentication/ Mon, 18 Jun 2012 00:00:00 +0000 Joni Kähärä http://www.async.fi/2012/06/python-http-basic-authentication/

import urllib, httplib2, base64

http = httplib2.Http()
auth = base64.encodestring(username + ':' + password)
response, content = http.request(url, 'GET', headers={ 'Authorization' : 'Basic ' + auth })

]]>
Debian/Nginx http://www.async.fi/2012/05/debian-nginx/ Sun, 20 May 2012 17:00:00 +0000 Joni Kähärä http://www.async.fi/2012/05/debian-nginx/

# wget http://nginx.org/keys/nginx_signing.key
# apt-key add nginx_signing.key
# echo -ne "# http://wiki.nginx.org/Install\ndeb http://nginx.org/packages/debian/ squeeze nginx\ndeb-src http://nginx.org/packages/debian/ squeeze nginx\n" >>/etc/apt/sources.list
# apt-get update
# apt-get install nginx

]]>
Debugging pyergometer http://www.async.fi/2012/04/debugging-pyergometer/ Sat, 14 Apr 2012 12:00:00 +0000 Joni Kähärä http://www.async.fi/2012/04/debugging-pyergometer/ pyergometer

See also: ergometria.async.fi

]]>
Kettler ergometer serial protocol http://www.async.fi/2012/03/kettler-ergometer-serial-protocol/ Sat, 31 Mar 2012 06:55:00 +0000 Joni Kähärä http://www.async.fi/2012/03/kettler-ergometer-serial-protocol/ I'm having some issues getting JErgometer to work with my new Kettler E3, so to rule out the possibility of miswiring (the bike has a traditional RS-232 interface, so I got a USB-RS232-WE-5000-BT_0.0 cable from FTDI, which comes "wire-ended" and requires some soldering) I dug out the "Kettler Treadmill Serial lua class" from mhwlng (thanks!). This, plus my own brute-force testing, got me the following list of commands relevant in the bike's context, which may be only a subset of the commands that the bike accepts but which could be used to build one's own implementation:

  • RS: Reset device
  • ID: Ergometer computer model info ("SF1B1706")
  • VE: Ergometer computer firmware version ("117")
  • ST: Request status; the reply is of the form: pulse rpm speed*10 distance requested_power energy mm:ss actual_power
  • CM: Enter command mode; required before calling the P-commands below
  • PW x: Request power of x watts
  • PT mmss: Request time of mmss
  • PD x: Request x/10 km distance

Notes: I only connected RX, TX and signal ground (looks like the bike does not use any kind of handshaking). Serial port settings are 9600 bps, 8N1.
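
For poking at the protocol by hand, something like the following works as a rough sketch (the device name and the CR/LF command termination are assumptions here; a terminal program like screen or picocom does the same job interactively):

stty -F /dev/ttyUSB0 9600 cs8 -cstopb -parenb raw
printf 'ST\r\n' > /dev/ttyUSB0
cat /dev/ttyUSB0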

]]>
Gone static http://www.async.fi/2012/03/gone-static/ Sat, 03 Mar 2012 10:00:00 +0000 Joni Kähärä http://www.async.fi/2012/03/gone-static/ Update 2: Looks like page load times, at least as reported by Pingdom, went up from what they initially were. In my own testing cached pages still load in something like 150 to 250 milliseconds but Pingdom disagrees. I don't know if this is regular CloudFront performance fluctuation, some kind of impedance mismatch between Pingdom and CloudFront or "something else".

Update: That really did the trick and the estimated -90% page load time wasn't that far off:

Having been fed up with the wastefulness (resource-wise) and general slowness of the MySQL/PHP/WordPress/CloudFlare setup for some time, I have now moved this site to S3/CloudFront. The site is generated from an XML file (which I derived from a WordPress export dump) with a Python script that is hosted here. Commenting is obviously impossible, but if you for some reason need to contact me you'll find contact details on your left.

]]>
Daily MySQL database dump, back up to S3 http://www.async.fi/2012/02/daily-mysql-database-dump-back-up-to-s3/ Sat, 11 Feb 2012 09:24:15 +0000 Joni Kähärä http://www.async.fi/2012/02/daily-mysql-database-dump-back-up-to-s3/ I'm in the process of, or planning at least, ditching MySQL/WordPress/CloudFlare and moving to a static site hosted on S3/CloudFront. At the moment, as AWS Route 53 does not support S3 or CloudFront as an Alias Target, moving to S3/CloudFront means that I have to have an A record pointing to a web server somewhere, which in turn redirects the request to the actual site's CloudFront CNAME. I do have such a server (running Nginx), but the same thing could just as well be achieved by using a service such as Arecord.net. This redirect means that there's no way to run the site without the www. prefix. Which I can live with. Also, at the moment, no SSL support is available, but I'm sure I can live with that too, as WordPress is simply slow and, most of all, a big waste of resources. Getting rid of all the dynamic parts (seriously, it's not like there are a lot of commenters around here) will make this thing run fast, at least compared to what page load times currently are. My tests show that CloudFront returns cached pages in less than 200 ms.

So, I'm killing one extra server in the near future and putting these snippets here for my own possible future use.

~/.my.cnf:

[client]
user = usename
password = password
host = hostname

[mysql]
database = dbname

<dir>/wp-db-backup.sh:

#!/bin/sh

DBFILE="<dir>/dbname-`/bin/date +%s`.gz"

/usr/bin/mysqldump --quick dbname | /bin/gzip -c >$DBFILE
/usr/bin/s3cmd put $DBFILE s3://bucketname/
/bin/rm $DBFILE

crontab:

45 3 * * * /usr/bin/nice -n 20 <dir>/wp-db-backup.sh >/dev/null 2>&1
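
Restoring from one of the dumps is the usual gunzip-into-mysql affair, and s3cmd can be used to check that the uploads actually land in the bucket (the file name below is a placeholder):

s3cmd ls s3://bucketname/
s3cmd get s3://bucketname/dbname-<timestamp>.gz
gunzip -c dbname-<timestamp>.gz | mysql dbname
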
 ]]>
Mikrotik OpenVPN Server http://www.async.fi/2012/01/mikrotik-openvpn-server/ Sun, 29 Jan 2012 10:55:52 +0000 Joni Kähärä http://www.async.fi/2012/01/mikrotik-openvpn-server/ The purpose of this post is to describe, step by step, my attempt to set up an OpenVPN server on a Mikrotik RouterBOARD 750 and create a working tunnel from an outside machine (AWS EC2 Windows Server 2008 R2) to this OpenVPN server so that an SMB server on the local network can be accessed from said outside machine. The following diagram gives an overview of the setup:

I am going to describe how to:

  • generate certificates to be used with OpenVPN
  • set up OpenVPN server on Mikrotik router
  • set up a tunnel with OpenVPN client on Windows
I am not going to describe the following:
  • setting up and connecting to an EC2 Windows instance
  • setting up a Samba Server
A few things worth mentioning about Mikrotik OpenVPN server implementation (that will likely bite if not known in advance):
  • only supports TCP mode, UDP is not supported
  • username/password pair is also required even though certificates are being used for authentication

Generate certificates to be used with OpenVPN

root@inhouse-debian:~# apt-get install openvpn
root@inhouse-debian:~# mkdir ovpn-cert
root@inhouse-debian:~# cd ovpn-cert/
root@inhouse-debian:~/ovpn-cert# cp -r /usr/share/doc/openvpn/examples/easy-rsa/2.0/* .
root@inhouse-debian:~/ovpn-cert# emacs vars
In the file vars I set the following values:
export KEY_COUNTRY="FI"
export KEY_PROVINCE="Etela-Suomi"
export KEY_CITY="Kotka"
export KEY_ORG="Async.fi"
export KEY_EMAIL="joni.kahara@async.fi"
export KEY_CN="kahara.dyndns.org"
export KEY_NAME="kahara.dyndns.org"
export KEY_OU="kahara.dyndns.org"
If I have understood correctly, of these only CN (Common Name) is obligatory. I may be wrong. Anyway, continuing:
root@inhouse-debian:~/ovpn-cert# source vars
root@inhouse-debian:~/ovpn-cert# ./clean-all
root@inhouse-debian:~/ovpn-cert# ./build-ca
root@inhouse-debian:~/ovpn-cert# ./build-key-server kahara.dyndns.org
root@inhouse-debian:~/ovpn-cert# openssl rsa -in keys/kahara.dyndns.org.key -out keys/kahara.dyndns.org.pem
root@inhouse-debian:~/ovpn-cert# ./build-key ec2 
root@inhouse-debian:~/ovpn-cert# apt-get install ncftp
root@inhouse-debian:~/ovpn-cert# ncftpput -u admin 192.168.1.1 / keys/kahara.dyndns.org.crt keys/kahara.dyndns.org.pem keys/ca.crt

Set up OpenVPN server on Mikrotik router

All the stuff here can also be done through Mikrotik's admin interface; the textual form without screenshots is used just to keep things terse.
root@inhouse-debian:~/ovpn-cert# ssh admin@192.168.1.1
[admin@MikroTik] > /certificate
[admin@MikroTik] /certificate> import file=kahara.dyndns.org.crt
[admin@MikroTik] /certificate> import file=kahara.dyndns.org.pem
[admin@MikroTik] /certificate> import file=ca.crt
[admin@MikroTik] /certificate> decrypt
[admin@MikroTik] /certificate> ..
[admin@MikroTik] > /interface bridge add name=ovpn-bridge
[admin@MikroTik] > /interface bridge port add interface=ether2-master-local bridge=ovpn-bridge
[admin@MikroTik] > /ip address add address=192.168.1.64/24 interface=ovpn-bridge 
[admin@MikroTik] > /ip pool add name=ovpn-pool ranges=192.168.1.65-192.168.1.99
[admin@MikroTik] > /ppp profile add bridge=ovpn-bridge name=ovpn-profile remote-address=ovpn-pool
[admin@MikroTik] > /ppp secret add service=ovpn local-address=192.168.1.64 name=user1 password=pass1 profile=ovpn-profile
[admin@MikroTik] > /interface ovpn-server server set auth=sha1,md5 certificate=cert1 cipher=blowfish128,aes128,aes192,aes256 default-profile=ovpn-profile enabled=yes keepalive-timeout=disabled max-mtu=1500 mode=ethernet netmask=24 port=1194 require-client-certificate=yes
[admin@MikroTik] > /ip firewall filter add action=accept chain=input disabled=no protocol=tcp dst-port=1194
[admin@MikroTik] > /ip firewall filter move 5 destination=1
That last step moves the new rule to the front of the chain; the numbers ("5", "1") will likely be something else in your configuration. The firewall rule listing can be printed with the following command:
[admin@MikroTik] > /ip firewall filter print
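
Once a client has connected, the active session can be verified on the router, for example:

[admin@MikroTik] > /ppp active print
[admin@MikroTik] > /interface ovpn-server print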

Set up a tunnel with the OpenVPN client on Windows

After installing OpenVPN, create a config file for it. Here it's called "kahara.dyndns.org.ovpn":
client
dev tap
proto tcp
remote kahara.dyndns.org 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert ec2.crt
key ec2.key
verb 3
pull
auth-user-pass userpass.txt
Also, create a file called "userpass.txt" and put the following in it:
user1
pass1
Of course in an IRL situation one should use a real password. Make sure you have copied the .crt and .key files over to the Windows machine, after which you can run the OpenVPN client with:
PS C:\Users\Administrator\Desktop> openvpn.exe .\kahara.dyndns.org.ovpn
And here we have an EC2 client connected to a local SMB resource over the tunnel: ]]>
Note to Self: More on South http://www.async.fi/2011/12/note-to-self-more-on-south/ Mon, 12 Dec 2011 09:25:47 +0000 Joni Kähärä http://www.async.fi/2011/12/note-to-self-more-on-south/ Setup being along the dev → test → prod lines, to correctly manage database migrations we first set things up at dev:

manage.py syncdb --noinput
manage.py convert_to_south <app>
manage.py createsuperuser

At this point the South migrations are being pushed to repository and pulled in at test:

manage.py syncdb --noinput
manage.py migrate
manage.py migrate <app> 0001 --fake
manage.py createsuperuser

Now, back at dev, after a change to one of the models:

manage.py schemamigration <app> --auto
manage.py migrate <app>
And, after push/pull, at test:
manage.py migrate <app>
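
Should a change need to be undone, South can also migrate backwards to a given point; here 0001 refers to the initial migration created above:

manage.py migrate <app> 0001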
]]>
Handling Amazon SNS notifications with a Tastypie Resource http://www.async.fi/2011/10/handling-amazon-sns-notifications-with-a-tastypie-resource/ Sun, 23 Oct 2011 12:49:30 +0000 Joni Kähärä http://www.async.fi/2011/10/handling-amazon-sns-notifications-with-a-tastypie-resource/ Using Django and Tastypie, we automagically respond to SNS subscription requests. After that part is handled, the notification messages start coming in, and those are used to trigger an SQS polling cycle (trying to do a thorough job there, which may seem like overkill but it's not). A received SQS message is parsed and its contents are passed to an external program that forks and exits, which keeps the request from blocking.

from django.conf import settings
from tastypie import fields, http
from tastypie.resources import Resource
from tastypie.bundle import Bundle
from tastypie.authentication import Authentication
from tastypie.authorization import Authorization
from tastypie.throttle import BaseThrottle
import boto.sqs
from boto.sqs.message import Message
from urlparse import urlparse
import base64, httplib, tempfile, subprocess, time, json, os, sys, syslog

# http://django-tastypie.readthedocs.org/en/latest/non_orm_data_sources.html
class NotificationObject(object):
    def __init__(self, initial=None):
        self.__dict__['_data'] = {}
        if hasattr(initial, 'items'):
            self.__dict__['_data'] = initial
    def __getattr__(self, name):
        return self._data.get(name, None)
    def __setattr__(self, name, value):
        self.__dict__['_data'][name] = value

class NotificationResource(Resource):
    sns_messageid = fields.CharField(attribute='MessageId')
    sns_timestamp = fields.CharField(attribute='Timestamp')
    sns_topicarn = fields.CharField(attribute='TopicArn')
    sns_type = fields.CharField(attribute='Type')
    sns_unsubscribeurl = fields.CharField(attribute='UnsubscribeURL')
    sns_subscribeurl = fields.CharField(attribute='SubscribeURL')
    sns_token = fields.CharField(attribute='Token')
    sns_message = fields.CharField(attribute='Message')
    sns_subject = fields.CharField(attribute='Subject')
    sns_signature = fields.CharField(attribute='Signature')
    sns_signatureversion = fields.CharField(attribute='SignatureVersion')
    sns_signingcerturl = fields.CharField(attribute='SigningCertURL')

    class Meta:
        resource_name = 'notification'
        object_class = NotificationObject
        fields = ['sns_messageid']
        list_allowed_methods = ['post']
        authentication = Authentication()
        authorization = Authorization()

    def get_resource_uri(self, bundle_or_obj):
        return ''

    def obj_create(self, bundle, request=None, **kwargs):

        bundle.obj = NotificationObject(initial={ 'MessageId': '', 'Timestamp': '', 'TopicArn': '', 'Type': '', 'UnsubscribeURL': '', 'SubscribeURL': '', 'Token': '', 'Message': '', 'Subject': '', 'Signature': '', 'SignatureVersion': '', 'SigningCertURL': '' })
        bundle = self.full_hydrate(bundle)

        o = urlparse(bundle.data['SigningCertURL'])
        if not o.hostname.endswith('.amazonaws.com'):
            return bundle

        topicarn = bundle.data['TopicArn']

        if topicarn != settings.SNS_TOPIC:
            return bundle

        if not self.verify_message(bundle):
            return bundle

        if bundle.data['Type'] == 'SubscriptionConfirmation':
            self.process_subscription(bundle)
        elif bundle.data['Type'] == 'Notification':
            self.process_notification(bundle)

        return bundle

    def process_subscription(self, bundle):
        syslog.syslog('SNS Subscription ' + bundle.data['SubscribeURL'])
        o = urlparse(bundle.data['SubscribeURL'])
        conn = httplib.HTTPSConnection(o.hostname)
        conn.putrequest('GET', o.path + '?' + o.query)
        conn.endheaders()
        response = conn.getresponse()
        subscription = response.read()

    def process_notification(self, bundle):
        sqs = boto.sqs.connect_to_region(settings.SQS_REGION)
        queue = sqs.lookup(settings.SQS_QUEUE)
        retries = 5
        done = False
        while True:
            if retries < 1:
                break
            retries -= 1
            time.sleep(5)
            messages = queue.get_messages(10, visibility_timeout=60)
            if len(messages) < 1:
                continue
            for message in messages:
                try:
                    m = json.loads(message.get_body())
                    m['return_sns_region'] = settings.SNS_REGION
                    m['return_sns_topic'] = settings.SNS_TOPIC
                    m['return_sqs_region'] = settings.SQS_REGION
                    m['return_sqs_queue'] = settings.SQS_QUEUE
                    process = subprocess.Popen(['/usr/bin/nice', '-n', '15', os.path.dirname(os.path.normpath(os.sys.modules[settings.SETTINGS_MODULE].__file__)) + '/process.py', base64.b64encode(json.dumps(m))], shell=False)
                    process.wait()
                except:
                    e = sys.exc_info()[1]
                    syslog.syslog(str(e))
                queue.delete_message(message)

    def verify_message(self, bundle):
        message = u''
        if bundle.data['Type'] == 'SubscriptionConfirmation':
            message += 'Message\n'
            message += bundle.data['Message'] + '\n'
            message += 'MessageId\n'
            message += bundle.data['MessageId'] + '\n'
            message += 'SubscribeURL\n'
            message += bundle.data['SubscribeURL'] + '\n'
            message += 'Timestamp\n'
            message += bundle.data['Timestamp'] + '\n'
            message += 'Token\n'
            message += bundle.data['Token'] + '\n'
            message += 'TopicArn\n'
            message += bundle.data['TopicArn'] + '\n'
            message += 'Type\n'
            message += bundle.data['Type'] + '\n'
        elif bundle.data['Type'] == 'Notification':
            message += 'Message\n'
            message += bundle.data['Message'] + '\n'
            message += 'MessageId\n'
            message += bundle.data['MessageId'] + '\n'
            if bundle.data['Subject'] != '':
                message += 'Subject\n'
                message += bundle.data['Subject'] + '\n'
            message += 'Timestamp\n'
            message += bundle.data['Timestamp'] + '\n'
            message += 'TopicArn\n'
            message += bundle.data['TopicArn'] + '\n'
            message += 'Type\n'
            message += bundle.data['Type'] + '\n'
        else:
            return False

        o = urlparse(bundle.data['SigningCertURL'])
        conn = httplib.HTTPSConnection(o.hostname)
        conn.putrequest('GET', o.path)
        conn.endheaders()
        response = conn.getresponse()
        cert = response.read()

        # ok; attempt to use m2crypto failed, using openssl command line tool instead

        file_cert = tempfile.NamedTemporaryFile(mode='w', delete=False)
        file_sig = tempfile.NamedTemporaryFile(mode='w', delete=False)
        file_mess = tempfile.NamedTemporaryFile(mode='w', delete=False)

        file_cert.write(cert)
        file_sig.write(bundle.data['Signature'])
        file_mess.write(message)

        file_cert.close()
        file_sig.close()
        file_mess.close()

        # see: https://async.fi/2011/10/sns-verify-sh/
        verify_process = subprocess.Popen(['/usr/local/bin/sns-verify.sh', file_cert.name, file_sig.name, file_mess.name], shell=False)
        verify_process.wait()

        if verify_process.returncode == 0:
            return True

        return False

That process.py would be something like:

#!/usr/bin/env python

import boto.sqs
from boto.sqs.message import Message
import base64, json, os, sys, syslog

if len(sys.argv) != 2:
    sys.exit('usage: %s <base64 encoded json object>' % (sys.argv[0], ))

m = json.loads(base64.b64decode(sys.argv[1]))

# http://code.activestate.com/recipes/66012-fork-a-daemon-process-on-unix/
try:
    pid = os.fork()
    if pid > 0:
        sys.exit(0)
except OSError, e:
    print >>sys.stderr, "fork #1 failed: %d (%s)" % (e.errno, e.strerror)
    sys.exit(1)

os.chdir("/")
os.setsid()
os.umask(0)

try:
    pid = os.fork()
    if pid > 0:
        sys.exit(0)
except OSError, e:
    sys.exit(1)

syslog.syslog(sys.argv[0] + ': ' + str(m))

# ...

That is, process.py gets the received (and augmented) SQS message, Base64 encoded, as its only command-line argument, forks, exits, and does what it's supposed to do after that on its own. Control returns to NotificationResource so the request doesn't block unnecessarily.
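
For reference, subscribing the HTTPS endpoint to the topic can be done for example with the AWS CLI; the endpoint URL below is only an assumption and depends on how the resource is wired into urls.py (Tastypie's default URL layout is assumed):

aws sns subscribe \
    --topic-arn arn:aws:sns:eu-west-1:123456789012:your-topic \
    --protocol https \
    --notification-endpoint https://example.com/api/v1/notification/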

]]>
sns-verify.sh http://www.async.fi/2011/10/sns-verify-sh/ Sat, 22 Oct 2011 12:23:40 +0000 Joni Kähärä http://www.async.fi/2011/10/sns-verify-sh/

#!/bin/sh

if [ $# -lt 3 ]; then
    echo "usage: sns-verify.sh CERT SIG MESS"
    exit 1
fi

CERT=$1
SIG=$2
MESS=$3

PUB=`/bin/tempfile`
SIGRAW=`/bin/tempfile`

# http://sns-public-resources.s3.amazonaws.com/SNS_Message_Signing_Release_Note_Jan_25_2011.pdf
/usr/bin/openssl x509 -in $CERT -pubkey -noout > $PUB
/usr/bin/base64 -i -d $SIG > $SIGRAW

RET=`/usr/bin/openssl dgst -sha1 -verify $PUB -signature $SIGRAW $MESS`

if [ X"$RET" = X"Verified OK" ]; then
    exit 0
fi

exit 1

]]>
Gone CloudFlare http://www.async.fi/2011/10/gone-cloudflare/ Fri, 21 Oct 2011 17:59:18 +0000 Joni Kähärä http://www.async.fi/2011/10/gone-cloudflare/

Enabled CloudFlare on this site, with nearly every optimization thing they offer. So far it's looking good, with an empty (browser) cache it takes a moment to load initial resources but after that subsequent page loads are near-instantaneous (click around to try this out). To get SSL properly working you have to get a Pro account. Recommended!

Update: Getting fishy numbers with Pingdom (over 2000 ms), although page load times from my own machines are ok (around 500 ms or so). Investigating…

]]>
S3 client-upload parameter generation with Python http://www.async.fi/2011/10/s3-client-upload-parameter-generation-with-python/ Sun, 16 Oct 2011 12:50:44 +0000 Joni Kähärä http://www.async.fi/2011/10/s3-client-upload-parameter-generation-with-python/

# http://aws.amazon.com/articles/1434
def S3UploadParams(bucket_name, object_name, expiry, maxsize, redirect):
    import os, boto, json, base64, hmac, hashlib
    from time import time, gmtime, strftime

    def SignS3Upload(policy_document):
        policy = base64.b64encode(policy_document)
        return base64.b64encode(hmac.new(
            boto.config.get('Credentials', 'aws_secret_access_key'),
            policy,
            hashlib.sha1
        ).digest())

    def GenerateS3PolicyString(bucket_name, object_name, expiry, maxsize, redirect):
        policy_template = '{ "expiration": "%s", "conditions": [ {"bucket": "%s"}, ["eq", "$key", "%s"], {"acl": "private"}, {"success_action_redirect": "%s"}, ["content-length-range", 0, %s] ] }'
        return policy_template % (
            strftime("%Y-%m-%dT%H:%M:%SZ", gmtime(time() + expiry)),
            bucket_name,
            object_name,
            redirect,
            maxsize
        )

    params = {
        'key': object_name,
        'AWSAccessKeyId': boto.config.get('Credentials', 'aws_access_key_id'),
        'acl': 'private',
        'success_action_redirect': redirect,
    }

    policy = GenerateS3PolicyString(bucket_name, object_name, expiry, maxsize, redirect)
    params['policy'] = base64.b64encode(policy)

    signature = SignS3Upload(policy)
    params['signature'] = signature

    return params

]]>
KinectVision.com code http://www.async.fi/2011/08/kinectvision-com-code/ Wed, 31 Aug 2011 19:49:42 +0000 Joni Kähärä http://www.async.fi/2011/08/kinectvision-com-code/ This is old, from December 2010 it seems, but it's here in case the machine goes titsup. Quick, dirty and ugly but it works most of the time. First, the capture program:

#include <libusb-1.0/libusb.h>
#include "libfreenect.h"
#include "libfreenect_sync.h"
#include <stdio.h>
#include <stdlib.h>

/*
  No error checking performed whatsoever; dealing with it later (or not).
 */
int main(int argc, char** argv)
{
  uint16_t * depth = (uint16_t *)malloc(FREENECT_DEPTH_11BIT_SIZE);
  uint32_t timestamp;
  int index = 0;
  freenect_depth_format fmt = FREENECT_DEPTH_11BIT;

  uint8_t * depth8 = (uint8_t *)malloc(FREENECT_FRAME_PIX);
  int i;

  /* Capture one Kinect depth frame */
  freenect_sync_get_depth(&depth, &timestamp, index, fmt);

  /* Convert captured frame to an 8-bit greyscale image */
  for(i = 0; i < FREENECT_FRAME_PIX; i++) {
    depth8[i] = (2048 * 256) / (2048.0 - depth[i]);
  }

  /* Write raw greyscale image to stdout  */
  fwrite(depth8, FREENECT_FRAME_PIX, 1, stdout);

  return 0;
}

Makefile:

all:		capkinect

clean:
		rm -f capkinect.o capkinect

capkinect.o:	capkinect.c
	gcc -g -I/usr/local/include/libfreenect/ -c capkinect.c -o capkinect.o

capkinect:	capkinect.o
	gcc -g capkinect.o -L/usr/local/lib/ -lfreenect_sync -o capkinect

Uploader:

#!/bin/sh

INPUT=`mktemp`
AVG=`mktemp`
TEMP=`mktemp`
OUTPUT=`mktemp --directory`

#COLORMAP="black-#45931c"
COLORMAP="black-white"

# initial average frame
capkinect | rawtopgm 640 480 | pnmcut 8 8 624 464 | pgmtoppm $COLORMAP >$AVG

while [ true ]; do

    #echo "input: $INPUT avg: $AVG temp: $TEMP output: $OUTPUT colormap: $COLORMAP"

    capkinect | rawtopgm 640 480 | pnmcut 8 8 624 464 | pgmtoppm $COLORMAP >$INPUT

    FILENAME=$OUTPUT/`date +%s.%N`

    ppmmix 0.035 $AVG $INPUT >$FILENAME.ppm

    cp $FILENAME.ppm $AVG

    cat $FILENAME.ppm | cjpeg -greyscale -quality 65 >$FILENAME.jpg

    echo "user=XXXX:AAAA" | curl --digest -K - -F "file=@$FILENAME.jpg" http://kinectvision.com/depth

    rm $FILENAME.ppm $FILENAME.jpg

    sleep 1

done

Server end script that inputs and outputs frames:

$latest_path = $_SERVER["DOCUMENT_ROOT"] . "/incoming/latest";

if($_SERVER["REQUEST_METHOD"] == "POST") {

  if(!isset($_FILES["file"]["name"])) {
    exit();
  }
  if(move_uploaded_file($_FILES["file"]["tmp_name"], $_SERVER["DOCUMENT_ROOT"] . "/incoming/" . $_FILES["file"]["name"])) {
    file_put_contents($latest_path, $_FILES["file"]["name"]);
  }

} elseif($_SERVER["REQUEST_METHOD"] == "HEAD") {

  $latest = file_get_contents($latest_path);
  header("X-KinectVision-Latest: " . $latest);

} elseif($_SERVER["REQUEST_METHOD"] == "GET") {

  $latest = $_SERVER["DOCUMENT_ROOT"] . "/incoming/" . file_get_contents($latest_path);
  header("Content-Type: image/jpeg");
  header("X-KinectVision-Latest: " . $latest);

  if(isset($_GET["width"]) && intval($_GET["width"]) < 624) {
    $width = intval($_GET["width"]);
    $f = popen("djpeg -pnm -fast -greyscale $latest | pnmscalefixed -width=$width | cjpeg -greyscale -quality 65", "r");
    while(!feof($f)) {
      echo fread($f, 1024);
    }
    fclose($f);
  } else {
    echo file_get_contents($latest);
  }

}

(Don't know if it's the Suffusion theme or what that kills all the newlines from these listings. They're there, I can assure you, they're just not visible.)

]]>
Debian SSD tips http://www.async.fi/2011/08/debian-ssd-tips/ Fri, 26 Aug 2011 10:26:50 +0000 Joni Kähärä http://www.async.fi/2011/08/debian-ssd-tips/
  • Do not use swap; this may be overly cautious these days as the drives have fancy wear-leveling schemes and whatnot implemented but still, if you're not tight on memory then it should not hurt. And if memory is an issue then in order to avoid performance problems perhaps you should upgrade it in the first place.
  • Do use the "noop" I/O scheduler, i.e.
  • apt-get install grub
  • add GRUB_CMDLINE_LINUX="elevator=noop" to /etc/default/grub
  • update-grub
  • after boot, /sys/block/sda/queue/scheduler should read "[noop] anticipatory deadline cfq"
  • That last part about the scheduler ensures that the default disk I/O scheduling, which rearranges reads and writes to boost IOPS for traditional cylindrical platters and is therefore just bad for SSD performance, is not used. With the "noop" scheduler, reads and writes happen in order. A quick way to check which scheduler is active is sketched below.
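
  A small sketch (in Python, not part of the original tips) for checking which scheduler is active; it simply picks the bracketed entry from the sysfs file mentioned above:

    def active_scheduler(device="sda"):
        # The active scheduler is the one shown in brackets, e.g. "[noop]".
        with open("/sys/block/%s/queue/scheduler" % device) as f:
            for word in f.read().split():
                if word.startswith("["):
                    return word.strip("[]")

    print(active_scheduler())  # should print "noop" after the grub change above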
    ]]>
    Site optimizations http://www.async.fi/2011/08/site-optimizations/ Mon, 15 Aug 2011 17:49:36 +0000 Joni Kähärä http://www.async.fi/2011/08/site-optimizations/

    Performance-wise, setting up Amazon CloudFront ("Custom Origin") in addition to WP Minify and WP Super Cache improved site response times a lot. Offloading static content to Amazon not only made those offloaded files load faster (because of Amazon's faster tubes) but also reduced stress on our feeble-ish server on page load, so that the document itself is returned faster. Load time is also more repeatable. Good stuff!

    Note that CloudFront makes HTTP/1.0 requests and Apache may take some convincing in order to make it Gzip the 1.0 response.
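
    To see whether the origin actually gzips an HTTP/1.0 request like the ones CloudFront makes, a quick check can be done with plain sockets. This is just a sketch (the host name is a placeholder), not part of the original post:

    import socket

    def gzips_http10(host, path="/"):
        # Make a bare HTTP/1.0 request that advertises gzip support and look
        # for a Content-Encoding header in the response headers.
        s = socket.create_connection((host, 80))
        s.sendall("GET %s HTTP/1.0\r\nHost: %s\r\nAccept-Encoding: gzip\r\n\r\n" % (path, host))
        response = ""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            response += chunk
        s.close()
        headers = response.split("\r\n\r\n", 1)[0]
        return "Content-Encoding: gzip" in headers

    print(gzips_http10("www.example.com"))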

    ]]>
    Working ntp.conf for the Pool http://www.async.fi/2011/08/working-ntp-conf-for-the-pool/ Sun, 07 Aug 2011 18:36:46 +0000 Joni Kähärä http://www.async.fi/2011/08/working-ntp-conf-for-the-pool/

    driftfile /var/lib/ntp/ntp.drift

    statsdir /var/log/ntpstats/
    statistics loopstats peerstats clockstats
    filegen loopstats file loopstats type day enable
    filegen peerstats file peerstats type day enable
    filegen clockstats file clockstats type day enable

    # From http://support.ntp.org/bin/view/Servers/StratumTwoTimeServers
    # each of these ISO: FR, Notify?: No
    server ntp1.kamino.fr iburst
    server ntp1.doowan.net iburst
    server ntp.duckcorp.org iburst
    server itsuki.fkraiem.org iburst
    server time.zeroloop.net iburst

    restrict 127.0.0.1
    restrict ::1
    restrict default kod notrap nomodify nopeer noquery

    Pool stats for the servers can be found here.

    ]]>
    Backbone.js automagic syncing of Collections and Models http://www.async.fi/2011/07/backbone-js-automagic-syncing-of-collections-and-models/ Sat, 23 Jul 2011 19:21:21 +0000 Joni Kähärä http://www.async.fi/2011/07/backbone-js-automagic-syncing-of-collections-and-models/ The idea here is to periodically fetch() and keep the client and server Collections in sync in such a way that the consuming View(s) only get updated when a Model is added to or removed from a Collection, or the attributes of one of the Models in it change. What's done is:

    • Compare the existing Collection to the incoming set and remove every Model from the existing Collection that is not in the incoming set.
    • Compare the incoming set to the existing Collection and add every Model from the incoming set to the existing Collection that is not already there.
    • Compare each Model in the two sets and update the ones in the existing Collection to the ones in the incoming set that are different.

    The first two steps compare the Model's resource_uris and the last part is done with SHA1 hashes.

    window.Model = Backbone.Model.extend({
        urlRoot: BASE_API + 'model/',
        defaults: {
            foo: ''
        },
        initialize: function() {
            console.log('Model->initialize()', this);
            this.bind('change', function(model) {
                console.log('Model->change()', model);
            });
        }
    });
    window.ModelCollection = Backbone.Collection.extend({
        model: Model,
        url: BASE_API + 'model/',
        initialize: function() {
            console.log('ModelCollection->initialize()', this);
            this.bind('add', function(model) {
                console.log('ModelCollection->add()', model);
            });
            this.bind('remove', function(model) {
                console.log('ModelCollection->remove()', model);
            });
        }
    });
    
    window.Root = Backbone.Model.extend({
        urlRoot: BASE_API + 'root/',
        defaults: {
            models: new ModelCollection()
        },
        parse: function(data) {
            var attrs = data && data.objects && ( _.isArray( data.objects ) ? data.objects[ 0 ] : data.objects ) || data;
            var model = this;
            incoming_model_uris = _.map(attrs.models, function(model) {
                return model.resource_uri;
            });
            existing_model_uris = this.get('models').map(function(model) {
                return model.get('resource_uri');
            });
            _.each(existing_model_uris, function(uri) {
                if(incoming_model_uris.indexOf(uri) == -1) {
                    model.get('models').remove(model.get('models').get(uri));
                }
            });
            _.each(incoming_model_uris, function(uri) {
                if(existing_model_uris.indexOf(uri) == -1) {
                    model.get('models').add(_.detect( attrs.models, function(model) { return model.resource_uri == uri; }));
                }
            });
            _.each(attrs.models, function(incoming) {
                var existing = model.get('models').get(incoming.resource_uri);
                if(Sha1.hash(JSON.stringify(incoming)) != Sha1.hash(JSON.stringify(existing))) {
                    existing.set(incoming);
                }
            });
    
            delete attrs.models;        
    
            return attrs;
        },
        initialize: function() {
            _.bindAll(this, 'parse');
            this.fetch();
        }
    });
    

    Update 3: looking at this afterwards, I'm not sure of the complete watertightness of the above. Perhaps the Backbone Poller project would be a better approach.

    Update 2: In order to avoid borking on an empty response, do:

    var attrs = Backbone.Model.prototype.parse.apply(this, data);
    if(!attrs) return;
    

    Update: My Javascript-Fu is weak which made me not see the obvious. As suggested in Backbone.js documentation, you can call the parent's implementation like this:

    Backbone.Model.prototype.method.apply(this, args);
    

    So, instead of unnecessarily copypasting behavior from Backbone-tastypie.js, we can say:

    var attrs = Backbone.Model.prototype.parse.apply(this, data);
    

    …and still have Backbone-tastypie.js do its parsing thing for us.

    Ugh.

    ]]>
    Note to self: Django/South basic usage http://www.async.fi/2011/07/note-to-self-djangosouth-basic-usage/ Sat, 16 Jul 2011 08:12:22 +0000 Joni Kähärä http://www.async.fi/2011/07/note-to-self-djangosouth-basic-usage/
    ./manage.py syncdb --noinput
    ./manage.py schemamigration YOUR_APP --initial
    ./manage.py migrate YOUR_APP 0001 --fake
    Then, after fiddling with your models:
    ./manage.py schemamigration YOUR_APP --auto
    ./manage.py migrate YOUR_APP
    
    That should do it. But like I said, my understanding of the system is very limited and I'm sure there are cases when this simplistic pattern just won't cut it. My needs, however, at the moment are not the most complicated.]]>
    Note to self: Django's Error: cannot import name Foo http://www.async.fi/2011/07/note-to-self-djangos-error-cannot-import-name-foo/ Fri, 15 Jul 2011 09:56:24 +0000 Joni Kähärä http://www.async.fi/2011/07/note-to-self-djangos-error-cannot-import-name-foo/ name. This is known as a lazy relationship:
    foo = models.ForeignKey('Foo')
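
    As a sketch of why the lazy form helps (the app and model names here are hypothetical, not from the original post): with a string reference, the other model class never has to be imported at module load time, so circular imports between apps stop being a problem.

    # otherapp/models.py defines Foo; this module never imports it directly.
    from django.db import models

    class Bar(models.Model):
        # Django resolves 'otherapp.Foo' only after all models have been loaded.
        foo = models.ForeignKey('otherapp.Foo')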
    ]]>
    Emacs and UTF-8 Encoding http://www.async.fi/2011/06/emacs-and-utf-8-encoding/ Thu, 30 Jun 2011 05:29:40 +0000 Joni Kähärä http://www.async.fi/2011/06/emacs-and-utf-8-encoding/ http://blog.jonnay.net/archives/820-Emacs-and-UTF-8-Encoding.html:
    ;;;;;;;;;;;;;;;;;;;;
    ;; set up unicode
    (prefer-coding-system       'utf-8)
    (set-default-coding-systems 'utf-8)
    (set-terminal-coding-system 'utf-8)
    (set-keyboard-coding-system 'utf-8)
    ;; This from a japanese individual.  I hope it works.
    (setq default-buffer-file-coding-system 'utf-8)
    ;; From Emacs wiki
    (setq x-select-request-type '(UTF8_STRING COMPOUND_TEXT TEXT STRING))
    ;; MS Windows clipboard is UTF-16LE
    (set-clipboard-coding-system 'utf-16le-dos)
    Also, add this to the beginning of your source files when working with Python (otherwise you'll get "SyntaxError: Non-ASCII character '\xc3' in file…" etc. errors):
    # -*- coding: utf-8 -*-
    ]]>
    autofont.py http://www.async.fi/2011/06/autofont-py/ Tue, 21 Jun 2011 16:05:39 +0000 Joni Kähärä http://www.async.fi/2011/06/autofont-py/ Interpolated Lookup Tables in Python. The idea here is to generate tables like this so that text scales smoothly based on viewport width:
    @media screen and (max-width: 480px) { body { font-size: 0.500000em; } }
    @media screen and (min-width: 481px) and (max-width: 720px) { body { font-size: 0.750em; } }
    @media screen and (min-width: 721px) and (max-width: 960px) { body { font-size: 1.000em; } }
    @media screen and (min-width: 961px) and (max-width: 1200px) { body { font-size: 1.250em; } }
    @media screen and (min-width: 1201px) and (max-width: 1440px) { body { font-size: 1.500em; } }
    @media screen and (min-width: 1441px) and (max-width: 1680px) { body { font-size: 1.750em; } }
    @media screen and (min-width: 1921px) { body { font-size: 2.000000em; } }
    
    Here's the script:
    import sys
    
    if len(sys.argv) < 7:
        print 'usage: %s LOWEST_RESOLUTION HIGHEST_RESOLUTION SMALLEST_FONTSIZE LARGEST_FONTSIZE STEPS FONT_UNIT' % (sys.argv[0])
        sys.exit(1)
    
    lowest_resolution = int(sys.argv[1])
    highest_resolution = int(sys.argv[2])
    smallest_fontsize = float(sys.argv[3])
    largest_fontsize = float(sys.argv[4])
    steps = int(sys.argv[5])-1
    font_unit = str(sys.argv[6])
    
    resolutions = InterpolatedArray(((1, lowest_resolution), (steps, highest_resolution)))
    fontsizes = InterpolatedArray(((1, smallest_fontsize), (steps, largest_fontsize)))
    
    print '@media screen and (max-width: %dpx) { body { font-size: %f%s; } }' % (lowest_resolution, smallest_fontsize, font_unit)
    
    for i in range(2, steps+1):
        print '@media screen and (min-width: %dpx) and (max-width: %dpx) { body { font-size: %.3f%s; } }' % (resolutions[i-1], resolutions[i], fontsizes[i], font_unit)
    
    print '@media screen and (min-width: %dpx) { body { font-size: %f%s; } }' % (highest_resolution, largest_fontsize, font_unit)
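
    The script relies on the InterpolatedArray class from the recipe linked above, which isn't included in the listing. A minimal stand-in (my sketch, assuming plain linear interpolation between the given (x, y) points, clamped at the ends) would look like this:

    class InterpolatedArray(object):
        def __init__(self, points):
            self.points = sorted(points)

        def __getitem__(self, x):
            # Clamp to the end points, otherwise interpolate linearly.
            if x <= self.points[0][0]:
                return self.points[0][1]
            if x >= self.points[-1][0]:
                return self.points[-1][1]
            for (x0, y0), (x1, y1) in zip(self.points, self.points[1:]):
                if x0 <= x <= x1:
                    return y0 + (y1 - y0) * (x - x0) / float(x1 - x0)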

    Note: to make this work in Internet Explorer versions < 9, include the css3-mediaqueries.js script by Wouter van der Graaf.

    ]]>
    Reading EC2 tags with Boto http://www.async.fi/2011/05/reading-ec2-tags-with-boto/ Sun, 29 May 2011 14:33:04 +0000 Joni Kähärä http://www.async.fi/2011/05/reading-ec2-tags-with-boto/ (Ouch! Looks like the WordPress update to 3.1.3 wiped all the modifications I made to the default theme. Admittedly I should've seen that coming.) What I want to do is basically attach a key-value pair to an EC2 instance when launching it in the AWS Management Console and read the value inside the instance when it's running. To be more specific, I use this to set a key called environment that can have values like dev, stage and prod so that the Django config can decide which database to connect to etc. while starting up. I suspect that in Boto the current instance can somehow be referenced in a more direct fashion but this works as well. First, append the following to /etc/profile:
    # See: http://stackoverflow.com/questions/625644/find-out-the-instance-id-from-within-an-ec2-machine
    export EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id || die \"wget instance-id has failed: $?\"`"
    test -n "$EC2_INSTANCE_ID" || die 'cannot obtain instance-id'
    export EC2_AVAIL_ZONE="`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone || die \"wget availability-zone has failed: $?\"`"
    test -n "$EC2_AVAIL_ZONE" || die 'cannot obtain availability-zone'
    export EC2_REGION="`echo \"$EC2_AVAIL_ZONE\" | sed -e 's:\\([0-9][0-9]*\\)[a-z]*\\$:\\\\1:'`"
    Now we know the region and instance ID. Next, install Boto by running the following commands:
    wget "http://boto.googlecode.com/files/boto-2.0b4.tar.gz"
    zcat boto-2.0b4.tar.gz | tar xfv -
    cd boto-2.0b4
    python ./setup.py install
    Then, add these lines to ~/.profile:
    export AWS_ACCESS_KEY_ID=<ACCESS_KEY>
    export AWS_SECRET_ACCESS_KEY=<SECRET_KEY>
    Or the equivalent in ~/.boto:
    [Credentials]
    aws_access_key_id = <ACCESS_KEY>
    aws_secret_access_key = <SECRET_KEY>
    Now, to read the tag we want in Python:
    #!/usr/bin/env python                                                                                                                                           
    
    import os
    from boto import ec2
    
    ec2_instance_id = os.environ.get('EC2_INSTANCE_ID')
    ec2_region = os.environ.get('EC2_REGION')
    
    conn = ec2.connect_to_region(ec2_region)
    
    reservations = conn.get_all_instances()
    instances = [i for r in reservations for i in r.instances]
    
    for instance in instances:
        if instance.__dict__['id'] == ec2_instance_id:
            print instance.__dict__['tags']['environment']
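
    A slightly more direct variant (a sketch, not part of the original post) relies on get_all_instances() accepting an instance_ids argument, so that only the current instance's reservation is returned:

    reservations = conn.get_all_instances(instance_ids=[ec2_instance_id])
    instance = reservations[0].instances[0]
    print instance.tags.get('environment')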
    ]]>
    Verifying Amazon SNS Messages with PHP http://www.async.fi/2011/05/verifying-amazon-sns-messages-with-php/ Sat, 14 May 2011 14:21:01 +0000 Joni Kähärä http://www.async.fi/2011/05/verifying-amazon-sns-messages-with-php/ Amazon Simple Notification Service are signed, and checking that any received message is indeed from AWS and not from some douche trying to outsmart you is not very hard (nor should it be optional, for that matter): sns-verify.php The verify_sns() function expects the message in JSON format, plus region (e.g. "eu-west-1"), numerical account ID without dashes and an array containing the topics you're interested in. The code will verify both SubscriptionConfirmation and Notification messages. It loads the certificate from the address in SigningCertURL field to check against for each message separately because the certificate changes over time, as described here. It is also checked that the host where the certificate is loaded from is in the amazonaws.com domain. Example usage where subscriptions are automatically confirmed:
    require_once('sns-verify.php');
    
    if($_SERVER['REQUEST_METHOD'] != 'POST') {
        logger('Not POST request, quitting');
        exit();
    }
    
    $post = file_get_contents('php://input');
    
    if(!verify_sns($post, 'REGION', 'ACCOUNT', array('TOPIC 1', 'TOPIC 2'))) {
        exit;
    }
    
    $msg = json_decode($post);
    
    
    if($msg->Type == 'SubscriptionConfirmation') {
        logger('SNS SubscriptionConfirmation received');
        file_get_contents($msg->SubscribeURL);
    } elseif($msg->Type == 'Notification') {
        logger('SNS Notification received');
        process_message($msg);
    }
    
    ]]>
    Windows Server 2008 R2 on Amazon EC2 http://www.async.fi/2011/05/windows-server-2008-r2-on-amazon-ec2/ Sun, 08 May 2011 18:26:23 +0000 Joni Kähärä http://www.async.fi/2011/05/windows-server-2008-r2-on-amazon-ec2/ The plan to use an in-house box to run a XenServer to host XP instances (I need multiple Windows desktops for "testing" purposes if anyone asks) had to be scrapped because the box was simply too loud and I couldn't get the wireless bridge to work – not that the latter would have helped anyway because like I said the box really is loud and relocating it anywhere inside our flat just wouldn't lower the noise level enough for it to not disturb sleep. Which brings us here: launching a Windows Server 2008 R2 instance on Amazon EC2 and setting up Remote Desktop Services to enable multiple simultaneous client sessions. Below we can see Alice, Bob, Charlie and Dave each happily running their own Remote Desktop session at the same time: The whole thing runs "tolerably" smoothly even on the severely memory-limited Micro Instance: At $0.035 per hour this can be considered cheap. And, the server can be shut down when it's not needed, in which case the only charge will be for the admittedly humongous (35 gigabytes) Windows root partition. And of course those clients would need Client Access Licenses, which adds a one-time cost of roughly $100 per client. Now, to directly compare this kind of setup with having an actual physical server would indicate poor judgement as both have their strong and weak points, but costs can be compared. So here we have an estimate of what the total cost of running a server like this for a three-year period would be, sans CALs:
    On-Demand EC2 Reserved EC2 (1-year Contract) Reserved EC2 (3-year Contract)
    One-time costs $0.00 $54.00 $82.00
    Compute $922.32 $421.56 $421.56
    Storage (35 GB) $138.60 $138.60 $138.60
    I/O (10 IOPS) $103.00 $103.00 $103.00
    Transfer In (1 GB/m) $3.60 $3.60 $3.60
    Transfer Out (10 GB/m) $48.60 $48.60 $48.60
    Total Cost (Euros) 849.69 € 613.00 € 557.11 €
    Per Month (Euros) 23.60 € 17.03 € 15.48 €
    Source: http://calculator.s3.amazonaws.com/calc5.html Then again, that 600 € would get you two HP Proliant MicroServers. On the other hand, that price does not include Windows licenses, and the servers would need a physical location, electricity, an Internet connection – and so on.]]>
    Building a WDS Bridge with Consumer Grade WLAN APs http://www.async.fi/2011/05/building-a-wds-bridge-with-consumer-grade-wlan-aps/ Sun, 08 May 2011 10:09:45 +0000 Joni Kähärä http://www.async.fi/2011/05/building-a-wds-bridge-with-consumer-grade-wlan-aps/ Small AP is small – and has a built-in antenna, too. I got two of these (for 19,90€ per piece – not A-link list price…) and set up a bridge so I could relocate my noisy Xen box from living room to kitchen to keep the box running 24/7 and sleep. (Turns out that in the end even this didn't help because the box remained loud enough to disturb sleep no matter what settings were selected in BIOS thermal management.) Initially it looked like the bridge worked just fine, except my testing revealed that the transmission speed was nowhere near the advertised "IEEE 802.11n (draft 2.0) / 150Mb":
    XenCenter.iso              100%   44MB   1.5MB/s   00:29
    
    After trying different cryptos from WPA2 to plain text and fiddling with various other settings I came to the conclusion that the slow speed was a feature of the device. Anyway, this was not really any kind of concern as I was more interested in latency, which was low enough (a few milliseconds). Putting all this together, my opinion is that it's good enough for an access point that is about the size of a deck of cards and costs twenty euros. What did turn out to be a problem is that at times the APs would somehow manage to get a broadcast storm going, which of course took the wired network down with it very quickly. I wasn't really able to get to the root of this, but from what I observed I can tell that the broadcast storm would happen even when one AP was connected to the primary wired segment and the AP at the other end was just "floating" there, with nothing connected to its Ethernet ports. Also, after enabling STP in the devices I could, using tcpdump, observe the STP config packets doing their thing and reconfiguring after, for example, dropping and then reconnecting either end of the bridge – but STP did nothing to prevent the broadcast storm from happening. I should also note for the record that I was using the "WDS", not "AP+WDS" mode. Verdict: the devices just aren't suitable for this application, i.e. they are buggy and do not fully work as advertised, but given their relatively compact size and ability to function as clients on a WLAN, I'll keep these.]]>
    XenServer hosting paravirtual 64-bit Ubuntu 10.04 guests http://www.async.fi/2011/03/xenserver-hosting-paravirtual-64-bit-ubuntu-10-04-guests/ Sat, 26 Mar 2011 20:14:02 +0000 Joni Kähärä http://www.async.fi/2011/03/xenserver-hosting-paravirtual-64-bit-ubuntu-10-04-guests/ XenCenter While looking for a virtualization solution in order to make computational matters more flexible, efficient and manageable (et cetera, et cetera) here at home, various offerings that are listed below were tested. To be honest, to say that I "tested" these would be twisting the truth quite a bit as the methodology used was not very scientific and things were guided more by hunch than strict reason. But then again, as I would be the only one who would get hurt if things went horribly wrong, it wouldn't really matter that much if the "wrong" solution was chosen. So far, after a few days, it's looking like the choice I made was right. The following took part in our non-scientific non-review:

    At first Eucalyptus sounded like an awesome choice, given for example its Amazon EC2 API compatibility, but in practice it turned out that while the idea of having a private cloud at one's disposal is great, having this much flexibility brings with it a much higher level of complexity in managing the system, which pretty much makes the whole idea of having a cloud a moot point. And as I have just one host machine, running Eucalyptus wasn't as straightforward as it could be. And also, what I'm really looking for is virtualization of a couple of servers that I like to have around, not a pool of cloud computing resources which can these days be bought at very reasonable rates (or for free, even) if needed. Nice offering, though, which I bet we will see gaining more and more ground in the future. Oddly, I'm unable to find a single service provider offering a service similar to EC2, but built on top of Eucalyptus. Perhaps the tools that would facilitate selling an Eucalyptus-based cloud service do not yet exist?

    As for Parallels, I think it's debatable just how "bare metal" their hypervisor really is. It may be so that I have let myself be enchanted by marketers to believe that this bare metal thing is something radically different. The other possibility is that Parallels themselves are bending the meaning of the term here and are selling their system as "bare metal" when in fact it's not that bare. At least to me it looked like a full host operating system was installed and I can't see how this makes things that much different from having a regular server and running the hypervisor on it. Of course one difference is that you don't need for example a Windows server license to run the software but there's still a regular operating system involved that's running the show. Don't get me wrong, I use Parallels products almost on a daily basis (for example the illustration image on top, of the XenCenter management tool, is running on a Windows XP installation inside a Parallels Desktop for Mac virtual machine) and I have nothing against them, it's just that this personal experience I have with their server offering wasn't that super. Their management tools are cross-platform (all three Windows, Mac, Linux) which is a plus, but they want $500 for a license per server which I'm not going to pay them. Also, Parallels Server could be considered somewhat obscure in comparison to the others so this may very well turn out to not be a good choice in the long run.
VMWare's offer just wouldn't work for some reason, perhaps my hardware was somehow incompatible. Or something, I don't know. VMWare being such a traditional virtualization house, this would've been the "correct" choice in a similar way to "No manager ever got fired for buying IBM". But as no one was going to fire me for whatever choice I made here, I gave up and moved on.

Last on the table was XenServer from Citrix. I went with the default installation and just used one whole disk and let the installer set things up the default way, i.e. a few gigabytes for all the Xen stuff and the rest for LVM storage. Like the rest, the system can be managed from the command line (local console or over SSH), but as my primary aim here is to get things done and having a point and click interface makes the learning curve that much less steep, I went and installed the XenCenter management console on a Windows XP virtual machine (which was of course not hosted on this machine). Making a paravirtual Ubuntu guest did not require any kind of wizardry, I just followed steps 3–5 in Installing Ubuntu Server 10.04 (32bit and 64bit) LTS (steps 1 and 2 were not necessary as the Ubuntu 10.04 64-bit template was already there after a fresh install). After I had one machine set up I turned that into a virtual machine template, and using this template it's super-fast to start new servers when needed. Also, these (para)virtual machines don't seem to be taking much of a performance hit and all in all I'm really pleased with the results. The only thing missing here is lm-sensors or something similar so that I could at least see CPU and motherboard temperatures of a running system, but I suppose this can be arranged.

Update: A Windows XP guest, with paravirtual device drivers, was also easy enough to install. And following the instructions in chapter 3.4. "Preparing to clone a Windows VM", working with Redmond is greatly simplified as an XP template can be prepared and new virtual machines invoked on demand and disposed of after use – this way one doesn't have to worry if for example installing a software package for testing purposes will mess the system up somehow.]]>
H264 HTTP Test Stream Generator http://www.async.fi/2011/03/h264-http-test-stream-generator/ Wed, 09 Mar 2011 21:40:33 +0000 Joni Kähärä http://www.async.fi/2011/03/h264-http-test-stream-generator/ videotestsrc documentation to generate an endless, mildly hypnotic low bitrate zone plate pattern wrapped in an MPEG transport stream. A clock is also shown so that when the stream is transcoded and/or segmented, it's easy to see how bad the lag is. Audio is not included but for example audiotestsrc could be plugged in the pipeline if necessary (although I won't be using audio in my app). VLC is used in the end of the command line to serve the stream over HTTP.
    gst-launch-0.10 -v mpegtsmux name="mux" ! fdsink fd=1 »
    videotestsrc pattern=zone-plate kx2=20 ky2=20 kt=1 ! »
    video/x-raw-yuv,width=320,height=240 ! »
    clockoverlay valign=bottom halign=left font-desc="Sans 23" ! »
    ffmpegcolorspace ! videorate ! video/x-raw-yuv,framerate=15/1 ! »
    x264enc bitrate=100000 cabac=false pass=qual quantizer=27 »
    subme=4 threads=0 bframes=0 dct8x8=false ! mux. | »
    vlc -I "dummy" file/ts:///dev/stdin »
    :sout='#std{access=http{mime=video/mp4},mux=ts,dst=192.168.1.35:8000}'
    

    Update: Here's how to save the encoded video to an MPEG-4 file:

    gst-launch-0.10 -v videotestsrc num-buffers=900000 pattern=zone-plate kx2=20 ky2=20 kt=1 ! video/x-raw-yuv,width=1920,height=1080,framerate=25/1 ! x264enc bitrate=16000 cabac=false pass=qual quantizer=27 subme=4 threads=0 bframes=0 dct8x8=false ! queue ! muxer. ffmux_mp4 name=muxer ! filesink location=zone-plate-900000-frames-1920x1080-25fps.m4v
    
    ]]>
    HTTP Live Streaming with VLC http://www.async.fi/2011/02/http-live-streaming-with-vlc/ Sun, 27 Feb 2011 17:07:49 +0000 Joni Kähärä http://www.async.fi/2011/02/http-live-streaming-with-vlc/ HTTP Live Streaming working with VLC. Downloading and compiling the latest from Videolan's Git repo was required ("1.2.0-git Twoflower" here). I might add that even though on the box that I did this I've compiled a lot of different programs (an Ubuntu installation that has gone through multiple dist-upgrades so it's a few years old and has a lot of packages (2344 atm) installed), quite a few external -dev packages relating to audio and video had to be apt-get'ed to make things work. Below is the command to make VLC read a DVD and generate a segmented stream of H264 video and AAC audio to directory /var/www/html-video-stream/x/ on our local web server. In an IRL situation we would perhaps run the transcoder and segmenter instances on separate machines, or if we already had a suitable H264 stream source (like a camera) we could skip the transcoding step altogether.
    vlc -v -I "dummy" dvdsimple:///dev/scd0 »
    :sout="#transcode{vcodec=h264,vb=100, »
    venc=x264{aud,profile=baseline,level=30,keyint=30,ref=1}, »
    aenc=ffmpeg{aac-profile=low},acodec=mp4a,ab=32,channels=1,samplerate=8000} »
    :std{access=livehttp{seglen=10,delsegs=true,numsegs=5, »
    index=/var/www/html-video-stream/x/stream.m3u8, »
    index-url=http://192.168.1.33/html-video-stream/x/stream-########.ts}, »
    mux=ts{use-key-frames}, »
    dst=/var/www/html-video-stream/x/stream-########.ts}"
    
      QuickTime X (fanboys have had this since Snow Leopard) supports HTTP Live Streaming, so in order to show the above stream on a web page in Safari using the <video> tag, we can do the following:
    <video autoplay loop controls>
      <source src="http://192.168.1.33/html-video-stream/x/stream.m3u8" »
       type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'/>
      <!-- additional sources here -->
    </video>
      Although I'm not sure if this will work in a situation where we attempt to feed H264 to clients that don't support HTTP Live Streaming, that is, we have an additional <source> element that points to a "regular" H264 HTTP stream. However, adding Ogg/Theora and WebM/VP8 support should not cause problems – I just haven't been able to make VLC output those (properly) yet. HTML5 video tag streaming support in different browsers is also one big question mark.]]>
    Blogging activities migrated to Async.fi http://www.async.fi/2011/01/blogging-activities-migrated-to-async-fi/ Sat, 22 Jan 2011 23:28:45 +0000 Joni Kähärä http://www.async.fi/2011/01/blogging-activities-migrated-to-async-fi/ "PDO (SQLite) For WordPress" adapter to work with a reasonable amount of work, so I gave up and went ahead with a MySQL server install. So it's a tradeoff (what isn't?) but with all these plugins and stuff and whatnot that this system supports I think it's a pretty good deal. I went through the old posts at sivuraide.blogspot.com and imported what I thought I might find useful at least to some extent. Unsurprisingly not too many of the old posts fall to this category. Then again, the old blog is titled Code Notebook. The old posts did not "just work" in all places, a quick eyeballing revealed some glitches that need to be ironed out. I installed the following plugins in addition to what was bundled: ]]> (Desktop) Safari & (W3C) Geolocation API http://www.async.fi/2011/01/desktop-safari-w3c-geolocation-api/ Thu, 20 Jan 2011 22:11:00 +0000 Joni Kähärä http://www.async.fi/2011/01/desktop-safari-w3c-geolocation-api/ Modernizr (which is bundled with the highly recommended HTML5 Boilerplate) which reports that the Geolocation API is supported but I'm unable to even get any dialog asking the user for permission to geolocate (which is, if I read the draft correctly, REQUIRED to be implemented). Also, it seems that others (quite a few people actually) have had this same problem but I have been unable to find a solution yet. Below is a snippet from the code; in desktop Safari the PositionErrorCallback (the latter cb function) gets called after timeout but no luck with PositionCallback, no matter how long the timeout value is. Other tested browsers work as expected. Referenced in other places:

    (Note to self: check if Google Gears, which is still installed, is causing this?)

    var position = null;

    $(document).ready(function() {
        if(!Modernizr.geolocation) {
            return;
        }
        navigator.geolocation.watchPosition(
            function(pos) {
                position = {};
                position.lat = pos.coords.latitude;
                position.lng = pos.coords.longitude;
                position.allowed = true;
                init();
            },
            function(error) {
                position = {};
                position.allowed = false;
            },
            { enableHighAccuracy: false, timeout: 10000, maximumAge: 86400000 }
        );
        checkposition();
    });

    function checkposition() {
        log("checkposition()");
        if(!position) {
            setTimeout(function() { checkposition(); }, 1000);
            return;
        } else {
            if(position.allowed) {
                log("checkposition(): got position: " + position.lat + "," + position.lng);
                fetchephemeris();
            } else {
                log("checkposition(): could not get a position, giving up");
                $("#geolocate").hide();
            }
        }
    }
    ]]>
    QUERY_STRING parsing in plain C http://www.async.fi/2011/01/query_string-parsing-in-plain-c/ Mon, 10 Jan 2011 14:23:00 +0000 Joni Kähärä http://www.async.fi/2011/01/query_string-parsing-in-plain-c/ As far as I can tell (which, I'll be the first one to admit, doesn't count for that much) this code is so simple that there are no holes that could be exploited.

      char * query = getenv("QUERY_STRING");
      char * pair;
      char * key;
      double value;
    
      if(query && strlen(query) > 0) {
        pair = strtok(query, "&");
        while(pair) {
          key = (char *)malloc(strlen(pair)+1);
          sscanf(pair, "%[^=]=%lf", key, &value);
          if(!strcmp(key, "lat")) {
            lat = value;
          } else if(!strcmp(key, "lng")) {
            lng = value;
          }
          free(key);
          pair = strtok((char *)0, "&");
        }
      }
    
    ]]>
    jQuery Boids (Plugin) http://www.async.fi/2010/11/jquery-boids-plugin/ Sun, 21 Nov 2010 10:50:00 +0000 Joni Kähärä http://www.async.fi/2010/11/jquery-boids-plugin/ From the README:

    "My first attempt making a jQuery plugin, following the guidelines at: http://docs.jquery.com/Plugins/Authoring

    Boids code adapted from Javascript Boids by Ben Dowling, see: http://www.coderholic.com/javascript-boids/

    If this is bound to the window resize event, then the jQuery resize event plugin by "Cowboy" Ben Alman should be used as it throttles the window resize events. See: http://benalman.com/projects/jquery-resize-plugin/"

    The plugin uses HTML Canvas to render the Boids, so a modern browser with Canvas support is required for this to work. I tested with Chrome, Safari and Firefox. IE with Excanvas was painfully slow…

    Code is hosted at GitHub: https://github.com/kahara/jQuery-Boids

    Demo is at: http://jonikahara.com/lab/jQuery-Boids/test.html

    ]]>
    Arduino, DS18B20, Ethernet Shield, Pachube.Com http://www.async.fi/2010/04/arduino-ds18b20-ethernet-shield-pachube-com/ Sat, 03 Apr 2010 14:40:00 +0000 Joni Kähärä http://www.async.fi/2010/04/arduino-ds18b20-ethernet-shield-pachube-com/ Kotkansaari Sensorium Update 2: The sensor is now outside, and running on parasitic power. Update: The data is now available on a more mobile-friendly web page here. Arduino code, based on (i.e. copypasted & modified a little) stuff from http://www.dial911anddie.com/weblog/2009/12/arduino-ethershield-1wire-temperature-sensor-pachube/ is below. Requires the 1-wire library and the Dallas Temperature Control library, both of which can be downloaded from here. Original code utilized DHCP, but I found this to be somewhat unstable and went with a static IP address instead. The DS18B20 gets its power from the Arduino, and the data line (that's the center pin) is connected to Arduino pin 8. The data line is pulled up to +5V through a 4k7 resistor, as suggested in Maxim literature. Parasitic power supply was not used as proper voltage was readily available from the Arduino. Please note that even though parasitic power is not used, the pull-up resistor is still necessary (see the data sheet).
    #include <Ethernet.h>
    #include <OneWire.h>
    #include <DallasTemperature.h>
    
    char PACHUBE_API_STRING[] = "";  // Your API key
    int PACHUBE_FEED_ID = 0; // Your feed ID 
    
    // Digital IO port used for one wire interface
    int ONE_WIRE_BUS = 8 ;
    
    // Ethernet mac address - this needs to be unique
    byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
    
    // IP address of www.pachube.com
    byte server[] = { 209,40,205,190 };
    
    // Arduino address
    byte ip[] = { 10, 0, 0, 223 };
    byte gateway[] = { 10, 0, 0, 2 };
    
    char version[] = "PachubeClient Ver 0.01c";
    
    #define CRLF "rn"
    
    // simple web client to connect to Pachube.com 
    Client client(server, 80);
    
    // Setup a oneWire instance to communicate with any OneWire device
    OneWire oneWire(ONE_WIRE_BUS);
    
    // Pass our oneWire reference to Dallas Temperature. 
    DallasTemperature sensors(&oneWire);
    
    // 1wire device address
    DeviceAddress thermometer;
    
    void setup()
    {
       // Note: Ethernet shield uses digitial IO pins 10,11,12, and 13   
       Serial.begin(9600);
      
       Serial.println(version);
       Serial.println();
      
       // locate devices on the 1Wire bus
       Serial.print("Locating devices on 1Wire bus...");
       sensors.begin();
       int count = sensors.getDeviceCount();
       Serial.print("Found ");
       Serial.print( count );
       Serial.println(" devices on 1wire bus");
    
       // select the first sensor   
       for ( int i=0; i<count; i++ )
       {
          if ( sensors.getAddress(thermometer, i) ) 
          {
             Serial.print("1wire device ");
             Serial.print(i);
             Serial.print(" has address: ");
             printAddress(thermometer);
             Serial.println();
          }
          else
          {
             Serial.print("Unable to find address for 1wire device "); 
             Serial.println( i );
          }  
       }
      
       // show the addresses we found on the bus
       Serial.print("Using 1wire device: ");
       printAddress(thermometer);
       Serial.println();
    
       // set the resolution to 9 bit 
       sensors.setResolution(thermometer, 9);
    
       Serial.print("Initializing ethernet...");  
       delay(5000);
       Ethernet.begin(mac, ip, gateway);
       delay(5000);
       Serial.println(" done.");
    }
    
    void sendData()
    {     
       float temp = sensors.getTempC(thermometer);
       //float temp = sensors.getTempF(thermometer);
       Serial.print("Temp=");
       Serial.println(temp);
      
       Serial.println("connecting...");
    
       if (client.connect()) 
       {
          Serial.println("connected");
          
          client.print(
             "PUT /api/feeds/" );
          client.print(PACHUBE_FEED_ID);
          client.print(".csv HTTP/1.1" CRLF
                       "User-Agent: Fluffy Arduino Ver 0.01" CRLF
                       "Host: www.pachube.com" CRLF 
                       "Accept: */" "*" CRLF  // need to fix this 
                       "X-PachubeApiKey: " );
          client.print(PACHUBE_API_STRING);
          client.print( CRLF 
                        "Content-Length: 5" CRLF
                        "Content-Type: application/x-www-form-urlencoded" CRLF
                        CRLF );
          client.println(temp);
          unsigned long reqTime = millis();
          
          // wait for a response and disconnect 
          while ( millis() < reqTime + 10000) // wait 10 seconds for response  
          {
             if (client.available()) 
             {
                char c = client.read();
                Serial.print(c);
             }
    
             if (!client.connected()) 
             {
                Serial.println();
                Serial.println("server disconnected");
                break;
             }
          }
          
          Serial.println("client disconnecting");
          Serial.println("");
          client.stop();
       } 
       else 
       {
          Serial.println("connection failed");
       }
    }
    
    void printAddress(DeviceAddress deviceAddress)
    {
       for (uint8_t i = 0; i < 8; i++)
       {
          if (deviceAddress[i] < 16) Serial.print("0");
          Serial.print(deviceAddress[i], HEX);
       }
    }
    
    void loop()
    {
       sensors.requestTemperatures(); // Send the command to get temperatures
       sendData();
       delay( ( 5l * 60l * 1000l) - 11000l  ); // wait 5 minutes
    }
    
    ]]>
    Visitor Locator, Take Two http://www.async.fi/2009/01/visitor-locator-take-two/ Tue, 06 Jan 2009 20:33:00 +0000 Joni Kähärä http://www.async.fi/2009/01/visitor-locator-take-two/ new version that I hacked together stores number of visits per country and shows the totals when a user clicks a country's marker. Visits are stored in an SQLite database, which, as you may know, makes things very easy as there is no server to look after etc. I was thinking of using Berkeley DB, because in an app like this, all that SQL is simply unnecessary sugar, but was lazy in the end (as usual). Update: Added country flags in place of the same default icon for every country (see: Custom Icons section in Google Maps API). Update 2: Added tooltip-like functionality, which shows country details in a transient window (label) instead of the default info window. See GxMarker for additional info. Continuing here from where last night's script ended. This is just the PHP side of things; Google Maps API examples can be found elsewhere. First we open an SQLite database and create a table for our visitor data if the table does not exist:
    try {
            $db = new PDO('sqlite:' . $_SERVER['DOCUMENT_ROOT'] . '/../db/visitor-locator.sqlite3');
    } catch(PDOException $exception) {
            die($exception->getMessage());
    }
    
    $stmt = $db->query("SELECT name FROM sqlite_master WHERE type = 'table'");
    $result = $stmt->fetchAll();
    if(sizeof($result) == 0) {
            $db->beginTransaction();
            $db->exec('CREATE TABLE visits (country TEXT, visits INTEGER, lat TEXT, lng TEXT);');
            $db->commit();
    }
    
    Next, check if the country is already in the table and if it is, increment the 'visits' field:
    $stmt = $db->query("SELECT country, visits FROM visits WHERE country = '" . $countryname . "'");
    $result = $stmt->fetch();
    
    if($result['country']) {
            $db->beginTransaction();
            $stmt = $db->prepare('UPDATE visits SET visits=:visits, lat=:lat, lng=:lng WHERE country=:country');
            $stmt->bindParam(':country', $countryname, PDO::PARAM_STR);
            $visits = $result['visits'] + 1;
            $stmt->bindParam(':visits', $visits, PDO::PARAM_INT);
            $stmt->bindParam(':lat', $lat, PDO::PARAM_STR);
            $stmt->bindParam(':lng', $lng, PDO::PARAM_STR);
            $stmt->execute();
            $db->commit();
    }
    
    If country was not in the table, create a row for it:
    else {
            $db->beginTransaction();
            $stmt = $db->prepare('INSERT INTO visits (country, visits, lat, lng) VALUES (:country, :visits, :lat, :lng)');
            $stmt->bindParam(':country', $countryname, PDO::PARAM_STR);
            $visits = 1;
            $stmt->bindParam(':visits', $visits, PDO::PARAM_INT);
            $stmt->bindParam(':lat', $lat, PDO::PARAM_STR);
            $stmt->bindParam(':lng', $lng, PDO::PARAM_STR);
            $stmt->execute();
            $db->commit();
    }
    
    And lastly, fetch all rows and form a Javascript array for our client-side script to use:
    $result = $db->query('SELECT country, visits, lat, lng FROM visits');
    
    echo "<script type=\"text/javascript\">\n";
    echo "//<![CDATA[\n";
    echo "var tbl_country = []; var tbl_visits = []; var tbl_lat = []; var tbl_lng = []; var count = 0;\n";
    foreach($result->fetchAll() as $row) {
            echo 'tbl_country[count] = \'' . $row['country'] . '\'; ';
            echo 'tbl_visits[count] = \'' . $row['visits'] . '\'; ';
            echo 'tbl_lat[count] = \'' . $row['lat'] . '\'; ';
            echo 'tbl_lng[count] = \'' . $row['lng'] . '\';';
            echo " count++;\n";
    }
    echo "//]]>\n";
    echo "</script>\n";
    
    ]]>
    URL Fetch API, MiniDom (Google App Engine) http://www.async.fi/2009/01/url-fetch-api-minidom-google-app-engine/ Sun, 04 Jan 2009 18:28:00 +0000 Joni Kähärä http://www.async.fi/2009/01/url-fetch-api-minidom-google-app-engine/ Fetching stuff with the URL Fetch API is simple (especially if one has faith that the source is there and it will deliver inside GAE time limits):

    from google.appengine.api import urlfetch
    from xml.dom import minidom
    
    def parse(url):
      r = urlfetch.fetch(url)
      if r.status_code == 200:
        return minidom.parseString(r.content)
    

    As is accessing the resulting DOM with MiniDom. Here the source is an Atom feed:

    import time
    
    dom = parse(URL)
    for entry in dom.getElementsByTagName('entry'):
      try:
        published = entry.getElementsByTagName('published')[0].firstChild.data
        published = time.strftime('%a, %d %b', time.strptime(published, '%Y-%m-%dT%H:%M:%SZ'))
      except (IndexError, ValueError):
        pass
      …
    
    ]]>
    Berkeley DB XML Python basics http://www.async.fi/2008/01/berkeley-db-xml-python-basics/ Mon, 07 Jan 2008 21:23:00 +0000 Joni Kähärä http://www.async.fi/2008/01/berkeley-db-xml-python-basics/ In an earlier post a C++ snippet can be found where a DB XML container was created (or opened if it already exists) and a document read from stdin was put into that container. That same snippet done in Python is pretty much identical:

    from bsddb3.db import *
    from dbxml import *
    
    mgr = XmlManager(DBXML_ALLOW_EXTERNAL_ACCESS)
    uc = mgr.createUpdateContext()
    
    try:
            cont = mgr.openContainer("testcontainer.dbxml", DB_CREATE|DBXML_ALLOW_VALIDATION, XmlContainer.WholedocContainer)
            doc = mgr.createDocument()
            input = mgr.createStdInInputStream()
            doc.setContentAsXmlInputStream(input)
            cont.putDocument(doc, uc, DBXML_GEN_NAME)
    
    except XmlException, inst:
            print "XmlException (", inst.ExceptionCode,"): ", inst.What
            if inst.ExceptionCode == DATABASE_ERROR:
                    print "Database error code:",inst.DBError
    
    
    ]]>
    Timeline: DHTML-based AJAXy widget for visualizing time-based events http://www.async.fi/2006/11/timeline-dhtml-based-ajaxy-widget-for-visualizing-time-based-events/ Fri, 03 Nov 2006 16:19:00 +0000 Joni Kähärä http://www.async.fi/2006/11/timeline-dhtml-based-ajaxy-widget-for-visualizing-time-based-events/ http://simile.mit.edu/timeline/api/timeline-api.js.]]>
    pkg-config http://www.async.fi/2006/08/pkg-config/ Sat, 19 Aug 2006 11:49:00 +0000 Joni Kähärä http://www.async.fi/2006/08/pkg-config/

    CFLAGS_GTK = `pkg-config --cflags gtk+-2.0`
    LIBS_GTK = `pkg-config --libs gtk+-2.0`

    gcb-test: main.o
            $(CC) $(COMPILER_FLAGS) -o gcb-test main.o $(LIBS_GTK)

    main.o: main.c
            $(CC) $(COMPILER_FLAGS) -c main.c -o main.o $(CFLAGS_GTK)
    ]]>
    SQLite, prepare/step/finalize http://www.async.fi/2006/07/sqlite-preparestepfinalize/ Sat, 01 Jul 2006 15:12:00 +0000 Joni Kähärä http://www.async.fi/2006/07/sqlite-preparestepfinalize/

    rc =sqlite3_prepare( Db, qry_getnewid, -1, &stmt, NULL);
    if( SQLITE_OK !=rc) diediedie( "...");

    rc =sqlite3_step( stmt);
    if( SQLITE_ROW !=rc) {
        sqlite3_finalize( stmt);
        diediedie( "...");
    }

    ...

    a->id =sqlite3_column_int( stmt, 0);

    ...

    sqlite3_finalize( stmt);
    ]]>
    SQLite, open db and exec sql http://www.async.fi/2006/07/sqlite-open-db-and-exec-sql/ Sat, 01 Jul 2006 15:04:00 +0000 Joni Kähärä http://www.async.fi/2006/07/sqlite-open-db-and-exec-sql/

    rc =sqlite3_open(DB_FILENAME, &Db);
    if( rc) diediedie( "...");

    rc =sqlite3_exec( Db, tbldef_accounts, NULL, 0, &zErrMsg);
    if( SQLITE_OK !=rc) diediedie( zErrMsg);

    ...

    sqlite3_close( Db);
    ]]>