Category Archives: Uncategorized

Houdini / CHOPS – an automatic player piano

No keyframes were used to create this: it’s a MIDI file recorded from my keyboard, stuffed into a mad CHOPs network in Houdini to handle all the timings and animation automatically. Depending on how loud a note is supposed to be, the timings are shifted to match how a player would actually play. Louder notes usually mean the key is hit faster, so the animations have to be slightly shorter. The key releases – how long the animation of each key “returning” to its upper position takes – are based on a number of factors, but mostly weighted toward what the overall frequency of notes is at that point. So slower passages have a more relaxed feel visually. And so on and so on… 😉
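The CHOPs network itself isn’t shown here, but the timing rules described above can be sketched in a few lines of Python. Everything below is illustrative – the constants, window size and function names are my own stand-ins, not the actual values used in the network:

```python
# Sketch of the timing rules: louder notes get shorter key-press
# animations, and releases relax when the local note density drops.

def press_duration(velocity, base=0.12, min_dur=0.03):
    """MIDI velocity 0-127; louder -> faster key strike -> shorter press."""
    v = max(0, min(127, velocity)) / 127.0
    return max(min_dur, base * (1.0 - 0.7 * v))

def release_duration(note_time, all_times, window=1.0, slow=0.5, fast=0.15):
    """Release takes longer when few notes surround this one in time."""
    nearby = sum(1 for t in all_times if abs(t - note_time) <= window)
    density = min(1.0, nearby / 10.0)      # 10+ nearby notes = "busy" passage
    return slow + (fast - slow) * density  # busy -> quick, snappy release

# Toy note-on times (seconds) and velocities:
times = [0.0, 0.1, 0.2, 1.8, 3.5]
for t, vel in zip(times, [100, 40, 127, 60, 20]):
    print(t, round(press_duration(vel), 3), round(release_duration(t, times), 3))
```

The isolated note at 3.5s gets a slower, more relaxed release than the cluster at the start, which is the “slower passages feel more relaxed” effect in miniature.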

Welcome to the farm. In the attic.

Big changes at h Manor.

First, a client needed me to re-render some old projects – big projects (dome projection, 4K x 4K, around 10,000 frames), and half were created in Blender, half in Houdini / Redshift.

Second, I’ve moved most of my pipeline over to Linux. Mostly because Apple and nVidia really aren’t getting along, and that’s causing huge problems for people whose pipelines depend on nVidia GPUs. But also because Linux seems to be the OS of choice for larger studios, so it makes sense to get my head round it.

Timescales on the project are fairly tight, which means there’s pressure to deliver fast, but there’s also pressure to not screw up – and to not let the hardware / software screw things up. So, backups, fault-resilience, fast QC etc., all suddenly Very Important.

And because the project involves huge amounts of data (several TB), anything I can do to speed up the pipeline is good.

First step: I bought a server from eBay – a second-hand Dell PowerEdge R510 rack-mount monster. It was about £200, and it’s got 14 drive bays, two redundant power supplies, and it sounds like a jet plane is about to take off when you switch it on. I’m in love.

I don’t have a rack to mount it in, so it’s just here on the floor, sat sideways, but it’s working happily; it’s got 3 pairs of drives, each in a RAID-1 “mirrored” config, so a drive can fail without me losing anything, and when I plug in a replacement, the server will rebuild the data on it.

Yep – there’s an orange “warning” light on one of the bays – one of the drives was failing from new, but it turned out to be a good way to teach me how to rebuild a RAID set from the command line. Though it’s a Dell server with a Dell RAID controller, there’s a package called MegaCli that lets you remotely administer things. Lots of command-line switches to learn, but it’s sorted now, and apart from physically pulling out the dead drive and plugging a new one in, I did it all from downstairs. Freaky.
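The exact MegaCli invocations aren’t in the post, but as a hedged sketch, a little Python wrapper along these lines could drive the rebuild check remotely. The enclosure/slot numbers and hostname are placeholders, and the binary is often installed as `MegaCli64` on 64-bit Linux – check `MegaCli -h` on your own box before trusting any of the flags:

```python
import subprocess

def megacli_rebuild_progress_cmd(enclosure, slot, adapter=0):
    """Build the MegaCli command that reports rebuild progress for one
    physical drive. Flag names follow LSI's MegaCli conventions."""
    return ["MegaCli64", "-PDRbld", "-ShowProg",
            f"-PhysDrv[{enclosure}:{slot}]", f"-a{adapter}"]

def run_over_ssh(host, cmd):
    """Run the command on the server over ssh (hostname is a placeholder)."""
    return subprocess.run(["ssh", host] + cmd, capture_output=True, text=True)

cmd = megacli_rebuild_progress_cmd(enclosure=32, slot=2)
print(" ".join(cmd))
# run_over_ssh("server.local", cmd)  # only meaningful with the real hardware
```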

The server’s running Linux Mint (like everything else here). Not the ideal choice for a server, as it’s got a GUI / graphical desktop that I can’t actually see or use as I don’t have a monitor attached, but it’s good enough for now. And it turns out £200 buys you a lot of grunt if you don’t mind the industrial-size case it comes in: it’s got 32GB of RAM and two quad-core Xeon processors (same family as my Mac Pros).

But I need GPUs: the renderers I use for Blender (Cycles) and Houdini (Redshift) use graphics cards for their processing, which makes them less flexible but much faster at churning out the frames. So I needed to set up some render nodes to actually do the rendering.

I dug out some bits and pieces from various junk boxes and managed to put together two machines; they’re both fairly under-powered, CPU-wise (Core i5), and haven’t got a lot of memory (16GB each), but they can handle a few GPUs:

A bit of a mish-mash: two GTX 1080 Tis, two GTX 1060s, two GTX 690s, and an RTX 2060; plus there’s another two GTX 1080 Tis in the main workstation downstairs. I did have three GTX 690s, but two of them died in (thankfully) different ways, so I managed to cobble together a single working one out of their bones.

For someone who works with images, it’s kinda weird spending a couple of weeks looking at command-lines, setting renders going by typing a command, rather than clicking a button, but you get used to it. Gives you a strange sense of power, too. Rather than watching the frames materialise on screen, I get to watch the nodes’ progress reports in text. Strange.

Both Blender and Houdini were a pain in the arse to get going on Linux, though; I could get a Mac set up in about half an hour if pushed, but on Linux – and with a noob at the helm – they took a couple of days to sort out. Blender needed nVidia’s CUDA stuff installing, which largely consisted of installing and uninstalling and swearing and roaming help forums and more installing and uninstalling. But I managed in the end, and all without actually plugging a monitor into a computer; all done remotely by ssh.
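Once it’s installed, kicking off a headless render over ssh is simple enough to wrap in a script. This is a minimal sketch – the file paths and hostnames are made up, but the Blender flags (`-b` for background mode, `-s`/`-e` for frame range, `-a` to render the animation) are the real CLI, and note Blender processes its arguments in order, so `-s`/`-e` must come before `-a`:

```python
import subprocess

def blender_render_cmd(blend_file, start, end, output="//render/frame_####"):
    """Headless Blender render: -b = background (no GUI), -o = output
    pattern, -s/-e = frame range, -a = render the animation.
    Argument order matters: -s and -e must precede -a."""
    return ["blender", "-b", blend_file,
            "-o", output, "-s", str(start), "-e", str(end), "-a"]

def render_remote(host, blend_file, start, end):
    """Kick the render off on a node over ssh (hostnames are placeholders)."""
    cmd = ["ssh", host] + blender_render_cmd(blend_file, start, end)
    return subprocess.run(cmd)

print(" ".join(blender_render_cmd("/projects/dome/shot01.blend", 1, 250)))
```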

Houdini and Redshift were a pain in a different way: you can install them perfectly easily from a command prompt, but unlike Blender they’re commercial products and need their own license servers installing and setting up too before they’ll work. And Redshift – really, guys? – won’t let you actually activate one of your licenses from a command prompt: the licensing tool only works in a GUI. So in the end I had to dig out a monitor and keyboard. And find a VGA cable… I know I’ve got a bunch of them somewhere in here:

Finally found one, plugged it all up, spent about 20 seconds licensing Redshift, disconnected it all again. Finally, everything in the attic seemed to be talking to each other successfully… and even more thankfully, I now don’t have to actually go up there much; I can control it all from downstairs.

So: a server, two dedicated render nodes, three workstations, an old laptop acting as a queue manager, and everything working together; two of the workstations still running MacOS (for compositing and editing and admin/email) while everything else is on Linux.

It’s been quite a month. But the outcome is this:

… I can queue up render jobs, they’ll get farmed out automatically to machines as they become free, and I no longer have to get up in the middle of the night to set the next one going. At least, as far as the Houdini stuff goes; I’m still setting Blender renders going manually (albeit remotely via ssh) so I’ve got to sort out some scripts to do that bit a little more cleverly.
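The Houdini side is handled by the queue manager, but the core “farm jobs out to machines as they become free” logic is simple enough to sketch for the Blender half too. The node names and the job runner below are stand-ins (the runner is injected so the dispatch logic can be shown without real render nodes):

```python
import queue
import threading

def dispatch(jobs, nodes, run_job):
    """Hand jobs out to nodes as each one becomes free.
    run_job(node, job) blocks until that node finishes the job."""
    q = queue.Queue()
    for j in jobs:
        q.put(j)

    def worker(node):
        while True:
            try:
                job = q.get_nowait()   # grab the next pending job
            except queue.Empty:
                return                 # nothing left: this node is done
            run_job(node, job)

    threads = [threading.Thread(target=worker, args=(n,)) for n in nodes]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Toy run: "render" three frame ranges on two imaginary nodes.
done = []
dispatch(jobs=[(1, 50), (51, 100), (101, 150)],
         nodes=["node-a", "node-b"],
         run_job=lambda node, job: done.append((node, job)))
print(sorted(j for _, j in done))  # → [(1, 50), (51, 100), (101, 150)]
```

In a real version, `run_job` would be the ssh-and-render call, and a faster node naturally ends up taking more jobs, which is the whole point of pulling from a shared queue rather than pre-assigning ranges.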

Drone day 1

I bought a drone! This is my first ever flight, so it’s a bit clunky (keep bashing into the gimbal end stops, whoops)

Bloody addictive, this thing, but I’ve only got enough batteries for around an hour of flight at a time…

Star Wars: The Force Awakens – atmosphere and evenings


When I left the cinema, it was this one shot that stuck in my head more than any other. And there were plenty of beautiful, memorable shots throughout the film – but there’s something about seeing futuristic space fighters against a sunset that really resonates with me, something in that shot that elevated the film above other sci-fi flicks.

I think it’s because it’s evening.

Sci-fi always seems to happen at night. Of course a lot of sci-fi happens in space, where it’s always night time; but sometimes stuff happens in the daytime – and when it does, it’s usually bright and harsh and dusty. But there’s something about evenings that’s extremely important to humans: it’s that calm transition, the moment of calming and changing gears at the end of a day’s work, before heading out on the town (or wherever). There’s a whole cosm of human experience tied into the atmosphere that evenings bring.

And the Star Wars films have always used evenings to their advantage. You can be on a planet at the far edge of the universe, a gazillion miles from home, but if you’re watching a sunset (even multiple suns setting), something primeval kicks in and you suddenly feel: this place ain’t so different. Show me a character looking into a sunset and I instantly know and connect with the atmosphere there. I can feel the temperature dropping, feel the anticipation of night falling – but it hasn’t fallen yet. Relief the day’s work is over; expectation of what the night will bring.


And day and night are such broad textures to paint your scenes with that they don’t really tell you anything much about the time. An EXT. DAY shot doesn’t tell you whether it’s morning or afternoon without adding lots of other visual or scripted cues. Likewise an EXT. NIGHT – it could be 10 at night or it could be 4 o’clock in the morning, but either way, you can’t easily communicate that subtlety in a shot until you start adding other elements.

But evening can be communicated instantly. And it communicates so much human stuff – it’s not just an arbitrary time of day, it’s evening, and that’s a very special, a very human, time, rich in atmosphere.

And boy, how important a single shot can be to a whole film: a single shot can colour your whole experience of a movie. I saw The Force Awakens in the cinema, and now, a year or so later, it’s on the movie channels every other day, and I often have one of the screens on my desk playing it as part of my background noise. So I’ve probably seen 90% of the film about a dozen times… but for some reason I keep missing that sunset shot. I glance up and see Finn and Rey in that temple bar place, then glance up again and they’re in the rebel base, and I know I’ve missed that shot again. Huh. Maybe next time.

Why is that shot so important for me? It’s just some spaceships in the air, after all… and yet it was the shot that brought home the Star-Wars-iness of the story to me. These are stories set in fantastical locations a million miles from my sphere of experience, but show me a character watching the sun setting:

Screen Shot 2017-01-26 at 04.54.18

… and I’m there. I know exactly what it feels like to be there on that strange planet, the air cooling, the day over, the night not yet here. I instantly get it. I’m there.

And now I can watch Luke fly a space ship and blow up a Death Star and do whatever fantastical stuff the script wants, but because we had that shared moment together watching the sun setting, I know he’s just a human like me. His universe and mine aren’t so different after all.

Quick hacker tip: DIY pseudo-BGA

h_solar_monitor_b 4

Man, I hate drilling holes in PCBs. I make my boards with a mill, so it shouldn’t be too hard to swap the V-cutting bit for a drill bit, but I Just. Can’t. Be. Arsed.

And besides, I like making things as small and slimline and dinky as possible.

The little PCB above is a backpack for an LCD. Couldn’t avoid having to drill holes along the top to connect to the LCD itself, but everything else can be surface mounted.

But sometimes you need a way to connect, say, an nRF transceiver, or one of those newfangled ESP8266 Wi-Fi modules, to your PCB, and they come already fitted with a pin header. And that would mean drilling more holes, and having the module standing off the board slightly. Yuck.

So here’s my hacky approach. First, design your PCB so the pads are on the top of your board, rather than the bottom: we’re going to take advantage of the fact the little transceivers are always double-sided boards, with plated through-holes.

Transceiver on the left, my board with its 8 pads on the right:

h_solar_monitor_b 5

First step, get that header off the transceiver. With a blade, you can carefully lever off the plastic spacer tying the header pins together – do it gently, a little at a time, working from both ends of the header.

Then you can remove the pins one at a time nice and easily. Last one:

wireless 122

Next step – with the board held vertically, soldering iron on one side and solder sucker on the other, you can clean out all the solder from the holes:

h_solar_monitor_b 6

Important – when you’ve removed the pins, you need to make sure the now-empty pads on the bottom of the board are tinned with solder; as the header was originally soldered on the other side of the board, they may not be. So give ’em a blob of solder.

Then remove as much solder as you can from the holes. You need it to be as tidy as possible, every through-hole tinned, but as clean as possible:

h_solar_monitor_b 7

Next step: on your PCB, tin all the pads with the thinnest layer of solder you can. Then put a slightly larger blob of solder on diagonally opposite pads. Try and make them nicely rounded blobs, as they’ll act like locating pins:

wireless 125

Which means when you press the transceiver board down in position over the pads, it’ll locate itself perfectly squarely, with all the pads (hopefully) lining up perfectly between the two boards:

h_solar_monitor_b 8

Touch your soldering iron to the two corner pads to melt the blobs and fix your boards together:

wireless 127

You should be able to look through the remaining holes to see whether they’re lining up perfectly with the pads underneath, then just fill the remaining holes with solder so they connect all the way through.

It’s a balancing act – you want enough solder to fill the hole and join up with the tinned pad on the board underneath, but you don’t want so much that it creates shorts between the two boards.

wireless 128
h_solar_monitor_b 10

Result: two boards stuck together, without the extra height the header would create:

wireless 131

Check the connections with a multimeter before powering it up, just to make sure there are no shorts. If there are (which has only happened once out of the few dozen times I’ve done this), you can separate the boards by melting the solder, pad by pad, working a bit of folded-over kapton tape between them to keep them from reconnecting as they cool. Then clean up the two boards and have another go.

h_solar_monitor_b 11
It may seem like a complicated way to avoid drilling eight little holes, but once you’ve done it a few times it’s surprisingly quick and easy and it knocks quite a few millimetres off the height of the board.

Finished PCB in position (note to future self: gotta find a way of moving those electrolytic caps off to the side of the board next time):

h_solar_monitor_b 12

Final result: a wireless LCD readout for my solar panels:

h_solar_monitor_b 13

Note: this is a hacky and, arguably, utterly unnecessary technique, but I like it, so there 🙂

Illuminated control panel


A friend of mine has had his recording studio fitted out with custom built furniture and wanted a smart control panel to give quick access to some bits and pieces.

I laser cut a piece of smoked acrylic and mounted some sexy (and surprisingly cheap) illuminated switches from eBay. The decals on the switches are laser printed onto OHP film, cut out, and fitted inside the keycaps:

poseidon_control_panel 2

I soldered up the connections on the back of them… lots of wires. Each switch is illuminated, so it’s 4 connections per switch, 32 in total. Even though this panel will be mounted in a box, the last thing you want is wires hanging loose – but I’ve seen some neat ways of “cable lacing” to tie them all together into a neat loom. Wikipedia has some nice pictures you can use as a guide; usually waxed cotton is used (I’m guessing the wax stops the knots from falling apart), but it turns out dental floss works just as well:

poseidon_control_panel 1

At the bottom of the panel, I’ve put the studio logo, all nicely backlit. The actual backlight is an old iPod display – take it apart carefully and you can remove the LCD glass, leaving you with a nice, evenly lit rectangle.

To create the logo, I laser printed several copies of the studio logo onto acetate, then carefully lined them up on top of each other and stuck them together with a little spray mount. Why multiple copies? If you try doing this with a single laser-printed acetate, you’ll find the black parts of the print aren’t dense enough to block the light properly. Layering up a few copies builds up the contrast, so you don’t end up with the whole rectangle glowing and the logo barely visible. Then I stuck it to the front of the iPod backlight, and glued it to the back of the acrylic panel:

poseidon_control_panel 3
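The reason layering works is that each sheet’s light transmission multiplies. A quick back-of-envelope – the 20% figure is just an assumed leakiness for a single laser-printed “black”, not a measured value:

```python
# If one laser-printed "black" layer still lets through, say, 20% of the
# backlight, n stacked layers transmit 0.2 ** n - contrast builds fast.
single_layer = 0.20  # assumed leakage through one print's black areas
for n in range(1, 5):
    print(n, f"{single_layer ** n:.4f}")
# Three layers: 0.2 ** 3 = 0.008, i.e. under 1% leakage - effectively opaque.
```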

The buttons are connected to a Teensy LC running in keyboard emulation mode, so they just trigger keypresses on the Mac the panel’s connected to.


Looks pretty smart in situ (that’s it just over the keyboard):


Motion Control: the source code

The source code is online.

Warning: it’s awful. It has worked, and may still work, but it’s offered more for entertainment than anything else. There are several thousand hard-coded gotchas, and a pervading bad code smell in just about every file. But hey – it’s the biggest project I’ve tried putting together, and my first in Swift. And it’s not like there’s a standalone app you can run and play with; you need the hardware (which has similarly unpleasant firmware) in order to stand a chance of getting it working.

But I’ll do what I can to clean it up, document it etc. in the coming weeks.


Motion Control experiment: stop motion + live action

No CGI! All captured with my zany mo-co contraption on a Canon 7D, then comped together in After Effects.

Got the camera to do 4 passes:
1. Normal speed to capture the hand at the start
2. Frame by frame to do the stop-motion chips
3. Normal speed again to capture the hand at the end
4. Normal speed, all lights off apart from one, and a spray of stage smoke (you can see it at the top of the frame)