A friend wants a Knight Rider style Larson scanner for the back of his cycle helmet. Simple enough. Especially now we have intelligent LED strips available, which only need power plus one data wire to drive a load of LEDs.
Note: if you want to do this sort of thing, make sure you understand the dangers of having LiPo batteries close to your head, and take suitable precautions – see the note at the bottom of this post.
But first things first: I knocked out some very quick and dirty code to do a nice scanning effect (thanks Adafruit for the WS2812 library – doesn’t half save some time!). My code is here. Nothing fancy, but it has a nice decay effect, and a gamma lookup table to keep things pretty. I used an Arduino Mega board to do the testing:
The code works fine, but it needs to run from something a bit smaller than an Arduino. An ATtiny85 does just as well (the 45 would probably have worked, but I don’t have any):
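For the curious, the core of the effect boils down to something like this – a rough sketch in plain C++ rather than my actual Arduino code, with the strip length and decay rate as made-up placeholders:

```cpp
#include <cstdint>
#include <cmath>

// Rough sketch of the scanner logic (not my actual code): one bright "eye"
// bounces along the strip while every pixel fades a little each frame,
// which is what gives the decay trail.
const int NUM_LEDS = 16;  // placeholder strip length

// Gamma correction (~2.2) keeps the low end of the fade looking smooth.
uint8_t gammaCorrect(uint8_t v) {
    return (uint8_t)(std::pow(v / 255.0, 2.2) * 255.0 + 0.5);
}

// Advance the animation by one frame.
void stepScanner(uint8_t levels[], int &pos, int &dir) {
    for (int i = 0; i < NUM_LEDS; i++)
        levels[i] = (uint8_t)((levels[i] * 3) / 4);  // decay everything
    levels[pos] = 255;                               // relight the eye
    pos += dir;
    if (pos == 0 || pos == NUM_LEDS - 1) dir = -dir; // bounce at the ends
}
```

On the real ATtiny you’d precompute the gamma curve into a PROGMEM table rather than calling pow() every frame.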
Most swear boxes work the wrong way round: you say something naughty, then you have to put money in the box as penance. This is the opposite – press the button and this device generates a random swear word for you to use in conversation at your leisure. Technology, eh.
It’s not a new idea: a few years ago I came across this beautiful “Four Letter Word” clock made with delicious old fashioned nixie tubes:
Designed and built by Jeff Thomas, Peter Hand, and Juergen Grau. More information here.
I wanted to make something quick and simple as a birthday present for a friend, and I had four little LED starburst displays sitting in a box, so some sort of random word dispenser seemed like a good idea. Designed a quick circuit in Eagle:
The schematic in Eagle is pretty messy and tangled. I often find that if I’ve got loads of connections to make to a microcontroller, it’s not until I’m laying out the board design that I can see the way things should be connected.
This is the perfect example: I’m connecting two identical starburst displays to the microcontroller in a multiplexed fashion. In an ideal world you’d connect the anodes of the displays together, segment A to segment A, seg B to seg B and so on, leaving just the cathodes of each digit to be connected separately. This makes the software a little simpler – pin D3 (say) on the microcontroller controls the same segment on all the digits:
On a single sided PCB, though, routing the connections like that gets really tricky – you end up with loads of connections that have to jump over others so they end up on the right pins.
The alternative is to design the circuit so it’s easy to route (even if that means segment A on one display is connected to segment F on the other etc) and then sort it all out in the software.
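As a sketch of what that software fix looks like – the pin assignments and the four-segment displays here are made up for illustration (the real starburst displays have many more segments), but the idea is just one lookup table per display:

```cpp
#include <cstdint>

// Hypothetical segment remapping: route the PCB however is easiest,
// then untangle the wiring in software with one lookup table per display.
// segMap[display][logicalSegment] -> microcontroller pin bit.
const uint8_t segMap[2][4] = {
    {0, 1, 2, 3},   // display 0: wired "straight through"
    {3, 2, 0, 1},   // display 1: segments shuffled to suit the routing
};

// Build the port bit that lights `logicalSeg` on `display`.
uint8_t segmentBit(int display, int logicalSeg) {
    return (uint8_t)(1u << segMap[display][logicalSeg]);
}
```

The rest of the code only ever talks in logical segments, so the mess on the board never leaks any further.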
Hence the relatively neat looking PCB design:
Note that there’s an error in the design / layout – I didn’t realise until too late that two of the microcontroller pins I wanted to use (A6 and A7) wouldn’t work as digital pins, so I had to use some wires to connect them to some free pins on the other side of the controller. Live and learn.
With the PCB milled out and populated, I hacked a rectangular hole out of a random wooden box I had knocking about, stuck a switch and a button on it, and squished in a bit of foam and a CR123A battery holder.
CR123A batteries are great for this sort of project – they only run at 3 volts, but that’s enough to drive LED displays without needing to add current limiting resistors. Helps that the display is multiplexed, too; though it looks like all the displays are lit up simultaneously, they’re actually taking turns, one digit at a time. Stops the LEDs from burning out – at most they’re on for 25% of the time.
The software’s pretty simple – there’s a list of about 45 swear words, and one gets picked at random. Proper randomness isn’t easy for a microcontroller (or a computer) to do on its own, so I measure how long the user has pressed the button, in microseconds, and use that value to help choose a word.
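Something along these lines – the word list here is a polite stand-in, and `pickWord` is a hypothetical name, but the trick is the same: the low bits of a microsecond press time are effectively random.

```cpp
#include <cstdint>

// Illustrative version of the word picker (words are polite placeholders).
// The number of microseconds the button was held is unpredictable in its
// low bits, so it makes a perfectly decent index into the word list.
const char *WORDS[] = { "blast", "drat", "fiddlesticks" };
const int NUM_WORDS = sizeof(WORDS) / sizeof(WORDS[0]);

const char *pickWord(uint32_t pressMicros) {
    return WORDS[pressMicros % NUM_WORDS];
}
```

On the real board you’d feed this from something like the Arduino micros() call, sampled on button release.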
Press the button and an animation sweeps across the display, with each letter of the word coming up one at a time like a fruit machine.
I’ve always wanted to play with a motion control rig. Ever since seeing a behind-the-scenes documentary about Star Wars, showing how they filmed the Death Star trench scenes: a huge model, with a robotic camera that could do all the flying shots over and over again perfectly so they could film all the individual elements and have them match up.
Robotic stuff was in its infancy back then; their robotic camera wasn’t controlled by computers, just lots of TTL logic chips wire-wrapped together with loads of knobs and switches to set speeds and design trajectories.
Motion control has come a long way since John Dykstra and the team built those first systems. Nowadays there are more competing systems than you can shake a stick at. But they’re all expensive. Way out of my range. The only way I could afford one is if I wanted to turn it into a business, do mo-co day in and day out, but I need more variety than that.
So I’ve built one. It’s mostly made of junk, and it’s got its limitations and quirks, but it’s mine. And it sort of works. Muhahaha.
This is the story.
First step in any robot is getting motors to do what you want. So I pulled apart an old ink-jet printer and a disco light and set about connecting it up to a computer. I made a silly video:
All seems a bit pointless, but the aim was to see if I could learn enough electronics and coding to get a computer to “play” a motion sequence back on a set of motorised things. And it worked.
At the end of the film, you can see the camera mounted in the yoke of an old disco light, panning and tilting; but what you don’t see is the very first snag I hit. If I tried filming something with the camera while it was being moved around, vibrations and wobbles from the disco light motors made the footage unusable. Disco lights don’t need particularly smooth motion; the motors and gears had been designed more for high speed moves.
So it was back to the drawing board. I needed to build my own camera mount, and motorise it. One key find was a big moving-head stage light, which had some huge pulleys in it – and when paired up with a motor with a tiny pulley it meant I could do much smoother (albeit slower) moves.
I went through several iterations:
By now I had a better idea of what sort of functionality I wanted. On the mechanical side, I wanted my rig to have:
– a pan and tilt head
– some kind of slider so the camera could actually move / translate in space, rather than being stuck in one place
– focus control
– a separate turntable accessory so I could rotate objects in front of the camera
On the computer side, I wanted software that could handle:
– an arbitrary number of axes of motion so I could add new stuff in the future
– manual control, so you could drag sliders around on-screen to move the robot
– kinematic limiters (so if you dragged a slider faster than the actual robot could move, it wouldn’t burn itself up trying to match your speed)
– easy trajectory design (ideally using Blender, my 3D software of choice)
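The kinematic limiter idea above is simple enough to sketch. Assuming a per-tick speed cap (the real thing would also cap acceleration), the commanded position just chases the slider’s target without ever exceeding the axis’s limits:

```cpp
#include <cmath>

// Minimal sketch of a kinematic limiter: however fast you drag the
// on-screen slider, the commanded axis position only moves towards the
// target by at most maxStep (units per tick). The motor never gets asked
// for a move it can't physically make.
double limitStep(double current, double target, double maxStep) {
    double delta = target - current;
    if (std::fabs(delta) <= maxStep) return target;  // close enough: arrive
    return current + (delta > 0 ? maxStep : -maxStep);
}
```

Run once per control tick for each axis; a slammed slider turns into a smooth ramp.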
I learnt how to write Mac apps in Swift – look up the courses on iTunes U, they’re great – and managed to cobble together something that worked:
I always knew I was going to need focus control of some sort. Not much point being able to have my camera move around if it couldn’t keep its subject in focus. Traditionally this is done externally to the lens: professional cine lenses all have gear teeth running round the focus ring so you can drive the focus remotely, either mechanically with a flexible shaft with a knob on the end (a follow focus) or with a motorised gear controlled down a wire.
But I’ve already got good DSLR lenses and I want to use them. And they’re autofocus lenses so they’re already motorised – all the hard mechanical work has already been done. But how to control them? I could always open them up and hook a motor driver up directly to the tiny motor inside them. My better lenses use ultrasonic motors, though, which may need rather exotic drivers to make them run. And I don’t know what sort of feedback mechanism the lenses use to track how far the motor has run or what distance the lens is focussed on.
A nicer option is to just pretend to be the camera. Rather than hacking the lenses up and removing their microcontrollers, just communicate with them the same way a camera would.
I connected it up to an Arduino, and started poking commands at it to see what happened.
Success! Just getting the lens to move at all without the camera feels like a triumph, but there’s a long way to go.
The nice thing about controlling lenses like this is that you can do all your testing without opening a lens up at all. All communications happen via a set of contacts on the back of the lens, and you can buy macro extenders that fit between a camera and a lens, and have contacts on each side to keep the two connected. I bought a cheap one off eBay, and stripped it down, removing the camera-side contacts. The contacts on the lens-side are spring-loaded, so I soldered a wire to each spring before re-assembling it:
Now it’ll fit onto the back of any Canon lens and let you command it:
The downside is that when you want to actually film something, you now have a macro extender between the lens and the camera, which changes the focus range of the lens significantly. Great for ultra-closeups (well, it is a macro extender) but no good for anything more than a foot or two away. And some lenses are unusable with an extender on; their focus range ends up being pretty much inside the lens itself.
Autofocus lenses and focus distance
One of the big issues with trying to control a lens like this is that it’s not what the lens was designed for. I want to be able to say “set your focus to exactly 300mm… now pull back to 297mm…” etc. But these lenses are designed for autofocus, which means neither they nor the camera ever need to know the distance they’re focussing on, just that the subject is in focus. The lens can step the focus forwards or backwards by tiny discrete amounts – steps – but doesn’t need to know what effect it’s had on the focus distance. And the camera is only interested in the sharpness of the subject (which it can estimate from the amount of high-frequency detail in the image coming in).
So it’s normally more a hunt than anything else. The camera looks at the image coming in, decides which way out of focus it is and roughly how much, then tells the lens “focus 20 steps towards infinity”. Then it looks at the image now coming in, and corrects and fine tunes the focus: “3 steps inwards”, “ok, just 1 step back out again”. Smart look-up tables for each lens give the camera help in picking the right number of steps to use for each guess, but it’s an iterative process. Distance measurement doesn’t come into it.
(That’s why some pro camera flashes have an AF illuminator – usually red – that projects a striped pattern onto the subject for a moment before the camera takes the picture. It gives the camera some detail to look at and help it assess focus. Point an autofocus camera at something with not much visible detail (say, a sheet) and it’ll struggle to focus. Note that some cameras, particularly camcorders, may have an infra-red distance detection device to make a quick estimate of the distance first, to speed up the first coarse focus move, but the fine tuning is always done “visually”)
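A toy model of that hunt, with a made-up sharpness metric that peaks at the in-focus position (a real camera obviously measures sharpness from the image itself, it doesn’t know the answer in advance):

```cpp
#include <cstdlib>

// Toy model of the autofocus hunt described above. sharpness() is a stand-in
// for "how much high-frequency detail is in the image": it peaks when
// pos == focusAt. The camera nudges the lens by ever-smaller step counts
// until sharpness stops improving -- distance never comes into it.
double sharpness(int pos, int focusAt) { return -std::abs(pos - focusAt); }

int hunt(int pos, int focusAt) {
    for (int step = 20; step >= 1; step /= 2) {   // coarse guesses first
        while (true) {
            if (sharpness(pos + step, focusAt) > sharpness(pos, focusAt))
                pos += step;                      // better: keep going out
            else if (sharpness(pos - step, focusAt) > sharpness(pos, focusAt))
                pos -= step;                      // better: come back in
            else
                break;                            // no improvement: refine
        }
    }
    return pos;
}
```

The per-lens lookup tables the real camera carries are just there to make those first coarse guesses better.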
So: autofocus lenses shift focus in steps, but I need to know distances. To make things more awkward, each lens has different mechanics and motors, so the total number of steps in its range – and how many steps it takes to shift the focus by an inch – varies widely from lens to lens. And the relationship between steps and actual focus distances is distinctly non-linear, too (the hint is that word “infinity”), so while 10 steps at the near end of the lens’s range may shift the focus by a few millimetres, at the far end 10 steps may shift it by 100 metres.
All this means look-up tables are the way to go. I can tell the lens to pull the focus all the way in to its closest focus, then have it move out by 10 steps at a time, and measure and record the distance that’s in focus.
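In code, the lookup table plus a bit of linear interpolation might look like this – the calibration numbers here are invented; the real ones come out of the tape-measure session:

```cpp
// Sketch of the calibration-table idea: pairs of (steps from nearest focus,
// measured in-focus distance in mm), with linear interpolation between
// entries. These numbers are made up for illustration.
struct CalPoint { int steps; double mm; };

const CalPoint CAL[] = {
    {0, 300}, {10, 320}, {20, 360}, {30, 450}, {40, 700}, {50, 2000},
};
const int CAL_N = sizeof(CAL) / sizeof(CAL[0]);

// Convert a desired focus distance into motor steps by inverse interpolation.
double stepsForDistance(double mm) {
    if (mm <= CAL[0].mm) return CAL[0].steps;     // nearer than closest focus
    for (int i = 1; i < CAL_N; i++) {
        if (mm <= CAL[i].mm) {
            double t = (mm - CAL[i - 1].mm) / (CAL[i].mm - CAL[i - 1].mm);
            return CAL[i - 1].steps + t * (CAL[i].steps - CAL[i - 1].steps);
        }
    }
    return CAL[CAL_N - 1].steps;                  // beyond the table: clamp
}
```

Note how the step spacing stretches towards the far end of the table – that’s the non-linearity showing up in the data.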
So I printed out a nice sharp focussing chart (just a nice sharp picture to focus on), set the camera up on a tripod, and hooked up an external monitor so I could check the focus without needing to squint through the viewfinder. Then, tape measure in hand, I stepped through the first lens’s range of focus, moving the focussing chart back and forward in front of the camera until it was at its best focus, and noting the distance each time.
But it wasn’t working properly.
For this to work, I need to know that if I tell a lens “move focus 10 steps forward” then “move focus 10 steps back again” it’ll end up focussing on the same spot it started at. But the first lens I tried profiling didn’t do that. And while telling it to reset to its closest focus always brought it to the same spot, if I stepped it forward by 100 steps, it ended up at a different position than if I stepped it forward by 10 steps ten times in a row. Like the lens was losing steps. Not good.
The lenses do kind of keep track of their current position, but even that’s not without its issues. All autofocus lenses have a manual option: a focus ring you can grab with your paw and twist to focus. Some lenses have a mechanical switch that disengages the autofocus motor, but some have a slipping clutch that lets you manually focus at the same time as the motor (which probably annoys the camera). So the lens can only tell you how many steps it’s cumulatively tried to move the focus since it was powered on – it has no idea whether that’s where the lens focus actually is right now.
Bit of a show-stopper for me, though. The only way I can track where the lens is, is by dead-reckoning; moving the lens to a known point (the nearest focus point), resetting my counter, then just blindly sending commands and keeping track of how many steps back and forth I’ve asked it to do.
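The dead-reckoning bookkeeping itself is trivial – a hypothetical tracker, valid only after homing the lens to its near end-stop:

```cpp
// Dead-reckoning sketch: the only trustworthy reference point is the near
// end-stop, so drive the lens there, zero the counter, then record every
// step command sent. If the lens loses steps (or someone grabs the manual
// focus ring), this count silently drifts from reality -- hence the need
// for a lens that faithfully executes every move.
struct FocusTracker {
    long position = 0;   // steps out from nearest focus; valid after home()

    void home() { position = 0; }                 // call after hitting stop
    void move(long steps) { position += steps; }  // record each command
};
```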
Thought I might as well try a better lens, just to see.
My EF-S 17-55mm IS USM lens worked perfectly. As long as I didn’t touch the manual focus ring, I could send it off to a hundred different points of focus, and back to a point near an end-stop (but not actually *on* the endstop, as that’s the one place I know any lens will be consistent), and the lens came back to the right spot every time. Well, within a gnat’s whisker of it.
So: my posh expensive “ultrasonic motor” lens worked. We’re back on track. I tested my other lenses; for the most part, the more expensive, the better. There are different flavours of ultrasonic motor. The better ones are ring-type, where the motor is in the form of a ring that surrounds the focus elements inside the lens, and it more or less directly drives the elements with a minimum of extra gears and cogs. There are cheaper lenses around that get to say they’re ultrasonic although internally they’re very similar to a normal lens: they have a tiny motor (albeit ultrasonic) positioned to the side of the focus elements, with lots of gears and a rack and pinion affair to actually drive the elements.
I now have a lens I can control the focus and iris on, but if I want to get at the contacts on it while it’s mounted on a camera, I have to use the macro extender. Not ideal. I want to be able to use the lens’s normal focus range. So. (gulp) Time to open it up, remove the contacts so the camera can’t interfere with my plans, solder some wires to the right bits inside the lens, and drill a hole in the lens for the wires to come out. Drilling a hole in a £600 lens… still, can’t make an omelette etc etc
Rather than cutting out a new PCB, I hooked up an old Arduino to act as an interface between the computer and the lens. It uses the same protocol as my motor drivers; it receives a list of positions (in steps – I do the conversion from real-world focus distances to focus motor steps on the host computer) and a clock signal, and it just plays through the sequence, sending a new command to the lens every 25th of a second. The additional PCB attached to the Arduino is a DC converter to provide an additional 7.2V supply to the lens:
I set up a quick camera move in Blender, and had Blender calculate the focus distance for each frame along with the motion data. It’s supposed to be focussing on the lego man, but when I measured the scene up to recreate in Blender, I tape-measured to the top of the tripod he’s standing on instead. After shooting the motion, I dropped the video into After Effects and superimposed a star-field over the top to see how well the camera’s actual motion matched the trajectory I’d created. Hence the dots.
I haven’t gone into the actual EF protocol at all here, and it’s not without its quirks. I’ll write an article on it at some point. Google will get you started, though.
Also, I had to do some hacky coding to deal with the fact that EF lenses tend to ignore commands if they’re busy. An ignored command is a show-stopper: all moves from then on will be wrong. There are different kinds of “busy” as far as the lens is concerned, though, and it may report itself as not busy (i.e. ready for commands) when it’s in the middle of moving the lens, and then it’ll ignore the new lens move command. So if you’re following in my footsteps, you’ll need to send the bytes 0x90, 0xB9, 0x00 to the lens, and if the byte that comes back in response to that final 0x00 is either 4 or 36, the lens is moving. So wait a bit and try again, otherwise your command will be acknowledged but ignored.
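To make that concrete, here’s the gist as a sketch – the byte values are just what I observed on my lenses, not any official spec, and `sendByte()` stands in for whatever SPI transfer routine you’re using:

```cpp
#include <cstdint>

// Busy-poll helper for the EF exchange described above. The 0x90 0xB9 0x00
// sequence and the 4 / 36 status values are my own observations from poking
// at lenses, not documented behaviour -- treat them accordingly.
bool lensIsMoving(uint8_t status) {
    return status == 4 || status == 36;
}

// Pseudo-usage, assuming a sendByte() that clocks one byte out to the lens
// and returns the byte clocked back in:
//
//   sendByte(0x90);
//   sendByte(0xB9);
//   uint8_t status = sendByte(0x00);
//   while (lensIsMoving(status)) {
//       // wait a little, then poll again before sending the move command
//   }
```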
In the bad old days, I chucked some solar panels on our extension roof, ran some cables to the attic, and had this sort of mess going on: Cheapo MPPT solar battery charger connected to lots of old car batteries.
Not pretty, but it let me wire the house for 12V, and fit lots of lighting and stuff that wasn’t reliant on the grid. Finally got round to tidying it up last year…
Just got hold of a Razer Hydra, a 3D motion controller system aimed at gamers. There’s a base station with a glowing green ball on the top that needs to sit directly in front of you, and two handheld controllers with buttons and joysticks on them. They constantly feed back their orientation and position to the computer, so you can wave them in the air or twist and turn them, and objects on the screen follow along. I’m not into them as games controllers (prefer the ole mouse and WASD meself) but with a bit of hacking and help I’m hoping to use them as motion controllers for my graphics work. Record my motions as I manipulate the controller and apply them to, say, a character’s head on screen. Quick and expressive way to animate secondary characters in animations. Nothing new, but this is dirt cheap – £80! – so well worth a punt, and not the end of the world if I can’t get it working.
The base station is way too inconvenient (and has too many wires attached) to be sat there in the middle of your desk all day. That’s precious real estate, and I’ve got a system for my various keyboards where I can slide ’em in and out under my monitor stand, and that round thing just doesn’t fit.
The base station seems to work just as well upside-down, though, so I’m going to try sticking mine under my desk. (The base station uses a magnetic field to sense where the controllers are, so this trick won’t work with metal desks, and possibly not with very thick wooden ones, but mine seems to work OK. So far.)
Damn thing is too tall, though – 10cm-ish – and I’d keep knocking it with my knees. So let’s void its warranty.
Even knocking a few centimetres off would help; I know the coils that let it do its magic are housed in the black ball on the top, and I’m hoping we can get that off and mount it to the side of whatever’s in the bottom section. Slim the whole thing down.
Ahh yes, the old hide-the-screws-under-the-rubber-feet thing.
Managed to get hold of a rare old mill – an Emco F1. It’s a beast. Here’s what they used to look like when they were first sold (gotta love that jacket, huh?):
Before getting mine into the house I stripped off all the metal casings to leave just the essential working bits. No way to get it into the secret-workshop/attic otherwise. The control box was huge too:
… but with luck I’ll be able to ditch it and control it completely from a computer. Before trying to get the thing to move, though, the mechanics of the thing needed overhauling. I stripped it down completely:
And cleaned and degreased all the bits:
And that’s when I found that the leadscrews were a bit knackered. The leadscrews are almost the most essential parts of the mill – they’re the mechanisms that move the head up and down, and move the piece you’re milling back and forth under the milling head. They’re supposed to move smoothly, but these ones felt… crunchy.
Taking apart leadscrews is not for the faint-hearted. Although they look like a simple screw and nut, there’s a line of ball bearings running around the thread between them.
To stop the bearings from just falling out when they reach the end of the thread, there’s a little return tube that carries the ball bearings back to the start of the thread again.
Some of the ball bearings in this mill had shattered and jammed the nuts, stopping them from moving properly up the leadscrew. Nothing for it but to dig ’em all out and re-ball the thing.
Trouble is, when the leadscrew’s assembled you can’t access the ball bearings – can’t even see them. The only way to ‘refill’ the nut with bearings is to stick them into place inside the nut, one by one, with tweezers. I found that coating them in gloopy lube let me stick them into place inside the nut thread, where they held for just long enough to screw the nut back onto the screw again.
Once the nut is loaded with bearings again (not forgetting that the little ball return tube has to be filled with them too), you can gently thread the nut back onto the leadscrew. As you screw them together, the ball bearings get pushed up the thread toward you, but you can use a toothpick to nudge them into the little return tube.
I couldn’t get all the nuts working again. There were two nuts on each leadscrew, with a setscrew to put some tension between them, letting you tighten the nuts against the screw so there’s no slack or wobble. One of the nuts was completely knackered, so I decided to ditch it and go with a single nut on the Z-axis (the axis that lifts and drops the milling head), figuring that gravity would help counter backlash.
I assembled it all again – this time with new stepper motors.
The new motors didn’t quite match the old ones – the shafts were too long – but I managed to get them to fit by spacing them with spare washers and nuts.
Finally, I carried the thing upstairs to the attic. Not something I’d want to do again 🙂
Ever since I found you could get little SMD jumpers (zero-ohm resistors), laying out complicated circuits on single-sided circuit boards has been much easier. Normally, if you need a signal to cross over other tracks without touching them, you have to solder in little wire jumpers to form bridges, which means a lot of careful wire measuring and stripping (you can see the red ones above). For little jumps I can use the SMD resistors – the little black oblongs with 000 printed on them. If you’re careful they can jump over 3 other tracks…
h at howiem dot com — animation, music, electronics, ramblings