Blender to Houdini camera exporter script

Unlikely anyone but me needs this, but just in case: a Houdini script to copy camera animation to the clipboard as a Blender-flavour Python script. Select a camera (or camera switcher) in Houdini, run the script, go to Blender, create a new text block and hit Paste. Execute it and Boom! there’s your camera, all animated an’ stuff.

Suspect most folk wanna go the other way, but I’ve a stupidly complex object already in Blender that wouldn’t be trivial to export, and my scene’s in Houdini. Lots of zany animated texture stuff going on as well… why recreate it if Blender can render it happily? Just need the camera to match up with the rest of the scene.

I’d been going round the houses, exporting to AE first, then from AE to Blender, but focal-length and DoF settings weren’t making it through, so this is an improvement.

Always weird, though, writing a Python script that generates … a Python script. ‘Specially when you’re running it in one 3D package, with its own data structures/methods, and it has to produce a script for a different package with different names and concepts for everything.
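
The shape of the thing, roughly (a minimal sketch, not the actual gist — it just bakes the camera’s local transform and focal length per frame, and skips the Y-up/Z-up flip and rotation-order conversion the real script has to deal with; object names here are only examples):

import hou
import math

cam = hou.selectedNodes()[0]                      # assumes the selected node is a camera
start, end = hou.playbar.frameRange()

# lines of the Blender-flavour script we're generating
out = [
    "import bpy",
    "cam_data = bpy.data.cameras.new('houdini_cam')",
    "cam_obj = bpy.data.objects.new('houdini_cam', cam_data)",
    "bpy.context.collection.objects.link(cam_obj)",
]

for f in range(int(start), int(end) + 1):
    tx, ty, tz = [cam.parm(p).evalAtFrame(f) for p in ("tx", "ty", "tz")]
    rx, ry, rz = [cam.parm(p).evalAtFrame(f) for p in ("rx", "ry", "rz")]
    focal = cam.parm("focal").evalAtFrame(f)
    out.append(f"cam_obj.location = ({tx}, {ty}, {tz})")
    # Blender wants radians; proper axis conversion omitted in this sketch
    out.append(f"cam_obj.rotation_euler = ({math.radians(rx)}, {math.radians(ry)}, {math.radians(rz)})")
    out.append(f"cam_obj.keyframe_insert('location', frame={f})")
    out.append(f"cam_obj.keyframe_insert('rotation_euler', frame={f})")
    out.append(f"cam_data.lens = {focal}")
    out.append(f"cam_data.keyframe_insert('lens', frame={f})")

hou.ui.copyTextToClipboard("\n".join(out))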

hey ho…. github gist linky

Apple, why did you split the bundle? [rant]

Oh, Apple, oh Apple.

Not often do I feel the need to write about this kinda thing, but on a day (WWDC 2019 Keynote) when you managed to confound my expectations in so many positive ways, one of your decisions has left me in a state of bewilderment: that bloody monitor stand. I just don’t get what you’re trying to do.

To sum up, you’ve built an incredible high-end monitor at an astoundingly low price, relatively speaking ($5000, where comparable monitors are $50,000 and up), but you’re selling the stand separately and charging $1000 for it. A grand, for a stand.

And I’d happily buy the monitor and stand together for $6000—even $7000!—but I can’t afford to pay $5000 plus $1000.

It’s a nice enough stand. A little brutal in its design, perhaps, rather sternly jutting up from your desk, but it’s not offensive, and of course—of course—it’s exceptionally engineered. As is typical of you guys, you’ve managed to reduce and polish and encase and ultimately hide an enormous amount of clever goings-on.

It’s going to be delightful to use. But all that clever finger-tip control counterbalancing stuff’s been miniaturised and hidden away: and it’s all behind the screen anyway, so the only people who’ll be able to see and appreciate its beauty are those walking behind the monitor. Which doesn’t happen much in studios. And, honestly, once you’ve positioned a monitor right, you kinda don’t move it much; it just needs to stay there. So it’s going to be delightful to use, once or twice a month perhaps, for a second or two.

You’ve basically given us an expensive solution to a problem everyone else has apparently solved cheaply. But that’s you, isn’t it, Apple… I mean, even the way you design a remote control is a bit nutty.

[image credit: hifiduino.wordpress.com]

They’re just an afterthought for most manufacturers, but you came up with a jaw-dropping “ship in a bottle” approach to hide all the electronics and battery in a seamless sliver of metal. Few people, however, actually notice anything beyond “nice remote. Hmm… what’s on?”, click. Lots of engineering-fu, but completely hidden from sight.

And I love it.

I’m one of those who does notice my phone’s feel as I take it out my pocket. I’ll get distracted now and again while reading, suddenly aware again of the screen’s clarity and proportions and quality, and remembering what phones used to be like, and feeling lucky I’m in an age where this stuff is happening. As much as you’d say you want the hardware to fade away in your hand, leaving just a portal to your task, your friends, the internet – well, I find it hard to stop noticing the hardware miracle itself. I love this stuff, Apple.

So I’m your damn target audience. But I still can’t pay a grand for a monitor stand.

Well—more accurately—I can’t be seen to have paid a grand for a monitor stand.

The Apple deal has always been: you’re paying more, but it’s justifiable. I can justify paying “the Apple Tax” to my clients, my family: my 10-year-old Mac Pro is still the centre of my studio. It was difficult to explain to my wife at the time why I was paying so much of our money for a computer, but the economics of it are absolutely clear now.

And that’s what’s so crazy to me: the monitor itself is astoundingly good value; no one could dispute it. But the stand – especially to those who aren’t aware of the engineering feat you’ve so carefully hidden inside it – looks like a profligate purchase. The audience at the Keynote reacted with an intake of breath when they heard the price – they’d already forgotten that it, well, probably was worth it, judging by that x-ray view of cogs and springs and stuff inside it – but all they could think – like me – was “the stand is how much?”

[sfx: crowd murmuring “whaaa…?”]

‘Cos you can’t not have a stand. There’ll be the few who buy the VESA mount option instead, and some godawful gas-lift swivel-arm desk-clamp nonsense from Amazon, but most people perceive a monitor stand as a non-optional accessory. And so does pretty much every manufacturer. Just like remote controls for TVs.

I mean, that second gen Apple TV (+remote) was $100. But if you’d launched it like this:

… you’d have been ridiculed. Even if that’s how the costs broke down, manufacturing-wise. The box itself would lose perceived value, and by charging for something we’re kinda going to need anyway (I mean, you can control the Apple TV without a remote by using the iPhone app, but it’s not as convenient), things start to feel a bit scammy. A bit nickel-and-dimey.

But that’s what you’ve done with the monitor: as a $6000 bundle, it would have been excellent value. No one wants to think what proportion of that money is going somewhere other than the pixels – you’re paying $6000 for a glorious window onto your task. In my line of work I can probably at some point justify $6000 worth of pixels in front of my eyes.

You’ve decided not to offer me that, though, instead telling me I can only have $5000 worth of pixels, but I have to spend another $1000 anyway.

Perception is everything. And you’ve inexplicably and inextricably tied together two messages here: monitor = professional, stand = status symbol.

I can justify one to my clients, to my wife, but not the other. Oh, Apple. I can’t do it. What were you thinking? Why’d you split the bundle?

[image credit: u/JakesFriendsBrother, reddit]

tldr: I’d have paid you all the moneys if you hadn’t told everyone how it was split between the monitor and the bloody stand

Welcome to the farm. In the attic.

Big changes at h Manor.

First, a client needed me to re-render some old projects – big projects (dome projection, 4K x 4K, around 10,000 frames), and half were created in Blender, half in Houdini / Redshift.

Second, I’ve moved most of my pipeline over to Linux. Mostly because Apple and nVidia really aren’t getting along, and that’s causing huge problems for people whose pipeline depends on nVidia GPUs. But also because Linux seems to be the OS of choice for larger studios, so it makes sense to get my head round it.

Timescales on the project are fairly tight, which means there’s pressure to deliver fast, but there’s also pressure to not screw up – and to not let the hardware / software screw things up. So, backups, fault-resilience, fast QC etc., all suddenly Very Important.

And because the project involves huge amounts of data (several TB), anything I can do to speed up the pipeline is good.

First step: I bought a second-hand Dell PowerEdge R510 rack-mount monster from eBay; it was about £200, and it’s got 14 drive bays, two redundant power supplies, and it sounds like a jet plane is about to take off when you switch it on. I’m in love.

I don’t have a rack to mount it in, so it’s just here on the floor, sat sideways, but it’s working happily; it’s got 3 pairs of drives, each in a RAID-1 “mirrored” config, so a drive can fail without me losing anything, and when I plug in a replacement, the server will rebuild the data on it.

Yep – there’s an orange “warning” light on one of the bays – one of the drives was failing from new, but it turned out to be a good way to teach me how to rebuild a RAID set from the command-line. Though it’s a Dell server with a Dell RAID controller, there’s a package called MegaCli that lets you remotely administer things. Lots of command-line switches to learn, but it’s sorted now, and apart from physically pulling out the dead drive and plugging a new one in, I did it all from downstairs. Freaky.

The server’s running Linux Mint (like everything else here). Not the ideal choice for a server, as it’s got a GUI / graphical desktop that I can’t actually see or use as I don’t have a monitor attached, but it’s good enough for now. And it turns out £200 buys you a lot of grunt if you don’t mind the industrial-size case it comes in: it’s got 32GB of RAM, 2 Xeon quad-core processors (same family as my Mac Pros).

But I need GPUs: the renderers I use for Blender (Cycles) and Houdini (Redshift) use graphics cards for their processing, which makes them less flexible but much faster at churning out the frames. So I needed to set up some render nodes to actually do the rendering.

I dug out some bits and pieces from various junk boxes and managed to put together two machines; they’re both fairly under-powered, CPU-wise (Core i5), and don’t have a lot of memory (16GB each), but they can handle a few GPUs:

A bit of a mish-mash: two GTX 1080 Tis, two GTX 1060s, two GTX 690s, and an RTX 2060; plus there’s another two GTX 1080 Tis in the main workstation downstairs. I did have three GTX 690s, but two of them died in (thankfully) different ways, so I managed to cobble together a single working one out of their bones.

For someone who works with images, it’s kinda weird spending a couple of weeks looking at command-lines, setting renders going by typing a command, rather than clicking a button, but you get used to it. Gives you a strange sense of power, too. Rather than watching the frames materialise on screen, I get to watch the nodes’ progress reports in text. Strange.

Both Blender and Houdini were a pain in the arse to get going on Linux, though; I could get a Mac set up in about half an hour if pushed, but on Linux—and with a noob at the helm—they took a couple of days to sort out. Blender needed nVidia’s CUDA stuff installing, which largely consisted of installing and uninstalling and swearing and roaming help forums and more installing and uninstalling. But I managed in the end, and all without actually plugging a monitor into a computer; all done remotely by ssh.

Houdini and Redshift were a pain in a different way: you can install them perfectly easily from a command prompt, but unlike Blender they’re commercial products and need their own license servers installing and setting up too before they’ll work. And Redshift—really guys?—won’t let you actually activate one of your licenses from a command prompt: the licensing tool only works in a GUI. So in the end I had to dig out a monitor and keyboard. And find a VGA cable… I know I’ve got a bunch of them somewhere in here:

Finally found one, plugged it all up, spent about 20 seconds licensing Redshift, disconnected it all again. Finally, everything in the attic seemed to be talking to each other successfully… and even more thankfully, I now don’t have to actually go up there much; I can control it all from downstairs.

So: a server, two dedicated render nodes, three workstations, an old laptop acting as a queue manager, and everything working together; two of the workstations still running MacOS (for compositing and editing and admin/email) while everything else is on Linux.

It’s been quite a month. But the outcome is this:

… I can queue up render jobs, they’ll get farmed out automatically to machines as they become free, and I no longer have to get up in the middle of the night to set the next one going. At least, as far as the Houdini stuff goes; I’m still setting Blender renders going manually (albeit remotely via ssh) so I’ve got to sort out some scripts to do that bit a little more cleverly.
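
When I do, it’ll probably look something like this — a rough sketch only (hostnames, paths and the frame split are made up), just ssh-ing into each node and kicking off Blender’s command-line renderer in background mode:

import subprocess

# placeholder node names, project path and frame ranges
nodes = ["render01", "render02"]
blend_file = "/mnt/server/projects/dome/shot_010.blend"
chunks = [(1, 5000), (5001, 10000)]        # split the frame range between the nodes

for host, (first, last) in zip(nodes, chunks):
    # -b: run Blender headless; -s/-e: start/end frame; -a: render the animation
    cmd = f"blender -b {blend_file} -s {first} -e {last} -a"
    subprocess.Popen(["ssh", host, cmd])   # fire and forget; a proper queue would track these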

Houdini / Redshift: Varying disk-based instance materials

Thought I’d better try writing this up as it took me bloody ages to get my head round it.

Situation: you have an object that you want lots of instances of, all with different looks. Say, one tree object, but you want lots of instances with variations in their colouring.

The approach differs depending on whether your tree object is in the scene file somewhere (i.e. instanced using s@instance="/obj/my_tree") or if it’s a file on disk (s@instancefile="$HIP/geo_trees/my_tree.rs").

Instancing a scene object

You can use either point attributes or stylesheets to poke new values into the material parameters: see https://docs.redshift3d.com/pages/viewpage.action?pageId=12419786&product=houdini for guidance.

Instancing a proxy file from disk

This is slightly more tricky: you must use stylesheets, and you can only poke new values into parameters that are exposed at the material’s top level VOP node. I’ve created a hip file to illustrate:

Download: RS per-instance colours_001.hiplc

To test it, just make sure you “render” the Proxy ROP first, to create an .rs proxy file for the scene to use. It’ll create an RS proxy containing this single plane with some nasty colouring:

Then you can go to Render view and render an image.

You should see this lurid mess:

But that means everything is working. Every plane has a different hue shift going on, even though they’re all identical instances. If it wasn’t working, you’d see this:

This is how it works: to vary the material, I’ve stuck an RS Color Correct node in the material, and an RS Multiply node to multiply an incoming 0-1.0 value by 360 (turns out the hue shift parameter on a colour correct node wants a value in degrees, which caught me out for a while):


and exposed the Hue Shift parameter—promoted it to the top level VOP—so it now appears as a parameter on the material itself:

That parameter name is important—hue, in this case—as that’s the thing we can bind and alter on an instance-by-instance basis using a stylesheet override.

So—into the stylesheets: to get to them, open a Data Tree pane, and choose “Material Style Sheets” from the dropdown. I’ve added a Style Sheet Parameter, a Style, a target (“Point Instances”), and two overrides:

The first override sets the material to the one that’s in the scene (as opposed to any that may be saved in the proxy file itself). You can only override materials present in your scene, not ones within the proxy (as far as I can tell).

The second override is an “Override Script” – even though it’s not really a script: you can choose “Material Parameter” and “Attribute Binding” from the dropdowns. The Override Name is the parameter that’s exposed on the material—”hue”—and the Override Value is the point attribute you want to bind it to. In this case, my Instance object has a bunch of points to instance to, each with an @hue_shift attribute.
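
If it helps, the instance points themselves don’t need anything clever. Here’s one way you might set them up — a sketch in a Python SOP (a wrangle does the job just as well); the proxy path just echoes the example above, and the attribute names match the ones the stylesheet binds:

import hou
import random

# Python SOP: give every incoming point a proxy file to instance,
# plus a random 0-1 hue_shift for the stylesheet override to bind to
node = hou.pwd()
geo = node.geometry()

geo.addAttrib(hou.attribType.Point, "instancefile", "")
geo.addAttrib(hou.attribType.Point, "hue_shift", 0.0)

proxy_path = hou.expandString("$HIP/geo_trees/my_tree.rs")

for pt in geo.points():
    pt.setAttribValue("instancefile", proxy_path)
    pt.setAttribValue("hue_shift", random.random())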

And that’s it. No reason to only use this for shifting hues: you could create a material with a shader switch, or a texture, and as long as you expose the switch parameter or the texture filename string in the top VOP level of the material, you can poke new values in at render time like this.

One caveat/limitation: Redshift is blazingly fast at rendering loads of instances. But if you try tweaking thousands of instances’ materials like this, you may find the pre-render processing, as Redshift builds the scene, becomes quite time-consuming. I’m guessing that behind the scenes Redshift is creating a new shader for each tweaked material. So for huge scenes with loads of instances, you may need to take a different approach to adding variety. Stay tuned.

Houdini HDA: Particle Looker 1.7

Should be called “Point Looker” really. Super simple, just a time-saver if you keep needing certain looks, which I found I did. Plug some points in the top and this will create Alpha / Cd / pscale attributes. Set colour from a ramp, or from a base colour and then hue/sat/val variance; Alpha over age or from a custom attribute; Size based on mass, or a ramp-based distribution.

And a “Create specials” option, which takes some proportion of your particles (best not too many) and gives you some extra controls. Handy for making a few odd differently coloured or extra bright particles to spice up the mix, like in the top image.

Colours can be cycled through the ramps too, to add a bit of life.

No guarantees, may not function as advertised, may burn down your house. But it’s reasonably well documented if you dive inside, and most of the parameters have hover-text help. If you’re happy with that, download: com.howiem__h_particle_look__1.8.hdalc

Houdini: copying camera animation data to After Effects

I use AE for comping stuff, so once I’ve rendered something out of Houdini, I usually need the camera animation data in AE to line up flares etc. This Python script, stuck on a button in Houdini, will copy the animation to the clipboard in a format you can paste onto a Camera in AE. Almost. There’s one additional step: paste the data into something else first—any old plain text editor—and copy it from there into AE. Dunno why, but it makes it work…


Houdini quick tip: Control particle birth rate with a point attribute

You can control a POP Source’s birth rate using an emission attribute, but it only works with primitive attributes, not points. Bit of a pain. Often I just want to birth from a single point – but I want to leave the door open to adding more sources later, so using the POP Location node is a bit limiting.

Turns out the POP Source node is actually not too tricky to understand – unlock it and dive in. You’ll find a Solver SOP: dive in there, and you’ll see how particles are birthed. Awww.

The wrangle you need to tit about with is here:

It’s called attribwrangle1, and it’s just above the “random_points” null.

Here’s its existing code:

#include <voptype.h>
#include <voplib.h>

int npts = ch("npts");
int seed = ch("seed");
float frame = ch("frame");
int firstpointcount = ch("firstpointcount");
int firstprimcount = ch("firstprimcount");

for (int i = 0; i < npts; ++i)
{
    float r = rand(vop_floattovec((float)i, frame, (float)seed));
    int sourcept = (int)(r * firstpointcount);
    sourcept = clamp(sourcept, 0, firstpointcount - 1);
    int newpt = addpoint(geoself(), sourcept);
    setpointattrib(geoself(), "sourceptnum", newpt, sourcept);
}

for (int i = 0; i < firstprimcount; ++i)
    removeprim(geoself(), i, 0);
for (int i = 0; i < firstpointcount; ++i)
    removepoint(geoself(), i);

It works by taking the total number of points that are supposed to be birthed this frame and spreading it randomly across the source points. But we want to go through all the source points, read the @birth attrib, and birth that many particles on the point. So replace the first for loop (the one that spreads npts across random source points) with this new code:

for (int sourcept = 0; sourcept < firstpointcount; ++sourcept)
{
    for (int j = 0; j < point(geoself(), "birth", sourcept); j++)
    {
        int newpt = addpoint(geoself(), sourcept);
        setpointattrib(geoself(), "sourceptnum", newpt, sourcept);
    }
}

To test it:

  • Set the emission type to “Points”
  • Ensure there’s a point attribute called i@birth on the points being fed into the POPNet
  • Now, every frame, each point will birth @birth number of particles. Ta-da. You can drive the @birth attribute any way you like – even just using noise makes for some sexy effects.

Note that the Constant and Impulse birthrates are now ignored. You ought to do some more tidying up of the node, and perhaps create a new HDA – but this’ll do for now.

A super-quick test (and hey, gotta love the new Attribute Noise SOP – saves having to build a VOP net every time):

That attribute wrangle above the popnet just says

i@birth = int(10.0 * @Cd.r);

Animating the birth rate on a per-point basis is now super easy. And yep – you can do something similar-ish using primitive / surface emission attributes, but this gives you a whole world of different looks, and, for me at least, much easier control. I love point-based emission. Fireworks, here I come.

Houdini quick tip: random seed for your HDA instances

Sometimes you need each instance of an HDA to have a random seed for internal use. Sometimes you just can’t be arsed to do it yourself. This’ll do it – create a seed parameter in the UI, and then stick this in the onCreated handler to set it automatically on creation:

node = kwargs['node']
seed = 6173 * node.sessionId() + 4139
node.parm('seed').set(seed % 10000)

From Leaf on the SideFX forums, who adds: “Explicitly setting the seed parameter on node create is the safest approach. Basing the seed on the node name can quickly cause problems when you go to clean up your scene and re-name everything (Toootally never accidentally done that before :)”

h at howiem dot com — animation, music, electronics, ramblings