Rendering small glowing particles seems to be a thing I’m doing a lot of at the moment. One thing you discover really quickly is that particles that end up smaller than one pixel wide in the rendered image need plenty of samples to avoid flickering: it’s easy for the camera rays to miss them completely on one frame, then hit them squarely on the next.

It’s a real pain if you have a camera moving towards or away from particles, as you generally have to set up the render for the worst case (furthest away), which can mean loads and loads of samples, and really slow renders.

The solution is obvious, really: make the particles bigger. Ultimately, the end result we want when we render a particle that happens to take up half a pixel’s area is a pixel that’s half “on”. So: we need to take any particles that end up on screen smaller than a pixel, enlarge them so they now take up a whole pixel, and drop their opacity (Alpha) to compensate.
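A quick worked example (my numbers, just to make the compensation concrete): say a particle covers half a pixel’s area.

    covered area: 0.5 px               target: 1.0 px
    scale radius by sqrt(1.0/0.5) ≈ 1.414  -> now covers a full pixel
    scale Alpha  by 0.5/1.0         = 0.5  -> a full pixel at half opacity
    net result: the pixel is still half "on", but every ray hits it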

Note: this approach works great for particles that emit light – so it’s great if you’re doing UI or visualisation work, or streams of sparks, fireworks, star fields etc. – but if you’re doing solid particle stuff, where the material has diffuse and/or specular components, you’re on your own. This may help, but it may just make your renders look wrong and strange.

But I do a lot of glowy sparkly particle stuff, so I’ve been wanting to do this for almost as long as I’ve been playing with graphics, and now I’ve found Houdini, I’ve finally got the tools I need to do it.

Here’s a before and after of my little resizer thingy—both are rendered with Redshift at 8(min)-32(max) samples per pixel, and both took the same amount of time (35s) to render:

It cleans up those horrible flickery distant/small particles nicely. And the cool thing is that the same wrangle works on strand/wire rendering too. Redshift uses the @pscale attribute to control strand width, so we only need to change the way opacity is set in order to get the same benefit. This is only 4(min)-8(max) samples, but look how much cleaner the distant lines end up – for exactly the same render time, give or take a second:

I’m guessing that this is a Thing that all seasoned professionals have been doing for ages anyway, but here’s a little explanation ‘cos I’m quite proud I got there on my own.

The whole process only takes a single wrangle, and once set up, it can be saved as a preset and just dropped onto the end of whatever particle/points setup you have.

We need to know how large, in pixels, each point/particle will end up when rendered: that depends on how far away the camera is, its focal length and film aperture, and the render resolution. The resolution matters because, all other things being equal, a scene rendered at 1080p effectively has smaller pixels than the same scene at 720p – it’s getting more samples/rays thrown at it, so it can resolve smaller details without our help.

I’ve put comments through the code below, so I won’t repeat it all here, but basically (at the risk of repeating myself) we plug the camera’s dimensions, and the distance and size of each point, into a formula to find out how many pixels wide it’ll end up. Then if it’s smaller than our threshold, we scale it up in size and down in Alpha.
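In symbols (my notation, not the scene file’s), for a standard perspective camera:

    pixel_width = pscale * (focal / aperture) * resx / distance

Here focal is the camera’s focal length and aperture its horizontal film back (both in the same units, so the ratio is unitless), resx is the horizontal resolution in pixels, and distance is how far the point is from the camera. If pixel_width comes out below the threshold, we multiply @pscale by (threshold / pixel_width) and divide @Alpha to match.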

The file:

Download here: Resizer_test_scene.hiplc. May contain nuts. If you don’t have Redshift installed, you can safely ignore any “redshift node stuff not found” errors and just play with the Mantra example instead. You’ll find a couple of identical test objects, each with this resizer wrangle on them. Pick one and try rendering the before and after nodes to see the effect.

Parameters:

Here are the parameters I set up on the wrangle. For the most part you can just leave everything at the defaults – just make sure you point it at your scene’s camera.

  • Camera: we need to know what camera you’re looking through (obviously)
  • Minimum pixel size: though we’ve been talking about dealing with particles smaller than one pixel, it can make things cleaner and quicker to make that threshold a little larger. So it makes sense to have it as a parameter we can play with
  • Pre-scale multiplier: just a @pscale multiplier for convenience
  • Visualise pixel size: this is handy. Rather than silently correcting the size of points, switching this option on colours them: green particles are large enough already, red particles are too small and will be resized. Great for sanity checking
  • Change Cd not Alpha: Rather than making our enlarged particles less opaque, this just makes them less bright instead. I thought it could be useful in some cases, though I’ve yet to find a use
  • Brighten and shrink big ones: well, since we’re dealing with points that are too small, why not have an option for points that are larger than a pixel, too? Switching this on shrinks any particles that’d end up larger than the minimum pixel size, and multiplies up the brightness of them (their @Cd attribute) to compensate. So now particles that are close enough, or large enough, still look tiny, but just very bright
  • Treat as wires: enlarging a point’s radius by x means its area has increased by x^2, so that’s how much we need to reduce its Alpha by. Wires are different: increase their width by x and their on-screen area increases by x too, so that’s all we need to compensate for
  • Dome projection: uses a different way of calculating resultant pixel sizes: dome projection cameras have an effective focal length of zero, and no meaningful film aperture (size), so everything is worked out from angles instead. Calculations are based on an equidistant (not equisolid) projection, as that’s what I need today – there’s a sketch of this branch just after the list. So there.
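Here’s roughly what that branch looks like – a sketch of my own, with dome_fov standing in for however the file actually derives the dome’s angular coverage:

    // Equidistant dome sketch: the projection maps angle to image radius
    // linearly (r = f * theta), so pixel coverage is linear in the point's
    // angular size. dome_fov is illustrative: radians across the frame,
    // e.g. PI for a 180-degree dome.
    string cam    = chs("camera");
    vector campos = cracktransform(0, 0, 0, {0,0,0}, optransform(cam));
    float  dist   = distance(@P, campos);

    float dome_fov = ch("dome_fov");
    float ang      = 2.0 * atan(0.5 * @pscale / dist);    // angular diameter
    float pixels   = ang * ch(cam + "/resx") / dome_fov;  // on-screen width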

The code:

It’s included in the scene file above, but if you want a quick nose around, here’s a sketch of the core logic (the version in the file has all the extra options from the parameter list wired up):
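    // Point wrangle (Run Over: Points). A minimal sketch, assuming a standard
    // perspective camera; parameter names ("camera", "min_pixel_size",
    // "treat_as_wires") are illustrative, not necessarily the file's.
    string cam        = chs("camera");          // op: path to the scene camera
    float  min_pixels = ch("min_pixel_size");   // threshold, in pixels (try 1.0)
    int    as_wires   = chi("treat_as_wires");  // wires: area grows with x, not x^2

    // Camera position in world space, from the camera node's transform.
    vector campos = cracktransform(0, 0, 0, {0,0,0}, optransform(cam));
    float  dist   = distance(@P, campos);       // straight-line distance: close enough

    // Pinhole projection: how many pixels wide this point will render.
    float focal    = ch(cam + "/focal");        // focal length (mm)
    float aperture = ch(cam + "/aperture");     // horizontal film aperture (mm)
    float resx     = ch(cam + "/resx");         // horizontal resolution (pixels)
    float pixels   = @pscale * (focal / aperture) * resx / dist;

    if (pixels < min_pixels)
    {
        float x = min_pixels / pixels;          // enlargement factor
        @pscale *= x;

        // A disc's screen area grows with x^2, a wire's with x; divide Alpha
        // by the same factor so the rendered energy stays constant.
        @Alpha /= as_wires ? x : x * x;
    }

The visualise, change-Cd-not-Alpha and brighten-and-shrink options are just extra branches around this same skeleton.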

Usage:

For the most part, you can just drop this onto any bunch of points, and make sure the camera parameter is set to your current camera.

Redshift vs Mantra: I don’t use Mantra much (or at all), but the wrangle works in the same way. The file above has a Mantra example in it too: the render looks subtly different but the benefit is the same.

Keep global point scales at 1.0. Make sure that any Point Scale (Mantra) or Global Scale Multiplier (Redshift) on the object node itself is set to 1.0. Do any scaling and sizing before this resizer node, or use the pre-scale multiplier on the node itself.

Your material needs to handle Alpha. Because we need to be able to reduce the opacity of points, whatever material you’ve assigned must read and use the @Alpha point attribute. In Mantra, you’re probably using the Constant material or something similar: make sure Use Point Alpha is ticked; in Redshift, plug an RS Point Attribute (“Alpha”) into the material’s Opacity input.

Don’t use resolution overrides! The camera resolution figures in our equations, and we’re getting that information from the camera object. Most renderers will let you override it for previewing, letting you render at half or quarter resolution, but our wrangle won’t know that’s happening. So if you need to knock out smaller preview renders, reduce the resolution on the camera object rather than asking the renderer to override it. Or live with the consequences.

Real cameras only, gah… One (annoying) caveat: this does not work nicely with Camera Switcher objects. Although a Switcher is supposed to be a perfect doppelgänger for whichever camera it has selected, it doesn’t pass through all the parameters we need. So beware.

Adding it to your arsenal:

The wonderful thing about Houdini is that you can make this wrangle into an HDA if you like (here’s an OTL version), or – even easier for now – just save it as a wrangle preset. All the spare parameters you’ve set up (even their tooltips, and the fact they’re on their own tab) get saved with the preset. Super spanky. Click the gear/cog button on the wrangle’s parameters and choose Save Preset…

… and give it a useful name:

Now it’ll be there any time you need it. Just drop a wrangle down and choose the preset.

Comments / corrections welcome.

More Houdini tips…