Welding Glass and Rubber Bands

Sometimes, one feels the need to make long-exposure photos (15 s or more) during daylight. One option is to buy a Lee Big Stopper, Hitech 100 ND1000 or other neutral-density filter, but these are far from cheap. Alternatively, one could stack a bunch of cheap ND filters on top of each other (ND8 + ND4, etc.), but then image quality suffers and there may be significant magenta colour tints. Further, as the degree of filtration increases, so does the need to prevent light leaks around the edges of the filters.

Enter the cheapskate's option: welding glass. According to my experiments, this stuff passes only a tiny fraction of the light from the scene, extending exposure time by 10-11 stops. Admittedly, it does more than merely tint the scene: it colours it a thick, sludgy, lurid green, seriously reducing the usable bit-depth in the red and blue channels. Further, it will not fit a regular filter holder.
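To see why 10-11 stops gets you into long-exposure territory, note that each stop halves the transmitted light, so the required exposure time doubles per stop. A minimal sketch of that arithmetic (the 1/60 s base exposure is an illustrative daylight value, not a measurement from the article):

```python
# Each stop of neutral density halves the light reaching the sensor,
# so the exposure time needed multiplies by 2**stops.

def filtered_exposure(base_seconds: float, stops: float) -> float:
    """Exposure time required behind an ND filter of the given strength."""
    return base_seconds * 2 ** stops

# A hypothetical 1/60 s daylight exposure behind 10-11 stops of welding glass:
for stops in (10, 11):
    t = filtered_exposure(1 / 60, stops)
    print(f"{stops} stops: {t:.1f} s")  # 17.1 s and 34.1 s respectively
```

Which is exactly the 15 s+ range the filter-buying options above are sold for.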

Hence, a rather complicated shooting and processing methodology.

Perpetrating long exposures, complete with rubber bands and welding glass.

When out in the field, if the lens-hood can be inverted, you can use rubber bands to hold the welding glass in place. First advantage: the filter is as close to the lens as it can be, so no light leaks.

Next, we shoot a panorama. In this scene, the exposure varies with height in the intended finished image; thus, we choose a vertorama, where individual shots are taken in landscape orientation, working up from the foreground rocks at the bottom (longer exposure times) to the sky at the top, because this keeps the contrast ratio low in each individual frame. In all, 16 images were shot whilst tilting the tripod head vertically, totalling 186 seconds of exposure time.

A single source image showing the lurid welding-glass green “tint” and massive lens distortion.

These images were processed in Photivo, using one of my standard presets with corrected white-balance, minimal but adequate noise-reduction (Pyramid algorithm) and Wiener sharpening. Lens distortion, whilst acute, is not addressed at this stage.

Top tip: in scenes such as this, the water just below the far horizon is the closest thing to an average grey in the light, and makes a good reference point for spot white-balance correction.

Stitching is done in Hugin, where the considerable distortion from a cheap wide-angle kit zoom lens can be optimised away – the horizon line, bowed to varying degrees by the angle of inclination, becomes straight almost by magic. Further, Hugin was set to blend the images using enfuse, which selects pixels favouring both exposure and image entropy – in effect choosing the pixels closest to a midtone, which constitutes a simple, natural HDR.
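The midtone-favouring part of enfuse's blend can be sketched briefly. If I recall its defaults correctly, enfuse's exposure weight is a Gaussian centred on middle grey (optimum 0.5, width 0.2); the entropy term is omitted here, so treat this as a toy illustration of the principle rather than enfuse itself:

```python
import math

def exposure_weight(v: float, mu: float = 0.5, sigma: float = 0.2) -> float:
    """Weight for a pixel value v in [0, 1]; highest near the midtone."""
    return math.exp(-((v - mu) ** 2) / (2 * sigma ** 2))

def blend(values):
    """Weighted average of one pixel position across overlapping frames."""
    weights = [exposure_weight(v) for v in values]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Near-black, well-exposed and near-white samples of the same spot:
# the result sits close to the best-exposed value.
print(blend([0.1, 0.6, 0.95]))
```

This is why overlapping frames of differing exposure merge into a natural-looking result: badly exposed pixels simply carry almost no weight.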

Second advantage: we’ve just solved lens vignetting as well, because the pixels that were in one image’s corners can be taken from mid-edge pixels in other images.

The output from Hugin’s panorama stitching is a realistic and natural reference image.

From there, it falls to the usual odyssey of post-processing, mostly in Darktable, to choose the finished appearance: a crop to square; a conversion to black and white (with an 80% orange filter); high-pass sharpening and spatial contrast adjustments (Fourier-space); a low-pass filter in multiply mode coupled with tonemapping, to even out the large-scale contrast whilst retaining lots of local detail; a graduated ND filter applied to retain some contrast in the sky; and so on.

At this point, let's pause to consider the alternatives. Pick an amount of money: you can spend as much as you care to mention on a dSLR to get, maybe, 24 megapixels; what little you have left over can be spent on a sharp lens and a 10-stop ND filter. Now consider taking one image of 3 minutes' exposure and processing it in some unmentionable popular proprietary processing software. First, the rotation to straighten the horizon and the crop from 3:2 aspect ratio to square will cost you about 35% of the pixels, so 24 megapixels is down to about 15. Then noise-reduction costs bit-depth per pixel: maybe 2 bits (on average) of the 12 or 14 bits per channel. Further processing will remove hot and dead pixels. All this is before deciding to process for a particular appearance – we're still working towards a good reference image, the data is vanishing with every click of the mouse, and what little remains is tainted by algorithmic approximation rather than originating with photons from the scene.
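The pixel budget above is worth checking on the back of an envelope, using the figures from the paragraph:

```python
# Single-shot pixel budget, as described in the text.
mp = 24.0                          # starting resolution, megapixels
mp_after_crop = mp * (1 - 0.35)    # rotate + crop 3:2 -> square: ~35% lost
print(f"{mp_after_crop:.1f} MP")   # 15.6 MP left before noise-reduction

# Noise-reduction costs bit-depth rather than pixels:
bits = 12                  # typical raw bit-depth per channel
usable_bits = bits - 2     # ~2 bits averaged away by smoothing
print(usable_bits)         # 10 effective bits per channel
```

So the expensive single-shot route ends up at roughly 15 megapixels of 10-bit data before any creative processing has even begun.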

By performing the panorama routine, the equipment can be much cheaper: even a 15-megapixel sensor with poor bit-depth yields an archived, processed image of 22.5 megapixels – all of them good, originating more from photons than from approximation.
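The redundancy behind that claim is easy to quantify from the figures in this article: 16 frames from a 15-megapixel sensor stitched down to a 22.5-megapixel result. A quick sketch:

```python
# Rough redundancy of the panorama approach: how many captured source
# pixels contribute, on average, to each pixel of the stitched output.
frames, frame_mp, output_mp = 16, 15, 22.5
captured_mp = frames * frame_mp          # 240 MP of raw capture
print(captured_mp / output_mp)           # ~10.7 source pixels per output pixel
```

Every output pixel is drawn from, or averaged across, roughly ten captured ones – which is where the noise suppression, vignetting correction and "natural HDR" all come from.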

And it looks pretty good, too:

There Is No Boat.

As an aside, a small boat sailed right around and through the scene while I was shooting the panorama. Because all of the exposures were fairly long, it left no noticeable impression on the resultant image at all.

[Some product links may be via the Amazon affiliate programme.]