Concerning Rocks

Some years ago I had a passing interest in the abstract shapes and forms rocks can take. 

Recently I was out on the Aberdeenshire coast hunting photos with a friend, who, being impressed with the rocky coastline, wondered exactly where the Highland Boundary Fault emerged at its most north-eastern extremity.

After a bit of research (in particular exploring with the BGS’s iGeology app), I tracked it to a small headland, Garron Point, beside the golf club outside Stonehaven.

From the outside it doesn’t look like much, but on closer inspection it is awesome.

There are actually two faults – a small one at the north-eastern corner of Craigeven Bay where it meets Garron Point, forming a small spur off the Highland Boundary Fault proper, which clips the coastline from the town out to sea.

On the lowland side the bedrock is metabasalt, psammite and pelite (North Esk formation) – metamorphic bedrock formed around 461-485 MYa in the Ordovician period. On the highland side is gritty psammite (Glen Lethnot grit formation) – around 541-1000 MYa.

The fault itself can be traced to within a few feet – a view from beside one of the golf greens shows the junction of the two faults, with a strip of incredibly deformed grey rock leading away for some metres, rather like a line of chewing-gum.

Some of the most impressive rock outcrops I know. Toward the top-right of the frame – those cliffs are tall, especially from below! – a small fault runs diagonally down to the spray of water and out of the left side. From front to back, a line of deformed pale grey rock only a foot or two wide, twisted like chewing-gum, marks the Highland Boundary Fault at its narrowest, only metres away from its most north-eastern extremity on land, at Garron Point headland near Stonehaven. The gnarly shapes of psammite (metamorphosed medium-grained grey former sandstone) and microbasalt are awesome.

My favourite image is an abstract closeup – purply-red microbasalt meeting gritty blue-green psammite in a spray of cracks and marbling lines.

Prints are available on my ShinyPhoto photo gallery: Under Pressure

Perhaps the clearest view into the workings of the Highland Boundary Fault I can think of. On the left, red-purple microbasalt (its fine cuboidal structure putting me in mind of streaky thick bacon); elsewhere the blue-green-grey of psammite and pelite, medium- to fine-grained metamorphosed former sandstones. All jumbled together with fine cracks and lines of marble hinting at the pressures involved.

Caithness Holiday Day 4: when a forest is not a wood

Sometimes I have to tell it like it is. Dunnet Forest is one of the least pleasant collections of trees I’ve ever had the displeasure of walking through. From start to finish, a total misuse of the land.

Within 50yd of the carpark are multiple signs warning owners to pick up after their dogs and use the bin, complete with the emotional manipulation that excrement left lying around could blind a child.

The woodland itself is awful – monoculture spruce with a barren lack of undergrowth.

The only burn I saw was a stretch of ~70yd of stagnant scum-covered sludge, vibrant orange with industrial pollution.

There is a reek of unjust hypocrisy about the whole affair. One cannot help but think that, even if there is some credibility to the idea of a small kid putting something off the forest floor in their eye, by surface area and decay-rate alone they would be far more likely to come to harm in the polluted stream than from anything left behind by a dog – which would, if anything, go some way toward re-fertilizing the abused ground beneath the trees.

Toward the end of the ill-defined loop route are several sculptures carved out of the remains of some of the tree trunks. You’ll have had yer entertainment then – but not your walk in nature.

I could not escape fast enough.

Around Here: testing the Fuji X-T20

A couple of weeks ago it occurred to me that, with a holiday looming and plans to shoot lots of long evening timelapses, I’d be out of luck with just the one camera should I wish to photograph other scenes while a timelapse was recording. And so, on a whim, I followed a friend’s advice and bought a second shooter, a Fuji X-T20.

It’s totally amazing – “the Little Camera that Can™” personified. It’s also diametrically opposite to everything I’ve become accustomed to with the Pentax K-1: custom user-modes do not encapsulate drive modes such as burst-rate, bracketing or intervalometers, just some rudimentary exposure and auto-ISO settings; and the JPEGs are absolutely stunning, with targets defined within colour-space not by intended purpose (“landscape”, “portrait”) but by film-emulation.

It’s so good I’ve reworked my entire RAW-processing workflow such that images from both cameras now look like the Fuji’s out-of-camera JPEGs whilst emulating Fuji Provia slide film of old. Quite some drastic changes were required over and above the default RawTherapee profile – most notably, yellow-greens are dramatically deeper and greener and there’s a distinct “Fuji red” going on. It’s nice to have a point of reference in colour-space, so I know what’s “normal” and where processing deviates from it.

Anyway. Some quick snaps taken one lunchtime stroll through nearby woods:

And my favourite of all, a partial 22° solar halo – the first photo of the walk:

Having just returned from the holiday that prompted the acquisition, it’s interesting just how quickly it’s become my primary choice of camera, if only for being the lighter one to travel with.

when a landscape location goes sour

I don’t know how to tag some of these photos: on the one hand, a place that’s been a good walk in the trees for over a decade, but on the other, one that’s now showing the tarnish of human influence. Landscape photographer’s idyll or sad anger?

The Falls of Bruar present several well-known scenes. My own story is of having bought a tripod to do justice to a particular view and then using it to shoot a “hole in the ground in failed light” instead, thus getting one of my most popular photos on flickr. I’ve been back probably ten times over the years, enjoyed the stroll up the side of the gorge, admired the rocks and watched the three waterfalls doing their thing.

First there’s the regular cliché photo – everyone gets the view from the top of the gorge opposite the lower bridge; I flew the drone down into the gorge and shot it from more on a level with the natural arch:

Nice rocks – psammite and semi-pelite, as across most of the Highlands, with the sheared strata arising from the end of the Loch Tay fault-line.

Around the corner, there’s a viewpoint amongst the rocks of the second waterfall above the lower bridge:

Rising above the gorge one can see the pine trees lining the Bruar Water – it also shows how the middle cascades extend much further upstream than might be expected from the compressed perspective of ground level:

Abstract view of the lower Falls of Bruar

However, this visit was far from happy. Apart from some offensive uncouth ned muttering “hope you lose your drone” as he passed by, and hordes of random people thinking they were entitled to pester the dog, there are major problems with the vegetation: the two bridges have been closed, making the path up the right side of the gorge inaccessible due to tree works – possibly because of disease, although I’m unconvinced that is the whole reason.

Bridge closed

Walking up the left side of the gorge instead: the area is infested with non-native invasive rhododendron bushes; the forestry on the adjacent hillsides has been harvested, leaving a horrendous, unsightly, barren landscape; and the burns and tributaries beside the path are in a dire state as well.

Who wants to walk along here?

This is the trouble with well-known landscape locations; as photographers we like the illusion of wilderness, or at least that places are natural. Bruar has become no more than an ill-kempt garden with a water-feature running through the rockery. The light and shade and calming deep greens are no more to be seen.

I’m giving it a decade to recover before revisiting.

How Many Megapixels?

There are several clichés in the field of megapixel-counts and the resolution required for acceptable photographic prints.

In no particular order:

  • 300dpi is “fine art”
  • you don’t need as many dpi for larger prints because typically they’re viewed further away
  • my printer claims to do 360dpi or 1440dpi or …
  • 24 megapixels is more than enough for anything
  • “for a 5″ print you need 300-800pixels, for medium to large calendars 800-1600 pixels, for A4 900-1600px, for an A3 poster 1200 to 2000px, for an A2 poster 1500 to 2400px, …” (taken from a well-known photo-prints website guidelines)
  • it’s not about the megapixels it’s about the dynamic range

There are probably more stock arguments in the field, but all are vague, arising from idle pontificating and anecdote over the years.

Here’s a key question: in a list of required image resolutions by print size, why does the number of dpi required drop off with print size? What is the driving factor, and might there be an upper bound on the number of megapixels required to make a print of any size?

We can flip this around and say that if prints are expected to be viewed at a distance related to their size, then it is no longer a matter of absolute measurements in various dimensions but rather about how much field of view they cover. This makes it not about the print at all, but about the human eye, its field of view and angular acuity – and the numbers are remarkably simple.

From Wikipedia, the human eye’s field of view is 180-200 degrees horizontally by 135 degrees vertically (i.e. somewhere between 4:3 and 3:2 aspect-ratios). Its angular acuity is between 0.02 and 0.03 degrees.
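As a quick illustration of the earlier point that required dpi drops with viewing distance: if we fix the acuity at 0.025° per pixel (my choice from the middle of the range above – a sketch, not a figure from any print-shop guidelines), the dpi needed for a print to look pixel-free follows directly from how far away it is viewed.

```shell
# dpi needed so that one printed pixel subtends ~0.025 deg at a given distance.
# pitch_mm = distance_mm * acuity_in_radians ; dpi = 25.4 / pitch_mm
dpi() { awk -v cm="$1" 'BEGIN { printf "%.0f\n", 25.4/(cm*10*0.025*3.14159265/180) }'; }
dpi 30    # handheld print viewed at 30cm -> 194
dpi 100   # wall print viewed at 1m       -> 58
```

The further away the print hangs, the fewer dpi it needs – which is exactly why the resolution tables taper off with print size.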

If we simply divide field of view by acuity in each dimension, we end up with the number of pixels the eye could resolve.

At one end of the range,

 180*135 / 0.03 / 0.03 / 1024 / 1024 = 25.75 (6000 x 4500)

and at the other:

200*135 / 0.02 / 0.02 / 1024 / 1024 = 64.37 (10,000 x 6750)

In the middle,

180*135 / 0.025 / 0.025 / 1024 / 1024 = 37.1 (7200 x 5400)
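The three calculations above can be reproduced with a small shell function (awk is only doing the same division):

```shell
# megapixels = horizontal_fov * vertical_fov / acuity^2 / (1024*1024)
mpix() { awk -v h="$1" -v v="$2" -v a="$3" 'BEGIN { printf "%.2f\n", h*v/(a*a)/(1024*1024) }'; }
mpix 180 135 0.03    # lower bound -> 25.75
mpix 180 135 0.025   # middle      -> 37.08
mpix 200 135 0.02    # upper bound -> 64.37
```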

Significant points:

  • this is no longer about absolute print sizes; it’s simply about being able to view a print without one’s eye perceiving pixellation
  • the numbers correlate reassuringly with numbers of megapixels seen in real-world dSLR sensors today
  • you can reasonably say that if you have 64 Megapixels then you can make a print of any size from it
  • you can make an image with more than 64 megapixels if you want to, but the reasons for doing so are not directly to do with resolution – they might be:
      – in order that you can crop it, either physically in post-processing or by viewing it from a closer distance than that required merely to fill your eyes’ field of view
      – or maybe for pixel-binning to reduce noise, give smoother tonality, etc
  • 24 megapixels is not enough for much; rather, it’s a turning-point: the bare minimum number of pixels for a person of limited acuity to resolve, assuming they slightly less than fill their field of view with the print. 36MPel is more usable, and 64 will keep you in business selling fine-quality wall art.

Now we know how many megapixels are required for various real-world purposes, all that matters is making them good megapixels. Physics to the rescue.

 

Morvern 3/4: In Search of Purity

Time to explain the motivation for this excursion to the Morvern peninsula.

A few months ago, I was exploring what Google Earth had to show for the West coast of Scotland. A lot of photographers gravitate toward the north-west, around Sutherland, and rightly so – the geography up there is impressive. However, coming a little south past Ardnamurchan, there is also epic geology – evidence of volcanoes, beautiful mountains, the works. And so I stumbled across this glen past Loch Arienas and Loch Doire nam Mart, thinking there might be a view to enjoy part-way along the glen up one of the mountains to the left, perhaps.

On a little research, I saw that the OS map of the area showed Aoineadh Mor, a former township emptied in the Highland Clearances. Interesting history. So I drove – about 4.5 hours from Perthshire out through Fort William around Loch Eil and down at some length on wiggly single-lane 60-limit roads – and arrived at the small roadside carpark about 4.30pm.

The walk through the woods was beautiful: birches and oak trees catching the low sunlight.

Now it gets real. On emerging from the woods, the first evidence of habitation one sees is this broken dry-stone wall:

The Perimeter is Breached

which shouts the beginning of the story loud and clear: a township left to ruin, increasingly taken over by nature.

From what I gather, up to the 18th Century, Aoineadh Mor [approximately pronounced, and sometimes spelled, Inniemore, although the Gaelic ao vowel sound is inimitable in English] was a thriving crofting community on the slopes of Sithean na Raplaich where the burn (Allt Aoineadh Mor) flows down to the lochs.

It is a wonderfully beautiful setting – the Allt Aoineadh Mor burn burbling down the hillside, through the former township of the same name.

In 1824, Christina Stewart, the new owner of the Glenmorvern estate, forcibly ejected the crofters in order to farm sheep on the land for supposedly greater profit, as happened in many places during the Highland Clearances.

The names of two of the last crofters to leave, James and Mary, have been given to two paths through the surrounding forestry.

By 1930 the sheep were no longer profitable either, and the area was planted with trees as the cash-crop of the time. Some sixty years later, in 1994, the Forestry Commission uncovered the township.

And so my history intersects with the place in 2017.
It is both quieting and disquieting simultaneously: quiet in that there is open space, trees, light, water – all the elements of landscape we photographers like; disquieting in that the area is not really pure. On scratching beneath the surface, there seems a greater innocence in the subsistence existence of crofting, with the subsequent industries of sheep-farming and forestry tainted to varying extents by a crass desire for profit. And so the hillside is not really wild but barren; the land not just beautiful but exploited.

Dust kicked-up by a passing logging lorry travelling a path through Forestry Commission conifer woods, taken from the former township of Aoineadh Mor.
http://scotland.forestry.gov.uk/activities/heritage/historic-townships/aoineadh-mor-inniemore/marys-story

These conflicting forces of land, money and habitation are summarized in this photo, where we have nature’s pine tree felled in the foreground, a generation of mankind’s ruined croft superseded by the unnatural choice of conifers blown in as seed on the wind, leading to blue skies beyond.

This used to be the township of Aoineadh Mor, a scattering of stone crofts on the braes beside a beautiful river surrounded by forest. 
Now the Forestry Commission has taken over, with several monoculture forests on the surrounding mountains, even wild-seeding into the former township.

For what it’s worth, a few more photos all taken around the township:

References:

As I walk along these shores
I am the history within
As I climb the mountainside
Breaking Eden again –
Runrig, Proterra

The Return of Serif

It feels funny to think that back in the early 1990s Serif was known for PagePlus, in the days when such things were called desktop-publishing (DTP) applications.

Recently, however, they’ve produced Affinity Photo for Mac, Windows and iPad. After one or two folks recommended it, I thought it was time to add another trick to the photo-publishing workflow and have a play. After all, if nothing else, an iPad would resolve my doubts over colour-management on Linux, wouldn’t it?

So I’ve spent a few weeks driving around hunting scenery and taking photos of it and gradually evolving a few routines: photos are still shot on the Pentax K-1; processed (variously pixel-shift or HDR) using dcraw on Linux, where I also run them through darktable for a lot of toning work; the results are then copied to an ownCloud folder which synchronizes automagically with the iPad; a bit of juggling with share-to-Affinity and share-to-Photos and share-to-ownCloud later and the results are copied back to the workstation for final organization, checks and publication.

Within Affinity, my workflow is to import an image and ensure it’s converted to 16-bit P3 colour-space – a little wider than sRGB and native to the iPad’s display. I then run the Develop module, which tweaks exposure, brightness and contrast, clarity, detail, colour-balance (not so much), lens distortions, etc. After that, I use layers to remove sensor-dust and other undesirable features.

Top tip: use a new pixel layer for the in-painting tool, set to “this layer and below”; then all the in-paintings can be toggled on and off to see the effect. Also use a brightness+contrast adjustment layer above that while you work, so the corrections will be even less visible when the contrast is reduced back to normal. If the image requires it, I’ll add one or two fill layers for gradients – better an elliptical gradient that can be moved around the scene than a vignette that only applies in the corners.

Finally, I merge all the layers to a visible sum-of-all-below pixel layer, on which I run the Tonemapping persona; ignoring all the standard presets (which are awful), I have a couple of my own that produce black-and-white with low- and high-key targets, in which I can balance local versus global contrast. If I produce a black&white image, it probably arises from this layer directly; sometimes, dropping the post-tonemapped layer into the luminosity channel makes for a better colour image as well (subject to opacity tweaking).

So, some results. It produces colour:

It produces black and white:

One final thing: I used to hate adding metadata – titles and descriptions, slogging through all my tags for the most pertinent ones, etc. Because the ownCloud layer is so slow and clunky, it makes more sense to be selective and choose fewer photos to go through it; if I also add metadata at this stage, I can concentrate on processing each image with particular goals in mind, knowing that all future versions will be annotated correctly in advance. Freedom!

Pentax K-1: an open-source photo-processing workflow

There is a trope that photography involves taking a single RAW image, hunching over the desktop poking sliders in Lightroom, and publishing one JPEG; if you want less noise you buy noise-reduction software; if you want larger images, you buy upscaling software. It’s not the way I work.

I prefer to make the most of the scene, capturing lots of real-world photons as a form of future-proofing. Hence I was pleased to be able to fulfil a print order last year that involved making a 34″-wide print from an image captured on an ancient Lumix GH2 many years ago. Accordingly, I’ve been blending multiple source images per output, simply varying one or two dimensions: simple stacking, stacking with sub-pixel super-resolution, HDR, panoramas and occasionally focus-stacking as the situation demands.

I do have a favoured approach, which is to compose the scene as closely as possible to the desired image, then shoot hand-held with HDR bracketing; this combines greater dynamic range, some noise-reduction and scope for super-resolution (upscaling).

I have also almost perfected a purely open-source workflow on Linux with scope for lots of automation – the only areas of manual intervention were setting the initial RAW conversion profile in RawTherapee and the collation of images into groups in order to run blending in batch.

After a while, inevitably, it was simply becoming too computationally intensive to be upscaling and blending images in post, so I bought an Olympus Pen-F with a view to using its high-resolution mode, pushing the sub-pixel realignment into hardware. That worked, and I could enjoy two custom setting presets (one for HDR and allowing walk-around shooting with upscaling, one for hi-res mode on a tripod), albeit with some limitations – no more than 8s base exposure (hence exposure times being quoted as “8x8s”), no smaller than f/8, no greater than ISO 1600. For landscape, this is not always ideal – what if 64s long exposure doesn’t give adequate cloud blur, or falls between one’s ND64 little- and ND1000 big-stopper filters? What if the focal length and subject distance require f/10 for DoF?

All that changed when I swapped all the Olympus gear for a Pentax K-1 a couple of weekends ago. Full-frame with beautiful tonality – smooth gradation and no noise. A quick test in the shop and I could enable both HDR and pixel-shift mode and save RAW files (.PEF or .DNG) and in the case of pixel-shift mode, was limited to 30s rather than 8s – no worse than regular manual mode before switching to bulb timing. And 36 megapixels for both single and multi-shot modes. Done deal.

One problem: I spent the first evening collecting data – er, taking photos – at a well-known landscape scene, and came home with a mixture of RAW files, some 40-odd MB, some 130-odd MB, so obviously multiple frames’ data was being stored. However, on using RawTherapee to open the images – either PEF or DNG – it didn’t seem like the exposures were as long as I expected from the JPEGs.

A lot of reviews of the K-1 concentrate on pixel-shift mode, noting its options to correct subject-motion or not, and agonizing over how each commercial RAW-converter handles that motion. What they do not make clear is that the K-1 only performs any blending when outputting JPEGs (the blend is also used as the preview image embedded in the RAW file); the DNG or PEF files are simply concatenations of sub-frames with no processing applied in-camera.

On a simple test using pixel-shift mode with the camera pointing at the floor for the first two frames and to the ceiling for the latter two, it quickly becomes apparent that RawTherapee is only reading the first frame within a PEF or DNG file and ignoring the rest.

Disaster? End of the world? I think not.

If you use dcraw to probe the source files, you see things like:

zsh, rhyolite 12:43AM 20170204/ % dcraw -i -v IMGP0020.PEF

Filename: IMGP0020.PEF
Timestamp: Sat Feb  4 12:32:52 2017
Camera: Pentax K-1
ISO speed: 100
Shutter: 30.0 sec
Aperture: f/7.1
Focal length: 31.0 mm
Embedded ICC profile: no
Number of raw images: 4
Thumb size:  7360 x 4912
Full size:   7392 x 4950
Image size:  7392 x 4950
Output size: 7392 x 4950
Raw colors: 3
Filter pattern: RG/GB
Daylight multipliers: 1.000000 1.000000 1.000000
Camera multipliers: 18368.000000 8192.000000 12512.000000 8192.000000

On further inspection, both PEF and DNG formats are capable of storing multiple sub-frames.

After a bit of investigation, I came up with an optimal set of parameters to dcraw with which to extract all four images with predictable filenames, making the most of the image quality available:

dcraw -w +M -H 0 -o /usr/share/argyllcms/ref/ProPhotoLin.icm -p "/usr/share/rawtherapee/iccprofiles/input/Pentax K200D.icc" -j -W -s all -6 -T -q 2 -4 "$filename"

Explanation:

  • -w = use camera white-balance
  • +M = use the embedded colour matrix if possible
  • -H 0 = leave the highlights clipped, no rebuilding or blending
    (if I want to handle highlights, I’ll shoot HDR at the scene)
  • -o = use ProPhotoRGB-linear output profile
  • -p = use RawTherapee’s nearest input profile for the sensor (in this case, the K200D)
  • -j = don’t stretch or rotate pixels
  • -W = don’t automatically brighten the image
  • -s all = output all sub-frames
  • -6 = 16-bit output
  • -T = TIFF instead of PPM
  • -q 2 = use the PPG demosaicing algorithm
    (I compared all 4 options and this gave the biggest JPEG = hardest to compress = most image data)
  • -4 = linear 16-bit

At this point, I could hook into the workflow I was using previously, but instead of worrying how to regroup multiple RAWs into one output, the camera has done that already and all we need do is retain the base filename whilst blending.
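As a sketch of that grouping idea: dcraw’s -s all names the extracted sub-frames stem_0.tiff, stem_1.tiff and so on, so stripping the suffix recovers the RAW each frame came from (the filenames below are hypothetical examples, not from my actual shoot):

```shell
# Recover the grouping stem from dcraw sub-frame filenames
for f in IMGP0020_0.tiff IMGP0020_1.tiff IMGP0021_0.tiff; do
  echo "${f%_*}"      # strip the trailing _N.tiff back to the RAW stem
done | sort -u
```

Everything sharing a stem belongs in the same blend – which is exactly what the shell function below relies on.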

After a few hours’ hacking, I came up with this little zsh shell function that completely automates the RAW conversion process:

pic.2.raw () {
        # tolerate an absent *.PEF or *.DNG pattern without aborting
        setopt local_options null_glob
        for i in *.PEF *.DNG
        do
                echo "Converting $i"
                base="${i:r}"
                dcraw -w +M -H 0 -o /usr/share/argyllcms/ref/ProPhotoLin.icm -p "/usr/share/rawtherapee/iccprofiles/input/Pentax K200D.icc" -j -W -s all -6 -T -q 2 -4 "$i"
                mkdir -p converted
                # copy EXIF onto whichever outputs exist (single- or multi-frame)
                exiftool -overwrite_original_in_place -tagsfromfile "$i" ${base}.tiff 2> /dev/null
                exiftool -overwrite_original_in_place -tagsfromfile "$i" ${base}_*.tiff 2> /dev/null
                mv ${base}.tiff converted 2> /dev/null
                mkdir -p coll-$base coll-$base-large
                echo "Upscaling"
                for f in ${base}_*.tiff
                do
                        convert -scale "133%" -sharpen 1.25x0.75 $f coll-${base}-large/${f:r}-large.tiff
                        exiftool -overwrite_original_in_place -tagsfromfile "$i" coll-${base}-large/${f:r}-large.tiff
                done
                mv ${base}_*.tiff coll-$base 2> /dev/null
        done
        echo "Blending each directory"
        for i in coll-*
        do
          # note ${i}, not $i_ : the trailing underscore would otherwise be
          # parsed as part of the variable name
          (cd $i && align_image_stack -a "temp_${i}_" *.tif? && enfuse -o "fused_${i}.tiff" temp_${i}_*.tif \
           -d 16 \
           --saturation-weight=0.1 --entropy-weight=1 \
           --contrast-weight=0.1 --exposure-weight=1)
        done
        echo "Preparing processed versions"
        mkdir -p processed
        (
                cd processed && ln -s ../coll*/f*f . && ln -s ../converted/*f .
        )
        echo "All done"
}

Here’s how the results are organized:

  • we start from a directory with source PEF and/or DNG RAW files in it
  • for each RAW file found, we take the filename stem and call it $base
  • each RAW is converted into two directories, coll-$base/ consisting of the TIFF files and fused_$base.tiff, the results of aligning and enfuse-ing
  • for each coll-$base there is a corresponding coll-$base-large/ with all the TIFF images upscaled 1.33 (linear) times before align+enfusing
    This gives the perfect blend of super-resolution and HDR when shooting hand-held
    The sharpening coefficients given to ImageMagick’s convert(1) command have been chosen from a grid comparison; again the JPEG conversion is one of the largest showing greatest image detail.
  • In the case of RAW files only containing one frame, it is moved into converted/ instead for identification
  • All the processed outputs (single and fused) are collated into a ./processed/ subdirectory
  • EXIF data is explicitly maintained at all times.

The result is a directory of processed results with all the RAW conversion performed using consistent parameters (in particular, white-balance and exposure come entirely from the camera only) so, apart from correcting for lens aberrations, anything else is an artistic decision not a technical one. Point darktable at the processed/ directory and off you go.

All worries about how well “the camera” or “the RAW converter” handle motion in the subject in pixel-shift mode are irrelevant when you take explicit control over it yourself using enfuse.

Happy Conclusion: whether I’m shooting single frames in a social setting, or walking around doing hand-held HDR, or taking my time to use a tripod out in the landscape (each situation being a user preset on the camera), the same one command suffices to produce optimal RAW conversions.

 

digiKam showing a directory of K-1 RAW files ready to be converted

One of the intermediate directories, showing 1.33x upscaling and HDR blending

Results of converting all RAW files – now just add art!

The results of running darktable on all the processed TIFFs – custom profiles applied, some images converted to black+white, etc

One image was particularly outstanding so has been processed through LuminanceHDR and the Gimp as well

Meall Odhar from the Brackland Glen, Callander

eBay + PayPal: Considered Harmful

I forget exactly when I made my first purchase from eBay, but given that my PayPal account was registered in 2005, my usage of both sites certainly dates back over ten years.

Most of my transactions were simple purchases – old-fashioned (M42-fit) lenses for various cameras, although there were a few sales (again, mostly old camera gear for which I had no use).

Over that time, as a humble consumer, I accumulated a 100% positive feedback score of 92.

Now this is the part where the story turns sad.

Offending lens

Last year, thinking to experiment with what it might do for my photography, I bought a 25mm f/0.95 lens. It arrived, perfectly functional; it was just that we didn’t really get along. The lens wasn’t particularly sharp – had glowing edge halos wide-open – and not even f/0.95 gives sufficiently narrow DoF to be interesting in the landscape – it just makes the whole thing look soft.

So I put the lens back on eBay. It sold first time, for a significant proportion of the initial cost. Bingo, I thought, wrapped it up and posted it off using eBay’s global shipping programme on its way to Germany.

Nearly a month later, however, the buyer opened a request to return the lens, claiming it was broken.

Well it wasn’t, when it left here, of that I’m certain; I just don’t have a photo to prove it, unfortunately. Either it was broken in transit or the buyer was clumsy with it himself.

Here’s where the problems start. eBay’s website is old and clunky. Every single email they sent with links to “see more details” pointed to ebay.de, their German site, requiring me to enter my eBay login details on a page written in German (“Einloggen!”). Pretty obviously, this was not going to happen: in equal parts it looked like a phishing scam, and I wouldn’t have had the German to follow through even if I had logged in.

Coupled with this, the buyer’s return request did not appear in eBay’s regular messaging system – it lives in a semi-related sub-system of its own.

I sought what support I could find on eBay; a lady called me back within a couple of minutes and said it was up to me to try to sort matters directly with the buyer, and that I would not be able to raise a case for a fortnight. I mentioned the language barrier, but subsequent emails persisted in sending me to ebay.de.

I never did get the chance to raise a case; with their language barriers, clunky old-fashioned website and “support” system that goes to extreme lengths to keep users locked in reading FAQs instead of establishing contact with a human, to all intents and purposes eBay denied me any right to reply.

During this time, PayPal adjusted the balance on my account – set it to -£286 when it had been at zero previously – citing the ongoing eBay dispute. I emphasize that this was before eBay had stated any resolution on the matter; they simply adjusted the balance out of the blue. The website would not let me make any amendments to the account during this period; given that this happened before any resolution on eBay, I limited potential damages by pulling PayPal’s direct-debit mandates with my banks.

The buyer sent me a rude message demanding to know “what was wrong with me” that I would not reply to the return request. He then opened a case himself. Again, I had no right of reply due to the same language barriers.

After a mere 48 hours more, eBay claimed to have “made a final decision” and found in the buyer’s favour. Only at that point could I see the details of the case the buyer had raised: he had included a photo of the lens showing distorted shutter blades and a note from a local shop certifying it was broken. Of course, that doesn’t prove anything about when it broke; it just confirms that it was broken and, from what I can make out of the serial number, rules out a scam to swap the lens.

PayPal claim to have “buyer protection” but say nothing about protecting sellers. There was nothing I could do apart from send PayPal a cheque for the outstanding amount and terminate both accounts as soon as possible.

Today I noticed eBay had allowed the buyer to leave me negative feedback – the first and only such event in over 11 years’ dealing with the site – while still denying me the opportunity to respond, only presenting the options to leave him positive feedback or “leave feedback later”.

Having trouble logging in? Not really, any more; no.

Unsurprisingly, I find this all horrendously unjust.

Closing both accounts has resulted in a veritable flurry of emails (19 today alone, at the last count) confirming removal of various bank accounts, credit cards and the termination of ongoing payment arrangements (Skype, two streaming music services, a grocery delivery company, two DNS registrars, Yahoo, Facebook and a few more).

I never made a profit selling goods on eBay – at best swapped objects for a fair consideration and often took a slight hit using better postage than required.

So what’s to be learned from the experience?

First, as a geek by trade, I look back a few years to a time when we were talking about Web-2.0 versus Web-1.0. eBay’s website is clearly a product of the Web-1.0 era, and not just in its layout: it uses whole-page impressions where AJAX would be more appropriate; sub-systems for “returns” bolted on to, yet incompatible with, the existing “messaging” system; no single sign-on; and nationality-specific sub-sites. It is a ghastly accident of feature-creep instead of design.

Second, as a human being, I’m deeply concerned by the faceless bureaucracy and lack of control. We get accustomed to “just stick it on eBay”, reliant on there always being someone who wants our stuff. By contrast, there are also local shops with whom one can come to part-exchange arrangements.

eBay and PayPal represent the first and largest and best-known steps towards a digital economy. It is easy to be suckered into thinking it necessary to have accounts with them for ease of shopping and transferring money around the place. However, when I see how unjust the systems are, stacked against honest sellers, I can have no further dealings with either site.

 

And the process of rejecting them both, of realising that my eBay feedback score is not a significant measure of life, is wonderfully liberating.

 

Update 2017-01-18: forgot to say that, on the original auction, I had checked the box saying my policy is “no returns”. Why do eBay allow that if they proceed to completely ignore it based on the buyer’s say-so?

Olympus Pen-F: analysis and hands-on

Mine, my only, my preciousss!

There’s only one camera shop left in Perth and yesterday I was their first customer to buy the Olympus Pen-F. It’s not released on Amazon until Monday…

So it’s been the talk of the web. It’s undeniably a beautiful camera, with a rangefinder design reminiscent of its film camera predecessor of the same name. I’ve spent much of the last couple of days putting it through its paces.

I’m pleased to report it has a reasonably sturdy weight to it, thanks to the aluminium/magnesium alloy shell. It feels just great in one’s hands. Well, my hands, anyway.

How does it perform?

First, software: it’ll be a little while before Adobe support it, but for us open-source fans, RawTherapee already supports the new .ORF RAW images in both 20-megapixel and hi-resolution 80-megapixel modes. (It seems to be the only open-source raw-converter to do so, at the time of writing.)

Second, I’ve been keeping dark-frame reference images for a few cameras, so I can compare its noise levels against predecessors and competitors. (Method: put the lens-cap on, place the camera upside-down on a desk, and then at every ISO from 6400 down to 100 and below, at various shutter-speeds from 1/250s to 30s, take a couple of photos. A quick Python script reads the images, converts them to greyscale and outputs various properties, including the average lightness value per pixel. Results are aggregated by camera model and ISO with a median average.)

log-log plot of luminosity noise for various camera models by ISO
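The measurement can be sketched as follows – using synthetic noise in place of real dark frames, since the principle is the same (pure numpy here; the actual script read the camera JPEGs with an imaging library, and the noise levels below are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(42)

def frame_lightness(frame):
    """Average pixel lightness of one greyscale dark frame."""
    return float(np.mean(frame))

def median_aggregate(frames):
    """Median of the per-frame average lightness across repeat shots."""
    return float(np.median([frame_lightness(f) for f in frames]))

# Simulated dark frames: higher ISO -> more noise above the black level.
low_iso  = [np.abs(rng.normal(0.0, 1.0, (100, 100))) for _ in range(4)]
high_iso = [np.abs(rng.normal(0.0, 8.0, (100, 100))) for _ in range(4)]

print(median_aggregate(low_iso), median_aggregate(high_iso))
```

Aggregating repeats with a median rather than a mean keeps the odd hot-pixel outlier from skewing a camera’s figure.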

The graph shows 3 things, none surprising, but pleasant to see confirmed:

  1. The Lumix GH2 really was awful for sensor quality. Its ISO160 is the Sony NEX-7’s ISO 800.
  2. There is a reasonable progression from Lumix GH2 to Sony NEX-7 to Olympus E-M1 and Pen-F; at native resolution, the E-M1 and Pen-F are much of a muchness for L-channel noise.
  3. The real game-changer is the high-resolution mode on the Pen-F – good, since this is one of the main reasons I bought the camera. This is to be expected: it uses the in-body stabilization to blend 8 images, four of them offset by 1 pixel (so it has full chroma bit-depth at every pixel) and four offset by half a pixel (so it has double the linear luminosity resolution). The result is about 1.5 stops better noise-performance.
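The claimed noise benefit is easy to check arithmetically: averaging n independent noisy frames reduces the noise standard deviation by √n, and log₂√8 ≈ 1.5 stops. A simplified sketch (the real pipeline interleaves the offset frames into a finer grid rather than plain-averaging, but the noise arithmetic is the same):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Eight noisy exposures of the same flat (zero) scene.
frames = rng.normal(0.0, 10.0, (8, 200, 200))
blended = frames.mean(axis=0)  # blend the stack

single_noise = float(frames[0].std())
blended_noise = float(blended.std())
stops_gained = math.log2(single_noise / blended_noise)
print(round(stops_gained, 2))  # close to log2(sqrt(8)) = 1.5
```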

Hi-res mode comes with a few limitations: the maximum ISO is 1600; aperture only goes down to f/8; shutter speeds only as long as 8s. You can enable picture modes such as natural, vivid, monochrome etc; however it does not work with bracketing or HDR mode: for HDR, it’s best to work manually.

One interesting point in favour of high-resolution mode: it does not stress the lens’s performance; at no point does it take an image of more than the native resolution, so if the MTF curves of contrast vs spatial frequency are adequate for 20 megapixels, the lens is also good enough for 50-80 (more later).

Hands-on: what’s it like in the field?

I took both current cameras out for a drive around the Highlands of Perthshire this afternoon, the NEX-7 with its kit 18-55mm lens and the Pen-F attached to a new 17mm f/1.8 prime.

Here’s a quick overview of four images: one from the NEX-7, two straight from the Pen-F’s own JPEGs out of camera (just downsized for presentation here) and one converted in RawTherapee:

My first real image from the Pen-F: Amulree and Strathbraan Parish Church at the end of Glen Quaich.

The Olympus images all have the same white-balance; I set the ORF and ARW files to the white-balance the Pen-F chose automatically. It’s interesting that the sky is a proper shade of blue in the converted RAW; this might be a difference of profile between RawTherapee and the camera’s own internal JPEG engine. (Fine by me – I much prefer solid blue skies to cyan ones.)

What about the details?

Now this is a hard question. No doubt the resolution suffers and 80MPel is rather optimistic. Here we compare the NEX-7 against the 20MPel and 80MPel RAW files with no sharpening or detail controls enabled at all:

Obviously the 80MPel image is going to take a lot of sharpening to bring it up to scratch.

My preferred sharpening algorithm with the NEX-7 has long been deconvolution. However, despite tweaking the radius, strength, dampening and iterations controls a lot, RawTherapee doesn’t seem to have the strength to sharpen the 80MPel hi-res RAW.

The out-of-camera JPEG appears to use a cruder sharpening algorithm with a lot of local contrast, so I switched to unsharp mask. At 80MPel, with enough USM to render the individual stones in the dry stone wall adequately sharp, this reveals serious anti-aliasing issues around straight-line edges:

Pen-F edge-aliasing artefacts

These artefacts disappear when downsized to 50MPel in camera. Then again, the in-camera JPEG conversion is very contrasty and over-sharpened, so there’s room for improvement all-round:

In-camera JPEG conversion (20MPel) vs RawTherapee downscaled from 80MPel RAW

Update: by adjusting the USM parameters, I’ve removed most of the artefacts around edges. There are still problems where blades of grass have moved between source frames, but static objects with strong outlines no longer cause such problems.

disable edge-only and halo-detection, just use less USM to start with

There are other demosaicing algorithms in RawTherapee and other ways of sharpening (newly added wavelet support). Plenty of scope for experimenting to get the best out of this thing.

What’s the verdict?

I absolutely love using it. Where other cameras drive the megapixel race by giving increasingly large images for everything (40MPel+), this camera ticks along at a more than adequate native 20 megapixels for most purposes with the option of taking one’s time to crank out the tripod to make absolutely huge images. This suits me fine – I’ve spent so long doing similar image-stacking/super-resolution trickery as part of the post-processing workflow, it’s great to have the camera do it instead.

The in-body stabilization is excellent. I managed to hand-hold 10 frames at 1/10s at the Falls of Dochart, twice, without a single blurry image.

There are lots of other features to look forward to – I’ve never had the patience to try focus-stacking, but having it in-camera is very appealing.

With a subtle adjustment to the brighter blues to cater for skies, I could probably live off the camera’s own JPEGs for many purposes.

New lenses and filter adapter rings have been ordered…

Gratuitous sample images

As if the above weren’t nice enough, here are three more images from this afternoon.

Ordinary Landscape

The other day, we had a proponent of “creative landscape photography” presenting at the photographic society. His results were outstanding. I even approved of some of his philosophy, which is saying something.

But … some of his decisions in the execution of that philosophy seemed off.

It’s great to hear someone ignoring the maxim “get it right in camera”, espousing instead the idea of “getting good data at the scene” and affirming the role of post-processing to polish the result instead. I believe strongly in the very same thing myself.

It seems strange that such a philosophy would lead to rejoicing in a camera’s dynamic range – no matter that one claims to have recovered 4 stops from the shadows, it would still have been done better by HDR at the scene, where the data comes from photons working in the hardware’s optimum performance zone.

It seems strange that one would criticize such a camera for “not seeing the same way we do”, and go on to say that we need to enhance the impression of depth by using lower contrast in the distance.
The sensor’s response curve is smooth; it will accurately reflect the relative contrast in areas by distance. The only way it would not is if one’s processing were to actively include tonemapping with localized contrast equalization. Left to its own devices, the result will accurately reflect what it was like to be there – one of the greater compliments a landscape photographer could receive.

The problem is not with the landscape; it’s with the way photography is supposed to aspire to relate to the landscape. “Creative” seems to be a euphemism for multiplying and enhancing every aspect, be it strong foreground (make the perspective yet stronger with a tilt lens), contrasty light (push the contrast slider further), colour (still more saturation and vibrance, to breaking point) and so on. The results create impact but without story or message; visual salt without an underpinning of a particular taste.

Of course we know that “realism” is a phantom. It’s true that no camera will capture a scene quite the way anyone sees it. However, let me introduce a new word: believability – precisely the quality that one could have been there at the same time. That seems like a valid goal.

Have some believable landscape images – they mean a little to me; that suffices.

How I’ll be Voting

Individually. I hold the idea of nationality very loosely: it sometimes applies but does not define. I rate people trying to live their lives far higher than the games of politicians playing nations.

Thoughtfully. Manifestos have been read; factors – economic, political, constitutional, cultural, social – considered, their importances determined; a decision has been reached.

Carefully. The way things are looking, half the population is likely to be upset whatever happens. There will be a need to restore peace.

Independently. The presentation of the campaigns as a bickering of two opposing factions has not been attractive. First the reasoning and decision, then the labelling with political ideology (or preferably not at all).

Proudly. It has not been an easy summer, and getting to the stage of holding this poll-card in my hands has taken considerably more organization than intended.

Humbly. I am glad to have a voice in the running of the land I call home.

Knowing Your Lenses

Analysing Lens Performance

Background

Typically, when a manufacturer produces a new lens, they include an MTF chart to demonstrate its performance – sharpness measured as the number of line-pairs per millimetre it can resolve (to a given contrast tolerance), radially and sagittally, varying with distance from the centre of the image. While this might be useful before purchasing a lens, it does not make for easy comparisons (what if another manufacturer uses a different contrast tolerance?), nor does it necessarily reflect the use to which the lens will be put in practice. (What if they quote a zoom’s performance at 28mm and 70mm f/5.6, but you shoot it most at 35mm f/8? Is the difference between radial and sagittal sharpness useful for a real-world scene?)
Further, if your lens has been around in the kit-bag a bit, is its performance still up to scratch or has it gone soft, with internal elements slightly out of alignment? When you’re out in the field, does it matter whether you use 50mm in the middle of a zoom or a fixed focal-length prime 50mm instead?
If you’re shooting landscape, would it be better to hand-hold at f/5.6 or stop-down to f/16 for depth of field, use a tripod and risk losing sharpness to diffraction?
Does that blob of dust on the front element really make a difference?
How bad is the vignetting when used wide-open?

Here’s a fairly quick experiment to measure and compare lenses’ performance by aperture, two ways. First, we design a test-chart. It’s most useful if the pattern is even across the frame, reflecting a mixture of scales of detail – thick and thin lines – at a variety of angles – at least perpendicular, and maybe crazy pseudo-random designs. Here are a couple of ideas:
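As a sketch of the sort of chart that works (the sizes, bar periods and layout here are arbitrary choices of mine), the following tiles fine and coarse bars at perpendicular angles plus a pseudo-random block, and writes the result as a binary PGM so no imaging library is needed:

```python
import numpy as np

def bars(size, period, vertical=True):
    """Black/white bar pattern with the given period in pixels."""
    y, x = np.mgrid[0:size, 0:size]
    axis = x if vertical else y
    return ((axis // period) % 2).astype(np.uint8) * 255

def test_chart(size=800):
    """Quadrants of thin/thick bars plus pseudo-random noise, so every
    part of the frame carries detail at several scales and angles."""
    h = size // 2
    rng = np.random.default_rng(1)
    chart = np.empty((size, size), dtype=np.uint8)
    chart[:h, :h] = bars(h, 3, vertical=True)    # thin vertical lines
    chart[:h, h:] = bars(h, 24, vertical=False)  # thick horizontal lines
    chart[h:, :h] = bars(h, 24, vertical=True)   # thick vertical lines
    chart[h:, h:] = rng.integers(0, 2, (h, h), dtype=np.uint8) * 255
    return chart

chart = test_chart()
with open("test-chart.pgm", "wb") as f:
    f.write(b"P5\n%d %d\n255\n" % (chart.shape[1], chart.shape[0]))
    f.write(chart.tobytes())
```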

Method

Decide which lenses, at which apertures, you want to profile. Make your own design, print it out at least A4 size, and affix it to a wall. We want the lighting to be as even as possible, so ideally use indoor artificial light after dark (i.e. this is a good project for a dark evening). Carefully set up the camera on a tripod facing the chart square-on, and move close enough that the test-chart just fills the frame.

Camera settings: use the lowest ISO setting possible (normally around 100), to minimize sensor noise. Use aperture-priority mode so it chooses the shutter-speed itself and fix the white-balance to counteract the indoor lighting. (For a daylight lightbulb, use daylight; otherwise, fluorescent or tungsten. Avoid auto-white-balance.)
Either use a remote-release cable or wireless trigger, or enable a 2-second self-timer mode to allow shaking to die down after pushing the shutter. Assuming the paper with the test-chart is still mostly white, use exposure-compensation (+2/3 EV).
For each lens and focal-length, start with the widest aperture and close-down by a third or half a stop, taking two photos at each aperture.
Open a small text document and list each lens and focal-length in order as you go, along with any other salient features. (For example, with old manual prime lenses, note the start and final apertures and the interval size – may be half-stops, may be thirds of a stop.)
Use a RAW converter to process all the images identically: auto-exposure, fixed white-balance, and disable any sharpening, noise-reduction and rescaling you might ordinarily do. Output to 16-bit TIFF files.

Now, it is a given that a reasonable measure of pixel-level sharpness in an image, or part thereof, is its standard deviation. We can use the following Python script (requires numpy and OpenCV modules) to load image(s) and output the standard deviations of subsets of the images:

#!/usr/bin/env python

import sys, glob
import cv2
import numpy as np

def imageContrasts(fname):
    # load as greyscale; standard deviation is our sharpness/contrast proxy
    img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    a = np.asarray(img)
    height, width = a.shape
    third = a[0:height//3, 0:width//3]
    corner = a[0:height//8, 0:width//8]
    return np.std(a), np.std(third), np.std(corner)

def main():
    files = sys.argv[1:]
    if len(files) == 0:
        files = glob.glob("*.jpg")
    for f in files:
        co, ct, cc = imageContrasts(f)
        print("%s contrast %f %f %f" % (f, co, ct, cc))

if __name__ == "__main__":
    main()

We can build a spreadsheet listing the files, their apertures and sharpnesses overall and in the corners, where vignetting typically occurs. We can easily make a CSV file by looping the above script and some exiftool magic across all the output TIFF files:

bash$ for f in *.tif
do
  ap=$(exiftool "$f" | awk '/^Aperture/ {print $NF}')
  speed=$(exiftool "$f" | awk '/^Shutter Speed/ {print $NF}')
  conts=$(~/python/image-interpolation/image-sharpness-cv.py "$f" | sed 's/ /,/g')
  echo "$f,$ap,=$speed,$conts"
done

Typical output might look like:

P1440770.tif,,=1/20,P1440770.tif,contrast,29.235214,29.918323,22.694936
P1440771.tif,,=1/20,P1440771.tif,contrast,29.253372,29.943765,22.739748
P1440772.tif,,=1/15,P1440772.tif,contrast,29.572350,30.566767,25.006098
P1440773.tif,,=1/15,P1440773.tif,contrast,29.513443,30.529055,24.942437

Note the extra `=' signs: on opening this in LibreOffice, the formulae will be evaluated and the fractions converted to floating-point values in seconds. Remove the spurious `contrast' column, add a header row (fname,aperture,speed,fname,entire,third,corner), and add a lens column from your notes file so the analysis can group by lens.
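The tidy-up can also be scripted rather than done by hand in the spreadsheet (a sketch; `tidy` is my name for it, it assumes the column layout of the sample output above, and it evaluates the `=1/20`-style fractions directly instead of relying on the spreadsheet):

```python
import csv
from fractions import Fraction

def tidy(in_path="sharpness.csv", out_path="sharpness-clean.csv"):
    # Input columns, as in the sample output above:
    # fname, aperture, =speed, fname, "contrast", entire, third, corner
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(["fname", "aperture", "speed", "entire", "third", "corner"])
        for row in csv.reader(src):
            speed = float(Fraction(row[2].lstrip("=")))  # "=1/20" -> 0.05 s
            writer.writerow([row[0], row[1], speed, row[5], row[6], row[7]])
```

A lens column still needs adding (from the notes file) before the analysis step.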

Analysis

Let’s draw some graphs. If you wish to stay with the spreadsheet, use a pivot table to average the half-image contrast values per aperture per lens and work off that. Alternatively, a bit of interactive R can lead to some very pretty graphs:

> install.packages("ggplot2")
> library(ggplot2)
> data<-read.csv("sharpness.csv") 
> aggs<-aggregate(cbind(speed,entire,third,corner) ~lens+aperture, data, FUN=mean)
> qplot(aggs$aperture, aggs$third, col=aggs$lens, data=aggs, asp=.5)+geom_smooth()

This will give a comparison of overall sharpness by aperture, grouped by lens. Typically we expect every lens to have a sweet-spot aperture at which it is sharpest; my own examples are no exception. There are 5 lenses at play here: a kit 14-42mm zoom, measured at both 30mm and 42mm; a Minolta Rokkor 55mm prime; a Pentacon 30mm prime; and two Pentacon 50mm f/1.8 primes, one brand new off eBay and one that’s been in the bag for 4 years.

The old Pentacon 50mm was sharpest at f/5.6 but is now the second-lowest at almost every aperture – we’ll come back to this. The new Pentacon 50mm is the sharpest of all the lenses from f/5 onwards, peaking at around f/7.1. The kit zoom is obviously designed to be used around f/8; the Pentacon 30mm prime is ludicrously unsharp at all apertures – given a choice between the kit zoom at 30mm and the prime, the zoom, unconventionally, wins every time. And the odd one out, the Rokkor 55mm, peaks at a mere f/4.

How about the drop-off, the factor by which the extreme corners are less sharp than the overall image? Again, a quick calculation and plot in R shows:

> aggs$dropoff <- aggs$corner / aggs$third
> qplot(aggs$aperture, aggs$dropoff, col=aggs$lens, data=aggs, asp=.5)+geom_smooth()

Some observations:

  • all the lenses show a similar pattern, worst vignetting at widest apertures, peaking somewhere in the middle and then attenuating slightly;
  • there is a drastic difference between the kit zoom lens at 30mm (among the best performances) and 42mm (the worst, by far);
  • the old Pentacon 50mm lens had best drop-off around f/8;
  • the new Pentacon 50mm has the least drop-off at around f/11;
  • the Minolta Rokkor 55mm peaks at f/4 again.

So, why do the old and new Pentacon 50mm lenses differ so badly? Let’s conclude by examining the shutter-speeds; by allowing the camera to automate the exposure in aperture-priority mode, whilst keeping the scene and its illumination constant, we can plot a graph showing each lens’s transmission against aperture.

> qplot(aggs$aperture, log2(aggs$speed/min(aggs$speed)), col=aggs$lens, data=aggs, asp=.5)+geom_smooth()

Here we see the new Pentacon 50mm lens seems to require the least increase in shutter-speed per stop of aperture while, above around f/7.1, the old Pentacon 50mm lens requires the greatest – rocketing off at a different gradient to everything else, such that by f/11 it’s fully 2 stops slower than its replacement.

There’s a reason for this: sadly, the anti-reflective coating is starting to come off in the centre of the front element of the old lens, in a rough circle approximately 5mm in diameter. At around f/10 the iris of a 50mm lens is itself about 5mm across, so this damaged patch becomes a significant contributor to the overall exposure.
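A quick check of that arithmetic – the effective aperture (entrance-pupil) diameter is the focal length divided by the f-number (the function name here is mine):

```python
def iris_diameter(focal_length_mm, f_number):
    """Entrance-pupil diameter in mm: focal length divided by f-number."""
    return focal_length_mm / f_number

print(iris_diameter(50, 10))   # 5.0 -- about the size of the damaged patch
print(iris_diameter(50, 1.8))  # wide open the patch is a small fraction of the pupil
```

So the coating damage only dominates once the lens is stopped down to the point where the pupil shrinks to roughly the size of the patch.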

Conclusion

With the above data, informed choices can be made as to which lens and aperture suit particular scenes. When shooting a wide-angle landscape, I can use the kit zoom at f/8; for nature and woodland closeup work where the close-focussing distance allows, the Minolta Rokkor 55mm at f/4 is ideal.