My Photography Zen just ran over your Dogma

In a previous life I used to attend the local photography club; there were ups (friends; winning the occasional competition; being involved, mostly in geekish matters; making the occasional presentation) and some noteworthy downs (a very rude judge; a feeling that photos were judged according to how “lucky” they were to have been found, rather than appreciating skill in the making). On the whole, however, they were welcoming and friendly and it was generally “my” club.

So, now we’ve settled elsewhere in the country, I thought I’d investigate the local camera club.

Such conversation as there was was mostly limited to willy-waving about equipment:

  • someone’s selling their Canon 550D and someone else has a 7D. I don’t care; show me the results from either.
  • “oh yeah, oh yeah, Sigma’s a good make. I’ve got a Tokina 11-16mm lens, that’s quite wide-angle”. Immodest one-upmanship, as though one’s lens’s field of view is something to boast about.
  • “I don’t use Photoshop. I figure if you’ve spent thousands on a good camera…”. Well good for you; if I’d spent thousands on my camera I’d consider it imbalanced not to be matched in complexity of software.
  • “I don’t use Photoshop. It makes competitions unfair, that someone puts in the work to do it correctly in-camera and someone else takes a lousy photo and makes it good”. Wake up: post-processing is a necessary part of photography, and at least they’re getting results.
  • “So what do you use?” A Panasonic. “Oh.” Pause. “I have a Nikon” (points to bag emblazoned with a Sony logo). I’m as glad for you as the results you get are good.
  • “I bought a 100-400mm lens for £650; the chap selling it said it had auto-focus but I was disappointed when it didn’t”. Tough luck; my favourite lens cost me £40 on eBay.

It being the first night of the season, there was a short presentation of a selection of images (of varying degrees of appeal; “sorry but we haven’t calibrated the projector yet” – and it showed, as the red channel was completely blown) and an awful lot of laying down of rules about competitions (down to 2 photos per contributor per competition, inexplicably; a meagre 1024×768px size for digital submissions).

The arguments are old, tired and banal. There was no evidence of people being interested in each other’s choice of subject-matter or approaches to it – what makes them tick as photographers, people, artists. Do you like landscape, or find it too anyone-can-appreciate-it, common-denominator? Portraiture? Sports? Abstract art, fine art? Microscopy?

As a parallel to the hardware fixation, all talk of software – including, critically, “advancing one’s photography” – centred on Adobe Photoshop, as though it were the only tool out there.

To state the obvious, photography is not about hardware – the clue’s in the name: light-writing; it’s about the image and what thoughts it evokes in one’s brain. Advancement is a matter of choice, control (pushing the right buttons for desired outcomes) and knowledge – the practical experience of requirements and attainability at the time of shooting, and of rationally choosing settings for their consequences as a function of the image-space (rather than as individual corrections or tweaks) in post-processing.

Nor is post-processing all about Photoshop either – but I know what saturation, high-pass sharpening, barrel-distortion-correction and channel-mixing do to an image, and I know how my particular camera behaves with noise arising from high ISO as distinct from longer exposures, and that I prefer to remove both kinds of the camera’s noise as much as possible only to reapply synthetic grain using a Perlin noise generator. All are functions of the image, independent of the software being used.

From experience, the best a competition can do is assign a score – subject to the judge’s biases – and, typically, make some superficial remark that one might “…just tweak the sharpness a bit”… as though there were only one such control (I can think of simple, deconvolution, unsharp masking, high-pass, refocus and Wiener sharpening algorithms off the top of my head – which of these do they mean?!). This does not lead to advancement.

I do not have to subscribe to their arbitrary rules, limitations and regulations.

I will not be attending that photo-club.

I have images to be making!

Comparing MP3, Ogg and AAC audio encoding formats

Over the past couple of months, I’ve been both learning the R language and boosting my understanding of statistics, the one feeding off the other. To aid the learning process, I set myself a project to answer the age-old question: which lossy audio format is best for encoding my music collection? Does it matter if I use Ogg (oggenc), MP3 (lame) or AAC (faac), and if so, what bitrate should I use? Is there any significance in the commonly used rates, such as mp3 at 128 or 192kbit/s?

First, a disclaimer: when it comes to assessing the accuracy of encoding formats, there are three ways to do it: you can listen to the differences, you can see them, or you can count them. This experiment aims to be objective through a brute-force test, given the constraints of my CD library. Therefore headphones, ears, speakers and fallible human senses do not feature in this – we’re into counting statistical differences, regardless of the nature of the effect they have on the listening experience. It is assumed that the uncompressed audio waveform as ripped from CD expresses the authoritative intent of its artists and sound engineers – and their equipment is more expensive than mine. For the purposes of this test, what matters is how closely the results of encoding and decoding compare with the original uncompressed wave.

I chose a simple metric to measure this: the root-mean-square (RMS) difference between corresponding samples as a measure of the distance between the original and processed waves. Small values are better – the processed wave is a more faithful reproduction the closer the RMSD is to zero. The primary aim is to get a feel for how the RMSD varies with bitrate and encoding format.
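That metric is a one-liner in R. Here is a minimal sketch of my own (not the original scripts), trimming both waves to a common length first – necessary, as will become apparent below, because some decoders pad their output with extra samples:

rmsd <- function(a, b) {
  # compare sample-by-sample over the shorter of the two waves
  n <- min(length(a), length(b))
  sqrt(mean((a[1:n] - b[1:n])^2))
}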

For a representative sample, I started with the Soundcheck CD – 92 tracks consisting of a mixture of precisely controlled audio tests, sound-effects and samples – augmented with a small number of CDs of my own: some rock (Queen, Runrig) and classical (Dvorak, Bach and Beethoven).

On the way, I encountered a few surprises, starting with this:

Errors using avconv to decode mp3

Initially I was using Arch Linux, where sox is compiled with the ability to decode mp3 straight to wav. The problem: it prepends approximately 1300 zero samples at the start of the output (and slightly distorts the start). Using avconv instead had no such problem. I then switched to Ubuntu Linux, where the opposite problem occurs: sox is compiled without mp3 support, and instead it’s avconv that inserts the zero samples, as the above screenshot of Audacity shows. Therefore, I settled on using lame both to encode and decode.
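For the record, one round trip through lame looks something like this – a sketch only, wrapped in R’s system() for consistency with the analysis scripts (the real zsh driver varies the bitrate; lame’s --decode switch writes a wav straight back out):

# encode at a given average bitrate, then decode straight back to wav
system('lame --quiet --abr 192 original.wav foo.mp3')
system('lame --quiet --decode foo.mp3 decoded.wav')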

One pleasant surprise, however: loading wav files into R just requires two commands:

install.packages("audio")
library("audio")

and then the function load.wave(filename) will load a file straight into a (long) vector, normalized into the range [-1,1]. What could be easier?
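Putting the pieces together, comparing an original rip against its decoded counterpart might look like this – filenames hypothetical, rmsd() being the helper sketched earlier:

library(audio)
orig <- load.wave("original.wav")    # the reference rip from CD
proc <- load.wave("decoded.wav")     # the same track after encode+decode
rmsd(orig, proc)                     # closer to zero means more faithful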

At this point, interested readers might wish to see the sources used: R audio encoding comparison. I used a zsh script to iterate over bitrates, varying the commands used for each format; a further script iterates over files in other subdirectories as well. R scripts were used for calculating the RMSD between two wavs and for performing statistical analysis.

To complicate matters further, lame, the mp3 encoder, offers several algorithms to choose between: straightforward variable-bitrate encoding, optionally with a "-h" switch to enable higher quality, optionally with a lower bound "-b", or alternatively, constant bitrate. Further, faac, the AAC encoder, offers a quality control ("-q").

The formats and commands used to generate them are as follows:

AAC-100  faac -q 100 -b $i -o foo.aac "$src" 2
AAC-80   faac -q 80 -b $i -o foo.aac "$src" 2
mp3hu    lame --quiet --abr $i -b 100 -h "$src" foo.mp3
mp3h     lame --quiet --abr $i -h "$src" foo.mp3
mp3      lame --quiet --abr $i "$src" foo.mp3
mp3-cbr  lame --quiet --cbr -b $i "$src" foo.mp3
ogg      oggenc --quiet -b $i "$src" -o foo.ogg

As a measure of the size of the experiment: 150 tracks were encoded to Ogg, the four variants of MP3 and both AAC variants, at bitrates varying from 32 to 320kbit/s in steps of 5kbit/s.

The distribution of genres across the sample is as follows:

Genre                               Tracks
Noise (pink noise with variations)       5
Technical                               63
Instrumental                            20
Vocal                                    2
Classical                               39
Rock                                    21

Final totals: 30450 encode+decode operations taking approximately 40 hours to run.

The Analysis

Reassuringly, a quick regression shows that the RMSD (“deltas”) column depends significantly on both bitrate and format:

> fit <- lm(deltas ~ bitrate + format, d)
> summary(fit)

Call:
lm(formula = deltas ~ bitrate + format, data = d)

Residuals:
     Min       1Q   Median       3Q      Max 
-0.06063 -0.02531 -0.00685  0.01225  0.49041 

Coefficients:
                Estimate Std. Error t value Pr(>|t|)    
(Intercept)    6.348e-02  8.663e-04  73.274  < 2e-16 ***
bitrate       -3.048e-04  3.091e-06 -98.601  < 2e-16 ***
formatAAC-80  -9.696e-16  9.674e-04   0.000 1.000000    
formatmp3      2.331e-02  9.674e-04  24.091  < 2e-16 ***
formatmp3-cbr  2.325e-02  9.674e-04  24.034  < 2e-16 ***
formatmp3h     2.353e-02  9.674e-04  24.324  < 2e-16 ***
formatmp3hu    2.353e-02  9.674e-04  24.325  < 2e-16 ***
formatogg      3.690e-03  9.676e-04   3.814 0.000137 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

Residual standard error: 0.04512 on 30438 degrees of freedom
  (4 observations deleted due to missingness)
Multiple R-squared: 0.275,      Adjusted R-squared: 0.2748 
F-statistic:  1649 on 7 and 30438 DF,  p-value: < 2.2e-16

Note that, for the most part, the low p-values indicate a strong dependence of RMSD (“deltas”) on bitrate and format; we’ll address the AAC rows in a minute. Comparing correlations (with the cor function) shows:

Contributing factors - deltas:
    trackno     bitrate      format       genre   complexity
0.123461675 0.481264066 0.078584139 0.056769403  0.007699271

The genre and complexity columns were added by lookup against a hand-coded CSV spreadsheet; genre proved useful, but complexity was mostly a guess and will not be pursued further.
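For the curious, those figures can be reproduced along these lines – an approximation on my part, since cor() needs numeric input and the factor columns must first be coerced to their underlying integer codes:

cols <- c("trackno", "bitrate", "format", "genre", "complexity")
sapply(cols, function(col)
    abs(cor(as.numeric(d[[col]]), d$deltas, use = "complete.obs")))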

In the subsequent graphs, the lines are coloured by format as follows:

  • green – ogg
  • black – AAC
  • red – mp3 (ordinary VBR)
  • purple – mp3 (VBR with -h)
  • orange – mp3 (VBR with -b 100 lower bound)
  • blue – mp3 (CBR)

A complete overview:

Encoding errors, overview

Already we can see three surprises:

  • some drastic behaviour at low bitrates: up to 50kbit/s, ogg is hopelessly inaccurate; mp3 remains poor – actually getting worse – until 100kbit/s.
  • There is no purple line; mp3 encoded with lame's -h switch is indistinguishable from vanilla mp3.
  • There is only one black line, showing that the faac AAC encoder ignores the quality setting (80 or 100% – both are superimposed) and appears to constrain the bitrate to a maximum of 150kbit/s, outputting identical files regardless of the rate requested above that limit.

On the strength of these observations, further analysis concentrates on bitrates above 100kbit/s only; the AAC format is dropped entirely and mp3h is ignored.

This changes the statistics significantly. Aggregating the average RMSD (deltas) by genre over all bitrates and formats, the entire data-set gives:

Aggregates by genre:
           genre       deltas
1        0-Noise  0.016521056
2    1-Technical  0.012121707
3 2-Instrumental  0.009528249
4        3-Voice  0.009226851
5    4-Classical  0.011838209
6         5-Rock  0.017322585

When only the higher bitrates are considered, the same average deltas are drastically reduced:

Aggregates by genre - high-bitrate:
           genre       deltas
1        0-Noise  0.006603641
2    1-Technical  0.003645403
3 2-Instrumental  0.003438490
4        3-Voice  0.005419467
5    4-Classical  0.004444285
6         5-Rock  0.007251133
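Tables like these two fall out of R’s aggregate function; a sketch, assuming the same data-frame d as in the regression above:

aggregate(deltas ~ genre, data = d, FUN = mean)                          # entire data-set
aggregate(deltas ~ genre, data = subset(d, bitrate > 100), FUN = mean)   # high-bitrate only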

The high-bitrate end of the graph, coupled with the blend of genres, represents the likeliest real-world uses. Zooming in:

Encoding errors, high bitrate only

Reassuringly, for each format the general trend is definitely downwards. However, a few further observations await:

  • Ogg gives the smoothest curve but, above about 180kbit/s, is left behind by all the variants of mp3.
  • The orange line (mp3, VBR with a lower bound of 100kbit/s) was introduced because of the performance at the low-bitrate end of the graph. Unexpectedly, it consistently performs worse than straightforward VBR mp3, whose range includes lower bitrates.
  • All the mp3 lines exhibit steps, where a few points follow a shallow gradient before a larger jump to the next; one would need to examine the mp3 algorithm itself to understand why.

Deductions

Two frequently encountered bitrates in the wild are 128 and 192kbit/s. Comparing the mean RMSDs at these rates and for all bitrates above 100kbit/s, we see:

   format   deltas(128)  deltas(192)  deltas(hbr)
1 AAC-100  0.009242665   0.008223191  0.006214152
2  AAC-80  0.009242665   0.008223191  0.006214152
3     mp3  0.007813217   0.004489050  0.003357357
4 mp3-cbr  0.007791626   0.004417602  0.003281961
5    mp3h  0.008454780   0.004778378  0.003643497
6   mp3hu  0.008454780   0.004778408  0.003643697
7     ogg  0.007822895   0.004919718  0.003648990

And the conclusion is: at 128 and 192kbps and on average across all higher bitrates, MP3 with constant bitrate encoding is the most accurate.

Should I use it?

The above test of accuracy is only one consideration in one’s choice of data format. Other considerations include the openness of the Ogg format versus the patent-encumbrance of mp3, and the degree to which your devices support each format. Note also that, if you wish, the accuracy of mp3-cbr at about 200kbit/s can be equalled by ogg at about 250kbit/s.

Correlations between all combinations of factors

Returning to the disclaimer above: we have reduced the concept of a difference between two waveforms to a single number, the RMSD. To put this number in context, this plot of the waveforms illustrates the scope of errors encountered:

visualizing encoding differences

(Reference wave on top; AAC in the middle, showing huge distortion; MP3 on the bottom, showing slight differences.)

This demonstrates why tests based on listening are fallible: the errors manifest themselves as a changed profile in a frequency histogram. When listening to the AAC version, the sound heard is modulated by the product of both the residual errors from encoding and the frequency-response curve of the headphones or speakers used. Coupled with this, the human ear is not listening for accuracy to a waveform so much as for an enjoyable experience overall – so some kinds of error might, mistakenly, be regarded as pleasurable.

There are other metrics than RMSD by which the distance between source and processed waveforms could be measured. Changing the measuring algorithm is left as an exercise for the reader.

Feel free to download the full results for yourself – beware, the CSV spreadsheet is over 30k rows long.

Welding Glass and Rubber Bands

Sometimes, one feels the need to make long-exposure photos (15s+) during daylight. One option is to buy a Lee Big Stopper, Hitech 100 ND1000 or other neutral-density filter, but these are far from cheap. Alternatively, one could stack a bunch of cheap ND filters on top of each other (ND8+ND4 etc), but then the image-quality suffers and there may be significant magenta colour tints. Further, as the degree of filtration increases, the necessity of preventing light-leaks around the edges of the filters also increases.

Enter the cheapskate’s option: welding glass. According to my experiments, this stuff passes only a tiny fraction of the light from the scene, extending exposure times by 10-11 stops. Admittedly, it does more than just tint the scene: it colours it a thick, sludgy, lurid green, seriously reducing the colour bit-depth in the red and blue channels. Further, it might not fit a regular filter holder.

Hence, a rather complicated shooting and processing methodology.

Perpetrating long exposures, complete with rubber bands and welding glass.

When out in the field, if the lens-hood can be inverted, you can use rubber bands to hold the welding glass in place. First advantage: the filter is as close to the lens as it can be, so no light leaks.

Second, we shoot a panorama. Here the exposure varies with height in the intended finished scene, so we choose a vertorama, where individual shots are taken in landscape orientation, working up from foreground rocks at the bottom (longer exposure times) to sky at the top; this keeps the contrast-ratio low in each individual frame. In all, 16 images were shot whilst panning the tripod on its vertical axis, totalling 186 seconds of exposure time.

A single source image showing the lurid welding-glass green “tint” and massive lens distortion.

These images were processed in Photivo, using one of my standard presets with corrected white-balance, minimal but adequate noise-reduction (Pyramid algorithm) and Wiener sharpening. Lens distortion, whilst acute, is not addressed at this stage.

Top tip: in scenes such as this, the water just below the far horizon is the closest to an average grey in the light, and makes a good reference for spotting white-balance.

Stitching is done in Hugin, where the considerable distortion from a cheap wide-angle kit zoom lens can be optimized out – the horizon line, bowed to varying degrees by angle of inclination, becomes straight almost by magic. Further, Hugin was set to blend the images using enfuse, which allows selecting pixels to favour both exposure and image entropy – this simply chooses the pixels closest to a midtone, constituting a simple natural HDR.

Second advantage: we’ve just solved lens vignetting as well, because the pixels that were in one image’s corners can be taken from mid-edge pixels in other images.

The output from Hugin’s panorama stitching is a realistic and natural reference image.

From there, it falls to the usual odyssey of post-processing, mostly using Darktable to choose the finished appearance: crop to square, conversion to black and white (with an 80% orange filter), high-pass sharpening and spatial (Fourier-space) contrast adjustments, a low-pass filter in multiply mode coupled with tonemapping to even out the large-scale contrast whilst retaining lots of local detail, a grad-ND filter applied to retain some contrast in the sky, etc.

At this point, let’s pause to consider the alternatives. Pick an amount of money: you can spend as much as you care to mention on a dSLR to get, maybe, 24 megapixels. What little you have left over can be spent on a sharp lens and a 10-stop ND filter. Now consider taking one image of 3 minutes’ exposure and processing it in some unmentionable popular proprietary processing software. First, the rotation to straighten the horizon and the crop from 3:2 aspect-ratio to square are going to cost you about 35% of the pixels, so 24 megapixels is down to about 15. Then noise-reduction costs per pixel, i.e. the bit-depth is reduced by maybe 2 bits (on average) from the 12 or 14 bits per pixel per channel. Further processing will remove hot and dead pixels. All this is before deciding to process for a particular appearance – we’re still working on a good reference image, the data is vanishing with every click of the mouse, and what little we’re left with is tainted by algorithmic approximation rather than originating with photons coming from the scene.

By performing the panorama routine, the equipment can be much cheaper: a 15-megapixel sensor with poor bit-depth, and yet the archived processed image is 22.5 megapixels, all of them good, originating more from photons than approximation.

And it looks pretty good, too:

There Is No Boat.

As an aside, a small boat sailed right around and through the scene while I was shooting the panorama. Because all of the exposures were fairly long, it left no noticeable impression on the resultant image at all.

[Some product links may be via the Amazon affiliate programme.]

Our Earth Was Once Green

A few months ago, when we moved into the area, one of the first local scenes I spotted was this view of beech trees along the brow of a small hill, running along the side of a fence, terminating with a gate and a hawthorn tree.

I managed to capture the view with the remnants of snow as the deeper drifts melted away:

Some Trees

and I wrote a previous article about what set that image apart from a quick mobile snap of the same scene.

As a photographer, I was looking forward to capturing the scene in varying seasons – indeed, I could anticipate my output becoming repetitive and running out of inspiration for different ways to portray it.

I got as far as two variations.

First, The Answer: tall beech trees, covered in new foliage for the onset of summer, blowing in the wind:

The Answer

and then an evening portrait of summer skies with blue wispy cirrus clouds above the trees:

Some Sky

In the past week, the impressive beeches have been cut down; a drain pipe has been laid just this side of the trees, the fence has been removed and the whole hillside has been ploughed, so what used to be an expanse of green grass is currently brown soil. I guess at least that won’t last long before it recovers.

Our Earth Was Once Green

Yesterday evening I caught a TV programme about Scotland’s landscape, seen through the lens of some awful Victorian book – a rather romantic tourist guide to “picturesque” views. The programme contrasted this with tourists’ search for an “authentic experience” of Scotland, yet pointed out how, more or less by definition, to be a tourist is inevitably to be an uninvolved spectator.

One of the guests on the programme was a local photographer, who explained how landscape photographers struggle with the dichotomy of presenting the landscape as timeless, pure and untainted by human hand, whilst knowing in the back of their minds that they’re perpetrating a myth through selectivity – the landscape is far from wild and natural: the deforestation dates back eight millennia to pre-history, and what now appears as Highland heather-clad grouse-moor heath used to be crofting land prior to the Clearances, etc.

While landscape used to be my chosen genre of photography, and a fair proportion of what I now shoot – including the above – still qualifies as such, I think it’s time to recognize that landscape photography is not just about the tourist photographer seeking ever-wilder, ever-more-northern scenery, nice as that can be. It can also involve less travel, whilst valuably documenting the landscape changing from year to year, whether those changes arise from natural forces or human intervention.

Bird on a Wire

It had to be done:

"Bird on a Wire"

“Bird on a Wire”

A few days ago a contact on flickr observed that I’d entitled a photo in a rather perfunctory fashion: “A Tree”. The tree itself is relatively characterful, its branches blown by the prevailing wind into an exuberant tentacle-waving display; the work that went into the processing of that image was, as usual, significant: it’s a vertorama, the presets used for each image evolved for optimum image quality, the stitching took time, and a lot more time was spent choosing the filtering, black&white conversion and toning.

One thing I noticed about competitions in photography clubs is a distinct tendency for photos to be awarded higher marks according to the literality of their titles relative to their physical subject-matter. This was irksome at the time (I’m currently “between clubs”), but on a little thought I’ve realised there are several styles of titling photos:

  • one layer of abstraction towards a concept: “dreich”
  • literal: “sparrow”, “buzzard”, “dandelion”
  • understated – perhaps textual / ideological minimalism parallel to the photographic aesthetic
  • cynical, throw-away or orthogonal (e.g. “Untitled” or using the camera’s sequence-numbered filename)
  • outright cliché

Some photos just fall straight into the last category. No amount of grungy texture-blending processing is going to stop people seeing my photo of the day and superstitiously muttering the incantation “bird on a wire”; a Google image-search brings up an entire page full of similar images, differing mostly in the number of birds involved. Similarly, there’s something in the shared photographer psyche that instantly entitles any example of wabi-sabi “seen better days”. Googling that is left as an exercise for the reader.

More to the point, if a photographer (or their editor) lacks sufficient background story to entitle such a shot otherwise, should it have been taken at all, except for the sake of adding to one’s collection?

I’m up to 7 birds on the wire now and they’ve changed into pigeons.

Old is the new… old, really

Time for a bit of a rant, geek-style.

For about 15 months I’ve used a Samsung Galaxy Note as my mobile phone of choice. And, truth be told, I’ve hated all bar the first 10 minutes of it. The lack of RAM has made me reconsider what apps I use, favouring Seesmic for combined Facebook+Twitter over separate apps; it’s still horrendously slow for running anything interactive – as an example, by the time I’d got the thing unlocked and fired up HDR Camera+, a rainbow had entirely gone! Spontaneity, we do not have it.

What they don’t tell you is that the leverage available when coupling these large screens with tiny thin USB connectors is a recipe for disaster. I’ve bent the pins on more cables than I care to think, trying to wiggle the thing for optimum solid connectivity; just before Christmas last year it finally died and the internal USB connector broke, requiring a repair under warranty – the first time I’ve ever had to send back a phone to be fixed.

Anyway, yesterday evening it threw a hissy-fit and spontaneously discharged the battery all the way down to empty, despite being plugged in (and, given that the wiggly-connector issues seemed to be returning, I made doubly sure that the power icon said charging). I recharged the battery overnight with the phone turned off, and yet this morning it still refused to boot up. It did something like this a while ago, and leaving it overnight was sufficient to reset itself; today, it’s had all day and still isn’t powering on any more.

So this morning I dug out my old HTC Hero, the first Android phone I ever used, flashed it with CyanogenMod and updated the radio package. After a lot of faffing around, hunting down and installing SSL root CA certificates to allow it to talk to Google, and rebooting, I’m now powered by a custom ROM and sufficiently up-to-date that Google Talk has been replaced by Hangouts. Yay, geek-cred!

The Samsung problem might be as simple as a dead battery; however, after the Christmas fiasco I really can’t be bothered trying to fix it any more and will look forward to getting something better later in the year.

Unfortunately it means a few changes on this blog: the camera is down from 8 megapixels to 5, and I won’t be shooting with HDR Camera+ or processing images on the phone with Aviary any more. Still, it works for capturing the odd image, and I can achieve some suitably strange processing effects on the notebook with darktable if I want.

Meanwhile I’m back getting my communication and maps on the go again. And the Hero is living up to its name.

the mind of a dog-walking automaton

damaged crash-barrier beside the B738 Dunskey Road

In the course of this morning’s walkies, Dog & I passed a police van with blue lights flashing, finishing-up at a small scene of wayside destruction.

I’ve been mulling over the nature of interactions with road users for a few months. Every day, we walk along roads (single-carriageway A-road trunk and B-road country lanes, such as this), all subject to the national speed limit (max 60mph for cars). Being dutiful citizens, we tend to follow the Highway Code and stroll along the right-hand side of the road, facing into oncoming traffic, as one might expect.

Being logical, I like to think in terms of a table of combinations of possible encounters:

  • A given oncoming vehicle’s speed may be such that they pass way too fast, pass at a comfortable speed, or dawdle slowly past. The distance at which they pass can be too close (ignoring me standing in the gutter), smoothly swerved round to give an extra metre’s gap, or they can indicate and cross to the other side of the road. Degrees of interaction vary from car-occupants who gawp and stare, to those who appear to consciously avoid eye-contact, a few who nod, some who raise a finger above the steering wheel, some who smile, to those who wave, sometimes even enthusiastically.
  • My normal dog-walking position is along the side of the road, with Dog on the grass verge beside me. According to circumstances, I can variously hop onto the verge myself, stop to help them pass (particularly if the road is narrow or there’s traffic coming in both directions), and/or nod, smile or wave (sometimes even enthusiastically, if I recognize the driver).

So, what combinations constitute the happiest encounters? To be clear, I don’t find comparatively high speeds inherently worrying – otherwise I wouldn’t walk beside roads with a 60mph limit; within reason, driving briskly only minimizes the encounter time, which is fine – we’ve all got places to go. I do find it horribly rude when drivers slow down so much and stare, as though Dog & I were some kind of roadside exhibition for their tourist pleasure, especially if I’ve paused to let them pass – it’s a presumption on my time. Considering the combinations of possible action and response above, I notice that happiness is maximized, not in simple direct proportion to the magnitude of a gesture (a wave beats a nod, etc), but rather when a gesture is present but middling (pulling out a yard beats overegging it and crossing the white lines to the other side of the road – what kind of an obstacle am I?!).

Those are the minutiae of decisions. Returning to the bigger picture, to reduce encounters to a simple matter of speed and law would be to neglect a vital other dimension: in acknowledging when other parties show consideration, however small, we acknowledge their humanity. We do not exist in islands made of metal boxes, but we live in relationship to a community of all road users; therein enjoyment can be found.

Water, water, everywhere

Saturday was wet. Very wet.

This was the view just outside Portpatrick, at the corner of the Dunskey B-road turnoff, water flooding off the hill onto the road.

Flood

On arriving home, I discovered a drain outside the gate was unable to cope with the sudden rain, and a gutter-load of water was flowing in a channel down the driveway, pushing gravel through the gate.

It took half an hour’s shovelling gravel to persuade some of the water to flow into a flowerbed the other side of the fence; in the meantime, the garage was flooded about 3in deep.

A couple of days later, the scale of the damage caused is clear; while the driveway is now arguably better than it was before, the roadside embankments have obviously suffered:

Collapsed embankment

as has the road surface – the erosion from gravel and dirt flowing over tarmac is quite severe:

tarmac road surface showing serious erosion

Selectivity, or What is “Good”?

I remember two particular thoughts from my time in a photo-club (past-tense, as we’ve moved house since).

First, discussing landscape photography with another member, he said he used a large-format (5×4) camera and Fuji Velvia film “because it gets results that express how it felt to be there at the time”. To be quite honest, this put me off landscape – particularly large-format – because I realised the nonsense behind it: after all, if I can name half a dozen people who all shoot LF+Velvia then that conflicts with the unique individuality of a feeling. Face it, it’s just a trend. [0]

Second, a lingering suspicion from club competitions. I felt that the previous photo-club had a bias toward photos where the photographer “got lucky”, which prompted thoughts of rebellion – surely a photo should be marked according to the skill exhibited in its making?

So what is landscape photography? Is it right to just wander around the landscape, see what looks nice and point the camera in the right direction? Or is it desirable to spend hours poring over OS maps, Google Streetview and the MetOffice weather to plan a trip to a specific scene with particular favourable lighting and conditions?

By what criteria will the resultant photograph be regarded as “good” – that it conforms to some guidelines of composition, that it tells a human-interest story, or that it merely represents what it looked like to be in a given place and time?

This pair of images serves as an example.

I like the first, as a reminder of walking the dog here every day (my story!), but it would be fair to say it was little more than an illustrative snap – the lighting is sub-optimal, the tonality is crass, the image-quality is fairly poor (as befits an image taken on a mobile phone) and, above all, it’s cluttered: the road and wind-turbine introduce whole new themes of mankind’s interactions with nature.

But a couple of weeks ago, I made a photo Some Trees from nearly the same viewpoint:

And this is favourably regarded in photographic circles. The difference is selectivity: by choosing a tighter framing (both at the scene and only a slight crop in post-processing), I’ve avoided the road and wind turbine, concentrating on the wind-blown trees plodding like Atlas up the outline of the hill and the remaining vestiges of snow. Of course, by also removing the colour in favour of plain black and white, I’ve made the silhouette more stark and concentrated on shapes and lines and form rather than realism.

But the snap is still true to life. Maybe there’s room for saying landscape photography should be both about the photography (which introduces detached, external, off-line phenomena during the stages of image-construction and criticism) and about the landscape, which one should simply appreciate for how it really looks, without being excessively selective?


[0] The other member left the club soon after, but returned for just one evening a couple of years later, where we had quite a cathartic chat: I was returning to landscape on my own terms, and he was now shooting sports digitally elsewhere in the country. And the wheel turns…