What I did on my holidays

To celebrate my increasing antiquity at the end of August, I spent a happy few days in the Lake District with Mum & Dad.

We stayed in a hotel around Borrowdale, with access to Derwentwater, close to Ashness Bridge. On the second morning there was wonderful mist in the valleys obscuring the view up the lake with just the top of Skiddaw showing.

We spent a happy morning clambering up the Lodore Falls – a steep hillside climb through heather and pine trees.

We visited the Solway Aviation Museum at Carlisle airport, home to an English Electric Lightning (I used to see them flying over Lincolnshire in my very early years), a Phantom and – joy of joys – a Vulcan bomber, XJ823, inside which one could see the cockpit and sit in some of the metal chairs.

And I went flying! Most unexpected – I’d been hoping for a scenic tour but instead got an hour’s flying lesson. As the instructor said, “push the left pedal to turn left”. And the rest was pretty plain sailing – as responsive as a car on a road with perfect camber, crossed with turbulence akin to sailing a boat. We cruised at 2500-3000 feet, skimming along just below the cumulus clouds, from Carlisle across to Bassenthwaite and down Derwentwater to Borrowdale, up over Watendlath Tarn and back around Thirlmere to Carlisle again. A most excellent experience. (Photos by Dad stuck in the back seat – I think he did a good job!)

On the Monday, Dad and I drove around some of our favourite mountain passes and landscape locations in the Lakes: Wastwater with the classic view of Great Gable at the end, round to Hardknott Pass – stop at the Roman Fort of Mediobogdum, admire Eskdale – then carry on up and over Wrynose. The weather was just right – not too much cloud, just cloud shadows on sunny landscape – and my favourite conditions, bright foreground with filthy dark stormy rainclouds in the distance. It was allowed to rain after that.

On the last morning I called in at Mum’s favourite spot on the planet, Friar’s Crag at the end of the road past the jetties out of Keswick.

That was some (long) weekend!

Too-Big Data? That don’t impress me much

On a whim, I spent the evening in Edinburgh at a meetup presentation/meeting concerning Big Data, the talk given by a “Big Data hero” (IBM’s term), employed by a huge multinational corporation with a lot of fingers in a lot of pies (including the UK Welfare system).

I think I was supposed to be awed by the scale of data under discussion, but mostly what I heard was immodest business-speak, buzzwords and acronyms: a few scattered examples to claim “we did that”, a “look at the size of our supercomputer” – yet the only technical word he uttered all evening was “Hadoop”.

In the absence of a clear, directed message, I’ve come away with my own thoughts instead.

As it stands, the idea of Big Data is altogether a source of disappointment and concern.

There is a discrepancy here. On the one hand, one’s fitbit and phone are rich sources of data; the thought of analyzing it all thoroughly sets my data-geek senses twitching in excitement. On the other, the Internet of Things experience relies on huge companies doing the analysis – outsourced to the cloud – and therein lies the disjoint: those companies proceed to do inter-company business based on one’s personal data (read: sell it, however aggregated it might be – the presenter this evening scoffed at the idea of “anonymized”), above one’s head and outwith one’s control. The power flows upwards.

To people such as this evening’s speaker, privacy and ethics are just more buzzwords to bolt on to a “data value pipeline” to tout the profit optimizations of “data-driven companies”. So are the terms data, information, knowledge and even wisdom.

But I think he’s lost direction in the process. We’ve come a long way from sitting on the sofa, pushing buttons on the mobile to choose how to spend the evening.

And that is where I break contact with The Matrix.

I believe in appreciating the value of little things. In people, humanity and compassion more than companies. In substance. In the genuine kind of Quality sought by Pirsig, not as “defined” by ISO 9000. Value may arise from people taking care in their craft: one might put a price on a carved wooden bowl in order to sell it, but the brain that contains the skill required to make it is precious beyond the scope of the dollar.

Data is data, and insights are a way to lead to knowledge; but real wisdom is not just knowing how to guide analysis – it is understanding that human intervention is sometimes required, knowing when to deploy it, and having the awareness and critical thinking to see and to choose.

The story goes that a salesman once approached a pianist, offering a new keyboard “with eight nuances”. The response came back: “but my playing requires nine”.

Discovering SmartOS

Revolutionize the datacenter: ZFS, DTrace, Zones, KVM

What 22TB looks like.

It has been a long and interesting weekend of fixing computers.
Adopt the pose: sit cross-legged on the floor surrounded by 9 hard-drives – wait, I need another one, make it 10 hard-drives – and the attendant spaghetti of SATA cables and plastic housings and fragments of case.
Funnily enough, the need for screwdrivers has reduced over the years, albeit more than compensated for by the cost of a case alone. I’m sure it never used to make for such a sore back, either…
Anyway. Amidst the turmoil of fixing my main archive/work/backup server, I discovered a new OS.
For a few years now, I’ve been fond of ZFS – reliable as a brick, convenient as anything to use; I choose my OSes based on their ability to support ZFS, amongst other things. Just a quick
zpool create data /dev/ada1 /dev/ada2
zfs create data/Pictures
and that’s it, a new pool and filesystem created, another 1-liner command to add NFS sharing… Not a format or a mount in sight.
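For the record, that NFS one-liner – assuming the host’s NFS services are already enabled – is just
zfs set sharenfs=on data/Pictures
and the filesystem is exported, without any editing of /etc/exports.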
Of course, Linux has not been able to include ZFS in the kernel due to licensing considerations, so the various implementations (custom kernel builds; a user-space FUSE module) have been less than desirable. So I’ve been using FreeBSD as my server operating-system of choice. The most convenient way to control a plethora of virtual machines on a FreeBSD host seems to be VirtualBox – rather large and clunky nowadays.
However, a couple of weeks ago I stumbled across SmartOS, a new-to-me OS combining ZFS, DTrace and a Solaris/Illumos kernel with both its own native Zones and KVM virtualization, ported over from Linux.
There have been a few steps in this direction previously – most memorable was Nexenta, an OpenSolaris/Illumos kernel with Debian packaging and the GNU toolchain. That was a nice idea, but it lacked virtualization.
So, this weekend, with a storage server box rebuilt (staying with FreeBSD) and a whole new machine on which to experiment, I installed SmartOS.
Overall, it’s the perfect feature blend for running one’s own little cloud server. ZFS remains the filesystem of choice, DTrace has yet to be experimented with, and KVM is a breeze, mostly since Joyent provide their own semi-installed OS images to work from (think: Docker, but without the Linux-specificity). The vmadm command shares a high-level succinctness with the zfs tools: just import an image, write a JSON config file describing the guest VM, create an instance, and it’s away and running with a VNC interface before you know it.
There’s one quirk that deserves special note so far. If you wish to use a guest VM as a gateway, e.g. via VPN to another network, you have to enable spoofing of IPs and IP forwarding on the private netblocks, in the VM config file.
      "allow_dhcp_spoofing": "true",
      "allow_ip_spoofing": "true",
      "allowed_ips": [ "192.168.99.0/24" ]
[root@78-24-af-39-19-7a ~]# imgadm avail | grep centos-7 
5e164fac-286d-11e4-9cf7-b3f73eefcd01 centos-7 20140820 linux 2014-08-20T13:24:52Z 
553da8ba-499e-11e4-8bee-5f8dadc234ce centos-7 20141001 linux 2014-10-01T19:08:31Z 
1f061f26-6aa9-11e4-941b-ff1a9c437feb centos-7 20141112 linux 2014-11-12T20:18:53Z 
b1df4936-7a5c-11e4-98ed-dfe1fa3a813a centos-7 20141202 linux 2014-12-02T19:52:06Z 
02dbab66-a70a-11e4-819b-b3dc41b361d6 centos-7 20150128 linux 2015-01-28T16:23:36Z 
3269b9fa-d22e-11e4-afcc-2b4d49a11805 centos-7 20150324 linux 2015-03-24T14:00:58Z 
c41bf236-dc75-11e4-88e5-038814c07c11 centos-7 20150406 linux 2015-04-06T15:58:28Z 
d8e65ea2-1f3e-11e5-8557-6b43e0a88b38 centos-7 20150630 linux 2015-06-30T15:44:09Z 

[root@78-24-af-39-19-7a ~]# imgadm import d8e65ea2-1f3e-11e5-8557-6b43e0a88b38 
Importing d8e65ea2-1f3e-11e5-8557-6b43e0a88b38 (centos-7@20150630) from "https://images.joyent.com" 
Gather image d8e65ea2-1f3e-11e5-8557-6b43e0a88b38 ancestry 
Must download and install 1 image (514.3 MiB) 
Download 1 image [=====================================================>] 100% 514.39MB 564.58KB/s 15m32s 
Downloaded image d8e65ea2-1f3e-11e5-8557-6b43e0a88b38 (514.3 MiB) ...1f3e-11e5-8557-6b43e0a88b38 [=====================================================>] 100% 514.39MB 38.13MB/s 13s 
Imported image d8e65ea2-1f3e-11e5-8557-6b43e0a88b38 (centos-7@20150630) 
[root@78-24-af-39-19-7a ~]# 

[root@78-24-af-39-19-7a ~]# cat newbox.config 
{
  "brand": "kvm",
  "resolvers": [
    "8.8.8.8",
    "8.8.4.4"
  ],
  "ram": "256",
  "vcpus": "2",
  "nics": [
    {
      "nic_tag": "admin",
      "ip": "192.168.5.48",
      "netmask": "255.255.255.0",
      "gateway": "192.168.5.1",
      "model": "virtio",
      "primary": true,
      "allow_dhcp_spoofing": "true",
      "allow_ip_spoofing": "true",
      "allowed_ips": [ "192.168.99.0/24" ]
    }
  ],
  "disks": [
    {
      "image_uuid": "d8e65ea2-1f3e-11e5-8557-6b43e0a88b38",
      "boot": true,
      "model": "virtio"
    }
  ],
"customer_metadata": {
    "root_authorized_keys":
"ssh-rsa AAAAB3NzaC1y[...]"
  }

}
[root@78-24-af-39-19-7a ~]# vmadm create -f newbox.config 
Successfully created VM d7b00fa6-8aa5-466b-aba4-664913e80a2e 
[root@78-24-af-39-19-7a ~]# ping -s 192.168.5.48 
PING 192.168.5.48: 56 data bytes 
64 bytes from 192.168.5.48: icmp_seq=0. time=0.377 ms 
64 bytes from 192.168.5.48: icmp_seq=1. time=0.519 ms 
64 bytes from 192.168.5.48: icmp_seq=2. time=0.525 ms ... 

zsh, basalt% ssh root@192.168.5.48 
Warning: Permanently added '192.168.5.48' (ECDSA) to the list of known hosts.
Last login: Mon Aug  3 16:49:24 2015 from 192.168.5.47
   __        .                   .
 _|  |_      | .-. .  . .-. :--. |-
|_    _|     ;|   ||  |(.-' |  | |
  |__|   `--'  `-' `;-| `-' '  ' `-'
                   /  ;  Instance (CentOS 7.1 (1503) 20150630)
                   `-'   https://docs.joyent.com/images/linux/centos

[root@d7b00fa6-8aa5-466b-aba4-664913e80a2e ~]# 

And there we have a new guest VM up and running in less than a minute’s effort.

Infrastructure and development environments recreated from scratch (partly thanks to storing my ~/etc/ in git) in under an hour.

I’m still looking for the perfect distributed filesystem, however…

RIP, Sir Terry Pratchett

I loved many of his Discworld series, of course, for many years – a favourite author not just of one generation but of several. But on hearing of his passing, the first quote that comes to mind is from A Slip of the Keyboard, his collected non-fiction, which shows a further glimpse of the man behind the books.

People have already asked me if I had the current international situation in mind when I wrote the book. The answer is no. I wouldn’t insult even rats by turning them into handy metaphors. It’s just unfortunate that the current international situation is pretty much the same old dull, stupid international situation, in a world obsessed by the monsters it has made up, dragons that are hard to kill. We look around and see foreign policies that are little more than the taking of revenge for the revenge that was taken in revenge for the revenge last time. It’s a path that leads only downwards, and still the world flocks along it. It makes you want to spit. The dinosaurs were thick as concrete, but they survived for 150 million years and it took a damn great asteroid to knock them out. I find myself wondering now if intelligence comes with its own built-in asteroid. – 2001 Carnegie Medal award speech.

Terry passed away in his home, with his cat sleeping on his bed surrounded by his family on 12th March 2015. Diagnosed with PCA [posterior cortical atrophy] in 2007, he battled the progressive disease with his trademark determination and creativity, and continued to write. He completed his last book, a new Discworld novel, in the summer of 2014, before succumbing to the final stages of the disease.

via Terry Pratchett dead: Discworld author dead after struggle with Alzheimer’s – People – News – The Independent.

Today is a good day to listen to some Tallis.

How I Voted…

Happily.

It’s the orthogonal thoughts that caught my eye, so to speak.

The trees still stand. I mean, the town still stands – apart from one tent in the very middle of the pedestrian area, I saw no more campaigning.

IMG_20140918_125152_v1

Yes & No placards reside side by side, as do their representatives, chatting together outside the polling station.

IMG_20140918_122939_v2

The Church is open for business, contemplation, prayer, stillness.

IMG_20140918_122531_v2

Now to sit back and wait patiently. Quid sit futurum cras, fuge quaerere – forbear to ask what tomorrow may bring (Horace, Odes I.9).

Now, who’s going to clean up all the litter?

How I’ll be Voting

indyref-poll-card

Individually. I hold the idea of nationality very loosely: it sometimes applies but does not define. I rate people trying to live their lives far higher than the games of politicians playing nations.

Thoughtfully. Manifestos have been read; factors – economic, political, constitutional, cultural, social – considered and their importance weighed; a decision has been reached.

Carefully. The way things are looking,  half the population is likely to be upset whatever happens. There will be a need to restore peace.

Independently. The presentation of the campaigns as a bickering of two opposing factions has not been attractive. First the reasoning and decision, then the labelling with political ideology (or preferably not at all).

Proudly. It has not been an easy summer, and getting to the stage of holding this poll-card in my hands has taken considerably more organization than intended.

Humbly. I am glad to have a voice in the running of the land I call home.

Colour or Black and White?

No doubt this is one of the oldest conundrums, generally long-since answered. The conventional approach is that black&white is meant to be an end in itself (a choice made at the time of shooting rather than an option or fallback in processing), in order that the eye be drawn to forms and shapes and textures, appreciated for their own sake without the distractions of realistic colour. As such, you’d expect a given image to work either as black&white or as colour, but rarely both; it can also be somewhat annoying when one sees an image presented in more than one way, as though the photographer couldn’t decide. It’s even more irksome when you are that photographer. Today I reprocessed an image taken a couple of months ago, and was struck by how it looked in the intermediate colour form before applying the intended toning.

Shapes in the Dark - colour

Shapes in the Dark – colour

Shapes in the Dark - black and white

Shapes in the Dark – black and white

In this case, I contend that both images stand well independently, and they convey different things; further, the colour gives a means of distinguishing the silhouetted trees from the surrounding foliage that the black&white image does not, which makes the latter look moodier.

A Change of Direction

Sometimes, one’s photography takes quite a turn.

A few years ago, I was all interested in large-format landscape work, when a fellow member of the photo-club inadvertently threw a spanner in the works by saying his particular approach gave results that represented how he felt at a given scene. Hang on: how come every LF landscapie I know feels exactly the same way, then, if that way is defined by 5×4 format, tripod low on the ground, golden-hours (normally morning, strangely), portrait orientation, near-mid-far composition, Fuji Velvia film, grad-ND sky and rear-tilt perspective, amongst other things? Having seen that as a clique fashion rather than individual expression, I rejected it and promptly went digital, making a photo a day using overcast dull light to show the shapes of trees in the local woodland realistically.

Last Saturday marked something of a milestone: 4 years of posting a daily photo on Blipfoto. Over time, the idea of forcing a photo a day (especially one as considered and well-processed as I strived to achieve) has become artistically unhealthy and my enthusiasm for the site has waned considerably, so I called it a day.

In some ways, the future looks to be a return to landscape; certainly I intend shooting a lot more of it than I have previously, but letting inspiration drive matters rather than forcing it by the calendar. I’m hoping to post more often on this blog as well, using the real camera as well as the mobile, so there’s been a bit of re-branding happening too…

So it was, on Sunday afternoon, with head slightly reeling from the decision, that I set off with Dog for an afternoon stroll, with no idea how far or where we’d go except that I wanted it to be a long walk. And it was the longest we’ve been on since moving here, I think – we left Portpatrick and walked past the golf course to Port Mora, where I usually turn inland and walk through the Dunskey Glen, but this time I continued past Port Kale and the transmission huts (where cables came ashore, used for monitoring communications during the Troubles in Northern Ireland)…

IMG_20140706_140935 - IMG_20140706_140941_fused-0001.tiff_v1

Transmission huts in Port Kale, outside Portpatrick

…and with a bit of determined plodding along the Southern Upland Way, the next thing we saw was Kilantringan Lighthouse in the distance.

IMG_20140706_141852 - IMG_20140706_141905_fused-0005.tiff_v1

Kilantringan Lighthouse along the Southern Upland Way, black and white with my favoured platinum toning

It took 2.5 hours, so probably 8 miles or thereabouts, given very few photo-stops and some leisurely steep bits.

All in all, an excellent way to spend a Sunday afternoon.

My Favourite Camera Settings

14307072596_4274cbb434_b

There’s one particular combination of camera settings I keep coming back to, that forms a base for almost all my work. Just in case anyone else is interested:

13062909705_90d82315ac_b

Great Success: a replanted willow tree budding, in black and white

Circular polarizer filter: for acting as an optical control of local contrast

RAW+JPEG: so I have vast amounts of data to process properly and a reference of what the camera thought of it, which can also be recovered more easily in case of SD-card corruption (rare, but not unknown)

Mode: maybe 90% aperture-priority (auto-ISO, auto-shutter speed), 5% shutter-priority (auto-ISO), 5% manual (because the NEX-7 fixes ISO to 100 by default); of these, unless I’m doing a long exposure, the aperture is the most distinguishing control between closeup and landscape work.

Processing: black & white, so I get to think in terms of shape and form and colour-contrast even if sometimes a scene is processed for colour.

13063247864_2c71267357_b

Great Success: a replanted willow tree budding, in colour

Metering: matrix/multi-zone metering, because it’s quite good enough, especially when coupled with a histogram on-screen

Compensation: normally +1/3rd EV for reasons of expose-to-the-right, improving the signal-to-noise ratio.

Shooting: HDR +/-1 EV

The shooting-mode is a new departure; not because I’ve suddenly started “doing HDR” (I’ve been open to that workflow on demand for several years), but because producing 30-40 megapixel photos requires multiple input images. By shooting hand-held at high frame-rate I get enough image-data to combine upscaling (using super-resolution) with noise-reduction (using stacking). Full-speed burst-mode on the NEX-7 is a very fast 10fps, which leads to bursts of 5-6 images by the time I think I’ve got 3. The camera enjoys a wide dynamic range, so even when the scene contrast doesn’t require HDR per se there’s only a little difference in quality, and using HDR bracketing restricts the burst to 3 frames at a time.


14178175892_3edd915e02_b

A little barn: scenery from Steel Riggs on Hadrian’s Wall

Tips for Creative Photography

If there’s a term I particularly despise, it is the “tip”; in 3 letters it makes a promise it cannot keep and belittles the photographic process into the bargain.

It is particularly repugnant when it appears in forms such as “tips for creative landscape photography: HDR”. Consider that example, and substitute the last term with any other technique that you can name – intentional camera movement (aka ICM), “use a tripod”, “use an ND filter”, “focus-stack”, “use a circular polariser filter”… what they all have in common is the tail wagging the dog, a not-entirely-latent suggestion that if you just do this one little thing, you’ll necessarily get better photos as a result. 

Not so: even when the technique is used appropriately, the results are either demonstration shots or exercises in following the footsteps of convention.

One dictionary defines “creative” as “the ability to transcend traditional ideas, rules, patterns, relationships or the like, and to create meaningful new ideas, forms, methods, interpretations, etc”.

If a tip is something one applies to the camera in search of a panacea, creativity is what one applies from one’s head in order to bring something new to the light-table. There is a progression from “a photo of” to “a photo where” and thence to “a photo that tells a story” – yet one can still pretend to make such a photo on any street corner. Rather, I suggest creativity arises from concepts as a motivational force – an idea that you so desire to represent that you go out and make it happen with whatever techniques and equipment it takes.

Timescales

I wanted to make a photo to illustrate the timescales at which things happen. That means an extreme shutter-speed, motion either obviously frozen or obviously prolonged, relative to the subjects at hand. Daffodils, shrubs and trees would blow around in the wind; with any luck, a sufficiently long exposure would capture some motion in the clouds; the Parish Church isn’t going anywhere any time soon.

So, a long exposure was chosen – over a minute – with welding glass and rubber bands – that requires the tripod. Even at its widest 18mm focal length, the kit lens only just got the width of the church in the scene, so I shot 8 photos working my way from bottom to top – that’s a vertorama, giving a massive field of view and playing with perspective distortion. As I panned up, taking exposures over a minute long, the sunlight illuminating the scene varied and obviously the sky was brighter, so I varied the exposure (both shutter time and the aperture) to avoid blowing highlights in the clouds – and that’s HDR. I wanted reasonable local contrast, both in the stonework and, critically, in the sky – that’s tonemapping. And I wanted a vintage feel – that’s black & white with warm toning.

But it’s not that I have “an HDR vertorama” to show – hopefully, what I have is a scene in which three subjects are portrayed operating at different speeds. That’s what matters.

Today, anyway.

Landscape: the approachable end of Photography

Perhaps a bit controversially, I have somewhat of a love-hate relationship with the landscape photography genre.

A few years ago now, another member in the photo-club and I were chatting about landscape. He said that he made his images using a large-format 5×4 camera and Velvia film because it “conveyed what it felt like to be there”. It set me thinking: how come I can name a large handful of photographers who all approach landscape the same way: large-format, portrait orientation, Velvia film, tripod low to the ground, rear-tilt for the perspective of a large looming foreground, grad-ND for the sky? Whatever the philosophy behind the approach – and plenty of books have been written about the philosophy of landscape – it seemed unlikely that such a common approach actually represents an individual feeling. Seeing through the fluff, there was a trend at work, a locus of mutually derivative work – for example, there was rarely any presentation of other films, such as Provia; surely someone out there would have found that a better representation of their feeling, at least once?

My interlocutor moved away from the area and left the club shortly after that; I rebelled against landscape and for a while shunned all the conventional advice of the genre: no shiny contrasty light, no wide-angle vistas, no colour, but rather a “no-light” project, studies of the intrinsic shapes and forms of trees in the woods of Inverawe. After about 18 months, landscape began to resurface – at first, at weekends and other times when I was away from the forest. “Only on my own terms”, however.

A couple of years after the fateful conversation, the ex-member was passing by and visited the club one evening. The discussions were most illuminating: he also had abandoned the whole landscape-by-film scene, and was last heard of favouring digital sports work around the Cairngorms instead. It was satisfying to have caught up and closed the loop.


Fast-forward to now. There are phases of conformance in landscape; ignoring distasteful badly tonemapped tripe with excessive local contrast, some sites (notably 500px) feature a lot of over-bright over-saturated images. The past year or so has seen a notable rise in long-exposure work – especially in black and white, some with artistic vision, some perhaps less so. Some of my photographer contacts are now suggesting the time of the Big Stopper filter has passed as well.

Some questions to ponder:

Can one take a camera, follow a handful of guidelines and more or less guarantee coming up with a good result on any random day? (There is no one such magic guideline, but you could assign a score based on the number of things a photograph has in its favour according to a set of rules.)

If so, is landscape photography merely a programmatic sport, a way of passing the time with clearly defined start (Friday nights examining the OS map and weather forecast) and end (JPEG by Sunday night), and how does one express any originality within its scope at all?

On another hand, is landscape something that people should go out and seek to achieve, or is it that what one shoots happens to be landscape?

Relatedly, is a photograph good because you stumbled across it, or because you set out to make it, or because it exhibits a strong contrived personal style?

However it arises, when one’s photography spans several genres – both vista and intimate landscape, other nature closeups and art – it seems that viewers respond the most to landscape. It’s rather like the ITV3 or Channel 4 of photography – “human interest”, where all objects presented are approachable by virtue of being human scale, from boulders half a metre in size to buildings and hillsides that a human can at least radically alter with a suitably large digger. And that brings with it an offputting whiff of mundanity.

I can’t claim to be happy with the answers to all the above; you can’t have it all 3 ways at once.

Transitions

Panasonic Lumix GH2

Nearly 3 years ago I spent the best part of 2 hours one afternoon in PCWorld, looking to kick the Canon habit and vacillating between a Nikon (as I recall, the D3100) and the Panasonic Lumix GH2.

On the one hand, the Nikon had excellent image-quality, but its usability was let down drastically by the lack of a dedicated ISO button (you could pretend, by reassigning the one “custom” button – so why bother with either?) and by what I felt was a patronizing user-interface, showing a graphic of a closing lens iris every time one changed the aperture (as though I hadn’t learned anything in the last 10 years spent working on my photography).

On the other hand, the Lumix GH2 had less than stellar image-quality, but the user-interface won me over.

Over the last 2 years, the ergonomics fitted my style like a glove: coming from film, including medium-format with waist-level finders, I find it most natural to operate looking down at the articulated LCD panel in live-view mode. I had sufficient custom presets for two aperture-priority-mode variations of black and white (one square, one 3:2, both with exposure-bracketing set to 3 frames +/-1EV) and a third, a particular colour “film emulation” and manual mode at ISO160 for long exposures, with bracketing set to 5 frames for more extreme HDR. With those 3 modes, I could cover 95% of my subject-matter from woodland closeup to long-exposure seascape and back, at the flip of a dial.

I learned to appreciate its choice of exposure parameters (normally well-considered), and to overcome the sensor’s foibles – it made an excellent test-case for understanding both high-ISO and long-exposure sensor noise and its limited dynamic range increased my familiarity with HDR, panoramas and other multi-exposure-blending techniques (all hail enfuse!). Coupled with the Pentacon 50mm f/1.8 lens, it made for some excellent closeup photos. As a measure of how workable the kit is, I once took every camera I then possessed – including medium- and large-format film – to Arran for a photo-holiday, and never used anything apart from the GH2 for the whole week.

If this all sounds like it’s leading up to something, it is. There is a long-established idea in photographer circles that gear-acquisition-syndrome (GAS), or the buying of new equipment for the sake of it or in order that it might somehow help one take better photos, is delusional. To some extent that’s right, but the flip-side is that any one camera will impose limitations on the shots that can be achieved. So I’ve established the principle that, if one can explain 3 things a camera can allow you to do better, the acquisition is justifiable.

And so I’ve switched. The new camera is a Sony NEX-7; even though the model is barely younger than the GH2, it still has a vastly superior sensor that will give me larger images, better dynamic range and narrower depth-of-field. Indeed, at two years old, it’s still punching above its weight despite the pressure from some of the larger dSLRs to have come out since.

Wee Waterfall (2)

One of the things I learned from the GH2 is that it always pays to understand one’s equipment. For this reason, the first 100 frames shot on the NEX-7 fell into 4 kinds:

  1. studying noise with varying ISO in a comparatively low-light real-world scene (stuff on bookshelves in the study – good to know how noise and sharpness interplay in both the darkest shadows and midtones)
  2. building a library of dark-frame images at various ISO and shutter-speed combinations (taken with the lens-cap on, for a theoretically black shot – any non-zero pixels are sensor noise)
  3. building a library of lens correction profiles – taking images of a uniform out-of-focus plain wall to compare vignetting at various apertures and focal-lengths on both kit lenses
  4. studying kit-lens sharpness as a function of aperture – discussed previously.

Impressively, I could just load all these images into RawTherapee and easily move them into relevant directories in a couple of right-clicks, and from there I spent the rest of the evening deriving profiles for ISO noise and sharpness with automatic dark-frame-reduction and actually measured vignetting correction – because I know very well how much time it will save me in the future.

Despite having played with film cameras, I’m quite acutely aware of the change in sensor format this time: in moving from prolonged use of micro-4/3rds to APS-C, I can no longer assume that setting the lens to f/8 will give me everything in focus at the lens’s sweetspot, but have to stop down to f/11 or even further (the ratio of crop factors, 2.0/1.5 ≈ 1.33, scales the aperture needed for equivalent depth-of-field: f/8 × 1.33 ≈ f/11). The tripod has already come into its own…

So there we go.

Oh, and the complete GH2 kit is for sale on ebay, if anyone wants to buy it!
Update 2014-02-02: the complete kit sold on eBay for a very reasonable sum!


Determining the best ZFS compression algorithm for email

I’m in the process of setting up a FreeBSD jail in which to run a local mail-server, mostly for work. As the main purpose will be simply archiving mails for posterity (does anyone ever actually delete emails these days?), I thought I’d investigate which of ZFS’s compression algorithms offers the best trade-off between speed and compression-ratio achieved.

The Dataset

The email corpus comprises 273,273 files totalling 2.14GB; individually the mean size is 8KB, the median is 1.7KB and the vast majority are around 2.5KB.

The Test

The test is simple: the algorithms comprise the 9 levels of gzip compression plus a second method, lzjb, which is noted for being fast, if not compressing particularly effectively.

A test run consists of two parts: copying the entire email corpus from the regular directory to a new temporary zfs filesystem, first using a single thread and then using two parallel threads – the old but efficient   find . | cpio -pdu   construct allows spawning of two background jobs copying the files sorted into ascending and descending order – two writers, working in opposite directions. Because the server was running with a live load at the time, the test was run 5 times per algorithm – a total of 13 hours.

The test script is as follows:

#!/bin/zsh

cd /data/mail || exit -1

zfs destroy data/temp

foreach i ( gzip-1 gzip-2 gzip-3 gzip-4 gzip-5 gzip-6 \
	gzip-7 gzip-8 gzip-9 lzjb ) {
  echo "DEBUG: Doing $i"
  zfs create -ocompression=$i data/temp
  echo "DEBUG: Partition created"
  t1=$(date +%s)
  find . | cpio -pdu /data/temp 2>/dev/null
  t2=$(date +%s)
  size=$(zfs list -H data/temp)
  compr=$(zfs get -H compressratio data/temp)
  echo "$i,$size,$compr,$t1,$t2,1"
  zfs destroy data/temp

  sync
  sleep 5
  sync

  echo "DEBUG: Doing $i - parallel"
  zfs create -ocompression=$i data/temp
  echo "DEBUG: Partition created"
  t1=$(date +%s)
  find . | sort | cpio -pdu /data/temp 2>/dev/null &
  find . | sort -r | cpio -pdu /data/temp 2>/dev/null &
  wait
  t2=$(date +%s)
  size=$(zfs list -H data/temp)
  compr=$(zfs get -H compressratio data/temp)
  echo "$i,$size,$compr,$t1,$t2,2"
  zfs destroy data/temp
}

zfs destroy data/temp

echo "DONE"

Results

The script’s output was massaged with a bit of commandline awk and sed and vi to make a CSV file, which was loaded into R.

The runs were aggregated according to algorithm and whether one or two threads were used, taking the mean after removing 10% outliers (a trimmed mean).

Since it is desirable for an algorithm both to compress well and not take much time to do it, it was decided to define efficiency = compressratio / timetaken.
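In R, that works out at roughly a line apiece – a sketch, assuming the massaged CSV has columns named algorithm, compressratio, timetaken and nowriters, and a hypothetical filename of results.csv:

> data <- read.csv("results.csv")
> data$eff <- data$compressratio / data$timetaken
> aggs <- aggregate(cbind(eff, timetaken, compressratio) ~ algorithm + nowriters, data, FUN=function(x) mean(x, trim=0.1))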

The aggregated data looks like this:

algorithm nowriters eff timetaken compressratio
1 gzip-1 1 0.011760128 260.0 2.583
2 gzip-2 1 0.011800408 286.2 2.613
3 gzip-3 1 0.013763665 196.4 2.639
4 gzip-4 1 0.013632926 205.0 2.697
5 gzip-5 1 0.015003015 183.4 2.723
6 gzip-6 1 0.013774746 201.4 2.743
7 gzip-7 1 0.012994211 214.6 2.747
8 gzip-8 1 0.013645055 203.6 2.757
9 gzip-9 1 0.012950727 215.2 2.755
10 lzjb 1 0.009921776 181.6 1.669
11 gzip-1 2 0.004261760 677.6 2.577
12 gzip-2 2 0.003167507 1178.4 2.601
13 gzip-3 2 0.004932052 539.4 2.625
14 gzip-4 2 0.005056057 539.6 2.691
15 gzip-5 2 0.005248420 528.6 2.721
16 gzip-6 2 0.004156005 709.8 2.731
17 gzip-7 2 0.004446555 644.8 2.739
18 gzip-8 2 0.004949638 566.0 2.741
19 gzip-9 2 0.004044351 727.6 2.747
20 lzjb 2 0.002705393 900.8 1.657

A plot of efficiency against algorithm shows two clear bands, for the number of jobs writing simultaneously.
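With ggplot2, that plot is a one-liner – again a sketch, using the aggs frame from above:

> library(ggplot2)
> qplot(algorithm, eff, col=factor(nowriters), data=aggs)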

Analysis

In both cases, the lzjb algorithm’s apparent speed is more than compensated for by its limited compression ratio achievements.

The consequences of using two writer processes are two-fold. First, the overall efficiency is not merely halved: it is nearer to only a third that of the single writer – there could be environmental factors at play, such as caching and disk I/O bandwidth. Second, the overall variance has increased by 8 percentage points:

> aggregate(eff ~ nowriters, data, FUN=function(x) { sd(x)/mean(x, trim=0.1)*100.} )
 nowriters eff
1 1 21.56343
2 2 29.74183

so choosing the right algorithm has become more significant – and it remains gzip-5, with levels 4, 3 and 8 becoming closer contenders, while gzip-2 and -9 are much worse choices.

Of course, your mileage may vary; feel free to perform similar tests on your own setup, but I know which method I’ll be using on my new mail server.

Definitions of Geek Antiquity

I can see already that this will probably be a series of more than one post…

You know you’ve been a geek a while when you find files like these lying around:

 -rw-r--r-- 1 tim users 16650 Apr 24 2001 .pinerc
...
D drwxr-xr-x 3 tim users   76 Aug 18 2003 .sawfish
  -rw------- 1 tim users  324 Jan 14 2003 .sawfishrc
D drwxr-xr-x 3 tim users   95 Apr 24 2001 .sawmill
  -rw-r--r-- 1 tim users 4419 Apr 24 2001 .sawmillrc

Just in case there’s any doubt: I used Pine from about 1995 to 2000, before I bypassed mutt in favour of Emacs and Gnus. Sawmill was the original name before Sawfish renamed itself; either way, I no longer use just a simple window-manager but the whole KDE environment.

A slight tidy-up is called for…

Knowing Your Lenses

Analysing Lens Performance

Background

Typically, when a manufacturer produces a new lens, they include an MTF chart to demonstrate its performance – sharpness measured as the number of line-pairs per millimetre it can resolve (to a given contrast tolerance), radially and sagittally, varying with distance from the centre of the image. While this might be useful before purchasing a lens, it does not make for easy comparisons (what if another manufacturer uses a different contrast tolerance?), nor does it necessarily reflect the use to which the lens will be put in practice. (What if they quote a zoom’s performance at 28mm and 70mm f/5.6, but you shoot it most at 35mm f/8? Is the difference between radial and sagittal sharpness useful for a real-world scene?)
Further, if your lens has been around in the kit-bag a bit, is its performance still up to scratch or has it gone soft, with internal elements slightly out of alignment? When you’re out in the field, does it matter whether you use 50mm in the middle of a zoom or a fixed focal-length prime 50mm instead?
If you’re shooting landscape, would it be better to hand-hold at f/5.6 or stop-down to f/16 for depth of field, use a tripod and risk losing sharpness to diffraction?
Does that blob of dust on the front element really make a difference?
How bad is the vignetting when used wide-open?

Here’s a fairly quick experiment to measure and compare lenses’ performance by aperture, two ways. First, we design a test-chart. It’s most useful if the pattern is even across the frame, reflecting a mixture of scales of detail – thick and thin lines – at a variety of angles – at least perpendicular, and maybe crazy pseudo-random designs.

Method

Decide which lenses, at which apertures, you want to profile. Make your own design, print it out at A4 size or larger, and affix it to a wall. We want the lighting to be as even as possible, so ideally use indoor artificial light after dark (i.e. this is a good project for a dark evening). Carefully set up the camera on a tripod facing the chart square-on, and move close enough that the test-chart just fills the frame.

Camera settings: use the lowest ISO setting possible (normally around 100), to minimize sensor noise. Use aperture-priority mode so it chooses the shutter-speed itself and fix the white-balance to counteract the indoor lighting. (For a daylight lightbulb, use daylight; otherwise, fluorescent or tungsten. Avoid auto-white-balance.)
Either use a remote-release cable or wireless trigger, or enable a 2-second self-timer mode to allow shaking to die down after pushing the shutter. Assuming the paper with the test-chart is still mostly white, use exposure-compensation (+2/3 EV).
For each lens and focal-length, start with the widest aperture and close-down by a third or half a stop, taking two photos at each aperture.
Open a small text-document and list each lens and focal-length in order as you go, along with any other salient features. (For example, with old manual prime lenses, note the start and final apertures and the interval size – may be half-stops, may be third of a stop.)
Use a RAW converter to process all the images identically: auto-exposure, fixed white-balance, and disable any sharpening, noise-reduction and rescaling you might ordinarily do. Output to 16-bit TIFF files.

Now, a reasonable measure of pixel-level sharpness in an image, or part thereof, is its standard deviation. We can use the following Python script (requiring the numpy and OpenCV modules) to load each image and output the standard deviations of three subsets: the entire frame, the top-left third and the extreme corner:

#!/usr/bin/env python

import sys, glob
import cv2
import numpy as np

def imageContrasts(fname):
  # Load as greyscale and return the standard deviation of
  # the entire frame, the top-left third and the extreme corner.
  img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
  a = np.asarray(img)
  height, width = a.shape
  third = a[0:height//3, 0:width//3]
  corner = a[0:height//8, 0:width//8]
  return np.std(a), np.std(third), np.std(corner)

def main():
  files = sys.argv[1:]
  if len(files) == 0:
    files = glob.glob("*.jpg")
  for f in files:
    co, ct, cf = imageContrasts(f)
    print("%s contrast %f %f %f" % (f, co, ct, cf))

if __name__ == "__main__":
  main()

We can build a spreadsheet listing the files, their apertures and sharpnesses overall and in the corners, where vignetting typically occurs. We can easily make a CSV file by looping the above script and some exiftool magic across all the output TIFF files:

bash$ for f in *.tif
do
ap=$(exiftool $f |awk '/^Aperture/ {print $NF}' )
speed=$( exiftool $f |awk '/^Shutter Speed/ {print $NF}' )
conts=$(~/python/image-interpolation/image-sharpness-cv.py $f | sed 's/ /,/g')
echo $f,$ap,=$speed,$conts
done

Typical output might look like:

P1440770.tif,,=1/20,P1440770.tif,contrast,29.235214,29.918323,22.694936
P1440771.tif,,=1/20,P1440771.tif,contrast,29.253372,29.943765,22.739748
P1440772.tif,,=1/15,P1440772.tif,contrast,29.572350,30.566767,25.006098
P1440773.tif,,=1/15,P1440773.tif,contrast,29.513443,30.529055,24.942437

Note the extra `=’ signs; on opening this in LibreOffice Calc, the formulae will be evaluated and the fractions converted to floating-point seconds. Remove the spurious `contrast’ column and add a header row (fname,aperture,speed,fname2,entire,third,corner) matching the column names used below.

Analysis

Let’s draw some graphs. If you wish to stay with the spreadsheet, use a pivot table to average the third-of-image contrast values per aperture per lens and work off that. Alternatively, a bit of interactive R can lead to some very pretty graphs:

> install.packages("ggplot2")
> library(ggplot2)
> data<-read.csv("sharpness.csv") 
> aggs<-aggregate(cbind(speed,entire,third,corner) ~lens+aperture, data, FUN=mean)
> qplot(aggs$aperture, aggs$third, col=aggs$lens, data=aggs, asp=.5)+geom_smooth()

This will give a comparison of overall sharpness by aperture, grouped by lens. Typically we expect every lens to have a sweetspot aperture at which it is sharpest; my own examples are no exception:

third

There are 5 lenses at play here: a kit 14-42mm zoom, measured at both 30mm and 42mm; a Minolta Rokkor 55mm prime; a Pentacon 30mm prime; and two Pentacon 50mm f/1.8 prime lenses, one brand new off eBay and one that’s been in the bag for 4 years.

The old Pentacon 50mm was sharpest at f/5.6 but is now the second-lowest at almost every aperture – we’ll come back to this. The new Pentacon 50mm is sharpest of all the lenses from f/5 onwards, peaking at around f/7.1. The kit zoom lens is obviously designed to be used around f/8; the Pentacon 30mm prime is ludicrously unsharp at all apertures – given a choice of kit zoom at 30mm or prime, it would have to be the unconventional choice every time. And the odd one out, the Rokkor 55mm, peaks at a mere f/4.

How about the drop-off, the factor by which the extreme corners are less sharp than the overall image? Again, a quick calculation and plot in R shows:

> aggs$dropoff <- aggs$corner / aggs$third
> qplot(aggs$aperture, aggs$dropoff, col=aggs$lens, data=aggs, asp=.5)+geom_smooth()

dropoff

Some observations:

  • all the lenses show a similar pattern, worst vignetting at widest apertures, peaking somewhere in the middle and then attenuating slightly;
  • there is a drastic difference between the kit zoom lens at 30mm (among the best performances) and 42mm (the worst, by far);
  • the old Pentacon 50mm lens had best drop-off around f/8;
  • the new Pentacon 50mm has least drop-off at around f/11;
  • the Minolta Rokkor 55mm peaks at f/4 again.

So, why do the old and new Pentacon 50mm lenses differ so badly? Let’s conclude by examining the shutter-speeds; by allowing the camera to automate the exposure in aperture-priority mode, whilst keeping the scene and its illumination constant, we can plot a graph showing each lens’s transmission against aperture.

> qplot(aggs$aperture, log2(aggs$speed/min(aggs$speed)), col=aggs$lens, data=aggs, asp=.5)+geom_smooth()

speed

Here we see that the new Pentacon 50mm lens requires the least increase in shutter-speed per stop of aperture while, above around f/7.1, the old Pentacon 50mm requires the greatest – rocketing off at a different gradient to everything else, such that by f/11 it’s fully 2 stops slower than its replacement.

There’s a reason for this: sadly, the anti-reflective coating is starting to come off in the centre of the front element of the old lens, in the form of a rough circle approximately 5mm in diameter. At around f/10 the aperture iris itself is only 5mm across (50mm focal length ÷ 10), so this damaged patch becomes a significant contributor to the overall exposure.

Conclusion

With the above data, informed choices can be made as to which lens and aperture suit particular scenes. When shooting a wide-angle landscape, I can use the kit zoom at f/8; for nature and woodland closeup work where the close-focussing distance allows, the Minolta Rokkor 55mm at f/4 is ideal.