A bit of a hardware upgrade

Four and a half years ago I bought basalt, a Lenovo ThinkPad W520 notebook.
Named for being black and geometrically regular, he was not cheap, but then the hardware was chosen for longevity. He did everything – mostly sitting on the desk being a workstation, for both photo-processing and work, with added portability. And he was fast… very fast.

He’s done sterling service, but the time has come for a bit of an upgrade. Since I haven’t done so for a few years now, I built the replacement workstation by hand.

So, after much shopping (mostly on Amazon with a couple of trips to the local Maplin’s), meet rhyolite – named because the red fan LEDs in the case remind me of the pink granite rock in Glen Etive.

                                                  basalt                 rhyolite
RAM                                               16GB                   64GB
CPU                                               i7-2720QM @ 2.20GHz    i7-6800K @ 3.40GHz
Storage                                           1TB 2.5″               2×2TB SATA, 64GB SSD, 240GB SSD
Display                                           15″ 1080p              24″ 4K
Time to process one Pen-F hi-res ORF (RAW→TIFF)   5min 50s               1min 6s
Time to rebuild Apache Spark from git source      28min 15s              6min 28s

It’s funny how oblivious we can be to machines slowing down and software bloating over time – rather like the proverbial boiling lobster – and then, when I look up, things can be done 4-5× as fast.

It’s really funny when you copy your entire home directory across wholesale and see what used to be a full-screen maximized Firefox window now occupying barely a quarter of the new display.

How many photo thumbnails can you fit in a 4K display?

There’s only one downside: sometimes I miss having the TrackPoint in the middle of the keyboard…

Olympus Pen-F: analysis and hands-on

Mine, my only, my preciousss!

There’s only one camera shop left in Perth and yesterday I was their first customer to buy the Olympus Pen-F. It’s not released on Amazon until Monday…

So it’s been the talk of the web. It’s undeniably a beautiful camera, with a rangefinder design reminiscent of its film camera predecessor of the same name. I’ve spent much of the last couple of days putting it through its paces.

I’m pleased to report it has a reasonably sturdy weight to it thanks to the aluminium/magnesium alloy shell. It feels just great in one’s hands. Well, my hands, anyway.

How does it perform?

First, software: it’ll be a little while before Adobe support it, but for us open-source fans, RawTherapee already supports the new .ORF RAW images in both 20-megapixel and hi-resolution 80-megapixel modes. (It seems to be the only open-source raw-converter to do so, at the time of writing.)

Second, I’ve been keeping dark-frame reference images for a few cameras, so I can compare its noise levels against predecessors and competitors. (Method: put the lens-cap on, place the camera upside-down on the desk, and take a couple of photos at every ISO from 6400 down to 100 and below, at various shutter-speeds from 1/250s to 30s. A quick Python script reads the images, converts them to greyscale and outputs various properties including the average lightness value per pixel; results are then aggregated by camera model and ISO using the median.)
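
The original script isn’t reproduced here, but a minimal sketch of the same idea might look like this (assuming the numpy and OpenCV modules; aggregating the per-file results by camera model and ISO – e.g. taking the median per group – is assumed to happen afterwards in a spreadsheet or similar):

#!/usr/bin/env python
# Sketch: per-file dark-frame statistics. Load each frame as greyscale and
# report the mean lightness per pixel, plus the standard deviation and peak.
import sys, glob
import cv2
import numpy as np

def darkFrameStats(fname):
    img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    a = np.asarray(img, dtype=np.float64)
    return a.mean(), a.std(), a.max()

def main():
    files = sys.argv[1:] or glob.glob("*.jpg")
    for f in files:
        mean, std, peak = darkFrameStats(f)
        print("%s mean %f std %f max %f" % (f, mean, std, peak))

if __name__ == "__main__":
    main()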

log-log plot of luminosity noise for various camera models by ISO

The graph shows three things, none of them surprising, but it’s pleasant to see them confirmed:

  1. The Lumix GH2 really was awful for sensor quality. Its ISO160 is the Sony NEX-7’s ISO 800.
  2. There is a reasonable progression from Lumix GH2 to Sony NEX-7 to Olympus E-M1 and Pen-F; at native resolution, the E-M1 and Pen-F are much of a muchness for L-channel noise.
  3. The real game-changer is the high-resolution mode on the Pen-F – good, since this is one of the main reasons I bought the camera. It is also to be expected: the mode uses the in-body stabilization to blend 8 images, four of them offset by 1 pixel (so it has full chroma bit-depth at every pixel) and four offset by half a pixel (so it has double the linear luminosity resolution). The result is about 1.5 stops better noise-performance (a quick sanity-check on that figure follows after this list).
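
As a back-of-envelope check on that 1.5-stop figure – a sketch of the statistics, not a description of Olympus’s actual processing – averaging N frames reduces random noise by roughly √N, and each stop is a factor of two:

import math

frames = 8
noise_reduction = math.sqrt(frames)   # random noise averages down by ~sqrt(N), about 2.83x
stops = math.log2(noise_reduction)    # convert that factor to stops, about 1.5
print("noise reduced %.2fx, i.e. %.1f stops" % (noise_reduction, stops))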

Hi-res mode comes with a few limitations: the maximum ISO is 1600; aperture only goes down to f/8; shutter speeds only as long as 8s. You can enable picture modes such as natural, vivid, monochrome etc; however it does not work with bracketing or HDR mode: for HDR, it’s best to work manually.

One interesting point in favour of high-resolution mode: it does not stress the lens’s performance; at no point does it capture any single frame at more than the native resolution, so if the MTF curves of contrast vs spatial frequency are adequate for 20 megapixels, the lens is also good enough for 50-80 megapixels (more on this later).

Hands-on: what’s it like in the field?

I took both current cameras out for a drive around the Highlands of Perthshire this afternoon, the NEX-7 with its kit 18-55mm lens and the Pen-F attached to a new 17mm f/1.8 prime.

Here’s a quick overview of four images: one from the NEX-7, two straight from the Pen-F’s own JPEGs out of camera (just downsized for presentation here) and one converted in RawTherapee:

My first real image from the Pen-F: Amulree and Strathbraan Parish Church at the end of Glen Quaich.

The Olympus images all have the same white-balance; I set the ORF and ARW files to the same white-balance the Pen-F chose automatically. It’s interesting that the sky is a proper shade of blue in the converted RAW; this might be a difference of profile between RawTherapee and the camera’s own internal JPEG engine. (Fine by me – I much prefer solid blue skies to cyan ones.)

What about the details?

Now this is a hard question. No doubt the resolution suffers and 80MPel is rather optimistic. Here we compare the NEX-7 against the 20MPel and 80MPel RAW files with no sharpening or detail controls enabled at all:

Obviously the 80MPel image is going to take a lot of sharpening to bring it up to scratch.

My preferred sharpening algorithm with the NEX-7 has long been deconvolution. However, despite a lot of tweaking of the radius, strength, damping and iterations controls, RawTherapee doesn’t seem to have the strength to sharpen the 80MPel hi-res RAW.

The out-of-camera JPEG appears to have a cruder sharpening algorithm with a lot of local contrast, so I switched to unsharp mask. At 80MPel, with enough USM to render the individual stones in the dry stone wall adequately sharp, this reveals serious anti-aliasing issues around straight-line edges:

Pen-F edge-aliasing artifacts

These artefacts disappear when downsized to 50MPel in camera. Then again, the in-camera JPEG conversion is very contrasty and over-sharpened, so there’s room for improvement all-round:

In-camera JPEG conversion (20MPel) vs RawTherapee downscaled from 80MPel RAW

Update: by adjusting the USM parameters, I’ve removed most of the artifacts around edges. There are still problems where blades of grass have moved between source frames, but static objects with strong outlines no longer cause such problems.

disable edge-only and halo-detection, just use less USM to start with

There are other demosaicing algorithms in RawTherapee and other ways of sharpening (newly added wavelet support). Plenty of scope for experimenting to get the best out of this thing.

What’s the verdict?

I absolutely love using it. Where other cameras drive the megapixel race by giving increasingly large images for everything (40MPel+), this camera ticks along at a more than adequate native 20 megapixels for most purposes, with the option of taking one’s time and cranking out the tripod to make absolutely huge images. This suits me fine – I’ve spent so long doing similar image-stacking/super-resolution trickery in post-processing that it’s great to have the camera do it instead.

The in-body stabilization is excellent. I managed to hand-hold 10 frames at 1/10s at the Falls of Dochart, twice, without a single blurry image.

There are lots of other features to look forward to – I’ve never had the patience to try focus-stacking, but having it in-camera is very appealing.

With a subtle adjustment to the brighter blues to cater for skies, I could probably live off the camera’s own JPEGs for many purposes.

New lenses and filter adapter rings have been ordered…

Gratuitous sample images

As if the above weren’t nice enough, here are three more images from this afternoon.

Knowing Your Lenses

Analysing Lens Performance

Background

Typically, when a manufacturer produces a new lens, they include an MTF chart to demonstrate its performance – sharpness measured as the number of line-pairs per millimetre it can resolve (to a given contrast tolerance), radially and sagittally, varying with distance from the centre of the image. While this might be useful before purchasing a lens, it does not make for easy comparisons (what if another manufacturer uses a different contrast tolerance?), nor does it necessarily reflect the use to which the lens will be put in practice. (What if they quote a zoom’s performance at 28mm and 70mm at f/5.6, but you shoot it mostly at 35mm and f/8? Is the difference between radial and sagittal sharpness useful for a real-world scene?)
Further, if your lens has been around in the kit-bag a bit, is its performance still up to scratch or has it gone soft, with internal elements slightly out of alignment? When you’re out in the field, does it matter whether you use 50mm in the middle of a zoom or a fixed focal-length prime 50mm instead?
If you’re shooting landscape, would it be better to hand-hold at f/5.6 or stop-down to f/16 for depth of field, use a tripod and risk losing sharpness to diffraction?
Does that blob of dust on the front element really make a difference?
How bad is the vignetting when used wide-open?

Here’s a fairly quick experiment to measure and compare lenses’ performance by aperture, in two ways. First, we design a test-chart. It’s most useful if the pattern is even across the frame and reflects a mixture of scales of detail – thick and thin lines – at a variety of angles: at least perpendicular, and maybe some crazy pseudo-random designs. Here are a couple of ideas:
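
One hypothetical starting point, sketched with the Pillow imaging library (the sizes and layout here are illustrative assumptions, not a standard chart): line-pairs of increasing thickness, some diagonals, and a patch of pseudo-random blocks for fine, non-directional detail.

#!/usr/bin/env python
# Sketch: generate a printable test-chart image. Adjust sizes to suit your printer.
import random
from PIL import Image, ImageDraw

W, H = 3508, 2480                      # A4 landscape at 300dpi
img = Image.new("L", (W, H), 255)      # white greyscale background
draw = ImageDraw.Draw(img)

# vertical line-pairs of increasing thickness across the top third
x = 50
for thickness in (2, 4, 8, 16, 32):
    for _ in range(10):
        draw.rectangle([x, 50, x + thickness - 1, H // 3], fill=0)
        x += 2 * thickness

# diagonal lines across the middle third
for off in range(0, W, 40):
    draw.line([(off, H // 3), (off + H // 4, 2 * H // 3)], fill=0, width=4)

# pseudo-random blocks in the bottom third
random.seed(42)
for _ in range(5000):
    bx = random.randrange(0, W - 32)
    by = random.randrange(2 * H // 3, H - 32)
    s = random.choice((2, 4, 8, 16))
    draw.rectangle([bx, by, bx + s, by + s], fill=random.choice((0, 255)))

img.save("test-chart.png")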

Method

Decide which lenses, at which apertures, you want to profile. Make your own design, print it out at least A4 size, and affix it to a wall. We want the lighting to be as even as possible, so ideally use indoor artificial light after dark (i.e. this is a good project for a dark evening). Carefully set up the camera on a tripod facing the chart square-on, and move close enough that the test-chart just fills the frame.

Camera settings: use the lowest ISO setting possible (normally around 100), to minimize sensor noise. Use aperture-priority mode so it chooses the shutter-speed itself and fix the white-balance to counteract the indoor lighting. (For a daylight lightbulb, use daylight; otherwise, fluorescent or tungsten. Avoid auto-white-balance.)
Either use a remote-release cable or wireless trigger, or enable a 2-second self-timer mode to allow shaking to die down after pushing the shutter. Assuming the paper with the test-chart is still mostly white, use exposure-compensation (+2/3 EV).
For each lens and focal-length, start with the widest aperture and close-down by a third or half a stop, taking two photos at each aperture.
Open a small text-document and list each lens and focal-length in order as you go, along with any other salient features. (For example, with old manual prime lenses, note the start and final apertures and the interval size – may be half-stops, may be third of a stop.)
Use a RAW converter to process all the images identically: auto-exposure, fixed white-balance, and disable any sharpening, noise-reduction and rescaling you might ordinarily do. Output to 16-bit TIFF files.

Now, a reasonable measure of pixel-level sharpness in an image, or a part of one, is its standard deviation. We can use the following Python script (requires the numpy and OpenCV modules) to load one or more images and output the standard deviations of subsets of each:

#!/usr/bin/env python
# Report pixel-level contrast (standard deviation) for the whole image,
# the top-left third and the extreme top-left corner of each input file.
import sys, glob
import cv2
import numpy as np

def imageContrasts(fname):
    # load as greyscale and measure the spread of pixel values
    img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    a = np.asarray(img)
    height, width = a.shape
    third = a[0:height//3, 0:width//3]
    corner = a[0:height//8, 0:width//8]
    return np.std(a), np.std(third), np.std(corner)

def main():
    files = sys.argv[1:]
    if len(files) == 0:
        files = glob.glob("*.jpg")
    for f in files:
        co, ct, cc = imageContrasts(f)
        print("%s contrast %f %f %f" % (f, co, ct, cc))

if __name__ == "__main__":
    main()

We can build a spreadsheet listing the files, their apertures and sharpnesses overall and in the corners, where vignetting typically occurs. We can easily make a CSV file by looping the above script and some exiftool magic across all the output TIFF files:

bash$ for f in *.tif
do
  ap=$(exiftool "$f" | awk '/^Aperture/ {print $NF}')
  speed=$(exiftool "$f" | awk '/^Shutter Speed/ {print $NF}')
  conts=$(~/python/image-interpolation/image-sharpness-cv.py "$f" | sed 's/ /,/g')
  echo "$f,$ap,=$speed,$conts"
done

Typical output might look like:

P1440770.tif,,=1/20,P1440770.tif,contrast,29.235214,29.918323,22.694936
P1440771.tif,,=1/20,P1440771.tif,contrast,29.253372,29.943765,22.739748
P1440772.tif,,=1/15,P1440772.tif,contrast,29.572350,30.566767,25.006098
P1440773.tif,,=1/15,P1440773.tif,contrast,29.513443,30.529055,24.942437

Note the extra ‘=’ signs: on opening this in LibreOffice Calc, those cells are evaluated as formulae, so the fractional shutter-speeds are converted to floating-point seconds. Remove the spurious ‘contrast’ column, add a header row (fname, aperture, speed, fname2, entire, third, corner) and add a lens column identifying each lens and focal-length from your notes – the R analysis below groups by these column names.

Analysis

Let’s draw some graphs. If you wish to stay within the spreadsheet, use a pivot table to average the third-of-frame contrast values per aperture per lens and work off that. Alternatively, a bit of interactive R can lead to some very pretty graphs:

> install.packages("ggplot2")
> library(ggplot2)
> data <- read.csv("sharpness.csv")
> aggs <- aggregate(cbind(speed, entire, third, corner) ~ lens + aperture, data, FUN=mean)
> qplot(aggs$aperture, aggs$third, col=aggs$lens, data=aggs, asp=.5) + geom_smooth()

This will give a comparison of overall sharpness by aperture, grouped by lens. Typically we expect every lens to have a sweet-spot aperture at which it is sharpest; my own examples are no exception. There are five lenses at play here: a kit 14-42mm zoom, measured at both 30mm and 42mm; a Minolta Rokkor 55mm prime; a Pentacon 30mm prime; and two Pentacon 50mm f/1.8 primes, one brand new off eBay and one that’s been in the bag for 4 years.

The old Pentacon 50mm was sharpest at f/5.6 but is now the second-lowest at almost every aperture – we’ll come back to this. The new Pentacon 50mm is the sharpest of all the lenses from f/5 onwards, peaking at around f/7.1. The kit zoom is obviously designed to be used around f/8; the Pentacon 30mm prime is ludicrously unsharp at all apertures – given a choice between the kit zoom at 30mm and the prime, it would have to be the zoom (the unconventional choice) every time. And the odd one out, the Rokkor 55mm, peaks at a mere f/4.

How about the drop-off, the factor by which the extreme corners are less sharp than the overall image? Again, a quick calculation and plot in R shows:

> aggs$dropoff <- aggs$corner / aggs$third
> qplot(aggs$aperture, aggs$dropoff, col=aggs$lens, data=aggs, asp=.5) + geom_smooth()

Some observations:

  • all the lenses show a similar pattern, worst vignetting at widest apertures, peaking somewhere in the middle and then attenuating slightly;
  • there is a drastic difference between the kit zoom lens at 30mm (among the best performances) and 42mm (the worst, by far);
  • the old Pentacon 50mm lens had best drop-off around f/8;
  • the new Pentacon 50mm has least drop-off at around f/11;
  • the Minolta Rokkor 55mm peaks at f/4 again.

So, why do the old and new Pentacon 50mm lenses differ so badly? Let’s conclude by examining the shutter-speeds; by allowing the camera to automate the exposure in aperture-priority mode, whilst keeping the scene and its illumination constant, we can plot a graph showing each lens’s transmission against aperture.

> qplot(aggs$aperture, log2(aggs$speed/min(aggs$speed)), col=aggs$lens, data=aggs, asp=.5)+geom_smooth()

Here we see the new Pentacon 50mm lens seems to require the least increase in shutter-speed per stop of aperture, while, above around f/7.1, the old Pentacon 50mm requires the greatest – rocketing off at a different gradient to everything else, such that by f/11 it’s fully 2 stops slower than its replacement.

There’s a reason for this: sadly, the anti-reflective coating is starting to come off in the centre of the front element of the old lens, in the form of a rough circle approximately 5mm in diameter. At around f/10 the aperture itself is only about 5mm across (50mm focal length divided by f/10), so this damaged patch becomes a significant contributor to the overall exposure.

Conclusion

With the above data, informed choices can be made as to which lens and aperture suit particular scenes. When shooting a wide-angle landscape, I can use the kit zoom at f/8; for nature and woodland closeup work where the close-focussing distance allows, the Minolta Rokkor 55mm at f/4 is ideal.