It feels funny to think that back in the early 1990s Serif was known for PagePlus, in the days when such things were called desktop publishing, or DTP, applications.
Recently, however, they’ve produced Affinity Photo for Mac, Windows and iPad. After one or two folks recommended it, I thought it was time to add another trick to the photo-publishing workflow and have a play. After all, if nothing else, having an iPad would settle my doubts about colour management on Linux, wouldn’t it?
So I’ve spent a few weeks driving around hunting scenery and taking photos of it and gradually evolving a few routines: photos are still shot on the Pentax K-1; processed (variously pixel-shift or HDR) using dcraw on Linux, where I also run them through darktable for a lot of toning work; the results are then copied to an ownCloud folder which synchronizes automagically with the iPad; a bit of juggling with share-to-Affinity and share-to-Photos and share-to-ownCloud later and the results are copied back to the workstation for final organization, checks and publication.
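For the curious, the Linux end of that hand-off boils down to very little. Here’s a rough sketch in Python – the folder names and dcraw flags are illustrative assumptions rather than my exact commands, and the darktable toning and pixel-shift/HDR merging happen outside it:

```python
#!/usr/bin/env python3
"""Illustrative sketch: batch-convert RAW files with dcraw and drop the
16-bit TIFFs into a folder that the ownCloud client syncs to the iPad.
Paths and flags are assumptions, not the exact pipeline described above."""
import shutil
import subprocess
from pathlib import Path

RAW_DIR = Path("~/photos/incoming").expanduser()    # hypothetical drop folder for K-1 files
SYNC_DIR = Path("~/ownCloud/to-ipad").expanduser()  # hypothetical ownCloud-synced folder

for raw in sorted(RAW_DIR.glob("*.DNG")):
    # -w: camera white balance, -6: 16-bit output, -T: write TIFF, -q 3: AHD demosaic
    subprocess.run(["dcraw", "-w", "-6", "-T", "-q", "3", str(raw)], check=True)
    tiff = raw.with_suffix(".tiff")                  # dcraw writes the TIFF next to the RAW
    shutil.move(str(tiff), SYNC_DIR / tiff.name)     # hand it to the sync client
```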
Within Affinity, my workflow is to import an image and ensure it’s converted to the 16-bit P3 colour space – a bit wider than sRGB and native to the iPad’s display. I then run the Develop persona, which tweaks exposure, brightness and contrast, clarity, detail, colour balance (not so much), lens distortions, etc. After that, I use layers to remove sensor dust and other undesirable features. Top tip: use a new pixel layer for the in-painting tool set to “this layer and below”, so all the in-paintings can be toggled on and off to see their effect; also use a brightness+contrast adjustment layer above that to exaggerate the contrast while you work, so the corrections will be even less visible when the contrast drops back to normal. If the image requires it, I’ll add one or two fill layers for gradients – better an elliptical gradient that can be moved around the scene than a vignette that only applies in the corners. Finally, I merge all the layers into a visible sum-of-all-below pixel layer, on which I run the Tonemapping persona; ignoring all the standard presets (which are awful), I have a couple of my own aimed at low- and high-key black-and-white, in which I can balance local against global contrast. If I produce a black-and-white image, it probably comes from this layer directly; sometimes dropping the post-tonemapped layer into the luminosity channel makes for a better colour image as well (subject to opacity tweaking).
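To make that last luminosity trick concrete: the blend effectively keeps the colour of the layers below and takes only the lightness from the tonemapped layer. A rough approximation in Python, working in Lab space with scikit-image – Affinity’s own blend maths will differ in detail, and the function and opacity below are just for illustration:

```python
import numpy as np
from skimage import color

def luminosity_blend(base_rgb, tonemapped_rgb, opacity=0.6):
    """Keep the base image's colour, but blend in the tonemapped layer's lightness."""
    base_lab = color.rgb2lab(base_rgb)        # both inputs: float RGB arrays in [0, 1]
    top_lab = color.rgb2lab(tonemapped_rgb)
    out_lab = base_lab.copy()
    # Only the L channel changes; a and b (the colour) stay with the base image.
    out_lab[..., 0] = (1 - opacity) * base_lab[..., 0] + opacity * top_lab[..., 0]
    return np.clip(color.lab2rgb(out_lab), 0.0, 1.0)
```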
So, some results. It produces colour:
It produces black and white:
One final thing: I used to hate the process of adding metadata – titles and descriptions, slogging through all my tags for the most pertinent ones, etc. Because the ownCloud layer is so slow and clunky, it makes sense to be more selective and send fewer photos through it; and if I also add metadata at that stage, I can concentrate on processing each image with particular goals in mind, knowing that all future versions will already be annotated correctly. Freedom!