ShinyPhoto: New Website

The old ShinyPhoto website was getting a bit long in the tooth. It saw several versions of Python come and go and increasingly suffered from bitrot. (Notably, an incompatibility in the CGI module between Python versions; and it ran for so long that the backend storage engine I used was deprecated, with no easy way out but to revert to one I wrote myself – not a great advertisement for relying on third-party libraries!)

So, for the past couple of months I’ve been learning my way around JavaScript and node.js, and have replaced the site with a new gallery to show off my photos.

Being me, it’s a bit geeky. With web-design there are so many angles to consider, but here are a few aspects that stick in the mind:

Technical: no XSLT; this is the first time in nearly 20 years that I’ve used a different templating language – in this case Mustache, since it also needs to be able to produce non-HTML output.
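
Purely as an illustration of why that matters: the same engine can render an HTML fragment and a non-HTML document from the same data. A minimal sketch using the mustache npm package – the templates and field names here are invented, not the site’s actual ones:

```javascript
// Minimal Mustache sketch: one engine, HTML and non-HTML output.
// Templates and field names are illustrative only.
const Mustache = require('mustache');

const photos = [
  { title: 'Glen Coe at dawn', url: '/photo/glen-coe-dawn' },
  { title: 'Birch in mist',    url: '/photo/birch-in-mist' },
];

// An HTML fragment...
const htmlTemplate =
  '<ul>{{#photos}}<li><a href="{{url}}">{{title}}</a></li>{{/photos}}</ul>';

// ...and a non-HTML (sitemap-ish XML) document from the same data.
const xmlTemplate =
  '<?xml version="1.0"?>\n<urlset>{{#photos}}<url><loc>https://example.com{{url}}</loc></url>{{/photos}}</urlset>';

console.log(Mustache.render(htmlTemplate, { photos }));
console.log(Mustache.render(xmlTemplate, { photos }));
```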

Learning: there’s a whole ecosystem of node.js packages (npm modules) that have come in handy, from the Express webserver to image-resizing utilities (some of which are faster than others!).
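
For a flavour of the Express side of it, a hedged skeleton – the routes and handlers below are hypothetical, not the site’s real ones:

```javascript
// Hypothetical Express skeleton; routes and responses are illustrative only.
const express = require('express');
const app = express();

// Homepage, plus a page per photo, each rendered from its own template.
app.get('/', (req, res) => res.send('homepage'));
app.get('/photo/:slug', (req, res) => res.send(`photo page for ${req.params.slug}`));

app.listen(3000, () => console.log('listening on :3000'));
```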

Data: in my more professional capacity I deal with data storage on a daily basis, so the topic is of some interest. One of the problems with the old site was its inability to extract metadata from images; because this site’s primary focus is the organization and display of photos, I decided that the JPEG should contain all the data displayed – title, description, geotag and keywords are all extracted from a single upload, and the less manual editing required afterwards, the better. Essentially, digiKam is both organizer and implicit website editor on my desktop.
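
The post doesn’t say which extractor the site uses; as one way to do it, the exiftool CLI can dump a JPEG’s embedded XMP/IPTC/EXIF fields as JSON. The tag names below are the common ones digiKam tends to write, but treat them as assumptions:

```javascript
// Sketch: pull title/description/keywords/geotag out of an uploaded JPEG
// via the exiftool command-line tool. Tag names (Title, Description,
// Subject, GPSLatitude/GPSLongitude) are assumptions; adjust to what
// your organizer actually writes.
const { execFile } = require('child_process');
const { promisify } = require('util');
const execFileP = promisify(execFile);

async function photoMetadata(path) {
  const { stdout } = await execFileP('exiftool', ['-json', '-n', path]);
  const tags = JSON.parse(stdout)[0];
  return {
    title:       tags.Title,
    description: tags.Description,
    keywords:    tags.Subject || tags.Keywords || [],
    latitude:    tags.GPSLatitude,
    longitude:   tags.GPSLongitude,
  };
}
```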

Database: with the unit of data being the JPEG, presented as a page per photo, that maps well onto a document-oriented model such as one of the NoSQL JSON-based databases. Unfortunately MongoDB disgraced themselves by choosing a non-open-source licence recently, so I was pleased to discover the CouchDB family – an ecosystem sharing a protocol (JSON over HTTP(S)) and a query language (Mango) across different storage backends, with the advantage that I can start with PouchDB running in-process under node.js and switch to an external CouchDB with a quick data replication later if need be. So far, it’s coping fine with 1.1GB of JPEG data (stored internally as attachments) and 70MB of log data.
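
A minimal sketch of what that shape looks like with PouchDB – document IDs, fields and the replication URL are my assumptions, not the site’s actual schema:

```javascript
// Sketch: a photo as a document with the JPEG stored as an attachment,
// queried via Mango and later replicated to an external CouchDB.
// IDs, field names and URLs are assumptions for illustration.
const PouchDB = require('pouchdb');
PouchDB.plugin(require('pouchdb-find'));  // Mango-style queries
const fs = require('fs');

const db = new PouchDB('photos');

async function addPhoto(jpegPath) {
  await db.put({
    _id: 'photo:glen-coe-dawn',
    type: 'photo',
    title: 'Glen Coe at dawn',
    keywords: ['scotland', 'mountain'],
    uploaded: new Date().toISOString(),
    _attachments: {
      'original.jpg': {
        content_type: 'image/jpeg',
        data: fs.readFileSync(jpegPath),   // a Buffer works under node.js
      },
    },
  });
}

async function listPhotos() {
  // Mango selector: the same syntax whether the backend is PouchDB or CouchDB.
  return db.find({ selector: { type: 'photo' } });
}

// If/when needed, replicate everything to an external CouchDB instance:
// await db.replicate.to('http://localhost:5984/photos');
```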

Configurability: several aspects of the site are configurable, from the internal and external navigation menus to the cache-validity/timeout on images.
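
Something along these lines, purely to illustrate the idea – every key and value below is invented, not the site’s real configuration:

```javascript
// Hypothetical site configuration: navigation menus and image cache policy.
// All keys and values are illustrative only.
module.exports = {
  navigation: {
    internal: [
      { label: 'Galleries', path: '/galleries' },
      { label: 'About',     path: '/about' },
    ],
    external: [
      { label: 'RedBubble', url: 'https://www.redbubble.com/' },
    ],
  },
  images: {
    cacheMaxAgeSeconds: 86400,   // Cache-Control max-age for served JPEGs
    thumbnailWidth: 400,
  },
};
```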

Scalable: my initial thought was to keep image-resizing pure JavaScript and rely on nginx caching for speed; however, that would lose the ability to count JPEG impressions (especially thumbnails), so I switched to a mixed JS/C npm module and now resizing is fast enough to run live. The node.js code itself also runs cleanly – the site feels very snappy in the browser after the old Python implementation.
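
The post doesn’t name the module; sharp (JavaScript bindings over the libvips C library) is one such mixed JS/C package, and a live count-then-resize route might look roughly like this – the route, the impression counter and the sizes are all assumptions:

```javascript
// Sketch: count an impression, then resize and serve a thumbnail live.
// sharp is one libvips-backed (C) module; the counting call and widths
// are assumptions for illustration.
const sharp = require('sharp');

// Hypothetical: write an impression event to the application log store.
async function recordImpression(photoId, kind) { /* ... */ }

async function serveThumbnail(req, res, originalJpegBuffer) {
  await recordImpression(req.params.slug, 'thumbnail');

  const thumb = await sharp(originalJpegBuffer)
    .resize(400)                 // width in px; height scales to match
    .jpeg({ quality: 85 })
    .toBuffer();

  res.type('image/jpeg')
     .set('Cache-Control', 'max-age=86400')
     .send(thumb);
}
```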

Metadata/SEO: the change of templating engine means specific templates can be applied to specific kinds of page, rather than imposing one structure across the whole site; different OpenGraph and Twitter-Card metadata applies to the homepage, galleries and individual photo pages.
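
For instance, a photo-page head partial might carry per-image OpenGraph and Twitter-Card tags along these lines – a Mustache sketch with invented field names, not the site’s actual templates:

```javascript
// Sketch: a per-photo <head> partial with OpenGraph and Twitter-Card tags.
// The template and field names are illustrative only.
const Mustache = require('mustache');

const photoHead = `
<meta property="og:type" content="article">
<meta property="og:title" content="{{title}}">
<meta property="og:description" content="{{description}}">
<meta property="og:image" content="{{imageUrl}}">
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:title" content="{{title}}">
<meta name="twitter:image" content="{{imageUrl}}">
`;

console.log(Mustache.render(photoHead, {
  title: 'Glen Coe at dawn',
  description: 'First light over the Three Sisters.',
  imageUrl: 'https://example.com/photo/glen-coe-dawn/large.jpg',
}));
```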

Statistics: lots of statistics. There are at least three aspects where statistics rule:

  • the usual analytics: it’s always handy to keep an eye on the most-popular images, external referrers and so on. The site uses its own application-level logging to register events at the page-impression scale, so the log data is queryable without having to dig through CLF webserver logs.
  • how should a photo gallery be sorted? By popularity? By date? Do thumbnail impressions count? What about click-through rate? The new site combines all three metrics into its own score function, recalculated across all images nightly, which forms the basis of the display order (a sketch follows this list). (It surprises me that some photo galleries expect people to choose the sort order by hand, or present no obvious order at all.)
  • how should a photo gallery be organized? My work is very varied – from bright colour to black and white, from sky to tree to mountain and water, from fast to long exposure, from one corner of the country to another, as the landscape leads – and I did not want to impose a structure only to have to maintain it down the line. Accordingly, the new ShinyPhoto is self-organizing: within any slice through the gallery, it chooses a set of navigation tags that splits the images closest to half (see the sketch after this list). Relatedly, the images on the homepage used to be a static, manually edited selection; now they are chosen dynamically by aspect-ratio and score.
  • Marketing: some aspects of the layout now enjoy A/B testing – no cookies required; instead, another hash function determines the site’s appearance, and I can check which variants work best over time (see the sketch after this list).
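
On the sorting bullet above: the actual metrics and weights aren’t given in the post, so purely as an illustration of the shape of it, here is a sketch of logging events into the same database family and recombining them nightly into a score – the event fields, weights and decay constant are all invented:

```javascript
// Sketch of application-level event logging plus a nightly score function.
// Field names, weights and the decay constant are invented for illustration;
// the site's real score function is its own.
const PouchDB = require('pouchdb');
const events = new PouchDB('log');

// Register one event per page/thumbnail impression or click-through.
async function recordEvent(photoId, kind /* 'page' | 'thumbnail' | 'click' */) {
  await events.post({ photoId, kind, at: new Date().toISOString() });
}

// Combine popularity, click-through rate and recency into one number.
function score({ pageViews, thumbnailViews, clicks, uploaded }) {
  const ctr = thumbnailViews ? clicks / thumbnailViews : 0;
  const ageDays = (Date.now() - new Date(uploaded)) / 86400e3;
  const recency = Math.exp(-ageDays / 90);   // decay with a ~90-day time constant
  return 1.0 * Math.log1p(pageViews)
       + 2.0 * ctr
       + 0.5 * recency;
}

// Run nightly (cron, systemd timer, setInterval...) over all photos,
// writing the score back so the gallery can simply sort by it.
```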
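
On the self-organizing bullet above: the tag selection can be sketched as "within the current slice, prefer tags whose image count is closest to half the slice". This is a guess at the approach, not the site’s actual code:

```javascript
// Sketch: pick navigation tags that split the current slice closest to half.
// A guess at the approach described above, not the site's actual code.
function bestSplitTags(photos, howMany = 5) {
  const counts = new Map();
  for (const photo of photos) {
    for (const tag of photo.keywords || []) {
      counts.set(tag, (counts.get(tag) || 0) + 1);
    }
  }
  const half = photos.length / 2;
  return [...counts.entries()]
    .sort((a, b) => Math.abs(a[1] - half) - Math.abs(b[1] - half))
    .slice(0, howMany)
    .map(([tag]) => tag);
}

// e.g. bestSplitTags(currentSlice) -> ['monochrome', 'water', 'long-exposure', ...]
```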
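
And on the marketing bullet above: a cookie-less A/B split can be done by hashing something stable about the request. The post doesn’t say what is hashed, so the key below (IP plus user-agent) is an assumption:

```javascript
// Sketch: deterministic, cookie-less A/B assignment by hashing the request.
// Hashing IP + user-agent is an assumption; the real site may key on
// something else entirely.
const crypto = require('crypto');

function layoutVariant(req, variants = ['layout-a', 'layout-b']) {
  const key = `${req.ip}|${req.get('user-agent') || ''}`;
  const firstByte = crypto.createHash('sha1').update(key).digest()[0];
  return variants[firstByte % variants.length];
}

// The chosen variant can then be logged alongside impressions and clicks,
// so the same nightly statistics show which layout performs better.
```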

So far, it’s proving pleasantly easy to live with. Apart from the continual debugging and adding of new features – fortunately now slowing down – I’m adding photos at a rate of a handful a day, both to the site and to a new RedBubble account in case anyone wants to buy them, one way or another.

So apparently I now like the whole node.js ecosystem. It’s blown away the cobwebs of running – or, more accurately, not running – a legacy website, whilst retaining full control of the appearance and structure of the site rather than handing that over to some third-party site designer.

A good way to start a new year, methinks.