Recreating Wildlife Photographer of the Year online – part 1 – Introduction and technical approach

https://www.nhm.ac.uk/wpy

Introduction

I’ve been involved with the Wildlife Photographer of the Year brand for as long as I’ve been at the Natural History Museum, which let’s just say is a long time. I’m still very proud and privileged to work on such important and inspirational content.

This series of posts is about the recent re-platform of our ageing WPY microsite. I’ll start by setting the scene for how the project came about and then dive into some of the technical decisions. Later on I’ll talk about some of the site’s key features and what we’ve learnt from the tech and from working towards a more modern API-driven web architecture. It’s been great having something so rewarding to work and focus on during these strange and difficult times in lock-down.

The competition’s microsite has long been one of my responsibilities, and this would be my third replatform, the last being a migration into our (then new) main website CMS (Adobe Experience Manager – AEM), with a database-backed Java application serving the dynamic gallery section of winning images, back in 2015.

As is fairly typical, web applications built using older technologies get left behind if they aren’t given focus from both product and development, and unfortunately that was the case with our Java-based WPY microsite. Its gallery design was ageing badly and not doing the images justice. It was getting its gallery updates once a year with the release of the winners, but that was about it.

My role had also largely moved to the front end: I’d been focusing on Javascript development for a few years, developing web-based gallery interactives and hybrid mobile apps for the Museum (they were all the rage back then).

As part of a new Digital Media department, we’ve started working towards a new services-led web strategy, using APIs to better separate out back-end and front-end concerns, allowing our front-end team to use modern Javascript frameworks and tools to build more feature-rich and performant web products.

WPY’s importance and potential was recognised by the museum so we were given the green light to re-platform it.

This was great news for me, as I had the opportunity to use my passion for JS development on content I’ve always been inspired by.

Clearleft collaboration

Since this project would also help the new front-end development team (in this case myself and my colleague Rachel Brook) upskill in modern architecture, to the ongoing benefit of the WPY product and the Digital Media department, we’d be collaborating with an agency on the build as well.

We worked in partnership with Clearleft (https://clearleft.com/) over two ideation and design discovery sprints to try and pin down the direction of the replatform – WPY is a far-reaching brand and community, so we had to work hard to balance the directions this could take. The images and the photographer community quite rightly came to the forefront of the discussion early, so general agreement soon pointed towards doing justice to the images online, as well as improving the competition entry process and experience. They’ve written a great article on their view of the project (Link).

Clearleft had impressed us with both their clarity on design direction and their technical know-how. Trys, their senior developer, responded to my technical questions with detailed knowledge and awareness.


Technical Approach

As the in-house technical lead on the project, I mulled over the directions it could take. We knew it would be a Javascript framework build, and I was personally leaning towards React (the ReactJS vs VueJS battle is still building up steam amongst us; the only thing that’s clear at this point is that there’s no wrong option). We’d already built a few React apps in the team, and having used it in some interactive and web builds myself I’d gotten more comfortable with it. But I’d also encountered some of the early downsides of its more flexible nature. At this stage it still could have taken any form.

NextJS

It soon became apparent that the new site needed both an established and popular platform (for supportability) and server rendering for the vital SEO and performance benefits it still brings.  As a relatively new Javascript team, we’d been warned off the complexities of custom server rendered React in a previous API-driven project.

Gatsby and NextJS were the two market leaders in server-rendered React; I won’t go into their differences much here, as there are numerous articles on the topic. Both feature automatic performance optimisations and an ecosystem of modules for additional functionality; the key difference is that Gatsby is a static site generator, while NextJS (primarily) uses a running server to render requests.

What does this actually mean?

Gatsby builds all the website pages once at ‘build time’. The generated files can then be put on a server/CDN to serve the site, which offers performance benefits and means there’s no public server to manage; the downside is that you have to rebuild the site if you need to change any of the generated pages (even to fix a spelling mistake).

NextJS operates primarily as a runtime server (though it’s moving to a hybrid static model), using ‘live’ content from the APIs to render each page as it is requested. If a spelling mistake is fixed in the API, the change is available immediately on the NextJS page. The downside compared to Gatsby is the added complexity and risk of running a live web server, even with caching, load balancing and fallback strategies.
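As a rough sketch of what that per-request rendering looks like in NextJS (the route, endpoint and data shape here are hypothetical, and the page-building logic is pulled into a pure helper for clarity):

```typescript
// Hypothetical shape of a gallery image returned by the API
interface GalleryImage {
  id: number;
  name: string;
}

// Pure helper: turn fetched data into the props object NextJS expects.
// Returning { notFound: true } tells NextJS to render its 404 page.
export function buildImageProps(image: GalleryImage | null) {
  if (!image) return { notFound: true as const };
  return { props: { image } };
}

// NextJS calls this on the server for every request, so the page always
// reflects the current API content. The endpoint path is an assumption,
// not the real WPY API route.
export async function getServerSideProps(context: { params: { id: string } }) {
  const res = await fetch(
    `https://www.nhm.ac.uk/api/wpy-api/images/${context.params.id}`
  );
  const image = res.ok ? ((await res.json()) as GalleryImage) : null;
  return buildImageProps(image);
}
```

Splitting the props-building out of the fetch keeps the per-request logic easy to unit test without a running server.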

The new WPY site fell between these two publishing requirements: the gallery section of the site only requires occasional updates, while other pages (e.g. the homepage) receive more regular updates. NextJS’s all-round benefits, however, made me lean towards it. Given that this project was being seen as a stepping stone towards our long-term API-driven ‘headless’ aspirations, I was keen for it to move our learning on as much as possible; Gatsby’s reliance on publishing builds, and talk of its lack of scalability on large websites, were my key reasons against it.

Modern front-end tools

I was also keen to choose other tools that could help the project and team long term. We’d often discussed component libraries as a solution for keeping track of the large and changing web component list of our CMS, but these could take many forms, from being entirely code-generated to something a designer or developer would need to maintain manually. Storybook (https://storybook.js.org/) is a popular and extensive framework which generates live component libraries without much developer overhead, and it was clear that it would be a good fit here. We’ll cover its advantages and ongoing value in a later post.

Test Driven Development (TDD) and better test coverage have always been high on our wishlist, but hadn’t gained much traction in the team due to the limited architectural Javascript in our main stack. I felt it could be too much of an overhead for us on this project.

However, I wanted to take our Javascript in a more modern direction. Using Typescript as a way of giving Javascript a bit more engineering robustness looked to be a good choice, and the rest of the team agreed.

APIs and Content-services

We were lucky enough to also be partnering with our back-end and platforms colleagues in Technology Solutions on the project, so we had dedicated resource for both the critical API builds that would drive the site and the deployment solution a new platform would need. 

We had two distinct data sources for this project, in effect they are continuations of the previous WPY microsite, which was a combination of database generated content and CMS editable (AEM) content.

  • The new REST WPY API (Java Spring) would deliver the structured WPY image data (with relational year, category, photographer info data) required for the dynamic WPY gallery.
  • AEM content-services would deliver the supporting site editorial content and pages.

We’d worked with REST APIs on some earlier re-platforms as a team, but this would be our first use of AEM content-services, a relatively new feature set in Adobe AEM (6.4) that allows for the generation of structured JSON content endpoints (instead of the rendered HTML which drives our main website) whilst still providing editor authoring functionality. This meant the Museum could continue developing and authoring on the same established CMS, while also building decoupled web applications on newer front-end platforms, with the advantages that brings.

Typescript and the API

I’d read an article that said Typescript takes around two weeks before it clicks with a developer, and I can confirm that almost to the day. At the start it was difficult to work on even basic functionality. However, I recognised why it would be a godsend when working with an evolving API layer (especially the naturally less structured content-service data). I was used to working with separately developed data in Java, where typing is a matter of course, and it felt immediately natural getting into the same typed-data mode with Javascript, helped along by the great Typescript support in our VSCode editor and the NextJS platform. As a means of giving a JS developer a view of what data structures are being passed around in code, it’s particularly helpful.

export interface WPYImage {
  id: number;
  url: string;
  status?: number;
  is_category_award: boolean;
  is_peoples_choice: boolean;
  is_in_image_set: boolean;
  is_overall_winner: boolean;
  image_set: WPYImageSetPreview | null;
  name: string;
  name_url: string;
  edition: WPYEditionPreview;
  competition: WPYCompetition;
  category: WPYCategoryPreview;
  award: WPYAward;
  photographer: WPYPhotographerPreview;
  content: WPYImageContent;
  location: WPYLocation;
  tags: WPYTags;
  votes: WPYVotePreview[];
}

Typescript definition for our WPY API’s image data structure.

We used transforms on the fetched API data, where a JSON data object is transformed into another structure better suited for its intended usage in the webapp (e.g. combining or simplifying data objects to make template rendering cleaner).  These were fairly simple when working on the WPY API, as Matt our API developer was both proactive and clairvoyant in optimising and simplifying the API’s endpoint structures for our intended usage as the project evolved. Where we needed data to be extended or reduced, he devised optional endpoint parameters to control the data output, streamlining both the API payload and the data ‘noise’ in the code. 
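As an illustration of the idea (using a simplified, hypothetical subset of the image fields, not the real transform code), a transform might flatten nested API objects into a render-ready view model:

```typescript
// Hypothetical, simplified slice of the API's image data
interface ApiImage {
  id: number;
  name: string;
  photographer: { first_name: string; last_name: string };
  award: { title: string };
}

// View model shaped for direct use in a gallery card template
export interface GalleryCardModel {
  id: number;
  title: string;
  photographerName: string; // combined so the template needs no logic
  awardTitle: string;
}

// Transform: flatten and combine nested API objects for cleaner rendering
export function toGalleryCard(image: ApiImage): GalleryCardModel {
  return {
    id: image.id,
    title: image.name,
    photographerName: `${image.photographer.first_name} ${image.photographer.last_name}`,
    awardTitle: image.award.title,
  };
}
```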

For example the gallery search uses a reduced view of the image data: 

https://www.nhm.ac.uk/api/wpy-api/images?tags=cat.ndrwtr&tags=ed.current&view=image.gallery-filter

The tag query parameters in the URL above are at the core of the API’s extensible potential and the site’s shareability and navigation channels (the UI tags used throughout the site).  The image database already had relational structures for year, category and photographer data linked to the images, but these were getting inconsistent. The competition processes and categories had evolved over the years. After a lot of cleansing and rationalisation of the original data by Matt, he added a tagging layer on top of the existing database structures. The tags could also be extended to allow for new, related data to be populated into the API, such as species data for each image which historically hadn’t been stored.

Examples of different tag functionality in the site
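Repeated query keys like the tags parameters in the example URL above can be assembled with the standard URLSearchParams API, which serialises repeated keys natively; a small sketch (the helper name is mine, not from the real codebase):

```typescript
// Build a query string for the images endpoint from a list of tags and
// an optional reduced 'view' of the data. URLSearchParams handles the
// repeated ?tags= keys and any necessary encoding.
export function buildImageQuery(tags: string[], view?: string): string {
  const params = new URLSearchParams();
  for (const tag of tags) params.append('tags', tag);
  if (view) params.set('view', view);
  return params.toString();
}
```

For the gallery search example above, `buildImageQuery(['cat.ndrwtr', 'ed.current'], 'image.gallery-filter')` reproduces the query portion of the URL.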

Design and CSS approach

One of the things that impressed me watching Clearleft work was the speed and communication between James, their designer, and Trys. They’d worked together previously, and were passionate about improving the designer/developer workflow they were clearly mastering, designing components not pages. They’ve even named their approach – Utopia.

Utopia

James uses Figma as his main design tool, which allows for detailed meta content and design components, something that they’d use as a common interface when moving from design to code. 

Alongside this, they’d devised a strategy to solve a fundamental design problem that comes with typical responsive-breakpoint CSS: each breakpoint can be designed, but devices that fall between the breakpoints are ‘undesigned’ (typography and spacing sit uncomfortably, too large or too small). Increasing the number of designed breakpoints alleviates this, but at much increased designer and developer overhead.

They used a combination of descriptive classes (large, XL, gutter etc.) and CSS custom properties, along with some abstracted-away algorithms, to fluidly scale typography and spacing so that it’s a designed fit for any screen size. The scaling doesn’t need to be linear with screen dimensions either, and can be modified to fit the requirements of the design system.

A simple overview of the system, which they discuss in depth at https://utopia.fyi/.

--f-summit: 1480;
--f-screen: 100vw;
--f-foot: 1 / 16;
--f-hill: (var(--f-screen) - 20rem) / (var(--f-summit) / 16 - 20) + var(--f-foot) * 1rem;
--cfp-gutter: ((4.44445 / 16 - var(--f-foot)) * var(--f-hill));
--cfp-image-grid: ((2.222 / 16 - var(--f-foot)) * var(--f-hill));
--cfp-step--2: ((0.9765625 / 16 - var(--f-foot)) * var(--f-hill));
--cfp-step--1: ((1.041666667 / 16 - var(--f-foot)) * var(--f-hill));
--cfp-step-0: ((1.11111 / 16 - var(--f-foot)) * var(--f-hill));
--cfp-step-1: ((1.185185185 / 16 - var(--f-foot)) * var(--f-hill));
--cfp-step-2: ((1.264197531 / 16 - var(--f-foot)) * var(--f-hill));
--cfp-step-3: ((1.348477366 / 16 - var(--f-foot)) * var(--f-hill));
--cfp-step-4: ((1.438375857 / 16 - var(--f-foot)) * var(--f-hill));

A sample of the fluid Utopia calculations which act as the base of the design system.
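For intuition, the core idea is linear interpolation of a value across a viewport range, clamped at both ends. A generic TypeScript sketch of that maths (an illustration of the concept, not Utopia’s exact generated output):

```typescript
// Interpolate a size (e.g. a font size in px) between minSize and maxSize
// as the viewport width moves from minVw to maxVw, clamping outside the range.
export function fluidSize(
  minSize: number,
  maxSize: number,
  minVw: number,
  maxVw: number,
  vw: number
): number {
  // 0 at minVw, 1 at maxVw, clamped to [0, 1]
  const t = Math.min(1, Math.max(0, (vw - minVw) / (maxVw - minVw)));
  return minSize + (maxSize - minSize) * t;
}
```

So a step scaling from 16px at a 320px viewport to 20px at 1480px sits at exactly 18px when the viewport is halfway between; every in-between device gets a ‘designed’ value instead of falling into an undesigned gap between breakpoints.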

When they proposed using this system, the reaction was slightly mixed. On the one hand we all saw the benefits but there was a little concern about maintainability. Having worked with it, the system has fit us and the project well.

We also had decisions to make on the CSS configuration in our application, with choices between CSS-in-JS and CSS(SASS) module scoping, which would also impact the site’s CSS workflow.

CSS build and configuration  

CSS-in-JS felt a step too far for the more CSS-focused developers in the NHM team. I could see their point of view: mixing CSS into potentially complex JS sounded like a maintenance risk (though I’m intrigued to try it). But we also weren’t that familiar with modern JS component architecture, where each UI element can be as small as you want it to be, limiting both the complexity and overhead of each CSS-in-JS section.

In the end we went with something closer to our main website’s CSS: separate SASS files for each component, but here scoped via modules to avoid basic cascading concerns. We still wanted to carry on using the BEM (Block/Element/Modifier) CSS naming convention as well, despite it largely being a solution for older non-module CSS, where each CSS identifier had to be manually set with a unique descriptive name (e.g. header__controls__button--secondary) to prevent inadvertent conflicts and improve template readability.

With CSS modules, each CSS file’s identifiers would get an automatically generated unique suffix at build time, which is then inserted into the generated HTML (e.g. Header___31kyg), ensuring uniqueness, so a developer could just use button--secondary in place of the full BEM name.

NextJS, however, wasn’t the easiest to configure in this particular combination. We ended up using a custom configuration, which caused some early difficulties that were eventually largely overcome.

Because of the lack of documentation on this particular setup, our first attempts relied on importing the global styles (including the custom CSS variables used in all CSS) into each module, which caused unnecessary duplication of the global styles.  We also didn’t have an easy way of differentiating between global and local style classes.

In the end the solution implemented was what we call the ‘C’ utility.  A helper function that imports the global styles once into itself and exposes a method for components to combine global and locally scoped classnames (which are already being imported into the component in question).

import { withC } from '@/src/utilities/c';
import css from './Connections.styles.scss';  //local module style
const c = withC(css);   //C utility is loaded with local module
 
//C function with both global and local style classes
<div className={c('grid--12__9 row-spacing', 'Connections__section')}>
 
//with only a local class
<h2 className={c(null, 'Connections__sub-heading')}>
 
//with an array of global style classes
<div className={c(['row-spacing', 'u-pad-top', 'u-pad-bottom'])}>

Internally it works by destructuring the hashed local classnames from the local styles module and matching them to the unhashed classnames provided by the developer in the C function parameters, before rendering the combined output as:

<div class="Connections__section___ab2YO grid--12__9 row-spacing">
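A minimal sketch of how such a utility could be built, assuming the local CSS module is a plain object mapping authored classnames to their hashed forms (the real utility also imports the global styles once into itself, and its details may differ):

```typescript
// A CSS module object: authored classname -> build-time hashed classname
type CssModule = Record<string, string>;

// Returns a 'c' function bound to one component's local CSS module.
export function withC(localCss: CssModule) {
  return function c(global?: string | string[] | null, local?: string): string {
    // Global classes aren't module-scoped, so they pass through unchanged
    const globals = Array.isArray(global) ? global : global ? [global] : [];
    // Local classes are looked up in the module to get their hashed form,
    // falling back to the authored name if no mapping exists
    const locals = local ? [localCss[local] ?? local] : [];
    return [...locals, ...globals].join(' ');
  };
}
```

With a module mapping `Connections__section` to its hashed form, `c('grid--12__9 row-spacing', 'Connections__section')` produces the combined classname string shown in the rendered output above.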

The final part of our CSS setup is the Grid system, a 12 column layout as designed in Figma.

It uses a nested hierarchy system to allow for multiple layouts in each template and section.

<main className={c('grid grid--12')}>
  <aside className={c('grid--12__3')}>
    <ImageSidebar image={image} />
  </aside>
  <div className={c('grid--12__9')}>
    <div className={c('grid grid--9')}>
      <ImageContent className="grid--9__6" image={image} />
      <aside className={c('grid--9__3')}>

A full-width (12) grid, containing a left 3-column aside; the remaining 9 columns contain a nested 6|3 grid for the child components

As a final note on the CSS: halfway through the project, NextJS released an update that included out-of-the-box SASS module support (somewhat annoyingly). While our custom configuration means it’s currently disabled for us, it might have saved us some time at the start. There’s no great rush to refactor to it, but it will be interesting to see how much work would be involved and whether there would be any significant benefits now.

It’s also a sign of how quickly these front-end tools evolve (and potentially drop off), and how, to utilise cutting-edge front-end technology, it’s important to set your team up to evolve with them at a sustainable and comfortable pace.


I’ll wind down this part of the story here; even though I’m used to bashing these sorts of articles out in one go, there’s a huge amount achieved here to cover.

The next post will go on to talk about some key site functionality, like the gallery filtering and the Flickr-style image grid that make up the gallery experience.

With thanks to everyone who worked on this project.
