With the platform and CSS approach covered in Part 1 (Link), I’ll go into detail about some of the key technical features and functionality we built out for the site (www.nhm.ac.uk/wpy). We’ll cover the core image gallery solution with its API filtering and Flickr-style image layout, then jump into lazy loading and next-gen image formats, before finally covering the page routing architecture of this API-driven site.
Gallery filtering
We had a full list of image ‘subject’ tags available in the WPY API, so one of my first development tickets on the project was to implement the filtering logic that would drive the gallery section of the site and allow the user to browse all these amazing images in a new way. A slightly daunting task at project start, but I knew the data/content well enough, which certainly helped things along.
Here’s a quick explanation and a React code extract of how it works:

It’s a container component which traverses the API-fetched tag tree, rendering accordion components which in turn render checkbox components. Each checkbox has its own state. When it changes (ticked/unticked), the checkbox pushes its state up to the logical component along with its tag tree identifier. The logical component can then use its full list of checked tag tree identifiers to request the API fetch with the combination of tags as parameters.
The fetch is handled and the returned data is used to populate the image grid component that displays the current selected images.
If a checkbox state is modified while an API fetch is pending, the existing fetch(es) are aborted with the AbortController web API, preventing a stale filter request from being handled as a new image list update.
useEffect(() => {
  let isCancelled = false;
  const abortController = new AbortController();
  if (filtersChanged) {
    (async () => {
      try {
        const galleryParams = getQueryParamsfromTagArray(tagArray, false);
        const imageList = await new Request<WPYImage[]>(
          endpoints.tagSearch,
          { [':tagParams']: galleryParams + '' },
          abortController
        ).get(true);
        if (!isCancelled) {
          updateImageArray(imageList);
          // Push a successful tag filter to the browser URL
          Router.replace(
            Router.pathname + galleryParams,
            Router.asPath.substring(0, Router.asPath.indexOf('?')) +
              galleryParams,
            { shallow: true }
          );
        }
      } catch (e) {
        // Aborted fetches land here; safe to ignore
      }
    })();
  }
  // Cleanup: abort any in-flight fetch when tagArray changes or on unmount
  return () => {
    abortController.abort();
    isCancelled = true;
  };
}, [tagArray]);
Flickr-style image grid


With the filtered image list coming in from the API, the design envisaged a minimal and natural layout for displaying the image collection at its best. The API proved key, providing image dimensions and therefore aspect ratio (each image was different and had to maintain its ratio), something that could be used for both precise layout calculations and lazy loading of the images without unsightly browser re-layouts.
Clearleft had designed a Flickr-style image grid, implemented via this code utility – https://github.com/flickr/justified-layout. Its required calculations use a combination of the API’s aspect ratio values and some of the fluid CSS layout variables like gutter, pulled into the JavaScript for the grid calculations. The result could be adapted with a target row height, but the team was happy with it as-is. This imageGrid component was also used in other areas of the site that needed this layout style (e.g. photo set pages and the upcoming People’s Choice voting page), not just the main gallery image collection.
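For illustration, here’s a minimal sketch of how the justified-layout utility can be fed the API’s aspect ratios (the WPYImage shape, variable names and row height are assumptions for this sketch, not our exact component code):
import justifiedLayout from 'justified-layout';

// Hypothetical image shape – the WPY API provides width/height per image
interface WPYImage { url: string; width: number; height: number; }

function calculateGrid(images: WPYImage[], containerWidth: number, gutter: number) {
  // The utility needs only aspect ratios plus the fluid layout values
  const layout = justifiedLayout(
    images.map(img => img.width / img.height),
    {
      containerWidth,
      boxSpacing: gutter,   // pulled from the CSS layout variables
      targetRowHeight: 320  // tunable, though the default also looked fine
    }
  );
  // layout.boxes gives absolute top/left/width/height for each image
  return layout.boxes;
}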
However, we noticed some issues with the grid – it would occasionally display all images in a single column, and break its layout on a random row on some mobile screens, all relatively inconsistently.
We eventually tracked these down to three separate issues over time. The CSS module loader was not always pulling in the imageGrid CSS (fixed with a NextJS update). Then my colleague Robin found that the Flickr calculations had some rounding issues on mobile device pixel ratios (removing a pixel solves it – see below). Finally, the calculations were not taking into account the scrollbar that gets added, which reduces the viewport’s width, but only once the page is long enough to introduce the scrollbar (solved by loading pages with the vertical scrollbar showing by default, e.g. html { overflow-y: scroll }).
const containerWidth = gridRef.current
  ? gridRef.current.clientWidth - 1 // the mystery rounding pixel..
  : 1480;
const gutter = gridRef.current
  ? gridRef.current.getBoundingClientRect().height
  : 40;
Image lazy loading
The lazy load implementation was set up at the project start and is a vital part of the thoughtful treatment of the images in the site: they gracefully fade in during page scroll, with no content jumps disrupting the experience. Lazy loading itself is used to ensure that only images that are going to be viewed as the user interacts with the page get downloaded – efficiency at many levels (energy, bandwidth etc.).
Once again the API’s image dimensions were key. Correct aspect ratio placeholder boxes for each image can be generated at initial document load using a CSS calculation, meaning no content shifting when the actual images are downloaded and added.
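As an illustration, the classic padding-bottom trick achieves this – a minimal sketch, assuming the API’s width/height values arrive as props (the component and prop names are hypothetical):
// Placeholder that reserves the image's exact aspect ratio before it loads
const ImagePlaceholder = ({ width, height }: { width: number; height: number }) => (
  <div
    style={{
      position: 'relative',
      // padding-bottom percentages are relative to the element's width,
      // so this reserves a box with the correct aspect ratio
      paddingBottom: `${(height / width) * 100}%`
    }}
  >
    {/* the lazy-loaded <img> is absolutely positioned to fill this box */}
  </div>
);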
The other key part is the Intersection Observer API, native modern browser functionality that triggers events to alert the application when a web element is coming into the viewport. As each image comes into view, the app requests the image URL from the server, waits for it to finish downloading and only then fades in the complete image (via CSS) for the user. A relatively complex process (see code below), but with React’s component approach, something that can be encapsulated once and then reused across all components.
useEffect(() => {
  // When the component mounts, create an IntersectionObserver instance
  const observer = new IntersectionObserver(entries => {
    entries
      .filter(x => x.isIntersecting)
      .forEach(() => {
        // If we have access to the image, via the ref, begin lazy loading the image
        if (!ref.current) return;
        const src = ref.current.getAttribute('data-src');
        if (!src) return;
        const loadImage = document.createElement('img');
        // When the image loads, toggle the loaded state, which will swap the prop
        loadImage.addEventListener('load', () => setLoaded(true));
        loadImage.src = src;
        // Destroy the IntersectionObserver, as it has done its job
        observer.unobserve(ref.current);
        observer.disconnect();
      });
  });
  setTimeout(() => {
    if (ref.current) {
      observer.observe(ref.current);
    }
  }, 0);
  return () => {
    observer.disconnect();
  };
}, []);
Image component lazy load implementation
Intersection Observer is a powerful technique that can be used to lazy load other page elements, not just images. For example, I’ve also applied it to our ‘computationally expensive’ social media embeds (YouTube and Instagram) that appear lower down the pages. This greatly improves the page’s ‘time to interactive’, as it reduces the initial running code and server requests for a page and therefore how long a user has to wait. In effect, code and content on (scroll) demand.
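As a sketch of the same pattern applied to an embed (the component and its props are illustrative, not our production code):
import { useEffect, useRef, useState } from 'react';

// Defers a heavy iframe embed until it scrolls near the viewport
const LazyEmbed = ({ src, title }: { src: string; title: string }) => {
  const ref = useRef<HTMLDivElement>(null);
  const [visible, setVisible] = useState(false);

  useEffect(() => {
    if (!ref.current) return;
    const observer = new IntersectionObserver(entries => {
      if (entries.some(e => e.isIntersecting)) {
        setVisible(true); // only now does the iframe get rendered
        observer.disconnect();
      }
    });
    observer.observe(ref.current);
    return () => observer.disconnect();
  }, []);

  return <div ref={ref}>{visible && <iframe src={src} title={title} />}</div>;
};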
Next-gen image formats
Early on in the project, there was also team discussion around utilising the newer optimised image formats (primarily WebP and JPEG 2000) that have been available for a while now but that we hadn’t yet had experience with. These formats offer a much better file-size to image-quality balance compared to older compression formats like JPEG and PNG.
Our images are handled within our CMS or manually by a designer, rather than through more feature-rich cloud services which can convert to these new formats automatically. I knew that since the WPY images are only introduced yearly as part of the competition life-cycle, even a relatively manual process to convert to these new formats would be worthwhile to the user for an image-critical website like this. Due to time constraints I didn’t get round to addressing this until recently, but we’ll be releasing WebP very shortly. Here’s a rundown of what was needed:
The challenge
- Decide the support level
- Find batch conversion tools
- Implement reliable browser support detection and a JPEG fallback
- Make it work with our lazy loading and fade logic
- Get WPY image quality sign-off
Firstly, some background information. As is frequently the way of the web, browser vendors take their time agreeing on new standards, so current browser support is:
- WebP – Google-developed – Chrome, newish Firefox and the new Chromium-based Edge (however, Apple support seems to be coming, according to unofficial Safari developer tweets…)
- JPEG 2000 (JP2) – ISO standard – Mac/iOS Safari only
- JPEG XR – Microsoft-developed – IE 9-11 and the older EdgeHTML-based Edge
Our IE 11 usage looks to be finally dipping under our 1% userbase support threshold (those users would continue getting the existing JPEG fallback), so I was happy to rule out JPEG XR. WebP was the stand-out format to implement, as it covers the majority of our userbase and also has the best file-size savings of all the formats.
For conversion tools, Google offers the cwebp command line tool (developers.google.com/speed/webp/docs/cwebp), something I could easily script to convert the several-thousand-strong WPY image archive across the different image sizes we had. For the JPEG 2000 format, I’ve yet to find a reliable converter I’m happy with – my tests with an older free utility (Advanced Image Batch Converter) were proving difficult to balance between image quality and file savings.
With at least WebP in place, the application needed a strategy of progressive format selection and fallback. The approach for this increasingly common web image task is to replace the traditional:
<img src="img/image.jpg" alt="alt text" />
with:
<picture>
<source srcset="img/image.webp" type="image/webp">
<source srcset="img/image.jp2" type="image/jp2">
<img src="img/image.jpg" alt="Alt Text!">
</picture>
What this does is give the browser a sequence of image formats to try; if it doesn’t support a format, it moves on to the next, the last being the original img element that works in all browsers.
This is nice and simple; however, our specific lazy loading implementation preloads an image separately before applying it on the actual page where it needs to be, so we’d need a JavaScript solution to find the correct image format for this preload. There are a few implementations of this; most rely on loading a tiny version of each image format to detect whether the browser can deal with it at runtime. This can be done once per session and stored as a variable. Helpfully, we can also provide an inline data representation of each image format in the application code, which saves us the round trip of going to the server to request a 1-pixel WebP/JP2 test image – a request that would still add latency to a new web session for a user, however minor.
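As a minimal sketch of that inline detection idea (the base64 string is the 1x1 lossy WebP sample from Google’s WebP detection guide; the helper name is illustrative):
// Detects WebP support once by decoding a tiny inline data URI
function detectWebPSupport(): Promise<boolean> {
  return new Promise(resolve => {
    const img = new Image();
    // Resolves true only if the browser successfully decodes the WebP
    img.onload = () => resolve(img.width > 0 && img.height > 0);
    img.onerror = () => resolve(false);
    img.src =
      'data:image/webp;base64,UklGRiIAAABXRUJQVlA4IBYAAAAwAQCdASoBAAEADsD+JaQAA3AAAAAA';
  });
}

// Run once per session and cache the result
const webpSupported: Promise<boolean> = detectWebPSupport();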
Modernizr (https://modernizr.com/) has been around a long time as a browser feature detection utility. It fit the bill nicely, offering an online service to select exactly which browser features we need to detect (in this case only WebP and JP2 support), which saves hugely over a full feature list detection script. It also follows the inline image test I described earlier, and its code can be downloaded and inserted directly into our application – again, crucially, preventing another server round trip before the application can start rendering.
With this in place, the application code knows which image format to preload, so our nice lazy loading/fade-in implementation can continue working as before. Responsive image sizes are handled in the srcset attribute of the new <source> elements, just as they were in the original <img>. With all these different formats and file sizes, the code itself ends up looking complex – again, component design saves the day by abstracting this away (see the sketch below). Not the easiest thing to implement, but I just needed to look at the file sizes to keep me motivated!
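For a flavour of that abstraction, here’s a hedged sketch of a picture component with per-format srcsets (the props, size list and URL scheme are assumptions for illustration):
// Renders format fallbacks and responsive sizes in one place
const WPYPicture = ({ name, alt, widths = [480, 960, 1480] }:
  { name: string; alt: string; widths?: number[] }) => {
  // Builds a srcset like "img/name-480.webp 480w, img/name-960.webp 960w, ..."
  const srcSetFor = (ext: string) =>
    widths.map(w => `img/${name}-${w}.${ext} ${w}w`).join(', ');

  return (
    <picture>
      <source srcSet={srcSetFor('webp')} type="image/webp" />
      <source srcSet={srcSetFor('jp2')} type="image/jp2" />
      <img src={`img/${name}-${widths[0]}.jpg`} srcSet={srcSetFor('jpg')} alt={alt} />
    </picture>
  );
};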


As ever with anything new, there’s always an unknown element. When I first set up the new versions, I found that in the current year’s WPY images, the colour vibrance or saturation differed between the existing JPEGs and the converted WebP (and JP2), the latter appearing less vibrant or more washed out (see images below).
Not being an image format expert by any means, I instinctively saw it as a web issue: I had read about a possible WebKit browser bug that could lead to incorrect colour profiles, and associated the difference with this. My reasoning was that earlier years’ images didn’t exhibit the same conversion difference, and that opening all three image formats in another image viewer (I used IrfanView) showed they all looked the same. I assumed this year’s batch of JPEG images must have been saved in an ‘incorrect’ format, and that the WebP and JP2 I had converted were in fact the true representation.
However, after approaching our designers and the WPY office, we found that in Photoshop the original submitted camera file (a .TIFF) had the same vibrancy as the ‘incorrect’ JPEGs, with the new formats being washed out compared to the .TIFF ‘source of truth’ – which put paid to my theory. Further digging found that Adobe uses its own specific colour profile, and that it’s part of the WPY competition rules (specifically the Adobe RGB (1998) ICC colour profile).
It didn’t take long to find an optional cwebp configuration that I had missed, which copies the ICC colour profile from the source image’s metadata over to the new WebP images during conversion. This thankfully resolved our colour variance. The same approach can be used for JP2 conversion once I find a suitable tool.
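As a rough sketch of the batch conversion (a Node script shelling out to cwebp; the paths, quality setting and file layout are assumptions – cwebp’s -metadata icc option is what carries the colour profile across):
import { execFileSync } from 'child_process';
import { readdirSync } from 'fs';
import * as path from 'path';

// Converts every JPEG in a folder to WebP, preserving the ICC colour profile
function convertFolder(srcDir: string, outDir: string, quality = 80) {
  for (const file of readdirSync(srcDir).filter(f => f.endsWith('.jpg'))) {
    const out = path.join(outDir, file.replace(/\.jpg$/, '.webp'));
    execFileSync('cwebp', [
      '-q', String(quality),
      '-metadata', 'icc', // copy the ICC colour profile from the source image
      path.join(srcDir, file),
      '-o', out
    ]);
  }
}

// e.g. one call per image size folder in the archive
convertFolder('./images/1480', './images/1480');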
Links and routing
Bringing things back to the application itself: with the gallery filter and images in place, the site’s routing to all 10 years’ worth of amazing image pages could be bashed out. NextJS uses a file/folder naming convention under the source project’s ‘pages’ folder to generate its website ‘pages’ or routes, a simpler take on the usual code implementation of a typical JavaScript webapp’s ‘router’. There are contrasting views on the appropriateness of this for complex site structures, but ours was relatively simple.
Starting from the top:

- /[hub]/index.tsx – handles all possible top-level site URLs (e.g. /competition) that aren’t the homepage or /gallery
- /[hub]/[page].tsx – handles anything within a hub page, e.g. /competition/rules or /competition/rules/translations
- /api/sitemap.ts – our dynamically generated sitemap with its ~1000 image page URLs (the WPY API again)
- /gallery/index.tsx – the core gallery search page, uses the tag query parameters to load with a preselected filter list (https://www.nhm.ac.uk/wpy/gallery?tags=ed.5&tags=loc.142)
- /gallery/[imagename].tsx – handles all the image pages; the API’s internal image-name field drives the URLs (https://www.nhm.ac.uk/wpy/gallery/2019-cool-drink) – see the sketch after this list
- /gallery/portfolio/[imageset].tsx – separate route for winning images from a photoset category
- /index.tsx – our homepage (https://www.nhm.ac.uk/wpy)
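As an illustrative sketch of the dynamic image page route (the data-fetching shape is assumed; at the time, NextJS pages used getInitialProps rather than today’s getServerSideProps):
// pages/gallery/[imagename].tsx – the file name defines the dynamic route
import { NextPage } from 'next';

interface ImagePageProps { imagename: string; }

const ImagePage: NextPage<ImagePageProps> = ({ imagename }) => {
  // The real page renders the full image detail view from the API data
  return <h1>{imagename}</h1>;
};

// Runs on the server for a first visit, in the browser during client navigation
ImagePage.getInitialProps = async ({ query }) => ({
  imagename: query.imagename as string
  // ...plus the WPY API fetch for this image's content
});

export default ImagePage;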
Isomorphic, universal, server-side-rendering???
With NextJS’s server-side rendered behaviour, a new client browser session is handled by generating the page content (HTML) on our servers (based on the requested URL and the matching API data). Once this is sent back to the client browser (along with all the CSS and JS it needs to run), NextJS then starts operating as a client-side app (or Single Page App – SPA). This dual-mode behaviour is often called isomorphic or universal.
Here, JavaScript in the browser is in full control of the site’s URLs and navigation, so when a new page is selected, the site remains loaded in the browser instead of, as is more traditional, repeating the server request process all over again for a new page.
The client-side JavaScript requests only the API data (and any other assets) it might need to generate the newly requested page content, instead of downloading the full site assets again. When set up right, this makes a big difference to a website’s navigation performance and responsiveness (i.e. a big bounce rate win). The URL in the browser address bar still gets updated, and is shareable and trackable for analytics.
All great stuff, but it means pages need to implement a special kind of link to tell Next/React to control navigation (otherwise the browser just goes off and loads a server page, as it would normally with a traditional <a> HTML link): this is Next’s <Link> component.
This is fairly intuitive to use, however one complication was the basepath configuration of the site.
The basepath can be considered the root of the web application when it’s not running at the root of the domain. During development ours was ‘/web-wpy-microsite’ (so our homepage was http://localhost:3000/web-wpy-microsite). Unfortunately, NextJS didn’t give detailed documentation on how this should be achieved – surprising, as it’s a basic web development requirement.
NextJS does, however, offer an ‘experimental’ basepath configuration option, which we passed through to the <Link> component (extended as a custom <WPYLink>), as shown in the code below.
image(slug: string, query: string = '') {
  if (query) {
    return {
      href: `/gallery/[imagename]${query}&imagename=${slug}`,
      as: `/gallery/${slug}${query}`
    };
  } else {
    return {
      href: `/gallery/[imagename]?imagename=${slug}`,
      as: `/gallery/${slug}`
    };
  }
},
Dynamic image-page route mapping
<Link
  href={to.href}
  as={`${base}${to.as}`}
  passHref={false}
  prefetch={prefetch || false}
>
  <a
    {...props}
    href={`${base}${to.as}`}
    ref={ref}
  />
</Link>
);
Custom Link component output with basepath (base) inserted into ‘as’ (the client-facing version of the link)
The story doesn’t end there, though. NextJS released an update (the same one that fixed our imageGrid CSS issue) changing the behaviour of the still-experimental basepath configuration and breaking the site navigation (that’s why ‘experimental’ carries a warning).
With a bit of refactoring I managed to fix both local and staging environments. Doing so revealed that, with a few changes, the experimental option wasn’t actually needed for our production environment.
The Nginx server configuration set up by our platforms team handled everything cleanly between the internal NextJS app and the outside world, even though we hadn’t actually sat down together to plan that specific behaviour (we were all learning).
Our backend and platforms teams did a great job, providing a staging server environment and continuous integration to get us up to development speed quickly. They also provided the project with a live but secured ‘alpha’ site, with which the UX developers could perform user testing and validation, and which also allowed us to sanity check the live application environment with our firewalls and CDN.
It’s a real sign that modern front-end application design needs to sit alongside platforms and back-end to develop common practices and oversight – who said digital transformation was easy?
That’s it for this post; the next will discuss our content-services integration with the main website’s CMS (Adobe AEM), and what that’s taught us on the road to a more modern, services-led web strategy.