Visualising the global WPY image collection – A 3D web experiment

http://static.nhm.ac.uk/wpy-globe-experiment/globe?tags= (NHM VPN required)

During lockdown, I worked on a personal side project to progress my knowledge of web-based 3D interactive visualisations. I’d always been inspired by the global reach of the WPY images and the feelings they evoke when you consider each image’s precious, far-flung environment, and I wanted to create something that helped capture that. I’d seen several impressive 3D globe and collection visualisations (e.g. https://artsexperiments.withgoogle.com/tsnemap/) and, knowing enough about the web and 3D to appreciate the specialist approaches required, wanted to investigate how they achieved acceptable performance (load times and frame rate) as online interactive collection viewers running on a range of devices.

I made my own mental MVP for the user experience as I envisioned it. I wasn’t working with a designer, but I knew enough about the flexibility one would need to introduce my own design and functionality checkpoints.

I had good starting points: I was working on the WPY website re-platform at the same time (there’s a series of technical posts about it on this blog) and knew I could use the new WPY API, developed by Matt Cooper in TS, to set up the experiment to scale with a growing data set. I also found a great open source seed project for my visualisation idea (https://github.com/chrisrzhou/react-globe). This had the mix of interactive functionality and technologies I wanted, being built with ReactJS for easy integration with the WPY website and ThreeJS for the 3D content.


Initial prototyping

It was relatively quick to get something up and running using the react-globe library and integrating it with the WPY site. In the image below you can see the progression from the react-globe project’s default, basic ‘markers’, to my 2D images, and then to the 3D images that make better use of the 3D scene space, a solution I knew was needed to present the 11 years of online WPY winners (~1100 images) in a useful way.

One large problem I observed once I’d reached version 3 above was positioning the images correctly above their geo-locations (longitude and latitude are available for each image via the API) while still keeping them spaced far enough apart to be interactable. You can see the problem in the close stack of image markers below: using random ‘heights’ wouldn’t work, as they’re simply too close to each other to be usefully viewed:

What I needed to do was group images based on their geo-location and programmatically apply enough 3D ‘height’ between them to keep each one viewable and interactable. There was a fair bit of challenging (and satisfying) geo and 3D data manipulation required to achieve a solution that spaces the images intelligently:
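As a rough sketch of the grouping idea (this isn’t the project’s actual code, and the cell size and spacing values here are made up):

```typescript
// Bucket markers into coarse geo 'cells' and stack each crowded cell's
// markers at increasing altitudes so their images don't overlap.
interface Marker {
  id: string;
  lat: number;
  lng: number;
  altitude?: number; // offset above the globe's surface
}

function spaceMarkers(
  markers: Marker[],
  cellSizeDegrees = 2,
  baseAltitude = 0.05,
  step = 0.03
): Marker[] {
  const groups = new Map<string, Marker[]>();

  for (const marker of markers) {
    // Markers within the same ~2 degree cell are treated as one 'stack'
    const key = `${Math.round(marker.lat / cellSizeDegrees)}:${Math.round(marker.lng / cellSizeDegrees)}`;
    groups.set(key, [...(groups.get(key) ?? []), marker]);
  }

  for (const group of groups.values()) {
    // Push each member of a stack a little further out from the globe
    group.forEach((marker, i) => {
      marker.altitude = baseAltitude + i * step;
    });
  }

  return markers;
}
```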

One thing that helped me was being able to visualise this grouping logic by colour:

In most web applications the JavaScript code providing site functionality runs on a single processing thread. This thread needs to handle everything from loading Google Analytics scripts to displaying images, and can therefore slow down page loading if overloaded. I used a Web Worker, which lets a developer create a new, separate JavaScript thread, for the positioning logic described above, speeding up the overall loading time by 25%. It’s a useful technique when dealing with process-heavy data manipulation in a client browser that can be separated from the main website code.
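A stripped-down sketch of that split, building on the spacing function above (the worker file name, rawMarkersFromApi and setMarkers are illustrative stand-ins rather than the project’s actual code):

```typescript
// positioning.worker.ts (hypothetical file name), runs off the main thread
import { spaceMarkers, Marker } from './spacing'; // the grouping sketch above

onmessage = (event: MessageEvent<Marker[]>) => {
  // Heavy grouping/spacing work happens here without blocking the UI
  postMessage(spaceMarkers(event.data));
};

// Main thread, e.g. wherever the globe component receives the WPY API data
const worker = new Worker(new URL('./positioning.worker.ts', import.meta.url), { type: 'module' });
worker.onmessage = (event: MessageEvent<Marker[]>) => {
  setMarkers(event.data); // hypothetical state setter that feeds the globe
};
worker.postMessage(rawMarkersFromApi); // markers fetched from the WPY API
```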


3D performance optimisation

With the positioning and basic concept set up, I could move on to my main focus for the project: implementing several 3D performance optimisation techniques in the code to provide a smooth and aesthetically pleasing 3D experience across a range of devices. These targeted either reducing the overall number of draw calls (3D graphics tasks) or reducing graphics memory consumption.

Draw calls correspond to the number of unique shader programs (functions that run on the GPU) rendering the 3D scene. The performance cost of a draw call comes from the CPU having to recalculate and send data to the GPU, with the GPU in effect having to pause to retrieve this data during every frame render. Each unique instance of 3D mesh and material data generates additional shader programs unless countered.

GPU memory consumption is the amount of graphics memory the 3D scene is using. Systems might have a dedicated GPU or a shared memory resource, but they all have limits (whether hardware or software bound), and as those limits are approached the GPU has to perform expensive texture (memory) swapping during the render process.

It’s common practice to work towards reducing both of the above, and my starting point when loading the 1100 image markers I wanted to display was 2200 draw calls.

Why 2200 draw calls and not 1100? Each marker is made up of 2 individual meshes, each with its own material (the cone part and the actual image).

For memory consumption, another interesting 3D fact: GPU memory usage for textures (in this case our WPY images) has little to do with the image file size (our WPY JPEGs are compressed to be between 100–300kb). Instead, it is entirely dependent on the number of pixels in the decompressed image loaded on the GPU, with GPU memory allocation approximately:

width * height * 4 * 1.33 bytes of memory

So each of our ‘medium quality’ 640px JPG images takes up around 1.4MB on the GPU, with 1100 of them equating to over 1.5GB! Not great for a user’s browser.
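As a quick back-of-the-envelope check of that figure (the roughly 640 x 400 pixel size is my assumption for a typical landscape-format thumbnail, and the 1.33 factor covers the mipmap chain):

```typescript
// Approximate GPU footprint of an uncompressed RGBA texture plus its mipmaps
const bytesOnGpu = (width: number, height: number) => width * height * 4 * 1.33;

const perImage = bytesOnGpu(640, 400);    // ~1.36 MB for one decompressed image
const wholeCollection = perImage * 1100;  // ~1.5 GB across the 1100 markers
```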

Basis texture format

Tackling the memory problem first, I read an article about a relatively new GPU texture format called Basis, an open standard for compressed 3D textures: https://medium.com/samsung-internet-dev/using-basis-textures-in-three-js-6eb7e104447d

Chrome’s task manager showing GPU and tab memory consumption

Unlike web image formats such as JPG or PNG, 3D textures keep their compression even when loaded onto a graphics device. The process of converting the existing small WPY JPGs to Basis required some trial and error; in particular, the images had to be resized from their existing non-uniform aspect ratios to power-of-two dimensions (a historic graphics requirement to allow for pixel rounding). Once this was done the JPGs could be batch converted into their Basis texture versions. The result was a huge ~50% saving in Chrome’s graphics memory (from 1.5GB to 700MB), making the experience more efficient on a range of devices as well as providing headroom for the displayed collection of images to grow.
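On the loading side, the swap in ThreeJS looked roughly like the sketch below (assuming a three.js build of that era that still ships BasisTextureLoader, which newer releases have replaced with KTX2Loader; the paths and file names are illustrative):

```typescript
import * as THREE from 'three';
import { BasisTextureLoader } from 'three/examples/jsm/loaders/BasisTextureLoader.js';

const renderer = new THREE.WebGLRenderer();
const material = new THREE.MeshBasicMaterial();

const basisLoader = new BasisTextureLoader();
basisLoader.setTranscoderPath('/basis/'); // where the Basis transcoder (.js/.wasm) is hosted
basisLoader.detectSupport(renderer);      // picks a compressed format the GPU understands

basisLoader.load('textures/wpy-image-0001.basis', (texture) => {
  material.map = texture;                 // stays compressed in GPU memory, unlike a JPG
  material.needsUpdate = true;
});
```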

Instancing and merging geometry

Turning next to the excessive number of draw calls, I used a technique called geometry instancing to turn all the individual lines of the WPY markers into clones of each other. This allows the graphics card to process all of them as a single entity, reducing their existing 1100 draw calls to just 1. The code refactor to achieve this required some complex 3D transforms to reposition each ‘clone’ back to its original position and orientation, along with a different way to render the white or gold colouring (which signifies whether an image is a WPY winner or commended), but the work resulted in a big improvement in frame rate.

The individual lines are turned into a single 3D ‘instanced’ object
Some ‘procedural art’ bloopers whilst working things out.
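A minimal sketch of the instancing idea with ThreeJS InstancedMesh (assuming a reasonably recent three.js release; markers, scene, the isWinner flag and the positionFor/orientationFor helpers are stand-ins for the project’s own data and maths):

```typescript
import * as THREE from 'three';

// One geometry and one material shared by every marker line: a single draw call
const lineGeometry = new THREE.CylinderGeometry(0.002, 0.002, 1, 6);
const lineMaterial = new THREE.MeshBasicMaterial();
const lines = new THREE.InstancedMesh(lineGeometry, lineMaterial, markers.length);

const matrix = new THREE.Matrix4();
const scale = new THREE.Vector3(1, 1, 1);
const white = new THREE.Color(0xffffff);
const gold = new THREE.Color(0xd4af37);

markers.forEach((marker, i) => {
  // Move each 'clone' from the origin back to its spot above the globe
  matrix.compose(positionFor(marker), orientationFor(marker), scale); // hypothetical helpers
  lines.setMatrixAt(i, matrix);
  // Per-instance colour replaces the separate white/gold materials
  lines.setColorAt(i, marker.isWinner ? gold : white);
});

lines.instanceMatrix.needsUpdate = true;
lines.instanceColor!.needsUpdate = true;
scene.add(lines);
```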

However, the images themselves still existed as 1100 unique draw calls. The solution for these was more complex, and was achieved with a mix of geometry merging and texture atlases.

Geometry merging is another technique to render multiple separate 3D objects (our remaining 1100 image meshes) in a single draw call. The idea is that the separate meshes are combined to act as a single unit for the GPU to process.
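As a sketch of the merging step (assuming a three.js version of that era where BufferGeometryUtils exposes mergeBufferGeometries, since renamed mergeGeometries; worldMatrixFor and imagesMaterial are stand-ins for the project’s own placement maths and material):

```typescript
import * as THREE from 'three';
import { BufferGeometryUtils } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

const imagePlanes = markers.map((marker) => {
  const plane = new THREE.PlaneGeometry(0.1, 0.07);
  // Bake each image's position/orientation into its vertices, because the
  // merged mesh only has a single transform of its own
  plane.applyMatrix4(worldMatrixFor(marker)); // hypothetical helper
  return plane;
});

// All the image planes become one BufferGeometry, i.e. one unit for the GPU
const mergedImages = new THREE.Mesh(
  BufferGeometryUtils.mergeBufferGeometries(imagePlanes),
  imagesMaterial
);
scene.add(mergedImages);
```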

However, I knew that merging the image mesh geometry alone wouldn’t be enough, as the separate materials required for each WPY image texture meant they would still produce individual draw calls. This is why a merging solution is typically used alongside texture atlases.

Texture atlas and accompanying tile data

Texture atlases are single textures that contain multiple textures as tiles, similar to a sprite sheet that might be used in 2D animation. It’s a common optimisation technique in 3D: an atlas holding several images can act as a single texture instance on the GPU.

The idea is that once the image objects had been merged into a single object, the texture atlases could act as a shared material, reused multiple times if positioned correctly through 3D code implementing UV mapping, the process of projecting a 2D image onto a 3D model’s surface.
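A sketch of what that looks like for a single image plane before merging (tileIndex and tilesPerRow would come from the atlas generation step; the layout here assumes a simple square grid of equal tiles):

```typescript
import * as THREE from 'three';

// Squash a plane's default 0..1 UVs into one tile of a square texture atlas
function mapUvsToAtlasTile(
  geometry: THREE.BufferGeometry,
  tileIndex: number,
  tilesPerRow: number
): void {
  const tileSize = 1 / tilesPerRow; // atlas UV space runs from 0 to 1
  const col = tileIndex % tilesPerRow;
  const row = Math.floor(tileIndex / tilesPerRow);
  const uv = geometry.attributes.uv as THREE.BufferAttribute;

  for (let i = 0; i < uv.count; i++) {
    uv.setXY(i, (col + uv.getX(i)) * tileSize, (row + uv.getY(i)) * tileSize);
  }
  uv.needsUpdate = true;
}
```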

This proved to be the most complex part of the project, as I was manipulating the 3D data more directly than I had done in earlier 3D web projects. I had some great help and advice from the ThreeJS developer community, particularly as my challenges became unique and more general online resources didn’t explain enough of the concepts involved.

Some fun bloopers from this work can be seen below: the first shows how the same texture atlas is mapped to appear as individual images, and the rest show the mistakes along the way.

Atlas tiling being ‘fine tuned’
Getting the final UV mapping process very wrong…

The results were worth it though: the final 1100 draw calls were reduced to just 17 (several texture atlases were required), which gave the expected huge increase in performance, with my relatively old smartphone running the 3D scene at a reliable 60 frames per second.


Final thoughts

One thing I came to realise during this experiment, and something I’ve noticed many times when developing digital interactives for the museum over the years, is that creative development is all about asset preparation. If the right formats of data and assets are made available, interactive development can become relatively streamlined. The key is understanding the practical potential of museum digital assets early in their lifecycle so that they can provide the best options for new types of immersive digital content.

This post is also a poignant one for me, as I’m now finishing up at the museum after 15 years of working across online and gallery digital products. It’s always been a privilege to work on WPY in particular, so this project gave me a lot of personal satisfaction. It’s a shame we weren’t able to get it live before I head off, but I hope it provides some inspiration for all the exciting and important digital content the museum will no doubt be producing in the future.

You can browse an internal version of the experiment here: http://static.nhm.ac.uk/wpy-globe-experiment/globe?tags= (NHM VPN required and a modern browser recommended)
