Making metadata into meaning: Digital storytelling with IIIF

Tristan Roddis, Cogapp, UK

Abstract

This paper shows how to blend cutting-edge image technology with creative curatorship to deliver engaging digital stories using a COPE (Create Once, Publish Everywhere) content strategy. An increasing number of cultural heritage organizations are adopting the API standards provided by the International Image Interoperability Framework (IIIF) to disseminate their images and associated metadata. However, the focus so far has largely been on the technical and data challenges that this represents, with very little attention paid to how these systems can be leveraged to provide enhanced experiences for users online. In this paper, we look at ways that IIIF APIs can be radically repurposed to go beyond the simple descriptive representations of an image for which they are largely used by museums (e.g. catalog metadata, transcribed document text). Instead, we explore the idea of using them to present a continuous narrative (i.e. story) focussing on various regions within images. We will take stories modelled in IIIF, and present them in a variety of exciting digital imaginings, using both established and bleeding-edge browser technology. These include the following:

  • Linear Web page (“blog” style)
  • Social media feed (deliberately delayed information)
  • Slow looking (deliberately removing text)
  • Interactive image (previous/next, plus freeform exploration)
  • Automated image (using text-to-speech)
  • Human storyteller (back to basics, with a digital twist)

We will demonstrate the range of potential uses with a variety of stories taken from different organizations, including the National Gallery, London, the Endangered Archives Programme, the National Portrait Gallery, London, and even a poem commissioned especially for Museums and the Web. From this session, participants will be inspired to think of novel and innovative ways of telling stories using their collection images, as well as appreciating the potential of IIIF to make delivering these experiences that much easier.

Keywords: IIIF, story, collection

Motivation

At Cogapp we have been using the International Image Interoperability Framework (IIIF) for several years, and during this time, we have seen an increasing number of cultural heritage organizations adopt the IIIF standards to disseminate their images and associated metadata.

However, the focus so far has largely been on the technical and data challenges that this represents, with very little attention paid to how these systems can be leveraged to provide enhanced experiences for users online.

In response to this, we set ourselves the challenge of reimagining how these IIIF standards can be radically repurposed to go beyond the simple descriptive representations of an image for which they are largely used. Instead, we look at several different ways in which the exact same metadata can be used to present collection images, and their associated textual content, as digital stories experienced via the Web browser.

This work builds on a previous publication (White, 2017), and is accompanied by a demonstration site with interactive examples of most of the experiments detailed below (http://storiiies.cogapp.com/).

Background

IIIF

The International Image Interoperability Framework (http://iiif.io/) defines a series of Application Programming Interfaces (APIs) for the presentation of images and their associated metadata online. In the experiments we discuss below, we made use of two key APIs: the Image API (Appleby et al., 2017a) and the Presentation API (Appleby et al., 2017b).

The IIIF Image API provides raw image data in Web-friendly formats, with the ability to easily crop and zoom (and combine the two to produce the tiles required for zoomable images). This forms the underpinnings of all of the different image display mechanisms detailed below.
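
By way of illustration, the Image API encodes the requested region, size, rotation, quality, and format as URL path segments, so a cropped detail at a given display width is a single URL. The following minimal sketch builds such a URL; the endpoint shown is a hypothetical placeholder.

    // Build a IIIF Image API (version 2) URL for a cropped region of an image,
    // scaled to a given display width. The URL pattern is:
    //   {base}/{region}/{size}/{rotation}/{quality}.{format}
    function iiifCropUrl(baseId, x, y, w, h, displayWidth) {
      return baseId + '/' + [x, y, w, h].join(',') + '/' + displayWidth + ',/0/default.jpg';
    }

    // Hypothetical endpoint, for illustration only:
    iiifCropUrl('https://example.org/iiif/image1', 1000, 800, 600, 400, 512);
    // => 'https://example.org/iiif/image1/1000,800,600,400/512,/0/default.jpg'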

The IIIF Presentation API allows arbitrary metadata to be associated with one or more images, in a JSON-format document known as a IIIF manifest. It also allows metadata to be associated with specific regions of an image, via the mechanism of Annotation Lists (Appleby et al., 2017c).
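
As a sketch of what this looks like in practice, a single annotation in a version 2 annotation list pairs a fragment of text with a rectangular region of a canvas, addressed with an "#xywh=" fragment selector. The URIs and text below are placeholders:

    // Sketch of a single IIIF Presentation 2.x annotation (placeholder values).
    const annotation = {
      '@type': 'oa:Annotation',
      'motivation': 'oa:commenting',
      'resource': {
        '@type': 'dctypes:Text',
        'format': 'text/plain',
        'chars': 'A fragment of the story text goes here.'
      },
      // The target: a rectangular region of the canvas, as x,y,width,height pixels.
      'on': 'https://example.org/iiif/canvas/p1#xywh=1000,800,600,400'
    };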

To date, the main use of the Presentation API has been to associate images and text by their real-world characteristics; for example, a series of images in a IIIF manifest that correspond to scanned pages from a book, or another manifest that lists normal and X-ray photographs of a painting. Equally, annotation lists are often used to simply repeat information from the source image in textual format, such as OCR transcriptions of each word of a printed page.

Figure 1: presentation manifests usually encode real-world characteristics, such as these annotations that represent the transcription of handwritten content (Crane, 2016)

In the experiments described in this paper, we tried to think differently about IIIF manifests, and to consider them as a way of relating any sort of textual content to one or more images or regions of an image.

Stories

For our purposes, we consider a story to be a linear narrative: something with a beginning, a middle, and an end. By taking the content from a IIIF manifest in a strict sequence, we can therefore create a story, illustrated by one or more images, and define it once in a known format. This data can then be repurposed: for each story considered, we use exactly the same IIIF manifest, but choose to express the information in very different ways. In effect, we are leveraging the IIIF protocols and data formats to provide enhanced experiences for users online.
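
A minimal sketch of this "define once" step, assuming a version 2 annotation list at a placeholder URL whose annotations target the canvas with simple "#xywh=" string targets, fetches the list and reduces it to an ordered array of story steps, each pairing a fragment of text with the image region it describes:

    // Fetch an annotation list and convert it into ordered story steps.
    // Field names follow the IIIF Presentation 2.x annotation list structure.
    async function loadStory(annotationListUrl) {
      const response = await fetch(annotationListUrl);
      const annoList = await response.json();
      return annoList.resources.map(function (anno) {
        // 'on' looks like: https://example.org/iiif/canvas/p1#xywh=x,y,w,h
        const [x, y, w, h] = anno.on.split('#xywh=')[1].split(',').map(Number);
        return { text: anno.resource.chars, region: { x: x, y: y, w: w, h: h } };
      });
    }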

Figure 2: all the annotations used for the sample story, as shown in the Mirador viewer (White, 2017)

Results

In this section we discuss the different experiments we conducted using a single manifest and annotation list as described above. Most of these versions can be seen online on the demonstrator site (http://storiiies.cogapp.com/).

Experiment one: linear Web page

For the first and most basic example, we illustrate the result of rendering the combined annotation list and manifest into a basic HTML Web page to tell a story. The text that describes each image, region, or group of image regions is simply presented next to the appropriately cropped image region.
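
A minimal sketch of this rendering, reusing the loadStory() and iiifCropUrl() helpers sketched earlier, might look like the following:

    // Render each story step as a cropped image followed by its text.
    async function renderLinearPage(annotationListUrl, imageBase, container) {
      const steps = await loadStory(annotationListUrl);
      steps.forEach(function (step) {
        const section = document.createElement('section');
        const img = document.createElement('img');
        img.src = iiifCropUrl(imageBase, step.region.x, step.region.y,
                              step.region.w, step.region.h, 800);
        const caption = document.createElement('p');
        caption.textContent = step.text;
        section.appendChild(img);
        section.appendChild(caption);
        container.appendChild(section);
      });
    }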

Figure 3: part of the linear page view. Video at https://vimeo.com/223268672

This version of the story is not very exciting, but is deliberately clear and familiar: any user of the Web will understand an interface where you just have to scroll to read the illustrated narrative.

Experiment two: delayed gratification with social media

Given that each story can be broken down into small fragments of text, each with one or more cropped images, the story could be drip-fed rather than presented all at once on a Web page: a bot could post each fragment at scheduled intervals to social media platforms such as Twitter or Instagram.
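
As an illustrative sketch only (the posting function below is a hypothetical stand-in, since each platform has its own API and authentication), such a bot could walk through the story steps on a timer:

    // Drip-feed story steps at a fixed interval.
    // postToSocialMedia() is a hypothetical stand-in for a real platform API call.
    function startStoryBot(steps, imageBase, intervalMs, postToSocialMedia) {
      let index = 0;
      const timer = setInterval(function () {
        const step = steps[index];
        postToSocialMedia({
          text: step.text,
          imageUrl: iiifCropUrl(imageBase, step.region.x, step.region.y,
                                step.region.w, step.region.h, 1024)
        });
        index += 1;
        if (index >= steps.length) clearInterval(timer); // story complete
      }, intervalMs);
    }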

Figure 4: mockup of a Twitter-bot version

Although this presents exactly the same information as the linear Web page shown above, the reader’s experience will feel very different due to the deliberately slow delivery.

Experiment three: interactive viewer

Giving the user the ability to freely explore IIIF images by zooming and panning is trivial using JavaScript viewers such as OpenSeadragon (https://openseadragon.github.io/) and Leaflet (http://leafletjs.com/). In this experiment, we enhance an OpenSeadragon viewer with a text overlay and previous/next buttons to move through the story. As each button is pressed, the image automatically moves to the correct region and zoom level, and the text updates accordingly.
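
The core of this behaviour is converting the annotation’s pixel region into OpenSeadragon viewport coordinates and animating to it. A sketch, assuming an existing viewer configured with the IIIF image source and a caption element for the text overlay:

    // Animate an OpenSeadragon viewer to a story step's region and show its text.
    // Assumes 'viewer' is an OpenSeadragon.Viewer and 'caption' is a DOM element.
    function showStep(viewer, caption, step) {
      const imageRect = new OpenSeadragon.Rect(
        step.region.x, step.region.y, step.region.w, step.region.h);
      // Convert full-image pixel coordinates to viewport coordinates, then animate.
      viewer.viewport.fitBounds(viewer.viewport.imageToViewportRectangle(imageRect));
      caption.textContent = step.text;
    }

Wiring the previous/next buttons then amounts to keeping an index into the story steps and calling showStep() with the step at that index.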

Figure 5: interactive viewer. Video at https://vimeo.com/223269469

This interface provides a good balance between a directed experience and allowing the user to discover more on their own; the story navigation moves the viewer to focus on a particular feature of the image, but the user is free to take over control and explore other areas of the image if they wish.

Experiment four: slow looking

Slow looking is the practice of taking time to explore an artwork or image in a particular way in order to gain a deeper understanding of the piece (Tishman, 2017; Clothier, 2018). While this is traditionally an in-gallery practice, it can be approximated online by zooming into a high-resolution image until its detail can be observed, then panning slowly around to take in new areas of the painting. Cogapp has previously implemented this feature for the Clyfford Still Museum’s online collection (https://collection.clyffordstillmuseum.org), and the motivation for it is discussed elsewhere (Mallory, 2017). In this experiment, we use the image regions defined in the annotation list as targets for the viewer to pan to, before displaying text and then moving on to the next region.
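
One way to approximate the slow pan in OpenSeadragon is to lengthen the viewport’s animation springs before fitting each region in turn; a sketch, reusing the showStep() helper above:

    // Slow down the viewer's pan/zoom animations, then drift between regions.
    function startSlowLooking(viewer, caption, steps, dwellMs) {
      // Lengthen the animation springs so each move takes several seconds.
      viewer.viewport.centerSpringX.animationTime = 8;
      viewer.viewport.centerSpringY.animationTime = 8;
      viewer.viewport.zoomSpring.animationTime = 8;
      let index = 0;
      // dwellMs should comfortably exceed the animation time above.
      setInterval(function () {
        showStep(viewer, caption, steps[index]);
        index = (index + 1) % steps.length; // loop back to the beginning
      }, dwellMs);
    }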

Figure 6: slow looking shows a close-up crop. Video at https://vimeo.com/223278332

In this scenario, we deliberately take control away from the user to create a passive, immersive, and reflective experience: the user must simply watch as the story of the painting slowly unfolds (ideally in full-screen mode to remove distractions). It would also be possible to make this a purely image-led experience by hiding the text panels altogether; in that case, the annotations merely provide a list of interesting regions for the viewer to pan towards slowly.

Experiment five: passive story with speech synthesis

Extending the idea of removing control from the user, this next experiment takes the format of the interactive viewer, but removes the previous/next buttons; instead, the image advances automatically once the browser’s text-to-speech capability has finished reading out the associated text.
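
The browser’s Web Speech API makes this read-then-advance loop straightforward; a sketch, again assuming the showStep() helper:

    // Read each step aloud, advancing to the next region when speech finishes.
    function narrateStory(viewer, caption, steps, index) {
      if (index >= steps.length) return; // story complete
      showStep(viewer, caption, steps[index]);
      const utterance = new SpeechSynthesisUtterance(steps[index].text);
      utterance.onend = function () {
        narrateStory(viewer, caption, steps, index + 1);
      };
      window.speechSynthesis.speak(utterance);
    }

    narrateStory(viewer, caption, steps, 0); // start from the first step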

Figure 7: speech synthesis version removes control. Video at https://vimeo.com/223269074

This version forces the user to experience the narrative only in the way it is intended (like listening to any other story read aloud). However, the synthetic voice is quite jarring, and will mispronounce specialist words and abbreviations.

Experiment six: human storyteller

For this final experiment, we tried to overcome the limitations of the artificial storyteller mentioned above by putting a real human back into the mix, and returning to the age-old pattern of having one storyteller recount to multiple listeners.

To do this we created two interfaces: The first, for the narrator only, works exactly like the interactive viewer (i.e. text prompts, previous/next buttons, and the ability to zoom and pan freely). The second, for everyone who wants to listen to the story, has no controls at all; it simply shows whatever region of the image is currently visible on the narrator’s screen, without the text. The narrator thus has complete control of what the listeners see: she can choose to follow the text or ad lib; to speed the story up or slow it down; or to digress and look at other parts of the image if she so wishes.
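
One way to keep every listener’s screen in step with the narrator is to broadcast the narrator’s viewport bounds over a WebSocket connection (the relay server and its URL are assumptions for illustration); a sketch of both ends:

    // Narrator's page: broadcast the viewport bounds whenever the view changes.
    const socket = new WebSocket('wss://example.org/story'); // assumed relay server
    viewer.addHandler('viewport-change', function () {
      const b = viewer.viewport.getBounds();
      socket.send(JSON.stringify({ x: b.x, y: b.y, width: b.width, height: b.height }));
    });

    // Listener's page: apply the received bounds to a viewer with no controls.
    socket.onmessage = function (event) {
      const b = JSON.parse(event.data);
      listenerViewer.viewport.fitBounds(
        new OpenSeadragon.Rect(b.x, b.y, b.width, b.height),
        true); // apply immediately: the narrator's animation supplies the motion
    };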

Figure 8: human storyteller: narrator interface with a range of audience devices. Video at https://vimeo.com/223270820

This version forces users to listen along, and to only see what the narrator wants them to see: a passive, receptive experience. We can imagine this being used in the same physical space (e.g. a curator talking to children with individual tablet devices) or remotely (e.g. a radio program that is accompanied by listeners following along on their own computers, tablets or phones). Having a human to interpret and enhance the base content encapsulated by the IIIF manifest feels like an interesting avenue to explore.

Other sources of content

All of the examples above can be applied to many different types of content. In all of the screenshots and videos above, we have used a single example, based on a portrait by Hans Holbein the Younger. However, when you consider the wider idea of a story illustrated by one or more images, there are many other possibilities for using museum content in this way. For example, we hope to commission the following in time for Museums and the Web 2018, and to display these on the demo site (http://storiiies.cogapp.com/):

  • Highlights from documents about slavery from the Endangered Archives Programme
  • Terrifying features of a magnified parasite from the Booth Museum, Brighton
  • Telling the stories of the characters in a nineteenth-century painting from the National Portrait Gallery, London
  • A poem commissioned especially for Museums and the Web, using photography from Wikimedia

Conclusions

All the examples discussed above use the same manifests and annotation lists; the only difference is how we present the data to engage the audience. This Create Once, Publish Everywhere (COPE) content strategy means that authors can produce a radically varied range of outputs from a single source of content.

We hope that by presenting these experiments we have inspired you to think of novel and innovative ways of telling stories using your collection images, as well as to appreciate the potential of IIIF to make delivering these experiences that much easier.

References

Appleby, M., T. Crane, R. Sanderson, J. Stroop, & S. Warner. (2017a). IIIF Image API specification 2.1. Consulted January 25, 2018. Available http://iiif.io/api/image/2.1/

Appleby, M., T. Crane, R. Sanderson, J. Stroop, & S. Warner. (2017b). IIIF Presentation API specification 2.1. Consulted January 25, 2018. Available http://iiif.io/api/presentation/2.1

Appleby, M., T. Crane, R. Sanderson, J. Stroop, & S. Warner. (2017c). IIIF Annotation Lists. Consulted January 25, 2018. Available http://iiif.io/api/presentation/2.1/#annotation-list

Clothier, P. (2018). “The Case for Spending an Hour with One Work of Art.” Last updated January 8, 2017. Consulted January 25, 2018. Available https://www.artsy.net/article/artsy-editorial-case-spending-hour-one-work-art

Crane, T. (2016). “IIIF Search API 1.0: An introduction to the International Image Interoperability Framework, Web Annotations, and how we might search them.” Consulted January 25, 2018. Available https://www.dropbox.com/s/ogw9dj259uwvnek/Search-API%20copy%202.pptx?dl=0

Mallory, G. (2017). “Deeper, more meaningful art-experiences with digital.” Last updated September 20, 2017. Consulted January 25, 2018. Available https://blog.cogapp.com/deeper-more-meaningful-art-experiences-with-digital-8afd7bdeb35b

Tishman, S. (2017). Slow Looking: The Art and Practice of Learning Through Observation. London: Routledge.

White, J. (2017). “Innovatively repurposing content across multiple platforms – Storytelling with IIIF.” Last updated July 4, 2017. Consulted January 25, 2018. Available https://blog.cogapp.com/iiif-for-storytelling-1e36ce277f48


Cite as:
Roddis, Tristan. "Making metadata into meaning: Digital storytelling with IIIF." MW18: MW 2018. Published January 31, 2018.
https://mw18.mwconf.org/paper/making-metadata-into-meaning-digital-storytelling-with-iiif/