Building a smart museum: Tackling in-gallery challenges with digital experience at scale

Brian Dawson, Ingenium -- Canada's Museums of Science and Innovation, Canada, Darran Edmundson, EDM Studio Inc, Canada


Digital technology is becoming increasingly pervasive in gallery spaces, not just for science museums/centers, but for museums and galleries of all types. Museums are thus experiencing a range of new, common challenges, requiring approaches that can scale, to ensure long-term sustainability. Museum applications can draw on robust Web standards and platforms for presentation, accessibility, data, etc. However, museum installations frequently go beyond a traditional Web page oriented paradigm. In-gallery applications of digital technology can vary widely, and range far beyond the computer/touch screen. They may include digital/mechanical interactives, experiential and immersive environments, and "headless" (non-screen) or other non-traditional interfaces. The Canada Science and Technology Museum reopened in November of 2017, after a complete reconstruction and reconceptualization of the exhibition experience. This presented an opportunity to tackle a number of contemporary museum challenges with digital technologies head-on, drawing on lessons learned from many institutions. This paper will outline many of these contemporary challenges for in-gallery digital interactives, exploring a number of facets, including hardware and software management; content management; analytics; deficiency/issue management; accessibility standards; design standards; systems and data integration; and certainly not least, people. For each of these facets, this paper will draw on experiences from the renewed Canada Science and Technology Museum to illustrate how these challenges can be tackled, and will also contrast these with alternate approaches taken at other institutions. It highlights how an enterprise IoT approach can help tackle the challenge of managing digital experience at scale.

Keywords: Smart Museum, Internet of Things, Digital Interactives, Scalability, Sustainability


Digital technology has become increasingly pervasive in gallery spaces, not just for science museums/centers, but for museums and galleries of all types. Museums are thus experiencing a range of new, common challenges, requiring approaches that can scale, to ensure long-term sustainability.

Museum applications often draw on platforms for content and digital asset management, as well as robust Web standards and platforms for presentation, accessibility, data, etc. However, museum installations frequently go beyond a traditional Web paradigm of presentation of information. In-gallery applications of digital technology can vary widely, and range far beyond the computer/touch screen. They may include the following:

  • digital/mechanical interactives;
  • experiential and immersive environments;
  • “headless” (non-screen) and other non-traditional interfaces;
  • personal/wearable and smartphone/app-based technology.

The Canada Science and Technology Museum reopened in November of 2017 after a complete reconstruction of the museum and a reconceptualization of the exhibition experience. This presented an opportunity to tackle a number of contemporary museum challenges with digital technologies head-on, drawing on lessons learned from many institutions.

This paper outlines many of these contemporary challenges for managing in-gallery digital interactives at scale, exploring a number of facets that will be faced by many institutions. For each of these facets, this paper draws on experiences from the fully renewed Canada Science and Technology Museum to illustrate how these challenges can be tackled. The paper will also contrast these with alternate approaches taken at other institutions. While there has been some excellent recent work in documenting broader digital strategy and planning for museums (for example, Hossaini, 2017), one distinguishing aspect of this case study is the Enterprise Internet of Things (IoT) approach, outlined in more detail below.

So many “things”

Today’s museum floor boasts an ever-expanding list of network-connected devices. This includes familiar items like Mac minis and PCs, projectors, and IP cameras. However, once-isolated embedded controllers (such as Arduinos and Raspberry Pis) are also increasingly being connected to exhibition networks.

Just consider a few examples of the rapid evolution of devices in the museum:

  • Museum screens have long been controlled via serial connection (RS-232). More recently, screens either have an actual Ethernet connection or support reliable Display Power Management Signaling (DPMS), the latter effectively making the monitor a slave device to a host media source.
  • While overall electrical systems remain rightfully under the purview of proper Building Management Systems, many exhibitions now include fine-grained power control using IP-based Power Distribution Units (PDUs). This creates an opportunity for managing devices that are not “smart.” As these power bars typically offer socket-level on/off, the range of (even rudimentarily) controllable downstream devices can quickly become quite large.
  • Sensors and actuators are another class of device that is becoming increasingly prevalent on exhibition networks, especially in the context of science centers/museums. Manufacturers are realizing that the market for such devices is greatly increased by making them standalone consumers or producers of data via a standardized, published API.
  • There is also a range of non-network connected peripherals that are connected to a computer or smart device. These can be considered “pseudo” devices, that is, devices where an agent running on the host could provide a logical connection. A Microsoft Kinect sensor is a good example.
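To make the PDU example concrete, the sketch below shows socket-level on/off control against a hypothetical REST-style endpoint. Real units vary widely (HTTP, SNMP, even telnet), and the host name, URL path, and JSON payload here are all our own illustrative assumptions, not any vendor's actual API:

```python
import json
import urllib.request

# Hypothetical PDU endpoint; real power bars expose very different APIs.
PDU_HOST = "http://pdu-gallery1.example.museum"

def socket_command(socket_id: int, on: bool) -> tuple[str, bytes]:
    """Build the URL and JSON body for a socket on/off command."""
    url = f"{PDU_HOST}/api/sockets/{socket_id}"
    body = json.dumps({"state": "on" if on else "off"}).encode()
    return url, body

def set_socket(socket_id: int, on: bool) -> bool:
    """Send the command; returns True if the PDU acknowledged it."""
    url, body = socket_command(socket_id, on)
    req = urllib.request.Request(url, data=body, method="POST",
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status == 200
```

Even a rudimentary wrapper like this is enough to bring an otherwise "dumb" downstream device under centralized scheduling and control.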

All these devices present a host of logistical challenges for monitoring and management. Yet device connectivity and “intelligence” open up an array of opportunities: monitoring, control, interactivity, and even personalization (Porter, 2014).

The museum as a “Subnet of Things”

Smart homes, smart buildings, smart cities… The Internet of Things is not just a technological revolution, it represents a historic shift in how we manage and interact with an innumerable array of objects and environments in our everyday life.

This revolution is also transforming enterprises, and creates opportunities for museums to manage the increasing complexity of their gallery spaces. To manage the hundreds or thousands of devices within a museum, organizations are developing solutions that can be considered Enterprise IoT applications—essentially “Intranets of Things” or “Subnets of Things” (Salma, 2016).

Porter and Heppelmann (2014) introduced a model for “smart, connected products,” which provides a useful framework for the capabilities of smart, connected objects. The model is built on a progression of capabilities, with each building upon the previous one. These capabilities are grouped into four areas: monitoring, control, optimization, and autonomy. This model is outlined in Table 1.

1. Monitoring: Sensors and external data sources enable comprehensive monitoring.
   Museum examples: Is the hardware system running? Is the interactive application running? Capturing analytics.

2. Control: Software embedded in the device enables control of object functions, and personalization of user experiences.
   Museum examples: On/off control; restarting the application; rebooting the system; an interactive responding to user behavior.

3. Optimization: Algorithms optimize device operation to enhance performance, and allow predictive diagnostics, service, and repair.
   Museum examples: Continuous integration of exhibit software; automatic restart of a hung application; adjusting interactive volume in response to in-gallery noise levels.

4. Autonomy: Autonomous device operation, self-coordination with other devices, and autonomous personalization.
   Museum examples: Interactives independently adapting to visitor preferences; an autonomous robot or AI interacting with visitors on the museum floor.

Table 1: Maturity model for smart, connected objects, with some potential museum examples.

These levels are as follows (adapted from Porter, 2014):

  • Monitoring: Comprehensive monitoring of an object’s condition, operation, and external environment through sensors and external data sources.
  • Control: Can be controlled through remote commands or algorithms that are built into the device or reside externally.
  • Optimization: Rich data from smart object, coupled with capacity to control, allows for optimization of performance, such as efficiency of operation, or remote updating of software.
  • Autonomy: Monitoring, control, and optimization capabilities combine to allow smart, connected objects to achieve a previously unattainable level of autonomy. This could include learning about their environment, self-diagnosing their own service needs, and adapting to users’ preferences.

As we will see in the case study below, Ingenium’s exhibition management system provides robust capabilities for monitoring, control, and even some significant optimization capabilities, and a foundation for future autonomous personalization.

Following the case study description, we will outline some of the key considerations with this project. These are organized so as to be potentially helpful for other institutions.

CSTM case study: software infrastructure architecture

We will now introduce the CSTM software infrastructure as a case study. Note that many of the documents associated with this project are available on Ingenium’s Open Heritage document access portal; this should be useful for anyone wishing to dig deeper into the project.

The sudden closure of the Canada Science and Technology Museum in late 2014 created the opportunity for a complete re-think of the museum experience. This renewal of the museum took place in the context of a broader digital strategy—one firmly embedded within the organization’s overall corporate strategy, and under the banner of “Digital Citizenship.” The unfolding digital strategy placed an emphasis on relevance by engaging in the new and emerging digital media forms of today (including games, virtual reality, augmented reality, etc.). The re-build of the museum created parallel thrusts for a re-invention of the museum experience and outreach/engagement via digital channels.

The CSTM Interpretive Concept Master Plan mapped out the overall goals of the new CSTM experience. Digital was recognized as a critical aspect with the emphasis placed on the actual visitor-facing experience. (CSTM, 2015a)

The project had a budget of $80 Million (CAD) on a timeframe of two and a half years, both for the reconstruction of the museum building and the complete redevelopment of the exhibition experience. This was significantly less time and budget than most museum-building projects of this scale. These constraints forced creativity and careful management of scope throughout the entire project.

While the museum’s new software infrastructure was designed and built to support the digital aspects of the experience in the new Canada Science and Technology Museum, it was built to be leveraged across all of Ingenium’s museums (see CSTM, 2015b).

Let’s outline the system up front. Ingenium’s software infrastructure is composed of several major components.

1. Exhibition Management System (EMS)

The EMS comprises a base layer of services that facilitates exhibit monitoring, diagnostics, computer control, and code/application deployment services. Key functions include the following:

  • maintaining a canonical configuration database that fully describes the exhibition’s various hardware, software and networking configurations;
  • building applications from source, and deploying applications and associated media resources to the proper exhibit;
  • monitoring individual devices (e.g., computer CPU and memory loads, uptime, peripherals; projector bulb hours; PDU socket status);
  • providing control over individual devices (e.g., computer reboot, application start/stop/reload, PDU socket control, projector sleep/wake);
  • aggregating individual devices’ monitoring and control into a central operations interface.

The Configuration Database portion of the EMS makes use of familiar concepts such as devices, experiences, and galleries (see Figure 1). This provides a logical hierarchy of organization in language familiar to museum staff. Implemented in Django and PostgreSQL, the configuration database contains the information needed to reconstitute an exhibit from scratch. For network-connected devices, at a minimum this information includes the logical name, the MAC address, the IP address, and the DNS name. For specialized equipment like IP-connected power bars, the data also contains a human-readable list of what is connected to each controllable outlet. And for applications, the configuration database contains a link to the code repository and the scripts necessary to turn code into functioning applications.
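The device/experience/gallery hierarchy can be sketched as follows. This is a framework-agnostic illustration; the real configuration database is implemented as Django models backed by PostgreSQL, and every field name here is an assumption on our part, not the actual schema:

```python
from dataclasses import dataclass, field

# Illustrative shape of the configuration hierarchy; the production
# system stores this in Django/PostgreSQL with richer fields.

@dataclass
class Device:
    name: str           # logical name, e.g. "artefact-alley-smartphone"
    mac: str            # MAC address
    ip: str             # IP address
    dns: str            # DNS name
    repo_url: str = ""  # code repository for the exhibit application, if any

@dataclass
class Experience:
    name: str
    devices: list[Device] = field(default_factory=list)

@dataclass
class Gallery:
    name: str
    experiences: list[Experience] = field(default_factory=list)

    def all_devices(self):
        """Flatten the hierarchy, e.g. to enumerate monitoring targets."""
        return [d for e in self.experiences for d in e.devices]
```

The point of the hierarchy is that operational tooling can address the floor in staff-friendly terms ("the Smartphone exhibit in Artefact Alley") while still resolving down to concrete MAC/IP/DNS records.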

Figure 1: Sample screenshot from the EMS Configuration Database. Here, the sockets of an IP-controlled PDU are given human-readable descriptors and scheduling behaviors.

To accomplish monitoring and control, the EMS uses a “Service Agent” approach, as shown in Figure 2 below. Service Agents implement a concise API tailored to the needs of the museum. Agents run as persistent processes on all exhibit Mac minis and PCs. Additionally, agents run on virtualized hardware to control “dumb” third-party networked devices that don’t support running custom code, such as IP-controlled PDUs. (This could be viewed as an implementation of the so-called “Facade” software design pattern, where we make these “dumb” devices look like they have Service Agents.)
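The Facade idea can be sketched as below: every device, smart or dumb, is reached through the same minimal agent interface. The method names (status, power_on, power_off) are our own illustration, not the actual Service Agent API:

```python
# Sketch of the Facade pattern as applied here; the real agent API is
# richer, and these method names are assumptions for illustration.

class ServiceAgent:
    """Uniform interface implemented by agents on exhibit computers."""
    def status(self) -> dict: ...
    def power_on(self) -> None: ...
    def power_off(self) -> None: ...

class VirtualPDUAgent(ServiceAgent):
    """Runs on virtualized hardware and fronts an IP power bar that
    cannot host custom code, translating agent calls into PDU commands."""
    def __init__(self, pdu, socket_id):
        self.pdu = pdu              # driver object for the power bar
        self.socket_id = socket_id  # which outlet this agent represents

    def status(self) -> dict:
        return {"type": "pdu-socket", "on": self.pdu.is_on(self.socket_id)}

    def power_on(self) -> None:
        self.pdu.set(self.socket_id, True)

    def power_off(self) -> None:
        self.pdu.set(self.socket_id, False)
```

With this in place, higher-level tooling never needs to know whether it is talking to a Mac mini or to one outlet of a power bar.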

Figure 2: The EMS uses a Service Agent model to monitor and control connected devices.

On exhibit computers, the Service Agent ensures that a given exhibit application is running. In the case of an application crash, it automatically restarts the software. In cases where the application either won’t start or has been purposefully stopped (e.g., a peripheral hardware issue), the Service Agent displays a customized graphic that both provides an “out of order” message to the visitor and prevents access to the native OS’s desktop. Figure 3 shows an example of this so-called “Shielding Window.” The Service Agent additionally supports machine restarts, volume level changes, and triggering of an application-specific Debug Overlay mode that can provide technical staff with important diagnostic information.

Figure 3: The Service Agent “Shielding Window” provides information to the visitor and keeps meddling fingers away from what would otherwise be the native OS desktop.
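The keep-alive behaviour described above can be sketched as a simple watchdog loop. This is a minimal illustration rather than the actual agent code; the poll interval, the launch-count return value, and the should_run hook are our own simplifications, and the real agent additionally manages the Shielding Window, volume, and debug overlay:

```python
import subprocess
import time

def run_keepalive(app_cmd, poll_seconds=2.0, should_run=lambda: True):
    """(Re)launch app_cmd whenever it is not running; return launch count.

    should_run is polled each cycle so a purposeful stop (e.g. for a
    peripheral hardware issue) cleanly terminates the exhibit app.
    """
    proc, launches = None, 0
    while should_run():
        if proc is None or proc.poll() is not None:  # never started, or crashed
            proc = subprocess.Popen(app_cmd)         # (re)launch the exhibit
            launches += 1
        time.sleep(poll_seconds)
    if proc is not None and proc.poll() is None:     # purposeful stop
        proc.terminate()
    return launches
```

In the purposeful-stop case, the real agent would also raise the "out of order" Shielding Window rather than expose the desktop.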

Let’s turn our attention to the process of putting software onto exhibit computers. Ingenium sought to specifically avoid the all-too-familiar situation wherein visitor-facing interactives are essentially fossilized on (or soon after) opening. While the reasons for this are complex, one factor is the relative ease or difficulty of accessing, recompiling/rebuilding, and redeploying applications. This is especially difficult within the context of an operational museum. Ingenium’s Digital Team anticipated, planned for, and delivered a largely automated build and deployment pipeline that has already yielded observable successes.

The system, built using tried and tested open-source components, was designed to achieve the following:

  • automatically build software from source contained in the museum’s code repository;
  • deploy built software to any target machine (e.g., the museum floor or a test station);
  • easily roll back to any previously numbered software version;
  • handle inevitable outlier situations (e.g., a third-party application only available in binary form).

Figure 4 below shows the logical structure of the Build/Deployment subsystem. Central to the system is an automation server, in our case the open-source Jenkins. Building and deploying is logically a four-step process:

  1. Triggered through a simple click in the Web-based UI, application source code and supporting assets are gathered from the museum’s Git repository.
  2. Developers provide a build script capable of turning code and assets into a standalone binary package. The details of this build script are kept in the configuration database.
  3. The automation server tasks a build worker with actually creating the binary application. This is necessary so that, for example, Mac applications are compiled on Mac hardware. The resulting finished application is stored in a pool of built applications.
  4. The task server deploys the application to the target machine. From there, it is launchable via the Service Agent running on the target machine.
Figure 4: The Build/Deployment Subsystem of the EMS.
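The four steps above can be expressed as a declarative plan of commands. This is a hypothetical sketch: every path, host name, and the placeholder "run-on-build-worker" and scp-based deploy commands are our own illustrations, whereas the real pipeline is driven by Jenkins jobs plus the configuration database:

```python
import pathlib

def deployment_plan(repo_url, build_script, version, target_host,
                    pool="/srv/built-apps"):
    """Return the four-step pipeline as a list of shell commands."""
    workdir = pathlib.PurePosixPath("/tmp/build") / version
    artifact = f"{pool}/app-{version}.tar.gz"
    return [
        # 1. Gather tagged source and assets from the Git repository.
        ["git", "clone", "--branch", version, repo_url, str(workdir)],
        # 2-3. Run the developer-supplied build script on a matching build
        #      worker, storing the result in the built-application pool.
        ["run-on-build-worker", build_script, str(workdir), artifact],
        # 4. Deploy to the target machine; the Service Agent launches it there.
        ["scp", artifact, f"{target_host}:/opt/exhibits/"],
    ]
```

Expressing the pipeline as data rather than ad-hoc shell sessions is what makes "rebuild any exhibit from scratch" a routine operation rather than an archaeology project.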

Operationally, how does this work? First notice that software suppliers don’t configure exhibit computers directly. Instead, applications are built from source in a “hands-off” fashion. While this places a minor burden on developers (namely, to provide a build script that works on a machine other than their own), it guarantees that software can be reconstituted from scratch. This immediately removes the common issue of someone making undocumented changes to an exhibit. Every change, even a minor tweak to a configuration file, is tracked.

In the months leading up to reopening, and especially during the final weeks of stress testing, this system proved its worth with rapid cycles of feedback, software changes, and deployment. A typical scenario would see Ingenium staff reporting a bug or requesting a feature, the responsible developer team updating the appropriate code repository and announcing that “Version 1.0.2 fixes the reported bug,” and the Ingenium Digital Team rolling this new version out to either the project test bed or the exhibit floor with a handful of keystrokes and clicks. (We call this approach “near continuous integration,” with building and deployment being manually triggered by technical staff.)

Figure 5 below shows a sample screenshot of the Smartphone Exhibit in the Artefact Alley gallery, which has five available software versions that can be deployed. These numbers map back to tagged versions and associated notes in the Git code repository.

Figure 5: Sample screen from the EMS’s Build/Deployment subsystem. Here, a member of the Digital Team can “roll back” the Smartphone exhibit software to any of five previously-built versions at the click of a button.

The final piece of the EMS is the Show Control Interface. This provides both technical and front-line staff with a convenient UX for dealing with monitoring, scheduling, and machine and application restarts. We’ve already seen that the museum’s networked devices expose monitoring and control using the Service Agent model. Layered atop this is the Show Control system. Written in Elixir, a functional programming language that leverages the Erlang VM, the Show Control server periodically polls Service Agents to maintain a stateful representation of the various exhibits, including gathering live thumbnail screenshots of screen-based interactives. As it also enforces daily on/off schedules, the Show Control system was designed to be highly fault-tolerant.

Figure 6 shows a sample screenshot from the Show Control interface that is used by technical support staff. From the drop-down associated with each exhibit, common actions can be launched. Notice that the Mac mini-based interactive shows the live thumbnail mentioned above. This provides technical staff with an “at a glance” visual that an exhibit is running.

It’s worth noting that a separate page for front-line Visitor Experience staff provides a streamlined interface.

Figure 6: Sample screen from the Show Control interface. Here we see a live status view of 2 IP-controlled power bars (PDUs) and a Mac mini. (“Bounce” is internal terminology for restarting an application without restarting the entire computer.)
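The Show Control server's two core duties, fault-tolerant polling and schedule enforcement, can be sketched as below. This is a language-agnostic illustration (the production server is written in Elixir on the Erlang VM), and the agent objects, status fields, and opening hours are all assumptions:

```python
def poll_once(agents):
    """Poll every Service Agent; an unreachable agent must not stop the sweep."""
    state = {}
    for name, agent in agents.items():
        try:
            state[name] = agent.status()        # e.g. {"on": True, ...}
        except Exception:
            state[name] = {"reachable": False}  # fault tolerance: keep polling
    return state

def enforce_schedule(agent, now_hour, open_hour=9, close_hour=17):
    """Apply a daily on/off schedule to a single device."""
    should_be_on = open_hour <= now_hour < close_hour
    is_on = agent.status().get("on", False)
    if should_be_on and not is_on:
        agent.power_on()
    elif not should_be_on and is_on:
        agent.power_off()
```

Isolating each poll in its own failure boundary is the same design goal that motivated the Erlang VM choice: one crashed or unplugged exhibit should never take down the operator's view of the rest of the floor.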

2. Exhibition Content Management System (ECMS)

The ECMS is the museum’s subsystem for managing visitor-facing content. It provides museum staff with a user-friendly interface for updating text and media at selected exhibits. Just as important, it also provides interactive developers with a clean (yet flexible) approach to retrieving stored data.

The ECMS was developed using an open-source stack consisting of Django, PostgreSQL, and Bootstrap running on a virtualized Linux machine. In keeping with our philosophy of sustainability, the entire system can be auto-magically reconstituted on a fresh CentOS virtual machine using (self-documenting) Ansible scripts.

The architecture of the ECMS is outlined in Figure 7. Ingenium made the conscious decision to restrict use of the ECMS to exhibits having highly-structured data and relatively frequent content changes. This choice was made based on our collective experience with previous projects where precious development resources were expended making “everything” intraweb-updateable.

Figure 7: Architecture of the Exhibition Content Management System (ECMS)

We instead identified a relatively small set of exhibits where the ECMS was warranted. The flexible nature of the ECMS supports creating highly customized UXs designed to facilitate updates by non-technical museum staff. As an example, consider the ECMS screenshot taken from the exhibit “Pop-Up Science” shown in Figure 8. Here visitors browse a selection of images, each of which has a number of “hotspots.” When a visitor touches a hotspot, the screen zooms in on the feature of interest and brings up a pop-up overlay of related information. In the ECMS interface, notice how staff visually identify the hotspot and its corresponding zoom. Contrast this with a more generic interface which would have staff entering actual hotspot (x,y) and zoom (x, y, width, height) numeric coordinates. The latter, while technically easier to implement, presents an obvious usability barrier to those responsible for refreshing content.

Figure 8: A screenshot from the Exhibit Content Management System showing example exhibit-specific UX features like the clickable hotspot (circled plus at top left) and zoom image slider at right.
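To see why the generic interface would be a usability barrier, consider the shape of the data each hotspot ultimately boils down to. The field names below are our own illustration, not the actual ECMS schema; the custom UX fills these numbers from drag-and-zoom interactions so that staff never have to type coordinates:

```python
from dataclasses import dataclass

# Illustrative shape of one stored hotspot; field names are assumptions.

@dataclass
class Hotspot:
    x: float          # hotspot centre, normalized 0..1
    y: float
    zoom_x: float     # zoom rectangle origin, normalized 0..1
    zoom_y: float
    zoom_w: float     # zoom rectangle size, normalized 0..1
    zoom_h: float
    caption: str = ""  # pop-up overlay text shown after the zoom
```

A generic CMS form would expose all six numbers directly; the exhibit-specific UX hides them behind direct manipulation while storing exactly the same record.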

3. Exhibit Analytics System (EAS)

The aim of the Exhibit Analytics System (EAS) is to provide museum staff with actionable information that could be used to improve the visitor experience, and provide data that can help substantiate the work of evaluation staff who carry out observation studies. It is worth noting up front that this aspect of the software infrastructure is the least developed of the suite. The system is functional and engineered for scalability, but is not yet utilized by all of the exhibit interactives.

Rather than log fine-grained interactions (touches, button presses, etc.) at the system level and then try to infer behaviors, we leave reporting up to the individual exhibit applications. The idea here is that the teams responsible for creating an exhibit are in the best position to identify meaningful metrics, given the diverse range of interactive experiences. That said, exhibit developers were specifically asked, at a minimum, to log the following information:

  • Session Duration: As there is no way to accurately track dwell time without using specialized hardware that can measure a visitor’s actual physical presence, we instead adopted a consistent “session duration” definition. Specifically, a visitor session starts when the app descends from its “home” screen (usually a language selection screen) to the first level of content and ends when either the “restart” button is pressed or when the app “times out” after a certain time of inactivity.
  • Language Usage: This is only recorded within a session. It is simply the amount of time a session has been showing content in a specific language; in our case, French or English.
  • Inactivity: This is simply time not spent in a session.
  • Content Hits: Whenever a visitor encounters a piece of content, a “content hit” event should be generated. These are in the form of human-readable IDs that are unique within the context of a given exhibit.

This information is logged to the local machine using a thin wrapper library provided as part of the Integration Toolkit (see next section). An agent running on each machine trawls the logs for analytics-specific messages, pushing this data to an instance of open-source Elasticsearch hosted on a CentOS VM. At present, the simple query UI exposes a limited range of graphical outputs.
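The kind of thin wrapper involved can be sketched as below, implementing the session-duration definition given earlier (session starts on leaving the home screen, ends on restart or timeout). The class, method, and event names are our own illustration, not the Integration Toolkit's actual API; events are written as JSON lines for the log-trawling agent to forward:

```python
import json
import time

class ExhibitAnalytics:
    """Illustrative event logger; emits one JSON object per line."""

    def __init__(self, exhibit_id, sink, clock=time.time):
        self.exhibit_id = exhibit_id
        self.sink = sink              # any object with a write() method
        self.clock = clock            # injectable for testing
        self.session_start = None

    def _emit(self, event, **fields):
        fields.update(exhibit=self.exhibit_id, event=event, ts=self.clock())
        self.sink.write(json.dumps(fields) + "\n")

    def start_session(self):
        """Visitor left the home/attract screen for first-level content."""
        self.session_start = self.clock()
        self._emit("session_start")

    def content_hit(self, content_id, language):
        """Human-readable content ID, unique within this exhibit."""
        self._emit("content_hit", content=content_id, language=language)

    def end_session(self, reason="timeout"):
        """Restart button pressed, or inactivity timeout reached."""
        if self.session_start is not None:
            self._emit("session_end", reason=reason,
                       duration=self.clock() - self.session_start)
            self.session_start = None
```

Because the exhibit itself decides when a "session" or "content hit" occurs, the metrics stay meaningful across wildly different interactive formats.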

Let’s quickly look at a few examples taken from the museum’s touchscreen-based Smartphone interactive. This simple kiosk interactive lives along the central spine of the museum known as Artefact Alley. It asks visitors to identify which historical technologies are available on a smartphone. For example, the telegraph, once a method for signaling concise information over distances, lives on in its present-day analog of SMS text messaging.

Figure 9 shows a histogram of Session Durations measured during the month of February, 2018. We know from personal experience that completing the quiz takes roughly a minute. The large bar at 0-15s therefore represents visitors who touched the attract screen to start the experience … but didn’t further engage after 15s. Whether or not this represents a problem or a success is beyond the scope of this paper.

Figure 9: Measured session durations for the Smartphone interactive during February 2018.

In the next EAS screenshot, Figure 10, we show the daily usage percentage for the same exhibit over the month of February, 2018. On busy days, we can see that this exhibit was in use for 20% of open hours. Can you see the two days that the museum was closed for maintenance during the month? Can you spot the long weekend that occurred in February?

Figure 10: Daily usage for the Smartphone interactive during February 2018.

As stated, our institution’s foray into exhibit analytics is just starting. It remains to be seen whether this relatively modest software-infrastructure investment proves useful to management, curators, and visitor experience staff.

4. Integration Toolkit (ITK)

The Integration Toolkit provides the libraries, skeleton applications, and support documentation needed for Ingenium’s digital team staff and third-party developers to quickly and easily create visitor-facing applications that leverage the EMS, ECMS, and EAS infrastructure. Additionally, the toolkit provides a number of standardized applications (e.g., a flexible video player and a customized HTML5-based web browser) that are used across a number of galleries.

5. Deficiency system

The deficiency system provides an issue-tracking system commensurate with operations and floor-staff needs. A single deficiency system is used for all deficiencies related to the exhibition experience, both digital and physical. Ingenium implemented the Atlassian JIRA Service Desk as its deficiency system (see Figure 11).

Figure 11: A sample screenshot from the CSTM deficiency system.

6. Documentation

System documentation was critical throughout the project, and essential for the transition into ongoing operations. Documentation was authored and (continues to be) managed in Atlassian Confluence.

The museum as an Enterprise IoT application

The challenge of managing a myriad of network-aware devices can be viewed as an Enterprise IoT application. Enterprise IoT is a specific field of application of IoT capabilities. Salma (2016) provides a very useful framework for managing Internet of Things capabilities within the context of an enterprise. Mapping to Salma’s Enterprise IoT model, Ingenium’s Exhibition Management System can be visualized with the following asset integration architecture:

  • Enterprise layer: enterprise applications, including Bitbucket and JIRA (Service Desk), with future integrations to other museum systems (Collection, DAM, etc.).
  • IoT Cloud / M2M Backend: the Exhibition Management System (EMS).
  • Exhibition assets, comprising three sub-layers:
  • Gateway & Agent: Service Agents running on exhibit computers; virtual Service Agents fronting IP power bars; the browser.
  • Business Logic: exhibit applications and browser-based experiences.
  • Devices: exhibit computers (Mac minis and PCs); Raspberry Pi, Arduino, and “dumb” devices; iPads.

Table 2: The EMS viewed as an Enterprise Internet of Things application.

Understanding key drivers

First and foremost is understanding the drivers that are specific to your museum. Digital strategy is not one-size-fits-all; every institution has its own unique context.

The CSTM renewal worked from an overarching concept: Through scientific and technological endeavor, people have made Canada and continue to shape its future. From this central theme, avenues for interpretation were identified, including several categories of imagined visitor experience:

  • engage with a historical narrative;
  • creative and collaborative use of technology;
  • hands-on exploration and analysis;
  • immersive narratives;
  • appreciation of technological forms and functions;
  • a welcoming and engaging space.

These avenues informed more detailed elements of a digital media strategy within the Interpretive Concept Master Plan (CSTM, 2015a), which were further explored in a supplemental Digital Systems Strategy (NGX, 2015). These drivers included the following:

  • hardware (standardization, robustness for museum use, modern/current technology, maintenance, remote access);
  • software (diverse experiences, flexibility to support varied creative direction, access to source code, avoiding closed systems);
  • experience (fits with the framework for the CSTM Interpretive Master plan, appropriate for core audience, layered content, engaging visitors with each other, quality over quantity, learning from visitor data, not creating a museum with “so many screens”!).

Constraints can also be a significant driver. Ingenium had limited funding to re-conceive and rebuild the Canada Science and Technology Museum, and less than two and a half years from the initiation of the project.

Comparison of drivers with other museum case studies

It is useful to compare the drivers behind the CSTM renewal with those of other major museum projects. In the brief summaries below, we attempt to highlight some of the key drivers expressed by these institutions, and the different approaches that followed. (We note that Lee and Paddon (2017) also undertook a useful survey of projects, and present a useful case study of the Asian Art Museum.)

The Canadian Museum for Human Rights’ (CMHR) Enterprise Content Management System (ECMS) has been well documented (e.g., Timpson, 2017). The CMHR is “an ideas museum, whose subject matter is intangible and conceptual. Its artifacts are not material culture but stories themselves” (Timpson, 2017). This drove the development of a modular, scalable ECMS that supported dynamic delivery of content—an enterprise application at the core of the museum.

The Cooper-Hewitt embarked on a multi-year transformation project that culminated in its reopening in 2014 and the launch of the Pen in early 2015. Complementing the overall vision, the museum articulated several basic principles for technology in the galleries:

  • give visitors explicit permission to play;
  • make interactive experiences social and multi-player and allow people to learn by watching;
  • ensure a “look up” experience;
  • be ubiquitous, a “default” operating mode for the institution;
  • work in conjunction with the Web and offer a persistence of visit.

This led the Cooper-Hewitt to craft a seamless, unifying experience across the museum (the Pen). A fundamental component supporting this experience is the back-end API and Service Oriented Architecture, which had its roots in the Cooper-Hewitt’s first Collection Alpha in 2012. The Pen experience was part of the overall transformation of the institution (Chan, 2015).

The Cleveland Museum of Art’s Gallery One focused on an interactive gallery that would support learning, build audiences, highlight featured artworks and “propel visitors into the primary galleries with greater enthusiasm, understanding, and excitement about the collection” (Alexander, 2013). CMA focused on a data-driven strategy that could be leveraged for applications that focus on the art and on visitor experiences. The CMA “developed and implemented a comprehensive back-end strategy to activate its world-class collection, connect art and people, promote new scholarship and support research, promote on-site and online attendance, increase financial support, promote both external and internal collaboration, and help staff work smarter by targeting artwork information, interpretive content, research resources, and supporter-relationship data” (Alexander, 2015).

Each of these case studies illustrates different organizational contexts and drivers, which led to distinctive strategies for digital initiatives. Understanding the drivers specific to your institution is critical, as your institutional strategy will unfold from there.


To refine the requirements for digital platforms, Ingenium rooted the project in understanding the needs of people—both staff and visitors. How could digital support the overall museum experience for visitors? And how could digital facilitate museum activities and work across traditional silos in support of overall museum experience? This early focus on people was an important way to identify the key requirements for the renewed CSTM.

A vital step was capturing user stories, a technique from the agile playbook. A user story is “a quick and simple description of a specific way that a user will use the software” (Stellman, 2015). User stories were a powerful way to identify, clarify, and prioritize Ingenium’s requirements. A key point for Ingenium was to not overbuild, creating features that no one would actually use. Since user stories capture actual use scenarios, they were very useful in managing the scope of the project.

There are many references that address agile development; Stellman (2015) includes good coverage on the power of user stories, and how to use them effectively. For our purposes, stories followed a simple structure: “[I am a Role]. [I find myself in this Context]. [I desire the following Outcome]. [And optionally, this is important because of the following Rationale]” (Ingenium, 2016).

EDM Studio, the museum’s primary software infrastructure partner and developer/integrator for the EMS, worked with Ingenium’s digital team to collect user stories, consulting almost every museum function. Visitor stories were of key importance, but stories included functions as diverse as visitor experience, facilities, conservation, curatorial, business development, revenue generation, IT, information management, the digital team, and so on (see Ingenium, 2016).

User stories were prioritized, forming the “backlog” for the system development with EDM Studio.

Agile, Git, and “The Brain”

An agile approach was used throughout the project. To allow for early and ongoing delivery of software, we created a hardware test bed. Connected to the museum’s exhibition network and colloquially known as “The Brain,” this space was outfitted with a network PTZ camera and a range of representative exhibit hardware. The Brain allowed us to perform exhaustive testing and review of both software infrastructure and visitor-facing applications. Partners were encouraged to share work early and often. This allowed for early feedback, thereby helping to avoid costly corrections further downstream.

It is worth noting that use of The Brain involved establishing a high level of trust among all parties. Not all of our “design-build” partners were accustomed to such early, transparent commits to our Git repository. Not every partner “got it,” but for those that did, it helped avoid costly issues closer to launch.

Project scope and system boundary

The gathering of user stories helped clarify the scope of the project and the system boundary for the EMS, and allowed us to focus on features and integrations that people would actually use. The system boundary is an important line. What is in the system and what is external? Where does the EMS need to be integrated with external systems? And are these tight or loose integrations?

Earlier analysis (NGX, 2015) conceived of many possible integrations, and the digital team identified many more. The project’s agile approach allowed for discovery as the project progressed. Some systems, such as the CSTM exterior canopy and facade’s projection and content management system (Pandora’s Box), and back of house content systems, were flagged as out of scope at early stages (see discussion of Content Management below). Others, such as digital signage and exhibition lighting, were first considered as potential EMS scope, but upon deeper understanding of user stories and specific systems, were removed from the EMS. This management of scope was critical in maintaining focus on essential requirements, managing risk, and delivering on time.

Hardware standards and management

Developing hardware standards was essential for the success of the project, not only for managing scope but also with an eye to future sustainability.

While recognizing the inevitable need to support a mix of computer platforms, we made an early decision to go with the Apple Mac mini running macOS as the primary platform that would host the vast majority of digital experiences. That is, unless design/build companies had a compelling argument otherwise–something more than “this is the way we usually do it”–the museum required delivery of a native Mac app targeting the mini. We chose the mini/macOS combination based on demonstrated robustness, form factor, and suite of development and management tools. Limiting hardware options allowed for the development of a robust service agent that provided the Internet of Things integration with the Exhibit Management System (see Figure 2).

Outlier exhibits do exist. These include a number of pre-existing exhibits built for Windows, a large custom sound installation on Intel NUCs running Linux, a handful of Raspberry Pis, and several BrightSign media players. Integration of these systems with the software infrastructure was done on a case-by-case basis. For example, Windows and Linux service agents were authored that implement a subset of the full Service Agent API. In the worst case, devices are limited to on/off control via an IP-controlled power supply. In general, we would caution that deviations from a common standard will incur (hidden) costs, not the least of which is long-term operational support. (In fact, one of our most problematic interactives continues to be one that was granted an exemption from the standard.)
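The Service Agent API itself is not published; as a sketch of the pattern described, an agent can advertise which commands it supports, with reduced ports (Windows, Linux) registering only their implemented subset. All names here are hypothetical:

```python
class ServiceAgent:
    """Minimal sketch of an exhibit service agent (hypothetical API).

    The full macOS agent would support all commands; reduced ports
    register only the subset they actually implement.
    """
    def __init__(self, device_id, supported=("status", "launch", "quit", "reboot")):
        self.device_id = device_id
        self.supported = set(supported)

    def handle(self, command):
        if command not in self.supported:
            return {"device": self.device_id, "command": command,
                    "ok": False, "error": "unsupported"}
        # A real agent would act on the host OS here (launch app, reboot, ...)
        return {"device": self.device_id, "command": command, "ok": True}

# A reduced Linux port implementing only a subset of the full API:
linux_agent = ServiceAgent("sound-wall-01", supported=("status", "reboot"))
```

The EMS can then degrade gracefully: any command a device cannot honour is reported as unsupported rather than failing silently, which matches the case-by-case integration approach described above.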

For embedded systems, we standardized on Arduino. While these are not currently integrated with the EMS, they were future-proofed by requiring that each include wired Ethernet capability.

Standards were also specified for non-computer devices including screens, projectors, amplifiers, etc.

For less or non-intelligent devices, one method Ingenium used was IP-based power bars (PDUs). These provide on-off functionality, and hence the ability to reboot and schedule attached devices (again, see Figure 2 above).
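The specific PDU hardware and protocol are not named in the paper; real units typically expose HTTP, SNMP, or telnet control. As a hedged sketch, "reboot" for a non-intelligent device reduces to power off, wait, power on:

```python
import time

class OutletController:
    """Hypothetical client for an IP-controlled power bar (PDU).
    This sketch only models per-outlet state; a real client would
    issue network commands to the unit."""
    def __init__(self, outlets=8):
        self.state = {n: True for n in range(1, outlets + 1)}  # True = powered

    def set(self, outlet, on):
        self.state[outlet] = on

    def reboot(self, outlet, delay=0.0):
        """Power-cycle an attached (non-intelligent) device."""
        self.set(outlet, False)
        time.sleep(delay)  # on real hardware, let the device fully power down
        self.set(outlet, True)
```

Scheduling, as described above, is then just calling `set()` from the EMS at gallery open and close times.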

All network-accessible hardware is identified and managed in the EMS configuration database, including MAC addresses, IPs, and internally-resolvable DNS names, as described in the case study above (see Figure 1).
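The EMS configuration schema is internal to Ingenium; a record of the kind described might look like the following sketch, with a simple MAC-format check. The field names and DNS zone are illustrative assumptions:

```python
import re

# Accepts colon-separated MAC addresses, e.g. "A4:5E:60:C2:10:FF"
MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")

def make_device_record(name, mac, ip):
    """Sketch of an EMS configuration entry (fields are hypothetical).
    Each network-accessible device gets a MAC address, an IP, and an
    internally-resolvable DNS name."""
    if not MAC_RE.match(mac):
        raise ValueError(f"malformed MAC address: {mac}")
    return {
        "dns_name": f"{name}.exhibits.internal",  # assumed internal zone
        "mac": mac.lower(),
        "ip": ip,
    }
```

Validating identifiers at registration time pays off later: monitoring, scheduling, and rapid hardware replacement all key off these records.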

One sustainability consideration was the strategy for device failure and rapid replacement. Hardware standardization, coupled with the “Near Continuous Integration” approach for software management, was essential for achieving this goal. Quite literally, replacement of a failed Mac mini from a new shrink-wrapped unit can be achieved in under an hour, the vast majority of that time being deployment of the base (custom) OS image. Actual redeployment and relaunch of visitor-facing software takes from one to five minutes.

Software standards and management

Software standards were developed in tandem with hardware standards. These standards included development language, code management, and the approach for code build and deployment. Software management was a key project driver for Ingenium. This included the ownership of source code and the ability to maintain and update code into the future.

Software is managed in a Git source repository, hosted on Bitbucket. This repository (or rather, set of repositories) contains all of the code, libraries, and digital assets authored for the CSTM exhibitions. It is worth noting that large media files are handled via the Git Large File Storage (LFS) extension.

The EMS was built to support a “Near Continuous Integration” approach, thereby ensuring that all software is current, maintained, and deployable–both now and into the future. The use of the qualifier “near” is significant. Rather than automatic builds of every commit, Ingenium made a conscious choice that building and deployment would be manually triggered by technical staff.
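The pipeline details are not given in the paper; the "near" qualifier could be realized as a pipeline in which every commit is built automatically, but deployment to the gallery floor waits for an explicit human trigger. This is a sketch under that assumption, with hypothetical names:

```python
def deploy(commit, approved_by=None):
    """Sketch of a 'Near Continuous Integration' gate: every commit is
    built, but deployment to the floor proceeds only when a technician
    explicitly triggers it. Names here are illustrative."""
    build = {"commit": commit, "built": True}  # automatic build step
    if approved_by is None:
        return {**build, "deployed": False, "reason": "awaiting manual trigger"}
    return {**build, "deployed": True, "by": approved_by}
```

The design choice trades a little automation for control: on a live museum floor, staff decide when a new build reaches visitors.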

Content management

Content strategies have appropriately garnered significant attention (for example, Hossaini, 2017b; Timpson, 2017). For some museums, these content systems are at the heart of their digital strategies.

As noted in the case study overview, content management was given careful consideration. CSTM made conscious decisions to not overbuild content management features that would be infrequently used. Development focused on visitor and staff needs rather than an information-centric approach. See Figures 7 and 8 in the case study above for an illustration and example of the content management approach.

Readiness of back-end systems was a factor. Ingenium had several back-end system projects related to collections, content management, and digital asset management that would not be ready on the museum renewal timeline. Given the tight timeline, avoiding these system dependencies was important to de-risk the museum renewal project.

Further back-end system integration is on the future roadmap. Ingenium is developing a microservices architecture to support its omni-channel content strategy, and expects this will be leveraged for the in-museum experience in the future.

Accessibility standards and inclusive design

Standards play a critical role in helping to ensure an experience is available and accessible to all visitors. See Wyman (2016) for an excellent exploration of inclusive design.

Accessibility was a core element of the CSTM Interpretive Concept Master Plan (CSTM, 2015a). Ingenium implemented a comprehensive accessibility standard, a living document with the “aim of preventing accessibility barriers” and making “every effort to provide alternate methods of operation and information retrieval for digital interactives” (Ingenium, 2017). The accessibility standard was a key touchstone for all exhibition teams throughout the project. The exhibition teams followed not just the letter, but the spirit of these standards.

Given the wide range of experiences being developed, there was significant experimentation, prototyping, and testing. Input and feedback from representative external stakeholder groups was invaluable.

There were also technical innovations along the way. For example, the Ingenium Innovation Lab designed and 3D-prototyped an accessible headphone jack that is now implemented across the museum galleries (Bedi, 2017).

As another example, a number of exhibits implement a considered navigation paradigm that features contextual descriptive text maintained in the EMS. This audio uses automatic text-to-speech (TTS) synthesis; while perhaps less engaging than professionally-recorded audio, it made descriptive audio feasible at a scale that recording costs would otherwise have ruled out.

We recognize that we may not have gotten everything right, but remain firmly committed to continuously improving the museum experience for all visitors.

Graphic standards for digital media

Accessibility standards were further elaborated with graphic standards for digital media (CSTMC, 2016).  The graphic standards provided exhibition designers and developers with specific and clear guidance on how to achieve the accessibility standard. Figure 12 shows an excerpt from the Graphic Standards for Digital Media. These standards sought to balance consistency of UX with flexibility for creativity, allowing for diverse and distinctive exhibition experiences.

Figure 12: An excerpt from CSTMC Graphic Standards for Digital Media.

Recognizing that the Canadian Museum for Human Rights was an exemplar of accessibility and universal design, Ingenium reached out to CMHR regarding the application of their accessibility standards to digital media. Many aspects of Ingenium’s Graphic Standards for Digital Media were adapted from this prior work of CMHR (Canadian Museum for Human Rights, 2013).


Analytics

Digital analytics are recognized as critical for understanding our audiences for websites, apps, and other niche platforms, and what content is appealing for these audiences (Moffat, 2017). This is also true of in-museum experiences. “Digital technologies are changing how we understand visitors. By relying on records rather than self-reporting… analytics methodologies… give objective insight into the nuances of actual behavior” (Hossaini, 2017). Such analytics are an important supplement to more traditional forms of in-museum evaluation.

As described in the case study, Ingenium planned up-front for an Exhibit Analytics System, to provide actionable information that could be used to enhance visitor experiences. Full deployment is still a work in progress. We expect to leverage this system to support summative analysis of exhibitions, and for ongoing feedback to continuously improve museum experiences.
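The Exhibit Analytics System's schema is not published; the interaction records it would collect can be sketched as timestamped events keyed by exhibit and action. Field names and example values here are hypothetical:

```python
from datetime import datetime, timezone

def make_event(exhibit_id, action, session=None):
    """Sketch of an in-gallery analytics event (fields are hypothetical).
    Recording interactions as structured events, rather than relying on
    self-reporting, enables the behavioural analysis described above."""
    return {
        "exhibit": exhibit_id,
        "action": action,        # e.g. "screen_touch", "activity_complete"
        "session": session,      # optional anonymous session token
        "ts": datetime.now(timezone.utc).isoformat(),
    }
```

Aggregating such events over time supports both summative exhibition evaluation and ongoing tuning of individual interactives.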

Deficiency/issue management

Issue and deficiency management was another key consideration: how to identify, record, and track deficiencies, both during development and integration, as well as during ongoing operations.

As noted in the case study, Ingenium decided to leverage robust industry standard tools, adopting the Atlassian suite, specifically using a JIRA service desk for deficiency tracking.

Adoption of the Service Desk was critical. To help with adoption, the team drew from the user stories that had been gathered from the outset. We also took a whole-museum view, not just managing deficiencies in digital interactives. Stakeholders were involved during setup, and CSTM uses regular, brief stand-ups to stay coordinated on deficiencies across museum functions. The system is now actively used by the broader range of stakeholders to track all issues related to exhibitions.


Sustainability

The EMS approach supports sustainability in a number of significant ways. First, there is the efficiency of software deployment, integration, and revisions. As described in the case study, the Exhibition Management System demonstrated its value in spades, particularly during the final integration and testing of digital interactives in the weeks immediately before the reopening of the Canada Science and Technology Museum. A team of two museum employees, along with invaluable support from EDM Studio, was able to successfully integrate hundreds of devices from the five design/build firms and numerous digital media subcontractors, all within a few short weeks. It allowed for the efficient and rapid deployment of revisions throughout the floor integration. This would have otherwise been an impossible task to manage, given the tight time frame involved.

For ongoing operations, the long-term benefits are expected to be even greater. The EMS provides for efficient monitoring, control, and scheduling for the museum floor. The project’s hardware and software standards simplify hardware replacement and the management of spares. The museum’s contractual approach ensures the museum retains ownership of source code, and the EMS in turn helps ensure this remains a “living” deployable code base. The EMS allows for rapid replacement of software onto replacement computer hardware, to help quickly resolve any hardware failures. And bigger picture, it also provides a framework that supports future innovation and experimentation in a modular yet manageable way.

The EMS has demonstrated its resiliency—from the project phase of development and integration all the way through to the transition to ongoing operations. This bodes well for the role of the EMS as a foundation both for operations and for future initiatives, with deployment across Ingenium’s other museums already planned. The EMS also supports a modular approach: by developing a service agent for an additional platform (e.g., Linux, Android), any future device could be integrated into the EMS.

Future directions

Ingenium’s EMS journey is not over. While the EMS is being operationalized, Ingenium is also mapping out future directions for building on the platform.

The EMS is currently deployed at only the Canada Science and Technology Museum. Ingenium plans to expand its use across its three museums.

There are several mobile initiatives underway. CSTM has launched an augmented reality game—Artebots—allowing young and youthful audiences to explore the collection through play. It is also developing an augmented reality app (Augmented Alley). Both of these apps use an array of beacons across the museum floor. An app that further supports accessibility for all audiences is in the early stages of development.

As noted above, deeper content and data integration is anticipated for the future. Ingenium is developing a microservices architecture for content systems which will be leveraged for museum experiences.

Ingenium also foresees future collaboration regarding the project. We have explored code-sharing arrangements with other institutions that have similar platforms, and are considering the potential value in open-sourcing the project (while being realistic about the viability of most open source projects in general).


Conclusion

Technology has an increasingly important role on the gallery floor of many institutions. The proliferation of “smart” devices and Internet of Things approaches offer significant opportunity to manage and personalize experiences. However, benefiting from these opportunities takes planning—specifically, to map out an architecture and standards that make sense for your institution. It is vital to understand the drivers that are specific to your institution.

The reboot of the Canada Science and Technology Museum–even with its significant constraints in time and budget–presented a significant opportunity. We recognize that few institutions have such an opportunity to undertake a complete redo or re-conception from scratch; but we hope that institutions can seize opportunities, project by project, big and small, incrementally advancing toward a sustainable, long-term approach for managing their museum of digital things.


Acknowledgements

Many of the themes in this paper were explored at the MCN 2017 Conference, in the professional session “The Museum of (Digital) Things: Diverse strategies for managing digital experience,” with Jane Alexander, Jordon Randall, Corey Timpson, and Brian Dawson (see Dawson, 2017).

We also wanted to acknowledge Christopher Jaja (Ingenium’s former Director of Digital Media and Technology) and Andrew Macdonald (Ingenium’s New Media Officer) who together steered the vision for Ingenium’s EMS and were in the trenches throughout its implementation. Thank you also to Kevin Garnett (CSTM Senior Exhibition Renewal Project Manager) for strategic advice and experience.


References

Alexander, J., J. Barton, & C. Goeser. (2013). “Transforming the Art Museum Experience: Gallery One.” In Museums and the Web 2013, N. Proctor & R. Cherry (eds). Silver Spring, MD: Museums and the Web. Published February 5, 2013. Consulted September 30, 2017. Available

Alexander, J., L. Wienke, & P. Tiongson. (2017). “Removing the barriers of Gallery One: a new approach to integrating art, interpretation, and technology.” MW17: MW 2017. Published February 16, 2017. Consulted September 30, 2017. Available

Bedi, J. (2017). How the Canada Science and Technology Museum designed an accessible, modular headphone jack. Ingenium Innovation, 2017. Available

Canadian Museum for Human Rights. (2013). Graphic Standards for Exhibits and Media. May 31, 2013. (Non-published document referenced.)

CSTM. (2015a). Canada Science and Technology Museum Interpretive Concept Master Plan. Published September, 2015. Available

CSTM (2015b). CSTM Software Infrastructure Draft Specification. Published November 5, 2015.  Available

CSTMC. (2016). Graphic Standards for Digital Media. Published September 22, 2016. Available

Dawson, B., J. Alexander, J. Randall, & C. Timpson. (2017). “The Museum of (Digital) Things: Diverse Strategies for Managing Digital Experience.” MCN 2017. Published November 8, 2017. Audio recording available

Ingenium. (2016). CSTM Renewal Project Software Infrastructure User Stories, Version 0.5. Published April 2016. Available

Ingenium. (2017). Ingenium Accessibility Standards for Exhibitions. Last revision April 26, 2017. Available

Hossaini, A. & N. Blankenberg. (2017a). Manual of Digital Museum Planning. London: Rowman & Littlefield.

Hossaini, A. (2017b). “The Omnichannel Museum.” In A. Hossaini & N. Blankenberg (eds.), Manual of Digital Museum Planning. London: Rowman & Littlefield.

Lee, J. & M. Paddon. (2017). “Creating The Smart Museum: The Intersection of Digital Strategy, Kiosks And Mobile.” MW17: MW 2017. Published February 1, 2017. Available

NGX. (2015). CSTM—Digital Systems Strategy. Published July 20, 2015.

Porter, M. & J. Heppelmann. (2014). “How Smart, Connected Products Are Transforming Competition.” Harvard Business Review, Published November, 2014. Available

Stellman, A. & J. Greene (2015). Learning Agile: Understanding Scrum, XP, Lean and Kanban. O’Reilly Media.

Timpson, C. (2017). “The Pursuit of Efficient Relevance: An Enterprise Content Management System.” In A. Hossaini & N. Blankenberg (eds.), Manual of Digital Museum Planning. London: Rowman & Littlefield.

Wyman, B., C. Timpson, S. Gillam and S. Bahram. (2016). “Inclusive design: From approach to execution.” MW2016: Museums and the Web 2016. Published February 24, 2016. Consulted September 30, 2017. Available


Cite as:
Dawson, Brian and Edmundson, Darran. "Building a smart museum: Tackling in-gallery challenges with digital experience at scale." MW18: MW 2018. Published March 16, 2018. Consulted .