Serving researchers in a self-service world

Dana Allen-Greil, National Archives, USA

Abstract

In a world in which a family historian can type her grandfather's name into Ancestry.com to start building a family tree, and a journalist can Google to download public domain images, where do the collections searches, online tools, and APIs that museums and archives provide fit in? This paper outlines strategies for better serving people who are looking for the knowledge and expertise within your collections and staff. At the National Archives and Records Administration of the United States, we undertook a significant user experience (UX) research project to better understand the online experiences of professional researchers, family historians, and history enthusiasts. Research methods included audits of existing user data (e.g., Google Analytics, survey data) as well as new user interviews, usability testing, a survey, and a landscape analysis. Key findings include the fact that researchers struggle to complete their tasks using existing online tools; people researching family history are particularly unsatisfied and in need of better support; and all audiences require just-in-time help and appropriate orientation to archival research. A major challenge highlighted by this research is how to meet user expectations for item-level records while providing access to digitized records at massive scale.

Keywords: research, collections, UX, market research, online reference, archives

The National Archives and Records Administration of the United States (NARA) has embarked upon a significant research project to better understand today’s online marketplace for researchers and how we can best serve their needs. For the purposes of this project, a “researcher” is defined broadly to mean anyone seeking documents, data, or other information about the past. This includes scholars, professional researchers, family historians, and casual information seekers.

For several years, NARA’s customer satisfaction survey has indicated relatively low satisfaction with our websites and a significant percentage of users who are unable to complete their intended tasks. Like many online collections, our National Archives Catalog has been developed over time without a strong user-centered design framework. Because access to records via the Catalog and other online platforms is a critical part of NARA’s strategic plan through FY 2022, we know that we will be investing heavily in digital, both in staff time and financial resources. It is critical that enhancements and changes to user interfaces, as well as to the underlying infrastructure and back-end systems, are driven by the needs and expectations of our users.

The user data and customer journey analysis gathered through this research project are intended to help us identify pain points and improve our interfaces, but those inputs alone won’t be enough to help us address the needs and expectations of our audiences. It is critical that we also have a good grasp of the greater online context in which researchers encounter NARA’s services. In a world in which a family historian can type her grandfather’s name into Ancestry.com to start building a family tree, and a journalist can Google to download public domain images, where do the collections searches, online tools, and APIs that museums and archives provide fit in?

To ensure a user-driven design process for our digital platforms going forward, NARA contracted with Threespot to undertake a three-phased project:

  1. User experience research (completed)
  2. Persona and user-centered design product development (in progress)
  3. Development of a user experience (UX) roadmap, marketing strategy, and measurement strategy based on the research findings (planned)

This paper focuses on the results of the first phase of the project. (To access information about the outcomes of subsequent phases of the project, follow the NARAtions blog for updates in the months to come.) The user experience (UX) roadmap to be completed later this year will define NARA’s digital development priorities and pair those aims with marketing and measurement strategies to increase our chances of success. The ultimate outcomes we want to see from all of this work are, simply put, to significantly improve online researcher satisfaction and to significantly increase the number of online researchers we are able to serve.

Strategic alignment

NARA is America’s record keeper. We are the U.S. Government agency that not only preserves documents and materials related to the United States, but also makes sure people can access the information. Billions of letters, photographs, video and audio recordings, drawings, maps, treaties, posters, and other informative materials exist that tell the stories of America’s history as a nation. From the Declaration of Independence, the Constitution, and the Bill of Rights, to census records that account for every citizen—the preservation of important American documents helps illustrate what happened in the United States before and after we were born.

The National Archives’ vision is to be known for cutting-edge access to extraordinary volumes of government information and unprecedented engagement to bring greater meaning to the American experience. We aim to transform the American public’s relationship with their government, with archives as a relevant and vital resource. Within NARA’s Office of Innovation, the Digital Engagement Division strives to engage the public with meaningful digital experiences when, where, and how they want them.

Several challenging goals set out in NARA’s strategic plan serve as the backdrop for this research project:

  • By FY 2024, NARA will digitize 500 million pages of records and make them available online to the public through the National Archives Catalog;
  • By FY 2025, NARA will provide digital, next-generation finding aids to 95 percent of the holdings described in the National Archives Catalog;
  • By FY 2025, NARA will have one million records enhanced by citizen contributions to the National Archives Catalog;
  • By December 31, 2022, NARA will, to the fullest extent possible, no longer accept transfers of permanent or temporary records in analog formats and will accept records only in electronic format and with appropriate metadata.

To achieve these goals, NARA must not only figure out how to provide access to a massive number of archival resources, but also how to do it through user-centered digital products that scale.

Scope of inquiry

NARA’s online ecosystem is made up of more than two dozen websites, including 14 Presidential Libraries. In 2017, NARA’s websites served more than 40 million users. While the focus of this project is on the online researcher, there is considerable variety in roles, tasks, and opinions within such a large audience. In 2017, NARA developed user personas based on available data about our digital audiences. However, the idea of an online researcher at NARA has continued to be surrounded by myths, assumptions, and anecdotal evidence.

Figure 1: digital user persona for Researcher, which was created in 2017 based on available user data

To better understand how NARA serves this important audience, we focused our inquiry primarily on three NARA websites designed to serve online researchers: Archives.gov, the National Archives Catalog (catalog.archives.gov), and History Hub (history.gov).

Homepage of Archives.gov
Figure 2: Archives.gov

Archives.gov is NARA’s flagship website, serving more than 20 million users in 2017. It is the portal to information for researchers, veterans, educators, museum visitors, and more. The public can view statistics about NARA’s website usage, including most-used pages, on analytics.usa.gov thanks to the U.S. government’s Digital Analytics Program.

Figure 3: search results page on the National Archives Catalog (catalog.archives.gov) for a keyword search on “World War I”

The National Archives Catalog (catalog.archives.gov) contains archival descriptions for over 95% of the holdings of the National Archives and over 39 million digitized copies of records. The site served 1.6 million users in 2017. Citizen archivists have added more than 1.2 million enhancements to the Catalog’s records in the form of user-contributed tags, transcriptions, and comments.

Figure 4: The homepage of History Hub (history.gov) encourages users to ask a question or explore recent content.

History Hub (history.gov) is a crowdsourcing platform and online community that served 60,000 users in 2017. History Hub enables researchers to find expertise, share information, and work together. More than 700 research questions have been asked and answered on the platform since its launch in 2016. We are just beginning to understand the potential uses of History Hub as a platform, and this research project helped us better understand who uses the site (and who doesn’t), and what role it plays in supporting researchers. History Hub is intended to be a tool for many cultural institutions to use (not just NARA) and it is free and open to all.

Research methodology

Our approach to this project was informed by many user-centered design research projects that predated ours, including well-documented methods used by Mitroff and Alcorn (2007), Tasich and Villaespesa (2013), Treptow and Kaiser (2015), and Coburn (2016).

During the first phase of this project, our research goals were to:

  1. Assess the current level of audience engagement
  2. Understand NARA’s high-value audiences
  3. Identify new, aspirational audiences
  4. Review peer organizations for best practices, features, and trends
  5. Recommend ways to improve user experience on current digital platforms
  6. Identify new opportunities to engage audiences

We approached our research problem using a variety of assessment tools designed to help us get a 360-degree view of our website performance, identify the user pain points and frustrations of our existing online services, better understand our existing and potential audiences, and review the larger online ecosystem in which our digital initiatives exist for users. Our research methods included the following:

  1. Review of existing data about Web users: We analyzed data from an ongoing website satisfaction survey (using about 13,000 responses collected by Foresee on Archives.gov and the Catalog in 2017), looked for trends within a set of nearly 2,000 research-related Contact Us form inquiries, and reviewed heatmaps for key webpages (using CrazyEgg).
  2. Stakeholder and audience interviews: We conducted individual and group interviews with nearly 80 NARA staffers including executive leadership, reference services staff, archivists, electronic records specialists, and digital engagement staff. We also conducted interviews with 19 external stakeholders, including genealogists, in-person researchers, online researchers, electronic records experts, and citizen archivists. Each interview lasted 30 to 60 minutes and was conducted face-to-face or via teleconference.
  3. Site audits: To better understand NARA’s digital ecosystem, we completed heuristic evaluations of the usability, information architecture, and navigation of our key websites (archives.gov, catalog.archives.gov, and history.gov).
  4. Search audit: We conducted a usability audit of the search results page on the Catalog and an analysis of the search queries submitted by users. We also analyzed the 300 most popular search terms on Archives.gov in 2017.
  5. Design audit: By examining eight NARA websites, we assessed how the design system and branding of our websites impact the user experience, with the aim to determine whether there is a user-centered need for a more cohesive design system across sites. The sites audited included: archives.gov, catalog.archives.gov, history.gov, ourdocuments.gov, jfklibrary.gov, fdrlibrary.org, obamalibrary.gov, and georgewbushlibrary.smu.edu.
  6. API audit: We reviewed the Catalog API and its documentation to understand the current capabilities. We also spoke with the API’s developers (NARA contractors) and maintainers (NARA staff) as well as external developers who have used the API.
  7. Analytics audit: We conducted an audit of Google Analytics data (12/01/2016-11/30/2017) to examine measurement configurations as well as to glean insights about site performance, user acquisition, popular content, and relationships between NARA’s Web properties.
  8. Comparative analysis: We reviewed the websites of organizations serving similar audiences. We compared features, functionality, and content to understand the wider online research services landscape. Sites reviewed included: U.K. National Archives, Library of Congress, Smithsonian, Europeana, New York Public Library Labs, Cooper Hewitt Smithsonian Design Museum, Digital Public Library of America, Ancestry, FamilySearch, Online Archive of California, Google Arts & Culture, and Stanford SearchWorks.
  9. E-mail records audit: In order to grapple with the challenge of displaying useable electronic records at scale, we reviewed three examples: Elena’s InBox (Sunlight Foundation), Kaine Email Project (Library of Virginia), and ePadd (Stanford University).
  10. User survey: We distributed a new Web-based survey (using SurveyMonkey) to further assess the experiences of research-oriented audiences interacting with NARA’s online services. The survey was distributed through pop-ups on Archives.gov (research and citizen archivist pages only), the Catalog, History Hub, and the 1940 Census website. We also promoted the survey by e-mail to our 29,000 Catalog newsletter subscribers (using MailChimp). A total of 1,847 responses were received from January 12 to 19, 2018 (about 58% of respondents came from the newsletter, 30% from Archives.gov, 7% Catalog, 3% 1940 Census website, and 2% History Hub).
  11. On-site observation: Three members of the research team (NARA contractors) took on the role of in-person researcher at NARA for a day (visiting the College Park, Maryland, location on January 9, 2018). The several-hour observation covered an end-to-end researcher experience, from obtaining our own researcher cards to requesting records to documenting our findings.
  12. User testing: We used an online user testing platform (Helio) to perform two task-based usability tests with nine users on Archives.gov, the Catalog, and History Hub; one test covered Archives.gov and the Catalog; the other tested History Hub. Users were asked where they would click to complete a task-based question and their click responses were recorded; users were also asked to explain why they clicked where they did. Test users were drawn from a pool who self-identified in the user survey described above.
  13. Social media analytics audit: We used NARA’s social media analytics tool (Synthesio) to analyze mentions, sentiment, and audience data across NARA’s 130 social media platforms (for September 2017-January 2018).
  14. SEO audit: We used MOZ Open Site Explorer and SEMRush’s SEO tools to assess the organic search engine performance standings of Archives.gov and History Hub.
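As an illustration of the kind of developer access reviewed in the API audit (item 6 above), the sketch below builds a keyword-search request against the Catalog API, such as the “World War I” query shown in Figure 3. The parameter names (`q`, `rows`, `offset`) are assumptions based on the v1 API’s general style; consult the official API documentation before relying on them.

```python
from urllib.parse import urlencode

# Hypothetical sketch: construct a keyword-search request URL for the
# National Archives Catalog API (v1). Parameter names are assumptions;
# check the current API documentation for the real schema.
BASE_URL = "https://catalog.archives.gov/api/v1/"

def build_catalog_query(keywords, rows=20, offset=0):
    """Return the full request URL for a keyword search."""
    params = {"q": keywords, "rows": rows, "offset": offset}
    return BASE_URL + "?" + urlencode(params)

# Example: the "World War I" search shown in Figure 3
url = build_catalog_query("World War I", rows=10)
```

A client would then fetch this URL and page through results by incrementing `offset`, which is the pattern external developers we interviewed described using.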

It is our belief that the thoroughness of this research process has given us a well-rounded, multi-perspective view of NARA’s online researcher audience and the digital context within which they (and NARA) operate. One caveat: our budget, and therefore our methodology, did not allow for deep market research, so our data are biased toward users who are already familiar with NARA and have used our online services at least once. Because landscape analysis and market research were limited, our findings do not necessarily reflect the needs and perspectives of audiences who are not yet familiar with us, and they provide only general indications of NARA’s opportunities or difficulties in reaching new audiences.

Findings and recommendations

Summary

This research project was a major undertaking and there are a lot of data points and insights to unpack. Here’s a quick summary of what we found:

  1. Online researchers are finding and using our services designed to support research…
  2. …But most users can’t complete their intended task
  3. People researching family history, in particular, have low satisfaction scores and need better support
  4. History enthusiasts and curious nerds are a growth opportunity
  5. All audiences require better just-in-time help and many also need an orientation to archival research
  6. Repeat customers aren’t any more satisfied, so the problem isn’t just a learning curve
  7. Expectations about item-level records and our massive scale are major challenges to be addressed

Major recommendations to consider implementing based on these findings include the following:

  1. Default to show digitized items first
  2. Provide streamlined search interfaces for specific research tasks
  3. Create in-line help and proactive guidance to get users to the right platform for their task
  4. Orient and guide audiences who are underprepared for archival research
  5. Provide a visual overview of holdings
  6. Support on-site researchers with better online services
  7. Make it easier to narrow or expand a search
  8. Provide welcoming pages for long-tail visitors
  9. Improve the API to nurture a community of artists and developers

Performance baseline

Our first goal for this project was to assess the current level of audience engagement. In 2017, NARA’s websites served more than 40 million users. A majority (60%) of visitors arrived via organic search, and the bounce rate was low across all properties. Returning visitors made up a small portion of traffic across all sites, especially on Archives.gov, where only 24% of visitors were returning. The average session duration for returning users (00:03:09) was just over a minute longer than for new visitors (00:02:06).

Figure 5: This bubble graph shows the relative traffic to a selection of NARA’s many websites, with Archives.gov generating the most usage per month. These stats are based on average monthly pageviews (AMP) in Google Analytics for December 2016-November 2017.

Audiences: Who are we serving?

Our next two goals for the research project were to understand NARA’s high-value audiences and to identify new, aspirational audiences.

Audience baseline: Many veterans, some genealogists and researchers

Keeping the customer’s needs front and center is critical when developing new digital tools. In 2016-2017, NARA developed a set of user personas to help establish a more robust and data-informed understanding of the individuals who engage digitally with the National Archives. We applied customer data from a variety of sources, including website analytics and the Foresee survey, to inform the creation of eight personas that represent our digital customers. The personas correspond to the roles identified in the Foresee survey data: Educator, Genealogist, Government Stakeholder, History Enthusiast, Museum Visitor, Researcher, and Veteran. (We added a “Curious Nerd” persona from our observations of social media audiences and their potential as a growth audience for our Web platforms.)

The Foresee survey has been in place on Archives.gov for more than a decade; we began surveying Catalog users with the same Foresee instrument in October of 2016. Below is Foresee data representing our audiences on all of Archives.gov and the Catalog for most of 2017. As you can see in the pie chart below, our biggest audience group represented in the Foresee data has been “Veteran or Veteran’s Family” at 43% of respondents. This category is made up of people looking to obtain their own (or a family member’s) military service record for the purpose of benefits, such as loans and burials. A segment of this audience is also interested in military history in general (not for the purpose of obtaining a specific service member’s record). The veteran group is followed in size by a researcher cohort of about 27%, including “Researcher” (17%) and “Genealogist or Family Historian” (10%).

Figure 6: Audience role results from Foresee survey data on all of Archives.gov, the Catalog, and History Hub for January 2017-November 2017.

Researchers are finding and using researcher-serving platforms

For the new survey conducted in January 2018 (using SurveyMonkey), we added new options to the question, “Which of the following best describes you?” Unlike the long-running Foresee survey, this measurement tool was only encountered by users of the platforms designed to serve researchers (specifically: the Catalog and its newsletter, History Hub, the 1940 Census website, and the research and citizen archivist sections of Archives.gov). Because of this recruitment method, we would expect to see a higher percentage of researchers (including genealogists/family historians) in the audience for these platforms—and we did (42% vs 27% for Foresee)! This is good news because it indicates that platforms designed for researchers are being found and used by the intended audience. It is important to note here that many of the other categories listed would easily fall into our broad definition of “researcher” as anyone seeking data or information about history, including students, educators, and authors.

Figure 7: Audience role results from SurveyMonkey survey, January 2018, with 1,847 respondents. This survey opportunity was provided to a smaller group of individuals who used the Catalog (or received its e-mail newsletter), Archives.gov/research or Archives.gov/citizen-archivist, History Hub, or the 1940 Census website.

Who are the “other” people?

Because the “Other, please specify” group came in just above 10% of the total number of respondents, it is worth taking a closer look at their write-in responses. Of the 201 people who selected “Other,” more than one quarter of them (56) would be characterized in our view as “Genealogist/Family Researcher” but were obviously reluctant to select that option. Several wanted to make it clear that their genealogical research wasn’t “paid” or “professional.” Others simply identified their role in a family (“daughter,” or “grandson”) or wrote in things like “looking into my family history,” or “just trying to research my family,” or “tracing my roots.” Several people who selected “Other” indicated that they fell into multiple roles (e.g., a veteran and a family historian). If we re-coded these respondents who were clearly interested in family history, the “Genealogist/Family Researcher” category would rise from 32% to 35% and “Other” would sink from 11% to 8%. (This re-coding would also bring our total researcher audience on these researcher-oriented platforms to nearly 45%.)
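The re-coding arithmetic above can be checked with a short calculation using the survey’s reported totals (1,847 respondents, 201 “Other” write-ins, 56 of which described family history research; the initial 32% Genealogist share is taken from the reported percentage, so counts are approximate):

```python
# Worked sketch of the "Other" re-coding described above.
total = 1847
other = 201
recoded = 56                       # "Other" responses moved to Genealogist
genealogist = round(0.32 * total)  # ~32% originally chose Genealogist

def pct(n):
    """Percentage of total respondents, rounded to a whole number."""
    return round(100 * n / total)

print(pct(genealogist + recoded))  # Genealogist rises to ~35%
print(pct(other - recoded))        # "Other" falls to ~8%
```

The same approach scales to re-coding other ambiguous write-in groups, such as the multi-role respondents noted above.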

(Note: One of the research goals of this project was to recommend ways to improve user experience on current digital platforms; these are highlighted throughout the rest of the paper with this symbol: ☑ Recommendation.)

Recommendation: Reconsider the words and content we use for people who are researching personal or family history. Does our content, which favors the term “genealogy,” match how these researchers think of themselves and their goals?

A number of “Other, please specify” respondents (32) said they were current or retired history professionals (archivists, historians, museum curators, exhibition designers, librarians), which makes up almost 2% of the total respondents. Another set of people (24) referred to themselves in casual browsing terms such as “curious,” “citizen,” “just a person,” or “just looking.” As noted above, several people (15) selected “other” because they felt they fell into multiple categories.

People conducting genealogical research are the least satisfied, followed by other researchers

Looking back to our longstanding Foresee survey, we know that people who identified as “Genealogists or Family Historians” gave the lowest combined satisfaction scores of all the roles, and only 24% were able to complete their task on Archives.gov or the Catalog.

Figure 8: Customer Satisfaction scores from Foresee survey data on all of Archives.gov, the Catalog, and History Hub for January 2017-November 2017.

It is obvious that Genealogists and Family Historians need clearer instructions and more support throughout their journey so they have a better customer experience. Part of the issue may be relative lack of experience with archival research, or research in general (this issue is described in more depth below). People researching family history need better guidance on how to define their research question and they need more help to figure out how to conduct their inquiries. These researchers may follow a distinctly different user path than other researchers, and other websites like Ancestry and FamilySearch provide scaffolded user interfaces and customized search capabilities (such as name search) that influence their expectations about research at NARA. (Note: One of our research goals was to review peer organizations for best practices, features, and trends. Throughout the rest of the paper, we have sprinkled examples from the wider landscape that shape user expectations as well as models from other cultural institutions that we can learn from.)

Recommendation: Convert helpful resources from PDF or PowerPoint into webpages that are more findable. Surface these resources and tool tips at the right moments to help people during their customer journey.

Recommendation: Use History Hub to connect people and provide support to researchers with varying degrees of family history research experience.

Figure 9: FamilySearch offers a user-friendly interface to search a collection by name, life event, location, and relationships.

Recommendation: Develop easy-to-use, streamlined experiences specifically customized to the needs of people conducting genealogical research.

Recommendation: Set clearer expectations on the type of genealogy research that people can do on NARA sites vs. on other genealogy sites. In particular, make it clearer when a user is about to click through to a site that requires paid membership for access. Further explore how to leverage our existing partnerships with Ancestry and FamilySearch to better serve researchers.

Researchers indicated that they need clearer indications that something they want to do (e.g., access a website, order a copy) will cost them money. They want to know about the cost of the task in advance, and they need clarity on how much it will cost. Audiences expected to get most services, including copies, for free, especially if they are U.S. citizens.

Recommendation: Provide clearer cost details upfront to visitors, especially when providing links to paid services such as Ancestry.com and Fold3.

Veterans and their families have average satisfaction scores

With a customer satisfaction score of 65 (see chart above), people who identify as a “Veteran or Veteran’s Family” are having an average experience when compared to other NARA audiences. As seen in the chart below, “research or request veterans service records” is the top task identified on Archives.gov and the Catalog. However, most veterans’ military service records and medical records are not available online due to privacy (instead, veterans and next-of-kin may order copies of these records). Right now, we are not making it clear to people searching the Catalog for service records that they are in the wrong place! And the content on Archives.gov/veteran, which provides instructions on ordering military service records, needs an overhaul to clarify what is available and how to obtain it.

Figure 10: Task results from Foresee survey data on all of Archives.gov, the Catalog, and History Hub for January 2017-November 2017.

To break down the analysis of this group of “Veteran and Veteran’s Family” users a bit more, we need to look more closely at their potentially divergent tasks and goals. Users need to follow a different path if they intend to request their own (or a family member’s) military service record versus if they want to research older military records for general or personal history research purposes. (Unfortunately, this may be another problem with our terminology—we found that users conflate military records and the military as a subject.) Those seeking military information (rather than a specific service member’s record of service) are often looking for a family member or friend, seeking photos or records such as ship logs related to their own military experience, or searching for documents related to historical events like wars or conflicts (e.g., the Civil War). People who fall into this category often want to search by a person’s name, date of service/conflict, military branch, general historical events or subjects, or key locations.

Recommendation: Create a wizard-style guide to help users get to the right place for military records. Set clear expectations for the types of military records available and how to find or request them.

Recommendation: Create “best bets” or “tool tips” in the Catalog search results to appear when it looks like someone is searching for their own/family member’s DD 214 (the certificate issued by the Defense Department upon a military service member’s retirement, separation, or discharge from active duty) or is trying to find the SF 180 (the Standard Form 180, Request Pertaining to Military Records).

Recommendation: Provide support for (or at least search tips on how to) search by name, date of service/conflict, military branch, historical events, or locations.

Figure 11: Ancestry.com offers a dedicated search interface for military records with links to a simple guide if a user needs help.

History enthusiasts and curious nerds are a growth opportunity

Foresee data have long indicated that Educators/Students, Museum Visitors, and History Enthusiasts are well satisfied—which is great news! These groups gave combined satisfaction scores in the mid 70s. These high satisfaction scores are part of the reason that these groups are not the focus of this specific research project, which is intended to help us improve services for less-satisfied audiences.

However, one of our research goals for this project was to identify new opportunities to engage audiences. We believe that history enthusiasts have the potential to become the largest audience segment across all of NARA’s digital properties (right now they are less than 10%). As Coburn (2016) argues: “A substantial online audience exists that desires collections engagement without a specific search request in mind. The responsibility, therefore, is with the museum to create user-centric experiences that propel casual, curious audiences into and through the collection.” In order to increase our reach and engagement with history enthusiasts, we must increase their opportunities for discovery, relevance, and serendipity.

Recommendation: Provide more opportunities for history enthusiasts to share their discoveries from NARA’s holdings and seek to move them along the engagement ladder to become citizen archivists and History Hub users.

Recommendation: Consider how to reuse content NARA created for teachers and students (such as DocsTeach.org) for a wider audience, to connect history enthusiasts with America’s past and put present issues into context.

Recommendation: Find opportunities for NARA to provide context around calendar events, news events, and culturally relevant topics by sharing related pieces of history.

Recommendation: Make “Best Bet” recommendations in search results based on weekly trends in users’ queries.

Figure 12: The Cooper Hewitt uses friendly language and sets expectations about what is available online. The “Random” button facilitates serendipitous discovery. Users can explore the collection by a number of predefined but fun categories like color, tag, people, and size.

Our statistics show that there is good engagement and overall positive sentiment toward NARA as an organization on social media. A previous analysis led us to create the Curious Nerd persona, representing audiences that serendipitously stumble upon NARA content via social media.

Recommendation: Monitor more history-related topics for which a NARA archivist could pull an interesting document, and reply to conversations about historic events with neutral but related information to join larger conversations.

Recommendation: Add schema markup on Archives.gov to connect NARA’s additional social media profiles to the organization and “get credit” for social media activity in organic search.
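As a sketch of what that markup could look like, the JSON-LD fragment below uses the schema.org Organization type with a sameAs list connecting social profiles to the organization. The specific profile URLs shown are illustrative placeholders and would need to be replaced with NARA’s verified accounts.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "GovernmentOrganization",
  "name": "National Archives and Records Administration",
  "url": "https://www.archives.gov/",
  "sameAs": [
    "https://twitter.com/USNatArchives",
    "https://www.facebook.com/usnationalarchives",
    "https://www.youtube.com/user/usnationalarchives"
  ]
}
</script>
```

Search engines use the sameAs links to associate the listed profiles with the organization entity, which is what allows social activity to be credited in features like the Knowledge Panel.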

Citizen archivists are highly engaged

Since its launch in 2011, NARA’s citizen archivist initiative has been successful at attracting public contributions. The goal of the program is to encourage the public to get involved and help raise the visibility and accessibility of NARA records through crowdsourcing activities such as tagging, commenting, and transcription. To date, citizen archivists have enhanced more than 170,000 pages, comprising more than one million total enhancements (tags, transcription saves, and comments). However, our strategic plan outlines an ambitious goal for the next few years: to have one million pages enhanced by citizen contributions to the National Archives Catalog by FY 2025.

This research project found that Citizen Archivists are highly engaged, with a passion for history, and are actively seeking opportunities to get involved. The task ahead is to raise awareness and increase recruitment of participants, moving History Enthusiasts, Curious Nerds, and Researchers up the ladder of engagement to become active Citizen Archivists.

Recommendation: Provide ways for Citizen Archivists to communicate with NARA staff and with each other, including using History Hub as a platform for connection. Deputize “power users” to help NARA staff manage the community.

Recommendation: Extend reach to students, including language students, for transcribing missions.

Recommendation: Enhance transcribing tools on the platform, including better visual status indicators for records moving through the transcription process. Provide stronger guidance and clearer standards documentation.

Figure 13: The Library of Congress “Beyond Words” crowdsourcing project provides a sleek and clear interface for participation.

Engaged citizens could be cultivated and invited to return

Our research uncovered an audience that had not previously been segmented out: the Engaged Citizen. This audience segment has been growing over the last 18 months. They are interested in how government works, in administration and agency policies (past and present), and in fact-checking news and media outlets. Examples of content this group is interested in include information about the Electoral College, executive orders, and the JFK Assassination files (which were viewed by more than one million people in a single day in October 2017). This audience likely overlaps with Curious Nerds; however, the Engaged Citizen is likely more proactive in seeking out resources from NARA.

Recommendation: Optimize landing pages that are likely to be popular with Engaged Citizens so that they include calls-to-action that support longer-term relationships: following our social media channels, subscribing to an e-mail newsletter, engaging in Citizen Archivist activities, or participating in History Hub discussions.

Recommendation: Feature relevant content and trending topics on home pages and landing pages.

Figure 14: The Library of Congress curates content and features trending searches on its homepage.

Journalists are under-served

Very few journalists self-identified on surveys. Journalists will be a key audience interested in “born digital” records as they are accessioned and made available by NARA.

Recommendation: Provide easy access to digital assets that are rights-free and available for use in stories.

Repeat customers aren’t happier

One surprising finding is that returning visitors were no more or less satisfied with the site than new visitors, regardless of role (ForeSee data). This suggests that no knowledge is gained on a first visit that makes subsequent visits easier to execute. Researchers are frequent users and repeat visitors of the website, but like all respondents, their satisfaction did not improve with subsequent visits. While part of the solution may be better background education for less seasoned researchers (see below), the root problem seems to be more than a learning curve. We need to look into the deeper, more fundamental problems with the usability and usefulness of our approach to digital services.

Challenge: Why can’t most people accomplish their task?

Perhaps the most discouraging finding of our research project is that less than half of all visitors to Archives.gov and the Catalog report being able to successfully complete the purpose of their visit. 32% report being unable to complete their task and an additional 25% report being able to partially complete their task. This means that the majority of people aren’t able to do what they need to do with our online services. (See top tasks in Figure 10 above.)

Figure 15: Task accomplishment results from ForeSee survey data on all of Archives.gov, the Catalog, and History Hub for January 2017-November 2017.

How can we help more people accomplish their tasks—or help people define research tasks in a way that they can be more easily completed with available resources? Here are a few ideas.

Default to show digitized records first

Users, especially those less familiar with research in cultural institutions, expect everything to be digitized. Our research found that people expected photos and video, in particular, to be digitized. We also found that the older a record is, the more users expected to be able to view a digital version. Even those who understand that not everything is available online (yet!) still want to see digital holdings first rather than be forced to sort through a mix of digitized and non-digitized results.

People visiting our Catalog are unclear that it presents information about all records, whether they are available online or not. Let’s say I’m interested in seeing photographs from the Great Depression. A search in the Catalog for “Great Depression” displays “All” results first and then offers tabs across the top to filter to just “Available Online” results or just “Images.” Usability studies indicated that these links did not read as “filters” to users; even so, it raises the question: if most audiences want digital items first, why don’t we flip the model, allowing users to expand to non-digitized items if they so choose?

Figure 16: A search for “great depression” in the Catalog does not put digitized images first. Instead, users must select “Available Online” to narrow the results.

Recommendation: Show digitized items by default in Catalog search results and experiment with different user interfaces for facilitating the narrowing and expansion of results.

Recommendation: Consider designing user interfaces for search results and browsing that are driven by an image-first exploration interaction, rather than small thumbnails accompanied by a lot of text.

For many researchers, digital access is their only touch point, as they will never be able to visit a NARA location in person. Only 25% of respondents to our SurveyMonkey survey have conducted in-person research at a NARA facility. 87% of respondents said the ability to conduct research online as opposed to in person was important or very important (72% said it was very important). 89% said it was important or very important to access digitized records for their research purposes (76% said it was very important).

Recommendation: Show curated groupings of digitized records based on popular research topics.

Figure 17: The New York Public Library features recently-digitized and popular collections organized by topic. The digital collection is also mobile-responsive.

Recommendation: Keep users up-to-date about digitization efforts, so they have clear expectations about what they will find online. Make it more clear what’s been digitized (and if it is available through a partner site or a NARA one) versus what’s only available in the physical archive.

Figure 18: The UK National Archives sets clear expectations about what is in the holdings (or not) and what is online (or not).

Recommendation: Prioritize the display of digitized records based on engagement. Per Coburn (2016), explore “adapting the system to more intelligently curate objects in response to ongoing cumulative user data. In theory, the system could make assumptions about compelling and non-compelling artefacts.”

Address item-level expectations

Thanks to Google, Amazon, and other consumer search and browse technologies that permeate our everyday lives, people come to NARA with what we call “item-level expectations.” We’ve already addressed that people expect every item to be digitized but, beyond that, they expect to find individual things and are often baffled by the hierarchy of records presented in the Catalog. People expect every item in a record to have a unique listing and that the item is searchable, often based on subject or name. They also expect to see things that are related to whatever they are viewing.

This problem has multiple causes:

  • Many novice researchers are unfamiliar with archives and how they are structured. (We address this issue in more detail below);
  • Even for those who are familiar with the levels of descriptions, there are usability problems with the Catalog’s visual cues for display of group, series, and item levels, making this structure hard to decipher;
  • Items often do not have sufficient metadata to support the kinds of searches that people are accustomed to using in a tool like Google (keyword or subject based);
  • A search-first user interface makes it harder to “drill down” or expand based on levels because the default display of records is a thumbnail-plus-metadata list sorted by relevance.

Recommendation: Create a more obvious visual representation of the hierarchy to which a record belongs, including how a specific record is related to its series. Make obvious links to authority records.

Recommendation: Create exploratory interfaces for following “related” links to other records, series, etc. to facilitate discovery.

Recommendation: Bring forward hierarchical metadata for an item to improve navigation. Leverage the parent metadata to support this type of linking from different archival hierarchical levels on the Catalog.
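One lightweight way to leverage parent metadata is to walk an item’s parent links upward and render a breadcrumb trail for display. The sketch below assumes a simplified record shape (a nested “parent” field) rather than the Catalog’s actual schema, and the titles shown are hypothetical.

```python
def hierarchy_breadcrumb(record):
    """Walk 'parent' links upward and render a Record Group > Series > Item
    trail that makes an item's place in the archival hierarchy visible."""
    trail = []
    node = record
    while node:
        trail.append(node.get("title", "Untitled"))
        node = node.get("parent")
    # Reverse so the trail reads from the broadest level down to the item.
    return " > ".join(reversed(trail))

# Illustrative item; titles and structure are hypothetical.
item = {
    "title": "Photograph of a CCC work crew",
    "parent": {
        "title": "Photographs of Civilian Conservation Corps Activities",
        "parent": {"title": "Record Group 35"},
    },
}
print(hierarchy_breadcrumb(item))
# → Record Group 35 > Photographs of Civilian Conservation Corps Activities > Photograph of a CCC work crew
```

Rendering each segment of such a trail as a link would give users both orientation (where am I?) and navigation (how do I move up a level?) in one interface element.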

Figure 19: The UK National Archives makes archival hierarchy visually apparent.

Recommendation: Explore applications of artificial intelligence to support the metadata needs required for true item-level search and exploration. (See Areas for Further Research for more on this topic.)

Orient and guide audiences who are underprepared for archival research

Given that NARA serves 40 million people annually via our websites, we know that there is a wide spectrum of experience and expertise among online researchers who use our services. While our research methodology did not support rigorous cross-comparisons between researcher experience levels, we did notice trends that indicate a large percentage of our online researchers are underprepared for research at the National Archives. Here are some commonly cited questions and concerns.

How do I get started?

There is no one right way to tackle a research problem. Novice researchers under-appreciate that doing research is like putting a puzzle together—there are different angles and perspectives to consider while diving into a topic. These researchers also don’t know what they don’t know. To have a fruitful research experience, they need both prepared resources and personalized guidance from NARA experts. The trick is getting them that support when they need it most, whether at the beginning, middle, or farther along in their research journey.

Recommendation: Experiment with using wizard-type interfaces to direct people in more personalized (and less overwhelming) ways than a one-size-fits-all research guide.

Figure 20: The U.K.’s Government Digital Service defines a useful “Check before you start” pattern that helps guide people preparing to use one of their services based on the answers to a small set of questions.

What is an archive?

New researchers aren’t clear on the nature of an archive. They don’t understand the scope of our holdings or how archival records are organized. Some audiences don’t understand that an archive is not like a library—one can’t ask for everything on a topic, for example. These researchers need assistance to narrow their focus.

We know that many users need a better orientation to how archives work, how items are stored and retrieved, the depth and size of NARA’s collection, and the fact that relatively few items are digitized. This type of information is available on the Archives.gov site, but it is often hard to find or is delivered at the wrong moment in a customer’s journey. We also know that our labels and language are filled with archivist jargon that many of our users do not understand.

Recommendation: Provide better connections between the instructional resources on Archives.gov and the Catalog, where their guidance would be implemented.

How do I navigate NARA’s holdings?

Even those audiences that may have some archival research experience don’t necessarily understand NARA’s holdings. They may not understand that 1) our holdings are from government agencies (and not other possible historical sources); 2) there may be gaps in the records; 3) current government records and news are not included; 4) item-level information (e.g., individual photos) may not be retrievable by keyword search; and 5) not every collection has been described to the item level.

Even if researchers do have a basic understanding of what NARA holds, without knowing the history of how our governmental agencies have operated and evolved, it’s very difficult to start using the Catalog. You need to know how an agency/department was structured, what records they used in regular operations, and what types of subjects they have touched.

Field labels in the Catalog and language on Archives.gov are both filled with archivist jargon. There are some useful resources for deciphering archival terminology and NARA’s specific usage and practices.

Recommendation: Put these existing resources into user-friendly, plain-language formats and make them available where and when they are needed within a customer’s journey.

Recommendation: Use friendlier language instead of specialist language, whenever possible. When specific archival terms must be used, offer helper text or tooltips with definitions, especially within the Catalog.

Provide a visual overview of holdings

It’s hard to learn which records are at the National Archives, what the major record groups are, and what types of items might they contain. As described in Whitelaw (2015), an “overview” interface can provide a “rich but workably compact representation of the collection as a whole” that can help users get a sense of the “shape of the collection.”

Recommendation: Experiment with “overviews” and other data visualizations of NARA’s holdings, including what’s digitized, items organized by record group, or displays by date or geographical location.

Figure 21: The Digital Public Library of America visualizes data about NARA’s records on a timeline.

Consider how to deliver content by subject

It’s hard to search NARA’s holdings by topic. A government agency’s files might be organized by “memorandums” and “purchase orders” rather than “nuclear powered submarines” or the “outer space treaty,” although these subjects are mentioned in memorandums from various agencies. Top on-site search queries for Archives.gov are mostly subject-based, including people (John F. Kennedy and Hitler); the “DD 214” queries relate to military service records. While topic pages exist on Archives.gov, their readability could be improved with shorter sentences, less jargon, and more streamlined formatting (e.g., consistent styles for headings, bullets, and links to resources) that facilitates scanning and reduces cognitive load. Users find the number of disparate or duplicative Web pages on popular subject areas—such as the founding documents, specific wars, or ethnic heritage groups—confusing.

Figure 22: Top search queries on Archives.gov in 2017, based on Search.gov data.

Recommendation: Explore applications of artificial intelligence to support the metadata needs required for a subject search. (See Areas for Further Research for more on this topic.)

Recommendation: Explore Citizen Archivist missions focused on facilitating subject searches in the Catalog.

Recommendation: Offer subject-based guides on Archives.gov that are written in plain-language and styled consistently with bullets and headlines to assist with scanning.

Recommendation: Develop a portal-based approach to consolidating information about a given topic on Archives.gov.

Recommendation: Introduce subject guides and topic pages as “Best Bet” results in searches.

Recommendation: Provide better Web governance and content strategy to manage and maintain content, including eliminating duplicate and outdated content; use this content framework to identify opportunities to add new content that is timely and useful.

Make it easier to narrow or expand a search

The Catalog’s simple landing page seems straightforward, but it hides a cluttered and overwhelming experience once a query is submitted. As mentioned previously, the results page has a large number of filters with different styles that are difficult to navigate. It is not clear that the terms listed at the top of the results page are filters. It is not possible to sort Catalog results by date. It is not clear how the relevance of results is determined. Even professional researchers and NARA staff report relying on Google search to find items in the Catalog.

Recommendation: Test alternate user interfaces for expanding and narrowing a search.

Figure 23: On the search results page of Stanford’s SearchWorks catalog, all filters are shown in accordion menus on the left-hand side of the page. SearchWorks highlights the physical description of the archival item on the preview card that appears on the search results page. The individual item’s location is also clearly visible on the item preview, as well as any associated finding aid. The system also recommends related items.

Support in-person research visits with better online services

Some portion of online researchers visit our research rooms (about one-quarter, according to our SurveyMonkey results). Online services could provide better support to researchers prior to their on-site visit. Users reported being unable to determine the physical location of an item from the Catalog detail page. Tools to prepare for a visit, such as the ability to save searches or schedule record pulls, are not available online. Information explaining the basics of a research visit can sometimes be found on Archives.gov, but it is not presented in a consolidated or easy-to-access way (e.g., orientation is offered as a downloadable PowerPoint).

Recommendation: Update the visual presentation of the location of records in the Catalog to make it more obvious where they are located and whether or not they are available online.

Recommendation: Make it worthwhile to create a user account in the Catalog. For example, offer the ability to save searches, create a pull request for an upcoming in-person visit, or receive notifications when a requested item becomes available. With such a system, NARA could also programmatically make better recommendations on research methods, the types of items most useful for certain kinds of research, related records, and more (useful to online and on-site researchers alike).

Figure 24: The Library of Congress Online Catalog saves your recent user searches.

Recommendation: Optimize the Catalog site to perform better on mobile devices, which may be used in even higher proportions by in-person researchers on the go.

Additional opportunities to engage audiences

Provide welcoming landing pages for long-tail audiences

Archives.gov has a very broad presence in organic search results, spanning almost one million keywords. Very few of these are searches for branded terms such as “NARA” or “National Archives”; most are topic-based searches. This underscores the hypothesis that many people don’t know what NARA’s holdings contain, or that Archives.gov may be a resource for their research. The graph below of the number of queries per search term neatly illustrates the “long tail” (Anderson, 2009) nature of NARA’s reach due to a wide variety of niche content.

Figure 25: Number of queries per search term, from Google Search Console.

There is no “front door” experience for NARA users. Only 7% of visitors enter Archives.gov via the homepage. Most arrive on pages surfaced in organic search results (53%) or via referral links from outside websites (18%). Entry points (landing pages) are distributed throughout the site. The military service records page is the single most viewed page across the suite of NARA websites, yet it represents only 3% of total page views.

Recommendation: Make sure wayfinding and rescue content is available and clear on any page of the site, not just the homepage.

Recommendation: Find ways to consolidate related but redundant content to better support clear task-related user paths through the site, and get a better read on what type of content people find most useful.

Recommendation: Take control of the Archives.gov appearance in Google search results. Add meta descriptions for priority pages that have a higher number of SERP (Search Engine Results Pages) impressions to capture more of the existing search traffic with more compelling and informative information. Claim the Google+ Business profile for NARA and update the organization’s information, and suggest edits to the Wikipedia page about NARA to improve the information that is displayed in Google’s Knowledge Panel.

Figure 26: Google’s Knowledge Panel for a search for “National Archives” as of February 2018.

Recommendation: To increase traffic to the Catalog, further optimize the detail pages for organic search through programmatic updates that leverage existing metadata.

Push users across websites in a strategic and coherent manner

Audiences are confused about the purpose of different websites, are unaware (or unsure) that they exist, or are unclear about their relationship to the larger NARA organization. Users end up searching for information in the wrong place or missing helpful content because it isn’t available just-in-time.

Figure 27: This diagram shows possible user connections and pathways between NARA websites.

Recommendation: Prioritize pathways across sites to support specific customer journeys. Move beyond personas and focus on supporting specific user tasks. Consider transactional, exploratory, and investigative types of user journeys. Operationalize these user journeys across NARA to inform UX enhancements.

Recommendation: Implement a design system that brings consistency across websites. A consistent footer or global utility header would make the connection to NARA clearer and provide a more coherent user experience across websites.

Increase promotion and integration of History Hub to support researchers

History Hub has lacked visibility, and its participation numbers suffer due to lack of awareness and promotion. One obvious solution is to better integrate it as a helper for Catalog users who need assistance or Archives.gov users who aren’t sure what to do next.

Recommendation: Rethink the workflows for contacting NARA staff on Archives.gov and the Catalog. Consider where it might be appropriate to swap out a Contact Form or an e-mail address/phone number for a link to History Hub.

Figure 28: The Catalog displays snail mail, phone, fax, and e-mail contact information but does not promote History Hub as a resource for researchers. This screenshot shows the contact information at the item level.
Figure 29: The UK National Archives offers live chat, e-mail, and phone support at the right time in a customer’s journey.

Recommendation: Recruit researchers to participate in History Hub. Provide opportunities for online and in-person researchers to easily share their research discoveries and images with others (website, social, etc.).

Recommendation: Encourage Citizen Archivists to connect with one another, and with NARA staff, on History Hub.

In addition to increased marketing and promotion, it is clear from our usability studies that improvements to the user interface could decrease confusion and lower the barrier to participation. Since our testing, the layout and design of the homepage and navigation have been updated to address several issues: users were not encouraged to scroll below the fold (and therefore missed recent content), the “Get Started” guide has been moved to the top of the page, the confusing “People” and “Places” labels have been clarified, and users are now encouraged to explore existing content before clicking “Ask a Question.”

Recommendation: Conduct additional usability tests on the new design.

Recommendation: Test the impact of marking questions as answered. Users prefer to know whether something has been answered; however, marking a question as “answered” has in the past ended dialogue on that thread, which can be detrimental to arriving at a complete answer for complex questions.

Improve the Catalog API to serve artists and developers (including our own)

The National Archives Catalog API is a read-write Web API for the National Archives Catalog. This API can be used to perform fielded search of archival metadata, bulk export of metadata and digital media, and post contributions to records. The dataset includes archival descriptions, authorities, digital media, Web pages, and public contributions (such as tags, transcriptions, and comments).

If we seriously want to engage artists and developers with our open access data, we need to state the value proposition more clearly and provide better scaffolding and documentation for their involvement. The Catalog API, in its current state, is functionally similar to the Catalog Web user interface. As such, it is best suited for creating similarly search-focused client applications or possibly as a tool for scraping Catalog data. Without enhancements, it will be challenging for developers to create other types of data-driven applications, such as timelines or data visualizations. Limitations of the Catalog website are also reflected in the API, such as insufficient metadata and the lack of obvious connections between records.
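To make the “search-focused client” point concrete, the sketch below composes a fielded-search URL and then filters a response down to digitized records (echoing the digitized-first recommendation above). The endpoint path, parameter names, and response shape are simplified assumptions for illustration, not the API’s documented schema, and the sample response is mocked rather than fetched.

```python
# Hypothetical sketch of a search-focused client built on the Catalog API.
# Endpoint, parameters, and record fields below are simplified assumptions.
from urllib.parse import urlencode

API_BASE = "https://catalog.archives.gov/api/v1"  # assumed base URL

def build_search_url(query, result_types=("item",), rows=20):
    """Compose a fielded-search URL for the Catalog API."""
    params = {"q": query, "resultTypes": ",".join(result_types), "rows": rows}
    return f"{API_BASE}?{urlencode(params)}"

def digitized_only(results):
    """Keep only records that carry at least one digital object."""
    return [r for r in results if r.get("digitalObjects")]

# Mocked response records (no network call); field names are illustrative.
sample = [
    {"naId": "123", "title": "Memo", "digitalObjects": [{"url": "..."}]},
    {"naId": "456", "title": "Purchase order"},  # not digitized
]
print(build_search_url("great depression"))
print([r["naId"] for r in digitized_only(sample)])
# → ['123']
```

Even this small sketch shows why richer metadata matters: a client can only filter or visualize on fields the API actually returns.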

Recommendation: Create a “Lab” to promote and showcase experimental and creative projects. Create data missions and competitions to support and demonstrate innovative uses of NARA data.

Figure 30: Europeana provides multiple APIs to support the needs of developers interested in using their data.

Recommendation: Add additional endpoints to expose representations of discrete resources and collections of those resources. Include hypermedia to describe relationships and functionality in the response data and allow temporal and spatial metadata associated with resources to be accessed easily in the results.

Recommendation: Clean up the overall API profile to improve its general usability: adopt standard schemas and formats to increase interoperability; improve representational data through consistent modeling of resources and metadata; use better and more consistent attribute naming; flatten the data structure; and prune esoteric data from results.
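As a sketch of what “flattening the data structure” can mean in practice: collapse deeply nested response objects into dotted keys so client code can address fields directly instead of traversing several levels. The nested record shown is hypothetical.

```python
def flatten(record, parent_key="", sep="."):
    """Collapse nested dicts into a single level with dotted keys,
    e.g. {"item": {"naId": "123"}} -> {"item.naId": "123"}."""
    flat = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            # Recurse into nested objects, carrying the key path along.
            flat.update(flatten(value, new_key, sep=sep))
        else:
            flat[new_key] = value
    return flat

# Hypothetical nested response fragment:
nested = {"description": {"item": {"title": "Memo", "naId": "123"}}}
print(flatten(nested))
# → {'description.item.title': 'Memo', 'description.item.naId': '123'}
```

Whether flattening happens server-side or is left to clients is a design choice; doing it server-side spares every consumer from re-implementing the same traversal.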

Recommendation: Build and nurture a community of developers by applying the following best practices:

  • Improve the API documentation and leverage tools to automate documentation;
  • Share the roadmap for future development more openly;
  • Establish and document versioning and deprecation policies that are accessible to developers;
  • Identify and fix performance bottlenecks;
  • Lower the barrier to entry by providing examples of other applications built with the API and helper libraries;
  • Increase outreach to developers through online communities (e.g., GitHub) and technology conferences.

Areas for further research

What new user interfaces are possible with our API—and which work best for discovery and exploration?

In addition to the discovery work outlined in this paper, NARA is also currently developing several working prototypes using the Catalog API. The goal is to put working software in front of actual users and, along the way, discover the real capabilities (as well as uncover the limitations) of our existing infrastructure and metadata to support new user interfaces for exploration of our resources. Prototypes include a voice-enabled Alexa skill for Amazon Echo, a Facebook Messenger chatbot for tagging images, a data visualization that provides an “overview” of NARA’s holdings, a date-based “When Am I?” Web game, a CAPTCHA module leveraging NARA images, and an open-source framework for placing images from (as well as references back to) the Catalog on any Drupal website.

Do we need to update our wording and navigation to reflect user identities?

We would need to dig in more to better understand how people want to identify, whether navigation and terminology on the site should be updated to reflect that identity, and whether there are other behavior or motivation based categories we should be using instead to understand and better serve our users.

How can we improve customer service in a multi-channel environment?

Should we implement a ticketing system to track and manage in-bound research inquiries? With History Hub, e-mail, snail mail, phone, Twitter, Facebook, and more, how can we best manage the customer relationship across time and staff touchpoints, and better measure the impact and efficiency of our efforts?

How can artificial intelligence help us achieve our goals to better support research?

Through stakeholder interviews, we uncovered a tension between those responsible for archival work and those responsible for digital access. A balance needs to be reached between maintaining high archival standards and providing timely access. On a related note, processes, standards, and infrastructure must be optimized so that information can be released to the public quickly. As Coburn (2016) explains, “An online collections experience, whether novel or traditional, will ultimately succeed or fail as a direct result of the quality of the data it presents.” Our research highlighted the desire of researchers to search by name, topic, and date—and to see item-level results for these inquiries. With billions of records in the backlog, one question to consider is how artificial intelligence (AI) might help cultural institutions provide the level of metadata that researchers and developers require at the item level. How might machine learning be used to make searches more relevant based on user behaviors and pathways through our content? Can chatbots provide useful customer service for common researcher requests for guidance? How might these AI applications help us scale up the human-powered expertise of our staff? There are many possible avenues of inquiry here; the trick will be figuring out how to get our feet wet with a quick win while exploring larger-scale projects that could have the greatest impact on our ability to conduct our core business going forward.

How can we help people find the needle they want in an ever-increasing haystack?

A massive challenge for NARA is how to balance access to the scale of its holdings with usability and findability. As part of our strategic plan, we aim to digitize 500 million pages of records and make them available online to the public through the National Archives Catalog by 2024. A review of ForeSee data in 2016-2017 revealed that customer satisfaction scores for the Catalog were consistently declining, despite the fact that no user interface changes had been made to the site. The cause? During this period of time, we uploaded millions of new digital objects. Our hypothesis is that as the number of digitized items grew, customer satisfaction declined because users had even more options to sift and sort through. The moral of the story? The more that is made available online, the more scaffolding and assistance we may need to provide to help people find that needle in the haystack.
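One lightweight form of scaffolding is to group a flat result list into labeled buckets so a researcher scans a handful of categories instead of thousands of undifferentiated rows. A minimal sketch, using decade buckets as the grouping (the titles, years, and bucketing choice are illustrative assumptions, not Catalog behavior):

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def bucket_by_decade(hits: List[Tuple[str, int]]) -> Dict[str, List[str]]:
    """Group (title, year) search hits into decade-labeled buckets.

    Grouping is one answer to the needle-in-a-haystack problem: as the
    haystack grows, labeled piles help users discard most of it quickly.
    """
    buckets: Dict[str, List[str]] = defaultdict(list)
    for title, year in hits:
        decade = f"{year // 10 * 10}s"   # e.g., 1942 -> "1940s"
        buckets[decade].append(title)
    return dict(buckets)
```

The same pattern applies to any facet (record group, creating agency, location); the design question is which groupings match how researchers actually think about the records.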

Figure 31: Satisfaction scores (via the ForeSee survey) for the National Archives Catalog fell 16 points in 7 months.

What is the best way to provide access to born-digital records?

Our strategic plan marks the beginning of the transition to a fully electronic government, with a deadline of 2022 for all agencies to move to fully electronic recordkeeping, to the maximum extent possible, for permanent and electronic records of all types. As more and more electronic records become available in the years to come, the expectations of serious researchers and journalists will change. New challenges will include providing software to open these records, tools to sift through the mass of results generated, and ways to refine searches by record format. ePADD offers a good starting point as a potential model. The Task Force on Technical Approaches for Email Archives will soon issue a report (currently in limited-distribution draft) that summarizes the state of e-mail preservation and access, including recommendations on the important role of APIs to facilitate interoperability.
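Refining searches by record format presupposes that each born-digital object carries format metadata. A minimal sketch of building a format facet, assuming we only have filenames to work from (real holdings would carry format identification from ingest rather than relying on extensions; the function name is an illustration):

```python
import mimetypes
from collections import Counter
from typing import Dict, List

def format_facets(filenames: List[str]) -> Dict[str, int]:
    """Count born-digital results by guessed MIME type.

    A facet like this lets a user narrow a large result set to, say,
    only spreadsheets or only PDFs. Extension-based guessing is a
    fallback; format metadata recorded at ingest is more reliable.
    """
    counts: Counter = Counter()
    for name in filenames:
        mime, _ = mimetypes.guess_type(name)
        counts[mime or "unknown"] += 1
    return dict(counts)
```

For e-mail specifically, format identification is only the first step; rendering and search within message bodies are the harder problems the Task Force report addresses.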

How can we best measure what matters?

In order to provide more focus for our digital efforts, NARA needs to develop a new, user-oriented measurement plan that captures audience data across channels. With so many numbers available at our fingertips, we must dedicate ourselves to building a streamlined set of key performance indicators that give us diagnostic evidence of where we are succeeding or failing to meet our strategic objectives. As we build out this framework, we are looking at ways to beef up our staff skill sets to better support measurement, analysis, and iterative testing. A key challenge that remains is how to move from reporting metrics to implementing actionable insights. Another challenge to explore is how best to operationalize the data and analysis across the organization so that all staff feel connected and empowered to make decisions based on what we know about our users.
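To move from reporting metrics to actionable insight, each key performance indicator needs a precise, repeatable definition. A minimal sketch of one candidate KPI, task-completion rate from survey responses (the field name `task_completed` is an assumption for illustration, not NARA's actual survey schema):

```python
from typing import List, Optional

def task_completion_rate(responses: List[dict]) -> Optional[float]:
    """Share of survey respondents who report completing their task.

    Defining the KPI as code makes it repeatable across reporting
    periods, so trend lines compare like with like.
    """
    if not responses:
        return None
    completed = sum(1 for r in responses if r.get("task_completed"))
    return round(completed / len(responses), 3)
```

A small dictionary of such functions, one per KPI, is one way to keep the indicator set streamlined and auditable as it spreads across the organization.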

Conclusions

User experience research like the project outlined above is, in some ways, a response to the challenges of the current state of libraries, archives, and the government—the need to “do more with less.” The National Archives will face challenges in implementing the recommendations outlined in this paper. We are working in a resource-constrained environment, in terms of funding as well as staffing and access to cutting-edge technology skill sets. To make the smartest use of available resources, we must make tough decisions about our priorities. Another challenge we face is with our development and procurement frameworks, which were not designed with responsive or agile methodologies in mind. Like all organizations, we must fairly balance diverse stakeholder needs, including the workloads of archival reference staff as well as the demands of the diverse public audiences we serve. Perhaps the biggest challenge to overcome is the pressure that massive scale puts on our infrastructure. Working with holdings in the billions creates tension between the need to simply make access happen and providing a truly great user experience.

While many of the findings and recommendations outlined in this paper are specific to the National Archives, my hope is that the methodology and analysis are useful to any cultural institution looking to support scholarly and casual inquiry into their collections and staff expertise. The online environment shaped by companies like Google, Amazon, Ancestry, Facebook, and Twitter makes our jobs more challenging and more exciting each day. As digital professionals, we owe it to our staff experts to help make their work processes efficient and enjoyable, but we must also be willing to move the needle when new opportunities arise to serve more people and to serve them better.

I know that I have benefited greatly throughout my career from the peeks behind the curtain that other museums, libraries, and archives have provided as they seek to better understand their successes and failures from a user’s perspective. It is in that spirit that I have transparently shared our research process and findings, warts and all. There is much to celebrate regarding the outcomes of our efforts to date, but there is also still so much to do to improve. I look forward to sharing updates with you on the NARAtions blog about our progress towards increasing customer satisfaction and engagement with online researchers.

Acknowledgements

Thanks to Jason Clingerman, Digital Public Access Branch Chief at the National Archives, who oversees the National Archives Catalog and was my partner in designing and facilitating this research project.

My gratitude also belongs to the many NARA staffers who provided their expertise and input, including Pamela Wright (NARA’s Chief Innovation Officer) and the staff of the Office of Innovation as well as our partners in the Office of Research Services, the Office of Presidential Libraries, the Office of the Chief of Staff, and Information Services. Thanks especially to our contractors at Threespot for their work conducting the research and analyzing the results: Sid Barcelona, Lizzy Cederberg, Liz Lord, Rachel Mundstock, Liz Ott, Jamielyn Smith, and Jeff Tomlinson (Four Kitchens).

References

Anderson, C. (2009). The Long Tail in a Nutshell. Consulted February 12, 2018. Available http://www.longtail.com/about.html

Coburn, J. (2016). “I don’t know what I’m looking for: Better understanding public usage and behaviours with Tyne & Wear Archives & Museums online collections.” MW2016: Museums and the Web 2016. Published January 29, 2016. Consulted February 11, 2018. Available https://mw2016.museumsandtheweb.com/paper/i-dont-know-what-im-looking-for-better-understanding-public-usage-and-behaviours-with-tyne-wear-archives-museums-online-collections/

Cole, D. (2017). “Cultural Institutions Invited to Participate in History Hub.” History Hub. Published December 7, 2017. Consulted February 12, 2018. Available https://historyhub.history.gov/groups/about-history-hub/blog/2017/12/07/cultural-institutions-invited-to-participate-in-history-hub

Digital Analytics Program. (2018). Analytics Dashboard. Consulted February 11, 2018. Available https://analytics.usa.gov/national-archives-records-administration/

Ferriero, D. (2011). “The Wisdom of the Crowd.” AOTUS Blog. Published August 3, 2011. Consulted February 12, 2018. Available https://aotus.blogs.archives.gov/2011/08/03/the-wisdom-of-the-crowd/

Ferriero, D. (2011). “Together We Can Do It!” AOTUS Blog. Published December 23, 2011. Consulted February 12, 2018. Available https://aotus.blogs.archives.gov/2011/12/23/together-we-can-do-it/

Ferriero, D. (2017). “Improving Customer Experience with Digital Personas.” AOTUS Blog. Published June 15, 2017. Consulted February 11, 2018. Available https://aotus.blogs.archives.gov/2017/06/15/improving-customer-experience-with-digital-personas/

Mitroff, D., and K. Alcorn. (2007). “Do You Know Who Your Users Are? The Role Of Research In Redesigning sfmoma.org.” In J. Trant and D. Bearman (eds.). Museums and the Web 2007: Proceedings. Toronto: Archives & Museum Informatics. Published March 1, 2007. Consulted February 11, 2018. Available http://www.archimuse.com/mw2007/papers/mitroff/mitroff.html

National Archives. (1999). “Glossary; adapted from Maygene F. Daniels, Introduction to Archival Terminology.” Published in A Modern Archives Reader: Basic Readings on Archival Theory and Practice (National Archives Trust Fund Board, 1984): 336-42. Consulted February 12, 2018. Available https://www.archives.gov/research/alic/reference/archives-resources/terminology.html

National Archives. (2002). The Lifecycle Data Requirements Guide. Consulted February 12, 2018. Available https://www.archives.gov/research/catalog/lcdrg

National Archives. (2017). Citizen Archivist Frequently Asked Questions. Consulted February 12, 2018. Available https://www.archives.gov/citizen-archivist/faqs

National Archives. (2017). Introducing History Hub (video). Consulted February 12, 2018. Available https://historyhub.history.gov/docs/DOC-1143

National Archives. (2017). National Archives Digital Personas. Consulted February 12, 2018. Available https://www.archives.gov/digitalstrategy/personas

National Archives. (2017). Vision and Mission. Consulted February 12, 2018. Available https://www.archives.gov/about/info/mission.html

National Archives. (2018). Archival Descriptions Statistics. Consulted February 12, 2018. Available https://catalog.archives.gov/statistics

National Archives. (2018). National Archives for Developers. Consulted February 12, 2018. Available https://www.archives.gov/developer

National Archives. (2018). National Archives Strategic Plan FY 2018-2022. Consulted February 12, 2018. Available https://www.archives.gov/about/plans-reports/strategic-plan

National Archives. (2018). Social Media and Digital Engagement. Consulted February 12, 2018. Available https://www.archives.gov/social-media

Tasich, T., and E. Villaespesa. (2013). “Meeting the Real User: Evaluating the Usability of Tate’s Website.” In N. Proctor and R. Cherry (eds.). Museums and the Web 2013. Silver Spring, MD: Museums and the Web. Published January 31, 2013. Consulted February 11, 2018. Available https://mw2013.museumsandtheweb.com/paper/meeting-the-real-user-evaluating-the-usability-of-tates-website/

Treptow, T., and K. Kaiser. (2015). “When to ask and when to shut up: How to get visitor feedback on digital interactives.” MW2015: Museums and the Web 2015. Published January 31, 2015. Consulted February 11, 2018. Available https://mw2015.museumsandtheweb.com/paper/when-to-ask-and-when-to-shut-up-how-to-get-visitor-feedback-on-digital-interactives/

U.K. Government Digital Service. (2017). Check before you start. Consulted February 12, 2018. Available https://www.gov.uk/service-manual/design/check-before-you-start

Whitelaw, M. (2015). “Generous Interfaces for Digital Cultural Collections.” Digital Humanities Quarterly 9(1). Consulted February 11, 2018. Available http://www.digitalhumanities.org/dhq/vol/9/1/000205/000205.html


Cite as:
Allen-Greil, Dana. "Serving researchers in a self-service world." MW18: MW 2018. Published February 13, 2018. Consulted .
https://mw18.mwconf.org/paper/serving-researchers-in-a-self-service-world/