Posted on August 8, 2011.
by Margaret Ostrander, Minnesota Chapter, IT & KM Divisions
Sometimes being future ready means growing our roots in new directions. At the core of our professional ethos is a service ethic focused on the needs of our users.
A recent enterprise search implementation at Thomson Reuters provides a compelling case study in keeping users at the heart of everything we do. One striking indicator of what this project delivered is the turnabout in our before-and-after metrics: prior to the project, 90% of users rated their search experience strongly negative; after the launch of the new search, 90% rated it strongly positive.
Our users’ delight can be attributed directly to rigorous user-focused methods of testing and validating the search experience.
User Observation: Design with the user in mind
User observation focuses on how typical end users interact with the user interface features of the search engine. In our case, relevance of search results is not studied in this portion of testing. Instead, the goal is to learn how people actually use search results pages, identifying features that work for users and those that are barriers. User experience testing also opens the door to unanticipated feedback from users about their preferences in an enterprise search tool.
A major tenet underlying user observation is that analyzing what users actually do, rather than what they say, provides a more actionable picture of their needs, preferences, and stumbling blocks, and thus a sound basis for designing an easy-to-use system.
After brief warm-up questions, a moderator guides the user, prompting for reactions, thoughts, insights and feedback. While scripted search scenarios provide valuable comparative data across all users, the most robust and valuable information is mined from searches that users come up with themselves. This portion of user observation offered the team clear insights and a hands-on, real understanding of both the users and their information needs.
Relevancy Testing: Optimizing search results from the user’s point of view
Clearly, the most important question to ask of any search engine is how valuable its results are to the user. The actual search behavior of enterprise users formed the basis for carefully selecting a mix of test queries for relevancy testing. The majority of queries were intentionally drawn from the "short head" of the search logs, the pool of our users' most common queries, balanced with queries from the "long tail" and with examples seen in user observation sessions.
Specific information needs of users were associated with each query so that search results could be judged accurately and consistently. Again, these use cases were defined based on real-life examples. Selecting a good group of queries for relevancy testing is as much an art as a science, and the search team found this aspect of relevancy testing to be particularly challenging and interesting.
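The mix of short-head and long-tail queries described above can be sketched in a few lines. This is only an illustration, not the project's actual selection process, and the query log below is entirely hypothetical:

```python
from collections import Counter

# Hypothetical query-frequency log; a real enterprise log would be far larger.
query_log = Counter({
    "benefits enrollment": 950, "expense report": 720, "holiday schedule": 610,
    "vpn setup": 480, "org chart": 310, "travel policy": 150,
    "office parking map": 12, "sabbatical eligibility": 7, "legacy crm export": 3,
})

def build_test_set(log, head_n=5, tail_n=3):
    """Mix frequent 'short head' queries with rarely issued 'long tail' ones."""
    ranked = [q for q, _ in log.most_common()]
    return ranked[:head_n] + ranked[-tail_n:]

test_queries = build_test_set(query_log)
print(test_queries)
```

In practice the set would also be seeded with queries observed during user observation sessions, as the article notes.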
Iterative rounds of relevancy testing were conducted by corporate librarians on the search team, with a variety of scoring methods for each query. Testing results were used to adjust the search engine's relevancy settings until search results reached an optimized state. The hard numbers provided by the relevancy testing protocol were also critical to gaining an objective view of how well search results corresponded to user needs. The numbers moved us away from the danger of reacting to biased "gut feelings" and toward a clear, accurate methodology that accounted for relevancy as users see it.
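The article does not name the specific scoring methods the librarians used, but a common graded-relevance metric such as NDCG shows how judgments like these can be turned into the "hard numbers" described above. The grades in this sketch are hypothetical:

```python
import math

def dcg(grades):
    """Discounted cumulative gain for a ranked list of 0-3 relevance grades."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(grades))

def ndcg(grades):
    """Normalize against the ideal (best possible) ordering of the same grades."""
    ideal = dcg(sorted(grades, reverse=True))
    return dcg(grades) / ideal if ideal else 0.0

# Librarian-assigned grades for the top 5 results of one test query.
judged = [3, 2, 3, 0, 1]
print(round(ndcg(judged), 3))  # -> 0.972
```

Scores like this, averaged across the full query set, give an objective baseline to compare before and after each adjustment to the engine's relevancy settings.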
Alpha Testing: Involving power users
As the launch of the search engine drew closer, a lightweight testing protocol involved a core group of intranet power users. Testers were asked to explore the new search environment. At this point, we were especially excited to find that in 81% of queries, Alpha users were finding what they needed on the first try, and 97% did not experience any technical problems. These and other data points verified that the user interface design and search results relevancy were meeting, and often exceeding, the expectations of Alpha users. At the same time, Alpha user feedback uncovered a few issues that were important to resolve before moving into a broader Beta release.
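Metrics like the first-try success rate cited above are simple to tally from per-query session records. The records in this sketch are hypothetical, not the project's actual data:

```python
# Each hypothetical record: (found_on_first_try, had_technical_problem)
sessions = [
    (True, False), (True, False), (True, False), (False, False),
    (True, False), (True, True), (True, False), (True, False),
    (False, False), (True, False),
]

first_try_rate = sum(found for found, _ in sessions) / len(sessions)
problem_free_rate = sum(not problem for _, problem in sessions) / len(sessions)
print(f"first-try success: {first_try_rate:.0%}; problem-free: {problem_free_rate:.0%}")
```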
Beta Testing: Widen the net of user feedback
Shortly before the search engine went live, a group of 10,000 users was invited to use search in a Beta environment. Feedback gathered through focused survey questions revealed that Beta users' experience was also overwhelmingly positive, mirroring that of our Alpha users. The focus at this point was to test the search engine's capacity for increased, live traffic and to spot any red flags prior to launch. Beta results were also valued by senior stakeholders, who got a quick snapshot of real users' experience and feedback before the new search tool was rolled out to all employees.
Further testing critical to an optimal user experience covered content processing, content permissioning, browser compatibility, performance (speed), and load. The methodologies presented here aim to provide repeatable, proven, and practical tactics for testing an enterprise search engine so that its relevance, usability, and accuracy can be optimized for a superior user experience.
Margaret Ostrander, MLIS, is an information professional who enjoys connecting people with knowledge through innovative uses of technology. She is Manager of Search at Thomson Reuters, a provider of intelligent information for the world’s businesses and professionals. She was recently a co-recipient of the Innovation in Action Award from the Minnesota Chapter of SLA, and was named an SLA Rising Star in 2009. Margaret recently co-presented on User Observation techniques at the Libraries & Technology conference and a MN SLA Chapter continuing education event. She has published articles on information seeking behavior in the international journal New Library World (2010) and the “Best Young Professionals” issue of Library Hi Tech (2008). Margaret invites you to connect with her at http://www.linkedin.com/in/margaretostrander.