IA Usability Research & Testing

CLARION UNIVERSITY

THE GOAL

Use remote usability testing to evaluate the information architecture I had designed for an enterprise-level university client.

Draw actionable insights from the mental models of our target users. Deliver and communicate my findings so they are readily accessible to anybody in the client organization, whether they've been closely involved with the redesign or don't even know what "IA" stands for.

What I Did

Testing Platform Selection, Research Design, Task and Participant Identification, Copywriting, Test Coordination, Troubleshooting, Results Synthesis and Analysis, Refinement, Documentation, Client Presentation

Background

This research dovetailed with a months-long project to design the structure, navigation, and overall user experience of a new website for Clarion University. The research was completed in May 2014, though the redesigned site has not yet launched as of the writing of this portfolio entry.


THE QUEST

At the outset, none of my clients had experience with usability testing, so I laid out the benefits and garnered support for including evaluative research in the project's timeline and budget. I then led decisions on which audiences we wanted to target and which parts of the information architecture were most valuable or interesting to test.

Red sections indicate the parts of the IA we decided to cover with our tests, shown here down to the second level.

We chose Treejack as our testing platform, and I identified 20 survey tasks for two student audiences. After crafting how each question would be worded and displayed, I pre-tested the tasks with stakeholders and coordinated with agency team members on distribution, email copywriting, and participation incentives.


Excerpt from a spreadsheet analyzing the paths that students took during a task. This was helpful in digging deep for patterns that weren't necessarily apparent in the aggregate.

When the surveys went live, I monitored early results for issues. We met our participation goals quickly, and I set to work analyzing data that included findability and directness rates, first clicks, time on task, demographics, and paths taken.
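
To give a sense of what that analysis involves, here is a minimal sketch of how success and directness rates might be computed from per-participant paths. The record format, node names, and the "no revisited nodes" test for directness are illustrative assumptions, not Treejack's actual export schema or its exact metric definitions.

# Hypothetical tree-test records: one entry per participant per task.
# This is an illustrative format, not Treejack's actual export schema.
from dataclasses import dataclass

@dataclass
class TaskResult:
    path: list[str]   # nodes visited, in order
    nominated: str    # node where the participant said the answer lived

def success_rate(results: list[TaskResult], correct: set[str]) -> float:
    """Share of participants who ended the task at a correct node."""
    return sum(r.nominated in correct for r in results) / len(results)

def directness_rate(results: list[TaskResult]) -> float:
    """Share of participants who never backtracked, approximated here
    as never revisiting a node along the way."""
    return sum(len(r.path) == len(set(r.path)) for r in results) / len(results)

# Example task whose correct answer sits under Admissions > Apply.
results = [
    TaskResult(["Home", "Admissions", "Apply"], nominated="Apply"),
    TaskResult(["Home", "Academics", "Home", "Admissions", "Apply"], nominated="Apply"),
    TaskResult(["Home", "Current Students", "Registrar"], nominated="Registrar"),
]
print(f"Success:    {success_rate(results, {'Apply'}):.0%}")  # 67%
print(f"Directness: {directness_rate(results):.0%}")          # 67%

In practice, Treejack reports these figures itself; a hand-rolled pass like this only becomes useful for the path-by-path pattern hunting described above.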


The results were at once fascinating and invaluable. I drilled down to reveal patterns, understand mental models, and draw actionable conclusions. As I synthesized everything into a client deliverable, I knew the document was likely to be shared beyond our core project team, so I took care to make it accessible to just about anybody.

To make my findings widely accessible, I drafted a section discussing what "good" means when we're talking about usability testing. The report also included an overview of what we did, what the surveys looked like, and a summary of the results.

A typical page from the testing report. Details are largely obscured for confidentiality reasons. Each page recapped the task with context and provided definitions, so uninitiated readers didn't need to remember what was meant by "Success" or "Directness."


THE RESULTS

Half of our survey tasks produced success rates of 80% or higher (and in four cases, 90% or higher). More importantly, though, the other half yielded insights about our student audiences that we would never have uncovered without testing. Participation data indicated the tests were clear and well designed.

Where necessary, I made recommendations and improvements to reflect our findings. Clients who were previously unfamiliar with evaluative research were thrilled with the value it added to the project, and they now had the validation they needed to move forward with confidence.