From 10 personas to 2 that teams actually use

Using qualitative and quantitative research across thousands of users to replace assumption-based personas with research-backed, actionable user profiles.

Organization

Research Square Company

Role

  • Qualitative interviews: planning, recruiting, and conducting 1-hour semi-structured interview sessions with users.

  • Qualitative analysis: coding and thematic analysis of interview transcripts to identify themes across our English and translated Chinese interviews.

  • Survey design: developing a well-structured, large-scale Jobs-to-be-Done survey that minimizes respondent drop-off.

  • Survey analysis: using Python and Jupyter Notebooks to cluster respondent data.

  • Stakeholder communications and workshop facilitation: gathering stakeholder knowledge and assumptions about our users at the start of the project.

  • Persona creation: synthesizing useful personas for the organization based on the most important and least-served needs our users currently experience.

Key deliverables

  • Jobs-to-be-Done research report that identifies key opportunities for product teams moving forward.

  • Research-backed personas for product teams to use in all phases of our design process.

Research Square Company provides wide-ranging services that support authors in the pre-publication stages of their research, helping them get published and share their work with the world. Unfortunately, our organizational understanding of what users need while writing and submitting their research was unclear and inconsistent. Over the years, different product teams created siloed, assumptive user personas with few resources for iterating on them as they learned about their users. Eventually, there were 10 personas with varying degrees of overlap and assumption, created by different teams and colleagues for ad-hoc needs. Stakeholders didn't know where to look for a source of truth on user personas or who was responsible for maintaining them over time.

This fragmented buffet of personas introduced unnecessary complexity: teams had conflicting, assumptive, risky beliefs about our users. To integrate user knowledge across the organization, we needed a way to share what we know about our users and to validate it with qualitative research on our actual users.

Jobs-to-be-Done approach

Creating and validating a job map with qualitative interviews

Using Anthony Ulwick's Jobs-to-be-Done Needs Framework, we worked with key stakeholders to share our knowledge and assumptions about a core functional job for our users: publishing their academic research. This job is what Research Square Company supports with every product and service it offers. Together, we created an assumptive map of the steps and corresponding desired outcomes required to accomplish that core functional job.

In Jobs-to-be-Done Needs Framework parlance, as users achieve the desired outcome of each step, they move closer to accomplishing the core functional job. Jobs-to-be-Done steps and outcomes are invariant across demographics and short periods, making them immediately valuable for product teams looking to build the right things for users.

Assumptive map in hand, it was time to validate it against current users. Collaborating with a colleague, a UX Researcher based in China, we facilitated 20 qualitative interviews with users in English and Chinese, structured around the core functional job. These interviews invited participants to walk us through the steps of their most recently published research in as much detail as possible. We then revised the job map based on thematic observations from the interviews. To further increase the value of the validated map, we needed to prioritize its steps and outcomes to identify key opportunities for our product teams to address.

Surveying users

As the next step in the Jobs-to-be-Done process, we prepared a survey based on the outcomes in our map, inviting users to prioritize each outcome according to how important it is and how satisfied they are with how they currently achieve it.

The survey is as simple as it is long: each outcome is broken into a pair of statements asked in a 5-point Likert format:

  • How important is _____?

  • How satisfied are you with how you _____?

With the help of our marketing team, we gathered responses from more than 2,800 individual respondents, far exceeding our minimum response target for the project.

Survey analysis

With Python and Jupyter Notebooks, I analyzed the data using k-means clustering on the importance-satisfaction statement pairs to create groups of users.
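The clustering step can be sketched roughly as follows (a minimal illustration, not the actual notebook: the data shape, cluster count of 3, and use of scikit-learn are assumptions based on the analysis described here):

```python
# Sketch of the clustering step; the data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row is one respondent; each column is an importance or
# satisfaction rating (1-5) for one outcome statement.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(2800, 20)).astype(float)

# Standardize so no single statement dominates the distance metric.
scaled = StandardScaler().fit_transform(responses)

# Group respondents into a small number of segments (3 shown here).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(scaled)
```

Each respondent ends up with a cluster label, and the per-cluster opportunity scores can then be computed separately for each group.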

With participants now grouped, I then calculated an "opportunity score" for each outcome within each group. An outcome's importance and satisfaction are each converted to the percentage of respondents who rated them at 4 or 5, scaled to a score out of 10, and combined according to the formula:

score = Importance + max(Importance - Satisfaction, 0)

Outcomes with opportunity scores of 10 or greater are underserved: not only are they important to users, but users aren't satisfied with their current means of achieving them, which suggests potential for our product teams.
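The score calculation above can be expressed in a few lines (a sketch, with illustrative function and array names; the example data is invented):

```python
# Sketch of the opportunity-score calculation per Ulwick's formula.
import numpy as np

def opportunity_scores(importance, satisfaction):
    """importance/satisfaction: raw 1-5 ratings, shape (respondents, outcomes)."""
    # Top-2-box percentage per outcome, scaled to a score out of 10.
    imp = (importance >= 4).mean(axis=0) * 10
    sat = (satisfaction >= 4).mean(axis=0) * 10
    # score = Importance + max(Importance - Satisfaction, 0)
    return imp + np.maximum(imp - sat, 0)

# Example: outcome 0 is important to everyone but satisfies no one
# (strongly underserved); outcome 1 is unimportant and well served.
importance = np.array([[5, 2], [4, 3], [5, 1]])
satisfaction = np.array([[2, 4], [1, 5], [2, 4]])
scores = opportunity_scores(importance, satisfaction)
```

Because satisfaction can never push the score below importance, the maximum possible score is 20, which is why 10 marks the underserved threshold.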

Graph graphemes

The figure on the left shows the k-means clustering of respondents based on their survey answers; the figure on the right shows the opportunity score graph for all three clusters.

With the analysis complete, the UX team was ready to support the product teams with a validated, nuanced description of the problem space.

Making and iterating on research-backed personas

We created personas based exclusively on the Jobs-to-be-Done findings, but quickly found that our intended users (colleagues in Product, UX, and Engineering) weren't adopting them because they weren't digestible or engaging. It's no surprise: the pure Jobs-to-be-Done personas covered only the outcomes users cared about, with very few personifying traits.

Proliferating personas.

We added 3 more unused personas to the stack. They were cardboard cutouts with nobody printed on top—flimsy silhouettes with no memorable human traits.

Because these new personas weren't personable, product teams simply didn't use them and stuck to familiar standards. I needed to grab a thick-tipped permanent marker and ink the dotted eyes and line-lipped mouth of a memorable academic author, or mere cardboard they would remain.

In effect, we had only added 3 more unused personas to the stack of 10, and teams were still relying on old, risky assumptions. We set out to bring engaging clarity, simplicity, and truth to product teams, but so far had delivered only on the last point.

To address this, we looked back at the personas existing across the company. Why did these teams want to keep using them? We took the following steps to synthesize new personas for the organization and make them preferable over the outdated, unmaintainable personas:

  1. Reviewed the existing personas to identify shared and unique traits.

  2. Grouped together the personas that had the most in common in terms of age, career stage, career goals, area of study, pain points, and language.

  3. Refined the consolidated groupings by introducing our validated findings from Jobs-to-be-Done to enhance their clarity.

  4. Established a clear path for maintaining and contributing to personas as the organization continues to learn about our users.

The resulting set was simple and digestible: 2 personas, each representative of a distinct set of needs, pain points, and Jobs-to-be-Done outcomes. I included assets like representative cards and chips with each persona's name and photo to support stakeholders in workshops and habituate colleagues to their use. They were relatable. They were simple. They were accurate.

Something borrowed, something new.

A set of 2 UX personas with 3 assets each: the full descriptive persona with bio, goals, needs, behavior, frustrations, and current solutions; a brief card with key traits; and a small chip with only their name and photo.

I also developed and delivered a presentation introducing the new personas and their applications to the company. After presenting to key product team members and putting the personas to use in a fresh design sprint, they caught on. Suddenly, I'd hear their names in meetings that had almost never used personas before (meetings that could have benefited from them all along). Product teams now had a research-backed source of truth that was engaging and got them thinking about their products from the user's point of view more quickly and confidently than before.
