It’s time for Agile User Research
The traditional research process is making us stressed, inefficient, and disempowered. Researchers can (and often do) make more impact with less waste by adopting Agile practices to manage risk.
With apologies to everyone else, this post is going to be pretty narrowly addressed to user researchers and anyone who collaborates with them.
When I was a PhD student, I was introduced to Agile Research—the idea that we can reduce waste and increase impact by applying principles of Agile software development to our research projects.
Agile Research rejects the long timelines, high costs, and unpredictable ROI of traditional research projects. Instead, you do smaller research activities at a higher cadence—allowing for faster iteration, learning, and impact.
I naïvely expected user research in the tech industry to work this way, too. After all, Agile originated in the tech industry, not academia.
But my impression is that in-house research is (still) heavily influenced by the waterfall planning cycles that we inherited from social science and agency work. To be clear, this is an observation about the culture of user research as a discipline, not any particular organization.
I have observed a belief among many researchers that careers are built on high-prestige “foundational” studies—while rapid, iterative work is too often seen as low-status and low-impact.
I think these norms and incentives are counterproductive. First, they obscure the fact that many researchers already rely on Agile practices to do impactful work. Second, I may be wrong, but I sense that many research leaders wish their teams were more comfortable with this way of working.
I think our field could benefit from a more conscious embrace of Agile practices. But it’s hard for many researchers to imagine what this means, especially if they’ve never seen it in action.
So, I wrote this piece to summarize:
the core principles of Agile User Research,
why it can feel uncomfortable (but shouldn’t), and
what obstacles we need to overcome to make it easier.
If you are in tech or research, I want to hear what you think! Does this resonate with your experience? Have I got it all wrong? Does this describe how you already work?
The full post is below, and it’s also available as a PDF slide deck here:
Why science is an insufficient mental model to guide our work
As good researchers, let’s start with understanding the problem.
If we had to draw our mental model of the research process, it might look something like this:
This is a model we inherited from social science and consultancies, where researchers were further removed from the chaos of organizational decision-making. I’ve never run a research consultancy, but I’m guessing it helps with billing (or marketing) when you can break down the process into all these chunks to demonstrate where the client’s fees are going.
The rise of in-house user research might suggest to us that we don’t need to worry so much about marketing our process any more.
But in-house user research is (still) heavily influenced by these “waterfall” planning cycles.
And this seems almost necessary—to us, this is just “the scientific method.” What else would we do?
The problem for us is that the scientific method doesn’t account for risk.
Specifically, science is not inherently geared toward impact; it’s geared toward discovery. Since we care about impact, we also need to consider risks that could limit our impact. This makes us more like engineers than scientists—a critical point we will revisit shortly.
User researchers need to manage risk
There are many things that make user research inherently risky work:
Decisions move fast — so it’s easy to miss the window for impact
Decision contexts evolve quickly — so it’s hard to anticipate what answers will be needed in the future
Decisions are ill-defined (meaning that the goals & options are often vague, unspoken, and subject to revision) — so it’s easy to prioritize the wrong questions (answering things that aren’t important, or aren’t even relevant).
Research is open-ended — so it can be hard to predict the value of what we will learn.
So we feel pressure to anticipate every question and deliver relevant answers—before it’s too late.
Often, we pull it off by working harder and faster at the expense of well-being and other priorities.
And this makes us feel angsty. In our least glamorous moments, we can be caught blaming our surroundings, e.g.:
“There is never enough time for research”
“They don’t value rigor”
“They have unrealistic expectations for output”
But what if the problem is our process? Or rather, how we think about our process?
We need to think like engineers, not just scientists
This is the premise of Agile User Research: we need to think like engineers, not just scientists.
Agile User Research is a different mental model of research, inspired by engineering practices for managing risk.
It isn’t meant to be a totally new kind of research. It isn’t a specific process or series of steps, either. It is a cognitive tool that we can use in 2 key ways:
First, recognizing things we already do to manage risk, and becoming more aware of the importance of those activities.
Second, analyzing and improving how we manage risk.
Minimum viable history of Agile software development
To understand Agile research, it helps to have a very lightweight familiarity with Agile Software Development. This will be plenty for us:
Agile user research vs. research “under” Agile
If you are familiar with Agile, you might be surprised (as I was) to learn that we haven’t really had an Agile movement in user research. At least, not by that name (which is surprising in itself given the success of Agile in other product development functions).
When I searched “Agile user research” on Google in late 2023, I found many articles discussing how to fit user research “under” Agile software development.
These articles are addressed to researchers who work with Agile development teams and feel overwhelmed by the pace of decision making. Here are a few examples:
Agile user research? That’s umpossible: pragmatic tips for doing user research under Agile by Sarah White in UX Collective (2018)
Accounting for user research in Agile by Rachel Krause at Nielsen Norman Group (2021)
How to conduct user research in an agile environment by Nikki Anderson on Dovetail’s best practices blog (2021)
These articles offer some very helpful tips for doing faster research for Agile teams, but they don’t make a positive case for why we should embrace Agile principles within our research.
In this piece, we’re not talking about how to reconcile our research process with Agile software development. We are talking about using Agile practices to do research.
4 core practices of Agile User Research
These are the 4 core practices of Agile research:
Testing
Iteration
Slicing
Risk prioritization
Each of these practices solves a different problem and can be applied in different ways.
Key metaphor: researcher as product manager for insights
Before getting into details, let’s get oriented with a helpful metaphor:
As a researcher, you are also a product manager. But your product is your insights, and your users are decision makers. In order to deliver a valuable product, you need to manage various risks. Of course, there are technical risks that could undermine the integrity of your insights—validity threats, bias, etc. There are also user-based risks (value and usability) that could undermine the relevance and impact of your insights.
Technical risks are solved by methodology.
User-based risks are solved by Agile practices. Let’s see how:
1. Testing
The first core practice is testing.
In user research, stakeholders often struggle to express their needs. It is difficult, without research training, to articulate good, actionable research questions. It can also be difficult for stakeholders to fully convey their decision context.
This creates uncertainty for researchers: how do we know if we’re on track to deliver valuable insights?
The solution to this problem is “testing” the value of our work by sharing insights and observing the impact on stakeholders’ behavior and thinking.
I don’t want to be overly prescriptive about how that looks, but here is an illustrative example:
In a testing mindset, we avoid ending projects with conversations where we “hand off” the insights like a baton and consider our work finished.
Instead, we debrief with stakeholders to understand how they are thinking about what we have shared. We might ask questions like:
How is this informing your thinking?
What new questions do you have?
What didn’t make sense?
What are you thinking about for next steps?
What other factors are swaying your decision?
The goal here is to get an early signal of how much impact, if any, the insights are having. And more importantly, why.
2. Iteration
The second core practice is iteration.
In user research, we are constantly at risk of delivering insights that are no longer relevant because the decision context has changed.
New decision makers may become involved. New product hypotheses may emerge. New priorities may be imposed. New business and technical constraints may be identified or removed. Urgency may change.
This creates an ongoing coordination problem: how do we stay closely aligned with what our stakeholders need?
The solution to this problem is iterating on our deliverables (the insights + how they are communicated) early and often based on frequent testing with stakeholders.
In an iterative mindset, we try to avoid doing weeks of upfront work to run comprehensive studies and create polished deliverables.
Instead, we have a higher cadence of check-ins where we are providing something for stakeholders to react to—ideally, something of real value, not just plans.
After each check-in, we use what we learned to plan our next bit of work.
3. Slicing
The third core practice is slicing.
Slicing is the key to working iteratively without burning yourself out.
The problem with frequent testing and iteration is that, without slicing, we are just trying to do the same amount of work in way less time.
The solution to this is breaking down complex projects into “slices” that deliver the minimum viable insight to advance a decision.
There are many ways we can slice a project:
Focusing on only 1 specific user segment
Answering only 1-2 key questions
Gathering incremental evidence vs. targeting conclusive evidence
Reporting only the most critical insight
Effective slicing requires deep methodological expertise because it often requires us to bend the rules on research rigor. We need to know where we can afford to cut corners without compromising the integrity of a decision, based on our understanding of the team’s priorities and what the insights will be used for.
Slicing means simplifying, which can be uncomfortable for researchers. We slip into believing that doing complex projects—not making impact—is what makes us valuable to an organization. Complexity makes us feel special, like we are showing off the full depth of our skill set.
But the opposite is true: getting good results from simple research requires deeper expertise because of the speed and trade-offs involved. Skilled researchers make it look quick and easy, but there’s a lot going on beneath the surface.
In a slicing mindset, we avoid delivering big, detailed reports that answer many questions. We know this would slow our cadence of testing and iteration.
Instead, we plan focused investigations and try to deliver the simplest output possible—such as a single sentence, slide, or chart.
This way, we can test the value and iterate quickly to keep our work tightly coupled with stakeholder needs and decision making.
4. Risk prioritization
The fourth core practice is risk prioritization.
This is how we decide which slice of research to deliver first. The challenge is, how do we avoid doing small slices of low-impact research?
The solution is to focus on research that mitigates risk.
There are two components to mitigating risk:
Epistemic value - does the research help us make a better decision?
Business value - does the decision have big financial consequences?
We need to make sure that both conditions are satisfied, but it’s easy to over-index on only one component.
When a product manager asks us to “validate” a decision that will have big financial consequences—but cannot explain how the research findings might change their decision—they are over-indexing on business value.
Likewise, when a product manager requests a study that will yield actionable insights to improve a feature—but the feature is unimportant from the perspective of business strategy—they are over-indexing on epistemic value.
I’m not bashing product managers here—researchers, designers, engineers, and executives can be equally susceptible to poor risk prioritization.
The point is that we can’t rely on others to prioritize our work properly. Just because a product manager (or anyone else!) identifies something as a priority doesn’t mean we need to accept this uncritically. Nor should we ignore them. It should be the start of a conversation, probably one where we ask a lot of questions.
In a risk prioritization mindset, we resist our bias toward comprehensive understanding.
Instead, we focus on uncovering the 1-2 things our team really needs to know to make better decisions (especially for decisions where it’s super expensive to be wrong).
5 counterarguments (and why they shouldn’t stop us)
As researchers, we are critical thinkers, so I would be surprised if anyone is willing to accept the proposition of Agile user research without some scrutiny.
Here are 5 reasonable counterarguments I would anticipate from a critical audience. By considering these arguments, we can tease out some nuances that are important to fully understanding the Agile user research approach.
If you still have doubts after reading this, I would really appreciate hearing from you!
1. “Research is serendipitous”
One reasonable counterargument says that research is serendipitous:
We don’t know what we will learn, so we can’t actually predict ROI beforehand. Since we can’t predict ROI, there is no real basis to prioritize based on risk.
Instead, we need to accept that research requires a leap of faith. We must simply trust that it will lead to useful knowledge.
To this, I say: sure. It’s true that research can be unpredictable. But this doesn’t mean we should give up on trying to plan well.
Yes, we have to make an initial guess about ROI based on past experience.
And this is precisely why it’s best to move quickly so we can get an early signal of value and have time to adapt.
2. “Economies of scope”
Another reasonable counterargument is based on economies of scope:
It often costs much less to add 1 more question than to run a separate follow-up study.
This is also true, but I think it misrepresents the true costs of scope creep.
First, we rarely stop at “just 1 more question.”
Second, we forget that extra data wrangling, analysis, and documentation also take a lot of time.
My observation is that data gathered in fishing expeditions are rarely actionable. If something isn’t important enough to warrant its own study, that’s often a red flag that it doesn’t deserve our attention.
Instead of loading up studies with extra questions “just because we can”, it’s better to move fast and leave ourselves more time for the highest-priority follow up studies.
3. “Stakeholder feedback vs. researcher judgement”
A third counterargument says that Agile user research feels too responsive to stakeholders:
It’s widely understood among researchers that our stakeholders do not always anticipate what the most valuable insights will look like.
Therefore, we—the experts—should be making decisions about what research to do.
Of course this is true! But it doesn’t change the fact that we need to maintain trust and mutual respect with stakeholders.
The point is that Agile user research can help us to assert our expertise while staying responsive to stakeholders’ perspectives:
Quick iteration makes it less risky to go a little bit rogue. In contrast, big, complicated projects make us more dependent on upfront buy-in.
It’s easier to win trust when we can quickly show value that speaks for itself.
4. “We should be proactive, not reactive”
A fourth reasonable counterargument stems from the desire to be proactive:
In a crisis, decisions are made with the research that already exists.
Therefore, we need to be proactive.
This means doing complex, foundational projects before they are needed.
Reasonable enough—but this presumes we know what research will be needed in the future. I don’t think this is a realistic presumption. For every proactive study that turns out highly useful, there are others that quietly fade from memory. (This isn’t to say researchers are particularly wasteful people — the same can be said of product initiatives!) Bottom line, it’s hard to predict the future.
And let’s consider the alternative: if we pursue Agile research instead of big, proactive projects, there’s an equally good chance that we’ll uncover insights that are useful during a future crisis.
But anyway, I think it’s a red herring to try to prepare insights for every possible crisis.
The real issue is how much influence the research function has—and whether we are considered essential partners when a crisis occurs.
And the best way to increase our influence is by delivering value at a regular cadence in the here & now.
5. “We need polished artifacts to socialize insights”
The fifth counterargument I would expect has to do with communication strategy:
Big projects with narrative deliverables help us win attention and shift how the org thinks about users.
Once again, I agree there is an important insight here. But it’s a bit reductive, and I think we’ll benefit from a more nuanced approach.
First, let’s remember that people (stakeholders) learn by assimilating information gradually over time as they try to solve problems—not by inhaling it all at once.
Second, we can still create synthesis deliverables to help teams remember key learnings to date—but this can be done incrementally over cycles of research.
Third, big, narrative reports can make us feel seen—but that doesn’t mean they are read (or understood) as much as we’d like to believe.
There is absolutely a role for narrative artifacts, but this shouldn’t stop us from adopting Agile practices.
Why don’t we do this already?
Researchers are generally a very thoughtful bunch. So if this is such a great idea, why don’t we already work this way?
First, I need to reiterate—many researchers do already work like this.
But to the extent that we don’t, here are some barriers that explain why:
Stigma - iterative research is often seen as junior, inherently tactical, or UI-focused (in a pejorative sense).
Incentives - hiring and leveling often (seem to) reward big, glitzy studies. Because the impact of iterative work is more closely coupled with product teams and less likely to produce high-fidelity media products, it is less visible.
Feasibility - the overhead costs of recruitment and research operations for individual studies lead to batching and scope creep. It’s seductively easy to “just ask one more question” when we’re already doing the work of recruitment, and it feels less efficient to run a larger number of smaller studies.
Stakeholder expectations - we’ve trained stakeholders to expect research to be a fairly long, linear process resulting in a high-fidelity deliverable. They now collaborate with us in ways that reinforce these norms, explicitly and implicitly.
Anxiety - we are conscientious people. Agile user research puts stress on our mental model of “good work,” and we worry we are doing something wrong.
These are not trivial barriers. But they are things we can work to change!
Here are a few of my suggestions for where to get started:
Start small - Agile user research isn’t an all-or-nothing proposition. Incremental changes can help us build momentum and learn what works well. Be Agile about adopting Agile…
Pick your battles - protect the trust you’ve built with managers and stakeholders. You need them as allies if you want to try something new. Or at least, curious bystanders. If they are strongly opposed, this may not be the time for innovation.
Get help - ask research leaders for support, tools, and resources.
Work together - find like-minded peers for mutual support and guidance. Find time to share reflections and troubleshoot difficult situations together.
Changing the culture of a discipline isn’t something that happens quickly. It's the result of a lot of little steps that end up leading somewhere. So find a friend, and take a little step.
Credit and many thanks to those who introduced me to these ideas, especially Daniel Rees Lewis and also Matt Easterday, Chris Riesbeck, Haoqi Zhang, Leesha Maliakal, Liz Gerber, and Kristine Lu.