On timing and timelessness in User Research: Lessons from researching online shopping behaviour within Ricardo’s product development cycles
“I mean, like, this was probably cool circa 2001.”
We were 5 minutes into the first video call of my research project, and the user I was interviewing had already gone into rant mode about shopping on Switzerland's largest auction marketplace. The product manager was also on the call. “Honestly, Ricardo is a catastrophe… if they had any real competition they’d be bankrupt by now.”
Through the window of the meeting room I was in, I could see the PM burst out laughing at his desk. What a start, I thought.
Unlike the feature my interviewee was lambasting, proper user research should be timeless. But there is absolutely a wrong time to do it. To deliver actionable results, researchers need to be part of product development processes as early as possible. We need to align carefully with stakeholders about the goals of a project and the risks and assumptions they are willing to take to meet deadlines. Conducting a study about purchasing behaviour on Ricardo taught me all of these lessons the hard way.
Purchasing on Ricardo: Project Purcival
I took on the responsibility of planning research efforts six months ago, when I became the lead UX Researcher on the Product + UX (PUX) team supporting our client Ricardo. For context: Ricardo works in cross-functional teams focused on opportunities. One high-priority opportunity was “Purchasing”. The newly created team’s goal was to increase buyer conversions by optimising the process of bidding on or buying an article on Ricardo. I reached out to the team’s PM and UX Designer to discuss what research support they required.
Defining the Problem
The Purchasing team’s goal was to help visitors with a clear buying intent to become successful buyers. They assumed users had “purchasing intent” when they (1) bookmarked an item, (2) asked a question on a product page, or (3) visited the same product page more than three times. The team wanted research support to prioritise the user stories they had pinpointed during an ideation session. A logical starting point was the Wishlist: why did users only bid on or buy X% of the items they bookmarked?
Ricardo has a user journey map that is fully validated through user research conducted by PUX. However, it has a strong focus on what happens before or after a purchase. For the Wishlist problem, I couldn’t come up with questions that weren’t based on the #1 red flag for any researcher: assumptions. I wasn’t comfortable with the underlying assumption that only users with purchasing intent bookmarked items. ...or asked questions. ...or visited product pages multiple times. It felt like the team had rushed headlong into planning solutions without validating whether the underlying problems were real user needs or not.
I explained that, from a user research perspective, we needed to step back to validate the assumptions being made, understand more about the user journey during purchasing, and identify users’ Jobs to be Done through exploratory interviews. The UX Designer agreed.
The PM pulled up the team roadmap and asked the question every researcher knows and dreads. “Can you do it in two weeks?”
Reader, I said yes.
This is where I missed the opportunity to ensure that everyone was aligned on the specific goals and deliverables for the project.
Our normal research process involves several feedback rounds before the “real” research starts, in order to agree on the scope, goals, and success metrics, and to make sure the research questions cover what the stakeholders want to know. With the timeline as tight as it was, I skipped the first feedback round entirely, got straight to work, and never took the time to properly follow up for feedback. More on that later.
Doing the Research
To get a better idea of users’ habits and understand at which point they switched from searching or browsing to purchasing intent, I used a combination of semi-structured interviews and usability-testing techniques, which I led remotely via video call. For 30 minutes, I asked the participants questions about their experiences shopping online, both in general and on Ricardo. For another 30 minutes, I asked them to think out loud while they completed a search task and a bidding simulation. In case you ever decide to conduct a simulation on a live platform: be careful! In my case, someone unrelated to the testing actually bid on the cat picture I had listed as “TEST Article, please do not bid”, and I had to ask Customer Care to delete it. (To be fair, it was a pretty fabulous RCP.)
In total, I talked to six participants with a range of experience levels using Ricardo, from non-user to power user. All but the non-user had bought something on Ricardo in the past few months.
As a rule of thumb in UX research, five participants are enough to surface a broad spectrum of qualitative insights from user interviews. In this case, six participants provided so many insights across the board that I didn’t know where to start.
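The classic justification for this rule of thumb is Nielsen and Landauer’s problem-discovery model, which estimates the share of usability problems uncovered by a given number of test users. Here is a minimal sketch of the maths; the 31% discovery rate is their reported cross-study average, not a number from this project:

```python
def problems_found(n_users: int, discovery_rate: float = 0.31) -> float:
    """Expected proportion of usability problems uncovered by n_users.

    Nielsen & Landauer model: each user independently exposes a given
    problem with probability `discovery_rate` (~0.31 on average across
    their studies), so n users find 1 - (1 - rate)^n of the problems.
    """
    return 1 - (1 - discovery_rate) ** n_users

if __name__ == "__main__":
    for n in (1, 3, 5, 6):
        # e.g. 5 users -> ~84% of problems found
        print(f"{n} users -> ~{problems_found(n):.0%} of problems found")
```

With the default rate, five users already uncover roughly 84% of problems, and a sixth adds only about five percentage points more, which is why small samples go such a long way in qualitative work.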
I’m a quantitative researcher by training: I like having neat variables and friendly numbers. While writing my research questions, I oriented myself along the user journey map and the step-by-step process of choosing an item and placing a bid. Assumption alert: step-by-step implies that the purchasing process is linear.
Spoiler: It’s not.
There followed a very intense few days thematically analysing all the different journey steps, jobs to be done, pains, usability issues, and bids placed on cat pictures. I also drew on prior research to better explain perplexing findings.
Analysis finished, I found that the real challenge was communicating the results effectively.
First: I found that users jump back and forth between different journey stages in a kind of “journey spiral”. Within each stage, there were triggers that would send a user to the next step or make them spiral back to a previous stage. I started mapping out all the different steps, and how they were interconnected, with icons and arrows on a presentation slide. My researcher colleague took one look at it and suggested: “Maybe you should ask a designer for help.”
My designer colleague managed to turn it into magic.
Second: I tried to capture, in a kind of formula, the individual elements that lead a buyer to convert. I wound up with a variable I could only call “Magic Sauce”. Users are unique and have a lot of (sometimes odd) personal preferences!
Most importantly, I found some very interesting insights related to the Purchasing team’s user stories. For example, users differ in how they use the Wishlist: they don’t necessarily only save items they intend to buy. Some bookmark items during the search phase because it’s hard to keep track of all the items they find interesting and navigate between them.
I compiled all of these insights and visualisations into a detailed report about the users’ needs, jobs, emotions, and decision criteria during their purchasing process. But what I couldn’t do was validate the specific user stories on the Purchasing roadmap or give a clear answer about which one would have the highest impact.
Remember I mentioned I cut a few corners when aligning on goals?
What the team really needed was a clear direction on what to focus on in order to ship something by the end of the production cycle. They wanted to know, “Will building this make users with purchasing intent convert more?”. What I delivered was exploratory research; I was saying, “Your definition of purchasing intent isn’t correct.” It was clear that this wasn’t actionable enough to help the team at this stage in the production cycle. So I tried to “translate” the results into more concrete recommendations, provide more context, and add opportunities and risks to the user flow map in order to -- I hoped -- help the team to empathise with the users.
Making an Impact
In the short term, I used the interview insights to help the PM put together a survey that would answer his original question: why don’t users buy things on their Wishlists?
But where the value of this research project started to show was when we started planning the next round, and there was an evidence-based foundation to build on. We were able to quickly create and test a prototype that gave us insights into users’ expectations and potential negative consequences. Being part of discussions within the Purchasing team and explaining what we knew from user research when decisions needed to be made, rather than just delivering a report and expecting everyone to read and remember it, also made a huge difference in how much impact the results have had.
What I Learned
Fundamentally, I ended up doing the right research at the wrong time. It should have been done before the team ever started their ideation session, so that they could build on identified user needs instead of assumptions. Unfortunately, six weeks before that session, it wasn’t even clear whether there would be a Purchasing team. The main lesson I took from this is to be proactive. At Ricardo, I now have a continuous dialogue with designers and PMs to anticipate what research will be needed by when.
The project taught me to be more pragmatic in how I approach research planning, and to be more willing to make compromises and build on assumptions when quick results are needed. I’m also more careful about setting goals and clarifying what the output of a specific research project will be.
At the same time, I’m trying to spread awareness that user research ≠ usability testing. The real time-waster isn’t developing one untested feature. It’s devoting time and resources to a process that isn’t built on well-founded understanding of how users use your product. Here are all the stages where research might be needed:
- Identify a business problem like “50% of users don’t convert” → validate whether your metric for conversion reflects the user behaviour you think it does.
- Identify a UX problem like “20% of users exit from this page” → understand why users are on that page and where it fits into their journey.
- Propose a solution like “When users have [assumed problem x], they want [assumed feature y] so they can [assumed goal z]” → validate whether x is a real problem, y is a good solution to that problem, and z is a real goal that would make them use y.
If you skip a step in this process, you risk building all of your further effort on an incorrect assumption.
Is it feasible to follow this process to the letter every time you want to develop something new? Of course not. And this is where “timeless research” comes in. Done right, every piece of research you do contributes to your organisation’s knowledge about its users and creates a stronger foundation to build on for the next cycle.
Had I simply validated a specific user story, that research would have been meaningless and “disposable” within a few months. There was also a high risk that the results would not be valid and could lead to wrong decisions, wasting far more time and resources. We wouldn’t have known, for example, that some users bookmark items early on in the search stage. Not putting this into the questionnaire as an option would have meant that users chose the next best option, giving us unreliable results and possibly over-estimating the size of a problem or “validating” a problem that didn’t actually exist. Now that the foundational research on Purchasing is done, it can inform all future studies and thus still provide value to the team -- and to other opportunities -- in future production cycles.
The balance between doing thorough research and delivering quick, actionable results is difficult to strike. Keeping user researchers in the loop from the start makes it easier for us to plan the right kind of research at the right time. The more we know about which questions you need answered and which decisions you need to make, the better we can align on the risks and compromises required to get there.
But as researchers, we should keep the big picture in mind and strive to get timeless insights out of every project. That way, instead of scrambling to answer foundational questions in the middle of a sprint, we can at least make somewhat-founded assumptions based on what we learned six weeks, months, or even years(!) ago. Disposable research was probably cool circa 2001.