For the last 2.5 years, I have been the user research lead within a public sector organisation (as evident from the blog posts…). This was my first time being a line manager, and my first time introducing research into an organisation that hadn’t done it previously. During both of these experiences I made some mistakes, and I feel like I learned a lot. Most of the lessons (or thoughts cribbed from colleagues smarter than me) are probably only relevant to me, but I thought I would document them in case anything I learned is of value to others going into similar contexts.
I’m also conscious that this will reflect my own experiences, and is going to be subjective – perhaps there won’t be many universal truths in my reflections, and I’d recommend reading other sources before drawing conclusions. In no particular order, here are some thoughts…
One thing I learned: Evidence only works when you have agreement on what you’re trying to achieve
For a long time after I joined, we were following a development strategy that had one principal goal – to produce an HTML representation of information generated by the organisation, mirroring the structure of the concepts it represents, and to enable others to build new things using this output. I think this is called the semantic web. This approach has a lot of benefits – for example the points made at the end of this blog post about helping SEO, and enabling others to use your data to build smarter things than you can imagine yourself. However this approach was different to the one being advocated for within the product teams, of understanding who uses our existing website and designing useful tools and services to meet their needs. Both approaches can be described as user centred – they just didn’t agree on who they wanted the users of a new website to be, and ultimately what ‘the website’ should be.
The impact of this lack of agreement was that a lot of the research we ran early on felt like it was having low impact. Our work was guided by product teams, and we would generate a lot of evidence that the pages weren’t meeting the product teams’ goals – but because the development approach had goals other than “utility for existing users of the website”, that evidence wasn’t relevant to bringing about a change in approach.
On reflection, our research team could have spent less time running research that only reinforced our early findings that the approach wasn’t achieving the goals of the product teams, and more time helping build a consensus on what the unified approach should be. Although that’s not the ‘user research’ bit of the job directly, it would have unblocked the effort of the research team earlier, and so is probably the ‘lead’ bit of the job title.
A second thing I learned: A service designer has a hard job.
Definitions can be fuzzy, so let’s start with my understanding of what we’re talking about.
‘A service’ is the end-to-end process that lets a user complete a goal, such as renewing their passport, or buying some IT equipment.
‘Service design’ is the practice of defining how that service should work and then making it so.
So a ‘service designer’ is the individual who defines how that service should work and then makes it so. But actually doing that – being a service designer – feels incredibly difficult. A UI designer can develop a repeatable process for turning their decisions into reality (and the pixels rarely complain). In contrast, service designers don’t (and presumably cannot) have a repeatable process for making people do their jobs differently, which is required to make any sort of significant change to how most services work. And the people being changed probably will complain, as they may have done their job in the current way for a long time. Achieving any kind of significant change requires a tremendous amount of organisational buy-in and soft skills, which can be difficult to come by.
The risk we saw as a research team is that, when blocked from making changes by a lack of buy-in or influence, service designers can be tempted to fall back on further understanding the as-is state, and keep running user research in the hope that the situation blocking them will change. This has the potential for conflict when working with an established research team, as it generates low-impact work for researchers and hides the real reason why change isn’t occurring – creating the impression that ‘a lack of research’ is the cause of delay, rather than a lack of organisational buy-in, or of an appropriate scope being defined for design to occur within.
One of the tools that helped us manage relationships with service designers was socialising the principles taught to me when I was at PlayStation – including a clear definition of what is ‘research’ (planning a study, running a study, drawing conclusions and communicating what was learned) and what is ‘design’ (having questions that need to be answered by research, and deciding what action to take in response to what’s been learned). That time also impressed upon me the importance of describing the wider context of a round of research – why it is being run, and what change is going to occur on the back of it. Having a strong understanding of roles and responsibilities feels essential when working with service designers and researchers.
A third thing I learned: What does a user researcher do?
People will ask user researchers for a lot of things that are not user research, and researchers have to take care not to stray into being a business analyst, product manager or designer. Making good decisions requires the team to understand the organisation, the vision, what is feasible and what good looks like – not just users. Product teams may not have a clear understanding of who can help find each of these things out, and will default to asking their user researcher.
Moving from an organisation where the research team was commissioned to run individual rounds of research, to working in a collaborative multidisciplinary way where roles have a lot more cross-over and grey areas, it took me a while to start recognising the research questions that weren’t really user research. It reinforced to me the importance of deeply scrutinising the objectives of each round of research we run – stopping, thinking about what you’re being asked to do, and asking “is this a question that a better understanding of our users will answer?”. Relatedly, I saw individuals asking for research when they really just needed to make a decision. To mitigate this, I’ve lately explored describing the role of user research as primarily to “help inform decision making” – making it clear to teams who are new to working with researchers that user research won’t tell you what to do, and that they will still have to make decisions.
A fourth thing I learned: The importance of explaining what you do at all levels
User Research was relatively new at the organisation when I joined, so we put a lot of effort into trying to explain what it was, demystify it, and help people at all levels of seniority understand how it’s relevant. This can be difficult, as people often don’t want to admit – or are unaware – that they don’t *really* know what it is. Our excellent head of research & design helped reinforce the importance of this, which we did through all-staff presentations, open research drop-in sessions, and other initiatives which sum up to ‘always banging on about research’.
We saw some issues occur as a result of always talking about research – a lot of related concepts that aren’t “user research” (User Centred Design, Service Design, defining what success looks like and measuring whether you’re achieving it) often got attributed to user research. Despite these issues, we saw the value of explaining what we do to people at all levels when organisational changes started to occur – because we’d put a lot of effort into internally socialising the relevance of our team, I felt relatively safe, as senior people had come to the conclusion that our team’s work was important and should continue.
A fifth thing I learned: Multidisciplinary teams need to be trusted and complete
The ambition when we all started was to build autonomous multidisciplinary product teams, in the belief that this would help build better software. I still believe that a good team can build better software in this manner, but I’ve also seen how fragile those teams are, and how easily they can be disrupted.
For many of the iterations of these teams I saw, the decisions about “how should the product work”, “what should the product look like”, “what things should the product do” and “what things should be on the page” were occurring outside of the product teams. That meant hand-overs, reviews and changes, and it ultimately stopped the product teams from being responsible for decisions they had been told to make. This indicated a lack of trust from discipline leads in the work that would be done within product teams, which has worrying implications for those leads.
We also saw core disciplines not being represented in product teams, which meant the ‘process’ of design and development couldn’t occur, and a lot of work by the others in the team was wasted. This creates a spiral: people leave product teams because the team doesn’t have the people within it to deliver, and any shared knowledge is lost. For a research team this taught us a couple of things – the importance of documentation even when running research in a collaborative way (because shared understanding is lost when people join or leave), and where to focus our efforts – if a team isn’t going to be able to deliver, research will have low impact for that team, and our priorities should be elsewhere.
A sixth thing I learned: Some constraints can be too big to make usable software
For the majority of my time, the development approach for the new website was to understand the concepts and relationships within the organisation, and build HTML representations that map closely to the real objects they describe. As referenced earlier, there are benefits to this approach for sustainability, and for allowing the civic tech community to use our information to build new things. There was also a hope that it would create something useful to the people who come to our website to complete tasks.
The implications of this approach for most users became clear to us relatively early on in research, but were slow to lead to an informed critique of the approach and a decision on “is this what the organisation really wants from its website?”. When we ran research with real users representing the audience of the website, we found that it wasn’t producing things that were useful to the people who came to the existing website. The development approach allowed little deviation from the words the organisation used to describe its concepts and processes, or from the relationships the items had in real life, when creating structure between pages. But for a complicated and niche subject matter, the audience for the website isn’t the people who understand the domain (because there are probably fewer than ten people who understand the domain in depth, and they all work here). We sought out the real audience for each part of the website, and we consistently found that even very specialist users, who use the current website daily, didn’t have the depth of understanding of the organisation’s language or processes to achieve their goals on the HTML representations of the domain. Design is the process of finding solutions within known constraints, but it was evident that a robust interpretation layer was needed between the domain and users, and being denied this layer was one constraint too many for the product teams, who felt prevented from creating things useful to visitors to the website.
Our discovery research frequently uncovered key journeys that wouldn’t be supported by the HTML representation of the domain. Our usability research reinforced that people couldn’t use the HTML representation of the domain to achieve their goals. But because the goal was to create an HTML representation of the domain, teams felt unable to address those issues and bridge the divide between the domain and people completing useful tasks. The development team has since started to explore changes to its approach, but I guess the learning I took from this is the same as my earlier point – the previous development approach had goals different from creating useful and usable software for the people who visit the website, and I should have recognised that earlier and thought about the implications for where we should be putting our research time and effort.
A seventh thing I learned: User Researchers are lovely people
To be fair, I already knew this one. They are often smart people too, and I really appreciated their thoughts and advice – anything interesting in my opinions was probably inspired by things they’ve told me, and I’m sure many of them will recognise our conversations in all of these things I learned.
An eighth thing I learned: Education about the work and doing the work are different things
It seems self-evident that doing user research and teaching people about user research are different things. But there are lots of grey areas, and it’s easy to fall into the gaps. One that we discussed often was bringing people from outside the product team into the observation room during usability testing. On the face of it, this seems like a great education opportunity – seeing what research looks like first-hand. However there are a lot of risks, some of which we saw play out ourselves.
Because the guest hasn’t been involved in the product team’s decisions that led to that research session, they will find it difficult to take any meaning from it; instead the main thing they will take away is entertainment – seeing what a user looks like in the flesh, and what usability testing looks like. They may come away thinking user research is fun, or intriguing. But without seeing the whole process through from kick-off to debrief, the guest won’t come away with any meaningful conclusions about how it fits into the process of design.
This can be fine, but problems occur when guests are important people who want to contribute their own observations about what they have seen. Unless a team has been together for a long time and is mature in its practice and rhythm, these opinions can be distracting for a team that won’t know how to weigh them against the more relevant observations from those closer to understanding “what do the team need to know at this point?”.
Some of our researchers used video playback sessions to open up research to a wider group, where they could moderate the observation experience in addition to the research session. When we were guests at GDS, we were sat at the back – perhaps physical separation is another appropriate method. Banning guests from the observation room is often unpopular, as no-one wants to hear that they will be a distraction, but video playback sessions may be a way to avoid hard feelings and manage those guests appropriately.
A ninth thing I learned: Watch out for people using user research as a weapon
I fell for this at least once, and still haven’t worked out all the nuances of how to avoid this. Because research finds ‘truth’, there is a risk of it being commissioned to prove a point – particularly when someone wants to prove a colleague wrong.
One of the more obvious tells is when research is commissioned on someone else’s work – unless the commissioner has the ability to make changes to that work, the impact of your research is likely to be nothing. Especially if the commissioner doesn’t have any agreement that evidence is going to inform decision making (“you can’t reason someone out of a position they didn’t reason themselves into”).
But it’s not always that obvious, and there are a lot of grey areas, which I haven’t got my head around yet. Ultimately the ‘point’ of a user research team is to help individuals and organisations make better decisions, and I can see that commissioning research to help inform viewpoints is a step towards evidence based decision making. But it has a lot of internal political risks also. I have no advice other than be careful…
A tenth thing I learned: Only you can be responsible for doing good work
A popular ethos in a lot of public sector research is collaborative research – active involvement of the whole team in the research process, to increase their understanding of the situation for real users. However, what the rest of the team will lack is an understanding of the rigour that should go into designing and running studies and drawing conclusions, and of how to appropriately caveat results to ensure they are communicated accurately, and are as ‘true’ as possible.
You can imagine a scale of rigour from “just making it up” to “a peer-reviewed, repeatable academic study”, and plot any study on that scale. The lowest bar for a researcher working in industry is “my colleagues will believe me”. But that is far lower than the bar we should be setting of “this meets my professional and ethical standards”. The risk is that no-one will be checking that it meets your professional standards, and the only checkpoint is whether your colleagues believe you. And because decisions made based on research can often take years to come to fruition, there is very low accountability for poor-quality research. So being a good user researcher requires commitment and honesty, far beyond what anyone other than yourself will be able to assess.
A thing I found helpful during my time in the public sector was remaining a member of the games user research community, and being exposed to their discussions, for example via their Discord. Ensuring that data is accurately gathered and honestly reported is a common discussion topic, and being exposed to the professional expertise of other researchers helped me set my standards appropriately.
The risk when running collaborative research with a wider team of non-researchers is that they won’t understand, or be able to apply, the rigour needed to separate ‘safe’ conclusions from ‘unsafe’ ones. One of the tools our researchers have applied is education about the ‘synthesis’ stage – the gap between data collection and debriefing, where a researcher goes away, thinks about the results properly, and translates the team’s early thoughts into results we are confident in. Ensuring that the teams we work with understand and see the value in that stage is essential.
The eleventh thing I learned: “it depends”
There are a lot of trendy catchphrases and opinions in the wider user centred design space, and I have often been guilty of repeating them. Some you may hear include…
- “Quantitative research tells you what, and qualitative research tells you why”
- “Surveys are bad”
- “Focus groups are bad”
- “Upfront research is good/bad”
- “Measuring things is good/bad”
And each of these statements is often true. But not always – it depends on the context. Parroting these phrases can deter people from thinking further about the arguments they represent. And that’s dangerous, as none of them is ever completely right or completely wrong, and the answer is usually “it depends”. I guess the lesson is that popular sayings are no substitute for thinking, and everything always depends on the context (which is probably true of the previous ten things I learned too)…
I probably learned more than 11 things from my time in this role, but the rest escape me at this moment – and I’ve probably rambled on enough. Congratulations for making it to the end!