Think like a UX researcher: An interview with David Travis and Philip Hodgson


Published: 11 August 2019

How to sharpen your research skills and choose appropriate research methods.

Gerry Gaffney

This is Gerry Gaffney with the User Experience Podcast. I have two guests today. Both have PhDs in experimental psychology.

David Travis has been running ethnographic field research and usability tests for 30 years, as well as running training courses in person and online for thousands of students.

Philip Hodgson has been a practitioner and a mentor for over 25 years and has worked across multiple industries and regions.

Together they recently wrote an excellent book called Think Like a UX Researcher: How to Observe Users, Influence Design and Shape Business Strategy.

Philip Hodgson:

Thank you.

David Travis:

It's great to be here.

Gerry:

Tell me, why did you write the book?

David:

Let me go first on this one. One of the real reasons I wanted to write the book was because my last book had the word e-commerce in the title, and I didn't want to be known as the person with a book with e-commerce in the title, given that it seems so outdated these days. So I was very keen to come up with a book that had a different kind of title. But perhaps more seriously, the other reason I wanted to put the book together was because it seemed to me that there weren't any books with a focus on experimental psychology and its role in user research.

So when I looked around at my bookshelves, I would see various books to do with different aspects of user research, but none of them took a psychological perspective, and I felt that was something that was missing. And related to that, though a separate issue: when I look at all of these books on my bookshelves, one of the things I notice is that I've not really read many of them. I've got loads of books. I'm a kind of UX book addict, and I'm continually buying new books when they come out, but I very rarely manage to finish them. And this isn't a comment on the quality of the writing or the books themselves. I think it's a comment on perhaps the way we read these days, which is that we tend to read more bite-sized articles, or we tend to dip in and out of stuff, because of the experience that we have on the web.

And what I wanted to do was put together a book that mimicked that in a way, so that you could just take it off the shelf, open it up anywhere, read something and get some value from it, rather than feeling that you need to start at the beginning and work your way through to the end. So it was those two things really. The first was a book about user research with a psychological angle, and the second, a book that you could just pick up and start reading.

Gerry:

And Philip?

Philip:

Yeah, I would echo all of that. Certainly the fresh perspective that I think we bring by injecting some experimental psychology issues and experiences into the book gives it an edge. But also, over the years and most recently, David and I have often found ourselves reflecting on our UX careers and on our experimental psychology careers. In fact, it's almost difficult to write about UX without reflecting on your own experiences. That's where we draw a lot of the information in the book from: things we've done that have worked out well, and things we've done that have worked out badly. And we felt that pulling these together and sharing our experiences with other folks might help shed some new light on some UX research issues.

Gerry:

I might add, lest listeners are put off by the term “experimental psychology”, that the book is in fact very practical and hands-on in its content. Now, you open the book by talking about differentiating between good and bad research. And you write, “People don't have reliable insight into their mental processes. So there is no point asking them what they want.” But isn't this precisely what many researchers are asked to do by their clients?

Philip:

It is, you're right, Gerry. Often the client or the sponsor or your team boss asks you to do just that: to find out what users think, what customers think. And I think some of this results from clients thinking that that's what user research is about: if you want to find out what people want in the future, just ask them. It's a well-worn technique in various customer-related research fields, and that is often the expectation. In an ideal world, of course, we would like the client or the sponsor to present us with a question or a problem and let the UX researcher figure out the best way of answering it, rather than the client prescribing a method such as just asking questions.

It's a little bit like going to the doctor and letting the doctor diagnose you, rather than going to the doctor and telling the doctor what pills you want. So I think that's partly the reason we get so many requests and expectations to just ask questions.

David:

I think it's at the heart of what makes user research poor. When I work with clients and I see the previous research that they've done, and it turns out to be poor, the reason is that the work they've done in the past has been very much around, for example, preference testing. So they've got two versions of a design, and they've gone out and asked people which one they liked best. While that seems a sensible way of doing research on the face of it, and you might think, well, that's exactly what we're interested in, the fact is it's the worst way of doing user research, because it's very opinion-based rather than behaviour-based. And one of the key points we make in the book is that the best predictor of future behaviour is past behaviour. To get at that, you need to understand people's behaviours rather than be led by opinions. I think it's absolutely true that clients ask for that kind of research, but there's a learning process where you have to help give them what they actually need rather than what they ask for. Because if that is the question they genuinely have, then it requires market research with thousands of users to find out which one they prefer. But if they're more interested in what is going to work best for their users, we can answer that much more cheaply and much more accurately, with smaller samples.

Gerry:

So what constitutes good UX research?

David:

There's an essay in the book where we quote a guy called William Badke, who wrote a book called Research Strategies, and he has five key checkpoints. I think they apply to UX research as well as any other kind of research. When he talks about inadequate research, his checkpoints are: it merely gathers data and regurgitates it; it deals in generalities and superficial surveys, avoiding depth and analysis; it asks no analytical questions; it doesn't advance knowledge, but is happy to summarize what's already known; and it's boring. I think you could apply that to what makes poor UX research, flip those on their head, and you'd get what makes good UX research. To that I would add that I would define good UX research as something which gives us testable and actionable insights into users' needs. If our research is hitting that goal, then for me it's working properly.

Philip:

I would just add, to the point that Badke makes about poor research being boring: if we rely on off-the-shelf methods and just crank out the same studies over and over again using the same techniques, it's very easy to end up with research that's perfunctory and, frankly, boring. And just as for a comedian the cardinal sin is to not be funny, I think for researchers the cardinal sin is to be uninteresting. So I think it begins with finding an important and interesting question that really gets folks on board and creates some intrigue. And then, as David says, there are the checkboxes that it needs to tick: collecting reliable and valid data, and going beyond the obvious. Also, with good research you typically end up with more questions than you started with, because you've opened up new lines of investigation. I feel that's always a good sign that you've got an exciting research question.

Gerry:

Yeah. I think good researchers do tend to be excited, don't they, by the existence and exponential growth of those questions?

Philip:

Exactly, yes.

Gerry:

In the book you focus, from a practice perspective anyway, largely on field research and usability testing. I guess they're the two big chunks of the book in many ways. Perhaps you could briefly describe each activity and its purpose. I don't know which of you wants to go with this one, guys.

David:

Let me take this. The reason is because I strongly feel there are only two questions that we answer as UX researchers.

The first question is what do people want? And the second question we ask is, we've got this thing, can they use it?

And given those are our two questions, the two methods follow: to understand what people want, we do field research; to understand whether or not people can use our thing, we do usability testing. And actually, it's a good map. I'm sure, Gerry, you'll know the double-diamond model that the Design Council developed for design processes, based on what they found worked in top-performing design firms. Those two approaches map perfectly onto that design model, or that design approach. First of all, there's a phase where people are trying to understand users' needs, and they're not going in with preconceptions; they're going in to understand what it is that people are trying to achieve, what's the meaningful activity they're engaging in?

And then out of that come various ideas for products. The next phase is to pick one or more of those and start coming up with prototypes and ideas and ways of implementing it that they can put in front of users and test. So the purposes of the two activities are really quite different. And I think one of the big mistakes I often see is that teams tend to go too quickly into the usability testing phase before they've properly understood the user needs. They'll also say to me, well, okay, we didn't go out and do field visits to users, but it doesn't matter, because what we're going to do is run a usability test, and we can ask people then whether or not it meets their needs, or use that as a way to find out what the meaningful activity is that they do, what the bigger picture is. But you just can't do it that way, because you're operating out of context.

Gerry:

Nevertheless, there are a lot of business drivers that force user research or UX teams to do exactly that: start at the second diamond and skip the first one entirely.

David:

I would say the reason behind that is to do with the maturity of the organization. That would be an organization with relatively low UX maturity that thinks of usability or user experience as basically a QA-style function. So it's around the notion of testing and evaluating the thing that we've got, and they imagine that the way to understand user needs is to just listen to what marketing say. They don't consider that a UX function. As organizations get more mature in UX, they appreciate that there's this whole other area to do with understanding user needs that can be included as well. So I don't think it's pressure from the business. Someone always goes out and claims to understand what users need, so that activity happens. It just happens poorly in organizations where it's not led by a UX function.

Gerry:

I must say, I saw the double diamond in the book and I thought, oh God, here we go, another variant on the double diamond. But you guys, I think, have taken quite a sensible and minimalist approach to the modifications you've made to the traditional double diamond.

David:

Good.

Gerry:

Philip, anything to add to that?

Philip:

Not really, I think David's covered it well. I'm just reminded of my first UX boss, who used to focus on these two questions and reduced them to a catchy phrase: are we building the right thing, which requires field research to answer, and are we building the thing right, which requires usability testing. And I think that's one easy way of noting the distinction between these two methods.

Gerry:

Yeah. Now, oddly, in the book you say that even though usability testing is the wrong thing to do in the first instance, it's also the best thing to do first in many instances. Can you pick apart that apparent contradiction?

Philip:

Yeah. Let me, let me…

Gerry:

Yeah, you get the difficult one.

Philip:

This is a great question, by the way. In our book we have some discussion triggers at the end of each chapter, and if we get to do a revised version, this is a question I think we might include; it's a really good question. The sense in which it's the wrong thing to do a usability test first is when the design process has started by testing a prototype, because, as we mentioned with the last question, this skips an important stage: the discovery stage. I'm reminded of the American poet Louise Bogan, who posed an interesting question when she said, “The initial mystery that attends any journey is: how did the traveller reach his starting point in the first place?” I think we're often too keen to start the design journey, to get moving, to get building and testing, and we seldom stop to ask the question, how did we get here? That is, how did we get to this starting point?

Is this even the right prototype to be testing? In that context, we might say usability testing is the wrong place to start. But there's a sense in which it's the right thing to do. If an organization has low UX maturity, then often starting with a usability test, which can be relatively low cost and low investment, can be a good entry point to UX, because it can flush out lots of issues. It can act as a wake-up call to a team. It can reveal opportunities that were overlooked. So it can work well in low-UX-maturity organizations that are being introduced to usability testing for the first time.

Gerry:

Okay. Now to move to something fairly specific, personas as articulated by Alan Cooper - who we had incidentally on UXpod some years back - can be a very useful tool, but they can also be a waste of time or even something that undermines credibility. Where did it all go wrong and what's your take on using them?

David:

This so annoys me, this issue of personas being treated as the heart of the problem. All personas are, like other UX deliverables (user journey maps would be another example, or scenarios), receptacles for your user research insights. So you go out and you do some kind of research, and then you need some way of conveying it. You could write a 40-page report, or you could put together information radiators, like a persona, which try to distil or summarize the research that you found and communicate it to the team. And although I don't have shares in personas, and I don't really care whether teams create or use personas at all, they do need to synthesize the research that they've done and capture and describe their insights for the team.

And I think personas are one of the best ways I know, certainly one of the most effective ways I know, of doing that. Now, where it goes wrong is where teams don't use personas as a synthesis of their research insights, but just come up with made-up assumption personas that are no more than a collection of their biases and hunches rather than being based on real data. And I think that gets to the heart of the problem: in those situations, personas are being treated as an end in themselves, when really a persona isn't just the artefact with the photograph and the quotation and so on. Personas are the process that you go through, the field research you do to understand your users. If you do that and then create the personas, you're doing it properly. If you jump straight to the artefact and make up a faux character who you believe is a user of your system, but you've done no research and you've got no evidence or data behind it, then clearly that's going to fail, because everybody will look at it and realize that it just doesn't ring true.

Gerry:

Yeah. Tomer Sharon refers to them as bullshit personas.

David:

Yeah, I've heard them called various things… and it's a good definition of them, isn't it really?

Gerry:

Yeah. Although he argues that there's a place for them. He says, you know, in some instances you may start with the bullshit persona, and then, as long as you know that's what it is, you validate it and cross-reference it and yada yada, and end up with something good.

David:

I'd agree with that. I think it's often a good exercise to get a design team or a development team in a room and say, okay, let's identify who we think our user groups are, and then get people to create personas for each of those groups, which are made up. But then we say, okay, this is our starting point, so what we now need to do is identify the key assumptions we have in these and then validate them. So if you're using assumption personas to kickstart your research, I think it's a good way of planning. But obviously the worst way of using them is to treat them as the end of the process when you've not done the process.

Philip:

We can see the effect of, what can we call it, the misuse of personas if we look at it from the point of view of the designers themselves. We often hear designers saying, I don't know what to do with these personas, because they're just given the snapshot or the summary poster, and from a designer's point of view that has very little value at all. It's like giving somebody a poster or some stills from a movie they've never watched; they can't do anything with it. So the key, I think, is to find a way to take the designers with you into the field so that they actually experience the users. Then the deliverable, the persona, the poster, becomes a useful reminder.

Gerry:

Yeah. I liked the term that David used, receptacles of user research, or receptacles for user research, because I think practitioners and clients can become very enamoured of deliverables. I've worked for organizations where, you know, we've got a problem, and the solution is, let's create a customer journey map, and you think, well, of what benefit is that? How do we fight that tendency?

David:

I wonder if we do, really. In the old school of deliverables, in the world where I used to work when I first started our own consultancy, a deliverable was a 40- or 50-page report which nobody ever read, but which took about four or five days to write, which meant I could charge the client for that time. To me that's a not very useful deliverable. I think things like personas and customer journey maps, these very visible receptacles of research insights, these information radiators, are a good thing. I'd like people to become enamoured of those kinds of deliverables. Anything that engages the team and the stakeholders with the research that we're doing is a good thing, not a bad thing, so long as the focus isn't on the artefacts themselves but on the process needed to create them. Then I think this is exactly what people should be doing.

Gerry:

Okay, let's move along. Why is desk research important, and what is it?

Philip:

Desk research is a strange term in a way; I guess it's research you can conceivably do sitting at your desk. But it's the background research. It's building the foundation for the research you're about to do. In a way, what you're asking yourself is: this question we've come up with, this question we're trying to answer, does the company already know the answer to it? Has somebody done this research before? Especially given the turnover of employees within design groups, the research often tends to reside in the heads of the researchers, and when they leave, the company loses that value and finds itself paying good money to do the same research over and over again. So I think we owe it to ourselves, to our company, and to the sharpness of our own research to find out what we already know, and whether we can build on it, rather than simply repeating the same research all the time.

David:

As people who need time with our research participants, and given how valuable that time is, how little face-to-face time over the course of a project we actually get, it's incumbent on us to do some kind of due diligence. We don't want to spend time with participants asking them questions, or “discovering”, in inverted commas, things that we could have discovered if only we had read an appropriate research report in a journal somewhere. So the idea is: let's check what's already known about this field and this area. Even though it may not be about our exact users, it's something which gives us an understanding of the domain. And then, when we go in and speak with our users, we don't need to spend the first 15 minutes or so asking questions that we could have got the answers to very easily beforehand. That means we can use the time with our users much more effectively than if we go in as people who literally know nothing.

Gerry:

Now in the book you have some rather unkind words to say about surveys. Do you want to tell us about that?

David:

Well, I'd say the problem isn't with surveys; it's with the way surveys are used. Surveys are a great tool later on in design and development, when you want a numerical answer to a specific question that you have. The problem is that surveys are often used as a discovery research method. So, rather than going out and understanding what users need, researchers will sometimes instead send out a survey to a large sample of their user base and ask a series of specific questions about what they think they need. And the problem with that is you can only ask questions about the things you can think of asking questions about. It means you obviously don't ask the questions that you never thought of asking. When you do field research, you discover what you don't know you don't know. You discover the questions that you didn't think were questions beforehand, and those questions turn out to be the really important ones.

If you start with a survey, the problem is you can only discover the things you already know you don't know, because you can phrase them as questions. What you can't discover are the things that you don't know you don't know, and as a consequence the use of a survey in discovery is very, very problematic. It's also a time when numbers are less important to us than understanding what the issues are, what the domain is, and what concerns users have that they may not be able to articulate. And that points to the other problem. If you, or your listeners, think back to the last time you completed a survey, the chances are you didn't spend a long time thinking about each question, making sure that you gave exactly the right answer. You answered the questions as briefly and concisely as you could, and you gave a top-of-mind response.

And in some instances that might be fine, if you want to know the percentage of your audience that uses a particular brand of mobile phone. But if you're interested in users' deeper behaviours and needs, the only way you can get at those is by observing them. But I want to emphasize that Philip and I aren't rejecting the use of surveys. We both use surveys in our user research, but we tend to use them in the later phases of development and design rather than in those earlier phases.

Gerry:

Okay. Now, another contradiction that I found in the book, and I think there are just a couple: you write that “UX research with unrepresentative participants is a waste of time and money,” but on the very next page you write, “Engaging a representative sample sounds like a good idea, but it is flawed.” Can you reconcile these apparently contradictory statements for us, please?

David:

I'm going to let Philip answer this one.

Gerry:

Nice one! Yeah, handball.

David:

These contradictory statements, they're like, you know, Zen-like quotations.

Gerry:

Koans.

David:

What you're spotting here with both of these is exactly that. You know, it's like, the obstacle is the way, I think.

Gerry:

The Way that can be known as not The Way.

David:

But Philip you have a go.

Philip:

Yeah. It's a great question, Gerry. Well spotted. We wish we'd spotted it as well. But we didn't.

But really, it's actually quite easy to resolve: these two apparently contradictory claims are describing two different situations. So, “UX research with unrepresentative participants is a waste of time and money”: in the context in which we use it in the book, this is talking about selecting users to do tasks in a usability test. The sense in which unrepresentative participants would be a waste of time and money would be, for instance, if you were testing an online banking website with people who did not do online banking. It's a cardinal sin of UX research to test with the wrong participants; you're never going to get successful research. So that would be an example where it would be a waste of time and money.

But the other claim, that engaging a representative sample sounds like a good idea but is flawed, refers to the case where you're trying to use your sample to represent population demographics. So you're trying to cover all the demographics in a target audience, but you've only got 10 or 12 participants. In that respect it's a flawed approach: you would end up needing literally hundreds of participants, 300 plus, depending on how many demographics you were trying to represent.

David:

There's a corollary to that as well, which is that when you're doing your research, it should be with representative users in the broadest sense. So, for example, let's say you were doing research on an air traffic control system. You clearly need to involve people who are air traffic controllers; you'd be bonkers not to. That's the first part of the big issue. The second issue is: when it comes to actually recruiting your participants for your research, which ones should you use? I'd go so far as to argue you want unrepresentative representative participants. What I mean by that is, rather than taking people from the middle of the bell curve of abilities, take people from the extremes, particularly the ones on the left-hand side of the bell curve. So, for example, when you're doing usability testing, in this particular scenario I'm creating it's a good idea to recruit air traffic controllers that have slightly less domain knowledge than the average, or you could test participants with less technical knowledge than most people within that group.

And that's because, with the small samples that we're using in usability testing, it makes it more likely that we'll find problems that affect other users, or even our expert users when they're under some degree of stress, because those users really push the envelope. If you've got a user who is technically very competent, they'll be able to muddle through with any system that we throw at them, because they've worked out how to troubleshoot systems. We're going to learn less from that representative user than from another user who's also representative of the overall group but has less domain knowledge than the average. So it's a great question, I think, another one of those it would be good to include in our “think like a UX researcher” prompts at the end of each essay. The contradiction actually points to this really important issue: you want people that are broadly representative, but within that, aiming for perfect representation is a fool's errand, because it means you need to balance a ridiculous number of demographic variables which are probably unimportant to the usage of the system.

Gerry:

I find that clients often want to set you up with users as well if they've got control of, or an input into the recruitment process. And I'm always saying to them, you know, we want stupid users who hate your system.

David:

We would've said that, but it wouldn't have been politically correct.

Gerry:

Of course.

We don't worry about that on UXpod. Hey, there are a couple of places in the book where you write that UX research is a team sport. Can you pick that apart a little for us, maybe?

David:

I've spent a lot of time working with the UK government, with the Government Digital Service, GDS. This was a phrase I first heard them use when I was working with them maybe six years ago or so. They like their slogans at GDS, and this is one of them: user research is a team sport. And it makes a really critical point. There's a quotation from someone that I know you know very well, Caroline Jarrett, where she writes: your job as a user researcher isn't to understand about users; that's a fallacy. Your job as a user researcher is to help your team understand about users. It's no good if that knowledge ends up inside your head.

And what this particular slogan is getting at is that when you're doing research, there's no point in you becoming an oracle of user requirements or user needs, where people always come to you and ask you questions. The goal is to help the team increase their level of understanding and knowledge of users, so that when you're not there and they need to make a quick decision about an element in a user interface, let's say, they can do that based on their own experience, because they've been involved as a team engaging in the research, rather than either making a poor decision or waiting for you to come back to the office.

Gerry:

Now, towards the end of the book you have some very practical information and a guide on what to do in the first month in a UX research role, which I found very interesting. But do you have any general advice for people who are new to UX research, or perhaps who are working more on the development or the UI end of things but wanting to get into doing research? Perhaps, Philip, you could lead that one off.

Philip:

Yeah, I think so. Whether it's user research or any other kind of research, I think a good and easy starting point is to just develop curiosity, to become curious about things. We're all born with curiosity, but somewhere along the way we seem to lose it. Maybe one way of doing this would be to just observe people. On your way to work, instead of having your head down looking at your iPhone, just look around and observe people, and see if you can spot five unexpected behaviours on your way to work, and maybe generate some intriguing questions about those behaviours. I think at the heart of any research is this need to be curious about things, to fall in love with questions, to fall in love with investigations, to fall in love with finding things out. So: developing curiosity. Reading detective novels might help. Anything like that gets you in the right mindset for being a good researcher.

David:

The thing I'd add to that is that I know a lot of the students I have, for example the ones that take my Udemy class, when they first start in the field haven't really had much experience before, and they think of this area of user experience predominantly as a design discipline. And I mean design in the sense of pixels being pushed around on screen. So fundamentally they think of it as a kind of UI design discipline, and they don't realize at first that design research is an important part of any design project you undertake. And that's quite revelatory for many of them, because they may not be particularly good at visualizing things and making things appear neat in user interfaces.

But they do have an analytical frame of mind, and they're keen on the idea of going out and understanding users. So it can be quite liberating for them to realize that actually there's this whole area of user experience where I can do observation, I can just go out and watch the way people do stuff, and then use that as a way to help the designers ideate and come up with a range of ideas. So the general advice I'd add to what Philip said, this notion of observation, is to realize that the field of user experience has these two quite distinct areas really. One is the design research and one is the design itself. And you don't have to do both. You know, you can focus on one area or the other.

Gerry:

Now, you talk about, you know, observation and curiosity. And while I sort of concur with both of you in your statements there, sometimes it seems like there's a lack of rigour among UX practitioners. You know, that just curiosity is enough, just observing users is enough, and there's a lack of knowledge of, you know, the domain.

David:

In fact, there's a quotation in the book where we say you should not care what the outcome of your research is. And what we mean by that is having this kind of objectivity when you carry out your research, that you don't go in with a predefined idea about the way things should look. Instead you go in with an open mind, work it out, and then note down what happens. And in fact, this whole idea of rigour is one of the reasons why we wanted to bring an experimental design angle to the field of user research from psychology: making sure that people realize there are certain things that matter when we do research, to make sure that you end up with valid and reliable results. And what's curious is people often think that means numbers, that you need huge sample sizes, or that you need to measure things you can quantify afterwards, and nothing could be further from the truth. It comes back to that early question you asked us about what makes good UX research, and it's really about this notion of actionable and testable insights into users' needs. So long as you are acquiring those, then I'd say you're doing good quality UX research.

I'm not sure that answered the question though, Gerry. Did it answer the question?

Gerry:

Well, look, it did address… I guess, you know, perhaps I was being curmudgeonly and suggesting that some of the people that you've come across working in the field, you know, read blogs and are curious and are observant, but don't take the time to read books like yours, for example.


Philip:

Well, I certainly think they’re reading…

David:

After this podcast I’m sure…

Gerry:

I'm sure that they'd be rushing out to do so. Yeah.

Philip:

Yeah. Hopefully.

Gerry:

We don't have time, you know, really to touch on the content of the book very much in a format like this, but I'd certainly recommend it. The book is called Think Like a UX Researcher: How to Observe Users, Influence Design and Shape Business Strategy. And I only found one typo in it, by the way. How many did you guys find?

David:

Damn, you found one.

Philip:

I'm really upset now. I'll read the book again and see if I can find it.

David:

Philip and I were almost literally… I mean, I had a full head of hair before we started this book, and all of the problems that we had were down to typesetting issues, to do with typos that we spotted within it. And it's interesting, because in the past other books I've done have all been camera-ready, but this one was sent away to be typeset separately, and that introduced so many problems that we had to fix as we went through the process. I thought we'd got them all, but you found one.

Gerry:

Well, look, you know, it is an excellent book, and I'd certainly recommend it very highly. I found it personally very valuable, and I've got a reasonably lengthy background in the field, so a very, very useful book, but also very useful for people starting out. So I would thoroughly recommend it.

David & Philip:

Thanks Gerry.