Published: 17 December 2017
Tech firms that are not themselves diverse can't be effective in meeting the needs of diverse audiences. Sara Wachter-Boettcher talks about how we've built exclusion into our algorithms.
Gerry Gaffney:
This is Gerry Gaffney with the User Experience podcast. My guest today is a content strategy and user experience expert. She’s based in Philadelphia, where she’s principal at Rare Union. She wrote “Content Everywhere: Strategy and Structure for Future-Ready Content” and co-wrote, with Eric Meyer, “Design for Real Life.” Her most recent book, and the reason I asked her to talk to us today, is “Technically Wrong: Sexist Apps, Biased Algorithms and Other Threats of Toxic Tech.”
Sara Wachter-Boettcher, welcome to the User Experience podcast.
Sara
Hello Gerry, thank you so much for having me.
Gerry
In “Technically Wrong” you raise some very serious concerns about where we’re going with technology in general. What prompted your interest and your concerns?
Sara
Well, you know, there were a number of things that started to come together at the same time. At a personal level I kept seeing lots of little interaction details that I was finding alienating or frustrating, for myself or for my friends, starting with form fields that didn’t seem to accommodate who they were. I had a friend who had recently gone through a fairly public process of changing the way she was presenting her gender identity, and she was really struggling with things like the way that Facebook allows you to do that now in some ways, but in other places still forces you to choose a male/female binary when you sign up.
So lots of people were telling me stories about the ways that the sites and apps they were using were forcing them into a box, or trying to define them in ways that didn’t make sense to them.
Meanwhile, though, I started hearing all these other stories, and maybe the most prominent one was from my friend Eric Meyer, who had a really terrible experience: his daughter, a little girl of six, had died of an aggressive brain cancer, and all these tragic memories from the year she died were resurfaced to him by algorithms trying to make him celebrate his year. From there I started hearing all these other stories about the way the social web we’ve created has been designed to delight us and celebrate our best moments, and just sort of falls apart with anything that is less than that. What I started realising was that there was all this connective tissue between these kinds of things, and that the same kinds of problems inside a tech company that would enable them to launch, let’s say, a form with a menu that doesn’t account for multi-racial people are the problems that also enable them to launch these kinds of happy, peppy little updates that leave some people out. It’s this real monoculture within these companies, and a lack of understanding of, or caring about, the diverse people who are actually using the products.
Gerry
I guess it’s not a surprise that technology can be tone-deaf and simplistic; we’ve all come across those sorts of examples. But isn’t it sensible, in a way, to offset our human tendency to make ill-considered and inconsistent judgements by handing them over to algorithms and machines?
Sara
I mean, I think this is sort of the promise that we’ve been given by technology: that we can hand off a lot of the decision making that would have been done by humans to machines, and that that’s somehow going to be a neutral or unbiased thing. The reality, of course, is that an algorithm is just like anything else that humans make. Things that are made by humans show the fingerprints of their creators; they are always influenced by the people who made them. So if you have people who are designing algorithms and embedding those algorithms into other kinds of systems that are deciding what information you see, or how a system is going to interpret your content, of course those systems are going to be influenced by the people who made them, and by what those people foresee and don’t foresee. And I think that may be something the public has not really understood enough about, right? Because so often when we talk about something like an algorithm, it’s talked about like it’s just some magical process behind the scenes, something you couldn’t possibly understand, so you just say the word ‘algorithm’ and nobody asks any more questions. The reality, of course, is that it’s just a series of steps, a series of decisions that the system has been trained to go through, and the people who did that were humans. So, for example, the way you teach an algorithm to understand what’s in photos and do image recognition is by showing it lots of photos and telling it things about those photos, so it has something to learn from. And what we’ve seen over and over is that if you take a bunch of photos of people and say, “OK, here’s what people look like,” and train it on what people look like, but all of those photos are of white people, then you have an algorithm that learns about people from a very limited group, and all of a sudden it can’t recognise people of colour, or doesn’t do as good a job with people of colour as with white people. And of course that’s a bias, deeply embedded into the algorithm, that has to be there if the humans who made it weren’t thinking about whether they had a data set that was actually representative.
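To make that training-data point concrete, here is a minimal, purely illustrative sketch. It is not from the interview and does not represent any real recognition system: the group names, numbers, and helper functions are all hypothetical. A toy classifier picks a single decision threshold by minimising error on a training set that is 95% one group, and then does far worse on the group it rarely saw.

```python
import random

random.seed(1)

def sample(group, n):
    # Toy one-dimensional "images": each group's examples cluster around a
    # different value (a stand-in for real differences a model must learn).
    centre = {"A": 0.3, "B": 0.7}[group]
    return [(random.gauss(centre, 0.2), group) for _ in range(n)]

def train_threshold(data):
    # "Training": pick the single cut-off that misclassifies the fewest
    # training examples ("A" below the threshold, "B" above it).
    def errors(t):
        return sum((x >= t) != (label == "B") for x, label in data)
    return min((i / 100 for i in range(101)), key=errors)

def accuracy_by_group(threshold, data):
    # Report accuracy separately per group rather than one overall number.
    results = {}
    for g in ("A", "B"):
        cases = [(x, label) for x, label in data if label == g]
        correct = sum((x >= threshold) == (label == "B") for x, label in cases)
        results[g] = correct / len(cases)
    return results

# Training set skewed 95% group A / 5% group B; the test set is balanced.
skewed_train = sample("A", 950) + sample("B", 50)
balanced_test = sample("A", 500) + sample("B", 500)

t = train_threshold(skewed_train)
print("learned threshold:", t)
print("per-group accuracy:", accuracy_by_group(t, balanced_test))
# Typical result: near-perfect accuracy for group A, far worse for group B,
# because the model effectively learned what "people" look like from group A.
```

The specific model doesn’t matter; the point is that any system tuned to overall accuracy on an unrepresentative data set will quietly trade away performance on the people who are missing from it.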
Gerry
Yeah, and I guess we’ve seen stories come out of the US in recent weeks, specifically about people of colour being misrecognised by face recognition systems, which unfortunately are often used surreptitiously, so people don’t even know that they’ve been exposed to this sort of surveillance, and then arrests being made on the basis of those flawed algorithms.
Sara
Yeah, I mean, one of the things that we’re starting to see is this intersection between surveillance culture and the technological capacity to build all of these AI-driven or algorithmic systems. We can certainly have a discussion about the ethics of the surveillance piece of this, but I think one of the things that isn’t talked about enough is that the technologies being used to decide who is surveilled, for what and when, may be deeply flawed.
Gerry
There’s also the problem, of course, of algorithms that don’t allow any introspection, whether because they’re based on genetic design principles or because they’re actually modifying themselves, to the point that nobody really knows what’s going on inside them. That must be difficult to deal with.
Sara
Well, sure, and I think what we don’t have is any kind of agreed-upon method for tracking whether or not these algorithms are tuned to the right kinds of results, or whether they’re getting those kinds of results. How would we know if they’re screwing it up? How would we know if they are actually increasing bias? What are the methods of transparency and accountability? There are some people who are starting to work on that, but broadly speaking we have a consumer tech industry that has been really reluctant to even admit that there’s a problem, much less think systemically, at a macro level, about what it can do to solve it.
Gerry
I guess following on from that, you do write in the book that “far too many people in tech have started to believe that they’re truly saving the world.” And it does seem that as a society we’re enabling a techno-centric elite to assume an enormous interest in and control over our lives. How has that come about?
Sara
Well, I think one of the things that has really contributed to the way we see the tech industry is that there’s a kind of shroud of mystery over it, right? The way we’ve been talking about it in the media for decades has been very much about these brilliant wunderkinds, these magical, larger-than-life figures, like your Steve Jobs. When you talk about tech in that way, it really encourages the public to look at technology as something that is special: you don’t understand it, but you sure do love getting a new phone every year. And when the public is encouraged to look at technology in that way, it really discourages people from being critical. So you take that piece and then you mix in some of the economic factors in the way that tech is funded, and the way that there’s been, how do I put this, a real lack of oversight over where that money is going and whether it’s actually yielding anything valuable to the world. You end up with folks who can create something that has questionable ethics, that nobody really calls them on, and make a bunch of money doing it, and so they keep doing it.
Gerry
My mother told me that I should never attribute to malice that which can be explained by stupidity. Do you think there’s malice at the heart of companies like Facebook?
Sara
You know everybody always wants to know if I think that this company or that company is truly evil and I guess my position on...
Gerry
Sorry to be so predictable. [Laughs.]
Sara
I mean, I think it’s a normal question, but the thing is, I kind of don’t care anymore. What I mean by that is that I don’t really think most of these people started out with malicious intent. But the results that we’ve seen from companies like Facebook, and I would say Google is another one of these, is that you have some really massive impacts on people’s lives. You become incredibly powerful, incredibly wealthy, and you have done that by manipulating what your users see, and there are ramifications to that. If you do not understand that there are real ramifications, and that many of those ramifications have been incredibly bad for people, then that is wilful ignorance. It’s not like people are just now learning that there might be a problem with something like Facebook. People have been talking about the problems with everything from their ad platform, and the amount of targeting and data tracking that goes on there, to filter bubbles, to how Facebook changes how people’s emotions work. There’s a lot of research here. So if you are in a position of power at a company like Facebook, which is making money hand over fist, and you are choosing not to change some of the ways that you’re doing things, then whether you originally had a malicious intent or not, I think you are culpable, and I think you have to take a long hard look in the mirror and say, “Am I comfortable with this being the way that my company makes money?”

Take Facebook’s ad platform, for example. I think it was making them $27 billion last year, and meanwhile they had all these instances of problems, both large and small. At the large end, it was Russian-backed organisations buying all these ads to sow discontent and tension, particularly racial tension, in the United States during our election; that all happened on Facebook’s ad platform and they didn’t catch it. On the smaller scale, they were letting people do things like target housing ads based on race, and specifically exclude certain races from seeing housing ads, which is federally illegal; we’ve had fair housing laws here since the civil rights era. They were letting it happen; it was slipping by their moderators and slipping into their interfaces because they were not investing in running an ethical ad business, and they were under-investing in the monitoring of it. They’ve been called out on it over and over again, and even though they were called out on that particular fair-housing issue a year ago, just this year, this fall, the same thing came up again: they were apparently letting those ads slip through again. To me that says, OK, you’ve under-invested in being able to actually monitor and ensure that this ad platform is not being used for nefarious purposes, you’ve been called out on having done that, and you have failed to make the proper investment in that kind of oversight. That is the kind of neglect that, I think, stops being benevolent neglect and turns into something that is fundamentally unethical and really speaks to misplaced priorities.
Gerry
I guess when you look at companies like that, and you look at their fundamental raison d’être, which is to get people to click on things, it’s maybe foolish to expect them to behave in a different manner without oversight.
You used the term “monoculture” a few minutes ago, and many tech companies have a lack of diversity within their own ranks, which is frequently reflected in their algorithms and in their products. They tend to blame this on the pipeline. What do they mean by this, and are they right?
Sara
Yeah, I think those of us who have worked in tech for a while have probably heard the term “the pipeline” used a lot when talking about getting more under-represented groups into technology positions; often we’re talking about women, or about people of different races and ethnicities. So what ends up getting talked about is this idea that, well, we can’t get a more diverse team because we don’t have enough women and/or people of colour in the pipeline to hire from. Most of the people coming out of computer science programs in the United States, for example, are still men and still largely white. But I think that’s a big cop-out, because it’s actually been studied pretty closely that even compared to the percentage of women and people of colour coming out of these programs, the percentage actually hired by tech companies is under-representative. More of them are graduating than are actually getting hired. There’s also much more work that has to be done within these companies, right? You say you want a diverse team, but you’re unwilling to, let’s say, recruit from any different universities; you keep going back to the same very small list. I’ve definitely heard from black folks working in tech that one of the problems is that a lot of these companies don’t want to talk to, for example, historically black colleges and universities, which often have very good computer science programs, because they’re not on that short list they’ve always gone to. Or you don’t want to change the culture within the company at all; you think, well, OK, we want to hire diverse people, but they have to be exactly like the people we already have and fit perfectly into the culture we’ve already built. And you have to think, well, maybe the culture you’ve already built isn’t good for them, and maybe you’re going to have to change some things internally. So whenever you try to talk about those things, you end up in this discussion where somebody says, “Oh, but we can’t lower the bar, we can’t let in less qualified people.” And I think this is not a matter of more or less qualified; oftentimes what it really is is a matter of your narrow perception of what it means to be qualified.
So I think what you end up having is a system that recreates itself, right? You have companies that won’t change their behaviour; they say they want more diversity, they’ll talk it up in their diversity reports, and then at the end of the year they’ll be like, “OK, now we’ve gone from, I don’t know, 18% women to 19% women,” and you’re like, OK, congratulations? It happens so slowly because they keep saying they want to change the numbers, but they don’t want to do any of the work that would help enable that to happen. And so, over and over again, you have more diverse folks come into tech and then leave the industry, and it’s hard to imagine that ever turning into a groundswell of folks who want to join. Why would you keep trying to get into a club that keeps saying it wants you, but then treats you badly once you’re there?
And so I really think there’s a lot that needs to happen in tech, beyond changing any sort of pipeline, to enable it to become a more diverse place, and that would also prevent some of these problems that happen when you have a non-diverse team working on products that affect everybody.
Gerry
Now, many of our listeners or readers will be familiar with using personas, which are fictitious but representative users, as part of their user research and design toolkit, but you caution that personas can become exclusionary. Can you talk a little to that, please?
Sara
Yeah, absolutely, and I’ve certainly used personas many times, and I think they can be really, really great tools to help keep teams thinking about people when they’re designing. But one of the things I’ve seen happen is teams getting hyper-focused on the demographics of the people portrayed in a persona, or connecting demographics to motivations. So, for example, if you have a persona who’s a 42-year-old man and another persona who’s a 36-year-old woman, people will get very, very tied to the idea that the things that persona is doing are, you know, universal to all people who match that same demographic. So you get a lot of very shallow thinking about people and what they care about. I’ve tended to think that this over-reliance on making personas feel real has led people to think about them as though they were the literal audience, when in fact they’re just reminders, right? It’s more like, can you run your decisions through a series of people and think through, would it work for them? Would it work for them? As opposed to saying, OK, I’m designing everything for “Suzie.” So I think that’s one piece of it. The other piece that comes into play is that you often get these personas that encapsulate just a narrow slice of the population, and nobody’s thinking, OK, if we think our target audience is going to be, let’s say, college-age women, what happens if somebody who’s a little bit different from that uses this product? Does it still work, or does it fail spectacularly? And if it fails spectacularly, can we do something about that? Is that an unnecessary limitation we’ve put on this product that could end up harming the people who might need it the most?
Gerry
It would be interesting to get Alan Cooper’s take on this as well, given that he’s kind of the parent of the use of personas.
Sara
Yeah, absolutely.
Gerry
One of the topics that we’re very fond of on UXpod is forms design, and you wrote about “some god-forsaken form that asked ‘have you been sexually abused or assaulted?’” where the answer was “Yes” or “No.” You use that as an illustration of the power that a single field can have. And I think this issue of power and power imbalance is in many ways at the heart of our interactions, isn’t it?
Sara
Yes, absolutely, and I think that’s something we don’t talk about enough in the field, as designers and people who are making decisions about interfaces: that you are fundamentally making choices about what a user is expected to tell you. I really like to think of it as, what are you expecting people to disclose to you? Because “disclose” is a word that carries more weight, right? It’s not just filling out information; you’re disclosing something. And are there topics that would make somebody really uncomfortable to disclose? Of course there are. The example I gave about the form asking about sexual assault and abuse happened in a medical setting. It was an intake form, a little digital PDF that they had sent me, and I was supposed to fill it out before I came in. On the one hand, I can understand why they want to get some of that information; sexual assault and abuse are rampant, and the number of women who have been assaulted in particular is extremely high, so I can see them wanting to address that in some way. But the problem comes in when nobody’s thinking about the experience of this person who’s never been to this place in their life, doesn’t know this doctor yet, doesn’t have any connection to anybody, and is just sitting at home saying, “Do I enter this data or not? Where does this go? What kind of databases does it go into? Is somebody going to ask me about this? Is somebody going to send this information to somebody else?” What are you going to do with this? I think we often take it for granted that if we want information, and we make the form feel nice and clean and pretty and easy to flow through, then it’s fine. And I think we need to take a step back and question: is this actually something that we need? And if we do need it, are we explaining it to people in a way that’s going to make them feel comfortable?
Gerry
Indeed. I sometimes think of the album by the punk band The Dead Kennedys called “Give Me Convenience or Give Me Death.” We all seem to be very ready to trade privacy and control for convenience, and I think that allows organisations, including states and governments, to do all sorts of things because nominally we’ve opted in.
Sara
Yeah, and I guess one of the problems is that people don’t really know what they’ve opted in to. Even if you try to explain it, it’s hard to tell what all the implications are of any given piece of data and what might be done with it, because it’s a whole chain: well, if people have this, then they can also get this data, and if this organisation or this government uses it, these are the things that could potentially happen. So it all feels kind of like a boogeyman, right? It could get you, it could not get you. And people, I think, write it off because it feels intangible, and without some sort of concrete threat it’s hard to make it feel real. I think it is totally unfair to expect the general public to have such a nuanced understanding of not only what’s being collected, how it’s being stored, and how it’s adding up into a profile of them, but also all of the potential negative things that might be done with that, and then to extrapolate that out to all of the systems they might interact with and all of the people who might end up with that data. It’s too much to expect of an individual person who’s just trying to get through a user agreement on a new app they’re supposed to be downloading for work or something, right? So I really do think we need better standards and better regulation around what is going to be acceptable, and that’s really tough, because historically the people who have been making regulation aren’t necessarily the people who understand technology. I think we need a fundamental shift in the way we have these conversations and who is involved. But so far what’s been very frustrating is that there are a few really big players, with a tremendous amount of money in tech, who don’t really have any incentive to be more ethical and who control far too much of the conversation.
Gerry
Yeah, I think the whole idea of opting in is an enormous cop-out: the impression that once somebody has clicked that button, or not clicked that button, they have consciously given us permission to do something when, as you say, they’re just trying to get on with using this app or this service or whatever.
You’re very critical of tech firms in general, and you call out Uber specifically, both for the treatment of their workers, or contractors, or whatever they call them, and for their use or misuse of client data. In some ways Uber, I guess, epitomises the heart of the problem, because for many of us it is so convenient, isn’t it?
Sara
Absolutely, and I look at Uber as being kind of a case study in what happens when you have almost zero regard for ethics in your organisation. They’ve been pretty explicit about that for years, although they’re trying to back-pedal on some of it now. For years they were very much like, “Look, we’re going to go into markets at any and all costs, we are out here to take down the taxi industry. We will do whatever it takes to do that. We don’t care if there’s a regulation that says we can’t; we’re going to do it anyway.” And when that’s your business model, you get very, very focused on that one goal and everything else seems irrelevant. So if you have a journalist who’s being critical of you, you might propose in a meeting that you should just track her movements using your technology, and all of a sudden that starts to feel pretty normal, because you’ve created a culture where anything goes as long as it’s in service of domination. And so you can look at Uber and the way they’ve reached this extreme place: dozens of people fired for sexual harassment this past year, having to oust the CEO, the “Delete Uber” campaigns because they were price gouging people during a taxi strike when the US was first trying to implement the Muslim ban at the beginning of the year. You can look at all of these crises, crisis after crisis, that they’ve had this year and say: this is what happens when you take the single-mindedness that is so common in the tech industry to its extreme and allow it to be the sole driver of your business. You end up in some extremely unethical places that I think most people in tech are really uncomfortable with when they actually stop and look at it; that’s not the world they want to be building. And of course there’s some cognitive dissonance when they realise that it is, in fact, the world they’ve actively been building, even though they’ve been talking the whole time about how they just want to connect the world and create new technology that’s going to help people. It’s hard, I think, to take a step back and realise that you have been part of something that may be harming people, but I think it’s going to be very difficult for us to get anywhere until people can recognise that.
Gerry
So what can we do to address the issues you raise in the book, both as a society and as individual designers?
Sara
So I’ll start with individual designers, because I think there’s a lot that can be done at the individual level that is not going to fix all the macro problems but does in fact matter. That’s about taking a step back and looking at the processes that you use in your team and the design decisions that you make, identifying where and how biases could creep in, and working out how you go about mitigating that. Where in your process do you ask questions like “What if I’m wrong?” or “What’s the worst that could happen?” or “Could this be used to harm someone?” or “How might somebody game this system?” Asking those kinds of questions of yourself and your colleagues is really important and valuable work. Of course, one of the things that also needs to happen is that that work needs to be respected and valued within a tech company; teams can’t just do all of that ad hoc, right? The overarching principles and priorities of the organisation need to shift, and I don’t think that’s going to be easy to do at an individual level. It will take pressure being put on tech companies from a lot of different places. But I do think that when your employees start raising these questions, pushing back and banding together within an organisation, that can create some appetite for broader change.
But at a societal level, a big thing that I advocate for, and that I talk a lot about, is that we need to stop allowing tech to pull the wool over our eyes and get us to look at it as something magical, mystical, shiny, fancy, beyond anything we can understand, and instead look at technology a bit more critically, even if we don’t consider ourselves tech people. Even if you are somebody who is very distant from the industry, look at it just like anything else and ask it some tough questions. I think once we do that, we’ll start having more important critical conversations about the role of technology in our lives and about the level of access to information and control that tech companies have. And when we start having those conversations, then suddenly we can have a much more effective dialogue about what the right answers might be: what we’re comfortable with, what’s going to be safe and positive for us, for our children, et cetera. As long as you have an industry that is allowed to make itself opaque and seem special, like it’s somehow above the law or somehow so important that we couldn’t possibly criticise it, you will not be able to solve the problem. So I’m really trying to encourage getting this conversation outside of the tech industry, to everyday folks, helping them understand, and talking about technology in very plain language, because once we do that you can get a groundswell of people who feel more equipped to push back.
Gerry
So I guess we need to “cast a cold eye,” as W.B. Yeats wrote. I’ll remind people that Sara’s book, which I recommend very highly, is called “Technically Wrong: Sexist Apps, Biased Algorithms and Other Threats of Toxic Tech,” which is kind of a mouthful, but it really is a great book. I don’t think we’ve done it justice today.
Sara Wachter-Boettcher, thanks so much for joining me today on the User Experience podcast.
Sara
Thank you so much for having me.