Taking responsibility for our creations: An interview with Ellen Broad

Audio (mp3: 94.2MB, 41:11)

Published: 29 October 2021

The ethical context of AI

Gerry Gaffney

This is Gerry Gaffney with the User Experience podcast.

My guest today is based in Canberra, Australia.

She describes herself as a data policy wonk and data tinkerer. She was Head of Policy at the Open Data Institute, and she's a senior Fellow with Professor Genevieve Bell's 3A Institute at the Australian National University.

Her 2018 book is Made by Humans: The AI Condition.

Ellen Broad, welcome to the User Experience podcast.

Ellen Broad

Thank you so much for having me.

Gerry

I'll remind listeners that as always a transcript of this episode is available at uxpod.com.

You describe the book as being about AI systems being used to make decisions about people. What sparked your interest in this topic?

Ellen

So I almost fell into this topic inadvertently, in that for nearly a decade before writing the book I'd been working in data, data standards, infrastructure and policy, and a lot of the systems that were starting to be used to make decisions in an automated way were using large quantities of data from different sources.

And so I became pulled into those discussions by virtue of my background, and realised that quite often, in conversations around automated decision-making systems, lots of critical questions related to data just weren't being asked.

Gerry

Early in the book you describe a dataset of chest X-rays; it's probably a good opening example, and I thought it was a nicely politically neutral one as well. Can you tell us a little bit about that and what it tells us about the domain in general?

Ellen

Sure. So this was actually a great story from a machine learning researcher who is also a trained radiologist, Dr. Luke Oakden-Rayner, who's based at the University of Adelaide and the Australian Institute for Machine Learning.

And he was looking at an incredibly large, openly published dataset of chest X-rays that was being used as the basis for models purporting to identify conditions like pneumonia or collapsed lungs, that kind of thing, and pointed out a variety of ways in which a system trained on those chest X-rays wasn't necessarily learning to identify medical conditions; it was just learning to identify patterns based on the images as they were presented.

So the model, for example, would learn that every image containing what's called a stent, a kind of medical device that's already been inserted into the lung, belonged to a patient with pneumonia. But as Luke points out in his blog, if they already have a chest stent, that condition is already known; it's not a new diagnosis.

And in fact, what the model learned was to equate every image like that with pneumonia, and perhaps to miss other kinds of images that didn't have that stent present; it would dismiss those as not having pneumonia. So he points out lots of different little ways in which radiologists mark up images. You know, they will leave some things implicit in an image because to a radiologist they're very visible and apparent. They will ignore certain elements of an image because they're not relevant to the medical condition being looked at in that particular image. And so he really just points out that context is really important to interpreting even something as seemingly benign as a chest X-ray. And it's not as simple as we believe to just train a machine to step into the role of a radiologist, because there are all of these human anticipations and expectations underpinning that large dataset of X-rays that we're now just using as data for a machine.
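To make the shortcut-learning problem Ellen describes a little more concrete, here is a toy sketch on synthetic data. This is not the real chest X-ray dataset; the feature names, rates and model are invented for illustration. The point is that if a visible treatment artifact co-occurs with the diagnosis label, a model can score well by detecting the artifact rather than the disease, and its performance collapses once the artifact is absent.

```python
# Toy sketch of shortcut learning on synthetic data (not the real X-ray dataset).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

has_condition = rng.integers(0, 2, n)                      # ground-truth label
# The artifact appears almost exclusively in already-treated (positive) cases.
artifact = ((has_condition == 1) & (rng.random(n) < 0.9)).astype(float)
# A weak, noisy feature standing in for the actual pathology in the image.
disease_signal = has_condition + rng.normal(0, 2.0, n)

X = np.column_stack([artifact, disease_signal])
model = LogisticRegression().fit(X, has_condition)

print("accuracy when the artifact is available:",
      round(model.score(X, has_condition), 2))             # looks impressive

# On untreated patients the artifact is absent and performance collapses
# to little better than chance, because the model never learned the disease.
X_untreated = np.column_stack([np.zeros(n), disease_signal])
print("accuracy without the artifact:",
      round(model.score(X_untreated, has_condition), 2))
```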

Gerry

And of course, AI and the medical profession… I know there's an Australian company, whose name escapes me, that scans embryos to decide which ones are most viable for IVF purposes.

Ellen

Yeah, wow, and look, it's not that every use of image recognition in medicine is difficult or impossible using machine learning. It's that it's always context specific. So some kinds of things are easier to identify, and to train a machine to identify, than others. I know there's some really interesting work being done, for example, with skin cancers and with retinal eye images, but it's always about the kind of condition that you're trying to use machine learning to identify and how contested your results might be. Something that Dr. Luke Oakden-Rayner points out in his chest X-ray case study is that usually we don't ever use images alone to diagnose a condition. You will use a variety of diagnostic tools to ultimately determine whether someone has skin cancer, for example. It's not that you just put a suspicious-looking mole through a scanner and say, yep, okay, the machine has identified you as having skin cancer, you've got it. That might be one step in a broader diagnostic process. So I think there are still some really valuable uses of AI in medicine. It's just helping people understand that they're part of a broader diagnostic toolkit, and not that these machines are as good as, or better than, human medical practitioners. I cannot think of a single example where a machine alone is acting at that level.

Gerry

I guess we should point out the kind of obvious: AI is artificial intelligence. I don't think we've spelled it out in the episode.

You talk about AI being used in these critical contexts. I was reading something in New Scientist, and they quoted the Secretary of the US Air Force in September of 2021 saying that the Air Force had, quote, 'deployed AI algorithms for the first time to a live operational kill chain.'

Ellen

I haven't seen that article. My heart sank a little bit, hearing that line about deploying it to a live, what was it, live operational kill chain…

It really makes me wonder what exactly they mean in that context, because, as you said, we hadn't mentioned yet what AI was. And a joke that someone made to me when I was writing the book, though they were being serious at the time, was that when they hear the words AI, they think artificial insemination, and was I going to write a book about cows?

So it's still the case that quite often, when we use a term like AI, or even algorithms, it can be a stand-in for so many different kinds of things. Maybe they were just using an automated process to confirm GPS coordinates as part of their kill chain. I have a lot of questions as to what it is that they have delegated to a machine in this context as part of a live operation, because it seems highly unlikely to me that they've delegated the entirety of a kill scenario for the Air Force. Identifying your target, deciding whether to press the trigger, understanding whether and how your mission has been successful, deciding to deploy that mission in the first place: there are still human decisions and interventions, even in a context like that.

And what troubles me when I hear we have deployed AI for the first time in this context is that, quite often, it allows us to ignore or minimise the human points of accountability that nonetheless are still there. It's not that we just have a system that is deciding for itself when to intervene, how to intervene and how to confirm its intervention was successful. There are humans involved there. Particularly in a context like the Air Force, there's no way, given all of the checks and balances they have and all of their own processes for determining that their systems are operating as intended, that they would just be delegating their own hierarchical decision-making at the highest levels to a machine. There are humans involved.

Gerry

One would hope so.

Ellen

The thing that I would say in that context is that even if you have senior members saying no, no, we have delegated it all to a machine, even that decision to delegate is a human decision. They are still involved in that. And there is a real challenge that we have in talking about machines: identifying that we still have a hand in it, that there are still mechanisms of accountability. It's very easy to say, no, no, I'm not involved, it's now a machine doing it. And that's part of the problem.

Gerry

You write that 'issues of missing data and under- and over-represented populations in historic data plague the creation of fair and accurate AI systems.' Can you tell us a little bit about that?

Ellen

So with every dataset that we collect, and particularly datasets about people, but in lots of different contexts, we choose what is important and worth collecting. We're limited in what we can collect by the instruments that we have and the barriers that are in our way, and we decide how we structure and label that information. So when we undertake something like a census, we decide the questions that are worth asking in the census. We decide the level of effort we're going to invest in getting people to respond to it. So, you know, you might send the census to every household with a postal address, knowing that that means you'll miss anyone that doesn't have a postal address at this point in time. So all of those decisions end up influencing who and what gets measured. And in lots of different kinds of datasets, say in health, for example, where we're looking at patients undergoing heart surgery as a dataset to help identify people who may need heart surgery, you immediately need to say, well, there are certain populations who cannot access heart surgery.

Perhaps they are poor, they have English as a second language, or they have had their concerns related to a heart condition minimised or dismissed by the medical profession. So if you limit your dataset for determining suitability for heart surgery, for example, to people who have already had heart surgery, that's itself a particular population: it's people who have been able to access it. And this kind of dynamic plays out in lots of different contexts, because people's access to health, education and justice is mediated by things like class and race and gender and wealth and ethnicity. So it's a problem that plagues nearly every kind of dataset that could be involved in making decisions about people. And I think we are slowly getting over the idea that we can fix it technically, and realising that these are bigger challenges that our society needs to grapple with.

I could give you an example, if you like, that might make it a bit more tangible. A really well-cited study just a couple of years ago involved a large dataset of medical patients in the US that was being used as the basis for a system that would decide who needed to be prioritised for treatment. So when you go into an emergency setting and you're triaged by a nurse, you will be allocated a level of severity based on the symptoms that you present with. And this particular system was being designed both to prioritise patients for treatment and to estimate the severity of their condition. The system was trained on patients that had been treated in a particular set of US hospital contexts, and it inadvertently learned to prioritise white patients for treatment and rank them as needing greater access to intervention, so surgery, medicine, etc. And when they looked into why it was that the system was inadvertently prioritising white patients for treatment and downgrading Black and Brown patients, and patients of other ethnicities, they discovered that the underlying dataset had actually been based on health costs, costs to the health system. And because non-white patients were much less likely to be treated in an ongoing way, or to the same extent as white patients, the system just inadvertently learned that they didn't need as much treatment. So a system that was trying to iron out biases in triaging patients ended up equating cost of healthcare with need for healthcare: anyone who cost a lot clearly needed more healthcare, and anyone who didn't cost much money didn't need healthcare. And so this disparity just ended up being entrenched, and got into a vicious feedback loop through the use of that data in this triaging context. And that kind of dynamic plays out in lots of contexts.
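The dynamic Ellen describes can be sketched in a few lines on made-up data. The group labels, numbers and model below are invented for illustration and are not taken from the study: the point is simply that if a "risk" model is trained on healthcare cost rather than healthcare need, and one group generates less cost for the same need because it receives less care, the model will rank that group as lower risk at the same level of need.

```python
# Minimal sketch of cost-as-proxy bias on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000

# True underlying need is identically distributed in both groups.
group = rng.integers(0, 2, n)            # 0 = well-served, 1 = under-served
need = rng.gamma(shape=2.0, scale=1.0, size=n)

# Observed cost reflects need *and* access: the under-served group generates
# roughly half the cost for the same need, because it receives less care.
access = np.where(group == 0, 1.0, 0.5)
cost = need * access * 1000 + rng.normal(0, 50, n)

# A "risk score" trained to predict cost, as described above.
X = np.column_stack([need, group])
score = LinearRegression().fit(X, cost).predict(X)

# At the same true need, the under-served group gets a lower score, so a
# triage threshold applied to the score would systematically deprioritise it.
for g in (0, 1):
    same_need = (group == g) & (np.abs(need - 2.0) < 0.1)
    print(f"group {g}: mean predicted risk at need ~2.0 is {score[same_need].mean():.0f}")
```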

Gerry

Now, I guess a lot of people will be familiar with automated CV scanning, doing interviews by AI and so on. You say in the book, 'Building AIs to sift through CVs and cover letters, to watch people's faces and listen to their voices during interviews, isn't really an innovation in recruitment, it's mimicry.'

Does this mean that well-intentioned efforts at eliminating recruitment bias, for example, are doomed to fail?

Ellen

I think it's all about where you put your efforts to eliminate bias in recruitment. And I talk a little bit about this in the book, that there's a difference when you're using AI to alleviate potential sources of human bias. So a startup that I talk a little bit about in the book is Applied, which was founded by Kate Glazebrook in the UK and is now used in a variety of contexts. They use automated processes to take over certain tasks that can fatigue humans: removing identifying characteristics from CVs, randomly sorting CVs for review by humans, extracting and making easily comparable certain sections of CVs. So they're not making any decisions about who will be prioritised for recruitment, they're not ranking candidates. They really haven't got a role in deciding who might be a good fit for a job, but they make it easier for the human review panel to assess the CVs in a fair way, so they're not being influenced by the name, and perhaps the gender, of a candidate when they read a CV. They're able to compare candidates' responses to particular questions and not be influenced by, wow, I just read this person's CV and they sounded so amazing, and now their one-page application letter seems even better because I've just read their CV.

So it's using AI to streamline those processes and pick off the low-hanging fruit, but it's not trying to take on the role of a human recruiter. And I think those kinds of mechanisms are still really powerful and useful. But there isn't a context in which you could entirely remove bias from recruitment, because ultimately your candidates are going to go and work in a human organisation with other humans. And there is so much that goes into someone's experience, even of an interview, that reflects our own biases and preconceived notions of their ability. So I think we can use AI effectively to level the playing field somewhat for candidates, but not to somehow substitute for our own bias. It's in us, it exists, and we need to be more self-aware in the way that we recruit.

Just take that example of using image recognition in recruiting to measure people's eye movements or facial expressions, you know, smiles, ability to make eye contact. That kind of technology is being rolled out in different recruitment contexts, but behind it is an implicit set of assumptions about what we expect people to do in interviews. You know, we expect women to smile as part of the process of recruitment. We expect a level of eye contact, even though we know that neurodiverse populations may not maintain eye contact, and the ability to maintain eye contact may not even be essential for the job that they're being recruited to do. And quite often, actually, the contexts that we see these technologies being rolled out in are where you're recruiting massive numbers of people, so in retail, in supermarkets, in large restaurant chains, in manufacturing, where actually you have the kinds of jobs that many different types of people can do, and a system that tries to treat everyone the same.
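As a rough sketch of the mechanism Ellen describes at the start of her answer, and not Applied's actual product, the de-identification, shuffling and question-by-question comparison might look something like this; the field names and data structures are hypothetical.

```python
# Hypothetical sketch of blind, randomised review: strip identifying details,
# shuffle the order of review, and regroup answers so reviewers score one
# question across all candidates at a time.
from __future__ import annotations

import random
from dataclasses import dataclass


@dataclass
class Application:
    name: str
    email: str
    answers: dict[str, str]          # question id -> free-text answer


def anonymise(apps: list[Application]) -> list[dict]:
    """Replace identifying fields with an opaque id and shuffle the order."""
    shuffled = apps[:]
    random.shuffle(shuffled)
    return [{"candidate_id": f"C{i:03d}", "answers": a.answers}
            for i, a in enumerate(shuffled)]


def by_question(anon_apps: list[dict]) -> dict[str, list[tuple[str, str]]]:
    """Regroup answers so each question is reviewed across all candidates."""
    grouped: dict[str, list[tuple[str, str]]] = {}
    for app in anon_apps:
        for question, answer in app["answers"].items():
            grouped.setdefault(question, []).append((app["candidate_id"], answer))
    return grouped


applicants = [
    Application("Alex", "alex@example.com", {"q1": "answer one", "q2": "answer two"}),
    Application("Sam", "sam@example.com", {"q1": "answer one", "q2": "answer two"}),
]
print(by_question(anonymise(applicants)))
```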

Gerry

The European Union has been debating strict controls on the use of AI to prevent gender, racial or age bias in high-risk areas such as law enforcement. And on the other hand, I came across a quote in a book by Brian Christian called The Alignment Problem, which is, you know, subject matter similar to your own. He said, 'the power and flexibility of these models have made them irresistibly useful for a large number of commercial and public applications.'

Is the genie out of the bottle, and is there any point trying to regulate or limit its use?

Ellen

So you can always put the genie back in the bottle, you can get a different size bottle. You can put the bottle in a different room. An example that I use a lot, and I apologise to anyone that's heard me use it before is, you know, electricity. 250 years ago, we thought electricity could be used for anything and everything. We were using electricity to try and cure blindness. We thought we could raise the dead. I was listening to a podcast about Lord Byron, where they just casually mentioned that he used to let his children play with live electricity, because it was a fun and common children's toy at that point in time. And yet over many, many years, not 10, but over 250 years, we've moved to a very, very different world using that technology. There are standards, there are different standards in different countries, which is a pain for anyone traveling. There are expectations of the kinds of licensed professionals who can install, maintain, assess. There are different kinds of intermediaries, like it's an entire complex ecosystem of forms of expertise and infrastructure, processes, and rules dedicated to electricity that vary by setting. It's incredibly complex and regulated in some contexts and not in others. And that's where we're going to go with computing and automated systems. It's not that we're ever going to have, you know, one omnibus piece of legislation to rule them all that will solve our problems. And it's not going to be a five to 10 year, let's solve this problem kind of job. Really the challenges that we're encountering with any automated system at the moment are almost at every level of the stack. It's really hard to make systems that connect to other kinds of automated systems. It's really hard to keep your system up to date.

It's really hard to know if your system is doing what it's supposed to do. It's really hard to know what your responsibilities are as, you know, the software engineer verifying the outputs of your system. There are so many kinds of questions and problems that we have, and over time we're going to start breaking them up and inserting new kinds of expertise, and regulation will be part of it. It's not going to happen overnight. It's not that, you know, the EU's debating the AI Act and this is going to solve the problems. But I really do think in a hundred years' time, not 10 years, but a hundred years, hopefully this space will look really different. It's not going to be the domain of just software engineers and computer scientists. Like any other essential infrastructure, it will have become a much more interesting and complex and fragmented space.

Gerry

Now to bring us back to the present.

Here in Australia, we had an egregious example in what became known as Robodebt. Most of our listeners won't be familiar with that. Can you briefly describe what it was, how it was allowed to happen and whether we've learned anything from it?

Ellen

Robodebt was the colloquial term… I think it was Asher Wolf, who's a citizen journalist, who coined the term Robodebt. The term used by Services Australia was automated debt recovery, I believe. And the system was actually incredibly simple. In fact, by one definition you wouldn't call it artificial intelligence at all; there wasn't any sophisticated machine learning being undertaken. The system was simply matching people's income tax returns against the income they had reported and the payments they had received through Centrelink, and if a discrepancy in reporting was identified, an automated debt notice would be issued. And people started getting debt notices going back five to 10 years, significant amounts of money in some cases, and amounts of money where, if people investigated the debt being raised, they would determine that they actually hadn't been in breach of their obligations to Centrelink, that the system was just doing a really clumsy calculation. You know, if your income tax return looked different to the amount that you seemed to disclose for the purposes of receiving these payments, that means there's a debt and you need to pay us back.

And while many of the debt notices were challenged and overturned, what made this automated process so egregious was that it flipped where accountability sat for challenging a debt on its head. So prior to the introduction of this system, it was the Department's responsibility to verify for themselves that a debt raised using their automated processes was accurate before a notice could be issued to the person on the receiving end of the debt. Now this system could spit out tens of thousands of letters a week and just send them, and it became an individual's responsibility to challenge the debt. So you can imagine, particularly in this context, a lot of the people getting these debt letters were people who had previously experienced homelessness, students, single parents, people on disability pensions, lots and lots of vulnerable populations, anyone who'd been on paternity leave or maternity leave and claimed income from the government. So lots and lots of different populations with different levels of capacity to challenge the debts being issued to them. When I wrote the book, this system was still in place; it was incredibly controversial, but had not yet been found to be in breach of the government's obligations to citizens. I think about a year after I wrote the book, the government was forced to settle a class action with people who had been affected by Robodebt, and the system has been scrapped. I don't doubt they will still embark on automated processes like that in future, and in fact I think they are, but they can't just issue tens of thousands of letters a week and push responsibility for challenging those amounts onto individual citizens. So really, what made it controversial wasn't 'we're using sophisticated AI', it was 'we've abdicated our responsibility to check that the system's working', which is a human responsibility.
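To make the 'clumsy calculation' Ellen describes concrete: the core of the scheme, as widely reported, was averaging a person's annual ATO income evenly across fortnights and reassessing their payments on that basis. The sketch below uses made-up payment rules and figures, not the actual Services Australia logic, to show how that averaging can manufacture a 'debt' for someone whose income was lumpy, such as a student who only worked over summer while not claiming payments.

```python
# Illustrative sketch only: hypothetical payment rules and figures,
# not the actual debt-recovery code.

FORTNIGHTS_PER_YEAR = 26
INCOME_FREE_AREA = 300     # hypothetical fortnightly amount before payments taper
TAPER_RATE = 0.5           # hypothetical reduction per dollar above that
MAX_PAYMENT = 700          # hypothetical maximum fortnightly payment

def fortnightly_entitlement(income_that_fortnight: float) -> float:
    excess = max(0.0, income_that_fortnight - INCOME_FREE_AREA)
    return max(0.0, MAX_PAYMENT - TAPER_RATE * excess)

# Reality: 20 fortnights on payments with no earnings, plus 6 fortnights of
# summer work (no payments claimed) earning $2,000 each.
payment_fortnights = 20
annual_income = 6 * 2000

# What the person was correctly paid, declaring $0 while on payments.
amount_paid = payment_fortnights * fortnightly_entitlement(0)

# The clumsy recalculation: pretend the annual income was earned evenly
# across the whole year, then reassess the payment fortnights on that basis.
averaged_income = annual_income / FORTNIGHTS_PER_YEAR
reassessed = payment_fortnights * fortnightly_entitlement(averaged_income)

print(f"amount correctly paid:        ${amount_paid:,.0f}")
print(f"entitlement after averaging:  ${reassessed:,.0f}")
print(f"automated 'debt' raised:      ${amount_paid - reassessed:,.0f}")
```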

Gerry

Yes, it was interesting that the government was extremely unapologetic and recalcitrant about it, but I guess that's taking us into the realm of politics, which we should steer clear of.

Ellen

I think that is a very expected response from any government. When a signature policy is found not to be working, or to be egregiously harmful, I don't know of any government I've seen just openly admit that that was the case and be incredibly apologetic for it.

Gerry

Okay. To move on to something more positive. Tell us about Opie and in what ways it was an exemplary undertaking or project.

Ellen

So I use Opie as an example of the design of an automated system that was undertaken very much in concert with the community it's intended to interact with. Opie was developed by researchers at the University of Queensland in concert with language educators and elders at the Ngukurr Language Centre in the Northern Territory. Opie is a really simple system. It's like a wooden, cardboard robot with a touch screen that helps students at the school learn words and phrases in Ngukurr. And what I talk about in the book is a series of revelations and changes to Opie over time that the University of Queensland researchers made, based on feedback and engagement from the community around the Ngukurr Language Centre, the community that that system would be embedded within. They originally designed the robot in white. It was like a white silicone robot, just the kind that you imagine in sci-fi. And immediately the community said, don't bring a white robot in here. And so they redesigned it in a kind of wooden, plyboard form, and the community painted it, students painted it, it was drawn all over. They rerecorded the voices teaching students to pronounce different phrases using the voices of local community members, so that it became part of the community education process. All of the data was owned and held by the Ngukurr Language Centre. Decisions about how it would be implemented were made by the local community. And in the end, it came to be seen as something that was more a part of the community than some of the non-Indigenous language educators. So Opie, for example, was allowed to teach students, or interact with students in order to educate them about the Ngukurr language, where non-Indigenous educators were able to be classroom aides but not able to directly teach. So it took on this really interesting role in the community, in part because they had really owned the design and implementation and maintenance of that system. So I use it as an example of a different approach to designing technologies, one that doesn't look like we build something in one context and then just deploy it in an entirely different one.

Gerry

Okay. Now, you write that 'the tech sector is not morally deficient,' but that seems like a tenuous statement given the various excesses in recent years.

Ellen

I come from the tech sector, I've worked in engineering groups, I've worked really closely with different government automated technology teams, and there are many, many good, ethical people dedicated to trying to design services that will improve people's lives. What I think is incredibly problematic is, I'm trying to think of a way to phrase this. Not only is asking people to draw on their own moral frameworks, in order to make decisions about whether a system will prove harmful or not, a recipe for appealing to people's own backgrounds and experiences and expectations of what moral is or isn't, but it also puts responsibility in the hands of individuals when responsibility should be in the hands of organisations and of societies. Take our technology companies, for example. Some of the controversies that we're hearing about with some of our largest technology companies, it's not that they are necessarily explicitly evil and going out of their way to cause harm, manipulate democracy, or discriminate against populations.

It's because they are motivated by priorities like profit and shareholder expectations, and unless levers are introduced that require them to take other things into account, they may just choose not to. And I would say they should be taking these things into account, while also understanding that, in any context, you are balancing competing responsibilities and competing expectations. And I do not think we should be leaving decisions as to whether something is morally appropriate or not in the hands of organisations or in the hands of individuals, whether you think they're going to act ethically or not. It's actually that certain decisions need to be taken out of the hands of individual companies or individuals and made by us as a society. So, yeah, I guess that's what I was trying to say when I said they're not morally deficient. At every level in any company, there are always people trying to say, I don't think we should do this, or, is this really the right idea? But there's also, in any complex organisation, another set of forces competing for your attention. And it's too much to think that we will just always navigate to the right decision, knowing how those systems operate.

Gerry

It's interesting when you talk about inadvertent harm. I know you're a bit of a sci-fi buff; there's a quotation from Philip K. Dick I've always liked. In one of his books he says that we are on this earth to find out that what you most love will be taken from you, probably due to an error in high places rather than by design. I always thought that was an interesting philosophy.

Ellen

Yeah. And there's a great quote that I saw Lizzie O'Shea, who's a human rights lawyer here in Australia, tweet yesterday. It's Grey's law, which is, I'm going to mess it up now, but it's like, at the highest levels, gross incompetence is indistinguishable from malice.

Gerry

That was one of my mother's favourite phrases. She always used to say never attribute to malice that which can be explained by incompetence.

Ellen

…So, yes.

Gerry

I guess I've got another quote here, which perhaps follows on from that: 'We have at our disposal incredible tools and techniques and information to improve people's lives, and these same tools and techniques and information can, it must be acknowledged, be used to control and profile and discriminate against people.'

Ellen

Is that one of my quotes? I was like, who wrote that?

Gerry

[Laughter.] I'm sure that's out of your book, I hope it is otherwise I've done a cut and paste. I'll have to go back and read it.

Ellen

You know, something that I was really writing against in that book, and it was a couple of years ago now, particularly in the context of a quote like that, is that at that point in time there was this real disowning of our actions in relation to technologies. So, you know, machine learning researchers might say something like, but now that the tool is here, we just have to use it. Or, this is just an incredibly powerful model, and isn't it a terrible thing that it can do harm to people, as though the systems just grew on their own and now simply existed, and, oh well, here's where we're at, in a very disempowered sense. And so I think we're slowly starting to move on from that as a community. This was kind of 2017, 2018 when I was writing.

But I was really frustrated and angry with the kinds of positions put forward by, you know, entrepreneurs; Elon Musk was saying this a lot, Jeff Bezos was guilty of it: we're really worried about what terrible things could be done with AI, humanity is potentially in real trouble, as though these weren't also issues that humanity is set up to respond to and solve. Like, for most things in the world, once you've decided they're a problem, and I'm now thinking of climate change, actually, you try and do something to address them. And so, for example, if there are tools and techniques that we think can be used for harm as well as for good, then we would start talking about, well, how do you mitigate those harms? What are the things that we need to put in place to mitigate those harms?

It's not that you just go, well, some people might pick up a knife and use it to hurt somebody, but some people might pick up a knife to cut vegetables with. There's a whole set of different kinds of interventions and education and expectations that go into wielding any tool. You know, I teach my toddler that you're not allowed to touch a sharp knife from the minute they can basically walk and talk. And it's not that I just say, well, the knife is there, just be warned, you could use it in a harmful way. You immediately take on these responsibilities to help put it in a safe context for others. And so I think that's what I was really frustrated about in computer science: we were all just kind of saying, oh, bad things could happen, but didn't seem to reflect in any way on what our set of responsibilities might be to start to address and mitigate those harms. So, you know, the conversation has really matured since then, there's been a lot of great scholarship, but at that point in time it was very much kind of, as you said, like, we've just let the genie out of the bottle. The genie is out of the bottle, now let's just stand around saying woe is me and twiddling our thumbs.

Gerry

The book feels very current. Even though, and this is the thing that sort of struck me as I was reading it, there's no reference to COVID-19, and it feels odd to read, you know, a current technology book, I don't want to describe it specifically as a technology book, but you know, without COVID-19 being mentioned. Do you think the situation has changed much since the book came out, what, three years ago now?

Ellen

It's really hard to know how much of it is that a lot has changed, and how much is that my bubbles have changed. So when I wrote the book, I was coming out of an applied context, leading an engineering group designing APIs and software standards in the financial sector in Australia. And I'd come from these very applied settings where we built things without necessarily thinking a lot about the history and the culture and the context. Since then I've joined an academic institution and been immersed in literature predating my book, scholars writing books like mine. And so it's really hard. It feels like the world has moved on, but that's only in terms of the conversations that I have. And actually, I think COVID is a great example of some of the ways in which it hasn't. COVID is both a story of incredible change and disruption around the world, disruption to supply chains, disruption to our lives, our family dynamics, but also a bunch of technologies being rolled out without much thought.

None of which have really lasted. Actually, I shouldn't say none of them have lasted; a lot have lasted, irrespective of whether they're actually performing as intended. But last year we were talking about things like being greeted by robots that would measure your body temperature and determine whether you had COVID or not, not determine whether your body temperature was higher or lower than expected, but that you therefore had COVID. We had systems that were going to tell you whether you had COVID based on a cough. There was just a variety of poorly thought-out systems being rushed out really quickly that were going to reduce our burden in diagnosing or identifying a serious and complex disease. And so when I think about that, I'm like, actually, the world really hasn't moved on. The teams that are coming up with ideas, with the capacity to implement those ideas, probably still aren't reading books like mine, or they're reading books like mine and going, well, that's all well and good, but we need to move really fast and we don't have time to ask these questions, and we need to reach for technical solutions because we've created a culture around technology where it is the silver bullet that's going to solve our problems. And that persists everywhere. We still have this idea that at some point technology is just going to fix the problem that's in front of me. And so maybe the world hasn't moved on; maybe it's just that I have a lot of conversations with people that agree with me. [Laughs.]

Gerry

Mind you, who would've thought the humble QR code would make such a comeback…

Ellen

I know! I've moved from ignoring QR codes, like, you know, people tried to make them a thing and I ignored them everywhere, to now my interactions with QR codes being almost part of my subconscious.

Gerry

I want to remind listeners that Ellen's book is called Made by Humans: The AI Condition.

I'd really recommend it. It's an excellent, lucid and thought-provoking read on a topic that I think we all need to think about and, I don't know if confront is quite the word, but it really is excellent and well worth a read.

Ellen Broad, thanks so much for joining me today on the User Experience podcast.

Ellen

Thank you so much for having me.