IAC19 Dan Zollman on Ethical Design

2020-03-01

  • 1. I’m an information architect and strategist based in Boston, MA. Ethics has been an important theme for my entire career, but I feel that I’m only beginning to understand how to approach ethics in practice. This talk represents my current understanding. The speaker notes contain most of the spoken part of this presentation, with additional examples, resources, or citations below the separator. -- Other notes/background: • Please feel free to contact me to discuss any aspect of this topic. • I’m also part of an online community for Ethical Design, which has global, English-speaking participation and is connected with local meetups in several cities. • Pronouns: he/him/his
  • 2. This is my vision statement for those working in design and technology fields.
  • 3. An online community for this topic has its origins in the IA Summit.
  • 4. The part I want to focus on is agency.
  • 5. The practical reality of ethics for designers, and anyone working in technology, is that it’s complex and it’s political.
  • 6. We need a systemic approach in order to understand our role in this complex and political environment.
  • 7. We’ll develop that systemic approach through five themes, focusing on theory and principles, with a bit about practices at the end.
  • 8. A core idea for this session: Ethical practice is characterized by ongoing questions that are never fully answered. I will raise some of these questions, but I can’t give you the answers. Only YOU can answer them in your own situation. It’s an ongoing, active process. It’s about developing awareness and the capability to do this work.
  • 9.
  • 10. We know there are serious ethical problems in design and technology. We hear about these issues everywhere, and we know that UX professionals are directly implicated in many of them. But it’s not just these issues. All design has an ethical dimension.
  • 11. In design, we make decisions that affect other people’s lives. We make decisions about what things are and how people should behave. That’s not a neutral act. The idea of human-centered design means we’re trying to improve people’s lives. Design for good. But we need to interrogate that. Is it really good? For whom? Who gets to decide that? This is a starting place for ethical questioning. -- • “Design is applied ethics.” – Cennydd Bowles • Alain Findeli: When approaching a problem, both technological and behavioral solutions are always available; product design already involves a choice to use a technological solution. “Choosing a technological mediation is a matter of ethics.” (Findeli, “Ethics, aesthetics, and design”) • “Human decision making, while often flawed, has one chief virtue. It can evolve. As human beings learn and adapt, we change, and so do our processes. Automated systems, by contrast, stay stuck in time until engineers dive in to change them.” (Cathy O’Neil, Weapons of Math Destruction)
  • 12. There are many ways of defining “good” that may overlap or compete in practice. Perhaps good design enhances human wellbeing. Perhaps good design aligns with our values, beliefs, and ideology. We’ve seen this question with purportedly neutral platforms like Twitter, Facebook, or Bitcoin. Platforms that are intended to break down political hierarchies, and to give everyone a voice, actually do express a very particular ideology. Perhaps good design enhances societal and environmental outcomes. Or it enhances social justice. That’s about fairness, equality and equity, and combating injustices in the world. -- Other examples of how these might compete in practice: • If you have a libertarian ideology, that may shape or delimit your understanding of what approaches to social justice are acceptable. • Products designed in the interest of human wellbeing cannot be “universally” designed, i.e., you cannot include everyone or provide an equal/equitable outcome for everyone.
  • 13. We could look at it from a human rights perspective. Based on my reading, these are all the rights listed in the UN Universal Declaration of Human Rights from 1948. There are a lot of them! Values may differ between users, clients, and ourselves. They differ from user to user, and between communities of users/stakeholders. And as professionals, we are personally invested and bring closely held values to the table. Design requires difficult decisions about which kinds of good we’ll prioritize. Even simple design problems can be ethically complex. -- Note: Recently, many statements of values/principles/heuristics/ethical frameworks have been proposed by designers in the UX world. I find most of these untenably narrow and relative to Western values. For example, many of them implicitly prioritize individual autonomy as an overarching framework. Others mix human rights with particular ideologies, such as “free and open source,” without acknowledging the limitations of the latter.
  • 14. The philosophy of ethics offers many frameworks that can help us. Even though I’m trying to give you a bird’s-eye view, this presentation favors a consequentialist framework, because I’m going to emphasize the consequences our actions have for others. But even then, ideas from virtue ethics are mixed in. These frameworks can help sensitize us to our moral reasoning, allowing us to be more conscious and deliberate in our decisions. Ultimately, you still have to make your own decisions.
  • 15. In addition to making decisions about values, we have to make decisions about what role we’ll play as practitioners and what kind of responsibilities we’ll take on. Perhaps we should focus on serving the client. That’s the realm of professional ethics. Perhaps we should not only serve the client, but make sure we do no harm to users. (That comes from medical ethics and bioethics.) Beyond that, maybe we should effect positive change for users. Or, going further, effect political change. Some people would say: yeah! We should always use design for political change. Others would say no: it would be unethical to use a client engagement as a vehicle for political ends. So we have to make these decisions too. The problem with this framing is that it implies we have the option of not having a role, not being involved….
  • 16. But “do no harm” is not always possible, because harm is already being done. We already participate in dynamic systems where good and bad things are happening. We’re a part of that. All technology has positive and negative consequences. Any organization you work for can both improve and perpetuate systemic problems.
  • 17. We can’t be neutral and have to choose between imperfect alternatives, weighing the upsides and downsides of each. On one hand, we can’t fix everything, and we can’t take the world on our shoulders. But we do have some power, and maybe we can take some things on our shoulders. According to Jared Spool, there is more demand for designers than supply. That means we have choices about the work we do. If you’ve got a seat at the table, what do you plan to do with it?
  • 18. Let’s go deeper into this idea that we are a part of dynamic systems.
  • 19. We participate in larger systems where we can’t specify the outcomes the way we can write a design specification. Design is a distributed, social process.
  • 20. (This slide builds out from the center of the left circle.) Design is a collective process involving many people who have different goals and intentions, different values and beliefs. Those people may be diverse, or not so diverse. They have different points of view. They have different levels of power and influence over the process. All of this shapes the outcomes of design. The product is also shaped by the design skills, processes, and capabilities we have in the organization. It is shaped by the organizational structure, information flows, and incentives. It is shaped by the external forces the organization has to respond to: political, economic, social, technological, legal, environmental. And all of this is permeated by culture, the shared understanding we have about who we are and what we believe to be true about the world.
  • 21. It is through all of these structures and processes that we gain an understanding of the world of our users and the context(s) we are designing for. We understand that world through a combination of research and the knowledge and lenses already embedded in the organization. Then all of this shapes the products and services we produce.
  • 22. And finally, once all of that happens, the actual meaning and function of these products are reconstructed in the context of use. There are many more complex structures in that environment, shaping the outcomes of our design. So this is a complex process.
  • 23. The responsibility for these outcomes is distributed across many participants in the process. There are a few systems principles we can apply to any complex process like this. One is that there is no simple relationship between cause and effect. For any given outcome, there is rarely one single root cause. Instead, there is joint determination by many causes. It’s not nature OR nurture, but nature AND nurture. Two, cause and effect can be nonlinear, bidirectional, or circular. You’ve seen this if you’ve ever had a supervisor who was micromanaging you, and you responded by pushing back and trying to create space for yourself, and they responded to that by micromanaging even harder. It’s a circular relationship. And third, people have limited information about the system around them. They make the best decisions they can based on the information they have, but that information is incomplete. That’s the principle of bounded rationality.
  • 24. To illustrate this, let’s look at a simple scenario. Let’s imagine there’s a startup building a new enterprise software platform, and they have six months to launch an MVP. There’s a UX designer who’s doing the very best lean UX research, doing site visits with potential customers to understand their needs. The product manager and developer also want to build the best product, but they know they have time and resource constraints, so they all work together to prioritize features and build something they can start selling to customers. Then there’s a second company that’s doing some business process reengineering, and they’re under pressure to meet a deadline that came down from the CEO. After a lot of negotiation, they purchase this product from the first company and roll it out to the employees. Unfortunately, it’s a disaster. People can’t figure out how to use the software. It doesn’t match the way they do their work. The IT department is overwhelmed with support tickets, and they can’t help. But people are relying on this platform to do their jobs. They can’t get things done. So projects fail. They lose business. Some employees get bad performance reviews. At the end of the year, they don’t get raises or bonuses. We’re talking about something very mundane, not Cambridge Analytica, but it’s something any of us may have experienced on either the production end or the receiving end of enterprise software, and it had a serious impact on the lives of these employees.
  • 25. What would have had to happen in order for this to be a successful software rollout? We could look at the way the original user research was done, or who was included in that research. We could look at the way the product team prioritized features and planned the product. The alignment between the product team and the sales team, which informed the sales team’s understanding of the strengths and weaknesses of the product, and their choices about which customers to pursue. In the second organization, we could look at the quality and the timeline of the requirements gathering process. The way they configured and customized the software. The training given to employees as the software was rolled out. And whether or not the IT support desk had the right knowledge and procedures to support the product. A lot of this falls outside the traditional responsibilities of design and UX, but if what we care about is the outcomes for users and stakeholders, we can imagine how all of this could have gone differently. -- For practical guidance on how the UX function can align with and collaborate across other organizational functions, see, for example, Alesha Arp’s presentation from the same conference (IA Conference 2019).
  • 26. There are a few more principles illustrated in this story. • Design decisions have unintended consequences. Some of these can be anticipated; others can’t. • The effects of these decisions scale: the product team is only a few people, but this affects hundreds or thousands of people. • The impact is felt across space and time. The users are far away, and the effects may not be felt for months or years. The designer never experiences or observes these outcomes. • Again, the context of use is a big part of the outcome. What if this software had been rolled out to a university? A hospital? A construction company? The consequences would have been much different. • We can see how the voices that are included in the design process, or the voices that are not included, also affect the outcome.
  • 27. Based on what we’ve been talking about, let’s do an exercise… • Partner with someone in the audience and decide who is “A” and who is “B”. • Individually first, read the scenario and reflect on it. • Once you’ve answered the question, discuss with your partner. This should be a judgment-free conversation. Just listen to what the other person has to say. https://goo.gl/forms/mqCYSQmNeryiHtlv1
  • 28.
  • 29. Did anyone reconsider after the conversation? Obviously, these are leading questions, and they’re written to sway you. But based on the information in the scenario, there is no correct or incorrect response to the poll. Each scenario… • Was told from a different person’s point of view. • Had different information. • Had different levels of insight into the CompuGuard company. • Emphasized different pros and cons. You also probably had certain beliefs, feelings, or past experiences that may have affected your response. You may have had past experiences with privacy, surveillance, or authority that influenced your answer. And whether or not you felt you could make a difference in this company may have influenced your answer. This decision comes down to weighing a whole bunch of pros and cons, which is difficult because on either side you have important human values that are all right, but they conflict with each other. And you don’t have enough information or enough foresight to know the consequences of your decision.
  • 30. A couple more principles here: • Extending the idea of unintended consequences, technology can be misused or abused. CompuGuard’s customers, even if there were checks and balances, could abuse this technology to spy on their employees. • People with different points of view and different information have different realities about what is right and wrong. This is very real for them. • And last, did anyone find that in scenario B, the information I gave you about CompuGuard’s competitors led you to make a decision you might not otherwise have made?
  • 31. So that leads us into the idea of hierarchy. We can see how much power we have as professionals who shape the technological world, yet we work within larger structures that shape the options available to us and limit our agency.
  • 32. If you’ve been to any IA conference, you’ve probably seen this diagram, the idea of pace layering, from Stewart Brand. This is just one way of looking at civilization.
  • 33. Every complex system has many layers of hierarchy and scale and subsystems: systems inside systems inside systems. There are small-scale structures that move and change very quickly, and there are larger-scale structures that change very slowly. It’s from these fast-moving structures that systems learn, and from the larger, slower structures that we get stability and consistency. This relates to design in a very important way…
  • 34. Very important idea: Design reproduces dominant sociopolitical relationships unless there is a deliberate effort to change them.
  • 35. We’ve seen algorithms that amplify racism. Design for healthcare that improves outcomes, but benefits the wealthy. Urban planning that reinforces segregation in cities.
  • 36. Even our exemplars of the very best UX and service design come from companies that have done a lot of good in the world, but also have externalities. These are costs that the company doesn’t take responsibility for and that are absorbed by other people. A good user experience doesn’t guarantee that this won’t happen. It can even facilitate it.
  • 37. So how is it that design does this? This is a course unto itself, but I’ll talk about three mechanisms by which design reproduces these relationships.
  • 38. The first is economic incentives. If you think about it, it’s strange that we manufacture tons and tons of plastics so you can carry food home and throw them away, using gasoline to drive back and forth over and over again. But these wastes don’t go outside the system; they are externalities that come back to us in the form of other costs. We’ve built, and rely on, this infrastructure of food distribution and suburban sprawl. It’s too expensive to change, so we keep designing new products that continue the cycle.
  • 39. And in a society with great wealth disparity, investors follow those who have money to spend. That applies to consumer products and services as well as to hospitals and universities. The result is that designers who want to make a positive difference in the world are more likely to find paying work that serves affluent customers.
  • 40. Another mechanism: governing mentalities. I’m borrowing and extending this term from STS scholar Nancy Campbell, who used it to describe the way human beings are seen and understood from the point of view of policymaking. You may have heard about predictive algorithms and how they’re being used in law enforcement and criminal justice. They’ve been attacked for the way they make highly biased and inaccurate judgments about human beings. But going beyond bias, these technologies tell us something about how the justice system sees criminals, and criminality: perhaps as a deterministic set of personality traits. That mentality is the deeper structure in the hierarchy that I’m talking about.
  • 41. Governing mentalities are “those widely shared values, norms, expectations, and assumptions of how the world operates.” They “are simultaneously the most important and the most difficult to identify: