
Alex Hanna: Diversity, The Key Towards Ethical Design?

I met Alex on one of those snowy winter evenings in Montreal, when I was on a quest for the holy grail and she was visiting the town for work. As she notes, unlike many other folks who work at Google, she has a background in sociology. Alex has been an active voice in challenging inequalities in and out of artificial intelligence, and I was eager to learn more about her views on «Machine Fairness».

The conversation made me rethink the common practices I have been equipped with over the years. I hope it will be a similar inspiration for the designers of the future.

***

Alex Hanna is a sociologist and research scientist working on machine learning fairness and ethical AI at Google. Before that, she was an Assistant Professor at the Institute of Communication, Culture, Information and Technology at the University of Toronto. Her research centers on the origins of the training data that form the informational infrastructure of machine learning, artificial intelligence, and algorithmic fairness frameworks. She received her PhD in sociology from the University of Wisconsin-Madison.

***

Bahar Partov: Ethics is a major subject in your research. Can you take us back to the definition of ethics?

Alex Hanna: That is a big question. A classical version of ethics asks what it means to live a good life, what it means to live a moral life. It is something that has existed from the earliest times in philosophy. There are of course different strains of ethics, including virtue ethics, Confucian ethics, and so on. Broadly speaking, we may say that ethics answers the question of «how to live a good life».

Bahar Partov: Ethical definitions evolve over time. How can we make sure that an ethical AI system stands the test of time as well?

Alex Hanna: It is an interesting question. In fact, AI methods are pretty old; neural networks have existed since the 50s. We are at a point where we can deploy AI at scale only because we have a significant amount of data and computational power. The worry is that AI is not meeting the ethical challenges. This does not pertain only to AI, but to technology in general.

The principles by which we design technology need to include ethical considerations. They could apply anywhere from weapon systems to data collection, transit, and the design of cities. A classical example of this is when Robert Moses ordered engineers to build the Southern State Parkway bridges extra-low, which prevented poor and Black people traveling by bus from using the parkway. This has nothing to do with AI, but it has everything to do with how technology can enforce the separation of people by class and race.

In some cases the designer carries their personal bias over into the design; in other cases they simply can't think of the ways in which their technology could be unfair. It's not just about the individual and their own prejudices, but about whether they considered the downstream consequences of a decision or technology on people.

Bahar Partov: As a sociologist, do you think that engineers and scientists need to be equipped with more information about ethics?

Alex Hanna: Of course, they need to have some kind of training in these topics. A lot of times what we see is a one-week course in ethics. You have a compliance sort of thing, and it is not sufficient. Most people don't think that they are doing something bad; rather, they treat work practices as habits, and because they are habits they get re-enacted over and over again without thinking critically: I am doing this coding practice because I've done it before.

We are not used to doing what Shannon Vallor calls cultivating techno-social virtue. What that means is thinking through the work practices of designing in an ethical way. For example, in the process of building a robot delivery service (a system that already runs at Berkeley), you may be thinking that this is helping people with disabilities, but what if the robot only stops at the door of a multi-story building? How are the robots going to navigate the streets? Moreover, in the Berkeley case they had to send the images the robot was capturing to crowd workers, who labeled them all so the system could learn. This work was done by thousands of people, usually lower-income workers from the U.S. and India. So you are actually adding to the ranks of underpaid workers. In that case, the vision of doing the practice, i.e. thinking of a problem and engineering it, is not aligned with a techno-socially virtuous approach to solving the problem. The key to a techno-social approach is to consult with the folks who might be helped by this technology, for example asking how the new approach could impact other delivery drivers.

Cultivating such habits takes a lot of time and effort. To ensure more engineers are equipped with this training, ethical design needs to be built into the curriculum, not just for a week or a single course, but as the starting point of the curriculum. These days we are seeing more initiatives to develop these types of programs.

Bahar Partov: Is AI the solution for everything?

Alex Hanna: Actually, I think AI is the solution to a very narrow set of problems. There are some interesting cases in which AI has been used for health applications, such as diagnosis. There are some potential uses of AI for helping mitigate certain environmental risks. The larger problem is that there are only a few companies with enough data to deploy AI at scale, and that endows these particular firms with a form of socio-technical power.

Technological interventions can be applied to particular problems, but it really depends on what the problem is as well as the community involved. For instance, I cannot see a case in which AI could be used to solve problems such as income inequality, or to alleviate racial or housing segregation. These are policy decisions.

«In other words, move slow and mend things, as opposed to Facebook’s motto of move fast and break things» – Alex Hanna


Bahar Partov: Is it possible to design a machine that makes more fair assessments than a human?

Alex Hanna: It depends on the context. Take, for instance, the allocation of welfare benefits: it might be the case that a social worker can allow someone some leeway based on certain observations. It is not just about fair optimization anymore. By having humans there, we have a point of intervention in the process. In many systems, having that dimension of intervention allows more compassion in terms of how people are interacting with a particular social system.

Another instance is in hiring: algorithms that look at faces in interviews and try to assess whether an interviewee will be good for a job, as a pre-screening without an interviewer being present. That already changes the nature of the interaction.

Bahar Partov: What is social good? Do you suggest using any fairness framework when thinking about applications of AI for social good?

Alex Hanna: I guess the question is what you are trying to do to begin with. We need to understand the full expanse of the problem, i.e. the root causes and the more systemic issues. There is a dividing line between whether we should make a system more fair or whether the system is just irrevocably broken and we need to think about how to reframe the problem. This is often the case in criminal justice. Are we thinking about pretrial risk assessment systems? Should we use this practice, or should we be saying that pretrial risk assessment itself is a problematic system, or that locking people in cages because they cannot pay a bond fee is the problem, for example?

This is why you can’t narrowly define fairness. It is not a metric, because at the level of the engineer or designer you need to ask what this is for. It has to do with systems of power and domination, things that many engineers and data scientists are not prepared to examine or intervene on.

Bahar Partov: Hopes for the future of ethics in AI?

Alex Hanna: I want to see companies, governments, and academics take ownership and have sufficiently trained people. More social scientists in companies, as well as more women, more Black people, Indigenous people, people of color, queer people, and disabled people. We need insights from lived experience, disciplinary training, and disciplinary conversations. I hope to see well-informed legislation come out in Canada and the US that really understands how these technologies are built, the data that fuels them, what kind of data is needed, what kind of data needs to be restricted, and what kind of policies we need to put around building algorithms.

The last thing I really hope to see is that affected communities are engaged throughout all these processes: talking to community groups that are on the ground and engaging with them as stakeholders, rather than rushing to build things that aren’t needed. In other words, move slow and mend things, as opposed to Facebook’s motto of move fast and break things.
