Feminist Design Tool

Defensible decision making for interaction design and AI

This tool was created by Feminist Internet and Josie Young (see the license). I found it useful, so I reproduced it below. Also, I translated it into Spanish here.

The aim of the Feminist Design Tool is to deepen how you think about the values you will be embedding in your design as you create it, whether it’s a chatbot, an Artificial Intelligence-powered agent, or something else. The questions in this tool aim to make your design better (and more feminist!) by ensuring it doesn’t knowingly or unknowingly perpetuate biases or inequalities.

1. Getting Started

👨‍👨‍👧‍👦Stakeholders

The idea of ‘universal users’ has become very popular in interaction design. It means designing for ‘everyone’, which sounds good, but in practice it can mean that people with specific needs are overlooked in favour of mainstream groups. This is because designers don’t always consider the people who are not currently well served by mainstream products and services. If you solve for people who are not currently well served, you are more likely to design something that works for everyone. A good example of this is wheelchair ramps: they make buildings accessible both to people who use wheelchairs and to people who don’t.

In addition, the term ‘user’ itself is problematic. It positions a person’s use of a product or service above other aspects of their lived reality, and doesn’t acknowledge their potential for active participation in the production of technologies. We prefer the term ‘stakeholders’, which emphasises the idea that we all have a stake in the technologies we use, and are active participants in constructing their social meaning and value. Alternatively, you can simply use ‘person’ or ‘people’!

Have you considered:

  1. Rather than design for a ‘universal user’, can you identify a stakeholder who is not currently well served, and who could benefit from your design? (Try not to make assumptions about what is beneficial; get good-quality stakeholder research instead.)
  2. What might be some of the specific needs, barriers, and problems that they face?
  3. What are their strengths and viewpoints?
  4. What different participatory methods do you have available so that your stakeholder can co-create or have direct input into the development of your design? Some examples of participatory methods include:
    • having a stakeholder from the community that you are designing for on your design team
    • ensuring community members agree you are addressing a relevant problem
    • ensuring that community members agree the choice of technology suits the solution they are seeking
    • co-creation design workshops, early and ongoing feedback, and testing

💖Purpose

Technologies are built for many reasons, and can have many possible beneficial or harmful consequences for people. This tool is about encouraging you to design technologies that improve, rather than damage, social, environmental and economic outcomes. This starts with defining the purpose of the thing you want to make, and considering whether it is beneficial for the world.

Have you considered:

  1. Does your design meet a meaningful human need or address an injustice?
  2. How will your design address the problem/s experienced by your stakeholder? You may want to think about your stakeholder in more detail to help you:
    • What is the problem they’re trying to overcome?
    • What obstacles prevent them from overcoming the problem?

💡Context

Technology does not sit in isolation from political, social, economic, cultural, technological, legal, and environmental power dynamics. For example, facial recognition technology is currently being deployed by police services in the context of a history of police discrimination towards black communities.

Have you considered:

  1. Do you have a good understanding of the context your design will be part of and the power dynamics at play within it?
  2. Do you understand the opportunities and challenges for different stakeholders within this context?
  3. Who is not well served in this context and why?
  4. Does your design exacerbate problems that others are currently trying to solve in this context?

2. Taking a Step Back

👁️Team bias

We all come from places and experiences that have shaped our thinking and perspectives, and we tend to unconsciously embed these perspectives in the things we make. The risk of not reflecting on this is that your design may reinforce negative stereotypes about particular groups of people, which could be harmful to your stakeholders.

Have your team reflected on:

  1. Your values and position in society, individually and collectively?
  2. How your values and position might lead you to choose one option over another or hold a specific perspective on the world?
  3. How your values and position in society relate to the stakeholders your design seeks to engage?
  4. Are there additional perspectives you need to bring into the process? How might you do this?

3. Getting into the Detail

🤖Design & Representation

The way AI agents (chatbots, game characters, etc.) are designed and represented can challenge or reinforce stereotypes. For example, characterising a financial advice bot as male could reinforce the stereotype that men are more competent with money than women. How are you planning to depict or represent your design to your stakeholders?

Have you considered:

  1. What type of character will you give your design?
  2. How might your character choice reinforce any stereotypes?
  3. How will your character remind the stakeholder it’s a robot?
  4. Will you assign a gender to your character? Why? In what ways might this reinforce or challenge gender stereotypes? Have you considered a genderless design? What possibilities might this open up?
  5. In what ways might your choices prompt people to behave unethically or in a prejudiced way?

💬Conversational Design

Since conversation is likely the primary interface between your stakeholder and your design, the dialogue needs to be carefully crafted. The dialogue is how stakeholders will judge whether the design is effective, and it is also where the design could harm or discriminate against them. A feminist approach to conversation design means using empathic, inclusive, accessible language and images, and providing opportunities for the stakeholder to specify how they would like to be addressed.

Have you considered:

  1. How can you get the design to speak with a feminist voice? What’s the tone of voice (physically and metaphorically)?
  2. What words should the design avoid (what could be triggering or upsetting)?
  3. Are there words specific to your stakeholder you need to ensure your design can understand?
  4. If it receives abuse, how will the design respond? (See the sketch after this list.)
  5. What will your design say when it doesn’t understand?
  6. How will you get feedback about whether the conversation is appropriate for your stakeholders?
  7. Have you asked how the stakeholder would like to be addressed?
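
Questions 4 and 5 above are concrete enough to prototype early. Here is a minimal Python sketch of one way a design might set a boundary on abuse and be honest when it doesn’t understand. The keyword matching, phrases, and stub functions are all illustrative assumptions, not part of the original tool.

```python
# Illustrative only: a placeholder term list; a real system needs
# stakeholder-informed moderation data, not keywords.
ABUSIVE_TERMS = {"stupid", "shut up"}

def understands(message: str) -> bool:
    # Stand-in for a real intent classifier.
    return bool(message.strip())

def answer(message: str) -> str:
    # Stand-in for the design's actual response logic.
    return "Here's what I can tell you about that..."

def respond(message: str) -> str:
    """Reply in a way that names abuse and admits the design's limits."""
    lowered = message.lower()
    if any(term in lowered for term in ABUSIVE_TERMS):
        # Set a boundary rather than deflect, apologise, or play along.
        return "I won't engage with abusive language. Let's keep this respectful."
    if not understands(lowered):
        # Be honest about the limitation and invite a rephrase.
        return "I'm not sure I understood that. Could you put it another way?"
    return answer(lowered)

print(respond("shut up"))  # boundary-setting reply
print(respond(""))         # honest "didn't understand" reply
```

The specific wording matters less than the pattern: the response to abuse becomes a deliberate, documented design choice rather than a platform default.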

🗄️Data

Biases in datasets and the AI techniques they power (like machine learning) can negatively affect how your design works and impacts your stakeholders. It’s important to monitor data bias through the whole lifecycle of your design.

Have you considered:

  1. How will you collect and treat data through the development of your design?
  2. Are you aware of how bias might manifest itself in your training data? (See the sketch after this list.)
  3. Are you aware of how bias might manifest itself in the AI techniques that power your design (like machine learning)?
  4. How could stakeholder-generated data and feedback be used to improve the design?
  5. Will the design learn from the stakeholder’s behaviour, and if so, are you assuming that the design will get it right?
  6. What mechanisms or features could make these assumptions visible to the stakeholder and empower them to change the assumptions if they want to?
  7. How will you protect stakeholder data?
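
A low-effort starting point for question 2 above is simply counting who is represented in the training data. The sketch below is an illustrative assumption rather than a method prescribed by the tool; a real audit would also examine label quality, proxy variables, and outcome rates across groups.

```python
from collections import Counter

def representation_audit(records: list[dict], attribute: str) -> dict[str, float]:
    """Share of training records for each value of a (self-described) attribute."""
    counts = Counter(record.get(attribute, "unknown") for record in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical speech-training records for a voice-driven design:
training_data = [
    {"speaker_gender": "woman", "accent": "Scottish"},
    {"speaker_gender": "man",   "accent": "RP"},
    {"speaker_gender": "man",   "accent": "RP"},
    {"speaker_gender": "man",   "accent": "American"},
]

print(representation_audit(training_data, "speaker_gender"))
# {'woman': 0.25, 'man': 0.75} -- a skew the design will likely inherit
```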

📐Architecture

Building connected devices takes place within a complex ecosystem of physical infrastructure, hardware, software and human labour. Often this ecosystem is hidden, and it can be hard to track exploitation or its negative effects. While it’s not always possible to have a totally feminist architecture, it is important to consider the steps you can take to minimise harm and to justify your choices.

Have you considered:

  1. What type of technical architecture and capabilities will you use? For example, are you buying these directly from companies like Amazon or using open-source platforms? Are you aware of the ethical and feminist implications of your choice?
  2. How will you minimise the carbon and climate footprint of your design? For example, excessive data use through things like video streaming can drive up carbon footprints. (See the sketch after this list.)
  3. Where might unpaid or exploited labour exist in the production/supply chain of the technology you’re using?
  4. Will the impact of AI and automation on the service your design aims to provide make some jobs redundant or lower their status?
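
For question 2 above, even a back-of-envelope estimate can make a design trade-off concrete. Both constants in this sketch are placeholder assumptions (network energy intensity and grid carbon intensity vary widely by provider and country), so substitute sourced figures before acting on the numbers.

```python
# Back-of-envelope network carbon estimate. BOTH constants are assumptions
# for illustration; replace them with sourced figures for your own context.
KWH_PER_GB = 0.05    # assumed network energy intensity (kWh per GB)
GCO2_PER_KWH = 400   # assumed grid carbon intensity (gCO2e per kWh)

def transfer_emissions_g(gigabytes: float) -> float:
    """Estimated grams of CO2e to move `gigabytes` across the network."""
    return gigabytes * KWH_PER_GB * GCO2_PER_KWH

# Two hypothetical ways the design could deliver the same help:
video_reply_gb = 0.5      # a short streamed video
text_reply_gb = 0.0001    # a plain text message

print(transfer_emissions_g(video_reply_gb))  # ~10 g per interaction
print(transfer_emissions_g(text_reply_gb))   # ~0.002 g per interaction
```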

The idea is to go through the questions in each section and answer them as a team during the design process; collaborative tools like Mural or Miro can help you work through them. I’m using the tool in a small project with some friends, and we are enjoying the reflections the guide prompts. I hope you enjoy them too! 🤓
