Pluriversality in AI development


From its beginnings, Artificial Intelligence (AI) has been developed from optimistic and almost unreflective points of view. This fact has probably enabled the incredible results we see nowadays, roughly 70 years after the field’s inception. Between the 1950s and 1960s, the AI field went from complete scepticism about what machines could do to the discovery that they could do more than they were programmed to, which led to disproportionate predictions about the possibilities. Such predictions of AI’s future capacity would recur over the years, alternating with downturns of pessimism when they went unfulfilled, cycles that some authors refer to as AI Springs and Winters. Even now, the pattern of wonderful promises is visible and, although it is wise to be cautious about how alarmist AI has been throughout its history, it is also necessary to acknowledge its extensive influence on all the systems that sustain the world as we know it, and, therefore, to question how it has come to be this way.

During the Lighthill debate in 1973, experts in the AI field deliberated over the promises of the field and the possibility that the “general purpose robot is a mirage”. Although the arguments supporting this claim were mainly based on AI’s inability at the time to cope with the rapid growth of problem complexity (known as combinatorial explosion), none of the participants paid attention to how they were defining general as a synonym of universal. AI systems, and the thinking behind them, were defined and pursued mainly by men at elite universities in Global North countries; the Lighthill debate did not have a single woman participating.

Although the arguments of the Lighthill debate turned the outlook on AI as a promising field negative and eventually ended financial support for most researchers in the area, by the early 80s “nearly every major US corporation had its own AI group”, and there were several ongoing efforts in AI research. Around 1987, there was a change of approach, one that integrated “experimental results rather than philosophical claims”. After that, AI development broadened. Although this approach has brought several advances, as mentioned before, I argue that not having a clear philosophy, or at least not acknowledging the limitations of the one underlying AI efforts, has also led to what seem to be dead ends: the reproduction of discrimination in AI systems and the deepening of inequalities around them, among other problems noticeable today. The possibility of obtaining results without thinking about the philosophical standpoint from which those results are achieved allowed the “articulation of the sciences of the artificial [to have] as its central subject/object the universal figure of man” (p. 141).

Feminisms and decolonial thinking as part of pluriversality

If the mistake in AI’s development so far has been taking the white, straight, male figure as the universal, then a pluriversal approach could allow that error to be amended. Pluriversality is the notion that in this world there must be room for many worlds to coexist. Its roots lie in the Zapatista movement in Mexico, and it is committed to the universal as something that cannot be defined from a single narrow perspective; “the universal can only be pluriversal”, because it “acknowledges and supports a wider radius of socio-political, ecological, cultural and economic needs” (p. 672). In my view, this concept embraces many other visions, which I want to portray as the imaginaries that AI could have adopted as base philosophies and theories over the last 20 years and that are necessary today to develop the just and inclusive solutions that are needed.

With regard to feminist theories, technofeminism and xenofeminism are articulated with the pluriversal vision through the need to recognize the material realities of technology production (p. 324). One of the aims of technofeminism, which is framed within feminist Science and Technology Studies, “is to undo that figure [of the ‘man’ as universal] and the arrangements that it serves to keep in place” (p. 141). Xenofeminism goes further and postulates the recognition of others from an intersectional perspective, which focuses on how different aspects of a person’s identity (such as their gender, race, and sexuality) affect their experiences in society. The Xenofeminist Manifesto specifies that this understanding of how aspects of identity overlap and interact with each other “is not a universal that can be imposed from above, but built from the bottom up”. The latter aligns with pluriversality, since it is not about “cultural relativism, but the entanglement of several cosmologies connected today in a power differential”. That power differential is nowadays defined by the mediation of AI-based technologies, among other systems.

Furthermore, decolonial views are the main foundation of pluriversality, since they postulate that, historically, “technology [has been] used to extend capitalist patriarchal modernity, the aims of the market and/or the state, and to erase indigenous ways of being, knowing, and doing” (p. 11). The many ways in which colonial thought has permeated the paths of AI’s technological development have been discussed by several authors (Mohamed et al., 2020; Thatcher et al., 2016; Zuboff, 2019), but its main link to the definition of the universal as one and only is framed in modernity; as Mignolo notes, “the logic of coloniality [is] covered up by the rhetorical narrative of modernity”.

Why pluriversality in AI?

Questioning why the points of view mentioned above were not considered throughout AI’s history is a cumbersome task, above all because I do not think this belief in a universality defined from the point of view of a few was a spontaneous conception; on the contrary, it required several factors to align to make it happen. Nevertheless, I do consider that there was a lack of reflection during the process. I think of the case of Alison Adam, who guided the development of two feminist AI systems in 1993 (which are not mentioned in the main books on AI history), considering questions that are still valid: “How is AI used and for which purposes? How does AI represent knowledge? What knowledge is used in AI systems?” (p. 3). Adam not only posed these questions but also questioned her own work; after building these systems, she was worried about the possibility of them ending up “replicating the social order and power structures that exist in society” (p. 5). This kind of reflective effort has not been present in AI history until now.

The absence of reflection on one’s own work may be one of the reasons why some foundational ideas of AI are fallacies. For example, the idea that intelligence involves only the brain has been one of the main focuses of AI development throughout history. However, intelligence has proven to be impossible without a body; cognition is embodied. Melanie Mitchell mentions that “[s]everal other disciplines, such as developmental psychology, add to evidence for embodied cognition. However, research in AI has mostly ignored these results” (p. 7). A pluriversal outlook would take those insights as input towards accomplishing its aim in the AI context: not to change the world but to change the beliefs and understandings of the world, which would in turn change our way of living in it.

Revising and questioning AI history allows us to explore alternatives to the philosophies underlying AI systems that have yielded questionable results. In this case, one of the arguable bases of AI is the universal as something that can be defined from the point of view of only a few, a few who were mainly male, white, and from specific academic backgrounds. This essay has argued that the notion of pluriversality, which encompasses technofeminist, xenofeminist, and decolonial theories, may be an appropriate approach for taking action against the problems currently present in AI systems by allowing a better understanding of the world in which they are embedded. 🤖
