Part of: Emily Carr University of Art + Design
Course: Contemporary Dialogues in Design
Key words: Artificial Intelligence (AI), Machine Learning, Algorithms, Data-Driven, Systemic Inequality, Inclusivity
We live in a world where artificial intelligence (AI) is everywhere; intelligent algorithms are programmed to assist humans and make our work and daily lives easier and more efficient. AI's stated ambition is a human approach, acting and thinking like the human mind, combined with an ideally rational approach to the tasks it is designed for (What is Artificial Intelligence (AI)?, 2020, para. 3). In reality, however, there is a whole other dimension: AI has been worsening inequality through the biases embedded within its machine-learning algorithms. 'Artificial Intelligence's White Guy Problem' by Kate Crawford is an informative article that brings awareness to this other side of AI. Crawford is known for her research on the social and political implications of artificial intelligence. She is a principal researcher at Microsoft and co-chairwoman of a White House symposium on society and AI. Her latest book, 'Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence' (Yale University Press), was published recently (Crawford, 2021; Crawford, 2016).
Crawford cites multiple examples that clearly show the systemic inequality caused by AI in workplaces, judicial systems, and housing, involving different forms of discrimination such as sexism and racism. She prominently notes that AI works based on the values of its creators, and that inclusivity plays a vital role in the larger picture: not just at the specialist level in the field, but also among the people who sit on company boards and make ethical decisions. The statistics show that the AI industry suffers from a severe lack of diversity. Women make up only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of the research staff at Facebook and Google, respectively. Racial diversity is much worse: Black employees make up just 4% of Facebook's and Microsoft's workforces and only 2.5% of Google's total workforce. No data is available for transgender people and other gender minorities, but there is little reason to believe the pattern is any better there (West et al., 2019, pp. 3, 5).
An article published on venturebeat.com (2019) by Khari Johnson, who interviewed Os Keyes, a PhD student at the University of Washington's Department of Human Centered Design & Engineering, is another example of what Crawford is trying to address. The article explains how AI systems have adverse consequences for the lives of transgender people and those who do not strictly adhere to a binary identification of male and female. In one example, they discuss the use of facial recognition software, specifically automatic gender recognition (AGR), in Berlin's train ticketing machines to give female riders discounts on their tickets on International Women's Day. Though well-intentioned, the promotion validates the technology, making it seem convenient and thereby making detrimental uses acceptable. In other examples, Keyes describes the different types of discrimination they face because of facial recognition software. In the VentureBeat article, Keyes expresses concern that the AI industry has rarely considered transgender or gender-fluid people in its work; as a result, facial recognition software as it exists today effectively behaves in transphobic ways (Johnson, 2019). Ultimately, the robustness of a machine-learning system is determined by the quality of the data set used to train it.
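To make that last point concrete, here is a minimal, hypothetical sketch of the dynamic Keyes and Crawford describe: a classifier trained on data that under-represents one group performs markedly worse on that group. Everything here is synthetic and illustrative; the groups, features, and proportions are assumptions chosen for demonstration, not a model of any real system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic 2-D features for one group; `shift` moves the group's
    # feature distribution so the two groups are not identical.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data (95%); group B is under-represented (5%).
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out samples from each group: accuracy on group B
# is far lower, because training barely saw that group.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=1.5)
print("accuracy on group A:", accuracy_score(ya_test, model.predict(Xa_test)))
print("accuracy on group B:", accuracy_score(yb_test, model.predict(Xb_test)))
```

The model is not malicious; it simply optimizes for the population it was shown, which is precisely how skewed data sets translate into skewed systems.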
Crawford warns, "We need to be vigilant about how we design and train these machine-learning systems, or we will see ingrained forms of bias built into the artificial intelligence of the future" (Crawford, 2016, p. 3). If experts examine how systems become biased, starting by acknowledging the data that is used, we will be in a far better position to create more equitable AI systems that do not reproduce racism and sexism against marginalized communities. However, this will require significantly greater accountability from the tech community; governments and public organizations can also help by insisting on fairness when they invest in prediction technologies. The consequences will only worsen if the necessary steps are not taken to start fixing the problem.
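As a hedged illustration of the vigilance Crawford calls for, the following sketch audits a predictive system by comparing its error rates across demographic groups. The toy predictions, group labels, and metrics are assumptions chosen for clarity, not a procedure the article itself prescribes.

```python
import numpy as np

def error_rates_by_group(y_true, y_pred, groups):
    """Compare false-positive and false-negative rates per demographic group."""
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        neg, pos = (y_true[m] == 0), (y_true[m] == 1)
        fp = float(np.mean(y_pred[m][neg])) if neg.any() else float("nan")
        fn = float(np.mean(1 - y_pred[m][pos])) if pos.any() else float("nan")
        rates[g] = {"false_positive_rate": fp, "false_negative_rate": fn}
    return rates

# Toy audit: a hypothetical system that over-predicts risk for group "b".
y_true = np.array([0, 0, 1, 1, 0, 0, 1, 1])
y_pred = np.array([0, 0, 1, 1, 1, 1, 1, 1])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
for group, rate in error_rates_by_group(y_true, y_pred, groups).items():
    print(group, rate)  # group "b" has a false-positive rate of 1.0 vs 0.0 for "a"
```

A disparity like this, caught before deployment, is exactly the kind of finding an accountable review process would flag.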
This article offers deep insight into how important it is to use the right data for any product or service to succeed without harming anyone; even a small oversight can lead to misery for many. It also emphasizes two of the most crucial considerations of our time, diversity and inclusivity: whatever the field, and whether AI is involved or not, taking them into account is essential.
References
- Crawford, K. (2016). Artificial Intelligence’s White Guy Problem. The New York Times. https://www.cs.dartmouth.edu/~ccpalmer/teaching/cs89/Resources/Papers/AIs%20White%20Guy%20Problem%20-%20NYT.pdf.
- Crawford, K. (2021). Kate Crawford. Katecrawford.net. https://www.katecrawford.net/about.html.
- Johnson, K. (2019). A transgender AI researcher’s nightmare scenarios for facial recognition software. VentureBeat. https://venturebeat.com/2019/04/24/a-transgender-ai-researchers-nightmare-scenarios-for-facial-recognition-software/.
- West, S., Whittaker, M., & Crawford, K. (2019). Discriminating Systems: Gender, Race, and Power in AI. AI Now Institute. https://ainowinstitute.org/discriminatingsystems.pdf.
- What is Artificial Intelligence (AI)? (2020). IBM. https://www.ibm.com/cloud/learn/what-is-artificial-intelligence.