Representational harm

Systems cause representational harm when they misrepresent a group of people in a negative manner. Representational harms include perpetuating harmful stereotypes about or minimizing the existence of a social group, such as a racial, ethnic, gender, or religious group.<ref name=":3">Blodgett, Su Lin (2021-04-06). Sociolinguistically Driven Approaches for Just Natural Language Processing. Doctoral Dissertations (Thesis). doi:10.7275/20410631.</ref> Machine learning algorithms often commit representational harm when they learn patterns from data that have built-in biases. While preventing representational harm in models is essential to prevent harmful biases, researchers often lack precise definitions of representational harm and conflate it with allocative harm, an unequal distribution of resources among social groups, which is more widely studied and easier to measure.<ref name=":3" /> However, recognition of representational harms is growing and preventing them has become an active research area. Researchers have recently developed methods to effectively quantify representational harm in algorithms, making progress on preventing this harm in the future.<ref name=":0" /><ref name=":1" />

Types

Three prominent types of representational harm include stereotyping, denigration, and misrecognition.<ref>Rusanen, Anna-Mari; Nurminen, Jukka K. "Ethics of Ai". ethics-of-ai.mooc.fi.</ref> These subcategories present many dangers to individuals and groups.

Stereotypes are oversimplified and typically undesirable representations of a specific group of people, usually defined by race or gender. Stereotyping can lead to the denial of educational, employment, housing, and other opportunities.<ref name=":2">Shelby, Renee; Rismani, Shalaleh; Henne, Kathryn; Moon, AJung; Rostamzadeh, Negar; Nicholas, Paul; Yilla-Akbari, N'Mah; Gallegos, Jess; Smart, Andrew; Garcia, Emilio; Virk, Gurleen (2023-08-29). "Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction". Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. AIES '23. New York, NY, USA: Association for Computing Machinery. pp. 723–741. doi:10.1145/3600211.3604673. ISBN 979-8-4007-0231-0. S2CID 256697294.</ref> For example, the model minority stereotype of Asian Americans as highly intelligent and good at mathematics can be damaging professionally and academically.<ref>Trytten, Deborah A.; Lowe, Anna Wong; Walden, Susan E. (January 2, 2013). ""Asians are Good at Math. What an Awful Stereotype" The Model Minority Stereotype's Impact on Asian American Engineering Students". Journal of Engineering Education. 101 (3): 439–468. doi:10.1002/j.2168-9830.2012.tb00057.x. ISSN 1069-4730. S2CID 144783391.</ref>

Denigration is the action of unfairly criticizing individuals, and it frequently occurs when a social group is demeaned.<ref name=":2" /> For example, when users search for "Black-sounding" names rather than "white-sounding" ones, some retrieval systems bolster the false perception of criminality by displaying ads for bail-bonding businesses.<ref>Sweeney, Latanya (2013-03-01). "Discrimination in Online Ad Delivery: Google ads, black names and white names, racial discrimination, and click advertising". Queue. 11 (3): 10–29. arXiv:1301.6822. doi:10.1145/2460276.2460278. ISSN 1542-7730. S2CID 35894627.</ref> A system may also shift the representation of a group to one of lower social status, often resulting in disregard from society.<ref name=":2" />

Misrecognition, or incorrect recognition, can take many forms, including, but not limited to, erasing and alienating social groups and denying people the right to self-identify.<ref name=":2" /> Erasure and alienation involve the unequal visibility of certain social groups; in particular, systematic ineligibility in algorithmic systems perpetuates inequality by contributing to those groups' underrepresentation.<ref name=":2" /> Denying people the ability to self-identify is closely related, as people's identities can be 'erased' or 'alienated' by these algorithms. Misrecognition causes more than surface-level harm to individuals: psychological harm, social isolation, and emotional insecurity can all emerge from this subcategory of representational harm.<ref name=":2" />

Quantification

As the dangers of representational harm have become better understood, some researchers have developed methods to measure representational harm in algorithms.

Modeling stereotyping is one way to identify representational harm. Representational stereotyping can be quantified by comparing the predicted outcomes for a social group with the ground-truth outcomes for that group observed in real data.<ref name=":0">Abbasi, Mohsen; Friedler, Sorelle; Scheidegger, Carlos; Venkatasubramanian, Suresh (28 January 2019). "Fairness in representation: quantifying stereotyping as representational harm". arXiv:1901.09565 [cs.LG].</ref> For example, if individuals from group A achieve an outcome with a probability of 60%, stereotyping is observed if the model predicts individuals from that group to achieve the outcome with a probability greater than 60%.<ref name=":0" /> These researchers modeled stereotyping in the context of classification, regression, and clustering problems, and developed a set of rules to quantitatively determine whether model predictions exhibit stereotyping in each case.[citation needed]
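The comparison described above can be sketched in a few lines. This is an illustrative simplification, not the cited authors' exact method: it simply measures the gap between a model's predicted outcome rate for a group and the rate observed in ground-truth data, with a positive gap indicating over-prediction consistent with stereotyping.

```python
def stereotyping_gap(predicted_labels, true_labels):
    """Difference between a group's predicted outcome rate and its
    observed (ground-truth) outcome rate. A large positive value
    means the model over-predicts the outcome for this group."""
    predicted_rate = sum(predicted_labels) / len(predicted_labels)
    observed_rate = sum(true_labels) / len(true_labels)
    return predicted_rate - observed_rate

# Group A's observed outcome rate is 60%, but the model predicts 80%.
predicted = [1, 1, 1, 1, 0, 1, 1, 1, 0, 1]  # 8 of 10 predicted positive
observed = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]   # 6 of 10 actually positive
gap = stereotyping_gap(predicted, observed)  # 0.2, i.e. over-prediction
```

In practice a threshold or statistical test would be applied to the gap before declaring that a model exhibits stereotyping.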

Other attempts to measure representational harms have focused on applications of algorithms in specific domains such as image captioning, the act of an algorithm generating a short description of an image. In a study on image captioning, researchers measured five types of representational harm. To quantify stereotyping, they counted the incorrect words included in a model-generated image caption when compared to a gold-standard caption.<ref name=":1">Wang, Angelina; Barocas, Solon; Laird, Kristen; Wallach, Hanna (2022-06-20). "Measuring Representational Harms in Image Captioning". 2022 ACM Conference on Fairness, Accountability, and Transparency. FAccT '22. New York, NY, USA: Association for Computing Machinery. pp. 324–335. doi:10.1145/3531146.3533099. ISBN 978-1-4503-9352-2. S2CID 249674329.</ref> They manually reviewed each incorrectly included word, determining whether it reflected a stereotype associated with the image or was an unrelated error, which gave them a proxy measure of the amount of stereotyping in caption generation.<ref name=":1" /> These researchers also attempted to measure demeaning representational harm by analyzing how frequently the humans in an image were mentioned in the generated caption, hypothesizing that omitting the individuals from the caption was a form of dehumanization.<ref name=":1" />
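The first automated step of this pipeline, flagging words in a generated caption that are absent from the gold-standard caption, can be sketched as follows. This is a hypothetical simplification: the study's actual procedure involved human annotation, and the captions below are invented for illustration.

```python
def incorrect_words(generated_caption, gold_caption):
    """Return words in the generated caption that do not appear in the
    gold-standard caption. These are candidates for manual review to
    decide whether each reflects a stereotype or an unrelated error."""
    gold_words = set(gold_caption.lower().split())
    return [w for w in generated_caption.lower().split() if w not in gold_words]

gold = "a doctor examines a patient"
generated = "a nurse examines a patient"
flagged = incorrect_words(generated, gold)  # ["nurse"]
```

A reviewer would then judge whether a flagged word such as "nurse" reflects a stereotype (here, a gendered occupation swap) or is simply a recognition error.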

Examples

One of the most notorious examples of representational harm was committed by Google in 2015, when an algorithm in Google Photos classified Black people as gorillas.<ref>"Google apologises for Photos app's racist blunder". BBC News. 2015-07-01. Retrieved 2023-12-06.</ref> Developers at Google said that the problem arose because the training dataset contained too few faces of Black people for the algorithm to learn the difference between Black people and gorillas.<ref name=":4">Grant, Nico; Hill (May 22, 2023). "Google's Photo App Still Can't Find Gorillas. And Neither Can Apple's". The New York Times. Retrieved December 5, 2023.</ref> Google issued an apology and addressed the issue by blocking its algorithms from classifying anything as a primate.<ref name=":4" /> As of 2023, Google's photos algorithm was still blocked from identifying gorillas in photos.<ref name=":4" />

Another prevalent example of representational harm is the encoding of stereotypes in word embeddings, which are trained on a wide range of text. Word embeddings represent a word as an array of numbers in vector space, allowing the relationships and similarities between words to be calculated.<ref>Major, Vincent; Surkis, Alisa; Aphinyanaphongs, Yindalon (2018). "Utility of General and Specific Word Embeddings for Classifying Translational Stages of Research". AMIA ... Annual Symposium Proceedings. AMIA Symposium. 2018: 1405–1414. ISSN 1942-597X. PMC 6371342. PMID 30815185.</ref> However, studies have shown that word embeddings commonly encode harmful stereotypes; in a well-known example, the phrase "computer programmer" is often closer to "man" than to "woman" in vector space.<ref>Bolukbasi, Tolga; Chang, Kai-Wei; Zou, James; Saligrama, Venkatesh; Kalai, Adam (21 Jul 2016). "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings". arXiv:1607.06520 [cs.CL].</ref> This could be interpreted as misrepresenting computer programming as a profession better performed by men, which would be an example of representational harm.
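The "closer in vector space" claim is ordinarily checked with cosine similarity. The sketch below uses made-up three-dimensional vectors purely for illustration; real embeddings such as word2vec have hundreds of dimensions and are learned from large corpora.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means identical
    direction, 0.0 means orthogonal (unrelated)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy embeddings constructed so that "programmer" points in roughly the
# same direction as "man" -- mimicking the encoded stereotype.
embeddings = {
    "programmer": [0.9, 0.4, 0.1],
    "man":        [0.8, 0.5, 0.2],
    "woman":      [0.3, 0.5, 0.9],
}
sim_man = cosine_similarity(embeddings["programmer"], embeddings["man"])
sim_woman = cosine_similarity(embeddings["programmer"], embeddings["woman"])
# With these toy vectors, sim_man > sim_woman: "programmer" sits closer
# to "man" than to "woman", mirroring the bias discussed above.
```

Debiasing methods such as the one proposed by Bolukbasi et al. work by reducing exactly this kind of similarity gap along a gender direction in the embedding space.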

References

<references group="" responsive="1"></references>