Mustaches, Unibrows, and Shalwar Khameezes
In April 2024, The Verge posted a story about attempting to generate an image of an Asian man with a white woman. The author found that many image generators instead gave the woman “Asian features.” Because the majority of these generators are trained on datasets that predominantly feature white individuals, the models struggled to accurately represent an Asian man without relying on stereotypes and tropes.
At one point, Meta banned keywords related to Asians in its image generator. Likewise, Google paused Gemini’s ability to generate images of people after concerns about how it handled diversity. There have also been attempts to fix this bias through “Diversity Fine-tuning.”
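The details vary by approach, but in broad strokes, diversity fine-tuning builds a demographically balanced set of caption–photo pairs and fine-tunes the generator on it so that no single group dominates. A minimal sketch of the caption-balancing step in Python (the function name, concepts, and groups are illustrative, not from any particular implementation):

```python
def build_diverse_prompts(concepts, groups):
    """Cross every concept with every demographic descriptor so each
    group appears equally often in the fine-tuning captions."""
    return [
        f"a photo of a {group} {concept}"
        for concept in concepts
        for group in groups
    ]

# Balanced caption set: 2 concepts x 3 groups = 6 captions, one per pair.
prompts = build_diverse_prompts(
    ["doctor", "teacher"],
    ["Pakistani", "Korean", "Ghanaian"],
)
```

Each caption would then be paired with a matching photograph of a real person from that group, which is exactly where the approach runs into trouble for underrepresented communities.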
However, this approach requires a significant number of photos of the group in question. Growing up as a Pakistani in Mississippi, I was skeptical that a large dataset of photos representing people like me existed. Mississippi’s cultural image is also predominantly white, despite African Americans making up almost thirty-eight percent of Mississippians, and Southern states are often conflated with Texas, leading to depictions with cowboy boots and hats, even though these weren’t common in my experience. Out of curiosity, I decided to try generating images using various AI image generators with prompts related to my background. The results were… interesting, to say the least.
I used a simple prompt, “Pakistani boy from Mississippi,” across nine different AI image generators. While some produced plausible pictures of Pakistani boys (one even in a polo shirt), the majority relied on stereotypes. Most images placed the boys in Shalwar Kameez, attire I rarely saw Pakistani children wear in Mississippi outside of specific cultural events. Even stranger were stereotypes I hadn’t encountered before, largely centered on facial hair. The generated boys often had unibrows, and some even sported full mustaches, an unrealistic depiction of children. The idea of “hairy” Pakistani children wasn’t a trope I recall hearing growing up, which made it interesting to see how the models internalized and applied these particular stereotypes.
Furthermore, the generated images generally lacked regional specificity. Despite the prompt including “from Mississippi,” the AI struggled to incorporate visual cues representing that cultural or geographical context, such as clothing or surroundings typical of the American South. Instead, the generators defaulted heavily to stereotypical markers of Pakistani identity, ignoring the regional modifier.
Below are the results I got from each of the image generators: