The Impact of AI Biases on Minorities: Insights from Tamar Huggins
Artificial Intelligence (AI) is transforming industries worldwide, but it is not without flaws. One critical issue is AI’s inherent biases, which can have significant implications, particularly for minorities. In a recent episode of the “Back in America” podcast, Tamar Huggins, founder and CEO of Tech Spark, shared her perspective on this pressing issue. Her insights shed light on the challenges and potential solutions for creating more inclusive AI technologies.
The Human Element in AI
Despite their advanced capabilities, AI systems are ultimately shaped by human input. Tamar highlights, “In terms of artificial intelligence, the models, machines cannot work without human interaction. Unfortunately, as humans, we all have biases, and when that is coupled with the creation of a new thing, those biases can be implemented into, for example, the training data; whether it’s done purposefully or not, the biases are always going to be in there.”
This statement underscores a fundamental issue: AI systems are only as unbiased as the data they are trained on and the people who develop them. This problem becomes even more pronounced in the context of minority representation.
Spark Plug: Mitigating Bias Through Diversity
To address these biases, Tamar developed Spark Plug, an AI tool that translates classical literature into African American Vernacular English (AAVE). The goal is to create a more inclusive learning experience for Black students. Tamar explains, “We first started out by transforming classical text into African American vernacular English because we wanted to increase the engagement in schools and also content retention. And we thought the best way to do that was to create a text, a piece of technology designed with the end user in mind.”
Spark Plug is unique in its approach, utilizing diverse data sources and cultural contexts to train its models. Tamar notes, “When we created Spark Plug, we used text from the civil rights movement, speeches from Dr. King, Malcolm X, and authors from the Harlem Renaissance. We like to think of it as training it and building a foundation that rests upon the shoulders of our ancestors.”
The Importance of Diverse Voices in AI Development
One of Tamar’s most striking points is the need for diverse voices in AI development teams. She compares the lack of diversity in tech to historical examples of exclusion, saying, “This reminds me of, have you ever watched Mad Men? A group of men creating advertising for women’s hygiene products without asking any woman if the marketing even makes sense.”
The lack of representation in tech leads to products that fail to serve minority communities effectively. Tamar emphasizes, “If you do not have a diverse development team, what do you expect? If you are creating a product that is for millions of people to use and have access to, and your development team, your UX design team, your engineering team, your product team, your sales team, and your marketing team is not reflective of the market that you are attracting, then what is going to happen?”
AI and African American Vernacular English (AAVE)
AAVE is not just a dialect but a significant part of Black cultural identity. AI’s failure to recognize and process AAVE effectively can lead to digital marginalization. Tamar’s work with Spark Plug aims to change this. She states, “We have to start looking at elements like technology as something that is for all people. Unfortunately, big tech focuses on what will serve their bottom line, and being culturally relevant and equitable doesn’t serve their bottom line because it takes a lot of work. It takes a lot of time and dedication to do the type of work we do.”
By incorporating AAVE into AI, Tamar is not only making technology more accessible but also preserving a crucial aspect of Black culture. This initiative highlights AI's broader potential when designed with cultural relevance and inclusivity in mind.
Moving Forward: The Role of AI in Social Equity
The potential for AI to widen or bridge racial and economic gaps is immense. According to McKinsey, generative AI could widen the racial economic gap in the United States by $43 billion annually. However, if deployed thoughtfully, it could instead remove barriers to Black economic mobility.
Tamar’s efforts with Spark Plug are a step towards mitigating these disparities. Her work demonstrates that AI can be a powerful tool for social equity with the right approach. She concludes, “It’s important to understand that it’s not just for Black students. It’s not just for Brown students. All students can use Spark Plug.”
In conclusion, Tamar Huggins’ insights and initiatives highlight the critical need for diversity in AI development. By addressing biases and incorporating cultural relevance, we can create AI technologies that serve and uplift all communities, particularly those that have been historically marginalized.
AAVE and AI Bias
African American Vernacular English (AAVE) can be traced back to the 17th century during early British colonization of the American South. Throughout this period, Black slaves and indentured servants began to develop a new dialect that combined British English with elements of African and Caribbean Creole languages. While AAVE emerged in part out of a lack of educational access, it also functioned as a mode of resistance — “a covert, often defiant response to the surveillance state of slavery.” Princeton University’s Cornel West argues that this resistance is expressed through linguistic differences and unique hand expressions, rhythmic repetition, ways of walking, hairstyles, and more.
According to a Stanford University study, all five speech recognition systems tested had error rates nearly twice as high for Black speakers as for white speakers, even when the speakers were matched by gender and age and spoke the same words. This disparity underscores the critical need for inclusive and representative AI training data to avoid perpetuating technological biases.
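The disparity the Stanford researchers reported is expressed as word error rate (WER), the standard metric for speech recognition accuracy. As a minimal sketch of how such a comparison is computed (the data below is hypothetical, and this is not the study’s own code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)


def group_wer(samples: list[tuple[str, str]]) -> float:
    """Average WER over (reference transcript, ASR output) pairs for one speaker group."""
    return sum(word_error_rate(ref, hyp) for ref, hyp in samples) / len(samples)


# Hypothetical illustration: compare average error rates across two speaker groups.
group_a = [("she is going to the store", "she is going to the store")]
group_b = [("she finna go to the store", "she find a go to store")]
print(group_wer(group_a), group_wer(group_b))
```

A system that transcribes one group accurately but garbles another produces exactly the kind of gap the study measured; auditing WER per demographic group, rather than only in aggregate, is what surfaces it.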
Tamar Huggins’ work with Spark Plug exemplifies how targeted efforts to address AI biases can lead to more equitable outcomes. It highlights the broader importance of cultural sensitivity in AI development.