AI Doesn’t Care About Black People

Chu · Nov 5, 2023

A screen grab depicting the moment Kanye West deviated from the script during a Hurricane Katrina relief telethon, next to a bewildered Mike Myers, to express his discontent with the government’s response to the crisis. Image used under fair use for educational and commentary purposes.

In 2005, Kanye West’s impromptu statement, “George Bush doesn’t care about black people,” during a Hurricane Katrina fundraiser, captured a raw sentiment of systemic neglect towards Black Americans. Nearly two decades later, the phrase “AI doesn’t care about black people” satirically echoes West’s words, underscoring a new, yet distressingly familiar form of indifference — this time in the realm of artificial intelligence.

This title, a satirical riff on West's infamous words, cuts through the techno-optimism that often clouds discussions of AI. It serves as a stark reminder that technology is not immune to the biases and oversights that plague our societies. We'll unpack this idea through a personal experience with the GIO AI App, one that lays bare an uncomfortable truth about racial bias in AI, particularly within facial recognition software.

As we employ satire to shed light on serious issues, we embark on a journey that is as much about technology as it is about the people it fails to recognize and represent.

The GIO AI App Experience: A Reflection of Systemic Bias

When I downloaded the GIO AI App, my anticipation was high: the app promised to transform my everyday photos into professional-quality headshots. Its description advertised a seamless transformation, a simple way for anyone to elevate their digital presence with a few taps on a screen. The reality that unfolded over three hours and countless attempts told a different story. Despite the advertised ease and supposed sophistication of the app's AI, it failed to produce a single satisfactory headshot that accurately represented the texture and style of Black hair.

The repeated failures were not just a test of patience but also a clear indication of the app’s limitations when it came to recognizing and adapting to the diversity of Black features. The technology’s shortfall became particularly stark when, in search of a fair comparison, I turned to a non-melanated friend — a white man — to test the app’s capabilities with his image. Remarkably, the app required just one attempt to deliver the perfect headshot for him, highlighting the systemic bias ingrained within its algorithms — a bias that reflects a broader industry-wide issue of underrepresentation and overlooked diversity in tech.

Side-by-side images showing the differing results of the GIO AI app when processing photos of me, an individual of African descent, versus a white friend, highlighting the disparity in the app’s performance. Image used under fair use for educational and commentary purposes.

AI and Facial Recognition: The Minority Dilemma

The challenges with the GIO AI app are a microcosm of a pervasive issue affecting the broader landscape of AI and facial recognition technology. These systems are notoriously unreliable when it comes to identifying and processing images of individuals from minority groups. This deficiency stems from a fundamental flaw in the development process: a homogenous pool of creators and trainers.

Facial recognition technology relies heavily on machine learning algorithms, which in turn depend on vast datasets to “learn” from. However, when these datasets are predominantly composed of faces and features from a non-diverse population, the technology develops a narrow “understanding” of human diversity. This leads to a high rate of inaccuracies, such as the inability to recognize non-white features with the same precision as white features, or the failure to accurately render hairstyles and textures that are common among Black individuals.
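To make that mechanism concrete, here is a minimal sketch in Python using entirely synthetic data. It is not the GIO app's model or any real vendor's pipeline; the fake "embeddings," group centers, and 95/5 training split are illustrative assumptions chosen to isolate the effect of imbalance:

```python
# A synthetic demonstration, not the GIO app's model or any vendor's
# pipeline. All numbers here (group centers, sample sizes, the 95/5
# training split) are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, center):
    """Fake 'face embeddings' for one group: two classes
    (say, match vs. non-match) around a group-specific center."""
    X = rng.normal(center, 1.0, size=(n, 8))
    y = rng.integers(0, 2, size=n)
    X[y == 1] += 1.5  # the class signal is identical for both groups
    return X, y

# Training data: 95% group A, 5% group B; the imbalance is the point.
Xa, ya = make_group(1900, center=0.0)
Xb, yb = make_group(100, center=3.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced test sets expose the gap the skewed training set created.
for name, center in [("group A", 0.0), ("group B", 3.0)]:
    X_test, y_test = make_group(2000, center)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```

Run as written, the model scores in the high nineties for the majority group and near coin-flip for the minority group, even though the two groups' classes are equally separable. The only difference is how often the model saw each group during training, which is precisely the dynamic described above.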

The repercussions of these inaccuracies are far-reaching and can be severe, affecting everything from personal technology use to critical applications in law enforcement and judicial systems. There have been numerous instances where faulty facial recognition has led to wrongful arrests and misidentification, disproportionately impacting people of color. These are not simply cases of technological glitches but are indicative of systemic biases that have been encoded into the very fabric of these AI systems.

Studies have consistently shown that these technologies perform poorly when analyzing the features of people of color, especially Black individuals. For example, a 2019 study by the National Institute of Standards and Technology (NIST) found that many facial recognition algorithms falsely matched African American and Asian faces 10 to 100 times more often than white faces, underscoring the racial disparities inherent in these AI models.

Excerpt from the National Institute of Standards and Technology (NIST) study evaluating the effects of race, age, and sex on face recognition software, indicating significant disparities in accuracy. Image used under fair use for educational and commentary purposes.

Tech’s Diversity Gap: A Closer Look

The diversity gap in technology isn’t just a buzzword; it’s a quantifiable disparity with profound implications for AI and its applications. This gap is not merely about the numbers; it’s about whose perspectives are shaping the future of technology. The underrepresentation of Black, Latino/a, and female voices in tech is not simply a missed opportunity for inclusivity — it’s a foundational flaw that sets the stage for systemic biases.

This gap begins in education, where computer science and STEM fields see significantly lower enrollment from women and minority students, and extends into the workplace, where the numbers continue to dwindle. The statistics are stark: while women make up nearly half of the workforce, they hold less than a quarter of tech jobs. Similarly, Black and Latino/a workers are grossly underrepresented in tech roles compared to their presence in the overall workforce. This lack of diversity isn’t just a problem for the groups excluded; it’s a problem for the technology itself.

Screen grab of Google’s hiring data showing the demographic composition of their tech workforce, sourced from Genevieve Carlton, Ph.D.’s article on the persistence of the tech diversity gap. Image used under fair use for educational and commentary purposes.

The consequences of a homogenous tech landscape are technologies that are inherently biased. When AI is trained on data sets that lack diversity, it learns to perform well only for the majority group. This means that facial recognition software, for example, is less accurate for women and people of color — not due to any innate complexity in their features, but simply because the AI hasn’t been exposed to them as frequently.

The disparity in tech also has economic implications. The wage gap in STEM fields is significant, with Black and Latino/a workers earning considerably less than their white and Asian counterparts. This economic imbalance reinforces the cycle of underrepresentation, as it becomes even more challenging for minorities to enter and remain in the tech field without equitable compensation.

The Consequences of a Homogenous Tech Landscape

The ripple effects of a predominantly white and male tech sector extend far beyond Silicon Valley. This homogeneity has profound consequences for the development and deployment of AI across various industries, from healthcare to law enforcement, and even to the seemingly innocuous world of social media filters. AI developed in this bubble can perpetuate and amplify biases, leading to systemic discrimination.

For instance, as noted earlier, when facial recognition is used in law enforcement, the stakes are incredibly high. Misidentification can lead to false accusations and wrongful arrests, disproportionately affecting minority communities. The error rates in facial recognition for people of color, particularly Black individuals, are significantly higher than for white individuals. This isn't just a hypothetical issue; there have been documented cases where flawed facial recognition has led to the arrest of innocent people.

In the healthcare sector, AI tools are increasingly used to inform decisions about patient care. If these tools are trained on data that isn’t representative of the diverse patient population, they can provide suboptimal or even harmful recommendations for treatments for minority patients. This can exacerbate existing health disparities and lead to poorer health outcomes for these communities.

Within the realm of consumer technology, the lack of diversity in AI development teams is why apps like GIO fail to serve a significant portion of their potential user base adequately. It’s not just about a headshot; it’s about the message it sends when a Black individual cannot use a service with the same ease and satisfaction as a white individual. It’s a clear signal that they were not considered during the design process, that their features are not the ‘norm’ that the AI was built to recognize.

Closing the Gap

The disparities laid bare by instances of AI bias are not inevitable. They are the result of choices made at various levels of the tech industry — choices that can be revisited and revised. To close the gap, a concerted effort across the board is essential.

Tech companies must scrutinize their data sets, development teams, and training processes to ensure they reflect the diversity of the global population they serve. This means actively recruiting and retaining employees from a range of backgrounds, ensuring that minority voices are not only heard but are influential in decision-making.
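As a sketch of what that dataset scrutiny could look like in practice, the snippet below compares a training set's demographic makeup against target shares and flags underrepresented groups. Everything in it, from the metadata file and column name to the group labels and target shares, is a hypothetical placeholder rather than any company's actual process:

```python
# A hypothetical dataset audit. The metadata file, its column name,
# the group labels, and the target shares are all assumptions; a real
# audit would use census or user-population figures for the product.
import csv
from collections import Counter

TARGET_SHARES = {"group_a": 0.60, "group_b": 0.20, "group_c": 0.20}

def dataset_shares(metadata_path, group_col="self_reported_group"):
    """Compute each group's share of rows in a dataset metadata CSV."""
    with open(metadata_path, newline="") as f:
        counts = Counter(row[group_col] for row in csv.DictReader(f))
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def flag_gaps(shares, target=TARGET_SHARES, tolerance=0.05):
    """Return groups underrepresented by more than `tolerance`."""
    return {
        group: {"have": shares.get(group, 0.0), "want": want}
        for group, want in target.items()
        if want - shares.get(group, 0.0) > tolerance
    }

# Usage, assuming a hypothetical train_metadata.csv:
# gaps = flag_gaps(dataset_shares("train_metadata.csv"))
# if gaps:
#     print("Underrepresented groups:", gaps)
```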

In the educational realm, it's crucial to support and encourage students from underrepresented groups to pursue STEM fields. This can be achieved by providing scholarships and mentorship programs, and by fostering an inclusive environment that counters the stereotypes and barriers that currently drive high attrition rates.

For AI to be truly equitable, it must be audited and regulated. This involves establishing clear guidelines for transparency in AI algorithms, promoting open-source development to reduce the likelihood of hidden biases, and implementing rigorous testing across diverse demographic groups before deployment.
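One concrete piece of that rigorous testing is disaggregated evaluation: computing error rates per demographic group instead of a single headline accuracy. Here is a minimal sketch, assuming you already have per-example predictions, ground-truth labels, and a self-reported group label for a test set; the column names and the five-point gap threshold are illustrative, not an established regulatory standard:

```python
# A minimal disaggregated-evaluation sketch. It assumes a DataFrame of
# test results with ground-truth labels (1 = match), model predictions,
# and a self-reported demographic group per row; the column names and
# the 5-point gap threshold below are illustrative, not a standard.
import pandas as pd

def audit_by_group(df, group_col="group", label_col="label", pred_col="pred"):
    """Report accuracy and false positive rate for each group."""
    rows = []
    for group, g in df.groupby(group_col):
        negatives = g[g[label_col] == 0]
        rows.append({
            "group": group,
            "n": len(g),
            "accuracy": (g[pred_col] == g[label_col]).mean(),
            # False positives are the stakes in surveillance uses:
            # a non-match wrongly flagged as a match.
            "fpr": (negatives[pred_col] == 1).mean()
                   if len(negatives) else float("nan"),
        })
    return pd.DataFrame(rows)

# Usage: block deployment when the worst group's false positive rate
# exceeds the best group's by more than 5 percentage points.
# report = audit_by_group(test_results)
# if report["fpr"].max() - report["fpr"].min() > 0.05:
#     print("Fails the disaggregated audit; do not deploy.")
```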

Successful diversity initiatives within tech companies that have made a tangible difference illustrate that action pays off. Bringing attention to organizations working to bridge the digital divide can serve as both an inspiration and a guide for others in the industry.

Policymakers also play a role in mandating fairness in AI. Legislation that addresses AI accountability and ethical standards can push companies to prioritize these issues. Furthermore, engaging community leaders in the conversation ensures that the solutions developed are grounded in the needs of those most affected by AI bias.

Finally, a change in corporate culture is indispensable. This involves redefining what leadership looks like, valuing diverse perspectives, and acknowledging the creative and competitive edge that diversity brings to problem-solving and innovation.

Beyond the Code

The narrative of the GIO AI App’s shortcomings is a microcosm of a systemic issue that permeates the tech industry. It’s not just about a glitch in an app or an algorithmic oversight; it’s a reflection of the profound lack of representation in the spaces where technology is created and shaped.

The consequences of a homogenous tech landscape extend beyond inconvenient app performance to real-world repercussions for minority communities. When AI doesn’t recognize Black people accurately, it’s not just an app failing to stylize a portrait — it could mean misidentification in a surveillance system or discrimination in a hiring algorithm.

The call to action is clear: the tech industry needs to commit to diversity and inclusion at every level, from the classroom where the first spark of interest in technology is ignited, to the boardrooms where strategic decisions are made. This includes but is not limited to hiring practices, funding for STEM education in underrepresented communities, and public policies that hold companies accountable for the social impact of their technologies.

For a future where AI serves everyone equitably, the industry must look beyond the code and see the faces of those it has historically overlooked. This transformation requires the dismantling of systemic barriers and the cultivation of an environment where diversity is not just present but is thriving and driving innovation.

The hope is that by sharing these experiences and shedding light on the underlying issues, we can catalyze change towards a more inclusive and just technological future. Because AI should care about Black people, about all people, and it’s our collective responsibility to ensure that it does.

One question before I go: who the fuck is this supposed to be? Sammy Sosa... post-bleaching... "allegedly." Lol, let's do better, GIO AI.

Rayshaun “Chu” Smith

CEO & Founder, Rayshaun Smith Enterprises

Author of Breaking the Code: Thriving as Black Individuals in the Era of Artificial Intelligence

X @RSEChuOfficial
