Broader concerns in the AI industry regarding data worker treatment

13 Oct 2023

Amazon, Microsoft, and Meta are facing scrutiny from Democratic lawmakers over their reliance on "ghost work" in artificial intelligence (AI): largely invisible labor such as data labeling and response rating that is essential to building and improving AI systems.

Reporting from Bloomberg has shed light on the challenges these data workers face behind the scenes. They often handle demanding tasks under constant surveillance, receive low wages, and lack essential benefits. Much of their work involves screening potentially harmful chatbot responses under tight deadlines, pressure that can undermine the very safety measures the screening is meant to ensure. Insufficient training and supervision can also introduce bias into AI systems, raising concerns about the integrity of AI-driven applications.

The effort is led by Massachusetts Senator Ed Markey and Washington Representative Pramila Jayapal, who, along with a group of fellow lawmakers, sent a letter to the CEOs of nine companies, including Amazon, Alphabet, Meta Platforms, Microsoft, and International Business Machines (IBM). The letter emphasizes the need for transparency and accountability in how data workers are treated.

The lawmakers seek detailed information from these executives about their data workforces. They want to understand policies related to breaks, procedures for appealing suspensions, and access to mental health resources for workers exposed to distressing content. Their message is clear: tech companies must not exploit workers in their quest for AI innovation. This message has the support of influential figures like Massachusetts Senator Elizabeth Warren.

The scrutiny isn't limited to established tech giants; it also extends to newer AI-focused companies such as OpenAI, Inflection AI, Scale AI, and Anthropic, underscoring a broader concern about the treatment of data workers across the AI industry.

Recent reports reveal that U.S. companies heavily rely on subcontracted staff to develop AI products. These workers, often hired through external staffing agencies, don't enjoy the same benefits as the companies' direct employees. They handle tasks like content moderation and product quality assurance.

Specifically, generative AI tools, which produce responses to text prompts, depend on thousands of contract workers to train, fix, and enhance the algorithms that are then presented to customers as technological marvels. Yet many of these workers report being underpaid, stressed, and overworked, and some suffer trauma from having to filter disturbing images.

For instance, OpenAI reportedly paid workers in Kenya less than $2 per hour to filter harmful content from ChatGPT, a practice that has raised serious ethical and labor concerns.
