Amazon, Microsoft, and Meta are facing scrutiny from Democratic lawmakers over their reliance on "ghost work" in artificial intelligence (AI). This largely invisible labor covers crucial tasks such as data labeling and response rating, which are vital to advancing AI technologies.
Reports from Bloomberg have shed light on the challenges faced by these invisible data workers. They often tackle demanding tasks, endure constant surveillance, receive low wages, and lack essential benefits. Their work includes screening potentially harmful chatbot responses, a task that is both demanding and time-sensitive; pressure to work quickly can lead to cut corners and weakened safety measures. Moreover, insufficient training and supervision can introduce bias into AI systems, a significant concern for the integrity of AI-driven applications.
This scrutiny is led by Massachusetts Senator Ed Markey and Washington Representative Pramila Jayapal. Along with a group of fellow lawmakers, they have sent a letter to the CEOs of nine companies, including Amazon, Alphabet, Meta Platforms, Microsoft Corp., and International Business Machines Corp. In the letter, they emphasize the need for transparency and accountability in how data workers are treated.
The lawmakers seek detailed information from these executives about their data workforces. They want to understand policies related to breaks, procedures for appealing suspensions, and access to mental health resources for workers exposed to distressing content. Their message is clear: tech companies must not exploit workers in their quest for AI innovation. This message has the support of influential figures like Massachusetts Senator Elizabeth Warren.
This scrutiny isn't limited to established tech giants; it also extends to newer AI-focused companies like OpenAI Inc., Inflection AI, Scale AI Inc., and Anthropic. This highlights the broader concern about the treatment of data workers across the AI industry.
Recent reports reveal that U.S. companies heavily rely on subcontracted staff to develop AI products. These workers, often hired through external staffing agencies, don't enjoy the same benefits as the companies' direct employees. They handle tasks like content moderation and product quality assurance.
Specifically, generative AI tools, which produce responses to text prompts, depend on thousands of contract workers to train, correct, and enhance the underlying algorithms, which are then presented to customers as technological marvels. Yet many of these workers report being underpaid, stressed, and overworked. Some even suffer trauma from having to filter disturbing images.
For instance, OpenAI paid workers in Kenya less than $2 per hour to filter harmful content from ChatGPT. Such practices have raised serious ethical and labor concerns.