
Kenyan AI Moderators Who Worked For $2 Per Hour Say It Took A Huge Toll On Their Health

‘It’s Destroyed Me Completely’: Kenyan Moderators Decry Toll Of Training AI Models

Behind the scenes of the booming artificial intelligence revolution lies a group of unsung heroes – content moderators who tirelessly review and label data to train AI algorithms. However, the recent petition filed by former ChatGPT moderators in Nairobi, Kenya, sheds light on the dark side of this crucial role.

In a quest for justice, former content moderator Mophat Okinyi, along with three other moderators, has come forward to challenge the exploitative conditions they endured while working on OpenAI’s ChatGPT. The moderators claim that the traumatic nature of the content they reviewed, coupled with low pay and inadequate support, severely impacted their mental health.

The moderators were employed by Sama, a California-based data annotation services company contracted by OpenAI for content review. Their tasks involved reviewing texts and images, including graphic scenes of violence, self-harm, and sexual abuse. The petitioners argue that they were not adequately warned about the nature of the content and were not given sufficient psychological support. Additionally, they were paid as little as $1.46 to $3.74 per hour.

Sama, for its part, maintains that moderators had access to licensed mental health therapists and could claim reimbursement through medical benefits. Even so, when the contract with OpenAI was terminated early, the moderators faced financial difficulties while still dealing with severe trauma.

The data annotation industry, in which large language models are fed labeled examples of hate speech, violence, and abuse to train their algorithms, is growing rapidly: the data collection and labeling market is expected to exceed $14 billion by 2030. Much of this work is carried out in regions such as East Africa, India, and the Philippines, where workers accept the job at far lower wages.

In response to the petition, Cori Crider, director of Foxglove, a legal NGO supporting the case, emphasized the responsibility of tech companies like OpenAI to address the working conditions of content moderators. She argues that outsourcing is a tactic tech companies use to distance themselves from the conditions moderators face.

In a groundbreaking ruling in Kenya, a court declared tech giants the “true employers” of these unsung workers, acknowledging their essential role in shaping the AI industry. Yet questions remain: OpenAI’s content moderation outsourcing is still shrouded in secrecy, raising concerns about transparency and accountability.

Despite the darkness, there is hope. Okinyi sees himself as a digital soldier, shielding users from harm, and his resilience endures even as he carries lasting wounds.

As the battle for recognition rages on, these brave moderators continue to emerge from the shadows, unmasking the truth and demanding their rightful place in the AI industry.
