- Moderators who reviewed content for OpenAI’s ChatGPT say they were forced to read pages and pages of disturbing text.
- The Guardian reported that a former Sama employee would read 700 text passages a day.
- The Guardian reported that the content frequently depicted graphic scenes of violence, abuse, or animal cruelty.
In a recent report from The Guardian, Kenyan moderators who reviewed content for OpenAI’s ChatGPT said they were forced to read graphic text for poor pay and received limited mental health care.
The moderators worked for Sama, a California-based data-annotation company that contracts with OpenAI and also offers data-labeling services to Google and Microsoft. Sama ended its partnership with OpenAI in February 2022. Time previously reported that Sama cut ties in part over concerns about working with content that could be illegal for AI training.
Mophat Okinyi, a moderator who reviewed content for OpenAI, told The Guardian he would read 700 text passages a day. He said many of the passages described sexual assault and that the work made him paranoid, eventually taking a toll on his mental health and his relationship with his family.
Alex Kairu told the outlet that the things he witnessed on the job had “completely destroyed me.” He said his physical relationship with his wife deteriorated and that he became more introverted.
Representatives for OpenAI and Sama did not immediately respond to Insider’s request for comment. OpenAI did not provide a statement to The Guardian.
The Guardian reported that the moderators said the content under review depicted violent scenes, including child abuse, brutality, murder, and sexual abuse. A Sama spokesperson said workers were paid between $1.46 and $3.74 per hour. Time previously reported that the data labelers received less than $2 per hour for reviewing content for OpenAI.
The Guardian reported that four of the moderators have now called on the Kenyan government to investigate the working conditions during the contract between OpenAI and Sama.
The moderators say they did not receive adequate mental health support for their work, a claim Sama denies. The company told The Guardian that workers received medical benefits and 24/7 access to therapists.
A Sama spokesperson told the outlet that the company is “in agreement with those calling for fair and just job opportunities, as they align with our mission,” adding: “We believe that we will already be in compliance with any legislation that may be passed in this area.”