
Scale AI Faces US Labor Probe Over Pay Violations
WASHINGTON, D.C., March 7, 2025 - Scale AI, the artificial-intelligence data-labeling start-up backed by technology giants Nvidia, Amazon and Meta, is facing a federal investigation. According to Reuters, the U.S. Department of Labor launched the probe almost a year ago, examining the company's wage practices and working conditions. The investigation aims to determine whether Scale AI complies with federal labor laws, including those governing fair wages and the treatment of workers.
The San Francisco-based start-up, valued at $14 billion, plays a crucial role in the AI ecosystem by providing labeled data used to train machine-learning models, including OpenAI's ChatGPT. With major customers such as Microsoft and Morgan Stanley, Scale AI's data fuels advances in artificial intelligence. However, the company's labor practices have come under close scrutiny, raising concerns about the treatment of the data annotators who power these advanced technologies.
What happened?
The investigation into Scale AI's labor practices began under former President Joe Biden and has continued into 2025. The U.S. Department of Labor is examining whether Scale AI adequately compensates its workers, including the independent contractors who label data used to train AI models. According to SiliconANGLE, the probe focuses on possible violations of the Fair Labor Standards Act, which sets standards for fair wages and working conditions.
Over the past year, Scale AI has actively cooperated with the Department of Labor, providing information on its business model and the rapid evolution of the AI industry. A company spokesperson defended Scale AI's practices, saying: “The feedback we receive from contributors is extremely positive, and we have dedicated teams to ensure that people are paid fairly and feel supported.” The company also states that almost all payments are made on time and that 90% of payment inquiries are resolved within three days.
Scale AI's role in the AI industry
Founded in 2016, Scale AI specializes in providing high-quality labeled data for AI training. Its platform enables researchers and data annotators in more than 9,000 cities to contribute to AI development. The company's data-labeling services are essential to improving AI models used in autonomous vehicles, finance, healthcare and natural-language processing.
With growing demand for AI-based applications, Scale AI's services have become indispensable for companies striving to maintain a competitive edge in artificial intelligence. Technology giants such as Nvidia, Amazon and Meta rely heavily on Scale AI's data to improve their AI models. If the labor probe disrupts the business, the effects could ripple across the entire AI industry.
Legal troubles and worker complaints
The investigation is not Scale AI's first legal challenge. In December 2024, a former employee filed a lawsuit against the company alleging wage theft and worker misclassification. In early 2025, additional claims followed, with contractors alleging they suffered psychological distress from exposure to disturbing content, including graphic images depicting violence and child abuse.
As TipRanks pointed out, these claims have raised concerns about the ethical implications of AI data-labeling work. Many annotators perform content-moderation tasks, such as identifying harmful material in online videos and social-media posts. Critics argue that these workers' compensation is not proportional to the psychological toll of the job.
How could the investigation affect the AI industry?
The Department of Labor's investigation could significantly affect Scale AI and its backers. Nvidia, Amazon and Meta have invested heavily in AI development, relying on Scale AI's data to train their models. If the company is found to have violated labor law, it could face fines, operational disruptions and reputational damage.
In addition, the case could set a precedent for how labor regulation applies to AI companies. As AI continues to expand, regulators around the world are paying closer attention to the working conditions of data annotators, gig workers and content moderators. If stricter labor laws are enforced, artificial-intelligence firms may need to rethink their business models to ensure fair treatment of their workforces.
What's next for Scale AI?
Despite its legal challenges, Scale AI remains a dominant force in the AI industry. The company says it is committed to fair labor practices while continuing to provide high-quality data for AI training. However, as regulatory scrutiny intensifies, more transparent and equitable labor policies may be needed for Scale AI and similar start-ups to maintain their credibility.
For investors, the situation presents both risks and opportunities. Although the labor probe raises concerns about Scale AI's operational stability, the company's strong backing from major technology firms suggests continued confidence in its long-term growth. As the AI sector evolves, companies that balance innovation with ethical labor practices are likely to emerge as industry leaders.