We should all be worried about AI infiltrating crowdsourced work

A new paper from researchers at Swiss university EPFL suggests that between 33% and 46% of distributed crowd workers on Amazon’s Mechanical Turk service appear to have “cheated” on a particular task assigned to them by using tools such as ChatGPT to do some of the work. If that practice is widespread, it may turn out to be a pretty serious issue.

Amazon’s Mechanical Turk has long been a refuge for frustrated developers who want to get work done by humans. In a nutshell, it’s an application programming interface (API) that feeds tasks to humans, who do them and then return the results. These tasks are usually the kind that you wish computers would be better at. Per Amazon, an example of such tasks would be: “Drawing bounding boxes to build high-quality datasets for computer vision models, where the task might be too ambiguous for a purely mechanical solution and too vast for even a large team of human experts.”
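To make that concrete, here is a rough sketch of what the requester side looks like in code, using the boto3 MTurk client against Amazon’s sandbox endpoint. The task title, reward, timings and question file are placeholder values of my own, not anything taken from Amazon or the paper.

```python
import boto3

# Talk to the Mechanical Turk requester API. The sandbox endpoint is used here
# so the sketch can be tried without paying real workers real money.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A HIT ("Human Intelligence Task") bundles the instructions and form a worker
# sees. The Question parameter takes MTurk's own QuestionForm/HTMLQuestion XML;
# here it's loaded from a local file to keep the sketch short.
hit = mturk.create_hit(
    Title="Summarize a research abstract",            # placeholder task
    Description="Read a short abstract and write a ~100-word summary.",
    Reward="1.00",                                     # paid per assignment, in USD
    MaxAssignments=1,                                  # how many workers get the task
    AssignmentDurationInSeconds=1800,                  # time each worker has to finish
    LifetimeInSeconds=86400,                           # how long the HIT stays listed
    Question=open("question_form.xml").read(),
)
print("Created HIT:", hit["HIT"]["HITId"])

# Once workers submit, the results come back through the same API.
assignments = mturk.list_assignments_for_hit(HITId=hit["HIT"]["HITId"])
for a in assignments["Assignments"]:
    print(a["WorkerId"], a["Answer"])  # Answer is an XML blob of the submitted form fields
```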

Data scientists treat datasets differently depending on whether they were generated by people or by a large language model (LLM). But the problem with Mechanical Turk is worse than it sounds: AI is now cheap enough that product managers who choose Mechanical Turk over a machine-generated solution are doing so precisely because they’re betting humans will do the job better than machines. Poisoning that well of data could have serious repercussions.

“Distinguishing LLMs from human-generated text is difficult for both machine learning models and humans alike,” the researchers said. So they devised a methodology for figuring out whether a given piece of text was written by a human or a machine.
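Their actual pipeline is more involved than anything I’d squeeze into a blog post, but the basic shape of such a detector is easy to picture: gather summaries whose origin you already know, train a classifier on them, then score new submissions. The sketch below uses scikit-learn with TF-IDF features and toy placeholder data purely as an illustration; it is not the authors’ method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: summaries whose origin is already known. In practice these
# might be pre-ChatGPT human submissions vs. summaries generated by prompting an
# LLM with the same abstracts.
human_texts = ["placeholder human-written summary one", "placeholder human-written summary two"]
llm_texts = ["placeholder model-generated summary one", "placeholder model-generated summary two"]

X = human_texts + llm_texts
y = [0] * len(human_texts) + [1] * len(llm_texts)  # 0 = human, 1 = LLM

# Word n-gram frequencies plus a linear classifier: a crude but common baseline
# for "who wrote this?" style questions.
detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
detector.fit(X, y)

# Score an incoming crowd submission: estimated probability it was machine-generated.
new_submission = ["another placeholder summary submitted by a worker"]
print(detector.predict_proba(new_submission)[0][1])
```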

The test involved asking crowd workers to condense research abstracts from the New England Journal of Medicine into 100-word summaries. It is worth noting that this is precisely the kind of task that generative AI technologies such as ChatGPT are good at.

A screenshot of the instructions the researchers gave the human crowd workers. Image Credits: EPFL
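And to see just how tempting the shortcut is, consider how little effort it takes to offload the whole assignment to a model. The snippet below uses OpenAI’s Python SDK; the model name and prompt are my own illustrative choices, not a reconstruction of what any worker actually did.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder: the abstract text the worker was asked to condense.
abstract = "PASTE THE ABSTRACT TEXT HERE"

# One short prompt covers the entire task.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": f"Summarize the following research abstract in about 100 words:\n\n{abstract}",
    }],
)
print(response.choices[0].message.content)
```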

That said, there is a valid use case here: Imagine you want to test your own LLM against humans to see how similar, or how good, your model really is. If you expect to test it against a large dataset produced by humans but instead receive one made by other LLMs of unknown origin and quality, it’s going to be hard to trust the results, let alone train your bots on them. Training AI on machine-generated text is a recipe for disaster for many reasons, including amplifying biases and “confirming” spurious data.

The researchers argue that using LLMs to do crowdsourced work “would severely diminish the utility of crowdsourced data because the data would no longer be the intended human gold standard, but also because one could prompt LLMs directly (and likely more cheaply) instead of paying crowd workers to do so (likely without disclosing it).”

I know, we’re close to arguing about late-stage capitalism here. Of course minimum-wage data-entry workers are going to use all the tools they have to complete their (often boring and repetitive) tasks as effectively as possible. As the paper’s authors point out, “crowd workers have financial incentives to use LLMs to increase their productivity and income.”

On one hand, it’s not unusual for workers of all stripes to lean on whatever tools help them get the job done. If you type faster on a Dvorak keyboard than a QWERTY keyboard, more power to you.

On the other hand, the research highlights some of the very serious challenges with machine learning training datasets. The old computing adage of “garbage in, garbage out” still stands. If you can’t trust the training data, you can’t trust the output.