People have been using ChatGPT to help them do their jobs since it was released in November of last year, with enthusiastic adopters using it to help them write everything from marketing materials to emails to reports.
Now we have the first indication of its effect in the workplace. A new study by two MIT economics graduate students, published today in Science, suggests it could help narrow gaps in writing ability between employees: less experienced workers with weaker writing skills were able to produce work similar in quality to that of more skilled colleagues.
Shakked Noy and Whitney Zhang recruited 453 college-educated professionals, including marketers and data analysts, and asked each of them to complete two writing tasks of the kind they’d normally undertake as part of their jobs, such as press releases, short reports, or analysis plans. Half were given the option of using ChatGPT to help them complete the second of the two tasks.
A group of other professionals then quality-checked the results, grading the writing on a scale of 1 to 7, with 7 the best. Each piece of work was evaluated by three people working in the same professions, hired through the research platform Prolific.
The writers who chose to use ChatGPT took 40% less time to complete their tasks, and produced work that the assessors scored 18% higher in quality than that of the participants who didn’t use it. Those who were already skilled writers mainly saved time, while those assessed as weaker writers produced higher-quality work once they gained access to the chatbot.
“ChatGPT is just very good at producing this kind of written content, and so using it to automate parts of the writing process seems likely to save a lot of time,” says Noy, lead author of the research.
“One thing that’s clear is that this is very useful for white-collar work—a lot of people will be using it, and it’s going to have a pretty big effect on how white-collar work is structured,” he adds.
However, the output of ChatGPT and other generative AI models is far from reliable. ChatGPT is adept at presenting false information as if it were fact, meaning that although workers may be able to use it to produce more work, they also run the risk of introducing errors.
Depending on the nature of a person’s job, those kinds of inaccuracies could have serious implications. Lawyer Steven Schwartz was fined $5,000 by a judge last month for using ChatGPT to produce a legal brief that contained false judicial opinions and legal citations.
“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” the judge, Kevin Castel, wrote. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
The research hints at how AI could be helpful in the workplace by acting as a sort of virtual assistant, says Riku Arakawa, a researcher at Carnegie Mellon University who studies workers’ use of large language models and was not involved with the research.
“I think this is a really interesting result that demonstrates how human-AI cooperation works really well in this kind of task. When a human leverages AI to refine their output, they can produce better content,” he adds.