The makers of ChatGPT are trying to curb its reputation as a free-wheeling cheating machine with a new tool that helps teachers detect whether homework has been written by students or artificial intelligence.
A new AI Text Classifier launched by OpenAI on Tuesday follows weeks of debate in schools and colleges over concerns that ChatGPT’s ability to write almost anything on command could foster academic fraud and impede learning.
OpenAI warns that its new tool, like others already available, is not foolproof. Methods for detecting AI-written text are “imperfect and sometimes wrong,” said Jan Leike, head of OpenAI’s alignment team, which is tasked with making its systems safer.
“So you shouldn’t rely solely on it when making decisions,” Leike said.
Teens and college students were among the millions who began experimenting with ChatGPT after it was released as a free application on the OpenAI website on November 30. And while many found creative and harmless ways to use it, the ease with which it can answer take-home test questions or help with other assignments has caused panic among some educators.
By the time schools opened for the new year, New York City, Los Angeles, and other large public school districts had begun blocking its use on classroom and school devices.
Post-secondary educators and students said the new AI software, which can pump out complete essays from nothing more than a writing prompt, brings not only challenges but also opportunities to improve human skills.
Seattle Public Schools initially blocked ChatGPT on all school devices in December, but has since made it accessible to educators who want to use it as a teaching tool, said district spokesman Tim Robinson.
“We can’t ignore that,” said Robinson.
The district is also discussing expanding the use of ChatGPT into classrooms, so that teachers can use it to train students to be better critical thinkers and students can use the application as a kind of tutor or as a way to generate new ideas while working on an assignment, Robinson said.
School districts across the country say the conversation around ChatGPT is evolving rapidly.
“The first reaction was, ‘OMG, how can we stop all the cheating that will happen with ChatGPT?’” Page said. But blocking the tool, he said, is not the solution.
“We would be naive if we weren’t aware of the dangers this tool poses, but we would also fail to serve our students if we banned it,” Page said. He believes districts like his own will eventually unblock ChatGPT, especially once the company’s detection service is in place.
OpenAI highlighted the limitations of its detection tool in Tuesday’s blog post, but said that in addition to deterring plagiarism, it could help detect automated disinformation campaigns and other misuses of AI to mimic humans.
The longer the passage of text, the better the tool is at detecting whether an AI or a human wrote it. Type in any text — a college admissions essay, or a literary analysis of Ralph Ellison’s “Invisible Man” — and the tool will label it as “very unlikely, unlikely, unclear, possibly, or likely” AI-generated.
But much like ChatGPT itself — which was trained on a huge trove of digitized books, newspapers, and online writings, yet often confidently spews falsehoods and nonsense — it is not easy to interpret how the tool arrives at its conclusions.
“We basically don’t know what patterns it pays attention to, or how it works under the hood,” Leike said. “There’s really not much we can say at this point about how the classifier actually works.”
Higher education institutions around the world have also begun debating the responsible use of AI technology. Sciences Po, one of France’s most prestigious universities, prohibited its use last week and warned that anyone found surreptitiously using ChatGPT and other AI tools to produce written or oral work could be banned from Sciences Po and other institutions.
In response to the backlash, OpenAI said it has been working for several weeks to create new guidelines to help educators.
“Like many other technologies, one school district may decide it is not appropriate for its classrooms,” said Lama Ahmad, a policy researcher at OpenAI. “We’re not trying to force anything on them. We just want to give them the information they need to make the right decisions for themselves.”
It is an unusually public role for the research-oriented San Francisco start-up, which is now backed by billions of dollars in investment from its partner Microsoft and faces growing interest from the public and governments alike.
French Digital Economy Minister Jean-Noël Barrot recently met with OpenAI executives, including CEO Sam Altman, in California, and a week later told an audience at the World Economic Forum in Davos, Switzerland, that he was optimistic about the technology. But the minister, a former professor at the Massachusetts Institute of Technology and the French business school HEC in Paris, said there are also difficult ethical questions that need to be addressed.
“So if you are in law school, there is room for concern, because ChatGPT, among other tools, can clearly deliver relatively impressive exams,” he said. “If you are in a graduate-level economics department, then it’s fine, because ChatGPT will struggle to find or deliver what is expected there.”
He said it will become increasingly important for users to understand the basics of how these systems work and to know what biases may exist.