Image source: Pexels
ChatGPT’s halo has become too bright
ChatGPT has been in the limelight since its launch: OpenAI’s new AI chatbot can quickly generate articles, stories, lyrics, and even code in response to user prompts.
It went viral at launch thanks to the breadth and fluency of its answers, gaining millions of users almost overnight. It also helped OpenAI land a new $10 billion investment from Microsoft, lifting OpenAI’s valuation to $29 billion. For comparison, Google’s outright acquisition of DeepMind cost only about $600 million.
Over the past month or so, ChatGPT seems to have moved past the stage of being merely mocked by users and begun to show its real potential, and similar AI tools are starting to see industry adoption.
News sites: A warm welcome
One of the most discussed stories in Silicon Valley over the past two days: the digital media site BuzzFeed saw its stock price triple on the strength of ChatGPT, or rather OpenAI’s halo.
The trigger was simply BuzzFeed’s announcement that it would use an AI API provided by OpenAI — not ChatGPT itself, as some outlets misreported — to help create some of its content.
In a memo, BuzzFeed CEO Jonah Peretti said, “In 2023, you’ll see us take AI content that is still in the development stage and make it part of our core business: enhancing the quiz experience, informing our brainstorming, and personalizing content for our audience.”
In contrast to conventional news sites, youth-oriented BuzzFeed is known for its web quizzes, such as “Which Disney princess are you?” and “Which Avengers superhero would be the best boyfriend for you?”
The partnership with OpenAI will focus on producing exactly this kind of “fast food” content. Specifically, BuzzFeed will use OpenAI’s artificial intelligence technology to generate quiz questions for the site and to help editors brainstorm better ideas.
“What needs to be clear is that we see breakthroughs in AI opening up a new era of creativity that will enable humans to harness creativity in new ways, creating endless opportunities and applications,” Peretti said. “In publishing, AI can benefit content creators and audiences by inspiring new ideas and inviting audience members to co-create personalized content.”
And whether or not readers are actually willing to pay for the fun quizzes created by AI, the news of this partnership is enough to bring BuzzFeed back from the dead.
Since going public via SPAC in December 2021, BuzzFeed’s stock had fallen more than 90 percent, its third-quarter net loss widened to $27 million from $3.6 million a year earlier, and it had to cut about 12 percent of its workforce to control costs. But once news of the OpenAI tie-up hit, the stock jumped more than 300%.
And BuzzFeed’s next partnership with Meta will likely bring that AI-generated content to a wider audience. Not long ago, Meta paid BuzzFeed millions of dollars to generate content for Meta’s platform and to train creators on the platform. That also means you’ll probably be able to play a lot of AI-generated mindless quizzes on Facebook and Instagram next.
However, a spokesperson said that BuzzFeed will not be using AI to help write news stories at this time. This decision may have something to do with another media outlet using AI to create content that flopped miserably not long ago.
When it comes to using artificial intelligence to write news, CNET went further than BuzzFeed, and was also the first to taste the bitter fruit.
According to CNET, as part of a “test” program on the CNET Money team, the editorial staff began using an internally developed AI engine in November 2022 to generate 77 news stories, about 1 percent of the site’s total articles. The articles, uniformly bylined “CNET Money Staff,” were meant to help editors create a set of basic explainers around financial services topics. AI-written pieces included “Do Home Equity Loans Affect Private Mortgage Insurance?” and “How to close a bank account,” among others.
“The editors first generated outlines for the stories, then expanded, added to and edited the AI drafts before publishing,” wrote CNET editor-in-chief Connie Guglielmo.
Soon, however, the CNET Money editorial team discovered that one of the articles was inaccurate, so they conducted a full review. The upshot: a small number of the AI-generated articles needed substantial corrections, while others had minor problems such as incomplete company names, ambiguous language, or numerical errors.
For example, at the end of an article titled “What is compound interest?”, the AI gave some badly inaccurate personal finance advice: “An earlier version of this article suggested that if a saver put $10,000 into a savings account earning 3% interest compounded annually, they would earn $10,300 after one year.” In fact, as anyone who has studied elementary school math knows, the saver earns only $300 in interest; $10,300 is the total balance.
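The arithmetic the AI muddled is a one-liner. A minimal sketch of the calculation, using the figures from the CNET example ($10,000 principal, 3% compounded annually):

```python
# Check the compound-interest arithmetic from the CNET example:
# $10,000 at 3% interest, compounded annually, for one year.
principal = 10_000
rate = 0.03
years = 1

balance = principal * (1 + rate) ** years   # total in the account
interest_earned = balance - principal       # what the saver actually earns

print(f"Balance after one year: ${balance:,.2f}")
print(f"Interest earned:        ${interest_earned:,.2f}")
```

The confusion is between the ending balance ($10,300) and the interest earned ($300); the AI reported the former as the latter.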
Guglielmo did not specify how many of the 77 published stories needed correction, or how many of the issues were “substantive” versus “minor,” beyond the correction notes appended to the articles.
In the end, more than half of the stories contained factual errors or inappropriate quotes, so many that CNET has now stopped using the AI engine.
The use of AI to automate news reporting is not new, as the Associated Press began doing so nearly a decade ago, but the issue has gained new attention with the rise of ChatGPT. When AI is applied to content production on a large scale, how much plausible content gets mixed in with it?
Despite these problems, Guglielmo left the door open for a return to AI tools, saying it will start using AI news writing tools again once the problems are resolved.
Education and academia: Meeting the challenge
While ChatGPT-like AI tools are being boldly adopted in journalism, they face pushback in other writing scenarios, including one of the most popular yet most contested venues: schools.
To test ChatGPT’s exam-taking ability, professors at the University of Minnesota Law School recently had it sit the final exams for four courses and graded the results blind. After completing 95 multiple-choice questions and 12 essay questions, ChatGPT earned an average grade of C+: a low but passing mark in all four courses, “flying low over the passing line.”
ChatGPT fared even better on a Wharton School business-management exam, earning a B to B-. Wharton professor Christian Terwiesch said ChatGPT did “very well” on basic operations management and process-analysis questions, but performed poorly on more advanced prompts and made “surprising mistakes” in basic math, some at the level of elementary school arithmetic.
What does this mean? If left unchecked, ChatGPT will turn into the most powerful cheating tool ever created — helping students write homework assignments and even complete exam papers.
So, as the test results come in, more and more schools and teachers are expressing concern about ChatGPT’s ability to cheat. For example, public schools in New York City and Seattle have banned students and teachers from using ChatGPT on district networks and devices.
Professor Terwiesch also agrees that restrictions should be imposed on students during exams. “The ban is necessary,” he said. “After all, when you award a medical degree, you want the graduate to have actually mastered medicine, not just know how to use a chatbot. The same applies to other professional certifications, including law and business.”
But Terwiesch believes the technology will still end up in the classroom. “If we end up with the same education system as before, then we’re wasting the great opportunity that ChatGPT presents,” he said.
And in academia, ChatGPT has come under even harsher scrutiny.
Holden Thorp, editor-in-chief of the leading U.S. journal Science, announced an updated editorial policy that bans the use of text generated by ChatGPT and states that ChatGPT cannot be listed as a co-author.
Thorp noted that scientific journals require authors to sign a statement taking responsibility for their articles. “But because ChatGPT can’t do that, it can’t be an author.” He believes that using ChatGPT is problematic even at the paper-preparation stage. “ChatGPT makes a lot of mistakes, and those mistakes can go into the literature,” he said.
It’s not just Science; other publishers have made similar moves. Springer Nature, which publishes nearly 3,000 journals, also issued a statement saying ChatGPT cannot be listed as an author.
Perhaps the most severe response came from the programming Q&A platform Stack Overflow, which announced a blanket ban on ChatGPT-generated answers shortly after the chatbot’s launch and warned that users found violating the policy would be suspended.
This article is from WeChat: Silicon Star People (ID: guixingren123), by VickyXiao