According to NPR, OpenAI may face a lawsuit from The New York Times because the company allegedly used articles and pictures from The New York Times to train its artificial intelligence (AI) models, in violation of The New York Times' Terms of Service. If the lawsuit succeeds, OpenAI could suffer huge losses, including being forced to delete its datasets and pay heavy fines.
OpenAI’s ChatGPT is a powerful chatbot that has received a great deal of attention and use since its release. However, ChatGPT has also raised copyright concerns. For example, comedian and author Sarah Silverman and other writers have sued OpenAI to protect the copyright of their books.
Lawyers for The New York Times are considering whether a lawsuit against OpenAI is necessary to protect the intellectual property rights of its journalism, according to NPR. The New York Times is concerned that OpenAI could use its content to create a competitor, an AI system that can answer questions based on original reporting and writing. The New York Times updated its terms of service this month, prohibiting any use of its content to develop any software programs, including but not limited to training machine learning or AI systems.
NPR reported that a licensing agreement between The New York Times and OpenAI could have been possible, whereby OpenAI would pay to use content from The New York Times to train its models. But talks between the two sides have become “contentious,” with a licensing deal looking increasingly unlikely, and The New York Times appears to be weighing whether it is worth partnering on a product that could become its strongest competitor.
If The New York Times did file a lawsuit against OpenAI, OpenAI could mount a “fair use” defense, arguing that training its tools on publicly available web content is permitted under copyright law, but experts say that argument would be very challenging to win.
Last month, The Associated Press became one of the first news organizations to strike a licensing deal with OpenAI, though the terms of the agreement were not disclosed. The Associated Press reported today that it has joined other news organizations in setting standards for the use of AI in newsrooms, acknowledging that many news organizations “are concerned about their material being used by AI companies without permission or payment.” In April, the News Media Alliance published its AI Principles, an attempt to protect publishers’ intellectual property by insisting that generative AI “developers and deployers must negotiate with publishers for the right to use” publisher content, whether for training, for AI tools displaying information, or for AI tools synthesizing information.