OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.

In a blog post, OpenAI said the Times “is not telling the full story.” It took particular issue with claims that its ChatGPT AI tool reproduced Times stories verbatim, arguing that the Times had manipulated prompts to include regurgitated excerpts of articles. “Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” OpenAI said.

OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit. It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.

  • ricecake@sh.itjust.works · 1 year ago

    Well, machine learning algorithms do learn; it’s not just copy-paste and a thesaurus. It’s not exactly the same as people, but arguing that it’s entirely different is also wrong.
    It isn’t a big database full of copyrighted text.

    The argument is that it’s not wrong to look at data that was made publicly available when you’re not making a copy of the data.
    It’s not copyright infringement to navigate to a webpage in your browser, even though that makes your computer download the page, process all of its contents, render them to the screen, and hold onto that download for a finite but indefinite period of time, while you perform whatever operations you like on the downloaded data.
    You can even take notes on the data and keep those indefinitely, including using that derivative information to create your own similar works.
    The NYT explicitly publishes articles in a format designed to be downloaded and processed by a computer program, with information extracted from that download and then presented to a human. They just didn’t expect that the processing would end up looking like this.

    The argument doesn’t require that we accept that a human’s and a computer’s systems for learning be held to the same standard, or that we can’t differentiate between the two; it hinges on the claim that this is just an extension of what we already find it reasonable for a computer to do.
    We could certainly hold that generative AI is a different and new category for copyright law, but that’s very different from saying that their actions are unacceptable under current law.