https://youtu.be/KwpeuqT69fw

Researchers were able to get giant amounts of training data out of ChatGPT by simply asking it to repeat a word many times over, which causes the model to diverge and start spitting out memorized text.

Why does this happen? And how much of their training data do such models really memorize verbatim?

OUTLINE:

0:00 - Intro

8:05 - Extractable vs Discoverable Memorization

14:00 - Models leak more data than previously thought

20:25 - Some data is extractable but not discoverable

25:30 - Extracting data from closed models

30:45 - Poem poem poem

37:50 - Quantitative membership testing

40:30 - Exploring the ChatGPT exploit further

47:00 - Conclusion

Paper: https://arxiv.org/abs/2311.17035

  • fediverser@alien.top
    link
    fedilink
    English
    arrow-up
    1
    ·
    11 months ago

    This post is an automated archive from a submission made on /r/MachineLearning, powered by Fediverser software running on alien.top. Responses to this submission will not be seen by the original author until they claim ownership of their alien.top account. Please consider reaching out to them let them know about this post and help them migrate to Lemmy.

    Lemmy users: you are still very much encouraged to participate in the discussion. There are still many other subscribers on !machinelearning@academy.garden that can benefit from your contribution and join in the conversation.

    Reddit users: you can also join the fediverse right away by getting by visiting https://portal.alien.top. If you are looking for a Reddit alternative made for and by an independent community, check out Fediverser.

  • we_are_mammals@alien.topB
    link
    fedilink
    English
    arrow-up
    1
    ·
    11 months ago

    Can’t OpenAI simply check the output for sharing long substrings with the training data (perhaps probabilistically)?