• peanuts4life@beehaw.org
    1 year ago

    I’ve been using LLMs a lot. I use GPT-4 to help edit articles, answer nagging questions I can’t be bothered to research, and handle other random things, such as cooking advice.

    It’s fair to say, I believe, that all general-purpose LLMs like this are plagiarizing all of the time. Much in the way my friend Patrick doesn’t give me sources for all of his opinions, GPT-4 doesn’t tell me where it got its info on baked corn. The disadvantage of this is that I can’t trust it any more than I can trust Patrick. When it’s important, I ALWAYS double-check. The advantage is I don’t have to take the time to compare, contrast, and discover sources. It’s a trade-off.

    From my perspective, the theoretical advantage of Bing’s or Google’s implementation is ONLY that they provide you with sources. I actually use Bing’s implementation of GPT when I want a quick, real-world reference for an answer.

    Google will be making a big mistake by sidelining its sources when open-source LLMs are already overtaking Google’s Bard in quality. Why get questionable advice from Google when I can get slightly less questionable advice from GPT, my phone assistant, or actual inline citations from Bing?

  • jmp242@sopuli.xyz
    1 year ago

    That sounds like what “everyone” is doing. I know Neeva was doing that just before it was bought, I heard Bing is doing that, and Kagi is experimenting. Personally, I’ve been happy with a mix of Startpage and Kagi, but I only pay for Kagi because work covers it for me.

    That said, all search sucks now.