• ShiftyTys@alien.topB

That’s a very closed-minded response. It depends on your use case. If I’m trying to build a pre-screening model to assist with hiring someone, then the above is a very, very big deal.

  • gmork_13@alien.topOPB

    It’s just a joke, but for some context: I’ve attached GPT-4-Vision to a chatbot, and basically every time someone posts a link (which it can then see), the answer is a variation on this:
    “I’m not enabled to provide direct assistance with that image. If you need help with something else, feel free to ask.” - which is completely useless, seeing as it’s mostly a YouTube screenshot with a person somewhere in the browser window.

    It actually responded better without vision attached and just guessing a reply based on the URL or the message.
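
    For anyone curious, the hookup is roughly like the minimal sketch below (not my exact bot code; the model name, prompt, and URL are just placeholders):

    ```python
    # Minimal sketch of passing a posted image link to GPT-4 Vision via the
    # OpenAI chat completions API. Model name, prompt, and URL are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def reply_to_link(image_url: str, user_message: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4-vision-preview",
            max_tokens=300,
            messages=[
                {
                    "role": "user",
                    "content": [
                        {"type": "text", "text": user_message},
                        {"type": "image_url", "image_url": {"url": image_url}},
                    ],
                }
            ],
        )
        return response.choices[0].message.content

    # More often than not, this comes back as the refusal quoted above.
    print(reply_to_link("https://example.com/screenshot.png", "What do you see here?"))
    ```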

  • SupMarkH@alien.topB

    Well, so much for “don’t judge a book by its cover”

    I wonder what it would have said about a picture of an overweight guy.

  • sampdoria_supporter@alien.topB

    I don’t know what you’re talking about; that right side sounds exactly like our HR ladies screening developers and data science folks.

    • seanthenry@alien.topB

      The AI knew not to say “don’t hire the lady because she is pregnant,” but it should also have known never to say a lady is pregnant unless she tells you she is.

  • a_beautiful_rhind@alien.topB

    Part of why I don’t like OpenAI models. Using their synthetic data can let both the tone and the refusals creep into your fine-tunes.
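
    One rough mitigation is to scrub obvious refusal markers out of the synthetic data before tuning on it. Minimal sketch below; the marker phrases and the record shape are just assumptions for illustration:

    ```python
    # Rough sketch: drop refusal-flavored examples from a synthetic dataset
    # before fine-tuning. Marker phrases and the {"prompt", "response"} record
    # shape are assumptions, not any standard format.
    REFUSAL_MARKERS = (
        "as an ai language model",
        "i'm sorry, but i can't",
        "i am not able to provide",
        "i'm not enabled to",
    )

    def is_clean(example: dict) -> bool:
        text = example["response"].lower()
        return not any(marker in text for marker in REFUSAL_MARKERS)

    def filter_dataset(examples: list[dict]) -> list[dict]:
        return [ex for ex in examples if is_clean(ex)]

    # Example: only the first record survives the filter.
    data = [
        {"prompt": "Summarize this article.", "response": "The article argues..."},
        {"prompt": "Summarize this article.", "response": "I'm sorry, but I can't help with that."},
    ]
    print(len(filter_dataset(data)))  # 1
    ```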