A partnership with OpenAI will let podcasters replicate their voices to automatically create foreign-language versions of their shows.

  • FireWire400@lemmy.world · 1 year ago

    That’s just weird… Part of the reason I listen to podcasts is that I simply enjoy people talking about things, and AI voices still have an uncanny quality to me.

      • Hoimo@ani.social · 1 year ago

        That’s obviously way better than any TTS before it, but I still wouldn’t want to listen to it for more than a few minutes. In these two sentences I can already hear some of the “AI quirks” and the longer you listen, the more you start to notice them.
        I listen to a lot of AI celeb impersonations and they all sound like the same machine with different voice synthesizers. There’s something about the prosody that gives it away, every sentence has the same generic pattern.
        Humans are generally more creative, or more monotonous, but AI is in a weird in-between space where it’s never interested and never bored, always soulless.

        • bamboo@lemm.ee · 1 year ago

          Having listened to it, I could not identify any sort of “AI quirk”. It sounded perfectly fine.

    • sudoshakes@reddthat.com · 1 year ago

      A large language model took a 3-second snippet of a voice and extrapolated from it that speaker’s entire spoken English lexicon, well enough to be indistinguishable from the real person to banking voice-verification algorithms.

      We are so far beyond what most people picture when they hear the word AI, because the underlying technology was replaced without most people realizing it. The current pace of progress in large language models is mind-boggling.

      These models, when shown fMRI data from a patient, can figure out what image the patient is looking at and then render it. The patient looks at a picture of a giraffe in a jungle, and the model renders it, having never before seen a giraffe… from brain scan data, in real time.

      Not good enough? The same kind of fMRI data was examined in real time by a large language model while a patient watched a short movie and was asked to think about what they saw in words. The sentences the person thought were rendered as English sentences by the model, in real time, from the fMRI data.

      That’s one step away from reading dreams, and that too will happen within 20 months.

      We are very much there.

      • danielbln@lemmy.world · 1 year ago

        Imho it has already been worked out. There’s probably selection bias at play: you don’t even recognize the AI voices that are already out there.

      • Pantoffel@feddit.de · 1 year ago

        Following up on the other comment.

        The issue is that widely available speech models are not yet offering the quality that is technically possible. That is probably why you think we’re not there yet. But we are.

        Oh, I’m looking forward to just translating a whole audiobook into my native language, in any speaking style I like.

        Okay, perhaps we would still have difficulties with made-up fantasy words, or with words from foreign languages that have little training data.

        Mind, this is already possible. It’s just that I don’t have access to this technology. I sincerely hope there will be no gatekeeping of the training data, so that we can train such models ourselves.