• Deiskos@lemmy.world
      1 year ago

      If it’s all in the cloud - yes. If some/most of it runs locally - still yes, with a caveat: hardware acceleration can make it run both faster and at a lower battery cost, the same way every computer has a dedicated graphics chip, whether as a separate expansion card or a separate module on the CPU. Yes, you technically can do all that math on the CPU (it’s called software rendering), but rendering done with a purpose-built chip is so much better. The same logic applies to “AI”.

      • 👁️👄👁️@lemm.ee
        1 year ago

        I get what you mean, but there’s currently nothing planned or in the works for running local AI on phones, and it’s still way too demanding. Phones can’t even handle a 7B model if we’re talking about RAM usage alone.
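        As a rough sanity check on that RAM claim, here’s a back-of-the-envelope sketch of what just the *weights* of a 7B-parameter model need at common precision levels (the byte-per-parameter figures are typical quantization assumptions, not specifics from any one model):

        ```python
        # Rough weight-memory estimate for a 7B-parameter model.
        # Activations, KV cache, and runtime overhead are NOT included,
        # so real usage is noticeably higher than these numbers.

        def weight_ram_gb(num_params: float, bytes_per_param: float) -> float:
            """Memory for model weights alone, in GiB."""
            return num_params * bytes_per_param / 1024**3

        PARAMS = 7e9  # "7B"
        for name, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
            print(f"{name}: ~{weight_ram_gb(PARAMS, bpp):.1f} GB")
        # fp16: ~13.0 GB, int8: ~6.5 GB, int4: ~3.3 GB
        ```

        Even aggressively quantized to 4-bit, the weights alone eat a few gigabytes, which is a large slice of a typical phone’s 8–12 GB of shared RAM.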

        • Deiskos@lemmy.world
          1 year ago

          Futureproofing, I guess, but I’m also sure there are things other than language models that can benefit from a dedicated processing unit: photo processing, smaller on-device models, Lens, etc.

        • newIdentity@sh.itjust.works
          1 year ago

          Actually, speech-to-text is done locally on newer Pixel devices. So are audio recognition, camera processing, and a lot of other AI features.

          AI isn’t just ChatGPT, and basically every device from the last 4 years has a dedicated AI chip.

  • AutoTL;DR@lemmings.world (bot)
    1 year ago

    This is the best summary I could come up with:


    Sometimes they’re useful: I asked it to expand a bullet point list of notes into some care instructions for my houseplants, and it added some helpful context about how often to water them.

    We’ll see the Pixel 8 (that’s no surprise — Google already told us in like 20 different ways), and what I’m most interested to see is how it starts bringing together the company’s various ideas about AI and its usefulness in our daily lives.

    AI needs a lot of processing power, and Google, like other companies, offloads the heavy lifting to the cloud when you ask Bard to summarize a document or write up a meal plan.

    Google’s custom Tensor chips are supposedly designed with the goal of doing more of this processing locally, but is its third-generation chipset up to the task?

    Given how common overheating complaints are about Pixel 7 phones, it seems unlikely that Tensor G3 will suddenly be ready to run a lot more complicated processes on-device.

    There was a report earlier this year that Google was shaking up the Assistant team and aiming to make the product more Bard-like, and my guess is that we’ll see plenty of flavors of this future in next week’s announcement.


    The original article contains 885 words, the summary contains 205 words. Saved 77%. I’m a bot and I’m open source!