• Ghostalmedia@lemmy.world

    I knew “scary fast” had to mean some sort of processor bump for hardware, but I was secretly hoping they’d kill off the remaining Lightning ports on their keyboards, trackpads, and mice.

    And I was hoping they’d finally redesign that god-awful mouse. I don’t know how people live with that thing.

    • TherouxSonfeir@lemm.ee

      I think they want to force trackpads. I can’t even use a mouse anymore. Feels strange.

      • NotMyOldRedditName@lemmy.world

        When I use my Mac, I only ever use the trackpad; it’s amazing.

        I’m a software developer, and I still only use the trackpad when working at my desk with my monitors.

        Still need a mouse for gaming on the gaming PC, though.

          • NotMyOldRedditName@lemmy.world

            Using the laptop’s trackpad and multiple monitors (2 external + MacBook).

            Short of the mythical programmer who never touches a mouse because they know 100% of the hotkeys, I find it lets me keep my hands closer to the keyboard, so I can move around and type quicker than going back and forth to a mouse.

            The workflow is much less disruptive.

        • Space Sloth@feddit.dk

          It’s indeed god-awful. I prefer the Magic Trackpad over a mouse for Mac use.

  • arcadefx1@lemmy.world

    The memory maximums are a tad silly. I’d expect …

    • M3: up to 32 GB
    • M3 Pro: up to 64 GB
    • M3 Max: this one is fine as-is

    Ray tracing is awesome, but aside from that I’m not eager to move up from my M1 Max.

    • NotMyOldRedditName@lemmy.world

      The memory maximums are going to be more and more important when it comes to local AI applications.

      Take language models as an example.

      To run a 30B model fully on the video card, you need 24 GB of video RAM. Today that means an Nvidia 3090 or 4090. But in the grand scheme of things, 30B is small. Models are going to get much bigger, especially when you want larger contexts, which allow the AI to remember more about its interactions with you.

      Apple’s memory is unified, so it can serve as system RAM or video RAM. You’ll be able to easily load a 70B model onto a MacBook with 64 GB of RAM, for example, where you’d need two 3090s or 4090s and a hefty PSU in a current-gen non-Mac PC (if you even can with just that).
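
      To put rough numbers on that, here’s a minimal back-of-the-envelope sketch (the bytes-per-weight figures and the 20% overhead for KV cache and activations are common rules of thumb, not exact values for any specific model):

      ```python
      # Rough VRAM estimate for running a large language model locally:
      # weights = parameters * bytes per weight, plus ~20% overhead for
      # the KV cache and activations (highly workload dependent).

      def estimate_gib(params_b: float, bytes_per_weight: float, overhead: float = 1.2) -> float:
          return params_b * 1e9 * bytes_per_weight * overhead / 2**30

      for params_b in (30, 70):
          for label, bpw in (("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)):
              print(f"{params_b}B @ {label}: ~{estimate_gib(params_b, bpw):.0f} GiB")
      ```

      A 30B model quantized to 4 bits comes out around 17 GiB, which is why it just fits on a 24 GB 3090/4090, while a 70B model at the same quantization wants roughly 40 GiB: workable on a 64 GB unified-memory Mac, but beyond any single consumer GPU.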

      For the moment, things are better optimized for Windows and Nvidia hardware, but Apple is encroaching on this space, and its huge amounts of video memory will begin to unlock using and training larger and larger models with each hardware generation.

      Expect Nvidia to start offering cards with more video RAM for this exact reason. Maybe even cards tailored to this instead of gaming, with really high amounts of RAM.

      • BetaDoggo_@lemmy.world

        I can’t see local models or hardware needing to scale much past the sizes we already have. Recent models like Mistral have shown that we are still far from saturation at current model sizes.

        • NotMyOldRedditName@lemmy.world

          And we only ever needed 64 KB of RAM.

          Even if we have a lot of room to optimize and grow within what we have, we still have so much more to do.

          Fully coherent audio and video synthesis for a scene, for example.

          And these models are being trained on server farms, but that’s just because video memory is so expensive to come by.

          We’re just starting to crawl; we haven’t even started walking yet on where this is going.

          • BetaDoggo_@lemmy.world

            I was mainly referring to language models, which have somewhat predictable scaling laws. It doesn’t make sense to continue scaling the parameters when you can scale the data instead.
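
            For a rough sense of the ratio involved, here’s a minimal sketch of the commonly cited Chinchilla rule of thumb (about 20 training tokens per parameter; an approximation from the scaling-law literature, not a hard law):

            ```python
            # Chinchilla rule of thumb: a compute-optimal LLM is trained on
            # roughly 20 tokens per parameter, so past a point it is better
            # to scale the training data than the parameter count.

            def chinchilla_optimal_tokens_b(params_b: float, ratio: float = 20.0) -> float:
                return params_b * ratio

            for params_b in (7, 13, 70):
                print(f"{params_b}B params -> ~{chinchilla_optimal_tokens_b(params_b):.0f}B tokens")
            ```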

            Diffusion models are a completely different domain that is less established. Most advancements in that space are related to architecture and training methodology; in terms of scale they haven’t changed much.

            Large models will always be trained in datacenters, because the compute there will always be exponentially greater and cheaper than what you could get as an individual. Local fine-tuning already happens, but it’s expensive and limited.

    • holycrap@lemm.ee

      Doesn’t the M2 Ultra allow 192 GB of RAM? Seems like an odd downgrade. The value in these for me is the unified memory for large AI models, but most consumers may not notice that. Who knows.

    • sloppy_diffuser@sh.itjust.works

      Based on my 60 seconds of reading on it, onboard GPUs typically share the system’s RAM, usually a fixed amount from my understanding. Dynamic Caching seems to allow the GPU to consume only what it needs. Without knowing more, I’m guessing this means it frees up more RAM for the system instead of holding a fixed chunk in reserve for the GPU, or, on the other side, allows the GPU to use more RAM than some predetermined fixed amount.
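
      To make the contrast concrete, here’s a toy sketch of the two policies as I understand them (the numbers are made up for illustration; this is not how Apple’s hardware actually implements it):

      ```python
      # Toy contrast: fixed GPU memory carve-out vs. demand-based
      # allocation on a unified-memory system. Numbers are illustrative.

      TOTAL_RAM_GB = 16
      FIXED_RESERVE_GB = 8  # traditional iGPU: reserved whether used or not

      def free_for_system_fixed(gpu_needs_gb: float) -> float:
          # The reserve is gone even if the GPU uses far less (demand ignored).
          return TOTAL_RAM_GB - FIXED_RESERVE_GB

      def free_for_system_dynamic(gpu_needs_gb: float) -> float:
          # Only what the GPU actually touches leaves the shared pool.
          return TOTAL_RAM_GB - gpu_needs_gb

      for need in (2, 8, 12):
          print(f"GPU needs {need} GB -> fixed leaves {free_for_system_fixed(need)} GB, "
                f"dynamic leaves {free_for_system_dynamic(need)} GB for the system")
      ```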

      According to Apple’s press release, the GPUs in the new Macs are already faster and more efficient than those that came before them. But they go further thanks to their support for Dynamic Caching, a feature that “unlike traditional GPUs, allocates the use of local memory in hardware in real time.”

      What does that mean? Apple says that “with Dynamic Caching, only the exact amount of memory needed is used for each task. This is an industry first, transparent to developers, and the cornerstone of the new GPU architecture.”

      https://www.imore.com/mac/dynamic-caching-and-its-m3-chips-could-be-the-secret-to-apples-mac-gaming-plans

  • Kaidao@lemmy.ml

    I was kinda bored by this announcement. I have an M2 Pro MBP for work and I really have no desire to get anything faster.

    I was hoping for a new iPad Mini announcement.

  • YⓄ乙 @aussie.zone

    Do we have an M3 Air now? I can’t find it. I was planning to buy an M2 Air, but now we have the M3, so it’s probably best to wait?

  • set_secret@lemmy.world

    Pity Qualcomm has just wiped the floor with them. Even with the new M3 it’s not even close, I believe. Please correct me if I’m wrong (who am I kidding, you’re gonna correct me).

      • set_secret@lemmy.world

        The Snapdragon X Elite beats the M2 by 50%, so you’re wrong, and it beats the M2 Max in single-core too. And it does so with 40% less power.

        It’s likely the M3 will rival it, but it will be close by the looks of it.

        Also, you’re wrong.

          • set_secret@lemmy.world

            I’m just going on what’s been reported. I believe they were running benchmarks against the M2, and we have M3 specs, so it’s reasonable to make guesses. I never said it wipes the floor with the M3; it definitely does with the standard M2, which you seemed confidently incorrect about.

            Maybe you’re just angry about something and aren’t reading what I’ve written?

            Anyway, we’ll find out soon enough.