• hendrik@palaver.p3x.de · 2 points · 5 days ago

    The Apple chips also have a wide interface to the RAM. That means you can run chatbots (LLMs) and other AI workloads that are memory-bound at crazy speeds compared to an Intel (or AMD) computer.
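To see why these workloads are memory-bound, consider a rough sketch (the dimension and precision below are illustrative assumptions, not Apple specifics): applying one transformer weight matrix to a single token is a matrix-vector multiply, where each weight is read from RAM once and used for only one multiply-add.

```python
# Why LLM token generation is memory-bound: in a matrix-vector
# multiply, each weight streamed from RAM is used for just one
# multiply and one add, so arithmetic intensity is very low and
# memory bandwidth, not compute, sets the speed limit.

n = 4096                         # hypothetical hidden dimension
weights_read = n * n             # elements streamed from RAM
flops = 2 * n * n                # one multiply + one add per weight
bytes_moved = weights_read * 2   # assuming fp16 (2 bytes per element)

intensity = flops / bytes_moved
print(f"{intensity} FLOPs per byte")  # ~1 FLOP/byte, far below what
                                      # modern chips can compute per
                                      # byte fetched from memory
```

With only about 1 FLOP per byte moved, a wider memory interface translates almost directly into faster token generation.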

    • JohnDClay@sh.itjust.works · 3 points · 5 days ago

      Really? How fast is the memory bus compared to x86? And did they just double the bus bandwidth by doubling the memory?

      I’m dubious, because they only now went to 16 GB RAM as the base, which has been standard on x86 for almost a decade.

      • brucethemoose@lemmy.world · 1 point · edited · 21 hours ago

        Apple is also much faster because the integrated graphics are actually usable for LLMs.

        The base M is just a bit faster than an Intel/AMD laptop if you can get their graphics working. The M Pro is 2x as fast (as its memory bus is 2x as wide). The M Max is 4x as fast.

        AMD is coming out with something more competitive in 2025 though, Strix Halo.

      • hendrik@palaver.p3x.de · 2 points · edited · 5 days ago

        Depending on the chip, they have somewhere from 100 to 400 GB/s. I’m not sure on the numbers for Intel processors. I think the consumer processors have about 50 - 80 GB/s (~Alder Lake, dual-channel DDR5). Mine seems to have way less. And a recent GPU will be somewhere in the range of 400 to 1000 GB/s. But consumer graphics cards stop at 24 GB of VRAM, and those flagship models are super expensive, even compared to Apple products.
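        A quick back-of-envelope shows what those bandwidths mean for token generation, assuming decode is purely memory-bound (every generated token streams all model weights from RAM once). The model size and bandwidth figures below are illustrative assumptions, not measurements:

```python
# Rough theoretical ceiling on LLM decode speed, assuming generation
# is memory-bound: tokens/s ~= memory bandwidth / model size in bytes.
# All numbers here are illustrative assumptions.

MODEL_BYTES = 7e9 * 0.5  # hypothetical 7B model at 4-bit quantization
                         # (~0.5 bytes per weight)

bandwidths_gbs = {
    "dual-channel DDR5 x86": 70,    # roughly the 50-80 GB/s range above
    "Apple M (base)": 100,
    "Apple M Pro": 200,
    "Apple M Max": 400,
    "flagship discrete GPU": 1000,
}

for name, gbs in bandwidths_gbs.items():
    tokens_per_s = gbs * 1e9 / MODEL_BYTES
    print(f"{name}: ~{tokens_per_s:.0f} tokens/s ceiling")
```

        Real throughput lands below these ceilings, but the ratios between platforms track the bandwidth ratios, which is why the wide Apple memory interface matters so much here.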

        The people from the llama.cpp project did some measurements, and I believe the Apple “Metal” backend outperformed the x86 computers by an order of magnitude or so. I’m not sure, it’s been some time since I skimmed the discussions on their GitHub page.