• JuanElMinero@alien.topB

    Am I reading those CUDA core projections right?

    GA102 to AD102 increased by about 80%, but the jump from AD102 to GB202 is only slightly above 30%, on top of there being no large gains from moving to 3nm?

    Might not turn out that impressive after all.
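
    Rough math for anyone who wants to check it, using full-die counts at 128 FP32 cores per SM and treating GB202 as nothing more than the 192-SM rumor (the exact percentages shift a bit depending on which SKUs you compare):

        # Full-die CUDA core counts; GB202 is only the 192-SM rumor, not a confirmed spec
        cores = {"GA102": 84 * 128, "AD102": 144 * 128, "GB202 (rumored)": 192 * 128}

        gens = list(cores.items())
        for (prev_name, prev), (cur_name, cur) in zip(gens, gens[1:]):
            print(f"{prev_name} -> {cur_name}: {cur / prev - 1:+.0%}")
        # prints roughly +71% for GA102 -> AD102 and +33% for AD102 -> GB202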

    • Qesa@alien.topB

      It’s highly likely to be a major architecture update, so core count alone won’t be a good indicator of performance.

      • Eitan189@alien.topB

        It isn’t a major architecture update. Nvidia’s slides from Ampere’s release stated that the next two architectures after Ampere would be part of the same family.

        Performance gains will come from improving the RT and tensor cores, from an improved node (probably N4X) that facilitates clock speed increases at the same voltages, and from increasing the number of SMs across the product stack. The maturity of the 5nm process will allow Nvidia to use larger dies than they could with Ada.

    • Baalii@alien.topB

      You should be looking at transistor count, if anything at all; “CUDA cores” is only somewhat useful when comparing different products within the same generation.

      • ResponsibleJudge3172@alien.topB

        Still very accurate if you know what to look for.

        For example, understanding why Ampere and Turing CUDA cores scale differently lets you predict how an Ampere GPU scales versus a Turing GPU.

        It’s also how we knew Ada would scale linearly, except for the 4090, which was nerfed to be more efficient.

  • DevAnalyzeOperate@alien.topB

    I honestly don’t know how well a 24GB 5090 will move, no matter how fast it is. I feel like gamers will go for stuff like the 4080 Super, 4070 Ti Super, or next-gen AMD. For productivity users, there’s the 3090, 4090, and A6000.

    Maybe I’m wrong and the card doesn’t need to be very good to sell because GPUs are so burning hot right now.

    • JuanElMinero@alien.topB

      GDDR7 memory chips will be in production in either 2GB or 3GB sizes, which means 36GB of VRAM on a 384-bit bus could be a possibility for next gen.
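
      The math behind that, if anyone’s curious (each GDDR7 module sits on its own 32-bit channel; just a sketch, not a leak):

          # VRAM = number of 32-bit channels * capacity per module
          bus_bits, module_gb = 384, 3             # 384-bit bus, 3GB GDDR7 modules
          print(bus_bits // 32 * module_gb, "GB")  # -> 36 GB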

        • ZaadKanon69@alien.topB

          A 32GB 5090 and a 24GB 5080 is the most realistic configuration.

          Also expect both of them to have ridiculous prices. $1500+ for the 5080 and a $2500 FE MSRP for the 5090 wouldn’t surprise me. AMD is skipping the high end for one generation, so their competition will likely be a $1000 5070 Ti. The 7900 XTX, or a refresh of it, will be AMD’s flagship until RDNA5. They have their valid reasons for that, but it’s very bad news for Nvidia customers, as much as they like to bash AMD.

          Nvidia also wants to protect its far more expensive professional lineup, so the 32GB 5090 in particular will be priced to the moon.

          • lusuroculadestec@alien.topB

            Rumors have the 5090 with GDDR7 and a 384-bit bus. Micron’s roadmap lists GDDR7 modules in 2GB and 3GB capacities. That means the possible memory configurations are 24 or 48GB with 2GB modules, and 36 or 72GB with 3GB modules.

            32GB would imply a 256-bit or 512-bit bus, neither of which is very likely for an xx90. I could see them maybe going as low as a 320-bit bus for 30GB; even 33GB on a 352-bit bus is more likely than 32GB.

            The 5080 is another matter: 24GB would imply a 256-bit bus with 3GB modules. Nvidia has been all over the map with xx80 memory widths, so it’s anyone’s guess. If they prioritize memory bandwidth and use a 320-bit bus, a 20GB card is most likely.

            GDDR prevents having arbitrary memory sizes.
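
            To spell out the combinations (one 2GB or 3GB module per 32-bit channel, doubled if they ever do a clamshell layout; a sketch of the possibilities, not a leak):

                # Valid GDDR7 capacities per bus width: one module (2GB or 3GB) per
                # 32-bit channel, times two if modules are mounted clamshell.
                for bus in (256, 320, 352, 384, 512):
                    channels = bus // 32
                    sizes = sorted({channels * gb * sides for gb in (2, 3) for sides in (1, 2)})
                    print(f"{bus}-bit: {sizes} GB")
                # 256-bit: [16, 24, 32, 48]    320-bit: [20, 30, 40, 60]
                # 352-bit: [22, 33, 44, 66]    384-bit: [24, 36, 48, 72]
                # 512-bit: [32, 48, 64, 96]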

            • ZaadKanon69@alien.topB

              A 20GB 5080, probably at an even higher MSRP than the 4080 due to the lack of competition, would be criminal… There was supposed to be a 20GB 3080, for crying out loud. And games will definitely go over 16GB before the generation after next, so 4080 owners will hit a VRAM bottleneck, and then their upgrade option is a $1500 5080.

              I heard the 512-bit rumor and thought Nvidia was FINALLY fixing their VRAM issue across their entire product stack… Sigh.

      • rorschach200@alien.topB

        Why actually build the 36GB one, though? What gaming application will be able to take advantage of more than 24GB over the lifetime of the 5090? The 5090 will be irrelevant by the time the next generation of consoles releases, and the current one has 16GB for VRAM and system RAM combined. 24GB is basically perfect for a top-end gaming card.

        And 36GB would cannibalize the professional card market even more.

        So it’s unnecessary, expensive, and cannibalizing. Not happening.

        • Flowerstar1@alien.topB

          Gaming applications didn’t take advantage of 24GB when it debuted on the 3090, and they still don’t on the 4090 now. That’s not what drives these decisions.