• liaminwales@alien.topB · 1 year ago

    They can do what they want; ‘gamer’ GPUs for AI is not a new thing. The theory behind Nvidia’s low VRAM is that GTX 1080 Tis were being used for AI training, Nvidia saw the money it was losing and locked that VRAM down.

    • crazyates88@alien.topB · 1 year ago

      And mining. Ethereum mining is very memory intensive, so they had to limit memory bandwidth and find other ways to make up the performance for games. That’s why you don’t see 384- or 512-bit memory buses anymore; they’re all as low as you can go. A 128-bit bus isn’t uncommon, sadly.
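
      As a rough sketch of why bus width matters so much: peak bandwidth is just bus width (bits) divided by 8, times the per-pin data rate. The 18 Gbps GDDR6 rate below is an illustrative figure, not a claim about any specific card.

      ```python
      def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
          """Peak memory bandwidth in GB/s: (bus width in bits / 8) * per-pin rate in Gbps."""
          return bus_width_bits / 8 * data_rate_gbps

      # Same hypothetical 18 Gbps GDDR6 chips, different bus widths:
      print(bandwidth_gbs(128, 18))  # 288.0 GB/s
      print(bandwidth_gbs(384, 18))  # 864.0 GB/s
      ```

      So cutting the bus from 384 to 128 bits cuts peak bandwidth to a third, with everything else held equal.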

      • einmaldrin_alleshin@alien.topB · 1 year ago

        The reason for the shrinking memory buses is the poor scaling of IO with newer processes. The memory controllers on AD102 have basically the same footprint as those on GA102, in spite of the gigantic increase in overall transistor density.

        Ethereum mining hasn’t been a thing for a year now, btw.

      • Zednot123@alien.topB · 1 year ago

        > That’s why you don’t see 384

        2080 Ti, 3090, 4090.

        > or 512-bit memory buses anymore

        We haven’t seen them since we moved to GDDR6, simply because the signal integrity and power requirements make it quite unreasonable.

        > find other ways to make up the performance for games

        Lack of DRAM scaling is the reason we are where we are: computational power has grown much faster than bandwidth.

        Since Maxwell, Nvidia has had around a generation’s advantage over AMD in bandwidth efficiency/utilization. Surprise surprise: one generation after AMD, they too have had to resort to larger caches to substitute for bandwidth.

        A 512-bit GDDR6 bus (which isn’t realistic to begin with) would not have given the 4090 enough of a bandwidth increase over the 3090 to keep up with the growth in computational power.
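
        To put rough numbers on that (spec-sheet figures quoted from memory, so treat them as approximate: 3090 at 384-bit/19.5 Gbps GDDR6X, 4090 at 384-bit/21 Gbps):

        ```python
        def bandwidth_gbs(bus_bits: int, rate_gbps: float) -> float:
            """Peak bandwidth in GB/s from bus width and per-pin data rate."""
            return bus_bits / 8 * rate_gbps

        gb3090 = bandwidth_gbs(384, 19.5)        # 936.0 GB/s (RTX 3090)
        gb4090 = bandwidth_gbs(384, 21.0)        # 1008.0 GB/s (RTX 4090)
        hypothetical = bandwidth_gbs(512, 21.0)  # 1344.0 GB/s (imaginary 512-bit 4090)

        # Even the unrealistic 512-bit bus is only ~1.44x the 3090's bandwidth,
        # while FP32 throughput roughly doubled between those two cards.
        print(hypothetical / gb3090)
        ```

        So bandwidth would still lag well behind the generational jump in compute; hence the big L2 cache instead.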