Just wondering, what AMD would need to do… to at least MATCH Nvidia's offering in AI/DLSS/ray tracing tech

  • AutonomousOrganism@alien.topB
    10 months ago

    Just wondering, what AMD would need to do…

    They’d have to actually spend chip real estate on DL/RT.

    So far they’ve been talking about using DL for gameplay rather than graphics, so no dedicated tensor units.

    And their RT has mostly just been there to keep up with NV feature-wise. They did enhance it somewhat in RDNA3, apparently. But NV isn’t waiting for them either.

    • bankkopf@alien.topB
      10 months ago

      Second-gen AMD hardware ray tracing still has a worse performance impact than Intel’s first-gen hardware ray tracing. No need to talk about Nvidia here, as they are miles ahead. Either AMD is not willing to expend more resources on RT, or they aren’t able to improve performance.

      • TSP-FriendlyFire@alien.topB
        10 months ago

        AMD’s RT hardware is intrinsically tied to the texture unit, which was probably a good decision at the start since Nvidia kinda caught them with their pants down and they needed something fast to implement (especially with consoles looming overhead, wouldn’t want the entire generation to lack any form of RT).

        Now, though, I think it’s giving them a lot of problems because it’s really not a scalable design. I hope they eventually implement a proper dedicated unit like Nvidia and Intel have.

        • Prince-of-Ravens@alien.topB
          10 months ago

          I am pretty sure they already have something in the pipeline; it’s just that it can take half a decade from low-level concept to customer sales…

        • bubblesort33@alien.topB
          10 months ago

          I doubt they were surprised at all. Isn’t RDNA3 very similar to RDNA2? They could have fixed it there, but they decided on minor improvements instead.

          Wasn’t RDNA2 designed with Sony and Microsoft having input on its features? I’m sure Sony and MS knew what was coming from Nvidia years in advance. I think Mark Cerny said developers even wanted a 16-core CPU originally but were talked out of it because of die-area restrictions. Nvidia-like RT hardware on those consoles probably would have taken the area of an extra 8 CPU cores. It all just seems like cost optimization to me.

      • GomaEspumaRegional@alien.topB
        10 months ago

        They don’t even need to make dedicated tensor units, since programmable shaders already have the necessary ALU functionality.

        The main issue for AMD is their software, not their hardware per se.
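
        To make that concrete, here’s a minimal sketch of a 16×16 matrix-multiply tile (the core tensor op behind DLSS-style upscalers) written as a plain compute kernel. CUDA syntax is used purely for illustration; a HIP or compute-shader version would look nearly identical. Nothing below touches tensor hardware, just the ordinary multiply-add ALUs every programmable shader core already has. Kernel and buffer names are made up for the example.

        ```cuda
        // 16x16 * 16x16 matrix multiply on plain shader/ALU hardware.
        // Each thread of a 16x16 block computes one output element using
        // ordinary fused multiply-adds -- no tensor cores involved.
        __global__ void alu_gemm_16x16(const float* A, const float* B, float* C) {
            int row = threadIdx.y;
            int col = threadIdx.x;

            float acc = 0.0f;
            for (int k = 0; k < 16; ++k) {
                acc += A[row * 16 + k] * B[k * 16 + col];   // one plain FMA per iteration
            }
            C[row * 16 + col] = acc;
        }

        // Launch: alu_gemm_16x16<<<1, dim3(16, 16)>>>(dA, dB, dC);
        ```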

          • GomaEspumaRegional@alien.topB
            10 months ago

            Well, sure, application-specific IP is always going to be more performant. But in a pinch, shader ALUs can do tensor processing just fine. And without a proper software stack, the presence of tensor cores is irrelevant ;-)
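
            For contrast, here’s roughly what the dedicated-unit path looks like on the Nvidia side, sketched with the public WMMA intrinsics from mma.h: the same 16×16×16 tile is issued as a few warp-wide matrix instructions instead of hundreds of scalar FMAs, which is where the performance gap comes from. RDNA3’s WMMA and CDNA’s MFMA instructions play the same role on AMD, with different details. A hedged sketch only; it assumes an sm_70+ GPU and FP16 inputs, and the kernel name is illustrative.

            ```cuda
            #include <cuda_fp16.h>
            #include <mma.h>
            using namespace nvcuda;

            // One warp computes a full 16x16x16 GEMM tile on tensor cores.
            __global__ void wmma_gemm_16x16(const half* A, const half* B, float* C) {
                wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
                wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
                wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

                wmma::fill_fragment(c_frag, 0.0f);
                wmma::load_matrix_sync(a_frag, A, 16);            // leading dimension 16
                wmma::load_matrix_sync(b_frag, B, 16);
                wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);   // whole tile via matrix instructions
                wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
            }
            ```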