Just wondering, what would AMD need to do… to at least MATCH Nvidia's offering in AI/DLSS/ray tracing tech?

  • colefinbar1@alien.topB · 1 year ago

    Competition drives innovation, so AMD catching up to Nvidia would only make both better. I’m hopeful they rise to the challenge for the benefit of us all.

  • softwareweaver@alien.topB · 1 year ago

    For AI, they need to invest in technologies that compete with CUDA, like DirectML, etc.

    Pay developers to build AI apps using DirectML, which will run well on AMD GPUs, and market those apps to their customer base.
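
    For illustration, a minimal sketch of what "an AI app using DirectML" can look like in practice, via ONNX Runtime's DirectML execution provider. This assumes the onnxruntime-directml package; "model.onnx" and the input shape are placeholders for whatever model the app actually ships:

    ```python
    # Assumes the onnxruntime-directml wheel (Windows / DX12); "model.onnx" is a placeholder.
    import numpy as np
    import onnxruntime as ort

    # DirectML appears as an ONNX Runtime execution provider. It targets any
    # DX12-capable GPU, which is what makes it vendor-neutral (AMD included).
    print(ort.get_available_providers())

    session = ort.InferenceSession(
        "model.onnx",
        providers=["DmlExecutionProvider", "CPUExecutionProvider"],
    )

    # Run one inference with a dummy input; the name and shape depend on the model.
    input_name = session.get_inputs()[0].name
    dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
    print(session.run(None, {input_name: dummy})[0].shape)
    ```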

  • ResponsibleTruck4717@alien.topB · 1 year ago

    If I were AMD, I would hire engineers and software developers to work on the software side.

    If third-party developers have access to libraries and APIs that fully support AMD's products and can deliver good performance, more people will purchase AMD. But when people want to start small or medium machine learning projects and the default choice is Nvidia, because almost all libraries have been written for CUDA, why would anyone choose AMD?

    So to solve this, AMD needs to make sure there are libraries written for their hardware.
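
    As a concrete example of what that library support looks like where it already exists: PyTorch's ROCm builds expose AMD GPUs through the existing torch.cuda API, so code written against CUDA can in principle run unchanged. A rough sketch, assuming a ROCm build of PyTorch and a supported AMD GPU:

    ```python
    import torch

    # On a ROCm build of PyTorch, torch.version.hip is set and the familiar
    # torch.cuda API is backed by HIP, so "cuda" below actually means the AMD GPU.
    print("HIP runtime:", torch.version.hip)

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # The same matmul that would target an Nvidia card runs here too, because the
    # library has been ported to AMD hardware; that porting work is the point above.
    a = torch.randn(1024, 1024, device=device)
    b = torch.randn(1024, 1024, device=device)
    print((a @ b).sum().item())
    ```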

  • hey_you_too_buckaroo@alien.topB · 1 year ago

    Ray tracing, probably. AI, probably not. What AMD can provide is better value. They just don’t have the manpower or money to beat Nvidia at this point. With the headcount they have, Nvidia could probably release new products twice as fast, which is roughly what they’re doing now.

    • admalledd@alien.topB · 1 year ago

      Which, if you look at how AMD plays in the AI space, is exactly their plan: a cost advantage and being “open” so that the community takes up some of the slack of HIP competing against CUDA. It certainly isn’t perfect, but AMD doesn’t have the headcount to do much about it. I don’t know if there is a better path for them to take; I at least don’t see it.

      • Flowerstar1@alien.topB · 1 year ago

        They don’t have the headcount because they don’t want to invest in Radeon; this has been a constant issue since they bought ATI. AMD is focused on its CPU business above all.

  • AutonomousOrganism@alien.topB · 1 year ago

    Just wondering, what would AMD need to do…

    They’d have to actually spend chip real estate on DL/RT.

    So far they’ve been talking about using DL for gameplay instead of graphics. So no dedicated tensor units.

    And their RT has mostly just been there to keep up with NV feature-wise. They did enhance it somewhat in RDNA3, apparently. But NV isn’t waiting for them either.

    • bankkopf@alien.topB · 1 year ago

      Second-gen AMD hardware ray tracing still has a worse performance impact than Intel’s first-gen hardware ray tracing. No need to talk about Nvidia here, as they are miles ahead. Either AMD is not willing to expend more resources on RT, or they aren’t able to improve performance.

      • TSP-FriendlyFire@alien.topB · 1 year ago

        AMD’s RT hardware is intrinsically tied to the texture unit, which was probably a good decision at the start since Nvidia kinda caught them with their pants down and they needed something fast to implement (especially with consoles looming overhead, wouldn’t want the entire generation to lack any form of RT).

        Now, though, I think it’s giving them a lot of problems because it’s really not a scalable design. I hope they eventually implement a proper dedicated unit like Nvidia and Intel have.

        • Prince-of-Ravens@alien.topB · 1 year ago

          I am pretty sure they already have something in the pipeline; it’s just that it can take half a decade from low-level concept to customer sales…

        • bubblesort33@alien.topB · 1 year ago

          I doubt they were surprised at all. Isn’t RDNA3 very similar to RDNA2? They could have fixed it there, but they decided on minor improvements instead.

          Wasn’t RDNA2 designed with Sony and Microsoft having input on its features? I’m sure Sony and MS knew what was coming from Nvidia years in advance. I think Mark Cerny said developers even wanted a 16-core CPU originally, but were talked out of it because of die-area restrictions. Nvidia-like RT hardware on those consoles would probably have cost as much die area as an extra 8 CPU cores. It all just seems like cost optimization to me.

      • GomaEspumaRegional@alien.topB · 1 year ago

        They don’t even need to make dedicated tensor units, since programmable shaders already have the necessary ALU functionality.

        The main issue for AMD is their software, not their hardware per se.

          • GomaEspumaRegional@alien.topB · 1 year ago

            Well, sure, application-specific IP is always going to be more performant. But in a pinch, shader ALUs can do tensor processing just fine. And without a proper software stack, the presence of tensor cores is irrelevant ;-)
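
            To make that concrete: the core of "tensor processing" is just a large number of multiply-accumulate operations, which generic shader ALUs can already issue; dedicated tensor cores simply execute many of them per clock on small fixed-size tiles. A rough NumPy illustration of the arithmetic involved (illustrative only, not actual shader code):

            ```python
            import numpy as np

            # A matrix multiply is nothing but repeated multiply-accumulate steps,
            # exactly the kind of FP math programmable shader ALUs already provide.
            def naive_gemm(a: np.ndarray, b: np.ndarray) -> np.ndarray:
                m, k = a.shape
                k2, n = b.shape
                assert k == k2
                c = np.zeros((m, n), dtype=a.dtype)
                for i in range(m):
                    for j in range(n):
                        acc = 0.0
                        for p in range(k):
                            acc += a[i, p] * b[p, j]  # one multiply-add per step
                        c[i, j] = acc
                return c

            a = np.random.rand(8, 8).astype(np.float32)
            b = np.random.rand(8, 8).astype(np.float32)
            print(np.allclose(naive_gemm(a, b), a @ b, atol=1e-4))
            ```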

  • Dealric@alien.topB · 1 year ago

    Can they? Of course they can.

    Will they? Unlikely.

    Unless there’s a point where development stalls, I’d expect Nvidia to remain a step ahead.

    • Jonny_H@alien.topB · 1 year ago

      Nvidia is nearly 10x the size of AMD, and more focused on GPUs. That’s a lot of R&D, and if they keep outselling AMD 10:1 in GPUs, that’s a big amortized development-cost advantage.

      That’s a big hill to climb, and (unlike Intel) they don’t seem to be sitting on their thumbs.

      • Prince-of-Ravens@alien.topB · 1 year ago

        Intel used to be 20 times the size of AMD, and fully focused on CPUs.

        I mean, companies have become lazy in the past. Maybe Nvidia will just get used to selling AI chips for $20k to companies and become lazy on the consumer-oriented GPU side, etc.