Just wondering, what would AMD need to do… to at least MATCH Nvidia’s offering in AI/DLSS/ray tracing tech?
Competition drives innovation, so AMD catching up to Nvidia would only make both better. I’m hopeful they rise to the challenge for the benefit of us all.
Can AMD ever catch up to the lead NVIDIA has in AI?
Yes. MI300.
For AI, they need to invest in technologies that compete with CUDA, like DirectML, etc.
Pay developers to build AI apps on DirectML, which will run well on AMD GPUs, and market those apps to their customer base.
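For a rough idea of what that looks like from the app side, here’s a minimal sketch assuming an ONNX model and the onnxruntime-directml package (the model path and input shape are just placeholders):

```python
import numpy as np
import onnxruntime as ort

# With onnxruntime-directml installed, the DirectML execution provider
# runs the model on any DX12-capable GPU, AMD included.
session = ort.InferenceSession(
    "model.onnx",  # placeholder: any ONNX model
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder shape
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```

The point being: apps written against DirectML/ONNX Runtime don’t really care whose GPU is underneath.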
If I were AMD, I would hire engineers and software developers to work on the software side.
If third-party developers have access to libraries and APIs that fully support AMD’s products and deliver good performance, more people will buy AMD. But when people want to start small or medium machine learning projects and the default choice is Nvidia, because almost all libraries have been written for CUDA, why should anyone choose AMD?
So to solve this, AMD needs to make sure there are libraries written for their hardware.
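That’s basically what the ROCm builds of PyTorch try to do: HIP is exposed through the existing torch.cuda interface, so device-agnostic code like this (just a sketch, assuming a ROCm or CUDA build with a supported GPU) runs on either vendor unchanged:

```python
import torch

# On ROCm builds of PyTorch, supported AMD GPUs show up through the
# torch.cuda interface, so the same code path covers both vendors.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)
x = torch.randn(32, 128, device=device)
print(model(x).shape, device)
```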
Ray tracing, probably. AI, probably not. What AMD can provide is better value. They just don’t have the manpower or money to beat Nvidia at this point. Nvidia could probably release a new product twice as fast, which is kind of what they’re doing now with how many people they have.
Which, if you look at how AMD plays in the AI space, is exactly their plan: a cost advantage and being “open” so that the community takes up some of the slack of HIP competing against CUDA. It certainly isn’t perfect, but AMD doesn’t really have the headcount to do anything about it. I don’t know if there is a better path for them to take; I at least don’t see it.
They don’t have the headcount because they don’t want to invest in Radeon, this has been a constant issue since they bought ATI. AMD is focused on their CPU business above all.
And, at least according to my IT friend, server farms.
Just wondering, what would AMD need to do…
They’d have to actually spend chip real estate on DL/RT.
So far they’ve been talking about using DL for gameplay instead of graphics. So no dedicated tensor units.
And their RT has been mostly just there to keep up with NV feature-wise. They did enhance it somewhat in RDNA3, apparently. But NV isn’t waiting for them either.
Second gen AMD HW ray-tracing still has a worse performance impact than Intel first gen HW ray-tracing. No need to talk about Nvidia here, as they are miles ahead. Either AMD is not willing to expend more resources on RT or they aren’t able to improve performance.
AMD’s RT hardware is intrinsically tied to the texture unit, which was probably a good decision at the start since Nvidia kinda caught them with their pants down and they needed something fast to implement (especially with consoles looming overhead, wouldn’t want the entire generation to lack any form of RT).
Now, though, I think it’s giving them a lot of problems because it’s really not a scalable design. I hope they eventually implement a proper dedicated unit like Nvidia and Intel have.
I am pretty sure they already have something in the pipeline, it’s just that it can take half a decade from low-level concept to customer sales…
I doubt they were surprised at all. Isn’t RDNA3 very similar to RDNA2? They could have fixed it there, but they decided on minor improvements instead.
Wasn’t RDNA2 designed with Sony and Microsoft having input on its features? I’m sure Sony and MS knew what was coming from Nvidia years in advance. I think Mark Cerny said developers even wanted a 16-core CPU originally, and they were talked out of it because of die-area restrictions. The RT hardware on those consoles probably would have equaled an extra 8 CPU cores in area if they had wanted Nvidia-like RT. It all just seems like cost optimization to me.
Microsoft told Digital Foundry that they had locked the specs of the Xbox Series consoles in 2016. In 2016 they knew the console would have an SSD, RT capabilities etc.
They could already have the next console in mind.
RDNA3 is up to 60% faster than the equivalent RDNA2 part in path tracing.
Isn’t that only because RDNA3 is available with that many more CUs? Afaik the per-unit and per-clock RT performance of RDNA3 is barely ahead of RDNA2.
I misread the 6700XT specs, thinking it was the 60-CU comparison point for the 7800XT, but the actual comparison is the 6800 vs the 7800XT.
That being said:
At 1080p, the 7600 is 20% faster than the 6600XT in Alan Wake, and 10% more in Cyberpunk, with the same CU count.
At 1440p in Alan Wake, the 7800XT is 37% faster than the 6800. There is no 6800 result in Cyberpunk, so you have to infer from the 6700XT: assuming the 6800 is ~20% faster than the 6700XT (as in Alan Wake, same 60 CUs), that works out to a solid ~35% improvement for the 7800XT over the 6800 (rough arithmetic after the link).
https://www.techpowerup.com/review/alan-wake-2-performance-benchmark/7.html
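The rough arithmetic behind that inference, just to show how the estimate falls out (the 1.62 ratio is a placeholder standing in for the measured 7800XT/6700XT Cyberpunk result, consistent with the ~35% above):

```python
# No RX 6800 result in Cyberpunk, so estimate it from the 6700 XT
# using the ~20% gap seen in Alan Wake.
r_7800xt_over_6700xt = 1.62   # placeholder for the measured Cyberpunk ratio
r_6800_over_6700xt = 1.20     # 6800 estimated ~20% ahead of the 6700 XT

r_7800xt_over_6800 = r_7800xt_over_6700xt / r_6800_over_6700xt
print(f"estimated 7800 XT vs 6800: +{(r_7800xt_over_6800 - 1) * 100:.0f}%")  # ~ +35%
```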
Can they make dedicated tensor units, or is that patented?
They don’t even need to make dedicated tensor units, since programmable shaders already have the necessary ALU functionality.
The main issue for AMD is their software, not their hardware per se.
Nah, the throughput of tensor cores is far too high to compete against.
Well, sure, application-specific IP is always going to be more performant. But in a pinch, shader ALUs can do tensor processing just fine. And without a proper software stack, the presence of tensor cores is irrelevant ;-)
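If anyone wants to see the gap for themselves, here’s a quick-and-dirty timing sketch (assumes a CUDA or ROCm PyTorch build; FP16 matmuls get routed to tensor/matrix units where they exist, FP32 stays on the general ALUs unless TF32 is enabled, and part of the difference is simply the narrower datatype):

```python
import time
import torch

def bench(dtype, n=4096, iters=20):
    # Time an n x n matmul at the given precision on the GPU.
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    return (time.perf_counter() - t0) / iters

if torch.cuda.is_available():
    print(f"fp32: {bench(torch.float32) * 1e3:.2f} ms")  # general-purpose ALUs
    print(f"fp16: {bench(torch.float16) * 1e3:.2f} ms")  # tensor/matrix cores where present
```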
Can they? Of course they can.
Will they? Unlikely.
Unless there comes a moment when development stalls, I’d expect Nvidia to remain a step ahead.
Nvidia is nearly 10x the size of AMD, and more focused on GPUs. That’s a lot of R&D, and if they keep outselling AMD 10:1 in GPUs that’s a big amortized development cost advantage.
That’s a big hill to climb, and (unlike Intel) they don’t seem to be sitting on their thumbs.
Intel used to be 20 times the size of AMD, and fully focused on CPUs.
I mean, companies have become lazy in the past. Maybe Nvidia will just get used to selling AI chips for $20k to companies and become lazy on the consumer-oriented GPU side, etc.