It’s no secret that graphics cards still aren’t powerful enough to handle ray tracing efficiently on their own. For the moment, then, DLSS or FSR and Ray Tracing go hand in hand. However, this relationship is not entirely perfect: it carries certain problems which, although minor, are a reminder of how far the technology still has to advance.
We still have a long way to go in the visual progress of games, enough to guarantee several more years of graphical improvements that graphics cards will need to be ready for. Yet everything indicates that this progress has slowed in recent years: things are moving more slowly than expected, and certain tricks have had to be used to compensate.
DLSS and FSR do not get along very well with Ray Tracing
One of the peculiarities of Ray Tracing is that its computational cost scales with the number of pixels in the image to be generated. This is, in theory, why real-time image upscaling systems such as NVIDIA‘s DLSS or AMD‘s FSR exist. In reality, however, using these techniques can seriously undermine the work done by ray tracing.
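The scaling relationship is easy to illustrate with a back-of-the-envelope calculation. This is only a sketch: the rays-per-pixel figure is a hypothetical example, not a value from any real engine.

```python
# Rough sketch: ray tracing cost grows with the number of pixels rendered.
# rays_per_pixel=4 is a hypothetical example value, not a real engine setting.

def rays_per_frame(width, height, rays_per_pixel=4):
    """Total primary rays needed for one frame at a given resolution."""
    return width * height * rays_per_pixel

native_4k = rays_per_frame(3840, 2160)       # render natively at 4K
internal_1440p = rays_per_frame(2560, 1440)  # render at 1440p, then upscale

print(native_4k)                    # 33177600
print(internal_1440p)               # 14745600
print(native_4k / internal_1440p)   # 2.25
```

Rendering internally at 1440p and upscaling to 4K cuts the ray budget to well under half, which is exactly why these upscalers are paired with ray tracing in the first place.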
We must start from the fact that the main advantage of Ray Tracing is that it allows indirect lighting effects to be represented faithfully in real time, and that this must be recalculated every frame, because the positions of objects change, whether through game action or camera movement. This point is key to understanding why DLSS and FSR don’t get along with ray tracing. Yes, it seems counterintuitive given all the marketing we see, especially from NVIDIA, but there is an explanation, and it is much simpler than you might think.
What is the problem?
When our graphics card applies an upscaling algorithm in real time, it reconstructs the same image with a larger number of pixels. What information does each pixel contain? The color value of that point on the screen, which is nothing more than the combination of its chrominance and luminance. The problem is that we don’t know the color values of the extra pixels, so we use algorithms to estimate them so that the image comes out as faithful as possible.
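The idea of estimating the missing pixels can be sketched with the crudest possible upscaler, nearest-neighbor, where every new pixel simply borrows the color of a known one. This is only an illustration of the principle; DLSS and FSR use far more sophisticated reconstruction.

```python
# Minimal sketch of spatial upscaling: the color of each new pixel is
# estimated from known neighbors (nearest-neighbor here, for simplicity).

def upscale_nearest(image, factor):
    """Upscale a 2D grid of color values by repeating known pixels."""
    out = []
    for row in image:
        new_row = []
        for px in row:
            new_row.extend([px] * factor)  # horizontal: copy the known color
        for _ in range(factor):
            out.append(list(new_row))      # vertical: repeat the whole row
    return out

low_res = [[10, 20],
           [30, 40]]
high_res = upscale_nearest(low_res, 2)
# high_res is now 4x4; every "new" pixel took its value from a neighbor
```

The better the estimation algorithm, the closer the result comes to what a native render would have produced, which is why AI-based reconstruction took over this job.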
AI has proven to be the best method for this estimation, but the problem, as always, arises when incorrect data enters the equation and ends up affecting the final result. And where does that bad data come from? This is where temporality comes in: taking the color information from previous frames as a reference to build the current one. Let’s not forget that the lighting is dynamic in every frame, and that the light sources themselves are in motion.
In other words, DLSS and FSR do affect Ray Tracing, or at least the visual fidelity it is meant to deliver, but this does not make the combination useless: without these algorithms it would be impossible to represent, in real time, how objects cast shadows and refract and reflect light faithfully. What does all this tell us? It is very likely that, in time, we will see versions of these algorithms that work better with Ray Tracing, but that is another matter entirely.