LiquidVR: The Technology for Virtual Reality on AMD GPUs

Virtual reality is a medium that works under its own rules due to its particular nature. This has prompted GPU manufacturers to develop technologies and standards to improve the performance of their graphics cards in virtual reality, such as AMD's LiquidVR technology.

LiquidVR has been with us for several years, but in all this time we have never explained what it consists of. Since AMD GPUs accelerate the rendering of virtual reality scenes using the set of technologies around LiquidVR, here is a quick explanation of what it is and how it works.

What is AMD LiquidVR?

AMD has been using the technologies that make up LiquidVR since 2015, so it is not new to the market; it has been with us for some years. In fact, it was initially designed not for DirectX 12 but for DirectX 11, since the twelfth version of Microsoft's API was released that same year. LiquidVR has remained essentially unchanged since then.

LiquidVR, as its name suggests, is a set of technologies designed to accelerate the rendering of virtual reality scenes on AMD GPUs. It consists of five fundamental points:

  • Asynchronous computing.
  • Multi-GPU affinity.
  • The ability to reduce motion-to-photon latency with the virtual reality headset.
  • Advanced data copying between the CPU and GPU.
  • Reduced latency on video output.

These five principles have evolved across AMD's GPU generations since then and form the basis of later designs, but at the time they were devised with the goal of accelerating virtual reality.

Motion-to-photon latency

In virtual reality there is a rule that the time from when we make a movement until we see it on the headset's screen must be less than 20 ms. A higher figure makes our brain disbelieve that we are really "there". When the figure stays below 20 ms, what is called presence is achieved.

We are not talking about a frame time of less than 20 ms, but about the entire process completing within that time, from when we press a button or make a movement until we see the result on screen. This made it necessary to optimize not only the rendering speed of the GPU, but everything involved in creating the image the user sees.
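
To put that 20 ms figure in context, here is a minimal sketch of how the budget might break down on a 90 Hz headset. The stage timings are illustrative assumptions, not measured values:

```cpp
#include <cstdio>

int main() {
    // Illustrative stage timings in ms; real values vary per system.
    const double sensorRead = 1.0;  // read head-tracking sensors
    const double cpuFrame   = 3.0;  // game logic + building command lists
    const double gpuRender  = 11.1; // one frame at 90 Hz (1000 / 90)
    const double scanout    = 4.0;  // display scanout and pixel switching

    const double motionToPhoton = sensorRead + cpuFrame + gpuRender + scanout;
    const double budget = 20.0;

    std::printf("Motion-to-photon: %.1f ms (budget: %.1f ms) -> %s\n",
                motionToPhoton, budget,
                motionToPhoton <= budget ? "presence" : "discomfort");
}
```

Note that the GPU's frame time alone already consumes more than half of the budget, which is why every other stage of the pipeline also has to be optimized.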

Asynchronous computing

Asynchronous computing is now built into DirectX 12 and Vulkan, so it is no longer a separate feature. What does it consist of? Before these two APIs arrived on PC, DirectX 11 treated all graphics and compute command lists as a single common list.

Some GPUs have several command processors, apart from the main one, that handle compute tasks, many of which run asynchronously while the scene is being drawn. That is, they do not depend on the rendering of the scene and can be executed at any time.
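
As a minimal sketch of what this looks like in DirectX 12 (the generic API route, not LiquidVR's original DirectX 11 extension), an application creates a separate compute queue alongside the graphics queue, so compute work can overlap rendering:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

// Create a graphics queue and an independent compute queue on the same
// device; work submitted to the compute queue can overlap rendering.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;      // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}
```

The two queues are synchronized with fences, which is what allows, say, a physics or post-processing compute workload to run while the previous frame is still being drawn.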

Multi-GPU Affinity in LiquidVR

Normally, a 3D scene is rendered in real time with a single camera, which is equivalent to seeing the world from the point of view of a cyclops. To render a scene for virtual reality, however, two points of view are used, that is, two cameras, each corresponding to a different eye.
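
As a minimal illustration, each eye's camera is simply the head position shifted half the interpupillary distance (IPD) to either side. The code below assumes a fixed head orientation and a typical 64 mm IPD:

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Build the two eye positions from a single head position by shifting
// half the interpupillary distance (IPD) along the head's right axis.
// Simplified: assumes the head's right axis is the world X axis.
void EyePositions(Vec3 head, float ipdMeters, Vec3& leftEye, Vec3& rightEye)
{
    const float half = ipdMeters * 0.5f;
    leftEye  = { head.x - half, head.y, head.z };
    rightEye = { head.x + half, head.y, head.z };
}

int main() {
    Vec3 left, right;
    EyePositions({0.0f, 1.7f, 0.0f}, 0.064f, left, right); // ~64 mm IPD
    std::printf("left eye x = %.3f, right eye x = %.3f\n", left.x, right.x);
}
```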

Multi-GPU affinity in LiquidVR is simply rendering the scene using one GPU per eye, each from its own point of view and in parallel. For this, the CPU must create two command lists, one for each eye, which the two GPUs then process in parallel.

Because the virtual reality headset is connected to the first GPU, the second GPU copies its final image buffer over to the first. Do not forget that HMDs or virtual reality headsets usually use a single LCD panel, in which the left eye's image is displayed on the left half of the screen and the right eye's on the right half.
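
The final composition step can be pictured with plain CPU-side pixel arrays (in practice it is a GPU-to-GPU buffer copy followed by a present): the two eye images become the left and right halves of the single panel image. The ComposePanel helper below is a hypothetical illustration, not part of LiquidVR's API:

```cpp
#include <vector>
#include <cstdint>

// Compose the two per-eye images side by side into the single panel
// buffer: left eye on the left half, right eye on the right half.
std::vector<uint32_t> ComposePanel(const std::vector<uint32_t>& leftEye,
                                   const std::vector<uint32_t>& rightEye,
                                   int eyeWidth, int height)
{
    const int panelWidth = eyeWidth * 2;
    std::vector<uint32_t> panel(static_cast<size_t>(panelWidth) * height);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < eyeWidth; ++x) {
            panel[y * panelWidth + x]            = leftEye[y * eyeWidth + x];
            panel[y * panelWidth + eyeWidth + x] = rightEye[y * eyeWidth + x];
        }
    }
    return panel;
}
```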

Motion estimation in LiquidVR

The delay between the movement of the player's head and the moment the image is generated can produce a lag that causes dizziness, since the movement seen in the headset does not match the movement the player actually makes with their head. This lag comes from the time it takes the CPU to generate the command list and send it to the GPU.

Motion estimation in LiquidVR therefore consists of predicting in which direction and at what speed the user will move their head. The objective is none other than to reduce the latency of calculating the position of the head and the other tracked elements, so as to reduce the lag and, if possible, cancel it entirely. We need this because every object in the frame moves relative to a reference point, and that reference point is always the camera, which in virtual reality depends on the position of the head and, in some cases, even of the eyes.
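
A minimal sketch of the idea: extrapolate the last sampled head pose by its angular velocity over the expected remaining latency, and render from the predicted pose. Real headset runtimes use more sophisticated prediction filters; the constant-velocity model below is an assumption for illustration:

```cpp
#include <cstdio>

// Predict head yaw at display time by extrapolating the last sampled
// angular velocity over the expected motion-to-photon latency.
float PredictYaw(float sampledYawDeg, float yawVelocityDegPerSec,
                 float latencySeconds)
{
    return sampledYawDeg + yawVelocityDegPerSec * latencySeconds;
}

int main() {
    // Head turning at 120 deg/s with 15 ms of remaining latency:
    float predicted = PredictYaw(30.0f, 120.0f, 0.015f);
    std::printf("render from yaw = %.1f deg (sampled: 30.0)\n", predicted); // 31.8
}
```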

In virtual reality it is essential that the user's movements be synchronized with the movement of the virtual world; every movement they make has to be coordinated with the rest of the virtual world. If the world is not credible to the person experiencing it, presence is broken, and presence is key in virtual reality.