Little by little, the hard drive is being abandoned and solid-state drives are used more and more. However, this change is not only taking place in the world of home computers, but also in servers, and the NVMe-oF protocol has a lot to do with it. What does this acronym stand for, and why could it shape the future of PC storage?
The progressive transition toward NVMe SSDs for storage continues steadily across every sector of computing. This includes networked systems, which today interconnect several computers with each other, whether on a local network or in a data center.

Most storage units are of the DAS or Direct-Attached Storage type, in which only the PC that has the drive installed can access its contents. In a networked environment, such as a data center or a supercomputer made up of tens or hundreds of drives, protocols are therefore needed that allow access to the entire storage infrastructure.
How does communication work in a data center?

Before going into how NVMe-oF works and what it consists of, we must bear in mind that the technologies used in a data center or a local network to interconnect its internal storage are called SANs, which stands for Storage Area Network. Three different technologies are used for this today, all of them based on the veteran SCSI protocol.
- Fibre Channel Protocol (FCP): a protocol that transports SCSI commands over a fiber-optic network, although it can also run over copper lines. Its speeds range from 1 to 128 Gb/s.
- iSCSI: combines the TCP/IP Internet protocol with SCSI commands. It runs over conventional network cards and is limited by the bandwidth of the underlying Ethernet link, so speeds of 1 Gb/s are common, although 10 Gb/s links are starting to appear.
- Serial Attached SCSI (SAS): the most widely used of all, based on SAS cables that allow up to 128 storage units to be connected through host bus adapters (HBAs). Its speeds are 3 Gb/s, 6 Gb/s, 12 Gb/s and even 22.5 Gb/s.
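As a rough way to compare these figures, here is a minimal Python sketch that converts each technology's top nominal line rate, as quoted above, from gigabits to gigabytes per second. These are raw link rates; real throughput is lower once encoding and protocol overhead are taken into account.

```python
# Nominal top line rates from the list above, in gigabits per second.
# Illustrative only: deliverable throughput is lower in practice.
SAN_LINE_RATES_GBPS = {
    "Fibre Channel (FCP)": 128,   # fastest FC generation mentioned
    "iSCSI over Ethernet": 10,    # 10 GbE, increasingly common
    "Serial Attached SCSI": 22.5, # SAS-4
}

def to_gigabytes_per_second(gigabits: float) -> float:
    """Convert a line rate in Gb/s to GB/s (8 bits per byte)."""
    return gigabits / 8

for name, rate in SAN_LINE_RATES_GBPS.items():
    print(f"{name}: {rate} Gb/s = {to_gigabytes_per_second(rate):.2f} GB/s")
```

The takeaway is that the "GB/s" figures often quoted for these interfaces are really gigabits, an order of magnitude smaller in bytes.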
However, all of these technologies were designed to communicate with conventional disk drives, and the way a hard drive is accessed differs from the way a flash-based drive is accessed, which makes these protocols a poor fit for SSDs.
What is NVMe-oF?

The acronym stands for NVMe over Fabrics. This protocol was planned not only to communicate with flash or non-volatile memory drives, but also to interconnect the different elements of a system over communication fabrics. By a fabric we mean a communication structure between two elements: two processors, a processor and RAM, an accelerator and a ROM, and so on. Let's not forget that the topologies used here borrow the same structures as telecommunications networks, but on a very small scale.
However, its main use is to communicate NVMe SSDs over a network, whether connecting different elements to the CPU within the same PC or, failing that, through a network card, which is why we are talking about large data centers. The advantage of NVMe-oF? Compared to the SATA and SAS protocols used with hard drives, it supports up to 65,535 I/O queues with up to 65,536 commands each, versus a single queue with fewer than 256 commands. This is key in environments where more and more cores make data requests to storage and could otherwise saturate the network.
Types of NVMe-oF
Currently there are two variants, which are the following:
- NVMe-oF over Fibre Channel: designed to integrate into existing data centers and servers by coexisting with older protocols such as SCSI. This eases the transition to flash drives in existing infrastructure.
- NVMe-oF over Ethernet: used so that two computers can exchange data through Remote Direct Memory Access (RDMA), which means they can exchange the contents of the flash memory in their NVMe SSDs without the CPU of either system intervening in the process. In this case, the communication does not use SCSI packets at all.
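Real RDMA requires specialized NICs and drivers, but the programming model it enables can be caricatured in plain Python: one side registers a memory region, and the other side reads it directly by name, with no send/receive code executing on the owner's side. This is only an analogy between processes on one machine, not actual RDMA:

```python
from multiprocessing import shared_memory

# "Target" side: register a buffer and place some data in it,
# as if it were a region of an NVMe SSD exposed over the fabric.
region = shared_memory.SharedMemory(create=True, size=16)
region.buf[:4] = b"DATA"

# "Initiator" side: attach to the registered region by name and
# read it directly. The target runs no code during this access,
# which is the essence of the RDMA model.
peer = shared_memory.SharedMemory(name=region.name)
payload = bytes(peer.buf[:4])
print(payload)  # b'DATA'

# Clean up the shared region.
peer.close()
region.close()
region.unlink()
```

In real NVMe-oF over RDMA, the NIC performs this direct access across the network, so neither host CPU touches the data path.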
Let’s not forget that NAND flash is sometimes described as non-volatile RAM, since it is accessed in much the same way as RAM but does not lose its contents when power is removed. This allows technologies designed to interconnect two separate RAM memories to be deployed between flash memories as well.
What speeds are we talking about?
Let’s not forget that NVMe SSDs use the PCI Express interface, so the fiber-optic variant of NVMe-oF is one of the candidates for connecting the different NVMe SSDs within the infrastructure of a data center or a local network. However, Ethernet will continue to dominate as the standard network communication protocol for a long time to come, and network interfaces at speeds of 50, 100 and even 200 gigabits per second are already in development and will soon be deployed in data centers.
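To get a feel for what those link speeds mean in practice, here is a small sketch of how long moving one terabyte would take at each nominal rate, ignoring all protocol overhead and using decimal terabytes:

```python
# How long does 1 TB take to transfer at each nominal Ethernet rate?
DATASET_BYTES = 1_000_000_000_000  # 1 TB, decimal

for rate_gbps in (50, 100, 200):
    bytes_per_second = rate_gbps * 1e9 / 8  # Gb/s -> bytes/s
    seconds = DATASET_BYTES / bytes_per_second
    print(f"{rate_gbps:>3} Gb/s -> {seconds:.0f} s per TB")
```

Even at 200 Gb/s, a terabyte takes on the order of 40 seconds to move, which is why storage fabrics chase every available gigabit.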
The future of NVMe-oF is also on the PC

The RDMA built into NVMe-oF is not a new technology; it has been used in niche markets for years, but network controllers (NICs) with RDMA support were very expensive and required highly specialized technicians to maintain, so implementing it was costly. However, it will be key in the future, even on desktop PCs. The reason is that the internal infrastructure of processors is evolving toward what we call a NoC, or network-on-chip. In a NoC, each element of the processor has a small integrated network interface and an IP address with which to communicate with the rest of the elements, through what we could call a network processor integrated into the chip.
It is no secret to anyone familiar with the subject that, just as network controllers ended up integrated into CPUs, the next step is to do the same with the flash controllers found in NVMe SSDs. Furthermore, the advantage of implementing NVMe-oF internally is that the CPU no longer has to run a series of processes just to move data from one drive to another within the computer.
That is to say, in the future the same protocols used in data centers and large servers will appear in our PCs, not only to communicate with the NVMe SSDs inside them, but so that each element can communicate with the CPU in different ways. Suffice it to say that protocols like the one used by DirectStorage, which gives the GPU access to the SSD without going through the processor, are based on the same ideas as NVMe-oF.