Rough Calculations: How CPUs Save Power

When you think about computing, words such as performance, speed, power consumption and efficiency surely come to mind, but others such as correctness and precision matter just as much. The researchers of the OPRECOMP project, however, question that last point: their objective is to develop a radically different and more flexible type of calculation called transprecision computing, which consists, as you will already have guessed from the title of this article, in making approximate rather than exact calculations. How could a processor of this kind work in a PC without causing errors?

Efficiency in particular, understood as getting more performance out of less power, has become especially important in recent times, and researchers have spent decades looking for alternative ways to improve both at once. One of these lines of research consists of breaking the rule regarding the precision of calculations, something that may sound absurd for modern computing but which, as we explain below, makes a lot of sense.

The precision of calculations in a processor

Demolishing the assumption of 100% accuracy that underlies almost all modern digital computing is the primary goal of the OPRECOMP research consortium, led by IBM Research Europe in Zurich. When we talk about a computer, and more specifically a processor, we all assume that precision must be absolute, but in many applications such precision is simply not necessary and consumes too much energy. Instead, OPRECOMP seeks to deliver approximations with just the right amount of energy needed for the job, while also making this new kind of computing faster. We could call them "lazy processors" that follow the law of least effort.

Digital computers typically use an elaborate coding scheme that stores numbers in the form of 64 binary digits. In many cases, however, applications do not require all of these digits, so one challenge in this project is not only to reduce power consumption but also to ensure that the approximate result of the calculations stays within correct or predefined limits, or at least that such limits can be stated for the expected results.

In other words: today, if a processor miscalculates an operation we get a blue screen and the system hangs, so surely a processor that did approximate calculations could not work, right? What this project proposes, however, is to perform a correct calculation but with fewer decimals, since in a large number of cases they are useless.
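As a rough illustration of "correct but with fewer decimals", here is a minimal C++ sketch (our own illustration, not OPRECOMP code) that computes the same sum in 64-bit and 32-bit floating point and measures how far apart the two results end up:

```cpp
#include <cmath>
#include <cstdio>

// Illustrative sketch: the same sum computed with 64-bit doubles and with
// 32-bit floats. For many workloads the 32-bit result stays well inside an
// acceptable error bound while halving the memory traffic per operand.
int main() {
    const int n = 1000;
    double sum64 = 0.0;
    float  sum32 = 0.0f;
    for (int i = 1; i <= n; ++i) {
        double x = 1.0 / i;                              // arbitrary test data
        sum64 += x * x;
        sum32 += static_cast<float>(x) * static_cast<float>(x);
    }
    std::printf("double: %.15f\n", sum64);
    std::printf("float:  %.15f\n", static_cast<double>(sum32));
    std::printf("error:  %.1e\n", std::fabs(sum64 - sum32));
    // If the application only needs a few correct decimal digits,
    // the 32-bit result is already "precise enough".
    return 0;
}
```

If the error printed on the last line is below what the application cares about, the extra bits of the double were, in the project's terms, wasted energy.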

To give you an idea, here is an example. Imagine that the processor must calculate the number pi, which, as you know, has infinite decimal places. When the CPU is asked for it, it will compute it with all 64 bits of its format, but the application that requested it may only need 16 of those bits (it is just an example) to perform its operations, so the rest have literally been calculated for nothing. For the same reason, when we talk about the number pi we generally do not say 3.14159265358979323846… but settle for 3.1416, rounding, because four decimal places are usually enough for what we need.
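As a hedged, worked version of the pi example (again our own snippet, not project code), the following C++ loop rounds pi to different numbers of decimal places and prints how far each value is from the full 64-bit result:

```cpp
#include <cmath>
#include <cstdio>

// Rounding pi to fewer and fewer decimal places and measuring the error
// against the best 64-bit approximation the CPU actually computes.
int main() {
    const double pi = std::acos(-1.0);                   // ~3.141592653589793
    for (int digits = 2; digits <= 10; digits += 2) {
        double scale   = std::pow(10.0, digits);
        double rounded = std::round(pi * scale) / scale; // keep `digits` decimals
        std::printf("%2d decimals: %.10f  (error %.1e)\n",
                    digits, rounded, std::fabs(pi - rounded));
    }
    // For many uses the 4-decimal value 3.1416 is already good enough;
    // computing further digits only costs extra energy.
    return 0;
}
```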

How would a rough-calculation CPU work?

OPRECOMP addresses the complete computing stack, from the physical hardware level through architecture to compilers, algorithms and software. It aspires to offer the first complete transprecision framework for the computing of the future, and to achieve this its team of mathematicians, computer scientists and software engineers is working not only on the computing side but also on demonstrating the benefits of rough calculations in real applications, such as a small drone built to fly for long periods of time.

Other specific application fields include Big Data analytics, machine learning and high-performance computing (HPC). The architecture developed will address processing, memory and communication aspects, from low-power systems (on the order of milliwatts) used in small devices and connected objects up to large high-performance computer systems with enormous consumption (on the order of kilowatts).

The project team has already adapted many existing algorithms to work in transprecision: for example, it has developed a novel implementation of the BLSTM algorithm that converts images to text using just 8-bit precision. It loses only 0.01% accuracy but in return reduces power consumption by up to a factor of 8. The algorithm has also been implemented in hardware, showing that the approach can be automated and used in real applications.
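To give a flavour of what "working at 8-bit precision" involves, here is a generic symmetric quantization sketch in C++. The scheme and the scale factor are our own illustrative assumptions; this is not OPRECOMP's actual BLSTM implementation:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Map a set of float weights onto 8-bit integers and back, to show why a
// network can keep almost all of its accuracy with far fewer bits.
static std::vector<std::int8_t> quantize(const std::vector<float>& w, float& scale) {
    float max_abs = 0.0f;
    for (float v : w) max_abs = std::max(max_abs, std::fabs(v));
    scale = max_abs / 127.0f;                            // map [-max_abs, max_abs] to [-127, 127]
    std::vector<std::int8_t> q(w.size());
    for (std::size_t i = 0; i < w.size(); ++i)
        q[i] = static_cast<std::int8_t>(std::lround(w[i] / scale));
    return q;
}

int main() {
    std::vector<float> weights = {0.80f, -0.31f, 0.05f, -0.99f};
    float scale = 0.0f;
    std::vector<std::int8_t> q = quantize(weights, scale);
    for (std::size_t i = 0; i < weights.size(); ++i)
        std::printf("%+.4f  ->  %4d  ->  %+.4f\n",
                    weights[i], q[i], q[i] * scale);     // the dequantized value stays close
    return 0;
}
```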

"One challenge is that transprecision computing is not very well known, even among computer scientists. Automation is the key to helping an open international community grow so that rough computation can become accessible to a wide audience, including engineers who have no experience with approximations or inexact calculations. In this regard, OPRECOMP is developing a transprecision software development kit that allows developers to easily program and experiment with transprecision algorithms and small computing devices, such as PULP. The roadmap to seeing this technology in everyday applications is still long, but with OPRECOMP we have taken a big step forward." - Cristiano Malossi, project coordinator.

By the end of the project, new algorithms based on approximate computation will have been generated, along with new low-energy platforms to run those workloads, software libraries that make this type of computation usable, and emulation tools that speed up development and prototyping. All the software produced has been made open source, and results like the FloatX programming library for low-precision computing have already made headlines. Additionally, IBM has already prototyped a traditional HPC system combined with transprecision computing acceleration.
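To get an intuition for what an emulation library such as FloatX makes possible, the following sketch mimics a reduced-precision format by masking the low significand bits of an ordinary double. It is a conceptual illustration only and deliberately does not use the FloatX API itself:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Emulate a floating-point format with fewer significand bits by zeroing
// the discarded bits of a standard IEEE-754 double (which has 52 of them).
static double keep_bits(double x, int significand_bits) {
    std::uint64_t raw;
    std::memcpy(&raw, &x, sizeof raw);                   // read the raw bit pattern
    std::uint64_t drop = 52u - static_cast<std::uint64_t>(significand_bits);
    raw &= ~((std::uint64_t{1} << drop) - 1);            // zero the low mantissa bits
    std::memcpy(&x, &raw, sizeof x);
    return x;
}

int main() {
    double a = 1.0 / 3.0, b = std::sqrt(2.0);
    for (int bits : {10, 16, 24}) {
        double approx = keep_bits(a, bits) * keep_bits(b, bits);
        std::printf("%2d significand bits: %.15f (error %.1e)\n",
                    bits, approx, std::fabs(a * b - approx));
    }
    return 0;
}
```

Running an algorithm through a transformation like this on a normal PC lets developers check whether its results stay within the required error limits before any low-precision hardware exists.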

In the end, the objective is to reduce the power consumption of processors of all kinds by freeing them from having to calculate with so much precision, showing that approximate calculations, which are more than enough in many cases, can improve efficiency immensely.