How expensive is data type conversion compared to managing an array of bits in VHDL?

In VHDL, if you want to increment a std_logic_vector that represents a number by one, there are several options I have come across.

1) Use type conversion functions to change the std_logic_vector to a signed or unsigned value, convert that to an integer, add one to the integer, and then convert back to a std_logic_vector by the reverse route. The chart below shows the available conversions (a sketch of this option follows the list).

[Chart: number/vector conversion routes between std_logic_vector, signed/unsigned, and integer]

2) Test the LSB value. If it is '0', make it '1'. If it is '1', perform a "left shift" and concatenate a '0' at the LSB. Example (for a 16-bit vector): vector(15 downto 1) & '0';
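Not from the original post, but as a minimal sketch of option 1, using the ieee.numeric_std conversions from the chart (entity and signal names are illustrative):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;  -- unsigned, to_integer, to_unsigned

    entity inc_example is
      port (
        clk : in  std_logic;
        q   : out std_logic_vector(15 downto 0)
      );
    end entity;

    architecture rtl of inc_example is
      signal slv : std_logic_vector(15 downto 0) := (others => '0');
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          -- Option 1 spelled out: vector -> unsigned -> integer,
          -- add one, then integer -> unsigned -> vector again
          slv <= std_logic_vector(
                   to_unsigned(to_integer(unsigned(slv)) + 1, slv'length));
          -- Equivalent and shorter, skipping the integer detour:
          -- slv <= std_logic_vector(unsigned(slv) + 1);
        end if;
      end process;
      q <= slv;
    end architecture;

In practice the integer detour buys nothing; the commented one-liner does the same job with a single cast each way.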

In an FPGA, unlike on a microprocessor, physical hardware resources seem to be the limiting factor rather than actual processing time. There is always a risk of running out of physical gates.

So, my real question is: which of these implementations is "more expensive" in an FPGA, and why? And are the compilers smart enough to reduce both to the same physical implementation?

+5
3 answers

There is no hardware cost to type conversions.

The various types exist solely to express the design as clearly as possible, not only to other readers (or to you, next year :-) but to the compiler, allowing it to catch as many errors as possible (for example, an integer value going out of range).

Type conversions are your way of telling the compiler "yes, I meant to do that."

Use the type that best reflects the design intent.

If you find yourself using a lot of type conversions, it usually means something has been declared as the wrong type; stop and think about the design for a bit, and it will often simplify beautifully. If you want to increment a std_logic_vector, it should probably be an unsigned, or even a natural.

Then convert where you have to: often at top-level ports or at interfaces to other people's IP.
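For instance (not from the original answer; names are illustrative), a counter kept as unsigned internally, with the single conversion happening at the std_logic_vector port:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity counter is
      port (
        clk   : in  std_logic;
        count : out std_logic_vector(15 downto 0)  -- slv only at the boundary
      );
    end entity;

    architecture rtl of counter is
      signal cnt : unsigned(15 downto 0) := (others => '0');  -- natural type for arithmetic
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          cnt <= cnt + 1;  -- no conversions needed internally
        end if;
      end process;
      count <= std_logic_vector(cnt);  -- one conversion, at the port
    end architecture;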

Conversions may slow simulation down slightly, but that is another matter.

As for your option 2: detailed low-level descriptions are not only harder to understand than a <= a + 1;, they are also harder for synthesis tools to translate, and more likely to contain bugs.

+14

I am giving one more answer to better address why, in terms of gates and FPGA resources, it really does not matter which method you use. In the end, the logic will be implemented in look-up tables (LUTs) and flip-flops. Usually (or always?) there are no built-in counters in the FPGA fabric. The synthesis tool will turn your code into LUTs either way. I always recommend expressing code as simply as possible. The more you write your code at a low level (compared to behaviorally), the more error-prone it will be. KISS is the right course of action every time; the synthesis tool, if it is any good, will simplify your intent as much as possible.

0

The only reason to do arithmetic manually is if you:

  • think you can do a better job than the synthesis tool (where "better" could mean smaller, faster, lower power, and so on),
  • and you think the reduced portability and maintainability of your code ultimately does not matter,
  • and it actually matters whether you do a better job than the synthesis tool (for example, you can only reach the required clock frequency by doing the arithmetic by hand rather than letting the synthesis tool do it for you).

In many cases you can also rewrite your RTL code a little, or use synthesis attributes such as KEEP, to coax the synthesis tool into a better implementation, rather than hand-implementing arithmetic components.
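For example (attribute names are tool-specific; this is the Xilinx-style spelling, and the signal name is hypothetical):

    -- in a declarative region: ask the tool not to optimize this net away
    signal partial_sum : unsigned(15 downto 0);
    attribute keep : string;
    attribute keep of partial_sum : signal is "true";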

By the way, a fairly standard trick to reduce the hardware cost of counters is to avoid normal binary arithmetic and instead use, for example, LFSR counters. See Xilinx XAPP 052 for some inspiration in this area if you are interested in FPGAs (it is quite old, but the general principles are the same in current FPGAs).
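A sketch of the idea, assuming the 16-bit tap set (16, 15, 13, 4) from the XAPP 052 table, with XNOR feedback so the all-zeros state is valid; names are illustrative:

    library ieee;
    use ieee.std_logic_1164.all;

    entity lfsr16 is
      port (
        clk   : in  std_logic;
        reset : in  std_logic;
        state : out std_logic_vector(15 downto 0)
      );
    end entity;

    architecture rtl of lfsr16 is
      signal sr : std_logic_vector(15 downto 0) := (others => '0');
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          if reset = '1' then
            sr <= (others => '0');  -- valid start state with XNOR feedback
          else
            -- shift left, feeding back the XNOR of taps 16, 15, 13, 4;
            -- cycles through 2**16 - 1 states (all-ones is the lock-up state)
            sr <= sr(14 downto 0) &
                  (not (sr(15) xor sr(14) xor sr(12) xor sr(3)));
          end if;
        end if;
      end process;
      state <= sr;
    end architecture;

The trade-off is that the state sequence is pseudo-random rather than in binary order, so this suits dividers and timeouts where you only need terminal-count detection, not a readable count value.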

0
