How can you tell what the compiler will inline?

How can you tell what will or won't be inlined by the compiler?

I have sometimes been told that certain minor optimizations are pointless because the compiler will inline the relevant calls or computations anyway, while similar optimizations elsewhere seem to be recommended.

What are the rules that let us know when we do or do not need to optimize these things?

+4

9 answers

The only sure way to tell whether something has been inlined is to look at the generated assembly.

Whether anything gets inlined is entirely up to the compiler, since the compiler makes the final decision on what to inline.

Premature optimization aside: if it really matters (or you are just curious), you can use compiler-specific pragmas to force a function to be inlined or not inlined, then profile to see whether you can make better decisions than the compiler.

However, there are some cases where you can be sure that a function cannot be inlined:

  • Virtually called functions where the dynamic type cannot be determined at compile time.
  • Recursive functions, which can never be fully inlined unless the maximum recursion depth can be determined statically.
+2

This may seem a little tangential, but I think it is important.

What are the rules that let us know when we do or don't need to optimize these things?

I would say there are two rules that come into play before any others with respect to this particular question:

  • It probably doesn't matter. If you haven't profiled your code in a release build and proven that function-call overhead is a significant bottleneck, you are better off forgetting about the performance implications of inlining.

  • You have little or no control over what the compiler inlines. inline can be ignored by the compiler if it sees fit. Some things cannot be inlined at all. Some platforms provide language extensions along the lines of force_inline, but even those can be ignored.

+1

There are no hard and fast rules. It is easy enough, though, to run a few benchmarks to test your optimization assumptions.

0
How can you tell what will or what won't be made inline by the compiler?

You cannot: the inline keyword, for example, is a simple hint, and no compiler is required to respect it. However, if you really need that level of optimization, you may have to drop below C/C++ and write assembly.

0
source

Have you looked at SO?

For instance:

Does C++ inlining work without the keyword 'inline'?

Why not mark everything inline?

The question you ask is very general, because the answer depends on the compiler, not to mention the version. It also sounds like what you were doing was premature optimization: you should profile your code and find the areas that slow it down. That renders the "will it be inlined?" question moot, because you will be looking at the effect after the compiler has applied inlining and its other optimizations.

0

It is impossible to say for sure. For details on why and when inlining is good practice, I recommend reading this C++ FAQ.

0

You cannot know what the compiler is going to inline during its optimization phase unless you know the exact compiler's heuristics by heart. However, you can specify which functions you would like to see inlined.

0

The Microsoft compiler has long had an optimization flag, /Ob, one setting of which forces it to inline all and only the functions explicitly declared inline . This is non-standard; by default it behaves according to the standard, which means the keyword is nothing more than a hint. The Intel Parallel Studio compiler behaves similarly.
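For illustration, the relevant invocations look roughly like this (the flag names are real MSVC and GCC options; the file name is an example). /Ob1 is the setting described above; /Ob2 lets the compiler decide on its own:

```shell
cl /O2 /Ob1 example.cpp    # MSVC: inline only functions marked inline/__forceinline
cl /O2 /Ob2 example.cpp    # MSVC: inline any suitable function at the compiler's discretion
cl /O2 /Ob0 example.cpp    # MSVC: disable inline expansion

g++ -O2 -fno-inline example.cpp   # GCC: suppress inlining of functions not marked always_inline
g++ -O2 -Winline    example.cpp   # GCC: warn when a function declared inline is not inlined
```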

0

How can you tell what will or won't be inlined by the compiler?

You cannot at all.

You can read the Inline Redux article by Herb Sutter. It covers a number of techniques for reasoning about when inlining can happen; the later it happens, the more information you have to work with.

For example, in a JIT environment, the runtime can look at a loop, notice that its whole body executes on a single object that usually happens to be of type Foo , specialize the whole loop body for Foo (which allows it to fully inline the virtual methods), and then check at the top of the loop whether the object really is a Foo : if so, it switches to the fully inlined version of the body instead of the shared one with virtual calls.

So, what can be inlined? It depends on when...

In general, C++ compilers inline:

  • At compile time (the historical approach)
  • At link time (part of the so-called LTO, Link-Time Optimization, package)

This means they can only base their optimizations on static information (what is known at compile time).

At compile time, this means that if the static type is known, a virtual call can be devirtualized (the first step toward inlining it: knowing which function is called ;)). It also means that any function whose definition is visible can be inlined.

At link time, for a library, you basically discover new function definitions, which allows more calls to be inlined.

At link time, for an executable, WPA (Whole Program Analysis) could go further and add some devirtualization, realizing that some classes are never derived from and can therefore be considered final with respect to this executable. As far as I know, neither GCC nor LLVM performs this optimization (and I doubt VC++ does), because they do not have a bytecode representation that preserves class hierarchies.

And compilers could, if they wanted to, dump the intermediate representation they have (a bytecode of sorts) and wait until install time to inline calls specific to the OS, although I don't know of anything (for C++) that does this.

Runtime transformations are quite complicated in C++, because memory (for example, the addresses of functions) is exposed to the programmer, so it would be hard to guarantee that only safe transformations are performed...

0

Source: https://habr.com/ru/post/1383090/

