You are assuming the compiler itself is bug free and that only the optimizer is dangerous. Compilers are programs too, and they quite often have bugs, with or without particular features enabled. Sure, turning a feature on can make the output better, or it can make it worse.
llvm is mentioned in another answer; there is a well known llvm optimization bug, which nobody seems interested in fixing, where
    while(1) continue;
gets optimized away, simply removed ... sometimes ... and other similar but not quite empty infinite loops also disappear under the llvm optimizer, leaving you with a binary that does not match your source code. That is the one I know about; there are no doubt many others in both gcc and llvm.
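If you want to see the kind of code involved, here is a minimal sketch; whether the loop actually gets deleted depends on the compiler version and flags, so treat it as an illustration rather than a guaranteed reproduction:

    #include <stdio.h>

    /* A side-effect-free infinite loop. Some clang/llvm versions at higher
       optimization levels have been known to delete loops like this because
       the optimizer assumes forward progress, so execution can fall through
       to code the source says is unreachable. */
    static void spin(void)
    {
        while (1)
            continue;   /* no volatile access, no I/O, no side effects */
    }

    int main(void)
    {
        spin();
        /* should never print if the loop survives optimization */
        printf("reached code after the infinite loop\n");
        return 0;
    }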
gcc is a monster barely held together with duct tape and baling wire. It is like watching one of those Faces of Death films or something: once you get those images in your head you cannot get them out, they are burned in for life. So it is worth seeing just how scary gcc is by looking behind the curtain, but you may not be able to forget what you saw. For various targets, -O0, -O1, -O2 and -O3 can and do fail, with a bang, for some code at some point in time, and likewise a fix at one optimization level can just as easily break another.
When you write a program, the hope is that the compiler does what it is told, just as you hope your program does what it is told. But that is not always the case: your debugging does not end when the source code is perfect, it ends when the binary is perfect, and that includes every binary and operating system combination you hope to target (different minor versions of gcc produce different binaries, different Linux targets respond differently to programs).
The most important tip is to develop and test at the optimization level you intend to ship. If you develop and test always building a debug build, great, you have a program that works under the debugger; you get to start all over again when you want it to work anywhere else. gcc -O3 often works, but people are afraid of it, so it does not get enough use to be properly debugged, and that makes it less reliable. -O2 and no optimization at all, -O0, get a lot of mileage, a lot of bug reports, a lot of fixes; pick one of those, or, as another answer suggests, go with what the Linux kernel uses. Or go with what firefox uses, or what chrome uses.
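A common way to get bitten when you only test debug builds is a busy-wait on a flag that is not declared volatile. Here is a hedged sketch; the names and the interrupt wiring are invented for illustration:

    #include <stdint.h>

    /* Hypothetical bare-metal fragment: a flag set from an interrupt handler.
       Without 'volatile' the optimizer at -O2/-O3 is free to read the flag
       once and spin forever, while the same code appears to work at -O0.
       Testing only unoptimized builds hides this class of bug until you
       switch to the release flags. */
    static volatile uint32_t data_ready;   /* remove 'volatile' and an
                                              optimized build may hang */

    void uart_isr(void)        /* assumed to be wired up as an interrupt handler */
    {
        data_ready = 1;
    }

    void wait_for_data(void)
    {
        while (data_ready == 0)
            ;                  /* re-reads the flag each pass because it is volatile */
    }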
Now, hard real-time embedded systems. Human-rated, mission-critical systems, systems where life or property are directly affected. First, why are you using gcc at all? Second, yes, optimizers are often not used in these environments; they add too much risk and/or significantly increase the testing and validation effort. You generally want a compiler that has been through extensive testing and whose warts and traps are well known. Do you want to be the one who turned on the optimizer and, as a result, the flight computer crashed the plane into an elementary school on a school day? There is a lot to learn from the old timers. Yes, they have a lot of war stories and a lot of fear of newfangled things. Do not repeat history, learn from it. "They don't build them like they used to" is not just a saying. Those legacy systems were stable and reliable and some are still running, partly because of those old timers and what they learned, and partly because newer things are built cheaper, with lower quality components.
For this class of environment you definitely do not stop at the source code; your money and time are poured into validating the BINARY. Every time the binary changes, you start the validation over again. It is no different from the hardware it runs on: change one component, reflow one solder joint, and you start the verification over from the beginning. One difference, perhaps, is that in some of these environments only a maximum number of rework cycles is allowed per solder joint before you scrap the whole unit. But the software side has its parallels: there are only so many burn cycles on a PROM before you scrap the PROM, and only so many rework cycles on the pads/through-holes before you scrap the circuit board/unit. Leave the optimizer off and find a better, more stable compiler and/or programming language.
Now, if this hard real-time environment does not hurt people or property (other than the product itself) when it fails, that is another story. Maybe it is a blu-ray player and it drops a frame here and there or shows a few bad pixels, big deal. Turn on the optimizer; the masses no longer care about that level of quality, they are content with youtube-quality images, compressed video formats, and so on. Cars that have to be turned off and back on for the radio or bluetooth to work. It does not bother them one bit; turn on the optimizer and claim better performance than your competitor. If the software is not so bad that customers cannot tolerate it, they will live with it or work around it, and when it fails they will come right back and buy your newer model with the newer firmware. They keep doing this because they want the shiny new features, not stability and quality; that stuff costs too much.
You have to gather your own data: try the optimizers on your software, in your environment, and run the product through its full test suite. If nothing breaks, then either the optimizer is fine for that code on that day, or your test suite needs more work. If you cannot do that, you can at least disassemble and analyze what the compiler is doing with your code. I would expect (and know from personal experience) that both the gcc and llvm bug databases contain bugs tied to optimization levels, but does that mean you can filter them by optimization level? I do not know; it is an open, mostly unmoderated interface, so you cannot rely on the masses to fill in the input fields accurately and completely, and if there were an optimization field on the bug report form it would probably just be left at whatever the form/web page defaults to. You would have to read each problem report yourself to see whether the user was having optimizer trouble. If it were a closed corporate system, where an employee's performance review could suffer for not following procedure, for example not filling out the forms, you would have better searchable databases to mine.
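Even a quick disassembly diff tells you something. For example (the file names here are placeholders, and the exact objdump flags depend on your target):

    # build the same source at two optimization levels
    gcc -O0 -c foo.c -o foo_O0.o
    gcc -O2 -c foo.c -o foo_O2.o

    # disassemble both and compare
    objdump -d foo_O0.o > foo_O0.lst
    objdump -d foo_O2.o > foo_O2.lst
    diff foo_O0.lst foo_O2.lst | less

Loops that vanish, reads hoisted out of loops, whole functions folded away: that is where the surprises show up.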
The optimizer increases your risk. Say 50% of the compiler's code is exercised to produce unoptimized output and another 10% on top of that for -O1: you have increased your risk, exercised more compiler code, more risk of hitting a bug, more risk of bad output, and still more code is exercised going to -O2 and -O3. Reducing the optimization level does not eliminate the risk, but it reduces the odds.