Should I add exception handling to my existing code base?

I would like to know the advantages and disadvantages of adding exception handling to existing code.

I am working on an SDK that manages hardware cards in a Windows environment.

The SDK consists of more than 100 DLLs that interact with each other. Our existing codebase probably contains 100,000 (if not 1,000,000) lines of code. Our modules are also multithreaded.

We link against the appropriate library to get nothrow new (lic.lib instead of licp.lib).

Most of the code has no exception handling; it was written with that in mind, like so:

int *p = new int[size];
if (p == NULL)
{
    // handle this case...
    // most probably return an error code
}

char *q = new char[size];
if (q == NULL)
{
    delete[] p;
    // handle this case...
    // most probably return an error code
}
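For reference, the portable way to get the same non-throwing behavior without linking a special runtime library is the std::nothrow form of new, which returns NULL on failure instead of throwing std::bad_alloc; a minimal sketch:

#include <cstddef>
#include <new>

void example(std::size_t size)
{
    // Non-throwing array new: yields NULL on allocation failure.
    int *p = new (std::nothrow) int[size];
    if (p == NULL)
    {
        // handle the failure, e.g. return an error code
        return;
    }
    delete[] p;
}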

We also use RAII; for example, we have an object created on the stack that automatically acquires and releases a critical section.
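Roughly like the following simplified sketch (the name CsLock and the Win32 calls are just for illustration, not our actual code):

#include <windows.h>

// Scoped guard: acquires the critical section on construction and releases it
// on destruction, so every early return still unlocks it.
class CsLock
{
public:
    explicit CsLock(CRITICAL_SECTION &cs) : cs_(cs)
    {
        EnterCriticalSection(&cs_);
    }
    ~CsLock()
    {
        LeaveCriticalSection(&cs_);
    }

private:
    CsLock(const CsLock &);            // non-copyable (pre-C++11 style)
    CsLock &operator=(const CsLock &);
    CRITICAL_SECTION &cs_;
};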

We want to improve the stability of our SDK. We were thinking of adding exception handling, but I'm not sure if this is the right way to improve stability. I must admit that I do not have much experience with EH.

In general, the code checks for division by zero and checks pointers for NULL before dereferencing them, but such events still slip through. Since dividing by zero or dereferencing a NULL pointer does not throw a C++ exception, I wonder how useful it is to go through 100,000 lines of code and add exception handling that will change the flow of control and could cause memory leaks if not handled properly. I experimented with SEH, but I don't think it makes sense to start using it, and it is Microsoft-specific, right?

In my opinion, it would be more useful to review the existing code and add checks for possible failures, such as division by zero, that may have been missed.

Also, if I were to add exception handling, how should I proceed? Change all the modules at once, or work from the bottom up (meaning that if module A calls module B, which calls module C, I would change C, then B, then A)? We release our software quite often, so there would probably only be time to change C before the next release.

Thanks!

+4
3 answers

I would like to know the advantages and disadvantages of adding exception handling to existing code.

You don’t say what you specifically mean by “exception handling”, so I’ll start with something fundamental: standard C++ (you tagged the question c++) requires you to write code that “handles exceptions” for all but trivial applications, otherwise your code is faulty. Various parts of the C++ standard library are allowed to throw exceptions, including new, which your sample code uses. So exceptions may already be thrown through your code, and your code ought to “handle” them. What should happen in that case? Basically, you should write exception-safe code:

  • It is a bug for a program to leak resources in the presence of exceptions. Using RAII keeps you correct here.
  • It is a bug for any object to be left in an inconsistent state after an exception is thrown. Ensuring that can be much harder.

You should focus first on making your code exception-safe.
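For illustration, here is a minimal sketch (my example, not code from the question) of what the same allocation looks like when written to be exception-safe, assuming the default throwing new and standard containers:

#include <cstddef>
#include <new>
#include <vector>

// If the second allocation throws std::bad_alloc, the first vector's
// destructor still runs, so nothing leaks; the catch translates the
// exception into an error code at the module boundary.
int fillBuffers(std::size_t size)
{
    try
    {
        std::vector<int>  p(size);
        std::vector<char> q(size);
        // ... use p and q ...
        return 0;
    }
    catch (const std::bad_alloc &)
    {
        return -1;   // hypothetical error code
    }
}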

+3

In legacy code, you should introduce exception handling in just a few places, as the schedule permits: either in the most isolated areas of the code (to reduce the risk of introducing errors into the rest of the code base), or where it will be most useful (places with critical failures).

I do not recommend disrupting a legacy project by simply adding exception handling everywhere. The hardest part of legacy code is changing it while keeping it working; after all, it has been tested and its behavior is well understood.

+1

I agree with Raedwald that if you use C++ without a very careful coding standard aimed at avoiding EH (for example: using nothrow new, avoiding the standard containers, etc.), and I assume the legacy code was not written that way, then the code is already broken: it is likely to leak and behave erratically in the face of exceptions it can already encounter, like bad_alloc, or bad_cast if you rely on dynamic_cast.

However, from a very pragmatic point of view, with a legacy code base there is a chance the code can get away with it. After all, how many non-trivial applications can gracefully recover from bad_alloc without very careful control over memory allocation? Not that many, and that does not bring the whole world to a screeching halt.

So I do not recommend rewriting the legacy code to catch exceptions and use RAII everywhere. You can use RAII here and there in code you absolutely must change, but I would try to find reasons not to change it too much. Write tests for it, try to stabilize it, and turn it into a black box: a library of functionality that gets used as-is rather than endlessly maintained and modified by wading through its countless lines of code.

Now, the main reason I stopped by to resurrect this old thread (apologies!) is this comment:

In general, the code checks for division by zero and checks pointers for NULL before dereferencing them, but such events still slip through. Since dividing by zero or dereferencing a NULL pointer does not throw an exception [...]

In my strong opinion, you should not throw on things like null pointer dereferences or division by zero, because these are programmer errors. Unless you work on mission-critical software where you want to recover gracefully even while the software is malfunctioning, to reduce the risk of loss of life or something like that, you do not want the application to recover gracefully from programming mistakes. The reason is that doing so tends to hide the bugs, making them silent, which lets users ignore or work around them, or not report them at all.

For programmer errors, you usually prefer assert, which generally does not involve exceptions. If an assertion fails, a debug build of the software will stop and typically display an error message telling you where the assertion failed, down to the exact line of code. This is usually the fastest route to detecting and fixing these programming errors, by running under the debugger in response to an error report, so assert liberally.
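For example, a minimal sketch (the function is made up, not from the question's SDK) of asserting preconditions instead of throwing on programmer errors:

#include <cassert>
#include <cstddef>

// Callers must pass a valid array and a positive count; violating that is a
// programmer error, so it is asserted rather than turned into an exception.
int average(const int *values, int count)
{
    assert(values != NULL && "caller must pass a valid array");
    assert(count > 0 && "caller must pass a positive count");   // also guards the divide below

    int sum = 0;
    for (int i = 0; i < count; ++i)
        sum += values[i];
    return sum / count;   // safe: count > 0 was asserted above
}

In a debug build, passing a null array or a zero count stops right at the failing line.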

Exceptions are most useful for exceptional external events outside the programmer's control. An example would be reading a file that turns out to be corrupt. That is not something the programmer can prevent, so it is appropriate to throw and recover. Other examples are failing to connect to a server, or the user pressing a cancel button before an operation completes, etc. These are exceptional external events, not programmer errors.
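For example, a minimal sketch (file name and magic number are invented) of throwing for an external failure and recovering where something useful can be done about it:

#include <fstream>
#include <iostream>
#include <stdexcept>
#include <string>

// A corrupt or missing file is outside the program's control, so it is
// reported by throwing; the caller decides how to recover.
int readHeader(const std::string &path)
{
    std::ifstream in(path.c_str(), std::ios::binary);
    if (!in)
        throw std::runtime_error("cannot open " + path);

    int magic = 0;
    in.read(reinterpret_cast<char *>(&magic), sizeof magic);
    if (!in || magic != 0x1234)                    // hypothetical expected magic number
        throw std::runtime_error(path + " is corrupt");
    return magic;
}

int main()
{
    try
    {
        readHeader("settings.bin");
    }
    catch (const std::exception &e)
    {
        std::cerr << "Error: " << e.what() << '\n';  // report and carry on gracefully
    }
}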

The best way to fix programmer errors, such as null pointer accesses and divides by zero, is to find them first (assert comes in handy), write a test that reproduces the bug, fix the code so the test passes, and call it a day, rather than throwing exceptions and catching them while leaving the bugs in place.
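For example, a minimal sketch (hypothetical function, no test framework) of that workflow: the test reproduces the divide-by-zero, and the fix makes it pass:

#include <cassert>

// Hypothetical SDK-style function: the fix rejects the bad input explicitly
// and reports it through an error code instead of crashing.
int safeDivide(int numerator, int denominator, int *result)
{
    if (denominator == 0)
        return -1;
    *result = numerator / denominator;
    return 0;
}

int main()
{
    int r = 0;
    assert(safeDivide(10, 2, &r) == 0 && r == 5);   // normal case still works
    assert(safeDivide(10, 0, &r) == -1);            // regression test for the old crash
    return 0;
}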

0

Source: https://habr.com/ru/post/1379157/

