In fact, it's a good idea to use exceptions in some cases where "conventional" programmers would not think to use them. For example, in a parser, when a "rule" is being run and it's discovered that the rule does not apply, an exception is a pretty good way to get back to the correct recovery point. (This is similar in spirit to your suggestion about using an exception to exit a recursion.)
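To make that concrete, here is a minimal sketch (all names invented for illustration) of the parser case: a failed rule throws an unchecked exception, which unwinds the stack back to the point where an alternative can be tried.

```java
// Hypothetical sketch: a recursive-descent parser that throws an
// unchecked exception when a rule fails partway through, unwinding
// the call stack back to a known restore point.
class RuleFailedException extends RuntimeException {}

class Parser {
    private int pos;   // current position in the input

    // Tries one alternative; if the rule fails anywhere inside it,
    // the exception unwinds back here, we restore the position, and
    // we try the next alternative.
    void expression() {
        int restorePoint = pos;
        try {
            sumRule();                 // may fail several levels deep
        } catch (RuleFailedException e) {
            pos = restorePoint;        // back to where this rule started
            literalRule();             // try the next alternative
        }
    }

    void sumRule()     { /* ... throws RuleFailedException on failure ... */ }
    void literalRule() { /* ... */ }
}
```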
There is the classic objection that "exceptions are no better than a goto," which is clearly false. In Java and most other reasonably modern languages you can have nested exception handlers and finally handlers, so when control passes upward through an exception a well-designed program can perform cleanup, etc. In fact, this makes exceptions somewhat preferable to return codes in this regard, since with a return code you must add logic at EACH return point to check the return code and then find and execute the correct "finally" logic (possibly several nested pieces of it) before exiting the procedure. With exception handlers this happens fairly automatically, via nested handlers.
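As an illustration (a sketch with hypothetical names, not anyone's production code), compare the two styles:

```java
// Sketch only: Resource, acquire(), release(), and the step methods
// are hypothetical stand-ins.
class ReturnCodesVsExceptions {
    static class Resource {}
    static Resource acquire() { return new Resource(); }
    static void release(Resource r) { /* free the resource */ }

    static int stepOneRc(Resource r) { return 0; }
    static int stepTwoRc(Resource r) { return 0; }

    // Return-code style: every call site checks, and every early
    // return must repeat the cleanup logic.
    static int withReturnCodes() {
        Resource r = acquire();
        int rc = stepOneRc(r);
        if (rc != 0) { release(r); return rc; }  // cleanup duplicated
        rc = stepTwoRc(r);
        if (rc != 0) { release(r); return rc; }  // ...and duplicated again
        release(r);
        return 0;
    }

    static void stepOne(Resource r) { /* throws on failure */ }
    static void stepTwo(Resource r) { /* throws on failure */ }

    // Exception style: the cleanup is written once; a failure in any
    // step passes through the finally block automatically.
    static void withExceptions() {
        Resource r = acquire();
        try {
            stepOne(r);
            stepTwo(r);
        } finally {
            release(r);
        }
    }
}
```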
Exceptions do carry some "baggage" (stack traces in Java, for example). But Java exceptions are actually quite efficient (at least compared with the implementations in some other languages), so performance should not be a major concern as long as you don't use exceptions to excess.
[I'll add that I have 40 years of programming experience and have been using exceptions since the late 70s. I independently "invented" try/catch/finally (which I called BEGIN/ABEXIT/EXIT) circa 1980.]
An "illegal" digression:
I think it is often overlooked in these discussions that the number one problem in computing is not cost or complexity or standards or performance, but control.
By "control" I don't mean "control flow" or "control language" or "operator control" or any of the other contexts in which the term "control" is frequently used. I mean "control of complexity," but more than that: "conceptual control."
We've all done it (at least those of us who have been programming for more than about six weeks): started writing a "simple little program" with no real structure or standards (other than those we may be in the habit of using), not worrying about its complexity because it's "simple" and a "throw-away." But then, maybe one time in 10 or one time in 100, depending on the context, the "simple little program" grows into a monster.
We lose "conceptual control" of it. Fixing one bug introduces two more. The control flow and data flow of the program become opaque. It behaves in ways we cannot comprehend.
And yet, by most standards, this "simple little program" is not that complex. It's not very many lines of code. Very likely (since we are experienced programmers) it is broken into an "appropriate" number of subroutines. Run it through a complexity-measuring algorithm and likely (since it is still relatively small and "subroutine-ized") it will be rated as not especially complex.
Ultimately, maintaining conceptual control is the driving force behind virtually all software tools and languages. Yes, things like assemblers and compilers make us more productive, and productivity is the stated driving force, but a large part of that productivity gain comes from not having to deal with "irrelevant" details, letting us concentrate instead on the concepts we want to implement.
The major early gains in conceptual control came at the start of computing history, as "external subroutines" arose and became more independent of their environment, allowing a "separation of concerns": the author of a subroutine did not need to know much about the caller's environment, and the user of the subroutine did not need to know much about the subroutine's internal workings.
The simple development of BEGIN/END and "{...}" brought similar improvements, since even "inline" code could benefit from some isolation between "out there" and "in here."
Many of the tools and language features that we take for granted exist and are useful because they help maintain conceptual control over increasingly complex software structures. And you can fairly accurately gauge the usefulness of a new tool or feature by how much it helps with that conceptual control.
One of the biggest remaining areas of difficulty is resource management. By "resource" here I mean anything (an object, an open file, an allocated heap block, etc.) that can be "created" or "allocated" during program execution and subsequently needs some form of release. The invention of the "automatic stack" was a first step here: variables could be allocated "on the stack" and then automatically deleted when the routine that "allocated" them exited. (This was a very controversial concept at one time, and many "authorities" advised against using the feature because it hurt performance.)
But in most (all?) languages this problem still exists in one form or another. In languages with an explicit heap you must "delete" everything you "new," for example. Open files must be closed. Locks must be released. Some of these problems can be finessed (e.g., with a garbage-collected heap) or papered over (reference counting, or "parenting"), but there is no way to eliminate or hide all of them. And while managing this problem in the simple case is fairly straightforward (new an object, call the routine that uses it, then delete it), real life is rarely that simple. It is not uncommon to have a method that makes a dozen or so different calls, haphazardly allocating resources in between, with different "lifetimes" for those resources. And some of the calls may return results that change the control flow, in some cases causing the subroutine to exit, or causing a loop around some subset of the subroutine body. Knowing how to release resources in such a scenario (releasing all of the right ones and none of the wrong ones) is a challenge, and it becomes even harder as the routine is modified over time (as is any code of any complexity).
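Here is a hypothetical sketch (all names are stand-ins) of the kind of routine described above: three resources with overlapping lifetimes, where each early exit must release exactly the right subset.

```java
// All names here are hypothetical stand-ins for real resources.
class ManualCleanup {
    static class Connection {}
    static class Buffer {}
    static class Lock {}
    static Connection openConnection() { return new Connection(); }
    static void closeConnection(Connection c) {}
    static Buffer allocateBuffer() { return new Buffer(); }
    static void freeBuffer(Buffer b) {}
    static Lock takeLock() { return new Lock(); }
    static void releaseLock(Lock l) {}
    static boolean fill(Buffer b, Connection c) { return true; }
    static void process(Buffer b) {}
    static boolean commit(Connection c) { return true; }

    // Error-prone manual style: each early exit must release exactly
    // the resources that are live at that point, and no others.
    static void manualCleanup() {
        Connection conn = openConnection();   // lifetime: whole method
        Buffer buf = allocateBuffer();        // lifetime: first phase only
        if (!fill(buf, conn)) {
            freeBuffer(buf);                  // must free buf AND conn here
            closeConnection(conn);
            return;
        }
        process(buf);
        freeBuffer(buf);                      // buf's lifetime ends here
        Lock lock = takeLock();               // lock's lifetime starts here
        if (!commit(conn)) {
            releaseLock(lock);                // must release lock AND conn,
            closeConnection(conn);            // but NOT buf (already freed)
            return;
        }
        releaseLock(lock);                    // normal path repeats the cleanup
        closeConnection(conn);
    }
}
```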
The basic concept of the try/finally mechanism (ignoring the catch aspect for a moment) handles the above problem fairly well (though, of course, far from completely). For each new resource or group of resources that needs to be managed, the programmer introduces a try/finally block, putting the release logic in the finally clause. Beyond the practical benefit of assuring that resources are released, this approach has the advantage of clearly delimiting the "scope" of the resources involved, providing documentation that is, in effect, enforced by the language.
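Restating the hypothetical routine above with try/finally (a sketch reusing the stand-in helpers from the previous snippet):

```java
// Added to the same hypothetical class as the previous sketch, so it
// can reuse the stand-in helpers. Each resource's release logic
// appears exactly once, and the block nesting documents its scope.
static void structuredCleanup() {
    Connection conn = openConnection();
    try {                                     // conn's scope
        Buffer buf = allocateBuffer();
        boolean ok;
        try {                                 // buf's scope
            ok = fill(buf, conn);
            if (ok) process(buf);
        } finally {
            freeBuffer(buf);                  // runs on every exit path
        }
        if (!ok) return;                      // conn's finally still runs

        Lock lock = takeLock();
        try {                                 // lock's scope
            if (!commit(conn)) return;        // lock AND conn still released
        } finally {
            releaseLock(lock);
        }
    } finally {
        closeConnection(conn);                // runs no matter how we leave
    }
}
```

Note that these same finally clauses also run if fill, process, or commit throws, which is exactly the point made next: the normal path and the exception path share one copy of the release logic.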
The fact that this mechanism is coupled with the catch mechanism is rather fortuitous, since the same mechanism that manages resources in the normal case also manages them when an "exception" occurs. Since "exceptions" are (supposedly) rare, it is always wise to minimize the amount of logic on that rare path, both because it will never be as well tested as the mainline and because "understanding" error cases is especially difficult for the average programmer.
Of course, try/finally has its problems. Foremost among them is that the blocks can nest so deeply that the program structure is obscured rather than clarified. But this is a problem it shares with do and if, and it awaits inspiration from some language designer. The bigger problem is that try/finally carries the catch baggage (and, even worse, the exception baggage), which means it is inevitably relegated to second-class citizenship. (For example, finally does not even exist as a concept in Java bytecode, outside the deprecated JSR/RET mechanism.)
There are other approaches. The IBM iSeries (or "System i" or "IBM i" or whatever they call it now) has the concept of attaching a cleanup handler to a given invocation level in the call stack, to be executed when the associated program call returns (or exits abnormally). While this, in its present form, is clumsy and not well suited to the fine level of control needed in, say, a Java program, it does point to a potential direction.
And, of course, in the C++ family of languages (but not in Java), it is possible to instantiate a class representing a resource as an automatic variable and have the object's destructor perform the "cleanup" when the variable goes out of scope. (Note that, under the covers, this scheme essentially uses try/finally.) This is an excellent approach in many ways, but it requires either a suite of generic cleanup classes or the definition of a new class for each different type of resource, creating a potential "cloud" of textually bulky but relatively meaningless class definitions. (And, as I said, it's not an option for Java in its current form.)
But I digress.