Using Throwable for things other than errors

I have always seen Throwable / Exception used in the context of errors. But I can think of situations where it would be nice to extend Throwable just to break out of the call stack of a recursive method. Say, for example, you are trying to find and return an object in a tree via a recursive search. As soon as you find it, you stuff it into some Carrier extends Throwable and throw it, then catch it in the method that calls the recursive one.

Positive: you do not need to worry about the return logic of the recursive calls; once you have found what you need, why worry about how to carry that reference back up the method stack.

Negative: you get a stack trace that you don't need. Also, the try/catch becomes counter-intuitive.

Here is a trivially simple example:

    public class ThrowableDriver {
        public static void main(String[] args) {
            ThrowableTester tt = new ThrowableTester();
            try {
                tt.rec();
            } catch (TestThrowable e) {
                System.out.print("All good\n");
            }
        }
    }

    public class TestThrowable extends Throwable {
    }

    public class ThrowableTester {
        int i = 0;

        void rec() throws TestThrowable {
            if (i == 10) throw new TestThrowable();
            i++;
            rec();
        }
    }
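
And a slightly fuller sketch of the tree-search variant described above. This is only an illustration under assumptions: the Node class, its key/left/right fields, and the search logic are hypothetical, not part of the original question.

    class Carrier extends Throwable {
        final Node result;                              // the object found during the search
        Carrier(Node result) { this.result = result; }
    }

    class Node {
        int key;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    class TreeSearcher {
        // Recursive search that "returns" its result by throwing it.
        void search(Node node, int key) throws Carrier {
            if (node == null) return;
            if (node.key == key) throw new Carrier(node);   // found: bail out of the whole recursion
            search(node.left, key);
            search(node.right, key);
        }

        // The caller turns the thrown Carrier back into an ordinary return value.
        Node find(Node root, int key) {
            try {
                search(root, key);
                return null;                            // recursion unwound normally: nothing found
            } catch (Carrier c) {
                return c.result;                        // the throw carried the result up the stack
            }
        }
    }

If the unneeded stack trace mentioned above is the main concern, overriding fillInStackTrace() to return this (or using the protected Throwable constructor with a writableStackTrace flag, available since Java 7) avoids most of that cost.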
Question: is there a better way to achieve the same? Also, is there something inherently bad about doing this?
+6
5 answers

Actually, it is a great idea to use exceptions in some cases where "normal" programmers would not think to use them. For example, in a parser that is running a "rule" and detects that the rule does not apply, an exception is a pretty good way to get back to the correct recovery point. (This is similar in spirit to your suggestion for exiting recursion.)

There is the classic objection that "exceptions are no better than goto," which is clearly false. In Java and most other reasonably modern languages, you can have nested exception handlers and finally handlers, so when control passes upward through an exception a well-designed program can do its cleanup, etc. In fact, exceptions are in this respect somewhat preferable to return codes, since with a return code you must add logic at EACH return point to check the code and then find and execute the correct finally logic (possibly several nested pieces of it) before exiting the procedure. With exception handlers this happens fairly automatically, via nested exception handlers.
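
A minimal sketch of that contrast, with hypothetical parser pieces (ParseError, openScope(), closeScope(), and the sub-rule methods are invented names, not a real API):

    class ParseError extends Exception {
        ParseError(String msg) { super(msg); }
    }

    class Parser {
        // Return-code style: every exit point must repeat the cleanup.
        boolean ruleWithCodes() {
            openScope();
            if (!subRuleA()) { closeScope(); return false; }   // cleanup repeated
            if (!subRuleB()) { closeScope(); return false; }   // at every exit point
            closeScope();
            return true;
        }

        // Exception style: the cleanup is written once; a failing sub-rule
        // unwinds through the finally block automatically.
        void ruleWithExceptions() throws ParseError {
            openScope();
            try {
                subRuleAOrThrow();
                subRuleBOrThrow();
            } finally {
                closeScope();
            }
        }

        // Stubs so the sketch compiles; a real parser would do actual work here.
        void openScope() {}
        void closeScope() {}
        boolean subRuleA() { return true; }
        boolean subRuleB() { return true; }
        void subRuleAOrThrow() throws ParseError {}
        void subRuleBOrThrow() throws ParseError {}
    }

The point is not the stubs but the shape: the return-code version has to repeat the cleanup at every exit, while the exception version writes it once in finally.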

Exceptions do carry some "baggage" (stack traces in Java, for example). But Java exceptions are actually quite efficient (at least compared with the implementations in some other languages), so performance should not be a big problem unless you use exceptions very heavily.

[I will add that I have 40 years of programming experience and have been using exceptions since the late '70s. I independently "invented" try/catch/finally (called BEGIN/ABEXIT/EXIT) circa 1980.]

A somewhat "off-topic" digression:

I think it is often overlooked in these discussions that the number one problem in computing is not cost or complexity or standards or performance, but control.

By "control" I do not mean "control flow" or "control language" or "operator control" or any of the other contexts in which the term "control" is often used. I mean "control of complexity", but more than that: "conceptual control".

We have all done it (at least those of us who have been programming for more than 6 weeks): started writing a "simple little program" with no real structure or standards (other than those we are used to applying), not worrying about its complexity because it is "simple" and "throwaway". But then, perhaps in one case out of 10, or one out of 100, depending on the context, the "simple little program" grows into a monster.

We lose "conceptual control" over it. Fixing one bug introduces two more. The control and data flow of the program become opaque. It behaves in ways we cannot understand.

And yet, by most standards, this "simple little program" is not that complex. It is not very many lines of code. Very likely (since we are experienced programmers) it has been broken into an "appropriate" number of subroutines. Run it through a complexity-measurement algorithm and, probably (since it is still relatively small and "subroutine-ized"), it will be rated as not especially complex.

Ultimately, maintaining conceptual control is the driving force behind virtually all software tools and languages. Yes, things like assemblers and compilers make us more productive, and productivity is the stated driving force, but a big part of that productivity gain is that we no longer have to deal with the "irrelevant" details and can instead focus on the concepts we want to implement.

The major gains in conceptual control came early in the history of computing, as "external subroutines" arose and became more independent of their environment, enabling a "separation of concerns" in which the author of a subroutine did not need to know much about its callers' environment and the user of a subroutine did not need to know much about the subroutine's internals.

The simple invention of BEGIN/END and "{...}" brought similar improvements, since even "inline" code could benefit from some isolation between "there" and "here."

Many of the tools and language features we take for granted exist and are useful precisely because they help maintain conceptual control over increasingly complex software structures. And you can judge the usefulness of a new tool or feature fairly accurately by how much it helps with that conceptual control.

One of the biggest remaining problem areas is resource management. By "resource" here I mean anything - an object, an open file, an allocated piece of heap, etc. - that can be "created" or "allocated" during program execution and subsequently needs some form of release. The invention of the "automatic stack" was the first step here: variables could be allocated "on the stack" and then automatically freed when the routine that "allocated" them exited. (This was a very controversial concept at one time, and many "authorities" advised against using the feature because it hurt performance.)

But in most languages (all?) this problem still exists in one form or another. Languages with an explicit heap require you to "delete" everything you "new", for example. Open files must be closed. Locks must be released. Some of these problems can be mitigated (for example, with a GC heap) or hidden (reference counting or "parenting"), but there is no way to eliminate or hide all of them. And while managing this problem in the simple case is pretty straightforward (new an object, call the routine that uses it, then delete it), real life is rarely that simple. It is not uncommon to have a method that makes a dozen or so different calls, allocating resources here and there between the calls, with different "lifetimes" for those resources. And some of the calls may return results that change the control flow, in some cases causing the subroutine to exit, or causing a loop around some subset of the subroutine body. Knowing how to release resources in such a scenario (releasing all the right ones and none of the wrong ones) is a problem, and it only gets harder as the routine changes over time (as any code of any complexity does).

The basic concept of the try/finally mechanism (ignoring the catch aspect for a moment) handles the above problem quite well (though, of course, far from completely). For each new resource or group of resources that needs to be managed, the programmer introduces a try/finally block, putting the release logic in the finally clause. Beyond the practical benefit of ensuring that resources are released, this approach has the advantage of clearly delimiting the "scope" of the resources involved, providing a kind of documentation that is actively enforced.
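
A minimal sketch of this pattern, assuming a simple file-copy routine (the file paths and buffer size are arbitrary, chosen only for illustration):

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class CopyExample {
        // Each resource gets its own try/finally; the finally clause both
        // releases the resource and documents its scope.
        static void copy(String from, String to) throws IOException {
            FileInputStream in = new FileInputStream(from);
            try {
                FileOutputStream out = new FileOutputStream(to);
                try {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                    }
                } finally {
                    out.close();   // released whether the copy succeeded or threw
                }
            } finally {
                in.close();        // the outer resource is released last
            }
        }
    }

The nesting here also illustrates the depth problem discussed below: with more than two or three resources, the blocks pile up quickly.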

The fact that this mechanism is tied to the catch mechanism is a bit of a happy accident, since the same mechanism used to manage resources in the normal case also manages them in the "exception" case. Since "exceptions" are (supposedly) rare, it is always wise to minimize the amount of logic on that rare path, since it will never be as well tested as the mainline, and since "understanding" error cases is especially hard for the average programmer.

Of course, try/finally has its problems. First among them is that blocks can nest so deeply that the program structure becomes obscured rather than clarified. But this is a problem it shares with do and if, and it awaits inspiration from some language designer. The bigger problem is that try/finally carries the catch baggage (and, worse, the exception baggage), which means it is inevitably treated as a second-class citizen. (For example, finally does not even exist as a concept in Java bytecode, outside of the deprecated JSR/RET mechanism.)

There are other approaches. IBM iSeries (or "System i" or "IBM i" or whatever they call it these days) has the concept of attaching a cleanup handler to a given invocation level in the call stack, to be executed when the associated call returns (or exits abnormally). While this, in its current form, is clumsy and not well suited to the fine-grained level of control needed in a Java program, for example, it does point in a potentially useful direction.

And, of course, in the C++ family of languages (but not in Java), you can instantiate a class representing the resource as an automatic variable and let the object's destructor do the "cleanup" when the variable goes out of scope. (Note that this scheme, under the covers, essentially uses try/finally.) This is a fine approach in many ways, but it requires either a set of generic cleanup classes or the definition of a new class for each different type of resource, creating a potential "cloud" of textually bulky but relatively meaningless class definitions. (And, as I said, it is not an option for Java in its current form.)

But I digress.

+6

Using exceptions for program control flow is not a good idea.

Reserve exceptions strictly for conditions that fall outside the normal operating criteria.

There are quite a few related questions on SO:

+3

The syntax becomes unwieldy because exceptions are not intended for general control flow. The standard practice in recursive function design is to return either a sentinel value or the found value (or nothing, which works in your example) all the way back up.
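
A minimal sketch of that convention, using a hypothetical Node tree node (key, left, right) like the one in the question's scenario:

    class TreeFind {
        // A hypothetical tree node, just enough for the illustration.
        static class Node {
            int key;
            Node left, right;
        }

        // Propagate the found node (or a null sentinel) back through the
        // ordinary return path instead of throwing it.
        static Node find(Node node, int key) {
            if (node == null) return null;              // sentinel: not found in this subtree
            if (node.key == key) return node;           // found: hand it straight back
            Node hit = find(node.left, key);
            return (hit != null) ? hit : find(node.right, key);
        }
    }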

Traditional wisdom: "exceptions are for exceptional circumstances." As you noticed, Throwable sounds more general in theory, but apart from Exception and Error it is not intended for wider use. From the docs:

The Throwable class is the superclass of all Java errors and exceptions.

Many runtimes (VMs) are not designed to optimize the throwing of exceptions, which means exceptions can be "expensive." That does not mean you could not do it, of course, and "expensive" is subjective, but as a rule it is not done, and others will be surprised to find it in your code.

+2
Question: is there a better way to achieve the same? Also, is there something inherently bad about doing this?

As for your second question: exceptions carry a significant runtime cost, no matter how efficient the compiler is. That in itself should argue against using them as control structures in the general case.

Besides, exceptions are controlled gotos, nearly equivalent to longjmps. Yes, they can be nested, and in languages like Java you get your nice "finally" blocks and all that. But that is all they are, and as such they are not meant to replace the typical control structures in the general case. More than four decades of collective industry knowledge tell us that, in general, we should avoid such things UNLESS we have a very good reason for them.

And that gets to the heart of your first question. Yes, there is a better way (taking your code as the example): just use typical control structures:

    // class and method names remain the same, though using
    // your typical logical control structures
    public class ThrowableDriver {
        public static void main(String[] args) {
            ThrowableTester tt = new ThrowableTester();
            tt.rec();
            System.out.print("All good\n");
        }
    }

    public class ThrowableTester {
        int i = 0;

        void rec() {
            if (i == 10) return;
            i++;
            rec();
        }
    }

See? Simpler. Fewer lines of code. No redundant try/catch or unnecessary exceptions. You achieve the same thing.

In the end, our job is not to play with language constructs but to create programs that are reasonable and simple enough to maintain, with just enough statements to do the job and nothing more.

So, when it comes to the example code you provided, you should ask yourself: what do I gain with this approach that I cannot get using typical control structures?

You do not need to worry about the return logic of the recursive calls;

If you are not worried about the return logic, just ignore the return value or declare your method as void. Wrapping it in try/catch only makes the code more complicated than necessary. If you do not care about the return value, I assume all you care about is that the method completes. So all you need to do is call it (as in the example code I provided in this post).

since you found what you need, why worry about how to carry that reference back up the method stack.

It is cheaper to push a return value (typically an object reference, in the JVM) onto the stack before the method completes than to do all the bookkeeping involved in throwing an exception (running the throw machinery and filling in a potentially large stack trace) and catching it (unwinding the stack). JVM or not, this is basic CS 101 material.

So, not only is it more expensive, you also end up typing more characters to code the same thing.

There is practically no recursive method that you would exit via a Throwable that cannot be rewritten using typical control structures. You should have a very, very, very good reason to use exceptions in place of ordinary control structures.

+1

Source: https://habr.com/ru/post/894005/

