It's bad practice. It has nothing to do with the mode the file was opened in (read, write, append, binary, text ...).
In CPython, "it almost always works" because file objects are automatically closed when they become garbage (unreachable), and CPython's reference counting usually collects (non-cyclic) garbage very soon after it becomes garbage.
But relying on that is nevertheless bad practice.
If, for example, dosomething(f) takes an hour before it returns, the file will most likely remain open that whole time.
Note: in some cases, dosomething(f) may be coded to explicitly close the passed object itself. In those cases it's not bad practice ;-)
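A minimal illustration of that behavior, using a toy class with a finalizer instead of a real file (the names here are made up for the demo):

class Noisy(object):
    def __del__(self):
        print("collected")    # in CPython this runs the instant the last reference goes away

n = Noisy()
n = None                      # refcount hits zero, so "collected" prints right here in CPython
print("rebound n")            # other implementations may run the finalizer much later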
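The usual way to avoid the problem is a with statement, which closes the file deterministically when the block exits, on every Python implementation. A minimal sketch, where dosomething and the file name are just placeholders from the example above:

def dosomething(f):           # stand-in for whatever the real code does with the file
    return f.read()

with open("input.txt") as f:  # assumed file name, for illustration only
    result = dosomething(f)
# f is closed here, even if dosomething ran for an hour or raised an exception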
Later: a related thing I've often seen is this:
data = open(file_path).read()
In CPython, the anonymous file object is garbage-collected (and so also closed) right after the statement completes, thanks to CPython's reference counting. Then people are surprised when they move their code to another Python implementation and get "too many open files!" complaints from the OS. Heh - serves 'em right ;-)
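The portable spelling is barely longer; a sketch using the same file_path as above:

with open(file_path) as f:    # closes deterministically on every implementation
    data = f.read()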
Example:
open("Output.txt", "w").write("Welcome") print open("Output.txt").read()
Will print
Welcome
in CPython, because the anonymous file object from the first statement is garbage-collected (and closed) right after the first statement completes.
But:
output = open("Output.txt", "w") output.write("Welcome") print open("Output.txt").read()
probably prints an empty string in CPython. In this case, the file object is tied to the name ( output ), so the collector fails when the second statement completes (a more reasonable implementation may theoretically detect that output never used again, and collect garbage immediately, but CPython does not).
"Welcome" is probably still in the file memory buffer, so it has not yet been written to disk. Therefore, the third statement probably finds an empty file and does not print anything.
In other Python implementations, both examples can print blank lines well.
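For completeness, a sketch of how the second example becomes reliable: closing (or flushing) the file forces the buffered "Welcome" out to disk before the read. Python 2 print syntax is kept only to match the snippets above.

output = open("Output.txt", "w")
output.write("Welcome")
output.close()                    # flushes the buffer and releases the OS file handle
print open("Output.txt").read()   # now prints Welcome, since the data reached the disk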