Given that I have the following directory structure, where . is the current working directory:

.
\---foo
    \---bar
        \---__init__.py
        \---baz.py
When I run python -c "import foo.bar.baz", I get:
Traceback (most recent call last):
  File "<string>", line 1
ImportError: No module named foo.bar.baz
If I echo "" > foo/__init__.py, the command above works.
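For reference, here is a minimal sketch that reproduces this, assuming an empty scratch directory and that python on the PATH is Python 2 (namespace packages in Python 3.3+ change the behaviour):

import os
import subprocess

# Recreate the layout from the question: foo/bar/__init__.py and
# foo/bar/baz.py exist, but foo/__init__.py does not.
os.makedirs("foo/bar")
open("foo/bar/__init__.py", "w").close()
open("foo/bar/baz.py", "w").close()

# Fails on Python 2: foo is not a package without foo/__init__.py.
subprocess.call(["python", "-c", "import foo.bar.baz"])

# Add the missing __init__.py and the same command succeeds.
open("foo/__init__.py", "w").close()
subprocess.call(["python", "-c", "import foo.bar.baz"])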
Am I doing something wrong, or am I misunderstanding the point of __init__.py? I thought it was meant to stop modules existing where they shouldn't, e.g. a directory called string, but if you replace foo with string in my example, I seem to be forced to create a package that should never actually be used, just so I can reference a file deeper in the hierarchy.
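To illustrate what I mean by the shadowing concern, here is a sketch of my understanding, run from a throwaway directory: a bare directory named string does not hide the standard-library module, but opting in with __init__.py does.

import os

os.mkdir("string")        # a plain directory named like a stdlib module

import string             # still resolves to the standard library
print(string.__file__)    # points into the stdlib, not ./string/

# Opting in with __init__.py changes that: in a *fresh* interpreter started
# from this directory, "import string" would now find ./string/ first,
# because the current directory precedes the stdlib on sys.path.
open("string/__init__.py", "w").close()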
Update
I work with a build system that generates the __init__.py files for me and dictates the directory structure, and while I could mess around with the hierarchy, I would rather just add __init__.py myself. To rephrase the question slightly: why do I need a Python package at every level, and not just at the top? Is the rule simply that you can only import modules from the Python path, or from a chain of packages reachable from the Python path?
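My current understanding of that rule, sketched out (assuming the foo/bar layout above with both __init__.py files in place, and that "." ends up on sys.path, as it does for python -c run from the project root):

# How the dotted import appears to be resolved, step by step:
#
#   import foo.bar.baz
#     1. look up "foo" on sys.path          -> needs foo/__init__.py
#     2. look up "bar" inside package foo   -> needs foo/bar/__init__.py
#     3. look up "baz" inside foo.bar       -> foo/bar/baz.py (plain module)
#
# Only the first name is searched on sys.path; every intermediate name must
# be a package, which (before Python 3.3 namespace packages) means it must
# contain __init__.py. That would explain needing one at every level.
import sys
print(sys.path[0])            # where the chain starts ('' means the cwd)
import foo.bar.baz
print(foo.bar.baz.__file__)   # confirms which file was actually imported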