I am working on a large program that is made of many files that get run as separate processes. There is a big set of modules that every file needs to import, and I hate having this list duplicated in every file, especially when I need to make changes. In C I would have one #include file that would #include all the other #include files; then each C file would #include that one file.
What's a good way (or the best way) to do this in large Python projects?
Maybe a sub-module (a package) that pulls in all the other files? In an `__init__.py` file you can do all the importing, and optionally define an `__all__`, so wherever you want those modules you can do `from modules import *` and have a very well-defined list of exactly what is being imported.
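For instance, a minimal sketch of that idea (the package name `modules` and the particular stdlib modules here are just placeholders):

```python
# modules/__init__.py -- the one place that lists the shared imports
import os
import sys
import json
import logging

# The explicit list of names that "from modules import *" will copy
# into the importing file's namespace.
__all__ = ["os", "sys", "json", "logging"]
```

Each per-process file then needs only the single line `from modules import *` instead of repeating the whole list.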
So if I use the filename `__init__.py`, I don't need to actually import it? I figured I would need to do `from mycommonmodulename import *`. Would that be enough? Can I do `from whatever import *` there, too (not the same names, of course)?
I was planning on using the name "common", a choice I have used since my Fortran days (on mainframes, before I switched to Assembler for everything, which was before I did C, which was before I started doing Python).
If you have a folder named "common", and that folder has a file in it named `__init__.py`, then you can do `import common` or `from common import *`, and the `__init__.py` will be run. Inside that file, you can then do all the common importing as a convenience.
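As a concrete sketch, with made-up file names, the shared import list might look like this:

```python
# common/__init__.py -- the single shared import list
import os
import sys
import logging

# Names handed out by "from common import *".
__all__ = ["os", "sys", "logging"]
```

and then any of the per-process scripts (run from the folder that contains `common/`) needs only:

```python
# worker_a.py -- hypothetical name for one of the separate-process files
from common import *   # executes common/__init__.py and binds os, sys, logging

logging.basicConfig(level=logging.INFO)
logging.info("worker pid %d", os.getpid())
```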
Defining `__all__` at the top level just determines what, exactly, is imported if you do `from common import *`, so that `import *` doesn't HAVE to dump every name into the caller's namespace, only the things you actually want exposed outside the module.
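As a small sketch of that filtering (the `json` import is just there to show a name deliberately left off the list):

```python
# common/__init__.py
import os
import sys
import json              # imported when the package loads, but NOT listed
                         # in __all__ below

__all__ = ["os", "sys"]  # the only names "from common import *" will bind
```

and in a file that uses it:

```python
from common import *

print(os.getpid())       # works: "os" is in __all__
# "json" is not defined in this namespace, even though common imported it;
# it is still reachable as common.json after a plain "import common".
```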