Super-simple, absolutely boring setup: I have a directory full of .hpp and .cpp files. Some of those .cpp files are meant to be built into executables; naturally, each such .cpp file #includes some of the .hpp files in the same directory, which may in turn include others, and so on. Most of these .hpp files have corresponding .cpp files; that is, if some_application.cpp #includes foo.hpp, either directly or transitively, then chances are foo.cpp needs to be compiled and linked into the some_application executable.
Super-simple, yet I still don’t know the “best” way to set this up in either SCons or CMake (I have no experience with either, beyond staring at their documentation for the last day or so and getting sad). I’m afraid the solution I want may actually be impossible (or at least absurdly over-complicated) to pull off in most build systems, but if so, it would be nice to know that for certain, so I can just give up and be less picky. Naturally, I hope I’m mistaken, which would not be surprising given how little I know about build systems in general, and about CMake and SCons in particular.
CMake and SCons can, of course, automatically detect that some_application.cpp needs to be recompiled whenever any of the header files it depends on (directly or transitively) changes, because they can parse C++ files well enough to pick out those dependencies. Good, great: we don’t need to list every .cpp-#includes-.hpp dependency manually. But we still need to decide which subset of the object files to hand to the linker when the time comes to build each executable.
As I understand it, the two simplest alternatives for solving that part of the problem are:
- (a) Explicitly and painstakingly listing, by hand, “anything that uses this object file also needs to use these other object files”, even though those dependencies exactly mirror the corresponding-.cpp-transitively-includes-corresponding-.hpp dependencies that the build system already went to the trouble of figuring out for us. Why? Because computers.
- (b) Dumping all the object files in the directory into a single “library”, then having every executable depend on and link against that library. This is much simpler, and I gather it’s what most people do, but it’s also sloppy. Most executables don’t actually need everything in that library, and in principle they wouldn’t need to be rebuilt when only the contents of one or two unrelated .cpp files have changed. Isn’t avoiding exactly this kind of unnecessary work the whole point of having a “build system” in the first place? (Granted, they may not need to be rebuilt if the library is dynamically linked, but suffice it to say that I dislike dynamically linked libraries for other reasons.)
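For concreteness, here is roughly what the two options look like as SCons build scripts (an SConstruct file is plain Python; all the file and target names below are hypothetical, and option (b) is commented out since you would pick one or the other):

```python
# SConstruct -- sketches of both options; file names are hypothetical.
env = Environment()

# Option (a): spell out each executable's full object-file list by hand,
# duplicating information the include scanner already worked out.
env.Program('some_application', ['some_application.cpp', 'foo.cpp', 'bar.cpp'])
env.Program('other_application', ['other_application.cpp', 'foo.cpp'])

# Option (b): one static library holding every non-application .cpp;
# every executable links the whole archive whether it needs all of it or not.
# common = env.StaticLibrary('common', ['foo.cpp', 'bar.cpp', 'baz.cpp'])
# env.Program('some_application', ['some_application.cpp'],
#             LIBS=['common'], LIBPATH=['.'])
```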
Can CMake or SCons do any better than this? I can see a few limited ways of hooking into the automatically generated dependency graph, but no general-purpose way of doing it reflectively (“OK, build system, what do you think the dependencies are? Good; based on that, add the following dependencies and think again: ...”). That doesn’t surprise me too much. But I also haven’t found a special-purpose mechanism in either build system for the super-common case where link-time dependencies should mirror the corresponding #include dependencies. Have I missed something in my (admittedly somewhat cursory) reading of the documentation, or does everyone just go with option (b) and quietly hate themselves and/or their build systems?
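To be clear about the rule I have in mind, it’s simple enough to sketch in a few lines of plain Python. This is illustrative logic, not any SCons or CMake API; it assumes same-directory headers included with quoted `#include "..."` lines, and the `link_sources` name is my own invention:

```python
# Sketch of the desired rule: starting from an application's .cpp file,
# follow quoted #include lines transitively; for every included foo.hpp
# that has a sibling foo.cpp, add that foo.cpp to the link set.
import re
from pathlib import Path

INCLUDE_RE = re.compile(r'^\s*#\s*include\s*"([^"]+)"', re.MULTILINE)

def link_sources(app_cpp: Path) -> set:
    """Return the set of .cpp files that app_cpp's executable should link."""
    srcdir = app_cpp.parent
    sources = {app_cpp}
    seen_headers = set()
    todo = [app_cpp]  # worklist of files whose includes still need scanning
    while todo:
        current = todo.pop()
        for name in INCLUDE_RE.findall(current.read_text()):
            header = srcdir / name
            if header in seen_headers or not header.exists():
                continue
            seen_headers.add(header)
            todo.append(header)           # headers can include more headers
            sibling = header.with_suffix(".cpp")
            if sibling.exists():
                sources.add(sibling)      # foo.hpp implies linking foo.cpp
                todo.append(sibling)      # foo.cpp pulls in its own includes
    return sources
```

That is, the link set is exactly the transitive closure the build system already computes for recompilation purposes, reused at link time.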