Those who know C++ may know what I mean by a “unity build”:
- all of the project's *.cpp files are effectively #include-ed into a single source file, following the #include directives specified in the *.cpp and *.h files
- this source file is fed to the compiler
- done! You get a binary output!
Doing things this way means fewer intermediate files (*.o), less file-read overhead and disk IO, and fewer compiler invocations, which leads to better build performance.
My question is: is this possible for LaTeX? I want this because there is a slow post-processing pass that I would like to run on my .tex files before producing the final .pdf with pdflatex. Currently it takes about 7 seconds to process my growing list of .tex files, and I believe running this pass on a single file would be much faster. This motivates my question!
To summarize, I want to:
- merge all .tex files into one supermassive .tex source file, following the \input{} and \include{} macros in each .tex file (see the sketch after this list)
- pass the supermassive .tex source file through the slow post-processor (the Ott tex filter, FYI)
- run pdflatex
- done! PDF!
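For concreteness, here is a minimal sketch of what I imagine the merge step could look like, assuming each \input{}/\include{} directive sits on its own line and refers to a .tex file relative to its parent (the names flatten.py and supermassive.tex are just illustrative, and this ignores comments, \includeonly, and the page break that \include implies):

```python
import re
import sys
from pathlib import Path

# Matches simple \input{...} or \include{...} directives.
INPUT_RE = re.compile(r'\\(?:input|include)\{([^}]+)\}')

def flatten(tex_path: Path, out) -> None:
    """Recursively inline referenced .tex files into a single output stream."""
    for line in tex_path.read_text().splitlines(keepends=True):
        m = INPUT_RE.search(line)
        if m:
            child = Path(m.group(1))
            if child.suffix != '.tex':
                child = child.with_suffix('.tex')
            # Recurse into the referenced file, resolved relative to the parent.
            flatten(tex_path.parent / child, out)
        else:
            out.write(line)

if __name__ == '__main__':
    # Hypothetical usage: python flatten.py main.tex > supermassive.tex
    flatten(Path(sys.argv[1]), sys.stdout)
```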
Any ideas how to do this, e.g. with an existing tool, script, or package? Thanks!