Well, you cannot understand it as
"Function A1 makes f for object B, then function A2 makes g in D, etc."
It's more like
"Function A performs one of the actions {a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, or no-op}, and shifts or reduces some number of objects {1-1567} on a stack of type {B, C, D, E, F, or G}, together with the objects containing it, up to N levels deep, which can be of types {H, I, J, K, L, etc.}, in certain combinations, according to a list of rules."
In practice, you need a data table (or code generated from a data table, for example from a BNF grammar) that tells the function what to do.
You can write it from scratch. You can also paint walls with an eyelash brush. You can interpret the data table at runtime. You can also put Sleep(1000); on every line of your code. Not that I've tried either.
Compilers are complicated. Hence, compiler generators.
EDIT
You are trying to define tokens in terms of the content of the file itself.
I assume the reason you do not want to use regular expressions is that you want line-number information for the individual tokens inside a block of text, not just for the block as a whole. If per-word line numbers are not needed, and whole blocks will fit in memory, I would be inclined to lex the entire bracketed block as a single token, since that can speed up processing. Either way, you will need a custom yylex function. Start by building the lexer with fixed "[" and "]" tokens to start and end the content, get that working, then modify it so that the set of tokens to look for is updated from the yacc code.