Moving off the LL(1) approach to an LR parser is a defensible path, but one correction up front: an LR parser is not a plain finite state automaton. It is a deterministic pushdown automaton: a DFA (the LR automaton over grammar symbols) driven with an explicit stack of states. An FSA alone cannot recognize nested constructs such as balanced parentheses, let alone track contextual dependencies, so the stack is not optional. You can skip building an explicit AST if you use syntax-directed translation, where actions fire as the parser reduces, but the parser remains a recognizer; it is not "the program's execution plan."
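To make the "DFA plus stack" point concrete, here is a minimal shift-reduce parser sketch for the toy grammar `E -> E + n | n`. The state numbers and tables are hand-derived for this grammar only and are illustrative, not taken from any particular generator; a real project would use a generator such as yacc/bison to build them.

```python
# Minimal SLR(1)-style shift-reduce parser for the toy grammar
#   E -> E + n | n
# The ACTION/GOTO tables are the "finite automaton" part; the
# explicit `states` stack is what makes it a pushdown automaton.
SHIFT, REDUCE, ACCEPT = "shift", "reduce", "accept"

# ACTION[state][lookahead] -> (operation, argument)
ACTION = {
    0: {"n": (SHIFT, 1)},
    1: {"+": (REDUCE, 0), "$": (REDUCE, 0)},   # reduce E -> n
    2: {"+": (SHIFT, 3), "$": (ACCEPT, None)},
    3: {"n": (SHIFT, 4)},
    4: {"+": (REDUCE, 1), "$": (REDUCE, 1)},   # reduce E -> E + n
}
GOTO = {0: {"E": 2}}

# Each production: (lhs, rhs length, semantic action on popped values)
PRODUCTIONS = [
    ("E", 1, lambda vals: vals[0]),            # E -> n
    ("E", 3, lambda vals: vals[0] + vals[2]),  # E -> E + n
]

def parse(tokens):
    """tokens: list of (kind, value) pairs; returns the evaluated result."""
    tokens = tokens + [("$", None)]            # end-of-input marker
    states, values, i = [0], [], 0
    while True:
        kind, value = tokens[i]
        op, arg = ACTION[states[-1]][kind]
        if op == SHIFT:
            states.append(arg)
            values.append(value)
            i += 1
        elif op == REDUCE:
            lhs, n, action = PRODUCTIONS[arg]
            args = values[-n:]
            del values[-n:], states[-n:]       # pop the handle
            values.append(action(args))
            states.append(GOTO[states[-1]][lhs])
        else:                                  # ACCEPT
            return values[-1]
```

Running `parse([("n", 1), ("+", "+"), ("n", 2)])` evaluates `1 + 2` via the reduce actions; note that without the `states` stack the parser could not know where a handle begins.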
For semantic actions, attach the rules to the parser's reduce steps rather than to individual states. When the parser reduces by a production, run an action that checks the relevant rule, e.g. type compatibility of the operands, and report a diagnostic (or synthesize an error type so parsing can continue). A dedicated state per type-checking step does not work: the number of parser states is fixed when the tables are built, while the checks depend on the program being compiled.
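A hedged sketch of what such a reduce-time action might look like for a production like `expr -> expr '+' expr`; the `Node` class and the type names here are illustrative assumptions, not part of any specific framework.

```python
# Illustrative semantic action run when the parser reduces
# expr '+' expr: check operand types at reduce time.
class Node:
    """Tiny assumed AST/attribute node: operator, type, children."""
    def __init__(self, op, type_, children=()):
        self.op, self.type, self.children = op, type_, children

def reduce_add(left, right):
    """Semantic action for expr -> expr '+' expr (hypothetical name)."""
    if left.type != right.type:
        # In a real compiler you would record a diagnostic and
        # synthesize an error type instead of raising.
        raise TypeError(f"type mismatch: {left.type} + {right.type}")
    return Node("+", left.type, (left, right))
```

The key design point is that the check fires exactly once per reduction, so the grammar, not a hand-built state machine, decides when each rule applies.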
For three-address code generation you do need a symbol table. Variable names, types, and addresses cannot be encoded in the automaton's states: the state set is finite and fixed at parser-construction time, while a program can declare arbitrarily many identifiers. Keep the symbol table as separate contextual information and have the reduce actions emit three-address instructions, allocating temporaries for intermediate results. The parser is one phase of the compiler, not the compiler and runtime engine in one.
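A minimal sketch of that separation, assuming hypothetical names (`SymbolTable`, `new_temp`, `emit`) chosen for illustration:

```python
# Sketch: a symbol table plus three-address-code emission kept
# outside the parser, driven by its reduce actions.
import itertools

class SymbolTable:
    def __init__(self):
        self.entries = {}      # name -> (type, address)
        self.next_addr = 0
    def declare(self, name, type_, size=4):
        addr = self.next_addr
        self.entries[name] = (type_, addr)
        self.next_addr += size
        return addr
    def lookup(self, name):
        return self.entries[name]

_temp_ids = itertools.count()
def new_temp():
    """Fresh temporary for an intermediate result: t0, t1, ..."""
    return f"t{next(_temp_ids)}"

code = []                      # the emitted three-address program
def emit(dst, lhs, op, rhs):
    code.append(f"{dst} = {lhs} {op} {rhs}")

# For a source statement like  c = a + b * 2  the reduce actions
# for '*' and then '+' would do, in order:
#   t = new_temp(); emit(t, "b", "*", "2"); emit("c", "a", "+", t)
```

Because the table and the code buffer live outside the automaton, the same fixed parser handles programs with any number of variables.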
This approach is attractive because LR parsing is deterministic for LR grammars: no backtracking, with conflicts resolved at table-construction time. It is also strictly more expressive than LL(1), handling left recursion and many grammars LL(1) cannot. That said, LR grammars are not universally considered the most intuitive to work with; hand-written recursive-descent (LL-style) parsers are often easier to read and debug, which is precisely why generators such as yacc/bison exist to construct the LR tables for you.