Query the knowledge graph (read-only) to check whether relevant data for the given input already exists.
If data exists → use it directly for evaluation or downstream tasks.
If data does not exist → use an external LLM to generate the output.
Optionally evaluate the LLM output before storing it.
Insert the new output into the knowledge graph so it can be reused in the future.
This is a standard pattern for integrating LLMs with knowledge graphs: the graph acts as a persistent store of previously generated or validated knowledge, and the LLM is invoked only when needed.
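The steps above can be sketched as a single lookup-or-generate function. This is a minimal illustration, not a specific product's API: `KnowledgeGraph`, `call_llm`, and `evaluate` are hypothetical placeholders standing in for a real graph database, an LLM client, and whatever evaluation criterion fits the task.

```python
# Minimal sketch of the cache-first pattern described above.
# KnowledgeGraph, call_llm, and evaluate are hypothetical stand-ins.

class KnowledgeGraph:
    """Toy in-memory store standing in for a real knowledge graph."""

    def __init__(self):
        self._store = {}

    def query(self, key):
        # Step 1: read-only lookup; returns None on a miss.
        return self._store.get(key)

    def insert(self, key, value):
        # Step 5: persist the new output so it can be reused later.
        self._store[key] = value


def call_llm(prompt):
    # Step 3: placeholder for an external LLM call.
    return f"generated answer for: {prompt}"


def evaluate(output):
    # Step 4 (optional): here we only check the output is non-empty.
    return bool(output)


def answer(graph, prompt):
    cached = graph.query(prompt)       # Step 1: read-only query
    if cached is not None:
        return cached                  # Step 2: reuse existing data
    output = call_llm(prompt)          # Step 3: generate with the LLM
    if evaluate(output):               # Step 4: optional evaluation
        graph.insert(prompt, output)   # Step 5: store for future reuse
    return output


kg = KnowledgeGraph()
first = answer(kg, "capital of France")   # miss: generated and stored
second = answer(kg, "capital of France")  # hit: served from the graph
```

In a real system, `query` and `insert` would typically be graph-pattern operations (e.g. SPARQL or Cypher) keyed on entities rather than raw prompt strings, and `evaluate` would gate what is allowed into the graph.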
The key benefits of this approach are:
Efficiency: data that is already stored is never regenerated.
Traceability: every generated output can be stored and tracked in the graph.
Scalability: the graph accumulates verified knowledge over time.