The main issue is likely the context window limitation of OpenAI's models. You could try an LLM with a larger context window (as of now, Google's Gemini 1.5 has the largest, at 2 million tokens).
Additionally, you've pointed out: "RAG doesn't seem suitable here, as I need ChatGPT to generate answers with full context, not just partial knowledge."
I'm curious: what is the size of your document?
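As a quick sanity check, you can estimate whether your document even fits in a given context window. This is a rough sketch using the common rule of thumb of ~4 characters per token for English text (the document path and window size here are placeholders; for exact counts you'd use the model's actual tokenizer, e.g. tiktoken for OpenAI models):

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token for English text.
    # Exact counts require the model's own tokenizer.
    return len(text) // 4

# Hypothetical example document and context window size.
document = "This is a placeholder for your document text. " * 1000
context_window = 128_000  # e.g. GPT-4o's context window

tokens = estimate_tokens(document)
print(f"~{tokens} tokens; fits in {context_window}-token window: {tokens < context_window}")
```

If the estimate is well above the window of the model you're using, that would explain the behavior you're seeing, and a longer-context model (or chunking) becomes necessary.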