Yes, LLMs generate answers token by token. At each step, the model takes the user input plus the part of the answer it has already generated as context and predicts the next token. That's why it can't produce the whole answer at once: each new token depends on the preceding part of the answer being available as context.
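Here's a minimal sketch of that loop using greedy decoding, assuming the Hugging Face transformers library and the small "gpt2" checkpoint (the prompt and token count are just placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):  # generate 10 tokens, one at a time
        logits = model(input_ids).logits           # scores for every position
        next_id = logits[:, -1, :].argmax(dim=-1)  # most likely next token
        # Append the new token: it becomes part of the context
        # for the next prediction step.
        input_ids = torch.cat([input_ids, next_id.unsqueeze(-1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Notice that the model is called once per generated token, and each call sees everything produced so far. That's the whole reason generation is sequential.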