You can use Ollama: pull a model and run it locally. Pass your context to the model as part of the prompt and you should get a reasonable answer.
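
For example, assuming Ollama is installed and running, and you've pulled a model (e.g. `ollama pull llama3`), here's a minimal sketch that sends context plus a question to Ollama's local REST API (it listens on `localhost:11434` by default). The context and question strings are just hypothetical placeholders:

```python
import json
import urllib.request

# Hypothetical context and question -- replace with your own.
context = "Our store is open Monday to Friday, 9am to 5pm."
question = "When does the store close on Wednesdays?"

# Ollama's /api/generate endpoint takes a model name and a prompt.
payload = {
    "model": "llama3",  # any model you've already pulled locally
    "prompt": (
        "Use the following context to answer the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    ),
    "stream": False,  # get one complete response instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# The non-streaming response is a single JSON object with a "response" field.
with urllib.request.urlopen(req) as resp:
    answer = json.loads(resp.read())["response"]

print(answer)
```

For quick interactive testing you can skip the API entirely and just do `ollama run llama3`, then paste the context and question straight into the chat prompt.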