import openai  # assumes the pre-1.0 SDK, which exposes openai.ChatCompletion

# Step 1: classify the user's intent
response_1 = openai.ChatCompletion.create(
    model="gpt-4-turbo",
    messages=[
        {"role": "system", "content": "You are an intent classifier..."},
        {"role": "user", "content": user_input},
    ],
)
intent = response_1.choices[0].message.content  # e.g. a label like "refund_request"

# Step 2: handle the request, passing only the classified intent, not the full history
response_2 = openai.ChatCompletion.create(
    model="gpt-4-turbo",
    messages=[
        {"role": "system", "content": f"Intent: {intent}"},
        {"role": "user", "content": "Proceed to handle the request, using tools if needed."},
    ],
    tools=[...],
    tool_choice="auto",
)
I’d love to hear how others are handling this, especially if you’ve built similar multi-step chains using OpenAI's API. How are you managing context, avoiding prompt bloat, and keeping things fast and clean?
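On the prompt-bloat side, one thing that's worked for me is trimming older turns to a rough token budget before every call, always keeping the system message. A minimal sketch; the 4-characters-per-token heuristic is just an assumption, and you'd want a real tokenizer (e.g. tiktoken) in production:

```python
def trim_history(messages, max_tokens=2000, chars_per_token=4):
    """Keep system messages plus the most recent turns that fit the budget.

    Token counts are approximated as len(content) // chars_per_token;
    swap in a real tokenizer for anything serious.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    budget = max_tokens - sum(len(m["content"]) // chars_per_token for m in system)
    kept = []
    for m in reversed(rest):  # walk newest-to-oldest, stop when the budget runs out
        cost = len(m["content"]) // chars_per_token
        if budget - cost < 0:
            break
        budget -= cost
        kept.append(m)
    return system + list(reversed(kept))  # restore chronological order
```

The nice property is that the call site doesn't change: you just wrap the messages list, e.g. `messages=trim_history(history)`, and recency wins over completeness.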
Thanks!