Considering that OpenAI resources come out of the box with a REST API, you can create a simple logic app that takes the common alert schema and breaks out the information from the alert that you believe is important for troubleshooting.
You can then take that extracted data from the Common Alert Schema JSON (in my testing I used the entire common alert schema JSON) and send it to the OpenAI model you have prepared for alert analysis, to help you devise a plan for resolving the issue.
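If you want to prototype that extraction outside of a logic app, a minimal Python sketch of the idea could look like the following. The field names come from the common alert schema, but the selection is purely illustrative (as noted, in my testing I just sent the whole JSON):

```python
import json

# Illustrative only: pick the common alert schema fields you care about and
# turn them into a prompt for the model.
def build_prompt(alert_payload: dict) -> str:
    essentials = alert_payload["data"]["essentials"]
    summary = {
        "alertRule": essentials.get("alertRule"),
        "severity": essentials.get("severity"),
        "monitorCondition": essentials.get("monitorCondition"),
        "firedDateTime": essentials.get("firedDateTime"),
        "description": essentials.get("description"),
        # alertContext carries the signal-specific detail (metric values, query results, etc.)
        "alertContext": alert_payload["data"].get("alertContext", {}),
    }
    return (
        "Analyze this Azure Monitor alert and suggest a plan for resolving it:\n"
        + json.dumps(summary, indent=2)
    )
```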
In my case, I created a lab with an alert that triggers when a CPU metric goes above 0.1 percent.
This alert triggers an action group that sends the common alert schema to the logic app.
I parse the Alert Schema and send it to the OpenAI Model that I have created for this purpose:
(Image: the HTTP body used to prompt my OpenAI alert processing model)
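Outside of the logic app designer, the equivalent HTTP call looks roughly like this. The endpoint, deployment name, API version, key handling, and system prompt below are placeholders for illustration, not the exact values from my lab:

```python
import requests

# Placeholders -- substitute your own Azure OpenAI resource details.
ENDPOINT = "https://<your-resource>.openai.azure.com"
DEPLOYMENT = "<your-alert-analysis-deployment>"
API_VERSION = "2024-02-01"  # use an api-version your resource supports
API_KEY = "<your-api-key>"  # better pulled from Key Vault than hard-coded

def analyze_alert(prompt: str) -> dict:
    """POST the alert prompt to the chat completions endpoint and return the raw JSON."""
    url = (
        f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
        f"/chat/completions?api-version={API_VERSION}"
    )
    body = {
        "messages": [
            {"role": "system",
             "content": "You analyze Azure Monitor alerts and propose a troubleshooting plan."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,
    }
    resp = requests.post(
        url,
        headers={"api-key": API_KEY, "Content-Type": "application/json"},
        json=body,
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()
```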
Once I get the response back, I parse the JSON and extract just the message content from the response body (choices[0].message.content), then email that to myself for testing purposes.
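In code, that extraction boils down to the snippet below; in the logic app it is a Parse JSON action followed by an expression along the lines of body('Parse_JSON')?['choices']?[0]?['message']?['content'] (the action name is hypothetical):

```python
# Pull just the model's reply text out of the chat completions response.
def extract_reply(response_json: dict) -> str:
    return response_json["choices"][0]["message"]["content"]

# Example, chaining the earlier sketches together:
# reply = extract_reply(analyze_alert(build_prompt(alert_payload)))
```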
At this point it's just a blob of text because I have not done any formatting for the test.
However, I like that it was able to determine with ease that my alert is simply too sensitive:
Please let me know if you have questions regarding my method; I am happy to dive further into it.