79739476

Date: 2025-08-19 05:24:13
Score: 3
Natty:
Report link

Since Azure OpenAI resources come with a REST API out of the box, you can create a simple Logic App that takes the Common Alert Schema payload and breaks out the information you consider important for troubleshooting.

You can then send that extracted data (in my testing I sent the entire Common Alert Schema JSON) to the OpenAI model you have prepared for alert analysis, so it can help you devise a plan for resolving the issue.

In my case I created a lab with an alert that triggers when a CPU metric goes above 0.1 percent.
This alert fires an action group that sends the Common Alert Schema payload to a Logic App.
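For reference, the payload the action group delivers follows the Common Alert Schema. The sketch below (written as a Python dict) shows roughly the shape of a metric alert payload; the field names follow the documented schema, but the values are made up rather than copied from my lab.

    # Abridged sketch of a Common Alert Schema payload for a metric alert
    # (field names follow the documented schema; the values are illustrative).
    alert_payload = {
        "schemaId": "azureMonitorCommonAlertSchema",
        "data": {
            "essentials": {
                "alertRule": "cpu-above-0-1-percent",
                "severity": "Sev3",
                "signalType": "Metric",
                "monitorCondition": "Fired",
                "alertTargetIDs": ["/subscriptions/<sub-id>/.../virtualMachines/lab-vm"],
                "firedDateTime": "2025-08-19T05:00:00Z",
            },
            "alertContext": {
                "condition": {
                    "allOf": [
                        {
                            "metricName": "Percentage CPU",
                            "operator": "GreaterThan",
                            "threshold": "0.1",
                            "timeAggregation": "Average",
                            "metricValue": 2.3,
                        }
                    ]
                }
            },
        },
    }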

I parse the Alert Schema and send it to the OpenAI Model that I have created for this purpose:
[Image: HTTP body used to prompt my OpenAI alert-processing model]
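Since the screenshot is not reproduced here, this is a rough illustration of what the equivalent request could look like if you made it from Python instead of a Logic App HTTP action. The resource name, deployment name, API version, and prompt wording are placeholders, not my actual values.

    import json
    import os

    import requests

    def ask_model(alert_payload: dict) -> dict:
        """Send the Common Alert Schema payload to an Azure OpenAI chat deployment."""
        # Placeholder resource, deployment, and API version -- substitute your own.
        endpoint = "https://my-openai-resource.openai.azure.com"
        deployment = "alert-analyst"
        api_version = "2024-06-01"
        url = (f"{endpoint}/openai/deployments/{deployment}"
               f"/chat/completions?api-version={api_version}")

        headers = {
            "api-key": os.environ["AZURE_OPENAI_API_KEY"],
            "Content-Type": "application/json",
        }
        body = {
            "messages": [
                {"role": "system",
                 "content": "You analyse Azure Monitor alerts and suggest concrete "
                            "troubleshooting and resolution steps."},
                # The alert JSON goes in as the user message.
                {"role": "user", "content": json.dumps(alert_payload)},
            ],
        }

        response = requests.post(url, headers=headers, json=body, timeout=60)
        response.raise_for_status()
        return response.json()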

Once I get the response back, I parse the JSON, extract just the response message (body.choices[0].message.content), and email it to myself for testing purposes.
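Outside of the Logic App, that parse-and-email step would look roughly like the sketch below; the SMTP host and addresses are placeholders, and this is only meant to show where the content sits in the chat-completions response, not how the Logic App itself sends the mail.

    import smtplib
    from email.message import EmailMessage

    def email_recommendation(result: dict) -> None:
        # The chat-completions response nests the text under choices[0].message.content.
        recommendation = result["choices"][0]["message"]["content"]

        msg = EmailMessage()
        msg["Subject"] = "Alert analysis from OpenAI"
        msg["From"] = "alerts@example.com"   # placeholder sender
        msg["To"] = "me@example.com"         # placeholder recipient
        msg.set_content(recommendation)

        # Placeholder SMTP relay, for illustration only.
        with smtplib.SMTP("smtp.example.com") as smtp:
            smtp.send_message(msg)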

At this point it's just a blob of text because I have not done any formatting for the test.

However, I like that it was able to determine with ease that my alert is simply too sensitive:

[Image: bullet-point recommendation from my AI model for resolving the alert that triggers at 0.1% CPU utilization]

Please let me know if you have questions about my method; I am happy to dive further into it.

Reasons:
  • RegEx Blacklisted phrase (2.5): Please let me know
  • Long answer (-1):
  • No code block (0.5):
  • Low reputation (1):
Posted by: Marcus