Thanks for the reply! That article mostly covers guardrails and grounding checks, which protect and regulate the LLM's output. What about protecting your data from the cloud and model providers themselves? I seem to remember Azure having some kind of confidentiality/data-privacy agreement similar to Langdock's, but I can't find it.