When we talk about an 'embedding', be it a word embedding or a paragraph embedding, it is essentially a lookup table, akin to a hashmap, that maps some discrete input to a vector of numbers, and those numbers are tuned automatically by some downstream model, e.g. a neural network.
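To make the analogy concrete, here is a minimal sketch using PyTorch's `nn.Embedding` (the vocabulary, ids, and dimensions are made up for illustration):

```python
import torch
import torch.nn as nn

# Toy vocabulary: each word gets an integer id (the "hashmap key").
vocab = {"the": 0, "cat": 1, "sat": 2}

# The embedding layer is the "hashmap": it maps each id to a
# trainable vector of numbers (here, 4 dimensions per word).
embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=4)

# Looking up a key returns its current vector...
ids = torch.tensor([vocab["cat"]])
print(embedding(ids))  # shape: (1, 4)

# ...and because the table entries are trainable parameters, a
# downstream model's loss can backpropagate into them, tuning the
# "values" of the hashmap automatically.
```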
In your case, if you used paragraph embeddings this way, your hashmap keys would be the paragraph texts themselves, and you would run into a sparsity problem: paragraphs are effectively unique, so the same key would essentially never appear twice. Each embedding vector would be tuned on a single example and then never looked up again, which defeats the purpose of the tuning.
I think in this case a pretrained embedding that is powerful enough to encapsulate your specific use case would probably be the best way to go. But if you really want classification at the paragraph level, perhaps you could use a pooling or aggregation function to combine the individual word embeddings of a paragraph into a single "pooled paragraph embedding", e.g. by averaging them, which is essentially a continuous Bag-of-Words approach (see the sketch below).
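A rough sketch of what that mean pooling could look like, assuming you already have pretrained per-word vectors (from word2vec, GloVe, or similar) in a dict-like `word_vectors`; the function name and tokenization here are just placeholders:

```python
import numpy as np

def paragraph_embedding(paragraph, word_vectors, dim=300):
    """Pool per-word vectors into one fixed-size paragraph vector
    by averaging them (a continuous Bag-of-Words-style aggregation).

    `word_vectors` is assumed to map each known word to a pretrained
    numpy vector of length `dim`; out-of-vocabulary words are skipped.
    """
    vectors = [word_vectors[w]
               for w in paragraph.lower().split()
               if w in word_vectors]
    if not vectors:
        return np.zeros(dim)  # fallback for fully out-of-vocabulary text
    return np.mean(vectors, axis=0)
```

The resulting fixed-size vector can then be fed to any standard classifier (logistic regression, an MLP, etc.), since every paragraph, regardless of length, is reduced to the same dimensionality.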