Seems like the difference is only in syntax [source]:
PySpark and pandas API on Spark share the same query execution model. Converting a query to an unresolved logical plan is relatively quick; optimizing and executing the query takes much more time. So PySpark and pandas API on Spark should have similar performance.
The main difference between pandas on Spark and PySpark is just the syntax.