As of now, the recommended way is to use the DataFrameWriterV2 API. With it, the modern way to define partitions through the Spark DataFrame API is:
import pyspark.sql.functions as F
df.writeTo("catalog.db.table") \
    .partitionedBy(F.days(F.col("created_at_field_name"))) \
    .create()