79706086

Date: 2025-07-18 11:14:14
Score: 1
Natty:
Report link

As of now, the recommended way is to use the DataFrameWriterV2 API.

So the modern way to define partitions with the Spark DataFrame API is:

import pyspark.sql.functions as F

# Create the table, partitioned by day of the timestamp column
df.writeTo("catalog.db.table") \
    .partitionedBy(F.days(F.col("created_at_field_name"))) \
    .create()
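
The same writeTo builder also covers later writes. As a minimal sketch (reusing the hypothetical catalog.db.table and column names from the example above; createOrReplace, append, and the bucket transform are standard DataFrameWriterV2 / pyspark.sql.functions features):

import pyspark.sql.functions as F

# Recreate the table definition (or create it if missing),
# this time partitioned by day plus a hash bucket on a second column
df.writeTo("catalog.db.table") \
    .partitionedBy(F.days(F.col("created_at_field_name")),
                   F.bucket(16, F.col("user_id"))) \
    .createOrReplace()

# Append new rows to an existing table; the partitioning defined
# at creation time is reused automatically
df.writeTo("catalog.db.table").append()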
Reasons:
  • Probably link only (1):
  • Low length (0.5):
  • Has code block (-0.5):
Posted by: SleepWalker