hours

Partition transform function: creates a transform for timestamps that partitions data into hour-granularity buckets. Supports Spark Connect.
Warning
Deprecated in 4.0.0. Use partitioning.hours instead.
Syntax
from pyspark.databricks.sql import functions as dbf
dbf.hours(col=<col>)
Parameters
| Parameter | Type | Description |
|---|---|---|
| col | pyspark.sql.Column or str | Target date or timestamp column to work on. |
Returns
pyspark.sql.Column: Data partitioned by hours.
Examples
from pyspark.sql.functions import hours

df.writeTo("catalog.db.table").partitionedBy(
    hours("ts")
).createOrReplace()
Note
This function can be used only in combination with the partitionedBy method of DataFrameWriterV2.
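To make the semantics concrete: the transform groups rows whose timestamps fall within the same clock hour into the same partition. The following is a minimal plain-Python sketch of that bucketing behavior, not the Spark API itself; the helper name hour_bucket is hypothetical and used only for illustration.

```python
from datetime import datetime

def hour_bucket(ts: datetime) -> datetime:
    # Conceptual illustration only: truncate a timestamp to the start of
    # its hour, so two timestamps in the same hour map to the same bucket.
    # (Internally the transform works at hour granularity; this sketch
    # models the grouping, not the physical partition layout.)
    return ts.replace(minute=0, second=0, microsecond=0)

# 14:05 and 14:59 share a partition; 15:00 starts a new one.
same = hour_bucket(datetime(2024, 5, 1, 14, 5)) == hour_bucket(datetime(2024, 5, 1, 14, 59))
different = hour_bucket(datetime(2024, 5, 1, 14, 5)) != hour_bucket(datetime(2024, 5, 1, 15, 0))
```

This is why the transform is useful for time-series tables: writes clustered in time land in a small number of hour partitions, and queries filtered on the timestamp column can prune partitions by hour.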