Important
This feature is in Public Preview.
Scales the input geometry in the X, Y, and Z (optional) directions using the given factors.
For the corresponding Databricks SQL function, see st_scale function.
Syntax
from pyspark.databricks.sql import functions as dbf
dbf.st_scale(col1=<col1>, col2=<col2>, col3=<col3>, col4=<col4>)
Parameters
| Parameter | Type | Description |
|---|---|---|
| col1 | pyspark.sql.Column or str | A GEOMETRY value. |
| col2 | pyspark.sql.Column or float | A DOUBLE value representing the X scaling factor. |
| col3 | pyspark.sql.Column or float | A DOUBLE value representing the Y scaling factor. |
| col4 | pyspark.sql.Column or float, optional | A DOUBLE value representing the Z scaling factor (optional). Default is 1. |
Examples
from pyspark.databricks.sql import functions as dbf
df = spark.createDataFrame([('MULTIPOINT ZM (1 2 3 -4,5 6 7 -8,EMPTY)',)], ['wkt'])
df.select(dbf.st_asewkt(dbf.st_scale(dbf.st_geomfromtext('wkt', 4326), 10.0, 20.0)).alias('result')).collect()
[Row(result='SRID=4326;MULTIPOINT ZM ((10 40 3 -4),(50 120 7 -8),EMPTY)')]
from pyspark.databricks.sql import functions as dbf
df = spark.createDataFrame([('MULTIPOINT ZM (1 2 3 -4,5 6 7 -8,EMPTY)',)], ['wkt'])
df.select(dbf.st_asewkt(dbf.st_scale(dbf.st_geomfromtext('wkt', 4326), 10.0, 20.0, 3.0)).alias('result')).collect()
[Row(result='SRID=4326;MULTIPOINT ZM ((10 40 9 -4),(50 120 21 -8),EMPTY)')]
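As noted in the Parameters table, the scaling factors can also be passed as column expressions rather than float literals. The following is a minimal sketch using hypothetical per-row factor columns xf and yf; the Z factor is omitted, so it defaults to 1.
from pyspark.databricks.sql import functions as dbf
from pyspark.sql import functions as F
# Hypothetical input: one WKT geometry per row plus per-row X and Y scaling factors.
df = spark.createDataFrame([('POINT (1 2)', 10.0, 20.0)], ['wkt', 'xf', 'yf'])
# Pass Column expressions for the scaling factors instead of float literals.
df.select(dbf.st_asewkt(dbf.st_scale(dbf.st_geomfromtext('wkt', 4326), F.col('xf'), F.col('yf'))).alias('result')).collect()
# Expected, per the scaling semantics above: [Row(result='SRID=4326;POINT (10 40)')]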