Encodes the first argument, a string, into a binary value using the provided character set (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16', 'UTF-32').
For the corresponding Databricks SQL function, see encode function.
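Per row, the transformation is the same one Python's own `str.encode` performs; the following sketch is a plain-Python analogy (not the Databricks function itself) showing how the charset choice changes the resulting bytes:

```python
# Plain-Python analogy of the per-row transformation (not the Spark API):
"abcd".encode("UTF-8")     # b'abcd' -- one byte per ASCII character
"abcd".encode("UTF-16LE")  # b'a\x00b\x00c\x00d\x00' -- two bytes per character
```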
Syntax
```python
from pyspark.databricks.sql import functions as dbf

dbf.encode(col=<col>, charset=<charset>)
```
Parameters
| Parameter | Type | Description |
|---|---|---|
| `col` | `pyspark.sql.Column` or `str` | Target column to work on. |
| `charset` | literal string | Charset to use to encode. |
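A minimal sketch of the two accepted forms of `col`, assuming a Databricks notebook where `spark` is already defined:

```python
from pyspark.databricks.sql import functions as dbf
from pyspark.sql import functions as F

df = spark.createDataFrame([("abcd",)], ["c"])

# Pass the column by name (str) ...
df.select(dbf.encode("c", "UTF-8"))

# ... or as a pyspark.sql.Column object; both forms are equivalent.
df.select(dbf.encode(F.col("c"), "UTF-8"))
```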
Returns
`pyspark.sql.Column`: the column of computed results (encoded binary values).
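As a quick check (assuming the same notebook setup), the result column carries Spark's binary type:

```python
from pyspark.databricks.sql import functions as dbf

df = spark.createDataFrame([("abcd",)], ["c"])
encoded = df.select(dbf.encode("c", "UTF-8").alias("encoded"))

# dtypes should report the encoded column as 'binary'.
print(encoded.dtypes)
```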
Examples
```python
from pyspark.databricks.sql import functions as dbf

df = spark.createDataFrame([("abcd",)], ["c"])
df.select("*", dbf.encode("c", "UTF-8")).show()
```
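A round-trip sketch that reverses the encoding with the open-source `pyspark.sql.functions.decode` (an assumption here; see the corresponding decode documentation for your runtime):

```python
from pyspark.databricks.sql import functions as dbf
from pyspark.sql import functions as F

df = spark.createDataFrame([("abcd",)], ["c"])

# Encode to binary, then decode back to the original string.
df.select(
    dbf.encode("c", "UTF-16LE").alias("bytes"),
    F.decode(dbf.encode("c", "UTF-16LE"), "UTF-16LE").alias("roundtrip"),
).show()
```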