Applies to:
Databricks Runtime 18.2 and above
Important
This feature is in Beta. Workspace admins can control access to this feature from the Previews page. See Manage Azure Databricks previews.
Returns the canonical string representation of an IP address or CIDR block.
For the corresponding SQL function, see ip_as_string function.
Syntax
from pyspark.databricks.sql import functions as dbf
dbf.ip_as_string(col=<col>)
Parameters
| Parameter | Type | Description |
|---|---|---|
| `col` | `pyspark.sql.Column` or `str` | A `STRING` or `BINARY` value representing a valid IPv4 or IPv6 address or CIDR block. |
Examples
Example 1: Convert an IPv4 address to a string.
from pyspark.databricks.sql import functions as dbf
df = spark.createDataFrame([('192.168.1.1',)], ['ip'])
df.select(dbf.ip_as_string('ip').alias('result')).collect()
[Row(result='192.168.1.1')]
Example 2: Convert an IPv6 address to a string.
from pyspark.databricks.sql import functions as dbf
df = spark.createDataFrame([('2001:db8::1',)], ['ip'])
df.select(dbf.ip_as_string('ip').alias('result')).collect()
[Row(result='2001:db8::1')]
Example 3: Convert a CIDR block to its canonical string. Host bits are zeroed, so `192.168.1.5/24` becomes `192.168.1.0/24`.
from pyspark.databricks.sql import functions as dbf
df = spark.createDataFrame([('192.168.1.5/24',)], ['cidr'])
df.select(dbf.ip_as_string('cidr').alias('result')).collect()
[Row(result='192.168.1.0/24')]
Example 4: A None input returns None.
from pyspark.databricks.sql import functions as dbf
df = spark.createDataFrame([(None,)], 'ip: string')
df.select(dbf.ip_as_string('ip').alias('result')).collect()
[Row(result=None)]
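The canonicalization the examples above illustrate can be sketched locally with Python's standard-library ipaddress module. This is an illustration of the same idea, not the Databricks implementation:

```python
import ipaddress

# Illustration only: the stdlib ipaddress module performs the same kind of
# canonicalization shown in the examples above.

# IPv6 addresses compress to the shortest canonical form.
addr = ipaddress.ip_address('2001:0db8:0000:0000:0000:0000:0000:0001')
print(str(addr))  # 2001:db8::1

# CIDR blocks are canonicalized by zeroing the host bits
# (strict=False allows host bits to be set in the input).
net = ipaddress.ip_network('192.168.1.5/24', strict=False)
print(str(net))  # 192.168.1.0/24
```

As in Example 3, the canonical form of a CIDR block keeps only the network portion of the address.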