Creates a new row for a JSON column according to the given field names, extracting each requested field into its own output column.
Syntax
from pyspark.sql import functions as sf
sf.json_tuple(col, *fields)
Parameters
| Parameter | Type | Description |
|---|---|---|
| `col` | `pyspark.sql.Column` or str | String column in JSON format. |
| `fields` | str | A field or fields to extract. |
Returns
pyspark.sql.Column: a new row for each given field value from the JSON object.
Examples
from pyspark.sql import functions as sf
data = [("1", '''{"f1": "value1", "f2": "value2"}'''), ("2", '''{"f1": "value12"}''')]
df = spark.createDataFrame(data, ("key", "jstring"))
df.select(df.key, sf.json_tuple(df.jstring, 'f1', 'f2')).collect()
[Row(key='1', c0='value1', c1='value2'), Row(key='2', c0='value12', c1=None)]
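The extracted columns default to generic names (c0, c1, ...). Below is a minimal sketch of one way to give them readable names by passing multiple aliases to Column.alias; it assumes the same df as above, and the alias names f1 and f2 are illustrative.

from pyspark.sql import functions as sf

# Alias the generated columns so they carry readable names instead of c0, c1.
df.select(
    df.key,
    sf.json_tuple(df.jstring, 'f1', 'f2').alias('f1', 'f2')
).collect()
# Expected result under the assumptions above:
# [Row(key='1', f1='value1', f2='value2'), Row(key='2', f1='value12', f2=None)]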