Returns a new row for each element with position in the given array or map. Uses the default column name pos for the position, col for elements in the array, and key and value for elements in the map, unless specified otherwise.
Syntax
from pyspark.sql import functions as sf
sf.posexplode(col)
Parameters
| Parameter | Type | Description |
|---|---|---|
| col | pyspark.sql.Column or column name | Target column to work on. |
Returns
pyspark.sql.Column: one row per array element or map key-value pair, with the position included as a separate column.
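The generated column names (pos and col for arrays; pos, key, and value for maps) can be replaced by passing one alias per output column to Column.alias. A minimal sketch, assuming a small in-memory DataFrame (the names idx and val are illustrative):
from pyspark.sql import functions as sf
df = spark.createDataFrame([(1, [10, 20])], ['i', 'a'])
# alias() takes one name per generated column: position first, then the element
df.select('i', sf.posexplode('a').alias('idx', 'val')).show()
+---+---+---+
|  i|idx|val|
+---+---+---+
|  1|  0| 10|
|  1|  1| 20|
+---+---+---+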
Examples
Example 1: Exploding an array column
from pyspark.sql import functions as sf
df = spark.sql('SELECT * FROM VALUES (1,ARRAY(1,2,3,NULL)), (2,ARRAY()), (3,NULL) AS t(i,a)')
df.show()
+---+---------------+
| i| a|
+---+---------------+
| 1|[1, 2, 3, NULL]|
| 2| []|
| 3| NULL|
+---+---------------+
df.select('*', sf.posexplode('a')).show()
+---+---------------+---+----+
| i| a|pos| col|
+---+---------------+---+----+
| 1|[1, 2, 3, NULL]| 0| 1|
| 1|[1, 2, 3, NULL]| 1| 2|
| 1|[1, 2, 3, NULL]| 2| 3|
| 1|[1, 2, 3, NULL]| 3|NULL|
+---+---------------+---+----+
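Note that the rows with an empty array (i = 2) and a NULL array (i = 3) are dropped. If those rows need to be preserved, the related posexplode_outer function emits a single row with NULL position and value for them; a minimal sketch reusing the DataFrame above (the output shown is illustrative):
from pyspark.sql import functions as sf
# posexplode_outer keeps rows whose array is empty or NULL
df.select('*', sf.posexplode_outer('a')).show()
+---+---------------+----+----+
|  i|              a| pos| col|
+---+---------------+----+----+
|  1|[1, 2, 3, NULL]|   0|   1|
|  1|[1, 2, 3, NULL]|   1|   2|
|  1|[1, 2, 3, NULL]|   2|   3|
|  1|[1, 2, 3, NULL]|   3|NULL|
|  2|             []|NULL|NULL|
|  3|           NULL|NULL|NULL|
+---+---------------+----+----+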
Example 2: Exploding a map column
from pyspark.sql import functions as sf
df = spark.sql('SELECT * FROM VALUES (1,MAP(1,2,3,4,5,NULL)), (2,MAP()), (3,NULL) AS t(i,m)')
df.show(truncate=False)
+---+---------------------------+
|i |m |
+---+---------------------------+
|1 |{1 -> 2, 3 -> 4, 5 -> NULL}|
|2 |{} |
|3 |NULL |
+---+---------------------------+
df.select('*', sf.posexplode('m')).show(truncate=False)
+---+---------------------------+---+---+-----+
|i |m |pos|key|value|
+---+---------------------------+---+---+-----+
|1 |{1 -> 2, 3 -> 4, 5 -> NULL}|0 |1 |2 |
|1 |{1 -> 2, 3 -> 4, 5 -> NULL}|1 |3 |4 |
|1 |{1 -> 2, 3 -> 4, 5 -> NULL}|2 |5 |NULL |
+---+---------------------------+---+---+-----+
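As with arrays, the three columns generated for a map can be renamed through alias; a minimal sketch reusing the DataFrame above (the names p, k, and v are illustrative):
from pyspark.sql import functions as sf
# one alias per generated column: position, key, value
df.select('*', sf.posexplode('m').alias('p', 'k', 'v')).show(truncate=False)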