pyspark.sql.functions.inline

Explodes an array of structs into a table.

This function takes an input column containing an array of structs and returns a new column in which each struct in the array is exploded into a separate row.

Syntax

from pyspark.sql import functions as sf

sf.inline(col)

Parameters

Parameter Type Description
col pyspark.sql.Column or column name The input column of values to explode.

Returns

pyspark.sql.Column: A generator expression with the inline exploded result.

Examples

Example 1: Using inline with a single array-of-structs column

import pyspark.sql.functions as sf
df = spark.sql('SELECT ARRAY(NAMED_STRUCT("a",1,"b",2), NAMED_STRUCT("a",3,"b",4)) AS a')
df.select('*', sf.inline(df.a)).show()
+----------------+---+---+
|               a|  a|  b|
+----------------+---+---+
|[{1, 2}, {3, 4}]|  1|  2|
|[{1, 2}, {3, 4}]|  3|  4|
+----------------+---+---+
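
The same explosion can also be written as a SQL generator expression through sf.expr, which is convenient when the call is built from a string. A minimal sketch against the same df as in Example 1 (inline(a) here is the SQL form of sf.inline('a')):

import pyspark.sql.functions as sf
df = spark.sql('SELECT ARRAY(NAMED_STRUCT("a",1,"b",2), NAMED_STRUCT("a",3,"b",4)) AS a')
df.select('*', sf.expr('inline(a)')).show()
+----------------+---+---+
|               a|  a|  b|
+----------------+---+---+
|[{1, 2}, {3, 4}]|  1|  2|
|[{1, 2}, {3, 4}]|  3|  4|
+----------------+---+---+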

Example 2: Using inline with a column name

import pyspark.sql.functions as sf
df = spark.sql('SELECT ARRAY(NAMED_STRUCT("a",1,"b",2), NAMED_STRUCT("a",3,"b",4)) AS a')
df.select('*', sf.inline('a')).show()
+----------------+---+---+
|               a|  a|  b|
+----------------+---+---+
|[{1, 2}, {3, 4}]|  1|  2|
|[{1, 2}, {3, 4}]|  3|  4|
+----------------+---+---+

Example 3: Using inline with aliases

import pyspark.sql.functions as sf
df = spark.sql('SELECT ARRAY(NAMED_STRUCT("a",1,"b",2), NAMED_STRUCT("a",3,"b",4)) AS a')
df.select('*', sf.inline('a').alias("c1", "c2")).show()
+----------------+---+---+
|               a| c1| c2|
+----------------+---+---+
|[{1, 2}, {3, 4}]|  1|  2|
|[{1, 2}, {3, 4}]|  3|  4|
+----------------+---+---+

Example 4: Using inline with multiple array-of-structs columns

Spark allows only one generator expression per select clause, so the two arrays are exploded in two chained selects, producing the 2 x 2 = 4 combined rows below.

import pyspark.sql.functions as sf
df = spark.sql('SELECT ARRAY(NAMED_STRUCT("a",1,"b",2), NAMED_STRUCT("a",3,"b",4)) AS a1, ARRAY(NAMED_STRUCT("c",5,"d",6), NAMED_STRUCT("c",7,"d",8)) AS a2')
df.select(
    '*', sf.inline('a1')
).select('*', sf.inline('a2')).show()
+----------------+----------------+---+---+---+---+
|              a1|              a2|  a|  b|  c|  d|
+----------------+----------------+---+---+---+---+
|[{1, 2}, {3, 4}]|[{5, 6}, {7, 8}]|  1|  2|  5|  6|
|[{1, 2}, {3, 4}]|[{5, 6}, {7, 8}]|  1|  2|  7|  8|
|[{1, 2}, {3, 4}]|[{5, 6}, {7, 8}]|  3|  4|  5|  6|
|[{1, 2}, {3, 4}]|[{5, 6}, {7, 8}]|  3|  4|  7|  8|
+----------------+----------------+---+---+---+---+
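
In SQL, the same result can be written with LATERAL VIEW instead of chained selects; a minimal sketch against the same df (the view name t and the aliases t1, t2 are illustrative):

df.createOrReplaceTempView('t')
spark.sql(
    'SELECT * FROM t '
    'LATERAL VIEW INLINE(a1) t1 AS a, b '
    'LATERAL VIEW INLINE(a2) t2 AS c, d'
).show()
+----------------+----------------+---+---+---+---+
|              a1|              a2|  a|  b|  c|  d|
+----------------+----------------+---+---+---+---+
|[{1, 2}, {3, 4}]|[{5, 6}, {7, 8}]|  1|  2|  5|  6|
|[{1, 2}, {3, 4}]|[{5, 6}, {7, 8}]|  1|  2|  7|  8|
|[{1, 2}, {3, 4}]|[{5, 6}, {7, 8}]|  3|  4|  5|  6|
|[{1, 2}, {3, 4}]|[{5, 6}, {7, 8}]|  3|  4|  7|  8|
+----------------+----------------+---+---+---+---+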

Example 5: Using inline with a nested array-of-structs column

import pyspark.sql.functions as sf
df = spark.sql('SELECT NAMED_STRUCT("a",1,"b",2,"c",ARRAY(NAMED_STRUCT("c",3,"d",4), NAMED_STRUCT("c",5,"d",6))) AS s')
df.select('*', sf.inline('s.c')).show(truncate=False)
+------------------------+---+---+
|s                       |c  |d  |
+------------------------+---+---+
|{1, 2, [{3, 4}, {5, 6}]}|3  |4  |
|{1, 2, [{3, 4}, {5, 6}]}|5  |6  |
+------------------------+---+---+

Example 6: Using inline with a column containing an array with a null entry, an empty array, and a null array

from pyspark.sql import functions as sf
df = spark.sql('SELECT * FROM VALUES (1,ARRAY(NAMED_STRUCT("a",1,"b",2), NULL, NAMED_STRUCT("a",3,"b",4))), (2,ARRAY()), (3,NULL) AS t(i,s)')
df.show(truncate=False)
+---+----------------------+
|i  |s                     |
+---+----------------------+
|1  |[{1, 2}, NULL, {3, 4}]|
|2  |[]                    |
|3  |NULL                  |
+---+----------------------+
df.select('*', sf.inline('s')).show(truncate=False)
+---+----------------------+----+----+
|i  |s                     |a   |b   |
+---+----------------------+----+----+
|1  |[{1, 2}, NULL, {3, 4}]|1   |2   |
|1  |[{1, 2}, NULL, {3, 4}]|NULL|NULL|
|1  |[{1, 2}, NULL, {3, 4}]|3   |4   |
+---+----------------------+----+----+
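
As the output shows, inline drops the rows whose array is empty or NULL (the rows with i = 2 and i = 3). To keep such rows, with NULL values for the struct fields, use inline_outer instead; a minimal sketch against the same df, assuming a Spark version where inline_outer is available in the Python API (Spark 3.4+):

df.select('*', sf.inline_outer('s')).show(truncate=False)
+---+----------------------+----+----+
|i  |s                     |a   |b   |
+---+----------------------+----+----+
|1  |[{1, 2}, NULL, {3, 4}]|1   |2   |
|1  |[{1, 2}, NULL, {3, 4}]|NULL|NULL|
|1  |[{1, 2}, NULL, {3, 4}]|3   |4   |
|2  |[]                    |NULL|NULL|
|3  |NULL                  |NULL|NULL|
+---+----------------------+----+----+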