map_concat

Returns the union of all the given maps. For duplicate keys across the input maps, the behavior is controlled by spark.sql.mapKeyDedupPolicy. With the default policy, an exception is thrown. If the policy is set to LAST_WIN, the value from the last map is used (Example 5 below sketches this case).

Syntax

from pyspark.sql import functions as sf

sf.map_concat(*cols)

Parameters

Parameter   Type                        Description
cols        pyspark.sql.Column or str   Column names or Column objects
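
Either form is accepted: you can pass the column names as strings (as the examples below do) or pass Column objects. A minimal sketch, assuming a SparkSession named spark as in the rest of this page:

from pyspark.sql import functions as sf

df = spark.sql("SELECT map(1, 'a') as map1, map(2, 'b') as map2")
# Passing Column objects instead of name strings is equivalent.
df.select(sf.map_concat(sf.col("map1"), sf.col("map2"))).show(truncate=False)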

Returns

pyspark.sql.Column: a map of entries merged from the other maps.

Examples

Example 1: Basic usage of map_concat

from pyspark.sql import functions as sf
df = spark.sql("SELECT map(1, 'a', 2, 'b') as map1, map(3, 'c') as map2")
df.select(sf.map_concat("map1", "map2")).show(truncate=False)
+------------------------+
|map_concat(map1, map2)  |
+------------------------+
|{1 -> a, 2 -> b, 3 -> c}|
+------------------------+

Example 2: map_concat with three maps

from pyspark.sql import functions as sf
df = spark.sql("SELECT map(1, 'a') as map1, map(2, 'b') as map2, map(3, 'c') as map3")
df.select(sf.map_concat("map1", "map2", "map3")).show(truncate=False)
+----------------------------+
|map_concat(map1, map2, map3)|
+----------------------------+
|{1 -> a, 2 -> b, 3 -> c}    |
+----------------------------+

Example 3: map_concat with an empty map

from pyspark.sql import functions as sf
df = spark.sql("SELECT map(1, 'a', 2, 'b') as map1, map() as map2")
df.select(sf.map_concat("map1", "map2")).show(truncate=False)
+----------------------+
|map_concat(map1, map2)|
+----------------------+
|{1 -> a, 2 -> b}      |
+----------------------+

Example 4: map_concat with NULL values

from pyspark.sql import functions as sf
df = spark.sql("SELECT map(1, 'a', 2, 'b') as map1, map(3, null) as map2")
df.select(sf.map_concat("map1", "map2")).show(truncate=False)
+---------------------------+
|map_concat(map1, map2)     |
+---------------------------+
|{1 -> a, 2 -> b, 3 -> NULL}|
+---------------------------+
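
Example 5: map_concat with duplicate keys (illustrative sketch)

The examples above never repeat a key across the input maps. As a minimal sketch of the duplicate-key behavior described at the top of this page, the following assumes you opt into the LAST_WIN policy at runtime via spark.sql.mapKeyDedupPolicy; with the default policy, the same query fails with a duplicate-key error.

from pyspark.sql import functions as sf

# Assumption: switch the session to the LAST_WIN dedup policy; under the default
# policy this query would fail because key 2 appears in both maps.
spark.conf.set("spark.sql.mapKeyDedupPolicy", "LAST_WIN")

df = spark.sql("SELECT map(1, 'a', 2, 'b') as map1, map(2, 'x', 3, 'c') as map2")
# Under LAST_WIN the value from the last map wins, so key 2 should map to 'x':
# expected output: {1 -> a, 2 -> x, 3 -> c}
df.select(sf.map_concat("map1", "map2")).show(truncate=False)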