first_value

Returns the first value of col for a group of rows. When ignoreNulls is set to True, it returns the first non-null value it sees; if all values are null, null is returned. The result depends on the order of rows, which may be non-deterministic after a shuffle.

Syntax

from pyspark.sql import functions as sf

sf.first_value(col, ignoreNulls=None)

Parameters

col : pyspark.sql.Column or str
    Target column to work on.
ignoreNulls : pyspark.sql.Column or bool, optional
    If True, skip null values and return the first non-null value. Default is None (nulls are not skipped).

Returns

pyspark.sql.Column: the first value of col for a group of rows.

Examples

Example 1: Get first value without ignoring nulls

from pyspark.sql import functions as sf
spark.createDataFrame(
    [(None, 1), ("a", 2), ("a", 3), ("b", 8), ("b", 2)], ["a", "b"]
).select(sf.first_value('a'), sf.first_value('b')).show()
+--------------+--------------+
|first_value(a)|first_value(b)|
+--------------+--------------+
|          NULL|             1|
+--------------+--------------+

Example 2: Get first value ignoring nulls

from pyspark.sql import functions as sf
spark.createDataFrame(
    [(None, 1), ("a", 2), ("a", 3), ("b", 8), ("b", 2)], ["a", "b"]
).select(sf.first_value('a', True), sf.first_value('b', True)).show()
+--------------+--------------+
|first_value(a)|first_value(b)|
+--------------+--------------+
|             a|             1|
+--------------+--------------+