sum_distinct

Aggregate function: returns the sum of distinct values in the expression. Null values are ignored.

Syntax

from pyspark.sql import functions as sf

sf.sum_distinct(col)

Parameters

col : pyspark.sql.Column or str
    Target column to compute on.

Returns

pyspark.sql.Column: the column for computed results.

Examples

Example 1: Using sum_distinct function on a column with all distinct values

from pyspark.sql import functions as sf
df = spark.createDataFrame([(1,), (2,), (3,), (4,)], ["numbers"])
df.select(sf.sum_distinct('numbers')).show()
+---------------------+
|sum(DISTINCT numbers)|
+---------------------+
|                   10|
+---------------------+

Example 2: Using sum_distinct function on a column with all duplicate values

from pyspark.sql import functions as sf
df = spark.createDataFrame([(1,), (1,), (1,), (1,)], ["numbers"])
df.select(sf.sum_distinct('numbers')).show()
+---------------------+
|sum(DISTINCT numbers)|
+---------------------+
|                    1|
+---------------------+

Example 3: Using sum_distinct function on a column with null and duplicate values

from pyspark.sql import functions as sf
df = spark.createDataFrame([(None,), (1,), (1,), (2,)], ["numbers"])
df.select(sf.sum_distinct('numbers')).show()
+---------------------+
|sum(DISTINCT numbers)|
+---------------------+
|                    3|
+---------------------+
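The semantics illustrated by the three examples above can be sketched in plain Python: sum_distinct behaves like summing the set of non-null values. This is a simplified model for intuition only, not the Spark implementation (the helper name `sum_distinct_model` is invented for illustration):

```python
def sum_distinct_model(values):
    # Collapse duplicates with a set and drop nulls (None) before summing,
    # mirroring the behavior shown in Examples 1-3 above.
    return sum({v for v in values if v is not None})

print(sum_distinct_model([1, 2, 3, 4]))     # Example 1: all distinct -> 10
print(sum_distinct_model([1, 1, 1, 1]))     # Example 2: all duplicates -> 1
print(sum_distinct_model([None, 1, 1, 2]))  # Example 3: null ignored -> 3
```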