This is a special version of aes_decrypt that performs the same operation but returns NULL instead of raising an error when the decryption cannot be performed. It returns the decrypted value of input using AES in the specified mode with the specified padding. Key lengths of 16, 24, and 32 bytes (128, 192, and 256 bits) are supported. Supported combinations of (mode, padding) are (ECB, PKCS), (GCM, NONE), and (CBC, PKCS). Optional additional authenticated data (AAD) is only supported for GCM; if provided for encryption, the identical AAD value must be provided for decryption. The default mode is GCM.
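For a quick sense of the NULL-on-failure behavior, here is a minimal round-trip sketch, assuming an active SparkSession named spark; the column names and key material are invented for this illustration. It encrypts a value with aes_encrypt using the default GCM mode, decrypts it with try_aes_decrypt, and then shows that decrypting with a different key yields NULL rather than an error.

from pyspark.sql import functions as sf

# Hypothetical data: one plaintext value and a 32-byte (256-bit) key.
df = spark.createDataFrame(
    [("Spark", "abcdefghijklmnop12345678ABCDEFGH")],
    ["plain", "key"]
)

# Encrypt with the defaults (GCM mode, DEFAULT padding).
encrypted = df.select(df.key, sf.aes_encrypt(df.plain, df.key).alias("secret"))

# Decrypting with the same key recovers the plaintext.
encrypted.select(
    sf.try_aes_decrypt(encrypted.secret, encrypted.key).cast("STRING")
).show(truncate=False)

# Decrypting with a different (but validly sized) key cannot succeed;
# try_aes_decrypt returns NULL instead of raising an error.
encrypted.select(
    sf.try_aes_decrypt(
        encrypted.secret, sf.lit("ABCDEFGH12345678abcdefghijklmnop")
    ).cast("STRING")
).show(truncate=False)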
Syntax
from pyspark.sql import functions as sf
sf.try_aes_decrypt(input, key, mode=None, padding=None, aad=None)
Parameters
| Parameter | Type | Description |
|---|---|---|
| input | pyspark.sql.Column or str | The binary value to decrypt. |
| key | pyspark.sql.Column or str | The passphrase to use to decrypt the data. |
| mode | pyspark.sql.Column or str, optional | Specifies which block cipher mode should be used to decrypt messages. Valid modes: ECB, GCM, CBC. |
| padding | pyspark.sql.Column or str, optional | Specifies how to pad messages whose length is not a multiple of the block size. Valid values: PKCS, NONE, DEFAULT. DEFAULT padding means PKCS for ECB, NONE for GCM, and PKCS for CBC. |
| aad | pyspark.sql.Column or str, optional | Optional additional authenticated data (AAD). Only supported for GCM mode. This can be any free-form input and must be provided for both encryption and decryption. |
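When mode, padding, or aad is given as a str, it is interpreted as a column name, not as a literal value. To pass fixed settings, wrap them in literal columns with sf.lit, as in the hedged sketch below, which reuses the GCM ciphertext, key, and AAD from Example 1:

from pyspark.sql import functions as sf

# Hypothetical single-column DataFrame holding only the base64 ciphertext;
# the key, mode, padding, and AAD are supplied as literals with sf.lit().
df = spark.createDataFrame(
    [("AAAAAAAAAAAAAAAAQiYi+sTLm7KD9UcZ2nlRdYDe/PX4",)],
    ["input"]
)

df.select(
    sf.try_aes_decrypt(
        sf.unbase64(df.input),
        sf.lit("abcdefghijklmnop12345678ABCDEFGH"),
        sf.lit("GCM"),
        sf.lit("DEFAULT"),
        sf.lit("This is an AAD mixed into the input"),
    ).cast("STRING")
).show(truncate=False)

Passing the bare string "GCM" here would instead be resolved as a reference to a column named GCM.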
Returns
pyspark.sql.Column: A new column that contains the decrypted value, or NULL if the input cannot be decrypted.
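Because failures surface as NULL values rather than errors, the result can be filtered or inspected directly. A short sketch, assuming a DataFrame df with a binary input column and a key column like those in the examples below:

from pyspark.sql import functions as sf

# Assumed DataFrame `df` with an encrypted binary `input` column and a `key` column.
decrypted = df.withColumn(
    "decrypted",
    sf.try_aes_decrypt(df.input, df.key).cast("STRING")
)

# Keep only the rows that decrypted successfully.
decrypted.where(sf.col("decrypted").isNotNull()).show(truncate=False)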
Examples
Example 1: Decrypt data with key, mode, padding and aad
from pyspark.sql import functions as sf
df = spark.createDataFrame([(
    "AAAAAAAAAAAAAAAAQiYi+sTLm7KD9UcZ2nlRdYDe/PX4",
    "abcdefghijklmnop12345678ABCDEFGH", "GCM", "DEFAULT",
    "This is an AAD mixed into the input",)],
    ["input", "key", "mode", "padding", "aad"]
)
df.select(sf.try_aes_decrypt(
    sf.unbase64(df.input), df.key, df.mode, df.padding, df.aad
).cast("STRING")).show(truncate=False)
+-------------------------------------------------------------------------+
|CAST(try_aes_decrypt(unbase64(input), key, mode, padding, aad) AS STRING)|
+-------------------------------------------------------------------------+
|Spark                                                                    |
+-------------------------------------------------------------------------+
Example 2: Failed decryption returns NULL (GCM ciphertext decrypted with the wrong mode, CBC)
from pyspark.sql import functions as sf
df = spark.createDataFrame([(
    "AAAAAAAAAAAAAAAAQiYi+sTLm7KD9UcZ2nlRdYDe/PX4",
    "abcdefghijklmnop12345678ABCDEFGH", "CBC", "DEFAULT",
    "This is an AAD mixed into the input",)],
    ["input", "key", "mode", "padding", "aad"]
)
df.select(sf.try_aes_decrypt(
    sf.unbase64(df.input), df.key, df.mode, df.padding, df.aad
).cast("STRING")).show(truncate=False)
+-------------------------------------------------------------------------+
|CAST(try_aes_decrypt(unbase64(input), key, mode, padding, aad) AS STRING)|
+-------------------------------------------------------------------------+
|NULL                                                                     |
+-------------------------------------------------------------------------+
Example 3: Decrypt data with key, mode and padding
from pyspark.sql import functions as sf
df = spark.createDataFrame([(
    "AAAAAAAAAAAAAAAAAAAAAPSd4mWyMZ5mhvjiAPQJnfg=",
    "abcdefghijklmnop12345678ABCDEFGH", "CBC", "DEFAULT",)],
    ["input", "key", "mode", "padding"]
)
df.select(sf.try_aes_decrypt(
    sf.unbase64(df.input), df.key, df.mode, df.padding
).cast("STRING")).show(truncate=False)
+----------------------------------------------------------------------+
|CAST(try_aes_decrypt(unbase64(input), key, mode, padding, ) AS STRING)|
+----------------------------------------------------------------------+
|Spark                                                                 |
+----------------------------------------------------------------------+
Example 4: Decrypt data with key and mode
from pyspark.sql import functions as sf
df = spark.createDataFrame([(
    "AAAAAAAAAAAAAAAAAAAAAPSd4mWyMZ5mhvjiAPQJnfg=",
    "abcdefghijklmnop12345678ABCDEFGH", "CBC", "DEFAULT",)],
    ["input", "key", "mode", "padding"]
)
df.select(sf.try_aes_decrypt(
    sf.unbase64(df.input), df.key, df.mode
).cast("STRING")).show(truncate=False)
+----------------------------------------------------------------------+
|CAST(try_aes_decrypt(unbase64(input), key, mode, DEFAULT, ) AS STRING)|
+----------------------------------------------------------------------+
|Spark                                                                 |
+----------------------------------------------------------------------+
Example 5: Decrypt data with key
from pyspark.sql import functions as sf
df = spark.createDataFrame([(
    "83F16B2AA704794132802D248E6BFD4E380078182D1544813898AC97E709B28A94",
    "0000111122223333",)],
    ["input", "key"]
)
df.select(sf.try_aes_decrypt(
    sf.unhex(df.input), df.key
).cast("STRING")).show(truncate=False)
+------------------------------------------------------------------+
|CAST(try_aes_decrypt(unhex(input), key, GCM, DEFAULT, ) AS STRING)|
+------------------------------------------------------------------+
|Spark                                                             |
+------------------------------------------------------------------+