Databricks Runtime 12.2 LTS
The following release notes provide information about Databricks Runtime 12.2 LTS, powered by Apache Spark 3.3.2.
Databricks released this version in March 2023.
Note
LTS means this version is under long-term support. See Databricks Runtime LTS version lifecycle.
Tip
For release notes on Databricks Runtime versions that have reached end of support (EoS), see End-of-support Databricks Runtime release notes. EoS Databricks Runtime versions have been retired and might not be updated.
Behavioral changes
[Breaking change] New Python version requires updating Databricks Connect V1 Python clients
To apply required security patches, the Python version in Databricks Runtime 12.2 LTS is upgraded from 3.9.5 to 3.9.19. Because these changes might cause errors in clients that use specific PySpark functions, any clients that use Databricks Connect V1 for Python with Databricks Runtime 12.2 LTS must be updated to Python 3.9.7 or above.
New features and improvements
- Delta Lake schema evolution supports specifying source columns in merge statements
- Structured Streaming workloads are supported on clusters with shared access mode
- New features for Predictive I/O
- Implicit lateral column aliasing support
- New forEachBatch feature
- Standardized connection options for query federation
- Extended SQL function library for array management
- New mask function for anonymizing strings
- Common error conditions now return SQLSTATEs
- Invoke generator functions in the FROM clause
- Protocol buffers support is generally available
- Go to definition for notebook variables and functions
- Notebook quick fix for auto-importing libraries
- Bug fixes
Delta Lake schema evolution supports specifying source columns in merge statements
When schema evolution is enabled, you can now specify columns that are present only in the source table in insert or update actions for merge statements. In Databricks Runtime 12.1 and below, only INSERT * or UPDATE SET * actions can be used for schema evolution with merge. See Automatic schema evolution for Delta Lake merge.
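As an illustrative sketch (the `target` and `source` table names and the `new_col` column are hypothetical, and schema evolution must be enabled for the statement, for example through the spark.databricks.delta.schema.autoMerge.enabled setting), such a merge might look like:

```sql
-- Hypothetical tables: target(id, value) and source(id, value, new_col).
-- new_col exists only in the source; referencing it in the UPDATE/INSERT
-- actions now triggers schema evolution instead of failing.
MERGE INTO target t
USING source s
ON t.id = s.id
WHEN MATCHED THEN
  UPDATE SET t.value = s.value, t.new_col = s.new_col
WHEN NOT MATCHED THEN
  INSERT (id, value, new_col) VALUES (s.id, s.value, s.new_col)
```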
Structured Streaming workloads are supported on clusters with shared access mode
You can now use Structured Streaming to interact with Unity Catalog on shared clusters. Some limitations apply. See What Structured Streaming functionality does Unity Catalog support?.
New features for Predictive I/O
Photon support for the Foreachbatch sink is now available. Workloads that stream from a source and merge into Delta tables or write to multiple sinks can now benefit from the Photonized Foreachbatch sink.
Implicit lateral column aliasing support
Azure Databricks now supports implicit lateral column aliasing by default. You can now reuse an expression specified earlier in the same SELECT list. For example, given SELECT 1 AS a, a + 1 AS b, the a in a + 1 can be resolved to the previously defined 1 AS a. See Name resolution for more details about the resolution order.
To turn this feature off, you can set spark.sql.lateralColumnAlias.enableImplicitResolution to false.
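Put together, the behavior can be tried out as follows (a minimal sketch; the SET statement applies at session level):

```sql
SELECT 1 AS a, a + 1 AS b;  -- a in "a + 1" resolves to the lateral alias 1 AS a, so b is 2

-- Opt out of implicit lateral column aliasing for the current session:
SET spark.sql.lateralColumnAlias.enableImplicitResolution = false;
```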
New forEachBatch feature
Photon is now supported when using foreachBatch to write to a data sink.
Standardized connection options for query federation
You can now use a unified set of options (host, port, database, user, password) to connect to data sources supported in query federation. Port is optional and uses the default port number for each data source unless specified.
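As an illustrative sketch, assuming a PostgreSQL source (all option values below are placeholders; the exact DDL depends on the data source):

```sql
CREATE TABLE postgresql_table
USING postgresql
OPTIONS (
  dbtable '<table-name>',
  host '<database-host-url>',
  port '5432',               -- optional; defaults to the source's standard port
  database '<database-name>',
  user '<username>',
  password '<password>'
);
```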
Extended SQL function library for array management
You can now remove all NULL elements from an array using array_compact. To append elements to an array, use array_append.
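For example (the expected results follow from the documented semantics of the two functions):

```sql
SELECT array_compact(array(1, NULL, 2, NULL, 3));  -- [1, 2, 3]
SELECT array_append(array('a', 'b'), 'c');         -- ["a", "b", "c"]
```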
New mask function for anonymizing strings
Invoke the mask function to anonymize sensitive string values.
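By default, mask replaces upper-case letters with 'X', lower-case letters with 'x', and digits with 'n', leaving other characters unchanged. For example:

```sql
SELECT mask('AaBb-1234');  -- XxXx-nnnn
-- Optional arguments override the replacement character per character class:
-- mask(str [, upperChar [, lowerChar [, digitChar [, otherChar]]]])
```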
Common error conditions now return SQLSTATEs
Most error conditions in Databricks Runtime now include documented SQLSTATE values that can be used to test for errors in a SQL standard-compliant manner.
Invoke generator functions in the FROM clause
You can now invoke table-valued generator functions such as explode in the regular FROM clause of a query. This aligns generator function invocation with other built-in and user-defined table functions.
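For example, explode can now be called like any other table function:

```sql
SELECT * FROM explode(array(10, 20, 30));      -- one row per element, column "col"
SELECT * FROM explode(map('k1', 1, 'k2', 2));  -- columns "key" and "value"
```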
Protocol buffers support is generally available
You can use the from_protobuf and to_protobuf functions to exchange data between binary and struct types. See Read and write protocol buffers.
Go to definition for notebook variables and functions
In notebooks, you can quickly go to the definition of a variable, a function, or the code behind a %run statement by right-clicking the variable or function name.
Notebook quick fix for auto-importing libraries
Databricks notebooks now offer a quick-fix feature for auto-importing libraries. If you forget to import a library like pandas, hover over the underlined syntax warning, then click Quick Fix. This feature requires the Databricks Assistant to be enabled in your workspace.
Bug fixes
- Improved consistency for Delta commit behavior for empty transactions relating to update, delete, and merge commands. At WriteSerializable isolation level, commands that result in no changes now create an empty commit. At Serializable isolation level, such empty transactions now do not create a commit.
Behavior changes
Behavior changes with the new lateral column alias feature
The new lateral column alias feature introduces behavior changes during name resolution in the following cases:
- Lateral column aliases now take precedence over correlated references with the same name. For example, for the query SELECT (SELECT c2 FROM (SELECT 1 AS c1, c1 AS c2) WHERE c2 > 5) FROM VALUES(6) AS t(c1), the c1 in the inner c1 AS c2 was resolved to the correlated reference t.c1, but now changes to the lateral column alias 1 AS c1. The query now returns NULL.
- Lateral column aliases now take precedence over function parameters with the same name. For example, for the function CREATE OR REPLACE TEMPORARY FUNCTION func(x INT) RETURNS TABLE (a INT, b INT, c DOUBLE) RETURN SELECT x + 1 AS x, x, the x in the function body was resolved to the function parameter x, but changes in the function body to the lateral column alias x + 1. The query SELECT * FROM func(1) now returns 2, 2.
- To turn off the lateral column alias feature, set spark.sql.lateralColumnAlias.enableImplicitResolution to false. See Name resolution for more information.
Library upgrades
- Upgraded Python libraries:
  - filelock from 3.8.2 to 3.9.0
  - joblib from 1.1.0 to 1.1.1
  - platformdirs from 2.6.0 to 2.6.2
  - whatthepatch from 1.0.3 to 1.0.4
- Upgraded R libraries:
  - class from 7.3-20 to 7.3-21
  - codetools from 0.2-18 to 0.2-19
  - MASS from 7.3-58 to 7.3-58.2
  - nlme from 3.1-160 to 3.1-162
  - Rserve from 1.8-11 to 1.8-12
  - SparkR from 3.3.1 to 3.3.2
Behavior changes
- Users are now required to have the SELECT and MODIFY privileges on ANY FILE when creating a schema with a defined location.
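A statement affected by this requirement is, for example (schema name and storage path are placeholders):

```sql
-- Now additionally requires the SELECT and MODIFY privileges on ANY FILE:
CREATE SCHEMA my_schema
LOCATION 'abfss://<container>@<account>.dfs.core.windows.net/path/to/schema';
```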
Apache Spark
Databricks Runtime 12.2 includes Apache Spark 3.3.2. This release includes all Spark fixes and improvements included in Databricks Runtime 12.1 (EoS), as well as the following additional bug fixes and improvements made to Spark:
- [SPARK-42416] [SC-123205][SC-122851][SQL] Dataset operations should not resolve the analyzed logical plan again
- [SPARK-41848] Revert “[CHERRY-PICK][12.x][12.1][12.0][SC-120037][CORE] Fixing task over-scheduled with TaskResourceProfile”
- [SPARK-42162] [SC-122711][ES-556261] Introduce MultiCommutativeOp expression as a memory optimization for canonicalizing large trees of commutative expressions
- [SPARK-42406] [SC-122998][PROTOBUF][Cherry-pick] Fix recursive depth setting for Protobuf functions
- [SPARK-42002] [SC-122476][CONNECT][PYTHON] Implement DataFrameWriterV2
- [SPARK-41716] [SC-122545][CONNECT] Rename _catalog_to_pandas to _execute_and_fetch in Catalog
- [SPARK-41490] [SC-121774][SQL] Assign name to _LEGACY_ERROR_TEMP_2441
- [SPARK-41600] [SC-122538][SPARK-41623][SPARK-41612][CONNECT] Implement Catalog.cacheTable, isCached and uncache
- [SPARK-42191] [SC-121990][SQL] Support udf ‘luhn_check’
- [SPARK-42253] [SC-121976][PYTHON] Add test for detecting duplicated error class
- [SPARK-42268] [SC-122251][CONNECT][PYTHON] Add UserDefinedType in protos
- [SPARK-42231] [SC-121841][SQL] Turn MISSING_STATIC_PARTITION_COLUMN into internalError
- [SPARK-42136] [SC-122554] Refactor BroadcastHashJoinExec output partitioning calculation
- [SPARK-42158] [SC-121610][SQL] Integrate _LEGACY_ERROR_TEMP_1003 into FIELD_NOT_FOUND
- [SPARK-42192] [12.x][SC-121820][PYTHON] Migrate the TypeError from pyspark/sql/dataframe.py into PySparkTypeError
- [SPARK-35240] Revert “[SC-118242][SS] Use CheckpointFileManager …
- [SPARK-41488] [SC-121858][SQL] Assign name to _LEGACY_ERROR_TEMP_1176 (and 1177)
- [SPARK-42232] [SC-122267][SQL] Rename error class: UNSUPPORTED_FEATURE.JDBC_TRANSACTION
- [SPARK-42346] [SC-122480][SQL] Rewrite distinct aggregates after subquery merge
- [SPARK-42306] [SC-122539][SQL] Integrate _LEGACY_ERROR_TEMP_1317 into UNRESOLVED_COLUMN.WITH_SUGGESTION
- [SPARK-42234] [SC-122354][SQL] Rename error class: UNSUPPORTED_FEATURE.REPEATED_PIVOT
- [SPARK-42343] [SC-122437][CORE] Ignore IOException in handleBlockRemovalFailure if SparkContext is stopped
- [SPARK-41295] [SC-122442][SPARK-41296][SQL] Rename the error classes
- [SPARK-42320] [SC-122478][SQL] Assign name to _LEGACY_ERROR_TEMP_2188
- [SPARK-42255] [SC-122483][SQL] Assign name to _LEGACY_ERROR_TEMP_2430
- [SPARK-42156] [SC-121851][CONNECT] SparkConnectClient supports RetryPolicies now
- [SPARK-38728] [SC-116723][SQL] Test the error class: FAILED_RENAME_PATH
- [SPARK-40005] [12.X] Self contained examples in PySpark
- [SPARK-39347] [SC-122457][SS] Bug fix for time window calculation when event time < 0
- [SPARK-42336] [SC-122458][CORE] Use getOrElse() instead of contains() in ResourceAllocator
- [SPARK-42125] [SC-121827][CONNECT][PYTHON] Pandas UDF in Spark Connect
- [SPARK-42217] [SC-122263][SQL] Support implicit lateral column alias in queries with Window
- [SPARK-35240] [SC-118242][SS] Use CheckpointFileManager for checkpoint file manipulation
- [SPARK-42294] [SC-122337][SQL] Include column default values in DESCRIBE output for V2 tables
- [SPARK-41979] Revert “Revert “[12.x][SC-121190][SQL] Add missing dots for error messages in error classes.””
- [SPARK-42286] [SC-122336][SQL] Fallback to previous codegen code path for complex expr with CAST
- [SPARK-42275] [SC-122249][CONNECT][PYTHON] Avoid using built-in list, dict in static typing
- [SPARK-41985] [SC-122172][SQL] Centralize more column resolution rules
- [SPARK-42126] [SC-122330][PYTHON][CONNECT] Accept return type in DDL strings for Python Scalar UDFs in Spark Connect
- [SPARK-42197] [SC-122328][SC-121514][CONNECT] Reuses JVM initialization, and separate configuration groups to set in remote local mode
- [SPARK-41575] [SC-120118][SQL] Assign name to _LEGACY_ERROR_TEMP_2054
- [SPARK-41985] Revert “[SC-122172][SQL] Centralize more column resolution rules”
- [SPARK-42123] [SC-122234][SC-121453][SQL] Include column default values in DESCRIBE and SHOW CREATE TABLE output
- [SPARK-41985] [SC-122172][SQL] Centralize more column resolution rules
- [SPARK-42284] [SC-122233][CONNECT] Make sure connect server assembly is built before running client tests - SBT
- [SPARK-42239] [SC-121790][SQL] Integrate MUST_AGGREGATE_CORRELATED_SCALAR_SUBQUERY
- [SPARK-42278] [SC-122170][SQL] DS V2 pushdown supports JDBC dialects compile SortOrder by themselves
- [SPARK-42259] [SC-122168][SQL] ResolveGroupingAnalytics should take care of Python UDAF
- [SPARK-41979] Revert “[12.x][SC-121190][SQL] Add missing dots for error messages in error classes.”
- [SPARK-42224] [12.x][SC-121708][CONNECT] Migrate TypeError into error framework for Spark Connect functions
- [SPARK-41712] [12.x][SC-121189][PYTHON][CONNECT] Migrate the Spark Connect errors into PySpark error framework.
- [SPARK-42119] [SC-121913][SC-121342][SQL] Add built-in table-valued functions inline and inline_outer
- [SPARK-41489] [SC-121713][SQL] Assign name to _LEGACY_ERROR_TEMP_2415
- [SPARK-42082] [12.x][SC-121163][SPARK-41598][PYTHON][CONNECT] Introduce PySparkValueError and PySparkTypeError
- [SPARK-42081] [SC-121723][SQL] Improve the plan change validation
- [SPARK-42225] [12.x][SC-121714][CONNECT] Add SparkConnectIllegalArgumentException to handle Spark Connect error precisely.
- [SPARK-42044] [12.x][SC-121280][SQL] Fix incorrect error message for MUST_AGGREGATE_CORRELATED_SCALAR_SUBQUERY
- [SPARK-42194] [12.x][SC-121712][PS] Allow columns parameter when creating DataFrame with Series.
- [SPARK-42078] [12.x][SC-120761][PYTHON] Migrate errors thrown by JVM into PySparkException.
- [SPARK-42133] [12.x][SC-121250] Add basic Dataset API methods to SparkConnect Scala Client
- [SPARK-41979] [12.x][SC-121190][SQL] Add missing dots for error messages in error classes.
- [SPARK-42124] [12.x][SC-121420][PYTHON][CONNECT] Scalar Inline Python UDF in Spark Connect
- [SPARK-42051] [SC-121994][SQL] Codegen Support for HiveGenericUDF
- [SPARK-42257] [SC-121948][CORE] Remove unused variable external sorter
- [SPARK-41735] [SC-121771][SQL] Use MINIMAL instead of STANDARD for SparkListenerSQLExecutionEnd
- [SPARK-42236] [SC-121882][SQL] Refine NULLABLE_ARRAY_OR_MAP_ELEMENT
- [SPARK-42233] [SC-121775][SQL] Improve error message for PIVOT_AFTER_GROUP_BY
- [SPARK-42229] [SC-121856][CORE] Migrate SparkCoreErrors into error classes
- [SPARK-42163] [SC-121839][SQL] Fix schema pruning for non-foldable array index or map key
- [SPARK-40711] [SC-119990][SQL] Add spill size metrics for window
- [SPARK-42023] [SC-121847][SPARK-42024][CONNECT][PYTHON] Make createDataFrame support AtomicType -> StringType coercion
- [SPARK-42202] [SC-121837][Connect][Test] Improve the E2E test server stop logic
- [SPARK-41167] [SC-117425][SQL] Improve multi like performance by creating a balanced expression tree predicate
- [SPARK-41931] [SC-121618][SQL] Better error message for incomplete complex type definition
- [SPARK-36124] [SC-121339][SC-110446][SQL] Support subqueries with correlation through UNION
- [SPARK-42090] [SC-121290][3.3] Introduce sasl retry count in RetryingBlockTransferor
- [SPARK-42157] [SC-121264][CORE] spark.scheduler.mode=FAIR should provide FAIR scheduler
- [SPARK-41572] [SC-120772][SQL] Assign name to _LEGACY_ERROR_TEMP_2149
- [SPARK-41983] [SC-121224][SQL] Rename and improve error message for NULL_COMPARISON_RESULT
- [SPARK-41976] [SC-121024][SQL] Improve error message for INDEX_NOT_FOUND
- [SPARK-41994] [SC-121210][SC-120573] Assign SQLSTATE’s (1/2)
- [SPARK-41415] [SC-121117][3.3] SASL Request Retries
- [SPARK-38591] [SC-121018][SQL] Add flatMapSortedGroups and cogroupSorted
- [SPARK-41975] [SC-120767][SQL] Improve error message for INDEX_ALREADY_EXISTS
- [SPARK-42056] [SC-121158][SQL][PROTOBUF] Add missing options for Protobuf functions
- [SPARK-41984] [SC-120769][SQL] Rename and improve error message for RESET_PERMISSION_TO_ORIGINAL
- [SPARK-41948] [SC-121196][SQL] Fix NPE for error classes: CANNOT_PARSE_JSON_FIELD
- [SPARK-41772] [SC-121176][CONNECT][PYTHON] Fix incorrect column name in withField's doctest
- [SPARK-41283] [SC-121175][CONNECT][PYTHON] Add array_append to Connect
- [SPARK-41960] [SC-120773][SQL] Assign name to _LEGACY_ERROR_TEMP_1056
- [SPARK-42134] [SC-121116][SQL] Fix getPartitionFiltersAndDataFilters() to handle filters without referenced attributes
- [SPARK-42096] [SC-121012][CONNECT] Some code cleanup for connect module
- [SPARK-42099] [SC-121114][SPARK-41845][CONNECT][PYTHON] Fix count(*) and count(col(*))
- [SPARK-42045] [SC-120958][SC-120450][SQL] ANSI SQL mode: Round/Bround should return an error on integer overflow
- [SPARK-42043] [SC-120968][CONNECT] Scala Client Result with E2E Tests
- [SPARK-41884] [SC-121022][CONNECT] Support naive tuple as a nested row
- [SPARK-42112] [SC-121011][SQL][SS] Add null check before ContinuousWriteRDD#compute function close dataWriter
- [SPARK-42077] [SC-120553][CONNECT][PYTHON] Literal should throw TypeError for unsupported DataType
- [SPARK-42108] [SC-120898][SQL] Make Analyzer transform Count(*) into Count(1)
- [SPARK-41666] [SC-120928][SC-119009][PYTHON] Support parameterized SQL by sql()
- [SPARK-40599] [SC-120930][SQL] Relax multiTransform rule type to allow alternatives to be any kinds of Seq
- [SPARK-41574] [SC-120771][SQL] Update _LEGACY_ERROR_TEMP_2009 as INTERNAL_ERROR.
- [SPARK-41579] [SC-120770][SQL] Assign name to _LEGACY_ERROR_TEMP_1249
- [SPARK-41974] [SC-120766][SQL] Turn INCORRECT_END_OFFSET into INTERNAL_ERROR
- [SPARK-41530] [SC-120916][SC-118513][CORE] Rename MedianHeap to PercentileMap and support percentile
- [SPARK-41757] [SC-120608][SPARK-41901][CONNECT] Fix string representation for Column class
- [SPARK-42084] [SC-120775][SQL] Avoid leaking the qualified-access-only restriction
- [SPARK-41973] [SC-120765][SQL] Assign name to _LEGACY_ERROR_TEMP_1311
- [SPARK-42039] [SC-120655][SQL] SPJ: Remove Option in KeyGroupedPartitioning#partitionValuesOpt
- [SPARK-42079] [SC-120712][CONNECT][PYTHON] Rename proto messages for toDF and withColumnsRenamed
- [SPARK-42089] [SC-120605][CONNECT][PYTHON] Fix variable name issues in nested lambda functions
- [SPARK-41982] [SC-120604][SQL] Partitions of type string should not be treated as numeric types
- [SPARK-40599] [SC-120620][SQL] Add multiTransform methods to TreeNode to generate alternatives
- [SPARK-42085] [SC-120556][CONNECT][PYTHON] Make from_arrow_schema support nested types
- [SPARK-42057] [SC-120507][SQL][PROTOBUF] Fix how exception is handled in error reporting.
- [SPARK-41586] [12.x][ALL TESTS][SC-120544][PYTHON] Introduce pyspark.errors and error classes for PySpark.
- [SPARK-41903] [SC-120543][CONNECT][PYTHON] Literal should support 1-dim ndarray
- [SPARK-42021] [SC-120584][CONNECT][PYTHON] Make createDataFrame support array.array
- [SPARK-41896] [SC-120506][SQL] Filtering by row index returns empty results
- [SPARK-41162] [SC-119742][SQL] Fix anti- and semi-join for self-join with aggregations
- [SPARK-41961] [SC-120501][SQL] Support table-valued functions with LATERAL
- [SPARK-41752] [SC-120550][SQL][UI] Group nested executions under the root execution
- [SPARK-42047] [SC-120586][SPARK-41900][CONNECT][PYTHON][12.X] Literal should support Numpy datatypes
- [SPARK-42028] [SC-120344][CONNECT][PYTHON] Truncating nanoseconds timestamps
- [SPARK-42011] [SC-120534][CONNECT][PYTHON] Implement DataFrameReader.csv
- [SPARK-41990] [SC-120532][SQL] Use FieldReference.column instead of apply in V1 to V2 filter conversion
- [SPARK-39217] [SC-120446][SQL] Makes DPP support the pruning side has Union
- [SPARK-42076] [SC-120551][CONNECT][PYTHON] Factor data conversion arrow -> rows out to conversion.py
- [SPARK-42074] [SC-120540][SQL] Enable KryoSerializer in TPCDSQueryBenchmark to enforce SQL class registration
to enforce SQL class registration - [SPARK-42012] [SC-120517][CONNECT][PYTHON] Implement DataFrameReader.orc
- [SPARK-41832] [SC-120513][CONNECT][PYTHON] Fix DataFrame.unionByName, add allow_missing_columns
- [SPARK-38651] [SC-120514] [SQL] Add spark.sql.legacy.allowEmptySchemaWrite
- [SPARK-41991] [SC-120406][SQL] CheckOverflowInTableInsert should accept ExpressionProxy as child
- [SPARK-41232] [SC-120073][SQL][PYTHON] Adding array_append function
- [SPARK-42041] [SC-120512][SPARK-42013][CONNECT][PYTHON] DataFrameReader should support list of paths
- [SPARK-42071] [SC-120533][CORE] Register scala.math.Ordering$Reverse to KryoSerializer
- [SPARK-41986] [SC-120429][SQL] Introduce shuffle on SinglePartition
- [SPARK-42016] [SC-120428][CONNECT][PYTHON] Enable tests related to the nested column
- [SPARK-42042] [SC-120427][CONNECT][PYTHON] DataFrameReader should support StructType schema
- [SPARK-42031] [SC-120389][CORE][SQL] Clean up remove methods that do not need override
- [SPARK-41746] [SC-120463][SPARK-41838][SPARK-41837][SPARK-41835][SPARK-41836][SPARK-41847][CONNECT][PYTHON] Make createDataFrame(rows/lists/tuples/dicts) support nested types
- [SPARK-41437] [SC-117601][SQL][ALL TESTS] Do not optimize the input query twice for v1 write fallback
- [SPARK-41840] [SC-119719][CONNECT][PYTHON] Add the missing alias groupby
- [SPARK-41846] [SC-119717][CONNECT][PYTHON] Enable doctests for window functions
- [SPARK-41914] [SC-120094][SQL] FileFormatWriter materializes AQE plan before accessing outputOrdering
- [SPARK-41805] [SC-119992][SQL] Reuse expressions in WindowSpecDefinition
- [SPARK-41977] [SC-120269][SPARK-41978][CONNECT] SparkSession.range to take float as arguments
- [SPARK-42029] [SC-120336][CONNECT] Add Guava Shading rules to connect-common to avoid startup failure
- [SPARK-41989] [SC-120334][PYTHON] Avoid breaking logging config from pyspark.pandas
- [SPARK-42003] [SC-120331][SQL] Reduce duplicate code in ResolveGroupByAll
- [SPARK-41635] [SC-120313][SQL] Fix group by all error reporting
- [SPARK-41047] [SC-120291][SQL] Improve docs for round
- [SPARK-41822] [SC-120122][CONNECT] Setup gRPC connection for Scala/JVM client
- [SPARK-41879] [SC-120264][CONNECT][PYTHON] Make DataFrame.collect support nested types
- [SPARK-41887] [SC-120268][CONNECT][PYTHON] Make DataFrame.hint accept list typed parameter
- [SPARK-41964] [SC-120210][CONNECT][PYTHON] Add the list of unsupported IO functions
- [SPARK-41595] [SC-120097][SQL] Support generator function explode/explode_outer in the FROM clause
- [SPARK-41957] [SC-120121][CONNECT][PYTHON] Enable the doctest for DataFrame.hint
- [SPARK-41886] [SC-120141][CONNECT][PYTHON] DataFrame.intersect doctest output has different order
- [SPARK-41442] [SC-117795][SQL][ALL TESTS] Only update SQLMetric value if merging with valid metric
- [SPARK-41944] [SC-120046][CONNECT] Pass configurations when local remote mode is on
- [SPARK-41708] [SC-119838][SQL] Pull v1write information to WriteFiles
- [SPARK-41780] [SC-120000][SQL] Should throw INVALID_PARAMETER_VALUE.PATTERN when the parameters regexp is invalid
- [SPARK-41889] [SC-119975][SQL] Attach root cause to invalidPatternError and refactor error classes INVALID_PARAMETER_VALUE
- [SPARK-41860] [SC-120028][SQL] Make AvroScanBuilder and JsonScanBuilder case classes
- [SPARK-41945] [SC-120010][CONNECT][PYTHON] Python: connect client lost column data with pyarrow.Table.to_pylist
- [SPARK-41690] [SC-119102][SC-119087][SQL][CONNECT] Agnostic Encoders
- [SPARK-41354] [SC-119995][CONNECT][PYTHON] Implement RepartitionByExpression
- [SPARK-41581] [SC-119997][SQL] Update _LEGACY_ERROR_TEMP_1230 as INTERNAL_ERROR
- [SPARK-41928] [SC-119972][CONNECT][PYTHON] Add the unsupported list for functions
- [SPARK-41933] [SC-119980][CONNECT] Provide local mode that automatically starts the server
- [SPARK-41899] [SC-119971][CONNECT][PYTHON] createDataFrame should respect user provided DDL schema
- [SPARK-41936] [SC-119978][CONNECT][PYTHON] Make withMetadata reuse the withColumns proto
- [SPARK-41898] [SC-119931][CONNECT][PYTHON] Window.rowsBetween, Window.rangeBetween parameters typechecking parity with pyspark
- [SPARK-41939] [SC-119977][CONNECT][PYTHON] Add the unsupported list for catalog functions
- [SPARK-41924] [SC-119946][CONNECT][PYTHON] Make StructType support metadata and Implement DataFrame.withMetadata
- [SPARK-41934] [SC-119967][CONNECT][PYTHON] Add the unsupported function list for session
- [SPARK-41875] [SC-119969][CONNECT][PYTHON] Add test cases for Dataset.to()
- [SPARK-41824] [SC-119970][CONNECT][PYTHON] Ignore the doctest for explain of connect
- [SPARK-41880] [SC-119959][CONNECT][PYTHON] Make function from_json accept non-literal schema
- [SPARK-41927] [SC-119952][CONNECT][PYTHON] Add the unsupported list for GroupedData
- [SPARK-41929] [SC-119949][CONNECT][PYTHON] Add function array_compact
- [SPARK-41827] [SC-119841][CONNECT][PYTHON] Make GroupBy accept column list
- [SPARK-41925] [SC-119905][SQL] Enable spark.sql.orc.enableNestedColumnVectorizedReader by default
- [SPARK-41831] [SC-119853][CONNECT][PYTHON] Make DataFrame.select accept column list
- [SPARK-41455] [SC-119858][CONNECT][PYTHON] Make DataFrame.collect discard the timezone info
- [SPARK-41923] [SC-119861][CONNECT][PYTHON] Add DataFrame.writeTo to the unsupported list
- [SPARK-41912] [SC-119837][SQL] Subquery should not validate CTE
- [SPARK-41828] [SC-119832][CONNECT][PYTHON][12.X] Make createDataFrame support empty dataframe
- [SPARK-41905] [SC-119848][CONNECT] Support name as strings in slice
- [SPARK-41869] [SC-119845][CONNECT] Reject single string in dropDuplicates
- [SPARK-41830] [SC-119840][CONNECT][PYTHON] Make DataFrame.sample accept the same parameters as PySpark
- [SPARK-41849] [SC-119835][CONNECT] Implement DataFrameReader.text
- [SPARK-41861] [SC-119834][SQL] Make v2 ScanBuilders’ build() return typed scan
- [SPARK-41825] [SC-119710][CONNECT][PYTHON] Enable doctests related to DataFrame.show
- [SPARK-41855] [SC-119804][SC-119410][SPARK-41814][SPARK-41851][SPARK-41852][CONNECT][PYTHON][12.X] Make createDataFrame handle None/NaN properly
- [SPARK-41833] [SC-119685][SPARK-41881][SPARK-41815][CONNECT][PYTHON] Make DataFrame.collect handle None/NaN/Array/Binary properly
- [SPARK-39318] [SC-119713][SQL] Remove tpch-plan-stability WithStats golden files
- [SPARK-41791] [SC-119745] Add new file source metadata column types
- [SPARK-41790] [SC-119729][SQL] Set TRANSFORM reader and writer’s format correctly
- [SPARK-41829] [SC-119725][CONNECT][PYTHON] Add the missing ordering parameter in Sort and sortWithinPartitions
- [SPARK-41576] [SC-119718][SQL] Assign name to _LEGACY_ERROR_TEMP_2051
- [SPARK-41821] [SC-119716][CONNECT][PYTHON] Fix doc test for DataFrame.describe
- [SPARK-41871] [SC-119714][CONNECT] DataFrame hint parameter can be str, float or int
- [SPARK-41720] [SC-119076][SQL] Rename UnresolvedFunc to UnresolvedFunctionName
- [SPARK-41573] [SC-119567][SQL] Assign name to _LEGACY_ERROR_TEMP_2136
- [SPARK-41862] [SC-119492][SQL] Fix correctness bug related to DEFAULT values in Orc reader
- [SPARK-41582] [SC-119482][SC-118701][CORE][SQL] Reuse INVALID_TYPED_LITERAL instead of _LEGACY_ERROR_TEMP_0022
Maintenance updates
See Maintenance updates for Databricks Runtime 12.2.
System environment
- Operating system: Ubuntu 20.04.5 LTS
- Java: Zulu 8.68.0.21-CA-linux64
- Scala: 2.12.15
- Python: 3.9.19
- R: 4.2.2
- Delta Lake: 2.2.0
Installed Python libraries
Library | Version | Library | Version | Library | Version |
---|---|---|---|---|---|
argon2-cffi | 21.3.0 | argon2-cffi-bindings | 21.2.0 | asttokens | 2.0.5 |
attrs | 21.4.0 | backcall | 0.2.0 | backports.entry-points-selectable | 1.2.0 |
beautifulsoup4 | 4.11.1 | black | 22.3.0 | bleach | 4.1.0 |
boto3 | 1.21.32 | botocore | 1.24.32 | certifi | 2021.10.8 |
cffi | 1.15.0 | chardet | 4.0.0 | charset-normalizer | 2.0.4 |
click | 8.0.4 | cryptography | 3.4.8 | cycler | 0.11.0 |
Cython | 0.29.28 | dbus-python | 1.2.16 | debugpy | 1.5.1 |
decorator | 5.1.1 | defusedxml | 0.7.1 | distlib | 0.3.6 |
docstring-to-markdown | 0.11 | entrypoints | 0.4 | executing | 0.8.3 |
facets-overview | 1.0.0 | fastjsonschema | 2.16.2 | filelock | 3.9.0 |
fonttools | 4.25.0 | idna | 3.3 | ipykernel | 6.15.3 |
ipython | 8.5.0 | ipython-genutils | 0.2.0 | ipywidgets | 7.7.2 |
jedi | 0.18.1 | Jinja2 | 2.11.3 | jmespath | 0.10.0 |
joblib | 1.1.1 | jsonschema | 4.4.0 | jupyter-client | 6.1.12 |
jupyter_core | 4.11.2 | jupyterlab-pygments | 0.1.2 | jupyterlab-widgets | 1.0.0 |
kiwisolver | 1.3.2 | MarkupSafe | 2.0.1 | matplotlib | 3.5.1 |
matplotlib-inline | 0.1.2 | mccabe | 0.7.0 | mistune | 0.8.4 |
mypy-extensions | 0.4.3 | nbclient | 0.5.13 | nbconvert | 6.4.4 |
nbformat | 5.3.0 | nest-asyncio | 1.5.5 | nodeenv | 1.7.0 |
notebook | 6.4.8 | numpy | 1.21.5 | packaging | 21.3 |
pandas | 1.4.2 | pandocfilters | 1.5.0 | parso | 0.8.3 |
pathspec | 0.9.0 | patsy | 0.5.2 | pexpect | 4.8.0 |
pickleshare | 0.7.5 | Pillow | 9.0.1 | pip | 21.2.4 |
platformdirs | 2.6.2 | plotly | 5.6.0 | pluggy | 1.0.0 |
prometheus-client | 0.13.1 | prompt-toolkit | 3.0.20 | protobuf | 3.19.4 |
psutil | 5.8.0 | psycopg2 | 2.9.3 | ptyprocess | 0.7.0 |
pure-eval | 0.2.2 | pyarrow | 7.0.0 | pycparser | 2.21 |
pyflakes | 2.5.0 | Pygments | 2.11.2 | PyGObject | 3.36.0 |
pyodbc | 4.0.32 | pyparsing | 3.0.4 | pyright | 1.1.283 |
pyrsistent | 0.18.0 | python-dateutil | 2.8.2 | python-lsp-jsonrpc | 1.0.0 |
python-lsp-server | 1.6.0 | pytz | 2021.3 | pyzmq | 22.3.0 |
requests | 2.27.1 | requests-unixsocket | 0.2.0 | rope | 0.22.0 |
s3transfer | 0.5.0 | scikit-learn | 1.0.2 | scipy | 1.7.3 |
seaborn | 0.11.2 | Send2Trash | 1.8.0 | setuptools | 61.2.0 |
six | 1.16.0 | soupsieve | 2.3.1 | ssh-import-id | 5.10 |
stack-data | 0.2.0 | statsmodels | 0.13.2 | tenacity | 8.0.1 |
terminado | 0.13.1 | testpath | 0.5.0 | threadpoolctl | 2.2.0 |
tokenize-rt | 4.2.1 | tomli | 1.2.2 | tornado | 6.1 |
traitlets | 5.1.1 | typing_extensions | 4.1.1 | ujson | 5.1.0 |
unattended-upgrades | 0.1 | urllib3 | 1.26.9 | virtualenv | 20.8.0 |
wcwidth | 0.2.5 | webencodings | 0.5.1 | whatthepatch | 1.0.4 |
wheel | 0.37.0 | widgetsnbextension | 3.6.1 | yapf | 0.31.0 |
Installed R libraries
R libraries are installed from the Microsoft CRAN snapshot of 2022-11-11.
Library | Version | Library | Version | Library | Version |
---|---|---|---|---|---|
arrow | 10.0.0 | askpass | 1.1 | assertthat | 0.2.1 |
backports | 1.4.1 | base | 4.2.2 | base64enc | 0.1-3 |
bit | 4.0.4 | bit64 | 4.0.5 | blob | 1.2.3 |
boot | 1.3-28 | brew | 1.0-8 | brio | 1.1.3 |
broom | 1.0.1 | bslib | 0.4.1 | cachem | 1.0.6 |
callr | 3.7.3 | caret | 6.0-93 | cellranger | 1.1.0 |
chron | 2.3-58 | class | 7.3-21 | cli | 3.4.1 |
clipr | 0.8.0 | clock | 0.6.1 | cluster | 2.1.4 |
codetools | 0.2-19 | colorspace | 2.0-3 | commonmark | 1.8.1 |
compiler | 4.2.2 | config | 0.3.1 | cpp11 | 0.4.3 |
crayon | 1.5.2 | credentials | 1.3.2 | curl | 4.3.3 |
data.table | 1.14.4 | datasets | 4.2.2 | DBI | 1.1.3 |
dbplyr | 2.2.1 | desc | 1.4.2 | devtools | 2.4.5 |
diffobj | 0.3.5 | digest | 0.6.30 | downlit | 0.4.2 |
dplyr | 1.0.10 | dtplyr | 1.2.2 | e1071 | 1.7-12 |
ellipsis | 0.3.2 | evaluate | 0.18 | fansi | 1.0.3 |
farver | 2.1.1 | fastmap | 1.1.0 | fontawesome | 0.4.0 |
forcats | 0.5.2 | foreach | 1.5.2 | foreign | 0.8-82 |
forge | 0.2.0 | fs | 1.5.2 | future | 1.29.0 |
future.apply | 1.10.0 | gargle | 1.2.1 | generics | 0.1.3 |
gert | 1.9.1 | ggplot2 | 3.4.0 | gh | 1.3.1 |
gitcreds | 0.1.2 | glmnet | 4.1-4 | globals | 0.16.1 |
glue | 1.6.2 | googledrive | 2.0.0 | googlesheets4 | 1.0.1 |
gower | 1.0.0 | graphics | 4.2.2 | grDevices | 4.2.2 |
grid | 4.2.2 | gridExtra | 2.3 | gsubfn | 0.7 |
gtable | 0.3.1 | hardhat | 1.2.0 | haven | 2.5.1 |
highr | 0.9 | hms | 1.1.2 | htmltools | 0.5.3 |
htmlwidgets | 1.5.4 | httpuv | 1.6.6 | httr | 1.4.4 |
ids | 1.0.1 | ini | 0.3.1 | ipred | 0.9-13 |
isoband | 0.2.6 | iterators | 1.0.14 | jquerylib | 0.1.4 |
jsonlite | 1.8.3 | KernSmooth | 2.23-20 | knitr | 1.40 |
labeling | 0.4.2 | later | 1.3.0 | lattice | 0.20-45 |
lava | 1.7.0 | lifecycle | 1.0.3 | listenv | 0.8.0 |
lubridate | 1.9.0 | magrittr | 2.0.3 | markdown | 1.3 |
MASS | 7.3-58.2 | Matrix | 1.5-1 | memoise | 2.0.1 |
methods | 4.2.2 | mgcv | 1.8-41 | mime | 0.12 |
miniUI | 0.1.1.1 | ModelMetrics | 1.2.2.2 | modelr | 0.1.9 |
munsell | 0.5.0 | nlme | 3.1-162 | nnet | 7.3-18 |
numDeriv | 2016.8-1.1 | openssl | 2.0.4 | parallel | 4.2.2 |
parallelly | 1.32.1 | pillar | 1.8.1 | pkgbuild | 1.3.1 |
pkgconfig | 2.0.3 | pkgdown | 2.0.6 | pkgload | 1.3.1 |
plogr | 0.2.0 | plyr | 1.8.7 | praise | 1.0.0 |
prettyunits | 1.1.1 | pROC | 1.18.0 | processx | 3.8.0 |
prodlim | 2019.11.13 | profvis | 0.3.7 | progress | 1.2.2 |
progressr | 0.11.0 | promises | 1.2.0.1 | proto | 1.0.0 |
proxy | 0.4-27 | ps | 1.7.2 | purrr | 0.3.5 |
r2d3 | 0.2.6 | R6 | 2.5.1 | ragg | 1.2.4 |
randomForest | 4.7-1.1 | rappdirs | 0.3.3 | rcmdcheck | 1.4.0 |
RColorBrewer | 1.1-3 | Rcpp | 1.0.9 | RcppEigen | 0.3.3.9.3 |
readr | 2.1.3 | readxl | 1.4.1 | recipes | 1.0.3 |
rematch | 1.0.1 | rematch2 | 2.1.2 | remotes | 2.4.2 |
reprex | 2.0.2 | reshape2 | 1.4.4 | rlang | 1.0.6 |
rmarkdown | 2.18 | RODBC | 1.3-19 | roxygen2 | 7.2.1 |
rpart | 4.1.19 | rprojroot | 2.0.3 | Rserve | 1.8-12 |
RSQLite | 2.2.18 | rstudioapi | 0.14 | rversions | 2.1.2 |
rvest | 1.0.3 | sass | 0.4.2 | scales | 1.2.1 |
selectr | 0.4-2 | sessioninfo | 1.2.2 | shape | 1.4.6 |
shiny | 1.7.3 | sourcetools | 0.1.7 | sparklyr | 1.7.8 |
SparkR | 3.3.2 | spatial | 7.3-11 | splines | 4.2.2 |
sqldf | 0.4-11 | SQUAREM | 2021.1 | stats | 4.2.2 |
stats4 | 4.2.2 | stringi | 1.7.8 | stringr | 1.4.1 |
survival | 3.4-0 | sys | 3.4.1 | systemfonts | 1.0.4 |
tcltk | 4.2.2 | testthat | 3.1.5 | textshaping | 0.3.6 |
tibble | 3.1.8 | tidyr | 1.2.1 | tidyselect | 1.2.0 |
tidyverse | 1.3.2 | timechange | 0.1.1 | timeDate | 4021.106 |
tinytex | 0.42 | tools | 4.2.2 | tzdb | 0.3.0 |
urlchecker | 1.0.1 | usethis | 2.1.6 | utf8 | 1.2.2 |
utils | 4.2.2 | uuid | 1.1-0 | vctrs | 0.5.0 |
viridisLite | 0.4.1 | vroom | 1.6.0 | waldo | 0.4.0 |
whisker | 0.4 | withr | 2.5.0 | xfun | 0.34 |
xml2 | 1.3.3 | xopen | 1.0.0 | xtable | 1.8-4 |
yaml | 2.3.6 | zip | 2.2.2 |
Installed Java and Scala libraries (Scala 2.12 cluster version)
Group ID | Artifact ID | Version |
---|---|---|
antlr | antlr | 2.7.7 |
com.amazonaws | amazon-kinesis-client | 1.12.0 |
com.amazonaws | aws-java-sdk-autoscaling | 1.12.189 |
com.amazonaws | aws-java-sdk-cloudformation | 1.12.189 |
com.amazonaws | aws-java-sdk-cloudfront | 1.12.189 |
com.amazonaws | aws-java-sdk-cloudhsm | 1.12.189 |
com.amazonaws | aws-java-sdk-cloudsearch | 1.12.189 |
com.amazonaws | aws-java-sdk-cloudtrail | 1.12.189 |
com.amazonaws | aws-java-sdk-cloudwatch | 1.12.189 |
com.amazonaws | aws-java-sdk-cloudwatchmetrics | 1.12.189 |
com.amazonaws | aws-java-sdk-codedeploy | 1.12.189 |
com.amazonaws | aws-java-sdk-cognitoidentity | 1.12.189 |
com.amazonaws | aws-java-sdk-cognitosync | 1.12.189 |
com.amazonaws | aws-java-sdk-config | 1.12.189 |
com.amazonaws | aws-java-sdk-core | 1.12.189 |
com.amazonaws | aws-java-sdk-datapipeline | 1.12.189 |
com.amazonaws | aws-java-sdk-directconnect | 1.12.189 |
com.amazonaws | aws-java-sdk-directory | 1.12.189 |
com.amazonaws | aws-java-sdk-dynamodb | 1.12.189 |
com.amazonaws | aws-java-sdk-ec2 | 1.12.189 |
com.amazonaws | aws-java-sdk-ecs | 1.12.189 |
com.amazonaws | aws-java-sdk-efs | 1.12.189 |
com.amazonaws | aws-java-sdk-elasticache | 1.12.189 |
com.amazonaws | aws-java-sdk-elasticbeanstalk | 1.12.189 |
com.amazonaws | aws-java-sdk-elasticloadbalancing | 1.12.189 |
com.amazonaws | aws-java-sdk-elastictranscoder | 1.12.189 |
com.amazonaws | aws-java-sdk-emr | 1.12.189 |
com.amazonaws | aws-java-sdk-glacier | 1.12.189 |
com.amazonaws | aws-java-sdk-glue | 1.12.189 |
com.amazonaws | aws-java-sdk-iam | 1.12.189 |
com.amazonaws | aws-java-sdk-importexport | 1.12.189 |
com.amazonaws | aws-java-sdk-kinesis | 1.12.189 |
com.amazonaws | aws-java-sdk-kms | 1.12.189 |
com.amazonaws | aws-java-sdk-lambda | 1.12.189 |
com.amazonaws | aws-java-sdk-logs | 1.12.189 |
com.amazonaws | aws-java-sdk-machinelearning | 1.12.189 |
com.amazonaws | aws-java-sdk-opsworks | 1.12.189 |
com.amazonaws | aws-java-sdk-rds | 1.12.189 |
com.amazonaws | aws-java-sdk-redshift | 1.12.189 |
com.amazonaws | aws-java-sdk-route53 | 1.12.189 |
com.amazonaws | aws-java-sdk-s3 | 1.12.189 |
com.amazonaws | aws-java-sdk-ses | 1.12.189 |
com.amazonaws | aws-java-sdk-simpledb | 1.12.189 |
com.amazonaws | aws-java-sdk-simpleworkflow | 1.12.189 |
com.amazonaws | aws-java-sdk-sns | 1.12.189 |
com.amazonaws | aws-java-sdk-sqs | 1.12.189 |
com.amazonaws | aws-java-sdk-ssm | 1.12.189 |
com.amazonaws | aws-java-sdk-storagegateway | 1.12.189 |
com.amazonaws | aws-java-sdk-sts | 1.12.189 |
com.amazonaws | aws-java-sdk-support | 1.12.189 |
com.amazonaws | aws-java-sdk-swf-libraries | 1.11.22 |
com.amazonaws | aws-java-sdk-workspaces | 1.12.189 |
com.amazonaws | jmespath-java | 1.12.189 |
com.chuusai | shapeless_2.12 | 2.3.3 |
com.clearspring.analytics | stream | 2.9.6 |
com.databricks | Rserve | 1.8-3 |
com.databricks | jets3t | 0.7.1-0 |
com.databricks.scalapb | compilerplugin_2.12 | 0.4.15-10 |
com.databricks.scalapb | scalapb-runtime_2.12 | 0.4.15-10 |
com.esotericsoftware | kryo-shaded | 4.0.2 |
com.esotericsoftware | minlog | 1.3.0 |
com.fasterxml | classmate | 1.3.4 |
com.fasterxml.jackson.core | jackson-annotations | 2.13.4 |
com.fasterxml.jackson.core | jackson-core | 2.13.4 |
com.fasterxml.jackson.core | jackson-databind | 2.13.4.2 |
com.fasterxml.jackson.dataformat | jackson-dataformat-cbor | 2.13.4 |
com.fasterxml.jackson.datatype | jackson-datatype-joda | 2.13.4 |
com.fasterxml.jackson.datatype | jackson-datatype-jsr310 | 2.13.4 |
com.fasterxml.jackson.module | jackson-module-paranamer | 2.13.4 |
com.fasterxml.jackson.module | jackson-module-scala_2.12 | 2.13.4 |
com.github.ben-manes.caffeine | caffeine | 2.3.4 |
com.github.fommil | jniloader | 1.1 |
com.github.fommil.netlib | core | 1.1.2 |
com.github.fommil.netlib | native_ref-java | 1.1 |
com.github.fommil.netlib | native_ref-java-natives | 1.1 |
com.github.fommil.netlib | native_system-java | 1.1 |
com.github.fommil.netlib | native_system-java-natives | 1.1 |
com.github.fommil.netlib | netlib-native_ref-linux-x86_64-natives | 1.1 |
com.github.fommil.netlib | netlib-native_system-linux-x86_64-natives | 1.1 |
com.github.luben | zstd-jni | 1.5.2-1 |
com.github.wendykierp | JTransforms | 3.1 |
com.google.code.findbugs | jsr305 | 3.0.0 |
com.google.code.gson | gson | 2.8.6 |
com.google.crypto.tink | tink | 1.6.1 |
com.google.flatbuffers | flatbuffers-java | 1.12.0 |
com.google.guava | guava | 15.0 |
com.google.protobuf | protobuf-java | 2.6.1 |
com.h2database | h2 | 2.0.204 |
com.helger | profiler | 1.1.1 |
com.jcraft | jsch | 0.1.50 |
com.jolbox | bonecp | 0.8.0.RELEASE |
com.lihaoyi | sourcecode_2.12 | 0.1.9 |
com.microsoft.azure | azure-data-lake-store-sdk | 2.3.9 |
com.microsoft.sqlserver | mssql-jdbc | 11.2.2.jre8 |
com.ning | compress-lzf | 1.1 |
com.sun.mail | javax.mail | 1.5.2 |
com.tdunning | json | 1.8 |
com.thoughtworks.paranamer | paranamer | 2.8 |
com.trueaccord.lenses | lenses_2.12 | 0.4.12 |
com.twitter | chill-java | 0.10.0 |
com.twitter | chill_2.12 | 0.10.0 |
com.twitter | util-app_2.12 | 7.1.0 |
com.twitter | util-core_2.12 | 7.1.0 |
com.twitter | util-function_2.12 | 7.1.0 |
com.twitter | util-jvm_2.12 | 7.1.0 |
com.twitter | util-lint_2.12 | 7.1.0 |
com.twitter | util-registry_2.12 | 7.1.0 |
com.twitter | util-stats_2.12 | 7.1.0 |
com.typesafe | config | 1.2.1 |
com.typesafe.scala-logging | scala-logging_2.12 | 3.7.2 |
com.uber | h3 | 3.7.0 |
com.univocity | univocity-parsers | 2.9.1 |
com.zaxxer | HikariCP | 4.0.3 |
commons-cli | commons-cli | 1.5.0 |
commons-codec | commons-codec | 1.15 |
commons-collections | commons-collections | 3.2.2 |
commons-dbcp | commons-dbcp | 1.4 |
commons-fileupload | commons-fileupload | 1.3.3 |
commons-httpclient | commons-httpclient | 3.1 |
commons-io | commons-io | 2.11.0 |
commons-lang | commons-lang | 2.6 |
commons-logging | commons-logging | 1.1.3 |
commons-pool | commons-pool | 1.5.4 |
dev.ludovic.netlib | arpack | 2.2.1 |
dev.ludovic.netlib | blas | 2.2.1 |
dev.ludovic.netlib | lapack | 2.2.1 |
info.ganglia.gmetric4j | gmetric4j | 1.0.10 |
io.airlift | aircompressor | 0.21 |
io.delta | delta-sharing-spark_2.12 | 0.6.3 |
io.dropwizard.metrics | metrics-core | 4.1.1 |
io.dropwizard.metrics | metrics-graphite | 4.1.1 |
io.dropwizard.metrics | metrics-healthchecks | 4.1.1 |
io.dropwizard.metrics | metrics-jetty9 | 4.1.1 |
io.dropwizard.metrics | metrics-jmx | 4.1.1 |
io.dropwizard.metrics | metrics-json | 4.1.1 |
io.dropwizard.metrics | metrics-jvm | 4.1.1 |
io.dropwizard.metrics | metrics-servlets | 4.1.1 |
io.netty | netty-all | 4.1.74.Final |
io.netty | netty-buffer | 4.1.74.Final |
io.netty | netty-codec | 4.1.74.Final |
io.netty | netty-common | 4.1.74.Final |
io.netty | netty-handler | 4.1.74.Final |
io.netty | netty-resolver | 4.1.74.Final |
io.netty | netty-tcnative-classes | 2.0.48.Final |
io.netty | netty-transport | 4.1.74.Final |
io.netty | netty-transport-classes-epoll | 4.1.74.Final |
io.netty | netty-transport-classes-kqueue | 4.1.74.Final |
io.netty | netty-transport-native-epoll-linux-aarch_64 | 4.1.74.Final |
io.netty | netty-transport-native-epoll-linux-x86_64 | 4.1.74.Final |
io.netty | netty-transport-native-kqueue-osx-aarch_64 | 4.1.74.Final |
io.netty | netty-transport-native-kqueue-osx-x86_64 | 4.1.74.Final |
io.netty | netty-transport-native-unix-common | 4.1.74.Final |
io.prometheus | simpleclient | 0.7.0 |
io.prometheus | simpleclient_common | 0.7.0 |
io.prometheus | simpleclient_dropwizard | 0.7.0 |
io.prometheus | simpleclient_pushgateway | 0.7.0 |
io.prometheus | simpleclient_servlet | 0.7.0 |
io.prometheus.jmx | collector | 0.12.0 |
jakarta.annotation | jakarta.annotation-api | 1.3.5 |
jakarta.servlet | jakarta.servlet-api | 4.0.3 |
jakarta.validation | jakarta.validation-api | 2.0.2 |
jakarta.ws.rs | jakarta.ws.rs-api | 2.1.6 |
javax.activation | activation | 1.1.1 |
javax.el | javax.el-api | 2.2.4 |
javax.jdo | jdo-api | 3.0.1 |
javax.transaction | jta | 1.1 |
javax.transaction | transaction-api | 1.1 |
javax.xml.bind | jaxb-api | 2.2.11 |
javolution | javolution | 5.5.1 |
jline | jline | 2.14.6 |
joda-time | joda-time | 2.10.13 |
net.java.dev.jna | jna | 5.8.0 |
net.razorvine | pickle | 1.2 |
net.sf.jpam | jpam | 1.1 |
net.sf.opencsv | opencsv | 2.3 |
net.sf.supercsv | super-csv | 2.2.0 |
net.snowflake | snowflake-ingest-sdk | 0.9.6 |
net.snowflake | snowflake-jdbc | 3.13.22 |
net.sourceforge.f2j | arpack_combined_all | 0.1 |
org.acplt.remotetea | remotetea-oncrpc | 1.1.2 |
org.antlr | ST4 | 4.0.4 |
org.antlr | antlr-runtime | 3.5.2 |
org.antlr | antlr4-runtime | 4.8 |
org.antlr | stringtemplate | 3.2.1 |
org.apache.ant | ant | 1.9.2 |
org.apache.ant | ant-jsch | 1.9.2 |
org.apache.ant | ant-launcher | 1.9.2 |
org.apache.arrow | arrow-format | 7.0.0 |
org.apache.arrow | arrow-memory-core | 7.0.0 |
org.apache.arrow | arrow-memory-netty | 7.0.0 |
org.apache.arrow | arrow-vector | 7.0.0 |
org.apache.avro | avro | 1.11.0 |
org.apache.avro | avro-ipc | 1.11.0 |
org.apache.avro | avro-mapred | 1.11.0 |
org.apache.commons | commons-collections4 | 4.4 |
org.apache.commons | commons-compress | 1.21 |
org.apache.commons | commons-crypto | 1.1.0 |
org.apache.commons | commons-lang3 | 3.12.0 |
org.apache.commons | commons-math3 | 3.6.1 |
org.apache.commons | commons-text | 1.10.0 |
org.apache.curator | curator-client | 2.13.0 |
org.apache.curator | curator-framework | 2.13.0 |
org.apache.curator | curator-recipes | 2.13.0 |
org.apache.derby | derby | 10.14.2.0 |
org.apache.hadoop | hadoop-client-api | 3.3.4-databricks |
org.apache.hadoop | hadoop-client-runtime | 3.3.4 |
org.apache.hive | hive-beeline | 2.3.9 |
org.apache.hive | hive-cli | 2.3.9 |
org.apache.hive | hive-jdbc | 2.3.9 |
org.apache.hive | hive-llap-client | 2.3.9 |
org.apache.hive | hive-llap-common | 2.3.9 |
org.apache.hive | hive-serde | 2.3.9 |
org.apache.hive | hive-shims | 2.3.9 |
org.apache.hive | hive-storage-api | 2.8.1 |
org.apache.hive.shims | hive-shims-0.23 | 2.3.9 |
org.apache.hive.shims | hive-shims-common | 2.3.9 |
org.apache.hive.shims | hive-shims-scheduler | 2.3.9 |
org.apache.httpcomponents | httpclient | 4.5.13 |
org.apache.httpcomponents | httpcore | 4.4.14 |
org.apache.ivy | ivy | 2.5.0 |
org.apache.logging.log4j | log4j-1.2-api | 2.18.0 |
org.apache.logging.log4j | log4j-api | 2.18.0 |
org.apache.logging.log4j | log4j-core | 2.18.0 |
org.apache.logging.log4j | log4j-slf4j-impl | 2.18.0 |
org.apache.mesos | mesos-shaded-protobuf | 1.4.0 |
org.apache.orc | orc-core | 1.7.6 |
org.apache.orc | orc-mapreduce | 1.7.6 |
org.apache.orc | orc-shims | 1.7.6 |
org.apache.parquet | parquet-column | 1.12.3-databricks-0002 |
org.apache.parquet | parquet-common | 1.12.3-databricks-0002 |
org.apache.parquet | parquet-encoding | 1.12.3-databricks-0002 |
org.apache.parquet | parquet-format-structures | 1.12.3-databricks-0002 |
org.apache.parquet | parquet-hadoop | 1.12.3-databricks-0002 |
org.apache.parquet | parquet-jackson | 1.12.3-databricks-0002 |
org.apache.thrift | libfb303 | 0.9.3 |
org.apache.thrift | libthrift | 0.12.0 |
org.apache.xbean | xbean-asm9-shaded | 4.20 |
org.apache.yetus | audience-annotations | 0.13.0 |
org.apache.zookeeper | zookeeper | 3.6.2 |
org.apache.zookeeper | zookeeper-jute | 3.6.2 |
org.checkerframework | checker-qual | 3.5.0 |
org.codehaus.jackson | jackson-core-asl | 1.9.13 |
org.codehaus.jackson | jackson-mapper-asl | 1.9.13 |
org.codehaus.janino | commons-compiler | 3.0.16 |
org.codehaus.janino | janino | 3.0.16 |
org.datanucleus | datanucleus-api-jdo | 4.2.4 |
org.datanucleus | datanucleus-core | 4.1.17 |
org.datanucleus | datanucleus-rdbms | 4.1.19 |
org.datanucleus | javax.jdo | 3.2.0-m3 |
org.eclipse.jetty | jetty-client | 9.4.46.v20220331 |
org.eclipse.jetty | jetty-continuation | 9.4.46.v20220331 |
org.eclipse.jetty | jetty-http | 9.4.46.v20220331 |
org.eclipse.jetty | jetty-io | 9.4.46.v20220331 |
org.eclipse.jetty | jetty-jndi | 9.4.46.v20220331 |
org.eclipse.jetty | jetty-plus | 9.4.46.v20220331 |
org.eclipse.jetty | jetty-proxy | 9.4.46.v20220331 |
org.eclipse.jetty | jetty-security | 9.4.46.v20220331 |
org.eclipse.jetty | jetty-server | 9.4.46.v20220331 |
org.eclipse.jetty | jetty-servlet | 9.4.46.v20220331 |
org.eclipse.jetty | jetty-servlets | 9.4.46.v20220331 |
org.eclipse.jetty | jetty-util | 9.4.46.v20220331 |
org.eclipse.jetty | jetty-util-ajax | 9.4.46.v20220331 |
org.eclipse.jetty | jetty-webapp | 9.4.46.v20220331 |
org.eclipse.jetty | jetty-xml | 9.4.46.v20220331 |
org.eclipse.jetty.websocket | websocket-api | 9.4.46.v20220331 |
org.eclipse.jetty.websocket | websocket-client | 9.4.46.v20220331 |
org.eclipse.jetty.websocket | websocket-common | 9.4.46.v20220331 |
org.eclipse.jetty.websocket | websocket-server | 9.4.46.v20220331 |
org.eclipse.jetty.websocket | websocket-servlet | 9.4.46.v20220331 |
org.fusesource.leveldbjni | leveldbjni-all | 1.8 |
org.glassfish.hk2 | hk2-api | 2.6.1 |
org.glassfish.hk2 | hk2-locator | 2.6.1 |
org.glassfish.hk2 | hk2-utils | 2.6.1 |
org.glassfish.hk2 | osgi-resource-locator | 1.0.3 |
org.glassfish.hk2.external | aopalliance-repackaged | 2.6.1 |
org.glassfish.hk2.external | jakarta.inject | 2.6.1 |
org.glassfish.jersey.containers | jersey-container-servlet | 2.36 |
org.glassfish.jersey.containers | jersey-container-servlet-core | 2.36 |
org.glassfish.jersey.core | jersey-client | 2.36 |
org.glassfish.jersey.core | jersey-common | 2.36 |
org.glassfish.jersey.core | jersey-server | 2.36 |
org.glassfish.jersey.inject | jersey-hk2 | 2.36 |
org.hibernate.validator | hibernate-validator | 6.1.0.Final |
org.javassist | javassist | 3.25.0-GA |
org.jboss.logging | jboss-logging | 3.3.2.Final |
org.jdbi | jdbi | 2.63.1 |
org.jetbrains | annotations | 17.0.0 |
org.joda | joda-convert | 1.7 |
org.jodd | jodd-core | 3.5.2 |
org.json4s | json4s-ast_2.12 | 3.7.0-M11 |
org.json4s | json4s-core_2.12 | 3.7.0-M11 |
org.json4s | json4s-jackson_2.12 | 3.7.0-M11 |
org.json4s | json4s-scalap_2.12 | 3.7.0-M11 |
org.lz4 | lz4-java | 1.8.0 |
org.mariadb.jdbc | mariadb-java-client | 2.7.4 |
org.mlflow | mlflow-spark | 2.1.1 |
org.objenesis | objenesis | 2.5.1 |
org.postgresql | postgresql | 42.3.3 |
org.roaringbitmap | RoaringBitmap | 0.9.25 |
org.roaringbitmap | shims | 0.9.25 |
org.rocksdb | rocksdbjni | 6.28.2 |
org.rosuda.REngine | REngine | 2.1.0 |
org.scala-lang | scala-compiler_2.12 | 2.12.14 |
org.scala-lang | scala-library_2.12 | 2.12.14 |
org.scala-lang | scala-reflect_2.12 | 2.12.14 |
org.scala-lang.modules | scala-collection-compat_2.12 | 2.4.3 |
org.scala-lang.modules | scala-parser-combinators_2.12 | 1.1.2 |
org.scala-lang.modules | scala-xml_2.12 | 1.2.0 |
org.scala-sbt | test-interface | 1.0 |
org.scalacheck | scalacheck_2.12 | 1.14.2 |
org.scalactic | scalactic_2.12 | 3.0.8 |
org.scalanlp | breeze-macros_2.12 | 1.2 |
org.scalanlp | breeze_2.12 | 1.2 |
org.scalatest | scalatest_2.12 | 3.0.8 |
org.slf4j | jcl-over-slf4j | 1.7.36 |
org.slf4j | jul-to-slf4j | 1.7.36 |
org.slf4j | slf4j-api | 1.7.36 |
org.spark-project.spark | unused | 1.0.0 |
org.threeten | threeten-extra | 1.5.0 |
org.tukaani | xz | 1.9 |
org.typelevel | algebra_2.12 | 2.0.1 |
org.typelevel | cats-kernel_2.12 | 2.1.1 |
org.typelevel | macro-compat_2.12 | 1.1.1 |
org.typelevel | spire-macros_2.12 | 0.17.0 |
org.typelevel | spire-platform_2.12 | 0.17.0 |
org.typelevel | spire-util_2.12 | 0.17.0 |
org.typelevel | spire_2.12 | 0.17.0 |
org.wildfly.openssl | wildfly-openssl | 1.0.7.Final |
org.xerial | sqlite-jdbc | 3.8.11.2 |
org.xerial.snappy | snappy-java | 1.1.8.4 |
org.yaml | snakeyaml | 1.24 |
oro | oro | 2.0.8 |
pl.edu.icm | JLargeArrays | 1.5 |
software.amazon.cryptools | AmazonCorrettoCryptoProvider | 1.6.1-linux-x86_64 |
software.amazon.ion | ion-java | 1.0.2 |
stax | stax-api | 1.0.1 |