Spark DataFrame to pandas DataFrame conversion

Jha, Ayush
2022-06-08T06:50:01.737+00:00

I am trying to convert a large Spark DataFrame into a pandas DataFrame, because the data transformations are written in Python. On small data the conversion works fine, but on the full dataset it fails with the error below even after I increased the number of nodes. What should I do next: scale up further, increase the buffer size, or split the data into multiple sets and iterate over them?
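The stack trace below names the immediate cause, a Kryo serialization buffer overflow, and suggests the first fix itself: raise `spark.kryoserializer.buffer.max`. A minimal sketch of doing that when building the session; the property names come from the error message, and the sizes are illustrative starting points, not tuned values:

```python
# Sketch: raise the Kryo serializer buffer ceiling before the session is created.
# The property name comes straight from the error message; the sizes below are
# illustrative, not tuned values.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.kryoserializer.buffer.max", "512m")  # default is 64m; hard cap is just under 2g
    .config("spark.driver.maxResultSize", "8g")         # toPandas() collects everything to the driver
    .getOrCreate()
)

# pdf = large_spark_df.toPandas()  # hypothetical DataFrame; retry the conversion with the larger buffer
```

If the session is managed by the platform (the paths in the trace suggest a hosted Spark pool), these properties may need to be set in the pool or session configuration rather than in code.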

----------

/opt/spark/python/lib/pyspark.zip/pyspark/sql/pandas/conversion.py:137: UserWarning: toPandas attempted Arrow optimization because 'spark.sql.execution.arrow.pyspark.enabled' is set to true, but has reached the error below and can not continue. Note that 'spark.sql.execution.arrow.pyspark.fallback.enabled' does not have an effect on failures in the middle of computation.    
  An error occurred while calling o723.getResult.    
: org.apache.spark.SparkException: Exception thrown in awaitResult:     
	at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:301)    
	at org.apache.spark.security.SocketAuthServer.getResult(SocketAuthServer.scala:97)    
	at org.apache.spark.security.SocketAuthServer.getResult(SocketAuthServer.scala:93)    
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)    
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)    
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)    
	at java.lang.reflect.Method.invoke(Method.java:498)    
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)    
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)    
	at py4j.Gateway.invoke(Gateway.java:282)    
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)    
	at py4j.commands.CallCommand.execute(CallCommand.java:79)    
	at py4j.GatewayConnection.run(GatewayConnection.java:238)    
	at java.lang.Thread.run(Thread.java:748)    
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 21.0 failed 4 times, most recent failure: Lost task 0.3 in stage 21.0 (TID 2588) (vm-cc265957 executor 2): org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 1550719. To avoid this, increase spark.kryoserializer.buffer.max value.    
	at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:382)    
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:542)    
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)    
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)    
	at java.lang.Thread.run(Thread.java:748)    
Caused by: com.esotericsoftware.kryo.KryoException: Buffer overflow. Available: 0, required: 1550719    
	at com.esotericsoftware.kryo.io.Output.require(Output.java:167)    
	at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:251)    
	at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:237)    
	at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:49)    
	at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:38)    
	at com.esotericsoftware.kryo.Kryo.writeObjectOrNull(Kryo.java:629)    
	at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:332)    
	at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:302)    
	at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:651)    
	at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:378)    
	... 4 more    
    
Driver stacktrace:    
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2263)    
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2212)    
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2211)    
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)    
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)    
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)    
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2211)    
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1082)    
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1082)    
	at scala.Option.foreach(Option.scala:407)    
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1082)    
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2450)    
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2392)    
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2381)    
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)    
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:869)    
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2282)    
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2377)    
	at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$5(Dataset.scala:3629)    
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)    
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)    
	at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$2(Dataset.scala:3633)    
	at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$2$adapted(Dataset.scala:3610)    
	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3687)    
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:107)    
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:181)    
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:94)    
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)    
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)    
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3685)    
	at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$1(Dataset.scala:3610)    
	at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$1$adapted(Dataset.scala:3609)    
	at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream$2(SocketAuthServer.scala:139)    
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)    
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)    
	at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream$1(SocketAuthServer.scala:141)    
	at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream$1$adapted(SocketAuthServer.scala:136)    
	at org.apache.spark.security.SocketFuncServer.handleConnection(SocketAuthServer.scala:113)    
	at org.apache.spark.security.SocketFuncServer.handleConnection(SocketAuthServer.scala:107)    
	at org.apache.spark.security.SocketAuthServer$$anon$1.$anonfun$run$4(SocketAuthServer.scala:68)    
	at scala.util.Try$.apply(Try.scala:213)    
	at org.apache.spark.security.SocketAuthServer$$anon$1.run(SocketAuthServer.scala:68)    
Caused by: org.apache.spark.SparkException: Kryo serialization failed: Buffer overflow. Available: 0, required: 1550719. To avoid this, increase spark.kryoserializer.buffer.max value.    
	at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:382)    
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:542)    
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)    
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)    
	at java.lang.Thread.run(Thread.java:748)    
Caused by: com.esotericsoftware.kryo.KryoException: Buffer overflow. Available: 0, required: 1550719    
	at com.esotericsoftware.kryo.io.Output.require(Output.java:167)    
	at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:251)    
	at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:237)    
	at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:49)    
	at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:38)    
	at com.esotericsoftware.kryo.Kryo.writeObjectOrNull(Kryo.java:629)    
	at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:332)    
	at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:302)    
	at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:651)    
	at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:378)    
	... 4 more    
    
  warnings.warn(msg)
Traceback (most recent call last):
  File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/pandas/conversion.py", line 108, in toPandas
    batches = self.toDF(*tmp_column_names)._collect_as_arrow()
  File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/pandas/conversion.py", line 246, in _collect_as_arrow
    jsocket_auth_server.getResult()
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/py4j/java_gateway.py", line 1304, in __call__
    return_value = get_return_value(
  File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco
    return f(*a, **kw)
  File "/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/py4j/protocol.py", line 326, in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o723.getResult.    