How to fix the Spark "task not serializable" error: org.apache.spark.SparkException: Task not serializable
The "task not serializable" error usually appears because the function passed to map, filter, or a similar operation references an external variable that cannot be serialized. In particular, referencing a member function or member variable of a class (often the current class) forces every member of that class, i.e. the whole class, to be serializable; a minimal sketch of this pitfall follows the list below. The most common ways to solve the problem are:
- Make the NotSerializable object static so that it is created once per machine.
- Call rdd.foreachPartition and create the NotSerializable object there (see the sketch below).
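
A minimal sketch of the pitfall described above, assuming a hypothetical driver-side class MyJob with a String field prefix. Referencing the field inside the lambda implicitly captures `this`, so Spark would have to serialize the whole MyJob instance:

```java
import org.apache.spark.api.java.JavaRDD;

public class MyJob {                        // does NOT implement Serializable
    private final String prefix = ">> ";

    public void run(JavaRDD<String> rdd) {
        // Fails with "Task not serializable": reading the field `prefix`
        // captures `this`, so Spark tries to serialize the MyJob instance.
        // rdd.map(s -> prefix + s).collect();

        // Works: copy the member into a local variable first; the closure
        // then only captures the local String, which is serializable.
        final String localPrefix = this.prefix;
        rdd.map(s -> localPrefix + s).collect();
    }
}
```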
==================
ref[1]: <http://databricks.gitbooks.io/databricks-spark-knowledge-base/content/troubleshooting/javaionotserializableexception.html>
If you see this error:

```
org.apache.spark.SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: ...
```

The above error can be triggered when you initialize a variable on the driver (master), but then try to use it on one of the workers. In that case, Spark will try to serialize the object to send it over to the worker, and fail if the object is not serializable. Consider the following code snippet:

```java
NotSerializable notSerializable = new NotSerializable();
JavaRDD<String> rdd = sc.textFile("/tmp/myfile");
rdd.map(s -> notSerializable.doSomething(s)).collect();
```

This will trigger that error. Here are some ideas to fix it:
- Make the class serializable.
- Declare the instance only within the lambda function passed in map.
- Make the NotSerializable object static so that it is created once per machine.
- Call rdd.foreachPartition and create the NotSerializable object in there, like this:
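
A minimal sketch of that last idea, reusing the placeholder NotSerializable and doSomething names from the snippet above:

```java
rdd.foreachPartition(iter -> {
    // Created on the worker, inside the task, so the object never has
    // to be serialized and shipped from the driver.
    NotSerializable notSerializable = new NotSerializable();
    while (iter.hasNext()) {
        notSerializable.doSomething(iter.next());
    }
});
```

The object is also instantiated once per partition rather than once per record, which keeps the setup cost low.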
Pasted from: <http://databricks.gitbooks.io/databricks-spark-knowledge-base/content/troubleshooting/javaionotserializableexception.html>
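
Similarly, the "static object" idea from the list above can be sketched with a hypothetical holder class; the static field is initialized once per executor JVM and is never captured by the closure:

```java
public class NotSerializableHolder {
    // Static fields are not part of a lambda's captured state, so this
    // instance is never serialized; each executor JVM creates its own copy.
    private static final NotSerializable INSTANCE = new NotSerializable();

    public static NotSerializable get() {
        return INSTANCE;
    }
}
```

The map call then becomes rdd.map(s -> NotSerializableHolder.get().doSomething(s)), which ships nothing from the driver.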
In addition, this Stack Overflow answer explains the same problem clearly and concisely: <http://stackoverflow.com/questions/25914057/task-not-serializable-exception-while-running-apache-spark-job>