My cluster: 1 master, 11 slaves, each node has 6 GB of memory.

My settings:

spark.executor.memory=4g, -Dspark.akka.frameSize=512

Here is the problem:

First, I read some data (2.19 GB) from HDFS into an RDD:

val imageBundleRDD = sc.newAPIHadoopFile(...)

Second, do something on this RDD:

val res = imageBundleRDD.map(data => {
    val desPoints = threeDReconstruction(data._2, bg)
    (data._1, desPoints)
})

Finally, save the output to HDFS:

res.saveAsNewAPIHadoopFile(...)

When I run my program it shows:

.....
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Starting task 1.0:24 as TID 33 on executor 9: Salve7.Hadoop (NODE_LOCAL)
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Serialized task 1.0:24 as 30618515 bytes in 210 ms
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Starting task 1.0:36 as TID 34 on executor 2: Salve11.Hadoop (NODE_LOCAL)
14/01/15 21:42:28 INFO cluster.ClusterTaskSetManager: Serialized task 1.0:36 as 30618515 bytes in 449 ms
14/01/15 21:42:28 INFO cluster.ClusterTaskSetManager: Starting task 1.0:32 as TID 35 on executor 7: Salve4.Hadoop (NODE_LOCAL)
Uncaught error from thread [spark-akka.actor.default-dispatcher-3] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[spark]
java.lang.OutOfMemoryError: Java heap space

Too many tasks?

PS: Everything is OK when the input data is about 225 MB.

How can I solve this problem?


Have a look at the start-up scripts: the Java heap size is set there, and it looks like you aren't setting this before running the Spark worker.

# Set SPARK_MEM if it isn't already set since we also use it for this process
SPARK_MEM=${SPARK_MEM:-512m}
export SPARK_MEM

# Set JAVA_OPTS to be able to load native libraries and to set heap size
JAVA_OPTS="$OUR_JAVA_OPTS"
JAVA_OPTS="$JAVA_OPTS -Djava.library.path=$SPARK_LIBRARY_PATH"
JAVA_OPTS="$JAVA_OPTS -Xms$SPARK_MEM -Xmx$SPARK_MEM"

You can find the documentation for the deploy scripts here.


I have a few suggestions:

● If your nodes are configured to have 6g maximum for Spark (and are leaving a little for other processes), then use 6g rather than 4g: spark.executor.memory=6g. Make sure you're using as much memory as possible by checking the UI (it will say how much memory you're using).
● Try using more partitions; you should have 2 - 4 per CPU (see the sketch after this list). IME increasing the number of partitions is often the easiest way to make a program more stable (and often faster). For huge amounts of data you may need way more than 4 per CPU; I've had to use 8000 partitions in some cases!
● Decrease the fraction of memory reserved for caching, using spark.storage.memoryFraction. If you don't use cache() or persist in your code, this might as well be 0. Its default is 0.6, which means you only get 0.4 * 4g of memory for your heap. IME reducing the memory fraction often makes OOMs go away. UPDATE: From Spark 1.6 apparently we no longer need to play with these values; Spark will determine them automatically.
● Similar to the above, but for the shuffle memory fraction. If your job doesn't need much shuffle memory then set it to a lower value (this might cause your shuffles to spill to disk, which can have a catastrophic impact on speed). Sometimes when it's a shuffle operation that's OOMing you need to do the opposite, i.e. set it to something large, like 0.8, or make sure you allow your shuffles to spill to disk (this has been the default since 1.0.0).
● Watch out for memory leaks; these are often caused by accidentally closing over objects you don't need in your lambdas. The way to diagnose this is to look out for the "task serialized as XXX bytes" in the logs; if XXX is larger than a few k or more than an MB, you may have a memory leak. See https://stackoverflow.com/a/25270600/1586965
● Related to the above: use broadcast variables if you really do need large objects.
● If you are caching large RDDs and can sacrifice some access time, consider serialising the RDD (http://spark.apache.org/docs/latest/tuning.html#serialized-rdd-storage), or even caching them on disk (which sometimes isn't that bad if using SSDs).
● (Advanced) Related to the above, avoid String and heavily nested structures (like Map and nested case classes). If possible try to only use primitive types and index all non-primitives, especially if you expect a lot of duplicates. Choose WrappedArray over nested structures whenever possible. Or even roll your own serialisation: YOU will have the most information regarding how to efficiently pack your data into bytes, USE IT!
● (Bit hacky) Again when caching, consider using a Dataset to cache your structure, as it will use more efficient serialisation. This should be regarded as a hack when compared to the previous bullet point. Building your domain knowledge into your algo/serialisation can minimise memory/cache space by 100x or 1000x, whereas all a Dataset will likely give is 2x - 5x in memory and 10x compressed (parquet) on disk.
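
As a rough illustration of the partition-count and cache-fraction suggestions above, here is a minimal sketch; the input path, core count, and values are assumptions for illustration, not taken from the question:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: shrink the cache fraction (nothing is cached or persisted here)
// and spread the work over more partitions before a memory-heavy map.
val conf = new SparkConf()
  .setAppName("oom-tuning-sketch")
  .set("spark.executor.memory", "6g")            // use the full 6g per node
  .set("spark.storage.memoryFraction", "0.1")    // pre-1.6 setting, see UPDATE above
val sc = new SparkContext(conf)

val records = sc.textFile("hdfs:///some/input")  // hypothetical input path
// Roughly 2 - 4 partitions per CPU, assuming 4 cores on each of the 11 workers.
val spread = records.repartition(11 * 4 * 3)
println(spread.partitions.length)                // 132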

http://spark.apache.org/docs/1.2.1/configuration.html

EDIT: (So I can google this more easily myself) The following is also indicative of this problem:

java.lang.OutOfMemoryError : GC overhead limit exceeded

The place to set the memory heap size (at least in spark-1.0.0) is in conf/spark-env. The relevant variables are SPARK_EXECUTOR_MEMORY and SPARK_DRIVER_MEMORY. More documentation is in the deployment guide.

Also, don't forget to copy the configuration file to all the slave nodes.


You should increase the driver memory. In your $SPARK_HOME/conf folder you should find the file spark-defaults.conf; edit it and set spark.driver.memory 4000m, depending on the memory on your master. This is what fixed the issue for me, and everything runs smoothly.


To add a use case for this that is often not discussed: here is a solution when submitting a Spark application via spark-submit in local mode.

According to the gitbook Mastering Apache Spark by Jacek Laskowski:

You can run Spark in local mode. In this non-distributed single-JVM deployment mode, Spark spawns all the execution components (driver, executor, backend, and master) in the same JVM. This is the only mode where a driver is used for execution.

Therefore, if you are experiencing OOM errors with the heap, it suffices to adjust the driver-memory rather than the executor-memory.

Here is an example:

spark-1.6.1/bin/spark-submit \
  --class "MyClass" \
  --driver-memory 12g \
  --master local[*] \
  target/scala-2.10/simple-project_2.10-1.0.jar

You should configure the offHeap memory settings as shown below:

val spark = SparkSession
     .builder()
     .master("local[*]")
     .config("spark.executor.memory", "70g")
     .config("spark.driver.memory", "50g")
     .config("spark.memory.offHeap.enabled",true)
     .config("spark.memory.offHeap.size","16g")   
     .appName("sampleCodeForReference")
     .getOrCreate()

Set the driver memory and executor memory according to your machine's available RAM. You can increase the offHeap size if you are still facing the OutOfMemory issue.


Broadly speaking, Spark executor JVM memory can be divided into two parts: Spark memory and user memory. This is controlled by the spark.memory.fraction property, whose value is between 0 and 1. When working with images or doing other memory-intensive processing in your Spark application, consider decreasing spark.memory.fraction. This will make more memory available for your application's own work. Spark can spill, so it will still work with a smaller memory share.

The second part of the problem is the division of work. If possible, partition your data into smaller chunks; smaller data possibly needs less memory. But if that is not possible, you are sacrificing compute for memory. Typically a single executor will be running multiple cores, and the total memory of the executor must be enough to handle the memory requirements of all concurrent tasks. If increasing executor memory is not an option, you can decrease the cores per executor so that each task gets more memory to work with. Test with 1-core executors which have the largest memory you can give, and then keep increasing cores until you find the best core count.
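
As a minimal sketch of the two knobs discussed above (the values are assumptions chosen to show the direction of the change, not recommendations):

import org.apache.spark.SparkConf

// Sketch only: leave more of the unified memory region to user code for the
// image-heavy map, and run fewer concurrent tasks per executor so each task
// gets a bigger share of the executor heap.
val conf = new SparkConf()
  .set("spark.memory.fraction", "0.4")   // default is 0.6 in Spark 1.6+
  .set("spark.executor.cores", "1")      // start with 1 core, then raise it step by step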


I ran into this issue a lot when using dynamic resource allocation. I had thought it would utilize my cluster resources to best fit the application.

But in fact, dynamic resource allocation doesn't set the driver memory; it keeps it at its default value, which is 1G.

I resolved it by setting spark.driver.memory to a number that suits my driver's memory (for 32 GB of RAM I set it to 18 GB).

You can set it with the spark-submit command as follows:

spark-submit --conf spark.driver.memory=18g

A very important note: this property will not be taken into consideration if you set it from code, according to the Spark documentation (Dynamically Loading Spark Properties):

Spark properties mainly can be divided into two kinds: one is related to deploy, like “spark.driver.memory”, “spark.executor.instances”, this kind of properties may not be affected when setting programmatically through SparkConf in runtime, or the behavior is depending on which cluster manager and deploy mode you choose, so it would be suggested to set through configuration file or spark-submit command line options; another is mainly related to Spark runtime control, like “spark.task.maxFailures”, this kind of properties can be set in either way.
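
To make the quoted point concrete, here is an illustrative sketch (not an authoritative example) of the setting that is ignored when done programmatically in client mode:

import org.apache.spark.SparkConf

// Too late: in client mode the driver JVM is already running by the time this
// line executes, so the driver heap cannot grow from here. Set it via
// spark-submit --conf spark.driver.memory=18g or in spark-defaults.conf instead.
val conf = new SparkConf().set("spark.driver.memory", "18g")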


I have a few suggestions for the error mentioned above:

● Check the memory assigned to the executors; an executor might have to deal with partitions that require more memory than what is assigned.
● Try to see whether more shuffles are live, since shuffles are expensive operations involving disk I/O, data serialization, and network I/O.
● Use broadcast joins.
● Avoid using groupByKey; try to replace it with reduceByKey (see the sketch after this list).
● Avoid using huge Java objects wherever shuffling happens.
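
A small sketch of the groupByKey vs. reduceByKey point; the data is made up, and the idea is simply that reduceByKey combines values on the map side so much less data is shuffled:

// Assuming an existing SparkContext named sc.
val pairs = sc.parallelize(Seq(("a", 1), ("b", 1), ("a", 1)))

// Heavier: every value is shuffled across the network before being summed.
val viaGroup = pairs.groupByKey().mapValues(_.sum)

// Lighter: partial sums are computed within each partition before the shuffle.
val viaReduce = pairs.reduceByKey(_ + _)

viaReduce.collect().foreach(println)   // (a,2) and (b,1)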


Have you dumped your master GC log? I ran into a similar problem and found that SPARK_DRIVER_MEMORY only sets the Xmx heap. The initial heap size remains 1G, and the heap size never scales up to the Xmx heap.

Passing --conf "spark.driver.extraJavaOptions=-Xms20g" resolved my issue.

Run ps aux | grep java and you will see the following log:

4178294 pts/0 Sl+ 18184 pts/0 Sl+ 18:49 0:33 /usr/java/latest/bin/ opt/spark/ com / /


From my understanding of the code provided above, it loads the file, does a map operation, and saves it back. There is no operation that requires a shuffle. Also, there is no operation that requires data to be brought to the driver, so tuning anything related to shuffle or the driver will make no difference. The driver does have issues when there are too many tasks, but that was only until spark version 2.0.2. There can be two things going wrong:

1. There are only one or a few executors. Increase the number of executors so that they can be allocated to different slaves. If you are using YARN, you need to change the num-executors config; if you are using Spark standalone, then you need to tune the number of cores per executor and the spark max cores conf. In standalone, num executors = max cores / cores per executor (see the sketch after this list).
2. The number of partitions is very small, or maybe only one. If this is low, then even with multiple cores and multiple executors it will not be of much help, as parallelization depends on the number of partitions. So increase the partitions by doing imageBundleRDD.repartition(11).
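
A hedged sketch of the two points above for a standalone cluster; the numbers are assumptions based on the 11-slave setup in the question:

import org.apache.spark.SparkConf

// Point 1 (standalone): with 44 max cores and 4 cores per executor, Spark can
// start 44 / 4 = 11 executors, i.e. one per slave, instead of only a few.
val conf = new SparkConf()
  .set("spark.cores.max", "44")
  .set("spark.executor.cores", "4")

// Point 2: also give the RDD enough partitions for those executors,
// e.g. imageBundleRDD.repartition(11) as suggested above.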


Setting these exact configurations helped resolve the issue.

spark-submit --conf spark.yarn.maxAppAttempts=2 --executor-memory 10g --num-executors 50 --driver-memory 12g

Heap space errors generally occur because of bringing too much data back to the driver or to the executors. In your code it does not seem like you are bringing anything back to the driver; instead, you may be overloading the executors that map an input record/row to another using the threeDReconstruction() method. I am not sure what is in the method definition, but that is definitely causing this overloading of the executors. Now you have two options:

1. Edit your code to do the 3-D reconstruction in a more efficient manner.
2. Do not edit the code, but give your executors more memory, as well as more memory overhead. [spark.executor.memory or spark.driver.memoryOverhead]

I would advise being careful and using only as much as you need. Each job is unique in terms of its memory requirements, so I would advise empirically trying different values, increasing by a power of 2 each time (256M, 512M, 1G, and so on).

You will arrive at a value for the executor memory that works. Try re-running the job with this value 3 or 5 times before settling on this configuration.


Simple: if you are using a script or a Jupyter notebook, then just set the config when you start building a spark session...

spark = SparkSession.builder.master('local[*]').config("spark.driver.memory", "15g").appName('testing').getOrCreate()