What is the difference between partitioning and parallelism?

What is the difference between spark.sql.shuffle.partitions and spark.default.parallelism?

I have tried setting both of them in SparkSQL, but the number of tasks in the second stage is always 200.


From the answer here, spark.sql.shuffle.partitions configures the number of partitions that are used when shuffling data for joins or aggregations.

spark.default.parallelism is the default number of partitions in RDDs returned by transformations like join, reduceByKey, and parallelize when it is not set explicitly by the user. Note that spark.default.parallelism only seems to apply to raw RDDs and is ignored when working with DataFrames.
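To illustrate the difference, here is a minimal Scala sketch (sc, df, and the column name are placeholders; the two settings are assumed to have been set to 300 and 200 beforehand, and adaptive query execution is assumed to be off):

val rdd = sc.parallelize(1 to 1000)              // raw RDD: uses spark.default.parallelism, i.e. 300 partitions
println(rdd.getNumPartitions)
val counts = df.groupBy("someColumn").count()    // DataFrame aggregation: the shuffle uses spark.sql.shuffle.partitions, i.e. 200
println(counts.rdd.getNumPartitions)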

If the task you are performing is not a join or aggregation and you are working with dataframes then setting these will not have any effect. You could, however, set the number of partitions yourself by calling df.repartition(numOfPartitions) (don't forget to assign it to a new val) in your code.
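For example (a sketch; df and the value 300 are placeholders):

val repartitionedDf = df.repartition(300)        // returns a new DataFrame, df itself is unchanged
println(repartitionedDf.rdd.getNumPartitions)    // should print 300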


To change the settings in your code you can simply do:

sqlContext.setConf("spark.sql.shuffle.partitions", "300")
sqlContext.setConf("spark.default.parallelism", "300")
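If you use a SparkSession (Spark 2.x and later), the equivalent should be (assuming the session is named spark):

spark.conf.set("spark.sql.shuffle.partitions", "300")
spark.conf.set("spark.default.parallelism", "300")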

Alternatively, you can make the change when submitting the job to a cluster with spark-submit:

./bin/spark-submit --conf spark.sql.shuffle.partitions=300 --conf spark.default.parallelism=300

spark.default.parallelism is the default number of partitions Spark uses for RDD operations, while the number of shuffle partitions in Spark SQL defaults to 200. If you want to change the number of shuffle partitions, set the spark.sql.shuffle.partitions property in the Spark configuration or while running Spark SQL.
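For example, the property can also be set with a SET statement while running Spark SQL (a sketch; the value 300 is arbitrary):

spark.sql("SET spark.sql.shuffle.partitions=300")   // applies to subsequent queries in this session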

Normally, spark.sql.shuffle.partitions comes into play when we hit memory congestion and see an error like: java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE

In that case, you can size partitions at roughly 256 MB each and set the partition count for your jobs accordingly.
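A rough sketch of that sizing rule (the 100 GB input size is just an assumed example):

val inputSizeBytes = 100L * 1024 * 1024 * 1024                        // assumed ~100 GB of data to shuffle
val targetPartitionBytes = 256L * 1024 * 1024                         // target of ~256 MB per partition
val numPartitions = (inputSizeBytes / targetPartitionBytes).toInt     // 400 in this example
sqlContext.setConf("spark.sql.shuffle.partitions", numPartitions.toString)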

Also, if the number of partitions is close to 2000, increase it to more than 2000. Spark applies different logic for partition counts below and above 2000: with more than 2000 partitions the shuffle metadata is kept in a highly compressed form, which decreases the memory footprint and can improve performance.
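Continuing the sketch above, that 2000 cut-off could be applied like this (treating "near 2000" as anything above 1800, which is just an illustrative choice):

val adjustedPartitions = if (numPartitions > 1800 && numPartitions <= 2000) 2001 else numPartitions
sqlContext.setConf("spark.sql.shuffle.partitions", adjustedPartitions.toString)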

In case someone wants to know: there is indeed a special situation in which the spark.sql.shuffle.partitions setting becomes invalid. When you restart a Structured Streaming application from the same checkpoint location, changing this setting does not take effect. See more at https://spark.apache.org/docs/latest/configuration.html#runtime-sql-configuration
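As a sketch of that situation (inputDf, the grouping column, and the checkpoint path are hypothetical), restarting the query below with the same checkpointLocation will reuse the shuffle partition count recorded in the checkpoint, so the conf change is ignored:

spark.conf.set("spark.sql.shuffle.partitions", "300")          // ignored on restart if the checkpoint already exists

val query = inputDf                                             // inputDf: some streaming DataFrame
  .groupBy("key")
  .count()
  .writeStream
  .outputMode("complete")
  .option("checkpointLocation", "/tmp/checkpoints/my-query")    // hypothetical path
  .format("console")
  .start()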