I created skewed data to test a salting approach and tried three different solutions, but none of them delivered a significant runtime improvement. Can you guide me on the best approach to solve this problem effectively?
import pyspark.sql.functions as F
# df1: ~60% of rows share the skewed key 1; the rest are random ints in [0, 100)
df1 = spark.range(300_000_000).withColumn('value', F.when(F.rand() < 0.6, 1).otherwise((F.rand() * 100).cast('int'))).drop('id')
# df2: ~20% of rows share the key 4; the rest are random ints in [0, 100)
df2 = spark.range(200_000_000).withColumn('value', F.when(F.rand() < 0.2, 4).otherwise((F.rand() * 100).cast('int'))).drop('id')
final_df = df1.join(df2, on='value', how='inner')
final_df.write.format('parquet').save(path)
Process 1
Enabled AQE together with spark.sql.adaptive.skewJoin.enabled=true, spark.sql.adaptive.coalescePartitions.enabled=true, and spark.sql.shuffle.partitions=auto.
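For clarity, this is roughly how those settings were applied, a minimal sketch using the standard Spark/Databricks config keys ('auto' for the shuffle partition count is a Databricks-specific value):
# Process 1: adaptive query execution with skew-join handling (assumed spark.conf.set calls)
spark.conf.set('spark.sql.adaptive.enabled', 'true')
spark.conf.set('spark.sql.adaptive.skewJoin.enabled', 'true')
spark.conf.set('spark.sql.adaptive.coalescePartitions.enabled', 'true')
spark.conf.set('spark.sql.shuffle.partitions', 'auto')  # 'auto' is accepted on Databricks runtimes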
It was running for more than 20 minutes, and I cancelled the job manually.
Process 2
Salting technique: disabled AQE and set the shuffle partition count to 1000.
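Roughly, the session configuration for this run was set like this (a sketch, assuming the standard Spark config keys):
# Process 2: AQE off, fixed shuffle partition count
spark.conf.set('spark.sql.adaptive.enabled', 'false')
spark.conf.set('spark.sql.shuffle.partitions', '1000')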
# explode each df1 row into 2 copies with salt values 0-1
df1 = (df1.withColumn('salt_numbers', F.expr('sequence(0, 1)'))
       .withColumn('salt', F.explode('salt_numbers'))
       .drop('salt_numbers'))
# explode each df2 row into 4 copies with salt values 0-3
df2 = (df2.withColumn('salt_numbers', F.expr('sequence(0, 3)'))
       .withColumn('salt', F.explode('salt_numbers'))
       .drop('salt_numbers'))
After exploding, df1 grows to 600,000,000 rows and df2 to 800,000,000 rows, and the join key becomes on=['value', 'salt'] (sketched below). It ran for more than 30 minutes before I cancelled the job manually.
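The salted join and write then looked like this (a sketch, reusing final_df and path from the code above):
# Join on the original key plus the salt column, then write the result
final_df = df1.join(df2, on=['value', 'salt'], how='inner')
final_df.write.format('parquet').save(path)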
Process 3
Salting technique again, this time with AQE enabled.
df1 = (df1.withColumn('salt_numbers', F.expr('sequence(0, 1)'))
       .withColumn('salt', F.explode('salt_numbers'))
       .drop('salt_numbers'))
df2 = (df2.withColumn('salt_numbers', F.expr('sequence(0, 3)'))
       .withColumn('salt', F.explode('salt_numbers'))
       .drop('salt_numbers'))
The datasets carry the same salt column and the join again uses on=['value', 'salt'], exactly as in Process 2. This run also took around 30 minutes before I cancelled the job manually.
Note: the Databricks cluster has 8 workers with 32 GB of memory and 4 cores each. Please share your ideas on how to run this job efficiently.