WordCount Case Analysis Diagram
An analysis of a plain Spark WordCount example.
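For reference, the program being analyzed is the standard WordCount. A minimal sketch, assuming the input path and the map step that produces (word, 1) pairs (the diagram labels only the other operators):

    import org.apache.spark.{SparkConf, SparkContext}

    object WordCount {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("WordCount").setMaster("local[*]"))
        sc.textFile("file.txt")     // textFile(): HadoopRDD (input path assumed)
          .flatMap(_.split(" "))    // MapPartitionsRDD
          .map((_, 1))              // pair step implied by reduceByKey
          .reduceByKey(_ + _)       // ShuffledRDD
          .foreach(println)         // action: triggers runJob
        sc.stop()
      }
    }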
Outline / Content
Lineage (血统)

The job is a chain of RDDs. Each RDD records its dependencies (deps), so any lost partition can be recomputed from the source:

textFile() → HadoopRDD → flatMap(_.split(" ")) → MapPartitionsRDD → reduceByKey(_+_) → ShuffledRDD → foreach(println)

1. textFile() → HadoopRDD
- SparkContext.textFile() creates a HadoopRDD over the input text file (file.txt); its deps is Nil, so it is the head of the lineage.
- getPartitions(): delegates to InputFormat.getSplits, so each HDFS block of the file (block00 on node01, block01 on node02) becomes one partition.
- compute(p): returns an Iterator that wraps the InputFormat's RecordReader in a NextIterator, streaming records out of the split (see the sketch below).
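How compute(p) streams a split can be modeled with just the standard library. This is a minimal sketch of the NextIterator pattern with hypothetical names; Spark's real NextIterator and HadoopRDD carry far more bookkeeping:

    import java.io.{BufferedReader, FileReader}

    // Simplified NextIterator: getNext() fetches one record and flips
    // `finished` at end of input, mirroring how HadoopRDD wraps a RecordReader.
    abstract class NextIterator[T] extends Iterator[T] {
      private var gotNext = false
      private var nextValue: T = _
      protected var finished = false

      protected def getNext(): T    // read one record, or set finished = true
      protected def close(): Unit   // release the underlying reader

      override def hasNext: Boolean = {
        if (!gotNext && !finished) {
          nextValue = getNext()
          if (finished) close()
          gotNext = true
        }
        !finished
      }

      override def next(): T = {
        if (!hasNext) throw new NoSuchElementException
        gotNext = false
        nextValue
      }
    }

    object SplitDemo {
      // compute(p) for a text split: lines stream through one at a time.
      def computeSplit(path: String): Iterator[String] = new NextIterator[String] {
        private val reader = new BufferedReader(new FileReader(path))
        protected def getNext(): String = {
          val line = reader.readLine()
          if (line == null) finished = true
          line
        }
        protected def close(): Unit = reader.close()
      }
    }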
2. flatMap(_.split(" ")) → MapPartitionsRDD
- Holds a reference to its parent RDD (prev) and the user function f.
- compute(): applies f to the parent partition's iterator (iter) and returns the resulting iterator; no intermediate collection is materialized.
- Because each narrow transformation only chains iterators, consecutive narrow steps form a pipeline (PipeLine) and execute inside a single task, as the sketch below shows.
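The pipelining can be shown self-contained. A toy sketch (a hypothetical ToyRDD, not Spark's API): each compute() merely wraps its parent's iterator, so nothing runs until the final iterator is drained.

    // Toy RDD: compute() chains a function over the parent's iterator.
    abstract class ToyRDD[T] {
      def compute(): Iterator[T]

      // flatMap only records prev and f; no data moves yet (lazy, like Spark).
      def flatMap[U](f: T => Iterator[U]): ToyRDD[U] = {
        val prev = this
        new ToyRDD[U] { def compute(): Iterator[U] = prev.compute().flatMap(f) }
      }
    }

    class SourceRDD(lines: Seq[String]) extends ToyRDD[String] {
      def compute(): Iterator[String] = lines.iterator
    }

    object PipelineDemo extends App {
      val words = new SourceRDD(Seq("a b", "b c")).flatMap(_.split(" ").iterator)
      // One pass over the chained iterators; no intermediate collection exists.
      words.compute().foreach(println)   // prints a, b, b, c
    }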
3. reduceByKey(_+_) → ShuffledRDD
- getDependencies(): returns a ShuffleDependency carrying prevRDD, part (the partitioner), serializer, keyOrdering, aggregator, and mapSideCombine.
- The parent RDD reference is marked @transient, so it is not serialized and shipped with the task.
- Map side: a ShuffleMapTask drains the pipelined iterator and hands each record to the shuffle (shuffleManager) via writer.write().
- Reduce side: compute(p) returns shuffleManager.getReader(...).read(), an iterator over the fetched and aggregated shuffle output for partition p.
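In miniature, the shuffle boundary looks like the sketch below (hypothetical toy types; serializer and keyOrdering are omitted, and the real ShuffleDependency/ShuffledRDD in org.apache.spark carry much more):

    object ShuffleModel extends App {
      // The dependency records the parent plus how to partition and aggregate.
      case class ToyShuffleDep[K, V](
          parent: Seq[(K, V)],       // stands in for the @transient prev RDD
          numPartitions: Int,
          aggregate: (V, V) => V,    // the aggregator, e.g. _ + _ for reduceByKey
          mapSideCombine: Boolean = true)

      // Map side: a ShuffleMapTask drains the pipelined iterator and writes
      // each record into the bucket its key hashes to (writer.write()).
      def shuffleWrite[K, V](dep: ToyShuffleDep[K, V]): Map[Int, Seq[(K, V)]] = {
        val records: Seq[(K, V)] =
          if (dep.mapSideCombine)    // pre-aggregate per key before writing
            dep.parent.groupBy(_._1).map { case (k, vs) =>
              (k, vs.map(_._2).reduce(dep.aggregate)) }.toSeq
          else dep.parent
        records.groupBy { case (k, _) => math.abs(k.hashCode) % dep.numPartitions }
      }

      // Reduce side: ShuffledRDD.compute(p) asks the shuffle reader for
      // partition p and returns an aggregated iterator over its records.
      def shuffleRead[K, V](dep: ToyShuffleDep[K, V],
                            buckets: Map[Int, Seq[(K, V)]], p: Int): Iterator[(K, V)] =
        buckets.getOrElse(p, Nil).groupBy(_._1)
          .map { case (k, vs) => (k, vs.map(_._2).reduce(dep.aggregate)) }
          .iterator

      val dep = ToyShuffleDep[String, Int](Seq(("a", 1), ("b", 1), ("a", 1)), 2, _ + _)
      val buckets = shuffleWrite(dep)
      (0 until dep.numPartitions).foreach(p => shuffleRead(dep, buckets, p).foreach(println))
    }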
4. foreach(println)
- foreach is an action: it calls SparkContext.runJob, which splits the lineage into stages at the shuffle boundary and submits the ShuffleMapTask stage first, then the result stage.
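foreach itself is a thin wrapper over SparkContext.runJob: it ships a per-partition function to the executors. Calling runJob directly makes the submission explicit; a sketch assuming a local master:

    import org.apache.spark.{SparkConf, SparkContext}

    object RunJobDemo extends App {
      val sc = new SparkContext(
        new SparkConf().setAppName("runJob").setMaster("local[2]"))
      val counts = sc.parallelize(Seq("a", "b", "a"))
        .map((_, 1))
        .reduceByKey(_ + _)
      // Equivalent to counts.foreach(println): one task per partition.
      sc.runJob(counts, (iter: Iterator[(String, Int)]) => iter.foreach(println))
      sc.stop()
    }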