MapReduce Basics, Part 12: Using ChainMapper and ChainReducer
1、需求場景:
? ?過濾無意義的單詞后再進(jìn)行文本詞頻統(tǒng)計。處理流程是:
1)第一個Map使用無意義單詞數(shù)組過濾輸入流;
2)第二個Map將過濾后的單詞加上出現(xiàn)一次的標(biāo)簽;
3)最后Reduce輸出詞頻;
MapReduce適合高吞吐高延遲的批處理,對于數(shù)據(jù)集迭代支持比較弱,唯有這個Chain具備。
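A chained job follows the pattern [MAP+ / REDUCE MAP*]: one or more mappers, at most one reducer, and optionally more mappers after it, all inside a single job, with records handed from stage to stage in memory rather than through intermediate files. The program below uses only ChainMapper.addMapper and ChainReducer.setReducer, but ChainReducer.addMapper can also append mappers after the reducer. Here is a minimal sketch of that call; it is not part of the original program, `job` stands for the Job configured in the driver in section 2, and PostMapper is a hypothetical pass-through mapper:

// Illustrative only: appending a hypothetical mapper after the reducer.
public static class PostMapper extends Mapper<Text, IntWritable, Text, IntWritable> {
    @Override
    public void map(Text key, IntWritable value, Context context)
            throws IOException, InterruptedException {
        context.write(key, value); // reformat or re-filter the reduced records here
    }
}

// In the driver, after ChainReducer.setReducer(...):
Configuration postConf = new Configuration(false);
ChainReducer.addMapper(job, PostMapper.class,
        Text.class, IntWritable.class, // input types = the reducer's output types
        Text.class, IntWritable.class, // output types of the appended mapper
        postConf);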
2、具體代碼如下:
package com.word;

import java.io.IOException;
import java.util.HashSet;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;
import org.apache.hadoop.mapreduce.lib.chain.ChainReducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class ChainWordCount {

    // First mapper in the chain: filters stop words out of the input stream
    public static class FilterMapper extends Mapper<Object, Text, Text, Text> {
        private final static String[] StopWord =
                {"a", "an", "the", "of", "in", "to", "and", "at", "as", "with"};
        private HashSet<String> StopWordSet;
        private Text word = new Text();

        // setup() runs once, right after the map task starts
        @Override
        public void setup(Context context) throws IOException, InterruptedException {
            StopWordSet = new HashSet<String>();
            for (int i = 0; i < StopWord.length; i++) {
                StopWordSet.add(StopWord[i]);
            }
        }

        // Drop the stop words; pass every other token downstream
        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                String aword = itr.nextToken(); // take the next token
                if (!StopWordSet.contains(aword)) { // keep only non-stop words
                    word.set(aword);
                    context.write(word, new Text(""));
                }
            }
        }
    }

    // Second mapper in the chain: tags each surviving word with a count of one
    public static class TokenizerMapper extends Mapper<Text, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);

        @Override
        public void map(Text key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(key, one);
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: ChainWordCount <in> <out>");
            System.exit(2);
        }
        Job job = Job.getInstance(conf, "ChainWordCount");
        job.setJarByClass(ChainWordCount.class);

        // Add the first mapper to the chain
        Configuration map1Conf = new Configuration(false);
        ChainMapper.addMapper(job, FilterMapper.class,
                Object.class, Text.class, Text.class, Text.class, map1Conf);
        // Add the second mapper to the chain
        Configuration map2Conf = new Configuration(false);
        ChainMapper.addMapper(job, TokenizerMapper.class,
                Text.class, Text.class, Text.class, IntWritable.class, map2Conf);
        // Set the word-count reducer as the chain's single reducer
        Configuration redConf = new Configuration(false);
        ChainReducer.setReducer(job, IntSumReducer.class,
                Text.class, IntWritable.class, Text.class, IntWritable.class, redConf);

        job.setNumReduceTasks(1); // one reduce task, hence a single output file

        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

/*
 * Upload the input file: hadoop fs -put /var/log/boot.log /tmp/fjs/
 * Output directory:      /tmp/fjs/cwcout
 * Run the job:           hadoop jar /mnt/ChainWordCount.jar /tmp/fjs/boot.log /tmp/fjs/cwcout
 */
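To inspect the result after the job finishes, read the reducer's output file; with a single reduce task the new API conventionally names it part-r-00000 (the exact name depends on the Hadoop version in use, so treat it as an assumption):

hadoop fs -cat /tmp/fjs/cwcout/part-r-00000

As a quick sanity check of the chain, consider a hypothetical input line "the cat sat at a door": FilterMapper drops "the", "at", and "a"; TokenizerMapper tags cat, sat, and door with a count of one; and IntSumReducer sums the tags into the final frequencies. Since both chained mappers run inside the same map task, records flow between them in memory and nothing is written to HDFS between stages, which is the main advantage over running separate jobs.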