MapReduce Fundamentals, Part 12: Using ChainMapper and ChainReducer

發(fā)布時(shí)間:2025/4/16 编程问答 22 豆豆
生活随笔 收集整理的這篇文章主要介紹了 MapReduce基础开发之十二ChainMapper和ChainReducer使用 小編覺(jué)得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.
1、需求場(chǎng)景:
? ?過(guò)濾無(wú)意義的單詞后再進(jìn)行文本詞頻統(tǒng)計(jì)。處理流程是:
1)第一個(gè)Map使用無(wú)意義單詞數(shù)組過(guò)濾輸入流;
2)第二個(gè)Map將過(guò)濾后的單詞加上出現(xiàn)一次的標(biāo)簽;
3)最后Reduce輸出詞頻;
MapReduce is designed for high-throughput, high-latency batch processing and has weak support for iterating over a data set; the Chain classes are its built-in way to compose several processing steps into one job, running a pipeline of the form [MAP+ / REDUCE MAP*] in which the chained steps pass records in memory instead of writing intermediate results between jobs.
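Besides the ChainMapper.addMapper and ChainReducer.setReducer calls used below, a Map step can also be appended after the Reduce with ChainReducer.addMapper, which is what gives the general [MAP+ / REDUCE MAP*] shape. A minimal sketch, assuming a hypothetical LowerCaseMapper post-processing step that is not part of this article's program:

// Hypothetical post-reduce step: lower-cases each word before final output.
public static class LowerCaseMapper
        extends Mapper<Text, IntWritable, Text, IntWritable> {
    public void map(Text key, IntWritable value, Context context)
            throws IOException, InterruptedException {
        context.write(new Text(key.toString().toLowerCase()), value);
    }
}

// Driver side, after ChainReducer.setReducer(...):
ChainReducer.addMapper(job, LowerCaseMapper.class,
        Text.class, IntWritable.class,  // input: the Reducer's output types
        Text.class, IntWritable.class,  // output: the job's final types
        new Configuration(false));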


2. The full code is as follows:

package com.word;

import java.io.IOException;
import java.util.HashSet;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;
import org.apache.hadoop.mapreduce.lib.chain.ChainReducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class ChainWordCount {

    // First Map: filters out meaningless (stop) words.
    public static class FilterMapper extends Mapper<Object, Text, Text, Text> {
        private final static String[] StopWord =
                {"a", "an", "the", "of", "in", "to", "and", "at", "as", "with"};
        private HashSet<String> StopWordSet;
        private Text word = new Text();

        // setup() runs once, right after the map task starts.
        public void setup(Context context) throws IOException, InterruptedException {
            StopWordSet = new HashSet<String>();
            for (int i = 0; i < StopWord.length; i++) {
                StopWordSet.add(StopWord[i]);
            }
        }

        // Drop the stop words from the input stream.
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                String aword = itr.nextToken(); // next token
                if (!StopWordSet.contains(aword)) { // keep only non-stop words
                    word.set(aword);
                    context.write(word, new Text(""));
                }
            }
        }
    }

    // Second Map: tags each surviving word with a count of one.
    public static class TokenizerMapper extends Mapper<Text, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);

        public void map(Text key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(key, one);
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: ChainWordCount <in> <out>");
            System.exit(2);
        }
        Job job = Job.getInstance(conf, "ChainWordCount");
        job.setJarByClass(ChainWordCount.class);

        // Add the first map to the job's chain. The new-API chain classes take
        // a plain Configuration per element (not the old-API JobConf).
        Configuration map1Conf = new Configuration(false);
        ChainMapper.addMapper(job, FilterMapper.class,
                Object.class, Text.class, Text.class, Text.class, map1Conf);

        // Add the second map to the chain.
        Configuration map2Conf = new Configuration(false);
        ChainMapper.addMapper(job, TokenizerMapper.class,
                Text.class, Text.class, Text.class, IntWritable.class, map2Conf);

        // Set the word-count Reduce as the chain's single Reducer.
        Configuration redConf = new Configuration(false);
        ChainReducer.setReducer(job, IntSumReducer.class,
                Text.class, IntWritable.class, Text.class, IntWritable.class, redConf);

        // Declare the types leaving the last chained Mapper and the Reducer,
        // so the shuffle serializes Text/IntWritable instead of the defaults.
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setNumReduceTasks(1); // a single reduce output file
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
/* Input file for the count: hadoop fs -put /var/log/boot.log /tmp/fjs/
 * Output directory:         /tmp/fjs/cwcout
 * Run command:              hadoop jar /mnt/ChainWordCount.jar /tmp/fjs/boot.log /tmp/fjs/cwcout
 */
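Note that each addMapper/setReducer call takes its own Configuration (created with new Configuration(false) so it carries no defaults). Settings placed there are overlaid on the job configuration for that chain element only, which is how parameters can be passed to one step without leaking into the others. A minimal sketch, using a hypothetical stopwords.extra property that the code above does not actually read:

// Driver side: this setting is visible only to FilterMapper.
Configuration map1Conf = new Configuration(false);
map1Conf.set("stopwords.extra", "is,are,was"); // hypothetical property
ChainMapper.addMapper(job, FilterMapper.class,
        Object.class, Text.class, Text.class, Text.class, map1Conf);

// Inside FilterMapper.setup(): read it back from the merged configuration.
String extra = context.getConfiguration().get("stopwords.extra", "");
for (String w : extra.split(",")) {
    if (!w.isEmpty()) StopWordSet.add(w);
}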
