
A Classic MapReduce Example: Word Count

Published: 2023/12/4
Input file file.txt:

hello hadoop
hello word
this is my first hadoop program
Analysis: each line of the document is split on whitespace to obtain its words. The map phase turns the input into key/value pairs of the following form:

key:hello value:1
key:hadoop value:1
key:hello value:1
key:word value:1
key:this value:1
key:is value:1
key:my value:1
key:first value:1
key:hadoop value:1
key:program value:1

After Hadoop's shuffle-and-sort phase, the values for each key are grouped and fed to Reduce as:

key:hello values:{1,1}
key:hadoop values:{1,1}
key:word values:{1}
key:this values:{1}
key:is values:{1}
key:my values:{1}
key:first values:{1}
key:program values:{1}

This is why Reduce receives its values as an Iterable<IntWritable>. Inside Reduce we simply iterate over the values and sum them to obtain the number of occurrences of each word.

Implementation:

package com.bwzy.hadoop;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Read one line and split it on whitespace
            // (StringTokenizer's default delimiter set)
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one); // emit (word, 1) to the reducer
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0; // accumulate the occurrences of this word
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        int ret = ToolRunner.run(new WordCount(), args);
        System.exit(ret);
    }

    @Override
    public int run(String[] args) throws Exception {
        Job job = new Job(getConf());
        job.setJobName("wordcount");
        job.setOutputKeyClass(Text.class);          // output key type
        job.setOutputValueClass(IntWritable.class); // output value type
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        boolean success = job.waitForCompletion(true);
        return success ? 0 : 1;
    }
}
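The map/shuffle/sum behavior described above can be sketched without a cluster. The following plain-Java snippet is an illustration only, not part of the original program (the class name LocalWordCount and its helper are made up here): it tokenizes the sample input exactly the way the mapper does, and aggregates the per-word 1s the way the reducer does.

```java
import java.util.StringTokenizer;
import java.util.TreeMap;

public class LocalWordCount {
    // Simulate map (tokenize) plus shuffle/reduce (group by word, sum the 1s)
    public static TreeMap<String, Integer> count(String[] lines) {
        // TreeMap keeps keys sorted, mirroring the sorted reducer output
        TreeMap<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            // Whitespace split, same as the mapper's StringTokenizer usage
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                counts.merge(tokenizer.nextToken(), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] file = {
            "hello hadoop",
            "hello word",
            "this is my first hadoop program"
        };
        // Prints each word and its count, one per line, sorted by word:
        // first 1, hadoop 2, hello 2, is 1, my 1, program 1, this 1, word 1
        count(file).forEach((word, n) -> System.out.println(word + "\t" + n));
    }
}
```

This only mirrors the logic; the real job additionally partitions keys across reducers and serializes values as IntWritable.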
To run it:

1. Package the program: select the classes to package -> right-click -> Export -> Java -> JAR file -> enter a save path -> Finish.
2. Copy the jar into the Hadoop installation directory (the program depends on Hadoop's jars).
3. Upload the input file (the file holding the words, assumed to be at /home/user/Document/file1.txt) to the target HDFS directory. Create the HDFS directory (with Hadoop already running):

hadoop fs -mkdir /自定義/自定義/input
Upload the local file to HDFS:

hadoop fs -put /home/user/Document/file1.txt /自定義/自定義/input

(hadoop fs -copyFromLocal works equivalently for a local source file.)
4. Run the MapReduce program:

hadoop jar /home/user/hadoop-1.0.4/WordCount.jar com.bwzy.hadoop.WordCount /自定義/自定義/input /自定義/自定義/output
說明:hadoop運行后會自動創建/自定義/自定義/output目錄,在該目錄下會有兩個文件,其中一個文件中存放來MapReduce運行的結果。如果重新運行該程序,需要將/自定義/自定義/output目錄刪除,否則系統認為該結果已經存在了。5:運行的結果為hello 2hadoop 2word 1this 1
is 1my 1first 1program 1


Reposted from: https://blog.51cto.com/3157689/1350178
