MapReduce Java API Example - Counting Word Frequency
Scenario
Common methods for working with HDFS through the Java API on Windows:
https://blog.csdn.net/BADAO_LIUMANG_QIZHI/article/details/119382108
Building on the development environment already configured in the post above, this post uses the Java API to run a MapReduce job that counts how many times each word appears.
The Hadoop cluster here was built with Hadoop 2.8.0, so create a new Maven project and add the matching dependencies:

<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.8.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.8.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.8.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs-client -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs-client</artifactId>
    <version>2.8.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-mapreduce-client-core -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>2.8.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/junit/junit -->
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>
Implementation
Count how many times each word appears in a specified directory or file; by default the results are output in dictionary order of the words.
The job uses the default text input format, which reads the input one line at a time. Each line can then be split on \t, or tokenized with the StringTokenizer class (which splits on spaces, \t, \n, and so on). On the Reduce side, values with the same key, i.e. the same word, are grouped together and summed to produce the occurrence count.
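As a quick illustration of this splitting behavior, here is a minimal standalone sketch (not part of the job itself; the sample string is made up for demonstration):

import java.util.StringTokenizer;

public class TokenizerDemo {
    public static void main(String[] args) {
        // StringTokenizer's default delimiter set is " \t\n\r\f"
        StringTokenizer st = new StringTokenizer("hello\tworld hello");
        while (st.hasMoreTokens()) {
            System.out.println(st.nextToken()); // prints hello, world, hello on separate lines
        }
    }
}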
1. First, create the dataset words.txt.
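The exact contents are up to you; a minimal hypothetical example (the words here are assumptions for illustration only) could be:

hello world
hello hadoop
world count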
2. Write the map class by extending the Mapper class and implementing its map function.
The first type parameter of Mapper is Object (commonly used; LongWritable also works).
The value corresponding to the first parameter is the line's byte offset.
The second type parameter is usually Text; Text is Hadoop's writable counterpart of the String type.
The third type parameter is the data type of the output key.
The fourth type parameter is the data type of the output value; IntWritable is Hadoop's writable counterpart of the int type.
package com.badao.mapreducedemo;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;
import java.util.StringTokenizer;

// 1. Implement the map function by extending the Mapper class.
//    The first type parameter is Object (LongWritable also works);
//    its value is the byte offset of the current line.
// 2. The second type parameter is usually Text, Hadoop's writable String type;
//    its value is the current line's contents.
// 3. The third type parameter is the data type of the output key.
// 4. The fourth type parameter is the data type of the output value;
//    IntWritable is Hadoop's writable int type.
public class WorldCountMapper extends Mapper<Object, Text, Text, IntWritable> {

    public final static IntWritable one = new IntWritable(1);
    public Text word = new Text();

    // key is the line offset; value is the line's contents
    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        StringTokenizer stringTokenizer = new StringTokenizer(value.toString());
        while (stringTokenizer.hasMoreTokens()) {
            // nextToken() returns a String; set() converts it to the Text type
            word.set(stringTokenizer.nextToken());
            // emit the (word, 1) pair as the map's intermediate output
            context.write(word, one);
        }
    }
}

A very important point here: make sure you do not import the wrong packages!
Especially Text, which must come from org.apache.hadoop.io.
3. Write the reduce class.
Extend the Reducer class and implement its reduce function.
The first type parameter is the data type of the input key, i.e. the map's intermediate output key type.
The second type parameter is the data type of the input value, i.e. the map's intermediate output value type.
The third type parameter is the data type of the output key; it must match job.setOutputKeyClass(Text.class).
The fourth type parameter is the data type of the output value; it must match job.setOutputValueClass(IntWritable.class).
package com.badao.mapreducedemo;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

// The first type parameter is the input key's data type, i.e. the map's intermediate output key type.
// The second type parameter is the input value's data type, i.e. the map's intermediate output value type.
// The third type parameter is the output key's data type; it must match job.setOutputKeyClass(Text.class).
// The fourth type parameter is the output value's data type; it must match job.setOutputValueClass(IntWritable.class).
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    public IntWritable result = new IntWritable();

    // key is the word; values is the list of counts recorded for that word
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            // get() extracts the int held inside the IntWritable
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}

4. Write the job class.
Here the Windows local-file case and the cluster-HDFS case are pulled out into two separate methods.
Local file
public static void wordCountLocal() throws IOException, ClassNotFoundException, InterruptedException {
    Configuration conf = new Configuration();
    // Instantiate a job; "wordcount" is the job's name
    Job job = Job.getInstance(conf, "wordcount");
    // Specify which class is used to locate the corresponding jar
    job.setJarByClass(WorldCountJob.class);
    // Set the Mapper class for the job
    job.setMapperClass(WorldCountMapper.class);
    // Use Hadoop's built-in IntSumReducer as a combiner to pre-sum counts on the map side
    job.setCombinerClass(IntSumReducer.class);
    // Set the Reducer class for the job
    job.setReducerClass(WordCountReducer.class);
    // Set the key class for the job's output
    job.setOutputKeyClass(Text.class);
    // Set the value class for the job's output
    job.setOutputValueClass(IntWritable.class);
    // Set the input path; it must be an existing directory/file
    FileInputFormat.addInputPath(job, new Path("D:\\words.txt"));
    // Set the output path
    FileOutputFormat.setOutputPath(job, new Path("D:\\badao"));
    job.waitForCompletion(true);
}

Notes:
The input path must already exist; it is the dataset created above.
The output path must not exist, otherwise the job fails with a "path already exists" error (a workaround sketch follows these notes).
Note that FileOutputFormat here must be imported from org.apache.hadoop.mapreduce.lib.output (as in the full code below), not from the older org.apache.hadoop.mapred package.
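To avoid the "path already exists" error during repeated test runs, the previous output directory can be removed before the job is submitted. Here is a minimal sketch using the FileSystem API; the D:\badao path matches the local example above, and wiring this into wordCountLocal is an assumption, not part of the original code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CleanOutputDir {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path out = new Path("D:\\badao");
        // Delete the previous run's output directory if present; true = recursive
        if (fs.exists(out)) {
            fs.delete(out, true);
        }
    }
}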
Then call this method from the main method.
After a successful run, a part-r-00000 file is generated under the badao directory on the D drive; this file holds the word-count results.
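Assuming the hypothetical words.txt shown earlier, part-r-00000 would look like this (one word per line, followed by a tab and its count, in dictionary order):

count	1
hadoop	1
hello	2
world	2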
Cluster HDFS
public static void wordCountColony() throws IOException, ClassNotFoundException, InterruptedException {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://192.168.148.128:9000");
    System.setProperty("HADOOP_USER_NAME", "root");
    // Instantiate a job; "wordcount" is the job's name
    Job job = Job.getInstance(conf, "wordcount");
    // Specify which class is used to locate the corresponding jar
    job.setJarByClass(WorldCountJob.class);
    // Set the Mapper class for the job
    job.setMapperClass(WorldCountMapper.class);
    // Use Hadoop's built-in IntSumReducer as a combiner to pre-sum counts on the map side
    job.setCombinerClass(IntSumReducer.class);
    // Set the Reducer class for the job
    job.setReducerClass(WordCountReducer.class);
    // Set the key class for the job's output
    job.setOutputKeyClass(Text.class);
    // Set the value class for the job's output
    job.setOutputValueClass(IntWritable.class);
    // Set the input path; it must be an existing directory/file
    FileInputFormat.addInputPath(job, new Path("/words.txt"));
    // Set the output path
    FileOutputFormat.setOutputPath(job, new Path("/badao9"));
    job.waitForCompletion(true);
}

Then upload the dataset to the cluster HDFS, for example with hdfs dfs -put words.txt /.
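Alternatively, the upload can be done with the HDFS Java API covered in the earlier post. A minimal sketch, assuming the same NameNode address as wordCountColony and that the dataset sits at D:\words.txt:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UploadWords {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.148.128:9000"); // same NameNode as wordCountColony
        System.setProperty("HADOOP_USER_NAME", "root");
        FileSystem fs = FileSystem.get(conf);
        // Copy the local dataset to the HDFS root, where the job expects /words.txt
        fs.copyFromLocalFile(new Path("D:\\words.txt"), new Path("/words.txt"));
        fs.close();
    }
}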
Then run this method from the main method.
After the run finishes, view the result file, for example with hdfs dfs -cat /badao9/part-r-00000.
Full job class code:
package com.badao.mapreducedemo;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

import java.io.IOException;

public class WorldCountJob {
    public static void main(String[] args) throws InterruptedException, IOException, ClassNotFoundException {
        wordCountLocal();
    }

    public static void wordCountLocal() throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        // Instantiate a job; "wordcount" is the job's name
        Job job = Job.getInstance(conf, "wordcount");
        // Specify which class is used to locate the corresponding jar
        job.setJarByClass(WorldCountJob.class);
        // Set the Mapper class for the job
        job.setMapperClass(WorldCountMapper.class);
        // Use Hadoop's built-in IntSumReducer as a combiner to pre-sum counts on the map side
        job.setCombinerClass(IntSumReducer.class);
        // Set the Reducer class for the job
        job.setReducerClass(WordCountReducer.class);
        // Set the key class for the job's output
        job.setOutputKeyClass(Text.class);
        // Set the value class for the job's output
        job.setOutputValueClass(IntWritable.class);
        // Set the input path; it must be an existing directory/file
        FileInputFormat.addInputPath(job, new Path("D:\\words.txt"));
        // Set the output path
        FileOutputFormat.setOutputPath(job, new Path("D:\\badao"));
        job.waitForCompletion(true);
    }

    public static void wordCountColony() throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.148.128:9000");
        System.setProperty("HADOOP_USER_NAME", "root");
        // Instantiate a job; "wordcount" is the job's name
        Job job = Job.getInstance(conf, "wordcount");
        // Specify which class is used to locate the corresponding jar
        job.setJarByClass(WorldCountJob.class);
        // Set the Mapper class for the job
        job.setMapperClass(WorldCountMapper.class);
        // Use Hadoop's built-in IntSumReducer as a combiner to pre-sum counts on the map side
        job.setCombinerClass(IntSumReducer.class);
        // Set the Reducer class for the job
        job.setReducerClass(WordCountReducer.class);
        // Set the key class for the job's output
        job.setOutputKeyClass(Text.class);
        // Set the value class for the job's output
        job.setOutputValueClass(IntWritable.class);
        // Set the input path; it must be an existing directory/file
        FileInputFormat.addInputPath(job, new Path("/words.txt"));
        // Set the output path
        FileOutputFormat.setOutputPath(job, new Path("/badao9"));
        job.waitForCompletion(true);
    }
}

Sample code download:
https://download.csdn.net/download/BADAO_LIUMANG_QIZHI/20718869