
MapReduce Java API Example: Counting Word Frequency

Published: 2025/3/19, by 豆豆

Scenario

Common ways to work with HDFS through the Java API on Windows:

https://blog.csdn.net/BADAO_LIUMANG_QIZHI/article/details/119382108

Building on the development environment configured in the article above, this article uses the Java API to run a MapReduce job that counts how often each word occurs.

The Hadoop cluster here runs Hadoop 2.8.0, so create a new Maven project and add the matching dependencies:

<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.8.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.8.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.8.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs-client -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs-client</artifactId>
    <version>2.8.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-mapreduce-client-core -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>2.8.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/junit/junit -->
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>


Implementation

The goal is to count word occurrences in a specified directory or file; by default, the output is sorted by word in dictionary order.
Input is read as plain text, one line at a time. Each line is then split on \t, or with the StringTokenizer class
(which splits on spaces, \t, \n, and so on). On the reduce side, values with the same key, i.e. the
same word, are gathered together and summed to produce the occurrence count.
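Before wiring up the Hadoop classes, the map/sort/reduce flow described above can be sketched in plain JDK Java. This is a minimal local simulation, not the actual MapReduce job: the TreeMap stands in for the framework's sorted shuffle, and the helper names here are illustrative.

```java
import java.util.Map;
import java.util.StringTokenizer;
import java.util.TreeMap;

public class LocalWordCount {
    // "Map" phase: tokenize each line; "reduce" phase: sum per word.
    // A TreeMap keeps keys sorted, mirroring MapReduce's dictionary-ordered output.
    public static Map<String, Integer> count(String[] lines) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            StringTokenizer st = new StringTokenizer(line); // splits on space, \t, \n, ...
            while (st.hasMoreTokens()) {
                counts.merge(st.nextToken(), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] lines = {"hello hadoop", "hello\tmapreduce"};
        // Prints one "word<TAB>count" line per word, sorted by word
        count(lines).forEach((w, c) -> System.out.println(w + "\t" + c));
    }
}
```

The real job distributes exactly this logic: the tokenizing loop becomes the Mapper below, and the summation becomes the Reducer.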

1. First, create the dataset words.txt.

2. Write the map class by extending Mapper and implementing its map function.

The first type parameter of Mapper is usually Object (Long also works); its value is the byte offset of the current line.

The second type parameter is usually Text, Hadoop's writable implementation of String; its value is the content of the current line.

The third type parameter is the output key type.

The fourth type parameter is the output value type; IntWritable is Hadoop's writable implementation of int.

package com.badao.mapreducedemo;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;
import java.util.StringTokenizer;

// Type parameters: input key (Object, the line offset; Long also works),
// input value (Text, one line of input), output key type, output value type.
// Text is Hadoop's writable counterpart of String; IntWritable of int.
public class WorldCountMapper extends Mapper<Object, Text, Text, IntWritable> {

    public final static IntWritable one = new IntWritable(1);
    public Text word = new Text();

    // key is the line offset; value is the line's content
    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        StringTokenizer stringTokenizer = new StringTokenizer(value.toString());
        while (stringTokenizer.hasMoreTokens()) {
            // nextToken() returns a String; set() converts it to the Text type
            word.set(stringTokenizer.nextToken());
            // emit the intermediate (word, 1) pair
            context.write(word, one);
        }
    }
}
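The mapper above relies on StringTokenizer's default delimiters, so no explicit split character is needed. A small pure-JDK sketch (the class name is illustrative) of that behavior:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

public class TokenizerDemo {
    // StringTokenizer's default delimiters are space, \t, \n, \r, and \f;
    // each token it yields is what the mapper passes to word.set(...).
    public static List<String> tokens(String line) {
        List<String> result = new ArrayList<>();
        StringTokenizer st = new StringTokenizer(line);
        while (st.hasMoreTokens()) {
            result.add(st.nextToken());
        }
        return result;
    }

    public static void main(String[] args) {
        // Splits on both the space and the tab
        System.out.println(tokens("hello world\thello"));
    }
}
```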

One very important point: make sure you import the correct packages, especially for Text, which must come from org.apache.hadoop.io.

3. Write the reduce class by extending Reducer and implementing its reduce function.

The first type parameter is the input key type, i.e. the map phase's intermediate output key type.

The second type parameter is the input value type, i.e. the map phase's intermediate output value type.

The third type parameter is the output key type; it must match job.setOutputKeyClass(Text.class).

The fourth type parameter is the output value type; it must match job.setOutputValueClass(IntWritable.class).

package com.badao.mapreducedemo;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

// First type parameter: input key type (the map output key type)
// Second: input value type (the map output value type)
// Third: output key type, must match job.setOutputKeyClass(Text.class)
// Fourth: output value type, must match job.setOutputValueClass(IntWritable.class)
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    public IntWritable result = new IntWritable();

    // key is the word; values holds the counts emitted for that word
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            // get() extracts the int value from the IntWritable
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}
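Stripped of the Hadoop types, the reduce step is just a summation over the values grouped under one key. A minimal pure-JDK analogue (the class name is illustrative, and plain Integer stands in for IntWritable):

```java
import java.util.List;

public class ReduceSumDemo {
    // Mirrors WordCountReducer.reduce: sum the 1s emitted for a single word.
    public static int reduce(Iterable<Integer> values) {
        int sum = 0;
        for (int v : values) {
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        // For the key "hello", the map phase emitted (hello, 1) three times
        System.out.println("hello\t" + reduce(List.of(1, 1, 1)));
    }
}
```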

4. Write the job class.

The Windows local-file case and the cluster-HDFS case are split into two separate methods.

Local file:

public static void wordCountLocal() throws IOException, ClassNotFoundException, InterruptedException {
    Configuration conf = new Configuration();
    // Instantiate a job; "wordcount" is the job name
    Job job = Job.getInstance(conf, "wordcount");
    // The class used to locate the containing jar
    job.setJarByClass(WorldCountJob.class);
    // Set the Mapper class for the job
    job.setMapperClass(WorldCountMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    // Set the Reducer class for the job
    job.setReducerClass(WordCountReducer.class);
    // Set the output key class for the job
    job.setOutputKeyClass(Text.class);
    // Set the output value class for the job
    job.setOutputValueClass(IntWritable.class);
    // Input path: an existing file or directory
    FileInputFormat.addInputPath(job, new Path("D:\\words.txt"));
    // Output path for the job
    FileOutputFormat.setOutputPath(job, new Path("D:\\badao"));
    job.waitForCompletion(true);
}

Notes:

The input path must exist; it is the dataset created above.

The output path must not exist yet, otherwise the job fails with an error that the path already exists.
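One way to make reruns convenient is to remove the old output directory before submitting the job. For the local-file case this can be done with plain JDK file operations; the class and method names here are illustrative (on HDFS you would use FileSystem.delete(path, true) instead):

```java
import java.io.File;

public class OutputPathGuard {
    // Recursively delete a local output directory so the job can rerun.
    public static boolean deleteRecursively(File dir) {
        File[] children = dir.listFiles();
        if (children != null) {
            for (File child : children) {
                deleteRecursively(child);
            }
        }
        return dir.delete();
    }

    public static void main(String[] args) {
        File out = new File("D:\\badao");
        // Only delete if a previous run left the directory behind
        if (out.exists()) {
            deleteRecursively(out);
        }
    }
}
```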

Note that FileOutputFormat must be imported from org.apache.hadoop.mapreduce.lib.output, as shown in the full job class further down.

Then call this method from main.

After a successful run, the file part-r-00000 is generated under D:\badao; it contains the word counts.

Cluster HDFS

public static void wordCountColony() throws IOException, ClassNotFoundException, InterruptedException {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://192.168.148.128:9000");
    System.setProperty("HADOOP_USER_NAME", "root");
    // Instantiate a job; "wordcount" is the job name
    Job job = Job.getInstance(conf, "wordcount");
    // The class used to locate the containing jar
    job.setJarByClass(WorldCountJob.class);
    // Set the Mapper class for the job
    job.setMapperClass(WorldCountMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    // Set the Reducer class for the job
    job.setReducerClass(WordCountReducer.class);
    // Set the output key class for the job
    job.setOutputKeyClass(Text.class);
    // Set the output value class for the job
    job.setOutputValueClass(IntWritable.class);
    // Input path: an existing file or directory on HDFS
    FileInputFormat.addInputPath(job, new Path("/words.txt"));
    // Output path for the job
    FileOutputFormat.setOutputPath(job, new Path("/badao9"));
    job.waitForCompletion(true);
}

Then upload the dataset to the cluster's HDFS.

Run this method from main.

After the job finishes, inspect the output file.

Complete job class:

package com.badao.mapreducedemo;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

import java.io.IOException;

public class WorldCountJob {
    public static void main(String[] args) throws InterruptedException, IOException, ClassNotFoundException {
        wordCountLocal();
    }

    public static void wordCountLocal() throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        // Instantiate a job; "wordcount" is the job name
        Job job = Job.getInstance(conf, "wordcount");
        // The class used to locate the containing jar
        job.setJarByClass(WorldCountJob.class);
        // Set the Mapper class for the job
        job.setMapperClass(WorldCountMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        // Set the Reducer class for the job
        job.setReducerClass(WordCountReducer.class);
        // Set the output key class for the job
        job.setOutputKeyClass(Text.class);
        // Set the output value class for the job
        job.setOutputValueClass(IntWritable.class);
        // Input path: an existing file or directory
        FileInputFormat.addInputPath(job, new Path("D:\\words.txt"));
        // Output path for the job
        FileOutputFormat.setOutputPath(job, new Path("D:\\badao"));
        job.waitForCompletion(true);
    }

    public static void wordCountColony() throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.148.128:9000");
        System.setProperty("HADOOP_USER_NAME", "root");
        // Instantiate a job; "wordcount" is the job name
        Job job = Job.getInstance(conf, "wordcount");
        // The class used to locate the containing jar
        job.setJarByClass(WorldCountJob.class);
        // Set the Mapper class for the job
        job.setMapperClass(WorldCountMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        // Set the Reducer class for the job
        job.setReducerClass(WordCountReducer.class);
        // Set the output key class for the job
        job.setOutputKeyClass(Text.class);
        // Set the output value class for the job
        job.setOutputValueClass(IntWritable.class);
        // Input path: an existing file or directory on HDFS
        FileInputFormat.addInputPath(job, new Path("/words.txt"));
        // Output path for the job
        FileOutputFormat.setOutputPath(job, new Path("/badao9"));
        job.waitForCompletion(true);
    }
}

Sample code download:

https://download.csdn.net/download/BADAO_LIUMANG_QIZHI/20718869

