Writing a WordCount Spark Program in Java, and WordCount in Scala
1. Create a Maven project with the following coordinates:
<groupId>cn.toto.spark</groupId>
<artifactId>bigdata</artifactId>
<version>1.0-SNAPSHOT</version>

2. Configure the location of the local Maven repository:
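This step boils down to pointing Maven at a local repository directory. A minimal sketch of the relevant setting, placed in the Maven installation's conf/settings.xml or in ~/.m2/settings.xml (the path below is an example; adjust it to your machine):

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
    <!-- Example path; point this at your own repository directory -->
    <localRepository>E:\maven\repository</localRepository>
</settings>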
3. Write the Maven POM file:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>cn.toto.spark</groupId>
    <artifactId>bigdata</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>1.7</maven.compiler.source>
        <maven.compiler.target>1.7</maven.compiler.target>
        <encoding>UTF-8</encoding>
        <scala.version>2.10.6</scala.version>
        <spark.version>1.6.2</spark.version>
        <hadoop.version>2.6.4</hadoop.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
    </dependencies>

    <build>
        <sourceDirectory>src/main/scala</sourceDirectory>
        <testSourceDirectory>src/test/scala</testSourceDirectory>
        <plugins>
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                        <configuration>
                            <args>
                                <arg>-make:transitive</arg>
                                <arg>-dependencyfile</arg>
                                <arg>${project.build.directory}/.scala_dependencies</arg>
                            </args>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.4.3</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
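With this POM, packaging the project (mvn clean package) has the scala-maven-plugin compile the Scala sources and the shade plugin bundle all dependencies into a single runnable jar; the META-INF signature files are excluded so the shaded jar does not fail signature verification at runtime. By default the result lands at target/bigdata-1.0-SNAPSHOT.jar. One caveat: sourceDirectory is set to src/main/scala, so the Java class in the next step also needs to live under src/main/scala (or you would add src/main/java as an extra source root).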
4. Write the Java code

package cn.toto.spark;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;

import java.util.Arrays;

/**
 * Created by toto on 2017/7/6.
 */
public class JavaWordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("JavaWordCount");
        // Create the JavaSparkContext
        JavaSparkContext jsc = new JavaSparkContext(conf);
        // Read the input data
        JavaRDD<String> lines = jsc.textFile(args[0]);
        // Split each line into words
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterable<String> call(String line) throws Exception {
                return Arrays.asList(line.split(" "));
            }
        });
        // Record a 1 for every occurrence of a word
        JavaPairRDD<String, Integer> wordAndOne = words.mapToPair(new PairFunction<String, String, Integer>() {
            @Override
            public Tuple2<String, Integer> call(String word) throws Exception {
                return new Tuple2<String, Integer>(word, 1);
            }
        });
        // Group and aggregate the counts per word
        JavaPairRDD<String, Integer> result = wordAndOne.reduceByKey(new Function2<Integer, Integer, Integer>() {
            @Override
            public Integer call(Integer i1, Integer i2) throws Exception {
                return i1 + i2;
            }
        });
        // Invert each pair to (count, word)
        JavaPairRDD<Integer, String> swapedPair = result.mapToPair(new PairFunction<Tuple2<String, Integer>, Integer, String>() {
            @Override
            public Tuple2<Integer, String> call(Tuple2<String, Integer> tp) throws Exception {
                return new Tuple2<Integer, String>(tp._2, tp._1);
            }
        });
        // Sort descending by count, then swap back to (word, count)
        JavaPairRDD<String, Integer> finalResult = swapedPair.sortByKey(false)
                .mapToPair(new PairFunction<Tuple2<Integer, String>, String, Integer>() {
                    @Override
                    public Tuple2<String, Integer> call(Tuple2<Integer, String> tp) throws Exception {
                        return tp.swap();
                    }
                });
        // Save the result
        finalResult.saveAsTextFile(args[1]);
        jsc.stop();
    }
}
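A note on the swap/sort/swap-back pattern: JavaPairRDD exposes sortByKey but no direct sort-by-value, so the code inverts each pair to (count, word), sorts the keys in descending order, and restores the original shape with Tuple2.swap(). The Scala version in step 8 sidesteps this with sortBy.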
5. Prepare the data

Place the input data under E:\wordcount\input.
The files there are plain text, with words separated by spaces on each line (which is what line.split(" ") expects).
6. Pass the program arguments through the IDE:
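As a sketch (paths are examples): in an IntelliJ-style run configuration, args[0] is the input directory and args[1] the output directory, so the program arguments would be

E:\wordcount\input E:\wordcount\output

Two things to watch: the SparkConf above never calls setMaster, so a local IDE run also needs the master supplied, for example via the VM option -Dspark.master=local[2]; and the output directory must not already exist, or saveAsTextFile will fail. On a cluster, the shaded jar would instead be submitted along these lines (master URL and paths are examples):

spark-submit --master spark://master:7077 \
  --class cn.toto.spark.JavaWordCount \
  target/bigdata-1.0-SNAPSHOT.jar \
  hdfs:///wordcount/input hdfs:///wordcount/output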
7. Run the program and check the results:
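saveAsTextFile writes one part file per partition (part-00000, part-00001, ...) plus a _SUCCESS marker into the output directory. Each line is a Tuple2 rendered as (word,count), sorted by count in descending order; with a hypothetical input the files would contain something like:

(hello,5)
(spark,3)
(world,2)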
8. WordCount in Scala
The word-count code is as follows:
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

/**
  * Created by ZhaoXing on 2016/6/30.
  */
object ScalaWordCount {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("ScalaWordCount")
    // The all-important SparkContext
    val sc = new SparkContext(conf)

    // textFile produces two RDDs: HadoopRDD[LongWritable, Text] -> MapPartitionsRDD[String]
    val lines: RDD[String] = sc.textFile(args(0))

    // flatMap produces a MapPartitionsRDD[String]
    val words: RDD[String] = lines.flatMap(_.split(" "))

    // map produces a MapPartitionsRDD[(String, Int)]
    val wordAndOne: RDD[(String, Int)] = words.map((_, 1))

    val counts: RDD[(String, Int)] = wordAndOne.reduceByKey(_ + _)

    // Sort by count, descending
    val sortedCounts: RDD[(String, Int)] = counts.sortBy(_._2, false)

    // Save the sorted result to HDFS
    sortedCounts.saveAsTextFile(args(1))

    // Release the SparkContext
    sc.stop()
  }
}
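Unlike the Java version, Scala's RDD.sortBy takes a keying function directly, so no manual pair-swapping is needed. Internally it performs the same trick; roughly:

// Roughly what counts.sortBy(_._2, false) does under the hood:
counts.keyBy(_._2).sortByKey(ascending = false).values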