

Spark in Java: a common pom.xml, reading a CSV into an RDD, and the final join operation (plus a common Java pom.xml file)


Only a JavaPairRDD can perform a join: a plain JavaRDD has no join method, so each dataset must first be turned into key/value pairs, e.g. with keyBy or mapToPair.
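As a minimal sketch (assuming a local JavaSparkContext named sc, as in the full programs below; the variable names here are illustrative), either conversion makes join available:

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import scala.Tuple2;
import java.util.Arrays;

JavaRDD<String> lines = sc.parallelize(Arrays.asList("zhangsan,11", "lisi,11"));

// mapToPair: build the Tuple2 explicitly, choosing key and value yourself
JavaPairRDD<Integer, String> byAge = lines.mapToPair(s -> {
    String[] parts = s.split(",");
    return new Tuple2<>(Integer.valueOf(parts[1]), parts[0]);
});

// keyBy: derive the key from each element, keeping the whole element as the value
JavaPairRDD<Integer, String> keyed = lines.keyBy(s -> Integer.valueOf(s.split(",")[1]));

// join is defined on JavaPairRDD only
JavaPairRDD<Integer, Tuple2<String, String>> joined = byAge.join(keyed);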

-------------------------------------------------------------------- Option 1 ------------------------------------------------------------------------

The code is as follows:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.log4j.Logger;
import scala.Tuple2;

import java.util.Arrays;
import java.util.List;

public class java_join {

    static class Entity {
        private String name;
        private Integer age;

        public Entity(String name, Integer age) { // constructor
            this.name = name;
            this.age = age;
        }

        public String getName() { return name; }
        public Integer getAge() { return age; }
    }

    public static void main(String[] args) {
        Logger.getLogger("org.apache.hadoop").setLevel(org.apache.log4j.Level.WARN);
        Logger.getLogger("org.apache.spark").setLevel(org.apache.log4j.Level.WARN);
        Logger.getLogger("org.project-spark").setLevel(org.apache.log4j.Level.WARN);

        String appName = "test";
        String master = "local[2]";
        String path = "hdfs://Desktop:9000/rdd3.csv";
        SparkConf conf = new SparkConf().setAppName(appName).setMaster(master)
                .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // keyBy puts the age first (the key) and the whole Entity second (the value)
        JavaPairRDD<Integer, Entity> pairRDD = sc.parallelize(Arrays.asList(
                        new Entity("zhangsan", 11),
                        new Entity("lisi", 11),
                        new Entity("wangwu", 13)))
                .keyBy(Entity::getAge);

        // read the CSV, parse each line into an Entity, and key it by age as well
        JavaPairRDD<Integer, Entity> javaPairRDD = sc.textFile(path).map(line -> {
            String[] strings = line.split(",");
            String name = strings[0];
            Integer age = Integer.valueOf(strings[1]);
            return new Entity(name, age);
        }).keyBy(Entity::getAge);

        System.out.println("--------------------------------------------------------");
        System.out.println(javaPairRDD.collect());

        JavaPairRDD<Integer, Tuple2<Entity, Entity>> collect = pairRDD.join(javaPairRDD);

        System.out.println("------------------------- join result -------------------------");
        List<Tuple2<Integer, Tuple2<Entity, Entity>>> result = collect.collect();
        for (int i = 0; i < result.size(); i++) {
            System.out.print("List[");
            System.out.print(result.get(i)._1);
            System.out.print(",Tuple2(");
            System.out.print(result.get(i)._2._1.name);
            System.out.print(",");
            System.out.print(result.get(i)._2._2.name);
            System.out.println(")]");
        }
    }
}

The output of the run is:

List[11,Tuple2(zhangsan,zhangsan)]
List[11,Tuple2(zhangsan,lisi)]
List[11,Tuple2(lisi,zhangsan)]
List[11,Tuple2(lisi,lisi)]

The contents of rdd3.csv are:

zhangsan,11
lisi,11
wangwu,14
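wangwu appears in none of the joined rows because its age is 13 in the in-memory RDD but 14 in the CSV, so the key has no match on the other side and join, being an inner join, drops it. If unmatched keys should be kept, JavaPairRDD also provides leftOuterJoin; a hedged sketch reusing pairRDD and javaPairRDD from the code above:

import org.apache.spark.api.java.Optional;

// unmatched left-side keys (here wangwu, key 13) survive with Optional.empty() on the right
JavaPairRDD<Integer, Tuple2<Entity, Optional<Entity>>> leftJoined =
        pairRDD.leftOuterJoin(javaPairRDD);
System.out.println(leftJoined.collect());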

---------------------- Option 2 --------------------------

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.log4j.Logger;
import scala.Tuple2;

public class sampling_salting {
    public static void main(String[] args) {
        Logger.getLogger("org.apache.hadoop").setLevel(org.apache.log4j.Level.WARN);
        Logger.getLogger("org.apache.spark").setLevel(org.apache.log4j.Level.WARN);
        Logger.getLogger("org.project-spark").setLevel(org.apache.log4j.Level.WARN);

        SparkConf conf = new SparkConf().setMaster("local").setAppName("join");
        JavaSparkContext sc = new JavaSparkContext(conf);

        String path1 = "hdfs://Desktop:9000/rdd1.csv";
        String path2 = "hdfs://Desktop:9000/rdd2.csv";

        // way 1: an anonymous PairFunction returning Tuple2.apply(...)
        JavaPairRDD<Integer, String> rdd1 = sc.textFile(path1)
                .mapToPair(new PairFunction<String, Integer, String>() {
                    @Override
                    public Tuple2<Integer, String> call(String s) throws Exception {
                        String[] strings = s.split(",");
                        Integer ids = Integer.valueOf(strings[0]);
                        String greet = strings[1];
                        return Tuple2.apply(ids, greet);
                    }
                });

        // way 2: a lambda returning new Tuple2<>(...)
        JavaPairRDD<Integer, String> rdd2 = sc.textFile(path2)
                .mapToPair(line -> {
                    String[] strings = line.split(",");
                    Integer ids = Integer.valueOf(strings[0]);
                    String greet = strings[1];
                    return new Tuple2<>(ids, greet);
                });

        System.out.println(rdd1.collect());
        System.out.println(rdd2.collect());

        JavaPairRDD<Integer, Tuple2<String, String>> result = rdd1.join(rdd2);
        System.out.println(result.collect());
    }
}

In the code above, mapToPair builds the final JavaPairRDD, and there are two ways to construct the key/value tuple:

return Tuple2.apply(ids,greet);

return new Tuple2<>(ids,greet);
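The two forms are interchangeable: Tuple2.apply is the factory method forwarded from the Scala companion object, while new Tuple2<>(...) calls the constructor directly. A quick illustrative check:

Tuple2<Integer, String> a = Tuple2.apply(1, "hello");
Tuple2<Integer, String> b = new Tuple2<>(1, "hello");
System.out.println(a.equals(b)); // true: tuples with the same key and value compare equal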

rdd1.csv

001,hello
001,hello
001,hello
001,hello


rdd2.csv

002,hello
002,hello
002,hello
002,hello

hdfs dfs -put rdd1.csv /

hdfs dfs -put rdd2.csv /
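Note that Integer.valueOf("001") parses to 1 and "002" to 2, so every key in rdd1 is 1 and every key in rdd2 is 2; the inner join of these two files is therefore empty. A minimal local sketch (data inlined instead of read from HDFS, reusing sc from the code above) shows the same thing:

// keys 1 and 2 never match, so the join yields no pairs
JavaPairRDD<Integer, String> r1 = sc.parallelizePairs(Arrays.asList(new Tuple2<>(1, "hello")));
JavaPairRDD<Integer, String> r2 = sc.parallelizePairs(Arrays.asList(new Tuple2<>(2, "hello")));
System.out.println(r1.join(r2).collect()); // prints []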


-------------------------------------------- A common Java pom.xml --------------------------------------------------------------------------------


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>java_join</groupId>
    <artifactId>java_join</artifactId>
    <version>1.0-SNAPSHOT</version>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                    <encoding>UTF-8</encoding>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.12</artifactId>
            <version>3.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.12</artifactId>
            <version>3.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.12</artifactId>
            <version>3.0.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-mllib_2.12</artifactId>
            <version>3.0.0</version>
            <scope>runtime</scope>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-graphx -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-graphx_2.12</artifactId>
            <version>3.0.0</version>
        </dependency>
    </dependencies>
</project>

Summary

The above covers a common Spark pom.xml for Java, two ways of reading a CSV into a JavaPairRDD, and the final join operation; hopefully it helps you solve the problem you ran into.
