RDD Programming


Contents

    • 1. RDD Creation
    • 2. RDD Transformations
    • 3. RDD Actions
    • 4. Persistence
    • 5. Partitioning
    • 6. Reading and Writing File Data
      • 6.1 Local file system
      • 6.2 HDFS
      • 6.3 JSON files
      • 6.4 HBase

Notes from the MOOC course "Spark编程基础" (Spark Programming Basics).

1. RDD Creation

  • Create from a local file
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_131)
Type in expressions to have them evaluated.
Type :help for more information.

scala> val lines = sc.textFile("file:///home/hadoop/workspace/word.txt")
lines: org.apache.spark.rdd.RDD[String] = file:///home/hadoop/workspace/word.txt MapPartitionsRDD[1] at textFile at <console>:24
  • Create from HDFS
scala> val lines = sc.textFile("hdfs://localhost:9000/user/word.txt")
lines: org.apache.spark.rdd.RDD[String] = hdfs://localhost:9000/user/word.txt MapPartitionsRDD[3] at textFile at <console>:24

scala> val lines = sc.textFile("/user/word.txt")
lines: org.apache.spark.rdd.RDD[String] = /user/word.txt MapPartitionsRDD[9] at textFile at <console>:24
  • Create from a parallelized collection
scala> val array = Array(1,2,3,4,5)
array: Array[Int] = Array(1, 2, 3, 4, 5)

scala> val rdd = sc.parallelize(array)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[12] at parallelize at <console>:26

2. RDD Transformations

  • filter(func): keep only the elements for which func returns true
scala> val linesWithSpark = lines.filter(line => line.contains("spark"))
linesWithSpark: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[13] at filter at <console>:26
  • map(func): apply func to every element
scala> val rdd2 = rdd.map(x => x+10)
rdd2: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[14] at map at <console>:28

scala> val words = lines.map(line => line.split(" "))
words: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[15] at map at <console>:26

Output: n elements, each of which is a String array (one array per input line).

  • flatMap(func): apply func to every element and flatten the results
scala> val words = lines.flatMap(line => line.split(" "))
words: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[16] at flatMap at <console>:26

Output: all the individual words.

  • groupByKey(), reduceByKey(func)
    Both group the entries by key to obtain a list of values per key; reduceByKey additionally applies func to combine that value list (a short sketch follows below).
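A minimal sketch of the difference (the pair RDD here is illustrative, not from the course material):

// illustrative key-value RDD
val pairs = sc.parallelize(Array(("spark", 1), ("hadoop", 1), ("spark", 1)))

// groupByKey: for each key, collect the values into an Iterable,
// e.g. ("spark", Iterable(1, 1)), ("hadoop", Iterable(1))
val grouped = pairs.groupByKey()

// reduceByKey: for each key, combine the values with func,
// e.g. ("spark", 2), ("hadoop", 1)
val counted = pairs.reduceByKey((a, b) => a + b)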

3. RDD Actions

Spark only starts real computation when it encounters an RDD action; when it encounters a transformation it merely records the operation (lazy evaluation) and does not execute it immediately.
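A small sketch of this lazy behaviour (the values are made up for the example):

// map is a transformation: nothing is computed here, Spark only records the lineage
val doubled = sc.parallelize(1 to 5).map(_ * 2)

// count() is an action, so this line triggers the actual computation
val n = doubled.count()   // n = 5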

  • count(): return the number of elements in the RDD
  • collect(): return all elements as an array
  • first(): return the first element
  • take(n): return the first n elements
  • reduce(func): aggregate the elements with func
  • foreach(func): apply func to every element
scala> val rdd = sc.parallelize(Array(1,2,3,4,5))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> rdd.count()
res0: Long = 5

scala> rdd.first()
res1: Int = 1

scala> rdd.take(3)
res2: Array[Int] = Array(1, 2, 3)

scala> rdd.reduce((a,b) => a+b)
res3: Int = 15

scala> rdd.collect()
res4: Array[Int] = Array(1, 2, 3, 4, 5)

scala> rdd.foreach(elem => println(elem))

4. Persistence

  • persist(): marks an RDD for persistence; the RDD is only actually persisted when the first action on it is executed (a storage-level sketch follows the shell session below)
scala> val list = List("Hadoop","Spark","Hive")
list: List[String] = List(Hadoop, Spark, Hive)

scala> val rdd1 = sc.parallelize(list)
rdd1: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[1] at parallelize at <console>:26

scala> println(rdd1.count())
3

scala> println(rdd1.collect().mkString("--"))
Hadoop--Spark--Hive

scala> rdd1.cache()   // cache the RDD so later uses of rdd1 are not recomputed from scratch
res10: rdd1.type = ParallelCollectionRDD[1] at parallelize at <console>:26
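cache() is shorthand for persist with the default MEMORY_ONLY storage level. A minimal sketch of choosing an explicit storage level, applied to the rdd1 above:

import org.apache.spark.storage.StorageLevel

// MEMORY_AND_DISK keeps partitions in memory and spills to disk when memory is insufficient
rdd1.persist(StorageLevel.MEMORY_AND_DISK)

// remove the cached data once the RDD is no longer needed
rdd1.unpersist()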

5. Partitioning

  • increases parallelism
  • reduces communication overhead

Partitioning principle: keep the number of partitions as close as possible to the number of CPU cores in the cluster.

  • Specify the number of partitions when creating an RDD: sc.textFile(path, partitionNum)
scala> val arr = Array(1,2,3,4,5)
arr: Array[Int] = Array(1, 2, 3, 4, 5)

scala> val rdd = sc.parallelize(arr, 2)   // 2 partitions
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:26
  • Change the number of partitions
scala> rdd.partitions.size
res0: Int = 2

scala> val rdd1 = rdd.repartition(1)
rdd1: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[4] at repartition at <console>:28

scala> rdd1.partitions.size
res1: Int = 1
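For key-value RDDs, the partitioner (not just the partition count) can also be controlled. A minimal sketch using HashPartitioner; the pair RDD is illustrative and not from the course material:

import org.apache.spark.HashPartitioner

val pairs = sc.parallelize(Array(("a", 1), ("b", 2), ("a", 3)))

// partitionBy decides which partition each key goes to; here keys are hashed into 2 partitions
val partitioned = pairs.partitionBy(new HashPartitioner(2))
partitioned.partitions.size   // 2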

  • Word-count example
scala> val lines = sc.
     |   textFile("/user/word.txt")   // read the file
lines: org.apache.spark.rdd.RDD[String] = /user/word.txt MapPartitionsRDD[6] at textFile at <console>:25

scala> val wordCount = lines.flatMap(line => line.split(" ")).
     |   map(word => (word, 1)).reduceByKey((a, b) => a+b)
wordCount: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[9] at reduceByKey at <console>:27

scala> wordCount.collect()   // collect the results
res2: Array[(String, Int)] = Array((love,2), (spark,1), (c++,1), (i,2), (michael,1))

scala> wordCount.foreach(println)   // print them
(spark,1)
(c++,1)
(i,2)
(michael,1)
(love,2)
  • Example: average value per key
scala> val rdd = sc.parallelize(Array(("spark",2),("hadoop",3),("hadoop",7),("spark",3)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> rdd.mapValues(x => (x, 1)).reduceByKey((x,y) => (x._1+y._1, x._2+y._2)).mapValues(x => (x._1/x._2)).collect()
res0: Array[(String, Int)] = Array((spark,2), (hadoop,5))
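The same computation broken into steps, with the intermediate values written out (the val names are only for illustration):

// 1. attach a count of 1 to every value: ("spark",(2,1)), ("hadoop",(3,1)), ...
val withCount = rdd.mapValues(x => (x, 1))

// 2. sum the values and the counts per key: ("hadoop",(10,2)), ("spark",(5,2))
val sums = withCount.reduceByKey((x, y) => (x._1 + y._1, x._2 + y._2))

// 3. divide sum by count (integer division here): ("hadoop",5), ("spark",2)
val avg = sums.mapValues(x => x._1 / x._2)
avg.collect()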

6. Reading and Writing File Data

6.1 Local file system

scala> val textFile = sc.
     |   textFile("file:///home/hadoop/workspace/word.txt")
textFile: org.apache.spark.rdd.RDD[String] = file:///home/hadoop/workspace/word.txt MapPartitionsRDD[5] at textFile at <console>:25

scala> textFile.
     |   saveAsTextFile("file:///home/hadoop/workspace/writeword")   // the argument is a directory, not a file name

ls /home/hadoop/workspace/writeword/
part-00000  part-00001  _SUCCESS

hadoop@dblab-VirtualBox:/usr/local/spark/bin$ cat /home/hadoop/workspace/writeword/part-00000
i love programming
it is very interesting
  • Read the written output back (all files under the directory are read)
scala> val textFile = sc.textFile("file:///home/hadoop/workspace/writeword")
textFile: org.apache.spark.rdd.RDD[String] = file:///home/hadoop/workspace/writeword MapPartitionsRDD[9] at textFile at <console>:24

6.2 HDFS

scala> val textFile =
     |   sc.textFile("hdfs://localhost:9000/user/word.txt")
textFile: org.apache.spark.rdd.RDD[String] = hdfs://localhost:9000/user/word.txt MapPartitionsRDD[11] at textFile at <console>:25

scala> textFile.first()
res6: String = i love programming

Save to HDFS (by default, relative paths are prefixed with the current user's home directory, /user/<username>/):

scala> textFile.saveAsTextFile("writeword")

Check the result on HDFS:

hadoop@dblab-VirtualBox:/usr/local/hadoop/bin$ ./hdfs dfs -ls -R /user/
drwxr-xr-x   - hadoop supergroup          0 2021-04-22 16:01 /user/hadoop
drwxr-xr-x   - hadoop supergroup          0 2021-04-21 22:48 /user/hadoop/.sparkStaging
drwx------   - hadoop supergroup          0 2021-04-21 22:48 /user/hadoop/.sparkStaging/application_1618998320460_0002
-rw-r--r--   1 hadoop supergroup      73189 2021-04-21 22:48 /user/hadoop/.sparkStaging/application_1618998320460_0002/__spark_conf__.zip
-rw-r--r--   1 hadoop supergroup  120047699 2021-04-21 22:48 /user/hadoop/.sparkStaging/application_1618998320460_0002/__spark_libs__4686608713384839717.zip
drwxr-xr-x   - hadoop supergroup          0 2021-04-22 16:01 /user/hadoop/writeword
-rw-r--r--   1 hadoop supergroup          0 2021-04-22 16:01 /user/hadoop/writeword/_SUCCESS
-rw-r--r--   1 hadoop supergroup         42 2021-04-22 16:01 /user/hadoop/writeword/part-00000
-rw-r--r--   1 hadoop supergroup         20 2021-04-22 16:01 /user/hadoop/writeword/part-00001
drwxr-xr-x   - hadoop supergroup          0 2017-11-05 21:57 /user/hive
drwxr-xr-x   - hadoop supergroup          0 2017-11-05 21:57 /user/hive/warehouse
drwxr-xr-x   - hadoop supergroup          0 2017-11-05 21:57 /user/hive/warehouse/hive.db
-rw-r--r--   1 hadoop supergroup         62 2021-04-21 20:06 /user/word.txt

6.3 JSON files

hadoop@dblab-VirtualBox:/usr/local/hadoop/bin$ cat /usr/local/spark/examples/src/main/resources/people.json
{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}

scala> val jsonStr = sc.
     |   textFile("file:///usr/local/spark/examples/src/main/resources/people.json")
jsonStr: org.apache.spark.rdd.RDD[String] = file:///usr/local/spark/examples/src/main/resources/people.json MapPartitionsRDD[14] at textFile at <console>:25

scala> jsonStr.foreach(println)
{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}
  • Parsing a JSON file
The parser lives in scala.util.parsing.json.JSON; JSON.parseFull(jsonString: String) returns Some(...) on success and None on failure.

The program:

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import scala.util.parsing.json.JSON

object JSONRead {
  def main(args: Array[String]) {
    val inputFile = "file:///usr/local/spark/examples/src/main/resources/people.json"
    val conf = new SparkConf().setAppName("JSONRead")
    val sc = new SparkContext(conf)
    val jsonStrs = sc.textFile(inputFile)
    val res = jsonStrs.map(s => JSON.parseFull(s))
    res.foreach({ r => r match {
      case Some(map: Map[String, Any]) => println(map)
      case None => println("parsing failed")
      case other => println("unknown data structure: " + other)
    }})
  }
}

Compile and package it into a jar with sbt, then run spark-submit --class "JSONRead" <path to the jar> (not yet tried hands-on).
Reference: "使用Intellij Idea编写Spark应用程序(Scala+SBT)" (Writing Spark applications with IntelliJ IDEA, Scala + SBT), http://dblab.xmu.edu.cn/blog/1492-2/
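Before packaging, the project needs a build definition. A minimal simple.sbt sketch; the file name and project layout follow the linked tutorial, and the versions match the environment shown above, so they may need adjusting:

name := "JSONRead"
version := "1.0"
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.0"

With this in place, sbt package produces the jar under target/scala-2.11/, which is what spark-submit takes as its last argument.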

6.4 HBase

hadoop@dblab-VirtualBox:/usr/local/hbase/bin$ ./hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.5, r239b80456118175b340b2e562a5568b5c744252e, Sun May 8 20:29:26 PDT 2016

hbase(main):001:0> disable "student"
0 row(s) in 3.0730 seconds

hbase(main):002:0> drop "student"
0 row(s) in 1.3530 seconds

hbase(main):003:0> create "student","info"
0 row(s) in 1.3570 seconds

=> Hbase::Table - student

hbase(main):004:0> put "student","1","info:name","michael"
0 row(s) in 0.0920 seconds

hbase(main):005:0> put "student","1","info:gender","M"
0 row(s) in 0.0410 seconds

hbase(main):006:0> put "student","1","info:age","18"
0 row(s) in 0.0080 seconds

Accessing HBase from Spark likewise requires writing a standalone program and packaging it with sbt (a read-side sketch is given below).
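A minimal read-side sketch of how such a program might look, using Spark's newAPIHadoopRDD with HBase's TableInputFormat to scan the student table created above. The object name SparkReadHBase is an assumption, and the HBase client jars must be on the Spark classpath:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.{SparkConf, SparkContext}

object SparkReadHBase {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SparkReadHBase"))

    // Tell TableInputFormat which HBase table to scan
    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set(TableInputFormat.INPUT_TABLE, "student")

    // Each record is a (row key, Result) pair
    val stuRDD = sc.newAPIHadoopRDD(hbaseConf, classOf[TableInputFormat],
      classOf[ImmutableBytesWritable], classOf[Result])

    stuRDD.foreach { case (_, result) =>
      val key    = Bytes.toString(result.getRow)
      val name   = Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name")))
      val gender = Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("gender")))
      val age    = Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("age")))
      println(s"row key: $key, name: $name, gender: $gender, age: $age")
    }

    sc.stop()
  }
}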
