IDEA + Scala + Spark Program Development Workflow
1. Create a new Java project
2. Configure the Scala SDK
File -> Project Structure -> Libraries -> +; add a Scala SDK. If no system-wide Scala SDK has been configured yet, point it at the Scala installation on your machine.
3. Import the Spark libraries
File -> Project Structure -> Libraries -> +; import Spark's jar files. You can give the whole set of jars a single name here, e.g. spark-lib.
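As an alternative to importing the jars by hand, the same dependency can be declared with sbt. Below is a minimal build.sbt sketch for the Spark 2.3.0 / Hadoop 2.7 build used in this tutorial; the project name and version are placeholder assumptions:

```scala
// build.sbt -- hypothetical minimal build for this tutorial's Spark version
name := "hello-spark"
version := "0.1"
scalaVersion := "2.11.12"  // Spark 2.3.x is built against Scala 2.11
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.0"
```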
4. Mark the source folders
File -> Project Structure -> Modules -> Sources -> src
Select src, right-click and choose New Folder to create main, then create three folders under main: java, resources, and scala. Finally, right-click the scala folder and mark it as Sources.
src\main\scala will then appear as a source folder under +Add Content Root on the right side of the panel, where you can verify that the directory structure is correct.
Note that if the source folder is not marked, or the directory structure is set up incorrectly, running the Spark program later will fail:
- Source folder not marked: Error: Could not find or load main class scala.HelloWorld
- Both main and main/scala marked as sources: Error: HelloWorld is already defined as object HelloWorld
5. Write a Spark test program
Right-click the scala folder -> New -> Scala Class -> choose object.
The test code is as follows:
```scala
package scala

import org.apache.spark.{SparkConf, SparkContext}

object HelloWorld {
  def main(args: Array[String]): Unit = {
    val logFile = "E:\\software\\spark-2.3.0-bin-hadoop2.7\\helloSpark.txt"
    val conf = new SparkConf().setAppName("wordcount").setMaster("local")
    val sc = new SparkContext(conf)
    val rdd = sc.textFile(logFile)
    val counts = rdd.flatMap(line => line.split(","))
      .map(x => (x, 1))
      .reduceByKey((x, y) => x + y)
    counts.foreach(println)
    sc.stop()
  }
}
```

Press Ctrl+Shift+F10 to run the program; the log output is shown below.
```
2019-08-05 22:18:32 INFO  Executor:54 - Running task 0.0 in stage 1.0 (TID 1)
2019-08-05 22:18:32 INFO  ShuffleBlockFetcherIterator:54 - Getting 1 non-empty blocks out of 1 blocks
2019-08-05 22:18:32 INFO  ShuffleBlockFetcherIterator:54 - Started 0 remote fetches in 6 ms
(scala,1)
(spark,2)
(hello,3)
(sparkui,1)
(java,1)
2019-08-05 22:18:32 INFO  Executor:54 - Finished task 0.0 in stage 1.0 (TID 1). 1138 bytes result sent to driver
2019-08-05 22:18:32 INFO  TaskSetManager:54 - Finished task 0.0 in stage 1.0 (TID 1) in 56 ms on localhost (executor driver) (1/1)
2019-08-05 22:18:32 INFO  TaskSchedulerImpl:54 - Removed TaskSet 1.0, whose tasks have all completed, from pool
2019-08-05 22:18:32 INFO  DAGScheduler:54 - ResultStage 1 (main at <unknown>:0) finished in 0.072 s
2019-08-05 22:18:32 INFO  DAGScheduler:54 - Job 0 finished: main at <unknown>:0, took 0.859162 s
2019-08-05 22:18:32 INFO  AbstractConnector:318 - Stopped Spark@1f4c2977{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2019-08-05 22:18:32 INFO  SparkUI:54 - Stopped Spark web UI at http://10.119.9.164:4040
```
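To clarify what the flatMap/map/reduceByKey chain computes, here is a local Scala sketch of the same word count on an in-memory collection, with no Spark required. The sample lines are a hypothetical input consistent with the counts printed in the log above; the object and helper names are illustrative, not part of the original program:

```scala
// Local stand-in for the Spark word count: the same transformation
// chain expressed on an ordinary Scala collection.
object LocalWordCount {
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split(","))            // split each line on commas, like rdd.flatMap
      .map(word => (word, 1))           // pair every word with a count of 1
      .groupBy(_._1)                    // local equivalent of reduceByKey's grouping
      .map { case (word, pairs) => (word, pairs.map(_._2).sum) } // sum the 1s per word

  def main(args: Array[String]): Unit = {
    // Assumed file content that would reproduce the counts seen in the log
    val lines = Seq("hello,spark", "hello,java", "hello,scala,spark,sparkui")
    wordCount(lines).foreach(println)
  }
}
```

Running this prints the same (word, count) pairs as the Spark job, though the ordering may differ since Map iteration order is unspecified.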