
Apache Spark Development for Large-Scale Data Processing



Spark is a unified analytics engine for large-scale data processing. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools, including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for stream processing.
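As a quick illustration of the DataFrame and Spark SQL APIs mentioned above, here is a minimal sketch meant to be typed into ./bin/spark-shell (introduced below), where the spark session and its implicits are pre-defined; the table name and values are invented for this example:

scala> val df = spark.range(5).toDF("id")
scala> df.createOrReplaceTempView("nums")
scala> spark.sql("SELECT sum(id) FROM nums").show()  // prints a one-row table whose value is 10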

https://github.com/apache/spark

https://spark.apache.org/

Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project web page. This README file contains only basic setup instructions.

Building Spark

Spark is built using Apache Maven. To build Spark and its example programs, run:

./build/mvn -DskipTests clean package

(You do not need to do this if you downloaded a pre-built package.)

More detailed documentation is available from the project site, at "Building Spark".

For general development tips, including information on developing Spark using an IDE, see "Useful Developer Tools".

Interactive Scala Shell

The easiest way to start using Spark is through the Scala shell:

./bin/spark-shell

Try the following command, which should return 1,000,000,000:

scala> spark.range(1000 * 1000 * 1000).count()
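A slightly richer one-liner in the same shell, as an illustrative sketch (spark-shell pre-imports the implicits that make the $"id" column syntax work; the filter predicate here is arbitrary):

scala> spark.range(1000).filter($"id" % 2 === 0).count()  // counts the even ids, so it should return 500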

Interactive Python Shell

Alternatively, if you prefer Python, you can use the Python shell:

./bin/pyspark

And run the following command, which should also return 1,000,000,000:

spark.range(1000 * 1000 * 1000).count()

Spark also comes with several sample programs in the examples directory. To run one of them, use ./bin/run-example <class> [params]. For example:

./bin/run-example SparkPi will run the Pi example locally.

You can set the MASTER environment variable when running examples to submit examples to a cluster. This can be a mesos:// or spark:// URL, "yarn" to run on YARN, "local" to run locally with one thread, or "local[N]" to run locally with N threads. You can also use an abbreviated class name if the class is in the examples package. For instance:

MASTER=spark://host:7077 ./bin/run-example SparkPi

Many of the example programs print usage help if no params are given.
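For instance, a purely local run might look like this (an illustrative sketch: local[4] uses four threads, and SparkPi's trailing argument, the number of partitions to sample over, is optional):

MASTER=local[4] ./bin/run-example SparkPi 100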

Running Tests

Testing first requires building Spark.
Once Spark is built, tests can be run using:

./dev/run-tests

Please see the guidance on how to run tests for a module, or individual tests.
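As a hedged sketch of what that module-level guidance covers, assuming the sbt build (the module and suite names here are illustrative and the exact invocation varies by Spark version):

./build/sbt "core/testOnly *DAGSchedulerSuite"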

There is also a Kubernetes integration test; see resource-managers/kubernetes/integration-tests/README.md.


A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in
different versions of Hadoop, you must build Spark against the same version that your cluster runs.

Please refer to the build documentation at “Specifying the Hadoop Version and Enabling YARN” for detailed guidance on building for a particular distribution of Hadoop, including building for particular Hive and Hive Thriftserver distributions.
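As an illustrative sketch only (the -Pyarn profile and the -Dhadoop.version property come from that build guide, but the supported Hadoop version numbers depend on your Spark release):

./build/mvn -Pyarn -Dhadoop.version=3.3.6 -DskipTests clean package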

Configuration

Please refer to the Configuration Guide in the online documentation for an overview on how to configure Spark.
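For instance, a minimal sketch of conf/spark-defaults.conf (the master URL and memory size below are placeholders; the property keys themselves are standard Spark configuration options):

spark.master              spark://host:7077
spark.executor.memory     4g
spark.serializer          org.apache.spark.serializer.KryoSerializer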

Contributing

Please review the Contribution to Spark guide
for information on how to get started contributing to the project.
