

Spark #13: Submitting Applications (spark-submit)

@(SPARK)[spark, big data]

Reference: https://spark.apache.org/docs/latest/submitting-applications.html

Common syntax:

./bin/spark-submit \
  --class <main-class> \
  --master <master-url> \
  --deploy-mode <deploy-mode> \
  --conf <key>=<value> \
  ... # other options
  <application-jar> \
  [application-arguments]

A few common usage examples:

# Script currently used on the production cluster
/home/hadoop/spark/bin/spark-shell \
  --master yarn-client \
  --name $1 \
  --executor-memory 6G \
  --num-executors 10 \
  --conf spark.driver.extraClassPath=/home/hadoop/hiveconf/metastore/${project} \
  --principal ${project}/scheduler@NIE.NETEASE.COM \
  --keytab /home/scheduler/keytab/${project}.keytab

# Run application locally on 8 cores
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master local[8] \
  /path/to/examples.jar \
  100

# Run on a Spark standalone cluster in client deploy mode
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://207.184.161.138:7077 \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000

# Run on a Spark standalone cluster in cluster deploy mode with supervise
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://207.184.161.138:7077 \
  --deploy-mode cluster \
  --supervise \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000

# Run on a YARN cluster
export HADOOP_CONF_DIR=XXX
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn-cluster \  # can also be `yarn-client` for client mode
  --executor-memory 20G \
  --num-executors 50 \
  /path/to/examples.jar \
  1000

# Run a Python application on a Spark standalone cluster
./bin/spark-submit \
  --master spark://207.184.161.138:7077 \
  examples/src/main/python/pi.py \
  1000

1. Some important parameters

(1) --class: the main class, i.e. the class that contains the main function.

(2) --master: the master URL; see the detailed explanation below.

(3) --deploy-mode: either client or cluster mode.

(4) --conf: Spark configuration properties in key=value form.
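These flags are easiest to see together in one command. The following is a minimal sketch; the class name, master URL, jar path and arguments are placeholders rather than values from this post:

# Minimal sketch combining the four flags above (all values are placeholders).
./bin/spark-submit \
  --class com.example.MyApp \
  --master spark://host:7077 \
  --deploy-mode cluster \
  --conf spark.executor.memory=4g \
  /path/to/my-app.jar \
  arg1 arg2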

2. About jars

The Hadoop and Spark configuration is loaded into the SparkContext automatically, so when submitting an application you only need to ship your own code and any additional dependencies. There are two ways to do this:

(1) Package your code as a jar and add the dependency jars with --jars when submitting the application.

(2) Package your code together with its dependencies into a single assembly jar (or "uber" jar).
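As an illustrative sketch of approach (1), assuming a hypothetical application jar and two hypothetical dependency jars:

# Approach (1): submit the application jar and attach its dependencies with --jars.
# All paths and the class name below are made up for illustration.
./bin/spark-submit \
  --class com.example.MyApp \
  --master yarn-client \
  --jars /libs/dep-a.jar,/libs/dep-b.jar \
  /path/to/my-app.jar

With approach (2), the assembly jar already contains the dependencies, so no --jars flag is needed.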

A more detailed explanation of how dependencies are handled:

When using spark-submit, the application jar along with any jars included with the --jars option will be automatically transferred to the cluster. Spark uses the following URL scheme to allow different strategies for disseminating jars:

file: - Absolute paths and file:/ URIs are served by the driver’s HTTP file server, and every executor pulls the file from the driver HTTP server.

hdfs:, http:, https:, ftp: - these pull down files and JARs from the URI as expected

local: - a URI starting with local:/ is expected to exist as a local file on each worker node. This means that no network IO will be incurred, and works well for large files/JARs that are pushed to each worker, or shared via NFS, GlusterFS, etc.

Note that JARs and files are copied to the working directory for each SparkContext on the executor nodes. This can use up a significant amount of space over time and will need to be cleaned up. With YARN, cleanup is handled automatically, and with Spark standalone, automatic cleanup can be configured with the spark.worker.cleanup.appDataTtl property.
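For illustration, --jars accepts a comma-separated mix of the schemes above; the paths here are invented for the example:

# Hypothetical example mixing URL schemes:
#   hdfs://...  is pulled down from HDFS by the executors
#   local:/...  is expected to already exist on every worker node
./bin/spark-submit \
  --class com.example.MyApp \
  --master spark://host:7077 \
  --jars hdfs://namenode:8020/libs/dep-a.jar,local:/opt/libs/dep-b.jar \
  /path/to/my-app.jar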

Users may also include any other dependencies by supplying a comma-delimited list of maven coordinates with --packages. All transitive dependencies will be handled when using this command. Additional repositories (or resolvers in SBT) can be added in a comma-delimited fashion with the flag --repositories. These commands can be used with pyspark, spark-shell, and spark-submit to include Spark Packages.
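A minimal sketch of --packages; the Maven coordinate and repository URL below are placeholders, not real artifacts:

# Pull a (hypothetical) dependency and its transitive dependencies by Maven
# coordinate; --repositories adds an extra (hypothetical) resolver to search.
./bin/spark-submit \
  --packages com.example:some-lib_2.11:1.0.0 \
  --repositories https://repo.example.com/maven \
  --class com.example.MyApp \
  /path/to/my-app.jar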

For Python, the equivalent --py-files option can be used to distribute .egg, .zip and .py libraries to executors.
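And a sketch of the Python equivalent, with placeholder file names:

# Ship (hypothetical) Python dependencies to the executors' PYTHONPATH.
./bin/spark-submit \
  --master yarn-client \
  --py-files deps.zip,helpers.py \
  my_app.py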

3. Values of master

(1) For standalone mode, use the form spark://ip:port

(2) For YARN, there are two forms: yarn-client and yarn-cluster

(3) For Mesos, only client mode is currently available

(4) In addition, there is local[N] for local debugging

Master URL          Meaning
local               Run Spark locally with one worker thread (i.e. no parallelism at all).
local[K]            Run Spark locally with K worker threads (ideally, set this to the number of cores on your machine).
local[*]            Run Spark locally with as many worker threads as logical cores on your machine.
spark://HOST:PORT   Connect to the given Spark standalone cluster master. The port must be whichever one your master is configured to use, which is 7077 by default.
mesos://HOST:PORT   Connect to the given Mesos cluster. The port must be whichever one your Mesos master is configured to use, which is 5050 by default. Or, for a Mesos cluster using ZooKeeper, use mesos://zk://....
yarn-client         Connect to a YARN cluster in client mode. The cluster location will be found based on the HADOOP_CONF_DIR or YARN_CONF_DIR variable.
yarn-cluster        Connect to a YARN cluster in cluster mode. The cluster location will be found based on the HADOOP_CONF_DIR or YARN_CONF_DIR variable.

4. Client mode vs. cluster mode

A common deployment strategy is to submit your application from a gateway machine that is physically co-located with your worker machines (e.g. Master node in a standalone EC2 cluster). In this setup, client mode is appropriate. In client mode, the driver is launched directly within the spark-submit process which acts as a client to the cluster. The input and output of the application is attached to the console. Thus, this mode is especially suitable for applications that involve the REPL (e.g. Spark shell).

Alternatively, if your application is submitted from a machine far from the worker machines (e.g. locally on your laptop), it is common to use cluster mode to minimize network latency between the drivers and the executors. Note that cluster mode is currently not supported for Mesos clusters. Currently only YARN supports cluster mode for Python applications.
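As a sketch, a cluster-mode submission against a standalone master (the class, host and jar path are placeholders); the driver then runs on one of the worker machines rather than inside the spark-submit process:

# Cluster deploy mode sketch; all values below are placeholders.
./bin/spark-submit \
  --class com.example.MyApp \
  --master spark://host:7077 \
  --deploy-mode cluster \
  /path/to/my-app.jar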

5. Loading a local configuration file

The spark-submit script can load default Spark configuration values from a properties file and pass them on to your application. By default it will read options from conf/spark-defaults.conf in the Spark directory. For more detail, see the section on loading default configurations.
Loading default Spark configurations this way can obviate the need for certain flags to spark-submit. For instance, if the spark.master property is set, you can safely omit the --master flag from spark-submit. In general, configuration values explicitly set on a SparkConf take the highest precedence, then flags passed to spark-submit, then values in the defaults file.

If you are ever unclear where configuration options are coming from, you can print out fine-grained debugging information by running spark-submit with the --verbose option.
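A minimal sketch of this precedence, with made-up property values:

# Suppose conf/spark-defaults.conf contains (placeholder values):
#   spark.master            spark://host:7077
#   spark.executor.memory   4g
# Then --master and --executor-memory can be omitted on the command line:
./bin/spark-submit --class com.example.MyApp /path/to/my-app.jar

# --verbose prints additional debug output showing where each setting came from:
./bin/spark-submit --verbose --class com.example.MyApp /path/to/my-app.jar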

Appendix: the complete spark-submit usage output:

hadoop@gdc-nn01-logtest:~/spark$ bin/spark-submit
Usage: spark-submit [options] [app arguments]
Usage: spark-submit --kill [submission ID] --master [spark://...]
Usage: spark-submit --status [submission ID] --master [spark://...]

Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
  --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                              on one of the worker machines inside the cluster ("cluster")
                              (Default: client).
  --class CLASS_NAME          Your application's main class (for Java / Scala apps).
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of local jars to include on the driver
                              and executor classpaths.
  --packages                  Comma-separated list of maven coordinates of jars to include
                              on the driver and executor classpaths. Will search the local
                              maven repo, then maven central and any additional remote
                              repositories given by --repositories. The format for the
                              coordinates should be groupId:artifactId:version.
  --repositories              Comma-separated list of additional remote repositories to
                              search for the maven coordinates given with --packages.
  --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                              on the PYTHONPATH for Python apps.
  --files FILES               Comma-separated list of files to be placed in the working
                              directory of each executor.
  --conf PROP=VALUE           Arbitrary Spark configuration property.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.
  --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 512M).
  --driver-java-options       Extra Java options to pass to the driver.
  --driver-library-path       Extra library path entries to pass to the driver.
  --driver-class-path         Extra class path entries to pass to the driver. Note that
                              jars added with --jars are automatically included in the
                              classpath.
  --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).
  --proxy-user NAME           User to impersonate when submitting the application.
  --help, -h                  Show this help message and exit
  --verbose, -v               Print additional debug output
  --version,                  Print the version of current Spark

 Spark standalone with cluster deploy mode only:
  --driver-cores NUM          Cores for driver (Default: 1).

 Spark standalone or Mesos with cluster deploy mode only:
  --supervise                 If given, restarts the driver on failure.
  --kill SUBMISSION_ID        If given, kills the driver specified.
  --status SUBMISSION_ID      If given, requests the status of the driver specified.

 Spark standalone and Mesos only:
  --total-executor-cores NUM  Total cores for all executors.

 Spark standalone and YARN only:
  --executor-cores NUM        Number of cores per executor. (Default: 1 in YARN mode,
                              or all available cores on the worker in standalone mode)

 YARN-only:
  --driver-cores NUM          Number of cores used by the driver, only in cluster mode
                              (Default: 1).
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --num-executors NUM         Number of executors to launch (Default: 2).
  --archives ARCHIVES         Comma separated list of archives to be extracted into the
                              working directory of each executor.
  --principal PRINCIPAL       Principal to be used to login to KDC, while running on
                              secure HDFS.
  --keytab KEYTAB             The full path to the file that contains the keytab for the
                              principal specified above. This keytab will be copied to
                              the node running the Application Master via the Secure
                              Distributed Cache, for renewing the login tickets and the
                              delegation tokens periodically.

15/07/22 11:03:25 INFO util.Utils: Shutdown hook called
