

Week 41 of 2018 - Spark SQL Setup and Configuration

Published: 2025/3/17 · 豆豆

Spark setup

Download spark-2.3.2

wget https://archive.apache.org/dist/spark/spark-2.3.2/spark-2.3.2-bin-hadoop2.7.tgz

Make sure to download the -hadoop-2.7 build of Spark; otherwise you will have to add many dependencies to the Spark directory yourself.

Modify the configuration

Copy $HADOOP_HOME/etc/hadoop/core-site.xml to $SPARK_HOME/conf
Copy $HADOOP_HOME/etc/hadoop/hdfs-site.xml to $SPARK_HOME/conf
Copy $HIVE_HOME/conf/hive-site.xml to $SPARK_HOME/conf

Edit $SPARK_HOME/conf/hive-site.xml and change the port that Spark SQL listens on to 10002, so it does not conflict with the port used by the original Hive HiveServer2:

<property>
  <name>hive.server2.thrift.port</name>
  <value>10002</value>
  <description>Port number of HiveServer2 Thrift interface when hive.server2.transport.mode is 'binary'.</description>
</property>
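If you want to patch the port in a script rather than by hand, a minimal Python sketch could look like the following (the file path in the usage comment is hypothetical; this is just an illustration, not an official tool):

```python
import xml.etree.ElementTree as ET

def set_thrift_port(hive_site_path, port):
    """Set hive.server2.thrift.port in a hive-site.xml file, adding the
    property if it is not present yet."""
    tree = ET.parse(hive_site_path)
    root = tree.getroot()
    for prop in root.findall("property"):
        if prop.findtext("name") == "hive.server2.thrift.port":
            prop.find("value").text = str(port)
            break
    else:
        # Property was missing: append a new <property> element.
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = "hive.server2.thrift.port"
        ET.SubElement(prop, "value").text = str(port)
    tree.write(hive_site_path)

# Example (path is hypothetical):
# set_thrift_port("/opt/spark/conf/hive-site.xml", 10002)
```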

Create and run a startup script

In the $SPARK_HOME directory, create the file startThriftServer.sh:

vim startThriftServer.sh and add the following:

#!/bin/bash
./sbin/start-thriftserver.sh \
  --master yarn

Run the script:

chmod +x ./startThriftServer.sh
./startThriftServer.sh

Test the server

Connect with beeline from the $SPARK_HOME directory:

[jevoncode@s1 spark-2.3.2-bin-hadoop2.7]$ ./bin/beeline
Beeline version 1.2.1.spark2 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10002/hive_data
Connecting to jdbc:hive2://localhost:10002/hive_data
Enter username for jdbc:hive2://localhost:10002/hive_data: jevoncode
Enter password for jdbc:hive2://localhost:10002/hive_data: ***************
2018-10-14 11:15:24 INFO Utils:310 - Supplied authorities: localhost:10002
2018-10-14 11:15:24 INFO Utils:397 - Resolved authority: localhost:10002
2018-10-14 11:15:24 INFO HiveConnection:203 - Will try to open client transport with JDBC Uri: jdbc:hive2://localhost:10002/hive_data
Connected to: Spark SQL (version 2.3.2)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10002/hive_data>

You can now run SQL statements.

Spark dynamic resource allocation

After setting up Spark, SQL statements ran slowly. The web UI showed only two executors working, even though the YARN cluster has 7 servers, and Zabbix confirmed that resource utilization was low.

To reach this web UI, open the YARN UI and click the ApplicationMaster link of the Thrift JDBC/ODBC Server application.

The fix is to enable Spark's dynamic resource allocation, configured as follows:

1. Configure $SPARK_HOME/conf/spark-defaults.conf:

spark.dynamicAllocation.enabled true
spark.shuffle.service.enabled true
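Conceptually, with dynamic allocation enabled Spark sizes the executor pool from the task backlog: the target is roughly the number of pending plus running tasks divided by the tasks each executor can run concurrently, clamped between the configured minimum and maximum executor counts. A rough Python sketch of that calculation (the property names above are real Spark settings; this function is only an illustration of the idea, not Spark's actual code):

```python
import math

def target_executors(pending_tasks, running_tasks,
                     cores_per_executor=1, cores_per_task=1,
                     min_executors=0, max_executors=int(1e9)):
    """Approximate the executor target dynamic allocation aims for."""
    tasks_per_executor = max(cores_per_executor // cores_per_task, 1)
    needed = math.ceil((pending_tasks + running_tasks) / tasks_per_executor)
    # Clamp to the configured bounds.
    return min(max(needed, min_executors), max_executors)

# 100 queued tasks on 4-core executors -> ask for 25 executors
print(target_executors(pending_tasks=100, running_tasks=0, cores_per_executor=4))  # prints 25
```

This is why enabling dynamic allocation fixes the "only two executors" symptom: the executor count grows with the backlog instead of staying at a fixed small number.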

2. Configure $HADOOP_HOME/etc/hadoop/yarn-site.xml on every NodeManager:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>

3. Copy $SPARK_HOME/yarn/spark-2.3.2-yarn-shuffle.jar to $HADOOP_HOME/share/hadoop/yarn/.

4. Restart every NodeManager.

5. Run the SQL again; you should now see many executors working.

Troubleshooting

1. Shuffle configuration problem

2018-10-14 10:24:05 WARN YarnScheduler:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2018-10-14 10:24:20 WARN YarnScheduler:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2018-10-14 10:24:35 WARN YarnScheduler:66 - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

The web UI showed no error message and no useful status.

The following error was finally found in the YARN application logs:

2018-10-14 10:20:38 ERROR YarnAllocator:91 - Failed to launch executor 23 on container container_e69_1538148198468_17372_01_000024
org.apache.spark.SparkException: Exception while starting container container_e69_1538148198468_17372_01_000024 on host jevoncode.com
	at org.apache.spark.deploy.yarn.ExecutorRunnable.startContainer(ExecutorRunnable.scala:125)
	at org.apache.spark.deploy.yarn.ExecutorRunnable.run(ExecutorRunnable.scala:65)
	at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$runAllocatedContainers$1$$anon$1.run(YarnAllocator.scala:534)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:spark_shuffle does not exist
	at sun.reflect.GeneratedConstructorAccessor35.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
	at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
	at org.apache.hadoop.yarn.client.api.impl.NMClientImpl.startContainer(NMClientImpl.java:205)
	at org.apache.spark.deploy.yarn.ExecutorRunnable.startContainer(ExecutorRunnable.scala:122)
	... 5 more

This is the shuffle configuration problem: the error above occurs when yarn-site.xml does not register spark_shuffle in yarn.nodemanager.aux-services and does not set the spark_shuffle.class property.
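This misconfiguration can be caught before restarting the NodeManagers by inspecting yarn-site.xml directly. A small sketch (the path in the usage comment is hypothetical):

```python
import xml.etree.ElementTree as ET

SHUFFLE_CLASS = "org.apache.spark.network.yarn.YarnShuffleService"

def spark_shuffle_configured(yarn_site_path):
    """Return True if yarn-site.xml registers the spark_shuffle aux service
    and points it at Spark's YarnShuffleService class."""
    props = {}
    for prop in ET.parse(yarn_site_path).getroot().findall("property"):
        props[prop.findtext("name")] = prop.findtext("value", "")
    services = [s.strip()
                for s in props.get("yarn.nodemanager.aux-services", "").split(",")]
    clazz = props.get("yarn.nodemanager.aux-services.spark_shuffle.class", "")
    return "spark_shuffle" in services and clazz == SHUFFLE_CLASS

# Example (path is hypothetical):
# spark_shuffle_configured("/etc/hadoop/conf/yarn-site.xml")
```

Running a check like this on every NodeManager host avoids chasing the InvalidAuxServiceException through the YARN logs after the fact.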

2. HADOOP_CONF_DIR configuration problem
Add the following to ~/.bashrc:

export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
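A quick sanity check that HADOOP_CONF_DIR actually resolves to a directory containing the Hadoop client configs (a sketch; the particular file list checked here is a reasonable assumption, not an official contract):

```python
import os

# Files we assume a usable Hadoop client config directory should contain.
REQUIRED = ("core-site.xml", "yarn-site.xml")

def hadoop_conf_ok(env):
    """Check that env['HADOOP_CONF_DIR'] is set and holds the expected files."""
    conf_dir = env.get("HADOOP_CONF_DIR", "")
    return bool(conf_dir) and all(
        os.path.isfile(os.path.join(conf_dir, f)) for f in REQUIRED
    )

# hadoop_conf_ok(os.environ)  # False unless the variable is exported in this shell
```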
