

Configuring a Spark cluster on Linux


環(huán)境:

linux

spark1.6.0

hadoop2.2.0

I. Install Scala (on every machine)

1. Download scala-2.11.0.tgz and place it under /opt, then unpack it:

tar -zxvf scala-2.11.0.tgz

(Note: the version check in step 5 reports 2.10.1, and spark-env.sh later in this post points at /opt/scala-2.10.1; whichever Scala version you install, make sure the paths you configure match it.)

2. As the hadoop user, open the profile:

vim /etc/profile

3. Add the Scala path to the profile:

export SCALA_HOME=/opt/scala-2.11.0
export PATH=$PATH:$SCALA_HOME/bin

4. Make the configuration take effect:

source /etc/profile

5. Check that Scala installed successfully:

[hadoop@testhdp01 ~]$ scala -version
Scala code runner version 2.10.1 -- Copyright 2002-2013, LAMP/EPFL

Success.

II. Install Spark

1. Build Spark 1.6.0. (The build failed many times on Linux, so it was done on a Mac instead.)

Official build instructions: http://spark.apache.org/docs/latest/building-spark.html

Enter the Spark source directory and run one of the following:

build/mvn -Pyarn -Phadoop-2.2 -Dhadoop.version=2.2.0 -DskipTests clean package

./make-distribution.sh --name custom-spark --tgz -Psparkr -Phadoop-2.2 -Phive -Phive-thriftserver -Pyarn

mvn -Pyarn -Phadoop-2.2 -Dhadoop.version=2.2.0 -Phive -Phive-thriftserver -DskipTests clean package

To build from IDEA instead, see: https://cwiki.apache.org/confluence/display/SPARK/Useful+Developer+Tools#UsefulDeveloperTools-IDESetup

2. Configure Spark:

cd /opt/spark-1.6.0-bin-hadoop2.2.0/conf
cp spark-env.sh.template spark-env.sh
cp slaves.template slaves
vim spark-env.sh

Add the following:

export SCALA_HOME=/opt/scala-2.10.1
export JAVA_HOME=/opt/jdk1.7.0_51
export SPARK_MASTER_IP=192.168.22.7
export HADOOP_HOME=/opt/hadoop-2.2.0
export SPARK_HOME=/opt/spark-1.6.0-bin-hadoop2.2.0
export SPARK_LIBRARY_PATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$HADOOP_HOME/lib/native
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop/
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
export SPARK_JAR=$SPARK_HOME/lib/spark-assembly-1.6.0-hadoop2.2.0.jar
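With spark-env.sh filled in, it is worth a quick local smoke test before wiring up the cluster. This is a minimal sketch assuming the install layout above: it launches spark-shell in local mode (no master or HDFS involved) and runs a trivial job.

cd /opt/spark-1.6.0-bin-hadoop2.2.0
bin/spark-shell --master local[2]

scala> sc.parallelize(1 to 100).sum()
res0: Double = 5050.0

If this prints 5050.0, the build and the Java/Scala paths in spark-env.sh are consistent.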

On the Mac, the configuration is as follows; add it at the top of the profile file:

#jdk
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home
export PATH=$PATH:$JAVA_HOME/bin

#scala
export SCALA_HOME=/usr/local/Cellar/scala-2.10.4
export PATH=$PATH:$SCALA_HOME/bin

#hadoop
export HADOOP_HOME=/usr/local/Cellar/hadoop/2.7.2/libexec
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

#hive
export HIVE_HOME=/usr/local/Cellar/hive/2.0.1/libexec
export SPARK_CLASSPATH=$HIVE_HOME/lib/mysql-connector-java-5.1.28.jar:$SPARK_CLASSPATH

#spark
export SPARK_HOME=/usr/local/Cellar/spark-1.3.1-bin-hadoop2.6
export PATH=$PATH:$SPARK_HOME/bin
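After saving, a quick way to confirm the variables resolve (a minimal check; ~/.bash_profile is an assumption here, so source whichever file the exports were actually added to):

source ~/.bash_profile
echo $SPARK_HOME         # should print the Spark install path
spark-submit --version   # prints the Spark version banner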

3. Configure Spark to support Hive

vim spark-env.sh

Add:

export HIVE_HOME=/opt/apache-hive-0.13.0
export SPARK_CLASSPATH=$HIVE_HOME/lib/mysql-connector-java-5.1.26.jar:$SPARK_CLASSPATH

Copy hive-site.xml from the Hive conf directory into $SPARK_HOME/conf:

cp /opt/apache-hive-0.13.0/conf/hive-site.xml conf/

Create a hive.sh file under /etc/profile.d and put the environment variable settings in it:

#!/bin/bash
export HIVE_HOME=/opt/apache-hive-0.13.0
export PATH=$HIVE_HOME/bin:$PATH

Make the environment variables take effect:

source /etc/profile.d/hive.sh

4. Configure the cluster

Enter Spark's conf directory:

vim slaves

Delete localhost and add the names of the worker nodes:

testhdp02
testhdp03

Configure the Spark system environment (all three worker nodes must be configured):

sudo su - root
sudo vim /etc/profile

export SPARK_HOME=/opt/spark-1.6.0-bin-hadoop2.2.0
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

Finally, package the configured Spark directory and send it to the worker nodes. Once everything is in place, the master's web UI is available at:

http://192.168.22.7:8080/

A sketch of starting the cluster and submitting a test job is shown below.
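To bring the standalone cluster up and exercise it end to end, something like the following can be run on the master. This is a sketch: start-all.sh, the default 7077 master port, and the spark-submit invocation are standard for a Spark 1.6 standalone deployment, but the exact name of the bundled examples jar depends on how the distribution was built.

# start the master plus every worker listed in conf/slaves
$SPARK_HOME/sbin/start-all.sh

# submit the bundled SparkPi example to the standalone master
$SPARK_HOME/bin/spark-submit \
  --master spark://192.168.22.7:7077 \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/lib/spark-examples-1.6.0-hadoop2.2.0.jar 100

If the job completes and both workers appear at http://192.168.22.7:8080/, the cluster is wired correctly.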

III. Troubleshooting

Run in bin/spark-shell:

val textFile = sc.textFile("README.md")
textFile.count()

The following error appears:

Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
	... 61 more
Caused by: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
	at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:135)
	at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:175)
	at org.apache.hadoop.mapred.TextInputFormat.configure(TextInputFormat.java:45)
	... 66 more
Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1626)
	at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:128)
	... 68 more

Solution: modify spark-env.sh so that the Hadoop library directories (where the LZO codec jar lives) are on Spark's classpath:

export SCALA_HOME=/opt/scala-2.10.1
export JAVA_HOME=/opt/jdk1.7.0_51
export SPARK_MASTER_IP=192.168.22.7
export HADOOP_HOME=/opt/hadoop-2.2.0
export SPARK_HOME=/opt/spark-1.6.0-bin-hadoop2.2.0
export SPARK_LIBRARY_PATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$HADOOP_HOME/lib/native
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop/
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
export SPARK_JAR=$SPARK_HOME/lib/spark-assembly-1.6.0-hadoop2.2.0.jar
export SPARK_CLASSPATH=$SPARK_CLASSPATH:$HADOOP_HOME/share/hadoop/yarn/*:$HADOOP_HOME/share/hadoop/yarn/lib/*:$HADOOP_HOME/share/hadoop/common/*:$HADOOP_HOME/share/hadoop/common/lib/*:$HADOOP_HOME/share/hadoop/hdfs/*:$HADOOP_HOME/share/hadoop/hdfs/lib/*:$HADOOP_HOME/share/hadoop/mapreduce/*:$HADOOP_HOME/share/hadoop/mapreduce/lib/*:$HADOOP_HOME/share/hadoop/tools/lib/*:$SPARK_HOME/lib/*
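With the classpath fix applied, the failing test should pass, and since spark-env.sh was also set up for Hive in step 3, the same session can confirm Hive connectivity. A minimal sketch using the Spark 1.x HiveContext API (the "show tables" query assumes only a reachable Hive metastore, not any particular table):

bin/spark-shell

scala> val textFile = sc.textFile("README.md")
scala> textFile.count()

scala> val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
scala> hiveContext.sql("show tables").collect().foreach(println)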


Reposted from: https://www.cnblogs.com/aijianiula/p/5192580.html
