

Configuring Kerberos Authentication for YARN

Published 2025/4/5 by 豆豆


For installing Kerberos and configuring Kerberos authentication for HDFS, see HDFS配置kerberos认证 (HDFS Kerberos configuration).

1. Environment

System environment:

  • Operating system: CentOS 6.6
  • Hadoop version: CDH5.4
  • JDK version: 1.7.0_71
  • Run-as user: root

The roles planned for the cluster nodes are:

192.168.56.121 cdh1 NameNode, ResourceManager, HBase, Hive metastore, Impala Catalog, Impala statestore, Sentry
192.168.56.122 cdh2 DataNode, SecondaryNameNode, NodeManager, HBase, Hive Server2, Impala Server
192.168.56.123 cdh3 DataNode, HBase, NodeManager, Hive Server2, Impala Server

cdh1 serves as the master node and the other nodes as slaves. Use lowercase hostnames, otherwise you will run into errors when integrating Kerberos.

2. Generating keytabs

Run the following commands on cdh1, the KDC server node:

cd /var/kerberos/krb5kdc/

kadmin.local -q "addprinc -randkey yarn/cdh1@JAVACHEN.COM"
kadmin.local -q "addprinc -randkey yarn/cdh2@JAVACHEN.COM"
kadmin.local -q "addprinc -randkey yarn/cdh3@JAVACHEN.COM"

kadmin.local -q "addprinc -randkey mapred/cdh1@JAVACHEN.COM"
kadmin.local -q "addprinc -randkey mapred/cdh2@JAVACHEN.COM"
kadmin.local -q "addprinc -randkey mapred/cdh3@JAVACHEN.COM"

kadmin.local -q "xst -k yarn.keytab yarn/cdh1@JAVACHEN.COM"
kadmin.local -q "xst -k yarn.keytab yarn/cdh2@JAVACHEN.COM"
kadmin.local -q "xst -k yarn.keytab yarn/cdh3@JAVACHEN.COM"

kadmin.local -q "xst -k mapred.keytab mapred/cdh1@JAVACHEN.COM"
kadmin.local -q "xst -k mapred.keytab mapred/cdh2@JAVACHEN.COM"
kadmin.local -q "xst -k mapred.keytab mapred/cdh3@JAVACHEN.COM"
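The twelve kadmin.local invocations above differ only in the user and host, so they can be generated with a loop. The following sketch only prints the commands (the realm and hosts are the ones used in this guide; remove the collect-and-echo indirection to actually run them on the KDC):

```shell
realm="JAVACHEN.COM"
hosts="cdh1 cdh2 cdh3"
cmds=""
for user in yarn mapred; do
  for host in $hosts; do
    # One addprinc and one xst (keytab export) per principal, as in the block above.
    cmds="$cmds
kadmin.local -q \"addprinc -randkey ${user}/${host}@${realm}\"
kadmin.local -q \"xst -k ${user}.keytab ${user}/${host}@${realm}\""
  done
done
echo "$cmds"
```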

Copy the yarn.keytab and mapred.keytab files to the /etc/hadoop/conf directory on every node:

$ scp yarn.keytab mapred.keytab cdh1:/etc/hadoop/conf
$ scp yarn.keytab mapred.keytab cdh2:/etc/hadoop/conf
$ scp yarn.keytab mapred.keytab cdh3:/etc/hadoop/conf

Then set their permissions, running on cdh1, cdh2 and cdh3 respectively:

$ ssh cdh1 "cd /etc/hadoop/conf/; chown yarn:hadoop yarn.keytab; chown mapred:hadoop mapred.keytab; chmod 400 *.keytab"
$ ssh cdh2 "cd /etc/hadoop/conf/; chown yarn:hadoop yarn.keytab; chown mapred:hadoop mapred.keytab; chmod 400 *.keytab"
$ ssh cdh3 "cd /etc/hadoop/conf/; chown yarn:hadoop yarn.keytab; chown mapred:hadoop mapred.keytab; chmod 400 *.keytab"

A keytab is effectively a permanent credential that requires no password (it becomes invalid if the principal's password is changed in the KDC). Any user with read access to the file can therefore impersonate the principals it contains when accessing Hadoop, so each keytab file must be readable only by its owner (mode 0400).
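That mode can be checked before a keytab is distributed. A small helper as a sketch (it uses GNU stat; the scratch file below merely stands in for a real keytab such as /etc/hadoop/conf/yarn.keytab):

```shell
check_keytab_mode() {
  # Report whether the file is readable only by its owner (mode 400).
  mode=$(stat -c '%a' "$1")
  if [ "$mode" = "400" ]; then
    echo "OK: $1 has mode $mode"
  else
    echo "WARN: $1 has mode $mode (expected 400)"
  fi
}

# Demonstration with a temporary file in place of a real keytab:
tmp=$(mktemp)
chmod 400 "$tmp"
check_keytab_mode "$tmp"
rm -f "$tmp"
```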

3. Modifying the YARN configuration files

Add the following to yarn-site.xml:

<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/etc/hadoop/conf/yarn.keytab</value>
</property>
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>yarn/_HOST@JAVACHEN.COM</value>
</property>
<property>
  <name>yarn.nodemanager.keytab</name>
  <value>/etc/hadoop/conf/yarn.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.principal</name>
  <value>yarn/_HOST@JAVACHEN.COM</value>
</property>
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>yarn</value>
</property>

To enable SSL for YARN, also add:

<property>
  <name>yarn.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>

Add the following to mapred-site.xml:

<property>
  <name>mapreduce.jobhistory.keytab</name>
  <value>/etc/hadoop/conf/mapred.keytab</value>
</property>
<property>
  <name>mapreduce.jobhistory.principal</name>
  <value>mapred/_HOST@JAVACHEN.COM</value>
</property>

To enable SSL for the MapReduce JobHistory Server, also add:

<property>
  <name>mapreduce.jobhistory.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>

Create a container-executor.cfg file in /etc/hadoop/conf with the following content:

#configured value of yarn.nodemanager.linux-container-executor.group
yarn.nodemanager.linux-container-executor.group=yarn
#comma separated list of users who can not run applications
banned.users=bin
#Prevent other super-users
min.user.id=0
#comma separated list of system users who CAN run applications
allowed.system.users=root,nobody,impala,hive,hdfs,yarn

Set the file's permissions:

$ chown root:yarn container-executor.cfg
$ chmod 400 container-executor.cfg
$ ll container-executor.cfg
-r-------- 1 root yarn 354 11-05 14:14 container-executor.cfg

Notes:

  • container-executor.cfg must have mode 400 and be owned by root:yarn.
  • yarn.nodemanager.linux-container-executor.group must be set in both yarn-site.xml and container-executor.cfg, and its value must be the group of the user running the NodeManager, here yarn.
  • banned.users must not be empty; its default value is hdfs,yarn,mapred,bin.
  • min.user.id defaults to 1000; on some CentOS systems the minimum user id is 500, in which case this value must be changed.
  • Make sure the directories configured in yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs have mode 755.
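The last point can be enforced with a short loop. In this sketch the paths are placeholders; substitute the actual directories from your yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs settings (on the real cluster they also need to be owned by the yarn user, which requires root):

```shell
# Placeholder paths standing in for the configured local-dirs and log-dirs.
for d in /tmp/yarn-demo/local /tmp/yarn-demo/logs; do
  mkdir -p "$d"
  chmod 755 "$d"
  # On the cluster, additionally: chown yarn:yarn "$d"
done
```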

Set the permissions of /usr/lib/hadoop-yarn/bin/container-executor to 6050:

$ chown root:yarn /usr/lib/hadoop-yarn/bin/container-executor
$ chmod 6050 /usr/lib/hadoop-yarn/bin/container-executor
$ ll /usr/lib/hadoop-yarn/bin/container-executor
---Sr-s--- 1 root yarn 333 11-04 19:11 container-executor

Test that the setup is correct:

$ /usr/lib/hadoop-yarn/bin/container-executor --checksetup

If this reports an error, check the NodeManager logs, then look the error up in YARN ONLY: Container-executor Error Codes for an explanation of the problem.

For a detailed description of LinuxContainerExecutor, see http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/SecureMode.html#LinuxContainerExecutor.

Remember to sync the files modified above to the other nodes, cdh2 and cdh3, and check the permissions there one more time:

$ cd /etc/hadoop/conf/
$ scp yarn-site.xml mapred-site.xml container-executor.cfg cdh2:/etc/hadoop/conf/
$ scp yarn-site.xml mapred-site.xml container-executor.cfg cdh3:/etc/hadoop/conf/
$ ssh cdh2 "cd /etc/hadoop/conf/; chown root:yarn container-executor.cfg; chmod 400 container-executor.cfg"
$ ssh cdh3 "cd /etc/hadoop/conf/; chown root:yarn container-executor.cfg; chmod 400 container-executor.cfg"

4. Starting the services

Starting the ResourceManager

The ResourceManager runs as the yarn user, so on cdh1 first obtain a ticket for the yarn user, then start the service:

$ kinit -k -t /etc/hadoop/conf/yarn.keytab yarn/cdh1@JAVACHEN.COM
$ service hadoop-yarn-resourcemanager start

Then check the logs to confirm it started successfully.

Starting the NodeManagers

The NodeManager runs as the yarn user, so on cdh2 and cdh3 first obtain a ticket for the yarn user, then start the service:

$ ssh cdh2 "kinit -k -t /etc/hadoop/conf/yarn.keytab yarn/cdh2@JAVACHEN.COM; service hadoop-yarn-nodemanager start"
$ ssh cdh3 "kinit -k -t /etc/hadoop/conf/yarn.keytab yarn/cdh3@JAVACHEN.COM; service hadoop-yarn-nodemanager start"

Starting the MapReduce Job History Server

The Job History Server runs as the mapred user, so on cdh1 first obtain a ticket for the mapred user, then start the service:

$ kinit -k -t /etc/hadoop/conf/mapred.keytab mapred/cdh1@JAVACHEN.COM
$ service hadoop-mapreduce-historyserver start

5. Testing

Check that the web UI is reachable: http://cdh1:8088/cluster

Run one of the MapReduce examples:

$ klist
Ticket cache: FILE:/tmp/krb5cc_1002
Default principal: yarn/cdh1@JAVACHEN.COM

Valid starting     Expires            Service principal
11/10/14 11:18:55  11/11/14 11:18:55  krbtgt/cdh1@JAVACHEN.COM
	renew until 11/17/14 11:18:55

Kerberos 4 ticket cache: /tmp/tkt1002
klist: You have no tickets cached

$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 10000

If no errors are reported, the configuration succeeded. The final output looks like:

Job Finished in 54.56 seconds
Estimated value of Pi is 3.14120000000000000000

If the following error appears, check that the HADOOP_YARN_HOME environment variable is set correctly and that it is consistent with yarn.application.classpath.

14/11/13 11:41:02 INFO mapreduce.Job: Job job_1415849491982_0003 failed with state FAILED due to: Application application_1415849491982_0003 failed 2 times due to AM Container for appattempt_1415849491982_0003_000002 exited with exitCode: 1 due to: Exception from container-launch.
Container id: container_1415849491982_0003_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
	at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:281)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
Shell output: main : command provided 1
main : user is yarn
main : requested yarn user is yarn
Container exited with a non-zero exit code 1
.Failing this attempt.. Failing the application.
14/11/13 11:41:02 INFO mapreduce.Job: Counters: 0
Job Finished in 13.428 seconds
java.io.FileNotFoundException: File does not exist: hdfs://cdh1:8020/user/yarn/QuasiMonteCarlo_1415850045475_708291630/out/reduce-out
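Regarding the HADOOP_YARN_HOME check mentioned above: on a CDH5 package install the YARN home is typically /usr/lib/hadoop-yarn (an assumption; verify it against your layout), and the entries in yarn.application.classpath are usually expressed relative to it, as this sketch shows:

```shell
# Assumed CDH5 default location; confirm it matches your installation.
export HADOOP_YARN_HOME=/usr/lib/hadoop-yarn
# Typical classpath entries built from it (the quotes keep the globs literal):
echo "$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*"
```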

Reposted from: https://my.oschina.net/boltwu/blog/825957
