0027-How to Enable Kerberos on a CDH Cluster


1. Purpose of This Document

This document describes how to enable and configure Kerberos on a CDH cluster. You will learn:

1. How to install and configure the KDC service

2. How to enable Kerberos through Cloudera Manager

3. How to log in to Kerberos and access Hadoop services

The document is organized into the following steps:

1. Install and configure the KDC service

2. Enable Kerberos on the CDH cluster

3. Use Kerberos

This document focuses on enabling and configuring Kerberos on a CDH cluster, and is based on the following assumptions:

1. The CDH cluster is running normally

2. Kerberos has not yet been enabled on the cluster

3. MySQL 5.1.73

The test environment is listed below; it is not a requirement for following this guide:

1. OS: CentOS 6.5

2. CDH and CM version 5.12.0

3. All operations are performed as the root user

2. KDC Service Installation and Configuration


In this document the KDC service is installed on the Cloudera Manager Server host (the KDC service can be installed on another server if needed).

1. Install the KDC service on the Cloudera Manager server

[root@ip-172-31-6-148 ~]# yum -y install krb5-server krb5-libs krb5-auth-dialog krb5-workstation

2. Edit the /etc/krb5.conf configuration

[root@ip-172-31-6-148 fayson_r]# vim /etc/krb5.conf

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = FAYSON.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 FAYSON.COM = {
  kdc = ip-172-31-6-148.fayson.com
  admin_server = ip-172-31-6-148.fayson.com
 }

[domain_realm]
 .ip-172-31-6-148.fayson.com = FAYSON.COM
 ip-172-31-6-148.fayson.com = FAYSON.COM

The values highlighted in red in the original (the realm name and the KDC/admin server hostnames) must be changed to match your environment.

3. Edit the /var/kerberos/krb5kdc/kadm5.acl configuration

[root@ip-172-31-6-148 ~]# vim /var/kerberos/krb5kdc/kadm5.acl
*/admin@FAYSON.COM *

4. Edit the /var/kerberos/krb5kdc/kdc.conf configuration

[root@ip-172-31-6-148 ~]# vim /var/kerberos/krb5kdc/kdc.conf

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 FAYSON.COM = {
  #master_key_type = aes256-cts
  max_renewable_life = 7d 0h 0m 0s
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

The values highlighted in red in the original (the realm name and max_renewable_life) are the settings to adjust.

5. Create the Kerberos database

[root@ip-172-31-6-148 ~]# kdb5_util create -r FAYSON.COM -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'FAYSON.COM',
master key name 'K/M@FAYSON.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:
Re-enter KDC database master key to verify:

Enter the master password for the Kerberos database here.

6. Create the Kerberos administrator account

[root@ip-172-31-6-148 ~]# kadmin.local
Authenticating as principal fayson/admin@CLOUDERA.COM with password.
kadmin.local: addprinc admin/admin@FAYSON.COM
WARNING: no policy specified for admin/admin@FAYSON.COM; defaulting to no policy
Enter password for principal "admin/admin@FAYSON.COM":
Re-enter password for principal "admin/admin@FAYSON.COM":
Principal "admin/admin@FAYSON.COM" created.
kadmin.local: exit
[root@ip-172-31-6-148 ~]#

The principal highlighted in red in the original (admin/admin@FAYSON.COM) is the Kerberos administrator account; enter the administrator password when prompted.

7. Add the Kerberos services to system startup and start the krb5kdc and kadmin services

[root@ip-172-31-6-148 ~]# chkconfig krb5kdc on
[root@ip-172-31-6-148 ~]# chkconfig kadmin on
[root@ip-172-31-6-148 ~]# service krb5kdc start
Starting Kerberos 5 KDC:          [ OK ]
[root@ip-172-31-6-148 ~]# service kadmin start
Starting Kerberos 5 Admin Server: [ OK ]
[root@ip-172-31-6-148 ~]#

8. Test the Kerberos administrator account

[root@ip-172-31-6-148 ~]# kinit admin/admin@FAYSON.COM
Password for admin/admin@FAYSON.COM:
[root@ip-172-31-6-148 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: admin/admin@FAYSON.COM

Valid starting     Expires            Service principal
09/05/17 16:39:17  09/06/17 16:39:17  krbtgt/FAYSON.COM@FAYSON.COM
        renew until 09/12/17 16:39:17
[root@ip-172-31-6-148 ~]#

9. Install the Kerberos client on all cluster hosts, including the Cloudera Manager host

[root@ip-172-31-6-148 cdh-shell-master]# yum -y install krb5-libs krb5-workstation

10. Install an additional package on the Cloudera Manager Server host

[root@ip-172-31-6-148 cdh-shell-master]# yum -y install openldap-clients

11. Copy the krb5.conf file from the KDC Server to all Kerberos clients

[root@ip-172-31-6-148 cdh-shell-master]# scp -r /etc/krb5.conf root@172.31.5.190:/etc/

Here a script is used to copy the file to all nodes:

[root@ip-172-31-6-148 cdh-shell-master]# sh b.sh node.list /etc/krb5.conf /etc/
krb5.conf    100%  451  0.4KB/s  00:00
krb5.conf    100%  451  0.4KB/s  00:00
krb5.conf    100%  451  0.4KB/s  00:00
krb5.conf    100%  451  0.4KB/s  00:00
[root@ip-172-31-6-148 cdh-shell-master]#
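The b.sh distribution script itself is not shown in the original. A minimal sketch of such a helper might look like the following; the hostnames in node.list are made up for illustration, and DRY_RUN=1 makes it print the scp commands instead of executing them:

```shell
#!/bin/sh
# Hypothetical sketch of a b.sh-style helper: copy a file to the same
# destination path on every host listed in a node list file.
# DRY_RUN=1 prints each scp command instead of executing it.
DRY_RUN=1

# Example node list (these hostnames are placeholders).
cat > /tmp/node.list <<'EOF'
172.31.5.190
172.31.9.33
EOF

distribute() {
    list="$1"; src="$2"; dest="$3"
    while read -r host; do
        [ -n "$host" ] || continue
        if [ "${DRY_RUN:-0}" = 1 ]; then
            echo "scp -r $src root@$host:$dest"
        else
            scp -r "$src" "root@$host:$dest"
        fi
    done < "$list"
}

# Prints one scp command per host in the list.
distribute /tmp/node.list /etc/krb5.conf /etc/
```

With DRY_RUN disabled this would behave like `sh b.sh node.list /etc/krb5.conf /etc/` above, assuming passwordless root SSH between the hosts.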

3. Enabling Kerberos on the CDH Cluster


1. Add an administrator account for Cloudera Manager in the KDC

[root@ip-172-31-6-148 cdh-shell-bak]# kadmin.local
Authenticating as principal admin/admin@FAYSON.COM with password.
kadmin.local: addprinc cloudera-scm/admin@FAYSON.COM
WARNING: no policy specified for cloudera-scm/admin@FAYSON.COM; defaulting to no policy
Enter password for principal "cloudera-scm/admin@FAYSON.COM":
Re-enter password for principal "cloudera-scm/admin@FAYSON.COM":
Principal "cloudera-scm/admin@FAYSON.COM" created.
kadmin.local: exit
[root@ip-172-31-6-148 cdh-shell-bak]#

2. In Cloudera Manager, go to "Administration" -> "Security"

3. Select "Enable Kerberos" to open the following screen

Make sure all of the listed prerequisite checks have been completed.

4. Click "Continue" and configure the KDC information, including the KDC type, KDC server, KDC realm, encryption types, and the renewal lifetime of the service principals to be created (hdfs, yarn, hbase, hive, etc.)

5. Click "Continue"

6. It is not recommended to let Cloudera Manager manage krb5.conf; click "Continue"

7. Enter the Kerberos administrator account for Cloudera Manager; it must match the account created earlier. Click "Continue"

8. Wait for Kerberos to be enabled, then click "Continue"

9. Click "Continue"

10. Check the option to restart the cluster and click "Continue"

11. Wait for the cluster to restart successfully, then click "Continue"

Kerberos is now successfully enabled.

4. Using Kerberos


To run MapReduce jobs and work with Hive as the fayson user, the fayson OS user must be created on every node in the cluster.
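Since the account must exist on every node, a loop over a node list can create it. This is a hypothetical sketch (the hostnames are placeholders, and DRY_RUN=1 echoes the ssh commands instead of running them):

```shell
#!/bin/sh
# Hypothetical sketch: create an OS user on every host in a node list.
# DRY_RUN=1 prints each ssh command instead of executing it.
DRY_RUN=1

# Example node list (placeholder hostnames).
cat > /tmp/node.list <<'EOF'
172.31.5.190
172.31.9.33
EOF

create_user_everywhere() {
    list="$1"; user="$2"
    while read -r host; do
        [ -n "$host" ] || continue
        if [ "${DRY_RUN:-0}" = 1 ]; then
            echo "ssh root@$host useradd $user"
        else
            ssh "root@$host" "useradd $user"
        fi
    done < "$list"
}

# Prints one ssh/useradd command per host.
create_user_everywhere /tmp/node.list fayson
```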

1. Use kadmin to create a principal for fayson

[root@ip-172-31-6-148 cdh-shell-bak]# kadmin.local
Authenticating as principal admin/admin@FAYSON.COM with password.
kadmin.local: addprinc fayson@FAYSON.COM
WARNING: no policy specified for fayson@FAYSON.COM; defaulting to no policy
Enter password for principal "fayson@FAYSON.COM":
Re-enter password for principal "fayson@FAYSON.COM":
Principal "fayson@FAYSON.COM" created.
kadmin.local: exit
[root@ip-172-31-6-148 cdh-shell-bak]#

2. Log in to Kerberos as the fayson user

[root@ip-172-31-6-148 cdh-shell-bak]# kdestroy
[root@ip-172-31-6-148 cdh-shell-bak]# kinit fayson
Password for fayson@FAYSON.COM:
[root@ip-172-31-6-148 cdh-shell-bak]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: fayson@FAYSON.COM

Valid starting     Expires            Service principal
09/05/17 17:19:08  09/06/17 17:19:08  krbtgt/FAYSON.COM@FAYSON.COM
        renew until 09/12/17 17:19:08
[root@ip-172-31-6-148 cdh-shell-bak]#

3. Run a MapReduce job

[root@ip-172-31-6-148 ~]# hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 10 1
...
Starting Job
17/09/02 20:10:43 INFO mapreduce.Job: Running job: job_1504383005209_0001
17/09/02 20:10:56 INFO mapreduce.Job: Job job_1504383005209_0001 running in uber mode : false
17/09/02 20:10:56 INFO mapreduce.Job: map 0% reduce 0%
17/09/02 20:11:09 INFO mapreduce.Job: map 20% reduce 0%
17/09/02 20:11:12 INFO mapreduce.Job: map 40% reduce 0%
17/09/02 20:11:13 INFO mapreduce.Job: map 50% reduce 0%
17/09/02 20:11:15 INFO mapreduce.Job: map 60% reduce 0%
17/09/02 20:11:16 INFO mapreduce.Job: map 70% reduce 0%
17/09/02 20:11:19 INFO mapreduce.Job: map 80% reduce 0%
17/09/02 20:11:21 INFO mapreduce.Job: map 100% reduce 0%
17/09/02 20:11:26 INFO mapreduce.Job: map 100% reduce 100%
17/09/02 20:11:26 INFO mapreduce.Job: Job job_1504383005209_0001 completed successfully

4. Connect to Hive with beeline for testing

[root@ip-172-31-6-148 cdh-shell-bak]# beeline
Beeline version 1.1.0-cdh5.12.1 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000/;principal=hive/ip-172-31-6-148.fayson.com@FAYSON.COM
...
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000/> show tables;
...
INFO : OK
+-------------+--+
|  tab_name   |
+-------------+--+
| test_table  |
+-------------+--+
1 row selected (0.194 seconds)
0: jdbc:hive2://localhost:10000/> select * from test_table;
...
INFO : OK
+----------------+----------------+--+
| test_table.s1  | test_table.s2  |
+----------------+----------------+--+
| 4              | lisi           |
| 1              | test           |
| 2              | fayson         |
| 3              | zhangsan       |
+----------------+----------------+--+
4 rows selected (0.144 seconds)
0: jdbc:hive2://localhost:10000/>

Run a Hive query that launches a MapReduce job:

0: jdbc:hive2://localhost:10000/> select count(*) from test_table;
...
INFO : OK
+------+--+
| _c0  |
+------+--+
| 4    |
+------+--+
1 row selected (35.779 seconds)
0: jdbc:hive2://localhost:10000/>

5. Common Issues


1. Running a MapReduce job as a Kerberos user fails

main : run as user is fayson
main : requested yarn user is fayson
Requested user fayson is not whitelisted and has id 501, which is below the minimum allowed 1000
Failing this attempt. Failing the application.
17/09/02 20:05:04 INFO mapreduce.Job: Counters: 0
Job Finished in 6.184 seconds
java.io.FileNotFoundException: File does not exist: hdfs://ip-172-31-6-148:8020/user/fayson/QuasiMonteCarlo_1504382696029_1308422444/out/reduce-out
    at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1266)
    at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1258)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1258)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1820)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1844)
    at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
    at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Cause: YARN refuses to run jobs for users whose UID is below the configured minimum (1000 in the error above).

Fix: lower YARN's min.user.id setting below the user's UID (or raise the user's UID above the threshold).
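In Cloudera Manager this is the YARN "Minimum User ID" (min.user.id) setting. The logic behind the error can be sketched as a simple threshold check; the function name and threshold value here are illustrative, not YARN's actual code:

```shell
#!/bin/sh
# Illustrative sketch of the UID check behind the error above:
# YARN rejects users whose UID is below min.user.id.
min_user_id=1000

check_uid() {
    uid="$1"
    if [ "$uid" -lt "$min_user_id" ]; then
        echo "UID $uid is below min.user.id=$min_user_id: job submission will fail"
    else
        echo "UID $uid is allowed"
    fi
}

check_uid 501     # fayson's UID in the error above
check_uid 1001
```

Lowering min.user.id below 501 (or recreating fayson with a UID of 1000 or more) makes the check pass.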

Follow "Hadoop實操" (Hadoop in Practice) for more Hadoop tips as soon as they are published; reposts and shares are welcome.

Reposted from: https://my.oschina.net/u/4016761/blog/2878676
