1) FreeIPA client configuration
2) Enabling Kerberos on a CDP DC 7.1 cluster with FreeIPA
3) Using Kerberos
This document focuses on how to enable and configure Kerberos with FreeIPA on a CDP DC cluster, and is based on the following assumptions:
1) The CDP DC cluster is running normally
2) Kerberos has not yet been enabled on the cluster
3) The FreeIPA Server has already been set up
4) MySQL 5.1.73
The test environment for this article is as follows:
1) Operating system: CentOS 7.7
2) CM and Cloudera Runtime version 7.1.1
3) All operations are performed as the root user
1) Install the ipa-client package on each cluster node:

yum -y install ipa-client

2) Run the FreeIPA client installer to register the node with the IPA server:

[root@grocery ~]# ipa-client-install
WARNING: ntpd time&date synchronization service will not be configured as conflicting service (chronyd) is enabled
DNS discovery failed to determine your DNS domain
Provide the domain name of your IPA server (ex: example.com): vpc.cloudera.com
Provide your IPA server name (ex: ipa.example.com): XXX.vpc.cloudera.com
The failure to use DNS to find your IPA server indicates that your resolv.conf file is not properly configured.
Autodiscovery of servers for failover cannot work with this configuration.
If you proceed with the installation, services will be configured to always access the discovered server for all operations and will not fail over to other servers in case of failure.
Proceed with fixed values and no DNS discovery? [no]: yes
Client hostname: grocery.vpc.cloudera.com
Realm: VPC.CLOUDERA.COM
DNS Domain: vpc.cloudera.com
IPA Server: xuefeng.vpc.cloudera.com
BaseDN: dc=vpc,dc=cloudera,dc=com
Continue to configure the system with these values? [no]: yes
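The same registration must be repeated on every other cluster node. Below is a minimal non-interactive sketch of that step; the --domain, --server and --realm values follow the example environment above, the admin password placeholder must be supplied, and the options accepted can vary between ipa-client versions, so treat it as an illustration rather than the exact command used in this article.

# Sketch only: register an additional node with the IPA server without prompts.
# Values follow the example environment above; adjust them for your own site.
yum -y install ipa-client
ipa-client-install --unattended \
  --domain=vpc.cloudera.com \
  --server=xuefeng.vpc.cloudera.com \
  --realm=VPC.CLOUDERA.COM \
  --principal=admin \
  --password='<admin-password>' \
  --mkhomedir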
3) Modify the /etc/krb5.conf configuration file on the cluster nodes.
a) Comment out default_ccache_name, as in the sketch below.
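For reference, a minimal sketch of the relevant parts of /etc/krb5.conf after the change. The realm and KDC host follow the example environment above; the file generated by ipa-client-install on your nodes will contain additional settings, and the only point here is the commented-out default_ccache_name line.

# /etc/krb5.conf (excerpt) -- sketch only, values follow the example environment
[libdefaults]
  default_realm = VPC.CLOUDERA.COM
  dns_lookup_realm = false
  dns_lookup_kdc = false
  ticket_lifetime = 24h
  renew_lifetime = 7d
  forwardable = true
# default_ccache_name = KEYRING:persistent:%{uid}

[realms]
  VPC.CLOUDERA.COM = {
    kdc = xuefeng.vpc.cloudera.com
    admin_server = xuefeng.vpc.cloudera.com
  }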
After the change, obtain a ticket as the admin user with kinit and verify it with klist:

[root@wangxf ~]# vi /etc/krb5.conf
[root@wangxf ~]# kinit admin
Password for admin@VPC.CLOUDERA.COM:
You have new mail in /var/spool/mail/root
[root@wangxf ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: admin@VPC.CLOUDERA.COM

Valid starting       Expires              Service principal
11/28/2019 03:54:32  11/29/2019 03:54:29  krbtgt/VPC.CLOUDERA.COM@VPC.CLOUDERA.COM
[root@wangxf ~]#
Log into Cloudera Manager. In the CM admin console, click the cluster menu and then click Enable Kerberos, as shown in the figure below. Alternatively, go to Administration -> Security, open the Status tab, and click Enable Kerberos for the cluster to start the wizard.
In the Enable Kerberos for Cluster wizard, select Red Hat IPA as the KDC Type. After Red Hat IPA is selected, the page displays the IPA preparation steps. Once all of those steps have been completed, check "I have completed all the above steps."
On the Setup KDC page, fill in the KDC-related information in order, including the KDC type, KDC server, KDC Realm, encryption types, and the renewal lifetime of the Service Principals to be created (hdfs, yarn, hbase, hive, etc.), then click Next.
Letting Cloudera Manager manage krb5.conf is not recommended; click Continue.
Enter the Kerberos administrator account for Cloudera Manager. If this step is executed repeatedly it will fail; in that case, delete the newly added account from FreeIPA and run the step again (see the sketch below).
The wizard also covers the command for deploying the client to the other nodes, the ports to configure, and so on. Since the Kerberos clients on the other machines were already configured above, nothing needs to be changed here; keep the default ports.
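A minimal sketch of removing the account that the wizard added to FreeIPA, run on the IPA server (or any host with the ipa CLI) before re-running the credential import. The login "cloudera-scm" is only an assumed example; list the users first and delete whichever account the wizard actually created in your environment.

# Sketch only: clean up the account the Enable Kerberos wizard created in FreeIPA
# so that the credential-import step can be re-run.
kinit admin
ipa user-find                 # identify the account that was added by the wizard
ipa user-del cloudera-scm     # assumed login; replace with the actual account name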
With Kerberos enabled, we can now verify Kerberos usage on the cluster.

1) The admin user already exists in FreeIPA. After authenticating as the admin user, HDFS can be accessed normally. Once the existing credentials are destroyed with kdestroy, the HDFS client can no longer authenticate and access fails:

[root@grocery ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: admin@VPC.CLOUDERA.COM

Valid starting       Expires              Service principal
12/09/2019 08:48:45  12/10/2019 08:48:43  krbtgt/VPC.CLOUDERA.COM@VPC.CLOUDERA.COM
        renew until 12/16/2019 08:48:43
[root@grocery ~]# hdfs dfs -ls /
Found 6 items
drwxr-xr-x   - hbase hbase               0 2019-12-09 09:13 /hbase
drwxr-xr-x   - hdfs  supergroup          0 2019-12-09 08:13 /ranger
drwxrwxr-x   - solr  solr                0 2019-12-09 08:17 /solr
drwxrwxrwt   - hdfs  supergroup          0 2019-12-09 08:15 /tmp
drwxr-xr-x   - hdfs  supergroup          0 2019-12-09 08:19 /user
drwxr-xr-x   - hdfs  supergroup          0 2019-12-09 08:13 /warehouse
[root@grocery ~]# kdestroy
[root@grocery ~]# hdfs dfs -ls /
19/12/09 09:19:43 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
ls: DestHost:destPort grocery.vpc.cloudera.com:8020 , LocalHost:localPort grocery.vpc.cloudera.com/10.65.31.238:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
[root@grocery ~]#

2) Run a MapReduce job as the admin user. The job needs to write to the user's directory under /user, and no such directory has been created for admin in HDFS, so the job fails with a permission error:

hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 100
hdfs dfs -ls /user

Because there is no /user/admin directory in HDFS, the MapReduce temporary files have nowhere to be written and the job fails.

3) Create the /user/admin directory as the hdfs user and grant ownership. Go to /var/run/cloudera-scm-agent/process/ and change into the most recent hdfs directory:

cd /var/run/cloudera-scm-agent/process/
cd 89-hdfs-NAMENODE
kinit -kt hdfs.keytab hdfs/grocery.vpc.cloudera.com@VPC.CLOUDERA.COM
kinit: Pre-authentication failed: Unsupported key table format version number while getting initial credentials

Run the following commands to create the directory for the admin user in HDFS and change its ownership:

hdfs dfs -mkdir /user/admin
hdfs dfs -chown -R admin:hadoop /user/admin
hdfs dfs -ls /user/
Found 10 items
drwxr-xr-x   - admin    hadoop              0 2019-12-09 09:27 /user/admin
drwxrwxrwx   - mapred   hadoop              0 2019-12-09 08:18 /user/history
drwxrwxr-t   - hive     hive                0 2019-12-09 08:21 /user/hive
drwxrwxr-x   - hue      hue                 0 2019-12-09 08:17 /user/hue
drwx------   - livy     livy                0 2019-12-09 08:18 /user/livy
drwxrwxr-x   - oozie    oozie               0 2019-12-09 08:14 /user/oozie
drwxr-x--x   - spark    spark               0 2019-12-09 08:14 /user/spark
drwxr-xr-x   - hdfs     supergroup          0 2019-12-09 08:13 /user/tez
drwxr-xr-x   - hdfs     supergroup          0 2019-12-09 08:16 /user/yarn
drwx------   - zeppelin zeppelin            0 2019-12-09 08:19 /user/zeppelin

4) Switch back to the admin user and run the same MapReduce job again.

Connecting to Hive as the admin user, creating a table, inserting data, and querying all complete normally:

beeline
show databases;
use default;
create table t1 (s1 string,s2 string);
insert into t1 values('1','2');
select * from t1;

Beeline automatically applies the current Kerberos credentials and logs in directly.

Running Hive queries and accessing HDFS as the admin user in Hue also works. Uploading a file to /user as the admin user fails because the user has no write permission there; uploading to /user/admin succeeds.

To summarize:
1) Enabling Kerberos on CDP Data Center is simpler than enabling Kerberos on CDH.
2) Using Kerberos authentication on CDP Data Center has also become simpler; for example, the beeline connection string no longer needs to carry the Kerberos principal.

The Kerberos Ticket Renewer role of Hue reported an error. To troubleshoot, check the detailed role log and inspect the Hue credential cache with:

klist -f -c /var/run/hue/hue_krb5_ccache

The relevant krb5.conf settings are:

ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true

See reference 2) below for a solution.

References:
1) https://docs.cloudera.com/runtime/7.0.3/cdp-security-overview/topics/security-how-identity-management-works-in-cdp.html
2) Hue with Kerberos fails to start with "Couldn't renew kerberos ticket"; solution: https://blog.csdn.net/vah101/article/details/79111585