The previous seven articles gave you a first look at Kubernetes resource objects; this article walks through building a Kubernetes cluster environment.
1. Introduction to and planning of the Kubernetes cluster
Kubernetes clusters come in two topologies: one master with multiple nodes, and multiple masters with multiple nodes.
Kubernetes can currently be installed with kubeadm, minikube, binary packages, or by compiling from source.
kubeadm: a tool for quickly standing up a Kubernetes cluster; simple and convenient to install.
minikube: a tool for quickly setting up a single-node Kubernetes; simple to install and well suited to learning the basics.
Binary packages: download each component's binary package from the official site and install them one by one; the process is more involved, but it is more instructive for understanding the Kubernetes components.
Source build: download the source code, compile it into binaries, and install them one by one; suited to scenarios where the Kubernetes source needs to be modified.
This article shows how to build a "one master, two nodes" Kubernetes cluster with kubeadm.
2. Preparing the environment
2.1 Server planning
Node name   IP               Role     OS version
master01    192.168.226.100  Master   CentOS Linux release 7.9.2009
node01      192.168.226.101  Node     CentOS Linux release 7.9.2009
node02      192.168.226.102  Node     CentOS Linux release 7.9.2009
harbor      192.168.226.103  Harbor   CentOS Linux release 7.9.2009
2.2 Initial system setup
(1) Check the operating system version; CentOS 7 or later is recommended.
[root@master01 ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
(2) Configure hostname resolution (in production an internal DNS server is normally used instead).
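The shell prompts used throughout this article (master01, node01, node02, harbor) assume that each machine's hostname has already been set. If yours have not been, a minimal sketch, with the names taken from the planning table above (run each command on the corresponding machine):
hostnamectl set-hostname master01   # on 192.168.226.100
hostnamectl set-hostname node01     # on 192.168.226.101
hostnamectl set-hostname node02     # on 192.168.226.102
hostnamectl set-hostname harbor     # on 192.168.226.103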
[root@node01 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.226.100 master01
192.168.226.101 node01
192.168.226.102 node02
192.168.226.103 harbor
(3) Configure time synchronization
Kubernetes requires the clocks of all nodes in the cluster to be precisely consistent. Here the chronyd service synchronizes time from the network; enterprises usually run their own internal time servers.
#Start chronyd
[root@master01 ~]# systemctl start chronyd
#Enable chronyd at boot
[root@master01 ~]# systemctl enable chronyd
(4) Disable the iptables and firewalld services [be careful doing this in production]
While running, Kubernetes and Docker generate a large number of iptables rules. To keep the system's own rules from getting mixed up with them, the system firewall is simply turned off. CentOS 7 starts the firewall service (firewalld.service) by default; the safer approach is to open the ports the components need for talking to each other, but in a trusted network environment you can simply stop the firewall service.
#Stop the firewall
[root@node01 ~]# systemctl stop firewalld
#Prevent the firewall from starting at boot
[root@node01 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
#If the iptables service is not installed, this step can be skipped
[root@node01 ~]# systemctl stop iptables
Failed to stop iptables.service: Unit iptables.service not loaded.
[root@node01 ~]# systemctl disable iptables
Failed to execute operation: No such file or directory
(5) Disable SELinux
SELinux is a security service on Linux. Disabling SELinux lets containers read the host file system; otherwise all kinds of problems crop up during the cluster installation. As Kubernetes' SELinux support improves, SELinux can gradually be re-enabled and container security policies managed through Kubernetes.
[root@master01 ~]# vim /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
#SELINUX=enforcing
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
After saving the change and exiting, the system must be rebooted for it to take effect.
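As an optional aside (not part of the original steps): if you cannot reboot right away, SELinux can also be switched out of enforcing mode for the current boot; the permanent change in /etc/sysconfig/selinux still needs the reboot shown next.
# Stop enforcing for the current boot only; getenforce will then report Permissive rather than Disabled
setenforce 0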
Check before the reboot:
[root@master01 ~]# getenforce
Enforcing
Check after the reboot:
[root@master01 ~]# getenforce
Disabled
(6) Disable the swap partition
The swap partition is virtual memory: once physical memory is exhausted, disk space is used as if it were memory.
Having a swap device enabled has a very negative impact on performance, so Kubernetes requires swap to be disabled on every node.
In addition, kubeadm itself requires the Linux swap area to be turned off.
Method one: edit /etc/fstab and comment out the swap entry (permanent)
[root@master01 ~]# vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Jul 29 17:29:34 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                         xfs     defaults    0 0
UUID=582672d0-6257-40cd-ad81-3e6a030a111b /boot   xfs     defaults    0 0
#/dev/mapper/centos-swap swap                     swap    defaults    0 0
After saving the file, the system must be rebooted for the change to take effect.
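If you prefer not to edit the file by hand, the swap entry can also be commented out with a single sed command; a sketch (it comments every line that mentions swap, so check /etc/fstab afterwards):
sed -ri 's/.*swap.*/#&/' /etc/fstab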
Method two: run swapoff -a directly (temporary; after a reboot swap is re-enabled according to /etc/fstab)
[root@node01 ~]# swapoff -a
Verify that swap is off:
[root@master01 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1828         404        1035          13         388        1266
Swap:             0           0           0
(7) Adjust Linux kernel parameters
[root@node01 ~]# vim /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
net.ipv4.neigh.default.gc_thresh1=1024
net.ipv4.neigh.default.gc_thresh2=2048
net.ipv4.neigh.default.gc_thresh3=4096
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
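The two net.bridge.* settings above depend on the br_netfilter kernel module. The next step loads it with modprobe, which only lasts until the next reboot; to have it loaded automatically at boot you can drop a modules-load.d file (an optional sketch, not in the original article):
# Load br_netfilter automatically on every boot (read by systemd-modules-load on CentOS 7)
cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF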
#Apply the settings
[root@master01 sysctl.d]# sysctl -p
#Load the bridge filter module
[root@master01 sysctl.d]# modprobe br_netfilter
#Check whether the bridge filter module loaded successfully
[root@master01 sysctl.d]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
Troubleshooting:
#When running sysctl -p you may see:
[root@localhost ~]# sysctl -p
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
#Fix:
[root@localhost ~]# modprobe br_netfilter
(8) Upgrade to the latest stable kernel
Check the system kernel: the 3.10.x kernel that ships with CentOS 7 has bugs that make Docker and Kubernetes unstable, so upgrade to the latest stable kernel before installing Kubernetes. This article uses 5.4.141.
#Check the current kernel
[root@harbor ~]# uname -r
3.10.0-1160.el7.x86_64
Kernel upgrade steps:
#Step 1: enable the ELRepo repository
[root@master01 /]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@master01 /]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
#Step 2: list the kernel packages available
[root@master01 ~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * elrepo-kernel: mirrors.tuna.tsinghua.edu.cn
Available Packages
elrepo-release.noarch                 7.0-5.el7.elrepo        elrepo-kernel
kernel-lt-devel.x86_64                5.4.141-1.el7.elrepo    elrepo-kernel
kernel-lt-doc.noarch                  5.4.141-1.el7.elrepo    elrepo-kernel
kernel-lt-headers.x86_64              5.4.141-1.el7.elrepo    elrepo-kernel
kernel-lt-tools.x86_64                5.4.141-1.el7.elrepo    elrepo-kernel
kernel-lt-tools-libs.x86_64           5.4.141-1.el7.elrepo    elrepo-kernel
kernel-lt-tools-libs-devel.x86_64     5.4.141-1.el7.elrepo    elrepo-kernel
kernel-ml.x86_64                      5.13.11-1.el7.elrepo    elrepo-kernel
kernel-ml-devel.x86_64                5.13.11-1.el7.elrepo    elrepo-kernel
kernel-ml-doc.noarch                  5.13.11-1.el7.elrepo    elrepo-kernel
kernel-ml-headers.x86_64              5.13.11-1.el7.elrepo    elrepo-kernel
kernel-ml-tools.x86_64                5.13.11-1.el7.elrepo    elrepo-kernel
kernel-ml-tools-libs.x86_64           5.13.11-1.el7.elrepo    elrepo-kernel
kernel-ml-tools-libs-devel.x86_64     5.13.11-1.el7.elrepo    elrepo-kernel
perf.x86_64                           5.13.11-1.el7.elrepo    elrepo-kernel
python-perf.x86_64                    5.13.11-1.el7.elrepo    elrepo-kernel
#As shown, two kernel versions are available for this upgrade: 5.4.141 and 5.13.11
#Step 3: install the new kernel
[root@master01 /]# yum --enablerepo=elrepo-kernel install kernel-lt
#The --enablerepo option enables the named repository on CentOS. The default is elrepo; here it is replaced with elrepo-kernel.
#Step 4: list all kernels installed on the system
[root@master01 ~]# awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
0 : CentOS Linux (5.4.141-1.el7.elrepo.x86_64) 7 (Core)
1 : CentOS Linux (3.10.0-1160.el7.x86_64) 7 (Core)
2 : CentOS Linux (0-rescue-d6070eb94632405a8d7946becd78c86b) 7 (Core)
#Step 5: set the new kernel as the default boot kernel (based on the kernel list above)
[root@master01 ~]# grub2-set-default 0
# 0 is the index from the previous step; pass the index of whichever kernel should boot by default
#Step 6: regenerate the grub configuration and reboot
[root@master01 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.4.141-1.el7.elrepo.x86_64
Found initrd image: /boot/initramfs-5.4.141-1.el7.elrepo.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-1160.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-1160.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-d6070eb94632405a8d7946becd78c86b
Found initrd image: /boot/initramfs-0-rescue-d6070eb94632405a8d7946becd78c86b.img
done
[root@master01 ~]# reboot
#Step 7: confirm the kernel now in use
[root@master01 ~]# uname -r
5.4.141-1.el7.elrepo.x86_64
Kernel upgrade troubleshooting:
If CentOS 7 was installed under VMware, the system may fail to boot after the kernel upgrade (the original article includes a screenshot of the boot error here).
Cause: a virtual-machine hardware-version compatibility problem. When creating the virtual machine, use the custom setup and select a hardware version with better compatibility (shown as a screenshot in the original article).
(9) Enable IPVS support
By default, kube-proxy runs in iptables mode in a kubeadm-deployed cluster. kube-proxy handles the traffic routing between Services (svc) and Pods, and the IPVS scheduling mode can greatly improve its forwarding efficiency.
#Install ipset and ipvsadm
[root@master01 ~]# yum install ipset ipvsadm -y
#Write the modules to load into a script
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
#modprobe -- nf_conntrack_ipv4   ####### for kernels below 4.19
modprobe -- nf_conntrack         ####### for kernel 4.19 and above
modprobe -- ip_tables
modprobe -- ip_set
modprobe -- xt_set
modprobe -- ipt_set
modprobe -- ipt_rpfilter
modprobe -- ipt_REJECT
modprobe -- ipip
EOF
#Make the script executable
[root@master01 modules]# chmod 755 /etc/sysconfig/modules/ipvs.modules
#Run the script
[root@master01 modules]# bash /etc/sysconfig/modules/ipvs.modules
#In kernel 4.19 nf_conntrack_ipv4 was renamed to nf_conntrack, so on kernels upgraded to 4.19 or later the script may fail with "modprobe: FATAL: Module nf_conntrack_ipv4 not found"; adjust it as noted in the comments above (a version-aware sketch follows).
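To avoid editing the script by hand when moving between old and new kernels, the conntrack line can also be made version-aware; a minimal sketch (the 4.19 cut-over is the one described above):
#Load the right conntrack module for the running kernel
kver=$(uname -r | cut -d. -f1-2)
if [ "$(printf '%s\n' "4.19" "$kver" | sort -V | head -n1)" = "4.19" ]; then
    modprobe -- nf_conntrack        # kernel 4.19 and above
else
    modprobe -- nf_conntrack_ipv4   # kernels below 4.19
fi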
#Check that the modules loaded successfully
[root@node01 ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
[root@master01 ~]# lsmod | grep -e ip_vs -e nf_conntrack -e ip
2.3 Install Docker
(1) Remove old versions
[root@master01 yum.repos.d]# yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
(2) Switch the package mirror to the Aliyun mirror
[root@master01 yum.repos.d]# yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror, langpacks
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
(3) List the Docker versions this mirror provides
[root@master01 yum.repos.d]# yum list docker-ce --showduplicates
(4) Install a specific docker-ce version
--setopt=obsoletes=0 must be specified, otherwise yum automatically installs the latest version
[root@master01 yum.repos.d]# yum install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7 -y
(5) Configure the Aliyun registry mirror and the cgroup driver
By default Docker uses cgroupfs as its cgroup driver, while Kubernetes recommends using systemd instead.
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://4cnob6ep.mirror.aliyuncs.com"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
EOF
systemctl daemon-reload
systemctl restart docker
(6) Enable Docker at boot
[root@master01 docker]# systemctl enable docker
(7) Check the Docker version and details
[root@master01 docker]# docker version
[root@master01 docker]# docker info
3. Building the Kubernetes cluster
3.1 Install the Kubernetes components
(1) The Kubernetes package repositories are hosted abroad and downloads are slow, so switch to the domestic Aliyun mirror
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
(2) Install kubeadm, kubelet and kubectl
[root@master01 ~]# yum install --setopt=obsoletes=0 -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
(3) Configure the kubelet cgroup driver
[root@master01 yum.repos.d]# vim /etc/sysconfig/kubelet
#KUBELET_EXTRA_ARGS=
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE=ipvs
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
(4) Restart kubelet and enable it at boot
[root@master01 yum.repos.d]# systemctl daemon-reload
[root@master01 yum.repos.d]# systemctl restart kubelet
[root@master01 yum.repos.d]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
3.2 Enable kubectl command completion
Install and configure bash-completion to enable tab completion for kubectl
[root@master01 ~]# yum install -y bash-completion
[root@master01 ~]# echo 'source /usr/share/bash-completion/bash_completion' >> /etc/profile
[root@master01 ~]# source /etc/profile
[root@master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@master01 ~]# source ~/.bashrc
3.3 Adjust the kubeadm default configuration
Both kubeadm's control-plane initialization command (init) and its node join command (join) can override their default parameter values through a configuration file. kubeadm stores this configuration in the cluster as a ConfigMap, which makes later inspection and upgrades easier, and the kubeadm config subcommands provide support for this (a short usage example follows the list below).
kubeadm config upload from-file: create the ConfigMap in the cluster from a configuration file.
kubeadm config upload from-flags: create the ConfigMap from command-line parameters.
kubeadm config view: view the configuration values currently stored in the cluster.
kubeadm config print init-defaults: print the default parameters used by kubeadm init.
kubeadm config print join-defaults: print the default parameters used by kubeadm join.
kubeadm config migrate: convert configuration files between old and new versions.
kubeadm config images list: list the images the cluster needs.
kubeadm config images pull: pull those images to the local machine.
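For example, the two images subcommands accept the same --config file built later in this article, which is handy for checking and pre-pulling images against the customized settings (a sketch, assuming the kubeadm-config.yaml from section 3.3 already exists; note that with the default imageRepository of k8s.gcr.io the pull step needs direct access to Google's registry, which is why section 3.4 downloads the images another way):
# List the images the customized configuration will need
kubeadm config images list --config kubeadm-config.yaml
# Pull them ahead of running kubeadm init
kubeadm config images pull --config kubeadm-config.yaml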
Use kubeadm config print init-defaults to print the default configuration used for cluster initialization, and create the default kubeadm-config.yaml file with the following command:
[root@master01 kubernetes]# kubeadm config print init-defaults > kubeadm-config.yaml
W0820 20:34:03.392106    3417 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
(1) Change the following defaults in kubeadm-config.yaml:
localAPIEndpoint:
  advertiseAddress: 192.168.226.100   (change to the master node's IP)
imageRepository: k8s.gcr.io           (the registry images are pulled from)
kubernetesVersion: v1.18.0            (the Kubernetes version)
(2) Add the following configuration:
networking:
  podSubnet: 10.244.0.0/16
#Declares the Pod network CIDR. [Note: this must be added.] The flannel network plugin installed later provides
#the overlay network and this is its default Pod CIDR; if the CIDRs do not match, the Pods would have to be
#fixed one by one afterwards.
#Append the following at the end of the file to switch the default proxy mode to IPVS:
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
(3) The modified kubeadm configuration looks like this (note: this configuration targets a one-master, multi-node cluster):
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.226.100
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
(4) The parts that make up kubeadm-config.yaml:
InitConfiguration: defines initialization settings such as the bootstrap token and the apiserver address.
ClusterConfiguration: defines the settings for the master components: apiserver, etcd, network, scheduler, controller-manager, and so on.
KubeletConfiguration: defines kubelet-related settings.
KubeProxyConfiguration: defines kube-proxy-related settings.
Note: the default kubeadm-config.yaml contains only the InitConfiguration and ClusterConfiguration parts. Example files for the other two parts can be generated as follows:
# Print a KubeletConfiguration example
kubeadm config print init-defaults --component-configs KubeletConfiguration
# Print a KubeletConfiguration example and write it to a file
kubeadm config print init-defaults --component-configs KubeletConfiguration > kubeadm-kubeletconfig.yaml
# Print a KubeProxyConfiguration example
kubeadm config print init-defaults --component-configs KubeProxyConfiguration
# Print a KubeProxyConfiguration example and write it to a file
kubeadm config print init-defaults --component-configs KubeProxyConfiguration > kubeadm-kubeproxyconfig.yaml
3.4 Initialize the master node
(1) Prepare the images
Because the Google registry is not reachable from inside China (unless you have a way around the firewall), the images the cluster needs must be pulled manually from a domestic registry and then re-tagged as k8s.gcr.io. The steps are as follows:
Step 1: list the images needed to initialize the cluster:
[root@master01 ~]# kubeadm config images list
W0820 21:31:08.731408    7659 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0820 21:31:08.731739    7659 version.go:103] falling back to the local client version: v1.18.0
W0820 21:31:08.732395    7659 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
Step 2: write the image download script:
#Create the image list file
[root@master01 ~]# vim k8s-images.txt
kube-apiserver:v1.18.0
kube-controller-manager:v1.18.0
kube-scheduler:v1.18.0
kube-proxy:v1.18.0
pause:3.2
etcd:3.4.3-0
coredns:1.6.7
#Create the image download script
[root@master01 kubernetes]# vim k8s-images.sh
for image in `cat k8s-images.txt`
do
  echo downloading---- $image
  docker pull gotok8s/$image
  docker tag gotok8s/$image k8s.gcr.io/$image
  docker rmi gotok8s/$image
done
#Make the script executable
[root@master01 ~]# chmod 777 k8s-images.sh
#Run the script to download the images from the domestic registry
[root@master01 kubernetes]# ./k8s-images.sh
(2) Initialize the master node
Initialize with the yaml file prepared above; --upload-certs (supported since 1.13) uploads the certificates automatically, and tee writes all the output to kubeadm-init.log:
[root@master01 kubernetes]# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
#Post-initialization steps, as instructed by the console output
[root@master01 kubernetes]# mkdir -p $HOME/.kube
[root@master01 kubernetes]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master01 kubernetes]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
#Check the cluster node status
[root@master01 kubernetes]# kubectl get nodes
NAME       STATUS     ROLES    AGE   VERSION
master01   NotReady   master   20m   v1.18.0
(3) Walkthrough of the initialization log
[root@master01 kubernetes]# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log   #cluster initialization command
W0820 21:46:37.633578    9253 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0   #Kubernetes version
[preflight] Running pre-flight checks   #checks the current environment
[preflight] Pulling images required for setting up a Kubernetes cluster   #downloads the images
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'   #pulls the images
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"   #where the kubelet environment file is saved
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"   #where the kubelet configuration file is saved
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"   #where the Kubernetes certificates live
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.226.100]   #the DNS names and default domain configured for the apiserver certificate
[certs] Generating "apiserver-kubelet-client" certificate and key   #generates keys for the Kubernetes components
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master01 localhost] and IPs [192.168.226.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master01 localhost] and IPs [192.168.226.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"   #where the component kubeconfig files live
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0820 21:46:45.991676    9253 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0820 21:46:45.993829    9253 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests".
This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.006303 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
19a94c1fef133502412cec496124e49424e6fb40900ea3b8fb25bb2a30947217
[mark-control-plane] Marking the node master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!   #initialization succeeded

To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.226.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:b58abf99813a2c8a98ff35e6ef7af4a0684869a60791efef561dc600d2d488e4
3.5 Join the node machines to the cluster
(1) Copy the image download script from the master node to node01 and node02
[root@master01 kubernetes]# scp k8s-images.txt k8s-images.sh root@192.168.226.101:/data/kubernetes/
The authenticity of host '192.168.226.101 (192.168.226.101)' can't be established.
ECDSA key fingerprint is SHA256:bGZOi1f3UFkN+Urjo7zAmsMyeGbgU7f+ROmAjweU2ac.
ECDSA key fingerprint is MD5:8f:73:26:e2:6d:f4:00:87:1d:eb:42:4e:03:9d:39:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.226.101' (ECDSA) to the list of known hosts.
root@192.168.226.101's password:
k8s-images.txt                                100%  134     6.9KB/s   00:00
k8s-images.sh                                 100%  170    70.2KB/s   00:00
[root@master01 kubernetes]# scp k8s-images.txt k8s-images.sh root@192.168.226.102:/data/kubernetes/
The authenticity of host '192.168.226.102 (192.168.226.102)' can't be established.
ECDSA key fingerprint is SHA256:IQbFmazVPUenOUh5+o6183jj7FJzKDXBfTPgG6imdWU.
ECDSA key fingerprint is MD5:3c:44:a8:0c:fd:66:88:ba:6c:aa:fe:28:46:f5:25:d1.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.226.102' (ECDSA) to the list of known hosts.
root@192.168.226.102's password:
k8s-images.txt                                100%  134    54.5KB/s   00:00
k8s-images.sh                                 100%  170    26.3KB/s   00:00
(2) Run the script on node01 and node02 to download the images:
[root@node01 kubernetes]# ./k8s-images.sh
(3) Using the kubeadm join command from the master initialization log, join node01 and node02 to the cluster:
[root@node02 kubernetes]# kubeadm join 192.168.226.100:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:b58abf99813a2c8a98ff35e6ef7af4a0684869a60791efef561dc600d2d488e4
W0820 22:09:55.345093    9653 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
(4) Check the cluster status from the master node
[root@master01 kubernetes]# kubectl get nodes
NAME       STATUS     ROLES    AGE   VERSION
master01   NotReady   master   24m   v1.18.0
node01     NotReady   <none>   92s   v1.18.0
node02     NotReady   <none>   65s   v1.18.0
Note: every node is still NotReady because no network plugin (such as flannel) has been installed yet; the flannel project lives at https://github.com/flannel-io/flannel.
(5) Check other cluster information
#Check component health
[root@master01 kubernetes]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
#List the pods in the kube-system namespace; these are all Kubernetes system pods
[root@master01 kubernetes]# kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-5nfc2           0/1     Pending   0          26m
coredns-66bff467f8-r966w           0/1     Pending   0          26m
etcd-master01                      1/1     Running   0          27m
kube-apiserver-master01            1/1     Running   0          27m
kube-controller-manager-master01   1/1     Running   0          27m
kube-proxy-g98hd                   1/1     Running   0          26m
kube-proxy-hbt4j                   1/1     Running   0          4m42s
kube-proxy-mtmrm                   1/1     Running   0          4m15s
kube-scheduler-master01            1/1     Running   0          27m
#List all namespaces currently in the system
[root@master01 kubernetes]# kubectl get ns
NAME              STATUS   AGE
default           Active   28m
kube-node-lease   Active   28m
kube-public       Active   28m
kube-system       Active   28m
#List all ConfigMaps in the cluster
[root@master01 kubernetes]# kubectl get configmaps -A
NAMESPACE     NAME                                 DATA   AGE
kube-public   cluster-info                         2      44m
kube-system   coredns                              1      44m
kube-system   extension-apiserver-authentication   6      44m
kube-system   kube-proxy                           2      44m
kube-system   kubeadm-config                       2      44m
kube-system   kubelet-config-1.18                  1      44m
3.6 Install the network plugin
(1) Download the manifest
[root@master01 kubernetes]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2021-08-21 08:48:14--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 0.0.0.0, ::
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|0.0.0.0|:443... failed: Connection refused.
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|::|:443... failed: Cannot assign requested address.
#Downloading kube-flannel.yml fails with "connection refused" because the site is blocked by the firewall; workaround: add the entry 199.232.68.133 raw.githubusercontent.com to /etc/hosts
[root@master01 kubernetes]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.226.100 master01
192.168.226.101 node01
192.168.226.102 node02
192.168.226.103 harbor
199.232.68.133 raw.githubusercontent.com
#Download kube-flannel.yml again
[root@master01 kubernetes]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2021-08-21 08:50:39--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 199.232.68.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|199.232.68.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4813 (4.7K) [text/plain]
Saving to: 'kube-flannel.yml'
100%[=================================================================================================================================>] 4,813       --.-K/s   in 0s
2021-08-21 08:50:40 (20.8 MB/s) - 'kube-flannel.yml' saved [4813/4813]
(2) Edit the yml
……
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:          ############# resource limits; in production, size these up appropriately for the actual machines
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
……
(3) Install the network plugin
#Run the apply command
[root@master01 kubernetes]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
Note: if the flannel image fails to pull, there are two workarounds:
(1) Change every quay.io image reference in kube-flannel.yml to a registry reachable from inside China.
(2) Download the image manually (https://github.com/flannel-io/flannel/releases) and import it locally (see the sketch below).
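For the second workaround, a minimal sketch of importing a manually downloaded image on every node. The file name and loaded tag are assumptions based on the v0.14.0 release page; adjust them to whatever you actually downloaded, and make sure the final tag matches the image referenced in kube-flannel.yml:
# Copy the downloaded archive to each node first, then:
docker load -i flanneld-v0.14.0-amd64.docker
# Retag if the loaded tag differs from the one in kube-flannel.yml (quay.io/coreos/flannel:v0.14.0 above)
docker tag quay.io/coreos/flannel:v0.14.0-amd64 quay.io/coreos/flannel:v0.14.0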
3.7 Check the cluster status
[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   19h   v1.18.0
node01     Ready    <none>   18h   v1.18.0
node02     Ready    <none>   18h   v1.18.0
#every node in the cluster is now Ready
[root@master01 ~]# kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-5nfc2           1/1     Running   3          19h
coredns-66bff467f8-r966w           1/1     Running   3          19h
etcd-master01                      1/1     Running   4          19h
kube-apiserver-master01            1/1     Running   5          19h
kube-controller-manager-master01   1/1     Running   4          19h
kube-flannel-ds-d6pl5              1/1     Running   3          7h37m
kube-flannel-ds-lvdqq              1/1     Running   3          7h37m
kube-flannel-ds-nbq6r              1/1     Running   3          7h37m
kube-proxy-g98hd                   1/1     Running   4          19h
kube-proxy-hbt4j                   1/1     Running   4          18h
kube-proxy-mtmrm                   1/1     Running   4          18h
kube-scheduler-master01            1/1     Running   4          19h
#the Kubernetes system components are all running normally
[root@master01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
#the component statuses are healthy
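Since kube-proxy was switched to IPVS mode in the kubeadm configuration, it is worth confirming it actually came up that way before reading the IPVS rules below; a hedged sketch (the exact log wording can vary between versions):
# The kube-proxy pods log which proxier they are using at startup; expect something like "Using ipvs Proxier"
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=100 | grep -i proxier
# The mode can also be read from the kube-proxy ConfigMap
kubectl -n kube-system get cm kube-proxy -o yaml | grep "mode:"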
#Check the IPVS rules
[root@node01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.226.100:6443         Masq    1      1          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.10:53               Masq    1      0          0
  -> 10.244.0.11:53               Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.10:9153             Masq    1      0          0
  -> 10.244.0.11:9153             Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.10:53               Masq    1      0          0
  -> 10.244.0.11:53               Masq    1      0          0
4. Installing Harbor
Harbor is an enterprise-grade registry project open-sourced by VMware. Built on top of Docker's open-source registry, it adds the features enterprise users need: a management UI, role-based access control, AD/LDAP integration, audit logging, and more. Harbor consists of a group of containers: nginx, harbor-jobservice, harbor-ui, harbor-db, harbor-adminserver, registry and harbor-log. These containers find and talk to each other through Docker's internal DNS service discovery, so each one can be reached through its service name; for end users only the reverse proxy (nginx) port needs to be exposed. Harbor is deployed with docker-compose; the docker-compose template in the make directory of the Harbor source tree is used for the deployment.
4.1 Install docker-compose (Harbor is deployed with docker-compose)
#Download docker-compose
[root@harbor ~]# curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
#Make it executable
[root@harbor ~]# chmod +x /usr/local/bin/docker-compose
#Create a symlink for convenience
[root@harbor ~]# ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
#Check the docker-compose version; if it prints correctly, the installation succeeded
[root@harbor bin]# docker-compose --version
4.2 Download the Harbor installation package
Download the Harbor installation package from GitHub: https://github.com/goharbor/harbor/releases. This article uses the latest release, v2.3.1.
[root@harbor /]# mkdir /data/harbor -p
[root@harbor /]# cd /data/harbor/
[root@harbor harbor]# wget https://github.com/goharbor/harbor/releases/download/v2.3.1/harbor-offline-installer-v2.3.1.tgz
4.3 Unpack the offline installer and edit the configuration
[root@harbor harbor]# tar zxvf harbor-offline-installer-v2.3.1.tgz
[root@harbor harbor]# cd harbor/
[root@harbor harbor]# cp harbor.yml.tmpl harbor.yml
#Edit the Harbor configuration file
[root@harbor harbor]# vim harbor.yml
#Settings that need to be changed:
#hostname is the access address; it can be an IP or a domain name, but must not be 127.0.0.1 or localhost
hostname: 192.168.226.103
#certificate and private key paths (matching the files created in section 4.4)
certificate: /data/harbor_data/certs/harbor.crt    ######### uncomment and point at the actual path
private_key: /data/harbor_data/certs/harbor.key    ######### uncomment and point at the actual path
#password for the web login
harbor_admin_password: Harbor12345
4.4 Create the certificate
[root@harbor harbor_data]# mkdir /data/harbor_data/certs -p
[root@harbor harbor_data]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout /data/harbor_data/certs/harbor.key -x509 -out /data/harbor_data/certs/harbor.crt -subj /C=CN/ST=BJ/L=BJ/O=DEVOPS/CN=harbor.wangzy.com -days 3650
Parameter reference (a quick check of the generated certificate follows the list):
req: the command for generating a certificate signing request
-newkey: generate a new private key
rsa:4096: the key length in bits
-nodes: do not encrypt the private key
-sha256: use the SHA-2 hash algorithm
-keyout: the file the newly created private key is written to
-x509: issue an X.509 certificate; X.509 is the most widely used certificate format
-out: the output file to write to
-subj: the subject (user) information
-days: the validity period (3650 means ten years)
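To double-check the certificate that was just generated (its subject and validity period), openssl can read it back; a small sketch:
openssl x509 -in /data/harbor_data/certs/harbor.crt -noout -subject -dates
# prints the subject line (CN=harbor.wangzy.com) and the notBefore/notAfter dates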
4.5 Install
[root@harbor harbor]# ./prepare
[root@harbor harbor]# ./install.sh
4.6 Mark the private Harbor registry as trusted
# Add the registry to the trusted (insecure) registries
[root@master01 ~]# vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://4cnob6ep.mirror.aliyuncs.com"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "insecure-registries": ["https://192.168.226.103"]
}
#Restart the docker service
[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
4.7 Day-to-day operations
#Change to the Harbor installation directory
[root@harbor ~]# cd /data/harbor/harbor/
#Start the containers; if the containers do not exist they are not created from the images and will not start
[root@harbor harbor]# docker-compose start
#Stop the containers
[root@harbor harbor]# docker-compose stop
#Start in the background; containers that do not exist are created from the images automatically
[root@harbor harbor]# docker-compose up -d
#Stop and remove the containers
[root@harbor harbor]# docker-compose down -v
#List all of the project's containers
[root@harbor harbor]# docker-compose ps
#Show the running processes
[root@harbor harbor]# docker-compose top
4.8 Verify that Harbor works
Once Harbor is up it can be reached at https://192.168.226.103. The default account is admin and the password is Harbor12345.
(1) Log in as the admin user
(2) On the master node, pull a helloworld image from a remote registry and push it to the newly created Harbor registry.
[root@master01 ~]# docker search helloworld
NAME                      DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
supermanito/helloworld    學習資料                                         216
karthequian/helloworld    A simple helloworld nginx container to get y…   17                 [OK]
strm/helloworld-http      A hello world container for testing http bal…   6                  [OK]
deis/helloworld                                                           6                  [OK]
buoyantio/helloworld                                                      4
wouterm/helloworld        A simple Docker image with an Nginx server …    1                  [OK]

[root@master01 ~]# docker pull wouterm/helloworld
Using default tag: latest
latest: Pulling from wouterm/helloworld
658bc4dc7069: Pull complete
a3ed95caeb02: Pull complete
af3cc4b92fa1: Pull complete
d0034177ece9: Pull complete
983d35417974: Pull complete
aef548056d1a: Pull complete
Digest: sha256:a949eca2185607e53dd8657a0ae8776f9d52df25675cb3ae3a07754df5f012e6
Status: Downloaded newer image for wouterm/helloworld:latest

[root@master01 ~]# docker images
REPOSITORY           TAG      IMAGE ID       CREATED       SIZE
wouterm/helloworld   latest   0706462ea954   4 years ago   17.8MB

[root@master01 ~]# docker login 192.168.226.103
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/

Login Succeeded

[root@master01 ~]# docker tag wouterm/helloworld:latest 192.168.226.103/library/myhelloworld:v1
[root@master01 ~]# docker push 192.168.226.103/library/myhelloworld:v1
The push refers to repository [192.168.226.103/library/myhelloworld]
6781046df40c: Pushed
5f70bf18a086: Pushed
7958de11d0de: Pushed
3d3d4b273cf9: Pushed
1aaf09e09313: Pushed
a58990fe2574: Pushed
v1: digest: sha256:7278e5235e3780dc317f4a93dfd3961bf5760119d77553da3f7c9ee9d32a040a size: 1980
Log in to the Harbor web UI and you can see the image that was just pushed, myhelloworld:v1.
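As a final check (not in the original article), you can pull the image back from Harbor on one of the cluster nodes; the node needs the same insecure-registries entry from section 4.6 in its /etc/docker/daemon.json, and if the library project is not public, run docker login 192.168.226.103 there first:
# On node01, for example
docker pull 192.168.226.103/library/myhelloworld:v1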
PS: If you need the configuration files used during this installation, leave me a message on my WeChat official account.