Kubernetes: Creating a Cluster with kubeadm
Lab environment
CentOS-7-x86_64-DVD-1810
Docker 19.03.9
Kubernetes version: v1.20.5
Before you begin
- One or more machines running a Linux OS compatible with deb/rpm packages
- At least 2 GB of RAM per machine
- 2 or more CPU cores on the machine that will act as the control-plane node
- Full network connectivity between all machines in the cluster
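A quick way to sanity-check these prerequisites on each machine (a minimal sketch; 10.118.80.93 stands in for another node in this environment):
# free -h    # total memory should be 2G or more
# nproc      # should print 2 or more on the control-plane node
# ping -c 3 10.118.80.93    # confirm connectivity to the other machines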
Objectives
- Install a single control-plane Kubernetes cluster
- Install a Pod network on the cluster so that Pods can communicate with each other
Installation guide
Install Docker
The Docker installation procedure itself is omitted here.
Note: when installing docker, pin a version supported by Kubernetes (see below). If the installed docker version is too new, kubeadm reports the following warning:
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
Install docker with the version pinned:
sudo yum install docker-ce-19.03.9 docker-ce-cli-19.03.9 containerd.io
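If you are not sure which versions the repository provides, you can list them first (assuming the docker-ce yum repository is already configured):
# yum list docker-ce --showduplicates | sort -r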
If docker is not installed at all, kubeadm init reports:
cannot automatically set CgroupDriver when starting the Kubelet: cannot execute 'docker info -f {{.CgroupDriver}}': executable file not found in $PATH
[preflight] WARNING: Couldn't create the interface used for talking to the container runtime: docker is required for container runtime: exec: "docker": executable file not found in $PATH
Install kubeadm
If kubeadm is not installed yet, install it first. If it is already installed, update it to the latest version with apt-get update && apt-get upgrade or yum update.
Note: while kubeadm is being upgraded, kubelet restarts every few seconds; this is expected behavior.
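For reference, a minimal sketch of installing kubeadm, kubelet and kubectl on CentOS 7, following the upstream yum repository instructions that were current for v1.20 (swap the baseurl for a domestic mirror if the Google repository is unreachable):
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# systemctl enable --now kubelet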
Other prerequisite steps
Disable the firewall
# systemctl stop firewalld && systemctl disable firewalld
Run the command above to stop and disable firewalld; otherwise kubeadm init warns:
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
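Disabling the firewall outright is fine for a lab. If you would rather keep firewalld running, opening the two ports named in the warning should also satisfy the check (a sketch using standard firewall-cmd options):
# firewall-cmd --permanent --add-port=6443/tcp
# firewall-cmd --permanent --add-port=10250/tcp
# firewall-cmd --reload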
Modify /etc/docker/daemon.json
Edit /etc/docker/daemon.json and add the following content:
{
"exec-opts":["native.cgroupdriver=systemd"]
}
Then restart docker by running systemctl restart docker.
If this step is skipped, kubeadm init warns:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
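You can confirm that the change took effect with the same check kubeadm uses:
# docker info -f '{{.CgroupDriver}}'
systemd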
Install dependency packages such as socat and conntrack
# yum install socat conntrack-tools
If these dependency packages are missing, kubeadm init reports:
[WARNING FileExisting-socat]: socat not found in system path
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileExisting-conntrack]: conntrack not found in system path
Set net.ipv4.ip_forward to 1
Set the value as follows:
# sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
Note: when net.ipv4.ip_forward is 0, packet forwarding is disabled; when it is 1, forwarding is allowed. If the value is not 1, kubeadm init reports:
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
The setting above only takes effect until the next reboot. To make it persistent, also run:
# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
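To load the values from /etc/sysctl.conf immediately without rebooting, you can reload them:
# sysctl -p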
Note: some online guides recommend the following approach for a permanent setting, but in my testing it does not work:
# echo "sysctl -w net.ipv4.ip_forward=1" >> /etc/rc.local
# chmod +x /etc/rc.d/rc.local
Set net.bridge.bridge-nf-call-iptables to 1
Use the same approach as for net.ipv4.ip_forward; a sketch follows.
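A minimal sketch of the equivalent commands (the net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so load it first if necessary):
# modprobe br_netfilter
# sysctl -w net.bridge.bridge-nf-call-iptables=1
# echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf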
Note: all of the steps above must be performed on every node in the cluster.
Initializing the control-plane node
The machine on which the control-plane components run is called the control-plane node; these components include etcd (the cluster database) and the API Server (which the kubectl command-line tool talks to).
- (Recommended) If you plan to upgrade this single-control-plane kubeadm cluster to high availability later, pass the --control-plane-endpoint option to kubeadm init to set a shared endpoint for all control-plane nodes. The endpoint can be a DNS name or the IP address of a load balancer.
- Choose a network add-on and check whether it requires any options to be passed to kubeadm init. This depends on the add-on you pick; with flannel, for example, you must pass the --pod-network-cidr option to kubeadm init.
- (Optional) Since version 1.14, kubeadm automatically detects the container runtime. If you want to use a different container runtime, or more than one runtime is installed, pass the --cri-socket option to kubeadm init.
- (Optional) Unless told otherwise, kubeadm uses the network interface associated with the default gateway to set the advertise address for this control-plane node's API server. To use a different network interface, pass the --apiserver-advertise-address=<ip-address> option to kubeadm init. To deploy an IPv6 Kubernetes cluster, pass an IPv6 address to --apiserver-advertise-address, for example --apiserver-advertise-address=fd00::101.
- (Optional) Before running kubeadm init, run kubeadm config images pull to verify connectivity to the gcr.io container image registry (a sketch follows this list).
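Since gcr.io is not reachable in this environment, the pre-pull can be pointed at the same Aliyun mirror that kubeadm init uses below (a sketch; the repository and version arguments simply mirror the init command):
# kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version stable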
Now run kubeadm init with the options below to initialize the control-plane node. The command first runs a series of pre-flight checks to ensure the machine is ready to run Kubernetes. If the checks find errors, it exits automatically; otherwise it continues, downloading and installing the cluster control-plane components. This may take several minutes.
# kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version stable --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.20.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost.localdomain] and IPs [10.96.0.1 10.118.80.93]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [10.118.80.93 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [10.118.80.93 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 89.062309 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 1sh85v.surdstc5dbrmp1s2
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.118.80.93:6443 --token ap4vvq.8xxcc0uea7dxbjlo \
--discovery-token-ca-cert-hash sha256:c4493c04d789463ecd25c97453611a9dfacb36f4d14d5067464832b9e9c5039a
As shown above, the message "Your Kubernetes control-plane has initialized successfully!" and the hints that follow it tell us that the control-plane node was initialized successfully.
Note:
Following the output's instructions, run the commands below so that a non-root user can also use kubectl:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
Save the kubeadm join command from the kubeadm init output; you will need it later to add nodes to the cluster.
The token is used for mutual authentication between the control-plane node and joining nodes. Keep it safe: anyone who has this token can add authenticated nodes to the cluster. Tokens can be listed, created and deleted with the kubeadm token command; see the kubeadm reference guide for details.
Install a Pod network add-on
**You must deploy a Container Network Interface (CNI) based Pod network so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up until a Pod network is installed.**
- Take care that the Pod network does not overlap with any of the host networks; overlapping ranges cause problems. If you find a conflict between your network plugin's preferred Pod network and some host network, choose a suitable CIDR block, pass it via the --pod-network-cidr option to kubeadm init, and replace the network configuration in the plugin's YAML accordingly.
- By default, kubeadm sets the cluster up to enforce RBAC (role-based access control). Make sure your Pod network plugin, and any manifests you deploy with it, support RBAC.
- If you want the cluster to use IPv6, either dual-stack or single-stack IPv6 only, make sure the network plugin supports IPv6. IPv6 support was added in CNI v0.6.0.
Several projects provide Kubernetes networking support via CNI, and some of them also support Network Policy. A list of add-ons that implement the Kubernetes networking model is available in the Kubernetes documentation.
On the control-plane node, or on any node that has the kubeconfig credentials, you can install a Pod network add-on with the command below. The add-on is installed as a DaemonSet and writes its configuration files to the /etc/cni/net.d directory:
kubectl apply -f <add-on.yaml>
Installing the flannel network add-on
Deploy flannel manually (Kubernetes v1.17+):
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
Reference: https://github.com/flannel-io/flannel#flannel
Only one Pod network can be installed per cluster. Once the Pod network is installed, you can check whether it is working by running kubectl get pods --all-namespaces and verifying that the coredns-xxxxxxxxxx-xxx Pods are in the Running state.
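To watch only the CoreDNS Pods rather than the full list, you can filter by their label (a sketch, assuming the default k8s-app=kube-dns label that CoreDNS is deployed with):
# kubectl -n kube-system get pods -l k8s-app=kube-dns -w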
View the flannel subnet configuration:
# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
After the flannel network add-on is installed, two virtual network interfaces are automatically added on the host: cni0 and flannel.1.
# ifconfig -a
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.0.1 netmask 255.255.255.0 broadcast 10.244.0.255
inet6 fe80::705d:43ff:fed6:80c9 prefixlen 64 scopeid 0x20<link>
ether 72:5d:43:d6:80:c9 txqueuelen 1000 (Ethernet)
RX packets 312325 bytes 37811297 (36.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 356346 bytes 206539626 (196.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:e1ff:fec3:8b6a prefixlen 64 scopeid 0x20<link>
ether 02:42:e1:c3:8b:6a txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3 bytes 266 (266.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.118.80.93 netmask 255.255.255.0 broadcast 10.118.80.255
inet6 fe80::6ff9:dbee:6b27:1315 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:d3:3b:ef txqueuelen 1000 (Ethernet)
RX packets 2092903 bytes 1103282695 (1.0 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 969483 bytes 253273828 (241.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.0.0 netmask 255.255.255.255 broadcast 10.244.0.0
inet6 fe80::a49a:2ff:fe38:3e4b prefixlen 64 scopeid 0x20<link>
ether a6:9a:02:38:3e:4b txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 30393748 bytes 5921348235 (5.5 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 30393748 bytes 5921348235 (5.5 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Re-initializing the control-plane node
During this exercise an option was misconfigured, which was only discovered after the network add-on had been installed, so kubeadm init had to be run again. The actual steps were as follows:
# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Removing info for node "localhost.localdomain" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
# rm -rf /etc/cni/net.d
# rm -f $HOME/.kube/config
After running the commands above, repeat the control-plane initialization steps and re-install the network add-on.
Problems encountered
After re-running kubeadm init, kubectl get pods --all-namespaces showed the coredns-xxxxxxxxxx-xxxxxx Pods stuck in the ContainerCreating state:
# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7f89b7bc75-pxvdx 0/1 ContainerCreating 0 8m33s
kube-system coredns-7f89b7bc75-v4p57 0/1 ContainerCreating 0 8m33s
kube-system etcd-localhost.localdomain 1/1 Running 0 8m49s
... (output truncated)
Running kubectl describe pod coredns-7f89b7bc75-pxvdx -n kube-system to inspect the Pod revealed the following error:
Warning FailedCreatePodSandBox 98s (x4 over 103s) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "04434c63cdf067e698a8a927ba18e5013d2a1a21afa642b3cddedd4ff4592178" network for pod "coredns-7f89b7bc75-pxvdx": networkPlugin cni failed to set up pod "coredns-7f89b7bc75-pxvdx_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.1.15.1/24
Inspecting the network interfaces showed that cni0 already had an IP address (assigned by the network plugin during the previous run), which is why the plugin failed to set an IP on it this time:
# ifconfig -a
cni0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.118.80.1 netmask 255.255.255.0 broadcast 10.118.80.255
inet6 fe80::482d:65ff:fea6:32fd prefixlen 64 scopeid 0x20<link>
ether 4a:2d:65:a6:32:fd txqueuelen 1000 (Ethernet)
RX packets 267800 bytes 16035849 (15.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 116238 bytes 10285959 (9.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.1.15.0 netmask 255.255.255.255 broadcast 10.1.15.0
inet6 fe80::a49a:2ff:fe38:3e4b prefixlen 64 scopeid 0x20<link>
ether a6:9a:02:38:3e:4b txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0
... (output truncated)
The fix is to delete the misconfigured cni0 interface; it is recreated automatically, after which everything works:
$ sudo ifconfig cni0 down
$ sudo ip link delete cni0
Control-plane node Toleration (optional)
By default, for security reasons, the cluster does not schedule Pods on the control-plane node. If you want Pods to be scheduled on the control-plane node, for example for a single-machine Kubernetes cluster used for development, run the following command:
kubectl taint nodes --all node-role.kubernetes.io/master- # remove the taint from every node that has a label starting with node-role.kubernetes.io/master
In practice:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
localhost.localdomain Ready control-plane,master 63m v1.20.5
# kubectl taint nodes --all node-role.kubernetes.io/master-
node/localhost.localdomain untainted
Adding nodes to the cluster
Change the new node's hostname:
# hostname
localhost.localdomain
# hostname k8sNode1
Changing the hostname with the command above is only temporary. To keep it across reboots, edit the /etc/hostname file and replace the default localhost.localdomain with the target name (k8sNode1 in this example). If the new hostname cannot be resolved, later steps will run into errors like the following:
[WARNING Hostname]: hostname "k8sNode1" could not be reached
[WARNING Hostname]: hostname "k8sNode1": lookup k8sNode1 on 223.5.5.5:53: read udp 10.118.80.94:33293->223.5.5.5:53: i/o timeout
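Alternatively, on CentOS 7 the hostname can be changed persistently in one step with hostnamectl, which avoids editing /etc/hostname by hand:
# hostnamectl set-hostname k8sNode1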
Edit /etc/hosts and add a mapping from the node's hostname to its IP address (10.118.80.94 in this example), as follows:
# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.118.80.94 k8sNode1
SSH into the target node, switch to the root user (if you logged in as a non-root user), and run the kubeadm join command printed by kubeadm init on the control-plane machine, which has the form:
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
On the control-plane machine you can list the existing, unexpired tokens with:
# kubeadm token list
If no token is available, generate a new one on the control-plane machine with:
# kubeadm token create
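As a convenience, kubeadm can also print a complete join command, including a fresh token and the CA certificate hash:
# kubeadm token create --print-join-command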
In practice:
# kubeadm join 10.118.80.93:6443 --token ap4vvq.8xxcc0uea7dxbjlo --discovery-token-ca-cert-hash sha256:c4493c04d789463ecd25c97453611a9dfacb36f4d14d5067464832b9e9c5039a
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
控制面板节点机即
master
机器上查看是否新增结点
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8snode1 NotReady <none> 74s v1.20.5
localhost.localdomain Ready control-plane,master 7h24m v1.20.5
As shown above, a new node k8snode1 has been added.
Problems encountered
Problem 1: kubeadm join fails with the following error:
# kubeadm join 10.118.80.93:6443 --token ap4vvq.8xxcc0uea7dxbjlo --discovery-token-ca-cert-hash sha256:c4493c04d789463ecd25c97453611a9dfacb36f4d14d5067464832b9e9c5039a
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "ap4vvq"
To see the stack trace of this error execute with --v=5 or higher
Solution: the token has expired. Run kubeadm token create to generate a new one.
Problem 2: kubeadm join fails with the following error:
# kubeadm join 10.118.80.93:6443 --token pa0gxw.4vx2wud1e7e0rzbx --discovery-token-ca-cert-hash sha256:c4493c04d789463ecd25c97453611a9dfacb36f4d14d5067464832b9e9c5039a
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: cluster CA found in cluster-info ConfigMap is invalid: none of the public keys "sha256:8e2f94e2f4f1b66c45d941c0a7f72e328c242346360751b5c1cf88f437ab854f" are pinned
To see the stack trace of this error execute with --v=5 or higher
Solution: the discovery-token-ca-cert-hash is no longer valid. Run the following command to obtain the current discovery-token-ca-cert-hash value:
# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
8e2f94e2f4f1b66c45d941c0a7f72e328c242346360751b5c1cf88f437ab854f
Then use the hash from the output:
--discovery-token-ca-cert-hash sha256:8e2f94e2f4f1b66c45d941c0a7f72e328c242346360751b5c1cf88f437ab854f
Problem 3: the cni config uninitialized error
The Kubernetes dashboard showed the newly joined node in the KubeletNotReady state, with the following message:
[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized, CSINode is not yet initialized, missing node capacity for resources: ephemeral-storage]
Solution: re-install the CNI plugins (a virtual machine was used in this exercise, and the snapshot in use probably did not include the network plugins), then clean up the node again and finally re-join it:
# CNI_VERSION="v0.8.2"
# mkdir -p /opt/cni/bin
# curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-amd64-${CNI_VERSION}.tgz" | sudo tar -C /opt/cni/bin -xz
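Cleaning up the node and re-joining it reuses commands already shown above (a sketch; replace the token and hash placeholders with real values from the control-plane machine):
# kubeadm reset
# kubeadm join 10.118.80.93:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>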
Cleanup
If you used disposable servers for testing, you can simply power them off; no further cleanup is required. You can use kubectl config delete-cluster to delete your local references to the cluster (I have not tried this).
If you want to tear the cluster down more cleanly, however, you should first drain the node, make sure it is emptied, and then remove it.
Removing a node
Steps on the control-plane node
First, on the control-plane node, run the following command to drain the node that is to be removed:
kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
In practice:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8snode1 Ready <none> 82m v1.20.5
localhost.localdomain Ready control-plane,master 24h v1.20.5
# kubectl drain k8snode1 --delete-emptydir-data --force --ignore-daemonsets
node/k8snode1 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-4xqcc, kube-system/kube-proxy-c7qzs
evicting pod default/nginx-deployment-64859b8dcc-v5tcl
evicting pod default/nginx-deployment-64859b8dcc-qjrld
evicting pod default/nginx-deployment-64859b8dcc-rcvc8
pod/nginx-deployment-64859b8dcc-rcvc8 evicted
pod/nginx-deployment-64859b8dcc-qjrld evicted
pod/nginx-deployment-64859b8dcc-v5tcl evicted
node/k8snode1 evicted
# kubectl get nodes
NAME STATUS ROLES AGE VERSION