I have deployed a Kubernetes cluster v1.18.8 with kubeadm in a production environment. The cluster setup is 3 master and 3 worker nodes with an external kube-apiserver load balancer, and etcd residing on the master nodes. I didn't see any issues during installation and all pods in kube-system are running. However, when I run the command below I get an error:
kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0               Healthy     {"health":"true"}
While troubleshooting I found that the ports are not being listened on:
sudo netstat -tlpn |grep kube
tcp    0   0 127.0.0.1:10248   0.0.0.0:*   LISTEN   132584/kubelet
tcp    0   0 127.0.0.1:10249   0.0.0.0:*   LISTEN   133300/kube-proxy
tcp    0   0 127.0.0.1:10257   0.0.0.0:*   LISTEN   197705/kube-control
tcp    0   0 127.0.0.1:10259   0.0.0.0:*   LISTEN   213741/kube-schedul
tcp6   0   0 :::10250          :::*        LISTEN   132584/kubelet
tcp6   0   0 :::6443           :::*        LISTEN   132941/kube-apiserv
tcp6   0   0 :::10256          :::*        LISTEN   133300/kube-proxy
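For reference, the same health endpoints from the error messages can be probed directly on a master node (just a sanity check; the ports are taken from the kubectl get cs output above):

curl http://127.0.0.1:10252/healthz   # kube-controller-manager insecure health endpoint
curl http://127.0.0.1:10251/healthz   # kube-scheduler insecure health endpoint

Both are expected to fail with "connection refused" here, since netstat shows nothing listening on 10251 or 10252.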
If I check the same thing on the development environment Kubernetes cluster (v1.17), I see no issue:
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
sudo netstat -tlpn |grep 102
tcp    0   0 127.0.0.1:10257   0.0.0.0:*   LISTEN   2141/kube-controlle
tcp    0   0 127.0.0.1:10259   0.0.0.0:*   LISTEN   2209/kube-scheduler
tcp    0   0 127.0.0.1:10248   0.0.0.0:*   LISTEN   1230/kubelet
tcp    0   0 127.0.0.1:10249   0.0.0.0:*   LISTEN   2668/kube-proxy
tcp6   0   0 :::10256          :::*        LISTEN   2668/kube-proxy
tcp6   0   0 :::10250          :::*        LISTEN   1230/kubelet
tcp6   0   0 :::10251          :::*        LISTEN   2209/kube-scheduler
tcp6   0   0 :::10252          :::*        LISTEN   2141/kube-controlle
On the newly created production cluster I have deployed nginx and another application just to test how the Kubernetes components behave, and didn't see any errors.
Is this the expected behaviour in v1.18? I would really appreciate any help on this.
NOTE: No ports are blocked for internal communication.
The command kubectl get componentstatus is deprecated in newer versions (1.19+) and already has a number of known issues.
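As a replacement, component health can be checked through the API server's aggregated health endpoints; this is a sketch of the commonly suggested alternative and assumes your API server exposes the /livez and /readyz endpoints (available from around v1.16 onwards):

# Overall liveness/readiness of the API server and its dependencies (the verbose readyz output includes an etcd check)
kubectl get --raw='/livez?verbose'
kubectl get --raw='/readyz?verbose'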
The main point to note here is that Kubernetes has disabled insecure serving of these components in newer versions (at least from v1.18). That is why kube-controller-manager and kube-scheduler are no longer listening on ports 10252 and 10251. To restore this functionality you can remove the --port=0 flag from their manifest files (not recommended, as this can expose their metrics endpoints over the network without authentication). The manifests are located in:
/etc/kubernetes/manifests/
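A minimal way to check and change this on a kubeadm control-plane node (a sketch assuming kubeadm's default static-pod manifest names; adjust paths if your setup differs):

# Confirm that the flag is present
sudo grep -- '--port=0' /etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/manifests/kube-controller-manager.yaml

# Edit the manifests and remove the '- --port=0' line; the kubelet
# restarts the static pods automatically once the files are saved
sudo vi /etc/kubernetes/manifests/kube-scheduler.yaml
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml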
I commented out the --port=0 field in the manifest files just to check this, and the kubectl get componentstatus command worked.
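After the static pods restart, you can verify the result with the same checks used in the question (a quick sanity check, not a full validation):

sudo netstat -tlpn | grep -E '10251|10252'
kubectl get componentstatus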