Installing k8s on Ubuntu: below are the problems I ran into and how I solved them,
including my workaround for the sites our company firewall blocks.
Since v1.21.2 still has unresolved issues, I will try downgrading and switching to CentOS,
so follow-up posts with the new results are coming. Stay tuned...
I. Method 1:
Inside the company network, the firewall blocks https://packages.cloud.google.com/apt/, so this method failed and I fell back to Method 2 and then Method 3.
1. Install the apt-transport-https, ca-certificates and curl packages
sudo apt-get update
=> Hit a certificate problem with the repository, so add the --allow... options:
sudo apt-get update --allow-unauthenticated --allow-insecure-repositories
sudo apt-get install -y apt-transport-https ca-certificates curl
2. Install kubelet, kubeadm and kubectl
1) Set up the signing key for https://packages.cloud.google.com
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmor | sudo tee /usr/share/keyrings/kubernetes-archive-keyring.gpg > /dev/null
2) Add the Kubernetes apt repository
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
=> If the command above still fails, try adding trusted=yes allow-insecure=yes allow-weak=yes allow-downgrade-to-insecure=yes check-valid-until=no inside the deb [...] brackets
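For clarity, the relaxed sources entry would look like the sketch below. Only use these options on a trusted internal network, since they disable apt's signature and validity checking entirely:

```shell
# Same repository entry as above, with the trust-relaxing options added
# inside the brackets. WARNING: this turns off apt's signature checking.
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg trusted=yes allow-insecure=yes allow-weak=yes allow-downgrade-to-insecure=yes check-valid-until=no] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
```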
sudo apt-get update --allow-unauthenticated --allow-insecure-repositories
sudo apt-get install -y kubelet kubeadm kubectl
=> This step still could not download anything because the firewall blocks https://packages.cloud.google.com/apt/, so I moved on to Method 2
II. Method 2:
Install with snap (if snap is not installed, run apt install snapd first)
Reference: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
1. Install kubelet, kubeadm and kubectl
snap install kubectl --classic
kubectl version --client
snap install kubeadm --classic
kubeadm version
snap install kubelet --classic
kubelet --version
2. Kubernetes manages memory itself, so swap must be turned off
swapoff -a
sed -e '/swap/ s/^#*/#/' -i /etc/fstab
free -m
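To see what that sed expression actually does, here is a sketch run against a throwaway copy instead of the real /etc/fstab: '/swap/ s/^#*/#/' puts a '#' in front of every line mentioning swap, commenting it out so swap stays off after a reboot.

```shell
# Demonstrate the fstab edit on a temp file (the real target is /etc/fstab).
tmp_fstab=$(mktemp)
printf '%s\n' 'UUID=abcd-1234 / ext4 defaults 0 1' '/swapfile none swap sw 0 0' > "$tmp_fstab"
sed -e '/swap/ s/^#*/#/' -i "$tmp_fstab"
cat "$tmp_fstab"   # the swap line is now commented out, the root fs line is untouched
rm -f "$tmp_fstab"
```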
3. Initialize the master to create the k8s cluster
kubeadm init --pod-network-cidr 10.5.0.0/16
=> The following error appeared and I could not resolve it at the time, so I switched to Method 3
root@ubuntu-VirtualBox:~/snap# kubeadm init --pod-network-cidr 10.5.0.0/16
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-socat]: socat not found in system path
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileExisting-conntrack]: conntrack not found in system path
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
root@ubuntu-VirtualBox:~/snap# systemctl enable kubelet.service
Failed to enable unit: Unit file kubelet.service does not exist.
Fix for [ERROR FileExisting-conntrack]: conntrack not found in system path:
=> apt-get install conntrack
III. Method 3:
References:
http://kimiwublog.blogspot.com/2017/05/kubernetes.html
https://milexz.pixnet.net/blog/post/228096329-%E3%80%90k8s%E3%80%91kubernetes%E7%92%B0%E5%A2%83%E6%9E%B6%E8%A8%ADby-kubeadm
https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
https://www.downloadkubernetes.com/
1. Install kubectl
1) curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
=> Hit this error: curl: (60) SSL certificate problem: EE certificate key too weak
Fix: Ubuntu 20.04 raised the minimum TLS version to 1.2 (and OpenSSL's default security level), so the certificate fails validation.
Edit /etc/ssl/openssl.cnf and add the following below oid_section = new_oids:
openssl_conf = default_conf
[default_conf]
ssl_conf = ssl_sect
[ssl_sect]
system_default = system_default_sect
[system_default_sect]
MinProtocol = TLSv1.1
CipherString = DEFAULT@SECLEVEL=1
2) curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(<kubectl.sha256)  kubectl" | sha256sum --check
3) sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version
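The `echo "$(<file.sha256)  file" | sha256sum --check` pattern used above can be tried end-to-end on a local dummy file; the file names below are made up just for this demo (note sha256sum --check wants two spaces between the hash and the file name, and `$(<file)` is a bash-ism):

```shell
# Self-contained demo of the checksum verification pattern used above (bash).
workdir=$(mktemp -d)
cd "$workdir"
echo "dummy binary" > kubectl-demo                           # stand-in for the real download
sha256sum kubectl-demo | awk '{print $1}' > kubectl-demo.sha256
echo "$(<kubectl-demo.sha256)  kubectl-demo" | sha256sum --check   # prints: kubectl-demo: OK
cd - > /dev/null
rm -rf "$workdir"
```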
2. Install kubeadm
1) curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubeadm"
2) curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubeadm.sha256"
echo "$(<kubeadm.sha256)  kubeadm" | sha256sum --check
3) sudo install -o root -g root -m 0755 kubeadm /usr/local/bin/kubeadm
kubeadm version
3. Install kubelet
1) curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubelet"
2) curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubelet.sha256"
echo "$(<kubelet.sha256)  kubelet" | sha256sum --check
3) sudo install -o root -g root -m 0755 kubelet /usr/local/bin/kubelet
kubelet --version
4. Kubernetes manages memory itself, so swap must be turned off
sudo swapoff -a
sudo sed -e '/swap/ s/^#*/#/' -i /etc/fstab
free -m
5. Initialize the master to create the k8s cluster
sudo kubeadm init --pod-network-cidr 10.5.0.0/16
The following errors appeared:
[init] Using Kubernetes version: v1.21.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-socat]: socat not found in system path
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Fixes:
1) [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
=> See items 7) and 8) under section 6 below
2) [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
=> See item 8) under section 6 below
3) The errors below occurred because kubeadm init had already been run in this environment; run kubeadm reset to clear the previous configuration
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
4) To see the stack trace of this error execute with --v=5 or higher
6. Following the checklist at https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/, review the whole environment again
1) Verify that the MAC address and product_uuid are unique on every node; since only the first node exists so far, this step can be skipped
ifconfig -a
sudo cat /sys/class/dmi/id/product_uuid
2) Check network adapters
3) Use iptables and make sure the br_netfilter (bridged traffic) module is loaded
lsmod | grep br_netfilter
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
4) Check that the required ports are not already in use
A. Control-plane node(s)
Protocol  Direction  Port Range   Purpose                  Used By
TCP       Inbound    6443*        Kubernetes API server    All
TCP       Inbound    2379-2380    etcd server client API   kube-apiserver, etcd
TCP       Inbound    10250        kubelet API              Self, Control plane
TCP       Inbound    10251        kube-scheduler           Self
TCP       Inbound    10252        kube-controller-manager  Self
B. Worker node(s)
Protocol  Direction  Port Range   Purpose                  Used By
TCP       Inbound    10250        kubelet API              Self, Control plane
TCP       Inbound    30000-32767  NodePort Services        All
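A quick way to check those control-plane ports before running kubeadm init is to probe them with bash's built-in /dev/tcp. This is only a sketch: a port reported "free" here just means nothing on localhost accepted a TCP connection.

```shell
# Probe each required control-plane port on 127.0.0.1 (bash only).
# A refused connection means the port is free for kubeadm to use.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}
for p in 6443 2379 2380 10250 10251 10252; do
  if port_in_use "$p"; then
    echo "port $p is already in use"
  else
    echo "port $p is free"
  fi
done
```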
5) Any of the following container runtimes can be used
- Docker /var/run/dockershim.sock
- containerd /run/containerd/containerd.sock
- CRI-O /var/run/crio/crio.sock
6) Pre-pull the images kubeadm needs
sudo kubeadm config images pull
Note: kubeadm init itself prints the hint "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'"
7) Change Docker's cgroup driver to systemd
Reference: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/
A. sudo mkdir /etc/docker (only if the directory does not already exist)
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
B. Restart the Docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable kubelet.service
C. Confirm the cgroup driver in use
docker info | grep "Cgroup"
=> Cgroup Driver: systemd
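Before restarting Docker it is worth syntax-checking daemon.json, since a stray smart quote or comma will stop the daemon from starting at all. A sketch using python3's standard-library JSON checker, demonstrated on a temp copy (point it at /etc/docker/daemon.json on the real host):

```shell
# Write the same config to a temp file and syntax-check it with python3.
tmp_daemon=$(mktemp)
cat <<'EOF' > "$tmp_daemon"
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
python3 -m json.tool "$tmp_daemon" > /dev/null && echo "daemon.json OK"
rm -f "$tmp_daemon"
```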
8) Create the kubelet service and set its cgroup driver to systemd
A. sudo vi /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/kubelet \
  --v=2 \
  --cgroup-driver=systemd \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
B. Restart the kubelet service
systemctl daemon-reload && systemctl restart kubelet
C. If you have no kubelet.service to start from, try downloading this one (the sed rewrites the binary path to match where kubelet was installed above)
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/master/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:/usr/local/bin:g" > /etc/systemd/system/kubelet.service
D. Check the kubelet.service startup log; if startup fails, look for the cause here first
journalctl -xeu kubelet (startup log), or journalctl -f -u kubelet
E. If the cgroup driver still cannot be configured, see
https://www.cnblogs.com/hellxz/p/kubelet-cgroup-driver-different-from-docker.html
On the worker nodes, check the file /var/lib/kubelet/kubeadm-flags.env: if KUBELET_KUBEADM_ARGS contains the --cgroup-driver=cgroupfs flag, change it to systemd and kubelet will start working again.
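That edit can be done with a one-line sed. Sketched here against a temp copy with a made-up flag line, so nothing on a real node is touched; on a real worker the target file is /var/lib/kubelet/kubeadm-flags.env:

```shell
# Flip --cgroup-driver from cgroupfs to systemd in kubeadm-flags.env.
envfile=$(mktemp)
echo 'KUBELET_KUBEADM_ARGS="--cgroup-driver=cgroupfs --network-plugin=cni"' > "$envfile"
sed -i 's/--cgroup-driver=cgroupfs/--cgroup-driver=systemd/' "$envfile"
cat "$envfile"   # now shows --cgroup-driver=systemd
rm -f "$envfile"
```

After changing the file, restart kubelet (systemctl restart kubelet) so it picks up the new flag.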
9) Add or edit the file kubeadm-config.yaml => in my testing, though, once kubelet's cgroup driver was already set to systemd this made no difference
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta2
kubernetesVersion: v1.21.0
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
7. Initialize the master again to create the k8s cluster
1) Run kubeadm reset to clear the previous configuration
kubeadm reset
2) Pre-pull the images again
kubeadm config images pull
3) Make sure kubelet.service starts successfully
systemctl start kubelet.service or systemctl restart kubelet.service
systemctl status kubelet.service
4) Clear all IPVS virtual-server rules
ipvsadm --clear
5) All of the following attempts still failed
sudo kubeadm init --pod-network-cidr 10.5.0.0/16
sudo kubeadm init --kubernetes-version=v1.21.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap --v=5
sudo kubeadm init --kubernetes-version=v1.21.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=all --v=5
sudo kubeadm init --config kubeadm-config.yml --v=5
Error: Error execution phase wait-control-plane
=> The following did not help either; this is as far as I have gotten for now
sudo vi /etc/ufw/sysctl.conf
# 2021.07.08
Add:
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
References:
https://docs.nvidia.com/datacenter/cloud-native/kubernetes/install-k8s.html
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/kubelet-integration/
https://www.cnblogs.com/horizonli/p/10855666.html
https://www.qikqiak.com/k8s-book/docs/16.%E7%94%A8%20kubeadm%20%E6%90%AD%E5%BB%BA%E9%9B%86%E7%BE%A4%E7%8E%AF%E5%A2%83.html
https://jimmysong.io/kubernetes-handbook/cloud-native/cloud-native-local-quick-start.html


Ubuntu 21.04: problems installing k8s (Kubernetes) v1.21.2 and how I handled them