Cluster Installation

Operating System

  • Server OS: CentOS 7.x x86_64

Hosts

  • master node: master 192.168.2.100
  • worker node 1: node1 192.168.2.101
  • worker node 2: node2 192.168.2.102
  • middleware server: middle (Harbor etc.) 192.168.2.103

Server Requirements

Before starting, the machines used for the Kubernetes cluster must meet the following requirements:

One or more machines running CentOS 7.x x86_64

Hardware: at least 2 GB of RAM, at least 2 CPU cores, and at least 30 GB of disk

Internet access, for pulling images; if a server cannot reach the internet, download the images in advance and import them on each node
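For the offline case, images can be exported with docker save on a connected machine and imported with docker load on each node. A minimal sketch, using the pause image purely as an example:

# on a machine with internet access
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
docker save -o pause-3.9.tar registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
# copy the archive to the offline node, then import it there
scp pause-3.9.tar root@192.168.2.100:/tmp/
docker load -i /tmp/pause-3.9.tar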

Install docker

Note: run on all hosts

Reference
http://haimait.top/docs/k8s/docker-install

Install docker-compose

Note: run on all hosts

Reference
http://haimait.top/docs/k8s/docker-compose

Install cri-dockerd

Note: run on all hosts

Get the software

Download it
mkdir /data/softs && cd /data/softs
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.2/cri-dockerd-0.3.2.amd64.tgz

Unpack it
tar xf cri-dockerd-0.3.2.amd64.tgz
mv cri-dockerd/cri-dockerd /usr/local/bin/

Verify
[root@master softs]# cri-dockerd --version
cri-dockerd 0.3.2 (23513f4c)

Configure

Create the service unit file (the heredoc delimiter is quoted so that $MAINPID below is not expanded by the shell)
cat > /etc/systemd/system/cri-dockerd.service <<-'EOF'
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
[Service]
Type=notify
# Note: this pause image address must be pullable from this host
ExecStart=/usr/local/bin/cri-dockerd \
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9 \
  --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin \
  --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock \
  --cri-dockerd-root-directory=/var/lib/dockershim \
  --docker-endpoint=unix:///var/run/docker.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
Create the socket unit file
cat > /etc/systemd/system/cri-dockerd.socket <<-EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-dockerd.service
[Socket]
ListenStream=/var/run/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF
Enable the service to start on boot
systemctl daemon-reload
systemctl enable cri-dockerd.service
systemctl restart cri-dockerd.service
systemctl is-active cri-dockerd
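Once cri-dockerd reports active, the CRI socket it serves should exist; a quick check:

# the socket path matches --container-runtime-endpoint in the unit file
ls -l /var/run/cri-dockerd.sock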

Configure the K8s base environment

Run the following on all machines

# Set the matching hostname on each node
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname middle

# Check the hostname
hostname

# Update /etc/hosts on all nodes, using your own server IPs
cat >> /etc/hosts << EOF
192.168.2.100 master
192.168.2.101 node1
192.168.2.102 node2
192.168.2.103 harbor.top middle
127.0.0.1 localhost
EOF
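After editing /etc/hosts it is worth confirming every name resolves and answers; a small sketch:

# one ping per node; each name should resolve to the IP configured above
for h in master node1 node2 middle; do ping -c 1 $h; done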


## Sync the system time
yum install ntpdate -y
/usr/sbin/ntpdate -u pool.ntp.org

Open the crontab
crontab -e

Add the following entry
*/10 * * * * /usr/sbin/ntpdate -u pool.ntp.org
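The same entry can also be added without opening an editor; a sketch:

# append the job to the current crontab non-interactively
(crontab -l 2>/dev/null; echo '*/10 * * * * /usr/sbin/ntpdate -u pool.ntp.org') | crontab -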

Check the time
date

# Set SELinux to permissive mode (effectively disabling it)
setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
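Confirm the runtime change took effect:

getenforce   # should print Permissive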


### Disable swap

swapoff -a  
sed -i "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab
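Verify that swap is really off:

# the Swap line should show 0B everywhere
free -h | grep -i swap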


# Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system
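The modules-load.d file only takes effect at boot, so load the module now and confirm the sysctls are applied; a sketch:

sudo modprobe br_netfilter
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables   # both should print 1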


# Make sure the firewall is off on all nodes
systemctl stop firewalld
systemctl disable firewalld

# Check the firewall state
firewall-cmd --state

# Stop the iptables service as well
systemctl stop iptables
systemctl disable iptables
systemctl status iptables


### Configure passwordless SSH between nodes (optional)

With SSH trust configured, nodes can reach each other without passwords, which is convenient for automated deployment later

# Run on the master node
ssh-keygen     # run this on every machine and just press Enter at each prompt
ssh-copy-id node1     # run on master: copies master's public key to node1 (node1 is the worker's hostname; an IP also works); answer 'yes' and enter the password
# Log in from the master to the worker
ssh node1

# Run on node1
ssh-keygen
ssh-copy-id master     # copies node1's public key to master; answer 'yes' and enter the password
# Log in from the worker to the master
ssh master
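To distribute a key to every node in one pass, a loop such as the following can be used (a sketch; each host still prompts for its password once):

for h in master node1 node2 middle; do ssh-copy-id root@$h; done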

Install the Harbor image registry

Note: install only on the middleware host, middle

Reference
http://haimait.top/docs/k8s/harbor

Install kubelet, kubeadm and kubectl

Run the following on master, node1 and node2

Set up the Aliyun yum repository for Kubernetes
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Refresh the package cache
yum makecache fast

# Install the packages
yum install kubeadm kubectl kubelet -y

# Or install a pinned version; run only one of these two installs
yum install -y kubelet-1.27.4 kubeadm-1.27.4 kubectl-1.27.4 --disableexcludes=kubernetes
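The --disableexcludes=kubernetes flag only matters if the repo excludes these packages from normal updates; adding such an exclude keeps a later yum update from silently upgrading them. A sketch:

# pin kubelet/kubeadm/kubectl against accidental upgrades
echo "exclude=kubelet kubeadm kubectl" >> /etc/yum.repos.d/kubernetes.repo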


## Start kubelet and docker and enable them on boot (all nodes)
# Reload the systemd unit files
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl status kubelet

systemctl restart kubelet   # (can be skipped)

# Check the versions
kubelet --version # v1.27.4
kubectl version   # v1.27.4
kubeadm version   # v1.27.4

[root@master flannel]# kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.27.4
Kustomize Version: v5.0.1
Server Version: v1.27.3

Bootstrap the k8s cluster with kubeadm

Note: run on the master host only

1. Download the images

Steps:
download the images -> retag them -> push them to the registry

Create the project
Open http://192.168.2.103/harbor/projects in a browser and log in
Create a project named google_containers with public access

Check the image list
[root@master ~]# kubeadm config images list
registry.k8s.io/kube-apiserver:v1.27.4
registry.k8s.io/kube-controller-manager:v1.27.4
registry.k8s.io/kube-scheduler:v1.27.4
registry.k8s.io/kube-proxy:v1.27.4
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/coredns/coredns:v1.10.1


# Steps: pull each image -> retag it -> push to the harbor registry -> delete the locally pulled Aliyun-tagged image
# Capture the image names in the images variable
images=$(kubeadm config images list --kubernetes-version=1.27.3 | awk -F "/" '{print $NF}')
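The push in the loop below assumes this host is already authenticated against the Harbor registry; log in first if needed (a sketch; use the credentials configured in Harbor):

docker login harbor.top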

# Pull from the Aliyun registry
for i in ${images}
do
 docker pull registry.aliyuncs.com/google_containers/$i
 docker tag registry.aliyuncs.com/google_containers/$i harbor.top/google_containers/$i
 docker push harbor.top/google_containers/$i
 docker rmi registry.aliyuncs.com/google_containers/$i
done

2. Initialize the master node

Initialization command
kubeadm init --kubernetes-version=1.27.3 \
--apiserver-advertise-address=192.168.2.100 \
--image-repository harbor.top/google_containers \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=Swap \
--cri-socket=unix:///var/run/cri-dockerd.sock

Flag notes

--apiserver-advertise-address   # the IP of the `master` node
--service-cidr                  # the Service IP range
--pod-network-cidr              # the Pod IP range

Output on success

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.100:6443 --token g7ih9b.mz7smax3vwbnxi5v \
        --discovery-token-ca-cert-hash sha256:9740cb924bb78cb91864c41e50a39187676cbb7a92a014b9b8b743513c6df409

The kubeadm join 192.168.2.100:6443 --token ... command above is what joins the worker nodes to the master (the token is valid for 24 hours)

3. Join the worker nodes to the cluster

Note: append --cri-socket=unix:///var/run/cri-dockerd.sock to the join command

Output on success

[root@node1 ~]# kubeadm join 192.168.2.100:6443 --token g7ih9b.mz7smax3vwbnxi5v --discovery-token-ca-cert-hash sha256:9740cb924bb78cb91864c41e50a39187676cbb7a92a014b9b8b743513c6df409 --cri-socket=unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

If it fails and you want to reset

# If initialization failed, reset with kubeadm reset and start over
kubeadm reset   # reset first
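With cri-dockerd as the runtime, reset may also need the CRI socket passed explicitly, the same as init and join; a sketch:

kubeadm reset -f --cri-socket=unix:///var/run/cri-dockerd.sock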

# Remove the static pod manifests
rm -rf  /etc/kubernetes/manifests/kube-apiserver.yaml  /etc/kubernetes/manifests/kube-controller-manager.yaml /etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/manifests/etcd.yaml 

# Re-initialize
kubeadm init --kubernetes-version=1.27.3 \
--apiserver-advertise-address=192.168.2.100 \
--image-repository harbor.top/google_containers \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=Swap \
--cri-socket=unix:///var/run/cri-dockerd.sock

# Check the logs
journalctl -xeu kubelet
journalctl -fu kubelet

# Fixing: [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed
# Reference:
# https://blog.csdn.net/leisurelen/article/details/117392370

Other initialization errors

Error 1:
For the curl -sSL http://localhost:10248/healthz failure: is localhost mapped in the local hosts file?
Fix: add the mapping to /etc/hosts:
127.0.0.1 localhost

Error 2:
7月 03 17:41:45 master kubelet[115473]: E0703 17:41:45.730879  115473 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotRe
7月 03 17:41:50 master kubelet[115473]: W0703 17:41:50.460817  115473 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
Fix:
docker pull quay.io/coreos/flannel:v0.10.0-amd64 
mkdir -p /etc/cni/net.d/
cat <<EOF> /etc/cni/net.d/10-flannel.conf
{"name":"cbr0","type":"flannel","delegate": {"isDefaultGateway": true}}
EOF
mkdir /usr/share/oci-umount/oci-umount.d -p
mkdir /run/flannel/
cat <<EOF> /run/flannel/subnet.env
FLANNEL_NETWORK=172.100.0.0/16
FLANNEL_SUBNET=172.100.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

Error 3:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Fix:
Cause: kubectl is not bound to the cluster because KUBECONFIG was not set during initialization; setting the environment variable on this machine resolves it.
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile

Error 4:
Error while initializing the K8s master

The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Or it keeps logging the following error

7月 05 00:36:34 master kubelet[106102]: E0705 00:36:34.627900  106102 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://192.168.2.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master?timeout=10s": dial tcp 192.168.2.100:6443: connect: connection refused


Fix, see the link below:
Original link: https://blog.csdn.net/qq_26129413/article/details/122207954

In my case it was the opposite: I removed "exec-opts": ["native.cgroupdriver=systemd"] from /etc/docker/daemon.json (edit it with vim /etc/docker/daemon.json), then restarted docker:

sudo systemctl daemon-reload
sudo systemctl restart docker
systemctl status docker
docker info

Re-running kubeadm init then returned this success output:

[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.111:6443 --token pgshih.lqxd8nxo0stohgzt \
    --discovery-token-ca-cert-hash sha256:39b46cd80f5810e06fa255debf442d5a5880f97fdb2ca1b48a680c42cee36a48 

4. Set up access

Run on the master node

Set up kubectl access to the cluster

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

New token: if the token expires, regenerate the join command with
kubeadm token create --print-join-command

Verify

[root@master manifests]# kubectl get node
NAME     STATUS     ROLES           AGE   VERSION
master   NotReady   control-plane   14h   v1.27.4
node1    NotReady   <none>          14h   v1.27.4
node2    NotReady   <none>          39s   v1.27.4

At this point:
k8s on the master node has been installed successfully, and
node1 and node2 have joined the master (nodes show NotReady until a network plugin is installed; see below)

Command completion

Run on the master host

echo "source <(kubectl completion bash)" >> ~/.bashrc
echo "source <(kubeadm completion bash)" >> ~/.bashrc
echo "alias k='kubectl'" >> ~/.bashrc
source ~/.bashrc
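The k alias does not pick up completion by itself; kubectl's documented completion helper can be registered for the alias as well:

echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
source ~/.bashrc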

Note

Make sure the following three services are active; otherwise the cluster will not recover properly after a master or node reboot
[root@master flannel]# systemctl is-active kubelet cri-dockerd docker
active
active
active

Make sure the three services are enabled at boot; if the command below prints nothing, they are
[root@master flannel]# systemctl enable kubelet cri-dockerd docker
[root@master flannel]#

Install a network plugin

Install either calico or flannel; pick one of the two.
Note: install from the master server only

Install calico

Here calico is installed with helm

1. Install helm

Release list
https://github.com/helm/helm/releases

Download the Linux amd64 tarball
Linux amd64 (checksum / 2b6efaa009891d3703869f4be80ab86faa33fa83d9d5ff2f6492a8aebe97b219)

tar -zxvf helm-v3.12.2-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/
## Verify it works
helm list 

2. Download calico

Release list
https://github.com/projectcalico/calico/releases

Download the tigera-operator chart package
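A sketch for fetching the chart used in the next step (v3.26.1 is assumed here to match the install command below; adjust to the release you picked):

cd /data/softs
wget https://github.com/projectcalico/calico/releases/download/v3.26.1/tigera-operator-v3.26.1.tgz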

Install calico


[root@master softs]# helm install calico ./tigera-operator-v3.26.1.tgz -n kube-system
NAME: calico
LAST DEPLOYED: Thu Aug 10 02:06:48 2023
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None

# Check node status
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
master   Ready    control-plane   2d14h   v1.27.4
node1    Ready    <none>          2d14h   v1.27.4
node2    Ready    <none>          2d      v1.27.4

# Check pod status
[root@master ~]# kubectl get pod -A
NAMESPACE          NAME                                      READY   STATUS    RESTARTS      AGE
calico-apiserver   calico-apiserver-845d74c84c-kvj68         1/1     Running   0             9m16s
calico-apiserver   calico-apiserver-845d74c84c-rgz64         1/1     Running   0             9m16s
calico-system      calico-kube-controllers-55fb6f474-vx7lg   1/1     Running   3 (10m ago)   20h
calico-system      calico-node-5gqbp                         1/1     Running   1 (15h ago)   20h
calico-system      calico-node-nl8g7                         1/1     Running   3 (13m ago)   20h
calico-system      calico-node-qmp8t                         1/1     Running   0             13m
calico-system      calico-typha-7fb75cc495-5lzvc             1/1     Running   4 (11m ago)   20h
calico-system      calico-typha-7fb75cc495-p895k             1/1     Running   4 (12m ago)   20h
calico-system      csi-node-driver-77n4n                     2/2     Running   2 (15h ago)   20h
calico-system      csi-node-driver-km9kr                     2/2     Running   0             20h
calico-system      csi-node-driver-zk8nw                     2/2     Running   4 (13m ago)   20h
kube-system        coredns-7d5ccbb94b-5xmh5                  1/1     Running   2 (13m ago)   2d14h
kube-system        coredns-7d5ccbb94b-mhqc5                  1/1     Running   2 (13m ago)   2d14h
kube-system        etcd-master                               1/1     Running   2 (13m ago)   2d14h
kube-system        kube-apiserver-master                     1/1     Running   2 (13m ago)   2d14h
kube-system        kube-controller-manager-master            1/1     Running   4 (13m ago)   2d14h
kube-system        kube-proxy-572fm                          1/1     Running   2 (13m ago)   2d14h
kube-system        kube-proxy-5wmv4                          1/1     Running   2 (20h ago)   2d
kube-system        kube-proxy-9zf2x                          1/1     Running   3 (14m ago)   2d14h
kube-system        kube-scheduler-master                     1/1     Running   4 (13m ago)   2d14h
kube-system        tigera-operator-5f4668786-mdvv5           1/1     Running   4 (12m ago)   20h

Install flannel

Note: install from the master server only

1. Create a directory
mkdir -p /data/softs/flannel && cd /data/softs/flannel

2. Download the yml file
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
or
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml


3. Pull the images -> retag them -> push them to the registry
for i in $(grep image kube-flannel.yml | grep -v '#' | awk -F '/' '{print $NF}')
do
 docker pull flannel/$i
 docker tag flannel/$i harbor.top/google_containers/$i
 docker push harbor.top/google_containers/$i
 docker rmi flannel/$i
done

4. Back up the config file
cp kube-flannel.yml{,.bak}

5. Edit the config file
Replace the image prefix docker.io/flannel in kube-flannel.yml with harbor.top/google_containers

Run the following command
sed -i '/ image:/s/docker.io\/flannel/harbor.top\/google_containers/' kube-flannel.yml

Check the modified config
[root@master flannel]# grep image kube-flannel.yml
        image: harbor.top/google_containers/flannel-cni-plugin:v1.2.0
        image: harbor.top/google_containers/flannel:v0.22.1
        image: harbor.top/google_containers/flannel:v0.22.1

6. Apply the config file
[root@master flannel]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

7. Verify
[root@master flannel]# kubectl get namespace
NAME              STATUS   AGE
default           Active   17h
kube-flannel      Active   11s
kube-node-lease   Active   17h
kube-public       Active   17h
kube-system       Active   17h

[root@master flannel]# kubectl get pod -n kube-flannel
NAME                    READY   STATUS     RESTARTS   AGE
kube-flannel-ds-dzgfg   0/1     Init:0/2   0          44s
kube-flannel-ds-q6dhk   0/1     Init:0/2   0          44s
kube-flannel-ds-wdtpw   1/1     Running    0          44s

[root@master flannel]# kubectl get po -A
NAMESPACE      NAME                             READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-74wph            1/1     Running   0          2m13s
kube-flannel   kube-flannel-ds-9gz7p            1/1     Running   0          2m13s
kube-flannel   kube-flannel-ds-vfhrc            1/1     Running   0          2m13s
kube-system    coredns-7d5ccbb94b-rqt7g         1/1     Running   0          9m32s
kube-system    coredns-7d5ccbb94b-w6scb         1/1     Running   0          9m32s
kube-system    etcd-master                      1/1     Running   0          9m45s
kube-system    kube-apiserver-master            1/1     Running   0          9m45s
kube-system    kube-controller-manager-master   1/1     Running   0          9m45s
kube-system    kube-proxy-gqv8c                 1/1     Running   0          7m54s
kube-system    kube-proxy-rgbpl                 1/1     Running   0          9m33s
kube-system    kube-proxy-w9rdp                 1/1     Running   0          7m56s
kube-system    kube-scheduler-master            1/1     Running   0          9m45s


[root@master flannel]# k get nodes
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   23h   v1.27.4
node1    Ready    <none>          23h   v1.27.4
node2    Ready    <none>          9h    v1.27.4


To remove flannel later (skip this step)
kubectl delete -f kube-flannel.yml

Installation is now complete

Common commands

Run on master

# List all cluster nodes
kubectl get nodes

# Create cluster resources from a config file
kubectl apply -f xxxx.yaml

# What applications are deployed in the cluster? Similar to docker ps
# A running application is called a container in docker and a Pod in k8s
kubectl get pods -A 

# Refresh once per second
watch -n 1 kubectl get pod -A 

# Show pod details
kubectl describe pod  calico-node-rsmm8 --namespace=kube-system
kubectl describe pod  calico-kube-controllers-5cdbdf5595-dxnzd --namespace=kube-system
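To follow a pod's logs while debugging (the pod name below is just an example taken from above):

kubectl logs -f -n kube-system calico-node-rsmm8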

# Check certificate validity
[root@k8s-master ~]# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep ' Not '
            Not Before: Sep 23 06:46:53 2022 GMT
            Not After : Sep 23 06:46:53 2023 GMT
[root@k8s-master ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Sep 23, 2023 06:46 UTC   44d                                     no      
apiserver                  Sep 23, 2023 06:46 UTC   44d             ca                      no      
apiserver-kubelet-client   Sep 23, 2023 06:46 UTC   44d             ca                      no      
controller-manager.conf    Sep 23, 2023 06:46 UTC   44d                                     no      
front-proxy-client         Sep 23, 2023 06:46 UTC   44d             front-proxy-ca          no      
scheduler.conf             Sep 23, 2023 06:46 UTC   44d                                     no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Sep 20, 2032 06:46 UTC   9y              no      
front-proxy-ca          Sep 20, 2032 06:46 UTC   9y              no
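If the one-year certificates are close to expiry, kubeadm can renew them in place (a sketch; restart the control-plane static pods afterwards so they pick up the new certificates):

kubeadm certs renew all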

References

Official documentation

https://kubernetes.io/zh-cn/docs/concepts/overview/components/#node-components

王树林 (Wang Shulin), no-frills video walkthrough:
https://www.bilibili.com/video/BV1Ah4y1L7TY?p=11

Installation references:
尚硅谷 (Atguigu)
https://www.yuque.com/leifengyang/oncloud/ghnb83
https://www.bilibili.com/video/BV13Q4y1C7hS?p=32&spm_id_from=pageDriver&vd_source=a68414cd60fe26e829ce1cdd4d75a9e6

易文档 (EasyDoc):
https://k8s.easydoc.net/docs/dRiQjyTY/28366845/6GiNOzyZ/9EX8Cp45

Author: 海马 (Haima)   Created: 2023-07-09 11:01
Last edited by: 海马 (Haima)   Updated: 2024-03-24 09:51