- k8s deployment (1 master, 2 nodes)
- 1. Set hostnames and hosts resolution
- 2. Install dependency packages
- 3. Switch the firewall to iptables and flush its rules
- 4. Disable swap and SELinux
- 5. Enable IPVS
- 6. Tune kernel parameters
- 7. Set the system time zone
- 8. Configure rsyslogd and systemd-journald
- 9. Disable NetworkManager (needed for the Calico network plugin; unnecessary for Flannel)
- 10. Upgrade the system kernel to 4.4
- 11. Install Docker
- 12. Install kubeadm, kubelet, kubectl and other k8s components
- 13. Pull the images required by k8s
- 14. Initialize the master node (run on the master)
- 15. Join worker nodes to the cluster (run the join command from the init output on each node)
- 16. Deploy the flat pod network (run on the master)
- flannel installation
- calico installation
- 17. Tidy up the installation directory
k8s deployment (1 master, 2 nodes)
Servers used to deploy k8s must have more than 2 CPU cores.
Note: unless stated otherwise, run every command on all three machines.
1. Set hostnames and hosts resolution
Set the hostname on each of the three machines:
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
Check the hostname:
hostname
Add the hosts entries on all three machines:
Command:
vim /etc/hosts
Content:
192.168.100.10 k8s-master01
192.168.100.20 k8s-node01
192.168.100.30 k8s-node02
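The entries above can also be appended idempotently, which is safer when the step is re-run. A minimal sketch, assuming the IPs and hostnames from this guide (the `add_hosts` helper name is my own):

```shell
# add_hosts FILE appends each cluster entry to FILE only if it is not
# already present, so running the function twice adds nothing new.
add_hosts() {
    file="$1"
    while read -r entry; do
        grep -qF "$entry" "$file" || printf '%s\n' "$entry" >> "$file"
    done <<'EOF'
192.168.100.10 k8s-master01
192.168.100.20 k8s-node01
192.168.100.30 k8s-node02
EOF
}
```

On each machine you would call `add_hosts /etc/hosts`.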
2. Install dependency packages
Online installation:
yum -y install conntrack ntpdate ntp ipvsadm ipset iptables curl sysstat libseccomp wget vim net-tools git
Offline installation:
On a machine with Internet access, run:
mkdir /root/base
yum -y install conntrack ntpdate ntp ipvsadm ipset iptables curl sysstat libseccomp wget vim net-tools git iptables-services --downloadonly --downloaddir=/root/base
tar -zcvf base.tgz -C /root base
Export base.tgz and copy it to the servers being deployed.
After copying, run:
tar -zxvf base.tgz
cd base && rpm -Uvh ./*.rpm
3. Switch the firewall to iptables and flush its rules
Online installation:
systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
Offline installation (iptables-services is included in the offline base packages above):
systemctl stop firewalld && systemctl disable firewalld
systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
4. Disable swap and SELinux
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
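The sed expression above comments out only the swap line of /etc/fstab. Its effect can be seen on a two-line sample (the sample fstab content is illustrative, not from a real host):

```shell
# Feed a sample fstab through the same sed expression used above:
# only the line containing " swap " gets a leading '#'.
printf '%s\n' \
  'UUID=abcd / xfs defaults 0 0' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' \
  | sed '/ swap / s/^\(.*\)$/#\1/g'
# → UUID=abcd / xfs defaults 0 0
# → #/dev/mapper/centos-swap swap swap defaults 0 0
```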
5. Enable IPVS
modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Note:
#load the bridge netfilter module
modprobe br_netfilter
#check that the bridge netfilter module is loaded
lsmod | grep br_netfilter
#make the ipvs.modules script executable, then run it
chmod 755 /etc/sysconfig/modules/ipvs.modules
#verify the corresponding modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
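The verification step can be made explicit with a small checker sketch: given `lsmod`-style text, it prints every required module that is missing (the `missing_modules` name and the sample input are my own):

```shell
# missing_modules reads lsmod-style output on stdin and prints every
# required IPVS/conntrack module that does not appear in it.
missing_modules() {
    required="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4"
    input=$(cat)
    for m in $required; do
        printf '%s\n' "$input" | grep -q "^$m " || echo "$m"
    done
}
```

On a real host, `lsmod | missing_modules` should print nothing once all modules are loaded.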
6. Tune kernel parameters
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
7. Set the system time zone
timedatectl set-timezone Asia/Shanghai
timedatectl set-local-rtc 0
systemctl restart rsyslog
systemctl restart crond
8. Configure rsyslogd and systemd-journald
mkdir -p /var/log/journal
mkdir -p /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
Storage=persistent
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
SystemMaxUse=10G
SystemMaxFileSize=200M
MaxRetentionSec=2week
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
9. Disable NetworkManager (needed when installing the Calico network plugin; unnecessary for Flannel)
systemctl disable NetworkManager
systemctl stop NetworkManager
10. Upgrade the system kernel to 4.4
Online installation:
Install the ELRepo yum repo:
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# After installation, check that the menuentry for the new kernel in /boot/grub2/grub.cfg contains an initrd16 line; if it does not, install again!
Install the 4.4 kernel from ELRepo (kernel-lt is the long-term branch, currently 4.4):
yum --enablerepo=elrepo-kernel install -y kernel-lt
Offline installation:
Copy the elrepo release package and the kernel package to the server, then install them:
rpm -Uvh elrepo-release-7.0-3.el7.elrepo.noarch.rpm
rpm -Uvh kernel-lt-4.4.232-1.el7.elrepo.x86_64.rpm
After installation, set the new kernel as the default boot entry (adjust to the installed version; here it is 4.4.232):
grub2-set-default 'CentOS Linux (4.4.232-1.el7.elrepo.x86_64) 7 (Core)'
After the steps above, reboot the machine.
Once it is back up, run uname -r to confirm the kernel is the 4.4 series.
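The final check can be scripted. A sketch that tests a `uname -r` string against the expected 4.4 series (the `kernel_ok` helper name is my own; the 4.4 prefix comes from this guide):

```shell
# kernel_ok succeeds only if a kernel release string is in the 4.4 series.
kernel_ok() {
    case "$1" in
        4.4.*) return 0 ;;
        *)     return 1 ;;
    esac
}
```

On the rebooted machine: `kernel_ok "$(uname -r)" && echo "kernel upgraded"`.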
11. Install Docker
Online installation:
1. Install the yum utilities:
yum install -y yum-utils
2. Add the yum repo:
yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
3. Install a Docker long-term stable release:
yum -y install --setopt=obsoletes=0 docker-ce-19.03.9-3.el7
4. Create the /etc/docker directory (it usually exists already; create it if not):
mkdir /etc/docker
5. Configure daemon.json:
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://34z7nf41.mirror.aliyuncs.com"],
  "data-root": "/data/docker",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "4"
  }
}
EOF
6. Enable the docker service at boot and reboot the server:
systemctl enable docker && reboot
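Before restarting, it is worth confirming that daemon.json is valid JSON, since a syntax error prevents the daemon from starting. One sketch, assuming python3 is available (the content is inlined here so the check is self-contained; on a real host you would run `python3 -m json.tool /etc/docker/daemon.json`):

```shell
# python3 -m json.tool exits non-zero on a JSON syntax error,
# so the message only prints when the document parses cleanly.
printf '%s' '{"exec-opts": ["native.cgroupdriver=systemd"], "data-root": "/data/docker"}' \
  | python3 -m json.tool > /dev/null && echo "daemon.json: valid JSON"
# → daemon.json: valid JSON
```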
Offline installation:
1. Docker download address:
https://download.docker.com/linux/static/stable/x86_64/
2. Extract the archive and copy the binaries into place:
tar -zxvf xxx.tgz
cp -a docker/* /usr/bin/
3. Write the systemd service unit file:
vim /etc/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
4. Set permissions on the service file (systemd unit files should not be executable):
chmod 644 /etc/systemd/system/docker.service
5. Configure the registry mirror:
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://34z7nf41.mirror.aliyuncs.com"],
  "data-root": "/data/docker",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "4"
  }
}
EOF
After editing, reload systemd and start Docker:
systemctl daemon-reload
systemctl start docker
systemctl enable docker.service
12. Install kubeadm, kubelet, kubectl and other k8s components
These three components must be kept at the same version.
Online installation:
First add the Kubernetes yum repo:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Check the available kubeadm versions:
yum list kubeadm --showduplicates
This guide uses version 1.23.17-0 (pick the version you need):
yum install -y kubelet-1.23.17-0 kubectl-1.23.17-0 kubeadm-1.23.17-0
Enable kubelet at boot:
systemctl enable kubelet
Offline installation:
On a machine with Internet access, download the RPMs (then copy /root/kubeadm to the target server and install them with rpm -Uvh ./*.rpm):
yum install -y kubelet-1.23.17-0 kubectl-1.23.17-0 kubeadm-1.23.17-0 --downloadonly --downloaddir=/root/kubeadm
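Since the three components must be at the same version, a small sketch that checks the installed versions agree (the `versions_match` helper name is my own; on a real host you would feed it, e.g., the output of `rpm -q --qf '%{VERSION}\n' kubelet kubeadm kubectl`):

```shell
# versions_match succeeds only when all of its arguments are identical.
versions_match() {
    first="$1"
    for v in "$@"; do
        [ "$v" = "$first" ] || return 1
    done
}
```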
13. Pull the images required by k8s
Check which image versions are needed:
Command:
kubeadm config images list
Output:
registry.k8s.io/kube-apiserver:v1.23.17
registry.k8s.io/kube-controller-manager:v1.23.17
registry.k8s.io/kube-scheduler:v1.23.17
registry.k8s.io/kube-proxy:v1.23.17
registry.k8s.io/pause:3.6
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.8.6
Based on the output above, write a script images.sh with the content below. It pulls the images from an Aliyun mirror and retags them with the registry.k8s.io/ prefix so kubeadm can recognize and use them.
Note: the Aliyun mirror names the coredns image coredns:v1.8.6, while Docker Hub's official repository names it coredns/coredns:1.8.6; the script therefore handles it separately and retags it as registry.k8s.io/coredns/coredns:v1.8.6.
#!/bin/bash
images=(
kube-apiserver:v1.23.17
kube-controller-manager:v1.23.17
kube-scheduler:v1.23.17
kube-proxy:v1.23.17
pause:3.6
etcd:3.5.6-0
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName} registry.k8s.io/${imageName}
  docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
done
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6 registry.k8s.io/coredns/coredns:v1.8.6
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
Make the script executable and run it:
chmod +x images.sh
./images.sh
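The retagging done by images.sh is essentially a registry-prefix swap. The mapping can be seen in isolation (the `retag` helper name is my own; the sample image name is from the list above):

```shell
# Map an Aliyun mirror image name to the registry.k8s.io name that kubeadm
# expects, the same rewrite the docker tag lines in images.sh perform.
retag() {
    printf '%s\n' "$1" \
      | sed 's#^registry.cn-hangzhou.aliyuncs.com/google_containers/#registry.k8s.io/#'
}
retag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.17
# → registry.k8s.io/kube-proxy:v1.23.17
```

The coredns image additionally gains a coredns/ path segment, which the script handles as a special case.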
Package the images (for offline deployment):
docker save -o k8simages.tar $(docker images | grep registry.k8s.io | awk '{print $1":"$2}' | xargs)
14. Initialize the master node (run on the master)
kubeadm config print init-defaults > kubeadm-config.yaml
Edit the kubeadm-config.yaml file:
vim kubeadm-config.yaml
localAPIEndpoint:
  advertiseAddress: 192.168.100.10 #change to the master node IP
nodeRegistration:
  name: k8s-master01 #change to the master node hostname
---
kubernetesVersion: v1.23.17 #must match the image version pulled earlier
networking:
  podSubnet: "172.100.0.0/16" #custom pod IP pool; must not overlap existing networks
  serviceSubnet: 10.96.0.0/12
Paste the following at the end of the file to switch kube-proxy to IPVS mode; the three dashes must be pasted as well.
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
Initialize, teeing the output to a log file:
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
Note: with kubeadm 1.15 the flag was --experimental-upload-certs;
from 1.16 onward (1.16 included) it is --upload-certs.
If initialization fails, run
kubeadm reset -f
and then initialize again.
On success, the output will show:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.100.10:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:e0bd674e3fb9551d2e55e7bc37ea103364338b73cfc963dc6a70f2d7c23ef90b
Following the success output, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
15. Join worker nodes to the cluster (on each node, run the join command from the successful init output)
kubeadm join 192.168.100.10:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:e0bd674e3fb9551d2e55e7bc37ea103364338b73cfc963dc6a70f2d7c23ef90b
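If the hash from the init output is lost, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA. A sketch of the standard openssl pipeline (on a real master the certificate is /etc/kubernetes/pki/ca.crt; here a throwaway self-signed certificate stands in so the example is self-contained):

```shell
# Generate a stand-in CA certificate, then derive the sha256 discovery hash
# from its public key -- the same pipeline you would run against
# /etc/kubernetes/pki/ca.crt on the master.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=kubernetes' -days 1 \
    -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

A fresh join token (with the hash filled in) can also be minted on the master with `kubeadm token create --print-join-command`.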
16. Deploy the flat pod network (run on the master)
Note: keep only one NIC on each machine, because the network plugin binds to a NIC device and by default binds the highest-numbered one.
flannel installation
1. Import the image file flannel_v0.12.0-amd64.tar and load it:
docker load -i flannel_v0.12.0-amd64.tar
2. Copy in the kube-flannel.yml file, modify the Network value, then apply it.
Edit the Network value:
vim kube-flannel.yml
...
net-conf.json: |
  {
    "Network": "172.100.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
...
Set "Network" to the podSubnet from kubeadm-config.yaml. Do not put a # comment inside this JSON block: inside a literal block scalar it would become part of the value.
Apply the manifest to create the pods:
kubectl apply -f kube-flannel.yml
3. Check the status:
kubectl get pods -n kube-system -o wide
#wait for the pods to become Ready
kubectl get nodes
#the deployment has succeeded once every node's STATUS is Ready
calico installation
Official site: https://docs.projectcalico.org
This guide uses version 3.25.1.
1. Install the Tigera Calico operator and custom resource definitions:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/tigera-operator.yaml
2. Install Calico by creating the necessary custom resources:
wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/custom-resources.yaml
Edit the CIDR:
vim custom-resources.yaml
spec:
  calicoNetwork:
    ipPools:
    ...
      cidr: 172.100.0.0/16 #change to the podSubnet used when initializing k8s
...
Apply it:
kubectl create -f custom-resources.yaml
3. Check the status:
kubectl get pods -n calico-system -o wide
#wait for the pods to become Ready (the operator-based install runs the Calico pods in the calico-system namespace)
kubectl get nodes
#the deployment has succeeded once every node's STATUS is Ready
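The final Ready check can also be scripted. This sketch parses `kubectl get nodes`-style table output (the `all_nodes_ready` helper name and sample text are my own):

```shell
# all_nodes_ready reads `kubectl get nodes` output on stdin and succeeds
# only when every node's STATUS column (field 2) is exactly "Ready".
all_nodes_ready() {
    tail -n +2 | awk '{ if ($2 != "Ready") bad = 1 } END { exit bad }'
}
```

Usage on the master: `kubectl get nodes | all_nodes_ready && echo "cluster ready"`.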
17. Tidy up the installation directory
mkdir -pv /usr/local/kubernetes/install
mv kubeadm-init.log kubeadm-config.yaml images.sh kubernetes.conf /usr/local/kubernetes/install
mkdir -pv /usr/local/kubernetes/cni
mv custom-resources.yaml tigera-operator.yaml /usr/local/kubernetes/cni
Last edited by: 海马 · Updated: 2024-08-03 21:41