K8s Deployment Tutorial

Preparation:

Create new virtual machines in VMware from your Ubuntu Server image, using the configuration below (at least three VMs). Installing with the username vagrant is recommended, so the rest of this tutorial can be followed as written.

| ID | VM Setting | Recommended | Default | Notes |
| --- | --- | --- | --- | --- |
| 1 | Processors | - | 2 | Minimum requirement |
| 2 | Memory | - | 4096 MB | Conserve memory |
| 3 | Display | Uncheck "Accelerate 3D Graphics" | Checked | Conserve memory |
| 4.1 | Network Adapter 1 | - | NAT | Internet access |
| 4.2 | Network Adapter 2 | - | Host-only | Fixed IP |
| 5 | Hard Disk | 40 GB | 20 GB | Enough capacity for the exercises |
| 6 | Firmware Type | UEFI | Legacy BIOS | VMware Fusion supports nested virtualization |

Resulting setup:

| ID | HOSTNAME | CPU Cores | RAM | DISK | NIC |
| --- | --- | --- | --- | --- | --- |
| 1 | k8s-master | 2 or more | 4 GB or more | 40 GB | 1. NAT / 2. Host-only |
| 2 | k8s-worker1 | Same as above | 2 GB or more | Same as above | Same as above |
| 3 | k8s-worker2 | Same as above | Same as above | Same as above | Same as above |

Prepare the environment:

1. [Optional] Configure passwordless sudo for the current user

[vagrant@k8s-master | k8s-worker1 | k8s-worker2]$
# Password of the current user
export USER_PASS=vagrant

# Cache the sudo password (vagrant)
echo ${USER_PASS} | sudo -S -v

# Make it permanent
sudo tee /etc/sudoers.d/$USER <<-EOF
$USER ALL=(ALL) NOPASSWD: ALL
EOF

sudo cat /etc/sudoers.d/$USER
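
To confirm passwordless sudo actually works, a quick check (not part of the original steps) is to drop the cached credentials and re-run sudo non-interactively:

# Invalidate cached credentials, then run a no-op without a password
sudo -k
sudo -n true && echo "passwordless sudo OK"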

2. [Optional] Set the root password

[vagrant@k8s-master | k8s-worker1 | k8s-worker2]$
export USER_PASS=vagrant

(echo ${USER_PASS}; echo ${USER_PASS}) \
  | sudo passwd root

sudo sed -i /etc/ssh/sshd_config \
  -e '/PasswordAuthentication /{s+#++;s+no+yes+}'

if egrep -q '^(#|)PermitRootLogin' /etc/ssh/sshd_config; then
  sudo sed -i /etc/ssh/sshd_config \
    -e '/^#PermitRootLogin/{s+#++;s+ .*+ yes+}' \
    -e '/^PermitRootLogin/{s+#++;s+ .*+ yes+}'
fi

sudo systemctl restart sshd
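
To verify that sshd picked up both changes, you can dump its effective configuration with sshd's test mode:

# Both options should now report "yes"
sudo sshd -T | grep -Ei 'permitrootlogin|passwordauthentication'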

3. [Optional] Set the timezone

[vagrant@k8s-master | k8s-worker1 | k8s-worker2]$
sudo timedatectl set-timezone Asia/Shanghai

4. [Optional] Use a package mirror inside China

[vagrant@k8s-master | k8s-worker1 | k8s-worker2]$
# C. Inside China (google.com unreachable)
if ! curl --connect-timeout 2 google.com &>/dev/null; then
  if [ "$(uname -m)" = "aarch64" ]; then
    # arm64
    export MIRROR_URL=http://mirror.nju.edu.cn/ubuntu-ports
  else
    # x86
    export MIRROR_URL=http://mirror.nju.edu.cn/ubuntu
  fi

  # Generate the apt sources list
  export CODE_NAME=$(lsb_release -cs)
  export COMPONENT="main restricted universe multiverse"
  sudo tee /etc/apt/sources.list >/dev/null <<-EOF
deb $MIRROR_URL $CODE_NAME $COMPONENT
deb $MIRROR_URL $CODE_NAME-updates $COMPONENT
deb $MIRROR_URL $CODE_NAME-backports $COMPONENT
deb $MIRROR_URL $CODE_NAME-security $COMPONENT
EOF
fi

cat /etc/apt/sources.list
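
Before relying on the new sources, a quick reachability check (assuming MIRROR_URL and CODE_NAME are still set from above) is to request the Release file for this release:

# Expect an HTTP 200 status line
curl -sI ${MIRROR_URL}/dists/${CODE_NAME}/Release | head -n 1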

5. <Required> Configure a static IP

[vagrant@k8s-master | k8s-worker1 | k8s-worker2]$
# Get the IP address(es)
export NICP=$(ip a | awk '/inet / {print $2}' | grep -v ^127)

if [ "$(echo ${NICP} | wc -w)" != "1" ]; then
  select IP1 in ${NICP}; do
    break
  done
else
  export IP1=${NICP}
fi
# Get the NIC name - use the 2nd NIC
export NICN=$(ip a | awk '/^3:/ {print $2}' | sed 's/://')

# Get the gateway
export NICG=$(ip route | awk '/^default/ {print $3}')

# Probe for reachable DNS servers
unset DNS; unset DNS1
for i in 114.114.114.114 8.8.4.4 8.8.8.8; do
  if nc -w 2 -zn $i 53 &>/dev/null; then
    export DNS1=$i
    export DNS="$DNS, $DNS1"
  fi
done

printf "
  addresses: \e[1;34m${IP1}\e[0;0m
  ethernets: \e[1;34m${NICN}\e[0;0m
  routes: \e[1;34m${NICG}\e[0;0m
  nameservers: \e[1;34m${DNS#, }\e[0;0m
"
# Update DNS (the stock resolved.conf ships the line commented out as #DNS=)
sudo sed -i /etc/systemd/resolved.conf \
  -e 's/^#\?DNS=.*/DNS='"$(echo ${DNS#, } | sed 's/,//g')"'/'
sudo systemctl restart systemd-resolved.service
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf

# Update the netplan config - static IP on the 2nd NIC
export NYML=/etc/netplan/50-vagrant.yaml
sudo tee ${NYML} <<EOF
network:
  ethernets:
    ${NICN}:
      addresses: [${IP1}]
  version: 2
  renderer: networkd
EOF
sudo netplan apply
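
After netplan apply, confirm the second NIC carries the static address and the gateway still answers (using the variables set above):

# The address in ${IP1} should appear on ${NICN}
ip -4 addr show ${NICN}

# The default gateway should respond
ping -c 2 ${NICG}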

6. <Required> Install required packages

sudo apt-get update && \
sudo apt-get -y install \
  openssh-server sshpass vim nfs-common \
  bash-completion netcat-openbsd iputils-ping

7. <Required> Disable swap

## Swap must be disabled for the kubelet to work properly

# Get the swap file name
export SWAPF=$(awk '/swap/ {print $1}' /etc/fstab)

# Disable immediately
sudo swapoff $SWAPF

# Disable permanently
sudo sed -i '/swap/d' /etc/fstab

# Delete the swap file
sudo rm $SWAPF

# Verify
free -h

Install K8s:

1. <Required> Install the container runtime

[vagrant@k8s-master | k8s-worker1 | k8s-worker2]$
## Create the apt repo file
export AFILE=/etc/apt/sources.list.d/docker.list

if ! curl --connect-timeout 2 google.com &>/dev/null; then
  # C. Inside China
  export AURL=http://mirror.nju.edu.cn/docker-ce
else
  # A. Outside China
  export AURL=http://download.docker.com
fi

sudo tee $AFILE >/dev/null <<-EOF
deb $AURL/linux/ubuntu $(lsb_release -cs) stable
EOF

# Import the GPG key
# http://download.docker.com/linux/ubuntu/gpg
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg \
  | sudo apt-key add -

# Work around the apt warning: "W: Key is stored in legacy trusted.gpg keyring"
sudo cp /etc/apt/trusted.gpg /etc/apt/trusted.gpg.d

# Install containerd
sudo apt-get update && \
sudo apt-get -y install containerd.io
# Generate the default config file
containerd config default \
  | sed -e '/SystemdCgroup/s+false+true+' \
      -e "/sandbox_image/s+3.6+3.9+" \
  | sudo tee /etc/containerd/config.toml

if ! curl --connect-timeout 2 google.com &>/dev/null; then
  # C. Inside China: add registry mirrors
  export M1='[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]'
  export M1e='endpoint = ["https://docker.nju.edu.cn"]'
  export M2='[plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]'
  export M2e='endpoint = ["https://quay.nju.edu.cn"]'
  export M3='[plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]'
  export M3e='endpoint = ["https://k8s.mirror.nju.edu.cn"]'
  sudo sed -i /etc/containerd/config.toml \
    -e "/sandbox_image/s+registry.k8s.io+registry.aliyuncs.com/google_containers+" \
    -e "/registry.mirrors/a\        $M1" \
    -e "/registry.mirrors/a\          $M1e" \
    -e "/registry.mirrors/a\        $M2" \
    -e "/registry.mirrors/a\          $M2e" \
    -e "/registry.mirrors/a\        $M3" \
    -e "/registry.mirrors/a\          $M3e"
fi

# Restart the service
sudo systemctl restart containerd
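
A quick way to confirm the runtime is healthy is the ctr client that ships with containerd (crictl is configured in a later step):

# The service should be active and ctr should print client and server versions
sudo systemctl is-active containerd
sudo ctr version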

2. <Required> Install kubeadm, kubelet, and kubectl

## Add the Kubernetes apt repository
sudo mkdir -p /etc/apt/keyrings

if ! curl --connect-timeout 2 google.com &>/dev/null; then
  # C. Inside China
  export AURL=http://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/deb
else
  # A. Outside China
  export AURL=http://pkgs.k8s.io/core:/stable:/v1.29/deb
fi
export KFILE=/etc/apt/keyrings/kubernetes-apt-keyring.gpg
curl -fsSL ${AURL}/Release.key \
  | sudo gpg --dearmor -o ${KFILE}
sudo tee /etc/apt/sources.list.d/kubernetes.list <<-EOF
deb [signed-by=${KFILE}] ${AURL} /
EOF

sudo apt-get -y update

# List all available patch versions
sudo apt-cache madison kubelet | grep 1.29

# Update the apt package index and install the packages needed to use the Kubernetes apt repository
# Install the exam versions of kubelet, kubeadm, and kubectl
sudo apt-get -y install \
  apt-transport-https ca-certificates curl gpg \
  kubelet=1.29.1-* kubeadm=1.29.1-* kubectl=1.29.1-*

# Pin the versions
sudo apt-mark hold kubelet kubeadm kubectl
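
You can confirm that the pinned versions were installed (the kubelet will not run properly until kubeadm init/join):

# All three should report v1.29.1
kubeadm version -o short
kubectl version --client
kubelet --version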

3. [Recommended] kubeadm command completion

[vagrant@k8s-master | k8s-worker1 | k8s-worker2]$
# Take effect immediately
source <(kubeadm completion bash)

# Make it permanent (/root/.kube is written to below, so create it here too)
[ ! -d /home/${LOGNAME}/.kube ] && mkdir /home/${LOGNAME}/.kube
sudo test ! -d /root/.kube && sudo mkdir /root/.kube

kubeadm completion bash \
  | tee /home/${LOGNAME}/.kube/kubeadm_completion.bash.inc \
  | sudo tee /root/.kube/kubeadm_completion.bash.inc

echo 'source ~/.kube/kubeadm_completion.bash.inc' \
  | tee -a /home/${LOGNAME}/.bashrc \
  | sudo tee -a /root/.bashrc

4. <Required> Configure the crictl command

[vagrant@k8s-master | k8s-worker1 | k8s-worker2]$
# Config file
sudo crictl config \
  --set runtime-endpoint=unix:///run/containerd/containerd.sock \
  --set image-endpoint=unix:///run/containerd/containerd.sock \
  --set timeout=10

# crictl tab completion
source <(crictl completion bash)

sudo test ! -d /root/.kube && sudo mkdir /root/.kube

crictl completion bash \
  | tee /home/${LOGNAME}/.kube/crictl_completion.bash.inc \
  | sudo tee /root/.kube/crictl_completion.bash.inc

echo 'source ~/.kube/crictl_completion.bash.inc' \
  | tee -a /home/${LOGNAME}/.bashrc \
  | sudo tee -a /root/.bashrc

# Add the user to the root group; takes effect after logging out and back in
sudo usermod -aG root ${LOGNAME}
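
A quick check that crictl reaches containerd through the configured endpoint:

# Should print the runtime name (containerd) and its version
sudo crictl version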

5. <Required> Pre-checks before running kubeadm

[vagrant@k8s-master | k8s-worker1 | k8s-worker2]$
## 1. Bridge
# [ERROR FileContent-.proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
sudo apt-get -y install bridge-utils && \
echo br_netfilter \
  | sudo tee /etc/modules-load.d/br.conf && \
sudo modprobe br_netfilter
## 2. Kernel support
# [ERROR FileContent-.proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
sudo tee /etc/sysctl.d/k8s.conf <<-EOF
net.ipv4.ip_forward=1
EOF

# Apply immediately
sudo sysctl -p /etc/sysctl.d/k8s.conf
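
Before running kubeadm, you can confirm the module is loaded and both kernel parameters are set:

# br_netfilter should be listed, and both values should be 1
lsmod | grep br_netfilter
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables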

6. <Required> Set the hostnames

[vagrant@k8s-master]$ sudo hostnamectl set-hostname k8s-master
[vagrant@k8s-worker1]$ sudo hostnamectl set-hostname k8s-worker1
[vagrant@k8s-worker2]$ sudo hostnamectl set-hostname k8s-worker2

7. <Required> Edit the hosts file

[vagrant@k8s-master | k8s-worker1 | k8s-worker2]$
# Show the IP address and hostname for easy copying
echo $(hostname -I) $(hostname)
sudo tee -a /etc/hosts >/dev/null <<EOF
# K8s-cluster
192.168.8.3 k8s-master
192.168.8.4 k8s-worker1
192.168.8.5 k8s-worker2
EOF

cat /etc/hosts
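
A short loop confirms that each entry resolves and is reachable from this node:

# Each hostname should resolve and answer one ping
for h in k8s-master k8s-worker1 k8s-worker2; do
  ping -c 1 $h
done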

Initialize the cluster:

1. <Required> Initialize with kubeadm init

[vagrant@k8s-master]$
export REGISTRY_MIRROR=registry.aliyuncs.com/google_containers
export POD_CIDR=172.16.0.0/16
export SERVICE_CIDR=172.17.0.0/18

if ! curl --connect-timeout 2 google.com &>/dev/null; then
  # C. Inside China
  sudo kubeadm config images pull \
    --kubernetes-version 1.29.1 \
    --image-repository ${REGISTRY_MIRROR}
  sudo kubeadm init \
    --kubernetes-version 1.29.1 \
    --apiserver-advertise-address=${IP1%/*} \
    --pod-network-cidr=${POD_CIDR} \
    --service-cidr=${SERVICE_CIDR} \
    --node-name=$(hostname -s) \
    --image-repository=${REGISTRY_MIRROR}
else
  # A. Outside China
  sudo kubeadm config images pull \
    --kubernetes-version 1.29.1
  sudo kubeadm init \
    --kubernetes-version 1.29.1 \
    --apiserver-advertise-address=${IP1%/*} \
    --pod-network-cidr=${POD_CIDR} \
    --service-cidr=${SERVICE_CIDR} \
    --node-name=$(hostname -s)
fi

Result:

[init] Using Kubernetes version: v1.29.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0612 04:18:15.295414 10552 images.go:80] could not find officially supported version of etcd for Kubernetes v1.29.1, falling back to the nearest etcd version (3.5.7-0)
W0612 04:18:30.785239 10552 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.8.3]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.8.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.8.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0612 04:18:33.894812 10552 images.go:80] could not find officially supported version of etcd for Kubernetes v1.29.1, falling back to the nearest etcd version (3.5.7-0)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.502471 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.8.3:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:49ff7a97c017153baee67c8f44fa84155b8cf06ca8cee067f766ec252cb8d1ac

2. <Required> Join the cluster with kubeadm join

[vagrant@k8s-worker1 | k8s-worker2]$
sudo \
    kubeadm join 192.168.8.3:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:49ff7a97c017153baee67c8f44fa84155b8cf06ca8cee067f766ec252cb8d1ac

3. <Required> Client - kubeconfig file

[vagrant@k8s-master]$
# - vagrant
sudo cat /etc/kubernetes/admin.conf \
  | tee /home/${LOGNAME}/.kube/config

# - root
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" \
  | sudo tee -a /root/.bashrc

4. <Required> Install the pod network

[vagrant@k8s-master]$
# 1. URL
if ! curl --connect-timeout 2 google.com &>/dev/null; then
  # C. Inside China
  export CURL=https://ghp.ci/raw.githubusercontent.com/projectcalico
else
  # A. Outside China
  export CURL=https://raw.githubusercontent.com/projectcalico
fi

# 2. FILE
export CFILE=calico/v3.26.4/manifests/calico.yaml

# 3. INSTALL
kubectl apply -f ${CURL}/${CFILE}
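
The calico-node DaemonSet takes a minute or two to start on every node; you can wait for it explicitly before checking node status:

# Block until every calico-node pod is ready (give up after 5 minutes)
kubectl -n kube-system rollout status daemonset/calico-node --timeout=5m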

5. [Recommended] Client - kubectl command completion

[vagrant@k8s-master | k8s-worker1 | k8s-worker2]$
# Take effect immediately
source <(kubectl completion bash)

# Make it permanent
kubectl completion bash \
  | tee /home/${LOGNAME}/.kube/completion.bash.inc \
  | sudo tee /root/.kube/completion.bash.inc

echo 'source ~/.kube/completion.bash.inc' \
  | tee -a /home/${LOGNAME}/.bashrc \
  | sudo tee -a /root/.bashrc

6. [Recommended] Client - kubectl alias

[vagrant@k8s-master | k8s-worker1 | k8s-worker2]$
# Take effect immediately
alias k='kubectl'
complete -F __start_kubectl k

# Make it permanent
cat <<-EOF \
  | tee -a /home/${LOGNAME}/.bashrc \
  | sudo tee -a /root/.bashrc
alias k='kubectl'
complete -F __start_kubectl k
EOF

7. Verify the environment is healthy

[vagrant@k8s-master]
$ kubectl get nodes
NAME          STATUS   ROLES           AGE     VERSION
k8s-master    Ready    control-plane   3m42s   v1.29.1
k8s-worker1   Ready    <none>          60s     v1.29.1
k8s-worker2   Ready    <none>          54s     v1.29.1

$ kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-786b679988-jqmtn   1/1     Running   0          2m58s
kube-system   calico-node-4sbwf                          1/1     Running   0          71s
kube-system   calico-node-cfq42                          1/1     Running   0          65s
kube-system   calico-node-xgmqv                          1/1     Running   0          2m58s
kube-system   coredns-7bdc4cb885-grj7m                   1/1     Running   0          3m46s
kube-system   coredns-7bdc4cb885-hjdkp                   1/1     Running   0          3m46s
kube-system   etcd-k8s-master                            1/1     Running   0          3m50s
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          3m50s
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          3m50s
kube-system   kube-proxy-2cgvn                           1/1     Running   0          3m46s
kube-system   kube-proxy-2zh5k                           1/1     Running   0          65s
kube-system   kube-proxy-nd5z8                           1/1     Running   0          71s
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          3m50s

$ kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-0               Healthy   {"health":"true","reason":""}
controller-manager   Healthy   ok
scheduler            Healthy   ok

Appendix:

1. Commands for joining a new node to the K8s cluster

  • Check the output of the kubeadm init command
  • Or recreate it with the kubeadm token create command
[vagrant@k8s-master]$
kubeadm token create --print-join-command
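
If you prefer not to copy-paste, one option (a small convenience sketch) is to capture the printed command and inspect it before running it on each worker:

# Store the full join command in a variable for reuse
export JOIN_CMD=$(kubeadm token create --print-join-command)
echo ${JOIN_CMD}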

2. Reset the environment with kubeadm reset

Reverts the changes made to this host by kubeadm init or kubeadm join.

[vagrant@k8s-master]$
echo y | sudo kubeadm reset

3. kubectl error: connection refused

The connection to the server localhost:8080 was refused - did you specify the right host or port?

[vagrant@k8s-worker2]$

# Create the directory
mkdir ~/.kube

# Copy the config file
scp root@k8s-master:/etc/kubernetes/admin.conf \
    ~/.kube/config

# Verify
kubectl get node

4. Deploying on RHEL

  • Disable firewalld
  • Disable SELinux
## firewalld
systemctl disable --now firewalld

## SELinux
sed -i '/^SELINUX=/s/=.*/=disabled/' \
    /etc/selinux/config
# Takes effect after a reboot
reboot
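
If you want SELinux out of the way without rebooting, you can also switch it to permissive mode for the current boot:

## Permissive immediately (the config file change above still needs the reboot)
setenforce 0
getenforce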
