
Getting to know Kubernetes: installing k8s with kubeadm

1. What is Kubernetes?

Kubernetes (k8s for short, because there are 8 letters between the k and the s) is an open-source container orchestration system, developed by Google and donated to the Cloud Native Computing Foundation (CNCF). It automates the deployment, scaling, and management of containerized applications, solving complex problems such as resource scheduling, service discovery, and load balancing in large container clusters.

A plain analogy: if containers are "shipping containers", Kubernetes is the "port management system" that efficiently schedules and manages their storage, transport, and allocation.

2. Core features of Kubernetes

Automated deployment and rollback

Deploy applications quickly, with version control; when a release goes wrong, it can automatically roll back to a stable version.

Example: update a microservice version in one step, without restarting the service by hand.

Elastic scaling

Automatically adjust the number of container instances based on load (e.g. scale out when CPU utilization exceeds 80%).

Example: automatically add instances during a big e-commerce promotion and scale back down afterwards, as the sketch below shows.
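A minimal sketch of such a rule, assuming a Deployment named web already exists (the name and replica bounds are illustrative):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2              # keep at least two instances
  maxReplicas: 10             # cap the scale-out
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale out above 80% average CPU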

Service discovery and load balancing

Automatically route traffic to healthy container instances, with support for multiple load-balancing strategies.

Example: user requests are distributed across multiple backend service instances, avoiding a single point of failure.

Storage orchestration

Dynamically mount persistent storage (such as NFS or cloud storage) into containers.

Example: a database container automatically mounts a cloud disk, so its data persists and is not lost.

Self-healing

Detect container failures and automatically restart or replace the affected containers to keep the service available.

Example: after a container crashes, k8s automatically brings up a new instance.

Configuration and secret management

Centrally manage application configuration files and sensitive data (such as database passwords).

Example: inject configuration dynamically through a ConfigMap and a Secret, without rebuilding the image; see the sketch below.
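A minimal sketch, assuming a Secret named app-secret with a password key already exists (all names are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: LOG_LEVEL              # plain config from the ConfigMap
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: LOG_LEVEL
    - name: DB_PASSWORD            # sensitive value from the Secret
      valueFrom:
        secretKeyRef:
          name: app-secret
          key: password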

3. Installing with kubeadm

The commands in Section 3 are executed on all three machines; Xshell can send input to all three sessions at once.

Three Ubuntu-22.04-live-server systems:

192.168.114.110    master

192.168.114.120    node1

192.168.114.130    node2

Disable the firewall (run on all three)

[root@master ~]#:systemctl stop ufw 

[root@master ~]#:systemctl disable --now ufw
Synchronizing state of ufw.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable ufw

[root@master ~]#:

3.1 Preliminary preparation

Disable the swap partition

swapoff -a 
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
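To confirm swap is fully off, a quick check:

free -h | grep -i swap    # the Swap line should show 0B
swapon --show             # should print nothing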

3.2 Install containerd

Install containerd. If no version is specified, the latest one is installed. Available versions can be listed with apt-cache madison containerd; to pin a version, use apt install -y containerd=1.7.12 (use the full version string exactly as madison prints it).

apt install -y containerd

Create the configuration directory

mkdir /etc/containerd

The configuration file is config.toml (TOML format). Generate the defaults:

containerd config default > /etc/containerd/config.toml

Modify the address of the pause base image (around line 65 in this version's default config), enable the systemd cgroup driver (around line 137), and add two lines for a registry mirror below the registry.mirrors section (around line 168/170; exact line numbers vary between containerd versions). The items to change:

vim /etc/containerd/config.toml
......
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
......
    SystemdCgroup = true
......
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = ["https://[your-accelerator-address].mirror.aliyuncs.com"]

Restart the service: systemctl restart containerd
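For repeatability, the first two edits can also be applied with sed (a sketch, assuming the stock default config where SystemdCgroup is false; the mirror block still has to be added by hand because its position varies):

sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml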

3.3 Install crictl

Prepare the crictl package.

Create the directory; -C specifies where to extract

mkdir /usr/local/bin/crictl
tar xf crictl-v1.29.0-linux-amd64.tar.gz -C /usr/local/bin/crictl

Append the environment variable at the end

vim /etc/profile
......
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/bin/crictl

Check the version.
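A quick check, reloading /etc/profile so the new PATH takes effect in the current shell:

source /etc/profile
crictl --version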

Configure crictl

vim /etc/crictl.yaml
runtime-endpoint: "unix:///run/containerd/containerd.sock"
image-endpoint: "unix:///run/containerd/containerd.sock"
timeout: 10
debug: false

3.4 Install the nerdctl tool

Prepare the nerdctl-1.7.6-linux-amd64.tar.gz package. Extract it to /usr/local/bin/.

[root@master data]#:ls
crictl-v1.29.0-linux-amd64.tar.gz  nerdctl-1.7.6-linux-amd64.tar.gz
[root@master data]#:
[root@master data]#:tar xf nerdctl-1.7.6-linux-amd64.tar.gz -C /usr/local/bin/

Create the configuration directory and write the configuration file.

mkdir /etc/nerdctl
vim /etc/nerdctl/nerdctl.toml
namespace = "k8s.io"
debug = false
debug_full = false
insecure_registry = true

List the images.

[root@master data]#:nerdctl images
REPOSITORY    TAG    IMAGE ID    CREATED    PLATFORM    SIZE    BLOB SIZE
[root@master data]#:

3.5 Install the CNI tools (plugins)

What CNI is: the Container Network Interface plugins provide bridged networking for containers; without CNI installed, containers only have host network mode.

Import the package cni-plugins-linux-amd64-v1.5.1.tgz

Create the target directory and extract into it.

[root@master data]#:ls
cni-plugins-linux-amd64-v1.5.1.tgz  crictl-v1.29.0-linux-amd64.tar.gz  nerdctl-1.7.6-linux-amd64.tar.gz
[root@master data]#:
[root@master data]#:mkdir /opt/cni/bin -p
[root@master data]#:tar xf cni-plugins-linux-amd64-v1.5.1.tgz -C /opt/cni/bin/
[root@master data]#:

Test that it works:

Load an nginx image tarball into the local image store.

Run a container directly from the command line with a port exposed; enter the container, start the nginx service, and access it.

From a worker node, access port 8080 exposed on the master. A sketch of these steps follows.
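A sketch of those steps, assuming nginx.tar.gz contains the image that is removed with rmi below (the container name and port mapping are illustrative):

nerdctl load -i nginx.tar.gz                  # load the image tarball into the local store
nerdctl run -d --name test -p 8080:80 harbor.hiuiu.com/basic_image/centos7_filebeat_nginx:2408.u
nerdctl exec -it test /bin/bash               # inside the container: start nginx, then exit
curl http://192.168.114.110:8080              # run from node1/node2 to reach the master
nerdctl rm -f test                            # remove the test container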

Here the image is removed with rmi

[root@master data]#:nerdctl rmi harbor.hiuiu.com/basic_image/centos7_filebeat_nginx:2408.u
[root@master data]#:nerdctl images 
REPOSITORY    TAG       IMAGE ID        CREATED          PLATFORM       SIZE         BLOB SIZE
<none>        <none>    66bf78b3037a    4 minutes ago    linux/amd64    844.6 MiB    289.8 MiB
[root@master data]#:nerdctl rmi 6        # a unique image-ID prefix is enough
[root@master data]#:nerdctl images 
REPOSITORY    TAG    IMAGE ID    CREATED    PLATFORM    SIZE    BLOB SIZE
[root@master data]#:nerdctl ps -a 
CONTAINER ID    IMAGE    COMMAND    CREATED    STATUS    PORTS    NAMES
[root@master data]#:

3.6 Initialize the k8s environment

3.6.1 Install basic software

[root@master data]#:apt install -y chrony ipvsadm tree ipset

3.6.2 Load kernel modules

[root@master data]#:modprobe br_netfilter && lsmod | grep br_netfilter
br_netfilter           32768  0
bridge                311296  1 br_netfilter
[root@master data]#:modprobe ip_conntrack && lsmod | grep conntrack
nf_conntrack          172032  0
nf_defrag_ipv6         24576  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,btrfs,raid456
[root@master data]#:

Configure the modules to load at boot

[root@master data]#:vim /etc/modules-load.d/modules.conf
ip_vs
ip_vs_lc
ip_vs_lblc
ip_vs_lblcr
ip_vs_rr
ip_vs_wrr
ip_vs_sh
ip_vs_dh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
ip_tables
ip_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
xt_set
br_netfilter
nf_conntrack
overlay

Restart the module-loading service

[root@master data]#:systemctl restart systemd-modules-load.service
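To spot-check that the configured modules are actually loaded after the restart:

lsmod | grep -e ip_vs -e nf_conntrack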

3.6.3 Adjust kernel parameters

[root@master data]#:vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
vm.max_map_count = 262144
fs.file-max = 1000000
net.ipv4.tcp_max_tw_buckets = 6000
net.netfilter.nf_conntrack_max = 2097152
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
[root@master data]#:sysctl -p
net.ipv4.ip_forward = 1
vm.max_map_count = 262144
sysctl: cannot stat /proc/sys/Kernel/pid_max: No such file or directory
fs.file-max = 1000000
net.ipv4.tcp_max_tw_buckets = 6000
net.netfilter.nf_conntrack_max = 2097152
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
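The cannot stat error means the pid_max key was written with a capital K (its line is not shown above); sysctl keys are case-sensitive and all lowercase. A sketch of the fix, whichever spelling was used:

sed -i 's#^Kernel[./]pid_max#kernel.pid_max#' /etc/sysctl.conf    # lowercase the key
sysctl -p                                                         # re-apply; the error should be gone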

If the machines are cloned virtual machines, this id is identical to the source machine's; delete it and regenerate a different one. If they are not clones, skip this step.

cat /etc/machine-id         # view
rm -rf /etc/machine-id      # delete
systemd-machine-id-setup    # regenerate
# cloned systems share the same machine-id, so it must be deleted and recreated

3.7 Install k8s

From here on, commands state which node they run on; if unspecified, run them on all three as before. This part mainly installs kubeadm, kubelet, and kubectl.

Install the system dependencies.

[root@master data]#:apt install -y apt-transport-https ca-certificates curl gpg
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
ca-certificates is already the newest version (20240203~22.04.1).
ca-certificates set to manually installed.
curl is already the newest version (7.81.0-1ubuntu1.20).
curl set to manually installed.
gpg is already the newest version (2.2.27-3ubuntu2.3).
gpg set to manually installed.
The following NEW packages will be installed:
  apt-transport-https
......
......
Running kernel seems to be up-to-date.
No services need to be restarted.
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
[root@master data]#:

3.7.1 Install kubeadm, kubelet, kubectl

When installing k8s, the /etc/apt/keyrings directory is created to securely store the GPG signing key for the Kubernetes software repository. It is the APT package manager's default keyring directory, dedicated to repository public-key files (with a .gpg suffix); using it also avoids permission problems.

[root@master data]#:mkdir -p -m 755 /etc/apt/keyrings
[root@master data]#:curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg        # the key is fetched from the official address
[root@master data]#:

Add the official Kubernetes repository to the system's APT configuration, using the Aliyun mirror as the download source, with GPG signature verification enabled to ensure package integrity.

kubeadm, kubelet, and kubectl are the three core k8s tools; they are not in the default system repositories and must be installed from the official repository or a mirror.

First configure the Aliyun mirror source.

[root@master data]#:echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/ /
[root@master data]#:

Update the package lists with apt update

[root@master data]#:apt update 
Hit:1 http://mirrors.aliyun.com/ubuntu jammy InRelease
Hit:2 http://mirrors.aliyun.com/ubuntu jammy-updates InRelease
Hit:3 http://mirrors.aliyun.com/ubuntu jammy-backports InRelease
Hit:4 http://mirrors.aliyun.com/ubuntu jammy-security InRelease
Get:5 https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  InRelease [1192 B]
Get:6 https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  Packages [20.3 kB]
Fetched 21.5 kB in 1s (17.2 kB/s)  
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
123 packages can be upgraded. Run 'apt list --upgradable' to see them.
[root@master data]#:

List the available kubeadm versions.

[root@master data]#:apt-cache madison kubeadm
kubeadm | 1.30.14-1.1 | https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  Packages
kubeadm | 1.30.13-1.1 | https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  Packages
kubeadm | 1.30.12-1.1 | https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  Packages
kubeadm | 1.30.11-1.1 | https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  Packages
kubeadm | 1.30.10-1.1 | https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  Packages
kubeadm | 1.30.9-1.1 | https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  Packages
kubeadm | 1.30.8-1.1 | https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  Packages
kubeadm | 1.30.7-1.1 | https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  Packages
kubeadm | 1.30.6-1.1 | https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  Packages
kubeadm | 1.30.5-1.1 | https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  Packages
kubeadm | 1.30.4-1.1 | https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  Packages
kubeadm | 1.30.3-1.1 | https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  Packages
kubeadm | 1.30.2-1.1 | https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  Packages
kubeadm | 1.30.1-1.1 | https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  Packages
kubeadm | 1.30.0-1.1 | https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  Packages
[root@master data]#:

We install version 1.30.3-1.1 on all nodes.

apt-get install -y kubelet=1.30.3-1.1 kubeadm=1.30.3-1.1 kubectl=1.30.3-1.1

[root@master data]#:apt-get install -y kubelet=1.30.3-1.1 kubeadm=1.30.3-1.1 kubectl=1.30.3-1.1
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  conntrack cri-tools ebtables kubernetes-cni socat
The following NEW packages will be installed:
  conntrack cri-tools ebtables kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 8 newly installed, 0 to remove and 123 not upgraded.
Need to get 93.9 MB of archives.
After this operation, 343 MB of additional disk space will be used.
Get:1 http://mirrors.aliyun.com/ubuntu jammy/main amd64 conntrack amd64 1:1.4.6-2build2 [33.5 kB]
Get:2 http://mirrors.aliyun.com/ubuntu jammy/main amd64 ebtables amd64 2.0.11-4build2 [84.9 kB]
Get:3 http://mirrors.aliyun.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB]
Get:4 https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  cri-tools 1.30.1-1.1 [21.3 MB]
Get:5 https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  kubeadm 1.30.3-1.1 [10.4 MB]                         
Get:6 https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  kubectl 1.30.3-1.1 [10.8 MB]                         
Get:7 https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  kubernetes-cni 1.4.0-1.1 [32.9 MB]                   
Get:8 https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb  kubelet 1.30.3-1.1 [18.1 MB]
......
......
Running kernel seems to be up-to-date.
No services need to be restarted.
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
[root@master data]#:

Check the versions of the three tools. (The connection refused message from kubectl version below is expected at this stage: the cluster API server is not running yet.)

[root@master data]#:kubeadm version 
kubeadm version: &version.Info{Major:"1", Minor:"30", GitVersion:"v1.30.3", GitCommit:"6fc0a69044f1ac4c13841ec4391224a2df241460", GitTreeState:"clean", BuildDate:"2024-07-16T23:53:15Z", GoVersion:"go1.22.5", Compiler:"gc", Platform:"linux/amd64"}
[root@master data]#:
[root@master data]#:kubectl version 
Client Version: v1.30.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@master data]#:
[root@master data]#:kubelet --version 
Kubernetes v1.30.3
[root@master data]#:

3.7.2 Create the master

Restart containerd.

[root@master data]#:systemctl restart containerd
[root@master data]#:

Create the master node. Initialization is executed on the master only! The flags are annotated below; strip the comments before copying (the exact command as run is shown right after).

kubeadm init \
  --apiserver-advertise-address=192.168.114.110 \                # IP the API server advertises
  --apiserver-bind-port=6443 \                                   # port the API server listens on
  --kubernetes-version=v1.30.3 \                                 # Kubernetes version
  --pod-network-cidr=10.10.0.0/16 \                              # pod network address range
  --service-cidr=10.20.0.0/16 \                                  # service network address range
  --service-dns-domain=cluster.local \                           # cluster DNS domain
  --image-repository=registry.aliyuncs.com/google_containers \   # image repository
  --ignore-preflight-errors=swap                                 # ignore the swap preflight check

[root@master data]#:kubeadm init --apiserver-advertise-address=192.168.114.110 --apiserver-bind-port=6443 --kubernetes-version=v1.30.3 --pod-network-cidr=10.10.0.0/16 --service-cidr=10.20.0.0/16 --service-dns-domain=cluster.local --image-repository=registry.aliyuncs.com/google_containers  --ignore-preflight-errors=swap
[init] Using Kubernetes version: v1.30.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.20.0.1 192.168.114.110]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.114.110 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.114.110 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.004874578s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 6.504756142s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: wzla5n.qrc1e3v4ndqtl06i
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.114.110:6443 --token wzla5n.qrc1e3v4ndqtl06i \
        --discovery-token-ca-cert-hash sha256:195c9d97012c0c9222b1cd2485b32c3131f0ab59213a605599fdfc0964edb247
[root@master data]#:

Supplementary note: if initialization fails, after removing images by their unique IDs, any remaining unused image layers can be pruned with the following command.

[root@master data]#:nerdctl image prune -f
[root@master data]#:

At this point only images whose repository and tag are <none> remain; delete them directly by image ID.

[root@master data]#:nerdctl images 
REPOSITORY    TAG       IMAGE ID        CREATED           PLATFORM       SIZE         BLOB SIZE
<none>        <none>    3c66304a30ef    23 minutes ago    linux/amd64    114.3 MiB    31.3 MiB
<none>        <none>    97a7bb219855    21 minutes ago    linux/amd64    146.3 MiB    54.6 MiB
<none>        <none>    57bb58420452    22 minutes ago    linux/amd64    62.3 MiB     18.4 MiB
<none>        <none>    f35091eb7d32    23 minutes ago    linux/amd64    109.1 MiB    29.7 MiB
<none>        <none>    7031c1b28338    22 minutes ago    linux/amd64    732.0 KiB    314.0 KiB
[root@master data]#:nerdctl images -q 
3c66304a30ef
97a7bb219855
57bb58420452
f35091eb7d32
7031c1b28338
[root@master data]#:nerdctl rmi `nerdctl images -q`

 

3.7.3 Join the worker nodes to the cluster

When initialization completes, the master reports that the control plane initialized successfully and prints the commands required to start using the cluster; we run them directly as root.

On the master, execute the following to give the kubectl command-line tool administrator access, so that it can talk to the cluster's control plane (API server).

[root@master data]#:mkdir -p $HOME/.kube
[root@master data]#:cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master data]#:chown $(id -u):$(id -g) $HOME/.kube/config
[root@master data]#:export KUBECONFIG=/etc/kubernetes/admin.conf
[root@master data]#:

mkdir -p $HOME/.kube        # create the .kube directory that holds kubectl's configuration

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config        # copy the k8s admin config to kubectl's default path

chown $(id -u):$(id -g) $HOME/.kube/config        # change the owner so the current user can access it

export KUBECONFIG=/etc/kubernetes/admin.conf        # temporary environment variable; ~/.kube/config is what persists
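To make the environment-variable form survive new login shells (optional, since ~/.kube/config already covers kubectl), one could append it to the shell profile:

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc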

On the two worker nodes, run the following to join them to the cluster managed by the existing Kubernetes master, making them worker nodes that run containers and pods.

[root@node1 data]#:kubeadm join 192.168.114.110:6443 --token wzla5n.qrc1e3v4ndqtl06i \
> --discovery-token-ca-cert-hash sha256:195c9d97012c0c9222b1cd2485b32c3131f0ab59213a605599fdfc0964edb247
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.002499283s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node1 data]#:
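If the bootstrap token has expired by the time a node joins (tokens are valid for 24 hours by default), a fresh join command can be printed on the master:

kubeadm token create --print-join-command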

Check the cluster state on the master

# list images
[root@master data]#:nerdctl images | grep -v none
REPOSITORY                                                         TAG         IMAGE ID        CREATED           PLATFORM       SIZE         BLOB SIZE
registry.aliyuncs.com/google_containers/coredns                    v1.11.1     a6b67bdb2a67    25 minutes ago    linux/amd64    60.9 MiB     17.3 MiB
registry.aliyuncs.com/google_containers/etcd                       3.5.12-0    97a7bb219855    24 minutes ago    linux/amd64    146.3 MiB    54.6 MiB
registry.aliyuncs.com/google_containers/kube-apiserver             v1.30.3     3c66304a30ef    27 minutes ago    linux/amd64    114.3 MiB    31.3 MiB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.30.3     f35091eb7d32    26 minutes ago    linux/amd64    109.1 MiB    29.7 MiB
registry.aliyuncs.com/google_containers/kube-proxy                 v1.30.3     08e5c999c0a7    25 minutes ago    linux/amd64    84.4 MiB     27.7 MiB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.30.3     57bb58420452    26 minutes ago    linux/amd64    62.3 MiB     18.4 MiB
registry.aliyuncs.com/google_containers/pause                      3.9         7031c1b28338    24 minutes ago    linux/amd64    732.0 KiB    314.0 KiB
[root@master data]#:
# check node status
[root@master data]#:kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   Ready      control-plane   22m     v1.30.3
node1    NotReady   <none>          4m47s   v1.30.3
node2    NotReady   <none>          4m46s   v1.30.3
[root@master data]#:

At this point no network component is installed, which is why the worker nodes show NotReady. We use the Calico network component here.

3.7.4 Install the Calico network component

Place the network component package under /data/. All three servers need the network component.

Extract it, enter the images directory, and load the three image tarballs calico-cni.tar, calico-node.tar, and calico-kube-controllers.tar into the local image store.

[root@master data]#:ll calico-release-v3.28.0.tgz 
-rw-r--r-- 1 root root 1101155877 Jul  3 22:40 calico-release-v3.28.0.tgz
# extract
[root@master data]#:tar xf calico-release-v3.28.0.tgz
[root@master data]#:ls
calico-release-v3.28.0.tgz          crictl-v1.29.0-linux-amd64.tar.gz  nginx.tar.gz
cni-plugins-linux-amd64-v1.5.1.tgz  nerdctl-1.7.6-linux-amd64.tar.gz   release-v3.28.0
# enter the images directory
[root@master data]#:cd release-v3.28.0/images/
[root@master images]#:ls
calico-cni.tar       calico-flannel-migration-controller.tar  calico-node.tar        calico-typha.tar
calico-dikastes.tar  calico-kube-controllers.tar              calico-pod2daemon.tar
# load the three images
[root@master images]#:nerdctl load -i calico-cni.tar
unpacking docker.io/calico/cni:v3.28.0 (sha256:2da41a4fcb31618b20817de9ec9fd13167344f5e2e034cee8baf73d89e212b4e)...
Loaded image: calico/cni:v3.28.0
[root@master images]#:nerdctl load -i calico-node.tar
unpacking docker.io/calico/node:v3.28.0 (sha256:5a4942472d32549581ed34d785c3724ecffd0d4a7c805e5f64ef1d89d5aaa947)...
Loaded image: calico/node:v3.28.0
[root@master images]#:nerdctl load -i calico-kube-controllers.tar
unpacking docker.io/calico/kube-controllers:v3.28.0 (sha256:83e080cba8dbb2bf2168af368006921fcb940085ba6326030a4867963d2be2b3)...
Loaded image: calico/kube-controllers:v3.28.0
# list the calico images
[root@master images]#:nerdctl images | grep calico
calico/cni                                            v3.28.0    2da41a4fcb31    2 minutes ago         linux/amd64    199.4 MiB    199.3 MiB
calico/kube-controllers                               v3.28.0    83e080cba8db    32 seconds ago        linux/amd64    75.6 MiB     75.5 MiB
calico/node                                           v3.28.0    5a4942472d32    About a minute ago    linux/amd64    342.4 MiB    338.1 MiB
[root@master images]#:

On the master, modify the calico.yaml file: uncomment the CALICO_IPV4POOL_CIDR entry and set it to the pod network address range (the --pod-network-cidr given to kubeadm init).

[root@master images]#:cd ../manifests/
[root@master manifests]#:vim calico.yaml
......
            - name: CALICO_IPV4POOL_CIDR
              value: "10.10.0.0/16"
......

Run the network component

[root@master manifests]#:kubectl apply -f calico.yaml 
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
[root@master manifests]#:

4. Verification

Check whether the nodes show Ready.

[root@master manifests]#:kubectl get nodes 
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   58m   v1.30.3
node1    Ready    <none>          40m   v1.30.3
node2    Ready    <none>          40m   v1.30.3
[root@master manifests]#:

Images on the master node

[root@master manifests]#:nerdctl images | grep -v none 
REPOSITORY                                                         TAG         IMAGE ID        CREATED           PLATFORM       SIZE         BLOB SIZE
calico/cni                                                         v3.28.0     2da41a4fcb31    14 minutes ago    linux/amd64    199.4 MiB    199.3 MiB
calico/kube-controllers                                            v3.28.0     83e080cba8db    13 minutes ago    linux/amd64    75.6 MiB     75.5 MiB
calico/node                                                        v3.28.0     5a4942472d32    13 minutes ago    linux/amd64    342.3 MiB    338.1 MiB
registry.aliyuncs.com/google_containers/coredns                    v1.11.1     a6b67bdb2a67    52 minutes ago    linux/amd64    60.9 MiB     17.3 MiB
registry.aliyuncs.com/google_containers/etcd                       3.5.12-0    97a7bb219855    51 minutes ago    linux/amd64    146.3 MiB    54.6 MiB
registry.aliyuncs.com/google_containers/kube-apiserver             v1.30.3     3c66304a30ef    53 minutes ago    linux/amd64    114.3 MiB    31.3 MiB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.30.3     f35091eb7d32    53 minutes ago    linux/amd64    109.1 MiB    29.7 MiB
registry.aliyuncs.com/google_containers/kube-proxy                 v1.30.3     08e5c999c0a7    52 minutes ago    linux/amd64    84.4 MiB     27.7 MiB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.30.3     57bb58420452    52 minutes ago    linux/amd64    62.3 MiB     18.4 MiB
registry.aliyuncs.com/google_containers/pause                      3.9         7031c1b28338    51 minutes ago    linux/amd64    732.0 KiB    314.0 KiB
[root@master manifests]#:

Images on the worker nodes

[root@node1 images]#:nerdctl images | grep -v none 
REPOSITORY                                            TAG        IMAGE ID        CREATED           PLATFORM       SIZE         BLOB SIZE
calico/cni                                            v3.28.0    2da41a4fcb31    15 minutes ago    linux/amd64    199.4 MiB    199.3 MiB
calico/kube-controllers                               v3.28.0    83e080cba8db    14 minutes ago    linux/amd64    75.6 MiB     75.5 MiB
calico/node                                           v3.28.0    5a4942472d32    14 minutes ago    linux/amd64    342.4 MiB    338.1 MiB
registry.aliyuncs.com/google_containers/kube-proxy    v1.30.3    08e5c999c0a7    33 minutes ago    linux/amd64    84.4 MiB     27.7 MiB
registry.aliyuncs.com/google_containers/pause         3.9        7031c1b28338    33 minutes ago    linux/amd64    732.0 KiB    314.0 KiB
[root@node1 images]#:
[root@node2 images]#:nerdctl images | grep -v none 
REPOSITORY                                            TAG        IMAGE ID        CREATED           PLATFORM       SIZE         BLOB SIZE
calico/cni                                            v3.28.0    2da41a4fcb31    15 minutes ago    linux/amd64    199.4 MiB    199.3 MiB
calico/kube-controllers                               v3.28.0    83e080cba8db    14 minutes ago    linux/amd64    75.6 MiB     75.5 MiB
calico/node                                           v3.28.0    5a4942472d32    14 minutes ago    linux/amd64    342.3 MiB    338.1 MiB
registry.aliyuncs.com/google_containers/kube-proxy    v1.30.3    08e5c999c0a7    33 minutes ago    linux/amd64    84.4 MiB     27.7 MiB
registry.aliyuncs.com/google_containers/pause         3.9        7031c1b28338    33 minutes ago    linux/amd64    732.0 KiB    314.0 KiB
[root@node2 images]#:

On the master, check the pod status across all namespaces; kube-system is the namespace that holds the Kubernetes system components.

[root@master manifests]#:kubectl get pod -A 
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-564985c589-p7np6   1/1     Running   0          6m57s
kube-system   calico-node-dbj5k                          1/1     Running   0          6m57s
kube-system   calico-node-wq42k                          1/1     Running   0          6m56s
kube-system   calico-node-wtbl4                          1/1     Running   0          6m56s
kube-system   coredns-7b5944fdcf-22p7n                   1/1     Running   0          52m
kube-system   coredns-7b5944fdcf-zf65f                   1/1     Running   0          52m
kube-system   etcd-master                                1/1     Running   2          52m
kube-system   kube-apiserver-master                      1/1     Running   2          52m
kube-system   kube-controller-manager-master             1/1     Running   2          52m
kube-system   kube-proxy-gbk5c                           1/1     Running   0          35m
kube-system   kube-proxy-m5gfh                           1/1     Running   0          52m
kube-system   kube-proxy-vhqr6                           1/1     Running   0          35m
kube-system   kube-scheduler-master                      1/1     Running   2          52m
[root@master manifests]#:

The k8s installation is now complete.

Load a local nginx image and use a YAML file to start a pod. The image only needs to be loaded on the worker nodes; the master schedules containers onto them.

[root@node1 images]#:ls
nginx.tar
[root@node1 images]#:nerdctl load -i nginx.tar 
unpacking harbor.hiuiu.com/nginx/nginx:1.21.5 (sha256:ee89b00528ff4f02f2405e4ee221743ebc3f8e8dd0bfd5c4c20a2fa2aaa7ede3)...
Loaded image: harbor.hiuiu.com/nginx/nginx:1.21.5
[root@node1 images]#:nerdctl images | grep nginx 
harbor.hiuiu.com/nginx/nginx                          1.21.5     ee89b00528ff    17 seconds ago    linux/amd64    149.1 MiB    54.1 MiB
[root@node1 images]#:
[root@node2 images]#:ls
nginx.tar
[root@node2 images]#:nerdctl load -i nginx.tar 
unpacking harbor.hiuiu.com/nginx/nginx:1.21.5 (sha256:ee89b00528ff4f02f2405e4ee221743ebc3f8e8dd0bfd5c4c20a2fa2aaa7ede3)...
Loaded image: harbor.hiuiu.com/nginx/nginx:1.21.5
[root@node2 images]#:nerdctl images | grep nginx 
harbor.hiuiu.com/nginx/nginx                          1.21.5     ee89b00528ff    17 seconds ago    linux/amd64    149.1 MiB    54.1 MiB
[root@node2 images]#:

On the master, write the YAML file and apply it.

[root@master manifests]#:mkdir /data/yamlfile ; cd /data/yamlfile/
[root@master yamlfile]#:vim nginx.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: harbor.hiuiu.com/nginx/nginx:1.21.5
    imagePullPolicy: Never
    ports:
    - containerPort: 80
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 10; done"]
[root@master yamlfile]#:kubectl apply -f nginx.yml 
pod/nginx created
[root@master yamlfile]#:
[root@master ~]#:kubectl get pod 
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          35s

It runs successfully.
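To see where the pod was scheduled and confirm the loop is working (note that because the command is overridden with a shell loop, nginx itself is not serving in this pod):

kubectl get pod nginx -o wide      # shows the node and pod IP
kubectl logs nginx --tail=5        # should show the periodic hello output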

---end---
