
Installing Kubernetes 1.33.4 on Ubuntu 24.04.2 and Configuring Cilium

Software versions:
Ubuntu 24.04.2
kubeadm v1.33.4
Kubernetes v1.33.4
containerd v2.0.2
Cilium v1.18.0

Server role assignment
node1 192.168.2.21 Ubuntu 24.04.2 LTS master
node2 192.168.2.22 Ubuntu 24.04.2 LTS node
node3 192.168.2.23 Ubuntu 24.04.2 LTS node
node4 192.168.2.24 Ubuntu 24.04.2 LTS node
node5 192.168.2.25 Ubuntu 24.04.2 LTS node
node6 192.168.2.26 Ubuntu 24.04.2 LTS node

Step 1: Base system setup
Run the following on every machine.

Disable swap

sed -ri 's/^([^#].*swap.*)$/#\1/' /etc/fstab && grep swap /etc/fstab && swapoff -a && free -h

# Disable the firewall
ufw disable

# Set the timezone
timedatectl set-timezone Asia/Shanghai
systemctl restart systemd-timesyncd.service

# Enable IPv4 forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
# Required by Kubernetes & Cilium
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# Optional kernel tuning (e.g. connection-tracking table size)
net.netfilter.nf_conntrack_max = 1048576
EOF

# Apply the configuration (equivalent to sysctl -p /etc/sysctl.d/k8s.conf)
sysctl --system
sysctl net.ipv4.ip_forward
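The net.bridge.* keys above only exist once the br_netfilter kernel module is loaded, and containerd's overlayfs snapshotter needs the overlay module. A minimal sketch for loading them persistently (the file name k8s.conf is just a convention):

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# Load the modules immediately without rebooting
sudo modprobe overlay
sudo modprobe br_netfilter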

Configure the hostname and /etc/hosts

cat >> /etc/hosts << EOF
192.168.2.21 ops-test-021
192.168.2.22 ops-test-022
192.168.2.23 ops-test-023
192.168.2.24 ops-test-024
192.168.2.25 ops-test-025
192.168.2.26 ops-test-026
192.168.2.27 ops-test-027
192.168.2.28 ops-test-028
192.168.2.29 ops-test-029
192.168.2.30 ops-test-030
EOF
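The original only shows the hosts file; setting the hostname itself is presumably done once per node, for example (a sketch, using the node names from the hosts entries above):

# Run on 192.168.2.21 (node1); use the matching name on each of the other nodes
hostnamectl set-hostname ops-test-021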

Step 2: Install containerd
Run on all nodes.
2.1 Download and configure containerd
Download from https://github.com/containerd/containerd/releases
The release binaries are built dynamically against glibc for distributions such as Ubuntu and Rocky Linux; they may not work on musl-based distributions such as Alpine Linux, whose users may have to build containerd from source or install it from third-party packages.
The cri-containerd-xx archives do not work on older Linux distributions and are removed in containerd 2.0.

## (containerd 2.x docs) https://github.com/containerd/containerd/blob/main/docs/containerd-2.0.md
wget https://github.com/containerd/containerd/releases/download/v2.0.2/containerd-2.0.2-linux-amd64.tar.gz
apt install runc
tar Cxzvf /usr/local containerd-2.0.2-linux-amd64.tar.gz
bin/
bin/containerd-shim-runc-v2
bin/containerd-shim
bin/ctr
bin/containerd-shim-runc-v1
bin/containerd
bin/containerd-stress

# Generate the default configuration and switch the pause image to the Aliyun mirror
mkdir -p /etc/containerd && containerd config default > /etc/containerd/config.toml
sed -i "s#registry.k8s.io/pause:3.10#registry.aliyuncs.com/google_containers/pause:3.10#g" /etc/containerd/config.toml

# Add the SystemdCgroup = true parameter
sed -i "/ShimCgroup = ''/a \            SystemdCgroup = true" /etc/containerd/config.toml

# Install the containerd.service unit provided upstream
wget -P /etc/systemd/system/ https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
systemctl daemon-reload
systemctl enable --now containerd
systemctl status containerd.service
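A quick sanity check that the daemon is up and the cgroup change took effect (a small sketch; the grep just confirms the line the sed above inserted):

ctr version
grep -n 'SystemdCgroup' /etc/containerd/config.toml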

2.2 Install crictl

CRICTL_VERSION=v1.33.0
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$CRICTL_VERSION/crictl-$CRICTL_VERSION-linux-amd64.tar.gz
tar zxvf crictl-$CRICTL_VERSION-linux-amd64.tar.gz -C /usr/local/bin
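crictl does not know which CRI socket to use by default; a minimal sketch of pointing it at containerd (without this it falls back to probing sockets and prints noisy warnings):

cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
crictl ps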

2.3 Configure a private Harbor image registry

Edit the configuration
The relevant block starts at roughly line 52.

vim +52 /etc/containerd/config.toml

    [plugins.'io.containerd.cri.v1.images'.registry]
      config_path = '/etc/containerd/certs.d'    # change this line
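With config_path set, each registry gets its own directory under /etc/containerd/certs.d. A sketch for a hypothetical Harbor instance at harbor.example.com (the directory name and URL are placeholders; use skip_verify only if the registry has a self-signed certificate):

mkdir -p /etc/containerd/certs.d/harbor.example.com
cat <<EOF | tee /etc/containerd/certs.d/harbor.example.com/hosts.toml
server = "https://harbor.example.com"

[host."https://harbor.example.com"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true
EOF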

Restart containerd:

systemctl restart containerd

Step 3: Install the Kubernetes components

# Update package lists
sudo apt update && sudo apt upgrade -y
# Install prerequisites
apt install -y apt-transport-https ca-certificates curl gpg
# Create the keyring directory (already present on some releases; create it if missing)
mkdir -p -m 755 /etc/apt/keyrings
# Download the signing key
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | \
sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg && \
sudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# Add the v1.33 package repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Update, install the packages, and pin them so they are not upgraded accidentally
apt update && \
apt install -y kubelet kubectl kubeadm && \
apt-mark hold kubelet kubeadm kubectl
# Enable kubelet at boot
sudo systemctl enable --now kubelet
# Check the version
kubeadm version

Step 4: Initialize the cluster
4.1 Pull the required images
# Pull the images from the Aliyun mirror first; only node1 (i.e. the master node) needs this
sudo kubeadm config images pull \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.33.4 \
--cri-socket=unix:///run/containerd/containerd.sock

4.2 Generate the initial cluster configuration file on the master node
kubeadm config print init-defaults > kubeadm-config.yaml

4.3 Modify the configuration file as follows
# Edit the kubeadm-config configuration file
vim kubeadm-config.yaml

# IP address of the control-plane node
advertiseAddress: 192.168.2.21

# Name this machine registers with after joining the cluster
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  name: ops-test-021
  taints: null

# Kubernetes version
kubernetesVersion: 1.33.4

# Skip installing kube-proxy (Cilium replaces it); add the two lines below just above "timeouts:"
skipPhases:            # add here
- addon/kube-proxy     # key point: skip the kube-proxy addon
timeouts:

# In the networking section add podSubnet (it must match Cilium's ipv4NativeRoutingCIDR later)
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16   # new line

# Cluster image repository, switched to the Aliyun mirror; a consolidated sketch of the edited file follows below
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
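For reference, a consolidated sketch of what the edited kubeadm-config.yaml ends up looking like. The field layout assumes the kubeadm.k8s.io/v1beta4 API that kubeadm 1.33's init-defaults prints; compare it against your generated file rather than copying it blindly:

apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.2.21
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  name: ops-test-021
  taints: null
skipPhases:
- addon/kube-proxy
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: 1.33.4
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16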
4.4 Initialize the cluster with the configuration file

kubeadm init --config kubeadm-config.yaml

--- execution output ---
root@ops-test-021:~# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.33.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local ops-test-021] and IPs [10.96.0.1 192.168.2.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ops-test-021] and IPs [192.168.2.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ops-test-021] and IPs [192.168.2.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.002761381s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.2.21:6443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is healthy after 2.402727849s
[control-plane-check] kube-scheduler is healthy after 3.021801856s
[control-plane-check] kube-apiserver is healthy after 4.502387192s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ops-test-021 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ops-test-021 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.21:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:9394cee06110b4ad157125ef8b791466fe1f118840c33e0f987fb6c7bfd6b8c8

4.5 Run the following commands, as suggested by the init output
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

4.6 Join the other nodes to the cluster
kubeadm join 192.168.2.21:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:9394cee06110b4ad157125ef8b791466fe1f118840c33e0f987fb6c7bfd6b8c8

--- execution output ---
root@ops-test-022:~# kubeadm join 192.168.2.21:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:9394cee06110b4ad157125ef8b791466fe1f118840c33e0f987fb6c7bfd6b8c8
[preflight] Running pre-flight checks
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
W0828 11:14:09.307801    3443 configset.go:78] Warning: No kubeproxy.config.k8s.io/v1alpha1 config is loaded. Continuing without it: configmaps "kube-proxy" is forbidden: User "system:bootstrap:abcdef" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.502524921s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@ops-test-022:~#

Check the cluster status:
root@ops-test-021:~# kubectl get nodes
NAME           STATUS     ROLES           AGE     VERSION
ops-test-021   NotReady   control-plane   8m19s   v1.33.4
ops-test-022   NotReady   <none>          3m45s   v1.33.4
ops-test-023   NotReady   <none>          3m34s   v1.33.4
ops-test-024   NotReady   <none>          3m20s   v1.33.4
ops-test-026   NotReady   <none>          9s      v1.33.4

# Verify the PodCIDR assigned to each node
root@ops-test-021:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
ops-test-021    10.244.0.0/24
ops-test-022    10.244.1.0/24
ops-test-023    10.244.2.0/24
ops-test-024    10.244.3.0/24
ops-test-026    10.244.4.0/24

4.7 Deploy Cilium
4.7.1 Install the Cilium CLI
curl -L --remote-name https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
tar xzvf cilium-linux-amd64.tar.gz
sudo mv cilium /usr/local/bin
Verify:

cilium version

4.7.2 Default VXLAN (tunnel) mode
# Switch the routing mode to VXLAN tunnelling; on Cilium >= 1.15 the old "tunnel=vxlan" value is expressed as routingMode=tunnel plus tunnelProtocol=vxlan
cilium install \
--set enableKubeProxyReplacement=true \
--set kubeProxyReplacement=true \
--set ipam.mode=kubernetes \
--set routingMode=tunnel \
--set tunnelProtocol=vxlan \
--set ipam.operator.clusterPoolIPv4PodCIDRList=10.244.0.0/16 \
--set ipam.operator.clusterPoolIPv4MaskSize=24 \
--set hubble.enabled=true \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true

4.7.3 Cilium in Native-Routing mode (cloud-native routing) -- the mode used in this walkthrough
cilium install \
--set enableKubeProxyReplacement=true \
--set kubeProxyReplacement=true \
--set ipam.mode=kubernetes \
--set routingMode=native \
--set ipam.operator.clusterPoolIPv4PodCIDRList=10.244.0.0/16 \
--set ipam.operator.clusterPoolIPv4MaskSize=24 \
--set ipv4NativeRoutingCIDR=10.244.0.0/16 \
--set autoDirectNodeRoutes=true \
--set hubble.enabled=true \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true

--set enableKubeProxyReplacement=true                         Explicitly enable the kube-proxy replacement feature
--set kubeProxyReplacement=true                               Fully replace kube-proxy with Cilium's eBPF datapath
--set ipam.mode=kubernetes                                    Use the Kubernetes controller for Pod IP allocation
--set routingMode=native                                      Enable native routing; nodes route Pod traffic to each other directly
--set ipam.operator.clusterPoolIPv4PodCIDRList=10.244.0.0/16  Pod network address pool
--set ipam.operator.clusterPoolIPv4MaskSize=24                Each node gets one /24 for Pod IPs
--set ipv4NativeRoutingCIDR=10.244.0.0/16                     Pod network range reachable via native routing
--set autoDirectNodeRoutes=true                               Automatically add direct routes between nodes
--set hubble.enabled=true                                     Enable Hubble network observability
--set hubble.relay.enabled=true                               Enable Hubble Relay (the Hubble data aggregation service)
--set hubble.ui.enabled=true                                  Enable the Hubble web UI (visualisation)

--- execution output ---
root@ops-test-021:~# cilium install \
--set enableKubeProxyReplacement=true \
--set kubeProxyReplacement=true \
--set ipam.mode=kubernetes \
--set routingMode=native \
--set ipam.operator.clusterPoolIPv4PodCIDRList=10.244.0.0/16 \
--set ipam.operator.clusterPoolIPv4MaskSize=24 \
--set ipv4NativeRoutingCIDR=10.244.0.0/16 \
--set autoDirectNodeRoutes=true \
--set hubble.enabled=true \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true
ℹ️  Using Cilium version 1.18.0
🔮 Auto-detected cluster name: kubernetes
🔮 Auto-detected kube-proxy has not been installed
ℹ️  Cilium will fully replace all functionalities of kube-proxy

4.7.4 Native Routing + Cilium Ingress (cloud-native routing with the ingress controller enabled)
# ingressController.enabled=true turns on the Cilium Ingress Controller;
# ingressController.loadbalancerMode=dedicated gives each Ingress a dedicated LB (recommended);
# the remaining flags are the same as in 4.7.3
cilium install \
--set kubeProxyReplacement=true \
--set enableKubeProxyReplacement=true \
--set ipam.mode=kubernetes \
--set routingMode=native \
--set ipv4NativeRoutingCIDR=10.244.0.0/16 \
--set autoDirectNodeRoutes=true \
--set ingressController.enabled=true \
--set ingressController.loadbalancerMode=dedicated \
--set hubble.enabled=true \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true

4.7.5 Verify the installation

cilium status

--- verification output ---
root@ops-test-021:~# cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    OK
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 5, Ready: 5/5, Available: 5/5
DaemonSet              cilium-envoy             Desired: 5, Ready: 5/5, Available: 5/5
Deployment             cilium-operator          Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-relay             Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-ui                Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 5
                       cilium-envoy             Running: 5
                       cilium-operator          Running: 1
                       clustermesh-apiserver
                       hubble-relay             Running: 1
                       hubble-ui                Running: 1
Cluster Pods:          4/4 managed by Cilium
Helm chart version:    1.18.0
Image versions         cilium             quay.io/cilium/cilium:v1.18.0@sha256:dfea023972d06ec183cfa3c9e7809716f85daaff042e573ef366e9ec6a0c0ab2: 5
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.34.4-1753677767-266d5a01d1d55bd1d60148f991b98dac0390d363@sha256:231b5bd9682dfc648ae97f33dcdc5225c5a526194dda08124f5eded833bf02bf: 5
                       cilium-operator    quay.io/cilium/operator-generic:v1.18.0@sha256:398378b4507b6e9db22be2f4455d8f8e509b189470061b0f813f0fabaf944f51: 1
                       hubble-relay       quay.io/cilium/hubble-relay:v1.18.0@sha256:c13679f22ed250457b7f3581189d97f035608fe13c87b51f57f8a755918e793a: 1
                       hubble-ui          quay.io/cilium/hubble-ui-backend:v0.13.2@sha256:a034b7e98e6ea796ed26df8f4e71f83fc16465a19d166eff67a03b822c0bfa15: 1
                       hubble-ui          quay.io/cilium/hubble-ui:v0.13.2@sha256:9e37c1296b802830834cc87342a9182ccbb71ffebb711971e849221bd9d59392: 1
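Beyond cilium status, two optional checks help confirm the routing mode actually in effect and end-to-end connectivity. This is a sketch using standard cilium-cli subcommands; the grep pattern assumes the usual cilium-config ConfigMap key names, and the connectivity test deploys temporary pods and takes a few minutes:

# Show the effective agent configuration
cilium config view | grep -E 'routing-mode|ipv4-native-routing-cidr|kube-proxy-replacement'
# Run Cilium's built-in end-to-end connectivity test
cilium connectivity test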
4.7.6 Expose the Hubble UI

# vim hubble-ui-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hubble-ui
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: hubble.cctbb.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hubble-ui
            port:
              number: 80

# kubectl apply -f hubble-ui-ingress.yaml

Access: http://hubble.cctbb.com
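Note that the Ingress above assumes an nginx ingress controller (ingressClassName: nginx), which this walkthrough never installs. If you don't have one, a quick alternative sketch is to let the Cilium CLI port-forward the UI for you:

# Opens a local port-forward to the hubble-ui service and prints/opens the URL
cilium hubble ui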