Kubernetes (k8s) Study Notes (5) -- Deploying Ingress for Domain-Name Access and Load Balancing
Ingress (here, the NGINX ingress controller) is built on nginx. Deploying it in a k8s cluster lets you expose services under a domain name and load-balance requests across pod replicas.
The steps are as follows:
Step 1: Prepare an ingress-controller.yaml file
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  #type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
One thing to watch here is the image: pulling siriuszg/nginx-ingress-controller:0.20.0 may fail, with output like this:
[root@k8s-node1 k8s]# kubectl get all -n ingress-nginx
NAME                                 READY   STATUS             RESTARTS   AGE
pod/nginx-ingress-controller-p4lgj   0/1     ImagePullBackOff   0          5m56s
pod/nginx-ingress-controller-p5lxq   0/1     ImagePullBackOff   0          5m34s
If that happens, you can swap in a different image with the following command:
kubectl set image -n ingress-nginx daemonset/nginx-ingress-controller \
  nginx-ingress-controller=registry.aliyuncs.com/google_containers/nginx-ingress-controller:v1.8.1
The other thing to watch is version pairing: since the k8s I installed is 1.17, the matching ingress controller is version 0.20.0. A controller that is too new or too old will run into compatibility problems.
Step 2: Install the ingress controller
kubectl apply -f ingress-controller.yaml
Step 3: Write the ingress-tomcat9.yaml file
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat9-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: tomcat.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: tomcat9   # field name used on k8s 1.17
              servicePort: 80        # field name used on k8s 1.17
In the configuration above, host in the spec section sets the externally visible domain, tomcat.example.com; serviceName is the name of the Service created earlier, and servicePort is that Service's port. Using port 80 means the site can be visited without appending a port number.
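For reference, the tomcat9 Service the Ingress points at (created in an earlier note) would look roughly like the sketch below; the pod label and tomcat's container port are assumptions, so adjust them to your actual deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat9            # must match serviceName in the Ingress rule
spec:
  selector:
    app: tomcat9           # assumed pod label from the earlier tomcat deployment
  ports:
    - port: 80             # must match servicePort in the Ingress rule
      targetPort: 8080     # tomcat's default HTTP port (adjust if changed)
      protocol: TCP
```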
Note that the Ingress schema differs slightly between k8s versions, so make the necessary adjustments for your own cluster; mine is k8s 1.17.
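For example, on newer clusters (k8s 1.19+) the extensions/v1beta1 API is removed and the same rule is written against networking.k8s.io/v1. A sketch of the equivalent manifest (not tested on my 1.17 cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tomcat9-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: tomcat.example.com
      http:
        paths:
          - path: /
            pathType: Prefix        # required in the v1 API
            backend:
              service:              # replaces serviceName/servicePort
                name: tomcat9
                port:
                  number: 80
```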
Step 4: Create and publish ingress-tomcat
kubectl apply -f ingress-tomcat9.yaml
Running the command above prints the following:
ingress.extensions/web created
Wait a few minutes, then check the running status.
During development and testing, you can fake the DNS record with SwitchHosts: run it as administrator and add an entry like the one below.
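The entry simply maps the domain to a cluster node's IP; the address below is only an example, so substitute one of your own node IPs:

```
192.168.56.100  tomcat.example.com
```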
Then visit the domain in a browser.
You can try stopping one of the tomcat services in the pods and refreshing the page: the site is still reachable, which demonstrates the load-balancing effect.
Extras:
In Kubernetes, a Pod stuck in ImagePullBackOff usually means the image pull failed (for example: the image does not exist, pull permission is missing, or there is a network problem). To clean up these failed Pods and their related resources completely, follow the steps below:
1. Delete the problem Pods
# List all Pods (including failed ones)
kubectl get pods --all-namespaces | grep -i "ImagePullBackOff"
# Delete a specific Pod
kubectl delete pod <pod-name> -n <namespace>
# Force delete (if it hangs)
kubectl delete pod <pod-name> --grace-period=0 --force -n <namespace>
2. Clean up the owning Deployment/StatefulSet
If the Pod is managed by a controller (such as a Deployment), change the controller's configuration instead:
# List the controllers
kubectl get deployments,statefulsets -n <namespace>
# Update the image, or delete the controller
kubectl edit deployment <deployment-name> -n <namespace>   # edit spec.template.spec.containers[].image
# or
kubectl delete deployment <deployment-name> -n <namespace>
3. Remove unused images
Run on every node:
# Show dangling images taking up space
sudo docker images | grep -i "<none>"
# Remove all unused images (drop the -a flag to prune only dangling ones)
sudo docker image prune -a -f
# Remove a specific image by name
sudo docker rmi <image-name>:<tag>
4. Fix the underlying image pull problem
Pick a fix based on the actual error:
(1) The image does not exist
# Point the Deployment at the correct image
kubectl set image deployment/<deployment-name> <container-name>=<correct-image>:<tag>
(2) Private registry authentication is required
# Create a docker-registry secret
kubectl create secret docker-registry my-registry-key \
  --docker-server=<registry-url> \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>
# Reference it in the Deployment
kubectl edit deployment <deployment-name>
spec:
  template:
    spec:
      imagePullSecrets:
        - name: my-registry-key
(3) Image pull policy problems
spec:
  containers:
    - name: my-container
      image: my-image:latest
      imagePullPolicy: IfNotPresent   # set to IfNotPresent or Always as appropriate
5. Deep-clean resources (optional)
# Delete all Pods in the Failed phase; this also covers Evicted Pods,
# since eviction leaves a Pod with status.phase=Failed
kubectl delete pods --field-selector=status.phase=Failed --all-namespaces
6. Verify the fix
# Watch the new Pod come up
kubectl get pods -n <namespace> -w
# Check the event log
kubectl describe pod <pod-name> | grep -A 10 Events
Example:
kubectl describe pod -n ingress-nginx nginx-ingress-controller-5pndz | grep -A 10 Events