
Docker Swarm Cluster Usage Notes

1 Initializing the Cluster

Directory layout on the manager host:

data
├── base_data.yml
├── base_monitoring.yml
├── base_server_middleware.yml
└── docker
    ├── consul
    ├── elasticsearch
    ├── filebeat
    ├── grafana
    ├── kib
    ├── konga
    ├── mongodb
    ├── mysql
    ├── nacos
    ├── nginx
    ├── portainer
    ├── postgresql
    ├── prometheus
    ├── rabbitmq
    └── redis
1.1 Configure hostnames for the servers

First set a hostname on each node so they are easy to tell apart later:

hostnamectl set-hostname manager    # manager node
hostnamectl set-hostname node1      # worker node 1
hostnamectl set-hostname node2      # worker node 2

After changing the hostname, you also need to update the corresponding /etc/hosts entry; otherwise name lookups may later fail with: unable to resolve host xxxx: Temporary failure in name resolution.

vi /etc/hosts
# point 127.0.1.1 at the hostname set above, e.g.:
127.0.1.1 manager    # or node1 / node2 on the worker nodes
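Once /etc/hosts is filled in, it is worth confirming that every peer name actually resolves before joining nodes to the swarm. A minimal sketch (the helper name `check_hosts` is mine; manager/node1/node2 are the hostnames set above):

```shell
# check_hosts: report which of the given names resolve locally;
# returns non-zero if any name is missing from /etc/hosts (or DNS)
check_hosts() {
  local missing=0 h
  for h in "$@"; do
    if getent hosts "$h" >/dev/null 2>&1; then
      echo "$h resolves"
    else
      echo "$h MISSING"
      missing=1
    fi
  done
  return "$missing"
}

# run on every node:
# check_hosts manager node1 node2
```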
1.2 Create the cluster and join nodes
# on the manager:
docker swarm init --advertise-addr 10.10.6.111 --data-path-addr 10.10.6.111

# on each worker, using the token printed by `docker swarm init`:
docker swarm join --token SWMTKN-1-51niu3a5jh0bgj738go49re9yoo1hpzidq6nxn5ho114yx43-ekeyf6rynb6xl9rykyrx8 \
  10.10.6.111:2377 --advertise-addr 172.168.1.175:2377

Note: if the machines in the swarm sit on different networks — for example, they are all cloud servers that can only reach each other over public IPs — then each node must pass its own public IP via --advertise-addr when joining. In the example above, 172.168.1.175 is a public IP that can reach 10.10.6.111. Without it, services inside the cluster cannot communicate over the internal DNS-based service discovery network. If --advertise-addr is omitted, a node joins with the IP of its eth0 interface, which is normally a private address, so nodes on different networks end up unable to talk to each other. If the whole cluster is on one internal network and the eth0 addresses are mutually reachable, --advertise-addr is not needed.
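To see which address a node would advertise by default, you can read the interface address directly. A small sketch (the helper name `iface_ip` is mine, and eth0 is an assumption about the interface name):

```shell
# iface_ip: print the first IPv4 address bound to an interface — this is
# what swarm would advertise if --advertise-addr is not given
iface_ip() {
  ip -4 -o addr show "$1" | awk '{split($4, a, "/"); print a[1]; exit}'
}

# iface_ip eth0                                # default candidate address
# docker info --format '{{.Swarm.NodeAddr}}'   # address actually in use after joining
```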

1.3 Open the required ports

1 In the cloud provider's console, open TCP/UDP ports 7946, 4789 and 2377 to the other servers in the cluster.

2 Open the same ports in the firewall on each server in the cluster:
ufw allow proto tcp from any to any port 7946,4789,2377
ufw allow proto udp from any to any port 7946,4789,2377
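After opening the ports, you can probe them from another node. A minimal sketch using bash's /dev/tcp (TCP only; the helper name `port_open` is mine, and 10.10.6.111 is the example manager address from above):

```shell
# port_open HOST PORT: succeed if a TCP connection can be established
port_open() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# from a worker node, the manager's swarm ports should be reachable:
# for p in 2377 7946 4789; do
#   port_open 10.10.6.111 "$p" && echo "tcp/$p open" || echo "tcp/$p closed"
# done
```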

2 Creating the Base Database Services

base_data.yml:

version: '3.8'

networks:
  base_service_database-net:
    external: true

services:
  # mysql
  mysql:
    # mysql:8.0.20, or whichever MySQL version you need
    image: mysql:8.0.20
    # container name
    container_name: mysql-8
    networks:
      - base_service_database-net
    environment:
      # root password
      - MYSQL_ROOT_PASSWORD=python
      - TZ=Asia/Shanghai
      - SET_CONTAINER_TIMEZONE=true
      - CONTAINER_TIMEZONE=Asia/Shanghai
    volumes:
      # host path on the left, container path on the right
      # (missing host directories are created automatically)
      - /data/docker/mysql/mysql8:/etc/mysql
      - /data/docker/mysql/mysql8/logs:/logs
      - /data/docker/mysql/mysql8/data:/var/lib/mysql
      - /etc/localtime:/etc/localtime
      - /data/docker/mysql/mysql8/mysql-files:/var/lib/mysql-files
    deploy:
      placement:
        constraints:
          - node.hostname == manager
      # replicas: 1  # a single replica keeps it on one fixed node
    ports:
      # host port on the left, container port on the right
      - 3613:3306
    restart: always
    privileged: true

  # mongo
  mongo:
    restart: always
    image: mongo:8.0.3
    container_name: mongodb
    networks:
      - base_service_database-net
    volumes:
      - /data/docker/mongodb/config/mongod.conf:/etc/mongod.conf
      - /data/docker/mongodb/data:/data/db
      - /data/docker/mongodb/logs:/var/log/mongodb
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_PASSWORD=python
      - MONGO_INITDB_ROOT_USERNAME=caipu_srv
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  # redis
  redis:
    image: redis:7.0.12
    container_name: redis
    restart: always
    networks:
      - base_service_database-net
    command: redis-server /usr/local/etc/redis/redis.conf --appendonly no
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/docker/redis/config/redis.conf:/usr/local/etc/redis/redis.conf
      - /data/docker/redis/data:/data
      - /data/docker/redis/logs:/logs
    ports:
      - 6379:6379
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  kong-database:
    image: postgres:16
    container_name: kong-database
    restart: always
    networks:
      - base_service_database-net
    environment:
      - POSTGRES_USER=kong
      - POSTGRES_DB=kong
      - POSTGRES_PASSWORD=kong
    volumes:
      - /data/docker/postgresql/data:/var/lib/postgresql/data
    ports:
      - "5348:5432"
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  # initializes the kong database
  kong-migration:
    container_name: kong-migration
    image: kong
    command: kong migrations bootstrap
    networks:
      - base_service_database-net
    restart: on-failure
    environment:
      - KONG_PG_HOST=kong-database
      - KONG_DATABASE=postgres
      - KONG_PG_USER=kong
      - KONG_PG_PASSWORD=kong
      - KONG_CASSANDRA_CONTACT_POINTS=kong-database
    links:
      - kong-database
    depends_on:
      - kong-database
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  elasticsearch:
    image: elasticsearch:7.17.7
    restart: always
    container_name: elasticsearch
    networks:
      - base_service_database-net
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - /data/docker/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /data/docker/elasticsearch/data:/usr/share/elasticsearch/data
      - /data/docker/elasticsearch/logs:/usr/share/elasticsearch/logs
    deploy:
      placement:
        constraints:
          - node.hostname == manager

First create the overlay network:

docker network create --driver overlay base_service_database-net --attachable

Then run the following command to deploy the services:

docker stack deploy -c  base_data.yml  base_service_database
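After deploying, every service should report a full replica count. A small helper (the name `not_ready` is mine) that prints any service whose REPLICAS column, e.g. 0/1, has not converged:

```shell
# not_ready STACK: list services of a stack whose replica count is not full
not_ready() {
  docker stack services "$1" --format '{{.Name}} {{.Replicas}}' \
    | awk '{split($2, r, "/"); if (r[1] != r[2]) print $1 " -> " $2}'
}

# not_ready base_service_database    # empty output means everything is up
```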
3 Creating the Monitoring Services

Note: the cadvisor image may not be directly pullable; you can fetch the cadvisor image tar from a download link and install it offline with docker load -i.
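The offline workflow can be sketched as: pull and save the image on a machine with registry access, copy the tar over, then load it on each node (the helper name `save_image` and the host/file names are examples):

```shell
# save_image IMAGE TAR: pull an image and export it to a tar for offline use
save_image() {
  docker pull "$1" && docker save -o "$2" "$1"
}

# on a machine that can reach gcr.io:
# save_image gcr.io/cadvisor/cadvisor:v0.52.1 cadvisor_v0.52.1.tar
# then copy and load it on every swarm node:
# scp cadvisor_v0.52.1.tar node1:/tmp/
# ssh node1 docker load -i /tmp/cadvisor_v0.52.1.tar
```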

base_monitoring.yml

version: "3.8"

networks:
  monitoring:
    external: true
  base_service_database-net:
    external: true

services:
  # Prometheus
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - /data/docker/prometheus/data:/prometheus
      - /data/docker/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
    networks:
      - monitoring
      - base_service_database-net
    deploy:
      placement:
        constraints:
          - node.role == manager
    environment:
      - TZ=Asia/Shanghai

  # Node Exporter (deployed globally on every node)
  node-exporter:
    image: prom/node-exporter:latest
    command:
      - '--path.rootfs=/host'
    pid: host
    volumes:
      - '/:/host:ro,rslave'
    environment:
      - TZ=Asia/Shanghai
    networks:
      - monitoring
    deploy:
      mode: global

  # cAdvisor (deployed globally on every node)
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:v0.52.1
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /proc:/proc
      - /var/lib/docker/:/var/lib/docker:ro
    security_opt:
      - apparmor:unconfined
    devices:
      - /dev/kmsg:/dev/kmsg
    networks:
      - monitoring
    deploy:
      mode: global
    ports:
      - "8080:8080"

  # Grafana dashboard
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - /data/docker/grafana:/var/lib/grafana
    environment:
      - TZ=Asia/Shanghai
    networks:
      - monitoring
    deploy:
      placement:
        constraints:
          - node.role == manager

First create the overlay network:

docker network create --driver overlay --attachable monitoring

Then run the following command to deploy the services:

docker stack deploy -c base_monitoring.yml  monitoring

If Prometheus or Grafana fails to start, fix the permissions on the mounted directories:

chmod 777  /data/docker/grafana
chmod 777  /data/docker/prometheus/data
4 Creating the Middleware Services

base_server_middleware.yml

version: '3.8'

networks:
  base_service_database-net:
    external: true
  web_app:
    external: true
  monitoring:
    external: true

services:
  consul:
    image: consul:1.15.4
    restart: always
    container_name: consul
    networks:
      - web_app
    ports:
      - "8500:8500"
      - "8600:8600"
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/docker/consul/data:/consul/data
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  nacos:
    image: qingpan/rnacos:stable
    container_name: nacos
    networks:
      - web_app
    ports:
      - "8848:8848"
      - "9848:9848"
      - "10848:10848"
    volumes:
      - /data/docker/nacos/logs:/home/nacos/logs
    restart: always
    environment:
      - RNACOS_HTTP_PORT=8848
      - RNACOS_ENABLE_NO_AUTH_CONSOLE=true
      - TZ=Asia/Shanghai
      - MODE=standalone
      - SPRING_DATASOURCE_PLATFORM=mysql
      - MYSQL_SERVICE_HOST=81.71.64.139
      - MYSQL_SERVICE_PORT=3306
      - MYSQL_SERVICE_USER=root
      - MYSQL_SERVICE_PASSWORD=python
      - MYSQL_SERVICE_DB_NAME=nacos_config
      - MYSQL_SERVICE_DB_PARAM=characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  portainer:
    image: 6053537/portainer-ce
    container_name: portainer
    networks:
      - monitoring
    ports:
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /data/docker/portainer:/data
    restart: always
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  rabbitmq:
    image: rabbitmq:4.0.7-management
    container_name: rabbitmq
    networks:
      - monitoring
      - web_app
      - base_service_database-net
    environment:
      - RABBITMQ_DEFAULT_USER=root
      - RABBITMQ_DEFAULT_PASS=q123q123
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - /data/docker/rabbitmq/data:/var/lib/rabbitmq
      - /data/docker/rabbitmq/logs:/var/log/rabbitmq
    restart: always
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  konga:
    container_name: konga
    image: pantsel/konga:latest
    restart: always
    networks:
      - monitoring
      - base_service_database-net
    ports:
      - "1337:1337"
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  kibana:
    container_name: kibana
    image: kibana:7.17.7
    restart: always
    volumes:
      - /data/docker/kib/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    networks:
      - base_service_database-net
      - monitoring
    ports:
      - "5601:5601"
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  filebeat:
    container_name: filebeat
    image: elastic/filebeat:7.17.7
    restart: always
    networks:
      - base_service_database-net
    deploy:
      mode: global
    configs:
      - source: filebeat-config
        target: /usr/share/filebeat/filebeat.yml  # where the config file is mounted
    volumes:
      - type: bind
        source: /data/logs/
        target: /data/logs/
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
      - type: bind
        source: /var/lib/docker/containers
        target: /var/lib/docker/containers
        read_only: true

configs:
  filebeat-config:
    file: /data/docker/filebeat/config/filebeat.yml  # built from the local filebeat.yml file

First create the overlay network:

docker network create --driver overlay --attachable web_app

Then create the filebeat-config swarm config from filebeat.yml.
The filebeat.yml content is as follows:

filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - /data/logs/*/*.log
  parsers:
    - multiline:
        type: pattern
        pattern: '^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}'
        negate: true
        match: after
        max_lines: 500
        timeout: 10s

processors:
- dissect:
    tokenizer: "%{log_timestamp} | %{log_level} | %{namespace} | %{file_path} | %{method} | %{track_id} | %{message}"
    field: "message"
    target_prefix: ""
    overwrite_keys: true
- timestamp:
    field: log_timestamp
    layouts:
      - '2006-01-02 15:04:05.000'
    test:
      - '2025-04-14 09:16:52.758'
- drop_fields:
    fields: ["log_timestamp"]

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]
  index: "caipu_srv-logs-%{+yyyy.MM.dd}"
  indices:
    - index: "caipu_srv-logs-%{+yyyy.MM.dd}"
      when.contains:
        tags: "xixi"
      pipeline: "xixi_processor"

setup.template.enabled: false
setup.template.name: "caipu_srv"
setup.template.pattern: "caipu_srv-*"
docker config create filebeat-config  /data/docker/filebeat/config/filebeat.yml
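Note that swarm configs are immutable: editing /data/docker/filebeat/config/filebeat.yml later has no effect on the existing filebeat-config object. To roll out a change, first remove the services that use the config (e.g. by removing the stack), then delete and recreate it. A small helper sketch (the name `recreate_config` is mine):

```shell
# recreate_config NAME FILE: drop an existing swarm config and recreate it
# from a file (fails if a running service still uses the config)
recreate_config() {
  docker config rm "$1" 2>/dev/null
  docker config create "$1" "$2"
}

# recreate_config filebeat-config /data/docker/filebeat/config/filebeat.yml
# docker stack deploy -c base_server_middleware.yml base_server_middleware
```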

Finally, run the following command to deploy the services:

docker stack deploy -c base_server_middleware.yml  base_server_middleware
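To confirm the whole stack converged, you can poll docker stack ps until every task that should be running reports Running. A rough sketch (the name `wait_stack` and the retry/sleep values are mine):

```shell
# wait_stack STACK [TRIES]: poll until all tasks whose desired state is
# "running" are actually Running; gives up after TRIES attempts (2s apart)
wait_stack() {
  local stack="$1" tries="${2:-30}"
  while [ "$tries" -gt 0 ]; do
    if ! docker stack ps "$stack" -f 'desired-state=running' \
           --format '{{.CurrentState}}' | grep -qv '^Running'; then
      echo "stack $stack is up"
      return 0
    fi
    tries=$((tries - 1))
    sleep 2
  done
  echo "stack $stack did not converge"
  return 1
}

# wait_stack base_server_middleware
```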
