[C/C++] A Kafka + C++ Practice Project Running in Docker Containers
Table of contents
- A Kafka + C++ practice project running in Docker containers
  - 1 Project goals
  - 2 Project structure
  - 3 Code
  - 4 Build and run
  - 5 Features and API notes
    - 5.1 Producer interface: `producer.cpp`
      - Key call flow
      - Parameter description
    - 5.2 Consumer interface: `consumer.cpp`
      - Key call flow
      - Notes on the consume loop
    - 5.3 Engineering takeaways
A Kafka + C++ Practice Project Running in Docker Containers
A sample project that calls the Kafka API from C++. Kafka + Zookeeper are deployed as containers, and the project implements basic Kafka producer and consumer functionality.
1 Project goals
- Learn how to drive Kafka from C++ (via the C++ wrapper of librdkafka)
- Practice the basic usage pattern of a distributed message queue: producer-consumer
- Deploy a Kafka + Zookeeper environment quickly with Docker
- Lay the groundwork for building middleware later (e.g. logging systems, async task systems, RPC frameworks)
2 Project structure
cpp-kafka-project/
├── docker-compose.yml        # Kafka + Zookeeper + dev container definitions
├── cpp_kafka_code/
│   ├── CMakeLists.txt
│   ├── producer.cpp
│   ├── consumer.cpp
│   └── create_topic.sh       # script that creates the topic
Key technical points

- Kafka + Zookeeper in containers
  - Uses the official Confluent images `confluentinc/cp-kafka` and `confluentinc/cp-zookeeper`
  - `docker-compose.yml` starts three containers:
    - `zookeeper`: coordinates the Kafka broker
    - `kafka`: the message broker
    - `cpp_dev`: an Ubuntu development container holding the C++ sources and build environment
- Kafka C++ client library
  - Uses `rdkafkacpp.h`, the C++ wrapper of `librdkafka`
  - Dynamically links `librdkafka++` and `librdkafka`
- CMake build system
  - Automatically finds and links the Kafka libraries and headers
  - Supports out-of-source builds
3 Code
docker-compose.yml

```yaml
version: "3.8"

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.5.0
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:7.5.0
    container_name: kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    depends_on:
      - zookeeper

  dev:
    image: ubuntu:22.04
    container_name: cpp_dev
    tty: true
    stdin_open: true
    command: /bin/bash
    working_dir: /home/dev/code
    volumes:
      - ./cpp_kafka_code:/home/dev/code
    depends_on:
      - kafka
```
create_topic.sh

```shell
#!/bin/bash
docker exec kafka kafka-topics \
  --create \
  --topic test_topic \
  --bootstrap-server localhost:9092 \
  --partitions 1 \
  --replication-factor 1
```
Make it executable:

```shell
chmod +x create_topic.sh
```
CMakeLists.txt

```cmake
cmake_minimum_required(VERSION 3.10)
project(cpp_kafka_example)

set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

find_package(PkgConfig REQUIRED)
pkg_check_modules(RDKAFKA REQUIRED IMPORTED_TARGET rdkafka++)

add_executable(producer producer.cpp)
target_link_libraries(producer PkgConfig::RDKAFKA)

add_executable(consumer consumer.cpp)
target_link_libraries(consumer PkgConfig::RDKAFKA)
```
producer.cpp

```cpp
#include <librdkafka/rdkafkacpp.h>
#include <iostream>
#include <memory>

class ExampleEventCb : public RdKafka::EventCb {
    void event_cb(RdKafka::Event &event) override {
        if (event.type() == RdKafka::Event::EVENT_ERROR) {
            std::cerr << "Kafka Error: " << event.str() << std::endl;
        }
    }
};

int main() {
    std::string brokers = "kafka:9092";
    std::string topic_str = "test_topic";
    std::string errstr;

    // Configuration
    ExampleEventCb event_cb;
    std::unique_ptr<RdKafka::Conf> conf(RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL));
    conf->set("bootstrap.servers", brokers, errstr);
    conf->set("event_cb", &event_cb, errstr);

    // Create the producer
    std::unique_ptr<RdKafka::Producer> producer(RdKafka::Producer::create(conf.get(), errstr));
    if (!producer) {
        std::cerr << "Failed to create producer: " << errstr << std::endl;
        return 1;
    }

    // Create the topic handle
    std::unique_ptr<RdKafka::Topic> topic(RdKafka::Topic::create(producer.get(), topic_str, nullptr, errstr));
    if (!topic) {
        std::cerr << "Failed to create topic: " << errstr << std::endl;
        return 1;
    }

    std::string message = "Hello from C++ Kafka Producer!";
    RdKafka::ErrorCode resp = producer->produce(
        topic.get(),                          // topic ptr
        RdKafka::Topic::PARTITION_UA,         // partition
        RdKafka::Producer::RK_MSG_COPY,       // message flags
        const_cast<char *>(message.c_str()),  // payload
        message.size(),                       // payload size
        nullptr,                              // optional key
        nullptr);                             // opaque

    if (resp != RdKafka::ERR_NO_ERROR) {
        std::cerr << "Produce failed: " << RdKafka::err2str(resp) << std::endl;
    } else {
        std::cout << "Message sent successfully\n";
    }

    producer->flush(3000);
    return 0;
}
```
consumer.cpp

```cpp
#include <librdkafka/rdkafkacpp.h>
#include <iostream>
#include <csignal>
#include <memory>

static volatile sig_atomic_t running = 1;

void signal_handler(int) {
    running = 0;
}

class ExampleEventCb : public RdKafka::EventCb {
    void event_cb(RdKafka::Event &event) override {
        if (event.type() == RdKafka::Event::EVENT_ERROR) {
            std::cerr << "Kafka Error: " << event.str() << std::endl;
        }
    }
};

int main() {
    signal(SIGINT, signal_handler);

    std::string brokers = "kafka:9092";
    std::string topic = "test_topic";
    std::string group_id = "cpp_consumer_group";
    std::string errstr;

    ExampleEventCb event_cb;
    std::unique_ptr<RdKafka::Conf> conf(RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL));
    conf->set("bootstrap.servers", brokers, errstr);
    conf->set("group.id", group_id, errstr);
    conf->set("auto.offset.reset", "earliest", errstr);
    conf->set("event_cb", &event_cb, errstr);

    std::unique_ptr<RdKafka::KafkaConsumer> consumer(RdKafka::KafkaConsumer::create(conf.get(), errstr));
    if (!consumer) {
        std::cerr << "Failed to create consumer: " << errstr << std::endl;
        return 1;
    }

    consumer->subscribe({topic});
    std::cout << "Consuming messages from topic " << topic << std::endl;

    while (running) {
        // consume() returns a Message object the caller must free
        std::unique_ptr<RdKafka::Message> msg(consumer->consume(1000));
        if (msg->err() == RdKafka::ERR_NO_ERROR) {
            std::string message(static_cast<const char *>(msg->payload()), msg->len());
            std::cout << "Received message: " << message << std::endl;
        }
    }

    consumer->close();
    return 0;
}
```
4 Build and run
- Start the containers

```shell
# in the directory containing docker-compose.yml
docker-compose up -d
```

- Install dependencies and build

```shell
docker exec -it cpp_dev /bin/bash
# run the following inside the container
apt update && apt install -y g++ cmake pkg-config librdkafka-dev librdkafka++1
mkdir -p build && cd build
cmake ..
make
```

- Create the Kafka topic

```shell
# on the host
./create_topic.sh
```

- Run the consumer and producer inside the cpp_dev container

```shell
./consumer &
./producer
```
Output

```shell
/home/dev/code/build# ./consumer &
[1] 4069
/home/dev/code/build# Consuming messages from topic test_topic
/home/dev/code/build# ./producer
Message sent successfully
Received message: Hello from C++ Kafka Producer!
```
5 Features and API notes
5.1 Producer interface: `producer.cpp`
Function: sends a message to the specified topic (e.g. `test_topic`).
Key call flow

```cpp
RdKafka::Conf::create(...)       // create the configuration object
conf->set(...)                   // set brokers and other parameters
RdKafka::Producer::create(...)   // create the Producer instance
producer->produce(...)           // send the message
```
Parameter description

| Parameter | Description |
| --- | --- |
| topic | target topic name |
| partition | `RdKafka::Topic::PARTITION_UA` lets Kafka assign the partition automatically |
| message flags | usually `RK_MSG_COPY` |
| payload | message data (`char*`) |
| payload length | message length (`size_t`) |
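Note that `produce()` is asynchronous: a successful return code only means the message was queued in librdkafka's local buffer, not that the broker received it. One way to confirm per-message delivery is a delivery-report callback. The sketch below uses the real `RdKafka::DeliveryReportCb` interface, but the class name and the registration shown in comments are illustrative, not part of this project's code.

```cpp
#include <librdkafka/rdkafkacpp.h>
#include <iostream>

// Invoked once per produced message after the broker acknowledges
// (or permanently fails) it.
class ExampleDeliveryCb : public RdKafka::DeliveryReportCb {
    void dr_cb(RdKafka::Message &msg) override {
        if (msg.err() != RdKafka::ERR_NO_ERROR) {
            std::cerr << "Delivery failed: " << msg.errstr() << std::endl;
        } else {
            std::cout << "Delivered to partition " << msg.partition()
                      << " at offset " << msg.offset() << std::endl;
        }
    }
};

// Registration, before creating the producer:
//   ExampleDeliveryCb dr_cb;
//   conf->set("dr_cb", &dr_cb, errstr);
//
// The callback only fires from producer->poll(...) or producer->flush(...),
// so a long-running producer should call producer->poll(0) in its send loop.
```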
5.2 Consumer interface: `consumer.cpp`
Function: subscribes to the specified topic and consumes its messages.
Key call flow

```cpp
RdKafka::Conf::create(...)            // create the global configuration
conf->set(...)                        // set group.id and other parameters
RdKafka::KafkaConsumer::create(...)   // create the KafkaConsumer instance
consumer->subscribe(...)              // subscribe to the topic
consumer->consume(...)                // poll for messages
```
Notes on the consume loop
- `msg->payload()` returns `void*`; cast it to `char*` before building the string to print
- Use `msg->err()` to check whether a message was actually received
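Not every non-zero `msg->err()` is a real failure. Here is a sketch of how the consume loop in `consumer.cpp` could distinguish the common codes (the error enums are real librdkafka values; the exact handling policy is up to the application):

```cpp
// Drop-in replacement for the body of the while (running) loop:
std::unique_ptr<RdKafka::Message> msg(consumer->consume(1000));
switch (msg->err()) {
case RdKafka::ERR_NO_ERROR:        // a real message arrived
    std::cout << "Received message: "
              << std::string(static_cast<const char *>(msg->payload()), msg->len())
              << std::endl;
    break;
case RdKafka::ERR__TIMED_OUT:      // nothing within the 1000 ms timeout; just poll again
    break;
case RdKafka::ERR__PARTITION_EOF:  // reached the current end of a partition; not fatal
    break;
default:                           // anything else is worth logging
    std::cerr << "Consume error: " << msg->errstr() << std::endl;
    break;
}
```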
5.3 Engineering takeaways

| Technical point | Description |
| --- | --- |
| Containerized deployment | no local Kafka install needed; the test environment starts quickly |
| Kafka consumption model | pull-based consumption with `KafkaConsumer`, easy to reason about |
| Modular CMake | easy to extend with more modules (e.g. logger, metrics) |
| Middleware template | can serve as a prototype for logging systems, message queues, schedulers, and other middleware |
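As a first step toward that middleware prototype, the raw producer calls could be wrapped in an RAII class. This is a minimal sketch under the assumption of librdkafka >= 0.11, which provides the topic-name overload of `produce()`; the `SimpleProducer` name and interface are illustrative, not part of this project.

```cpp
#include <librdkafka/rdkafkacpp.h>
#include <memory>
#include <stdexcept>
#include <string>

// Illustrative RAII wrapper: owns the producer handle and flushes
// any in-flight messages on destruction.
class SimpleProducer {
public:
    explicit SimpleProducer(const std::string &brokers) {
        std::string errstr;
        std::unique_ptr<RdKafka::Conf> conf(
            RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL));
        conf->set("bootstrap.servers", brokers, errstr);
        producer_.reset(RdKafka::Producer::create(conf.get(), errstr));
        if (!producer_) throw std::runtime_error("create producer: " + errstr);
    }

    ~SimpleProducer() {
        if (producer_) producer_->flush(3000);  // wait up to 3 s for delivery
    }

    // Queue one message; uses the topic-name overload of produce().
    RdKafka::ErrorCode send(const std::string &topic, const std::string &payload) {
        return producer_->produce(
            topic, RdKafka::Topic::PARTITION_UA,
            RdKafka::Producer::RK_MSG_COPY,
            const_cast<char *>(payload.data()), payload.size(),
            nullptr, 0,   // no key
            0,            // let the broker assign the timestamp
            nullptr);     // no opaque pointer
    }

private:
    std::unique_ptr<RdKafka::Producer> producer_;
};
```

A caller would then only write `SimpleProducer p("kafka:9092"); p.send("test_topic", "hello");` and let the destructor take care of flushing.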