Deploying ROS2 and ORB-SLAM3 on RK3588 for Autonomous Navigation of a Gazebo Vehicle: Environment Setup
- Project Background and Goals
- Why These Technologies?
- Environment Preparation and Verification
- Creating a Development Container (on the desktop system)
- Installing ROS2 Humble in the Container
- Creating a Test Environment
- Creating a ROS Node (Ball Motion Simulation)
- Visualizing Data with RViz2
- Deploying ORB-SLAM3
- Build and Install Process
- Changes Needed to Publish the Camera Pose
- Testing the SLAM System
- How to Obtain Pose Information
- Gazebo Simulation Environment
- Installing the TurtleBot3 Simulation Packages
- Running the Simulation Environment
- Full System Integration [TODO]
- References
Project Background and Goals
In robotics development, Simultaneous Localization and Mapping (SLAM) is the key technology that gives a robot the ability to move autonomously. This project shows how to deploy a ROS2 (Robot Operating System 2) environment on an RK3588 board, integrate the state-of-the-art ORB-SLAM3 visual SLAM algorithm, and simulate vehicle motion in the Gazebo physics simulator.
Why These Technologies?
- RK3588: a powerful ARM processor with enough compute to run SLAM algorithms
- ROS2 Humble: the standard framework for robot development, providing communication middleware and tooling
- ORB-SLAM3: one of the most advanced visual SLAM systems available, supporting monocular, stereo, and RGB-D cameras
- Gazebo: a high-fidelity physics simulator for validating algorithms before real-world deployment
Environment Preparation and Verification
Creating a Development Container (on the desktop system)
cat> run_ros.sh <<-'EOF'
#!/bin/bash
image_name="ubuntu:22.04"
echo $image_name
container_name="ros2_humble"
# Allow containers to access the X11 display
xhost +
if [ -n "$(docker ps -a -q -f name=^/${container_name}$)" ]; then
    echo "Container '$container_name' already exists."
else
    echo "Container '$container_name' does not exist. Creating..."
    docker run -id --privileged --net=host \
        -v /etc/localtime:/etc/localtime:ro \
        -v $PWD:/home -e DISPLAY=$DISPLAY -w /home \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        -e GDK_SCALE -e GDK_DPI_SCALE \
        --name $container_name --hostname=$container_name $image_name /bin/bash
fi
docker start $container_name
docker exec -ti $container_name bash
EOF
bash run_ros.sh
Advantages of containerized development:
A Docker container provides an isolated development environment and keeps the host system clean. The --privileged and --net=host flags grant the container the hardware access it needs, and mounting -v /tmp/.X11-unix lets GUI applications inside the container render on the host display.
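Before running the script, it can help to verify the two X11 prerequisites that the docker run flags rely on: a DISPLAY variable and the host's X11 socket directory. A minimal pre-flight sketch, assuming a POSIX shell on the host (this helper and its variable names are illustrative, not part of run_ros.sh):

```shell
#!/bin/sh
# Pre-flight sketch: GUI forwarding needs DISPLAY set on the host
# and the X11 socket directory that run_ros.sh bind-mounts.
display="${DISPLAY:-:0}"      # fall back to :0 if DISPLAY is unset
socket_dir="/tmp/.X11-unix"   # the directory mounted into the container
if [ -d "$socket_dir" ]; then
  echo "X11 socket dir found: $socket_dir"
else
  echo "Missing $socket_dir - GUI apps inside the container will fail"
fi
echo "Will forward DISPLAY=$display"
```

If the socket directory is missing (e.g. on a headless host), xclock inside the container will fail even though the container itself starts fine.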
Installing ROS2 Humble in the Container
apt update
apt install x11-apps -y
xclock  # test GUI display
apt install lsb-core -y
apt install locales -y
locale-gen en_US en_US.UTF-8
update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
export LANG=en_US.UTF-8
apt install curl gnupg2 lsb-release vim -y
curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(source /etc/os-release && echo $UBUNTU_CODENAME) main" | tee /etc/apt/sources.list.d/ros2.list > /dev/null
apt update
apt install ros-humble-desktop -y
apt install python3-argcomplete -y
apt install ros-dev-tools -y
apt install v4l-utils -y
# Install development tools
apt install python3-rosdep python3-rosinstall python3-rosinstall-generator python3-wstool build-essential -y
apt install ros-humble-vision-opencv ros-humble-message-filters -y
apt install ros-humble-tf2-geometry-msgs -y
# Install Python dependencies
apt install python3-pip -y
apt install ros-humble-sensor-msgs-py ros-humble-cv-bridge -y
apt install libepoxy-dev -y
# Initialize the ROS dependency system
rosdep init
rosdep update
source /opt/ros/humble/setup.bash
echo "127.0.0.1 ros2_humble" >> /etc/hosts
Why are these steps needed?
ROS2 depends on a specific system environment, including correct locale settings and software source configuration. These commands make sure the system meets ROS2's basic requirements and install the necessary communication tools and development packages.
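Note that `source /opt/ros/humble/setup.bash` only affects the current shell. A common convention (an assumption here, not part of the original steps) is to persist it in the container's ~/.bashrc, optionally together with a DDS domain ID:

```shell
# ~/.bashrc additions inside the container
source /opt/ros/humble/setup.bash   # put ROS2 tools and libraries in every shell
export ROS_DOMAIN_ID=30             # optional: isolate DDS traffic (typically 0-101)
```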
Creating a Test Environment
Creating a ROS Node (Ball Motion Simulation)
mkdir -p /home/ROS2
cd /home/ROS2
cat > v2x_node_top_view.py <<-'EOF'
import rclpy
from rclpy.node import Node
import numpy as np
from sensor_msgs.msg import PointCloud2, Image as ImageRos
from visualization_msgs.msg import Marker, MarkerArray
import math
import cv2
from sensor_msgs.msg import Image, PointCloud2
from std_msgs.msg import Header
from sensor_msgs_py import point_cloud2
import builtin_interfaces.msg
import sys

def imgmsg_to_cv2(img_msg):
    if img_msg.encoding != "bgr8":
        # The original used rospy.logerr, which does not exist in ROS2; fail loudly instead.
        raise ValueError("imgmsg_to_cv2 is hardcoded to the 'bgr8' encoding")
    dtype = np.dtype("uint8")  # Hardcode to 8 bits...
    dtype = dtype.newbyteorder('>' if img_msg.is_bigendian else '<')
    # Three channels of data. OpenCV works with bgr natively, so no channel reorder needed.
    image_opencv = np.ndarray(shape=(img_msg.height, img_msg.width, 3),
                              dtype=dtype, buffer=img_msg.data)
    # If the byte order differs between the message and the system, swap it.
    if img_msg.is_bigendian == (sys.byteorder == 'little'):
        image_opencv = image_opencv.byteswap().newbyteorder()
    return image_opencv

def cv2_to_imgmsg(cv_image):
    img_msg = Image()
    img_msg.height = cv_image.shape[0]
    img_msg.width = cv_image.shape[1]
    img_msg.encoding = "bgr8"
    img_msg.is_bigendian = 0
    img_msg.data = cv_image.tobytes()  # tostring() is deprecated in recent numpy
    img_msg.step = len(img_msg.data) // img_msg.height  # integer division
    return img_msg

# Generate a cylindrical point cloud
def generate_cylinder(center_x, center_y, radius, height, num_points=500):
    points = []
    for _ in range(num_points):
        # Random angle and height
        theta = np.random.uniform(0, 2 * np.pi)
        h = np.random.uniform(0, height)
        # Point coordinates on the cylinder surface
        x = center_x + radius * np.cos(theta)
        y = center_y + radius * np.sin(theta)
        z = h
        # Append to the cloud (x, y, z + intensity)
        points.append([x, y, z, 0.0])
    return np.array(points, dtype=np.float32)

# Create visualization Markers
def create_cylinder_marker(center_x, center_y, height, radius, id_num, frame_id="map", stamp=None):
    marker = Marker()
    marker.header.frame_id = frame_id
    if stamp:
        marker.header.stamp = stamp
    marker.ns = "cylinders"
    marker.id = id_num
    marker.type = Marker.CYLINDER
    marker.action = Marker.ADD
    # Position and size
    marker.pose.position.x = center_x
    marker.pose.position.y = center_y
    marker.pose.position.z = height / 2  # center sits at half height
    marker.pose.orientation.w = 1.0
    marker.scale.x = radius * 2
    marker.scale.y = radius * 2
    marker.scale.z = height
    # Color (RGBA)
    marker.color.r = 0.0
    marker.color.g = 1.0
    marker.color.b = 0.0
    marker.color.a = 0.5  # semi-transparent
    return marker

def create_ball_marker(x, y, z, id_num, frame_id="map", stamp=None):
    marker = Marker()
    marker.header.frame_id = frame_id
    if stamp:
        marker.header.stamp = stamp
    marker.ns = "ball"
    marker.id = id_num
    marker.type = Marker.SPHERE
    marker.action = Marker.ADD
    # Position and size
    marker.pose.position.x = x
    marker.pose.position.y = y
    marker.pose.position.z = z
    marker.pose.orientation.w = 1.0
    marker.scale.x = 0.2  # diameter
    marker.scale.y = 0.2
    marker.scale.z = 0.2
    # Color (RGBA)
    marker.color.r = 1.0
    marker.color.g = 0.0
    marker.color.b = 0.0
    marker.color.a = 1.0
    return marker

# Create the top-view camera image
def create_top_view_image(cylinder1, cylinder2, ball_pos, ball_history, img_width=640, img_height=480):
    # White background
    image = np.ones((img_height, img_width, 3), dtype=np.uint8) * 255
    # Coordinate-mapping parameters
    scale = 100  # scaling factor
    offset_x = img_width // 2
    offset_y = img_height // 2

    # Project 3D coordinates to 2D image coordinates (top view)
    def project_to_2d(x, y):
        u = int(offset_x + x * scale)
        v = int(offset_y + y * scale)
        return u, v

    # Draw the coordinate axes
    cv2.line(image, (offset_x, 0), (offset_x, img_height), (200, 200, 200), 1)
    cv2.line(image, (0, offset_y), (img_width, offset_y), (200, 200, 200), 1)
    cv2.putText(image, "X", (img_width - 20, offset_y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0