
The observations part of the MDP

Table of Contents

  • 1. Observations in isaaclab
    • 1.1 Root-state observations
      • base_pos_z
      • base_lin_vel (use)
      • base_ang_vel (use)
      • projected_gravity (use)
      • root_pos_w
      • root_quat_w
      • root_lin_vel_w
      • root_ang_vel_w
    • 1.2 Joint-state observations
      • joint_pos
      • joint_pos_rel (use)
      • joint_pos_limit_normalized
      • joint_vel
      • joint_vel_rel (use)
      • note: comparing joint_pos and joint_pos_rel
    • 1.3 Action observations
      • last_action (use)
    • 1.4 Command observations
      • generated_commands (use)
    • 1.5 Sensor observations
      • height_scan (use)
      • body_incoming_wrench
      • imu_orientation
      • imu_ang_vel
      • imu_lin_acc
      • image
      • image_features
  • 2. Observations in robot_lab
    • joint_pos_rel_without_wheel
    • phase
      • 1. How periodic motion encoding works
      • 2. Neural-network-friendly design
      • 3. Typical applications
      • Combining with IMU data
      • In depth: phase wrap-around and smooth transitions

observations.py is the core module of the observation system in the Isaac Lab project, located under source/isaaclab/isaaclab/envs/mdp/. It provides a library of observation functions that extract state information from the simulation and give the reinforcement-learning agent its perception of the environment.

1. Observations in isaaclab

1.1 Root-state observations

Navigation tasks: prefer world-frame quantities (the _w functions).
Body-level control: prefer base-frame quantities (the _b functions).
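As a concrete illustration of how these functions are consumed, the snippet below is a minimal sketch of an observation group for body-level control, loosely following the pattern used in Isaac Lab's locomotion environment configs. The import paths and the class name PolicyCfg are assumptions for illustration, not an exact copy of any particular config file.

from isaaclab.managers import ObservationGroupCfg as ObsGroup
from isaaclab.managers import ObservationTermCfg as ObsTerm
from isaaclab.utils import configclass
import isaaclab.envs.mdp as mdp

@configclass
class PolicyCfg(ObsGroup):
    """Observations for body-level control: base-frame quantities plus joint state."""

    # base-frame root velocities and the gravity direction (the _b functions)
    base_lin_vel = ObsTerm(func=mdp.base_lin_vel)
    base_ang_vel = ObsTerm(func=mdp.base_ang_vel)
    projected_gravity = ObsTerm(func=mdp.projected_gravity)
    # joint state relative to the default pose
    joint_pos = ObsTerm(func=mdp.joint_pos_rel)
    joint_vel = ObsTerm(func=mdp.joint_vel_rel)
    # action applied at the previous step
    actions = ObsTerm(func=mdp.last_action)

    def __post_init__(self):
        # concatenate all terms into a single flat observation vector
        self.concatenate_terms = True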

base_pos_z


def base_pos_z(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")) -> torch.Tensor:
    """Root height in the simulation world frame."""
    # extract the used quantities (to enable type-hinting)
    asset: Articulation = env.scene[asset_cfg.name]
    return asset.data.root_pos_w[:, 2].unsqueeze(-1)

base_lin_vel (use)

How it is used:
Velocity feedback: provides the robot's current linear velocity (x, y, z)
Motion-state awareness: lets the policy network know how the robot is actually moving
Velocity tracking: compared against the commanded velocity to close the control loop

def base_lin_vel(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")) -> torch.Tensor:
    """Root linear velocity in the asset's root frame."""
    # extract the used quantities (to enable type-hinting)
    asset: RigidObject = env.scene[asset_cfg.name]
    return asset.data.root_lin_vel_b

base_ang_vel (use)

How it is used:
Rotation control: provides the robot's angular velocity (roll, pitch, yaw rates)
Attitude stabilization: detects unwanted rotational motion
Steering: enables precise heading control

def base_ang_vel(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")) -> torch.Tensor:
    """Root angular velocity in the asset's root frame."""
    # extract the used quantities (to enable type-hinting)
    asset: RigidObject = env.scene[asset_cfg.name]
    return asset.data.root_ang_vel_b

projected_gravity (use)

How it is used:
Attitude sensing: provides the robot's tilt relative to gravity
Balance control: detects whether the robot is staying upright
Terrain adaptation: adjusts posture on slopes

def projected_gravity(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")) -> torch.Tensor:
    """Gravity projection on the asset's root frame."""
    # extract the used quantities (to enable type-hinting)
    asset: RigidObject = env.scene[asset_cfg.name]
    return asset.data.projected_gravity_b

root_pos_w

def root_pos_w(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")) -> torch.Tensor:
    """Asset root position in the environment frame."""
    # extract the used quantities (to enable type-hinting)
    asset: RigidObject = env.scene[asset_cfg.name]
    return asset.data.root_pos_w - env.scene.env_origins

root_quat_w

def root_quat_w(
    env: ManagerBasedEnv, make_quat_unique: bool = False, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")
) -> torch.Tensor:
    """Asset root orientation (w, x, y, z) in the environment frame.

    If :attr:`make_quat_unique` is True, then the returned quaternion is made unique by ensuring
    the quaternion has a non-negative real component. This is because both ``q`` and ``-q`` represent
    the same orientation.
    """
    # extract the used quantities (to enable type-hinting)
    asset: RigidObject = env.scene[asset_cfg.name]
    quat = asset.data.root_quat_w
    # make the quaternion real-part positive if configured
    return math_utils.quat_unique(quat) if make_quat_unique else quat

root_lin_vel_w

def root_lin_vel_w(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")) -> torch.Tensor:
    """Asset root linear velocity in the environment frame."""
    # extract the used quantities (to enable type-hinting)
    asset: RigidObject = env.scene[asset_cfg.name]
    return asset.data.root_lin_vel_w

root_ang_vel_w

def root_ang_vel_w(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")) -> torch.Tensor:
    """Asset root angular velocity in the environment frame."""
    # extract the used quantities (to enable type-hinting)
    asset: RigidObject = env.scene[asset_cfg.name]
    return asset.data.root_ang_vel_w

1.2 Joint-state observations


joint_pos

def joint_pos(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")) -> torch.Tensor:
    """The joint positions of the asset.

    Note: Only the joints configured in :attr:`asset_cfg.joint_ids` will have their positions returned.
    """
    # extract the used quantities (to enable type-hinting)
    asset: Articulation = env.scene[asset_cfg.name]
    return asset.data.joint_pos[:, asset_cfg.joint_ids]

joint_pos_rel (use)

How it is used:
Joint-state awareness: knows the current position of each joint
Kinematic constraints: helps keep joints away from their physical limits
Gait coordination: coordinates the motion of multiple joints

def joint_pos_rel(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")) -> torch.Tensor:
    """The joint positions of the asset w.r.t. the default joint positions.

    Note: Only the joints configured in :attr:`asset_cfg.joint_ids` will have their positions returned.
    """
    # extract the used quantities (to enable type-hinting)
    asset: Articulation = env.scene[asset_cfg.name]
    return asset.data.joint_pos[:, asset_cfg.joint_ids] - asset.data.default_joint_pos[:, asset_cfg.joint_ids]

joint_pos_limit_normalized

def joint_pos_limit_normalized(
    env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")
) -> torch.Tensor:
    """The joint positions of the asset normalized with the asset's joint limits.

    Note: Only the joints configured in :attr:`asset_cfg.joint_ids` will have their normalized positions returned.
    """
    # extract the used quantities (to enable type-hinting)
    asset: Articulation = env.scene[asset_cfg.name]
    return math_utils.scale_transform(
        asset.data.joint_pos[:, asset_cfg.joint_ids],
        asset.data.soft_joint_pos_limits[:, asset_cfg.joint_ids, 0],
        asset.data.soft_joint_pos_limits[:, asset_cfg.joint_ids, 1],
    )

joint_vel

def joint_vel(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")):
    """The joint velocities of the asset.

    Note: Only the joints configured in :attr:`asset_cfg.joint_ids` will have their velocities returned.
    """
    # extract the used quantities (to enable type-hinting)
    asset: Articulation = env.scene[asset_cfg.name]
    return asset.data.joint_vel[:, asset_cfg.joint_ids]

joint_vel_rel (use)

How it is used:
Dynamic control: captures the dynamics of joint motion
Smooth motion: helps avoid sudden jumps in joint velocity
Energy efficiency: supports controlling how efficiently joints move

def joint_vel_rel(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")):
    """The joint velocities of the asset w.r.t. the default joint velocities.

    Note: Only the joints configured in :attr:`asset_cfg.joint_ids` will have their velocities returned.
    """
    # extract the used quantities (to enable type-hinting)
    asset: Articulation = env.scene[asset_cfg.name]
    return asset.data.joint_vel[:, asset_cfg.joint_ids] - asset.data.default_joint_vel[:, asset_cfg.joint_ids]

note: comparing joint_pos and joint_pos_rel

Absolute vs. relative: joint_pos returns the absolute joint positions, joint_pos_rel returns the offset from the default positions
Reference: the relative values use the robot's default pose as the zero point
In practice, joint_pos_rel is the one mainly used
Effect on training:
Numerical stability: relative values are usually smaller and numerically better behaved
Learning efficiency: neural networks learn small-range variations more easily
Generalization: relative positions reflect the motion pattern rather than the absolute configuration
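A toy numerical comparison (the values below are made up for illustration, not taken from any robot): with a default hip position of 0.80 rad and a current position of 0.85 rad, joint_pos reports the absolute value while joint_pos_rel reports only the small, zero-centered offset that the policy actually has to reason about.

import torch

default_joint_pos = torch.tensor([[0.80, -1.50]])  # hypothetical default pose (rad)
current_joint_pos = torch.tensor([[0.85, -1.42]])  # hypothetical current positions (rad)

# what joint_pos returns (absolute) vs. what joint_pos_rel returns (offset from default)
print(current_joint_pos)                      # tensor([[ 0.8500, -1.4200]])
print(current_joint_pos - default_joint_pos)  # tensor([[0.0500, 0.0800]])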

1.3 Action observations

last_action (use)

Returns the action vector from the previous time step.

def last_action(env: ManagerBasedEnv, action_name: str | None = None) -> torch.Tensor:
    """The last input action to the environment.

    The name of the action term for which the action is required. If None, the
    entire action tensor is returned.
    """
    if action_name is None:
        return env.action_manager.action
    else:
        return env.action_manager.get_term(action_name).raw_actions

1.4 Command observations

generated_commands (use)

How it is used:
Goal guidance: tells the policy network the desired motion target
Task understanding: provides high-level motion intent
Error computation: compared against the current state to compute the control error

def generated_commands(env: ManagerBasedRLEnv, command_name: str) -> torch.Tensor:
    """The generated command from the command term in the command manager with the given name."""
    return env.command_manager.get_command(command_name)
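A minimal usage sketch inside an observation group (same ObsTerm import as the group sketched in section 1.1); the command term name "base_velocity" is an assumption and must match a command actually defined in the environment's command manager:

velocity_commands = ObsTerm(
    func=mdp.generated_commands,
    params={"command_name": "base_velocity"},  # assumed name of a velocity command term
)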

1.5 Sensor observations

Terrain sensing: height_scan - height of the surrounding terrain
Force sensing: body_incoming_wrench - contact forces and torques
Inertial sensing: the imu_* functions - orientation, angular velocity, acceleration
Vision: image - image data of several types
High-level vision: image_features - features extracted by deep-learning encoders

height_scan (use)

Terrain sensing: provides the height of the terrain ahead of the robot
Predictive control: adjusts the gait in advance for upcoming terrain changes
Obstacle avoidance: detects and avoids obstacles

def height_scan(env: ManagerBasedEnv, sensor_cfg: SceneEntityCfg, offset: float = 0.33) -> torch.Tensor:
    """Height scan from the given sensor w.r.t. the sensor's frame.

    The provided offset (0.33 by default here) is subtracted from the returned values.
    """
    # extract the used quantities (to enable type-hinting)
    sensor: RayCaster = env.scene.sensors[sensor_cfg.name]
    # height scan: height = sensor_height - hit_point_z - offset
    return sensor.data.pos_w[:, 2].unsqueeze(1) - sensor.data.ray_hits_w[..., 2] - offset
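A hedged configuration sketch (same ObsTerm / SceneEntityCfg imports as before), assuming the scene defines a ray-caster sensor named "height_scanner"; the clipping range is illustrative:

height_scan = ObsTerm(
    func=mdp.height_scan,
    params={"sensor_cfg": SceneEntityCfg("height_scanner")},  # assumed sensor name
    clip=(-1.0, 1.0),  # keep terrain heights in a bounded range for the policy
)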

body_incoming_wrench

def body_incoming_wrench(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg) -> torch.Tensor:
    """Incoming spatial wrench on bodies of an articulation in the simulation world frame.

    This is the 6-D wrench (force and torque) applied to the body link by the incoming joint force.
    """
    # extract the used quantities (to enable type-hinting)
    asset: Articulation = env.scene[asset_cfg.name]
    # obtain the link incoming forces in world frame
    link_incoming_forces = asset.root_physx_view.get_link_incoming_joint_force()[:, asset_cfg.body_ids]
    return link_incoming_forces.view(env.num_envs, -1)

imu_orientation

def imu_orientation(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("imu")) -> torch.Tensor:
    """Imu sensor orientation in the simulation world frame.

    Args:
        env: The environment.
        asset_cfg: The SceneEntity associated with an IMU sensor. Defaults to SceneEntityCfg("imu").

    Returns:
        Orientation in the world frame in (w, x, y, z) quaternion form. Shape is (num_envs, 4).
    """
    # extract the used quantities (to enable type-hinting)
    asset: Imu = env.scene[asset_cfg.name]
    # return the orientation quaternion
    return asset.data.quat_w

imu_ang_vel

def imu_ang_vel(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("imu")) -> torch.Tensor:
    """Imu sensor angular velocity w.r.t. environment origin expressed in the sensor frame.

    Args:
        env: The environment.
        asset_cfg: The SceneEntity associated with an IMU sensor. Defaults to SceneEntityCfg("imu").

    Returns:
        The angular velocity (rad/s) in the sensor frame. Shape is (num_envs, 3).
    """
    # extract the used quantities (to enable type-hinting)
    asset: Imu = env.scene[asset_cfg.name]
    # return the angular velocity
    return asset.data.ang_vel_b

imu_lin_acc

def imu_lin_acc(env: ManagerBasedEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("imu")) -> torch.Tensor:
    """Imu sensor linear acceleration w.r.t. the environment origin expressed in sensor frame.

    Args:
        env: The environment.
        asset_cfg: The SceneEntity associated with an IMU sensor. Defaults to SceneEntityCfg("imu").

    Returns:
        The linear acceleration (m/s^2) in the sensor frame. Shape is (num_envs, 3).
    """
    asset: Imu = env.scene[asset_cfg.name]
    return asset.data.lin_acc_b

image

def image(
    env: ManagerBasedEnv,
    sensor_cfg: SceneEntityCfg = SceneEntityCfg("tiled_camera"),
    data_type: str = "rgb",
    convert_perspective_to_orthogonal: bool = False,
    normalize: bool = True,
) -> torch.Tensor:
    """Images of a specific datatype from the camera sensor.

    If the flag :attr:`normalize` is True, post-processing of the images is performed based on their
    data-types:

    - "rgb": Scales the image to (0, 1) and subtracts the mean of the current image batch.
    - "depth" or "distance_to_camera" or "distance_to_plane": Replaces infinity values with zero.

    Args:
        env: The environment the cameras are placed within.
        sensor_cfg: The desired sensor to read from. Defaults to SceneEntityCfg("tiled_camera").
        data_type: The data type to pull from the desired camera. Defaults to "rgb".
        convert_perspective_to_orthogonal: Whether to orthogonalize perspective depth images.
            This is used only when the data type is "distance_to_camera". Defaults to False.
        normalize: Whether to normalize the images. This depends on the selected data type.
            Defaults to True.

    Returns:
        The images produced at the last time-step.
    """
    # extract the used quantities (to enable type-hinting)
    sensor: TiledCamera | Camera | RayCasterCamera = env.scene.sensors[sensor_cfg.name]
    # obtain the input image
    images = sensor.data.output[data_type]
    # depth image conversion
    if (data_type == "distance_to_camera") and convert_perspective_to_orthogonal:
        images = math_utils.orthogonalize_perspective_depth(images, sensor.data.intrinsic_matrices)
    # rgb/depth image normalization
    if normalize:
        if data_type == "rgb":
            images = images.float() / 255.0
            mean_tensor = torch.mean(images, dim=(1, 2), keepdim=True)
            images -= mean_tensor
        elif "distance_to" in data_type or "depth" in data_type:
            images[images == float("inf")] = 0
    return images.clone()

image_features

The image_features class is Isaac Lab's tool for visual perception: it extracts high-quality image features through pretrained models, giving the robot rich visual understanding for complex tasks such as navigation, grasping, and recognition.

class image_features(ManagerTermBase):
    """Extracted image features from a pre-trained frozen encoder.

    This term uses models from the model zoo in PyTorch and extracts features from the images.

    It calls the :func:`image` function to get the images and then processes them using the model zoo.

    A user can provide their own model zoo configuration to use different models for feature extraction.
    The model zoo configuration should be a dictionary that maps different model names to a dictionary
    that defines the model, preprocess and inference functions. The dictionary should have the following
    entries:

    - "model": A callable that returns the model when invoked without arguments.
    - "reset": A callable that resets the model. This is useful when the model has a state that needs to be reset.
    - "inference": A callable that, when given the model and the images, returns the extracted features.

    If the model zoo configuration is not provided, the default model zoo configurations are used. The default
    model zoo configurations include the models from Theia :cite:`shang2024theia` and ResNet :cite:`he2016deep`.
    These models are loaded from `Hugging-Face transformers <https://huggingface.co/docs/transformers/index>`_ and
    `PyTorch torchvision <https://pytorch.org/vision/stable/models.html>`_ respectively.

    Args:
        sensor_cfg: The sensor configuration to poll. Defaults to SceneEntityCfg("tiled_camera").
        data_type: The sensor data type. Defaults to "rgb".
        convert_perspective_to_orthogonal: Whether to orthogonalize perspective depth images.
            This is used only when the data type is "distance_to_camera". Defaults to False.
        model_zoo_cfg: A user-defined dictionary that maps different model names to their respective configurations.
            Defaults to None. If None, the default model zoo configurations are used.
        model_name: The name of the model to use for inference. Defaults to "resnet18".
        model_device: The device to store and infer the model on. This is useful when offloading the computation
            from the environment simulation device. Defaults to the environment device.
        inference_kwargs: Additional keyword arguments to pass to the inference function. Defaults to None,
            which means no additional arguments are passed.

    Returns:
        The extracted features tensor. Shape is (num_envs, feature_dim).

    Raises:
        ValueError: When the model name is not found in the provided model zoo configuration.
        ValueError: When the model name is not found in the default model zoo configuration.
    """

    def __init__(self, cfg: ObservationTermCfg, env: ManagerBasedEnv):
        # initialize the base class
        super().__init__(cfg, env)

        # extract parameters from the configuration
        self.model_zoo_cfg: dict = cfg.params.get("model_zoo_cfg")  # type: ignore
        self.model_name: str = cfg.params.get("model_name", "resnet18")  # type: ignore
        self.model_device: str = cfg.params.get("model_device", env.device)  # type: ignore

        # List of Theia models - These are configured through `_prepare_theia_transformer_model` function
        default_theia_models = [
            "theia-tiny-patch16-224-cddsv",
            "theia-tiny-patch16-224-cdiv",
            "theia-small-patch16-224-cdiv",
            "theia-base-patch16-224-cdiv",
            "theia-small-patch16-224-cddsv",
            "theia-base-patch16-224-cddsv",
        ]
        # List of ResNet models - These are configured through `_prepare_resnet_model` function
        default_resnet_models = ["resnet18", "resnet34", "resnet50", "resnet101"]

        # Check if model name is specified in the model zoo configuration
        if self.model_zoo_cfg is not None and self.model_name not in self.model_zoo_cfg:
            raise ValueError(
                f"Model name '{self.model_name}' not found in the provided model zoo configuration."
                " Please add the model to the model zoo configuration or use a different model name."
                f" Available models in the provided list: {list(self.model_zoo_cfg.keys())}."
                "\nHint: If you want to use a default model, consider using one of the following models:"
                f" {default_theia_models + default_resnet_models}. In this case, you can remove the"
                " 'model_zoo_cfg' parameter from the observation term configuration."
            )
        if self.model_zoo_cfg is None:
            if self.model_name in default_theia_models:
                model_config = self._prepare_theia_transformer_model(self.model_name, self.model_device)
            elif self.model_name in default_resnet_models:
                model_config = self._prepare_resnet_model(self.model_name, self.model_device)
            else:
                raise ValueError(
                    f"Model name '{self.model_name}' not found in the default model zoo configuration."
                    f" Available models: {default_theia_models + default_resnet_models}."
                )
        else:
            model_config = self.model_zoo_cfg[self.model_name]

        # Retrieve the model, preprocess and inference functions
        self._model = model_config["model"]()
        self._reset_fn = model_config.get("reset")
        self._inference_fn = model_config["inference"]

    def reset(self, env_ids: torch.Tensor | None = None):
        # reset the model if a reset function is provided
        # this might be useful when the model has a state that needs to be reset
        # for example: video transformers
        if self._reset_fn is not None:
            self._reset_fn(self._model, env_ids)

    def __call__(
        self,
        env: ManagerBasedEnv,
        sensor_cfg: SceneEntityCfg = SceneEntityCfg("tiled_camera"),
        data_type: str = "rgb",
        convert_perspective_to_orthogonal: bool = False,
        model_zoo_cfg: dict | None = None,
        model_name: str = "resnet18",
        model_device: str | None = None,
        inference_kwargs: dict | None = None,
    ) -> torch.Tensor:
        # obtain the images from the sensor
        image_data = image(
            env=env,
            sensor_cfg=sensor_cfg,
            data_type=data_type,
            convert_perspective_to_orthogonal=convert_perspective_to_orthogonal,
            normalize=False,  # we pre-process based on model
        )
        # store the device of the image
        image_device = image_data.device
        # forward the images through the model
        features = self._inference_fn(self._model, image_data, **(inference_kwargs or {}))

        # move the features back to the image device
        return features.detach().to(image_device)

    """
    Helper functions.
    """

    def _prepare_theia_transformer_model(self, model_name: str, model_device: str) -> dict:
        """Prepare the Theia transformer model for inference.

        Args:
            model_name: The name of the Theia transformer model to prepare.
            model_device: The device to store and infer the model on.

        Returns:
            A dictionary containing the model and inference functions.
        """
        from transformers import AutoModel

        def _load_model() -> torch.nn.Module:
            """Load the Theia transformer model."""
            model = AutoModel.from_pretrained(f"theaiinstitute/{model_name}", trust_remote_code=True).eval()
            return model.to(model_device)

        def _inference(model, images: torch.Tensor) -> torch.Tensor:
            """Inference the Theia transformer model.

            Args:
                model: The Theia transformer model.
                images: The preprocessed image tensor. Shape is (num_envs, height, width, channel).

            Returns:
                The extracted features tensor. Shape is (num_envs, feature_dim).
            """
            # Move the image to the model device
            image_proc = images.to(model_device)
            # permute the image to (num_envs, channel, height, width)
            image_proc = image_proc.permute(0, 3, 1, 2).float() / 255.0
            # Normalize the image
            mean = torch.tensor([0.485, 0.456, 0.406], device=model_device).view(1, 3, 1, 1)
            std = torch.tensor([0.229, 0.224, 0.225], device=model_device).view(1, 3, 1, 1)
            image_proc = (image_proc - mean) / std

            # Taken from Transformers; inference converted to be GPU only
            features = model.backbone.model(pixel_values=image_proc, interpolate_pos_encoding=True)
            return features.last_hidden_state[:, 1:]

        # return the model, preprocess and inference functions
        return {"model": _load_model, "inference": _inference}

    def _prepare_resnet_model(self, model_name: str, model_device: str) -> dict:
        """Prepare the ResNet model for inference.

        Args:
            model_name: The name of the ResNet model to prepare.
            model_device: The device to store and infer the model on.

        Returns:
            A dictionary containing the model and inference functions.
        """
        from torchvision import models

        def _load_model() -> torch.nn.Module:
            """Load the ResNet model."""
            # map the model name to the weights
            resnet_weights = {
                "resnet18": "ResNet18_Weights.IMAGENET1K_V1",
                "resnet34": "ResNet34_Weights.IMAGENET1K_V1",
                "resnet50": "ResNet50_Weights.IMAGENET1K_V1",
                "resnet101": "ResNet101_Weights.IMAGENET1K_V1",
            }
            # load the model
            model = getattr(models, model_name)(weights=resnet_weights[model_name]).eval()
            return model.to(model_device)

        def _inference(model, images: torch.Tensor) -> torch.Tensor:
            """Inference the ResNet model.

            Args:
                model: The ResNet model.
                images: The preprocessed image tensor. Shape is (num_envs, channel, height, width).

            Returns:
                The extracted features tensor. Shape is (num_envs, feature_dim).
            """
            # move the image to the model device
            image_proc = images.to(model_device)
            # permute the image to (num_envs, channel, height, width)
            image_proc = image_proc.permute(0, 3, 1, 2).float() / 255.0
            # normalize the image
            mean = torch.tensor([0.485, 0.456, 0.406], device=model_device).view(1, 3, 1, 1)
            std = torch.tensor([0.229, 0.224, 0.225], device=model_device).view(1, 3, 1, 1)
            image_proc = (image_proc - mean) / std
            # forward the image through the model
            return model(image_proc)

        # return the model, preprocess and inference functions
        return {"model": _load_model, "inference": _inference}
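A minimal sketch of pulling these features into an observation group (same imports as the earlier sketches); the camera name "tiled_camera" and the choice of "resnet18" are assumptions:

camera_features = ObsTerm(
    func=mdp.image_features,
    params={
        "sensor_cfg": SceneEntityCfg("tiled_camera"),  # assumed tiled camera in the scene
        "data_type": "rgb",
        "model_name": "resnet18",  # or one of the Theia models listed in the class docstring
    },
)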

2. Observations in robot_lab

joint_pos_rel_without_wheel

Returns the joint positions relative to the default positions, excluding wheel joints.

def joint_pos_rel_without_wheel(
    env: ManagerBasedEnv,
    asset_cfg: SceneEntityCfg = SceneEntityCfg("robot"),
    wheel_asset_cfg: SceneEntityCfg = SceneEntityCfg("robot"),
) -> torch.Tensor:
    """Joint positions relative to the default positions, with wheel joints zeroed out.

    The function computes the offset of the robot's joints from their default positions, but sets the
    offset of the wheel joints to zero. This removes the influence of continuously spinning wheels from
    the observation and keeps the focus on the joints of the robot body.

    Args:
        env: The simulation environment.
        asset_cfg: Configuration of the robot asset. Defaults to "robot".
        wheel_asset_cfg: Configuration selecting the wheel joints. Defaults to "robot".

    Returns:
        torch.Tensor: Relative joint positions of shape (num_envs, num_joints), with the wheel joints set to 0.

    Typical uses:
    - Leg-joint observations for quadrupeds
    - Monitoring the non-driven joints of wheeled robots
    - Avoiding the disturbance that continuously rotating wheels introduce into learning

    Note: the function assumes the wheel joints rotate continuously, so their absolute positions carry
    little meaning for the control policy; zeroing them reduces the complexity of the observation space.
    """
    # extract the target asset (to enable type-hinting)
    asset: Articulation = env.scene[asset_cfg.name]
    # compute joint positions relative to the default positions
    joint_pos_rel = asset.data.joint_pos[:, asset_cfg.joint_ids] - asset.data.default_joint_pos[:, asset_cfg.joint_ids]
    # zero out the relative positions of the wheel joints so that wheel spin does not disturb the observation
    joint_pos_rel[:, wheel_asset_cfg.joint_ids] = 0
    return joint_pos_rel
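A hedged configuration sketch for a wheeled-legged robot (same imports as the earlier sketches); the joint-name pattern ".*_wheel_joint" is hypothetical and has to match the robot's actual wheel joint names:

joint_pos = ObsTerm(
    func=mdp.joint_pos_rel_without_wheel,
    params={
        "asset_cfg": SceneEntityCfg("robot"),
        "wheel_asset_cfg": SceneEntityCfg("robot", joint_names=[".*_wheel_joint"]),  # hypothetical pattern
    },
)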

phase

The phase function is a periodic encoder of the motion phase: it provides timing information for periodic motions such as gaits.

def phase(env: ManagerBasedRLEnv, cycle_time: float) -> torch.Tensor:
    """Compute a periodic encoding of the motion phase.

    The function derives the motion phase from the number of steps in the current episode and encodes it
    as a sine/cosine pair. This periodic encoding helps the robot learn periodic motion patterns such as
    gait cycles.

    Args:
        env: The reinforcement-learning environment.
        cycle_time: Duration of one full cycle in seconds.

    Returns:
        torch.Tensor: Phase encoding of shape (num_envs, 2);
            the first column is sin(2π * phase), the second column is cos(2π * phase).

    How it works:
    1. Read the number of time steps in the current episode.
    2. Convert it into a phase relative to the cycle time.
    3. Encode the phase with sine and cosine.

    Typical uses:
    - Phase information for gait learning
    - Temporal encoding of periodic motions
    - Giving the policy network a notion of time

    Advantages:
    - The periodic encoding avoids phase wrap-around discontinuities.
    - The continuous sine/cosine representation is easy for neural networks to consume.
    - It provides rich timing information for motion coordination.
    """
    # initialize the episode-length buffer if it does not exist yet
    if not hasattr(env, "episode_length_buf") or env.episode_length_buf is None:
        env.episode_length_buf = torch.zeros(env.num_envs, device=env.device, dtype=torch.long)
    # current phase: (current step * step duration) / cycle time
    phase = env.episode_length_buf[:, None] * env.step_dt / cycle_time
    # encode the phase with sine and cosine to obtain a continuous, periodic representation
    phase_tensor = torch.cat([torch.sin(2 * torch.pi * phase), torch.cos(2 * torch.pi * phase)], dim=-1)
    return phase_tensor

1. How periodic motion encoding works

Worked example: cycle_time = 0.5 s, step_dt = 0.01 s, at step 25:

phase = 25 * 0.01 / 0.5          # = 0.5 (half a cycle)
phase_tensor = [sin(π), cos(π)]  # ≈ [0, -1]

Physical meaning: the gait cycle (e.g. the swing/stance phases during walking) is mapped onto the unit circle.
Why the encoding helps:
Raw phase value: jumps from 0.99 back to 0.0 at the end of each cycle
Encoded value: moves smoothly through the wrap, e.g. sin goes -0.063 → 0.000 → 0.063 while cos goes 0.998 → 1.000 → 0.998

2. Neural-network-friendly design

Decoupling: the constraint sin²(2πφ) + cos²(2πφ) = 1 is built into the representation
Smooth gradients: avoids the gradient spikes that a phase jump would cause

3. Typical applications

  1. Quadruped gait control

phase_tensor = phase(env, cycle_time=0.5)  # one full gait cycle every 0.5 s

Combined into the observation space:

obs = torch.cat([
    joint_pos(env),
    base_lin_vel(env),
    phase_tensor,  # the key timing signal
], dim=-1)

Diagonal-leg synchronization: the legs of a diagonal pair are driven by the same phase value
Gait switching: changing cycle_time switches between walking and running
  2. Humanoid balance control

Combining with IMU data

obs = torch.cat([
    imu_orientation(env),
    projected_gravity(env),
    phase(env, cycle_time=1.0),  # 1-second center-of-mass sway cycle
], dim=-1)

Fall prevention: the phase encoding helps predict the trend of the center-of-mass motion
Energy efficiency: apply thrust near the peak of cos φ (the most efficient point)

In depth: phase wrap-around and smooth transitions

An everyday analogy

Imagine running laps on a circular track:

The raw phase value = your position along the track (0.0 to 1.0).

When you reach the finish line (position 1.0):
the next instant you are suddenly back at the start (position 0.0) → a jump in space!
→ for the robot this is like the gait resetting abruptly → jerky motion.

The encoded phase:
describe your position by a compass direction instead:

0.99  [compass reads ≈356°]
1.00  [compass reads 0°]
1.01  [compass reads ≈4°]

No matter how many laps you run, the compass direction changes smoothly!
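The same point as a small numeric check in plain PyTorch (independent of Isaac Lab): around the end of a cycle the fractional phase jumps from 0.99 back to 0.01, but the sine/cosine pair barely moves.

import torch

# raw phase values just before and after the wrap at 1.0
phase = torch.tensor([0.98, 0.99, 1.00, 1.01, 1.02])
encoding = torch.stack(
    [torch.sin(2 * torch.pi * phase), torch.cos(2 * torch.pi * phase)], dim=-1
)
print(encoding)
# sin: -0.125, -0.063,  0.000,  0.063,  0.125   -> changes smoothly through the wrap
# cos:  0.992,  0.998,  1.000,  0.998,  0.992   -> no discontinuity at phase = 1.0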
