
BEV: Implicit Camera View Transformation - BEVFormer

1. Background

Camera view transformation based on explicit IPM-style projection depends heavily on accurate camera intrinsics and extrinsics, and its BEV grid fusion is fixed, so it may not be sensitive enough to small objects. Performing the view transformation with a transformer instead lets the BEV queries adaptively attend to key regions, which helps small-object detection, and the attention mechanism samples the image features flexibly. This post therefore walks through a BEVFormer-style demo to understand the principle.
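Before the demo, here is a minimal, hypothetical sketch of the contrast between the two styles of view transformation; the shapes and the geometric lookup table below are made up purely for illustration and are not part of the demo:

import torch

N_img, N_bev, C = 16 * 16, 8 * 8, 64
img_feat = torch.randn(N_img, C)                  # flattened image features from one camera

# Explicit (IPM-style): each BEV cell gathers one fixed pixel chosen purely by the
# camera intrinsics/extrinsics; the mapping never adapts to image content.
fixed_idx = torch.randint(0, N_img, (N_bev,))     # stand-in for the geometric lookup table
bev_explicit = img_feat[fixed_idx]                # N_bev x C

# Implicit (BEVFormer-style): each BEV query attends over all pixels with learned,
# content-dependent weights, so it can focus on small or distant objects.
bev_query = torch.randn(N_bev, C)
attn = torch.softmax(bev_query @ img_feat.T / C ** 0.5, dim=-1)   # N_bev x N_img
bev_implicit = attn @ img_feat                                     # N_bev x C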

2. Code

import torch
import torch.nn as nn

# -------------------------
# Parameters
# -------------------------
B, C, H, W = 2, 64, 16, 16          # camera feature map
bev_H, bev_W = 8, 8                 # BEV grid
num_cameras = 7
num_classes = 10
num_det_queries = 32                # number of detection queries
# -------------------------
# 1. Multi-camera features
# -------------------------
camera_feats = [torch.randn(B, C, H*W) for _ in range(num_cameras)]  # B x C x N (N = H*W)
for i in range(num_cameras):
    camera_feats[i] = camera_feats[i].permute(0, 2, 1)               # B x N x C
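# Assumption (not in the original demo): in a real pipeline the tokens above would
# come from a CNN backbone applied to each camera image rather than torch.randn.
# A rough, hypothetical sketch producing the same B x N x C layout:
backbone = nn.Sequential(
    nn.Conv2d(3, C, kernel_size=7, stride=4, padding=3),
    nn.ReLU(),
    nn.Conv2d(C, C, kernel_size=3, stride=4, padding=1),
)
images = [torch.randn(B, 3, 256, 256) for _ in range(num_cameras)]              # dummy camera images
backbone_feats = [backbone(img).flatten(2).permute(0, 2, 1) for img in images]  # each B x N x C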
# -------------------------
# 2. BEV query + Transformer projection
# -------------------------
num_bev_queries = bev_H * bev_W
bev_queries = nn.Parameter(torch.randn(num_bev_queries, B, C))

class BEVProjectionTransformer(nn.Module):
    def __init__(self, C, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=C, num_heads=num_heads)

    def forward(self, bev_queries, camera_feats):
        """
        bev_queries:  num_bev_queries x B x C
        camera_feats: list of B x N x C
        """
        # Concatenate the features of all cameras
        feats = torch.cat(camera_feats, dim=1)      # B x (num_cameras*N) x C
        feats = feats.permute(1, 0, 2)              # (num_cameras*N) x B x C
        bev_out, _ = self.attn(bev_queries, feats, feats)
        return bev_out

bev_proj_transformer = BEVProjectionTransformer(C)
bev_features = bev_proj_transformer(bev_queries, camera_feats)
bev_features_grid = bev_features.permute(1, 0, 2).reshape(B, bev_H, bev_W, C)
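# Assumption (not in the original demo): the cross-attention above has no notion of
# where each token comes from. A common refinement is to add learnable positional
# embeddings to the BEV queries and a per-camera embedding to the image tokens.
# The full BEVFormer goes further still, using deformable spatial cross-attention
# around reference points projected with the camera intrinsics/extrinsics.
bev_pos = nn.Parameter(torch.randn(num_bev_queries, 1, C))    # one embedding per BEV cell
cam_pos = nn.Parameter(torch.randn(num_cameras, 1, 1, C))     # one embedding per camera
feats_with_pos = [camera_feats[i] + cam_pos[i] for i in range(num_cameras)]  # each B x N x C
bev_features_pe = bev_proj_transformer(bev_queries + bev_pos, feats_with_pos)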
# -------------------------
# 3. Detection query + Transformer
# -------------------------
det_queries = nn.Parameter(torch.randn(num_det_queries, B, C))

class DetectionDecoderTransformer(nn.Module):
    def __init__(self, C, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=C, num_heads=num_heads)

    def forward(self, det_queries, bev_features_grid):
        B, H, W, C = bev_features_grid.shape
        bev_flat = bev_features_grid.reshape(B, H*W, C).permute(1, 0, 2)  # (H*W) x B x C
        out, _ = self.attn(det_queries, bev_flat, bev_flat)
        return out

decoder = DetectionDecoderTransformer(C)
det_features = decoder(det_queries, bev_features_grid)
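# Assumption (not in the original demo): a single cross-attention layer is used here
# for clarity. DETR-style decoders normally stack several layers, each combining
# query self-attention, cross-attention to the BEV features, and an FFN; PyTorch's
# built-in TransformerDecoder gives that structure directly:
bev_flat = bev_features_grid.reshape(B, bev_H * bev_W, C).permute(1, 0, 2)   # (bev_H*bev_W) x B x C
full_decoder = nn.TransformerDecoder(nn.TransformerDecoderLayer(d_model=C, nhead=8), num_layers=3)
det_features_deep = full_decoder(det_queries, bev_flat)                      # num_det_queries x B x C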
# -------------------------
# 4. Detection head
# -------------------------
class SimpleDetectionHead(nn.Module):
    def __init__(self, C, num_classes):
        super().__init__()
        self.cls_head = nn.Linear(C, num_classes)   # class scores
        self.bbox_head = nn.Linear(C, 7)            # 3D box parameters

    def forward(self, det_features):
        cls_logits = self.cls_head(det_features)
        bbox_preds = self.bbox_head(det_features)
        return cls_logits, bbox_preds

detection_head = SimpleDetectionHead(C, num_classes)
cls_logits, bbox_preds = detection_head(det_features)
print("class logits shape:", cls_logits.shape)     # num_det_queries x B x num_classes
print("3D bbox preds shape:", bbox_preds.shape)    # num_det_queries x B x 7