Training YOLOv8n-OBB and converting it to an RKNN model
Prerequisites:
Prepare an Ubuntu 22.04 server or virtual machine (x86_64).
1. Dataset annotation
1) The X-AnyLabeling annotation tool is recommended.
2) Use the [Rotated Box] annotation type.
3) Optionally run AI-assisted labeling first, then refine the results by hand to speed up annotation.
4) Export -> Export YOLO rotated-box labels -> select a class.txt file (listing the label names you annotated). The resulting label format is illustrated below.
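For reference, each exported label file should contain one line per object in the Ultralytics YOLO OBB convention: a class index followed by the four corner points, normalized to the image width/height. The sample below is purely illustrative, assuming a single class cat with index 0:

class.txt:
cat

example label line (class_index x1 y1 x2 y2 x3 y3 x4 y4):
0 0.42 0.31 0.65 0.35 0.62 0.58 0.39 0.54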
2. Download the environment (version numbers must match)
The RKNN-modified ultralytics_yolov8 project: ultralytics_yolov8
Converting ONNX to RKNN requires the official rknn_model_zoo tool: rknn_model_zoo-2.2.0
The official rknn-toolkit2 tool: rknn-toolkit2
After cloning each project with git clone, run git checkout v2.2.0 to switch to the matching version, as in the example below.
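A possible sequence of commands, assuming the projects are cloned from the airockchip organization on GitHub (adjust the URLs if you obtained the sources elsewhere):

git clone https://github.com/airockchip/ultralytics_yolov8.git
git clone https://github.com/airockchip/rknn_model_zoo.git
git clone https://github.com/airockchip/rknn-toolkit2.git
cd rknn_model_zoo && git checkout v2.2.0 && cd ..
cd rknn-toolkit2 && git checkout v2.2.0 && cd ..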
3. Install the environment
1) Enter the ultralytics_yolov8 directory and install the dependencies.
Save the following content as requirements.txt:
# Ultralytics requirements
# Usage: pip install -r requirements.txt

# Base ----------------------------------------
matplotlib>=3.2.2
numpy>=1.18.5
opencv-python>=4.6.0
Pillow>=7.1.2
PyYAML>=5.3.1
requests>=2.23.0
scipy>=1.4.1
torch>=1.7.0
torchvision>=0.8.1
tqdm>=4.64.0

# Logging -------------------------------------
tensorboard>=2.4.1
# clearml
# comet

# Plotting ------------------------------------
pandas>=1.1.4
seaborn>=0.11.0

# Export --------------------------------------
# coremltools>=6.0 # CoreML export
# onnx>=1.12.0 # ONNX export
# onnx-simplifier>=0.4.1 # ONNX simplifier
# nvidia-pyindex # TensorRT export
# nvidia-tensorrt # TensorRT export
# scikit-learn==0.19.2 # CoreML quantization
# tensorflow>=2.4.1 # TF exports (-cpu, -aarch64, -macos)
# tensorflowjs>=3.9.0 # TF.js export
# openvino-dev>=2022.3 # OpenVINO export

# Extras --------------------------------------
ipython # interactive notebook
psutil # system utilization
thop>=0.1.1 # FLOPs computation
# albumentations>=1.0.3
# pycocotools>=2.0.6 # COCO mAP
# roboflow
Then install it:
pip install -r requirements.txt
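If you intend to run the training script from outside the repository root, you can additionally install the modified fork in editable mode so that "from ultralytics import YOLO" resolves to the RKNN-patched code (a minimal sketch, assuming the fork keeps the standard Ultralytics packaging files):

pip install -e .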
2) Install rknn-toolkit2
Note: "cp310" corresponds to Python 3.10; pick the files that match your own Python version. Python 3.6 to 3.12 are supported.
cd ~/rknn-toolkit2/rknn-toolkit2/packages
pip install -r requirements_cp310-2.2.0.txt
pip install rknn_toolkit2-2.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
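To confirm the toolkit installed correctly, check that the import used later by the conversion scripts succeeds:

python -c "from rknn.api import RKNN; print('rknn-toolkit2 import OK')"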
4. Train the model
1) Save the following as dataset.yaml:
train: /home/admin/labels
val: /home/admin/labels
nc: 1
names: ['cat']
2) Save the following as yolov8-obb.yaml (note: nc must equal the number of entries in the names list above):
# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license

# Ultralytics YOLOv8-obb Oriented Bounding Boxes (OBB) model with P3/8 - P5/32 outputs
# Model docs: https://docs.ultralytics.com/models/yolov8
# Task docs: https://docs.ultralytics.com/tasks/obb

# Parameters
nc: 1 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024] # YOLOv8n-obb summary: 144 layers, 3228867 parameters, 3228851 gradients, 9.1 GFLOPs
  s: [0.33, 0.50, 1024] # YOLOv8s-obb summary: 144 layers, 11452739 parameters, 11452723 gradients, 29.8 GFLOPs
  m: [0.67, 0.75, 768] # YOLOv8m-obb summary: 184 layers, 26463235 parameters, 26463219 gradients, 81.5 GFLOPs
  l: [1.00, 1.00, 512] # YOLOv8l-obb summary: 224 layers, 44540355 parameters, 44540339 gradients, 169.4 GFLOPs
  x: [1.00, 1.25, 512] # YOLOv8x-obb summary: 224 layers, 69555651 parameters, 69555635 gradients, 264.3 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 12
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 15 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]] # 18 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]] # cat head P5
  - [-1, 3, C2f, [1024]] # 21 (P5/32-large)
  - [[15, 18, 21], 1, OBB, [nc, 1]] # OBB(P3, P4, P5)
3) Download yolov8n-obb.pt (the pretrained OBB weights).
4) Save the following as train.py and run python train.py to start training:
from ultralytics import YOLO
import torch

# model structure config
model_yaml_path = "yolov8-obb.yaml"

# dataset config file
data_yaml_path = r'dataset.yaml'
# data_yaml_path = r'/mnt/imgs/train/obb.yaml'

# pretrained model
pre_model_name = 'yolov8n-obb.pt'


def main():
    # torch.backends.cudnn.enabled = False
    model = YOLO(model_yaml_path).load(pre_model_name)  # build from YAML and transfer weights
    model.info()
    model.train(data=data_yaml_path, epochs=100, imgsz=960, batch=10, amp=False, workers=2, degrees=180.0)


if __name__ == '__main__':
    main()

# CLI equivalent:
# yolo obb train data=data/hat.yaml model=yolov8s-obb.pt epochs=200 imgsz=640 device=0
5) After training completes, the best weights are located at:
/home/admin/ultralytics_yolov8/runs/obb/train/weights/best.pt
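Before exporting, it can be worth sanity-checking the trained weights with a quick prediction (a minimal sketch; test.jpg is a placeholder for one of your own images):

from ultralytics import YOLO

model = YOLO('/home/admin/ultralytics_yolov8/runs/obb/train/weights/best.pt')
model.predict('test.jpg', imgsz=960, conf=0.45, save=True)  # annotated results are saved under runs/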
5. Export the RKNN model
1) First export an ONNX model. Save the following as yolo2onnx.py and run python yolo2onnx.py:
from ultralytics import YOLO

model = YOLO('/home/admin/ultralytics_yolov8/runs/obb/train/weights/best.pt')
results = model.export(format='rknn')
This produces:
/home/admin/ultralytics_yolov8/runs/obb/train/weights/best.onnx
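Optionally, verify that the exported graph loads cleanly and inspect its output names before converting (a quick check using the onnx package, which may need to be installed with pip install onnx first):

python -c "import onnx; m = onnx.load('/home/admin/ultralytics_yolov8/runs/obb/train/weights/best.onnx'); onnx.checker.check_model(m); print([o.name for o in m.graph.output])"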
2) Convert ONNX to RKNN
Edit ~/rknn_model_zoo/examples/yolov8_obb/python/yolov8_obb.py as follows:
--- a/examples/yolov8_obb/python/yolov8_obb.py
+++ b/examples/yolov8_obb/python/yolov8_obb.py
@@ -12,12 +12,10 @@ from shapely.geometry import Polygon
 from rknn.api import RKNN
 
-CLASSES = ['plane', 'ship', 'storage tank', 'baseball diamond', 'tennis court',
-           'basketball court', 'ground track field', 'harbor', 'bridge', 'large vehicle', 'small vehicle', 'helicopter',
-           'roundabout', 'soccer ball field', 'swimming pool']
+CLASSES = ['cat']
 
-nmsThresh = 0.4
-objectThresh = 0.5
+nmsThresh = 0.25
+objectThresh = 0.45
 
 def letterbox_resize(image, size, bg_color):
     """
@@ -207,7 +205,7 @@ if __name__ == '__main__':
     # Set inputs
     img = cv2.imread('../model/test.jpg')
-    letterbox_img, aspect_ratio, offset_x, offset_y = letterbox_resize(img, (640,640), 114)  # letterbox scaling
+    letterbox_img, aspect_ratio, offset_x, offset_y = letterbox_resize(img, (960,960), 114)  # letterbox scaling
     infer_img = letterbox_img[..., ::-1]  # BGR2RGB
 
     # Inference
Also edit ~/rknn_model_zoo/examples/yolov8_obb/model/yolov8_obb_labels_list.txt: clear all of its contents and leave only cat, with no trailing newline (see the one-liner below).
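A simple way to write that file without a trailing newline is printf (echo would append one):

printf 'cat' > ~/rknn_model_zoo/examples/yolov8_obb/model/yolov8_obb_labels_list.txt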
Start the conversion (run it from ~/rknn_model_zoo/examples/yolov8_obb/python, where convert.py lives):
python convert.py /home/admin/ultralytics_yolov8/runs/obb/train/weights/best.onnx rk3566
This produces:
/home/admin/rknn_model_zoo/examples/yolov8_obb/model/yolov8n_obb.rknn
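In the rknn_model_zoo examples, convert.py generally also accepts optional trailing arguments for the quantization dtype and the output path; treat the exact values as an assumption and check the script's usage message. For example, to skip int8 quantization and name the output file explicitly:

python convert.py /home/admin/ultralytics_yolov8/runs/obb/train/weights/best.onnx rk3566 fp /home/admin/rknn_model_zoo/examples/yolov8_obb/model/best_obb.rknn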