[Deep Learning] A Line-Following Robot Car Built with Deep Learning

Tags: python, deep learning, pytorch


After installing the Docker environment, add your non-root user to the docker group. Reference commands:

sudo groupadd docker
sudo gpasswd -a ${USER} docker
sudo systemctl restart docker  # CentOS7/Ubuntu
# re-login

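After logging back in, Docker commands such as docker run hello-world should work without sudo, confirming that the group change took effect.
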
Model Training

The model mentioned above directly reuses the definition provided by PyTorch; both the dataset splitting and the model training are encapsulated in the code of the line_follower_model package, along the lines of the sketch below.

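For reference, here is a minimal sketch of that training recipe. The epoch count and the train_loader/test_loader helpers are hypothetical stand-ins for the dataset handling inside the package; the real code lives in line_follower_model/training_member_function.py.

import torch
import torch.nn.functional as F
import torchvision

# ResNet18 backbone; the final fc layer is replaced with a 2-value
# regression head predicting the (x, y) pixel position of the line.
model = torchvision.models.resnet18(pretrained=True)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters())
BEST_MODEL_PATH = './best_line_follower_model_xy.pth'
best_loss = 1e9
NUM_EPOCHS = 100  # illustrative; use whatever the package actually sets

# train_loader / test_loader are assumed to yield (image, xy) float batches
# built from the labeled image_dataset folder; their construction is omitted.
for epoch in range(NUM_EPOCHS):
    model.train()
    train_loss = 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = F.mse_loss(model(images), labels)
        loss.backward()
        optimizer.step()
        train_loss += float(loss)
    train_loss /= len(train_loader)

    model.eval()
    test_loss = 0.0
    with torch.no_grad():
        for images, labels in test_loader:
            images, labels = images.to(device), labels.to(device)
            test_loss += float(F.mse_loss(model(images), labels))
    test_loss /= len(test_loader)

    print('%f, %f' % (train_loss, test_loss))  # the two columns in the log below
    if test_loss < best_loss:                  # keep only the best checkpoint
        torch.save(model.state_dict(), BEST_MODEL_PATH)
        print('save')
        best_loss = test_loss
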

Next, run the following commands to start training:

cd ~/dev_ws/src/originbot_desktop/originbot_deeplearning/line_follower_model
ros2 run line_follower_model training

Error: ./best_line_follower_model_xy.pth cannot be opened

thomas@thomas-J20:~/dev_ws/src/originbot_desktop/originbot_deeplearning/line_follower_model$ ros2 run line_follower_model training
/home/thomas/.local/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
/home/thomas/.local/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /home/thomas/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
100.0%
0.672721, 30.660010
save
Traceback (most recent call last):
  File "/home/thomas/dev_ws/install/line_follower_model/lib/line_follower_model/training", line 33, in <module>
    sys.exit(load_entry_point('line-follower-model==0.0.0', 'console_scripts', 'training')())
  File "/home/thomas/dev_ws/install/line_follower_model/lib/python3.10/site-packages/line_follower_model/training_member_function.py", line 131, in main
    torch.save(model.state_dict(), BEST_MODEL_PATH)
  File "/home/thomas/.local/lib/python3.10/site-packages/torch/serialization.py", line 628, in save
    with _open_zipfile_writer(f) as opened_zipfile:
  File "/home/thomas/.local/lib/python3.10/site-packages/torch/serialization.py", line 502, in _open_zipfile_writer
    return container(name_or_buffer)
  File "/home/thomas/.local/lib/python3.10/site-packages/torch/serialization.py", line 473, in __init__
    super().__init__(torch._C.PyTorchFileWriter(self.name))
RuntimeError: File ./best_line_follower_model_xy.pth cannot be opened.


This is caused by missing write permission on the package directories:

thomas@thomas-J20:~/dev_ws/src/originbot_desktop/originbot_deeplearning$ ls -l
total 8
drwxr-xr-x 3 root root 4096 Mar 27 11:03 10_model_convert
drwxr-xr-x 7 root root 4096 Mar 27 14:29 line_follower_model
thomas@thomas-J20:~/dev_ws/src/originbot_desktop/originbot_deeplearning$ sudo chmod 777 *
[sudo] password for thomas: 
thomas@thomas-J20:~/dev_ws/src/originbot_desktop/originbot_deeplearning$ ls
10_model_convert  line_follower_model
thomas@thomas-J20:~/dev_ws/src/originbot_desktop/originbot_deeplearning$ ls -l
total 8
drwxrwxrwx 3 root root 4096 Mar 27 11:03 10_model_convert
drwxrwxrwx 7 root root 4096 Mar 27 14:29 line_follower_model

With write permission in place (chmod 777 is the quick fix used above; changing the owner with sudo chown -R $USER on the two package directories would be a tighter alternative), run the training again:

ros2 run line_follower_model training

thomas@thomas-J20:~/dev_ws/src/originbot_desktop/originbot_deeplearning/line_follower_model$ ros2 run line_follower_model training
/home/thomas/.local/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
/home/thomas/.local/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=ResNet18_Weights.IMAGENET1K_V1`. You can also use `weights=ResNet18_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
0.722548, 6.242182
save
0.087550, 5.827808
save
0.045032, 0.380008
save
0.032235, 0.111976
save
0.027896, 0.039962
save
0.030725, 0.204738
0.025075, 0.036258
save
0.028099, 0.040965
0.016858, 0.032197
save
0.019491, 0.036230
0.018325, 0.043560
0.019858, 0.322563
0.015115, 0.070269
0.014820, 0.030373

Training takes a while, anywhere from tens of minutes to around an hour, so be patient. Each output line above shows the per-epoch training loss and test loss, and "save" marks an epoch whose test loss beat the previous best, triggering a checkpoint write. Once training finishes, you will find the generated file best_line_follower_model_xy.pth:

thomas@thomas-J20:~/dev_ws/src/originbot_desktop/originbot_deeplearning/line_follower_model$ ls -l
total 54892
-rw-rw-r-- 1 thomas thomas 44789846 Mar 28 13:28 best_line_follower_model_xy.pth

Model Conversion

Running the floating-point model trained with PyTorch directly on the RDK X3 would be inefficient. To improve runtime efficiency and exploit the BPU's 5 TOPS of compute, the floating-point model must be converted into a fixed-point (quantized) model.


Generating the ONNX Model

Next, run generate_onnx to convert the model trained above into an ONNX model:

ros2 run line_follower_model generate_onnx

After it runs, the best_line_follower_model_xy.onnx model is generated in the current directory:

thomas@J-35:~/dev_ws/src/originbot_desktop/originbot_deeplearning/line_follower_model$ ls -l
total 98556
-rw-rw-r-- 1 thomas thomas 44700647 Apr  2 21:02 best_line_follower_model_xy.onnx
-rw-rw-r-- 1 thomas thomas 44789846 Apr  2 19:37 best_line_follower_model_xy.pth

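Under the hood, generate_onnx essentially amounts to a torch.onnx.export call like the sketch below. This is an assumed reconstruction rather than the package's exact code; the 'input'/'output' names, the 1x3x224x224 shape, and opset 11 are taken from the toolchain log later in this article.

import torch
import torchvision

# Rebuild the same ResNet18-with-2-output-head architecture and load
# the trained weights from the checkpoint produced during training.
model = torchvision.models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load('./best_line_follower_model_xy.pth'))
model.eval()

# Export with a fixed 1x3x224x224 input; the hb_mapper log later reports
# exactly this graph: input [1, 3, 224, 224] -> output [1, 2], opset 11.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, './best_line_follower_model_xy.onnx',
                  input_names=['input'], output_names=['output'],
                  opset_version=11)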

Starting the AI Toolchain Docker

Extract the previously downloaded AI toolchain Docker image and the OE package. The OE package directory structure is as follows:

 
. 
├── bsp 
│   └── X3J3-Img-PL2.2-V1.1.0-20220324.tgz 
├── ddk 
│   ├── package 
│   ├── samples 
│   └── tools 
├── doc 
│   ├── cn 
│   ├── ddk_doc 
│   └── en 
├── release_note-CN.txt 
├── release_note-EN.txt 
├── run_docker.sh 
└── tools 
    ├── 0A_CP210x_USB2UART_Driver.zip 
    ├── 0A_PL2302-USB-to-Serial-Comm-Port.zip 
    ├── 0A_PL2303-M_LogoDriver_Setup_v202_20200527.zip 
    ├── 0B_hbupdate_burn_secure-key1.zip 
    ├── 0B_hbupdate_linux_cli_v1.1.tgz 
    ├── 0B_hbupdate_linux_gui_v1.1.tgz 
    ├── 0B_hbupdate_mac_v1.0.5.app.tar.gz 
    └── 0B_hbupdate_win64_v1.1.zip 
 

Copy the 10_model_convert package from the originbot_desktop repository into the ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/ directory of the OE package.


Then copy the annotated dataset folder image_dataset from the line_follower_model package, together with the generated best_line_follower_model_xy.onnx model, into the ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/ directory mentioned above. Keep around 100 images in image_dataset for calibration:


Then go back to the root directory of the OE package and launch the AI toolchain Docker image:

cd /home/thomas/Me/deeplearning/horizon_xj3_open_explorer_v2.3.3_20220727/
sh run_docker.sh /data/ 

 

Generating Calibration Data

Inside the started Docker container, complete the following steps:

cd ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper 
sh 02_preprocess.sh 


The command execution looks like this:

thomas@J-35:~/Me/deeplearning/horizon_xj3_open_explorer_v2.3.3_20220727$ sudo sh run_docker.sh /data/ 
[sudo] password for thomas: 
run_docker.sh: 14: [: unexpected operator
run_docker.sh: 23: [: openexplorer/ai_toolchain_centos_7_xj3: unexpected operator
docker version is v2.3.3
dataset path is /data
open_explorer folder path is /home/thomas/Me/deeplearning/horizon_xj3_open_explorer_v2.3.3_20220727
[root@1e1a1a7e24f4 open_explorer]# cd ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper 
[root@1e1a1a7e24f4 mapper]# sh 02_preprocess.sh 

cd $(dirname $0) || exit

python3 ../../../data_preprocess.py \
  --src_dir ./image_dataset \
  --dst_dir ./calibration_data_bgr_f32 \
  --pic_ext .rgb \
  --read_mode opencv
Warning please note that the data type is now determined by the name of the folder suffix
Warning if you need to set it explicitly, please configure the value of saved_data_type in the preprocess shell script
regular preprocess
write:./calibration_data_bgr_f32/xy_008_160_31a8e30a-eca6-11ee-bb07-dfd665df7b81.rgb
write:./calibration_data_bgr_f32/xy_009_160_39c18c40-eca6-11ee-bb07-dfd665df7b81.rgb
write:./calibration_data_bgr_f32/xy_028_092_3327df66-ec9b-11ee-bb07-dfd665df7b81.rgb
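
Judging from the data_preprocess.py arguments echoed above, the preprocessing boils down to: read each image in image_dataset with OpenCV, resize it to the 224x224 model input, and dump it as a raw float32 file with a .rgb extension. A simplified sketch (the source file extension is an assumption; the real script supports more modes):

import glob
import os

import cv2
import numpy as np

SRC_DIR = './image_dataset'
DST_DIR = './calibration_data_bgr_f32'
os.makedirs(DST_DIR, exist_ok=True)

for path in sorted(glob.glob(os.path.join(SRC_DIR, '*.jpg'))):  # extension assumed
    img = cv2.imread(path)                # BGR uint8, as read_mode=opencv implies
    img = cv2.resize(img, (224, 224))     # match the 1x3x224x224 model input
    data = img.astype(np.float32)         # float32, per the _f32 folder suffix
    name = os.path.splitext(os.path.basename(path))[0] + '.rgb'
    data.tofile(os.path.join(DST_DIR, name))
    print('write:%s' % os.path.join(DST_DIR, name))
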
Compiling the Model into a Fixed-Point Model

Next, run the following commands to generate the fixed-point model file, which will later be deployed on the robot:

cd ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper
sh 03_build.sh

The command execution looks like this:

[root@1e1a1a7e24f4 mapper]# sh 03_build.sh
2024-04-02 21:46:50,078 INFO Start hb_mapper....
2024-04-02 21:46:50,079 INFO log will be stored in /open_explorer/ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/hb_mapper_makertbin.log
2024-04-02 21:46:50,079 INFO hbdk version 3.37.2
2024-04-02 21:46:50,080 INFO horizon_nn version 0.14.0
2024-04-02 21:46:50,080 INFO hb_mapper version 1.9.9
2024-04-02 21:46:50,081 INFO Start Model Convert....
2024-04-02 21:46:50,100 INFO Using abs path /open_explorer/ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/best_line_follower_model_xy.onnx
2024-04-02 21:46:50,102 INFO validating model_parameters...
2024-04-02 21:46:50,231 WARNING User input 'log_level' deleted,Please do not use this parameter again
2024-04-02 21:46:50,231 INFO Using abs path /open_explorer/ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/model_output
2024-04-02 21:46:50,232 INFO validating model_parameters finished
2024-04-02 21:46:50,232 INFO validating input_parameters...
2024-04-02 21:46:50,232 INFO input num is set to 1 according to input_names
2024-04-02 21:46:50,233 INFO model name missing, using model name from model file: ['input']
2024-04-02 21:46:50,233 INFO model input shape missing, using shape from model file: [[1, 3, 224, 224]]
2024-04-02 21:46:50,233 INFO validating input_parameters finished
2024-04-02 21:46:50,233 INFO validating calibration_parameters...
2024-04-02 21:46:50,233 INFO Using abs path /open_explorer/ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/calibration_data_bgr_f32
2024-04-02 21:46:50,234 INFO validating calibration_parameters finished
2024-04-02 21:46:50,234 INFO validating custom_op...
2024-04-02 21:46:50,234 INFO custom_op does not exist, skipped
2024-04-02 21:46:50,234 INFO validating custom_op finished
2024-04-02 21:46:50,234 INFO validating compiler_parameters...
2024-04-02 21:46:50,235 INFO validating compiler_parameters finished
2024-04-02 21:46:50,239 WARNING Please note that the calibration file data type is set to float32, determined by the name of the calibration dir name suffix
2024-04-02 21:46:50,239 WARNING if you need to set it explicitly, please configure the value of cal_data_type in the calibration_parameters group in yaml
2024-04-02 21:46:50,240 INFO *******************************************
2024-04-02 21:46:50,240 INFO First calibration picture name: xy_008_160_31a8e30a-eca6-11ee-bb07-dfd665df7b81.rgb
2024-04-02 21:46:50,240 INFO First calibration picture md5:
83281dbdee2db08577524faa7f892adf  /open_explorer/ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/calibration_data_bgr_f32/xy_008_160_31a8e30a-eca6-11ee-bb07-dfd665df7b81.rgb
2024-04-02 21:46:50,265 INFO *******************************************
2024-04-02 21:46:51,682 INFO [Tue Apr  2 21:46:51 2024] Start to Horizon NN Model Convert.
2024-04-02 21:46:51,683 INFO Parsing the input parameter:{'input': {'input_shape': [1, 3, 224, 224], 'expected_input_type': 'YUV444_128', 'original_input_type': 'RGB', 'original_input_layout': 'NCHW', 'means': array([123.675, 116.28 , 103.53 ], dtype=float32), 'scales': array([0.0171248, 0.017507 , 0.0174292], dtype=float32)}}
2024-04-02 21:46:51,684 INFO Parsing the calibration parameter
2024-04-02 21:46:51,684 INFO Parsing the hbdk parameter:{'hbdk_pass_through_params': '--fast --O3', 'input-source': {'input': 'pyramid', '_default_value': 'ddr'}}
2024-04-02 21:46:51,685 INFO HorizonNN version: 0.14.0
2024-04-02 21:46:51,685 INFO HBDK version: 3.37.2
2024-04-02 21:46:51,685 INFO [Tue Apr  2 21:46:51 2024] Start to parse the onnx model.
2024-04-02 21:46:51,770 INFO Input ONNX model infomation:
ONNX IR version:          6
Opset version:            11
Producer:                 pytorch2.2.2
Domain:                   none
Input name:               input, [1, 3, 224, 224]
Output name:              output, [1, 2]
2024-04-02 21:46:52,323 INFO [Tue Apr  2 21:46:52 2024] End to parse the onnx model.
2024-04-02 21:46:52,324 INFO Model input names: ['input']
2024-04-02 21:46:52,324 INFO Create a preprocessing operator for input_name input with means=[123.675 116.28  103.53 ], std=[58.39484253 57.12000948 57.37498298], original_input_layout=NCHW, color convert from 'RGB' to 'YUV_BT601_FULL_RANGE'.
2024-04-02 21:46:52,750 INFO Saving the original float model: resnet18_224x224_nv12_original_float_model.onnx.
2024-04-02 21:46:52,751 INFO [Tue Apr  2 21:46:52 2024] Start to optimize the model.
2024-04-02 21:46:53,782 INFO [Tue Apr  2 21:46:53 2024] End to optimize the model.
2024-04-02 21:46:53,953 INFO Saving the optimized model: resnet18_224x224_nv12_optimized_float_model.onnx.
2024-04-02 21:46:53,953 INFO [Tue Apr  2 21:46:53 2024] Start to calibrate the model.
2024-04-02 21:46:53,954 INFO There are 100 samples in the calibration data set.
2024-04-02 21:46:54,458 INFO Run calibration model with kl method.
2024-04-02 21:47:06,290 INFO [Tue Apr  2 21:47:06 2024] End to calibrate the model.
2024-04-02 21:47:06,291 INFO [Tue Apr  2 21:47:06 2024] Start to quantize the model.
2024-04-02 21:47:09,926 INFO input input is from pyramid. Its layout is set to NHWC
2024-04-02 21:47:10,502 INFO [Tue Apr  2 21:47:10 2024] End to quantize the model.
2024-04-02 21:47:11,101 INFO Saving the quantized model: resnet18_224x224_nv12_quantized_model.onnx.
2024-04-02 21:47:14,165 INFO [Tue Apr  2 21:47:14 2024] Start to compile the model with march bernoulli2.
2024-04-02 21:47:15,502 INFO Compile submodel: main_graph_subgraph_0
2024-04-02 21:47:16,985 INFO hbdk-cc parameters:['--fast', '--O3', '--input-layout', 'NHWC', '--output-layout', 'NHWC', '--input-source', 'pyramid']
2024-04-02 21:47:17,276 INFO INFO: "-j" or "--jobs" is not specified, launch 2 threads for optimization
2024-04-02 21:47:17,277 WARNING missing stride for pyramid input[0], use its aligned width by default.
[==================================================] 100%
2024-04-02 21:47:25,296 INFO consumed time 8.06245
2024-04-02 21:47:25,555 INFO FPS=121.27, latency = 8246.2 us   (see main_graph_subgraph_0.html)
2024-04-02 21:47:25,895 INFO [Tue Apr  2 21:47:25 2024] End to compile the model with march bernoulli2.
2024-04-02 21:47:25,896 INFO The converted model node information:
========================================================================================================================================
Node                                              ON   Subgraph  Type                    Cosine Similarity  Threshold                   
----------------------------------------------------------------------------------------------------------------------------------------
HZ_PREPROCESS_FOR_input                           BPU  id(0)     HzSQuantizedPreprocess  0.999952           127.000000                  
/conv1/Conv                                       BPU  id(0)     HzSQuantizedConv        0.999723           3.186383                    
/maxpool/MaxPool                                  BPU  id(0)     HzQuantizedMaxPool      0.999790           3.562476                    
/layer1/layer1.0/conv1/Conv                       BPU  id(0)     HzSQuantizedConv        0.999393           3.562476                    
/layer1/layer1.0/conv2/Conv                       BPU  id(0)     HzSQuantizedConv        0.999360           2.320694                    
/layer1/layer1.1/conv1/Conv                       BPU  id(0)     HzSQuantizedConv        0.997865           5.567303                    
/layer1/layer1.1/conv2/Conv                       BPU  id(0)     HzSQuantizedConv        0.998228           2.442273                    
/layer2/layer2.0/conv1/Conv                       BPU  id(0)     HzSQuantizedConv        0.995588           6.622376                    
/layer2/layer2.0/conv2/Conv                       BPU  id(0)     HzSQuantizedConv        0.996943           3.076967                    
/layer2/layer2.0/downsample/downsample.0/Conv     BPU  id(0)     HzSQuantizedConv        0.997177           6.622376                    
/layer2/layer2.1/conv1/Conv                       BPU  id(0)     HzSQuantizedConv        0.996080           3.934074                    
/layer2/layer2.1/conv2/Conv                       BPU  id(0)     HzSQuantizedConv        0.997443           3.025215                    
/layer3/layer3.0/conv1/Conv                       BPU  id(0)     HzSQuantizedConv        0.998448           4.853349                    
/layer3/layer3.0/conv2/Conv                       BPU  id(0)     HzSQuantizedConv        0.998819           2.553357                    
/layer3/layer3.0/downsample/downsample.0/Conv     BPU  id(0)     HzSQuantizedConv        0.998717           4.853349                    
/layer3/layer3.1/conv1/Conv                       BPU  id(0)     HzSQuantizedConv        0.998631           3.161120                    
/layer3/layer3.1/conv2/Conv                       BPU  id(0)     HzSQuantizedConv        0.998802           2.501193                    
/layer4/layer4.0/conv1/Conv                       BPU  id(0)     HzSQuantizedConv        0.999474           5.645166                    
/layer4/layer4.0/conv2/Conv                       BPU  id(0)     HzSQuantizedConv        0.999709           2.401657                    
/layer4/layer4.0/downsample/downsample.0/Conv     BPU  id(0)     HzSQuantizedConv        0.999250           5.645166                    
/layer4/layer4.1/conv1/Conv                       BPU  id(0)     HzSQuantizedConv        0.999808           5.394126                    
/layer4/layer4.1/conv2/Conv                       BPU  id(0)     HzSQuantizedConv        0.999865           3.072157                    
/avgpool/GlobalAveragePool                        BPU  id(0)     HzSQuantizedConv        0.999965           17.365398                   
/fc/Gemm                                          BPU  id(0)     HzSQuantizedConv        0.999967           2.144315                    
/fc/Gemm_NHWC2NCHW_LayoutConvert_Output0_reshape  CPU  --        Reshape
2024-04-02 21:47:25,897 INFO The quantify model output:
===========================================================================
Node      Cosine Similarity  L1 Distance  L2 Distance  Chebyshev Distance  
---------------------------------------------------------------------------
/fc/Gemm  0.999967           0.007190     0.005211     0.008810
2024-04-02 21:47:25,898 INFO [Tue Apr  2 21:47:25 2024] End to Horizon NN Model Convert.
2024-04-02 21:47:26,084 INFO start convert to *.bin file....
2024-04-02 21:47:26,183 INFO ONNX model output num : 1
2024-04-02 21:47:26,184 INFO ############# model deps info #############
2024-04-02 21:47:26,185 INFO hb_mapper version   : 1.9.9
2024-04-02 21:47:26,185 INFO hbdk version        : 3.37.2
2024-04-02 21:47:26,185 INFO hbdk runtime version: 3.14.14
2024-04-02 21:47:26,186 INFO horizon_nn version  : 0.14.0
2024-04-02 21:47:26,186 INFO ############# model_parameters info #############
2024-04-02 21:47:26,186 INFO onnx_model          : /open_explorer/ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/best_line_follower_model_xy.onnx
2024-04-02 21:47:26,186 INFO BPU march           : bernoulli2
2024-04-02 21:47:26,187 INFO layer_out_dump      : False
2024-04-02 21:47:26,187 INFO log_level           : DEBUG
2024-04-02 21:47:26,187 INFO working dir         : /open_explorer/ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/model_output
2024-04-02 21:47:26,187 INFO output_model_file_prefix: resnet18_224x224_nv12
2024-04-02 21:47:26,188 INFO ############# input_parameters info #############
2024-04-02 21:47:26,188 INFO ------------------------------------------
2024-04-02 21:47:26,188 INFO ---------input info : input ---------
2024-04-02 21:47:26,189 INFO input_name          : input
2024-04-02 21:47:26,189 INFO input_type_rt       : nv12
2024-04-02 21:47:26,189 INFO input_space&range   : regular
2024-04-02 21:47:26,189 INFO input_layout_rt     : None
2024-04-02 21:47:26,190 INFO input_type_train    : rgb
2024-04-02 21:47:26,190 INFO input_layout_train  : NCHW
2024-04-02 21:47:26,190 INFO norm_type           : data_mean_and_scale
2024-04-02 21:47:26,191 INFO input_shape         : 1x3x224x224
2024-04-02 21:47:26,191 INFO mean_value          : 123.675,116.28,103.53,
2024-04-02 21:47:26,191 INFO scale_value         : 0.0171248,0.017507,0.0174292,
2024-04-02 21:47:26,192 INFO cal_data_dir        : /open_explorer/ddk/samples/ai_toolchain/horizon_model_convert_sample/03_classification/10_model_convert/mapper/calibration_data_bgr_f32
2024-04-02 21:47:26,192 INFO ---------input info : input end -------
2024-04-02 21:47:26,192 INFO ------------------------------------------
2024-04-02 21:47:26,192 INFO ############# calibration_parameters info #############
2024-04-02 21:47:26,193 INFO preprocess_on       : False
2024-04-02 21:47:26,193 INFO calibration_type:   : kl
2024-04-02 21:47:26,193 INFO cal_data_type       : N/A
2024-04-02 21:47:26,194 INFO ############# compiler_parameters info #############
2024-04-02 21:47:26,194 INFO hbdk_pass_through_params: --fast --O3
2024-04-02 21:47:26,194 INFO input-source        : {'input': 'pyramid', '_default_value': 'ddr'}
2024-04-02 21:47:26,226 INFO Convert to runtime bin file sucessfully!
2024-04-02 21:47:26,226 INFO End Model Convert
[root@1e1a1a7e24f4 mapper]# 

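The parameters echoed in this log come from the conversion YAML inside the 10_model_convert package. Reconstructed from the log output alone, it should look roughly like the following; the field names follow the usual hb_mapper config convention, so double-check against the actual file before editing:

model_parameters:
  onnx_model: './best_line_follower_model_xy.onnx'
  march: 'bernoulli2'                      # BPU march of the X3
  working_dir: 'model_output'
  output_model_file_prefix: 'resnet18_224x224_nv12'
input_parameters:
  input_type_rt: 'nv12'                    # runtime input comes from the camera pyramid
  input_type_train: 'rgb'
  input_layout_train: 'NCHW'
  norm_type: 'data_mean_and_scale'
  mean_value: '123.675 116.28 103.53'
  scale_value: '0.0171248 0.017507 0.0174292'
calibration_parameters:
  cal_data_dir: './calibration_data_bgr_f32'
  calibration_type: 'kl'
compiler_parameters:
  compile_mode: 'latency'                  # assumed source of the --fast --O3 flags
  optimize_level: 'O3'
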
After the compilation succeeds, the final model file resnet18_224x224_nv12.bin is generated under the model_output path.

Copy the model file resnet18_224x224_nv12.bin into the line_follower_model package for later deployment.

Model Deployment

Copy the compiled fixed-point model resnet18_224x224_nv12.bin into the model folder of the line_follower_perception package on the OriginCar, replacing the existing model, then rebuild the workspace on the OriginCar (substitute the car's actual IP address for <OriginCar_IP> below):

scp -r ./resnet18_224x224_nv12.bin root@<OriginCar_IP>:/root/dev_ws/src/origincar/origincar_deeplearning/line_follower_perception/model/

 

After the build completes, deploy the model with the following commands; the model_path and model_name parameters specify the model's path and name:

cd /root/dev_ws/src/origincar/origincar_deeplearning/line_follower_perception/
ros2 run line_follower_perception line_follower_perception --ros-args -p model_path:=model/resnet18_224x224_nv12.bin -p model_name:=resnet18_224x224_nv12

The command execution looks like this:

root@ubuntu:~/dev_ws/src/origincar/origincar_deeplearning/line_follower_perception# ros2 run line_follower_perception line_follower_perception --ros-args -p model_path:=model/resnet18_224x224_nv12.bin -p model_name:=resnet18_224x224_nv12
[INFO] [1712122458.232674628] [dnn]: Node init.
[INFO] [1712122458.233179215] [LineFollowerPerceptionNode]: path:model/resnet18_224x224_nv12.bin

[INFO] [1712122458.233256001] [LineFollowerPerceptionNode]: name:resnet18_224x224_nv12

[INFO] [1712122458.233340036] [dnn]: Model init.
[EasyDNN]: EasyDNN version = 1.6.1_(1.18.6 DNN)
[BPU_PLAT]BPU Platform Version(1.3.3)!
[HBRT] set log level as 0. version = 3.15.25.0
[DNN] Runtime version = 1.18.6_(3.15.25 HBRT)
[A][DNN][packed_model.cpp:234][Model](2024-04-03,13:34:18.775.957) [HorizonRT] The model builder version = 1.9.9
[INFO] [1712122458.918322553] [dnn]: The model input 0 width is 224 and height is 224
[INFO] [1712122458.918465125] [dnn]: Task init.
[INFO] [1712122458.920699164] [dnn]: Set task_num [4]

Starting the Camera

First, place the OriginCar on the line-following track.

Use the following commands to start the camera driver in zero-copy mode, which speeds up the internal image processing pipeline:

export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
export CYCLONEDDS_URI='<CycloneDDS><Domain><General><NetworkInterfaceAddress>wlan0</NetworkInterfaceAddress></General></Domain></CycloneDDS>'
ros2 launch origincar_bringup usb_websocket_display.launch.py 

Once the camera is running, the dynamically detected line position can be seen in the line-following terminal.

Starting the Robot

Start the OriginCar chassis, and the robot begins following the line autonomously:

ros2 launch origincar_base origincar_bringup.launch.py 

Copyright notice: This is an original article by the blogger, licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/chdlr/article/details/137152412
