Verifying .wts weights in TensorRT

2024/7/10 22:56:02 · Tags: YOLO

You can refer to my modified project (note: the inference results are currently incorrect and still being fixed):

https://github.com/lindsayshuo/yolov8-cls-tensorrtx

After building the network with the TensorRT API, you can verify the shapes of the weights loaded from the .wts file with code like the following:

// Dump the name, element count, and dtype of every conv/linear weight in the map
for (const auto& kv : weightMap) {
    if (kv.first.find("conv.weight") != std::string::npos ||
        kv.first.find("linear.weight") != std::string::npos) {  // match conv.weight or linear.weight
        std::cout << "Weight name: " << kv.first << ", ";
        std::cout << "Count: " << kv.second.count << ", ";
        std::cout << "Type: " << (kv.second.type == nvinfer1::DataType::kFLOAT ? "FLOAT" :
                                  kv.second.type == nvinfer1::DataType::kHALF  ? "HALF" : "INT8")
                  << std::endl;
    }
}
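For context, the `weightMap` above is typically populated by the tensorrtx-style `loadWeights` function, which parses the plain-text .wts format: the first line is the number of tensors, and each following line is `<name> <count>` followed by `count` hex-encoded float32 values. A minimal sketch of the same parse in Python (the tiny inline .wts string here is made-up example data, not from the real model):

```python
import struct

# Made-up two-tensor .wts payload for illustration;
# hex values are the raw big-endian bits of float32 numbers
wts_text = """2
model.0.conv.bias 2 3f800000 40000000
model.0.conv.weight 1 3f000000
"""

def parse_wts(text):
    """Parse a tensorrtx-style .wts text blob into {name: [float, ...]}."""
    lines = text.strip().splitlines()
    n = int(lines[0])                     # first line: number of tensors
    weights = {}
    for line in lines[1:1 + n]:
        parts = line.split()
        name, count = parts[0], int(parts[1])
        # each hex token is one float32, stored as its raw bit pattern
        vals = [struct.unpack(">f", bytes.fromhex(h.zfill(8)))[0]
                for h in parts[2:2 + count]]
        weights[name] = vals
    return weights

w = parse_wts(wts_text)
print(w["model.0.conv.bias"])    # [1.0, 2.0]
print(w["model.0.conv.weight"])  # [0.5]
```

This makes it clear what `Count` in the C++ dump refers to: the number of float values stored for that tensor in the .wts file.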

Running it produces the following output:

Loading weights: ../yolov8n-cls.wts
Weight name: model.0.conv.weight, Count: 432, Type: FLOAT
Weight name: model.1.conv.weight, Count: 4608, Type: FLOAT
Weight name: model.2.cv1.conv.weight, Count: 1024, Type: FLOAT
Weight name: model.2.cv2.conv.weight, Count: 1536, Type: FLOAT
Weight name: model.2.m.0.cv1.conv.weight, Count: 2304, Type: FLOAT
Weight name: model.2.m.0.cv2.conv.weight, Count: 2304, Type: FLOAT
Weight name: model.3.conv.weight, Count: 18432, Type: FLOAT
Weight name: model.4.cv1.conv.weight, Count: 4096, Type: FLOAT
Weight name: model.4.cv2.conv.weight, Count: 8192, Type: FLOAT
Weight name: model.4.m.0.cv1.conv.weight, Count: 9216, Type: FLOAT
Weight name: model.4.m.0.cv2.conv.weight, Count: 9216, Type: FLOAT
Weight name: model.4.m.1.cv1.conv.weight, Count: 9216, Type: FLOAT
Weight name: model.4.m.1.cv2.conv.weight, Count: 9216, Type: FLOAT
Weight name: model.5.conv.weight, Count: 73728, Type: FLOAT
Weight name: model.6.cv1.conv.weight, Count: 16384, Type: FLOAT
Weight name: model.6.cv2.conv.weight, Count: 32768, Type: FLOAT
Weight name: model.6.m.0.cv1.conv.weight, Count: 36864, Type: FLOAT
Weight name: model.6.m.0.cv2.conv.weight, Count: 36864, Type: FLOAT
Weight name: model.6.m.1.cv1.conv.weight, Count: 36864, Type: FLOAT
Weight name: model.6.m.1.cv2.conv.weight, Count: 36864, Type: FLOAT
Weight name: model.7.conv.weight, Count: 294912, Type: FLOAT
Weight name: model.8.cv1.conv.weight, Count: 65536, Type: FLOAT
Weight name: model.8.cv2.conv.weight, Count: 98304, Type: FLOAT
Weight name: model.8.m.0.cv1.conv.weight, Count: 147456, Type: FLOAT
Weight name: model.8.m.0.cv2.conv.weight, Count: 147456, Type: FLOAT
Weight name: model.9.conv.conv.weight, Count: 327680, Type: FLOAT
Weight name: model.9.linear.weight, Count: 1280000, Type: FLOAT
[03/10/2024-23:30:13] [W] [TRT] The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
max_channels : [1280]
Input shape: [3, 224, 224]
Calculated channel width: 16
Maximum channel limit: 1280
conv0 Dimensions(3): [ 16 112 112 ]
Calculated channel width: 32
Maximum channel limit: 1280
conv1 Dimensions(3): [ 32 56 56 ]
Calculated channel width: 32
Maximum channel limit: 1280
Calculated channel width: 32
Maximum channel limit: 1280
conv2 Dimensions(3): [ 32 56 56 ]
Calculated channel width: 64
Maximum channel limit: 1280
conv3 Dimensions(3): [ 64 28 28 ]
Calculated channel width: 64
Maximum channel limit: 1280
Calculated channel width: 64
Maximum channel limit: 1280
conv4 Dimensions(3): [ 64 28 28 ]
Calculated channel width: 128
Maximum channel limit: 1280
conv5 Dimensions(3): [ 128 14 14 ]
Calculated channel width: 128
Maximum channel limit: 1280
Calculated channel width: 128
Maximum channel limit: 1280
conv6 Dimensions(3): [ 128 14 14 ]
Calculated channel width: 256
Maximum channel limit: 1280
conv7 Dimensions(3): [ 256 7 7 ]
Calculated channel width: 256
Maximum channel limit: 1280
Calculated channel width: 256
Maximum channel limit: 1280
conv8 Dimensions(3): [ 256 7 7 ]
conv_class Dimensions(3): [ 1280 9 9 ]
Dimensions of the output from pool2 layer: 1280 1 1 
Number of feature maps: 1
model.9.linear.weight count: 1280000
Shape of model.9.linear.weight: [1000 x 1280]
Output shape of yolo: [1000, 1, 1]
Building engine, please wait for a while...

Take `Weight name: model.0.conv.weight, Count: 432, Type: FLOAT` as an example: 432 = 16 × 3 × 3 × 3. To trace where this comes from, you can dump the corresponding tensors from the ONNX model with the following script:

import onnx

model_in_file = 'yolov8n-cls.onnx'

if __name__ == "__main__":
    model = onnx.load(model_in_file)

    # Print node info: the strides of every Conv node
    nodes = model.graph.node
    for node in nodes:
        if node.op_type == 'Conv':  # only convolution nodes
            for attribute in node.attribute:
                if attribute.name == 'strides':
                    strides = attribute.ints
                    print(f'Node name: {node.name}, Strides: {strides}')

    # Print every node's output names, flagging 'stride_32' if present
    for nid, node in enumerate(nodes):
        if node.output[0] == 'stride_32':
            print('Found stride_32: index =', nid)
        else:
            print(node.output)

    # Print initializer (weight tensor) info: name, dtype, shape
    for el in model.graph.initializer:
        print('name:', el.name, ' dtype:', el.data_type, ' dim:', el.dims)

    # Print the graph outputs
    print(model.graph.output)

    # Print the input shapes
    for inp in model.graph.input:
        print('Input name:', inp.name)
        try:
            # The type info is wrapped; unwrap tensor_type to read it
            tensor_type = inp.type.tensor_type
            # ONNX encodes data types as an enum
            dtype = onnx.TensorProto.DataType.Name(tensor_type.elem_type)
            shape = [dim.dim_value for dim in tensor_type.shape.dim]
            print(' Data type:', dtype)
            print(' Shape:', shape)
        except Exception:
            print('Input shape is not fully defined.')

    print('Done')

Now look at the output:

Node name: /model.0/conv/Conv, Strides: [2, 2]
Node name: /model.1/conv/Conv, Strides: [2, 2]
Node name: /model.2/cv1/conv/Conv, Strides: [1, 1]
Node name: /model.2/m.0/cv1/conv/Conv, Strides: [1, 1]
Node name: /model.2/m.0/cv2/conv/Conv, Strides: [1, 1]
Node name: /model.2/cv2/conv/Conv, Strides: [1, 1]
Node name: /model.3/conv/Conv, Strides: [2, 2]
Node name: /model.4/cv1/conv/Conv, Strides: [1, 1]
Node name: /model.4/m.0/cv1/conv/Conv, Strides: [1, 1]
Node name: /model.4/m.0/cv2/conv/Conv, Strides: [1, 1]
Node name: /model.4/m.1/cv1/conv/Conv, Strides: [1, 1]
Node name: /model.4/m.1/cv2/conv/Conv, Strides: [1, 1]
Node name: /model.4/cv2/conv/Conv, Strides: [1, 1]
Node name: /model.5/conv/Conv, Strides: [2, 2]
Node name: /model.6/cv1/conv/Conv, Strides: [1, 1]
Node name: /model.6/m.0/cv1/conv/Conv, Strides: [1, 1]
Node name: /model.6/m.0/cv2/conv/Conv, Strides: [1, 1]
Node name: /model.6/m.1/cv1/conv/Conv, Strides: [1, 1]
Node name: /model.6/m.1/cv2/conv/Conv, Strides: [1, 1]
Node name: /model.6/cv2/conv/Conv, Strides: [1, 1]
Node name: /model.7/conv/Conv, Strides: [2, 2]
Node name: /model.8/cv1/conv/Conv, Strides: [1, 1]
Node name: /model.8/m.0/cv1/conv/Conv, Strides: [1, 1]
Node name: /model.8/m.0/cv2/conv/Conv, Strides: [1, 1]
Node name: /model.8/cv2/conv/Conv, Strides: [1, 1]
Node name: /model.9/conv/conv/Conv, Strides: [1, 1]
['/model.0/conv/Conv_output_0']
['/model.0/act/Sigmoid_output_0']
['/model.0/act/Mul_output_0']
['/model.1/conv/Conv_output_0']
['/model.1/act/Sigmoid_output_0']
['/model.1/act/Mul_output_0']
['/model.2/cv1/conv/Conv_output_0']
['/model.2/cv1/act/Sigmoid_output_0']
['/model.2/cv1/act/Mul_output_0']
['onnx::Split_64']
['/model.2/Split_output_0', '/model.2/Split_output_1']
['/model.2/m.0/cv1/conv/Conv_output_0']
['/model.2/m.0/cv1/act/Sigmoid_output_0']
['/model.2/m.0/cv1/act/Mul_output_0']
['/model.2/m.0/cv2/conv/Conv_output_0']
['/model.2/m.0/cv2/act/Sigmoid_output_0']
['/model.2/m.0/cv2/act/Mul_output_0']
['/model.2/m.0/Add_output_0']
['/model.2/Concat_output_0']
['/model.2/cv2/conv/Conv_output_0']
['/model.2/cv2/act/Sigmoid_output_0']
['/model.2/cv2/act/Mul_output_0']
['/model.3/conv/Conv_output_0']
['/model.3/act/Sigmoid_output_0']
['/model.3/act/Mul_output_0']
['/model.4/cv1/conv/Conv_output_0']
['/model.4/cv1/act/Sigmoid_output_0']
['/model.4/cv1/act/Mul_output_0']
['onnx::Split_84']
['/model.4/Split_output_0', '/model.4/Split_output_1']
['/model.4/m.0/cv1/conv/Conv_output_0']
['/model.4/m.0/cv1/act/Sigmoid_output_0']
['/model.4/m.0/cv1/act/Mul_output_0']
['/model.4/m.0/cv2/conv/Conv_output_0']
['/model.4/m.0/cv2/act/Sigmoid_output_0']
['/model.4/m.0/cv2/act/Mul_output_0']
['/model.4/m.0/Add_output_0']
['/model.4/m.1/cv1/conv/Conv_output_0']
['/model.4/m.1/cv1/act/Sigmoid_output_0']
['/model.4/m.1/cv1/act/Mul_output_0']
['/model.4/m.1/cv2/conv/Conv_output_0']
['/model.4/m.1/cv2/act/Sigmoid_output_0']
['/model.4/m.1/cv2/act/Mul_output_0']
['/model.4/m.1/Add_output_0']
['/model.4/Concat_output_0']
['/model.4/cv2/conv/Conv_output_0']
['/model.4/cv2/act/Sigmoid_output_0']
['/model.4/cv2/act/Mul_output_0']
['/model.5/conv/Conv_output_0']
['/model.5/act/Sigmoid_output_0']
['/model.5/act/Mul_output_0']
['/model.6/cv1/conv/Conv_output_0']
['/model.6/cv1/act/Sigmoid_output_0']
['/model.6/cv1/act/Mul_output_0']
['onnx::Split_111']
['/model.6/Split_output_0', '/model.6/Split_output_1']
['/model.6/m.0/cv1/conv/Conv_output_0']
['/model.6/m.0/cv1/act/Sigmoid_output_0']
['/model.6/m.0/cv1/act/Mul_output_0']
['/model.6/m.0/cv2/conv/Conv_output_0']
['/model.6/m.0/cv2/act/Sigmoid_output_0']
['/model.6/m.0/cv2/act/Mul_output_0']
['/model.6/m.0/Add_output_0']
['/model.6/m.1/cv1/conv/Conv_output_0']
['/model.6/m.1/cv1/act/Sigmoid_output_0']
['/model.6/m.1/cv1/act/Mul_output_0']
['/model.6/m.1/cv2/conv/Conv_output_0']
['/model.6/m.1/cv2/act/Sigmoid_output_0']
['/model.6/m.1/cv2/act/Mul_output_0']
['/model.6/m.1/Add_output_0']
['/model.6/Concat_output_0']
['/model.6/cv2/conv/Conv_output_0']
['/model.6/cv2/act/Sigmoid_output_0']
['/model.6/cv2/act/Mul_output_0']
['/model.7/conv/Conv_output_0']
['/model.7/act/Sigmoid_output_0']
['/model.7/act/Mul_output_0']
['/model.8/cv1/conv/Conv_output_0']
['/model.8/cv1/act/Sigmoid_output_0']
['/model.8/cv1/act/Mul_output_0']
['onnx::Split_138']
['/model.8/Split_output_0', '/model.8/Split_output_1']
['/model.8/m.0/cv1/conv/Conv_output_0']
['/model.8/m.0/cv1/act/Sigmoid_output_0']
['/model.8/m.0/cv1/act/Mul_output_0']
['/model.8/m.0/cv2/conv/Conv_output_0']
['/model.8/m.0/cv2/act/Sigmoid_output_0']
['/model.8/m.0/cv2/act/Mul_output_0']
['/model.8/m.0/Add_output_0']
['/model.8/Concat_output_0']
['/model.8/cv2/conv/Conv_output_0']
['/model.8/cv2/act/Sigmoid_output_0']
['/model.8/cv2/act/Mul_output_0']
['/model.9/conv/conv/Conv_output_0']
['/model.9/conv/act/Sigmoid_output_0']
['/model.9/conv/act/Mul_output_0']
['/model.9/pool/GlobalAveragePool_output_0']
['/model.9/Flatten_output_0']
['/model.9/linear/Gemm_output_0']
['output0']
name: model.0.conv.weight  dtype: 1  dim: [16, 3, 3, 3]
name: model.0.conv.bias  dtype: 1  dim: [16]
name: model.1.conv.weight  dtype: 1  dim: [32, 16, 3, 3]
name: model.1.conv.bias  dtype: 1  dim: [32]
name: model.2.cv1.conv.weight  dtype: 1  dim: [32, 32, 1, 1]
name: model.2.cv1.conv.bias  dtype: 1  dim: [32]
name: model.2.cv2.conv.weight  dtype: 1  dim: [32, 48, 1, 1]
name: model.2.cv2.conv.bias  dtype: 1  dim: [32]
name: model.2.m.0.cv1.conv.weight  dtype: 1  dim: [16, 16, 3, 3]
name: model.2.m.0.cv1.conv.bias  dtype: 1  dim: [16]
name: model.2.m.0.cv2.conv.weight  dtype: 1  dim: [16, 16, 3, 3]
name: model.2.m.0.cv2.conv.bias  dtype: 1  dim: [16]
name: model.3.conv.weight  dtype: 1  dim: [64, 32, 3, 3]
name: model.3.conv.bias  dtype: 1  dim: [64]
name: model.4.cv1.conv.weight  dtype: 1  dim: [64, 64, 1, 1]
name: model.4.cv1.conv.bias  dtype: 1  dim: [64]
name: model.4.cv2.conv.weight  dtype: 1  dim: [64, 128, 1, 1]
name: model.4.cv2.conv.bias  dtype: 1  dim: [64]
name: model.4.m.0.cv1.conv.weight  dtype: 1  dim: [32, 32, 3, 3]
name: model.4.m.0.cv1.conv.bias  dtype: 1  dim: [32]
name: model.4.m.0.cv2.conv.weight  dtype: 1  dim: [32, 32, 3, 3]
name: model.4.m.0.cv2.conv.bias  dtype: 1  dim: [32]
name: model.4.m.1.cv1.conv.weight  dtype: 1  dim: [32, 32, 3, 3]
name: model.4.m.1.cv1.conv.bias  dtype: 1  dim: [32]
name: model.4.m.1.cv2.conv.weight  dtype: 1  dim: [32, 32, 3, 3]
name: model.4.m.1.cv2.conv.bias  dtype: 1  dim: [32]
name: model.5.conv.weight  dtype: 1  dim: [128, 64, 3, 3]
name: model.5.conv.bias  dtype: 1  dim: [128]
name: model.6.cv1.conv.weight  dtype: 1  dim: [128, 128, 1, 1]
name: model.6.cv1.conv.bias  dtype: 1  dim: [128]
name: model.6.cv2.conv.weight  dtype: 1  dim: [128, 256, 1, 1]
name: model.6.cv2.conv.bias  dtype: 1  dim: [128]
name: model.6.m.0.cv1.conv.weight  dtype: 1  dim: [64, 64, 3, 3]
name: model.6.m.0.cv1.conv.bias  dtype: 1  dim: [64]
name: model.6.m.0.cv2.conv.weight  dtype: 1  dim: [64, 64, 3, 3]
name: model.6.m.0.cv2.conv.bias  dtype: 1  dim: [64]
name: model.6.m.1.cv1.conv.weight  dtype: 1  dim: [64, 64, 3, 3]
name: model.6.m.1.cv1.conv.bias  dtype: 1  dim: [64]
name: model.6.m.1.cv2.conv.weight  dtype: 1  dim: [64, 64, 3, 3]
name: model.6.m.1.cv2.conv.bias  dtype: 1  dim: [64]
name: model.7.conv.weight  dtype: 1  dim: [256, 128, 3, 3]
name: model.7.conv.bias  dtype: 1  dim: [256]
name: model.8.cv1.conv.weight  dtype: 1  dim: [256, 256, 1, 1]
name: model.8.cv1.conv.bias  dtype: 1  dim: [256]
name: model.8.cv2.conv.weight  dtype: 1  dim: [256, 384, 1, 1]
name: model.8.cv2.conv.bias  dtype: 1  dim: [256]
name: model.8.m.0.cv1.conv.weight  dtype: 1  dim: [128, 128, 3, 3]
name: model.8.m.0.cv1.conv.bias  dtype: 1  dim: [128]
name: model.8.m.0.cv2.conv.weight  dtype: 1  dim: [128, 128, 3, 3]
name: model.8.m.0.cv2.conv.bias  dtype: 1  dim: [128]
name: model.9.conv.conv.weight  dtype: 1  dim: [1280, 256, 1, 1]
name: model.9.conv.conv.bias  dtype: 1  dim: [1280]
name: model.9.linear.weight  dtype: 1  dim: [1000, 1280]
name: model.9.linear.bias  dtype: 1  dim: [1000]
[name: "output0"
type {
  tensor_type {
    elem_type: 1
    shape {
      dim {
        dim_value: 1
      }
      dim {
        dim_value: 1000
      }
    }
  }
}
]
Input name: images
 Data type: FLOAT
 Shape: [1, 3, 224, 224]
Done

Clearly, for `name: model.0.conv.weight  dtype: 1  dim: [16, 3, 3, 3]`, the product of its dimensions (16 × 3 × 3 × 3 = 432) is exactly the `Count: 432` reported when loading the .wts file, which confirms the two sources agree.
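This cross-check can be automated: for each tensor, the element count in the .wts dump must equal the product of the dims in the ONNX initializer dump. A minimal sketch using a few representative entries copied from the logs above (the two dicts are hand-filled here; in practice you would populate them from the two dumps):

```python
from math import prod

# dims copied from the ONNX initializer dump above
onnx_dims = {
    "model.0.conv.weight":      [16, 3, 3, 3],
    "model.9.conv.conv.weight": [1280, 256, 1, 1],
    "model.9.linear.weight":    [1000, 1280],
}
# counts reported by the TensorRT-side .wts loader above
wts_counts = {
    "model.0.conv.weight":      432,
    "model.9.conv.conv.weight": 327680,
    "model.9.linear.weight":    1280000,
}

for name, dims in onnx_dims.items():
    expected = prod(dims)  # total element count implied by the ONNX shape
    assert expected == wts_counts[name], \
        f"{name}: wts has {wts_counts[name]} elements, ONNX implies {expected}"
    print(f"{name}: OK ({expected} elements)")
```

If any assertion fires, the .wts export and the API-built network disagree on that tensor's shape, which is a common cause of wrong inference results.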
