
Training v5lite-s at 416x416 directly on COCO with no parameter changes, mAP is only 35.2 #63

Closed
Broad-sky opened this issue Nov 1, 2021 · 12 comments · Fixed by #9
Labels
documentation Improvements or additions to documentation

Comments

@Broad-sky

I trained v5lite-s directly on COCO with the original parameters, input 416x416, and got the following test results:
Test command: python test.py --device 0 --conf-thres 0.1 --iou-thres 0.5

Class Images Labels P R mAP@.5 mAP@.5:.95: 100%|█| 79/79 [00:45<00:00
all 5000 36335 0.537 0.363 0.352 0.203

Testing with the v5lite-s model you provided, input 416x416, gives:
Test command: python test.py --device 0 --conf-thres 0.1 --iou-thres 0.5

Class Images Labels P R mAP@.5 mAP@.5:.95: 100%|█| 79/79 [00:48<00:00
all 5000 36335 0.542 0.388 0.373 0.225

The mAP differs by 2 points. What could be causing this? Looking forward to your reply, thanks!

@ppogg
Owner

ppogg commented Nov 1, 2021

Evaluating with the test.py packaged with v5 scores 1-2 points lower than it should. Please use the eval.py evaluation script under scripts, which calls the pycocotools API directly: https://github.com/ppogg/YOLOv5-Lite/blob/master/scripts/eval.py. To use it, replace the paths in the script with the xx_precisions.json generated by test.py:
(screenshot of eval.py with the two file paths highlighted)
The blue line is the real coco_val.json that you downloaded.
The red line is the xx_precisions.json generated by your test.py run.
Test commands: python test.py --device 0 --conf-thres 0.01 --iou-thres 0.45, then python eval.py
Once you have run the test, please post a screenshot of the metrics.
These are the YOLOv5 discussions about the accuracy reported by test.py. You can also write your own evaluation script based on the repos in the links, but I recommend calling the COCO API directly:
ultralytics/yolov5#5116
ultralytics/yolov5#2258
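
For reference, here is a minimal sketch of the pycocotools flow that scripts/eval.py wraps. The two file paths are placeholders, not the paths hard-coded in the repository script; point them at your own ground-truth annotations and the detections JSON that test.py wrote.

```python
# Minimal sketch, assuming pycocotools is installed and test.py has already
# written its detections JSON (the "xx_precisions.json" mentioned above).
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

anno_json = "annotations/instances_val2017.json"  # ground truth ("coco_val.json"), placeholder path
pred_json = "xx_precisions.json"                  # detections from test.py, placeholder path

anno = COCO(anno_json)            # load ground-truth annotations
pred = anno.loadRes(pred_json)    # load detections as a COCO result set
ev = COCOeval(anno, pred, "bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()                    # prints the AP/AR table quoted in the replies below
```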

@Broad-sky
Author

Hi there~

Here are the results of evaluating the v5lite-s.pt model you provided with the eval.py script:

Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.228
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.376
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.239
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.052
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.233
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.379
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.209
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.291
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.297
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.068
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.308
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.497

And here are the eval.py results for the v5lite-s model I trained myself:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.207
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.356
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.209
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.058
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.218
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.332
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.195
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.275
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.281
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.076
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.296
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.455

@Broad-sky
Author

What I would actually like to ask is: why does training v5lite-s directly with the original parameters come out 2 points lower than yours? Do I need to tune the hyperparameters, or do I need to use a pretrained backbone?

@ppogg
Owner

ppogg commented Nov 1, 2021

That's still too low. A gap of only 0.1 or 0.2 percentage points would be normal, but your mAP@0.5 is already four to five points off. Please also test the v5lite-g.pt model and post those metrics in a bit. Thanks!

@Broad-sky
Author

I couldn't download the v5lite-g model you provided, so I'll train one myself and post the results. Is there anything else I should watch out for during training?

Looking forward to your reply, thanks!

@ppogg
Owner

ppogg commented Nov 1, 2021

(screenshot)
Please refer to https://blog.csdn.net/weixin_45829462/article/details/119767896: for the last 15 epochs of training, turn off the other augmentations, keep mosaic at 0.2, and set the learning rate to 0.0001.
Also, you still need to measure the COCO metrics for v5lite-c.pt and v5lite-g. Unless you find out why your test results are this low, it will be hard to estimate the true performance of the models you train yourself. Please test v5lite-c and v5lite-g and post screenshots, and I can help take a look.
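
As a rough illustration (not the repository's exact script), one way to set up that final stage with a YOLOv5-style hyperparameter file: copy hyp.scratch.yaml, zero out the augmentations, keep a little mosaic, lower the learning rate, and continue training from the latest checkpoint. The file names and the train.py invocation below are assumptions, only the key names follow the stock YOLOv5 hyp format.

```python
# Hedged sketch of the "last 15 epochs" recipe described above.
# Assumes YOLOv5-style hyp YAML keys (lr0, mosaic, mixup, ...);
# paths and the command at the end are illustrative only.
import yaml

with open("data/hyp.scratch.yaml") as f:
    hyp = yaml.safe_load(f)

hyp.update({
    "lr0": 0.0001,                  # small fixed learning rate for the final stage
    "mosaic": 0.2,                  # keep weak mosaic (set to 0.0 for v5lite-s, per the correction below)
    "mixup": 0.0,                   # everything else off
    "hsv_h": 0.0, "hsv_s": 0.0, "hsv_v": 0.0,
    "degrees": 0.0, "translate": 0.0, "scale": 0.0,
    "shear": 0.0, "perspective": 0.0,
    "flipud": 0.0, "fliplr": 0.0,
})

with open("data/hyp.finetune_last15.yaml", "w") as f:
    yaml.safe_dump(hyp, f)

# Then continue training from the latest checkpoint, roughly:
#   python train.py --weights runs/train/exp/weights/last.pt \
#       --hyp data/hyp.finetune_last15.yaml --epochs 15 --img 416
```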

@Broad-sky
Author

Does this training strategy work for all of v5lite-s, v5lite-c, and v5lite-g?
OK, I'll post screenshots once the tests are done. Thanks.

@ppogg
Owner

ppogg commented Nov 1, 2021

It works for v5lite-s. For c and g, the mosaic scale should be changed to 0.2 and 0.5 respectively during the last 15 epochs.
Also, a correction to what I wrote above: for s, all augmentation should be turned off in the last 15 epochs.

@ppogg
Owner

ppogg commented Nov 1, 2021

Hi, you haven't posted the metrics for the other two models yet. Also, you first need to resolve the four-to-five-point gap between your evaluation and the repository models, so that your later reproductions can be trusted. You can add me on QQ at 1138099162 and over the next few evenings I'll help you run through the COCO API evaluation flow.

@ppogg
Owner

ppogg commented Nov 1, 2021

Huh, why couldn't it be downloaded???

@Broad-sky
Author

Thanks for your continued replies and help. For the few-point gap when evaluating directly with your v5lite-s model, the main cause was that I had changed conf_thresh from 0.0001 to 0.1 and the IoU threshold from 0.45 to 0.5. After changing them back, I get the mAP you published.

I've sent you a QQ friend request; please accept it. Thanks.

@ppogg
Owner

ppogg commented Nov 1, 2021

@ppogg ppogg added the documentation Improvements or additions to documentation label Nov 2, 2021
@ppogg ppogg linked a pull request Nov 2, 2021 that will close this issue