Date: 2024-12-19 12:17:10

I'm facing the same issue. I trained a YOLO11 model with only one class. The model performed well during training, validation, and testing.

```
Validating runs/detect/train/weights/best.pt...
Ultralytics 8.3.49 🚀 Python-3.9.6 torch-2.2.2 CPU (Intel Core(TM) i3-8100B 3.60GHz)
YOLO11n summary (fused): 238 layers, 2,582,347 parameters, 0 gradients, 6.3 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 3/3 [00:09<00:00, 3.29s/it]
                   all         79       1229      0.955      0.908      0.971      0.861
Speed: 3.1ms preprocess, 111.1ms inference, 0.0ms loss, 0.8ms postprocess per image
```

If you export the model with `format=coreml` and `nms=True`, it generates a `.mlpackage` file. In Xcode, the preview tab can analyze images and detect objects correctly with this file. However, detection does not work when calling `model.prediction(input:)` on the generated model class, nor with `VNCoreMLRequest(model: YOLOModel)`.
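The export step described above can be sketched with the Ultralytics Python API (the weights path is taken from the training log; adjust it to your own run):

```python
from ultralytics import YOLO

# Load the trained single-class model (path from the training run above)
model = YOLO("runs/detect/train/weights/best.pt")

# Export to Core ML with an embedded NMS layer.
# With recent coremltools this produces a .mlpackage file.
model.export(format="coreml", nms=True)
```

This is the export that works in the Xcode preview but not via `model.prediction(input:)` or `VNCoreMLRequest`.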

To resolve this issue, export the model using `format=mlmodel` with `coremltools==6.2` installed. The resulting `.mlmodel` file is compatible with both methods: direct `model.prediction(input:)` calls and `VNCoreMLRequest` both work as expected.
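A minimal sketch of the working export, assuming `coremltools==6.2` is installed as described above (note: `format=mlmodel` is the flag reported here; in current Ultralytics releases the Core ML format flag is `coreml`, and the container type, `.mlmodel` vs `.mlpackage`, follows the installed coremltools version):

```python
# Requires: pip install coremltools==6.2
# (newer coremltools versions produce .mlpackage instead of .mlmodel)
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")

# Export to the legacy .mlmodel format with NMS included; this file
# works both with model.prediction(input:) and with VNCoreMLRequest
model.export(format="mlmodel", nms=True)
```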

Posted by: Guillaume