79321147

Date: 2025-01-01 04:26:58
Score: 1.5
Natty:
Report link

How to export YOLO segmentation model with flexible input sizes

from ultralytics import YOLO
import coremltools as ct

# Export the model to TorchScript first
model = YOLO("yolov8n-seg.pt")
model.export(format="torchscript")

# Convert to Core ML with a flexible input size
input_shape = ct.Shape(
    shape=(1, 3, 
           ct.RangeDim(lower_bound=32, upper_bound=1024, default=640),
           ct.RangeDim(lower_bound=32, upper_bound=1024, default=640))
)

mlmodel = ct.convert(
    "yolov8n-seg.torchscript",
    inputs=[ct.ImageType(
        name="images",
        shape=input_shape,
        color_layout=ct.colorlayout.RGB,
        scale=1.0/255.0  # normalize 0-255 pixel values to 0-1
    )],
    minimum_deployment_target=ct.target.iOS16
)

mlmodel.save("yolov8n-seg-flexible.mlpackage")

This creates an .mlpackage that accepts images from 32×32 up to 1024×1024 (you can modify these bounds as needed). The default size is 640×640.
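One caveat the range bounds don't enforce: YOLO downsamples by a stride of 32, so not every size inside the range is valid — height and width should be snapped to multiples of the stride before prediction. A minimal sketch of such a helper (the function name and the rounding-up choice are my own; they mirror the lower/upper bounds used in the export above):

import math

def round_to_stride(size, stride=32, lower=32, upper=1024):
    """Round an (h, w) pair up to the nearest multiple of the model
    stride, clamped to the RangeDim bounds used at export time."""
    def snap(x):
        snapped = math.ceil(x / stride) * stride
        return max(lower, min(upper, snapped))
    h, w = size
    return snap(h), snap(w)

print(round_to_stride((720, 1280)))  # (736, 1280)

For example, a 720×1280 video frame would be resized to 736×1280 before being passed to the model.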

More details on flexible input shapes are in the coremltools documentation:

Reasons:
  • Probably link only (1):
  • Long answer (-0.5):
  • Has code block (-0.5):
  • Starts with a question (0.5): How to
  • Low reputation (1):
Posted by: Rushil M