How to export a YOLO segmentation model to Core ML with flexible input sizes
from ultralytics import YOLO
import coremltools as ct

# Export the PyTorch weights to TorchScript first
model = YOLO("yolov8n-seg.pt")
model.export(format="torchscript")

# Convert to Core ML with a flexible (ranged) input size
input_shape = ct.Shape(
    shape=(
        1,
        3,
        ct.RangeDim(lower_bound=32, upper_bound=1024, default=640),
        ct.RangeDim(lower_bound=32, upper_bound=1024, default=640),
    )
)

mlmodel = ct.convert(
    "yolov8n-seg.torchscript",
    inputs=[
        ct.ImageType(
            name="images",
            shape=input_shape,
            color_layout=ct.colorlayout.RGB,
            scale=1.0 / 255.0,
        )
    ],
    minimum_deployment_target=ct.target.iOS16,
)
mlmodel.save("yolov8n-seg-flexible.mlpackage")
This creates an .mlpackage that accepts images from 32×32 up to 1024×1024 (adjust the bounds as needed); the default input size is 640×640. Note that YOLOv8 downsamples by a total stride of 32, so at runtime the height and width you feed should be multiples of 32, even though the RangeDim bounds themselves don't enforce that.
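If your app only ever uses a handful of sizes, coremltools also supports enumerated shapes instead of a continuous range, which Core ML can often execute more efficiently than ranged dimensions. Below is a minimal sketch of the same conversion using ct.EnumeratedShapes; the particular size list is just an example, not something the original export requires:

import coremltools as ct

# Alternative: enumerate a fixed set of input sizes instead of a range.
# The sizes below are illustrative; pick the ones your app actually uses.
input_shape = ct.EnumeratedShapes(
    shapes=[
        (1, 3, 320, 320),
        (1, 3, 640, 640),
        (1, 3, 1280, 1280),
    ],
    default=(1, 3, 640, 640),
)

mlmodel = ct.convert(
    "yolov8n-seg.torchscript",
    inputs=[
        ct.ImageType(
            name="images",
            shape=input_shape,
            color_layout=ct.colorlayout.RGB,
            scale=1.0 / 255.0,
        )
    ],
    minimum_deployment_target=ct.target.iOS16,
)
mlmodel.save("yolov8n-seg-enumerated.mlpackage")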
For more details, see the coremltools documentation on flexible input shapes:
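To sanity-check the exported package, you can load it and push blank images through at a few sizes (Core ML prediction via coremltools only runs on macOS). The sketch below assumes the input name "images" from the conversion above and prints the output shapes generically, since the converter assigns the output names:

# Quick shape sanity check on macOS; the blank images are placeholders,
# not a real inference test.
from PIL import Image
import coremltools as ct

mlmodel = ct.models.MLModel("yolov8n-seg-flexible.mlpackage")

for size in (320, 640, 960):  # all multiples of the model stride (32)
    img = Image.new("RGB", (size, size))
    outputs = mlmodel.predict({"images": img})
    print(size, {name: getattr(arr, "shape", type(arr)) for name, arr in outputs.items()})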