ultralytics

    ultralytics/ultralytics

    Ultralytics YOLO 🚀

    Python · AGPL-3.0 · 52.3K stars · 10.0K forks · Updated 3/11/2026

    About ultralytics


    Ultralytics creates cutting-edge, state-of-the-art (SOTA) YOLO models built on years of foundational research in computer vision and AI. Constantly updated for performance and flexibility, our models are fast, accurate, and easy to use. They excel at object detection, tracking, instance segmentation, image classification, and pose estimation tasks.

    Find detailed documentation in the Ultralytics Docs. Get support via GitHub Issues. Join discussions on Discord, Reddit, and the Ultralytics Community Forums!

    Request an Enterprise License for commercial use at Ultralytics Licensing.

    YOLO11 performance plots
    Ultralytics on GitHub · LinkedIn · Twitter · YouTube · TikTok · BiliBili · Discord

    📄 Documentation

    See below for quickstart installation and usage examples. For comprehensive guidance on training, validation, prediction, and deployment, refer to our full Ultralytics Docs.

    Install

    Install the ultralytics package, including all requirements, in a Python>=3.8 environment with PyTorch>=1.8.

    PyPI - Version Ultralytics Downloads PyPI - Python Version

    pip install ultralytics
    

    For alternative installation methods, including Conda, Docker, and building from source via Git, please consult the Quickstart Guide.

    Conda Version Docker Image Version Ultralytics Docker Pulls

    Usage

    CLI

    You can use Ultralytics YOLO directly from the Command Line Interface (CLI) with the yolo command:

    # Predict using a pretrained YOLO model (e.g., YOLO11n) on an image
    yolo predict model=yolo11n.pt source='https://ultralytics.com/images/bus.jpg'
    

    The yolo command supports various tasks and modes, accepting additional arguments like imgsz=640. Explore the YOLO CLI Docs for more examples.

    Python

    Ultralytics YOLO can also be integrated directly into your Python projects. It accepts the same configuration arguments as the CLI:

    from ultralytics import YOLO
    
    # Load a pretrained YOLO11n model
    model = YOLO("yolo11n.pt")
    
    # Train the model on the COCO8 dataset for 100 epochs
    train_results = model.train(
        data="coco8.yaml",  # Path to dataset configuration file
        epochs=100,  # Number of training epochs
        imgsz=640,  # Image size for training
        device="cpu",  # Device to run on (e.g., 'cpu', 0, [0,1,2,3])
    )
    
    # Evaluate the model's performance on the validation set
    metrics = model.val()
    
    # Perform object detection on an image
    results = model("path/to/image.jpg")  # Predict on an image
    results[0].show()  # Display results
    
    # Export the model to ONNX format for deployment
    path = model.export(format="onnx")  # Returns the path to the exported model
    

    Discover more examples in the YOLO Python Docs.

    ✨ Models

    Ultralytics supports a wide range of YOLO models, from early versions like YOLOv3 to the latest YOLO11. The tables below showcase YOLO11 models pretrained on the COCO dataset for Detection, Segmentation, and Pose Estimation. Additionally, Classification models pretrained on the ImageNet dataset are available. Tracking mode is compatible with all Detection, Segmentation, and Pose models. All models are automatically downloaded from the latest Ultralytics release upon first use.

    Ultralytics YOLO supported tasks

    Detection (COCO)

    Explore the Detection Docs for usage examples. These models are trained on the COCO dataset, featuring 80 object classes.

    | Model   | size (pixels) | mAPval 50-95 | Speed CPU ONNX (ms) | Speed T4 TensorRT10 (ms) | params (M) | FLOPs (B) |
    | ------- | ------------- | ------------ | ------------------- | ------------------------ | ---------- | --------- |
    | YOLO11n | 640           | 39.5         | 56.1 ± 0.8          | 1.5 ± 0.0                | 2.6        | 6.5       |
    | YOLO11s | 640           | 47.0         | 90.0 ± 1.2          | 2.5 ± 0.0                | 9.4        | 21.5      |
    | YOLO11m | 640           | 51.5         | 183.2 ± 2.0         | 4.7 ± 0.1                | 20.1       | 68.0      |
    | YOLO11l | 640           | 53.4         | 238.6 ± 1.4         | 6.2 ± 0.1                | 25.3       | 86.9      |
    | YOLO11x | 640           | 54.7         | 462.8 ± 6.7         | 11.3 ± 0.2               | 56.9       | 194.9     |
    • mAPval values refer to single-model single-scale performance on the COCO val2017 dataset. See YOLO Performance Metrics for details.
      Reproduce with yolo val detect data=coco.yaml device=0
    • Speed metrics are averaged over COCO val images using an Amazon EC2 P4d instance. CPU speeds measured with ONNX export. GPU speeds measured with TensorRT export.
      Reproduce with yolo val detect data=coco.yaml batch=1 device=0|cpu
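    The mAPval 50-95 column averages precision over IoU thresholds from 0.50 to 0.95 in steps of 0.05. As a refresher on the overlap measure behind those thresholds, here is a minimal, generic IoU sketch (illustration only, not Ultralytics source code):

```python
# Generic IoU sketch: boxes given as (x1, y1, x2, y2) corner coordinates.
def box_iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes in xyxy format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, roughly 0.143
```

    At the mAP50 threshold this prediction would be rejected (IoU < 0.5); a detector only scores it as a true positive once its box overlaps the ground truth more tightly.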
    Segmentation (COCO)

    Refer to the Segmentation Docs for usage examples. These models are trained on COCO-Seg, including 80 classes.

    | Model       | size (pixels) | mAPbox 50-95 | mAPmask 50-95 | Speed CPU ONNX (ms) | Speed T4 TensorRT10 (ms) | params (M) | FLOPs (B) |
    | ----------- | ------------- | ------------ | ------------- | ------------------- | ------------------------ | ---------- | --------- |
    | YOLO11n-seg | 640           | 38.9         | 32.0          | 65.9 ± 1.1          | 1.8 ± 0.0                | 2.9        | 10.4      |
    | YOLO11s-seg | 640           | 46.6         | 37.8          | 117.6 ± 4.9         | 2.9 ± 0.0                | 10.1       | 35.5      |
    | YOLO11m-seg | 640           | 51.5         | 41.5          | 281.6 ± 1.2         | 6.3 ± 0.1                | 22.4       | 123.3     |
    | YOLO11l-seg | 640           | 53.4         | 42.9          | 344.2 ± 3.2         | 7.8 ± 0.2                | 27.6       | 142.2     |
    | YOLO11x-seg | 640           | 54.7         | 43.8          | 664.5 ± 3.2         | 15.8 ± 0.7               | 62.1       | 319.0     |
    • mAPval values are for single-model single-scale on the COCO val2017 dataset. See YOLO Performance Metrics for details.
      Reproduce with yolo val segment data=coco.yaml device=0
    • Speed metrics are averaged over COCO val images using an Amazon EC2 P4d instance. CPU speeds measured with ONNX export. GPU speeds measured with TensorRT export.
      Reproduce with yolo val segment data=coco.yaml batch=1 device=0|cpu
    Classification (ImageNet)

    Consult the Classification Docs for usage examples. These models are trained on ImageNet, covering 1000 classes.

    | Model       | size (pixels) | acc top1 | acc top5 | Speed CPU ONNX (ms) | Speed T4 TensorRT10 (ms) | params (M) | FLOPs (B) at 224 |
    | ----------- | ------------- | -------- | -------- | ------------------- | ------------------------ | ---------- | ---------------- |
    | YOLO11n-cls | 224           | 70.0     | 89.4     | 5.0 ± 0.3           | 1.1 ± 0.0                | 1.6        | 0.5              |
    | YOLO11s-cls | 224           | 75.4     | 92.7     | 7.9 ± 0.2           | 1.3 ± 0.0                | 5.5        | 1.6              |
    | YOLO11m-cls | 224           | 77.3     | 93.9     | 17.2 ± 0.4          | 2.0 ± 0.0                | 10.4       | 5.0              |
    | YOLO11l-cls | 224           | 78.3     | 94.3     | 23.2 ± 0.3          | 2.8 ± 0.0                | 12.9       | 6.2              |
    | YOLO11x-cls | 224           | 79.5     | 94.9     | 41.4 ± 0.9          | 3.8 ± 0.0                | 28.4       | 13.7             |
    • acc values represent model accuracy on the ImageNet dataset validation set.
      Reproduce with yolo val classify data=path/to/ImageNet device=0
    • Speed metrics are averaged over ImageNet val images using an Amazon EC2 P4d instance. CPU speeds measured with ONNX export. GPU speeds measured with TensorRT export.
      Reproduce with yolo val classify data=path/to/ImageNet batch=1 device=0|cpu
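    Top-1 accuracy counts a prediction as correct only when the true class is the single highest-scoring class; top-5 accepts the true class anywhere in the five highest. A minimal generic sketch of the metric (illustration only, not Ultralytics code):

```python
# Generic top-k accuracy sketch over per-sample class scores.
def topk_accuracy(scores, labels, k):
    """scores: list of per-class score lists; labels: true class indices."""
    hits = 0
    for row, label in zip(scores, labels):
        # Indices of the k highest-scoring classes for this sample
        topk = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)

scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]
labels = [1, 2]  # second sample's true class is only ranked third
print(topk_accuracy(scores, labels, 1))  # 0.5
print(topk_accuracy(scores, labels, 3))  # 1.0
```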
    Pose (COCO)

    See the Pose Estimation Docs for usage examples. These models are trained on COCO-Pose, focusing on the 'person' class.

    | Model        | size (pixels) | mAPpose 50-95 | mAPpose 50 | Speed CPU ONNX (ms) | Speed T4 TensorRT10 (ms) | params (M) | FLOPs (B) |
    | ------------ | ------------- | ------------- | ---------- | ------------------- | ------------------------ | ---------- | --------- |
    | YOLO11n-pose | 640           | 50.0          | 81.0       | 52.4 ± 0.5          | 1.7 ± 0.0                | 2.9        | 7.6       |
    | YOLO11s-pose | 640           | 58.9          | 86.3       | 90.5 ± 0.6          | 2.6 ± 0.0                | 9.9        | 23.2      |
    | YOLO11m-pose | 640           | 64.9          | 89.4       | 187.3 ± 0.8         | 4.9 ± 0.1                | 20.9       | 71.7      |
    | YOLO11l-pose | 640           | 66.1          | 89.9       | 247.7 ± 1.1         | 6.4 ± 0.1                | 26.2       | 90.7      |
    | YOLO11x-pose | 640           | 69.5          | 91.1       | 488.0 ± 13.9        | 12.1 ± 0.2               | 58.8       | 203.3     |
    • mAPval values are for single-model single-scale on the COCO Keypoints val2017 dataset. See YOLO Performance Metrics for details.
      Reproduce with yolo val pose data=coco-pose.yaml device=0
    • Speed metrics are averaged over COCO val images using an Amazon EC2 P4d instance. CPU speeds measured with ONNX export. GPU speeds measured with TensorRT export.
      Reproduce with yolo val pose data=coco-pose.yaml batch=1 device=0|cpu
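    Pose mAP is computed with Object Keypoint Similarity (OKS) in place of box IoU: each predicted keypoint is scored by its distance to the ground truth, normalized by object scale and a per-keypoint constant. A one-keypoint sketch of the COCO OKS term (values illustrative, not Ultralytics code):

```python
import math

# Sketch of the COCO Object Keypoint Similarity (OKS) term for one keypoint.
# s is the object scale and k the per-keypoint constant from the COCO
# definition; the values used below are illustrative only.
def oks_single(pred, gt, s, k):
    """OKS term for one keypoint: exp(-d^2 / (2 * s^2 * k^2))."""
    d2 = (pred[0] - gt[0]) ** 2 + (pred[1] - gt[1]) ** 2
    return math.exp(-d2 / (2 * s**2 * k**2))

print(oks_single((10.0, 10.0), (10.0, 10.0), s=50.0, k=0.1))  # 1.0 (exact match)
```

    Like IoU, OKS lies in (0, 1] and decays as the prediction drifts from the ground-truth keypoint, so the same 50-95 threshold sweep applies.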
    Oriented Bounding Boxes (DOTAv1)

    Check the OBB Docs for usage examples. These models are trained on DOTAv1, including 15 classes.

    | Model       | size (pixels) | mAPtest 50 | Speed CPU ONNX (ms) | Speed T4 TensorRT10 (ms) | params (M) | FLOPs (B) |
    | ----------- | ------------- | ---------- | ------------------- | ------------------------ | ---------- | --------- |
    | YOLO11n-obb | 1024          | 78.4       | 117.6 ± 0.8         | 4.4 ± 0.0                | 2.7        | 17.2      |
    | YOLO11s-obb | 1024          | 79.5       | 219.4 ± 4.0         | 5.1 ± 0.0                | 9.7        | 57.5      |
    | YOLO11m-obb | 1024          | 80.9       | 562.8 ± 2.9         | 10.1 ± 0.4               | 20.9       | 183.5     |
    | YOLO11l-obb | 1024          | 81.0       | 712.5 ± 5.0         | 13.5 ± 0.6               | 26.2       | 232.0    |
    | YOLO11x-obb | 1024          | 81.3       | 1408.6 ± 7.7        | 28.6 ± 1.0               | 58.8       | 520.2     |
    • mAPtest values are for single-model multiscale performance on the DOTAv1 test set.
      Reproduce by yolo val obb data=DOTAv1.yaml device=0 split=test and submit merged results to the DOTA evaluation server.
    • Speed metrics are averaged over DOTAv1 val images using an Amazon EC2 P4d instance. CPU speeds measured with ONNX export. GPU speeds measured with TensorRT export.
      Reproduce by yolo val obb data=DOTAv1.yaml batch=1 device=0|cpu
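    An oriented bounding box is typically parameterized as a center point, width, height, and rotation angle. A generic geometry sketch (not the Ultralytics OBB implementation) of recovering the four corner points from that parameterization:

```python
import math

# Sketch: convert a rotated box (cx, cy, w, h, angle in radians) to its
# four corner points. Generic geometry, not Ultralytics internals.
def obb_corners(cx, cy, w, h, angle):
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    corners = []
    # Offsets of the four corners in the box's local frame, then rotated
    # and translated into image coordinates.
    for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)):
        corners.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return corners

print(obb_corners(0.0, 0.0, 4.0, 2.0, 0.0))
# [(-2.0, -1.0), (2.0, -1.0), (2.0, 1.0), (-2.0, 1.0)]
```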

    🧩 Integrations

    Our key integrations with leading AI platforms extend the functionality of Ultralytics' offerings, enhancing tasks like dataset labeling, training, visualization, and model management. Discover how Ultralytics, in collaboration with partners like Weights & Biases, Comet ML, Roboflow, and Intel OpenVINO, can optimize your AI workflow. Explore more at Ultralytics Integrations.

    Ultralytics active learning integrations

    | Ultralytics HUB 🌟 | Weights & Biases | Comet | Neural Magic |
    | --- | --- | --- | --- |
    | Streamline YOLO workflows: Label, train, and deploy effortlessly with Ultralytics HUB. Try now! | Track experiments, hyperparameters, and results with Weights & Biases. | Free forever, Comet ML lets you save YOLO models, resume training, and interactively visualize predictions. | Run YOLO inference up to 6x faster with Neural Magic DeepSparse. |

    🌟 Ultralytics HUB

    Experience seamless AI with Ultralytics HUB, the all-in-one platform for data visualization, training YOLO models, and deployment—no coding required. Transform images into actionable insights and bring your AI visions to life effortlessly using our cutting-edge platform and user-friendly Ultralytics App. Start your journey for Free today!

    Ultralytics HUB preview image

    🤝 Contribute

    We thrive on community collaboration! Ultralytics YOLO wouldn't be the SOTA framework it is without contributions from developers like you. Please see our Contributing Guide to get started. We also welcome your feedback—share your experience by completing our Survey. A huge Thank You 🙏 to everyone who contributes!

    Ultralytics open-source contributors

    We look forward to your contributions to help make the Ultralytics ecosystem even better!

    📜 License

    Ultralytics offers two licensing options to suit different needs:

    • AGPL-3.0 License: This OSI-approved open-source license is perfect for students, researchers, and enthusiasts. It encourages open collaboration and knowledge sharing. See the LICENSE file for full details.
    • Ultralytics Enterprise License: Designed for commercial use, this license allows for the seamless integration of Ultralytics software and AI models into commercial products and services, bypassing the open-source requirements of AGPL-3.0. If your use case involves commercial deployment, please contact us via Ultralytics Licensing.

    📞 Contact

    For bug reports and feature requests related to Ultralytics software, please visit GitHub Issues. For questions, discussions, and community support, join our active communities on Discord, Reddit, and the Ultralytics Community Forums. We're here to help with all things Ultralytics!


