YOLOv12: Attention-Centric Real-Time Object Detectors


Yunjie Tian¹, Qixiang Ye², David Doermann¹

¹ University at Buffalo, SUNY; ² University of Chinese Academy of Sciences.


Figure: Comparison with popular methods in terms of latency-accuracy (left) and FLOPs-accuracy (right) trade-offs.

Links: arXiv | Hugging Face Demo | Open In Colab | Kaggle Notebook | LightlyTrain Notebook | Deploy | OpenBayes

Abstract:

Enhancing the network architecture of the YOLO framework has long been essential, but improvements have concentrated on CNN-based designs despite the proven superiority of attention mechanisms in modeling capability. This is because attention-based models cannot match the speed of CNN-based ones. This paper proposes an attention-centric YOLO framework, YOLOv12, that matches the speed of previous CNN-based frameworks while harnessing the performance benefits of attention mechanisms.

YOLOv12 surpasses all popular real-time object detectors in accuracy at competitive speed. For example, YOLOv12-N achieves 40.6% mAP with an inference latency of 1.64 ms on a T4 GPU, outperforming the advanced YOLOv10-N / YOLOv11-N by 2.1% / 1.2% mAP at comparable speed. This advantage extends to other model scales. YOLOv12 also surpasses end-to-end real-time detectors that improve upon DETR, such as RT-DETR / RT-DETRv2: YOLOv12-S beats RT-DETR-R18 / RT-DETRv2-R18 while running 42% faster, using only 36% of the computation and 45% of the parameters.

Turbo (default):

| Model (det) | size (pixels) | mAP val 50-95 | Speed (ms) T4 TensorRT10 | params (M) | FLOPs (G) |
| ----------- | ------------- | ------------- | ------------------------ | ---------- | --------- |
| YOLO12n     | 640           | 40.4          | 1.60                     | 2.5        | 6.0       |
| YOLO12s     | 640           | 47.6          | 2.42                     | 9.1        | 19.4      |
| YOLO12m     | 640           | 52.5          | 4.27                     | 19.6       | 59.8      |
| YOLO12l     | 640           | 53.8          | 5.83                     | 26.5       | 82.4      |
| YOLO12x     | 640           | 55.4          | 10.38                    | 59.3       | 184.6     |

v1.0:

| Model (det) | size (pixels) | mAP val 50-95 | Speed (ms) T4 TensorRT10 | params (M) | FLOPs (G) |
| ----------- | ------------- | ------------- | ------------------------ | ---------- | --------- |
| YOLO12n     | 640           | 40.6          | 1.64                     | 2.6        | 6.5       |
| YOLO12s     | 640           | 48.0          | 2.61                     | 9.3        | 21.4      |
| YOLO12m     | 640           | 52.5          | 4.86                     | 20.2       | 67.5      |
| YOLO12l     | 640           | 53.7          | 6.77                     | 26.4       | 88.9      |
| YOLO12x     | 640           | 55.2          | 11.79                    | 59.1       | 199.0     |

Instance segmentation:

| Model (seg)  | size (pixels) | mAP box 50-95 | mAP mask 50-95 | Speed (ms) T4 TensorRT10 | params (M) | FLOPs (G) |
| ------------ | ------------- | ------------- | -------------- | ------------------------ | ---------- | --------- |
| YOLOv12n-seg | 640           | 39.9          | 32.8           | 1.84                     | 2.8        | 9.9       |
| YOLOv12s-seg | 640           | 47.5          | 38.6           | 2.84                     | 9.8        | 33.4      |
| YOLOv12m-seg | 640           | 52.4          | 42.3           | 6.27                     | 21.9       | 115.1     |
| YOLOv12l-seg | 640           | 54.0          | 43.2           | 7.61                     | 28.8       | 137.7     |
| YOLOv12x-seg | 640           | 55.2          | 44.2           | 15.43                    | 64.5       | 308.7     |
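As a rough way to reproduce the latency column, ultralytics ships a benchmarking helper that exports the model to each supported format and times it. A minimal sketch; `data='coco8.yaml'` is a small stand-in dataset, and absolute numbers depend on the GPU, TensorRT version, and batch size:

```python
from ultralytics.utils.benchmarks import benchmark

# Export yolov12n to each supported format and report speed/accuracy per format;
# the TensorRT (engine) row corresponds to the "T4 TensorRT10" column above.
benchmark(model='yolov12n.pt', data='coco8.yaml', imgsz=640, half=True, device=0)
```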
Installation:

```
wget https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.3/flash_attn-2.7.3+cu11torch2.2cxx11abiFALSE-cp311-cp311-linux_x86_64.whl
conda create -n yolov12 python=3.11
conda activate yolov12
pip install -r requirements.txt
pip install -e .
```
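A quick import check after installation (a minimal sketch to confirm the environment is set up; assumes a CUDA-capable GPU and the FlashAttention wheel above):

```python
# Verify that PyTorch, FlashAttention, and the local ultralytics fork import cleanly
import torch
import flash_attn
from ultralytics import YOLO

print('torch', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('flash_attn', flash_attn.__version__)

# Build the model from its config to confirm the repo is installed correctly
YOLO('yolov12n.yaml').info()
```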

Validation (pretrained weights: yolov12n, yolov12s, yolov12m, yolov12l, yolov12x):

```python
from ultralytics import YOLO

# Load a pretrained checkpoint (choose one of the n/s/m/l/x scales)
model = YOLO('yolov12n.pt')

# Evaluate on COCO val and save COCO-format JSON results
model.val(data='coco.yaml', save_json=True)
```
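To read the results programmatically, a minimal sketch; `metrics.box.map` and `metrics.box.map50` are assumed to be the accessors for mAP50-95 and mAP50, as in recent ultralytics releases:

```python
# val() returns a metrics object; box.map is COCO mAP50-95, box.map50 is mAP50
metrics = model.val(data='coco.yaml')
print(f"mAP50-95: {metrics.box.map:.3f}")
print(f"mAP50:    {metrics.box.map50:.3f}")
```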
Training:

```python
from ultralytics import YOLO

# Build a model from scratch using the YOLOv12n config
model = YOLO('yolov12n.yaml')

# Train the model
results = model.train(
  data='coco.yaml',
  epochs=600,
  batch=256,
  imgsz=640,
  scale=0.5,       # S:0.9; M:0.9; L:0.9; X:0.9
  mosaic=1.0,
  mixup=0.0,       # S:0.05; M:0.15; L:0.15; X:0.2
  copy_paste=0.1,  # S:0.15; M:0.4; L:0.5; X:0.6
  device="0,1,2,3",
)

# Evaluate model performance on the validation set
metrics = model.val()

# Perform object detection on an image
results = model("path/to/image.jpg")
results[0].show()
```
Prediction:

```python
from ultralytics import YOLO

# Load a pretrained checkpoint (choose one of the n/s/m/l/x scales)
model = YOLO('yolov12n.pt')

# With no arguments, predict() runs on the bundled sample assets
model.predict()
```
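The segmentation checkpoints from the table above follow the same interface. A minimal sketch; the weight filename `yolov12n-seg.pt` is an assumption inferred from the naming in the instance segmentation table:

```python
from ultralytics import YOLO

# Hypothetical weight filename, inferred from the seg table naming
model = YOLO('yolov12n-seg.pt')

results = model('path/to/image.jpg')
masks = results[0].masks  # per-instance masks (None if nothing is detected)
if masks is not None:
    print(masks.data.shape)  # (num_instances, H, W)
results[0].show()
```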
Export:

```python
from ultralytics import YOLO

# Load a pretrained checkpoint (choose one of the n/s/m/l/x scales)
model = YOLO('yolov12n.pt')

# Export to TensorRT with FP16, or use format="onnx" for ONNX
model.export(format="engine", half=True)
```
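The exported artifact can be loaded back through the same YOLO interface for inference. A minimal sketch; `yolov12n.engine` is the default output filename assumed from the export call above (use `yolov12n.onnx` for the ONNX path):

```python
from ultralytics import YOLO

# Load the exported TensorRT engine and run inference with it
trt_model = YOLO('yolov12n.engine')
results = trt_model('path/to/image.jpg')
results[0].show()
```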
Demo:

```
python app.py
# Then visit http://127.0.0.1:7860
```

Acknowledgement:

The code is based on ultralytics. Thanks for their excellent work!

Citation:

```
@article{tian2025yolov12,
  title={YOLOv12: Attention-Centric Real-Time Object Detectors},
  author={Tian, Yunjie and Ye, Qixiang and Doermann, David},
  journal={arXiv preprint arXiv:2502.12524},
  year={2025}
}
```
