## 👋 Hello
We write your reusable computer vision tools. Whether you need to load your dataset from your hard drive, draw detections on an image or video, or count how many detections are in a zone, you can count on us! 🤝
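As a taste of what that involves: counting detections in a zone usually reduces to checking which box centers fall inside a region. A minimal plain-Python sketch of the idea, assuming a rectangular zone and made-up boxes (the library itself handles arbitrary polygon zones):

```python
# hypothetical detections: bounding boxes in (x1, y1, x2, y2) format
boxes = [(10, 10, 50, 50), (200, 200, 260, 240), (30, 40, 80, 90)]

# rectangular zone for illustration; real zones may be any polygon
zx1, zy1, zx2, zy2 = 0, 0, 100, 100

def in_zone(box):
    # a box counts as "inside" if its center lies in the zone
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    return zx1 <= cx <= zx2 and zy1 <= cy <= zy2

count = sum(in_zone(b) for b in boxes)
print(count)
# 2
```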
## 💻 Install
Pip install the supervision package in a Python>=3.9 environment:

```bash
pip install supervision
```
Read more about conda, mamba, and installing from source in our guide.
## 🔥 Quickstart
### Models
Supervision was designed to be model agnostic. Just plug in any classification, detection, or segmentation model. For your convenience, we have created connectors for the most popular libraries like Ultralytics, Transformers, or MMDetection.
```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(...)
model = YOLO("yolov8s.pt")
result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)

len(detections)
# 5
```
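Under the hood, a `Detections` object bundles parallel arrays (boxes, confidences, class ids), which is why `len()` reports the number of detections. A rough plain-Python sketch of that layout, purely illustrative and not supervision's actual numpy-backed class:

```python
from dataclasses import dataclass

@dataclass
class MiniDetections:
    # parallel lists, one entry per detection (illustrative stand-in
    # for supervision's array-backed Detections)
    xyxy: list        # bounding boxes as (x1, y1, x2, y2)
    confidence: list  # model scores in [0, 1]
    class_id: list    # integer class labels

    def __len__(self):
        return len(self.xyxy)

    def filter(self, threshold):
        # keep only detections scoring above the threshold
        keep = [i for i, c in enumerate(self.confidence) if c > threshold]
        return MiniDetections(
            [self.xyxy[i] for i in keep],
            [self.confidence[i] for i in keep],
            [self.class_id[i] for i in keep],
        )

dets = MiniDetections(
    xyxy=[(0, 0, 10, 10), (5, 5, 20, 20), (1, 1, 4, 4)],
    confidence=[0.9, 0.4, 0.75],
    class_id=[0, 0, 2],
)
print(len(dets), len(dets.filter(0.5)))
# 3 2
```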
👉 More model connectors
* inference

  Running with Inference requires a Roboflow API KEY.

  ```python
  import cv2
  import supervision as sv
  from inference import get_model

  image = cv2.imread(...)
  model = get_model(model_id="yolov8s-640", api_key=<ROBOFLOW API KEY>)
  result = model.infer(image)[0]
  detections = sv.Detections.from_inference(result)

  len(detections)
  # 5
  ```
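Each `from_*` connector is essentially an adapter: it normalizes a library's raw result into the common parallel-array layout. A hedged sketch of that pattern, using a made-up raw-result shape (the field names `predictions`, `box`, `score`, and `label` are invented for illustration):

```python
def from_fake_model(result):
    """Adapter sketch: map a hypothetical raw result dict onto
    the parallel-array layout that connectors converge on."""
    preds = result["predictions"]
    return {
        "xyxy": [tuple(p["box"]) for p in preds],
        "confidence": [p["score"] for p in preds],
        "class_id": [p["label"] for p in preds],
    }

raw = {"predictions": [
    {"box": [0, 0, 10, 10], "score": 0.9, "label": 1},
    {"box": [5, 5, 8, 9], "score": 0.6, "label": 0},
]}
detections = from_fake_model(raw)
print(len(detections["xyxy"]))
# 2
```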
### Annotators
Supervision offers a wide range of highly customizable annotators, allowing you to compose the perfect visualization for your use case.
```python
import cv2
import supervision as sv

image = cv2.imread(...)
detections = sv.Detections(...)

box_annotator = sv.BoxAnnotator()
annotated_frame = box_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```
https://github.com/roboflow/supervision/assets/26109316/691e219c-0565-4403-9218-ab5644f39bce
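Conceptually, a box annotator just paints each detection's rectangle onto a copy of the frame. A toy sketch on a plain 2D grid (the real `BoxAnnotator` draws on cv2 images with color and thickness options):

```python
def draw_box(frame, box, value=1):
    # frame: list of lists (H x W); box: (x1, y1, x2, y2), end-exclusive
    x1, y1, x2, y2 = box
    for x in range(x1, x2):        # top and bottom edges
        frame[y1][x] = value
        frame[y2 - 1][x] = value
    for y in range(y1, y2):        # left and right edges
        frame[y][x1] = value
        frame[y][x2 - 1] = value
    return frame

frame = [[0] * 8 for _ in range(6)]
draw_box(frame, (1, 1, 5, 4))
for row in frame:
    print("".join(str(v) for v in row))
```

Each detection's box becomes a hollow rectangle; annotating a frame is just repeating this for every box while leaving the original image untouched (hence `image.copy()` above).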
### Datasets
Supervision provides a set of utils that allow you to load, split, merge, and save datasets in one of the supported formats.
```python
import supervision as sv
from roboflow import Roboflow

project = Roboflow().workspace(<WORKSPACE_ID>).project(<PROJECT_ID>)
dataset = project.version(<PROJECT_VERSION>).download("coco")

ds = sv.DetectionDataset.from_coco(
    images_directory_path=f"{dataset.location}/train",
    annotations_path=f"{dataset.location}/train/_annotations.coco.json",
)

path, image, annotation = ds[0]
# loads image on demand

for path, image, annotation in ds:
    ...  # loads image on demand
```
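"Loads image on demand" means the dataset stores only paths and annotations, and decodes an image from disk only when you index or iterate. A minimal sketch of that lazy pattern (illustrative, not the actual `DetectionDataset` code; the `loader` callable stands in for something like `cv2.imread`):

```python
class LazyDataset:
    """Stores only paths and annotations; decodes images on access."""

    def __init__(self, paths, annotations, loader):
        self.paths = paths
        self.annotations = annotations
        self.loader = loader  # image-reading callable, injected here

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        # the image is read here, not in __init__
        return self.paths[i], self.loader(self.paths[i]), self.annotations[i]

loads = []  # track which files were actually read
ds = LazyDataset(
    paths=["a.jpg", "b.jpg"],
    annotations=["ann_a", "ann_b"],
    loader=lambda p: loads.append(p) or f"pixels({p})",
)
print(len(loads))          # 0: construction reads nothing
path, image, annotation = ds[0]
print(len(loads), image)   # 1 pixels(a.jpg)
```

This keeps memory proportional to the annotation metadata rather than the full image set, which matters for large datasets.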
👉 More dataset utils
* load

  ```python
  dataset = sv.DetectionDataset.from_yolo(
      images_directory_path=...,
      annotations_directory_path=...,
      data_yaml_path=...
  )

  dataset = sv.DetectionDataset.from_pascal_voc(
      images_directory_path=...,
      annotations_directory_path=...
  )

  dataset = sv.DetectionDataset.from_coco(
      images_directory_path=...,
      annotations_path=...
  )
  ```
* split

  ```python
  train_dataset, test_dataset = dataset.split(split_ratio=0.7)
  test_dataset, valid_dataset = test_dataset.split(split_ratio=0.5)

  len(train_dataset), len(test_dataset), len(valid_dataset)
  # (700, 150, 150)
  ```
* merge

  ```python
  ds_1 = sv.DetectionDataset(...)
  len(ds_1)
  # 100
  ds_1.classes
  # ['dog', 'person']

  ds_2 = sv.DetectionDataset(...)
  len(ds_2)
  # 200
  ds_2.classes
  # ['cat']

  ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
  len(ds_merged)
  # 300
  ds_merged.classes
  # ['cat', 'dog', 'person']
  ```
* save

  ```python
  dataset.as_yolo(
      images_directory_path=...,
      annotations_directory_path=...,
      data_yaml_path=...
  )

  dataset.as_pascal_voc(
      images_directory_path=...,
      annotations_directory_path=...
  )

  dataset.as_coco(
      images_directory_path=...,
      annotations_path=...
  )
  ```
* convert

  ```python
  sv.DetectionDataset.from_yolo(
      images_directory_path=...,
      annotations_directory_path=...,
      data_yaml_path=...
  ).as_pascal_voc(
      images_directory_path=...,
      annotations_directory_path=...
  )
  ```
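The split ratios and merge behavior above are easy to reason about in isolation; a plain-Python sketch of both, illustrative only and not the library internals (`sample` indices stand in for dataset entries):

```python
def split(items, split_ratio):
    # deterministic head/tail split sketch (70% -> first 70% of items)
    cut = int(len(items) * split_ratio)
    return items[:cut], items[cut:]

samples = list(range(1000))
train, rest = split(samples, 0.7)
test, valid = split(rest, 0.5)
print(len(train), len(test), len(valid))
# 700 150 150

def merge_classes(*class_lists):
    # union of class names, sorted, matching the merge example:
    # ['dog', 'person'] + ['cat'] -> ['cat', 'dog', 'person']
    return sorted({name for classes in class_lists for name in classes})

print(merge_classes(["dog", "person"], ["cat"]))
# ['cat', 'dog', 'person']
```

Splitting twice, 0.7 then 0.5 of the remainder, is what turns 1000 samples into the 700/150/150 train/test/valid counts shown above; merging unions the class vocabularies and sizes add up (100 + 200 = 300).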