👋 Hello

We write reusable computer vision tools for you. Whether you need to load a dataset from your hard drive, draw detections on an image or video, or count how many detections appear in a zone, you can count on us! 🤝

💻 Install

Install the supervision package in a Python>=3.9 environment using pip:

```bash
pip install supervision
```

For conda, mamba, and source installation methods, check out our guide.
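As a quick sketch of the conda route (assuming the package is fetched from the conda-forge channel):

```bash
conda install -c conda-forge supervision
```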

🔥 Quickstart

Models

Supervision is designed to be model agnostic. Just plug in any classification, detection, or segmentation model. For your convenience, we provide connectors for the most popular libraries, such as Ultralytics, Transformers, and MMDetection.

```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(...)
model = YOLO("yolov8s.pt")
result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)

len(detections)
# 5
```
👉 More model connectors

inference

Running with Inference requires a Roboflow API KEY.

```python
import cv2
import supervision as sv
from inference import get_model

image = cv2.imread(...)
model = get_model(model_id="yolov8s-640", api_key=<ROBOFLOW API KEY>)
result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)

len(detections)
# 5
```
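Whichever connector produced them, the resulting sv.Detections can be narrowed down with boolean indexing before drawing or counting. A minimal sketch; the 0.5 threshold and class id 0 are illustrative values, not part of the examples above:

```python
import supervision as sv

detections = sv.Detections(...)  # e.g. the result of sv.Detections.from_ultralytics(result)

# keep only confident predictions (illustrative 0.5 threshold)
detections = detections[detections.confidence > 0.5]

# keep only one class by its class_id (illustrative id 0)
detections = detections[detections.class_id == 0]
```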

Annotators

Supervision offers a wide range of highly customizable annotators, allowing you to compose the perfect visualization for your use case.

```python
import cv2
import supervision as sv

image = cv2.imread(...)
detections = sv.Detections(...)

box_annotator = sv.BoxAnnotator()
annotated_frame = box_annotator.annotate(
    scene=image.copy(),
    detections=detections)
```

https://github.com/roboflow/supervision/assets/26109316/691e219c-0565-4403-9218-ab5644f39bce
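Annotators can also be layered on top of each other. As a minimal sketch, a LabelAnnotator could add class names and confidences to the boxes drawn above; the label format is illustrative and assumes the detections carry a "class_name" data field, as the Ultralytics connector provides:

```python
import supervision as sv

# continuing from the snippet above: `detections` and `annotated_frame` already exist
labels = [
    f"{class_name} {confidence:.2f}"
    for class_name, confidence
    in zip(detections["class_name"], detections.confidence)
]

label_annotator = sv.LabelAnnotator()
annotated_frame = label_annotator.annotate(
    scene=annotated_frame,
    detections=detections,
    labels=labels)
```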

Datasets

Supervision provides a set of utils that allow you to load, split, merge, and save datasets in a range of supported formats.

```python
import supervision as sv
from roboflow import Roboflow

project = Roboflow().workspace(<WORKSPACE_ID>).project(<PROJECT_ID>)
dataset = project.version(<PROJECT_VERSION>).download("coco")

ds = sv.DetectionDataset.from_coco(
    images_directory_path=f"{dataset.location}/train",
    annotations_path=f"{dataset.location}/train/_annotations.coco.json",
)

path, image, annotation = ds[0]
# loads image on demand

for path, image, annotation in ds:
    ...  # loads image on demand
```
👉 More dataset utils

load

```python
dataset = sv.DetectionDataset.from_yolo(
    images_directory_path=...,
    annotations_directory_path=...,
    data_yaml_path=...
)

dataset = sv.DetectionDataset.from_pascal_voc(
    images_directory_path=...,
    annotations_directory_path=...
)

dataset = sv.DetectionDataset.from_coco(
    images_directory_path=...,
    annotations_path=...
)
```

split

```python
train_dataset, test_dataset = dataset.split(split_ratio=0.7)
test_dataset, valid_dataset = test_dataset.split(split_ratio=0.5)

len(train_dataset), len(test_dataset), len(valid_dataset)
# (700, 150, 150)
```

merge

```python
ds_1 = sv.DetectionDataset(...)
len(ds_1)
# 100
ds_1.classes
# ['dog', 'person']

ds_2 = sv.DetectionDataset(...)
len(ds_2)
# 200
ds_2.classes
# ['cat']

ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
len(ds_merged)
# 300
ds_merged.classes
# ['cat', 'dog', 'person']
```

save

```python
dataset.as_yolo(
    images_directory_path=...,
    annotations_directory_path=...,
    data_yaml_path=...
)

dataset.as_pascal_voc(
    images_directory_path=...,
    annotations_directory_path=...
)

dataset.as_coco(
    images_directory_path=...,
    annotations_path=...
)
```

convert

```python
sv.DetectionDataset.from_yolo(
    images_directory_path=...,
    annotations_directory_path=...,
    data_yaml_path=...
).as_pascal_voc(
    images_directory_path=...,
    annotations_directory_path=...
)
```
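Since each dataset entry yields an image together with its ground-truth sv.Detections, the annotators from the previous section can be reused to inspect a sample. A minimal sketch, assuming the `ds` object loaded above:

```python
import supervision as sv

# `ds` is the sv.DetectionDataset loaded above
path, image, annotations = ds[0]

# draw the ground-truth boxes on a copy of the sample image
box_annotator = sv.BoxAnnotator()
annotated_image = box_annotator.annotate(
    scene=image.copy(),
    detections=annotations)
```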