SimpleDet Docs

Inference

This page documents three inference paths: pipeline-managed test execution via detect(), direct non-config execution via run_inference(), and lightweight prediction from a torchvision-compatible checkpoint via predict(). Pick the path that matches your checkpoint type and the kind of outputs you need.

Pipeline inference

Use this when the checkpoint comes from the pipeline path and you want evaluator-compatible test-time outputs.

from simpledet import detect
from simpledet.suite import build_detector

outputs = detect(
    build=True,
    detector_spec=build_detector(
        "faster_rcnn",
        encoder="convnext_tiny.in12k_ft_in1k",
        num_classes=4,
        in_channels=3,
    ),
    data_folder="/data/project",
    result_folder="results/project",
    annot_file_train="/data/project/annotations/train.json",
    annot_file_val="/data/project/annotations/val.json",
    annot_file_test="/data/project/annotations/test.json",
    tif_channels_to_load=[1, 1, 1],
    in_channels=3,
    categories=("a", "b", "c", "d"),
)

This is the same test-time path used during evaluation. It writes the native manifest for the run and returns the prediction payload for downstream scoring.
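If you post-process the returned payload yourself, a common downstream step is converting per-image detections into COCO-style result records for scoring. As a sketch only, assuming predictions follow the widespread torchvision convention of xyxy boxes with parallel labels and scores (the exact payload shape of detect() is defined by the pipeline, not shown here):

```python
def to_coco_record(image_id, box_xyxy, label, score):
    """Convert one detection (xyxy box) into a COCO-style result record.

    COCO results use xywh boxes, so width and height are derived from
    the corner coordinates.
    """
    x1, y1, x2, y2 = box_xyxy
    return {
        "image_id": int(image_id),
        "category_id": int(label),
        "bbox": [float(x1), float(y1), float(x2 - x1), float(y2 - y1)],
        "score": float(score),
    }

record = to_coco_record(7, (10.0, 20.0, 50.0, 80.0), 2, 0.91)
# record["bbox"] is [10.0, 20.0, 40.0, 60.0]
```

The xyxy-to-xywh conversion is the piece most often gotten wrong when handing detector output to a COCO evaluator, which is why it is shown explicitly here.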

Direct non-config inference

Use this when your dataset follows the standard project layout and you want pipeline-managed test execution without creating a config file first.

from simpledet import run_inference
from simpledet.suite import build_detector

result = run_inference(
    dataset_root="/data/project",
    detector_spec=build_detector(
        "faster_rcnn",
        encoder="convnext_tiny.in12k_ft_in1k",
        num_classes=4,
        in_channels=3,
    ),
    categories=("a", "b", "c", "d"),
    in_channels=3,
)

This helper executes the pipeline build and test stages directly and returns the same stage payloads the pipeline would produce.

Lightweight inference

Use this only for checkpoints created by the lightweight training flow or other compatible torchvision detector checkpoints.

from simpledet.detectors.infer import load_model, predict

model = load_model(
    "runs/exp001/epoch_001.pth",
    device="cpu",
    score_threshold=0.10,
    max_detections=50,
)

single = predict("sample.png", model=model)
batch = predict(["a.png", "b.png"], model=model)

This path returns Python dictionaries directly. It is useful for ad hoc prediction, but it does not reproduce the pipeline evaluator behavior.
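When the loader-level score_threshold and max_detections options are not enough, the returned dictionaries can be post-filtered in plain Python. A sketch, assuming each prediction dict carries parallel boxes, labels, and scores sequences in the torchvision style (this helper is illustrative, not part of the simpledet API):

```python
def filter_predictions(pred, score_threshold=0.5, max_detections=None):
    """Keep detections with score >= score_threshold, optionally capped.

    Assumes `pred` holds parallel `boxes`, `labels`, and `scores`
    sequences; the result is returned in descending score order.
    """
    triples = sorted(
        zip(pred["boxes"], pred["labels"], pred["scores"]),
        key=lambda t: t[2],
        reverse=True,
    )
    kept = [t for t in triples if t[2] >= score_threshold]
    if max_detections is not None:
        kept = kept[:max_detections]
    boxes, labels, scores = zip(*kept) if kept else ((), (), ())
    return {"boxes": list(boxes), "labels": list(labels), "scores": list(scores)}
```

Sorting before truncation matters: capping at max_detections should drop the lowest-scoring detections, not whichever happened to come last.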

Which one should you use?

Use detect()

When you want pipeline-managed testing against the configured dataset and evaluator stack.

Use predict()

When you want direct predictions from a compatible lightweight checkpoint in Python code.