SimpleDet Docs

CLI reference

SimpleDet installs one console script and several public Python workflow callables.

Use this page to see which entry points are real shell commands and which ones are Python callables only.

Installed console script and module commands

simpledet --version
    Print the installed package version

python -m simpledet --check-runtime
    Verify the optional runtime stack used by the package

python -m simpledet --list-detectors
    List supported high-level detector architectures

python -m simpledet --list-encoders
    List supported encoder/backbone names

python -m simpledet --show-detector-help retinanet
    Explain one detector family and recommended encoders

python -m simpledet --init-project project.toml
    Write a starter project config

python -m simpledet --project-validate path/to/project.toml
    Validate a project config without running training

python -m simpledet --project-run path/to/project.toml --stages build train
    Run selected stages from a project config

python -m simpledet --train-root /data/project ...
    Run training directly from CLI arguments

python -m simpledet --infer-root /data/project ...
    Run inference directly from CLI arguments

python -m simpledet --eval-root /data/project ...
    Run evaluation directly from CLI arguments

Discovery commands

Use these before direct execution if you do not want to guess supported architecture or encoder names.

python -m simpledet --list-detectors
python -m simpledet --list-encoders
python -m simpledet --show-detector-help retinanet

--show-detector-help prints the detector family, a short summary, and a few encoder suggestions for the selected architecture.
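If you want the same discovery information from inside Python, one option is to shell out to the module entry point. This is a minimal sketch, not part of the package API; it assumes simpledet is importable in the active interpreter's environment and that discovery output is plain text on stdout:

```python
import subprocess
import sys

def discovery_argv(*cli_args):
    """Build the argv for a `python -m simpledet` invocation."""
    return [sys.executable, "-m", "simpledet", *cli_args]

def run_discovery(*cli_args):
    """Run a discovery command and return its stdout.

    Assumes the package is installed for sys.executable and that
    discovery output is plain text written to stdout.
    """
    result = subprocess.run(
        discovery_argv(*cli_args), capture_output=True, text=True, check=True
    )
    return result.stdout

# Examples (only meaningful where simpledet is installed):
# print(run_discovery("--list-detectors"))
# print(run_discovery("--show-detector-help", "retinanet"))
```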

Direct non-config workflow

Use these commands when your dataset follows the standard project layout and you want native execution without creating a project file.

python -m simpledet --train-root /data/project \
  --categories car building ship \
  --in-channels 3 \
  --detector retinanet \
  --encoder resnet18.a1_in1k \
  --batch-size 2 \
  --max-epochs 30

python -m simpledet --infer-root /data/project \
  --categories car building ship \
  --in-channels 3 \
  --detector retinanet \
  --encoder resnet18.a1_in1k

python -m simpledet --eval-root /data/project \
  --categories car building ship \
  --in-channels 3 \
  --detector retinanet \
  --encoder resnet18.a1_in1k

Direct execution currently requires --categories, --in-channels, and a high-level detector selection.


The supported model-definition path is:

  • --detector with optional --encoder and --num-classes for suite-backed high-level model selection

Direct CLI execution automatically selects the native Lightning backend when you choose a supported architecture, for example --detector retinanet, --detector vfnet, --detector centernet, or --detector faster_rcnn.

Optional runtime flags include --tif-channels-to-load, --result-folder, --resize, --batch-size, --max-epochs, --learning-rate, and --no-validate.
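As a rough mirror of how these optional flags might be parsed, the argparse sketch below uses the flag names listed above; the types and defaults here are assumptions for illustration, not the package's actual parser:

```python
import argparse

# Hypothetical mirror of the optional runtime flags. Flag names come from
# the docs above; types and defaults below are assumptions.
parser = argparse.ArgumentParser(prog="python -m simpledet", add_help=False)
parser.add_argument("--tif-channels-to-load", type=int, nargs="+")
parser.add_argument("--result-folder")
parser.add_argument("--resize", type=int)
parser.add_argument("--batch-size", type=int, default=2)
parser.add_argument("--max-epochs", type=int, default=30)
parser.add_argument("--learning-rate", type=float)
parser.add_argument("--no-validate", action="store_true")

args = parser.parse_args(
    ["--batch-size", "4", "--max-epochs", "10", "--no-validate"]
)
print(args.batch_size, args.max_epochs, args.no_validate)  # → 4 10 True
```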

Project config workflow

Use a JSON or TOML file when you want a repeatable operational entrypoint for the native runtime.

python -m simpledet --init-project project.toml
python -m simpledet --project-validate project.toml
python -m simpledet --project-run project.toml --stages build test

A reusable example can be created with --init-project and then adjusted for your dataset root and output folder.
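The concrete schema of a generated project file depends on the installed version, so generate a real one with --init-project rather than writing it by hand. Purely for orientation, the JSON fragment below shows the kind of repeatable entrypoint such a file captures; every key name here is an assumption, not the package's documented schema:

```python
import json

# Illustrative only: key names below are assumptions mirroring the direct
# CLI flags, not the schema written by `--init-project`.
example_config = """
{
  "root": "/data/project",
  "categories": ["car", "building", "ship"],
  "in_channels": 3,
  "detector": "retinanet",
  "encoder": "resnet18.a1_in1k",
  "batch_size": 2,
  "max_epochs": 30
}
"""

config = json.loads(example_config)
print(sorted(config))
```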

simpledet.train

Public Python callable, not a shell executable.

train(*, config=None, pipeline=None, build=True, **pipeline_kwargs)
  • config=... uses the lightweight torchvision path
  • Forwarded runtime kwargs map to the native execution helpers
  • detector_spec=... is the supported high-level model input
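Note that the signature is keyword-only (the leading *). A local stub, not the real function, illustrates the call shape:

```python
# Local stub mirroring the documented keyword-only signature; it runs no
# training and exists only to show how arguments must be passed.
def train(*, config=None, pipeline=None, build=True, **pipeline_kwargs):
    return {"config": config, "build": build, "pipeline_kwargs": pipeline_kwargs}

# Keyword arguments are required:
call = train(detector_spec="retinanet", build=False)
print(call["build"], call["pipeline_kwargs"])  # → False {'detector_spec': 'retinanet'}

# Positional use raises TypeError because of the leading `*`:
try:
    train("project.toml")
except TypeError:
    print("positional rejected")
```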

simpledet.detect

detect(*, pipeline=None, build=True, **pipeline_kwargs)

Runs the package inference helper for the current native runtime. This is separate from load_model() plus predict().

simpledet.evaluate

evaluate(*, pipeline=None, build=True, **pipeline_kwargs)

Currently a thin wrapper around the same native evaluation path used by run_evaluation(...).

Lightweight inference helpers

load_model(checkpoint, *, device="cpu", model_name=None, num_classes=None, score_threshold=0.05, max_detections=None)
predict(image, *, model=None, score_threshold=None)
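A hedged sketch of the two-step lightweight path, guarded so it only executes where the package is importable; the checkpoint and image paths are placeholders, not real files:

```python
import importlib.util

# Only attempt the two-step path where simpledet is actually installed.
available = importlib.util.find_spec("simpledet") is not None

if available:
    from simpledet import load_model, predict

    model = load_model(
        "checkpoints/last.ckpt",   # placeholder checkpoint path
        device="cpu",
        score_threshold=0.25,
    )
    detections = predict("images/sample.tif", model=model)
```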