SimpleDet Docs

Overview

SimpleDet is structured to answer one practical question: how do you keep model definition, runtime configuration, and experiment outputs understandable enough to rerun later?

Use this page to understand the three layers of the repo before diving into commands or code.

How to read this repository

The repo is intentionally split into layers. The model authoring layer says what detector you want. The runtime layer says how to train and evaluate it on your dataset. The lightweight helper layer exists for small checks, but it is not the preferred surface for serious experiments.

1. Detector authoring layer

This is where you describe the architecture itself: backbone, neck, head or decoder, and class count.

  • simpledet.suite defines EncoderSpec, NeckSpec, HeadSpec, DecoderSpec, and DetectorSpec.
  • Supported native architectures: retinanet, retina, fcos, atss, gfl, vfnet, fovea, foveabox, reppoints, yolof, centernet, faster_rcnn, mask_rcnn, grid_rcnn, and cascade_rcnn.
  • Backbone sources: timm encoders or explicit registry-backed custom backbones.

Start here if: you want to decide what detector to run without manually editing several channel and feature-map fields.
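As a hedged illustration of how the spec types listed above compose, here is a minimal self-contained sketch. The field names and defaults are assumptions for illustration only; the real simpledet.suite classes will have more fields (channels, feature maps, strides) than shown here.

```python
from dataclasses import dataclass

# Hypothetical, simplified stand-ins for the spec classes named above;
# the actual simpledet.suite definitions may differ.
@dataclass
class EncoderSpec:
    name: str            # e.g. a timm encoder name or a registry key
    pretrained: bool = True

@dataclass
class NeckSpec:
    name: str = "fpn"

@dataclass
class HeadSpec:
    num_classes: int

@dataclass
class DetectorSpec:
    architecture: str    # one of the native architectures, e.g. "retinanet"
    encoder: EncoderSpec
    neck: NeckSpec
    head: HeadSpec

# Describe the detector once; channel/feature-map wiring is the library's job.
spec = DetectorSpec(
    architecture="retinanet",
    encoder=EncoderSpec(name="resnet50"),
    neck=NeckSpec(),
    head=HeadSpec(num_classes=3),
)
print(spec.architecture, spec.head.num_classes)
```

The point of the layer is that this object answers "what detector is this?" on its own, with no dataset paths or training settings mixed in.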

2. Native runtime layer

This is where dataset paths, workdir layout, evaluators, logging, and training settings are attached to the detector.

  • run_training, run_inference, and run_evaluation are the canonical runtime entrypoints.
  • Pass detector_spec=... to these entrypoints for the recommended flow.
  • Use ProjectConfig plus run_project(...) when you want a reusable project file.

Start here if: you already know what model you want and need a reproducible experiment run with checkpoints and evaluation outputs.
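To make the split concrete, here is a hedged sketch of the kind of runtime settings this layer attaches to a detector, and the workdir layout a run would write into. All names here (RuntimeSettings, prepare_workdir, the subdirectory names) are illustrative placeholders, not the actual run_training or ProjectConfig API.

```python
from dataclasses import dataclass, field
from pathlib import Path

# Hypothetical sketch of runtime configuration attached to a detector spec;
# field names are assumptions for illustration, not the simpledet API.
@dataclass
class RuntimeSettings:
    data_root: str
    workdir: str
    max_epochs: int = 12
    evaluators: list = field(default_factory=lambda: ["coco_bbox"])

def prepare_workdir(settings: RuntimeSettings) -> Path:
    """Create an experiment directory layout for checkpoints, eval, and logs."""
    workdir = Path(settings.workdir)
    for sub in ("checkpoints", "eval", "logs"):
        (workdir / sub).mkdir(parents=True, exist_ok=True)
    return workdir

settings = RuntimeSettings(data_root="data/coco", workdir="runs/retinanet_r50")
root = prepare_workdir(settings)
print(sorted(p.name for p in root.iterdir()))  # → ['checkpoints', 'eval', 'logs']
```

Keeping dataset paths and workdir layout in a separate object like this is what lets the same detector description be rerun against a different dataset later.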

3. Lightweight helper layer

This layer keeps a small torchvision-based path available, mostly for smoke testing.

  • simpledet.detectors.train.train(config=...) for a small torchvision training flow.
  • simpledet.detectors.infer.load_model() and predict() for lightweight inference.
  • simpledet.detectors.data.load_dataset() for generic dataset inspection.

Start here if: you need a quick sanity check, not a fully managed native experiment.
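As a hedged sketch of the smoke-testing role this layer plays, the pattern below stubs out load_model() and predict() so the example is self-contained; the real signatures in simpledet.detectors.infer may differ, and the output keys shown are assumptions.

```python
# Stubbed stand-ins for simpledet.detectors.infer.load_model()/predict();
# they only show the shape of a sanity check, not the real implementations.
def load_model(checkpoint=None):
    return {"checkpoint": checkpoint}

def predict(model, image):
    # A detector typically returns boxes, scores, and labels per image.
    return {"boxes": [], "scores": [], "labels": []}

def smoke_test(images):
    """Return True if every prediction has the expected result keys."""
    model = load_model()
    for image in images:
        result = predict(model, image)
        if not {"boxes", "scores", "labels"} <= result.keys():
            return False
    return True

print(smoke_test(images=[object(), object()]))  # → True
```

A check like this tells you the inference path is wired up; it deliberately says nothing about training quality, which is the native runtime layer's job.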

One lifecycle example

choose architecture
  -> build a DetectorSpec
  -> compile a native build plan
  -> pass it into run_training(...)
  -> train into one workdir
  -> keep the best checkpoint and evaluator outputs together

This lifecycle is the main design choice behind the repo. It keeps the “what model is this?” question separate from the “how was this run?” question.
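The lifecycle above can be sketched as plain function composition. Every name below is an illustrative placeholder standing in for the real steps (DetectorSpec construction, the native build planner, run_training), not the actual simpledet API.

```python
# Hedged sketch of the lifecycle as three composed steps; all names and
# return shapes are assumptions for illustration only.
def build_detector_spec(architecture, num_classes):
    """Step 1-2: choose an architecture and describe the detector."""
    return {"architecture": architecture, "num_classes": num_classes}

def compile_build_plan(spec):
    """Step 3: turn the spec into a concrete build plan."""
    return {"spec": spec, "stage": "planned"}

def run_training(plan, workdir):
    """Steps 4-6: train into one workdir, keeping the best checkpoint
    and evaluator outputs together."""
    return {
        "workdir": workdir,
        "best_checkpoint": f"{workdir}/checkpoints/best.pth",
    }

spec = build_detector_spec("fcos", num_classes=5)
plan = compile_build_plan(spec)
run = run_training(plan, workdir="runs/fcos_demo")
print(run["best_checkpoint"])  # → runs/fcos_demo/checkpoints/best.pth
```

Because each step only consumes the previous step's output, the "what model is this?" answer (the spec) stays inspectable even after the run finishes.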

Repository areas

Path                                        Purpose
simpledet/simpledet/suite/                  Canonical detector specs, builders, and native planner
simpledet/simpledet/api.py                  Project config and native runtime entrypoints
simpledet/simpledet/native/                 Native modeling, training loop, and runtime implementation
simpledet/simpledet/_model_resolution.py    Runtime backbone inspection and model adaptation
simpledet/simpledet/detectors/              Lightweight training, inference, evaluation, and dataset helpers
MyConfigs/                                  Experiment config workspace
notebooks/Tools/                            Static example notebooks