SimpleDet Docs

Experiments and reproducibility

Treat the detector spec, runtime config, data paths, and outputs as one experiment record.

Use this page when you want to rerun, compare, and preserve experiments, not just get a model to train once.

Standard workflow

  1. Create the detector spec.
  2. Record dataset root and annotation files.
  3. Fix the random seed and runtime parameters.
  4. Launch training into a dedicated result folder.
  5. Keep the checkpoint and evaluator outputs together.
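Steps 2–3 can be captured in a single manifest written at launch time. The sketch below is illustrative, not part of SimpleDet's API; the function name, field names, and the manifest file name mirror the layout on this page but are assumptions:

```python
import json
from pathlib import Path

def write_manifest(result_dir, dataset_root, annotations, seed, runtime):
    """Record everything needed to rerun this experiment in one JSON file.

    `runtime` is a dict of runtime parameters such as batch_size,
    learning_rate, and resize (names assumed, not a fixed schema).
    """
    result_dir = Path(result_dir)
    result_dir.mkdir(parents=True, exist_ok=True)
    manifest = {
        "dataset_root": str(dataset_root),
        "annotations": [str(a) for a in annotations],
        "seed": seed,
        "runtime": runtime,
    }
    path = result_dir / "native-manifest.json"
    path.write_text(json.dumps(manifest, indent=2, sort_keys=True))
    return path
```

Writing the manifest before training starts means that even a crashed run leaves behind enough information to be rerun exactly.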

Pipeline layout

results/project/
  BS_2_LR_0.001_IMG_768_71_Optim_SGD/
    native-manifest.json
    checkpoints/
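The folder name above encodes the run's key hyperparameters. A small helper keeps that encoding consistent across runs; this is a sketch, not a SimpleDet utility, and it reads the trailing 71 in the example as the seed, which is an assumption:

```python
def run_name(batch_size, learning_rate, image_size, seed, optimizer):
    """Build a result-folder name encoding the run's hyperparameters.

    The field order matches the example layout on this page; the
    position of the seed after IMG is assumed.
    """
    return (f"BS_{batch_size}_LR_{learning_rate}"
            f"_IMG_{image_size}_{seed}_Optim_{optimizer}")
```

For example, `run_name(2, 0.001, 768, 71, "SGD")` reproduces the folder name shown above.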

Checklist

  • detector spec or compiled model config
  • shared runtime config path
  • dataset root and annotation files
  • seed, resize, batch_size, and learning_rate
  • exact checkpoint used for evaluation
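The checklist can be enforced mechanically before an evaluation run. A minimal sketch, assuming the manifest is loaded as a dict and the key names simply mirror the checklist above (they are not a fixed SimpleDet schema):

```python
# Checklist fields an experiment record should carry (names assumed).
REQUIRED_FIELDS = (
    "detector_spec", "runtime_config", "dataset_root", "annotations",
    "seed", "resize", "batch_size", "learning_rate", "checkpoint",
)

def missing_fields(manifest):
    """Return the checklist fields absent or unset in the manifest dict."""
    return [k for k in REQUIRED_FIELDS
            if k not in manifest or manifest[k] is None]
```

Running this check before launching evaluation catches incomplete records early, when the missing value is still easy to recover.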