
Add high-resolution fine-tuning stage (Midnight paper Section 2) #66

Open

anas-zafar wants to merge 1 commit into MedARC-AI:main from anas-zafar:feat/highres-finetuning

Conversation

@anas-zafar

No description provided.

Implements the post-training procedure from the Midnight MICCAI 2025 paper
that fine-tunes the model at 392x392 global crops for 120k optimizer steps
to produce richer patch embeddings.

- vitg14_reg4_highres.yaml: new config (392px crops, lr=1e-4, accum=3, 96 epochs)
- run_highres.sh: launch script with checkpoint validation and resume support
- train.py: add _load_from_teacher_checkpoint(), gradient accumulation inner
  loop, fix eval transform to use cfg.crops.global_crops_size not hardcoded 224
- ssl_meta_arch.py: forward_backward accepts accumulation_steps and divides
  loss before backward so each micro-batch contributes equally to gradients
- config.py: LR scaling uses effective batch (micro-batch x GPUs x accum_steps)
  so gradient accumulation does not silently under-scale the learning rate
- ssl_default_config.yaml: add gradient_accumulation_steps and pretrained_weights
  defaults (both backward-compatible with existing stage-1 runs)
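The interaction between the last three bullets can be sketched in plain Python. This is an illustrative mock, not the PR's actual code: the function names (`scaled_lr`, `accumulate`) and the reference batch size are hypothetical, and `accumulate` stands in for the real `(loss / accumulation_steps).backward()` calls in `ssl_meta_arch.py`.

```python
# Hypothetical sketch of the gradient-accumulation and LR-scaling fixes
# described above. Names and the reference batch size are illustrative.

def scaled_lr(base_lr: float, micro_batch: int, num_gpus: int,
              accum_steps: int, reference_batch: int = 1024) -> float:
    """Linear LR scaling against the *effective* batch size
    (micro-batch x GPUs x accum_steps), so enabling accumulation
    does not silently under-scale the learning rate."""
    effective_batch = micro_batch * num_gpus * accum_steps
    return base_lr * effective_batch / reference_batch

def accumulate(micro_batch_losses, accum_steps: int) -> float:
    """Each micro-batch loss is divided by accum_steps before the
    backward pass, so the summed gradients match what one large
    batch of the same total size would produce."""
    total = 0.0
    for loss in micro_batch_losses:
        # stands in for (loss / accum_steps).backward()
        total += loss / accum_steps
    return total

# With accum=3, three micro-batch losses average rather than sum:
print(round(accumulate([0.9, 1.2, 1.5], 3), 6))  # 1.2, not 3.6
# Effective batch here is 32 x 8 x 3 = 768:
print(scaled_lr(1e-4, micro_batch=32, num_gpus=8, accum_steps=3))
```

Without the division, gradients from three micro-batches would sum to three times their single-batch magnitude; without the effective-batch term, the scheduler would scale the LR as if the batch were a third of its true size.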
@anas-zafar force-pushed the feat/highres-finetuning branch 2 times, most recently from 2085d00 to 5065943 on March 22, 2026 at 17:39.