
Hyperparameter tuning #19

@goutamyg

Description


Hi! Thank you for publishing your code.

Your released code includes a directory of configuration files for the different tracker modules: https://github.com/PinataFarms/FEARTracker/tree/main/model_training/config

These files contain training- and inference-related parameters (optimizer, learning-rate scheduler, penalty_k, window_influence, and lr, to name a few). Could you please clarify which dataset was used to tune these hyperparameter values? Were they tuned on the test set itself?
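For context, inference hyperparameters like these are commonly tuned by a grid search on a held-out validation split (not the test set). A minimal sketch of that procedure, assuming a hypothetical `evaluate_tracker(params)` scoring function (e.g. success AUC on a validation set) and illustrative value grids:

```python
import itertools

def tune_inference_params(evaluate_tracker):
    """Grid-search penalty_k, window_influence, and lr.

    `evaluate_tracker(params) -> float` is a hypothetical function that
    scores a parameter setting on a held-out validation split.
    The grid values below are illustrative, not the ones used in the paper.
    """
    grid = {
        "penalty_k": [0.0, 0.04, 0.08, 0.16],
        "window_influence": [0.2, 0.35, 0.5],
        "lr": [0.3, 0.5, 0.8],
    }
    best_score, best_params = float("-inf"), None
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = evaluate_tracker(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```

The question above is precisely which split such a search was run on.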

Also, I am particularly intrigued by a statement in the paper: "For each epoch, we randomly sample 20,000 images from LaSOT, 120,000 from COCO, 400,000 from YoutubeBB, 320,000 from GOT10k and 310,000 images from the ImageNet dataset". Could you explain the reasoning behind this sampling split, rather than sampling uniformly across datasets?
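For reference, the fixed per-dataset epoch sampling described in that quote can be sketched as follows (a minimal illustration; each dataset is assumed to be a simple sequence of sample ids, and the quotas are the counts quoted from the paper):

```python
import random

# Per-epoch sample counts quoted from the FEAR paper (deliberately non-uniform).
EPOCH_QUOTAS = {
    "LaSOT": 20_000,
    "COCO": 120_000,
    "YoutubeBB": 400_000,
    "GOT10k": 320_000,
    "ImageNet": 310_000,
}

def sample_epoch(datasets, quotas=EPOCH_QUOTAS, seed=None):
    """Draw a fixed quota of samples from each dataset for one epoch.

    `datasets` maps dataset name -> sequence of sample ids (hypothetical).
    """
    rng = random.Random(seed)
    epoch = []
    for name, quota in quotas.items():
        pool = list(datasets[name])
        if len(pool) >= quota:
            # Sample without replacement when the dataset is large enough.
            epoch.extend(rng.sample(pool, quota))
        else:
            # Otherwise sample with replacement to fill the quota.
            epoch.extend(rng.choices(pool, k=quota))
    rng.shuffle(epoch)
    return epoch
```

My question is why these particular quotas were chosen instead of equal quotas per dataset.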
