
About Semantic Segmentation #123

@YunZhou0321

Description

Thank you very much for your outstanding work! I am focusing on the semantic segmentation task. In the OneFormer3D code, I noticed that semantic segmentation is trained by predicting superpoint categories and applying a cross-entropy loss against the ground-truth superpoint labels. In my experiments, I removed the queries and the loss terms related to instance segmentation.

It is worth noting that semantic segmentation relies on the pretrained SSTNet weights. When I trained the model from scratch without loading them (i.e., with randomly initialized weights), the model still converged, but the mIoU was lower than with the pretrained weights. What do you think could be the reason? Is it because the spconv U-Net backbone has already been trained with the semantic loss `L_semantic` described in the SSTNet paper? There, `L_semantic` is computed at the point level as a combination of a Dice loss and a cross-entropy loss.
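For context, a minimal NumPy sketch of the two loss formulations mentioned above — superpoint/point-level cross-entropy, and an SSTNet-style `L_semantic` combining cross-entropy with a soft Dice term. Function names and the smoothing constants are my own illustrative choices, not the actual OneFormer3D or SSTNet implementation:

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    # mean negative log-likelihood; rows can be superpoints or points
    p = softmax(logits)
    n = len(labels)
    return -np.log(p[np.arange(n), labels] + 1e-12).mean()

def dice_loss(logits, labels, num_classes):
    # soft Dice against one-hot targets, averaged over classes
    p = softmax(logits)
    onehot = np.eye(num_classes)[labels]
    inter = (p * onehot).sum(axis=0)
    denom = p.sum(axis=0) + onehot.sum(axis=0)
    dice = (2.0 * inter + 1e-6) / (denom + 1e-6)
    return 1.0 - dice.mean()

def sstnet_style_semantic_loss(point_logits, point_labels, num_classes):
    # L_semantic ~ point-level cross-entropy + Dice, as described in the SSTNet paper
    return (cross_entropy(point_logits, point_labels)
            + dice_loss(point_logits, point_labels, num_classes))
```

This is only meant to make the question concrete: the superpoint branch in OneFormer3D uses the cross-entropy part alone, while SSTNet pretraining would also have exposed the backbone to the Dice term at the point level.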

This is the performance without loading the SSTNet weights:

[screenshot]

This is the performance with the SSTNet weights loaded:

[screenshot]

Many thanks in advance for your kind support and valuable feedback.
