
Question about the model_stage3 code #15

@itis112

Description

    output = torch.bmm(affinity, w_v)  # self-attention: affinity matrix applied to V
    # output = output.permute(0, 2, 1)
    # output = self.transformer1_FFN[1](nn.Dropout(0.2)(F.relu(self.transformer1_FFN[0](output))))
    # output = output.permute(0, 2, 1)
    output = output.reshape(batch, dim, w, h)       # tokens back to a feature map
    output = self.transformer1_encoder1[3](output)  # channel expansion applied here
    x = x + output                                  # residual add comes last
    return x

Hello, I have a question. According to the encoder part of Figure 3 in your paper, self-attention is performed first, then the residual add, and then the channel expansion. But in the code above from your model_stage3.py, self-attention is followed by the channel expansion first, and only then is the result added to the original value. Is my understanding wrong?
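For reference, the two orderings being compared are not equivalent in general. Below is a minimal NumPy sketch (the toy attention and the projection matrix `W` are illustrative stand-ins, not the actual layers from model_stage3.py) showing that "attention → add → project" and "attention → project → add" produce different results whenever the projection is not the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4, 8))  # (batch, tokens, dim)
W = rng.standard_normal((8, 8))     # stand-in for the channel projection

def toy_attn(x):
    # Toy self-attention: softmax(x @ x^T) @ x, just to have a non-trivial map.
    scores = x @ x.transpose(0, 2, 1)
    scores = np.exp(scores - scores.max(-1, keepdims=True))
    a = scores / scores.sum(-1, keepdims=True)
    return a @ x

# Figure-3 order: attention -> residual add -> channel projection
y_paper = (x + toy_attn(x)) @ W

# Code order: attention -> channel projection -> residual add
y_code = x + toy_attn(x) @ W

# Difference between the two: x @ W - x, i.e. nonzero unless W is the identity.
print(np.allclose(y_paper, y_code))  # False for a random W
```

In other words, the discrepancy the question points out is a real difference in the computation graph, not just a notational one, since `(x + a) @ W != x + a @ W` unless `W` acts as the identity on `x`.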
