
At test time, why do the candidate item embeddings not go through the model's fc (i.e. the MLP module), but instead use the item's modal embedding or id embedding directly? #7

@Jackie-gj

Description

    self.fc = MLP_Layers(word_embedding_dim=num_fc_ftr,
                         item_embedding_dim=args.embedding_dim,
                         layers=[args.embedding_dim] * (args.dnn_layer + 1),
                         drop_rate=args.drop_rate)

During training, this code transforms the input embeddings before the similarity scores and the BCE loss against the positive and negative candidate samples are computed. At prediction time, why are the item_embeddings used directly, without first going through the MLP_Layers above?
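For context, here is a minimal sketch of the training-time flow described above. The names raw_input_embs, raw_pos_embs, and raw_neg_embs are hypothetical stand-ins, not identifiers from this repo; the point is only that every embedding passes through the same fc before scoring:

    # Hypothetical sketch of the described training step: all embeddings are
    # first projected by the MLP (self.fc) before similarities are computed.
    input_embs = model.fc(raw_input_embs)   # history items, projected
    pos_embs = model.fc(raw_pos_embs)       # positive candidates, projected
    neg_embs = model.fc(raw_neg_embs)       # sampled negatives, projected

    user_repr = model.user_encoder(input_embs, log_mask, local_rank)[:, -1]
    pos_score = (user_repr * pos_embs).sum(-1)   # dot-product similarity
    neg_score = (user_repr * neg_embs).sum(-1)

    # BCE loss: positives labeled 1, negatives labeled 0.
    logits = torch.cat([pos_score, neg_score])
    targets = torch.cat([torch.ones_like(pos_score), torch.zeros_like(neg_score)])
    loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, targets)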
The evaluation code in question:

import numpy as np
import torch

item_embeddings = item_embeddings.to(local_rank)
with torch.no_grad():
    eval_all_user = []
    item_rank = torch.Tensor(np.arange(item_num) + 1).to(local_rank)
    for data in eval_dl:
        user_ids, input_embs, log_mask, labels = data
        user_ids, input_embs, log_mask, labels = \
            user_ids.to(local_rank), input_embs.to(local_rank), \
            log_mask.to(local_rank), labels.to(local_rank).detach()
        # Encode the interaction sequence; the last position is the user representation.
        prec_emb = model.module.user_encoder(input_embs, log_mask, local_rank)[:, -1].detach()
        # Score all candidates against the *raw* item_embeddings, i.e. without
        # first passing them through the MLP_Layers defined above.
        scores = torch.matmul(prec_emb, item_embeddings.t()).squeeze(dim=-1).detach()
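If train/test consistency is the concern, one way to get it would be to project the candidate embeddings through the same MLP before scoring. This is only a sketch, under the assumptions that model.module.fc is the MLP_Layers instance above and that the precomputed item_embeddings have not already been passed through it:

# Hypothetical: apply the training-time projection to the candidates as well,
# so that prec_emb and the item embeddings live in the same space.
with torch.no_grad():
    projected_items = model.module.fc(item_embeddings)      # (item_num, embedding_dim)
    scores = torch.matmul(prec_emb, projected_items.t())    # (batch_size, item_num)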
