
Test problem with timer_xl_multivariate under scripts/pretrain #55

@lim1164552675-ship-it

Description


After training with the script, testing with the generated checkpoint.pth raises an error. How can I resolve this?

The error during testing is:

Traceback (most recent call last):
  File "/mnt/f/测试项目/OPNElsm/OpenLTM-main/run.py", line 181
    exp.test(setting, test=1)
  File "/mnt/f/测试项目/OPNElsm/OpenLTM-main/exp/exp_forecast.py", line 225, in test
    self.model.load_state_dict(checkpoint)
  File "/home/lim0618/miniconda3/envs/OPENLSM/lib/python3.11/site-packages/torch/nn/modules/module.py", line 2629, in load_state_dict
    raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for DataParallel:
	Missing key(s) in state_dict: "module.embedding.weight", "module.embedding.bias", "module.blocks.attn_layers.0.attention.inner_attention.attn_bias.emb.weight", "module.blocks.attn_layers.0.attention.query_projection.weight", "module.blocks.attn_layers.0.attention.query_projection.bias", "module.blocks.attn_layers.0.attention.key_projection.weight", "module.blocks.attn_layers.0.attention.key_projection.bias", "module.blocks.attn_layers.0.attention.value_projection.weight", "module.blocks.attn_layers.0.attention.value_projection.bias", "module.blocks.attn_layers.0.attention.out_projection.weight", "module.blocks.attn_layers.0.attention.out_projection.bias", "module.blocks.attn_layers.0.conv1.weight", "module.blocks.attn_layers.0.conv1.bias", "module.blocks.attn_layers.0.conv2.weight", "module.blocks.attn_layers.0.conv2.bias", "module.blocks.attn_layers.0.norm1.weight", "module.blocks.attn_layers.0.norm1.bias", "module.blocks.attn_layers.0.norm2.weight", "module.blocks.attn_layers.0.norm2.bias", "module.blocks.attn_layers.1.attention.inner_attention.attn_bias.emb.weight", "module.blocks.attn_layers.1.attention.query_projection.weight", "module.blocks.attn_layers.1.attention.query_projection.bias", "module.blocks.attn_layers.1.attention.key_projection.weight", "module.blocks.attn_layers.1.attention.key_projection.bias", "module.blocks.attn_layers.1.attention.value_projection.weight", "module.blocks.attn_layers.1.attention.value_projection.bias", "module.blocks.attn_layers.1.attention.out_projection.weight", 
"module.blocks.attn_layers.1.attention.out_projection.bias", "module.blocks.attn_layers.1.conv1.weight", "module.blocks.attn_layers.1.conv1.bias", "module.blocks.attn_layers.1.conv2.weight", "module.blocks.attn_layers.1.conv2.bias", "module.blocks.attn_layers.1.norm1.weight", "module.blocks.attn_layers.1.norm1.bias", "module.blocks.attn_layers.1.norm2.weight", "module.blocks.attn_layers.1.norm2.bias", "module.blocks.attn_layers.2.attention.inner_attention.attn_bias.emb.weight", "module.blocks.attn_layers.2.attention.query_projection.weight", "module.blocks.attn_layers.2.attention.query_projection.bias", "module.blocks.attn_layers.2.attention.key_projection.weight", "module.blocks.attn_layers.2.attention.key_projection.bias", "module.blocks.attn_layers.2.attention.value_projection.weight", "module.blocks.attn_layers.2.attention.value_projection.bias", "module.blocks.attn_layers.2.attention.out_projection.weight", "module.blocks.attn_layers.2.attention.out_projection.bias", "module.blocks.attn_layers.2.conv1.weight", "module.blocks.attn_layers.2.conv1.bias", "module.blocks.attn_layers.2.conv2.weight", "module.blocks.attn_layers.2.conv2.bias", "module.blocks.attn_layers.2.norm1.weight", "module.blocks.attn_layers.2.norm1.bias", "module.blocks.attn_layers.2.norm2.weight", "module.blocks.attn_layers.2.norm2.bias", "module.blocks.attn_layers.3.attention.inner_attention.attn_bias.emb.weight", "module.blocks.attn_layers.3.attention.query_projection.weight", "module.blocks.attn_layers.3.attention.query_projection.bias", "module.blocks.attn_layers.3.attention.key_projection.weight", "module.blocks.attn_layers.3.attention.key_projection.bias", "module.blocks.attn_layers.3.attention.value_projection.weight", "module.blocks.attn_layers.3.attention.value_projection.bias", "module.blocks.attn_layers.3.attention.out_projection.weight", "module.blocks.attn_layers.3.attention.out_projection.bias", "module.blocks.attn_layers.3.conv1.weight", "module.blocks.attn_layers.3.conv1.bias", 
"module.blocks.attn_layers.3.conv2.weight", "module.blocks.attn_layers.3.conv2.bias", "module.blocks.attn_layers.3.norm1.weight", "module.blocks.attn_layers.3.norm1.bias", "module.blocks.attn_layers.3.norm2.weight", "module.blocks.attn_layers.3.norm2.bias", "module.blocks.norm.weight", "module.blocks.norm.bias", "module.head.weight", "module.head.bias". Unexpected key(s) in state_dict: "embedding.weight", "embedding.bias", "blocks.attn_layers.0.attention.inner_attention.attn_bias.emb.weight", "blocks.attn_layers.0.attention.query_projection.weight", "blocks.attn_layers.0.attention.query_projection.bias", "blocks.attn_layers.0.attention.key_projection.weight", "blocks.attn_layers.0.attention.key_projection.bias", "blocks.attn_layers.0.attention.value_projection.weight", "blocks.attn_layers.0.attention.value_projection.bias", "blocks.attn_layers.0.attention.out_projection.weight", "blocks.attn_layers.0.attention.out_projection.bias", "blocks.attn_layers.0.conv1.weight", "blocks.attn_layers.0.conv1.bias", "blocks.attn_layers.0.conv2.weight", "blocks.attn_layers.0.conv2.bias", "blocks.attn_layers.0.norm1.weight", "blocks.attn_layers.0.norm1.bias", "blocks.attn_layers.0.norm2.weight", "blocks.attn_layers.0.norm2.bias", "blocks.attn_layers.1.attention.inner_attention.attn_bias.emb.weight", "blocks.attn_layers.1.attention.query_projection.weight", "blocks.attn_layers.1.attention.query_projection.bias", "blocks.attn_layers.1.attention.key_projection.weight", "blocks.attn_layers.1.attention.key_projection.bias", "blocks.attn_layers.1.attention.value_projection.weight", "blocks.attn_layers.1.attention.value_projection.bias", "blocks.attn_layers.1.attention.out_projection.weight", "blocks.attn_layers.1.attention.out_projection.bias", "blocks.attn_layers.1.conv1.weight", "blocks.attn_layers.1.conv1.bias", "blocks.attn_layers.1.conv2.weight", "blocks.attn_layers.1.conv2.bias", "blocks.attn_layers.1.norm1.weight", "blocks.attn_layers.1.norm1.bias", 
"blocks.attn_layers.1.norm2.weight", "blocks.attn_layers.1.norm2.bias", "blocks.attn_layers.2.attention.inner_attention.attn_bias.emb.weight", "blocks.attn_layers.2.attention.query_projection.weight", "blocks.attn_layers.2.attention.query_projection.bias", "blocks.attn_layers.2.attention.key_projection.weight", "blocks.attn_layers.2.attention.key_projection.bias", "blocks.attn_layers.2.attention.value_projection.weight", "blocks.attn_layers.2.attention.value_projection.bias", "blocks.attn_layers.2.attention.out_projection.weight", "blocks.attn_layers.2.attention.out_projection.bias", "blocks.attn_layers.2.conv1.weight", "blocks.attn_layers.2.conv1.bias", "blocks.attn_layers.2.conv2.weight", "blocks.attn_layers.2.conv2.bias", "blocks.attn_layers.2.norm1.weight", "blocks.attn_layers.2.norm1.bias", "blocks.attn_layers.2.norm2.weight", "blocks.attn_layers.2.norm2.bias", "blocks.attn_layers.3.attention.inner_attention.attn_bias.emb.weight", "blocks.attn_layers.3.attention.query_projection.weight", "blocks.attn_layers.3.attention.query_projection.bias", "blocks.attn_layers.3.attention.key_projection.weight", "blocks.attn_layers.3.attention.key_projection.bias", "blocks.attn_layers.3.attention.value_projection.weight", "blocks.attn_layers.3.attention.value_projection.bias", "blocks.attn_layers.3.attention.out_projection.weight", "blocks.attn_layers.3.attention.out_projection.bias", "blocks.attn_layers.3.conv1.weight", "blocks.attn_layers.3.conv1.bias", "blocks.attn_layers.3.conv2.weight", "blocks.attn_layers.3.conv2.bias", "blocks.attn_layers.3.norm1.weight", "blocks.attn_layers.3.norm1.bias", "blocks.attn_layers.3.norm2.weight", "blocks.attn_layers.3.norm2.bias", "blocks.norm.weight", "blocks.norm.bias", "head.weight", "head.bias". 
ERROR conda.cli.main_run:execute(127): `conda run python /mnt/f/测试项目/OPNElsm/OpenLTM-main/run.py --task_name forecast --is_training 0 --root_path ./dataset/ --data_path ETTh1_1.csv --model_id multivariate_pretrain --model timer_xl --data MultivariateDatasetBenchmark --input_token_len 96 --test_pred_len 96 --e_layers 4 --d_model 512 --d_ff 2048 --batch_size 32 --learning_rate 0.0001 --train_epochs 10 --gpu 0 --cosine --tmax 10 --dp --devices 0` failed. (See above for error)
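The mismatch pattern in the traceback (missing keys all start with `module.`, unexpected keys are the same names without it) suggests the checkpoint was saved from the unwrapped model, while the test run uses `--dp`, which wraps the model in `torch.nn.DataParallel` and therefore expects every key prefixed with `module.`. A minimal sketch of one possible workaround, assuming you can edit `exp_forecast.py` near the `load_state_dict` call (the helper names below are mine, not part of OpenLTM):

```python
def add_module_prefix(state_dict, prefix="module."):
    """Prepend 'module.' to every key so a checkpoint saved from a plain
    model can be loaded into an nn.DataParallel-wrapped one."""
    return {prefix + k: v for k, v in state_dict.items()}

def strip_module_prefix(state_dict, prefix="module."):
    """Inverse: drop 'module.' so a DataParallel checkpoint loads into a
    plain (unwrapped) model."""
    return {k[len(prefix):] if k.startswith(prefix) else k: v
            for k, v in state_dict.items()}

# Hypothetical placement in exp_forecast.py's test(), around line 225:
#   checkpoint = torch.load(checkpoint_path, map_location="cpu")
#   if isinstance(self.model, torch.nn.DataParallel):
#       checkpoint = add_module_prefix(checkpoint)
#   self.model.load_state_dict(checkpoint)
```

Alternatively, load into the inner module directly (`self.model.module.load_state_dict(checkpoint)` when the model is DataParallel-wrapped), or simply run the test without the `--dp` flag so the model is never wrapped.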
