Closes #305 | Create Multilingual open relation dataloader#320

Open
madenindya wants to merge 2 commits into IndoNLP:master from madenindya:multilingual-open-relation
Conversation

@madenindya
Contributor

@madenindya madenindya commented Oct 13, 2022

Closes #305

Checkbox

  • Confirm that this PR is linked to the dataset issue.
  • Create the dataloader script nusantara/nusa_datasets/my_dataset/my_dataset.py (please use only lowercase and underscore for dataset naming).
  • Provide values for the _CITATION, _DATASETNAME, _DESCRIPTION, _HOMEPAGE, _LICENSE, _URLs, _SUPPORTED_TASKS, _SOURCE_VERSION, and _NUSANTARA_VERSION variables.
  • Implement _info(), _split_generators() and _generate_examples() in dataloader script.
  • Make sure that the BUILDER_CONFIGS class attribute is a list with at least one NusantaraConfig for the source schema and one for a nusantara schema.
  • Confirm dataloader script works with datasets.load_dataset function.
  • Confirm that your dataloader script passes the test suite run with python -m tests.test_nusantara --path=nusantara/nusa_datasets/my_dataset/my_dataset.py.
  • If my dataset is local, I have provided an output of the unit-tests in the PR (please copy paste). This is OPTIONAL for public datasets, as we can test these without access to the data files.

@madenindya
Contributor Author

Hi, I ran into several difficulties with this dataset and would like some advice:

  1. The data source is on Kaggle; how should I download it here? The original .zip is huge, so locally I only downloaded the Indonesian data and used it as the test reference.
  2. I found it hard to create the entity offsets because the original data doesn't provide them. Inferring them myself is also hard, since the same exact word can appear more than once in a sentence.
  3. Because I couldn't get the offsets, I don't know whether it's better to assign a different EntID to each occurrence of the same word in a sentence; the mentions might come from the same or different positions, but I don't have that information.
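For point 2, one common workaround is to scan the sentence with `str.find` and record every occurrence of the mention, so each repeated word gets its own candidate span. This is only a sketch of the search step; as point 3 notes, the raw data still doesn't say which occurrence the annotation refers to:

```python
def find_entity_offsets(sentence: str, mention: str) -> list[tuple[int, int]]:
    """Return all (start, end) character offsets of `mention` in `sentence`.

    When the mention occurs more than once, the caller still has to decide
    which occurrence the annotation refers to -- the raw data does not say.
    """
    offsets = []
    start = sentence.find(mention)
    while start != -1:
        offsets.append((start, start + len(mention)))
        # Continue searching just past the previous hit.
        start = sentence.find(mention, start + 1)
    return offsets
```

A repeated word yields multiple spans, e.g. `find_entity_offsets("the cat sat on the mat", "the")` returns `[(0, 3), (15, 18)]`, which makes the ambiguity in point 3 explicit rather than silently picking the first match.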

I marked the lines of code that need further discussion with TODO.

Additional Questions:

  • Why is entities.text a List and not just a single string? (context: the KB schema)

@madenindya madenindya changed the title Create Multilingual open relation dataloader Closes #305 | Create Multilingual open relation dataloader Oct 17, 2022
@muhsatrio
Collaborator

> Hi, I ran into several difficulties with this dataset and would like some advice:
>
>   1. The data source is on Kaggle; how should I download it here? The original .zip is huge, so locally I only downloaded the Indonesian data and used it as the test reference.
>   2. I found it hard to create the entity offsets because the original data doesn't provide them. Inferring them myself is also hard, since the same exact word can appear more than once in a sentence.
>   3. Because I couldn't get the offsets, I don't know whether it's better to assign a different EntID to each occurrence of the same word in a sentence; the mentions might come from the same or different positions, but I don't have that information.
>
> I marked the lines of code that need further discussion with TODO.
>
> Additional Questions:
>
>   • Why is entities.text a List and not just a single string? (context: the KB schema)

I faced the same bottleneck.

cc: @SamuelCahyawijaya @holylovenia @bryanwilie

@muhsatrio
Collaborator

I think you can continue the discussion on Slack for a faster response, @madenindya. Thank you!

@SamuelCahyawijaya
Member

@muhsatrio: Sorry, I missed this PR. Has this one been finalized? I can check it right away.

@muhsatrio
Collaborator

> @muhsatrio: Sorry, I missed this PR. Has this one been finalized? I can check it right away.

I don't think there have been any changes yet.

@SamuelCahyawijaya
Member

IMO, for this dataset we can implement just the source schema for now, as it will be complicated to extend it to the KB schema. What do you think? @madenindya @muhsatrio


Development

Successfully merging this pull request may close these issues.

Create dataset loader for Multilingual Open Relations

3 participants