A repository of the code and datasets used in the paper "Semantic Similarity Based Filtering for Turkish Paraphrase Dataset Creation".
The work in this paper was done by Besher Alkurdi, Hasan Yunus Sarioglu and Mehmet Fatih Amasyali.
You can access the filtered datasets that we used to train our models from the Hugging Face Hub at the links below:
- OpenSubtitles2018 (OST)
- Tatoeba (TAT)
- TED2013 (TED) (not used for evaluation)
The raw (non-filtered) versions can be accessed from the links below:
- OpenSubtitles2018 Raw (OST-RAW)
- Tatoeba Raw (TAT-RAW)
The `datasets` folder contains the file `human_annotations.csv`, which includes the human annotations for a sample of pairs from the datasets.
- The `src` column contains the source text for each pair.
- The `tgt` column contains the target text.
- The `dataset` column indicates which dataset the pair is from.
- The `human` column indicates the overall label for the pair, based on the annotators' scores and agreement. If the label was discarded due to disagreement, a value of -1 is recorded in the `human` column.
- The `annotator_1` and `annotator_2` columns contain the scores given by the two annotators for each pair.
You can access the finetuned model checkpoints from the links below:
- mT5-base (OST): mT5-base finetuned on the filtered OpenSubtitles2018 dataset.
- mT5-base (TAT): mT5-base finetuned on the filtered Tatoeba dataset.
- trBART (OST): trBART finetuned on the filtered OpenSubtitles2018 dataset.
- trBART (TAT): trBART finetuned on the filtered Tatoeba dataset.
All the code is available under the `code` folder. A description will be added later.