Kyoto Free Translation Task Data
A parallel corpus for the evaluation and development of Japanese-English machine translation systems
The data was originally prepared by the National Institute of Information and Communications Technology (NICT) and released as the Japanese-English Bilingual Corpus of Wikipedia's Kyoto Articles. It was then processed to form the Kyoto Free Translation Task dataset: sentences with fewer than 1 or more than 40 words were removed, and the remainder was separated into training, tuning, development and test sets. The training data is intended for training statistical models, the tuning data for tuning weights, the development data for testing the system during development, and the test data for reporting final results. The validation sets presented here correspond to the development set.
The "ContentElements" field contains eight options: "TrainingData", "TestData", "ValidationData", "TuningData", "TrainingDataset", "TestDataset", "ValidationDataset" and "TuningDataset". "TrainingData", "TestData", "ValidationData" and "TuningData" are structured as associations. "TrainingDataset", "TestDataset", "ValidationDataset" and "TuningDataset" are structured as datasets.
Retrieve the resource:
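A minimal sketch in the Wolfram Language (the resource name is assumed to match this page's title):

```wolfram
ResourceObject["Kyoto Free Translation Task Data"]
```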
Obtain the first three training examples:
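For example, assuming "TrainingData" returns a list of Japanese-English translation pairs:

```wolfram
Take[ResourceData["Kyoto Free Translation Task Data", "TrainingData"], 3]
```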
Obtain the last three test examples:
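A sketch using a negative count to take from the end of the list:

```wolfram
Take[ResourceData["Kyoto Free Translation Task Data", "TestData"], -3]
```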
Obtain one random validation example:
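For example, assuming "ValidationData" is a list from which a single element can be drawn:

```wolfram
RandomChoice[ResourceData["Kyoto Free Translation Task Data", "ValidationData"]]
```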
Obtain one random tuning example:
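Analogously for the tuning element:

```wolfram
RandomChoice[ResourceData["Kyoto Free Translation Task Data", "TuningData"]]
```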
Obtain five random pairs from the training set in Dataset form:
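A sketch assuming the "TrainingDataset" element supports sampling like an ordinary list:

```wolfram
RandomSample[ResourceData["Kyoto Free Translation Task Data", "TrainingDataset"], 5]
```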
Obtain five random pairs from the test set in Dataset form:
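The same pattern with the test-set Dataset element:

```wolfram
RandomSample[ResourceData["Kyoto Free Translation Task Data", "TestDataset"], 5]
```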
Obtain five random pairs from the validation set in Dataset form:
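And with the validation-set Dataset element:

```wolfram
RandomSample[ResourceData["Kyoto Free Translation Task Data", "ValidationDataset"], 5]
```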
Obtain five random pairs from the tuning set in Dataset form:
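And with the tuning-set Dataset element:

```wolfram
RandomSample[ResourceData["Kyoto Free Translation Task Data", "TuningDataset"], 5]
```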
Obtain a character-level histogram of test example lengths:
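A sketch assuming each test example is an association whose "English" key holds the translated sentence (the key name is an assumption, not confirmed by this page):

```wolfram
Histogram[
 StringLength[#["English"]] & /@  (* character count per example; "English" key assumed *)
  ResourceData["Kyoto Free Translation Task Data", "TestData"]]
```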
"Kyoto Free Translation Task Data" from the Wolfram Data Repository
Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)