Contribution Guidelines
Thank you for your interest in contributing to the OLDI datasets!
To ensure high-quality contributions, please follow the steps in the checklist below and see the rest of this page for more information.
Language data release checklist
- Decide on your type of contribution.
- Email the organisers at [email protected] to discuss your proposed contribution.
- Identify the right language code for your contribution.
- Fill out a dataset card and ensure all people participating in data collection or annotation have acknowledged its contents.
- For translation projects, ensure all translators have acknowledged the translation guidelines. For monolingual projects, ensure all contributors have acknowledged the monolingual contribution guidelines.
- Deliver the data and dataset card by submitting a pull request to the appropriate repository and accepting the DCO.
Types of contribution
There are three main types of contributions:
- Fixes to existing data: corrections to incorrect or incomplete existing translations.
- Completely new translations: typically involves starting from the original English data and having it translated by qualified, native speakers of the target language (see translation guidelines).
- Other contributions: for example, new monolingual datasets (see monolingual contribution guidelines).
In each case, before starting work please make sure to email the organisers at [email protected]. This ensures nobody else is already working on the same task and allows the community to better coordinate work.
Language codes
We use standardized language codes throughout OLDI. These are made up of three parts, separated by underscores:
- A language subtag: we use ISO 639-3 language codes. Macrolanguage codes must not be used if a more specific code exists: e.g. please use `cmn`, `yue`, `wuu`, etc. rather than `zho`.
- A script subtag: we use ISO 15924 script codes.
- A language variety subtag: to identify the specific language variety we use Glottocodes, which have the advantage of being stable and of allowing the identification of languages (or languoids) at various levels of granularity.
Example: `apc_Arab_sout3123` is South Levantine Arabic written in the Arabic script.
Dataset card
For new data, we collect precise information about the language variety, the quality assurance workflow and, where applicable, the translation workflow. Please use the following Markdown template to provide this information.
Template
Translation guidelines
These translation guidelines must be acknowledged by all translators who will be contributing data.
Important note
Your translations will be used to help train or evaluate machine translation engines. For this reason, this project requires human translation.
- If you are translating data to be used for evaluation purposes, such as for FLORES+, using or even referencing machine translation output is not allowed (this includes post-editing).
- If you are translating data to be used for training purposes, such as Seed, the use of post-edited machine translated content is allowed, provided all data is manually verified and edited where necessary. Note that some machine translation services – including DeepL, Google Translate, and ChatGPT – prohibit the use of their output for training other translation or AI models, so their use is not permitted.
General guidelines
- You will be translating sentences coming from different sources. Please refer to the source document if available.
- Do not convert any units of measurement. Translate them exactly as noted in the source content.
- When translating, please maintain the same tone used in the source document. For example, encyclopedic content coming from sources like Wikipedia should be translated using a formal tone.
- Provide fluent translations without deviating too much from the source structure, making only the changes that are necessary.
- Do not expand or replace information compared to what is present in the source documents. Do not add any explanatory or parenthetical information, definitions, etc.
- Do not ignore any meaningful text that was present in the source.
- In case of multiple possible translations, please pick the one that makes the most sense (e.g., for gender concordance, cultural fit in the target language, level of formality, etc.).
- Translations must be faithful to the source in terms of pragmatics such as (if applicable) level of hedging/modality, sentiment and its intensity, negation, speech effects (disfluencies), etc.
- For proper nouns and common abbreviations, please see the guidelines on Named Entities below.
- Idiomatic expressions should not be translated word for word. Use an equivalent idiom, if one exists. If no equivalent idiom exists, use an idiom of similar meaning. If no similar expressions exist in the target language, paraphrase the idiom such that the meaning is retained in the target language.
- When a pronoun to be translated is ambiguous (for instance, when it could be interpreted as either him/her or he/she), opt for gender neutral pronouns (such as them/they) if those exist in the target language. However, when a pronoun to be translated is clearly marked for gender, you should follow the source material and continue to mark for gender.
- Foreign words and phrases used in the text should be kept in their original language when this is necessary to preserve the meaning of the sentence (e.g. if given as an example of a foreign word).
Named entities
Named entities are people, places, organisations, etc., that are commonly referred to using a proper noun. This section provides guidance on how to handle named entities. Please review the following guidelines carefully:
- If there is a commonly used term in the target language for the Named Entity:
  - If the most commonly used term is the same as in the source language, then keep it as it is.
  - If the most commonly used term is a translation or a transliteration, then use that.
- If there is no commonly used term:
  - If possible, a transliteration of the original term should be used.
  - If a transliteration would not be commonly understood in the context, and the source term would be more acceptable, you may retain the original term.
Monolingual contribution guidelines
All contributors must acknowledge the following guidelines.
Important note
The goal of this effort is the collection of high-quality textual monolingual data, for the purposes of training language identification systems, language models and other related tools. Synthetic data is not allowed. Examples of disallowed synthetic data include machine-translated content, LLM output, and text generated from templates.
General guidelines
- All contributed data must be human-generated. Surface changes that are mechanical in nature (such as certain types of transliteration) may be performed with the aid of automated systems, provided this is clearly documented.
- Clearly identify the provenance of the data. In many cases, this may be done by providing a URL or a bibliographic reference.
- Ensure the data is in the claimed language and free of issues such as encoding problems. If at all possible, this should be done by having one or more native speakers manually check a sufficiently large representative sample of the whole dataset.
Data format
- Data must be in plain text format.
- Minimal markup in Markdown format may be used where applicable. Markup should be limited to italics, bold, ordered and unordered lists, inline code spans, block quotes, and ATX headings (`#`, `##`). Strikethrough (`~~`), footnotes (`[^1]`) and mathematics formatting (`$` and `$$`) with GitLab/GFM-compatible syntax may also be used.
- Where possible, we strongly encourage contributions of document-level data, rather than sentence-level data. Retaining the context that comes with full documents enables the development of more sophisticated models.
- For document-level data, there must be one document per file. Paragraphs must be separated by two subsequent newlines.
- For sentence-level data, sentences must be separated by single newlines.
- A standardised set of metadata must be added in the form of a YAML front matter. The front matter must be placed at the top of each file, preceding the textual content, and must be delimited by `---`.
  - Analogously to a dataset card, the language of the data must be marked using the `iso_639_3`, `iso_15924` and `glottocode` fields under the top-level `language` key. Should this structure be too restrictive for a given dataset, e.g. for code-switched text, please reach out to the organisers at [email protected].
  - The source of the data must be specified in the `source` field. This may take the form of a URL, a bibliographic reference, or free-form text.
  - The license, in the form of an SPDX license identifier, must be specified in the `license` field.
  - The date of submission of the data must be specified in `YYYY-MM-DD` format in the `submission_date` field.
  - Document-level data must be marked as `document: true`, whereas sentence-level data must be marked as `document: false`.
  - Files that use Markdown syntax must set `markdown: true`.
The following is an example of well-formed document-level data.
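The sketch below assembles the required front-matter fields into one file; the source URL, license, dates, and body text are invented placeholders, not taken from an actual OLDI submission.

```yaml
---
language:
  iso_639_3: apc
  iso_15924: Arab
  glottocode: sout3123
source: https://example.org/corpus  # placeholder
license: CC-BY-4.0
submission_date: 2024-01-15
document: true
markdown: false
---
First paragraph of the document, consisting of one or more sentences.

Second paragraph, separated from the first by a blank line.
```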
The following is an example of well-formed sentence-level data.
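As above, this sketch uses invented placeholder values for the source, license, and dates; note `document: false` and one sentence per line.

```yaml
---
language:
  iso_639_3: eng
  iso_15924: Latn
  glottocode: stan1293
source: Example bibliographic reference  # placeholder
license: CC-BY-4.0
submission_date: 2024-01-15
document: false
markdown: false
---
This is the first sentence.
This is the second sentence.
This is the third sentence.
```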