Dataset: https://huggingface.co/datasets/allenai/MADLAD-400
“Note that the English subset in this version is missing 18% of documents that were included in the published analysis of the dataset. These documents will be incorporated in an update coming soon.”
arXiv paper: https://arxiv.org/abs/2309.04662
Models: https://github.com/google-research/google-research/tree/master/madlad_400
u/jbochi’s work on getting the models to run: https://www.reddit.com/r/LocalLLaMA/comments/17qt6m4/translate_to_and_from_400_languages_locally_with/
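For anyone who wants to try the translation models locally without touching the original T5X checkpoints, here is a minimal sketch using the converted Hugging Face checkpoints from u/jbochi's work linked above. The exact checkpoint name (jbochi/madlad400-3b-mt) and the "<2{lang_code}>" target-language prefix are assumptions based on that conversion; check the model card before relying on them.

```python
# Sketch: run a MADLAD-400 MT model locally via a converted Hugging Face checkpoint.
# Assumptions: the conversion is published as "jbochi/madlad400-3b-mt", uses the standard
# T5 classes, and selects the target language with a "<2{lang_code}>" prefix.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "jbochi/madlad400-3b-mt"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Translate English -> Portuguese; swap "<2pt>" for another supported language code.
inputs = tokenizer("<2pt> How are you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```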
Found 2 relevant code implementations for “MADLAD-400: A Multilingual And Document-Level Large Audited Dataset”.
Credit to u/jbochi for getting the models to run + telling Google to fix their model checkpoints.
thanks
Their use of both monolingual and multilingual to describe the same dataset is unusual.
I get that they’re probably trying to say “monolingual at the document level”, but the back-and-forth is quite confusing.
E.g.
"We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset
“We use both supervised parallel data with a machine translation objective and the monolingual MADLAD-400 dataset”
“Through MADLAD-400, we introduce a highly multilingual, general web-domain, document-level text dataset”
Unless I am missing something obvious, these are either typos or poor wording decisions.