# Bangla-Electra
This is a first attempt at a Bangla/Bengali language model trained with
Google Research's [ELECTRA](https://github.com/google-research/electra).
Tokenization and pre-training Colab: https://colab.research.google.com/drive/1gpwHvXAnNQaqcu-YNx1kafEVxz07g2jL
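
A minimal sketch of loading the model with Hugging Face Transformers for quick experimentation; the repository id below is a placeholder, not the confirmed name of this upload:

```python
# Hedged sketch: load the ELECTRA encoder and tokenizer via Transformers.
# "your-username/bangla-electra" is a placeholder repo id, not confirmed by this README.
from transformers import ElectraTokenizer, ElectraModel

tokenizer = ElectraTokenizer.from_pretrained("your-username/bangla-electra")
model = ElectraModel.from_pretrained("your-username/bangla-electra")

# Encode a short Bengali sentence and run it through the encoder
inputs = tokenizer("আমি বাংলায় গান গাই", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```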
## Corpus
Trained on a web crawl from https://oscar-corpus.com/ (deduplicated version, 5.8 GB) and the 1 July 2020 dump of bn.wikipedia.org (414 MB).
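
A sketch of how the crawl portion of the corpus could be fetched, assuming the Bengali OSCAR subset is available through the Hugging Face datasets library under the `oscar` / `unshuffled_deduplicated_bn` names (an assumption about the hub listing, not taken from this README):

```python
# Hedged sketch: pull the deduplicated Bengali OSCAR subset with the datasets library.
from datasets import load_dataset

oscar_bn = load_dataset("oscar", "unshuffled_deduplicated_bn", split="train")
print(oscar_bn[0]["text"][:200])  # peek at the first document
```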
## Vocabulary
Included as vocab.txt in the upload; vocab_size is 29898.
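
As a sanity check, the uploaded vocab.txt can be loaded directly into a WordPiece tokenizer; the reported size should match the 29898 figure above:

```python
# Load the uploaded vocabulary file directly (ELECTRA uses a WordPiece vocab,
# so a plain vocab.txt is enough for a standalone tokenizer instance).
from transformers import ElectraTokenizer

tokenizer = ElectraTokenizer(vocab_file="vocab.txt")
print(tokenizer.vocab_size)  # expected: 29898
```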