Estonian RoBERTa model

est-roberta

Est-RoBERTa is a monolingual Estonian RoBERTa-like language representation model. It was trained on Estonian corpora, consisting mostly of news articles, with 2.51 billion tokens in total.

The model can be used for various NLP classification tasks, either by fine-tuning it end-to-end or by extracting the embedding vectors for each word occurrence and using those vectors as input. The model vocabulary consists of 40,000 (subword) tokens. Any word not present in the vocabulary is split into subword tokens, e.g. "identification" might be split as "▁identif ic ation". Tokens that form the beginning of a word (or a whole word) have a special character (▁) prepended (note that this is not an underscore character). Tokens that form a non-initial part of a word have no characters prepended or appended.
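The subword convention above means the original text can be recovered by concatenating the tokens and treating ▁ as a word boundary. A minimal sketch of this (the token split for "identification" is the illustrative example from the description, not guaranteed to match the actual tokenizer output):

```python
def detokenize(tokens):
    """Join SentencePiece-style subword tokens back into text.

    Tokens starting with the marker '▁' begin a new word;
    tokens without it continue the previous word.
    """
    text = "".join(tokens)
    return text.replace("▁", " ").strip()

# Hypothetical subword split of "identification", as in the text above
tokens = ["▁identif", "ic", "ation"]
print(detokenize(tokens))  # identification
```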

The model is provided in PyTorch format, specifically for use with the Transformers library by Hugging Face (https://huggingface.co/transformers/), where it is also hosted (https://huggingface.co/EMBEDDIA/est-roberta).
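A minimal sketch of loading the hosted model with Transformers and extracting contextual embeddings, assuming the `transformers` and `torch` packages are installed and the Hugging Face hub is reachable (the example sentence is an arbitrary illustration):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Model id as hosted on the Hugging Face hub (see the link above)
tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/est-roberta")
model = AutoModel.from_pretrained("EMBEDDIA/est-roberta")

# Tokenize an Estonian sentence and run it through the model
inputs = tokenizer("Tere, maailm!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual embedding vector per (subword) token
hidden = outputs.last_hidden_state  # shape: (1, seq_len, hidden_size)
```

For end-to-end fine-tuning on a classification task, the same model id can be loaded with `AutoModelForSequenceClassification.from_pretrained` instead, which adds a classification head on top.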
