
FinEst BERT is a trilingual BERT-like (Bidirectional Encoder Representations from Transformers) language representation model, trained on Finnish, Estonian, and English data.

The model can be used for various NLP classification tasks, either by fine-tuning it end-to-end or by extracting the contextual embedding vector for each word occurrence and using those vectors as input to another model. The model vocabulary consists of 74,986 (subword) tokens. Any word not present in the vocabulary gets split into subword tokens, e.g. "identification" might get split as "identif ##ic ##ation". Tokens that continue the same word as the preceding token have two hashes (##) prepended.
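The subword splitting described above can be illustrated with a toy greedy longest-match-first (WordPiece-style) splitter. This is a minimal sketch of the mechanism only; the three-piece vocabulary below is a hypothetical fragment for the example, not the actual 74,986-token FinEst BERT vocabulary:

```python
def wordpiece_split(word, vocab):
    """Greedy longest-match-first split of one word into subword tokens.
    Continuation pieces (not at the start of the word) carry a '##' prefix."""
    pieces = []
    start = 0
    while start < len(word):
        end = len(word)
        piece = None
        # Find the longest piece starting at `start` that is in the vocabulary.
        while end > start:
            candidate = word[start:end]
            if start > 0:
                candidate = "##" + candidate  # mark as word-internal
            if candidate in vocab:
                piece = candidate
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]  # no valid split exists for this word
        pieces.append(piece)
        start = end
    return pieces

# Hypothetical vocabulary fragment reproducing the example from the text:
vocab = {"identif", "##ic", "##ation"}
print(wordpiece_split("identification", vocab))  # ['identif', '##ic', '##ation']
```

Real BERT tokenizers apply this per-word procedure after basic whitespace and punctuation splitting, falling back to an unknown-token symbol when no split succeeds.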

More details can be read in the article:

The model is distributed in PyTorch format, specifically for use with the transformers library by Hugging Face.
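Loading the model with transformers might look like the following sketch. The identifier "EMBEDDIA/finest-bert" is an assumption; substitute the local directory or hub path where the model files actually live. Running this requires downloading the model weights:

```python
import torch
from transformers import BertTokenizer, BertModel

# Assumed model identifier; replace with the actual path to the model files.
tokenizer = BertTokenizer.from_pretrained("EMBEDDIA/finest-bert")
model = BertModel.from_pretrained("EMBEDDIA/finest-bert")

inputs = tokenizer("Tere, maailm!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual embedding vector per (subword) token:
print(outputs.last_hidden_state.shape)  # (1, num_tokens, hidden_size)
```

For classification tasks, the same checkpoint can instead be loaded with a task-specific head (e.g. `BertForSequenceClassification`) and fine-tuned end-to-end.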
