ModernBERT-base-multilingual
Model Details
ModernBERT is a model that applies modern transformer techniques to a bidirectional encoder architecture. It uses RoPE (rotary position embeddings) to handle long contexts of up to 8,192 tokens efficiently, and it reduces computational complexity with an alternating local-global attention pattern. Through GeGLU activations, pre-normalization blocks, and an unpadding technique, it achieves up to 4x faster processing than the original BERT.
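The rotary position embedding (RoPE) mentioned above can be sketched in a few lines. This is a minimal illustration of the general technique, not ModernBERT's exact implementation; the `base` value and the dimension-pairing scheme below are common defaults assumed for the sketch.

```python
import numpy as np

def rotary_embed(x, base=10000.0):
    """Apply rotary position embeddings (RoPE) to x of shape (seq_len, dim).

    Each pair of feature dimensions is rotated by a position-dependent
    angle, so attention scores end up depending only on relative positions.
    Note: `base` and the half-split pairing are common defaults, assumed
    here; ModernBERT's own configuration may differ.
    """
    seq_len, dim = x.shape
    half = dim // 2
    # Per-pair rotation frequencies: theta_i = base^(-i/half)
    freqs = base ** (-np.arange(half) / half)          # (half,)
    angles = np.outer(np.arange(seq_len), freqs)       # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # 2-D rotation applied to each (x1_i, x2_i) pair
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)
```

Because each step is a pure rotation, vector norms are preserved and position 0 (angle 0) is left unchanged, which is an easy property to sanity-check.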
This model was trained on Cloud TPUs provided through Google's TPU Research Cloud (TRC).
How to Get Started with the Model
```python
from transformers import AutoTokenizer, ModernBertForSequenceClassification

# Load the tokenizer and sequence-classification model from the Hub
tokenizer = AutoTokenizer.from_pretrained("team-lucid/ModernBERT-base-multilingual")
model = ModernBertForSequenceClassification.from_pretrained("team-lucid/ModernBERT-base-multilingual")

# Tokenize an input sentence and run a forward pass
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)
```
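To inspect what the classification output contains, the raw logits can be converted to probabilities with a softmax. In this sketch, `logits` is a stand-in tensor for `outputs.logits` from the snippet above; keep in mind that the base checkpoint's classification head is randomly initialized, so predictions are only meaningful after fine-tuning.

```python
import torch

# Stand-in for outputs.logits; shape is (batch, num_labels)
logits = torch.tensor([[0.2, 1.5]])
probs = torch.softmax(logits, dim=-1)          # rows sum to 1
predicted_class = probs.argmax(dim=-1).item()  # index of the top label
```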