---
language:
  - en
tags:
  - sonar-llm
  - sonar
  - llama
  - text-generation
  - embeddings
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---

# SONAR-LLM (900M)

We present SONAR-LLM, a decoder-only transformer that "thinks" in the continuous SONAR sentence-embedding space, yet is supervised through token-level cross-entropy propagated through the frozen SONAR decoder. This hybrid objective retains the semantic abstraction of the Large Concept Model (LCM) while eliminating its diffusion sampler and restoring a likelihood-based training signal. Across model sizes from 39M to 1.3B parameters, SONAR-LLM attains competitive generation quality.
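The key point of the hybrid objective is that the loss is computed on tokens, not on embeddings: the predicted sentence embedding is pushed through the frozen SONAR decoder, token-level cross-entropy is taken against the reference sentence, and gradients flow back through the frozen decoder into the embedding predictor. The toy sketch below is not the authors' code; both linear layers are hypothetical stand-ins, chosen only to show how gradients reach the predictor while the decoder stays fixed.

```python
import torch
import torch.nn as nn

latent_dim, vocab_size, seq_len = 1024, 32, 5

# Stand-in for the decoder-only transformer that predicts the next sentence embedding
predictor = nn.Linear(latent_dim, latent_dim)

# Stand-in for the frozen SONAR decoder that maps an embedding to token logits
frozen_decoder = nn.Linear(latent_dim, seq_len * vocab_size)
for p in frozen_decoder.parameters():
    p.requires_grad_(False)

context_emb = torch.randn(1, latent_dim)                 # embedding of the preceding sentence
target_tokens = torch.randint(vocab_size, (1, seq_len))  # tokens of the ground-truth next sentence

pred_emb = predictor(context_emb)                         # predicted next-sentence embedding
logits = frozen_decoder(pred_emb).view(1, seq_len, vocab_size)

# Token-level cross-entropy, backpropagated *through* the frozen decoder
loss = nn.functional.cross_entropy(logits.view(-1, vocab_size), target_tokens.view(-1))
loss.backward()

print(predictor.weight.grad is not None)  # True: gradients reach the latent predictor
print(frozen_decoder.weight.grad)         # None: the decoder itself receives no updates
```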

Original repository: FusionBrainLab/SONAR-LLM

Paper: arXiv:2508.05305

This repository is a minimal bundle containing the SONAR-LLM 900M checkpoint and the code needed to run it.

## Install

- Use a fresh venv/conda environment.
- Install SONAR from the official repo: facebookresearch/SONAR (a quick sanity check follows this list).
- Ensure PyTorch and transformers are installed.
- (Optional) Download the NLTK punkt tokenizer: `python -c "import nltk; nltk.download('punkt')"`
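To confirm that the SONAR dependency is set up correctly, you can embed a sentence before loading SONAR-LLM. The snippet below follows the pipeline names used in the facebookresearch/SONAR README (`text_sonar_basic_encoder`); adjust them if your installed version differs.

```python
from sonar.inference_pipelines.text import TextToEmbeddingModelPipeline

# Load the basic SONAR text encoder (weights are downloaded on first use)
t2vec = TextToEmbeddingModelPipeline(
    encoder="text_sonar_basic_encoder",
    tokenizer="text_sonar_basic_encoder",
)

emb = t2vec.predict(["SONAR embeds whole sentences into a single vector."], source_lang="eng_Latn")
print(emb.shape)  # expected: (1, 1024), one 1024-dimensional sentence embedding
```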

## Usage

```python
from huggingface_hub import snapshot_download
import sys

# Download the checkpoint and bundled code, and make the code importable
p = snapshot_download("raxtemur/sonar-llm-900m")
sys.path.insert(0, p)

from sonarllm_model import SONARLLMGenerator, SONARLLMGenerationConfig

# Load the 900M generator from the downloaded snapshot
gen = SONARLLMGenerator.load_from_checkpoint(p)

# SONAR embedding of an end-of-sequence sentence, used as the stop marker in latent space
eos_emb = gen.t2vec.predict(["End of sequence."], source_lang="eng_Latn").to(gen.device)

cfg = SONARLLMGenerationConfig(temperature=0.2, latent_top_p=0.9, decoder_beam_size=1)
print(gen.generate("Once upon a time", eos_emb, cfg))
```
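The loaded generator, EOS embedding, and generation config can be reused across prompts; only the prompt string changes. A minimal continuation of the snippet above (the prompts here are purely illustrative):

```python
# Reuse gen, eos_emb, and cfg from the snippet above for several prompts
prompts = ["Once upon a time", "The ship left the harbor at dawn"]
for prompt in prompts:
    print(gen.generate(prompt, eos_emb, cfg))
```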

## Files

- `pytorch_model.bin`: model weights
- `config.json`: model configuration
- `sonarllm_model/`: bundled code imported in the Usage section

## Notes