Helion-V2.0-Thinking
An advanced 10.2B-parameter multimodal language model with a 200K-token context window, native vision, and tool-use capabilities.
```python
from transformers import AutoModelForCausalLM, AutoProcessor
from PIL import Image

model = AutoModelForCausalLM.from_pretrained(
    "DeepXR/Helion-V2.0-Thinking",
    torch_dtype="auto",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("DeepXR/Helion-V2.0-Thinking")

# Text generation
prompt = "Explain quantum computing in simple terms:"
inputs = processor(text=prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(outputs[0], skip_special_tokens=True))

# Image understanding
image = Image.open("photo.jpg")
inputs = processor(text="What's in this image?", images=image, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(outputs[0], skip_special_tokens=True))
```
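For multi-turn use, the processor may ship a chat template; the following is a minimal sketch under that assumption (fall back to plain string prompts if `apply_chat_template` raises for this checkpoint):

```python
# Sketch only: assumes the processor ships a chat template for this checkpoint.
messages = [
    {"role": "user", "content": "Explain quantum computing in simple terms."},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(outputs[0], skip_special_tokens=True))
```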
Text benchmarks (the last column is the relative gain of the Thinking variant over the base model):

| Benchmark | Helion-V2.0 | Helion-V2.0-Thinking | Relative Improvement |
|---|---|---|---|
| MMLU (5-shot) | 64.2% | 72.3% | +12.6% |
| HellaSwag (10-shot) | 80.5% | 84.8% | +5.3% |
| ARC-Challenge (25-shot) | 58.3% | 68.7% | +17.8% |
| TruthfulQA MC2 | 52.1% | 58.4% | +12.1% |
| GSM8K (8-shot) | 68.7% | 72.1% | +4.9% |
| HumanEval (0-shot) | 48.2% | 52.8% | +9.5% |
Vision benchmarks:

| Benchmark | Score | Notes |
|---|---|---|
| VQA v2 | 78.9% | Visual question answering |
| TextVQA | 72.4% | Text in images |
| ChartQA | 76.8% | Chart understanding |
| DocVQA | 84.3% | Document analysis |
| AI2D | 78.2% | Scientific diagrams |
Tool use and structured output:

| Benchmark | Score |
|---|---|
| Berkeley Function Calling | 89.7% |
| API-Bank | 86.4% |
| JSON Schema Adherence | 94.8% |
Quantized inference options:

| Configuration | VRAM | Throughput (reference GPU) |
|---|---|---|
| BF16 | 24 GB | 42 tok/s (RTX 4090) |
| INT8 | 16 GB | 67 tok/s (RTX 4080) |
| INT4 | 12 GB | 89 tok/s (RTX 4070) |
Install dependencies:

```bash
pip install transformers torch accelerate pillow
```
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Choose ONE of the two configs below.

# 8-bit (~16GB VRAM)
config = BitsAndBytesConfig(load_in_8bit=True)

# 4-bit (~12GB VRAM)
config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
)

model = AutoModelForCausalLM.from_pretrained(
    "DeepXR/Helion-V2.0-Thinking",
    quantization_config=config,
    device_map="auto",
)
```
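To verify that a quantized load actually fits your budget, `get_memory_footprint()` (a standard transformers helper) reports the resident weight size:

```python
# Report the in-memory size of the loaded (possibly quantized) weights.
print(f"Model footprint: {model.get_memory_footprint() / 1e9:.1f} GB")
```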
```python
import json

tools = [{
    "name": "calculator",
    "description": "Perform calculations",
    "parameters": {"expression": {"type": "string"}},
}]

prompt = f"Available tools: {json.dumps(tools)}\n\nUser: What is 127 * 89?\nAssistant:"
inputs = processor(text=prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.2)
```
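The exact tool-call format the model emits is checkpoint-specific; the sketch below assumes the reply embeds a single JSON object like `{"name": ..., "arguments": ...}` and extracts it for dispatch:

```python
import re

# Decode only the generated continuation, not the prompt tokens.
gen = outputs[0][inputs["input_ids"].shape[1]:]
reply = processor.decode(gen, skip_special_tokens=True)

# Assumption: the reply contains one JSON tool call.
match = re.search(r"\{.*\}", reply, re.DOTALL)
if match:
    call = json.loads(match.group(0))
    # Dispatch to your own tool registry here; printing stands in for execution.
    print(call.get("name"), call.get("arguments"))
```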
```python
# Process entire documents (up to the 200K-token context window)
with open("long_document.txt") as f:
    document = f.read()

prompt = f"{document}\n\nSummarize the key points:"
inputs = processor(text=prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
```
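Before generating, it can help to confirm the document actually fits the window; this sketch assumes the processor exposes its text tokenizer via the usual `processor.tokenizer` attribute:

```python
# Assumption: processor.tokenizer is the underlying text tokenizer.
n_tokens = len(processor.tokenizer(document)["input_ids"])
if n_tokens > 200_000:
    raise ValueError(f"Document is {n_tokens} tokens, beyond the 200K window")
```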
```python
# Multi-image comparison
images = [Image.open(f"image{i}.jpg") for i in range(3)]
prompt = "Compare these images and describe the differences:"
inputs = processor(text=prompt, images=images, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(outputs[0], skip_special_tokens=True))
```
The model ships with built-in safety guardrails. See safety_wrapper.py for production deployment.
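safety_wrapper.py is the supported implementation; purely as an illustration of the pattern, a pre-generation filter might look like the sketch below (the blocklist and function name are hypothetical, not the shipped configuration):

```python
# Hypothetical sketch, NOT the contents of safety_wrapper.py.
BLOCKED_TERMS = {"example_blocked_term"}  # placeholder; real rules live in safety_config.json

def generate_safely(prompt: str, max_new_tokens: int = 256) -> str:
    """Screen the prompt before it reaches the model."""
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return "Request declined by safety filter."
    inputs = processor(text=prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return processor.decode(outputs[0], skip_special_tokens=True)
```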
Repository files:

- inference.py - Full inference script with examples
- safety_wrapper.py - Production safety wrapper
- evaluate.py - Comprehensive evaluation suite
- benchmark.py - Performance benchmarking
- QUICKSTART.md - Quick start guide
- USE_CASES.md - Detailed use case examples
- safety_config.json - Safety configuration
- requirements.txt - Dependencies
- Dockerfile - Container deployment

Citation:

```bibtex
@misc{helion-v2-thinking-2025,
  title={Helion-V2.0-Thinking: A 10.2B Multimodal Language Model},
  author={DeepXR},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/DeepXR/Helion-V2.0-Thinking}
}
```
Apache 2.0 - See LICENSE file for details.
Built with Transformers, trained on diverse open datasets. Thanks to the open-source AI community.