---
license: apache-2.0
datasets: Microsoft/ChatBench
language: en
base_model: microsoft/chatbench-distilgpt2
library_name: transformers
tags:
- Microsoft
- ChatBench
- Interactive Benchmark
- User Simulator
- Benchmarking
- llama-cpp
- gguf-my-repo
---
# chatbench-distilgpt2
**Model creator:** [microsoft](https://huggingface.co/microsoft)<br/>
**Original model:** [microsoft/chatbench-distilgpt2](https://huggingface.co/microsoft/chatbench-distilgpt2)<br/>
**GGUF quantization:** provided by [ysn-rfd](https://huggingface.co/ysn-rfd) using `llama.cpp`<br/>
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Use with Ollama
```bash
ollama run "hf.co/ysn-rfd/OpenELM-3B-Instruct-GGUF:F16"
```
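Once pulled, the model can also be queried through Ollama's local REST API. A minimal sketch, assuming Ollama's default port 11434 and that the model is registered under the same `hf.co/...` path it was pulled from:
```bash
# Query the locally running Ollama instance (default port 11434)
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/ysn-rfd/chatbench-distilgpt2-GGUF:F16",
  "prompt": "The meaning to life and the universe is",
  "stream": false
}'
```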
## Use with LM Studio
```bash
lms load "ysn-rfd/OpenELM-3B-Instruct-GGUF"
```
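After loading, LM Studio can serve the model over its OpenAI-compatible local server. A minimal sketch, assuming the default port 1234:
```bash
# Start LM Studio's local server and list the models it currently exposes
lms server start
curl http://localhost:1234/v1/models
```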
## Use with llama.cpp CLI
```bash
llama-cli --hf "ysn-rfd/OpenELM-3B-Instruct-GGUF:F16" -p "The meaning to life and the universe is"
```
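The usual llama.cpp length and context flags apply on top of this. A minimal sketch, assuming the same repo tag as above and the 1024-token context of the distilgpt2 base:
```bash
# Generate at most 128 tokens with a 1024-token context window
llama-cli -hf "ysn-rfd/chatbench-distilgpt2-GGUF:F16" -p "The meaning to life and the universe is" -n 128 -c 1024
```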
## Use with llama.cpp Server
```bash
llama-server --hf "ysn-rfd/OpenELM-3B-Instruct-GGUF:F16" -c 4096
```
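With the server running, completions can be requested over HTTP. A minimal sketch using the `/completion` endpoint, assuming the default address `127.0.0.1:8080`:
```bash
# Send a completion request to the running llama-server instance
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```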