# 🐚 Shell Assistant v3 (0.5B)
> **A local, agentic Linux command assistant powered by a fine-tuned Qwen 2.5 Coder (0.5B).**
> Converts natural language into structured, executable JSON with built-in risk assessment.
## 📖 Overview
This project implements an **end-to-end agentic workflow** for Linux terminal management. Unlike standard chatbots, this model is fine-tuned to output strict **JSON structures** containing the command, `sudo` requirements, and a `risk` classification.
A Bash wrapper script acts as the **execution layer**, parsing the JSON and applying safety logic before running commands on your system.
### Key Features
* **⚡ Ultra-Lightweight:** Runs on 0.5B parameters (CPU-friendly, low latency).
* **🛡️ Safety-First:** Every command is classified as `low`, `medium`, or `high` risk.
* **🧠 Context-Aware:** Understands complex logic (e.g., "Allow IP 10.0.0.5 on port 80" $\rightarrow$ `ufw allow from 10.0.0.5...`).
* **🐧 Distro-Agnostic:** Capable of generalizing between Debian (`apt`) and Fedora (`dnf`) syntaxes.
---
## 🛠️ Architecture
The pipeline consists of three main components:
1. **Synthetic Data Generator (`dataset_generator.py`)**: Creates thousands of training samples covering networking, logs, Docker, and system maintenance.
2. **Fine-Tuned Model (Ollama)**: A Qwen 2.5 Coder 0.5B model trained to translate intent into JSON.
3. **Execution Wrapper (`cli-assistant`)**: A Bash script that queries the model, parses JSON, warns on high-risk actions, and executes commands.
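The core logic of the execution layer can be sketched in a few lines of Bash. This is a minimal illustration only, not the actual `cli-assistant` script; the function name `run_from_json` is hypothetical, and the model's JSON reply is assumed to be already captured in a variable.

```shell
#!/usr/bin/env bash
# Minimal sketch (hypothetical) of the execution layer: parse the
# model's JSON reply with jq, gate high-risk commands behind a
# confirmation prompt, then execute.
run_from_json() {
  local response="$1"
  local cmd sudo_flag risk
  cmd=$(printf '%s' "$response" | jq -r '.cmd')
  sudo_flag=$(printf '%s' "$response" | jq -r '.sudo')
  risk=$(printf '%s' "$response" | jq -r '.risk')

  # High-risk commands require explicit confirmation.
  if [ "$risk" = "high" ]; then
    read -r -p "Execute '$cmd'? [y/N]: " answer
    [ "$answer" = "y" ] || { echo "Aborted."; return 1; }
  fi

  # Run with or without sudo, as the model indicated.
  if [ "$sudo_flag" = "true" ]; then
    sudo sh -c "$cmd"
  else
    sh -c "$cmd"
  fi
}

# Example: a low-risk command runs immediately.
run_from_json '{"cmd": "echo hello", "sudo": false, "risk": "low"}'
```

The real script additionally prints the parsed fields before executing, as shown in the Usage section below.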
---
## 📦 Installation
### 1. Prerequisites
* [Ollama](https://ollama.com/) installed and running.
* `jq` (for JSON parsing in the wrapper).
```bash
sudo apt-get install jq # Debian/Ubuntu
sudo dnf install jq # Fedora
```
### 2. Set Up the Model
Create the custom shell assistant from the base model using the provided `Modelfile`:

```bash
ollama create shell-assistant-v3-0.5b -f Modelfile
```
### 3. Set Up the Wrapper
Make the script executable and (optionally) move it onto your `PATH`:

```bash
chmod +x shell-assistant.sh
sudo mv shell-assistant.sh /usr/local/bin/ask
```
## 💻 Usage
Run the assistant followed by your natural language request.

### Basic Commands

```bash
# Check logs (low risk: runs automatically)
ask "check nginx logs"

# Output:
# ✅ Safe to run.
# 🚀 Executing: journalctl -u nginx -n 50
```
### High-Risk Protections
Dangerous commands trigger a confirmation prompt.

```bash
ask "kill all tmux sessions"

# Output:
# --------------------------------
# Command: tmux kill-server
# Sudo: false
# Risk: high
# --------------------------------
# ⚠️ WARNING: Flagged as HIGH RISK.
# Execute 'tmux kill-server'? [y/N]:
```
### Complex Logic

```bash
ask "allow 192.168.1.50 to access port 5432"
# -> Generates: ufw allow from 192.168.1.50 to any port 5432
```
## 🧠 Training Details
The model was fine-tuned on 15,000 synthetic samples generated by `dataset_generator.py`.

- **Objective:** Instruction tuning (text $\rightarrow$ JSON).
- **Format:**

  ```json
  {
    "cmd": "systemctl restart docker",
    "sudo": true,
    "risk": "medium"
  }
  ```

- **Focus Areas:** `ufw`, `systemctl`, `journalctl`, `docker`, `git`, `tmux`.
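Because the wrapper depends on this exact three-field schema, malformed generations can be caught with a quick `jq` check. The snippet below is illustrative only (the `validate_sample` helper is not part of the repo):

```shell
# Illustrative schema check (not part of the repo): verify that a
# model completion has the three expected fields with the right types.
validate_sample() {
  printf '%s' "$1" | jq -e '
    (.cmd  | type == "string") and
    (.sudo | type == "boolean") and
    (.risk | IN("low", "medium", "high"))
  ' > /dev/null
}

validate_sample '{"cmd": "systemctl restart docker", "sudo": true, "risk": "medium"}' \
  && echo "valid" || echo "invalid"
```

`jq -e` sets its exit status from the filter's result, so the function can be used directly in shell conditionals.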
## 🐳 Usage with Ollama
- Download the model file (`shell-assistant-0.5b.gguf`) and the `Modelfile`.
- Create the model locally:

```bash
ollama create shell-assistant -f Modelfile
```
## ⚠️ Disclaimer
This tool executes commands on your system. While the risk assessment and confirmation prompts are designed to prevent accidents, you should always review the command before confirming execution. The authors are not responsible for accidental system damage (e.g., `rm -rf /`).
## Model Tree
`petyussz/shell-assistant-0.5b` is derived from:

- Base model: `Qwen/Qwen2.5-0.5B`
- Fine-tuned: `Qwen/Qwen2.5-Coder-0.5B`
- Fine-tuned: `Qwen/Qwen2.5-Coder-0.5B-Instruct`