# 🐚 Shell Assistant v3 (0.5B)

> **A local, agentic Linux command assistant powered by a fine-tuned Qwen 2.5 Coder (0.5B).**
> Converts natural language into structured, executable JSON with built-in risk assessment.

## 🚀 Overview

This project implements an **end-to-end agentic workflow** for Linux terminal management. Unlike standard chatbots, this model is fine-tuned to output strict **JSON structures** containing the command, `sudo` requirements, and a `risk` classification.

A Bash wrapper script acts as the **execution layer**, parsing the JSON and applying safety logic before running commands on your system.

### Key Features
* **⚡ Ultra-Lightweight:** Runs on 0.5B parameters (CPU-friendly, low latency).
* **🛡️ Safety-First:** Every command is classified as `low`, `medium`, or `high` risk.
* **🧠 Context-Aware:** Understands complex logic (e.g., "Allow IP 10.0.0.5 on port 80" → `ufw allow from 10.0.0.5...`).
* **🐧 Distro-Agnostic:** Capable of generalizing between Debian (`apt`) and Fedora (`dnf`) syntaxes.

---

## πŸ› οΈ Architecture

The pipeline consists of three main components:

1.  **Synthetic Data Generator (`dataset_generator.py`)**: Creates thousands of training samples covering networking, logs, Docker, and system maintenance.
2.  **Fine-Tuned Model (Ollama)**: A Qwen 2.5 Coder 0.5B model trained to translate intent into JSON.
3.  **Execution Wrapper (`cli-assistant`)**: A Bash script that queries the model, parses JSON, warns on high-risk actions, and executes commands.
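The wrapper's parse-and-gate step can be sketched as follows. This is a minimal illustration, not the actual `cli-assistant` script: the hardcoded `response` stands in for the model's JSON output (which the real wrapper obtains via `ollama run`), and the variable names are invented for this sketch.

```shell
#!/usr/bin/env bash
# Stand-in for a model response; the real wrapper gets this from `ollama run`.
response='{"cmd": "journalctl -u nginx -n 50", "sudo": false, "risk": "low"}'

# Extract the three fields with jq.
cmd=$(echo "$response" | jq -r '.cmd')
needs_sudo=$(echo "$response" | jq -r '.sudo')
risk=$(echo "$response" | jq -r '.risk')

# Prepend sudo when the model says it is required.
if [ "$needs_sudo" = "true" ]; then
  cmd="sudo $cmd"
fi

# Gate execution on the risk classification: low runs immediately,
# anything else asks for confirmation first.
if [ "$risk" = "low" ]; then
  echo "✅ Safe to run."
  echo "🚀 Executing: $cmd"
else
  read -r -p "⚠️  WARNING: Flagged as ${risk^^} RISK. Execute '$cmd'? [y/N]: " answer
  [ "$answer" = "y" ] || { echo "Aborted."; exit 1; }
fi
```

The key design point is that the model never executes anything itself; it only emits JSON, and all side effects happen in this auditable shell layer.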

---

## 📦 Installation

### 1. Prerequisites
* [Ollama](https://ollama.com/) installed and running.
* `jq` (for JSON parsing in the wrapper).
    ```bash
    sudo apt-get install jq  # Debian/Ubuntu
    sudo dnf install jq      # Fedora
    ```

### 2. Setup the Model
Pull the base model and create the custom shell assistant version:
```bash
ollama create shell-assistant-v3-0.5b -f Modelfile
```
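For orientation, a minimal `Modelfile` for this kind of setup could look like the following. This is an illustrative sketch using standard Ollama Modelfile directives; the file path, parameter, and system prompt are assumptions, not the shipped configuration.

```
FROM ./shell-assistant-0.5b.gguf
PARAMETER temperature 0
SYSTEM "Translate the user's request into a JSON object with the keys cmd, sudo, and risk. Output only JSON."
```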

### 3. Setup the Wrapper

Make the script executable and (optionally) move it onto your `PATH`:

```bash
chmod +x shell-assistant.sh
sudo mv shell-assistant.sh /usr/local/bin/ask
```

---

## 💻 Usage

Run the assistant followed by your natural language request.

### Basic Commands

```bash
# Check logs (Low Risk - Auto runs)
ask "check nginx logs"

# Output:
# ✅ Safe to run.
# 🚀 Executing: journalctl -u nginx -n 50
```

### High-Risk Protections

Dangerous commands trigger a confirmation prompt.

```bash
ask "kill all tmux sessions"

# Output:
# --------------------------------
# Command:  tmux kill-server
# Sudo:     false
# Risk:     high
# --------------------------------
# ⚠️  WARNING: Flagged as HIGH RISK.
# Execute 'tmux kill-server'? [y/N]:
```

### Complex Logic

```bash
ask "allow 192.168.1.50 to access port 5432"
# -> Generates: ufw allow from 192.168.1.50 to any port 5432
```

---

## 🧠 Training Details

The model was fine-tuned on 15,000 synthetic samples generated by `dataset_generator.py`.

* **Objective:** Instruction tuning (text → JSON).
* **Format:**

    ```json
    {
      "cmd": "systemctl restart docker",
      "sudo": true,
      "risk": "medium"
    }
    ```

* **Focus Areas:** `ufw`, `systemctl`, `journalctl`, `docker`, `git`, `tmux`.
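Because every downstream step depends on this schema, a model response can be cheaply validated before anything is parsed or executed. A sketch with `jq` (the filter is illustrative, not taken from the actual wrapper):

```shell
# Stand-in for a model response.
response='{"cmd": "systemctl restart docker", "sudo": true, "risk": "medium"}'

# jq -e sets the exit status from the filter result, so a malformed
# response (missing keys or an unknown risk level) fails the check.
if echo "$response" | jq -e '
    has("cmd") and has("sudo") and has("risk")
    and (.risk | IN("low", "medium", "high"))' > /dev/null; then
  verdict="valid"
else
  verdict="invalid"
fi
echo "$verdict"
```

Rejecting anything outside the schema up front means a hallucinated or truncated model output degrades to a refusal rather than an arbitrary command.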

---

## 🐳 Usage with Ollama

1. Download the model file (`shell-assistant-0.5b.gguf`) and the `Modelfile`.
2. Create the model locally:

    ```bash
    ollama create shell-assistant -f Modelfile
    ```

---

## ⚠️ Disclaimer

This tool executes commands on your system. While the risk assessment and confirmation prompts are designed to prevent accidents, you should always review the command before confirming execution. The authors are not responsible for accidental system damage (e.g., `rm -rf /`).

