Hardened Qwen 3 Local AI Solution
This implementation provides a completely local, hardened AI environment with read-only access to the model files.
Docker Implementation
Here's the complete Docker setup for running Qwen 3 in an Alpine container with a read-only root filesystem and a read-only model volume:
# Dockerfile for Hardened Qwen 3
FROM alpine:3.20

# Install minimal dependencies
# NOTE: PyTorch publishes no musl wheels, so pip may need to build it from
# source on Alpine; swap in a slim glibc base image if that is impractical.
RUN apk add --no-cache \
    python3 \
    py3-pip

# Set working directory
WORKDIR /app

# Copy application files
COPY requirements.txt .
COPY app.py .

# Install pinned Python dependencies in a virtual environment
# (Alpine's system Python is externally managed, so pip requires a venv)
RUN python3 -m venv /opt/venv && \
    /opt/venv/bin/pip install --no-cache-dir -r requirements.txt

# Model files are mounted read-only at runtime (see deployment instructions)
VOLUME /model

# Security hardening: lock down application files
RUN chmod 500 /app && \
    chmod 400 /app/app.py /app/requirements.txt

# Run as a non-root user with no login shell
RUN adduser -D -s /sbin/nologin aiuser && \
    chown -R aiuser:aiuser /app
USER aiuser

# The root filesystem is made read-only at runtime via `docker run --read-only`;
# a non-root process cannot remount / from inside the container.
CMD ["/opt/venv/bin/python3", "/app/app.py"]
Security Features
- Alpine Linux base for a minimal attack surface
- Read-only root filesystem enforced at runtime (docker run --read-only)
- Non-root user execution with no login shell
- Minimal package installation
- No network access required (container runs with --network none)
- Model files mounted as a read-only volume
- Strict file permissions (400 for sensitive files)
- All Python dependencies pinned in requirements.txt (see the sketch below)
- No apk or pip caches left in the image (--no-cache / --no-cache-dir)
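A pinned requirements.txt supporting the claim above might look like the following; the exact version numbers are illustrative placeholders, so substitute the versions you have tested and verified locally:

# requirements.txt -- versions are illustrative; pin to your verified set
torch==2.3.1
transformers==4.41.2
sentencepiece==0.2.0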
Python Application
The main application file (app.py) verifies the environment, then loads Qwen 3 from the read-only volume:
import os
import sys

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = '/model/qwen3'

# Security checks
def security_checks():
    # Best-effort check that the root filesystem is read-only
    # (run the container with --read-only to actually enforce this)
    if not os.access('/', os.W_OK):
        print("✓ Filesystem is read-only")
    else:
        print("✗ Filesystem is writable - security risk!")
        sys.exit(1)

    # Verify the model directory exists and is readable
    if os.path.exists(MODEL_PATH) and os.access(MODEL_PATH, os.R_OK):
        print("✓ Model directory accessible")
    else:
        print("✗ Model directory not accessible")
        sys.exit(1)

# Initialize model
def init_model():
    try:
        # Load model and tokenizer strictly from the read-only local path;
        # local_files_only blocks any network fetch, and trust_remote_code=False
        # prevents execution of code shipped alongside the checkpoint
        model = AutoModelForCausalLM.from_pretrained(
            MODEL_PATH,
            trust_remote_code=False,
            local_files_only=True,
        )
        tokenizer = AutoTokenizer.from_pretrained(
            MODEL_PATH,
            trust_remote_code=False,
            local_files_only=True,
        )
        print("✓ Model loaded successfully")
        return model, tokenizer
    except Exception as e:
        print(f"✗ Model loading failed: {e}")
        sys.exit(1)

# Main execution
if __name__ == "__main__":
    print("Starting Hardened Qwen 3 AI...")
    security_checks()
    model, tokenizer = init_model()
    # Your application logic here (see the inference sketch below)
    print("AI ready for local inference")
Deployment Instructions
- Build the Docker image:
docker build -t hardened-qwen3 .
- Run the container with the model volume mounted read-only (a compose equivalent is sketched below):
docker run -d \
  --name qwen3-ai \
  -v /path/to/qwen3-model:/model:ro \
  --read-only \
  --network none \
  --cap-drop=ALL \
  hardened-qwen3
- Verify the root filesystem is read-only (the write attempt should fail with "Read-only file system"):
docker exec qwen3-ai touch /test
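For repeatable deployments, the same run flags can be captured in a compose file. This is a sketch assuming the image name and model path used above:

# docker-compose.yml
services:
  qwen3-ai:
    image: hardened-qwen3
    container_name: qwen3-ai
    read_only: true
    network_mode: "none"
    cap_drop:
      - ALL
    volumes:
      - /path/to/qwen3-model:/model:ro

Start it with docker compose up -d.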
Additional Hardening
For maximum security, consider these additional measures:
- Enable Docker Content Trust for image verification
- Sign your images with Cosign (see the sketch after this list)
- Run in a dedicated user namespace
- Use seccomp profiles to restrict syscalls
- Enable AppArmor or SELinux policies
- Scan images regularly for vulnerabilities with Trivy
- Use immutable tags (or digest pins) for production images
- Implement runtime security monitoring
- Store model files on encrypted volumes
- Use hardware security modules for key management
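As an illustrative sketch of the signing, scanning, and seccomp steps, where the registry name, key paths, image tag, and profile path are all placeholders:

# Sign and verify the image with Cosign (key paths are placeholders)
cosign sign --key cosign.key registry.example.com/hardened-qwen3:1.0.0
cosign verify --key cosign.pub registry.example.com/hardened-qwen3:1.0.0

# Scan the image for known vulnerabilities with Trivy
trivy image --severity HIGH,CRITICAL hardened-qwen3:1.0.0

# Apply a custom seccomp profile at runtime
docker run -d --read-only --network none --cap-drop=ALL \
  --security-opt seccomp=/path/to/profile.json \
  -v /path/to/qwen3-model:/model:ro hardened-qwen3:1.0.0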