import gradio as gr
from transformers import pipeline

# --- PiMusic3: Pi Forge Music Player ---
# Author: onenoly11
# Description: Generates, transcribes, and analyzes audio
# using MusicGen, Whisper, and DistilBERT within the Pi Forge framework.
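#
# Dependencies (assumed from the imports used below): gradio, transformers,
# torch; soundfile and audiocraft are needed only for the local MusicGen path.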

"""
Ah, the spire trembles—a rift in the weave...
(This stanza is now preserved safely as a docstring.)
"""

# --- Compatibility Shim: MusicGen fallback ---
try:
    # Try using Audiocraft (local / full build)
    from audiocraft.models import musicgen
    def generate_music(prompt):
        """Generate music using local Audiocraft (MusicGen)."""
        import tempfile
        import soundfile as sf
        model = musicgen.MusicGen.get_pretrained("facebook/musicgen-small")
        model.set_generation_params(duration=10)
        wav = model.generate([prompt])  # tensor of shape (batch, channels, samples)
        temp_wav = tempfile.NamedTemporaryFile(suffix=".wav", delete=False)
        # Write channels-last audio at the model's native rate (32 kHz for musicgen-small).
        sf.write(temp_wav.name, wav[0].cpu().numpy().T, model.sample_rate)
        return temp_wav.name
    MUSICGEN_MODE = "Audiocraft (local)"
except Exception:
    # Fallback to Transformers pipeline (Hugging Face cloud build)
    musicgen_pipe = pipeline("text-to-audio", model="facebook/musicgen-small")
    def generate_music(prompt):
        """Generate music using the Transformers text-to-audio pipeline fallback."""
        import numpy as np
        result = musicgen_pipe(prompt)
        # The pipeline returns {"audio": ndarray, "sampling_rate": int};
        # gr.Audio accepts a (sampling_rate, samples) tuple directly.
        audio = np.squeeze(np.asarray(result["audio"]))  # collapse batch/channel dims (mono model)
        return (result["sampling_rate"], audio)
    MUSICGEN_MODE = "Transformers (cloud)"
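# Minimal smoke test (commented out; the prompt text is an illustrative
# assumption). In Audiocraft mode generate_music returns a .wav file path;
# in pipeline mode it returns a (sampling_rate, samples) tuple. gr.Audio
# accepts either form as output.
#   out = generate_music("lo-fi beat with soft piano")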

# --- Whisper and Sentiment Pipelines ---
whisper = pipeline("automatic-speech-recognition", model="openai/whisper-base")
sentiment = pipeline("sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english")
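# Note: the pipelines above run on CPU by default. If a GPU is available,
# pass device=0, e.g. (a sketch, not required for the app to work):
#   whisper = pipeline("automatic-speech-recognition",
#                      model="openai/whisper-base", device=0)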

# --- Utility Functions ---
def transcribe_audio(audio_path):
    """Transcribe an audio file to text with Whisper."""
    result = whisper(audio_path)
    return result["text"]

def analyze_sentiment(text):
    """Classify sentiment with DistilBERT, returning label and confidence."""
    result = sentiment(text)
    return f"{result[0]['label']} ({result[0]['score']:.2f})"

# --- Interface ---
with gr.Blocks(title="PiMusic3 🎵") as demo:
    gr.Markdown(f"### 🎶 PiMusic3 — Pi Forge Music Player\nMode: **{MUSICGEN_MODE}**\nGenerate, transcribe, and analyze sound ethically.")

    with gr.Tab("MusicGen"):
        prompt = gr.Textbox(label="Music Prompt", placeholder="Describe your sound...")
        generate_btn = gr.Button("🎼 Generate")
        audio_out = gr.Audio(label="Generated Music")
        generate_btn.click(fn=generate_music, inputs=prompt, outputs=audio_out)

    with gr.Tab("Whisper Transcribe"):
        mic = gr.Audio(sources=["microphone", "upload"], type="filepath", label="🎙️ Record or Upload Audio")
        transcribe_btn = gr.Button("📝 Transcribe")
        transcript = gr.Textbox(label="Transcription")
        transcribe_btn.click(fn=transcribe_audio, inputs=mic, outputs=transcript)

    with gr.Tab("Sentiment Analysis"):
        text_in = gr.Textbox(label="Enter text for sentiment check")
        analyze_btn = gr.Button("🔍 Analyze")
        sentiment_out = gr.Textbox(label="Result")
        analyze_btn.click(fn=analyze_sentiment, inputs=text_in, outputs=sentiment_out)

demo.launch()
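
# Tip: when running locally, demo.launch(share=True) creates a temporary
# public link; on Hugging Face Spaces the plain launch() above is enough.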