Nihal2000 committed
Commit 5ada319 · 0 Parent(s)

Initial deployment

.gitignore ADDED
@@ -0,0 +1,8 @@
+ # .gitignore for HF Space
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.log
+ .env
+ .venv
+ venv/
DEPLOY.md ADDED
@@ -0,0 +1,133 @@
+ # HF Space Deployment Instructions
+
+ ## What's in this folder
+
+ This `hf_space` directory contains a **complete, standalone** Hugging Face Space ready to deploy.
+
+ ### Structure
+
+ ```
+ hf_space/
+ ├── app.py                   # Entry point for HF Space
+ ├── README.md                # Space description with metadata
+ ├── requirements.txt         # Minimal dependencies
+ ├── .gitignore               # Git ignore patterns
+ ├── ui/
+ │   ├── gradio_interface.py  # Gradio UI components
+ │   ├── remote_client.py     # Modal backend client
+ │   └── backend.py           # Backend abstraction
+ ├── core/
+ │   └── models.py            # Pydantic data models
+ ├── visualization/
+ │   └── blaxel_generator.py  # 3D visualization
+ ├── voice/
+ │   └── elevenlabs_tts.py    # Voice TTS (stub)
+ └── config/
+     └── api_keys.py          # Config (stub)
+ ```
+
+ ## Deployment Steps
+
+ ### Option 1: Push to Existing Space
+
+ If you already have a Space created at https://huggingface.co/spaces/Nihal2000/debuggenie:
+
+ ```bash
+ cd hf_space
+
+ # Initialize git if needed
+ git init
+ git add .
+ git commit -m "Initial DebugGenie HF Space deployment"
+
+ # Add HF remote
+ git remote add hf https://huggingface.co/spaces/Nihal2000/debuggenie
+
+ # Push
+ git push hf main --force
+ ```
+
+ ### Option 2: Create New Space
+
+ 1. Go to https://huggingface.co/new-space
+ 2. Create the Space:
+    - **Name**: `debuggenie`
+    - **License**: MIT
+    - **SDK**: Gradio
+    - **Hardware**: CPU Basic (free) or upgrade to T4 Small for faster performance
+ 3. Clone the Space locally:
+    ```bash
+    git clone https://huggingface.co/spaces/Nihal2000/debuggenie
+    ```
+ 4. Copy the contents of `hf_space/` into the cloned directory
+ 5. Commit and push:
+    ```bash
+    git add .
+    git commit -m "Initial deployment"
+    git push
+    ```
+
+ ### Configure Secrets
+
+ **CRITICAL**: Before the Space will work, you must set the `MODAL_API_URL` secret:
+
+ 1. Go to your Space settings: https://huggingface.co/spaces/Nihal2000/debuggenie/settings
+ 2. Navigate to **Repository secrets**
+ 3. Add a new secret:
+    - **Name**: `MODAL_API_URL`
+    - **Value**: Your Modal endpoint URL (e.g., `https://nihal2000--debuggenie-app-analyze-error.modal.run`)
+
+ ### Get Modal URL
+
+ First, deploy your Modal backend:
+
+ ```bash
+ cd ..  # Back to the main debuggenie directory
+ modal deploy modal_app.py
+ ```
+
+ Copy the URL from the output (it looks like `https://[username]--debuggenie-app-analyze-error.modal.run`).
+
+ ## Testing Locally
+
+ You can test the HF Space locally before pushing:
+
+ ```bash
+ cd hf_space
+
+ # Set the Modal URL (on Windows cmd, use `set` instead of `export`)
+ export MODAL_API_URL=https://your-modal-url.modal.run
+
+ # Run
+ python app.py
+ ```
+
+ ## Troubleshooting
+
+ ### Import Errors
+
+ All imports should work because stub files are included for the optional dependencies.
+
+ ### Modal Connection Errors
+
+ 1. Verify `MODAL_API_URL` is set in the HF Space secrets
+ 2. Check that the Modal backend is deployed and running
+ 3. Test the Modal endpoint directly with curl:
+    ```bash
+    curl -X POST https://your-modal-url.modal.run \
+      -H "Content-Type: application/json" \
+      -d '{"error_text": "test error"}'
+    ```
+
+ ### Space Not Building
+
+ Check the Space logs for build errors. Common issues:
+ - Missing dependencies in `requirements.txt`
+ - Import errors (check that all modules have an `__init__.py`)
+
+ ## Next Steps
+
+ After deployment:
+ 1. Visit your Space: https://huggingface.co/spaces/Nihal2000/debuggenie
+ 2. Test with a sample error
+ 3. Share with others!
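The same smoke test can be run from Python instead of curl. This is a minimal sketch only: the endpoint URL comes from `MODAL_API_URL`, the payload keys mirror what `ui/remote_client.py` sends, and the response fields shown are assumed to follow the `DebugResult` shape in `core/models.py` (the actual fields depend on what `modal_app.py` returns).

```python
# smoke_test.py - minimal reachability check for the Modal endpoint (sketch).
import os

import requests

url = os.environ["MODAL_API_URL"]
payload = {"error_text": "test error", "code_context": ""}

resp = requests.post(url, json=payload, timeout=600)
resp.raise_for_status()

data = resp.json()
# The frontend parses this into core.models.DebugResult, so these keys
# should be present if the backend is healthy.
print(data.get("root_cause"))
print(data.get("confidence_score"))
```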
README.md ADDED
@@ -0,0 +1,55 @@
+ ---
+ title: DebugGenie
+ emoji: 🧞
+ colorFrom: blue
+ colorTo: purple
+ sdk: gradio
+ sdk_version: 5.0.0
+ app_file: app.py
+ pinned: false
+ license: mit
+ ---
+
+ # DebugGenie 🧞 - AI Debugging Assistant
+
+ **Multi-Agent AI System for Intelligent Error Analysis**
+
+ Powered by Claude, Gemini, and GPT-4 working together to solve your bugs.
+
+ ## Features
+
+ - 🤖 **Multi-Agent Analysis**: Three specialized AI agents collaborate to debug your errors
+ - 🎨 **3D Error Flow Visualization**: Interactive visualization of execution paths
+ - 🔊 **Voice Explanations**: AI-generated voice walkthroughs of solutions
+ - 📊 **Confidence Scoring**: Each solution ranked by probability of success
+ - 🧩 **Context-Aware**: Upload screenshots, code files, and stack traces
+
+ ## Architecture
+
+ This Space is the **frontend** for DebugGenie. The heavy AI processing runs on Modal's serverless infrastructure.
+
+ - **Frontend**: Lightweight Gradio UI (this Space)
+ - **Backend**: Modal serverless functions with GPU support
+ - **Communication**: Secure HTTPS API calls
+
+ ## Usage
+
+ 1. Paste your error message or stack trace
+ 2. Optionally upload a screenshot or code files
+ 3. Click "Analyze Error"
+ 4. Get AI-powered solutions with confidence scores
+
+ ## Configuration
+
+ This Space requires a `MODAL_API_URL` secret pointing to the deployed Modal backend.
+
+ ## Tech Stack
+
+ - Gradio 5.0
+ - Pydantic for data validation
+ - Plotly for visualizations
+ - Requests for backend communication
+
+ ## License
+
+ MIT License - See the LICENSE file for details
app.py ADDED
@@ -0,0 +1,19 @@
+ import os
+ import gradio as gr
+ from ui.gradio_interface import create_interface
+ from ui.remote_client import RemoteBackend
+
+ # Get Modal API URL from environment variable
+ MODAL_API_URL = os.environ.get("MODAL_API_URL")
+
+ if not MODAL_API_URL:
+     raise ValueError("MODAL_API_URL environment variable is not set. Please configure it in Hugging Face Space secrets.")
+
+ # Initialize remote backend
+ backend = RemoteBackend(api_url=MODAL_API_URL)
+
+ # Create interface
+ demo = create_interface(backend)
+
+ if __name__ == "__main__":
+     demo.launch()
config/__init__.py ADDED
@@ -0,0 +1 @@
+ # Empty __init__.py file
config/api_keys.py ADDED
@@ -0,0 +1,10 @@
+ # Stub for the api_keys module to avoid import errors in the HF Space.
+ # In the HF deployment we don't need the full api_keys config, since
+ # the UI only talks to the remote Modal backend.
+
+ class APIConfig:
+     def __init__(self):
+         # Stub - only needed for imports, not actually used
+         self.elevenlabs_api_key = ""
+
+ api_config = APIConfig()
core/__init__.py ADDED
@@ -0,0 +1 @@
+ # Empty __init__.py file
core/models.py ADDED
@@ -0,0 +1,21 @@
+ from typing import List, Dict, Any, Optional
+ from pydantic import BaseModel, Field
+
+ class DebugResult(BaseModel):
+     root_cause: str
+     solutions: List[Dict[str, Any]]
+     fix_instructions: str
+     confidence_score: float
+     agent_metrics: Dict[str, Any]
+     execution_time: float
+
+ class RankedSolution(BaseModel):
+     rank: int
+     title: str
+     description: str
+     steps: List[str]
+     code_changes: List[dict] = Field(default_factory=list)
+     confidence: float
+     sources: List[str] = Field(default_factory=list)
+     why_ranked_here: str
+     trade_offs: List[str] = Field(default_factory=list)
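These models define what the frontend expects back from the backend; `ui/remote_client.py` builds a `DebugResult` directly from the endpoint's JSON. A minimal sketch of that round-trip is below; the payload values are invented purely for illustration.

```python
# Validate a backend response against the DebugResult schema (illustrative data).
from core.models import DebugResult

payload = {
    "root_cause": "json.loads received an empty string",
    "solutions": [{"title": "Guard against empty input", "probability": 0.8}],
    "fix_instructions": "Check the response body before decoding it.",
    "confidence_score": 0.8,
    "agent_metrics": {"claude": {"latency_s": 3.2}},
    "execution_time": 4.7,
}

result = DebugResult(**payload)  # raises pydantic.ValidationError if fields are missing
print(result.root_cause, result.confidence_score)
```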
requirements.txt ADDED
@@ -0,0 +1,6 @@
+ gradio>=5.0
+ requests>=2.32.3
+ pydantic>=2.0.0
+ plotly>=5.0.0
+ loguru>=0.7.0
+ Pillow>=10.0.0
ui/__init__.py ADDED
@@ -0,0 +1 @@
+ # Empty __init__.py file
ui/backend.py ADDED
@@ -0,0 +1,19 @@
+ from abc import ABC, abstractmethod
+ from typing import Dict, Any, Optional
+ from core.models import DebugResult
+
+ class DebugBackend(ABC):
+     @abstractmethod
+     async def analyze(self, context: Dict[str, Any]) -> DebugResult:
+         pass
+
+ class LocalBackend(DebugBackend):
+     def __init__(self):
+         from core.orchestrator import DebugOrchestrator
+         self.orchestrator = DebugOrchestrator()
+
+     async def analyze(self, context: Dict[str, Any]) -> DebugResult:
+         return await self.orchestrator.orchestrate_debug(
+             error_context=context,
+             stream_callback=None
+         )
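Note that `core.orchestrator` is not shipped in this Space, so `LocalBackend` only works inside the full repository. For UI work without a deployed Modal backend, a small fake backend that satisfies the same interface can be useful. The sketch below uses canned, made-up values and is not part of the deployment.

```python
# mock_backend.py - a fake DebugBackend for exercising the Gradio UI offline (sketch).
from typing import Dict, Any

from core.models import DebugResult
from ui.backend import DebugBackend

class MockBackend(DebugBackend):
    async def analyze(self, context: Dict[str, Any]) -> DebugResult:
        # Return a canned result so the UI tabs can be exercised without Modal.
        return DebugResult(
            root_cause="Example root cause (mock)",
            solutions=[{"title": "Mock solution",
                        "description": "Stubbed answer",
                        "probability": 0.9}],
            fix_instructions="No real fix; this is a mock backend.",
            confidence_score=0.9,
            agent_metrics={"mock_agent": {"calls": 1}},
            execution_time=0.01,
        )

# Usage:
#   from ui.gradio_interface import create_interface
#   create_interface(MockBackend()).launch()
```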
ui/gradio_interface.py ADDED
@@ -0,0 +1,342 @@
+ import gradio as gr
+ import asyncio
+ from typing import List, Dict, Any, Optional
+ from loguru import logger
+ import json
+ import os
+
+ # Import core components
+ from ui.backend import DebugBackend, LocalBackend
+ from core.models import RankedSolution
+ from visualization.blaxel_generator import ErrorFlowVisualizer
+ from voice.elevenlabs_tts import VoiceExplainer
+ from config.api_keys import api_config
+
+ class DebugGenieUI:
+     def __init__(self, backend: DebugBackend):
+         self.backend = backend
+         self.visualizer = ErrorFlowVisualizer()
+
+         # Initialize voice explainer if API key is available
+         try:
+             self.voice_explainer = VoiceExplainer(api_key=api_config.elevenlabs_api_key)
+         except Exception:
+             logger.warning("ElevenLabs API key not found - voice features disabled")
+             self.voice_explainer = None
+
+     async def handle_analyze(
+         self,
+         error_text: str,
+         screenshot,
+         codebase_files,
+         progress=gr.Progress()
+     ):
+         """Main analysis handler with progressive updates."""
+         try:
+             # Initialize outputs
+             chat_history = []
+             solutions_html = ""
+             viz_html = ""
+             voice_audio = None
+             analysis_json = {}
+
+             # Validate inputs (chatbot uses type="messages", so return role/content dicts)
+             if not error_text and screenshot is None:
+                 return (
+                     [{"role": "assistant", "content": "❌ Please provide either an error message or a screenshot."}],
+                     "<div>No analysis performed.</div>",
+                     "<div>No visualization available.</div>",
+                     None,
+                     {},
+                     "Status: ❌ Missing input"
+                 )
+
+             progress(0.1, desc="Starting analysis...")
+             chat_history.append({"role": "user", "content": f"Analyze this error:\n```\n{error_text[:200]}...\n```"})
+
+             # Build context
+             context = {
+                 'error_text': error_text,
+                 'image': screenshot,
+                 'code_context': ""
+             }
+
+             # Add screenshot context if provided
+             if screenshot is not None:
+                 context['type'] = 'ide'  # Could be auto-detected
+
+             progress(0.2, desc="Running multi-agent analysis...")
+
+             # Run backend analysis
+             result = await self.backend.analyze(context)
+
+             progress(0.7, desc="Generating visualizations...")
+
+             # Build chat response
+             response_text = f"""
+ ## 🎯 Root Cause
+ {result.root_cause}
+
+ ## ✅ Recommended Solutions
+ """
+             for idx, sol in enumerate(result.solutions[:3], 1):
+                 response_text += f"\n### {idx}. {sol.get('title', 'Solution')}\n"
+                 response_text += f"{sol.get('description', '')}\n"
+                 response_text += f"**Confidence:** {sol.get('probability', 0):.0%}\n"
+
+             chat_history.append({"role": "assistant", "content": response_text})
+
+             # Generate solutions accordion HTML
+             solutions_html = self._generate_solutions_html(result.solutions)
+
+             # Generate 3D visualization if we have a stack trace
+             # For the demo, create a mock trace
+             mock_trace = self.visualizer.generate_mock_trace()
+             viz_html = self.visualizer.generate_flow(mock_trace)
+
+             # Build analysis JSON
+             analysis_json = {
+                 "execution_time": f"{result.execution_time:.2f}s",
+                 "confidence": result.confidence_score,
+                 "agents_used": list(result.agent_metrics.keys()),
+                 "metrics": result.agent_metrics
+             }
+
+             # Generate voice explanation for the top solution
+             progress(0.9, desc="Generating voice explanation...")
+             if self.voice_explainer and result.solutions:
+                 try:
+                     # Convert first solution to RankedSolution format
+                     top_solution = result.solutions[0]
+                     ranked_sol = RankedSolution(
+                         rank=1,
+                         title=top_solution.get('title', 'Solution'),
+                         description=top_solution.get('description', ''),
+                         steps=[],  # Would parse from fix_instructions
+                         confidence=top_solution.get('probability', 0.5),
+                         sources=[],
+                         why_ranked_here=f"Top ranked solution with {top_solution.get('probability', 0)*100:.0f}% confidence",
+                         trade_offs=[]
+                     )
+
+                     audio_bytes = self.voice_explainer.generate_explanation(
+                         ranked_sol,
+                         mode="walkthrough"
+                     )
+
+                     if audio_bytes:
+                         # Save to temp file for Gradio
+                         voice_path = self.voice_explainer.save_audio(
+                             audio_bytes,
+                             f"explanation_{hash(error_text[:100])}.mp3"
+                         )
+                         voice_audio = voice_path
+                 except Exception as e:
+                     logger.warning(f"Voice generation failed: {e}")
+                     voice_audio = None
+
+             progress(1.0, desc="Complete!")
+
+             return (
+                 chat_history,
+                 solutions_html,
+                 viz_html,
+                 voice_audio,
+                 analysis_json,
+                 f"Status: ✅ Analysis complete in {result.execution_time:.2f}s"
+             )
+
+         except Exception as e:
+             logger.error(f"Analysis failed: {e}")
+             return (
+                 [{"role": "assistant", "content": f"❌ Analysis failed: {str(e)}"}],
+                 f"<div class='error'>Error: {str(e)}</div>",
+                 "<div>Visualization unavailable</div>",
+                 None,
+                 {"error": str(e)},
+                 f"Status: ❌ Failed - {str(e)}"
+             )
+
+     def _generate_solutions_html(self, solutions: List[Dict]) -> str:
+         """Generate HTML for the solutions accordion."""
+         if not solutions:
+             return "<div>No solutions found.</div>"
+
+         html = "<div style='font-family: sans-serif;'>"
+
+         for idx, sol in enumerate(solutions, 1):
+             title = sol.get('title', f'Solution {idx}')
+             desc = sol.get('description', 'No description')
+             prob = sol.get('probability', 0.5)
+
+             # Color code by probability
+             color = "green" if prob > 0.7 else "orange" if prob > 0.4 else "red"
+
+             html += f"""
+             <details style='border: 2px solid {color}; border-radius: 8px; padding: 16px; margin: 12px 0;'>
+                 <summary style='font-size: 18px; font-weight: bold; cursor: pointer;'>
+                     {idx}. {title}
+                     <span style='color: {color}; float: right;'>
+                         {prob:.0%} confidence
+                     </span>
+                 </summary>
+                 <div style='margin-top: 12px; padding: 12px; background: #f5f5f5; border-radius: 4px;'>
+                     <p>{desc}</p>
+                 </div>
+             </details>
+             """
+
+         html += "</div>"
+         return html
+
+ def create_interface(backend: DebugBackend):
+     """Create the main Gradio interface."""
+     ui = DebugGenieUI(backend)
+
+     with gr.Blocks(
+         title="DebugGenie 🧞",
+         theme=gr.themes.Soft(
+             primary_hue="blue",
+             secondary_hue="purple"
+         ),
+         css="""
+         .gradio-container {
+             font-family: 'Inter', sans-serif;
+         }
+         .error {
+             color: red;
+             padding: 16px;
+             background: #fee;
+             border-radius: 8px;
+         }
+         """
+     ) as demo:
+
+         gr.Markdown(
+             """
+             # 🧞 DebugGenie - AI Debugging Assistant
+             ### Multi-Agent AI System for Intelligent Error Analysis
+
+             Powered by Claude, Gemini, and GPT-4 working together to solve your bugs.
+             """
+         )
+
+         with gr.Row():
+             with gr.Column(scale=1):
+                 gr.Markdown("## 📝 Input")
+
+                 error_input = gr.Code(
+                     label="Paste Error Message / Stack Trace",
+                     language="python",
+                     lines=10
+                 )
+
+                 screenshot_input = gr.Image(
+                     label="Upload Screenshot (Optional)",
+                     type="pil",
+                     sources=["upload", "clipboard"]
+                 )
+
+                 codebase_files = gr.File(
+                     label="Upload Codebase Files (Optional)",
+                     file_count="multiple"
+                 )
+
+                 analyze_btn = gr.Button(
+                     "🔍 Analyze Error",
+                     variant="primary",
+                     size="lg"
+                 )
+
+                 gr.Markdown(
+                     """
+                     ---
+                     **Tips:**
+                     - Paste complete error traces for best results
+                     - Screenshots help with IDE or browser errors
+                     - Upload related code files for deeper analysis
+                     """
+                 )
+
+             with gr.Column(scale=2):
+                 gr.Markdown("## 🎯 Results")
+
+                 status_text = gr.Markdown("**Status:** Ready to analyze")
+
+                 with gr.Tabs():
+                     with gr.Tab("💬 Chat"):
+                         chatbot = gr.Chatbot(
+                             height=500,
+                             type="messages",
+                             avatar_images=(
+                                 None,
+                                 "https://em-content.zobj.net/thumbs/120/apple/354/genie_1f9de.png"
+                             )
+                         )
+
+                     with gr.Tab("🎯 Solutions"):
+                         solutions_accordion = gr.HTML(
+                             value="<div>No solutions yet. Analyze an error to get started.</div>"
+                         )
+
+                     with gr.Tab("🎨 3D Error Flow"):
+                         viz_3d = gr.HTML(
+                             value="<div style='text-align: center; padding: 40px;'>Visualization will appear here after analysis.</div>"
+                         )
+
+                     with gr.Tab("📊 Analysis Details"):
+                         analysis_details = gr.JSON(
+                             label="Detailed Metrics"
+                         )
+
+                 # Voice explanation (collapsed by default)
+                 with gr.Accordion("🔊 Voice Explanation", open=False):
+                     voice_output = gr.Audio(
+                         label="AI-Generated Explanation",
+                         autoplay=False
+                     )
+
+         # Examples
+         gr.Examples(
+             examples=[
+                 [
+                     "Traceback (most recent call last):\n File \"app.py\", line 42, in process_data\n result = json.loads(data)\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)",
+                     None,
+                     None
+                 ],
+                 [
+                     "TypeError: 'NoneType' object is not subscriptable\n File \"main.py\", line 15, in get_user\n return users[user_id]['name']",
+                     None,
+                     None
+                 ]
+             ],
+             inputs=[error_input, screenshot_input, codebase_files],
+             label="📚 Example Errors"
+         )
+
+         # Event handlers
+         analyze_btn.click(
+             fn=ui.handle_analyze,
+             inputs=[error_input, screenshot_input, codebase_files],
+             outputs=[
+                 chatbot,
+                 solutions_accordion,
+                 viz_3d,
+                 voice_output,
+                 analysis_details,
+                 status_text
+             ]
+         )
+
+     return demo
+
+ if __name__ == "__main__":
+     # Default to local backend for direct execution
+     backend = LocalBackend()
+     demo = create_interface(backend)
+     demo.launch(
+         server_name="127.0.0.1",
+         server_port=7860,
+         share=False,
+         show_error=True
+     )
ui/remote_client.py ADDED
@@ -0,0 +1,54 @@
+ import requests
+ import json
+ from typing import Dict, Any, Optional
+ from core.models import DebugResult
+ from ui.backend import DebugBackend
+ from loguru import logger
+
+ class RemoteBackend(DebugBackend):
+     def __init__(self, api_url: str, token: Optional[str] = None):
+         self.api_url = api_url
+         self.token = token
+
+     async def analyze(self, context: Dict[str, Any]) -> DebugResult:
+         """
+         Call the remote Modal endpoint.
+         """
+         try:
+             # Prepare the payload.
+             # The Modal endpoint expects a dict with "error_text" and optional context;
+             # our context already has 'error_text', 'image', and 'code_context'
+             # (modal_app.py takes `error_info: dict`).
+             #
+             # Note: in gradio_interface.py, 'image' is a PIL image or None,
+             # so it must be base64-encoded before it can be sent as JSON.
+
+             payload = context.copy()
+             if payload.get('image'):
+                 # If it's a PIL image, convert to base64
+                 import base64
+                 from io import BytesIO
+
+                 img = payload['image']
+                 if not isinstance(img, str):
+                     buffered = BytesIO()
+                     img.save(buffered, format="PNG")
+                     img_str = base64.b64encode(buffered.getvalue()).decode()
+                     payload['image'] = img_str
+
+             headers = {"Content-Type": "application/json"}
+             if self.token:
+                 headers["Authorization"] = f"Bearer {self.token}"
+
+             logger.info(f"Sending request to {self.api_url}")
+             response = requests.post(self.api_url, json=payload, headers=headers, timeout=600)
+             response.raise_for_status()
+
+             data = response.json()
+             return DebugResult(**data)
+
+         except Exception as e:
+             logger.error(f"Remote analysis failed: {e}")
+             raise
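A quick way to exercise this client outside Gradio is to call it from a small async script. This is a sketch only: the URL is a placeholder for your deployed endpoint, and the context keys mirror what `gradio_interface.py` builds.

```python
# check_remote.py - call the RemoteBackend directly, bypassing the UI (sketch).
import asyncio

from ui.remote_client import RemoteBackend

async def main():
    backend = RemoteBackend(api_url="https://your-modal-url.modal.run")
    result = await backend.analyze({
        "error_text": "ValueError: invalid literal for int() with base 10: 'abc'",
        "image": None,
        "code_context": "",
    })
    print(result.root_cause)
    for sol in result.solutions:
        print("-", sol.get("title"), sol.get("probability"))

asyncio.run(main())
```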
visualization/__init__.py ADDED
@@ -0,0 +1 @@
+ # Empty __init__.py file
visualization/blaxel_generator.py ADDED
@@ -0,0 +1,135 @@
+ from typing import List, Dict, Any, Optional
+ import json
+ from pydantic import BaseModel
+ from loguru import logger
+
+ # Mocking the Blaxel SDK for now, as it's a hypothetical or external dependency not in requirements.
+ # In a real scenario we would import: from blaxel import Scene, Node, Edge, Animation
+ # Instead, we generate a Plotly 3D scatter plot as a fallback implementation of the concept.
+ import plotly.graph_objects as go
+
+ class StackFrame(BaseModel):
+     function: str
+     file: str
+     line: int
+     is_external: bool = False
+
+ class ErrorFlowVisualizer:
+     def __init__(self):
+         pass
+
+     def _get_node_color(self, idx: int, total: int, is_external: bool) -> str:
+         if is_external:
+             return "purple"
+         if idx == 0:
+             return "green"  # Entry
+         if idx == total - 1:
+             return "red"  # Error
+         return "blue"  # Normal
+
+     def generate_flow(self, stack_trace: List[Dict[str, Any]]) -> str:
+         """
+         Generate a 3D visualization of the error flow.
+         Returns an HTML string of the visualization.
+         """
+         try:
+             frames = [StackFrame(**f) for f in stack_trace]
+             if not frames:
+                 return "<div>No stack trace available for visualization.</div>"
+
+             # Limit nodes for performance
+             if len(frames) > 50:
+                 frames = frames[-50:]
+
+             # 3D coordinate layout
+             x_coords = []
+             y_coords = []
+             z_coords = []
+             colors = []
+             sizes = []
+             hover_texts = []
+
+             for idx, frame in enumerate(frames):
+                 # Simple linear layout along the X axis
+                 x_coords.append(idx * 2)
+                 y_coords.append(0)
+                 z_coords.append(0)
+
+                 color = self._get_node_color(idx, len(frames), frame.is_external)
+                 colors.append(color)
+
+                 # Pulse effect for the error node (simulated by a larger size)
+                 size = 20 if idx == len(frames) - 1 else 12
+                 sizes.append(size)
+
+                 hover_texts.append(
+                     f"<b>{frame.function}</b><br>"
+                     f"{frame.file}:{frame.line}<br>"
+                     f"{'External Library' if frame.is_external else 'User Code'}"
+                 )
+
+             # Create the nodes trace
+             node_trace = go.Scatter3d(
+                 x=x_coords, y=y_coords, z=z_coords,
+                 mode='markers+text',
+                 marker=dict(
+                     size=sizes,
+                     color=colors,
+                     opacity=0.8,
+                     line=dict(width=2, color='white')
+                 ),
+                 text=[f.function for f in frames],
+                 textposition="top center",
+                 hoverinfo='text',
+                 hovertext=hover_texts
+             )
+
+             # Create the edges trace
+             edge_x = []
+             edge_y = []
+             edge_z = []
+
+             for i in range(len(frames) - 1):
+                 edge_x.extend([x_coords[i], x_coords[i+1], None])
+                 edge_y.extend([y_coords[i], y_coords[i+1], None])
+                 edge_z.extend([z_coords[i], z_coords[i+1], None])
+
+             edge_trace = go.Scatter3d(
+                 x=edge_x, y=edge_y, z=edge_z,
+                 mode='lines',
+                 line=dict(color='#888', width=2),
+                 hoverinfo='none'
+             )
+
+             # Layout
+             layout = go.Layout(
+                 title="3D Error Flow Visualization",
+                 showlegend=False,
+                 scene=dict(
+                     xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
+                     yaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
+                     zaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
+                     bgcolor='rgba(0,0,0,0)'
+                 ),
+                 margin=dict(l=0, r=0, b=0, t=30),
+                 height=500
+             )
+
+             fig = go.Figure(data=[edge_trace, node_trace], layout=layout)
+
+             # Return an HTML div
+             return fig.to_html(full_html=False, include_plotlyjs='cdn')
+
+         except Exception as e:
+             logger.error(f"Visualization generation failed: {e}")
+             return f"<div>Error generating visualization: {str(e)}</div>"
+
+     def generate_mock_trace(self) -> List[Dict[str, Any]]:
+         """Generate a mock trace for testing."""
+         return [
+             {"function": "main", "file": "app.py", "line": 10, "is_external": False},
+             {"function": "process_request", "file": "core/handler.py", "line": 45, "is_external": False},
+             {"function": "validate_input", "file": "utils/validation.py", "line": 12, "is_external": False},
+             {"function": "json.loads", "file": "json/decoder.py", "line": 337, "is_external": True},
+             {"function": "decode", "file": "json/decoder.py", "line": 355, "is_external": True},
+         ]
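To preview the Plotly output without running the Space, the visualizer can be driven directly from its own mock trace. A minimal sketch follows; `preview.html` is just an arbitrary output file name.

```python
# preview_flow.py - render the mock error flow to a standalone HTML file (sketch).
from visualization.blaxel_generator import ErrorFlowVisualizer

viz = ErrorFlowVisualizer()
html_fragment = viz.generate_flow(viz.generate_mock_trace())

# Wrap the fragment so it can be opened directly in a browser.
with open("preview.html", "w", encoding="utf-8") as f:
    f.write(f"<html><body>{html_fragment}</body></html>")
print("Wrote preview.html")
```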
voice/__init__.py ADDED
@@ -0,0 +1 @@
+ # Empty __init__.py file
voice/elevenlabs_tts.py ADDED
@@ -0,0 +1,12 @@
+ # Stub for the voice module to avoid import errors in the HF Space.
+ # Voice generation is optional and won't be used in the remote backend setup.
+
+ class VoiceExplainer:
+     def __init__(self, api_key=None):
+         pass
+
+     def generate_explanation(self, solution, mode="walkthrough"):
+         return None
+
+     def save_audio(self, audio_bytes, filename):
+         return None