Updated app.py and requirements
- DEPLOYMENT_GUIDE.md +362 -0
- README.md +28 -7
- app.py +348 -0
- config.json +56 -0
- model.safetensors +3 -0
- preprocessor_config.json +22 -0
- requirements.txt +8 -6
- scaler.pt +3 -0
DEPLOYMENT_GUIDE.md
ADDED
@@ -0,0 +1,362 @@
# Hugging Face Spaces Deployment Guide

## 📋 Prerequisites

1. A Hugging Face account (create one at https://huggingface.co/join)
2. Your trained ConvNeXt model uploaded to Hugging Face Model Hub
3. Git installed on your system
4. Git LFS (Large File Storage) installed

---

## 🚀 Step-by-Step Deployment Instructions

### Step 1: Create a New Space on Hugging Face

1. Go to https://huggingface.co/spaces
2. Click **"Create new Space"**
3. Fill in the details:
   - **Space name**: `project-phoenix-cervical-classification` (or your preferred name)
   - **License**: MIT (or your choice)
   - **Select SDK**: Choose **Gradio**
   - **SDK version**: 4.0.0 or latest
   - **Hardware**: CPU (Free) or GPU (Paid - recommended for faster inference)
   - **Visibility**: Public or Private

4. Click **"Create Space"**

---

### Step 2: Clone Your Space Repository

Open a terminal and run:

```bash
# Navigate to your project directory
cd "C:\Meet\Projects\Project_8_Phoenix_Cervical Cancer Image Classification\Project-Phoenix\Phoenix\Project-Phoenix"

# Clone your Hugging Face Space
git clone https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
cd YOUR_SPACE_NAME
```

Replace `YOUR_USERNAME` with your Hugging Face username and `YOUR_SPACE_NAME` with your space name.

---

### Step 3: Set Up Git LFS (for Large Files)

```bash
# Install Git LFS if not already installed
# For Windows: Download from https://git-lfs.github.com/
# Or use: choco install git-lfs

# Initialize Git LFS in the repository
git lfs install
```

---

### Step 4: Copy Files to Space Repository

Copy the following files from your Project-Phoenix directory to the cloned space directory:

```powershell
# Copy the main app file
Copy-Item -Path "../app.py" -Destination "."

# Copy requirements
Copy-Item -Path "../requirements.txt" -Destination "."

# Copy README (rename to README.md)
Copy-Item -Path "../README_HF.md" -Destination "./README.md"
```

Or manually copy:
- `app.py`
- `requirements.txt`
- Rename `README_HF.md` to `README.md`

---

### Step 5: Update the Model ID in app.py (if needed)

Make sure your `app.py` points at the correct model. Open `app.py` and verify the `HF_MODEL_ID` assignment near the top of the file:

```python
HF_MODEL_ID = os.getenv("HF_MODEL_ID", "Meet2304/convnextv2-cervical-cell-classification")
```

Change `"Meet2304/convnextv2-cervical-cell-classification"` to your actual model ID if different.

---

### Step 6: Commit and Push to Hugging Face

```bash
# Add all files
git add .

# Commit the changes
git commit -m "Initial deployment of Project Phoenix Gradio app"

# Push to Hugging Face
git push
```

You may be prompted for credentials:
- **Username**: Your Hugging Face username
- **Password**: Use a Hugging Face **Access Token** (not your password)

To create an access token:
1. Go to https://huggingface.co/settings/tokens
2. Click "New token"
3. Name it (e.g., "Space Deploy")
4. Select "write" permission
5. Copy the token and use it as the password
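
If you would rather not paste the token on every push, one option is to log in once with the `huggingface_hub` Python library, which caches the token locally (and, depending on the version, can also store it in your git credential helper). A minimal sketch, assuming `huggingface_hub` is installed (it ships as a dependency of `transformers`):

```python
from huggingface_hub import login

# Prompts for the access token and caches it for later Hub and git operations.
login()
```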

---

### Step 7: Monitor Deployment

1. Go to your Space URL: `https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME`
2. You'll see a "Building" status
3. Check the **Logs** tab to monitor the build process
4. Common stages:
   - Installing dependencies from requirements.txt
   - Loading the model from Hugging Face Hub
   - Starting the Gradio server

The build typically takes 3-10 minutes depending on hardware.
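
The build state can also be checked programmatically. A small sketch using `huggingface_hub` (this assumes the `get_space_runtime` helper is available in your installed version; replace the repo ID with your own Space):

```python
from huggingface_hub import HfApi

api = HfApi()
runtime = api.get_space_runtime("YOUR_USERNAME/YOUR_SPACE_NAME")
# Typical stages include BUILDING, RUNNING, and RUNTIME_ERROR.
print(runtime.stage)
```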

---

### Step 8: Test Your Deployed App

Once the status changes to "Running":

1. Your app will be live at: `https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME`
2. Test both tabs:
   - **Basic Prediction**: Upload a cell image and click "Classify"
   - **Prediction + Explainability**: Upload an image and see GRAD-CAM visualization
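
Beyond clicking through the UI, a quick programmatic smoke test can be run with the Python `gradio_client` package. This is a sketch: the endpoint names mirror the function names in `app.py` (`/predict_basic`, `/predict_with_explainability`), and `client.view_api()` lists the exact names if they differ; depending on your `gradio_client` version the image argument may need to be wrapped with `gradio_client.handle_file()`:

```python
from gradio_client import Client

# Connect to the deployed Space (replace with your own Space URL or "username/space-name").
client = Client("https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space")

# Call the basic prediction endpoint with a local test image.
result = client.predict("path/to/cell_image.jpg", api_name="/predict_basic")
print(result)
```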

---

## 🔧 Troubleshooting

### Build Fails Due to Dependencies

If you see errors during installation:

1. Check the **Logs** tab for specific errors
2. Common issues:
   - **PyTorch version**: May need to specify CPU version for free tier
   - **OpenCV**: Sometimes requires additional system libraries

Update `requirements.txt` if needed:

```txt
torch>=2.0.0
torchvision>=0.15.0
transformers>=4.30.0
gradio>=4.0.0
opencv-python-headless>=4.8.0  # Use headless version for servers
numpy>=1.24.0
Pillow>=10.0.0
grad-cam>=1.4.8
```

### Model Not Loading

If you see "Model not found" errors:

1. Verify your model is public (or the Space has access if private)
2. Check the model ID is correct in `app.py`
3. Ensure your model was properly uploaded to Hugging Face Model Hub
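
If the model repository is private, the Space also needs a token with read access to it. A minimal sketch of the loading code, assuming an `HF_TOKEN` secret has been added in the Space settings (recent `transformers` releases accept `token=`; older ones use `use_auth_token=`):

```python
import os
from transformers import AutoImageProcessor, ConvNextV2ForImageClassification

HF_MODEL_ID = os.getenv("HF_MODEL_ID", "Meet2304/convnextv2-cervical-cell-classification")
HF_TOKEN = os.getenv("HF_TOKEN")  # assumed to be configured as a Space secret

processor = AutoImageProcessor.from_pretrained(HF_MODEL_ID, token=HF_TOKEN)
model = ConvNextV2ForImageClassification.from_pretrained(HF_MODEL_ID, token=HF_TOKEN)
```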

### Out of Memory on CPU

If the free CPU tier runs out of memory:

1. Upgrade to a GPU Space (paid)
2. Or optimize the model:
   - Use model quantization
   - Reduce batch size (already 1 in this app)
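
For the quantization route, a rough sketch using PyTorch dynamic quantization is shown below. This only quantizes `nn.Linear` layers, so the savings on a convolution-heavy ConvNeXt may be modest, and accuracy should be re-validated afterwards:

```python
import torch
import torch.nn as nn

# Dynamically quantize linear layers to int8 for CPU inference (a sketch, not a tuned recipe).
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
quantized_model.eval()
```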

---

## 🌐 Connecting Your Next.js Frontend to the Deployed Space

### Option 1: Use Gradio Client API (Recommended)

The easiest way is to use Gradio's client API. Your Space provides an API endpoint automatically.

1. **Get your Space API URL**: `https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space`

2. **Install Gradio Client in your Next.js project**:

```bash
cd "C:\Meet\Projects\Project_8_Phoenix_Cervical Cancer Image Classification\Project-Phoenix\Phoenix\phoenix-app"
npm install @gradio/client
```

3. **Create an API service file** (`src/lib/inference-api.ts`):

```typescript
import { client } from "@gradio/client";

const SPACE_URL = "https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space";

export async function predictBasic(imageFile: File) {
  try {
    const app = await client(SPACE_URL);

    const result = await app.predict("/predict_basic", [imageFile]);

    return result.data;
  } catch (error) {
    console.error("Prediction error:", error);
    throw error;
  }
}

export async function predictWithExplainability(imageFile: File) {
  try {
    const app = await client(SPACE_URL);

    const result = await app.predict("/predict_with_explainability", [imageFile]);

    return result.data;
  } catch (error) {
    console.error("Prediction with explainability error:", error);
    throw error;
  }
}
```

4. **Update your inference page** to use the API:

```typescript
// In your handleAnalyze function
const handleAnalyze = async () => {
  setIsAnalyzing(true);

  try {
    if (selectedSource === 'upload' && fileState.files.length > 0) {
      const file = fileState.files[0].file as File;
      const result = await predictBasic(file);

      setAnalysisResult({
        predicted_class: result.label,
        confidence: result.confidences[0].confidence,
        top_predictions: result.confidences.map((c: any) => ({
          class: c.label,
          probability: c.confidence
        }))
      });
    } else if (selectedSource === 'sample' && selectedSample) {
      // For sample images, fetch the image first then predict
      const response = await fetch(currentImage!);
      const blob = await response.blob();
      const file = new File([blob], 'sample.jpg', { type: 'image/jpeg' });

      const result = await predictBasic(file);

      setAnalysisResult({
        predicted_class: result.label,
        confidence: result.confidences[0].confidence,
        top_predictions: result.confidences.map((c: any) => ({
          class: c.label,
          probability: c.confidence
        }))
      });
    }
  } catch (error) {
    console.error("Analysis error:", error);
    // Handle error appropriately
  } finally {
    setIsAnalyzing(false);
  }
};
```

### Option 2: Use Direct API Endpoint

Alternatively, you can use the automatic API endpoint that Gradio creates:

```typescript
const SPACE_API = "https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space/api/predict";

async function predictWithFetch(imageFile: File) {
  const formData = new FormData();
  formData.append('data', imageFile);

  const response = await fetch(SPACE_API, {
    method: 'POST',
    body: formData,
  });

  return await response.json();
}
```

---

## 📊 Monitoring and Analytics

1. **View Usage Statistics**: Go to your Space settings to see usage metrics
2. **Check Logs**: Monitor real-time logs in the Space interface
3. **Set up Alerts**: Configure notifications for errors or downtime

---

## 🔐 Security Considerations

1. **API Keys**: If you need authentication, use Hugging Face's built-in authentication
2. **Rate Limiting**: Consider implementing rate limiting for public spaces
3. **Model Access**: Ensure your model repository has appropriate access controls
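
For a Space that should not be open to everyone, Gradio's built-in `auth` argument on `launch()` is one lightweight gate. A sketch, with `APP_USER` and `APP_PASS` as hypothetical secret names you would configure in the Space settings:

```python
import os

demo.launch(
    server_name="0.0.0.0",
    server_port=7860,
    # Simple username/password prompt before the UI loads.
    auth=(os.getenv("APP_USER", "admin"), os.getenv("APP_PASS", "change-me")),
)
```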

---

## 💰 Cost Considerations

- **CPU (Free)**: Limited resources, slower inference
- **CPU Basic ($5/month)**: Better performance
- **GPU T4 Small ($0.60/hour)**: Recommended for production
- **GPU A10G Large ($3.15/hour)**: High-performance inference

---

## 🎯 Next Steps

1. **Deploy the Space** following steps 1-7
2. **Test the interface** directly on Hugging Face
3. **Integrate with your Next.js frontend** using the API
4. **Monitor performance** and upgrade hardware if needed
5. **Collect user feedback** and iterate

---

## 📝 Additional Resources

- [Gradio Documentation](https://gradio.app/docs/)
- [Hugging Face Spaces Guide](https://huggingface.co/docs/hub/spaces)
- [Gradio Client Library](https://gradio.app/guides/getting-started-with-the-js-client/)

---

## ✅ Quick Checklist

- [ ] Created Hugging Face Space
- [ ] Cloned Space repository
- [ ] Copied app.py, requirements.txt, README.md
- [ ] Updated model ID in app.py
- [ ] Committed and pushed to Hugging Face
- [ ] Verified deployment in Logs
- [ ] Tested both prediction modes
- [ ] Integrated API with Next.js frontend
- [ ] Tested end-to-end workflow

---

**Need Help?** Check the Logs tab in your Space or refer to Hugging Face documentation.
README.md
CHANGED
@@ -1,14 +1,35 @@
 ---
-title: Project Phoenix
-emoji:
-colorFrom:
-colorTo:
+title: Project Phoenix - Cervical Cancer Cell Classification
+emoji: 🔬
+colorFrom: purple
+colorTo: blue
 sdk: gradio
-sdk_version:
+sdk_version: 4.0.0
 app_file: app.py
 pinned: false
 license: mit
-short_description: Explainable cervical cancer classification
 ---
 
-
+# Project Phoenix - Cervical Cancer Cell Classification
+
+This is a Gradio application for automated classification of cervical cancer cells using a fine-tuned ConvNeXt V2 model.
+
+## Model
+
+The model classifies cervical cells into 5 categories:
+- Dyskeratotic
+- Koilocytotic
+- Metaplastic
+- Parabasal
+- Superficial-Intermediate
+
+## Features
+
+- Basic prediction mode for quick classification
+- Explainability mode with GRAD-CAM visualization
+- High accuracy ConvNeXt V2 architecture
+- Real-time inference
+
+## Usage
+
+Upload a cervical cell image and get instant classification results with confidence scores.
app.py
CHANGED
@@ -0,0 +1,348 @@
"""
Project Phoenix - Cervical Cancer Cell Classification
Gradio application for running inference on ConvNeXt V2 model from Hugging Face
with explainability features (GRAD-CAM).
Deployed on Hugging Face Spaces.
"""

import os
import numpy as np
import cv2
from typing import Dict, Tuple, Optional

# Deep Learning
import torch
import torch.nn as nn
import torch.nn.functional as F
from PIL import Image

# Transformers
from transformers import (
    ConvNextV2ForImageClassification,
    AutoImageProcessor
)

# GRAD-CAM
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.image import show_cam_on_image
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

# Gradio
import gradio as gr

# ========== CONFIGURATION ==========

# Hugging Face model ID
HF_MODEL_ID = os.getenv("HF_MODEL_ID", "Meet2304/convnextv2-cervical-cell-classification")

# Class names
CLASS_NAMES = [
    'im_Dyskeratotic',
    'im_Koilocytotic',
    'im_Metaplastic',
    'im_Parabasal',
    'im_Superficial-Intermediate'
]

# Display names (cleaner for UI)
DISPLAY_NAMES = [
    'Dyskeratotic',
    'Koilocytotic',
    'Metaplastic',
    'Parabasal',
    'Superficial-Intermediate'
]

# Device
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# ========== MODEL LOADING ==========

print("Loading model from Hugging Face...")
print(f"Model ID: {HF_MODEL_ID}")
print(f"Device: {DEVICE}")

# Load image processor
processor = AutoImageProcessor.from_pretrained(HF_MODEL_ID)
print("✓ Processor loaded")

# Load model
model = ConvNextV2ForImageClassification.from_pretrained(HF_MODEL_ID)
model = model.to(DEVICE)
model.eval()
print("✓ Model loaded and set to evaluation mode")

print("Model configuration:")
print(f"  - Number of classes: {model.config.num_labels}")
print(f"  - Image size: {model.config.image_size}")
print(f"  - Total parameters: {sum(p.numel() for p in model.parameters()):,}")

# ========== HELPER FUNCTIONS ==========

def preprocess_image(image: Image.Image) -> Tuple[torch.Tensor, np.ndarray]:
    """
    Preprocess image for model input.

    Returns:
        Tuple of (preprocessed_tensor, original_image_array)
    """
    # Store original for visualization
    original_image = np.array(image.convert('RGB'))

    # Preprocess using the model's processor
    inputs = processor(images=image, return_tensors="pt")
    pixel_values = inputs['pixel_values'].to(DEVICE)

    return pixel_values, original_image


class ConvNeXtGradCAMWrapper(nn.Module):
    """Wrapper for ConvNeXtV2ForImageClassification to make it compatible with GRAD-CAM."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        outputs = self.model(pixel_values=x)
        return outputs.logits


def get_target_layers(model):
    """Get the target layers for GRAD-CAM from ConvNeXt model."""
    return [model.convnextv2.encoder.stages[-1].layers[-1]]


def apply_gradcam(
    pixel_values: torch.Tensor,
    original_image: np.ndarray,
    target_class: Optional[int] = None
) -> Tuple[np.ndarray, int, float]:
    """
    Apply GRAD-CAM to visualize model attention.

    Args:
        pixel_values: Preprocessed image tensor
        original_image: Original image as numpy array
        target_class: Target class index (None for predicted class)

    Returns:
        Tuple of (visualization, predicted_class, confidence)
    """
    # Wrap the model
    wrapped_model = ConvNeXtGradCAMWrapper(model)

    # Get target layers
    target_layers = get_target_layers(model)

    # Initialize GRAD-CAM
    cam = GradCAM(model=wrapped_model, target_layers=target_layers)

    # Get prediction
    model.eval()
    with torch.no_grad():
        outputs = model(pixel_values)
        logits = outputs.logits
        predicted_class = logits.argmax(-1).item()
        probabilities = F.softmax(logits, dim=-1)[0]

    # Use predicted class if target not specified
    if target_class is None:
        target_class = predicted_class

    # Create target for GRAD-CAM
    targets = [ClassifierOutputTarget(target_class)]

    # Generate GRAD-CAM
    grayscale_cam = cam(input_tensor=pixel_values, targets=targets)
    grayscale_cam = grayscale_cam[0, :]

    # Resize original image to match CAM dimensions
    cam_h, cam_w = grayscale_cam.shape
    rgb_image_for_overlay = cv2.resize(original_image, (cam_w, cam_h)).astype(np.float32) / 255.0

    # Create visualization
    visualization = show_cam_on_image(
        rgb_image_for_overlay,
        grayscale_cam,
        use_rgb=True,
        colormap=cv2.COLORMAP_JET
    )

    return visualization, predicted_class, float(probabilities[predicted_class].item())


# ========== GRADIO INTERFACE FUNCTIONS ==========

def predict_basic(image):
    """
    Basic prediction without explainability.

    Args:
        image: PIL Image or numpy array

    Returns:
        Dictionary with class probabilities for Gradio Label component
    """
    if image is None:
        return None

    try:
        # Convert to PIL Image if needed
        if isinstance(image, np.ndarray):
            image = Image.fromarray(image)

        # Preprocess
        pixel_values, _ = preprocess_image(image)

        # Predict
        model.eval()
        with torch.no_grad():
            outputs = model(pixel_values)
            logits = outputs.logits
            probabilities = F.softmax(logits, dim=-1)[0]

        # Format for Gradio Label component
        return {DISPLAY_NAMES[i]: float(probabilities[i]) for i in range(len(DISPLAY_NAMES))}

    except Exception as e:
        print(f"Error in prediction: {e}")
        return None


def predict_with_explainability(image):
    """
    Prediction with GRAD-CAM explainability.

    Args:
        image: PIL Image or numpy array

    Returns:
        Tuple of (probabilities_dict, gradcam_image, info_text)
    """
    if image is None:
        return None, None, "Please upload an image."

    try:
        # Convert to PIL Image if needed
        if isinstance(image, np.ndarray):
            image = Image.fromarray(image)

        # Preprocess
        pixel_values, original_image = preprocess_image(image)

        # Predict
        model.eval()
        with torch.no_grad():
            outputs = model(pixel_values)
            logits = outputs.logits
            probabilities = F.softmax(logits, dim=-1)[0]
            predicted_class = logits.argmax(-1).item()

        # Apply GRAD-CAM
        visualization, pred_class, confidence = apply_gradcam(pixel_values, original_image)

        # Format probabilities for Gradio
        probs_dict = {DISPLAY_NAMES[i]: float(probabilities[i]) for i in range(len(DISPLAY_NAMES))}

        # Create info text
        info_text = f"**Predicted Class:** {DISPLAY_NAMES[predicted_class]}\n\n"
        info_text += f"**Confidence:** {confidence*100:.2f}%\n\n"
        info_text += "The heatmap shows regions the model focused on for classification."

        return probs_dict, visualization, info_text

    except Exception as e:
        print(f"Error in prediction with explainability: {e}")
        return None, None, f"Error: {str(e)}"


# ========== GRADIO INTERFACE ==========

# Custom CSS for better styling
custom_css = """
.gradio-container {
    font-family: 'Arial', sans-serif;
}
.header {
    text-align: center;
    margin-bottom: 2rem;
}
"""

# Create Gradio Blocks interface
with gr.Blocks(css=custom_css, title="Project Phoenix - Cervical Cancer Cell Classification") as demo:

    gr.Markdown("""
    # 🔬 Project Phoenix - Cervical Cancer Cell Classification

    ConvNeXt V2 model for automated classification of cervical cancer cells into 5 categories:
    - **Dyskeratotic**: Abnormal keratinization
    - **Koilocytotic**: HPV-infected cells
    - **Metaplastic**: Transitional cells
    - **Parabasal**: Immature cells
    - **Superficial-Intermediate**: Mature cells
    """)

    with gr.Tabs():
        # Tab 1: Basic Prediction
        with gr.TabItem("🎯 Basic Prediction"):
            gr.Markdown("Upload an image to classify the cervical cell type.")

            with gr.Row():
                with gr.Column():
                    input_image_basic = gr.Image(type="pil", label="Upload Cell Image")
                    predict_btn_basic = gr.Button("Classify", variant="primary", size="lg")

                with gr.Column():
                    output_label_basic = gr.Label(label="Classification Results", num_top_classes=5)

            predict_btn_basic.click(
                fn=predict_basic,
                inputs=input_image_basic,
                outputs=output_label_basic
            )

        # Tab 2: Prediction with Explainability
        with gr.TabItem("🔍 Prediction + Explainability (GRAD-CAM)"):
            gr.Markdown("Upload an image to classify and visualize model attention using GRAD-CAM.")

            with gr.Row():
                with gr.Column():
                    input_image_explain = gr.Image(type="pil", label="Upload Cell Image")
                    predict_btn_explain = gr.Button("Classify with Explainability", variant="primary", size="lg")

                with gr.Column():
                    output_label_explain = gr.Label(label="Classification Results", num_top_classes=5)
                    output_gradcam = gr.Image(label="GRAD-CAM Heatmap")
                    output_info = gr.Markdown(label="Analysis")

            predict_btn_explain.click(
                fn=predict_with_explainability,
                inputs=input_image_explain,
                outputs=[output_label_explain, output_gradcam, output_info]
            )

    # Footer
    gr.Markdown("""
    ---
    ### 📊 About the Model

    This model is a fine-tuned **ConvNeXt V2** neural network trained on the SIPaKMeD dataset
    for cervical cancer cell classification. The model achieves high accuracy in distinguishing
    between different cell types, which is crucial for early cancer detection and diagnosis.

    **GRAD-CAM** (Gradient-weighted Class Activation Mapping) provides visual explanations by
    highlighting the regions in the image that were most important for the model's decision.

    🔗 **Model**: [Meet2304/convnextv2-cervical-cell-classification](https://huggingface.co/Meet2304/convnextv2-cervical-cell-classification)
    """)

# ========== LAUNCH ==========

if __name__ == "__main__":
    demo.launch(
        server_name="0.0.0.0",
        server_port=7860,
        share=False
    )
config.json
ADDED
@@ -0,0 +1,56 @@
{
  "architectures": [
    "ConvNextV2ForImageClassification"
  ],
  "depths": [
    3,
    3,
    9,
    3
  ],
  "drop_path_rate": 0.0,
  "dtype": "float32",
  "hidden_act": "gelu",
  "hidden_sizes": [
    96,
    192,
    384,
    768
  ],
  "id2label": {
    "0": "im_Dyskeratotic",
    "1": "im_Koilocytotic",
    "2": "im_Metaplastic",
    "3": "im_Parabasal",
    "4": "im_Superficial-Intermediate"
  },
  "image_size": 224,
  "initializer_range": 0.02,
  "label2id": {
    "im_Dyskeratotic": 0,
    "im_Koilocytotic": 1,
    "im_Metaplastic": 2,
    "im_Parabasal": 3,
    "im_Superficial-Intermediate": 4
  },
  "layer_norm_eps": 1e-12,
  "model_type": "convnextv2",
  "num_channels": 3,
  "num_stages": 4,
  "out_features": [
    "stage4"
  ],
  "out_indices": [
    4
  ],
  "patch_size": 4,
  "problem_type": "single_label_classification",
  "stage_names": [
    "stem",
    "stage1",
    "stage2",
    "stage3",
    "stage4"
  ],
  "transformers_version": "4.56.1"
}
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a363f59155c438aa2a0a7dfe9a1c30395a5f1e4ed5e3764eb524bbf13780cc09
size 111505052
preprocessor_config.json
ADDED
@@ -0,0 +1,22 @@
{
  "crop_pct": 0.875,
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [
    0.485,
    0.456,
    0.406
  ],
  "image_processor_type": "ConvNextImageProcessor",
  "image_std": [
    0.229,
    0.224,
    0.225
  ],
  "resample": 3,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "shortest_edge": 384
  }
}
requirements.txt
CHANGED
@@ -1,6 +1,8 @@
-torch
-
-
-
-
-
+torch>=2.0.0
+torchvision>=0.15.0
+transformers>=4.30.0
+gradio>=4.0.0
+opencv-python>=4.8.0
+numpy>=1.24.0
+Pillow>=10.0.0
+grad-cam>=1.4.8
scaler.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ef706dea6fa21cce93467d3fe62ef52728bb0ef994a68ff1f88ad193daffd93
size 1383