---
title: Translation Post-Editing Evaluator
emoji: 🌍
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.29.0
app_file: interface.py
pinned: false
---

# 📝 Translation Post-Editing Evaluator

This Space is a web-based tool that scores machine translation (MT) output against human post-edited references using the BLEU, chrF, and COMET metrics, with ChatGPT integration for post-editing. It is packaged as a Gradio interface and is deployable via Hugging Face Spaces.
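
For orientation, the sketch below shows roughly how such scores can be computed in Python with the `sacrebleu` and `unbabel-comet` packages. It is a minimal illustration, not the Space's actual code, and the COMET checkpoint name is an assumption:

```python
# Minimal scoring sketch. Assumes `pip install sacrebleu unbabel-comet`;
# the checkpoint name is illustrative, not necessarily what this Space uses.
from sacrebleu.metrics import BLEU, CHRF
from comet import download_model, load_from_checkpoint

hypotheses = ["The cat sat on the mat."]           # MT output
references = [["The cat is sitting on the mat."]]  # post-edited reference(s)

# BLEU and chrF only compare the hypothesis against the reference.
bleu = BLEU().corpus_score(hypotheses, references)
chrf = CHRF().corpus_score(hypotheses, references)
print(f"BLEU: {bleu.score:.2f}  chrF: {chrf.score:.2f}")

# COMET additionally uses the source sentence and a neural model.
model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
result = model.predict(
    [{"src": "Le chat est assis sur le tapis.",
      "mt": hypotheses[0],
      "ref": references[0][0]}],
    batch_size=8,
    gpus=0,
)
print(f"COMET: {result.system_score:.4f}")
```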

---

## 🚀 Features

- 📊 Evaluate MT output with:
  - **BLEU**
  - **chrF**
  - **COMET**
- 🤖 ChatGPT-assisted post-editing (requires an OpenAI API key; see the sketch after this list)
- 🖥️ Simple, interactive web UI via Gradio
- 🐳 Hugging Face Spaces–compatible Docker deployment
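
At its core, the ChatGPT integration is an API call that asks the model to post-edit an MT segment. Below is a minimal sketch using the `openai` client; the prompt and model name are assumptions, not the Space's actual implementation:

```python
# Minimal sketch of ChatGPT-assisted post-editing (hypothetical prompt/model).
# Requires the OPENAI_API_KEY environment variable to be set.
from openai import OpenAI

client = OpenAI()

def post_edit(source: str, mt_output: str) -> str:
    """Ask the model to fix errors in an MT segment without retranslating it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "You are a translation post-editor. Correct errors in "
                        "the machine translation while preserving its meaning. "
                        "Return only the edited translation."},
            {"role": "user",
             "content": f"Source: {source}\nMachine translation: {mt_output}"},
        ],
    )
    return response.choices[0].message.content.strip()
```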

---

## 🧪 Example Use

Paste or upload:

- **Source text**
- **Machine translation output**
- **Post-edited reference**

Then click **"Evaluate"** to see automatic quality scores.
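
The real entry point is `interface.py`; the sketch below only illustrates how a three-textbox Gradio interface like this can be wired up, with BLEU and chrF shown for brevity:

```python
# Illustrative sketch of the evaluation UI (not interface.py itself).
import gradio as gr
from sacrebleu.metrics import BLEU, CHRF

def evaluate(source: str, mt_output: str, reference: str) -> dict:
    """Score one MT segment against its post-edited reference."""
    # The source text is unused here; COMET would need it (see earlier sketch).
    bleu = BLEU().corpus_score([mt_output], [[reference]]).score
    chrf = CHRF().corpus_score([mt_output], [[reference]]).score
    return {"BLEU": round(bleu, 2), "chrF": round(chrf, 2)}

demo = gr.Interface(
    fn=evaluate,
    inputs=[
        gr.Textbox(label="Source text"),
        gr.Textbox(label="Machine translation output"),
        gr.Textbox(label="Post-edited reference"),
    ],
    outputs=gr.JSON(label="Scores"),
    title="Translation Post-Editing Evaluator",
)

if __name__ == "__main__":
    demo.launch()
```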

---

## 📦 Installation (for local development)

```bash
git clone https://github.com/yourusername/post_editing_evaluator.git
cd post_editing_evaluator
python -m venv venv
source venv/bin/activate  # or .\venv\Scripts\activate on Windows
pip install -r requirements.txt
python interface.py
```
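
The repository's `requirements.txt` is authoritative; a plausible minimal set for the features above, assuming the packages used in the sketches, would look like:

```text
gradio==5.29.0
sacrebleu
unbabel-comet
openai
```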