Switch text-to-image and automatic speech recognition (ASR) back to using the Hugging Face inference client; ZeroGPU cannot accommodate the time those tasks take b71a3ad LiKenun committed on Nov 9, 2025
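The commit above moves these two tasks off local (ZeroGPU) execution and onto the hosted Inference API. Below is a minimal sketch of what that looks like with `huggingface_hub`'s `InferenceClient`; the prompt, file names, and reliance on a default model are illustrative assumptions, not the Space's actual code.

```python
# Sketch: delegate text-to-image and ASR to the hosted Inference API instead of
# running the models locally on ZeroGPU. Prompt and file names are placeholders.
from huggingface_hub import InferenceClient

client = InferenceClient()  # uses a saved token or the HF_TOKEN env var if present

# Text-to-image returns a PIL.Image.Image
image = client.text_to_image("a watercolor painting of a lighthouse at dusk")
image.save("lighthouse.png")

# ASR accepts raw bytes, a local file path, or a URL to an audio file
result = client.automatic_speech_recognition("sample.flac")
print(result.text)
```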
Enable low CPU memory usage with `accelerate` during model loading, which is useful on Hugging Face Spaces and other memory-constrained environments 0b93b56 LiKenun committed on Nov 9, 2025
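The relevant switch here is the `low_cpu_mem_usage` flag on `from_pretrained`, which requires `accelerate` to be installed. A minimal sketch with a placeholder model name:

```python
# Sketch: load weights directly into the final model instead of first
# materializing a randomly initialized copy in RAM.
# Requires `pip install accelerate`. The model name is a placeholder.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "google/flan-t5-small"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME, low_cpu_mem_usage=True)
```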
Add translation-to-English sample with automatic source language detection 24f37c6 LiKenun committed on Nov 5, 2025
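One way to build such a sample, sketched under assumptions (a `langdetect`-based detector, the `facebook/nllb-200-distilled-600M` model, and a toy language-code mapping), not necessarily how the Space does it:

```python
# Sketch: detect the source language, then translate to English.
# Requires `pip install transformers langdetect sentencepiece`.
from langdetect import detect
from transformers import pipeline

# Toy mapping from langdetect's ISO 639-1 codes to NLLB's FLORES-200 codes.
LANG_TO_FLORES = {"fr": "fra_Latn", "de": "deu_Latn", "es": "spa_Latn"}

translator = pipeline("translation", model="facebook/nllb-200-distilled-600M")

def translate_to_english(text: str) -> str:
    src = LANG_TO_FLORES.get(detect(text), "eng_Latn")  # fall back to English
    return translator(text, src_lang=src, tgt_lang="eng_Latn")[0]["translation_text"]

print(translate_to_english("Le chat dort sur le canapé."))
```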
Update code to address “UserWarning: You have not specified a value for the `type` parameter” 0f3cd78 LiKenun committed on Nov 5, 2025
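That warning comes from Gradio components whose `type` default is changing (most commonly `gr.Chatbot`); passing `type` explicitly silences it. A minimal sketch, assuming the chatbot component is the one affected:

```python
# Sketch: pass `type` explicitly so Gradio does not warn about the default.
import gradio as gr

with gr.Blocks() as demo:
    # "messages" uses OpenAI-style {"role": ..., "content": ...} dicts instead of
    # the deprecated list-of-tuples format.
    chatbot = gr.Chatbot(type="messages")

if __name__ == "__main__":
    demo.launch()
```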
Move environment variable querying code out of the Gradio UI-construction functions all the way to the root of the application, `app.py` 55d79e2 LiKenun committed on Nov 5, 2025
Move environment variable querying code out of the inference functions 1c1b97a LiKenun committed on Nov 4, 2025
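The two commits above concentrate the `os.getenv` calls at import time in `app.py` and hand the values down as plain arguments. A minimal sketch with hypothetical variable and function names:

```python
# Sketch (app.py): read configuration once at module level, then pass it to the
# UI builder and inference code as ordinary parameters. Names are hypothetical.
import os

import gradio as gr

DEFAULT_MODEL = os.getenv("DEFAULT_MODEL", "distilgpt2")

def build_ui(default_model: str) -> gr.Blocks:
    with gr.Blocks() as demo:
        gr.Markdown(f"Demo is configured for `{default_model}`")
    return demo

if __name__ == "__main__":
    build_ui(DEFAULT_MODEL).launch()
```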
Reorganize structure for even less code clutter; `app.py` is greatly slimmed down 39d9406 LiKenun committed on Nov 3, 2025
AI-generated chat sample revision 1: support both seq2seq and causal LM models 1509884 LiKenun committed on Nov 3, 2025
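A minimal sketch of handling both architectures by inspecting `is_encoder_decoder` on the model config; the model name is a placeholder and this is not the Space's actual loader:

```python
# Sketch: pick the right Auto class for seq2seq vs. causal chat models.
from transformers import (AutoConfig, AutoModelForCausalLM,
                          AutoModelForSeq2SeqLM, AutoTokenizer)

def load_chat_model(model_name: str):
    config = AutoConfig.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    cls = AutoModelForSeq2SeqLM if config.is_encoder_decoder else AutoModelForCausalLM
    return cls.from_pretrained(model_name), tokenizer

model, tokenizer = load_chat_model("google/flan-t5-small")  # placeholder model
inputs = tokenizer("Hello! How are you today?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```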
Enable audio file retrieval by URL for the automatic speech recognition (ASR) sample bb6107f LiKenun committed on Nov 3, 2025
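A minimal sketch of the URL path: download the audio bytes, then transcribe them (here via the inference client; the URL is a placeholder). `InferenceClient.automatic_speech_recognition` also accepts a URL string directly.

```python
# Sketch: fetch an audio file by URL and transcribe it. The URL is a placeholder.
import requests
from huggingface_hub import InferenceClient

AUDIO_URL = "https://example.com/sample.flac"

audio_bytes = requests.get(AUDIO_URL, timeout=30).content
client = InferenceClient()
print(client.automatic_speech_recognition(audio_bytes).text)
```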
Switch the automatic speech recognition (ASR) implementation to use the inference client instead 0fea237 LiKenun committed on Nov 3, 2025