Excited to onboard FeatherlessAI on Hugging Face as an Inference Provider - they bring a fleet of 6,700+ LLMs on-demand to the Hugging Face Hub 🤯

Starting today, you can access all of those LLMs (OpenAI compatible) on HF model pages and via OpenAI client libraries too! 💥

Go play with it today: https://huggingface.co/blog/inference-providers-featherless

P.S. They're also bringing on more GPUs to support all your concurrent requests!
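
A minimal sketch of what "OpenAI compatible" access could look like through the standard OpenAI Python client - the router base URL, the `:featherless-ai` provider suffix, and the model name below are assumptions, so check the linked blog post for the exact values:

```python
# Sketch only: calling a Featherless-served model via Hugging Face's
# OpenAI-compatible endpoint. Base URL, provider suffix, and model name
# are assumptions, not confirmed by this post.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/v1",  # assumed HF Inference Providers router
    api_key=os.environ["HF_TOKEN"],               # your Hugging Face access token
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct:featherless-ai",  # assumed ":provider" suffix
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```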