Eric Lau (andywu-kby)
AI & ML interests: None yet
Recent Activity
posted an update 22 days ago
We’re excited to launch the new Prompt Gallery, a curated collection designed to help creators, developers, and AI artists get better results in text-to-image generation — faster and with less guesswork.
Demo: https://miragic.ai/prompt-search
Whether you're exploring ideas, studying prompt engineering, or building your own image workflows, this gallery gives you instant access to high-quality prompts paired with their real generated outputs.
reacted to jasoncorkill's post with 🚀 29 days ago
Do you remember https://thispersondoesnotexist.com/ ? It was one of the first cases where the future of generative media really hit us. Humans are incredibly good at recognizing and analyzing faces, so faces are a very good litmus test for any generative image model.
But none of the current benchmarks measures a model's ability to generate humans specifically. So we built our own: we measure each model's ability to generate a diverse set of human faces, and using over 20'000 human annotations we ranked all of the major models on how well they generate faces. Find the full ranking here:
https://app.rapidata.ai/mri/benchmarks/68af24ae74482280b62f7596
We have released the full underlying data publicly here on Hugging Face: https://huggingface.co/datasets/Rapidata/Face_Generation_Benchmark
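The post doesn't say how the 20'000 annotations are turned into a ranking. As a hedged illustration only (not Rapidata's actual method, and the tuple format is an assumption), here is a minimal sketch that ranks models by pairwise win rate, assuming each annotation is a (model_a, model_b, winner) comparison:

```python
from collections import defaultdict

def rank_by_win_rate(annotations):
    """Rank models by their fraction of wins in pairwise comparisons.

    annotations: iterable of (model_a, model_b, winner) tuples, where
    winner equals model_a or model_b (hypothetical annotation format).
    """
    wins = defaultdict(int)
    games = defaultdict(int)
    for a, b, winner in annotations:
        games[a] += 1
        games[b] += 1
        wins[winner] += 1
    # Sort models by win rate, best first.
    return sorted(games, key=lambda m: wins[m] / games[m], reverse=True)

# Toy annotations: X beats Y twice, Y beats Z once, Z beats X once.
votes = [("X", "Y", "X"), ("X", "Y", "X"), ("Y", "Z", "Y"), ("Z", "X", "Z")]
print(rank_by_win_rate(votes))  # → ['X', 'Z', 'Y']
```

In practice a preference model such as Bradley-Terry or Elo is often fitted instead of a raw win rate, since models are not compared against every opponent equally often.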
reacted to Locutusque's post with 🔥 29 days ago
🚀 AutoXLA - Accelerating Large Models on TPU
AutoXLA is an experimental library that automates the distribution, optimization, and quantization of large language models for TPUs using PyTorch/XLA. It extends the Hugging Face Transformers interface with TPU-aware features such as automatic sharding, custom attention kernels, and quantization-aware loading, making large-scale deployment and training both simpler and faster.
With quantization and Splash Attention kernels, AutoXLA achieves up to 4× speedups over standard Flash Attention implementations, significantly improving throughput for both inference and training workloads.
Whether you’re experimenting with distributed setups (FSDP, 2D, or 3D sharding) or optimizing memory via LanguageModelQuantizer, AutoXLA is built to make scaling LLMs on TPU seamless.
⚠️ Note: This is an experimental repository. Expect rough edges! Please report bugs or unexpected behavior through GitHub issues.
🔗 GitHub Repository: https://github.com/Locutusque/AutoXLA
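The post names quantization as one of AutoXLA's memory optimizations but doesn't show how LanguageModelQuantizer works internally, and its real API isn't reproduced here. As a generic sketch of the underlying idea only, here is symmetric int8 weight quantization in plain Python (function names are illustrative, not AutoXLA's):

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats into [-127, 127] via one scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
# Each dequantized weight differs from the original by at most scale / 2,
# while the stored values shrink from 32-bit floats to 8-bit integers.
assert all(abs(a - w) <= scale / 2 + 1e-9 for a, w in zip(approx, weights))
```

Quantization-aware loading applies this kind of transform at load time, so the full-precision weights never have to fit in device memory at once.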