GRAST-SQL: Scaling Text2SQL via LLM-efficient Schema Filtering with Functional Dependency Graph Rerankers

The GRAST-SQL model was introduced in the paper Scaling Text2SQL via LLM-efficient Schema Filtering with Functional Dependency Graph Rerankers.

Authors: Thanh Dat Hoang, Thanh Tam Nguyen, Thanh Trung Huynh, Hongzhi Yin, Quoc Viet Hung Nguyen

GRAST-SQL is an open-source, LLM-efficient schema filtering framework designed to scale Text2SQL systems to large, real-world databases that often exceed LLM context limits. It compacts Text2SQL prompts through a multi-step approach: (i) ranking columns with a query-aware LLM encoder enriched with values and metadata, (ii) reranking interconnected columns via a lightweight graph transformer over functional dependencies, and (iii) selecting a connectivity-preserving sub-schema with a Steiner-tree heuristic.
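To make step (iii) concrete, the sketch below shows how a Steiner-tree heuristic over a functional-dependency graph keeps top-ranked columns joinable. This is a minimal illustration, not the authors' implementation: the graph, column names, and terminals are hypothetical, and it uses networkx's approximate steiner_tree.

# Minimal sketch of connectivity-preserving sub-schema selection.
# Illustrative only; the graph and column names are hypothetical.
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Nodes are columns; edges are functional-dependency / join links.
fd_graph = nx.Graph()
fd_graph.add_edges_from([
    ("users.id", "orders.user_id"),
    ("orders.user_id", "orders.id"),                      # same-table link
    ("orders.id", "order_items.order_id"),
    ("order_items.order_id", "order_items.product_id"),   # same-table link
    ("order_items.product_id", "products.id"),
])

# Suppose the ranker flagged these columns as relevant to the question.
terminals = ["users.id", "products.id"]

# The approximate Steiner tree connects the terminals with as few extra
# columns as possible, so the selected sub-schema remains joinable.
sub_schema = steiner_tree(fd_graph, terminals)
print(sorted(sub_schema.nodes()))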

This framework achieves near-perfect recall and higher precision than existing methods (CodeS, SchemaExP, Qwen rerankers, embedding retrievers), maintains sub-second median latency, scales to schemas with 23,000+ columns, and significantly reduces prompt tokens while often improving accuracy in end-to-end systems.

For more details on the project, including training and full evaluation scripts, visit the GitHub repository.

Sample Usage

GRAST-SQL models are often served via vLLM for efficient embedding generation, as suggested by the project's GitHub repository. The following example demonstrates how to set up a vLLM server and use the model to generate embeddings for text inputs. This is a crucial step for the ranking and filtering pipeline described in the paper.

First, ensure you have vLLM installed. You can typically install it via pip:

pip install vllm

Step 1: Start the vLLM Server

Start the vLLM server for the GRAST-SQL model in a separate terminal or background process. This command serves griffith-bigdata/GRAST-SQL-0.6B-BIRD-Reranker across two GPUs with embedding generation enabled (--task embedding).

CUDA_VISIBLE_DEVICES=0,1 vllm serve griffith-bigdata/GRAST-SQL-0.6B-BIRD-Reranker \
  --port 8000 \
  --max-model-len 8192 \
  --tensor-parallel-size 2 \
  --task embedding \
  --gpu-memory-utilization 0.8
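
With the server up, embeddings can be requested over vLLM's OpenAI-compatible /v1/embeddings endpoint. The sketch below uses the openai client; the placeholder API key is not validated by a default vLLM server.

# Query the running vLLM server via its OpenAI-compatible embeddings API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.embeddings.create(
    model="griffith-bigdata/GRAST-SQL-0.6B-BIRD-Reranker",
    input=["List all tables related to user activity."],
)
print(len(response.data[0].embedding))  # embedding dimensionality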

Step 2: Generate Embeddings using Python

Alternatively, you can skip the server and use vLLM's offline LLM API, which loads the model in-process and generates embeddings programmatically:

from vllm import LLM

# This snippet uses the offline LLM API, which loads the model in-process;
# the server from Step 1 is not required here.
# Replace with the actual path to your GRAST-SQL model checkpoint if different.
model_path = "griffith-bigdata/GRAST-SQL-0.6B-BIRD-Reranker"
llm = LLM(
    model=model_path,
    tensor_parallel_size=1,  # Adjust based on your GPU setup
    dtype="auto",
    max_model_len=8192,
    enforce_eager=True,
    trust_remote_code=True,
    gpu_memory_utilization=0.8,
    task="embedding",  # Essential for embedding models
)

# Example texts for which to generate embeddings
text_list = [
    "List all tables related to user activity.",
    "Find columns for product price and description.",
]

# Generate embeddings. `LLM.encode` returns one output per input text;
# the vector itself is in `.outputs.embedding` (a list of floats).
outputs = llm.encode(text_list)

for text, output in zip(text_list, outputs):
    embedding = output.outputs.embedding
    print(f"Text: '{text}'")
    print(f"Embedding dimension: {len(embedding)}")
    print(f"First 5 embedding dimensions: {embedding[:5]}\n")

# These embeddings can then be utilized by the GRAST-SQL framework
# for tasks like column ranking and schema filtering.
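
As a purely illustrative follow-on, an embedding retriever would score candidate schema descriptions against the question with cosine similarity; here we simply reuse the two demo outputs from above.

# Illustrative only: cosine similarity between the two demo embeddings,
# the way an embedding retriever would score columns against a question.
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vec = outputs[0].outputs.embedding      # treat the first text as the question
candidate_vec = outputs[1].outputs.embedding  # and the second as a candidate
print(f"Cosine similarity: {cosine(query_vec, candidate_vec):.4f}")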

Datasets

The GRAST-SQL framework was evaluated on standard Text2SQL benchmarks, including BIRD (as the checkpoint names indicate); see the paper and the GitHub repository for the full evaluation setup.

Models

Other GRAST-SQL models, such as griffith-bigdata/GRAST-SQL-0.6B-BIRD-Reranker, are available on the Hugging Face Hub; more can be found in the Hugging Face collection.

System Flow

Figure: GRAST-SQL main flow.

Citation

If you find this work useful for your research, please cite the paper:

@misc{hoang2025scalingtext2sqlllmefficientschema,
      title={Scaling Text2SQL via LLM-efficient Schema Filtering with Functional Dependency Graph Rerankers}, 
      author={Thanh Dat Hoang and Thanh Tam Nguyen and Thanh Trung Huynh and Hongzhi Yin and Quoc Viet Hung Nguyen},
      year={2025},
      eprint={2512.16083},
      archivePrefix={arXiv},
      primaryClass={cs.DB},
      url={https://arxiv.org/abs/2512.16083}, 
}