Recent advances in Large Language Models (LLMs) have significantly improved table understanding tasks such as Table Question Answering (TableQA), yet challenges remain in ensuring reliability, scalability, and efficiency, especially in resource-constrained or privacy-sensitive environments. In this paper, we introduce MATA, a multi-agent TableQA framework that leverages multiple complementary reasoning paths and a set of tools built with small language models. MATA generates candidate answers through diverse reasoning styles for a given table and question, then refines or selects the optimal answer with the help of these tools. Furthermore, it incorporates an algorithm designed to minimize expensive LLM agent calls, enhancing overall efficiency. MATA maintains strong performance with small, open-source models and adapts easily across various LLM types. Extensive experiments on two benchmarks of varying difficulty with ten different LLMs demonstrate that MATA achieves state-of-the-art accuracy and highly efficient reasoning while avoiding excessive LLM inference. Our results highlight that careful orchestration of multiple reasoning pathways yields scalable and reliable TableQA.
Here you can find the experimental code and fine-tuned model checkpoints for MATA, developed for our research.
- MATA scheduler checkpoint: Download from Google Drive and place it at `scheduler/mobilebert_multilabel_45.pt`.
- MATA confidence checker checkpoint: Available at Hugging Face. The code loads this checkpoint automatically through `transformers`.
- Training datasets for the scheduler and confidence checker: Available at Google Drive.
1. Clone this repository using the web URL.
```bash
git clone https://github.com/AIDASLab/MATA.git
cd MATA
```
2. To use MATA, you need to install Ollama. Please run the following command in your local environment. Our code is designed for Linux systems.
```bash
curl -fsSL https://ollama.com/install.sh | sh
```
3. Place the scheduler checkpoint inside the `scheduler` folder.
4. Start the Ollama server.
```bash
ollama serve
```
5. Check whether the model you want to use is supported by Ollama on the official Ollama website, then pull the corresponding model with the command below. (The model name `phi4:14b` is just an example.)
```bash
ollama pull phi4:14b
```
The format matcher in `utils/FM_inference.py` uses `qwen2.5:0.5b-instruct-q8_0`, so please also pull it:
```bash
ollama pull qwen2.5:0.5b-instruct-q8_0
```
6. If you want to change the Ollama model, update the following locations consistently:
   - `MATA.py`: the main reasoning model in `ChatOllama(model=...)`.
   - `MATA.py`: the judge-agent model in the second `ChatOllama(model=...)`.
   - `utils/adjust_context.py`: the fallback model used inside `llm_adjusted_context`.
   - `utils/adjust_context.py`: the Hugging Face tokenizer name in `measure_and_adjust_context(model_name=...)`.
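To make it harder for the four locations above to drift apart, one option is to define the model names once and reference them from each file. The sketch below is only illustrative: the dictionary, keys, and helper are not part of the repository, and the model names are the demo defaults.

```python
# Sketch: keep the Ollama model names in one place and import them from
# MATA.py and utils/adjust_context.py instead of hard-coding each one.
# All names and keys below are illustrative, not repository constants.

MODELS = {
    "main": "phi4:14b",        # MATA.py: first ChatOllama(model=...)
    "judge": "phi4:14b",       # MATA.py: second ChatOllama(model=...)
    "fallback": "phi4:14b",    # utils/adjust_context.py: llm_adjusted_context
    "format_matcher": "qwen2.5:0.5b-instruct-q8_0",  # utils/FM_inference.py
}

def reasoning_models_consistent(models: dict) -> bool:
    """Check that the three reasoning-model slots from step 6 agree,
    since they should normally be updated together when swapping models."""
    return len({models[k] for k in ("main", "judge", "fallback")}) == 1

print(reasoning_models_consistent(MODELS))  # True with the defaults above
```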
7. Our code was developed in an Anaconda environment. Please run the code below to create a new virtual environment. This will make it easy to install the libraries required for MATA.
```bash
conda env create -f ./langchain.yml
```
8. Run the following command.
```bash
python MATA.py --config config.yaml
```
9. If you do not want to use the scheduler, or want to increase the number of self-refinement iterations, you can either modify the `config.yaml` file or pass the options on the command line as shown below.
```bash
python MATA.py --config config.yaml --Use_Scheduler False --N 5
```
Notes: This repository provides the demo code using `phi4:14b` as the main reasoning model and `qwen2.5:0.5b-instruct-q8_0` as the format-matcher model. If you use different Ollama models, update all model locations listed above consistently.
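If you prefer editing `config.yaml` over passing flags, the two options above would correspond to entries like the following. This is only an assumed sketch inferred from the CLI flag names; the exact keys in the shipped `config.yaml` may differ.

```yaml
# Assumed config.yaml fragment mirroring the CLI flags above
Use_Scheduler: false   # skip the scheduler
N: 5                   # number of self-refinement iterations
```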
Notes (Security Notice): MATA executes Python code generated by the local LLM for the PoT reasoning path. Please run this repository only in a trusted local environment or an isolated sandbox/container, especially when using untrusted tables, questions, prompts, or model outputs. The scheduler checkpoint should also be treated as trusted model weights because it is loaded with PyTorch.
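As a starting point for the isolation recommended above, generated code can at least be run in a separate process with a timeout rather than in the MATA process itself. This is a minimal sketch, not the repository's implementation, and a subprocess is not a full sandbox: a container or similar isolation is still advisable for untrusted inputs.

```python
import subprocess
import sys
import tempfile

def run_generated_code(code: str, timeout_s: float = 5.0) -> str:
    """Run model-generated Python in a separate interpreter process.

    Sketch only: `-I` (isolated mode) ignores environment variables and
    user site-packages, and the timeout bounds runtime, but this does not
    restrict filesystem or network access the way a real sandbox would.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        [sys.executable, "-I", path],
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout.strip()

# A harmless PoT-style snippet as a usage example
print(run_generated_code("print(2 + 3)"))  # -> 5
```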
The code and training datasets for MATA and the baselines used in the experiments can be found at the following link.
```bibtex
@misc{hyeon2026mata,
  title={MATA: Multi-Agent Framework for Reliable and Flexible Table Question Answering},
  author={Sieun Hyeon and Jusang Oh and Sunghwan Steve Cho and Jaeyoung Do},
  year={2026},
  eprint={2602.09642},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2602.09642},
}
```