TREC 2025 Proceedings

gmn-rerank-500

Submission Details

Organization
DS@GT
Track
Tip-of-the-Tongue Search
Task
Retrieval Task
Date
2025-09-01

Run Description

Please describe in detail how this run was generated
This run combines three first-stage retrieval results: 1) 20 candidate entities generated by the Gemini 2.5 Flash LLM in answer to each ToT query; 2) the official PyTerrier BM25 results; 3) dense retrieval results from the BGE-M3 model. The 20 documents from 1), the top 500 from 2), and the top 500 from 3) are concatenated to form the first-stage candidate pool. An LLM (gemini-2.5-flash) then reranks these up to 1020 documents per query, using the listwise reranking implementation from the rank_llm library.
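The candidate-fusion step described above can be sketched as follows. This is a minimal illustration, not the actual run code; the function and variable names are hypothetical, and the real run delegates the subsequent listwise reranking to the rank_llm library with gemini-2.5-flash.

```python
# Hypothetical sketch of the first-stage candidate fusion described above.
# Names (fuse_candidates, llm_entities, etc.) are illustrative only.

def fuse_candidates(llm_entities, bm25_docids, dense_docids,
                    k_bm25=500, k_dense=500):
    """Concatenate the three candidate lists in order (LLM entities,
    then top-k BM25, then top-k dense), keeping the first occurrence
    of each document id."""
    fused = []
    seen = set()
    for docid in (list(llm_entities)
                  + list(bm25_docids[:k_bm25])
                  + list(dense_docids[:k_dense])):
        if docid not in seen:
            seen.add(docid)
            fused.append(docid)
    return fused

# With 20 LLM-generated entities and 500 documents from each of the two
# other retrievers, the fused pool holds at most 20 + 500 + 500 = 1020
# documents per query; this pool is what the listwise LLM reranker sees.
```

In the actual run, overlapping documents across the three sources would reduce the pool below 1020; the sketch assumes simple first-occurrence deduplication, since the submission does not specify how duplicates were handled.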
Specify datasets used in this run.
["This year's TREC TOT training data"]
Are you 100% confident that no data from https://github.com/microsoft/Tip-of-the-Tongue-Known-Item-Retrieval-Dataset-for-Movie-Identification or iRememberThisMovie.com (besides the training data provided as part of this year's track) was used for producing this run (including any data used for pretraining models that you are building on top of)?
no
Did you use any of the official baseline runs in any way to produce this run?
yes
If you did use any of the official baseline runs in any way to produce this run, please describe how below in sufficient detail (e.g., as reranking candidates or in ensemble with other approaches).
Used the official PyTerrier BM25 baseline run as reranking candidates.

Evaluation Files

Paper