TEXT RETRIEVAL CONFERENCE (TREC) 2026
February 2026 - November 2026
Conducted by:
National Institute of Standards and Technology (NIST)
The Text Retrieval Conference (TREC) workshop series encourages research in information retrieval and related applications by providing a large test collection, uniform scoring procedures, and a forum for organizations interested in comparing their results. Details about TREC can be found at the TREC web site, trec.nist.gov.
You are invited to participate in TREC 2026. TREC 2026 will consist of a set of tasks known as "tracks". Each track focuses on a particular subproblem or variant of the retrieval task as described below. Organizations may choose to participate in any or all of the tracks. Training and test materials are available from NIST for some tracks; other tracks will provide instructions for dataset download.
Dissemination of TREC work and results other than in the (publicly available) conference proceedings is welcomed, but the conditions of participation specifically preclude any advertising claims based on TREC results. All retrieval results submitted to NIST are published in the Proceedings and are archived on the TREC web site with the submitting organization identified.
TREC participants are added to the TREC Slack instance, which is the primary mode of communication in TREC. There is a general mailing list ([email protected]), but it is used for major announcements only. Some tracks have their own mailing lists, which you should follow if you are interested in those tracks.
Schedule
- As soon as possible -- submit your application to participate in TREC 2026. Go to ir.nist.gov/evalbase, create an account, and register your organization.
The organization structure lets everyone on your team share access as a group. You can participate solo, too; you just need to create your own organization. If you participated before, you can reuse your organization, but you do need to register. Organizations that register but do not submit to the conference are removed at the end of the cycle.
If you don't remember your account or password, you can ask for a password recovery. Likewise, if you have an organization but can't seem to connect to it, please contact Ian Soboroff ([email protected]). Please don't sign up for multiple accounts, and please don't register multiple organizations.
Submitting an application will add you to Slack and the active participants' mailing list. On March 15th, NIST will announce a new password for the "active participants" portion of the TREC web site. We accept applications to participate until late May, but applying earlier means you can be involved in track discussions. Processing applications requires some manual effort on our end. Once your application is processed (at most a few business days), the "Welcome to TREC" email message with details about TREC participation will be sent to the email address provided in the application.
- June--August -- Results submission deadline for most tracks. Specific deadlines for each track will be included in the track guidelines, which will be finalized in the spring.
- September 30 (estimated) -- Relevance judgments and individual evaluation scores due back to participants.
- Nov 16--20 -- TREC 2026 in-person conference at NIST in Gaithersburg, MD, USA with a remote attendance option
Track Descriptions
Below is a brief summary of the TREC 2026 tracks. Complete descriptions of tasks performed in previous years are included in the Overview papers in each of the TREC proceedings (in the Publications section of the web site).
The exact definition of the tasks to be performed in each track for TREC 2026 is still being formulated. Track discussion takes place on the track mailing list (or other communication medium). To join the discussion, follow the instructions for the track as detailed below.
TRECVID, TREC's sister evaluation of multimedia understanding, and TAC, TREC's sister evaluation of NLP, have been folded back into TREC. TREC 2026 has 7 tracks: AutoJudge, Change Detection, Million LLM, RAG, RAGTIME, User Simulation, and VQA.
AutoJudge, Change Detection, and User Simulation are new tracks.
AutoJudge
TREC Auto-Judge is a lightweight meta-track that builds on existing TREC tracks featuring retrieval and/or generation. It collects the unjudged system runs submitted to those tracks and invites participants to supply automatic relevance labels generated by candidate LLM judges. Once official manual assessments become available, the track ranks the Auto-Judge systems by their correlation with the ground truth and analyzes their failure modes. The track offers a unified testbed for LLM-as-a-Judge research, enables cross-track comparisons free from benchmark memorization, and delivers guidance for selecting the most reliable judge for each task scenario.
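To illustrate the evaluation idea only (this is not the official track scoring code), the Python sketch below compares a system leaderboard computed under automatic labels with one computed under manual assessments using Kendall's tau; the run names and nDCG values are hypothetical.

    from scipy.stats import kendalltau

    def leaderboard_correlation(auto_scores, manual_scores):
        """Kendall's tau between the system rankings induced by the two score sets."""
        systems = sorted(set(auto_scores) & set(manual_scores))
        tau, _ = kendalltau([auto_scores[s] for s in systems],
                            [manual_scores[s] for s in systems])
        return tau

    if __name__ == "__main__":
        # Toy numbers only: per-system nDCG under an LLM judge vs. manual assessors.
        auto = {"runA": 0.62, "runB": 0.55, "runC": 0.41}
        manual = {"runA": 0.58, "runB": 0.57, "runC": 0.39}
        print(f"tau = {leaderboard_correlation(auto, manual):.3f}")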
Anticipated timeline: "run" submissions in mid-September
Track coordinators:
Laura Dietz, University of New Hampshire
Naghmeh Farzi, University of New Hampshire
Eugene Yang, Johns Hopkins University
Oleg Zendel, RMIT University
Charles L. A. Clarke, University of Waterloo
Hossein A. Rahmani, University College London
Track website: https://trec-auto-judge.cs.unh.edu/
User Simulation
User simulation offers a scalable alternative to expensive user studies for evaluating interactive search. However, the community lacks standardized methods for validating simulators and trusting their results, which hinders their widespread adoption and slows progress. This track aims to establish a systematic framework for evaluating user simulators, understand the criteria for what makes a simulator "good enough," and create best practices for simulation-based evaluation.
Anticipated timeline: runs due in early September
Track coordinators:
Krisztian Balog, University of Stavanger
Nolwenn Bernard, Technische Hochschule Köln
Timo Breuer, Technische Hochschule Köln
Marcel Gohsen, Bauhaus-Universität Weimar
Christin Katharina Kreutz, TH Mittelhessen University of Applied Sciences
Andreas Kruff, Institute of Information Science at TH Köln
Philipp Schaer, Institute of Information Science at TH Köln
ChengXiang Zhai, University of Illinois at Urbana-Champaign
Paul Thomas, Microsoft
Track website: https://trec.usersim.ai/
Change Detection
This track models an expert user following a topic of interest over time. The interaction model follows an "inbox" or reading queue, with the goal of maximizing the importance and novelty of what enters the queue. Each topic is broken into "key questions" the user is following, and systems rank each day's documents according to which questions they would address. Systems can also propose new questions.
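As a rough illustration only, not the track protocol, the Python sketch below ranks one day's documents by their best match to any key question, discounted by redundancy with documents already in the queue; the relevance and similarity functions are simple term-overlap placeholders that a real system would replace with its own scoring.

    def relevance(doc, question):
        # Placeholder: fraction of the question's terms that appear in the document.
        q_terms = set(question.lower().split())
        d_terms = set(doc.lower().split())
        return len(q_terms & d_terms) / max(len(q_terms), 1)

    def similarity(doc, other):
        # Placeholder redundancy measure: Jaccard overlap of document terms.
        a, b = set(doc.lower().split()), set(other.lower().split())
        return len(a & b) / max(len(a | b), 1)

    def rank_day(docs, questions, queue, novelty_weight=0.5):
        """Rank one day's documents by best key-question match minus redundancy with the queue."""
        def score(doc):
            best_match = max(relevance(doc, q) for q in questions)
            redundancy = max((similarity(doc, seen) for seen in queue), default=0.0)
            return best_match - novelty_weight * redundancy
        return sorted(docs, key=score, reverse=True)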
Anticipated timeline: runs due end of July
Track coordinators:
Kristine Rogers
David Grossman
John Frank
Peter Gantz
Megan Niemczyk
Track website: TBD
Million LLM
Imagine that in the future LLM-powered generative tools abound, specialized for every kind of use. Given a user's query and a set of LLMs, rank the LLMs on the basis of their ability to answer the query correctly.
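For illustration, a minimal Python sketch of the task interface follows; the predict_quality scorer and the model catalog are hypothetical stand-ins for whatever routing or selection approach a participant builds.

    def predict_quality(query, llm_description):
        # Hypothetical scorer: term overlap between the query and the model's stated specialty.
        # A real system might train a router or probe each candidate model directly.
        q = set(query.lower().split())
        d = set(llm_description.lower().split())
        return len(q & d) / max(len(q), 1)

    def rank_llms(query, llms):
        """Return LLM identifiers ordered from most to least likely to answer correctly."""
        return sorted(llms, key=lambda name: predict_quality(query, llms[name]), reverse=True)

    # Toy catalog of model descriptions (hypothetical names and specialties).
    catalog = {"med-llm": "clinical medicine and health questions",
               "law-llm": "contract and case law",
               "code-llm": "python and systems programming"}
    print(rank_llms("health questions about medicine", catalog))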
Anticipated timeline: runs due end of August
Track coordinators:
Evangelos Kanoulas, University of Amsterdam
Jamie Callan, Carnegie Mellon University
Panagiotis Eustratiadis, University of Amsterdam
Mark Sanderson, RMIT
Track Web Page: https://trec-mllm.github.io/
Retrieval Augmented Generation (RAG)
The RAG track aims to enhance retrieval and generation effectiveness, with a focus on varied information needs in an evolving world. Data sources will include a large corpus and topics that capture long-form definition, list, and ambiguous information needs.
The track will involve three subtasks (a toy pipeline sketch follows the list):
- Retrieval Task: Rank passages for a given query
- Augmented Generation Task: Generate RAG answers given a baseline passage ranking
- RAG Task: Generate answers based on participant system retrieval (or no retrieval at all!)
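The Python sketch below shows how the first two subtasks fit together; it is a toy example under assumed interfaces, not the track baseline, using a small self-contained BM25 scorer and a placeholder generate() call standing in for a participant's LLM.

    import math
    from collections import Counter

    def bm25_scores(query, corpus, k1=1.5, b=0.75):
        """Score each passage id in `corpus` (id -> text) against `query` with a simple BM25."""
        docs = {pid: text.lower().split() for pid, text in corpus.items()}
        avgdl = sum(len(toks) for toks in docs.values()) / len(docs)
        df = Counter()
        for toks in docs.values():
            df.update(set(toks))
        n = len(docs)
        scores = {}
        for pid, toks in docs.items():
            tf = Counter(toks)
            score = 0.0
            for term in query.lower().split():
                if tf[term] == 0:
                    continue
                idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
                norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(toks) / avgdl))
                score += idf * norm
            scores[pid] = score
        return scores

    def generate(prompt):
        # Hypothetical stand-in for a participant's LLM; a real system calls its model here.
        return "[generated answer, citing passage ids from the prompt]"

    def rag_answer(query, corpus, k=3):
        """Retrieval subtask: rank passages; Augmented Generation subtask: answer from the top k."""
        ranked = sorted(bm25_scores(query, corpus).items(), key=lambda x: x[1], reverse=True)[:k]
        context = "\n".join(f"[{pid}] {corpus[pid]}" for pid, _ in ranked)
        return generate("Answer the question using only the passages below, citing passage ids.\n"
                        f"{context}\n\nQuestion: {query}")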
Anticipated timeline: runs due end of July.
Track coordinators:
Nour Jedidi, University of Waterloo
Shivani Upadhyay, University of Waterloo
Ronak Pradeep, University of Waterloo
Nandan Thakur, University of Waterloo
Daniel Campos, Snowflake
Nick Craswell, Microsoft
Jimmy Lin, University of Waterloo
Track Web Page: https://trec-rag.github.io/
RAGTIME Track
TREC RAGTIME is a TREC shared task to study and benchmark report generation from news (both English and multi-lingual). Key features of the track are its focus on multi-faceted reports (going beyond factoid QA), and a citation-based evaluation (providing supporting evidence of claims made in the report). It also benchmarks Cross-Language (CLIR) and Multi-lingual (MLIR) retrieval as supporting subtasks. The languages are English, Russian, Chinese, and Spanish.
Anticipated timeline: runs due in mid-to-late July.
Track coordinators:
Dawn Lawrie, Johns Hopkins University
Sean MacAvaney, University of Glasgow
James Mayfield, Johns Hopkins University
Paul McNamee, Johns Hopkins University
Douglas W. Oard, University of Maryland
Luca Soldaini, Allen Institute for AI
Eugene Yang, Johns Hopkins University
Track Web Page: https://trec-ragtime.github.io/
Video Question Answering (VQA)
The Video QA track is about question answering from multimedia sources. The goals of the new track include pushing multi-modal integration and complex reasoning. There will be a generation subtask and a multiple-choice subtask. The track will use a set of YouTube shorts, movie scenes, and Vimeo Creative Commons videos.
Anticipated timeline: TBA
Track coordinators:
George Awad, NIST
Sanjay Purushotam, UMBC
Track Web Page: https://www-nlpir.nist.gov/projects/tv2026/vqa.html
Conference Format
The conference itself will be used as a forum both for presentation of results (including failure analyses and system comparisons), and for more lengthy system presentations describing retrieval techniques used, experiments run using the data, and other issues of interest to researchers in information retrieval. All groups will be invited to present their results in a joint poster session. Some groups may also be selected to present during plenary talk sessions.
Application Details
Organizations wishing to participate in TREC 2026 should respond to this call for participation by submitting an application. Participants in previous TRECs who wish to participate in TREC 2026 must submit a new application.
To apply, use the Evalbase web app at ir.nist.gov/evalbase. First you will need to create an account and profile, then you can register a participating organization from the main Evalbase page. If you participated in TREC in 2024 or 2025, you can reuse your existing organization, but you still need to register.
Any questions about conference participation should be sent to the general TREC email address, trec (at) nist.gov.