The Text Retrieval Conference (TREC) workshop series encourages
research in information retrieval and related applications by
providing a large test collection, uniform scoring procedures,
and a forum for organizations interested in comparing their
results. Details about TREC
can be found at the TREC web site, http://trec.nist.gov.
You are invited to participate in TREC 2023. TREC 2023 will
consist of a set of tasks known as "tracks". Each track focuses
on a particular subproblem or variant of the retrieval task as
described below. Organizations may choose to participate in any or
all of the tracks. Training and test materials are available from
NIST for some tracks; other tracks will use special collections that
are available from other organizations for a fee.
Dissemination of TREC work and results other than in the (publicly
available) conference proceedings is welcomed, but the conditions of
participation specifically preclude any advertising claims based
on TREC results. All retrieval results submitted to NIST are
published in the Proceedings and are archived on the TREC web site
with the submitting organization identified.
Schedule:
As soon as possible -- submit your application to participate in
TREC 2023 as described below.
Submitting an application will add you to the active participants'
mailing list. On March 1st, NIST will announce a new password
for the "active participants" portion of the TREC web site.
We accept applications until late May, but applying earlier
means you can be involved in track discussions.
Processing applications requires some manual effort on our end.
Once your application is processed (at most a few business days),
a "Welcome to TREC" email message with
details about TREC participation will be sent to the email address
provided in the application.
July--August
Results submission deadline for most tracks.
Specific deadlines for each track will be included in
the track guidelines, which will be finalized in the spring.
September 30 (estimated)
Relevance judgments and individual
evaluation scores due back to participants.
Nov 13--17
TREC 2023 conference at NIST in Gaithersburg, MD, USA, if an in-person meeting can be held; otherwise, a virtual conference during the same week.
Task Description
Below is a brief summary of the tasks. Complete descriptions of
tasks performed in previous years are included in the Overview
papers in each of the TREC proceedings (in the Publications section
of the web site).
The exact definition of the tasks to be performed in each track for
TREC 2023 is still being formulated. Track discussion takes place
on the track mailing list (or other communication medium). To join
the discussion,
follow the instructions for the track as detailed below.
TREC 2023 will contain eight tracks. Four of the tracks (Deep Learning, Clinical Trials, CrisisFACTS, and NeuCLIR) ran in TREC 2022; the CAsT, Fair Ranking, and Health Misinformation tracks have ended; and four new tracks, iKAT, AToMiC, Product Search, and Tip-of-the-Tongue, are starting.
AToMiC Track
The Authoring Tools for Multimedia Content (AToMiC) Track aims to build reliable benchmarks for multimedia search systems. The focus of this track is to develop and evaluate IR techniques for text-to-image and image-to-text search problems.
Anticipated timeline: Document collection (images and texts) available in January, evaluation topics in June, final submissions in July.
Track coordinators:
Jheng-Hong (Matt) Yang, University of Waterloo
Jimmy Lin, University of Waterloo
Carlos Lassance, Naver Labs Europe
Rafael S. Rezende, Naver Labs Europe
Stéphane Clinchant, Naver Labs Europe
Krishna Srinivasan, Google Research
Miriam Redi, Wikimedia Foundation
Track Web Page:
https://trec-atomic.github.io/
Mailing list:
Google group, name: atomic-participants
Clinical Trials Track
The goal of the Clinical Trials track is to focus research on the
clinical trials matching problem: given a free text summary of a patient
health record, find suitable clinical trials for that patient.
Anticipated timeline: TBD
Track coordinators:
Dina Demner-Fushman, U.S. National Library of Medicine
William Hersh, Oregon Health and Science University
Kirk Roberts, University of Texas Health Science Center
Track Web Page:
http://www.trec-cds.org/
Mailing list:
Google group, name: trec-cds
CrisisFACTS Track
The CrisisFACTS track focuses on temporal summarization for first responders in emergency situations. These summaries differ from traditional summarization in that they order information by time and produce a series of short updates instead of a longer narrative.
Anticipated timeline: Results due in July/August
Track coordinators:
Cody Buntain (University of Maryland)
Benjamin Horne (University of Tennessee–Knoxville)
Amanda Hughes (Brigham Young University)
Muhammad Imran (QCRI)
Richard McCreadie (University of Glasgow)
Hemant Purohit (George Mason University)
Track Web Page:
https://crisisfacts.github.io/
Mailing list:
Google group, name: trec-is
Deep Learning Track
The Deep Learning track focuses on IR tasks where a large training set is available, allowing comparison of a variety of retrieval approaches, including deep neural networks and strong non-neural methods, to see what works best in a large-data regime.
Anticipated timeline: Results due in early August
Track coordinators:
Nick Craswell, Microsoft
Bhaskar Mitra, Microsoft Research
Emine Yilmaz, University College London
Daniel Campos, University of Illinois at Urbana-Champaign
Jimmy Lin, University of Waterloo
Track Web Page:
https://microsoft.github.io/msmarco/TREC-Deep-Learning
Interactive Knowledge Assistance Track (iKAT)
iKAT is the successor to the Conversational Assistance Track (CAsT). The fourth year of CAsT aimed to add more conversational elements to the interaction streams by introducing mixed initiative (clarifications and suggestions) to create multi-path, multi-turn conversations for each topic. TREC iKAT evolves CAsT into a new track to signal this new
trajectory. iKAT focuses on supporting multi-path, multi-turn, multi-perspective conversations. That is, for a given topic, the direction and the conversation that evolves depend not only on the prior responses but also on the user.
Anticipated timeline: TBD
Track coordinators:
Mohammed Aliannejadi, University of Amsterdam
Zahra Abbasiantaeb, University of Amsterdam
Shubham Chatterjee, University of Glasgow
Jeff Dalton, University of Glasgow
Leif Azzopardi, University of Strathclyde
Track Web Page:
https://trecikat.com
Mailing list:
Google group, name: trec_ikat
NeuCLIR Track
Cross-language Information Retrieval (CLIR) has been studied at TREC and subsequent evaluation forums for more than twenty years. Recent advances in the application of deep learning to information retrieval (IR), however, warrant a new, large-scale effort that will enable exploration of classical and modern IR techniques for this task.
Anticipated timeline: Document collection available in January, evaluation topics and baseline results in June, final submissions in July
Track coordinators:
Dawn Lawrie, Johns Hopkins University
Sean MacAvaney, University of Glasgow
James Mayfield, Johns Hopkins University
Paul McNamee, Johns Hopkins University
Douglas W. Oard, University of Maryland
Luca Soldaini, Allen Institute for AI
Eugene Yang, Johns Hopkins University
Track Web Page:
https://neuclir.github.io/
Mailing list:
Google group, name: neuclir-participants
Product Search Track
The Product Search track focuses on IR tasks in the world of product search and discovery. This track seeks to understand what methods work best for product search, improve evaluation methodology, and provide a reusable dataset that allows easy benchmarking in a public forum.
Anticipated timeline: Results due in early August
Track coordinators:
Daniel Campos, University of Illinois at Urbana-Champaign
Corby Rosset, Microsoft
Surya Kallumadi, Lowe's
ChengXiang Zhai, University of Illinois at Urbana-Champaign
Alessandro Magnani, Walmart
Track Web Page:
Product Search track web page
Tip-of-the-Tongue Track
The Tip-of-the-Tongue (ToT) Track focuses on the known-item identification task in which the searcher has previously experienced or consumed the item (e.g., a movie) but cannot recall a reliable identifier (i.e., "It's on the tip of my tongue…"). Unlike traditional ad-hoc keyword-based search, these information requests tend to be natural-language, verbose, and complex, drawing on a wide variety of search strategies such as multi-hop reasoning; they frequently express uncertainty and suffer from false memories.
Anticipated timeline: Results due in early August
Track coordinators:
Jaime Arguello, University of North Carolina
Samarth Bhargav, University of Amsterdam
Bhaskar Mitra, Microsoft Research
Fernando Diaz, Google
Evangelos Kanoulas, University of Amsterdam
Track Web Page: https://trec-tot.github.io/
Twitter: @TREC_ToT
Mastodon: @TREC_ToT@idf.social
Conference Format
The conference itself will be used as a forum both for presentation
of results (including failure analyses and system comparisons),
and for more lengthy system presentations describing retrieval
techniques used, experiments run using the data, and other issues
of interest to researchers in information retrieval.
All groups will be invited to present their results in a joint
poster session (assuming an in-person meeting is possible).
Some groups may also be selected to present
during plenary talk sessions.
Application Details
Organizations wishing to participate in TREC 2023 should respond
to this call for participation by submitting an application.
Participants in previous TRECs who wish to participate
in TREC 2023 must submit a new application.
To apply, submit the online application at
http://ir.nist.gov/trecsubmit.open/application.html
The application system
will send an acknowledgment to the email address
supplied in the form once the application has been processed.
Any questions about conference participation should be sent
to the general TREC email address, trec (at) nist.gov.