TREC 2025 Proceedings

cru-ablR-LSR-

Submission Details

Organization
HLTCOE
Track
RAG TREC Instrument for Multilingual Evaluation
Task
Report Generation Task
Date
2025-08-20

Run Description

Document collection
English subset, Arabic subset, Chinese subset, Russian subset
Machine translation of documents
Yes, we used the organizer-provided machine translations.
Write a short description of your retrieval process
See below
Write a short description of your generation process
Crucible@ragtime
Original run tag: strict-filtered-covered-covextr-crucible-ranking-plaidx_svc-ragtime-test_t+ps_lsr-ragtime.run-SupportedAnswerabilityExtractorRequest

Documents are retrieved with multi-level LSR using the Answerability prompt; reports are generated with Crucible.
Guiding nuggets: plaidx_svc
Document source: nugget citations
Nugget extraction prompt: 'SupportedAnswerExtractorAll' on collection "ragtime-mt"
LLM: llama3.3-70b-instruct
Summarization: abstractive

A sentence is retained when its citations are supported, at least one nugget covers the summary sentence, and at least one nugget covers the extractive document segment according to argue_eval. We further retain only sentences that have an extraction confidence value >= 0.5, are not already selected (according to a stopped-and-stemmed match), and do not contain the expression 'source document'. For each nugget, among the remaining sentence candidates, we select the sentence with the highest extraction confidence. The report is chopped to 2000 characters.

Created on 2025-08-20
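The sentence filtering and per-nugget selection described above can be sketched roughly as follows. This is a minimal illustration, not the actual Crucible implementation: the `Candidate` fields, the `normalize` stand-in for the stopped-and-stemmed match, and all function names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    nugget_id: str           # nugget this sentence was extracted for
    confidence: float        # extraction confidence from the LLM prompt
    citations_supported: bool
    covers_summary: bool     # a nugget covers the summary sentence
    covers_segment: bool     # a nugget covers the extractive segment (argue_eval)

def normalize(text: str) -> str:
    # Crude stand-in for the stopped-and-stemmed match used for deduplication.
    stop = {"the", "a", "an", "of", "and", "in", "to"}
    return " ".join(w[:5] for w in text.lower().split() if w not in stop)

def build_report(candidates: list[Candidate], max_chars: int = 2000) -> str:
    selected: list[Candidate] = []
    seen: set[str] = set()
    # Group candidates by nugget.
    by_nugget: dict[str, list[Candidate]] = {}
    for c in candidates:
        by_nugget.setdefault(c.nugget_id, []).append(c)
    # Per nugget, filter, then keep the highest-confidence survivor.
    for group in by_nugget.values():
        survivors = [
            c for c in group
            if c.citations_supported and c.covers_summary and c.covers_segment
            and c.confidence >= 0.5
            and "source document" not in c.text.lower()
            and normalize(c.text) not in seen
        ]
        if survivors:
            best = max(survivors, key=lambda c: c.confidence)
            selected.append(best)
            seen.add(normalize(best.text))
    # Chop the assembled report to the character budget.
    return " ".join(c.text for c in selected)[:max_chars]
```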
Which LLM(s) were used by your system?
Llama-3.3-70B Instruct (70B)
Open repository link
https://github.com/laura-dietz/scale25-crucible/releases/tag/ragtime25-submission
Assessing priority
8

Evaluation Files

Paper