The Thirty-Third Text REtrieval Conference
(TREC 2024)

NeuCLIR (Cross-Language and Multilingual Search) Report Generation Task Appendix

Each run entry below lists, in order: Run tag and organization; Is this run manual or automatic?; What collection is this submission using?; What form of the documents did you use?; Please provide a short description of this run, including info about anything checked "Other" above; Please give this run a priority for inclusion in manual assessments.
zho-jhu-orion-aggregated-w-gpt4o hltcoe
manual
Chinese
Original non-English text
Orion run using GPT-4o
3
rus-jhu-orion-aggregated-w-gpt4o hltcoe
manual
Russian
Original non-English text
Orion run using GPT-4o
3
fas-jhu-orion-aggregated-w-gpt4o hltcoe
manual
Farsi
Original non-English text
Orion run using GPT-4o
3
zho-jhu-orion-aggregated-w-claude hltcoe
manual
Chinese
Original non-English text
Orion run using Claude
4
rus-jhu-orion-aggregated-w-claude hltcoe
manual
Russian
Original non-English text
Orion run using Claude
4
fas-jhu-orion-aggregated-w-claude hltcoe
manual
Farsi
Original non-English text
Orion run using Claude
4
fas-hltcoe-eugene-gpt4o hltcoe
manual
Farsi
Original non-English text
Eugene's run using GPT-4o
1 (top)
rus-hltcoe-eugene-gpt4o hltcoe
manual
Russian
Original non-English text
Eugene's run using GPT-4o
1 (top)
fas-hltcoe-eugene-gpt35turbo hltcoe
manual
Farsi
Original non-English text
Eugene's run using GPT-3.5 Turbo
1 (top)
rus-hltcoe-eugene-gpt35turbo hltcoe
manual
Russian
Original non-English text
Eugene's run using GPT-3.5 Turbo
1 (top)
zho-hltcoe-eugene-gpt35turbo hltcoe
manual
Chinese
Original non-English text
Eugene's run using GPT-3.5 Turbo
1 (top)
zho-hltcoe-eugene-gpt4o-fixed hltcoe
manual
Chinese
Original non-English text
Eugene's run using GPT-4o
1 (top)
IDA_CCS_abstractive_fas IDACCS
automatic
Farsi
Original non-English text
We used the UMD webserver for the CLIR. We then used GPT-4o to translate, followed by GPT-4o generation.
2
IDA_CCS_abstractive_rus IDACCS
automatic
Russian
Original non-English text
We used the UMD webserver for the CLIR. We then used GPT-4o to translate, followed by GPT-4o generation.
2
IDA_CCS_abstractive_zho IDACCS
automatic
Chinese
Original non-English text
We used the UMD webserver for the CLIR. We then used GPT-4o to translate, followed by GPT-4o generation.
2
IDA_CCS_hybrid_fas IDACCS
automatic
Farsi
Original non-English text
We used the UMD webserver for the CLIR. We then used GPT-4o to translate and applied hybrid summarization: occams extractive summarization followed by GPT-4o generation.
1 (top)
IDA_CCS_hybrid_rus IDACCS
automatic
Russian
Original non-English text
We used the UMD webserver for the CLIR. We then used GPT-4o to translate and applied hybrid summarization: occams extractive summarization followed by GPT-4o generation.
1 (top)
IDA_CCS_hybrid_zho IDACCS
automatic
Chinese
Original non-English text
We used the UMD webserver for the CLIR. We then used GPT-4o to translate and applied hybrid summarization: occams extractive summarization followed by GPT-4o generation.
1 (top)
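The IDA_CCS_hybrid runs above combine an extractive stage with abstractive generation. The following is a minimal illustrative sketch of that two-stage pattern, not the team's actual code: the extract and generate callables are hypothetical stand-ins for the occams extractive summarizer and the GPT-4o generation step, whose real interfaces are not specified here.

from typing import Callable, List

def hybrid_summarize(
    docs: List[str],                                  # retrieved documents, already translated
    extract: Callable[[List[str], int], List[str]],   # hypothetical stand-in for occams extractive summarization
    generate: Callable[[str], str],                   # hypothetical stand-in for a GPT-4o generation call
    budget: int = 20,                                 # number of sentences kept by the extractive stage
) -> str:
    """Two-stage hybrid summarization: extract salient sentences, then generate a report from them."""
    salient = extract(docs, budget)
    prompt = (
        "Write a concise report grounded only in the sentences below, citing them by number:\n\n"
        + "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(salient))
    )
    return generate(prompt)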
rus_irlab-ams-std-translate-llama-70B-api IRLab-Amsterdam
automatic
Russian
Original non-English text; Other
This run used a standard RAG pipeline: Retrieval*: track-provided API (ColBERTX top-10 documents); Augmentation*: raw documents; Generation: RAG direct prompting (Llama3.1-70B, zero-shot). *Before retrieval: query rewriting (GPT-3.5, zero-shot); before augmentation: Google Translate.
4
zho_irlab-ams-std-translate-llama-70B-api IRLab-Amsterdam
automatic
Chinese
Original non-English text; Other
This run used a standard RAG pipeline: Retrieval*: track-provided API (ColBERTX top-10 documents); Augmentation*: raw documents; Generation: RAG direct prompting (Llama3.1-70B, zero-shot). *Before retrieval: query rewriting (GPT-3.5, zero-shot); before augmentation: Google Translate.
4
fas_irlab-ams-std-translate-llama-70B-api IRLab-Amsterdam
automatic
Farsi
Original non-English text; Other
This run used a standard RAG pipeline: Retrieval*: track-provided API (ColBERTX top-10 documents); Augmentation*: raw documents; Generation: RAG direct prompting (Llama3.1-70B, zero-shot). *Before retrieval: query rewriting (GPT-3.5, zero-shot); before augmentation: Google Translate.
4
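The IRLab-Amsterdam std-translate runs above follow a standard retrieve-translate-prompt RAG flow. Below is a minimal sketch of that flow under assumed interfaces, not the team's code: rewrite_query, retrieve, translate, and generate are hypothetical callables standing in for GPT-3.5 query rewriting, the track-provided ColBERTX API, Google Translate, and Llama3.1 direct prompting respectively.

from typing import Callable, List

def standard_rag_report(
    topic: str,
    rewrite_query: Callable[[str], str],        # hypothetical: GPT-3.5 zero-shot query rewriting
    retrieve: Callable[[str, int], List[str]],  # hypothetical: track-provided ColBERTX API, returning document texts
    translate: Callable[[str], str],            # hypothetical: Google Translate wrapper
    generate: Callable[[str], str],             # hypothetical: Llama3.1 zero-shot direct prompting
    top_k: int = 10,
) -> str:
    """Standard RAG: rewrite the query, retrieve top-k documents, translate them, prompt the LLM once."""
    query = rewrite_query(topic)
    docs = retrieve(query, top_k)
    context = [translate(d) for d in docs]
    prompt = (
        f"Report request: {topic}\n\n"
        + "\n\n".join(f"[{i + 1}] {d}" for i, d in enumerate(context))
        + "\n\nWrite a short report, citing documents by number."
    )
    return generate(prompt)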
fas_irlab-ams-postcite-v IRLab-Amsterdam
automatic
Farsi
Original non-English text; Other
This run used a post-hoc citation pipeline with verification: Generation: zero-shot direct prompting (GPT-3.5); Post-retrieval: track-provided API (sentence chunks as queries, ColBERTX top-2 documents); Post-verification: re-ranking via attribution scores (cross-lingual verification).
1 (top)
rus_irlab-ams-postcite-v IRLab-Amsterdam
automatic
Russian
Original non-English text; Other
This run used a post-hoc citation pipeline with verification: Generation: zero-shot direct prompting (GPT-3.5); Post-retrieval: track-provided API (sentence chunks as queries, ColBERTX top-2 documents); Post-verification: re-ranking via attribution scores (cross-lingual verification).
1 (top)
zho_irlab-ams-postcite-v IRLab-Amsterdam
automatic
Chinese
Original non-English text; Other
This run used a post-hoc citation pipeline with verification: Generation: zero-shot direct prompting (GPT-3.5); Post-retrieval: track-provided API (sentence chunks as queries, ColBERTX top-2 documents); Post-verification: re-ranking via attribution scores (cross-lingual verification).
1 (top)
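The postcite-v runs above attach citations after generation and then verify them. A minimal sketch of that pattern follows, under assumed interfaces rather than the team's code: retrieve and attribution_score are hypothetical callables standing in for the track-provided ColBERTX API and the cross-lingual attribution scorer, and the sentence splitting is deliberately naive.

from typing import Callable, List, Tuple

def cite_and_verify(
    report: str,                                     # report already generated zero-shot (e.g., by GPT-3.5)
    retrieve: Callable[[str, int], List[str]],       # hypothetical: track-provided ColBERTX API, returning document texts
    attribution_score: Callable[[str, str], float],  # hypothetical: cross-lingual attribution / support scorer
    min_score: float = 0.5,
) -> List[Tuple[str, List[str]]]:
    """For each sentence, retrieve the top-2 candidate documents, then keep only well-supported citations."""
    cited = []
    for sentence in report.split(". "):              # naive sentence chunking
        scored = sorted(
            ((attribution_score(sentence, doc), doc) for doc in retrieve(sentence, 2)),
            reverse=True,
        )
        cited.append((sentence, [doc for score, doc in scored if score >= min_score]))
    return cited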
zho_irlab-ams-std-recomp-llama-8B IRLab-Amsterdam
automatic
Chinese
Original non-English text; Other
This run used a standard RAG pipeline with compression: Retrieval*: track-provided API (ColBERTX top-30 documents); Augmentation*: document summarization (ReComp-NQ); Generation: RAG direct prompting (Llama3.1-8B, zero-shot). *Before retrieval: query rewriting (GPT-3.5, zero-shot); before augmentation: Google Translate.
2
rus_irlab-ams-std-recomp-llama-8B IRLab-Amsterdam
automatic
Russian
Original non-English text; Other
This run used a standard RAG pipeline with compression: Retrieval*: track-provided API (ColBERTX top-30 documents); Augmentation*: document summarization (ReComp-NQ); Generation: RAG direct prompting (Llama3.1-8B, zero-shot). *Before retrieval: query rewriting (GPT-3.5, zero-shot); before augmentation: Google Translate.
2
fas_irlab-ams-std-recomp-llama-8B IRLab-Amsterdam
automatic
Farsi
Original non-English text; Other
This run used a standard RAG pipeline with compression: Retrieval*: track-provided API (ColBERTX top-30 documents); Augmentation*: document summarization (ReComp-NQ); Generation: RAG direct prompting (Llama3.1-8B, zero-shot). *Before retrieval: query rewriting (GPT-3.5, zero-shot); before augmentation: Google Translate.
2
fas_irlab-ams-postcite IRLab-Amsterdam
automatic
Farsi
Original non-English text; Other
This run used a post-hoc citation pipeline: Generation: zero-shot direct prompting (GPT-3.5); Post-retrieval: track-provided API (sentence chunks as queries, ColBERTX top-2 documents).
7
rus_irlab-ams-postcite IRLab-Amsterdam
automatic
Russian
Original non-English text; Other
This run used a post-hoc citation pipeline: Generation: zero-shot direct prompting (GPT-3.5); Post-retrieval: track-provided API (sentence chunks as queries, ColBERTX top-2 documents).
7
zho_irlab-ams-postcite IRLab-Amsterdam
automatic
Chinese
Original non-English text; Other
This run used a post-hoc citation pipeline: Generation: zero-shot direct prompting (GPT-3.5); Post-retrieval: track-provided API (sentence chunks as queries, ColBERTX top-2 documents).
7
fas_irlab-ams-std-translate-llama-8B IRLab-Amsterdam
automatic
Farsi
Original non-English text; Other
This run used a standard RAG pipeline: Retrieval*: track-provided API (ColBERTX top-10 documents); Augmentation*: raw documents; Generation: RAG direct prompting (Llama3.1-8B, zero-shot). *Before retrieval: query rewriting (GPT-3.5, zero-shot); before augmentation: Google Translate.
6
rus_irlab-ams-std-translate-llama-8B IRLab-Amsterdam
automatic
Russian
Original non-English text; Other
This run used a standard RAG pipeline: Retrieval*: track-provided API (ColBERTX top-10 documents); Augmentation*: raw documents; Generation: RAG direct prompting (Llama3.1-8B, zero-shot). *Before retrieval: query rewriting (GPT-3.5, zero-shot); before augmentation: Google Translate.
6
zho_irlab-ams-std-translate-llama-8B IRLab-Amsterdam
automatic
Chinese
Original non-English text; Other
This run used a standard RAG pipeline: Retrieval*: track-provided API (ColBERTX top-10 documents); Augmentation*: raw documents; Generation: RAG direct prompting (Llama3.1-8B, zero-shot). *Before retrieval: query rewriting (GPT-3.5, zero-shot); before augmentation: Google Translate.
6
rfused_rgn_gpt4o_zho h2oloo
automatic
Chinese
Original non-English text; Track-provided translations
10 subquestions per question from GPT-4o, plus 2 additional queries (req, and req + background). First stage: RRF of PLAID server (top 100), SPLADE DT + Rocchio (top 1K), and BM25 DT + Rocchio (top 1K). Second stage (on the top 1K): RRF of the first stage, monot5-3b, and lit5-v2large; RRF between subquestions (only req + background from here on). Third stage (on the top 100): RRF of RankZephyr, RankLlama3.1-70b, and RankGPT4o. Generation: Ragnarok with GPT-4o over the top 20.
1 (top)
rfused_rgn_gpt4o_rus h2oloo
automatic
Russian
Original non-English text; Track-provided translations
10 subquestions per question from GPT-4o, plus 2 additional queries (req, and req + background). First stage: RRF of PLAID server (top 100), SPLADE DT + Rocchio (top 1K), and BM25 DT + Rocchio (top 1K). Second stage (on the top 1K): RRF of the first stage, monot5-3b, and lit5-v2large; RRF between subquestions (only req + background from here on). Third stage (on the top 100): RRF of RankZephyr, RankLlama3.1-70b, and RankGPT4o. Generation: Ragnarok with GPT-4o over the top 20.
1 (top)
rfused_rgn_gpt4o_fas h2oloo
automatic
Farsi
Original non-English text; Track-provided translations
10 subquestions per question from GPT-4o, plus 2 additional queries (req, and req + background). First stage: RRF of PLAID server (top 100), SPLADE DT + Rocchio (top 1K), and BM25 DT + Rocchio (top 1K). Second stage (on the top 1K): RRF of the first stage, monot5-3b, and lit5-v2large; RRF between subquestions (only req + background from here on). Third stage (on the top 100): RRF of RankZephyr, RankLlama3.1-70b, and RankGPT4o. Generation: Ragnarok with GPT-4o over the top 20.
1 (top)
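The h2oloo runs above fuse rankings at every stage with reciprocal rank fusion (RRF). The standard RRF scoring rule is score(d) = sum over input rankings r of 1 / (k + rank_r(d)), typically with k = 60; the snippet below is an illustrative implementation of that formula, not the team's code, and the variable names in the usage comment are hypothetical.

from collections import defaultdict
from typing import Dict, List

def reciprocal_rank_fusion(rankings: List[List[str]], k: int = 60) -> List[str]:
    """Fuse several ranked lists of doc ids: score(d) = sum over lists of 1 / (k + rank of d in that list)."""
    scores: Dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Usage, e.g., fusing the first-stage runs before the next reranking stage:
# fused = reciprocal_rank_fusion([plaid_top100, splade_top1k, bm25_top1k])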
rfused_rgn_crp_fas h2oloo
automatic
Farsi
Original non-English text; Track-provided translations
10 subquestions per question from GPT-4o, plus 2 additional queries (req, and req + background). First stage: RRF of PLAID server (top 100), SPLADE DT + Rocchio (top 1K), and BM25 DT + Rocchio (top 1K). Second stage (on the top 1K): RRF of the first stage, monot5-3b, and lit5-v2large; RRF between subquestions (only req + background from here on). Third stage (on the top 100): RRF of RankZephyr, RankLlama3.1-70b, and RankGPT4o. Generation: Ragnarok with Cohere Command R Plus over the top 20.
3
rfused_rgn_crp_rus h2oloo
automatic
Russian
Original non-English text; Track-provided translations
10 subquestions per question from GPT-4o, plus 2 additional queries (req, and req + background). First stage: RRF of PLAID server (top 100), SPLADE DT + Rocchio (top 1K), and BM25 DT + Rocchio (top 1K). Second stage (on the top 1K): RRF of the first stage, monot5-3b, and lit5-v2large; RRF between subquestions (only req + background from here on). Third stage (on the top 100): RRF of RankZephyr, RankLlama3.1-70b, and RankGPT4o. Generation: Ragnarok with Cohere Command R Plus over the top 20.
3
rfused_rgn_crp_zho h2oloo
automatic
Chinese
Original non-English text; Track-provided translations
10 subquestions per question from GPT-4o, plus 2 additional queries (req, and req + background). First stage: RRF of PLAID server (top 100), SPLADE DT + Rocchio (top 1K), and BM25 DT + Rocchio (top 1K). Second stage (on the top 1K): RRF of the first stage, monot5-3b, and lit5-v2large; RRF between subquestions (only req + background from here on). Third stage (on the top 100): RRF of RankZephyr, RankLlama3.1-70b, and RankGPT4o. Generation: Ragnarok with Cohere Command R Plus over the top 20.
3
rfused_rgn_l70b_fas h2oloo
automatic
Farsi
Original non-English text; Track-provided translations
10 subquestions per question from GPT-4o, plus 2 additional queries (req, and req + background). First stage: RRF of PLAID server (top 100), SPLADE DT + Rocchio (top 1K), and BM25 DT + Rocchio (top 1K). Second stage (on the top 1K): RRF of the first stage, monot5-3b, and lit5-v2large; RRF between subquestions (only req + background from here on). Third stage (on the top 100): RRF of RankZephyr, RankLlama3.1-70b, and RankGPT4o. Generation: Ragnarok with Llama3.1-70b over the top 20.
2
rfused_rgn_l70b_rus h2oloo
automatic
Russian
Original non-English text; Track-provided translations
10 subquestions per question from GPT-4o, plus 2 additional queries (req, and req + background). First stage: RRF of PLAID server (top 100), SPLADE DT + Rocchio (top 1K), and BM25 DT + Rocchio (top 1K). Second stage (on the top 1K): RRF of the first stage, monot5-3b, and lit5-v2large; RRF between subquestions (only req + background from here on). Third stage (on the top 100): RRF of RankZephyr, RankLlama3.1-70b, and RankGPT4o. Generation: Ragnarok with Llama3.1-70b over the top 20.
2
rfused_rgn_l70b_zho h2oloo
automatic
Chinese
Original non-English text; Track-provided translations
10 subquestions per question from GPT-4o, plus 2 additional queries (req, and req + background). First stage: RRF of PLAID server (top 100), SPLADE DT + Rocchio (top 1K), and BM25 DT + Rocchio (top 1K). Second stage (on the top 1K): RRF of the first stage, monot5-3b, and lit5-v2large; RRF between subquestions (only req + background from here on). Third stage (on the top 100): RRF of RankZephyr, RankLlama3.1-70b, and RankGPT4o. Generation: Ragnarok with Llama3.1-70b over the top 20.
2
zho_irlab-ams-std-mdcomp-330-translate-llama-8B IRLab-Amsterdam
automatic
Chinese
Original non-English text; Other
This run used a standard RAG pipeline with compression: Retrieval*: track-provided API (ColBERTX top-30 documents); Augmentation*: document summarization (FiD-Flan-T5-mds); Generation: RAG direct prompting (Llama3.1-8B, zero-shot). *Before retrieval: query rewriting (GPT-3.5, zero-shot); before augmentation: Google Translate.
3
rus_irlab-ams-std-mdcomp-330-translate-llama-8B IRLab-Amsterdam
automatic
Russian
Original non-English text; Other
This run used a standard RAG pipeline with compression: Retrieval*: track-provided API (ColBERTX top-30 documents); Augmentation*: document summarization (FiD-Flan-T5-mds); Generation: RAG direct prompting (Llama3.1-8B, zero-shot). *Before retrieval: query rewriting (GPT-3.5, zero-shot); before augmentation: Google Translate.
3
fas_irlab-ams-std-mdcomp-330-translate-llama-8B IRLab-Amsterdam
automatic
Farsi
Original non-English text; Other
This run used a standard RAG pipeline with compression: Retrieval*: track-provided API (ColBERTX top-30 documents); Augmentation*: document summarization (FiD-Flan-T5-mds); Generation: RAG direct prompting (Llama3.1-8B, zero-shot). *Before retrieval: query rewriting (GPT-3.5, zero-shot); before augmentation: Google Translate.
3
fas_irlab-ams-std-mdcomp-331-translate-llama-8B IRLab-Amsterdam
automatic
Farsi
Original non-English text; Other
This run used a standard RAG pipeline with compression: Retrieval*: track-provided API (ColBERTX top-30 documents); Augmentation*: document summarization (FiD-Flan-T5-mds type1); Generation: RAG direct prompting (Llama3.1-8B, zero-shot). *Before retrieval: query rewriting (GPT-3.5, zero-shot); before augmentation: Google Translate.
8
rus_irlab-ams-std-mdcomp-331-translate-llama-8B IRLab-Amsterdam
automatic
Russian
Original non-English text; Other
This run used a standard RAG pipeline with compression: Retrieval*: track-provided API (ColBERTX top-30 documents); Augmentation*: document summarization (FiD-Flan-T5-mds type1); Generation: RAG direct prompting (Llama3.1-8B, zero-shot). *Before retrieval: query rewriting (GPT-3.5, zero-shot); before augmentation: Google Translate.
8
zho_irlab-ams-std-mdcomp-331-translate-llama-8B IRLab-Amsterdam
automatic
Chinese
Original non-English text; Other
This run used a standard RAG pipeline with compression: Retrieval*: track-provided API (ColBERTX top-30 documents); Augmentation*: document summarization (FiD-Flan-T5-mds type1); Generation: RAG direct prompting (Llama3.1-8B, zero-shot). *Before retrieval: query rewriting (GPT-3.5, zero-shot); before augmentation: Google Translate.
8
rfused_rgn_l70bph_fas h2oloo
automatic
Farsi
Original non-English text; Track-provided translations
Same as rfused_rgn_l70b, with a post-hoc citation update based on sentences.
2
rfused_rgn_l70bph_rus h2oloo
automatic
Russian
Original non-English text; Track-provided translations
Same as rfused_rgn_l70b, with post-hoc citations based on sentences.
2
rfused_rgn_l70bph_zho h2oloo
automatic
Chinese
Original non-English text; Track-provided translations
Same as rfused_rgn_l70b, with post-hoc citations based on sentences.
2