Text REtrieval Conference (TREC)
Organization Name: GE/Rutgers | Run ID: gerua1

Section 1.0 System Summary and Timing
Section 1.1 System Information

Hardware Model Used for TREC Experiment: Sun Sparc Enterprise
System Use: SHARED
Total Amount of Hard Disk Storage: 20 Gb
Total Amount of RAM: 256 MB
Clock Rate of CPU: MHz
Section 1.2 System Comparisons

Amount of developmental "Software Engineering": NONE
List of features that are not present in the system, but would have been beneficial to have:
List of features that are present in the system, and impacted its performance, but are not detailed within this form:
Section 2.0 Construction of Indices, Knowledge Bases, and Other Data Structures

Length of the stopword list: 300 words
Type of Stemming: MORPHOLOGICAL
Controlled Vocabulary: NO
Term weighting: YES
Phrase discovery: YES
Type of Spelling Correction: NONE
Manually-Indexed Terms: NO
Proper Noun Identification: YES
Syntactic Parsing: YES
Tokenizer: YES
Word Sense Disambiguation: NO
Other technique: YES
Additional comments: Syntactic normalization of head+modifier pairs in order to cover a variety of syntactic forms with equivalent semantics. These include noun+adjective, noun+noun, noun+prep+noun, verb+object, and subject+verb syntactic patterns.
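The normalization described above maps several surface syntactic patterns onto a single head+modifier pair, so that different phrasings of the same concept index to the same term. A minimal sketch of the idea follows; the pattern names and mapping rules are illustrative assumptions, not the actual GE/Rutgers implementation.

```python
def normalize(pattern, words):
    """Map a recognized syntactic pattern onto a (head, modifier) pair.

    Illustrative rules only: the real system's normalization is not
    specified in this form.
    """
    if pattern == "noun+adjective":     # e.g. "retrieval, efficient"
        noun, adj = words
        return (noun, adj)
    if pattern == "noun+noun":          # e.g. "information retrieval"
        mod, head = words               # second noun is the head
        return (head, mod)
    if pattern == "noun+prep+noun":     # e.g. "retrieval of information"
        head, _prep, mod = words
        return (head, mod)
    if pattern == "verb+object":        # e.g. "retrieve information"
        verb, obj = words
        return (obj, verb)              # treat the object as the head
    if pattern == "subject+verb":       # e.g. "prices rose"
        subj, verb = words
        return (subj, verb)
    raise ValueError(f"unrecognized pattern: {pattern}")

# Two different surface forms collapse to the same indexed pair:
assert normalize("noun+noun", ["information", "retrieval"]) == \
       normalize("noun+prep+noun", ["retrieval", "of", "information"])
```

The benefit for retrieval is that a query phrased one way still matches documents phrased another, since both sides index the normalized pair rather than the raw word sequence.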
Section 3.0 Statistics on Data Structures Built from TREC Text

Section 3.1 First Data Structure

Structure Type: INVERTED INDEX
Type of other data structure used:
Brief description of method using other data structure:
Total storage used: 8 Gb
Total computer time to build: 5 hours
Automatic process: YES
Manual hours required: hours
Type of manual labor: NONE
Term positions used: NO
Only single terms used: NO
Concepts (vs. single terms) represented: YES
Type of representation: terms within a stream
Auxiliary files used:
Additional comments: The index is organized into a structure of streams, i.e., parallel indexes with alternative representations of the data. Internally, only single terms are used, but within each stream they represent different entities: words, phrases, head+modifier concepts, proper names, etc.
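The stream organization described above can be sketched as several parallel inverted indexes, each built over a different representation of the same documents. The class and stream names below are assumptions for illustration; only the general structure (parallel single-term indexes per representation) comes from the form.

```python
from collections import defaultdict

class StreamIndex:
    """One inverted index over a single representation of the documents."""
    def __init__(self):
        self.postings = defaultdict(set)   # term -> set of doc ids

    def add(self, doc_id, terms):
        for t in terms:
            self.postings[t].add(doc_id)

    def lookup(self, term):
        # .get avoids creating empty postings for unseen terms
        return self.postings.get(term, set())

# Parallel streams over the same document collection:
streams = {
    "words": StreamIndex(),
    "phrases": StreamIndex(),
    "head+modifier": StreamIndex(),
}

streams["words"].add(1, ["joint", "venture", "company"])
streams["phrases"].add(1, ["joint_venture"])
streams["head+modifier"].add(1, ["venture|joint"])

# Each stream is searched independently; "joint_venture" is a single
# term inside the phrase stream even though it spans two words.
assert 1 in streams["phrases"].lookup("joint_venture")
```

Note how this matches the form's remark that "only single terms are used" internally: a phrase or head+modifier concept is just an ordinary term within its own stream.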
Section 3.2 Second Data Structure

Structure Type:
Type of other data structure used:
Brief description of method using other data structure:
Total storage used: Gb
Total computer time to build: hours
Automatic process:
Manual hours required: hours
Type of manual labor: NONE
Term positions used:
Only single terms used:
Concepts (vs. single terms) represented:
Type of representation:
Auxiliary files used:
Additional comments:
Section 3.3 Third Data Structure

Structure Type:
Type of other data structure used:
Brief description of method using other data structure:
Total storage used: Gb
Total computer time to build: hours
Automatic process:
Manual hours required: hours
Type of manual labor: NONE
Term positions used:
Only single terms used:
Concepts (vs. single terms) represented:
Type of representation:
Auxiliary files used:
Additional comments:
Section 4.0 Data Built from Sources Other than the Input Text

File type: NONE
Domain type: DOMAIN INDEPENDENT
Total Storage: Gb
Number of Concepts Represented: concepts
Type of representation: NONE
Automatic or Manual:
Type of Manual Labor used: NONE
Additional comments:

File is: NONE
Total Storage: Gb
Number of Concepts Represented: concepts
Type of representation: NONE
Additional comments:
Section 5.0 Computer Searching

Average computer time to search (per query): 4.0 CPU seconds
Times broken down by component(s): 1.0 sec/stream
Section 5.1 Searching Methods

Vector space model: YES
Probabilistic model:
Cluster searching:
N-gram matching:
Boolean matching:
Fuzzy logic:
Free text scanning:
Neural networks:
Conceptual graph matching:
Other:
Additional comments:
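The run reports using a vector space model with term-frequency, inverse-document-frequency, and document-length factors (Section 5.2). The exact weighting formula is not given in the form, so the tf * idf cosine variant below is an illustrative assumption showing the general family of model, with a hypothetical two-document collection.

```python
import math
from collections import Counter

docs = {
    "d1": ["joint", "venture", "company"],
    "d2": ["venture", "capital", "fund"],
}
N = len(docs)
# document frequency: in how many documents each term occurs
df = Counter(t for terms in docs.values() for t in set(terms))

def tfidf_vector(terms):
    """Weight each term by term frequency times a smoothed idf."""
    tf = Counter(terms)
    return {t: tf[t] * math.log(1 + N / df.get(t, N)) for t in tf}

def cosine(u, v):
    """Cosine similarity; dividing by vector norms normalizes for length."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = lambda w: math.sqrt(sum(x * x for x in w.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

query = tfidf_vector(["joint", "venture"])
scores = {d: cosine(query, tfidf_vector(terms)) for d, terms in docs.items()}
# d1 matches both query terms, so it should outrank d2:
assert scores["d1"] > scores["d2"]
```

The length normalization built into the cosine measure is one common way the "Document length: YES" factor of Section 5.2 enters a vector-space score.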
Section 5.2 Factors in Ranking

Term frequency: YES
Inverse document frequency: YES
Other term weights: NO
Semantic closeness: NO
Position in document: NO
Syntactic clues: NO
Proximity of terms: NO
Information theoretic weights: NO
Document length: YES
Percentage of query terms which match: NO
N-gram frequency: NO
Word specificity: NO
Word sense frequency: NO
Cluster distance: NO
Other: YES
Additional comments: Retrieval was performed in parallel by each stream, and the results were merged. The final document rank was affected by the
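The per-stream retrieval and merging described above can be sketched as a weighted combination of stream scores. The form does not specify how the merge was actually performed (the comment is truncated in the source), so the weighted-sum rule and the stream weights below are assumptions for illustration.

```python
from collections import defaultdict

def merge(stream_results, stream_weights):
    """Combine per-stream scores into one ranked list of doc ids.

    stream_results: stream name -> {doc_id: score}
    stream_weights: stream name -> weight (default 1.0)
    """
    combined = defaultdict(float)
    for stream, scores in stream_results.items():
        w = stream_weights.get(stream, 1.0)
        for doc, s in scores.items():
            combined[doc] += w * s
    return sorted(combined, key=combined.get, reverse=True)

# Hypothetical per-stream results for one query:
results = {
    "words":   {"d1": 0.8, "d2": 0.5},
    "phrases": {"d2": 0.9},
}
weights = {"words": 1.0, "phrases": 2.0}

ranking = merge(results, weights)
# d2 scores 0.5 + 2 * 0.9 = 2.3, beating d1's 0.8:
assert ranking == ["d2", "d1"]
```

Under this scheme a document found only by the word stream can still be outranked by one that also matches in a more precise stream such as phrases, which is one plausible way the final rank "was affected by" per-stream evidence.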
Section 6.0 Query Construction

Section 6.1 Automatically Built Queries for Ad-hoc Tasks

Topic fields used:
Average computer time to build query: CPU seconds
Term weighting (weights based on terms in topics):
Phrase extraction from topics:
Syntactic parsing of topics:
Word sense disambiguation:
Proper noun identification algorithm:
Tokenizer:
Expansion of queries using previously constructed data structures:
Automatic addition of:
Section 6.2 Manually Constructed Queries for Ad-hoc Tasks

Topic fields used:
Average time to build query: minutes
Type of query builder:
Tool used to build query:
Method used in initial query construction:
Total CPU time for all iterations: seconds
Clock time from initial construction of query to completion of final query: minutes
Average number of iterations:
Average number of documents examined per iteration:
Minimum number of iterations:
Maximum number of iterations:
The end of an iteration is determined by:
Automatic term reweighting from relevant documents:
Automatic query expansion from relevant documents:
Other automatic methods:
Manual methods used:
Disclaimer: Contents of this online document are not necessarily the official views of, nor endorsed by, the U.S. Government, the Department of Commerce, or NIST.