The University of Maryland participated in the English and Czech tasks. For English, one monolingual run using fields based on automatic transcription (the required condition) and one otherwise identical cross-language run using French queries were officially scored. Weighted use of alternative translations yielded an apparent, though not statistically significant, improvement over one-best translation, and statistical translation models trained on European Parliament proceedings were found to be poorly matched to this task. Three contrastive runs that indexed manually generated metadata from the English collection were also officially scored. Results for Czech were not informative in this first year of that task.