SemEval-2021

The 15th International Workshop on Semantic Evaluation

SemEval-2021: Best Task and Best System Paper Awards!

SemEval-2021 features two overall awards: one for the organizers of a task, and one for a team participating in a task.

We are very pleased to announce the winners of these awards for SemEval-2021!

Best Task Paper Award

The Best Task Paper award, for organizers of an individual shared task, recognizes a task that stands out for making an important intellectual contribution to empirical computational semantics, as demonstrated by a creative, interesting, and scientifically rigorous dataset and evaluation design, and a well-written task overview paper.

Best Task 2021: Task 8, MeasEval: Counts and Measurements

Corey Harper, Jessica Cox, Curt Kohler, Antony Scerri, Ron Daniel Jr., and Paul Groth

MeasEval is an original information extraction task focused on quantitative measurements in scientific text, with spans for the quantity, units, item measured, and other mentioned attributes, as well as relations between them. The task setup featured a carefully developed annotation schema, guidelines, and dataset, and an evaluation metric with score components for the various kinds of spans and relations. Nineteen teams participated, and baseline systems developed by the organizers were also evaluated. The task paper surveys the system papers and includes a strong analysis of results, with breakdowns by span/relation type and genre, a thoughtful investigation of possible artifacts of the evaluation metric, main conclusions, and ideas for future work.

Honorable Mention: Task 5, Toxic Spans Detection

John Pavlopoulos, Jeffrey Sorensen, Léo Laugier, and Ion Androutsopoulos

Honorable Mention: Task 10, Source-Free Domain Adaptation for Semantic Processing

Egoitz Laparra, Xin Su, Yiyun Zhao, Özlem Uzuner, Timothy Miller, and Steven Bethard

Best System Paper Award

The Best System Paper award, for task participants, recognizes a system description paper that advances our understanding of a problem and available solutions with respect to a task. It need not be the highest-scoring system in the task, but it must have a strong analysis component in the evaluation, as well as a clear and reproducible description of the problem, algorithms, and methodology.

Best Paper 2021: UIUC-BioNLP at SemEval-2021 Task 11: A Cascade of Neural Models for Structuring Scholarly NLP Contributions

Haoyang Liu, M. Janina Sarol, and Halil Kilicoglu

This paper addresses the task of extracting structured information from scholarly articles. The authors develop a sophisticated system that blends established ideas from information extraction with more recent neural approaches. The various engineering decisions are clearly motivated and discussed, with additional evaluations of the individual components. Limitations of both the dataset and the model are examined, suggesting directions for future work.

Honorable Mention: OCHADAI-KYOTO at SemEval-2021 Task 1: Enhancing Model Generalization and Robustness for Lexical Complexity Prediction

Yuki Taya, Lis Kanashiro Pereira, Fei Cheng, and Ichiro Kobayashi

Honorable Mention: TA-MAMC at SemEval-2021 Task 4: Task-adaptive Pretraining and Multi-head Attention for Abstract Meaning Reading Comprehension

Jing Zhang, Yimeng Zhuang, and Yinpei Su