SemEval-2021

The 15th International Workshop on Semantic Evaluation

Call for Task Proposals – SemEval-2021: International Workshop on Semantic Evaluation

UPDATE: SUBMISSION DEADLINE EXTENDED - PROPOSALS DUE APRIL 3, 2020 (see all dates below)

We invite proposals for tasks to be run as part of SemEval-2021. SemEval (the International Workshop on Semantic Evaluation) is an ongoing series of evaluations of computational semantics systems, organized under the umbrella of SIGLEX, the Special Interest Group on the Lexicon of the Association for Computational Linguistics.

The SemEval evaluations explore the nature of meaning in natural languages in practical terms, by providing a mechanism to identify problems (e.g., how to characterize meaning and what is necessary to compute it) and to explore the strengths of possible solutions by means of standardized evaluation on shared datasets. SemEval evaluations initially focused on identifying word senses computationally, but have since grown to investigate the interrelationships among elements in a sentence (e.g., semantic relations, semantic parsing, semantic role labeling), relations between sentences (e.g., coreference), and author attitudes (e.g., sentiment analysis), among other research directions.

For SemEval-2021, we welcome any task that can test an automatic system for semantic analysis of text, be it application-dependent or application-independent. We especially welcome tasks for different languages, cross-lingual tasks, tasks requiring semantic interpretation, and tasks with both intrinsic and application-based evaluation. See the websites of previous editions of SemEval to get an idea about the range of tasks explored, e.g., for SemEval-2019 and SemEval-2020.

We strongly encourage proposals based on pilot studies that have already generated initial data and that can provide concrete examples and foresee the challenges of preparing the full task. In the event of receiving many proposals, preference will be given to tasks that have already run a pilot study for the proposed task.

We especially welcome tasks that are devoted to developing novel applications of computational semantics. We will encourage tasks that have a clearly defined end-user application showcasing and enhancing our understanding of computational semantics, as well as extending the current state-of-the-art.

Task Selection

Task proposals will be reviewed by experts, and the reviews will serve as the basis for acceptance decisions. All else being equal, innovative new tasks will be given preference over task reruns. Task proposals will be evaluated on:

New Tasks vs. Task Reruns

We welcome both new tasks and task reruns. For a new task, the proposal should address whether the task would be able to attract participants. Preference will be given to novel tasks that have not yet received much attention.

For task reruns, the organizers should defend in their proposal the need for another iteration of their task. Valid reasons for a rerun include: the need for a new form of evaluation (e.g., a new metric to test new phenomena, a new application-oriented scenario, etc.), the need to test on new types of data (e.g., social media, domain-specific corpora), a significant expansion in scale over a previous trial run of the task, etc.

In the case of a rerun, we further discourage carrying over the same task and simply adding new subtasks, as this can lead to the accumulation of too many subtasks. Evaluating on a different dataset with the same task formulation typically should not be considered a separate subtask.

Tasks that have already run for three years will not normally be accepted for SemEval-2021. If, however, the organizers believe there is a need for another iteration of their task, they are welcome to submit a task rerun proposal for SemEval-2021. Solid justification for the rerun will be needed, highlighting its novel aspects compared to previous editions, with respect to the criteria discussed above.

Task Organization

We welcome people who have never organized a SemEval task before, as well as those who have. Apart from providing trial, training, and test data, task organizers are expected to:

Important Dates - Updated March 19, 2020

Tasks that fail to meet crucial deadlines, such as the dates for having the task and CodaLab website up and the dates for uploading trial, training, and test data, may be cancelled at the discretion of the SemEval organizers. While consideration will be given to extenuating circumstances, our goal is to provide sufficient time for the participants to develop strong and well-thought-out systems. Cancelled tasks will be encouraged to submit proposals for the subsequent year’s SemEval.

The SemEval-2021 Workshop will be co-located with a major NLP conference in 2021.

Submission Details

Task proposals should be self-contained documents of no more than 3 pages and should follow the ACL 2020 style guidelines. References do not count against the page limit. Submissions should use the official ACL 2020 style templates:

All submissions must be in PDF format.

Each proposal should contain the following:

Proposals will be reviewed by an independent group of area experts who may not be familiar with recent SemEval tasks; therefore, all proposals should be written in a self-explanatory manner and contain sufficient examples.

Submission site for task proposals

In case you are not sure whether a task is suitable for SemEval, please feel free to get in touch with the SemEval organizers at semeval-organizers@googlegroups.com to discuss your idea.

Chairs

Contact: semeval-organizers@googlegroups.com