SemEval-2025

The 19th International Workshop on Semantic Evaluation

Call for Task Proposals

We invite proposals for tasks to be run as part of SemEval-2025. SemEval (the International Workshop on Semantic Evaluation) is an ongoing series of evaluations of computational semantics systems, organized under the umbrella of SIGLEX, the Special Interest Group on the Lexicon of the Association for Computational Linguistics.

SemEval tasks explore the nature of meaning in natural languages: how to characterize meaning and how to compute it. This is achieved in practical terms, using shared datasets and standardized evaluation metrics to quantify the strengths and weaknesses of possible solutions. SemEval tasks encompass a broad range of semantic topics from the lexical level to the discourse level, including word sense identification, semantic parsing, coreference resolution, and sentiment analysis, among others.

For SemEval-2025, we welcome tasks that can test an automatic system for the semantic analysis of text (e.g., intrinsic semantic evaluation, or an application-oriented evaluation). We especially encourage tasks for languages other than English, cross-lingual tasks, and tasks that develop novel applications of computational semantics. See the websites of previous editions of SemEval to get an idea of the range of tasks explored, e.g., SemEval-2020, SemEval-2021, SemEval-2023, and SemEval-2024.

We strongly encourage proposals based on pilot studies that have already generated initial data, evaluation measures, and baselines. In this way, we can avoid unforeseen challenges down the road that may delay the task. For reference, you may consult this task proposal as a sample.

If you are not sure whether a task is suitable for SemEval, please feel free to get in touch with the SemEval organizers at semevalorganizers@gmail.com to discuss your idea.

Task Selection

Task proposals will be reviewed by experts, and the reviews will serve as the basis for acceptance decisions. All else being equal, innovative new tasks will be given preference over task reruns. Task proposals will be evaluated on:

New Tasks vs. Task Reruns

We welcome both new tasks and task reruns. For a new task, the proposal should address whether the task would be able to attract participants. Preference will be given to novel tasks that have not received much attention yet.

For reruns of previous shared tasks (whether or not the previous task was part of SemEval), the proposal should address the need for another iteration of the task. Valid reasons include: a new form of evaluation (e.g. a new evaluation metric, a new application-oriented scenario), new genres or domains (e.g. social media, domain-specific corpora), or a significant expansion in scale. We further discourage carrying over a previous task and just adding new subtasks, as this can lead to the accumulation of too many subtasks. Evaluating on a different dataset with the same task formulation, or evaluating on the same dataset with a different evaluation metric, typically should not be considered a separate subtask.

Task Organization

We welcome people who have never organized a SemEval task before, as well as those who have. Apart from providing a dataset, task organizers are expected to:

Desk Rejects

Important dates

Preliminary timetable

Tasks that fail to keep up with crucial deadlines (such as the dates for having the task and CodaLab website up, and the dates for uploading sample, training, and evaluation data) or that diverge significantly from the proposal may be cancelled at the discretion of the SemEval organizers. While consideration will be given to extenuating circumstances, our goal is to provide sufficient time for participants to develop strong and well-thought-out systems. Cancelled tasks will be encouraged to submit proposals for the subsequent year's SemEval. To reduce the risk of tasks failing to meet the deadlines, we are unlikely to accept multiple tasks with overlapping organizers.

Submission Details

The task proposal should be a self-contained document of no longer than 3 pages (plus additional pages for references). All submissions must be in PDF format, following the ACL template.

Each proposal should contain the following:

Proposals will be reviewed by an independent group of area experts who may not be familiar with recent SemEval tasks; all proposals should therefore be written in a self-explanatory manner and contain sufficient examples.

The submission webpage is: SemEval2025 Task Proposal Submission

Chairs