The 16th International Workshop on Semantic Evaluation

Quick links: SemEval-2022 tasks, SemEval-2022 home

This page has two sections:

  1. Setting up CodaLab competition websites (for task organizers)
  2. Participating in CodaLab competitions (for task participants)

Setting up a CodaLab competition website for SemEval-2022

Benefits of CodaLab:

Note that CodaLab Competitions is not the same as CodaLab Worksheets. Participants are not required to upload any code to CodaLab. In the vast majority of competitions, participants simply upload their output files, as in all past SemEvals.


Some resources that should help you in this process:

Important Notes:

Phases of the competition:

Consider setting up at least the following three phases for your competition:

  1. Practice phase:
    • Runs from now until roughly 10 Jan 2022 (official dates TBA)
    • Uses the official evaluation script, but on the trial data
    • Set maximum submissions to something high like 999
    • Make the leaderboard public
    • Allows participants to check their formatting
  2. Evaluation phase:
    • Runs from roughly 10 Jan until 31 Jan 2022 or some subset, if your competition is shorter (official dates TBA)
    • Uses the official evaluation script and the official test data
    • Set maximum submissions to a number no greater than 10. If you allow more than one, consider telling participants that only their final valid submission on CodaLab counts as the official submission; they can still describe contrastive runs in their system paper. If you accept more than one official submission per team, you will have to look for the other submissions in the 'Submissions' tab (the leaderboard shows only the latest valid submission).
    • Hide the leaderboard (leaderboard_management_mode: hide_results)
    • Determines the official leaderboard rankings for SemEval
    • At the end of the evaluation period, make a copy of the leaderboard and save it as backup in case the leaderboard gets updated (especially needed if you have not set up a post-evaluation phase)
  3. Post-Evaluation phase:
    • Runs from roughly 31 Jan 2022 onward (or earlier, if your evaluation phase is shorter than the maximum allowed time) (official dates TBA)
    • Uses the official evaluation script and the official test data
    • Enable “Auto migration” of submissions from Evaluation phase to this phase
    • Set maximum submissions to something high like 999
    • Make the leaderboard public
    • Allows participants to score “contrastive runs” that can be included as part of the analysis in system description papers. Also allows scoring of future systems interested in the task beyond SemEval-2022
    • At most one submission for each participant can be displayed on the leaderboard.

Participants must click the Submit to Leaderboard button underneath one of their submissions to display those results on the leaderboard. (Task organizers may override the participants using the SHOW setting on the Submissions page.)
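For organizers editing their competition bundle directly, the three phases above might be declared in the bundle's competition.yaml roughly as follows. This is a hedged sketch: the key names follow the CodaLab competition bundle format, and all dates and labels are placeholders to be replaced with your task's actual values.

```yaml
# Sketch of the phases section of a CodaLab competition.yaml
# (dates are placeholders; official SemEval-2022 dates TBA)
phases:
  1:
    phasenumber: 1
    label: "Practice"
    start_date: 2021-09-01
    max_submissions: 999          # high limit so participants can test formatting
  2:
    phasenumber: 2
    label: "Evaluation"
    start_date: 2022-01-10
    max_submissions: 10           # at most 10; see the note above on official submissions
    leaderboard_management_mode: hide_results   # hide the leaderboard during evaluation
  3:
    phasenumber: 3
    label: "Post-Evaluation"
    start_date: 2022-01-31
    max_submissions: 999
    auto_migration: True          # carry Evaluation-phase submissions into this phase
```

The same settings can also be changed after publication through the competition's web-based editor, so the bundle does not have to be final at upload time.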

Participating in a CodaLab competition:

  1. Create a CodaLab account. Sign in.
  2. Edit your profile appropriately. Make sure to add a team name, and enter names of team members. (Go to Settings, and look under Competition settings.)
  3. Proceed to task webpage on CodaLab. Read information on all the pages.
  4. Download data: training, development, and test (when released)
  5. Run your system on the data and generate a submission file, which must follow the official submission format outlined for your task. CodaLab does not place any restrictions on the name of the zip file.
  6. Make submissions on the development set (Phase 1).
    • Wait a few moments for the submission to execute.
    • Click on the Refresh Status button to check status.
    • Check to make sure submission is successful.
      • System will show status as Finished.
      • Click on Download evaluation output from scoring step to examine the result. If you choose to, you can submit the result to the leaderboard.
      • If unsuccessful, check error log, fix format issues (if any), resubmit updated zip.
  7. Once the evaluation period begins, you can make submissions for the test set. The procedure is similar to that on the dev set. These differences apply:
    • The leaderboard will be disabled until the end of the evaluation period.
    • You cannot see the results of your submission; they will be posted at a later date, after the evaluation period ends.
    • You can still see if your submission was successful or resulted in some error.
    • In case of error, you can view the error log.
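As a concrete illustration of step 5, a submission is typically a zip archive with the prediction file at the top level of the archive, which is where most CodaLab scoring programs look for it. The file name `answer.txt` and the tab-separated format below are placeholders only; use whatever file names and format your task's submission instructions specify.

```python
import zipfile

# Write predictions in the task's required format
# (placeholder name and content shown here).
with open("answer.txt", "w") as f:
    f.write("id_001\tlabel_A\n")
    f.write("id_002\tlabel_B\n")

# Put the prediction file at the archive root. The name of the
# zip file itself does not matter to CodaLab.
with zipfile.ZipFile("submission.zip", "w") as zf:
    zf.write("answer.txt")

# Sanity-check the archive contents before uploading.
print(zipfile.ZipFile("submission.zip").namelist())
```

A common cause of failed submissions is zipping a folder rather than the files themselves, which nests the prediction file one directory deep; listing the archive's contents as above catches that before you spend a submission attempt.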


Contact information for organizers of individual tasks is available in the list of tasks. General questions about SemEval organization should be directed to