SemEval-2021: Program
SemEval-2021 will be colocated with ACL 2021. All times shown are UTC.
Logistical notes
- Access to the event is through the Underline virtual platform. If you have registered for ACL, you should have received a "Welcome to ACL-IJCNLP 2021" email with login instructions.
- PLENARY SESSIONS: These will be held over Zoom, via the links provided in Underline (Thursday 5th, Friday 6th).
- PAPER TALKS: Oral paper plenary sessions will be for Q&A only. Attendees are advised to watch the prerecorded talks prior to the session.
- For Diyi Yang's invited talk on August 5, use the *SEM Zoom link instead of the SemEval one.
- POSTERS: Poster sessions will be held in GatherTown, via the link provided in Underline. To accommodate different time zones, each poster session is held three times. Emojis indicate when authors intend to be present at their poster.
- Note that the first poster session time slot starts Aug 5 02:00 UTC = evening of Aug 4 in the Americas.
- UPDATE: As of Aug 5 03:00 UTC, all the posters are in a single GatherTown room, in roughly numerical order by paper number. As of Aug 5 13:00 UTC, there is a GatherTown poster room for each group of tasks. If your poster is missing, please contact Underline (in the meantime, you should be able to present by sharing your screen). We appreciate your patience with glitches in the technical setup.
All SemEval papers can be found in the proceedings.
Thursday, August 5
Access via the *SEM Zoom link. Session chair: Lun-Wei Ku
Invited talk: Diyi Yang
Recently, natural language processing (NLP) has had increasing success and produced extensive industrial applications. Despite being sufficient to enable these applications, current NLP systems often ignore the social part of language, e.g., who says it, in what context, for what goals. In this talk, we take a closer look at social factors in language via a new theory taxonomy, and its interplay with computational methods via two lines of work. The first one studies what makes language persuasive by introducing a semi-supervised method to leverage hierarchical structures in text to recognize persuasion strategies in good-faith requests. The second part demonstrates how various structures in conversations can be utilized to generate better summaries for everyday interaction. We conclude by discussing several open-ended questions towards how to build socially aware language technologies, with the hope of getting closer to the goal of human-like language understanding.
Bio: Diyi Yang is an assistant professor in the School of Interactive Computing at Georgia Tech. She is broadly interested in Computational Social Science, and Natural Language Processing. Diyi received her PhD from the Language Technologies Institute at Carnegie Mellon University. Her work has been published at leading NLP/HCI conferences, and also resulted in multiple award nominations from EMNLP, ICWSM, SIGCHI and CSCW. She is named as a Forbes 30 under 30 in Science, a recipient of IEEE AI 10 to Watch, and has received faculty research awards from Amazon, Facebook, JPMorgan Chase, and Salesforce.
Session chair: Guy Emerson
- SemEval-2021 Task 1: Lexical Complexity Prediction, Matthew Shardlow, Richard Evans, Gustavo Henrique Paetzold and Marcos Zampieri
- OCHADAI-KYOTO at SemEval-2021 Task 1: Enhancing Model Generalization and Robustness for Lexical Complexity Prediction, Yuki Taya, Lis Kanashiro Pereira, Fei Cheng and Ichiro Kobayashi
- SemEval-2021 Task 2: Multilingual and Cross-lingual Word-in-Context Disambiguation (MCL-WiC), Federico Martelli, Najla Kalach, Gabriele Tola and Roberto Navigli
- SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning, Boyuan Zheng, Xiaoyu Yang, Yu-Ping Ruan, Zhenhua Ling, Quan Liu, Si Wei and Xiaodan Zhu
- TA-MAMC at SemEval-2021 Task 4: Task-adaptive Pretraining and Multi-head Attention for Abstract Meaning Reading Comprehension, Jing Zhang, Yimeng Zhuang and Yinpei Su
Session chair: Alexis Palmer
- SemEval-2021 Task 5: Toxic Spans Detection, John Pavlopoulos, Jeffrey Sorensen, Léo Laugier and Ion Androutsopoulos
- SemEval-2021 Task 6: Detection of Persuasion Techniques in Texts and Images, Dimitar Dimitrov, Bishr Bin Ali, Shaden Shaar, Firoj Alam, Fabrizio Silvestri, Hamed Firooz, Preslav Nakov and Giovanni Da San Martino
- Alpha at SemEval-2021 Task 6: Transformer Based Propaganda Classification, Zhida Feng, Jiji Tang, Jiaxiang Liu, Weichong Yin, Shikun Feng, Yu Sun and Li Chen
- SemEval 2021 Task 7: HaHackathon, Detecting and Rating Humor and Offense, J. A. Meaney, Steven Wilson, Luis Chiruzzo, Adam Lopez and Walid Magdy
Session chair: Alexis Palmer
- #56 LangResearchLab NC at SemEval-2021 Task 1: Linguistic Feature Based Modelling for Lexical Complexity, Raksha Agarwal and Niladri Chatterjee 🌻🌳
- #67 OCHADAI-KYOTO at SemEval-2021 Task 1: Enhancing Model Generalization and Robustness for Lexical Complexity Prediction, Yuki Taya, Lis Kanashiro Pereira, Fei Cheng and Ichiro Kobayashi 🌻
- #123 SINAI at SemEval-2021 Task 1: Complex word identification using Word-level features, Jenny Ortiz-Zambrano and Arturo Montejo-Ráez 🌞
- #150 TUDA-CCL at SemEval-2021 Task 1: Using Gradient-boosted Regression Tree Ensembles Trained on a Heterogeneous Feature Set for Predicting Lexical Complexity, Sebastian Gombert and Sabine Bartsch 🌳🌞
- #156 JCT at SemEval-2021 Task 1: Context-aware Representation for Lexical Complexity Prediction, Chaya Liebeskind, Otniel Elkayam and Shmuel Liebeskind 🌳🌞
- #158 cs60075_team2 at SemEval-2021 Task 1 : Lexical Complexity Prediction using Transformer-based Language Models pre-trained on various text corpora, Abhilash Nandy, Sayantan Adak, Tanurima Halder and Sai Mahesh 🌞
- #195 IAPUCP at SemEval-2021 Task 1: Stacking Fine-Tuned Transformers is Almost All You Need for Lexical Complexity Prediction, Kervy Rivas Rojas and Fernando Alva-Manchego
- #110 Uppsala NLP at SemEval-2021 Task 2: Multilingual Language Models for Fine-tuning and Feature Extraction in Word-in-Context Disambiguation, Huiling You, Xingran Zhu and Sara Stymne 🌞
- #113 SkoltechNLP at SemEval-2021 Task 2: Generating Cross-Lingual Training Data for the Word-in-Context Task, Anton Razzhigaev, Nikolay Arefyev and Alexander Panchenko
- #191 Zhestyatsky at SemEval-2021 Task 2: ReLU over Cosine Similarity for BERT Fine-tuning, Boris Zhestiankin and Maria Ponomareva 🌳
- #200 SzegedAI at SemEval-2021 Task 2: Zero-shot Approach for Multilingual and Cross-lingual Word-in-Context Disambiguation, Gábor Berend 🌳🌞
- #66 ReCAM@IITK at SemEval-2021 Task 4: BERT and ALBERT based Ensemble for Abstract Word Prediction, Abhishek Mittal and Ashutosh Modi 🌳
- #136 ECNU_ICA_1 at SemEval-2021 Task 4: Leveraging Knowledge-enhanced Graph Attention Networks for Reading Comprehension of Abstract Meaning, Pingsheng Liu, Linlin Wang, Qian Zhao, Hao Chen, Yuxi Feng, Xin Lin and Liang He
- #137 LRG at SemEval-2021 Task 4: Improving Reading Comprehension with Abstract Words using Augmentation, Linguistic Features and Voting, Abheesht Sharma, Harshit Pandey, Gunjan Chhablani, Yash Bhartia and Tirtharaj Dash 🌻🌳🌞
- #165 IIE-NLP-Eyas at SemEval-2021 Task 4: Enhancing PLM for ReCAM with Special Tokens, Re-Ranking, Siamese Encoders and Back Translation, Yuqiang Xie, Luxi Xing, Wei Peng and Yue Hu
- #192 TA-MAMC at SemEval-2021 Task 4: Task-adaptive Pretraining and Multi-head Attention for Abstract Meaning Reading Comprehension, Jing Zhang, Yimeng Zhuang and Yinpei Su 🌻🌞
- #207 NLP-IIS@UT at SemEval-2021 Task 4: Machine Reading Comprehension using the Long Document Transformer, Hossein Basafa, Sajad Movahedi, Ali Ebrahimi, Azadeh Shakery and Heshaam Faili 🌳
- #18 IITK@Detox at SemEval-2021 Task 5: Semi-Supervised Learning and Dice Loss for Toxic Spans Detection, Archit Bansal, Abhay Kaushik and Ashutosh Modi 🌳
- #37 UniParma at SemEval-2021 Task 5: Toxic Spans Detection Using CharacterBERT and Bag-of-Words Model, Akbar Karimi, Leonardo Rossi and Andrea Prati 🌞
- #42 UPB at SemEval-2021 Task 5: Virtual Adversarial Training for Toxic Spans Detection, Andrei Paraschiv, Dumitru-Clementin Cercel and Mihai Dascalu 🌳
- #45 NLRG at SemEval-2021 Task 5: Toxic Spans Detection Leveraging BERT-based Token Classification and Span Prediction Techniques, Gunjan Chhablani, Abheesht Sharma, Harshit Pandey, Yash Bhartia and Shan Suthaharan 🌻🌳🌞
- #85 UoB at SemEval-2021 Task 5: Extending Pre-Trained Language Models to Include Task and Domain-Specific Information for Toxic Span Prediction, Erik Yan and Harish Tayyar Madabushi 🌳
- #90 Cisco at SemEval-2021 Task 5: What's Toxic?: Leveraging Transformers for Multiple Toxic Span Extraction from Online Comments, Sreyan Ghosh and Sonal Kumar
- #175 MedAI at SemEval-2021 Task 5: Start-to-end Tagging Framework for Toxic Spans Detection, Zhen Wang, Hongjie Fan and Junfei Liu
- #210 HamiltonDinggg at SemEval-2021 Task 5: Investigating Toxic Span Detection using RoBERTa Pre-training, Huiyang Ding and David Jurgens 🌳
- #101 Alpha at SemEval-2021 Task 6: Transformer Based Propaganda Classification, Zhida Feng, Jiji Tang, Jiaxiang Liu, Weichong Yin, Shikun Feng, Yu Sun and Li Chen 🌞
- #157 WVOQ at SemEval-2021 Task 6: BART for Span Detection and Classification, Cees Roele 🌳🌞
- #55 HumorHunter at SemEval-2021 Task 7: Humor and Offense Recognition with Disentangled Attention, Yubo Xie, Junze Li and Pearl Pu 🌞
- #92 Grenzlinie at SemEval-2021 Task 7: Detecting and Rating Humor and Offense, Renyuan Liu and Xiaobing Zhou
- #118 abcbpc at SemEval-2021 Task 7: ERNIE-based Multi-task Model for Detecting and Rating Humor and Offense, Chao Pang, Xiaoran Fan, Weiyue Su, Xuyi Chen, Shuohuan Wang, Jiaxiang Liu, Xuan Ouyang, Shikun Feng and Yu Sun 🌻
- #129 Humor@IITK at SemEval-2021 Task 7: Large Language Models for Quantifying Humor and Offensiveness, Aishwarya Gupta, Avik Pal, Bholeshwar Khurana, Lakshay Tyagi and Ashutosh Modi 🌳
- #201 RoMa at SemEval-2021 Task 7: A Transformer-based Approach for Detecting and Rating Humor and Offense, Roberto Labadie, Mariano Jason Rodriguez, Reynier Ortega and Paolo Rosso 🌳
Friday, August 6
Session chair: Nathan Schneider
Invited talk: Hannah Rohde
This talk brings a psycholinguistic perspective to the questions of what makes a 'good' sentence for a speaker (or NLG system) to produce and what makes a 'good' inference about the world for a listener (or NLU system) to draw from the sentences they encounter. I consider the link between real-world predictability and text likelihood: do the things that speakers choose to say about the world provide a transparent mapping to how the world really is? This talk will introduce experimental evidence that comprehenders expect speakers to mention newsworthy content (namely content that is not highly predictable from world knowledge). For example, comprehenders who are asked to guess what a speaker is going to say next will infer from the mention of the word 'yellow' that the speaker is unlikely to be talking about something prototypically yellow (they anticipate that the speaker is talking about a shirt instead of a banana) and, more generally, they will guess that a sentence contains content that deviates from their real-world priors (they anticipate a description of a newsworthy situation with properties that are rare in the real world). Such findings have implications for the way we use text to infer meaningful facts about the world and the way we evaluate the felicity and sensibility of a text.
Bio: Hannah Rohde is a Reader in Linguistics & English Language at the University of Edinburgh. She works in experimental pragmatics, using psycholinguistic techniques to investigate questions in areas such as pronoun interpretation, referring expression generation, implicature, presupposition, deception, and the establishment of discourse coherence. Her undergraduate degree was in Computer Science and Linguistics from Brown University, followed by a PhD in Linguistics at the University of California San Diego and postdoctoral fellowships at Northwestern and Stanford. She has helped organise the EU-wide "TextLink: Structuring discourse in multilingual Europe" COST Action network and is a recipient of the Philip Leverhulme Prize in Languages and Literatures. Her dream is to one day experience in-person conferences again: to indulge in standing around in overcrowded corridors, talking to interesting people over conference coffee and biscuits!
Session chair: Natalie Schluter
- SemEval-2021 Task 8: MeasEval – Extracting Counts and Measurements and their Related Contexts, Corey Harper, Jessica Cox, Curt Kohler, Antony Scerri, Ron Daniel Jr. and Paul Groth
- SemEval-2021 Task 9: Fact Verification and Evidence Finding for Tabular Data in Scientific Documents (SEM-TAB-FACTS), Nancy X. R. Wang, Diwakar Mahajan, Marina Danilevsky and Sara Rosenthal
- BreakingBERT@IITK at SemEval-2021 Task 9: Statement Verification and Evidence Finding with Tables, Aditya Jindal, Ankur Gupta, Jaya Srivastava, Preeti Menghwani, Vijit Malik, Vishesh Kaushik and Ashutosh Modi
- SemEval-2021 Task 12: Learning with Disagreements, Alexandra Uma, Tommaso Fornaciari, Anca Dumitrache, Tristan Miller, Jon Chamberlain, Barbara Plank, Edwin Simpson and Massimo Poesio
Session chair: Natalie Schluter
- SemEval-2021 Task 10: Source-Free Domain Adaptation for Semantic Processing, Egoitz Laparra, Xin Su, Yiyun Zhao, Özlem Uzuner, Timothy Miller and Steven Bethard
- BLCUFIGHT at SemEval-2021 Task 10: Novel Unsupervised Frameworks For Source-Free Domain Adaptation, Weikang Wang, Yi Wu, Yixiang Liu and Pengyuan Liu
- SemEval-2021 Task 11: NLPContributionGraph - Structuring Scholarly NLP Contributions for a Research Knowledge Graph, Jennifer D'Souza, Sören Auer and Ted Pedersen
- UIUC_BioNLP at SemEval-2021 Task 11: A Cascade of Neural Models for Structuring Scholarly NLP Contributions, Haoyang Liu, M. Janina Sarol and Halil Kilicoglu
Session chair: Nathan Schneider
- #115 KGP at SemEval-2021 Task 8: Leveraging Multi-Staged Language Models for Extracting Measurements, their Attributes and Relations, Neel Karia, Ayush Kaushal and Faraaz Mallick 🌳🌞
- #154 DPR at SemEval-2021 Task 8: Dynamic Path Reasoning for Measurement Relation Extraction, Amir Pouran Ben Veyseh, Franck Dernoncourt and Thien Huu Nguyen 🌳
- #171 CLaC-np at SemEval-2021 Task 8: Dependency DGCNN, Nihatha Lathiff, Pavel Khloponin and Sabine Bergler 🌳🌞
- #174 CLaC-BP at SemEval-2021 Task 8: SciBERT Plus Rules for MeasEval, Benjamin Therien, Parsa Bagherzadeh and Sabine Bergler 🌳🌞
- #63 BreakingBERT@IITK at SemEval-2021 Task 9: Statement Verification and Evidence Finding with Tables, Aditya Jindal, Ankur Gupta, Jaya Srivastava, Preeti Menghwani, Vijit Malik, Vishesh Kaushik and Ashutosh Modi 🌻🌞
- #104 THiFly_Queens at SemEval-2021 Task 9: Two-stage Statement Verification with Adaptive Ensembling and Slot-based Operation, Yuxuan Zhou, Kaiyin Zhou, Xien Liu, Ji Wu and Xiaodan Zhu 🌻
- #112 TAPAS at SemEval-2021 Task 9: Reasoning over tables with intermediate pre-training, Thomas Müller, Julian Eisenschlos and Syrine Krichene 🌳
- #199 BOUN at SemEval-2021 Task 9: Text Augmentation Techniques for Fact Verification in Tabular Data, Abdullatif Köksal, Yusuf Yüksel, Bekir Yıldırım and Arzucan Özgür 🌳🌞
- #93 IITK at SemEval-2021 Task 10: Source-Free Unsupervised Domain Adaptation using Class Prototypes, Harshit Kumar, Jinang Shah, Nidhi Hegde, Priyanshu Gupta, Vaibhav Jindal and Ashutosh Modi 🌳
- #105 PTST-UoM at SemEval-2021 Task 10: Parsimonious Transfer for Sequence Tagging, Kemal Kurniawan, Lea Frermann, Philip Schulz and Trevor Cohn 🌻🌳
- #106 BLCUFIGHT at SemEval-2021 Task 10: Novel Unsupervised Frameworks For Source-Free Domain Adaptation, Weikang Wang, Yi Wu, Yixiang Liu and Pengyuan Liu
- #125 Self-Adapter at SemEval-2021 Task 10: Entropy-based Pseudo-Labeler for Source-free Domain Adaptation, Sangwon Yoon, Yanghoon Kim and Kyomin Jung 🌻🌳🌞
- #151 The University of Arizona at SemEval-2021 Task 10: Applying Self-training, Active Learning and Data Augmentation to Source-free Domain Adaptation, Xin Su, Yiyun Zhao and Steven Bethard 🌻🌞
- #49 KnowGraph@IITK at SemEval-2021 Task 11: Building Knowledge Graph for NLP Research, Shashank Shailabh, Sajal Chaurasia and Ashutosh Modi 🌻🌳🌞
- #95 YNU-HPCC at SemEval-2021 Task 11: Using a BERT Model to Extract Contributions from NLP Scholarly Articles, Xinge Ma, Jin Wang and Xuejie Zhang 🌻🌞
- #103 ITNLP at SemEval-2021 Task 11: Boosting BERT with Sampling and Adversarial Training for Knowledge Extraction, Genyu Zhang, Yu Su, Changhong He, Lei Lin, Chengjie Sun and Lili Shan 🌻
- #172 UIUC_BioNLP at SemEval-2021 Task 11: A Cascade of Neural Models for Structuring Scholarly NLP Contributions, Haoyang Liu, M. Janina Sarol and Halil Kilicoglu 🌳🌞
- #185 Duluth at SemEval-2021 Task 11: Applying DeBERTa to Contributing Sentence Selection and Dependency Parsing for Entity Extraction, Anna Martin and Ted Pedersen 🌳
- #197 INNOVATORS at SemEval-2021 Task 11: A Dependency Parsing and BERT-based model for Extracting Contribution Knowledge from Scientific Papers, Hardik Arora, Tirthankar Ghosal, Sandeep Kumar, Suraj Patwal and Phil Gooch 🌻🌞