Employing Contrastive Strategies for Multi-label Textual Emotion Recognition
Abstract
Textual emotion recognition is an important part of human-computer interaction. Current methods mainly fine-tune large-scale pre-trained models; however, the sentence representations these methods produce are not accurate enough. Contrastive learning has been shown to improve the distribution of vector representations in the feature space. We therefore introduce contrastive strategies into the textual emotion recognition task and propose two approaches: applying self-supervised contrastive learning before fine-tuning the pre-trained model, and applying contrastive training on the same inputs during fine-tuning. We experiment on two multi-label emotion classification datasets, Ren-CECps and NLPCC2018. The experimental results demonstrate that the latter contrastive approach effectively improves the accuracy of emotion recognition.
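To make the second strategy concrete, the sketch below shows one common way to realize contrastive training on the same inputs: encode each batch twice with dropout active so the two encodings of a sentence form a positive pair, and treat all other in-batch encodings as negatives (a SimCSE-style reading of the abstract). The encoder, the temperature value, and the loss weight `lambda_cl` are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of in-batch contrastive training on the same inputs.
# Assumptions (not from the paper): a SimCSE-style setup where dropout
# produces the two views, temperature 0.05, and an additive loss weight.
import torch
import torch.nn.functional as F

def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                     temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE loss between two views z1, z2 of shape (batch, dim)."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    # Cosine-similarity logits between every cross-view pair.
    logits = z1 @ z2.t() / temperature            # (batch, batch)
    # Row i's positive is column i: the same sentence under the other view.
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

# Usage sketch: encode the same batch twice in train mode so that dropout
# makes the two views differ, then add the contrastive term to the
# multi-label classification loss.
# z1, z2 = encoder(batch), encoder(batch)   # e.g., a BERT-style encoder
# loss = multilabel_bce_loss + lambda_cl * contrastive_loss(z1, z2)
```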