
Unsupervised Document Classification Based on Transformer Models

Published in: Revue des Nouvelles Technologies de l'Information, Extraction et Gestion des Connaissances, RNTI-E-38

Authors: Mira Ait-Saada, François Role, Mohamed Nadif

Pre-trained Transformer-based word embeddings are now widely used in text mining, where they are known to significantly improve supervised tasks such as text classification, named entity recognition, and question answering. Since Transformer models produce a different embedding for the same input at each layer of their architecture, several studies have sought to identify which of these embeddings contribute most to the success of the above-mentioned tasks. In contrast, no comparable performance analysis has yet been carried out in the unsupervised setting. In this paper, we evaluate the effectiveness of Transformer models on the important task of text clustering. In particular, we present a clustering ensemble approach that harnesses all of the network's layers. Numerical experiments carried out on real datasets with different Transformer models show the effectiveness of the proposed method compared to several baselines.
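
For illustration, below is a minimal sketch of how such a layer-wise clustering ensemble could be assembled with Hugging Face transformers and scikit-learn. The mean-pooling of token embeddings, the placeholder model name bert-base-uncased, and the co-association (evidence accumulation) consensus step are all illustrative assumptions, not necessarily the exact method described in the paper.

import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.cluster import KMeans, AgglomerativeClustering

def layer_embeddings(texts, model_name="bert-base-uncased"):
    """Return one document-embedding matrix per Transformer layer.

    Each document is mean-pooled over its tokens (an illustrative choice).
    """
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
    model.eval()
    per_layer = None
    with torch.no_grad():
        for text in texts:
            inputs = tokenizer(text, return_tensors="pt", truncation=True)
            # hidden_states: tuple with the input embeddings plus one tensor per layer
            hidden_states = model(**inputs).hidden_states
            if per_layer is None:
                per_layer = [[] for _ in hidden_states]
            for i, h in enumerate(hidden_states):
                per_layer[i].append(h[0].mean(dim=0).numpy())
    return [np.vstack(layer) for layer in per_layer]

def ensemble_clustering(texts, n_clusters):
    """Cluster each layer's embeddings, then merge the partitions through a
    co-association matrix (evidence-accumulation consensus, an assumption here)."""
    layers = layer_embeddings(texts)
    n = len(texts)
    co_assoc = np.zeros((n, n))
    for X in layers:
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
        # Count how often each pair of documents lands in the same cluster
        co_assoc += (labels[:, None] == labels[None, :])
    co_assoc /= len(layers)
    # Consensus partition: treat 1 - co-association as a distance matrix
    # (sklearn >= 1.2 uses metric=; older versions use affinity=)
    return AgglomerativeClustering(
        n_clusters=n_clusters, metric="precomputed", linkage="average"
    ).fit_predict(1.0 - co_assoc)

A call such as ensemble_clustering(docs, n_clusters=4) would return one consensus label per document. Averaging cluster co-memberships across layers lets informative layers reinforce each other, without having to pick a single "best" layer in advance.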