Text clustering with large language model embeddings

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Text clustering is an important method for organising the increasing volume of digital content, aiding in the structuring and discovery of hidden patterns in uncategorised data. The effectiveness of text clustering largely depends on the selection of textual embeddings and clustering algorithms. This study argues that recent advancements in large language models (LLMs) have the potential to enhance this task. The research investigates how different textual embeddings, particularly those utilised in LLMs, and various clustering algorithms influence the clustering of text datasets. A series of experiments were conducted to evaluate the impact of embeddings on clustering results, the role of dimensionality reduction through summarisation, and the adjustment of model size. The findings indicate that LLM embeddings are superior at capturing subtleties in structured language. OpenAI's GPT-3.5 Turbo model yields better results in three out of five clustering metrics across most tested datasets. Most LLM embeddings show improvements in cluster purity and provide a more informative silhouette score, reflecting a refined structural understanding of text data compared to traditional methods. Among the more lightweight models, BERT demonstrates leading performance. Additionally, it was observed that increasing model dimensionality and employing summarisation techniques do not consistently enhance clustering efficiency, suggesting that these strategies require careful consideration for practical application. These results highlight a complex balance between the need for refined text representation and computational feasibility in text clustering applications. This study extends traditional text clustering frameworks by integrating embeddings from LLMs, offering improved methodologies and suggesting new avenues for future research in various types of textual analysis.
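The pipeline the abstract describes (embed texts, cluster the vectors, score the clusters on purity and related metrics) can be sketched minimally. The toy 2-D "embeddings", the small hand-rolled k-means, and the purity function below are illustrative stand-ins under assumed definitions, not the paper's actual models, datasets, or evaluation code.

```python
# Sketch of the embed -> cluster -> evaluate loop from the abstract.
# Assumption: documents are already mapped to fixed-length vectors
# (here, hard-coded toy 2-D points standing in for LLM embeddings).
import math
import random

def kmeans(vectors, k, iters=50, seed=0):
    """Plain Lloyd's k-means over a list of equal-length vectors."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)  # initialise from the data points
    labels = [0] * len(vectors)
    for _ in range(iters):
        # Assignment step: each vector joins its nearest centroid.
        labels = [min(range(k), key=lambda c: math.dist(v, centroids[c]))
                  for v in vectors]
        # Update step: each centroid moves to the mean of its members.
        for c in range(k):
            members = [v for v, lab in zip(vectors, labels) if lab == c]
            if members:
                centroids[c] = [sum(dim) / len(members)
                                for dim in zip(*members)]
    return labels

def purity(labels, truth):
    """Fraction of points whose cluster's majority true class matches."""
    correct = 0
    for c in set(labels):
        classes = [t for lab, t in zip(labels, truth) if lab == c]
        correct += max(classes.count(t) for t in set(classes))
    return correct / len(labels)

# Two well-separated groups of "document embeddings" with known classes.
embeddings = [[0.1, 0.2], [0.0, 0.1], [0.2, 0.0],
              [5.0, 5.1], [5.2, 4.9], [4.8, 5.0]]
truth = [0, 0, 0, 1, 1, 1]
labels = kmeans(embeddings, k=2)
print("purity:", purity(labels, truth))
```

In practice the embedding step would come from the model under comparison (a traditional encoder or an LLM embedding endpoint), and the scoring step would add the other metrics the study reports, such as the silhouette score.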

Original language: English
Pages (from-to): 100-108
Number of pages: 9
Journal: International Journal of Cognitive Computing in Engineering
Volume: 6
DOIs
Publication status: Published - Dec 2025

Bibliographical note

Publisher Copyright:
© 2024 The Authors

Funding

This work was financed by the Portuguese agency FCT (Fundação para a Ciência e a Tecnologia), in the framework of projects UIDB/04111/2020, UIDB/00066/2020, CEECINST/00002/2021/CP2788/CT0001 and CEECINST/00147/2018/CP1498/CT0015, as well as by the Instituto Lusófono de Investigação e Desenvolvimento (ILIND), Portugal, under project COFAC/ILIND/COPELABS/1/2022.

Funders and funder numbers:
Portuguese Agency FCT
Fundação para a Ciência e a Tecnologia: CEECINST/00147/2018/CP1498/CT0015, CEECINST/00002/2021/CP2788/CT0001, UIDB/04111/2020, UIDB/00066/2020
Instituto Lusófono de Investigação e Desenvolvimento: COFAC/ILIND/COPELABS/1/2022

Keywords

• Large language models
• Text clustering
• Text summarisation
