TY - CONF
T1 - LLMs Still Can’t Avoid Instanceof: An Investigation Into GPT-3.5, GPT-4 and Bard’s Capacity to Handle Object-Oriented Programming Assignments
T2 - 46th International Conference on Software Engineering: Software Engineering Education and Training, ICSE-SEET 2024
AU - Cipriano, Bruno Pereira
AU - Alves, Pedro
PY - 2024/4/14
Y1 - 2024/4/14
AB - Large Language Models (LLMs) have emerged as promising tools to assist students while solving programming assignments. However, object-oriented programming (OOP), with its inherent complexity involving the identification of entities, relationships, and responsibilities, is not yet mastered by these tools. Contrary to introductory programming exercises, there exists a research gap with regard to the behavior of LLMs in OOP contexts. In this study, we experimented with three prominent LLMs - GPT-3.5, GPT-4, and Bard - to solve real-world OOP exercises used in educational settings, subsequently validating their solutions using an Automatic Assessment Tool (AAT). The findings revealed that while the models frequently achieved mostly working solutions to the exercises, they often overlooked the best practices of OOP. GPT-4 stood out as the most proficient, followed by GPT-3.5, with Bard trailing last. We advocate for a renewed emphasis on code quality when employing these models and explore the potential of pairing LLMs with AATs in pedagogical settings. In conclusion, while GPT-4 showcases promise, the deployment of these models in OOP education still mandates supervision.
KW - OOP best practices
KW - bard
KW - gpt-3.5
KW - gpt-4
KW - large language models
KW - object-oriented design
KW - object-oriented programming
KW - programming assignments
KW - teaching
UR - http://www.scopus.com/inward/record.url?scp=85195457150&partnerID=8YFLogxK
U2 - 10.1145/3639474.3640052
DO - 10.1145/3639474.3640052
M3 - Conference contribution
AN - SCOPUS:85195457150
T3 - Proceedings of the 46th International Conference on Software Engineering: Software Engineering Education and Training
SP - 162
EP - 169
BT - Proceedings of the 46th International Conference on Software Engineering: Software Engineering Education and Training, ICSE-SEET 2024
PB - Association for Computing Machinery
Y2 - 14 April 2024 through 20 April 2024
ER -