A Comparative Study on the Enhancement of Lexical Richness in Students’ English Argumentative Writing by Different Large Language Models
DOI: https://doi.org/10.54097/58h6df92

Keywords: Large Language Models; Lexical Richness; Argumentative Writing

Abstract
This study investigates the impact of different large language models (LLMs) on lexical richness in students' English argumentative writing. By comparing the outputs of LLMs such as GPT-3 and LaMDA, the research identifies the specific linguistic features and stylistic variations each model introduces. The study analyzes the extent to which LLMs enhance lexical density, lexical diversity, and lexical sophistication in student writing, while also considering authenticity, originality, and student learning. Quantitative and qualitative analyses are employed to assess lexical richness scores and to identify stylistic patterns. The findings offer insight into the benefits and drawbacks of using LLMs to enhance lexical richness in student writing, along with practical recommendations for educators on integrating these tools into writing instruction.
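The three constructs named in the abstract are conventionally operationalized as simple ratios over a tokenized text. The sketch below is not taken from the paper, whose exact formulas and tooling are not given here; it illustrates one common textbook approximation of each measure: type-token ratio for diversity, content-word ratio for density, and the share of words outside a high-frequency list for sophistication. The word lists are hypothetical stand-ins.

import re

# Tiny stand-in word lists, purely for illustration. A real analysis would
# use a POS tagger for density and a published frequency band list (e.g. a
# most-frequent-2,000-word-families list) for sophistication.
FUNCTION_WORDS = {
    "the", "a", "an", "and", "or", "but", "of", "to", "in", "on", "is",
    "are", "was", "were", "be", "it", "that", "this", "with", "for", "as",
    "can", "because",
}
HIGH_FREQUENCY_WORDS = FUNCTION_WORDS | {
    "students", "teachers", "people", "use", "good", "make", "think",
}

def tokenize(text: str) -> list[str]:
    """Lowercase word tokens; punctuation is discarded."""
    return re.findall(r"[a-z']+", text.lower())

def lexical_diversity(tokens: list[str]) -> float:
    """Type-token ratio: distinct word types over total tokens."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def lexical_density(tokens: list[str]) -> float:
    """Share of tokens that are (approximately) content words."""
    return sum(t not in FUNCTION_WORDS for t in tokens) / len(tokens) if tokens else 0.0

def lexical_sophistication(tokens: list[str]) -> float:
    """Share of tokens falling outside the high-frequency list."""
    return sum(t not in HIGH_FREQUENCY_WORDS for t in tokens) / len(tokens) if tokens else 0.0

if __name__ == "__main__":
    essay = ("Technology reshapes education because students can access "
             "diverse resources and teachers can personalize instruction.")
    toks = tokenize(essay)
    print(f"diversity (TTR): {lexical_diversity(toks):.2f}")
    print(f"density:         {lexical_density(toks):.2f}")
    print(f"sophistication:  {lexical_sophistication(toks):.2f}")

Note that raw type-token ratio falls as texts grow longer, so comparative studies typically prefer length-corrected indices such as MTLD or vocd-D.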
License
Copyright (c) 2025 International Journal of Education and Social Development

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.