
Can linguists distinguish between ChatGPT/AI and human writing?

Can linguists distinguish between ChatGPT/AI and human writing?: A study of research ethics and academic publishing. Methods in Applied Linguistics, ISSN: 2772-7661, Vol. 2, Issue 3, Page 100068. Received 3 June 2023, Revised 18 July 2023, Accepted 18 July 2023, Available online 7 August 2023, Version of Record 7 August 2023.

There has been considerable intrigue surrounding the use of AI chatbots powered by Large Language Models, such as ChatGPT, in research, educational contexts, and beyond. However, most studies have explored such tools' general capabilities and their applications for language teaching. The current study advances this discussion by examining issues of human judgement, accuracy, and research ethics. Specifically, we investigate: 1) the extent to which linguists/reviewers from top journals can distinguish AI-generated from human-generated writing, 2) what the bases of reviewers' decisions are, and 3) the extent to which editors of top Applied Linguistics journals believe AI tools are ethical for research purposes. In the study, reviewers (N = 72) completed a judgement task involving AI- and human-generated research abstracts, and several reviewers participated in follow-up interviews to explain their rationales. Similarly, editors (N = 27) completed a survey and interviews to discuss their beliefs. Findings suggest that despite employing multiple rationales to judge texts, reviewers were largely unsuccessful in identifying AI versus human writing, with an overall positive identification rate of only 38.9%. Additionally, many editors believed there are ethical uses of AI tools for facilitating research processes, though some disagreed. Future research directions involving AI tools and academic publishing are discussed.
