Over the past two years, AI has made such strides in accuracy and realism that its capabilities have drawn closer and closer to those of humans. With these advances, the ethics of AI usage are being questioned, along with the role of AI in spaces like classrooms.
At the end of May, sophomore US History I classes received their graded civics paper assignment, along with a message that a large portion of the papers had been written with AI assistance. Although no statements were made about a total policy shift, the implication was that, on a teacher-to-teacher basis, some classes may move away from take-home essays based on the prevalence of AI in this year’s papers.
Although plagiarism has always been a concern for teachers, according to history teacher Christopher Connole, unsanctioned writing aids had never been used to quite the extent that they have been in the last two years. The issue has sparked conversations among many teachers in both the history and English departments, raising concerns about how appropriate AI usage should be taught and handled moving forward.
“Is my personal approach [to AI] going to change? Most likely. I think the conversation that is happening among the group of educators I’m talking to in the building is, first of all, what, as an educator, are we trying to teach? And then, how do we make sure that that is an authentic assessment [of a student’s ability]?” Connole said. “Allowing a student to type […] opens up the door to, ‘well, I can have multiple tabs open,’ then that’s defeating the purpose. So does that mean I can only allow handwritten material? And if I can only allow handwritten material, then how are you accessing the sources to write the handwritten material?”
The civics paper, though assigned differently in separate classes, is a summative paper that students are encouraged to plan and gather sources for throughout the year. The issue with strictly paper-based writing is that students would not be able to access those sources, most of which are online, while they write. This leaves teachers with a dilemma: weighing the risk of AI in at-home essays against the restricted environment of in-class, handwritten essays.
“When you have to make a change like this, you’re bound to lose something. So now, based on what I’ve seen, we’re going to be giving up some of that process to make sure that the work was authentic and it’s an actual student’s thinking,” English teacher Emily Coates said. “I want students to be able to think and articulate ideas, so I’m willing to give up some of the grammar stuff. Right now, I feel like I have to give that up, because the importance of authenticity is greater for me right now.”
Although AI has become very advanced, teachers can sometimes detect suspicious writing patterns or prose that diverges from a student’s usual voice. AI detectors such as Turnitin.com and the built-in teacher tools in Google Docs also exist for faculty use, but even with those, according to Coates, it can be difficult for teachers to confront a student about AI, or to deal with pushback when they do. As a result, students who use AI can easily slip by unpunished, making the move away from online and take-home essays far more appealing to teachers than facing those sorts of conflicts.
“The [style of] correcting [essays] now is, can they write? Can they grammatically put together a piece of paper? Did they plagiarize? Oh, wait a sec, I think that that doesn’t really sound like their voice because I’ve corrected other things by them. Now, how do I prove that it’s not their voice?” Connole said. “If I don’t really believe [its authenticity], I have to go through the revision history. I can watch and see if you’re cutting and pasting, making edits along the way, or flowing without any mistakes. But you can instruct AI to make mistakes on purpose. So how? You don’t want to accuse somebody without evidence? And the evidence is very hard to find because the AI detectors don’t always work.”