
[Article] Policing is not Pedagogy: On the Supposed Threat of ChatGPT

Started by Parasaurolophus, August 04, 2023, 01:51:23 PM


Parasaurolophus

Policing is not Pedagogy: On the Supposed Threat of ChatGPT

Okay, so it's a long blog post. But it's got a lot of substance, and is on a topic which I think we're all at least somewhat interested in. So... what do you think?


Some excerpts:

Quote
Let's be clear about the sole reason why people think that ChatGPT's powers will transform pedagogy: cheating will be easier. (I will focus on ChatGPT only here, although there are and will be other LLM systems that can perform—or outperform—the functions ChatGPT has). Students, rushing to complete assignments, will simply spend an hour or so refining their prompt to ChatGPT, and then ChatGPT will spit out a good enough essay. At that point, the student merely needs to massage it so that it better conforms to the instructor's expectations. It seems a lot easier than spending scores of hours reading articles and books, participating in class discussion, paying attention to lectures, and, finally, composing the essay. The charge many people make is that the average student will give in and use ChatGPT as much as possible in place of doing the required work.

In short, students will cheat their way to an A. Or, to put it more gently, they will cheat their way to a completed assignment. It follows that because they will use ChatGPT to cheat, students will not get as much out of school as they otherwise would have. Call this the cheating threat to learning.

The only solution, it has been suggested, is that we must force our students to be free to learn. The only available tactics, many seem to think, are either aggressively policing student essays or switching to in-class high stakes testing. On this view, we're supposed to be high-tech plagiarism cops armed with big exams.

But how much responsibility do teachers have to invigilate their students in order to prevent cheating? Not much, I think. And so we do not have a particularly robust responsibility to police our students' usage of ChatGPT. So we should reject the idea that we should be high-tech plagiarism cops. I have two arguments for this claim.

First, it is inappropriate for us to organize our assessments entirely around viewing our students as potential wrongdoers. We should commit, throughout our classes—and especially when it comes to points at which our students are especially vulnerable—to seeing them primarily as if they want to be part of the collective project of learning. Insofar as we structure assessments, which are an important part of the class, around preventing cheating, we necessarily become suspicious of our students, viewing them as opponents who must be threatened with sanctions to ensure good behavior or, barring that, outsmarted so that they cannot successfully break the rules. This both limits and corrupts the collective project of learning.



For my part, I think the comments are right to point out that the strategies cited really are kind of about AI-proofing assignments.

But leaving that aside... I already do most of the things Smith suggests. I adopted these strategies to combat a culture of cheating when I arrived here. And it worked so-so. Well enough for me to muddle on awhile, anyway, although the prevalence of cheating remained higher than I'd like. It's been increasing over time, though (doubtless because my classes are mostly online these days), and ChatGPT seems to have radically accelerated it. It gets used for discussion posts graded solely on their existence (as in: you pass if you post, fail if you don't). It gets used to respond to discussion posts. It gets used to make presentations. It gets used to generate reading questions. To answer quizzes (which can't be in-class in my case). It gets used for the scaffolded writing--though not for the peer feedback, which I abandoned after some disastrous experimentation when I started here.

On the other hand, I do think he's right to say that we should be thinking in terms of giving students opportunities to engage and apply themselves, and build a learning community. It's just that when they look around and see 90% of their peers cheating (seriously, those are the numbers we're contending with here) and doing okay or better than they are, that's a real problem.
I know it's a genus.

spork

Quote from: Parasaurolophus on August 04, 2023, 01:51:23 PM
[. . .]

seeing them primarily as if they want to be part of the collective project of learning.

[. . .]


I disagree with the premise. The proportion of undergrads at the schools I've worked at who have wanted to learn has declined steadily over time. Now it's a distinct minority.

Much of the brouhaha about ChatGPT comes from making the emperor's lack of clothing even more obvious. Students have been cheating on writing assignments for decades, if not centuries, yet until recently professors could assuage themselves with delusions like "What I teach is so meaningful and how I teach is so engaging that cheating would never ever even enter into the minds of my students."

Now there is no way to avoid acknowledging the fact that students never learned anything from those term papers assigned since the dawn of time by faculty who felt as though students were as eager to write in the passive voice as they themselves still were. The mirage of self-importance many professors lived in has disappeared.

The widespread use of ChatGPT by students has given me even more reason to disengage further from my job. I get paid to teach. If students want to pay $100K-$200K for a lifestyle experience that doesn't involve learning, I don't care. But ChatGPT has really hit my wife, an academic in a field that uses writing a lot. ChatGPT and her university's (lack of) response to it have called into question her professional raison d'être, if you'll pardon the French.
It's terrible writing, used to obfuscate the fact that the authors actually have nothing to say.

dismalist

The fundamental difficulty is that cheating pays. Clearly, the cheaters have learned little or nothing, but they still get certified. ChatGPT and related programs make cheating cheaper -- less chance of getting caught, less work making up answers -- so we'll get more of it.

In the end, if the cheating can't be limited, it will destroy the signal that higher education sends, making it worthless. Well, except that it would certify that one is good at using ChatGPT! :-)

What is needed to create genuine learning communities is to find a way to separate the learners from the signalers. How about low to no tuition in exchange for spartan living -- no football teams, no dining halls with nine serving stations, four students to a dorm room? One gets the idea. Of course, it would work only if tolerance for spartan living is positively correlated with the desire to learn. I expect it is, highly.
That's not even wrong!
--Wolfgang Pauli

Sun_Worshiper

The author makes some good points, but overall I don't agree with the spirit of this essay. We do, in fact, have to do what we can to prevent cheating. This is both to save students from their own laziness and to create an even playing field in the classroom.

That having been said, there is no way to prove that students are using ChatGPT and so it is a losing battle to focus on that. Better to teach students the strengths and weaknesses of LLMs and also to diversify the basket of assignments that are required of them so that it isn't all essays that they write at home and discussion board posts. For my part, I'll be increasing presentations and in-class writing assignments with a lockdown browser, while cutting out discussion boards (already a waste of time) and reducing the number of take home essay assignments.

We also have some obligation, imo, to prepare students for a world in which these tools are the norm. I've been using ChatGPT for all kinds of things and it is helpful for cutting down monotonous tasks or things I'd rather not put much thought into. I would encourage everyone who wants to be competitive to use it, while also being cognizant of its weaknesses.



Stockmann

If we don't at least try to keep the issue of cheating from becoming worse, that combined with grade inflation is sooner rather than later going to destroy the value of a degree. HE's current problems are a walk in the park compared to what will happen if it gets to that point. There are already plenty of students, parents, and politicians who resent HE, and in the US especially its price tag. If employers start deciding that degrees are just ChatGPT grading ChatGPT work and everyone gets an A anyway, then enrollment will collapse. This - not ChatGPT itself - is an existential threat to HE as we know it.

If we don't keep cheating somewhat under control, then folks who spent four years working retail full-time or whatever after HS are going to be better employees than graduates (unless the graduates also worked full-time), because at least they'll be used to putting effort in (not "Alexa, do my capstone project for me"), and they'll be better off financially because they spent years earning money instead of borrowing it. It's not just that there will be no way of distinguishing honestly earned degrees from degrees earned by ChatGPT; it's that there will be essentially no honestly earned degrees. Initially honest students will realize their classmates are doing as well or better by cheating, and at some point they'll stop putting up with being punished for honesty - if you can't beat them, join them. The only silver lining I'm seeing is that the first to go will be the worst institutions, I think.

Caracal

Quote from: spork on August 05, 2023, 10:38:36 AM
Quote from: Parasaurolophus on August 04, 2023, 01:51:23 PM
[. . .]

seeing them primarily as if they want to be part of the collective project of learning.

[. . .]


I disagree with the premise. The proportion of undergrads at the schools I've worked at who have wanted to learn has declined steadily over time. Now it's a distinct minority.

Much of the brouhaha about ChatGPT comes from making the emperor's lack of clothing even more obvious. Students have been cheating on writing assignments for decades, if not centuries, yet until recently professors could assuage themselves with delusions like "What I teach is so meaningful and how I teach is so engaging that cheating would never ever even enter into the minds of my students."

Now there is no way to avoid acknowledging the fact that students never learned anything from those term papers assigned since the dawn of time by faculty who felt as though students were as eager to write in the passive voice as they themselves still were. The mirage of self-importance many professors lived in has disappeared.

The widespread use of ChatGPT by students has given me even more reason to disengage further from my job. I get paid to teach. If students want to pay $100K-$200K for a lifestyle experience that doesn't involve learning, I don't care. But ChatGPT has really hit my wife, an academic in a field that uses writing a lot. ChatGPT and her university's (lack of) response to it have called into question her professional raison d'être, if you'll pardon the French.

I have to say, I think this is really confused. Are fewer and fewer students interested in learning? Or were they never interested in learning? The problem is that academic despair is itself a really old (and really boring) trope. It isn't actually about the students; it's about the professors.

I also think the basic model is flawed. There's this idea that the students are either engaged or not engaged. Engagement isn't a personality trait. It waxes and wanes with the class, the instructor, the week, the minute.

dlehman

Discussions about ChatGPT and cheating show a remarkable lack of imagination.  Simply require students to include ChatGPT interrogations as part of their answer, along with a critique/improvement to that answer, and the cheating disappears.  It isn't a bad assignment either.

artalot

Quote from: dlehman on August 11, 2023, 05:32:52 AM
Simply require students to include ChatGPT interrogations as part of their answer, along with a critique/improvement to that answer, and the cheating disappears.  It isn't a bad assignment either.

I do something similar, and it works pretty well. Students like it and they are generally fun to read. I'd rather get them familiar with the technology and its limitations than treat it like it doesn't exist.

the_geneticist

I have a colleague at another school that is taking the "if you use it, cite it" approach to ChatGPT.  Students need to put in the original prompt & put the reply from ChatGPT in quotes.  I think it's a great way to show them the strengths (has a lot of information; writing sounds nice) and limitations (not all the information is correct or relevant; the writing is boring).

Human brains are still way more quirky & complicated than AI.  We're not going to be replaced by a program that is the equivalent of a fancy auto-correct sentence generator.

apl68

Quote from: the_geneticist on August 16, 2023, 11:23:09 AM
I have a colleague at another school that is taking the "if you use it, cite it" approach to ChatGPT.  Students need to put in the original prompt & put the reply from ChatGPT in quotes.  I think it's a great way to show them the strengths (has a lot of information; writing sounds nice) and limitations (not all the information is correct or relevant; the writing is boring).

Human brains are still way more quirky & complicated than AI.  We're not going to be replaced by a program that is the equivalent of a fancy auto-correct sentence generator.

I read in the news today that ChatGPT's free service is eating Chegg's lunch, as cheating students turn to the former.  It couldn't have happened to some nicer people.
And you will cry out on that day because of the king you have chosen for yourselves, and the Lord will not hear you on that day.

mbelvadi

Our "teaching and learning" office recommends these two blog postings:
Assignment Makeovers in the AI Age: Reading Response Edition
https://derekbruff.org/?p=4083
and
Assignment Makeovers in the AI Age: Essay Edition
https://derekbruff.org/?p=4105

eigen

I'm taking the "cite it" approach this semester.

If students use AI (and they're free to), they need to include a short section before the references describing how it was used. That could include editing work they'd written, finding information, or a full draft from a prompt.

We're also spending some time discussing the increasing importance of analyzing outputs / information critically, including fabricated/wrong information from AI and other tools.

I'm increasingly hearing from companies that want students to be able to use these tools when they're out; might as well start now.
Quote from: Caracal
Actually reading posts before responding to them seems to be a problem for a number of people on here...

the_geneticist

Quote from: mbelvadi on August 22, 2023, 06:33:59 AM
Our "teaching and learning" office recommends these two blog postings:
Assignment Makeovers in the AI Age: Reading Response Edition
https://derekbruff.org/?p=4083
and
Assignment Makeovers in the AI Age: Essay Edition
https://derekbruff.org/?p=4105

I really like the idea of making the "reading prompt" questions more personal and open-ended, then using the student responses to guide the discussion for the day.  What you really want to build is the ability to think critically, reason, and share ideas.
And the responses from AI are honestly rather dull.

But I am feeling a LOT of sympathy for my colleagues in the humanities & arts.  It's going to be a rough few years while this sudden shift in technology changes what is considered acceptable use of AI in writing.

apl68

Quote from: the_geneticist on August 23, 2023, 10:45:23 AM
Quote from: mbelvadi on August 22, 2023, 06:33:59 AM
Our "teaching and learning" office recommends these two blog postings:
Assignment Makeovers in the AI Age: Reading Response Edition
https://derekbruff.org/?p=4083
and
Assignment Makeovers in the AI Age: Essay Edition
https://derekbruff.org/?p=4105

I really like the idea of making the "reading prompt" questions more personal and open-ended, then using the student responses to guide the discussion for the day.  What you really want to build is the ability to think critically, reason, and share ideas.
And the responses from AI are honestly rather dull.

But I am feeling a LOT of sympathy for my colleagues in the humanities & arts.  It's going to be a rough few years while this sudden shift in technology changes what is considered acceptable use of AI in writing.

You said it!  Reading and writing about what you read is fundamental to what humanities students do.  The work they produce at the elementary levels of their disciplines may be dull and poorly thought out, but producing it gives them the practice they need to eventually do better.  The ease with which they can now use these bots to fake the lower-level work creates a serious risk that they'll fail to derive any benefit from the practice their teachers were trying to give them.  It would be like a pitching practice where the pitchers were somehow secretly shooting balls out of a batting-practice "pitching" machine instead of throwing them themselves.

Caracal

Quote from: the_geneticist on August 23, 2023, 10:45:23 AM
Quote from: mbelvadi on August 22, 2023, 06:33:59 AM
Our "teaching and learning" office recommends these two blog postings:
Assignment Makeovers in the AI Age: Reading Response Edition
https://derekbruff.org/?p=4083
and
Assignment Makeovers in the AI Age: Essay Edition
https://derekbruff.org/?p=4105

I really like the idea of making the "reading prompt" questions more personal and open-ended, then using the student responses to guide the discussion for the day.  What you really want to build is the ability to think critically, reason, and share ideas.
And the responses from AI are honestly rather dull.

But I am feeling a LOT of sympathy for my colleagues in the humanities & arts.  It's going to be a rough few years while this sudden shift in technology changes what is considered acceptable use of AI in writing.

Really, we don't need the sympathy. It will be fine. The AI is not nearly as good as everyone seems to think it is, and the adjustments required to deal with it aren't that extreme. At least they aren't for me in history. Mostly I'm asking students to write on particular primary sources from particular databases. The AI can't do that, because it hasn't read the sources, and it can't copy anything from the internet, because nobody else has written on them.