
Defending Academic Integrity Against ChatGPT

Started by Rochallor, October 25, 2023, 09:48:54 AM


Kron3007

Quote from: fishbrains on December 30, 2023, 09:50:44 AM
Quote from: downer on December 29, 2023, 02:59:36 PM
The issue of proof and punishment is a pain in the neck. I am not going to spend a lot of time following elaborate college procedures and going to meetings about it. So it's partly about creating grading rubrics that will give major penalties to students whose work looks like it used AI, whether I can prove it or not. I'm still working on that.

For next semester, I plan on adding a reflection assignment that students complete after they hand in their essays but before I grade them where I take a paragraph or two from their essay and ask them to explain what a paragraph means, where they found particular sources, and what specific words mean. In-class. Handwritten.

This should be easy for students doing the work, very difficult/impossible for students relying on AI for everything, and non-time-consuming on my part (Turnitin does pretty well with its AI report). We'll see how it goes. This approach might only produce more students lying to my face. The punishment will depend on what my administration will support.

AI is doing a better job at not just making up sources, but it tends to use sources freshizzles would not normally access. I don't think my students are reading that 500-page book they might have eventually found on JSTOR to employ that three-sentence summary using technical terms they really don't know.

The big challenge I see is that we are not allowed to punish students for academic misconduct.  We are required to report the incident to the university, where it goes through a standard process and they determine guilt and punishment.  This makes sense, since it creates a central record for the student across courses and keeps punishments consistent.

The challenge is that we can't just assign a punishment based on suspicion, or apply degrees of penalty based on how serious we feel an infraction is.

The next issue is that many AI cases are based on gut feeling and hard to really prove.  It isn't like plagiarism, where you can clearly show the evidence.  I suspect if I submitted cases like this up the chain, most would just be dismissed anyway. AI is also continually improving, so the signs you are using this semester may no longer apply next time around. 

So, as I have been saying for a while, it seems better to shift the weight of assignments (more presentations, live writing, etc.) and embrace AI.  For example, one colleague has students use AI to generate an essay and then critique it.  Maybe students will just have the AI critique its own writing, but my take is that we simply don't have the time or ability to eliminate AI, so we need to accept that.

fishbrains

Not disagreeing, but I should have noted that I teach mostly freshman composition classes, so I can't move to non-essay assignments.

I also place a lot of emphasis on the scaffolding part of the essay process (topic proposal, outline, annotated bib, show me your first two paragraphs, etc.)--as in if I don't see these preliminary components, I won't accept the final paper.

All this, coupled with the reflective exercise after they submit the essay and their Turnitin report, will probably work for our admin in case of a challenge. We'll see.

I wish I could find a way to show people how much I love them, despite all my words and actions. ~ Maria Bamford

downer

The challenge of finding good assignments is especially acute when teaching asynchronous online courses.

I have taken to using AI to generate assessments of the likelihood that their work was generated by AI, and sharing the result with the students.
"When fascism comes to America, it will be wrapped in the flag and carrying a cross."—Sinclair Lewis

Thursday's_Child

Quote from: fishbrains on December 30, 2023, 09:50:44 AM
Quote from: downer on December 29, 2023, 02:59:36 PM
The issue of proof and punishment is a pain in the neck. I am not going to spend a lot of time following elaborate college procedures and going to meetings about it. So it's partly about creating grading rubrics that will give major penalties to students whose work looks like it used AI, whether I can prove it or not. I'm still working on that.

For next semester, I plan on adding a reflection assignment that students complete after they hand in their essays but before I grade them where I take a paragraph or two from their essay and ask them to explain what a paragraph means, where they found particular sources, and what specific words mean. In-class. Handwritten.

This should be easy for students doing the work, very difficult/impossible for students relying on AI for everything, and non-time-consuming on my part (Turnitin does pretty well with its AI report). We'll see how it goes. This approach might only produce more students lying to my face. The punishment will depend on what my administration will support.

AI is doing a better job at not just making up sources, but it tends to use sources freshizzles would not normally access. I don't think my students are reading that 500-page book they might have eventually found on JSTOR to employ that three-sentence summary using technical terms they really don't know.

One of the many things I've learned from these Fora (although this was probably from the old CHE one & I don't remember the original source) is one simple test that should easily identify those who didn't do their own writing:  make a copy (or print it out); white-out a major word or two from each sentence; make a clean copy of it; ask the supposed writer to fill in the blanks.  If they wrote it, there should be few errors.
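For essays submitted electronically, the blanking step of that test is easy to script. Below is a minimal sketch in Python; the `make_cloze` helper and its six-letter threshold for a "major" word are illustrative choices of mine, not anything prescribed in the thread.

```python
import random
import re

def make_cloze(text, seed=0):
    """Blank out one longer word per sentence to build a fill-in-the-blank check.

    Returns the blanked text plus the answer key, in sentence order.
    """
    rng = random.Random(seed)  # fixed seed so the same essay yields the same test
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    cloze, answers = [], []
    for sentence in sentences:
        words = re.findall(r"[A-Za-z']+", sentence)
        # Treat words of six or more letters as "major" words worth blanking.
        candidates = [w for w in words if len(w) >= 6]
        if not candidates:
            cloze.append(sentence)
            continue
        target = rng.choice(candidates)
        answers.append(target)
        # Same-length blank so the writer can gauge the missing word's length.
        cloze.append(re.sub(r'\b' + re.escape(target) + r'\b',
                            '_' * len(target), sentence, count=1))
    return ' '.join(cloze), answers
```

Printing the blanked text for the student and keeping the answer list for grading covers both halves of the test; whether a few misses indicate nerves or outsourcing is still the instructor's call.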

Kron3007

Quote from: Thursday's_Child on January 03, 2024, 06:58:56 AM
Quote from: fishbrains on December 30, 2023, 09:50:44 AM
Quote from: downer on December 29, 2023, 02:59:36 PM
The issue of proof and punishment is a pain in the neck. I am not going to spend a lot of time following elaborate college procedures and going to meetings about it. So it's partly about creating grading rubrics that will give major penalties to students whose work looks like it used AI, whether I can prove it or not. I'm still working on that.

For next semester, I plan on adding a reflection assignment that students complete after they hand in their essays but before I grade them where I take a paragraph or two from their essay and ask them to explain what a paragraph means, where they found particular sources, and what specific words mean. In-class. Handwritten.

This should be easy for students doing the work, very difficult/impossible for students relying on AI for everything, and non-time-consuming on my part (Turnitin does pretty well with its AI report). We'll see how it goes. This approach might only produce more students lying to my face. The punishment will depend on what my administration will support.

AI is doing a better job at not just making up sources, but it tends to use sources freshizzles would not normally access. I don't think my students are reading that 500-page book they might have eventually found on JSTOR to employ that three-sentence summary using technical terms they really don't know.

One of the many things I've learned from these Fora (although this was probably from the old CHE one & I don't remember the original source) is one simple test that should easily identify those who didn't do their own writing:  make a copy (or print it out); white-out a major word or two from each sentence; make a clean copy of it; ask the supposed writer to fill in the blanks.  If they wrote it, there should be few errors.

Sure, but if you suspect 20% of a larger class is using AI, is this really practical?  Personally, I don't have the time for that.  It also violates my university policy on the matter.

It is also not very definitive.  A clever student will simply claim they froze under the pressure of the accusation, which is a reasonable defense.  In the end, it would not be conclusive enough to penalize them on it (if you are even allowed to do that where you are).

So, I'm sure there are ways to sleuth it out, but they are often not practical or compliant with existing policy.

fishbrains

Yes, one thing I've learned from all of our discussions on plagiarism is that you have to forgive yourself if you know a student cheated/plagiarized but you can't prove it.

At some point you just have to say, "F*ck it all!", move on, and go outside and gnaw on a tree for a while.
I wish I could find a way to show people how much I love them, despite all my words and actions. ~ Maria Bamford

RatGuy

How's this for weird: I assigned an in-class prewriting exercise in which students provided their personal opinions on the reading (i.e., their emotional and psychological reactions, but no real argument). The situation depicted in the reading was ambiguous and nuanced, so I wanted students to reflect on their reactions before responding during discussion.

I'd say at least 5 students began their prewriting in the exact same way: "The issue of Basketweaving in The Great American Novel is a simple one. But Basketweaving means something different to every person." If I didn't know any better, I'd assume there was some AI usage here, especially since their "personal opinions" were all quite similar (and similarly vague). So maybe some students are beginning to replicate the weird language of ChatGPT in their own writing?

downer

Quote from: RatGuy on January 24, 2024, 08:27:49 AM
How's this for weird: I assigned an in-class prewriting exercise in which students provided their personal opinions on the reading (i.e., their emotional and psychological reactions, but no real argument). The situation depicted in the reading was ambiguous and nuanced, so I wanted students to reflect on their reactions before responding during discussion.

I'd say at least 5 students began their prewriting in the exact same way: "The issue of Basketweaving in The Great American Novel is a simple one. But Basketweaving means something different to every person." If I didn't know any better, I'd assume there was some AI usage here, especially since their "personal opinions" were all quite similar (and similarly vague). So maybe some students are beginning to replicate the weird language of ChatGPT in their own writing?

Now my grading rubric has a new extra category: don't sound like ChatGPT wrote your work. We will see how it goes this semester.
"When fascism comes to America, it will be wrapped in the flag and carrying a cross."—Sinclair Lewis

Parasaurolophus

Quote from: RatGuy on January 24, 2024, 08:27:49 AM
How's this for weird: I assigned an in-class prewriting exercise in which students provided their personal opinions on the reading (i.e., their emotional and psychological reactions, but no real argument). The situation depicted in the reading was ambiguous and nuanced, so I wanted students to reflect on their reactions before responding during discussion.

I'd say at least 5 students began their prewriting in the exact same way: "The issue of Basketweaving in The Great American Novel is a simple one. But Basketweaving means something different to every person." If I didn't know any better, I'd assume there was some AI usage here, especially since their "personal opinions" were all quite similar (and similarly vague). So maybe some students are beginning to replicate the weird language of ChatGPT in their own writing?

Could be. I've also encountered students who memorize GPT output so they can produce something in-class or orally (obviously harder to do if they don't have the prompt beforehand, but still).
I know it's a genus.

apl68

Quote from: RatGuy on January 24, 2024, 08:27:49 AM
How's this for weird: I assigned an in-class prewriting exercise in which students provided their personal opinions on the reading (i.e., their emotional and psychological reactions, but no real argument). The situation depicted in the reading was ambiguous and nuanced, so I wanted students to reflect on their reactions before responding during discussion.

I'd say at least 5 students began their prewriting in the exact same way: "The issue of Basketweaving in The Great American Novel is a simple one. But Basketweaving means something different to every person." If I didn't know any better, I'd assume there was some AI usage here, especially since their "personal opinions" were all quite similar (and similarly vague). So maybe some students are beginning to replicate the weird language of ChatGPT in their own writing?


This feeds into concerns that a tidal wave of AI-generated content is polluting the whole online ecosystem, including what AIs (and students) are trained on to produce more content.  It could be turning into a giant ouroboros of garbage in, garbage out, garbage all around.
For our light affliction, which is only for a moment, works for us a far greater and eternal weight of glory.  We look not at the things we can see, but at those we can't.  For the things we can see are temporary, but those we can't see are eternal.

the_geneticist

Quote from: RatGuy on January 24, 2024, 08:27:49 AM
How's this for weird: I assigned an in-class prewriting exercise in which students provided their personal opinions on the reading (i.e., their emotional and psychological reactions, but no real argument). The situation depicted in the reading was ambiguous and nuanced, so I wanted students to reflect on their reactions before responding during discussion.

I'd say at least 5 students began their prewriting in the exact same way: "The issue of Basketweaving in The Great American Novel is a simple one. But Basketweaving means something different to every person." If I didn't know any better, I'd assume there was some AI usage here, especially since their "personal opinions" were all quite similar (and similarly vague). So maybe some students are beginning to replicate the weird language of ChatGPT in their own writing?

Or they learned this as a standard (and honestly boring) way to start all of their essays.  Kind of like the stereotypical "Since the dawn of time, humans have looked to the [baskets] and pondered [about things]."

apl68

Quote from: the_geneticist on January 24, 2024, 01:17:37 PM
Quote from: RatGuy on January 24, 2024, 08:27:49 AM
How's this for weird: I assigned an in-class prewriting exercise in which students provided their personal opinions on the reading (i.e., their emotional and psychological reactions, but no real argument). The situation depicted in the reading was ambiguous and nuanced, so I wanted students to reflect on their reactions before responding during discussion.

I'd say at least 5 students began their prewriting in the exact same way: "The issue of Basketweaving in The Great American Novel is a simple one. But Basketweaving means something different to every person." If I didn't know any better, I'd assume there was some AI usage here, especially since their "personal opinions" were all quite similar (and similarly vague). So maybe some students are beginning to replicate the weird language of ChatGPT in their own writing?

Or they learned this as a standard (and honestly boring) way to start all of their essays.  Kind of like the stereotypical "Since the dawn of time, humans have looked to the [baskets] and pondered [about things]."

K-12 writing instruction--to the extent that there is such a thing in the first place--does seem to produce a lot of very stereotyped results.  Speaking of which, are Forumites still seeing the notorious five-paragraph essay?
For our light affliction, which is only for a moment, works for us a far greater and eternal weight of glory.  We look not at the things we can see, but at those we can't.  For the things we can see are temporary, but those we can't see are eternal.

Larimar

Quote from: apl68 on January 25, 2024, 07:36:22 AM
Quote from: the_geneticist on January 24, 2024, 01:17:37 PM
Quote from: RatGuy on January 24, 2024, 08:27:49 AM
How's this for weird: I assigned an in-class prewriting exercise in which students provided their personal opinions on the reading (i.e., their emotional and psychological reactions, but no real argument). The situation depicted in the reading was ambiguous and nuanced, so I wanted students to reflect on their reactions before responding during discussion.

I'd say at least 5 students began their prewriting in the exact same way: "The issue of Basketweaving in The Great American Novel is a simple one. But Basketweaving means something different to every person." If I didn't know any better, I'd assume there was some AI usage here, especially since their "personal opinions" were all quite similar (and similarly vague). So maybe some students are beginning to replicate the weird language of ChatGPT in their own writing?

Or they learned this as a standard (and honestly boring) way to start all of their essays.  Kind of like the stereotypical "Since the dawn of time, humans have looked to the [baskets] and pondered [about things]"

K-12 writing instruction--to the extent that there is such a thing in the first place--does seem to produce a lot of very stereotyped results.  Speaking of which, are Forumites still seeing the notorious five-paragraph essay?


Oh, yes. The students have no idea there's any other kind.


Larimar

poiuy

A former student asked me for a letter of recommendation to graduate school a few days ago.  She was not a great student, but people learn and grow, and I was willing to write her a simple, non-glowing but non-negative letter.

I asked her for the usual inputs: her resume, and her graduate school application statement. The tone and vocabulary of the graduate school statement were a dead giveaway, confirmed by zerogpt as 100% AI generated. :/

I am having second thoughts about writing her LOR, and am wondering how to tell her ....

Maybe I should ask chatgpt to draft an email to her?

dismalist

Quote from: poiuy on February 09, 2024, 04:56:31 PM
A former student asked me for a letter of recommendation to graduate school a few days ago.  She was not a great student, but people learn and grow, and I was willing to write her a simple, non-glowing but non-negative letter.

I asked her for the usual inputs: her resume, and her graduate school application statement. The tone and vocabulary of the graduate school statement were a dead giveaway, confirmed by zerogpt as 100% AI generated. :/

I am having second thoughts about writing her LOR, and am wondering how to tell her ....

Maybe I should ask chatgpt to draft an email to her?

No. Have ChatGPT write the LoR!
That's not even wrong!
--Wolfgang Pauli