New inventive ways of cheating

Started by Hegemony, March 18, 2023, 06:13:14 AM

Caracal

Quote from: fizzycist on May 02, 2023, 10:02:44 PM
Quote from: Sun_Worshiper on May 02, 2023, 05:35:37 PM
Quote from: Caracal on May 02, 2023, 11:23:22 AM
Quote from: Sun_Worshiper on May 01, 2023, 09:10:49 AM


So, in fact, it is not necessarily a bad tool for writing your first draft (although probably not great for history, specifically, since the facts it gives are often wrong*), if you come into it with an argument, writing capabilities of your own, and a willingness to fact check the content it provides.


The problem for a history essay isn't the facts, but the argument. Even if you told it what to argue, it wouldn't really argue it, it just sort of circles the drain, restating the claim again and again. I can see how it might do a better job with something like a policy strategy where there are clearer frameworks it can draw on.

We're going in circles here. It can't create an argument, but it can write an argument if you give it proper guidance. From there, you have to do editing/rewriting.

I'm not sure where you are going with this, subworshipper. Are you saying chatgpt is helpful for faculty to produce quality scholarship?

I dunno about your field, but in mine I probably type 50-100 new pages of scholarship per year in papers, proposals, thoughtful emails, etc. The vast majority of my time is spent thinking about what I've learned and want to say. A relatively small portion is actually typing. By the time I've figured out what I want to say, the typing is pretty fast. Maybe I could disrupt my flow by asking chatgpt a bunch of prompts to try and do the easy part for me even faster, but it's hardly an essential tool. For me, editing the text in subsequent drafts is very time-consuming, but I don't think chatgpt is particularly useful here--the reason it takes so long is because I am refining my thinking.

Anyway, I think chatgpt is useful for writing code in a language you're not great with, or for generating boilerplate text that is not going to be scrutinized much. And it's probably quite good at generating a B- essay in a lower-division undergrad class when you are strapped for time and/or haven't read the assigned texts.

Right, when I sit down to write something scholarly, I generally have an idea of the argument I want to make, but even when I think I have things pretty fleshed out, I usually find as I'm trying to write that my ideas need some work. When I see the actual sentences on the screen as I'm typing them, I'll realize that a claim I'm making looks like a stretch based on the evidence I've presented. Usually it's nothing disastrous: maybe I just need to explain the evidence in a different way to make a compelling case, or I'm using the wrong pieces of evidence, or not enough of them, or all of the above. Or it could be a bigger problem: maybe I need to modify the claim, or discard it if it isn't really crucial to the larger argument.

Whatever, but the point is that the writing is the thinking, so I can't just get AI to do it for me and end up with something useful.

Sun_Worshiper

Quote from: fizzycist on May 02, 2023, 10:02:44 PM
Quote from: Sun_Worshiper on May 02, 2023, 05:35:37 PM
Quote from: Caracal on May 02, 2023, 11:23:22 AM
Quote from: Sun_Worshiper on May 01, 2023, 09:10:49 AM


So, in fact, it is not necessarily a bad tool for writing your first draft (although probably not great for history, specifically, since the facts it gives are often wrong*), if you come into it with an argument, writing capabilities of your own, and a willingness to fact check the content it provides.


The problem for a history essay isn't the facts, but the argument. Even if you told it what to argue, it wouldn't really argue it, it just sort of circles the drain, restating the claim again and again. I can see how it might do a better job with something like a policy strategy where there are clearer frameworks it can draw on.

We're going in circles here. It can't create an argument, but it can write an argument if you give it proper guidance. From there, you have to do editing/rewriting.

I'm not sure where you are going with this, subworshipper. Are you saying chatgpt is helpful for faculty to produce quality scholarship?

I dunno about your field, but in mine I probably type 50-100 new pages of scholarship per year in papers, proposals, thoughtful emails, etc. The vast majority of my time is spent thinking about what I've learned and want to say. A relatively small portion is actually typing. By the time I've figured out what I want to say, the typing is pretty fast. Maybe I could disrupt my flow by asking chatgpt a bunch of prompts to try and do the easy part for me even faster, but it's hardly an essential tool. For me, editing the text in subsequent drafts is very time-consuming, but I don't think chatgpt is particularly useful here--the reason it takes so long is because I am refining my thinking.

Anyway, I think chatgpt is useful for writing code in a language you're not great with, or for generating boilerplate text that is not going to be scrutinized much. And it's probably quite good at generating a B- essay in a lower-division undergrad class when you are strapped for time and/or haven't read the assigned texts.

No, I'm talking about students using it to write first drafts of essays, including argumentative essays. It can write a decent first draft, which can then be turned into a nice essay with some extra work.

For people who do serious writing, particularly academic writing, it can stretch stances into paragraphs or create a table that summarizes a typology, but not much more than that at this point.

Sun_Worshiper

There is a post in the other thread where I talk about how I have used it, and it is very helpful in those ways. It is also good for writing emails and other busywork that many people struggle with, and it is certainly something that students can use to write decent (even quite good) essays.

At the end of the day, this technology is here and as faculty we are going to have to learn to deal with it. For me, that will mean teaching students its strengths and weaknesses, while also including some cheat-proof assignments, such as in-class exams on a lockdown browser.

Antiphon1

Should our admin refuse to address faculty concerns about AI, I plan to completely cease any automated grading. Back to the Stone Age: it's all in-person paper and oral exams, all the time. A little labor-intensive, but I'd rather hand-grade than waste time trying to outfox students who have no intention of attempting an honest effort.

downer

Quote from: Antiphon1 on May 08, 2023, 01:28:55 PM
Should our admin refuse to address faculty concerns about AI, I plan to completely cease any automated grading. Back to the Stone Age: it's all in-person paper and oral exams, all the time. A little labor-intensive, but I'd rather hand-grade than waste time trying to outfox students who have no intention of attempting an honest effort.

How many students do you have each semester?
"When fascism comes to America, it will be wrapped in the flag and carrying a cross."—Sinclair Lewis

Caracal

Quote from: Antiphon1 on May 08, 2023, 01:28:55 PM
Should our admin refuse to address faculty concerns about AI, I plan to completely cease any automated grading. Back to the Stone Age: it's all in-person paper and oral exams, all the time. A little labor-intensive, but I'd rather hand-grade than waste time trying to outfox students who have no intention of attempting an honest effort.

I suppose it depends on the subject, but for my classes there's really not much extra effort involved in grading paper essay exams. I grade them all directly on the CMS so I don't have to write down grades separately anywhere. It's a little less convenient because I have to haul around the blue books and take precautions to make sure I don't lose them, and the gods of academia do sometimes punish me for my own terrible handwriting by giving me students who also have dreadful handwriting, but it really isn't much extra work.

downer

Antiphon is saying they will replace automated grading with reading students' handwritten work. That is clearly a lot more work.
"When fascism comes to America, it will be wrapped in the flag and carrying a cross."—Sinclair Lewis

Caracal

Quote from: downer on May 09, 2023, 04:52:03 AM
Antiphon is saying they will replace automated grading with reading students' handwritten work. That is clearly a lot more work.

What does that have to do with chat though? If a take home exam can be graded automatically, I assume it's just about providing the correct answer. Is chat going to make cheating easier or harder to detect in some way if you're giving those sorts of exams as take homes?

Hibush

Quote from: Caracal on May 09, 2023, 07:52:15 AM
Quote from: downer on May 09, 2023, 04:52:03 AM
Antiphon is saying they will replace automated grading with reading students' handwritten work. That is clearly a lot more work.

What does that have to do with chat though? If a take home exam can be graded automatically, I assume it's just about providing the correct answer. Is chat going to make cheating easier or harder to detect in some way if you're giving those sorts of exams as take homes?

If the machine learning program has a big enough corpus of graded source material, I bet it could learn to distinguish papers that engage with the subject deeply. If the corpus were graded against a rubric, so there were multiple scores in each piece, the ML might be able to assign reasonably close scores to new material. On average it should do pretty well (e.g. the class mean), though the deviation from a human grader might be unacceptable on an individual paper level.

If you teach really large courses, it might be possible to test this out. You'd have to use the software on a private corpus of yours. That corpus and the rubric would be your IP.
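
Something like the following minimal sketch, if one were to try it in Python with scikit-learn. Every name, feature choice, and model below is an illustrative assumption on my part, not a description of any real grading product:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline

# essays: list of past student papers (the instructor's private corpus)
# rubric_scores: one row per essay, one column per rubric criterion,
# e.g. [thesis, evidence, organization, mechanics]
def train_rubric_grader(essays, rubric_scores):
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # surface-level text features
        MultiOutputRegressor(Ridge()),  # one regressor per rubric criterion
    )
    model.fit(essays, rubric_scores)
    return model

# predicted = train_rubric_grader(old_essays, old_scores).predict(new_essays)

As above, I'd expect the class mean to land close, but individual papers could deviate too far from a human grader to use the scores on their own.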

downer

Quote from: Caracal on May 09, 2023, 07:52:15 AM
Quote from: downer on May 09, 2023, 04:52:03 AM
Antiphon is saying they will replace automated grading with reading students' handwritten work. That is clearly a lot more work.

What does that have to do with chat though? If a take home exam can be graded automatically, I assume it's just about providing the correct answer. Is chat going to make cheating easier or harder to detect in some way if you're giving those sorts of exams as take homes?

That's a fair point. I was wondering how there could be automatic grading of written work that isn't just one- or two-word answers.

Is it quicker to grade work that is submitted via the LMS than it is to grade supposedly equivalent work in blue books? Maybe. But it also occurs to me that I would give rather different assignments for work to be submitted online than I would for work to be written in class in blue books.
"When fascism comes to America, it will be wrapped in the flag and carrying a cross."—Sinclair Lewis

Caracal

Quote from: Hibush on May 09, 2023, 08:12:02 AM
Quote from: Caracal on May 09, 2023, 07:52:15 AM
Quote from: downer on May 09, 2023, 04:52:03 AM
Antiphon is saying they will replace automated grading with reading students' handwritten work. That is clearly a lot more work.

What does that have to do with chat though? If a take home exam can be graded automatically, I assume it's just about providing the correct answer. Is chat going to make cheating easier or harder to detect in some way if you're giving those sorts of exams as take homes?

If the machine learning program has a big enough corpus of graded source material, I bet it could learn to distinguish papers that engage with the subject deeply. If the corpus were graded against a rubric, so there were multiple scores in each piece, the ML might be able to assign reasonably close scores to new material. On average it should do pretty well (e.g. the class mean), though the deviation from a human grader might be unacceptable on an individual paper level.

If you teach really large courses, it might be possible to test this out. You'd have to use the software on a private corpus of yours. That corpus and the rubric would be your IP.

I bet that I could read only the first paragraph of each essay, assign grades, and those grades would be pretty close to the ones I give after reading the entire essay. I still can't do that, however much I might be tempted when I start a really dreadful essay. Same thing for some sort of automated essay grading: if I'm going to make the students write the thing, I have to read it.

Antiphon1

Quote from: downer on May 08, 2023, 07:13:48 PM
Quote from: Antiphon1 on May 08, 2023, 01:28:55 PM
Should our admin refuse to address faculty concerns about AI, I plan to completely cease any automated grading.  Back to the Stone Age: it's all in person paper and oral exams all the time.  A little labor intensive, but I'd rather hand grade than waste time trying to outfox students who have no intention of attempting an honest effort.

How many students do you have each semester?

200-250 students per long semester. I do two in-class paper tests. It's a hard few days, but totally worth it when the students discover the error of their low-down ways. I'd rather bust the fool writing notes on their body parts than have to prove that a student videoed themselves pretending to take the test to get around video proctoring. My international students are on board; these students are used to written and oral exams.

AmLitHist

Quote from: Caracal on May 09, 2023, 10:49:47 AM
I bet that I could read only the first paragraph of each essay, assign grades, and those grades would be pretty close to the ones I give after reading the entire essay. I still can't do that, however much I might be tempted when I start a really dreadful essay. Same thing for some sort of automated essay grading: if I'm going to make the students write the thing, I have to read it.

I find this increasingly true with my students; I've periodically spot-checked it by reading just the first paragraph, jotting down a grade on scratch paper, then coming back later to read the whole paper. The grade is nearly always the same either way.

Similarly, I find in both my Comp I and Comp II classes that a student's grade on the first essay is nearly always where they end up at the end of the semester. There are some partial grade jumps (e.g., C- to C+) along the way, but it's been a long, long time since I've seen a student start out with a C and end up with a B for the class. When I've analyzed these, the jumps usually happen when a student just blows it on the first paper (usually by misinterpreting the instructions) and then settles into their more normal pattern for the rest of the class.

I'd like to say I see those epiphanies, the "catch fire and learn" situations where a student really applies themself and, you know, learns something, but even the ones who actually do learn quite a bit are the same ones who come in with decent skills and motivation to begin with. Of course, that's not surprising, given that our CC draws from some abysmal feeder schools that are only getting worse as time goes on.

downer

Quote from: Antiphon1 on May 10, 2023, 10:28:29 PM
Quote from: downer on May 08, 2023, 07:13:48 PM
Quote from: Antiphon1 on May 08, 2023, 01:28:55 PM
Should our admin refuse to address faculty concerns about AI, I plan to completely cease any automated grading.  Back to the Stone Age: it's all in person paper and oral exams all the time.  A little labor intensive, but I'd rather hand grade than waste time trying to outfox students who have no intention of attempting an honest effort.

How many students do you have each semester?

200-250 students per long semester. I do two in-class paper tests. It's a hard few days, but totally worth it when the students discover the error of their low-down ways. I'd rather bust the fool writing notes on their body parts than have to prove that a student videoed themselves pretending to take the test to get around video proctoring. My international students are on board; these students are used to written and oral exams.

That's still pretty efficient for such a large number of students. Do you give individual comments or use a rubric?
"When fascism comes to America, it will be wrapped in the flag and carrying a cross."—Sinclair Lewis

Antiphon1

Quote from: downer on May 11, 2023, 07:16:07 AM
Quote from: Antiphon1 on May 10, 2023, 10:28:29 PM
Quote from: downer on May 08, 2023, 07:13:48 PM
Quote from: Antiphon1 on May 08, 2023, 01:28:55 PM
Should our admin refuse to address faculty concerns about AI, I plan to completely cease any automated grading.  Back to the Stone Age: it's all in person paper and oral exams all the time.  A little labor intensive, but I'd rather hand grade than waste time trying to outfox students who have no intention of attempting an honest effort.

How many students do you have each semester?

200-250 students per long semester. I do two in-class paper tests. It's a hard few days, but totally worth it when the students discover the error of their low-down ways. I'd rather bust the fool writing notes on their body parts than have to prove that a student videoed themselves pretending to take the test to get around video proctoring. My international students are on board; these students are used to written and oral exams.

That's still pretty efficient for such a large number of students. Do you give individual comments or use a rubric?

Individual comments based on a rubric.