A whole new ballgame in cheating. Introducing ChatGPT

Started by Diogenes, December 08, 2022, 02:48:37 PM


Thursday's_Child

Quote from: apl68 on December 28, 2022, 06:30:38 AM

So, it won't help them to cheat their way to better grades, it will merely enable them to flunk or get a bare pass without even the minimal effort they had been making?  It's still concerning to think that students may be able to use this tool to scrape a pass.  Especially in their elementary comp classes.  If they had to do some actual work to pass, there's a chance they'd learn something, in spite of themselves.

[snark] But they'd probably flunk a bunch of classes if this happened, and that won't be good for retention & progression!  Admin will never allow this. [/snark]  Can you tell that I recently had a graduating senior tell me about an amazing discovery she'd made: that you do better in class if you take notes on the verbal portion of the lecture instead of busily copying down the PPT?  This is b/c the PPT is actually available on the CMS, so you can get to that info later.  Got to love the lightbulb moments these days....

Caracal

Quote from: Sun_Worshiper on December 28, 2022, 08:45:13 AM
I have to respectfully disagree with Caracal and anyone else who thinks this is not a serious challenge to academic integrity on several levels. The technology, in its current form, can easily write an essay that is just as good as those written by most students, and the technology underlying this will improve exponentially in the coming years. Soon enough it won't just be students using AI to write papers that they pass off as their own, but also their professors, not to mention writers outside of academia.

As for new tools to detect it, we will soon have those as well.

I really don't think it can write an essay as good as most students'. I'm looking at something it wrote right now in response to another of my exam questions. This one is better than the first. However:

Things it can do: Write in a natural way. Hit on some of the major points around some issue.

Things it doesn't really seem to be able to do:
1. Use evidence. Even when I asked it to provide examples, it just restated the points. No actual examples of any sort, nothing beyond generalities.
2. Actually develop an argument. It just makes claims; it doesn't really provide explanations. Some of the things it claims don't make much sense to me, and they are just hanging out there without evidence.

And this is an in-class essay, which is much easier to write. Part of the reason I don't do take-home essays is that I don't want students to grab some word soup from the internet. This is basically just a jazzed-up and harder-to-detect version of that.

I actually don't see how you could use this to write any out-of-class essay I assign for upper-level classes. I tried putting in a few queries, and it basically just gives you things that sound like encyclopedia entries. I don't assign out-of-class essays that aren't based on research and primary sources. I'm asking students to make an argument about those sources using detailed evidence, and ChatGPT pretty clearly can't do that in a way that would be credible.

I think you could use it to write a fair amount of a mediocre paper for you in one of my classes if you combined it with some of your own writing where you actually did talk about these sources. If you stuck quotes in at the right places from the appropriate sources, it could be ok. I suppose if you had a clear outline, you might be able to get the thing to write you paragraphs that came closer to developing a point if you put them together. But, if you're going to do all of that, you might as well write your own half-decent paper.

I'm sort of perplexed by the claim that pretty soon everyone is going to be using AI to write things. It strikes me as a fundamental misunderstanding of what writing is. The hard part about writing is not producing vaguely cogent text; if that's all you had to do, it would be easy enough. The hard part is to actually figure out what you want to say and say it. The reason first drafts are often a mess on the sentence level is that the writer is showing the strain of trying to express ideas.

Any dingus can write some basically meaningless text, and lots of dinguses do. If some of them use AI bots to do that, I'm not sure how much it will matter.

dismalist

Quote
It strikes me as a fundamental misunderstanding of what writing is.

It's also a fundamental misunderstanding of what AI is! AI is data compression, turning an overwhelming myriad of facts into regularities.
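A toy illustration of that point (this is not a claim about ChatGPT's internals, just a sketch of what "turning facts into regularities" cashes out to): text full of regularities compresses far better than text without them, and a general-purpose compressor like zlib makes that measurable.

```python
import random
import zlib

# Highly regular text: the same sentence repeated many times.
regular = ("The cat sat on the mat. " * 40).encode()

# Irregular text of the same length: pseudo-random printable bytes.
random.seed(0)
irregular = bytes(random.randrange(32, 127) for _ in range(len(regular)))

# Compression ratio: compressed size over original size (smaller = more regular).
ratio_regular = len(zlib.compress(regular)) / len(regular)
ratio_irregular = len(zlib.compress(irregular)) / len(irregular)

print(f"regular text:   {ratio_regular:.2f}")
print(f"irregular text: {ratio_irregular:.2f}")
```

The repeated sentence shrinks to a small fraction of its size, while the random bytes barely compress at all: the compressor has found the regularity, which is the sense in which compression and "learning the pattern" are two sides of the same coin.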

I don't think AI is anywhere near Galileo at the moment. In any case, its results would still have to be tested. There will be many results.  Who has the money to test them all?

And this isn't even writing!
That's not even wrong!
--Wolfgang Pauli

onthefringe

Ok, I finally gave it an email and a phone number so I could play with it. Here are my first observations:

As several people have noted, given a vague, open-ended prompt it produces grammatically correct C-level work that restates the question, vagues around for a while, and then wends its way around to "in conclusion."

Given a sufficiently specific prompt that relies on someone having (for example) read a specific paper it either retreats further into vague restatements or lies through its teeth (but confidently and occasionally in the realm of possibility). For example, when asked how a specific paper (I gave the title) used a mouse model to test something it gave me a perfectly plausible (and scientifically reasonable) answer that could have been used as an approach to test the hypothesis. It just wasn't the way the paper in question did it.

I asked who wrote the paper in question, and it gave me a list of people, none of whom were authors. I said I thought it was wrong and gave it the list of authors (as cited, with initials), and it apologized and said the authors were that list of people. But weirdly, it converted their initials to full names and got them right (so if I said one of the authors was J.R. Billings, it correctly stated that one of the authors was James R. Billings).

Asked for a list of papers I could read for more information, it gave me completely made-up citations with plausible titles, authors, and journals, and fake PMCID numbers.

So check sources in any essays you get next year!
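One practical way to act on that advice: PubMed's public E-utilities API lets you check whether a cited title exists at all. A minimal sketch in Python, assuming only the standard library; it just constructs the ESearch query URL (fetching it and reading the JSON hit count is the obvious next step, and the example title here is made up):

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(title: str) -> str:
    """Build an NCBI E-utilities ESearch URL that looks a citation's
    title up in PubMed. Fetching the URL returns JSON whose
    esearchresult.count is "0" for a title PubMed has never seen,
    which is a quick red flag for a fabricated citation."""
    params = {
        "db": "pubmed",            # search the PubMed database
        "term": f"{title}[Title]", # restrict the match to the title field
        "retmode": "json",
    }
    return f"{EUTILS}?{urlencode(params)}"

# Hypothetical title, for illustration only.
url = pubmed_search_url("A mouse model of something plausible")
print(url)
```

This won't catch a real citation misattributed to the wrong claim, but it catches the wholesale fabrications described above in seconds.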

Zeus Bird

Yet another reason for me to ask our chair for in-person classes where I assign significant amounts of in-class work.

Caracal

Quote from: onthefringe on December 28, 2022, 02:58:10 PM
Ok, I finally gave it an email and a phone number so I could play with it. Here are my first observations:

As several people have noted, given a vague, open-ended prompt it produces grammatically correct C-level work that restates the question, vagues around for a while, and then wends its way around to "in conclusion."

Given a sufficiently specific prompt that relies on someone having (for example) read a specific paper it either retreats further into vague restatements or lies through its teeth (but confidently and occasionally in the realm of possibility). For example, when asked how a specific paper (I gave the title) used a mouse model to test something it gave me a perfectly plausible (and scientifically reasonable) answer that could have been used as an approach to test the hypothesis. It just wasn't the way the paper in question did it.

I asked who wrote the paper in question, and it gave me a list of people, none of whom were authors. I said I thought it was wrong and gave it the list of authors (as cited, with initials), and it apologized and said the authors were that list of people. But weirdly, it converted their initials to full names and got them right (so if I said one of the authors was J.R. Billings, it correctly stated that one of the authors was James R. Billings).

Asked for a list of papers I could read for more information, it gave me completely made-up citations with plausible titles, authors, and journals, and fake PMCID numbers.

So check sources in any essays you get next year!

Yeah, I'm not really worried at this point about the viability of essays. It has made me think that I probably need to stop giving mercy Cs to word-soup papers that don't actually meet the requirements and just give more failing grades, perhaps with options to rewrite.

aprof

Other than the obvious fact that these tools will only improve (and probably quite rapidly, given that this capability didn't exist at all four or five years ago), I think one false assumption many here are making is that students can't use these machine-learning tools more adeptly than a novice user (such as most posters on this forum).  As ML-generated art has already demonstrated, there is some level of know-how required to get useful results.  With appropriate coaching, you can tell the program to edit and revise portions of the text without necessarily understanding the underlying ideas being written about.

Also, depending on the accessibility of tools that become available next, models can be trained in more focused ways - to write more like a scholarly article or more like a LinkedIn post, depending on what an individual is looking to achieve. 

I'm not sure why people are saying that achieving C-level work in 5 minutes is not a concern? Presumably, a student would need to put in considerably more effort to achieve this without ChatGPT.  What if they put 30 minutes or 2 hours into it?  Perhaps they can cobble together something approaching A-level, even with current free tools.

I do think this completely changes the way that out-of-class essays will need to be evaluated.

the_geneticist

Look, another way to stop this sort of cheating from being useful is to have scaffolded assignments and drafts.  I very much doubt that there is AI that can turn an outline into an essay AND include comment-by-comment feedback to address how they fixed errors/expanded ideas/etc.

Or just have in-class writing assignments.

Puget

Quote from: the_geneticist on January 03, 2023, 01:32:11 PM
Look, another way to stop this sort of cheating from being useful is to have scaffolded assignments and drafts.  I very much doubt that there is AI that can turn an outline into an essay AND include comment-by-comment feedback to address how they fixed errors/expanded ideas/etc.

Or just have in-class writing assignments.

My paper assignments are already scaffolded because it is just more effective pedagogy anyway. Plus, there are already very effective AI detectors that folks have made available, which I'm sure will also be incorporated into TurnItIn soon. In the meantime, anything with citations is easily caught because ChatGPT just blatantly makes up fake citations (complete with fake PubMed links that don't work). So I'm really not worried about paper assignments.

I think it is more take-home exams with short essay (few paragraph, factual) type questions that are the real problem. I've already gone back to in class paper exams for my large class, and may have to do so for my seminar as well.
"Never get separated from your lunch. Never get separated from your friends. Never climb up anything you can't climb down."
–Best Colorado Peak Hikes

marshwiggle

Quote from: aprof on January 03, 2023, 01:25:06 PM

I'm not sure why people are saying that achieving C-level work in 5 minutes is not a concern? Presumably, a student would need to put in considerably more effort to achieve this without ChatGPT.  What if they put 30 minutes or 2 hours into it?  Perhaps they can cobble together something approaching A-level, even with current free tools.

We've had calculators for about 50 years. Arithmetic is still a thing. We've had spreadsheets for about 40 years, and computer algebra systems for 30+ years. Algebra and calculus are still "things".

While the kinds of things that people do by hand vs. with tools have changed, the need to learn the underlying process hasn't.

The same thing will no doubt happen with new tools like this.
It takes so little to be above average.

aprof

Quote from: marshwiggle on January 03, 2023, 04:12:36 PM
Quote from: aprof on January 03, 2023, 01:25:06 PM

I'm not sure why people are saying that achieving C-level work in 5 minutes is not a concern? Presumably, a student would need to put in considerably more effort to achieve this without ChatGPT.  What if they put 30 minutes or 2 hours into it?  Perhaps they can cobble together something approaching A-level, even with current free tools.

We've had calculators for about 50 years. Arithmetic is still a thing. We've had spreadsheets for about 40 years, and computer algebra systems for 30+ years. Algebra and calculus are still "things".

While the kinds of things that people do by hand vs. with tools have changed, the need to learn the underlying process hasn't.

The same thing will no doubt happen with new tools like this.
I completely agree that these programs are tools and will eventually be accepted as such, but I think there's a transition period. I recall a time very early in my primary education when we were discouraged from using a word processor to type assignments and instead handwrote or typewrote them. I don't know the history of the calculator, but presumably when calculators were first introduced there was some disagreement about how they were allowed to be used.

marshwiggle

Quote from: aprof on January 03, 2023, 05:27:48 PM
Quote from: marshwiggle on January 03, 2023, 04:12:36 PM
Quote from: aprof on January 03, 2023, 01:25:06 PM

I'm not sure why people are saying that achieving C-level work in 5 minutes is not a concern? Presumably, a student would need to put in considerably more effort to achieve this without ChatGPT.  What if they put 30 minutes or 2 hours into it?  Perhaps they can cobble together something approaching A-level, even with current free tools.

We've had calculators for about 50 years. Arithmetic is still a thing. We've had spreadsheets for about 40 years, and computer algebra systems for 30+ years. Algebra and calculus are still "things".

While the kinds of things that people do by hand vs. with tools have changed, the need to learn the underlying process hasn't.

The same thing will no doubt happen with new tools like this.
I completely agree that these programs are tools and will eventually be accepted as such, but I think there's a transition period. I recall a time very early in my primary education when we were discouraged from using a word processor to type assignments and instead handwrote or typewrote them. I don't know the history of the calculator, but presumably when calculators were first introduced there was some disagreement about how they were allowed to be used.

When I was in middle school, a couple of my friends got calculators: four basic functions and square root. (I think one had four functions and continuous memory.) In high school, we could use slide rules but not calculators on exams. By university, we could use non-programmable calculators on exams. That transition period of a decade or less basically made the changes that then lasted for the next 20 or 30 years. (By the end of high school I had a programmable scientific calculator, with basically the same functions as a scientific calculator today.)

It takes so little to be above average.

Caracal

Quote from: aprof on January 03, 2023, 01:25:06 PM
Perhaps they can cobble together something approaching A-level, even with current free tools.


No, definitely not if we are talking about any kind of paper that asks the student to do research and use it to make an argument. The rubric I basically use for research papers is:
F and D: Something that doesn't remotely fulfill the requirements of the assignment. If the student actually spent more than 15 minutes trying to write the paper, there's no way I could tell that from what I'm seeing.
C: Doesn't actually involve any original research outside of the reading assignments. Either doesn't make an argument at all and just restates details or facts, or makes an argument that isn't academic in nature.
B: Either makes an actual argument (or approaches making one), or uses actual evidence that the student gathered, but doesn't do both, at least not adequately. On the low-B side, the student is usually making a weak argument and using some, but not nearly enough, evidence to support it. Middle Bs usually do a perfectly fine job on either the argument or the evidence (usually the evidence), but the other is missing or nearly so. High Bs are usually papers where the student had an actual argument and actual evidence and seemed to have an idea about how to put them together, but the execution fell apart. The plane ended up hung up in some telephone wires, but on the plus side it definitely got off the ground!
A: An actual argument and the use of actual evidence to support it! I learned something I didn't previously know because you did some research and told me about it! And you provided some context so I understood why it might matter! Since I'm surveying the wreckage of a lot of planes that skidded off the runway or ended up in the wires above us, I'm going to be pretty generous about what I consider to be a successful landing. The essay got off the ground and returned to the ground more or less intact. It seems churlish to complain too much about structural damage, fires, or missing the runway. You can still have an A-. An A is for an actually good paper.

ChatGPT doesn't seem to be able to either use evidence to support points or make a real argument. These aren't small details a student can fix with a couple of hours of editing; they're the hardest parts of writing a paper.

Wahoo Redux

I am a little surprised by these discussions of what ChatGPT can and cannot do.  Whatever the program cannot do right now it will be able to do in the near future.

NBC News: ChatGPT banned from New York City public schools' devices and networks

Lower Deck:
Quote
A spokesperson for OpenAI, which developed ChatGPT, said it is "already developing mitigations to help anyone identify text generated by that system."

Come, fill the Cup, and in the fire of Spring
Your Winter-garment of Repentance fling:
The Bird of Time has but a little way
To flutter--and the Bird is on the Wing.

Caracal

Quote from: Wahoo Redux on January 05, 2023, 07:06:05 PM
I am a little surprised by these discussions of what ChatGPT can and cannot do.  Whatever the program cannot do right now it will be able to do in the near future.

NBC News: ChatGPT banned from New York City public schools' devices and networks

Lower Deck:
Quote
A spokesperson for OpenAI, which developed ChatGPT, said it is "already developing mitigations to help anyone identify text generated by that system."

Huh? Why would we think that? I don't know much about AI or programming, but it isn't magic. They've created something that is very good at scraping the internet and fluently paraphrasing what it finds there. It can't really make arguments or develop a thesis. This isn't a small thing; it is much of what you're trying to teach students in an intro writing course. Students do the same thing that ChatGPT does: they restate a thesis over and over in different words because they don't know how to develop it and make it persuasive.

It's like watching a Roomba vacuum a room and concluding that the people developing it are on the verge of releasing a robot capable of cleaning your entire house without your assistance. You'd just need to add some arms so it can climb up on surfaces and grab trash, and mops so it can wash floors. Oh, and it would need to know how to climb up on your table without damaging it, and to distinguish trash from non-trash. Maybe somebody is trying to build that robot, but it isn't a Roomba with a few tweaks.