A whole new ballgame in cheating. Introducing ChatGPT

Started by Diogenes, December 08, 2022, 02:48:37 PM

fosca

I suspect I've been getting some AI writing in my online class. I think the fact that I only allow students to use in-course sources (the text, some other readings) in their writing might help. If someone writes about something that isn't in the course sources (like a part of the brain or a term the sources don't cover), they lose a lot of points, and I know a lot of AI tools draw on things like Wikipedia for research. It's not perfect, but I'm tired of students Googling everything anyway, and this hopefully makes them actually think about the material.

MarathonRunner

Quote from: aprof on December 09, 2022, 08:01:52 AM
For fun, I gave ChatGPT the T/F portion of my final exam (1st-year grad STEM course). It scored 64%. It's hard to point to a pattern in the problems it missed. Anything that is just a basic definition check is easy for it to get correct, just as it would be for a student who had the opportunity to Google an answer. The more inferences that must be drawn to concepts beyond what's in the statement, the more it seems to struggle to connect them properly.

I also gave it a few calculation problems, but it bombed quite badly on those. I would probably have given it 25% partial credit for identifying a few of the required equations and concepts but then applying them all wrong.

Overall, its responses read to me like those of an undergrad who is very confident but not quite as smart as they think they are. It says things so authoritatively that they sound correct, and if you're just skimming or not an expert on the subject, you'd probably overlook the errors. I'm not sure if this is better or worse than actually being smart and capable. Seems to me it can just lead to a greater spread of disinformation and false expertise.

My physiology exam has a question about an alien species: if we assume their physiology is the same as humans', and they have "this," then what would that mean? The AI might not be able to manage questions like that.

In my undergrad, some of the toughest exams were open book. The book had the equations and related material, but if you didn't know your stuff, you had no idea which equation to use or which section to look in for an answer. I guess I see AI similarly: if you know your stuff, it's very helpful; if you don't, it might be only marginally useful.

Sun_Worshiper

The arms race continues.

foralurker

Quote from: Sun_Worshiper on December 09, 2022, 04:13:42 PM
The arms race continues.

This has been my takeaway. Ironically, I caught two plagiarized papers thanks to a bot, though not in the way you may think.

On two occasions my students had used the voice-to-text feature in Word to compose their papers by pressing play on a YouTube video on a topic close to that week's.

The only reason I caught them was that Turnitin matched a source on a "content farm" that also scrapes content from YouTube to generate bot-authored articles.

Wahoo Redux

Reddit: ChatGPT got a 100/100 on my Computational Physics problem set.

Quote
I'm having a bit of an existential crisis and have to figure out how to move forward, because all I have to do is type a question into ChatGPT and it accurately solves all of my Comp Phys questions. This is a graduate-level class.



Disclosure: It initially got a 90/100 until someone on Twitter pointed out that my question could have been refined to give a hint, in which case ChatGPT got 100/100.

https://twitter.com/timcopia/status/1601280803723304960



Are the robots just taking over from now on? Academia just had a schema shift this week and it will never be the same.
Come, fill the Cup, and in the fire of Spring
Your Winter-garment of Repentance fling:
The Bird of Time has but a little way
To flutter--and the Bird is on the Wing.

Sun_Worshiper

I'd like to see what happens if I tell it to p-hack me significant but defensible results.
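
To be concrete, here's roughly the kind of thing I mean -- a toy Python sketch (entirely hypothetical: the noise data and the twenty arbitrary subgroups are made up for illustration) of what p-hacking amounts to, testing pure noise over and over until something looks publishable:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    outcome = rng.normal(size=200)             # pure noise: no true effect anywhere
    subgroups = rng.integers(0, 20, size=200)  # 20 arbitrary subgroup labels

    # Test every subgroup against the rest and keep only the "best" result.
    results = []
    for g in range(20):
        t, p = stats.ttest_ind(outcome[subgroups == g], outcome[subgroups != g])
        results.append((p, g, t))

    p, g, t = min(results)
    print(f"Best-looking subgroup {g}: t = {t:.2f}, p = {p:.3f}")

Run enough arbitrary comparisons and something will dip under .05 by chance alone, and only that comparison gets reported -- that's the whole trick.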

Diogenes

Quote from: Sun_Worshiper on December 10, 2022, 11:04:59 AM
I'd like to see what happens if I tell it to p-hack me significant but defensible results.

I can't remember where I read it, but there's a real concern that researchers will just use this technology to write their articles for them. But honestly, given how badly papers in the sciences are often written, and how little the words between the numbers matter, I'm not sure I'd be opposed to it.

Parasaurolophus

I'd like to farm my marking out to an AI... For science, of course.
I know it's a genus.

Sun_Worshiper

Quote from: Diogenes on December 10, 2022, 12:49:26 PM
Quote from: Sun_Worshiper on December 10, 2022, 11:04:59 AM
I'd like to see what happens if I tell it to p-hack me significant but defensible results.

I can't remember where I read it, but there's a real concern that researchers will just use this technology to write their articles for them. But honestly, given how badly papers in the sciences are often written, and how little the words between the numbers matter, I'm not sure I'd be opposed to it.

Yes, it seems inevitable that people will use this as you describe. I'm not sure what to think about it either. On the one hand, the purpose of research is to create knowledge, and if this becomes a tool that helps people do that, then perhaps we should celebrate it. On the other hand, in practice research is about career advancement for many/most people, and giving folks such a shortcut to pad their CVs in the ever-expanding arms race of research productivity is not so healthy.

It is also interesting to imagine a world in the not-too-distant future when people can create a whole dissertation in a few minutes through a platform like this.


Sun_Worshiper

Overall I think this may be more of a danger to the integrity of academic research than teaching. We can give students in-class exams with blue books, but can journal editors stop researchers from submitting an article they generated with this technology?

Puget

Quote from: Sun_Worshiper on December 10, 2022, 02:08:55 PM
Overall I think this may be more of a danger to the integrity of academic research than teaching. We can give students in-class exams with blue books, but can journal editors stop researchers from submitting an article they generated with this technology?

If the field and journal in question would happily publish the kind of thing ChatGPT produces (which is roughly equivalent to a Wikipedia article -- not bad, but nothing novel, since it is just based on the training set), then that is the real problem.
"Never get separated from your lunch. Never get separated from your friends. Never climb up anything you can't climb down."
–Best Colorado Peak Hikes

Sun_Worshiper

Quote from: Puget on December 10, 2022, 02:40:19 PM
Quote from: Sun_Worshiper on December 10, 2022, 02:08:55 PM
Overall I think this may be more of a danger to the integrity of academic research than teaching. We can give students in-class exams with blue books, but can journal editors stop researchers from submitting an article they generated with this technology?

If the field and journal in question would happily publish the kind of thing ChatGPT produces (which is roughly equivalent to a Wikipedia article -- not bad, but nothing novel, since it is just based on the training set), then that is the real problem.

Sure, for now, but I have to think that this technology is going to evolve quickly.

Puget

Quote from: Sun_Worshiper on December 10, 2022, 03:09:11 PM
Quote from: Puget on December 10, 2022, 02:40:19 PM
Quote from: Sun_Worshiper on December 10, 2022, 02:08:55 PM
Overall I think this may be more of a danger to the integrity of academic research than teaching. We can give students in-class exams with blue books, but can journal editors stop researchers from submitting an article they generated with this technology?

If the field and journal in question would happily publish the kind of thing ChatGPT produces (which is roughly equivalent to a Wikipedia article -- not bad, but nothing novel, since it is just based on the training set), then that is the real problem.

Sure, for now, but I have to think that this technology is going to evolve quickly.

Other AI technology may well, but ChatGPT is a large language model, NOT a general AI (it will tell you so itself if you ask it to do things it can't), which means all it "knows" is the language in its training set. It can do remarkable stuff with that, but it can't reason, and it isn't truly generative. It could probably write a mediocre lit review for a paper (although currently it seems to make up fictitious citations!), but it can't come up with novel hypotheses or figure out how to test them.
"Never get separated from your lunch. Never get separated from your friends. Never climb up anything you can't climb down."
–Best Colorado Peak Hikes

Wahoo Redux

Quote from: Puget on December 10, 2022, 03:37:59 PM
it can't come up with novel hypotheses or figure out how to test them.

Not yet.

But AI probably will someday.

Is that bad?
Come, fill the Cup, and in the fire of Spring
Your Winter-garment of Repentance fling:
The Bird of Time has but a little way
To flutter--and the Bird is on the Wing.

Sun_Worshiper

Quote from: Puget on December 10, 2022, 03:37:59 PM
Quote from: Sun_Worshiper on December 10, 2022, 03:09:11 PM
Quote from: Puget on December 10, 2022, 02:40:19 PM
Quote from: Sun_Worshiper on December 10, 2022, 02:08:55 PM
Overall I think this may be more of a danger to the integrity of academic research than teaching. We can give students in-class exams with blue books, but can journal editors stop researchers from submitting an article they generated with this technology?

If the field and journal in question would happily publish the kind of thing ChatGPT produces (which is roughly equivalent to a Wikipedia article -- not bad, but nothing novel, since it is just based on the training set), then that is the real problem.

Sure, for now, but I have to think that this technology is going to evolve quickly.

Other AI technology may well, but ChatGPT is a large language model, NOT a general AI (it will tell you so itself if you ask it to do things it can't), which means all it "knows" is the language in its training set. It can do remarkable stuff with that, but it can't reason, and it isn't truly generative. It could probably write a mediocre lit review for a paper (although currently it seems to make up fictitious citations!), but it can't come up with novel hypotheses or figure out how to test them.

OK, but the point is that it's just a matter of time (and probably not much time) before this kind of technology makes its way into the research/publishing world.