A whole new ballgame in cheating. Introducing ChatGPT

Started by Diogenes, December 08, 2022, 02:48:37 PM

marshwiggle

Quote from: Sun_Worshiper on December 11, 2022, 08:59:40 AM
Quote from: Puget on December 10, 2022, 03:37:59 PM
Quote from: Sun_Worshiper on December 10, 2022, 03:09:11 PM
Quote from: Puget on December 10, 2022, 02:40:19 PM
Quote from: Sun_Worshiper on December 10, 2022, 02:08:55 PM
Overall I think this may be more of a danger to the integrity of academic research than teaching. We can give students in-class exams with blue books, but can journal editors stop researchers from submitting an article they generated with this technology?

If the field and journal in question would happily publish the kind of thing ChatGPT produces (which is pretty equivalent to a wikipedia article-- not bad, but nothing novel since it is just based on the training set), then that is the real problem.

Sure, for now, but I have to think that this technology is going to evolve quickly.

Other AI technology may well, but ChatGPT is a large language model, NOT a general AI (it will tell you so itself if you ask it to do things it can't), which means all it "knows" is the language in its training set. It can do remarkable stuff with that, but it can't reason, and it isn't truly generative. It could probably write a mediocre lit review for a paper (although currently it seems to make up fictitious citations!), but it can't come up with novel hypotheses or figure out how to test them.

Ok, but the point is that it is just a matter of time (and probably not much time) before this kind of technology makes its way into the research/publishing world.

It's worth noting here that many car companies have given up on self-driving cars, because it's way more complicated than they thought. All of the driver assistance tech is still useful, though. The same will be true for language processing: there will be lots of ways it will be useful, even though problems requiring actual sentience will be totally out of reach.
It takes so little to be above average.

Sun_Worshiper

Quote from: marshwiggle on December 11, 2022, 09:12:15 AM
Quote from: Sun_Worshiper on December 11, 2022, 08:59:40 AM
Quote from: Puget on December 10, 2022, 03:37:59 PM
Quote from: Sun_Worshiper on December 10, 2022, 03:09:11 PM
Quote from: Puget on December 10, 2022, 02:40:19 PM
Quote from: Sun_Worshiper on December 10, 2022, 02:08:55 PM
Overall I think this may be more of a danger to the integrity of academic research than teaching. We can give students in-class exams with blue books, but can journal editors stop researchers from submitting an article they generated with this technology?

If the field and journal in question would happily publish the kind of thing ChatGPT produces (which is pretty equivalent to a wikipedia article-- not bad, but nothing novel since it is just based on the training set), then that is the real problem.

Sure, for now, but I have to think that this technology is going to evolve quickly.

Other AI technology may well, but ChatGPT is a large language model, NOT a general AI (it will tell you so itself if you ask it to do things it can't), which means all it "knows" is the language in its training set. It can do remarkable stuff with that, but it can't reason, and it isn't truly generative. It could probably write a mediocre lit review for a paper (although currently it seems to make up fictitious citations!), but it can't come up with novel hypotheses or figure out how to test them.

Ok, but the point is that it is just a matter of time (and probably not much time) before this kind of technology makes its way into the research/publishing world.

It's worth noting here that many car companies have given up on self-driving cars, because it's way more complicated than they thought. All of the driver assistance tech is still useful, though. The same will be true for language processing: there will be lots of ways it will be useful, even though problems requiring actual sentience will be totally out of reach.

I'm not saying that machines will create research papers on their own, although perhaps that will happen at some point. I'm saying that researchers will use these tools as a shortcut to put together papers fast and with just a few lines of instruction, and pass that research off as their own. The same things we are worried about students doing will soon be done by many of their professors as well.

And look, academic research and writing is not that complicated. People collect data, which they often download or scrape from public websites, run statistical models, often using open source software like R, and write up the results. These are all things that AI, with a little human guidance, will be able to do soon enough.

Is that a bad thing? I don't know. But it will raise thorny ethical questions and putting our heads in the sand will just leave us unprepared when it begins to happen.
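The pipeline described above (collect data, fit a model, write up the result) is mechanical enough to sketch in a few lines. This is an illustrative Python example only, not anyone's actual workflow; the "dataset" is generated in place as a stand-in for scraped public data, and the final templated sentence is the part a language model could most easily automate:

```python
# Illustrative sketch of the "collect data, run a model, write it up"
# pipeline. All names and numbers here are hypothetical.
import random

random.seed(1)

# "Collect" data: y = 2x + noise, standing in for downloaded records.
xs = [random.uniform(0, 10) for _ in range(100)]
ys = [2 * x + random.gauss(0, 1) for x in xs]

# "Run a statistical model": ordinary least squares, done by hand.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# "Write up the results": a templated sentence.
print(f"A one-unit increase in x is associated with a {slope:.2f} "
      f"unit change in y (intercept {intercept:.2f}).")
```

The recovered slope should land near the true value of 2, which is exactly why this step-by-step recipe looks so automatable.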

Puget

Quote from: Sun_Worshiper on December 11, 2022, 09:32:53 AM
I'm not saying that machines will create research papers on their own, although perhaps that will happen at some point. I'm saying that researchers will use these tools as a shortcut to put together papers fast and with just a few lines of instruction, and pass that research off as their own. The same things we are worried about students doing will soon be done by many of their professors as well.

And look, academic research and writing is not that complicated. People collect data, which they often download or scrape from public websites, run statistical models, often using open source software like R, and write up the results. These are all things that AI, with a little human guidance, will be able to do soon enough.

Is that a bad thing? I don't know. But it will raise thorny ethical questions and putting our heads in the sand will just leave us unprepared when it begins to happen.

I mean, the bolded text could be used to describe data analysis by someone from the pre-computer age who had to do the math by hand. I give the software a few lines of instruction (code) and it runs massively complex analyses for me that they could never have dreamed of! And then I publish it as my own work!

But I'm the one telling it what analyses to run and properly interpreting the results-- those are tasks that are an order of magnitude (at least) harder than any current AI is capable of. Will they do it someday? Maybe, but they're not coming for my job anytime soon.
"Never get separated from your lunch. Never get separated from your friends. Never climb up anything you can't climb down."
–Best Colorado Peak Hikes

Sun_Worshiper

Quote from: Puget on December 11, 2022, 10:05:20 AM
Quote from: Sun_Worshiper on December 11, 2022, 09:32:53 AM
I'm not saying that machines will create research papers on their own, although perhaps that will happen at some point. I'm saying that researchers will use these tools as a shortcut to put together papers fast and with just a few lines of instruction, and pass that research off as their own. The same things we are worried about students doing will soon be done by many of their professors as well.

And look, academic research and writing is not that complicated. People collect data, which they often download or scrape from public websites, run statistical models, often using open source software like R, and write up the results. These are all things that AI, with a little human guidance, will be able to do soon enough.

Is that a bad thing? I don't know. But it will raise thorny ethical questions and putting our heads in the sand will just leave us unprepared when it begins to happen.

I mean, the bolded text could be used to describe data analysis by someone from the pre-computer age who had to do the math by hand. I give the software a few lines of instruction (code) and it runs massively complex analyses for me that they could never have dreamed of! And then I publish it as my own work!

But I'm the one telling it what analyses to run and properly interpreting the results-- those are tasks that are an order of magnitude (at least) harder than any current AI is capable of. Will they do it someday? Maybe, but they're not coming for my job anytime soon.

Re the bolded, yes, there is a parallel. But software that runs models is a bit different from software that can collect data, run models, interpret and write up results, and produce the introduction, literature review, theory and hypothesis development, and conclusion in minutes. We aren't there yet, but it is probably coming soon.

And my point is not about technology coming for your job. There are two issues: technology taking people's jobs, and technology helping people pass off work that is not their own in order to publish as much as possible in the escalating publishing arms race - something like p-hacking or HARKing on steroids. I'm talking about the latter.
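The "p-hacking on steroids" worry is easy to make concrete: if an automated pipeline tests enough hypotheses, some will look significant by chance alone. A minimal, purely illustrative Python simulation (all numbers hypothetical):

```python
# Simulate an automated pipeline that runs many hypothesis tests on
# pure noise. Even though every null hypothesis is true, roughly 5%
# of the tests will cross the usual significance threshold.
import random
import statistics

def t_stat(a, b):
    """Welch t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

random.seed(42)
n_tests, n = 200, 30
significant = 0
for _ in range(n_tests):
    # Both groups come from the same distribution: the null is true.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    if abs(t_stat(a, b)) > 2.0:  # roughly alpha = 0.05 for df near 58
        significant += 1

print(f"{significant} of {n_tests} null tests came out 'significant'")
```

A researcher (or a tool) that quietly runs 200 such tests and reports only the "hits" can manufacture publishable-looking results from nothing, which is the ethical problem being flagged here.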

I'm also not saying that the sky is necessarily falling, but you seem to be completely brushing aside what is likely to be the next frontier of questions about research ethics.

MarathonRunner

Quote from: Diogenes on December 10, 2022, 12:49:26 PM
Quote from: Sun_Worshiper on December 10, 2022, 11:04:59 AM
I'd like to see what happens if I tell it to p-hack me significant but defensible results.

I can't remember where, but I read that there's a real concern that researchers will just use this technology to write their articles for them. But honestly, given how badly papers are often written in the sciences, and how little the words between the numbers matter, I'm not sure I'd be opposed to it.

Guess why I will no longer review for a particular publisher? Not that they are using this tech, but that some of the articles may well have been.

Puget

Why ChatGPT will definitely not be authoring (publishable) journal articles anytime soon:
https://twitter.com/paniterka_ch/status/1599893718214901760?s=20&t=swpMT6q7LCx476i45g2uQA

This whole thread is worth a read. Not only does it make up completely fictitious citations (which any decent peer reviewer would quickly catch), it makes up whole new phenomena in physics! (Which obviously would also not pass any sort of peer review.)

It is easy to mistake facility with re-purposing language in a training set (which indeed makes it very good at basic essay question type responses) for actual cognition.  It is cool to play with, and very good for certain types of tasks, but let's not mistake it for being more than it is.
"Never get separated from your lunch. Never get separated from your friends. Never climb up anything you can't climb down."
–Best Colorado Peak Hikes

marshwiggle

Quote from: Puget on December 12, 2022, 08:27:32 AM
Why ChatGPT will definitely not be authoring (publishable) journal articles anytime soon:
https://twitter.com/paniterka_ch/status/1599893718214901760?s=20&t=swpMT6q7LCx476i45g2uQA

This whole thread is worth a read. Not only does it make up completely fictitious citations (which any decent peer reviewer would quickly catch), it makes up whole new phenomena in physics! (Which obviously would also not pass any sort of peer review.)

It is easy to mistake facility with re-purposing language in a training set (which indeed makes it very good at basic essay question type responses) for actual cognition.  It is cool to play with, and very good for certain types of tasks, but let's not mistake it for being more than it is.

The same thing happened with ELIZA (https://en.wikipedia.org/wiki/ELIZA). Even supposedly smart people can overestimate technology's capabilities.

It takes so little to be above average.

Sun_Worshiper

Nobody in this thread is saying that ChatGPT will write an article for you. Rather, the point is that programs that can do some or all of the journal-article analysis and writing are probably not far behind.

Caracal

Quote from: Puget on December 11, 2022, 10:05:20 AM
Quote from: Sun_Worshiper on December 11, 2022, 09:32:53 AM
I'm not saying that machines will create research papers on their own, although perhaps that will happen at some point. I'm saying that researchers will use these tools as a shortcut to put together papers fast and with just a few lines of instruction, and pass that research off as their own. The same things we are worried about students doing will soon be done by many of their professors as well.

And look, academic research and writing is not that complicated. People collect data, which they often download or scrape from public websites, run statistical models, often using open source software like R, and write up the results. These are all things that AI, with a little human guidance, will be able to do soon enough.

Is that a bad thing? I don't know. But it will raise thorny ethical questions and putting our heads in the sand will just leave us unprepared when it begins to happen.

I mean, the bolded text could be used to describe data analysis by someone from the pre-computer age who had to do the math by hand. I give the software a few lines of instruction (code) and it runs massively complex analyses for me that they could never have dreamed of! And then I publish it as my own work!

But I'm the one telling it what analyses to run and properly interpreting the results-- those are tasks that are an order of magnitude (at least) harder than any current AI is capable of. Will they do it someday? Maybe, but they're not coming for my job anytime soon.

Farhad Manjoo had a good column about this. AI can often make humans more efficient or less likely to make mistakes, but predictions that it will make us redundant never seem to pan out.

Academic research and writing is a vastly different process than it was 35 years ago, or even 10 years ago. Lots of stuff that used to be done manually has been automated. It can be easy to forget how much work used to be involved in something like moving a paragraph from the body of a paper into the footnotes, or even correcting a small grammatical error.

Things like search functions have really changed the way academic research works in many fields. Some of it has gotten extremely sophisticated. When I use Ancestry to find census records for my research and pull up a record, it now lists other records it thinks might be related. Often, it finds stuff that would have been really hard for me to find with my own searches. But it's also sometimes wrong. It will think someone is the same person when they actually aren't. You still need a human researcher to actually look at the results.

arcturus

I find this development fascinating. Back when I was a graduate student (eons ago...), we used to joke about writing a computer program to write our dissertations for us. It looks like CS folks have finally started to work on this. I wonder if my friends and I can claim intellectual property rights? After all, we had this idea decades ago! /snark

Wahoo Redux

NYTimes: An A.I.-Generated Picture Won an Art Prize. Artists Aren't Happy.

Quote
This year, the Colorado State Fair's annual art competition gave out prizes in all the usual categories: painting, quilting, sculpture.

But one entrant, Jason M. Allen of Pueblo West, Colo., didn't make his entry with a brush or a lump of clay. He created it with Midjourney, an artificial intelligence program that turns lines of text into hyper-realistic graphics.

Mr. Allen's work, "Théâtre D'opéra Spatial," took home the blue ribbon in the fair's contest for emerging digital artists — making it one of the first A.I.-generated pieces to win such a prize, and setting off a fierce backlash from artists who accused him of, essentially, cheating.

Reached by phone on Wednesday, Mr. Allen defended his work. He said that he had made clear that his work — which was submitted under the name "Jason M. Allen via Midjourney" — was created using A.I., and that he hadn't deceived anyone about its origins.

The piece is really cool, too.
Come, fill the Cup, and in the fire of Spring
Your Winter-garment of Repentance fling:
The Bird of Time has but a little way
To flutter--and the Bird is on the Wing.

Parasaurolophus

Caught my first AI essay. It was a trainwreck of random attributions and citations, and not very on-topic. My student didn't put much work into it.

I don't normally bother reporting plagiarists, but I'll be reporting this one.
I know it's a genus.

Morden

Quote from: Parasaurolophus on December 18, 2022, 02:10:18 PM
Caught my first AI essay. It was a trainwreck of random attributions and citations, and not very on-topic. My student didn't put much work into it.

I don't normally bother reporting plagiarists, but I'll be reporting this one.

How did you discover it was AI and not just regular plagiarism?

Parasaurolophus

Quote from: Morden on December 18, 2022, 03:28:26 PM
Quote from: Parasaurolophus on December 18, 2022, 02:10:18 PM
Caught my first AI essay. It was a trainwreck of random attributions and citations, and not very on-topic. My student didn't put much work into it.

I don't normally bother reporting plagiarists, but I'll be reporting this one.

How did you discover it was AI and not just regular plagiarism?

The main hint was the structure of the paper, which is cut up into small sections that you can generate with just basic prompts (clearly derived from section titles). The weirdness and garbling of the attributions and the citations, too, are just not at all what you would see from material either generated by a student (I have never seen anything quite like it, and my students often do a very poor job of relating class content) or taken from someone else's work. The huggingface detector confirmed my intuition, and I'll be having a conversation with the student which I expect will seal the deal.
I know it's a genus.