A whole new ballgame in cheating. Introducing ChatGPT

Started by Diogenes, December 08, 2022, 02:48:37 PM

dismalist

Quote from: Kron3007 on May 23, 2023, 01:02:30 PM
Quote from: Antiphon1 on May 23, 2023, 12:29:22 PM
When I initially saw the report, my thought was, "Huh. How could this work as a tool to help students better understand the material?" Then I realized that the bot was just a giant google search that has no context or internal critical analysis. I'm afraid the process might be worse than the word salad students sometimes think passes for research. And, yes, it's just another hurdle to jump. I'm hoping we can figure out how to corral the tool before it goes the way of the social media wild, wild west.

It is a far cry from a giant google search IMO. Yes, it does a giant google search, but then compiles it into something variably coherent.  This is exactly what my students do as well.

It is also worth noting that it is continually improving and can do other things such as write code (very useful sometimes). 

Dismiss at your own peril.   

It's as though we had a calculator and had to check every result for correctness.
That's not even wrong!
--Wolfgang Pauli

Sun_Worshiper

Quote from: Kron3007 on May 23, 2023, 01:02:30 PM
Quote from: Antiphon1 on May 23, 2023, 12:29:22 PM
When I initially saw the report, my thought was, "Huh. How could this work as a tool to help students better understand the material?" Then I realized that the bot was just a giant google search that has no context or internal critical analysis. I'm afraid the process might be worse than the word salad students sometimes think passes for research. And, yes, it's just another hurdle to jump. I'm hoping we can figure out how to corral the tool before it goes the way of the social media wild, wild west.

It is a far cry from a giant google search IMO. Yes, it does a giant google search, but then compiles it into something variably coherent.  This is exactly what my students do as well.

It is also worth noting that it is continually improving and can do other things such as write code (very useful sometimes). 

Dismiss at your own peril.   

This is exactly right, especially the bolded.

I've said it before and I'll say it again now: Trying to stop students (or anyone else) from using this is futile and wrongheaded. A better approach is to teach them how to use it and to understand its strengths and weaknesses.

secundem_artem

I wonder what would happen if you asked it to write a paper on "Clog Dancing in Post Meiji Japan" and then took that essay, entered it back into Chat GPT and asked it to do a critical analysis of the paper it just wrote?

I ask because I may start to use assignments like "Use Chat GPT to write 1000 word essay on Topic X.  Then, critically review Topic X for wrong or missing information, presenting multiple perspectives on the topic, quality of reference list, and ......"

Just wondering if students could use the AI to do both parts of an assignment like that - both the writing AND the critical analysis.
Funeral by funeral, the academy advances

Antiphon1

Quote from: secundem_artem on May 23, 2023, 03:12:24 PM
I wonder what would happen if you asked it to write a paper on "Clog Dancing in Post Meiji Japan" and then took that essay, entered it back into Chat GPT and asked it to do a critical analysis of the paper it just wrote?

I ask because I may start to use assignments like "Use Chat GPT to write 1000 word essay on Topic X.  Then, critically review Topic X for wrong or missing information, presenting multiple perspectives on the topic, quality of reference list, and ......"

Just wondering if students could use the AI to do both parts of an assignment like that - both the writing AND the critical analysis.

I gave it a try. No go. While it's true that ChatGPT is more than a Google search, it's really not much more than Google search plus Grammarly. I've watched it write an essay on 16th-century female Western poets. It was an out-of-body experience. However, after reading the essay, we realized the bot had borrowed liberally from Wikipedia, social media, and random blogs, and interspersed this information with online encyclopedias and annotations. When the essay was rewashed through the program for critical analysis (a very simple compare-and-contrast prompt), the result was exactly the mess I sometimes get from underprepared undergrads. The program isn't perfect. It is a tool, though. Hammers are great at driving nails, but not very effective for painting the Sistine Chapel. We have to know how it functions before we can outsmart it.

the_geneticist

I've played around and ChatGPT is terrible at conditional statements. An example from genetics: can a calico cat have an orange, female kitten? The answer is yes, provided the father is an orange male cat. ChatGPT will tell you calico cats are female, but it doesn't apply that knowledge to what is being asked. And it gets the genetics wrong, even though orange vs. black fur in cats is a classic example of an X-linked trait.
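The X-linked reasoning the bot fails to chain together is mechanical enough to write down. A minimal sketch in Python, under a deliberately simplified one-gene model (the allele names `O`/`o` and both function names are illustrative, not standard genetics nomenclature):

```python
# Simplified model: orange (O) vs. black (o) coat colour is X-linked.
# Females carry two X alleles; an O/o heterozygote is calico.

def female_phenotype(a1, a2):
    """Coat colour of a female kitten from her two X-linked alleles."""
    if {a1, a2} == {"O", "o"}:
        return "calico"
    return "orange" if a1 == "O" else "black"

def female_kittens(mother_alleles, father_allele):
    """Female kittens get one X from the mother plus the father's only X."""
    return {female_phenotype(m, father_allele) for m in mother_alleles}

# A calico mother (O/o) crossed with an orange father (O on his single X)
# can indeed produce an orange female kitten:
print(sorted(female_kittens(("O", "o"), "O")))   # ['calico', 'orange']
```

With a black father (`"o"`) the same cross yields only calico or black daughters, which is exactly the conditional step the chatbot drops.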

An odd example came up with a recipe for eel pie. It calls for "eel fillets". When asked if removing the head from the eel is necessary, it will say yes. But an eel fillet, by definition, does not include the head. No internal cross-checking.

Kron3007

#380
Quote from: dismalist on May 23, 2023, 01:19:38 PM
Quote from: Kron3007 on May 23, 2023, 01:02:30 PM
Quote from: Antiphon1 on May 23, 2023, 12:29:22 PM
When I initially saw the report, my thought was, "Huh. How could this work as a tool to help students better understand the material?" Then I realized that the bot was just a giant google search that has no context or internal critical analysis. I'm afraid the process might be worse than the word salad students sometimes think passes for research. And, yes, it's just another hurdle to jump. I'm hoping we can figure out how to corral the tool before it goes the way of the social media wild, wild west.

It is a far cry from a giant google search IMO. Yes, it does a giant google search, but then compiles it into something variably coherent.  This is exactly what my students do as well.

It is also worth noting that it is continually improving and can do other things such as write code (very useful sometimes). 

Dismiss at your own peril.   

It's as though we had a calculator and had to check every result for correctness.

If you take a simple calculator and enter 4+4, you will get the right answer.  If you take the same calculator and enter 4+4*3+2, you would get the wrong answer as it doesn't know order of operations.  If you do the same with a more sophisticated scientific calculator, you will get the right answer to both.
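The difference is easy to demonstrate. A minimal Python sketch, where `left_to_right` mimics a hypothetical four-function calculator that chains keyed-in operations, while Python itself applies the usual precedence rules:

```python
def left_to_right(tokens):
    """Evaluate a flat [value, op, value, ...] list strictly left to
    right, the way a simple calculator chains keyed-in operations."""
    result = tokens[0]
    for op, value in zip(tokens[1::2], tokens[2::2]):
        result = result + value if op == "+" else result * value
    return result

expr = [4, "+", 4, "*", 3, "+", 2]
print(left_to_right(expr))   # 26 - the simple calculator's answer
print(4 + 4 * 3 + 2)         # 18 - correct, with order of operations
```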

We are in the very early days of AI akin to a simple calculator, or perhaps an abacus or slide rule.  It gets simple things right (and some relatively complicated things), but starts to fall apart with more complex topics. It will develop under your nose.  In fact, it has already improved during the life of this thread....

Again, dismiss at your peril.

marshwiggle

Quote from: Kron3007 on May 24, 2023, 03:20:05 AM
We are in the very early days of AI akin to a simple calculator, or perhaps an abacus or slide rule.  It gets simple things right (and some relatively complicated things), but starts to fall apart with more complex topics. It will develop under your nose.  In fact, it has already improved during the life of this thread....

Again, dismiss at your peril.

The real issue here is that, like all kinds of automation, it can do better than a certain portion of the population. As someone pointed out, the US military can't recruit anyone with an IQ below 85, because they can't train them to do anything useful. And that is 15% of the population. If the "writing" of an AI like ChatGPT is better than that of some portion of society, it will make that group that much harder to employ. Can society and the economy function if 10% or 20% are unemployable?

Time will tell.

It takes so little to be above average.

Kron3007

Quote from: marshwiggle on May 24, 2023, 05:53:10 AM
Quote from: Kron3007 on May 24, 2023, 03:20:05 AM
We are in the very early days of AI akin to a simple calculator, or perhaps an abacus or slide rule.  It gets simple things right (and some relatively complicated things), but starts to fall apart with more complex topics. It will develop under your nose.  In fact, it has already improved during the life of this thread....

Again, dismiss at your peril.

The real issue here is that, like all kinds of automation, it can do better than a certain portion of the population. As someone pointed out, the US military can't recruit anyone with an IQ below 85, because they can't train them to do anything useful. And that is 15% of the population. If the "writing" of an AI like ChatGPT is better than that of some portion of society, it will make that group that much harder to employ. Can society and the economy function if 10% or 20% are unemployable?

Time will tell.

That portion of the population is not employed writing articles and such anyway.  The real issue is that AI will likely outperform 80% of the population in the near future, and take seconds to do so.  As such, many more people will be automated out, but the job market will adjust accordingly.

What's a little funny is that this crisis already hit many other industries. People are just freaking out because it is threatening more sacred, "human" jobs. I have artist friends who just hate AI art. To me, it produces similar results in a fraction of the time, so I will use it when needed, as will many.

Stockmann

Quote from: Kron3007 on May 24, 2023, 03:20:05 AM
Quote from: dismalist on May 23, 2023, 01:19:38 PM
Quote from: Kron3007 on May 23, 2023, 01:02:30 PM
Quote from: Antiphon1 on May 23, 2023, 12:29:22 PM
When I initially saw the report, my thought was, "Huh. How could this work as a tool to help students better understand the material?" Then I realized that the bot was just a giant google search that has no context or internal critical analysis. I'm afraid the process might be worse than the word salad students sometimes think passes for research. And, yes, it's just another hurdle to jump. I'm hoping we can figure out how to corral the tool before it goes the way of the social media wild, wild west.

It is a far cry from a giant google search IMO. Yes, it does a giant google search, but then compiles it into something variably coherent.  This is exactly what my students do as well.

It is also worth noting that it is continually improving and can do other things such as write code (very useful sometimes). 

Dismiss at your own peril.   

It's as though we had a calculator and had to check every result for correctness.

If you take a simple calculator and enter 4+4, you will get the right answer.  If you take the same calculator and enter 4+4*3+2, you would get the wrong answer as it doesn't know order of operations.  If you do the same with a more sophisticated scientific calculator, you will get the right answer to both.

We are in the very early days of AI akin to a simple calculator, or perhaps an abacus or slide rule.  It gets simple things right (and some relatively complicated things), but starts to fall apart with more complex topics. It will develop under your nose.  In fact, it has already improved during the life of this thread....

Again, dismiss at your peril.

Perhaps a closer analogue is translation. Once upon a time, when I was working as an ESL teacher, I was asked to "correct" some translation done with translation software. The result was so poor (it basically couldn't manage grammar; it was doing little more than looking up words) that it might've been easier and faster if I'd done the translation from scratch - so this software, which someone had presumably paid for, saved a human no real time or effort. Google Translate, even in the beginning, was a huge improvement over that.

Now, of course, there are all sorts of subtle cultural nuances, in-depth context of a given work, cases in which the most literal translation may be a poor choice, etc., that current software probably can't handle, and I've seen the results of taking a text in English and translating it sequentially into umpteen languages and back into English. However, there are absolutely god-awful "professional" translations out there made by humans, such that Google Translate would almost certainly have done a much better job - you can't justify employing the worst translators even if they worked for free and in seconds. There are other issues as well, beyond human incompetence, particularly when translating to or from relatively obscure languages or highly distinct dialects - an AI only has to "learn" a language or dialect once.

My point is that for many things it will simply no longer make sense to employ less-than-stellar humans (and remember, half of employees are below the median) - and for some things, it will no longer make sense to employ humans at all.

MarathonRunner

Quote from: Stockmann on May 25, 2023, 08:47:54 AM
Quote from: Kron3007 on May 24, 2023, 03:20:05 AM
Quote from: dismalist on May 23, 2023, 01:19:38 PM
Quote from: Kron3007 on May 23, 2023, 01:02:30 PM
Quote from: Antiphon1 on May 23, 2023, 12:29:22 PM
When I initially saw the report, my thought was, "Huh. How could this work as a tool to help students better understand the material?" Then I realized that the bot was just a giant google search that has no context or internal critical analysis. I'm afraid the process might be worse than the word salad students sometimes think passes for research. And, yes, it's just another hurdle to jump. I'm hoping we can figure out how to corral the tool before it goes the way of the social media wild, wild west.

It is a far cry from a giant google search IMO. Yes, it does a giant google search, but then compiles it into something variably coherent.  This is exactly what my students do as well.

It is also worth noting that it is continually improving and can do other things such as write code (very useful sometimes). 

Dismiss at your own peril.   

It's as though we had a calculator and had to check every result for correctness.

If you take a simple calculator and enter 4+4, you will get the right answer.  If you take the same calculator and enter 4+4*3+2, you would get the wrong answer as it doesn't know order of operations.  If you do the same with a more sophisticated scientific calculator, you will get the right answer to both.

We are in the very early days of AI akin to a simple calculator, or perhaps an abacus or slide rule.  It gets simple things right (and some relatively complicated things), but starts to fall apart with more complex topics. It will develop under your nose.  In fact, it has already improved during the life of this thread....

Again, dismiss at your peril.

Perhaps a closer analogue is translation. Once upon a time, when I was working as an ESL teacher, I was asked to "correct" some translation done with translation software. The result was so poor (it basically couldn't manage grammar; it was doing little more than looking up words) that it might've been easier and faster if I'd done the translation from scratch - so this software, which someone had presumably paid for, saved a human no real time or effort. Google Translate, even in the beginning, was a huge improvement over that.

Now, of course, there are all sorts of subtle cultural nuances, in-depth context of a given work, cases in which the most literal translation may be a poor choice, etc., that current software probably can't handle, and I've seen the results of taking a text in English and translating it sequentially into umpteen languages and back into English. However, there are absolutely god-awful "professional" translations out there made by humans, such that Google Translate would almost certainly have done a much better job - you can't justify employing the worst translators even if they worked for free and in seconds. There are other issues as well, beyond human incompetence, particularly when translating to or from relatively obscure languages or highly distinct dialects - an AI only has to "learn" a language or dialect once.

My point is that for many things it will simply no longer make sense to employ less-than-stellar humans (and remember, half of employees are below the median) - and for some things, it will no longer make sense to employ humans at all.

I've seen some bizarre things from Google Translate because "vet" can mean either "veteran" or "veterinarian". Yes, there are veteran veterinarians, but not many. It's clear Google can't parse context.

And ChatGPT can't design crochet either. Its attempts look nothing like the prompt (two spheres sewn together is supposed to be a horse).

Stockmann

I don't know about crochet, but I've seen human translation errors more egregious than mixing up vet=veteran and vet=veterinarian - including cases where the words merely resemble each other rather than the use of identical shorthands.

In any case, I think a big issue with AI is that it's hard to tell in which jobs it will displace all but the best workers, which jobs will be automated away entirely, and which will be relatively unscathed. During the early Industrial Revolution, if you made a living as a writer or a physician, for example, machines clearly could do nothing even remotely in the ballpark of what you did. If your job was manually operating a pump or shoveling rocks, at least in hindsight it's clear you needed other skills. But today, it's not clear at all who is safe(r). AI can code, so human programming may go the way of writing longhand - not a viable career choice no matter how good your penmanship. AI can do award-winning "creative" work, so clearly the arts aren't safe. Even shiny new jobs like YouTuber: given the reality of deepfakes, they'll have to compete against entirely synthetic, wholly AI-generated videos very soon (surely deepfakes can already act better than Kristen Stewart?). AI is already being actively used in STEM research. Given the increased prevalence of remote work, the tools to manage it, and growing AI capabilities, I doubt junior and middle management are safe (especially if there are fewer humans to manage in the first place). So apart from jobs created by fiat, like judges or senators, what is safe?

dismalist

Quote from: Kron3007 on May 24, 2023, 03:20:05 AM
Quote from: dismalist on May 23, 2023, 01:19:38 PM
Quote from: Kron3007 on May 23, 2023, 01:02:30 PM
Quote from: Antiphon1 on May 23, 2023, 12:29:22 PM
When I initially saw the report, my thought was, "Huh. How could this work as a tool to help students better understand the material?" Then I realized that the bot was just a giant google search that has no context or internal critical analysis. I'm afraid the process might be worse than the word salad students sometimes think passes for research. And, yes, it's just another hurdle to jump. I'm hoping we can figure out how to corral the tool before it goes the way of the social media wild, wild west.

It is a far cry from a giant google search IMO. Yes, it does a giant google search, but then compiles it into something variably coherent.  This is exactly what my students do as well.

It is also worth noting that it is continually improving and can do other things such as write code (very useful sometimes). 

Dismiss at your own peril.   

It's as though we had a calculator and had to check every result for correctness.

If you take a simple calculator and enter 4+4, you will get the right answer.  If you take the same calculator and enter 4+4*3+2, you would get the wrong answer as it doesn't know order of operations.  If you do the same with a more sophisticated scientific calculator, you will get the right answer to both.

We are in the very early days of AI akin to a simple calculator, or perhaps an abacus or slide rule.  It gets simple things right (and some relatively complicated things), but starts to fall apart with more complex topics. It will develop under your nose.  In fact, it has already improved during the life of this thread....

Again, dismiss at your peril.

The calculator gives the correct answer to questions whose answers are known. The abacus and the slide rule did the same thing from the day they were invented. Their results do not need to be checked.

ChatGPT -- by searching for the frequency of past word orderings -- can only tell us something thought in the past, correct or incorrect, even if it were perfect.

Whatever comes out of ChatGPT, no matter how good it gets, will have to be checked for current correctness.

[My personal piffle is not the technology. Use it as one finds it useful. It's calling it "intelligence" that irritates me. Just a marketing ploy.]
That's not even wrong!
--Wolfgang Pauli

Parasaurolophus

#387
Quote from: Stockmann on May 25, 2023, 12:40:01 PM
I don't know about crochet, but I've seen human translation errors more egregious than mixing up vet=veteran and vet=veterinarian - including cases where the words merely resemble each other rather than the use of identical shorthands.



The first English translation of The Little Prince is notorious for translating 'ami' (friend) as 'sheep' (in French, that would have been 'mouton'). =/


I can't work out how that could possibly have happened, unless it was deliberate. But there you have it. It gave rise to what's called 'the sheep test' for identifying derivative instances of this translation in other languages.
I know it's a genus.

ciao_yall

#388
Quote from: Parasaurolophus on May 25, 2023, 01:52:34 PM
Quote from: Stockmann on May 25, 2023, 12:40:01 PM
I don't know about crochet, but I've seen human translation errors more egregious than mixing up vet=veteran and vet=veterinarian - including cases where the words merely resemble each other rather than the use of identical shorthands.



The first English translation of The Little Prince is notorious for translating 'ami' (friend) as 'sheep' (in French, that would have been 'mouton'). =/


I can't work out how that could possibly have happened, unless it was deliberate. But there you have it. It gave rise to what's called 'the sheep test' for identifying derivative instances of this translation in other languages.

That is really odd. As I recall, the author does end up drawing a sheep for The Little Prince. And TLP had a pet sheep on his planet which he hoped would eat the baobab plants.

Online sources insist it's a "mistake" but those are pretty basic words in French. Unless it was an idiom?


marshwiggle

Quote from: Stockmann on May 25, 2023, 12:40:01 PM
I don't know about crochet, but I've seen human translation errors more egregious than mixing up vet=veteran and vet=veterinarian - including cases where the words merely resemble each other rather than the use of identical shorthands.

In any case, I think a big issue with AI is that it's hard to tell in which jobs it will displace all but the best workers, which jobs will be automated away entirely, and which will be relatively unscathed. During the early Industrial Revolution, if you made a living as a writer or a physician, for example, machines clearly could do nothing even remotely in the ballpark of what you did. If your job was manually operating a pump or shoveling rocks, at least in hindsight it's clear you needed other skills. But today, it's not clear at all who is safe(r). AI can code, so human programming may go the way of writing longhand - not a viable career choice no matter how good your penmanship. AI can do award-winning "creative" work, so clearly the arts aren't safe. Even shiny new jobs like YouTuber: given the reality of deepfakes, they'll have to compete against entirely synthetic, wholly AI-generated videos very soon (surely deepfakes can already act better than Kristen Stewart?). AI is already being actively used in STEM research. Given the increased prevalence of remote work, the tools to manage it, and growing AI capabilities, I doubt junior and middle management are safe (especially if there are fewer humans to manage in the first place). So apart from jobs created by fiat, like judges or senators, what is safe?

So AI can just solve those unsolved math problems by being tasked with coding a solution? No, but it can do the boilerplate stuff that comes up in lots of projects - basically just more flexible libraries. When all that's needed is a tailored version of something already in existence, it will be useful. When what's required is an original idea, it won't.
It takes so little to be above average.