A whole new ballgame in cheating. Introducing ChatGPT

Started by Diogenes, December 08, 2022, 02:48:37 PM


lightning

I'm thinking that one day, asynchronous online teachers will get replaced by AI.

You can already use AI to provide comments (although generated offline and copy-and-pasted in manually).

You can already use AI to interact on an LMS discussion board, if the school allows it. Troll bots have been tormenting interactive discussion boards for decades.

Are they quality comments/feedback and quality interactions? Of course not, but it's not about quality of instruction, when it comes to a lot of online courses.

As soon as universities figure out how they can teach their online courses in this manner, online teachers will be out in the streets.

apl68

I don't see an AI being able to "teach" a very good online course anytime soon.  Probably never.  That said, a lot of online courses aren't very good.  I can recall one I took in library school some years back where the instructor of record added so little to the course that I can see a bot doing the job about as well as he did.
For our light affliction, which is only for a moment, works for us a far greater and eternal weight of glory.  We look not at the things we can see, but at those we can't.  For the things we can see are temporary, but those we can't see are eternal.

marshwiggle

Quote from: apl68 on March 09, 2023, 07:14:22 AM
I don't see an AI being able to "teach" a very good online course anytime soon.  Probably never.  That said, a lot of online courses aren't very good.  I can recall one I took in library school some years back where the instructor of record added so little to the course that I can see a bot doing the job about as well as he did.

Yup. And we already have all kinds of training modules that are completely automated and auto-graded, which are deemed sufficient for all kinds of legal requirements. The question is really more one of "Up to what level can an AI deliver a reasonable online course?" The answer that we've implicitly accepted already is something above "none". (And for all kinds of remedial instruction, it may work quite well.)

It takes so little to be above average.

fosca

I teach 101-level classes fully online, and I'm glad I'm within 10 years or so of retirement, because I fully expect to lose my job to AI.  Whether they're good or not won't matter (even in the several years I've had this job the quality is declining), but they will surely be cheaper, and that's what will matter in the end.

Kron3007

Quote from: apl68 on March 09, 2023, 07:14:22 AM
I don't see an AI being able to "teach" a very good online course anytime soon.  Probably never.  That said, a lot of online courses aren't very good.  I can recall one I took in library school some years back where the instructor of record added so little to the course that I can see a bot doing the job about as well as he did.

Probably not yet, but it is likely coming.  Further, a lot of online courses are really just read the book or watch the video and do the assignment.  Not a very high bar ...

One of the bigger questions regarding online courses is whether AI will replace the student.  Without any actual interaction, how would you even know if it is a real person or a bot doing the course?  Hopefully a silver lining of bots will be the death of low-quality online courses....

jerseyjay

I am taking an online introductory French course. The assignments are integrated into the LMS and the online textbook, and they are graded automatically. (They are mainly grammar and vocabulary questions.) The professor is a nice guy, but there is no real interaction except an optional once-a-week Zoom meet-up to talk in French.

I do not think that a living professor could be replaced entirely by AI. However, I could see one professor (maybe an adjunct) supervising several sections of the course for a fraction of what a professor gets paid to teach an entire section.

In history, I am not sure if it would work. My online asynchronous courses comprised a certain amount of work that could be graded by machine (i.e., discussion boards where anybody who wrote something relevant got points) and some multiple-choice exams graded automatically. However, there were two essay exams and a final exam that required an actual person to grade.

(As an aside, I have real qualms about teaching language in online asynchronous courses, since language requires social interaction. It would also be hard for maths, where many students need quite a bit of explanation before they "get" it.)

I am not sure how AI would work for upper-level courses that require analytical writing--although maybe I just do not have enough imagination.

secundem_artem

They have just announced "improvements" to ChatGPT. 

https://www.nytimes.com/2023/03/14/technology/openai-gpt4-chatgpt.html


Do these techno-libertarians think much about the potential human costs of their work?
Funeral by funeral, the academy advances

Wahoo Redux

Quote from: secundem_artem on March 14, 2023, 10:55:42 AM
They have just announced "improvements" to ChatGPT. 

https://www.nytimes.com/2023/03/14/technology/openai-gpt4-chatgpt.html


Do these techno-libertarians think much about the potential human costs of their work?

When has technological advancement ever taken into account potential human costs (except to trumpet improvements in efficiency, etc.)?
Come, fill the Cup, and in the fire of Spring
Your Winter-garment of Repentance fling:
The Bird of Time has but a little way
To flutter--and the Bird is on the Wing.

marshwiggle

Quote from: Wahoo Redux on March 14, 2023, 12:41:03 PM
Quote from: secundem_artem on March 14, 2023, 10:55:42 AM
They have just announced "improvements" to ChatGPT. 

https://www.nytimes.com/2023/03/14/technology/openai-gpt4-chatgpt.html


Do these techno-libertarians think much about the potential human costs of their work?

When has technological advancement ever taken into account potential human costs (except to trumpet improvements in efficiency, etc.)?

The real issue tends to be how and at what scale the technology is applied, rather than the existence of the technology itself. (My understanding is that Dr. Guillotin was trying to produce a humane form of *execution. The fact that it became useful for its speed and efficiency during the revolution wasn't his fault.)

*Given that it had been going on forever in various forms, and was likely to continue for the foreseeable future.
It takes so little to be above average.

Hibush

A bit late: Please join us today, March 14th, at 1 pm PDT for a live demo of GPT-4.
Here is a replay: https://www.youtube.com/live/outcGtbnMuQ?feature=share&t=3268
"GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and advanced reasoning capabilities."

apl68

Quote from: Wahoo Redux on March 14, 2023, 12:41:03 PM
Quote from: secundem_artem on March 14, 2023, 10:55:42 AM
They have just announced "improvements" to ChatGPT. 

https://www.nytimes.com/2023/03/14/technology/openai-gpt4-chatgpt.html


Do these techno-libertarians think much about the potential human costs of their work?

When has technological advancement ever taken into account potential human costs (except to trumpet improvements in efficiency, etc.)?

It's always about "what can we do?" not "what should we do?"
For our light affliction, which is only for a moment, works for us a far greater and eternal weight of glory.  We look not at the things we can see, but at those we can't.  For the things we can see are temporary, but those we can't see are eternal.

marshwiggle

Quote from: apl68 on March 14, 2023, 02:03:32 PM
Quote from: Wahoo Redux on March 14, 2023, 12:41:03 PM
Quote from: secundem_artem on March 14, 2023, 10:55:42 AM
They have just announced "improvements" to ChatGPT. 

https://www.nytimes.com/2023/03/14/technology/openai-gpt4-chatgpt.html


Do these techno-libertarians think much about the potential human costs of their work?

When has technological advancement ever taken into account potential human costs (except to trumpet improvements in efficiency, etc.)?

It's always about "what can we do?" not "what should we do?"

That's often because the ultimate bad outcome of some technological advancement isn't in sight as the many incremental developments along the way happen. Most of the problems are due to unintended consequences, rather than desired goals.
So at each stage of incremental development, there's no obvious moral reason to avoid getting over the current technological hurdle.
It takes so little to be above average.

Diogenes

"Is it legal/ethical/safe?"
"Don't care- do it anyway." -Uber, Lyft, Airbnb, Lime, Byrd, ChatGPT...
"Uh there were consequences we could have prevented had you given us time/oversight"
"Too bad, we are already here! Your little town needs to figure out how to keep us here."

Kron3007

Quote from: secundem_artem on March 14, 2023, 10:55:42 AM
They have just announced "improvements" to ChatGPT. 

https://www.nytimes.com/2023/03/14/technology/openai-gpt4-chatgpt.html


Do these techno-libertarians think much about the potential human costs of their work?

Progress is inevitable.  You will be assimilated.  Resistance is futile...




Caracal

Quote from: marshwiggle on March 15, 2023, 05:41:13 AM
Quote from: apl68 on March 14, 2023, 02:03:32 PM
Quote from: Wahoo Redux on March 14, 2023, 12:41:03 PM
Quote from: secundem_artem on March 14, 2023, 10:55:42 AM
They have just announced "improvements" to ChatGPT. 

https://www.nytimes.com/2023/03/14/technology/openai-gpt4-chatgpt.html


Do these techno-libertarians think much about the potential human costs of their work?

When has technological advancement ever taken into account potential human costs (except to trumpet improvements in efficiency, etc.)?

It's always about "what can we do?" not "what should we do?"

That's often because the ultimate bad outcome of some technological advancement isn't in sight as the many incremental developments along the way happen. Most of the problems are due to unintended consequences, rather than desired goals.
So at each stage of incremental development, there's no obvious moral reason to avoid getting over the current technological hurdle.

Although it's worth pointing out that the historical track record of correctly predicting potential harms is not great. People tend to worry about the wrong things. There's no reason anyone should trust my predictions, but it does seem to me that we often fixate on the dangers of Artificial Intelligence and don't pay enough attention to the ways in which technologies enable and amplify the bad behavior and evil of old-fashioned humans.

Nobody spent much time worrying about Facebook and YouTube, because they were just platforms for people to display their creativity without traditional gatekeepers. It was Deep Blue and the fact that computers could beat the best humans at chess that got all the attention. We're always afraid of being replaced or undermined by alien technology, but all the bad stuff keeps being the result of people sitting around typing on their phones.