The Alternate Universes of AI & Teaching

Started by spork, July 10, 2024, 05:57:14 AM


spork

It's terrible writing, used to obfuscate the fact that the authors actually have nothing to say.

Sun_Worshiper

I can't read the articles due to paywall, but I will just say that AI is here to stay and we're better off teaching students to use it effectively, with an understanding of its strengths and weaknesses, than telling them not to use it at all.
 

Wahoo Redux

Quote from: Sun_Worshiper on July 10, 2024, 09:16:04 AMI can't read the articles due to paywall, but I will just say that AI is here to stay and we're better off teaching students to use it effectively, with an understanding of its strengths and weaknesses, than telling them not to use it at all.
 

Amen.

We want it to be dumb and useless, but it is neither.  Nor is it going away. 
Come, fill the Cup, and in the fire of Spring
Your Winter-garment of Repentance fling:
The Bird of Time has but a little way
To flutter--and the Bird is on the Wing.

Parasaurolophus

I'm only grading robots, for basically anything I assign. My colleagues have found students bringing burner phones into in-person exams so that they can take snapshots of questions and upload them to ChatGPT.

Cheating was rampant here before. It's now fully out of control. I have no idea what to do. There are a lot of structural problems I can't overcome (e.g., widespread functional illiteracy in English, having to juggle several full-time jobs to make the rent, not having a stable living situation because of the rent, etc.), and then a million tiny obstacles that I can whack-a-mole, but for which whack-a-moling is really not very helpful (e.g., not knowing what it means when I refer to "the readings", not knowing whether the class is online or in-person, or the difference between synchronous and asynchronous learning, not knowing where to find the schedule, not knowing that the dates in the schedule correspond to dates when we do stuff, not knowing how to log in to the LMS, not following short, basic instructions, not reading the instructions at all, not showing up to class, leaving class after five minutes, etc.).

Then there's the bigger suite of problems LLMs have introduced, which include simply not believing that they are capable of doing the work in the first place (which, to be fair, is true of many in general, but it's certainly not true for many of the assignments).


Quote from: Sun_Worshiper on July 10, 2024, 09:16:04 AMI can't read the articles due to paywall, but I will just say that AI is here to stay and we're better off teaching students to use it effectively, with an understanding of its strengths and weaknesses, than telling them not to use it at all.
 

Before they can "use it effectively," they need to know how to do stuff (forget knowing content!). I'm trying to show them how to do stuff in the first place.

FFS, I would teach them how to cheat effectively, except that I'd run into the same problems.



I know it's a genus.

Sun_Worshiper

Quote from: Parasaurolophus on July 12, 2024, 09:57:18 AMI'm only grading robots, for basically anything I assign. My colleagues have found students bringing burner phones into in-person exams so that they can take snapshots of questions and upload them to ChatGPT.

[...]





I share your concerns - particularly about wasting my time grading essays that students did not even write. But I've observed that the stronger students get that AI is a great supplemental tool to make their work better. Weak and lazy students who use it as a substitute for writing and thinking will come out unprepared for the real world, just as weak and lazy students did pre-ChatGPT.

Parasaurolophus

Yeah, but... We're getting, like, 80%+ AI work. On essays, discussion posts, quizzes, exams, in-class work, presentations, etc.

We've completely lost control.

Sun_Worshiper

Quote from: Parasaurolophus on July 12, 2024, 01:58:58 PMYeah, but... We're getting, like, 80%+ AI work. On essays, discussion posts, quizzes, exams, in-class work, presentations, etc.

We've completely lost control.

I hear you. It is frustrating. But that's the way of the world. Gotta adjust.

Parasaurolophus

Sure. But how? At this point, I've (and we've, departmentally) tried an awful lot of stuff, and it hasn't made a dent.

Sun_Worshiper

Quote from: Parasaurolophus on July 12, 2024, 04:23:10 PMSure. But how? At this point, I've (and we've, departmentally) tried an awful lot of stuff, and it hasn't made a dent.

I have not found the magic solution, but hopefully each of the steps that I have taken (probably the same ones you are using) helps at the margins.

The main thing I emphasize to them is that AI is great, and that they should learn to use it, but it also has some significant weaknesses, from hallucinating to offering much more in the way of style than substance. Does this get through to all of them, or even most of them? No. But the strongest students hear me and their work is better than ever.

As for grading, I give a much less substantive response to a paper that I suspect was written entirely by AI and with little-to-no input from the student. And a lower grade.



spork

Quote from: Sun_Worshiper on July 12, 2024, 06:49:47 PM[...]

As for grading, I give a much less substantive response to a paper that I suspect was written entirely by AI and with little-to-no input from the student. And a lower grade.


What criteria do you use that result in lower grades for obvious AI-generated slop?

Only a handful of my undergrad students have any intrinsic motivation to read. Previously I assigned short (no more than a half-page) pieces of writing with prompts on reading assignments, and graded them with a two-column (yes/no), three-row rubric:

  • Organized, clearly-stated argument.
  • Relevant use of assigned sources, correctly cited (i.e., must include page number).
  • Correct spelling, punctuation, and grammar.

For the coming fall semester, I'm collapsing the three rows to two -- (1) organized/concise argument and correct mechanics, and (2) use of assigned sources. I'm hoping this means that students who just copy/paste from ChatGPT will get a score of no more than 50% on these assignments.

Probably more importantly, I'm converting at least half of these writing assignments into annotation exercises on Perusall. I assume that students are less likely to copy/paste AI output from prompts if they are annotating a text with classmates, and Perusall already has some AI-detection ability (it flags comments that are pasted or otherwise suspicious).

fishbrains

The way I explain this to students (and administrators) is that AI is super-cool. And so is Wikipedia. And the Google. We all use these tools for information. However, while these tools can help us, they should not be writing our sentences, paragraphs, and essays for us. This is flat-out cheating and I will do my best to catch and fail students who do this. In short, using AI to write a paragraph or more for an essay isn't really any different than copying and pasting an article from Wikipedia or Google--even if it's kind of neat to watch the program produce it in front of you.

I now place much more emphasis on the scaffolding side of essay assignments: topic proposal, topic development sheet, annotated bibliography (even for fairly short papers), outline, first two paragraphs, draft, peer review, revised final version. Most of this is a quick review and check-off kind of grade so it's not too burdensome.

The big trick is that I will not grade an essay that hasn't met the scaffolding requirements, and so far my dean has backed us up as long as this is in the syllabus.

In my syllabus, I also reserve the option of quizzing students on their submissions (as in "Tell me more about Sartre's version of existentialism that you referenced in your essay") as a way to show an essay isn't AI-generated. I also state that I may ask for paper copies of their sources (totally old-school!).

So far, I've queried six students about their essays, and two of them passed. The other four caved pretty quickly. There have been four or five others where I did not even have to query, because the AI detector showed 100%.

And, as always, there are some submissions in the gray areas where I'll just never know.

And I have pretty much abandoned shorter written responses for quizzing (under 200 words or so) unless they are hand-written in class. This is a much more challenging thing for an internet course.

All that said, I have a very supportive dean, and the folks on the student services side are tired of reading AI-generated drivel from students in admissions letters and scholarship applications. So I am lucky there.

I guess we'll see what kind of post I'm writing this time next year.
 

I wish I could find a way to show people how much I love them, despite all my words and actions. ~ Maria Bamford

Sun_Worshiper

Quote from: spork on July 13, 2024, 02:27:57 AM[...]

What criteria do you use that result in lower grades for obvious AI-generated slop?

[...]

Sounds like a good strategy to me.

At this point, I think every essay or discussion post should be well written. AI does that part for the student. But if the substance is not there - not just citing an article or book, but engaging with its contents - then I give lower marks. In some cases the work may be shallow for reasons other than AI use, but either way I'm expecting more from students now than I was before.

I also try not to worry about it too much. When students take shortcuts, they take a risk that the work will be weak. For those who constantly take shortcuts, it will usually catch up with them.

And from what I've seen, AI is not making C-students into A-students. Instead it is turning C-students into C+/B- students.

spork

Quote from: Sun_Worshiper on July 13, 2024, 09:50:37 AM[...]

not just citing an article or book, but engaging with its contents

[...]

I'm expecting more from students now than I was before.

[...]

risk that the work will be weak.

[...]

How do you make these standards explicit in a rubric? What language do you use?

At this stage of my life, good writing is like porn. I instantly recognize it when I see it. But students don't have this ability.

Sun_Worshiper

Quote from: spork on July 13, 2024, 02:10:34 PM[...]

How do you make these standards explicit in a rubric? What language do you use?

[...]

It depends on the class and the assignment, and sometimes I do not use rubrics, but for the final paper in one of my spring courses my rubric had five components:
1. Understanding of the Topic
2. Argument and Analysis
3. Use of Sources
4. Organization and Coherence
5. Writing Quality

So writing quality matters, but other factors matter more (writing quality is worth only 15%). For Use of Sources, earning full points requires students to "integrate at least two high-quality sources effectively to support claims." Points go down as degree of integration decreases. Same for other components of the rubric.*

Beyond rubrics, what I'm saying is that if the work is not substantive then it will be docked points, even if it is well written. AI can write well on its own in terms of prose and structure, but students have to go the extra step to substantively engage with course content.

My approach is not a cure for lazy students relying on AI. But it does reduce the burden on me to give substantive feedback to these students on work that they barely contributed to.

*Btw I had ChatGPT create the rubric for me, although of course I had to edit it a bit. AI is saving me time on all sorts of annoying little tasks, while also acting as a sort of research assistant for me in more substantive areas.

spork

I teach asynchronous online courses at the graduate level. The grad students, at least for now, are intrinsically motivated to learn and have good writing skills. But given the course environment, my exams are all "take home" essays. Do you know of any strategies for designing essay exams, or writing assignments in general, that circumvent (or at least don't reward) AI usage? I can't have the students fly in to sit for a proctored exam. Synchronous online sessions with locked down browsers and webcams are a nightmare to organize given technical problems and students being located all over the world. For the subjects I teach, a timed exam session in which students vomit words onto the page is not a good way to assess learning.

For example, I can now upload sources that I haven't read and AI will produce a reasonably good analysis of all of them. This is one reason I have abandoned "write a review of such-and-such published scholarship on topic X" assignments. Yet my exams require students to integrate references to sources listed in the syllabus into their essays.