News:

Welcome to the new (and now only) Fora!

Main Menu

The Alternate Universes of AI & Teaching

Started by spork, July 10, 2024, 05:57:14 AM

Previous topic - Next topic

RatGuy

One of the requirements in my assignments is analysis of direct quotations. For my lower-level undergrads, I give them a set of "steps" for direct textual analysis. So while I've seen AI generate an okay analysis of a given text, I've yet to see it accurately quote from a text, much less find a passage relevant to the topic, quote it, and offer a specific analysis.

For example, a student might ask AI to write an analysis of pastoral images in The Blithedale Romance and get something decently cogent. But ask AI to quote from the novel and you'll get something that SOUNDS like Hawthorne to someone unfamiliar with the novel, but won't actually be. And if the student feeds AI a quote from, say, Bartleby.com, again the analysis may be OK but it won't be able to offer accurate context.

Of course, I have no way of knowing if a student snuck an AI-generated paper past me, but given the other things I know about my students I believe that (so far) none have submitted a passing paper using completely generated by AI

spork

Quote from: RatGuy on July 14, 2024, 01:59:43 PMOne of the requirements in my assignments is analysis of direct quotations. For my lower-level undergrads, I give them a set of "steps" for direct textual analysis. So while I've seen AI generate an okay analysis of a given text, I've yet to see it accurately quote from a text, much less find a passage relevant to the topic, quote it, and offer a specific analysis.

For example, a student might ask AI to write an analysis of pastoral images in The Blithedale Romance and get something decently cogent. But ask AI to quote from the novel and you'll get something that SOUNDS like Hawthorne to someone unfamiliar with the novel, but won't actually be. And if the student feeds AI a quote from, say, Bartleby.com, again the analysis may be OK but it won't be able to offer accurate context.

Of course, I have no way of knowing if a student snuck an AI-generated paper past me, but given the other things I know about my students I believe that (so far) none have submitted a passing paper using completely generated by AI


So students get:

1. Topic X.
2. Text Y.
3. "Find a part of text Y that's relevant to topic X."
4. "Analyze that passage from Y in terms of X."
5. "Include quotations from Y as part of your analysis."

?
It's terrible writing, used to obfuscate the fact that the authors actually have nothing to say.

Parasaurolophus

Quote from: spork on July 14, 2024, 05:33:11 AMI teach asynchronous online courses at the graduate level. The grad students, at least for now, are intrinsically motivated to learn and have good writing skills. But given the course environment, my exams are all "take home" essays. Do you know of any strategies for designing essay exams, or writing assignments in general, that circumvent (or at least don't reward) AI usage? I can't have the students fly in to sit for a proctored exam. Synchronous online sessions with locked down browsers and webcams are a nightmare to organize given technical problems and students being located all over the world. For the subjects I teach, a timed exam session in which students vomit words onto the page is not a good way to assess learning.

For example, I can now upload sources that I haven't read and AI will produce a reasonably good analysis of all of them. This is one reason I have abandoned "write a review of such-and-such published scholarship on topic X" assignments. Yet my exams require students to integrate references to sources listed in the syllabus into their essays.

You can insert a Trojan horse to catch the dumbest copy/pasting. Just make sure your prompt is divided into at least two sections, and at the end of the first (or on the line in between), write an instruction to say something silly or wildly off-topic (e.g. about shark anatomy). Make that text as small as you can, and colour it white.
I know it's a genus.

spork

Quote from: Parasaurolophus on July 14, 2024, 03:55:43 PM[...]

You can insert a Trojan horse to catch the dumbest copy/pasting. Just make sure your prompt is divided into at least two sections, and at the end of the first (or on the line in between), write an instruction to say something silly or wildly off-topic (e.g. about shark anatomy). Make that text as small as you can, and colour it white.

I'm not interested in playing "gotcha" with grad students who are mid- to upper-level government and military personnel. I'm looking for a method of assessment where it doesn't matter if they use or don't use AI.
It's terrible writing, used to obfuscate the fact that the authors actually have nothing to say.

Parasaurolophus

Quote from: spork on July 14, 2024, 04:52:58 PM
Quote from: Parasaurolophus on July 14, 2024, 03:55:43 PM[...]

You can insert a Trojan horse to catch the dumbest copy/pasting. Just make sure your prompt is divided into at least two sections, and at the end of the first (or on the line in between), write an instruction to say something silly or wildly off-topic (e.g. about shark anatomy). Make that text as small as you can, and colour it white.

I'm not interested in playing "gotcha" with grad students who are mid- to upper-level government and military personnel. I'm looking for a method of assessment where it doesn't matter if they use or don't use AI.


Ah, I missed the 'graduate' part. For my part, I think I'd be comfortable trusting graduate students to not use it/use it responsibly. But maybe that's naïve. Has their work raised flags for you before?
I know it's a genus.

spork

Quote from: Parasaurolophus on July 14, 2024, 08:33:16 PM
Quote from: spork on July 14, 2024, 04:52:58 PM
Quote from: Parasaurolophus on July 14, 2024, 03:55:43 PM[...]

You can insert a Trojan horse to catch the dumbest copy/pasting. Just make sure your prompt is divided into at least two sections, and at the end of the first (or on the line in between), write an instruction to say something silly or wildly off-topic (e.g. about shark anatomy). Make that text as small as you can, and colour it white.

I'm not interested in playing "gotcha" with grad students who are mid- to upper-level government and military personnel. I'm looking for a method of assessment where it doesn't matter if they use or don't use AI.


Ah, I missed the 'graduate' part. For my part, I think I'd be comfortable trusting graduate students to not use it/use it responsibly. But maybe that's naïve. Has their work raised flags for you before?

No, but, to analogize with mathematics, current methods for assessing factual knowledge and analytical thinking ability are the equivalent of expecting students to calculate the average of ten two-digit numbers by hand. I would much rather have students demonstrate that they can solve a problem by knowing when to use a mean vs. a median; I don't care if they use a computer to calculate the figures.
It's terrible writing, used to obfuscate the fact that the authors actually have nothing to say.

Larimar

AI is one of the biggest reasons why I'm transitioning from teaching first year comp to tutoring writing for "learning support services" at OnlyGameInTown CC. Maybe I'm turning into a cranky old lady, but I just can't stomach the incomprehension I see that not doing their own work is cheating, and the devaluing of writing as a skill and an art due to knowing that it's something a computer can do for them. Writing is now seen as a useless, needless chore instead of the wonderful means of learning and expression that I have experienced it as all my life. Trying to convince them hasn't worked. I'm burned out on that. Working with them one on one as a tutor is different. Because it's specifically about their work, they see it as relevant, and they pay attention.

spork

Quote from: Larimar on July 15, 2024, 04:29:17 AMAI is one of the biggest reasons why I'm transitioning from teaching first year comp to tutoring writing for "learning support services" at OnlyGameInTown CC. Maybe I'm turning into a cranky old lady, but I just can't stomach the incomprehension I see that not doing their own work is cheating, and the devaluing of writing as a skill and an art due to knowing that it's something a computer can do for them. Writing is now seen as a useless, needless chore instead of the wonderful means of learning and expression that I have experienced it as all my life. Trying to convince them hasn't worked. I'm burned out on that. Working with them one on one as a tutor is different. Because it's specifically about their work, they see it as relevant, and they pay attention.


I'm getting closer to the burnout point myself. My wife, who has a PhD in rhetoric and composition, taught me the phrase "sloppy writing indicates sloppy thinking." With the decline in K-12 outcomes and the lowering of my employer's admissions standards, I find myself increasingly unable to persuade undergraduate students that writing is a valuable skill because it trains them how to think better.

With AI, the situation has rapidly worsened. I would like to design assignments that require better writing from students, even if they use AI, but they see AI as a way to circumvent writing altogether rather than as a tool that might help them improve.
It's terrible writing, used to obfuscate the fact that the authors actually have nothing to say.

apl68

Quote from: spork on July 15, 2024, 07:35:42 AMWith AI, the situation has rapidly worsened. I would like to design assignments that require better writing from students, even if they use AI, but they see AI as a way to circumvent writing altogether rather than as a tool that might help them improve.

I can't help thinking that AI applications in general are creating this sort of mentality, not just in writing.  I've said before that the more AI we see, the less the real article seems in evidence.
And you will cry out on that day because of the king you have chosen for yourselves, and the Lord will not hear you on that day.

spork

Quote from: apl68 on July 15, 2024, 10:38:08 AM
Quote from: spork on July 15, 2024, 07:35:42 AMWith AI, the situation has rapidly worsened. I would like to design assignments that require better writing from students, even if they use AI, but they see AI as a way to circumvent writing altogether rather than as a tool that might help them improve.

I can't help thinking that AI applications in general are creating this sort of mentality, not just in writing.  I've said before that the more AI we see, the less the real article seems in evidence.

In my opinion, the mentality started long before the invention of LLM/generative AI. I think the real rupture occurred with smartphones and social media. AI is just making it a few orders of magnitude worse.

I talked with a dean who I respect about AI. Given the substance of our conversation, I think we're pedagogically fucked for at least the next decade (in other words, until I retire).
It's terrible writing, used to obfuscate the fact that the authors actually have nothing to say.

Parasaurolophus

Quote from: spork on July 15, 2024, 07:35:42 AMI would like to design assignments that require better writing from students, even if they use AI, but they see AI as a way to circumvent writing altogether rather than as a tool that might help them improve.

That's really it. I can believe that this isn't the case in many places, but here, it's really what's going on. The idea is to do as little of the work required to take a class (let alone pass it) as possible. No matter how much I ease standards, how low the stakes, or what new assignments I craft, the bulk of every lower-level class turns to AI to avoid reading, writing, and, worse, thinking. It wasn't the case for my 300-level class this past winter, but we run, like, two of those a year. Everything else is at the 100 and 200 level.

Cheating was rampant before, and I had made real inroads, but now the battle is well and truly lost. It just piggy-backed on the existing cheating culture, and now...
I know it's a genus.

Ancient Fellow

The solution that keeps coming to my mind would probably be difficult to implement anywhere that had large class sizes. That would be to restructure the class to be evaluated primarily on an individual in-person oral examination and secondarily on performance in discussion seminars. Even with a small class size or in grad school, it would mean devoting an increased portion of the semester to evaluation.

spork

Quote from: Ancient Fellow on July 17, 2024, 03:00:45 AMThe solution that keeps coming to my mind would probably be difficult to implement anywhere that had large class sizes. That would be to restructure the class to be evaluated primarily on an individual in-person oral examination and secondarily on performance in discussion seminars. Even with a small class size or in grad school, it would mean devoting an increased portion of the semester to evaluation.

I think this method is helpful for assessing students' learning, but:

  • Students would gin up disability accommodations to avoid in-person, live oral examinations.
  • I'm not willing to put in the time and effort required for interviewing all of my students. If I worked at Princeton and taught only a dozen undergrads each semester, I might feel differently.
  • Students would be doing even less writing than they are now.


I suppose I could base the majority of the course grade on an end-of-semester written self-assessment, a la Reed College. But I'm already seeing students turn to AI for reflective writing assignments that are supposed to be autobiographical.
It's terrible writing, used to obfuscate the fact that the authors actually have nothing to say.

apl68

Quote from: spork on July 17, 2024, 04:35:54 AMI suppose I could base the majority of the course grade on an end-of-semester written self-assessment, a la Reed College. But I'm already seeing students turn to AI for reflective writing assignments that are supposed to be autobiographical.

You mean that even when they essentially get a chance to talk all about themselves, they let AI do it?  This really is a strange new world.  It's like they'd rather pretend to live a life than actually do so.
And you will cry out on that day because of the king you have chosen for yourselves, and the Lord will not hear you on that day.

spork

Quote from: apl68 on July 17, 2024, 06:41:10 AM
Quote from: spork on July 17, 2024, 04:35:54 AMI suppose I could base the majority of the course grade on an end-of-semester written self-assessment, a la Reed College. But I'm already seeing students turn to AI for reflective writing assignments that are supposed to be autobiographical.

You mean that even when they essentially get a chance to talk all about themselves, they let AI do it?  This really is a strange new world.  It's like they'd rather pretend to live a life than actually do so.

Yes. It's no longer a self-centered world, it's an imaginary self-centered world in which the goal is to appear perfect and avoid the possibility of failure at all costs.

Here is an example, which I will note is nearly a year old (AI has improved greatly since then): https://activelearningps.com/2023/09/14/what-chatgpt-can-do/.
It's terrible writing, used to obfuscate the fact that the authors actually have nothing to say.