News:

Welcome to the new (and now only) Fora!

Main Menu

Welcome to Campus: Here's Your ChatGPT

Started by Langue_doc, June 07, 2025, 04:23:21 PM

Previous topic - Next topic

Langue_doc

NYT article on ChatGPT
QuoteWelcome to Campus. Here's Your ChatGPT.
OpenAI, the firm that helped spark chatbot cheating, wants to embed A.I. in every facet of college. First up: 460,000 students at Cal State.

Behind a paywall, so here's the article:
QuoteOpenAI, the maker of ChatGPT, has a plan to overhaul college education — by embedding its artificial intelligence tools in every facet of campus life.

If the company's strategy succeeds, universities would give students A.I. assistants to help guide and tutor them from orientation day through graduation. Professors would provide customized A.I. study bots for each class. Career services would offer recruiter chatbots for students to practice job interviews. And undergrads could turn on a chatbot's voice mode to be quizzed aloud ahead of a test.

OpenAI dubs its sales pitch "A.I.-native universities."

"Our vision is that, over time, A.I. would become part of the core infrastructure of higher education," Leah Belsky, OpenAI's vice president of education, said in an interview. In the same way that colleges give students school email accounts, she said, soon "every student who comes to campus would have access to their personalized A.I. account."

To spread chatbots on campuses, OpenAI is selling premium A.I. services to universities for faculty and student use. It is also running marketing campaigns aimed at getting students who have never used chatbots to try ChatGPT.

Some universities, including the University of Maryland and California State University, are already working to make A.I. tools part of students' everyday experiences. In early June, Duke University began offering unlimited ChatGPT access to students, faculty and staff. The school also introduced a university platform, called DukeGPT, with A.I. tools developed by Duke.

OpenAI's campaign is part of an escalating A.I. arms race among tech giants to win over universities and students with their chatbots. The company is following in the footsteps of rivals like Google and Microsoft that have for years pushed to get their computers and software into schools, and court students as future customers.

The competition is so heated that Sam Altman, OpenAI's chief executive, and Elon Musk, who founded the rival xAI, posted dueling announcements on social media this spring offering free premium A.I. services for college students during exam period. Then Google upped the ante, announcing free student access to its premium chatbot service "through finals 2026."

OpenAI ignited the recent A.I. education trend. In late 2022, the company's rollout of ChatGPT, which can produce human-sounding essays and term papers, helped set off a wave of chatbot-fueled cheating. Generative A.I. tools like ChatGPT, which are trained on large databases of texts, also make stuff up, which can mislead students.

Less than three years later, millions of college students regularly use A.I. chatbots as research, writing, computer programming and idea-generating aides. Now OpenAI is capitalizing on ChatGPT's popularity to promote the company's A.I. services to universities as the new infrastructure for college education.

OpenAI's service for universities, ChatGPT Edu, offers more features, including certain privacy protections, than the company's free chatbot. ChatGPT Edu also enables faculty and staff to create custom chatbots for university use. (OpenAI offers consumers premium versions of its chatbot for a monthly fee.)

OpenAI's push to A.I.-ify college education amounts to a national experiment on millions of students. The use of these chatbots in schools is so new that their potential long-term educational benefits, and possible side effects, are not yet established.

A few early studies have found that outsourcing tasks like research and writing to chatbots can diminish skills like critical thinking. And some critics argue that colleges going all-in on chatbots are glossing over issues like societal risks, A.I. labor exploitation and environmental costs.

OpenAI's campus marketing effort comes as unemployment has increased among recent college graduates — particularly in fields like software engineering, where A.I. is now automating some tasks previously done by humans. In hopes of boosting students' career prospects, some universities are racing to provide A.I. tools and training.

California State University announced this year that it was making ChatGPT available to more than 460,000 students across its 23 campuses to help prepare them for "California's future A.I.-driven economy." Cal State said the effort would help make the school "the nation's first and largest A.I.-empowered university system."

Some universities say they are embracing the new A.I. tools in part because they want their schools to help guide, and develop guardrails for, the technologies.

"You're worried about the ecological concerns. You're worried about misinformation and bias," Edmund Clark, the chief information officer of California State University, said at a recent education conference in San Diego. "Well, join in. Help us shape the future."

Last spring, OpenAI introduced ChatGPT Edu, its first product for universities, which offers access to the company's latest artificial intelligence. Paying clients like universities also get more privacy: OpenAI says it does not use the information that students, faculty and administrators enter into ChatGPT Edu to train its A.I.

(The New York Times has sued OpenAI and its partner, Microsoft, over copyright infringement. Both companies have denied wrongdoing.)

Last fall, OpenAI hired Ms. Belsky to oversee its education efforts. An ed tech start-up veteran, she previously worked at Coursera, which offers college and professional training courses.

She is pursuing a two-pronged strategy: marketing OpenAI's premium services to universities for a fee while advertising free ChatGPT directly to students. OpenAI also convened a panel of college students recently to help get their peers to start using the tech.

Among those students are power users like Delphine Tai-Beauchamp, a computer science major at the University of California, Irvine. She has used the chatbot to explain complicated course concepts, as well as help explain coding errors and make charts diagraming the connections between ideas.

"I wouldn't recommend students use A.I. to avoid the hard parts of learning," Ms. Tai-Beauchamp said. She did recommend students try A.I. as a study aid. "Ask it to explain something five different ways."

Ms. Belsky said these kinds of suggestions helped the company create its first billboard campaign aimed at college students.

"Can you quiz me on the muscles of the leg?" asked one ChatGPT billboard, posted this spring in Chicago. "Give me a guide for mastering this Calc 101 syllabus," another said.

Ms. Belsky said OpenAI had also begun funding research into the educational effects of its chatbots.

"The challenge is, how do you actually identify what are the use cases for A.I. in the university that are most impactful?" Ms. Belsky said during a December A.I. event at Cornell Tech in New York City. "And then how do you replicate those best practices across the ecosystem?"

Some faculty members have already built custom chatbots for their students by uploading course materials like their lecture notes, slides, videos and quizzes into ChatGPT.

Jared DeForest, the chair of environmental and plant biology at Ohio University, created his own tutoring bot, called SoilSage, which can answer students' questions based on his published research papers and science knowledge. Limiting the chatbot to trusted information sources has improved its accuracy, he said.

"The curated chatbot allows me to control the information in there to get the product that I want at the college level," Professor DeForest said.

But even when trained on specific course materials, A.I. can make mistakes. In a new study — "Can A.I. Hold Office Hours?" — law school professors uploaded a patent law casebook into A.I. models from OpenAI, Google and Anthropic. Then they asked dozens of patent law questions based on the casebook and found that all three A.I. chatbots made "significant" legal errors that could be "harmful for learning."

"This is a good way to lead students astray," said Jonathan S. Masur, a professor at the University of Chicago Law School and a co-author of the study. "So I think that everyone needs to take a little bit of a deep breath and slow down."

OpenAI said the 250,000-word casebook used for the study was more than twice the length of text that its GPT-4o model can process at once. Anthropic said the study had limited usefulness because it did not compare the A.I. with human performance. Google said its model accuracy had improved since the study was conducted.

Ms. Belsky said a new "memory" feature, which retains and can refer to previous interactions with a user, would help ChatGPT tailor its responses to students over time and make the A.I. "more valuable as you grow and learn."

Privacy experts warn that this kind of tracking feature raises concerns about long-term tech company surveillance.

In the same way that many students today convert their school-issued Gmail accounts into personal accounts when they graduate, Ms. Belsky envisions graduating students bringing their A.I. chatbots into their workplaces and using them for life.

"It would be their gateway to learning — and career life thereafter," Ms. Belsky said.


Parasaurolophus

Just lovely. What a relief that CSULA ghosted me after my interview!
I know it's a genus.

Hibush

ChatGPT stores their whole request history. Presumably that will be available to the instructors as part of the assessment of learning.

Langue_doc

Don't you love this quote: "universities would give students A.I. assistants to help guide and tutor them from orientation day through graduation"?

Hegemony

As if what is missing from college students' lives is a machine that will summarize stuff.

What is really missing (when something is missing) is the determination to do the work. I guess the idea is that if you make the work easy enough, they won't need any determination.

I don't think I want a doctor who's had a machine summarize the material for them, nor do I want to fly on a plane designed by someone who's had a machine summarize the material for them.

Sun_Worshiper

AI is here. It is unfortunate in some ways, but it isn't going back in the box. Better we all (including students) get accustomed to using it and leveraging its strengths.

Minervabird

I am glad I am retired, that is for sure.

spork

Quote from: Langue_doc on June 07, 2025, 04:23:21 PMNYT article on ChatGPT
QuoteWelcome to Campus. Here's Your ChatGPT.
OpenAI, the firm that helped spark chatbot cheating, wants to embed A.I. in every facet of college. First up: 460,000 students at Cal State.

...

I mentioned the CSU deal in early February.

I see nothing wrong with this. Cheating was rife long before chatbots. I'd rather have students testing themselves via the university-level equivalent of Khanmigo than downloading problem set solutions from Chegg.

A few months ago I played with a chatbot now being used by a medical school to help train medical students in patient communication. It worked well.
It's terrible writing, used to obfuscate the fact that the authors actually have nothing to say.

jimbogumbo

Quote from: spork on June 08, 2025, 01:40:13 PMA few months ago I played with a chatbot now being used by a medical school to help train medical students in patient communication. It worked well.

Completely agree. IF it is done well of course. Here is an article (inter threaduality) of it not being used well at all. Hint: DOGE. Full disclosure: child's company works with VA

https://www.propublica.org/article/trump-doge-veterans-affairs-ai-contracts-health-care

Parasaurolophus

#9
Quote from: spork on June 08, 2025, 01:40:13 PM
Quote from: Langue_doc on June 07, 2025, 04:23:21 PMNYT article on ChatGPT
QuoteWelcome to Campus. Here's Your ChatGPT.
OpenAI, the firm that helped spark chatbot cheating, wants to embed A.I. in every facet of college. First up: 460,000 students at Cal State.

...

I mentioned the CSU deal in early February.

I see nothing wrong with this. Cheating was rife long before chatbots. I'd rather have students testing themselves via the university-level equivalent of Khanmigo than downloading problem set solutions from Chegg.

A few months ago I played with a chatbot now being used by a medical school to help train medical students in patient communication. It worked well.

My cheating rate was 25-30% before. It's 90%+ now.

The students are using it exactly like Chegg.

Edit: worse, really, since they're using it to answer discussion prompts and throwout questions and stuff too.
I know it's a genus.

spork

I sympathize. My wife teaches writing. AI has demolished most of her pedagogical repertoire and she's been forced to professionally reinvent herself. But in my opinion AI has just made the systemic rot in higher education harder to ignore.

Ecclesiastical authorities in Europe made the same complaints about the printing press that we're now making about AI.
It's terrible writing, used to obfuscate the fact that the authors actually have nothing to say.


AmLitHist

A sneaky trick I'd like to try is one I heard from a colleague at another school:  include some completely irrelevant term in the essay prompt, and put it in white font. Then, if the student simply copies and pastes the prompt into an AI generator, that term would be a prominent point in the resulting AI-generated paper. (E.g., for an argument prompt on universal health care, insert a comment at the end of each paragraph and in the title concerning "rutabagas" or whatever, in white font - it won't be visible to the student, but AI will wax poetic about the effects of the vegetable on health insurance costs.) The colleague got some hilarious results using this during the spring term.

I fought AI so much this past spring that I'm kind of past caring for these two summer classes, and I'm not teaching in fall. But if I were in the trenches for the fall, I would definitely do this with my prompts - all in the interest of pedagogical research, of course. Ahem.

Hegemony

Those white-font tricks are impossible to do on Canvas. And as soon as the student copies and pastes the instructions, the white font turns to black, so the student could argue "I saw that it required a mention of rutabagas, and that's why I put one in."

I kinda think this white-font trick is an urban legend — I've never found a way to make it work.

What does work is for some of the required reading to be in a PDF, which cannot be scanned by AI. Thus if you require examples from the reading, the student had to put in the examples by hand, so to speak. Which means actually doing some of the reading. And writing.

Puget

It is hard to see how all online classes are going to prevent rampant cheating in any way.

For in person, you can simply give exams and in-class assignments on paper, which is what I do. For the in class assignments, I thought students might complain about not being able to type them, but they seemed to actually really like doing them on paper, and were much more interactive with their groups.

A few students have accommodations to type, and that's fine -- I don't mind them using laptops for in class assignments and for exams they can use a special no-internet connection laptop issued by student support services. 

Quote from: Hegemony on June 09, 2025, 12:25:48 PMI kinda think this white-font trick is an urban legend — I've never found a way to make it work.

What does work is for some of the required reading to be in a PDF, which cannot be scanned by AI. Thus if you require examples from the reading, the student had to put in the examples by hand, so to speak. Which means actually doing some of the reading. And writing.

I don't know about within an LMS, but the white font thing does work for online research platforms - it is one of the main ways people catch bots and participants using AI to complete surveys for them.

Regarding the PDFs, do you mean scanned PDFs that are not screen readable? If so, how do you then deal with accessibility issues? We're required to have our materials be accessible to screen readers.
"Never get separated from your lunch. Never get separated from your friends. Never climb up anything you can't climb down."
–Best Colorado Peak Hikes