Open Book Test

Started by HigherEd7, November 15, 2019, 08:40:19 AM


polly_mer

Yep, I saw all that when I became director of online education.  I insisted on having a lot of talks about how academic freedom meant creativity in the backwards design from the course goals to ensure that students who put in enough effort to learn would indeed learn and be able to meet the goals.  It did not mean altering the course goals to the point that anyone who did about an hour's work over the course of the term should get an A.

At the time, I was also knee-deep in what constituted acceptable work levels for the Carnegie credit hour so that I could write a solid accreditation report and keep us out of the media for being put on probation by the accreditors for offering too little work for college credit.  The online program at Adams State University was in the news around that time for having far too little content for such an accelerated course.
Quote from: hmaria1609 on June 27, 2019, 07:07:43 PM
Do whatever you want--I'm just the background dancer in your show!

bibliothecula

You must have a lot of time on your hands if you can police your peers' exams and grading so closely. Have you given any thought to the answers polly_mer and I provided?

Quote from: Aster on December 18, 2019, 06:30:56 AM
Thank you for the responses. I am pleased to see that some people actually put careful thought into their assessments. And not just the assessment itself, but the delivery of the assessment, and who actually may be completing the assessment.

I used to think that professors were more professional with their assessment strategies to ensure fairness, appropriate rigor, and security.

But it grated on me to keep going to department meetings and having people constantly complain about cheating, other professors' lax grading standards, being stuck with terrible students because the pre-requisite professors weren't assessing properly, blaming adjuncts for everything, blaming the administration for everything, blah blah blah. Plus, the complaints never seemed to go anywhere. Most complaints were based on anecdotal hearsay. Complainers rarely offered solutions.

And then I stopped taking my colleagues' words for it, and began directly checking into professors' assessments and grade distributions. The resources at Big Urban College allow me to do this very easily.

Curiously, the majority of the courses with severe problems (within my discipline) are the fully online courses, and they share these specific issues:
- online assessments are *not* written at the higher levels of Bloom's Taxonomy (they are regurgitation-level knowledge questions that are easily googled)
- online assessments are *not* secured from repeat viewing or copying (and yes, I've seen some of my colleagues' exams pirated onto the internet, answer keys included)
- online assessments are *not* group-work based, yet there is no way to prevent students from completing them in groups or with anybody else (professors are not implementing the available security tools)
- online assessments are *not* original nor frequently changed out (to offset successful cheating)
- online assessments are few and high-stakes (this enhances pressure to cheat)
- there are online advertisements in the area where students can pay somebody else to take their online assessments for them (and then they list example courses which happen to be courses in my department)

These courses are our department's "problem" courses, and we get no support from the administration to correct them. The grade distributions are among the best in the college. The classes instantly fill. That's "success" to anyone outside of the Academy. The TT and adjunct faculty who engage in these shenanigans are extremely difficult to remove from their courses. They just leapfrog the department heads, deans, and provosts and talk directly to the president about their sky-high pass rates, sky-high attendance figures, and sky-high RateMyProfessors ratings.

So while I will apologize if I'm coming off sounding rough when people describe fully open-book, non-secured assessments, know that I have seen these practices heavily abused, and I continue to see them heavily abused. I would like to see professors take their assessment duties both seriously and competently, and I would like professors to understand the special challenges of operating fully online assessments.

I know very few professors who don't recycle the same exams over and over again. This is not so terrible if exams are heavily secured and never released to students. But a professor is an idiot if he/she releases exams (with or without keys) and then reuses the exact same exam in consecutive terms.

I know far too few professors who even know what Bloom's Taxonomy is (or any other standardized hierarchy of learning objectives). Not that this is essential knowledge in my discipline, but it certainly nips the next problem in the bud.

I know far too many professors who assess only regurgitation, memorization-level knowledge. This is fine if assessments are heavily secured and their completion is heavily proctored. But a professor is an idiot if he/she crafts purely knowledge-based exams online, doesn't secure them, and doesn't proctor them or activate any of the standard preventative tools (e.g. randomized questions, randomized responses, single-question viewing, timed responses, single attempts).

Aster

If you have the right tools, it takes 10 minutes to screen grade distributions across disciplines, across professors, and across campuses at Big Urban College. I can actually say with a straight face that Big Urban College is truly excellent at something. We're good at tracking professor grades. Super.
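
To give a concrete (and purely illustrative) picture of that ten-minute screen, here's a minimal sketch in Python, assuming a flat CSV export of section grades; the file name, column names, and the 25-point flagging threshold are all made up, and Big Urban College's actual reporting tools are certainly different:

```python
# Minimal sketch of screening grade distributions from a hypothetical export
# with columns: term, campus, discipline, course, section, instructor, grade.
import pandas as pd

POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

df = pd.read_csv("grade_export.csv")            # hypothetical file name
df["points"] = df["grade"].map(POINTS)

summary = (
    df.groupby(["discipline", "course", "instructor"])
      .agg(n_grades=("grade", "size"),
           mean_gpa=("points", "mean"),
           pct_a=("grade", lambda g: (g == "A").mean()))
      .reset_index()
)

# Flag instructors whose A-rate sits far above the course-wide average.
course_avg_a = summary.groupby("course")["pct_a"].transform("mean")
outliers = summary[summary["pct_a"] > course_avg_a + 0.25]
print(outliers.sort_values("pct_a", ascending=False).to_string(index=False))
```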

And if your department's enrollments are highly uneven because Professor Cheating Magnet grabs a disproportionate share of the students, and then demands overloads-for-pay to suck up even more of the enrollment, you too would spend the extra time documenting his courses' curriculum and assessment problems for the Dean. It's not so great having concerned faculty beating on your door wondering whether their classes will make, when all of the students are mysteriously gravitating into Professor Cheating Magnet's classes. A bit of curriculum and assessment checking clears up that mystery pretty quickly.

And when you're part of the academic administrative team at your college, evaluating professors (including your peers) is something that's expected of you. Observations. Curriculum oversight. Assessment examination. Compliance.

HigherEd7


Great response, and I agree. So what is the solution? Online education is a moneymaker for schools, and it is basically an open-book course; if you offer the same class face-to-face and online at the same university, students will register for the online section. So how can we make online courses more of a challenge for students?




Quote from: Aster on December 18, 2019, 06:30:56 AM
Thank you for the responses. I am pleased to see that some people actually put careful thought into their assessments. And not just the assessment itself, but the delivery of the assessment, and who actually may be completing the assessment.

I used to think that professors were more professional with their assessment strategies to ensure fairness, appropriate rigor, and security.

But it grated on me to keep going to department meetings and having people constantly complain about cheating, other professors' lax grading standards, being stuck with terrible students because the pre-requisite professors weren't assessing properly, blaming adjuncts for everything, blaming the administration for everything, blah blah blah. Plus, the complaints never seemed to go anywhere. Most complaints were based on anecdotal hearsay. Complainers rarely offered solutions.

And then I stopped taking my colleagues' words for it, and began directly checking into professors' assessments and grade distributions. The resources at Big Urban College allow me to do this very easily.

Curiously, the majority of the courses with severe problems (within my discipline) are the fully online courses, and they share these specific issues:
- online assessments are *not* written at the higher levels of Bloom's Taxonomy (they are regurgitation-level knowledge questions that are easily googled)
- online assessments are *not* secured from repeat viewing or copying (and yes, I've seen some of my colleagues' exams pirated onto the internet, answer keys included)
- online assessments are *not* group-work based, yet there is no way to prevent students from completing them in groups or with anybody else (professors are not implementing the available security tools)
- online assessments are *not* original nor frequently changed out (to offset successful cheating)
- online assessments are few and high-stakes (this enhances pressure to cheat)
- there are online advertisements in the area where students can pay somebody else to take their online assessments for them (and then they list example courses which happen to be courses in my department)

These courses are our department's "problem" courses, and we get no support from the administration to correct them. The grade distributions are among the best in the college. The classes instantly fill. That's "success" to anyone outside of the Academy. The TT and adjunct faculty who engage in these shenanigans are extremely difficult to remove from their courses. They just leapfrog the department heads, deans, and provosts and talk directly to the president about their sky-high pass rates, sky-high attendance figures, and sky-high RateMyProfessors ratings.

So while I will apologize if I'm coming off sounding rough when people describe fully open-book, non-secured assessments, know that I have seen these practices heavily abused, and I continue to see them heavily abused. I would like to see professors take their assessment duties both seriously and competently, and I would like professors to understand the special challenges of operating fully online assessments.

I know very few professors who don't recycle the same exams over and over again. This is not so terrible if exams are heavily secured and never released to students. But a professor is an idiot if he/she releases exams (with or without keys) and then reuses the exact same exam in consecutive terms.

I know far too few professors who even know what Bloom's Taxonomy is (or any other standardized hierarchy of learning objectives). Not that this is essential knowledge in my discipline, but it certainly nips the next problem in the bud.

I know far too many professors who assess only regurgitation, memorization-level knowledge. This is fine if assessments are heavily secured and their completion is heavily proctored. But a professor is an idiot if he/she crafts purely knowledge-based exams online, doesn't secure them, and doesn't proctor them or activate any of the standard preventative tools (e.g. randomized questions, randomized responses, single-question viewing, timed responses, single attempts).

Hegemony

Well, my own online courses show no sign of falling victim to these problems.  In fact, I had to make the tests easier: since none of the questions are merely factual (they all involve advanced synthesis), they are harder than my in-person tests, which include a proportion of merely factual questions.  So students were getting markedly lower scores on the online tests, until I wrote a selection of somewhat easier questions to mitigate the problem.

Here's how I manage the tests in the online courses:

All tests are non-proctored, since our students are taking the courses from all over the place and finding a university-approved proctor is cumbersome and sometimes impossible.

Every week of the course finishes with a test.  Each test has 20 questions and a cut-off time of 20 minutes.  This is meant to prevent students from spending much time flipping through books looking for clues, googling the terms, texting their friends, etc.  The tests are multiple-choice.

Each test takes the questions randomly from a bank that starts at 80 questions.  So no two students will have the same exact set, and since they appear in random order (which Canvas does) and the multiple choices within each question appear in random order, it would be hard for someone to transmit answers effectively to someone else.
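
For anyone who hasn't set up question banks before, here's a minimal sketch of the draw-and-shuffle logic I'm describing (not Canvas's internal code; the Question structure is invented for illustration, and Canvas does this for you via question groups and its shuffle-answers setting):

```python
# Minimal sketch of per-student randomization: draw 20 questions from a bank
# of 80+, then shuffle the answer choices within each question, so that no two
# students see the exact same exam.
import random
from dataclasses import dataclass

@dataclass
class Question:            # invented structure for illustration
    stem: str
    choices: list[str]     # in the bank, choices[0] is the correct answer

def build_exam(bank: list[Question], n: int = 20) -> list[Question]:
    drawn = random.sample(bank, n)      # random subset, in random order
    exam = []
    for q in drawn:
        shuffled = q.choices[:]
        random.shuffle(shuffled)        # randomize answer order per attempt
        exam.append(Question(q.stem, shuffled))
    return exam
```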

After each course is finished, I look at the Canvas analytics for the various questions and see which ones were duds: the ones where students obviously misunderstood something (because they all gave the same wrong answer), or whatever.  I rewrite or drop those questions.  I also add 5-15 questions.  So the question bank grows over the years, and the questions get better, since I redo or drop the duds.
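
As a sketch of that dud-hunting step, the snippet below flags questions where a large share of the class converged on the same wrong answer. The export file name and its columns (question_id, selected_answer, is_correct) are assumptions for illustration, not Canvas's actual report format:

```python
# Minimal sketch of flagging "dud" questions from a hypothetical per-response
# export with columns: question_id, selected_answer, is_correct (0/1 or bool).
import pandas as pd

resp = pd.read_csv("quiz_item_responses.csv")        # hypothetical file name

attempts = resp.groupby("question_id").size()
wrong = resp[~resp["is_correct"].astype(bool)]

# Share of all attempts at a question that landed on its single most popular
# wrong answer; a high share suggests a misleading stem or a mis-keyed answer.
top_wrong_share = (
    wrong.groupby("question_id")["selected_answer"]
         .agg(lambda s: s.value_counts().iloc[0])
    / attempts
)

duds = top_wrong_share[top_wrong_share > 0.5].sort_values(ascending=False)
print(duds)
```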

The questions all require synthesis of the readings.  For instance, something like:

Judging from his work 'The Morals of Government,' if Jones [featured in reading A] were to encounter [a situation described in reading B], he would be most likely to:
a. condemn it as ungodly
b. condemn it as unconstitutional
c. approve of it only if 'captains of industry' were in favor of it
d. approve of it because it elevates the 'morals of the common man'

As new readings are published, I switch out the old readings and put in a few new ones every year.  This means I take out the questions that relate to the old readings and add some relating to the new readings. 

This sounds as if it takes up a lot of time, and it did before the first running of the course. But now it takes a lot less time to do every year than it would to grade in-person exams.  The LMS grades the whole thing without my lifting a finger, a feature which I love.

So if a student finds a set of my questions posted on the web somewhere, they'd be lucky to get the actual questions that were going to appear on their exam.  And if they want to try to memorize the answers to all 1200+ possible questions, well, they'd probably actually be learning the material of the course, so more power to them.
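
To put a rough number on "lucky": if an exam draws 20 questions uniformly at random from a weekly bank, the expected overlap with a leaked set of 20 is drawn * leaked / bank_size (the hypergeometric mean), i.e. about 5 questions when the bank holds 80, and fewer as the bank grows. A quick check, with illustrative bank sizes:

```python
# Expected number of questions shared between a leaked 20-question exam and a
# fresh random draw of 20 from the same bank (hypergeometric mean: n * K / N).
def expected_overlap(bank_size: int, leaked: int = 20, drawn: int = 20) -> float:
    return drawn * leaked / bank_size

for bank_size in (80, 120, 200):      # illustrative bank sizes as the bank grows
    print(bank_size, round(expected_overlap(bank_size), 1))
# -> 80: 5.0, 120: 3.3, 200: 2.0
```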

HigherEd7

Do you think giving them 20 questions and 20 minutes to complete the exam is too much time? It seems like they would have time to look in their book and find the answers. Thoughts?




Quote from: Hegemony on December 18, 2019, 08:16:57 PM
Well, my own online courses show no sign of falling victim to these problems.  In fact, I had to make the tests easier: since none of the questions are merely factual (they all involve advanced synthesis), they are harder than my in-person tests, which include a proportion of merely factual questions.  So students were getting markedly lower scores on the online tests, until I wrote a selection of somewhat easier questions to mitigate the problem.

Here's how I manage the tests in the online courses:

All tests are non-proctored, since our students are taking the courses from all over the place and finding a university-approved proctor is cumbersome and sometimes impossible.

Every week of the course finishes with a test.  Each test has 20 questions and a cut-off time of 20 minutes.  This is meant to prevent students from spending much time flipping through books looking for clues, googling the terms, texting their friends, etc.  The tests are multiple-choice.

Each test takes the questions randomly from a bank that starts at 80 questions.  So no two students will have the same exact set, and since they appear in random order (which Canvas does) and the multiple choices within each question appear in random order, it would be hard for someone to transmit answers effectively to someone else.

After each course is finished, I look at the Canvas analytics for the various questions and see which ones were duds: the ones where students obviously misunderstood something (because they all gave the same wrong answer), or whatever.  I rewrite or drop those questions.  I also add 5-15 questions.  So the question bank grows over the years, and the questions get better, since I redo or drop the duds.

The questions all require synthesis of the readings.  For instance, something like:

Judging from his work 'The Morals of Government,' if Jones [featured in reading A] were to encounter [a situation described in reading B], he would be most likely to:
a. condemn it as ungodly
b. condemn it as unconstitutional
c. approve of it only if 'captains of industry' were in favor of it
d. approve of it because it elevates the 'morals of the common man'

As new readings are published, I switch out the old readings and put in a few new ones every year.  This means I take out the questions that relate to the old readings and add some relating to the new readings. 

This sounds as if it takes up a lot of time, and it did before the first running of the course. But now it takes a lot less time to do every year than it would to grade in-person exams.  The LMS grades the whole thing without my lifting a finger, a feature which I love.

So if a student finds a set of my questions posted on the web somewhere, they'd be lucky to get the actual questions that were going to appear on their exam.  And if they want to try to memorize the answers to all 1200+ possible questions, well, they'd probably actually be learning the material of the course, so more power to them.

Aster

Quote from: HigherEd7 on December 23, 2019, 04:45:38 PM
Do you think giving them 20 questions and 20 minutes to complete the exam is too much time? It seems like they would have time to look in their book and find the answers. Thoughts?

Up until a year or two ago, I would have agreed that a one-question-per-minute timed online test format would defeat "google it" style cheating.

There simply would not be enough time for a student to type in an online search query and then read the results, even for some of the most rudimentary questions.

But now, we're entering the era of voice-activated online searches. Speech recognition software on smartphones is now advanced enough for people to ask complex questions and get fast, accurate results.

Case in point. My spouse routinely calls on Alexa to make complex cooking ingredient calculations. Alexa is routinely called on to define complex vocabulary (in a short, concise manner). Alexa is routinely used to translate words in other languages.

So now, with the ability of students to replace their internet searches with just a call-out on their phone to Siri/Google/Cortana/Alexa, I would be even more careful with un-monitored, unsecured assessments. And I would no longer take it for granted that a student could not look up an answer to something in under a minute. Because they can do that. And they do do that now.

wwwdotcom

Quote from: HigherEd7 on December 23, 2019, 04:45:38 PM
Do you think giving them 20 questions and 20 minutes to complete the exam is too much time? It seems like they would have time to look in their book and find the answers. Thoughts?

If the answers can easily be found in the textbook then maybe you need to develop higher-level questions.

Hegemony

Yeah, the questions I design for my online tests could not be googled even if you had 24 hours to do it.  You'd have to google to find relevant readings (or you could just look at the assigned course readings), read them carefully, integrate them with the other course readings in your understanding, add in the aspects developed in my course lectures and probably aspects also discussed on the Discussion Boards, and then tackle the question.  I could make my online tests unlimited-time tests, except that that would cause other problems: students starting them and forgetting to complete them, students gathering in groups to discuss the probable answers at length, students' internet failing at some point during the test and then having to get the system to let them start again, etc.

If you have a question that can be answered via an easy search on google, you've written the wrong kind of a question for an internet exam.

HigherEd7

What are your thoughts on using the test banks that come with textbooks?


Quote from: Hegemony on December 30, 2019, 10:04:19 PM
Yeah, the questions I design for my online tests could not be googled even if you had 24 hours to do it.  You'd have to google to find relevant readings (or you could just look at the assigned course readings), read them carefully, integrate them with the other course readings in your understanding, add in the aspects developed in my course lectures and probably aspects also discussed on the Discussion Boards, and then tackle the question.  I could make my online tests unlimited-time tests, except that that would cause other problems: students starting them and forgetting to complete them, students gathering in groups to discuss the probable answers at length, students' internet failing at some point during the test and then having to get the system to let them start again, etc.

If you have a question that can be answered via an easy search on google, you've written the wrong kind of a question for an internet exam.