University forcing professors to adopt standardized testing?

Started by Aster, December 30, 2019, 09:36:35 AM

Aster

Our own academic unit has been following the "ignore it" strategy for over a year. We expected that the edu-wonk office would eventually realize just how dumb, ridiculous, and damaging to student learning their idea was.

But no, it's been over a year, and the edu-wonk office is doubling down hard. You see, Big Urban College's leadership has already decided that this will be the major razzle-dazzle project that we produce for our next regional accreditation review. So the edu-wonk office has over-committed itself, and the clock is now ticking to get the plan implemented.

But assessment submissions from professors have been coming in at a trickle. The problem is obvious. How could any major assessment written by one professor possibly be viewed as academically equivalent, appropriate, or even moderately transferable to another professor's course? Even within the same course type, what, how, and when things are taught and assessed is highly dynamic and individually customized for optimal results.

I've submitted a couple of things, but they were low-stakes, little dinky things for a general elective course. I would never presume to volunteer one of my major exams to be forced onto my faculty peers. Nor would I be stupid enough to submit one of my major exams (one that I'm currently using) for public use by the edu-wonk office and every other professor teaching the same course. How could I ensure that those professors did not leak that test to students? How could I ensure that students in those other courses did not find a way to procure copies of that test? Or worse, the answer key?

This is just so awful. I could submit my own exam to the edu-wonk center and pretty much expect its security to be compromised after one semester, and have every other professor pissed at me because my assessment is not like their assessment on so many levels. Or I could be forced to use some other professor's major exam in place of one of my own, and be stuck with all of the damage control associated with that.

Every choice is a bad choice.

spork

Faculty members have to individually send in their own assessments to the edu-wonk office, which then unilaterally decides which assessment to use? There is no decision-making at the department level? If there isn't, and I were a department chair, I'd be raising bloody hell about this.

San Joaquin

It's a head-shaker.  Are they maybe trying to assess something across the curriculum, and just hopelessly inarticulate about explaining what that is?

I actually did explain to a Dean once, in my hotheaded early days, that administrative convenience was not really the same thing as student benefit.  It went over about as you might imagine.

Can you pick something that could be a holistic measure found in multiple places, like professional writing standards for your field, or critical thinking skills, and build from that?  You could use multiple assignment formats with the same rubric if they all meet those specific measurable outcomes.

Feel free to message me if you think I can help further.

Aster

Quote from: spork on December 31, 2019, 11:46:07 AM
Faculty members have to individually send in their own assessments to the edu-wonk office, which then unilaterally decides which assessment to use? There is no decision-making at the department level? If there isn't, and I were a department chair, I'd be raising bloody hell about this.

No, there is no review process for the standardized exams within academic units. We don't even get to see the proposed assessments. The only review is done by a committee that reviews *all submissions for all courses across the entire college*. A finalist assessment will be selected by the edu-wonk office and then delivered to all faculty for mandatory implementation.

Department heads don't have any authority over any of this, as the edu-wonk office is not a formal part of Academic Affairs. The edu-wonk office is basically a couple of high-level staffers who work out of the Reporting Analytics office. That office reports only to the VPs and the President.

The edu-wonk office is not directly trying to measure anything specific. They just need something to do. They are the group that is always placed in charge of special accreditation projects. The college does not give the faculty any meaningful incentives for this, so it wastes money permanently funding a staff office instead.

The only "measurement" requirements for each standardized test are that one learning outcome within a course type must be directly assessed, and that the selected learning outcome must fall under an extremely broad umbrella of highly subjective core educational values (e.g. "critical thinking", "literary awareness").

eigen

I'm guessing the aversion to this is somewhat discipline-specific.

In chemistry, at least, a common final exam across sections isn't an uncommon strategy. It's not done everywhere, but it's certainly not something I'd be surprised to find. Even at places that don't give common exams, faculty work to make sure students are being assessed by similar metrics and at similar times across sections of the same course.

Similarly, it's not uncommon to give final exams that are nationally standardized (from the American Chemical Society), where everyone in the country gives the same final exam for the same level course.

It obviously works better in some disciplines than others, but it's not like the concept is foreign; it's just maybe not a good fit for your department.

Parasaurolophus

There's a measure of this at my institution, and it's frustrating but not crippling. If it were as you're describing here, though, it would be crippling in my humanities-but-barely field, where tests and exams are not usually great means of assessing student progress and output. They work well enough for classes in formal methods, but for other classes (the bulk of classes in the field, to say nothing of at this institution!), they're really not great. And to have a single, unchangeable exam administered in every iteration of the class... ugh. That would force standardization of the course content, and I just know that the classes which would be standardized would suck--they'd have shitty topics and, worse, shitty readings for each topic. We'd be stuck covering the state of the field 100-200 years ago, all because whoever's exam was chosen had a shitty, non-contemporary syllabus.

Here, the university mandates our evaluation profiles, within a certain range. So each course has a pre-set assessment rubric, which your assessments must fall into: e.g. attendance 0-10%, essays 20-40%, tests 20-45%, final exam 20-30%. You can pick and choose your weights within those ranges, and there's some latitude about what counts as 'tests', 'essays', etc., but you must hit every category, and no others. It's a pain, because it doesn't give us much freedom to try new assessment techniques, or to skip those that don't line up well with the course content. The rationale behind it is pretty much what's been mentioned and hinted at already: apparently there used to be lots of variation between sections of the same course, and that led to student complaints, so they've tried to standardize the experience of every section of every course. Of course, we still have full freedom over course content, which means that the sections all differ quite a bit anyway...

Antiphon1

Well, that sucks. 

However, what does the accrediting association say about content-area control? Look in the accrediting association's standards concerning duties and control of course content. I'd wager there is a section of the accreditation handbook devoted to faculty duties that strongly supports course-level control by content masters (that's you).

As a former administrator, I can guarantee there would be strong pushback from me and my faculty on any measure foisted upon the department/college by a person or unit without the proper credentials to be suggesting such a universal measurement. There appears to be no coordination with, or input from, the department level. In any demi-legitimate process, an exploratory committee should have been formed to examine feasibility before demands were made. Since you've not mentioned this process, your institution probably short-circuited the decision-making.

It's your job to throw rocks in the gears to slow this down. Failing that, a letter to the accreditation agency requesting clarification may be an option worth considering. I would also look at any recent legislation from your state concerning graduation or retention rates. Further, I suspect a plan to standardize and "can" lower-division classes so they can be taught by grad students and/or adjuncts.

I'm sorry you are dealing with this foolishness. 

Aster

Quote from: eigen on December 31, 2019, 01:55:05 PM
In chemistry, at least, a common final exam across sections isn't an uncommon strategy. It's not done everywhere, but it's certainly not something I'd be surprised to find. Even at places that don't give common exams, faculty work to make sure students are being assessed by similar metrics and at similar times across sections of the same course.

Similarly, it's not uncommon to give final exams that are nationally standardized (from the American Chemical Society), where everyone in the country gives the same final exam for the same level course.

It obviously works better in some disciplines than others, but it's not like the concept is foreign; it's just maybe not a good fit for your department.

This is very different from a "common exam". None of this is collaborative or cooperative. The selected assessments are not crafted as a group, nor even agreed upon as a group. Nor are the selected assessments secured, editable, or controlled in any way by the professors, or by a responsible academic oversight board (e.g. the American Chemical Society or the SAT board).

Here's how it would work if implemented at your university. One of your chemistry professors would have one of his/her *personal* major exams adopted for everyone else to use. Nobody *at all* could alter it, and it would be used over and over again for consecutive terms. It would not be switched out with alternate exams to prevent cheating, the way most standardized exams routinely are.

eigen

Quote from: Aster on January 01, 2020, 01:23:22 PM

Depends how long they're planning on doing this. ACS exams get used for 3-5 years, depending on the school.

It sounded from your OP like they were asking faculty to submit assessments? So why can't the department discuss what gets submitted?

Make it a "major" graded assignment (5-10%) in addition to what you would normally do in the course, worth enough to make the students take it seriously but not enough to override your regular assignments.

I feel like if the department embraced how to make this work cohesively, it would be doable and might actually yield some interesting data.

Aster

Quote from: eigen on January 01, 2020, 03:57:24 PM

I am well acquainted with cooperative assessments. This is not the same thing. Academic units (e.g. departments) are not being allowed to submit collectively created and collectively agreed-upon assessments. The only assessments allowed for submission are pre-existing assessments from individual instructors. An assessment can only be submitted by a professor, not by a department or any group of professors. It has to be someone's personal major assessment. We even have to document our existing use of the assessment (how much it's worth, how we use it) when we submit it.

Yes, it's that stupid.

I was thinking about secretly organizing some of my colleagues to create a "fake exam" to submit, one that I've never used and would never use myself. The exam would be dumbed down sufficiently to fairly accommodate the wide diversity of pedagogies, textbooks, instructional schedules, and course formats among most of our professors. Our secret faculty group would agree that everyone could nominally work with this stripped-down thing, and then I would submit it as "my exam" to the edu-wonk office.

All this work would go towards only *one* course type. I teach *five* different course types. Our department offers over a *dozen* different course types. Most of these other course types are also being required to participate in the standardized exam project. Now multiply that by every other department at the university. It's insane.

eigen

No, that part was clear. I'm just surprised there's so much difference between sections of the same course that you couldn't sit down and pick whichever existing assessment is the best fit to submit.

But like I said, field differences. In any given semester it would very well be possible for me to use any of my colleagues' exams for my course. In fact, typical practice for a makeup exam is to use an exam from someone else's section.

Our courses are based around the same learning objectives, we all agree on a textbook to use for any given course, and the content we cover is pretty much the same. The idea is that all sections of any one of our classes are interchangeable and equal to the students: they should come out of any of them with the same skills, able to go on to the same next course. Honestly, if I'm doing my job right, a student from my class should be able to pass the assessments in my colleagues' classes reasonably well.

I get that this doesn't work for your field, but this would be only a minor headache for my department. We'd sit down, look at the assessments we have that we like for each class, and pick one that we will all give. Issues with keeping it secure are certainly significant, but we manage to do it with external standardized exams.

Aster

Quote from: eigen on January 01, 2020, 06:31:00 PM

Well, we're a teaching-based institution. So we try to hire people based on teaching excellence and innovation. That means that our faculty may teach and assess a course in many different ways, and they regularly modify instruction and assessment to adapt and improve student performance.

But because of other factors (e.g. geographic location), we also hire whoever we can get sometimes. This creates a lot of unevenness in faculty qualifications and motivations. We have some truly amazing teachers. And we have some truly awful teachers that would never have been hired anywhere else.


But to get down to specifics, there are all these issues.

1.  We don't all teach topics in the same order. And some of us are on 6-week or 10-week short terms. This means that students in one class are often learning material God only knows when relative to other sections of the same course.

1b. Many of our courses are taught by adjuncts. Adjuncts may come and go every year. Adjuncts that may have never taught before. Adjuncts that may not have taught for 20 years or more. Adjuncts that don't know how to lock down assessments on a CMS. Adjuncts that feel that their continued employment is contingent on passing all of their students so that nobody complains.

2.  We aren't required to use a textbook. And if we do use a textbook, people can adopt different versions of it. Some faculty heavily integrate CourseSmart and third-party instructional resources into the course design, for example. Assessments for those courses will necessarily have to be calibrated differently than for other courses.

3. Our fully online professors mostly assess with multiple-choice tests. But our brick-and-mortar and hybrid professors assess with every pedagogical color of the rainbow imaginable: inverted-classroom assessment, group assessment, project-based assessment, homework-heavy assessment, open-notes assessment, extended-time assessment, take-home assessment.

4. Not all professors teach online, or even use a CMS for some course types. But the assessment we will be required to adopt must be put on every professor's CMS in an identical format. Many of our professors have highly customized CMS-based assessment designs. Slapping on a secondhand major assessment can easily disrupt a professor's existing CMS instructional flow.

All of these different assessments, teaching formats, and instructional schedules make it pretty much impossible to "standardize" something that will work well for more than a few people. It could only really work if the assessment were dumbed down to the lowest level of Bloom's Taxonomy (for the professors who assess that way), and covered only material that is taught very early in any iteration of the course. That would accommodate the folks who are innovating their teaching with customized learning schedules.

And that's just for creating the mk1 version of the exam. We're not allowed to create a mk2 version of the exam. Or a mk3 version. Without alternate versions, replacement versions, or updated/improved versions, there is no way to correct for mistakes, problems, and the 800-pound gorilla in the room: student cheating.

eigen

Sure, so are we. Most of us publish regularly on pedagogy.

That still doesn't mean that by the end of the semester, students in different sections of the same course should not be able to complete the same assessment. In fact, it gives some fascinating insight into the strengths and weaknesses of those different styles and how they affect the end goal of student learning. Surely you have shared learning objectives? If so, I'm having a hard time visualizing how one or more of those objectives could not be assessed uniformly.

Like I said, it sounds like some field-specific differences. In my field, there's a pretty consistent expectation of what a student should be able to do by the end of the course based on our defined learning objectives. Even if we all teach differently and assess differently, adding a new 5% assessment at the end of the semester shouldn't be a big deal.

jerseyjay

Quote from: Aster on January 01, 2020, 07:00:59 PM

It seems that what you have described may, in fact, be one of the reasons that this policy has been instituted.

If you have professors of different approaches, different views, and differing levels of competency, how do you assess whether a program is accomplishing what it is supposed to do (i.e., whether it is effective)?

When it comes time for accreditation, this is the kind of stuff that administrators start getting worked up about. So they try to come up with, ahem, metrics of institutional effectiveness. Like much that administrators do, it is not clear if they actually care about this, or if they just jump through the hoops to make it look like they care.

At my school--an urban public teachers' college--we have a general education program that comprises a wide range of disciplines taught by a mixture of full-time and adjunct faculty. For my introductory course, students are supposed to be able to (a) know all the discipline-specific content that nobody but my department really cares about; and (b) do basic university-level stuff (express themselves in writing, analyze information, etc.) that serves as a foundation for whatever field a student studies. The idea is that the general-education courses are useful to students outside a major in ways besides just expanding their horizons and exposing them to different disciplines. [We do not have enough majors to teach only to majors.]

The question is, how do we assess whether all these general-education courses are doing this? So far we have tried having each student turn in a final project (not a standardized test) specific to that course, and then having other professors rate the projects, with both the student's and the professor's details removed. That was cumbersome and didn't really work.

Then we had each professor assess the students in his or her own courses and report the data at the end. While less cumbersome, this created real problems with reliability (i.e., different professors had widely different criteria).

Now they are going to try something else. With probably the same results.

The main problems that I see with what the OP described are:
(1) in some disciplines objective tests are not the norm, so professors probably cannot design valid questions and students may not be experienced in taking them;
(2) linking data to specific professors raises many, many problems (will professors be rewarded or penalized based on how their students do? that leads to problems; also, student performance varies greatly for reasons far beyond the control of the individual professor, which makes comparisons problematic);
(3) it is not clear what it means for an assignment to be a significant part of the final grade.

xerprofrn

Quote from: ciao_yall on December 31, 2019, 11:17:48 AM
Once the test questions and answers are out there, students will start sharing them pretty freely.

Remarkably, test scores will improve dramatically.

The edu-wonks will break their own arms patting themselves on the back, demonstrating that the improvement in test scores shows an improvement in learning.

This will continue until someone decides to update the test... and faculty teaching quality will suddenly appear to go downhill fast.

Everyone will wonder why.

Maybe consultants will be hired.

This. So much this. Especially because it will make up a significant portion of the grade. This is unethical; it encourages academic dishonesty. Once students figure out that the same test is being given in multiple sections (and is probably unchanged for multiple terms), and that it is worth a ridiculous percentage of the final grade, they will cheat. And can you blame them, with this stupid policy?