Topic: Bang Your Head on Your Desk - the thread of teaching despair!

Started by the_geneticist, May 21, 2019, 08:49:54 AM

Puget

Quote from: ergative on January 13, 2022, 01:41:38 PM
Quote from: the_geneticist on January 13, 2022, 11:36:15 AM
Quote from: FishProf on January 13, 2022, 11:09:23 AM
Quote from: Istiblennius on January 13, 2022, 09:50:53 AM
Largely due to the if-then fallacy that seems to have been drummed into them in high school.

I always start my lecture in experimental design and the scientific 'method' with "You were lied to in elementary, middle, and high school..."

Eventually I can get them to "If you are telling me WHAT will happen - THAT is your prediction.  If you tell me WHY - THAT is your hypothesis (more or less)."

Don't even get me started on the difficulty of teaching them that you start by testing your Null Hypothesis.  Why?  Because if you run the stats and see NO DIFFERENCE between your data sets, you are done.  No need to test whether X > Y if X is no different from Y.

My problem isn't so much that, as the difficulty in getting them to understand that you can't legitimately conclude that two things are the same (well, with NHST you can't). You can find that there is insufficient evidence to conclude that they're different, but that's not the same as concluding that they're the same. And then my students turn around and design experiments with Condition A and Condition B and say 'If my hypothesis is correct, then there should be no difference in reaction time between Condition A and Condition B.' And instead of assigning the B- for understanding what a condition is and moving on, I'm somehow still writing marginal comments earnestly trying to point out everything that's wrong with that statement.

(I don't dare give up the marginal comments, because these projects are feeders for senior year capstone projects, and if I don't get sufficiently discouraging now, I run the risk of ending up with students wanting me to supervise their capstone experiments that just reuse the same project design, complete with the same problems.)

Well, as you note you can't with NHST, but you could with Bayesian analysis (or at least show that the difference is smaller than whatever you would consider a meaningful difference). But I'm assuming you don't want to go there with your students.

I currently have a related head bang, not from a student (presumably) but from a reviewer, who earnestly explains that a better way (compared to the correct way we used, which they don't like because of bias against certain types of models) to test our hypothesis that [predictors] are related to the shared variance among [outcome variables] is to predict one outcome variable while covarying out all the others, and if the effect of the predictor goes away, that means it was due to the shared variance among the outcome variables.  I don't really know how to explain the problem with using a null effect as evidence in this way without coming across as condescending and "unresponsive to reviews". Sigh. Hopefully one of my co-authors will come up with something diplomatic.
"Never get separated from your lunch. Never get separated from your friends. Never climb up anything you can't climb down."
–Best Colorado Peak Hikes
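
As a rough illustration of the point above (that "not significantly different" is not the same as "the same"), here is a minimal toy simulation. It is not taken from anyone's actual class or data: the effect size, sample size, and seed are made up, and it assumes numpy and scipy are available. Two conditions genuinely differ, yet a small study usually fails to reach p < .05:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_sims, n_per_group = 2000, 15   # hypothetical small class-project samples
true_effect = 0.4                # the conditions really do differ (Cohen's d = 0.4)

nonsig = 0
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n_per_group)           # Condition A
    b = rng.normal(true_effect, 1.0, n_per_group)   # Condition B, truly different
    _, p = stats.ttest_ind(a, b)                    # ordinary two-sample t-test
    nonsig += (p > 0.05)

print(f"Real difference, yet p > .05 in {nonsig / n_sims:.0%} of simulated studies")

With these made-up numbers the t-test comes back non-significant in roughly 80% of the simulated experiments, so a p above .05 mostly reflects low power; concluding "no difference" would take an equivalence test or the Bayesian estimate Puget mentions, not a failed significance test.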

secundem_artem

Quote from: Puget on January 13, 2022, 03:00:18 PM

Well, as you note you can't with NHST, but you could with Bayesian analysis (or at least show that the difference is smaller than whatever you would consider a meaningful difference). But I'm assuming you don't want to go there with your students.

I currently have a related head bang, not from a student (presumably) but from a reviewer, who earnestly explains that a better way (compared to the correct way we used, which they don't like because of bias against certain types of models) to test our hypothesis that [predictors] are related to the shared variance among [outcome variables] is to predict one outcome variable while covarying out all the others, and if the effect of the predictor goes away, that means it was due to the shared variance among the outcome variables.  I don't really know how to explain the problem with using a null effect as evidence in this way without coming across as condescending and "unresponsive to reviews". Sigh. Hopefully one of my co-authors will come up with something diplomatic.

Statistics as taught at Hogwarts.  p<0.05 forever!!!
Funeral by funeral, the academy advances

evil_physics_witchcraft

One of my students sent me an email and asked if we had class today since there was nobody else in the room. Um, YES, we DID have class - in the same room we've been having it in. Were you in the wrong room?

Banging my damn head.

ergative

Quote from: Puget on January 13, 2022, 03:00:18 PM

Well, as you note you can't with NHST, but you could with Bayesian analysis (or at least show that the difference is smaller than whatever you would consider a meaningful difference). But I'm assuming you don't want to go there with your students.

I currently have a related head bang, not from a student (presumably) but from a reviewer, who earnestly explains that a better way (compared to the correct way we used, which they don't like because of bias against certain types of models) to test our hypothesis that [predictors] are related to the shared variance among [outcome variables] is to predict one outcome variable while covarying out all the others, and if the effect of the predictor goes away, that means it was due to the shared variance among the outcome variables.  I don't really know how to explain the problem with using a null effect as evidence in this way without coming across as condescending and "unresponsive to reviews". Sigh. Hopefully one of my co-authors will come up with something diplomatic.

I actually put a parenthetical about Bayesian analysis into my original response, and then took it out because it was getting too convoluted already and I didn't feel like typing it out. Thank you very much for picking up the slack there.

I hear you about the reviewer problem. Good luck on diplomacy. I am never very good at that bit of this job.
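
On the reviewer head-bang, here is a generic sketch of why an effect "going away" after covarying a set of highly correlated variables is weak evidence of anything. This is not Puget's actual analysis; the variable names, sample size, and coefficients are invented, and it assumes numpy and statsmodels. When the added covariates share most of their variance with the predictor, the predictor's point estimate barely moves but its standard error balloons, so the test typically turns non-significant even though the effect is real:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120

shared = rng.normal(size=n)                      # latent factor everything loads on
x  = shared + rng.normal(scale=0.2, size=n)      # predictor of interest
c1 = shared + rng.normal(scale=0.2, size=n)      # covariates that share most of
c2 = shared + rng.normal(scale=0.2, size=n)      #   their variance with x
y  = 0.4 * x + rng.normal(size=n)                # y genuinely depends on x

m_alone = sm.OLS(y, sm.add_constant(x)).fit()
m_covar = sm.OLS(y, sm.add_constant(np.column_stack([x, c1, c2]))).fit()

for label, m in [("x alone", m_alone), ("x + covariates", m_covar)]:
    print(f"{label:16s} b = {m.params[1]: .2f}   SE = {m.bse[1]:.2f}   p = {m.pvalues[1]:.3f}")

In the covaried model the standard error on x is several times larger, so its p-value will usually cross .05 even though y genuinely depends on x; the null result reflects lost precision from collinearity, not evidence that the original effect was spurious.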

RatGuy

Quote from: evil_physics_witchcraft on January 13, 2022, 07:44:56 PM
One of my students sent me an email and asked if we had class today since there was nobody else in the room. Um, YES, we DID have class - in the same room we've been having it in. Were you in the wrong room?

Banging my damn head.

I can't wait for the inevitable "the building was locked and I couldn't get in" emails this Monday, a federal holiday. Usually from students who have yet to attend.

the_geneticist

Quote from: RatGuy on January 14, 2022, 06:32:22 AM

I can't wait for the inevitable "the building was locked and I couldn't get in" emails this Monday, a federal holiday. Usually from students who have yet to attend.

I'm getting the "do we have lab next week?" emails.  At least they're planning ahead, but it's in the syllabus.  Or, even better, "does my [other class] meet next week?"  How would I know?
If only there were some sort of summary document for you to check, like the syllabus.

apl68

Quote from: RatGuy on January 14, 2022, 06:32:22 AM

I can't wait for the inevitable "the building was locked and I couldn't get in" emails this Monday, a federal holiday. Usually from students who have yet to attend.

These sound like anecdotes from Louis Sachar's Wayside School stories.
And you will cry out on that day because of the king you have chosen for yourselves, and the Lord will not hear you on that day.

Wahoo Redux

Don't know if this belongs here, and maybe this guy WANTED to be suspended, but this is either a calculated attempt at survival or a meltdown.

Trigger warning: Bad and non-PC language.

Ferris State Prof goes off.
Come, fill the Cup, and in the fire of Spring
Your Winter-garment of Repentance fling:
The Bird of Time has but a little way
To flutter--and the Bird is on the Wing.

spork

Quote from: Puget on January 13, 2022, 03:00:18 PM

I currently have a related head bang, not from a student (presumably) but from a reviewer, who earnestly explains that a better way (compared to the correct way we used, which they don't like because of bias against certain types of models) to test our hypothesis that [predictors] are related to the shared variance among [outcome variables] is to predict one outcome variable while covarying out all the others, and if the effect of the predictor goes away, that means it was due to the shared variance among the outcome variables.  I don't really know how to explain the problem with using a null effect as evidence in this way without coming across as condescending and "unresponsive to reviews". Sigh. Hopefully one of my co-authors will come up with something diplomatic.

Simpson's Paradox?
It's terrible writing, used to obfuscate the fact that the authors actually have nothing to say.

marshwiggle

Quote from: Wahoo Redux on January 14, 2022, 10:10:04 AM
Don't know if this belongs here, and maybe this guy WANTED to be suspended, but this is either a calculated attempt at survival or a meltdown.

Trigger warning: Bad and non-PC language.

Ferris State Prof goes off.

Maybe, since he's tenured and retiring at the end of the year, he's hoping the institution won't want to face a lawsuit but will just relieve him of teaching, effectively giving him an early start to his retirement.

It is kind of funny in an "I'm mad as hell and not going to take it anymore" kind of way, especially since he blasts essentially everybody: clueless admins and entitled students.
It takes so little to be above average.

Puget

Quote from: marshwiggle on January 14, 2022, 11:37:27 AM

Maybe, since he's tenured and retiring at the end of the year, he's hoping the institution won't want to face a lawsuit but will just relieve him of teaching, effectively giving him an early start to his retirement.

It is kind of funny in an "I'm mad as hell and not going to take it anymore" kind of way, especially since he blasts essentially everybody: clueless admins and entitled students.

Well, it seems like that's probably worked, as he's been suspended, though the article I read didn't say whether it was with pay or not. Still, there have to be less crazy ways of getting out of teaching your last semester before retirement. Medical leave for mental health reasons might have been a good idea, for example.
"Never get separated from your lunch. Never get separated from your friends. Never climb up anything you can't climb down."
–Best Colorado Peak Hikes

Stockmann

The instructions say, and I mentioned, that in general your report should be 4-7 pages. Not 34 friggin' pages! This is not due to an extensive bibliographical search or to extensive analysis. It's all filler, like tables and tables of raw data (plots of which are also included) and data sets plotted in separate figures when they really ought to be on the same set of axes.

Brevity is the soul of wit.

the_geneticist

Quote from: Stockmann on January 14, 2022, 12:20:07 PM
The instructions say, and I mentioned, that in general your report should be 4-7 pages. Not 34 friggin' pages! This is not due to an extensive bibliographical search or to extensive analysis. It's all filler, like tables and tables of raw data (plots of which are also included) and data sets plotted in separate figures when they really ought to be on the same set of axes.

Brevity is the soul of wit.

"Revise & Resubmit"

apl68

Quote from: Stockmann on January 14, 2022, 12:20:07 PM
The instructions say, and I mentioned, that in general your report should be 4-7 pages. Not 34 friggin' pages! This is not due to an extensive bibliographical search or to extensive analysis. It's all filler, like tables and tables of raw data (plots of which are also included) and data sets plotted in separate figures when they really ought to be on the same set of axes.

Brevity is the soul of wit.

Things like this remind me of the undergrad Interlibrary Loan patron who requested all several hundred results of an online article search made with an overly broad set of keywords, instead of trying to refine her results.  I've often wondered whether that led to an immensely long and incoherent paper, or whether the panicked student figured out how to ask for help in time to prevent that.
And you will cry out on that day because of the king you have chosen for yourselves, and the Lord will not hear you on that day.

mamselle

Or they figure, "Longer is better and will get me a higher grade."

M.
Forsake the foolish, and live; and go in the way of understanding.

Reprove not a scorner, lest they hate thee: rebuke the wise, and they will love thee.

Give instruction to the wise, and they will be yet wiser: teach the just, and they will increase in learning.