Limit on number of Letters of Recommendation for student?

Started by Diogenes, December 07, 2020, 09:50:03 AM


pink_

In my field, 10-12 is the norm, and it's typical for recommenders to write one letter that they modify lightly per institution. But it is definitely easier to manage when the student provides a set list of schools to which they are applying. I have a student this year who is applying, and I agreed to write. She's not my advisee, and she's applying in a field that is similar to my own but different enough that I really couldn't give advice on programs. I got one big batch of rec requests over Thanksgiving, and I've gotten three others since then.

AvidReader

My undergraduate school did not send many students to grad school, so I did most things by trial and error. I applied to seven schools and thought seven was way too many letters for any one faculty member to write, so I asked five different faculty members and divvied up the letters so that none of them had to write more than four (one school only wanted two letters, I think). In retrospect, I created more work overall, but I didn't realize that at the time.

I've balanced it out by (ma)lingering on the job market for 8 years and asking referees to update their Interfolio letters periodically. I cringe every time I send a new letter request, but my wonderful referees are (outwardly) patient and encouraging.

AR.

the_geneticist

Ha!  I'm just old enough to remember compiling packets of addressed, stamped envelopes, along with my list of Ph.D. programs and their application due dates, to give to each of my letter writers.  I was very proud of myself for typing the address labels.

And then there were the weird transition years where an electronic letter was the placeholder for the hand-signed "real" letter.

I'm very glad that most places now accept an electronic copy (with an electronic signature!).  But not all . . .

teach_write_research

sidebar: if you are on a grad admissions committee, how *do* you use the ratings in your decisions?

I appreciate the due diligence of clicking Strongly Recommend or Recommend, etc., and I see the point of certain professional master's programs asking for ratings on a few specific competencies, but yeah, otherwise it's a rough, quick click-through of top 10% or whatever is consistent with my letter.

I suspect that the ratings are actually internal research for the companies that provide the application sites, not a tool to help with current decisions about which applicants to admit. Maybe they flag things that other materials don't show, but that seems doubtful. My money is on a goal of retroactively trying to determine which characteristics are associated with successful program completion. I would think they have enough data by now, and we can end the era of LOR rating scales.
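
To be concrete about the kind of analysis I'm picturing (purely hypothetical; the file name, column names, and rating categories below are all made up), it would be something like regressing completion outcomes on the rating-scale clicks after the fact:

# Hypothetical sketch only: does the LOR rating scale predict program completion?
# The CSV, its columns, and the rating categories are invented for illustration.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("admits_with_outcomes.csv")  # one row per admitted student

# Letter-writer clicks (e.g., 1-5 per competency) as predictors.
X = sm.add_constant(df[["overall_rating", "writing_rating", "research_rating"]])
y = df["completed_program"]  # 1 = finished the degree, 0 = did not

# Logistic regression: which rating categories, if any, track completion?
print(sm.Logit(y, X).fit().summary())

Nothing fancy, but it's the sort of thing you could only run retroactively, which is what makes me think that's the point of collecting the clicks.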

Kron3007

Definitely field specific.  I discourage my students from taking this approach and encourage them to contact potential advisors directly.  In my field (fairly applied STEM field), or at least my department, it is very rare for graduate students to get accepted when they apply to the department without first identifying an advisor.  I guess the silver lining for me is that it cuts down on the LORs I have to write... 

nonsensical

Quote from: teach_write_research on December 09, 2020, 01:05:00 PM
sidebar: if you are on a grad admissions committee, how *do* you use the ratings in your decisions?

For strong applicants whom we are seriously considering, the letter writers almost always select the top ratings. Lower ratings usually go with applications that also include other components that prevent the person from being seriously considered, like low grades, little or no research experience, etc. Sometimes the overall packet is very strong and the letter itself is enthusiastic, with some ratings in the top category and some in the next category down, and maybe one in the category after that. That says to me that the letter writer was taking the labels on the rating form seriously and almost never reflects poorly on the candidate, because the first few categories are all good if taken at face value. Sometimes the overall packet is strong and the letter is enthusiastic but doesn't mention the categories where the letter writer marked the applicant lower; for instance, the letter writer marks the top category for everything except "writing skills," selects the middle category for that line, and then doesn't address writing skills in the letter. If I were seriously considering a candidate like that, I'd probably ask the letter writer about writing skills in a phone call.

Kron3007

Quote from: nonsensical on December 10, 2020, 04:23:46 AM
Quote from: teach_write_research on December 09, 2020, 01:05:00 PM
sidebar: if you are on a grad admissions committee, how *do* you use the ratings in your decisions?

For strong applicants whom we are seriously considering, the letter writers almost always select the top ratings. Lower ratings usually go with applications that also include other components that prevent the person from being seriously considered, like low grades, little or no research experience, etc. Sometimes the overall packet is very strong and the letter itself is enthusiastic, with some ratings in the top category and some in the next category down, and maybe one in the category after that. That says to me that the letter writer was taking the labels on the rating form seriously and almost never reflects poorly on the candidate, because the first few categories are all good if taken at face value. Sometimes the overall packet is strong and the letter is enthusiastic but doesn't mention the categories where the letter writer marked the applicant lower; for instance, the letter writer marks the top category for everything except "writing skills," selects the middle category for that line, and then doesn't address writing skills in the letter. If I were seriously considering a candidate like that, I'd probably ask the letter writer about writing skills in a phone call.

In our system we have the ratings along with a written section.  I pay more attention to the written component, and we have accepted many applicants who didn't have the top scores.  The problem with the scores is that they are not standardized in any fashion.

Puget

Quote from: Kron3007 on December 10, 2020, 04:37:00 AM

In our system we have the ratings along with a written section.  I pay more attention to the written component, and we have accepted many applicants who didn't have the top scores.  The problem with the scores is that they are not standardized in any fashion.

We, thank goodness, do not have ratings, just a simple system for uploading letters. I can only assume that at some point faculty were vociferous in explaining to admissions that the ratings are useless to department admissions committees, and disrespectful of the time of letter writers.

We do weight letters a fair amount-- yes, they are almost always positive, but there is a world of difference between an "I had this student in a class or two / as a research assistant for a little while, and they are bright and hard-working" positive letter and a "this student is already functioning in my lab like a great grad student, and here is all the specific evidence for that" positive letter.
"Never get separated from your lunch. Never get separated from your friends. Never climb up anything you can't climb down."
–Best Colorado Peak Hikes

Hibush

Quote from: Puget on December 10, 2020, 06:04:37 AM

We, thank goodness, do not have ratings, just a simple system for uploading letters. I can only assume that at some point faculty were vociferous in explaining to admissions that the ratings are useless to department admissions committees, and disrespectful of the time of letter writers.

We do weight letters a fair amount-- yes, they are almost always positive, but there is a world of difference between an "I had this student in a class or two / as a research assistant for a little while, and they are bright and hard-working" positive letter and a "this student is already functioning in my lab like a great grad student, and here is all the specific evidence for that" positive letter.

We also have rankings in several categories to go with the letters. I pay them little attention and focus instead on what Puget looks for. Most applicants have fairly high rankings in all the categories, and I don't think small differences matter.

If there is a notably low score, on an absolute or relative basis, I know to look for an explanation in the letter. That is a good pointer. If the writer ranks the applicant in the top group (top 2%) in all categories, odds are that it is an inexperienced letter writer, and I tend to expect less insightful, but frothier, letters when I see that. The exception is when I know the letter writer and know them to be a discerning person. Then it means, "check this one out closely."

Nobody on the admissions committee has ever suggested doing any statistics with those numbers.