Academia: What we did wrong with Covid-19

Started by nonntt, April 05, 2020, 02:39:19 PM

Caracal

Quote from: nonntt on April 19, 2020, 03:46:39 PM


Or in other words: Academia has a problem. Usually we can ignore the problem with few consequences, but eventually institutions risk their credibility. And this time, the consequences include additional people dying.

But I'm not really sure it is a solvable problem, or even one particular problem, or in some cases a problem at all. You seem to be talking about rather different things. Ding is just an example of how an unscrupulous person can manipulate the media into thinking he's an expert. What are we going to do? Roust every self-obsessed, self-promoting jerk out of the profession? Some of the other things you talk about are just disagreements. OK, Ioannidis thought there wasn't enough evidence for the actions being taken. Other academics thought that while he made some good points, other parts of that argument were irresponsible and wrong, and they said so.

I can't judge whether anybody did anything irresponsible with that study, but it seems like the issue isn't necessarily the study, or even that the results were publicized. You just have a lot of people from outside a profession trying to draw conclusions based on data they don't really understand.

I'm in the humanities, but I still would say that what I really learned from my graduate training was caution and skepticism about data and conclusions. Before I'm going to adopt some new idea proposed by a book or article, I'm going to need to be convinced that it actually applies more broadly. There are some things where I believe the flaws are so obvious that I find the argument totally unconvincing. There are rare cases where I read a new book, and the argument is so convincing, powerful and careful that my skepticism is overpowered and I just completely adopt the new idea. More often though, my reaction is in the middle. "Huh, that's interesting, I wonder if that would still work if you looked at x records, or y region." If it relates to my own work, I might start thinking about whether there is some way I could test that idea with evidence.

My strong impression is that, in different ways, disease experts' training has taught them to think similarly. A small study is just a data point, and you certainly aren't going to draw big conclusions from it. You don't have to be a statistician to know that it is a bad idea to take a study of 2,000 people, find 40 positives, extrapolate that prevalence to a whole population, divide roughly 70 known deaths by the resulting estimate of infections, and then treat the answer as an overall death rate for the world. That doesn't necessarily mean there's something awful about the actual study, even if it is flawed.
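
To put rough numbers on that, here is a back-of-the-envelope sketch. The figures are made up to echo the ones above (a survey of 2,000 with 40 positives, a hypothetical region of 2,000,000 people with 70 known deaths); it is not the actual study's method. Even ignoring false positives and sampling bias, ordinary sampling error alone swings the implied death rate by roughly a factor of two:

    # Back-of-the-envelope sketch with hypothetical numbers (not the study's method):
    # with only 40 positives, sampling noise alone makes the implied death rate
    # swing by roughly a factor of two.
    from math import sqrt

    sample_size = 2000        # people tested in the hypothetical survey
    positives = 40            # positives found
    population = 2_000_000    # hypothetical region the prevalence is extrapolated to
    deaths = 70               # known deaths in that region at the time

    p_hat = positives / sample_size                  # estimated prevalence (2%)
    se = sqrt(p_hat * (1 - p_hat) / sample_size)     # binomial standard error
    for p in (p_hat - 1.96 * se, p_hat, p_hat + 1.96 * se):
        infections = p * population
        print(f"prevalence {p:.2%} -> ~{infections:,.0f} infections "
              f"-> implied death rate {deaths / infections:.2%}")
    # And this ignores false positives and non-random sampling, which matter even
    # more when the true prevalence is this low.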

pigou

I've gotten flak for an op-ed I've written in my discipline (unrelated to COVID) and it strikes me that lots of academics don't know what it's like to write for the general public. The criticism I got was largely for over-generalizing and failing to point out the nuance... in a 700 word article that needs to set up the background, introduce a problem, apply the field's insights, then suggest how we can resolve the problem. All without relying on any background knowledge and written at an 8th grade level without jargon. The place for nuance is in research papers, but if you want to affect policy, you can't end with a cop-out like "more research is needed." When policy-relevant decisions have to be made now, you need to apply everything you know to come up with a recommendation. You can't make it contingent on data that don't exist or on research that will be published in 5 years, if ever.

I feel the same way about books aimed at a lay audience. Experts inevitably find them boring, because the "groundbreaking" research is 20 years old. That may be true, but the research is likely novel to people whose exposure to the field, if any, is limited to an introductory college class. I read this scathing review of an author who has been extremely influential in policymaking: The Sameness of Cass Sunstein. He writes a book a year and the criticism largely boils down to "his books are repetitive." Which is a fair criticism if you think people read all his books, but nonsensical if you treat them as written for different audiences. I think this partially explains why successful academic writers with influence over policy aren't viewed favorably by many academics: the critics suffer from the curse of knowledge, thinking the book is aimed at them when it is aimed at people for whom the basic insights of the field are novel.

I think we also have to be careful about what exactly "expertise" covers. In climate science, for example, expertise may be related to climate modeling. That's vital to make predictions about what happens under different scenarios. But you see lots of climate scientists jumping into endorsing policies like the Green New Deal. That's politics and economics, both very different areas of expertise. There's a big difference between "we need to take urgent action" and "we need to adopt this specific set of policies." We see this similarly with epi modeling now: anyone with some basic quantitative training can understand the qualitative patterns and the logic of "flattening the curve." That doesn't mean we can all do our own modeling: that requires advanced expertise. But the concrete policy question might be "do we keep the parks open?" Now, you need to integrate the mental health benefits of going outside and getting fresh air and the potential transmission risk if people don't fully adhere to social distancing.
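
To illustrate the "flattening the curve" point, a toy SIR model is enough; the sketch below uses made-up parameters (a transmission rate simply cut in half by distancing) and is purely illustrative, not a calibrated epidemic model:

    # Toy SIR model, purely illustrative (made-up parameters, one-day time steps):
    # halving the transmission rate ("distancing") lowers and delays the peak share
    # of the population that is infected at the same time.
    def peak_infected(beta, gamma=0.1, days=400, i0=0.001):
        s, i = 1.0 - i0, i0
        peak = i
        for _ in range(days):
            new_infections = beta * s * i   # susceptible people infected this day
            recoveries = gamma * i          # infected people recovering this day
            s -= new_infections
            i += new_infections - recoveries
            peak = max(peak, i)
        return peak

    print(f"no distancing (beta=0.30): peak ~{peak_infected(0.30):.0%} infected at once")
    print(f"distancing    (beta=0.15): peak ~{peak_infected(0.15):.0%} infected at once")

That qualitative pattern is the easy part; deciding whether it justifies closing a particular park is not.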

After the fact, we can compare Sweden against its neighbors to see the benefits of going from a very limited response to mostly stay-at-home orders. Right now, Sweden has more cases, but the real comparison can't happen for a few more months: Sweden can maintain their existing policies, while other countries will have to start easing restrictions soon. And number of deaths is not the right measure to begin with: we also need to balance this against economic harm. Social safety nets are extremely strained everywhere and we're just a month into this. Zero chance this can be maintained for another year. But once the papers on this come out, two or three years from now, it'll be way too late to inform policy. The same also holds for medical trials: a remdesivir trial that's now running is expected to have results by April 2023. That's clinically useless: we can't wait three years to figure out if we should treat patients with something or just watch them die without an intervention.

marshwiggle

Quote from: pigou on April 20, 2020, 07:37:16 AM
I've gotten flak for an op-ed I've written in my discipline (unrelated to COVID) and it strikes me that lots of academics don't know what it's like to write for the general public. The criticism I got was largely for over-generalizing and failing to point out the nuance... in a 700 word article that needs to set up the background, introduce a problem, apply the field's insights, then suggest how we can resolve the problem. All without relying on any background knowledge and written at an 8th grade level without jargon. The place for nuance is in research papers, but if you want to affect policy, you can't end with a cop-out like "more research is needed."

This raises the point that the media are part of the problem as well. When the media ask "an expert" about a topic, they don't want a nuanced perspective; they want a 30 second sound bite that they can pop into several news highlight clips. Unless and until the public are interested in more complete (but less black-and-white) analysis, the situation's not likely to get better.
It takes so little to be above average.

pigou

I'd think of media in general as the equivalent of an elevator pitch. The point is not necessarily to convince anyone, but to highlight how your expertise is relevant to a problem -- and to show people that you can communicate it effectively. People who rely on this expertise can then get in touch, and that opens the door to a half-hour-long conversation. The sound bite or short article should convince someone that it's worth their time to reach out to you, when there are a thousand other PhDs they could reach out to instead.

It's easy to realize the value of our own expertise and time: none of us would spend an hour talking with some crackpot with a random YouTube channel. But the time of key decision-makers is even more valuable: they get approached by crackpots constantly and there's just no way to assess who's presenting an informed opinion based on deep knowledge and who's just pushing their own agenda. A tweet I saw the other day captured this nicely, with something along the lines of: "The coronavirus is making it ever more clear that we need to pass the policies I've long been advocating for. -- Sorry, I don't remember. What was the issue again?"

The other thing to keep in mind is what the audience really wants to get out of it. For the most part, they don't want to be educated on the nuance of a research domain or get a lecture on the topic. Instead, they usually face a specific problem and they want expertise to help them resolve it. It's like none of us would hire a CPA to walk us through the tax forms and explain the meanings of all the terms. We hire them to use their expertise to do it for us and leave us with a deliverable product (the completed tax return).

Caracal

Quote from: pigou on April 20, 2020, 07:37:16 AM
When policy-relevant decisions have to be made now, you need to apply everything you know to come up with a recommendation. You can't make it contingent on data that don't exist or on research that will be published in 5 years, if ever.

There's a big difference between "we need to take urgent action" and "we need to adopt this specific set of policies." We see this similarly with epi modeling now: anyone with some basic quantitative training can understand the qualitative patterns and the logic of "flattening the curve." That doesn't mean we can all do our own modeling: that requires advanced expertise. But the concrete policy question might be "do we keep the parks open?" Now, you need to integrate the mental health benefits of going outside and getting fresh air and the potential transmission risk if people don't fully adhere to social distancing.



I agree with a lot of this, including the point about expertise. Decisions about restrictions and government actions are inherently political and you want people who are elected to be making them. Questions about trade-offs and costs of policies are obviously open to discussion. I would argue, however, that when you are dealing with incomplete information and the need to make immediate choices, actual expertise becomes really important. What you get from studying something deeply over a long period of time is an ability to weigh possibilities and make educated guesses. One of the better examples of this with Covid is the whole discussion about antibodies. There's no clear evidence on how protective antibodies are, or whether people who were infected before can get reinfected. However, just about all virologists seem to think it is extremely likely that if you get the virus and then recover, that should provide substantial immunity from getting it again. That isn't based on studies yet; it is based on knowledge of how this works and the fact that people who recover have antibodies. They are working under the operating assumption that this virus is going to work like other viruses. But people who don't seem to understand this, including Ding, have cast doubt on this based on supposed cases of reinfection. Most actual experts in the field think it is much more likely that's a testing sensitivity issue than actual reinfection.


nonntt

Quote from: nonntt on April 05, 2020, 02:39:19 PM
If you read general interest message boards, listen to Fox news, click on YouTube links, or read social media, you've probably seen an Internet rando/your brother-in-law write something like this:

"But Dr. Jim Jones, professor of pharmacy at Stanford, shows that the coronavirus isn't that dangerous, or is not as deadly as the flu, or certainly doesn't warrant the huge economic damage that stay-home orders are causing."

And if you click the link, sure enough, the academic credentials check out and Dr. Jones's argument has been summarized more or less accurately. I haven't made a systematic effort to list them, but I've seen at least a half-dozen cases like this so far, including people who are leaders of their respective fields (except their fields are not, you know, epidemiology). Our country has a massive problem with disinformation, and academia is not always part of the solution.

There are enough cases already that we can't just dismiss them as exceptions. We try to teach our students how to recognize and use reliable sources of information, but academia has structural issues that are contributing to the problem.


  • We've stressed the importance of outreach and showing the relevance of our fields. We need to remember the opposing virtue of shutting up and listening during a worldwide crisis.
  • We valorize counter-intuitive findings, tearing down others' publications around the seminar table and shaking up accepted wisdom of the "Aphra Behn wrote Hamlet!" type. The principal offenders this time are themselves scientists. Hot take: disputing the accepted wisdom in a pandemic kills people.
  • We love interdisciplinarity and breaking down knowledge silos. We need to relearn respect for people with deep knowledge and experience in the fields that have been our passions for a good 3 weeks now.
  • A position at an elite R1 gives you instant credibility both within and outside academia. We need the people in those positions to accept the responsibilities that come with that privilege.
  • "But what about Liberty?" is distracting whataboutism. Liberty U. will have to have its own reckoning. The academic coronavirus skeptics are at Stanford, Yale, and other research schools.
  • Collegiality and respectful discourse is important. But when lives are at stake, it's better that someone is willing to say, "Dr. Jones is an ignorant blowhard whose policy recommendations will result in tens of thousands of needless deaths."

A lot of the people who solve this crisis will be academics and I'm thankful for them, and the system that trained them and gives them the chance to research. But the crisis is also exposing some flaws with our academic culture.

Eight months later, I'd say this has held up pretty well. Academia accomplished a lot of remarkable things. And it failed badly in some other ways. The academic contrarians and overnight experts made a significant contribution to killing hundreds of thousands of people.

And people have noticed. Ed Yong, author of several of the most important journalistic articles on the pandemic, has a new article out in The Atlantic that includes both what academics got right and how the incentives of academia led to some things going very wrong.

Quote
But the COVID-19 pivot has also revealed the all-too-human frailties of the scientific enterprise. Flawed research made the pandemic more confusing, influencing misguided policies. Clinicians wasted millions of dollars on trials that were so sloppy as to be pointless. Overconfident poseurs published misleading work on topics in which they had no expertise. Racial and gender inequalities in the scientific field widened.

Amid a long winter of sickness, it's hard not to focus on the political failures that led us to a third surge. But when people look back on this period, decades from now, they will also tell stories, both good and bad, about this extraordinary moment for science. At its best, science is a self-correcting march toward greater knowledge for the betterment of humanity. At its worst, it is a self-interested pursuit of greater prestige at the cost of truth and rigor. The pandemic brought both aspects to the fore.

There's a lot more detail, and it's well worth reading.