Saturday, February 01, 2014

"Scientific Pride and Prejudice" in the @nytimes makes claims about sciences not using evidence correctly; alas no evidence presented

Well, I guess I can say I was not pleased to see this tweet from Carl Zimmer.
It is not that I have a problem with what Carl wrote. It is just that I then went and read the article he referred to: "Scientific Pride and Prejudice" in the New York Times, by Michael Suk-Young Chwe. And it just did not make me happy. I reread it. Again and again. And I was still unhappy.

What bugs me about this article? Well, alas, a lot. The general gist of the article is that "natural" scientists are not aware enough of how their own preconceptions might bias their work, and furthermore that literary criticism is the place to look for such self-awareness. Well, an interesting idea, I guess, but alas, the irony is that this essay presents no evidence that literary criticism does better with evidence than natural science. Below are some of the lines / comments in the article that I am skeptical of:
  • "Scientists now worry that many published scientific results are simply not true."
  • "Scientists, eager to make striking new claims, focus only on evidence that supports their preconceptions. Psychologists call this “confirmation bias. We seek out information that confirms what we already believe. ”
    • This statement is misleading. Confirmation bias according to all definitions I could find is something more subtle. For example Oxford Dictionaries defines it as "the tendency to interpret new evidence as confirmation of one’s existing beliefs or theories." That is, it is a tendency - a leaning - a bias of sorts.
    • I would very much like to see evidence behind the much more extreme claim of this author that scientists focus "only on evidence that supports their preconceptions". 
    • In my readings of actual research on confirmation bias I can find no evidence for this claim. For example, see the paper "Confirmation bias: a ubiquitous phenomenon in many guises," which states:

    • As the term is used in this article and, I believe, generally by psychologists, confirmation bias connotes a less explicit, less consciously one-sided case-building process. It refers usually to unwitting selectivity in the acquisition and use of evidence. The line between deliberate selectivity in the use of evidence and unwitting molding of facts to fit hypotheses or beliefs is a difficult one to draw in practice, but the distinction is meaningful conceptually, and confirmation bias has more to do with the latter than with the former.

  • "Despite the popular belief that anything goes in literary criticism, the field has real standards of scholarly validity"
    • This is a red herring to me. I can find no evidence that there is a popular belief that "anything goes" in literary criticism. So the author here sets a very low bar, and then basically any presentation of standards is supposed to impress us.
  • "Rather, “the important thing is to be aware of one’s own bias."
    • The author then goes on to discuss how those in the humanities are aware of the issues of confirmation bias and rather than trying to get rid of it, they just deal with it, as implied in the quote.
    • The author then writes "To deal with the problem of selective use of data, the scientific community must become self-aware and realize that it has a problem. In literary criticism, the question of how one’s arguments are influenced by one’s prejudgments has been a central methodological issue for decades."
    • Again, this implies that scientists have not been thinking about this at all which is just wrong.
  • And then the author uses the arsenic-life story as an example of how scientists suffer from "confirmation bias." If you do not know about the arsenic-life story, see here. What is the evidence that this was "confirmation bias"? I think more likely this was a case of purposeful misleading, overhyping, and bad science.
  • Then the author gives an example of how science actually is prone to confirmation bias by presenting a discussion of Robert Millikan's notebooks in relation to his classic "oil drop" experiment. Apparently, these notebooks show that the experiments got better and better over time and closer to the truth. And in the notebooks Millikan annotated them with things like "Best yet - Beauty - Publish". The author concludes this means "In other words, Millikan excluded the data that seemed erroneous and included data that he liked, embracing his own confirmation bias." I don't see evidence that this is confirmation bias. I think better examples of confirmation bias would be cases where we have now concluded the research conclusions were wrong. But instead, Millikan was, and as far as I know still is, considered to have been correct. He won the Nobel Prize in 1923 for his work. Yes, there has been some criticism of his work, but as far as I can tell there is no evidence that he had confirmation bias.
  • I am going to skip commenting on the game theory claims in this article.
  • Then the author writes "Perhaps because of its self-awareness about what Austen would call the “whims and caprices” of human reasoning, the field of psychology has been most aggressive in dealing with doubts about the validity of its research."  Again - what is the evidence for this? Is there any evidence that the field of psychology is somehow different?
I could go on and on. But I won't. I will just say one thing. I find it disappointing and incredibly ironic that an article making claims about how some fields deal better with evidence and confirmation bias than other fields presents no actual evidence to back up its claims. And many of the claims pretty clearly run counter to available evidence.

UPDATE 9:20 AM 2/2/2014: Storify of discussions on Twitter



40 comments:

  1. Great points, Jonathan. Horrible, horrible article that I am astonished was published in NYT. The author is very naive to think natural scientists don't have a skeptical bone in their body. It is one of our pillars.

    Replies
    1. Pretty much every scientist I know walks around assuming that every experiment they do is wrong most of the time. This is what 90% of our weekly lab meeting is dedicated to -- why are our experiments wrong? This is also what 99.9% of our journal clubs are dedicated to -- why are our colleagues' experiments wrong?

    2. Thanks both of you. This is what troubled me most here. All we do is talk about how our work might be wrong or biased, and also about how others' work might be.

  2. Michael Suk-Young Chwe undercuts his point in a weird way. First, he tells us that science should look to literary criticism for aid. Then, to support this, he tells us that his eyes were opened to the problems of bias and preconception...by his physics teacher.

    (Goodstein, it happens, was a colleague of Feynman, who used the Millikan oil-drop experiment as an example of preconceptions skewing scientific results, all the way back in his 1974 address "Cargo Cult Science". A decade later, Goodstein included it in The Mechanical Universe, Caltech's televised version of a first-year college physics course.)

    "Perhaps because of its self-awareness about what Austen would call the “whims and caprices” of human reasoning, the field of psychology has been most aggressive in dealing with doubts about the validity of its research."

    Or, perhaps psychology has been worse than other sciences, more a victim of researchers' biases than other fields have been, and this failure has become entrenched over decades of inadequate self-scrutiny. Perhaps the aggression is born of belated desperation.

    Replies
    1. That possibility would no doubt tickle the egos of those who don't work in psychology, particularly those who've inherited a "hrraf, we're the harder science" tradition of looking down on it. The point is, without a more thorough investigation, we can't say, and just throwing in the claim as our op-ed scrivener does is remarkably facile.

  3. Thanks for your discussion of my article. I'm traveling right now but will answer all of your points in detail when I come back on Tuesday.

    For now:

    1. I did not mean to imply that scientists were not aware of the various problems with replication before the Begley and Ellis article. I thought it was clear that many disciplines, not just cancer research, were vulnerable (my piece later mentions psychology, for example). After reading your comment, I understand how someone reading it might get that implication. I should have written something like, "For example, two years ago, C. Glenn Begley and Lee M. Ellis reported in Nature..." There is nothing special about the Begley and Ellis article---I included it because it was well known, had such a discouraging result, and because it talked about how hundreds of articles can start from a (seemingly) flawed premise.

    2. It's true that "confirmation bias" refers to several related things, but one of them (for example, on the Wikipedia page) is "biased search for information": "Experiments have found repeatedly that people tend to test hypotheses in a one-sided way, by searching for evidence consistent with their current hypothesis." One paper which finds evidence for this is Hart et al. (2009), at

    http://www.sscnet.ucla.edu/polisci/faculty/chwe/austen/hart2009.pdf

    Hart et al. use the term "congeniality bias." If you google "confirmation bias congeniality bias," you will find many people treating the two terms as synonymous.

    Replies
    1. Thanks for responding and clarifying what you meant with the Nature paper. Just for the record, I am a bit skeptical of the implications of their findings, although I completely agree that replicability can be a challenge. The question to me is - why are findings not reproducible? In some cases it is because there was some bias or some other flaw in the work. But there are many other explanations. For example - in many, many cases work is not replicable because the methods are not described fully. This is why many, including myself, have been pushing for authors to more fully describe everything that was done and for journals to require this.

    2. Thanks for your comment. I don't know at all how cancer research works. In political science, sometimes students in our department try to replicate published studies, and as you say, often they get nowhere because the original authors refuse to give them the data or any details about their statistical methods.

    3. What do you think of this argument by Bissell? She says that many results are sensitive to the "microenvironment" and that if researcher A has difficulty replicating a result of researcher B, then researcher A should go to researcher B's lab and work together. To me this is kind of nuts. Science should be a statement about the world, not a statement about what works in my lab.
      http://www.nature.com/news/reproducibility-the-risks-of-the-replication-drive-1.14184

    4. I love Mina Bissell but I do not agree with her conclusions in her Nature commentary.

    5. I am much more inclined to believe that we have done a very poor job of describing the details of our work and we need to do that better before we understand how reproducible certain experiments are. For example see my commentary about this in regard to the ARRIVE standards for animal research.

    6. And I note - I am co-organizing a meeting at UC Davis in two weeks where one of the topics will be peer review and reproducibility. We have a session on peer review with the following speakers:

      Victoria Stodden, Columbia University
      Emily Ford, PDX Scholar & Portland State University
      Ivan Oransky, Retraction Watch & New York University
      Cesar A. Berrios-Otero, Faculty of 1000
      Jonathan Dugan, Public Library Of Science

      A few of them focus on reproducibility issues, which I think are critical in science right now. But I think it is still unclear how much bias vs. differences between studies vs. statistics vs. other issues contribute to limited reproducibility.

    7. Best wishes on the conference! I of course agree that limited reproducibility has many possible causes, and that confirmation bias (or what humanists would call "the hermeneutic circle") is just one of several. My piece was not intended to be a full discussion of the problem of reproducibility.

  4. 3. By saying, "A major root of the crisis is selective use of data. Scientists, eager to make striking new claims, focus only on evidence that supports their preconceptions," I do not mean that _all_ scientists focus only on evidence that supports their preconceptions. This would be a very strong and obviously unsupportable statement.

    If I write, "A major root of the water shortage is careless use of water. Families, washing their cars and watering their lawns, carelessly leave hoses running," I don't think that anyone would take this as a statement that _all_ families carelessly leave their water hoses running.

  5. 4. I don't know how popular it is, but in 1976 Hayden White wrote, "Modern literary critics recognize no disciplinary barriers, either as subject matter or to methods. In literary criticism, anything goes."
    http://www.jstor.org/stable/1207644

    For another example, in his "A Physicist Experiments With Cultural Studies," Alan Sokal writes, "Social Text's acceptance of my article exemplifies the intellectual arrogance of Theory -- meaning postmodernist literary theory -- carried to its logical extreme. No wonder they didn't bother to consult a physicist. If all is discourse and 'text,' then knowledge of the real world is superfluous; even physics becomes just another branch of Cultural Studies. If, moreover, all is rhetoric and 'language games,' then internal logical consistency is superfluous too: a patina of theoretical sophistication serves equally well. Incomprehensibility becomes a virtue; allusions, metaphors and puns substitute for evidence and logic." I think that Sokal is arguing that in postmodern literary theory, you can say anything you want, without evidence or logic.

  6. 5. By saying "To deal with the problem of selective use of data, the scientific community must become self-aware and realize that it has a problem. In literary criticism, the question of how one’s arguments are influenced by one’s prejudgments has been a central methodological issue for decades," I don't think that there is an implication that scientists have not been thinking about the subject at all. My piece starts by mentioning the Begley and Ellis article, and of course Begley and Ellis are scientists. My piece quotes Kahneman, who is a scientist.

    Again, if I write, "Americans should realize that North Korea is a country, where real people, not soulless automatons, live," I don't think that anyone would think that I am implying that there have been no discussions among Americans which treat North Koreans as real people.

    Replies
    1. If you had said something like "the scientific community must become MORE self-aware and better address its problem" then perhaps I would not have responded this way. But I see no way to interpret the words you used there other than that they mean "scientists have not done this at all."

    2. This comment has been removed by the author.

    3. I agree, your wording would have been clearer. But again, for example, if the governor says, "we must become aware of the current water shortage," this is not normally understood as implying very much about how many people are already aware.

    4. I don't think this is really comparable, to be honest. Suppose the Governor wrote an editorial for the Sacramento Bee that started with

      "Farming in California is in crisis, just when we need it most. Two years ago, Joe Smith and Elena Jackson reported in Farm America that they could only see evidence of water conservation on six out of 53 farms in California. Farmers now worry that many reports of water conservation are just not true. The farmers often offer themselves as a model to others in terms of water conservation. But this time farmers might look for help to the state government, and to the State House in particular."

      This would set the tone for the whole article as being a critique of farmers and their behaviors.

      Then what if the governor wrote

      "To deal with the problem of water being wasted, the farming community must become self-aware and realize that it has a problem. In the State House, the question of how to conserve water has been a central issue for decades."

      And so on. It is the overall tone of your article that is off in my opinion. And I think, in that context, one almost has to interpret the comment that "scientists must become aware" as implying that they are not aware.

    5. I don't want to prolong this discussion too much because it concerns what my words can be taken to imply, as opposed to what I actually believe. Of course, I am aware that many scientists have been thinking about these issues for a while. If scientists had not been writing about issues with replicability, I would not have been aware of their extent.

      About tone, all I would say is that yes, the piece is a critique of scientists and their behaviors (and of course that does not mean _all_ scientists and _all_ of their behaviors). This critique applies to many disciplines, including my own, which is equally vulnerable.

      For a comparison in terms of tone, see Kahneman's letter to social priming researchers (quoted in the article and available here:
    http://www.nature.com/polopoly_fs/7.6716.1349271308!/suppinfoFile/Kahneman%20Letter.pdf )

      My piece is much milder in tone than Kahneman's. When Kahneman says, "To deal effectively with the doubts you should acknowledge their existence and confront them straight on," I don't think that the implication is that no one has done this yet.

    6. Unless you publish clarifications on the New York Times website, your words are what people will see. They will not see what you believe. And thus I think it is critical to focus on your words in the piece.

  7. 6. To get real evidence that the authors of the arsenic-in-DNA study suffered from confirmation bias, I suppose we would have to interview the authors and determine what their beliefs were before running the experiment. Short of this, I think that the fact that NASA hyped the press conference announcing the result is evidence that they a priori wanted the result to be true. The authors named the bacterium "GFAJ-1", reportedly for "Get Felisa a Job" (the lead author was Felisa Wolfe-Simon). So perhaps the authors thought that the result would be striking enough to help a person's career. There's nothing wrong with this, and this is understandable. It does constitute evidence of "wishful thinking" or confirmation bias, in my opinion.

    Replies
    1. Certainly there was wishful thinking and confirmation bias wrapped into this story. But my issue here relates to the other comment about the definition of confirmation bias. I think this was certainly not "unwitting molding of facts to fit hypotheses or beliefs" or even "deliberate selectivity in the use of evidence". This seems to have been purposeful deceptiveness rather than just selectivity.

    2. I see your point. If the authors were deliberately deceptive, then confirmation bias is not an issue. However, the claim that the authors were purposefully deceptive is a stronger claim, and I think requires more evidence. In my reading about the case, I don't recall anyone making the stronger claim of deliberate deception.

    3. Well, OK, it is VERY hard to prove purposeful deception. But I think it is very clear that the authors (or at least one author) of the paper misrepresented what was in the paper when talking to the press - for years. The paper itself does not make anywhere near as many outlandish claims as she did in the original press conference and in other conversations with the press. So - I am not sure what to call this, but the paper itself (which to me represents the research) was less of an issue than the discussions with the public and the press. (See for example my post here.)

  8. 7. About the Millikan-Fletcher oil-drop experiment---of course you can have confirmation bias and still be right. You can have any kind of bias and still be right. The point I was making in the article, I think fairly explicitly, was that bias can sometimes be a good thing (as in Millikan's case).

    There's a bit of literature about the extent of Millikan's pre-weeding of his data, and there is a survey by Goodstein at
    http://www.its.caltech.edu/~dg/MillikanII.pdf

    Replies
    1. My issue here is how one determines that there was "confirmation bias". The fact that he refined his experiments and got results he felt happy with could certainly be some form of confirmation bias. It could also be that he was getting better at doing the experiment and was more confident it had worked. And of course it could be both. But I do not think it is easy to document any type of confirmation bias in the cases where experimental results get closer and closer to what we consider to be the truth. This is why I think this is an awkward example. I agree completely that one can be biased and still be right. The challenge lies in proving bias in such cases.

    2. Proving bias (for example racial preferences in hiring) is always difficult. I think that when Millikan wrote "Best yet - Beauty - Publish" in his lab notebooks after a successful run, this is strong evidence of his having a preconception of what the results should be. He could have written "Worked OK" or "Successful run" or "valid data point" or "I think I did it right this time" if he was concerned only with whether the experiment worked.

    3. It seems that you are bringing your own confirmation bias to interpreting what Millikan meant with his comments. My own bias, as a researcher who does difficult experiments: I wouldn't at all be surprised if Millikan is referring here to his improving technique and the overall quality of the experiment and the data.

      By the way, that is one of the challenges to Eisen's demand for more detailed methods. There are shortcuts to methods reporting that we need to stamp out, but we also learn to recognize from experience when an experiment went well and when it went poorly. Yet, it can be difficult to articulate at times precisely what was improved about the experiment. Which might mean our feeling about the experiment is wrong, or it might mean that, like a Grandmaster chess player who just (from experience) "feels" a certain move is right or wrong, we have collected a set of skills and observations that we automatically associate with a quality experiment.

      And that is how *I* read Millikan.

  9. 8. I wrote "Perhaps because of its self-awareness about what Austen would call the “whims and caprices” of human reasoning, the field of psychology has been most aggressive in dealing with doubts about the validity of its research."

    In an earlier version of the piece, I quoted in support of this claim the psychologist Brian Nosek, who stated: "Psychology is really taking the lead in many ways. . . All of the sciences are confronting this. But we understand many of the ways that human factors can affect results. And so I hope that our work will be extended in other fields."
    http://www.apa.org/monitor/2013/02/results.aspx

    Replies
    1. The field of psychology has been dealing with this because they hit a crisis point of HUGE fraud and even worse reproducibility of results, *actual* confirmation bias, and poor statistical worshipping of a p-value = 0.05 = publish.

  10. Eisen already makes some great points about what I found annoying in the extreme about this article. I'd also like to add a loud groan at the idea that Jane Austen was a Game Theorist. We already had to suffer through Proust being a Neuroscientist, and we all saw how that played out.

    I think we need fewer non-scientists writing about science and more scientists writing about science.

  11. Hi Jonathan, we tweeted about this yesterday (@dawnbazely) and I was thrilled to see your rapid response to Michael's NYT piece, which I read with familiar dismay. I direct the York University (Toronto) pan-university research institute on Sustainability. Our mandate is to cut across all faculties (silos) and to stimulate inclusive, collaborative research. I just edited a book with a Political Science prof. from Tromso, Norway, on Arctic Environmental & Human Security.
    Consequently, I am entirely familiar both with Michael's social sciences perspective (and some very general, long-term gripes from my very smart colleagues in SocSci) and with the perceived shortcomings of my #STEM colleagues - who are, fundamentally, "my people" - I publish boring papers on grass, fungi & twigs, in #STEM journals.
    The notion that #STEM researchers lack self-awareness, self-reflection & self-criticism IMO comes out of our collective use of Popper's Scientific Method. So we don't argue the philosophical nature of our approach to data collection. This, many in SocSci have believed & continue to believe (for decades), leads to #STEM people thinking that what we do is uber-neutral, more objective and "better" than what other scholars do. This annoys them. Good papers by folk like Lele & Norgaard in BioScience discuss this, and there's the "Science Wars" of the 1960-70s, the RAND inst., and, going back further to the 1930s, interesting writing from sociologists.
    PROBLEM is: all good #STEM people discuss these issues with data, experimental approach, review bias, confirmation bias, every bias, all the F***ing time.
    BUT, we tend to keep these discussions in the family, so to speak.

    The main proposition/thesis of my science-policy-politics research of the last 7 yrs has been that academics must actively seek out these broader conversations and engage in more inter-, trans-, multi-disciplinary research, recognizing that there are fundamental distinctions between different Teams in Team Science (I use science here in its broadest definition of being about knowledge creation).
    Your response to Michael's NYT piece is a BRILLIANT example of the kind of dialogue and conversation that IMO needs to happen much more frequently.

    OK, now I really need to get back to my nice, quiet, boring lab, so that I can talk to some endophyte-infected grasses... And do what Randy Olson means when he urges us "Don't Be Such a Scientist!"

    Replies
    1. "PROBLEM is: all good #STEM people discuss these issues with data, experimental approach, review bias, confirmation bias, every bias, all the F***ing time."

      ^THIS!

      I'm not sure what spurred the author to write this opinion piece, but I don't see how it does anyone any good, offering little resolution other than essentially saying that scientists should think more like literary critics. I'll admit, I'm probably not even well aware of said line of thinking carried out in the humanities, but it's not like the issue at hand is some hidden secret festering within the ranks of STEM departments across the globe, ready to burst at any moment and shed doubt on all prior empirical work to date.

      Bottom line: people in STEM fields are critical of their own work, the work of others, and are aware of problems with any sort of biases. Let's not get so hyperbolic here and make it out to be an epidemic (or a "crisis" as stated in the article).

      Researchers should certainly be aware of problems arising from confirmation bias and avoid that pitfall, whether it be intentional or not. But let's keep in mind that the scientific method requires some "prejudgmental" thinking, in a sense...and also that most scientists, at least in the STEM fields, are usually perfectly OK with being wrong, so to speak, and arriving at unexpected or inconclusive results (it may hurt for a bit, and lead to some "WTF!?" moments). But that should inevitably then bring up the question: WHY? Which hopefully leads to more research. We're all for that.

      Dawn, your work seems really interesting, by the way. I imagine there's something innately fascinating (and probably frustrating) about getting to interact with such different people across all those fields that appear so different on the surface. I consider myself to be "Such a Scientist" most times when I start thinking, which means I should probably just stop typing right now and stop trying to construct an argument that probably ended up being a complete mess.

      So to wrap up, I don't mean to downplay concerns about confirmation bias or problems with reproducing experiments or other similar issues. It's just that when you try to break into the bubble of the STEM sciences and attempt to critique the processes I think we expect something more original and less akin to Jane Austen meets game theory meets Chicken Little.

    2. I agree with everything you say. YES, we are critical, but people working outside of labs (I use that generically for sci-tech-math-med sci) just don't hear those conversations. Over the years, many colleagues in the social sciences on my v. large campus (3rd largest university in Canada) have shyly asked me if they can "see" my lab (we try to tidy it up a bit). They've NEVER really talked much with STEM colleagues and often never been in the sci-eng-meds buildings on campus. To me, that's nuts.

      On the other hand, many of my favourite colleagues to work with in STEM who've wanted to work in interdisciplinary teams have dropped out of these collaborations, because our adorable (I love you all) social sciences colleagues have driven them completely mad, with A. too many words, not enough action and B. perceived or actual poor logic (did I say that social scientists use FAR too many words and the passive voice, grrrrr).

      My research and teaching activity at the science-policy-politics interface is, in some ways, a self-imposed penance (a holdover, no doubt, from all that catholic upbringing & arguing with priests about evolution) of "learning to get along" with the other 85% of society who truly DO NOT tend to naturally think in the logical, abstracted, theoretical framework that characterizes the scientific method.

      I've always been interested in the arts, culture, politics (though lack talent to make a career in these areas), and I have enough friends and relatives in these fields to know that STEM people are generally NOT SPEAKING in public about ourselves and our work: it's usually other people who lack the training, speaking for us. And, the academic training is SO VERY different.

      At the end of the day, the outstanding colleagues with whom I'm fortunate enough to work in Social Sciences are as smart as top-notch STEM colleagues, work just as hard, and are as open to learning from me as I am to learning from them.

      The biggest way that STEM people undersell ourselves is that, while we are extremely open to criticism, truth be told, the way we deal with it can be very brutal (and off-putting to young women). What I've learned from Social Sciences colleagues is to be much more civil and respectful in how I go about demolishing their arguments. I'm incredibly competitive, but my family has really not approved of this for decades, and I've just had to confront how to be more palatable as a scientist in a very non-sciencey family (though I have buckets of engineer uncles).

      The civility that has characterized Jonathan Eisen's exchange with Michael Chwe is marvellous, and I will use it as a case study in showing that STEM researchers can and do have social skills. The whole exchange - the way Jonathan handled it and Michael responded - has given me a warm, fuzzy feeling, and made me very proud to be an academic. As you may have heard, scientists don't have much political currency in Canada these days!

    3. Dawn - thanks for noticing my attempts to be non-hostile in my post. While writing it I was thinking "some people are going to respond to this with personal attacks and with a mocking tone - try to focus on the article and just make comments about it" -- and I am really glad he responded. I disagree with the tone of what he wrote. But the back and forth is healthy...

    4. Yes, there isn't much transparency when it comes to the behind-the-scenes action, so to speak. Although I suppose the assumption is that, between self-policing and peer review, the "bad science" will be mostly weeded out, and that criticism thereafter, along with attempts to reproduce results, will keep authors honest. The apparent problems presented definitely shouldn't be taken lightly, and maybe those outside of the closed doors of the STEM departments deserve to be in-the-loop as well. The cynic in me also worries that too much transparency will override the entire process with criticism from those who never understood science to begin with, if you catch my drift.

      Definitely agree about the lack of social skills (I too am literally the only science-oriented member of my family when extended out quite a bit). At my undergraduate university, a VERY engineering heavy "tech" school, there was a running joke that the students were book smart but not people smart.

      Lots of STEM students/researchers/professors, myself included, could probably use some touching up when it comes to the communication of science, particularly outside of academia (i.e. to the public and policy makers). The open access "experiment" is an interesting concept as well, which from what I understand was geared towards widening the base of readers of the scientific literature. I'm optimistic but hesitant, and that really opens up a different topic for discussion.

  12. As a postscript, biologist Sui Huang sent me his article, "When peers are not peers and don't know it: The Dunning-Kruger effect and self-fulfilling prophecy in peer-review."

    http://onlinelibrary.wiley.com/doi/10.1002/bies.201200182/abstract

    Abstract:
    The fateful combination of (i) the Dunning-Kruger effect (ignorance of one's own ignorance) with (ii) the nonlinear dynamics of the echo-chamber between reviewers and editors fuels a self-reinforcing collective delusion system that sometimes spirals uncontrollably away from objectivity and truth. Escape from this subconscious meta-ignorance is a formidable challenge but if achieved will help correct a central deficit of the peer-review process that stifles innovation and paradigm shifts.

