Carrying out research can be an expensive endeavor. Good research costs a lot of time, effort and money. Therefore, reading studies with low experimental power, weird analytic choices, etc., leaves me uneasy: I have no clue whether and how to interpret such studies, and I think they are a waste of time, effort and money. There is not much I can do about it. As my own silent act of protest, I decided to start listing papers which I cannot believe in…
4. Learning by heart – the relationship between resting vagal tone and metacognitive judgments: a pilot study, Cognitive Processing. Interesting study that tests whether Heart Rate Variability (HRV) is related to metacognitive judgments. Given that HRV is sensitive to executive control, this would be a very cool (albeit very indirect) way to confirm the executive control – metacognition link. However, as the title indicates: this is a pilot study! With only N=20 and roughly 13 correlations being tested, it's not really clear to me what this study teaches us.
3. Behavioural, modeling, and electrophysiological evidence for supramodality in human metacognition, Journal of Neuroscience. In three experiments (N=15, N=15, N=20), the correlation between metacognitive accuracy measured in different modalities is tested. A great and very interesting idea, but testing correlations with such small sample sizes is pointless (the lowest correlation you can detect is r = .567; see the back-of-envelope sketch at the bottom of this post). Unsurprisingly, none of these correlations are convincing (i.e., a lot of p = .04), leaving me hanging on whether and how to interpret this…
2. Distinguishing the roles of dorsolateral and anterior PFC in visual metacognition, Journal of Neuroscience. Using TMS (N=18), the role of the dlPFC and aPFC in visual metacognition was tested (a promising and interesting idea!). In the primary analyses, metacognitive accuracy was not affected by TMS. Only when the data are split into two parts is there an effect, and only in the second part (p = .03). Strictly speaking, this is not significant: it is clearly an unplanned secondary test, so the corrected alpha is .05/2 = .025, which p = .03 exceeds. A shame, because it is a very interesting hypothesis, but I am simply not convinced by these data. Imo, more participants should have been tested so the authors wouldn't have to deal with all these just-significant effects…
1. When Both the Original Study and Its Failed Replication Are Correct: Feeling Observed Eliminates the Facial-Feedback Effect, Journal of Personality and Social Psychology. The authors tested whether the Wagenmakers et al. failure to replicate Strack's pen study was due to the use of video cameras. Great idea, but the authors fail to comply with their own preregistration (testing 26 participants too few). There is indeed an effect in the no-camera condition (p = .01) but not in the camera condition (p = .85). Since Nieuwenhuis et al. (2011), however, everyone knows that only a significant interaction supports this claim, and that interaction is lacking (p = .102; see the sketch below). I wonder how this got through…
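For the curious, here is a minimal back-of-envelope sketch of two of the points above: the small-N correlation problem (entry 3) and the "significant in one condition but not the other is not an interaction" problem (entry 1). This is my own quick check, not anything from the papers, and it assumes a two-tailed alpha of .05 and, for the interaction part, two independent conditions with roughly equal precision:

```python
# Quick sanity checks for the points above (my own sketch, not the authors' analyses).
# Assumptions: two-tailed alpha = .05 throughout; for the interaction check, two
# independent conditions with roughly equal precision.
import numpy as np
from scipy import stats

def minimal_significant_r(n, alpha=0.05):
    """Smallest |r| that can even reach p < alpha (two-tailed) with n participants."""
    df = n - 2
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return t_crit / np.sqrt(t_crit**2 + df)

def r_for_power(n, power=0.80, alpha=0.05):
    """Approximate r needed for the given power, via the Fisher-z approximation."""
    z_needed = (stats.norm.isf(alpha / 2) + stats.norm.ppf(power)) / np.sqrt(n - 3)
    return np.tanh(z_needed)

for n in (15, 20):
    print(f"N = {n}: |r| > {minimal_significant_r(n):.2f} just to reach p < .05, "
          f"r ~ {r_for_power(n):.2f} for 80% power")

# "Significant in one condition (p = .01), not in the other (p = .85)" is not an
# interaction test. Converting the two reported p-values to z-scores and testing
# their difference shows that the contrast itself is far from impressive:
z_no_camera = stats.norm.isf(0.01 / 2)   # ~2.58
z_camera = stats.norm.isf(0.85 / 2)      # ~0.19
z_diff = (z_no_camera - z_camera) / np.sqrt(2)
print(f"z = {z_diff:.2f}, p = {2 * stats.norm.sf(z_diff):.2f}")  # ~1.69, p ~ .09
```

The exact "minimum detectable r" obviously depends on the power criterion you pick, but whichever way you slice it, samples of 15–20 participants can only reveal very large correlations.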