Solar Silliness: The Heart-Sun Connection

By Neuroskeptic
Mar 23, 2018


On Twitter, I learned about a curious new paper in Scientific Reports: Long-Term Study of Heart Rate Variability Responses to Changes in the Solar and Geomagnetic Environment, by Abdullah Alabdulgader and colleagues. According to this article, the human heart "responds to changes in geomagnetic and solar activity". In other words, the paper claims that things like solar flares, cosmic rays and sunspots affect the beating of our hearts. Spoiler warning: I don't think this is true. In fact, I think the whole paper is based on a simple statistical error. But more on that later.

Here's how the study worked. The authors - an international team including researchers from Saudi Arabia, Lithuania, NASA, and the HeartMath Institute (no, really) - recorded the heartbeats of 16 female volunteers. Data collection spanned a period of five months, with the cardiac recordings running for up to 72 hours at a stretch. These ECG recordings were then used to calculate heart rate variability (HRV) from moment to moment. HRV measures the beat-to-beat variability in the heart rate, and is thought to be an index of heart health as well as emotional arousal.

The main part of the study was the correlation of the HRV data against 9 different 'solar and geomagnetic' phenomena. Here's an overview of these cosmic variables:


For each participant in the study, the authors correlated aspects of the HRV timeseries against the geosolar variables, using linear regression. A large number of these regressions were performed, because the authors wanted to try various 'lags' for each of the geosolar measures, to test whether HRV was associated with (say) the cosmic ray count 3 hours previously (or 4 hours, or 5 hours... up to 40 hours). If this sounds like a lot of statistical tests, it was - but the authors corrected for multiple comparisons in a rigorous way.

Based on the results of this analysis, the authors found that "HRV measures react to changes in geomagnetic and solar activity during periods of normal undisturbed activity... cosmic rays, solar radio flux, and Schumann resonance power are all associated with increased HRV."

Unfortunately, I think the analysis is fatally flawed. The problem is one that regular readers may remember: autocorrelation, also known as non-independence of observations. Simply put, you shouldn't use linear regression to compare two time-series. The basic assumption of any regression (or correlation) analysis is that the data points are independent of each other, and in a time-series the points are not independent: two observations close together in time are likely to be more similar than two observations far apart in time (in technical terms, time-series are usually autocorrelated).

Non-independence is an insidious statistical problem that accounts for many spurious results. Previously I've blogged about two (1, 2) published papers which were, I believe, based on false conclusions caused by failing to account for non-independent data. This paper makes a third.
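To see how this goes wrong in practice, here is a minimal simulation sketch (in Python, using numpy and scipy - my own illustrative example, not code or data from the paper). Two time-series are generated to be completely independent of each other, but each one is strongly autocorrelated; ordinary linear regression nonetheless declares them 'significantly' related far more often than the nominal 5% of the time:

```python
# Illustrative simulation: regress one autocorrelated series on another,
# independently generated, and count how often p < 0.05 arises by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def ar1_series(n, phi, rng):
    """Generate an AR(1) series: x[t] = phi * x[t-1] + noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def false_positive_rate(n_points, phi, n_sims, rng):
    """Fraction of simulations in which two independent series 'correlate' at p < 0.05."""
    hits = 0
    for _ in range(n_sims):
        x = ar1_series(n_points, phi, rng)
        y = ar1_series(n_points, phi, rng)  # independent of x by construction
        if stats.linregress(x, y).pvalue < 0.05:
            hits += 1
    return hits / n_sims

# White noise (no autocorrelation): false positives stay near the nominal 5%.
print("phi = 0.0 :", false_positive_rate(100, 0.0, 2000, rng))
# Strong autocorrelation: false positives climb well above 5%.
print("phi = 0.95:", false_positive_rate(100, 0.95, 2000, rng))
```

And that is before trying dozens of different lags per variable, which only multiplies the opportunities for a chance 'hit'.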

*

Here's a simple analysis I ran to illustrate how autocorrelation can generate spurious correlations. I couldn't use the data from the sun-heart study for this purpose, because the authors don't seem to have shared it, so I took two time-series datasets from the internet. The first dataset is the monthly temperature average for London, England. The second is the yearly number of publications on PubMed containing the words 'heart rate variability' for the past 12 years (2006-2017). These are the first two variables I thought up: I did not cherry-pick them.

Clearly, there cannot be a true relationship between these two time-series. They are unrelated in every way - they don't even have the same timescale: one is in months, the other is in years. Yet if you calculate the correlation coefficient between them, it comes out statistically significant (p < 0.05) in 7 out of 12 cases! The 12 cases are the 12 possible ways of lining up the two time-series - i.e. January = 2006, or January = 2007, or January = 2008... and so on. Remember, Alabdulgader et al. tried lots of different alignments (lags) too.

The reason for the high false positive rate is autocorrelation: both time-series show gradual changes over time, so each datapoint is quite similar to the previous one, meaning that the datapoints are not independent. Alabdulgader et al. corrected for the problem of multiple comparisons, but that correction does not solve the problem of autocorrelation; they're two quite different problems.

If I'm right about this, the associations reported in the Alabdulgader et al. paper are most likely spurious and due to chance alone. As the data from this paper don't seem to be public, I can't prove that this is true, but I would be surprised if I'm wrong.
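For concreteness, here is what that alignment exercise looks like in code (Python again; the numbers below are smooth, made-up stand-ins for the two datasets, not the actual temperature and PubMed figures, so the exact count of 'significant' alignments will differ):

```python
# Illustrative sketch of the 12-alignment exercise with placeholder data:
# a seasonal-looking monthly series vs. a steadily growing yearly series.
# Both are smooth, i.e. strongly autocorrelated.
import numpy as np
from scipy import stats

monthly_temp = np.array([5.0, 5.5, 8.0, 11.0, 14.5, 17.5,
                         19.5, 19.0, 16.0, 12.5, 8.5, 6.0])   # Jan..Dec (placeholder values)
yearly_pubs = np.array([900, 1000, 1150, 1300, 1500, 1700,
                        1950, 2200, 2500, 2800, 3100, 3400])  # 2006..2017 (placeholder values)

significant = 0
for shift in range(12):
    aligned = np.roll(monthly_temp, shift)        # one of the 12 possible alignments
    r, p = stats.pearsonr(aligned, yearly_pubs)
    print(f"shift = {shift:2d}   r = {r:+.2f}   p = {p:.3f}")
    if p < 0.05:
        significant += 1

# With smooth series like these, several of the 12 alignments typically
# reach p < 0.05, even though the two variables are entirely unrelated.
print(f"{significant} / 12 alignments 'significant' at p < 0.05")
```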

See also: Orac's take on this paper.
