There has been some hand-wringing of late about the tension between the value of the expansion rate of the universe – the famous Hubble constant, H0 – measured directly from observed redshifts and distances, and that obtained by multi-parameter fits to the cosmic microwave background. Direct determinations consistently give values in the low to mid-70s, e.g., Riess et al. (2016): H0 = 73.24 ± 1.74 km/s/Mpc, while the latest CMB fit from Planck gives H0 = 67.8 ± 0.9 km/s/Mpc. These are formally discrepant at a modest level: enough to be annoying, but not enough to be conclusive.
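To put a number on "formally discrepant at a modest level": assuming the two quoted uncertainties are independent and Gaussian, the significance of the difference is just the offset divided by the quadrature sum of the errors. A quick sketch:

```python
# Quantify the tension between the two H0 determinations quoted above.
h0_local, sig_local = 73.24, 1.74   # km/s/Mpc, Riess et al. (2016) distance ladder
h0_cmb, sig_cmb = 67.8, 0.9         # km/s/Mpc, Planck multi-parameter fit

# Offset over quadrature-summed uncertainties, assuming independent Gaussian errors.
tension = (h0_local - h0_cmb) / (sig_local**2 + sig_cmb**2) ** 0.5
print(f"{tension:.1f} sigma")  # prints "2.8 sigma"
```

Hence "enough to be annoying": a bit under 3σ, which is suggestive but well short of a decisive contradiction.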
The widespread presumption is that there is a subtle systematic error somewhere. Who is to blame depends on what you work on. People who work on the CMB and appreciate its phenomenal sensitivity to cosmic geometry generally presume the problem is with galaxy measurements. To people who work on local galaxies, the CMB value is a non-starter.
This subject has a long and sordid history about which entire books have been written. Many systematic errors have plagued the cosmic distance ladder. Hubble’s earliest (c. 1930) estimate of H0 = 500 km/s/Mpc was an order of magnitude off, and implied a universe younger than the Earth as it was then known to geologists. Recalibration of the distance scale brought the number steadily down. There followed a long (1960s – 1990s) stand-off between H0 = 50 as advocated by Sandage and 100 as advocated by de Vaucouleurs. Obviously, there were some pernicious systematic errors lurking about. Given this history, it is easy to imagine that even today there persists some subtle systematic error in local galaxy distance measurements.
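The age problem follows directly from the Hubble time, 1/H0, which sets the rough timescale for the age of the universe. Converting units makes the conflict obvious (illustrative arithmetic only; the exact age depends on the cosmological model):

```python
# Rough age of the universe from the Hubble time, t = 1/H0.
def hubble_time_gyr(h0_km_s_mpc):
    """Convert H0 in km/s/Mpc to a Hubble time in Gyr."""
    km_per_mpc = 3.0857e19      # kilometers in one megaparsec
    sec_per_gyr = 3.156e16      # seconds in one gigayear
    return km_per_mpc / h0_km_s_mpc / sec_per_gyr

print(f"{hubble_time_gyr(500):.1f} Gyr")  # Hubble's early value: ~2 Gyr
print(f"{hubble_time_gyr(72):.1f} Gyr")   # a modern value: ~13.6 Gyr
```

A 2 Gyr universe is younger than the radiometric age of the Earth, which is why geologists balked at H0 = 500.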
In the mid-90s, I realized that the Tully-Fisher method was effectively a first approximation – there should be more information in the full shape of the rotation curve. Playing around with this, I arrived at H0 = 72 ± 2. My work relied heavily on the work of Begeman, Broeils, & Sanders and in turn on the distances they had assumed. Those carried a much larger systematic uncertainty. To firm up my estimate would require improved calibration of those distances, quite beyond the scope of what I was willing to take on at that time, so I never published it.
In 2001, the HST Key Project on the Distance Scale – the primary motivation to build the Hubble Space Telescope – reported H0 = 72 ± 8. That uncertainty was still plagued by the same systematics that had befuddled me. Since that time, the errors have been beaten down. There have been many other estimates of increasing precision, mostly in the range 72 – 75. The serious-minded cosmologist always worries about some subtle remaining systematic error, but the issue seemed finally to be settled.
One weird consequence of this was that all my extensive notes on the distance scale no longer seemed essential to teaching graduate cosmology: all the arcane details that had occupied the field for decades suddenly seemed like boring minutiae. That was OK – about that time, there finally started to be interesting data on the cosmic microwave background. Explaining that neatly displaced the class time spent on the distance scale. No longer were the physics students stopping to ask, appalled, “what’s a distance modulus?”; now it was the astronomy students who were appalled to be confronted by the spherical harmonics they’d seen but not learned in quantum mechanics.
The first results from WMAP were entirely consistent with the results of the HST key project. This reinforced the feeling that the problem was solved. In the new century, we finally knew the value of the Hubble constant!
Over the past decade, the best-fit value of H0 from the CMB has done a slow walk away from the direct measurements in the local universe. It has gotten far enough to result in the present tension. The problem is that the CMB doesn’t measure the Hubble constant directly; it constrains a multi-dimensional parameter space that approximately projects to a constant value of the product ΩmH0³, as illustrated below.
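A quick way to see what that degeneracy means in practice: if the fit effectively pins down only the combination ΩmH0³, then lowering H0 can be traded against raising Ωm without changing what the CMB sees. The sketch below uses illustrative fiducial numbers (Ωm = 0.31, H0 = 67.8), not an actual Planck likelihood:

```python
# Walk along the degeneracy direction: hold Omega_m * H0**3 fixed and
# see how Omega_m must respond as H0 changes.
def omega_m_along_trench(h0, product):
    """Matter density required to keep Omega_m * H0^3 constant."""
    return product / h0**3

# Anchor the product at an assumed fiducial point: Omega_m = 0.31, H0 = 67.8.
prod = 0.31 * 67.8**3
for h0 in (67.8, 70.0, 73.24):
    print(f"H0 = {h0:5.2f}  ->  Omega_m = {omega_m_along_trench(h0, prod):.3f}")
```

Pushing H0 up to the local value of ~73 drags Ωm down to roughly 0.25 along this direction – which is the sense in which the data carve out a narrow trench rather than a single point.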
Much of the progress in cosmology has been the steady reduction in the allowed range in the above parameter space. The CMB data now allow only a narrow trench. I worry that it may wink out entirely. Were that to happen, it would falsify our current model of cosmology.
For now the only thing that seems to be happening is that the χ² for the CMB data is ever so slightly better for lower values of the Hubble constant. While the lines of the trench represent no-go zones – the data require cosmological parameters to fall between the lines – there isn’t much difference along the trench. It is like walking along the floor of the Grand Canyon: exiting by climbing up the cliffs is disfavored; meandering downstream is energetically favored.
That’s what it looks like to me. The CMB χ² has meandered a bit down the trench. It is not obvious to me that the current Planck best-fit is all that preferable to that from WMAP3. I have asked a few experts what would be so terrible about imposing the local distance scale as a strong prior. Have yet to hear a good answer, so chime in if you know one. If we put the clamps on H0 it must come out somewhere else. Where? How terrible would it be?
This is not an idle question. If one can recover the local Hubble constant with only a small tweak to, say, the baryon density, then fine – we’ve already got a huge problem there with lithium that we’re largely ignoring – why argue about the Hubble constant if this tension can be resolved where there’s already a bigger problem? If instead, it requires something more radical, like a clear difference from the standard number of neutrinos, then OK, that’s interesting and potentially a big deal.
So what is it? What does it take to reconcile Planck with the local H0? Since this is an issue of geometry, I suspect it might be something like the best-fit geometry of the universe becoming ever so slightly not-flat, at the 2σ level instead of 1σ.
While I have not come across a satisfactory explanation of what it would take to reconcile Planck with the local distance scale, I have seen many joint analyses of Planck plus lots of other data. They all seem consistent, so long as you ignore the high-L (L > 600) Planck data. It is only the high-L data that are driving the discrepancy (the low-L data appear to be OK).
So I will say the obvious, for those who are too timid: it looks like the systematic error is most likely with the high-L data of Planck itself.