Solution Aversion

I have had the misfortune to encounter many terms for psychological dysfunction in many venues. Cognitive dissonance, confirmation bias, the Dunning-Kruger effect – I have witnessed them all, all too often, both in the context of science and elsewhere. Those of us who are trained as scientists are still human: though we fancy ourselves immune, we are still subject to the same cognitive foibles as everyone else. Generally our training suffices only to get us past the oft-repeated ones.

Solution aversion is the knee-jerk reaction we have to deny the legitimacy of a problem when we don’t like the solution admitting said problem would entail. An obvious example in the modern era is climate change. People who deny the existence of this problem are usually averse to its solution.

Let me give an example from my own experience. To give some context requires some circuitous story-telling. We’ll start with climate change, but eventually get to cosmology.

Recently I encountered a lot of yakking on social media about an encounter between Bill Nye (the science guy) and Will Happer in a dispute about climate change. The basic gist of most of the posts was that of people (mostly scientists, mostly young enough to have watched Bill Nye growing up) cheering on Nye as he “eviscerated” Happer’s denialism. I did not watch any of the exchange, so I cannot evaluate the relative merits of their arguments. However, there is a more important issue at stake here: credibility.

Bill Nye has done wonderful work promoting science. Younger scientists often seem to revere him as a sort of Mr. Rogers of science. Which is great. But he is a science-themed entertainer, not an actual scientist. His show demonstrates basic, well known phenomena at a really, well, juvenile level. That’s a good thing – it clearly helped motivate a lot of talented people to become scientists. But recapitulating well-known results is very different from doing the cutting edge science that establishes new results that will become the fodder of future textbooks.

Will Happer is a serious scientist. He has made numerous fundamental contributions to physics. For example, he pointed out that the sodium layer in the upper atmosphere could be excited by a laser to create artificial guide stars for adaptive optics, enabling ground-based telescopes to achieve resolutions comparable to that of the Hubble space telescope. I suspect his work for the JASON advisory group led to the implementation of adaptive optics on Air Force telescopes long before we astronomers were doing it. (This is speculation on my part: I wouldn’t know; it’s classified.)

My point is that, contrary to the wishful thinking on social media, Nye has no more standing to debate Happer than Mickey Mouse has to debate Einstein. Nye, like Mickey Mouse, is an entertainer. Einstein is a scientist. If you think that comparison is extreme, that’s because there aren’t many famous scientists whose name I can expect everyone to know. A better analogy might be comparing Jon Hirschtick (a successful mechanical engineer, Nye’s field) to I.I. Rabi (a prominent atomic physicist like Happer), but you’re less likely to know who those people are. Most serious scientists do not cultivate public fame, and the modern examples I can think of all gave up doing real science for the limelight of their roles as science entertainers.

Another important contribution Happer made was to the study and technology of spin polarized nuclei. If you place an alkali element and a noble gas together in vapor, they may form weak van der Waals molecules. An alkali is basically a noble gas with a spare electron, so the two can become loosely bound, sharing the unwanted electron between them. It turns out – as Happer found and explained – that the wavefunction of the spare electron overlaps with the nucleus of the noble. By spin polarizing the electron through the well known process of optical pumping with a laser, it is possible to transfer the spin polarization to the nucleus. In this way, one can create large quantities of polarized nuclei, an amazing feat. This has found use in medical imaging technology. Noble gases are chemically inert, so safe to inhale. By doing so, one can light up lung tissue that is otherwise invisible to MRI and other imaging technologies.

I know this because I worked on it with Happer in the mid-80s. I was a first year graduate student in physics at Princeton where he was a professor. I did not appreciate the importance of what we were doing at the time. Will was a nice guy, but he was also my boss and though I respected him I did not much like him. I was a high-strung, highly stressed, 21 year old graduate student displaced from friends and familiar settings, so he may not have liked me much, or simply despaired of me amounting to anything. Mostly I blame the toxic arrogance of the physics department we were both in – Princeton is very much the Slytherin of science schools.

In this environment, there weren’t many opportunities for unguarded conversations. I do vividly recall some of the few that happened. In one instance, we had heard a talk about the potential for industrial activity to add enough carbon dioxide to the atmosphere to cause an imbalance in the climate. This was 1986, and it was the first I had heard of what is now commonly referred to as climate change. I was skeptical, and asked Will’s opinion. I was surprised by the sudden vehemence of his reaction:

“We can’t turn off the wheels of industry, and go back to living like cavemen.”

I hadn’t suggested any such thing. I don’t even recall expressing support for the speaker’s contention. In retrospect, this is a crystal clear example of solution aversion in action. Will is a brilliant guy. He leapt ahead of the problem at hand to see the solution being a future he did not want. Rejecting that unacceptable solution became intimately tied, psychologically, to the problem itself. This attitude has persisted to the present day, and Happer is now known as one of the most prominent scientists who is also a climate change denier.

Being brilliant never makes us foolproof against being wrong. If anything, it sets us up for making mistakes of enormous magnitude.

There is a difference between the problem and the solution. Before we debate the solution, we must first agree on the problem. That should, ideally, be done dispassionately and without reference to the solutions that might stem from it. Only after we agree on the problem can we hope to find a fitting solution.

In the case of climate change, it might be that we decide the problem is not so large as to require drastic action. Or we might hope that we can gradually wean ourselves away from fossil fuels. That is easier said than done, as many people do not seem to appreciate the magnitude of the energy budget that needs replacing. But does that mean we shouldn’t even try? That seems to be the psychological result of solution aversion.

Either way, we have to agree and accept that there is a problem before we can legitimately decide what to do about it. Which brings me back to cosmology. I did promise you a circuitous bit of story-telling.

Happer’s is just the first example I encountered of a brilliant person coming to a dubious conclusion because of solution aversion. I have had many colleagues who work on cosmology and galaxy formation say straight out to me that they would only consider MOND “as a last resort.” This is a glaring, if understandable, example of solution aversion. We don’t like MOND, so we’re only willing to consider it when all other options have failed.

I hope it is obvious from the above that this attitude is not a healthy one in science. In cosmology, it is doubly bad. Just when, exactly, do we reach the last resort?

We’ve already accepted that the universe is full of dark matter, some invisible form of mass that interacts gravitationally but not otherwise, has no place in the ridiculously well tested Standard Model of particle physics, and has yet to leave a single shred of credible evidence in dozens of super-sensitive laboratory experiments. On top of that, we’ve accepted that there is also a distinct dark energy that acts like antigravity to drive the apparent acceleration of the expansion rate of the universe, conserving energy by the magic trick of a sign error in the equation of state that any earlier generation of physicists would have immediately rejected as obviously unphysical. In accepting these dark denizens of cosmology we have granted ourselves essentially infinite freedom to fine-tune any solution that strikes our fancy. Just what could possibly constitute the last resort of that?

When you have a supercomputer, every problem looks like a simulation in need of more parameters.

Being a brilliant scientist never precludes one from being wrong. At best, it lengthens the odds. All too often, it leads to a dangerous hubris: we’re so convinced by, and enamored of, our elaborate and beautiful theories that we see only the successes and turn a blind eye to the failures, or in true partisan fashion, try to paint them as successes. We can’t have a sensible discussion about what might be right until we’re willing to admit – seriously, deep-down-in-our-souls admit – that maybe ΛCDM is wrong.

I fear the field has gone beyond that, and is fissioning into multiple, distinct branches of science that use the same words to mean different things. Already “dark matter” means something different to particle physicists and astronomers, though they don’t usually realize it. Soon our languages may become unrecognizable dialects to one another; already communication across disciplinary boundaries is strained. I think Kuhn noted something about different scientists not recognizing what other scientists were doing as science, nor regarding the same evidence in the same way. Certainly we’ve got that far already, as successful predictions of the “other” theory are dismissed as so much fake news in a world unhinged from reality.


Degenerating problemshift: a wedged paradigm in great tightness

Reading Merritt’s paper on the philosophy of cosmology, I was struck by a particular quote from Lakatos:

A research programme is said to be progressing as long as its theoretical growth anticipates its empirical growth, that is as long as it keeps predicting novel facts with some success (“progressive problemshift”); it is stagnating if its theoretical growth lags behind its empirical growth, that is as long as it gives only post-hoc explanations either of chance discoveries or of facts anticipated by, and discovered in, a rival programme (“degenerating problemshift”) (Lakatos, 1971, pp. 104–105).

The recent history of modern cosmology is rife with post-hoc explanations of unanticipated facts. The cusp-core problem and the missing satellites problem are prominent examples. These are explained after the fact by invoking feedback, a vague catch-all that many people agree solves these problems even though none of them agree on how it actually works.

Cartoon of the feedback explanation for the difference between the galaxy luminosity function (blue line) and the halo mass function (red line). From Silk & Mamon (2012).

There are plenty of other problems. To name just a few: satellite planes (unanticipated correlations in phase space), the emptiness of voids, and the early formation of structure  (see section 4 of Famaey & McGaugh for a longer list and section 6 of Silk & Mamon for a positive spin on our list). Each problem is dealt with in a piecemeal fashion, often by invoking solutions that contradict each other while buggering the principle of parsimony.

It goes like this. A new observation is made that does not align with the concordance cosmology. Hands are wrung. Debate is had. Serious concern is expressed. A solution is put forward. Sometimes it is reasonable, sometimes it is not. In either case it is rapidly accepted so long as it saves the paradigm and prevents the need for serious thought. (“Oh, feedback does that.”) The observation is no longer considered a problem through familiarity and exhaustion of patience with the debate, regardless of how [un]satisfactory the proffered solution is. The details of the solution are generally forgotten (if ever learned). When the next problem appears the process repeats, with the new solution often contradicting the now-forgotten solution to the previous problem.

This has been going on for so long that many junior scientists now seem to think this is how science is supposed to work. It is all they’ve experienced. And despite our claims to be interested in fundamental issues, most of us are impatient with re-examining issues that were thought to be settled. All it takes is one bold assertion that everything is OK, and the problem is perceived to be solved whether it actually is or not.

“Is there any more?”

That is the process we apply to little problems. The Big Problems remain the post hoc elements of dark matter and dark energy. These are things we made up to explain unanticipated phenomena. That we need to invoke them immediately casts the paradigm into what Lakatos called degenerating problemshift. Once we’re there, it is hard to see how to get out, given our propensity to overindulge in the honey that is the infinity of free parameters in dark matter models.

Note that there is another aspect to what Lakatos said about facts anticipated by, and discovered in, a rival programme. Two examples spring immediately to mind: the Baryonic Tully-Fisher Relation and the Radial Acceleration Relation. These are predictions of MOND that were unanticipated in the conventional dark matter picture. Perhaps we can come up with post hoc explanations for them, but that is exactly what Lakatos would describe as degenerating problemshift. The rival programme beat us to it.

In my experience, this is a good description of what is going on. The field of dark matter has stagnated. Experimenters look harder and harder for the same thing, repeating the same experiments in hope of a different result. Theorists turn knobs on elaborate models, gifting themselves new free parameters every time they get stuck.

On the flip side, MOND keeps predicting novel facts with some success, so it remains in the stage of progressive problemshift. Unfortunately, MOND remains incomplete as a theory, and doesn’t address many basic issues in cosmology. This is a different kind of unsatisfactory.

In the mean time, I’m still waiting to hear a satisfactory answer to the question I’ve been posing for over two decades now. Why does MOND get any predictions right? It has had many a priori predictions come true. Why does this happen? It shouldn’t. Ever.

Cepheids & Gaia: No Systematic in the Hubble Constant

Casertano et al. have used Gaia to provide a small but important update in the debate over the value of the Hubble Constant. The ESA Gaia mission is measuring parallaxes for billions of stars. This is fundamental data that will advance astronomy in many ways, no doubt settling long standing problems but also raising new ones – or complicating existing ones.

Traditional measurements of H0 are built on the distance scale ladder, in which distances to nearby objects are used to bootstrap outwards to more distant ones. This works, but is also an invitation to the propagation of error. A mistake in the first step affects all others. This long-standing concern informs the assumption that the tension between H0 = 67 km/s/Mpc from Planck and H0 = 73 km/s/Mpc from local measurements will be resolved by some systematic error – presumably in the calibration of the distance ladder.
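The compounding of errors up the ladder is easy to sketch. In this toy model the rung uncertainties are invented for illustration (they are not real survey values): independent random errors on each rung add in quadrature, while a systematic bias in the calibration rung shifts everything above it coherently.

```python
import math

# Toy distance ladder: each rung calibrates the next, so fractional
# distance errors compound. The rung uncertainties below are made up
# purely for illustration, not taken from any real survey.
rung_fractional_errors = {
    "parallax -> Cepheid calibration": 0.02,
    "Cepheids -> host galaxies": 0.03,
    "SNe Ia -> Hubble flow": 0.02,
}

# Independent random fractional errors add in quadrature.
total = math.sqrt(sum(e**2 for e in rung_fractional_errors.values()))
print(f"combined fractional error: {total:.3f}")  # ~4% on H0

# A systematic bias in the first rung, by contrast, does not average
# down: it propagates multiplicatively into every subsequent rung.
```

This is why the Gaia check of the Cepheid calibration matters: the first rung is exactly where a hidden systematic would do the most damage.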

Well, not so far. Gaia has now measured enough Cepheids in our own Milky Way to test the calibration used to measure the distances of external galaxies via Cepheids. This was one of the shaky steps where things seemed most likely to go off. But no – the scales are consistent at the 0.3% level. For now, direct measurement of the expansion rate remains H0 = 73 km/s/Mpc.

Critical Examination of the Impossible

It has been proposal season for the Hubble Space Telescope, so many astronomers have been busy with that. I am no exception. Talking to others, it is clear that there remain many more excellent Hubble projects than available observing time.

So I haven’t written here for a bit, and I have other tasks to get on with. I did get requests for a report on the last conference I went to, Beyond WIMPs: from Theory to Detection. They have posted video from the talks, so anyone who is interested may watch.

I think this is the worst talk I’ve given in 20 years. Maybe more. Made the classic mistake of trying to give the talk the organizers asked for rather than the one I wanted to give. Conference organizers mean well, but they usually only have a vague idea of what they imagine you’ll say. You should always ignore that and say what you think is important.

When speaking or writing, there are three rules: audience, audience, audience. I was unclear what the audience would be when I wrote the talk, and it turns out there were at least four identifiably distinct audiences in attendance. There were the skeptics – particle physicists who were concerned with the state of their field and that of cosmology; the faithful – particle physicists who were not in the least concerned about this state of affairs; the innocent – grad students with little to no background in astronomy; and the experts – astroparticle physicists with a deep but rather narrow knowledge of the relevant astronomical data. I don’t think it would have been possible to address the assigned topic (a “Critical Examination of the Existence of Dark Matter”) in a way that satisfied all of these distinct audiences, and certainly not in the time allotted (or even in an entire semester).

It is tempting to give an interruption by interruption breakdown of the sociology, but you may judge that for yourselves. The one thing I got right was what I said at the outset: Attitude Matters. You can see that on display throughout.

This comic has been hanging on a colleague’s door for decades.

In science as in all matters, if you come to a problem sure that you already know the answer, you will leave with that conviction. No data nor argument will shake your faith. Only you can open your own mind.

Cosmology and Convention (continued)
Note: this is a guest post by David Merritt, following on from his paper on the philosophy of science as applied to aspects of modern cosmology.

Stacy kindly invited me to write a guest post, expanding on some of the arguments in my paper. I’ll start out by saying that I certainly don’t think of my paper as a final word on anything. I see it more like an opening argument — and I say this, because it’s my impression that the issues which it raises have not gotten nearly the attention they deserve from the philosophers of science. It is that community that I was hoping to reach, and that fact dictated much about the content and style of the paper. Of course, I’m delighted if astrophysicists find something interesting there too.

My paper is about epistemology, and in particular, whether the standard cosmological model respects Popper’s criterion of falsifiability — which he argued (quite convincingly) is a necessary condition for a theory to be considered scientific. Now, falsifying a theory requires testing it, and testing it means (i) using the theory to make a prediction, then (ii) checking to see if the prediction is correct. In the case of dark matter, the cleanest way I could think of to do this was via so-called  “direct detection”, since the rotation curve of the Milky Way makes a pretty definite prediction about the density of dark matter at the Sun’s location. (Although as I argued, even this is not enough, since the theory says nothing at all about the likelihood that the DM particles will interact with normal matter even if they are present in a detector.)

What about the large-scale evidence for dark matter — things like the power spectrum of density fluctuations, baryon acoustic oscillations, the CMB spectrum etc.? In the spirit of falsification, we can ask what the standard model predicts for these things; and the answer is: it does not make any definite prediction. The reason is that — to predict quantities like these — one needs first to specify the values of a set of additional parameters: things like the mean densities of dark and normal matter; the numbers that determine the spectrum of initial density fluctuations; etc. There are roughly half a dozen such “free parameters”. Cosmologists never even try to use data like these to falsify their theory; their goal is to make the theory work, and they do this by picking the parameter values that optimize the fit between theory and data.

Philosophers of science are quite familiar with this sort of thing, and they have a rule: “You can’t use the data twice.” You can’t use data to adjust the parameters of a theory, and then turn around and claim that those same data support the theory.  But this is exactly what cosmologists do when they argue that the existence of a “concordance model” implies that the standard cosmological model is correct. What “concordance” actually shows is that the standard model can be made consistent: i.e. that one does not require different values for the same parameter. Consistency is good, but by itself it is a very weak argument in favor of a theory’s correctness. Furthermore, as Stacy has emphasized, the supposed “concordance” vanishes when you look at the values of the same parameters as they are determined in other, independent ways. The apparent tension in the Hubble constant is just the latest example of this; another, long-standing example is the very different value for the mean baryon density implied by the observed lithium abundance. There are other examples. True “convergence” in the sense understood by the philosophers — confirmation of the value of a single parameter in multiple, independent experiments — is essentially lacking in cosmology.

Now, even though those half-dozen parameters give cosmologists a great deal of freedom to adjust their model and to fit the data, the freedom is not complete. This is because — when adjusting parameters — they fix certain things: what Imre Lakatos called the “hard core” of a research program: the assumptions that a theorist is absolutely unwilling to abandon, come hell or high water. In our case, the “hard core” includes Einstein’s theory of gravity, but it also includes a number of less-obvious things; for instance, the assumption that the dark matter responds to gravity in the same way as any collisionless fluid of normal matter would respond. (The latter assumption is not made in many alternative theories.) Because of the inflexibility of the “hard core”, there are going to be certain parameter values that are also more-or-less fixed by the data. When a cosmologist says “The third peak in the CMB requires dark matter”, what she is really saying is: “Assuming the fixed hard core, I find that any reasonable fit to the data requires the parameter defining the dark-matter density to be significantly greater than zero.” That is a much weaker statement than “Dark matter must exist”. Statements like “We know that dark matter exists” put me in mind of the 18th century chemists who said things like “Based on my combustion experiments, I conclude that phlogiston exists and that it has a negative mass”. We know now that the behavior the chemists were ascribing to the release of phlogiston was actually due to oxidation. But the “hard core” of their theory (“Combustibles contain an inflammable principle which they release upon burning”) forbade them from considering different models. It took Lavoisier’s arguments to finally convince them of the existence of oxygen.

The fact that the current cosmological model has a fixed “hard core” also implies that — in principle — it can be falsified. But, at the risk of being called a cynic, I have little doubt that if a new, falsifying observation should appear, even a very compelling one, the community will respond as it has so often in the past: via a conventionalist stratagem. Pavel Kroupa has a wonderful graphic, reproduced below, that shows just how often predictions of the standard cosmological model have been falsified — a couple of dozen times, according to latest count; and these are only the major instances. Historians and philosophers of science have documented that theories that evolve in this way often end up on the scrap heap. To the extent that my paper is of interest to the astronomical community, I hope that it gets people to thinking about whether the current cosmological model is headed in that direction.

Fig. 14 from Kroupa (2012) quantifying setbacks to the Standard Model of Cosmology (SMoC).

Hubble constant redux

There is a new article in Science on the expansion rate of the universe, very much along the lines of my recent post. It is a good read that I recommend. It includes some of the human elements that influence the science.

When I started this blog, I recalled my experience in the ’80s moving from a theory-infused institution to a more observationally and empirically oriented one. At that time, the theory-infused cosmologists assured us that Sandage had to be correct: H0 = 50. As a young student, I bought into this. Big time. I had no reason not to; I was very certain of the transmitted lore. The reasons to believe it then seemed every bit as convincing as the reasons to believe ΛCDM today. When I encountered people actually making the measurement, like Greg Bothun, they said “looks to be about 80.”

This caused me a lot of cognitive dissonance. This couldn’t be true. The universe would be too young (at most ∼12 Gyr) to contain the oldest stars (thought to be ∼18 Gyr at that time). Worse, there was no way to reconcile this with Inflation, which demanded Ωm = 1. The large deceleration of the expansion caused by high Ωm greatly exacerbated the age problem (only ∼8 Gyr accounting for deceleration). Reconciling the age problem with Ωm = 1 was hard enough without raising the Hubble constant.
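The arithmetic of that squeeze is quick to verify. The Hubble time is 1/H0 ≈ 978/H0 Gyr for H0 in km/s/Mpc, and an Ωm = 1 (Einstein-de Sitter) universe, having decelerated, is only 2/3 as old as a coasting one. A minimal sketch:

```python
# Age of the universe in two limiting cases discussed above:
# an empty (coasting) universe, age 1/H0, and an Einstein-de Sitter
# universe (Omega_m = 1), age (2/3)/H0.
def hubble_time_gyr(H0):
    """Hubble time 1/H0 in Gyr, for H0 in km/s/Mpc (1/H0 = 978/H0 Gyr)."""
    return 978.0 / H0

for H0 in (50, 80):
    t_coast = hubble_time_gyr(H0)      # empty, coasting universe
    t_eds = (2.0 / 3.0) * t_coast      # Omega_m = 1, decelerating
    print(f"H0={H0}: coasting {t_coast:.1f} Gyr, EdS {t_eds:.1f} Gyr")
```

With H0 = 80 the coasting age is about 12 Gyr and the Einstein-de Sitter age only about 8 Gyr, both uncomfortably short of the ∼18 Gyr then estimated for the oldest stars; H0 = 50 relieved the squeeze.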

Presented with this dissonant information, I did what most of us humans do: I ignored it. Some of my first work involved computing the luminosity function of quasars. With the huge distance scale of H0 = 50, I remember noticing how more distant quasars got progressively brighter. By a lot. Yes, they’re the most luminous things in the early universe. But they weren’t just outshining a galaxy’s worth of stars; they were outshining a galaxy of galaxies.

That was a clue that the metric I was assuming was very wrong. And indeed, since that time, every number of cosmological significance that I was assured in confident tones by Great Men that I Had to Believe has changed by far more than its formal uncertainty. In struggling with this, I’ve learned not to be so presumptuous in my beliefs. The universe is there for us to explore and discover. We inevitably err when we try to dictate how it Must Be.

The amplitude of the discrepancy in the Hubble constant is smaller now, but the same attitudes are playing out. Individual attitudes vary, of course, but there are many in the cosmological community who take the attitude that the Planck data give H0 = 67.8 so that is the right number. All other data are irrelevant; or at best flawed until brought into concordance with the right number.

It is Known, Khaleesi. 

Often these are the same people who assured us we had to believe Ωm = 1 and H0 = 50 back in the day. This continues the tradition of arrogance about how things must be. This attitude remains rampant in cosmology, and is subsumed by new generations of students just as it was by me. They’re very certain of the transmitted lore. I’ve even been trolled by some who seem particularly eager to repeat the mistakes of the past.

From hard experience, I would advocate a little humility. Yes, Virginia, there is a real tension in the Hubble constant. And yes, it remains quite possible that essential elements of our cosmology may prove to be wrong. I personally have no doubt about the empirical pillars of the Big Bang – cosmic expansion, Big Bang Nucleosynthesis, and the primordial nature of the Cosmic Microwave Background. But Dark Matter and Dark Energy may well turn out to be mere proxies for some deeper cosmic truth. If that is so, we will never recognize it if we proceed with the attitude that ΛCDM is Known, Khaleesi.

Neutrinos got mass!

In 1984, I heard Hans Bethe give a talk in which he suggested the dark matter might be neutrinos. This sounded outlandish – from what I had just been taught about the Standard Model, neutrinos were massless. Worse, I had been given the clear impression that it would screw everything up if they did have mass. This was the pervasive attitude, even though the solar neutrino problem was known at the time. This did not compute! So many of us were inclined to ignore it. But, I thought, in the unlikely event it turned out that neutrinos did have mass, surely that would be the answer to the dark matter problem.

Flash forward a few decades, and sure enough, neutrinos do have mass. Oscillations between flavors of neutrinos have been observed in both solar and atmospheric neutrinos. This implies non-zero mass eigenstates. We don’t yet know the absolute value of the neutrino mass, but the oscillations do constrain the separation between mass states (Δm²₂₁ = 7.53×10⁻⁵ eV² for solar neutrinos, and Δm²₃₁ = 2.44×10⁻³ eV² for atmospheric neutrinos).
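Since oscillations fix only the splittings, the smallest total mass consistent with them follows from taking the lightest eigenstate to be massless in a normal hierarchy. That ordering and the zero lightest mass are assumptions for the sake of the sketch, not measurements:

```python
import math

# Minimal neutrino masses consistent with the measured mass-squared
# splittings, assuming a normal hierarchy with the lightest state
# massless (an assumption, not a measurement).
dm2_21 = 7.53e-5  # eV^2, solar splitting
dm2_31 = 2.44e-3  # eV^2, atmospheric splitting

m1 = 0.0
m2 = math.sqrt(dm2_21)   # ~0.009 eV
m3 = math.sqrt(dm2_31)   # ~0.049 eV
total = m1 + m2 + m3
print(f"minimal sum of masses: {total:.3f} eV")  # ~0.058 eV
```

So the oscillation data alone guarantee only a few hundredths of an eV in total, far below what the missing mass problem requires.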

Though the absolute values of the neutrino mass eigenstates are not yet known, there are upper limits. These don’t allow enough mass to explain the cosmological missing mass problem. The relic density of neutrinos is

Ωνh² = ∑mν / (93.5 eV)

In order to make up the dark matter density (Ω ≈ 1/4), we need ∑mν ≈ 12 eV. The experimental upper limit on the electron neutrino mass is mν < 2 eV. There are three neutrino mass eigenstates, and the difference in mass between them is tiny, so ∑mν < 6 eV. Neutrinos could conceivably add up to more mass than baryons, but they cannot add up to be the dark matter.
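Plugging numbers into the relic density relation makes the shortfall explicit. Here h = 0.73 is adopted to match the local H0 discussed elsewhere in these posts; the text's ∑mν ≈ 12 eV follows:

```python
# Relic neutrino density: Omega_nu * h^2 = sum(m_nu) / (93.5 eV).
# Invert it for the total mass needed to supply the dark matter.
h = 0.73          # dimensionless Hubble parameter (H0 = 73 km/s/Mpc)
omega_dm = 0.25   # rough dark matter density parameter, as in the text

sum_m_required = omega_dm * h**2 * 93.5   # eV
print(f"sum(m_nu) needed for Omega = {omega_dm}: {sum_m_required:.1f} eV")

# Compare: the experimental bound is m_nu < 2 eV per state, and the
# three states are nearly degenerate, so sum(m_nu) < 6 eV at most.
sum_m_max = 3 * 2.0
print(f"experimental ceiling: {sum_m_max:.0f} eV")
```

The required ∼12 eV is roughly a factor of two beyond even the most generous experimental ceiling, which is why neutrinos cannot be the dark matter.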

In recent years, I have started to hear the assertion that we have already detected dark matter, with neutrinos given as the example. They are particles with mass that only interact with us through the weak nuclear force and gravity. In this respect, they are like WIMPs.

Here the equivalence ends. Neutrinos are Standard Model particles that have been known for decades. WIMPs are hypothetical particles that reside in a hypothetical supersymmetric sector beyond the Standard Model. Conflating the two to imply that WIMPs are just as natural as neutrinos is a false equivalency.

That said, massive neutrinos might be one of the few ways in which hierarchical cosmogony, as we currently understand it, is falsifiable. Whatever the dark matter is, we need it to be dynamically cold. This property is necessary for it to clump into dark matter halos that seed galaxy formation. Too much hot (relativistic) dark matter (neutrinos) suppresses structure formation. A nascent dark matter halo is nary a speed bump to a neutrino moving near the speed of light: if those fast neutrinos carry too much mass, they erase structure before it can form.

One of the great successes of ΛCDM is its explanation of structure formation: the growth of large scale structure from the small fluctuations in the density field at early times. This is usually quantified by the power spectrum – in the CMB at z > 1000 and from the spatial distribution of galaxies at z = 0. This all works well provided the dominant dark mass is dynamically cold, and there isn’t too much hot dark matter fighting it.

t16_galaxy_power_spectrum
The power spectrum from the CMB (low frequency/large scales) and the galaxy distribution (high frequency/”small” scales). Adapted from Whittle.

How much is too much? The power spectrum puts strong limits on the amount of hot dark matter that is tolerable. The upper limit is ∑mν < 0.12 eV. This is an order of magnitude stronger than direct experimental constraints.

Usually, it is assumed that the experimental limit will eventually come down to the structure formation limit. That does seem likely, but it is also conceivable that the neutrino mass has some intermediate value, say mν ≈ 1 eV. Such a result, were it to be obtained experimentally, would falsify the current CDM cosmogony.

Such a result seems unlikely, of course. Shooting for a narrow window such as the gap between the current cosmological and experimental limits is like drawing to an inside straight. It can happen, but it is unwise to bet the farm on it.

It should be noted that a circa 1 eV neutrino would have some desirable properties in an MONDian universe. MOND can form large scale structure, much like CDM, but it does so faster. This is good for clearing out the voids and getting structure in place early, but it tends to overproduce structure by z = 0. An admixture of neutrinos might help with that. A neutrino with an appreciable mass would also help with the residual mass discrepancy MOND suffers in clusters of galaxies.

If experiments measure a neutrino mass in excess of the cosmological limit, it would be powerful motivation to consider MOND-like theories as a driver of structure formation. If instead the neutrino does prove to be tiny, ΛCDM will have survived another test. That wouldn’t falsify MOND (or really have any bearing on it), but it would remove one potential “out” for the galaxy cluster problem.

Tiny though they be, neutrinos got mass! And it matters!