Neutrinos got mass!

In 1984, I heard Hans Bethe give a talk in which he suggested the dark matter might be neutrinos. This sounded outlandish – from what I had just been taught about the Standard Model, neutrinos were massless. Worse, I had been given the clear impression that it would screw everything up if they did have mass. This was the pervasive attitude, even though the solar neutrino problem was known at the time. This did not compute, so many of us were inclined to ignore it. But, I thought, in the unlikely event it turned out that neutrinos did have mass, surely that would be the answer to the dark matter problem.

Flash forward a few decades, and sure enough, neutrinos do have mass. Oscillations between flavors of neutrinos have been observed in both solar and atmospheric neutrinos. This implies non-zero mass eigenstates. We don’t yet know the absolute value of the neutrino mass, but the oscillations do constrain the separation between mass states (Δm²₂₁ = 7.53×10⁻⁵ eV² for solar neutrinos, and Δm²₃₁ = 2.44×10⁻³ eV² for atmospheric neutrinos).
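To see what the measured splittings imply, here is a minimal back-of-the-envelope sketch (my own, in Python), assuming a normal mass ordering with the lightest eigenstate near zero:

```python
import math

# Mass-squared splittings quoted above (eV^2)
dm2_21 = 7.53e-5   # solar
dm2_31 = 2.44e-3   # atmospheric

# Assume normal ordering with the lightest state m1 ~ 0 (an assumption!)
m1 = 0.0
m2 = math.sqrt(m1**2 + dm2_21)   # ~0.009 eV
m3 = math.sqrt(m1**2 + dm2_31)   # ~0.049 eV

print(f"m2 ~ {m2:.3f} eV, m3 ~ {m3:.3f} eV")
print(f"minimum possible mass sum: {m1 + m2 + m3:.3f} eV")   # ~0.06 eV
```

So the oscillations guarantee ∑mν ≳ 0.06 eV, but they say nothing about the absolute scale: all three eigenstates could be much heavier and nearly degenerate.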

Though the absolute values of the neutrino mass eigenstates are not yet known, there are upper limits. These don’t allow enough mass to explain the cosmological missing mass problem. The relic density of neutrinos is

Ων h² = ∑mν / (93.5 eV)

In order to make up the dark matter density (Ω ≈ 1/4), we need ∑mν ≈ 12 eV (taking h ≈ 0.7). The experimental upper limit on the electron neutrino mass is mν < 2 eV. There are three neutrino mass eigenstates, and the difference in mass between them is tiny, so ∑mν < 6 eV. Neutrinos could conceivably add up to more mass than baryons, but they cannot add up to be the dark matter.
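The arithmetic behind these numbers is easy to check (assuming h ≈ 0.7):

```python
h = 0.7           # Hubble parameter in units of 100 km/s/Mpc (assumed)
Omega = 0.25      # the dark matter density to be explained

# Invert the relic density relation Omega_nu h^2 = sum(m_nu)/(93.5 eV):
sum_mnu = 93.5 * Omega * h**2
print(f"sum(m_nu) needed for neutrino dark matter: {sum_mnu:.1f} eV")  # ~11.5 eV

# versus the experimental ceiling: three nearly degenerate states below 2 eV each
print(f"experimental ceiling: {3 * 2.0:.0f} eV")
```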

In recent years, I have started to hear the assertion that we have already detected dark matter, with neutrinos given as the example. They are particles with mass that only interact with us through the weak nuclear force and gravity. In this respect, they are like WIMPs.

Here the equivalence ends. Neutrinos are Standard Model particles that have been known for decades. WIMPs are hypothetical particles that reside in a hypothetical supersymmetric sector beyond the Standard Model. Conflating the two to imply that WIMPs are just as natural as neutrinos is a false equivalency.

That said, massive neutrinos might be one of the few ways in which hierarchical cosmogony, as we currently understand it, is falsifiable. Whatever the dark matter is, we need it to be dynamically cold. This property is necessary for it to clump into dark matter halos that seed galaxy formation. Too much hot (relativistic) dark matter (neutrinos) suppresses structure formation. A nascent dark matter halo is nary a speed bump to a neutrino moving near the speed of light: if those fast neutrinos carry too much mass, they erase structure before it can form.

One of the great successes of ΛCDM is its explanation of structure formation: the growth of large scale structure from the small fluctuations in the density field at early times. This is usually quantified by the power spectrum – in the CMB at z > 1000 and from the spatial distribution of galaxies at z = 0. This all works well provided the dominant dark mass is dynamically cold, and there isn’t too much hot dark matter fighting it.

The power spectrum from the CMB (low frequency/large scales) and the galaxy distribution (high frequency/“small” scales). Adapted from Whittle.

How much is too much? The power spectrum puts strong limits on the amount of hot dark matter that is tolerable. The upper limit is ∑mν < 0.12 eV. This is an order of magnitude stronger than direct experimental constraints.

Usually, it is assumed that the experimental limit will eventually come down to the structure formation limit. That does seem likely, but it is also conceivable that the neutrino mass has some intermediate value, say mν ≈ 1 eV. Such a result, were it to be obtained experimentally, would falsify the current CDM cosmogony.

Such a result seems unlikely, of course. Shooting for a narrow window such as the gap between the current cosmological and experimental limits is like drawing to an inside straight. It can happen, but it is unwise to bet the farm on it.

It should be noted that a circa 1 eV neutrino would have some desirable properties in a MONDian universe. MOND can form large scale structure, much like CDM, but it does so faster. This is good for clearing out the voids and getting structure in place early, but it tends to overproduce structure by z = 0. An admixture of neutrinos might help with that. A neutrino with an appreciable mass would also help with the residual mass discrepancy MOND suffers in clusters of galaxies.

If experiments measure a neutrino mass in excess of the cosmological limit, it would be powerful motivation to consider MOND-like theories as a driver of structure formation. If instead the neutrino does prove to be tiny, ΛCDM will have survived another test. That wouldn’t falsify MOND (or really have any bearing on it), but it would remove one potential “out” for the galaxy cluster problem.

Tiny though they be, neutrinos got mass! And it matters!

Emergent Gravity hits a pothole

We have in MOND a formula that has had repeated predictive successes. Many of these have been true a priori predictions, like the absolute nature of the baryonic Tully-Fisher relation, the large mass discrepancies evinced by low surface brightness galaxies, and the velocity dispersions of many individual dwarf spheroidal galaxies like Crater 2. I don’t see how these can be an accident. But what we lack is an underlying theoretical basis for the observed MONDian phenomenology: Why does this happen?

One apparently promising idea is the emergent gravity hypothesis of Erik Verlinde. In this picture, gravity is not a fundamental force so much as a consequence of microscopic entanglement. The effect manifests on scales comparable to the Hubble horizon, with a characteristic acceleration of order the speed of light times the Hubble expansion rate. This is close to the acceleration scale of MOND.

An early test of emergent gravity was provided by weak gravitational lensing. The theory does remarkably well at predicting the observed lensing signal with no free parameters. This is promising – perhaps we finally have a physical basis for MOND. Indeed, as Milgrom points out, the equivalent success was already known in MOND.

The weak lensing signal for galaxies stacked in several mass bins from Brouwer et al. (2016). The straight line predicted by emergent gravity (EG) is a good fit to the data with no free parameters to adjust.

Weak lensing occurs deep in the MOND regime, at very low accelerations far from the lensing galaxy. In that regard, the results of Brouwer et al. can be seen as an extension of the radial acceleration relation to much lower accelerations. In this limit, it is fair to treat galaxies as point masses – hence the similarity of the solid and dashed lines in the figure above.

For rotation curves, it is not fair to approximate galaxies as point masses. Rotation curves are observed in the midst of the stellar and gaseous mass distribution. One must take account of the detailed distribution of baryons to treat the problem properly. This is something MOND is very successful at.

Emergent gravity converges to the same limit as MOND in the point mass case, which holds for any mass distribution once you get far enough away. It is not identical for finite mass distributions. When one solves the equations of emergent gravity for a finite mass distribution, one gets a term that looks like MOND, which gives the success noted above. But one also gets an additional term that depends on the gradient of the mass distribution, dM/dr.

The additional term that emerges for extended mass distributions in emergent gravity leads to predictions that differ from those of MOND. This is good, in that it makes the theories distinguishable. This is bad, in that MOND already provides good fits to rotation curves. Additional terms are likely to mess that up.
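To make the difference concrete, here is a rough numerical sketch (mine, for illustration only). It assumes Verlinde’s spherically symmetric relation M_D²(r) = (cH₀ r²/6G) d(M_B r)/dr, which gives a total acceleration g = g_B + √(a₀ (g_B + G M_B′(r)/r)) with a₀ ≡ cH₀/6, and compares it to MOND with the “simple” interpolation function. The toy mass model (a sphere with an exponential density profile) is my choice, not one used in the papers:

```python
import numpy as np

G  = 4.301e-6   # kpc (km/s)^2 / Msun
a0 = 3700.0     # ~1.2e-10 m/s^2 = cH0/6, in (km/s)^2/kpc

Mtot, rd = 1e10, 2.0   # toy galaxy: baryonic mass (Msun) and scale length (kpc)

def M_enc(r):
    """Enclosed mass of a spherical exponential density profile."""
    x = r / rd
    return Mtot * (1 - np.exp(-x) * (1 + x + 0.5 * x**2))

def dM_dr(r):
    """Analytic derivative of M_enc, i.e. 4*pi*r^2*rho."""
    x = r / rd
    return Mtot * np.exp(-x) * x**2 / (2 * rd)

def g_newton(r):   # baryonic (Newtonian) acceleration
    return G * M_enc(r) / r**2

def g_mond(r):     # MOND with the "simple" interpolation function
    gN = g_newton(r)
    return gN * (0.5 + np.sqrt(0.25 + a0 / gN))

def g_emergent(r): # Newtonian + Verlinde's apparent dark gravity
    gB = g_newton(r)
    gD = np.sqrt(a0 * (gB + G * dM_dr(r) / r))
    return gB + gD

for r in [0.5, 1, 2, 4, 8, 16, 32]:
    print(f"r = {r:5.1f} kpc: g_EG/g_MOND = {g_emergent(r)/g_mond(r):.3f}")
```

The ratio exceeds unity wherever dM/dr is appreciable and only approaches one well outside the mass distribution – the extra “hump” at the peak of the rotation curve visible in the figure below.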

And so it does. Two independent studies recently came to this conclusion: one including myself (Lelli et al. 2017) and another by Hees et al. (2017). The studies differ in their approach. We show that the additional term in emergent gravity leads to the prediction of a larger mass discrepancy than in MOND, driving one to abnormally low stellar mass-to-light ratios and degrading the empirical radial acceleration relation. Hees et al. make detailed rotation curve fits, showing that the dM/dr term over-amplifies bumps & wiggles in the rotation curve. It has always been intriguing that MOND gets these right: this is a non-trivial success to reproduce.

Rotation curves predicted for various exponential disks (top row) by Newton (dotted line), MOND (red dashed line), and emergent gravity (solid line). Note that emergent gravity predicts more of a hump at the peak of the rotation curve, leading to the hook in the corresponding radial acceleration relation (lower panels). From Lelli et al. (2017).

The situation looks bad for emergent gravity. One caveat is that at present we only have solutions for emergent gravity in the case of a spherical cow. Conceivably a better treatment of the geometry would change the result, but it won’t eliminate the dM/dr term. So this seems unlikely to help with the fundamental problem: this term should not exist at all.

Perhaps emergent gravity is a clue to what ultimately is going on – a single step in the right direction. Or perhaps the similarity to MOND is misleading. For now, the search for a satisfactory explanation for the observed phenomenology continues.

Crater 2: the Bullet Cluster of LCDM

Recently I have been complaining about the low standards to which science has sunk. It has become normal to be surprised by an observation, express doubt about the data, blame the observers, slowly let it sink in, bicker and argue for a while, construct an unsatisfactory model that sort-of, kind-of explains the surprising data but not really, call it natural, then pretend like that’s what we expected all along. This has been going on for so long that younger scientists might be forgiven if they think this is how science is supposed to work. It is not.

At the root of the scientific method is hypothesis testing through prediction and subsequent observation. Ideally, the prediction comes before the experiment. The highest standard is a prediction made before the fact in ignorance of the ultimate result. This is incontrovertibly superior to post-hoc fits and hand-waving explanations: it is how we’re supposed to avoid playing favorites.

I predicted the velocity dispersion of Crater 2 in advance of the observation, for both ΛCDM and MOND. The prediction for MOND is reasonably straightforward. That for ΛCDM is fraught. There is no agreed method by which to do this, and it may be that the real prediction is that this sort of thing is not possible to predict.

The reason it is difficult to predict the velocity dispersions of specific, individual dwarf satellite galaxies in ΛCDM is that the stellar mass-halo mass relation must be strongly non-linear to reconcile the steep mass function of dark matter sub-halos with their small observed numbers. This is closely related to the M*-Mhalo relation found by abundance matching. The consequence is that the luminosity of dwarf satellites can change a lot for tiny changes in halo mass.

Fig. 11 from Tollerud et al. (2011, ApJ, 726, 108). The width of the bands illustrates the minimal scatter expected between dark halo and measurable properties. A dwarf of a given luminosity could reside in dark halos differing by two decades in mass, with a corresponding effect on the velocity dispersion.

Long story short, the nominal expectation for ΛCDM is a lot of scatter. Photometrically identical dwarfs can live in halos with very different velocity dispersions. The trend between mass, luminosity, and velocity dispersion is so weak that it might barely be perceptible. The photometric data should not be predictive of the velocity dispersion.

It is hard to get even a ballpark answer that doesn’t make reference to other measurements. Empirically, there is some correlation between size and velocity dispersion. This “predicts” σ = 17 km/s. That is not a true theoretical prediction; it is just the application of data to anticipate other data.

Abundance matching relations provide a highly uncertain estimate. The first time I tried to do this, I got unphysical answers (σ = 0.1 km/s, which is less than the stars alone would cause without dark matter – about 0.5 km/s). The application of abundance matching requires extrapolating fits made to data at high mass down to very low mass. Extrapolating the M*-Mhalo relation over many decades in mass is very sensitive to the low mass slope of the fitted relation, so it depends on which one you pick.
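To illustrate how sensitive this is, here is a toy calculation with made-up but representative numbers (this is not any published relation): treat the low-mass end as a power law M* ∝ Mhalo^α anchored at a hypothetical point where fits are still constrained by data, and invert it for a Crater 2-like stellar mass:

```python
# Hypothetical illustration of extrapolation sensitivity -- NOT a published relation.
M_star = 3e5          # rough Crater 2 stellar mass (Msun), assumed
M_star_anchor = 3e8   # hypothetical anchor point still constrained by data (Msun)
M_halo_anchor = 1e10

for alpha in [1.6, 2.0, 2.4, 2.8]:   # a plausible range of low-mass slopes
    # Invert M*/M*_anchor = (Mhalo/Mhalo_anchor)^alpha for the halo mass:
    M_halo = M_halo_anchor * (M_star / M_star_anchor) ** (1 / alpha)
    # The velocity scale goes roughly as Mhalo^(1/3) for NFW halos:
    print(f"alpha = {alpha}: M_halo ~ {M_halo:.1e} Msun,"
          f" velocity scale {(M_halo / 1e9) ** (1 / 3):.2f}x that of a 1e9 Msun halo")
```

A modest change in the fitted low-mass slope shifts the inferred halo mass by most of an order of magnitude, and the predicted velocity dispersion with it.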


Since my first pick did not work, let’s go with the value suggested to me by James Bullock: σ = 11 km/s. That is the mid-value (the blue lines in the figure above); the true value could easily scatter higher or lower. Very hard to predict with any precision. But given the luminosity and size of Crater 2, we expect numbers like 11 or 17 km/s.

The measured velocity dispersion is σ = 2.7 ± 0.3 km/s.

This is incredibly low. Shockingly so, considering the enormous size of the system (1 kpc half light radius). The NFW halos predicted by ΛCDM don’t do that.

To illustrate how far off this is, I have adapted this figure from Boylan-Kolchin et al. (2012).

Fig. 1 of MNRAS, 422, 1203 illustrating the “too big to fail” problem: observed dwarfs have lower velocity dispersions than sub-halos that must exist and should host similar or even more luminous dwarfs that apparently do not exist. I have had to extend the range of the original graph to lower velocities in order to include Crater 2.

Basically, NFW halos, including the sub-halos imagined to host dwarf satellite galaxies, have rotation curves that rise rapidly and stay high in proportion to the cube root of the halo mass. This property makes it very challenging to explain a low velocity at a large radius: exactly the properties observed in Crater 2.

Let’s not fail to appreciate how extremely wrong this is. The original version of the graph above stopped at 5 km/s. It didn’t extend to lower values because they were absurd. There was no reason to imagine that this would be possible. Indeed, the point of their paper was that the observed dwarf velocity dispersions were already too low. To get to lower velocity, you need an absurdly low mass sub-halo – around 10⁷ M☉. In contrast, the usual inference of masses for sub-halos containing dwarfs of similar luminosity is around 10⁹ M☉ to 10¹⁰ M☉. So the low observed velocity dispersion – especially at such a large radius – seems nigh on impossible.
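One can get a feel for how severe this is with a toy NFW calculation (my own sketch; the fixed concentration c = 10 and the crude σ ≈ V/√3 conversion from circular velocity to dispersion are assumptions):

```python
import numpy as np

G = 4.301e-6       # kpc (km/s)^2 / Msun
rho_crit = 140.0   # critical density in Msun/kpc^3 (roughly, for h ~ 0.7)
c = 10.0           # assumed NFW concentration

def v_circ(M200, r):
    """NFW circular velocity (km/s) at radius r (kpc) for halo mass M200 (Msun)."""
    R200 = (3 * M200 / (4 * np.pi * 200 * rho_crit)) ** (1 / 3)
    V200 = np.sqrt(G * M200 / R200)
    x = r / R200
    mu = lambda y: np.log(1 + y) - y / (1 + y)   # NFW enclosed-mass shape
    return V200 * np.sqrt(mu(c * x) / (x * mu(c)))

for M200 in [1e7, 1e9, 1e10]:
    V = v_circ(M200, 1.0)   # evaluate at the ~1 kpc half-light radius
    # sigma ~ V/sqrt(3) is a crude isotropic estimate of the l.o.s. dispersion
    print(f"M200 = {M200:.0e} Msun: V(1 kpc) ~ {V:4.1f} km/s,"
          f" sigma ~ {V / np.sqrt(3):4.1f} km/s")
```

Only a halo near 10⁷ M☉ gets down to the observed ~2.7 km/s at 1 kpc; halos in the 10⁹–10¹⁰ M☉ range normally inferred for dwarfs of this luminosity sit at σ ~ 8–14 km/s.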

More generally, there is no way in ΛCDM to predict the velocity dispersions of particular individual dwarfs. There is too much intrinsic scatter in the highly non-linear relation between luminosity and halo mass. Given the photometry, all we can say is “somewhere in this ballpark.” Making an object-specific prediction is impossible.

Except that it is possible. I did it. In advance.

The predicted velocity dispersion is σ = 2.1 +0.9/-0.6 km/s.

I’m an equal opportunity scientist. In addition to ΛCDM, I also considered MOND. The successful prediction is that of MOND. (The quoted uncertainty reflects the uncertainty in the stellar mass-to-light ratio.) The difference is that MOND makes a specific prediction for every individual object. And it comes true. Again.

MOND is a funny theory. The amplitude of the mass discrepancy it induces depends on how low the acceleration of a system is. If Crater 2 were off by itself in the middle of intergalactic space, MOND would predict it should have a velocity dispersion of about 4 km/s.

But Crater 2 is not isolated. It is close enough to the Milky Way that there is an additional, external acceleration imposed by the Milky Way. The net result is that the acceleration isn’t quite as low as it would be were Crater 2 all by its lonesome. Consequently, the predicted velocity dispersion is a measly 2 km/s. As observed.
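For the curious, here is roughly how the two regimes compare. This is a crude sketch with round numbers of my choosing (the mass, radius, and Milky Way rotation speed are assumptions) and a simple virial-style estimator, not the published calculation, so treat the exact output loosely:

```python
import math

G  = 4.301e-6   # kpc (km/s)^2 / Msun
a0 = 3700.0     # MOND acceleration scale in (km/s)^2/kpc (~1.2e-10 m/s^2)

M      = 3.2e5  # assumed stellar mass: L ~ 1.6e5 Lsun with M/L ~ 2
r_half = 1.4    # ~3D half-light radius in kpc (assumed)
D, V_MW = 120.0, 180.0   # distance from the Milky Way and its rough rotation speed

# Isolated deep-MOND estimator: sigma^4 = (4/81) G M a0
sigma_iso = (4 / 81 * G * M * a0) ** 0.25
print(f"isolated: sigma ~ {sigma_iso:.1f} km/s")        # ~4 km/s

# The external field from the Milky Way dominates the internal one, so the
# dwarf is quasi-Newtonian with gravity boosted by nu(g_ext/a0):
g_ext = V_MW**2 / D
nu = 0.5 + math.sqrt(0.25 + a0 / g_ext)                 # "simple" interpolation
g_int = nu * G * M / r_half**2                          # boosted internal gravity
sigma_efe = math.sqrt(g_int * r_half / 3)               # crude virial-style estimate
print(f"with EFE: sigma ~ {sigma_efe:.1f} km/s")        # ~1-2 km/s
```

The crude EFE estimate comes out between 1 and 2 km/s, well below the isolated ~4 km/s and in the neighborhood of the published 2.1 km/s; the remaining factor-of-two slop reflects the estimator constant, the assumed mass-to-light ratio, and the adopted Milky Way field.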

In MOND, this is called the External Field Effect (EFE). Theoretically, the EFE is rather disturbing, as it breaks the Strong Equivalence Principle. In particular, Local Position Invariance in gravitational experiments is violated: the velocity dispersion of a dwarf satellite depends on whether it is isolated from its host or not. Weak equivalence (the universality of free fall) and the Einstein Equivalence Principle (which excludes gravitational experiments) may still hold.

We identified several pairs of photometrically identical dwarfs around Andromeda. Some are subject to the EFE while others are not. We see the predicted effect of the EFE: isolated dwarfs have higher velocity dispersions than their twins afflicted by the EFE.

If it is just a matter of sub-halo mass, the current location of the dwarf should not matter. The velocity dispersion certainly should not depend on the bizarre MOND criterion for whether a dwarf is affected by the EFE or not. It isn’t a simple distance-dependency. It depends on the ratio of internal to external acceleration. A relatively dense dwarf might still behave as an isolated system close to its host, while a really diffuse one might be affected by the EFE even when very remote.

When Crater 2 was first discovered, I ground through the math and tweeted the prediction. I didn’t want to write a paper for just one object. However, I eventually did so because I realized that Crater 2 is important as an extreme example of a dwarf so diffuse that it is affected by the EFE despite being very remote (120 kpc from the Milky Way). This is not easy to reproduce any other way. Indeed, MOND with the EFE is the only way that I am aware of whereby it is possible to predict, in advance, the velocity dispersion of this particular dwarf.

If I put my ΛCDM hat back on, it gives me pause that any method can make this prediction. As discussed above, this shouldn’t be possible. There is too much intrinsic scatter in the halo mass-luminosity relation.

If we cook up an explanation for the radial acceleration relation, we still can’t make this prediction. The RAR fit we obtained empirically predicts 4 km/s. This is indistinguishable from MOND for isolated objects. But the RAR itself is just an empirical law – it provides no reason to expect deviations, nor how to predict them. MOND does both, does it right, and has done so before, repeatedly. In contrast, the acceleration of Crater 2 is below the minimum allowed in ΛCDM according to Navarro et al.

For these reasons I consider Crater 2 to be the bullet cluster of ΛCDM. Just as the bullet cluster seems like a straight-up contradiction to MOND, so too does Crater 2 for ΛCDM. It is something ΛCDM really can’t do. The difference is that you can just look at the bullet cluster. With Crater 2 you actually have to understand MOND as well as ΛCDM, and think it through.

So what can we do to save ΛCDM?

Whatever it takes, per usual.

One possibility is that Crater 2 may represent the “bright” tip of the extremely low surface brightness “stealth” fossils predicted by Bovill & Ricotti. Their predictions are encouraging for getting the size and surface brightness in the right ballpark. But I see no reason in this context to expect such a low velocity dispersion. They anticipate dispersions consistent with the ΛCDM discussion above, and correspondingly high mass-to-light ratios that are greater than observed for Crater 2 (M/L ≈ 10⁴ rather than ~50).

A plausible suggestion I heard was from James Bullock. While noting that reionization should preclude the existence of galaxies in halos below 5 km/s, as we need for Crater 2, he suggested that tidal stripping could reduce an initially larger sub-halo to this point. I am dubious about this, as my impression from the simulations of Peñarrubia was that the outer regions of the sub-halo were stripped first while leaving the inner regions (where the NFW cusp predicts high velocity dispersions) largely intact until near complete dissolution. In this context, it is important to bear in mind that the low velocity dispersion of Crater 2 is observed at large radii (1 kpc, not tens of pc). Still, I can imagine ways in which this might be made to work in this particular case, depending on its orbit. Tony Sohn has an HST program to measure the proper motion; this should constrain whether the object has ever passed close enough to the center of the Milky Way to have been tidally disrupted.

Joss Bland-Hawthorn pointed out to me that he made simulations suggesting a halo with a mass as low as 10⁷ M☉ could make stars before reionization and retain them. This contradicts much of the conventional wisdom outlined above because they find a much lower (and in my opinion, more realistic) efficiency for supernova feedback than assumed in most other simulations. If this is correct (as it may well be!) then it might explain Crater 2, but it would wreck all the feedback-based explanations given for all sorts of other things in ΛCDM, like the missing satellite problem and the cusp-core problem. We can’t have it both ways.

Without super-efficient supernova feedback, the Local Group would be filled with a million billion ultrafaint dwarf galaxies!

I’m sure people will come up with other clever ideas. These will inevitably be ad hoc suggestions cooked up in response to a previously inconceivable situation. This will ring hollow to me until we explain why MOND can predict anything right at all.

In the case of Crater 2, it isn’t just a matter of retrospectively explaining the radial acceleration relation. One also has to explain why exceptions to the RAR occur following the very specific, bizarre, and unique EFE formulation of MOND. If I could do that, I would have done so a long time ago.

No matter what we come up with, the best we can hope to do is a post facto explanation of something that MOND predicted correctly in advance. Can that be satisfactory?

Crater 2: prediction verified.

The arXiv brought an early Xmas gift in the form of a measurement of the velocity dispersion of Crater 2. Crater 2 is an extremely diffuse dwarf satellite of the Milky Way. Upon its discovery, I realized there was an opportunity to predict its velocity dispersion based on the reported photometry. The fact that it is very large (half light radius a bit over 1 kpc) and relatively far from the Milky Way (120 kpc) makes it a unique and critical case. I will expand on that in another post, or you could read the paper. But for now:

The predicted velocity dispersion is σ = 2.1 +0.9/-0.6 km/s.

This prediction appeared in press in advance of the measurement (ApJ, 832, L8). The uncertainty reflects the uncertainty in the mass-to-light ratio.

The measured velocity dispersion is σ = 2.7 ± 0.3 km/s

as reported by Caldwell et al.

Isn’t that how science is supposed to work? Make the prediction first? Not just scramble to explain it after the fact?

Pulp Science


Vincent: Want to talk about MOND?

Jules: No man, I don’t consider MOND.

Vincent: Are you biased?

Jules: Nah, I ain’t biased, I just don’t dig MOND, that’s all.

Vincent: Why not?

Jules: MOND is an ugly theory. I don’t consider ugly theories.

Vincent: MOND makes predictions that come true. Fits galaxy data gooood.

Jules: Hey, MOND may fit every galaxy in the universe, but I’d never know ’cause I wouldn’t consider the ugly theory. MOND has no generally covariant extension. That’s an ugly theory. I ain’t considering nothin’ that ain’t got a proper cosmology.

Vincent: How about ΛCDM? ΛCDM has lots of small scale problems.

Jules: I don’t care about small scale problems.

Vincent: Yeah, but do you consider ΛCDM to be an ugly theory?

Jules: I wouldn’t go so far as to call ΛCDM ugly, but it’s definitely fine-tuned. But, ΛCDM’s got the CMB. The CMB goes a long way.

Vincent: Ah, so by that rationale, if a theory of modified dynamics fit the CMB, it would cease to be an ugly theory. Is that true?

Jules: Well, we’d have to be talkin’ about one charming eff’n theory of modified dynamics. I mean, it’d have to be ten times more charmin’ than MOND, you know what I’m sayin’?

xkcd’d

So the always humorous, unabashedly nerdy xkcd recently published this comic:

[xkcd comic: “Astrophysics”]

This hits close to home for me, in many ways.

First, this is an everyday experience for me. Hardly a day goes by that I don’t get an email, or worse, a phone call, from some wanna-be who has the next theory of everything. I try to be polite. I even read some of what I get sent. Mostly this is a waste of my time. News flash: at most, only one of you can be right. If the next Einstein is buried somewhere amongst these unsolicited, unrefereed, would-be theories, I wouldn’t know, because I do not have the time to sort through them all.

Second, it is true – it is a logical possibility that what we call dark matter is really just a proxy for a change in the law of gravity on galactic scales. It is also true that attempts to change the law of gravity on large scales do not work to explain the dark matter problem. (Attempts to do this to address the dark energy problem are a separate matter.)

Third, it is a logical fallacy. The implication of the structure of the statement is that the answer has to be dark matter. One could just as accurately turn the statement on its head and say “Yes, everybody has already had the idea, maybe it isn’t modified gravity – there’s just a lot of invisible mass on large scales! It sounds good but it doesn’t really fit the data.”

The trick is what data we’re talking about.

I have reviewed this problem many times (e.g., McGaugh & de Blok 1998, Sanders & McGaugh 2002, McGaugh 2006, Famaey & McGaugh 2012, McGaugh 2015). Some of the data favor dark matter, some favor modified gravity. Which is preferable depends on how we weigh the different lines of evidence. If you think the situation is clear cut, you are not well informed of all the facts.* Most of the data that we cite to require dark matter are rather ambiguous and can usually be just as well interpreted in terms of modified gravity. The data that aren’t ambiguous point in opposite directions – see the review papers.

Note that I was careful above to say “galactic scales.” The scale that turns out to matter is not a size scale but an acceleration scale. Galaxies aren’t just big. The centripetal accelerations that hold stars in their orbits are incredibly low: about one part in 10¹¹ of what we feel on the surface of the Earth. The only data that test gravity on this acceleration scale are the data that evince the missing mass problem. We only infer the need for dark matter at these very low accelerations. So while it is not possible to construct an empirically successful theory that modifies gravity on some large length scale, it remains a possibility that a modification can be made on an acceleration scale.
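That “one part in 10¹¹” is worth checking once. Taking the Sun’s orbit in the Milky Way as a representative galactic acceleration:

```python
# Centripetal acceleration of the Sun's galactic orbit vs gravity at Earth's surface
V = 220e3                 # the local circular speed, 220 km/s, in m/s
R = 8 * 3.086e19          # 8 kpc in meters
a_galactic = V**2 / R     # ~2e-10 m/s^2
print(f"a_galactic ~ {a_galactic:.1e} m/s^2")
print(f"ratio to g = 9.8 m/s^2: {a_galactic / 9.8:.1e}")  # ~2e-11: one part in ~10^11
```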

That the mass discrepancy problem occurs on an acceleration scale and not at some length scale has been known for many years. Failing to make the distinction between a length scale and an acceleration scale is fine for a comic strip. It is not OK for scientists working in the field. And yet I routinely encounter reasonable, intelligent scientists who are experts in some aspect of the dark matter problem but are unaware of this essential fact.

To end with another comic, the entire field is easily mocked:

[Bloom County comic strip on dark matter]

The astute scientific reader will recognize that Mr. Breathed is conflating dark matter with dark energy. Before getting too dismissive, consider how you would go about explaining to him that our cosmic paradigm requires not just invisible mass to provide extra gravity, but also dark energy to act like antigravity. Do you really think that doubling down on ad hoc hypotheses makes for a strong case?

*Or worse, you may fall prey to cognitive dissonance and confirmation bias.

La Fin de Quoi?

Last time, I addressed some of the problems posed by the radial acceleration relation for galaxy formation theory in the LCDM cosmogony. Predictably, some have been quick to assert there is no problem at all. The first such claim is by Keller & Wadsley in a preprint titled “La Fin du MOND: LCDM is Fully Consistent with SPARC Acceleration Data.”

There are good things about this paper, bad things, and the potential for great ugliness.


The good:

This is exactly the reaction that I had hoped to see in response to the radial acceleration relation (RAR): people going to their existing simulations and checking what answer they got. The answer looks promising. The same relation is apparent in the simulations as in the data. That’s good.

These simulations already existed. They haven’t been tuned to match this particular observation. That’s good. The cynic might note that the last 15+ years of galaxy formation simulations have been driven by the need to add feedback to match data, including the shapes of rotation curves. Nevertheless, I see no guarantee that the RAR will fall out of this process.

The scatter in the simulations is 0.05 dex. The scatter in the data not explained by random errors is 0.06 dex. This agreement is good. I think the source of the scatter needs to be explored further (see below), but it is at least in the right ballpark, which is by no means guaranteed.

The authors make a genuine prediction for how the RAR should evolve with redshift. That isn’t just good; it is bold and laudable.

The bad:

There are only 18 simulated galaxies to compare to 153 real ones. I appreciate the difficulty in generating these simulations, but we really need a bigger sample. The large number of sampled points (1800) matters less, since simulators can sample their models as finely as their resolution allows. I also wonder if the lowest acceleration points extend beyond the range sampled by comparable real galaxies. Typically the data peter out around an HI surface density of 1 M☉/pc².

The comparison they make to Fig. 3 of arxiv:1609.05917 is great. I would like to see something like Fig. 1 and 2 from that paper as well. What range of galaxy properties do the models span? What do individual mass models look like?

rar_fig1and2-001
Fig. 1 from McGaugh, Lelli, & Schombert (2016) showing the range of luminosity and surface brightness covered by the SPARC data. Galaxies range over a factor of 50,000 in luminosity. The shaded region shows the range explored by the simulations discussed by Keller & Wadsley, which cover a factor of 15. Note that this is a logarithmic scale. On a linear scale, the simulations cover 0.03% of the range covered by the data along the x-axis. The range covered along the y-axis was not specified.

My biggest concern is that there is a limited dynamic range in the simulations, which span only a factor of 15 in disk mass: from 1.7×10¹⁰ to 2.7×10¹¹ M☉. For comparison, the data span 10⁷ to 5×10¹¹ L☉ in [3.6] luminosity, a factor of 50,000. The simulations only sample the top 0.03% of this range.

Basically, the simulated galaxies go from a little less massive than the Milky Way up to a bit more massive than Andromeda. Comparing this range to the RAR and declaring the problem solved is like fitting the Milky Way and Andromeda and declaring all problems in the Local Group solved without looking at any of the dwarfs. It is at lower mass scales and for lower surface brightness galaxies that problems become severe. Consequently, the most the authors can claim is a promising start on understanding a tiny fraction of bright galaxies, not a complete explanation of the RAR.

Indeed, while the authors quantify the mass range over which their simulated galaxies extend, they make no mention of either size or surface brightness. Are these comparable to real galaxies of similar mass? Too narrow a range in size at fixed mass, as seems likely in a small sample, may act to artificially suppress the scatter. Put another way: if the simulated galaxies only cover a tiny region of Fig. 1 above, it is hardly surprising if they exhibit little scatter.

The apparent match between the simulated and observed scatter seems good. But the “left over” observational scatter of 0.06 dex is the same as what we expect from scatter in the mass-to-light ratio. That is irreducible. There has to be some variation in stellar populations, and it is much easier to imagine this number getting bigger than being much smaller.

In the simulations, the stellar mass is presumably known perfectly, so I expect the scatter has a different source. Presumably there is scatter from halo to halo, as seen in other simulations. That’s natural in LCDM, but there isn’t any room for it if we also have to accommodate scatter from the mass-to-light ratio. The apparent equality of observed and simulated scatter is meaningless if they represent scatter in different quantities (see the quadrature sketch after this list).

I have trouble believing that the RAR follows simply from dissipative collapse without feedback. I’ve worked on this before, so I’m pretty sure it does not work this way. It is true that a single model does something like this as a result of dissipative collapse. It is not true that an ensemble of such models is guaranteed to fall on the same relation.

There are many examples of galaxies with the same mass but very different scale lengths. In the absence of feedback, shorter scale lengths lead to more compression of the dark matter halo. One winds up with more dark matter where there are more baryons. This is the opposite of what we see in the data.

This makes me suspect the dynamic range in the simulations is a problem. Not only do they cover little range in mass compared to the data, but this particular conclusion may only be reached if there is virtually no dynamic range in size at a given mass. That is hardly surprising given the small sample size.
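The accounting problem is explicit if the two scatter sources are independent and add in quadrature (using the numbers quoted above):

```python
import math

scatter_ml   = 0.06   # dex: irreducible scatter expected from stellar M/L (quoted above)
scatter_halo = 0.05   # dex: scatter in the simulations, presumably halo-to-halo

# Independent scatter sources add in quadrature:
total = math.hypot(scatter_ml, scatter_halo)
print(f"expected total: {total:.3f} dex vs. 0.06 dex observed beyond random errors")
```

The quadrature sum, 0.078 dex, exceeds the 0.06 dex actually observed, so both sources cannot be present at full strength.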

The ugly:

The title.

This paper has nothing to do with MOND, nor says anything about it. Why is it in the title?

At best, the authors have shown that, over a rather limited dynamic range, simulations in LCDM might reproduce post facto what MOND predicted a priori. If so, LCDM survives this test (as far as it goes). But in no respect can this be considered a problem for MOND, which predicted the phenomenon over 30 years ago. This is a classic problem in the philosophy of science: should we put more weight on the a priori prediction, or on the capacity of a more flexible theory to accommodate the same observation later on?

The title is revealing of a deep-rooted bias. It tarnishes what might be an important result and does a disservice to the objectivity we’re supposed to value in science.

DO OTHER SIMULATIONS AGREE?

I am eager to see whether other simulations agree with these results. Not all simulators implement feedback in the same way, nor get the same results. The most dangerous aspect of this paper is that it may give people an excuse to think the problem is solved so they never have to think about it again. The RAR is a test that needs to be applied every time to each and every batch of simulations. If they don’t pass this test, they’re wrong. Unfortunately, there is precedent in the galaxy formation community to take reassurances such as this for granted, and not to bother to perform the test.

THE RAR TEST MUST BE PERFORMED FOR ALL SIMULATIONS. ALWAYS.
