One experience I’ve frequently had in Astronomy is that there is no result so obvious that someone won’t claim the exact opposite. Indeed, the more obvious the result, the louder the claim to contradict it.
There is a very obvious acceleration scale in galaxies. It can be seen in several ways. Here I describe a nice way that is completely independent of any statistics or model fitting: no need to argue over how to set priors.
Simple dimensional analysis shows that a galaxy with a flat rotation curve has a characteristic acceleration
g† = 0.8 Vf⁴/(G Mb)
where Vf is the flat rotation speed, Mb is the baryonic mass, and G is Newton’s constant. The factor 0.8 arises from the disk geometry of rotating galaxies, which are not spherical cows. This is first year grad school material: see Binney & Tremaine. I include it here merely to place the characteristic acceleration g† on the same scale as Milgrom’s acceleration constant a0.
These are all known numbers or measurable quantities. There are no free parameters: nothing to fiddle; nothing to fit. The only slightly tricky quantity is the baryonic mass, which is the sum of stars and gas. For the stars, we measure the light but need the mass, so we must adopt a mass-to-light ratio, M*/L. Here I adopt the simple model used to construct the radial acceleration relation: a constant 0.5 M⊙/L⊙ at 3.6 microns for galaxy disks, and 0.7 M⊙/L⊙ for bulges. This is the best present choice from stellar population models; the basic story does not change with plausible variations.
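For a single galaxy, the computation is essentially a one-liner. Here is a minimal sketch; the galaxy numbers are illustrative, not an actual SPARC entry:

```python
import math

G    = 6.674e-11   # Newton's constant, m^3 kg^-1 s^-2
Msun = 1.989e30    # solar mass, kg

# Illustrative inputs (made up for the example, not real SPARC data):
Vf = 150e3               # flat rotation speed, m/s
Mb = 2.5e10 * Msun       # baryonic mass: stars (via M*/L) plus gas, kg

# Characteristic acceleration, with the 0.8 disk-geometry factor
g_dagger = 0.8 * Vf**4 / (G * Mb)   # m/s^2

print(g_dagger)   # ~1.2e-10 m/s^2, on the scale of Milgrom's a0
```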
This is all it takes to compute the characteristic acceleration of galaxies. Here is the resulting histogram for SPARC galaxies:
Do you see the acceleration scale? It’s right there in the data.
I first employed this method in 2011, where I found <g†> = 1.24 ± 0.14 Å s⁻² for a sample of gas-rich galaxies that predates and is largely independent of the SPARC data. This is consistent with the SPARC result <g†> = 1.20 ± 0.02 Å s⁻². This consistency provides some reassurance that the mass-to-light scale is close to correct, since the gas-rich galaxies are not sensitive to the choice of M*/L. Indeed, the value of Milgrom’s constant has not changed meaningfully since Begeman, Broeils, & Sanders (1991).
The width of the acceleration histogram is dominated by measurement uncertainties and scatter in M*/L. We have assumed that M*/L is constant here, but this cannot be exactly true. It is a good approximation in the near-infrared, but there must be some variation from galaxy to galaxy, as each galaxy has its own unique star formation history. Intrinsic scatter in M*/L due to population differences broadens the distribution, so the intrinsic distribution of characteristic accelerations must be narrower than what is observed.
I have computed the scatter budget many times. It always comes up the same: known uncertainties and scatter in M*/L gobble up the entire budget. There is very little room left for intrinsic variation in <g†>. The upper limit is < 0.06 dex, an absurdly tiny number by the standards of extragalactic astronomy. The data are consistent with negligible intrinsic scatter, i.e., a universal acceleration scale. Apparently a fundamental acceleration scale is present in galaxies.
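The budget arithmetic is simple subtraction in quadrature: the intrinsic scatter is what remains of the observed scatter after the known error sources are removed. The numbers below are illustrative assumptions, not the published figures:

```python
import math

# Illustrative scatter budget, in dex (assumed values for the sketch):
sigma_obs = 0.13   # observed scatter in log10(g-dagger)
sigma_err = 0.12   # expected from measurement errors + M*/L scatter

# Intrinsic scatter is the quadrature difference
sigma_int = math.sqrt(sigma_obs**2 - sigma_err**2)

print(round(sigma_int, 3))   # ~0.05 dex: the known sources eat the budget
```

When the known sources account for nearly all of the observed width, the quadrature difference is tiny, which is how an upper limit as small as 0.06 dex arises.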
The radial acceleration relation connects what we see in visible mass with what we get in galaxy dynamics. This is true in a statistical sense, with remarkably little scatter. The SPARC data are consistent with a single, universal force law in galaxies. One that appears to be sourced by the baryons alone.
This was not expected with dark matter. Indeed, it would be hard to imagine a less natural result. We can only salvage the dark matter picture by tweaking it to make it mimic its chief rival. This is not a healthy situation for a theory.
On the other hand, if these results really do indicate the action of a single universal force law, then it should be possible to fit each individual galaxy. This has been done many times before, with surprisingly positive results. Does it work for the entirety of SPARC?
For the impatient, the answer is yes. Graduate student Pengfei Li has addressed this issue in a paper in press at A&A. There are some inevitable goofballs; this is astronomy after all. But by and large, it works much better than I expected – the goof rate is only about 10%, and the worst goofs are for the worst data.
Fig. 1 from the paper gives the example of NGC 2841. This case has been historically problematic for MOND, but a good fit falls out of the Bayesian MCMC procedure employed. We marginalize over the nuisance parameters (distance and inclination) in addition to the stellar mass-to-light ratio of disk and bulge. These come out a tad high in this case, but everything is within the uncertainties. A long standing historical problem is easily solved by application of Bayesian statistics.
Another example is provided by the low surface brightness (LSB) dwarf galaxy IC 2574. Note that like all LSB galaxies, it lies at the low acceleration end of the RAR. This is what attracted my attention to the problem a long time ago: the mass discrepancy is large everywhere, so conventionally dark matter dominates. And yet, the luminous matter tells you everything you need to know to predict the rotation curve. This makes no physical sense whatsoever: it is as if the baryonic tail wags the dark matter dog.
In this case, the mass-to-light ratio of the stars comes out a bit low. LSB galaxies like IC 2574 are gas rich; the stellar mass is pretty much an afterthought to the fitting process. That’s good: there is very little freedom; the rotation curve has to follow almost directly from the observed gas distribution. If it doesn’t, there’s nothing to be done to fix it. But it is also bad: since the stars contribute little to the total mass budget, their mass-to-light ratio is not well constrained by the fit – changing it a lot makes little overall difference. This renders the formal uncertainty on the mass-to-light ratio highly dubious. The quoted number is correct for the data as presented, but it does not reflect the inevitable systematic errors that afflict astronomical observations in a variety of subtle ways. In this case, a small change in the innermost velocity measurements (as happens in the THINGS data) could change the mass-to-light ratio by a huge factor (and well outside the stated error) without doing squat to the overall fit.
We can address statistically how [un]reasonable the required fit parameters are. Short answer: they’re pretty darn reasonable. Here is the distribution of 3.6 micron band mass-to-light ratios.
From a stellar population perspective, we expect roughly constant mass-to-light ratios in the near-infrared, with some scatter. The fits to the rotation curves give just that. There is no guarantee that this should work out. It could be a meaningless fit parameter with no connection to stellar astrophysics. Instead, it reproduces the normalization, color dependence, and scatter expected from completely independent stellar population models.
The stellar mass-to-light ratio is practically inaccessible in the context of dark matter fits to rotation curves, as it is horribly degenerate with the parameters of the dark matter halo. That MOND returns reasonable mass-to-light ratios is one of those important details that keeps me wondering. It seems like there must be something to it.
Unsurprisingly, once we fit the mass-to-light ratio and the nuisance parameters, the scatter in the RAR itself practically vanishes. It does not entirely go away, as we fit only one mass-to-light ratio per galaxy (two in the handful of cases with a bulge). The scatter in the individual velocity measurements has been minimized, but some remains. The amount that remains is tiny (0.06 dex) and consistent with what we’d expect from measurement errors and mild asymmetries (non-circular motions).
For those unfamiliar with extragalactic astronomy, it is common for “correlations” to be weak and have enormous intrinsic scatter. Early versions of the Tully-Fisher relation were considered spooky-tight with a mere 0.4 mag. of scatter. In the RAR we have a relation as near to perfect as we’re likely to get. The data are consistent with a single, universal force law – at least in the radial direction in rotating galaxies.
That’s a strong statement. It is hard to understand in the context of dark matter. If you think you do, you are not thinking clearly.
So how strong is this statement? Very. We tried fits allowing additional freedom; none is necessary. One can of course introduce more parameters, but the bare minimum – the mass-to-light ratio, plus the nuisance parameters of distance and inclination – entirely suffices to describe the data. Allowing more freedom does not meaningfully improve the fits.
For example, I have often seen it asserted that MOND fits require variation in the acceleration constant of the theory. If this were true, I would have zero interest in the theory. So we checked.
Here we learn something important about the role of priors in Bayesian fits. If we allow the critical acceleration g† to vary from galaxy to galaxy with a flat prior, it does indeed do so: it flops around all over the place. Aha! So g† is not constant! MOND is falsified!
Well, no. Flat priors are often problematic, as they have no physical motivation. By allowing for a wide variation in g†, one is inviting covariance with other parameters. As g† goes wild, so too does the mass-to-light ratio. This wrecks the stellar mass Tully-Fisher relation by introducing a lot of unnecessary variation in the mass-to-light ratio: luminosity correlates nicely with rotation speed, but stellar mass picks up a lot of extraneous scatter. Worse, all this variation in both g† and the mass-to-light ratio does very little to improve the fits. It does a tiny bit – χ² gets infinitesimally better, so the fitting program takes it. But the improvement is not statistically meaningful.
In contrast, with a Gaussian prior, we get essentially the same fits, but with practically zero variation in g†. The reduced χ² actually gets a bit worse thanks to the extra, unnecessary, degree of freedom. This demonstrates that for these data, g† is consistent with a single, universal value. For whatever reason it may occur physically, this number is in the data.
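As a sketch of the two priors under discussion, the log-priors might look like the following. The specific bounds and width are illustrative choices, not the values used in the paper:

```python
import numpy as np

A0 = 1.2e-10   # Milgrom's constant, m/s^2

def log_prior_flat(g, lo=1e-11, hi=1e-9):
    # Flat prior: every value in a wide range is equally probable a priori.
    # This invites covariance between g-dagger and the mass-to-light ratio.
    return 0.0 if lo < g < hi else -np.inf

def log_prior_gauss(g, mu=A0, width_dex=0.1):
    # Gaussian prior in log10(g), centered on a0: physically motivated,
    # penalizing large excursions without forbidding them outright.
    return -0.5 * ((np.log10(g) - np.log10(mu)) / width_dex) ** 2
```

In an MCMC fit, the log-prior is simply added to the log-likelihood; the flat prior contributes nothing inside its range, which is exactly why it lets g† flop around to chase infinitesimal χ² gains.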
We have made the SPARC data public, so anyone who wants to reproduce these results may easily do so. Just mind your priors, and don’t take every individual error bar too seriously. There is a long tail to high χ² that persists for any type of model. If you get a bad fit with the RAR, you will almost certainly get a bad fit with your favorite dark matter halo model as well. This is astronomy, fergodssake.
A recently discovered dwarf galaxy designated NGC1052-DF2 has been in the news lately. Apparently a satellite of the giant elliptical NGC 1052, DF2 (as I’ll call it from here on out) is remarkable for having a surprisingly low velocity dispersion for a galaxy of its type. These results were reported in Nature last week by van Dokkum et al., and have caused a bit of a stir.
It is common for giant galaxies to have some dwarf satellite galaxies. As can be seen from the image published by van Dokkum et al., there are a number of galaxies in the neighborhood of NGC 1052. Whether these are associated physically into a group of galaxies or are chance projections on the sky depends on the distance to each galaxy.
NGC 1052 is listed by the NASA Extragalactic Database (NED) as having a recession velocity of 1510 km/s and a distance of 20.6 Mpc. The next nearest big beastie is NGC 1042, at 1371 km/s. The difference of 139 km/s is not much different from 115 km/s, the velocity at which Andromeda approaches the Milky Way, so one could imagine that this is a group similar to the Local Group. Except that NED says the distance to NGC 1042 is 7.8 Mpc, so apparently it is a foreground object seen in projection.
Van Dokkum et al. assume DF2 and NGC 1052 are both about 20 Mpc distant. They offer two independent estimates of the distance, one consistent with the distance to NGC 1052 and the other more consistent with the distance to NGC 1042. Rather than wring our hands over this, I will trust their judgement and simply note, as they do, that the nearer distance would change many of their conclusions. The redshift is 1803 km/s, larger than either of the giants. It could still be a satellite of NGC 1052, as ~300 km/s is not unreasonable for an orbital velocity.
So why the big fuss? Unlike most galaxies in the universe, DF2 appears not to require dark matter. This is inferred from the measured velocity dispersion of ten globular clusters, which is 8.4 km/s. That’s fast to you and me, but rather sluggish on the scale of galaxies. Spread over a few kiloparsecs, that adds up to a dynamical mass about equal to what we expect for the stars, leaving little room for the otherwise ubiquitous dark matter.
This is important. If the universe is composed of dark matter, it should on occasion be possible to segregate the dark from the light. Tidal interactions between galaxies can in principle do this, so a galaxy devoid of dark matter would be good evidence that this happened. It would also be evidence against a modified gravity interpretation of the missing mass problem, because the force law is always on: you can’t strip it from the luminous matter the way you can dark matter. So ironically, the occasional galaxy lacking dark matter would constitute evidence that dark matter does indeed exist!
DF2 appears to be such a case. But how weird is it? Morphologically, it resembles the dwarf spheroidal satellite galaxies of the Local Group. I have a handy compilation of those (from Lelli et al.), so we can compute the mass-to-light ratio for all of these beasties in the same fashion, shown in the figure below. It is customary to refer quantities to the radius that contains half of the total light, which is 2.2 kpc for DF2.
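To put a rough number on the dynamical mass, one can use the Wolf et al. (2010) estimator, M(< r½) ≈ 930 σ² r½ in solar masses (σ in km/s, r½ in pc). This is a stand-in for the sketch; the published analysis makes its own mass estimate:

```python
# Crude dynamical mass via the Wolf et al. (2010) estimator:
# M_half ~ 930 * sigma^2 * r_half, with sigma in km/s and r_half in pc,
# giving mass in solar masses. Illustrative stand-in only.
sigma  = 8.4      # km/s, velocity dispersion of DF2's globular clusters
r_half = 2200.0   # pc, half-light radius of DF2 (2.2 kpc)

M_half = 930.0 * sigma**2 * r_half   # mass within r_half, in M_sun
print(f"{M_half:.2e}")               # ~1.4e8 M_sun, comparable to the stars
```

A few times 10⁸ M☉ in stars would account for essentially all of this, which is the sense in which DF2 leaves little room for dark matter.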
Perhaps the most obvious respect in which DF2 is a bit unusual relative to the dwarfs of the Local Group is that it is big and bright. Most nearby dwarfs have half light radii well below 1 kpc. After DF2, the next most luminous dwarf is Fornax, which is a factor of 5 lower in luminosity.
DF2 is called an ultradiffuse galaxy (UDG), which is apparently newspeak for low surface brightness (LSB) galaxy. I’ve been working on LSB galaxies my entire career. While DF2 is indeed low surface brightness – the stars are spread thin – I wouldn’t call it ultra diffuse. It is actually one of the higher surface brightness objects of this type. Crater 2 and And XIX (the leftmost points in the right panel) are ultradiffuse.
Astronomers love vague terminology, and as a result often reinvent terms that already exist. Dwarf, LSB, UDG, have all been used interchangeably and with considerable slop. I was sufficiently put out by this that I tried to define some categories in the mid-90s. This didn’t catch on, but by my definition, DF2 is VLSB – very LSB, but only by a little – it is much closer to regular LSB than to extremely LSB (ELSB). Crater 2 and And XIX, now they’re ELSB, being more diffuse than DF2 by 2 orders of magnitude.
Whatever you call it, DF2 is low surface brightness, and LSB galaxies are always dark matter dominated. Always, at least among disk galaxies: here is the analogous figure for galaxies that rotate:
Pressure supported dwarfs generally evince large mass discrepancies as well. So in this regard, DF2 is indeed very unusual. So what gives?
Perhaps DF2 formed that way, without dark matter. This is anathema to everything we know about galaxy formation in ΛCDM cosmology. Dark halos have to form first, with baryons following.
Perhaps DF2 suffered one or more tidal interactions with NGC 1052. Sub-halos in simulations are often seen to be on highly radial orbits; perhaps DF2 has had its dark matter halo stripped away by repeated close passages. Since the stars reside deep in the center of the subhalo, they’re the last thing to be stripped away. So perhaps we’ve caught this one at that special time when the dark matter has been removed but the stars still remain.
This is improbable, but ought to happen once in a while. The bigger problem I see is that one cannot simply remove the dark matter halo like yanking a tablecloth and leaving the plates. The stars must respond to the change in the gravitational potential; they too must diffuse away. That might be a good way to make the galaxy diffuse, ultimately perhaps even ultradiffuse, but the observed motions are then not representative of an equilibrium situation. This is critical to the mass estimate, which must perforce assume an equilibrium in which the gravitational potential well of the galaxy is balanced against the kinetic motion of its contents. Yank away the dark matter halo, and the assumption underlying the mass estimate gets yanked with it. While such a situation may arise, it makes it very difficult to interpret the velocities: all tests are off. This is doubly true in MOND, in which dwarfs are even more susceptible to disruption.
Then there are the data themselves. Blaming the data should be avoided, but it does happen once in a while that some observation is misleading. In this case, I am made queasy by the fact that the velocity dispersion is estimated from only ten tracers. I’ve seen plenty of cases where the velocity dispersion changes in important ways when more data are obtained, even starting from more than 10 tracers. Andromeda II comes to mind as an example. Indeed, several people have pointed out that if we did the same exercise with Fornax, using its globular clusters as the velocity tracers, we’d get a similar answer to what we find in DF2. But we also have measurements of many hundreds of stars in Fornax, so we know that answer is wrong. Perhaps the same thing is happening with DF2? The fact that DF2 is an outlier from everything else we know empirically suggests caution.
Throwing caution and fact-checking to the wind, many people have been predictably eager to cite DF2 as a falsification of MOND. Van Dokkum et al. point out that the velocity dispersion predicted for this object by MOND is 20 km/s, more than a factor of two above their measured value. They make the MOND prediction for the case of an isolated object. DF2 is not isolated, so one must consider the external field effect (EFE).
The criterion by which to judge isolation in MOND is whether the acceleration due to the mutual self-gravity of the stars is less than the acceleration from an external source, in this case the host NGC 1052. Following the method outlined by McGaugh & Milgrom, and based on the stellar mass (adopting M/L=2 as both we and van Dokkum assume), I estimate an internal acceleration of DF2 to be gin = 0.15 a0. Here a0 is the critical acceleration scale in MOND, 1.2 × 10⁻¹⁰ m s⁻². Using this number and treating DF2 as isolated, I get the same 20 km/s van Dokkum et al. estimate.
Estimating the external field is more challenging. It depends on the mass of NGC 1052, and the separation between it and DF2. The projected separation at the assumed distance is 80 kpc. That is well within the range that the EFE is commonly observed to matter in the Local Group. It could be a bit further granted some distance along the line of sight, but if this becomes too large then the distance by association with NGC 1052 has to be questioned, and all bets are off. The mass of NGC 1052 is also rather uncertain, or at least I have heard wildly different values quoted in discussions about this object. Here I adopt 10¹¹ M☉ as estimated by SLUGGS. To get the acceleration, I estimate the asymptotic rotation velocity we’d expect in MOND, V⁴ = a0 G M. This gives 200 km/s, which is conservative relative to the ~300 km/s quoted by van Dokkum et al. At a distance of 80 kpc, the corresponding external acceleration gex = 0.14 a0. This is very uncertain, but taken at face value is indistinguishable from the internal acceleration. Consequently, it cannot be ignored: the calculation published by van Dokkum et al. is not the correct prediction for MOND.
The velocity dispersion estimator in MOND differs when gex < gin and gex > gin (see equations 2 and 3 of McGaugh & Milgrom). Strictly speaking, these apply in the limits where one or the other field dominates. When they are comparable, the math gets more involved (see equation 59 of Famaey & McGaugh). The input data are too uncertain to warrant an elaborate calculation for a blog, so I note simply that the amplitude of the mass discrepancy in MOND depends on how deep in the MOND regime a system is. That is, how far below the critical acceleration scale it is. The lower the acceleration, the larger the discrepancy. This is why LSB galaxies appear to be dark matter dominated; their low surface densities result in low accelerations.
For DF2, the absolute magnitude of the acceleration is approximately doubled by the presence of the external field. It is not as deep in the MOND regime as assumed in the isolated case, so the mass discrepancy is smaller, decreasing the MOND-predicted velocity dispersion by roughly the square root of 2. For a factor of 2 range in the stellar mass-to-light ratio (as in McGaugh & Milgrom), this crude MOND prediction becomes
σ = 14 ± 4 km/s.
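The arithmetic behind this estimate can be reproduced in a few lines. The isolated prediction uses the standard deep-MOND estimator σ⁴ = (4/81) a0 G M; the √2 reduction is the crude approximation described above, not a full EFE calculation, and the stellar mass is an assumed round number consistent with M/L = 2:

```python
import math

G    = 6.674e-11   # Newton's constant, m^3 kg^-1 s^-2
a0   = 1.2e-10     # Milgrom's constant, m s^-2
Msun = 1.989e30    # kg
kpc  = 3.086e19    # m

M_star = 2.0e8 * Msun    # DF2 stellar mass for M/L = 2 (assumed here)
M_host = 1.0e11 * Msun   # NGC 1052 mass (SLUGGS estimate)
D      = 80.0 * kpc      # projected separation

# Isolated deep-MOND estimator: sigma^4 = (4/81) a0 G M
sigma_iso = (4.0 * a0 * G * M_star / 81.0) ** 0.25

# External field from the host: V^4 = a0 G M_host, then g_ex = V^2 / D
V_host = (a0 * G * M_host) ** 0.25
g_ex   = V_host ** 2 / D

# g_ex ~ g_in ~ 0.14 a0: the total acceleration roughly doubles, so the
# predicted dispersion drops by about sqrt(2) (the crude estimate above)
sigma_efe = sigma_iso / math.sqrt(2.0)

print(sigma_iso / 1e3)   # ~20 km/s, the isolated prediction
print(g_ex / a0)         # ~0.14, comparable to the internal field
print(sigma_efe / 1e3)   # ~14 km/s, the EFE-corrected estimate
```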
Like any erstwhile theorist, I reserve the right to modify this prediction granted more elaborate calculations, or new input data, especially given the uncertainties in the distance and mass of the host. Indeed, we should consider the possibility of tidal disruption, which can happen in MOND more readily than with dark matter. Indeed, at one point I came very close to declaring MOND dead because the velocity dispersions of the ultrafaint dwarf galaxies were off, only realizing late in the day that MOND actually predicts that these things should be getting tidally disrupted (as is also expected, albeit somewhat differently, in ΛCDM), so that the velocity dispersions might not reflect the equilibrium expectation.
In DF2, the external field almost certainly matters. Barring wild errors of the sort discussed or unforeseen, I find it hard to envision the MONDian velocity dispersion falling outside the range 10 – 18 km/s. This is not as high as the 20 km/s predicted by van Dokkum et al. for an isolated object, nor as small as they measure for DF2 (8.4 km/s). They quote a 90% confidence upper limit of 10 km/s, which is marginally consistent with the lower end of the prediction (corresponding to M/L = 1). So we cannot exclude MOND based on these data.
That said, the agreement is marginal. Still, 90% is not very high confidence by scientific standards. Based on experience with such data, this likely overstates how well we know the velocity dispersion of DF2. Put another way, I am 90% confident that when better data are obtained, the measured velocity dispersion will increase above the 10 km/s threshold.
More generally, experience has taught me three things:
In matters of particle physics, do not bet against the Standard Model.
In matters cosmological, do not bet against ΛCDM.
In matters of galaxy dynamics, do not bet against MOND.
The astute reader will realize that these three assertions are mutually exclusive. The dark matter of ΛCDM is a bet that there are new particles beyond the Standard Model. MOND is a bet that what we call dark matter is really the manifestation of physics beyond General Relativity, on which cosmology is based. Which is all to say, there is still some interesting physics to be discovered.
The week of June 5, 2017, we held a workshop on dwarf galaxies and the dark matter problem. The workshop was attended by many leaders in the field – giants of dwarf galaxy research. It was held on the campus of Case Western Reserve University and supported by the John Templeton Foundation. It resulted in many fascinating discussions which I can’t possibly begin to share in full here, but I’ll say a few words.
Dwarf galaxies are among the most dark matter dominated objects in the universe. Or, stated more properly, they exhibit the largest mass discrepancies. This makes them great places to test theories of dark matter and modified gravity. By the end, we had come up with a few important tests for both ΛCDM and MOND. A few of these we managed to put on a white board. These are hardly a complete list, but provide a basis for discussion.
UFDs in field: Over the past few years, a number of extremely tiny dwarf galaxies have been identified as satellites of the Milky Way galaxy. These “ultrafaint dwarfs” are vaguely defined as being fainter than 100,000 solar luminosities, with the smallest examples having only a few hundred stars. This is absurdly small by galactic standards, having the stellar content of individual star clusters within the Milky Way. Indeed, it is not obvious to me that all of the ultrafaint dwarfs deserve to be recognized as dwarf galaxies, as some may merely be fragmentary portions of the Galactic stellar halo composed of stars coincident in phase space. Nevertheless, many may well be stellar systems external to the Milky Way that orbit it as dwarf satellites.
That multitudes of minuscule dark matter halos exist is a fundamental prediction of the ΛCDM cosmogony. These should often contain ultrafaint dwarf galaxies, and not only as satellites of giant galaxies like the Milky Way. Indeed, one expects to see many ultrafaints in the “field” beyond the orbital vicinity of the Milky Way where we have found them so far. These are predicted to exist in great numbers, and contain uniformly old stars. The “old stars” portion of the prediction stems from the reionization of the universe impeding star formation in the smallest dark matter halos. Upcoming surveys like LSST should provide a test of this prediction.
From an empirical perspective, I do expect that we will continue to discover galaxies of ever lower luminosity and surface brightness. In the field, I expect that these will be predominantly gas rich dwarfs like Leo P rather than gas-free, old stellar systems like the satellite ultrafaints. My expectation is an extrapolation of past experience, not a theory-specific prediction.
No Large Cores: Many of the simulators present at the workshop showed that if the energy released by supernovae was well directed, it could reshape the steep (‘cuspy’) interior density profiles of dark matter halos into something more like the shallow (‘cored’) interiors that are favored by data. I highlight the if because I remain skeptical that supernova energy couples as strongly as required and assumed (basically 100%). Even assuming favorable feedback, there seemed to be broad (if not unanimous) consensus among the simulators present that at sufficiently low masses, not enough stars would form to produce the requisite energy. Consequently, low mass halos should not have shallow cores, but instead retain their primordial density cusps. Hence clear measurement of a large core in a low mass dwarf galaxy (stellar mass < 1 million solar masses) would be a serious problem. Unfortunately, I’m not clear that we quantified “large,” but something more than a few hundred parsecs should qualify.
Radial Orbit for Crater 2: Several speakers highlighted the importance of the recently discovered dwarf satellite Crater 2. This object has a velocity dispersion that is unexpectedly low in ΛCDM, but was predicted by MOND. The “fix” in ΛCDM is to imagine that Crater 2 has suffered a large amount of tidal stripping by a close passage of the Milky Way. Hence it is predicted to be on a radial orbit (one that basically just plunges in and out). This can be tested by measuring the proper motion of its stars with Hubble Space Telescope, for which there exists a recently approved program.
DM Substructures: As noted above, there must exist numerous low mass dark matter halos in the cold dark matter cosmogony. These may be detected as substructure in the halos of larger galaxies by means of their gravitational lensing even if they do not contain dwarf galaxies. Basically, a lumpy dark matter halo bends light in subtly but detectably different ways from a smooth halo.
No Wide Binaries in UFDs: As a consequence of dynamical friction against the background dark matter, binary stars cannot remain at large separations over a Hubble time: their orbits should decay. In the absence of dark matter, this should not happen (it cannot if there is nowhere for the orbital energy to go, like into dark matter particles). Thus the detection of a population of widely separated binary stars would be problematic. Indeed, Pavel Kroupa argued that the apparent absence of strong dynamical friction already excludes particle dark matter as it is usually imagined.
Short dynamical times/common mergers: This is related to dynamical friction. In the hierarchical cosmogony of cold dark matter, mergers of halos (and the galaxies they contain) must be frequent and rapid. Dark matter halos are dynamically sticky, soaking up the orbital energy and angular momentum between colliding galaxies to allow them to stick and merge. Such mergers should go to completion on fairly short timescales (a mere few hundred million years).
A few distinctive predictions for MOND were also identified.
Tangential Orbit for Crater 2: In contrast to ΛCDM, we expect that the `feeble giant’ Crater 2 could not survive a close encounter with the Milky Way. Even at its rather large distance of 120 kpc from the Milky Way, it is so feeble that it is not immune from the external field of its giant host. Consequently, we expect that Crater 2 must be on a more nearly circular orbit, and not on a radial orbit as suggested in ΛCDM. The orbit does not need to be perfectly circular of course, but it should be more tangential than radial.
This provides a nice test that distinguishes between the two theories. Either the orbit of Crater 2 is more radial or more tangential. Bear in mind that Crater 2 already constitutes a problem for ΛCDM. What we’re discussing here is how to close what is basically a loophole whereby we can excuse an otherwise unanticipated result in ΛCDM.
I believe the question mark was added on the white board to permit the logical if unlikely possibility that one could write a MOND theory with an undetectably small EFE.
Position of UFDs on RAR: We chose to avoid making the radial acceleration relation (RAR) a focus of the meeting – there was quite enough to talk about as it was – but it certainly came up. The ultrafaint dwarfs sit “too high” on the RAR, an apparent problem for MOND. Indeed, when I first worked on this subject with Joe Wolf, I initially thought this was a fatal problem for MOND.
My initial thought was wrong. This is not a problem for MOND. The RAR applies to systems in dynamical equilibrium. There is a criterion in MOND to check whether this essential condition may be satisfied. Basically all of the ultrafaints flunk this test. There is no reason to think they are in dynamical equilibrium, so no reason to expect that they should be exactly on the RAR.
Some advocates of ΛCDM seemed to think this was a fudge, a lame excuse morally equivalent to the fudges made in ΛCDM that its critics complain about. This is a false equivalency that reminds me of this cartoon:
The ultrafaints are a handful of the least-well measured galaxies on the RAR. Before we obsess about these, it is necessary to provide a satisfactory explanation for the more numerous, much better measured galaxies that establish the RAR in the first place. MOND does this. ΛCDM does not. Holding one theory to account for the least reliable of measurements before holding another to account for everything up to that point is like, well, like the cartoon… I could put an NGC number to each of the lines Bugs draws in the sand.
Long dynamical times/less common mergers: Unlike ΛCDM, dynamical friction should be relatively ineffective in MOND. It lacks the large halos of dark matter that act as invisible catchers’ mitts to make galaxies stick and merge. Personally, I do not think this is a great test, because we are a long way from understanding dynamical friction in MOND.
These are just a few of the topics discussed at the workshop, and all of those are only a few of the issues that matter to the bigger picture. While the workshop was great in every respect, perhaps the best thing was that it got people from different fields/camps/perspectives talking. That is progress.
I am grateful for progress, but I must confess that to me it feels excruciatingly slow. Models of galaxy formation in the context of ΛCDM have made credible steps forward in addressing some of the phenomenological issues that concern me. Yet they still seem to me to be very far from where they need to be. In particular, there seems to be no engagement with the fundamental question I have posed here before, and that I posed at the beginning of the workshop: Why does MOND get any predictions right?
A research programme is said to be progressing as long as its theoretical growth anticipates its empirical growth, that is as long as it keeps predicting novel facts with some success (“progressive problemshift”); it is stagnating if its theoretical growth lags behind its empirical growth, that is as long as it gives only post-hoc explanations either of chance discoveries or of facts anticipated by, and discovered in, a rival programme (“degenerating problemshift”) (Lakatos, 1971, pp. 104–105).
The recent history of modern cosmology is rife with post-hoc explanations of unanticipated facts. The cusp-core problem and the missing satellites problem are prominent examples. These are explained after the fact by invoking feedback, a vague catch-all that many people agree solves these problems even though none of them agree on how it actually works.
There are plenty of other problems. To name just a few: satellite planes (unanticipated correlations in phase space), the emptiness of voids, and the early formation of structure (see section 4 of Famaey & McGaugh for a longer list and section 6 of Silk & Mamon for a positive spin on our list). Each problem is dealt with in a piecemeal fashion, often by invoking solutions that contradict each other while buggering the principle of parsimony.
It goes like this. A new observation is made that does not align with the concordance cosmology. Hands are wrung. Debate is had. Serious concern is expressed. A solution is put forward. Sometimes it is reasonable, sometimes it is not. In either case it is rapidly accepted so long as it saves the paradigm and prevents the need for serious thought. (“Oh, feedback does that.”) The observation is no longer considered a problem through familiarity and exhaustion of patience with the debate, regardless of how [un]satisfactory the proffered solution is. The details of the solution are generally forgotten (if ever learned). When the next problem appears the process repeats, with the new solution often contradicting the now-forgotten solution to the previous problem.
This has been going on for so long that many junior scientists now seem to think this is how science is supposed to work. It is all they’ve experienced. And despite our claims to be interested in fundamental issues, most of us are impatient with re-examining issues that were thought to be settled. All it takes is one bold assertion that everything is OK, and the problem is perceived to be solved whether it actually is or not.
That is the process we apply to little problems. The Big Problems remain the post hoc elements of dark matter and dark energy. These are things we made up to explain unanticipated phenomena. That we need to invoke them immediately casts the paradigm into what Lakatos called degenerating problemshift. Once we’re there, it is hard to see how to get out, given our propensity to overindulge in the honey that is the infinity of free parameters in dark matter models.
Note that there is another aspect to what Lakatos said about facts anticipated by, and discovered in, a rival programme. Two examples spring immediately to mind: the Baryonic Tully-Fisher Relation and the Radial Acceleration Relation. These are predictions of MOND that were unanticipated in the conventional dark matter picture. Perhaps we can come up with post hoc explanations for them, but that is exactly what Lakatos would describe as degenerating problemshift. The rival programme beat us to it.
In my experience, this is a good description of what is going on. The field of dark matter has stagnated. Experimenters look harder and harder for the same thing, repeating the same experiments in hope of a different result. Theorists turn knobs on elaborate models, gifting themselves new free parameters every time they get stuck.
On the flip side, MOND keeps predicting novel facts with some success, so it remains in the stage of progressive problemshift. Unfortunately, MOND remains incomplete as a theory, and doesn’t address many basic issues in cosmology. This is a different kind of unsatisfactory.
In the mean time, I’m still waiting to hear a satisfactory answer to the question I’ve been posing for over two decades now. Why does MOND get any predictions right? It has had many a priori predictions come true. Why does this happen? It shouldn’t. Ever.
In 1984, I heard Hans Bethe give a talk in which he suggested the dark matter might be neutrinos. This sounded outlandish – from what I had just been taught about the Standard Model, neutrinos were massless. Worse, I had been given the clear impression that it would screw everything up if they did have mass. This was the pervasive attitude, even though the solar neutrino problem was known at the time. It did not compute, so many of us were inclined to ignore it. But, I thought, in the unlikely event it turned out that neutrinos did have mass, surely that would be the answer to the dark matter problem.
Flash forward a few decades, and sure enough, neutrinos do have mass. Oscillations between flavors of neutrinos have been observed in both solar and atmospheric neutrinos. This implies non-zero mass eigenstates. We don’t yet know the absolute value of the neutrino mass, but the oscillations do constrain the separation between mass states (Δm²₂₁ = 7.53×10⁻⁵ eV² for solar neutrinos, and Δm²₃₁ = 2.44×10⁻³ eV² for atmospheric neutrinos).
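These splittings already put a floor under the total mass. As a quick sanity check (assuming the normal mass ordering with the lightest eigenstate taken as massless – an assumption, not something measured):

```python
import math

# Measured mass-squared splittings from oscillations (values quoted above)
dm2_21 = 7.53e-5   # eV^2, solar
dm2_31 = 2.44e-3   # eV^2, atmospheric

# Minimal total mass: normal ordering, lightest state assumed massless
m1 = 0.0
m2 = math.sqrt(dm2_21)      # ~0.0087 eV
m3 = math.sqrt(dm2_31)      # ~0.049 eV
sum_mnu_min = m1 + m2 + m3  # ~0.06 eV
print(f"minimum sum of neutrino masses: {sum_mnu_min:.3f} eV")
```

So the oscillation data guarantee a total of at least a few hundredths of an eV, even though the absolute scale remains unmeasured.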
Though the absolute values of the neutrino mass eigenstates are not yet known, there are upper limits. These don’t allow enough mass to explain the cosmological missing mass problem. The relic density of neutrinos is
Ων h² = ∑mν / (93.5 eV)
In order to make up the dark matter density (Ω ≈ 1/4), we need ∑mν ≈ 12 eV. The experimental upper limit on the electron neutrino mass is mν < 2 eV. There are three neutrino mass eigenstates, and the difference in mass between them is tiny, so ∑mν < 6 eV. Neutrinos could conceivably add up to more mass than baryons, but they cannot add up to be the dark matter.
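The arithmetic behind those numbers is a one-liner (assuming h ≈ 0.7, a value not stated in the text but standard):

```python
# Back-of-the-envelope check of the relic density formula
# Omega_nu * h^2 = sum(m_nu) / 93.5 eV, assuming h ~ 0.7.
h = 0.7
omega_dm = 0.25                             # dark matter density, Omega ~ 1/4
sum_mnu_needed = omega_dm * h**2 * 93.5     # eV required to be all the dark matter
print(f"sum of masses needed: {sum_mnu_needed:.1f} eV")  # ~11.5 eV, i.e. ~12 eV

# Experimental ceiling: three nearly degenerate states, each under the ~2 eV
# electron-neutrino limit.
sum_mnu_max = 3 * 2.0
print(f"experimental ceiling: {sum_mnu_max:.0f} eV")
```

The required ~12 eV exceeds the ~6 eV ceiling by a factor of two, which is the quantitative content of the statement that neutrinos cannot be the dark matter.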
In recent years, I have started to hear the assertion that we have already detected dark matter, with neutrinos given as the example. They are particles with mass that only interact with us through the weak nuclear force and gravity. In this respect, they are like WIMPs.
Here the equivalence ends. Neutrinos are Standard Model particles that have been known for decades. WIMPs are hypothetical particles that reside in a hypothetical supersymmetric sector beyond the Standard Model. Conflating the two to imply that WIMPs are just as natural as neutrinos is a false equivalency.
That said, massive neutrinos might be one of the few ways in which hierarchical cosmogony, as we currently understand it, is falsifiable. Whatever the dark matter is, we need it to be dynamically cold. This property is necessary for it to clump into dark matter halos that seed galaxy formation. Too much hot (relativistic) dark matter (neutrinos) suppresses structure formation. A nascent dark matter halo is nary a speed bump to a neutrino moving near the speed of light: if those fast neutrinos carry too much mass, they erase structure before it can form.
One of the great successes of ΛCDM is its explanation of structure formation: the growth of large scale structure from the small fluctuations in the density field at early times. This is usually quantified by the power spectrum – in the CMB at z > 1000 and from the spatial distribution of galaxies at z = 0. This all works well provided the dominant dark mass is dynamically cold, and there isn’t too much hot dark matter fighting it.
How much is too much? The power spectrum puts strong limits on the amount of hot dark matter that is tolerable. The upper limit is ∑mν < 0.12 eV. This is an order of magnitude stronger than direct experimental constraints.
Usually, it is assumed that the experimental limit will eventually come down to the structure formation limit. That does seem likely, but it is also conceivable that the neutrino mass has some intermediate value, say mν ≈ 1 eV. Such a result, were it to be obtained experimentally, would falsify the current CDM cosmogony.
Such a result seems unlikely, of course. Shooting for a narrow window such as the gap between the current cosmological and experimental limits is like drawing to an inside straight. It can happen, but it is unwise to bet the farm on it.
If experiments measure a neutrino mass in excess of the cosmological limit, it would be powerful motivation to consider MOND-like theories as a driver of structure formation. If instead the neutrino mass does prove to be tiny, ΛCDM will have survived another test. That wouldn’t falsify MOND (or really have any bearing on it), but it would remove one potential “out” for the galaxy cluster problem.
Tiny though they be, neutrinos got mass! And it matters!
One apparently promising idea is the emergent gravity hypothesis by Erik Verlinde. Gravity is not a fundamental force so much as a consequence of microscopic entanglement. This manifests on scales comparable to the Hubble horizon, in particular, with an acceleration of order the speed of light times the Hubble expansion rate. This is close to the acceleration scale of MOND.
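How close is “close”? A rough comparison (assuming H0 = 70 km/s/Mpc and Milgrom’s a0 = 1.2×10⁻¹⁰ m/s², the value quoted elsewhere in this post as 1.2 Å s⁻²; neither number appears in the paragraph above):

```python
# Compare the emergent-gravity scale c*H0 with Milgrom's acceleration constant a0.
c = 2.998e8                    # speed of light, m/s
H0 = 70 * 1000 / 3.086e22      # 70 km/s/Mpc converted to 1/s
a_H = c * H0                   # ~6.8e-10 m/s^2
a0 = 1.2e-10                   # Milgrom's constant, m/s^2
print(f"c*H0 = {a_H:.2e} m/s^2, ratio to a0 = {a_H/a0:.1f}")
```

The ratio comes out near 6, of order 2π, so c·H0 is indeed the same scale as a0 up to a factor of a few.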
An early test of emergent gravity was provided by weak gravitational lensing. It does remarkably well at predicting the observed lensing signal with no free parameters. This is promising – perhaps we finally have a physical basis for MOND. Indeed, as Milgrom points out, the equivalent success was already known in MOND.
Weak lensing occurs deep in the MOND regime, at very low accelerations far from the lensing galaxy. In that regard, the results of Brouwer et al. can be seen as an extension of the radial acceleration relation to much lower accelerations. In this limit, it is fair to treat galaxies as point masses – hence the similarity of the solid and dashed lines in the figure above.
For rotation curves, it is not fair to approximate galaxies as point masses. Rotation curves are observed in the midst of the stellar and gaseous mass distribution. One must take account of the detailed distribution of baryons to treat the problem properly. This is something MOND is very successful at.
Emergent gravity converges to the same limit as MOND in the point mass case, which holds for any mass distribution once you get far enough away. It is not identical for finite mass distributions. When one solves the equations of emergent gravity for a finite mass distribution, one gets a term that looks like MOND, which gives the success noted above. But one also gets an additional term that depends on the gradient of the mass distribution, dM/dr.
The additional term that emerges for extended mass distributions in emergent gravity leads to different predictions than MOND. This is good, in that it makes the theories distinguishable. This is bad, in that MOND already provides good fits to rotation curves. Additional terms are likely to mess that up.
And so it does. Two independent studies recently came to this conclusion: one including myself (Lelli et al. 2017) and another by Hees et al. (2017). The studies differ in their approach. We show that the additional term in emergent gravity leads to the prediction of a larger mass discrepancy than in MOND, driving one to abnormally low stellar mass-to-light ratios and degrading the empirical radial acceleration relation. Hees et al. make detailed rotation curve fits, showing that the dM/dr term over-amplifies bumps & wiggles in the rotation curve. It has always been intriguing that MOND gets these right: this is a non-trivial success to reproduce.
The situation looks bad for emergent gravity. One caveat is that at present we only have solutions for emergent gravity in the case of a spherical cow. Conceivably a better treatment of the geometry would change the result, but it won’t eliminate the dM/dr term. So this seems unlikely to help with the fundamental problem: that term needs to not exist.
Perhaps emergent gravity is a clue to what ultimately is going on – a single step in the right direction. Or perhaps the similarity to MOND is misleading. For now, the search for a satisfactory explanation for the observed phenomenology continues.