Hubble constant redux

There is a new article in Science on the expansion rate of the universe, very much along the lines of my recent post. It is a good read that I recommend. It includes some of the human elements that influence the science.

When I started this blog, I recalled my experience in the ’80s moving from a theory-infused institution to a more observationally and empirically oriented one. At that time, the theory-infused cosmologists assured us that Sandage had to be correct: H0 = 50. As a young student, I bought into this. Big time. I had no reason not to; I was very certain of the transmitted lore. The reasons to believe it then seemed every bit as convincing as the reasons to believe ΛCDM today. When I encountered people actually making the measurement, like Greg Bothun, they said “looks to be about 80.”

This caused me a lot of cognitive dissonance. This couldn’t be true. The universe would be too young (at most ∼12 Gyr) to contain the oldest stars (thought to be ∼18 Gyr at that time). Worse, there was no way to reconcile this with Inflation, which demanded Ωm = 1. The large deceleration of the expansion caused by high Ωm greatly exacerbated the age problem (only ∼8 Gyr accounting for deceleration). Reconciling the age problem with Ωm = 1 was hard enough without raising the Hubble constant.
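To make the arithmetic explicit, here is a minimal sketch of the age estimates in play (my own round numbers, not part of the original argument): 1/H0 for a coasting universe, (2/3)/H0 for Ωm = 1.

```python
# Illustrative sketch: ages implied by different values of H0.
# 1/H0 in Gyr is ~9.78/h, where h = H0/(100 km/s/Mpc).

def hubble_time_gyr(H0):
    """Hubble time 1/H0 in Gyr for H0 given in km/s/Mpc."""
    return 9.78 / (H0 / 100.0)

for H0 in (50, 80):
    t_H = hubble_time_gyr(H0)
    # Coasting (empty) universe: age = 1/H0.
    # Einstein-de Sitter (Omega_m = 1): age = (2/3)/H0.
    print(f"H0 = {H0}: 1/H0 = {t_H:.0f} Gyr, Omega_m = 1 age = {2 * t_H / 3:.0f} Gyr")

# H0 = 80 gives at most ~12 Gyr, and only ~8 Gyr with Omega_m = 1,
# uncomfortably short of the ~18 Gyr then estimated for the oldest stars.
```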

Presented with this dissonant information, I did what most of us humans do: I ignored it. Some of my first work involved computing the luminosity function of quasars. With the huge distance scale of H0 = 50, I remember noticing how more distant quasars got progressively brighter. By a lot. Yes, they’re the most luminous things in the early universe. But they weren’t just outshining a galaxy’s worth of stars; they were outshining a galaxy of galaxies.

That was a clue that the metric I was assuming was very wrong. And indeed, since that time, every number of cosmological significance that I was assured in confident tones by Great Men that I Had to Believe has changed by far more than its formal uncertainty. In struggling with this, I’ve learned not to be so presumptuous in my beliefs. The universe is there for us to explore and discover. We inevitably err when we try to dictate how it Must Be.

The amplitude of the discrepancy in the Hubble constant is smaller now, but the same attitudes are playing out. Individual attitudes vary, of course, but there are many in the cosmological community who take the attitude that the Planck data give H0 = 67.8 so that is the right number. All other data are irrelevant, or at best flawed, until brought into concordance with the right number.

It is Known, Khaleesi. 

Often these are the same people who assured us we had to believe Ωm = 1 and H0 = 50 back in the day. This continues the tradition of arrogance about how things must be. That attitude remains rampant in cosmology, and is absorbed by new generations of students just as it was by me. They’re very certain of the transmitted lore. I’ve even been trolled by some who seem particularly eager to repeat the mistakes of the past.

From hard experience, I would advocate a little humility. Yes, Virginia, there is a real tension in the Hubble constant. And yes, it remains quite possible that essential elements of our cosmology may prove to be wrong. I personally have no doubt about the empirical pillars of the Big Bang – cosmic expansion, Big Bang Nucleosynthesis, and the primordial nature of the Cosmic Microwave Background. But Dark Matter and Dark Energy may well turn out to be mere proxies for some deeper cosmic truth. If that is so, we will never recognize it if we proceed with the attitude that ΛCDM is Known, Khaleesi.

Neutrinos got mass!

In 1984, I heard Hans Bethe give a talk in which he suggested the dark matter might be neutrinos. This sounded outlandish – from what I had just been taught about the Standard Model, neutrinos were massless. Worse, I had been given the clear impression that it would screw everything up if they did have mass. This was the pervasive attitude, even though the solar neutrino problem was known at the time. This did not compute, so many of us were inclined to ignore it. But, I thought, in the unlikely event it turned out that neutrinos did have mass, surely that would be the answer to the dark matter problem.

Flash forward a few decades, and sure enough, neutrinos do have mass. Oscillations between flavors of neutrinos have been observed in both solar and atmospheric neutrinos. This implies non-zero mass eigenstates. We don’t yet know the absolute value of the neutrino mass, but the oscillations do constrain the separation between mass states (Δm²₂₁ = 7.53×10⁻⁵ eV² for solar neutrinos, and Δm²₃₁ = 2.44×10⁻³ eV² for atmospheric neutrinos).
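As an aside, here is a quick sketch of what those splittings imply for the minimum total mass. The normal hierarchy with the lightest state near zero is my illustrative assumption, not a measurement.

```python
from math import sqrt

# Illustrative: minimum sum of neutrino masses implied by the oscillation
# splittings, assuming a normal hierarchy with the lightest state ~ 0.
dm21_sq = 7.53e-5   # eV^2, solar splitting
dm31_sq = 2.44e-3   # eV^2, atmospheric splitting

m1 = 0.0
m2 = sqrt(m1**2 + dm21_sq)   # ~ 0.009 eV
m3 = sqrt(m1**2 + dm31_sq)   # ~ 0.049 eV
print(f"minimum sum of masses ~ {m1 + m2 + m3:.2f} eV")   # ~ 0.06 eV
```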

Though the absolute values of the neutrino mass eigenstates are not yet known, there are upper limits. These don’t allow enough mass to explain the cosmological missing mass problem. The relic density of neutrinos is

Ωνh² = ∑mν/(93.5 eV)

In order to make up the dark matter density (Ω ≈ 1/4), we need ∑mν ≈ 12 eV. The experimental upper limit on the electron neutrino mass is mν < 2 eV. There are three neutrino mass eigenstates, and the difference in mass between them is tiny, so ∑mν < 6 eV. Neutrinos could conceivably add up to more mass than baryons, but they cannot add up to be the dark matter.
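For concreteness, a minimal sketch of this bookkeeping (h = 0.7 and Ω ≈ 0.25 are round illustrative values):

```python
# Illustrative: how much neutrino mass it would take to be the dark matter,
# using Omega_nu h^2 = sum(m_nu) / 93.5 eV.
h = 0.7          # round value of H0 / (100 km/s/Mpc)
omega_dm = 0.25  # rough dark matter density

sum_mnu_needed = omega_dm * h**2 * 93.5
print(f"sum(m_nu) needed ~ {sum_mnu_needed:.1f} eV")   # ~ 12 eV

# Three nearly degenerate mass states, each below the ~2 eV laboratory limit:
sum_mnu_max = 3 * 2.0
print(f"experimental ceiling ~ {sum_mnu_max:.0f} eV")  # 6 eV: short by a factor of ~2
```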

In recent years, I have started to hear the assertion that we have already detected dark matter, with neutrinos given as the example. They are particles with mass that only interact with us through the weak nuclear force and gravity. In this respect, they are like WIMPs.

Here the equivalence ends. Neutrinos are Standard Model particles that have been known for decades. WIMPs are hypothetical particles that reside in a hypothetical supersymmetric sector beyond the Standard Model. Conflating the two to imply that WIMPs are just as natural as neutrinos is a false equivalency.

That said, massive neutrinos might be one of the few ways in which hierarchical cosmogony, as we currently understand it, is falsifiable. Whatever the dark matter is, we need it to be dynamically cold. This property is necessary for it to clump into dark matter halos that seed galaxy formation. Too much hot (relativistic) dark matter (neutrinos) suppresses structure formation. A nascent dark matter halo is nary a speed bump to a neutrino moving near the speed of light: if those fast neutrinos carry too much mass, they erase structure before it can form.

One of the great successes of ΛCDM is its explanation of structure formation: the growth of large scale structure from the small fluctuations in the density field at early times. This is usually quantified by the power spectrum – in the CMB at z > 1000 and from the spatial distribution of galaxies at z = 0. This all works well provided the dominant dark mass is dynamically cold, and there isn’t too much hot dark matter fighting it.

The power spectrum from the CMB (low frequency/large scales) and the galaxy distribution (high frequency/”small” scales). Adapted from Whittle.

How much is too much? The power spectrum puts strong limits on the amount of hot dark matter that is tolerable. The upper limit is ∑mν < 0.12 eV. This is an order of magnitude stronger than direct experimental constraints.

Usually, it is assumed that the experimental limit will eventually come down to the structure formation limit. That does seem likely, but it is also conceivable that the neutrino mass has some intermediate value, say mν ≈ 1 eV. Such a result, were it to be obtained experimentally, would falsify the current CDM cosmogony.

Such a result seems unlikely, of course. Shooting for a narrow window such as the gap between the current cosmological and experimental limits is like drawing to an inside straight. It can happen, but it is unwise to bet the farm on it.

It should be noted that a circa 1 eV neutrino would have some desirable properties in a MONDian universe. MOND can form large scale structure, much like CDM, but it does so faster. This is good for clearing out the voids and getting structure in place early, but it tends to overproduce structure by z = 0. An admixture of neutrinos might help with that. A neutrino with an appreciable mass would also help with the residual mass discrepancy MOND suffers in clusters of galaxies.

If experiments measure a neutrino mass in excess of the cosmological limit, it would be powerful motivation to consider MOND-like theories as a driver of structure formation. If instead the neutrino does prove to be tiny, ΛCDM will have survived another test. That wouldn’t falsify MOND (or really have any bearing on it), but it would remove one potential “out” for the galaxy cluster problem.

Tiny though they be, neutrinos got mass! And it matters!

LCDM has met the enemy, and it is itself

David Merritt recently published the article “Cosmology and convention” in Studies in History and Philosophy of Science. This article is remarkable in many respects. For starters, it is rare that a practicing scientist reads a paper on the philosophy of science, much less publishes one in a philosophy journal.

I was initially loath to start reading this article, frankly for fear of boredom: me reading about cosmology and the philosophy of science is like coals to Newcastle. I could not have been more wrong. It is a genuine page-turner that should be read by everyone interested in cosmology.

I have struggled for a long time with whether dark matter constitutes a falsifiable scientific hypothesis. It straddles the border: specific dark matter candidates (e.g., WIMPs) are confirmable – a laboratory detection is both possible and plausible – but the concept of dark matter can never be excluded. If we fail to find WIMPs in the range of mass and cross-section parameter space where we expected them, we can change the prediction. This moving of the goal posts has already happened repeatedly.

The cross-section vs. mass parameter space for WIMPs. The original, “natural” weak interaction cross-section (10⁻³⁹) was excluded long ago, as were early attempts to map out the theoretically expected parameter space (upper pink region). Later predictions drifted to progressively lower cross-sections. These evaded experimental limits at the time, and confident predictions were made that the dark matter would be found. More recent data show otherwise: the gray region is excluded by PandaX (2016). [This plot was generated with the help of DMTools hosted at Brown.]
I do not find it encouraging that the goal posts keep moving. This raises the question, how far can we go? Arbitrarily low cross-sections can be extracted from theory if we work at it hard enough. How hard should we work? That is, what criteria do we set whereby we decide the WIMP hypothesis is mistaken?

There has to be some criterion by which we would consider the WIMP hypothesis to be falsified. Without such a criterion, it does not satisfy the strictest definition of a scientific hypothesis. If at some point we fail to find WIMPs and are dissatisfied with the theoretical fine-tuning required to keep them hidden, we are free to invent some other dark matter candidate. No WIMPs? Must be axions. Not axions? Would you believe light dark matter? [Worst. Name. Ever.] And so on, ad infinitum. The concept of dark matter is not falsifiable, even if specific dark matter candidates are subject to being made to seem very unlikely (e.g., brown dwarfs).

Faced with this situation, we can consult the philosophy of science. Merritt discusses how many of the essential tenets of modern cosmology follow from what Popper would term “conventionalist stratagems” – ways to dodge serious consideration that a treasured theory is threatened. I find this a compelling terminology, as it formalizes an attitude I have witnessed among scientists, especially cosmologists, many times. It was put more colloquially by J.K. Galbraith:

“Faced with the choice between changing one’s mind and proving that there is no need to do so, almost everybody gets busy on the proof.”

Boiled down (Keuth 2005), the conventionalist stratagems Popper identifies are

  1. ad hoc hypotheses
  2. modification of ostensive definitions
  3. doubting the reliability of the experimenter
  4. doubting the acumen of the theorist

These are stratagems to be avoided according to Popper. At the least they are pitfalls to be aware of, but as Merritt discusses, modern cosmology has marched down exactly this path, doing each of these in turn.

The ad hoc hypotheses of ΛCDM are of course Λ and CDM. Faced with the observation of a metric that cannot be reconciled with the prior expectation of a decelerating expansion rate, we re-invoke Einstein’s greatest blunder, Λ. We even generalize the notion and give it a fancy new name, dark energy, which has the convenient property that it can fit any observed set of monotonic distance-redshift pairs. Faced with an excess of gravitational attraction over what can be explained by normal matter, we invoke non-baryonic dark matter: some novel form of mass that has no place in the standard model of particle physics, has yet to show any hint of itself in the laboratory, and cannot be decisively excluded by experiment.

We didn’t accept these ad hoc add-ons easily or overnight. Persuasive astronomical evidence drove us there, but all these data really show is that something dire is wrong: General Relativity plus known standard model particles cannot explain the universe. Λ and CDM are more a first guess than a final answer. They’ve been around long enough that they have become familiar, almost beyond doubt. Nevertheless, they remain unproven ad hoc hypotheses.

The sentiment that is often asserted is that cosmology works so well that dark matter and dark energy must exist. But a more conservative statement would be that our present understanding of cosmology is correct if and only if these dark entities exist. The onus is on us to detect dark matter particles in the laboratory.

That’s just the first conventionalist stratagem. I could give many examples of violations of the other three, just from my own experience. That would make for a very long post indeed.

Instead, you should go read Merritt’s paper. There are too many things there to discuss, at least in a single post. You’re better off going to the source. Be prepared for some cognitive dissonance.


DTM’s Remembering Vera

I wrote my own recollection of Vera Rubin recently. Her long-time home institution, the Department of Terrestrial Magnetism (DTM) of the Carnegie Institution of Washington, recently held a lunch in her honor. Unfortunately, my travel schedule precluded me from attending. However, they have put together a wonderful website that I recommend to everyone. The depth and variety of the materials published there – testimonials, photos, her list of published papers – is outstanding.

Of historical interest is a series of papers written in the mid-60s in collaboration with Margaret Burbidge. These show some early rotation curves. Many peter out around the turn-over of the rotation curve. With the benefit of hindsight, one can see what the data will do – extend more or less flat from the last measured points.

Here is an example from Burbidge et al. (1964). In this case, NGC 3521, they got a bit further than the turnover. You may judge for yourself how convincing the detection of flat rotation is.


As it happens, NGC 3521 is a near kinematic twin to the Milky Way. Here is the modern rotation curve from THINGS compared with an estimate of the Milky Way rotation curve.


Hopefully it is obvious why it helps to have extended data (usually from 21 cm data, as in the example from THINGS).

This reminds me of something Vera frequently said. Early Days. In many ways, we are far down the path of dark matter. But we still have no idea what it is, or even whether what we call dark matter now is merely a proxy for some more general concept.

Vera always appreciated this. In many ways, these are still Early Days.

Tension in the Hubble constant

There has been some hand-wringing of late about the tension between the value of the expansion rate of the universe – the famous Hubble constant, H0 – measured directly from observed redshifts and distances, and the value obtained from multi-parameter fits to the cosmic microwave background. Direct determinations consistently give values in the low to mid-70s, like Riess et al. (2016): H0 = 73.24 ± 1.74 km/s/Mpc, while the latest CMB fit from Planck gives H0 = 67.8 ± 0.9 km/s/Mpc. These are formally discrepant at a modest level: enough to be annoying, but not enough to be conclusive.
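For a rough sense of scale, here is a minimal sketch of that tension, treating the two error bars as independent and Gaussian (a simplification):

```python
from math import sqrt

# Illustrative: how discrepant the two H0 values are, in sigma.
H0_local, err_local = 73.24, 1.74   # Riess et al. (2016)
H0_cmb, err_cmb = 67.8, 0.9         # Planck fit

diff = H0_local - H0_cmb
n_sigma = diff / sqrt(err_local**2 + err_cmb**2)
print(f"difference = {diff:.2f} km/s/Mpc, tension ~ {n_sigma:.1f} sigma")  # ~ 2.8 sigma
```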

The widespread presumption is that there is a subtle systematic error somewhere. Who is to blame depends on what you work on. People who work on the CMB and appreciate its phenomenal sensitivity to cosmic geometry generally presume the problem is with galaxy measurements. To people who work on local galaxies, the CMB value is a non-starter.

This subject has a long and sordid history which entire books have been written about. Many systematic errors have plagued the cosmic distance ladder. Hubble’s earliest (c. 1930) estimate of H0 = 500 km/s/Mpc was an order of magnitude off, and made the universe impossibly young by what was known to geologists at the time. Recalibration of the distance scale brought the number steadily down. There followed a long (1960s – 1990s) stand-off between H0 = 50 as advocated by Sandage and 100 as advocated by de Vaucouleurs. Obviously, there were some pernicious systematic errors lurking about. Given this history, it is easy to imagine that even today there persists some subtle systematic error in local galaxy distance measurements.

In the mid-90s, I realized that the Tully-Fisher method was effectively a first approximation – there should be more information in the full shape of the rotation curve. Playing around with this, I arrived at H0 = 72 ± 2. My work relied heavily on the work of Begeman, Broeils, & Sanders, and in turn on the distances they had assumed. That was a much larger systematic uncertainty. To firm up my estimate would require improved calibration of those distances quite beyond the scope of what I was willing to take on at that time, so I never published it.

In 2001, the HST Key Project on the Distance Scale – the primary motivation to build the Hubble Space Telescope – reported H0 = 72 ± 8. That uncertainty was still plagued by the same systematics that had befuddled me. Since that time, the errors have been beaten down. There have been many other estimates of increasing precision, mostly in the range 72 – 75. The serious-minded cosmologist always worries about some subtle remaining systematic error, but the issue seemed finally to be settled.

One weird consequence of this was that all my extensive notes on the distance scale no longer seemed essential to teaching graduate cosmology: all the arcane details that had occupied the field for decades suddenly seemed like boring minutiae. That was OK – about that time, there finally started to be interesting data on the cosmic microwave background. Explaining that neatly displaced the class time spent on the distance scale. No longer were the physics students stopping to ask, appalled, “what’s a distance modulus?”; now it was the astronomy students who were appalled to be confronted by the spherical harmonics they’d seen but not learned in quantum mechanics.

The first results from WMAP were entirely consistent with the results of the HST key project. This reinforced the feeling that the problem was solved. In the new century, we finally knew the value of the Hubble constant!

Over the past decade, the best-fit value of H0 from the CMB has done a slow walk away from the direct measurements in the local universe. It has gotten far enough to result in the present tension. The problem is that the CMB doesn’t measure the Hubble constant directly; it constrains a multi-dimensional parameter space that approximately projects onto a constant value of the product ΩmH0³, as illustrated below.

Best fit values of the Hubble constant and the mass density from CMB satellite experiments (labeled). The blue lines demarcate the trench allowed in the space by the Planck data. Only the narrow space between the lines is allowed; the region above and below is excluded. The best fit values have simply marched along the floor of this trench over time.
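To illustrate the direction of the trench, here is a sketch of sliding along the ΩmH0³ ≈ constant degeneracy. The anchor values (Ωm ≈ 0.308 at h = 0.678) are my illustrative assumptions for a Planck-like best fit, not numbers from this post.

```python
# Illustrative: slide along the trench where Omega_m * h^3 is held fixed.
omega_m_fit, h_fit = 0.308, 0.678   # assumed Planck-like best fit (illustrative)
trench_const = omega_m_fit * h_fit**3

for h in (0.678, 0.732):
    omega_m = trench_const / h**3   # stay in the trench while changing H0
    print(f"h = {h:.3f}  ->  Omega_m = {omega_m:.3f}")

# Forcing the local H0 ~ 73 while staying in the trench pushes Omega_m down to ~ 0.25.
```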

Much of the progress in cosmology has been the steady reduction in the allowed range in the above parameter space. The CMB data now allow only a narrow trench. I worry that it may wink out entirely. Were that to happen, it would falsify our current model of cosmology.

For now, the only thing that seems to be happening is that the χ² for the CMB data is ever so slightly better for lower values of the Hubble constant. While the lines of the trench represent no-go zones – the data require cosmological parameters to fall between the lines – there isn’t much difference along the trench. It is like walking along the floor of the Grand Canyon: exiting by climbing up the cliffs is disfavored; meandering downstream is energetically favored.

That’s what it looks like to me. The CMB χ² has meandered a bit down the trench. It is not obvious to me that the current Planck best fit is all that preferable to that from WMAP3. I have asked a few experts what would be so terrible about imposing the local distance scale as a strong prior. I have yet to hear a good answer, so chime in if you know one. If we put the clamps on H0, it must come out somewhere else. Where? How terrible would it be?

This is not an idle question. If one can recover the local Hubble constant with only a small tweak to, say, the baryon density, then fine – we’ve already got a huge problem there with lithium that we’re largely ignoring – why argue about the Hubble constant if this tension can be resolved where there’s already a bigger problem? If instead, it requires something more radical, like a clear difference from the standard number of neutrinos, then OK, that’s interesting and potentially a big deal.

So what is it? What does it take to reconcile Planck with the local H0? Since this is an issue of geometry, I suspect it might be something like the best-fit geometry of the universe becoming ever so slightly not-flat, at the 2σ level instead of 1σ.


While I have not come across a satisfactory explanation of what it would take to reconcile Planck with the local distance scale, I have seen many joint analyses of Planck plus lots of other data. They all seem consistent, so long as you ignore the high-L (L > 600) Planck data. It is only the high-L data that are driving the discrepancy (the low-L data appear to be OK).

So I will say the obvious, for those who are too timid: it looks like the systematic error is most likely with the high-L data of Planck itself.

Emergent Gravity hits a pothole

We have in MOND a formula that has had repeated predictive successes. Many of these have been true a priori predictions, like the absolute nature of the baryonic Tully-Fisher relation, the large mass discrepancies evinced by low surface brightness galaxies, and the velocity dispersions of many individual dwarf spheroidal galaxies like Crater 2. I don’t see how these can be an accident. But what we lack is an underlying theoretical basis for the observed MONDian phenomenology: Why does this happen?

One apparently promising idea is the emergent gravity hypothesis of Erik Verlinde, in which gravity is not a fundamental force so much as a consequence of microscopic entanglement. This manifests on scales comparable to the Hubble horizon, in particular with an acceleration of order the speed of light times the Hubble expansion rate. That is close to the acceleration scale of MOND.
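The numerical coincidence is easy to check; a minimal sketch with round numbers of my own choosing:

```python
# Illustrative: compare c * H0 to the MOND acceleration scale a0.
c = 2.998e8                    # speed of light, m/s
H0 = 70 * 1000 / 3.086e22      # 70 km/s/Mpc expressed in 1/s
a0 = 1.2e-10                   # m/s^2, the MOND acceleration scale

cH0 = c * H0
print(f"c*H0        ~ {cH0:.1e} m/s^2")          # ~ 7e-10
print(f"c*H0/(2 pi) ~ {cH0 / 6.283:.1e} m/s^2")  # ~ 1.1e-10, close to a0
```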

An early test of emergent gravity was provided by weak gravitational lensing. It does remarkably well at predicting the observed lensing signal with no free parameters. This is promising – perhaps we finally have a physical basis for MOND. Indeed, as Milgrom points out, the equivalent success was already known in MOND.

The weak lensing signal for galaxies stacked in several mass bins from Brouwer et al. (2016). The straight line predicted by emergent gravity (EG) is a good fit to the data with no free parameters to adjust.

Weak lensing occurs deep in the MOND regime, at very low accelerations far from the lensing galaxy. In that regard, the results of Brouwer et al. can be seen as an extension of the radial acceleration relation to much lower accelerations. In this limit, it is fair to treat galaxies as point masses – hence the similarity of the solid and dashed lines in the figure above.
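In that point-mass limit the expected behavior is simple. Here is a sketch of the deep-MOND acceleration g ≈ √(gN·a0), which falls as 1/r rather than 1/r²; the mass and radius below are made-up illustrative values, not data from the lensing study.

```python
from math import sqrt

# Illustrative: point-mass acceleration deep in the MOND regime,
# as probed by weak lensing far from a galaxy.
G = 6.674e-11            # m^3 kg^-1 s^-2
a0 = 1.2e-10             # m/s^2
M = 1e11 * 1.989e30      # a 10^11 solar-mass galaxy, in kg (illustrative)
r = 300 * 3.086e19       # 300 kpc, in m (illustrative)

g_newton = G * M / r**2               # ~ 1.5e-13 m/s^2, far below a0
g_deep_mond = sqrt(g_newton * a0)     # ~ 4e-12 m/s^2, the 1/r regime
print(f"g_N = {g_newton:.1e} m/s^2, deep-MOND g = {g_deep_mond:.1e} m/s^2")
```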

For rotation curves, it is not fair to approximate galaxies as point masses. Rotation curves are observed in the midst of the stellar and gaseous mass distribution. One must take account of the detailed distribution of baryons to treat the problem properly. This is something MOND is very successful at.

Emergent gravity converges to the same limit as MOND in the point mass case, which holds for any mass distribution once you get far enough away. It is not identical for finite mass distributions. When one solves the equations of emergent gravity for a finite mass distribution, one gets a term that looks like MOND, which gives the success noted above. But one also gets an additional term that depends on the gradient of the mass distribution, dM/dr.

The additional term that emerges for extended mass distributions in emergent gravity leads to different predictions than MOND. This is good, in that it makes the theories distinguishable. This is bad, in that MOND already provides good fits to rotation curves. Additional terms are likely to mess that up.

And so it does. Two independent studies recently came to this conclusion: one including myself (Lelli et al. 2017) and another by Hees et al. (2017). The studies differ in their approach. We show that the additional term in emergent gravity leads to the prediction of a larger mass discrepancy than in MOND, driving one to abnormally low stellar mass-to-light ratios and degrading the empirical radial acceleration relation. Hees et al. make detailed rotation curve fits, showing that the dM/dr term over-amplifies bumps & wiggles in the rotation curve. It has always been intriguing that MOND gets these right: this is a non-trivial success to reproduce.

Rotation curves predicted for various exponential disks (top row) by Newton (dotted line), MOND (red dashed line), and emergent gravity (solid line). Note that emergent gravity predicts more of a hump at the peak of the rotation curve, leading to the hook in the corresponding radial acceleration relation (lower panels). From Lelli et al. (2017).

The situation looks bad for emergent gravity. One caveat is that at present we only have solutions for emergent gravity in the case of a spherical cow. Conceivably a better treatment of the geometry would change the result, but it won’t eliminate the dM/dr term. So this seems unlikely to help with the fundamental problem: the extra term needs to not be there at all.

Perhaps emergent gravity is a clue to what ultimately is going on – a single step in the right direction. Or perhaps the similarity to MOND is misleading. For now, the search for a satisfactory explanation for the observed phenomenology continues.

One Law to Rule Them All

One Law to rule them all, One Law to guide them,
One Law to form them all and in the dark halo bind them.


Galaxies appear to obey a single universal effective force law.


Early indications of this have been around for some time. It has become particularly clear in our work using near-infrared surface photometry to trace the stellar mass distribution of late type galaxies (SPARC). It takes a while to wrap our heads around the implications.


The observed phenomenology constitutes a new law of nature. One Law to rule all galaxies.


The Astrophysical Journal just published our long and thorough investigation of this issue, eponymously titled One Law to Rule Them All: The Radial Acceleration Relation of Galaxies. It includes this movie showing the build-up of the radial acceleration relation in the data.

Until now, the ubiquitous effective force law had only been clearly demonstrated in rotating galaxies. Federico Lelli and Marcel Pawlowski went to great lengths to also include pressure-supported galaxies, from giant ellipticals to dwarf spheroidals. They appear to follow the same effective force law as rotating galaxies.

The Radial Acceleration Relation defined by rotating late type galaxies (blue points) is also obeyed by early type galaxies, regardless of whether they be fast rotators (orange points) or pressure supported slow rotators (red point) or dark matter dominated dwarf spheroidal satellite galaxies (grey and green points).

This is not a fluke of a few special galaxies. It involves galaxies of all known morphological types spanning an enormous range in mass, size, and surface brightness. I have spent the last twenty years adding new data for all varieties of galaxy types to this relation in the expectation that it would break. Instead it has become stronger and clearer.

Understanding the observed relation is one of the pre-eminent challenges in modern physics. Once we exclude metaphysical nonsense like multiverses, it is arguably the most important unsolved problem. Why does this happen?

The usual ad hoc interpretation of rotation curves in terms of dark matter does nothing to anticipate the observed phenomenology, which is in fact quite troubling from this perspective as it requires excessive fine-tuning. This has been known (if widely ignored) for a while, but doesn’t preclude the more rabid advocates of dark matter from asserting that it all comes about naturally. Let’s not mince words here: claims that the radial acceleration relation occurs naturally with dark matter are pure, unadulterated bullshit fueled by confirmation bias and cognitive dissonance. Perhaps dark matter is the root cause, but there is nothing natural about it.

The natural explanation of a single effective force law is that it is caused by a truly universal force law.

So far, the theory that comes closest to explaining these data is MOND. Milgrom, understandably enough, argues that these data require MOND. He has a valid point. It is a good argument, but does it suffice to overcome the other problems MOND faces? These are not as great as widely portrayed, but they aren’t exactly negligible, either. I tried to look at the problem from both perspectives in this review for the Canadian Journal of Physics. [Being able to see things from both sides is an essential skill if one is to be objective, an important value in science that seems disturbingly absent in its modern practice.]

MOND anticipates an asymptotic slope of 1/2 at low acceleration (gobs ~ gbar^(1/2)). In the figure above, the data for the faintest (“ultrafaint”) dwarf spheroidals show a flattening in the empirical law at low accelerations that is not predicted by MOND. Perhaps the underlying force law is subtly different from pure MOND? On the other hand, weak lensing observations show that the MOND slope extrapolates well to much lower accelerations.
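For the record, here is a sketch of that asymptotic behavior using a fitting function of the form gobs = gbar/(1 − e^(−√(gbar/a0))), which I believe is the form used in the radial acceleration relation papers; treat the numbers as illustrative.

```python
import numpy as np

# Illustrative: the radial acceleration relation fitting function and its
# low-acceleration limit, g_obs -> sqrt(g_bar * a0) (logarithmic slope 1/2).
a0 = 1.2e-10  # m/s^2

def g_obs(g_bar):
    return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / a0)))

for gb in (1e-13, 1e-12, 1e-11, 1e-10, 1e-9):
    print(f"g_bar = {gb:.0e} -> g_obs = {g_obs(gb):.2e}"
          f"  (sqrt(g_bar*a0) = {np.sqrt(gb * a0):.2e})")
```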

It is possible that the data for ultrafaint dwarfs are in some cases misleading. Are these objects in dynamical equilibrium (a prerequisite for analysis)? Are they even dwarf galaxies? Some of the ultrafaints are not clearly distinct objects in the sense of dSph satellites like Crater 2: it is not clear that all of them deserve the status of “dwarf galaxy.” Some are little more than a handful of stars that occupy a similar cell in phase space – perhaps they are fragmentary structures in the Galactic stellar halo? Or the rump end of dissolving satellites? This is anticipated to occur in both ΛCDM and MOND. If so, their velocity dispersions probably tell us more about their disruption history than their gravitational potential, in which case their location in the plot is misleading.

Detailed questions like these are the subject of much current research. For now, let’s take a step back and appreciate the data for what they say, irrespective of the underlying theoretical reason for it. We’re looking at a new law of nature! How cool is that?

Ash nazg durbatulûk, ash nazg gimbatul, ash nazg thrakatulûk, agh burzum-ishi krimpatul.