Virginia, your little friends are wrong. They have been affected by the skepticism of a skeptical age. They do not believe except they see. They think that nothing can be which is not comprehensible by their little minds. All minds, Virginia, whether they be men’s or children’s, are little. In this great universe of ours man is a mere insect, an ant, in his intellect, as compared with the boundless world about him, as measured by the intelligence capable of grasping the whole of truth and knowledge.
Yes, Virginia, there is a Dark Matter. It exists as certainly as squarks and sleptons and Higgsinos exist, and you know that they abound and give to your life its highest beauty and joy. Alas! how dreary would be the world if there were no Dark Matter. It would be as dreary as if there were no supersymmetry. There would be no childlike faith then, no papers, no grants to make tolerable this existence. We should have no enjoyment, except in observation and experiment. The eternal light with which childhood fills the world would be extinguished.
Not believe in Dark Matter! You might as well not believe in Dark Energy! You might get the DOE to hire men to watch in all the underground laboratories to catch Dark Matter, but even if they did not see Dark Matter coming down, what would that prove? Nobody sees Dark Matter, but that is no sign that there is no Dark Matter. The most real things in the world are those that neither children nor men can see. Did you ever see fairies dancing on the lawn? Of course not, but that’s no proof that they are not there. Nobody can conceive or imagine all the wonders there are unseen and unseeable in the world.
You may tear apart the baby’s rattle and see what makes the noise inside, but there is a veil covering the unseen world which not the best experiment, nor even the united efforts of all the keenest experiments ever conducted, could tear apart. Only faith, fancy, poetry, love, romance, can push aside that curtain and view and picture the supernal beauty and glory beyond. Is it all real? Ah, Virginia, in all this world there is nothing else real and abiding.
No Dark Matter! Thank God! It exists, and it exists forever. A thousand years from now, Virginia, nay, ten times ten thousand years from now, it will continue to make glad the coffers of science.
It has been twenty years since we coined the phrase NFW halo to describe the cuspy halos that emerge from dark matter simulations of structure formation. Since that time, observations have persistently contradicted this fundamental prediction of the cold dark matter cosmogony. There have, of course, been some theorists who cling to the false hope that somehow the data are to blame and not a shortcoming of the model.
That this false hope has persisted in some corners for so long is a tribute to the power of ideas over facts and the influence that strident personalities wield over the sort of objective evaluation we allegedly value in science. This history is a bit like this skit by Arsenio Hall. Hall is pestered by someone calling, demanding Thelma. Just substitute “cusps” for “Thelma” and that pretty much sums it up.
All during this time, I have never questioned the results of the simulations. While it is a logical possibility that they screwed something up, I don’t think that is likely. Moreover, it is inappropriate to pour derision on one’s scientific colleagues just because you disagree. Such disagreements are part and parcel of the scientific method. We don’t need to be jerks about it.
But some people are jerks about it. There are some – and merely some, certainly not all – theorists who make a habit of pouring scorn on the data for not showing what they want it to show. And that’s what it really boils down to. They’re so sure that their models are right that any disagreement with data must be the fault of the data.
This has been going on so long that in 1996, George Efstathiou was already making light of it among his colleagues, in the form of the Frenk Principle:
“If the Cold Dark Matter Model does not agree with observations, there must be physical processes, no matter how bizarre or unlikely, that can explain the discrepancy.”
There are even different flavors of the Strong Frenk Principle:
1: “The physical processes must be the most bizarre and unlikely.”
2: “If we are incapable of finding any physical processes to explain the discrepancy between CDM models and observations, then observations are wrong.”
In the late ’90s, blame was frequently placed on beam smearing. The resolution of 21 cm data cubes at that time was typically 13 to 30 arcseconds, which made it challenging to resolve the shape of some rotation curves. Some but not all. Nevertheless, beam smearing became the default excuse to pretend the observations were wrong.
This persisted for a number of years, until we obtained better data – long slit optical spectra with 1 or 2 arcsecond resolution. These data did turn up a few cases where beam smearing had been a legitimate concern. They also confirmed the rotation curves of many other galaxies where it had not been.
So they made up a different systematic error. Beam smearing was no longer an issue, but longslit data only gave a slice along the major axis, not the whole velocity field. So it was imagined that we observers had placed the slits in the wrong place, thereby missing the signature of the cusps.
This was obviously wrong from the start. It boiled down to an assertion that Vera Rubin didn’t know how to measure rotation curves. If that were true, we wouldn’t have dark matter in the first place. The real lesson of this episode was to never underestimate the power of cognitive dissonance. People believed one thing about the data quality when it agreed with their preconceptions (rotation curves prove dark matter!) and another when it didn’t (rotation curves don’t constrain cusps!).
So, back to the telescope. Now we obtained 2D velocity fields at optical resolution (a few arcseconds). When you do this, there is nowhere for a cusp to hide. Such a dense concentration makes a pronounced mark on the velocity field.
To give a real world example (O’Neil et al. 2000; yes, we could already do this in the previous millennium), here is a galaxy with a cusp and one without:
It is easy to see the signature of a cusp in a 2D velocity field. You can’t miss it. It stands out like a sore thumb.
The absence of cusps is typical of dwarf and low surface brightness galaxies. In the vast majority of these, we see approximately solid body rotation, as in UGC 12695. This is incredibly reproducible. See, for example, the case of UGC 4325 (Fig. 3 of Bosma 2004), where six independent observations employing three distinct observational techniques all obtain the same result.
There are cases where we do see a cusp. These are inevitably associated with a dense concentration of stars, like a bulge component. There is no need to invoke dark matter cusps when the luminous matter makes the same prediction. Worse, it becomes ambiguous: you can certainly fit a cuspy halo by reducing the fractional contribution of the stars. But this only succeeds by having the dark matter mimic the light distribution. Maybe such galaxies do have cuspy halos, but the data do not require it.
All this was settled a decade ago. Most of the field has moved on, with many theorists trying to simulate the effects of baryonic feedback. An emerging consensus is that such feedback can transform cusps into cores on scales that matter to real galaxies. The problem then moves to finding observational tests of feedback: does it work in the real universe as it must do in the simulations in order to get the “right” result?
Not everyone has kept up with the times. A recent preprint tries to spin the story that non-circular motions make it hard to obtain the true circular velocity curve, and therefore we can still get away with cusps. Like all good misinformation, there is a grain of truth to this. It can indeed be challenging to get the precisely correct 1D rotation curve V(R) in a way that properly accounts for non-circular motions. Challenging but not impossible. Some of the most intense arguments I’ve had have been over how to do this right. But these were arguments among perfectionists about details. We agreed on the basic result.
High quality data paint a clear and compelling picture. The data show an incredible amount of order in the form of Renzo’s rule, the Baryonic Tully-Fisher relation, and the Radial Acceleration Relation. Such order cannot emerge from a series of systematic errors. Models that fail to reproduce these observed relations can be immediately dismissed as incorrect.
The high degree of order in the data has been known for decades, and yet many modeling papers simply ignore these inconvenient facts. Perhaps the authors of such papers are simply unaware of them. Worse, some seem to be fooling themselves through the liberal application of the Frenk Principle. This places a notional belief system (dark matter halos must have cusps) above observational reality. This attitude has more in common with religious faith than with the scientific method.
The week of June 5, 2017, we held a workshop on dwarf galaxies and the dark matter problem. The workshop was attended by many leaders in the field – giants of dwarf galaxy research. It was held on the campus of Case Western Reserve University and supported by the John Templeton Foundation. It resulted in many fascinating discussions which I can’t possibly begin to share in full here, but I’ll say a few words.
Dwarf galaxies are among the most dark matter dominated objects in the universe. Or, stated more properly, they exhibit the largest mass discrepancies. This makes them great places to test theories of dark matter and modified gravity. By the end, we had come up with a few important tests for both ΛCDM and MOND. A few of these we managed to put on a white board. These are hardly a complete list, but provide a basis for discussion.
UFDs in field: Over the past few years, a number of extremely tiny dwarf galaxies have been identified as satellites of the Milky Way galaxy. These “ultrafaint dwarfs” are vaguely defined as being fainter than 100,000 solar luminosities, with the smallest examples having only a few hundred stars. This is absurdly small by galactic standards, having the stellar content of individual star clusters within the Milky Way. Indeed, it is not obvious to me that all of the ultrafaint dwarfs deserve to be recognized as dwarf galaxies, as some may merely be fragmentary portions of the Galactic stellar halo composed of stars coincident in phase space. Nevertheless, many may well be stellar systems external to the Milky Way that orbit it as dwarf satellites.
That multitudes of minuscule dark matter halos exist is a fundamental prediction of the ΛCDM cosmogony. These should often contain ultrafaint dwarf galaxies, and not only as satellites of giant galaxies like the Milky Way. Indeed, one expects to see many ultrafaints in the “field” beyond the orbital vicinity of the Milky Way where we have found them so far. These are predicted to exist in great numbers, and contain uniformly old stars. The “old stars” portion of the prediction stems from the reionization of the universe impeding star formation in the smallest dark matter halos. Upcoming surveys like LSST should provide a test of this prediction.
From an empirical perspective, I do expect that we will continue to discover galaxies of ever lower luminosity and surface brightness. In the field, I expect that these will be predominantly gas rich dwarfs like Leo P rather than gas-free, old stellar systems like the satellite ultrafaints. My expectation is an extrapolation of past experience, not a theory-specific prediction.
No Large Cores: Many of the simulators present at the workshop showed that if the energy released by supernovae was well directed, it could reshape the steep (‘cuspy’) interior density profiles of dark matter halos into something more like the shallow (‘cored’) interiors that are favored by data. I highlight the if because I remain skeptical that supernova energy couples as strongly as required and assumed (basically 100%). Even assuming favorable feedback, there seemed to be broad (if not unanimous) consensus among the simulators present that at sufficiently low masses, not enough stars would form to produce the requisite energy. Consequently, low mass halos should not have shallow cores, but instead retain their primordial density cusps. Hence a clear measurement of a large core in a low mass dwarf galaxy (stellar mass < 1 million solar masses) would be a serious problem. Unfortunately, I’m not sure that we quantified “large,” but something more than a few hundred parsecs should qualify.
Radial Orbit for Crater 2: Several speakers highlighted the importance of the recently discovered dwarf satellite Crater 2. This object has a velocity dispersion that is unexpectedly low in ΛCDM, but was predicted by MOND. The “fix” in ΛCDM is to imagine that Crater 2 has suffered a large amount of tidal stripping by a close passage of the Milky Way. Hence it is predicted to be on a radial orbit (one that basically just plunges in and out). This can be tested by measuring the proper motion of its stars with Hubble Space Telescope, for which there exists a recently approved program.
DM Substructures: As noted above, there must exist numerous low mass dark matter halos in the cold dark matter cosmogony. These may be detected as substructure in the halos of larger galaxies by means of their gravitational lensing even if they do not contain dwarf galaxies. Basically, a lumpy dark matter halo bends light in subtly but detectably different ways from a smooth halo.
No Wide Binaries in UFDs: As a consequence of dynamical friction against the background dark matter, binary stars cannot remain at large separations over a Hubble time: their orbits should decay. In the absence of dark matter, this should not happen (it cannot if there is nowhere for the orbital energy to go, like into dark matter particles). Thus the detection of a population of widely separated binary stars would be problematic. Indeed, Pavel Kroupa argued that the apparent absence of strong dynamical friction already excludes particle dark matter as it is usually imagined.
Short dynamical times/common mergers: This is related to dynamical friction. In the hierarchical cosmogony of cold dark matter, mergers of halos (and the galaxies they contain) must be frequent and rapid. Dark matter halos are dynamically sticky, soaking up the orbital energy and angular momentum between colliding galaxies to allow them to stick and merge. Such mergers should go to completion on fairly short timescales (a mere few hundred million years).
A few distinctive predictions for MOND were also identified.
Tangential Orbit for Crater 2: In contrast to ΛCDM, we expect that the ‘feeble giant’ Crater 2 could not survive a close encounter with the Milky Way. Even at its rather large distance of 120 kpc from the Milky Way, it is so feeble that it is not immune from the external field of its giant host. Consequently, we expect that Crater 2 must be on a more nearly circular orbit, and not on a radial orbit as suggested in ΛCDM. The orbit does not need to be perfectly circular of course, but it should be more tangential than radial.
This provides a nice test that distinguishes between the two theories. Either the orbit of Crater 2 is more radial or more tangential. Bear in mind that Crater 2 already constitutes a problem for ΛCDM. What we’re discussing here is how to close what is basically a loophole whereby we can excuse an otherwise unanticipated result in ΛCDM.
I believe the question mark was added on the white board to permit the logical if unlikely possibility that one could write a MOND theory with an undetectably small EFE.
Position of UFDs on RAR: We chose to avoid making the radial acceleration relation (RAR) a focus of the meeting – there was quite enough to talk about as it was – but it certainly came up. The ultrafaint dwarfs sit “too high” on the RAR, an apparent problem for MOND. Indeed, when I first worked on this subject with Joe Wolf, I initially thought this was a fatal problem for MOND.
My initial thought was wrong. This is not a problem for MOND. The RAR applies to systems in dynamical equilibrium. There is a criterion in MOND to check whether this essential condition may be satisfied. Basically all of the ultrafaints flunk this test. There is no reason to think they are in dynamical equilibrium, so no reason to expect that they should be exactly on the RAR.
Some advocates of ΛCDM seemed to think this was a fudge, a lame excuse morally equivalent to the fudges made in ΛCDM that its critics complain about. This is a false equivalency that reminds me of this cartoon:
The ultrafaints are a handful of the least-well measured galaxies on the RAR. Before we obsess about these, it is necessary to provide a satisfactory explanation for the more numerous, much better measured galaxies that establish the RAR in the first place. MOND does this. ΛCDM does not. Holding one theory to account for the least reliable of measurements before holding another to account for everything up to that point is like, well, like the cartoon… I could put an NGC number to each of the lines Bugs draws in the sand.
Long dynamical times/less common mergers: Unlike ΛCDM, dynamical friction should be relatively ineffective in MOND. It lacks the large halos of dark matter that act as invisible catchers’ mitts to make galaxies stick and merge. Personally, I do not think this is a great test, because we are a long way from understanding dynamical friction in MOND.
These are just a few of the topics discussed at the workshop, and all of those are only a few of the issues that matter to the bigger picture. While the workshop was great in every respect, perhaps the best thing was that it got people from different fields/camps/perspectives talking. That is progress.
I am grateful for progress, but I must confess that to me it feels excruciatingly slow. Models of galaxy formation in the context of ΛCDM have made credible steps forward in addressing some of the phenomenological issues that concern me. Yet they still seem to me to be very far from where they need to be. In particular, there seems to be no engagement with the fundamental question I have posed here before, and that I posed at the beginning of the workshop: Why does MOND get any predictions right?
A research programme is said to be progressing as long as its theoretical growth anticipates its empirical growth, that is as long as it keeps predicting novel facts with some success (“progressive problemshift”); it is stagnating if its theoretical growth lags behind its empirical growth, that is as long as it gives only post-hoc explanations either of chance discoveries or of facts anticipated by, and discovered in, a rival programme (“degenerating problemshift”) (Lakatos, 1971, pp. 104–105).
The recent history of modern cosmology is rife with post-hoc explanations of unanticipated facts. The cusp-core problem and the missing satellites problem are prominent examples. These are explained after the fact by invoking feedback, a vague catch-all that many people agree solves these problems even though none of them agree on how it actually works.
There are plenty of other problems. To name just a few: satellite planes (unanticipated correlations in phase space), the emptiness of voids, and the early formation of structure (see section 4 of Famaey & McGaugh for a longer list and section 6 of Silk & Mamon for a positive spin on our list). Each problem is dealt with in a piecemeal fashion, often by invoking solutions that contradict each other while buggering the principle of parsimony.
It goes like this. A new observation is made that does not align with the concordance cosmology. Hands are wrung. Debate is had. Serious concern is expressed. A solution is put forward. Sometimes it is reasonable, sometimes it is not. In either case it is rapidly accepted so long as it saves the paradigm and prevents the need for serious thought. (“Oh, feedback does that.”) The observation is no longer considered a problem through familiarity and exhaustion of patience with the debate, regardless of how [un]satisfactory the proffered solution is. The details of the solution are generally forgotten (if ever learned). When the next problem appears the process repeats, with the new solution often contradicting the now-forgotten solution to the previous problem.
This has been going on for so long that many junior scientists now seem to think this is how science is supposed to work. It is all they’ve experienced. And despite our claims to be interested in fundamental issues, most of us are impatient with re-examining issues that were thought to be settled. All it takes is one bold assertion that everything is OK, and the problem is perceived to be solved whether it actually is or not.
That is the process we apply to little problems. The Big Problems remain the post hoc elements of dark matter and dark energy. These are things we made up to explain unanticipated phenomena. That we need to invoke them immediately casts the paradigm into what Lakatos called degenerating problemshift. Once we’re there, it is hard to see how to get out, given our propensity to overindulge in the honey that is the infinity of free parameters in dark matter models.
Note that there is another aspect to what Lakatos said about facts anticipated by, and discovered in, a rival programme. Two examples spring immediately to mind: the Baryonic Tully-Fisher Relation and the Radial Acceleration Relation. These are predictions of MOND that were unanticipated in the conventional dark matter picture. Perhaps we can come up with post hoc explanations for them, but that is exactly what Lakatos would describe as degenerating problemshift. The rival programme beat us to it.
In my experience, this is a good description of what is going on. The field of dark matter has stagnated. Experimenters look harder and harder for the same thing, repeating the same experiments in hope of a different result. Theorists turn knobs on elaborate models, gifting themselves new free parameters every time they get stuck.
On the flip side, MOND keeps predicting novel facts with some success, so it remains in the stage of progressive problemshift. Unfortunately, MOND remains incomplete as a theory, and doesn’t address many basic issues in cosmology. This is a different kind of unsatisfactory.
In the mean time, I’m still waiting to hear a satisfactory answer to the question I’ve been posing for over two decades now. Why does MOND get any predictions right? It has had many a priori predictions come true. Why does this happen? It shouldn’t. Ever.
It has been proposal season for the Hubble Space Telescope, so many astronomers have been busy with that. I am no exception. Talking to others, it is clear that there remain many more excellent Hubble projects than available observing time.
So I haven’t written here for a bit, and I have other tasks to get on with. I did get requests for a report on the last conference I went to, Beyond WIMPs: from Theory to Detection. They have posted video from the talks, so anyone who is interested may watch.
I think this is the worst talk I’ve given in 20 years. Maybe more. Made the classic mistake of trying to give the talk the organizers asked for rather than the one I wanted to give. Conference organizers mean well, but they usually only have a vague idea of what they imagine you’ll say. You should always ignore that and say what you think is important.
When speaking or writing, there are three rules: audience, audience, audience. I was unclear what the audience would be when I wrote the talk, and it turns out there were at least four identifiably distinct audiences in attendance. There were skeptics – particle physicists who were concerned with the state of their field and that of cosmology, there were the faithful – particle physicists who were not in the least concerned about this state of affairs, there were the innocent – grad students with little to no background in astronomy, and there were experts – astroparticle physicists who have a deep but rather narrow knowledge of relevant astronomical data. I don’t think it would have been possible to address the assigned topic (a “Critical Examination of the Existence of Dark Matter”) in a way that satisfied all of these distinct audiences, and certainly not in the time allotted (or even in an entire semester).
It is tempting to give an interruption-by-interruption breakdown of the sociology, but you may judge that for yourselves. The one thing I got right was what I said at the outset: Attitude Matters. You can see that on display throughout.
In science as in all matters, if you come to a problem sure that you already know the answer, you will leave with that conviction. No data nor argument will shake your faith. Only you can open your own mind.
In 1984, I heard Hans Bethe give a talk in which he suggested the dark matter might be neutrinos. This sounded outlandish – from what I had just been taught about the Standard Model, neutrinos were massless. Worse, I had been given the clear impression that it would screw everything up if they did have mass. This was the pervasive attitude, even though the solar neutrino problem was known at the time. This did not compute! So, many of us were inclined to ignore it. But, I thought, in the unlikely event it turned out that neutrinos did have mass, surely that would be the answer to the dark matter problem.
Flash forward a few decades, and sure enough, neutrinos do have mass. Oscillations between flavors of neutrinos have been observed in both solar and atmospheric neutrinos. This implies non-zero mass eigenstates. We don’t yet know the absolute value of the neutrino mass, but the oscillations do constrain the separation between mass states (Δm²₂₁ = 7.53×10⁻⁵ eV² for solar neutrinos, and Δm²₃₁ = 2.44×10⁻³ eV² for atmospheric neutrinos).
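These splittings set a floor on how heavy the neutrinos must be in total. A back-of-the-envelope sketch, assuming a normal mass ordering with the lightest state taken to be essentially massless (an assumption, not a measurement):

```python
import math

# Measured mass-squared splittings (eV^2), as quoted above
dm2_21 = 7.53e-5   # solar
dm2_31 = 2.44e-3   # atmospheric (treating Delta m^2_31 ~ Delta m^2_32)

# Normal ordering with the lightest eigenstate m1 ~ 0:
m1 = 0.0
m2 = math.sqrt(dm2_21)   # ~ 0.0087 eV
m3 = math.sqrt(dm2_31)   # ~ 0.049 eV

# Minimum possible sum of the three mass eigenstates
sum_mnu_min = m1 + m2 + m3   # ~ 0.06 eV
```

So whatever the absolute mass scale turns out to be, the sum of the neutrino masses cannot be less than about 0.06 eV.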
Though the absolute values of the neutrino mass eigenstates are not yet known, there are upper limits. These don’t allow enough mass to explain the cosmological missing mass problem. The relic density of neutrinos is
Ωνh² = ∑mν/(93.5 eV)
In order to make up the dark matter density (Ω ≈ 1/4), we need ∑mν ≈ 12 eV. The experimental upper limit on the electron neutrino mass is mν < 2 eV. There are three neutrino mass eigenstates, and the difference in mass between them is tiny, so ∑mν < 6 eV. Neutrinos could conceivably add up to more mass than baryons, but they cannot add up to be the dark matter.
In recent years, I have started to hear the assertion that we have already detected dark matter, with neutrinos given as the example. They are particles with mass that only interact with us through the weak nuclear force and gravity. In this respect, they are like WIMPs.
Here the equivalence ends. Neutrinos are Standard Model particles that have been known for decades. WIMPs are hypothetical particles that reside in a hypothetical supersymmetric sector beyond the Standard Model. Conflating the two to imply that WIMPs are just as natural as neutrinos is a false equivalency.
That said, massive neutrinos might be one of the few ways in which hierarchical cosmogony, as we currently understand it, is falsifiable. Whatever the dark matter is, we need it to be dynamically cold. This property is necessary for it to clump into dark matter halos that seed galaxy formation. Too much hot (relativistic) dark matter (neutrinos) suppresses structure formation. A nascent dark matter halo is nary a speed bump to a neutrino moving near the speed of light: if those fast neutrinos carry too much mass, they erase structure before it can form.
One of the great successes of ΛCDM is its explanation of structure formation: the growth of large scale structure from the small fluctuations in the density field at early times. This is usually quantified by the power spectrum – in the CMB at z > 1000 and from the spatial distribution of galaxies at z = 0. This all works well provided the dominant dark mass is dynamically cold, and there isn’t too much hot dark matter fighting it.
How much is too much? The power spectrum puts strong limits on the amount of hot dark matter that is tolerable. The upper limit is ∑mν < 0.12 eV. This is an order of magnitude stronger than direct experimental constraints.
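One way to see why the power spectrum is so constraining is the standard linear-theory rule of thumb that free-streaming neutrinos suppress small-scale power by roughly ΔP/P ≈ −8 Ων/Ωm. A sketch using that approximation and an assumed round-number matter density (both are assumptions for illustration, not values from the text):

```python
omega_m_h2 = 0.14   # assumed total matter density, Omega_m h^2

def suppression(sum_mnu_eV):
    """Approximate small-scale power suppression from hot neutrinos.

    Uses the linear-theory rule of thumb DeltaP/P ~ -8 * Omega_nu / Omega_m.
    """
    omega_nu_h2 = sum_mnu_eV / 93.5   # relic density from the formula above
    f_nu = omega_nu_h2 / omega_m_h2   # neutrino fraction of the matter density
    return -8.0 * f_nu

# Even at the quoted cosmological limit of 0.12 eV, the suppression is ~7%
print(suppression(0.12))   # ~ -0.073
```

A few-percent distortion of the power spectrum is readily measurable, which is why the cosmological bound undercuts the laboratory one.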
Usually, it is assumed that the experimental limit will eventually come down to the structure formation limit. That does seem likely, but it is also conceivable that the neutrino mass has some intermediate value, say mν ≈ 1 eV. Such a result, were it to be obtained experimentally, would falsify the current CDM cosmogony.
Such a result seems unlikely, of course. Shooting for a narrow window such as the gap between the current cosmological and experimental limits is like drawing to an inside straight. It can happen, but it is unwise to bet the farm on it.
If experiments measure a neutrino mass in excess of the cosmological limit, it would be powerful motivation to consider MOND-like theories as a driver of structure formation. If instead the neutrino does prove to be tiny, ΛCDM will have survived another test. That wouldn’t falsify MOND (or really have any bearing on it), but it would remove one potential “out” for the galaxy cluster problem.
Tiny though they be, neutrinos got mass! And it matters!
David Merritt recently published the article “Cosmology and convention” in Studies in History and Philosophy of Science. This article is remarkable in many respects. For starters, it is rare that a practicing scientist reads a paper on the philosophy of science, much less publishes one in a philosophy journal.
I was initially loathe to start reading this article, frankly for fear of boredom: me reading about cosmology and the philosophy of science is like coals to Newcastle. I could not have been more wrong. It is a genuine page turner that should be read by everyone interested in cosmology.
I have struggled for a long time with whether dark matter constitutes a falsifiable scientific hypothesis. It straddles the border: specific dark matter candidates (e.g., WIMPs) are confirmable – a laboratory detection is both possible and plausible – but the concept of dark matter can never be excluded. If we fail to find WIMPs in the range of mass-cross section parameter space where we expected them, we can change the prediction. This moving of the goal posts has already happened repeatedly.
I do not find it encouraging that the goal posts keep moving. This raises the question, how far can we go? Arbitrarily low cross-sections can be extracted from theory if we work at it hard enough. How hard should we work? That is, what criteria do we set whereby we decide the WIMP hypothesis is mistaken?
There has to be some criterion by which we would consider the WIMP hypothesis to be falsified. Without such a criterion, it does not satisfy the strictest definition of a scientific hypothesis. If at some point we fail to find WIMPs and are dissatisfied with the theoretical fine-tuning required to keep them hidden, we are free to invent some other dark matter candidate. No WIMPs? Must be axions. Not axions? Would you believe light dark matter? [Worst. Name. Ever.] And so on, ad infinitum. The concept of dark matter is not falsifiable, even if specific dark matter candidates are subject to being made to seem very unlikely (e.g., brown dwarfs).
Faced with this situation, we can consult the philosophy of science. Merritt discusses how many of the essential tenets of modern cosmology follow from what Popper would term “conventionalist stratagems” – ways to dodge serious consideration that a treasured theory is threatened. I find this a compelling terminology, as it formalizes an attitude I have witnessed among scientists, especially cosmologists, many times. It was put more colloquially by J.K. Galbraith:
“Faced with the choice between changing one’s mind and proving that there is no need to do so, almost everybody gets busy on the proof.”
Boiled down (Keuth 2005), the conventionalist stratagems Popper identifies are:
ad hoc hypotheses
modification of ostensive definitions
doubting the reliability of the experimenter
doubting the acumen of the theorist
These are stratagems to be avoided according to Popper. At the least they are pitfalls to be aware of, but as Merritt discusses, modern cosmology has marched down exactly this path, doing each of these in turn.
The ad hoc hypotheses of ΛCDM are of course Λ and CDM. Faced with the observation of a metric that cannot be reconciled with the prior expectation of a decelerating expansion rate, we re-invoke Einstein’s greatest blunder, Λ. We even generalize the notion and give it a fancy new name, dark energy, which has the convenient property that it can fit any observed set of monotonic distance-redshift pairs. Faced with an excess of gravitational attraction over what can be explained by normal matter, we invoke non-baryonic dark matter: some novel form of mass that has no place in the standard model of particle physics, has yet to show any hint of itself in the laboratory, and cannot be decisively excluded by experiment.
We didn’t accept these ad hoc add-ons easily or overnight. Persuasive astronomical evidence drove us there, but all these data really show is that something dire is wrong: General Relativity plus known standard model particles cannot explain the universe. Λ and CDM are more a first guess than a final answer. They’ve been around long enough that they have become familiar, almost beyond doubt. Nevertheless, they remain unproven ad hoc hypotheses.
The sentiment that is often asserted is that cosmology works so well that dark matter and dark energy must exist. But a more conservative statement would be that our present understanding of cosmology is correct if and only if these dark entities exist. The onus is on us to detect dark matter particles in the laboratory.
That’s just the first conventionalist stratagem. I could give many examples of violations of the other three, just from my own experience. That would make for a very long post indeed.
Instead, you should go read Merritt’s paper. There are too many things there to discuss, at least in a single post. You’re best going to the source. Be prepared for some cognitive dissonance.