There is a tendency when teaching science to oversimplify its history for the sake of getting on with the science. Knowing how the science came to be isn’t necessary to learn it. But to do science requires a proper understanding of the process by which it came to be.

The story taught to cosmology students seems to have become: we didn’t believe in the cosmological constant (Λ), then in 1998 the Type Ia supernovae (SN) monitoring campaigns detected accelerated expansion, then all of a sudden we did believe in Λ. The actual history was, of course, rather more involved – to the point where this oversimplification verges on disingenuous. There were many observational indications of Λ that were essential in paving the way.

Modern cosmology starts in the early 20th century with the recognition that the universe should be expanding or contracting – a theoretical inevitability of General Relativity that Einstein initially tried to dodge by inventing the cosmological constant – and is expanding in fact, as observationally established by Hubble and Slipher and many others since. The Big Bang was largely considered settled truth after the discovery of the existence of the cosmic microwave background (CMB) in 1964.

The CMB held a puzzle, as it quickly was shown to be too smooth. The early universe was both isotropic and homogeneous. Too homogeneous. We couldn’t detect the density variations that could grow into galaxies and other immense structures. Though such density variations are now well measured as temperature fluctuations that are statistically well described by the acoustic power spectrum, the starting point was that these fluctuations were a disappointing no-show. We should have been able to see them much sooner, unless something really weird was going on…

That something weird was non-baryonic cold dark matter (CDM). For structure to grow, it needed the helping hand of the gravity of some unseen substance. Normal matter did not suffice. The most elegant cosmology, the Einstein-de Sitter universe, had a mass density Ωm = 1. But the measured abundances of the light elements were only consistent with the calculations of big bang nucleosynthesis if normal matter amounted to only 5% of Ωm = 1. This, plus the need to grow structure, led to the weird but seemingly unavoidable inference that the universe must be full of invisible dark matter. This dark matter needed to be some slow-moving, massive particle that neither interacts with light nor resides within the menagerie of particles present in the Standard Model of Particle Physics.

CDM and early universe Inflation were established in the 1980s. Inflation gave a mechanism that drove the mass density to exactly one (elegant!), and CDM gave us hope for enough mass to get to that value. Together, they gave us the Standard CDM (SCDM) paradigm with Ωm = 1.000 and H0 = 50 km/s/Mpc.

[Image: Elrond, from The Lord of the Rings]
I was there when SCDM failed.

It is hard to overstate the fervor with which the SCDM paradigm was believed. Inflation required that the mass density be exactly one; Ωm < 1 was inconceivable. For an Einstein-de Sitter universe to be old enough to contain the oldest stars, the Hubble constant had to be the lower of the two (50 or 100) commonly discussed at that time. That meant that H0 > 50 was Right Out. We didn’t even discuss Λ. Λ was Unmentionable. Unclean.

SCDM was Known, Khaleesi.

[Image: SCDM is Right Out]

Λ had attained unmentionable status in part because of its origin as Einstein’s greatest blunder, and in part through its association with the debunked Steady State model. But serious mention of it creeps back into the literature by 1990. The first time I personally heard Λ mentioned as a serious scientific possibility was by Yoshii at a conference in 1993. Yoshii based his argument on a classic cosmological test, N(m) – the number of galaxies as a function of how faint they appeared. The deeper you look, the more you see, in a way that depends on the intrinsic luminosity of galaxies, and how they fill space. Look deep enough, and you begin to trace the geometry of the cosmos.

At this time, one of the serious problems confronting the field was the faint blue galaxies problem. There were so many faint galaxies on the sky, it was incredibly difficult to explain them all. Yoshii made a simple argument. To get so many galaxies, we needed a big volume. The only way to do that in the context of the Robertson-Walker metric that describes the geometry of the universe is if we have a large cosmological constant, Λ. He was arguing for ΛCDM five years before the SN results.

[Image: Gold Hat, portrayed by Alfonso Bedoya]
Lambda? We don’t need no stinking Lambda!

Yoshii was shouted down. NO! Galaxies evolve! We don’t need no stinking Λ! In retrospect, Yoshii & Peterson (1995) looks like a good detection of Λ. Perhaps Yoshii & Peterson also deserve a Nobel prize?

Indeed, there were many hints that Λ (or at least low Ωm) was needed, e.g., the baryon catastrophe in clusters, the power spectrum of IRAS galaxies, the early appearance of bound structures, the statistics of gravitational lenses, and so on. Certainly by the mid-90s it was clear that we were not going to make it to Ωm = 1. Inflation was threatened – it requires Ωm = 1 – or at least a flat geometry: Ωm + ΩΛ = 1.

SCDM was in crisis.

A very influential 1995 paper by Ostriker & Steinhardt did a lot to launch ΛCDM. I was impressed by the breadth of data Ostriker & Steinhardt discussed, all of which demanded low Ωm. I thought the case for Λ was less compelling, as it hinged on the age problem in a way that might also have been solved, at that time, by simply having an open universe (low Ωm with no Λ). This would ruin Inflation, but I wasn’t bothered by that. I expect they were. Regardless, they definitely made that case for ΛCDM three years before the supernovae results. Their arguments were accepted by almost everyone who was paying attention, including myself. I heard Ostriker give a talk around this time during which he was asked “what cosmology are you assuming?” to which he replied “the right one.” Called the “concordance” cosmology by Ostriker & Steinhardt, ΛCDM had already achieved the status of most-favored cosmology by the mid-90s.

[Figure: constraints in the Ωm–H0 plane]
A simplified version of the diagram of Ostriker & Steinhardt (1995) illustrating just a few of the constraints they discussed. Direct measurements of the expansion rate, mass density, and ages of the oldest stars excluded SCDM, instead converging on a narrow window – what we now call ΛCDM.

Ostriker & Steinhardt neglected to mention an important prediction of Λ: not only should the universe expand, but that expansion rate should accelerate! In 1995, that sounded completely absurd. People had looked for such an effect, and claimed not to see it. So I wrote a brief note pointing out the predicted acceleration of the expansion rate. I meant it in a bad way: how crazy would it be if the expansion of the universe was accelerating?! This was an obvious and inevitable consequence of ΛCDM that was largely being swept under the rug at that time.

I meant: surely we could live with Ωm < 1 but no Λ. Can’t we all just get along? Not really, as it turned out. I remember Mike Turner pushing the SN people very hard in Aspen in 1997 to Admit Λ. He had an obvious bias: as an Inflationary cosmologist, he had spent the previous decade castigating observers for repeatedly finding Ωm < 1. That’s too little mass, you fools! Inflation demands Ωm = 1.000! Look harder!

By 1997, Turner had, like many cosmologists, finally wrapped his head around the fact that we weren’t going to find enough mass for Ωm = 1. This was a huge problem for Inflation. The only possible solution, albeit an ugly one, was if Λ made up the difference. So there he was at Aspen, pressuring the people who observed supernovae to Admit Λ. One, in particular, was Richard Ellis, a great and accomplished astronomer who had led the charge in shouting down Yoshii. They didn’t yet have enough data to Admit Λ. Not. Yet.

By 1998, there were many more high redshift SNIa. Enough to see Λ. This time, after the long series of results only partially described above, we were intellectually prepared to accept it – unlike in 1993. Had the SN experiments been conducted five years earlier, and obtained exactly the same result, they would not have been awarded the Nobel prize. They would instead have been dismissed as a trick of astrophysics: the universe evolves, metallicity was lower at earlier times, that made SN then different from now, they evolve and so cannot be used as standard candles. This sounds silly now, as we’ve figured out how to calibrate for intrinsic variations in the luminosities of Type Ia SN, but that is absolutely how we would have reacted in 1993, and no amount of improvements in the method would have convinced us. This is exactly what we did with faint galaxy counts: galaxies evolve; you can’t hope to understand that well enough to constrain cosmology. Do you ever hear them cited as evidence for Λ?

Great as the supernovae experiments to measure the metric genuinely were, they were not a discovery so much as a confirmation of what cosmologists had already decided to believe. There was no singular discovery that changed the way we all thought. There was a steady drip, drip, drip of results pointing towards Λ all through the ’90s – the age problem in which the oldest stars appeared to be older than the universe in which they reside, the early appearance of massive clusters and galaxies, the power spectrum of galaxies from redshift surveys that preceded Sloan, the statistics of gravitational lenses, and the repeated measurement of 1/4 < Ωm < 1/3 in a large variety of independent ways – just to name a few. By the mid-90’s, SCDM was dead. We just refused to bury it until we could accept ΛCDM as a replacement. That was what the Type Ia SN results really provided: a fresh and dramatic reason to accept the accelerated expansion that we’d already come to terms with privately but had kept hidden in the closet.

Note that the acoustic power spectrum of temperature fluctuations in the cosmic microwave background (as opposed to the mere existence of the highly uniform CMB) plays no role in this history. That’s because temperature fluctuations hadn’t yet been measured beyond their rudimentary detection by COBE. COBE demonstrated that temperature fluctuations did indeed exist (finally!) as they must, but precious little beyond that. Eventually, after the settling of much dust, COBE was recognized as one of many reasons why Ωm ≠ 1, but it was neither the most clear nor most convincing reason at that time. Now, in the 21st century, the acoustic power spectrum provides a great way to constrain what all the parameters of ΛCDM have to be, but it was a bit player in its development. The water there was carried by traditional observational cosmology using general purpose optical telescopes in a great variety of different ways, combined with a deep astrophysical understanding of how stars, galaxies, quasars and the whole menagerie of objects found in the sky work. All the vast knowledge incorporated in textbooks like those by Harrison, by Peebles, and by Peacock – knowledge that often seems to be lacking in scientists trained in the post-WMAP era.

Despite being a late arrival, the CMB power spectrum measured in 2000 by Boomerang and 2003 by WMAP did one important new thing to corroborate the ΛCDM picture. The supernovae data didn’t detect accelerated expansion so much as exclude the deceleration we had nominally expected. The data were also roughly consistent with a coasting universe (neither accelerating nor decelerating); the case for acceleration only became clear when we assumed that the geometry of the universe was flat (Ωm + ΩΛ = 1). That didn’t have to work out, so it was a great success of the paradigm when the location of the first peak of the power spectrum appeared in exactly the right place for a flat FLRW geometry.
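
To make that concrete, here is a minimal sketch in Python (my own toy numbers and a made-up helper, solve_flat, not the actual supernova likelihood). To leading order the SN Hubble diagram constrains the deceleration parameter q0 = Ωm/2 − ΩΛ, while flatness supplies the second equation:

    # Toy sketch: combine an SN-like constraint on q0 with flatness.
    # For matter plus Lambda (w = -1): q0 = Omega_m/2 - Omega_Lambda,
    # and a flat geometry requires Omega_m + Omega_Lambda = 1.
    def solve_flat(q0):
        omega_m = 2.0 * (1.0 + q0) / 3.0
        return omega_m, 1.0 - omega_m

    for q0 in (-0.4, -0.55):  # illustrative values only; q0 < 0 means acceleration
        om, ol = solve_flat(q0)
        print(f"q0 = {q0:+.2f}  ->  Omega_m = {om:.2f}, Omega_Lambda = {ol:.2f}")
    # Once flatness is imposed, a measured q0 < 0 translates directly into a
    # dominant Omega_Lambda, rather than leaving room for a coasting universe.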

The consistency of these data has given ΛCDM an air of invincibility among cosmologists. But a modern reconstruction of the Ostriker & Steinhardt diagram leaves no room to spare – hence the tension between H0 = 73 km/s/Mpc measured directly and H0 = 67 km/s/Mpc from multiparameter CMB fits.

[Figure: constraints in the Ωm–H0 plane, with CMB constraints overplotted]
Constraints from the acoustic power spectrum of the CMB overplotted on the direct measurements from the plot above. Initially in good agreement with those measurements, the best-fit CMB values have steadily wandered away from the most-favored region of parameter space that established ΛCDM in the first place. This is most apparent in the tension with H0.

In cosmology, we are accustomed to having to find our way through apparently conflicting data. The difference between an expansion rate of 67 and 73 seems trivial given that the field was long riven – in living memory – by the dispute between 50 and 100. This gives rise to the expectation that the current difference is just a matter of some subtle systematic error somewhere. That may well be correct. But it is also conceivable that FLRW is inadequate to describe the universe, and we have been driven to the objectively bizarre parameters of ΛCDM because it happens to be the best approximation that can be obtained to what is really going on when we insist on approximating it with FLRW.

Though a logical possibility, that last sentence will likely drive many cosmologists to reach for their torches and pitchforks. Before killing the messenger, we should remember that we once endowed SCDM with the same absolute certainty we now attribute to ΛCDM. I was there, 3,000 internet years ago, when SCDM failed. There is nothing so sacred in ΛCDM that it can’t suffer the same fate, as has every single cosmology ever devised by humanity.

Today, we still lack definitive knowledge of either dark matter or dark energy. These add up to 95% of the mass-energy of the universe according to ΛCDM. These dark materials must exist.

It is Known, Khaleesi.


82 thoughts on “A personal recollection of how we learned to stop worrying and love the Lambda”

  1. Being a huge fan of Dr. Strangelove I couldn’t help but love this essay.
    It is really interesting to look behind the curtain and see how these changes in cosmology come about. Not like the old days, when someone had to be burned at the stake, or at least put on house arrest.
    To me this shows that there is always hope for change, and that, in the long run, even if LCDM is nonsense, there will always be a chance to replace it with something less insane. However such a thing will probably seem more insane at first.
    So, a question. I understand that MoND seems to underpredict cluster mass by a factor of two (probably not the right way to word that). Is there a similar numeric for how MoND does with respect to the total Cosmos, i.e. is it off by a factor of 5, or 10, or 100?


  2. “it is also conceivable that FLRW is inadequate to describe the universe, and we have been driven to the objectively bizarre parameters of ΛCDM because it happens to be the best approximation that can be obtained to what is really going on when we insist on approximating it with FLRW.”

    What is your opinion about the attempts made by some theoricians to build inhomogenous cosmologies (in which Lambda may appear as an artefact caused by the averaging of the inhomogeneities at an higher scale)?


  3. A great, interesting and well-written post. I have not been aware of all points mentioned, and indeed there is a strong tendency to ’round off’ the way to current theories, creating the impression of either a straightforward path or an unexpected, ingenious and immediately convincing discovery. Without awareness of the twists and turns leading to our current theories, they appear more conclusive in hindsight and almost inevitable.
    Thus remembering earlier considerations and doubts may help keep an open mind for limitations and slight inconsistencies in the current concepts, which may lead beyond to a next step of understanding.
    And of course it does a little justice to those pushed aside and forgotten only because their insights appeared too early to be accepted.


  4. Really interesting read! What does your gut tell you? Will the tension between the various values of H0 be resolved, or does it point to new physics? Could it be something as “simple” as H0 is increasing with time? How wrong am I when I refer to that as a simple explanation?


  5. Thanks for all your comments. I’ll take them in order best I can, but let me put off discussion of MOND to the end. I have worn many hats during my career, and chose to write this post wearing a standard cosmology hat.
    As yves mentions, some people have tried to explain the appearance of acceleration as the result of inhomogeneities. The universe starts off homogeneous – that’s what we see in the CMB – but is very far from homogeneous now, up to surprisingly large scales. (5 Mpc was an estimate I heard in the ’80s for how big you had to get to see an average sample of the universe. Now that scale is at least 200 Mpc, and I’m not entirely convinced it has converged.) At any rate, the solution of the Friedmann equation that tells us the expansion rate of the universe and how it varies with time is based on the assumption of homogeneity. Once enough structure forms, that no longer holds – the Milky Way is an example of a highly non-linear structure that has decoupled from the Hubble flow and is not itself expanding. So it is conceivable that as the universe clumps up and fragments, some pieces will expand differently than others. If we’re in a big hole, cosmically speaking, then we’d perceive a net acceleration of the expansion relative to the “correct” average. Long story short, I don’t think there is much merit to this idea. It takes more structure, and us in the middle of a bigger hole, than seems reasonable. Living in a big hole was an idea also put forward to explain away the faint blue galaxies problem at one point. In either case, it has to be a really big hole. That’s an oversimplification; part of the problem with this idea is that we think we understand how structure grows in the conventional cosmology, and it doesn’t grow fast enough to do this. So to save ourselves from Lambda in this way, we have to break something else – at least, that’s my reading of this particular idea.
    Intriguingly, Kolb & Turner, who wrote a cosmology textbook together, split over this issue. Both are big champions of Inflation, which predicts that the universe should be geometrically flat. That’s a whole post to describe, but the short version is that the only natural prediction of Inflation is a mass density of one. Not just geometrically flat, but all of Omega in mass, not energy. That’s why they kept telling us observers that we were stupid to measure Omega = 1/4 all through the ’80s. It was inconceivable to them that the mass density could be so close to 1 and not be exactly 1. That’s because Omega evolves. If it is exactly 1, it stays one forever. But even a tiny bit below 1 to begin with, and it should be zero by today. That was a big reason we accepted Inflation in the first place; it gave a compelling reason why Omega was close to one. That’s why it was so hard to accept it was less than one. This is where they split. Turner accepted low Omega but refused to abandon Inflation, so was forced to invoke Lambda to make up the difference in geometry. That works, but it is incredibly fine-tuned – more so than the original problem Inflation solved! So if we were worried about the coincidence of Omega being close to but not equal to unity before, we should be even more worried about it now. People aren’t, for the most part. Kolb is; he seems to believe this is a genuine problem – hence the search for another explanation, like inhomogeneities.
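    To put a number on how unstable Omega = 1 is, here is a minimal sketch in Python (my own illustration; omega_at is just a made-up helper), using the exact matter-only relation that (1/Omega − 1) grows in proportion to the scale factor a:
    def omega_at(a, omega_i, a_i):
        # Matter-dominated evolution: (1/Omega - 1) scales linearly with a.
        deviation = (1.0 / omega_i - 1.0) * (a / a_i)
        return 1.0 / (1.0 + deviation)

    a_i = 1e-4  # roughly the epoch of matter-radiation equality
    for omega_i in (1.0, 0.999, 0.99):
        print(f"Omega = {omega_i:g} at a = {a_i:g}  ->  Omega = {omega_at(1.0, omega_i, a_i):.3f} today")
    # Omega = 1 stays exactly 1, but even a 0.1% shortfall at early times
    # leaves Omega below 0.1 today - the flatness problem in a nutshell.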
    Mike says the SN don’t show accelerated expansion. As I tried to make clear, that interpretation is only clear when one makes some fairly strong assumptions. On their own, the SN data are consistent with a range of possibilities, depending on how you slice the data. About a decade ago, I was shocked to find that the best fit to the 2008 Union SN data was a coasting universe – neither accelerating nor decelerating. I got excited enough to read those papers in detail, thinking I might write a paper about this myself, as others later did. I was just taking all the data; the SN team defined a “gold” subsample of things that they were most sure were indeed Type Ia SN. That subsample returns LCDM. Now, a skeptic might suspect confirmation bias, which is always a worry. But it is also true that lots of things go bang in the dark, and so it is both proper and necessary to define clean subsamples like this. I am not an expert on spectral typing SN, so I do not feel qualified to second guess their classifications and hence their definition of the gold subsample.
    Sandy asks if it is as simple as H0 increasing with time. Yes. That is exactly what we mean by the cosmic expansion rate accelerating. The Friedmann equations tell us how H0 should vary with time. Normally, gravity is a strictly attractive force, never repulsive. So H0 should only decline – the expansion should be retarded by the mutual attraction of everything in the universe. In order to get acceleration, we have to have something that acts, in effect, as antigravity. We achieve this with dark energy by sticking a negative sign in the equation of state (the dark energy exerts a negative pressure). If those things sound unphysical to you, you would not be alone.
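    To illustrate that sign flip, here is a tiny sketch in Python (my own, using the standard acceleration equation a''/a ∝ −Σ ρ_i(1 + 3w_i); accel_sign is just a made-up helper):
    def accel_sign(components):
        # components: list of (density, w); the sign of a'' follows from
        # a''/a = -(4*pi*G/3) * sum(rho_i * (1 + 3*w_i)).
        total = sum(rho * (1.0 + 3.0 * w) for rho, w in components)
        return "accelerating" if total < 0 else "decelerating"

    print("matter only:    ", accel_sign([(1.0, 0.0)]))               # p = 0
    print("matter + Lambda:", accel_sign([(0.3, 0.0), (0.7, -1.0)]))  # p = -rho for Lambda
    # Ordinary matter can only decelerate the expansion; the w = -1 term
    # contributes with the opposite sign, which is the "antigravity" at work.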
    OK, so, MOND. Ron asked about applying it to the whole universe; at a crude level this sorta works out. That is, if you attempt to correct the conventionally measured mass density (0.3 or so), you realize this is overstated by however low the mean acceleration is. For example, if the mean acceleration is 0.1 a0, then Omega = 0.3 is really Omega = 0.03. That’s about right for a universe full of just normal matter. So one can imagine a MOND cosmology in which we live in a low density universe, the coincidence problem goes away (nothing special about Omega = 1), and all the dynamical indications of cosmic scale dark matter are just a chimera of using the wrong equations. But in detail it is much harder. There is no MOND analog of the Friedmann equation; we don’t even know how to pose the problem properly. Even if we did, inhomogeneities probably could not be ignored as we do conventionally. MOND is highly nonlinear. It’ll make structure form fast and furious, after which there is no such thing as a mean cosmic acceleration – what you infer Omega to be will depend on where and how you measure it. A right mess. That might not be a bad description of the universe in which we live, but a right mess is tough to calculate. This is one of my long-standing gripes: what is easy and elegant in standard cosmology is highly non-linear in MOND, and vice-versa (galaxies should be complicated beasts in LCDM, but instead they are simple as they should be in MOND). So where you come down on this depends from the start on how you choose to look at it.
    Another intriguing tidbit is that the rate at which the expansion of the universe is inferred to be accelerating is of the same order as the MOND acceleration constant. That is, a0 ~ sqrt(Lambda) [with factors of the speed of light thrown in to make the units work out]. One might infer something Deep from this numerical equivalence, or one might dismiss it as a mere coincidence.
    In both cases, there is something profound that we do not understand about the universe that kicks in at very low accelerations of order Milgrom’s constant, a0.
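    For anyone who wants to see the numbers behind that coincidence, here is a back-of-the-envelope check (round values of my own choosing, not a derivation):
    import math
    c = 2.998e8                    # speed of light [m/s]
    H0 = 70 * 1000 / 3.086e22      # 70 km/s/Mpc expressed in 1/s
    a0 = 1.2e-10                   # Milgrom's constant [m/s^2]
    Lam = 3 * 0.7 * H0**2 / c**2   # Lambda for Omega_Lambda ~ 0.7 [1/m^2]
    print(f"c^2 sqrt(Lambda) ~ {c**2 * math.sqrt(Lam):.1e} m/s^2")
    print(f"a0               ~ {a0:.1e} m/s^2")
    # Both scales come out around 1e-10 to 1e-9 m/s^2 (a0 is roughly c*H0/2pi),
    # so they agree to within an order of magnitude.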


  6. The cosmological constant is overrated. It is just a free parameter in Einstein’s field equations. A fudge factor you adjust to fit theory and observations. We need another theory to explain why the cosmological constant exists at all. General relativity is incomplete because it doesn’t explain CC. Like the gas laws that were discovered by experiments but scientists didn’t know why these ratios exist until the invention of kinetic molecular theory and statistical mechanics.

    Dark matter and dark energy are elegantly explained in my Darkside Force theory (pun intended). It also explains the observed discrepancy in the Hubble constant. Amazingly the evolution of the universe is related to the fundamental theorem of algebra. A beautiful harmony of mathematical law and physical law.


  7. Any comment on the idea that at very low accelerations, perhaps 3 spatial dimensions are now too many and 1 time dimension is too few? Maybe a metric transformation should become evident at a0, or is this idea something already tried and discarded?


  8. Yes, there were people who were seriously (or at least semi-seriously) suggesting Lambda in 1996 and 1997. I think you are right that those were mostly theorists who “knew” there was no curvature. I think the vast majority of observers thought an open Omega = 0.3 Universe was just fine (as you did), and were not in the slightest interested in adding a new parameter (Lambda) without strong evidence.

    The H0 shaded regions in the figures above are misleading; those are today’s estimates, not ca. 1995. Ostriker and Steinhardt used 70 +/- 15, nearly the entire plot.

    You mentioned some observational papers suggesting Lambda earlier, but did not mention these influential papers: Kochanek (1995) https://arxiv.org/abs/astro-ph/9510077 and Perlmutter+ (1997) https://arxiv.org/abs/astro-ph/9608192 that argued the opposite.

    Some other minor corrections: H0 doesn’t (necessarily) increase in an accelerating Universe. H = a’/a, so in a Lambda-dominated exponential Universe H asymptotes to a constant value. In a coasting Universe (neither accelerating nor decelerating always), H ~ 1/t. To get H to increase with time you need super-acceleration (phantom dark energy with w < -1). Accelerating or decelerating refers to a'' not exactly dH/dt.

    Speaking of the coasting Universe — this comes up repeatedly, but SN by themselves only slightly prefer Om=0.3, Ol=0.7 to, say, Om=0, Ol=0. It was 2-3 sigma in the 1998/1999 papers and has not gotten that much stronger since (just over 4 sigma now). This is just because of the degeneracy in the luminosity distance (if all the SN were at one redshift, the degeneracy would be total). However, if you put any reasonable prior on Om (greater than 0.1 or 0.2 for example, or take flatness from CMB data), then Lambda is hugely favored. See, for example, Rubin and Hayden (2016) https://arxiv.org/abs/1610.08972.

    Mike Smith, your analysis is flawed. The fact that the fits are done in a logarithmic space (magnitude vs redshift) is not problematic; indeed, it is the more natural space for the data. We do not measure distances to supernovae, we measure their fluxes. More accurately we measure flux ratios of supernovae. The fact that these are ratios makes the log space the correct one to use. If you look at low-redshift SN Ia you will find that the scatter in SN fluxes is a proportional one, i.e. the corrected fluxes randomly scatter by ~15 percent from supernova to supernova. It is not a scatter in linear flux or linear distance. You question why the distant SN don't have much larger uncertainties than the nearby ones; two reasons: 1. we use much more powerful telescopes for the faint SN than the bright ones, 2. the scatter is dominated by the SN to SN variation, which as mentioned above is a multiplicative scatter in the flux, not an additive one. There are many other advantages of using magnitudes including the fact that the lambda analysis is independent of the degeneracy between H0 and the absolute brightness of SN Ia. You mention Hubble's original diagram as your ideal but you didn't note that Hubble's constant was 550! This is because he had a scale error on the distance axis. With log(d) that becomes a zeropoint shift that can be absorbed in the calibration. (Of course if you want to measure H0 then you need to do the calibration itself, with something else, like Cepheids).


    1. Yes. I agree with the vast majority of this. Had even cited the same Kochanek paper you mention in a draft. But as I said, I was intentionally trying to keep it brief. Decided not to delve into the debate about what lensing statistics were showing us (No Lambda! Yes Lambda!) as this was just one of many micro debates on the topic at that time.
      It is a good technical point that Lambda becomes strongly favored once one insists on a sizeable mass density. People definitely had an implicit lower limit of Omega_m > 0.2 in their heads. That isn’t at all obvious from the SN alone. That’s why I said they exclude deceleration more than demand acceleration.


    2. Trevor – There are many drawbacks using the mag vs. redshift (pseudo-H-routine) for model testing. First, this routine non-uniformly compresses the dispersion and emission standard errors; errors of distant observations are systematically compressed over errors of more nearby emissions and can exacerbate skewness. The pseudo-H-routine incorrectly emphasises the more imprecise, distant data, (SNe Ia) leading to incorrect regression minima and suspect results that can be analysis artefacts. Second, the best data pair, recession velocity and position of earth or the local group (1,0) without error, cannot be used with the pseudo-H-routine, the redshift becoming -infinity and this exclusion can never be justified, so the estimated parameter values are highly suspicious. Third, because the error values have been depressed, goodness of fit estimates are also depressed complicating model discrimination. Fourth, information from both intercepts are lost so an estimate of the origin is impossible. If the pseudo-H-routine were valid, parameter estimates should be similar between the Hubble relationship and the pseudo-H-routine, but in practice are not. Here and for other examples using SNe Ia data, the two analytic methods do not agree. All of the above problems with the mag vs. redshift regression routine are very serious.

      There are many reasons to perform regression analysis using the Hubble routine (distance vs. expansion factor – the recession velocity) data rather than mag vs. z. First, distance is a physical metric but mag is not and never will be. Second, the true measurement errors of distant objects are used for Hubble-routine and are not dampened, this is very important for running a valid, computerized regression analysis. Third, the very best data pair (1,0) without error, is used and automatically anchors the regression estimates. The position of the earth and recession velocity are the very best anchors for astronomical distance data and estimates of the origin can be directly determined using the Hubble-routine. Fourth, the Hubble-routine allows examination of the relative worth of data, which is important when these are billions of years old. Examples can be found in SNe Ia emission collections which display standard deviations of similar distance as the Universe age, these data may be judged of questionable value. Fifth, the difference between goodness of fit estimates, such as chi^2, are not depressed easing model discrimination as we will show here.

      Finally – the reason why Hubble overestimated the Universe expansion rate was due to imprecise data not improper analytical technique – not nice to write bad things about the dead.


  9. “If we’re in a big hole, cosmically speaking, then we’d perceive a net acceleration of the expansion relative to the “correct” average. Long story short, I don’t think there is much merit to this idea.”

    I wasn’t referring to this (maybe too) obvious idea, but to the work of Thomas Buchert (e.g. https://arxiv.org/abs/gr-qc/9906015, https://arxiv.org/abs/1303.4444v3) or David Wiltshire.

    See also https://cqgplus.com/2016/01/20/the-universe-is-inhomogeneous-does-it-matter:(T Buchert, M Carfora, G F R Ellis, E W Kolb, M A H MacCallum, J J Ostrowski, S Räsänen, B Roukema, L Andersson, A Coley, D Wiltshire):
    “The coincidence that the expansion of the Universe appears to have started accelerating at the same epoch when complex nonlinear structures emerged has led a number of researchers to question some basic assumptions of the standard cosmology. In particular, does the growth of structure on scales smaller than 500 million light years result in an average cosmic evolution significantly different from that of a spatially homogeneous and isotropic Friedmann–Lemaître–Robertson–Walker (FLRW) model? After all, this model keeps spatial curvature uniform everywhere and decouples its evolution from that of matter, which is again not a generic consequence of Einstein’s equations.

    The difference between an averaged generic evolution and an ideal FLRW evolution is usually called backreaction, and is potentially significant for interpreting observations of the actual inhomogeneous Universe—perhaps even to the extent of challenging ideas about cosmic acceleration.”


    1. There are many flavors of this idea. The coincidence problem is real and manifests in several ways. At some point and at some scale, non-linearities should become large enough that the usual approximation should cease to be valid. Back of the envelope, we’re nowhere close to that point, so I think one still has to break something fairly basic – if not the expansion rate, then the growth rate of structure. Pick your poison.


  10. Silly WordPress. It did not organize these replies in the order they were given, and I do not care enough to follow every technical change they make. So the “Yes. I agree…” was in response to Trevor and the “many flavors” was in response to yves’ reply to me.
    Enrico – certainly I share your concern that we have added free parameters at a rate equal to or greater than our need to explain the data. Cosmology had already become ungainly and complicated by the mid-90s. Now there is too much for one human to know, so the default behavior seems to be to accept whatever fudge someone tells you works to explain the thing you already believe in.
    JB – something important is happening around the scale a0. I don’t know what or why that is.


  11. “Another intriguing tidbit is that the rate at which the expansion of the universe is inferred to be accelerating is of the same order as the MOND acceleration constant. That is, a0 ~ sqrt(Lambda) [with factors of the speed of light thrown in to make the units work out].”

    I know positive dark energy is repulsive and acts as negative pressure, but doesn’t dark energy in itself contain energy and therefore gravitate? Since dark energy is itself energy, wouldn’t it act as an attractive force via gravity?


    1. The reason I ask is the relation of MOND and dark energy: couldn’t the self-gravitation of that mass-energy be the explanation for MOND, at least on the scale of a galaxy? The mass-energy of dark energy contained in the volume of a galaxy would have some sort of additional gravitational pull, one that reproduces MOND.


  12. “Modern cosmology starts in the early 20th century with the recognition that the universe should be expanding or contracting – a theoretical inevitability of General Relativity that Einstein initially tried to dodge by inventing the cosmological constant – and is expanding in fact, as observationally established by Hubble and Slipher and many others since.”

    On the contrary, the expanding/contracting universe is antithetical to GR, not a consequence thereof. Such a “universe” arises as a result of misapplying GR to the FLRW metric. That metric essentially attributes to the cosmos a “universal” frame, which GR precludes. The FLRW metric invokes a “universe” which is unstable in the context of GR. It is this misapplication of GR to the FLRW “universe” that is the underlying problem of modern cosmology.

    Either GR is correct and there is no “universal frame” and therefore no “universe” to which GR can be applied, or GR is wrong and cannot be relevant to the FLRW “universe”. FLRW assumes that the cosmos can be treated as a unitary entity, one that GR assumes does not exist. Modern cosmology is thus oxymoronic. The self-contradictory nature of the standard cosmological model is encapsulated in its statement of a non-relativistic, universal simultaneity, “the universe is 13.8 billion years old”.

    If GR is correct then that statement is false; if that statement is correct then GR is falsified. Take your pick, but given the generally incoherent (inexplicable original condition), self-contradictory, and absurd (95% of the matter-energy content of the “universe” is invisible but necessary to reconcile the model with observations) nature of the standard model, jettisoning FLRW would seem a reasonable first step toward the development of a modern scientific cosmology. Currently, all we appear to have is a secular creation myth.

    Of course, dispensing with “the cosmos is a universe” error means tossing the redshift = recessional velocity interpretation that is the other axiomatic pillar of the SMC. This might be problematic but for the fact that a crude redshift-distance relationship can be calculated by simply applying the standard GR gravitational redshift equation to an expanding spherical wavefront of light emitted by a galaxy.

    Doing the calculations at cosmologically significant radial distances, and using a mass term inclusive of all the matter contained within the volume of the expanding sphere at each calculated instance, points to a way forward in which the perceived expansion of the “universe” is more naturally understood as merely a consequence of the expansion of individual galaxy-sourced spherical wavefronts.

    Dispensing with the two foundational assumptions of cosmology immediately eliminates the need for dark energy, dark matter (on the cosmological scale), the big bang singularity and subsequent expansion event, inflation, and expanding spacetime. That seems like a pretty good payoff for ditching two archaic and empirically baseless concepts. The downside is there won’t be a “universe” for theorists to do maths about anymore – but then, they’ll always have string theory.


  13. @budrap

    “Such a “universe” arises as a result of misapplying GR to the FLRW metric. That metric essentially attributes to the cosmos a “universal” frame, which GR precludes. “

    FLRW equations are a solution to GR not a misapplication. Either the solution is correct or wrong. In SR, there is no absolute space and no absolute time but there is absolute spacetime. Spacetime is like a block of rubber. Its length, width and height are not fixed. They are deformable but the volume is constant.

    x’ = x/L
    t’ = t L
    x’ t’ = (x/L) (t L)
    x’ t’ = x t
    where:
    x’ = length contraction, x = length, L = Lorentz factor, t’ = time dilation, t = time, x t = spacetime
    Hence, spacetime is constant despite relativistic effects

    In GR, in addition to deformations, the volume can also change.

    (Xo to) / (X t) = a > 1
    Xo to > X t
    Where:
    X = distance between observer and emitter when light was emitted, Xo = distance between observer and emitter when light was observed, t = time in observer’s frame when light was emitted, to = time in observer’s frame when light was observed, a = scale factor, X t = spacetime
    Hence, spacetime has increased. The scale factor is not due to relativistic effects.

    “The self-contradictory nature of the standard cosmological model is encapsulated in its statement of a non-relativistic, universal simultaneity, “the universe is 13.8 billion years old”

    In SR, when a ruler shrinks, how do you know it shrank? It still reads 12 inches. You need another ruler to measure the shrinkage. 13.8 billion years is our Earth ruler to measure length and time in the universe. It is not absolute time. It is the clock in observer’s frame.

    “a crude redshift-distance relationship can be calculated by simply applying the standard GR gravitational redshift equation to an expanding spherical wavefront of light emitted by a galaxy.”

    Gravitational redshift works when distance of light from gravitating mass is greater than the Schwarzschild radius. Take the mass of ordinary matter plus dark matter. The Schwarzschild radius is greater than the radius of the observable universe (46 billion ly). The whole universe is inside a black hole!

    There’s a way out of this paradox: Dark energy. Cosmologists invented an anti-gravity (a.k.a. dark energy) to escape their spaghettification. But supermassive black holes are at the centers of galaxies. Where’s the anti-gravity? Dark energy disappeared on galactic scale. My Darkside Force theory answers these paradoxes.


  14. “FLRW equations are a solution to GR not a misapplication.”

    This statement is questionable (even if not for the reason given by budrap): are the FLRW equations for a homogeneous and isotropic universe relevant to our recent universe, where the scale of homogeneity is a few hundred Mpc?

    At best they are an approximation, only applicable to a coarse grained model in which the size of the grains is of the order of 1% the size of the observable universe. Inside these grains the local expansion rate varies from 0 (in the virialized structures) to a value greater than the average H0 (in the cosmic voids).
    Are you sure that applying GR to average values of the scalars (a, H, rho, …) gives the same results as averaging exact results of the local GR equations over the above mentioned grains?
    (check https://arxiv.org/abs/gr-qc/9906015)

    If the apparent cosmological (not so-) constant of the standard model is in fact due to the backreaction of the growing inhomogeneities of the matter distribution on the average curvature, it’s no surprise that this “constant” is also growing, and that the value of H0 computed from the observations of the CMB with the (constant Lambda)-CDM model differs from the value obtained through more local and less model-dependent methods.


  15. @ Enrico,

    “FLRW equations are a solution to GR not a misapplication. Either the solution is correct or wrong.”

    But then I specified the FLRW metric, not the equation. If you are confused about this you might want to review this course synopsis: http://www.astro.yale.edu/vdbosch/astro610_lecture2.pdf (scroll down to the section labeled Cosmology in a Nutshell). The metric is not a solution to GR. It is an independent model of the cosmos that assumes the cosmos can be treated as a singular entity, essentially creating a model with a universal frame. GR assumes the absence of such a universal frame.

    Putting the two together produces the Friedmann equations, which are the basis of the standard cosmological model. That model is a hot mess. Even if it is mathematically “correct”, it makes no physical sense. To recapitulate, the SMC is:

    1. Incoherent (inexplicable original condition),

    2. Self-contradictory (GR is incompatible with the FLRW metric), and

    3. Absurd (95% of the matter-energy content of the “universe” is invisible but necessary to reconcile the model with observations)

    “In SR, there is no absolute space and no absolute time but there is absolute spacetime. Spacetime is like a block of rubber. Its length, width and height are not fixed. They are deformable but the volume is constant.”

    There is no scientific evidence that the spacetime of SR is a substantival entity, i.e., has an empirically verifiable physical existence. If you choose to believe in its existence that is your prerogative, but that belief has no scientific basis.

    “13.8 billion years is our Earth ruler to measure length and time in the universe. It is not absolute time. It is the clock in observer’s frame.”

    But if there is no universal time to which the 13.8 GY age can be converted then “the universe is 13.8 billion years old” is an utterly meaningless statement. Either there is a “universe” or there is not. Take your pick, but if you choose to accept that there was a “universal” gestation event, then your position is incompatible with relativity theory.


  16. “There were many observational indications of Λ that were essential
    in paving the way.”

    Indeed. For some, the possibility that Λ was non-zero never went away.
    Some just assumed Λ=0 since it makes things easy to calculate. (These
    days, not only are more analytic solutions known, but also more
    numerical algorithms, and computers are much faster and no longer female.)
    Many people disliked it, of course. In the early 1990s, many people
    pointed out that a positive cosmological constant makes sense when
    interpreting observations.

    The biggest mistake, I think, was made by those who said “well, QFT says it
    should be 120 orders of magnitude larger, but it’s not, so it must be
    exactly zero”, which is wrong on several levels.

    “Inflation gave a mechanism that drove the mass density to exactly
    one”

    Small quibble: not exactly one, but one to very high accuracy, probably
    never observationally distinguishable from one.

    “It is hard to overstate the fervor with which the SCDM paradigm was
    believed. Inflation required that the mass density be exactly one;
    Ωm < 1 was inconceivable. For an Einstein-de Sitter universe to be old
    enough to contain the oldest stars, the Hubble constant had to be the
    lower of the two (50 or 100) commonly discussed at that time. That
    meant that H0 > 50 was Right Out. We didn’t even discuss Λ. Λ was
    Unmentionable. Unclean.”

    This is a good summary of those times. Here’s a comment I posted
    somewhere else:

    Dennis Overbye, in “Lonely Hearts in the Cosmos”, told the story
    about a student looking at Omega < 1 simulations, justifying it by
    namechecking Simon White. The bigwig advisor replied that "Simon White
    never understood inflation" and that the student was committing a Big
    Sin: "thinking like an astronomer instead of like a physicist". The
    bigwig advisor was revealed to be David Schramm. (Ironically, in the
    rightly famous Gott, Gunn, Schramm and Tinsley paper, Schramm had
    previously argued for a low-density universe based on observational
    evidence.)

    Let's face it: most people who believe in inflation do so not because
    they became convinced through rational arguments, but because Rocky Kolb
    told them it was true.

    I once heard a series of lectures by Allan Sandage. He was very clear
    (more so than in print) that the Hubble constant had to be low because
    otherwise the universe would be too young. He was assuming that the
    cosmological constant was zero, but also stated explicitly that he knew
    that Omega was one because of “grand unification”.

    “Ostriker & Steinhardt neglected to mention an important prediction
    of Λ: not only should the universe expand, but that expansion rate
    should accelerate!”

    Maybe they didn’t mention it because it was obvious. (Obvious for the
    value of Λ they were talking about. Actually, Λ can be positive and the
    universe is always decelerating, but only for Omega>1. To solve
    the age problem, though, one needs a universe which accelerates at least
    some of the time.)

    “This was an obvious and inevitable consequence of ΛCDM that was
    largely being swept under the rug at that time.”

    I don’t remember it that way (and, yes, I was there). For people who
    grew up with sigma and q rather than Omega and lambda it was even more
    obvious.

    “This was a huge problem for Inflation. The only possible solution,
    albeit an ugly one, was if Λ made up the difference. So there he was at
    Aspen, pressuring the people who observed supernovae to admit Λ.”

    True, but the implication is not that a positive Λ is somehow bogus
    because of something Mike Turner said. He’s not that powerful. 🙂

    “Great as the supernovae experiments to measure the metric genuinely
    were, they were not a discovery so much as a confirmation of what
    cosmologists had already decided to believe. There was no singular
    discovery that changed the way we all thought. There was a steady drip,
    drip, drip of results pointing towards Λ all through the ’90s”

    Indeed. Sort of like how the reality of atoms was accepted: several
    different lines of research (including an important paper by one A.
    Einstein) led to the same value of Avogadro’s number.

    “Note that the acoustic power spectrum of temperature fluctuations in
    the cosmic microwave background (as opposed to the mere existence of the
    highly uniform CMB) plays no role in this history. “

    To me, this is one of the biggest reasons for believing in the
    concordance model. The parameters were nailed before anyone had even
    seen a peak in the CMB power spectrum, and when the CMB data became good
    enough, they confirmed the concordance model with no fudge factors.

    “All the vast knowledge incorporated in textbooks like those by
    Harrison, by Peebles, and by Peacock – knowledge that often seems to be
    lacking in scientists trained in the post-WMAP era.”

    If you haven’t read Harrison’s textbook at least twice, you are not a cosmologist. 🙂

    “There is nothing so sacred in ΛCDM that it can’t suffer the same
    fate, as has every single cosmology ever devised by humanity.”

    True, but in my view too early to make too much of this discrepancy. I
    remember a discussion of the Hubble constant where Paul Schechter cried
    out from the audience “Where’s the problem? They agree at three sigma!”
    🙂

    “Today, we still lack definitive knowledge of either dark matter or
    dark energy.”

    Unless you believe that dark energy is just the cosmological constant
    (and there is not one shred of evidence to suggest otherwise), in which
    case explaining its value is at the same level as explaining the value
    of the gravitational constant, and explaining “what it is” is at the
    same level as asking why a given inertial mass corresponds (via the
    gravitational constant) to a certain gravitational mass.


    1. “computers are much faster and no longer female”

      I originally took this to be just another garbled spell-check mutation, but it has been subsequently quoted without correction. So I have to ask, what the hell is that, some weird nerd-boy colloquialism, or what???


        1. Funny I didn’t know that usage. I worked with the machine variety for years. Never too late to learn something new – that’s what keeps it all interesting. Thanks.


  17. I find this whole discussion interesting because part of the tension seems to be how comfortable (or not) one is with not understanding. Is a discrepancy exciting or threatening?

    I am curious if you think the citation incentives that seem to cause a group-think problem in other fields are also influencing cosmologists to coalesce around standard models too.


  18. “That’s because Omega evolves. If it is exactly 1, it stays one forever. But even a tiny bit below 1 to begin with, and it should be zero by today.”

    This is the infamous flatness problem, and (in the form it was used in the discussion during the times Stacy discusses here) is bogus. For some proof of my claim, see my article in Monthly Notices of the Royal Astronomical Society, Kayll Lake’s paper in Physical Review Letters, an interesting paper by Adler and Overduin in General Relativity and Gravitation, and Marc Holman’s nice review in Foundations of Physics and, of course, all references in all of these papers.


  19. Ohmygawdyes there is a huge amount of groupthink in cosmology. Always has been. That’s why we all bought into SCDM despite the absence of any credible evidence that Omega_m = 1. Cosmological groupthink has less to do with citation incentives than with Presumed Knowledge. Cosmology has always been the subject where science, philosophy, and religion collide. Science has rarely been the strongest leg of that tripod. An example of the attitude that the universe “must be so” due to some argument or other (as opposed to data) is provided above by Phillip Helbig, who gives the example of what Schramm considered to be a Big Sin. Differences over the “right” way to think about the problem persist to this day – see, e.g., https://arxiv.org/abs/0704.2291 and https://arxiv.org/abs/0708.1199


    1. Might any of the Group Think be the result of an inferiority complex with respect to Quantum Physics Standard Model?
      ‘We have to show the world that our science is just as complete as those particle physicists!’


  20. “I wasn’t referring to this (maybe too) obvious idea, but to the work of Thomas Buchert (e.g. https://arxiv.org/abs/gr-qc/9906015, https://arxiv.org/abs/1303.4444v3) or David Wiltshire.

    See also https://cqgplus.com/2016/01/20/the-universe-is-inhomogeneous-does-it-matter:(T Buchert, M Carfora, G F R Ellis, E W Kolb, M A H MacCallum, J J Ostrowski, S Räsänen, B Roukema, L Andersson, A Coley, D Wiltshire):”

    These are all really, really, really smart people. And they are all nice guys as well! Still, I think that there is a simple argument against the claim “backreaction is fooling us into believing in a positive cosmological constant when in fact the cosmological constant is zero”: Of all the things which backreaction could do, it just happens to do something which is capable of being interpreted in the terms of 1920s cosmology. Not only that, but the parameters one gets from the fits assuming FLRW cosmology agree with values obtained independently. To me, this seems really, really unlikely. Some belief in backreaction is motivated by people who just don’t like the cosmological constant. Some are concerned about how certain our knowledge is. Even if just as a devil’s advocate, it is important to look at this stuff, but for now I think that if it walks like a duck and quacks like a duck, then it is probably a duck.


    1. “Of all the things which backreaction could do, it just happens to do something which is capable of being interpreted in the terms of 1920s cosmology.”

      Only if one’s “belief in backreaction is motivated by people who just don’t like the cosmological constant” and if this belief results in a bias in the interpretation of the data.

      But one can also believe in a true cosmological constant (w=-1) and think that a part of the current acceleration of the expansion might be caused by the backreaction. This could solve the question of the discrepancy in the values of H0 obtained from different methods.


      1. Yes, one should definitely consider the case that there is a cosmological constant and backreaction. Occam’s razor doesn’t say that the simplest explanation is probably correct, but rather the simplest explanation which explains all the data. Similarly, dark matter and MOND shouldn’t be considered an either/or choice. Having worked neither on MOND nor on LambdaCDM, I am completely objective. 🙂 I am convinced that the concordance cosmology is essentially correct. I am also convinced that MOND phenomenology is real. I don’t know if more involved baryonic physics in structure formation can explain MOND phenomenology in a convincing way, but I am sceptical. I also don’t know whether there will ever be a convincing relativisitic theory with traditional MOND as a limiting case, but am sceptical here as well. I think it is too early to choose, but I wouldn’t be surprised if something like superfluid dark matter is the answer. If not, my bet would be neither on a relativistic MOND theory which explains the observations leading to the concordance cosmology, nor on more detailed CDM structure formation which explains MOND phenomenology, but rather something else entirely.


        1. @Phillip,
          Do you think there would be a lot more flexibility to accommodate new ideas and resolve particular problems if there was a descriptive framework involving a duality between suitable dynamic and static models?


            1. Sorry Phillip, I didn’t explain that question well at all.

              Einstein’s static universe was incomplete, and lost favor to a dynamic Big Bang, which also appears incomplete.

              My question is basically do you think that by encouraging revisions or extensions to these models in the context of a duality between them, people might better help to resolve DM and dark energy problems?


  21. budrap, Yves,

    FLRW metric and Friedmann equations are a solution to Einstein’s field equations that is accepted by Einstein himself and by almost all astrophysicists today. It is not perfect but a good approximation of the expansion of the whole universe. In your Yale lecture notes, read the thermodynamics section. Go beyond the introductory course. My Darkside Force theory is a more sophisticated treatment of the thermodynamics of spacetime. Therein you will find the Force.

    Dr. McGaugh,
    What do you think of these new findings on galaxies without dark matter?
    https://www.scientificamerican.com/article/ghostly-galaxies-hint-at-dark-matter-breakthrough/

    Danieli’s empire strikes back. The dark side of the Force is getting stronger. Is this the end of the MOND resistance? (assuming further observations confirm it)

    1. “FLRW metric and Friedmann equations are a solution to Einstein’s field equations that is accepted by Einstein himself and by almost all astrophysicists today.”
      Yes, because they are the only exact solution to Einstein’s equations for a perfectly (spatially) homogeneous and isotropic space-time.

      But, as stated by Dr. McGaugh, “it is also conceivable that FLRW is inadequate to describe the universe, and we have been driven to the objectively bizarre parameters of ΛCDM because it happens to be the best approximation that can be obtained to what is really going on when we insist on approximating it with FLRW.”

      The discrepancy in the values of H0 obtained from the different methods tends to indicate that this approximation is not so good, at least with the w parameter of the “dark energy” equal to -1 (corresponding to a true cosmological constant, which, as stated by Phillip Helbig, might no more need an explanation than the value of G).

      Thanks to the flexibility of its “objectively bizarre parameters”, the ΛCDM model might remain a good model (“the best approximation that can be obtained to what is really going on when we insist on approximating it with FLRW”), with w < -1 (corresponding to a growing density of the "dark energy").
      But if Λ is not a constant, then as long as we don't know what physical phenomenon hides behind Λ, ΛCDM is only an effective model. The same goes for CDM…

      One possible explanation for a growing "dark energy density" could be a growing backreaction of the growing inhomogeneities. Just look at the equations (10a) and (10b) in https://arxiv.org/abs/gr-qc/9906015
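      For reference, the averaged equations referred to there take, schematically, for irrotational dust and with Λ set to zero, the following form (see the paper for the exact expressions and notation):
      3 \ddot{a}_D / a_D + 4\pi G \langle\varrho\rangle_D = Q_D ,
      3 (\dot{a}_D / a_D)^2 - 8\pi G \langle\varrho\rangle_D + \tfrac{1}{2} \langle R \rangle_D = -\tfrac{1}{2} Q_D ,
      with the kinematical backreaction Q_D = \tfrac{2}{3} ( \langle\theta^2\rangle_D - \langle\theta\rangle_D^2 ) - 2 \langle\sigma^2\rangle_D . If Q_D exceeds 4\pi G \langle\varrho\rangle_D, the volume-averaged scale factor accelerates even with Λ = 0, which is why growing inhomogeneities can in principle mimic a growing effective dark energy density.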

      1. I agree we have to go beyond FLRW. That’s what my Darkside Force theory did. I calculated the discrepancy in H0 using my Cosmological Equations. My theory and observations match. Yes, Lambda is not constant. I also calculated the past, present and future changes in Lambda using my Cosmological Equations. In my theory, dark energy density is constant (rest mass energy-equivalent) but the relativistic effect of kinetic energy makes the density variable. It is more complicated than backreaction.

  22. Some additional thoughts on a very general level . . . . when solving a puzzle one usually starts with the edges. Now since the problems Dr. McGaugh describes about cosmology seem so persistent, maybe this is not a single-sided puzzle after all.

    Perhaps the discussions should gravitate a little more towards duality as a way to get people looking more deeply at ideas that may be correct even if they are incomplete or incompatible with some facet of observation.

    If in this context there is a cosmic black hole at the space where z approaches infinity, a new framework develops for some of the existing pieces of theory and observation. For example, an observer may be considered as a virtual or instantaneous pole in space-time. Looking out into space the observer may ultimately see space curve back to a cosmic black hole at another pole.

    This implies that the only way to look away from this cosmic black hole would be not to look outwardly, but inwardly! Here there is a connection to aspects of the holographic principle. Looking inwardly towards quantum phenomena we may see that each observation represents an ensemble of states from a space we cannot see at any instant, and possibly from the space that is much further away from the cosmic black hole than we are.

    There are potential implications towards the flow of time and the rate of clocks within the universe, additional boundary conditions for extended theories of gravity, and maybe even the potential relationship between random processes and some form of excitation of a super fluid BEC (grasping at straws here).

    But really the point is that by considering that a kind of duality might be needed to better describe observations in cosmology, the conversation could move away from telling people what they did was wrong and toward looking at which parts of what they did are worth keeping.

  23. Dr McGaugh,

    “But it is also conceivable that FLRW is inadequate to describe the universe…”

    It should also be conceivable that the cosmos is not a singular, unitary entity capable of engaging in a vast, coordinated expansion which is, in turn, dependent on a unique “universal” gestation event, one that is deeply redolent of an atavistic cultural conceit.

    In other words, it isn’t the inadequacy of FLRW’s description of the universe that is the problem. It is the inadequacy of the cosmos as “universe” paradigm that is the underlying cause of “the objectively bizarre parameters of ΛCDM”. Modern cosmology seems congenitally incapable of reconsidering its foundational preconception.

    1. Modern cosmologists are not alone in being reluctant to reconsider their preconceptions. This is standard operating procedure for everyone in all walks of life, and always has been. It is perhaps a bit more obvious in cosmology because it is kinda their job to test preconceptions.

  24. “computers are much faster and no longer female” (Dr. Helbig)

    I understood and appreciated this remark because my first boss at GE, Doris Clarke, had been a computer, in her younger days. You have to work very hard and follow instructions precisely to be a computer, without getting much glory for it.

    Great essay and some good comments (among the usual chaff, like this one).

    Dr. Sean Carroll’s current post at his blog asserts that “Dark matter exists. Anisotropies in the cosmic microwave background establish beyond reasonable doubt the existence of a gravitational pull in a direction other than where ordinary matter is located.” I hope he reads this post and responds here or at his blog, but it is unlikely as he doesn’t seem to be interested in blog discussions anymore. The message I take from this post is that reasonable people can disagree on the merits of Lambda-CDM but it is too soon to be dogmatic about it.

  25. To JimV – Twelve years ago I told Sean Carroll face-to-face that the math he claims supports the idea of dark energy, in fact does not. I then sent him a copy of that published article but never heard from him. DOI: 10.1007/s10773-006-9082-7
    I recently published another article pointing out the deep problems with the math “supporting” dark energy but do not expect to hear from him. His entire career is based on the cosmological constant and he simply cannot afford any real science. DOI: 10.1093/mnras/sty221

      1. Ahmet Oztas and I have pointed out some further important and critical math-physics shortcomings of the current standard model of cosmology, the LCDM world view. This article has been published in MNRAS (2018) and will be made freely available in another year or so. DOI: 10.1093/mnras/sty221
        Why people persist in believing in the failed notion of dark energy, in any form, is beyond me. I suppose it is easier than thinking.

  26. If you are still maintaining the bibliography of scientific literature at the MOND Pages, I would respectfully suggest that you consider adding the seven papers and/or preprints in my bibliography at the following link to your list: http://dispatchesfromturtleisland.blogspot.com/p/deurs-work-on-gravity-and-related.html Five of them set forth Alexandre Deur’s very promising work in developing a MOND-like theory with a broader range of applicability that also addresses dark energy issues from plain vanilla quantum gravity foundations by simplifying the math to approximate full quantum gravity in the static case with self-interacting scalar gravitons. One provides a discussion of how QCD confinement explains the quantum vacuum energy field v. cosmological constant disparity (a concept which by analogy is also relevant to Deur’s approach to dark energy). One compares the outcomes of scalar graviton models of gravity to post-Newtonian approximations to confirm that these models are viable approximations of full quantum gravity at the solar system scale.

  27. Anyone care to comment on the following questions?

    1). Is it reasonable to consider that the angular positions of the peaks in the CMB power spectrum could arise from a “holographic” dark matter that is imprinted on the horizon?

    2). Could something like this be possible if there was a black hole at the center of the universe? Essentially the higher spatial curvature results from “dark matter” that does not reside within the observable universe, but is imprinted on the background for all cosmological observations.

    3). Might this form one part of a dualistic description having a static character (i.e. finite space & infinite time) vs one of dynamic character (i.e. finite time & infinite space)?

    1. 1) Many theories are possible mathematically. But cosmology is science not math. We apply Occam’s razor. Cut out the theories that are “not even wrong.” Extra spatial dimensions that are unobservable cannot be falsified. Scientific theories are observable and they can be tested empirically whether true or false. Or else we will be dealing with math-fiction, metaphysics or theology.

      2) The observable universe is flat or the curvature is too small to detect. In Riemannian geometry, the curvature is measurable on the manifold regardless of the number of dimensions. So far we have not detected the curvature from higher dimensions, if they exist.

      3) Infinite time and infinite space are beyond the observable universe, if they exist. See #1

      1. Enrico,
        Thanks for the feedback. I didn’t mean to suggest that there are any extra dimensions needed. Simply that maybe the dimensions can be interpreted in a duality. We already speak of finite time and possibly infinite space in the concordance model. Maybe the dual is finite space and nearly infinite time. These are just two perhaps complementary models using the same dimensions.
        As for science, it may be as physically real as it can be. The horizon of a black hole may not be completely understood, but it is far from a metaphysical topic.

        1. We know black holes exist because we can observe their gravitational force. How do you prove infinite space? “The mind of God is music resonating in 11-dimensional hyperspace.” Actual quote from a famous string theorist. Though doubtful if he’s talking about physics or theology.

          1. Enrico,
            I see your point. Yet infinite can also mean the inability to prove something is finite, which brings us back to science. I may have been wrong to use the word infinite in this context, and admit that distinction may be discarded without necessarily falsifying the main idea.

            1. A more precise way to express the possible duality requires a distinction between the curvature of space and of time, and which of those we care to project beyond the cosmological horizon.

          2. “We know black holes exist because we can observe their gravitational force.”

            Since no observations entail either an event horizon or a singularity which are the common, defining properties of theoretical black holes, it can’t really be claimed that they are observed to exist as described. Are there compact objects with steep gravitational gradients? It would appear so. Black holes with singularities? That is not a physically meaningful proposition and it is certainly not empirically established by observations.

            1. I never mentioned the word singularity, nor intended to imply one. You may be objecting to something that in my opinion is not part of the original comment.

  28. JB, I quoted and remarked on a statement by Enrico about the existence of black holes. I was not replying to anything you said. Singularities are a part of standard black hole theory: https://en.wikipedia.org/wiki/Black_hole#Singularity

    There is some debate as to whether singularities are physically real but they are definitely part of the theoretical description, and if they are not physically real then the theory that contains them constitutes a dubious representation of physical reality across its entire range of applicability, not just at the singularity breakdown. It is specious reasoning to insist that a model that resolves to a physical absurdity (singularity) is somehow, otherwise a reliable representation of reality. This is true for both black hole theory and ΛCDM.

    1. The historical definition of a black hole, dating back to the 18th century, is a gravitational field whose escape velocity is greater than light speed. It was only after 1915, with general relativity, that the black hole singularity was invented.

      My Non-Singularity Theorem disproves black hole singularity. It shows that singularity contradicts quantum mechanics. Either QM or singularity is wrong. I assert QM is right. My theorem is a mathematical proof. It should be easy to disprove it. I asked famous astrophysicists to disprove my theorem. Nobody can disprove it.

        1. Not yet published. I can post it here with Dr. McGaugh’s approval, it’s quite long.

          In the meantime, here’s my Transmission Paradox. Nothing can escape a black hole’s event horizon because it has to move faster than light and this is prohibited by the theory of relativity. However, bodies outside the event horizon can feel the black hole’s gravity. Since black holes can have electric charge, charged particles outside the horizon can feel electromagnetic attraction or repulsion.

          There is a paradox here. Gravitational and electromagnetic forces are mediated by graviton and photon respectively. These particles move at light speed. How can they escape the event horizon? Particles inside and outside the event horizon are exchanging multitudes of gravitons and photons. They are passing through the event horizon with impunity. It may be possible if they cross the event horizon at zero time interval. That means the particles inside and outside are interacting instantaneously. Apparently, gravity and electromagnetism are non-local fields.

          Is this a validation of Bohm’s interpretation of quantum mechanics? Bohmian mechanics differs from both relativity and the Copenhagen interpretation. Relativity is local and Copenhagen is non-realistic. Bohm is non-local and realistic. The solution to my Transmission Paradox could lead to unification of relativity and quantum mechanics. Perhaps quantum gravity lies in Bohmian mechanics combined with Kaluza-Klein theory. A non-local 5-dimensional unified electromagnetic-gravity field.

            1. I did not submit it to any peer-reviewed journal. Roger Penrose and Lord Rees are better reviewers.

              Dr. McGaugh,
              I’m sure you know that orbital velocity for Gaussian surface of disk mass distribution is proportional to radius:
              v = (G pi D t)^0.5 R
              Where: v = orbital velocity, D = mass density, t = thickness of disk, R = radius

              Hence, the question for disk galaxies is not why is velocity not decreasing with R, but why is velocity not increasing with R. Orbital velocity is constant if D is inversely proportional to R^2
              Let: D = k/R^2 where: k = constant

              In a disk galaxy where the constant velocity occurs at acceleration scale (ao), the mass density can be written as follows:
               D = (M + Do pi (R^2 - Ro^2) t) / (V + pi (R^2 - Ro^2) t)
               Where: Ro = radius corresponding to acceleration ao, R = radius beyond Ro, M = mass inside Ro, V = volume inside Ro, Do = mass density from Ro to R, t = thickness of disk from Ro to R

               Substitute the inverse-R^2 mass density, D = k/R^2:
               k/R^2 = (M + Do pi (R^2 - Ro^2) t) / (V + pi (R^2 - Ro^2) t)
               k = R^2 (M + Do pi (R^2 - Ro^2) t) / (V + pi (R^2 - Ro^2) t)

               The above is my Galactic Equation. k may be unique for each galaxy, but k remains constant for various R, Do and t in a galaxy. Please check whether your disk galaxy data satisfy the Galactic Equation. If they do, it means that galaxies with the acceleration scale (ao) obey Gauss's law. I call this theory Gaussian MOND, or GMOND.

               1. The surface density profiles of disk galaxies are not Gaussian. To a crude approximation, they are exponential. V(R) rises and then falls as first derived by Freeman (1970) and discussed in detail in the textbook of Binney & Tremaine. Once you get far enough out, any mass distribution looks like a point source, and reverts to Keplerian behavior. That we hadn’t got far enough was a serious consideration and matter of discussion in 1980. This is 2019. Many, many galaxies have been observed far enough out that we should have detected a decline in their rotation speed if what we see were all we get with good ol’ Newton.
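                  For anyone who wants to see that behavior explicitly, here is a minimal numerical sketch (not from the post; the surface density and scale length below are illustrative values, not a fit to any galaxy) of the circular speed of a razor-thin exponential disk, using the standard Freeman (1970) expression, compared with the Keplerian curve of a point mass of the same total mass:

                  import numpy as np
                  from scipy.special import i0, i1, k0, k1

                  G = 4.30091e-6            # gravitational constant in kpc (km/s)^2 / Msun
                  Sigma0 = 800.0e6          # central surface density in Msun/kpc^2 (illustrative)
                  Rd = 3.0                  # disk scale length in kpc (illustrative)
                  Mdisk = 2 * np.pi * Sigma0 * Rd**2   # total mass of the exponential disk

                  R = np.linspace(0.1, 30.0, 300)      # radii in kpc
                  y = R / (2 * Rd)
                  # Freeman (1970) circular speed for Sigma(R) = Sigma0 * exp(-R/Rd)
                  Vc_disk = np.sqrt(4 * np.pi * G * Sigma0 * Rd * y**2 *
                                    (i0(y) * k0(y) - i1(y) * k1(y)))
                  Vc_point = np.sqrt(G * Mdisk / R)    # Keplerian curve, same total mass

                  ipk = int(np.argmax(Vc_disk))
                  print(f"Vc peaks at R = {R[ipk]:.1f} kpc (about {R[ipk]/Rd:.1f} Rd), then declines")
                  print(f"At R = 30 kpc: disk Vc = {Vc_disk[-1]:.0f} km/s vs Keplerian {Vc_point[-1]:.0f} km/s")

                  Running this shows the rotation speed rising, peaking near 2.2 disk scale lengths, and then declining toward the point-mass curve at large radii, which is the Newtonian behavior described above.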

              2. Thanks for the info. I doubt the certainty. “Far enough” should not be the criterion for Keplerian. It should be mass density. The greater the radius, the greater the volume enclosed and the greater its sensitivity to mass density. Are astronomers sure it’s one proton per cm^3 not two protons? That would double the mass inside the radius and determine whether Keplerian or Gaussian.

  29. An extreme outlier galaxy about 5 sigma from a lambda CDM prediction that doesn’t follow the RAR either is a new mystery. It is the opposite of the “no DM galaxies”. This has lots of DM where all existing predictions would suggest it shouldn’t have much at all. “The Extremely High Dark Matter Halo Concentration of the Relic Compact Elliptical Galaxy Mrk 1216” https://arxiv.org/abs/1902.02938

  30. Thanks for pointing this out. I have not yet had time to read it closely. In general, claims of high dark matter content in elliptical galaxies are highly degenerate with the adopted mass-to-light ratio of the stars, and it is difficult to disentangle what mass is dark and what is light. I will have to read the paper before I can comment on this particular case.

  31. @Enrico – Yes, we are sure about this! We had this debate circa 1980, and it has been replicated innumerable times since. It is not simply a matter of doubling the density of atoms in interstellar space. We see that the distribution of normal matter falls off too fast to explain the observations. So you have to make the number of unseen atoms a continuously increasing function of radius. Such a rolling fudge factor is no different from dark matter. There is clearly something going on that is not explained by conventional physics. This is established beyond a shadow of a doubt and has been for a very long time. One might as well assert that the Earth is flat or only 6,000 years old.
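    A back-of-the-envelope version of that point, using the spherical approximation purely as an illustration: sustaining a flat rotation speed V_c out to radius R requires an enclosed mass
    M(<R) \simeq V_c^2 R / G \propto R ,
    i.e. an effective density profile \rho(R) \simeq V_c^2 / (4 \pi G R^2) whose enclosed mass keeps growing linearly with radius, whereas the observed stars and gas fall off much faster (roughly exponentially). That is the quantitative sense in which the required unseen component is a continuously increasing function of radius.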

    1. If you look at my Galactic Equation, what you call a rolling fudge factor is a precise formula in GMOND: inverse radius squared mass density. Since you admit the existence of unseen atoms, whether or not they obey the inverse square formula is an assumption, not an observation.

      That said, I cannot prove GMOND is true. However, dark energy can also reproduce the MOND acceleration scale (ao = 1.2 E-10 m/s^2). Dark matter and dark energy are equivalent. DM particles have mass and self-gravitation but they also have repulsive force. Their net effect is weak repulsion. The basic equation is:
      Gaussian - dark matter = MOND

      The observed MOND effect is the combined effects of Gaussian gravitational attraction and dark matter/dark energy repulsion. It doesn’t require any fudge factor. Once a threshold gravitational acceleration is reached, MOND will appear at various radii without adjusting mass density (constant). The basic equation translates into accelerations:
      g - a = ao
      Where: g is threshold gravitational acceleration; a is acceleration due to dark energy (negative sign means repulsive force); ao is acceleration scale of MOND
      Substituting values in above equation:
      (1.8 E-10) - (6.0 E-11) = 1.2 E-10
      The value of a is obtained by solving my Cosmological Equations. This is from my Darkside Force theory.
