Theories of Everything with Curt Jaimungal - Subir Sarkar: Why Dark Energy is a Local Illusion
Episode Date: January 26, 2026

Hot off the press, Professor Subir Sarkar makes the case that dark energy doesn't exist (and he's not being provocative for its own sake). He's the former head of Oxford's particle theory group and serves on the Particle Data Group. Sarkar's group has found that the cosmic acceleration supposedly driving the universe's expansion is directional—not uniform as required by a cosmological constant—appearing only in the direction we're moving through space. He claims the 2011 Nobel Prize-winning discovery rests on a century-old assumption of cosmic isotropy that his data now falsifies at over 5 sigma. "We need to go back to square one."

SUPPORT:
- Support me on Substack: https://curtjaimungal.substack.com/subscribe
- Support me on Crypto: https://commerce.coinbase.com/checkout/de803625-87d3-4300-ab6d-85d4258834a9
- Support me on PayPal: https://www.paypal.com/donate?hosted_button_id=XUBHNMFXUX5S4

LINKS MENTIONED:
- https://inspirehep.net/literature/52370
- https://journals.aps.org/pr/abstract/10.1103/PhysRev.174.2168
- https://www.nature.com/articles/srep35596
- https://arxiv.org/abs/1610.08972
- https://ui.adsabs.harvard.edu/abs/1993ApJ...413L.105P/abstract
- https://arxiv.org/abs/1503.01229
- https://inference-review.com/article/heart-of-darkness
- https://academic.oup.com/mnras/article/206/2/377/1024995
- https://www.researchgate.net/publication/235329300_The_NRAO_VLA_sky_survey
- https://arxiv.org/pdf/1304.3627
- https://arxiv.org/abs/1608.06483
- https://scholar.google.com/citations?view_op=view_citation&hl=en&user=hYPXSjkAAAAJ&citation_for_view=hYPXSjkAAAAJ:k_IJM867U9cC
- https://www.biorxiv.org/content/10.1101/2023.03.18.533281v2.full
- https://amazon.com/dp/0486472051?tag=toe08-20
- https://youtu.be/xZnafO__IZ0
- https://youtu.be/kUHOoMX4Bqw
- https://youtu.be/5pOpcCT6AmY
- https://youtu.be/guQIkV6yCik
- https://youtu.be/6I2OhmVWLMs
- https://youtu.be/dG_uKJx6Lpg
- https://youtu.be/sGm505TFMbU
- https://youtu.be/Ve_Mpd6dGv8
- https://youtu.be/hF4SAketEHY
- https://youtu.be/X4PdPnQuwjY
- https://youtu.be/zNZCa1pVE20
- https://youtu.be/ZUp9x44N3uE
- https://youtu.be/fAaXk_WoQqQ
- https://arxiv.org/abs/1506.01354
- https://www.nature.com/articles/366029a0
- https://arxiv.org/pdf/1808.04597
- https://www.aanda.org/articles/aa/pdf/2014/08/aa23413-14.pdf
- https://arxiv.org/pdf/2411.10838
- https://archive.org/details/generalprinciple0000paul/page/n1/mode/2up
- https://arxiv.org/pdf/1205.3365
- https://amazon.com/dp/0471925675?tag=toe08-20
- https://arxiv.org/abs/1112.3108
- https://arxiv.org/abs/hep-ph/9506283
- https://www.sciencedirect.com/science/article/abs/pii/037026938491565X
- https://journals.aps.org/rmp/pdf/10.1103/RevModPhys.79.1349
- https://link.springer.com/article/10.1140/epjs/s11734-021-00199-6
- https://link.springer.com/article/10.1007/s10701-005-9042-8
- https://journals.aps.org/rmp/abstract/10.1103/9ygx-z2yq
Transcript
Most of my best ideas don't happen during interviews.
They come spontaneously, maybe in the shower or while I'm walking.
And until Plaud, I kept losing them because by the time I write it down, half of it's gone.
I've tried voice capture before, like Google Home, and it just cuts me off in the middle of a thought.
And I don't know about you, but my ideas don't come in these 10-second short sound bites.
They're ponderous, they wind, they're often five minutes long.
And Apple notes, Google Keep, the transcription is quite horrible,
and you even have to do multiple taps to get to it.
Plaud lets me talk for as long as I want.
There are no interruptions.
It's accurate capture,
and it organizes everything into clear summaries,
key takeaways, and action items.
I can even come back later and say,
okay, what was that thread that I was talking about
about consciousness and information?
My personal workflow is that I have their auto flow feature enabled,
and it sends me an email whenever I take a note.
I have the NotePin, that one's for the shower,
and then I carry this one around with me in the apartment,
and I love them both
very much, especially this one. The fact that I can just press it and it turns on instantly
and starts recording without a delay is an extremely underrated feature. And its battery: I
haven't had to charge this since I received it. Over one and a half million people use Plaud
around the world. If your work depends on conversations or the ideas that come after them,
it's worth checking out. That's plaud.ai/toe. Use code TOE for 10% off at checkout.
Loop quantum gravity, string theory, whatever you name,
none of them have been able to address the cosmological constant problem.
There is something very big we are yet to find.
In 1933, Wolfgang Pauli calculated that quantum vacuum fluctuations
should have stopped the universe from expanding.
He wrote that it's more consistent to exclude zero point energy
because, evidently from experience,
it does not interact with the gravitational field.
We still don't know why.
We need to go back to square one.
Professor Subir Sarkar of Oxford tells me that this unsolved problem should have made us suspicious
when astronomers claimed to detect a cosmological constant from supernova data in 1998, something
which resulted in the Nobel Prize in 2011.
My name's Curt Jaimungal, and as usual here on Theories of Everything, this podcast is technical,
because I want to show you the details, since the pop-sci accounts are egregiously misleading.
To make sense of Sarkar's claims, it's useful to know the general story. Here it is. Supernovae are those
elephantine star explosions, and certain distant supernovae appear 30% or so
fainter than we traditionally expected. The interpretation is that cosmic
acceleration is driven by dark energy. This sounds reasonable, since if they're
fainter than expected, then they're farther than expected, and if they're
farther than expected at a given redshift, it means space expanded more than
predicted, which implies the expansion rate has been increasing across time. An increasing
expansion rate is the same as acceleration. But this whole interpretation assumes that the universe
is perfectly the same, no matter how you translate yourself across space or what angle you view it
from. This was an assumption made in 1920 when we had almost no data. The FLRW metric is the
sine qua non of every dark energy inference. Sarkar's group tested it. The cosmic microwave background,
which is that afterglow of radiation from the early universe, shows a hotspot, so one direction
appears slightly warmer, potentially because we're moving toward it.
If our motion is what's causing this hotspot, then distant matter should show the same pattern,
but it doesn't. The matter dipole is twice as large, and this is confirmed at over 5 Sigma,
meaning it's a roughly 1 in 3.5 million chance that it's a fluke. And moreover,
the acceleration is directional, not isotropic as dark energy requires. Today we cover the vagaries
of supernova standardization, how cosmologists stratify
parameters while violating sacrosanct principles, and why the professor argues that a century-old
metric requires a theoretical revolution before acceptance.
Professor, I'm excited to be speaking with you.
I've been prepping for this interview in many respects for years, going through your work
and the responses to your work fairly extensively.
So, welcome, and thank you.
Thank you.
It's great to be on, and in fact,
I should say I've been watching some of the videos you have already recorded.
And, of course, I looked at the ones which caught my interest,
like your videos with Cumrun Vafa and with John Donoghue
and things concerning gravity and cosmology and so on.
And I must say I was impressed that you had done your homework.
You asked them very relevant questions.
And it's clear that you have a math background,
otherwise you wouldn't have known what they were talking about.
So all that was good.
But what struck me was that these, you know, they are of course professionals, that is to say,
we make a living out of this, whereas you make a living out of asking us what we do for a
living.
So that's a kind of an interesting perspective.
And, you know, one day, I think I would like to turn the tables and interview you to ask
you what you make out of the fact that people can have such different viewpoints on what is
essentially fundamentally the same, you know, ontologically or philosophically the same issue.
And yet we come across, come at it from so many different points of view.
But there is supposed to be one underlying truth there.
And that's the one you are trying to capture.
And somehow this whole enterprise works.
I mean, we do make progress.
I know that there is a school of thought that holds that everything is constructed,
but we know bloody well that, you know, physical laws aren't constructed.
They exist independent of us.
And it's kind of quite fascinating to see how we do things without actually being aware of what we are doing.
You know, we don't reflect on what we are doing, which philosophers usually do.
And so it's kind of very interesting to be someone, I think, like yourself at the boundary between, you know, philosophy and physics, as it were, and to ask these questions.
That's extremely kind of you, sir.
So I have great respect for you as well.
And in large part, I'd like this podcast to also be about what sigma results mean.
And there are at least two kinds of sigma.
So one is that when someone says this result has five sigma,
most people don't know that there's a hidden asterisk. Most of the lay public don't know
that just means conditional on the systematic errors not being there, that everything is correctly
modeled. So let's call that a nominal sigma. That's the one that's reported on Wikipedia.
But then there's an effective sigma, like the actual reliability. Maybe it's systematically lower.
Six Sigma results have disintegrated because of loose cables or contamination of some sort.
These sigma results actually tend to disappear more often than a straight reading of the nominal sigma would lead you to believe.
Yes.
So this episode is about dark energy.
And I'm going to open with a provocative question.
Do you think the 2011 Nobel Prize was prematurely awarded?
Ah, well, that's a loaded question.
Let me say that it's not really for me to say,
because I know people who serve on the Nobel Committee.
They do the best they can,
they do their best they can
and they are trying to be fair
as much as they can
in fact if anything on the whole
I would say they are very conservative
and they have been conservative historically
You know, after 50 years,
they release the
discussions that took place on the committee 50 years ago.
And I've had the opportunity to look at some of those on the day they were released.
And it's incredible how people who we now think of as household names were considered to be
rather radical or not having quite established what they were doing, you know, 50 years ago
when they were considered.
But by the same token, the Nobel Committee, of course,
does not operate in a social vacuum,
and there has been,
and is, I suppose,
pressure on them to
promote certain results.
so as it happens,
I do in fact know the
background to how this
prize was awarded
but I'm afraid I can't really discuss it
because the Nobel Committee does maintain
total confidentiality about
their process for 50 years.
So I'm afraid you'll have to wait
till, you know, 2062
to really know what went on
with that award.
But for the moment, all I would say
is that it was an important award
because it was, you know,
the first time that,
actually not quite the first time,
there had been an award earlier
for Pulsars, for example,
radio astronomy.
However, it was one of the first awards
for cosmology.
and it was important in establishing cosmology as a physical science
and raising its prestige in the community.
So I think on the whole it was a good award
and the fact that some of us are questioning the result today,
I think that's just how science progresses.
It's not always obvious at any given point in time
whether something is beyond doubt.
So that's a long-winded way for me to try to evade
your question, because I don't want to offend people and, you know, I do have respect for these
bodies, so I don't want to diss them. You don't want to diss them. Now, I didn't realize that
this was as spicy a question. I knew it was a bit controversial, but I didn't think it would have
this much background to it. What can you tell me? Well, since you raised it, let me say that
I did have one of my papers rejected by a well-known journal.
I will not name it, but I challenged the decision because I didn't like the
referee report.
And the receiving editor basically said, well, you can't, we can't publish your paper because
it questions a Nobel Prize winning result.
And that did annoy me because I don't think any result in science is above being questioned,
whether it's got a Nobel Prize or not.
People regard the Nobel Prize as a kind of imprimatur,
that it is an important result,
and they usually are right.
But, well, actually,
let me relate an anecdote.
I don't think
Stephen would mind.
So Stephen Hawking,
when he was still with us,
used to have this sort of,
what can only be called soirées,
in a country house
generously supported by
a foundation, and he would invite people who he had heard
were doing something interesting to come and talk about it.
And I once was invited to talk about the work we had done
on the significance of the acceleration.
And he sat through the whole talk,
which was quite terrifying, because it would have been
very upsetting if he had wheeled his chair out halfway through.
But he sat through it.
And then he composed a question, and we had to wait, you know, a fair bit of time for him to compose the question on his machine.
And he finally said, has anybody been awarded a Nobel Prize for the wrong reason or something like that?
Interesting, okay.
And actually, I didn't know the answer offhand, but my colleague from Cambridge who had invited me, he did.
He said, yes, it was Enrico Fermi.
And Enrico Fermi, as it happens, was awarded the Nobel Prize
for the discovery of transuranic elements,
which was not true.
They were in fact discovered later.
What he was in fact seeing was nuclear fission
and he didn't realize it.
So, of course, that, you know,
nobody is ever going to question
that Enrico Fermi fully deserved a Nobel Prize
for any one of the number of things that he did.
But historically, that is a case
where a Nobel Prize was awarded for the wrong stated reason.
So the one in 2011 was awarded for the discovery of cosmic acceleration.
Not dark energy, strictly speaking,
they awarded it for the discovery of cosmic acceleration.
And that, actually, I think was not right.
There is no cosmic acceleration as such.
We do see acceleration,
but it is only in one direction in the sky,
not all over the sky.
And it has therefore nothing to do with a cosmological constant.
It is a local effect, which we think is due to the fact that we are embedded in a deep
bulk flow, and the energy density of that flow affects the interpretation of our measurements,
such that we think we are accelerating when actually it's really a kind of perception.
It is an illusion, if you like.
The overall universe might be decelerating.
Interesting. Okay, so let's take this step by step.
I'd like to build up to various results, and even go further into cosmic voids and
baryonic acoustic oscillations and so forth. So in 2016, your analysis found that
something that was thought to be a 5-sigma result, which is extremely
high (the higher the sigma, the better the result), was, when taken on its own, actually a 3-sigma
result, or three standard deviations.
And it wasn't a "discovery of."
It was more like "evidence for."
So can you please explain that?
Yes.
Well, what we discovered was, for a start, that in the initial discovery the two Nobel Prize
winning teams essentially had of order 50 supernovae each, right?
So a total of 100 supernovae.
There was, in fact, an overlap in the two samples.
And the other thing worth mentioning is
that the two teams were not quite independent,
and this is something that people, in fact, set great store by.
We like to see 5 Sigma results from different independent experiments.
So, for example, the discovery of the Higgs
was by the ATLAS and CMS collaborations.
They were completely independent of each other,
and both had 5 Sigma.
And that result, as you know, has stuck,
and it has grown with increasing amounts of data.
So the basic test is if you have a threshold, you know, if you cross the 5 Sigma threshold,
then you can claim a discovery.
But actually, as you alluded to earlier, that is not a guarantee that what you are seeing
is in fact the case.
Sometimes even 5 sigma results go away.
I know of a 7 sigma result measured in the laboratory that went away.
This was a discovery in 1991 of a neutrino of Mars 17KEV.
It actually happened at Oxford.
I know the story pretty well.
And it was confirmed by an experiment at Berkeley and then another one,
but then subsequent experiments didn't find it.
And it turned out to be a conspiracy of three separate systematic effects
in the experiment that had been done at Oxford.
The thing is that we don't talk too much
about when experiments go wrong,
when we get failures;
we usually talk only about successes and discoveries.
And I think that, as a result,
not just the public,
but even the practicing scientific community,
young people, don't really get
the kind of education that they need
to understand the nature of scientific discovery:
that for every discovery that is
made, there are a lot of false alarms, there are a lot of wrong alleys that are explored.
And, you know, by definition, 5 sigma is meant to mean that the accidental
chance of seeing something at that level is about one in 3.5 million.
Yes.
The odds are, you know, pretty small.
But the joke in the physics community, and this was said by a famous
particle physicist, is that half of all
three-sigma results are wrong.
Now, three sigma is 99.7%,
right?
Yes, interesting.
You'd think that is pretty unlikely.
How can 50% of results
which are meant to be 99.7% significant be wrong?
Well, that's because of the systematics
that you alluded to earlier, right?
Because this three sigma refers only
to the idealized case
when you do not consider systematics,
you consider only statistical fluctuations
and you believe that those fluctuations
are governed by Gaussian statistics.
So you have the, you know,
bell curve and you are looking at the area under the curve
and one, two, and three sigma and going to five
is just the area under the curve.
So it's a convenient kind of language we talk in.
But we have found empirically,
at least in particle physics,
that five-sigma results generally
tend to lead on to results that are established beyond doubt subsequently,
especially when obtained by two independent experiments, right?
This is the rule of thumb.
It is not a theorem.
In principle, as I said, there can be cases where even 7-sigma results have been wrong,
but usually two 5-sigma results, independently obtained, are generally considered to be the threshold
for discovery.
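To make the sigma-to-probability dictionary concrete, here is a minimal sketch (my illustration, not anything from the episode's analyses) of the Gaussian tail areas being discussed; note these are the idealized, statistics-only numbers, before any systematics.

```python
# One-sided Gaussian tail probabilities for n-sigma thresholds.
# Idealized, statistics-only numbers: systematics are not included,
# which is exactly the caveat discussed above.
from scipy.stats import norm

for n_sigma in (1, 2, 3, 5, 7):
    p = norm.sf(n_sigma)  # survival function: area under the bell curve beyond n sigma
    print(f"{n_sigma} sigma: p = {p:.1e} (about 1 in {1/p:,.0f})")

# 3 sigma gives p ~ 1.3e-3 (the 99.7% quoted above is the two-sided coverage);
# 5 sigma gives p ~ 2.9e-7, i.e. roughly 1 in 3.5 million.
```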
All right.
So, coming back to the supernovae:
there had been two teams,
and they had seen
of order 50 supernovae each.
What we discovered subsequently
that was interesting
is that they had mainly been looking
at just one part of the sky.
And that is the part of the sky
towards which we see
the cosmic microwave background
is slightly hotter
by about, you know,
one part in a thousand than the opposite direction.
And that we believe is because we are moving towards that direction.
And the reason we are moving is because the universe today is inhomogeneous.
If the universe were homogeneous, then every direction would be the same.
There would be nowhere to move to.
But because the universe is inhomogeneous,
we have local attractions and even repelling motions,
which are not the Hubble flow.
The Hubble flow is radial, smooth.
Every galaxy is supposed to be moving away
from all the other galaxies.
But in practice, if you look very locally,
that is not true.
So if you look at our nearest galaxy,
which is Andromeda,
it's actually falling towards us.
It has not a redshift but a blueshift.
And actually, we are falling towards Andromeda.
We are falling towards each other
because we are bound in an orbit.
And in fact, we'll pretty much merge in a few
billion years, right? That's not the Hubble flow. However, the belief is that if we average on
large enough scales, then we start seeing the Hubble flow. So if you imagine drawing contours
of velocity around us, they'll be very ragged in our local neighborhood. But as you go further
and further out, they'll become smoother and smoother, nice circles. And the scale at which
this is supposed to set in is believed to be of order, you know,
100 to 200 megaparsecs.
A parsec is about 3.3 light-years,
and mega is a million.
So you'll have to go out to that
kind of scale.
And I'll tell you later
how that number is obtained.
But that's supposed to be
the expectation
in the standard cosmological model,
which is also a theory
of structure formation,
the standard cosmological model.
However, this peculiar
motion that we have, which is causing us to see half the sky as hotter than the other half,
is not unexpected. In fact, it was predicted. So, Dennis Sciama, the cosmologist that I first
worked with, who is known for his very famous students like Hawking, who also turned
Roger Penrose onto cosmology, and who had many other brilliant students. Dennis actually wrote a paper
which said of the newly discovered cosmic microwave background:
well, I think you will find that it has a dipole anisotropy.
Half the sky will be hotter than the other half,
because we have a peculiar motion.
We are not in the frame of reference in which the expansion is isotropic,
in which all the galaxies are going away from each other.
That paper, by the way, I looked up recently.
It has had about 20 citations since 1967.
Somehow, the, you know, most profound papers often are overlooked.
This is another kind of aspect of, you know, real life.
Anyway, that dipole anisotropy was found subsequently;
it had been predicted as due to our kinematic motion,
and what we saw matched it.
So, Jim Peebles and David Wilkinson at Princeton
actually calculated what sort of temperature
variation we should see across the sky
if it is entirely due to
our motion. You can calculate it,
because that is just special relativity.
You know, today we would set it as
a first-year problem,
how to calculate this thing.
Anyway, they did all that, so that was
established.
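The special-relativity calculation being referred to is indeed a one-liner: motion at speed v through an isotropic blackbody of temperature T produces, to leading order, a dipole ΔT ≈ (v/c) T cos θ. A quick sketch with standard present-day numbers (v ≈ 370 km/s and T ≈ 2.725 K are textbook values I am supplying; they are not quoted in the episode):

```python
# Leading-order CMB dipole from our peculiar motion: dT/T ~ (v/c) * cos(theta).
# v = 370 km/s (solar-system speed w.r.t. the CMB frame) and T = 2.725 K
# are standard values, supplied here for illustration.
c_kms = 299_792.458   # speed of light, km/s
v_kms = 370.0         # our speed through the CMB frame, km/s
T_cmb = 2.725         # CMB monopole temperature, K

beta = v_kms / c_kms
print(f"dT/T ~ {beta:.2e}")                   # ~1.2e-3: 'one part in a thousand'
print(f"dT   ~ {beta * T_cmb * 1e3:.2f} mK")  # ~3.4 mK dipole amplitude
```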
Now, it is interesting that the
supernovae that were looked at by the
Supernova Cosmology Project and the High-Z Supernova Search Team
were also mainly in the direction of the hotspot, the direction towards which we are moving.
And that's interesting, because it means that they were looking, not at the whole sky, but only at one part of the sky.
And according to this idea that I mentioned earlier, that we are in a deep bulk flow, which is actually moving in roughly the same direction, we should be seeing acceleration only along the direction we are
moving. So if the two teams had actually had the opportunity to look over the whole sky,
they would have found that the overall expansion is in fact a dipole.
We are seeing acceleration in one direction and deceleration in the opposite direction.
If you average over the whole sky, you will find something pretty close to expansion at a
constant rate, which used to be called the Milne universe, the Milne
model, because Milne had a kinematic theory in which the scale factor just increases proportional
to time, right?
And so what we found in this paper in 2016, using principled statistical methods, is that the
expansion is actually pretty close to a constant rate, neither accelerating
nor decelerating, if you average over the sky.
But subsequently, when we looked at it as
a function of the direction you are looking in the sky,
then it became evident that it was actually a dipole pattern.
The acceleration is in this direction;
in the opposite direction, there is deceleration.
But that you could not see until we had supernovae
covering the whole sky.
And that catalogue was only released in 2014.
It was called the Joint Light-curve Analysis catalogue, of 740 supernovae.
And today the latest catalogues,
Pantheon+ and Union3,
have of order twice that number,
you know, about 1,600 supernovae.
So now that dipole pattern is pretty clear.
And to go back to the question you asked:
what we found was that
the data was consistent with the acceleration
which had been claimed
only at three standard deviations.
In other words, it wasn't the kind of discovery-level evidence.
Okay, so this is almost exactly 10 years ago.
We're now in 2026, and this was in 2016.
Yes.
I recall Rubin and Hayden, if I'm pronouncing their names correctly.
Yeah.
They said, and I think it was the same year, 2016 as well,
that what you did was use a profile likelihood, when what you should have done,
because you had many nuisance parameters, like light curve shape
and color and host galaxy masses,
is Bayesian marginalization, which is the more correct approach.
And they redid your analysis
with what they considered to be the proper analysis,
the proper marginalization.
And they recovered a 5-sigma or larger result.
So do you think that they got something incorrect or what?
Actually, the irony is that Rubin and Hayden use
exactly the same statistics that we did, right?
They use what they call a Bayesian hierarchical model,
but the underlying engine for that is exactly the same
maximum likelihood method.
It's just dressed up in Bayesian language;
it's exactly the same technique.
In fact, they recovered our result.
So in their paper,
they actually first do the analysis the way we do it,
and they recover our result.
Then they do something else,
which recovers the significance of 5 sigma again.
And that is, they say that the light curves were adjusted.
This was done by the Joint Light-curve Analysis people;
they are the astronomers.
So every astronomer in the world who was doing supernova cosmology was on that JLA paper,
including the Nobel Prize winning people that I mentioned earlier.
And what they said was: look, different catalogues have used different techniques,
and, you see, it's hard to combine the data.
So we're going to take all the data, whether from the, you know, Sloan Digital Sky Survey,
or something called the Supernova Legacy Survey, which covered four directions in the sky.
The SDSS was just a strip on the sky.
Or there was a dozen supernovae from the Hubble telescope, right,
the so-called GOODS survey.
Not very many, but they were important, because these are at high redshift,
so you can't see them from the ground.
You have to go up in a satellite,
to be able to get above the atmosphere
to see them, because they're so redshifted
that they're in the infrared.
And then about half the sample,
half the catalogue, is actually very local supernovae,
within a few hundred megaparsecs.
They are all over the sky.
So there are four catalogues, which they combined together.
These four catalogues have different distributions
in redshift.
They have different sky coverages.
But at least they were all
analyzed using exactly the same light curve fitter,
which they called a Spectral Adaptive Lightcurve Template:
SALT. It's called SALT2, in fact.
And that essentially is the technique by which
supernovae can be used to do cosmology.
So supernovae, by themselves, are not standard candles.
They vary a lot in their intrinsic luminosity.
But it was found empirically,
in fact by an astronomer called Phillips,
that the peak luminosity of a supernova
is correlated with the width of its light curve.
Purely empirically.
And actually, finding that correlation
was the crucial clue for supernova cosmology.
And I might remark that this was only made possible
because the astronomers,
including the Nobel Prize winning astronomers,
got this breakthrough technique
of surveying the sky with CCD cameras.
So what then happens
is that you look at the sky,
and then you go back and look at it again,
the next night and the next.
And occasionally a supernova goes off.
But then you have the luxury
of going back in time
to see what that patch of sky
looked like two weeks ago,
when it was just starting to rise.
You see, ordinarily,
this would never have been caught,
because you just see something
which has just exploded.
You catch it at maximum,
but you don't know how it got there.
Yes.
But thanks to the CCDs, you can.
And that means you get the whole light curve,
and that means you can measure its width.
Yes.
This was the real breakthrough.
And then they discovered that there was a correlation
between the peak luminosity and the width.
And using this, you could correct for that scatter
in the supernova absolute
brightnesses and
fit them all onto one template.
You could stretch the light curve
which is called a stretch correction.
And the stretch correction is different
in different color bands
so there's a color
correction
because you observe
all of the
suburbia in three
different bands
and so you have
a handle on
their spectral
qualities as well
so the light
curve tablet
fitter simply says
you measure a
supanova
it's got some
magnitude
in other words
some amount of
light coming out
of it
you also measure
it Z-shift
if you can
that is the other
quantity
you can measure
but that
magnitude
can be corrected
for the observed width of its light curve
in three different bands.
And by doing that, magically,
the scatter which was a factor of 10
can be reduced to less than a factor of 2.
And then they are so-called standardizable candles.
They're not standard candles, right?
So this part is usually glossed over.
I have to tell you, you know,
I'm not a, you know, I didn't know about any of this.
We had to teach ourselves all this stuff
because this is pretty hardcore astronomical,
a lot of rigorous and very careful work
and done by a lot of people
and, you know, one needs to learn what they're doing
and why it matters.
However, we took these corrections
which had been tabulated by this JLA team
and they had done the work.
So we took it as they gave it.
What Rubin and Hayden said
was actually these corrections
are themselves dependent on redshift.
These corrections are not universal properties of supernovae,
but they depend on the sample
and they depend on the redshift of the object, right?
Okay, so what, though?
Sorry. So these stretch and color corrections
that had been given to us, which we were using:
they said that we had taken them to be constant with redshift,
because that's actually what the collaboration had told us in the paper,
but that actually, if you looked at the corrections carefully,
you could imagine that they in fact varied with redshift, right?
So they introduced 12 more parameters.
In our fit, we had 10 parameters,
and they doubled the number of parameters.
Okay.
They said, if we allow for that,
then, in fact, the significance of the acceleration goes back up to, you know,
something close to 5 sigma.
This was their argument, right?
And the reason why we didn't go along with that is because, you know, if you add enough parameters to a problem,
then, as the famous saying goes, you can fit an elephant.
Okay.
So technically, you're allowed to add parameters if it improves your fit substantially.
So there is a criterion.
Roughly speaking, if you add one parameter, your chi-square
should decrease by about two,
you know, twice the number of parameters that you are adding,
something like that.
More formally, there is something called
the Bayesian information criterion.
So there are ways to check on this.
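For concreteness, that bookkeeping is usually done with the Akaike or Bayesian information criteria: added parameters must buy a large enough drop in chi-square to be justified. A hedged sketch with toy numbers (the chi-square values are invented; only the sample size, 740, and the parameter counts, 10 versus 22, come from the discussion):

```python
# Toy information-criterion comparison: do 12 extra parameters earn their keep?
# The chi2 values below are invented for illustration.
import math

def aic(chi2: float, k: int) -> float:
    """Akaike information criterion: chi2 + 2k."""
    return chi2 + 2 * k

def bic(chi2: float, k: int, n: int) -> float:
    """Bayesian information criterion: the penalty grows with sample size n."""
    return chi2 + k * math.log(n)

n = 740  # JLA sample size
print(aic(700.0, 10), aic(685.0, 22))        # 720.0 vs 729.0: not justified
print(bic(700.0, 10, n), bic(685.0, 22, n))  # ~766 vs ~830: penalized even harder
```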
And we didn't like the fact that they were adding so many parameters.
But more to the point, and this is important:
if they're saying that the light curve properties can change with redshift,
then why can't the absolute magnitude
of the supernova change with redshift, right?
I mean, if you're saying that the intrinsic properties
of a supernova are redshift dependent, right,
that would completely undermine their use for cosmology,
if the absolute amount of light they're putting out
were also to change with redshift, right?
So you're kind of opening a Pandora's box there;
you really shouldn't do that, right?
So then what is their response to your counter paper?
Well, we wrote a counter paper,
but, no, we don't like publishing
sort of umpteen papers.
It is just a note on the arXiv,
and we showed quantitatively
that what they were saying
is not actually valid
from an information theoretic point of view, right?
So we didn't take it further,
because if they had thought about it themselves,
they would have realized that actually
they can't just say
that we are going to let the stretch
and color corrections be redshift dependent,
because that raises the possibility
that the absolute magnitude is also
redshift dependent.
And in fact,
20 years later,
that chicken has come home to roost,
because a group of Korean astronomers
has found that there is actually a correlation
between the absolute magnitude of a supernova
and the age of the progenitor,
the star that blew up, right?
And in fact, what they
find is that as you look back in
redshift,
you're looking at typically younger
objects, right?
And these would be intrinsically fainter,
which is just the effect that you are
ascribing to cosmic acceleration.
So, in other words,
it looks like other astronomers,
following that train of thought,
have actually now been looking into it,
and are claiming
that the absolute magnitude of supernovae,
the thing that is supposed to make
them standard candles,
is in fact not standard.
It is correlated with a property
of the progenitor,
the star that blew up.
And if the star was younger,
then the supernova would be intrinsically fainter.
And in fact,
when we take that Korean group's suggestion
and implement it in our analysis,
it is evident that actually,
on very large scales,
there is deceleration.
There is no acceleration.
All the acceleration is happening locally,
and it is dipolar, so it can't have anything to do with the cosmological constant.
Now, I'm sorry, this is getting rather technical,
but I'm afraid it is true that most people don't actually follow these arcane details
of supernova cosmology.
They are just told the simple story: that you're looking at exploding stars
which have put out exactly the same amount of energy wherever they are in the universe,
so by just observing them, we can tell how far away they are.
And then if you can also measure the redshift, the redshift versus the brightness determines the cosmology, and that requires that the universe is accelerating, right?
That is the story you're told.
But the real statement is the following.
When we look at distant supernovae, we see them to be about 0.3 magnitudes,
which is roughly 30%, fainter than they would be if the universe
was expanding at a constant rate.
That's the statement.
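The magnitude arithmetic is worth making explicit (my own arithmetic, from the standard definitions): a deficit of 0.3 magnitudes is a flux ratio of 10^(0.3/2.5) ≈ 1.32, i.e. roughly the "30%" quoted, and via the distance modulus it corresponds to distances about 15% larger than expected.

```python
# What '0.3 magnitudes fainter' means, from the standard definitions:
# flux ratio = 10^(dm/2.5), inferred distance ratio = 10^(dm/5).
dm = 0.3
flux_ratio = 10 ** (dm / 2.5)  # expected flux / observed flux
dist_ratio = 10 ** (dm / 5.0)  # inferred distance / expected distance

print(f"flux ratio: {flux_ratio:.2f}")                   # ~1.32
print(f"light deficit: {(1 - 1/flux_ratio)*100:.0f}%")   # ~24% less light received
print(f"distance: {(dist_ratio - 1)*100:.0f}% farther")  # ~15%
```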
So then you have to ask yourself,
if you are a physicist:
this 0.3 magnitudes,
which is the difference
between the accelerating case
and the non-accelerating case,
right,
how does that compare
with the kind of scatter,
the fluctuations,
the kind of corrections that you are applying?
If you're applying a correction
that is 0.15 magnitudes,
then you can't really claim
that 0.3 magnitudes is a
big deal, right?
I mean, all your corrections,
all your adjustments, should be much smaller
than the effect that you are claiming
is the evidence for new physics,
something as startling as cosmic acceleration, right?
And I'm afraid it is the case
that now it is becoming quite evident
that the evidence for acceleration
is actually very weak.
In fact, as more and more data comes in,
it is becoming more and more evident.
This is in total contrast to the case,
where you start by seeing a little bump
in some cross-section
in some laboratory reaction.
It is not significant,
but you take more data and the bump grows
and you take more data and the bump grows
and then one day the bump is so strong
that you can't deny it.
It is there, right?
Yes.
Now, that is the traditional way
particle physics is done,
and that's what's called
frequentist statistics.
You use things like p-values.
You ask: given some null hypothesis, what's the random chance of getting the data
that you actually got? The point being that you can only disprove a null hypothesis, okay?
It's very hard to prove anything, right? You can prove things wrong. This is much simpler to do,
and actually, philosophically, that's what we do. You know, we just prove things wrong. We take a null hypothesis.
So the null hypothesis, in the supernova case, would be that the universe is not accelerating, that it is expanding at a constant rate.
If you can prove that wrong, then you have a result.
The Bayesian part comes in, in cosmology, because you don't have the luxury there of repeating the same experiment over and over again.
There's just one experiment.
It happened.
You got one sample of data.
You can't repeat the identical
experiment over and over again.
So the whole premise of these
sigmas, the Gaussian distributions,
etc., is
really a bit dodgy.
So in cosmology,
what you do is this Bayesian analysis.
You know,
Bayesian statistics and frequentist statistics
should give the same answer
if you are asking the right questions.
Statistics can't change the physics.
But in some situations,
Bayesian is much better suited
than frequentist, right?
For laboratory work,
frequentist is perfect.
You know, I'm on the Particle Data Group.
We maintain this Review of Particle Properties,
which is like the Bible of particle physics.
Everything in there is quoted
according to frequentist statistics.
You could do Bayesian statistics,
but then you have to state,
along with the result,
all your prior assumptions.
Right, right.
Okay.
Now, Bayesian statistics
is, in an ideological sense, better.
I agree with that, because it...
Why is that?
Because it makes explicit
that everything that we say
is subject to some prior assumptions we have made.
We have some priors in our head.
You know, biases, if you like, right?
Yeah, but I don't see why that's superior
to a frequentist approach.
Because frequentist statistics
works fine under ideal conditions,
but in non-ideal conditions,
it is sometimes helpful
to state exactly what you assumed
as the prior distribution of your variables
in order to arrive at the answer that you gave, right?
With frequentist statistics, you don't state priors,
but you do have the luxury of getting more data.
So to give you an explicit example,
this Higgs boson, okay?
Think of the Higgs boson.
So people were looking for it.
We didn't know where it was.
It was somewhere between 100 GeV and 900 GeV.
That's all we knew, right?
So you look for it.
And then they said, well, there is a little bump at 125 GeV.
It was initially just 3, 3.5 sigma, right?
But it was seen by both ATLAS and CMS.
And at this point, a lot of cosmologists started shouting:
oh, you know, these guys, they don't know about the look-elsewhere effect.
The point is that, you know, once somebody says there is a bump at 125 GEV,
if I'm doing another experiment, I would look there.
Yes.
But that's a kind of a bias.
I should be looking everywhere, not just there.
Yes, yes.
But the point is that is what you should do.
If you're doing cosmology, you should do that.
You should not just focus on what somebody else has said.
It should be unprejudiced.
Freeman Dyson actually talked about how he's happy that the Higgs was discovered,
but he's unhappy about how it was discovered,
for this exact reason: that you have to throw away plenty of the data,
and you know what you're looking for,
and so you look there.
That's right.
But the proof of the pudding is in the eating.
You see, they took 10 times more data.
The peak was not only still there at 125, it didn't go away, it got higher.
And it got to 5 sigma.
And it got to 5 sigma in both experiments.
And at that point, it was pretty clear that it was indeed the Higgs.
And then that's when the big press announcements were made.
But data has continued to come in.
And, you know, lots of properties of the Higgs have been measured.
There is absolutely no doubt in our minds today that there's something.
However, this may not have been so.
Subsequent to the Higgs discovery, there was a false alarm: a bump at 750 GeV.
Hundreds of papers were written trying to explain this bump, right?
And it was, if I'm not mistaken, over 4 sigma in both ATLAS and CMS.
That was what was misleading most of the theorists,
because, you know, the rumors were flying left and right,
and everyone heard that both experiments had seen it.
So it came to over 4 sigma in both experiments,
and then it went away, because such things can happen.
You can have fluctuations.
I mean, nature is indifferent.
Nature is not either good to us or bad to us.
Nature doesn't care.
Nature is tossing dice.
And sometimes you can have a situation where
the wheel at Monte Carlo
comes up red ten times in a row.
It's not a biased wheel.
It is a fair wheel.
At least we think so.
But it can happen.
The odds of that are just a half to the power 10.
That's not that small:
two to the power 10 is 1024,
so it's about one in a thousand.
That's hardly anything.
The point I'm making is that,
in principle,
you cannot ever tell the difference
between a biased coin
and a very rare event
where a fair coin
keeps coming up, say, heads.
You can never tell the difference.
It's just a matter of your attitude.
So if you're a mathematician,
you would say every time you toss a coin,
the odds are one in two,
heads or tails, right?
Now.
Let me make this clear.
Yeah.
So my understanding is that you were not claiming
that dark energy doesn't exist.
It was just that the 5-Sigma result
should be a 3-sigma result.
But now it sounds to me
as if you're claiming
that you believe dark energy doesn't exist.
That's correct.
Okay, so I want to know what changed, and how it changed,
but I also want to know about the evolving dark energy results,
the evolving dark energy results from DESI, from DR1 and DR2,
and about the signal's weakening.
So does that comport with your worldview?
Does it take a sledgehammer to it?
So tell me how you think about all of it.
Okay, so to answer your first question, that's right.
We started doing this analysis
when the catalogue was first made public;
the supernova data was made public in 2014.
It actually took us a year and a half
to get our paper published,
because there was a lot of pushback.
I mentioned earlier a famous journal
that turned us down
because it was a Nobel Prize winning discovery.
But anyway, it did get published,
in Scientific Reports, a Nature journal,
and caused a bit of a stir,
because, you know, we could show using rigorous statistics
that it was not a significant result.
But that was just the start of it.
So as you said, then there was this pushback
by Rubin and Hayden saying,
well, we should allow for the supernovae
to evolve with redshift.
But subsequently, we looked at the distribution
of this acceleration on the sky.
So, normally we analyze all data in cosmology
assuming that the universe is isotropic.
Okay.
And the expansion rate, you can expand in, if you like, a Taylor expansion.
You can say it's got a velocity, then it's got an acceleration, then it's got a third derivative,
that's called the jerk, and so on.
You can do an expansion like that.
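In symbols, this is the standard cosmographic expansion of the scale factor about the present time, written here from the textbook definitions (not quoted in the episode):

```latex
% Cosmographic (Taylor) expansion of the scale factor about t_0:
% H_0 = expansion rate, q_0 = deceleration parameter, j_0 = jerk.
\frac{a(t)}{a(t_0)} = 1 + H_0 (t - t_0) - \frac{q_0}{2} H_0^2 (t - t_0)^2
                    + \frac{j_0}{6} H_0^3 (t - t_0)^3 + \dots ,
\qquad
q \equiv -\frac{\ddot{a}\,a}{\dot{a}^2}, \qquad
j \equiv \frac{\dddot{a}\,a^2}{\dot{a}^3}.
```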
We allowed for this acceleration to have one more degree of freedom.
We said, let it have an angular dependence.
Let it be a monopole plus a dipole, right?
We didn't add more terms,
because you want to minimize the number of parameters.
We had only 740 supernovae;
we don't want to increase the number of parameters too much.
But then we let our maximum likelihood estimator
loose on this dataset.
And it told us that the data
overwhelmingly prefers the acceleration to be a dipole.
It is positive in one direction
and negative in the other direction;
basically, it's a dipolar pattern on the sky.
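As a cartoon of what letting a maximum likelihood estimator loose on a monopole-plus-dipole deceleration means, here is a toy fit; it is entirely my construction (the 2019 analysis fits the full supernova light-curve likelihood, not per-object estimates like this), with each object given a sky direction and a mock measurement modelled as q = q_m + q_d (n̂ · d̂):

```python
# Toy monopole + dipole fit: q(n) = q_m + q_d * (n . d), with d a unit vector.
# Purely illustrative mock data; not the actual 2019 likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N = 740
n_hat = rng.normal(size=(N, 3))
n_hat /= np.linalg.norm(n_hat, axis=1, keepdims=True)  # random sky directions

d_true = np.array([1.0, 0.0, 0.0])                     # injected dipole axis
q_obs = -0.05 - 8.0 * (n_hat @ d_true) + rng.normal(0.0, 2.0, N)

def neg_log_like(p):
    qm, qd, theta, phi = p
    d = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])                       # unit dipole direction
    model = qm + qd * (n_hat @ d)
    return 0.5 * np.sum((q_obs - model) ** 2 / 2.0**2)  # Gaussian errors

fit = minimize(neg_log_like, x0=[0.0, 1.0, 1.0, 0.5])
print(fit.x)  # recovers qm ~ -0.05 and |qd| ~ 8 along the injected axis
```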
So when we published these results,
this was in 2019,
that caused even more of a hullabaloo.
And I recall Adam Riess,
who is one of the Nobel laureates;
he told Physics World,
well, these guys, they're using old data,
although he was actually one of the authors
of this 2014 data set,
and we have many more supernovae now.
And also, there is other evidence that dark energy exists.
Those were the two main arguments.
So with regard to the first statement:
indeed, we were using the old data set,
because we thought it was a good data set.
But now there is a new data set called Pantheon+,
of which Adam Riess is a leading author.
So we have taken that data set
and redone our 2019 analysis on it.
And that paper was published last year, in 2025,
right, after rigorous refereeing,
and we find that the dipole signal is stronger than ever.
It's even stronger in the, you know,
1,700 or so supernovae in the Pantheon+ data set.
Why is this?
So that's the first part of the statement.
And now, if we also allow the supernovae to
have a luminosity that depends on redshift,
that depends on the age of their progenitor star,
which is what the Korean astronomers are saying,
then in fact, as we go out,
the dipole dies out,
but there is still a monopole component left.
There is an isotropic component.
That isotropic component also disappears
if we allow the supernovae to have a luminosity
that depends on the age of the progenitor.
And the universe then looks like it is actually decelerating.
But locally, it looks like acceleration,
because the effect is aligned with the local bulk flow,
and it could be just a local effect because of that.
So now I'm of the opinion that, in fact,
there is no cosmic acceleration at all,
and it is certainly not evidence for dark energy.
Because something that I should stress
is that a cosmological constant, vacuum energy,
has to be isotropic.
Yes.
Because otherwise you violate Lorentz invariance.
You know, different observers would not see the same
vacuum, and that is sacrosanct.
The problem is, and I should make a strong statement about this: I really get very upset
when I see astronomers, especially in supernova cosmology, fit things to supernova data
without putting in any physical priors, in other words, without prohibiting things which are
unphysical.
So they consider the possibility of equations of state which are so extreme
that you would violate unitarity;
you'd violate things
that are absolutely sacrosanct.
But it is allowed in their fit,
and one discrepant data point
can pick out something like an equation of state
which violates the strong energy condition, for example.
But there is an unfortunate gap
between relativists on the one hand,
astronomers on the other hand,
and particle physicists, right?
There's a gap, meaning what?
Well, a culture gap: both a gap in background and in the way that one looks at, you know, phenomena,
and in the ability to appreciate that a cosmological constant, for example, to an astronomer, is just a number.
It's, you know, omega lambda is 0.7, right?
And they think it's a simple model, because they consider it to be one parameter in a six-parameter or seven-parameter model.
They don't appreciate that for lambda to have the value it does,
which is something of order the present-day Hubble parameter squared, right?
To interpret it as vacuum energy,
you would have to adjust operators in the standard model
to 60 places of decimals.
You'd have to cancel terms against each other
until everything is gone except one in the 60th decimal place,
because the natural scale of the standard model is a TeV
and the energy scale of this dark energy is a milli-eV.
So there is, you know,
a huge hierarchy between those two scales.
And it simply makes no sense
to call it something to do with the vacuum,
because it has got nothing to do with the standard model.
There are no energy scales in the standard model
of order the vacuum energy scale.
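To put a number on that hierarchy (my arithmetic; the observed dark-energy scale of roughly 2.4 milli-eV, i.e. the fourth root of the measured density, is the standard figure, not a quote from the transcript): an energy density scales as the fourth power of the energy scale, so a TeV cutoff overshoots the measured value by the famous factor.

```python
# Vacuum-energy hierarchy: energy density ~ (energy scale)^4.
# 2.4e-3 eV is the standard quoted dark-energy scale rho_DE^(1/4),
# supplied here for illustration.
import math

tev_in_eV = 1.0e12       # 1 TeV in eV
de_scale_in_eV = 2.4e-3  # observed dark-energy scale, ~ a milli-eV

ratio = (tev_in_eV / de_scale_in_eV) ** 4
print(f"density ratio ~ 10^{math.log10(ratio):.0f}")
# ~10^58: of order the ~60 decimal places of cancellation described above
```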
Yeah.
And in particular,
it cannot have anything to do with fundamental physics,
because its scale is set by the present-day Hubble parameter,
which is what enters into every measurement, right?
But the Hubble parameter is neither fundamental nor a constant.
Why should it determine the cosmological constant?
I mean, I really don't get why more people are not,
you know, struck by this.
That's what I wrote in my essay.
I said, at this point alarm bells should be ringing.
The Heart of Darkness, you say?
Yes, that's right.
That is the main point of it.
Just for people who may be tuning in and wondering,
what are you referring to when you talk about this gap
and where do you, Professor, lie on this,
you from 2011 to 2019, I believe,
were the head of Oxford's particle theory group.
That's right, yeah.
So you have a perspective on cosmological...
The constant problem.
On cosmological issues that someone...
who is a pure relativist or a pure astronomer wouldn't have?
Well, I kind of see the points of view of all these three groups.
I kind of, you know, I can see how each one looks at them.
But I find it, as I said, somewhat frustrating that people who fit the data don't
appreciate that sometimes they are playing havoc with fundamental rules of nature.
They're violating sacrosanct things like unitarity, which you cannot violate, you know, however radical you are.
Well, if you were to violate that and you were able to show it, then that itself would be worthy of a Nobel Prize.
That would be astounding.
But, well, I would very much doubt it, because in that case reality, or the universe, would not exist.
So no Nobel Prizes would exist either.
But that's another matter.
What I'm trying to say, to put it very simply, is this.
I am analyzing data in the framework of a model
whose only unknown quantity, namely lambda,
is a quantity that is allowed to be added in the standard cosmological model,
simply reflecting the fact that we have assumed the model to be strictly isotropic and homogeneous.
So it has only got three parameters.
There is the matter content, there is the curvature
of spatial surfaces, and then there is this quantity lambda,
which simply reflects the underlying symmetry of general relativity,
that you have local coordinate invariance, right?
And then, you know, when field theory was discovered...
I mean, all this was done at a time,
and Einstein sort of got all this,
when we still thought of matter as particles,
non-relativistic particles.
Of course, later we realized that if you heat up particles enough,
they become radiation, but that's still matter:
it's got pressure, but the pressure is positive.
However, in the 1930s, when field theory was discovered,
Wolfgang Pauli, I think, was the first to realize
that the ground state, the zero-point fluctuations of all these quantum fields,
also acts like a cosmological constant.
And the magnitude of these fluctuations is huge.
I mean, he looked at the fluctuations that would come just from the electromagnetic
quantum field, right?
And he wrote that if this coupled to gravity,
then the universe would not even reach to the moon.
It would be prevented from becoming as large as it is
because of the cosmological constant of the vacuum.
In fact, he actually made a calculational mistake.
The answer is that the universe would not have been any bigger
than a few millimeters.
And he in fact wrote,
this is in the Handbuch der Physik, 1933,
that, as is obvious from experience,
the zero-point energy does not couple to gravity.
Now, this is the so-called cosmological constant problem,
which Steven Weinberg memorably described.
He said it was the bone in our throat.
The catastrophe.
Yeah, it just defied explanation,
because according to general relativity,
which is a classical theory, after all,
all forms of energy density must couple
to gravity, right? But as Pauli said, as is obvious from experience, zero-point energy does not
couple to gravity, because if it did, then the universe could not ever have expanded to be as
large and as old as it is today. So imagine the universe...
Professor, I don't see how this
helps your case. So help me understand. Firstly, is there a reason to believe that dark energy,
that we should get to a calculation of what its value should be, from
vacuum fluctuations? Is there a reason to tie those two together? I know naively,
when we do a calculation, it's 10 to the 123, or something like that,
orders of magnitude off. But is there a reason to tie the vacuum fluctuations to dark energy?
Well, strictly speaking, in field theory, you can never actually calculate the absolute value
of the vacuum energy. You can only calculate differences between vacuum states, not absolute values.
Technically speaking, it's a so-called super-renormalizable term in the Lagrangian.
So you can't actually calculate it.
But what you can say is that, without fine-tuning, its value should be set by the energy scale
up to which the theory is valid.
And we believe that our standard model is valid up to a few hundred GeV in energy, at least, right?
That's statement one.
We can't actually calculate the vacuum energy, but it should be something whose scale is set by a few hundred GeV;
call it a TeV.
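The dimensional statement, that without fine-tuning the vacuum energy sits at the cutoff of the theory, comes from summing the zero-point energies of the field modes up to a cutoff k_max (a textbook estimate in units ħ = c = 1, written out here for reference):

```latex
% Zero-point energy density of a massless field, cut off at k_max:
\rho_{\mathrm{vac}} \sim \int^{k_{\mathrm{max}}} \frac{d^3 k}{(2\pi)^3}\,\frac{k}{2}
                   = \frac{k_{\mathrm{max}}^4}{16\pi^2},
% so rho_vac^(1/4) is of order the cutoff itself: a TeV, if that is where
% the standard model stops being valid.
```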
Statement two: how does this vacuum energy affect the expansion of the universe?
That's the second question you can ask.
The naive answer is that it would have stopped the universe from ever expanding beyond the size
of a nucleus.
As the universe cooled and got down to a temperature of a TeV, the thermal energy was
gone; all that was then left was the vacuum energy, and that would have caused the universe
either to recollapse or to go into
eternal inflation, one or the other, depending on the sign.
And we can't even calculate the sign in field theory.
The very fact that that did not happen, that we are here discussing it, seems to suggest
that vacuum energy does not couple to gravity, as Pauli correctly noted, I think, in 1933.
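His "few millimeters" can be checked in one line: a vacuum density of order (1 TeV)^4 gives a de Sitter (Hubble) radius of about a millimeter. A quick sketch using the standard reduced Planck mass (the constants are supplied by me):

```python
# De Sitter radius for a (1 TeV)^4 vacuum energy, in natural units.
# Standard constants: reduced Planck mass 2.4e18 GeV; 1 GeV^-1 = 1.97e-14 cm.
import math

M_pl = 2.4e18                           # reduced Planck mass, GeV
rho = (1.0e3) ** 4                      # (1 TeV)^4, in GeV^4
H = math.sqrt(rho / (3 * M_pl**2))      # Friedmann equation: H^2 = rho / (3 M_pl^2)
radius_mm = (1.0 / H) * 1.97e-14 * 10   # GeV^-1 -> cm -> mm

print(f"H^-1 ~ {radius_mm:.1f} mm")     # ~0.8 mm: 'a few millimeters'
```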
However, we do not understand why that should be so, because according to Einstein,
everything couples to gravity.
So that, in a nutshell, is the essence of the cosmological constant problem:
that our best theory of gravity, which is classical general relativity,
when coupled with the fact that you have quantum field theory for the other fundamental interactions,
produces, at their very uneasy interface, this cosmological constant problem.
If vacuum energy coupled to gravity, we should not be here,
but we do not know any reason
why it should not couple to gravity
in the formulations that we currently
have of these theories.
And my biggest disappointment
with all attempts to construct theories
of quantum gravity is that none of
them, you know, loop quantum gravity,
string theory, whatever you name,
none of them have been able to address
the cosmological constant problem.
They usually evade it.
Doesn't string theory address it with supersymmetry?
Well, the point is that
the real world is not supersymmetric in any case.
And even if there were supersymmetry,
it would be broken at an energy scale of a TeV,
and you would have the same cosmological constant problem as before, right?
So that's the simple answer to that question.
The real answer is a bit more involved, but that should do.
So there is no way that they address the cosmological constant problem.
And therefore, we are missing something really big, right?
Every attempt to solve the cosmological constant problem has failed.
There are actually now review papers classifying all the different ways
people have attempted to solve it,
many clever and intricate ways, and they have all failed,
which means we are missing something really big.
And I like to tell young students:
do not worry that all the big problems have been solved
and there is nothing left for you to do,
because we have missed solving the biggest one of them all.
Now, into this situation — so what happened?
Particle physicists realized they couldn't solve the cosmological constant problem,
but then, it doesn't really affect them.
It affects nothing that is done at the LHC.
This is something that only affects stuff on the larger scales,
on the scales of the universe.
So, as I put it in my essay,
it was an accident waiting to happen
that when you start making observations of distant objects
and you allow for a cosmological constant in your equations,
you are going to find it to be non-zero
because you are fitting the data in a very constrained framework
where the only unknown quantity in your equations
is the cosmological constant.
You have matter, you have curvature,
and you have the cosmological constant.
You can measure the matter independently.
You can measure the curvature independently.
And you determine the cosmological constant
from what's called a sum rule,
which is that the fractional energy densities
in matter, curvature, and cosmological constant
add up to one.
This is a restatement of the Friedmann-Lemaître equation,
which is the workhorse of standard cosmology.
And that equation is derived
assuming exact isotropy and homogeneity —
the so-called Friedmann-Lemaître-Robertson-Walker metric.
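For reference, the sum rule he describes follows directly from the Friedmann-Lemaître equation once it is divided through by the critical density; with the density parameters defined in the usual way, it reads:

```latex
H^2 = \frac{8\pi G}{3}\rho_m - \frac{k}{a^2} + \frac{\Lambda}{3}
\quad\Longrightarrow\quad
\Omega_m + \Omega_k + \Omega_\Lambda = 1,
\qquad
\Omega_m \equiv \frac{8\pi G\,\rho_m}{3H_0^2},\;\;
\Omega_k \equiv \frac{-k}{a_0^2 H_0^2},\;\;
\Omega_\Lambda \equiv \frac{\Lambda}{3H_0^2}.
```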
And that leads me to answer your second question.
I'm getting there.
You asked about evolving dark energy and DESI.
DESI has got some brilliant data.
It's got millions and millions —
you know, 10, 12 million spectra,
redshifts measured with this beautiful instrument they have on the telescope at Kitt Peak.
And it covers almost, I don't know, 35% of the sky by now, right?
I think they get to about 40% of the sky eventually.
They are still analyzing this data, this beautiful data set.
The first equation in their paper is: we assume the Friedmann-Lemaître-Robertson-Walker metric.
They assume exact isotropy and homogeneity to analyze their data.
Which is the very thing you're questioning.
Yeah, which I'll come to in a second,
why I'm so doubtful about it.
But they allow an extra degree of freedom now.
They say dark energy,
instead of just being something
which has got a purely diagonal energy-momentum tensor with a constant equation of state,
has got a second term in its equation of state,
which is proportional to redshift.
This parameterization is actually without any physical basis
whatsoever. It is simply for
calculational convenience.
It's self-serving. It's so that
you can integrate observational
quantities like the luminosity
distance or the angular diameter distance
in a neat way because
it's in the integral. So if you
make it proportional to Z, then you
can integrate over Z. It's just that.
And then they claim that it differs
from zero. They have got evolving dark energy.
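The extra degree of freedom being described is presumably the standard CPL parameterization used in such fits, in which the dark-energy equation of state picks up a redshift-dependent term chosen so that the distance integrals come out in closed form:

```latex
w(z) = w_0 + w_a\,\frac{z}{1+z},
\qquad
\rho_{\rm DE}(z) = \rho_{\rm DE,0}\,(1+z)^{3(1+w_0+w_a)}\,e^{-3 w_a z/(1+z)}.
```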
My question, which I've asked
DESI people several times, is:
have you checked that the signal
that you're actually measuring,
like the BAO signal —
is it the same in every direction in the sky?
I mean, you've got like 12 million data points.
Are they indicating a peak
at exactly the same position
whichever direction you look,
over the bit of sky that you have looked at?
And I have not had any answer to the question.
It is certainly not stated in any of their papers.
In other words,
they are analyzing the data
according to the assumption of a metric
that was actually made
over a hundred years ago.
It was in the 1922 paper
by Alexander Friedmann.
He assumed this metric
because at that time we had no data.
We didn't even know we lived in a galaxy.
Lemaître assumed the same thing,
and Robertson and Walker
then gave the mathematical basis for it,
but it was exactly this metric.
So what is your metric of choice then?
Is it LTB?
Well, I don't know.
As I said earlier, we can only prove things wrong.
It is much harder to find what is right.
So what we have done more recently, and this is something I'm very excited about,
because this is free of the complications that we discussed earlier about the supernova data
and all this technical stuff about stretch corrections, color corrections,
progenitor age dependence and so on — messy stuff.
People can argue back and forth, and you never get to a
consensus. I'm now going to tell you as briefly as I can about something. It's so simple
that it was actually proposed 40 years ago. It was proposed in 1983 and published in 84 by George Ellis,
who is a well-known relativist from South Africa, and John Baldwin, who was the head of the
Mullard Radio Astronomy Observatory at Cambridge, at the Cavendish Lab. And he was undertaking the Cambridge
survey of radio sources.
And what these guys
realized,
I was quite intrigued to find
that they had actually met
at an ex-monastery
on the island of Crete
where they hold conferences —
it is called
the Orthodox Academy of Crete.
And clearly —
I can just imagine them,
I've been to that place —
they were chatting, you know,
about this thing:
that Baldwin was about
to start counting
radio sources over the sky, and Ellis says to him: look, if the microwave background has this
dipole anisotropy due to our motion, then shouldn't any distribution of objects on the sky
at large distances have the same anisotropy? Because that arises due to well-known physics.
It's arising due to the phenomenon of aberration. Aberration is the phenomenon whereby,
when you are moving, a star will appear to be displaced in the direction
of our motion.
And this is actually a
relativistic effect, but it was actually found
by an astronomer called
Bradley. In fact, he was the
professor of astronomy at Oxford,
over 200 years before
Einstein. He actually
measured it, and said that the angle at which
you see the star is related
to the true angle where the star is
via a formula, which is just
the relativistic formula,
taking into account that light has finite
speed. So Bradley actually worked out,
to within one percent, the speed of light,
by observing that this aberration was, you know,
so many seconds of arc on the sky, right?
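As a quick check of the numbers (the orbital speed and aberration angle below are modern reference values, inserted here for illustration): to first order the aberration angle is v/c radians, which Bradley could invert to estimate the speed of light.

```python
import math

# Earth's orbital speed and the modern value of c (km/s)
v_earth = 29.78
c = 299792.458

# First-order aberration: angle ~ v/c radians, converted to arcseconds
angle_arcsec = (v_earth / c) * (180 / math.pi) * 3600
print(f"predicted aberration ~ {angle_arcsec:.1f} arcsec")  # ~ 20.5

# Bradley's inversion: from the measured angle, recover c
measured_rad = 20.5 / 3600 * math.pi / 180
print(f"inferred c ~ {v_earth / measured_rad:.0f} km/s")    # ~ 3.0e5
```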
So aberration causes a uniform and isotropic distribution on the sky
to be sort of focused in the direction you're moving.
And you have to also allow for the fact that when you're moving,
you see an object at a different frequency
than when you are not moving.
So if it has a spectrum,
you have to allow for the fact that
there are going to be a different number of objects
in your observing window, etc.
So when you do all this,
you find that you get a dipole pattern on the sky,
which is the same as the CMB dipole.
It is V over C.
It's a relativistic effect.
And V over C is about 10 to the minus 3
because we are moving at a few hundred kilometers per second.
in fact, 369.8 kilometers per second to be precise.
And if you put in all the numbers, do all the maths,
then for the case of objects on the sky,
like radio sources or quasars,
it's actually enhanced a bit.
It's about 5 times 10 to the minus 3, right?
So what is it?
5 times 10 to the minus 3 — 0.5 percent.
Okay.
It's a tiny, tiny effect.
But, you know, to measure something like that,
you need at least a million objects on the sky.
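A minimal sketch of where the 0.5% comes from, using the Ellis & Baldwin amplitude D = [2 + x(1 + α)]·(v/c) — the count slope x and spectral index α below are typical assumed values, not numbers quoted in the conversation:

```python
v = 370.0          # km/s: roughly the CMB-inferred solar velocity
c = 299792.458     # km/s: speed of light
x = 1.0            # slope of the integral source counts N(>S) ~ S^-x (assumed)
alpha = 0.75       # spectral index, S ~ nu^-alpha (assumed)

# Ellis & Baldwin (1984): aberration plus Doppler boosting of the counts
D = (2 + x * (1 + alpha)) * v / c
print(f"expected dipole amplitude D ~ {D:.2e}")  # ~ 4.6e-3, i.e. ~0.5%
```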
Now, in 1984, when they published their paper,
there weren't a million objects known on the sky.
When did they publish — Ellis?
Ellis and Baldwin.
So they published their paper in monthly notices
of the Royal Astronomical Society in 1984.
But the first sky map of objects, radio sources,
that was approaching that kind of a number,
came out from the Very Large Array at Socorro in New Mexico,
around the millennium.
It was published in, I think, 1998 or 99.
And this catalog, it's called NVSS,
this had more than a million radio sources.
But actually, you've got to throw a lot of them away
because, you know, you need uniform sensitivity over the sky.
You are looking for the number of objects
above some threshold as you are going in one direction,
which will be within your band.
So as you are moving,
objects which would have been too faint to be seen
in your survey
will be boosted above the threshold
and oppositely in the reverse direction.
So you'll see a hotspot and a cold spot.
But to do this right,
you really need a very controlled survey.
You need to have the same sensitivity
all over the sky.
And when you do counting from an antenna on the ground,
as you get closer to the horizon,
your sensitivity changes,
there are ionospheric effects —
all kinds of experimental uncertainties enter into the thing.
So you've got to cut out quite a bit of your data, right?
So when you do all that data cutting, etc.,
you aren't really left with a sufficient number of sources,
but people did this and they said that we are seeing a dipole.
It is in the same direction as we expect.
But its amplitude seems to be off.
It's a lot higher than it should be, right?
So we did this exercise ourselves in 2017.
In fact, one of the criticisms was that NVSS sees only the northern sky
because from one position on the ground, you can't see the whole sky.
You can only see about 40% of the sky, right?
But there is another telescope, at Molonglo in Australia, which sees the other half.
So we combined the two surveys and made a full-sky map.
And we saw this dipole and it was indeed higher than expected.
But the same sigma story strikes again.
We did Monte Carlo simulations.
We asked how many times would we see something like this by chance, by pure chance, right?
And the answer to that question is 2.7 sigma.
This effect is not significant at all.
It is even less than 3 sigma.
So, you know, we have to be consistent.
We are accusing other people of not having a significant result.
The same applies to our result.
so we can't claim that it is significant.
So it's a curiosity, but we realize we need better data.
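A toy version of the kind of Monte Carlo being described — how often does a purely isotropic sky of N sources throw up a dipole at least as large as the one measured — might look like this (N and the "observed" amplitude are illustrative placeholders, not the actual survey numbers):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 500_000        # sources surviving the quality cuts (illustrative)
D_obs = 0.007      # measured dipole amplitude (illustrative)

def dipole_amplitude(n):
    # For counts modulated as 1 + D*cos(theta), the mean source unit
    # vector has expectation D/3, so 3*|mean| estimates the amplitude.
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return 3.0 * np.linalg.norm(v.mean(axis=0))

sims = np.array([dipole_amplitude(N) for _ in range(1000)])
p = (sims >= D_obs).mean()
print(f"fraction of isotropic skies with a dipole this large: {p:.3f}")
```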
And then great fortune struck: we happened to meet up with an astronomer.
His name is Nathan Secrest.
He's at the Naval Observatory in Washington.
And he actually serves on the committee that maintains the reference frame
which is used for satellite navigation.
And these guys realize they
need to get everything absolutely right.
So they use quasars, you know,
which are the most powerful sources in the sky.
You can see them out to redshift of a few.
They're point sources.
They were initially called quasi-stellar sources.
That's why they're called quasars.
And he had a catalog with, you know,
a million and a half of these guys, right?
Measured all over the sky by a satellite called WISE,
which was an infrared satellite
that operated for years and years.
We were fortunate to meet up with this astronomer
who, in fact, had a catalogue of quasars mapped in the infrared.
So quasars are basically supermassive black holes at the centers of galaxies
that are gobbling up matter and spewing it out in jets.
So basically this catalog is of objects
which are, in fact, supermassive black holes
that are absorbing, accreting matter from around them
and shooting it out as sort of beams of plasma.
But the ones that we are looking at are in the infrared.
So, in fact, they are really the accretion disks
around the central black holes, right?
Similar objects shoot out these beams and they become radio sources.
So, you know, the radio sources and the quasars that you are looking at
do have a connection,
but the catalog that we are using
has got no radio sources in it at all.
It's a completely independent catalog.
Right?
I mean, the way that they're cataloged
is completely different.
So a different class of objects
is picked out in the survey.
The quasars that we are looking at
have been mapped by this satellite
all over the sky.
But again, we have to make various quality cuts
because we need a map of these quasars on the sky,
which gives us how many quasars there are per unit pixel on the sky.
And we have to make sure that this is not affected by something mundane
like the ability to tell them apart when they're very close to each other.
We don't want to count two as one, for example.
Or we don't want to be somewhere where there is absorption
and the number is diminished because of that.
So these are the kind of things that astronomers have to
deal with, and they have developed various methods to, you know, check for these things.
And our colleague Nathan is an expert on these.
He is totally meticulous about checking, looking for systematics, which is, you know,
again and again, what comes back to bite you on the ass when you announce a result.
You have to be very, very cautious.
But let me just summarize the situation by saying that we have convinced ourselves
that what we see is indeed a dipole on the sky.
And that dipole is twice as large
as the dipole in the cosmic microwave background.
What that means is that normally,
since we believe that the cosmic microwave background dipole
is the result of our local motion,
the idea is that if I do a Lorentz boost
at 369.8 kilometers per second,
in that frame,
I would then see the microwave background as isotropic.
The dipole would disappear.
Okay.
Because the dipole is simply because I'm moving
with respect to the frame
in which everything is isotropic.
Right.
And the implicit assumption has always been
that in that frame,
the distribution of matter is also isotropic.
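The boost in question is the standard special-relativistic one. With β = v/c ≈ 1.23 × 10⁻³ for v = 369.8 km/s, the observer's motion aberrates angles and Doppler-shifts frequencies, and to first order the CMB temperature picks up exactly a dipole:

```latex
\cos\theta' = \frac{\cos\theta + \beta}{1 + \beta\cos\theta},
\qquad
\nu' = \gamma\,\nu\,(1 + \beta\cos\theta),
\qquad
\left.\frac{\Delta T}{T}\right|_{\rm dipole} \simeq \beta\cos\theta .
```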
Okay, so right now you're distinguishing
between a kinematical dipole and a matter dipole?
Yeah.
So I'm saying the matter
dipole should also be
kinematic in the standard model.
The dipole that we're seeing
in the matter should be exactly
due to the same reason that we are seeing
a dipole in the radiation
because the two are connected together
in the early universe, then they decoupled, right?
But the last scattering surface
of the cosmic microwave background that we see
looks pretty isotropic, apart from little fluctuations,
right? And then we see
this huge dipole, but we say, well,
That's nothing to do with the early universe.
That's to do with just our local motion.
And by the same token, if we do this boost,
we apply this boost to all the things that we measure about matter,
namely the redshifts, the luminosity distances,
then the belief is, the hope is,
that we will have corrected for our local motion
and we can then analyze the data according to the Friedmann-Lemaître equations
and, you know, deduce dark energy,
deduce evolving dark energy,
whatever — the usual machinery.
But now I'm saying that there is a huge sort of, you know, spoke in the wheel.
It turns out that that whole procedure that is followed as standard,
including by the DESI collaboration, is now subject to question.
Because we are finding that the dipole in the matter is not the same as the dipole in the CMB.
And their being the same is the necessary requirement for the standard procedure to go through.
Right.
So let me step back and just describe it in a few short sentences.
Let's say the universe is on large scale,
homogeneous and isotropic as we want it to be
or take it to be in the standard model.
In reality, we look out and we see,
yes, the cosmic microwave background looks pretty isotropic
that supports what I always thought.
Except that there is a huge dipole,
a hundred times bigger than the other little fluctuations,
which are negligibly small.
They are one part in 10 to the five.
What is this dipole?
And then somebody says, oh, no, no, that we understand.
That's just due to our local kinematic motion.
We just need to correct for it.
And then we are back in the reference frame where everything is really isotropic.
And we can, you know, business as usual.
We can use the same model that was invented over 100 years ago by Alexander Friedman.
That still works.
Great.
and we carry on.
That's what DESI are doing.
That's what they're doing.
That's what every collaboration is doing.
They're using the same equations
that were written down by Friedmann and Lemaître
nearly 100 years ago.
I'm saying that to date,
it has been okay to do that
because there was no reason to believe otherwise.
But just in the last few years,
we have got the data that allows us
to do a consistency check
of whether the metric of the universe
is indeed Friedmann-Lemaître-Robertson-Walker.
That is the Ellis Baldwin test.
To do the Ellis Baldwin test,
which is simplicity itself,
it's just special relativity.
It is just saying: if we see a dipole in the cosmic microwave background
due to our local motion,
we should see the same dipole
in the sky distribution
of cosmologically distant sources.
And when we do that test,
we find that yes, there is a dipole in the distribution of distant sources like quasars and radio sources,
but its amplitude does not match. Its amplitude is not what it should be.
And that has now been established at more than 5 sigma by multiple data sets: radio data,
quasar data, data taken from the ground, data taken from a satellite, data analyzed by three independent groups.
They are all getting the same answer. It's significant.
So in a recent review in Reviews of Modern Physics,
we have given a detailed description of these different data sets.
We have discussed all the possible systematic uncertainties
that people are quite rightly concerned about.
And we show that to date, there are no showstoppers.
There is no reason to believe that this result is not correct.
And if that is the case, yeah.
Okay.
So let me see if I got this correct.
The critics may say that the evidence for dark energy, or lambda, is quite overwhelming because there are
various sources, like supernovae, and then there's the CMB, and then there's BAO. And so your response is:
well, all of these rely on the FLRW metric, and what warrants the validity of this metric is something
called the cosmological principle, which is just this twin assumption of homogeneity and isotropy
that you've been referencing.
Homogeneity means that if you're at a point P here in this universe
and you look out, then it should be spatially the same
as if you go to another point, say, Q.
And then isotropy means that you have rotational invariance.
So you're saying that we don't have rotational invariance.
Yeah.
So the point is that in reality,
we accept that the real universe has, of course, got structure.
So these statements are reduced to statistical isotropy
and statistical homogeneity,
which means that averaged over sufficiently large volumes,
you will have isotropy and homogeneity.
I mean, strictly speaking, in a universe with fluctuations,
you don't ever have exact homogeneity.
Even on the scale of the Hubble radius,
things are inhomogeneous by about one part in 10 to the 5.
But, you know, that's small enough that it's sensibly homogeneous, right?
We are not going to be too picky about that.
However, if that is the case — that the universe was initially smooth and isotropic and homogeneous,
and then it got little fluctuations, maybe from inflation, whatever — well, we do need those fluctuations,
because that's what grows into structure, that's what creates galaxies, that's why we are here discussing all this.
This structure then means that today's universe is inhomogeneous, but the belief is that,
averaged on sufficiently large scales,
you will still have statistical
isotropy and statistical homogeneity.
That is the backbone.
That is the foundation
of today's cosmological model.
So the cosmological principle
that you alluded to
— that's a, you know...
we shouldn't really do physics
by principles,
but by empirical evidence.
So that was the starting point
at a time when there was no data.
In fact, Weinberg says,
in his textbook
on gravitation and cosmology —
he says: why do we have this cosmological principle?
Well, because there is not much data.
But then he says, when, you know, if the data comes,
we really should check this because nothing could be more interesting
than to show that the cosmological principle is wrong.
And all I'm saying is that that data has taken a long time to arrive.
We have had to wait, you know, till around 2020 to actually have the data.
And because of that, now we are
in a position to say that
the
cosmological principle has been
falsified at more than five
standard deviations, because
the
kinematic dipole in
the microwave background does not
match what should be the
kinematic dipole in the matter distribution.
The two are different.
And what that then means
is that the standard procedure
of boosting to this
hypothetical, so-called
CMB frame, or cosmic rest frame as it is sometimes called,
in order to apply the equations to the data —
which is the standard practice that people follow to deduce dark energy,
to deduce evolving dark energy in the case of DESI —
that entire procedure is now open to question.
And in fact, I am personally sufficiently convinced that we have a result
that I think we need to go back to square one.
We cannot any longer proceed with this Friedmann-Lemaître-Robertson-Walker metric.
And there is the rub.
You asked earlier, what do I have, what sort of a metric do I prefer?
The point is that the FLRW metric is the maximally symmetric metric you can have for space time.
It's a unique, simple solution.
It's got so much symmetry that you can reduce the 10 coupled equations of Einstein
to just the single
Friedmann-Lemaître equation,
and the Raychaudhuri equation
for the acceleration,
which is also simplified
to a simple equation, right?
The point is that
only because of that
cosmology became tractable
because, you know,
everybody can solve
a simple differential equation.
Right?
And we do.
And then we can
confront this simple model
with a wide variety of data
as it kept coming in.
And the data has been, in my view, overtaking the theory for quite some time now.
The theory is still that 100-year-old theory.
It has been supplemented by a theory of structure formation,
which is that you say on top of the homogeneity and isotropy,
we have a Gaussian random field of small fluctuations with a scale-invariant spectrum,
and these grew under gravity.
And amazingly, that actually gives a pretty impressive fit to data on a wide variety of scales.
So that is a success, right?
But the basic underlying structure, under the hood —
the basic metric structure — is still the FLRW metric
from a hundred years ago.
The problem is that once you start dropping symmetry,
then Einstein's equations, as you really know quite well,
are extremely hard to solve.
And at the same time as Einstein gave his solution,
there was a parallel solution due to Lemaître,
Tolman and Bondi — LTB — which said, well, we'll still keep isotropy, but maybe we can have
inhomogeneity in one direction, in the radial direction. We can have a radial function,
which is an additional degree of freedom, right? Incidentally, just that alone is enough to make
the supernova evidence go away. If I'm allowed to have a radially varying function,
which determines how light rays propagate, right?
then I can absorb any difference between distant supernovae and nearby supernovae
by suitably adjusting that function.
So, for example, if you are in a large void or an over-density or something,
but things were still isotropic,
that would be enough to do away with the evidence for acceleration.
You could imagine, to give a trivial example,
that it could be that, you know, locally the Hubble parameter is,
whatever it is, 70 kilometers per second per megaparsec, let's say,
but we are in a void, so we are expanding quite fast, and outside the void the Hubble parameter is 50 kilometers per second per megaparsec.
And that in itself would be enough to take away all the evidence for acceleration.
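For reference, the extra radial freedom he is invoking is usually written in the Lemaître-Tolman-Bondi form of the metric, which replaces the single FLRW scale factor with two free radial functions:

```latex
ds^2 = -dt^2 + \frac{[R'(r,t)]^2}{1+2E(r)}\,dr^2 + R^2(r,t)\,d\Omega^2 ,
```

where R(r, t) is a position-dependent scale factor, R' its radial derivative, and E(r) a radially varying curvature/energy function.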
Wouldn't an LTB void model require you to be in the center to within, say, 1%, because there's spherical symmetry?
Right. You'd have to be pretty close to the center and, you know, then people say that's fine-tuned.
But that's what I find funny.
The same people who think that an LTB model
in which we are within a few percent of the center is fine-tuned
are prepared to live with a cosmological constant
that is fine-tuned not to a few percent
but to one part in 10 to the 60.
So that's what I meant by saying there's a culture gap.
People don't realize just how fine-tuned
this cosmological constant that they glibly write down is, right?
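The "one part in 10 to the 60" is back-of-the-envelope arithmetic: vacuum energy density scales as the fourth power of the energy scale, so comparing a TeV with the dark-energy scale astronomers infer (roughly a couple of milli-electronvolts) gives:

```python
import math

scale_expected = 1e12    # eV: a TeV, up to which the standard model is valid
scale_observed = 2.3e-3  # eV: rho_Lambda^(1/4) as inferred from cosmology (rough)

ratio = (scale_expected / scale_observed) ** 4   # energy density ~ scale^4
print(f"mismatch ~ 10^{math.log10(ratio):.0f}")  # ~ 10^59, i.e. ~60 orders
```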
I imagine they would say, well,
the cosmological constant —
the case for it is overdetermined, because you have multiple independent measurements that agree on this
six-parameter model; like you have the CMB for the matter density. Yeah, but in truth, when you actually
look at the data, that ain't true. First of all, many of those lines — so there are only two or three
lines of evidence where the data is actually good enough to say something. Then you have other
lines of argument where the data is not actually good enough, but people, when their results fit the
concordance model, publish a paper saying, you know: I looked at, I don't know, the distribution
of clusters and this triple correlation function, and it is consistent with the standard model,
right?
They don't tell you when it didn't fit.
They tell you only when it fit.
So I know someone who actually did a study of this. It is an amusing fact, also to do with
lambda: of the — I don't remember the precise number, but something like 30 measurements of lambda
following the WMAP paper,
which claimed to have established
the standard model with lambda —
of these,
only three were outside the 1-sigma range
from the quote-unquote true value of lambda, right?
So you know that 1 sigma
is meant to cover roughly 68%.
So a third of your data points
should lie outside 1 sigma, right?
But in their case,
only about, you know,
less than 10% were outside.
So there clearly is a selection bias.
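The arithmetic behind that suspicion is simple: if the quoted error bars were honest, each measurement would land outside 1 sigma with probability of roughly 0.32, so only 3 of 30 doing so would be very improbable. A two-line check, using the rough counts he quotes:

```python
from scipy.stats import binom

n, k = 30, 3   # ~30 measurements, only ~3 outside 1 sigma (as quoted)
p_out = 0.317  # chance an honest measurement falls outside 1 sigma

print(f"expected outside: {n * p_out:.1f}")                     # ~ 9.5
print(f"P(3 or fewer outside) = {binom.cdf(k, n, p_out):.4f}")  # ~ 0.006
```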
People, you know, like to be on the winning side.
So we really should ignore all the data sets
that by themselves don't have the confidence to claim something.
They can only claim consistency with something,
which is a different statement.
They are consistent with lambda,
but they're also consistent with no lambda.
So let's ignore those.
Let's just look at the data sets
which definitely say we need lambda, right?
Now, of these, the supernova data set is the most interesting
because lambda directly enters into the measurement
of the luminosity distance, which you observe on the sky, right?
When you look at the CMB, you don't actually measure anything to do with lambda.
When you look at the CMB, you don't measure lambda.
Lambda is totally unimportant at the time of CMB decoupling,
because if it is comparable to matter today,
the matter density was 10 to the 9 times more important
at a redshift of 1,000,
because the matter density goes as (1+z) cubed.
Lambda does not change at all,
so it was subdominant by a factor of 10 to the 9.
What the CMB tells you is that the curvature of space-time,
the average value between us and the last scattering surface,
that's pretty close to zero, right?
It doesn't measure lambda at all.
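The factor of 10 to the 9 is just the redshift scaling of matter relative to a constant Λ:

```latex
\frac{\rho_m(z)}{\rho_\Lambda}
= \frac{\Omega_m}{\Omega_\Lambda}\,(1+z)^3
\approx \frac{0.3}{0.7}\times(1100)^3
\sim 5\times10^{8}\text{–}10^{9}
\quad\text{at decoupling.}
```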
Then you have a measurement of omega matter
from looking at how much matter
there is in clusters of galaxies —
or the baryon acoustic oscillation scale tells you that.
So you have a measurement of omega matter,
you have a measurement of omega curvature,
and you have a measurement from the supernovae
which is a combination —
I can't remember the exact value,
let me get it right, I just have to remind myself.
Yeah, it's 0.8 times omega matter
minus 0.6 times omega lambda.
That is the combination that is measured
by the supernova people, right?
And that's slightly negative.
So when you put all this together
using the sum rule,
omega lambda plus omega matter plus omega k
equal to 1, then you get
omega lambda is 0.7.
But it is using the sum rule.
And the sum rule
is directly based on the
assumption of homogeneity and isotropy.
Because if you had other terms — if you had viscosity, vorticity, angular momentum, whatever —
those would be additional terms in the Friedmann equation.
So then you don't have just three terms.
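Putting round numbers into the sum rule shows how the 0.7 comes out: with the supernova combination slightly negative and the CMB saying the curvature term is near zero,

```latex
0.8\,\Omega_m - 0.6\,\Omega_\Lambda \approx -0.2,
\qquad
\Omega_k \approx 0,
\qquad
\Omega_m + \Omega_k + \Omega_\Lambda = 1
\;\Longrightarrow\;
\Omega_m \approx 0.3,\quad \Omega_\Lambda \approx 0.7 .
```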
Yeah.
Let me ask you this.
In the FLRW, there's an initial singularity.
Yes.
That's said to be the Big Bang.
So if you're discarding the FLRW, are you also contesting the Big
Bang model? But the point is, the Big Bang is simply a speculation. The FLRW model cannot be
extended back to the Big Bang in any case, because we know that the whole metric description of
spacetime itself is really not valid up at the quantum gravity scale. Earlier you said you were
going to ask me about inflation. One of the big jokes about the grounds for saying we need
inflation is the so-called horizon problem, which says that, you know, light could not have
traveled far enough to causally connect opposite parts of the sky, which we see as having the same
temperature. What people forget is that in order to make that argument, you have to construct a light
cone back to t equal to zero. You have to assume that the FLRW metric holds all the way back, back to
the Big Bang. And we actually don't know if that is the case. The integral that you are
calculating may not even converge if spacetime becomes fractal or whatever, you know.
So you can't actually — formally speaking, you cannot establish that there is a horizon problem,
except by assuming that FLRW holds all the way back to the Big Bang, which everybody knows is an outlandish thing to claim, right?
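The integral in question is the particle horizon, and the horizon-problem argument rests on evaluating it with the FLRW scale factor taken all the way down to t = 0:

```latex
d_H(t) = a(t)\int_0^{t}\frac{dt'}{a(t')} ,
```

which is finite for a radiation-dominated a(t) ∝ t^{1/2}, but need not even converge if the metric description breaks down near t = 0 — which is his point.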
What did you mean when you said if space time becomes fractal?
Well, I mean, that is one possibility.
and we don't actually have any idea what spacetime does
at the quantum gravity scale,
but it is reasonable to believe
that it doesn't stay smooth and, you know,
exactly isotropic and homogeneous.
In fact, it would be extremely unlikely
if it was, you know, like the simplest,
the most symmetric possible form of space time that there is.
And yet, for example,
the current formulation of string theory
assumes that you have Lorentz invariance
all the way up to the Planck scale,
right? It's an assumption. Ideally speaking, you should be able to compute the background from the propagation of the strings themselves,
and that it would be self-consistent. And that was the original proposal. But, you know, it's turned out to be mathematically very difficult. I mean, just even on a fixed background, string theory is extremely complicated. So, you know, I'm not saying it can't be done. But strictly speaking, we have no idea of how space-
time is going to behave as you approach the Planck scale.
And whereas in the standard cosmology,
we take it to be still described by FLRW metric
all the way back to the Big Bang.
In practice, that makes no sense at all.
In fact, the number of particles within a causal horizon
decreases to less than one,
long before you actually get to the Planck scale.
So you can't even have a notion of a temperature.
You can't have a notion of equilibrium.
All these things go out of the window.
I mean, not many people know this.
You can't have a temperature higher than about a hundredth of the grand unification scale.
Above that temperature, there is no interaction that we know
which can make particles scatter fast enough to stay in equilibrium.
Because the strong interaction is asymptotically free,
it becomes weaker and weaker as we go to higher temperatures.
Inflation predicts Gaussian statistically isotropic perturbations, though,
which is what is observed in the CMB.
It does.
And that is its greatest appeal.
If inflation is driven by the vacuum energy of a slowly rolling scalar field,
then in that quasi-de Sitter space, you can generate fluctuations in any scalar field —
for example, the inflaton, or for that matter the graviton, so you get gravity waves as well.
And these fluctuations, because it's such a weakly coupled field, will be very close to Gaussian,
and they'll have a more or less scale-invariant spectrum,
because the thing is rolling on a slope
with a very small tilt.
So you can get exactly the kind of fluctuations
that we know are required to create structure
and this had been pointed out long before inflation
by Ed Harrison and Yaakov Zeldovich.
It's called the Harrison-Zeldovich spectrum.
We already knew that we needed this.
So then along came this idea that it is due to a slowly rolling scalar field,
and that is just what the doctor ordered.
So we have grabbed it and we call it inflation.
Ignoring the fact that that model has got no physical basis whatsoever.
I mean, we discussed earlier the cosmological constant problem, right?
The cosmological constant problem has been unsolved.
We don't know whether vacuum energy couples to gravity at all.
and in fact if it did
then we would not have a universe
but we completely ignore that
knowledge that we have
when we then start talking about the vacuum
energy of a slowly rolling
scalar field dominating the universe
and driving inflation
and then at some point
somehow it magically falls into its minimum
and all the vacuum energy is converted
into radiation and we start up
the hot big bang
you know
So when this BICEP experiment
claimed to have seen the signature of gravitational waves from inflation, you know,
that would have required a cancellation of one part in 10 to the 160 in the energy from the top
of the plateau to the bottom of the plateau. That's why I didn't believe that it could
even be possible. But the point is, for some strange reason, there is this whole industry of
inflation. And I have to say, I've written papers on inflation. We tried to
construct models of supersymmetric inflation, which are slightly more predictive.
But they all had this fundamental issue that we don't understand how vacuum energy couples
to gravity.
It's a big, big problem.
And to my knowledge, no theory of quantum gravity has addressed that question.
It's an unsolved problem.
But, you know, to come back to what I was saying — the strict fact is that if we go
away from the Friedmann-Lemaître-Robertson-Walker metric,
then it is possible to consider more general metrics.
LTB is one,
but more generally the so-called Szekeres metrics,
which allow for both inhomogeneity and anisotropy.
The problem is that the number of parameters proliferate like mad.
Yes.
And we can't pin them down with data,
except that some people think that the data
that is now going to come from the new missions
that are being undertaken,
today — like the
Legacy Survey of Space
and Time that will be undertaken by the
Rubin Observatory;
SPHEREx, which is
a satellite that is currently mapping the sky,
the successor in a sense
to the WISE satellite
that I mentioned earlier;
And then there is Euclid
already in orbit taking data and so on.
This avalanche of data
might be enough that we can actually
start fitting more complicated
metrics to the data. We can't do that in the old way, but maybe machine learning here
will actually be helpful. This is a dream that people have. It might well become reality.
But philosophically, the point is this, that sticking to the simple model means that more
people can play. It means that astronomers, radio astronomers, particle theorists — everybody can do cosmology,
because the maths is so simple, right?
If the maths becomes complicated,
you know, very complicated,
then I'm afraid it will become a more arcane, you know, activity.
It will not be accessible.
It will not be as inclusive as it is today.
And what this really reminds me of,
I was, you know, complaining to a senior astrophysicist
that we have such hard evidence
and that people are not taking much notice of it.
And he said, you know, did you know that the evidence for continental drift,
it took 50 plus years for it to be accepted,
even though the evidence was overwhelming.
You know, it's not just that the continents seemed to fit together.
There was fossil evidence, right?
And this was already known in the 1920s, 30s, right?
But it took 50 years before it was accepted
because there was no theory of tectonic plates.
People couldn't figure out
how the continents could move about.
So they needed to have a physical understanding
of what this evidence was telling us
before they accepted the evidence.
And in a strange way,
I think that is being repeated now.
We have evidence,
but it will not be accepted
until we provide an alternative theoretical
background for how we could have a mismatch between the matter frame and the CMB frame,
and then people can start fitting the data to that new model.
That's what they want to do, right?
Interesting.
Now, your alternative model — there are question marks over it right now because you don't know what it is —
do you think it will ultimately be another metric, or do you think it'll be a theory of
quantum gravity?
Well, the cosmological constant problem certainly has not gone away.
And even if I say that dark energy does not exist,
that doesn't solve the cosmological constant problem.
That problem is still there.
The cosmological constant problem is overwhelming.
The problem is why are we here at all?
How is it that the universe can possibly have expanded
to this huge size and be expanding at only 70 kilometers per second per megaparsec
after whatever 14 billion years?
when its natural size should
have been set by the, you know, time at which we reached the energy scale at which the standard
model vacuum energy would have kicked in. That would have been something like 10 to the minus
12, 13 seconds after the big bang, right? When the universe was no more than a millimeter big.
That is how big we could have got before vacuum energy became the most important thing in town
and took over, right? It didn't. We are here to tell the tale. And yet, we seem to have conveniently forgotten
that. And today we start
inventing models of the early universe
where we
invoke vacuum energy and then lose
it. As a famous
physicist used to say: use it,
then lose it. That's what you're doing.
That's inflation.
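The "millimeter" figure is easy to sanity-check: in a radiation-dominated universe the causal horizon is of order c·t, and the TeV epoch is reached around 10⁻¹² seconds after the Big Bang:

```python
c = 3.0e8    # m/s: speed of light
t = 1.0e-12  # s: rough time at which the temperature falls to ~1 TeV

print(f"horizon size ~ {c * t * 1e3:.2f} mm")  # ~ 0.30 mm
```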
So
maybe there is a clue
there. Maybe inflation lasted for a
small, you know, short time.
Some people are thinking
today the decider space itself is
something which is abhorrent to nature.
You know, you can't have decedars space
because in decedal space,
there is no future infinity.
When you scatter two particles,
you can't check unitarity —
you can't check the outgoing states
at future infinity
because there's no such thing.
And that is one reason to believe this.
It was apparent from the very early days of string theory,
because this is what was realized
back in 1984, when the first anomaly cancellation results
came up, etc.
So people do worry about it, and actually it is that realization
that has led to this notion — you may know about it —
of the swampland versus the landscape of string theory.
That is all driven by this realization that there is
something fundamentally dodgy about de Sitter space, right?
You can't have de Sitter space in nature.
You have to have an arrow of time,
you have to have a little tilt to it, right?
And
then of course you have to worry about the cosmological constant problem itself, right?
So basically the cosmological constant — vacuum energy — of whatever sign
is a huge problem in our most fundamental theories.
In string theory, as you know, you typically get a negative cosmological constant.
You get anti-de Sitter space.
And then they have to do a bit more manipulation to try to make it slightly de Sitter,
because that's what the real world is like.
Why? Because the astronomers have told us
that it is so, right? Even if I tell them that, okay, it's not
de Sitter, it is zero — even that would involve
the same fine-tuning. So, you know, as far as they're concerned.
Right, because it's not anti-de Sitter. Yeah, it's not anti. So
might as well be hung for a sheep as for a lamb, you know.
It's a very difficult situation. And
on the one hand, it's intensely frustrating that you have
made no headway on that problem. On the other hand, I think
it is actually pretty encouraging
because it means that really
there is something very big
we are yet to find.
I mean, I personally may not be able to
kind of make any progress towards that.
I mean, certainly brighter people
than us have failed, if that's any comfort.
But I am always reminded of something
that I heard David Gross say.
He says: my dog doesn't understand quantum mechanics.
You know, my dog is pretty smart.
It brings me the newspaper.
When I talk to it, it listens like it understands what I'm saying.
But if I say Schrödinger's equation to it — nothing.
No response.
It definitely does not understand quantum mechanics.
So maybe there are some things that we'll never understand, you know.
We just won't be able to solve that problem.
Now, that may be true,
but it is also true
that we are never going to stop trying.
We are not going to ever say:
we have found a problem,
it's uncrackable, we're giving up,
we are going home, right?
So I think
we are not going to make progress in cosmology,
I think, until we solve
the cosmological constant problem,
because everything else
will be just patching things up.
But certainly,
if there is empirical evidence
that the universe
cannot be described by the Friedmann-
Lemaître-Robertson-Walker metric,
as I think we have established,
then it behooves us to find a better
description geometrically.
And Ellis, George Ellis,
had already given a prescription
for how to do that.
He called it the cosmological fitting problem,
also 40 years ago, right?
And he talked about how to actually use observations
to construct the metric of the universe,
not start with an assumption
and try to see if it fits the data,
but use the data
to infer the metric.
And that is again a program
that would have been inconceivable some years ago,
but now might be possible to do,
if you can frame it in the right way,
using all these new tools that are available
for dealing with huge masses of data,
huge numbers of parameters —
all these advances in machine learning, etc.
may be used to do it.
Now, I am not saying that, you know, we will do it.
I'm saying that there is a large community of cosmologists out there,
and most of them are busy doing parameter fitting,
and it would be great if at least, you know, some of them join us
in trying to find this new description.
You know, this equivalent, if you like, of the theory of tectonic plates,
that will then make everybody accept that, yes, the continents are moving,
and, you know, we have a physical framework for how to understand that.
Professor, what is a piece of advice that you found to be true that you come back to often
that was told to you or given to you by a senior, by one of your professors, by a colleague?
Well, I actually started out my research career doing experimental cosmic ray physics,
and it was actually after my PhD
when I first went to Oxford
that I met this very
charismatic
relativist Dennis Sciama,
who I mentioned earlier
who had already
mentored many famous people.
And Dennis had written a book
back in the 1950s —
1959, I think — called The
Unity of the Universe.
And he wrote very well,
and I remember being struck by the fact
that Dennis was a
you know, mathematical relativist,
he did rigorous work
and also on quantum field theory
in curved spacetime and so on.
But he wrote,
none of us can understand
why there is a universe at all.
Why should anything exist?
That is the ultimate question.
But while we cannot answer this question,
we can make progress
with the next simpler one
what the universe as a whole
is actually like.
In other words,
here was Dennis telling us that our first instinct when we come to cosmology is to ask the deep questions, the philosophical questions, right?
What is this all about? Why are we here? What does it all mean? What is our place in the universe?
But he says: be empirical. Be like a physicist. First describe the universe, right? And that, in a way, is what we are now doing. I mean, you know, it might seem a bit late,
but actually I'm saying that our description of the universe for the past century,
which was the sensible thing to do at the time,
was to describe it as an isotropic, homogeneous substrate, right?
On closer examination, as more data comes in, it turns out that that is not quite true.
We are not talking about a huge anisotropy.
We are talking about 0.5 of a percent.
It's tiny, right?
But it is important enough that it gets rid of
two-thirds of the energy density of the universe
which is supposed to be in the form of a cosmological constant
or dark energy.
That's how important this anisotropy is.
Even though strictly speaking,
you would not be able to see it on the sky
unless you had statistics of millions of objects.
It's so small.
Even in the cosmic microwave background,
you can see this anisotropy of one part in a thousand
because there are billions of photons.
So you can see the pattern quite clearly.
With astronomical objects, you have to work a lot harder.
It's a tiny effect.
But it undermines our notion
of the perfect symmetry that our model assumed.
Now, that should not surprise us.
In every other place in physics,
we have come across the idea that perfect symmetries
are broken or hidden in the real universe we live in.
There was a symmetry
between, you know, weak and electromagnetic interactions
in the early universe.
Today there isn't.
That is why the W boson has a mass
and the photon doesn't.
That is a major effect
of the breaking of that symmetry.
And this notion of spontaneously broken
or spontaneously hidden symmetries
is fundamental to our construction
of the standard model of gauge forces.
And yet in cosmology,
we have held on to this notion of,
you know, what graduate
students call the spherical cow approximation, for over a century.
I mean, so, you know, when I say it, people often say, so what's so great about that?
After all, I've never believed that the universe should be perfectly spherical or isotropic or
whatever.
And I say, yeah, well, that is true.
But did you know that the inference that omega lambda is 0.7, in other words, dark energy
makes up two-thirds of the energy of the universe, that that is based on making exactly
that assumption?
that connection has not been made by many people.
So a lot is at stake because our deduction
that we live in a universe dominated by dark energy
which is accelerating the expansion
is ultimately based on this very strong assumption
of isotropy and homogeneity.
And that is what is called into question
by the tests that we and others have performed.
So I think in that respect,
it is not particularly complicated or deep.
It's very simple.
And that is why I think for the same reason,
it should be easier for people to understand what this is about
and to take it on board.
And we could talk more about the philosophical implications of this,
but maybe another time.
Professor, thank you so much for speaking with me for so long.
And I know you're staying up past midnight.
and I know we're both dealing with sleep issues of various sorts,
so I'm glad that we're able to find the time.
Thank you for, yeah.
I'm afraid, as you can see,
I'm actually quite engaged with this subject.
I've been working on it now for quite some time,
and it has been frustrating to not have ready acceptance
of what you find, and to get a lot of pushback.
And I think that's part of the scientific method,
but I do hope that the younger generation of cosmologists
will start thinking with more of an open mind
than has been the case for the past decade or two,
when this so-called standard model of cosmology
has got such a grip on people's minds
that they think that there is no other game in town.
And this, I think, is completely against the spirit
in which science should be done
and especially cosmology should be done.
So I hope that that change has now started happening.
And I thank you for giving me a platform to say this
because I know a lot of people who may not have read our paper in detail
might listen to this and think, well, maybe I should take a look.
And that would be reward enough for me.
Hi there. Kurt here.
If you'd like more content from theories of everything
and the very best listening experience,
then be sure to check out my substack
at curtjaimungal.org.
Some of the top perks are that every week
you get brand new episodes ahead of time.
You also get bonus written content exclusively for our members.
That's c-U-R-T-J-A-I-M-U-N-G-A-L.org.
You can also just search my name
and the word substack on Google.
Since I started that substack,
it somehow already became number two in the science category.
Now, substack for those who are unfamiliar is like a newsletter, one that's beautifully
formatted, there's zero spam, this is the best place to follow the content of this channel
that isn't anywhere else.
It's not on YouTube, it's not on Patreon.
It's exclusive to the substack.
It's free.
There are ways for you to support me on substack if you want, and you'll get special bonuses
if you do. Several people ask me like, hey, Kurt, you've spoken to so many people in the fields
of theoretical physics, of philosophy, of consciousness. What are your thoughts, man? Well,
while I remain impartial in interviews, this substack is a way to peer into my present deliberations
on these topics. And it's the perfect way to support me directly.
curtjaimungal.org, or search Curt Jaimungal Substack
on Google. Oh, and I've received several messages, emails, and comments from professors and researchers
saying that they recommend theories of everything to their students. That's fantastic. If you're a professor
or a lecturer or what have you and there's a particular standout episode that students can
benefit from or your friends, please do share. And of course, a huge thank you to our advertising
sponsor, The Economist. Visit economist.com
slash toe to get a massive discount on their annual subscription. I subscribe to The Economist and you'll love it as well.
TOE is actually the only podcast that they currently partner with. So it's a huge honor for me, and for you — you're getting an exclusive discount. That's economist.com slash toe, T-O-E.
And finally, you should know this podcast is on iTunes. It's on Spotify. It's on all the audio platforms. All you have to do is type in Theories
of Everything and you'll find it. I know my last name is complicated, so maybe you don't want to
type in Jaimungal, but you can type in Theories of Everything and you'll find it. Personally, I gain
from re-watching lectures and podcasts. I also read in the comments that TOE listeners also gain
from replaying, so how about instead you re-listen on one of those platforms like iTunes, Spotify,
or Google Podcasts? Whatever podcast catcher you use, I'm there with you. Thank you for listening.
