Theories of Everything with Curt Jaimungal - Jenny Wagner: The "Inverse Problem" Of Dark Matter Is Insane
Episode Date: March 26, 2026
SPONSORS: - Go to https://shortform.com/toe for a free trial and an exclusive $50 OFF on your annual subscription - As a listener of TOE you can get a special 35% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe
What if 85% of the universe's matter isn't missing — it's just that our models were never clean enough to know? Dr. Jenny Wagner proves mathematically that every dark matter map ever made is extrapolation. The data only tells you something local. Everything else is a model assumption wearing the costume of evidence. She then connects this to Einstein's own 1917 warning — that homogeneity and isotropy were always a placeholder, never a truth — and makes the case that cosmology is not in crisis. It's finally ready for the next level of detail.
FOLLOW: - Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e - Substack: https://curtjaimungal.substack.com/subscribe - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - Crypto: https://commerce.coinbase.com/checkout/de803625-87d3-4300-ab6d-85d4258834a9 - PayPal: https://www.paypal.com/donate?hosted_button_id=XUBHNMFXUX5S4
TIMESTAMPS: - 00:00 - Dark Matter Model Fallacy - 05:06 - Gravitational Lensing Mechanics - 12:14 - Local vs. Global Information - 21:58 - Statistical Mechanics vs.
Gravity - 31:28 - Inverse Problem Methodology - 39:58 - MOND and Modified Gravity - 51:58 - CMB Data Processing Biases - 59:56 - Bullet Cluster Re-evaluation - 01:12:41 - Functional Analysis Breakthrough - 01:24:39 - Challenging the Cosmological Principle - 01:42:44 - Minimalist Neutrino Solutions - 01:51:02 - Naked Singularity Detection - 02:04:27 - AI Limits in Cosmology - 02:11:02 - Scientific Method Evolution LINKS MENTIONED: - Jenny's Site: https://thegravitygrinch.blogspot.com/ - Jenny's Papers: https://scholar.google.com/citations?user=HBSfYZIAAAAJ - Millennium Sim.: https://wwwmpa.mpa-garching.mpg.de/galform/virgo/millennium/ - Cosmic Structures (Math): https://arxiv.org/abs/2002.00960 - MOND (Milgrom 1983): https://doi.org/10.1086/160167 - Bullet Cluster JWST: https://arxiv.org/abs/2503.21870 - Much Ado About No Offset: https://arxiv.org/abs/2306.11779 - Model-Indep. Gravitational Lenses: https://arxiv.org/abs/2207.01630 - Einstein's 1917 Cosmo. Paper: https://www.scribd.com/doc/211769217/Cosmological-Considerations-In-The-General-Theory-of-Relativity - Imre Lakatos: https://plato.stanford.edu/entries/lakatos/ - Against Cosmological Principle: https://youtu.be/nASUsWQyemc - Obs. Universe & Cosmo. 
Principle: https://arxiv.org/abs/2207.05765 - Galaxy Cluster Scaling Anisotropy: https://arxiv.org/abs/2103.13904 - Giant Arc on the Sky: https://arxiv.org/abs/2201.06875 - Hassabis AI Lecture: https://www.nobelprize.org/uploads/2024/12/hassabis-lecture.pdf - Concentric Circles: https://arxiv.org/abs/1011.3706 - No Low-Variance Circles: https://arxiv.org/abs/1012.1305 - Cumrun Vafa: https://youtu.be/kUHOoMX4Bqw - Daniel Dennett: https://youtu.be/bH553zzjQlI - David Kaiser: https://youtu.be/_yebLXsIdwo - Roger Penrose: https://youtu.be/iO03t21xhdk - Barry Loewer & Eddy Chen: https://youtu.be/xZnafO__IZ0 - Subir Sarkar: https://youtu.be/epkuoytFJWA - Neil Turok: https://youtu.be/ZUp9x44N3uE - Carlo Rovelli: https://youtu.be/hF4SAketEHY - JB Manchak: https://youtu.be/iGOGxaZZHwE - Jacob Barandes: https://youtu.be/wrUvtqr4wOs - Tim Maudlin: https://youtu.be/fU1bs5o3nss - Michael Levin: https://youtu.be/c8iFtaltX-s - Karl Friston: https://youtu.be/uk4NZorRjCo - Yang-Hui He: https://youtu.be/spIquD_mBFk - Erik Verlinde: https://youtu.be/ilVImMHcr_g - Eva Miranda: https://youtu.be/6XyMepn-AZo - Curt on Determinism: https://youtu.be/tJsghrZQaYU More links at https://curtjaimungal.substack.com Guests do not pay to appear. #science Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
I remember the doubt before launching this podcast.
What if no one listens?
What if I'm wasting my time?
If you've ever felt that way about starting a business,
Shopify is the partner that turns uncertainty into momentum.
They power millions of businesses and 10% of all U.S. e-commerce,
from Allbirds to Gymshark to brands just getting started.
No straggler left behind.
Shopify's AI tool writes your product descriptions for you.
It enhances your photography.
It builds you a stunning store from hundreds of
templates. Forget about the dormitive haze of bouncing between separate platforms. Shopify puts
inventory, payments, and analytics under one roof with the propriety of a true commerce expert.
Their award-winning 24-7 support means you're never alone. And that iconic purple shop pay button,
it's the backbone of their checkout, the best converting on the planet, turning abandoned carts
into actual sales.
It's time to turn those what-ifs into what-is with Shopify today.
Sign up for your $1 per month trial at shopify.com slash toe.
That's Shopify.com slash T-O-E.
I saw the equations and suddenly this is my gravitational lensing problem.
He just said, oh, finally, you got it 30 years after we found out.
And it was quite astonishing how disruptive you think something could be.
Dark matter.
85% of the universe's matter is supposedly missing.
But what's the technical truth behind the popular science headlines?
Dr. Jenny Wagner, a scientist at the Institute of Astronomy and Astrophysics
and the Helsinki Institute of Physics,
and winner of the Prize for Courageous Science,
disagrees about how we got that number.
This is because she proved mathematically that most
of what we call evidence for dark matter is driven by the models we insert and not the data we
collect. On this channel, I, Curt Jaimungal, interview researchers regarding their theories of
reality with rigor and technical depth. Today's episode is no different. Wagner found that when you
strip out the model's assumptions, the only information that the data gives you is local.
This means every grand dark matter map you've ever seen is extrapolation. Now, it's not wild
speculation. That's not what she's saying at all. Specifically, she's saying a profusion of models
can all fit the same data equally well. This is called lensing data. By the end of this podcast,
Wagner makes the case that this is what Einstein predicted over a century ago. The homogenous,
isotropic, spherical cow model of the universe eventually needs to be replaced by the next level of
detail. Today, she shows us how we may finally be there. What do we actually know about dark matter?
Dark matter. Dark matter is not something that I would say we know about. It's rather
something, it's a term that came up because we missed matter when we looked at our observations.
And so dark matter was the replacement or was the substitute for this missing mass in our
explanations of certain observations. For instance, in observations of the cosmic microwave background
in the very early universe, but also in the late universe,
where we have cosmic structures that seem to be held together
in a gravitationally bound object.
But then if this object is actually gravitationally bound,
then we miss a lot of matter because the luminous matter that we see,
the baryonic matter, this is too little matter in order to keep this structure together.
So if we do not have dark matter, then the structures
would dissolve over cosmic time.
Then we wouldn't see them as they are.
So what's the problem with dark matter?
What's the matter with matter?
What's the matter with dark matter?
The matter with matter is that I would say in the modern cosmology,
people started out with observations,
with observing stars, observing gas clouds,
observing x-rays from the cosmos.
And all of this seemed to be fine until we suddenly realized
if we take a look at all of these things and we want to explain them in the way that we want to
explain stuff on Earth, like to transfer the laws from Earth to the cosmos, we realize that
matter is missing. And first people thought, okay, we can just improve our telescopes in order to
get a better estimate of all of these missing masses. But on the other hand, if we now estimate
forward, how much mass are we actually missing? It's
85% of the entire matter content of the universe that seems to be missing.
And this is something that we cannot just make up by this stuff, for instance,
small-scale planets or very faint gas that we haven't detected.
So a lot of mass seems to be missing.
And this is the puzzling thing that we think we need dark matter
in order to explain all of the structures, how they behave,
and all of the cosmic structure evolution as well, how the galaxies form.
So what does gravitational lensing have to do with any of this?
I know you have a wine glass, so why don't you tell the audience, or explain or visually,
or show the audience, what is gravitational lensing, and then what the heck does it have to do with dark matter?
So if we take a look at all the cosmic structures that we have,
then we usually need a lot of assumptions to find that missing matter.
For instance, that some gas is in equilibrium or that a certain stage of evolution has been reached,
that the cluster is not merged, that the structures are not merging, something like this.
And strong gravitational lensing is a much more pure probe of the total matter content of such
a structure because it purely relies on general relativity that heavy masses curve space time.
And so the light will not go straight to us, but it will be bent along the so-called null geodesics.
And in that sense, the only assumption we make to probe the entire mass of a structure is general relativity and how mass bends spacetime.
And in order to visualize this, here you see a background source, like a pattern on the wallpaper.
And here is my earth-like gravitational lens.
It's like an inhomogeneous, imperfect glass.
And if I now slide this gravitational lens, as a foreground object bending spacetime,
in front of the background source, you will see that the background source
will not look the same to us as an observer because now the light goes through the
gravitational lens.
And this means now that we do not see the source as it is, but we see a highly distorted,
sometimes magnified and demagnified pattern
that is a distorted image of the source.
And strong gravitational lensing is even worse.
It doesn't only create a single distorted image,
but it creates multiple distorted images
of the same background source.
And so the good thing is,
if we have multiple images that are created by exactly the same background source,
we can then correlate all the information
from all of these objects together
in order to infer
properties of the source
or properties of the lens.
And I say
or because if we know
the lens perfectly
so if we can describe our lens,
then we can reconstruct the source.
On the other hand, if we know the source
as it is naturally
without the lensing effect,
then we can reconstruct the lens.
But in cosmology,
we neither know the source.
It's an object in the early universe far far away from us.
That's one point.
And the other point is our lens, if it really contains 85% of dark matter,
we do not know where it is because it doesn't interact with light.
So how do we know anything about the lens,
about the mass distribution that curves the spacetime in order to reconstruct the source?
So it's a chicken egg problem.
We neither know the source nor do we know the lens.
Ah, okay, so speaking of wine, many people, when they're younger, well, almost everyone, doesn't like wine.
They say to their mom or to the dad, can I taste? And then they taste it. They're like, ah, I don't like this.
But then at some point that changes. And then you wonder, do I taste wine the same as I did when I was a kid and I just like it now?
Or am I tasting something different? And had I tasted what I taste now as a kid, it would have tasted good.
So a similar problem is there with pain perception.
So if person A can feel the same amount of pain as person B and say, I'm okay with it,
but person B complains, then person A just thinks person B is a wimp.
Well, you don't know, is person A just having high pain tolerance,
or do they actually experience the pain as less?
And Daniel Dennett also talked about this.
We don't actually know which one it is.
We know the difference.
So we know, say, A minus B equals five,
but we don't know what's the value of A, what's the value of B.
Is that similar?
Yes, yes.
It's pretty similar as in, I would say, as in everything in this world,
we can only measure or we can only experience changes,
but we cannot have an absolute reference frame.
I mean, if I see that my dad likes wine,
I have the expectation because I like my dad.
I may also like wine.
So if you try the wine and then you have a certain expectation
and it's completely different,
you may not like it at first because it doesn't fit with your expectation because you have nothing
else to refer to and only if you have tasted a certain amount of different wines, maybe even
different qualities, different ages and all of these, then you will realize, oh, I like this wine
more than I like that wine. Or you may find out I've tried a lot of wines and I don't like any of them.
But at first, you only have your expectation.
You have to extrapolate from what you know or what you think other people know.
That's your ground starting point.
And then you gain experience.
And then you realize, okay, I have a certain amount of experience.
And then I can relate one to the other.
And I think this is the same in cosmology.
The only thing we can measure is changes.
And at first, this was Newton's great, I would say great insight,
to transfer the expectation that laws in physics on Earth are exactly the same on the moon or even in farther out space.
This was Newton's idea.
Let's transfer what we know from Earth, our expectations, into space.
It could be different.
I mean, nobody says that this is true.
Could be that this wine is not good.
But on the other hand, we found out, in a lot of examples, with lots of experiences probing ourselves through the
local universe, going to the larger scales, that actually a lot of these physical laws seem to be
also plausible in farther out space. So it seems the same experiences that we make could be made
at other positions in the universe. When I'm wrestling with a guest's argument about, say,
the hard problem with consciousness or quantum foundations, I refuse to let even a scintilla of
confusion remain unexamined. Claude is my thinking partner here. Actually,
they just released something major, which is Claude Opus 4.6, a state-of-the-art model.
Claude is the AI for minds that don't stop at good enough.
It's the collaborator that actually understands your entire workflow, thinks with you, not for you,
whether you're debugging code at midnight or strategizing your next business move.
Claude extends your thinking to tackle problems that matter to you.
I use Claude, actually live right here during this interview with Eva Miranda.
That's actually a feature called artifacts, and none of the other LLM providers have something that
even comes close to rivaling it. Claude handles, inter alia, technical philosophy, mathematical rigor,
and deep research synthesis, all without producing slovenly reasoning. The responses are decorous,
precise, well-structured, never sycophantic, unlike some other models, and it doesn't just hand me the
answers. The way that I prompted it is that it helps me think through problems. Ready to tackle larger
problems, sign up for Claude today and get 50% off Claude Pro when you use my link,
claude.ai slash theories of everything, all one word.
Okay, and now the problem is what? Like, okay, sure, we don't know the mass distribution,
but if we knew it, we could predict the lensing, and if we knew the lensing, we could
predict the mass distribution. The problem is what, that we don't know either and we have to
infer both, or what? Yes, the problem is,
when we know the source perfectly, we can reconstruct the lens.
And when we know the lens perfectly, we can reconstruct the source.
But the problem is in the universe, we are not sitting at the source position.
So we do not know what the source looks like.
And we do not know what the lens looks like either because the lens is also very far away.
And in both of these cases, we have the problem that the source is only visible to us in a very distorted way.
So we do not even know what's the morphology of it.
And on the other side, the lens, this is something that we have even less knowledge of
because we see, let's for instance take a galaxy cluster, we see a few galaxies in this cluster,
but the rest, if we assume that dark matter exists, 85% of this cluster is most likely dark.
And then, how do we make sense out of this lens? What is the distribution of the mass in this
lens if I know less than 20% of this matter? I have a lot of room to
wiggle, to distribute the mass of the dark matter,
and, as well, a lot of room to wiggle in how the galaxies that I actually see move.
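This wiggle room has a precise classical form: the mass-sheet degeneracy (Falco, Gorenstein & Shapiro 1985). A minimal numerical sketch, using an illustrative point-mass lens and made-up numbers (not taken from the episode), shows that rescaling the lens mass and adding a uniform mass sheet reproduces exactly the same image positions for a rescaled source, so the images alone cannot distinguish the two mass distributions:

```python
import numpy as np

theta_E = 1.0  # Einstein radius (arbitrary units, illustrative)
beta = 0.3     # true (unknown) source position
lam = 0.7      # mass-sheet transformation parameter

def alpha(theta):
    """Deflection angle of a point-mass lens."""
    return theta_E**2 / theta

# Image positions solve the lens equation beta = theta - alpha(theta),
# a quadratic for a point mass, so both roots are analytic:
thetas = np.array([(beta + np.sqrt(beta**2 + 4 * theta_E**2)) / 2,
                   (beta - np.sqrt(beta**2 + 4 * theta_E**2)) / 2])

def alpha_mst(theta):
    """Transformed lens: mass rescaled by lam plus a uniform sheet."""
    return lam * alpha(theta) + (1 - lam) * theta

# The SAME image positions solve the transformed lens equation
# for a rescaled source position lam * beta:
beta_prime = thetas - alpha_mst(thetas)
print(np.allclose(beta_prime, lam * beta))  # prints True
```

The identity is exact: beta' = theta - lam*alpha(theta) - (1 - lam)*theta = lam*(theta - alpha(theta)) = lam*beta, independently of the lens model chosen here.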
Okay, so let's get to Jenny's large claim.
What is your big claim?
And then we'll spend the rest of this episode, the next hour or so, dissecting it and
getting into the details.
What's the punchline?
The punchline that I hope to put forward is dare not to know.
I would like to say if we take a look at the observables that we have, we should clearly separate our model assumptions from the data, and this implies in the end that we may not need as much dark matter as we currently think we need to describe the cosmic structures.
So I don't want to fill the knowledge gaps in our mass reconstruction with model assumptions.
I would like to say, let's stick to the information that the data gives us, and that's it.
And the rest is whatever.
There might be something.
There might be nothing we don't know.
Okay, so the amount of dark matter is potentially drastically overestimated?
Yes, it could be.
I mean, the calculations that I do and my colleagues did, I mean, I'm not alone in this.
We're a team.
And so what we found out is that if we take a look at the multiple images,
in such a strong gravitational lensing event,
then we can infer local properties of the lensing structure,
and we can also infer what the source looks like up to an overall scaling constant.
So we get the morphology of the source, like the relative view of it,
and we get the local information of the light-bending object in spacetime,
but only the local properties of this lens, which means we know the local distortion directions,
like how is the lens distorting each multiple image?
And we also know what is the relative size between these multiple images.
So we know what's the relative power of the lens between the different positions.
And all of this is the maximum information that is completely given by the data.
So we do not make any additional assumptions,
how the dark matter could be distributed in the entire structure.
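In standard lensing notation (the symbols here are the usual textbook ones, not spelled out in the conversation), the locality she describes can be read off the lens equation itself:

```latex
% Lens equation: source position \beta from image position \theta,
% via the deflection angle, the gradient of the lensing potential \psi
\beta = \theta - \alpha(\theta) = \theta - \nabla\psi(\theta)
% Local distortion at each image: the Jacobian of this mapping,
% with convergence \kappa(\theta) and shear (\gamma_1, \gamma_2)(\theta)
A(\theta) = \frac{\partial\beta}{\partial\theta}
          = \begin{pmatrix} 1-\kappa-\gamma_1 & -\gamma_2 \\
                            -\gamma_2 & 1-\kappa+\gamma_1 \end{pmatrix}
% Every quantity is evaluated AT an image position \theta_i.
% Multiple images of one source constrain only ratios of these local
% quantities (relative magnifications, distortion directions) at the
% \theta_i -- nothing about the mass distribution in between.
```

This is the formal content of "there's always an x": the data constrain $\psi$ and its derivatives only where images actually form.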
So I'm sure a critic would just say, look, we have so much converging evidence from simulations,
rotation curves, the CMB, the Bullet Cluster, which I'm sure is always thrown around.
How can you say that this is just model-driven?
Because most of these, what you think is evidence, is actually model-driven in exactly the same way,
because we only have a limited amount of reasonable models,
and we usually imply these,
because on the one hand, if we start from the early days of modern cosmology,
people tried to understand everything analytically.
So the first models were spheres, ellipsoids,
and maybe something a bit more complicated.
And then in the 1970s, 1980s, computers started to take over,
and this was the time when people said,
hey, we can do this numerically, we can implement an entire universe in a computer.
For instance, Volker Springel, he has one of the biggest cosmological simulations on Earth that
he says, I can model an entire universe, including all galaxy clusters and everything, in a computer.
And this is then numerical.
But on the other hand, if you just take this numerical simulation, you have the question, what did
you put in?
You put in a certain amount of assumptions.
Like, for instance, that your cluster is subject to Newtonian gravity,
or that your cluster is held together by Newtonian gravity
and expanding on a certain cosmological model.
You implement a lot of assumptions,
and so based on these assumptions,
you will see certain structures or you will not see certain structures.
This is why Volcker has additional simulations to say,
let's assume dark matter, for instance, has different properties.
So let's play around if dark matter is just a collisionist fluid and these particles do not interact, what happens then? How do my structures look like?
If I then let these particles collide, how would my structures look like then? And he finds out it's vastly different.
So I'm not a dark matterist. I'm not a relativist. I'm not a cosmologist or an astrophysicist. I'm just a fool. A fool with some good looks maybe, let's say. But these physicists,
they're not fools. So when you bring it up to other physicists, the model-dependent argument,
that must have occurred to them. What's the reception like?
I would say it depends on whom you ask. I mean, I would say the community is divided,
that half of the community says, yeah, of course we believe in dark matter because we can simulate it.
We see that it's missing in all of these different probes together. But on the other hand,
if you take a look, all of them have a very large overlap of the same assumptions and everything
is filling a lot of voids where we do not have data with models. This is the camp that I would say
is in favor of whatever I model should actually be there in reality. And then there is the other
camp which I belong to. And these people, they say, if we now take a look at our observables,
what have we actually measured?
And if we now use a model to describe this, we know that this is not the reality.
So we know that we make a simplification to reality.
As George Ellis usually says, the cosmos is much more difficult and much more complex than just a spherical cow.
Because our cosmological standard model is a spherical cow.
It's homogeneous, isotropic.
So the same in every direction.
And the matter density is the same everywhere.
So this is something that is very, very simplistic.
And if we use this model, we need to make sure that we know how well it fits our data.
And if we get better data, is the model still appropriate to describe the degree of details that we have in the new data?
How did Mark Gorenstein react?
Oh, yeah, I mean, when I told him that I think that it's the local information
of gravitational lenses
that is actually the information
in the data and anything
else is just model dependent
he just said
oh finally you got it
30 years after we found out
that lensing is degenerate
and lensing models can
fit the data equally well
so this was
for us this was a puzzle
And funny enough, he came from astrophysics,
and then he moved on to
biophysics, and he looked also a lot into the optical analogy between strong gravitational
lensing and optical stuff. And I came from biophysics and I moved into astrophysics. So we had
some kind of like overlap, which was quite funny. So in the end, I think he could have found it as
well, but he didn't see that we had to get rid of the model. He thought, okay, let's just probe a lot of
models and then we will find out what is the underlying principle. So, forward modeling:
inserting different models and finding out, are the observables still the same?
Or does the model predict different observables?
He thought if we do this long enough, we will find it out.
But actually, I came the other way around.
I said, we need to get rid of the model.
We need to find what is the common thing that the formalism itself tells us without inserting
any model.
And I found out the formalism is local.
Every equation that is there has an x in it, meaning the
position of the multiple image. There's always an x. It's the potential at x. It's the deflection
angle depending on the position. Everything is position-dependent. So I thought, okay, it can only be something
local. It cannot be something global. And then I found out if we leave out the model, the only thing
that remains is the local information. This video is sponsored by Shortform. If you want a free trial
and an exclusive $50 off their annual plan, then go to the link in my description, shortform.com slash T-O-E. If you're like me, you've encountered books that are so dense, finishing them
is actually just the beginning. Shortform helps with that. Their book guides go far beyond pastiche
summaries. They critique, they add context, they include interactive exercises, and connect ideas
across authors. Take Gödel, Escher, Bach, or The Master and His Emissary, two of the most
demanding reads in consciousness studies on the popular market. My method is I read the guide first,
then the book, then I read the guide again.
So it's a triptych of engagement that cements understanding,
better understanding for me.
The GEB guide maps recursive structures
in a way that exhibits intellectual pleiotropy
where one insight branches into consciousness,
computation, and self-reference simultaneously.
Shortform covers philosophy, science, and psychology,
ipso facto, the intellectual core of this channel.
They publish new guides weekly,
and subscribers vote on what books get covered next.
Their browser extension, Shortform AI,
summarizes articles and YouTube videos with a single click.
Go to shortform.com slash T-O-E for a free trial and an exclusive $50 off your annual subscription.
That's shortform.com slash T-O-E.
Okay, so walk me through.
What does that mean physically?
What does local information physically mean?
Local information physically means that usually people want to determine the entire mass of a gravitational lens,
like of a big object.
And so they're making an assumption, based on the multiple images that they see, about
what the total mass of this object is.
So they're inferring a huge global mass for one object based on a few data points.
But if you now say the local information is the maximum information you have,
it's obvious, you cannot get the entire mass.
All you can get are some directions of how the lens is distorting.
You only know the properties of the lens at these positions, and this doesn't give you
any information about the mass directly, not even a mass at these image positions.
Wouldn't the critic just say, hey, we always have incomplete data for everything, not just
astrophysically. Down on Earth, too, incomplete data
abounds. And so what you're articulating is just fallibilism. We could be wrong or some form of
epistemic humility. But at some point, one has to commit. Like every measurement, yeah,
has model dependencies. But at some point, you just say, okay, well, this is the model. And this model
works or something like that. So tell me what you think. Yes. Yes. Yes. I mean, in principle,
when I realized that it's only local information that we have, then I had the problem, okay, this model that we
have, we insert it. So on the one hand, we cannot say much without a model, but then how good
of a quality are our models that we have? And there, if you go back to what's their origin,
it started as some idea that we have, it's called a singular isothermal sphere, meaning it's a
gas in equilibrium that's spherically distributed because this comes out as a solution of an
analytic problem: what would a very idealized gas cloud look like? And that comes from, like,
the first calculations that we did for astrophysics when we said we want to know how stars
form so we consider a gas cloud that is collapsing under its own gravity and all of this.
So we had these power laws with like one over R to the power of something and the singular
isothermal sphere happened to be one over R squared. This was one of the first, I would
say like structure descriptions that people had.
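For reference, the singular isothermal sphere she refers to is the standard textbook result for an isothermal, self-gravitating gas (this derivation is filled in here, not quoted from her):

```latex
% Singular isothermal sphere: isothermal self-gravitating gas with
% constant one-dimensional velocity dispersion \sigma_v
\rho(r) = \frac{\sigma_v^2}{2\pi G\, r^2}
% The enclosed mass then grows linearly with radius,
M(r) = \int_0^r 4\pi r'^2 \rho(r')\, dr' = \frac{2\sigma_v^2}{G}\, r
% so circular velocities are flat: v_c^2 = G M(r)/r = 2\sigma_v^2,
% which is why this 1/r^2 profile became a workhorse lens model.
```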
And then they said, hey, if we now have some structure that is not an ideal gas cloud,
let's just stick to these power laws and let's just fit for whatever goes,
one to the power of alpha.
And then they found this still doesn't fit.
And then they did a more elaborate model based on simulations to fit for something that is in the end a heuristic
mass density profile that seems to come up in simulations.
So you can now ask, we have a model and we could say, okay, we validated it in some sense by
simulations.
But do I believe that these simulations are actually mimicking reality?
That's one point.
And the second point is a theoretical, fundamental physical explanation, unfortunately,
is still missing.
This is why I also started to develop a derivation of some.
much mass density profiles in order to reason why this power law is actually reasonable and why
we should use it, why we should trust it. And this is very complicated, of course, because a lot of
people struggle here, because normally we have statistical mechanics. That's what we've learned since
1900 something. But the problem is that this statistical mechanics is not suitable for gravity.
And I think Roger Penrose, he would definitely agree to this. He also propagates this quite a lot
of times that he says, gravity is exactly the opposite of what we think in statistical mechanics
happens. The entropy should always increase and then you have like an uniform distribution of
everything. But what does gravity do? Gravity takes exactly the opposite direction. Everything
that is distributed is just collapsed into a single point. This is, I would say, maximum
order instead of maximum entropy in that sense. So how do you use a statistical mechanics
approach that is doing the opposite of what gravity does in order to describe gravity?
And this was my problem when I started to set out, okay, how can we describe a mass density profile,
how can we derive this, and I'm not allowed to use statistical mechanics.
And so I called it DEMON, for dark emergent meta-halo explanation.
And that was my approach that I put forward.
And I found out that it is possible to reason why this power
law is good, but not using statistical mechanics to just say, okay, we start with Newtonian gravity,
and then we say Newtonian gravity is scale-free. And this scale-freeness is obviously something
that leads to power laws. This is what we also see in nature on Earth. And then you can reason
why a power law should be a good approach. And then depending on how you coarse-grain your galaxy
distribution into some density, you can then argue why a certain exponent is reasonable or not
and in which limit you can get this. So I'm more and more convinced that these power laws are
actually reasonable, but still, I mean, we need to further understand from a fundamental physical
perspective which model is actually good. And this is what I would like to see in the near future
for astrophysicists. Don't use black boxes. I mean, simulations are important, but don't use them
as black boxes and just heuristically fit. Try to understand, to bridge the gap between theory
and simulation in order to get a better understanding what we're actually doing.
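The scale-freeness argument she sketches can be made concrete with a standard functional-equation step (filled in here as a reasoning aid, not quoted from her): if a profile has no preferred scale, rescaling the radius can only rescale the profile, and the only smooth solutions are power laws.

```latex
% No preferred scale: rescaling r multiplies f by some factor g(\lambda)
f(\lambda r) = g(\lambda)\, f(r) \qquad \forall\, \lambda > 0
% Differentiate with respect to \lambda and set \lambda = 1:
r\, f'(r) = g'(1)\, f(r)
% With a \equiv -g'(1), the solution is a pure power law:
f(r) = C\, r^{-a}
% Newton's force law F(r) = G m_1 m_2 / r^2 is of exactly this form
% (a = 2): it singles out no length scale.
```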
Why don't you give us an intuition as to what a power law is and then what scale-free is
and why it's reasonable to think of Newtonian gravity as scale-free and how that may imply power
laws? What does this all have to do with one another?
I mean, a power law is an inverse proportionality in that sense, but to a higher exponent.
So you have one over R to the power of alpha.
And this means that if you have the curve, I guess you can show the graph of the curve,
what this looks like for one over R squared or something like this.
And then you see that the farther you go away from your origin, the less important this becomes.
But the point is it is still there.
is not going exactly to zero. It's not really being damped away. So this is the power law
that you can use, for instance, to describe the richness of people, the wealth of people.
And then this usually follows a power law that you have a lot of people who earn hardly anything.
And then you have like your middle class. And then you have the bunch of millionaires. But obviously
it's not zero. And you cannot say if you have 10 millionaires who have a certain amount of money,
that you now know exactly what's the total amount of wealth that a person can have.
Because there's always the surprising moment that somebody, like a billionaire,
could step into the room and say, here's my paycheck, I earn much more.
And this is the, I would say this is the perk of a power law,
that sometimes there could be, depending on the exponent,
there is no mean, there is no variance.
And this means that the universe could keep on surprising
us. It's not something that we can fully grasp by saying, okay, here's the mean
and here's the variance, and now anything else is not allowed, or anything else would be
completely challenging everything. And the perk of these power laws is there is always a
chance of surprise, that if we find a structure that is too big to be true, in power-law
statistics this can come up, this can happen, and it's not unusual in a sense that it's
completely challenging the worldview immediately. This is why I
like this idea of power law statistics. And Newtonian gravity is one example because it has a
gravitational law, the force, is 1 over R squared. So this means that the force only depends on
the distance between two objects. It doesn't depend on the time at which these two objects
come together. It doesn't depend on any other property than just the masses, which are in the
numerator, and the R squared in the denominator.
And this is a nice thing of scale-free gravity like this Newtonian law.
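For readers who want to see the "chance of surprise" concretely, here is a small toy sketch of my own (not anything from the episode): sampling a Pareto power-law distribution with a small exponent, a single draw can rival the sum of all the others, exactly the billionaire-walks-into-the-room effect described above.

```python
import random

# Toy illustration of heavy-tailed power laws: p(x) ~ x^(-alpha - 1) for x >= 1.
# For alpha <= 2 the variance diverges, and for alpha <= 1 even the mean does,
# so the distribution can keep "surprising" us however much data we collect.
def pareto_sample(alpha, n, seed=0):
    rng = random.Random(seed)
    # Inverse-CDF sampling: x = u^(-1/alpha) with u uniform in (0, 1]
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

heavy = pareto_sample(alpha=1.5, n=100_000)  # variance diverges
light = pareto_sample(alpha=5.0, n=100_000)  # mean and variance both finite

# In the heavy-tailed case the single largest draw (the "billionaire")
# carries a far larger share of the total than in the light-tailed case.
print(max(heavy) / sum(heavy))
print(max(light) / sum(light))
```

With the small exponent, no finite sample pins down the variance; the next draw can always dominate everything seen so far.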
Most of my best ideas don't happen during interviews.
They come spontaneously, most of the time in the shower, actually, or while I'm walking.
Until I had Plaud, I would frequently lose them because by the time I write down half of it,
it's gone.
I tried voice capture before, like Google Home, and it just cuts me off in the middle.
It's so frustrating.
Most of my ideas aren't these 10-second sound bites.
They're ponderous.
They're long-winded, and I wind around.
They're discursive.
They're five minutes long.
Apple Notes, even Google Keep, the transcription there is horrible.
But Plaud lets me talk for as long as I want, and there's no interruptions.
It's accurate capture.
It organizes everything into clear summaries, key takeaways, action items.
I can even come back later and say, hey, what was that thread I was talking about regarding
consciousness and information?
In fact, this episode itself has a Plaud summary below, and I'm using it right now over here.
My personal workflow is that I have their auto flow feature enabled, so it sends me an email
anytime I take a note.
Look, the fact that I can just press it and it turns on instantly like right now it's starting
to record without a delay is extremely underrated.
This, by the way, is the Note Pro and then this is the NotePin.
I have both.
Over 1.5 million people use Plaud around the world.
If your work depends on conversations or the ideas that come after them, it's worth
checking out. That's plaud.ai slash T-O-E. Use code T-O-E for 10% off at checkout.
Now, I have the pleasure of speaking with you before this call, and so I have a quote from you.
I want to read out. Yes, yes, sure. I hope I wrote it down correctly. It's roughly correct.
The entire community suffers from the problem that people like to model. Forward models can be
generated by every bachelor student in the first week. It's an entire machinery. But if you want to
solve the inverse problem, then it's much harder. It requires much more math. It's ugly. It may
take months or years. I hope I've roughly captured you correctly. Yes. I hope I didn't
lens it, I hope I didn't distort what you said. So anyhow, please explain the difference between
forward modeling and then inverse modeling because it's extremely important.
Yes, I think that forward modeling is something that everybody intuitively does. It's
also in everyday life. When you see something, you immediately try to find a cause and a reason
and how everything happened from the start to the point at which you observe something.
And so forward modeling is something that everybody does. And it's something where we build up
a certain expectation, and the moment that we get a confirmation that our forward model was
predicting something correctly, we are happy. So this is something that gives us some,
yeah, some endorphins, if you want to see it like this.
But on the other hand, the moment that we forward model, which is quite easy to do, obviously,
to just think from start and then think, how do I get to the point that I observe my scenario?
This is the point that when you then say, okay, I get the prediction, but now if the prediction is wrong,
nature tells you, sorry, no, this is not the way that I work.
Then you try a different model.
And then if you get another rejection, you get another rejection.
And then another, another, another.
And this is where we stand in cosmology right now.
We ask, okay, we have tensions in cosmology.
We have a lot of things, a lot of stuff we don't understand.
So is it this direction?
And nature tells us no.
Then we ask, okay, can we wiggle around with this other parameter?
No.
Or can we wiggle around here?
No.
And if this is not solving the tension, we don't get a direction where to
go. And I don't know why the community likes to do this. I guess only it's because you think you
are close to the solution and you want to guess it right. Because it seems that from a philosophical
point of view, having a forward model and then predicting something and getting the approval of
nature, yes, this is right. This is supported by the experiment. This is something that everybody
values as it's a good scientific theory because it made a prediction and it turned a
to be true. But on the other hand, if you do the inverse problem, you see something and then you say,
based on what I see, how can I reason, how could I have gotten there in all possible ways that
is there or what is the necessary, the necessary model that gets me there? If a mug lies on the floor
and is in a thousand pieces, it doesn't matter whether it was the dog who put it over the table or
whether it was the kid or whether it was myself.
It doesn't matter.
What matters is this.
The necessary assumption is the mug is in a thousand pieces on the floor.
It must have fallen from the table.
This is the necessary model that I have there if I see the table and if I see the mug.
Okay.
But this is all.
This is all.
And this is the point that I say, if we have this necessary model, we know that this is something that is required by,
the data. And if I now say, okay, I know the mug is broken, I know it must have fallen from the table,
or maybe it was pushed from the table, then I look, could it have been the dog, could it have
been my daughter, could it have been my husband, whoever? Then I start to look, who could it be?
Like, looking for the culprit after I fixed the point, I can go to the next part of the model in order
to make my causal relation working. And I think that this is a more,
positive way of doing science
to start with the necessary thing
because if I see
it must have fallen off the table
I have a positive idea
okay, yeah, the mug is broken,
but I have a positive idea of
I have a model that must be
true and the next
step is now to find who
pushed it if it was pushed at all.
Maybe the window was open and it fell from itself.
But all of these additional
assumptions are for the first point
not necessary. But for the next
point of my causal relation. But if I'm wrong, if it wasn't my dog, then I don't
fall back to square zero. I just fall back to: it must have fallen from the table. And then I just
look for the next possible reason. Okay. So if I'm understanding this correctly, you're explaining
some of my psychology. So I very much like cop shows or true crime shows. And I think that's because
I'm so theoretical that I'm constantly doing forward modeling. So I care about
what is the cause and then I try to produce what the effect would be.
So I'm thinking, okay, here's an equation, here's a cannonball.
It's a first-year student's problem.
You think of balls that are thrown in the air.
And then you think what would happen if I did this?
But you're thinking about it experimentally from a data analysis point of view.
No, the universe doesn't start from our model and then run it forward.
The universe gives us data and we're trying to get the model that accurately matches the data.
So this is what cops do.
This is what detectives do.
They have the effect.
They have the then and they're trying to infer the if.
Whereas theorists start with the if and then they do the then.
Yes, I'm CSI cosmology if you want to see it like that.
Right.
Great, great.
Okay.
So explain how that forward versus inverse, which every student does forward.
The inverse is the reason why it's messy and it's not a one-to-one map.
it's a one to many map. So as you gave the example with the shattered glass on the floor,
you look at that it could have come from multiple sources. There are many coarse grainings
that are consistent with that shattering. So it could have been your dog, could have been
your daughter, could have been your husband, could have been you, and when you're sleepwalking,
blah, blah, blah. So how does this now, this abstract concept of forward versus inverse modeling
relate to dark matter? And some of the problems that you think we have with modeling dark matter,
maybe we've overestimated the amount of dark matter.
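The one-to-many character Curt describes can be sketched in a few lines of code (a hypothetical toy of my own, not a physics model): the forward map is trivial to evaluate, but inverting the observation only recovers the set of consistent causes, the "necessary model," never the single culprit.

```python
# Sketch of the forward vs. inverse asymmetry. The forward map is easy to
# evaluate, but many different causes produce the same observation, so
# inverting it is one-to-many. (Hypothetical toy scenario.)

def forward(cause):
    # Every shattering cause below ends in the same observable outcome.
    outcomes = {
        "dog bumped table": "mug shattered on floor",
        "kid bumped table": "mug shattered on floor",
        "wind from open window": "mug shattered on floor",
        "mug glued to table": "mug intact on table",
    }
    return outcomes[cause]

def inverse(observation, candidate_causes):
    # All the data alone can give us is the *set* of causes consistent
    # with it -- the "necessary model", not the culprit.
    return [c for c in candidate_causes if forward(c) == observation]

causes = ["dog bumped table", "kid bumped table",
          "wind from open window", "mug glued to table"]
print(inverse("mug shattered on floor", causes))  # three consistent causes
```

The inverse step is honest about its ignorance: it returns everything the observation permits, and ruling out one candidate never collapses the whole reconstruction.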
I mean, the biggest problem that I see is we are not short of models.
I mean, since Zwicky had the idea that there could be some missing mass in the Coma cluster,
which is one of our neighboring clusters, and he found out there seems to be missing mass.
He was actually more convinced that it's just a missing mass that we have from,
I mean, that hasn't been discovered.
It wasn't a big thing.
But then when people saw that there is something,
some potential for something new.
Of course, the particle physicists
they immediately jumped at this.
They said, okay, if this is a new particle,
if this is some new substance,
then we can go and look for it at CERN.
We can go and look for it at the Fermilab and everywhere.
So theoreticians in particle physics got excited about this.
And then there are by now myriads of ideas,
which particles it could be.
And of course, CERN, Fermilab, SLAC,
they all went and tried to find a particle.
And then the string theoreticians, I mean, they got their hopes up that it could be something
like a minimally coupled thing from supersymmetry.
They said, okay, maybe this is finally a minimum stable particle in supersymmetry.
They were happy about this.
But then, after all of the particle physicists brought their idea forward and went to measure,
there hasn't been any detection.
So there are lots of models.
people tried a lot of models.
So far, no clear evidence, this is a dark matter particle.
And then people said, okay, maybe it's not a particle.
And then there were people like Moti Milgrom, or there are several other people, who said,
actually maybe we're just misguided that we're not missing mass,
that we are actually having something wrong in our gravity.
The statement Einstein was wrong is
the best clickbait.
Einstein was wrong.
It's not missing mass.
It's actually missing gravitational understanding.
And so this is why Moti Milgrom,
he started with the modified Newtonian dynamics
because you think that most of these cosmic structures
in their own vicinity can still be described
more or less quasi-Newtonian.
That you say, I can say to a good approximation,
Newtonian physics still works for them.
And so Moti thought, how about we modify
Newtonian dynamics, because our best probe that we have of how gravity works is the Pioneer satellites.
And then he said the scale on which they have probed the universe, we can be sure that the
gravitation works as Newton said. But if we now go out, we don't know. It could be different.
So let's modify Newton's law based on a scale that is larger than what we have measured with our
satellites. It's a totally valid assumption. And so he went on and he found there is actually,
there seems to be a law that seems to work. And MOND has been around; there was a conference,
MOND at 40, meaning the 40th anniversary of MOND. And of course, it suffers from problems.
It's Newtonian. It's not general relativistic. Then people said, yeah, it suffers from certain
problems. You have one scale parameter in there. You cannot argue where it comes from. You fix it by
fitting galaxies, for instance, or you're fitting some galaxies and then you get the scale
parameter based on the data you measure. There are lots of issues. And people say, this is why
we don't want to have this modified Newtonian dynamics as an alternative to dark matter.
But on the other hand, MOND survived 40 years, and it was able to solve some of its problems
by extending the theory. And I would say, if we compare the ideas, missing mass that we
explain with dark matter, whatever dark matter model it may be, or modifying gravity because we're still
missing mass, it seems both of them have their issues, and we could say both of them are in
some sense not sufficient. There still is some question here. But on the other hand, as long as we
do not know, can it be a particle, or is it really some modification of gravity, maybe, maybe not,
we cannot know what is actually out there. And I think as long as we don't
know for certain, both of these ideas, both of these explanations, are still somehow compatible with the data within their realms.
I mean, both of them have problems, but on the other hand, I do not see that I could now definitely refute MOND based on something that I see.
And I cannot definitely refute dark matter on something that I see.
So I'm happy.
The data tell me both of them have problems, but in some sense, both of them are compatible with most of the data that I have.
So for me, I don't see any problem.
Why should I now go for dark matter, or go for MOND, or go for anything else in favor, and leave the other stuff aside?
I cannot say this works better than that one, purely based on the data that I see.
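To give a feel for what Milgrom's modification actually does, here is a toy sketch with made-up galaxy numbers, using the commonly quoted "simple" interpolation function mu(x) = x/(1+x); this is my own illustration, not a model discussed in the episode. Below the acceleration scale a0, the effective acceleration is boosted so that rotation curves flatten instead of falling off.

```python
import math

# Toy MOND sketch with the "simple" interpolation function mu(x) = x/(1+x):
# solving mu(a/a0) * a = a_N gives a = (a_N + sqrt(a_N^2 + 4*a_N*a0)) / 2.
# The numbers below are illustrative, not fitted to any real galaxy.
G = 6.674e-11          # m^3 kg^-1 s^-2
M = 1.0e41             # kg, rough stellar mass of a galaxy (assumed)
a0 = 1.2e-10           # m s^-2, Milgrom's acceleration scale

def v_newton(r):
    return math.sqrt(G * M / r)

def v_mond(r):
    a_n = G * M / r**2
    a = 0.5 * (a_n + math.sqrt(a_n**2 + 4.0 * a_n * a0))
    return math.sqrt(a * r)

# Far out, the Newtonian curve keeps falling while the MOND curve
# flattens toward (G*M*a0)**0.25.
for r in (1e20, 1e21, 1e22):   # roughly 3 kpc to 300 kpc
    print(f"r={r:.0e} m  newton {v_newton(r)/1e3:6.1f} km/s  mond {v_mond(r)/1e3:6.1f} km/s")
```

The single free scale a0 is exactly the parameter Jenny mentions: it has to be fixed by fitting galaxy data, which is one of the standard objections to MOND.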
The Economist covers math, physics, philosophy, and AI in a manner that shows how different countries perceive developments and how they impact markets.
They recently published a piece on China's new neutrino detector.
They cover extending life via mitochondrial transplants, creating an entirely new field of medicine.
But it's also not just science.
They analyze culture.
They analyze finance, economics, business, international affairs across every region.
I'm particularly liking their new insider feature.
It was just launched this month.
It gives you, it gives me, front-row access to The Economist's internal editorial debates,
where senior editors argue through the news with world leaders and policy makers and twice-weekly
long format shows. Basically, an extremely high-quality podcast. Something else you should know about
is that if you go to their app, they not only have daily articles, but they also have long-form
podcasts with their editors and writers. This is also available online. Whether it's scientific
innovation or shifting global politics, the Economist provides comprehensive coverage beyond
headlines. As a TOE listener, you get a special discount. Head over to economist.com
slash T-O-E to subscribe.
That's Economist.com slash T-O-E for your discount.
Okay, so to stick with this and get this point fully across about forward versus inverse modeling,
as it's super, super important.
And one of the reasons why I'm so excited to speak with you is that you've developed these
data analysis techniques that not only will be useful for cosmology,
but potentially, we spoke about this before, but off-air,
potentially for particle collider physics, particle physics, and for interdisciplinary work.
Biophysics, biomedical, whatever.
Yeah, you have a background in machine learning and cancer research of various sorts.
And so it's super interesting because it's a template that, I don't know if you can also help solve crimes, but it's a template.
Yes, in principle, I would say it's also valid for CSI.
Okay, so I'm in Toronto and there are condos all about.
And so let's just imagine, let me give some analogy and see if it's roughly on track.
Suppose I'm trying to model that building's foot traffic.
And I can't measure it directly, it's too difficult for me to do so, it's intrusive, I don't want to,
but I can at least measure doors opening and closing.
I'm going to use that as some proxy.
Then I have some model of what the residents do, their work schedules, the concierge's work
schedules, visitors, and so forth. Then imagine that the doors open, I collect that data,
more than my model predicts. I could postulate an invisible occupant called dark tenants,
which you would get a lawsuit if you refer to your tenants as dark tenants, especially
if they look like me. But let's just say, so it could be that. It could be that there's some
unknown residents. It also could just be delivery drivers. It could be that there's automatic doors opening and closing.
You can call them phantom people. Yes. Great, great. So another one is, I think I did this a while ago. I was
trying to measure my own electricity bill a decade ago. And then I said, okay, my fridge takes up this much electricity and so on, so on. And then the bill was a bit higher than I thought. So I could have
posited some dark electricity. Or it could just be that my model of the fridge, that it's nonlinear and
the compressor or that there was some other source or maybe the electric bill is wrong.
So is that what you're saying?
It's like we don't, we need to be careful about what entities we're positing to exist.
Yes, that, yes, that's exactly the point.
I mean, if you try to model who goes in and out the next door condos, then first and
foremost, the way you described your model, have a look.
How many parameters did you put in there?
I mean, you had to look, you need to know the working schedule of the people, you need to know how many people live there, you need to know, do these people have family who visit quite often? I mean, all of this kind of makes your parameter space explode. You need to keep track of a lot of things. And now imagine you are wrong in just one of these things by a factor of two or so. Then your model would be completely off, because you have to look at all of the families in this building. And then if you are just a bit off
with one estimate, this adds up.
And this is, as you said, your nonlinear effect in the fridge, in the fridge electricity.
And then you can see that all of this forward modeling is really fragile.
And this is why I also think that having the inverse model is easier.
It's actually, okay, it's mathematically more challenging, but it's easier in the sense of you only have a few,
you have fewer amounts of parameters and reducing stuff to the necessary,
ingredient, it gives you a clearer understanding of what is actually there. You do not need to know
all of the family members of each condo. You do not need to know what's the working schedule of them.
You just need to know, is there a bug in the door openers, so that when some bee flies by, the door
opens? Or is there something that the moment the automatic light goes on, the door opens?
You just need to understand, when does the door open? Because this is
what you're interested in. So your inverse model would be, instead of trying to assemble all
information you can about all the inhabitants in the condo, you just need to understand how does
this door work? And this is a much, I would say a much easier problem to solve if you have
figured out that this is what you actually want. Now going on to MOND versus dark matter, wouldn't the
proponent of dark matter say, well, look at the CMB,
that has nothing to do with lensing,
nothing to do with galaxy models or rotation curves,
and there's plasma oscillations from 400,000 years or so after the Big Bang,
and that's independent physics.
There's one of those acoustic peak rays, sorry, acoustic peak.
Berion acoustic oscillations.
Yes, yes, B-A-O, B-A-O, yes, right, right.
All this is, again, it's independent evidence.
It's not just we're looking at the galaxy and saying that it's spinning too fast,
or too slow or not in the right places or what have you.
So what do you say to that?
Yes, I agree.
I mean, the CMB is one of the, I would say, the most complicated things to analyze.
And it's definitely a complicated and complementary probe.
So you have the very early universe where we think it was more or less a rather homogeneous soup.
And then we see these tiny wiggles on top of it, these fluctuations on the CMB.
So in that sense, we have an easy game to
describe this cosmic microwave background with our cosmological standard model.
So you would say it's a really clear and clean probe that you have something that is more or less
homogeneous and isotropic. If you now fit something homogeneous and isotropic plus epsilon,
you would get really good constraining power with the data. And we have an all-sky
survey with the highest resolution we could get from the Planck satellite after years of
expertise from COBE, WMAP, and all of these.
So CMB is really something that we would say, yes, that is really a high precision probe.
But to see the CMB, we need to get through all of the foregrounds.
I mean, the CMB was the baby picture of the universe.
Until this light reaches us, it has to go through a lot of stuff.
And this means, have we corrected the foregrounds correctly?
I mean, there is the band of the Milky Way.
And usually we see the CMB picture, like the fluctuation picture.
We usually see this as an all-sky map, but this is actually not what we measure.
What we measure is something that is rather at the upper parts and the lower parts.
And in the middle, there is this band from the Milky Way, which usually has less significance.
The signal-to-noise ratio there is lower.
So what people do is they improve on this data by simulations and also by extrapolations.
and by additional data.
And all of this goes into this picture that we see,
and then everybody says, that's a very clean probe.
But I would say the thing itself, yes.
But the data processing that you need to do
in order to actually get to this is much more intricate.
So in that sense, you could say we are fitting something clear.
We get a high precision, admittedly.
But still, have we gotten all of the foregrounds right?
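As a rough feel for how much sky the Milky Way band costs, here is a back-of-the-envelope toy calculation of my own; the real Planck masking and foreground cleaning is far more intricate than a plain latitude cut. Masking everything within Galactic latitude |b| < b0 removes a fraction sin(b0) of the sphere.

```python
import math

# Back-of-the-envelope sketch: on a sphere the area element is cos(b) db dl,
# so masking all Galactic latitudes |b| < b0 removes a fraction sin(b0)
# of the full sky. (Real CMB masks are far more intricate than a plain band.)
def masked_sky_fraction(b0_deg):
    return math.sin(math.radians(b0_deg))

for b0 in (10, 20, 30):
    print(f"|b| < {b0} deg masks {100 * masked_sky_fraction(b0):.1f}% of the sky")
```

Even a modest 20-degree band hides roughly a third of the sky, which is why the in-painting, simulations, and extrapolations Jenny mentions carry real weight in the final all-sky map.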
And concerning dark matter here, for instance,
we could say, okay, we have a parametric model,
and we fit this parametric model to our observables.
So we look at these peaks,
and the relative height between the second and the third peak
is supposed to give us the amount of dark matter,
based on our models again.
So this means we fit a model to data,
and then we find if we fit this model,
we are missing so much mass.
So now the question is,
and this is definitely a very, very hard task,
can we explain the cosmic microwave background without dark matter?
Or what would then be the replacement for dark matter?
This is something that maybe we missed an understanding there.
This is possible, but on the other hand, I say the CMB is the hardest thing to explain
in terms of getting rid of dark matter.
This is definitely true.
But on the other hand, I would say,
If we start from us on Earth, we haven't seen dark matter.
And if we then go further out in the local universe, it doesn't seem that we need a lot of dark matter there either.
So how come that suddenly, the farther away we go from our own position where we have lots of data,
how is it possible that suddenly, when we go far away,
we need a lot of dark matter, or that we are missing mass?
Is it rather, I would say it could also be well possible, that we are actually missing something in our models? That we are not missing mass, but just that nature is more complicated than we think, and the models that we have are not complex enough to capture that.
I mean, you mentioned the bullet cluster.
The bullet cluster is one of the famous examples where people said, this is the evidence for dark matter.
The smoking gun.
The smoking gun, in the literal sense,
this is why it's called the bullet cluster,
because you see that you have two big clumps of matter
that have been merging, have been colliding,
and then they fly away, so they flew through each other,
and now they fly away at a certain distance,
and then you see that you have in the x-ray signal,
you have the baryonic hot plasma
that also interacts
in the collision. And then because there is a lot of physics going on there, like the plasma
interactions and all of this, the x-ray gas is then delayed in this merger. And then
you see two bullets, one of them is really clearly like a bullet, flying out again after this
collision. And then people say, if we now take a look at the position of this x-ray cloud,
or the two x-ray clouds, and the position of the galaxies,
involved in this merger, we see that there is an offset between the optical image that we have
and of course also the dark matter part that has been reconstructed from lensing again
around these optically detected galaxies.
This is one part and this is offset to the x-ray signal that is the bullets, the bullets and the gas
cloud.
And this offset then was taken as an evidence.
that here dark matter does not follow the luminous matter, the baryons,
because we have this offset.
So the total mass is not where the mass of the visible things are, mostly.
This was propagated from 2004 onwards.
And a recent study, just last year in 2025,
there was a big NASA press release from Sangjun Cha et al.
Pardon my bad Korean.
Sure, sure.
And these persons, they had a look at the intracluster light, meaning all of the stars that are not bound into the individual galaxies of the cluster.
Because if you have a violent merger, then of course not only is x-ray gas stirred up, but there are also galaxies that lose their stars.
And these stars, they go into a collective cloud of stars, which is called the intracluster
light, in the intracluster medium.
And with James Webb, it became possible to track this intracluster light, because the
infrared wavelength at which James Webb observes makes it now possible to look at this.
And so they took a map of this gas cloud, sorry, the map of the stellar cloud, and they found
that this merger is much more complicated than just two clumps colliding and moving apart again.
They said it could have been that the biggest clump actually underwent an earlier merger,
and then they saw that there was an elongated arm going in the direction of the merger.
And so it wasn't just two spherical clumps colliding.
There was no offset in that sense between the luminous mass and the dark matter mass,
because all of the stars in this intracluster light were actually nicely following this merging structure.
And so they said this cannot be a clear hint that dark matter is offset from luminous matter.
And again, a refuting moment.
And so they said if we now take a closer look with new data that's actually possible now for the first time,
we see we can reconcile everything we know with the new data here.
So there is nothing spectacular that the dark matter would be completely detached here.
At Medcan, we know that life's greatest moments are built on a foundation of good health,
from the big milestones to the quiet winds.
That's why our annual health assessment offers a physician-led, full-body checkup that provides
a clear picture of your health today and may uncover early signs of conditions like heart
disease and cancer.
The healthier you means more moments to cherish.
Take control of your well-being and book an assessment today.
Medcan. Live well for life.
visit medcan.com
slash moments to get started.
Interesting.
I didn't know about that paper.
So I'll place that paper on screen
and the link will be in the description
and for those who were driving
and just listening to this
maybe running or what have you,
but you didn't see it
and you were thinking,
oh, I wish I could see
the bullet cluster merger.
Yes, you can. Just rewind.
We'll place the visuals of that on screen.
Okay, so what's the reception
to that paper been like then?
I'm sure as with anything,
philosophy, math, physics,
everywhere. There are usually counters, not usually in math, sorry, but in physics or philosophy
there are usually counters when someone puts up a position. So what's the counter there?
I haven't heard much about objections or so. I would rather think that it's, I would say, clear
evidence that this cluster is much more complicated. And this comes actually in a row of several
other findings that have now become possible with better technologies and better data, that people
say, okay, first back in the 90s, and I mean, the first gravitational lens was found in 1979,
and so we do not have that much experience in lensing when we go to 1990, so everything was
pretty coarse-grained and preliminary. But now, today, after all of these years, we have a lot
of data and we have a lot of experience with different galaxy clusters. So if we now go back to
the findings of the earlier times, and we re-look at all of these galaxies or galaxy clusters,
we find much more detail.
And this detail is then the thing that resolves the puzzle.
Like, for instance, here in the bullet cluster, we have these intracluster stars,
which now resolve the discrepancy between dark matter and luminous matter.
And the same happened for a galaxy cluster that my team and I, we were investigating.
It was Abel's 3827.
This is another very beautiful galaxy cluster.
And in the center, you have four luminous galaxies.
They're almost equally bright.
And from this very unusual configuration, it's pretty obvious that this is a dynamically very active cluster.
Normally, people like to look at galaxy clusters that are relaxed, as we say, meaning we have one brightest cluster galaxy,
and the rest is then just nicely, hopefully, isotropically distributed.
So the ideal case would be a spherical galaxy cluster with one very bright galaxy in the center.
But this one here is exactly the opposite.
Four luminous galaxies in the center.
And then we have strong gravitational lensing around these four galaxies.
And this lensing almost forms a complete ring.
This was why it was such a really beautiful, huge gravitational lensing event around these four galaxies.
So people had a look at this, and then it was pretty obvious at the start when you do forward modeling.
You need to impose a lot of dark matter, and when you reconstruct your total mass distribution with such a model,
you find, again, that the dark matter seems to be at different places than these four luminous galaxies.
And there, it was exactly the same process.
People first thought, this is evidence that dark matter and luminous matter can be decoupled from each other.
So what physicists call light does not trace mass, meaning you can have the luminous stuff at a different position.
And then people first thought this was yet another bullet cluster.
But with more and more observations, you could see how this discrepancy was shrinking.
even in modeling. And then my colleague Chennai and I analyzed this cluster with our model-independent
approach. And there we could definitely prove that this offset between the luminous part and
the dark matter part was completely driven by the models. And this is the perk of our approach
that we can say we have a look at the local lens properties and we can track how fast these
properties change over our multiple images. So we had huge, huge multiple images in this case. It's a very
beautiful example. And in each multiple image, we found star forming regions. And so we could use
these star forming regions to chop up the multiple image, each multiple image, into several parts.
And then we mapped only these parts onto each other in all of these multiple images to get the
local lens properties.
And so we saw that over the area of the entire multiple images, the properties were changing.
And this for us meant: okay, our approach relies on the assumption that the properties do not change
over the area we look at.
So we assume constant properties within that area.
And if we now have patches across a multiple image, and we see that each patch gives us a different value for a certain local lens property, then we know, okay, the mass density needs to change, or the distortion strength needs to change, over the multiple image.
And this means the lens is quite turbulent.
It's changing.
So if we now assume we have a huge dark matter halo that is smooth and hardly changing,
then we can see that this is not fitting the actual reality that is given by the data.
And so we could say: if the mass density already changes over a single multiple image,
how can we extrapolate these lensing properties
into a region that is much farther away from these multiple images than just an epsilon,
than just the individual patches that we aligned together?
Can you explain to me in an elementary fashion,
how is it that your approach can be called a model independent approach?
Yeah.
How can you do science if you're not using models?
Yes.
What does that look like?
The only truly model-independent thing that I can think of
is just to say, here's the data,
and you just list the data with conjunctions.
You just say this, and this, and this.
Right, right.
You always have to specify which model it is independent of.
And in this sense, I would say it's independent of a lens model.
So our formalism is independent of any assumption about
how the global mass distribution of such an object,
such a gravitational lens, looks.
So we just want to have a look at the pure lensing formalism,
which still makes assumptions.
I mean, which still describes how this light is curved around an object.
But what we do is in this lensing formalism,
we track the light bundles that are emitted from the source.
And then these light bundles are more or less shot through the gravitational lens.
And then the observables that we get,
this is what the lensing formalism tells us,
it's these observables depending on the individual positions.
And people who want to use a lens model, they now use the formalism.
And in this formalism, we have, for instance, that we need to specify a gravitational potential.
And then they say, my gravitational potential is of this and that form.
So at position X, it has this form, at position Y, it has this form and that value.
And so this is how this lens model is inserted into the formalism.
And I just work on the basis of the formalism.
So for me, I have a gravitational potential at a position X.
And what I do then is to say, how does it change to a position Y? So I just work on the changes
that can be directly tracked in the observables. While other people say, I want to make it more
complicated, I use a model. And then this model gives me what is the potential at position X,
and what is the potential at position Y? And then, of course, if you have a model, you know how
it changes from X to Y, but then this is a prediction of your model.
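To make her distinction concrete in the standard single-plane lensing notation (notation added here, not a quote), the observables are built from second derivatives of the lensing potential \(\psi\):

\[
\kappa = \tfrac{1}{2}\left(\partial_1^2\psi + \partial_2^2\psi\right), \qquad
\gamma_1 = \tfrac{1}{2}\left(\partial_1^2\psi - \partial_2^2\psi\right), \qquad
\gamma_2 = \partial_1\partial_2\,\psi .
\]

Adding a constant or a linear ramp,

\[
\psi(\boldsymbol{\theta}) \;\mapsto\; \psi(\boldsymbol{\theta}) + c + \boldsymbol{a}\cdot\boldsymbol{\theta},
\]

changes none of these observables: the data constrain only how \(\psi\) curves between image positions, never its value at any single position.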
Why don't you sum this up into a core thesis that holds this all together?
So what's occurring to me is something like data constrains local information, everything else,
every dark matter map that people have seen, it's model-driven extrapolation.
It doesn't mean it's speculation or just unconstrained, surmising and conjecturing.
It is exactly that.
You got it 100% right.
That's exactly how you describe it.
So the summary is that the local properties that we extract from these gravitational
lensing datasets, that they are the maximum information we can gain about this
gravitational lensing object, about, for instance, a galaxy cluster or a galaxy.
And what we know is that this local information only encodes the local shearing strength,
so how much does a lens distort objects, and the relative magnification,
how much can a lens enlarge the image at a certain position with respect to another position?
This is the information we can get from the data.
And now the point is the farther away we go from the data,
of course, the more any model assumption will take over,
like a regularization in mathematics.
When you say, I have a certain regularization function: for instance, if you ask what the gravitational potential is in the vicinity of these images, you might say, if I go far away from these multiple images, I don't want to have any potential.
Let's assume I just set it to zero or I set it to some constant.
then this is a regularization that you make.
And of course, the farther away you go from these images,
the more all of these assumptions will take over
to describe your global lens or your global mass density profile.
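The way a regularizer takes over away from the data can be sketched in a toy linear inversion (the grid, operator, and numbers here are illustrative, not her lensing pipeline):

```python
import numpy as np

# Toy inversion: recover a field x on a 10-point grid from data at only
# three grid points. We minimize ||A x - y||^2 + lam * ||x||^2; the
# second term is the regularizer: "far from the data, prefer zero".
n = 10
obs = [0, 1, 2]                      # indices where we actually have data
A = np.zeros((len(obs), n))
A[np.arange(len(obs)), obs] = 1.0    # direct observation at those indices
y = np.array([3.0, 3.1, 2.9])        # measured values (made-up numbers)

lam = 1e-3
x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Where there is data, x tracks the measurements. At every unobserved
# grid point the normal equations reduce to lam * x_i = 0, so the
# regularizer alone fixes x_i = 0: an assumption, not a measurement.
```

Swapping the penalty ||x||^2 for ||x - x0||^2 makes the unconstrained region snap to x0 instead: away from the data, the reconstruction is whatever the prior says.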
Where you have data, you have information.
Where you do not have data, you have extrapolation, speculation,
prediction, whatever you want to call it.
But in principle, this is the point:
local evidence and, I would say, global extrapolation.
Because the moment you use a model, you want to have this based on fundamental principles.
So you thought about what makes sense to combine all of these data points into some overall model.
So in that sense, I would say you make an extrapolation, physically informed extrapolation, if you want it like this.
Yes, sure. Is that the same as curve fitting?
so you have discrete data points
and then you have a curve
that you can place through it.
Yes, yes, that's exactly it.
So would the analogy be
the data is the local part
and then the curve fitting,
the model that you suppose produces
that data, of course, in the discrete amounts,
but that is the global.
Yes, yes.
I mean, you're a mathematician,
you can understand
if I have a sample of data points
and I want to know
what is the curve going through
all of these data points,
then you can assume if I have sampled at a certain frequency,
then I can reconstruct my signal, my smooth signal, or my curve, if you want.
But if you do not have enough data points, then you have ambiguities.
So what could be the actual signal?
It's the Nyquist-Shannon theorem, which says: if I have sampled at twice the highest frequency
and I have a signal of finite bandwidth, then I can exactly reconstruct my signal.
If I lack data points, if I don't have the equidistant sampling, or if I do not have finite
bandwidth, I immediately get ambiguities, but I can resolve them if I have an idea, if I have
prior assumptions, how my signal looked like.
So then using additional assumptions, I may still uniquely fix my signal, I can still uniquely
reconstruct.
If I do not want to use these assumptions, that's the inverse problem method, I don't want to use any assumption, then I have to live with the ambiguities.
As good as it gets, I have different curves that fit the same data equally well.
If I have more idea what my curve actually was, then I can use the assumptions to uniquely fix the curve.
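Her ambiguity can be reproduced in a few lines (the frequencies and sample rate below are arbitrary illustrative choices): sampled below the Nyquist rate, two different signals pass through exactly the same data points.

```python
import numpy as np

# Sample at fs = 1.25 Hz, below the Nyquist rate (2 Hz) for a 1 Hz sine.
fs = 1.25
t = np.arange(5) / fs        # five equidistant samples

f_true = 1.0                 # the signal we think we measured
f_alias = f_true - fs        # an alias at -0.25 Hz

s_true = np.sin(2 * np.pi * f_true * t)
s_alias = np.sin(2 * np.pi * f_alias * t)

# Identical at every sample point, yet different between the samples:
# without a bandwidth assumption, the data cannot distinguish them.
```

Only an extra assumption, here a bandwidth limit, picks one curve over the other; the samples alone cannot.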
So the universe doesn't give you these equidistant samples?
Yes, unfortunately not.
And you don't know the bandwidth.
You don't know what spatial frequencies the mass distribution has?
No.
I mean, can we observe the entire universe?
We can't.
I mean, usually you need to see the entire signal to know that it has a finite bandwidth.
And the only thing we can do, and this is why I'm an observational cosmologist,
I say I have limited bandwidth physically by my particle horizon,
everything that I have ever observed
or that we can theoretically ever observe
because the speed of light is finite.
So we have a light cone that we can observe
and all of this in our past that we can have observed.
That's our finite bandwidth in that sense.
But we do not know what's beyond that.
So in that sense, we could do it,
but we don't know the rest.
There's always the philosophical boundary
of what comes behind the horizon.
Now, your breakthrough, if I recall correctly, came from reading someone else's thesis in a completely different field.
I believe it was quantum chemistry or electrodynamics or something like that.
Yes.
Yeah, okay.
And it had to do with the Laplace operator and Green's functions.
Tell me about that, please.
Yes.
I mean, in principle, I did my master's at CERN in particle physics.
Then I moved on to cancer research and machine learning,
because astrophysicists didn't want to take on machine learning at that time.
And afterwards, I moved to astrophysics, but in principle, all I've ever done is analyzing data
under mathematical principles.
And then a friend of mine, he graduated in quantum chemistry.
And so he asked me to have a look at his thesis, which was more or less the mathematics
of how molecules form, and all of the prescriptions for how we can
understand which molecules can form based on all of the principles that we know.
And if I then had a look at it, I realized gravity and electrodynamics have a lot of things
in common, most, most importantly, the mathematics.
And I saw the equations and suddenly, I mean, he also described everything as local
positions of electrons and then you had the, how do you call this, the ions in the molecules
where you have like the positive ions and then you have the electron cloud around this.
And when I saw all of this, I thought, this is my gravitational lensing problem.
I saw the Laplace operator.
I saw all of the functions and how to solve it.
And suddenly it was pretty obvious how to solve my problem to describe all of the degeneracies in this lensing formalism.
So how can I wiggle the potential around
to keep all my observables invariant?
And then I saw in this thesis that the mathematics is exactly the same.
So I went back to my math book that I still have on the bookshelf here.
And I found exactly the theorems in functional analysis to describe my lensing degeneracies.
So all I had to do was copy the theorems from mathematics and translate them into the language of this gravitational lensing problem.
So why didn't others see what you saw?
I think it was 30 years before this application to an adjacent field was made.
It's not even that far.
It's adjacent.
I mean, it's not directly, but it was 30 years or so.
So what did you see that others missed?
And what allowed you to see it?
I think, I mean, most astrophysicists that I have met,
they are more phenomenologists.
They look at something and they have an intuition:
if I have twice the mass, I have that much more lensing power,
or if I have twice the mass, these things should run twice as fast, or something like this.
And for me, I cannot say much until I have written down the equations.
So for me, it's the mathematical framework that in the end gives me the reasons to interpret
physical things.
So I think that this is why I found this degeneracy: because I saw the mathematical
formalism and I saw I can translate it one to one, and only afterwards I found out
that this actually makes sense in physics. So I first solved the equations and I knew this is the
solution, this must be right. But then I said, okay, I have the equations, but now they need to
get some physical meaning because most people, they either live in the world of mathematics
and then they have like variables, parameters,
but then these parameters usually carry a certain name,
and then they say, but this is lambda,
like for instance, the cosmological constant,
it has this lambda name.
And then people say, yeah, but this is lambda.
But this is a mathematical term.
I want to know what's the physical realism behind all of this,
what's the physical interpretation in this model?
And after I had the equations,
I then went on to understand
what do all of my parameters and variables
mean in my equation.
Carlo Rovelli said you should not write down a single thing to which you cannot attribute a clear
physical meaning.
And this is something that I really took to my heart.
And I found out that the mathematical formalism completely shows you the degeneracies.
Because it's obvious if I have a parameter that is called the reduced shear, I mean, it's a small G.
This is what's in the equation.
And then I thought, what does it mean?
And it means that I can only constrain the local shearing power.
So what is the local shearing power?
But this shearing power is independent of the mass that it takes to shear this.
It's just, okay, this is the amplitude of the shear and this is the direction.
But how much mass physically it takes to create that shear is irrelevant.
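In the usual notation this is the reduced shear and the mass-sheet degeneracy (standard lensing results, stated here for context):

\[
g = \frac{\gamma}{1-\kappa}, \qquad
\kappa \mapsto \lambda\kappa + (1-\lambda), \quad \gamma \mapsto \lambda\gamma
\quad\Longrightarrow\quad g \ \text{unchanged}
\]

for any \(\lambda \neq 0\): rescale the mass distribution and add a uniform mass sheet, and every observable image distortion stays exactly the same.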
And so I saw that this is something that makes total sense physically because I do not
know the total mass, that was obvious. So I cannot constrain anything that is related to the
mass, but I can constrain properties that are more or less defined with
respect to a certain mass. You always see ratios in these equations. And you can ask why.
Yeah, well, it's always something divided by the mass. So this is how we get rid of the degeneracies
that we cannot constrain. So this was, I would say, the
nice and beautiful part when I realized mathematics works out and then I can learn something about
the physics from these equations. Can you tell me who else inspires you or has inspired you?
So for instance, you mentioned Carlo Rovelli with the gist of it is that don't write down
anything that doesn't have a clear physical motivation, something like that. Who else? What else?
Well, my mathematics professor in my first few semesters was really great, Professor Jäger,
Professor Willi Jäger, and he had several honorary doctorates, and in lesson number three,
you immediately knew why. He was a mathematician doing calculus, functional analysis,
and in this direction, so more, I would say, the numerical and practical part
of applied mathematics. And he taught me
calculus, functional analysis, and also a little bit of finite element theory. And whenever he did
something, like proof, theorem, whatever he did, he first explained what is it good for. And we had a
lecture with a lot of people from biology, chemistry, physics, everybody was sitting there,
also mathematicians. But he always made sure that we knew the practical applications and what is
the problem in the proof where we have to really take
care that reality and the proof still match.
And this was really inspiring for me.
And this is, I think, what went through all of my data analysis that I always remembered
that you need to make sense out of this, and your mathematics should not
be somewhere off in an abstract space.
You need to be sure that all the requirements of your proof are actually fulfilled in your
physical problem.
This is from the mathematical side
and another very inspiring person
is George Ellis from Cape Town
he is the one who brought forward
based on, he did his PhD in Cambridge
with Dennis Sciama,
and I think Dennis Sciama was one of the first
persons in modern cosmology
who tried to ask this inverse problem question
and George Ellis and all of the collaborators
from the inhomogeneous cosmology community
that I very much like and appreciate
they have brought forward that this inverse problem approach should be pursued further.
And I find this very inspiring, and this is half of the camp of the cosmologists who say,
let's have a look what's in the data and not model too much.
You said something that stuck with me.
You said that you can place infinitely many black holes.
You can stack them in a null set.
What are you talking about?
Yes.
Yes, this is something that completely
struck me. When I tried to derive the lensing degeneracies mathematically,
this all lives in the very abstract notion of a Sobolev space in mathematics.
And when I was studying Sobolev spaces and all of these, what does a function require to be integrated
or what does a function require to be differentiated? As a student, I thought,
who the hell needs this? Why on earth should I care? And as a physicist,
in third, fourth term, you do not encounter these issues because you're not as deep in the research
that you would actually care. But then when I had this formalism to solve and I suddenly realized
it makes a difference if my function is smooth. Is it just differentiable twice? Is it even
continuous? I mean, what do I know about this lensing potential? I have no clue. So I wanted to have
a function for my lensing potential that is the most agnostic in terms of what do I have to put in here.
And suddenly I realized, oh, it suddenly makes sense to say my function, my potential, should just be
integrable or maybe I want it to be smooth.
Let's assume it's smooth.
What can I say then?
Or then let's just be completely agnostic.
What happens if this function is not even continuous or so?
And then I discovered suddenly, if I say the function should just have the minimum requirements
so that it's integrable in my formulation, I suddenly end up in these Sobolev spaces.
And in this Sobolev space, if a function is integrable, you can still say that if you change
the contents under the integral, meaning your function, by a so-called null set,
then you wouldn't change the integral.
And since all you care for in this formalism is the final result of the integral,
you are allowed to change your potential by this null set.
And what is a null set?
A null set could be in physics.
Now you go from mathematics to physics.
What does it mean in physics?
It could be a countable set of black holes, a countable set of point masses.
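The textbook fact behind this, spelled out here: a countable set \(N\) has Lebesgue measure zero, and functions that agree almost everywhere have the same integral,

\[
\mu(N) = 0, \qquad \tilde{\psi} = \psi \ \text{on } \mathbb{R}^2 \setminus N
\quad\Longrightarrow\quad
\int \tilde{\psi}\, d\mu = \int \psi\, d\mu ,
\]

so \(\tilde{\psi}\) and \(\psi\) are the same element of the function space, even if they differ at every point of \(N\).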
And if you now think in a physical sense, a potential that is really very nice and smooth,
that's something completely different than a potential that can have a lot of point masses everywhere
and is actually humpy-bumpy, full of black holes.
And this is something that quite struck me back then when I thought,
suddenly this makes sense because this was this one sentence in one of the books on strong
gravitational lensing.
Oh, and we can also put some black holes in this potential, it doesn't matter.
And I thought, where's this coming from?
I don't know.
And for me, this sounded disruptive.
I couldn't imagine this.
But the moment that I worked through the mathematics,
it was obvious why I could do this.
But this came from a much more, I would say,
fundamental sound mathematical theory
that I suddenly understood where is it coming from.
But physically, I would say,
does this make sense?
Could it really be that I have an infinite amount,
as long as it's countable,
an infinite amount of black holes in my potential?
So I would say mathematically, it's clear you can have it, you have this freedom,
but physically you could ask, is this reasonable?
But the formalism gives it to you, so your choice.
So let's zoom out.
You and Subir Sarkar, who I spoke to on this channel, I'll place the link on screen,
you both go against the standard model of cosmology, but in different respects.
So where do you agree with him and where do you differ?
Well, I mean, I've also talked to Subir quite a lot
and he's a great discussion partner and he's very knowledgeable.
And this is why I think that the stuff that he has put forward
makes a lot of sense and I would definitely trust the results.
So whatever he finds is there.
We cannot discuss it away.
And he comes from particle physics.
So he says anything below this five sigma that he also mentioned in your
interview is nothing, because it could go away as just a side effect or just some fluke,
but everything above this five sigma, whatever it means, however you define your statistics,
as you also said, is something that has to be taken seriously.
And in several works he has shown that he has an issue to reconcile the early universe
with the late universe in terms of a fundamental reference frame that he says,
if I now on Earth and I look into the cosmic microwave background,
of course we do not see the nice picture, as we've already spoken about,
we do not see this nice homogeneous isotropic picture,
but we have to boost ourselves into this reference frame,
meaning we are moving with respect to this reference frame.
And now if Subir comes and says,
if this reference frame is fundamental,
I should also see it in a later part of the universe.
So if I take a look at faraway quasars, they live in a slightly later universe. Then I could say, I should do exactly the same boost, and I should still see a homogeneous and isotropic distribution of these quasars in this reference frame.
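For context, the standard quantitative expectation for this test is the Ellis and Baldwin (1984) dipole formula: for sources with integral number counts \(N(>S)\propto S^{-x}\) and spectra \(S_\nu \propto \nu^{-\alpha}\), an observer moving at speed \(\beta = v/c\) should see a number-count dipole of amplitude

\[
\mathcal{D} = \left[\,2 + x(1+\alpha)\,\right]\beta .
\]

Subir's claim is that the dipole measured in large quasar catalogues is significantly larger than this kinematic expectation inferred from the CMB dipole.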
And so far I'm with him, but now the question is, how do you now make sure that you are actually boosting yourself in the right reference frame?
I mean, how do you make sure that you found all the quasars at the right redshift,
that it's really exactly that in this shell of our cosmic time?
Because normally, like 10 years ago or 15 years ago,
it was quite expensive to make these estimates how far these galaxies are away,
like at which redshift are they.
And he also had a study to show that there was some contamination of low-redshift
quasars in his high-redshift samples, so he had to get rid of them. And on the other hand, it's
also a question, how do you make sure that you have actually sampled homogeneously, or that you really
got all the quasars that you need in order to say, I boost myself into this reference frame, and I'm sure
that I have enough quasars at the right places? I mean, he also argues that he needed more than a million
quasars in order to get his probe to be significant.
But the question is still, have we found all of them?
Have we really found all of the quasars and not something else?
Because there could also be a contamination from other similarly looking sources, for instance.
And here I would be a bit more cautious to say he found an effect that is definitely something
there.
But the question is the statistics that we now apply to say, was this really a
five sigma detection, or is it really something that is just as he analyzed it?
Because there are also people who say it could be that these quasars change over time.
And if these quasars change over time, this could also create this effect.
I mean, for me, I would say it is something that is definitely to be taken seriously and to be
investigated. But I also see the other camp saying, yeah, but what about,
this, what about the change in the quasars?
What about this?
What about that?
You will always have the, I would say, the underdetermination that you can say,
I see this as a hint for an inhomogeneous cosmology or for some breakdown of Lambda CDM.
I would say the breakdown of Lambda CDM is not as bad as breaking down the cosmological principle.
This is even the bigger point and the stronger probe, I would say.
Most people just try to say something is rotten in Lambda CDM,
meaning we have our standard cosmological model,
and this has a certain set of parameters for one specific class of models,
which is homogeneous and isotropic.
And now Subir even says,
it's not that something is rotten in Lambda CDM,
it's that something is rotten in the entire class of these models.
So it's not just that we have a problem that,
for instance, a certain parameter has this value and somebody else says, no, it has a lower value or a higher value.
It's the entire class of these models, this very simple assumption that the universe is homogeneous and isotropic, that this is actually at stake.
So if his effect is actually true, we may not only have to abandon Lambda CDM.
We will also have to abandon the entire class of these spherical cow models.
So this is, I would say, the more groundbreaking thing.
But on the other hand, the inhomogeneous cosmology community, which, like George Ellis, always says
the universe is more complex than that, is actually waiting for this. Because, I mean,
I would say Einstein in 1917 already knew that this assumption that he made in his first
cosmology paper to say, let's assume that the universe is homogeneous and isotropic,
or statistically even homogeneous and isotropic, that's what he wrote, he wrote clearly,
I assume this.
But I do not think that this is true because even with my limited knowledge of stars and a little bit
of a galaxy, I see that there are stars and there is nothing.
So I know that I make a simplified assumption and I hope that with more data, we will overcome
this simplified assumption and we will get to the next level of detail.
So this is the interesting part.
So an opponent can always say to Subir, hey, your quasars are more variable than you think.
You haven't completed your sky coverage.
There may be other prior assumptions.
So do you think that his model has more wiggle room than Lambda CDM?
And is at the core here that if a theory is flexible and it's difficult to kill, then that's not such a great theory.
I think Lakatos, I don't know if you know the philosopher of science Lakatos,
but he said a degenerating research program is one that can absorb away these issues,
because you don't want your science, your theories to bend so much
and accommodate everything or potentially accommodate.
Yes, yes.
I mean, Subir is not making any model.
He's just saying if I interpret everything in Lambda CDM within this framework,
I run into an inconsistency.
So in principle, he is, or not only Lambda CDM, he's even saying, I use the cosmological principle that the universe is the same around every point in the universe, homogeneous and isotropic.
And then he says, okay, if this is true, then I can do the boost in the same way in the early and in the late universe, and they end up at the same homogeneous and isotropic distribution.
And now he says, I don't find this.
So in that sense, he doesn't have a more complex model or so.
He just shows that our current model has an inconsistency.
So he's just proving our model wrong if all of his assumptions are true.
That's his statement.
Didn't you give a talk about the Lambda CDM where you played both sides,
like for it and then played devil's advocate?
That's a rare quality to be able to do it.
and then also to do it publicly.
So tell me about that.
Yes, yes.
I mean, the point is, I mean, I started at CERN as particle physicists.
So I also think anything below 5 Sigma is maybe worth looking at,
but it's not worth something that we say, okay, now we take this seriously.
So my colleagues and I, we wrote a paper, we decided to write the paper to collect all of the issues
that we have, not only the Lambda CDM
tensions, but also all the tensions of the cosmological principle, like one of the
probes being Subir's matter dipole. So to say, let's have a look, what do the data tell us about
the cosmological principle, about this homogeneous and isotropic universe that Einstein
assumed in 1917? I mean, we wrote this paper in 2023, so more than 100 years later. And Einstein
back then said we have to overcome this. There must be the next level of detail. And we wanted to
ask, have we reached that level yet? And so we teamed up with 22 or 23 scientists, and we collected
in a review paper all of the hints that seemed to violate the cosmological principle. And
Subir's probe was one of them, and this was one of the few that
actually reached the five sigma for the statistics.
Another one, which is also at five sigma, is the bulk flow of Costas Migkas.
So Costas Migkas analyzed galaxy clusters, and he wanted to know how these galaxy clusters on the sky are moving as an entire entity.
Do they have some so-called bulk flow?
Are they moving together in a certain direction?
Or does every cluster move in its own direction,
and statistically the motion averages out?
This started as a PhD thesis and he thought,
okay, I now prove that nothing happens here
and then I can move on to my actual project.
No, he didn't set out to find this,
but he found a certain bulk flow on the sky
that he said there is something that is moving in a certain direction.
It doesn't average out.
And it turned out, after hundreds of pages of papers that he wrote
and his PhD thesis and a lot of different data sets that he analyzed,
this effect was also at the five sigma significance.
And so in our paper we found we had Costas as a co-author, of course.
So we found that there are actually many effects that hint at a violation of this cosmological principle.
So Subir's is one of the most prominent, at five sigma.
Costas Migkas's bulk flow is also at five sigma,
and then we had several other effects as well that were not at 5 sigma but maybe at 3 or 4 sigma.
For instance, Alexia Lopez and Roger Clowes, they found what was called back then the Giant Arc on the sky,
a really very big structure on the sky that seemed to be like clustering, but on a larger scale.
And by now they have even found a giant ring, so the whole structure is growing and growing,
it's getting larger and larger.
And this means now, if we have a finite time from the Big Bang till now, how can we assemble
such a huge structure?
The counter argument is, this is a chance alignment, this can happen, and you just have one
of these structures, or maybe, let's say, three of these structures.
On a statistical basis, this doesn't challenge our universe.
That's one of the arguments.
But on the other hand, you have to take this in context: we have
the cosmic matter dipole, we have these galaxy clusters that are flowing in a certain direction,
we have two large structures that seem to be challenging our standard model,
and we have additional bulk flows of other things, not only galaxy clusters, that seem to challenge our model.
And we have a lot of these probes, the CMB, I mean, you think it's a clear probe.
We talked to CMB scientists, and we found out there is also the axis of evil in the CMB,
and there are also anisotropies or even asymmetries in the CMB.
And all of these things together, I mean, this was for me one of the key moments to say
either our universe is more complex than a spherical cow or all of these probes are missing
something and it's not even the same thing because they all seem to miss something different.
Like, do we get all of these calibrations wrong in each probe?
Do we have different calibrations, and none of them seem to be right?
This was something for me that I found most convincing to say we are at the stage that Einstein said,
now we see the next level of detail.
Now we should look what is the next level of detail for our cosmology.
I would see it positively.
I mean, I wouldn't say cosmology is in crisis.
We have, after 100 years of research, reached a stage where we can say we are now ready to move on. After 100 years of the first standard model that we have, we are now ready
to see the next level of detail. I mean, homogeneous and isotropic, that's the second most
boring thing you can do. And as long as your data is bad enough, you will not see the next
level of detail. But if you increase your data, if you increase your quality, you will see
more detail than homogeneous and isotropic things. You will start to see
dipoles, you will start to see directions that are preferred.
It seems that we are at this point.
I would say it's rather a very positive thing to think of than a negative thing.
Now, those rings that you mentioned: Roger Penrose claims that those CMB rings are echoes from a previous universe,
one that was not a part of our Big Bang but of a prior Big Bang, and then prior Big Bangs, ad infinitum: his conformal cyclic cosmology.
you work with that kind of data.
You just referenced it.
Is he onto something?
It sounds like typical forward-modeling speculation. Do we have something here?
And I think that's the fun with forward modeling:
that you can kind of speculate and gamble.
Is he onto something?
This is a brilliant example of forward modeling again.
What do you want to believe?
I mean, Roger's idea goes far beyond anything that our current observations reach;
the first picture that we have is the cosmic microwave background,
and he now claims we have space and time before the Big Bang.
This is revolutionary.
And then you could say, but how do you test this with our data?
And here, which I find is really great, that he makes predictions that can actually be tested
in our own data that we have at the moment.
And he says that in this cosmic microwave background, there should be these remnants of
black hole mergers, the so-called circles of low variance.
He and Vahe Gurzadyan teamed up in order to measure these, to find these rings.
And in the WMAP data, after seven years of data-taking, they claimed that they found evidence for these concentric rings.
But now again, there is the other camp. There is another paper that says: yes, he claims he has significant evidence,
but they do not state how many sigma, if I'm not mistaken.
And the other camp says: what about the foregrounds to the CMB?
You need to take those into account as well,
and then the significance changes.
And on the other hand, they ask: if you say it's a significant detection, it is significant with respect to what?
Roger Penrose and his colleague used a certain kind of statistics to say:
this is significant.
But the other team says: we used a different
ground truth, a different simulation, to see whether this is actually significant. And if you add
everything up, they say that in the data we currently have, we cannot find significant evidence
for this. So it doesn't mean that Roger is wrong. It doesn't mean that Roger's theory is supported.
It means the jury is still out, and we need more, better data from the cosmic
microwave background in order to see whether this theory is supported or not. But the problem
here is, this theory just stands now at the moment as is. There is neither evidence against it
nor evidence supporting it. So it's another theory that we can gamble on.
Does this mean that you found an alternate way to test Roger's theory?
No, I mean, I didn't work on this at all. I'm more of a late-universe person. But in principle,
the question is always: if you try to find the statistical significance, you need to compare to simulated
universes that you think are in accordance, or consistent, as Neil Turok would say,
with the data that we have and all the knowledge and theory that we currently have.
And here, this is the difference between these two camps:
they do not have different data, they just have a different way of interpreting the data.
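The "significant with respect to what" dispute can be made concrete: an empirical p-value is just the fraction of simulated null universes whose test statistic is at least as extreme as the observed one, so the chosen null distribution drives the answer. A toy sketch (the statistics and numbers are illustrative, not the actual CMB analysis):

```python
import random

def empirical_p(observed_stat, simulate_null, n_sims=10_000, seed=0):
    """Fraction of null simulations at least as extreme as the observation."""
    rng = random.Random(seed)
    exceed = sum(simulate_null(rng) >= observed_stat for _ in range(n_sims))
    return (exceed + 1) / (n_sims + 1)  # +1 keeps p away from exactly zero

observed = 3.0  # toy ring statistic measured in the "data"

# Camp A's null: a featureless Gaussian sky.
p_narrow = empirical_p(observed, lambda rng: rng.gauss(0.0, 1.0))
# Camp B's null: same sky plus foreground scatter (wider distribution).
p_wide = empirical_p(observed, lambda rng: rng.gauss(0.0, 2.0))

print(p_narrow, p_wide)  # same observation, very different significance
```

The same observed statistic looks highly significant against the narrow null and unremarkable against the wide one, which is exactly the shape of the disagreement described above.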
Speaking of Neil Turok, he's also been on this channel and you referenced him before as well,
he pushes back much like yourself against exotic dark matter.
He thinks that it may just be neutrinos.
You're both minimalists in this sense.
Do you find your work converging with his
or do you see yourself as coming from completely different directions?
Oh yeah.
Neil is a great guy. We even met at a conference and talked to each other.
I very much support his view that if we do not see more particles coming out of our
detectors, we should stick to the contents that we have,
and then we should make a self-consistent picture of the universe with the stuff we have,
without going looking for mysterious things.
And in that sense, I think the idea he's proposing with the right-handed neutrinos is pretty
cool, because we have already seen neutrinos, just left-handed ones, and it looks reasonable
to make a small extension of the neutrino sector and allow for the right-handed ones as well.
So in that sense, we're definitely converging,
and I also agree that we should try to find a theory that does not use any exotic stuff.
But where I see myself diverging from him a little bit, and agreeing more with Subir Sarkar,
is his idea of how to solve the cosmological constant problem.
So at the moment he says we have explanations for two of his five parameters,
but for three he still needs to have phenomenological fitting functions.
And for dark matter, this is still one fitting function,
but he puts forward half of an explanation that he says we still need to make more sense out of this,
but the right-handed neutrinos might do the job.
That's good.
But for the cosmological constant, he says he might have an explanation why this constant is so low,
and on the other hand, why it comes from the vacuum energy of quantum field theory.
And here, Subir Sarkar brought forward that this explanation has a lot of problems,
and if Neil finds a solution, that's great, but in my opinion,
Lambda is coming from a classical theory.
It came into the world based on some classical invariance.
So I think we should find an explanation of Lambda,
which is first based on classical mechanics or classical theories.
And only then say, okay, there might also be an attempt to unify gravity and quantum
afterwards to say there is also an explanation for lambda in quantum theory.
Okay, so you think that the vacuum catastrophe
is somehow misplaced or misconceived or ill-conceived,
because you're trying to come up with an explanation for this lambda,
which stems from general relativity, a classical theory,
and you're trying to find its roots in a quantum theory.
Yes.
Now, of course, the counter would just be,
well, anything that's classical emerges from the quantum,
so that's a natural predilection.
Yes, yes.
I mean, that's a thing that would be nice to have to say,
we can argue from quantum fields to the classical regime.
But on the other hand, it seems that there is a problem there.
I'm not sure how it can be resolved.
I mean, it's not my field of expertise.
But Subir says that it seems pretty hard to solve it from the quantum point of view.
So I would say maybe go a step back and say: it came in classically,
so let's try to solve it classically.
That might be the more reasonable first approach, in my opinion.
I mean, people like Thomas Buchert or Subir Sarkar, we all try to make sense of this also from the classical point of view: to see whether the data is maybe splittable or partitionable in a different way, so that Lambda is a phenomenological parameter that actually stands for some inhomogeneous space-time curvature, or something else that can come from a classical explanation, and not immediately a quantum explanation.
Yes, see, many people, when challenging something as widely held as Lambda CDM:
it's not held because scientists are obstinate or recalcitrant fools
who are just unwilling to move on. That's foolish. That's not why it's held.
There are great reasons for it. There's great evidence for it, and so forth.
But when someone like a Subir or a you comes along and puts cracks in it,
sometimes it's thought that that's a dismal view of science.
But actually, you're saying: no, no, no, it's a hopeful view.
We're on the precipice of something.
Yes, yes.
I mean, I always see, I don't know why this is the case.
I mean, Subir always says particle physicists try to tear down their models.
They try to find something new.
They want to be proven wrong.
And in astrophysics, it seems to be the other way around.
I wouldn't say that, but I think that people get used to things.
I mean, humans are like, they want to have like traditions.
They want to have stuff that is predictable.
And if we now say, wait a moment, our universe is surprising,
it may give us something that is unexpected all the time and we cannot control this.
Maybe that's the reason to panic.
And on the other hand, after 40 years of getting used to dark matter, I mean, there are people alive who fought for dark matter to be in the picture and who were the first ones to do these simulations to show dark matter actually makes sense.
They fought for it to be there.
And then, do they want to see it die after 40 years of research in all of this?
And it's always the question: did we do all of this in vain?
What's super interesting is you're more radical for being conservative.
So, for instance, when we were speaking before, you were talking to me about a colleague of yours who invented or proposed some crazy form of dark matter.
And that the more data was collected, the less room there was for speculation, and the less of that sort of dark matter was permitted.
And you're actually sobering them.
But this sobering process, if people have been used to being intoxicated,
can feel like, well, I want to stay where I was before.
But you're the more conservative one, at least in this respect.
Yes, yes.
I mean, it's not that I have a problem with chasing something.
I'm open to crazy ideas.
I mean, I also investigated naked singularities and how we could detect them.
I mean, I also tried to write a paper on that.
It's not that I don't like crazy ideas.
I have a video about that, actually.
Oh, really?
Yeah, it's a crazy-ideas thing. A whole podcast with J.B. Manchak about naked singularities and about what a pathological
spacetime is. Because many people will say that a spacetime with closed timelike curves, or one that
is not globally hyperbolic, is pathological. Well, we have rotating black holes
and so forth, and they're not globally hyperbolic. But "pathological" is also somewhat of a sociological
word. Einstein thought black holes were pathological in and of themselves, and
potentially the Big Bang.
I think people even doubted their existence until we got this first picture of a black hole
from the Event Horizon Telescope, and people said:
okay, finally we have proof that there is something experimental that we can attribute
to these space-time singularities, whatever you want to call them.
And I mean, it was at a conference during the pandemic.
There was a guy who said: okay, I think I can even
explain dark energy with primordial black holes.
He went on and thought: if they repel each other with a certain charge,
then I could explain the space-time expansion.
So he tried to understand dark energy in terms of repelling black holes,
and then he tried to calculate the charges.
And it turned out they were naked singularities.
And everybody was skeptical, so I thought: hey, it's winter break, I have nothing to do,
how about I just prove this wrong in like 14 days?
So I sat at my desk and took his work.
I took the charges and the masses that he predicted,
and I went on to conservatively think: if I had such black holes in my universe, what would be the consequences?
Like forward modeling; I do this too.
And I found out I cannot kill this theory.
You think of naked singularities as disruptive things in space-time.
But it turns out, if you have a naked singularity,
and it's a charged singularity,
and you have a plasma cloud in the vicinity,
and you ask what happens to the charges in the plasma cloud:
I thought this would just be disruptive.
You would even have proton decay,
or, I mean, I was thinking about the worst things.
It turns out, even for naked singularities,
you need to bring this plasma cloud
very close in order to actually dissolve it.
And for normal settings,
for what you would say is likely to be observed,
you wouldn't find this thing, because it's a singularity.
It's really a point-like thing.
You need a very perfect alignment between us, a plasma cloud, and something else to observe an effect.
And the other thing is, I mean, I'm a gravitational lensing person.
Normally you have a point mass, and a point mass is a very easy model to describe the lensing signal.
And then you would say: I have a mass, and the Einstein ring is proportional to the square root of the mass.
So if I have a star that sends light past this point mass,
I would see an Einstein ring around this point mass,
and it scales as the square root of the mass.
So if I have an exorbitantly large mass,
I should have a huge Einstein ring.
And then I looked at the equations and found out:
no, actually, because it's a naked singularity
and it also carries a charge,
this huge Einstein ring doesn't occur.
The equations tell me that the Einstein rings
are actually much smaller, and I couldn't observe them.
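The square-root scaling she cites is the standard point-mass lensing result, theta_E = sqrt((4GM/c^2) * D_LS / (D_L * D_S)). A quick numerical check with round, illustrative distances (not numbers from the episode, and without the charge corrections she describes):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # kiloparsec in meters

def einstein_radius(mass_kg, d_lens_m, d_source_m):
    """Point-mass Einstein radius in radians (lens between us and source)."""
    d_ls = d_source_m - d_lens_m
    return math.sqrt(4 * G * mass_kg / C**2 * d_ls / (d_lens_m * d_source_m))

# A solar-mass lens halfway to a source 16 kpc away: a sub-milliarcsecond ring.
theta = einstein_radius(M_SUN, 8 * KPC, 16 * KPC)
print(theta * 206265 * 1e3, "mas")

# Doubling the mass grows the ring only by sqrt(2), as she says.
ratio = einstein_radius(2 * M_SUN, 8 * KPC, 16 * KPC) / theta
print(ratio)  # ~1.414
```
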
And it was quite astonishing: something you'd think would be so disruptive,
and yet it's so hard to disprove the theory.
In the end, I talked to an observer from the Dwingeloo Radio Telescope,
and he said: yeah, I've seen many of these really exorbitantly crazy ideas and models that couldn't be proven wrong.
You cannot prove something wrong.
That's really difficult.
And he also said: yeah, I definitely believe that you tried your best.
It's really hard to challenge something like this.
And with dark matter it's even more intricate, as you said.
I mean, you have a theory, but dark matter is just a word.
So what do you call dark matter now?
Is it just the missing mass, in terms of gas clouds that you can't observe?
Or do you want a new particle?
And then, depending on what kind of dark matter,
you can always wiggle yourself around if there is a new observation.
You could say: ah, sorry, dark matter wasn't collisionless.
I see an offset, so it has to be collisional.
Then we need to constrain the offsets.
So you can always amend your theory.
You know, I was going to say that I've seen a pattern: the more restrained the theory,
unexpectedly, the more critique you get. You would think that the more speculative
you are, the crazier you are, as you mentioned with the crazy dark matter, the less
accepted it would be. But someone like yourself, someone like Subir, someone like Jacob Barandes, and Tim
Maudlin on the quantum mechanics side, who are more restrained, or who have realist
interpretations of quantum mechanics, they get critiqued. But then I also realized everyone
gets critiqued. There was this comedian who was saying that people think everyone hates America,
and when you're in America, you feel like every other country hates you. And then he said:
yeah, but every country hates every other country, and you always
hate your neighbor more than you hate America.
Everyone critiques everyone.
And everyone always feels like they're the one on the defense,
the underdog.
So I think it's just par for the course.
But I think that's exactly the point.
I mean, when I grew up, I thought: okay, math and physics, that's something that has
something absolute.
There is some truth in it, some absolute truth.
You can say one plus one is two.
There is math; you cannot be wrong in that.
But no, one plus one is not always written as two.
It depends on the system you're in.
I mean, in the binary system,
you would write the sum differently.
So it depends on your reference frame.
And in physics, you have even more room to wiggle.
It's not even math.
It has even less rigor, I would say.
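Her binary-system point, in one line: the quantity is unchanged, only its written representation depends on the base.

```python
two = 1 + 1
print(two)          # "2" in decimal notation
print(bin(two))     # "0b10": the same quantity written in base 2
print(two == 0b10)  # True: both notations name the same number
```
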
But on the other hand, I usually feel like a lawyer.
I have a certain case,
and then I need to take a side:
what do I think is the most reasonable thing?
Then I defend this,
and I try to find
arguments in favor of this way to see the world.
I mean, this is how I see physics.
It's not absolute anymore for me.
I'm a lawyer in the sense that I have now chosen a certain camp,
namely the camp of: I want the least amount of magic in my universe, and the most amount
of things that I can actually experience and see, like empiricism.
Yes.
And then to say: how can I keep
this worldview, or how can I live in this worldview, given the input from outside?
Let's get into applications.
Yes.
So particle physics has a similar issue, where there are huge amounts
of data, huge amounts of model dependence, triggering and reconstructing events,
and subtracting away the background.
So can your methods transfer to collider physics?
In principle, yes.
I mean, there's lots of data.
So in principle, you would actually have the luxury, for the inverse problem,
to just try to find all the possibilities consistent with what has been tracked.
So in that sense, that's even better.
Because this is what I think cosmology lacks.
Cosmology is a purely observational science.
We cannot make experiments,
so we cannot interact with the things that we think are there.
We set up models, and then
we can only watch. We cannot say: we believe that this mechanism is happening,
and now I go into the lab and show that, if I apply all of the requirements, I can create something.
And now that I think of it, maybe this is the reason why forward modeling is so
popular: because usually we have a chance of being the agent producing the effect, given these prerequisites.
So on Earth, being a particle physicist, it makes sense to forward model, to
say: if I do this and that, if I collide this particle with that particle, I will see this collision,
then there will be this particle and that particle going out,
I will have some hadron jets,
and in the end I will see: here is my Higgs.
And this is why forward modeling makes sense: because I can prove myself wrong or right
by exactly performing the cooking recipe that I've set up.
And then I know this formalism leads to this.
But in cosmology, we have a different setting.
We cannot just go into the lab and say, here I have my forward model.
Let me rerun the universe, yeah.
Yes, exactly.
So now I want to have modified Newtonian dynamics for sure.
So let me rerun the universe.
And the simulations, they try to replace this.
But obviously, simulations are a bit different, in the sense that they run on a computer.
We have numerical instabilities.
It's not the real thing;
it's just a surrogate of the real thing.
So in that sense, you can always say
simulations are as good as it gets,
but it's still not 100% like an experiment.
And so in that sense,
I would say maybe particle physics
does not need that much inverse problem solving,
because they have lots of data
and they have this agency:
they can just create particles as they want.
Or do the collisions and not find a particle,
which has also happened.
In that sense,
maybe particle physics is not the best target,
but I would say I'm coming from cancer research and biophysics,
and there I saw exactly the same problem.
For instance, for cancer, there are
more than 100 types of cancer.
And you think, again in a forward-modeling approach,
you have certain processes where you say:
if, for instance, a certain piece of DNA is replicated
and then translated in the wrong way into proteins, then cancer can grow.
This is the so-called signaling-pathway approach, where you start from different models
of how cancer can evolve along different pathways, depending on the different enzymes,
depending on the different stages your cells are in,
all of that.
And then it can happen that you have a certain signaling
pathway where you say: okay, this type of cancer grows in this way. But suddenly you have a
patient where nothing of this fits, and the medication we apply doesn't work. Why?
Oh, there is a second signaling pathway through which this cancer can also grow. And then you suddenly
start to realize that you have different options to end up in what you think is the same state.
And of course, then you could also apply inverse modeling to ask: what is the necessary
ingredient to get to this stage, or hopefully, to prevent this cancer?
In that sense, I would say there is a lot of potential there as well.
So what happens next? Where is this research leading?
I mean, for me, I would say that after almost 10 years in lensing, I've realized
lensing isn't good for anything unless you couple it to other data.
And this is why, in the last two years, I've started to look into
kinematics. Because if I have a structure that is a lens, then I would like to know: can I constrain
the structure from the inside a bit better? Now I know how to shoot light rays around it,
but inside, for instance in a galaxy cluster, I have galaxies, and these galaxies move, and I can
measure part of that movement. So now I'm trying to get a more holistic approach:
I have local information from lensing, and, funnily enough, math is your friend.
I found a way to actually transfer my local lensing approach to the kinematics,
to the description of the kinematics.
And so I hope in the future I can also get local kinematic information about such a
structure.
And then to patchwork all of this together. As we said, I want the positive way of science,
meaning I know the local stuff from the lensing,
and I hopefully then know the local stuff from the kinematics,
so I will shed more light on the structure with all of these individual
pieces of information that I puzzle together.
But I know that I can only fall back on the local information as my ground truth, as the stuff
that I have already validated by my approaches.
So that's the next step for this one.
But on the other hand, it also needs funding.
and I think that our community is pretty much underfunded.
So I would really like to see some light at the end of the tunnel
that some more funding flows in our direction
because I get rejections with words.
Nobody has done a progress in 30 years on this project
or in this research direction.
Why should you?
And then I think, yeah, give me a chance to do it
or at least give me a chance to move a little bit forward.
Maybe I'm not reaching the final goal, but it's a direction that needs more attention.
Because if I take a look, people say, for instance: I don't like artificial intelligence, or I'm against artificial intelligence.
I'm not, but I see that in cosmology we have a hard time actually succeeding with artificial intelligence.
Demis Hassabis won the Nobel Prize for AlphaFold, for protein folding.
And when he gave his Nobel lecture, he had three criteria
for what makes a successful AI application.
Number one is: know your feature space, and it should be really large.
So large that a human cannot grasp it; you really need a huge computing farm to resolve all
of the parameters that you have in your problem.
And the parameter space is huge.
You cannot survey everything yourself.
And the second criterion is: know your goal function.
What is the function that I want to optimize for?
With AlphaFold, it was pretty obvious:
you have the free energy, which should be minimized,
and then you know how the protein is folding.
And the third part is you need lots of data
in order to train your artificial intelligence.
And if you now look at cosmology,
what do we have?
None of the three.
Do we know all our feature space?
No.
Because we don't have 20 amino acids
that we can combine into proteins
if we just assemble them long enough.
We do not know what dark matter is.
So what are we looking for?
What's our feature space?
Is it a particle?
Is it a fluid?
What is it?
Wait, what is a feature space?
What does "feature space" mean?
A feature space is the set of ingredients that you have.
For instance, if I have a galaxy cluster, I would say: okay, what do I need?
I need galaxies.
I need the positions of the galaxies.
I need the velocities of the galaxies in order to describe such a gravitationally bound structure.
But I also need the dark matter in this galaxy cluster,
and its positions and velocities.
So if I want an artificial intelligence to understand how a galaxy cluster is held together,
I would need the position and the velocity of the dark matter.
If I do not know what it is, I cannot reserve part of this feature space for it.
Then the second part is the goal function.
What is the goal?
I mean, do we really understand gravity to a degree that we can write down this optimization function?
Demis Hassabis used the free energy for the proteins.
What is energy in general relativity?
Masses have to be defined, and the definition may not be unique.
So there we struggle from the other side: from the GR point of view, from Einstein's general relativity,
how can we write down such a goal function to describe, for instance, a galaxy cluster?
And as I said, statistical mechanics is not going to cut it to describe it either.
And then third: data.
We can use simulations.
We have lots of simulations, and in simulations we have, I would say, enough data to train on.
But this is very costly to store on the one hand, and very costly to produce in the first place.
And the big question is: is it actually realistic?
Are we training on something that makes sense?
For instance, for strong gravitational lensing, there was an AI approach.
This AI approach trained on simulations, because we do not have enough observational data from strong
lensing to train such a machine.
So they used simulated data, and they found that when they tried to recover the simulations,
the rates were great: not 100%, but a 95% recovery rate with correct descriptions.
But then they went to the observations,
and the rates went way down,
because there was something in the data
that was missing in the simulations.
So the training process was not realistic enough
for the machine to actually find
all of the lensing events that were there,
or that we actually know were there.
So there is something missing here.
And last but not least,
if you take the observations:
for instance, DESI has now found
11 million galaxies
in our local universe,
out to a certain redshift, in a huge
cosmological volume.
Just to put this in context:
that's less than half of the population
of Ghana, which, if I'm not mistaken,
is 37 million people,
in one fairly small country in Africa.
So if we compare this to the 11 million galaxies in this cosmic volume, you see how sparse the sampling is.
I mean, we have a lot of data.
We have a high degree of detail.
But I don't think that it's enough to train an artificial intelligence.
Leave aside the problem that we don't know what's actually in our observations, due to dark matter, dark energy, and the unknown ingredients of the universe.
And this is why I think we first need to
understand, ourselves, what we are talking about. What is dark matter? What is dark energy? And when we
have answered these questions, I think we can use artificial intelligence. But so far, for the exploration process,
we may get some hints where to go or not, but we cannot expect an AI, for instance,
to understand the global mass distribution in a strong lensing event, when I've already shown
that only local information is what we get from the data. In that sense, the AI would
just speculate better, or give us a more sophisticated model, but we still don't know why this model:
okay, it fits the data, but so do all other models, as I have shown. So what is the gain of
using artificial intelligence in this problem, for instance, if all we get back is a more
complex model, and maybe an even less transparent model that we cannot understand? I mean, if we get a more
complex model and not just a power law, how do we now argue that this is physically reasonable?
The machine may give us something that fits the data, but it's just one option of many.
This is what we know. So how do we argue that this is the true model, the truth
that's actually out there, when we already know from the inverse problem that we cannot know this?
Now, I know you said you will be talking about this issue, and your colleague may push back, and you say:
just give me a chance.
Maybe I can't take the ball all the way to the end of the football field, but I can take it a few feet further.
So allow me to ask: what changes if you get to the end of the football field?
Paint that picture for me.
What changes in cosmology?
What changes in cancer research?
If your method of prioritizing inverse problems over forward problems, and your solutions via these
inverse problems, takes off, what does the future look like?
Well, I mean, inverse problem solving
is, I would say, a typical thing that you use when you solve criminal cases. So now let's
assume that we also use it more in biomedicine, in astro and in cosmology, wherever we have
complicated problems with maybe non-unique answers. If we do this,
I would say we have to rethink our scientific method completely.
If we replace forward modeling with this inverse-problem-solving approach,
it would mean that we change the way we think about science.
And in my opinion, this would be a better world,
because we would have a more positive way of gaining knowledge.
I would like to show you how I mean this.
I mean, imagine you now have this inverse problem solving method
and you build a tree of necessary models from the trunk to the branches to the leaves
and every higher level is based on the assumptions on the lower levels.
So in that sense, if you now think I want to solve a problem and I have already built the
trunk and now from one branch I want to go to a smaller twig.
So to extend my model a little bit further.
If I guess a little bit wrong, then I only fall back to the level
beneath this one, but I will not fall down the entire tree to the ground.
This is what forward modeling sometimes does, that you end up at square zero and you do not
know how to climb up again.
And in this way here, we avoid this problem.
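Her tree-of-models idea can be sketched in code. This is a hypothetical illustration, not her actual formalism: the class names, the assumption strings, and the fits_data check are all invented. The point it demonstrates is the local fallback: a failed refinement discards only the newest twig, never the trunk.

```python
# Sketch of the "tree of models": each node adds one assumption on top of
# its parent. A refinement that fails against the data is discarded alone,
# and we fall back exactly one level instead of rebuilding from scratch.

class ModelNode:
    def __init__(self, assumption, parent=None):
        self.assumption = assumption   # e.g. "mass profile is axisymmetric"
        self.parent = parent           # the branch this twig grows from
        self.children = []

    def extend(self, assumption):
        child = ModelNode(assumption, parent=self)
        self.children.append(child)
        return child

    def assumptions(self):
        """All assumptions from the trunk up to this node."""
        chain = []
        node = self
        while node is not None:
            chain.append(node.assumption)
            node = node.parent
        return list(reversed(chain))


def refine(node, assumption, fits_data):
    """Try one more assumption; on failure, fall back to `node`, not the trunk."""
    twig = node.extend(assumption)
    if fits_data(twig.assumptions()):
        return twig        # climb one level higher
    node.children.remove(twig)
    return node            # local fallback: only this guess is discarded


trunk = ModelNode("GR holds locally")
branch = trunk.extend("lensing deflection is weak-field")

# A refinement that contradicts the (here: mocked) data costs us only the twig:
result = refine(branch, "power-law mass profile",
                fits_data=lambda chain: "power-law mass profile" not in chain)
print(result.assumptions())   # back at the branch: trunk and branch survive
```

Contrast this with the forward-modeling failure mode she describes, where one wrong global assumption can invalidate the whole fitted model at once.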
And so I hope that this would be more positive and encourage more people to care about
science, because if they have already built their trunk of knowledge of a certain part, then
they only need to extend their knowledge a little bit.
So that's more positive.
And on the other hand, and this is even more important, if we now take a look at how science is done,
it's usually investing a lot of money into high-risk, high-gain research.
If we now say we already have a solid trunk and we just need to extend it a little bit,
it takes out the high risk of all of this.
And another point that comes with it, if you now climb up the tree, the further you climb,
the less choices you should get.
It's like in a criminal case, the more evidence is presented,
the more people you should be able to kick off your suspect list.
So in the end, you narrow down the choices.
And then this means you need fewer and fewer resources,
the higher you climb up.
So this idea is highly efficient, it's resource saving.
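The suspect-list analogy can be sketched the same way. The candidate names and evidence predicates below are made up purely for illustration: each piece of evidence acts as a filter, so the candidate set can only shrink as evidence accumulates.

```python
# Toy version of the suspect-list analogy: every new piece of evidence is a
# constraint, and candidates (models, suspects) that violate it are removed.
# The surviving set shrinks monotonically, which is what makes the method
# resource-saving the higher you climb.

candidates = {
    "cold dark matter halo",
    "MOND-like modified gravity",
    "massive neutrino component",
}

# Hypothetical evidence items: (description, consistency check on a candidate)
evidence = [
    ("flat rotation curve", lambda c: c != "no extra physics"),
    ("lensing offset observed", lambda c: "MOND" not in c),
]

for description, consistent in evidence:
    candidates = {c for c in candidates if consistent(c)}
    print(f"after '{description}': {len(candidates)} candidates remain")
```

Adding yet another model to the literature, in this picture, is the opposite move: it enlarges the candidate set without adding a constraint.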
And in that sense, I think you could even fund more projects than before,
at a lower risk, of course.
And so in general, I think if you build up all of this,
it sounds more positive for me than gambling on certain forward models.
And in the end, you may not even be able to find counter evidence
or evidence in favor of something.
This is super interesting because theoretical physics is, in large part, model generation.
So on the arXiv, almost every day there is a new model:
here's how the universe works.
Even on this channel, theories of everything.
New model, well, here's how quantum theory works.
Here's how it doesn't work.
Here's how blah, blah, blah, blah, blah.
Here's how you combine them.
Here's what space time is.
Here's what space time isn't.
And then the question is, well, look, in absence of data, it's always caveated like that.
In absence of new data, we don't have anything that's beyond the standard model.
In absence of new data, what else are we to do as theoretical physicists?
All we know how to do is just generate models, and we don't exactly have data to go by.
So in your analogy, it sounds like what you're saying is that would be the equivalent of just adding more suspects to your criminal suspect list.
Yes.
And that's the opposite of what should be done.
You should be narrowing suspects down.
Yes.
But, okay, I'm sure you've spoken to theoreticians about this.
And this goes against the whole ethos.
And again, like I mentioned, it's caveated with in absence of evidence.
So what do you say to this throwing models to the wall and seeing what sticks?
It seems like that's all that can be done.
Yeah, but on the other hand, I mean, there are people like Neil Turok who say,
we have a lot of data, so let's use it in a different way.
Let's try to be minimalistic.
And in that sense, to clean up. I mean, Bjork Hayne also said, for particle physics,
that when they found all the new particles, they always inserted yet another field,
yet another particle, and so on.
So their model just grew, and he said that the discovery was so quick.
They just added and added and it somehow worked out.
But he said, we were always hoping that a few years in the future,
somebody would come and clean up that mess.
So they were obviously, they were putting together a model with a really hot needle
because they had so many discoveries.
But they thought in the end, hopefully somebody will look from top down
in order to make sense out of this from a higher viewpoint and then to clean up the mess.
And I think it's the same with all of these models that we have
in cosmology or astrophysics. We are not short of models, but to compare them with each
other, to find what do they have in common, where do they differ, why do they differ? All these things,
they are the important things to do. It's not coming up with yet another model. It's trying to embed
the model that we now have, that we want to put forward, into the landscape of all the other models,
and in addition also to relate them to the other models, to say, here is where I differ, here is where I
agree with the others. And this is what I did with my approach to say the maximum information
is where all models agree. And then it's pretty obvious why and where
the models will differ from each other and from the information in the data.
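Her "maximum information is where all models agree" criterion can be illustrated with a toy computation. The observable names and numbers below are invented for the sketch; a real analysis would compare full lens reconstructions, not two scalars.

```python
# Sketch: given each model's predicted value for a set of observables, keep
# only the observables on which every model returns (near-)identical
# predictions. Those values are data-driven; the rest is model-dependent
# extrapolation.

predictions = {
    "model_A": {"local convergence": 0.31, "total halo mass": 1.2e14},
    "model_B": {"local convergence": 0.31, "total halo mass": 3.5e14},
    "model_C": {"local convergence": 0.31, "total halo mass": 0.8e14},
}

def model_independent(predictions, tol=1e-6):
    """Return the observables on which all models agree to within `tol`."""
    observables = next(iter(predictions.values())).keys()
    agreed = {}
    for obs in observables:
        values = [p[obs] for p in predictions.values()]
        if max(values) - min(values) <= tol:
            agreed[obs] = values[0]
    return agreed

print(model_independent(predictions))
# only the local convergence survives; the halo mass is model-dependent
```

In this toy example the three models pin down the same local quantity while disagreeing wildly on the global one, which is exactly the local-versus-global split discussed earlier in the conversation.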
What question keeps you up at night?
Well, a question has been keeping me up at night since I was 16. I mean, I would really
like to understand what gravity is. That's the thing. I do not know.
I don't know why, but this is something that completely struck me the first time I heard it.
When I first heard about Einstein's general relativity, I thought, what is this?
This is really bizarre.
And then I realized gravity, this is a force, but on the other hand, it seems to be space time.
So what is it now?
What is real?
What is not real?
And how can I understand this really?
This is something that's, it's not just like electromagnetism that lives on a background.
It seems to be the background.
And this is something that is still keeping me going.
I mean, academia is not the easiest way to earn money.
Actually, it's the hardest and it's the most stressful one.
But this is the only reason why I'm doing this.
I want an answer.
What's some concept, so maybe it's this one, but what I was going to ask is,
there must be some concept, whether it's mathematical or physical,
that you couldn't understand.
Maybe you tried for months, even years, to understand it.
then all of a sudden it made sense.
And so what was that concept or idea, what have you,
and what made it click?
I think the best, there were several, of course.
I mean, when you study, you learn a lot of things
and you always ask yourself, why?
Why do I need to know that?
And then there is a certain point,
when you're suddenly confronted in your research project
with something, you think,
wait a second, I've heard about this during my lectures.
And back then I wondered, what is it good for now?
I know it.
And one of the key moments in that direction for me was the Sobolev space.
Because when I studied functional analysis, I mean, I had the best professor in the world to do this.
He was really trying to make it applicable and to show all of the things that he knew from his applications.
But it was always something that I never connected much to, because he was more into engineering problems and all of that.
And suddenly, when I saw in my lensing formalism that
Sobolev spaces physically mean you can have an infinite number of black holes in your spacetime
and the math doesn't care about this.
This was something that I found completely intriguing,
but on the other hand also bothersome.
And I thought, okay, finally I understand why this concept was invented or discovered or needed in mathematics
in order to tackle cases like this.
That was super interesting.
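For context, this is the textbook definition she is alluding to, not anything specific to her lensing formalism. A function $f$ belongs to the Sobolev space $W^{k,p}(\Omega)$ when all of its weak derivatives up to order $k$ are $p$-integrable:

$$\|f\|_{W^{k,p}(\Omega)} = \left( \sum_{|\alpha| \le k} \int_\Omega |D^\alpha f(x)|^p \, dx \right)^{1/p} < \infty.$$

The norm only asks for integrability, not pointwise regularity, so for suitable $k$ and $p$ a potential assembled from many point-mass singularities can still have finite norm. That is the sense in which the math "doesn't care" how many black holes sit in the spacetime.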
Now, where can people find out more about you?
Potentially some funders who are listening, who are watching,
who want to help this sort of research,
and then also maybe some other researchers who would like to collaborate with you.
Where should people go to find out more about you?
First and foremost, I'm really happy to collaborate with anybody on the entire planet,
so the sun never sets on my projects.
I really love this internationality.
And during the pandemic, I really liked this.
The review that we wrote on the cosmological principle was written from researchers all over the world.
And not everybody has met everybody on this project.
It was just online collaboration.
So if you want to find out, look at my webpage.
And I also have a Wikipedia page, and all of them contain my email address.
So drop me an email.
I'm always happy to answer.
I'm always happy to go on podcasts to talk about all of these things.
Because I think that the more people hear about this
and the more people get a little bit more of the details,
then people might be more convinced.
And lastly, what's a life lesson that you wish you could impart to your younger self?
Well, there are many.
But I would say the most important one is don't listen to others.
Like, make your own judgment.
Because I would say, based on the experiences that I've had, and I draw a lot on the philosophers of cosmology that I have been collaborating with,
they taught me that in principle there are many possible ways of living a self-consistent and fully reasonable life to say,
I have a theory, I have a way of living, I have a way of interpreting things that I see to make
sense out of everything. But not everybody would agree on the interpretations and how to do this.
And I think I grew up a little bit in the wrong bubble, and I had a hard time getting out of
this bubble and meeting the people whom I'm working with now. I mean, for instance,
last year I met Thomas Buchert from Lyon. He is the guy who says that
if we take GR seriously, then we always need to intertwine the cosmological background with the matter on top of it.
And as Einstein said, both of them are one.
There is an equal sign in the equation.
So this means I cannot decouple my cosmology from the evolution of the masses.
And he says the matter has a back reaction on the space time and we need to take this into account.
And when I met him, he said, yeah,
I'm also skeptical of dark matter and of dark energy because in the end, it's all the underdetermination problem that you chop up your entire signal into parts.
Part of it is, let's say, the baryonic physics, part of it is dark matter, part of it is dark energy.
But the question is, if you have several unknowns that are hard to capture, you can redistribute your entire signal because it's hard to say under which assignment.
you assume this is a dark matter property, this is a dark energy property.
And then he also told me, yeah, it's really hard to kill dark matter or to kill dark energy
because there are always the two camps in this underdetermination to say,
this is definitely a sign against dark energy or against dark matter, against the cosmological
principle. Others say, no, it's an evolution in time. Or no, it's actually a special
property of dark matter that we have missed.
And I would say, you have to make up your own way, you have to find your own way that you say,
this is something that I find reasonable and not just take on what other people say just because
you think, yeah, I accept them as authorities or think that they know more than I do.
I would say it's the same with being critical about what happens on Earth.
Being critical and making your own judgment, I think, is the most important thing.
Thank you.
Thank you for spending so long with me and the audience.
Yeah.
Could I ask a question?
I saw that you always have, like, mostly 30K, 100K viewers or so.
This is, I mean, it's highly impressive and you have like half a million subscribers.
So who's your target audience?
I mean, can you see, like, from which countries people are mostly watching
your show?
Yeah, well, those are two different questions.
So the target audience,
we always aim this podcast to skew toward the research direction.
So we aim it toward researchers.
We aim it to be technical,
as if this was behind the closed doors of academia,
and it's just professors speaking.
I'm not a professor, but you get the idea.
So that's the target and the audience skews technical,
but the majority of people are actually,
well, they're artists.
They're people who are computer scientists, philosophers.
It's a mix.
And these people are interested in deep questions, in fundamental questions.
And what is this place?
What is time?
What is reality?
What is consciousness?
So it's people who are probing and they want something more than the mysticism that they've heard before from many popularizers of science, like the particles in two places at once.
The cat's dead and alive at the same time.
Isn't it cool?
Isn't it cool the same hundred times that I tell you over and over?
And they're like, okay, but what else is there?
What's the actual math behind that?
What does the math say?
Does the math validate that?
Does the physics validate it?
People want to know.
Can the cat really be dead and alive at the same time?
Because physically, it just doesn't make sense.
So what does it mean?
Yeah.
Yeah.
And is that exactly what a superposition says?
Yeah, yeah.
So in principle, no, that's interesting.
So you really capture the entire scope of people.
It's not just the retired engineers, which are usually the ones who go into that direction.
So it's really more.
It's great.
Yeah, I aim toward professors and existing researchers and academics.
I aim toward that.
And I think we have a large skewing toward that, more so than the average podcast,
because we stay technical, even if it limits the audience.
And it does limit the audience.
But the bulk of the audience is, I imagine,
artists and philosophers and computer scientists and logicians of various sorts.
And just lay people, truck drivers, nurses.
That's cool.
That's cool.
I mean, if you get a chance, it might be interesting to get George Ellis on the podcast.
Because he does cosmology, he does philosophy of science.
And recently he abandoned all of this because I think he understood cosmology really to the fullest.
And he said, now it's boring.
Gravity is the same everywhere.
I don't care.
And then he said, now I'm doing neuroscience, because now I actually want to see where consciousness is coming from.
And he's also driven by mathematics.
And he says the mathematics is the same everywhere.
And he won the Templeton Prize, this very famous prize that pays out more than a Nobel Prize,
for applying the mathematics of cosmology to the real estate market in South Africa in order to improve the conditions
for the underprivileged people there.
And he won the Templeton Prize in principle
for doing good in his own country
for the underprivileged people.
But in principle, I mean,
he says the math is the same everywhere
and this is why I could actually apply
the same equations to this one and that one alike
in order to help people on Earth
from what I've learned from the cosmos.
So he's also quite interdisciplinary in that sense.
Yeah, I would like to speak to him.
Anyhow,
I want to thank you once more, and the audience thanks you as well.
Thank you for having me.
It was really a very lovely chat and highly inspiring.
Hi there, Curt here.
If you'd like more content from Theories of Everything
and the very best listening experience,
then be sure to check out my Substack at curtjaimungal.org.
Some of the top perks are that every week you get brand new episodes ahead of time.
You also get bonus written content exclusively for our members.
That's C-U-R-T-J-A-I-M-U-N-G-A-L.org.
You can also just search my name and the word substack on Google.
Since I started that substack,
it somehow already became number two in the science category.
Now, substack for those who are unfamiliar is like a newsletter,
one that's beautifully formatted, and there's zero spam.
This is the best place to follow the content of this channel that isn't anywhere else.
It's not on YouTube. It's not on Patreon. It's exclusive to the substack. It's free.
There are ways for you to support me on substack if you want, and you'll get special bonuses if you do.
Several people ask me, like, hey, Curt, you've spoken to so many people in the fields of theoretical physics,
of philosophy, of consciousness. What are your thoughts, man?
Well, while I remain impartial in interviews, this substack is a way to peer into my present deliberations on these topics.
And it's the perfect way to support me directly.
curtjaimungal.org, or search Curt Jaimungal Substack on Google.
Oh, and I've received several messages, emails, and comments from professors and researchers
saying that they recommend theories of everything to their students.
That's fantastic. If you're a professor or a lecturer or what have you, and there's a particular
standout episode that students can benefit from, or your friends, please do share. And of course,
a huge thank you to our advertising sponsor, The Economist. Visit economist.com slash
toe to get a massive discount on their annual subscription. I subscribe to The Economist and you'll
love it as well. TOE is actually the only podcast that they currently partner with. So it's a huge
honor for me, and for you, you're getting an exclusive discount. That's economist.com slash
toe, T-O-E. And finally, you should know this podcast is on iTunes, it's on Spotify, it's on all the
audio platforms. All you have to do is type in theories of everything and you'll find it. I know my last
name is complicated, so maybe you don't want to type in Jaimungal, but you
can type in Theories of Everything, and you'll find it. Personally, I gain from re-watching
lectures and podcasts. I also read in the comments that TOE listeners also gain from replaying,
so how about instead you re-listen on one of those platforms like iTunes, Spotify, Google
podcasts. Whatever podcast catcher you use, I'm there with you.
The Economist covers math, physics, philosophy, and AI in a manner that shows how different
countries perceive developments and how they impact markets. They recently
published a piece on China's new neutrino detector. They cover extending life via mitochondrial
transplants, creating an entirely new field of medicine. But it's also not just science. They analyze
culture. They analyze finance, economics, business, international affairs across every region.
I'm particularly liking their new Insider feature, which was just launched this month. It gives you
front-row access to The Economist's internal editorial debates,
where senior editors argue through the news with world leaders and policymakers in twice-weekly long-format shows.
Basically, an extremely high-quality podcast.
Something else you should know about is that if you go to their app, they not only have daily articles,
but they also have long-form podcasts with their editors and writers.
This is also available online.
Whether it's scientific innovation or shifting global politics,
The Economist provides comprehensive coverage beyond the headlines.
As a TOE listener, you get a special discount.
Head over to Economist.com slash T-O-E to subscribe.
That's economist.com slash T-O-E for your discount.
This spring, Performance Auto Group invites drivers to upgrade with confidence.
From March 26 to 28th, the Spring Upgrade Sales Event offers a $1,000 upgrade credit
toward any new or pre-owned vehicle.
Plus, trade evaluations across their network deliver maximum market value for your vehicle.
With competitive manufacturer rates and programs available,
now is your moment to upgrade the Performance Auto Group Way.
39 stores, 23 brands, one upgrade event.
March 26 to 28th, visit Performance.ca's upgrade sale page for details.
Thank you for listening.
