Showing posts with label Dear Dr B.

Sunday, August 30, 2020

Do we really travel through time with the speed of light?

[Note: This transcript will not make much sense without the equations that I show in the video.]

Today I want to answer a question that was sent to me by Ed Catmull, who writes:
“Twice, I have read books on relativity by PhDs who said that we travel through time at the speed of light, but I can’t find those books, and I haven’t seen it written anywhere else. Can you let me know if this is right or if this is utter nonsense.”


I really like this question because it’s one of those things that blow your mind when you first learn about them, but by the time you have your PhD you’ve all but forgotten about them. So, the brief answer is: It’s right, we do travel through time at the speed of light. But, as always, there is some fine-print to what exactly this means.

At first, it does not seem to make much sense to even talk about a speed in time. A speed is distance per time. So, if you travel in time, a speed would be time per time, and you would end up with the arguably correct but rather lame insight that we travel through time at one second per second.

This, however, is not where the statement that we travel through time at the speed of light comes from. It comes from good, old Albert Einstein. Yes, that guy again. Einstein based his theory of special relativity on an idea from Hermann Minkowski, which is that space and time belong together to a common entity called space-time. In space-time, you do not only have the usual three directions of space, you have a fourth direction, which is time. In the following, I want to show you a few equations, and for that I will, as usual, call the three directions of space x, y, and z, and t stands for time.

Now, here’s the problem. You can add directions like North and West to get something like North-West. But you cannot add space and time because that’s like adding apples and oranges. Space and time have different units, so if you want to add them, you have to put a constant in front of one of them. It does not matter where you put that constant, but by convention we put it in front of the time-coordinate. The constant you have to put here so that you can add these directions must have units of space over time, so that’s a speed. Let’s call it “c”.

You all know that c is the speed of light, but, and this is really important, you do not need to know this if you formulate special relativity. You can put a dummy parameter there that could be any speed, and you will later find that it is the speed of massless particles. And since we experimentally know that the particles of light are to very good precision massless, that constant is then also the speed of light.

Now, of course there is a difference between time and space, so that can’t be all there is to space-time. You can move around in space any which way, but you cannot move around in time as you please. So what makes time different from space in Einstein’s space-time? What makes time different from space is the way you add them.

If you want to calculate a distance in space, you use Euclid’s formula. A distance, in three dimensions, is the square-root of the sum of the squared distances in each direction of space. Here, Δx is the difference between two points in direction x, and Δy and Δz are likewise differences between two points in directions y and z.
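Written out as an equation, that’s

$$ d = \sqrt{\Delta x^2 + \Delta y^2 + \Delta z^2} \,. $$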

But in space-time this works differently. A distance between two points in space-time is usually called Δs, so that’s what we will call it too. A distance in space-time is now the square-root of minus the squares of the distances in each of the dimensions of space, plus c squared times the squared distance in time.
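In equation form, with the constant c from above:

$$ \Delta s = \sqrt{c^2 \Delta t^2 - \Delta x^2 - \Delta y^2 - \Delta z^2} \,. $$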

Maybe let me mention that some old books on Special Relativity use a different notation, in which, instead of just putting a minus in the space-time distance, one uses a prefactor for the time-coordinate that is i times c. This has the exact same effect because i squared gives you a minus. The i turns out to be useless otherwise though, so this notation is no longer used today.

But why would you define a space-time distance like this – why not use all plusses? Well, for one, if you do it differently, it doesn’t work. It would not correctly describe observation. That’s an answer, but not a very insightful one, so here is a better answer.

Einstein based special relativity on the idea that the speed of light is the same for all observers. You cannot do this in a Euclidean space where all the signs are plusses. But you can do it if one of the signs is different relative to the others. 

That’s because a space-time distance that is zero for one observer is zero for all observers. This is also the case in Euclidean space, but in Euclidean space, this just means zero in each of the directions of space. But what does a zero distance mean in space-time? Well, let’s find out. For simplicity, let us look at only one dimension of space. So if the distance in space-time is zero, this means that the distance in space divided by the distance in time equals plus or minus c. And that’s the same for all observers. So this speed, c, is an invariant speed.
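To see this in equations: with one dimension of space, setting the space-time distance to zero means

$$ 0 = c^2 \Delta t^2 - \Delta x^2 \quad \Longrightarrow \quad \frac{\Delta x}{\Delta t} = \pm c \,. $$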

But, well, we are not light, so we do not travel with the speed of light through space, and we do actually cover a distance in space-time. So let us look at this equation for the space-time distance again, and let us divide it by the time difference. What you then have on the left side is the space-time distance per time, and under the square root you have, roughly, minus the squares of the velocities in each of the directions of space, plus c².
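In equation form, dividing the space-time distance by Δt gives

$$ \frac{\Delta s}{\Delta t} = \sqrt{c^2 - \left(\frac{\Delta x}{\Delta t}\right)^2 - \left(\frac{\Delta y}{\Delta t}\right)^2 - \left(\frac{\Delta z}{\Delta t}\right)^2} \,. $$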

And there you have it. Relative to yourself, you do not move through space, so these velocities are zero. You then only move into the time-like direction, and in this direction, you move with the speed of light. So, we indeed all travel through time with the speed of light.

I always try to show you equations because physics is all about equations. But to really understand what these equations mean, you have to use them yourself. A great place to do this is Brilliant, who have been sponsoring this video. Brilliant offers a large variety of interactive courses on topics in science and mathematics. They do, for example, have a course on Special Relativity that will teach you all you need to know about space-time diagrams, Lorentz-transformations, and 4-vectors.

To support this channel and learn more about Brilliant, go to brilliant.org/Sabine, and sign up for free. The first two-hundred people who go to that link will get twenty percent off the annual Premium subscription.

Tuesday, April 02, 2019

Dear Dr B: Does the LHC collide protons at twice the speed of light?

I recently got a brilliant question after a public lecture: “If the LHC accelerates protons to almost the speed of light and then collides them head-on, do they collide at twice the speed of light?”

The short answer is “No.” But it’s a lovely question and the explanation contains a big chunk of 20th century physics.

First, let me clarify that it does not make sense to speak of a collision’s velocity. One can speak about its center-of-mass energy, but one cannot meaningfully assign a velocity to a collision itself. What makes sense, instead, is to speak about relative velocities. If you were one of the protons and the other proton comes directly at you, does it come at you with twice the speed of light?

It does not, of course. You already knew this, because Einstein taught us nothing travels faster than the speed of light. But for this to work, it is necessary that velocities do not add the way we are used to. Indeed, according to Einstein, for velocities, 1 plus 1 is not equal to 2. Instead, 1+1 is equal to 1.

I know that sounds crazy, but it’s true.

To give you an idea how this comes about, let us forget for a moment that we have three dimensions of space and that protons at the LHC actually go in a circle. It is easier to look at the case where the protons move in straight lines, so, basically only in one dimension of space. It is then no longer necessary to worry about the direction of velocities and we can just speak about their absolute value.

Let us also divide all velocities by the speed of light so that we do not have to bother with units.

Now, if you have objects that move almost at the speed of light, you have to use Special Relativity to describe what they do. In particular you want to know, if you see two objects approaching each other at velocity u and v, then what is the velocity of one object if you were flying along with the other? For this, in special relativity, you have to add u and v by the following equation:
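In the units we just agreed on, where all velocities are measured in fractions of the speed of light, this addition law reads

$$ w = \frac{u + v}{1 + u\,v} \,, $$

where w is the velocity of the one object as seen from the other.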


You see right away that the result of this addition law is always smaller than 1 if both velocities were smaller than 1. And if u equals 1 – that is, one object moves with the speed of light – then the outcome is also 1. This means that all observers agree on the speed at which light moves.

If you check what happens with the protons at the LHC, you will see that adding twice 99% of the speed of light brings you to something like 99.995% of the speed of light, but never to 100%, and certainly not to 200%.
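Concretely, for two protons each at 99% of the speed of light:

$$ w = \frac{0.99 + 0.99}{1 + 0.99 \times 0.99} = \frac{1.98}{1.9801} \approx 0.99995 \,. $$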

I will admit the first time I saw this equation it just seemed entirely arbitrary to me. I was in middle school, then, and really didn’t know much about Special Relativity. I just thought, well, why? Why this? Why not some other weird addition law?

But once you understand the mathematics, it becomes clear there is nothing arbitrary about this equation. What happens is roughly the following.

Special Relativity is based on the symmetry of space-time. This does not mean that time is like space – arguably, it is not – but that the two belong together and cannot be treated separately. Importantly, the combination of space and time has to work the same way for all observers, regardless of how fast they move. This observer-independence is the key principle of Einstein’s theory of Special Relativity.

If you formulate observer-independence mathematically, it turns out there is only one way that a moving clock can tick, and only one way that a moving object can appear – it all follows from the symmetry requirement. The way that moving objects shrink and their time slows down is famously described by time-dilation and length-contraction. But once you have this, you can also derive that there is only one way to add velocities and still be consistent with observer-independence of the space-time symmetry. This is what the above equation expresses.

Let me also mention that the commonly made reference to the speed of light in Special Relativity is somewhat misleading. We do this mostly for historical reasons.

In Special Relativity we have a limiting velocity which cannot be reached by massive particles, no matter how much energy we use to accelerate them. Particles without masses, on the other hand, always move at that limiting velocity. Therefore, if light is made of massless particles, then the speed of light is identical to the limiting velocity. And for all we currently know, light is indeed made of massless particles, the so-called photons.

However, should it turn out one day that photons really have a tiny mass that we just haven’t been able to measure so far, then the limiting velocity would still exist. It would just no longer be equal to the speed of light.

So, in summary: Sometimes 1 and 1 is indeed 1.

Monday, October 15, 2018

Dear Dr B: What do you actually live from?

Some weeks ago a friend emailed me to say he was shocked – shocked! – to hear I had lost my job. This sudden unemployment was news to me, but not as big a surprise as you may think. I was indeed unemployed for two months last year, not because I said rude things about other people’s theories, but simply because someone forgot to renew my contract. Or maybe I forgot to ask that it be renewed. Or both.

In any case, this happened a few times before, and while my younger self wouldn’t normally let such a brilliant opportunity for outrage go to waste, I now like to pretend that I am old and wise and breathe out bullshit.

After some breathing, I learned that this time my sudden unemployment originated not in a forgotten signature, but on Wikipedia. I missed the ensuing kerfuffle about my occupation, but later someone sent me a glorious photoshopped screenshot (see above) which shows me with a painted-on mustache and informs us that Sabine Hossenfelder is known for “a horrible blog on which she makes fun of other people’s theories.”

The truly horrible thing about this blog, however, is that I’m not making fun. String theorists are happily studying universes that don’t exist, particle physicists are busy inventing particles that no one ever measures, and theorists mass-produce “solutions” to the black hole information loss problem that no one will ever be able to test. All these people get paid well for their remarkable contributions to human knowledge. If that makes you laugh, it’s the absurdity of the situation, not my blog, that’s funny.

Be that as it may, I have given a lot of interviews in the past months and noticed people are somewhat confused about what I actually work on. I didn’t write about my current research in my book because inevitably the physicists I criticize would have complained I wrote the book merely to advertise my own work. So now they just complain that I wrote the book, period. Or they complain I’m a horrible person. Which is probably correct because, you see, all that bullshit I’ve been breathing out now sticks to them.

Horrible person that I am, I don’t even work in the foundations of physics any more. I now work on quantum simulations or, more specifically, on using weakly coupled condensed-matter systems to obtain information about a different, strongly coupled condensed-matter system.

The relation between the two systems stems from a combination of analogue gravity with the gauge-gravity duality. The neat thing about this is that – in contrast to either the gauge-gravity duality or analogue gravity alone – we are dealing with two systems that can (at least in principle) be prepared in the laboratory. It’s about the real world!

This opens the possibility to experimentally test the validity of the gauge-gravity duality, or at least its applicability to certain systems. Current experiments (like Jeff Steinhauer’s) aren’t precise enough to actually do this, but the technology in this area is rapidly improving, so I’m hopeful that maybe in a decade or so it’ll be doable.

If that was too much terminology, I’m developing new methods to describe how large numbers of atoms interact at very low temperature.

Today, Tobias Zingg and I have a new paper on the arXiv that sums up our recent results. And that’s what I’ll be working on until my contract runs out for real, in November next year. And then what? I don’t know, but stay tuned and we’ll find out.

Sunday, August 26, 2018

Dear Dr B: What does the universe expand into?

    “When the universe expands, into what is it expanding? In what medium is it expanding? Is the universe like a bubble in a higher dimension something?
    [Anonymous], Indiana, USA”
[image: luftballon-profi.at]

This is a very good question and one, I should add, I get frequently. It is, I believe, to no small part caused by the common illustrations of a curved universe: it’s a rubber-sheet with a bowling-ball on it, it’s an inflating balloon, or – in the rarer case that someone tries to illustrate negative curvature – it’s a potato chip (because really I have no idea what a saddle looks like).

But in each of these cases what the illustration actually shows is a two-dimensional surface embedded in a non-curved (“flat”) three-dimensional space. That’s good because you can draw it, but it’s bad because it raises the impression that to speak of curvature you need to put the surface into a larger space. That, however, isn’t so: Curvature is a property of the surface itself.

To get an idea of how this works, consider the simplest example of a curved surface, a ball. On the ball’s surface the angles of triangles will not add up to 180 degrees. You can calculate the curvature from measuring all the angles in all triangles that you could draw onto the ball. This is a measurement which can be done entirely on the surface itself. Or by ants crawling on the surface, if you wish, to use another common analogy.
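In equations, for a sphere of radius R and a triangle of area A, the angles (in radians) add up to

$$ \alpha + \beta + \gamma = \pi + \frac{A}{R^2} \,, $$

so the deviation from 180 degrees directly measures the curvature 1/R².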

Curvature, hence, is an intrinsic property of the surface – you do not need the embedding space to define it and to measure it. Also note that the curvature is a local property; it can change from one place to the next, it’s just that a ball has constant curvature.

General relativity uses the same notion of local, intrinsic curvature, just that in this case we aren’t dealing with two dimensions of space and ants crawling on it, but with three dimensions of space, one dimension of time, and humans crawling around in it. So the math is more complicated and all the properties of space-time are collected in something called the curvature-tensor, but that is still an entirely internal construct. We can measure it by tracking the motion of particles, and it’s this curvature that creates the effect we usually refer to as gravity.

Now, what cosmologists mean when they speak of the expansion of the universe is a trend of certain measurement results that, using Einstein’s equations, can be interpreted as being due to an increasing distance between galaxies. Again, this expansion is an entirely internal notion. It is defined and measured in our universe. You do not have to embed this four dimensional space-time into anything else to quantify it. You do not need a medium and you do not need a larger space. Einstein’s theory is entirely self-contained with a four-dimensional, internally curved space-time.
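If you want to see this in an equation: the space-time geometry cosmologists use for this is, in the spatially flat case, described by the line element

$$ \Delta s^2 = c^2 \Delta t^2 - a(t)^2 \left( \Delta x^2 + \Delta y^2 + \Delta z^2 \right) \,, $$

where the scale factor a(t) encodes the expansion: distances between galaxies grow in proportion to a(t), and no embedding into a larger space appears anywhere.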

While you do not have to embed space-time in a higher-dimensional flat space, you can. Indeed it can be mathematically proved that you can embed any curved four dimensional space-time into a ten dimensional flat space-time. The reason physicists don’t normally do this is that these additional dimensions are superfluous and they don’t aid the math either.

[Black hole embedding diagram. Only the surface itself has physical meaning; the surrounding space is for visual purposes only. Image source: Quora]

We do, however, on occasion use what is called an “embedding diagram,” which can be useful to visualize the extrinsic curvature of certain slices of space-time. This is, for example, what gives rise to the idea that when matter collapses to a black hole, space develops a long throat with a bubble that eventually pinches off. But please keep in mind that these are merely visual aids. They have their uses as such, but one has to be very careful in interpreting them because they depend on the chosen embedding.

Now you ask: what does the universe expand into? It doesn’t expand into anything, it just expands. That the universe expands is a statement about what happens inside the universe, supported by measurements inside the universe. It’s an entirely internal notion that does not require us to speak of an outside of the universe or a medium into which it is embedded.

Thanks for an interesting question!

Tuesday, August 07, 2018

Dear Dr B: Is it possible that there is a universe in every particle?

“Is it possible that our ‘elementary’ particles are actually large scale aggregations of a different set of something much smaller? Then, from a mathematical point of view, there could be an infinite sequence of smaller (and larger) building blocks and universes.”

                                                                      ~Peter Letts
Dear Peter,

I love the idea that there is a universe in every elementary particle! Unfortunately, it is really hard to make this hypothesis compatible with what we already know about particle physics.

Simply conjecturing that the known particles are made up of smaller particles doesn’t work well. The reason is that the masses of the constituent particles must be smaller than the mass of the composite particle, and the lighter a particle, the easier it is to produce in particle accelerators. So why then haven’t we seen these constituents already?

One way to get around this problem is to make the new particles strongly bound, so that it takes a lot of energy to break the bond even though the particles themselves are light. This is how it works for the strong nuclear force which holds quarks together inside protons. The quarks are light but still difficult to produce because you need a high energy to tear them apart from each other.

There isn’t presently any evidence that any of the known elementary particles are made up of new strongly-bound smaller particles (usually referred to as preons), and many of the models which have been proposed for this have run into conflict with data. Some are still viable, but with such strongly bound particles you cannot create something remotely resembling our universe. To get structures similar to what we observe you need an interplay of both long-distance forces (like gravity) and short-distance forces (like the strong nuclear force).

The other thing you could try is to make the constituent particles really weakly interacting with the particles we know already, so that producing them in particle colliders would be unlikely. This, however, causes several other problems, one of which is that even the very weakly interacting particles carry energy and hence have a gravitational pull. If they are produced at any substantial rates at any time in the history of the universe, we should see evidence for their presence but we don’t. Another problem is that by Heisenberg’s uncertainty principle, particles with small masses are difficult to keep inside small regions of space, like inside another elementary particle.
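To see why, note that confining a particle to a region of size Δx forces on it a momentum uncertainty Δp ≳ ħ/Δx. The particle stays put only as long as that momentum is small compared to mc, which means the region cannot be much smaller than the particle’s Compton wavelength:

$$ \Delta x \gtrsim \frac{\hbar}{m c} \,, $$

and this length is large exactly when the mass m is small.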

You can circumvent the latter problem by conjecturing that the inside of a particle actually has a large volume, kinda like Mary Poppins’ magical bag, if anyone recalls this.



Sounds crazy, I know, but you can make this work in general relativity because space can be strongly curved. Such cases are known as “baby universes”: They look small from the outside but can be huge on the inside. You then need to sprinkle a little quantum gravity magic over them for stability. You also need to add some kind of strange fluid, not unlike dark energy, to make sure that even though there are lots of massive particles inside, from the outside the mass is small.

I hope you notice that this was already a lot of hand-waving, but the problems don’t stop there. If you want each elementary particle to have a universe inside, you need to explain why we only know 25 different elementary particles. Why aren’t there billions of them? An even bigger problem is that elementary particles are quantum objects: They get constantly created and destroyed and they can be in several places at once. How would structure formation ever work in such a universe? It is also generally the case in quantum theories that the more variants there are of a particle, the more of them you produce. So why don’t we produce humongous amounts of elementary particles if they’re all different inside?

The problems that I listed do not of course rule out the idea. You can try to come up with explanations for all of this so that the model does what you want and is compatible with all observations. But what you then end up with is a complicated theory that has no evidence speaking for it, designed merely because someone likes the idea. It’s not necessarily wrong. I would even say it’s interesting to speculate about (as you can tell, I have done my share of speculation). But it’s not science.

Thanks for an interesting question!

Friday, May 11, 2018

Dear Dr B: Should I study string theory?

Strings. [image: freeimages.com]
“Greetings Dr. Hossenfelder!

I am a Princeton physics major who regularly reads your wonderful blog.

I recently came across a curious passage in Brian Greene’s introduction to a reprint edition of Einstein's Meaning of Relativity which claims that:
“Superstring theory successfully merges general relativity and quantum mechanics [...] Moreover, not only does superstring theory merge general relativity with quantum mechanics, but it also has the capacity to embrace — on an equal footing — the electromagnetic force, the weak force, and the strong force. Within superstring theory, each of these forces is simply associated with a different vibrational pattern of a string. And so, like a guitar chord composed of four different notes, the four forces of nature are united within the music of superstring theory. What’s more, the same goes for all of matter as well. The electron, the quarks, the neutrinos, and all other particles are also described in superstring theory as strings undergoing different vibrational patterns. Thus, all matter and all forces are brought together under the same rubric of vibrating strings — and that’s about as unified as a unified theory could be.”
Is all this true? Part of the reason I am asking is that I am thinking about pursuing String Theory, but it has been somewhat difficult wrapping my head around its current status. Does string theory accomplish all of the above?

Thank you!

An Anonymous Princeton Physics Major”

Dear Anonymous,

Yes, it is true that superstring theory merges general relativity and quantum mechanics. Is it successful? Depends on what you mean by success.

Greene states very carefully that superstring theory “has the capacity to embrace” gravity as well as the other known fundamental forces (electromagnetic, weak, and strong). What he means is that most string theorists currently believe there exists a specific model for superstring theory which gives rise to these four forces. The vague phrase “has the capacity” is an expression of this shared belief; it glosses over the fact that no one has been able to find a model that actually does what Greene says.

Superstring theory also comes with many side-effects which all too often go unnoticed. To begin with, the “super” isn’t there to emphasize the theory is awesome, but to indicate it’s supersymmetric. Supersymmetry, to remind you, is a symmetry that postulates all particles of the standard model have a partner particle. These partner particles were not found. This doesn’t rule out supersymmetry because the particles might only be produced at energies higher than what we have tested. But it does mean we have no evidence that supersymmetry is realized in nature.

Worse, if you make the standard model supersymmetric, the resulting theory conflicts with experiment. The reason is that doing so enables flavor-changing neutral currents which have not been seen. This became clear in the mid-1990s, sufficiently long ago that it’s now one of the “well known problems” that nobody ever mentions. To save both supersymmetry and superstrings, theorists postulated an additional symmetry, called “R-parity,” that simply forbids the worrisome processes.

Another side-effect of superstrings is that they require additional dimensions of space, nine in total. Since we haven’t seen more than the usual three, the other six have to be rolled up or “compactified” as the terminology has it. There are many ways to do this compactification and that’s what eventually gives rise to the “landscape” of string theory: The vast number of different theories that supposedly all exist somewhere in the multiverse.

The problems don’t stop there. Superstring theory does contain gravity, yes, but not the normal type of gravity. It is gravity plus a large number of additional fields, the so-called moduli fields. These fields are potentially observable, but we haven’t seen them. Hence, if you want to continue believing in superstrings you have to prevent these fields from making trouble. There are ways to do that, and that adds a further layer of complexity.

Then there’s the issue with the cosmological constant. Superstring theory works best in a space-time with a cosmological constant that is negative, the so-called “Anti de Sitter spaces.” Unfortunately, we don’t live in such a space. For all we presently know the cosmological constant in our universe is positive. When astrophysicists measured the cosmological constant and found it to be positive, string theorists cooked up another fix for their theory to get the right sign. Even among string-theorists this fix isn’t popular, and in any case it’s yet another ad-hoc construction that must be added to make the theory work.

Finally, there is the question how much the requirement of mathematical consistency can possibly tell you about the real world to begin with. Even if superstring theory is a way to unify general relativity and quantum mechanics, it’s not the only way, and without experimental test we won’t know which one is the right way. Currently the best developed competing approach is asymptotically safe gravity, which requires neither supersymmetry nor extra dimensions.

Leaving aside the question whether superstring theory is the right way to combine the known fundamental forces, the approach may have other uses. The theory of strings has many mathematical ties with the quantum field theories of the standard model, and some think that the gauge-gravity correspondence may have applications in condensed matter physics. However, the dosage of string theory in these applications is homeopathic at best.

This is a quick overview. If you want more details, a good starting point is Joseph Conlon’s book “Why String Theory?” On a more general level, I hope you excuse if I mention that the question what makes a theory promising is the running theme of my upcoming book “Lost in Math.” In the book I go through the pros and cons of string theory and supersymmetry and the multiverse, and also discuss the relevance of arguments from mathematical consistency.

Thanks for an interesting question!

With best wishes for your future research,

B.

Thursday, February 15, 2018

What does it mean for string theory that the LHC has not seen supersymmetric particles?



The LHC data so far have not revealed any evidence for supersymmetric particles, or any other new particles. For all we know at present, the standard model of particle physics suffices to explain observations.

There is some chance that better statistics which come with more data will reveal some less obvious signal, so the game isn’t yet over. But it’s not looking good for susy and her friends.
[Simulated signal of black hole production and decay at the LHC. Credits: CERN/ATLAS]

What are the consequences? The consequences for supersymmetry itself are few. The reason is that supersymmetry by itself is not a very predictive theory.

To begin with, there are various versions of supersymmetry. But more importantly, the theory doesn’t tell us what the masses of the supersymmetric particles are. We know they must be heavier than something we would have observed already, but that’s it. There is nothing in supersymmetric extensions of the standard model which prevents theorists from raising the masses of the supersymmetric partners until they are out of the reach of the LHC.

This is also the reason why the no-show of supersymmetry has no consequences for string theory. String theory requires supersymmetry, but it makes no requirements about the masses of supersymmetric particles either.

Yes, I know the headlines said the LHC would probe string theory, and the LHC would probe supersymmetry. The headlines were wrong. I am sorry they lied to you.

But the LHC, despite not finding supersymmetry or extra dimensions or black holes or unparticles or what have you, has taught us an important lesson. That’s because it is clear now that the Higgs mass is not “natural,” in contrast to all the other particle masses in the standard model. That the mass is natural means, roughly speaking, that getting it from a calculation should not require the input of finely tuned numbers.
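Schematically, the problem is that the observed Higgs mass comes out as a difference between a bare mass and quantum corrections of the order of the scale Λ up to which the theory is assumed to hold:

$$ m_H^2 \simeq m_{\text{bare}}^2 - c\,\Lambda^2 \,, $$

with c a number of order one. If Λ is the Planck mass, the two terms on the right must cancel to roughly one part in 10^34 to give the observed 125 GeV – that’s the finetuning which naturalness forbids.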

The idea that the Higgs-mass should be natural is why many particle physicists were confident the LHC would see something beyond the Higgs. This didn’t happen, so the present state of affairs forces them to rethink their methods. There are those who cling to naturalness, hoping it might still be correct, just in a more difficult form. Some are willing to throw it out and replace it instead with appealing to random chance in a multiverse. But most just don’t know what to do.

Personally I hope they’ll finally come around and see that they have tried for several decades to solve a problem that doesn’t exist. There is nothing wrong with the mass of the Higgs. What’s wrong with the standard model is the missing connection to gravity and a Landau pole.

Be that as it may, the community of theoretical particle physicists is currently in a phase of rethinking. There are of course those who already argue a next larger collider is needed because supersymmetry is just around the corner. But the main impression that I get when looking at recent publications is a state of confusion.

Fresh ideas are needed. The next years, I am sure, will be interesting.



I explain all about supersymmetry, string theory, the problem with the Higgs-mass, naturalness, the multiverse, and what they have to do with each other in my upcoming book “Lost in Math.”

Wednesday, September 27, 2017

Dear Dr B: Why are neutrinos evidence for physics beyond the standard model?

Dear Chris,

The standard model of particle physics contains two different types of particles. There are the fermions, which make up matter, and the gauge-bosons which mediate interactions between the fermions and, in some cases, among themselves. There is one additional particle – the Higgs-boson – which is needed to give masses to both bosons and fermions.

Neutrino event at the IceCube Observatory in Antarctica.
Image: IceCube Collaboration

The fermions come in left-handed and right-handed versions which are mirror-images of each other. In what I think is the most perplexing feature of the standard model, the left-handed and right-handed versions of fermions behave differently. We say the fermions are “chiral.” The difference between the left- and right-handed particles is most apparent if you look at neutrinos: Nobody has ever seen a right-handed neutrino.

You could say, well, no problem, let’s just get rid of the right-handed neutrinos. Who needs those anyway?

But it’s not that easy because we have known for 20 years or so that neutrinos have masses. We know this because we see them mix or “oscillate” into each other, and such an oscillation requires a non-vanishing mass-difference. This means not all the neutrino-masses can be zero.
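For two flavors, the probability that a neutrino produced as flavor a is detected as flavor b after traveling a distance L with energy E is

$$ P_{a \to b} = \sin^2(2\theta)\, \sin^2\!\left( \frac{\Delta m^2 \, L}{4 E} \right) $$

(in natural units), where θ is the mixing angle and Δm² the difference of the squared masses. If Δm² = 0, the probability vanishes – no oscillation without a mass-difference.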

Neutrino masses are a complication because the usual way to give masses to fermions is to couple the left-handed version with the right-handed version and with the Higgs. So what do you do if you have no right-handed neutrinos and yet neutrinos are massive?
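Schematically, that usual mass term looks like

$$ \mathcal{L}_{\text{mass}} = - \frac{y\, v}{\sqrt{2}} \left( \bar{\nu}_L \nu_R + \bar{\nu}_R \nu_L \right) \,, $$

where y is the coupling to the Higgs and v is the Higgs field’s vacuum expectation value. Without a right-handed ν_R, there is simply nothing to put into this expression.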

The current status is therefore that either a) there are right-handed neutrinos but we haven’t yet seen them, or b) neutrinos are different from the other fermions and can get masses in a different way. In either case, the standard model is incomplete.

It is partly an issue of terminology though. Some physicists say right-handed neutrinos are part of the standard model. In this case they aren’t “beyond the standard model” but instead their discovery is pending.

I have a personal fascination with neutrinos because I believe they’ll be key to understanding the pattern of particle-masses. This is because the right-handed neutrino is the only particle in the standard model that doesn’t carry gauge-charges (or, more precisely, whose gauge-charges are all zero). It seems to me that this should be the reason for it either being very heavy or not being there at all. But that’s speculation.

In any case, there are many neutrino experiments presently under way to study neutrino-oscillations more closely and also to look for “neutrinoless double-beta decay.” The relevance of the latter is that such a decay is possible only if neutrinos are different from the other fermions of the standard model, so that no additional particles are needed to create neutrino masses.

So, no, particle physics isn’t dead and over, it’s still full of discoveries waiting to happen!

Thanks for an interesting question.



Monday, June 26, 2017

Dear Dr B: Is science democratic?

    “Hi Bee,

    One of the often repeated phrases here in Italy by so called “science enthusiasts” is that “science is not democratic”, which to me sounds like an excuse for someone to justify some authoritarian or semi-fascist fantasy.

    We see this on countless “Science pages”, one very popular example being Fare Serata Con Galileo. It's not a bad page per se, quite the contrary, but the level of comments including variations of “Democracy is overrated”, “Darwin works to eliminate weak and stupid people” and the usual “Science is not democratic” is unbearable. It underscores a troubling “sympathy for authoritarian politics” that to me seems to be more and more common among “science enthusiasts". The classic example it’s made is “the speed of light is not voted”, which to me, as true as it may be, has some sinister resonance.

    Could you comment on this on your blog?

    Luca S.”


Dear Luca,

Wow, I had no idea there’s so much hatred in the backyards of science communication.

[Hand count at a convention of the German party CDU. Image source: AFP]

It’s correct that science isn’t democratic, but that doesn’t mean it’s fascistic. Science is a collective enterprise and a type of adaptive system, just like democracy is. But science isn’t democratic any more than sausage is a fruit just because you can eat both.

In an adaptive system, small modifications create a feedback that leads to optimization. The best-known example is probably Darwinian evolution, in which a species’ genetic information receives feedback through natural selection, thereby optimizing the odds of successful reproduction. A market economy is also an adaptive system. Here, the feedback happens through pricing. A free market optimizes “utility,” which is, roughly speaking, a measure of the agents’ (customers’/producers’) satisfaction.

Democracy too is an adaptive system. Its task is to match decisions that affect the whole collective with the electorate’s values. We use democracy to keep our “is” close to the “ought.”

Democracies are more stable than monarchies or autocracies because an independent leader is unlikely to continuously make decisions that the governed people approve of. And the more the governed disapprove, the more likely they are to chop off the king’s head. Democracy, hence, works better than monarchy for the same reason a free market works better than a planned economy: It uses feedback for optimization, and thereby increases the probability of serving peoples’ interests.

The scientific system too uses feedback for optimization – this is the very basis of the scientific method: A hypothesis that does not explain observations has to be discarded or amended. But that’s about where similarities end.

The most important difference between the scientific, democratic, and economic system is the weight of an individual’s influence. In a free market, influence is weighted by wealth: The more money you can invest, the more influence you can have. In a democracy, each voter’s opinion has the same weight. That’s pretty much the definition of democracy – and note that this is a value in itself.

In science, influence is correlated with expertise. While expertise doesn’t guarantee influence, an expert is more likely to hold relevant knowledge, hence expertise is in practice strongly correlated with influence.

There are a lot of things that can go wrong with scientific self-optimization – and a lot of things do go wrong – but that’s a different story and shall be told another time. Still, optimizing hypotheses by evaluating empirical adequacy is how it works in principle. Hence, science clearly isn’t democratic.

Democracy, however, plays an important role for science.

For science to work properly, scientists must be free to communicate and discuss their findings. Non-democratic societies often stifle discussion on certain topics which can create a tension with the scientific system. This doesn’t have to be the case – science can flourish just fine in non-democratic societies – but free speech strongly links the two.

Science also plays an important role for democracy.

Politics isn’t done with polling the electorate on what future they would like to see. Elected representatives then have to find out how to best work towards this future, and scientific knowledge is necessary to get from “is” to “ought.”

But things often go wrong at the step from “is” to “ought.” Trouble is, the scientific system does not export knowledge in a format that can be directly imported by the political system. The information that elected representatives would need to make decisions is a breakdown of predictions with quantified risks and uncertainties. But science doesn’t come with a mechanism to aggregate knowledge. For an outsider, it’s a mess of technical terms and scientific papers and conferences – and every possible opinion seems to be defended by someone!

As a result, public discourse often draws on the “scientific consensus” but this is a bad way to quantify risk and uncertainty.

To begin with, scientists are terribly disagreeable, and the only consensuses I know of are those on thousand-year-old questions. More importantly, counting the number of people who agree with a statement simply isn’t an accurate quantifier of certainty. The result of such counting inevitably depends on how much expertise the counted people have: Too little expertise, and they’re likely to be ill-informed. Too much expertise, and they’re likely to have personal stakes in the debate. Worse still, the head-count can easily be skewed by pouring money into some research programs.

Therefore, the best way we presently have to make scientific knowledge digestible for politicians is to use independent panels. Such panels – done well – can circumvent both the problem of personal bias and that of the skewed head-count. In the long run, however, I think we need a fourth arm of government to prevent politicians from attempting to interpret scientific debate. It’s not their job and it shouldn’t be.

But those “science enthusiasts” who you complain about are as wrong-headed as the science deniers who selectively disregard facts that are inconvenient for their political agenda. Both of them confuse opinions about what “ought to be” with the question how to get there. The former is a matter of opinion, the latter isn’t.

Take that vaccine debate you mentioned, for example. It’s one question what the benefits of vaccination are and who is at risk from side-effects – that’s a scientific debate. It’s another question entirely whether we should allow parents to put their and other peoples’ children at an increased risk of early death or a life of disability. There’s no scientific and no logical argument that tells us where to draw the line.

Personally, I think parents who don’t vaccinate their kids are harming minors and society shouldn’t tolerate such behavior. But this debate has very little to do with scientific authority. Rather, the issue is to what extent parents are allowed to ruin their offspring’s life. Your values may differ from mine.

There is also, I should add, no scientific and no logical argument for counting the vote of everyone (above some quite arbitrary age threshold) with the same weight. Indeed, as Daniel Gilbert argues, we are pretty bad at predicting what will make us happy. If he’s right, then the whole idea of democracy is based on a flawed premise.

So – science isn’t democratic, never has been, never will be. But rather than stating the obvious, we should find ways to better integrate this non-democratically obtained knowledge into our democracies. Claiming that science settles political debate is as stupid as ignoring knowledge that is relevant to make informed decisions.

Science can only help us to understand the risks and opportunities that our actions bring. It can’t tell us what to do.

Thanks for an interesting question.

Wednesday, June 07, 2017

Dear Dr B: What are the chances of the universe ending out of nowhere due to vacuum decay?

    “Dear Sabine,

    my names [-------]. I'm an anxiety sufferer of the unknown and have been for 4 years. I've recently came across some articles saying that the universe could just end out of no where either through false vacuum/vacuum bubbles or just ending and I'm just wondering what the chances of this are occurring anytime soon. I know it sounds silly but I'd be dearly greatful for your reply and hopefully look forward to that

    Many thanks

    [--------]”


Dear Anonymous,

We can’t predict anything.

You see, we make predictions by seeking explanations for available data, and then extrapolating the best explanation into the future. It’s called “abductive reasoning,” or “inference to the best explanation” and it sounds reasonable until you ask why it works. To which the answer is “Nobody knows.”

We know that it works. But we can’t justify inference with inference, hence there’s no telling whether the universe will continue to be predictable. Consequently, there is also no way to exclude that tomorrow the laws of nature will stop and planet Earth will fall apart. But do not despair.

Francis Bacon – widely acclaimed as the first to formulate the scientific method – might have reasoned his way out by noting there are only two possibilities. Either the laws of nature will break down unpredictably or they won’t. If they do, there’s nothing we can do about it. If they don’t, it would be stupid not to use predictions to improve our lives.

It’s better to prepare for a future that you don’t have than to not prepare for a future you do have. And science is based on this reasoning: We don’t know why the universe is comprehensible and why the laws of nature are predictive. But we cannot do anything about unknown unknowns anyway, so we ignore them. And if we do that, we can benefit from our extrapolations.

Just how well scientific predictions work depends on what you try to predict. Physics is the currently most predictive discipline because it deals with the simplest of systems, those whose properties we can measure to high precision and whose behavior we can describe with mathematics. This enables physicists to make quantitatively accurate predictions – if they have sufficient data to extrapolate.

The articles that you read about vacuum decay, however, are unreliable extrapolations of incomplete evidence.

Existing data in particle physics are well-described by a field – the Higgs-field – that fills the universe and gives masses to elementary particles. This works because the value of the Higgs-field is different from zero even in vacuum. We say it has a “non-vanishing vacuum expectation value.” The vacuum expectation value can be calculated from the masses of the known particles.

In the currently most widely used theory for the Higgs and its properties, the vacuum expectation value is non-zero because the Higgs has a potential whose minimum lies at a non-zero value of the field.
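The textbook example for such a potential is

$$ V(h) = -\mu^2 h^2 + \lambda h^4 \,, $$

which has its minimum not at h = 0 but at h = μ/√(2λ).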

We do not, however, know that the minimum which the Higgs currently occupies is the only minimum of the potential and – if the potential has another minimum – whether the other minimum would be at a smaller energy. If that was so, then the present state of the vacuum would not be stable, it would merely be “meta-stable” and would eventually decay to the lowest minimum. In this case, we would live today in what is called a “false vacuum.”

Image Credits: Gary Scott Watson.


If our vacuum decays, the world will end – I don’t know a more appropriate expression. Such a decay, once triggered, releases an enormous amount of energy – and it spreads at the speed of light, tearing apart all matter it comes in contact with, until all vacuum has decayed.

How can we tell whether this is going to happen?

Well, we can try to measure the properties of the Higgs’ potential and then extrapolate it away from the minimum. This works much like Taylor series expansions, and it has the same pitfalls. Indeed, making predictions about the minima of a function based on a polynomial expansion is generally a bad idea.

Just look for example at the Taylor series of the sine function. The full function has an infinite number of minima, all at exactly the same value, but you’d never guess that from the first terms in the series expansion. First it has one minimum, then it has two minima of different value, then again it has only one – and the higher the order of the expansion, the more minima you get.
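For reference, that series is

$$ \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \,, $$

and each finite truncation is a polynomial whose minima bear little resemblance to those of the full function.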

The situation for the Higgs’ potential is more complicated because the coefficients are not constant, but the argument is similar. If you extract the best-fit potential from the available data and extrapolate it to other values of the Higgs-field, then you find that our present vacuum is meta-stable.

The figure below shows the situation for the current data (figure from this paper). The horizontal axis is the Higgs mass, the vertical axis the mass of the top-quark. The current best-fit is the upper left red point in the white region labeled “Metastability.”
Figure 2 from Bednyakov et al, Phys. Rev. Lett. 115, 201802 (2015).


This meta-stable vacuum has, however, a ridiculously long lifetime of about 10^600 times the current age of the universe, give or take a few billion billion billion years. This means that the vacuum will almost certainly not decay until all stars have burnt out.

However, this extrapolation of the potential assumes that there aren’t any unknown particles at energies higher than what we have probed, and no other changes to physics as we know it either. And there is simply no telling whether this assumption is correct.

The analysis of vacuum stability is not merely an extrapolation of the presently known laws into the future – which would be justified – it is also an extrapolation of the presently known laws into an untested energy regime – which is not justified. This stability debate is therefore little more than a mathematical exercise, a funny way to quantify what we already know about the Higgs’ potential.

Besides, of all the ways I can think of for humanity to go extinct, this one worries me least: It would happen without warning, it would happen quickly, and nobody would be left behind to mourn. I worry much more about events that may cause much suffering, like asteroid impacts, global epidemics, nuclear war – and my worry-list goes on.

Not all worries can be cured by rational thought, but since I double-checked that you want facts and not comfort: fact is that current data indicates our vacuum is meta-stable. But its decay is an unreliable prediction, based on the unfounded assumption that there either are no changes to physics at energies beyond the ones we have tested, or that such changes don’t matter. And even if you buy this, the vacuum almost certainly wouldn’t decay as long as the universe is hospitable for life.

Particle physics is good for many things, but generating potent worries isn’t one of them. The biggest killer in physics is still the 2nd law of thermodynamics. It will get us all, eventually. But keep in mind that the only reason we play the prediction game is to get the best out of the limited time that we have.

Thanks for an interesting question!

Thursday, April 06, 2017

Dear Dr. B: Why do physicists worry so much about the black hole information paradox?

    “Dear Dr. B,

    Why do physicists worry so much about the black hole information paradox, since it looks like there are several, more mundane processes that are also not reversible? One obvious example is the increase of the entropy in an isolated system and another one is performing a measurement according to quantum mechanics.

    Regards, Petteri”


Dear Petteri,

This is a very good question. Confusion orbits the information paradox like accretion disks orbit supermassive black holes. A few weeks ago, I figured even my husband doesn’t really know what the problem is, and he doesn’t only have a PhD in physics, he has also endured me rambling about the topic for more than 15 years!

So, I’m happy to elaborate on why theorists worry so much about black hole information. There are two aspects to this worry: one scientific and one sociological. Let me start with the scientific aspect. I’ll comment on the sociology below.

In classical general relativity, black holes aren’t much trouble. Yes, they contain a singularity where curvature becomes infinitely large – and that’s deemed unphysical – but the singularity is hidden behind the horizon and does no harm.

As Stephen Hawking pointed out, however, if you take into account that the universe – even vacuum – is filled with quantum fields of matter, you can calculate that black holes emit particles, now called “Hawking radiation.” This combination of unquantized gravity with quantum fields of matter is known as “semi-classical” gravity, and it should be a good approximation as long as quantum effects of gravity can be neglected, which means as long as you’re not close to the singularity.

Illustration of black hole with jet and accretion disk.
Image credits: NASA.


Hawking radiation consists of pairs of entangled particles. Of each pair, one particle falls into the black hole while the other one escapes. This leads to a net loss of mass of the black hole, i.e., the black hole shrinks. It loses mass until it has entirely evaporated and all that’s left are the particles of the Hawking radiation that escaped.

Problem is, the surviving particles don’t contain any information about what formed the black hole. And not only that, the information of the particles’ partners that went into the black hole is also lost. If you investigate the end-products of black hole evaporation, you therefore can’t tell what the initial state was; the only quantities you can extract are the total mass, charge, and angular momentum – the three “hairs” of black holes (plus one qubit). Black hole evaporation is therefore irreversible.



Irreversible processes, however, don’t exist in quantum field theory. In technical jargon, black holes can turn pure states into mixed states, something that shouldn’t ever happen. Black hole evaporation thus gives rise to an internal contradiction, or “inconsistency”: You combine quantum field theory with general relativity, but the result isn’t compatible with quantum field theory.
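In equations: under the usual unitary time evolution, a density matrix ρ evolves as

$$ \rho \;\to\; U \rho\, U^\dagger \,, $$

which leaves Tr(ρ²) unchanged. A pure state has Tr(ρ²) = 1 and a mixed state has Tr(ρ²) < 1, so the one can never evolve into the other – yet that is exactly what black hole evaporation seems to do.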

To address your questions: Entropy increase usually does not imply a fundamental irreversibility, but merely a practical one. Entropy increases because the probability to observe the reverse process is small. But fundamentally, any process is reversible: Unbreaking eggs, unmixing dough, unburning books – mathematically, all of this can be described just fine. We merely never see this happening because such processes would require exquisitely finetuned initial conditions. A large entropy increase makes a process irreversible in practice, but not irreversible in principle.

That is true for all processes except black hole evaporation. No amount of finetuning will bring back the information that was lost in a black hole. It’s the only known case of a fundamental irreversibility. We know it’s wrong, but we don’t know exactly what’s wrong. That’s why we worry about it.

The irreversibility in quantum mechanics, which you are referring to, comes from the measurement process, but black hole evaporation is irreversible already before a measurement was made. You could argue then, why should it bother us if everything we can possibly observe requires a measurement anyway? Indeed, that’s an argument which can and has been made. But in and by itself it doesn’t remove the inconsistency. You still have to demonstrate just how to reconcile the two mathematical frameworks.

This problem has attracted so much attention because the mathematics is so clear-cut and the implications are so deep. Hawking evaporation relies on the quantum properties of matter fields, but it does not take into account the quantum properties of space and time. It is hence widely believed that quantizing space-time is necessary to remove the inconsistency. Figuring out just what it would take to prevent information loss would teach us something about the still unknown theory of quantum gravity. Black hole information loss, therefore, is a lovely logical puzzle with large potential pay-off – that’s what makes it so addictive.

Now some words on the sociology. It will not have escaped your attention that the problem isn’t exactly new. Indeed, its origin predates my birth. Thousands of papers have been written about it during my lifetime, and hundreds of solutions have been proposed, but theorists just can’t agree on one. The reason is that they don’t have to: For the black holes which we observe (e.g., at the center of our galaxy), the temperature of the Hawking radiation is so tiny there’s no chance of measuring any of the emitted particles. And so, black hole evaporation is the perfect playground for mathematical speculation.
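To get a feeling for how tiny, here is a minimal Python sketch – my own back-of-the-envelope illustration, not from any of those papers – evaluating the standard formula for the Hawking temperature of a Schwarzschild black hole, T = ħc³/(8πGMk_B):

```python
from math import pi

hbar = 1.055e-34   # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # Newton's constant, m^3 / (kg s^2)
k_B = 1.381e-23    # Boltzmann constant, J/K
M_sun = 1.989e30   # solar mass, kg

def hawking_temperature(mass_kg):
    """Hawking temperature in Kelvin of a Schwarzschild black hole of the given mass."""
    return hbar * c**3 / (8 * pi * G * mass_kg * k_B)

# The black hole at the center of our galaxy weighs roughly 4 million solar masses.
print(f"{hawking_temperature(4e6 * M_sun):.1e} K")  # about 1.5e-14 K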

[Figure: Lots of papers. Img: 123RF]
There is an obvious solution to the black hole information loss problem which was pointed out already in the early days. The reason that black holes destroy information is that whatever falls through the horizon ends up in the singularity where it is ultimately destroyed. The singularity, however, is believed to be a mathematical artifact that should no longer be present in a theory of quantum gravity. Remove the singularity and you remove the problem.

Indeed, Hawking’s calculation breaks down when the black hole has lost almost all of its mass and has become so small that quantum gravity is important. This would mean the information would just come out in the very late, quantum gravitational, phase and no contradiction ever occurs.

This obvious solution, however, is also inconvenient because it means that nothing can be calculated if one doesn’t know what happens near the singularity and in strong curvature regimes which would require quantum gravity. It is, therefore, not a fruitful idea. Not many papers can be written about it and not many have been written about it. It’s much more fruitful to assume that something else must go wrong with Hawking’s calculation.

Sadly, if you dig into the literature and try to find out on which grounds the idea that information comes out in the strong curvature phase was discarded, you’ll find it’s mostly sociology and not scientific reasoning.

If the information is kept by the black hole until late, this means that small black holes must be able to keep many different combinations of information inside. There are a few papers which have claimed that these black holes then must emit their information slowly, which means small black holes would have to behave like a – for all practical purposes infinite – number of particle species. In this case, so the claim, they should be produced in infinite amounts even in weak background fields (say, near Earth), which is clearly incompatible with observation.

Unfortunately, these arguments are based on an unwarranted assumption, namely that the interior of small black holes has a small volume. In GR, however, there isn’t any obvious relation between surface area and volume because space can be curved. The assumption that such small black holes, for which quantum gravity is strong, can be effectively described as particles is equally shaky. (For details and references, please see this paper I wrote with Lee some years ago.)

What happened, to make a long story short, is that Lenny Susskind wrote a dismissive paper about the idea that information is kept in black holes until late. This dismissal gave everybody else the opportunity to claim that the obvious solution doesn’t work and to henceforth produce endless amounts of papers on other speculations.

Excuse the cynicism, but that’s my take on the situation. I’ll even admit having contributed to the paper pile because that’s how academia works. I too have to make a living somehow.

So that’s the other reason why physicists worry so much about the black hole information loss problem: Because it’s speculation unconstrained by data, it’s easy to write papers about it, and there are so many people working on it that citations aren’t hard to come by either.

Thanks for an interesting question, and sorry for the overly honest answer.

Wednesday, November 30, 2016

Dear Dr. B: What is emergent gravity?

    “Hello Sabine, I've seen a couple of articles lately on emergent gravity. I'm not a scientist so I would love to read one of your easy-to-understand blog entries on the subject.

    Regards,

    Michael Tucker
    Wichita, KS”

Dear Michael,

Emergent gravity has been in the news lately because of a new paper by Erik Verlinde. I’ll tell you some more about that paper in an upcoming post, but answering your question makes for a good preparation.

The “gravity” in emergent gravity refers to the theory of general relativity in the regimes where we have tested it. That means Einstein’s field equations and curved space-time and all that.

The “emergent” means that gravity isn’t fundamental, but instead can be derived from some underlying structure. That’s what we mean by “emergent” in theoretical physics: If theory B can be derived from theory A but not the other way round, then B emerges from A.

You might be more familiar with seeing the word “emergent” applied to objects or properties of objects, which is another way physicists use the expression. Sound waves in the theory of gases, for example, emerge from molecular interactions. Van der Waals forces emerge from quantum electrodynamics. Protons emerge from quantum chromodynamics. And so on.

Everything that isn’t in the standard model or general relativity is known to be emergent already. And since I know that it annoys so many of you, let me point out again that, yes, to our current best knowledge this includes cells and brains and free will. Fundamentally, you’re all just a lot of interacting particles. Get over it.

General relativity and the standard model are currently the most fundamental descriptions of nature which we have. For the theoretical physicist, the interesting question is then whether these two theories are also emergent from something else. Most physicists in the field think the answer is yes. And any theory in which general relativity – in the tested regimes – is derived from a more fundamental theory is a case of “emergent gravity.”

That might not sound like such a new idea and indeed it isn’t. In string theory, for example, gravity – like everything else – “emerges” from, well, strings. There are a lot of other attempts to explain gravitons – the quanta of the gravitational interaction – as not-fundamental “quasi-particles” which emerge, much like sound-waves, because space-time is made of something else. An example of this is the model pursued by Xiao-Gang Wen and collaborators in which space-time, and matter, and really everything is made of qubits. Including cells and brains and so on.

Xiao-Gang’s model stands out because it can also include the gauge-groups of the standard model, though last time I looked chirality was an issue. But there are many other models of emergent gravity which focus on just getting general relativity. Lorenzo Sindoni has written a very useful, though quite technical, review of such models.

Almost all such attempts to have gravity emerge from some underlying “stuff” run into trouble because the “stuff” defines a preferred frame which shouldn’t exist in general relativity. They violate Lorentz-invariance, which we know observationally is fulfilled to very high precision.

An exception to this is entropic gravity, an idea pioneered by Ted Jacobson 20 years ago. Jacobson pointed out that there are very close relations between gravity and thermodynamics, and this research direction has since gained a lot of momentum.

The relation between general relativity and thermodynamics in itself doesn’t make gravity emergent, it’s merely a reformulation of gravity. But thermodynamics itself is an emergent theory – it describes the behavior of very large numbers of some kind of small things. Hence, that gravity looks a lot like thermodynamics makes one think that maybe it’s emergent from the interaction of a lot of small things.

What are the small things? Well, the currently best guess is that they’re strings. That’s because string theory is (at least to my knowledge) the only way to avoid the problems with Lorentz-invariance violation in emergent gravity scenarios. (Gravity is not emergent in Loop Quantum Gravity – its quantized version is directly encoded in the variables.)

But as long as you’re not looking at very short distances, it might not matter much exactly what gravity emerges from. Just as thermodynamics was developed before it could be derived from statistical mechanics, we might be able to develop emergent gravity before we know what to derive it from.

This is only interesting, however, if the gravity that “emerges” is only approximately identical to general relativity, and differs from it in specific ways. For example, if gravity is emergent, then the cosmological constant and/or dark matter might emerge with it, whereas in our current formulation, these have to be added as sources for general relativity.

So, in summary “emergent gravity” is a rather vague umbrella term that encompasses a large number of models in which gravity isn’t a fundamental interaction. The specific theory of emergent gravity which has recently made headlines is better known as “entropic gravity” and is, I would say, the currently most promising candidate for emergent gravity. It’s believed to be related to, or maybe even be part of string theory, but if there are such links they aren’t presently well understood.

Thanks for an interesting question!

Aside: Sorry about the issue with the comments. I turned on G+ comments, thinking they'd be displayed in addition, but that instead removed all the other comments. So I've reset this to the previous version, though I find it very cumbersome to have to follow four different comment threads for the same post.

Wednesday, October 19, 2016

Dear Dr B: Where does dark energy come from and what’s it made of?

“As the universe expands and dark energy remains constant (negative pressure) then where does the ever increasing amount of dark energy come from? Is this genuinely creating something from nothing (bit of lay man’s hype here), do conservation laws not apply? Puzzled over this for ages now.”
-- pete best
“When speaking of the Einstein equation, is it the case that the contribution of dark matter is always included in the stress energy tensor (source term) and that dark energy is included in the cosmological constant term? If so, is this the main reason to distinguish between these two forms of ‘darkness’? I ask because I don’t normally read about dark energy being ‘composed of particles’ in the way dark matter is discussed phenomenologically.”
-- CGT

Dear Pete, CGT:

Dark energy is often portrayed as very mysterious. But when you look at the math, it’s really the simplest aspect of general relativity.

Before I answer, allow me to clarify that your questions refer to “dark energy” but are specifically about the cosmological constant, which is a particular type of dark energy. For all we know, the cosmological constant fits all existing observations. Dark energy could be more complicated than that, but let’s start with the cosmological constant.

Einstein’s field equations can be derived from very few assumptions. First, there’s the equivalence principle, which can be formulated mathematically as the requirement that the equations be tensor-equations. Second, the equations should describe the curvature of space-time. Third, the source of gravity is the stress-energy tensor and it’s locally conserved.

If you write down the simplest equations which fulfill these criteria you get Einstein’s field equations with two free constants. One constant can be fixed by deriving the Newtonian limit and it turns out to be Newton’s constant, G. The other constant is the cosmological constant, usually denoted Λ. You can make the equations more complicated by adding higher order terms, but at low energies these two constants are the only relevant ones.
\[ R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu} \]
Einstein’s field equations, with the two constants G and Λ.
If the cosmological constant is not zero, then flat space-time is no longer a solution of the equations. If, in particular, the constant is positive, space will undergo accelerated expansion if there are no other matter sources, or if these are negligible in comparison to Λ. Our universe presently seems to be in a phase that is dominated by a positive cosmological constant – that’s the easiest way to explain the observations which were awarded the 2011 Nobel Prize in Physics.
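To see where the acceleration comes from, one can look at the Friedmann equation for the scale factor a(t) of a homogeneous and isotropic universe – a standard result, quoted here for illustration:

\[ \frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3} . \]

If matter and radiation are negligible (ρ ≈ p ≈ 0) and Λ > 0, then ä > 0, and the solution is exponential expansion, a(t) ∝ exp(√(Λ/3) c t).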

Things get difficult if one tries to find an interpretation of the rather unambiguous mathematics. You can for example take the term with the cosmological constant and not think of it as geometrical, but instead move it to the other side of the equation and think of it as some stuff that causes curvature. If you do that, you might be tempted to read the entries of the cosmological constant term as if it was a kind of fluid. It would then correspond to a fluid with constant density and with constant, negative pressure. That’s something one can write down. But does this interpretation make any sense? I don’t know. There isn’t any known fluid with such behavior.
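Written out – a standard rearrangement, added here only to make the point concrete – this move looks as follows:

\[ G_{\mu\nu} = \frac{8\pi G}{c^4}\left(T_{\mu\nu} + T^{(\Lambda)}_{\mu\nu}\right), \qquad T^{(\Lambda)}_{\mu\nu} = -\frac{c^4 \Lambda}{8\pi G}\, g_{\mu\nu} , \]

which, read as a perfect fluid, corresponds to a constant energy density ρ_Λ = c⁴Λ/(8πG) and pressure p_Λ = −ρ_Λ.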

Since the cosmological constant is also present if matter sources are absent, it can be interpreted as the energy-density and pressure of the vacuum. Indeed, one can calculate such a term in quantum field theory, just that the result is infamously 120 orders of magnitude too large. But that’s a different story and shall be told another time. The cosmological constant term is therefore often referred to as the “vacuum energy,” but that’s sloppy. It’s an energy-density, not an energy, and that’s an important difference.
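For the record, the often-quoted naive estimate goes like this (in units where ħ = c = 1): cutting off the zero-point energies of the quantum fields at the Planck scale gives

\[ \rho_{\rm vac} \sim M_{\rm P}^4 \sim 10^{76}\,{\rm GeV}^4, \qquad \text{whereas observations give} \qquad \rho_\Lambda \sim (10^{-3}\,{\rm eV})^4 \sim 10^{-48}\,{\rm GeV}^4 , \]

a mismatch of roughly 120 orders of magnitude.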

How can it possibly be that an energy density remains constant as the universe expands, you ask. Doesn’t this mean you need to create more energy from somewhere? No, you don’t need to create anything. This is a confusion which comes about because you interpret the density which has been assigned to the cosmological constant like a density of matter, but that’s not what it is. If it was some kind of stuff we know, then, yes, you would expect the density to dilute as space expands. But the cosmological constant is a property of space-time itself. As space expands, there’s more space, and that space still has the same vacuum energy density – it’s constant!

The cosmological constant term is indeed conserved in general relativity, and it’s conserved separately from the other energy and matter sources. It’s just that conservation of stress-energy in general relativity works differently than you might be used to from flat space.

According to Noether’s theorem there’s a conserved quantity for every (continuous) symmetry. A flat space-time is the same at every place and at every moment of time. We say it has a translational invariance in space and time. These are symmetries, and they come with conserved quantities: Translational invariance of space conserves momentum, translational invariance in time conserves energy.
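As a textbook illustration (my addition, not specific to gravity): for a system with Lagrangian L(q, q̇) that does not explicitly depend on time, Noether’s theorem gives the conserved energy

\[ E = \dot q\, \frac{\partial L}{\partial \dot q} - L , \qquad \frac{dE}{dt} = 0 \]

along solutions of the equations of motion. If L depended explicitly on time, this conservation would fail.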

In a curved space-time, generically neither symmetry is fulfilled, hence neither energy nor momentum is conserved. So, if you take the vacuum energy density and you integrate it over some volume to get an energy, then the total energy indeed grows with the volume. It’s just not conserved. How strange! But that makes perfect sense: It’s not conserved because space expands and hence we have no invariance in time. Consequently, there’s no conserved quantity for invariance in time.

But general relativity has a more complicated type of symmetry to which Noether’s theorem can be applied. This gives rise to a local conservation law for stress-energy when coupled to gravity (the stress-energy tensor is covariantly conserved).
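In formulas, the local conservation law reads

\[ \nabla^\mu T_{\mu\nu} = 0 , \]

where ∇ is the covariant derivative. The cosmological constant term satisfies this separately and automatically, because the metric is covariantly constant: ∇^μ(Λ g_{μν}) = 0.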

The conservation law for the density of a pressureless fluid, for example, works as you expect it to: As space expands, the density goes down with the volume. For radiation – which has pressure – the energy density falls faster than that of matter because wavelengths also redshift. And if you put the cosmological constant term with its negative pressure into the conservation law, both the energy density and the pressure remain the same. It’s all consistent: They are conserved if they are constant.
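To make this concrete, here is the conservation law in a homogeneous and isotropic universe – again a standard result, quoted for illustration, in units with c = 1 and H = ȧ/a the Hubble rate:

\[ \dot\rho = -3H\,(\rho + p) . \]

For pressureless matter (p = 0) this gives ρ ∝ a⁻³; for radiation (p = ρ/3) it gives ρ ∝ a⁻⁴; and for the cosmological constant (p = −ρ) it gives ρ̇ = 0, a constant density.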

Dark energy now is a generalization of the cosmological constant, in which one invents some fields which give rise to a similar term. There are various fields that theoretical physicists have played with: chameleon fields and phantom fields and quintessence and such. The difference to the cosmological constant is that these fields’ densities do change with time, albeit slowly. There is however presently no evidence that this is the case.

As to the question of which dark stuff to include in which term: Dark matter is usually assumed to be pressureless, which means that, as far as its gravitational pull is concerned, it behaves just like normal matter. Dark energy, in contrast, has negative pressure and does odd things. That’s why they are usually collected in different terms.

Why don’t you normally read about dark energy being made of particles? Because you need some really strange stuff to get something that behaves like dark energy. You can’t make it out of any kind of particle that we know – this would either give you a matter term or a radiation term, neither of which does what dark energy needs to do.

If dark energy was some kind of field, or some kind of condensate, then it would be made of something else. In that case its density might indeed also vary from one place to the next and we might be able to detect the presence of that field in some way. Again though, there isn’t presently any evidence for that.

Thanks for your interesting questions!

Tuesday, September 27, 2016

Dear Dr B: What do physicists mean by “quantum gravity”?

“please could you give me a simple definition of "quantum gravity"?

J.”

Dear J,

Physicists refer with “quantum gravity” not so much to a specific theory but to the sought-after solution to various problems in the established theories. The most pressing problem is that the standard model combined with general relativity is internally inconsistent. If we just use both as they are, we arrive at conclusions which do not agree with each other. So just throwing them together doesn’t work. Something else is needed, and that something else is what we call quantum gravity.

Unfortunately, the effects of quantum gravity are very small, so presently we have no observations to guide theory development. In all experiments made so far, it’s sufficient to use unquantized gravity.

Nobody knows how to combine a quantum theory – like the standard model – with a non-quantum theory – like general relativity – without running into difficulties (except for me, but nobody listens). Therefore the main strategy has become to find a way to give quantum properties to gravity. Or, since Einstein taught us gravity is nothing but the curvature of space-time, to give quantum properties to space and time.

Just combining quantum field theory with general relativity doesn’t work because, as confirmed by countless experiments, all the particles we know have quantum properties. This means (among many other things) they are subject to Heisenberg’s uncertainty principle and can be in quantum superpositions. But they also carry energy and hence should create a gravitational field. In general relativity, however, the gravitational field can’t be in a quantum superposition, so it can’t be directly attached to the particles, as it should be.

One can try to find a solution to this conundrum, for example by not directly coupling the energy (and related quantities like mass, pressure, momentum flux and so on) to gravity, but instead only coupling the average value, which behaves more like a classical field. This solves one problem, but creates a new one. The average value of a quantum state must be updated upon measurement. This measurement postulate is a non-local prescription and general relativity can’t deal with it – after all Einstein invented general relativity to get rid of the non-locality of Newtonian gravity. (Neither decoherence nor many worlds remove the problem, you still have to update the probabilities, somehow, somewhere.)

The quantum field theories of the standard model and general relativity clash in other ways. If we try to understand the evaporation of black holes, for example, we run into another inconsistency: Black holes emit Hawking-radiation due to quantum effects of the matter fields. This radiation doesn’t carry information about what formed the black hole. And so, if the black hole entirely evaporates, this results in an irreversible process because from the end-state one can’t infer the initial state. This evaporation however can’t be accommodated in a quantum theory, where all processes can be time-reversed – it’s another contradiction that we hope quantum gravity will resolve.

Then there is the problem with the singularities in general relativity. Singularities, where the space-time curvature becomes infinitely large, are not mathematical inconsistencies. But they are believed to be physical nonsense. Using dimensional analysis, one can estimate that the effects of quantum gravity should become large close to the singularities. And so we think that quantum gravity should replace the singularities with a better-behaved quantum space-time.
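The relevant scale in this dimensional analysis is the Planck length (respectively the Planck mass), built from the constants ħ, G, and c:

\[ \ell_{\rm P} = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6\times 10^{-35}\,{\rm m}, \qquad m_{\rm P} = \sqrt{\frac{\hbar c}{G}} \approx 2.2\times 10^{-8}\,{\rm kg}. \]

Quantum gravitational effects are expected to become important when the space-time curvature approaches 1/ℓ_P².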

The sought-after theory of quantum gravity is expected to solve these three problems: tell us how to couple quantum matter to gravity, explain what happens to information that falls into a black hole, and avoid singularities in general relativity. Any theory which achieves this we’d call quantum gravity, whether or not you actually get it by quantizing gravity.

Physicists are presently pursuing various approaches to a theory of quantum gravity, notably string theory, loop quantum gravity, asymptotically safe gravity, and causal dynamical triangulation, just to name the most popular ones. But none of these approaches has experimental evidence speaking for it. Indeed, so far none of them has made a testable prediction.

This is why, in the area of quantum gravity phenomenology, we’re bridging the gap between theory and experiment with simplified models, some of which are motivated by specific approaches (hence: string phenomenology, loop quantum cosmology, and so on). These phenomenological models don’t aim to directly solve the above-mentioned problems; they merely provide a mathematical framework – consistent in its range of applicability – to quantify and hence test the presence of effects that could be signals of quantum gravity, for example space-time fluctuations, violations of the equivalence principle, deviations from general relativity, and so on.

Thanks for an interesting question!