
Showing posts with label History of Science.

Saturday, May 18, 2024

Einstein's Other Theory of Everything

Einstein completed his theory of general relativity in 1915, when he was 36 years old. What did he do for the remaining 40 years of his life? He continued developing his masterwork, of course! Feeling that his theory was incomplete, Einstein pursued a unified field theory. Though he ultimately failed, the ideas he came up with were quite interesting. I have read a lot of old Einstein papers in the past weeks, and here is my summary of what I believe he tried to do.

This video comes with a quiz which you can take here:

Saturday, February 03, 2024

The discovery of X-rays and what we can learn from it

I was recently trying to figure out just how X-rays were discovered. The first 3 explanations didn't make any sense to me and before I knew it, I had 12 books about Wilhelm Röntgen on my desk because my brain is a wild place. I eventually figured out what must have happened, I believe, and thought you might find it interesting, too.

Oh, and the reason the shoe-box has the word "Ostern" (German for "Easter") written on it is that my husband's family has used it for decades to store dyes for Easter eggs. (Which we still do today.)



This video comes with a quiz that lets you check how much you remember.

Saturday, December 25, 2021

We wish you a nerdy Xmas!

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Happy holidays everybody, today we’re celebrating Isaac Newton’s birthday with a hand-selected collection of nerdy Christmas facts that you can put to good use on every appropriate and inappropriate occasion.

You have probably noticed that in recent years worshipping Newton on Christmas has become somewhat of a fad on social media. People are wishing each other a happy Newtonmas rather than Christmas because December 25th is also Newton’s birthday. But did you know that this fad is more than a century old?

In 1891, The Japan Daily Mail reported that a society of Newton worshippers had sprung up at the University of Tokyo. It was founded, no surprise, by mathematicians and physicists. It was basically a social club for nerds, with Newton’s picture presiding over meetings. The members were expected to give speeches and make technical jokes that only other members would get. So kind of like physics conferences, basically.

The Japan Daily Mail also detailed what the nerds considered funny. For example, on Christmas, excuse me, Newtonmas, they’d have a lottery in which everyone drew a paper with a scientist’s name and then got a matching gift. So if you drew Newton you’d get an apple, if you drew Franklin a kite, Archimedes got you a naked doll, and Kant-Laplace would get you a puff of tobacco into your face. That was supposed to represent the Nebular Hypothesis. What’s that? That’s the idea that solar systems form from gas clouds, and yes, that was first proposed by Immanuel Kant. No, it doesn’t rhyme with pissant, sorry.

Newton worship may not have caught on, but nebular hypotheses certainly have.

By the way, did you know that Xmas isn’t an atheist term for Christmas? The word “Christ” in Greek is Christos, written like this (Χριστός). That first letter is called chi, and in the Roman alphabet it becomes an X. It’s been used as an abbreviation for Christ since at least the 15th century.

However, in the 20th century the abbreviation has become somewhat controversial among Christians because the “X” is now more commonly associated with a big unknown. So, yeah, use at your own risk. Or maybe stick with Happy Newtonmas after all?

Well that is controversial too, because it’s not at all clear that Newton’s birthday is actually December 25th. Isaac Newton was born on December 25, 1642 in England.

But. At that time, the English still used the Julian calendar. That is already confusing, because the new Gregorian calendar was introduced by Pope Gregory in 1582, well before Newton’s birth. It replaced the older Julian calendar, which didn’t properly match the months to the orbit of the Earth around the Sun.

Yet, when Pope Gregory introduced the new calendar, the British were mostly Anglicans, and they weren’t going to have some pope tell them what to do. So for over a hundred years, people in Great Britain celebrated Christmas 10 or 11 days later than most of Europe. Newton was born during that time. Great Britain eventually caved in and adopted the Gregorian calendar in 1752. They passed a law that overnight moved all dates forward by 11 days. So now Newton would have celebrated his birthday on January 4th, except by that time he was dead.

However, it gets more difficult, because these two calendars continue to drift apart, so if you ran the old Julian calendar forward until today, then December 25th according to the old calendar would now actually be January 7th. So, yeah, I think sorting this out will greatly enrich your conversation over Christmas lunch. By the way, Greece didn’t adopt the Gregorian calendar until 1923. Except for the Monastic Republic of Mount Athos, of course, which still uses the Julian calendar.
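If you want to check the calendar arithmetic yourself, here is a small Python sketch (my own illustration, using the standard Julian Day Number conversion formulas, not anything from the historical sources):

```python
# Convert a Julian-calendar date to the Gregorian calendar by going
# through the Julian Day Number, a plain day count used by astronomers.
def julian_to_jdn(year, month, day):
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return day + (153 * m + 2) // 5 + 365 * y + y // 4 - 32083

def jdn_to_gregorian(jdn):
    a = jdn + 32044
    b = (4 * a + 3) // 146097
    c = a - 146097 * b // 4
    d = (4 * c + 3) // 1461
    e = c - 1461 * d // 4
    m = (5 * e + 2) // 153
    day = e - (153 * m + 2) // 5 + 1
    month = m + 3 - 12 * (m // 10)
    year = 100 * b + d - 4800 + m // 10
    return year, month, day

def julian_to_gregorian(year, month, day):
    return jdn_to_gregorian(julian_to_jdn(year, month, day))

print(julian_to_gregorian(1642, 12, 25))  # (1643, 1, 4): Newton's birthday, new style
print(julian_to_gregorian(2024, 12, 25))  # (2025, 1, 7): the drift is now 13 days
```

The drift grows because the Julian calendar has a leap day every 4 years without exception, while the Gregorian calendar skips 3 of them every 400 years.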

Regardless of exactly which day you think Newton was born, there’s no doubt he changed the course of science and with that the course of the world. But Newton was also very religious. He spent a lot of time studying the Bible looking for numerological patterns. On one occasion he argued, I hope you’re sitting, that the Pope is the anti-Christ, based in part on the appearance of the number 666 in scripture. Yeah, the Brits really didn’t like the Catholics, did they.

Newton also, at the age of 19 or 20, had a notebook in which he kept a list of sins he had committed such as eating an apple at the church, making pies on Sunday night, “Robbing my mother’s box of plums and sugar” and “Using Wilford’s towel to spare my own”. Bad boy. Maybe more interesting is that Newton recorded his secret confessions in a cryptic code that was only deciphered in 1964. There are still four words that nobody has been able to crack. If you get bored over Christmas, you can give it a try yourself, link’s in the info below.

Newton may now be most famous for inventing calculus and for Newton’s laws and Newtonian gravity, all of which sound like he was a pen-and-paper person. But he did some wild self-experiments that you can put to good use for your Christmas conversations. Merry Christmas, did you know that Newton once poked a needle into his eye? I think this will go really well.

Not a joke. In 1666, when he was 23, Newton, according to his own records, poked his eye with a bodkin, which is more or less a blunt stitching needle. In his own words “I took a bodkine and put it between my eye and the bone as near to the backside of my eye as I could: and pressing my eye with the end of it… there appeared several white dark and coloured circles.”

If this was not crazy enough, in the same year, he also stared at the Sun taking great care to first spend some time in a dark room so his pupils would be wide open when he stepped outside. Here is how he described this in a letter to John Locke 30 years later:
“in a few hours’ time I had brought my eyes to such a pass that I could look upon no bright object with either eye but I saw the sun before me, so that I could neither write nor read... I began in three or four days to have some use of my eyes again & by forbearing a few days longer to look upon bright objects recovered them pretty well.”
Don’t do this at home. Since we’re already talking about needles, did you know that pine needles are edible? Yes, they are edible and some people say they taste like vanilla, so you can make ice cream with them. Indeed, they are a good source of vitamin C and were once used by sailors to treat and prevent scurvy.

By some estimates, scurvy killed more than 2 million sailors between the 16th and 18th centuries. On a long trip it was common to lose about half of the crew, but in extreme cases it could be worse. On his first trip to India in 1499, Vasco da Gama reportedly lost 116 of 170 men, almost all to scurvy.

But in 1536, the crew of the French explorer Jacques Cartier was miraculously healed from scurvy upon arrival in what is now Québec. The miracle cure was a drink that the Iroquois prepared by boiling winter leaves and the bark from an evergreen tree, which was rich in vitamin C.

So, if you’ve run out of emphatic sounds to make in response to aunt Emma, just take a few bites off the Christmas tree, I’m sure that’ll lighten things up a bit.

Speaking of lights. Christmas lights were invented by none other than Thomas Edison. According to the Library of Congress, Edison created the first strand of electric lights in 1880, and he hung them outside his laboratory in New Jersey during Christmastime. Two years later, his business partner Edward Johnson had the idea to wrap a strand of hand-wired red, white, and blue bulbs around a Christmas tree. So maybe take a break from worshipping Newton and spare a thought for Edison.

But watch out when you put the lights on the tree. According to the United States Consumer Product Safety Commission, in 2018, 17,500 people sought treatment at a hospital for injuries sustained while decorating for the holiday.

And this isn’t the only health risk on Christmas. In 2004, researchers in the United States found that people are much more likely to die from heart problems than expected, both on Christmas and on New Year’s. A 2018 study from Sweden made a similar finding. The authors of the 2004 study speculate that the reason may be that people delay seeking treatment during the holidays. So if you feel unwell, don’t put off seeing a doctor, even if it’s Christmas.

And since we’re already handing out the cheerful news, couples are significantly more likely to break up in the weeks before Christmas. This finding comes from a 2008 paper by British researchers who analyzed Facebook status updates. Makes you wonder, do people break up because they can’t agree which day Newton was born, or do they just not want to see their in-laws? Let me know what you think in the comments.

Saturday, October 09, 2021

How I learned to love pseudoscience

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


On this channel, I try to separate the good science from the bad science, the pseudoscience. And I used to think that we’d be better off without pseudoscience, that this would prevent confusion and make our lives easier. But now I think that pseudoscience is actually good for us. And that’s what we’ll talk about today.

Philosophers can’t agree on just what defines “pseudoscience,” but in this episode I will take it to mean theories that are in conflict with evidence, but that their promoters believe in, either by denying the evidence, or denying the scientific method, or maybe just because they have no idea what either the evidence or the scientific method is.

But what we call pseudoscience today might once have been science. Astrology, for example, the idea that the constellations of the stars influence human affairs, was once a respectable discipline. Every king and queen had a personal astrologer to give them advice. And many early medical practices weren’t just pseudoscience, they were often fatal. The literal snake oil, obtained by boiling snakes in oil, was at least both useless and harmless. However, doctors also prescribed tapeworms for weight loss. Though in all fairness, that might actually work, if you survive it.

And sometimes, theories accused of being pseudoscientific turned out to be right, for example the idea that the continents on Earth today broke apart from one large supercontinent. That was considered pseudoscience until evidence confirmed it. And the hypothesis of atoms was at first decried as pseudoscience because one could not, at the time, observe atoms.

So the first lesson we can take away is that pseudoscience is a natural byproduct of normal science. You can’t have one without the other. If we learn something new about nature, some fraction of people will cling to falsified theories longer than reasonable. And some crazy ideas in the end turn out to be correct.

But pseudoscience isn’t just a necessary evil. It’s actually useful to advance science because it forces scientists to improve their methods.

Single-blind trials, for example, were invented in the 18th century to debunk the practice of Mesmerism. At that time, scientists had already begun to study and apply electromagnetism. But many people were understandably mystified by the first batteries and electrically powered devices. Franz Mesmer exploited their confusion.

Mesmer was a German physician who claimed he’d discovered a very thin fluid that penetrated the entire universe, including the human body. When this fluid was blocked from flowing, he argued, the result was that people fell ill.

Fortunately, Mesmer said, it was possible to control the flow of the fluid and cure people. And he knew how to do it. The fluid was supposedly magnetic, and entered the body through “poles”. The north pole was on your head and that’s where the fluid came in from the stars, and the south pole was at your feet where it connected with the magnetic field of earth.

Mesmer claimed that the flow of the fluid could be unblocked by “magnetizing” people. Here is how the historian Lopez described what happened after Mesmer moved to Paris in 1778:
“Thirty or more persons could be magnetized simultaneously around a covered tub, a case made of oak, about one foot high, filled with a layer of powdered glass and iron filings... The lid was pierced with holes through which passed jointed iron branches, to be held by the patients. In subdued light, absolutely silent, they sat in concentric rows, bound to one another by a cord. Then Mesmer, wearing a coat of lilac silk and carrying a long iron wand, walked up and down the crowd, touching the diseased parts of the patients’ bodies. He was a tall, handsome, imposing man.”
After being “magnetized” by Mesmer, patients frequently reported feeling significantly better. This, by the way, is the origin of the word mesmerizing.

Scientists of the time, Benjamin Franklin and Antoine Lavoisier among them, set out to debunk Mesmer’s claims. For this, they blindfolded a group of patients. Some of them they told they’d get a treatment, but then they didn’t do anything, and others they gave a treatment without their knowledge.

Franklin and his people found that the supposed effects of mesmerism were not related to the actual treatment, but to the belief of whether one received a treatment. This isn’t to say there were no effects at all. Quite possibly some patients actually did feel better just believing they’d been treated. But it’s a psychological benefit, not a physical one.

In this case the patients didn’t know whether they received an actual treatment, but those conducting the study did. Such trials can be improved by randomly assigning people to one of the two groups so that neither the people leading the study nor those participating in it know who received an actual treatment. This is now called a “double blind trial,” and that too was invented to debunk pseudoscience, namely homeopathy.

Homeopathy was invented by another German, Samuel Hahnemann. It’s based on the belief that diluting a natural substance makes it more effective in treating illness. In 1835, Friedrich Wilhelm von Hoven, a public health official in Nuremberg, got into a public dispute with the dedicated homeopath Johann Jacob Reuter. Reuter claimed that dissolving a single grain of salt in 100 drops of water, and then diluting it 30 times by a factor of 100, would produce “extraordinary sensations” if you drank it. Von Hoven wouldn’t have it. He proposed and then conducted the following experiment.

He prepared 50 samples of homeopathic salt-water following Reuter’s recipe, and 50 samples of plain water. Today, we’d call the plain water samples a “placebo.” The samples were numbered and randomly assigned to trial participants by repeated shuffling. Here is how they explained this in the original paper from 1835:
“100 vials… are labeled consecutively… then mixed well among each other and placed, 50 per table, on two tables. Those on the table at the right are filled with the potentiation, those on the table at the left are filled with pure distilled snow water. Dr. Löhner enters the number of each bottle, indicating its contents, in a list, seals the latter and hands it over to the committee… The filled bottles are then brought to the large table in the middle, are once more mixed among each other and thereupon submitted to the committee for the purpose of distribution.”
The assignments were kept secret on a list in a sealed envelope. Neither von Hoven nor the patients knew who got what.

They found 50 people to participate in the trial. For three weeks von Hoven collected reports from the study participants, after which he opened the sealed envelope to see who had received what. It turned out that only eight participants had experienced anything unusual. Five of those had received the homeopathic dilution, three had received water. Using today’s language you’d say the effect wasn’t statistically significant.
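You can redo the arithmetic in a few lines. This is my own back-of-the-envelope check, not anything from the 1835 paper: if the dilution did nothing, each of the 8 reports is equally likely to come from either group, so the 5-to-3 split follows a binomial distribution with p = 1/2.

```python
from math import comb

# Of the 8 participants who reported unusual sensations, 5 had received
# the homeopathic dilution and 3 plain water. Under the null hypothesis
# (the dilution does nothing), each report comes from either group with
# probability 1/2.
def tail_probability(n, k):
    # chance of k or more "dilution" reports out of n by pure luck
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

p = tail_probability(8, 5)
print(p)  # 0.36328125 -- nowhere near the conventional 0.05 threshold
```

A split at least this lopsided happens by chance more than a third of the time, which is what “not statistically significant” means in today’s language.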

Von Hoven wasn’t alone with his debunking passion. He was a member of the “society of truth-loving men”. That was one of the skeptical societies that had popped up to counter the spread of quackery and fraud in the 19th century. The society of truth-loving men no longer exists. But the oldest such society that still exists today was founded as far back as 1881 in the Netherlands. It’s called the Vereniging tegen de Kwakzalverij, literally the “Society Against Quackery”. This society gave out an annual prize called the Master Charlatan Prize to discourage the spread of quackery. They still do this today.

Thanks to this Dutch anti-quackery society, the Netherlands became one of the first countries with governmental drug regulation. In case you wonder, the first country to have such a regulation was the United Kingdom with the 1868 Pharmacy Act. The word “skeptical” has suffered somewhat in recent years because a lot of science deniers now claim to be skeptics. But historically, the task of skeptic societies was to fight pseudoscience and to provide scientific information to the public.

And there are more examples where fighting pseudoscience resulted in scientific and societal progress. Take the efforts to debunk telepathy in the late nineteenth century. At the time, some prominent people believed in it, for example the Nobel Prize winners Lord Rayleigh and Charles Richet. Richet proposed to test telepathy by having one person draw a playing card at random and concentrate on it for a while. Then another person had to guess the card. The results were then compared against random chance. This is basically how we calculate statistical significance today.
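The comparison against random chance can be sketched in a few lines of Python. This is my own illustration of the general idea, not Richet’s actual protocol: the number of correct guesses in n trials follows a binomial distribution with success probability 1/52, and “significance” is the chance of doing at least as well by pure guessing.

```python
from math import comb

# Card-guessing test: one person draws a card, another guesses it.
# A correct guess happens by chance with probability 1/52.
def p_at_least(n, k, p=1 / 52):
    # probability of at least k correct guesses in n trials by luck alone
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# With 52 trials, about one hit is expected by chance; three or more
# hits still happen by luck alone in roughly 8% of experiments.
print(p_at_least(52, 3))
```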

And if you remember, Karl Popper came up with his demarcation criterion of falsification because he wanted to show that Marxism and Freud’s psychoanalysis weren’t proper science. Now, of course, we know today that falsification is not the best way to go about it, but Popper’s work was arguably instrumental to the entire discipline of the philosophy of science. Again, that came out of the desire to fight pseudoscience.

And this fight isn’t over. We’re still today fighting pseudoscience and in that process scientists constantly have to update their methods. For example, all this research we see in the foundations of physics on multiverses and unobservable particles doesn’t contribute to scientific progress. I am pretty sure in fifty years or so that’ll go down as pseudoscience. And of course there’s still loads of quackery in medicine, just think of all the supposed COVID remedies that we’ve seen come and go in the past year.

The fight against pseudoscience today is very much a fight to get relevant information to those who need it. And again I’d say that in the process scientists are forced to get better and stronger. They develop new methods to quickly identify fake studies, to explain why some results can’t be trusted, and to improve their communication skills.

In case this video inspired you to attempt self-experiments with homeopathic remedies, please keep in mind that not everything that’s labeled “homeopathic” is necessarily strongly diluted. Some homeopathic remedies contain barely diluted active ingredients of plants that can be dangerous when overdosed. Before you assume it’s just water or sugar, please check the label carefully.

If you want to learn more about the history of pseudoscience, I can recommend Michael Gordin’s recent book “On the Fringe”.

Saturday, May 08, 2021

What did Einstein mean by “spooky action at a distance”?

[This is a transcript of the video embedded below.]


Quantum mechanics is weird – I am sure you’ve read that somewhere. And why is it weird? Oh, it’s because it’s got that “spooky action at a distance”, doesn’t it? Einstein said that. Yes, that guy again. But what is spooky at a distance? What did Einstein really say? And what does it mean? That’s what we’ll talk about today.

The vast majority of sources on the internet claim that Einstein’s “spooky action at a distance” referred to entanglement. Wikipedia for example. And here is an example from Science Magazine. You will also find lots of videos on YouTube that say the same thing: Einstein’s spooky action at a distance was entanglement. But I do not think that’s what Einstein meant.

Let’s look at what Einstein actually said. The origin of the phrase “spooky action at a distance” is a letter that Einstein wrote to Max Born in March 1947. In this letter, Einstein explains to Born why he does not believe that quantum mechanics really describes how the world works.

He begins by assuring Born that he knows perfectly well that quantum mechanics is very successful: “I understand of course that the statistical formalism which you pioneered captures a significant truth.” But then he goes on to explain his problem. Einstein writes:
“I cannot seriously believe [in quantum mechanics] because the theory is incompatible with the requirement that physics should represent reality in space and time without spooky action at a distance...”

There it is, the spooky action at a distance. But just exactly what was Einstein referring to? Before we get into this, I have to quickly remind you how quantum mechanics works.

In quantum mechanics, everything is described by a complex-valued wave-function usually denoted Psi. From the wave-function we calculate probabilities for measurement outcomes, for example the probability to find a particle at a particular place. We do this by taking the absolute square of the wave-function.

But we cannot observe the wave-function itself. We only observe the outcome of the measurement. This means most importantly that if we make a measurement for which the outcome was not one hundred percent certain, then we have to suddenly “update” the wave-function. That’s because the moment we measure the particle, we know it’s either there or it isn’t. And this update is instantaneous. It happens at the same time everywhere, seemingly faster than the speed of light. And I think *that’s what Einstein was worried about because he had explained that already twenty years earlier, in the discussion of the 1927 Solvay conference.
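Numerically, the Born rule and the update step look like this. This is a toy sketch of my own, with a made-up four-component wave-function, just to make the two steps concrete:

```python
import numpy as np

# A toy wave-function: complex amplitudes for finding a particle at
# one of four possible positions.
psi = np.array([1 + 1j, 2 + 0j, 0 + 1j, 1 - 1j])
psi = psi / np.linalg.norm(psi)   # normalize so the probabilities sum to 1

probs = np.abs(psi) ** 2          # Born rule: P(x) = |psi(x)|^2
print(probs, probs.sum())

# The measurement "update": if the particle is found at position 1,
# the wave-function everywhere is replaced, all at once, by one
# concentrated entirely on that outcome.
psi_updated = np.zeros_like(psi)
psi_updated[1] = 1.0
```

It is this everywhere-at-once replacement of `psi` by `psi_updated`, not the probabilities themselves, that the rest of this post is about.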

In 1927, Einstein used the following example. Suppose you direct a beam of electrons at a screen with a tiny hole and ask what happens with a single electron. The wave-function of the electron will diffract on the hole, which means it will spread symmetrically into all directions. Then you measure it at a certain distance from the hole. The electron has the same probability to have gone in any direction. But if you measure it, you will suddenly find it in one particular point.

Einstein argues: “The interpretation, according to which [the square of the wave-function] expresses the probability that this particle is found at a given point, assumes an entirely peculiar mechanism of action at a distance, which prevents the wave continuously distributed in space from producing an action in two places on the screen.”

What he is saying is that somehow the wave-function on the left side of the screen must know that the particle was actually detected on the other side of the screen. In 1927, he did not call this action at a distance “spooky” but “peculiar” but I think he was referring to the same thing.

However, in Einstein’s electron argument it’s rather unclear what is acting on what, because there is only one particle. This is why Einstein, together with Podolsky and Rosen, later looked at the measurement for two particles that are entangled, which led to their famous 1935 EPR paper. So this is why entanglement comes in: because you need at least two particles to show that the measurement on one particle can act on the other particle. But entanglement itself is unproblematic. It’s just a type of correlation, and correlations can be non-local without there being any “action” at a distance.

To see what I mean, forget all about quantum mechanics for a moment. Suppose I have two socks that are identical, except the one is red and the other one blue. I put them in two identical envelopes and ship one to you. The moment you open the envelope and see that your sock is red, you know that my sock is blue. That’s because the information about the color in the envelopes is correlated, and this correlation can span over large distances.

There isn’t any spooky action going on though because that correlation was created locally. Such correlations exist everywhere and are created all the time. Imagine for example you bounce a ball off a wall and it comes back. That transfers momentum to the wall. You can’t see how much, but you know that the total momentum is conserved, so the momentum of the wall is now correlated with that of the ball.

Entanglement is a correlation like this, it’s just that you can only create it with quantum particles. Suppose you have a particle with total spin zero that decays in two particles that can have spin either plus or minus one. One particle goes left, the other one right. You don’t know which particle has which spin, but you know that the total spin is conserved. So either the particle going to the right had spin plus one and the one going left minus one or the other way round.

According to quantum mechanics, before you have measured one of the particles, both possibilities exist. You can then measure the correlations between the spins of both particles with two detectors on the left and right side. It turns out that the entanglement correlations can in certain circumstances be stronger than non-quantum correlations. That’s what makes them so interesting. But there’s no spooky action in the correlations themselves. These correlations were created locally. What Einstein worried about instead is that once you measure the particle on one side, the wave-function for the particle on the other side changes.

But isn’t this the same with the two socks? Before you open the envelope the probability was 50-50, and then when you open it, it jumps to 100:0. But there’s no spooky action going on there. It’s just that the probability was a statement about what you knew, and not about what really was the case. Really, which sock was in which envelope was already decided at the time I sent them.

Yes, that explains the case for the socks. But in quantum mechanics, that explanation does not work. If you think that which spin went into which direction was already decided when the particles were emitted, you will not get sufficiently strong correlations. It’s just incompatible with observations. Einstein did not know that; these experiments were done only after he died. But he knew that using entangled states you can demonstrate whether spooky action is real, or not.

I will admit that I’m a little defensive of good, old Albert Einstein because I feel that a lot of people too cheerfully declare that Einstein was wrong about quantum mechanics. But if you read what Einstein actually wrote, he was exceedingly careful in expressing himself, and yet most physicists dismissed his concerns. In April 1948, he repeated his argument to Born. He writes that a measurement on one part of the wave-function is a “physical intervention” and that “such an intervention cannot immediately influence the physical reality in a distant part of space.” Einstein concludes:
“For this reason I tend to believe that quantum mechanics is an incomplete and indirect description of reality which will later be replaced by a complete and direct one.”

So, Einstein did not think that quantum mechanics was wrong. He thought it was incomplete, that something fundamental was missing in it. And in my reading, the term “spooky action at a distance” referred to the measurement update, not to entanglement.

Saturday, April 10, 2021

Does the Universe have Higher Dimensions? Part 1

[This is a transcript of the video embedded below.]

Space, the way we experience it, has three dimensions. Left-right, forward-backward, and up-down. But why three? Why not 7? Or 26? The answer is: no one knows. But if no one knows why space has three dimensions, could it be that it actually has more? Just that we haven’t noticed for some reason? That’s what we will talk about today.


The idea that space has more than three dimensions may sound entirely nuts, but it’s a question that physicists have seriously studied for more than a century. And since there’s quite a bit to say about it, this video will have two parts. In this part we will talk about the origins of the idea of extra dimensions, Kaluza-Klein theory and all that. And in the next part, we will talk about more recent work on it, string theory and black holes at the Large Hadron Collider and so on.

Let us start with recalling how we describe space and objects in it. In two dimensions, we can put a grid on a plane, and then each point is a pair of numbers that says how far away from zero you have to go in the horizontal and vertical direction to reach that point. The arrow pointing to that point is called a “vector”.

This construction is not specific to two dimensions. You can add a third direction, and do exactly the same thing. And why stop there? You can no longer *draw a grid for four dimensions of space, but you can certainly write down the vectors. They’re just a row of four numbers. Indeed, you can construct vector spaces in any number of dimensions, even in infinitely many dimensions.

And once you have vectors in these higher dimensions, you can do geometry with them, like constructing higher dimensional planes, or cubes, and calculating volumes, or the shapes of curves, and so on. And while we cannot directly draw these higher dimensional objects, we can draw their projections into lower dimensions. This for example is the projection of a four-dimensional cube into two dimensions.
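Producing such a projection is just linear algebra on the vectors described above. Here is a minimal sketch of my own (the particular projection matrix is an arbitrary choice for illustration):

```python
import numpy as np
from itertools import product

# The 16 vertices of a four-dimensional cube: every sign combination
# of the coordinates (+-1, +-1, +-1, +-1).
vertices = np.array(list(product([-1.0, 1.0], repeat=4)))

# Projecting into two dimensions just means applying a 2x4 matrix.
# This particular choice mixes all four axes, so the flat "shadow"
# shows the familiar cube-within-a-cube picture of a tesseract.
P = np.array([[1.0, 0.0, 0.4, 0.2],
              [0.0, 1.0, 0.2, 0.4]])
shadow = vertices @ P.T
print(shadow.shape)  # (16, 2): sixteen corners, now as points in the plane
```

Connecting vertices that differ in exactly one coordinate then draws the edges of the projected cube.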

Now, it might seem entirely obvious today that you can do geometry in any number of dimensions, but it’s actually a fairly recent development. It wasn’t until 1843 that the British mathematician Arthur Cayley wrote about the “Analytical Geometry of (n) Dimensions,” where n could be any positive integer. Higher-dimensional geometry sounds innocent, but it was a big step towards abstract mathematical thinking. It marked the beginning of what is now called “pure mathematics”, that is, mathematics pursued for its own sake, and not necessarily because it has an application.

However, abstract mathematical concepts often turn out to be useful for physics. And these higher dimensional geometries came in really handy for physicists because in physics, we usually do not only deal with things that sit in particular places, but with things that also move in particular directions. If you have a particle, for example, then to describe what it does you need both a position and a momentum, where the momentum tells you the direction into which the particle moves. So, actually each particle is described by a vector in a six dimensional space, with three entries for the position and three entries for the momentum. This six-dimensional space is called phase-space.

By dealing with phase-spaces, physicists became quite used to dealing with higher dimensional geometries. And, naturally, they began to wonder if not the *actual space that we live in could have more dimensions. This idea was first pursued by the Finnish physicist Gunnar Nordström, who, in 1914, tried to use a 4th dimension of space to describe gravity. It didn’t work though. The person to figure out how gravity works was Albert Einstein.

Yes, that guy again. Einstein taught us that gravity does not need an additional dimension of space. Three dimensions of space will do, it’s just that you have to add one dimension of time, and allow all these dimensions to be curved.

But then, if you don’t need extra dimensions for gravity, maybe you can use them for something else.

Theodor Kaluza certainly thought so. In 1921, Kaluza wrote a paper in which he tried to use a fourth dimension of space to describe the electromagnetic force in a very similar way to how Einstein described gravity. But Kaluza used an infinitely large additional dimension and did not really explain why we don’t normally get lost in it.

This problem was solved a few years later by Oskar Klein, who assumed that the 4th dimension of space has to be rolled up to a small radius, so you can’t get lost in it. You just wouldn’t notice if you stepped into it; it’s too small. This idea that electromagnetism is caused by a curled-up 4th dimension of space is now called Kaluza-Klein theory.

I have always found it amazing that this works. You take an additional dimension of space, roll it up, and out comes gravity together with electromagnetism. You can explain both forces entirely geometrically. It is probably because of this that Einstein in his later years became convinced that geometry is the key to a unified theory for the foundations of physics. But at least so far, that idea has not worked out.

Does Kaluza-Klein theory make predictions? Yes, it does. All the electromagnetic fields which go into this 4th dimension have to be periodic so they fit onto the curled-up dimension. In the simplest case, the fields just don’t change when you go into the extra dimension. And that reproduces the normal electromagnetism. But you can also have fields which oscillate once as you go around, then twice, and so on. These are called higher harmonics, like you have in music. So, Kaluza Klein theory makes a prediction which is that all these higher harmonics should also exist.

Why haven’t we seen them? Because you need energy to make this extra dimension wiggle. And the more it wiggles, that is, the higher the harmonics, the more energy you need. Just how much energy? Well, that depends on the radius of the extra dimension. The smaller the radius, the smaller the wavelength, and the higher the frequency. So a smaller radius means you need higher energy to find out if the extra dimension is there. Just how small the radius is, the theory does not tell you, so we don’t know what energy is necessary to probe it. But the short summary is that we have never seen one of these higher harmonics, so the radius must be very small.
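For a rough feeling of the numbers, here is a back-of-the-envelope sketch in Python of the Kaluza-Klein scaling, where the n-th harmonic on a circle of radius R carries an energy of about n·ℏc/R. The radii below are arbitrary illustrative values, not predictions of the theory:

```python
HBAR_C_GEV_M = 1.97327e-16  # hbar * c in GeV * meters (approximate)

def kk_mode_energy(n, radius_m):
    """Approximate energy of the n-th Kaluza-Klein harmonic on a circle of radius radius_m."""
    return n * HBAR_C_GEV_M / radius_m

# The smaller the radius, the more energy is needed to excite even the first harmonic:
for radius in (1e-18, 1e-21, 1e-24):
    print(radius, "m ->", kk_mode_energy(1, radius), "GeV")
```

At a radius of 10⁻¹⁸ m the first harmonic already requires about 200 GeV, and the required energy grows by a factor of a thousand for each factor-thousand decrease in the radius.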

Oskar Klein himself, by the way, was really modest about his theory. He wrote in 1926:
"Ob hinter diesen Andeutungen von Möglichkeiten etwas Wirkliches besteht, muss natürlich die Zukunft entscheiden."

("Whether these indications of possibilities are built on reality has of course to be decided by the future.")

But we don’t actually use Kaluza-Klein theory instead of electromagnetism, and why is that? It’s because Kaluza-Klein theory has some serious problems.

The first problem is that while the geometry of the additional dimension correctly gives you electric and magnetic fields, it does not give you charged particles, like electrons. You still have to put those in. The second problem is that the radius of the extra dimension is not stable. If you perturb it, it can begin to increase, and that can have observable consequences which we have not seen. The third problem is that the theory is not quantized, and no one has figured out how to quantize geometry without running into problems. You can however quantize plain old electromagnetism without problems.

We also know today of course that the electromagnetic force actually combines with the weak nuclear force to what is called the electroweak force. That, interestingly enough, turns out to not be a problem for Kaluza-Klein theory. Indeed, it was shown in the 1960s by Ryszard Kerner, that one can do Kaluza-Klein theory not only for electromagnetism, but for any similar force, including the strong and weak nuclear force. You just need to add a few more dimensions.

How many? For the weak nuclear force, you need two more, and for the strong nuclear force another four. So in total, we now have one dimension of time, 3 for gravity, one for electromagnetism, 2 for the weak nuclear force and 4 for the strong nuclear force, which adds up to a total of 11.

In 1981, Edward Witten noticed that 11 happened to be the same number of dimensions which is the maximum for supergravity. What happened after this is what we’ll talk about next week.

Saturday, July 25, 2020

Einstein’s Greatest Legacy: Thought Experiments

Einstein’s greatest legacy is not General Relativity, it’s not the photoelectric effect, and it’s not slices of his brain. It’s a word: Gedankenexperiment – that’s German for “thought experiment”.

Today, thought experiments are common in theoretical physics. We use them to examine the consequences of a theory beyond what is measurable with existing technology, but still measurable in principle. Thought experiments are useful to push a theory to its limits, and doing so can reveal inconsistencies in the theory or new effects. There are only two rules for thought experiments: (A) only what is measurable is relevant and (B) do not fool yourself. This is not as easy as it sounds.

Maybe the first thought experiment came from James Clerk Maxwell and is known today as Maxwell’s demon. Maxwell used his thought experiment to find out whether one can beat the second law of thermodynamics and build a perpetual motion machine, from which an infinite amount of energy could be extracted.

Yes, we know that this is not possible, but Maxwell said, suppose you have two boxes of gas, one of high temperature and one of low temperature. If you bring them into contact with each other, the temperatures will reach equilibrium at a common temperature somewhere in the middle. In that process of reaching the equilibrium temperature, the system becomes more mixed up and entropy increases. And while that happens – while the gas mixes up – you can extract energy from the system. It “does work” as physicists say. But once the temperatures have equalized and are the same throughout the gas, you can no longer extract energy from the system. Entropy has become maximal and that’s the end of the story.

Maxwell’s demon now is a little omniscient being that sits at the connection between the two boxes where there is a little door. Each time a fast atom comes from the left, the demon lets it through. But if there’s a fast atom coming from the right, the demon closes the door. This way the number of fast atoms on the one side will increase, which means that the temperature on that side goes up again and the entropy of the whole system goes down.

It seems like thermodynamics is broken, because we all know that entropy cannot decrease, right? So what gives? Well, the demon needs to have information about the motion of the atoms, otherwise it does not know when to open the door. This means, essentially, the demon is itself a reservoir of low entropy. If you combine demon and gas the second law holds and all is well. The interesting thing about Maxwell’s demon is that it tells us entropy is somehow the opposite of information, you can use information to decrease entropy. Indeed, a miniature version of Maxwell’s demon has meanwhile been experimentally realized.
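The demon's sorting can be illustrated with a toy simulation. The Python sketch below is purely illustrative: atoms are reduced to random "speeds", and collisions, momentum, and the demon's own information cost are all ignored:

```python
import random
from statistics import mean

random.seed(0)
left = [random.random() for _ in range(1000)]    # random speeds in [0, 1)
right = [random.random() for _ in range(1000)]
THRESHOLD = 0.5   # the demon's dividing line between "fast" and "slow"

for _ in range(20000):
    # An atom approaches the door from a random side...
    if random.random() < 0.5 and left:
        atom = random.choice(left)
        if atom >= THRESHOLD:        # fast atom from the left: open the door
            left.remove(atom)
            right.append(atom)
    elif right:
        atom = random.choice(right)
        if atom < THRESHOLD:         # slow atom from the right: let it back
            right.remove(atom)
            left.append(atom)

# The right box ends up "hotter" (higher mean speed) than the left one.
print(mean(left), mean(right))
```

The sorting only looks like an entropy decrease because the demon's information is not part of the bookkeeping, which is exactly the point of the thought experiment.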

But let us come back to Einstein. Einstein’s best known thought experiment is that he imagined what would happen in an elevator that’s being pulled up. Einstein argued that there is no measurement you can do inside the elevator to find out whether the elevator is at rest in a gravitational field or is being pulled up with constant acceleration. This became Einstein’s “equivalence principle”, according to which the effects of gravitation in a small region of space-time are the same as the effects of acceleration in the absence of gravity. Converted into mathematical equations, this principle becomes the basis of General Relativity.

Einstein also liked to imagine how it would be to chase after photons, which was super-important for him to develop special relativity, and he spent a lot of time thinking about what it really means to measure time and distances.

But maybe the most influential of his thought experiments was one that he came up with to illustrate that quantum mechanics must be wrong. In this thought experiment, he explored one of the most peculiar effects of quantum mechanics: entanglement. He did this together with Boris Podolsky and Nathan Rosen, so today this is known as the Einstein-Podolsky-Rosen or just EPR experiment.

How does it work? Entangled particles have some measurable property, for example spin, that is correlated between the particles even though the value for each single particle is not determined as long as the particles have not been measured. If you have a pair of particles, you can know for example that if one particle has spin up, then the other one has spin down, or the other way round, but you may still not know which is which. The consequence is that if one of these particles is measured, the state of the other one seems to change – instantaneously.

Einstein, Podolsky and Rosen suggested this experiment because Einstein believed this instantaneous ‘spooky’ action at a distance is nonsense. You see, Einstein had a problem with it because it seems to conflict with the speed of light limit in Special Relativity. We know today that this is not the case, quantum mechanics does not conflict with Special Relativity because no useful information can be sent between entangled particles. But Einstein didn’t know that. Today, the EPR experiment is no longer a thought experiment. It can, and has been done, and we now know beyond doubt that quantum entanglement is real.
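For measurements along the same axis, the perfect anti-correlation can be mimicked by a trivial classical sampler, sketched below in Python. Keep in mind that this toy model only captures the same-axis case; the genuinely quantum behavior shows up when the two sides measure along different axes, which is where Bell's inequality enters and which no such classical model reproduces:

```python
import random

random.seed(1)

def measure_singlet_pair():
    """Outcomes for one entangled pair measured along the same axis."""
    a = random.choice((+1, -1))   # particle A: spin up or down, a fair coin
    return a, -a                  # particle B always comes out opposite

outcomes = [measure_singlet_pair() for _ in range(10000)]
mean_a = sum(a for a, _ in outcomes) / len(outcomes)

# Each pair is perfectly anti-correlated, yet each side alone looks random:
print(all(a == -b for a, b in outcomes), mean_a)
```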

A thought experiment that still gives headaches to theoretical physicists today is the black hole information loss paradox. General relativity and quantum field theory are both extremely well established theories, but if you combine them, you find that black holes will evaporate. We cannot measure this for real, because the temperature of the radiation is too low, but it is measurable in principle.

However, if you do the calculation, which was first done by Stephen Hawking, it seems that black hole evaporation is not reversible; it destroys information for good. This however cannot happen in quantum field theory and so we face a logical inconsistency when combining quantum theory with general relativity. This cannot be how nature works, so we must be making a mistake. But which?

There are many proposed solutions to the black hole information loss problem. Most of my colleagues believe that the inconsistency comes from using general relativity in a regime where it should no longer be used and that we need a quantum theory of gravity to resolve the problem. So far, however, physicists have not found a solution, or at least not one they can all agree on.

So, yes, thought experiments are a technique of investigation that physicists have used in the past and continue to use today. But we should not forget that eventually we need real experiments to test our theories.

Saturday, January 04, 2014

Book review: “Free Radicals” by Michael Brooks

Free Radicals: The Secret Anarchy of Science
By Michael Brooks
Profile Books Ltd (2011)

“Free Radicals” is a selection of juicy bits from the history of science, telling stories about how scientists break and bend rules to push onward and forward, how they fight, cheat and lie to themselves and to others. The reader meets well-known scientists (mostly dead ones) who fudged data, ignored evidence, flirted their way to lab equipment, experimented on themselves or family members, took drugs, publicly ridiculed their colleagues, and wiggled their way out of controversy with rhetorical tricks.

The book is very enjoyable as a collection of anecdotes. It is fast flowing, does not drown the reader in historical, biographical or scientific details, and it is well-written without distracting from the content. (I’ve gotten really tired of authors who want to be terribly witty and can’t leave you alone for a single paragraph).

Michael Brooks tries to convince the reader that there is a lesson to be learned from these anecdotes, which is that science thrives only because of scientists behaving badly in one way or the other. He refers to this as the “secret anarchy of science”. He actually disagrees with himself on that, as it becomes very clear from his stories that, far from being anarchic, science is an elitist meritocracy that grandfathers achievers and is biased against newcomers, in particular members of minorities. Anarchy is unstable - it’s a vacuum that gets rapidly filled with rules and hierarchies - and academia is full of these unwritten rules. Science is not and has never been anything like anarchic, neither secretly nor openly, though the house of science has arguably housed its share of rebels.

Worse than that misuse of the term ‘anarchy’ is that Brooks tries to construct his lesson from a small and hand-picked selection of examples and ignores the biggest part of science, which is business as usual. As we discussed in this earlier post, the question is not whether there are people who bent rules and were successful, but how many people bent rules and just wasted everybody’s time, a problem to which no thought is given in the book.

Luckily, Brooks does not elaborate on his lessons too much. The reader gets some of this in the beginning and then again in the end, where Brooks also uses the opportunity and tries to encourage scientists to engage more in policy making. Again he disagrees with himself. After he spent two hundred pages vividly depicting how scientists care about nothing but making progress on their research, arguing that this single-mindedness is the secret to scientific progress, in the last chapter he now wants scientists to engage more in politics, but that square block won’t fit through the round hole.

In summary, the book is a very enjoyable collection of anecdotes from the history of science. It would have benefitted if the author had refrained from trying to turn it into lessons about the sociology of science.

Wednesday, May 22, 2013

Who said it first? The historical comeback of the cosmological constant

I finished high school in 1995, and the 1998 evidence for the cosmological constant from supernova redshift data was my first opportunity to see physicists readjusting their worldview to accommodate new facts. Initially met with skepticism - as are all unexpected experimental results - the nonzero value of the cosmological constant was nevertheless quickly accepted. (Unlike, e.g., neutrino oscillations, where the situation remained murky, and people remained skeptical, for more than a decade.)

But how unexpected was that experimental result really?

I learned only recently that by 1998 it might not have been so much of a surprise. Already in 1990, Efstathiou, Sutherland and Maddox argued in a Nature paper that a cosmological constant is necessary to explain large scale structures. The abstract reads:
"We argue here that the successes of the [Cold Dark Matter (CDM)] theory can be retained and the new observations accommodated in a spatially flat cosmology in which as much as 80% of the critical density is provided by a positive cosmological constant, which is dynamically equivalent to endowing the vacuum with a non-zero energy density. In such a universe, expansion was dominated by CDM until a recent epoch, but is now governed by the cosmological constant. As well as explaining large-scale structure, a cosmological constant can account for the lack of fluctuations in the microwave background and the large number of certain kinds of object found at high redshift."
By 1995 a bunch of tentative and suggestive evidence had piled up that lead Krauss and Turner to publish a paper titled "The Cosmological Constant is Back".

I find this interesting for two reasons. First, it doesn't seem to be very widely known; it's also not mentioned in the Wikipedia entry. Second, taking into account that there must have been preliminary data and rumors even before the 1990 Nature paper was published, this means that by the late 1980s the cosmological constant likely started to seep back into physicists' brains.

Weinberg's anthropic prediction dates to 1987, which likely indeed predated observational evidence. Vilenkin's 1995 refinement of Weinberg's prediction was timely, but one is led to suspect he anticipated the 1998 results from the then already available data. Sorkin's prediction for a small positive cosmological constant in the context of Causal Sets seems to date back into the late 80s, but the exact timing is somewhat murky. There is a paper here which dates to 1990 with the prediction (scroll to the last paragraph), which leads me to think that at the time of writing he likely didn't know about the recent developments in astrophysics that would later render this paper a historically interesting prediction.

Wednesday, January 25, 2012

The Planck length as a minimal length

The best scientific arguments are those that are surprising at first sight, yet at second sight make perfect sense. The following argument, which goes back to Mead's 1964 paper "Possible Connection Between Gravitation and Fundamental Length," is of this type. Look at the abstract and note that it took more than 5 years from submission to publication of the paper. Clearly, Mead's argument seemed controversial at the time, even though all he did was to study the resolution of a microscope taking into account gravity.

For all practical purposes, the gravitational interaction is far too weak to be of relevance for microscopy. Normally, we can neglect gravity, in which case we can use Heisenberg's argument that I first want to remind you of before adding gravity. In the following, the speed of light c and Planck's constant ℏ are equal to one, unless they are not. If you don't know how natural units work, you should watch this video, or scroll down past the equations and just read the conclusion.

Consider a photon with frequency ω, moving in direction x, which scatters on a particle whose position on the x-axis we want to measure (see image below). The scattered photons that reach the lens (red) of the microscope have to lie within an angle ε to produce an image from which we want to infer the position of the particle.

According to classical optics, the wavelength of the photon sets a limit to the possible resolution,

Δx ≳ 1/(ω sin ε).

But the photon used to measure the position of the particle has a recoil when it scatters and transfers a momentum to the particle. Since one does not know the direction of the photon to better than ε, this results in an uncertainty for the momentum of the particle in direction x,

Δp ≳ ω sin ε.

Taken together one obtains Heisenberg's uncertainty principle,

Δx Δp ≳ 1.

We know today that Heisenberg's uncertainty principle is more than a limit on the resolution of microscopes; up to a factor of order one, the above inequality is a fundamental principle of quantum mechanics.

Now we repeat this little exercise by taking into account gravity.

Since we know that Heisenberg's uncertainty principle is a fundamental property of nature, it does not make sense, strictly speaking, to speak of the position and momentum of the particle at the same time. Consequently, instead of speaking about the photon scattering off the particle as if that would happen in one particular point, we should speak of the photon having a strong interaction with the particle in some region of size R (shown in the above image).

With gravity, the relevant question now will be what happens to the measured particle due to the gravitational attraction of the test particle.

For any interaction to take place and a subsequent measurement to be possible, the time elapsed between the interaction and the measurement has to be at least of the order of the time τ the photon needs to travel the distance R, so that τ ≳ R. The photon carries an energy that, though in general tiny, exerts a gravitational pull on the particle whose position we wish to measure. The gravitational acceleration acting on the particle is at least of the order

a ≳ Gω/R²,

where G is Newton's constant which is, in natural units, the square of the Planck length lPl. Assuming that the particle is non-relativistic and much slower than the photon, the acceleration lasts about the duration the photon is in the region of strong interaction. From this, the particle acquires a velocity of v ≈ aR ≳ Gω/R. Thus, in the time R, the acquired velocity allows the particle to travel a distance of

L ≳ Gω.

Since the direction of the photon was unknown to within ε, the direction of the acceleration and the motion of the particle is also unknown. Projection on the x-axis then yields the additional uncertainty of

Δx ≳ Gω sin ε.

Combining this with the usual uncertainty (multiply both, then take the square root), one obtains

Δx ≳ √G = lPl.

Thus, we find that the distortion of the measured particle by the gravitational field of the particle used for measurement prevents the resolution of arbitrarily small structures. Resolution is bounded by the Planck length, which is about 10⁻³³ cm. The Planck length thus plays the role of a minimal length.

(You might criticize this argument because it makes use of Newtonian gravity rather than general relativity, so let me add that, in his paper, Mead goes on to show that the estimate remains valid also in general relativity.)
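One can check the algebra of this argument numerically. The Python sketch below works in natural units with an arbitrary value for G, and confirms that the combined resolution limit is independent of the photon frequency and the aperture angle:

```python
import math

G = 1.0   # Newton's constant in natural units; its value drops out of the check

def dx_optics(omega, eps):
    """Classical resolution limit of the microscope, ~ 1/(omega sin eps)."""
    return 1.0 / (omega * math.sin(eps))

def dx_gravity(omega, eps):
    """Additional uncertainty from the photon's gravitational pull, ~ G omega sin eps."""
    return G * omega * math.sin(eps)

# The geometric mean of the two contributions is always sqrt(G), the Planck length:
for omega in (1.0, 10.0, 1e5):
    for eps in (0.1, 0.5, 1.0):
        dx = math.sqrt(dx_optics(omega, eps) * dx_gravity(omega, eps))
        assert abs(dx - math.sqrt(G)) < 1e-12

print("combined resolution limit:", math.sqrt(G))
```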

As anticipated, this minimal length is far too small to be of relevance for actual microscopes; its relevance is conceptual. Given that Heisenberg's uncertainty turned out to be a fundamental property of quantum mechanics, encoded in the commutation relations, we have to ask whether this modified uncertainty, too, should be promoted to fundamental relevance. In fact, in the last 5 decades this simple argument has inspired a great many works that attempted exactly this. But that is a different story and shall be told another time.

To finish this story, let me instead quote from a letter that Mead, the author of the above argument, wrote to Physics Today in 2001. In it, he recalls how little attention his argument originally received:
"[In the 1960s], I read many referee reports on my papers and discussed the matter with every theoretical physicist who was willing to listen; nobody that I contacted recognized the connection with the Planck proposal, and few took seriously the idea of [the Planck length] as a possible fundamental length. The view was nearly unanimous, not just that I had failed to prove my result, but that the Planck length could never play a fundamental role in physics. A minority held that there could be no fundamental length at all, but most were then convinced that a [different] fundamental length..., of the order of the proton Compton wavelength, was the wave of the future. Moreover, the people I contacted seemed to treat this much longer fundamental length as established fact, not speculation, despite the lack of actual evidence for it."

Wednesday, September 28, 2011

On the universal length appearing in the theory of elementary particles - in 1938

Special relativity and quantum mechanics are characterized by two universal constants, the speed of light, c, and Planck's constant, ℏ. Yet, from these constants one cannot construct a constant of dimension length (or mass respectively as a length can be converted to a mass by use of ℏ and c). In 1899, Max Planck pointed out that adding Newton's constant G to the universal constants c and ℏ allows one to construct units of mass, length and time. Today these are known as Planck-time, Planck-length and Planck-mass respectively. As we have seen in this earlier post, they mark the scale at which quantum gravitational effects are expected to become important. But back in Planck's days their relevance was in their universality, since they are constructed entirely from fundamental constants.
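Planck's construction is easy to redo numerically. The snippet below (in Python, with SI values rounded to a few digits) evaluates the unique combinations of c, ℏ and G with dimensions of length, time and mass:

```python
import math

G    = 6.674e-11    # Newton's constant, m^3 kg^-1 s^-2
HBAR = 1.0546e-34   # reduced Planck constant, J s
C    = 2.998e8      # speed of light, m / s

l_pl = math.sqrt(HBAR * G / C**3)  # Planck length, ~1.6e-35 m
t_pl = math.sqrt(HBAR * G / C**5)  # Planck time,   ~5.4e-44 s
m_pl = math.sqrt(HBAR * C / G)     # Planck mass,   ~2.2e-8  kg
print(l_pl, t_pl, m_pl)
```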

In the early 20th century, with the advent of quantum field theory, it was widely believed that a fundamental length was necessary to cure troublesome divergences. The most commonly used regularization was a cut-off or some other dimensionful quantity to render integrals finite. It seemed natural to think of this pragmatic cut-off as having fundamental significance, despite the problems it caused with Lorentz invariance. In 1938, Heisenberg wrote "Über die in der Theorie der Elementarteilchen auftretende universelle Länge" (On the universal length appearing in the theory of elementary particles), in which he argued that this fundamental length, which he denoted r0, should appear somewhere not too far beyond the classical electron radius (of the order of some fm).

This idea seems curious today, and has to be put into perspective. Heisenberg was very worried about the non-renormalizability of Fermi's theory of β-decay. He had previously shown that applying Fermi's theory at high center-of-mass energies of some hundred GeV led to an "explosion," by which he referred to events of very high multiplicity. Heisenberg argued this would explain the observed cosmic ray showers, whose large number of secondary particles we know today are created by cascades (a possibility that was already discussed at the time of Heisenberg's writing, but not agreed upon). We also know today that what Heisenberg actually discovered is that Fermi's theory breaks down at such high energies, and the four-fermion coupling has to be replaced by the exchange of a gauge boson in the electroweak interaction. But in the 1930s neither the strong nor the electroweak force was known. Heisenberg then connected the problem of regularization with the breakdown of the perturbation expansion of Fermi's theory, and argued that the presence of the alleged explosions would prohibit the resolution of finer structures:

"Wenn die Explosionen tatsächlich existieren und die für die Konstante r0 eigentlich charakteristischen Prozesse darstellen, so vermitteln sie vielleicht ein erstes, noch unklares Verständnis der unanschaulichen Züge, die mit der Konstanten r0 verbunden sind. Diese sollten sich ja wohl zunächst darin äußern, daß die Messung einer den Wert r0 unterschreitenden Genauigkeit zu Schwierigkeiten führt... [D]ie Explosionen [würden] dafür sorgen..., daß Ortsmessungen mit einer r0 unterschreitenden Genauigkeit unmöglich sind."

("If the explosions actually exist and represent the processes characteristic for the constant r0, then they maybe convey a first, still unclear, understanding of the obscure properties connected with the constant r0. These should, one may expect, express themselves in difficulties of measurements with a precision better than r0... The explosions would have the effect... that measurements of positions are not possible to a precision better than r0.")

In hindsight we know that Heisenberg was, correctly, arguing that the theory of elementary particles known in the 1930s was incomplete. The strong interaction was missing and Fermi's theory indeed non-renormalizable, but not fundamental. Today we also know that the standard model of particle physics is perturbatively renormalizable and know techniques to deal with divergent integrals that do not necessitate cut-offs, such as dimensional regularization. But lacking that knowledge, it is understandable that Heisenberg argued gravity had no role to play for the appearance of a fundamental length:

"Der Umstand, daß [die Plancklänge] wesentlich kleiner ist als r0, gibt uns das Recht, von den durch die Gravitation bedingen unanschaulichen Zügen der Naturbeschreibung zunächst abzusehen, da sie - wenigstens in der Atomphysik - völlig untergehen in den viel gröberen unanschaulichen Zügen, die von der universellen Konstanten r0 herrühren. Es dürfte aus diesen Gründen wohl kaum möglich sein, die elektrischen und die Gravitationserscheinungen in die übrige Physik einzuordnen, bevor die mit der Länge r0 zusammenhängenden Probleme gelöst sind."

("The fact that [the Planck length] is much smaller than r0 gives us the right to leave aside the obscure properties of the description of nature due to gravity, since they - at least in atomic physics - are totally negligible relative to the much coarser obscure properties that go back to the universal constant r0. For this reason, it seems hardly possible to integrate electric and gravitational phenomena into the rest of physics until the problems connected to the length r0 are solved.")

Today, one of the big outstanding questions in theoretical physics is how to resolve the apparent disagreements between the quantum field theories of the standard model and general relativity. It is not that we cannot quantize gravity, but that the attempt to do so leads to a non-renormalizable and thus fundamentally nonsensical theory. The reason is that the coupling constant of gravity, Newton's constant, is dimensionful. This leads to the necessity to introduce an infinite number of counter-terms, eventually rendering the theory incapable of prediction.

But the same is true for Fermi's theory that Heisenberg was so worried about that he argued for a finite resolution where the theory breaks down - and mistakenly so, since he was merely pushing an effective theory beyond its limits. So we have to ask whether we are making the same mistake as Heisenberg, in that we falsely interpret the failure of general relativity to extend beyond the Planck scale as the occurrence of a fundamentally finite resolution of structures, rather than just as the limit beyond which we have to look for a new theory that will allow us to resolve smaller distances still.

If it was only the extension of classical gravity, laid out in many thought experiments (see e.g. Garay 1994), that made us believe the Planck length is of fundamental importance, then the above historical lesson should caution us we might be on the wrong track. Yet, the situation today is different from that which Heisenberg faced. Rather than pushing a quantum theory beyond its limits, we are pushing a classical theory and conclude that its short-distance behavior is troublesome, which we hope to resolve by quantizing the theory. And several attempts at a UV-completion of gravity (string theory, loop quantum gravity, asymptotically safe gravity) suggest that the role of the Planck length as a minimal length carries over into the quantum regime as a dimensionful regulator, though in very different ways. This feeds our hopes that we might be working on unraveling another layer of nature's secrets and that this time it might actually be the fundamental one.


Aside: This text is part of the introduction to an article I am working on. Is the English translation of the German extracts from Heisenberg's paper understandable? It sounds funny to me, but then Heisenberg's German is also funny for 21st century ears. Feedback would be appreciated!

Wednesday, November 24, 2010

Nonsense people once believed in

I have a list with notes for blog posts, and one topic that's been on it for a while is beliefs that people once firmly held which, over the course of the history of science, turned out to be utterly wrong.

Some examples that came to my mind were the "élan vital" (the belief that life is some sort of substance), the theory of the four humors (one consequence of which was the widespread use of bloodletting as a medical treatment for all sorts of purposes), the static universe, and the non-acceptance of continental drift. On the more absurd side of things is the belief that semen is produced in the brain (because the brain was considered the seat of the soul), and that women who are nursing turn menstruation blood into breast milk. From my recent read of Annie Paul's book "Origins" I further learned that until only some decades ago it was widely believed that pretty much any sort of toxin is blocked by the placenta and does not reach the unborn child. It was indeed recommended that pregnant women drink alcohol, and smoking was not of concern. This dramatically wrong belief was also the reason why thalidomide was handed out without much concern to pregnant women, with the now well-known disastrous consequences, and why the fetal alcohol syndrome is a fairly recent diagnosis.

I was collecting more examples, not very actively I have to admit, but I found yesterday that somebody saved me the work! Richard Thaler, director of the Center for Decision Research at the University of Chicago Graduate School of Business, is working on a book about the topic, and he's asked the Edge-club for input:

"The flat earth and geocentric world are examples of wrong scientific beliefs that were held for long periods. Can you name your favorite example and for extra credit why it was believed to be true?"

You find the replies on this website, which include most of my examples and a few more. One reply that I found very interesting is that by Frank Tipler:
"The false belief that stomach ulcers were caused by stress rather than bacteria. I have some information on this subject that has never been published anywhere. There is a modern Galileo in this story, a scientist convicted of a felony in criminal court in the 1960's because he thought that bacteria caused ulcers."

I hadn't known about the "modern Galileo," is anybody aware of the details? Eric Weinstein adds the tau-theta puzzle, and Rupert Sheldrake suggests "With the advent of quantum theory, indeterminacy rendered the belief in determinism untenable," though I would argue that this issue isn't settled, and maybe never will be settled.

Do you know more examples?

Monday, October 04, 2010

Einstein on the discreteness of space-time

I recently came across this interesting quotation by Albert Einstein:
“But you have correctly grasped the drawback that the continuum brings. If the molecular view of matter is the correct (appropriate) one, i.e., if a part of the universe is to be represented by a finite number of moving points, then the continuum of the present theory contains too great a manifold of possibilities. I also believe that this too great is responsible for the fact that our present means of description miscarry with the quantum theory. The problem seems to me how one can formulate statements about a discontinuum without calling upon a continuum (space-time) as an aid; the latter should be banned from the theory as a supplementary construction not justified by the essence of the problem, which corresponds to nothing “real”. But we still lack the mathematical structure unfortunately. How much have I already plagued myself in this way!”

It's from a 1916 letter to Hans Walter Dällenbach, a former student of Einstein. (Unfortunately the letter is not available online.) I hadn't been aware Einstein thought (at least then) that a continuous space-time is not “real.” It's an interesting piece of history.

Friday, February 12, 2010

350 years Royal Society

As Sabine has mentioned earlier today, this year is the 350th anniversary of the Royal Society, the British national academy of science. Going back to a gathering of a few men interested in "Experimental Philosophy" in London in November 1660, the Royal Society is one of the oldest scientific academies in the world.

Outside Britain, it may be best known for its 13th president, Sir Isaac Newton, and for the publication of the "Philosophical Transactions of the Royal Society", the oldest existing scientific journal in continuous publication.

The Royal Society has set up a special website, and a very nice interactive timeline dubbed "trailblazing", which allows a brief virtual journey through the history of science since the 1650s.

Moreover, there will be several commemorative publications free to access over the anniversary year 2010, for example a special issue of the "Philosophical Transactions A". It features articles not requiring the reader to be a specialist to gain understanding of the content, ranging in topics from "Geometry and physics" by Michael Atiyah, Robbert Dijkgraaf and Nigel Hitchin to "Flat-panel electronic displays" by Cyril Hilsum.

And, most important, the Royal Society Digital Journal Archive will be free until 28 February 2010 (only two more weeks left, unfortunately). This means full access to all issues of the "Philosophical Transactions" starting back in 1665!

So, for example, we can read about

  • Isaac Newton presenting his "New Theory about Light and Colors", with the description of his experiments with prisms and the spectrum (1671, 6 3075-3087),

  • Benjamin Franklin reporting his experiments "concerning an Electrical Kite" (1751, 47 565-567),

  • John Michell discussing "the Means of Discovering the Distance, Magnitude, &c. of the Fixed Stars, in Consequence of the Diminution of the Velocity of Their Light...", suggesting stars so massive that light cannot escape from them (1784, 74 35-57),

  • Henry Cavendish describing his "Experiments to Determine the Density of the Earth", or to measure Newton's gravitational constant with a torsion balance (1798, 88 469-526),

  • Alexander Volta reporting Galvani's experiments on electricity (the "frog" experiments - 1793, 83 10-44) and his own construction of the "Volta pile", the prototype of an electrical battery (1800, 90 403-431),

  • William Herschel discussing recent developments about "his" planet Uranus (1783, 73 1-3), reasoning "On the Construction of the Heavens" (1785, 75 213-266) and "the Nature and Construction of the Sun and Fixed Stars" (1795, 85 46-72), and describing his discovery of "Solar, and ... Terrestrial Rays that Occasion Heat", now known as infrared light (1800, 90 293-326),

  • Thomas Young arguing for the wave nature of light in "Outlines of Experiments and Inquiries Respecting Sound and Light" (1800, 90 106-150), and reporting the results of his interference experiments (1804, 94 1-16),

  • James Prescott Joule demonstrating the "Mechanical Equivalent of Heat" (1850, 140 61-82), and

  • James Clerk Maxwell introducing the principle of the RGB colour system in "On the Theory of Compound Colours" (1860, 150 57-84), presenting "A Dynamical Theory of the Electromagnetic Field" (1865, 155 459-512) and contributing to the "Dynamical Theory of Gases" (1867, 157 49-88).


More findings are welcome in the comments! Have a great reading weekend!

Tuesday, February 02, 2010

LaserFest 2010

This year, the laser will turn 50! On May 16, 1960, at the Hughes Research Laboratories in Malibu, California, Theodore Maiman realized for the first time "Light Amplification by Stimulated Emission of Radiation", using a tiny ruby crystal.

Actually, Maiman and his small group of coworkers were back then just one of several teams, all at industrial laboratories, intensely searching for ways to create laser beams. By the end of the year, the ruby laser had been replicated and improved, and lasing had been realized using other crystals, and helium-neon gas mixtures. So it's only fair that the American Physical Society, the Optical Society, SPIE, and the IEEE Photonics Society have decided to organize a yearlong celebration of the 50th anniversary of the laser - that's LaserFest.

But in fact, the path to the laser had begun much earlier.

Berlin, 1916

In the summer of 1916, Albert Einstein took a break from general relativity and cosmology and tried to make sense, once more, of the riddle of the quantum. Specifically, he thought about ways to combine the recent ideas of Bohr on discrete energy levels in atoms with the Planck spectrum of blackbody radiation.

Atoms in thermal equilibrium with radiation can absorb radiation, thereby transiting to a state of higher energy, and they can drop from an excited state to a state with lower energy spontaneously, thereby emitting radiation. Could it be, Einstein wondered, that atoms also transit from an excited to a lower-energy state when they are hit by radiation with suitable energy?

Indeed, assuming a thermal Boltzmann distribution for the states of the atoms interacting with radiation, and equal rates for absorption on the one hand and spontaneous and stimulated emission – as the newly stipulated process came to be called – on the other hand, as one would expect for a thermal equilibrium between the atoms and radiation, Einstein could reproduce the Planck formula for the spectrum of blackbody radiation. "A splendid light has dawned on me about the absorption and emission of radiation," he wrote in a letter to his friend Michele Besso on August 11, 1916.
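The argument can be sketched in a few lines. Writing $\rho(\nu)$ for the spectral energy density of the radiation, $N_1, N_2$ for the populations of the lower and upper level, and $B_{12}, B_{21}, A$ for the rate coefficients of absorption, stimulated emission, and spontaneous emission (the now-famous "Einstein coefficients"), equilibrium between the two levels demands

```latex
N_1 B_{12}\,\rho(\nu) = N_2 \left[ A + B_{21}\,\rho(\nu) \right],
\qquad
\frac{N_2}{N_1} = e^{-h\nu/kT} \quad \text{(Boltzmann distribution)} .
```

Solving for the energy density gives

```latex
\rho(\nu) = \frac{A/B_{21}}{(B_{12}/B_{21})\, e^{h\nu/kT} - 1} ,
```

which reproduces Planck's formula once one sets $B_{12} = B_{21}$ and $A/B_{21} = 8\pi h\nu^3/c^3$, the latter fixed by demanding the classical Rayleigh-Jeans limit at high temperature. Without the stimulated-emission term $B_{21}\,\rho(\nu)$, one obtains Wien's law instead of Planck's, so the new process is not optional but required for consistency.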

Einstein's "splendid light" of stimulated emission of radiation: An atom in a state with energy E2 is hit by a photon with energy hν = E2E1. This can trigger a transition of the atom to the lower energy level E1, accompanied by the emission of a second photon with energy , in phase with the initial photon. After this so-called stimulated emission, there are two photons instead of one, both in the same state – a nice manifestation of the "bunching" Bose character of photons.

It was recognized in the 1920s that theoretically the process of stimulated emission could result in "negative absorption", that is, amplification, of radiation, but nobody had a good idea how to demonstrate this effect in practice.

New York, 1954

To achieve amplification of radiation via stimulated emission, it is necessary to have more atoms in the high-energy state than in the low-energy state. Otherwise, a photon hitting an atom will more likely just be absorbed than trigger stimulated emission, and there is no gain in radiation. This requirement for amplification is called "population inversion".
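To see why thermal equilibrium can never deliver this, one can evaluate the Boltzmann population ratio for a two-level system. A minimal sketch, using the roughly 24 GHz ammonia inversion transition (a textbook value) as an example:

```python
import math

# Boltzmann population ratio N2/N1 = exp(-h*nu/(k*T)) for a
# two-level system in thermal equilibrium
h = 6.62607015e-34   # Planck constant, J*s
k = 1.380649e-23     # Boltzmann constant, J/K

nu = 23.87e9         # ammonia inversion transition, Hz (textbook value)
T = 300.0            # room temperature, K

ratio = math.exp(-h * nu / (k * T))
print(f"N2/N1 = {ratio:.4f}")  # just below 1: absorption always slightly wins
```

In equilibrium the ratio is always below one, no matter the temperature, so the upper level is never more populated than the lower one; inversion has to be forced by some non-equilibrium trick, such as Townes' beam-sorting scheme described next.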

In 1951, Charles Townes had an idea how to create "population inversion" in an ensemble of ammonia molecules. The ammonia molecule comes with two states which are separated by an energy corresponding to microwave frequencies. A beam of ammonia molecules can be split into two in an inhomogeneous electric field, separating molecules in the higher and the lower energy states, respectively, with an arrangement similar to a Stern-Gerlach apparatus.

In April 1954, Townes and his students Jim Gordon and Herbert Zeiger at Columbia University piped a beam of ammonia molecules in the higher-energy state into a microwave cavity resonating at the frequency of the energy difference between the two states, and obtained "microwave amplification by stimulated emission of radiation" - this was the birth of the maser.

Townes soon started to think about ways to extend the maser principle to infrared or optical frequencies. With graduate student Gordon Gould, he discussed arrangements of mirrors around the medium in which population inversion is created, replacing the microwave cavity. These mirrors make sure that a beam of light goes back and forth through the medium many times, thus being able to "collect" ever more photons every time it crosses the medium.

Gould realized that such an arrangement, for which he coined the term "laser", could create sharply focussed light beams of extreme intensity, which could be used for communication, as a tool, or as a weapon.

As soon as the concept of the "optical maser", as Townes continued to call it, was explained in detail in a paper written together with Arthur Schawlow, many groups embarked on a race to be the first to actually construct such a device.

Malibu, 1960

Theodore Maiman had received his doctorate in Physics from Stanford University in 1955 to take a job at the Hughes Research Laboratories, which moved to Malibu in 1960. At Hughes, Maiman had constructed masers using ruby crystals, and when he learned of the possibility of the laser, he convinced himself that it should be possible to build a laser using ruby as the "lasing" medium.

Ruby is, chemically speaking, a crystal of aluminum oxide doped with chromium ions. The chromium ions have several energy levels which can be excited by irradiation with light, two of which are metastable and can be used as the upper level of a lasing medium. The energy of the transition to the ground state corresponds to red light with a wavelength of 694 nm.
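The photon energy of that transition follows directly from the wavelength via E = hc/λ; a quick sketch (standard constant values, wavelength from the text above):

```python
# Photon energy E = h*c/lambda for the ruby laser line
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
e = 1.602176634e-19     # elementary charge, J per eV

wavelength = 694e-9     # ruby laser line, m
E_joule = h * c / wavelength
E_ev = E_joule / e
print(f"Photon energy: {E_ev:.2f} eV")  # about 1.8 eV, deep red light
```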

Maiman's idea was to take a rod of ruby with parallel faces, to coat these faces with silver to realize the mirrors, and to put the rod inside a helical flash lamp. The flash lamp then excites the chromium atoms and creates population inversion, and the spontaneous emission of one photon can trigger an avalanche of photons by stimulated emission.

On the afternoon of May 16, 1960, Maiman and his assistant Irnee D’Haenens saw for the first time directed beams of intense red light emerging from the ruby - they had realized the first laser.

Theodore Maiman holding the first laser. It consists of a small ruby crystal and a helical flash lamp which serves to excite the chromium ions of the ruby, thus creating the population inversion necessary for laser action. The ends of the ruby rod have been coated with silver to mirror back and forth the light stemming from stimulated emission, thus producing sufficient gain. The whole device is placed in the small white casing. (Source)


Maiman is reported to have said that “A laser is a solution seeking a problem”, Gould's visions notwithstanding. I have no specific idea how fast the laser was put to commercial or industrial use, but it immediately captured the public imagination.

When the movie Goldfinger was released in 1964, James Bond had to face a huge laser, looking similar to a scaled-up version of Maiman's first tiny ruby device, and replacing the buzz saw of Ian Fleming's original 1959 novel. As Auric Goldfinger explains:

I, too, have a new toy, but considerably more practical. You are looking at an industrial laser, which emits an extraordinary light, unknown in nature. It can project a spot on the moon. Or, at closer range, cut through solid metal. I will show you.

At the LaserFest website, you can find a nice description of the mechanism of the ruby laser, and a video with explanations by Theodore Maiman himself. Moreover, there is a long interview with Charles Townes on the history of the maser and the laser.

If you want to know more about the history of the laser, there are two books I can recommend:
  • The history of the laser, by Mario Bertolotti, actually tells much more than just the story of the laser: It starts back at the beginning of the 20th century with the early atom models and the puzzle of blackbody radiation, and traces the path to the laser via spectroscopy, magnetic resonance, and the maser.

  • Beam: the race to make the laser, by Jeff Hecht, focusses on the developments of the late 1950s and 1960, beginning with just two brief chapters on the early history of stimulated emission and the maser. If you get lost in between all the names, there is a list of dramatis personae at the end of the book which I, unfortunately, discovered only after reading the text.

If you have Feynman's lectures at hand, there is a discussion of Einstein's derivation of the blackbody spectrum using stimulated emission and the Einstein coefficients in Section 42-5 of Volume I, and the whole Chapter 9 of Volume III is devoted to explain the principle of the ammonia maser.


Monday, January 11, 2010

A splendid light has dawned on me …

“Es ist mir ein prächtiges Licht über die Absorption und Emission der Strahlung aufgegangen ‒ es wird Dich interessieren. Eine verblüffend einfache Ableitung der Planck’schen Formel, ich möchte sagen die Ableitung. Alles ganz quantisch.”

“A splendid light has dawned on me about the absorption and emission of radiation ‒ it will be of interest to you. A stunningly simple derivation of Planck's formula, I might say the derivation. All completely quantical.”


Albert Einstein in a letter to his friend Michele Besso on August 11, 1916.

The “splendid light” refers to Einstein's insight that stimulated emission (also called induced emission) of light from excited atoms occurs in nature, and that this yields an elementary explanation of Planck's formula for the spectrum of black body radiation.

And, of course, some 46 years later and 50 years ago this May, the “splendid light” of Einstein's idea became a real “splendid light” with the construction of the Laser, based on the principle of stimulated emission of radiation.

Wednesday, October 28, 2009

Science Park "Albert Einstein" Potsdam

Stefan and I were in Potsdam the past few days, where I was visiting the Albert Einstein Institute in Golm. While in the area, we also stopped at the "Science Park" in Potsdam. Potsdam may be more famous for the parks of Sanssouci and other palaces of the Prussian kings, but this park, on a hill not far off the city center, is definitely worth a visit if you are interested in the history of science.

The park has an interesting past: Named "Telegraphenberg" (Telegraph Hill), it originally was the location of a relay station of an optical telegraph system linking Berlin to the Rhine. The park was designed in the second half of the 19th century, when an Astrophysical Observatory and a Geodetic Institute were installed on the hill.


The park on Telegraph Hill, Potsdam.

It was here that in 1880, Albert Michelson performed his first interference experiment to test the direction-dependence of the speed of light. He was a guest scientist at the physics institute of Hermann von Helmholtz in Berlin at the time, and had to move his sensitive experimental setup to quiet Potsdam to escape the noise and vibrations of street traffic in the capital. Of course, Michelson didn't find any signs of the expected ether drift at the time, and thought of his experiment as a failure. Back in the US, he convinced his colleague Morley to collaborate on an improved experimental setup, and the rest is history.


The "Michelson Building" on Telegraph Hill, Potsdam.

The building where Michelson had installed his interferometer in the basement is now called the "Michelson Building", and accommodates the Potsdam Institute for Climate Impact Research.

The most famous monument on Telegraph Hill in Potsdam is the "Einstein Tower," housing a solar telescope. Designed by expressionist architect Erich Mendelsohn and financed in part by Carl Bosch (the same Bosch who built the "Villa Bosch" in Heidelberg I visited last year), it is a cute looking phallus symbol whose scientific purpose was to test the redshift of spectral lines of sunlight in the Sun's gravitational field, one of the predictions of Einstein's theory of General Relativity.


The "Einstein Tower" solar observatory on Telegraph Hill, Potsdam.

This experiment, too, failed, due to the thermal broadening of spectral lines and the fluctuations of the Sun's surface which, by the Doppler shift, mask the gravitational redshift and form a source of systematic error much larger than originally expected. Evidence for the "Gravitational Displacement of Lines in the Solar Spectrum" eventually came from other observatories, and unambiguous proof of the gravitational redshift finally was provided by the experiments of Pound and Rebka in 1959, using the Mössbauer effect to detect tiny shifts in the gamma ray frequencies of iron nuclei.
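The size of the effect the tower was hunting makes the difficulty plain: to leading order, the fractional shift of a spectral line from the solar surface is z ≈ GM/(Rc²). A quick estimate with standard solar values:

```python
# Gravitational redshift of light from the solar surface,
# to leading order: z ~ G*M / (R*c^2)
G = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30      # solar mass, kg
R_sun = 6.957e8       # solar radius, m
c = 2.99792458e8      # speed of light, m/s

z = G * M_sun / (R_sun * c**2)
print(f"z = {z:.2e}")  # about 2e-6
```

A shift of roughly two parts in a million is easily swamped by Doppler broadening from the turbulent motion of the photosphere, which is why the solar measurement proved so hard.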

Nevertheless, the Einstein Tower is the only observatory on Telegraph Hill still in use for active research: The solar telescope and spectrographs now serve to study magnetic fields in the Sun's photosphere.


The building is quite small. A person in the scene, in this photo Stefan, helps to set a scale.

Directly in front of the Einstein-Tower, I found, to my surprise, a Boltzmann brain popping out of the ground:



Wikipedia informed us later that the bronze brain with the imprint "3 SEC" was put in place by the artist Volker März in 2002. It is titled "The 3 SEC Bronze Brain – Admonition to the Now – Monument to the continuous present” and symbolizes the scientific thesis that “the experience of continuity is based on an illusion" and that "continuity arises through the networking of contents, which in each case are represented in a time window of three seconds."

I wonder what Einstein would have thought of that.
