Showing posts with label Sociology of Science. Show all posts
Sunday, December 22, 2024
Science Does Not Work By Consensus – Don’t Poll Us
A group of philosophers and sociologists have come up with the idea of regularly polling scientists in order to establish and maintain a “scientific consensus.” While this might sound plausible, it’s a terrible idea. Here’s why.
Saturday, November 16, 2024
Science is in trouble and it worries me.
Innovation is slowing, research productivity is declining, and scientific work is becoming less disruptive. In this video I summarize what we know about the problem and what possible causes have been proposed. I also explain why this matters so much to me.
Wednesday, September 28, 2022
I’ve said it all before but here we go again
[I didn't write the title and byline and indeed didn't see it until it appeared online.]
What’s going on? I spent years trying to understand why particle physics isn’t making progress, analyzing the problem, and putting forward a solution. It’s not that I hate particle physics; quite the contrary, I think it’s too important to let it die. But particle physicists don’t like to hear that their field urgently needs to change direction, so they attack me as the bearer of bad news.
But trying to get rid of me isn’t going to solve their problem. For one thing, it's not working. More importantly, everyone can see that nothing useful is coming out of particle physics, it’s just a sink of money. Lots of money. And soon enough governments are going to realize that particle physics is a good place to save money that they need for more urgent things. It would be in particle physicists’ own interest to listen to what I have to say.
And I have said this all many times before but I hate long twitter threads, so let me just summarize it in one blogpost:
a) Predictions for fundamentally new phenomena made from new theories in particle physics have all been wrong ever since the completion of the standard model in the 1970s. You have witnessed this ongoing failure in the popular science media. All their ideas were either falsified or turned into eternally amendable and, for all practical purposes, unfalsifiable models, like supersymmetry.
b) Saying that “it’s difficult” explains why they haven’t managed to find new phenomena, but it doesn’t explain why their predictions are constantly wrong.
c) Scientists should learn from failure. If particle physicists’ method of theory-development isn’t working, they should analyze why, and change their methods. But this isn’t happening.
My answer to why their current method isn’t working is that their new theories (often in the form of new particles) do not solve any problems in the existing theories. They just add unnecessary clutter. When theoretical predictions were correct in the past, they solved problems of consistency (examples: the Higgs, antiparticles, neutrinos, general relativity).
Two common misunderstandings: Note that I do NOT say theorists in the past used this argument to make their predictions. I am merely noting in hindsight that’s what they did. It’s what the successful predictions have in common, and we should learn from history. Neither do I say that theoretical predictions were the ONLY way that progress happened. Of course not. Progress can also happen by experimental discoveries. But the more expensive new experiments become, the more careful we have to be about deciding which experiments to make, so we need solid theoretical predictions.
In many cases, particle physicists have made up pseudo-problems that they claim their new particles solve. Pseudo-problems are metaphysical misgivings, often a perceived lack of beauty. A typical example is the alleged problem with the Higgs mass being too small (that was behind the idea that the LHC should see supersymmetry). It’s a pseudo-problem because there is obviously nothing wrong with the Higgs-mass being what it is, seeing that they can very well make predictions with the standard model and its Higgs as it is.
(I sometimes see particle physicists claiming that supersymmetry “explains” the Higgs mass. This is plainly wrong. You cannot calculate the Higgs mass from supersymmetric models; it remains a free parameter.)
Other pseudo-problems are the baryon asymmetry or the smallness of the cosmological constant etc. I have a list that distinguishes problems from pseudo-problems here.
So my recommendation is that theory development should focus on resolving inconsistencies and stop wasting time on pseudo-problems. Real problems are, e.g., the missing quantization of gravity, dark matter, the measurement problem in quantum mechanics, as well as several rather technical issues with quantum mechanics (see the above-mentioned list).
When I say “dark matter” I refer to the inconsistency between observation and theory. Note that to solve this problem one does NOT need details of the particles. That’s another point which particle physicists like to misunderstand. You fit the observations with an energy density and that’s pretty much it. You don’t need to fumble together entire “hidden sectors” with “portals” and other nonsense. Come on, people, wake up! This isn’t proper science!
There are several reasons why particle physicists can’t and don’t want to make this change. The most important one is that it would dramatically impede their ability to produce papers. And papers are what keeps grant cycles churning. This is a systemic problem. The next problem is that they can’t believe that what I say can possibly be correct, because they have grown up in a community that has taught them their current methods are good. That’s groupthink in action.
There are solutions to both of these problems, but they require changes from within the community.
Particle physicists, rather unsurprisingly, don’t like the idea that they have to change. Their responses are boringly predictable.
They almost all attack me rather than my argument. Typically they will make claims like I’m just “trying to sell books” or that I “want attention” or that I “like to be contrarian” or that, in one way or another, I don’t know what I am talking about. I have yet to find a particle physicist who actually engaged with the argument I made. Indeed, most of them never bother finding out what I said in the first place.
An accusation that I recently heard for the first time is that I allegedly refuse to argue with them. A particle physicist claimed on twitter that I had been repeatedly invited to give a seminar at CERN but declined, something she had been told by someone else. This is untrue. To the best of my knowledge, I have never declined an opportunity to talk to particle physicists, even though I have been yelled at repeatedly. I was never invited to give a seminar at CERN.
The particle physicist who made this claim actually went and asked the main seminar organizers at CERN, and they confirmed that I was never invited. She apologized. So it’s all good, except that it documents they have been circulating lies about me in an attempt to question my expertise. (Another symptom of social reinforcement.)
There have also been several instances in the past where particle physicists called senior people at my workplace to complain about me, probably in the hope of intimidating me or getting me fired. It speaks much for my institution that the people in charge exerted no pressure on me. (In other words, don't bother calling them, it’s not going to help.)
The only “arguments” I hear from particle physicists are misunderstandings that I have cleared up thousands of times in the past. Like the dumb claim that inventing particles worked for Dirac. Or that I’m “anti-science” because I think building a bigger collider isn’t a good investment right now.
You would think that scientists should be interested in finding out how their field can make progress, but particle physicists just desperately try to make me go away, as if I was the problem.
But hey, here’s a pro-tip: If you want to sell books, I recommend you don’t write them about theoretical high energy physics. It’s not a topic that has a huge market. Also, I have way more attention than I need or want. I don’t want attention, I want to see progress. And I don’t like being contrarian, I am just not afraid of being contrarian when it’s necessary.
As a consequence of these recent insults targeted at me, I wrote an opinion piece for the Guardian that appeared on Monday. Please note the causal order: I wrote the piece because particle physicists picked on me in a renewed attempt to justify continuing with their failed methods, not the other way round.
It's not that I think they will finally see the light. But yeah I’m having fun for sure.
Saturday, February 12, 2022
Epic Fights in Science
[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]
Scientists are rational by profession. They objectively evaluate the evidence and carefully separate fact from opinion. Except of course they don’t, really. In this episode, we will talk about some epic fights among scientists that show very much that scientists, after all, are only human. Who dissed whom and why and what can we learn from that? That’s what we’ll talk about today.
1. Wilson vs Dawkins
Edward Wilson passed away just a few weeks ago at age 92. He is widely regarded as one of the most brilliant biologists in history. But some of his ideas about evolution got him into trouble with another big shot of biology: Richard Dawkins.
In 2012 Dawkins reviewed Wilson’s book “The Social Conquest of Earth”. He left no doubt about his misgivings. In his review Dawkins wrote:
“unfortunately one is obliged to wade through many pages of erroneous and downright perverse misunderstandings of evolutionary theory. In particular, Wilson now rejects “kin selection” [...] and replaces it with a revival of “group selection”—the poorly defined and incoherent view that evolution is driven by the differential survival of whole groups of organisms.”

Wilson’s idea of group selection is based on a paper that he wrote together with two mathematicians in 2010. When their paper was published in Nature magazine, it attracted criticism from more than 140 evolutionary biologists, among them some big names in the field.
In his review, Dawkins also said that Wilson’s paper probably would never have been published if Wilson hadn’t been so famous. That Wilson then ignored the criticism and published his book pretending nobody disagreed with him was to Dawkins “an act of wanton arrogance”.
Dawkins finished his review:
“To borrow from Dorothy Parker, this is not a book to be tossed lightly aside. It should be thrown with great force. And sincere regret.”

Wilson replied that his theory was mathematically more sound than that of kin selection, and that he also had a list of names who supported his idea but, he said,
“if science depended on rhetoric and polls, we would still be burning objects with phlogiston and navigating with geocentric maps.”

In a 2014 BBC interview, Wilson said:
“There is no dispute between me and Richard Dawkins and never has been. Because he is a journalist, and journalists are people who report what the scientists have found. And the arguments I’ve had, have actually been with scientists doing research.”

Right after Wilson passed away, Dawkins tweeted:
“Sad news of death of Ed Wilson. Great entomologist, ecologist, greatest myrmecologist, invented sociobiology, pioneer of island biogeography, genial humanist & biophiliac, Crafoord & Pulitzer Prizes, great Darwinian (single exception, blind spot over kin selection). R.I.P.”
2. Leibniz vs Newton
Newton and Leibniz were both instrumental in the development of differential calculus, but they approached the topic entirely differently. Newton came at it from a physical perspective and thought about the change of variables with time. Leibniz had a more abstract, analytical approach. He looked at general variables x and y that could take on infinitely close values. Leibniz introduced dx and dy as differences between successive values of these sequences.
The two men also had a completely different attitude to science communication. Leibniz put a lot of thought into the symbols he used and how he explained himself. Newton, on the other hand, wrote mostly for himself and often used whatever notation he liked on that day. Because of this, Leibniz’s notation was much easier to generalize to multiple variables and much of the notation we use in calculus today goes back to Leibniz. Though the notation xdot for speed and x double dot for acceleration that we use in physics comes from Newton.
Okay, so they both developed differential calculus. But who did it first? Historians today say it’s clear that Newton had the idea first, during the plague years 1665 and 1666, but he didn’t write it up until five years later, and it wasn’t published for more than 20 years.
Meanwhile, Leibniz invented calculus in the mid 1670s. So, by the time word got out, it looked as if they’d both had the idea at the same time.
Newton and Leibniz then got into a bitter dispute over who was first. Leibniz wrote to the British Royal Society to ask for a committee to investigate the matter. But at that time the society’s president was… Isaac Newton. And Newton simply drafted the report himself. He wrote “we reckon Mr Newton the first inventor” and then presented it to the members of the committee to sign, which they did.
The document was published in 1712 by the Royal Society with the title Commercium Epistolicum Collinii et aliorum, De Analysi promota. In the modern translation the title would be “Newton TOTALLY DESTROYS Leibniz”.
On top of that, a comment on the report was published in the Philosophical Transactions of the Royal Society of London. The anonymous author, who was also Newton, explained in this comment:
“the Method of Fluxions, as used by Mr. Newton, has all the Advantages of the Differential, and some others. It is more elegant ... Newton has recourse to converging Series, and thereby his Method becomes incomparably more universal than that of Mr. Leibniz.”

Leibniz responded with his own anonymous publication, a four-page paper which in the modern translation would be titled “Leibniz OWNS Newton”. That “anonymous” text gave all the credit to Leibniz and directly accused Newton of stealing calculus. Leibniz even wrote his own History and Origin of Differential Calculus in 1714. He went so far as to change the dates on some of his manuscripts to pretend he knew about calculus before he really did.
And Newton? Well, even after Leibniz died, Newton refused to mention him in the third edition of his Principia.
You can read the full story in Rupert Hall’s book “Philosophers at war.”
3. Edison vs Tesla
Electric lights came into use around the end of the 19th century. At first, they all worked with Thomas Edison’s direct current system, DC for short. But his former employee Nikola Tesla had developed a competing system, the alternating current system, or AC for short. Tesla had actually offered it to Edison when he was working for him, but Edison didn’t want it.
Tesla then went to work for the engineer George Westinghouse. Together they created an AC system that was threatening Edison’s dominance on the market. The “war of the currents” began.
An engineer named Harold Brown, later found to be paid by Edison’s company, started writing letters to newspapers trying to discredit AC, saying that it was really dangerous and that the way to go was DC.
This didn’t have the desired effect, and Edison soon took more drastic steps. I have to warn you that the following is a really ugly story and in case you find animal maltreatment triggering, I think you should skip over the next minute.
Edison organized a series of demonstrations in which he killed dogs by electrocuting them with AC, arguing that a similar voltage in DC was not so deadly. Edison didn’t stop there. He went on to electrocute a horse, and then an adult elephant which he fried with a stunning 6000 volts. There’s an old still movie of this, erm, demonstration on YouTube. If you really want to see it, I’ll leave a link in the info below.
Still Edison wasn’t done. He paid Brown to build an electric chair with AC generators that they bought from Westinghouse and Tesla, and then had Brown lobby for using it to electrocute people so the general public would associate AC with death. And that partly worked. But in the end AC won mostly because it’s more efficient when sent over long distances.
4. Cope vs Marsh
Another scientific fight from the 19th Century happened in paleontology, and this one I swear only involves animals that were already dead anyway.
The American paleontologists Edward Cope and Othniel Marsh met in 1863 as students in Germany. They became good friends and later named some discoveries after each other.
Cope, for example, named an amphibian fossil Ptyonius marshii after Marsh and, in return, Marsh named a gigantic serpent Mosasaurus copeanus.
However, they were both very competitive and soon they were trying to outdo each other. Cope later claimed it all started when he showed Marsh a location where he’d found fossils and Marsh, behind Cope’s back, bribed the quarry operators to send anything they’d find directly to Marsh.
Marsh’s version of events is that things went downhill after he pointed out that Cope had published a paper in which he had reconstructed a dinosaur fossil but got it totally wrong. Cope had mistakenly reversed the vertebrae and then put the skull at the end of the tail! Marsh claimed that Cope was embarrassed and wanted revenge.
Whatever the reason, their friendship was soon forgotten. Marsh hired spies to track Cope and on some occasions had people destroy fossils before Cope could get his hands on them. Cope tried to boost his productivity by publishing the discovery of every new bone as that of a new species, a tactic which the American paleontologist Robert Bakker described as “taxonomic carpet-bombing.” Cope’s colleagues disapproved, but it was remarkably efficient. Cope would publish about 1400 academic papers in total. Marsh merely made it to 300.
But Marsh eventually became chief paleontologist of the United States Geological Survey, USGS, and used its funds to promote his own research while cutting funds for Cope’s expeditions. And when Cope still managed to do some expeditions, Marsh tried to take his fossils, claiming that since the USGS funded them, they belonged to the government.
This didn’t work out as planned. Cope could prove that he had financed most of his expeditions with his own money. He then contacted a journalist at the New York Herald, who published an article claiming Marsh had misused USGS funds. An investigation found that Cope was right. Marsh was forced out of the USGS without his fossils, because they had been obtained with USGS funds.
In a last attempt to outdo Marsh, Cope stated in his will that he’d donated his skull to science. He wanted his brain to be measured and compared to that of Marsh! But Marsh didn’t accept the challenge, so the world will never know which of the two had the bigger brain.
Together the two men discovered 136 species of dinosaurs (Cope 56 and Marsh 80) but they died financially ruined with their scientific reputation destroyed.
5. Hoyle vs The World
British astronomer Fred Hoyle is known as the man who discovered how nuclear reactions work inside stars. In 1983, the Nobel Prize in physics was given... to his collaborator Willy Fowler, not to Hoyle. Everyone, including Fowler, was stunned. How could that happen?
Well, the Swedish Royal Academy isn’t exactly forthcoming with information, but over the years Hoyle’s colleagues have offered the following explanation. Let’s go back a few years to 1974.
In that year, the Nobel Prize for physics went to Anthony Hewish for his role in the discovery of pulsars. Upon hearing the news, Hoyle told a reporter: “Jocelyn Bell was the actual discoverer, not Hewish, who was her supervisor, so she should have been included.” Bell’s role in the discovery of pulsars is widely recognized today, but in 1974, Hoyle putting in a word for Bell made global headlines.
Hewish was understandably upset, and Hoyle clarified in a letter to The Times that his issue wasn’t with Hewish, but with the Nobel committee: “I would add that my criticism of the Nobel award was directed against the awards committee itself, not against Professor Hewish. It seems clear that the committee did not bother itself to understand what happened in this case.”
Hoyle’s biographer Simon Mitton claimed this is why Hoyle didn’t get the Nobel Prize: The Nobel Prize committee didn’t like being criticized. However, the British scientist Sir Harry Kroto, who won the Nobel Prize for chemistry in 1996, doesn’t think this is what happened.
Kroto points out that while Hoyle may have made a groundbreaking physics discovery, he was also a vocal defender of some outright pseudoscience, for example, he believed that the flu was caused by microbes that rain down on us from outer space.
Hoyle was also, well, an unfriendly and difficult man who had offended most of his colleagues at some point. According to Sir Harry, the actual reason that Hoyle didn’t get a Nobel Prize was that he’d use it to promote pseudoscience. He said
Another scientific fight from the 19th Century happened in paleontology, and this one I swear only involves animals that were already dead anyway.
The American paleontologists, Edward Cope and Othniel Marsh met in 1863 as students in Germany. They became good friends and later named some discoveries after each other.
Cope for example named an amphibian fossl Ptyonius marshii, after Marsh and, in return Marsh named a gigantic serpent Mosasaurus copeanus.
However, they were both very competitive and soon they were trying to outdo each other. Cope later claimed it all started when he showed Marsh a location where he’d found fossils and Marsh, behind Cope’s back, bribed the quarry operators to send anything they’d find directly to Marsh.
Marsh’s version of events is that things went downhills after he pointed out that Cope had published a paper in which he had reconstructed a dinosaur fossil but got it totally wrong. Cope had mistakenly reversed the vertebrae and then put the skull at the end of the tail! Marsh claimed that Cope was embarrassed and wanted revenge.
Whatever the reason, their friendship was soon forgotten. Marsh hired spies to track Cope and on some occasions had people destroy fossils before Cope could get his hands on them. Cope tried to boost his productivity by publishing the discovery of every new bone as that of a new species, a tactic which the American paleontologist Robert Bakker described as “taxonomic carpet-bombing.” Cope’s colleagues disapproved, but it was remarkably efficient. Cope would publish about 1400 academic papers in total. Marsh merely made it to 300.
But Marsh eventually became chief paleontologists of the United States Geological Survey, USGS, and used its funds to promote his own research while cutting funds for Cope’s expeditions. And when Cope still managed to do some expeditions, Marsh tried to take his fossils, claiming that since the USGS funded them, they belonged to the government.
This didn’t work out as planned. Cope could prove that he had financed most of his expeditions with his own money. He then contacted a journalist at the New York Herald who published an article claiming Marsh had misused USGS funds. An investigation found that Cope was right. Marsh was expelled from the Society without his fossils, because they had been obtained with USGS funds.
In a last attempt to outdo Marsh, Cope stated in his will that he’d donated his skull to science. He wanted his brain to be measured and compared to that of Marsh! But Marsh didn’t accept the challenge, so the world will never know which of the two had the bigger brain.
Together the two men discovered 136 species of dinosaurs (Cope 56 and Marsh 80) but they died financially ruined with their scientific reputation destroyed.
5. Hoyle vs The World
British astronomer Fred Hoyle is known as the man who discovered how nuclear reactions work inside stars. In 1983, the Nobel Prize in physics was given... to his collaborator Willy Fowler, not to Hoyle. Everyone, including Fowler, was stunned. How could that happen?
Well, the Swedish Royal Academy isn’t exactly forthcoming with information, but over the years Hoyle’s colleagues have offered the following explanation. Let’s go back a few years to 1974.
In that year, the Nobel Prize for physics went to Anthony Hewish for his role in the discovery of pulsars. Upon hearing the news Hoyle told a reporter: “Jocelyn Bell was the actual discoverer, not Hewish, who was her supervisor, so she should have been included.” Bell’s role in the discovery of pulsars is widely recognized today, but in 1974, Hoyle’s putting in a word for Bell made global headlines.
Hewish was understandably upset, and Hoyle clarified in a letter to The Times that his issue wasn’t with Hewish, but with the Nobel committee: “I would add that my criticism of the Nobel award was directed against the awards committee itself, not against Professor Hewish. It seems clear that the committee did not bother itself to understand what happened in this case.”
Hoyle’s biographer Simon Mitton claimed this is why Hoyle didn’t get the Nobel Prize: The Nobel Prize committee didn’t like being criticized. However, the British scientist Sir Harry Kroto, who won the Nobel Prize for chemistry in 1996, doesn’t think this is what happened.
Kroto points out that while Hoyle may have made a groundbreaking physics discovery, he was also a vocal defender of some outright pseudoscience, for example, he believed that the flu was caused by microbes that rain down on us from outer space.
Hoyle was also, well, an unfriendly and difficult man who had offended most of his colleagues at some point. According to Sir Harry, the actual reason Hoyle didn’t get a Nobel Prize was that he would have used it to promote pseudoscience. He said:
“Hoyle was so arrogant and dismissive of others that he would use the prestige of the Nobel prize to foist his other truly ridiculous ideas on the lay public. The whole scientific community felt that.”

So what do we learn from this? One thing we can take away is that if you want to win a Nobel Prize, don’t spread pseudoscience. But the bigger lesson, I think, is that while some competition is a good thing, it’s best enjoyed in small doses.
Saturday, May 01, 2021
Google talk online now
The major purpose of the talk was to introduce our SciMeter project which I've been working on for a few years now with Tom Price and Tobias Mistele. But I also talk a bit about my PhD topic and particle physics and how my book came about, so maybe it's interesting for some of you.
Sunday, August 23, 2020
Your sudden enthusiasm for virtual meetings is beginning to worry me
Screenshot from a Zoom meeting. Image Source: Reshape. |
My husband works for a company that has offices in several countries, including India, the USA, and Great Britain. He, too, is used to teleconferences with participants from several time zones.
This makes me think my family was probably better prepared for the covid lockdown than many others. For the same reason though, we also had more time to contemplate the pros and cons of remote collaboration.
The pros are clear: Less time wasted in transit. Less carbon dioxide emitted. Fewer germs circulated.
And with more people in the same situation, the pros have proliferated. I have, for example, been thrilled to see the spike in online seminars. Suddenly, even I am able to find seminars that are actually interesting for my research! Better still, if it turns out they’re not as interesting as anticipated, no one notices if I virtually sneak out. Also, asking for a virtual meeting has become routine. Everyone is now familiar with screen sharing and prepared to tolerate the hassle of lagging video or choppy audio.
These have been positive developments, and many of them deserve to be carried forward. Traveling for seminars or colloquia has long been absurdly out-of-date. We all know that many speakers give the same seminar dozens of times to largely uninterested audiences, when those who actually wanted to hear it could just as well have called into the same online meeting, or watched a recording.
Or consider this. I have frequently gotten invitations from overseas institutions that were prepared to fly me in and out for giving a one-hour talk. This isn’t only ecologically insane, it’s also a bad use of researchers’ time. A lot of my colleagues work while on planes and in airports, and of course I do, too, but let’s be honest: It’s not quality time. Traveling is disruptive, both mentally and metabolically. And that’s leaving aside that it screws up the work-life balance.
So, yes, scientists could certainly slim down those seminar series and cut back traveling quite a bit. But as researchers are becoming more familiar with virtual meetings and teleconferences, I fear some of them are getting carried away.
I’ve seen scientists on social media seriously discussing that seminar series should remain online-only even post-pandemic. Virtual conferences are supposedly better than the real thing. And if you listen to them, there’s nothing, it seems, you can’t get done on Zoom.
Let us therefore talk about the cons.
Virtual collaborations work well as long as you know the people in real life already. Even with both audio and video, a lot of information that humans draw on to efficiently communicate is missing. Through a screen, you neither get body language nor the context from chatter in the hallway or just from physically being in the same room. These cues are important for deliberation and argumentation to work properly.
I know this sounds somewhat Neanderthal, but the fact is that evolution didn’t prepare us to communicate through webcams.
This has long been known to sociologists, who therefore recommend that teams which collaborate remotely meet in person at least a few times a year, a recommendation that my husband’s employer strictly follows. The occasional in-person meeting, so the idea goes, provides team members with the information required to understand where the others are coming from. It is especially important for introducing new members to a group.
A good starting point to get a sense of the troubles that remote collaboration can bring is the 2005 report by the (US-American) National Defense Research Institute on “Challenges in Virtual Collaboration”. Summarizing the published literature, they find that during video- and audio-conferences “local coalitions can form in which participants tend to agree more with those in the same room than with those on the other end of the line” and that computer-mediated communication has “shown to increase polarization, deindividuation, and disinhibition. That is, individuals may become more extreme in their thinking, less sensitive to interpersonal aspects of their messages, and more honest and candid.”
Online-only scientific collaboration and conferences would therefore most likely work well for some time, but eventually communication would suffer. Especially those who currently praise the zoomiverse for its supposed inclusivity, as in this recent piece in SciAm, simply have not thought it through.
You see, regardless of how much effort we put into online conferencing and meeting, there will still be people who know each other in real life. These will be those who just happen to work or live near each other, or who have the funds to travel. Unless you actually want to forbid everyone to meet in real life, this will create a two-class community. Those who can meet. And those who can’t.
At present, most funding agencies acknowledge the need to occasionally see each other in person to collaborate effectively. If that were no longer the case, then it would be especially the already disadvantaged people who would suffer, because they would become remote-only participants. The Ivy League, I am sure, would find a way to continue having drinks together one way or the other.
None of this is to say that I am against virtual conferences or remote collaboration. But international collaboration has been a boon to science. And abstract ideas, like the ones we deal with in the foundations of physics, are hard to get across; having to pipe them through glass fiber cables doesn’t help. As we discuss how to reduce traveling, let us not forget that communication is absolutely essential to science.
Saturday, December 14, 2019
How Scientists Can Avoid Cognitive Bias
Today I want to talk about a topic that is much, much more important than anything I have previously talked about. And that’s how cognitive biases prevent science from working properly.
Cognitive biases have received some attention in recent years, thanks to books like “Thinking Fast and Slow,” “You Are Not So Smart,” or “Blind Spot.” Unfortunately, this knowledge has not been put into action in scientific research. Scientists do correct for biases in statistical analysis of data and they do correct for biases in their measurement devices, but they still do not correct for biases in the most important apparatus that they use: Their own brain.
Before I tell you what problems this creates, a brief reminder of what a cognitive bias is: a cognitive bias is a thinking shortcut which the human brain uses to make faster decisions.
Cognitive biases work much like optical illusions. Take the well-known checker-shadow illusion as an example. If your brain works normally, then the square labelled A looks much darker than the square labelled B.
But if you compare the actual color of the pixels, you see that these squares have exactly the same color.
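If you want to verify a claim like this yourself, you can sample the pixel values directly. Here is a minimal, self-contained sketch in Python: instead of the actual illusion image, it builds a toy “image” as a grid of RGB tuples, with two patches of literally identical gray embedded in different surroundings. (With a real image file you could run the same check using, for example, Pillow’s `Image.getpixel`; the coordinates and colors below are made up for illustration.)

```python
# Toy illustration of the pixel check (not the actual illusion image):
# two patches with literally identical RGB values, one in light
# surroundings and one in dark surroundings.

WIDTH, HEIGHT = 12, 6
LIGHT, DARK, GRAY = (230, 230, 230), (40, 40, 40), (120, 120, 120)

# Light left half ("sunlit") and dark right half ("in shadow").
image = [[LIGHT if x < WIDTH // 2 else DARK for x in range(WIDTH)]
         for y in range(HEIGHT)]

def fill_patch(img, x0, y0, size, color):
    """Overwrite a size x size patch with a single color."""
    for y in range(y0, y0 + size):
        for x in range(x0, x0 + size):
            img[y][x] = color

fill_patch(image, 1, 1, 2, GRAY)   # "square A", light surroundings
fill_patch(image, 8, 1, 2, GRAY)   # "square B", dark surroundings

# Sample one pixel from each patch: the stored values are identical,
# even though the surrounding context differs.
a, b = image[2][2], image[2][9]
print(a == b)  # True
```

The point of the check is that whatever your eyes report, the stored numbers settle the question: the two squares are the same color as pixels, and only your brain’s context correction makes them look different.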
The reason that we intuitively misjudge the color of these squares is that the image suggests it is really showing a three-dimensional scene where part of the floor is covered by a shadow. Your brain factors in the shadow and calculates back to the original color, correctly telling you that the real-world surface of square B must be lighter than that of square A.
So, if someone asked you to judge the color in a natural scene, your answer would be correct. But if your task was to evaluate the color of pixels on the screen, you would give a wrong answer – unless you know of your bias and therefore do not rely on your intuition.
Cognitive biases work the same way and can be prevented the same way: by not relying on intuition. Cognitive biases are corrections that your brain applies to input to make your life easier. We all have them, and in every-day life, they are usually beneficial.
Perhaps the best-known cognitive bias is attentional bias: the more often you hear about something, the more important you think it is. This normally makes a lot of sense. Say, if many people you meet are talking about the flu, chances are the flu’s making the rounds and you are well-advised to pay attention to what they’re saying and get a flu shot.
But attentional bias can draw your attention to false or irrelevant information, for example if the prevalence of a message is artificially amplified by social media, causing you to misjudge its relevance for your own life. A case where this frequently happens is terrorism. It receives a lot of media coverage and has people hugely worried, but if you look at the numbers, terrorism is very unlikely to directly affect the lives of most of us.
And this attentional bias also affects scientific judgement. If a research topic receives a lot of media coverage, or scientists hear a lot about it from their colleagues, those researchers who do not correct for attentional bias are likely to overrate the scientific relevance of the topic.
There are many other biases that affect scientific research. Take for example loss aversion, which underlies the sunk-cost fallacy, more commonly known as “throwing good money after bad”: if we have invested time or money into something, we are reluctant to let go of it, and we continue to invest in it even when it no longer makes sense, because getting out would mean admitting to ourselves that we made a mistake. Loss aversion is one of the reasons scientists continue to work on research agendas that have long stopped being promising.
But the most problematic cognitive bias in science is social reinforcement, also known as group-think. This is what happens in almost-closed, like-minded communities where people reassure each other that they are doing the right thing. They will develop a common narrative that is overly optimistic about their own research, and they will dismiss opinions from people outside their own community. Group-think makes it basically impossible for researchers to identify their own mistakes and therefore stands in the way of the self-correction that is so essential for science.
A bias closely linked to social reinforcement is the shared information bias. This bias has the consequence that we are more likely to pay attention to information shared by many people we know than to information held by only a few people. You can see right away why this is problematic for science: how many people know of a certain fact tells you nothing about whether that fact is correct. Whether some information is widely shared should not be a factor in evaluating its correctness.
Now, there are lots of studies showing that we all have these cognitive biases, and also that intelligence does not make it less likely to have them. It should be obvious, then, that we should organize scientific research so that scientists can avoid, or at least alleviate, their biases. Unfortunately, the way research is currently organized has exactly the opposite effect: it makes cognitive biases worse.
For example, it is presently very difficult for a scientist to change their research topic, because getting a research grant requires that you document expertise. Likewise, no one will hire you to work on a topic you do not already have experience with.
Superficially this seems like a good strategy for investing money into science, because you reward people for bringing expertise. But if you think about the long-term consequences, it is a bad investment strategy. Now, not only do researchers face a psychological hurdle to leaving behind a topic they have invested time in, they would also cause themselves financial trouble by doing so. As a consequence, researchers are basically forced to continue claiming that their research direction is promising and to continue working on topics that lead nowhere.
Another problem with the current organization of research is that it rewards scientists for exaggerating how exciting their research is and for working on popular topics, which makes social reinforcement worse and adds to the shared information bias.
I know this all sounds very negative, but there is good news too: Once you are aware that these cognitive biases exist and you know the problems that they can cause, it is easy to think of ways to work against them.
For example, researchers should be encouraged to change topics rather than being basically forced to continue what they’re already doing. Also, researchers should always list the shortcomings of their research topics, in lectures and papers, so that the shortcomings stay in the collective consciousness. Similarly, conferences should always have speakers from competing programs, and scientists should be encouraged to offer criticism of their community and not be shunned for it. These are all little improvements that every scientist can make individually, and once you start thinking about it, it’s not hard to come up with further ideas.
And always keep in mind: Cognitive biases, like optical illusions, are a sign of a normally functioning brain. We all have them; it’s nothing to be ashamed of, but it is something that affects our objective evaluation of reality.
The reason this is so, so important to me, is that science drives innovation and if science does not work properly, progress in our societies will slow down. But cognitive bias in science is a problem we can solve, and that we should solve. Now you know how.
Wednesday, October 30, 2019
The crisis in physics is not only about physics
downward spiral |
The major cause of this stagnation is that physics has changed, but physicists have not changed their methods. As physics has progressed, the foundations have become increasingly harder to probe by experiment, and technological advances have not kept the size and expense of experiments manageable. This is why in physics today we have collaborations of thousands of people operating machines that cost billions of dollars.
With fewer experiments, serendipitous discoveries become increasingly unlikely. And lacking those discoveries, the technological progress that would be needed to keep experiments economically viable never materializes. It’s a vicious cycle: Costly experiments result in lack of progress. Lack of progress increases the costs of further experiments. This cycle must eventually lead into a dead end, when experiments simply become too expensive to fund. A $40 billion particle collider is such a dead end.
The only way to avoid being sucked into this vicious cycle is to choose carefully which hypotheses to put to the test. But physicists still operate by the “just look” idea as if this were the 19th century. They do not think about which hypotheses are promising because their education has not taught them to do so. Such self-reflection would require knowledge of the philosophy and sociology of science, and those are subjects physicists merely make dismissive jokes about. They believe they are too intelligent to have to think about what they are doing.
The consequence has been that experiments in the foundations of physics since the 1970s have only confirmed the already existing theories. None has found evidence of anything beyond what we already know.
But theoretical physicists did not learn the lesson and still ignore the philosophy and sociology of science. I encounter this dismissive behavior personally pretty much every time I try to explain to a cosmologist or particle physicist that we need smarter ways to share information and make decisions in large, like-minded communities. If they react at all, they are insulted when I point out that social reinforcement – aka group-think – befalls us all unless we actively take measures to prevent it.
Instead of examining the way that they propose hypotheses and revising their methods, theoretical physicists have developed a habit of putting forward entirely baseless speculations. Over and over again I have heard them justifying their mindless production of mathematical fiction as “healthy speculation” – entirely ignoring that this type of speculation has demonstrably not worked for decades and continues to not work. There is nothing healthy about this. It’s sick science. And, embarrassingly enough, that’s plain to see for everyone who does not work in the field.
This behavior is based on the hopelessly naïve, not to mention ill-informed, belief that science always progresses somehow, and that sooner or later certainly someone will stumble over something interesting. But even if that happened – even if someone found a piece of the puzzle – at this point we wouldn’t notice, because today any drop of genuine theoretical progress would drown in an ocean of “healthy speculation”.
And so, what we have here in the foundation of physics is a plain failure of the scientific method. All these wrong predictions should have taught physicists that just because they can write down equations for something does not mean this math is a scientifically promising hypothesis. String theory, supersymmetry, multiverses. There’s math for it, alright. Pretty math, even. But that doesn’t mean this math describes reality.
Physicists need new methods. Better methods. Methods that are appropriate to the present century.
And please spare me the complaints that I supposedly do not have anything better to suggest, because that is a false accusation. I have said many times that looking at the history of physics teaches us that resolving inconsistencies has been a reliable path to breakthroughs, so that’s what we should focus on. I may be on the wrong track with this, of course. But for all I can tell at this moment in history I am the only physicist who has at least come up with an idea for what to do.
Why don’t physicists take a hard look at their history and learn from their failure? Because the existing scientific system does not encourage learning. Physicists today can happily make a career by writing papers about things no one has ever observed, and never will observe. This continues because there is nothing and no one to stop it.
You may want to put this down as a minor worry because – $40 billion collider aside – who really cares about the foundations of physics? Maybe all these string theorists have been wasting tax money for decades, but in the grand scheme of things it’s not all that much money. I grant you that much. Theorists are not expensive.
But even if you don’t care what’s up with strings and multiverses, you should worry about what is happening here. The foundations of physics are the canary in the coal mine. It’s an old discipline and the first to run into this problem. But the same problem will sooner or later surface in other disciplines if experiments become increasingly expensive and recruit large fractions of the scientific community.
Indeed, we see this beginning to happen in medicine and in ecology, too.
Small-scale drug trials have pretty much run their course. These are good only for finding in-your-face correlations that are universal across most people. Medicine, therefore, will increasingly have to rely on data collected from large groups over long periods of time to arrive at increasingly personalized diagnoses and prescriptions. The studies necessary for this are extremely costly and must be chosen carefully, for not many of them can be done. The study of ecosystems faces a similar challenge, where small, isolated investigations are about to reach their limits.
How physicists handle their crisis will give an example to other disciplines. So watch this space.
Thursday, April 25, 2019
Yes, scientific theories have to be falsifiable. Why do we even have to talk about this?
[image: theskillsfarm.com] |
A hypothesis that is not falsifiable through observation is optional: you may believe in it or not. Such hypotheses belong in the realm of religion. That much is clear, and I doubt any scientist would disagree. But trouble starts when we ask just what it means for a theory to be falsifiable. One runs into the following issues:
1. How long should it take to make a falsifiable prediction (or postdiction) with a hypothesis?
If you start out working on an idea, it might not be clear immediately where it will lead, or even if it will lead anywhere. That could be because mathematical methods to make predictions do not exist, or because crucial details of the hypothesis are missing, or just because you don’t have enough time or people to do the work.
My personal opinion is that it makes no sense to require predictions within any particular time, because such a requirement would inevitably be arbitrary. However, if scientists work on hypotheses without even trying to arrive at predictions, such a research direction should be discontinued. Once you allow this to happen, you will end up funding scientists forever because falsifiable predictions become an inconvenient career risk.
2. How practical should a falsification be?
Some hypotheses are falsifiable in principle, but not in practice. Or testing them might take so long that, for all practical purposes, they are unfalsifiable. String theory is the obvious example: it is testable, but no experiment in the foreseeable future will be able to probe its predictions. A similar consideration goes for the detection of quanta of the gravitational field. You can measure those, in principle. But with existing methods, you will still be collecting data when the heat death of the universe chokes your ambitious research agenda.
Personally, I think predictions for observations that are not presently measurable are worthwhile, because you never know what future technology will enable. However, it makes no sense to work out the details of futuristic detectors. That belongs in the realm of science fiction, not science. I do not mind if scientists on occasion engage in such speculation, but it should be the exception rather than the norm.
3. What even counts as a hypothesis?
In physics we work with theories. The theories themselves are based on axioms, which are mathematical requirements or principles, e.g. symmetries or functional relations. But neither theories nor principles by themselves lead to predictions.
To make predictions you always need a concrete model, and you need initial conditions. Quantum field theory, for example, does not make predictions – the standard model does. Supersymmetry also does not make predictions – only supersymmetric models do. Dark matter is neither a theory nor a principle, it is a word. Only specific models for dark matter particles are falsifiable. General relativity does not make predictions unless you specify the number of dimensions and choose initial conditions. And so on.
In some circumstances, one can arrive at predictions that are “model-independent”, which are the most useful predictions you can have. I scare-quote “model-independent” because such predictions are not really independent of the model, they merely hold for a large number of models. Violations of Bell’s inequality are a good example. They rule out a whole class of models, not just a particular one. Einstein’s equivalence principle is another such example.
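To make the Bell example concrete (a standard textbook illustration, not part of the original post): in the CHSH formulation, any local hidden-variable model, whatever its internal details, must satisfy

```latex
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,
```

where the $E(\cdot,\cdot)$ are correlations measured at detector settings $a, a'$ and $b, b'$. Quantum mechanics allows values up to $|S| = 2\sqrt{2}$, so an observed violation rules out the entire class of local hidden-variable models at once, not any particular one.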
Troubles begin if scientists attempt to falsify principles by producing large numbers of models that all make different predictions. This is, unfortunately, the current situation in both cosmology and particle physics. It shows that these models are strongly underdetermined. In such a case, no further models should be developed; that is a waste of time. Instead, scientists need to find ways to arrive at more strongly determined predictions. This can be done, e.g., by looking for model-independent predictions, or by focusing on inconsistencies in the existing theories.
This is not currently happening because it would make it more difficult for scientists to produce predictions, and hence decrease their paper output. As long as we continue to think that a large number of publications is a signal of good science, we will continue to see wrong predictions based on useless models.
4. Falsifiability is necessary but not sufficient.
A lot of hypotheses are falsifiable but just plain nonsense. Indeed, arguing that a hypothesis must be science merely because you can test it is typical crackpot thinking. I previously wrote about this here.
5. Not all aspects of a hypothesis must be falsifiable.
It can happen that a hypothesis which makes some falsifiable predictions also leads to unanswerable questions. An often-cited example is that certain models of eternal inflation seem to imply that, besides our own universe, there exists an infinite number of other universes. These other universes, however, are unobservable. We have a similar conundrum already in quantum mechanics: if you take the theory at face value, then the question of what a particle does before you measure it is not answerable.
There is nothing wrong with a hypothesis that generates such problems; it can still be a good theory, and its non-falsifiable predictions certainly make for good after-dinner conversations. However, debating non-observable consequences does not belong in scientific research. Scientists should leave such topics to philosophers or priests.
This post was brought on by Matthew Francis’ article “Falsifiability and Physics” for Symmetry Magazine.
Saturday, March 09, 2019
Motte and Bailey, Particle Physics Style
“Motte and bailey” is a rhetorical maneuver in which someone switches between an argument that does not support their conclusion but is easy to defend (the “motte”), and an argument that supports their conclusion but is hard to defend (the “bailey”). The purpose of this switch is to trick the listener into believing that the easy-to-defend argument suffices to support the conclusion.
This rhetorical trick is omnipresent in arguments that particle physicists currently make for building the next larger collider.
There are good arguments for building a larger collider, but they do not suffice to justify the investment. These arguments are measuring the properties of known particles to higher precision, and keeping particle physicists occupied. Also, we could just look and see if we find something new. That's the motte.
Then there is an argument which would justify the investment, but this argument is not based on sound reasoning. It is that a next larger collider would lead to progress in the foundations of physics, for example by finding new symmetries or solving the riddle of dark matter. This argument is indefensible because there is no reason to think the next larger collider would help answer those questions. That's the bailey.
This maneuver is particularly amusing if you both have people who make the indefensible argument and others who insist no one makes it. In a recent interview with the CERN courier, for example, Nima Arkani-Hamed says:
“Nobody who is making the case for future colliders is invoking, as a driving motivation, supersymmetry, extra dimensions…”

While his colleague, Lisa Randall, has defended the investment into the next larger collider by arguing:

“New dimensions or underlying structures might exist, but we won’t know unless we explore.”

I don’t think that particle physicists are consciously aware of what they are doing. Really, I get the impression they just throw around whatever arguments come to their mind and hope the other side doesn’t have a response. Most unfortunately, this tactic often works, just because there are few people competent enough to understand particle physicists’ arguments and also willing to point out when they go wrong.
For this reason I want to give you an explicit example for how motte and bailey is employed by particle physicists to make their case. I do this in the hope that it will help others notice when they encounter this flawed argument.
The example I will use is a recent interview I did for a podcast with the Guardian. The narrator is Ian Sample. Also on the show is particle physicist Brian Foster. I don’t personally know Foster and never spoke with him before. You can listen to the whole thing here, but I have transcribed the relevant parts below. (Please let me know in case I misheard something.)
At around 10:30 min the following exchange takes place.
Ian: “Are there particular things that physicists would like to look for, actual sort-of targets like the Higgs, that could be named like the Higgs?”
Brian: “The Higgs is really, I think, at the moment the thing that we are particularly interested in because it is the new particle on the block. And we know very little about it so far. And that will give us hopefully clues as to where to look for new phenomena beyond the standard model. Because the thing is that we know there must be physics beyond the standard model. If for no other reason than, as you mention, there’s very strong evidence that there is dark matter in the universe and that dark matter must be made of particles of some sort we have no candidate for those particles at the moment.”
I then explain that this argument does not work because there is no reason to think the next larger collider would find dark matter particles, that, in fact, we are not even sure dark matter is made of particles.
After some more talk about the various proposals for new colliders that are currently on the table, the discussion returns to the question of what justifies the investment. At about 24:06 you can hear:
Ian: “Sabine, you’ve had a fair bit of flak for some of your criticisms for the FCC, haven’t you, from within the community?”
Sabine: “Sure, true, but I did expect it. Fact is, we have no reason to think that a next larger particle collider will actually tell us anything new about the fundamental laws of nature. There’s certainly some constants that you can always measure better, you can always say, well, I want to measure more precisely what the Higgs is doing, or how that particle decays, and so on and so forth. But if you want to make progress in our understanding of the foundations of physics that’s just not currently a promising thing to invest in. And I don’t think that’s so terribly controversial, but a lot of particle physicists clearly did not like me saying this publicly.”
Brian: “I beg to differ, I think it is very controversial, and I think it’s wrong, as I’ve tried to say several times. I mean the way in which you can make progress in particle physics is by making these precision measurements. You know very well that quantum mechanics is such that if you can make very high precision measurements that can tell you a lot of things about much higher energies than what you can reach in the laboratory. So that’s the purpose of doing very high precision physics at the LHC, it’s not like stamp collecting. You are trying to make measurements which will be sufficiently precise that they will give you a very strong indication of where there will be new physics at high energies.”
(Only tangentially relevant, but note that I was talking about the foundations of physics, whereas Brian’s reply is about progress in particle physics in particular.)
Sabine: “I totally agree with that. The more precisely you measure, the more sensitive you are to the high energy contributions. But still there is no good reason right now to think that there is anything to find, is what I’m saying.”
Brian: “But that’s not true. I mean, it’s quite clear, as you said yourself, that the standard model is incomplete. Therefore, if we can measure the absolutely outstanding particle in the standard model, the Higgs boson, which is completely unique, to very high precision, then the chances are very strong that we will find some indication for what this physics beyond the standard model is.”
Sabine: “So exactly what physics beyond the standard model are you referring to there?”
Brian: “I have no idea. That’s why I want to do the measurement.”
I then explain why there is no reason to think that the next larger collider will find evidence of new physical effects. I do this by pointing out that the only reliable indications we have for new physics merely tell us something new has to appear at latest at energies that are still about a billion times higher than what even the next larger collider could reach.
At this point Brian stops claiming the chances are “very strong” that a bigger machine would find something new, and switches to the just-look-argument:
Brian: “Look, it’s a grave mistake to be too strongly led by theoretical models [...]”
The just-look-argument is of course well and fine. But, as I have pointed out many times before, the same just-look-argument can be made for any other new experiment in the foundations of physics. It therefore does not explain why a larger particle collider in particular is a good investment. Indeed, the opposite is the case: There are less costly experiments for which we have good reasons, such as measuring more precisely the properties of dark matter or probing the weak field regime of quantum gravity.
When I debunk the just-look-argument, a lot of particle physicists then bring up the no-zero-sum-argument. I just did another podcast a few days ago where the no-zero-sum-argument played a big role and if that appears online, I’ll comment on that in more detail.
The real tragedy is that there is absolutely no learning curve in this exchange. Doesn’t matter how often I point out that particle physicists’ arguments don’t hold water, they’ll still repeat them.
(Completely irrelevant aside: This is the first time I have heard a recording made in my basement studio next to other recordings. I am pleased to note all the effort I put into getting good sound quality paid off.)
Tuesday, March 05, 2019
Merchants of Hype
Once upon a time, the task of scientists was to understand nature. “Merchants of Light,” Francis Bacon called them. They were a community of knowledge-seekers who subjected hypotheses to experimental test, using what we now simply call “the scientific method.” Understanding nature, so the idea went, would both satisfy human curiosity and better our lives.
Today, the task of scientists is no longer to understand nature. Instead, their task is to uphold an illusion of progress by wrapping incremental advances in false promise. Merchants they still are, all right. But now their job is not to bring enlightenment; it is to bring excitement.
Nowhere is this more obvious than with big science initiatives. Quantum computing, personalized medicine, artificial intelligence, simulated brains, mega-scale particle colliders, and everything nano and neuro: While all those fields have a hard scientific core that justifies some investment, the big bulk is empty headlines. Most of the money goes into producing papers whose only purpose is to create an appearance of relevance.
Sooner or later, those research-bubbles become unsustainable and burst. But with the current organization of research, more people brings more money brings more people. And so, the moment one bubble bursts, the next one is on the rise already.
The hype-cycle is self-sustaining: Scientists oversell the promise of their research and get funding. Higher education institutions take their share and deliver press releases to the media. The media, since there’s money to make, produce headlines about breakthrough insights. Politicians are pleased about the impact, talk about international competitiveness, and keep the money flowing.
Trouble is, the supposed breakthroughs rarely lead to tangible progress. Where are our quantum computers? Where are our custom cancer cures? Where are the nano-bots? And why do we still not know what dark matter is made of?
Most scientists are well aware their research floats on empty promise, but keep their mouths shut. I know this not just from my personal experience. I know this because it has been vividly, yet painfully, documented in a series of anonymous interviews with British and Australian scientists about their experience writing grant proposals. These interviews, conducted by Jennifer Chubb and Richard Watermeyer (published in Studies in Higher Education), made me weep:
“I will write my proposals which will have in the middle of them all this work, yeah but on the fringes will tell some untruths about what it might do because that’s the only way it’s going to get funded and you know I’ve got a job to do, and that’s the way I’ve got to do it. It’s a shame isn’t it?”

(UK, Professor)

In other interviews, the researchers referred to their proposals as “virtually meaningless,” “made up stories,” or “charades.” They felt sorry for their own situation, and then justified their behavior by the need to get funding.
“If you can find me a single academic who hasn’t had to bullshit or bluff or lie or embellish in order to get grants, then I will find you an academic who is in trouble with his Head of Department. If you don’t play the game, you don’t do well by your university. So anyone that’s so ethical that they won’t bend the rules in order to play the game is going to be in trouble, which is deplorable.”
(Australia, Professor)
“We’ll just find some way of disguising it, no we’ll come out of it alright, we always bloody do, it’s not that, it’s the moral tension it places people under.”
(UK, Professor)
“They’re just playing games – I mean, I think it’s a whole load of nonsense, you’re looking for short term impact and reward so you’re playing a game... it’s over inflated stuff.”
(Australia, Professor)
“Then I’ve got this bit that’s tacked on... That might be sexy enough to get funded but I don’t believe in my heart that there’s any correlation whatsoever... There’s a risk that you end up tacking bits on for fear of the agenda and expectations when it’s not really where your heart is and so the project probably won’t be as strong.”
(Australia, Professor)
Worse, the above quotes only document the tip of the iceberg. That’s because the people who survive in the current system are the ones most likely to be okay with the situation. This may be because they genuinely believe their field is as promising as they make it sound, or because they manage to excuse their behavior to themselves. Either way, the present selection criteria in science favor skilled salesmanship over objectivity. Need I say that this is not a good way to understand nature?
The tragedy is not that this situation sucks, though, of course, it does. The tragedy is that it’s an obvious problem and yet no one does anything about it. If scientists can increase their chances to get funding by exaggeration, they will exaggerate. If they can increase their chances to get funding by being nice to their peers, they will be nice to their peers. If they can increase their chances to get funding by publishing on popular topics, they will publish on popular topics. You don’t have to be a genius to figure that out.
Tenure was supposed to remedy scientists’ conflict of interest between truth-seeking and economic survival. But tenure is now a rarity. Even the lucky ones who have it must continue to play nice, both to please their institution and keep the funding flowing. And honesty has become self-destructive. If you draw attention to shortcomings, if you debunk hype, if you question the promise of your own research area, you will be expelled from the community. A recent commenter on this blog summed it up like this:
“at least when I was in [high energy physics], it was taken for granted that anyone in academic [high energy physics] who was not a booster for more spending, especially bigger colliders, was a traitor to the field.”

If you doubt this, think about the following. I have laid out clearly why I do not think a bigger particle collider is currently a good investment. No one who understands the scientific and technological situation seriously disagrees with my argument; they merely disagree with the conclusions. This is fine with me. This is not the problem. I don’t expect everyone to agree with me.
But I also don’t expect everyone to disagree with me, and neither should you. So here is the puzzle: Why can you not find any expert, besides me, willing to publicly voice criticism on particle physics? Hint: It’s not because there is nothing to criticize.
And if you figured this one out, maybe you will understand why I say I cannot trust scientists any more. It’s a problem. It’s a problem in dire need of a solution.
This rant was, for once, not brought on by a particle physicist, but by someone who works in quantum computing. Someone who complained to me that scientists are overselling the potential of their research, especially when it comes to large investments. Someone distraught, frustrated, disillusioned, and most of all, unsure what to do.
I understand that many of you cannot break ranks without putting your jobs at risk. I do not – and will not – expect you to sacrifice a career you worked hard for; no one would be helped by this. But I want to remind you that you didn’t become a scientist just to shut up and advocate.
Saturday, March 02, 2019
Check your Biases
[slide 8 of this presentation]
You would think that enough has been written about cognitive biases and logical fallacies that even particle physicists would have taken note, but at least the ones I deal with have no clue. If I ask them what measures they take to avoid cognitive biases when evaluating the promise of a research direction, they will either mention techniques to prevent biased data analysis (a different thing entirely), or they will deny that they even have biases (thereby documenting the very problem whose existence they deny).
Here is a response I got from a particle physicist when I pointed out that Gianotti did not answer the question about group think:
(This person then launched an ad-hominem attack at me and eventually deleted their comment. In the hope that this deletion documents some sliver of self-insight, I decided to remove identifying information.)
Here is another particle physicist commenting on the same topic, demonstrating just how much these scientists overrate their rationality:
It is beyond me why scientists are still not required to have basic training in the sociology of science, cognitive biases, and decision making in groups. Such knowledge is necessary to properly evaluate information. Scientists cannot correctly judge the promise of research directions unless they are aware how their opinions are influenced by the groups they are part of.
It would be easy enough to set up online courses for this. If I had the funding, I would do it. Alas, I don’t. The only thing I can do, therefore, is to ask everyone – and especially those in leadership positions – to please take the problem seriously. Scientists are human. Leaving cognitive biases unchecked results in inefficient allocations of research funding, not to mention that it wastes time.
In all brevity, here are the basics.
What is a social bias, what is a cognitive bias?
A cognitive bias is a thinking shortcut that has developed through evolution. It can be beneficial in some situations, but in others it can result in incorrect judgement. A cognitive bias is similar to an optical illusion. Look at this example:
Example of optical illusion. A and B have the same color. Click here if you don’t believe it. [Image source: Wikipedia]
The pixels in the squares A and B have the exact same color. However, to most people, square B looks lighter than A. That’s because there is a shadow over square B, so your brain factors in that the original color should have been lighter.
The conclusion that B is lighter, therefore, makes perfect sense in a naturally occurring situation. When asked to judge the color on your screen, however, you are likely to give a wrong answer if you are not aware of how your brain works.
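The point that the disagreement lies in the perception, not in the data, can be made concrete with a few lines of code. The snippet below is an illustrative toy version of the checker-shadow setup (it does not use the actual Wikipedia image): a small gray patch sits on a dark surround, an identical gray patch on a light surround, and the program verifies that the two patches are numerically identical. All dimensions and gray values are made up for illustration.

```python
# Toy checker-shadow setup: one gray patch on a dark surround, an
# identical gray patch on a light surround. A viewer perceives the two
# patches differently, but the data are exactly the same.

def make_image(width=30, height=10):
    # dark surround on the left half, light surround on the right half
    img = [[40 if x < width // 2 else 220 for x in range(width)]
           for _ in range(height)]
    # two 2x2 patches with the exact same gray value (120)
    for y in (4, 5):
        for x in (6, 7):
            img[y][x] = 120  # patch "A", on the dark surround
        for x in (21, 22):
            img[y][x] = 120  # patch "B", on the light surround
    return img

img = make_image()
patch_a = [img[y][x] for y in (4, 5) for x in (6, 7)]
patch_b = [img[y][x] for y in (4, 5) for x in (21, 22)]
assert patch_a == patch_b  # identical pixels; the difference is added by the brain
```

The assertion passes trivially, which is exactly the point: whatever difference an observer reports between the two patches is produced by their visual system, not contained in the image.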
Likewise, a cognitive bias happens if your brain factors in information that may be relevant in some situations but can lead to wrong results in others. A social bias, more specifically, is a type of cognitive bias that comes from the interaction with other people.
It is important to keep in mind that cognitive biases are not a sign of lacking intelligence. Everyone has cognitive biases and that’s nothing to be ashamed of. But if your job is to objectively evaluate information, you should be aware that the results of your evaluation are skewed by the way your brain functions.
Scientists, therefore, need to take measures to prevent cognitive biases the same way that they take measures to prevent biases in data analysis. The brain is yet another apparatus. Understanding how it operates is necessary to arrive at correct conclusions.
There are dozens of cognitive biases. I here merely list the ones that I think are most important for science:
- Communal Reinforcement
More commonly known as “group think,” communal reinforcement happens if members of a community constantly reassure each other that what they are doing is the right thing. It is typically accompanied by devaluing or ignoring outside opinions. You will often see it come along with arguments from popularity. Communal reinforcement is the major reason bad methodologies can become accepted practice in research communities.
- Availability Cascades
What we hear of repeatedly sounds more interesting, and we talk more about what is more interesting, which makes it sound even more interesting. This does make a lot of sense if you want to find out what important things are happening in your village. It does not make sense, however, if your job is, say, to decide what’s the most promising experiment to make progress in the foundations of physics. Availability cascades are a driving force in scientific fashion trends and can lead to over-inflated research bubbles with little promise.
- Post-purchase Rationalization
This is the tendency to tell ourselves and others that we have not made stupid decisions in the past, like, say, pouring billions of dollars into entirely fruitless research avenues. It is a big obstacle to learning from failure. This bias is amplified by our desire to avoid cognitive dissonance, that is, any threat to our self-image as rationally thinking individuals. Post-purchase rationalization is why no experiment in the history of science has ever been a bad investment.
- Irrational Escalation
Also known as the “sunk cost fallacy” or “throwing good money after bad.” Irrational Escalation is the argument that you cannot give up now because you have invested so much already. This is one of the main reasons why research agendas survive well beyond the point at which they stopped making sense, see supersymmetry, string theory, or searches for dark matter particles that become heavier and more weakly interacting every time they are not found.
- Motivated Reasoning
More colloquially known as “wishful thinking,” motivated reasoning is the human tendency to give pep talks and then actually believe the rosy picture we painted ourselves. While usually well-intended, motivated reasoning can result in overly optimistic expectations and an insistence on holding onto irrational dreams. Surely particle physicists are just about to discover some new particle, the next round of experiments will find that dark matter candidate, etc.
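Several of these biases share the same feedback structure: attention breeds attention. A standard toy model for such dynamics is a Pólya urn, sketched below under the (invented) assumption that each mention of a topic makes further mentions proportionally more likely. The topic names and initial counts are hypothetical; nothing here models an actual research community.

```python
import random

def availability_cascade(initial_mentions, rounds=10000, seed=42):
    """Pólya-urn toy model of an availability cascade: a topic is
    picked with probability proportional to how often it has been
    mentioned, and each pick adds another mention."""
    rng = random.Random(seed)
    counts = dict(initial_mentions)
    topics = list(counts)
    for _ in range(rounds):
        # pick a topic weighted by its current mention count
        pick = rng.choices(topics, weights=[counts[t] for t in topics])[0]
        counts[pick] += 1  # attention breeds attention
    return counts

# Two hypothetical topics of equal merit; one has a small head start.
final = availability_cascade({"topic A": 6, "topic B": 5})
shares = {t: c / sum(final.values()) for t, c in final.items()}
```

Rerunning with different seeds yields very different final shares, which is the instructive part: the long-run winner is path-dependent amplified noise, not a measure of merit.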
The more people have told you that a crappy scientific method is okay, the more likely you are to believe it is okay. Keep that in mind next time a BSM phenomenologist tells you it is totally normal when a scientific discipline makes wrong predictions for 40 years.
The easiest way to see that particle physics has a big problem with cognitive biases is that members of this community deny they even have biases and refuse to do anything about it.
The topic of cognitive biases has been extensively covered elsewhere, and I see no use in repeating what others have said better. Google will give you all the information you need. Some good starting points are:
- The Cognitive Bias Cheat Sheet, Buster Benson
- 20 cognitive biases that screw up your decisions, Business Insider
- How to Reduce Bias In Decision-Making, USC Marshall Critical Thinking Initiative
Saturday, February 23, 2019
Gian-Francesco Giudice On Future High-Energy Colliders
Gian-Francesco Giudice [Image: Wikipedia]
The article begins with Giudice stating that “the most remarkable result [of the LHC measurements] was the discovery of a completely new type of force.” By this he means that the interaction with the Higgs-boson amounts to a force, and therefore the discovery of the Higgs can be interpreted as the discovery of a new force.
That the Higgs-boson exchanges a force is technically correct, but this terminology creates a risk of misunderstanding, so please allow me to clarify. In common terminology, the standard model describes three fundamental forces (stemming from the three gauge-symmetries): The electromagnetic force, the strong nuclear force, and the weak nuclear force. The LHC results have not required physicists to rethink this. The force associated with the Higgs-boson is not normally counted among the fundamental forces.
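For reference, the three gauge symmetries in question are those of the standard model gauge group,

```latex
SU(3)_c \times SU(2)_L \times U(1)_Y ,
```

where $SU(3)_c$ gives the strong nuclear force and the electroweak sector $SU(2)_L \times U(1)_Y$, after symmetry breaking, yields the weak nuclear force and electromagnetism. The Higgs exchange is not associated with a gauge symmetry of this group, which is why it is not normally counted among the fundamental forces.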
One can debate whether or not this is a new type of force. Higgs-like phenomena have been observed in condensed-matter physics for a long time. In any case, rebranding the Higgs as a force doesn’t change the fact that it was predicted in the 1960s and was the last missing piece in the standard model.
Giudice then lists reasons why particle physicists want to further explore high energy regimes. Let me go through these quickly to explain why they are bad motivations for a next larger collider (for more details see also my earlier post about good and bad problems):
- “the pattern of quark and lepton masses and mixings”
There is no reason to think a larger particle collider will tell us anything new about this. There isn’t even a reason to think those patterns have any deeper explanation.
- “the dynamics generating neutrino masses”
The neutrino masses are either of Majorana type, which you test for with other experiments (looking for neutrinoless double-beta decay), or they are of Dirac type, in which case there is no reason to think the (so far missing) right-handed neutrinos have masses in the range accessible to the next larger collider.
- “Higgs naturalness”
Arguments from naturalness were the reason why so many physicists believed the LHC should already have seen fundamentally new particles besides the Higgs (see here for references). Those predictions were all wrong. It’s about time that particle physicists learn from their mistakes.
- “the origin of symmetry breaking dynamics”
I am not sure what this refers to. If you know, please leave a note in the comments.
- “the stability of the Higgs potential”
A next larger collider would tell us more about the Higgs potential. But the question whether the potential is stable cannot be answered by this collider, because the answer also depends on what happens at even higher energies.
- “unification of forces, quantum gravity”
Expected to become relevant at energies far exceeding those of the next larger collider.
- “cosmological constant”
Relevant on long distances, and not something that high-energy colliders test.
- “the nature and origin of dark matter, dark energy, cosmic baryon asymmetry, inflation”
No reason to think that a next larger collider will tell us anything about this.
The Michelson-Morley experiment, however, is an unfortunate example to enlist in favor of a larger collider. To begin with, it is somewhat disputed among historians how relevant the Michelson-Morley experiment really was for Einstein’s formulation of Special Relativity, since the theory can also be derived from the symmetries of Maxwell’s equations. More interesting for the case of building a larger collider, though, is to look at what happened after the null result of Michelson and Morley.
What happened is that for some decades experimentalists built larger and larger interferometers looking for the aether, not finding any evidence for it. These experiments eventually grew too large, and this line of research was discontinued. Then the Second World War interfered, and for a while scientific exploration stalled.
In the 1950s, due to rapid technological improvements, interferometers could be dramatically shrunk back in size and the search for the aether continued with smaller devices. Indeed, Michelson-Morley-like experiments are still made today. But the best constraints on deviations from Einstein’s theory now come from entirely different observations, notably from particles traveling over cosmologically long distances. The aether, needless to say, hasn’t been found.
There are two lessons to take away from this: (a) When experiments became too large and costly they paused until technological progress improved the return on investment. (b) Advances in entirely different research directions enabled better tests.
Back to high-energy particle physics. There hasn’t been much progress in collider technology for decades. For this reason, physicists still try to increase collision energies by digging longer tunnels. The costs for a next larger collider now exceed $10 billion. We have no reason to think that this collider will tell us anything besides measuring details of the standard model to higher precision. This line of research should be discontinued until it becomes more cost-efficient again.
Giudice ends his essay by arguing that particle colliders are somehow exceptionally great experiments and therefore must be continued. He writes: “No other instrument or research programme can replace high-energy colliders in the search for the fundamental laws governing the universe.”
But look at the facts: The best constraints on grand unified theories come from searches for proton decay. Such searches entail closely monitoring large tanks of water. These are not high-energy experiments; you could maybe call them “large-volume experiments.” Likewise, the tightest constraints on physics at high energies currently come from the ACME measurement of the electron’s electric dipole moment. This is a high-precision measurement at low energies. And our current best shot at finding evidence for quantum gravity comes from massive quantum oscillators. Again, that is not high-energy physics.
Building larger colliders is not the only way forward in the foundations of physics. Particle physicists only seem to be able to think of reasons for a next larger particle collider and not of reasons against it. This is not a good way to evaluate the potential of such a large financial investment.
Friday, February 08, 2019
A philosopher of science reviews “Lost in Math”
Butterfield’s is a very detailed review that focuses, unsurprisingly, on the philosophical implications of my book. I think his summary will give you a pretty good impression of the book’s content. However, I want to point out two places where he misrepresents my argument.
First, in section 2, Butterfield lays out his disagreements with me. Alas, he disagrees with positions I don’t hold and certainly did not state, neither in the book nor anywhere else:
“Hossenfelder’s main criticism of supersymmetry is, in short, that it is advocated because of its beauty, but is unobserved. But even if supersymmetry is not realized in nature, one might well defend studying it as an invaluable tool for getting a better understanding of quantum field theories. A similar defence might well be given for studying string theory.”

Sure. Supersymmetry, string theory, grand unification, even naturalness, started out as good ideas and valuable research programs. I do not say these should not have been studied; neither do I say one should now discontinue studying them. The problem is that these ideas have grown into paper-production industries that no longer produce valuable output.
Beautiful hypotheses are certainly worth consideration. Troubles begin if data disagree with the hypotheses but scientists continue to rely on their beautiful hypotheses rather than taking clues from evidence.
Second, Butterfield misunderstands just how physicists working on the field’s foundations are “led astray” by arguments from beauty. He writes:
“I also think advocates of beauty as a heuristic do admit these limitations. They advocate no more than a historically conditioned, and fallible, heuristic [...] In short, I think Hossenfelder interprets physicists as more gung-ho, more naïve, that beauty is a guide to truth than they really are.”

To the extent that physicists are aware they use arguments from beauty, most know that these are not scientific arguments and also readily admit it. I state this explicitly in the book. They use such arguments anyway, however, because doing so has become accepted methodology. Look at what they do, don’t listen to what they say.
A few try to justify using arguments from beauty by appeals to cherry-picked historical examples or quotes from Einstein and Dirac. In most cases, however, physicists are not aware they use arguments from beauty to begin with (hence the book’s title). I have such discussions on a daily basis.
Physicists wrap appeals to beauty into statements like “this just can’t be the last word,” “intuition tells me,” or “this screams for an explanation”. They have forgotten that naturalness is an argument from beauty and can’t recall, or never looked at, the motivation for axions or gauge coupling unification. They will express their obsessions with numerical coincidences by saying “it’s curious” or “it is suggestive,” often followed by “Don’t you agree?”.
Of course I agree. I agree that supersymmetry is beautiful and it should be true, and it looks like there should be a better explanation for the parameters in the standard model, and it looks like there should be a unified force. But who cares what I think nature should be like? Human intuition is not a good guide to the development of new laws of nature.
What physicists are naive about is not appeals to beauty; what they are naive about is their own rationality. They cannot fathom the possibility that their scientific judgement is influenced by cognitive biases and social trends in scientific communities. They believe it does not matter for their interests how their research is presented in the media.
The easiest way to see that the problem exists is that they deny it.