Don’t Tell Me What to Think!

Your Karma ran over My Dogma
A woman holds up a sign while participating in the annual King Mango Strut parade in Miami, FL on 28 December 2014. BluIz60/Shutterstock

2 January 2019 – Now that the year-end holidays are over, it’s time to get back on my little electronic soapbox to talk about an issue that scientists have had to fight with authorities over for centuries. It’s an issue that has been around for millennia, but before a few centuries ago there weren’t scientists around to fight over it. The issue rears its ugly head under many guises. Most commonly today it’s discussed as academic freedom, or freedom of expression. You might think it was definitively won for all Americans in 1791 with the ratification of the first ten amendments to the U.S. Constitution and for folks in other democracies soon after, but you’d be wrong.

The issue is wrapped up in one single word: dogma.

According to the Oxford English Dictionary, the word dogma is defined as:

“A principle or set of principles laid down by an authority as incontrovertibly true.”

In 1600 CE, Giordano Bruno was burned at the stake for insisting that the stars were distant suns surrounded by their own planets, raising the possibility that these planets might foster life of their own, and that the universe is infinite and could have no “center.” These ideas directly controverted the dogma laid down as incontrovertibly true by both the Roman Catholic and Protestant Christian churches of the time.

Galileo Galilei, typically thought of as the poster child for resistance to dogma, was only placed under house arrest (for the rest of his life) for advocating the less radical Copernican vision of the solar system.

Nicolaus Copernicus, himself, managed to fly under the Catholic Church’s radar for over three decades by the simple tactic of not publishing his heliocentric model. Starting in 1510, he privately communicated it to his friends, who then passed it to some of their friends, etc. His signature work, De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres), in which he laid it out for all to see, wasn’t published until his death in 1543, when he’d already escaped beyond the reach of earthly authorities.

If this makes it seem that astrophysicists have been on the front lines of the war against dogma since there was dogma to fight against, that’s almost certainly true. Astrophysicists study stuff relating to things beyond the Earth, and that traditionally has been a realm claimed by religious authorities.

That claim largely started with Christianity, specifically the Roman Catholic Church. Ancient religions, which didn’t have delusions that they could dominate all of human thought, didn’t much care what cockamamie ideas astrophysicists (then just called “philosophers”) came up with. Thus, Aristarchus of Samos suffered no ill consequences (well, maybe a little, but nothing life – or even career – threatening) from proposing the same ideas that Galileo was arrested for championing some eighteen centuries later.

Fast forward to today and we have a dogma espoused by political progressives called “climate change.” It used to be called “global warming,” but that term was laughed down decades ago, though the dogma’s still the same.

The United-Nations-funded Intergovernmental Panel on Climate Change (IPCC) has become “the Authority” laying down the principles that Earth’s climate is changing and that change constitutes a rapid warming caused by human activity. The dogma also posits that this change will continue uninterrupted unless national governments promulgate drastic laws to curtail human activity.

Sure sounds like dogma to me!

Once again, astrophysicists are on the front lines of the fight against dogma. The problem is that the IPCC dogma treats the Sun (which is what powers Earth’s climate in the first place) as, to all intents and purposes, a fixed star. That is, it assumes climate change arises solely from changes in Earthly conditions, then assumes we control those conditions.

Astrophysicists know that just ain’t so.

First, stars generally aren’t fixed. Most stars are variable stars. In fact, all stars are variable on some time scale. They all evolve over time scales of millions or billions of years, but that’s not the kind of variability we’re talking about here.

The Sun is in the evolutionary phase called “main sequence,” where stars evolve relatively slowly. That’s the source of much “invariability” confusion. Main sequence stars, however, go through periods where they vary in brightness more or less violently on much shorter time scales. In fact, most main sequence stars exhibit this kind of behavior to a greater or lesser extent at any given time – like now.

So, a modern (as in post-nineteenth-century) astrophysicist would never make the bald assumption that the Sun’s output was constant. Statistically, the odds are against it. Most stars are variables; the Sun is like most stars; so the Sun is probably a variable. In fact, it’s well known to vary with a fairly stable period of roughly 22 years (the 11-year “sunspot cycle” is actually only a half cycle).

A couple of centuries ago, astronomers assumed (with no evidence) that the Sun’s output was constant, so they started trying to measure this assumed “solar constant.” Charles Greeley Abbot, who served as the Secretary of the Smithsonian Institution from 1928 to 1944, oversaw the first long-term study of solar output.

His observations were necessarily ground-based, and the variations he observed (amounting to 3–5 percent) have been dismissed as “due to changing weather conditions and incomplete analysis of his data.” That, despite the monumental efforts he went through to control for such effects.

In the 1970s I did an independent analysis of his data and realized that part of his problem stemmed from a misunderstanding of the relationship between sunspots and solar irradiance. At the time, it was assumed that sunspots were akin to atmospheric clouds. That is, scientists assumed they affected overall solar output by blocking light, thus reducing the total power reaching Earth.

Thus, when Abbot’s observations showed the opposite correlation, they were assumed to be erroneous. His purported correlations with terrestrial weather observations were similarly confused, and thus dismissed.

Since then, astrophysicists have realized that sunspots are more like a symptom of increased internal solar activity. That is, increases in sunspot activity positively correlate with increases in the internal dynamism that generates the Sun’s power output. Seen in this light, Abbot’s observations and analysis make a whole lot more sense.

We have ample evidence, from historical observations of climate changes correlating with observed variations in sunspot activity, that there is a strong connection between climate and solar variability. Most notable is the fact that the Spörer and Maunder minima (times when sunspot activity all but disappeared for extended periods) in the sunspot records correlated with historically cold periods in Earth’s history. There was a similar period of low solar activity (as measured by sunspot numbers) from about 1790 to 1830, called the “Dalton Minimum,” that similarly depressed global temperatures and gave an anomalously low baseline for the run-up to the Modern Maximum.

For astrophysicists, the phenomenon of solar variability is not in doubt. The questions that remain are how large the variations are, how closely they correlate with climate change, and whether they are predictable.

Studies of solar variability, however, run afoul of the IPCC dogma. For example, in May of 2017 an international team of solar dynamicists led by Valentina V. Zharkova at Northumbria University in the U.K. published a paper entitled “On a role of quadruple component of magnetic field in defining solar activity in grand cycles” in the Journal of Atmospheric and Solar-Terrestrial Physics. Their research indicates that the Sun, while its activity has been on the upswing for an extended period, should be heading into a quiescent period starting with the next maximum of the 11-year sunspot cycle, around five years from now.

That would indicate that the IPCC prediction of exponentially increasing global temperatures due to human-caused increasing carbon-dioxide levels may be dead wrong. I say “may be dead wrong” because this is science, not dogma. In science, nothing is incontrovertible.

I was clued in to this research by my friend Dan Romanchik, who writes a blog for amateur radio enthusiasts. Amateur radio enthusiasts care about solar activity because sunspots are, in fact, caused by magnetic fields at the Sun’s surface, and the same magnetic activity drives the ultraviolet and X-ray emission that maintains the Kennelly–Heaviside layer of ionized gas in Earth’s upper atmosphere (roughly 90–150 km, or 56–93 mi, above the ground).

Radio amateurs bounce signals off this layer to reach distant stations beyond line of sight. When solar activity is weak this layer drops to lower altitudes, reducing the effectiveness of this technique (often called “DXing”).

In his post of 16 December 2018, Dan complained: “If you operate HF [the high-frequency radio band], it’s no secret that band conditions have not been great. The reason, of course, is that we’re at the bottom of the sunspot cycle. If we’re at the bottom of the sunspot cycle, then there’s no way to go but up, right? Maybe not.

“Recent data from the NOAA’s Space Weather Prediction Center seems to suggest that solar activity isn’t going to get better any time soon.”

After discussing the NOAA prediction, he went on to further complain: “And, if that wasn’t depressing enough, I recently came across an article reporting on the research of Prof. Valentina Zharkova, who is predicting a grand minimum of 30 years!”

He included a link to a presentation Dr. Zharkova made at the Global Warming Policy Foundation last October in which she outlined her research and pointedly warned that the IPCC dogma was totally wrong.

I followed the link, viewed her presentation, and concluded two things:

  1. The research methods she used are some that I’m quite familiar with, having used them on numerous occasions; and

  2. She used those techniques correctly, reaching convincing conclusions.

Her results seem well aligned with a meta-analysis published by the Cato Institute in 2015, which I mentioned in my posting of 10 October 2018 to this blog. The Cato meta-analysis of observational data indicated a much-reduced rate of global warming compared to that predicted by IPCC models.

The Zharkova-model data covers a much wider period (millennia-long time scale rather than decades-long time scale) than the Cato data. It’s long enough to show the Medieval Warm Period as well as the Little Ice Age (Maunder minimum) and the recent warming trend that so fascinates climate-change activists. Instead of a continuation of the modern warm period, however, Zharkova’s model shows an abrupt end starting in about five years with the next maximum of the 11-year sunspot cycle.

Don’t expect a stampede of media coverage disputing the IPCC dogma, however. A host of politicians (especially in the U.S. Democratic Party) have hung their hats on that dogma, as have an array of governments that have sold policy decisions based on it. The political left has made an industry of vilifying anyone who doesn’t toe the “climate change” line, calling them “climate deniers” with suspect intellectual capabilities and moral characters.

Again, this sounds a lot like dogma. It’s the same tactic that the Inquisition used against Bruno and Galileo before escalating to more brutal methods.

Supporters of Zharkova’s research labor under a number of disadvantages. Of course, there’s the obvious disadvantage that Zharkova’s thick Ukrainian accent limits her ability to explain her work to those who don’t want to listen. She would not come off well on the evening news.

A more important disadvantage is the abstruse nature of the applied-mathematics techniques used in the research. How many political reporters and, especially, commentators are familiar enough with the mathematical technique of principal component analysis to understand what Zharkova’s talking about? This stuff makes macroeconomic modeling look like kiddie play!
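Principal component analysis isn’t magic, though; at its core it’s an eigendecomposition of a covariance matrix that picks out the directions of greatest variance. As a generic illustration (toy data of my own invention, not Zharkova’s solar magnetic-field measurements), here’s a minimal sketch:

```python
import numpy as np

# Toy principal component analysis: find the orthogonal direction
# that captures the most variance in a multivariate record.
rng = np.random.default_rng(0)

# Two correlated "measurement" series standing in for any real dataset.
t = rng.normal(size=500)
data = np.column_stack([t + 0.1 * rng.normal(size=500),
                        2.0 * t + 0.1 * rng.normal(size=500)])

# Center the data, then eigendecompose its covariance matrix.
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

# Fraction of total variance explained by the leading component.
explained = eigvals[-1] / eigvals.sum()
print(f"Leading component explains {explained:.1%} of the variance")
```

With strongly correlated inputs like these, one component carries nearly all the information; Zharkova’s group applied the same general idea to decompose solar magnetic-field records into a few dominant modes.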

But, the situation’s even worse because to really understand the research, you also need an appreciation of stellar dynamics, which is based on magnetohydrodynamics. How many CNN commentators even know how to spell that?

Of course, these are all tools of the trade for astrophysicists. They’re as familiar to them as a hammer or a saw is to a carpenter.

For those in the media, on the other hand, it’s a lot easier to take the “most scientists agree” mantra at face value than to embark on the nearly hopeless task of re-educating themselves to understand Zharkova’s research. That goes double for politicians.

It’s entirely possible that “most” scientists might agree with the IPCC dogma, but those in a position to understand what’s driving Earth’s climate do not agree.

Climate Models Bat Zero

Climate models vs. observations
Whups! Since the 1970s, climate models have overestimated global temperature rise by … a lot! Cato Institute

The articles discussed here reflect the print version of The Wall Street Journal, rather than the online version. Links to online versions are provided. The publication dates and some of the contents do not match.

10 October 2018 – Baseball is well known to be a game of statistics. Fans pay as much attention to statistical analysis of performance by both players and teams as they do to action on the field. They hope to use those statistics to indicate how teams and individual players are likely to perform in the future. It’s an intellectual exercise that is half the fun of following the sport.

While baseball statistics are quoted to three decimal places, or one part in a thousand, fans know to ignore the last decimal place, be skeptical of the second decimal place, and recognize that even the first decimal place has limited predictive power. It’s not that these statistics are inaccurate or in any sense meaningless, it’s that they describe a situation that seems predictable, yet is full of surprises.

With 18 players in a game at any given time, a complex set of rules, and at least three players and an umpire involved in the outcome of every pitch, a baseball game is a highly chaotic system. What makes it fun is seeing how this system evolves over time. Fans get involved by trying to predict what will happen next, then quickly seeing if their expectations materialize.

The essence of a chaotic system is conditional unpredictability. That is, the predictability of any result drops more-or-less drastically with time. For baseball, the probability of, say, a player maintaining their batting average is fairly high on a weekly basis, drops drastically on a month-to-month basis, and simply can’t be predicted from year to year.

Folks call that “streakiness,” and it’s one of the hallmarks of mathematical chaos.

Since the 1960s, mathematicians have recognized that weather is also chaotic. You can say with certainty what’s happening right here right now. If you make careful observations and take into account what’s happening at nearby locations, you can be fairly certain what’ll happen an hour from now. What will happen a week from now, however, is a crapshoot.

This drives insurance companies crazy. They want to live in a deterministic world where they can predict their losses far into the future so that they can plan to have cash on hand (loss reserves) to cover them. That works for, say, life insurance. It works poorly for losses due to extreme-weather events.

That’s because weather is chaotic. Predicting catastrophic weather events next year is like predicting Miami Marlins pitcher Drew Steckenrider’s earned-run average for the 2019 season.

Laugh out loud.

Notes from 3 October

My BS detector went off big-time when I read an article in this morning’s Wall Street Journal entitled “A Hotter Planet Reprices Risk Around the World.” That headline is BS for many reasons.

Digging into the article turned up the assertion that insurance providers were using deterministic computer models to predict risk of losses due to future catastrophic weather events. The article didn’t say that explicitly. We have to understand a bit about computer modeling to see what’s behind the words they used. Since I’ve been doing that stuff since the 1970s, pulling aside the curtain is fairly easy.

I’ve also covered risk assessment in industrial settings for many years. It’s not done with deterministic models. It’s not even done with traditional mathematics!

The traditional mathematics you learned in grade school uses real numbers. That is, numbers with a definite value.

Like Pi.

Pi = 3.1415926 ….

We know what Pi is because it’s measurable. It’s the ratio of a circle’s circumference to its diameter.

Measure the circumference. Measure the diameter. Then divide one by the other.

The ancient Egyptians performed the exercise a kazillion times and noticed that, no matter what circle you used, no matter how big it was, whether you drew it on papyrus or scratched it on a rock or laid it out in a crop circle, you always came out with the same number. That number eventually picked up the name “Pi.”
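You can reproduce the Egyptians’ experience numerically. The sketch below “measures” Pi a different way (scattering random points rather than dividing circumference by diameter), but the moral is the same: no matter how big the circle, the measurement converges on the same number:

```python
import random

def measure_pi(radius, n=200_000, seed=42):
    # Scatter random points over a square of side 2*radius and count the
    # fraction that land inside the inscribed circle; that fraction
    # approaches pi/4, so multiplying by 4 "measures" Pi.
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x = rng.uniform(-radius, radius)
        y = rng.uniform(-radius, radius)
        if x * x + y * y <= radius * radius:
            inside += 1
    return 4 * inside / n

# Whatever the size of the circle, the answer comes out the same.
for radius in (1.0, 7.5, 1000.0):
    print(radius, measure_pi(radius))
```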

Risk assessment is NOT done with traditional arithmetic using deterministic (real) numbers. It’s done using what’s called “fuzzy logic.”

Fuzzy logic is not like the fuzzy thinking used by newspaper reporters writing about climate change. The “fuzzy” part simply means it uses fuzzy categories like “small,” “medium” and “large” that don’t have precisely defined values.

While computer programs are perfectly capable of dealing with fuzzy logic, they won’t give you the kind of answers cost accountants are prepared to deal with. They won’t tell you that you need a risk-reserve allocation of $5,937,652.37. They’ll tell you something like “lots!”

You can’t take “lots” to the bank.
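To make the idea concrete, here’s a minimal sketch of fuzzy categories built from triangular membership functions. The loss figure and the category boundaries are invented purely for illustration:

```python
def membership(x, low, peak, high):
    # Triangular membership function: the degree (0 to 1) to which x
    # belongs to a fuzzy category peaking at `peak`.
    if x <= low or x >= high:
        return 0.0
    if x <= peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)

# Hypothetical fuzzy categories for an annual loss figure, in $ millions.
categories = {
    "small":  (-1, 0, 10),
    "medium": (5, 15, 25),
    "large":  (20, 40, 60),
}

def classify(loss):
    # Degree to which the loss belongs to each category; unlike a real
    # number, the answer is a blend ("mostly medium, a little large").
    return {name: round(membership(loss, *abc), 2)
            for name, abc in categories.items()}

print(classify(22.0))
```

A $22-million loss comes out partly “medium” and partly “large”; no category gets a crisp dollars-and-cents value, which is exactly why cost accountants hate it.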

The next problem is imagining that global climate models could have any possible relationship to catastrophic weather events. Catastrophic weather events are, by definition, catastrophic. To analyze them you need the kind of mathematics called “catastrophe theory.”

Catastrophe theory is one of the underpinnings of chaos. In Steven Spielberg’s 1993 movie Jurassic Park, the character Ian Malcolm tries to illustrate catastrophe theory with the example of a drop of water rolling off the back of his hand. Whether it drips off to the right or left depends critically on how his hand is tipped. A small change creates an immense difference.

If a ball is balanced at the edge of a table, it can either stay there or drop off, and you can’t predict in advance which will happen.

That’s the thinking behind catastrophe theory.
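A toy fold-catastrophe model (my own illustration, not from any of the articles discussed) captures the ball-on-the-table behavior numerically: tilt a double-welled potential slowly and smoothly, and at a critical tilt the resting position jumps discontinuously:

```python
def relax(x, a, steps=5000, lr=0.01):
    # Let the "ball" settle by gradient descent on the potential
    # V(x) = x**4 - 2*x**2 + a*x, a textbook fold-catastrophe toy model.
    for _ in range(steps):
        x -= lr * (4 * x**3 - 4 * x + a)
    return x

# Sweep the tilt parameter `a` slowly while tracking the resting position.
positions = []
x = 1.0  # start in the right-hand valley
for i in range(41):
    a = -2.0 + 0.1 * i
    x = relax(x, a)
    positions.append(x)

# The resting position drifts smoothly until the right-hand valley
# vanishes (near a = 1.54), then jumps abruptly to the other valley.
jumps = [abs(p1 - p0) for p0, p1 in zip(positions, positions[1:])]
print(max(jumps))
```

An arbitrarily small change in the tilt, right at the critical value, produces an immense change in the outcome; that discontinuity is the “catastrophe.”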

The same analysis goes into predicting what will happen with a hurricane. As I recall, at the time Hurricane Florence (2018) made landfall, most models predicted it would move south along the Appalachian Ridge. Another group of models predicted it would stall out to the northwest.

When push came to shove, however, it moved northeast.

What actually happened depended critically on a large number of details that were too small to include in the models.

How much money was lost due to storm damage was a result of the result of unpredictable things. (That’s not an editing error. It was really the second order result of a result.) It is a fundamentally unpredictable thing. The best you can do is measure it after the fact.

That brings us to comparing climate-model predictions with observations. We’ve got enough data now to see how climate-model predictions compare with observations on a decades-long timescale. The graph above summarizes results compiled in 2015 by the Cato Institute.

Basically, it shows that, not only did the climate models overestimate the temperature rise from the late 1970s to 2015 by a factor of approximately three, but in the critical last decade, when the computer models predicted a rapid rise, the actual observations showed that it nearly stalled out.

Notice that the divergence between the models and the observations increased with time. As I’ve said, that’s the main hallmark of chaos.

It sure looks like the climate models are batting zero!

I’ve been watching these kinds of results show up since the 1980s. It’s why by the late 1990s I started discounting statements like the WSJ article’s: “A consensus of scientists puts blame substantially on emissions of greenhouse gases from cars, farms and factories.”

I don’t know who those “scientists” might be, but it sounds like they’re assigning blame for an effect that isn’t observed. Real scientists wouldn’t do that. Only politicians would.

Clearly, something is going on, but what it is, what its extent is, and what is causing it is anything but clear.

In the data depicted above, the results from global climate modeling do not look at all like the behavior of a chaotic system. The data from observations, however, do look like what we typically get from a chaotic system. Stuff moves constantly. On short time scales it shows apparent trends. On longer time scales, however, the trends tend to evaporate.

No wonder observers like Steven Pacala, who is Frederick D. Petrie Professor in Ecology and Evolutionary Biology at Princeton University and a board member at Hamilton Insurance Group, Ltd., are led to say (as quoted in the article): “Climate change makes the historical record of extreme weather an unreliable indicator of current risk.”

When you’re dealing with a chaotic system, the longer the record you’re using, the less predictive power it has.

Duh!

Another point made in the WSJ article that I thought was hilarious involved prediction of hurricanes in the Persian Gulf.

According to the article, “Such cyclones … have never been observed in the Persian Gulf … with new conditions due to warming, some cyclones could enter the Gulf in the future and also form in the Gulf itself.”

This sounds a lot like a tongue-in-cheek comment I once heard from astronomer Carl Sagan about predictions of life on Venus. He pointed out that when astronomers observe Venus, they generally just see a featureless disk. Science fiction writers had developed a chain of inferences that led them from that observation of a featureless disk to imagining total cloud cover, then postulating underlying swamps teeming with life, and culminating with imagining the existence of Venusian dinosaurs.

Observation: “We can see nothing.”

Conclusion: “There are dinosaurs.”

Sagan was pointing out that, though it may make good science fiction, that is bad science.

The WSJ reporters, Bradley Hope and Nicole Friedman, went from “No hurricanes ever seen in the Persian Gulf” to “Future hurricanes in the Persian Gulf” by the same sort of logic.

The kind of specious misinformation represented by the WSJ article confuses folks who have genuine concerns about the environment. Before politicians like Al Gore hijacked the environmental narrative, deflecting it toward climate change, folks paid much more attention to the real environmental issue of pollution.

Insurance losses from extreme weather events
Actual insurance losses due to catastrophic weather events show a disturbing trend.

The one bit of information in the WSJ article that appears prima facie disturbing is contained in the graph at right.

The graph shows actual insurance losses due to catastrophic weather events increasing rapidly over time. The article draws the inference that this trend is caused by human-induced climate change.

That’s quite a stretch, considering that there are obvious alternative explanations for this trend. The most likely alternative is the possibility that folks have been building more stuff in hurricane-prone areas. With more expensive stuff there to get damaged, insurance losses will rise.

Again: duh!

Invoking Occam’s Razor (prefer the simplest of alternative explanations), we tend to favor the building-boom explanation over human-induced climate change.

In summary, I conclude that the 3 October article is poor reporting that draws conclusions that are likely false.

Notes from 4 October

Don’t try to find the 3 October WSJ article online. I spent a couple of hours this morning searching for it, and came up empty. The closest I was able to get was a draft version that I found by searching on Bradley Hope’s name. It did not appear on WSJ‘s public site.

Apparently, WSJ‘s editors weren’t any more impressed with the article than I was.

The 4 October issue presents a corroboration of my alternative explanation of the trend in the insurance-loss data: it’s due to a build-up of expensive real estate in areas prone to catastrophic weather events.

In a half-page exposé entitled “Hurricane Costs Grow as Population Shifts,” Kara Dapena reports that, “From 1980 to 2017, counties along the U.S. shoreline that endured hurricane-strength winds from Florence in September experienced a surge in population.”

In the end, this blog posting serves as an illustration of four points I tried to make last month. Specifically, on 19 September I published a post entitled: “Noble Whitefoot or Lying Blackfoot?” in which I laid out four principles to use when trying to determine if the news you’re reading is fake. I’ll list them in reverse of the order I used in last month’s posting, partly to illustrate that there is no set order for them:

  • Nobody gets it completely right – In the 3 October WSJ story, the reporters got snookered by the climate-change political lobby. That article, taken at face value, has to be stamped with the label “fake news.”
  • Does it make sense to you? – The 3 October fake-news article set off my BS detector by making a number of statements that did not make sense to me.
  • Comparison shopping for ideas – Assertions in the suspect article contradicted numerous other sources.
  • Consider your source – The article’s source (The Wall Street Journal) is one that I normally trust. Otherwise, I likely never would have seen it, since I don’t bother listening to anyone I catch in a lie. My faith in the publication was restored when, the next day, they featured an article that corrected the misinformation.

And, You Thought Global Warming was a BAD Thing?

Ice skaters on the frozen Thames river in 1677

10 March 2017 – ‘Way back in the 1970s, when I was an astrophysics graduate student, I was hot on the trail of why solar prominences have the shapes we observe them to have. Being a good little budding scientist, I spent most of my waking hours in the library poring over old research notes, from the (at that time barely existing) current solar research back to the beginning of time. Or, at least to the invention of the telescope.

The fact that solar prominences are closely associated with sunspots led me to study historical measurements of sunspots. Of course, I quickly ran across two well-known anomalies known as the Maunder and Spörer minima. These were periods, centuries ago, when sunspots practically disappeared for decades at a time. Astronomers of the time commented on it, but hadn’t a clue as to why.

The idea that sunspots could disappear for extended periods is not really surprising. The Sun is well known to be a variable star whose surface activity varies on a more-or-less regular 11-year cycle (22 years if you count the fact that the magnetic polarity reverses after every minimum). The idea that any such oscillator can drop out once in a while isn’t hard to swallow.

Besides, when Mommy Nature presents you with an observable fact, it’s best not to doubt the fact, but to ask “Why?” That leads to much more fun research and interesting insights.

More surprising (at the time) was the observed correlation between the Maunder and Spörer minima and a period of anomalously cold temperatures throughout Europe known as the “Little Ice Age.” Interesting effects of the Little Ice Age included the invention of buttons to make winter garments more effective, advances of glaciers in the mountains, ice skating on rivers that previously never froze at all, and the abandonment of Viking settlements in Greenland.

And, crop failures. Can’t forget crop failures! Marie Antoinette’s famous “Let ’em eat cake” faux pas was triggered by consistent failures of the French wheat harvest.

The moral of the Little Ice Age story is:

Global Cooling = BAD

The converse conclusion:

Global Warming = GOOD

seems less well documented. A Medieval Warm Period from about 950-1250 did correlate with fairly active times for European culture. Similarly, the Roman Warm Period (250 BCE – 400 CE) saw the rise of the Roman civilization. So, we can tentatively conclude that global warming is generally NOT bad.

Sunspots as Markers

The reason it was surprising to see sunspot minima coincide with cool temperatures was that, at the time, astronomers fantasized that sunspots were like clouds that blocked radiation leaving the Sun. Folks assumed that more clouds meant more blocking of radiation, and cooler temperatures on Earth.

Careful measurements quickly put that idea into its grave with a stake through its heart! The reason is another feature of sunspots, which the theory conveniently forgot: they’re surrounded by relatively bright areas (called faculae) that pump out radiation at an enhanced rate. It turns out that the faculae associated with a sunspot easily make up for the dimming effect of the spot itself.

That’s why we carefully measure details before jumping to conclusions!

Anyway, the best solar-output (irradiance) research I was able to find was by Charles Greeley Abbot, who, as Director of the Smithsonian Astrophysical Observatory from 1907 to 1944, assembled an impressive decades-long series of meticulous measurements of the total radiation arriving at Earth from the Sun. He also attempted to correlate these measurements with weather records from various cities.

Blinded by a belief that solar activity (as measured by sunspot numbers) would anticorrelate with solar irradiation and therefore Earthly temperatures, he was dismayed to be unable to make sense of the combined data sets.

By simply throwing out the assumptions, I was quickly able to see that the only correlation in the data was that temperatures more-or-less positively correlated with sunspot numbers and solar irradiation measurements. The resulting hypothesis was that sunspots are a marker for increased output from the Sun’s core. Below a certain level there are no spots. As output increases above the trigger level, sunspots appear and then increase with increasing core output.
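A toy version of that hypothesis (all numbers invented for illustration, not fitted to any data) treats the sunspot number as a thresholded function of core output:

```python
def sunspot_number(core_output, trigger=0.8, gain=200.0):
    # Hypothetical threshold model: no spots at all below the trigger
    # level, then spot counts rising with core output above it.
    return max(0.0, gain * (core_output - trigger))

# Below the trigger: a Maunder-like minimum with zero spots.
# Above it: spots increase with increasing core output.
for output in (0.70, 0.79, 0.85, 1.00):
    print(output, sunspot_number(output))
```

The key feature is the flat zero below the trigger: extended stretches of “no spots” need not mean the Sun turned off, only that its output dipped below the level at which spots appear.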

The conclusion is that the Little Ice Age corresponded with a long period of reduced solar-core output, and the Maunder and Sporer minima are shorter periods when the core output dropped below the sunspot-trigger level.

So, we can conclude (something astronomers have known for decades if not centuries) that the Sun is a variable star. (The term “solar constant” is an oxymoron.) Second, we can conclude that variations in solar output have a profound effect on Earth’s climate. Those conclusions are neither surprising nor in doubt.

We’re also on fairly safe ground to say that (within reason) global warming is a good thing. At least it’s pretty clearly better than global cooling!