So, You Thought It Was About Climate Change?

Smog over Warsaw
Air pollution over Warsaw center city in winter. Piotr Szczepankiewicz / Shutterstock

Sorry about failing to post to this blog last week. I took sick and just couldn’t manage it. This is the entry I started for 10 April, but couldn’t finish until now.

17 April 2019 – I had a whole raft of things to talk about in this week’s blog posting, some of which I really wanted to cover for various reasons, but I couldn’t resist an excuse to bang this old “environmental pollution” drum once again.

A Zoë Schlanger-authored article published on 2 April 2019 by the World Economic Forum in collaboration with Quartz, entitled “The average person in Europe loses two years of their life due to air pollution,” crossed my desk this morning (8 April 2019). It was important to me because environmental pollution is an issue I’ve been obsessed with since the 1950s.

The Setup

One of my earliest memories is of my father taking delivery of an even-then-ancient 26-foot lifeboat (I think it was from an ocean liner, though I never really knew where it came from), which he planned to convert to a small cabin cruiser. I was amazed when, with no warning to me, this great, whacking flatbed trailer backed over our front lawn and deposited this thing that looked like a miniature version of Noah’s Ark.

It was double-ended – meaning it had a prow-shape at both ends – and was pretty much empty inside. That is, it had benches for survivors to sit on and fittings for oarlocks (I vaguely remember oarlocks actually being in place, but my memory from over sixty years ago is a bit hazy), but little else. No decks. No superstructure. Maybe some grates in the bottom to keep people’s feet out of the bilge, but that’s about it.

My father spent a year or so installing lower decks, upper decks, a cabin with bunks, a head, and a small galley, and a straight-six gasoline engine for propulsion. I sorta remember the keel already having been fitted for a propeller shaft and rudder, which would class the boat as a “launch” rather than a simple lifeboat, but I never heard it called that.

Finally, after multiple years of reconstruction, the thing was ready to dump into the water to see if it would float. (Wooden boats never float when you first put them in the water. The planks have to absorb water and swell up to tighten the joints. Until then, they leak like sieves.)

The water my father chose to dump this boat into was the Seekonk River in nearby Providence, Rhode Island. It was a momentous day in our family, so my mother shepherded my big sister and me around while my father stressed out about getting the deed done.

We won’t talk about the day(s) the thing spent on the tiny shipway off Gano Street where the last patches of bottom paint were applied over where the boat’s cradle had supported its hull while under construction, and the last little forgotten bits were fitted and checked out before it was launched.

While that was going on, I spent the time playing around the docks and frightening my mother with my antics.

That was when I noticed the beautiful rainbow sheen covering the water.

Somebody told me it was called “iridescence” and was caused by the whole Seekonk River being covered by an oil slick. The oil came from the constant movement of oil-tank ships delivering liquid dreck to the oil refinery and tank farm upstream. The stuff was getting dumped into the water and flowing down to help turn Narragansett Bay, which takes up half the state to the south, into one vast combination open sewer and toxic-waste dump.

That was my introduction to pollution.

It made my socks rot every time I accidentally or reluctantly-on-purpose dipped any part of my body into that cesspool.

It was enough to gag a maggot!

So when, in the late 1960s, folks started yammering on about pollution, my heartfelt reaction was: “About f***ing time!”

I did not join the “Earth Day” protests that started in 1970, though. Previously, I’d observed the bizarre antics surrounding the anti-war protests of the middle-to-late 1960s, and saw the kind of reactions they incited. My friends and I had been a safe distance away leaning on an embankment blowing weed and laughing as less-wise classmates set themselves up as targets for reactionary authoritarians’ ire.

We’d already learned that the best place to be when policemen suit up for riot patrol is someplace a safe distance away.

We also knew the protest organizers – they were, after all, our classmates in college – and smiled indulgently as they worked up their resumes for lucrative careers in activist management. There’s more than one way to make a buck!

Bohemians, beatniks, hippies, or whatever term du jour you wanted to call us just weren’t into the whole money-and-power trip. We had better, mellower things to do than march around carrying signs, shouting slogans, and getting our heads beaten in for our efforts. So, when our former friends, the Earth-Day organizers, wanted us to line up, we didn’t even bother to say “no.” We just turned and walked away.

I, for one, was in the midst of changing tracks from English to science. I’d already tried my hand at writing, but found that, while I was pretty good at putting sentences together in English, then stringing them into paragraphs and stories, I really had nothing worthwhile to write about. I’d just not had enough life experience.

Since physics was basic to all the other stuff I’d been interested in – for decades – I decided to follow that passion and get a good grounding in the hard sciences, starting with physics. By the late seventies, I had learned what science was all about, had developed a feel for how it was done, and knew what the results looked like. I was especially deep into astrophysics in general and solar physics in particular.

As time went on, the public noises I heard about environmental concerns began to sound more like political posturing and less like scientific discourse, especially as they chose to ignore the variability of the Sun, which we astronomers knew was what made everything work.

By the turn of the millennium, scholarly reports generally showed no observations that backed up the global-warming rhetoric. Instead, they featured ambiguous results that showed chaotic evolution of climate with no real long-term trends.

Those of us interested in the history of science also realized that warm periods coincided with generally good conditions for humans, while cool periods could be pretty rough. So, what was wrong with a little global warming when you needed it?

A disturbing trend, however, was that these reports began to feature a boilerplate final paragraph saying, roughly: “climate change is a real danger and caused by human activity.” They all featured this paragraph, suspiciously almost word for word, despite there being little or nothing in the research results to support such a conclusion.

Since nothing in the rest of the report provided any basis for that final paragraph, it was clearly a non sequitur added for non-scientific reasons. Something was terribly wrong with climate research.

The penny finally dropped in 2006 when former Vice President Al Gore (already infamous for having attempted to take credit for developing the Internet) produced his hysteria-inducing movie An Inconvenient Truth along with the splashing about of the laughable “hockey-stick graph” (created by Michael Mann and colleagues, and nicknamed by Jerry Mahlman). The graph, in particular, was based on a stitching together of historical data for proxies of global temperature with a speculative projection of a future exponential rise in global temperatures. That is something respectable scientists are specifically trained not to do, although it’s a favorite tactic of psycho-ceramics.

Air Pollution

By that time, however, so much rhetoric had been invested in promoting climate-change fear and convincing the media that it was human-induced, that concerns about plain old pollution (which anyone could see) seemed dowdy and uninteresting by comparison.

One of the reasons pollution seemed old news then (and still does now) is that in civilized countries (generally those run as democracies) great strides had already been made in beating it down. A case in point is the image at right.

East/West Europe Pollution
A snapshot of particulate pollution across Europe on Jan. 27, 2018. (Apologies to Quartz [ https://qz.com/1192348/europe-is-divided-into-safe-and-dangerous-places-to-breathe/ ] from whom this image was shamelessly stolen.)

This image, which is a political map overlaid by a false-color map with colors indicating air-pollution levels, shows relatively mild pollution in Western Europe and much more severe levels in the more authoritarian-leaning countries of Eastern Europe.

While this map makes an important point about how poorly communist and other authoritarian-leaning regimes take care of the “soup” in which their citizens have to live, it doesn’t say a lot about the environmental state of the art more generally in Europe. We leave that for Zoë Schlanger’s WEF article, which begins:

“The average person living in Europe loses two years of their life to the health effects of breathing polluted air, according to a report published in the European Heart Journal on March 12.

“The report also estimates about 800,000 people die prematurely in Europe per year due to air pollution, or roughly 17% of the 5 million deaths in Europe annually. Many of those deaths, between 40 and 80% of the total, are due to air pollution effects that have nothing to do with the respiratory system but rather are attributable to heart disease and strokes caused by air pollutants in the bloodstream, the researchers write.

“‘Chronic exposure to enhanced levels of fine particle matter impairs vascular function, which can lead to myocardial infarction, arterial hypertension, stroke, and heart failure,’ the researchers write.”

The point is, while American politicians debate the merits of climate-change legislation, and European politicians seem to have knuckled under to IPCC climate-change rhetoric by wholeheartedly endorsing the 2015 Paris Agreement, the bigger and far more salient problem of environmental pollution is largely being ignored. This is despite its visible and immediate deleterious effects on human health, and the demonstrated effectiveness of government efforts to ameliorate it.

By the way, in the two decades between the time I first observed iridescence atop the waters of the Seekonk River and when I launched my own first boat in the 1970s, Narragansett Bay went from a potential Superfund site to a beautiful, clean playground for recreational boaters. That was largely due to the efforts of the Save the Bay volunteer organization. While their job is not (and never will be) completely finished, they can serve as a model for effective grassroots activism.

Why Diversity Rules

Diverse friends
A diverse group of people with different ages and nationalities having fun together. Rawpixel/Shutterstock

23 January 2019 – Last week two concepts reared their ugly heads that I’ve been banging on about for years. They’re closely intertwined, so it’s worthwhile to spend a little blog space discussing why they fit so tightly together.

Diversity is Good

The first idea is that diversity is good. It’s good in almost every human pursuit. I’m particularly sensitive to this, being someone who grew up with the idea that rugged individualism was the highest ideal.

Diversity, of course, is incompatible with individualism. Individualism is the cult of the one. “One” cannot logically be diverse. Diversity is a property of groups, and groups by definition consist of more than one.

Okay, set theory admits of groups with one or even no members, but those groups have a diversity “score” (Gini–Simpson index) of zero. To have any diversity at all, your group has to have at absolute minimum two members. The more the merrier (or diversitier).
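For the quantitatively inclined, here is a minimal sketch in Python of that “score,” assuming the Gini–Simpson index mentioned above is the measure we are using and that group members are labeled by whatever category of diversity you care about (the function name and example labels are mine, purely for illustration):

    from collections import Counter

    def gini_simpson(members):
        """Gini-Simpson diversity index: the probability that two members
        drawn at random (with replacement) belong to different categories.
        Zero for an empty or single-category group; it approaches 1 as the
        group spreads evenly across many categories."""
        counts = Counter(members)
        n = sum(counts.values())
        if n == 0:
            return 0.0
        return 1.0 - sum((c / n) ** 2 for c in counts.values())

    # A one-person "group" scores zero; a mixed group scores higher.
    print(gini_simpson(["physicist"]))                            # 0.0
    print(gini_simpson(["physicist", "engineer", "technician"]))  # ~0.667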

The idea that diversity is good came up in a couple of contexts over the past week.

First, I’m reading a book entitled Farsighted: How We Make the Decisions That Matter the Most by Steven Johnson, which I plan eventually to review in this blog. Part of the advice Johnson offers is that groups make better decisions when their membership is diverse. How they are diverse is less important than the extent to which they are diverse. In other words, this is a case where quantity is more important than quality.

Second, I divided my physics-lab students into groups to perform their first experiment. We break students into groups to prepare them for working in teams after graduation. Unlike when I was a student fifty years ago, scientific research and technology development is now always done in teams.

When I was a student, research was (supposedly) done by individuals working largely in isolation. I believe it was Willard Gibbs (I have no reliable reference for this quote) who said: “An experimental physicist must be a professional scientist and an amateur everything else.”

By this he meant that building a successful physics experiment requires the experimenter to apply so many diverse skills that it is impossible to have professional mastery of all of them. He (or she) must have an amateur’s ability to pick up novel skills in order to reach the next goal in their research. They must be ready to work outside their normal comfort zone.

That asked a lot from an experimental researcher! Individuals who could do that were few and far between.

Today, the fast pace of technological development has reduced that pool of qualified individuals essentially to zero. It certainly is too small to maintain the pace society expects of the engineering and scientific communities.

Tolkien’s “unimaginable hand and mind of Feanor” puttering around alone in his personal workshop crafting magical things is inconceivable today. Marlowe’s Dr. Faustus character, who had mastered all earthly knowledge, is now laughable. No one person is capable of making a major contribution to today’s technology on their own.

The solution is to perform the work of technological research and development in teams with diverse skill sets.

In the sciences, theoreticians with strong mathematical backgrounds partner with engineers capable of designing machines to test the theories, and technicians with the skills needed to fabricate the machines and make them work.

Chaotic Universe

The second idea I want to deal with in this essay is that we live in a chaotic Universe.

Chaos is a property of complex systems. These are systems consisting of many interacting moving parts that show predictable behavior on short time scales, but eventually foil the most diligent attempts at long-term prognostication.

A pendulum, by contrast, is a simple system consisting of, basically, three parts: a massive weight (the “pendulum bob”) that hangs by a rod or string (the arm) from a fixed support. Simple systems usually do not exhibit chaotic behavior.

The solar system, consisting of a huge, massive star (the Sun), eight major planets and a host of minor planets, is decidedly not a simple system. Its behavior is borderline chaotic. I say “borderline” because the solar system seems well behaved on short time scales (e.g., millennia), but when viewed on time scales of millions of years does all sorts of unpredictable things.

For example, approximately four and a half billion years ago (a few tens of millions of years after the system’s initial formation) a Mars-sized planet collided with Earth, spalling off a mass of material that coalesced to form the Moon before the impactor ricocheted out of the solar system. That’s the sort of unpredictable event that happens in a chaotic system if you wait long enough.

The U.S. economy, consisting of millions of interacting individuals and companies, is wildly chaotic, which is why no investment strategy has ever been found to work reliably over a long time.
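If you want to see what that kind of unpredictability looks like numerically, here is a minimal sketch using the logistic map, a standard textbook toy model of chaos (it is not a model of the solar system or the economy, just about the simplest system that misbehaves the same way): two starting values differing by one part in a billion track each other for a while, then go their separate ways.

    def logistic_orbit(x0, r=4.0, steps=60):
        """Iterate the logistic map x -> r*x*(1-x), a classic chaotic system."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = logistic_orbit(0.400000000)  # two initial conditions differing
    b = logistic_orbit(0.400000001)  # by one part in a billion

    for step in (0, 10, 20, 30, 40, 50):
        print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}  "
              f"difference {abs(a[step] - b[step]):.2e}")

Short-term the two orbits are indistinguishable; by around step 30 they no longer have anything to do with each other, which is exactly the short-term-predictable, long-term-hopeless behavior described above.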

Putting It Together

The way these two ideas (diversity is good, and we live in a chaotic Universe) work together is that collaborating in diverse groups is the only way to successfully navigate life in a chaotic Universe.

An individual human being is so powerless that attempting anything but the smallest task is beyond his or her capacity. The only way to do anything of significance is to collaborate with others in a diverse team.

In the late 1980s my wife and I decided to build a house. To begin with, we had to decide where to build the house. That required weeks of collaboration (by our little team of two) to combine our experiences of different communities in the area where we were living, develop scenarios of what life might be like living in each community, and finally agree on which we might like the best. Then we had to find an architect to work with our growing team to design the building. Then we had to negotiate with banks for construction loans, bridge loans, and ultimate mortgage financing. Our architect recommended adding a prime contractor who had connections with carpenters, plumbers, electricians and so forth to actually complete the work. The better part of a year later, we had our dream house.

There’s no way I could have managed even that little project – building one house – entirely on my own!

In 2015, I ran across the opportunity to produce a short film for a film festival. I knew how to write a script, run a video camera, sew a costume, act a part, do the editing, and so forth. In short, I had all the skills needed to make that 30-minute film.

Did that mean I could make it all by my onesies? Nope! By the time the thing was completed, the list of cast and crew counted over a dozen people, each with their own job in the production.

By now, I think I’ve made my point. The take-home lesson of this essay is that if you want to accomplish anything in this chaotic Universe, start by assembling a diverse team, and the more diverse, the better!

The Scientific Method

Scientific Method Diagram
The scientific method assumes uncertainty.

9 January 2019 – This week I start a new part-time position on the faculty at Florida Gulf Coast University teaching two sections of General Physics laboratory. In preparation, I dusted off a posting to this blog from last Summer that details my take on the scientific method, which I re-edited to present to my students. I thought readers of this blog might profit by my posting the edited version. The original posting contrasted the scientific method of getting at the truth with the method used in the legal profession. Since I’ve been banging on about astrophysics and climate science, specifically, I thought it would be helpful to zero in again on how scientists figure out what’s really going on in the world at large. How do we know what we think we know?


While high-school curricula like to teach the scientific method as a simple step-by-step program, the reality is very much more complicated. The version they teach you in high school is a procedure consisting of five to seven steps, which pretty much look like this:

  1. Observation
  2. Hypothesis
  3. Prediction
  4. Experimentation
  5. Analysis
  6. Repeat

I’ll start by explaining how this program is supposed to work, then look at why it doesn’t actually work that way. It has to do with why the concept is so fuzzy that it’s not really clear how many steps should be included.

The Stepwise Program

It all starts with observation of things that go on in the World.

Newton’s law of universal gravitation started with the observation that when left on their own, most things fall down. That’s Newton’s falling-apple observation. Generally, the observation is so common that it takes a genius to ask the question: “why?”

Once you ask the question “why,” the next thing that happens is that your so-called genius comes up with some cockamamie explanation, called an “hypothesis.” In fact, there are usually several possible explanations that vary from the erudite to the thoroughly bizarre.

Through bitter experience, scientists have come to realize that no hypothesis is too whacko to be considered. It’s often the most outlandish hypothesis that proves to be right!

For example, the ancients tended to think in terms of objects somehow “wanting” to go downward as the least weird of explanations for gravity. The idea came from animism, which was the not-too-bizarre (to the ancients) idea that natural objects each have their own spirits, which animate their behavior: rocks are hard because their spirits resist being broken; they fall down when released because their spirits somehow like down better than up.

What we now consider the most-correctest explanation (that we live in a four-dimensional space-time continuum that is warped by concentrations of matter-energy so that objects follow paths that tend to converge with each other) wouldn’t have made any sense at all to the ancients. It, in fact, doesn’t make a whole lot of sense to anybody who hasn’t spent years immersing themselves in the subject of Einstein’s General Theory of Relativity.

Scientists then take all the hypotheses available, and use them to make predictions as to what happens next if you set up certain relevant situations, called “experiments.” An hypothesis works if its predictions match up with what Mommy Nature produces for results from the experiments.

Scientists then do tons of experiments testing different predictions of the hypotheses, then compare (the analysis step) the results, and eventually develop a warm, fuzzy feeling that one hypothesis does a better job of predicting what Mommy Nature does than do the others.

It’s important to remember that no scientist worth his or her salt believes that the currently accepted hypothesis is actually in any absolute sense “correct.” It’s just the best explanation among the ones we have on hand now.

That’s why the last step is to repeat the entire process ad nauseam.

While this long, drawn out process does manage to cover the main features of the scientific method, it fails in one important respect: it doesn’t boil the method down to its essentials.

Not boiling the method down to its essentials forces one to deal with all kinds of exceptions created by the extraneous, non-essential bits. There end up being more exceptions than rules. For example, the science-pedagogy website Science Buddies ends up throwing its hands in the air by saying: “In fact, there are probably as many versions of the scientific method as there are scientists!”

A More Holistic Approach

The much simpler explanation I’ve used for years to teach college students about the scientific method follows the diagram above. The pattern is quite simple, with only four components. It starts by setting up a set of initial conditions, then follows two complementary paths through to the results.

There are two ways to get from the initial conditions to the results. The first is to just set the whole thing up, and let Mommy Nature do her thing. The second is to think through your hypothesis (the model) to predict what it says Mommy Nature will come up with. If they match, you count your hypothesis as a success. If not, it’s wrong.

If you do that a bazillion times in a bazillion different ways, a really successful hypothesis (like General Relativity) will turn out right pretty much all of the time.
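As a toy illustration of those two complementary paths (not a real experiment), the sketch below lets a noisy simulated measurement stand in for Mommy Nature, asks the hypothesis d = ½gt² for its prediction under the same initial conditions, and counts how often the two agree within the assumed measurement uncertainty. The noise level and tolerance are made-up numbers, there only to make the comparison concrete.

    import random

    G = 9.81  # m/s^2

    def nature(t, noise=0.05):
        """Stand-in for Mommy Nature: the 'true' fall distance plus measurement noise."""
        return 0.5 * G * t**2 + random.gauss(0.0, noise)

    def hypothesis(t):
        """The model's prediction for the same initial conditions."""
        return 0.5 * G * t**2

    trials, successes, tolerance = 10_000, 0, 0.15  # tolerance ~ 3 sigma of the noise
    for _ in range(trials):
        t = random.uniform(0.1, 2.0)  # initial conditions: drop time in seconds
        if abs(nature(t) - hypothesis(t)) < tolerance:
            successes += 1

    print(f"hypothesis matched 'nature' in {successes}/{trials} trials")

A good hypothesis wins nearly every time; a wrong one fails the comparison often enough that you eventually stop believing it.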

Generally, if you’ve got a really good hypothesis but your experiment doesn’t work out right, you’ve screwed up somewhere. That means what you actually set up as the initial conditions wasn’t what you thought you were setting up. So, Mommy Nature (who’s always right) doesn’t give you the result you thought you should get.

For example, I was once (at a University other than this one) asked to mentor another faculty member who was having trouble building an experiment to demonstrate what he thought was an exception to Newton’s Second Law of Motion. It was based on a classic experiment called “Atwood’s Machine.” He couldn’t get the machine to give the results he was convinced he should get.

I immediately recognized that he’d made a common mistake novice physicists often make. I tried to explain it to him, but he refused to believe me. Then, I left the room.

I walked away because, despite his conviction, Mommy Nature wasn’t going to do what he expected her to. He persisted in believing that there was something wrong with his experimental apparatus. It was his hypothesis, instead.
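The sketch below doesn’t try to guess what his particular mistake was. It just shows the textbook Atwood’s Machine prediction alongside one common real-world correction (a pulley with non-negligible rotational inertia), to illustrate how an apparent “exception to Newton’s Second Law” is usually an initial condition you didn’t realize you’d set up. The masses and pulley numbers are illustrative only.

    G = 9.81  # m/s^2

    def atwood_ideal(m1, m2):
        """Ideal Atwood's Machine: massless, frictionless pulley and string.
        Returns the acceleration of the two hanging masses (m1 descending)."""
        return (m1 - m2) * G / (m1 + m2)

    def atwood_real_pulley(m1, m2, pulley_inertia, pulley_radius):
        """Same machine, but the pulley's rotational inertia I soaks up some of
        the net force: a = (m1 - m2) * g / (m1 + m2 + I / r^2)."""
        return (m1 - m2) * G / (m1 + m2 + pulley_inertia / pulley_radius**2)

    m1, m2 = 0.55, 0.50  # kg
    print(atwood_ideal(m1, m2))  # ~0.467 m/s^2
    print(atwood_real_pulley(m1, m2, pulley_inertia=2.0e-4, pulley_radius=0.05))
    # ~0.434 m/s^2 -- the "same" experiment, measurably slower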

Anyway, the way this method works is that you look for patterns in what Mommy Nature does. Your hypothesis is just a description of some part of Mommy Nature’s pattern. Scientific geniuses are folks who are really, really good at recognizing the patterns Mommy Nature uses.

If your scientific hypothesis is wrong (meaning it gives wrong results), “So, What?”

Most scientific hypotheses are wrong! They’re supposed to be wrong most of the time.

Finding that some hypothesis is wrong is no big deal. It just means it was a dumb idea, and you don’t have to bother thinking about that dumb idea anymore.

Alien abductions get relegated to entertainment for the entertainment starved. Real scientists can go on to think about something else, like the kinds of conditions leading to development of living organisms and why we don’t see alien visitors walking down Fifth Avenue.

(FYI: the current leading hypothesis is that the distances from there to here are so vast that anybody smart enough to figure out how to make the trip has better things to do.)

For scientists “Gee, it looks like … ” is usually as good as it gets!

Don’t Tell Me What to Think!

Your Karma ran over My Dogma
A woman holds up a sign while participating in the annual King Mango Strut parade in Miami, FL on 28 December 2014. BluIz60/Shutterstock

2 January 2019 – Now that the year-end holidays are over, it’s time to get back on my little electronic soapbox to talk about an issue that scientists have had to fight with authorities over for centuries. It’s an issue that has been around for millennia, but before a few centuries ago there weren’t scientists around to fight over it. The issue rears its ugly head under many guises. Most commonly today it’s discussed as academic freedom, or freedom of expression. You might think it was definitively won for all Americans in 1791 with the ratification of the first ten amendments to the U.S. Constitution and for folks in other democracies soon after, but you’d be wrong.

The issue is wrapped up in one single word: dogma.

According to the Oxford English Dictionary, the word dogma is defined as:

“A principle or set of principles laid down by an authority as incontrovertibly true.”

In 1600 CE, Giordano Bruno was burned at the stake for insisting that the stars were distant suns surrounded by their own planets, raising the possibility that these planets might foster life of their own, and that the universe is infinite and could have no “center.” These ideas directly controverted the dogma laid down as incontrovertibly true by both the Roman Catholic and Protestant Christian churches of the time.

Galileo Galilei, typically thought of as the poster child for resistance to dogma, was only placed under house arrest (for the rest of his life) for advocating the less radical Copernican vision of the solar system.

Nicholas Copernicus, himself, managed to fly under the Catholic Church’s radar for more than three decades by the simple tactic of not publishing his heliocentric model. Starting around 1510, he privately communicated it to his friends, who then passed it to some of their friends, etc. His signature work, Dē revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres), in which he laid it out for all to see, wasn’t published until 1543, the year of his death, when he’d already escaped beyond the reach of earthly authorities.

If this makes it seem that astrophysicists have been on the front lines of the war against dogma since there was dogma to fight against, that’s almost certainly true. Astrophysicists study stuff relating to things beyond the Earth, and that traditionally has been a realm claimed by religious authorities.

That claim largely started with Christianity, specifically the Roman Catholic Church. Ancient religions, which didn’t have delusions that they could dominate all of human thought, didn’t much care what cockamamie ideas astrophysicists (then just called “philosophers”) came up with. Thus, Aristarchus of Samos suffered no ill consequences (well, maybe a little, but nothing life – or even career – threatening) from proposing the same ideas that Galileo was arrested for championing some eighteen centuries later.

Fast forward to today and we have a dogma espoused by political progressives called “climate change.” It used to be called “global warming,” but that term was laughed down decades ago, though the dogma’s still the same.

The United-Nations-funded Intergovernmental Panel on Climate Change (IPCC) has become “the Authority” laying down the principles that Earth’s climate is changing and that change constitutes a rapid warming caused by human activity. The dogma also posits that this change will continue uninterrupted unless national governments promulgate drastic laws to curtail human activity.

Sure sounds like dogma to me!

Once again, astrophysicists are on the front lines of the fight against dogma. The problem is that the IPCC dogma treats the Sun (which is what powers Earth’s climate in the first place) as, to all intents and purposes, a fixed star. That is, it assumes climate change arises solely from changes in Earthly conditions, then assumes we control those conditions.

Astrophysicists know that just ain’t so.

First, stars generally aren’t fixed. Most stars are variable stars. In fact, all stars are variable on some time scale. They all evolve over time scales of millions or billions of years, but that’s not the kind of variability we’re talking about here.

The Sun is in the evolutionary phase called “main sequence,” where stars evolve relatively slowly. That’s the source of much “invariability” confusion. Main sequence stars, however, go through periods where they vary in brightness more or less violently on much shorter time scales. In fact, most main sequence stars exhibit this kind of behavior to a greater or lesser extent at any given time – like now.

So, a modern (as in post-nineteenth-century) astrophysicist would never make the bald assumption that the Sun’s output was constant. Statistically, the odds are against it. Most stars are variables; the Sun is like most stars; so the Sun is probably a variable. In fact, it’s well known to vary with a fairly stable period of roughly 22 years (the 11-year “sunspot cycle” is actually only a half cycle).

A couple of centuries ago, astronomers assumed (with no evidence) that the Sun’s output was constant, so they started trying to measure this assumed “solar constant.” Charles Greeley Abbot, who served as the Secretary of the Smithsonian Institution from 1928 to 1944, oversaw the first long-term study of solar output.

His observations were necessarily ground based and the variations observed (amounting to 3-5 percent) have been dismissed as “due to changing weather conditions and incomplete analysis of his data.” That despite the monumental efforts he went through to control such effects.

In the 1970s I did an independent analysis of his data and realized that part of the problem he had stemmed from a misunderstanding of the relationship between sunspots and solar irradiance. At the time, it was assumed that sunspots were akin to atmospheric clouds. That is, scientists assumed they affected overall solar output by blocking light, thus reducing the total power reaching Earth.

Thus, when Abbot’s observations showed the opposite correlation, they were assumed to be erroneous. His purported correlations with terrestrial weather observations were similarly confused, and thus dismissed.

Since then, astrophysicists have realized that sunspots are more like a symptom of increased internal solar activity. That is, increases in sunspot activity positively correlate with increases in the internal dynamism that generates the Sun’s power output. Seen in this light, Abbot’s observations and analysis make a whole lot more sense.

We have ample evidence, from historical observations of climate changes correlating with observed variations in sunspot activity, that there is a strong connection between climate and solar variability. Most notable is the fact that the Sporer and Maunder anomalies (times when sunspot activity all but disappeared for extended periods) correlate with historically cold periods in Earth’s history. There was a similar period of low solar activity (as measured by sunspot numbers) from about 1790 to 1830, called the “Dalton Minimum,” that similarly depressed global temperatures and gave an anomalously low baseline for the run-up to the Modern Maximum.

For astrophysicists, the phenomenon of solar variability is not in doubt. The questions that remain are how large the variations are, how closely they correlate with climate change, and whether they are predictable.

Studies of solar variability, however, run afoul of the IPCC dogma. For example, in May of 2017 an international team of solar dynamicists led by Valentina V. Zharkova at Northumbria University in the U.K. published a paper entitled “On a role of quadruple component of magnetic field in defining solar activity in grand cycles” in the Journal of Atmospheric and Solar-Terrestrial Physics. Their research indicates that the Sun, while its activity has been on the upswing for an extended period, should be heading into a quiescent period starting with the next maximum of the 11-year sunspot cycle in around five years.

That would indicate that the IPCC prediction of exponentially increasing global temperatures due to human-caused increasing carbon-dioxide levels may be dead wrong. I say “may be dead wrong” because this is science, not dogma. In science, nothing is incontrovertible.

I was clued in to this research by my friend Dan Romanchik, who writes a blog for amateur radio enthusiasts. Amateur radio enthusiasts care about solar activity because sunspots are, in fact, caused by magnetic fields at the Sun’s surface. Those magnetic fields affect Earth by deflecting cosmic rays away from the inner solar system, which is where we live. Those cosmic rays are responsible for the Kennelly–Heaviside layer of ionized gas in Earth’s upper atmosphere (roughly 90–150 km, or 56–93 mi, above the ground).

Radio amateurs bounce signals off this layer to reach distant stations beyond line of sight. When solar activity is weak this layer drops to lower altitudes, reducing the effectiveness of this technique (often called “DXing”).

In his post of 16 December 2018, Dan complained: “If you operate HF [the high-frequency radio band], it’s no secret that band conditions have not been great. The reason, of course, is that we’re at the bottom of the sunspot cycle. If we’re at the bottom of the sunspot cycle, then there’s no way to go but up, right? Maybe not.

“Recent data from the NOAA’s Space Weather Prediction Center seems to suggest that solar activity isn’t going to get better any time soon.”

After discussing the NOAA prediction, he went on to further complain: “And, if that wasn’t depressing enough, I recently came across an article reporting on the research of Prof. Valentina Zharkova, who is predicting a grand minimum of 30 years!”

He included a link to a presentation Dr. Zharkova made at the Global Warming Policy Foundation last October in which she outlined her research and pointedly warned that the IPCC dogma was totally wrong.

I followed the link, viewed her presentation, and concluded two things:

  1. The research methods she used are some that I’m quite familiar with, having used them on numerous occasions; and

  2. She used those techniques correctly, reaching convincing conclusions.

Her results seem well aligned with a meta-analysis published by the Cato Institute in 2015, which I mentioned in my posting of 10 October 2018 to this blog. The Cato meta-analysis of observational data indicated a much reduced rate of global warming compared to that predicted by IPCC models.

The Zharkova-model data covers a much wider period (millennia-long time scale rather than decades-long time scale) than the Cato data. It’s long enough to show the Medieval Warm Period as well as the Little Ice Age (Maunder minimum) and the recent warming trend that so fascinates climate-change activists. Instead of a continuation of the modern warm period, however, Zharkova’s model shows an abrupt end starting in about five years with the next maximum of the 11-year sunspot cycle.

Don’t expect a stampede of media coverage disputing the IPCC dogma, however. A host of politicians (especially in the U.S. Democratic Party) have hung their hats on that dogma, as have an array of governments who’ve sold policy decisions based on it. The political left has made an industry of vilifying anyone who doesn’t toe the “climate change” line, calling them “climate deniers” with suspect intellectual capabilities and moral characters.

Again, this sounds a lot like dogma. It’s the same tactic that the Inquisition used against Bruno and Galileo before escalating to more brutal methods.

Supporters of Zharkova’s research labor under a number of disadvantages. Of course, there’s the obvious disadvantage that Zharkova’s thick Ukrainian accent limits her ability to explain her work to those who don’t want to listen. She would not come off well on the evening news.

A more important disadvantage is the abstruse nature of the applied mathematics techniques used in the research. How many political reporters and, especially, commentators are familiar enough with the mathematical technique of principal component analysis to understand what Zharkova’s talking about? This stuff makes macroeconomics modeling look like kiddie play!
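For readers who are curious, principal component analysis itself isn’t black magic; it boils down to finding the directions along which a data set varies the most. Here is a minimal sketch that assumes nothing about Zharkova’s actual data or code: it mixes two hidden signals into five noisy channels, then recovers the fact that only two components matter.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "observations": 200 samples of two underlying signals mixed into 5 channels.
    t = np.linspace(0, 8 * np.pi, 200)
    sources = np.column_stack([np.sin(t), np.sin(0.5 * t)])
    mixing = rng.normal(size=(2, 5))
    data = sources @ mixing + 0.1 * rng.normal(size=(200, 5))

    # Principal component analysis via singular value decomposition.
    centered = data - data.mean(axis=0)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    explained = s**2 / np.sum(s**2)

    print("fraction of variance per component:", np.round(explained, 3))
    # The first two components carry nearly all the variance, recovering the
    # fact that only two underlying signals were mixed into the five channels.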

But, the situation’s even worse because to really understand the research, you also need an appreciation of stellar dynamics, which is based on magnetohydrodynamics. How many CNN commentators even know how to spell that?

Of course, these are all tools of the trade for astrophysicists. They’re as familiar to them as a hammer or a saw is to a carpenter.

For those in the media, on the other hand, it’s a lot easier to take the “most scientists agree” mantra at face value than to embark on the nearly hopeless task of re-educating themselves to understand Zharkova’s research. That goes double for politicians.

It’s entirely possible that “most” scientists might agree with the IPCC dogma, but those in a position to understand what’s driving Earth’s climate do not agree.

Legal vs. Scientific Thinking

Scientific Method Diagram
The scientific method assumes uncertainty.

29 August 2018 – With so much controversy in the news recently surrounding POTUS’ exposure in the Mueller investigation into Russian meddling in the 2016 Presidential election, I’ve been thinking a whole lot about how lawyers look at evidence versus how scientists look at evidence. While I’ve only limited background with legal matters (having an MBA’s exposure to business law), I’ve spent a career teaching and using the scientific method.

While high-school curricula like to teach the scientific method as a simple step-by-step program, the reality is very much more complicated. The version they teach you in high school consists of five to seven steps, which pretty much look like this:

  1. Observation
  2. Hypothesis
  3. Prediction
  4. Experimentation
  5. Analysis
  6. Repeat

I’ll start by explaining how this program is supposed to work, then look at why it doesn’t actually work that way. It has to do with why the concept is so fuzzy that it’s not really clear how many steps should be included.

It all starts with observation of things that go on in the World. Newton’s law of universal gravitation started with the observation that when left on their own, most things fall down. That’s Newton’s falling-apple observation. Generally, the observation is so common that it takes a genius to ask the question “why.”

Once you ask the question “why,” the next thing that happens is that your so-called genius comes up with some cockamamie explanation, called an “hypothesis.” In fact, there are usually several explanations that vary from the erudite to the thoroughly bizarre.

Through bitter experience, scientists have come to realize that no hypothesis is too whacko to be considered. It’s often the most outlandish hypothesis that proves to be right!

For example, the ancients tended to think in terms of objects somehow “wanting” to go downward as the least weird of explanations for gravity. It came from animism, which is the not-too-bizarre (to the ancients) idea that natural objects each have their own spirits, which animate their behavior. Rocks are hard because their spirits resist being broken. They fall down when released because their spirits somehow like down better than up.

What we now consider the most-correctest explanation, that we live in a four-dimensional space-time continuum that is warped by concentrations of matter-energy so that objects follow paths that tend to converge with each other, wouldn’t have made any sense at all to the ancients. It, in fact, doesn’t make a whole lot of sense to anybody who hasn’t spent years immersing themselves in the subject of Einstein’s General Theory of Relativity.

Scientists then take all the hypotheses, and use them to make predictions as to what happens next if you set up certain relevant situations, called “experiments.” An hypothesis works if its predictions match up with what Mommy Nature produces for results of the experiments.

Scientists then do tons of experiments testing different predictions of the hypotheses, then compare (the analysis step) the results and eventually develop a warm, fuzzy feeling that one hypothesis does a better job of predicting what Mommy Nature does than do the others.

It’s important to remember that no scientist worth his or her salt believes that the currently accepted hypothesis is actually in any absolute sense “correct.” It’s just the best explanation among the ones we have on hand now.

That’s why the last step is to repeat the entire process ad nauseam.

While this long, drawn out process does manage to cover the main features of the scientific method, it fails in one important respect: it doesn’t boil the method down to its essentials.

Not boiling it down to essentials forces one to deal with all kinds of exceptions created by the extraneous, non-essential bits. There end up being more exceptions than rules. For example, science pedagogy website Science Buddies ends up throwing its hands in the air by saying: “In fact, there are probably as many versions of the scientific method as there are scientists!”

The much simpler explanation I’ve used for years to teach college students about the scientific method follows the diagram above. The pattern is quite simple, with only four components. It starts by setting up a set of initial conditions, then follows through to the results.

There are two ways to get from the initial conditions to the results. The first is to just set the whole thing up, and let Mommy Nature do her thing. The second is to think through your hypothesis to predict what it says Mommy Nature will come up with. If they match, you count your hypothesis as a success. If not, it’s wrong.

You do that a bazillion times in a bazillion different ways, and a really successful hypothesis (like General Relativity) will turn out right pretty much all of the time.

Generally, if you’ve got a really good hypothesis but your experiment doesn’t work out right, you’ve screwed up somewhere. That means what you actually set up as the initial conditions wasn’t what you thought you were setting up. So, Mommy Nature (who’s always right) doesn’t give you the result you thought you should get.

For example, I was once asked to mentor another faculty member who was having trouble building an experiment to demonstrate what he thought was an exception to Newton’s Second Law of Motion. It was based on a classic experiment called “Atwood’s Machine.”

I immediately recognized that he’d made a common mistake novice physicists often make. I tried to explain it to him, but he refused to believe me. Then, I left the room.

I walked away because, despite his conviction, Mommy Nature wasn’t going to do what he expected her to. He kept believing that there was something wrong with his experimental apparatus. It was his hypothesis, instead.

Anyway, the way this all works is that you look for patterns in what Mommy Nature does. Your hypothesis is just a description of some part of Mommy Nature’s pattern. Scientific geniuses are folks who are really, really good at recognizing the patterns Mommy Nature uses.

That is NOT what our legal system does.

Not by a LONG shot!

The Legal Method

While both scientific and legal thinking methods start from some initial state and move to some final conclusion, the processes for getting from A to B differ in important ways.

The Legal Method
In legal thinking, a chain of evidence is used to get from criminal charges to a final verdict.

First, while the hypothesis in the scientific method is assumed to be provisional, the legal system is based on coming to a definite explanation of events that is in some sense “correct.” The results of scientific inquiry, on the other hand, are accepted as “probably right, maybe, for now.”

That ain’t good enough in legal matters. The verdict of a criminal trial, for example, has to be true “beyond a reasonable doubt.”

Second, in legal matters the path from the initial conditions (the “charges”) to the results (the “verdict”) is linear. It has one path: through a chain of evidence. There may be multiple bits of evidence, but you can follow them through from a definite start to a definite end.

The third way the legal method differs from the scientific method is what I call the “So, What?” factor.

If your scientific hypothesis is wrong, meaning it gives wrong results, “So, What?”

Most scientific hypotheses are wrong! They’re supposed to be wrong most of the time.

Finding that some hypothesis is wrong is no big deal. It just means you don’t have to bother with that dumbass idea, anymore. Alien abductions get relegated to entertainment for the entertainment starved, and real scientists can go on to think about something else, like the kinds of conditions leading to development of living organisms and why we don’t see alien visitors walking down Fifth Avenue.

(Leading hypothesis: the distances from there to here are so vast that anybody smart enough to make the trip has better things to do.)

If, on the other hand, your legal verdict is wrong, really bad things happen. Maybe somebody’s life is ruined. Maybe even somebody dies. The penalty for failure in the legal system is severe!

So, the term “air tight” shows up a lot in talking about legal evidence. In science not so much.

For scientists “Gee, it looks like . . . ” is usually as good as it gets.

For judges, they need a whole lot more.

So, as a scientist I can say: “POTUS looks like a career criminal.”

That, however, won’t do the job for, say, Robert Mueller.

In Real Life

Very few of us are either scientists or judges. We live in the real world and have to make real-world decisions. So, which sort of method for coming to conclusions should we use?

In 1983, film director Paul Brickman spent an estimated 6.2 million dollars and 99 minutes’ worth of celluloid (some 142,560 individual images at the standard frame rate of 24 fps) telling us that successful entrepreneurs must be prepared to make decisions based on insufficient information. That means with no guarantee of being right. No guarantee of success.

He, by the way, was right. His movie, Risky Business, grossed $63 million at the box office in the U.S. alone, roughly ten times what it cost to make!

There’s an old saying: “A conclusion is that point at which you decide to stop thinking about it.”

It sounds a bit glib, but it actually isn’t. Every experienced businessman, for example, knows that you never have enough information. You are generally forced to make a decision based on incomplete information.

In the real world, making a wrong decision is usually better than making no decision at all. What that means is that, in the real world, if you make a wrong decision you usually get to say “Oops!” and walk it back. If you decide to make no decision, that’s a decision that you can’t walk back.

Oops! I have to walk that statement back.

There are situations where the penalty for the failure of making a wrong decision is severe. For example, we had a cat once, who took exception to a number of changes in our home life. We’d moved. We’d gotten a new dog. We’d adopted another cat. He didn’t like any of that.

I could see from his body language that he was developing a bad attitude. Whereas he had previously been patient when things didn’t go exactly his way, he’d started acting more aggressive. One night, we were startled to hear a screeching of brakes in the road passing our front door. We went out to find that Nick had run across the road and been hit by a car.

Splat!

Considering the pattern of events, I concluded that Nick had died of PCD. That is, “Poor Cat Decision.” He’d been overly aggressive when deciding whether or not to cross the road.

Making no decision (hesitating before running across the road) would probably have been better than the decision he made to turn on his jets.

That’s the kind of decision where getting it wrong is worse than holding back.

Usually, however, no decision is the worst decision. As the Zen haiku says:

In walking, just walk.
In sitting, just sit.
Above all, don’t wobble.

That argues for using the scientist’s method: gather what facts you have, then make a decision. If your hypothesis turns out to be wrong, “So, What?”

How Do We Know What We Think We Know?

Rene Descartes Etching
Rene Descartes shocked the world by asserting “I think, therefore I am.” In the mid-seventeenth century that was blasphemy! William Holl/Shutterstock.com

9 May 2018 – In astrophysics school, learning how to distinguish fact from opinion was a big deal.

It’s really, really hard to do astronomical experiments. Let’s face it, before Neil Armstrong stepped, for the first time, on the Moon (known as “Luna” to those who like to call things by their right names), nobody could say for certain that the big bright thing in the night sky wasn’t made of green cheese. Only after going there and stepping on the ground could Armstrong truthfully report: “Yup! Rocks and dust!”

Even then, we had to take his word for it.

Only later on, after he and his buddies brought actual samples back to be analyzed on Earth (“Terra”) could others report: “Yeah, the stuff’s rock.”

Then, the rest of us had to take their word for it!

Before that, we could only look at the Moon. We couldn’t actually go there and touch it. We couldn’t complete the syllogism:

    1. It looks like a rock.
    2. It sounds like a rock.
    3. It smells like a rock.
    4. It feels like a rock.
    5. It tastes like a rock.
    6. Ergo. It’s a rock!

Before 1969, nobody could get past the first line of the syllogism!

Based on my experience with smart people over the past nearly seventy years, I’ve come to believe that the entire green-cheese thing started out when some person with more brains than money pointed out: “For all we know, the stupid thing’s made of green cheese.”

I Think, Therefore I Am

An essay I read a long time ago, which somebody told me was written by some guy named Rene Descartes in the seventeenth century, concluded that the only reason he (the author) was sure of his own existence was that he was asking the question, “Do I exist?” If he didn’t exist, who was asking the question?

That made sense to me, as did the sentence “Cogito ergo sum” (also attributed to that Descartes character), which, according to Mr. Foley, my high-school Latin teacher, translates from the ancient Romans’ babble into English as “I think, therefore I am.”

It’s easier to believe that all this stuff is true than to invent some goofy conspiracy theory in which it was all made up just to make a fool of little old me.

Which leads us to Occam’s Razor.

Occam’s Razor

According to the entry in Wikipedia on Occam’s Razor, the concept was first expounded by “William of Ockham, a Franciscan friar who studied logic in the 14th century.” Often summarized (in Latin) as lex parsimoniae, or “the law of parsimony” (again according to that same Wikipedia entry), what it means is: when faced with alternative explanations of anything, believe the simplest.

So, when I looked up in the sky from my back yard that day in the mid-1950s, and that cute little neighbor girl tried to convince me that what I saw was a flying saucer, and even claimed that she saw little alien figures looking over the edge, I was unconvinced. It was a lot easier to believe that she was a poor observer, and only imagined the aliens.

When, the next day, I read a newspaper story (Yes, I started reading newspapers about a nanosecond after Miss Shay taught me to read in the first grade.) claiming that what we’d seen was a U.S. Navy weather balloon, my intuitive grasp of Occam’s Razor (That was, of course, long before I’d ever heard of Occam or learned that a razor wasn’t just a thing my father used to scrape hair off his face.) caused me to immediately prefer the newspaper’s explanation to the drivel Nancy Pastorello had shovelled out.

Taken together, these two concepts form the foundation for the philosophy of science. Basically, the only thing I know for certain is that I exist, and the only thing you can be certain of is that you exist (assuming, of course, you actually think, which I have to take your word for). Everything else is conjecture, and I’m only going to accept the simplest of alternative conjectures.

Okay, so, having disposed of the two bedrock principles of the philosophy of science, it’s time to look at how we know what we think we know.

How We Know What We Think We Know

The only thing I (as the only person I’m certain exists) can do is pile up experience upon experience (assuming my memories are valid), interpreting each one according to Occam’s Razor, and fitting them together in a pattern that maximizes coherence, while minimizing the gaps and resolving the greatest number of the remaining inconsistencies.

Of course, I quickly notice that other people end up with patterns that differ from mine in ways that vary from inconsequential to really serious disagreements.

I’ve managed to resolve this dilemma by accepting the following conclusion:

Objective reality isn’t.

At first blush, this sounds like ambiguous nonsense. It isn’t, though. To understand it fully, you have to go out and get a nice, hot cup of coffee (or tea, or Diet Coke, or Red Bull, or anything else that’ll give you a good jolt of caffeine), sit down in a comfortable chair, and spend some time thinking about all the possible ways those three words can be interpreted either singly or in all possible combinations. There are, according to my count, fifteen possible combinations. You’ll find that all of them can be true simultaneously. They also all pass the Occam’s Razor test.

That’s how we know what we think we know.

And, You Thought Global Warming was a BAD Thing?

Ice skaters on the frozen Thames river in 1677

10 March 2017 – ‘Way back in the 1970s, when I was an astrophysics graduate student, I was hot on the trail of why solar prominences have the shapes we observe them to have. Being a good little budding scientist, I spent most of my waking hours in the library poring over old research notes, from the (at that time barely existing) current solar research back to the beginning of time. Or, at least to the invention of the telescope.

The fact that solar prominences are closely associated with sunspots led me to study historical measurements of sunspots. Of course, I quickly ran across two well-known anomalies known as the Maunder and Sporer minima. These were periods, centuries ago, when sunspots practically disappeared for decades at a time. Astronomers of the time commented on it, but hadn’t a clue as to why.

The idea that sunspots could disappear for extended periods is not really surprising. The Sun is well known to be a variable star whose surface activity varies on a more-or-less regular 11-year cycle (22 years if you count the fact that the magnetic polarity reverses after every minimum). The idea that any such oscillator can drop out once in a while isn’t hard to swallow.

Besides, when Mommy Nature presents you with an observable fact, it’s best not to doubt the fact, but to ask “Why?” That leads to much more fun research and interesting insights.

More surprising (at the time) was the observed correlation between the Maunder and Sporer minima and a period of anomalously cold temperatures throughout Europe known as the “Little Ice Age.” Interesting effects of the Little Ice Age included the invention of buttons to make winter garments more effective, advances of glaciers in the mountains, ice skating on rivers that previously never froze at all, and the abandonment of Viking settlements in Greenland.

And, crop failures. Can’t forget crop failures! Marie Antoinette’s famous “Let ’em eat cake” faux pas was triggered by consistent failures of the French wheat harvest.

The moral of the Little Ice Age story is:

Global Cooling = BAD

The converse conclusion:

Global Warming = GOOD

seems less well documented. A Medieval Warm Period from about 950-1250 did correlate with fairly active times for European culture. Similarly, the Roman Warm Period (250 BCE – 400 CE) saw the rise of the Roman civilization. So, we can tentatively conclude that global warming is generally NOT bad.

Sunspots as Markers

The reason it was surprising to see sunspot minima coincide with cool temperatures was that, at the time, astronomers fantasized that sunspots were like clouds that blocked radiation leaving the Sun. Folks assumed that more clouds meant more blocking of radiation, and cooler temperatures on Earth.

Careful measurements quickly put that idea into its grave with a stake through its heart! The reason is another feature of sunspots, which the theory conveniently forgot: they’re surrounded by relatively bright areas (called faculae) that pump out radiation at an enhanced rate. It turns out that the faculae associated with a sunspot easily make up for the dimming effect of the spot itself.

That’s why we carefully measure details before jumping to conclusions!
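Here is the back-of-the-envelope bookkeeping behind that claim. Every number below is an illustrative placeholder, not a measured solar value; the only point is to show how a facular excess can outweigh a spot deficit.

    # All numbers are illustrative placeholders chosen to show the bookkeeping,
    # not measured solar values.
    spot_coverage = 0.001        # fraction of the visible disk covered by spots
    spot_brightness = 0.3        # spot brightness relative to normal photosphere
    facula_coverage = 0.02       # fraction of the disk covered by faculae
    facula_brightness = 1.05     # faculae are a few percent brighter than average

    deficit = spot_coverage * (1.0 - spot_brightness)     # light lost to spots
    excess = facula_coverage * (facula_brightness - 1.0)  # light gained from faculae
    net = excess - deficit

    print(f"spot deficit:   {deficit:.3%} of total output")
    print(f"facular excess: {excess:.3%} of total output")
    print(f"net change:     {net:+.3%}  (positive means the active Sun is slightly brighter)")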

Anyway, the best solar-output (irradiance) research I was able to find was by Charles Greeley Abbot, who, as Director of the Smithsonian Astrophysical Observatory from 1907 to 1944, assembled an impressive decades-long series of meticulous measurements of the total radiation arriving at Earth from the Sun. He also attempted to correlate these measurements with weather records from various cities.

Blinded by a belief that solar activity (as measured by sunspot numbers) would anticorrelate with solar irradiation and therefore Earthly temperatures, he was dismayed to be unable to make sense of the combined data sets.

By simply throwing out the assumptions, I was quickly able to see that the only correlation in the data was that temperatures more-or-less positively correlated with sunspot numbers and solar irradiation measurements. The resulting hypothesis was that sunspots are a marker for increased output from the Sun’s core. Below a certain level there are no spots. As output increases above the trigger level, sunspots appear and then increase with increasing core output.
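I don’t have Abbot’s notebooks to republish here, so the sketch below uses made-up numbers purely to illustrate the kind of check described above: build a sunspot-like series, an irradiance series that rises with it, and a temperature series that follows the irradiance, then let the correlation coefficients speak.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-ins, for illustration only: an 11-year-ish sunspot cycle,
    # an irradiance series that rises with sunspot number, and a temperature
    # series that (noisily) follows the irradiance.
    years = np.arange(1920, 1945)
    sunspots = 80 + 70 * np.sin(2 * np.pi * (years - 1923) / 11.0) + rng.normal(0, 10, years.size)
    irradiance = 1360.0 + 0.005 * sunspots + rng.normal(0, 0.1, years.size)
    temperature = 14.0 + 2.0 * (irradiance - 1360.0) + rng.normal(0, 0.1, years.size)

    print("sunspots vs irradiance:   ", np.corrcoef(sunspots, irradiance)[0, 1])
    print("irradiance vs temperature:", np.corrcoef(irradiance, temperature)[0, 1])
    # Both coefficients come out strongly positive, which is the sign of the
    # correlation in question; with fabricated data, only the sign is meaningful.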

The conclusion is that the Little Ice Age corresponded with a long period of reduced solar-core output, and the Maunder and Sporer minima are shorter periods when the core output dropped below the sunspot-trigger level.

So, we can conclude (something astronomers have known for decades if not centuries) that the Sun is a variable star. (The term “solar constant” is an oxymoron.) Second, we can conclude that variations in solar output have a profound effect on Earth’s climate. Those are neither surprising nor in doubt.

We’re also on fairly safe ground to say that (within reason) global warming is a good thing. At least it’s pretty clearly better than global cooling!