Don’t Panic!

Panic button
Do not push the red button! Peter Hermes Furian/Shutterstock

20 March 2019 – The image at right visualizes something described in Douglas Adams’ Hitchhiker’s Guide to the Galaxy. At one point, the main characters of that six-part “trilogy” found a big red button on the dashboard of a spaceship they were trying to steal that was marked “DO NOT PRESS THIS BUTTON!” Naturally, they pressed the button, and a new label popped up that said “DO NOT PRESS THIS BUTTON AGAIN!”

Eventually, they got the autopilot engaged only to find it was a stunt ship programmed to crash headlong into the nearest Sun as part of the light show for an interstellar rock band. The moral of this story is “Never push buttons marked ‘DO NOT PUSH THIS BUTTON.’”

Per the author: “It is said that despite its many glaring (and occasionally fatal) inaccuracies, the Hitchhiker’s Guide to the Galaxy itself has outsold the Encyclopedia Galactica because it is slightly cheaper, and because it has the words ‘DON’T PANIC’ in large, friendly letters on the cover.”

Despite these references to the Hitchhiker’s Guide to the Galaxy, this posting has nothing to do with that book, the series, or the guide it describes, except that I’ve borrowed the words from the Guide’s cover as a title. I did that because those words perfectly express the take-home lesson of Bill Snyder’s 11 March 2019 article in The Robot Report entitled “Fears of job-stealing robots are misplaced, say experts.”

Expert Opinions

Snyder’s article reports opinions expressed at the Conference on the Future of Work at Stanford University last month. It’s a topic I’ve shot my word processor off about on numerous occasions in this space, so I thought it would be appropriate to report others’ views as well. First, I’ll present material from Snyder’s article, then I’ll wrap up with my take on the subject.

“Robots aren’t coming for your job,” Snyder says, “but it’s easy to make misleading assumptions about the kinds of jobs that are in danger of becoming obsolete.”

“Most jobs are more complex than [many people] realize,” said Hal Varian, Google’s chief economist.

David Autor, professor of economics at the Massachusetts Institute of Technology, points out that education is a big determinant of how developing trends affect workers: “It’s a great time to be young and educated, but there’s no clear land of opportunity for adults who haven’t been to college.”

“When predicting future labor market outcomes, it is important to consider both sides of the supply-and-demand equation,” said Varian. “Demographic trends that point to a substantial decrease in the supply of labor are potentially larger in magnitude.”

His research indicates that shrinkage of the labor supply due to demographic trends is 53% greater than shrinkage of demand for labor due to automation. That means that while somewhat fewer jobs are available, there are far fewer workers available to fill them. The result is the prospect of a continued labor shortage.

At the same time, Snyder reports that “[The] most popular discussion around technology focuses on factors that decrease demand for labor by replacing workers with machines.”

In other words, fears that robots will displace humans from existing jobs miss the point. Robots, instead, are taking over jobs for which there aren’t enough humans available.

Another effect is the fact that what people think of as “jobs” are actually made up of many “tasks,” and it’s tasks that get automated, not entire jobs. Some tasks are amenable to automation while others aren’t.

“Consider the job of a gardener,” Snyder suggests as an example. “Gardeners have to mow and water a lawn, prune rose bushes, rake leaves, eradicate pests, and perform a variety of other chores.”

Some of these tasks, like mowing and watering, can easily be automated. Pruning rose bushes, not so much!

Snyder points to news reports of a hotel in Nagasaki, Japan being forced to “fire” robot receptionists and room attendants that proved to be incompetent.

There’s a scene in the 1997 film The Fifth Element where a supporting character tries to converse with a robot bartender about another character. He says: “She’s so vulnerable – so human. Do you know what I mean?” The robot shakes its head, “No.”

Sometimes people, even misanthropes, would prefer to interact with another human than with a drink-dispensing machine.

“Jobs,” Varian points out, “unlike repetitive tasks, tend not to disappear. In 1950, the U.S. Census Bureau listed 250 separate jobs. Since then, the only one to be completely eliminated is that of elevator operator.”

“Excessive automation at Tesla was a mistake,” founder Elon Musk mea culpa-ed last year. “Humans are underrated.”

Another trend Snyder points out is that automation-ready jobs, such as assembly-line factory workers, have already largely disappeared from America. “The 10 most common occupations in the U.S.,” he says, “include such jobs as retail salespersons, nurses, waiters, and other service-focused work. Notably, traditional occupations, such as factory and other blue-collar work, no longer even make the list.”

Again, robots are mainly taking over tasks that humans are not available to do.

The final trend that Snyder presents is the stark fact that birthrates in developed nations are declining – in some cases precipitously. “The aging of the baby boom generation creates demand for service jobs,” Varian points out, “but leaves fewer workers actively contributing labor to the economy.”

Those “service jobs” are just the ones that require a human touch, so they’re much harder to automate successfully.

My Inexpert Opinion

I’ve been trying, not entirely successfully, to figure out what role robots will actually have vis-à-vis humans in the future. I think there will be a few macroscopic trends. And, the macroscopic trends should be the easiest to spot ‘cause they’re, well, macroscopic. That means bigger. So, they’re easier to see. See?

As early as 2010, I worked out one important difference between robots and humans that I expounded in my novel Vengeance is Mine! Specifically, humans have a wider view of the Universe and have more of an emotional stake in it.

“For example,” I had one of my main characters pontificate at a cocktail party, “that tall blonde over there is an archaeologist. She uses ROVs – remotely operated vehicles – to map underwater shipwreck sites. So, she cares about what she sees and finds. We program the ROVs with sophisticated navigational software that allows her to concentrate on what she’s looking at, rather than the details of piloting the vehicle, but she’s in constant communication with it because she cares what it does. It doesn’t.”

More recently, I got a clearer image of this relationship and it’s so obvious that we tend to overlook it. I certainly missed it for decades.

It hit me like a brick when I saw a video of an autonomous robot marine-trash collector. This device is a small autonomous surface vessel with a big “mouth” that glides around seeking out and gobbling up discarded water bottles, plastic bags, bits of styrofoam, and other unwanted jetsam clogging up waterways.

The first question that popped into my mind was “who’s going to own the thing?” I mean, somebody has to want it, then buy it, then put it to work. I’m sure it could be made to automatically regurgitate the junk it collects into trash bags that it drops off at some collection point, but some human or humans have to make sure the trash bags get collected and disposed of. Somebody has to ensure that the robot has a charging system to keep its batteries recharged. Somebody has to fix it when parts wear out, and somebody has to take responsibility if it becomes a navigation hazard. Should that happen, the Coast Guard is going to want to scoop it up and hand its bedraggled carcass to some human owner along with a citation.

So, on a very important level, the biggest thing robots need from humans is ownership. Humans own robots, not the other way around. Without a human owner, an orphan robot is a pile of junk left by the side of the road!

Don’t Tell Me What to Think!

Your Karma ran over My Dogma
A woman holds up a sign while participating in the annual King Mango Strut parade in Miami, FL on 28 December 2014. BluIz60/Shutterstock

2 January 2019 – Now that the year-end holidays are over, it’s time to get back on my little electronic soapbox to talk about an issue that scientists have had to fight with authorities over for centuries. It’s an issue that has been around for millennia, but before a few centuries ago there weren’t scientists around to fight over it. The issue rears its ugly head under many guises. Most commonly today it’s discussed as academic freedom, or freedom of expression. You might think it was definitively won for all Americans in 1791 with the ratification of the first ten amendments to the U.S. Constitution and for folks in other democracies soon after, but you’d be wrong.

The issue is wrapped up in one single word: dogma.

According to the Oxford English Dictionary, the word dogma is defined as:

“A principle or set of principles laid down by an authority as incontrovertibly true.”

In 1600 CE, Giordano Bruno was burned at the stake for insisting that the stars were distant suns surrounded by their own planets, raising the possibility that these planets might foster life of their own, and that the universe is infinite and could have no “center.” These ideas directly controverted the dogma laid down as incontrovertibly true by both the Roman Catholic and Protestant Christian churches of the time.

Galileo Galilei, typically thought of as the poster child for resistance to dogma, was only placed under house arrest (for the rest of his life) for advocating the less radical Copernican vision of the solar system.

Nicholas Copernicus, himself, managed to fly under the Catholic Church’s radar for a third of a century by the simple tactic of not publishing his heliocentric model. Starting in 1510, he privately communicated it to his friends, who then passed it to some of their friends, etc. His signature work, De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres), in which he laid it out for all to see, wasn’t published until his death in 1543, when he’d already escaped beyond the reach of earthly authorities.

If this makes it seem that astrophysicists have been on the front lines of the war against dogma since there was dogma to fight against, that’s almost certainly true. Astrophysicists study stuff relating to things beyond the Earth, and that traditionally has been a realm claimed by religious authorities.

That claim largely started with Christianity, specifically the Roman Catholic Church. Ancient religions, which didn’t have delusions that they could dominate all of human thought, didn’t much care what cockamamie ideas astrophysicists (then just called “philosophers”) came up with. Thus, Aristarchus of Samos suffered no ill consequences (well, maybe a little, but nothing life – or even career – threatening) from proposing the same ideas that Galileo was arrested for championing some eighteen centuries later.

Fast forward to today and we have a dogma espoused by political progressives called “climate change.” It used to be called “global warming,” but that term was laughed down decades ago, though the dogma’s still the same.

The United-Nations-funded Intergovernmental Panel on Climate Change (IPCC) has become “the Authority” laying down the principles that Earth’s climate is changing and that change constitutes a rapid warming caused by human activity. The dogma also posits that this change will continue uninterrupted unless national governments promulgate drastic laws to curtail human activity.

Sure sounds like dogma to me!

Once again, astrophysicists are on the front lines of the fight against dogma. The problem is that the IPCC dogma treats the Sun (which is what powers Earth’s climate in the first place) as, to all intents and purposes, a fixed star. That is, it assumes climate change arises solely from changes in Earthly conditions, then assumes we control those conditions.

Astrophysicists know that just ain’t so.

First, stars generally aren’t fixed. Most stars are variable stars. In fact, all stars are variable on some time scale. They all evolve over time scales of millions or billions of years, but that’s not the kind of variability we’re talking about here.

The Sun is in the evolutionary phase called “main sequence,” where stars evolve relatively slowly. That’s the source of much “invariability” confusion. Main sequence stars, however, go through periods where they vary in brightness more or less violently on much shorter time scales. In fact, most main sequence stars exhibit this kind of behavior to a greater or lesser extent at any given time – like now.

So, a modern (as in post-nineteenth-century) astrophysicist would never make the bald assumption that the Sun’s output was constant. Statistically, the odds are against it. Most stars are variables; the Sun is like most stars; so the Sun is probably a variable. In fact, it’s well known to vary with a fairly stable period of roughly 22 years (the 11-year “sunspot cycle” is actually only a half cycle).

A couple of centuries ago, astronomers assumed (with no evidence) that the Sun’s output was constant, so they started trying to measure this assumed “solar constant.” Charles Greeley Abbot, who served as the Secretary of the Smithsonian Institution from 1928 to 1944, oversaw the first long-term study of solar output.

His observations were necessarily ground-based, and the variations he observed (amounting to 3-5 percent) have been dismissed as “due to changing weather conditions and incomplete analysis of his data.” That’s despite the monumental efforts he went through to control such effects.

In the 1970s I did an independent analysis of his data and realized that part of the problem he had stemmed from a misunderstanding of the relationship between sunspots and solar irradiance. At the time, it was assumed that sunspots were akin to atmospheric clouds. That is, scientists assumed they affected overall solar output by blocking light, thus reducing the total power reaching Earth.

Thus, when Abbot’s observations showed the opposite correlation, they were assumed to be erroneous. His purported correlations with terrestrial weather observations were similarly confused, and thus dismissed.

Since then, astrophysicists have realized that sunspots are more like a symptom of increased internal solar activity. That is, increases in sunspot activity positively correlate with increases in the internal dynamism that generates the Sun’s power output. Seen in this light, Abbot’s observations and analysis make a whole lot more sense.

We have ample evidence, from historical observations of climate changes correlating with observed variations in sunspot activity, that there is a strong connection between climate and solar variability. Most notably, the Spörer and Maunder minima (times when sunspot activity all but disappeared for extended periods) correlate with historically cold periods in Earth’s history. There was a similar period of low solar activity (as measured by sunspot numbers) from about 1790 to 1830, called the “Dalton Minimum,” that similarly depressed global temperatures and gave an anomalously low baseline for the run-up to the Modern Maximum.

For astrophysicists, the phenomenon of solar variability is not in doubt. The questions that remain are how large the variations are, how closely they correlate with climate change, and whether they are predictable.

Studies of solar variability, however, run afoul of the IPCC dogma. For example, in May of 2017 an international team of solar dynamicists led by Valentina V. Zharkova at Northumbria University in the U.K. published a paper entitled “On a role of quadruple component of magnetic field in defining solar activity in grand cycles” in the Journal of Atmospheric and Solar-Terrestrial Physics. Their research indicates that the Sun, while its activity has been on the upswing for an extended period, should be heading into a quiescent period starting with the next maximum of the 11-year sunspot cycle in around five years.
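The beating of magnetic components with slightly different periods is the heart of this kind of grand-cycle model. Here’s a minimal, purely illustrative sketch (the periods are made-up round numbers, not Zharkova’s fitted values) showing how two such components cancel near the minima of their beat envelope, producing extended quiet spells:

```python
import numpy as np

# Two magnetic-field components with slightly different periods beat
# against each other; near envelope minima they cancel -- a "grand minimum."
# Periods below are illustrative assumptions, not fitted solar values.
years = np.arange(1600, 2400)
p1, p2 = 10.5, 11.5                      # assumed periods, in years
w1 = np.sin(2 * np.pi * years / p1)
w2 = np.sin(2 * np.pi * years / p2)
activity = w1 + w2                       # summed components

# The envelope (beat) period is p1 * p2 / |p2 - p1|.
beat_period = p1 * p2 / abs(p2 - p1)
print(round(beat_period, 1))             # beat period ~ 120.8 years here
```

The real model uses more components and a much longer beat, but the mechanism (amplitude modulation of near-period oscillators) is the same.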

That would indicate that the IPCC prediction of exponentially increasing global temperatures due to human-caused increasing carbon-dioxide levels may be dead wrong. I say “may be dead wrong” because this is science, not dogma. In science, nothing is incontrovertible.

I was clued in to this research by my friend Dan Romanchik, who writes a blog for amateur radio enthusiasts. Amateur radio enthusiasts care about solar activity because sunspots mark regions of intense magnetic fields at the Sun’s surface. Those magnetic fields do deflect cosmic rays away from the inner solar system, which is where we live, but more to the point for radio, high sunspot activity goes hand in hand with increased solar ultraviolet and X-ray output. That radiation is chiefly responsible for the Kennelly–Heaviside layer of ionized gas in Earth’s upper atmosphere (roughly 90–150 km, or 56–93 mi, above the ground).

Radio amateurs bounce signals off this layer to reach distant stations beyond the line of sight. When solar activity is weak, the layer is less strongly ionized, reducing the effectiveness of this technique (often called “DXing”).
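As a back-of-the-envelope illustration of why the layer’s height matters, simple geometry gives the maximum one-hop range of a skywave signal. The 110 km figure below is just an assumed mid-range value from the 90–150 km span quoted above:

```python
import math

# Rough maximum single-hop range for a signal reflected off an ionized layer.
# Geometry: a ray leaving the ground tangentially, reflecting at height h,
# returns to ground at roughly d = 2 * sqrt(2 * R * h) (flat-ray approximation).
R = 6371.0   # Earth's mean radius, km
h = 110.0    # assumed mid-range layer height, km

d = 2 * math.sqrt(2 * R * h)
print(round(d))   # ~2368 km for one hop off the Kennelly-Heaviside layer
```

Longer paths require multiple ground-and-layer hops, each of which loses signal strength, which is why strong ionization matters so much for DXing.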

In his post of 16 December 2018, Dan complained: “If you operate HF [the high-frequency radio band], it’s no secret that band conditions have not been great. The reason, of course, is that we’re at the bottom of the sunspot cycle. If we’re at the bottom of the sunspot cycle, then there’s no way to go but up, right? Maybe not.

“Recent data from the NOAA’s Space Weather Prediction Center seems to suggest that solar activity isn’t going to get better any time soon.”

After discussing the NOAA prediction, he went on to further complain: “And, if that wasn’t depressing enough, I recently came across an article reporting on the research of Prof. Valentina Zharkova, who is predicting a grand minimum of 30 years!”

He included a link to a presentation Dr. Zharkova made at the Global Warming Policy Foundation last October in which she outlined her research and pointedly warned that the IPCC dogma was totally wrong.

I followed the link, viewed her presentation, and concluded two things:

  1. The research methods she used are some that I’m quite familiar with, having used them on numerous occasions; and

  2. She used those techniques correctly, reaching convincing conclusions.

Her results seem well aligned with a meta-analysis published by the Cato Institute in 2015, which I mentioned in my posting of 10 October 2018 to this blog. The Cato meta-analysis of observational data indicated a much reduced rate of global warming compared to that predicted by IPCC models.

The Zharkova-model data covers a much wider period (millennia-long time scale rather than decades-long time scale) than the Cato data. It’s long enough to show the Medieval Warm Period as well as the Little Ice Age (Maunder minimum) and the recent warming trend that so fascinates climate-change activists. Instead of a continuation of the modern warm period, however, Zharkova’s model shows an abrupt end starting in about five years with the next maximum of the 11-year sunspot cycle.

Don’t expect a stampede of media coverage disputing the IPCC dogma, however. A host of politicians (especially among those in the U.S. Democratic Party) have hung their hats on that dogma, as have an array of governments that have sold policy decisions based on it. The political left has made an industry of vilifying anyone who doesn’t toe the “climate change” line, calling them “climate deniers” with suspect intellectual capabilities and moral characters.

Again, this sounds a lot like dogma. It’s the same tactic that the Inquisition used against Bruno and Galileo before escalating to more brutal methods.

Supporters of Zharkova’s research labor under a number of disadvantages. Of course, there’s the obvious disadvantage that Zharkova’s thick Ukrainian accent limits her ability to explain her work to those who don’t want to listen. She would not come off well on the evening news.

A more important disadvantage is the abstruse nature of the applied mathematics techniques used in the research. How many political reporters and, especially, commentators are familiar enough with the mathematical technique of principal component analysis to understand what Zharkova’s talking about? This stuff makes macroeconomics modeling look like kiddie play!
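For readers who want a feel for it, principal component analysis itself is not mysterious; the hard part is the solar physics behind the data. Here’s a minimal sketch on a made-up toy dataset (nothing here resembles Zharkova’s actual data or preprocessing):

```python
import numpy as np

# Toy principal component analysis: two correlated cyclic signals plus noise.
# The periods and amplitudes are invented for illustration only.
rng = np.random.default_rng(0)
t = np.linspace(0, 44, 200)                      # four 11-year "cycles"
signals = np.column_stack([
    np.sin(2 * np.pi * t / 11),                  # dominant cyclic component
    0.3 * np.sin(2 * np.pi * t / 11 + 1.0),      # weaker, phase-shifted component
])
data = signals + 0.05 * rng.standard_normal(signals.shape)

# Center the data, then use SVD: the rows of Vt are the principal axes,
# and the squared singular values give each component's share of variance.
centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained.round(3))   # first component carries most of the variance
```

The point is that PCA separates a dataset into independent modes of variation ranked by importance; applying it to solar magnetic-field observations is what lets one pull out the handful of components whose beating drives the grand cycles.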

But, the situation’s even worse because to really understand the research, you also need an appreciation of stellar dynamics, which is based on magnetohydrodynamics. How many CNN commentators even know how to spell that?

Of course, these are all tools of the trade for astrophysicists. They’re as familiar to them as a hammer or a saw is to a carpenter.

For those in the media, on the other hand, it’s a lot easier to take the “most scientists agree” mantra at face value than to embark on the nearly hopeless task of re-educating themselves to understand Zharkova’s research. That goes double for politicians.

It’s entirely possible that “most” scientists might agree with the IPCC dogma, but those in a position to understand what’s driving Earth’s climate do not agree.

Reimagining Our Tomorrows

Cover Image
Utopia with a twist.

19 December 2018 – I generally don’t buy into utopias.

Utopias are intended as descriptions of a paradise. They’re supposed to be a paradise for everybody, and they’re supposed to be filled with happy people committed to living in their city (utopias are invariably built around descriptions of cities), which they imagine to be the best of all possible cities located in the best of all possible worlds.

Unfortunately, however, utopia stories are written by individual authors, and they’d only be a paradise for that particular author. If the author is persuasive enough, the story will win over a following of disciples, who will praise it to high Heaven. Once in a great while (actually surprisingly often) those disciples become so enamored of the description that they’ll drop everything and actually attempt to build a city to match the description.

When that happens, it invariably ends in tears.

That’s because, while utopian stories invariably describe city plans that would be paradise to their authors, great swaths of the population would find living in them to be horrific.

Even Thomas More, the sixteenth-century philosopher, politician, and generally overall smart guy who’s credited with giving us the word “utopia” in the first place, was wise enough to acknowledge that the utopia he described in his most famous work, Utopia, wouldn’t be such a fun place for the slaves he had serving his upper-middle-class citizens, who were the bulwark of his utopian society.

Even Plato’s Republic, which gave us the conundrum summarized in Juvenal’s Satires as “Who guards the guards?,” was never meant as a workable society. Plato’s work, in general, was meant to teach us how to think, not what to think.

What to think is a highly malleable commodity that varies from person to person, society to society, and, most importantly, from time to time. Plato’s Republic reflected what might have passed as good ideas for city planning in 380 BC Athens, but they wouldn’t have passed muster in More’s sixteenth-century England. Still less would they be appropriate in twenty-first-century democracies.

So, I approached Joe Tankersley’s Reimagining Our Tomorrows with some trepidation. I wouldn’t have put in the effort to read the thing if it wasn’t for the subtitle: “Making Sure Your Future Doesn’t SUCK.”

That subtitle indicated that Tankersley just might have a sense of humor, and enough gumption to put that sense of humor into his contribution to Futurism.

Futurism tends to be the work of self-important intellectuals out to make a buck by feeding their audience on fantasies that sound profound, but bear no relation to any actual or even possible future. Its greatest value is in stimulating profits for publishers of magazines and books about Futurism. Otherwise, such works aren’t worth the trees killed to make the paper they’re printed on.

Trees, after all and as a group, make a huge contribution to all facets of human life. Like, for instance, breathing. Breathing is of incalculable value to humans. Trees make an immense contribution to breathing by absorbing carbon dioxide and pumping out vast quantities of oxygen, which humans like to breathe.

We like trees!

Futurists, not so much.

Tankersley’s little (168 pages, not counting author bio, front matter and introduction) opus is not like typical Futurist literature, however. Well, it would be like that if it weren’t more like the Republic in that its avowed purpose is to stimulate its readers to think about the future themselves. In the introduction, which I purposely left out of the page count, he says:

“I want to help you reimagine our tomorrows; to show you that we are living in a time when the possibility of creating a better future has never been greater.”

Tankersley structured the body of his book in ten chapters, each telling a separate story about an imagined future centered around a possible solution to an issue relevant today. Following each chapter is an “apology” by a fictional future character named Archibald T. Patterson III.

Archie is what a hundred years ago would have been called a “Captain of Industry.” Today, we’d refer to him as an uber-rich and successful entrepreneur. Think Elon Musk or Bill Gates.

Actually, I think he’s more like Warren Buffett in that he’s reasonably introspective and honest with himself. Archie sees where society has come from, how it got to the future it got to, and what he and his cohorts did wrong. While he’s super-rich and privileged, the futures the stories describe were made by other people who weren’t uber-rich and successful. His efforts largely came to naught.

The point Tankersley seems to be making is that progress comes from the efforts of ordinary individuals who, in true British fashion, “muddle through.” They see a challenge and apply their talents and resources to making a solution. The solution is invariably nothing anyone would foresee, and is nothing like what anyone else would come up with to meet the same challenge. Each is a unique response to a unique challenge by unique individuals.

It might seem naive, this idea that human development comes from ordinary individuals coming up with ordinary solutions to ordinary problems all banded together into something called “progress,” but it’s not.

For example, Mark Zuckerberg developed Facebook as a response to the challenge of applying then-new computer-network technology to the age-old quest by late adolescents to form their own little communities by communicating among themselves. It’s only fortuitous that he happened on the right combination of time (the dawn of a radical new technology), place (in the midst of a huge cadre of the right people well versed in using that radical new technology) and marketing to get the word out to those right people wanting to use that radical new technology for that purpose. Take away any of those elements and there’d be no Facebook!

What if Zuckerberg hadn’t invented Facebook? In that event, somebody else (Reid Hoffman) would have come up with a similar solution (LinkedIn) to the same challenge facing a similar group (technology professionals).

Oh, my! They did!

History abounds with similar examples. There’s hardly any advancement in human culture that doesn’t fit this model.

The good news is that Tankersley’s vision for how we can re-imagine our tomorrows is right on the money.

The bad news is … there isn’t any bad news!

Teaching News Consumption and Critical Thinking

Teaching media literacy
Teaching global media literacy to children should be started when they’re young. David Pereiras/Shutterstock

21 November 2018 – Regular readers of this blog know one of my favorite themes is critical thinking about news. Another of my favorite subjects is education. So, they won’t be surprised when I go on a rant about promoting teaching of critical news consumption habits to youngsters.

Apropos of this subject, last week the BBC launched a project entitled “Beyond Fake News,” which aims to “fight back” against fake news with a season of documentaries, special reports and features on the BBC’s international TV, radio and online networks.

In an article by Lucy Mapstone, Press Association Deputy Entertainment Editor for the Independent.ie digital network, entitled “BBC to ‘fight back’ against disinformation with Beyond Fake News project,” Jamie Angus, director of the BBC World Service Group, is quoted as saying: “Poor standards of global media literacy, and the ease with which malicious content can spread unchecked on digital platforms mean there’s never been a greater need for trustworthy news providers to take proactive steps.”

Angus’ quote opens up a Pandora’s box of issues. Among them is the basic question of what constitutes “trustworthy news providers” in the first place. Of course, this is an issue I’ve tackled in previous columns.

Another issue is what would be appropriate “proactive steps.” The BBC’s “Beyond Fake News” project is one example that seems pretty sound. (Sorry if this language seems a little stilted, but I’ve just finished watching a mid-twentieth-century British film, and those folks tended to talk that way. It’ll take me a little while to get over it.)

Another sort of “proactive step” is what I’ve been trying to do in this blog: provide advice about what steps to take to ensure that the news you consume is reliable.

A third is providing rebuttal of specific fake-news stories, which is what pundits on networks like CNN and MSNBC try (with limited success, I might say) to do every day.

The issue I hope to attack in this blog posting is the overarching concern in the first phrase of the Angus quote: “Poor standards of global media literacy, … .”

Global media literacy can only be improved the same way any lack of literacy can be improved, and that is through education.

Improving global media literacy begins with ensuring a high standard of media literacy among teachers. Teachers can only teach what they already know. Thus, a high standard of media literacy must start in college and university academic-education programs.

I’ve spent decades teaching at the college level, so I have plenty of experience, but I’m not actually qualified to teach other teachers how to teach. I’ve only taught technical subjects, and the education required to teach technical subjects centers on the technical subjects themselves. The art of teaching is (or at least was when I was at university) left to the student’s ability to mimic what their teachers did, informal mentoring by fellow teachers, and good-ol’ experience in the classroom. We were basically dumped into the classroom and left to sink or swim. Some swam, while others sank.

That said, I’m not going to try to lay out a program for teaching teachers how to teach media literacy. I’ll confine my remarks to making the case that it needs to be done.

Teaching media literacy to schoolchildren is especially urgent because the media-literacy projects I keep hearing about are aimed at adults “in the wild,” so to speak. That is, they’re aimed at adult citizens who have already completed their educations and are out earning livings, bringing up families, and participating in the political life of society (or ignoring it, as the case may be).

I submit that’s exactly the wrong audience to aim at.

Yes, it’s the audience that is most involved in media consumption. It’s the group of people who most need to be media literate. It is not, however, the group that we need to aim media-literacy education at.

We gotta get ‘em when they’re young!

Like any other academic subject, the best time to teach people good media-consumption habits is before they need to have them, not afterwards. There are multiple reasons for this.

First, children need to develop good habits before they’ve developed bad habits. It saves the dicey stage of having to unlearn old habits before you can learn new ones. Media literacy is no different. Neither is critical thinking.

Most of the so-called “fake news” appeals to folks who’ve never learned to think critically in the first place. Many of them certainly try to think critically, but they’ve never been taught the skills. And those critical-thinking skills are a prerequisite to building good media-consumption habits.

How can you get in the habit of thinking critically about news stories you consume unless you’ve been taught to think critically in the first place? I submit that the two skills are so intertwined that the best strategy is to teach them simultaneously.

And, it is most definitely a habit, like smoking, drinking alcohol, and being polite to pretty girls (or boys). It’s not something you can just tell somebody to do, then expect they’ll do it. They have to do it over and over again until it becomes habitual.

‘Nuff said.

Another reason to promote media literacy among the young is that childhood is when people are most amenable to instruction. Human children are pre-programmed to try to learn things. That’s what “play” is all about. Acquiring knowledge is not an unpleasant chore for children (unless misguided adults make it so). It’s their job! To ensure that children learn what they need to know to function as adults, Mommy Nature went out of her way to make learning fun, just as she did with everything else humans need to do to survive as a species.

Learning, having sex, and taking care of babies are all things humans have to do to survive, so Mommy Nature puts systems in place to make them fun, thereby driving humans to do them.

A third reason we need to teach media literacy to the young is that, like everything else, you’re better off learning it before you need to practice it. Nobody in their right mind teaches a novice how to drive a car by running them out in city traffic. High schools all have big, torturously laid out parking lots to give novice drivers a safe, challenging place to practice the basic skills of starting, stopping and turning before they have to perform those functions while dealing with fast-moving Chevys coming out of nowhere.

Similarly, you want students to practice deciphering written and verbal communications before asking them to parse a Donald-Trump speech!

The “Call to Action” for this editorial piece is thus, “Agitate for developing good media-consumption habits among schoolchildren along with the traditional Three Rs.” It starts with making the teaching of media literacy part of K-12 teacher education. It also includes teaching critical thinking skills and habits at the same time. Finally, it includes holding K-12 teachers responsible for inculcating good media-consumption habits in their students.

Yes, it’s important to try to bring the current crop of media-illiterate adults up to speed, but it’s more important to promote global media literacy among the young.

Radicalism and the Death of Discourse

Gaussian political spectrum
Most Americans prefer to be in the middle of the political spectrum, but most of the noise comes from the far right and far left.

7 November 2018 – During the week of 22 October 2018 two events dominated the news: Cesar Sayoc mailed fourteen pipe bombs to prominent individuals critical of Donald Trump, and Robert Bowers shot up a synagogue because he didn’t like Jews. Both of these individuals identified themselves with far-right ideology, so the media has been full of rhetoric condemning far-right activists.

To be legally correct, I have to note that, while I’ve written the above paragraph as if those individuals’ culpability for those crimes is established fact, they (as of this writing) haven’t been convicted. It’s entirely possible that some deus ex machina will appear out of the blue and exonerate one or both of them.

Clearly, things have gotten out of hand with Red Team activists when they start “throwing” pipe bombs and bullets. But, I’m here to say “naughty, naughty” to both sides.

Both sides are culpable.

I don’t want you to interpret that last sentence as agreement with Donald Trump’s idiotic statement after last year’s Charlottesville incident that there were “very fine people on both sides.”

There aren’t “very fine people” on both sides. Extremists are “bad” people no matter what side they’re on.

For example, not long ago social media sites (specifically LinkedIn and, especially, Facebook) were lit up with vitriol about the Justice Kavanaugh hearings by pundits from both the Red Team and the Blue Team. It got so hot that I was embarrassed!

Some have pointed out that, statistically, most of the actual violence has been perpetrated by the Red Team.

Does that mean the Red Team is more culpable than the Blue Team?

No. It means they’re using different weapons.

The Blue Team, which I believe consists mainly of extremists from the liberal/progressive wing of the Democratic Party, has traditionally chosen written and spoken words as their main weapon. Recall some of the political correctness verbiage used to attack free expression in the late 20th Century, and demonstrations against conservative speakers on college campuses in our own.

The Red Team, which today consists of the Trumpian remnants of the Republican Party, has traditionally chosen to throw hard things, like rocks, bullets and pipe bombs.

Both sides also attempt to disarm the other side. The Blue Team wisely attempts to disarm the Red Team by taking away their guns. The Red Team, which eschews anything that smacks of wisdom, tries to disarm the Blue Team by (figuratively, so far) burning their books.

Recognize that calling the Free Press “the enemy of the people” is morally equivalent to throwing books on a bonfire. They’re both attempts to promote ignorance.

What’s actually happening is that the fringes of society are making all of the noise, and the mass of moderate-thinking citizens can’t get a word in edgewise.

George Shultz pointed out: “He who walks in the middle of the road gets hit from both sides.”

I think it was Douglas Adams who pointed out that fanatics get to run things because they care enough to put in the effort. Moderates don’t because they don’t.

Both of these pundits point out the sad fact that Nature favors extremes. The most successful companies are those with the highest growth rates. Most drivers exceed the speed limit. The squeaky wheel gets the most grease. And, those who express the most extreme views get the most media attention.

Our Constitution specifies in no uncertain terms that the nation is founded on (small “d”) democratic principles. Democratic principles insist that policy matters be debated and resolved by consensus of the voting population. That can only be done when people meet together in the middle.

Extremists on both the Red Team and Blue Team don’t want that. They treat politics as a sporting event.

In a baseball game, for example, nobody roots for a tie. They root for a win by one team or the other.

Government is not a sporting event.

When one team or the other wins, all Americans lose.

The enemy we are facing now, which is the same enemy democracies face around the world, is not the right or left. It is extremism in general. Always has been. Always will be.

Authoritarians always go for one extreme or the other. Hitler went for the right. Stalin went for the left.

The reason authoritarians pick an extreme is that’s where there are people who are passionate enough about their ideas to shoot anyone who doesn’t agree with them. That, authoritarians realize, is the only way they can become “Dictator for Life.” Since that is their goal, they have to pick an extreme.

We love democracy because it’s the best way for “We the People” to ensure nobody gets to be “Dictator for Life.” When everyone meets in the middle (which is the only place everyone can meet), authoritarians get nowhere.

Ergo, authoritarians love extremes and everyone else needs the middle.

Vilifying “nationalism” as a Red Team vice misses the point. In the U.S. (or any similar democracy), nationalism requires more-or-less moderate political views. There’s lots of room in the middle for healthy (and ultimately entertaining) debate, but very little room at the extremes.

Try going for the middle.

To quote Victor “Animal” Palotti in Roland Emmerich’s 1998 film Godzilla: “C’mon. It’ll be fun! It’ll be fun! It’ll be fun!”

Babies and Bath Water

A baby in bath water
Don’t throw the baby out with the bathwater. Switlana Symonenko/Shutterstock.com

31 October 2018 – An old catchphrase derived from Medieval German is “Don’t throw the baby out with the bathwater.” It expresses an important principle in systems engineering.

Systems engineering focuses on how to design, build, and manage complex systems. A system can consist of almost anything made up of multiple parts or elements. For example, an automobile internal combustion engine is a system consisting of pistons, valves, a crankshaft, etc. Complex systems, such as that internal combustion engine, are typically broken up into sub-systems, such as the ignition system, the fuel system, and so forth.

Obviously, the systems concept can be applied to almost anything, from microorganisms to the world economy. As another example, medical professionals divide the human body into eleven organ systems, each a sub-system within the body, which is itself a complex system.

Most systems-engineering principles transfer seamlessly from one kind of system to another.

Perhaps the best-known example of a systems-engineering principle was popularized by Robin Williams in his Mork and Mindy TV series. The Used-Car rule, as Williams’ Mork character put it, quite simply states:

“If it works, don’t fix it!”

If you’re getting the idea that systems engineering principles are typically couched in phrases that sound pretty colloquial, you’re right. People have been dealing with systems for as long as there have been people, so most of what they discovered about how to deal with systems long ago became “common sense.”

Systems engineering coalesced into an interdisciplinary engineering field around the middle of the twentieth century. Simon Ramo is sometimes credited as the founder of modern systems engineering, although many engineers and engineering managers contributed to its development and formalization.

The Baby/Bathwater rule means (if there’s anybody out there still unsure of the concept) that when attempting to modify something big (such as, say, the NAFTA treaty), you should make sure you retain the elements you wish to keep while modifying the ones you want to change.

The idea is that most systems that are already in place more or less already work, indicating that there are more elements that are right than are wrong. Thus, it’ll be easier, simpler, and less complicated to fix what’s wrong than to violate another systems principle:

“Don’t reinvent the wheel.”

Sometimes, on the other hand, something is such an unholy mess that trying to pick out those elements that need to change from the parts you don’t wish to change is so difficult that it’s not worth the effort. At that point, you’re better off scrapping the whole thing (throwing the baby out with the bathwater) and starting over from scratch.

Several months ago, I noticed that a seam in the convertible top on my sports car had begun to split. I quickly figured out that the big brush roller at my neighborhood automated car wash was over stressing the more-than-a-decade-old fabric. Naturally, I stopped using that car wash, and started looking around for a hand-detailing shop that would be more gentle.

But, that still left me with a convertible top that had started to split. So, I started looking at my options for fixing the problem.

Considering the car’s advanced age, and that a number of little things were starting to fail, I first considered trading the whole car in for a newer model. That, of course, would violate the rule about not throwing the baby out with the bath water. I’d be discarding the whole car just because of a small flaw, which might be repaired.

Of course, I’d also be getting rid of a whole raft of potentially impending problems. Then, again, I might be taking on a pile of problems that I knew nothing about.

It turned out, however, that the best car-replacement option was unacceptable, so I started looking into replacing just the convertible top. That, too, turned out to be infeasible. Finally, I found an automotive upholstery specialist who described a patching scheme that would solve the immediate problem and likely last through the remaining life of the car. So, that’s what I did.

I’ve put you through listening to this whole story to illustrate the thought process behind applying the “don’t throw the baby out with the bathwater” rule.

Unfortunately, our current President, Donald Trump, seems to have never learned anything about systems engineering, or about babies and bathwater. He’s apparently enthralled with the idea that he can bully U.S. trading partners into giving him concessions when he negotiates with them one-on-one. That’s the gist of his love of bilateral trade agreements.

Apparently, he feels that if he gets into a multilateral trade negotiation, his go-to strategy of browbeating partners into giving in to him might not work. Multiple negotiating partners might get together and provide a united front against him.

In fact, that’s a reasonable assumption. He’s a sufficiently weak deal maker on his own that he’d have trouble standing up to a combination of, say, Mexico’s Peña Nieto and Canada’s Trudeau banded together against him.

With that background, it’s not hard to understand why POTUS is looking at all U.S. treaties, which are mostly multilateral, searching for any niddly thing wrong with them to use as an excuse to scrap the whole arrangement and start over. Obvious examples are the NAFTA treaty and the Iran Nuclear Accord.

Both of these treaties have been in place for some time, and have generally achieved the goals they were put in place to achieve. Howsoever, they’re not perfect, so POTUS is in the position of trying to “fix” them.

Since both these treaties are multilateral deals, to make even minor adjustments POTUS would have to enter multilateral negotiations with partners (such as Germany’s quantum-physicist-turned-politician, Angela Merkel) who would be unlikely to kowtow to his bullying style. Robbed of his signature strategy, he’d rather scrap the whole thing and start all over, taking on partners one at a time in bilateral negotiations. So, that’s what he’s trying to do.

A more effective strategy would be to forget everything his ghostwriter put into his self-congratulatory “How-To” book The Art of the Deal, enumerate a list of what’s actually wrong with these documents, and tap into the cadre of veteran treaty negotiators that used to be available in the U.S. State Department to assemble a team of career diplomats capable of fixing what’s wrong without throwing the babies out with the bathwater.

But, that would violate his narcissistic world view. He’d have to admit that it wasn’t all about him, and acknowledge one of the first principles of project management (another discipline that he should have vast knowledge of, but apparently doesn’t):

“Begin by making sure the needs of all stakeholders are built into any project plan.”

Reaping the Whirlwind

Tornado
Powerful Tornado destroying property, with lightning in the background. Solarseven/Shutterstock.com

24 October 2018 – “They sow the wind, and they shall reap the whirlwind” is a saying from The Holy Bible‘s Old Testament Book of Hosea. I’m certainly not a Bible scholar, but, having been paying attention for seven decades, I can attest to the saying’s validity.

The equivalent Buddhist concept is karma, which is the motive force driving the Wheel of Birth and Death. It is also wrapped up with samsara, which is epitomized by the saying: “What goes around comes around.”

Actions have consequences.

If you smoke a pack of Camels a day, you’re gonna get sick!

By now, you should have gotten the idea that “reaping the whirlwind” is a common theme among the world’s religions and philosophies. You’ve got to be pretty stone headed to have missed it.

Apparently the current President of the United States (POTUS), Donald J. Trump, has been stone headed enough to miss it.

POTUS is well known for trying to duck consequences of his actions. For example, during his 2016 Presidential Election campaign, he went out of his way to capitalize on Wikileaks‘ publication of emails stolen from Hillary Clinton‘s private email server. That indiscretion and his attempt to cover it up by firing then-FBI-Director James Comey grew into a Special Counsel Investigation, which now threatens to unmask all the nefarious activities he’s engaged in throughout his entire life.

Of course, Hillary’s unsanctioned use of that private email server while serving as Secretary of State is what opened her up to the email hacking in the first place! That error came back to bite her in the backside by giving the Russians something to hack. They then forwarded that junk to Wikileaks, who eventually made it public, arguably costing her the 2016 Presidential election.

Or, maybe it was her standing up for her philandering husband, or maybe lingering suspicions surrounding the pair’s involvement in the Whitewater scandal. Whatever the reason(s), Hillary, too, reaped the whirlwind.

In his turn, Russian President Vladimir Putin sowed the wind by tasking operatives to hack Hillary’s email server. Now he’s reaping the whirlwind in the form of a laundry list of sanctions by western governments and Special Counsel Investigation indictments against the operatives he sent to do the hacking.

Again, POTUS showed his stone-headedness about the Bible verse by cuddling up to nearly every autocrat in the world: Vlad Putin, Kim Jong Un, Xi Jinping, … . The list goes on. Sensing waves of love emanating from Washington, those idiots have become ever more extravagant in their misbehavior.

The latest example of an authoritarian regime rubbing POTUS’ nose in filth is the apparent murder and dismemberment of Saudi Arabian journalist Jamal Khashoggi when he briefly entered the Saudi consulate in Istanbul on personal business.

The most popular theory of the crime lays blame at the feet of Mohammad Bin Salman Al Saud (MBS), Crown Prince of Saudi Arabia and the country’s de facto ruler. Unwilling to point his finger at another would-be autocrat, POTUS is promoting a Saudi cover-up attempt suggesting the murder was done by some unnamed “rogue agents.”

Actually, that theory deserves some consideration. The idea that MBS was emboldened (spelled S-T-U-P-I-D) enough to have ordered Khashoggi’s assassination in such a ham-fisted way strains credulity. We should consider the possibility that ultra-conservative Wahhabist factions within the Saudi government, who see MBS’ reforms as a threat to their historical patronage from the oil-rich Saudi monarchy, might have created the incident to embarrass MBS.

No matter what the true story is, the blowback is a whirlwind!

MBS has gone out of his way to promote himself as a business-friendly reformer. This reputation has persisted despite repeated instances of continued repression in the country he controls.

The whirlwind, however, is threatening MBS’ and the Saudi monarchy’s standing in the international community. In particular, international bankers, led by JP Morgan Chase’s Jamie Dimon, and a host of Silicon Valley tech companies are running for the exits from Saudi Arabia’s three-day Financial Investment Initiative conference, which was scheduled to start Tuesday (23 October 2018).

That is a major embarrassment and will likely derail MBS’ efforts to modernize Saudi Arabia’s economy away from dependence on oil revenue.

It appears that these high-powered executives are rethinking the wisdom of dealing with the authoritarian Saudi regime. They’ve decided not to sow the wind by dealing with the Saudis because they don’t want to reap the whirlwind likely to result!

Update

Since this manuscript was drafted, it’s become clear that we’ll never get the full story about the Khashoggi incident. Both regimes involved (Turkey and Saudi Arabia) are authoritarian, with no incentive to be honest about this story. While Saudi Arabia makes a pretense of press freedom, this incident shows its true colors (i.e., color it repressive). Turkey hasn’t given even a passing nod to press freedom for years. It’s like two rival foxes telling the dog about a henhouse break-in.

On the “dog” side, we’re stuck with a POTUS who attacks press freedom on a daily basis. So, who’s going to ferret out the truth? Maybe the Brits or the French, but not the U.S. Executive Branch!

Immigration in Perspective

Day without immigrants protest
During ‘A Day Without Immigrants’, more than 500,000 people marched down Wilshire Boulevard in Los Angeles, CA to protest a proposed federal crackdown on illegal immigration. Krista Kennell / Shutterstock.com

17 October 2018 – Immigration is, by and large, a good thing. It’s not always a good thing, and it carries with it a host of potential problems, but in general immigration is better than its opposite: emigration. And, there are a number of reasons for that.

Immigration is movement toward some place. Emigration is flow away from a place.

Mathematically, population shifts are described by a non-homogeneous second-order partial differential equation. I expect that statement means absolutely nothing to about half the target audience for this blog, and a fair fraction of the others have (like me) forgotten most of what they ever knew (or wanted to know) about such equations. So, I’ll start with a short review of the relevant points of how these things behave.

It’ll help the rest of this blog make a lot more sense, so bear with me.

Basically, the relevant non-homogeneous second-order partial differential equation is something called the “diffusion equation.” Leaving the detailed math aside, what this equation says is that the rate of migration of just about anything from one place to another depends on the spatial distribution of population density, a mobility factor, and a driving force pushing the population in one direction or the other.

Things (such as people) “diffuse” from places with higher densities to those with lower densities.

That tendency is moderated by a “mobility” factor that expresses how easy it is to get from place to place. It’s hard to walk across a desert, so mobility of people through a desert is low. Similarly, if you build a wall across the migration path, that also reduces mobility. Throwing up all kinds of passport checks, visas and customs inspections also reduces mobility.

Giving people automobiles, buses and airplanes, on the other hand, pushes mobility up by a lot!

But, changing mobility only affects the rate of flow. It doesn’t do anything to change the direction of flow, or to actually stop it. That’s why building walls has never actually worked. It didn’t work for the First Emperor of China. It didn’t work for Hadrian. It hasn’t done much for the Israelis, either.

Direction of flow is controlled by a forcing term. Existence of that forcing term is what makes the equation “non-homogeneous” rather than “homogeneous.” The homogeneous version (without the forcing term) is called the “heat equation” because it models what dumb-old thermal energy does.

Things that can choose what to do (like people), and have feet to help them act on their choices, get to “vote with their feet.” That means they can go where they want, instead of always floating downstream like a dead leaf.

The forcing term largely accounts for the desirability of being in one place instead of another. For example, the United States has a reputation for being a nice place to live. Thus, people try to flock here in droves from places that are not so nice. Thus, there’s a forcing term that points people from other places to the U.S.
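Since the argument leans on this equation throughout, here’s a minimal numerical sketch of it in Python (my own toy illustration; the grid, the mobility D, and the forcing v are invented values, not demographic data):

```python
# Toy 1-D drift-diffusion sketch (illustrative only):
#   dp/dt = D * d2p/dx2 - v * dp/dx
#   D = mobility (how easy it is to move)
#   v = forcing term (which way the "nicer place" pull points)

def step(p, D, v, dx, dt):
    """Advance the density profile one explicit finite-difference time step."""
    new = p[:]
    for i in range(1, len(p) - 1):
        diffusion = D * (p[i + 1] - 2 * p[i] + p[i - 1]) / dx ** 2
        drift = -v * (p[i + 1] - p[i - 1]) / (2 * dx)
        new[i] = p[i] + dt * (diffusion + drift)
    return new

# Everyone starts piled up on the left half of the line...
p = [1.0] * 10 + [0.0] * 10
for _ in range(200):
    p = step(p, D=0.5, v=0.2, dx=1.0, dt=0.5)
# ...and the population flows rightward, toward the low-density region.
```

Raising D (cars, buses, airplanes) speeds the flow, and lowering it (deserts, walls, visa checks) slows it, but only the forcing term v sets the direction.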

That’s the big reason you want to live in a country that has immigration issues, rather than one with emigration issues. The Middle East had a serious emigration problem in 2015. For a number of reasons, it had become a nasty place to live. Folks that lived there wanted out in a big way. So, they voted with their feet.

There was a huge forcing term that pushed a million people from the Middle East to elsewhere, specifically Europe. Europe was considered a much nicer place to be, so people were willing to go through Hell to get there. Thus: emigration from the Middle East, and immigration into Europe.

In another example, Nazi occupation in the first half of the twentieth century made most places in Europe distasteful, especially for certain groups of people. So, the forcing term pushed a lot of people across the Atlantic toward America. In 1942 Michael Curtiz made a film about that. It was called Casablanca, and it’s arguably one of the greatest films Humphrey Bogart ever starred in.

Similarly, for decades Mexico had some serious problems with poverty, organized crime and corruption. Those are things that make a place nasty to live in, so there was a big forcing function pushing people to cross the border into the much nicer United States.

In recent decades, regime change in Mexico cleaned up a lot of the country’s problems, so migration from Mexico to the United States dropped like a stone in the last years of the Obama administration. When Mexico became a nicer place to live, people stopped wanting to move away.

Duh!

There are two morals to this story:

  1. If you want to cut down on immigration from some other country, help that other country become a nicer place to live. (Conversely, you could turn your own country into a third-world toilet so nobody wants to come in, but that’s not what we want.)
  2. Putting up walls and other barriers to immigration doesn’t stop it; it only slows it down.

We’re All Immigrants

I should subtitle this section “The Bigot’s Lament.”

There isn’t a bi-manual (two-handed) biped (two-legged) creature anywhere in North or South America who isn’t an immigrant or a descendant of immigrants.

There have been two major influxes of human population in the history (and pre-history) of the Americas. The first occurred near the end of the last Ice Age, and the second occurred during the European Age of Discovery.

Before about ten-thousand years ago, there were horses, wolves, saber-tooth tigers, camels(!), elephants, bison and all sorts of big and little critters running around the Americas, but not a single human being.

(The actual date is controversial, but you get the idea.)

Anatomically modern humans (and there aren’t any others, because everyone else went extinct tens of thousands of years ago) developed in East Africa about 200,000 years ago.

They were, by the way, almost certainly negroes. A fact every racist wants to ignore is that everybody has black ancestors! You can’t hate black people without hating your own forefathers.

More important for this discussion, however, is that every human being in North and South America is descended from somebody who came here from somewhere else. So-called “Native Americans” came here in the Pleistocene Epoch, most likely from Siberia. Most everybody else showed up after Christopher Columbus accidentally fell over North America.

That started the second big migration of people into the Americas: European colonization.

Mostly these later immigrants were imported to fill America’s chronic labor shortage.

America’s labor shortage has persisted since the Spanish conquistadores pretty much wiped out the indigenous people, leaving the Spaniards with hardly anybody to do the manual labor on which their economy depended. Waves of forced and unforced migration have never caught up. We still have a chronic labor shortage.

Immigrants generally don’t come to take jobs from “real” Americans. They come here because there are by-and-large more available jobs than workers.

Currently, natural reductions in birth rates among better educated, better housed, and generally wealthier Americans have left the United States (like most developed countries) with a problem: the working-age population is declining while the older, retired population expands. That means we haven’t got enough young squirts to support us old farts in retirement.

The only viable solution is to import more young squirts. That means welcoming working-age immigrants.
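To make the arithmetic behind that concrete, here’s a back-of-the-envelope sketch (the numbers are invented for illustration, not census figures):

```python
# Back-of-the-envelope dependency-ratio arithmetic (made-up numbers).
workers = 100.0   # working-age "young squirts"
retirees = 40.0   # "old farts" drawing support

ratio_before = retirees / workers   # 0.40 retirees per worker

# Admit 20 working-age immigrants and the burden per worker drops:
immigrants = 20.0
ratio_after = retirees / (workers + immigrants)   # ~0.33 retirees per worker
```

The same arithmetic run in reverse is why a shrinking workforce, with no immigration, squeezes retirement systems harder every year.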

End of story.

Climate Models Bat Zero

Climate models vs. observations
Whups! Since the 1970s, climate models have overestimated global temperature rise by … a lot! Cato Institute

The articles discussed here reflect the print version of The Wall Street Journal, rather than the online version. Links to online versions are provided. The publication dates, and some of the content, differ between the two versions.

10 October 2018 – Baseball is well known to be a game of statistics. Fans pay as much attention to statistical analysis of performance by both players and teams as they do to action on the field. They hope to use those statistics to indicate how teams and individual players are likely to perform in the future. It’s an intellectual exercise that is half the fun of following the sport.

While baseball statistics are quoted to three decimal places, or one part in a thousand, fans know to ignore the last decimal place, be skeptical of the second decimal place, and recognize that even the first decimal place has limited predictive power. It’s not that these statistics are inaccurate or in any sense meaningless, it’s that they describe a situation that seems predictable, yet is full of surprises.

With 18 players in a game at any given time, a complex set of rules, and at least three players and an umpire involved in the outcome of every pitch, a baseball game is a highly chaotic system. What makes it fun is seeing how this system evolves over time. Fans get involved by trying to predict what will happen next, then quickly seeing if their expectations materialize.

The essence of a chaotic system is conditional unpredictability. That is, the predictability of any result drops more-or-less drastically with time. For baseball, the probability of, say, a player maintaining their batting average is fairly high on a weekly basis, drops drastically on a month-to-month basis, and simply can’t be predicted from year to year.

Folks call that “streakiness,” and it’s one of the hallmarks of mathematical chaos.

Since the 1960s, mathematicians have recognized that weather is also chaotic. You can say with certainty what’s happening right here right now. If you make careful observations and take into account what’s happening at nearby locations, you can be fairly certain what’ll happen an hour from now. What will happen a week from now, however, is a crapshoot.
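That drop-off in predictability is easy to demonstrate with the simplest chaotic system in the mathematician’s toolkit, the logistic map (my example, not one from the baseball or weather literature):

```python
# Chaos demo: the logistic map x -> r*x*(1-x) at r=4 is chaotic. Two starting
# points that differ by one part in a billion track each other for a while,
# then diverge until they bear no relation at all.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 50)
b = logistic_trajectory(0.2 + 1e-9, 50)
# Early steps agree closely; by step 30 and beyond the two have decorrelated.
```

That’s “conditional unpredictability” in miniature: short-range forecasts are easy, long-range forecasts are a crapshoot, no matter how precisely you measure the starting point.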

This drives insurance companies crazy. They want to live in a deterministic world where they can predict their losses far into the future so that they can plan to have cash on hand (loss reserves) to cover them. That works for, say, life insurance. It works poorly for losses due to extreme-weather events.

That’s because weather is chaotic. Predicting catastrophic weather events next year is like predicting Miami Marlins pitcher Drew Steckenrider‘s earned-run-average for the 2019 season.

Laugh out loud.

Notes from 3 October

My BS detector went off big-time when I read an article in this morning’s Wall Street Journal entitled “A Hotter Planet Reprices Risk Around the World.” That headline is BS for many reasons.

Digging into the article turned up the assertion that insurance providers were using deterministic computer models to predict risk of losses due to future catastrophic weather events. The article didn’t say that explicitly. We have to understand a bit about computer modeling to see what’s behind the words they used. Since I’ve been doing that stuff since the 1970s, pulling aside the curtain is fairly easy.

I’ve also covered risk assessment in industrial settings for many years. It’s not done with deterministic models. It’s not even done with traditional mathematics!

The traditional mathematics you learned in grade school uses real numbers. That is, numbers with a definite value.

Like Pi.

Pi = 3.1415926 ….

We know what Pi is because it’s measurable. It’s the ratio of a circle’s circumference to its diameter.

Measure the circumference. Measure the diameter. Then divide one by the other.

The ancient Egyptians performed the exercise a kazillion times and noticed that, no matter what circle you used, no matter how big it was, whether you drew it on papyrus or scratched it on a rock or laid it out in a crop circle, you always came out with the same number. That number eventually picked up the name “Pi.”
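That universality is easy to check without any papyrus. As a minimal sketch (my illustration, in the spirit of Archimedes rather than the Egyptians), inscribe a hexagon in a circle, repeatedly double the number of sides, and the perimeter-to-diameter ratio converges on the same number no matter what:

```python
import math

def estimate_pi(doublings=20):
    """Archimedes-style estimate: perimeter of an inscribed polygon
    divided by the circle's diameter."""
    n, side = 6, 1.0  # regular hexagon inscribed in a circle of radius 1
    for _ in range(doublings):
        # Half-angle chord formula, written to avoid round-off trouble.
        side = side / math.sqrt(2.0 + math.sqrt(4.0 - side * side))
        n *= 2
    return n * side / 2.0  # polygon perimeter / circle diameter

print(estimate_pi())  # 3.141592653589...
```

Twenty doublings give a polygon with over six million sides, and the ratio agrees with Pi to about a dozen decimal places.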

Risk assessment is NOT done with traditional arithmetic using deterministic (real) numbers. It’s done using what’s called “fuzzy logic.”

Fuzzy logic is not like the fuzzy thinking used by newspaper reporters writing about climate change. The “fuzzy” part simply means it uses fuzzy categories like “small,” “medium” and “large” that don’t have precisely defined values.

While computer programs are perfectly capable of dealing with fuzzy logic, they won’t give you the kind of answers cost accountants are prepared to deal with. They won’t tell you that you need a risk-reserve allocation of $5,937,652.37. They’ll tell you something like “lots!”

You can’t take “lots” to the bank.
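To make the “fuzzy categories” idea concrete, here is a hypothetical sketch. The category breakpoints below are invented purely for illustration; they come from no real risk model:

```python
def tri(x, lo, peak, hi):
    """Triangular membership function: 0 outside (lo, hi), 1 at peak."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)

def loss_category(loss_millions):
    """Degree (0 to 1) to which a loss belongs to each fuzzy category.
    A given loss can be partly 'small' AND partly 'medium' at once."""
    return {
        "small":  tri(loss_millions, -1, 0, 10),
        "medium": tri(loss_millions, 5, 15, 25),
        "large":  tri(loss_millions, 20, 50, 80),
    }

print(loss_category(8))  # partly 'small' (0.2), partly 'medium' (0.3)
```

Note what’s missing: there is no single dollar figure anywhere in the answer, which is exactly why cost accountants can’t take it to the bank.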

The next problem is imagining that global climate models could have any possible relationship to catastrophic weather events. Catastrophic weather events are, by definition, catastrophic. To analyze them you need the kind of mathematics called “catastrophe theory.”

Catastrophe theory is one of the underpinnings of chaos. In Steven Spielberg’s 1993 movie Jurassic Park, the character Ian Malcolm illustrates the underlying sensitivity with the example of a drop of water rolling off the back of his hand. Whether it drips off to the right or left depends critically on how his hand is tipped. A small change creates an immense difference.

If a ball is balanced at the edge of a table, it can either stay there or drop off, and you can’t predict in advance which will happen.

That’s the thinking behind catastrophe theory.
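The ball-on-the-table idea can be simulated. As a minimal sketch (my illustration, not from the article), put a marble in a double-well potential and tilt the table by an arbitrarily small amount: the tilt’s sign, not its size, decides which well the marble settles into, a discontinuous jump in the outcome from a continuous change in the input.

```python
def settle(tilt, x=0.0, rate=0.01, steps=10_000):
    """Let a marble roll downhill in the double-well potential
    V(x) = x**4 - 2*x**2 + tilt*x, starting balanced at x = 0."""
    for _ in range(steps):
        slope = 4 * x**3 - 4 * x + tilt  # dV/dx at the current position
        x -= rate * slope                # move a little way downhill
    return x

print(settle(+1e-9))  # lands in the left well, near x = -1
print(settle(-1e-9))  # lands in the right well, near x = +1
```

A tilt of one part in a billion flips the answer from one well to the other, which is why outcomes like this can only be measured after the fact, not predicted.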

The same analysis goes into predicting what will happen with a hurricane. As I recall, at the time Hurricane Florence (2018) made landfall, most models predicted it would move south along the Appalachian Ridge. Another group of models predicted it would stall out to the northwest.

When push came to shove, however, it moved northeast.

What actually happened depended critically on a large number of details that were too small to include in the models.

How much money was lost due to storm damage was a result of the result of unpredictable things. (That’s not an editing error. It was really the second order result of a result.) It is a fundamentally unpredictable thing. The best you can do is measure it after the fact.

That brings us to comparing climate-model predictions with observations. We’ve got enough data now to see how climate-model predictions compare with observations on a decades-long timescale. The graph above summarizes results compiled in 2015 by the Cato Institute.

Basically, it shows that, not only did the climate models overestimate the temperature rise from the late 1970s to 2015 by a factor of approximately three, but in the critical last decade, when the computer models predicted a rapid rise, the actual observations showed that it nearly stalled out.

Notice that the divergence between the models and the observations increased with time. As I’ve said, that’s the main hallmark of chaos.

It sure looks like the climate models are batting zero!

I’ve been watching these kinds of results show up since the 1980s. It’s why by the late 1990s I started discounting statements like the WSJ article’s: “A consensus of scientists puts blame substantially on emissions of greenhouse gasses from cars, farms and factories.”

I don’t know who those “scientists” might be, but it sounds like they’re assigning blame for an effect that isn’t observed. Real scientists wouldn’t do that. Only politicians would.

Clearly, something is going on, but what it is, what its extent is, and what is causing it is anything but clear.

In the data depicted above, the results from global climate modeling do not look at all like the behavior of a chaotic system. The data from observations, however, do look like what we typically get from a chaotic system. Stuff moves constantly. On short time scales it shows apparent trends. On longer time scales, however, the trends tend to evaporate.

No wonder observers like Steven Pacala, who is Frederick D. Petrie Professor in Ecology and Evolutionary Biology at Princeton University and a board member at Hamilton Insurance Group, Ltd., are led to say (as quoted in the article): “Climate change makes the historical record of extreme weather an unreliable indicator of current risk.”

When you’re dealing with a chaotic system, the longer the record you’re using, the less predictive power it has.

Duh!

Another point made in the WSJ article that I thought was hilarious involved prediction of hurricanes in the Persian Gulf.

According to the article, “Such cyclones … have never been observed in the Persian Gulf … with new conditions due to warming, some cyclones could enter the Gulf in the future and also form in the Gulf itself.”

This sounds a lot like a tongue-in-cheek comment I once heard from astronomer Carl Sagan about predictions of life on Venus. He pointed out that when astronomers observe Venus, they generally just see a featureless disk. Science fiction writers had developed a chain of inferences that led them from that observation of a featureless disk to imagining total cloud cover, then postulating underlying swamps teeming with life, and culminating with imagining the existence of Venusian dinosaurs.

Observation: “We can see nothing.”

Conclusion: “There are dinosaurs.”

Sagan was pointing out that, though it may make good science fiction, that is bad science.

The WSJ reporters, Bradley Hope and Nicole Friedman, went from “No hurricanes ever seen in the Persian Gulf” to “Future hurricanes in the Persian Gulf” by the same sort of logic.

The kind of specious misinformation represented by the WSJ article confuses folks who have genuine concerns about the environment. Before politicians like Al Gore hijacked the environmental narrative, deflecting it toward climate change, folks paid much more attention to the real environmental issue of pollution.

Insurance losses from extreme weather events
Actual insurance losses due to catastrophic weather events show a disturbing trend.

The one bit of information in the WSJ article that appears prima facie disturbing is contained in the graph at right.

The graph shows actual insurance losses due to catastrophic weather events increasing rapidly over time. The article draws the inference that this trend is caused by human-induced climate change.

That’s quite a stretch, considering that there are obvious alternative explanations for this trend. The most likely alternative is that folks have been building more stuff in hurricane-prone areas. With more expensive stuff there to get damaged, insurance losses will rise.

Again: duh!

Invoking Occam’s Razor (prefer the simplest explanation that fits the facts), we tend to favor the building-boom explanation over human-induced climate change.

In summary, I conclude that the 3 October article is poor reporting that draws conclusions that are likely false.

Notes from 4 October

Don’t try to find the 3 October WSJ article online. I spent a couple of hours this morning searching for it, and came up empty. The closest I was able to get was a draft version that I found by searching on Bradley Hope’s name. It did not appear on WSJ‘s public site.

Apparently, WSJ‘s editors weren’t any more impressed with the article than I was.

The 4 October issue presents a corroboration of my alternative explanation of the trend in insurance-loss data: it’s due to a build-up of expensive real estate in areas prone to catastrophic weather events.

In a half-page exposé entitled “Hurricane Costs Grow as Population Shifts,” Kara Dapena reports that, “From 1980 to 2017, counties along the U.S. shoreline that endured hurricane-strength winds from Florence in September experienced a surge in population.”

In the end, this blog posting serves as an illustration of four points I tried to make last month. Specifically, on 19 September I published a post entitled: “Noble Whitefoot or Lying Blackfoot?” in which I laid out four principles to use when trying to determine if the news you’re reading is fake. I’ll list them in reverse of the order I used in last month’s posting, partly to illustrate that there is no set order for them:

  • Nobody gets it completely right – In the 3 October WSJ story, the reporters got snookered by the climate-change political lobby. That article, taken at face value, has to be stamped with the label “fake news.”
  • Does it make sense to you? – The 3 October fake news article set off my BS detector by making a number of statements that did not make sense to me.
  • Comparison shopping for ideas – Assertions in the suspect article contradicted numerous other sources.
  • Consider your source – The article’s source (The Wall Street Journal) is one that I normally trust. Otherwise, I likely never would have seen it, since I don’t bother listening to anyone I catch in a lie. My faith in the publication was restored when the next day they featured an article that corrected the misinformation.

And, You Thought Global Warming was a BAD Thing?

Ice skaters on the frozen Thames river in 1677

10 March 2017 – ‘Way back in the 1970s, when I was an astrophysics graduate student, I was hot on the trail of why solar prominences have the shapes we observe them to have. Being a good little budding scientist, I spent most of my waking hours in the library poring over solar research records, from the (at that time barely existing) current literature back to the beginning of time. Or, at least to the invention of the telescope.

The fact that solar prominences are closely associated with sunspots led me to studying historical measurements of sunspots. Of course, I quickly ran across two well-known anomalies known as the Maunder and Sporer minima. These were periods in the fifteenth through seventeenth centuries when sunspots practically disappeared for decades at a time. Astronomers of the time commented on it, but hadn’t a clue as to why.

The idea that sunspots could disappear for extended periods is not really surprising. The Sun is well known to be a variable star whose surface activity varies on a more-or-less regular 11-year cycle (22 years if you count the fact that the magnetic polarity reverses after every minimum). The idea that any such oscillator can drop out once in a while isn’t hard to swallow.

Besides, when Mommy Nature presents you with an observable fact, it’s best not to doubt the fact, but to ask “Why?” That leads to much more fun research and interesting insights.

More surprising (at the time) was the observed correlation between the Maunder and Sporer minima and a period of anomalously cold temperatures throughout Europe known as the “Little Ice Age.” Interesting effects of the Little Ice Age included the invention of buttons to make winter garments more effective, advances of glaciers in the mountains, ice skating on rivers that previously never froze at all, and the abandonment of Viking settlements in Greenland.

And, crop failures. Can’t forget crop failures! Marie Antoinette’s famous (and probably apocryphal) “Let ’em eat cake” faux pas was triggered by consistent failures of the French wheat harvest.

The moral of the Little Ice Age story is:

Global Cooling = BAD

The converse conclusion:

Global Warming = GOOD

seems less well documented. A Medieval Warm Period from about 950-1250 did correlate with fairly active times for European culture. Similarly, the Roman Warm Period (250 BCE – 400 CE) saw the rise of the Roman civilization. So, we can tentatively conclude that global warming is generally NOT bad.

Sunspots as Markers

The reason it was surprising to see sunspot minima coincide with cool temperatures is that, at the time, astronomers fantasized that sunspots were like clouds that blocked radiation leaving the Sun. Folks assumed that more clouds meant more blocking of radiation, and cooler temperatures on Earth.

Careful measurements quickly put that idea into its grave with a stake through its heart! The reason is another feature of sunspots, which the theory conveniently forgot: they’re surrounded by relatively bright areas (called faculae) that pump out radiation at an enhanced rate. It turns out that the faculae associated with a sunspot easily make up for the dimming effect of the spot itself.

That’s why we carefully measure details before jumping to conclusions!

Anyway, the best solar-output (irradiance) research I was able to find was by Charles Greeley Abbot, who, as Director of the Smithsonian Astrophysical Observatory from 1907 to 1944, assembled an impressive decades-long series of meticulous measurements of the total radiation arriving at Earth from the Sun. He also attempted to correlate these measurements with weather records from various cities.

Blinded by a belief that solar activity (as measured by sunspot numbers) would anticorrelate with solar irradiation and therefore Earthly temperatures, he was dismayed to be unable to make sense of the combined data sets.

By simply throwing out the assumptions, I was quickly able to see that the only correlation in the data was that temperatures more-or-less positively correlated with sunspot numbers and solar irradiation measurements. The resulting hypothesis was that sunspots are a marker for increased output from the Sun’s core. Below a certain level there are no spots. As output increases above the trigger level, sunspots appear and then increase with increasing core output.

The conclusion is that the Little Ice Age corresponded with a long period of reduced solar-core output, and the Maunder and Sporer minima are shorter periods when the core output dropped below the sunspot-trigger level.

So, first, we can conclude (something astronomers have known for decades if not centuries) that the Sun is a variable star. (The term “solar constant” is an oxymoron.) Second, we can conclude that variations in solar output have a profound effect on Earth’s climate. Those conclusions are neither surprising nor in doubt.

We’re also on fairly safe ground to say that (within reason) global warming is a good thing. At least it’s pretty clearly better than global cooling!