Teacher, artist, scientist, engineer, journalist, and author, C.G. Masi writes adventure/mystery novels having multi-layered plots with unconventional characters who apply intelligence, understanding of historical and social issues, and mastery of high technology to resolve the situations they confront. Masi has advanced degrees in astrophysics and business administration, with hundreds of published articles in magazines as diverse as American Iron and Review of Scientific Instruments. As an award-winning magazine editor, he has been involved in launches of four successful magazines. His non-fiction book, How to Set Up Your Motorcycle Workshop, is in its third edition.
6 March 2019 – While surfing the Internet this morning, in a valiant effort to put off actually getting down to business grading that pile of lab reports that I should have graded a couple of days ago, I ran across this posting I wrote in 2013 for Packaging Digest.
Surprisingly, it still seems relevant today, and it covers a subject I haven’t yet treated in this blog. Since I’m planning to devote most of next week to preparing my 2018 tax return, I decided to save some writing time by dusting it off and presenting it as this week’s posting to Tech Trends. I hope the folks at Packaging Digest won’t get their noses too far out of joint about my encroaching on their five-year-old copyright without asking permission.
By the way, this piece is way shorter than the usual Tech Trends essay because of the specifications for that Packaging Digest blog, which was entitled “New Metropolis” in homage to Fritz Lang’s 1927 feature film Metropolis, which told the story of a futuristic mechanized culture and an anthropomorphic robot that a mad scientist creates to bring it down. “New Metropolis” postings were specified to be approximately 500 words long, whereas Tech Trends postings are planned to run 1,000-1,500 words.
Anyway, I hope you enjoy this little slice of recent history.
11 November 2013 – I thought it might be fun—and maybe even useful—to catalog the classifications of these things we call “robots.”
Let’s start with the word “robot” itself. The idea behind it grows from the ancient concept of the golem: an artificial person created by people.
Frankly, the idea of a golem scared the bejeezus out of the ancients because the golem stands at the interface between living and non-living things. In our “enlightened” age, it still scares the bejeezus out of people!
If we restricted the field to golems—strictly humanoid robots, or androids—we wouldn’t have a lot to talk about, and practically nothing to do. The things haven’t proved particularly useful. So, I submit that we should expand the “robot” definition to include all kinds of human-made artificial critters.
This has, of course, already been done by everyone working in the field. The SCARA (selective compliance assembly robot arm) machines from companies like Kuka, and the delta robots from Adept Technologies, clearly rely on this expanded definition. Mobile robots, such as the Roomba from iRobot, push the boundary in another direction. Weird little things like the robotic insects and worms so popular with academics these days push in a third direction.
Considering the foregoing, the first observation is that the line between robot and non-robot is fuzzy. The old 1950s-era dumb thermostats probably shouldn’t be considered robots, but a smart, computer-controlled house moving in the direction of the Jarvis character in the Iron Man movies probably should. Things in between are – in between. Let’s bite the bullet, admit we’re dealing with fuzzy-logic categories, and move on.
Okay, so what are the main characteristics symptomatic of this fuzzy category “robot”?
First, it’s gotta be artificial. A cloned sheep is not a robot. Even designer germs are non-robots.
Second, it’s gotta be automated. A fly-by-wire fighter jet is not a robot. A drone linked at the hip to a human pilot is not a robot. A driverless car, on the other hand, is a robot. (Either that, or it’s a traffic accident waiting to happen.)
Third, it’s gotta interact with the environment. A general-purpose computer sitting there thinking computer-like thoughts is not a robot. A SCARA unit assembling a car is. I submit that an automated bill-paying system arguing through the telephone with my wife over how much to take out of her checkbook this month is a robot.
More problematic is a fourth direction—embedded systems, like automated houses—that begs to be admitted into the robotic fold. I vote for letting them in, along with artificial intelligence (AI) systems, like the robot bill-paying systems my wife is so fond of arguing with.
Finally (maybe), it’s gotta be independent. To be a robot, the thing has to take basic instruction from a human, then go off on its onesies to do the deed. Ideally, you should be able to say, “Go wash the car,” and it’ll run off as fast as its little robotic legs can carry it to wash the car. More prosaically, you should be able to program it to vacuum the living room at 4:00 a.m., then wake up at 6:00 a.m. to a freshly vacuumed living room.
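The fuzzy-category idea can even be put in code. Here’s a minimal sketch: rate a candidate machine on each of the four criteria above, then average the ratings into a “robot-ness” membership value between 0 and 1. The weights and example ratings are my own invented illustrations, not anything official.

```python
# Fuzzy "robot-ness" score: each criterion is rated 0.0 (absent) to 1.0 (fully present).
# The four criteria come from the text; the example ratings below are invented guesses.

CRITERIA = ("artificial", "automated", "interactive", "independent")

def robotness(scores: dict) -> float:
    """Average the four criterion ratings into a fuzzy membership value in [0, 1]."""
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)

# A 1950s dumb thermostat: artificial and automated, barely interactive, not independent.
thermostat = {"artificial": 1.0, "automated": 1.0, "interactive": 0.3, "independent": 0.1}

# A Roomba: strong on every count.
roomba = {"artificial": 1.0, "automated": 1.0, "interactive": 0.9, "independent": 0.8}

print(f"thermostat: {robotness(thermostat):.2f}")  # 0.60 -- fuzzily in between
print(f"roomba:     {robotness(roomba):.2f}")      # 0.93 -- clearly a robot
```

The point of the exercise: instead of arguing whether the thermostat “is” or “isn’t” a robot, a fuzzy score lets it be 60% of one.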
27 February 2019 – Some ideas are just so beautiful that we try to hang on to them even after failure after failure shows them to be unrealizable. Especially for the naive, these ideas hold such fascination that they persist long after cooler inspection consigns them to the dust bin of fantasy. This essay looks at two such ideas that display features in common: the ancient Greek myth of the flying horse, Pegasus, and the modern myth of the socialist state.
The ancient myth of the flying horse Pegasus is an obvious example. There’s no physical reason for such a creature to be impossible. Actual horses are built far too robustly to take to the air on their own power, but a delicately built version of Equus ferus fitted with properly functioning wings could certainly fly.
That’s not the objection. Certainly, other robust land animals have developed flying forms. Birds, of course, developed from theropod dinosaurs, which we once pictured as great lumbering beasts. Bats belong to the same mammalian class as horses, and they fly very well, indeed.
The objection to the existence of Pegasus-like creatures comes from evolutionary history. Specifically, the history of land-based vertebrates.
You see, all land-based vertebrates on Earth evolved from a limited number of lobe-finned fish species. In fact, the number of fish species contributing DNA to land-vertebrate animals is likely limited to one.
All land vertebrates have exactly the same basic body form – with modifications – that developed from features common to lobe-finned fishes. Basically, they have:
One spine that extends into a tail,
One head appended to the forward (opposite the tail) end of the spine,
Two front appendages that developed from the fish’s pectoral fins, and
Two rear appendages that developed from the fish’s pelvic fins.
Not all land-based vertebrates retain all these features. Some ancestral features (like the human tail and cetacean rear legs) atrophied nearly to non-existence. But the listed features are the only ones land-based vertebrates have ever had. Of course, I’m also including creatures such as birds and dolphins that developed from land-based critters as they moved on to other habitats or back to the sea.
The reason I suggest that all land vertebrates likely hail from one fish species is that no land vertebrates have ever had appendages derived from anal, caudal, or adipose fins, so we all seem to have developed from some fish species that lacked them.
“Aha!” you say, “cetaceans like dolphins and whales have tail fins!”
“Nope,” I rebut. “Notice that cetacean tail flukes are fleshy appendages extending horizontally from the tip of the animals’ tails, not bony appendages oriented vertically like a fish’s caudal fins.”
Okay, so we’ve discovered what’s wrong with Pegasus that is not wrong with bats, pterodactyls, and birds. All the real land-based vertebrate forms have four limbs, whereas the fanciful Pegasus has six (four legs and two wings). Six-limbed Pegasus can’t exist because there aren’t any similar prior forms for it to have evolved from.
So, Pegasus is a beautiful idea that simply can’t exist on Earth.
Well, you could have some sort of flying-horse-like creature that evolved on some other planet, then caught a convenient flying saucer to pop over to Earth, but it wouldn’t be native, and likely wouldn’t look at all earthlike.
So, what has all this got to do with socialism?
Well, as I’ve intimated, both are beautiful ideas that people are pretty fond of. Notwithstanding its popularity, Pegasus is not possible (as a native Earth creature) for a very good reason. Socialism is also a beautiful idea that people (at least great swaths of the population) are pretty fond of. Socialism is, however, also not possible as a stable form of society for a very good reason.
The reason socialism is not possible as a stable form of society goes back to our old friend, the Tragedy of the Commons. If you aren’t intimately familiar with this concept, follow the link to a well-written article by Margaret E. Banyan, Adjunct Assistant Professor in the Southwest Florida Center for Public and Social Policy at Florida Gulf Coast University, which explains the Tragedy, its origins, and ways that have been proposed to ameliorate its effects.
Anyway, economist Milton Friedman summarized the Tragedy of the Commons with the phrase: “When everybody owns something, nobody owns it … .”
The Tragedy of the Commons speaks directly to why true socialism is impossible, or at least not tenable as a stable, permanent system. Let’s start with what the word “socialism” actually means. According to Merriam-Webster, socialism is:
“any of various economic and political theories advocating collective or governmental ownership and administration of the means of production and distribution of goods.”
Other dictionaries largely agree, so we’ll work with this definition.
So, you can see where the Tragedy of the Commons connects to socialism. The beautiful idea relates to the word “collective.”
We know that human beings evolved as territorial animals, but we’d like to imagine a utopia where we’ve gotten past this primitive urge. Without territoriality, one could imagine a world where conflict would cease to exist. Folks would just get along because nobody’d say “Hey, that’s mine. Keep your mitts off!”
The problem with such a world is the Tragedy of the Commons as described by Friedman: if everybody owns the means of production, then nobody owns it.
There are two potential outcomes:
Scenario 1 is the utter destruction of whatever resource is held in common as described at the start of Banyan’s essay.
Scenario 2 is what happened to the first recorded experiment with democracy in ancient Athens: somebody steps up to the plate and takes over management of the resource for everybody. For Athens, it was a succession of autocratic rulers, culminating in subjugation under Alexander the Great’s Macedon. In effect, to save the resource from destruction, some individual moves in to “own” it.
In scenario 1, the resource is destroyed along with the socialist society that collectively owns it. Everyone either starves or leaves. Result: no more socialism.
In scenario 2, the resource is saved by being claimed by some individual. That individual sets up rules for apportioning use of the resource, which is, in effect, no longer collectively owned. Result: dictatorship and, again, no more socialism.
Generally, all socialist states eventually degenerate into dictatorships via scenario 2. They invariably keep the designation “socialist,” but their governments are de facto authoritarian, not socialist. This is why I say socialism is a beautiful idea that is, in the long term, impossible. Socialist states can be created, but they very quickly come under authoritarian rule.
The Democracy Option
The Merriam-Webster definition admits of one more scenario, and that’s what we use in democratically governed nations, which are generally not considered socialist states: government ownership of some (but not all) resources.
If we have a democracy, there are all kinds of great things we can have governmentally owned, but not collectively owned. Things that everybody needs and everybody uses and everybody has to share, like roads, airspace, forests, electricity grids, and national parks. These are prime candidates for government ownership.
Things like wives, husbands, houses, and bicycles (note the big bicycle-sharing SNAFU recently reported in China) have historically been shown to be best not shared!
So, in a democracy, lots of stuff can be owned by the government, rather than by individuals or “everybody.”
A prime example is airspace. I don’t mean the air itself. I mean airspace! That is the space in the air over anyplace in the United States, or virtually the entire world. One might think it’s owned by everybody, but that just ain’t so.
You just try floating off at over 500 feet above ground level (AGL) in any type of aircraft and see where it gets you. Ya just can’t do it legally. You have to get permission from the Federal Government (in the form of a pilot’s license), which involves a great whacking pile of training, examinations, and even background checks. That’s because everybody does NOT own airspace above 500 feet AGL (and great, whacking swaths of the stuff lower down, too), the government does. You, personally, individually or collectively, don’t own a bit of it and have no rights to even be there without permission from its real owner, the Federal Government.
Another one is the Interstate Highway System. Try walking down Interstate 75 in, say, Florida. Assuming you survive long enough without getting punted off the roadway by a passing Chevy, you’ll soon find yourself explaining what the heck you think you’re doing to the nearest representative (spelled C-O-P) of whatever division of government takes ownership of that particular stretch of roadway. Unless you’ve got a really good excuse (e.g., “I gotta pee real bad!”) they’ll immediately escort you off the premises via the nearest exit ramp.
Ultimately, the only viable model of socialism is a limited one that combines individual ownership of some resources that are not shared, with government ownership of other resources that are shared. Democracy provides a mechanism for determining which is what.
13 February 2019 – Most mentally adult human beings recognize that binary thinking seldom proves useful in real-world situations. Our institutions, however, seem to be set up to promote binary thinking. And, that accounts for most of today’s societal dysfunction.
Let’s start with what binary thinking really is. We’ve all heard disparaging remarks about “seeing things in black and white.” Simplistic thinking tends to categorize things into two starkly divided categories: good vs. evil, left vs. right, and, of course, dark vs. light. That latter category gives rise to the “black and white” metaphor.
“Binary thinking” refers to this simplistic strategy of dividing whatever we’re thinking about into two (hence the word “binary”) categories.
In many situations, binary thinking makes sense. For example, in team sports it makes sense to divide outcomes of contests into Team 1 wins and Team 2 loses.
Ultimately, every decision process degenerates into a selection between two choices. We do one and not the other. Even with multiple choices, we make the ultimate decision to pick one of the options to win after relegating all the others into the “loser” category.
If you think about it, however, those are always (or almost always) artificial situations. Mommy Nature seldom presents us with clear options. You aren’t presented with a clear choice between painting your house red or blue. House paint comes in a wide variety of hues that are blends of five primary colors: red, blue, yellow, black and white.
Even people aren’t really strictly divided into men and women. Gender is a multidimensional mix of male-associated and female-associated traits, each blending from one extreme to the other. The strict division into male and female is a dichotomy that we, as a society, impose on the world. Even anatomy presents numerous examples of intermediate forms.
The fact that we see binary choices everywhere is a fiction we impose on the Universe for our own convenience. That is, it’s easier and often more satisfying to create artificial dichotomies just so we don’t have to think about the middle.
But, the middle is where most of what goes on happens.
More than once I’ve depicted the expected distribution of folks holding views along the conservative/liberal spectrum by an image like that below, with those holding conservative views in red on the right and those with liberal views in blue on the left. That’s what I mean by my oft-repeated metaphor of the Red Team and Blue Team. It’s an extreme example of what statisticians call a “bimodal distribution”: a graph of the number of examples on the vertical axis, plotted against some linearly varying characteristic on a one-dimensional horizontal axis, that has two peaks.
The actual distribution we should expect from basic statistics is a unimodal distribution: a single broad peak in the middle.
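The two pictures are easy to contrast numerically. The sketch below builds a bimodal distribution as a 50/50 mixture of two narrow peaks near the ends of a political spectrum, and a unimodal one as a single broad peak in the middle, then counts histogram peaks to tell them apart. The peak positions, widths, and sample sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Red Team / Blue Team" picture: a 50/50 mixture of two narrow peaks far to
# the left (-1) and right (+1) of a -2..+2 political spectrum. All numbers
# here are illustrative assumptions.
bimodal = np.concatenate([
    rng.normal(-1.0, 0.15, 5000),
    rng.normal(+1.0, 0.15, 5000),
])

# The picture basic statistics suggests: one broad peak centered in the middle.
unimodal = rng.normal(0.0, 0.6, 10000)

def peak_count(samples, bins=8):
    """Count local maxima in a coarse histogram -- a crude mode detector."""
    counts, _ = np.histogram(samples, bins=bins, range=(-2, 2))
    peaks = 0
    for i in range(1, len(counts) - 1):
        if counts[i] > counts[i - 1] and counts[i] >= counts[i + 1]:
            peaks += 1
    return peaks

print("bimodal peaks: ", peak_count(bimodal))   # 2
print("unimodal peaks:", peak_count(unimodal))  # 1
```

The coarse binning is deliberate: with narrow bins, sampling noise can masquerade as extra modes, which is its own small lesson about reading too much structure into data.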
The two main political parties, however, act as if they imagine the distribution of political views to be bimodal, with one narrow peak ‘way over on the (liberal) left, and another narrow peak ‘way over on the (conservative) right. That picture leads to a binary view where you (the voter) are expected to be either on the left or the right.
With that view, campaigning becomes a two-team contest where the Democratic Party (Blue Team) hopes to attract voters over to their liberal view, making the blue peak larger than the red peak. The Republican Party, in turn, hopes to attract voters to their conservative agenda, making the red peak larger than the blue one.
What voters want, of course, is for the politicians to reflect the preferences they actually have. Since voters’ views can be expected to follow a unimodal distribution with one (admittedly quite broad) peak more or less centered in the middle, Congress should be made up of folks whose views fall in a broad peak more or less centered in the middle, with the vast majority advocating a moderate agenda. That would work out well because, with that kind of distribution, compromise would be relatively easy to come by, laws would be passed that most people could find palatable, things would get done, and so forth.
Why don’t we have a situation like that? Why do we have this epidemic of binary thinking?
I believe that the answer comes from the two major parties becoming mesmerized in the 1980s by the principles of Marketing 101. The first thing they teach you in Marketing 101 is how to segment your customers. Translated into the one-dimensional left/right view so common in political thinking, that leads to imagining the bimodal distribution I’ve presented.
The actual information space characterizing voter preferences, however, is multidimensional. It’s not one single characteristic that can be represented on a one-dimensional spectrum. Every issue that comes up in political discourse represents a separate dimension, and any voter’s views appear as a point floating somewhere in that multidimensional space.
Nobody talks about this multidimensional space because it’s too complicated a picture to present in the evening news. Most political reporters don’t have the mathematical background to imagine it, let alone explain it. They’re lucky to get the basic one-dimensional spectrum picture across.
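Here’s a toy numerical illustration of why collapsing that multidimensional space to one axis misleads: represent each voter as a vector of positions on several issues (each running from -1, the liberal end, to +1, the conservative end), and project the vector onto a single left/right score by averaging. Two voters who disagree on every single issue can land on exactly the same spot on the spectrum. The issue names and positions are invented for illustration.

```python
# Each voter is a point in a multidimensional issue space: one coordinate per
# issue, running from -1 (liberal end) to +1 (conservative end).
# Issues and positions below are invented for illustration.
ISSUES = ("taxes", "immigration", "healthcare", "defense")

voter_a = {"taxes": +1.0, "immigration": -1.0, "healthcare": -1.0, "defense": +1.0}
voter_b = {"taxes": -1.0, "immigration": +1.0, "healthcare": +1.0, "defense": -1.0}

def left_right(voter: dict) -> float:
    """Project the issue-space point onto a single left/right coordinate."""
    return sum(voter[i] for i in ISSUES) / len(ISSUES)

# Both voters project to dead center on the one-dimensional spectrum...
print(left_right(voter_a), left_right(voter_b))  # 0.0 0.0

# ...yet they disagree on every single issue.
disagreements = sum(voter_a[i] != voter_b[i] for i in ISSUES)
print("issues where they disagree:", disagreements)  # 4
```

The one-dimensional picture calls both of these voters “moderates,” even though there’s nothing moderate about either of them.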
The second thing they teach you in Marketing 101 is product differentiation. Once you’ve got your customer base segmented, you pick a segment with the biggest population group, and say things to convince individuals in that group that your product (in this case, your candidate) matches the characteristics desired by that group, while the competition’s characteristics don’t.
If you think your chosen segment likes candidates wearing red T-shirts, you dress your candidate in a red T-shirt and point out that the competitor wears blue. In fact, you say things aimed at convincing voters that candidates wearing red T-shirts are somehow better (more likeable) than those awful bums wearing those ugly, nasty blue T-shirts. That way you try to attract voters to the imaginary red peak from the imaginary blue peak. If you’re successful, you win the election.
Of course, since voters actually expect your candidate to run the government after the election, what color T-shirt he or she wears is then immaterial. Since they were elected based on the color of their T-shirt, however, you end up with a legislature sitting around cheering for “Red!” or “Blue!” when voters want them to pass purple legislation.
An example of rabid binary thinking is the recent Democratic Party decision to have “zero tolerance” on race and gender issues. That thinking assumes that the blue peak on the left is filled with saintly heaven-bound creatures devoted to women’s and minorities’ rights, while the red peak on the right is full of misogynistic racist bullies, and that there’s nobody in the middle.
That’s what “zero tolerance” means.
Liberals tried a similar stunt in the 1980s with “Political Correctness.” That fiasco worked only until people realized that hardly anyone agreed with everything the PC folks liked. Since it was a binary choice – you were either politically correct or not – most folks opted for “not.” Very soon the jokes started, then folks started voting anti-PC.
What started out as a ploy by the left to bully everyone into joining their political base had the opposite effect. Most Americans don’t react well to bullying. They tend to turn on the bullies.
Instead of a cadre of Americans cowed into spouting politically correct rhetoric, we got a generation proudly claiming politically incorrect views.
You don’t hear much about political correctness, any more.
It’s quickly becoming clear that the binary thinking of the “zero tolerance” agenda will, like the PC cultural revolution, quickly lead to a “zero support” result.
Perhaps the Democratic Party should go back to school and learn Marketing 102. The first thing they teach you in Marketing 102 is “the customer is always right.”
13 February 2019 – The following is an invited guest post by Nicholas Sarwark, Chairman of the Libertarian National Committee
Republicans and Democrats often have a stranglehold on the U.S. political process, but Americans are ready for that to change.
According to a Morning Consult–Politico poll conducted in early February, more than half of all voters in the United States believe a third party is needed, and one third of all voters would be willing to vote for a third-party candidate in the 2020 presidential election. A Gallup poll from October showed that 57 percent of Americans think a strong third party is needed.
It’s no wonder why. Another Gallup poll from January revealed that only 35 percent of Americans trust the U.S. government to handle domestic problems, a number that increases to only 41 percent for international troubles. Those are the lowest figures in more than 20 years. A running Gallup poll showed that in January, 29 percent of Americans view government itself as the biggest problem facing the country.
This widespread dissatisfaction with U.S. government is consistent with the increasing prevalence of libertarian views among the general public. Polling shows that more than a quarter of Americans have political views that can be characterized as libertarian.
All of this suggests that the Libertarian Party should be winning more and bigger electoral races than ever. In fact, that’s exactly what’s happening. Out of the 833 Libertarian candidates who ran in 2018, 55 were elected to public office in 11 states.
One of those newly elected officials is Jeff Hewitt, who in November won a seat on the board of supervisors in Riverside County, Calif., while finishing up eight years on the Calimesa city council—three as mayor. Before being elected to the city council, he had served six years on the city’s planning commission. Hewitt recently gave the Libertarian Party’s 2019 State of the Union address, explaining how Libertarians would restrain runaway government spending, withdraw from never-ending wars abroad, end the surveillance state, protect privacy and property rights, end mass incarceration and the destructive “war on drugs,” and welcome immigrants who expand our economy and enrich our culture.
Journalist Gustavo Arellano attended Hewitt’s swearing-in ceremony on January 8. In his feature story for the Los Angeles Times, he remarked, “Riverside County Supervisor Jeff Hewitt just might be the strangest Libertarian of them all: a politician capable of winning elections who could move the party from the fringes into the mainstream.”
During Hewitt’s time as mayor of Calimesa, he severed ties with the bloated pensions and overstaffing of the state-run fire department. He replaced it with a local alternative that costs far less and has been much more effective at protecting endangered property. This simple change also eliminated two layers of administrative costs at the county and state levels.
Now Hewitt is poised to bring libertarian solutions to an even larger region in his new position with Riverside County, which has more residents than 15 different states. This rise from local success is a model that can be replicated around the country, suggests Fullerton College political science professor Jodi Balma, quoted in the L.A. Times article as saying that Hewitt’s success shows how Libertarian candidates can “build a pipeline to higher office”: winning local races that demonstrate the practical value of Libertarian Party ideas on a small scale, then parlaying those successes into state and federal office.
That practical value is immense, as Libertarian Laura Ebke showed when, as a Nebraska state legislator, she almost single-handedly brought statewide occupational-licensure reform to nearly unanimous 45-to-1, tri-partisan approval. This legislation has cleared the way for countless Nebraskans to build careers in fields that were once closed off from effective competition behind mountains of regulatory red tape.
The American people have the third party they’re looking for. The Libertarian Party is already the third-largest political party in the United States, and it shares the public’s values of fiscal responsibility and social tolerance — the same values that drive the public’s disdain for American politicians and wasteful, destructive, ineffective government programs.
The Libertarian Party is also the only alternative party that routinely appears on ballots in every state.
As of December 17 we had secured ballot access for our 2020 Presidential ticket in 33 states and the District of Columbia — the best starting position since 1914 for any alternative party at this point in the election cycle. This will substantially reduce the burden for achieving nationwide ballot access that we have so often borne. After the 1992 midterm election, for example, we had ballot access in only 17 states — half as many as today. Full ballot access for the Libertarian Party means that voters of every state will have more choice.
The climate is ripe for Libertarian progress. The pieces are all here, ready to be assembled. All it requires is building awareness of the Libertarian Party — our ideas, our values, our practical reforms, and our electoral successes — in the minds and hearts of the American public.
Nicholas Sarwark is serving his third term as chair of the Libertarian National Committee, having first been elected in 2014. Prior to that, he served as chair of the Libertarian Party of Maryland and as vice chair of the Libertarian Party of Colorado, where he played a key role in recruiting the state’s 42 Libertarian candidates in 2014 and supported the passage of Colorado’s historic marijuana legalization initiative in 2012. In 2018, he ran for mayor of Phoenix, Ariz.
7 February 2019 – This is not the essay I’d planned to write for this week’s blog. I’d planned a long-winded, abstruse dissertation on the use of principal component analysis to glean information from historical data in chaotic systems. I actually got most of that one drafted on Monday, and planned to finish it up Tuesday.
Then, bright and early on Tuesday morning, before I got anywhere near the incomplete manuscript, I ran headlong into an email issue.
Generally, I start my morning by scanning email to winnow out the few valuable bits buried in the steaming pile of worthless refuse that has accumulated in my Inbox since the last time I visited it. Then, I visit a couple of social media sites in an effort to keep my name in front of the Internet-entertained public. After a couple of hours of this colossal waste of time, I settle in to work on whatever actual work I have to do for the day.
So, finding that my email client software refused to communicate with me threatened to derail my whole day. The fact that I use email for all my business communications made it especially urgent that I determine what was wrong, and then fix it.
It took the entire morning and on into the early afternoon to realize that there was no way I was going to get to that email account on my computer, and that nobody in the outside world (not my ISP, not the cable company that went that extra mile to bring Internet signals from that telephone pole out there to the router at the center of my local area network, nor anyone else available with more technosavvy than I have) was going to be able to help. I was finally forced to invent a workaround involving a legacy computer I’d neglected to throw in the trash, just to get on with my technology-bound life.
At that point the Law of Deadlines forced me to abandon all hope of getting this week’s blog posting out on time, and move on to completing final edits and distribution of that press release for the local art gallery.
That wasn’t the last time modern technology let me down. In discussing a recent Physics Lab SNAFU, Danielle, the laboratory coordinator I work with at the University said: “It’s wonderful when it works, but horrible when it doesn’t.”
Where have I heard that before?
The SNAFU Danielle was lamenting happened last week.
I teach two sections of General Physics Laboratory at Florida Gulf Coast University, one on Wednesdays and one on Fridays. The lab for last week had students dropping a ball, then measuring its acceleration using a computer-controlled ultrasonic detection system as it (the ball, not the computer) bounces on the table.
For the Wednesday class everything worked perfectly. Half a dozen teams each had their own setups, and all got good data, beautiful-looking plots, and automated measurements of position and velocity. The computers then automatically derived accelerations from the velocity data. Only one team had trouble with their computer, but they got good data by switching to an unused setup nearby.
That was Wednesday.
Come Friday the situation was totally different. Out of four teams, only two managed to get data that looked even remotely like it should. Then, one team couldn’t get their computer to spit out accelerations that made any sense at all. Eventually, after class time ran out, the one group who managed to get good results agreed to share their information with the rest of the class.
The high point of the day was managing to distribute that data to everyone via the school’s cloud-based messaging service.
Concerned about another fiasco, after this week’s lab Danielle asked me how it worked out. I replied that, since the equipment we use for this week’s lab is all manually operated, there were no problems whatsoever. “Humans are much more capable than computers,” I said. “They’re able to cope with disruptions that computers have no hope of dealing with.”
The latest example of technology Hell appeared in a story in this morning’s (2/7/2019) Wall Street Journal. Some $136 million of customers’ cryptocurrency holdings became stuck in an electronic vault when the founder (and sole employee) of cryptocurrency exchange QuadrigaCX, Gerald Cotten, died of complications related to Crohn’s disease while building an orphanage in India. The problem is that Cotten was so secretive about passwords and security that nobody, even his wife, Jennifer Robertson, can get into the reserve account maintained on his laptop.
“Quadriga,” according to the WSJ account, “would need control of that account to send those funds to customers.”
No lie! The WSJ attests this bizarre tale is God’s own truth!
Now, I’ve no sympathy for cryptocurrency mavens, whom I consider to be, at best, technoweenies gleefully leading a parade down the primrose path to technology Hell, but this story illustrates what that Hell looks like!
It’s exactly what the Luddites of the early 19th Century warned us about. It’s a place of nameless frustration and unaccountable loss that we’ve brought on ourselves.
30 January 2019 – This is not a textbook on decision making.
Farsighted: How We Make the Decisions That Matter the Most does cover most of the elements of state-of-the-art decision making, but it’s not a true textbook. If he’d really wanted to write a textbook, its author, Steven Johnson, would have structured it differently, and would have included exercises for the student. Perhaps he would also have done other things differently that I’m not going to enumerate because I don’t want to write a textbook on state-of-the-art decision making, either.
What Johnson apparently wanted to do, and did do successfully, was lay down a set of principles today’s decision makers would do well to follow.
Something he would have left out, if he were writing a textbook, was the impassioned plea for educators to incorporate mandatory decision making courses into secondary-school curricula. I can’t disagree with this sentiment!
A little bit about my background with regard to decision-theory education: ‘Way back in the early 2010s, I taught a course at a technical college entitled “Problem Solving Theory.” Johnson’s book did not exist then, and I wish that it had. The educational materials available at the time fell woefully short. They were, at best, pedantic.
I spent a lot of class time waving my hands and telling stories from my days as a project manager. Unfortunately, the decision-making techniques I learned about in MBA school weren’t of any help at all. Some of the research Johnson weaves into his narrative hadn’t even been done back then!
So, when I heard about Johnson’s new book, I rushed out to snag a copy and devoured it.
As Johnson points out, everybody is a decision maker every day. These decisions run the gamut from snap decisions that people have to make almost instantly, to long-term deliberate choices that reverberate through the rest of their lives. Many, if not most, people face making decisions affecting others, from children to spouses, siblings and friends. Some of us participate in group decision making that can have truly global ramifications.
In John McTiernan’s 1990 film The Hunt for Red October, Admiral Josh Painter points out to CIA analyst Jack Ryan: “Russians don’t take a dump, son, without a plan. Senior captains don’t start something this dangerous without having thought the matter through.”
It’s not just Russians, however, who plan out even minor actions. And, senior captains aren’t the only ones who don’t start things without having thought the matter through. We all do it.
As Johnson points out, it may be the defining characteristic of the human species, which he likes to call Homo prospectus for its ability to apply foresight to advance planning.
The problem, of course, is the alarming rate at which we screw it up. As John F. Kennedy’s failure in the Bay of Pigs invasion shows, even highly intelligent, highly educated and experienced leaders can get it disastrously wrong. Johnson devotes considerable space to enumerating the taxonomy of “things that can go wrong.”
So, decision making isn’t just for leaders, and it’s easier to get it wrong than to do it right.
Enumerating the ways it can all go disastrously wrong, and setting out principles that will help us get it right are the basic objectives Johnson set out for himself when he first decided to write this book. To wit, three goals:
Convince readers that it’s important;
Warn folks of how easily it can be done wrong; and
Give folks a prescription for doing it right.
Pursuant to the third goal, Johnson breaks decision making down into a process involving three steps:
Mapping consists of gathering preliminary information about the state of the Universe before any action has been taken. What do we have to work with? What options do we have to select from? What do we want to accomplish and when?
Predicting consists of prognosticating, for each of the various options available, how the Universe will evolve from now into the foreseeable (and possibly unforeseeable) future. This is probably the most fraught stage of the process. Do we need a Plan B in case of surprises? As Sean Connery’s “Mac” character intones in Jon Amiel’s 1999 crime drama, Entrapment: “Trust me, there always are surprises.”
Deciding is the final step of the process. It consists of choosing among the previously identified alternatives based on the predicted results. Which alternative is most likely to give us a result we want?
An important technique Johnson recommends building your decision-making strategy around is narrative. That explicitly means storytelling. Johnson supplies numerous examples from both fiction and non-fiction that help us understand the decision-making process and apply it to the problems we face.
He points out that double-blind clinical trials were the single most important technique that advanced medicine from quackery and the witch-doctor’s art to reliable medical science. They allowed trying out various versions of medical interventions in a systematic way and comparing the results. In the same way, he says, fictional storytelling allows us to mentally “try out” multiple alternative versions of future history.
Johnson suggests that’s why humans evolved the desire and capacity to create such fictional narratives in the first place. “When we read these novels,” he says, “ … we are not just entertaining ourselves; we are also rehearsing for our own real-world experiences.”
Of course, while “deciding” is the ultimate act of Johnson’s process, it’s never the end of the story in real life. What to do when it all goes disastrously wrong is always an important consideration. Johnson actually covers that as an important part of the “predicting” step. That’s when you should develop Mac’s “Plan B pack” and figure out when to trigger it if necessary.
Another important consideration, which I covered extensively in my problem-solving course and which Johnson starts looking at ‘way back in “mapping,” is how to live with the aftermath of your decision, whether it’s a resounding success or a disastrous failure. Either way, the Universe is changed forever by your decision, and you and everyone else will have to live in it.
So, your ultimate goal should be deciding how to make the Universe a better place in which to live!
23 January 2019 – Last week two concepts reared their ugly heads that I’ve been banging on about for years. They’re closely intertwined, so it’s worthwhile to spend a little blog space discussing why they fit so tightly together.
Diversity is Good
The first idea is that diversity is good. It’s good in almost every human pursuit. I’m particularly sensitive to this, being someone who grew up with the idea that rugged individualism was the highest ideal.
Diversity, of course, is incompatible with individualism. Individualism is the cult of the one. “One” cannot logically be diverse. Diversity is a property of groups, and groups by definition consist of more than one.
Okay, set theory admits of groups with one or even no members, but those groups have a diversity “score” (Gini–Simpson index) of zero. To have any diversity at all, your group has to have at absolute minimum two members. The more the merrier (or diversitier).
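For the record, the Gini–Simpson index mentioned above is just one minus the probability that two members drawn at random fall in the same category. A quick sketch in Python (mine, not from the original posting):

```python
def gini_simpson(counts):
    """Gini-Simpson diversity: 1 minus the chance that two random
    draws (with replacement) fall in the same category."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

# A "group" of one (or of many identical members) scores zero...
print(gini_simpson([7]))       # -> 0.0
# ...while a group split evenly between two categories scores 0.5.
print(gini_simpson([5, 5]))    # -> 0.5
```

More categories, more evenly filled, push the score toward 1 — the more the merrier, indeed.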
The idea that diversity is good came up in a couple of contexts over the past week.
First, I’m reading a book entitled Farsighted: How We Make the Decisions That Matter the Most by Steven Johnson, which I plan eventually to review in this blog. Part of the advice Johnson offers is that groups make better decisions when their membership is diverse. How they are diverse is less important than the extent to which they are diverse. In other words, this is a case where quantity is more important than quality.
Second, I divided my physics-lab students into groups to perform their first experiment. We break students into groups to prepare them for working in teams after graduation. Unlike fifty years ago, when I was a student, scientific research and technology development today is almost always done in teams.
When I was a student, research was (supposedly) done by individuals working largely in isolation. I believe it was Willard Gibbs (I have no reliable reference for this quote) who said: “An experimental physicist must be a professional scientist and an amateur everything else.”
By this he meant that building a successful physics experiment requires the experimenter to apply so many diverse skills that it is impossible to have professional mastery of all of them. He (or she) must have an amateur’s ability to pick up novel skills in order to reach the next goal in their research. They must be ready to work outside their normal comfort zone.
That asked a lot from an experimental researcher! Individuals who could do that were few and far between.
Today, the fast pace of technological development has reduced that pool of qualified individuals essentially to zero. It certainly is too small to maintain the pace society expects of the engineering and scientific communities.
Tolkien’s “unimaginable hand and mind of Fëanor” puttering around alone in his personal workshop crafting magical things is inconceivable today. Marlowe’s Dr. Faustus, who had mastered all earthly knowledge, is now laughable. No one person is capable of making a major contribution to today’s technology on their own.
The solution is to perform the work of technological research and development in teams with diverse skill sets.
In the sciences, theoreticians with strong mathematical backgrounds partner with engineers capable of designing machines to test the theories, and technicians with the skills needed to fabricate the machines and make them work.
We Live in a Chaotic Universe
The second idea I want to deal with in this essay is that we live in a chaotic Universe.
Chaos is a property of complex systems. These are systems consisting of many interacting moving parts that show predictable behavior on short time scales, but eventually foil the most diligent attempts at long-term prognostication.
A pendulum, by contrast, is a simple system consisting of, basically, three parts: a massive weight, or “pendulum bob,” that hangs by a rod or string (the arm) from a fixed support. Simple systems usually do not exhibit chaotic behavior.
The solar system, consisting of a huge, massive star (the Sun), eight major planets and a host of minor planets, is decidedly not a simple system. Its behavior is borderline chaotic. I say “borderline” because the solar system seems well behaved on short time scales (e.g., millennia), but when viewed on time scales of millions of years does all sorts of unpredictable things.
For example, approximately four and a half billion years ago (a few tens of millions of years after the system’s initial formation) a Mars-sized planet collided with Earth, spalling off a mass of material that coalesced to form the Moon, then ricocheted out of the solar system. That’s the sort of unpredictable event that happens in a chaotic system if you wait long enough.
The U.S. economy, consisting of millions of interacting individuals and companies, is wildly chaotic, which is why no investment strategy has ever been found to work reliably over a long time.
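The hallmark of chaos described above — predictable on short time scales, unpredictable on long ones — is easy to demonstrate numerically. The logistic map below is not from the original essay; it’s a standard toy model of chaos. Two trajectories that start almost identically track each other for a while, then diverge completely:

```python
def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the chaotic logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-8)  # perturb the 8th decimal place

# Short term, the two futures are indistinguishable...
print(abs(a[5] - b[5]))                                 # still tiny
# ...long term, all predictive power is gone.
print(max(abs(x - y) for x, y in zip(a[35:], b[35:])))  # order unity
```

The tiny initial difference doubles (on average) every step, which is exactly why long-term prognostication of complex systems like economies and climates is so fraught.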
Putting It Together
The way these two ideas (diversity is good, and we live in a chaotic Universe) work together is that collaborating in diverse groups is the only way to successfully navigate life in a chaotic Universe.
An individual human being is so powerless that attempting anything but the smallest task is beyond his or her capacity. The only way to do anything of significance is to collaborate with others in a diverse team.
In the late 1980s my wife and I decided to build a house. To begin with, we had to decide where to build the house. That required weeks of collaboration (by our little team of two) to combine our experiences of different communities in the area where we were living, develop scenarios of what life might be like living in each community, and finally agree on which we might like the best. Then we had to find an architect to work with our growing team to design the building. Then we had to negotiate with banks for construction loans, bridge loans, and ultimate mortgage financing. Our architect recommended adding a prime contractor who had connections with carpenters, plumbers, electricians and so forth to actually complete the work. The better part of a year later, we had our dream house.
There’s no way I could have managed even that little project – building one house – entirely on my own!
In 2015, I ran across the opportunity to produce a short film for a film festival. I knew how to write a script, run a video camera, sew a costume, act a part, do the editing, and so forth. In short, I had all the skills needed to make that 30-minute film.
Did that mean I could make it all by my onesies? Nope! By the time the thing was completed, the list of cast and crew counted over a dozen people, each with their own job in the production.
By now, I think I’ve made my point. The take-home lesson of this essay is that if you want to accomplish anything in this chaotic Universe, start by assembling a diverse team, and the more diverse, the better!
16 January 2019 – The poster child for rampant nationalism is Hitler’s National Socialist German Workers’ Party, commonly called the Nazi Party. I say “is” rather than “was” because, while resoundingly defeated by the Allies in 1945, the Nazi Party still has widespread appeal in Germany, and throughout the world.
These folks give nationalism a bad name, leading the Oxford Living Dictionary to give primacy to the following definition of nationalism: “Identification with one’s own nation and support for its interests, especially to the exclusion or detriment of the interests of other nations.” [Emphasis added.]
The Oxford Dictionary also offers a second definition of nationalism: “Advocacy of or support for the political independence of a particular nation or people.”
This second definition is a lot more benign, and one that I wish were more often used. I certainly prefer it!
Nationalism under the first definition has been used since time immemorial as an excuse to create closed, homogeneous societies. That was probably the biggest flaw of the Nazi state(s). Death camps, ethnic cleansing, slave labor, and most of the other evils of those regimes flowed directly from their attempts to build closed, homogeneous societies.
Under the second definition, however, nationalism can, and should, be used to create a more diverse society.
That’s a good thing, as the example of United States history clearly demonstrates. Most of U.S. success can be traced directly to the country’s ethnic, cultural and racial diversity. The fact that the U.S., with a paltry 5% of the world’s population, now has by far the largest economy; that it dominates the fields of science, technology and the humanities; that its common language (American English) is fast becoming the “lingua franca” of the entire world; and that it effectively leads the world by so many measures is directly attributable to the continual renewal of its population diversity by immigration. In any of these areas, it’s easy to point out major contributions from recent immigrants or other minorities.
This harkens back to a theory of cultural development I worked out in the 1970s. It starts with the observation that all human populations – no matter how large or how small – consist of individuals whose characteristics vary somewhat. When visualized on a multidimensional scatter plot, populations generally consist of a cluster with a dense center and fewer individuals farther out.
This pattern is similar to the image of a typical globular star cluster in the photo at right. Globular star clusters exhibit this pattern in three dimensions, while human populations exist and can be mapped on a great many dimensions representing different characteristics. Everything from physical characteristics like height, weight and skin color, to non-physical characteristics like ethnicity and political ideology – essentially anything that can be measured – can be plotted as a separate dimension.
The dense center of the pattern consists of individuals whose characteristics don’t stray too far from the norm. Everyone, of course, is a little off average. For example, the average white American female is five-feet, four-inches tall. Nearly everyone in that population, however, is a little taller or shorter than exactly average. Very few are considerably taller or shorter, with more individuals closer to the average than farther out.
The population’s diversity shows up as a widening of the pattern. That is, diversity is a measure of how often individuals appear farther out from the center.
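That picture — a dense center with a thinning fringe — is just what sampling a bell curve produces. Here’s the height example as a Python sketch (mine, not the original posting’s; the 2.5-inch standard deviation is an assumed round number for illustration):

```python
import random

random.seed(1)
MEAN, SD = 64.0, 2.5  # 5'4" average; the SD of 2.5 in is an assumed figure
heights = [random.gauss(MEAN, SD) for _ in range(100_000)]

# Most individuals crowd the center of the distribution...
near = sum(1 for h in heights if abs(h - MEAN) <= SD) / len(heights)
# ...while very few sit far out on the fringe.
far = sum(1 for h in heights if abs(h - MEAN) > 3 * SD) / len(heights)
print(f"within 1 SD: {near:.0%},  beyond 3 SD: {far:.2%}")
```

Roughly two-thirds of the sample lands within one standard deviation of the average, and only a fraction of a percent lands beyond three — the dense core and sparse halo of the star-cluster picture.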
Darwin’s theory of natural selection posits that where the population center sits depends on where conditions make it most advantageous to be. What the average height is, for example, depends on a complex interplay of conditions, including nutrition, attractiveness to the opposite sex, and so forth.
Observing that conditions change with time, one expects the ideal center of the population to move about in the multidimensional characteristics space. Better childhood nutrition, for example, should push the population toward increased tallness. And, it does!
One hopes that these changes happen slowly with time, giving the population a chance to follow in response. If the changes happen too fast, however, the population is unable to respond fast enough and it goes extinct. Woolly mammoths, for example, were unable to respond fast enough to a combination of environmental changes and increased predation by humans migrating into North America after the last Ice Age, so they died out. No more woolly mammoths!
Assuming whatever changes occur happen slowly enough, those individuals in the part of the distribution better adapted to the new conditions do better than those on the opposite side. So, the whole population shifts with time toward characteristics that are better adapted.
Where diversity comes into this dynamic is by providing more individuals in the better-adapted part of the distribution. The faster conditions change, the more individuals you need at the edges of the population to help with the response. For example, if the climate gets warmer, it’s folks who like to wear skimpy outfits who thrive. Folks who insist on covering themselves up in heavy clothing, don’t do so well. That was amply demonstrated when Englishmen tried to wear their heavy Elizabethan outfits in the warmer North American weather conditions. Styles changed practically overnight!
Closed, homogeneous societies of the type the Nazis tried to create have low diversity. They try to suppress folks who differ from the norm. When conditions change, such societies have less of the diversity needed to respond, so they wither and die.
That’s why cultures need diversity, and the more diversity, the better.
We live in a chaotic universe. The most salient characteristic of chaotic systems is constant change. Without diversity, we can’t respond to that change.
That’s why when technological change sped up in the early Twentieth Century, it was the bohemians of the twenties developing into the beatniks of the fifties and the hippies of the sixties that defined the cultures of the seventies and beyond.
9 January 2019 – This week I start a new part-time position on the faculty at Florida Gulf Coast University teaching two sections of General Physics laboratory. In preparation, I dusted off a posting to this blog from last Summer that details my take on the scientific method, which I re-edited to present to my students. I thought readers of this blog might profit by my posting the edited version. The original posting contrasted the scientific method of getting at the truth with the method used in the legal profession. Since I’ve been banging on about astrophysics and climate science, specifically, I thought it would be helpful to zero in again on how scientists figure out what’s really going on in the world at large. How do we know what we think we know?
While high-school curricula like to teach the scientific method as a simple step-by-step program, the reality is very much more complicated. The version they teach you in high school is a procedure consisting of five to seven steps, which pretty much look like this:
Observe something going on in the World;
Ask why it happens;
Form an hypothesis to explain it;
Use the hypothesis to make predictions;
Test the predictions with experiments;
Analyze the results; and
Repeat the process.
I’ll start by explaining how this program is supposed to work, then look at why it doesn’t actually work that way. It has to do with why the concept is so fuzzy that it’s not really clear how many steps should be included.
The Stepwise Program
It all starts with observation of things that go on in the World.
Newton’s law of universal gravitation started with the observation that when left on their own, most things fall down. That’s Newton’s falling-apple observation. Generally, the observation is so common that it takes a genius to ask the question: “why?”
Once you ask the question “why,” the next thing that happens is that your so-called genius comes up with some cockamamie explanation, called an “hypothesis.” In fact, there are usually several possible explanations that vary from the erudite to the thoroughly bizarre.
Through bitter experience, scientists have come to realize that no hypothesis is too whacko to be considered. It’s often the most outlandish hypothesis that proves to be right!
For example, the ancients tended to think in terms of objects somehow “wanting” to go downward as the least weird of explanations for gravity. The idea came from animism, the not-too-bizarre (to the ancients) notion that natural objects each have their own spirits, which animate their behavior: rocks are hard because their spirits resist being broken; they fall down when released because their spirits somehow like down better than up.
What we now consider the most-correctest explanation (that we live in a four-dimensional space-time continuum that is warped by concentrations of matter-energy so that objects follow paths that tend to converge with each other) wouldn’t have made any sense at all to the ancients. It, in fact, doesn’t make a whole lot of sense to anybody who hasn’t spent years immersing themselves in the subject of Einstein’s General Theory of Relativity.
Scientists then take all the hypotheses available, and use them to make predictions as to what happens next if you set up certain relevant situations, called “experiments.” An hypothesis works if its predictions match up with what Mommy Nature produces for results from the experiments.
Scientists then do tons of experiments testing different predictions of the hypotheses, then compare (the analysis step) the results, and eventually develop a warm, fuzzy feeling that one hypothesis does a better job of predicting what Mommy Nature does than do the others.
It’s important to remember that no scientist worth his or her salt believes that the currently accepted hypothesis is actually in any absolute sense “correct.” It’s just the best explanation among the ones we have on hand now.
That’s why the last step is to repeat the entire process ad nauseam.
While this long, drawn out process does manage to cover the main features of the scientific method, it fails in one important respect: it doesn’t boil the method down to its essentials.
Not boiling the method down to its essentials forces one to deal with all kinds of exceptions created by the extraneous, non-essential bits. There end up being more exceptions than rules. For example, the science-pedagogy website Science Buddies ends up throwing its hands in the air by saying: “In fact, there are probably as many versions of the scientific method as there are scientists!”
A More Holistic Approach
The much simpler explanation I’ve used for years to teach college students about the scientific method follows the diagram above. The pattern is quite simple, with only four components: a set of initial conditions, two complementary paths, and the final results. It starts by setting up the initial conditions, then follows both paths through to the results.
There are two ways to get from the initial conditions to the results. The first is to just set the whole thing up, and let Mommy Nature do her thing. The second is to think through your hypothesis (the model) to predict what it says Mommy Nature will come up with. If they match, you count your hypothesis as a success. If not, it’s wrong.
If you do that a bazillion times in a bazillion different ways, a really successful hypothesis (like General Relativity) will turn out right pretty much all of the time.
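The two complementary paths can be sketched in a few lines of code: run “Mommy Nature” (here a simulated free-fall measurement) and the model from the same initial conditions, then compare. Everything concrete below — the free-fall setup, the noise level, the tolerance — is my own illustrative assumption, not from the original posting:

```python
import random

random.seed(42)
G = 9.81  # m/s^2

def nature(t):
    """Stand-in for a real measurement: free-fall distance plus a
    little measurement noise (sigma = 1 cm, an assumed figure)."""
    return 0.5 * G * t**2 + random.gauss(0.0, 0.01)

def model(t, g):
    """The hypothesis under test: d = (1/2) g t^2."""
    return 0.5 * g * t**2

def hypothesis_survives(g, trials=1000, tol=0.1):
    """Run both paths from the same initial conditions and compare."""
    for _ in range(trials):
        t = random.uniform(0.1, 2.0)        # the initial conditions
        if abs(nature(t) - model(t, g)) > tol:
            return False                    # prediction missed: hypothesis wrong
    return True

print(hypothesis_survives(9.81))  # the Newtonian value survives the comparison
print(hypothesis_survives(5.0))   # a cockamamie value for g does not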
Generally, if you’ve got a really good hypothesis but your experiment doesn’t work out right, you’ve screwed up somewhere. That means what you actually set up as the initial conditions wasn’t what you thought you were setting up. So, Mommy Nature (who’s always right) doesn’t give you the result you thought you should get.
For example, I was once (at a University other than this one) asked to mentor another faculty member who was having trouble building an experiment to demonstrate what he thought was an exception to Newton’s Second Law of Motion. It was based on a classic experiment called “Atwood’s Machine.” He couldn’t get the machine to give the results he was convinced he should get.
I immediately recognized that he’d made a common mistake novice physicists often make. I tried to explain it to him, but he refused to believe me. Then, I left the room.
I walked away because, despite his conviction, Mommy Nature wasn’t going to do what he expected her to. He persisted in believing that there was something wrong with his experimental apparatus. It was his hypothesis, instead.
Anyway, the way this method works is that you look for patterns in what Mommy Nature does. Your hypothesis is just a description of some part of Mommy Nature’s pattern. Scientific geniuses are folks who are really, really good at recognizing the patterns Mommy Nature uses.
If your scientific hypothesis is wrong (meaning it gives wrong results), the right response is: “So, what?”
Most scientific hypotheses are wrong! They’re supposed to be wrong most of the time.
Finding that some hypothesis is wrong is no big deal. It just means it was a dumb idea, and you don’t have to bother thinking about that dumb idea anymore.
Alien abductions get relegated to entertainment for the entertainment starved. Real scientists can go on to think about something else, like the kinds of conditions leading to development of living organisms and why we don’t see alien visitors walking down Fifth Avenue.
(FYI: the current leading hypothesis is that the distances from there to here are so vast that anybody smart enough to figure out how to make the trip has better things to do.)
For scientists “Gee, it looks like … ” is usually as good as it gets!
2 January 2019 – Now that the year-end holidays are over, it’s time to get back on my little electronic soapbox to talk about an issue that scientists have had to fight with authorities over for centuries. It’s an issue that has been around for millennia, but before a few centuries ago there weren’t scientists around to fight over it. The issue rears its ugly head under many guises. Most commonly today it’s discussed as academic freedom, or freedom of expression. You might think it was definitively won for all Americans in 1791 with the ratification of the first ten amendments to the U.S. Constitution and for folks in other democracies soon after, but you’d be wrong.
The issue is wrapped up in one single word: dogma.
“A principle or set of principles laid down by an authority as incontrovertibly true.”
In 1600 CE, Giordano Bruno was burned at the stake for insisting that the stars were distant suns surrounded by their own planets, raising the possibility that these planets might foster life of their own, and that the universe is infinite and could have no “center.” These ideas directly controverted the dogma laid down as incontrovertibly true by both the Roman Catholic and Protestant Christian churches of the time.
Galileo Galilei, typically thought of as the poster child for resistance to dogma, was only placed under house arrest (for the rest of his life) for advocating the less radical Copernican vision of the solar system.
Nicholas Copernicus, himself, managed to fly under the Catholic Church’s radar for more than three decades by the simple tactic of not publishing his heliocentric model. Starting around 1510, he privately communicated it to his friends, who then passed it to some of their friends, etc. His signature work, Dē revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres), in which he laid it out for all to see, wasn’t published until 1543, the year of his death, when he’d already escaped beyond the reach of earthly authorities.
If this makes it seem that astrophysicists have been on the front lines of the war against dogma since there was dogma to fight against, that’s almost certainly true. Astrophysicists study stuff relating to things beyond the Earth, and that traditionally has been a realm claimed by religious authorities.
That claim largely started with Christianity, specifically the Roman Catholic Church. Ancient religions, which didn’t have delusions that they could dominate all of human thought, didn’t much care what cockamamie ideas astrophysicists (then just called “philosophers”) came up with. Thus, Aristarchus of Samos suffered no ill consequences (well, maybe a little, but nothing life – or even career – threatening) from proposing the same ideas that Galileo was arrested for championing some eighteen centuries later.
Fast forward to today and we have a dogma espoused by political progressives called “climate change.” It used to be called “global warming,” but that term was laughed down decades ago, though the dogma’s still the same.
The United-Nations-funded Intergovernmental Panel on Climate Change (IPCC) has become “the Authority” laying down the principles that Earth’s climate is changing and that change constitutes a rapid warming caused by human activity. The dogma also posits that this change will continue uninterrupted unless national governments promulgate drastic laws to curtail human activity.
Sure sounds like dogma to me!
Once again, astrophysicists are on the front lines of the fight against dogma. The problem is that the IPCC dogma treats the Sun (which is what powers Earth’s climate in the first place) as, to all intents and purposes, a fixed star. That is, it assumes climate change arises solely from changes in Earthly conditions, then assumes we control those conditions.
Astrophysicists know that just ain’t so.
First, stars generally aren’t fixed. Most stars are variable stars. In fact, all stars are variable on some time scale. They all evolve over time scales of millions or billions of years, but that’s not the kind of variability we’re talking about here.
The Sun is in the evolutionary phase called “main sequence,” where stars evolve relatively slowly. That’s the source of much “invariability” confusion. Main sequence stars, however, go through periods where they vary in brightness more or less violently on much shorter time scales. In fact, most main sequence stars exhibit this kind of behavior to a greater or lesser extent at any given time – like now.
So, a modern (as in post-nineteenth-century) astrophysicist would never make the bald assumption that the Sun’s output was constant. Statistically, the odds are against it. Most stars are variables; the Sun is like most stars; so the Sun is probably a variable. In fact, it’s well known to vary with a fairly stable period of roughly 22 years (the 11-year “sunspot cycle” is actually only a half cycle).
A couple of centuries ago, astronomers assumed (with no evidence) that the Sun's output was constant, so they started trying to measure this assumed "solar constant." Charles Greeley Abbot, who served as the Secretary of the Smithsonian Institution from 1928 to 1944, oversaw the first long-term study of solar output.
His observations were necessarily ground-based, and the variations he observed (amounting to 3-5 percent) have been dismissed as "due to changing weather conditions and incomplete analysis of his data," despite the monumental efforts he made to control for such effects.
In the 1970s I did an independent analysis of his data and realized that part of the problem he had stemmed from a misunderstanding of the relationship between sunspots and solar irradiance. At the time, it was assumed that sunspots were akin to atmospheric clouds. That is, scientists assumed they affected overall solar output by blocking light, thus reducing the total power reaching Earth.
Thus, when Abbot's observations showed the opposite correlation, they were assumed to be erroneous. His purported correlations with terrestrial weather observations were similarly confused, and thus dismissed.
Since then, astrophysicists have realized that sunspots are more like a symptom of increased internal solar activity. That is, increases in sunspot activity positively correlate with increases in the internal dynamism that generates the Sun's power output. Seen in this light, Abbot's observations and analysis make a whole lot more sense.
We have ample evidence, from historical observations of climate changes correlating with observed variations in sunspot activity, that there is a strong connection between climate and solar variability. Most notably, the Spörer and Maunder minima (times when sunspot activity all but disappeared for extended periods) correlate with historically cold periods in Earth's history. A similar stretch of low solar activity (as measured by sunspot numbers) from about 1790 to 1830, called the "Dalton Minimum," similarly depressed global temperatures and gave an anomalously low baseline for the run-up to the Modern Maximum.
For astrophysicists, the phenomenon of solar variability is not in doubt. The questions that remain are how large the variations are, how closely they correlate with climate change, and whether they are predictable.
Studies of solar variability, however, run afoul of the IPCC dogma. For example, in May of 2017 an international team of solar dynamicists led by Valentina V. Zharkova at Northumbria University in the U.K. published a paper entitled "On a role of quadruple component of magnetic field in defining solar activity in grand cycles" in the Journal of Atmospheric and Solar-Terrestrial Physics. Their research indicates that the Sun, while its activity has been on the upswing for an extended period, should be heading into a quiescent period starting with the next maximum of the 11-year sunspot cycle in around five years.
That would indicate that the IPCC prediction of exponentially increasing global temperatures due to human-caused increasing carbon-dioxide levels may be dead wrong. I say “may be dead wrong” because this is science, not dogma. In science, nothing is incontrovertible.
I was clued in to this research by my friend Dan Romanchik, who writes a blog for amateur radio enthusiasts. Amateur radio enthusiasts care about solar activity because sunspots trace magnetic activity at the Sun's surface, and that same activity drives the ultraviolet and X-ray output that ionizes the Kennelly–Heaviside layer of Earth's upper atmosphere (roughly 90–150 km, or 56–93 mi, above the ground).
Radio amateurs bounce signals off this layer to reach distant stations beyond the line of sight. When solar activity is weak, the layer's ionization weakens, lowering the frequencies it will reflect and reducing the effectiveness of this technique (often called "DXing").
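The reach of that bounce is set by simple geometry: a signal leaving the transmitter at a grazing angle and reflecting once at the layer's height can travel a couple of thousand kilometers in a single hop. The sketch below works that out from the Earth's radius and the 90–150 km layer heights mentioned above; the function name and the grazing-ray assumption are mine, not anything from the original post.

```python
import math

EARTH_RADIUS_KM = 6371.0

def max_single_hop_km(layer_height_km: float) -> float:
    """Maximum ground distance covered by one ionospheric hop,
    assuming a ray that leaves the transmitter tangent to the
    Earth's surface and reflects once at the given height."""
    r = EARTH_RADIUS_KM
    # Central angle between the transmitter and the point directly
    # below the reflection, for a 0-degree (grazing) takeoff ray.
    half_angle = math.acos(r / (r + layer_height_km))
    # The full hop spans twice that angle along the Earth's surface.
    return 2.0 * r * half_angle

# The Kennelly-Heaviside layer sits at roughly 90-150 km:
for h in (90.0, 150.0):
    print(f"layer at {h:.0f} km -> ~{max_single_hop_km(h):.0f} km per hop")
```

For a layer at 90 km this gives a hop on the order of 2,100 km, rising toward 2,700 km at 150 km; reaching the other side of the planet takes several hops, each one losing signal strength, which is why DXers care so much about how strongly the layer reflects.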
In his post of 16 December 2018, Dan complained: "If you operate HF [the high-frequency radio band], it's no secret that band conditions have not been great. The reason, of course, is that we're at the bottom of the sunspot cycle. If we're at the bottom of the sunspot cycle, then there's no way to go but up, right? Maybe not."
After discussing the NOAA prediction, he went on to further complain: “And, if that wasn’t depressing enough, I recently came across an article reporting on the research of Prof. Valentina Zharkova, who is predicting a grand minimum of 30 years!”
He included a link to a presentation Dr. Zharkova made at the Global Warming Policy Foundation last October in which she outlined her research and pointedly warned that the IPCC dogma was totally wrong.
I followed the link, viewed her presentation, and concluded two things: first, the research methods she used are ones I'm quite familiar with, having used them on numerous occasions; and second, she used those techniques correctly, reaching convincing conclusions.
Her results seem well aligned with a meta-analysis published by the Cato Institute in 2015, which I mentioned in my posting of 10 October 2018 to this blog. The Cato meta-analysis of observational data indicated a much reduced rate of global warming compared to that predicted by IPCC models.
The Zharkova-model data covers a much wider period (millennia-long time scale rather than decades-long time scale) than the Cato data. It’s long enough to show the Medieval Warm Period as well as the Little Ice Age (Maunder minimum) and the recent warming trend that so fascinates climate-change activists. Instead of a continuation of the modern warm period, however, Zharkova’s model shows an abrupt end starting in about five years with the next maximum of the 11-year sunspot cycle.
Don't expect a stampede of media coverage disputing the IPCC dogma, however. A host of politicians (especially among those in the U.S. Democratic Party) have hung their hats on that dogma, as have an array of governments that have sold policy decisions based on it. The political left has made an industry of vilifying anyone who doesn't toe the "climate change" line, calling them "climate deniers" with suspect intellectual capabilities and moral characters.
Again, this sounds a lot like dogma. It’s the same tactic that the Inquisition used against Bruno and Galileo before escalating to more brutal methods.
Supporters of Zharkova's research labor under a number of disadvantages. The most obvious is that Zharkova's thick Ukrainian accent limits her ability to explain her work to those who don't want to listen. She would not come off well on the evening news.
A more important disadvantage is the abstruse nature of the applied mathematics techniques used in the research. How many political reporters and, especially, commentators are familiar enough with the mathematical technique of principal component analysis to understand what Zharkova's talking about? This stuff makes macroeconomic modeling look like kiddie play!
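For readers who haven't met principal component analysis, the idea is less mysterious than it sounds: it decomposes a set of correlated measurements into a few independent "modes" ranked by how much of the variation each one explains. The toy sketch below is my own illustration, not Zharkova's actual pipeline or data; it mixes two hidden oscillations into four noisy measurement channels and shows PCA recovering the fact that two components account for nearly all the variance.

```python
import numpy as np

# Toy stand-in for a multichannel time series: rows are
# observations, columns are measurement channels.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)

# Two underlying "modes" mixed into four channels, plus noise.
modes = np.column_stack([np.sin(t), np.sin(2 * t)])
mixing = np.array([[1.0, 0.5], [0.8, -0.3], [0.2, 1.1], [-0.6, 0.9]])
data = modes @ mixing.T + 0.05 * rng.standard_normal((200, 4))

# PCA via singular value decomposition of the centered data.
centered = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)  # fraction of variance per component

print("variance explained per component:", np.round(explained, 3))
# The first two components soak up almost all the variance,
# because only two independent modes went into the data.
```

Zharkova's group applies the same kind of decomposition to solar magnetic-field observations, summarizing the field with a small number of dominant components whose evolution can then be projected forward; that projection is what produces the grand-minimum prediction.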
But, the situation’s even worse because to really understand the research, you also need an appreciation of stellar dynamics, which is based on magnetohydrodynamics. How many CNN commentators even know how to spell that?
Of course, these are all tools of the trade for astrophysicists. They’re as familiar to them as a hammer or a saw is to a carpenter.
For those in the media, on the other hand, it’s a lot easier to take the “most scientists agree” mantra at face value than to embark on the nearly hopeless task of re-educating themselves to understand Zharkova’s research. That goes double for politicians.
It’s entirely possible that “most” scientists might agree with the IPCC dogma, but those in a position to understand what’s driving Earth’s climate do not agree.