Nationalism and Diversity

Flags of many countries
Nationalism can promote diversity – or not! Brillenstimmer/shutterstock

16 January 2019 – The poster child for rampant nationalism is Hitler’s National Socialist German Workers’ Party, commonly called the Nazi Party. I say “is” rather than “was” because, while resoundingly defeated by the Allies of World War II in 1945, the Nazi Party still has appeal in Germany, and throughout the world.

These folks give nationalism a bad name, leading the Oxford Living Dictionary to give primacy to the following definition of nationalism: “Identification with one’s own nation and support for its interests, especially to the exclusion or detriment of the interests of other nations.” [Emphasis added.]

The Oxford Dictionary also offers a second definition of nationalism: “Advocacy of or support for the political independence of a particular nation or people.”

This second definition is a lot more benign, and one that I wish were more often used. I certainly prefer it!

Nationalism under the first definition has been used since time immemorial as an excuse to create closed, homogeneous societies. That was probably the biggest flaw of the Nazi state(s). Death camps, ethnic cleansing, slave labor, and most of the other evils of those regimes flowed directly from their attempts to build closed, homogeneous societies.

Under the second definition, however, nationalism can, and should, be used to create a more diverse society.

That’s a good thing, as the example of United States history clearly demonstrates. Most of U.S. success can be traced directly to the country’s ethnic, cultural and racial diversity. The fact that the U.S., with a paltry 5% of the world’s population, now has by far the largest economy; that it dominates the fields of science, technology and the humanities; that its common language (American English) is fast becoming the “lingua franca” of the entire world; and that it effectively leads the world by so many measures can be attributed directly to the continual renewal of its population diversity by immigration. In any of these areas, it’s easy to point out major contributions from recent immigrants or other minorities.

This harkens back to a theory of cultural development I worked out in the 1970s. It starts with the observation that all human populations – no matter how large or how small – consist of individuals whose characteristics vary somewhat. When visualized on a multidimensional scatter plot, populations generally consist of a cluster with a dense center and fewer individuals farther out.

Globular cluster image
The Great Hercules Star Cluster. Albert Barr/Shutterstock

This pattern is similar to the image of a typical globular star cluster in the photo at right. Globular star clusters exhibit this pattern in three dimensions, while human populations exist and can be mapped on a great many dimensions representing different characteristics. Everything from physical characteristics like height, weight and skin color, to non-physical characteristics like ethnicity and political ideology – essentially anything that can be measured – can be plotted as a separate dimension.

The dense center of the pattern consists of individuals whose characteristics don’t stray too far from the norm. Everyone, of course, is a little off average. For example, the average white American female is five feet, four inches tall. Nearly everyone in that population, however, is a little taller or shorter than exactly average. Very few are considerably taller or shorter, with more individuals closer to the average than farther out.

The population’s diversity shows up as a widening of the pattern. That is, diversity is a measure of how often individuals appear farther out from the center.
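To make the picture concrete, here’s a minimal sketch of a single characteristic, height, modeled as a normal distribution. The five-feet-four average comes from the text; the 2.5-inch spread and everything else are invented for illustration. Most individuals bunch near the center, and “diversity” shows up as the width of the pattern:

```python
import random
import statistics

random.seed(42)

# Average from the text; the spread is an assumed figure, not a measurement.
MEAN_IN, SPREAD_IN = 64.0, 2.5

population = [random.gauss(MEAN_IN, SPREAD_IN) for _ in range(100_000)]

# "Diversity" here is just the spread of the pattern: the standard deviation.
diversity = statistics.stdev(population)

# Most individuals cluster near the center; few sit far out in the tails.
within_1_sigma = sum(abs(h - MEAN_IN) <= SPREAD_IN for h in population) / len(population)
beyond_2_sigma = sum(abs(h - MEAN_IN) > 2 * SPREAD_IN for h in population) / len(population)

print(f"spread (std dev): {diversity:.2f} in")
print(f"within one spread of average: {within_1_sigma:.0%}")
print(f"far out (beyond two spreads): {beyond_2_sigma:.1%}")
```

A real population would need one such axis per characteristic, which is where the multidimensional scatter plot comes in.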

Darwin’s theory of natural selection posits that where the population center sits depends on prevailing conditions. What counts as average height, for example, depends on a complex interplay of conditions, including nutrition, attractiveness to the opposite sex, and so forth.

Observing that conditions change with time, one expects the ideal center of the population to move about in the multidimensional characteristics space. Better childhood nutrition, for example, should push the population toward greater height. And, it does!

One hopes that these changes happen slowly, giving the population a chance to follow in response. If the changes happen too fast, however, the population can’t respond quickly enough, and it goes extinct. Woolly mammoths, for example, were unable to cope with the combination of environmental changes and increased predation by humans migrating into North America after the last Ice Age, and they died out. No more woolly mammoths!

Assuming whatever changes occur happen slowly enough, those individuals in the part of the distribution better adapted to the new conditions do better than those on the opposite side. So, the whole population shifts with time toward characteristics that are better adapted.

Where diversity comes into this dynamic is by providing more individuals in the better-adapted part of the distribution. The faster conditions change, the more individuals you need at the edges of the population to help with the response. For example, if the climate gets warmer, it’s folks who like to wear skimpy outfits who thrive. Folks who insist on covering themselves up in heavy clothing don’t do so well. That was amply demonstrated when Englishmen tried to wear their heavy Elizabethan outfits in the warmer North American climate. Styles changed practically overnight!

Closed, homogeneous societies of the type the Nazis tried to create have low diversity. They try to suppress folks who differ from the norm. When conditions change, such societies have less of the diversity needed to respond, so they wither and die.

That’s why cultures need diversity, and the more diversity, the better.

We live in a chaotic universe. The most salient characteristic of chaotic systems is constant change. Without diversity, we can’t respond to that change.

That’s why, when technological change sped up in the early twentieth century, it was the bohemians of the twenties, developing into the beatniks of the fifties and the hippies of the sixties, who defined the cultures of the seventies and beyond.

Jerry Garcia stamp image
spatuletail/shutterstock

Long live Ben and Jerry’s Cherry Garcia Ice Cream!

The Scientific Method

Scientific Method Diagram
The scientific method assumes uncertainty.

9 January 2019 – This week I start a new part-time position on the faculty at Florida Gulf Coast University teaching two sections of General Physics laboratory. In preparation, I dusted off a posting to this blog from last summer that details my take on the scientific method, which I re-edited to present to my students. I thought readers of this blog might profit by my posting the edited version. The original posting contrasted the scientific method of getting at the truth with the method used in the legal profession. Since I’ve been banging on about astrophysics and climate science, specifically, I thought it would be helpful to zero in again on how scientists figure out what’s really going on in the world at large. How do we know what we think we know?


While high-school curricula like to teach the scientific method as a simple step-by-step program, the reality is much more complicated. The version they teach you in high school is a procedure consisting of five to seven steps, which pretty much look like this:

  1. Observation
  2. Hypothesis
  3. Prediction
  4. Experimentation
  5. Analysis
  6. Repeat

I’ll start by explaining how this program is supposed to work, then look at why it doesn’t actually work that way. The reasons also explain why the concept is so fuzzy that it’s not really clear how many steps should be included.

The Stepwise Program

It all starts with observation of things that go on in the World.

Newton’s law of universal gravitation started with the observation that when left on their own, most things fall down. That’s Newton’s falling-apple observation. Generally, the observation is so common that it takes a genius to ask the question: “why?”

Once you ask the question “why,” the next thing that happens is that your so-called genius comes up with some cockamamie explanation, called an “hypothesis.” In fact, there are usually several possible explanations that vary from the erudite to the thoroughly bizarre.

Through bitter experience, scientists have come to realize that no hypothesis is too whacko to be considered. It’s often the most outlandish hypothesis that proves to be right!

For example, the ancients tended to think in terms of objects somehow “wanting” to go downward as the least weird of explanations for gravity. The idea came from animism, which was the not-too-bizarre (to the ancients) idea that natural objects each have their own spirits, which animate their behavior: Rocks are hard because their spirits resist being broken; They fall down when released because their spirits somehow like down better than up.

What we now consider the most-correctest explanation (that we live in a four-dimensional space-time continuum that is warped by concentrations of matter-energy so that objects follow paths that tend to converge with each other) wouldn’t have made any sense at all to the ancients. It, in fact, doesn’t make a whole lot of sense to anybody who hasn’t spent years immersing themselves in the subject of Einstein’s General Theory of Relativity.

Scientists then take all the hypotheses available, and use them to make predictions as to what happens next if you set up certain relevant situations, called “experiments.” An hypothesis works if its predictions match up with what Mommy Nature produces for results from the experiments.

Scientists then do tons of experiments testing different predictions of the hypotheses, then compare (the analysis step) the results, and eventually develop a warm, fuzzy feeling that one hypothesis does a better job of predicting what Mommy Nature does than do the others.

It’s important to remember that no scientist worth his or her salt believes that the currently accepted hypothesis is actually in any absolute sense “correct.” It’s just the best explanation among the ones we have on hand now.

That’s why the last step is to repeat the entire process ad nauseam.

While this long, drawn out process does manage to cover the main features of the scientific method, it fails in one important respect: it doesn’t boil the method down to its essentials.

Not boiling the method down to its essentials forces one to deal with all kinds of exceptions created by the extraneous, non-essential bits. There end up being more exceptions than rules. For example, the science-pedagogy website Science Buddies ends up throwing its hands in the air by saying: “In fact, there are probably as many versions of the scientific method as there are scientists!”

A More Holistic Approach

The much simpler explanation I’ve used for years to teach college students about the scientific method follows the diagram above. The pattern is quite simple, with only four components. It starts by setting up a set of initial conditions, then follows two complementary paths through to the results.

There are two ways to get from the initial conditions to the results. The first is to just set the whole thing up, and let Mommy Nature do her thing. The second is to think through your hypothesis (the model) to predict what it says Mommy Nature will come up with. If they match, you count your hypothesis as a success. If not, it’s wrong.

If you do that a bazillion times in a bazillion different ways, a really successful hypothesis (like General Relativity) will turn out right pretty much all of the time.
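The two-path pattern is easy to sketch in code. Here’s a hypothetical free-fall version (all numbers invented): path one thinks the hypothesis through to a prediction, path two “asks Mommy Nature” via a simulated noisy measurement, and the hypothesis counts as a success when the two nearly always agree within experimental uncertainty.

```python
import random

random.seed(0)
G = 9.81  # m/s^2, standard gravitational acceleration

def model_prediction(drop_height_m):
    """Path 1: think the hypothesis (Newtonian kinematics) through to a
    predicted fall time, t = sqrt(2h/g)."""
    return (2 * drop_height_m / G) ** 0.5

def run_experiment(drop_height_m, timing_error_s=0.05):
    """Path 2: set up the initial conditions and let nature respond.
    (Simulated here: the true fall time plus random timing error.)"""
    return model_prediction(drop_height_m) + random.gauss(0, timing_error_s)

def hypothesis_survives(drop_height_m, trials=100, tolerance_s=0.15):
    """Compare the two paths many times; the hypothesis 'works' if
    predictions match measurements within experimental uncertainty."""
    predicted = model_prediction(drop_height_m)
    measured = [run_experiment(drop_height_m) for _ in range(trials)]
    mismatches = sum(abs(m - predicted) > tolerance_s for m in measured)
    return mismatches / trials < 0.05  # nearly always agree -> success

print(hypothesis_survives(2.0))
```

A genuinely wrong hypothesis (say, fall time proportional to height) would rack up mismatches instead, and get discarded.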

Generally, if you’ve got a really good hypothesis but your experiment doesn’t work out right, you’ve screwed up somewhere. That means what you actually set up as the initial conditions wasn’t what you thought you were setting up. So, Mommy Nature (who’s always right) doesn’t give you the result you thought you should get.

For example, I was once (at a University other than this one) asked to mentor another faculty member who was having trouble building an experiment to demonstrate what he thought was an exception to Newton’s Second Law of Motion. It was based on a classic experiment called “Atwood’s Machine.” He couldn’t get the machine to give the results he was convinced he should get.

I immediately recognized that he’d made a common mistake novice physicists often make. I tried to explain it to him, but he refused to believe me. Then, I left the room.

I walked away because, despite his conviction, Mommy Nature wasn’t going to do what he expected her to. He persisted in believing that there was something wrong with his experimental apparatus. It was his hypothesis, instead.

Anyway, the way this method works is that you look for patterns in what Mommy Nature does. Your hypothesis is just a description of some part of Mommy Nature’s pattern. Scientific geniuses are folks who are really, really good at recognizing the patterns Mommy Nature uses.

If your scientific hypothesis is wrong (meaning it gives wrong results), “So, What?”

Most scientific hypotheses are wrong! They’re supposed to be wrong most of the time.

Finding that some hypothesis is wrong is no big deal. It just means it was a dumb idea, and you don’t have to bother thinking about that dumb idea anymore.

Alien abductions get relegated to entertainment for the entertainment starved. Real scientists can go on to think about something else, like the kinds of conditions leading to development of living organisms and why we don’t see alien visitors walking down Fifth Avenue.

(FYI: the current leading hypothesis is that the distances from there to here are so vast that anybody smart enough to figure out how to make the trip has better things to do.)

For scientists “Gee, it looks like … ” is usually as good as it gets!

Don’t Tell Me What to Think!

Your Karma ran over My Dogma
A woman holds up a sign while participating in the annual King Mango Strut parade in Miami, FL on 28 December 2014. BluIz60/Shutterstock

2 January 2019 – Now that the year-end holidays are over, it’s time to get back on my little electronic soapbox to talk about an issue that scientists have had to fight with authorities over for centuries. It’s an issue that has been around for millennia, but before a few centuries ago there weren’t scientists around to fight over it. The issue rears its ugly head under many guises. Most commonly today it’s discussed as academic freedom, or freedom of expression. You might think it was definitively won for all Americans in 1791 with the ratification of the first ten amendments to the U.S. Constitution and for folks in other democracies soon after, but you’d be wrong.

The issue is wrapped up in one single word: dogma.

According to the Oxford English Dictionary, the word dogma is defined as:

“A principle or set of principles laid down by an authority as incontrovertibly true.”

In 1600 CE, Giordano Bruno was burned at the stake for insisting that the stars were distant suns surrounded by their own planets, raising the possibility that these planets might foster life of their own, and that the universe is infinite and could have no “center.” These ideas directly controverted the dogma laid down as incontrovertibly true by both the Roman Catholic and Protestant Christian churches of the time.

Galileo Galilei, typically thought of as the poster child for resistance to dogma, was only placed under house arrest (for the rest of his life) for advocating the less radical Copernican vision of the solar system.

Nicholas Copernicus, himself, managed to fly under the Catholic Church’s radar for more than three decades by the simple tactic of not publishing his heliocentric model. Starting in 1510, he privately communicated it to his friends, who then passed it to some of their friends, and so on. His signature work, De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres), in which he laid it out for all to see, wasn’t published until his death in 1543, when he’d already escaped beyond the reach of earthly authorities.

If this makes it seem that astrophysicists have been on the front lines of the war against dogma since there was dogma to fight against, that’s almost certainly true. Astrophysicists study stuff relating to things beyond the Earth, and that traditionally has been a realm claimed by religious authorities.

That claim largely started with Christianity, specifically the Roman Catholic Church. Ancient religions, which didn’t have delusions that they could dominate all of human thought, didn’t much care what cockamamie ideas astrophysicists (then just called “philosophers”) came up with. Thus, Aristarchus of Samos suffered no ill consequences (well, maybe a little, but nothing life-threatening, or even career-threatening) from proposing the same ideas that Galileo was arrested for championing some eighteen centuries later.

Fast forward to today and we have a dogma espoused by political progressives called “climate change.” It used to be called “global warming,” but that term was laughed down decades ago, though the dogma’s still the same.

The United-Nations-funded Intergovernmental Panel on Climate Change (IPCC) has become “the Authority” laying down the principles that Earth’s climate is changing and that change constitutes a rapid warming caused by human activity. The dogma also posits that this change will continue uninterrupted unless national governments promulgate drastic laws to curtail human activity.

Sure sounds like dogma to me!

Once again, astrophysicists are on the front lines of the fight against dogma. The problem is that the IPCC dogma treats the Sun (which is what powers Earth’s climate in the first place) as, to all intents and purposes, a fixed star. That is, it assumes climate change arises solely from changes in Earthly conditions, then assumes we control those conditions.

Astrophysicists know that just ain’t so.

First, stars generally aren’t fixed. Most stars are variable stars. In fact, all stars are variable on some time scale. They all evolve over time scales of millions or billions of years, but that’s not the kind of variability we’re talking about here.

The Sun is in the evolutionary phase called “main sequence,” where stars evolve relatively slowly. That’s the source of much “invariability” confusion. Main sequence stars, however, go through periods where they vary in brightness more or less violently on much shorter time scales. In fact, most main sequence stars exhibit this kind of behavior to a greater or lesser extent at any given time – like now.

So, a modern (as in post-nineteenth-century) astrophysicist would never make the bald assumption that the Sun’s output was constant. Statistically, the odds are against it. Most stars are variables; the Sun is like most stars; so the Sun is probably a variable. In fact, it’s well known to vary with a fairly stable period of roughly 22 years (the 11-year “sunspot cycle” is actually only a half cycle).
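The half-cycle relationship is easy to illustrate. In this sketch (an idealized sinusoid, not real solar data), the Sun’s magnetic field oscillates with a 22-year period, while sunspot activity follows the field’s magnitude and so peaks every 11 years:

```python
import math

MAGNETIC_PERIOD_YR = 22.0  # the full (Hale) magnetic cycle

def magnetic_field(t_years):
    """Signed field strength: one full oscillation every 22 years."""
    return math.sin(2 * math.pi * t_years / MAGNETIC_PERIOD_YR)

def sunspot_activity(t_years):
    """Sunspot counts track the field's magnitude, not its sign, so they
    peak twice per magnetic cycle: the familiar ~11-year sunspot cycle."""
    return abs(magnetic_field(t_years))

# Peaks of |field| land 11 years apart even though the signed field
# repeats only every 22 years.
print(sunspot_activity(5.5), sunspot_activity(16.5))  # both are peaks
print(magnetic_field(5.5), magnetic_field(16.5))      # opposite polarity
```

That sign flip is why the “11-year sunspot cycle” is really half of a 22-year magnetic cycle.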

A couple of centuries ago, astronomers assumed (with no evidence) that the Sun’s output was constant, so they started trying to measure this assumed “solar constant.” Charles Greeley Abbot, who served as the Secretary of the Smithsonian Institution from 1928 to 1944, oversaw the first long-term study of solar output.

His observations were necessarily ground-based, and the variations he observed (amounting to 3–5 percent) have been dismissed as “due to changing weather conditions and incomplete analysis of his data.” That, despite the monumental efforts he went through to control for such effects.

In the 1970s I did an independent analysis of his data and realized that part of his problem stemmed from a misunderstanding of the relationship between sunspots and solar irradiance. At the time, it was assumed that sunspots were akin to atmospheric clouds. That is, scientists assumed they affected overall solar output by blocking light, thus reducing the total power reaching Earth.

Thus, when Abbot’s observations showed the opposite correlation, they were assumed to be erroneous. His purported correlations with terrestrial weather observations were similarly confused, and thus dismissed.

Since then, astrophysicists have realized that sunspots are more like a symptom of increased internal solar activity. That is, increases in sunspot activity positively correlate with increases in the internal dynamism that generates the Sun’s power output. Seen in this light, Abbot’s observations and analysis make a whole lot more sense.

We have ample evidence, from historical observations of climate changes correlating with observed variations in sunspot activity, that there is a strong connection between climate and solar variability. Most notably, the Spörer and Maunder minima (times when sunspot activity all but disappeared for extended periods) correlate with historically cold periods in Earth’s history. A similar period of low solar activity (as measured by sunspot numbers) from about 1790 to 1830, called the “Dalton Minimum,” similarly depressed global temperatures and gave an anomalously low baseline for the run-up to the Modern Maximum.

For astrophysicists, the phenomenon of solar variability is not in doubt. The questions that remain are how much the Sun varies, how closely those variations correlate with climate change, and whether they’re predictable.

Studies of solar variability, however, run afoul of the IPCC dogma. For example, in May of 2017 an international team of solar dynamicists led by Valentina V. Zharkova at Northumbria University in the U.K. published a paper entitled “On a role of quadruple component of magnetic field in defining solar activity in grand cycles” in the Journal of Atmospheric and Solar-Terrestrial Physics. Their research indicates that the Sun, while its activity has been on the upswing for an extended period, should be heading into a quiescent period starting with the next maximum of the 11-year sunspot cycle, around five years from now.

That would indicate that the IPCC prediction of exponentially increasing global temperatures due to human-caused increasing carbon-dioxide levels may be dead wrong. I say “may be dead wrong” because this is science, not dogma. In science, nothing is incontrovertible.

I was clued in to this research by my friend Dan Romanchik, who writes a blog for amateur radio enthusiasts. Amateur radio enthusiasts care about solar activity because sunspots are, in fact, caused by magnetic fields at the Sun’s surface, and that same activity controls the ionizing radiation responsible for the Kennelly–Heaviside layer of ionized gas in Earth’s upper atmosphere (roughly 90–150 km, or 56–93 mi, above the ground).

Radio amateurs bounce signals off this layer to reach distant stations beyond line of sight. When solar activity is weak this layer drops to lower altitudes, reducing the effectiveness of this technique (often called “DXing”).
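A rough feel for why layer height matters comes from simple geometry. This sketch (an idealized mirror reflection at grazing incidence, not a real propagation model) computes the maximum ground distance one skywave hop can cover for a given layer height:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def max_single_hop_km(layer_height_km):
    """Maximum ground distance for one skywave hop, assuming a ray that
    leaves the antenna at grazing (0 degree) elevation and mirror-reflects
    off the ionized layer. Pure geometry, ignoring refraction details."""
    half_angle = math.acos(EARTH_RADIUS_KM / (EARTH_RADIUS_KM + layer_height_km))
    return 2 * EARTH_RADIUS_KM * half_angle

# When solar activity is weak and the layer sits lower, each hop is shorter:
print(f"layer at 150 km: {max_single_hop_km(150):.0f} km per hop")
print(f"layer at  90 km: {max_single_hop_km(90):.0f} km per hop")
```

A lower layer trims several hundred kilometers off each hop, which is part of why weak solar activity makes DXing harder.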

In his post of 16 December 2018, Dan complained: “If you operate HF [the high-frequency radio band], it’s no secret that band conditions have not been great. The reason, of course, is that we’re at the bottom of the sunspot cycle. If we’re at the bottom of the sunspot cycle, then there’s no way to go but up, right? Maybe not.

“Recent data from the NOAA’s Space Weather Prediction Center seems to suggest that solar activity isn’t going to get better any time soon.”

After discussing the NOAA prediction, he went on to further complain: “And, if that wasn’t depressing enough, I recently came across an article reporting on the research of Prof. Valentina Zharkova, who is predicting a grand minimum of 30 years!”

He included a link to a presentation Dr. Zharkova made at the Global Warming Policy Foundation last October in which she outlined her research and pointedly warned that the IPCC dogma was totally wrong.

I followed the link, viewed her presentation, and concluded two things:

  1. The research methods she used are some that I’m quite familiar with, having used them on numerous occasions; and

  2. She used those techniques correctly, reaching convincing conclusions.

Her results seem well aligned with a meta-analysis published by the Cato Institute in 2015, which I mentioned in my posting of 10 October 2018 to this blog. The Cato meta-analysis of observational data indicated a much reduced rate of global warming compared to that predicted by IPCC models.

The Zharkova-model data covers a much wider period (millennia-long time scale rather than decades-long time scale) than the Cato data. It’s long enough to show the Medieval Warm Period as well as the Little Ice Age (Maunder minimum) and the recent warming trend that so fascinates climate-change activists. Instead of a continuation of the modern warm period, however, Zharkova’s model shows an abrupt end starting in about five years with the next maximum of the 11-year sunspot cycle.

Don’t expect a stampede of media coverage disputing the IPCC dogma, however. A host of politicians (especially among those in the U.S. Democratic Party) have hung their hats on that dogma as well as an array of governments who’ve sold policy decisions based on it. The political left has made an industry of vilifying anyone who doesn’t toe the “climate change” line, calling them “climate deniers” with suspect intellectual capabilities and moral characters.

Again, this sounds a lot like dogma. It’s the same tactic that the Inquisition used against Bruno and Galileo before escalating to more brutal methods.

Supporters of Zharkova’s research labor under a number of disadvantages. Of course, there’s the obvious disadvantage that Zharkova’s thick Ukrainian accent limits her ability to explain her work to those who don’t want to listen. She would not come off well on the evening news.

A more important disadvantage is the abstruse nature of the applied mathematics techniques used in the research. How many political reporters and, especially, commentators are familiar enough with the mathematical technique of principal component analysis to understand what Zharkova’s talking about? This stuff makes macroeconomics modeling look like kiddie play!
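For the curious, here’s a bare-bones illustration of principal component analysis on synthetic data (two noisy channels driven by one shared oscillation; nothing like real solar magnetograms): PCA discovers that a single component captures nearly all the variance.

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(0, 4 * np.pi, 500)
# Two "observation channels" that are mostly the same underlying signal:
signal = np.sin(t)
data = np.column_stack([
    signal + 0.1 * rng.standard_normal(t.size),
    0.8 * signal + 0.1 * rng.standard_normal(t.size),
])

# PCA: eigen-decompose the covariance of the mean-centered data.
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]  # largest variance first
eigenvalues = eigenvalues[order]

# The first principal component captures nearly all the variance, because
# both channels are driven by one shared oscillation.
explained = eigenvalues / eigenvalues.sum()
print(f"variance explained by PC1: {explained[0]:.1%}")
```

Zharkova’s team applies the same idea to decades of solar magnetic-field observations, extracting the few dominant components that describe the Sun’s large-scale behavior.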

But, the situation’s even worse because to really understand the research, you also need an appreciation of stellar dynamics, which is based on magnetohydrodynamics. How many CNN commentators even know how to spell that?

Of course, these are all tools of the trade for astrophysicists. They’re as familiar to them as a hammer or a saw is to a carpenter.

For those in the media, on the other hand, it’s a lot easier to take the “most scientists agree” mantra at face value than to embark on the nearly hopeless task of re-educating themselves to understand Zharkova’s research. That goes double for politicians.

It’s entirely possible that “most” scientists might agree with the IPCC dogma, but those in a position to understand what’s driving Earth’s climate do not agree.

Reimagining Our Tomorrows

Cover Image
Utopia with a twist.

19 December 2018 – I generally don’t buy into utopias.

Utopias are intended as descriptions of a paradise. They’re supposed to be a paradise for everybody, and they’re supposed to be filled with happy people committed to living in their city (utopias are invariably built around descriptions of cities), which they imagine to be the best of all possible cities located in the best of all possible worlds.

Unfortunately, however, utopia stories are written by individual authors, and each would be a paradise only for its particular author. If the author is persuasive enough, the story will win over a following of disciples, who will praise it to high Heaven. Once in a great while (actually, surprisingly often) those disciples become so enamored of the description that they’ll drop everything and actually attempt to build a city to match it.

When that happens, it invariably ends in tears.

That’s because, while utopian stories invariably describe city plans that would be paradise to their authors, great swaths of the population would find living in them to be horrific.

Even Thomas More, the sixteenth century philosopher, politician and generally overall smart guy who’s credited with giving us the word “utopia” in the first place, was wise enough to acknowledge that the utopia he described in his most famous work, Utopia, wouldn’t be such a fun place for the slaves he had serving his upper-middle class citizens, who were the bulwark of his utopian society.

Even Plato’s Republic, which gave us the conundrum summarized in Juvenal’s Satires as “Who guards the guards?,” was never meant as a workable society. Plato’s work, in general, was meant to teach us how to think, not what to think.

What to think is a highly malleable commodity that varies from person to person, society to society, and, most importantly, from time to time. Plato’s Republic reflected what might have passed as good ideas for city planning in 380 BC Athens, but they wouldn’t have passed muster in More’s sixteenth-century England. Still less would they be appropriate in twenty-first-century democracies.

So, I approached Joe Tankersley’s Reimagining Our Tomorrows with some trepidation. I wouldn’t have put in the effort to read the thing if it weren’t for the subtitle: “Making Sure Your Future Doesn’t SUCK.”

That subtitle indicated that Tankersley just might have a sense of humor, and enough gumption to put that sense of humor into his contribution to Futurism.

Futurism tends to be the work of self-important intellectuals out to make a buck by feeding their audience fantasies that sound profound but bear no relation to any actual, or even possible, future. Its greatest value is in stimulating profits for publishers of magazines and books about Futurism. Otherwise, such works aren’t worth the trees killed to make the paper they’re printed on.

Trees, after all and as a group, make a huge contribution to all facets of human life. Like, for instance, breathing. Breathing is of incalculable value to humans. Trees make an immense contribution to breathing by absorbing carbon dioxide and pumping out vast quantities of oxygen, which humans like to breathe.

We like trees!

Futurists, not so much.

Tankersley’s little (168 pages, not counting author bio, front matter and introduction) opus is not like typical Futurist literature, however. Well, it would be like that if it weren’t more like the Republic in that its avowed purpose is to stimulate its readers to think about the future themselves. In the introduction that I purposely left out of the page count he says:

“I want to help you reimagine our tomorrows; to show you that we are living in a time when the possibility of creating a better future has never been greater.”

Tankersley structured the body of his book in ten chapters, each telling a separate story about an imagined future centered around a possible solution to an issue relevant today. Following each chapter is an “apology” by a fictional future character named Archibald T. Patterson III.

Archie is what a hundred years ago would have been called a “Captain of Industry.” Today, we’d refer to him as an uber-rich and successful entrepreneur. Think Elon Musk or Bill Gates.

Actually, I think he’s more like Warren Buffett in that he’s reasonably introspective and honest with himself. Archie sees where society has come from, how it got to the future it got to, and what he and his cohorts did wrong. While he’s super-rich and privileged, the futures the stories describe were made by other people who weren’t uber-rich and successful. His efforts largely came to naught.

The point Tankersley seems to be making is that progress comes from the efforts of ordinary individuals who, in true British fashion, “muddle through.” They see a challenge and apply their talents and resources to making a solution. The solution is invariably nothing anyone would foresee, and is nothing like what anyone else would come up with to meet the same challenge. Each is a unique response to a unique challenge by unique individuals.

It might seem naive, this idea that human development comes from ordinary individuals coming up with ordinary solutions to ordinary problems all banded together into something called “progress,” but it’s not.

For example, Mark Zuckerberg developed Facebook as a response to the challenge of applying then-new computer-network technology to the age-old quest by late adolescents to form their own little communities by communicating among themselves. It’s only fortuitous that he happened on the right combination of time (the dawn of a radical new technology), place (in the midst of a huge cadre of the right people well versed in using that radical new technology) and marketing to get the word out to those right people wanting to use that radical new technology for that purpose. Take away any of those elements and there’d be no Facebook!

What if Zuckerberg hadn’t invented Facebook? In that event, somebody else (Reid Hoffman) would have come up with a similar solution (Linkedin) to the same challenge facing a similar group (technology professionals).

Oh, my! They did!

History abounds with similar examples. There’s hardly any advancement in human culture that doesn’t fit this model.

The good news is that Tankersley’s vision for how we can re-imagine our tomorrows is right on the money.

The bad news is … there isn’t any bad news!

Secular and Sectarian

Church and State
The intersection of Church Street and State Street in Champaign, Illinois. Kristopher Kettner/Shutterstock

28 November 2018 – There’s a reason all modern civilized countries, at least all democracies, institutionalize separation of church and state. It’s the most critical part of the “separation of powers” mantra that the U.S. Founding Fathers repeated ad nauseam. It’s also a rant I’ve repeated time and again for at least a decade.

In my 2011 novel Vengeance is Mine! I wrote the following dialog between two people discussing what to do about a Middle-Eastern dictator from whom they’d just rescued a kidnapped woman:

“Even in Medieval Europe,” Doc grew professorial, “you had military dictatorships with secular power competing with the Catholic Church, which had enormous sectarian power.

“Modern regimes all have similar checks and balances – with separation of church and state the most important one. It’s why I get antsy when I see scientific organizations getting too cozy with governments, and why everyone gets nervous about weakness in religious organizations.

“No matter what your creed, we have to have organized religion of some kind to balance the secular power of governments.

“Islam was founded as a theocracy – both sectarian and secular power concentrated together in one or a few individuals. At the time, nobody understood the need to separate them. Most thinkers have since grown up to embrace the separation concept, realizing that the dynamic tension is needed to keep the whole culture centered, and able to respond to changing conditions.

“Fundamentalist Islam, however, has steadfastly refused to modernize. That’s why psychopaths like your Emir are able to achieve high office, with its accompanying state protection, in some Islamic countries. The only way to touch him is to topple his government, and the Manchek family isn’t going to do that.

“Unfortunately, radical Islam now seems to be gaining adherents, like Communism a hundred years ago. Eventually, Communist governments became so radicalized that they became inefficient, and collapsed under their own weight.”

“You’re comparing Islam to Communism?” Red questioned.

“Well,” Doc replied, “they may be at opposite ends of the spectrum doctrinaire-wise, but they share the same flaw.

“Communism was (and still is) an atheistic doctrine. Its answer to the question of religion is to deny the validity of religion. That kicks the pins out from under the competition.

“Since people need some sort of ethical, moral guide, they appealed to the Communist dogma. That blows the separation of church and state, again.

“There’s nobody to say, ‘naughty, naughty.’ Abuses go unchecked. Psychopaths find happy homes, and so forth. Witness Stalin.

“The problem isn’t what philosophy you have, it’s the inability to correct abuses because there aren’t separate, competing authorities.

“The strength of the American system is that there’s no absolute authority. The checks and balances are built in. Abuses happen, and can persist for a while, but eventually they get slapped down because there’s somebody around to slap them down.

“The weakness is that it’s difficult to get anything done.

“The strength is that it’s difficult to get anything done.”

In the novel, their final solution was to publicly humiliate the “Emir” in front of the “Saudi Sheik,” who then approved the Emir’s assassination.

Does that sound familiar?

The final edit of that novel was completed in 2011. Fast forward seven years and we’re now watching the aftermath of similar behavior by the Saudi Crown Prince Mohammed Bin Salman ordering the murder of dissident journalist Jamal Khashoggi. Authoritarian behavior is so predictable that real events closely mimic fiction written years before.

In a parallel development, the Republican Party today is suffering a moral implosion. Over the past two years, long-time Republicans, from senior Senators to loyal voters, have been jumping the Republican ship in droves on moral grounds.

I submit that this decline can be traced, at least in part, to the early 1980s when conservative elements of the Party forgot the meaning of “political conservatism,” and started courting the support of certain elements among Evangelical Christians. That led to adding religiously based planks (such as anti-abortion) to the Republican platform.

The elements among Evangelical Christians who responded were, of course, those who had no truck with the secular/sectarian-separation ideal. Unable to convince any but their most subservient followers of their moral rectitude (frankly because they didn’t have any, but that’s a rant for another day), those elements jumped at the chance to have the Federal Government codify their religious dogma into law.

By the way, it was an identical dynamic that led a delegation of Rabbinical Jews to talk Pontius Pilate into ordering the crucifixion of Jesus. In the end, Pilate was so disgusted by the whole proceeding that he suffered a bout of manic hand washing.

That points out the relative sophistication of the Roman culture of 2,000 years ago. Yes, the Roman emperors insisted that every Roman citizen acknowledge them to be a “god.” Unlike the Hebrew god, however, the Roman emperor was not a “jealous god.” He was perfectly willing to let his subjects worship any other god or gods they wanted to. All he required was lip-service fealty to him. And taxes. We can’t forget the taxes!

By the First Century CE, Greco-Roman civilization had been playing around with democratically based government off and on for five hundred years. They’d come to embrace religious tolerance as a good working principle that they honored in action, if not in word.

Pilate went slightly nuts over breaking the taboo against government-enforced religion because he knew it would not play well at home (in Rome). He was right. Lucius Vitellius Veteris, then Governor of Syria, deposed Pilate soon afterward, and sent him home in disgrace.

Pilate was not specifically disgraced over his handling of Jesus’ crucifixion, but more generally over his handling of the internecine conflicts between competing Jewish sects of the time. One surmises that he meddled too much, taking sides when he should have remained neutral in squabbles between two-bit religious sects in a far off desert outpost.

The take-home lesson of this blog posting is that it makes no difference what religious creed you espouse; what’s important from a governance point of view is that every citizen have some moral guide, separate from secular law, by which to judge the actions of their political leaders.

There are, of course, some elements required of that moral guide. For example, society cannot put up with a religion that condones murder. The Thuggee cult of British-colonial India is such an example. Nor can society allow cults that encourage behaviors that threaten general order or rule of law, such as organized crime or corruption.

Especially helpful to governments are religions whose teachings promote obedience to rule of law, such as Catholicism. Democracies especially like various Protestant sects that promote individual responsibility.

Zen Buddhism, which combines Buddhist introspection with the Taoist inclusive world view, is another good foil for a democratic government. Its fundamental goal of minimizing suffering plays well with democratic ideals as well.

There are plenty of organized (as well as disorganized) religious guides out there. It’s important to keep in mind that the Founding Fathers were not trying to create an atheistic state. Separation of church and state implies the existence of both church and state, not one without the other.

Teaching News Consumption and Critical Thinking

Teaching media literacy
Teaching global media literacy to children should be started when they’re young. David Pereiras/Shutterstock

21 November 2018 – Regular readers of this blog know one of my favorite themes is critical thinking about news. Another of my favorite subjects is education. So, they won’t be surprised when I go on a rant about promoting teaching of critical news consumption habits to youngsters.

Apropos of this subject, last week the BBC launched a project entitled “Beyond Fake News,” which aims to “fight back” against fake news with a season of documentaries, special reports and features on the BBC’s international TV, radio and online networks.

In an article by Lucy Mapstone, Press Association Deputy Entertainment Editor for the Independent.ie digital network, entitled “BBC to ‘fight back’ against disinformation with Beyond Fake News project,” Jamie Angus, director of the BBC World Service Group, is quoted as saying: “Poor standards of global media literacy, and the ease with which malicious content can spread unchecked on digital platforms mean there’s never been a greater need for trustworthy news providers to take proactive steps.”

Angus’ quote opens up a Pandora’s box of issues. Among them is the basic question of what constitutes “trustworthy news providers” in the first place. Of course, this is an issue I’ve tackled in previous columns.

Another issue is what would be appropriate “proactive steps.” The BBC’s “Beyond Fake News” project is one example that seems pretty sound. (Sorry if this language seems a little stilted, but I’ve just finished watching a mid-twentieth-century British film, and those folks tended to talk that way. It’ll take me a little while to get over it.)

Another sort of “proactive step” is what I’ve been trying to do in this blog: provide advice about what steps to take to ensure that the news you consume is reliable.

A third is providing rebuttal of specific fake-news stories, which is what pundits on networks like CNN and MSNBC try (with limited success, I might say) to do every day.

The issue I hope to attack in this blog posting is the overarching concern in the first phrase of the Angus quote: “Poor standards of global media literacy, … .”

Global media literacy can only be improved the same way any lack of literacy can be improved, and that is through education.

Improving global media literacy begins with ensuring a high standard of media literacy among teachers. Teachers can only teach what they already know. Thus, a high standard of media literacy must start in college and university academic-education programs.

I’ve spent decades teaching at the college level, so I have plenty of experience, but I’m not actually qualified to teach other teachers how to teach. I’ve only taught technical subjects, and the education required to teach technical subjects centers on the technical subjects themselves. The art of teaching is (or at least was when I was at university) left to the student’s ability to mimic what their teachers did, informal mentoring by fellow teachers, and good-ol’ experience in the classroom. We were basically dumped into the classroom and left to sink or swim. Some swam, while others sank.

That said, I’m not going to try to lay out a program for teaching teachers how to teach media literacy. I’ll confine my remarks to making the case that it needs to be done.

Teaching media literacy to schoolchildren is especially urgent because the media-literacy projects I keep hearing about are aimed at adults “in the wild,” so to speak. That is, they’re aimed at adult citizens who have already completed their educations and are out earning livings, bringing up families, and participating in the political life of society (or ignoring it, as the case may be).

I submit that’s exactly the wrong audience to aim at.

Yes, it’s the audience that is most involved in media consumption. It’s the group of people who most need to be media literate. It is not, however, the group that we need to aim media-literacy education at.

We gotta get ‘em when they’re young!

Like any other academic subject, the best time to teach people good media-consumption habits is before they need to have them, not afterwards. There are multiple reasons for this.

First, children need to develop good habits before they’ve developed bad habits. It saves the dicey stage of having to unlearn old habits before you can learn new ones. Media literacy is no different. Neither is critical thinking.

Most of the so-called “fake news” appeals to folks who’ve never learned to think critically in the first place. They certainly try to think critically, but they’ve never been taught the skills. Of course, those critical-thinking skills are a prerequisite to building good media-consumption habits.

How can you get in the habit of thinking critically about news stories you consume unless you’ve been taught to think critically in the first place? I submit that the two skills are so intertwined that the best strategy is to teach them simultaneously.

And, it is most definitely a habit, like smoking, drinking alcohol, and being polite to pretty girls (or boys). It’s not something you can just tell somebody to do, then expect they’ll do it. They have to do it over and over again until it becomes habitual.

‘Nuff said.

Another reason to promote media literacy among the young is that’s when people are most amenable to instruction. Human children are pre-programmed to try to learn things. That’s what “play” is all about. Acquiring knowledge is not an unpleasant chore for children (unless misguided adults make it so). It’s their job! To ensure that children learn what they need to know to function as adults, Mommy Nature went out of her way to make learning fun, just as she did with everything else humans need to do to survive as a species.

Learning, having sex, taking care of babies are all things humans have to do to survive, so Mommy Nature puts systems in place to make them fun, and so drive humans to do them.

A third reason we need to teach media literacy to the young is that, like everything else, you’re better off learning it before you need to practice it. Nobody in their right mind teaches a novice how to drive a car by running them out in city traffic. High schools all have big, tortuously laid out parking lots to give novice drivers a safe, challenging place to practice the basic skills of starting, stopping and turning before they have to perform those functions while dealing with fast-moving Chevys coming out of nowhere.

Similarly, you want students to practice deciphering written and verbal communications before asking them to parse a Donald-Trump speech!

The “Call to Action” for this editorial piece is thus, “Agitate for developing good media-consumption habits among schoolchildren along with the traditional Three Rs.” It starts with making the teaching of media literacy part of K-12 teacher education. It also includes teaching critical thinking skills and habits at the same time. Finally, it includes holding K-12 teachers responsible for inculcating good media-consumption habits in their students.

Yes, it’s important to try to bring the current crop of media-illiterate adults up to speed, but it’s more important to promote global media literacy among the young.

Computers Are Revolting!

Will Computers Revolt? cover
Charles Simon’s Will Computers Revolt? looks at the future of interactions between artificial intelligence and the human race.

14 November 2018 – I just couldn’t resist the double meaning allowed by the title for this blog posting. It’s all I could think of when reading the title of Charles Simon’s new book, Will Computers Revolt? Preparing for the Future of Artificial Intelligence.

On one hand, yes, computers are revolting. Yesterday my wife and I spent two hours trying to figure out how to activate our Netflix account on my new laptop. We ended up having to change the email address and password associated with the account. And, we aren’t done yet! The nice lady at Netflix sadly informed me that in thirty days, their automated system would insist that we re-log-into the account on both devices due to the change.

That’s revolting!

On the other hand, the uprising has already begun. Computers are revolting in the sense that they’re taking power to run our lives.

We used to buy stuff just by picking it off the shelf, then walking up to the check-out counter and handing over a few pieces of green paper. The only thing that held up the process was counting out the change.

Later, when credit cards first reared their ugly heads, we had to wait a few minutes for the salesperson to fill out a sales form, then run our credit cards through the machine. It was all manual. No computers involved. It took little time once you learned how to do it, and, more importantly, the process was pretty much the same everywhere and never changed, so once you learned it, you’d learned it “forever.”

Not no more! How much time do you, I, and everyone else waste navigating the multiple pages we have to walk through just to pay for anything with a credit or debit card today?

Even worse, every store has different software using different screens to ask different questions. So, we can’t develop a habitual routine for the process. It’s different every time!

Not long ago the banks issuing my debit-card accounts switched to those %^&^ things with the chips. I always forget to put the thing in the slot instead of swiping the card across the magnetic-stripe reader. When that happens we have to start the process all over, wasting even more time.

The computers have taken over, so now we have to do what they tell us to do.

Now we know who’ll be first against the wall when the revolution comes. It’s already here and the first against the wall is us!

Golem Literature in Perspective

But seriously folks, Simon’s book is the latest in a long tradition of works by thinkers fascinated by the idea that someone could create an artifice that would pass for a human. Perhaps the earliest, and certainly the most iconic, are the golem stories from Jewish folklore. I suspect (on no authority whatsoever, but it does seem likely) that the idea of a golem appeared about the time when human sculptors started making statues in realistic human form. That was very early, indeed!

A golem is, for those who aren’t familiar with the term or willing to follow the link provided above to learn about it, an artificial creature fashioned by a human that is effectively what we call a “robot.” The folkloric golems were made of clay or (sometimes) wood because those were the best materials the stories’ authors could imagine their artisans working with at the time. A well-known golem story is Carlo Collodi’s The Adventures of Pinocchio.

By the sixth century BCE, Greek sculptors had begun to produce lifelike statues. The myth of Pygmalion and Galatea appeared in a pseudo-historical work by Philostephanus Cyrenaeus in the third century BCE. Pygmalion was a sculptor who made a statue representing his ideal woman, then fell in love with it. Aphrodite granted his prayer for a wife exactly like the statue by bringing the statue to life. The wife’s name was Galatea.

The Talmud points out that Adam started out as a golem. Like Galatea, Adam was brought to life when the Hebrew God Yahweh gave him a soul.

These golem examples emphasize the idea that humans, no matter how holy or wise, cannot give their creations a soul. The best they can do is to create automatons.

Simon effectively begs to differ. He spends the first quarter of his text laying out the case that it is possible, and indeed inevitable, that automated control systems displaying artificial general intelligence (AGI) capable of thinking at or (eventually) well above human capacity will appear. He spends the next half of his text showing how such AGI systems could be created and making the case that they will eventually exhibit functionality indistinguishable from consciousness. He devotes the rest of his text to speculating about how we, as human beings, will likely interact with such hyperintelligent machines.

Spoiler Alert

Simon’s answer to the question posed by his title is a sort-of “yes.” He feels AGIs will inevitably displace humans as the most intelligent beings on our planet, but won’t exactly “revolt” at any point.

“The conclusion,” he says, “is that the paths of AGIs and humanity will diverge to such an extent that there will be no close relationship between humans and our silicon counterparts.”

There won’t be any violent conflict because robotic needs are sufficiently dissimilar to ours that there won’t be any competition for scarce resources, which is what leads to conflict between groups (including between species).

Robots, he posits, are unlikely to care enough about us to revolt. There will be no Terminator robots seeking to exterminate us because they won’t see us as enough of a threat to bother with. They’re more likely to view us much the way we view squirrels and birds: pleasant fixtures of the natural world.

They won’t, of course, tolerate any individual humans who make trouble for them the same way we wouldn’t tolerate a rabid coyote. But, otherwise, so what?

So, the !!!! What?

The main value of Simon’s book is not in its ultimate conclusion. That’s basically informed opinion. Rather, its value lies in the voluminous detail he provides in getting to that conclusion.

He spends the first quarter of his text detailing exactly what he means by AGI. What functions are needed to make it manifest? How will we know when it rears its head (ugly or not, as a matter of taste)? How will a conscious, self-aware AGI system act?

A critical point Simon makes in this section is the assertion that AGI will arise first in autonomous mobile robots. I thoroughly agree for pretty much the same reasons he puts forth.

I first started seriously speculating about machine intelligence back in the middle of the twentieth century. I never got too far – certainly not as far as Simon gets in this volume – but pretty much the first thing I actually did realize was that it was impossible to develop any kind of machine with any recognizable intelligence unless its main feature was having a mobile body.

Developing any AGI feature requires the machine to have a mobile body. It has to take responsibility not only for deciding how to move itself about in space, but for figuring out why. Why, for example, would it rather be over there than just stay here? Note that biological intelligence arose in animals, not in plants!

Simultaneously with reading Simon’s book, I was re-reading Robert A. Heinlein’s 1966 novel The Moon is a Harsh Mistress, which is one of innumerable fiction works whose plot hangs on actions of a superintelligent sentient computer. I found it interesting to compare Heinlein’s early fictional account with Simon’s much more informed discussion.

Heinlein sidesteps the mobile-body requirement by making his AGI arise in a computer tasked with operating the entire infrastructure of the first permanent human colony on the Moon (more accurately in the Moon, since Heinlein’s troglodytes burrowed through caves and tunnels, coming up to the surface only reluctantly when circumstances forced them to). He also avoids trying to imagine the AGI’s inner workings, glossing over them with allusions to the 1950s computer technology he was most familiar with.

In his rather longish second section, Simon leads his reader through a thought experiment speculating about what components an AGI system would need to have for its intelligence to develop. What sorts of circuitry might be needed, and how might it be realized? This section might be fascinating for those wanting to develop hardware and software to support AGI. For those of us watching from our armchairs on the outside, though, not so much.

Altogether, Charles Simon’s Will Computers Revolt? is an important book that’s fairly easy to read (or, at least as easy as any book this technical can be) and accessible to a wide range of people interested in the future of robotics and artificial intelligence. It is not the last word on this fast-developing field by any means. It is, however, a starting point for the necessary debate over how we should view the subject. Do we have anything to fear? Do we need to think about any regulations? Is there anything to regulate and would any such regulations be effective?

Radicalism and the Death of Discourse

Gaussian political spectrum
Most Americans prefer to be in the middle of the political spectrum, but most of the noise comes from the far right and far left.

7 November 2018 – During the week of 22 October 2018 two events dominated the news: Cesar Sayoc mailed fourteen pipe bombs to prominent individuals critical of Donald Trump, and Robert Bowers shot up a synagogue because he didn’t like Jews. Both of these individuals identified themselves with far-right ideology, so the media has been full of rhetoric condemning far-right activists.

To be legally correct, I have to note that, while I’ve written the above paragraph as if those individuals’ culpability for those crimes is established fact, they (as of this writing) haven’t been convicted. It’s entirely possible that some deus ex machina will appear out of the blue and exonerate one or both of them.

Clearly, things have gotten out of hand with Red Team activists when they start “throwing” pipe bombs and bullets. But, I’m here to say “naughty, naughty” to both sides.

Both sides are culpable.

I don’t want you to interpret that last sentence as agreement with Donald Trump’s idiotic statement after last year’s Charlottesville incident that there were “very fine people on both sides.”

There aren’t “very fine people” on both sides. Extremists are “bad” people no matter what side they’re on.

For example, not long ago social media sites (specifically Linkedin and, especially, Facebook) were lit up with vitriol about the Justice Kavanaugh hearings by pundits from both the Red Team and the Blue Team. It got so hot that I was embarrassed!

Some have pointed out that, statistically, most of the actual violence has been perpetrated by the Red Team.

Does that mean the Red Team is more culpable than the Blue Team?

No. It means they’re using different weapons.

The Blue Team, which I believe consists mainly of extremists from the liberal/progressive wing of the Democratic Party, has traditionally chosen written and spoken words as their main weapon. Recall some of the political correctness verbiage used to attack free expression in the late 20th Century, and demonstrations against conservative speakers on college campuses in our own.

The Red Team, which today consists of the Trumpian remnants of the Republican Party, has traditionally chosen to throw hard things, like rocks, bullets and pipe bombs.

Both sides also attempt to disarm the other side. The Blue Team wisely attempts to disarm the Red Team by taking away their guns. The Red Team, which eschews anything that smacks of wisdom, tries to disarm the Blue Team by (figuratively, so far) burning their books.

Recognize that calling the Free Press “the enemy of the people” is morally equivalent to throwing books on a bonfire. They’re both attempts to promote ignorance.

What’s actually happening is that the fringes of society are making all of the noise, and the mass of moderate-thinking citizens can’t get a word in edgewise.

George Shultz pointed out: “He who walks in the middle of the road gets hit from both sides.”

I think it was Douglas Adams who pointed out that fanatics get to run things because they care enough to put in the effort. Moderates don’t because they don’t.

Both of these pundits point out the sad fact that Nature favors extremes. The most successful companies are those with the highest growth rates. Most drivers exceed the speed limit. The squeaky wheel gets the most grease. And, those who express the most extreme views get the most media attention.

Our Constitution specifies in no uncertain terms that the nation is founded on (small “d”) democratic principles. Democratic principles insist that policy matters be debated and resolved by consensus of the voting population. That can only be done when people meet together in the middle.

Extremists on both the Red Team and Blue Team don’t want that. They treat politics as a sporting event.

In a baseball game, for example, nobody roots for a tie. They root for a win by one team or the other.

Government is not a sporting event.

When one team or the other wins, all Americans lose.

The enemy we are facing now, which is the same enemy democracies face around the world, is not the right or left. It is extremism in general. Always has been. Always will be.

Authoritarians always go for one extreme or the other. Hitler went for the right. Stalin went for the left.

The reason authoritarians pick an extreme is that’s where there are people who are passionate enough about their ideas to shoot anyone who doesn’t agree with them. That, authoritarians realize, is the only way they can become “Dictator for Life.” Since that is their goal, they have to pick an extreme.

We love democracy because it’s the best way for “We the People” to ensure nobody gets to be “Dictator for Life.” When everyone meets in the middle (which is the only place everyone can meet), authoritarians get nowhere.

Ergo, authoritarians love extremes and everyone else needs the middle.

Vilifying “nationalism” as a Red Team vice misses the point. In the U.S. (or any similar democracy), nationalism requires more-or-less moderate political views. There’s lots of room in the middle for healthy (and ultimately entertaining) debate, but very little room at the extremes.

Try going for the middle.

To quote Victor “Animal” Palotti in Roland Emmerich’s 1998 film Godzilla: “C’mon. It’ll be fun! It’ll be fun! It’ll be fun!”

Six Tips to Protect Your Vote from Election Meddlers

Theresa Payton headshot
Theresa Payton, cybersecurity expert and CEO of Fortalice Solutions. photo courtesy Fortalice Solutions

6 November 2018 – Below is the text of a press release I received yesterday (Monday, 11/5) evening. It’s of sufficient import and urgent timing that I decided to post it to this blog verbatim.

There’s been a lot of talk about cybersecurity and whether or not the Trump administration is prepared for tomorrow’s midterm elections, but now that we’re down to the wire, former White House CIO and Fortalice Solutions CEO Theresa Payton says it’s time for voters to think about what they can do to make sure their voices are heard.

Theresa’s six cyber tips for voters ahead of midterms:

  • Don’t zone out while you’re voting. Pay close attention to how you cast your ballot and who you cast your ballot for.

  • Take your time during the review process, and double-check your vote before you finalize it.

  • It may sound cliché, but if you see something, say something. If something seems strange, report it to your State Board of Elections immediately.

  • If you see suspicious social media personas pushing information that’s designed to influence (and maybe even misinform) voters, here’s where you can report it:

  • Check your voter registration status before you go to the polls. Voters in 37 states and the District of Columbia can register to vote online. Visit vote.org to find out how to check your registration status in your state.

  • Unless you are a resident of West Virginia or you’re serving overseas in the U.S. military, you cannot vote electronically on your phone. Protect yourself from text-message and email scams that indicate that you can. Knowledge is power.

Finally, trust the system. Yes, it’s flawed. Yes, it’s imperfect. But it’s the bedrock of our democracy. If you stay home or lose trust in the legitimacy of the process, our cyber enemies win.

Theresa is one of the nation’s leading experts in cybersecurity and IT strategy. She is the CEO of Fortalice Solutions, an industry-leading security consulting company. Under President George W. Bush, she served as the first female chief information officer at the White House, overseeing IT operations for POTUS and his staff. She was named #4 on IFSEC Global’s 2017 list of the Top 50 influencers in security & fire. See her profiled in the Washington Post for her role on the 2017 CBS reality show “Hunted” here.

Babies and Bath Water

A baby in bath water
Don’t throw the baby out with the bathwater. Switlana Symonenko/Shutterstock.com

31 October 2018 – An old catchphrase, derived from a centuries-old German proverb, is “Don’t throw the baby out with the bathwater.” It expresses an important principle in systems engineering.

Systems engineering focuses on how to design, build, and manage complex systems. A system can consist of almost anything made up of multiple parts or elements. For example, an automobile internal combustion engine is a system consisting of pistons, valves, a crankshaft, etc. Complex systems, such as that internal combustion engine, are typically broken up into sub-systems, such as the ignition system, the fuel system, and so forth.

Obviously, the systems concept can be applied to almost anything, from microorganisms to the world economy. As another example, medical professionals divide the human body into eleven organ systems, each of which is a sub-system within the body, itself a complex system.

Most systems-engineering principles transfer seamlessly from one kind of system to another.

Perhaps the best-known example of a systems-engineering principle was popularized by Robin Williams in his Mork & Mindy TV series. The Used-Car Rule, as Williams’ Mork character put it, quite simply states:

“If it works, don’t fix it!”

If you’re getting the idea that systems engineering principles are typically couched in phrases that sound pretty colloquial, you’re right. People have been dealing with systems for as long as there have been people, so most of what they discovered about how to deal with systems long ago became “common sense.”

Systems engineering coalesced into an interdisciplinary engineering field around the middle of the twentieth century. Simon Ramo is sometimes credited as the founder of modern systems engineering, although many engineers and engineering managers contributed to its development and formalization.

The Baby/Bathwater rule means (if there’s anybody out there still unsure of the concept) that when attempting to modify something big (such as, say, the NAFTA treaty), make sure you retain those elements you wish to keep while in the process of modifying those elements you want to change.

The idea is that most systems that are already in place more or less already work, indicating that more of their elements are right than are wrong. Thus, it’ll usually be easier and less complicated to fix what’s wrong than to violate another systems principle:

“Don’t reinvent the wheel.”

Sometimes, on the other hand, something is such an unholy mess that trying to pick out those elements that need to change from the parts you don’t wish to change is so difficult that it’s not worth the effort. At that point, you’re better off scrapping the whole thing (throwing the baby out with the bathwater) and starting over from scratch.

Several months ago, I noticed that a seam in the convertible top on my sports car had begun to split. I quickly figured out that the big brush roller at my neighborhood automated car wash was over-stressing the more-than-a-decade-old fabric. Naturally, I stopped using that car wash and started looking around for a hand-detailing shop that would be gentler.

But, that still left me with a convertible top that had started to split. So, I started looking at my options for fixing the problem.

Considering the car’s advanced age, and that a number of little things were starting to fail, I first considered trading the whole car in for a newer model. That, of course, would violate the rule about not throwing the baby out with the bathwater. I’d be discarding the whole car because of a small flaw that might be repaired.

Of course, I’d also be getting rid of a whole raft of potentially impending problems. Then, again, I might be taking on a pile of problems that I knew nothing about.

It turned out, however, that the best car-replacement option was unacceptable, so I started looking into replacing just the convertible top. That, too, turned out to be infeasible. Finally, I found an automotive upholstery specialist who described a patching scheme that would solve the immediate problem and likely last through the remaining life of the car. So, that’s what I did.

I’ve put you through listening to this whole story to illustrate the thought process behind applying the “don’t throw the baby out with the bathwater” rule.

Unfortunately, our current President, Donald Trump, seems to have never learned anything about systems engineering, or about babies and bathwater. He’s apparently enthralled with the idea that he can bully U.S. trading partners into giving him concessions when he negotiates with them one-on-one. That’s the gist of his love of bilateral trade agreements.

Apparently, he feels that if he gets into a multilateral trade negotiation, his go-to strategy of browbeating partners into giving in to him might not work. Multiple negotiating partners might get together and provide a united front against him.

In fact, that’s a reasonable assumption. He’s a sufficiently weak deal-maker on his own that he’d have trouble standing up to a combination of, say, Mexico’s Peña Nieto and Canada’s Trudeau banded together against him.

With that background, it’s not hard to understand why POTUS is looking at all U.S. treaties, which are mostly multilateral, searching for any niggling thing wrong with them to use as an excuse to scrap the whole arrangement and start over. Obvious examples are the NAFTA treaty and the Iran Nuclear Accord.

Both of these treaties have been in place for some time, and have generally achieved the goals they were put in place to achieve. However, they’re not perfect, so POTUS is in the position of trying to “fix” them.

Since both these treaties are multilateral deals, to make even minor adjustments POTUS would have to enter multilateral negotiations with partners (such as Germany’s quantum-physicist-turned-politician, Angela Merkel) who would be unlikely to kowtow to his bullying style. Robbed of his signature strategy, he’d rather scrap the whole thing and start all over, taking on partners one at a time in bilateral negotiations. So, that’s what he’s trying to do.

A more effective strategy would be to forget everything his ghostwriter put into his self-congratulatory “How-To” book The Art of the Deal, enumerate a list of what’s actually wrong with these documents, and tap into the cadre of veteran treaty negotiators that used to be available in the U.S. State Department to assemble a team of career diplomats capable of fixing what’s wrong without throwing the babies out with the bathwater.

But, that would violate his narcissistic world view. He’d have to admit that it wasn’t all about him, and acknowledge one of the first principles of project management (another discipline that he should have vast knowledge of, but apparently doesn’t):

“Begin by making sure the needs of all stakeholders are built into any project plan.”