Why Diversity Rules

Diverse friends
A diverse group of people with different ages and nationalities having fun together. Rawpixel/Shutterstock

23 January 2019 – Last week two concepts reared their ugly heads that I’ve been banging on about for years. They’re closely intertwined, so it’s worthwhile to spend a little blog space discussing why they fit so tightly together.

Diversity is Good

The first idea is that diversity is good. It’s good in almost every human pursuit. I’m particularly sensitive to this, being someone who grew up with the idea that rugged individualism was the highest ideal.

Diversity, of course, is incompatible with individualism. Individualism is the cult of the one. “One” cannot logically be diverse. Diversity is a property of groups, and groups by definition consist of more than one.

Okay, set theory admits of groups with one or even no members, but those groups have a diversity “score” (Gini–Simpson index) of zero. To have any diversity at all, your group has to have at absolute minimum two members. The more the merrier (or diversitier).
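
To make that concrete, here’s a minimal sketch in Python of the Gini–Simpson calculation (the group members are invented purely for illustration). The index is one minus the sum of the squared proportions of each category, i.e., the chance that two randomly drawn members differ:

    from collections import Counter

    def gini_simpson(members):
        # Gini-Simpson diversity: 1 - sum(p_i^2), the probability that two
        # members drawn at random (with replacement) differ on the trait.
        counts = Counter(members)
        total = sum(counts.values())
        if total == 0:
            return 0.0  # the empty group scores zero, as noted above
        return 1.0 - sum((n / total) ** 2 for n in counts.values())

    print(gini_simpson(["physicist"]))                            # 0.0 - rugged individualism
    print(gini_simpson(["physicist", "engineer"]))                # 0.5 - the minimum-size diverse group
    print(gini_simpson(["physicist", "engineer", "technician"]))  # ~0.67 - the more the merrier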

The idea that diversity is good came up in a couple of contexts over the past week.

First, I’m reading a book entitled Farsighted: How We Make the Decisions That Matter the Most by Steven Johnson, which I plan eventually to review in this blog. Part of the advice Johnson offers is that groups make better decisions when their membership is diverse. How they are diverse is less important than the extent to which they are diverse. In other words, this is a case where quantity is more important than quality.

Second, I divided my physics-lab students into groups to perform their first experiment. We break students into groups to prepare them for working in teams after graduation. Unlike fifty years ago, when I was a student, scientific research and technology development is now almost invariably done in teams.

When I was a student, research was (supposedly) done by individuals working largely in isolation. I believe it was Willard Gibbs (I have no reliable reference for this quote) who said: “An experimental physicist must be a professional scientist and an amateur everything else.”

By this he meant that building a successful physics experiment requires the experimenter to apply so many diverse skills that it is impossible to have professional mastery of all of them. He (or she) must have an amateur’s ability to pick up novel skills in order to reach the next goal in their research. They must be ready to work outside their normal comfort zone.

That asked a lot from an experimental researcher! Individuals who could do that were few and far between.

Today, the fast pace of technological development has reduced that pool of qualified individuals essentially to zero. It certainly is too small to maintain the pace society expects of the engineering and scientific communities.

Tolkien’s “unimaginable hand and mind of Fëanor” puttering around alone in his personal workshop crafting magical things is inconceivable today. Marlowe’s Dr. Faustus character, who had mastered all earthly knowledge, is now laughable. No one person is capable of making a major contribution to today’s technology on their own.

The solution is to perform the work of technological research and development in teams with diverse skill sets.

In the sciences, theoreticians with strong mathematical backgrounds partner with engineers capable of designing machines to test the theories, and with technicians who have the skills to fabricate those machines and make them work.

Chaotic Universe

The second idea I want to deal with in this essay is that we live in a chaotic Universe.

Chaos is a property of complex systems. These are systems consisting of many interacting moving parts that show predictable behavior on short time scales, but eventually foil the most diligent attempts at long-term prognostication.
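
You can watch that short-term/long-term split happen in a toy chaotic system. The sketch below (Python) uses the logistic map, a standard textbook example rather than a model of any real system, and starts two trajectories differing by one part in a billion:

    # Sensitive dependence on initial conditions in the logistic map,
    # x -> r * x * (1 - x), which behaves chaotically at r = 4.
    r = 4.0
    x1, x2 = 0.2, 0.2 + 1e-9  # two starting points, one part in a billion apart

    for step in range(1, 51):
        x1 = r * x1 * (1 - x1)
        x2 = r * x2 * (1 - x2)
        if step % 10 == 0:
            print(f"step {step}: difference = {abs(x1 - x2):.2e}")

    # For the first couple dozen steps the trajectories track each other
    # (short-term predictability); by around step 30 they differ by order
    # unity, and long-term prognostication is hopeless.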

A pendulum, by contrast, is a simple system consisting of, basically, three parts: a massive weight, or “pendulum bob,” that hangs by a rod or string (the arm) from a fixed support. Simple systems usually do not exhibit chaotic behavior.

The solar system, consisting of a huge, massive star (the Sun), eight major planets and a host of minor planets, is decidedly not a simple system. Its behavior is borderline chaotic. I say “borderline” because the solar system seems well behaved on short time scales (e.g., millennia), but when viewed on time scales of millions of years does all sorts of unpredictable things.

For example, approximately four and a half billion years ago (a few tens of millions of years after the system’s initial formation) a Mars-sized planet collided with Earth and then ricocheted out of the solar system, spalling off a mass of material that coalesced to form the Moon. That’s the sort of unpredictable event that happens in a chaotic system if you wait long enough.

The U.S. economy, consisting of millions of interacting individuals and companies, is wildly chaotic, which is why no investment strategy has ever been found to work reliably over a long time.

Putting It Together

The way these two ideas (diversity is good, and we live in a chaotic Universe) work together is that collaborating in diverse groups is the only way to successfully navigate life in a chaotic Universe.

An individual human being is so powerless that attempting anything but the smallest task is beyond his or her capacity. The only way to do anything of significance is to collaborate with others in a diverse team.

In the late 1980s my wife and I decided to build a house. To begin with, we had to decide where to build the house. That required weeks of collaboration (by our little team of two) to combine our experiences of different communities in the area where we were living, develop scenarios of what life might be like living in each community, and finally agree on which we might like the best. Then we had to find an architect to work with our growing team to design the building. Then we had to negotiate with banks for construction loans, bridge loans, and ultimate mortgage financing. Our architect recommended adding a prime contractor who had connections with carpenters, plumbers, electricians and so forth to actually complete the work. The better part of a year later, we had our dream house.

There’s no way I could have managed even that little project – building one house – entirely on my own!

In 2015, I ran across the opportunity to produce a short film for a film festival. I knew how to write a script, run a video camera, sew a costume, act a part, do the editing, and so forth. In short, I had all the skills needed to make that 30-minute film.

Did that mean I could make it all by my onesies? Nope! By the time the thing was completed, the list of cast and crew counted over a dozen people, each with their own job in the production.

By now, I think I’ve made my point. The take-home lesson of this essay is that if you want to accomplish anything in this chaotic Universe, start by assembling a diverse team, and the more diverse, the better!

The Scientific Method

Scientific Method Diagram
The scientific method assumes uncertainty.

9 January 2019 – This week I start a new part-time position on the faculty at Florida Gulf Coast University teaching two sections of General Physics laboratory. In preparation, I dusted off a posting to this blog from last Summer that details my take on the scientific method, which I re-edited to present to my students. I thought readers of this blog might profit by my posting the edited version. The original posting contrasted the scientific method of getting at the truth with the method used in the legal profession. Since I’ve been banging on about astrophysics and climate science, specifically, I thought it would be helpful to zero in again on how scientists figure out what’s really going on in the world at large. How do we know what we think we know?


While high-school curricula like to teach the scientific method as a simple step-by-step program, the reality is very much more complicated. The version they teach you in high school is a procedure consisting of five to seven steps, which pretty much look like this:

  1. Observation
  2. Hypothesis
  3. Prediction
  4. Experimentation
  5. Analysis
  6. Repeat

I’ll start by explaining how this program is supposed to work, then look at why it doesn’t actually work that way, and why the concept is so fuzzy that it’s not even clear how many steps should be included.

The Stepwise Program

It all starts with observation of things that go on in the World.

Newton’s law of universal gravitation started with the observation that when left on their own, most things fall down. That’s Newton’s falling-apple observation. Generally, the observation is so common that it takes a genius to ask the question: “why?”

Once you ask the question “why,” the next thing that happens is that your so-called genius comes up with some cockamamie explanation, called an “hypothesis.” In fact, there are usually several possible explanations that vary from the erudite to the thoroughly bizarre.

Through bitter experience, scientists have come to realize that no hypothesis is too whacko to be considered. It’s often the most outlandish hypothesis that proves to be right!

For example, the ancients tended to think in terms of objects somehow “wanting” to go downward as the least weird of explanations for gravity. The idea came from animism, which was the not-too-bizarre (to the ancients) idea that natural objects each have their own spirits, which animate their behavior: rocks are hard because their spirits resist being broken; they fall down when released because their spirits somehow like down better than up.

What we now consider the most-correctest explanation (that we live in a four-dimensional space-time continuum that is warped by concentrations of matter-energy so that objects follow paths that tend to converge with each other) wouldn’t have made any sense at all to the ancients. It, in fact, doesn’t make a whole lot of sense to anybody who hasn’t spent years immersing themselves in the subject of Einstein’s General Theory of Relativity.

Scientists then take all the hypotheses available, and use them to make predictions as to what happens next if you set up certain relevant situations, called “experiments.” An hypothesis works if its predictions match up with what Mommy Nature produces for results from the experiments.

Scientists then do tons of experiments testing different predictions of the hypotheses, then compare (the analysis step) the results, and eventually develop a warm, fuzzy feeling that one hypothesis does a better job of predicting what Mommy Nature does than do the others.

It’s important to remember that no scientist worth his or her salt believes that the currently accepted hypothesis is actually in any absolute sense “correct.” It’s just the best explanation among the ones we have on hand now.

That’s why the last step is to repeat the entire process ad nauseam.

While this long, drawn out process does manage to cover the main features of the scientific method, it fails in one important respect: it doesn’t boil the method down to its essentials.

Not boiling the method down to its essentials forces one to deal with all kinds of exceptions created by the extraneous, non-essential bits. There end up being more exceptions than rules. For example, the science-pedagogy website Science Buddies ends up throwing its hands in the air by saying: “In fact, there are probably as many versions of the scientific method as there are scientists!”

A More Holistic Approach

The much simpler explanation I’ve used for years to teach college students about the scientific method follows the diagram above. The pattern is quite simple, with only four components. It starts by setting up a set of initial conditions, then follows two complementary paths through to the results.

There are two ways to get from the initial conditions to the results. The first is to just set the whole thing up, and let Mommy Nature do her thing. The second is to think through your hypothesis (the model) to predict what it says Mommy Nature will come up with. If they match, you count your hypothesis as a success. If not, it’s wrong.

If you do that a bazillion times in a bazillion different ways, a really successful hypothesis (like General Relativity) will turn out right pretty much all of the time.
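
Reduced to code, the diagram looks something like the following minimal sketch (Python). The “experiment” here is simulated and every number is invented; the model path uses the standard small-angle formula for a pendulum’s period:

    import math, random

    def model(length_m):
        # Path 2: think through the hypothesis. For a small-angle pendulum,
        # the period is T = 2*pi*sqrt(L/g).
        return 2 * math.pi * math.sqrt(length_m / 9.81)

    def run_experiment(length_m):
        # Path 1: set it up and let Mommy Nature do her thing.
        # (Simulated here, with made-up measurement noise.)
        return 2 * math.pi * math.sqrt(length_m / 9.81) + random.gauss(0, 0.01)

    length = 1.0                       # the initial conditions
    predicted = model(length)          # the model's path
    observed = run_experiment(length)  # Mommy Nature's path
    tolerance = 0.05                   # seconds, set by your measurement uncertainty

    if abs(predicted - observed) <= tolerance:
        print("Hypothesis survives this test (not 'proven correct').")
    else:
        print("Hypothesis is wrong - or you didn't set up what you thought.")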

Generally, if you’ve got a really good hypothesis but your experiment doesn’t work out right, you’ve screwed up somewhere. That means what you actually set up as the initial conditions wasn’t what you thought you were setting up. So, Mommy Nature (who’s always right) doesn’t give you the result you thought you should get.

For example, I was once (at a University other than this one) asked to mentor another faculty member who was having trouble building an experiment to demonstrate what he thought was an exception to Newton’s Second Law of Motion. It was based on a classic experiment called “Atwood’s Machine.” He couldn’t get the machine to give the results he was convinced he should get.

I immediately recognized that he’d made a common mistake novice physicists often make. I tried to explain it to him, but he refused to believe me. Then, I left the room.

I walked away because, despite his conviction, Mommy Nature wasn’t going to do what he expected her to. He persisted in believing that there was something wrong with his experimental apparatus. It was his hypothesis, instead.
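
For reference (the story above doesn’t say what his mistake was), here’s the textbook prediction for an ideal Atwood’s machine, plus the standard correction for a real pulley’s rotational inertia, one common reason a real apparatus disagrees with naive expectations. All the numbers are invented for illustration:

    # Ideal Atwood's machine: masses m1 < m2 hung from a string over a pulley.
    # Textbook result: a = (m2 - m1) * g / (m1 + m2).
    # A real pulley with moment of inertia I and radius r adds an effective
    # mass I/r^2 to the denominator (assuming the string doesn't slip).
    g = 9.81                 # m/s^2
    m1, m2 = 0.95, 1.05      # kg (invented values)
    I_pulley = 2.5e-4        # kg*m^2, pulley moment of inertia (assumed)
    r_pulley = 0.05          # m, pulley radius (assumed)

    a_ideal = (m2 - m1) * g / (m1 + m2)
    a_real = (m2 - m1) * g / (m1 + m2 + I_pulley / r_pulley**2)

    print(f"ideal pulley: a = {a_ideal:.4f} m/s^2")  # 0.4905
    print(f"real pulley:  a = {a_real:.4f} m/s^2")   # 0.4671, measurably smaller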

Anyway, the way this method works is that you look for patterns in what Mommy Nature does. Your hypothesis is just a description of some part of Mommy Nature’s pattern. Scientific geniuses are folks who are really, really good at recognizing the patterns Mommy Nature uses.

If your scientific hypothesis is wrong (meaning it gives wrong results), so what?

Most scientific hypotheses are wrong! They’re supposed to be wrong most of the time.

Finding that some hypothesis is wrong is no big deal. It just means it was a dumb idea, and you don’t have to bother thinking about that dumb idea anymore.

Alien abductions get relegated to entertainment for the entertainment-starved. Real scientists can go on to think about something else, like the kinds of conditions leading to development of living organisms and why we don’t see alien visitors walking down Fifth Avenue.

(FYI: the current leading hypothesis is that the distances from there to here are so vast that anybody smart enough to figure out how to make the trip has better things to do.)

For scientists “Gee, it looks like … ” is usually as good as it gets!

Reimagining Our Tomorrows

Cover Image
Utopia with a twist.

19 December 2018 – I generally don’t buy into utopias.

Utopias are intended as descriptions of a paradise. They’re supposed to be a paradise for everybody, and they’re supposed to be filled with happy people committed to living in their city (utopias are invariably built around descriptions of cities), which they imagine to be the best of all possible cities located in the best of all possible worlds.

Unfortunately, however, utopia stories are written by individual authors, and each would be a paradise only for that particular author. If the author is persuasive enough, the story will win over a following of disciples, who will praise it to high Heaven. Once in a great while (actually surprisingly often) those disciples become so enamored of the description that they’ll drop everything and actually attempt to build a city to match it.

When that happens, it invariably ends in tears.

That’s because, while utopian stories invariably describe city plans that would be paradise to their authors, great swaths of the population would find living in them to be horrific.

Even Thomas More, the sixteenth-century philosopher, politician, and all-around smart guy who’s credited with giving us the word “utopia” in the first place, was wise enough to acknowledge that the society he described in his most famous work, Utopia, wouldn’t be such a fun place for the slaves serving his upper-middle-class citizens, who were the bulwark of his utopian society.

Even Plato’s Republic, which gave us the conundrum summarized in Juvenal’s Satires as “Who guards the guards?,” was never meant as a workable society. Plato’s work, in general, was meant to teach us how to think, not what to think.

What to think is a highly malleable commodity that varies from person to person, society to society, and, most importantly, from time to time. Plato’s Republic reflected what might have passed as good ideas for city planning in 380 BC Athens, but they wouldn’t have passed muster in More’s sixteenth-century England. Still less would they be appropriate in twenty-first-century democracies.

So, I approached Joe Tankersley’s Reimagining Our Tomorrows with some trepidation. I wouldn’t have put in the effort to read the thing if it weren’t for the subtitle: “Making Sure Your Future Doesn’t SUCK.”

That subtitle indicated that Tankersley just might have a sense of humor, and enough gumption to put that sense of humor into his contribution to Futurism.

Futurism tends to be the work of self-important intellectuals out to make a buck by feeding their audiences fantasies that sound profound, but bear no relation to any actual or even possible future. Its greatest value is in stimulating profits for publishers of magazines and books about Futurism. Otherwise, such works aren’t worth the trees killed to make the paper they’re printed on.

Trees, after all and as a group, make a huge contribution to all facets of human life. Like, for instance, breathing. Breathing is of incalculable value to humans. Trees make an immense contribution to breathing by absorbing carbon dioxide and pumping out vast quantities of oxygen, which humans like to breathe.

We like trees!

Futurists, not so much.

Tankersley’s little (168 pages, not counting author bio, front matter and introduction) opus is not like typical Futurist literature, however. Well, it would be like that if it weren’t more like the Republic in that its avowed purpose is to stimulate its readers to think about the future themselves. In the introduction that I purposely left out of the page count he says:

“I want to help you reimagine our tomorrows; to show you that we are living in a time when the possibility of creating a better future has never been greater.”

Tankersley structured the body of his book in ten chapters, each telling a separate story about an imagined future centered around a possible solution to an issue relevant today. Following each chapter is an “apology” by a fictional future character named Archibald T. Patterson III.

Archie is what a hundred years ago would have been called a “Captain of Industry.” Today, we’d refer to him as an uber-rich and successful entrepreneur. Think Elon Musk or Bill Gates.

Actually, I think he’s more like Warren Buffett in that he’s reasonably introspective and honest with himself. Archie sees where society has come from, how it got to the future it got to, and what he and his cohorts did wrong. While he’s super-rich and privileged, the futures the stories describe were made by other people who weren’t uber-rich and successful. His efforts largely came to naught.

The point Tankersley seems to be making is that progress comes from the efforts of ordinary individuals who, in true British fashion, “muddle through.” They see a challenge and apply their talents and resources to making a solution. The solution is invariably nothing anyone would foresee, and is nothing like what anyone else would come up with to meet the same challenge. Each is a unique response to a unique challenge by unique individuals.

It might seem naive, this idea that human development comes from ordinary individuals coming up with ordinary solutions to ordinary problems all banded together into something called “progress,” but it’s not.

For example, Mark Zuckerberg developed Facebook as a response to the challenge of applying then-new computer-network technology to the age-old quest by late adolescents to form their own little communities by communicating among themselves. It’s only fortuitous that he happened on the right combination of time (the dawn of a radical new technology), place (in the midst of a huge cadre of the right people well versed in using that radical new technology) and marketing to get the word out to those right people wanting to use that radical new technology for that purpose. Take away any of those elements and there’d be no Facebook!

What if Zuckerberg hadn’t invented Facebook? In that event, somebody else (Reid Hoffman) would have come up with a similar solution (LinkedIn) to the same challenge facing a similar group (technology professionals).

Oh, my! They did!

History abounds with similar examples. There’s hardly any advancement in human culture that doesn’t fit this model.

The good news is that Tankersley’s vision for how we can re-imagine our tomorrows is right on the money.

The bad news is … there isn’t any bad news!

Robots Revisited

Engineer with SCARA robots
Engineer using monitoring system software to check and control SCARA welding robots in a digital manufacturing operation. PopTika/Shutterstock

12 December 2018 – I was wondering what to talk about in this week’s blog posting, when an article bearing an interesting-sounding headline crossed my desk. The article, written by Simone Stolzoff of Quartz Media, was published last Monday (12/3/2018) by the World Economic Forum (WEF) under the title “Here are the countries most likely to replace you with a robot.”

I generally look askance at organizations with grandiose names that include the word “World,” figuring that they likely are long on megalomania and short on substance. Further, this one lists the inimitable (thank God there’s only one!) Al Gore on its Board of Trustees.

On the other hand, David Rubenstein is also on the WEF board. Rubenstein usually seems to have his head screwed on straight, so that’s a positive sign for the organization. Therefore, I figured the article might be worth reading and should be judged on its own merits.

The main content is summarized in two bar graphs. The first lists the number of robots per thousand manufacturing workers in various countries. The highest scores go to South Korea and Singapore. In fact, three of the top four are Far Eastern countries. The United States comes in around number seven.

The second applies a correction to the graphed data to reorder the list by taking into account the countries’ relative wealth. There, the United States comes in dead last among the sixteen countries listed. East Asian countries account for all of the top five.

The take-home lesson from the article is conveniently stated in its final paragraph:

“The upshot of all of this is relatively straightforward. When taking wages into account, Asian countries far outpace their western counterparts. If robots are the future of manufacturing, American and European countries have some catching up to do to stay competitive.”

This article, of course, got me started thinking about automation and how manufacturers choose to adopt it. It’s a subject that was a major theme throughout my tenure as Chief Editor of Test & Measurement World and constituted the bulk of my work at Control Engineering.

The graphs certainly support the conclusions expressed in the cited paragraph’s first two sentences. The third sentence, however, is problematical.

That ultimate conclusion is based on accepting that “robots are the future of manufacturing.” Absolute assertions like that are always dangerous. Seldom is anything so all-or-nothing.

Predicting the future is epistemological suicide. Whenever I hear such bald-faced statements I recall Jim Morrison’s prescient statement: “The future’s uncertain and the end is always near.”

The line was prescient because a little over a year after the song’s release, Morrison was dead at age twenty-seven, thereby fulfilling the slogan expressed by John Derek’s “Nick Romano” character in Nicholas Ray’s 1949 film Knock on Any Door: “Live fast, die young, and leave a good-looking corpse.”

Anyway, predictions like “robots are the future of manufacturing” are generally suspect because, in the chaotic Universe in which we live, the future is inherently unpredictable.

If you want to say something practically guaranteed to be wrong, predict the future!

I’d like to offer an alternate explanation for the data presented in the WEF graphs. It’s based on my belief that American Culture usually gets things right in the long run.

Yes, that’s the long run in which economist John Maynard Keynes pointed out that we’re all dead.

My belief in the ultimate vindication of American trends is based, not on national pride or jingoism, but on historical precedents. Countries that have bucked American trends often start out strong, but ultimately fade.

An obvious example is the raft of trendy Japanese management techniques, based on Druckerian principles, that was so much in vogue during the last half of the twentieth century. Folks imagined such techniques were going to drive the Japanese economy to pre-eminence in the world. Management consultants touted such principles as the future of corporate governance without noticing that, while they were great for middle management, they were useless for strategic planning.

Japanese manufacturers beat the crap out of U.S. industry for a while, but eventually their economy fell into a prolonged recession characterized by economic stagnation and disinflation so severe that even negative interest rates couldn’t restart it.

Similar examples abound, which is why our little country with its relatively minuscule population (4.3% of the world’s) has by far the biggest GDP in the world. China, with more than four times our population, grosses roughly a third less than we do.

So, if robotic adoption is the future of manufacturing, why are we so far behind? Assuming we actually do know what we’re doing, as past performance would suggest, the answer must be that the others are getting it wrong. Their faith in robotics as a driver of manufacturing productivity may be misplaced.

How could that be? What could be wrong with relying on technological advancement as the driver of productivity?

Manufacturing productivity is calculated on the basis of stuff produced (as measured by its total value in dollars) divided by the number of worker-hours needed to produce it. That should tell you something about what it takes to produce stuff. It’s all about human worker involvement.

Folks who think robots automatically increase productivity are fixating on the denominator in the productivity calculation. Making even the same amount of stuff while reducing the worker-hours needed to produce it should drive productivity up fast. That’s basic arithmetic. Yet, while manufacturing has been rapidly introducing all kinds of automation over the last few decades, productivity has stagnated.
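
That denominator fixation is easy to see with toy numbers (invented purely for illustration):

    # productivity = value of stuff produced / worker-hours needed to produce it
    value_produced = 1_000_000.0  # dollars' worth of stuff per month (invented)
    worker_hours = 20_000.0       # hours per month before automation (invented)

    before = value_produced / worker_hours
    after = value_produced / (worker_hours * 0.5)  # robots halve the labor hours

    print(f"before automation: ${before:.0f} per worker-hour")  # $50
    print(f"after automation:  ${after:.0f} per worker-hour")   # $100, on paper anyway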

We need to look for a different explanation.

It just might be that robotic adoption is another example of too much of a good thing. It might be that reliance on technology could prove to be less effective than something about the people making up the work force.

I’m suggesting that because I’ve been led to believe that work forces in the Far Eastern developing economies are less skillful, may have lower expectations, and are more tolerant of authoritarian governments.

Why would those traits make a difference? I’ll take them one at a time to suggest how they might.

The impression that Far Eastern populations are less skillful is not easy to demonstrate. Nobody who’s dealt with people of Asian extraction in either an educational or work-force setting would ever imagine they are at all deficient in either intelligence or motivation. On the other hand, as emerging or developing economies those countries are likely more dependent on workers newly recruited from rural, agrarian settings, who are likely less acclimated to manufacturing and industrial environments. On this basis, one may posit that the available workers may prove less skillful in a manufacturing setting.

It’s a weak argument, but it exists.

The idea that people making up Far-Eastern work forces have lower expectations than those in more developed economies is on firmer footing. Workers in Canada, the U.S. and Europe have very high expectations for how they should be treated. Wages are higher. Benefits are more generous. Upward mobility perceptions are ingrained in the cultures.

For developing economies, not so much.

Then, we come to tolerance of authoritarian regimes. Tolerance of authoritarianism goes hand-in-hand with tolerance for the usual authoritarian vices of graft, lack of personal freedom and social immobility. Only those believing populist political propaganda think differently (which is the danger of populism).

What’s all this got to do with manufacturing productivity?

Lack of skill, low expectations and patience under authority are not conducive to high productivity. People are productive when they work hard. People work hard when they are incentivized. They are incentivized to work when they believe that working harder will make their lives better. It’s not hard to grasp!

Installing robots in a plant won’t by itself lead human workers to believe that working harder will make their lives better. If anything, it’ll do the opposite. They’ll start worrying that their lives are about to take a turn for the worse.

Maybe that has something to do with why increased automation has failed to increase productivity.

Teaching News Consumption and Critical Thinking

Teaching media literacy
Teaching global media literacy to children should be started when they’re young. David Pereiras/Shutterstock

21 November 2018 – Regular readers of this blog know one of my favorite themes is critical thinking about news. Another of my favorite subjects is education. So, they won’t be surprised when I go on a rant about promoting teaching of critical news consumption habits to youngsters.

Apropos of this subject, last week the BBC launched a project entitled “Beyond Fake News,” which aims to “fight back” against fake news with a season of documentaries, special reports and features on the BBC’s international TV, radio and online networks.

In an article by Lucy Mapstone, Press Association Deputy Entertainment Editor for the Independent.ie digital network, entitled “BBC to ‘fight back’ against disinformation with Beyond Fake News project,” Jamie Angus, director of the BBC World Service Group, is quoted as saying: “Poor standards of global media literacy, and the ease with which malicious content can spread unchecked on digital platforms mean there’s never been a greater need for trustworthy news providers to take proactive steps.”

Angus’ quote opens up a Pandora’s box of issues. Among them is the basic question of what constitutes “trustworthy news providers” in the first place. Of course, this is an issue I’ve tackled in previous columns.

Another issue is what would be appropriate “proactive steps.” The BBC’s “Beyond Fake News” project is one example that seems pretty sound. (Sorry if this language seems a little stilted, but I’ve just finished watching a mid-twentieth-century British film, and those folks tended to talk that way. It’ll take me a little while to get over it.)

Another sort of “proactive step” is what I’ve been trying to do in this blog: provide advice about what steps to take to ensure that the news you consume is reliable.

A third is providing rebuttal of specific fake-news stories, which is what pundits on networks like CNN and MSNBC try (with limited success, I might say) to do every day.

The issue I hope to attack in this blog posting is the overarching concern in the first phrase of the Angus quote: “Poor standards of global media literacy, … .”

Global media literacy can only be improved the same way any lack of literacy can be improved, and that is through education.

Improving global media literacy begins with ensuring a high standard of media literacy among teachers. Teachers can only teach what they already know. Thus, a high standard of media literacy must start in college and university academic-education programs.

While I’ve spent decades teaching at the college level, I’m not actually qualified to teach other teachers how to teach. I’ve only taught technical subjects, and the education required to teach technical subjects centers on the technical subjects themselves. The art of teaching is (or at least was when I was at university) left to the student’s ability to mimic what their teachers did, informal mentoring by fellow teachers, and good-ol’ experience in the classroom. We were basically dumped into the classroom and left to sink or swim. Some swam, while others sank.

That said, I’m not going to try to lay out a program for teaching teachers how to teach media literacy. I’ll confine my remarks to making the case that it needs to be done.

Teaching media literacy to schoolchildren is especially urgent because the media-literacy projects I keep hearing about are aimed at adults “in the wild,” so to speak. That is, they’re aimed at adult citizens who have already completed their educations and are out earning livings, bringing up families, and participating in the political life of society (or ignoring it, as the case may be).

I submit that’s exactly the wrong audience to aim at.

Yes, it’s the audience that is most involved in media consumption. It’s the group of people who most need to be media literate. It is not, however, the group that we need to aim media-literacy education at.

We gotta get ‘em when they’re young!

Like any other academic subject, the best time to teach people good media-consumption habits is before they need to have them, not afterwards. There are multiple reasons for this.

First, children need to develop good habits before they’ve developed bad habits. It saves the dicey stage of having to unlearn old habits before you can learn new ones. Media literacy is no different. Neither is critical thinking.

Most of the so-called “fake news” appeals to folks who’ve never learned to think critically in the first place. They certainly try to think critically, but they’ve never been taught the skills. Of course, those critical-thinking skills are a prerequisite to building good media-consumption habits.

How can you get in the habit of thinking critically about news stories you consume unless you’ve been taught to think critically in the first place? I submit that the two skills are so intertwined that the best strategy is to teach them simultaneously.

And, it is most definitely a habit, like smoking, drinking alcohol, and being polite to pretty girls (or boys). It’s not something you can just tell somebody to do, then expect they’ll do it. They have to do it over and over again until it becomes habitual.

‘Nuff said.

Another reason to promote media literacy among the young is that’s when people are most amenable to instruction. Human children are pre-programmed to try to learn things. That’s what “play” is all about. Acquiring knowledge is not an unpleasant chore for children (unless misguided adults make it so). It’s their job! To ensure that children learn what they need to know to function as adults, Mommy Nature went out of her way to make learning fun, just as she did with everything else humans need to do to survive as a species.

Learning, having sex, taking care of babies are all things humans have to do to survive, so Mommy Nature puts systems in place to make them fun, and so drive humans to do them.

A third reason we need to teach media literacy to the young is that, like everything else, you’re better off learning it before you need to practice it. Nobody in their right mind teaches a novice how to drive a car by running them out in city traffic. High schools all have big, tortuously laid out parking lots to give novice drivers a safe, challenging place to practice the basic skills of starting, stopping and turning before they have to perform those functions while dealing with fast-moving Chevys coming out of nowhere.

Similarly, you want students to practice deciphering written and verbal communications before asking them to parse a Donald-Trump speech!

The “Call to Action” for this editorial piece is thus, “Agitate for developing good media-consumption habits among schoolchildren along with the traditional Three Rs.” It starts with making the teaching of media literacy part of K-12 teacher education. It also includes teaching critical thinking skills and habits at the same time. Finally, it includes holding K-12 teachers responsible for inculcating good media-consumption habits in their students.

Yes, it’s important to try to bring the current crop of media-illiterate adults up to speed, but it’s more important to promote global media literacy among the young.

Computers Are Revolting!

Will Computers Revolt? cover
Charles Simon’s Will Computers Revolt? looks at the future of interactions between artificial intelligence and the human race.

14 November 2018 – I just couldn’t resist the double meaning allowed by the title for this blog posting. It’s all I could think of when reading the title of Charles Simon’s new book, Will Computers Revolt? Preparing for the Future of Artificial Intelligence.

On one hand, yes, computers are revolting. Yesterday my wife and I spent two hours trying to figure out how to activate our Netflix account on my new laptop. We ended up having to change the email address and password associated with the account. And, we aren’t done yet! The nice lady at Netflix sadly informed me that in thirty days, their automated system would insist that we re-log-into the account on both devices due to the change.

That’s revolting!

On the other hand, the uprising has already begun. Computers are revolting in the sense that they’re taking power to run our lives.

We used to buy stuff just by picking it off the shelf, then walking up to the check-out counter and handing over a few pieces of green paper. The only thing that held up the process was counting out the change.

Later, when credit cards first reared their ugly heads, we had to wait a few minutes for the salesperson to fill out a sales form, then run our credit cards through the machine. It was all manual. No computers involved. It took little time once you learned how to do it, and, more importantly, the process was pretty much the same everywhere and never changed, so once you learned it, you’d learned it “forever.”

Not no more! How much time do you, I, and everyone else waste navigating the multiple pages we have to walk through just to pay for anything with a credit or debit card today?

Even worse, every store has different software using different screens to ask different questions. So, we can’t develop a habitual routine for the process. It’s different every time!

Not long ago the banks issuing my debit-card accounts switched to those %^&^ things with the chips. I always forget to put the thing in the slot instead of swiping the card across the magnetic-stripe reader. When that happens we have to start the process all over, wasting even more time.

The computers have taken over, so now we have to do what they tell us to do.

Now we know who’ll be first against the wall when the revolution comes. It’s already here and the first against the wall is us!

Golem Literature in Perspective

But seriously folks, Simon’s book is the latest in a long tradition of works by thinkers fascinated by the idea that someone could create an artifice that would pass for a human. Perhaps the earliest, and certainly the most iconic, are the golem stories from Jewish folklore. I suspect (on no authority whatsoever, but it does seem likely) that the idea of a golem appeared about the time when human sculptors started making statues in realistic human form. That was very early, indeed!

A golem is, for those who aren’t familiar with the term or willing to follow the link provided above to learn about it, an artificial creature fashioned by a human: effectively what we call a “robot.” The folkloric golems were made of clay or (sometimes) wood because those were the best materials the stories’ authors could imagine their artisans working with. A well-known golem story is Carlo Collodi’s The Adventures of Pinocchio.

By the sixth century BCE, Greek sculptors had begun to produce lifelike statues. The myth of Pygmalion and Galatea appeared in a pseudo-historical work by Philostephanus Cyrenaeus in the third century BCE. Pygmalion was a sculptor who made a statue representing his ideal woman, then fell in love with it. Aphrodite granted his prayer for a wife exactly like the statue by bringing the statue to life. The wife’s name was Galatea.

The Talmud points out that Adam started out as a golem. Like Galatea, Adam was brought to life when the Hebrew God Yahweh gave him a soul.

These golem examples emphasize the idea that humans, no matter how holy or wise, cannot give their creations a soul. The best they can do is to create automatons.

Simon effectively begs to differ. He spends the first quarter of his text laying out the case that it is possible, and indeed inevitable, that automated control systems displaying artificial general intelligence (AGI) capable of thinking at or (eventually) well above human capacity will appear. He spends the next half of his text showing how such AGI systems could be created and making the case that they will eventually exhibit functionality indistinguishable from consciousness. He devotes the rest of his text to speculating about how we, as human beings, will likely interact with such hyperintelligent machines.

Spoiler Alert

Simon’s answer to the question posed by his title is a sort-of “yes.” He feels AGIs will inevitably displace humans as the most intelligent beings on our planet, but won’t exactly “revolt” at any point.

“The conclusion,” he says, “is that the paths of AGIs and humanity will diverge to such an extent that there will be no close relationship between humans and our silicon counterparts.”

There won’t be any violent conflict because robotic needs are sufficiently dissimilar to ours that there won’t be any competition for scarce resources, which is what leads to conflict between groups (including between species).

Robots, he posits, are unlikely to care enough about us to revolt. There will be no Terminator robots seeking to exterminate us because they won’t see us as enough of a threat to bother with. They’re more likely to view us much the way we view squirrels and birds: pleasant fixtures of the natural world.

They won’t, of course, tolerate any individual humans who make trouble for them the same way we wouldn’t tolerate a rabid coyote. But, otherwise, so what?

So, the !!!! What?

The main value of Simon’s book is not in its ultimate conclusion. That’s basically informed opinion. Rather, its value lies in the voluminous detail he provides in getting to that conclusion.

He spends the first quarter of his text detailing exactly what he means by AGI. What functions are needed to make it manifest? How will we know when it rears its head (ugly or not, as a matter of taste)? How will a conscious, self-aware AGI system act?

A critical point Simon makes in this section is the assertion that AGI will arise first in autonomous mobile robots. I thoroughly agree for pretty much the same reasons he puts forth.

I first started seriously speculating about machine intelligence back in the middle of the twentieth century. I never got too far – certainly not as far as Simon gets in this volume – but pretty much the first thing I actually did realize was that it was impossible to develop any kind of machine with any recognizable intelligence unless its main feature was having a mobile body.

Developing any AGI feature requires the machine to have a mobile body. It has to take responsibility not only for deciding how to move itself about in space, but also for figuring out why. Why, for example, would it rather be over there than just stay here? Note that biological intelligence arose in animals, not in plants!

Simultaneously with reading Simon’s book, I was re-reading Robert A. Heinlein’s 1966 novel The Moon is a Harsh Mistress, which is one of innumerable fiction works whose plot hangs on actions of a superintelligent sentient computer. I found it interesting to compare Heinlein’s early fictional account with Simon’s much more informed discussion.

Heinlein sidesteps the mobile-body requirement by making his AGI arise in a computer tasked with operating the entire infrastructure of the first permanent human colony on the Moon (more accurately in the Moon, since Heinlein’s troglodytes burrowed through caves and tunnels, coming up to the surface only reluctantly when circumstances forced them to). He also avoids trying to imagine the AGI’s inner workings by glossing them over with the 1950s technology he was most familiar with.

In his rather longish second section, Simon leads his reader through a thought experiment speculating about what components an AGI system would need to have for its intelligence to develop. What sorts of circuitry might be needed, and how might it be realized? This section might be fascinating for those wanting to develop hardware and software to support AGI. For those of us watching from our armchairs on the outside, though, not so much.

Altogether, Charles Simon’s Will Computers Revolt? is an important book that’s fairly easy to read (or, at least as easy as any book this technical can be) and accessible to a wide range of people interested in the future of robotics and artificial intelligence. It is not the last word on this fast-developing field by any means. It is, however, a starting point for the necessary debate over how we should view the subject. Do we have anything to fear? Do we need to think about any regulations? Is there anything to regulate and would any such regulations be effective?

Babies and Bath Water

A baby in bath water
Don’t throw the baby out with the bathwater. Switlana Symonenko/Shutterstock.com

31 October 2018 – An old catchphrase derived from Medieval German is “Don’t throw the baby out with the bathwater.” It expresses an important principle in systems engineering.

Systems engineering focuses on how to design, build, and manage complex systems. A system can consist of almost anything made up of multiple parts or elements. For example, an automobile internal combustion engine is a system consisting of pistons, valves, a crankshaft, etc. Complex systems, such as that internal combustion engine, are typically broken up into sub-systems, such as the ignition system, the fuel system, and so forth.

Obviously, the systems concept can be applied to almost everything, from microorganisms to the World economy. As another example, medical professionals divide the human body into eleven organ systems, which would each be sub-systems within the body, which is considered as a complex system, itself.

Most systems-engineering principles transfer seamlessly from one kind of system to another.

Perhaps the best-known example of a systems-engineering principle was popularized by Robin Williams in his Mork & Mindy TV series. The Used-Car Rule, as Williams’ Mork character put it, quite simply states:

“If it works, don’t fix it!”

If you’re getting the idea that systems engineering principles are typically couched in phrases that sound pretty colloquial, you’re right. People have been dealing with systems for as long as there have been people, so most of what they discovered about how to deal with systems long ago became “common sense.”

Systems engineering coalesced into an interdisciplinary engineering field around the middle of the twentieth century. Simon Ramo is sometimes credited as the founder of modern systems engineering, although many engineers and engineering managers contributed to its development and formalization.

The Baby/Bathwater rule means (if there’s anybody out there still unsure of the concept) that when attempting to modify something big (such as, say, the NAFTA treaty), you should make sure to retain the elements you wish to keep while modifying the elements you want to change.

The idea is that most systems that are already in place more or less already work, indicating that there are more elements that are right than are wrong. Thus, it’ll be easier, simpler, and less complicated to fix what’s wrong than to violate another systems principle:

“Don’t reinvent the wheel.”

Sometimes, on the other hand, something is such an unholy mess that trying to pick out those elements that need to change from the parts you don’t wish to change is so difficult that it’s not worth the effort. At that point, you’re better off scrapping the whole thing (throwing the baby out with the bathwater) and starting over from scratch.

Several months ago, I noticed that a seam in the convertible top on my sports car had begun to split. I quickly figured out that the big brush roller at my neighborhood automated car wash was overstressing the more-than-a-decade-old fabric. Naturally, I stopped using that car wash, and started looking around for a hand-detailing shop that would be more gentle.

But, that still left me with a convertible top that had started to split. So, I started looking at my options for fixing the problem.

Considering the car’s advanced age, and that a number of little things were starting to fail, I first considered trading the whole car in for a newer model. That, of course, would violate the rule about not throwing the baby out with the bath water. I’d be discarding the whole car just because of a small flaw, which might be repaired.

Of course, I’d also be getting rid of a whole raft of potentially impending problems. Then, again, I might be taking on a pile of problems that I knew nothing about.

It turned out, however, that the best car-replacement option was unacceptable, so I started looking into replacing just the convertible top. That, too, turned out to be infeasible. Finally, I found an automotive upholstery specialist who described a patching scheme that would solve the immediate problem and likely last through the remaining life of the car. So, that’s what I did.

I’ve put you through listening to this whole story to illustrate the thought process behind applying the “don’t throw the baby out with the bathwater” rule.

Unfortunately, our current President, Donald Trump, seems to have never learned anything about systems engineering, or about babies and bathwater. He’s apparently enthralled with the idea that he can bully U.S. trading partners into giving him concessions when he negotiates with them one-on-one. That’s the gist of his love of bilateral trade agreements.

Apparently, he feels that if he gets into a multilateral trade negotiation, his go-to strategy of browbeating partners into giving in to him might not work. Multiple negotiating partners might get together and provide a united front against him.

In fact, that’s a reasonable assumption. He’s a sufficiently weak deal maker on his own that he’d have trouble standing up to a combination of, say, Mexico’s Peña Nieto and Canada’s Trudeau banded together against him.

With that background, it’s not hard to understand why POTUS is combing through all U.S. treaties, which are mostly multilateral, looking for any niggling thing wrong with them to use as an excuse to scrap the whole arrangement and start over. Obvious examples are the NAFTA treaty and the Iran Nuclear Accord.

Both of these treaties have been in place for some time, and have generally achieved the goals they were put in place to achieve. Howsoever, they’re not perfect, so POTUS is in the position of trying to “fix” them.

Since both these treaties are multilateral deals, to make even minor adjustments POTUS would have to enter multilateral negotiations with partners (such as Germany’s quantum-physicist-turned-politician, Angela Merkel) who would be unlikely to kowtow to his bullying style. Robbed of his signature strategy, he’d rather scrap the whole thing and start all over, taking on partners one at a time in bilateral negotiations. So, that’s what he’s trying to do.

A more effective strategy would be to forget everything his ghostwriter put into his self-congratulatory “How-To” book The Art of the Deal, enumerate a list of what’s actually wrong with these documents, and tap into the cadre of veteran treaty negotiators that used to be available in the U.S. State Department to assemble a team of career diplomats capable of fixing what’s wrong without throwing the babies out with the bathwater.

But, that would violate his narcissistic world view. He’d have to admit that it wasn’t all about him, and acknowledge one of the first principles of project management (another discipline that he should have vast knowledge of, but apparently doesn’t):

“Begin by making sure the needs of all stakeholders are built into any project plan.”

Reaping the Whirlwind

Tornado
Powerful Tornado destroying property, with lightning in the background. Solarseven/Shutterstock.com

24 October 2018 – “They sow the wind, and they shall reap the whirlwind” is a saying from The Holy Bible‘s Old Testament Book of Hosea. I’m certainly not a Bible scholar, but, having been paying attention for seven decades, I can attest to the saying’s validity.

The equivalent Buddhist concept is karma, which is the motive force driving the Wheel of Birth and Death. It is also wrapped up with samsara, which is epitomized by the saying: “What goes around comes around.”

Actions have consequences.

If you smoke a pack of Camels a day, you’re gonna get sick!

By now, you should have gotten the idea that “reaping the whirlwind” is a common theme among the world’s religions and philosophies. You’ve got to be pretty stone headed to have missed it.

Apparently the current President of the United States (POTUS), Donald J. Trump, has been stone headed enough to miss it.

POTUS is well known for trying to duck consequences of his actions. For example, during his 2016 Presidential Election campaign, he went out of his way to capitalize on Wikileaks‘ publication of emails stolen from Hillary Clinton‘s private email server. That indiscretion and his attempt to cover it up by firing then-FBI-Director James Comey grew into a Special Counsel Investigation, which now threatens to unmask all the nefarious activities he’s engaged in throughout his entire life.

Of course, Hillary’s unsanctioned use of that private email server while serving as Secretary of State is what opened her up to the email hacking in the first place! That error came back to bite her in the backside by giving the Russians something to hack. They then forwarded that junk to Wikileaks, who eventually made it public, arguably costing her the 2016 Presidential election.

Or, maybe it was her standing up for her philandering husband, or maybe lingering suspicions surrounding the pair’s involvement in the Whitewater scandal. Whatever the reason(s), Hillary, too, reaped the whirlwind.

In his turn, Russian President Vladimir Putin sowed the wind by tasking operatives to hack Hillary’s email server. Now he’s reaping the whirlwind in the form of a laundry list of sanctions from western governments and Special Counsel Investigation indictments against the operatives he sent to do the hacking.

Again, POTUS showed his stone-headedness about the Bible verse by cuddling up to nearly every autocrat in the world: Vlad Putin, Kim Jong Un, Xi Jinping, … . The list goes on. Sensing waves of love emanating from Washington, those idiots have become ever more extravagant in their misbehavior.

The latest example of an authoritarian regime rubbing POTUS’ nose in filth is the apparent murder and dismemberment of Saudi Arabian journalist Jamal Khashoggi when he briefly entered the Saudi consulate in Istanbul on personal business.

The most popular theory of the crime lays blame at the feet of Mohammad Bin Salman Al Saud (MBS), Crown Prince of Saudi Arabia and the country’s de facto ruler. Unwilling to point his finger at another would-be autocrat, POTUS is promoting a Saudi cover-up attempt suggesting the murder was done by some unnamed “rogue agents.”

Actually, that theory deserves some consideration. The idea that MBS was emboldened (spelled S-T-U-P-I-D) enough to have ordered Khashoggi’s assassination in such a ham-fisted way strains credulity. We should consider the possibility that ultra-conservative Wahhabist factions within the Saudi government, who see MBS’ reforms as a threat to their historical patronage from the oil-rich Saudi monarchy, might have created the incident to embarrass MBS.

No matter what the true story is, the blowback is a whirlwind!

MBS has gone out of his way to promote himself as a business-friendly reformer. This reputation has persisted despite repeated instances of continued repression in the country he controls.

The whirlwind, however, is threatening MBS’ and the Saudi monarchy’s standing in the international community. In particular, international bankers, led by JPMorgan Chase’s Jamie Dimon, and a host of Silicon Valley tech companies are running for the exits from Saudi Arabia’s three-day Future Investment Initiative conference, which was scheduled to start Tuesday (23 October 2018).

That is a major embarrassment and will likely derail MBS’ efforts to modernize Saudi Arabia’s economy away from dependence on oil revenue.

It appears that these high-powered executives are rethinking the wisdom of dealing with the authoritarian Saudi regime. They’ve decided not to sow the wind by dealing with the Saudis because they don’t want to reap the whirlwind likely to result!

Update

Since this manuscript was drafted, it’s become clear that we’ll never get the full story about the Khashoggi incident. Both regimes involved (Turkey and Saudi Arabia) are authoritarian, with no incentive to be honest about this story. While Saudi Arabia makes a pretense of press freedom, this incident shows its true colors (i.e., color it repressive). Turkey hasn’t given even a passing nod to press freedom for years. It’s like two rival foxes telling the dog about a hen-house break-in.

On the “dog” side, we’re stuck with a POTUS who attacks press freedom on a daily basis. So, who’s going to ferret out the truth? Maybe the Brits or the French, but not the U.S. Executive Branch!

Doing Business with Bad Guys

Threatened with a gun
Authoritarians make dangerous business partners. rubikphoto/Shutterstock

3 October 2018 – Parents generally try to drum a simple maxim into their children’s heads: “People judge you by the company you keep.”

Children (and we’re all children, no matter how mature and sophisticated we pretend to be) just as generally find it hard to follow that maxim. We all screw it up once in a while by succumbing to the temptation of some perceived advantage to be had by dealing with some unsavory character.

Large corporations and national governments are at least as likely as individuals to succumb to the prospect of making a fast buck, or signing some treaty, with peers who don’t entertain the same values we have (or at least pretend to have). Governments, especially, have a tough time dealing with what I’ll call “Bad Guys.”

Let’s face it, better than half the nations of the world are run by people we wouldn’t want in our living rooms!

I’m specifically thinking about totalitarian regimes like the People’s Republic of China (PRC).

‘Way back in the last century, Mao Tse-tung (or Mao Zedong, depending on how you choose to mis-spell the anglicization of his name) clearly placed China on the “Anti-American” team, espousing a virulent form of Marxism and descending into the totalitarian authoritarianism Marxist regimes are so prone to. This situation continued from the PRC’s founding in 1949 through 1972, when notoriously authoritarian-friendly U.S. President Richard Nixon toured China in an effort to start a trade relationship between the two countries.

Greedy U.S. corporations quickly started falling all over themselves in an effort to gain access to China’s enormous potential market. Mesmerized by the statistics of more than a billion people spread out over China’s enormous land mass, they ignored the fact that those people were struggling in a subsistence-agriculture economy that had collapsed under decades of mismanagement by Mao’s authoritarian regime.

What they hoped those generally dirt-poor peasants were going to buy from them I never could figure out.

Unfortunately, years later I found myself embedded in the management of one of those starry-eyed multinational corporations that was hoping to take advantage of the developing Chinese electronics industry. Fresh off our success launching Test & Measurement Europe, they wanted to launch a new publication called Test & Measurement China. Recalling the then-recent calamity ending the Tiananmen Square protests of 1989, I pulled a Nancy Reagan and just said “No.”

I pointed out that the PRC was still run by a totalitarian, authoritarian regime, and that you just couldn’t trust those guys. You never knew when they were going to decide to sacrifice you on the altar of internal politics.

Today, American corporations are seeing the mistakes they made in pursuit of Chinese business, which, like Robert Southey’s chickens, are coming home to roost. In 2015, Chinese Premier Li Keqiang announced the “Made in China 2025” plan to make China the world’s technology leader. It quickly became apparent that Mao’s current successor, Xi Jinping, intends to achieve his goals by building on technology pilfered from western companies who’d naively partnered with Chinese firms.

Now, their only protector is another authoritarian-friendly president, Donald Trump. Remember, it was Trump who, following his ill-advised summit with North Korean strongman Kim Jong Un, got caught on video enviously saying: “He speaks, and his people sit up at attention. I want my people to do the same.”

So, now these corporations have to look to an American would-be dictator for protection from an entrenched Chinese dictator. No wonder they find themselves screwed, blued, and tattooed!

Governments are not immune to the PRC’s siren song, either. Pundits are pointing out that the PRC’s vaunted “One Belt, One Road” initiative is likely an example of “debt-trap diplomacy.”

Debt-trap diplomacy is a strategy similar to organized crime’s loan-shark operations. An unscrupulous cash-rich organization, the loan shark, offers funds to a cash-strapped individual, such as an ambitious entrepreneur, in a deal that seems too good to be true. It is: the loan comes on terms that nearly guarantee the debtor will default. The shark then offers to write off the debt in exchange for the debtor’s participation in some unsavory scheme, such as money laundering.

In the debt-trap diplomacy version, the PRC stands in the place of the loan shark while some emerging-economy nation, such as, say, Malaysia, accepts the unsupportable debt. In the PRC/Malaysia case, the unsavory scheme is helping support China’s imperial ambitions in the western Pacific.

Earlier this month, Malaysia wisely backed out of the deal.
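To make the loan-shark arithmetic concrete, here’s a minimal sketch in Python. Every figure in it (principal, interest rate, term, project revenue) is a hypothetical number invented for illustration, not drawn from any actual loan agreement; the point is only how easily the standard annuity formula lets debt service outrun a project’s plausible cash flow:

```python
# Hypothetical debt-trap arithmetic. All figures are invented for
# illustration; none come from any actual loan agreement.

principal = 10_000_000_000      # $10B infrastructure loan
annual_rate = 0.06              # 6% interest
term_years = 15                 # repayment period

# Fixed annual payment from the standard annuity formula:
#   payment = P * r / (1 - (1 + r)^-n)
payment = principal * annual_rate / (1 - (1 + annual_rate) ** -term_years)

# An optimistic guess at the yearly revenue the financed project generates
project_revenue = 500_000_000   # $500M

shortfall = payment - project_revenue
print(f"Annual debt service:    ${payment:,.0f}")          # about $1.03B
print(f"Annual project revenue: ${project_revenue:,.0f}")
print(f"Annual shortfall:       ${shortfall:,.0f}")        # about $530M
```

With numbers anything like these, the debtor falls roughly half a billion dollars short every year. Default isn’t a risk; it’s the design.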

It’s not just the post-Maoist PRC that makes a dangerous place for western corporations to do business. Authoritarians all over the world behave like the barracuda in Heart’s song: they suck you in with mesmerizing bright-and-shiny promises, then leave you twisting in the wind.

Yes, I’ve piled up a whole mess of mixed metaphors here, but I’m trying to drive home a point!

Another example of the traps business people can get into by dealing with authoritarians is afforded by Danske Bank’s Estonia branch and its dealings with Vladimir Putin‘s Russian kleptocracy. Danske Bank is a Danish financial institution with a pan-European footprint and global ambitions. A recently released internal report, produced for Danske Bank by the Danish law firm Bruun & Hjejle, says that the Estonia branch engaged in “dodgy dealings” with numerous corrupt Russian officials. Basically, the bank set up a scheme to launder money stolen from Russian tax receipts by organized criminals.

The underlying scandal broke in Russia in June of 2007, when dozens of police officers raided the Moscow offices of Hermitage Capital Management, an activist fund focused on global emerging markets. A coverup by Kremlin authorities resulted in the death, in a Russian prison, of Sergei Leonidovich Magnitsky, a Russian tax accountant who specialized in anti-corruption work.

Magnitsky’s case became an international cause célèbre. The U.S. Congress and President Barack Obama enacted the Magnitsky Act at the end of 2012, barring, among others, those Russian officials believed to be involved in Magnitsky’s death from entering the United States or using its banking system.

Apparently, the purpose of the infamous Trump Tower meeting of June 9, 2016 was, on the Russian side, an effort to secure repeal of the Magnitsky Act should then-candidate Trump win the election. The Russians dangled the release of stolen emails incriminating Trump-rival Hillary Clinton as bait. That meeting became a centerpiece of the Mueller Investigation, which has so far resulted in dozens of indictments for federal crimes and at least eight guilty pleas or convictions.

The latest business strung up in this mega-scandal was the whole corrupt banking system of Cyprus, whose laundering of Russian oligarchs’ money amounted to over $20B.

The moral of this story is: Don’t do business with bad guys, no matter how good they make the deal look.

News vs. Opinion

News reporting
Journalists report reopening of Lindt cafe in Sydney after ISIS siege, 20 March 2015. M. W. Hunt / Shutterstock.com

26 September 2018 – This is NOT a news story!

Last week I spent a lot of space yammering on about how to tell fake news from the real stuff. I made a big point about how real news organizations don’t allow editorializing in news stories. I included an example of a New York Times op-ed (opinion piece) that was decidedly not a news story.

On the other hand, last night I growled at my TV screen when I heard a CNN commentator say that she’d been taught that journalists must have opinions and should voice them. I growled because her statement could be construed to mean something anathema to journalistic ethics. I’m afraid way too many TV journalists may be confused about this issue. Certainly too many news consumers are confused!

It’s easy to get confused. For example, I got myself in trouble some years ago in a discussion over dinner and drinks with Andy Wilson, Founding Editor at Vision Systems Design, over a related issue that is less important to political-news reporting, but is crucial for business-to-business (B2B) journalism: the role of advertising in editorial considerations.

Andy insisted upon strictly ignoring advertiser needs when making editorial decisions. I advocated a more nuanced approach, arguing that ignoring advertisers’ needs and desires would cut us off from our most important source of technology-trends information.

I’m not going to delve too deeply into that subject because it has only peripheral significance for this blog posting. The overlap with news reporting is that both activities involve dealing with biased sources.

My disagreement with Andy arose from my veteran-project-manager’s sensitivity to all the stakeholders in any activity. In the B2B case, editors have several ways of enforcing journalistic discipline without biting the hand that feeds us. I was especially sensitive to the issue because I specialized in case studies, which necessarily discuss technology embodied in commercial products. Basically, I insisted on limiting actual product mentions to one per story, and on suppressing any claim that the mentioned product was the only possible way to access the embodied technology. In essence, I policed the stories I wrote or edited to avoid the “buy our stuff” messages that advertisers love and that send chills down Andy’s (and my) spine.

In the news-media realm, journalists need to police their writing for “buy our ideas” messages in news stories. “Just the facts, ma’am” needs to be the goal for news. Expressing editorial opinions in news stories is dangerous. That’s when the lines between fake news and real news get blurry.

Those lines need to be sharp to help news consumers judge the … information … they’re being fed.

Perhaps “information” isn’t exactly the right word.

It might be best to start with the distinction between “information” and “data.”

The distinction is not always clear in a general setting. It is, however, stark in the world of science, which is where I originally came from.

What comes into our brains from the outside world is “data.” It’s facts and figures. Contrary to what many people imagine, “data” is devoid of meaning. Scientists often refer to it as “raw data” to emphasize this characteristic.

There is nothing actionable in raw data. The observation that “the sky is blue” can’t even tell you if the sky was blue yesterday, or how likely it is to be blue tomorrow. It just says: “the sky is blue.” End of story.

Turning “data” into “information” involves combining it with other, related data, and making inferences about or deductions from patterns perceivable in the resulting superset. The process is called “interpretation,” and it’s the second step in turning data into knowledge. It’s what our brains are good for.
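For a concrete (if toy) picture of that interpretation step, here’s a minimal sketch in Python. The observations and the naive forecasting rule are both invented for illustration; the point is that no single datum says anything about tomorrow, while the aggregated pattern does:

```python
# Raw data: one sky-color observation per day (made-up values).
# Each datum, by itself, says nothing about yesterday or tomorrow.
observations = ["blue", "blue", "gray", "blue", "blue", "gray", "blue"]

# Interpretation: combine the data and look for a pattern.
blue_fraction = observations.count("blue") / len(observations)

# The inference is information; it's actionable in a way no single datum is.
print(f"The sky was blue on {blue_fraction:.0%} of observed days.")
print(f"Naive forecast: blue again tomorrow, with probability ~{blue_fraction:.0%}.")
```

Crude as it is, that last line is the move from data to information: a pattern plus an inference you could actually act on.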

So, does this mean that news reporters are to be empty-headed recorders of raw facts?

Not by a long shot!

The CNN commentator’s point was that reporters are far from empty headed. While learning their trade, they develop ways to, for example, tell when some data source is lying to them.

In the hard sciences it’s called “instrumental error,” and experimental scientists (as I was) spend careers detecting and eliminating it.

Similarly, what a reporter does when faced with a lying source is the hard part of news reporting. Do you say, “This source is unreliable” and suppress what they told you? Do you report what they said along with a comment that they’re a lying so-and-so who shouldn’t be believed? Certainly, you try to find another source who tells you something you can rely on. But, what if the second source is lying, too?

???

That’s why we news consumers have to rely on professionals who actually care about the truth for our news.

On the other hand, nobody goes to news outlets for just raw data. We want something we can use. We want something actionable.

Most of us have neither the time nor the tools to interpret all the drivel we’re faced with. Even if we happen to be able to work it out for ourselves, we could always use some help, even if just to corroborate our own conclusions.

Who better to help us interpret the data (news) and glean actionable opinions from it than those journalists who’ve been spending their careers listening to the crap newsmakers want to feed us?

That’s where commentators come in. The difference between an editor and a reporter is that the editor has enough background and experience to interpret the raw data and turn it into actionable information.

That is: opinion you can use to make a decision. Like, maybe, who to vote for.

People with the chops to interpret news and make comments about it are called “commentators.”

When I was looking to hire what we used to call a “Technical Editor” for Test & Measurement World, I specifically looked for someone with a technical degree and experience developing the technology I wanted that person to cover. So, for example, when I was looking for someone to cover advances in testing of electronics for the telecommunications industry, I went looking for a telecommunications engineer. I figured that if I found one who could also tell a story, I could train them to be a journalist.

That brings us back to the CNN commentator who thought she should have opinions.

The relevant word here is “commentator.”

She’s not just a reporter. To be a commentator, she supposedly has access to the best available “data” and enough background to skillfully interpret it. So, what she was saying is true for a commentator rather than just a reporter.

Howsomever, ya can’t just give a conclusion without showing how the facts lead to it.

Let’s look at how I assemble a post for this blog as an example of what you should look for in a reliable op-ed piece.

Obviously, I look for a subject about which I feel I have something worthwhile to say. Specifically, I look for what I call the “take-home lesson” on which I base every piece of blather I write.

The “take-home lesson” is the basic point I want my reader to remember. Come Thursday next you won’t remember every word or even every point I make in this column. You’re (hopefully) going to remember some concept from it that you should be able to summarize in one or two sentences. It may be the “call to action” my eighth-grade English teacher, Miss Langley, told me to look for in every well-written editorial. Or, it could be just some idea, such as “Racism sucks,” that I want my reader to believe.

Whatever it is, it’s what I want the reader to “take home” from my writing. All the rest is just stuff I use to convince the reader to buy into the “take-home lesson.”

Usually, I start off by providing the reader with some context in which to fit what I have to say. It’s there so that the reader and I start off on the same page. This is important to help the reader fit what I have to say into the knowledge pattern of their own mind. (I hope that makes sense!)

After setting the context, I provide the facts that I have available from which to draw my conclusion. The conclusion will be, of course, the “take-home lesson.”

I can’t be sure that my readers will have the facts already, so I provide links to what I consider reliable outside sources. Sometimes I provide primary sources, but more often they’re secondary sources.

Primary sources for, say, a biographical sketch of Thomas Edison would be diary pages or financial records, which few readers would have immediate access to.

A secondary source might be a well-researched entry on, say, the Biography.com website, which the reader can easily get access to and which can, in turn, provide links to useful primary sources.

In any case, I try to provide sources for each piece of data on which I base my conclusion.

Then, I’ll outline the logical path that leads from the data pattern to my conclusion. While the reader should have no need to dispute the “data,” he or she should look very carefully to see whether my logic makes sense. Does it lead inevitably from the data to my conclusion?

Finally, I’ll clearly state the conclusion.

In general, every consumer of ideas should look for this same pattern in every information source they use.