Immigration in Perspective

Day without immigrants protest
During ‘A Day Without Immigrants,’ more than 500,000 people marched down Wilshire Boulevard in Los Angeles, CA, to protest a proposed federal crackdown on illegal immigration. Krista Kennell / Shutterstock.com

17 October 2018 – Immigration is, by and large, a good thing. It’s not always a good thing, and it carries with it a host of potential problems, but in general immigration is better than its opposite: emigration. And, there are a number of reasons for that.

Immigration is movement toward some place. Emigration is flow away from a place.

Mathematically, population shifts are described by a non-homogeneous second-order partial differential equation. I expect that statement means absolutely nothing to about half the target audience for this blog, and a fair fraction of the others have (like me) forgotten most of what they ever knew (or wanted to know) about such equations. So, I’ll start with a short review of the relevant points of how the things behave.

It’ll help the rest of this blog make a lot more sense, so bear with me.

Basically, the relevant non-homogeneous second-order differential equation is something called the “diffusion equation.” Leaving the detailed math aside, what this equation says is that the rate of migration of just about anything from one place to another depends on the spatial distribution of population density, a mobility factor, and a driving force pushing the population in one direction or the other.

Things (such as people) “diffuse” from places with higher densities to those with lower densities.

That tendency is moderated by a “mobility” factor that expresses how easy it is to get from place to place. It’s hard to walk across a desert, so mobility of people through a desert is low. Similarly, if you build a wall across the migration path, that also reduces mobility. Throwing up all kinds of passport checks, visas and customs inspections also reduces mobility.

Giving people automobiles, buses and airplanes, on the other hand, pushes mobility up by a lot!

But, changing mobility only affects the rate of flow. It doesn’t do anything to change the direction of flow, or to actually stop it. That’s why building walls has never actually worked. It didn’t work for the First Emperor of China. It didn’t work for Hadrian. It hasn’t done much for the Israelis, either.

Direction of flow is controlled by a forcing term. Existence of that forcing term is what makes the equation “non-homogeneous” rather than “homogeneous.” The homogeneous version (without the forcing term) is called the “heat equation” because it models what dumb-old thermal energy does.

Things that can choose what to do (like people), and have feet to help them act on their choices, get to “vote with their feet.” That means they can go where they want, instead of always floating downstream like a dead leaf.

The forcing term largely accounts for the desirability of being in one place instead of another. For example, the United States has a reputation for being a nice place to live. Thus, people try to flock here in droves from places that are not so nice. Thus, there’s a forcing term that points people from other places to the U.S.
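For the mathematically curious, the whole drift-diffusion story can be sketched in a few lines of code. This is a toy one-dimensional model, not a demographic simulation; the diffusion constant, forcing value, and grid size are made up purely for illustration.

```python
# A minimal 1-D sketch of the non-homogeneous diffusion equation the text
# describes:  dn/dt = D * d2n/dx2 - v * dn/dx
# D plays the role of mobility (how easy it is to move); v is the forcing
# term (which direction is "nicer"). All values here are illustrative only.

def step(n, D=0.1, v=0.05, dx=1.0, dt=1.0):
    """Advance the density profile n one explicit time step."""
    new = n[:]
    for i in range(1, len(n) - 1):
        diffusion = D * (n[i + 1] - 2 * n[i] + n[i - 1]) / dx**2
        drift = -v * (n[i + 1] - n[i - 1]) / (2 * dx)
        new[i] = n[i] + dt * (diffusion + drift)
    return new

# Start with everyone piled up on the left (cell 2).
n = [0.0] * 20
n[2] = 100.0
for _ in range(200):
    n = step(n)

# The center of mass moves in the direction the forcing term (v > 0)
# points; the mobility D only sets how quickly it gets there.
center = sum(i * x for i, x in enumerate(n)) / sum(n)
print(center)
```

Cutting D in half (a wall, a desert) slows the march rightward but never reverses it; only flipping the sign of v changes where the population ends up.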

That’s the big reason you want to live in a country that has immigration issues, rather than one with emigration issues. The Middle East had a serious emigration problem in 2015. For a number of reasons, it had become a nasty place to live. Folks that lived there wanted out in a big way. So, they voted with their feet.

There was a huge forcing term that pushed a million people from the Middle East to elsewhere, specifically Europe. Europe was considered a much nicer place to be, so people were willing to go through Hell to get there. Thus: emigration from the Middle East, and immigration into Europe.

In another example, Nazi occupation in the first half of the twentieth century made most places in Europe distasteful, especially for certain groups of people. So, the forcing term pushed a lot of people across the Atlantic toward America. In 1942 Michael Curtiz made a film about that. It was called Casablanca, and it is arguably one of the greatest films Humphrey Bogart starred in.

Similarly, for decades Mexico had some serious problems with poverty, organized crime and corruption. Those are things that make a place nasty to live in, so there was a big forcing function pushing people to cross the border into the much nicer United States.

In recent decades, regime change in Mexico cleaned up a lot of the country’s problems, so migration from Mexico to the United States dropped like a stone in the last years of the Obama administration. When Mexico became a nicer place to live, people stopped wanting to move away.

Duh!

There are two morals to this story:

  1. If you want to cut down on immigration from some other country, help that other country become a nicer place to live. (Conversely, you could turn your own country into a third-world toilet so nobody wants to come in, but that’s not what we want.)
  2. Walls and other barriers to immigration don’t stop it. They only slow it down.

We’re All Immigrants

I should subtitle this section, “The Bigot’s Lament.”

There isn’t a bi-manual (two-handed) biped (two-legged) creature anywhere in North or South America who isn’t an immigrant or a descendant of immigrants.

There have been two major influxes of human population in the history (and pre-history) of the Americas. The first occurred near the end of the last Ice Age, and the second occurred during the European Age of Discovery.

Before about ten-thousand years ago, there were horses, wolves, saber-tooth tigers, camels(!), elephants, bison and all sorts of big and little critters running around the Americas, but not a single human being.

(The actual date is controversial, but you get the idea.)

Anatomically modern humans (and there aren’t any others, because everyone else went extinct tens of thousands of years ago) developed in East Africa about 200,000 years ago.

They were, by the way, almost certainly black. A fact every racist wants to ignore: everybody has black ancestors! You can’t hate black people without hating your own forefathers.

More important for this discussion, however, is that every human being in North and South America is descended from somebody who came here from somewhere else. So-called “Native Americans” came here in the Pleistocene Epoch, most likely from Siberia. Most everybody else showed up after Christopher Columbus accidentally fell over North America.

That started the second big migration of people into the Americas: European colonization.

Mostly these later immigrants were imported to fill America’s chronic labor shortage.

America’s labor shortage has persisted since the Spanish conquistadores pretty much wiped out the indigenous people, leaving the Spaniards with hardly anybody to do the manual labor on which their economy depended. Waves of forced and unforced migration have never caught up. We still have a chronic labor shortage.

Immigrants generally don’t come to take jobs from “real” Americans. They come here because there are by-and-large more available jobs than workers.

Currently, natural reductions in birth rates among better educated, better housed, and generally wealthier Americans have left the United States (like most developed countries) with the problem that the working-age population is declining while the older, retired population expands. That means we haven’t got enough young squirts to support us old farts in retirement.

The only viable solution is to import more young squirts. That means welcoming working-age immigrants.

End of story.

Climate Models Bat Zero

Climate models vs. observations
Whups! Since the 1970s, climate models have overestimated global temperature rise by … a lot! Cato Institute

The articles discussed here reflect the print version of The Wall Street Journal, rather than the online version. Links to online versions are provided. The publication dates and some of the contents do not match.

10 October 2018 – Baseball is well known to be a game of statistics. Fans pay as much attention to statistical analysis of performance by both players and teams as they do to action on the field. They hope to use those statistics to indicate how teams and individual players are likely to perform in the future. It’s an intellectual exercise that is half the fun of following the sport.

While baseball statistics are quoted to three decimal places, or one part in a thousand, fans know to ignore the last decimal place, be skeptical of the second decimal place, and recognize that even the first decimal place has limited predictive power. It’s not that these statistics are inaccurate or in any sense meaningless, it’s that they describe a situation that seems predictable, yet is full of surprises.

With 18 players in a game at any given time, a complex set of rules, and at least three players and an umpire involved in the outcome of every pitch, a baseball game is a highly chaotic system. What makes it fun is seeing how this system evolves over time. Fans get involved by trying to predict what will happen next, then quickly seeing if their expectations materialize.

The essence of a chaotic system is conditional unpredictability. That is, the predictability of any result drops more-or-less drastically with time. For baseball, the probability of, say, a player maintaining their batting average is fairly high on a weekly basis, drops drastically on a month-to-month basis, and simply can’t be predicted from year to year.

Folks call that “streakiness,” and it’s one of the hallmarks of mathematical chaos.

Since the 1960s, mathematicians have recognized that weather is also chaotic. You can say with certainty what’s happening right here right now. If you make careful observations and take into account what’s happening at nearby locations, you can be fairly certain what’ll happen an hour from now. What will happen a week from now, however, is a crapshoot.
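A toy example makes the point concrete. The logistic map is the standard classroom example of chaos; it is emphatically not a weather model, but it shows exactly how a tiny measurement difference blows up over time:

```python
# Two "measurements" of the same starting state, differing by one part in
# a million, pushed through the logistic map x -> r*x*(1-x) with r = 4
# (the classic chaotic toy system, not a weather model).

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001   # today's state, measured two slightly different ways
gap_early = None
gap_late = 0.0
for n in range(1, 51):
    a, b = logistic(a), logistic(b)
    if n == 5:
        gap_early = abs(a - b)   # short-range "forecasts" still agree
    if n > 30:
        gap_late = max(gap_late, abs(a - b))   # long-range ones diverge wildly

print(gap_early)
print(gap_late)
```

The gap after five steps is still small; by thirty-plus steps the two trajectories bear no resemblance to each other. That growing divergence with time is the signature of chaos the rest of this post keeps returning to.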

This drives insurance companies crazy. They want to live in a deterministic world where they can predict their losses far into the future so that they can plan to have cash on hand (loss reserves) to cover them. That works for, say, life insurance. It works poorly for losses due to extreme-weather events.

That’s because weather is chaotic. Predicting catastrophic weather events next year is like predicting Miami Marlins pitcher Drew Steckenrider‘s earned-run-average for the 2019 season.

Laugh out loud.

Notes from 3 October

My BS detector went off big-time when I read an article in this morning’s Wall Street Journal entitled “A Hotter Planet Reprices Risk Around the World.” That headline is BS for many reasons.

Digging into the article turned up the assertion that insurance providers were using deterministic computer models to predict risk of losses due to future catastrophic weather events. The article didn’t say that explicitly. We have to understand a bit about computer modeling to see what’s behind the words they used. Since I’ve been doing that stuff since the 1970s, pulling aside the curtain is fairly easy.

I’ve also covered risk assessment in industrial settings for many years. It’s not done with deterministic models. It’s not even done with traditional mathematics!

The traditional mathematics you learned in grade school uses real numbers. That is, numbers with a definite value.

Like Pi.

Pi = 3.1415926…

We know what Pi is because it’s measurable. It’s the ratio of a circle’s circumference to its diameter.

Measure the circumference. Measure the diameter. Then divide one by the other.

The ancient Egyptians performed the exercise a kazillion times and noticed that, no matter what circle you used, no matter how big it was, whether you drew it on papyrus or scratched it on a rock or laid it out in a crop circle, you always came out with the same number. That number eventually picked up the name “Pi.”
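You can repeat the Egyptians’ experiment numerically: lay out a many-sided polygon inscribed in a circle, “measure” its perimeter by adding up the side lengths, and divide by the diameter. The radius values below are arbitrary, which is the whole point:

```python
import math

# "Measure" a circle the way the text describes: walk around an inscribed
# regular polygon, total up the side lengths (the circumference), and
# divide by the diameter. More sides = a better tape measure.

def measured_pi(n_sides=10000, radius=7.3):
    points = [
        (radius * math.cos(2 * math.pi * k / n_sides),
         radius * math.sin(2 * math.pi * k / n_sides))
        for k in range(n_sides)
    ]
    perimeter = sum(
        math.dist(points[k], points[(k + 1) % n_sides])
        for k in range(n_sides)
    )
    return perimeter / (2 * radius)

# Any radius gives (nearly) the same ratio -- that shared ratio is Pi.
print(measured_pi(radius=7.3))
print(measured_pi(radius=0.04))
```

Big circle or tiny circle, the ratio comes out the same, just as it did on papyrus, rock, and crop circle.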

Risk assessment is NOT done with traditional arithmetic using deterministic (real) numbers. It’s done using what’s called “fuzzy logic.”

Fuzzy logic is not like the fuzzy thinking used by newspaper reporters writing about climate change. The “fuzzy” part simply means it uses fuzzy categories like “small,” “medium” and “large” that don’t have precisely defined values.

While computer programs are perfectly capable of dealing with fuzzy logic, they won’t give you the kind of answers cost accountants are prepared to deal with. They won’t tell you that you need a risk-reserve allocation of $5,937,652.37. They’ll tell you something like “lots!”

You can’t take “lots” to the bank.
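For the curious, here is roughly what a fuzzy category looks like in code. This is a toy sketch with made-up breakpoints, not any real risk-assessment tool:

```python
# A toy sketch of fuzzy categories. Instead of one crisp dollar figure,
# a loss estimate gets degrees of membership (0..1) in overlapping,
# loosely defined categories. The breakpoints below are invented.

def membership(loss_millions):
    """Return degrees of 'small', 'medium', 'lots' for a loss figure ($M)."""
    x = loss_millions
    small = max(0.0, min(1.0, (2.0 - x) / 2.0))    # fades out by $2M
    lots = max(0.0, min(1.0, (x - 4.0) / 4.0))     # fades in from $4M
    medium = max(0.0, 1.0 - small - lots)          # whatever is left over
    return {"small": small, "medium": medium, "lots": lots}

print(membership(0.5))   # mostly "small"
print(membership(6.0))   # split between "medium" and "lots"
```

Notice that the answer for a $6M loss isn’t a number the cost accountants can book; it’s a blend of “medium” and “lots.” Useful for reasoning, useless for the bank.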

The next problem is imagining that global climate models could have any possible relationship to catastrophic weather events. Catastrophic weather events are, by definition, catastrophic. To analyze them you need the kind of mathematics called “catastrophe theory.”

Catastrophe theory is one of the underpinnings of chaos. In Steven Spielberg’s 1993 movie Jurassic Park, the character Ian Malcolm tries to illustrate catastrophe theory with the example of a drop of water rolling off the back of his hand. Whether it drips off to the right or left depends critically on how his hand is tipped. A small change creates an immense difference.

If a ball is balanced at the edge of a table, it can either stay there or drop off, and you can’t predict in advance which will happen.

That’s the thinking behind catastrophe theory.
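You can watch that knife-edge behavior numerically. The sketch below uses the double-well potential, the standard toy model from catastrophe theory; the tilt parameter plays the role of how the hand is tipped, and all the values are illustrative only:

```python
# A numeric sketch of the "drop on the back of the hand" idea. The
# double-well potential V(x) = x^4/4 - x^2/2 + tilt*x has two valleys
# (drip left, drip right). Which valley is lower flips as the tilt
# passes zero, so an arbitrarily small change in tilt produces a big
# change in where the drop ends up.

def resting_place(tilt, lo=-2.0, hi=2.0, steps=4001):
    """Grid-search the x that minimizes the tilted double-well potential."""
    best_x, best_v = None, float("inf")
    for i in range(steps):
        x = lo + (hi - lo) * i / (steps - 1)
        v = x**4 / 4 - x**2 / 2 + tilt * x
        if v < best_v:
            best_x, best_v = x, v
    return best_x

left = resting_place(tilt=+0.01)    # tiny tilt one way: ends up near x = -1
right = resting_place(tilt=-0.01)   # tiny tilt the other way: near x = +1
print(left, right)
```

A change of two hundredths in the tilt moves the outcome by a distance of about two: a small cause, an immense difference.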

The same analysis goes into predicting what will happen with a hurricane. As I recall, at the time Hurricane Florence (2018) made landfall, most models predicted it would move south along the Appalachian Ridge. Another group of models predicted it would stall out to the northwest.

When push came to shove, however, it moved northeast.

What actually happened depended critically on a large number of details that were too small to include in the models.

How much money was lost due to storm damage was a result of the result of unpredictable things. (That’s not an editing error. It was really the second order result of a result.) It is a fundamentally unpredictable thing. The best you can do is measure it after the fact.

That brings us to comparing climate-model predictions with observations. We’ve got enough data now to see how climate-model predictions compare with observations on a decades-long timescale. The graph above summarizes results compiled in 2015 by the Cato Institute.

Basically, it shows that, not only did the climate models overestimate the temperature rise from the late 1970s to 2015 by a factor of approximately three, but in the critical last decade, when the computer models predicted a rapid rise, the actual observations showed that it nearly stalled out.

Notice that the divergence between the models and the observations increased with time. As I’ve said, that’s the main hallmark of chaos.

It sure looks like the climate models are batting zero!

I’ve been watching these kinds of results show up since the 1980s. It’s why by the late 1990s I started discounting statements like the WSJ article’s: “A consensus of scientists puts blame substantially on emissions of greenhouse gasses from cars, farms and factories.”

I don’t know who those “scientists” might be, but it sounds like they’re assigning blame for an effect that isn’t observed. Real scientists wouldn’t do that. Only politicians would.

Clearly, something is going on, but what it is, what its extent is, and what is causing it is anything but clear.

In the data depicted above, the results from global climate modeling do not look at all like the behavior of a chaotic system. The data from observations, however, do look like what we typically get from a chaotic system. Stuff moves constantly. On short time scales it shows apparent trends. On longer time scales, however, the trends tend to evaporate.

No wonder observers like Steven Pacala, who is Frederick D. Petrie Professor in Ecology and Evolutionary Biology at Princeton University and a board member at Hamilton Insurance Group, Ltd., are led to say (as quoted in the article): “Climate change makes the historical record of extreme weather an unreliable indicator of current risk.”

When you’re dealing with a chaotic system, the longer the record you’re using, the less predictive power it has.

Duh!

Another point made in the WSJ article that I thought was hilarious involved prediction of hurricanes in the Persian Gulf.

According to the article, “Such cyclones … have never been observed in the Persian Gulf … with new conditions due to warming, some cyclones could enter the Gulf in the future and also form in the Gulf itself.”

This sounds a lot like a tongue-in-cheek comment I once heard from astronomer Carl Sagan about predictions of life on Venus. He pointed out that when astronomers observe Venus, they generally just see a featureless disk. Science fiction writers had developed a chain of inferences that led them from that observation of a featureless disk to imagining total cloud cover, then postulating underlying swamps teeming with life, and culminating with imagining the existence of Venusian dinosaurs.

Observation: “We can see nothing.”

Conclusion: “There are dinosaurs.”

Sagan was pointing out that, though it may make good science fiction, that is bad science.

The WSJ reporters, Bradley Hope and Nicole Friedman, went from “No hurricanes ever seen in the Persian Gulf” to “Future hurricanes in the Persian Gulf” by the same sort of logic.

The kind of specious misinformation represented by the WSJ article confuses folks who have genuine concerns about the environment. Before politicians like Al Gore hijacked the environmental narrative, deflecting it toward climate change, folks paid much more attention to the real environmental issue of pollution.

Insurance losses from extreme weather events
Actual insurance losses due to catastrophic weather events show a disturbing trend.

The one bit of information in the WSJ article that appears prima facie disturbing is contained in the graph at right.

The graph shows actual insurance losses due to catastrophic weather events increasing rapidly over time. The article draws the inference that this trend is caused by human-induced climate change.

That’s quite a stretch, considering that there are obvious alternative explanations for this trend. The most likely alternative is the possibility that folks have been building more stuff in hurricane-prone areas. With more expensive stuff there to get damaged, insurance losses will rise.
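That alternative is easy to demonstrate with a toy model: hold the weather perfectly constant and let only the amount of insured property grow. All the numbers below are invented for illustration:

```python
# A toy sketch of the alternative explanation: the weather hazard is held
# absolutely fixed (an identical storm every third year), while the value
# of insured property in harm's way grows a few percent per year.
# Every number here is made up.

def simulated_losses(years=40, exposure_growth=0.05, damage_fraction=0.1):
    losses = []
    exposure = 100.0   # insured property value, arbitrary units
    for year in range(years):
        # Same-strength storm on a fixed schedule: zero climate trend.
        loss = exposure * damage_fraction if year % 3 == 0 else 0.0
        losses.append(loss)
        exposure *= 1.0 + exposure_growth   # more stuff built each year
    return losses

losses = simulated_losses()
early = sum(losses[:20])   # total losses, first two decades
late = sum(losses[20:])    # total losses, last two decades
print(early, late)
```

The later decades cost far more than the earlier ones even though the storms never changed, purely because there was more expensive stuff standing in their way.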

Again: duh!

Invoking Occam’s Razor (prefer the simplest of competing explanations), we tend to favor the latter explanation.

In summary, I conclude that the 3 October article is poor reporting that draws conclusions that are likely false.

Notes from 4 October

Don’t try to find the 3 October WSJ article online. I spent a couple of hours this morning searching for it, and came up empty. The closest I was able to get was a draft version that I found by searching on Bradley Hope’s name. It did not appear on WSJ‘s public site.

Apparently, WSJ‘s editors weren’t any more impressed with the article than I was.

The 4 October issue presents a corroboration of my alternative explanation of the trend in insurance-loss data: it’s due to a build up of expensive real estate in areas prone to catastrophic weather events.

In a half-page expose entitled “Hurricane Costs Grow as Population Shifts,” Kara Dapena reports that, “From 1980 to 2017, counties along the U.S. shoreline that endured hurricane-strength winds from Florence in September experienced a surge in population.”

In the end, this blog posting serves as an illustration of four points I tried to make last month. Specifically, on 19 September I published a post entitled: “Noble Whitefoot or Lying Blackfoot?” in which I laid out four principles to use when trying to determine if the news you’re reading is fake. I’ll list them in reverse of the order I used in last month’s posting, partly to illustrate that there is no set order for them:

  • Nobody gets it completely right – In the 3 October WSJ story, the reporters got snookered by the climate-change political lobby. That article, taken at face value, has to be stamped with the label “fake news.”
  • Does it make sense to you? – The 3 October fake news article set off my BS detector by making a number of statements that did not make sense to me.
  • Comparison shopping for ideas – Assertions in the suspect article contradicted numerous other sources.
  • Consider your source – The article’s source (The Wall Street Journal) is one that I normally trust. Otherwise, I likely never would have seen it, since I don’t bother listening to anyone I catch in a lie. My faith in the publication was restored when the next day they featured an article that corrected the misinformation.

Doing Business with Bad Guys

Threatened with a gun
Authoritarians make dangerous business partners. rubikphoto/Shutterstock

3 October 2018 – Parents generally try to drum into their children’s heads a simple maxim: “People judge you by the company you keep.”

Children (and we’re all children, no matter how mature and sophisticated we pretend to be) just as generally find it hard to follow that maxim. We all screw it up once in a while by succumbing to the temptation of some perceived advantage to be had by dealing with some unsavory character.

Large corporations and national governments are at least as likely to succumb to the prospect of making a fast buck or signing some treaty with peers who don’t entertain the same values we have (or at least pretend to have). Governments, especially, have a tough time in dealing with what I’ll call “Bad Guys.”

Let’s face it, better than half the nations of the world are run by people we wouldn’t want in our living rooms!

I’m specifically thinking about totalitarian regimes like the People’s Republic of China (PRC).

Way back in the last century, Mao Tse-tung (or Mao Zedong, depending on how you choose to mis-spell the anglicization of his name) clearly placed China on the “Anti-American” team, espousing a virulent form of Marxism and descending into the totalitarian authoritarianism Marxist regimes are so prone to. This situation continued from the PRC’s founding in 1949 through 1972, when notoriously authoritarian-friendly U.S. President Richard Nixon toured China in an effort to start a trade relationship between the two countries.

Greedy U.S. corporations quickly started falling all over themselves in an effort to gain access to China’s enormous potential market. Mesmerized by the statistics of more than a billion people spread out over China’s enormous land mass, they ignored the fact that those people were struggling in a subsistence-agriculture economy that had collapsed under decades of mismanagement by Mao’s authoritarian regime.

What they hoped those generally dirt-poor peasants were going to buy from them I never could figure out.

Unfortunately, years later I found myself embedded in the management of one of those starry-eyed multinational corporations that was hoping to take advantage of the developing Chinese electronics industry. Fresh off our success launching Test & Measurement Europe, they wanted to launch a new publication called Test & Measurement China. Recalling the then-recent calamity ending the Tiananmen Square protests of 1989, I pulled a Nancy Reagan and just said “No.”

I pointed out that the PRC was still run by a totalitarian, authoritarian regime, and that you just couldn’t trust those guys. You never knew when they were going to decide to sacrifice you on the altar of internal politics.

Today, American corporations are seeing the mistakes they made in pursuit of Chinese business, which, like Robert Southey’s chickens, are coming home to roost. In 2015, Chinese Premier Li Keqiang announced the “Made in China 2025” plan to make China the world’s technology leader. It quickly became apparent that Mao’s current successor, Xi Jinping, intends to achieve his goals by building on technology pilfered from western companies who’d naively partnered with Chinese firms.

Now, their only protector is another authoritarian-friendly president, Donald Trump. Remember, it was Trump who, following his ill-advised summit with North Korean strongman Kim Jong Un, got caught on video enviously saying: “He speaks, and his people sit up at attention. I want my people to do the same.”

So, now these corporations have to look to an American would-be dictator for protection from an entrenched Chinese dictator. No wonder they find themselves screwed, blued, and tattooed!

Governments are not immune to the PRC’s siren song, either. Pundits are pointing out that the PRC’s vaunted “One Belt, One Road” initiative is likely an example of “debt-trap diplomacy.”

Debt-trap diplomacy is a strategy similar to organized crime’s loan-shark operations. An unscrupulous cash-rich organization, the loan shark, offers funds to a cash-strapped individual, such as an ambitious entrepreneur, in a deal that seems too good to be true. It’s NOT true because the deal comes in the form of a loan at terms that nearly guarantee that the debtor will default. The shark then offers to write off the debt in exchange for the debtor’s participation in some unsavory scheme, such as money laundering.

In the debt-trap diplomacy version, the PRC stands in the place of the loan shark while some emerging-economy nation, such as, say, Malaysia, accepts the unsupportable debt. In the PRC/Malaysia case, the unsavory scheme is helping support China’s imperial ambitions in the western Pacific.

Earlier this month, Malaysia wisely backed out of the deal.

It’s not just the post-Maoist PRC that makes a dangerous place for western corporations to do business. Authoritarians all over the world treat people like Heart’s “Barracuda.” They suck you in with mesmerizing bright and shiny promises, then leave you twisting in the wind.

Yes, I’ve piled up a whole mess of mixed metaphors here, but I’m trying to drive home a point!

Another example of the traps business people can get into by trying to deal with authoritarians is afforded by Danske Bank’s Estonia branch and their dealings with Vladimir Putin‘s Russian kleptocracy. Danske Bank is a Danish financial institution with a pan-European footprint and global ambitions. Recent release of a Danske Bank internal report produced by the Danish law firm Bruun & Hjejle says that the Estonia branch engaged in “dodgy dealings” with numerous corrupt Russian officials. Basically, the bank set up a scheme to launder money stolen from Russian tax receipts by organized criminals.

The scandal broke in Russia in June of 2007 when dozens of police officers raided the Moscow offices of Hermitage Global, an activist fund focused on global emerging markets. A coverup by Kremlin authorities resulted in the death (while in a Russian prison) of Sergei Leonidovich Magnitsky, a Russian tax accountant who specialized in anti-corruption activities.

Magnitsky’s case became an international cause célèbre. The U.S. Congress and President Barack Obama enacted the Magnitsky Act at the end of 2012, barring, among others, those Russian officials believed to be involved in Magnitsky’s death from entering the United States or using its banking system.

Apparently, the purpose of the infamous Trump Tower meeting of June 9, 2016 was, on the Russian side, an effort to secure repeal of the Magnitsky Act should then-candidate Trump win the election. The Russians dangled release of stolen emails incriminating Trump-rival Hillary Clinton as bait. This activity started the whole Mueller Investigation, which has so far resulted in dozens of indictments for federal crimes, and at least eight guilty pleas or convictions.

The latest business strung up in this mega-scandal was the whole corrupt banking system of Cyprus, whose laundering of Russian oligarchs’ money amounted to over $20B.

The moral of this story is: Don’t do business with bad guys, no matter how good they make the deal look.

News vs. Opinion

News reporting
Journalists report reopening of Lindt cafe in Sydney after ISIS siege, 20 March 2015. M. W. Hunt / Shutterstock.com

26 September 2018 – This is NOT a news story!

Last week I spent a lot of space yammering on about how to tell fake news from the real stuff. I made a big point about how real news organizations don’t allow editorializing in news stories. I included an example of a New York Times op-ed (opinion editorial) that was decidedly not a news story.

On the other hand, last night I growled at my TV screen when I heard a CNN commentator say that she’d been taught that journalists must have opinions and should voice them. I growled because her statement could be construed to mean something anathema to journalistic ethics. I’m afraid way too many TV journalists may be confused about this issue. Certainly too many news consumers are confused!

It’s easy to get confused. For example, I got myself in trouble some years ago, in a discussion over dinner and drinks with Andy Wilson, Founding Editor at Vision Systems Design, about a related issue that is less important to political-news reporting but is crucial for business-to-business (B2B) journalism: the role of advertising in editorial considerations.

Andy insisted upon strictly ignoring advertiser needs when making editorial decisions. I advocated a more nuanced approach. I said that ignoring advertiser needs and desires would lead to cutting oneself off from our most important source of technology-trends information.

I’m not going to delve too deeply into that subject because it has only peripheral significance for this blog posting. The overlap with news reporting is that both activities involve dealing with biased sources.

My disagreement with Andy arose from my veteran-project-manager’s sensitivity to all stakeholders in any activity. In the B2B case, editors have several ways of enforcing journalistic discipline without biting the hand that feeds us. I was especially sensitive to the issue because I specialized in case studies, which necessarily discuss technology embodied in commercial products. Basically, I insisted on limiting actual product mentions in each story to one, and on suppressing any claims that the mentioned product was the only possible way to access the embodied technology. In essence, I policed the stories I wrote or edited to avoid the “buy our stuff” messages that advertisers love and that send chills down Andy’s (and my) spine.

In the news-media realm, journalists need to police their writing for “buy our ideas” messages in news stories. “Just the facts, ma’am” needs to be the goal for news. Expressing editorial opinions in news stories is dangerous. That’s when the lines between fake news and real news get blurry.

Those lines need to be sharp to help news consumers judge the … information … they’re being fed.

Perhaps “information” isn’t exactly the right word.

It might be best to start with the distinction between “information” and “data.”

The distinction is not always clear in a general setting. It is, however, stark in the world of science, which is where I originally came from.

What comes into our brains from the outside world is “data.” It’s facts and figures. Contrary to what many people imagine, “data” is devoid of meaning. Scientists often refer to it as “raw data” to emphasize this characteristic.

There is nothing actionable in raw data. The observation that “the sky is blue” can’t even tell you if the sky was blue yesterday, or how likely it is to be blue tomorrow. It just says: “the sky is blue.” End of story.

Turning “data” into “information” involves combining it with other, related data, and making inferences about or deductions from patterns perceivable in the resulting superset. The process is called “interpretation,” and it’s the second step in turning data into knowledge. It’s what our brains are good for.

So, does this mean that news reporters are to be empty-headed recorders of raw facts?

Not by a long shot!

The CNN commentator’s point was that reporters are far from empty-headed. While learning their trade, they develop ways to, for example, tell when some data source is lying to them.

In the hard sciences it’s called “instrumental error,” and experimental scientists (as I was) spend careers detecting and eliminating it.

Similarly, what a reporter does when faced with a lying source is the hard part of news reporting. Do you say, “This source is unreliable” and suppress what they told you? Do you report what they said along with a comment that they’re a lying so-and-so who shouldn’t be believed? Certainly, you try to find another source who tells you something you can rely on. But, what if the second source is lying, too?

???

That’s why we news consumers have to rely on professionals who actually care about the truth for our news.

On the other hand, nobody goes to news outlets for just raw data. We want something we can use. We want something actionable.

Most of us have neither the time nor the tools to interpret all the drivel we’re faced with. Even if we happen to be able to work it out for ourselves, we could always use some help, even if just to corroborate our own conclusions.

Who better to help us interpret the data (news) and glean actionable opinions from it than those journalists who’ve been spending their careers listening to the crap newsmakers want to feed us?

That’s where commentators come in. The difference between an editor and a reporter is that the editor has enough background and experience to interpret the raw data and turn it into actionable information.

That is: opinion you can use to make a decision. Like, maybe, who to vote for.

People with the chops to interpret news and make comments about it are called “commentators.”

When I was looking to hire what we used to call a “Technical Editor” for Test & Measurement World, I specifically looked for someone with a technical degree and experience developing the technology I wanted that person to cover. So, for example, when I was looking for someone to cover advances in testing of electronics for the telecommunications industry, I went looking for a telecommunications engineer. I figured that if I found one who could also tell a story, I could train them to be a journalist.

That brings us back to the CNN commentator who thought she should have opinions.

The relevant word here is “commentator.”

She’s not just a reporter. To be a commentator, she supposedly has access to the best available “data” and enough background to skillfully interpret it. So, what she was saying is true for a commentator rather than just a reporter.

Howsomever, ya can’t just give a conclusion without showing how the facts lead to it.

Let’s look at how I assemble a post for this blog as an example of what you should look for in a reliable op-ed piece.

Obviously, I look for a subject about which I feel I have something worthwhile to say. Specifically, I look for what I call the “take-home lesson” on which I base every piece of blather I write.

The “take-home lesson” is the basic point I want my reader to remember. Come Thursday next you won’t remember every word or even every point I make in this column. You’re (hopefully) going to remember some concept from it that you should be able to summarize in one or two sentences. It may be the “call to action” my eighth-grade English teacher, Miss Langley, told me to look for in every well-written editorial. Or, it could be just some idea, such as “Racism sucks,” that I want my reader to believe.

Whatever it is, it’s what I want the reader to “take home” from my writing. All the rest is just stuff I use to convince the reader to buy into the “take-home lesson.”

Usually, I start off by providing the reader with some context in which to fit what I have to say. It’s there so that the reader and I start off on the same page. This is important to help the reader fit what I have to say into the knowledge pattern of their own mind. (I hope that makes sense!)

After setting the context, I provide the facts that I have available from which to draw my conclusion. The conclusion will be, of course, the “take-home lesson.”

I can’t be sure that my readers will have the facts already, so I provide links to what I consider reliable outside sources. Sometimes I provide primary sources, but more often they’re secondary sources.

Primary sources for, say, a biographical sketch of Thomas Edison would be diary pages or financial records, which few readers would have immediate access to.

A secondary source might be a well-researched entry on, say, the Biography.com website, which the reader can easily get access to and which can, in turn, provide links to useful primary sources.

In any case, I try to provide sources for each piece of data on which I base my conclusion.

Then, I’ll outline the logical path that leads from the data pattern to my conclusion. While the reader should have no need to dispute the “data,” he or she should look very carefully to see whether my logic makes sense. Does it lead inevitably from the data to my conclusion?

Finally, I’ll clearly state the conclusion.

In general, every consumer of ideas should look for this same pattern in every information source they use.

Legal vs. Scientific Thinking

Scientific Method Diagram
The scientific method assumes uncertainty.

29 August 2018 – With so much controversy in the news recently surrounding POTUS’ exposure in the Mueller investigation into Russian meddling in the 2016 Presidential election, I’ve been thinking a whole lot about how lawyers look at evidence versus how scientists look at evidence. While I’ve only limited background with legal matters (having an MBA’s exposure to business law), I’ve spent a career teaching and using the scientific method.

While high-school curricula like to teach the scientific method as a simple step-by-step program, the reality is very much more complicated. The version they teach you in high school consists of five to seven steps, which pretty much look like this:

  1. Observation
  2. Hypothesis
  3. Prediction
  4. Experimentation
  5. Analysis
  6. Repeat

I’ll start by explaining how this program is supposed to work, then look at why it doesn’t actually work that way. It has to do with why the concept is so fuzzy that it’s not really clear how many steps should be included.

It all starts with observation of things that go on in the World. Newton’s law of universal gravitation started with the observation that when left on their own, most things fall down. That’s Newton’s falling-apple observation. Generally, the observation is so common that it takes a genius to ask the question “why.”

Once you ask the question “why,” the next thing that happens is that your so-called genius comes up with some cockamamie explanation, called an “hypothesis.” In fact, there are usually several explanations that vary from the erudite to the thoroughly bizarre.

Through bitter experience, scientists have come to realize that no hypothesis is too whacko to be considered. It’s often the most outlandish hypothesis that proves to be right!

For example, ancients tended to think in terms of objects somehow “wanting” to go downward as the least weird of explanations for gravity. It came from animism, which is the not-too-bizarre (to the ancients) idea that natural objects each have their own spirits, which animate their behavior. Rocks are hard because their spirits resist being broken. They fall down when released because their spirits somehow like down better than up.

What we now consider the most-correctest explanation, that we live in a four-dimensional space-time continuum that is warped by concentrations of matter-energy so that objects follow paths that tend to converge with each other, wouldn’t have made any sense at all to the ancients. It, in fact, doesn’t make a whole lot of sense to anybody who hasn’t spent years immersing themselves in the subject of Einstein’s General Theory of Relativity.

Scientists then take all the hypotheses, and use them to make predictions as to what happens next if you set up certain relevant situations, called “experiments.” An hypothesis works if its predictions match up with what Mommy Nature produces for results of the experiments.

Scientists then do tons of experiments testing different predictions of the hypotheses, then compare (the analysis step) the results and eventually develop a warm, fuzzy feeling that one hypothesis does a better job of predicting what Mommy Nature does than do the others.

It’s important to remember that no scientist worth his or her salt believes that the currently accepted hypothesis is actually in any absolute sense “correct.” It’s just the best explanation among the ones we have on hand now.

That’s why the last step is to repeat the entire process ad nauseam.

While this long, drawn out process does manage to cover the main features of the scientific method, it fails in one important respect: it doesn’t boil the method down to its essentials.

Not boiling it down to essentials forces one to deal with all kinds of exceptions created by the extraneous, non-essential bits. There end up being more exceptions than rules. For example, science pedagogy website Science Buddies ends up throwing its hands in the air by saying: “In fact, there are probably as many versions of the scientific method as there are scientists!”

The much simpler explanation I’ve used for years to teach college students about the scientific method follows the diagram above. The pattern is quite simple, with only four components. It starts by setting up a set of initial conditions and follows through to the results.

There are two ways to get from the initial conditions to the results. The first is to just set the whole thing up, and let Mommy Nature do her thing. The second is to think through your hypothesis to predict what it says Mommy Nature will come up with. If they match, you count your hypothesis as a success. If not, it’s wrong.

You do that a bazillion times in a bazillion different ways, and a really successful hypothesis (like General Relativity) will turn out right pretty much all of the time.
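That predict-and-compare loop is simple enough to sketch in a few lines of Python. Everything specific here is illustrative and mine, not from the post: the “law of nature” is free fall, the instrumental noise level and the match tolerance are made up, and Mommy Nature is simulated rather than consulted.

```python
import random

def nature(t):
    """Stand-in for Mommy Nature: a 'measured' fall distance (m)
    after t seconds, with a little simulated instrumental noise."""
    g = 9.81
    return 0.5 * g * t**2 + random.gauss(0, 0.05)

def prediction(t):
    """What the hypothesis (d = g*t^2/2) says should happen."""
    return 0.5 * 9.81 * t**2

def run_trials(n=1000, tolerance=0.2):
    """Set up the same initial conditions both ways, compare the two
    results, and score the hypothesis by its success rate."""
    hits = sum(
        abs(nature(t) - prediction(t)) <= tolerance
        for t in (random.uniform(0.5, 3.0) for _ in range(n))
    )
    return hits / n

print(run_trials())  # a good hypothesis scores near 1.0
```

A wrong hypothesis (say, d = g·t) fails the comparison most of the time, which is exactly the “So, What?” outcome discussed later: you shrug and move on.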

Generally, if you’ve got a really good hypothesis but your experiment doesn’t work out right, you’ve screwed up somewhere. That means what you actually set up as the initial conditions wasn’t what you thought you were setting up. So, Mommy Nature (who’s always right) doesn’t give you the result you thought you should get.

For example, I was once asked to mentor another faculty member who was having trouble building an experiment to demonstrate what he thought was an exception to Newton’s Second Law of Motion. It was based on a classic experiment called “Atwood’s Machine.”

I immediately recognized that he’d made a common mistake novice physicists often make. I tried to explain it to him, but he refused to believe me. Then, I left the room.

I walked away because, despite his conviction, Mommy Nature wasn’t going to do what he expected her to. He kept believing that there was something wrong with his experimental apparatus. It was his hypothesis, instead.
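The post doesn’t say which mistake that faculty member actually made, so here’s one classic novice slip with Atwood’s machine, shown as a quick Python sketch of my own: forgetting that the net force (m1 − m2)g has to accelerate both masses, not just the heavier one. (This assumes the textbook idealization of a massless string and frictionless, massless pulley.)

```python
def atwood_acceleration(m1, m2, g=9.81):
    """Textbook Atwood's machine: the net force (m1 - m2)*g
    accelerates the TOTAL mass (m1 + m2)."""
    return (m1 - m2) * g / (m1 + m2)

def novice_acceleration(m1, m2, g=9.81):
    """A common novice slip: dividing the net force by only the
    heavier mass, which over-predicts the acceleration."""
    return (m1 - m2) * g / m1

a_right = atwood_acceleration(2.0, 1.0)  # ~3.27 m/s^2
a_wrong = novice_acceleration(2.0, 1.0)  # ~4.91 m/s^2
print(a_right, a_wrong)
```

Set up the apparatus believing the second formula and Mommy Nature will stubbornly deliver the first number, no matter how carefully you rebuild the equipment.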

Anyway, the way this all works is that you look for patterns in what Mommy Nature does. Your hypothesis is just a description of some part of Mommy Nature’s pattern. Scientific geniuses are folks who are really, really good at recognizing the patterns Mommy Nature uses.

That is NOT what our legal system does.

Not by a LONG shot!

The Legal Method

While both the scientific and legal thinking methods start from some initial state and move to some final conclusion, the processes for getting from A to B differ in important ways.

The Legal Method
In legal thinking, a chain of evidence is used to get from criminal charges to a final verdict.

First, while the hypothesis in the scientific method is assumed to be provisional, the legal system is based on coming to a definite explanation of events that is in some sense “correct.” The results of scientific inquiry, on the other hand, are accepted as “probably right, maybe, for now.”

That ain’t good enough in legal matters. The verdict of a criminal trial, for example, has to be true “beyond a reasonable doubt.”

Second, in legal matters the path from the initial conditions (the “charges”) to the results (the “verdict”) is linear. It has one path: through a chain of evidence. There may be multiple bits of evidence, but you can follow them through from a definite start to a definite end.

The third way the legal method differs from the scientific method is what I call the “So, What?” factor.

If your scientific hypothesis is wrong, meaning it gives wrong results, “So, What?”

Most scientific hypotheses are wrong! They’re supposed to be wrong most of the time.

Finding that some hypothesis is wrong is no big deal. It just means you don’t have to bother with that dumbass idea, anymore. Alien abductions get relegated to entertainment for the entertainment starved, and real scientists can go on to think about something else, like the kinds of conditions leading to development of living organisms and why we don’t see alien visitors walking down Fifth Avenue.

(Leading hypothesis: the distances from there to here are so vast that anybody smart enough to make the trip has better things to do.)

If, on the other hand, your legal verdict is wrong, really bad things happen. Maybe somebody’s life is ruined. Maybe even somebody dies. The penalty for failure in the legal system is severe!

So, the term “air tight” shows up a lot in talking about legal evidence. In science not so much.

For scientists “Gee, it looks like . . . ” is usually as good as it gets.

For judges, they need a whole lot more.

So, as a scientist I can say: “POTUS looks like a career criminal.”

That, however, won’t do the job for, say, Robert Mueller.

In Real Life

Very few of us are either scientists or judges. We live in the real world and have to make real-world decisions. So, which sort of method for coming to conclusions should we use?

In 1983, film director Paul Brickman spent an estimated 6.2 million dollars and 99 minutes’ worth of celluloid (some 142,560 individual images at the standard frame rate of 24 fps) telling us that successful entrepreneurs must be prepared to make decisions based on insufficient information. That means with no guarantee of being right. No guarantee of success.

He, by the way, was right. His movie, Risky Business, grossed $63 million at the box office in the U.S. alone. That’s better than ten times the production budget, a return on the order of 1,000%!
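Both of the figures quoted above are easy arithmetic to check:

```python
# Frame count: 99 minutes of film at the standard 24 frames per second.
minutes, fps = 99, 24
frames = minutes * 60 * fps
print(frames)  # 142560 individual images

# Box-office multiple: $63M gross on an estimated $6.2M budget.
budget, gross = 6.2e6, 63e6
multiple = gross / budget
print(round(multiple, 1))  # ~10.2x the production budget
```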

There’s an old saying: “A conclusion is that point at which you decide to stop thinking about it.”

It sounds a bit glib, but it actually isn’t. Every experienced businessman, for example, knows that you never have enough information. You are generally forced to make a decision based on incomplete information.

In the real world, making a wrong decision is usually better than making no decision at all. What that means is that, in the real world, if you make a wrong decision you usually get to say “Oops!” and walk it back. If you decide to make no decision, that’s a decision that you can’t walk back.

Oops! I have to walk that statement back.

There are situations where the penalty for the failure of making a wrong decision is severe. For example, we had a cat once, who took exception to a number of changes in our home life. We’d moved. We’d gotten a new dog. We’d adopted another cat. He didn’t like any of that.

I could see from his body language that he was developing a bad attitude. Whereas he had previously been patient when things didn’t go exactly his way, he’d started acting more aggressive. One night, we were startled to hear a screeching of brakes in the road passing our front door. We went out to find that Nick had run across the road and been hit by a car.

Splat!

Considering the pattern of events, I concluded that Nick had died of PCD. That is, “Poor Cat Decision.” He’d been overly aggressive when deciding whether or not to cross the road.

Making no decision (hesitating before running across the road) would probably have been better than the decision he made to turn on his jets.

That’s the kind of decision where getting it wrong is worse than holding back.

Usually, however, no decision is the worst decision. As the Zen haiku says:

In walking, just walk.
In sitting, just sit.
Above all, don’t wobble.

That argues for using the scientist’s method: gather what facts you have, then make a decision. If your hypothesis turns out to be wrong, “So, What?”

Do You Really Want a Robotic Car?

Robot Driver
Sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car.” Mopic/Shutterstock

15 August 2018 – Many times in my blogging career I’ve gone on a rant about the three Ds of robotics. These are “dull, dirty, and dangerous.” They relate to the question, which is not asked often enough, “Do you want to build an automated system to do that task?”

The reason the question is not asked enough is that it needs to be asked whenever anyone looks into designing a system (whether manual or automated) to do anything. The possibility that anyone ever sets up a system to do anything without first asking that question means that it’s not asked enough.

When asking the question, getting a hit on any one of the three Ds tells you to at least think about automating the task. Getting a hit on two of them should make you think that your task is very likely to be ripe for automation. If you hit on all three, it’s a slam dunk!
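That rule of thumb is simple enough to write down as a toy scoring function. The verdict wording is mine, but the rubric (one hit: think about it; two: very likely ripe; three: slam dunk) is straight from the paragraph above:

```python
def automation_signal(dull, dirty, dangerous):
    """Score a task against the three Ds of robotics.
    A toy rubric matching the rule of thumb, not a real methodology."""
    hits = sum([dull, dirty, dangerous])
    return {
        0: "probably leave it to humans",
        1: "at least think about automating it",
        2: "very likely ripe for automation",
        3: "slam dunk",
    }[hits]

# Driving hits "dull" and "dangerous" but not "dirty":
print(automation_signal(dull=True, dirty=False, dangerous=True))
```

Running it on driving gives the two-D verdict the next paragraph reaches, which is the whole point: the three Ds say drive-automation is worth pursuing, before we even get to the question of whether humans want to give it up.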

When we look into developing automated vehicles (AVs), we get hits on “dull” and “dangerous.”

Driving can be excruciatingly dull, especially if you’re going any significant distance. That’s why people fall asleep at the wheel. I daresay everyone has fallen asleep at the wheel at least once, although we almost always wake up before hitting the bridge abutment. It’s also why so many people drive around with cellphones pressed to their ears. The temptation to multitask while driving is almost irresistible.

Driving is also brutally dangerous. Tens of thousands of people die every year in car crashes. It’s pretty safe to say that nearly all those fatalities involve cars driven by humans. The number of people who have been killed in accidents involving driverless cars you can (as of this writing) count on one hand.

I’m not prepared to go into statistics comparing safety of automated vehicles vs. manually driven ones. Suffice it to say that eventually we can expect AVs to be much safer than manually driven vehicles. We’ll keep developing the technology until they are. It’s not a matter of if, but when.

This is the analysis most observers (if they analyze it at all) come up with to prove that vehicle driving should be automated.

Yet, opinions that AVs are acceptable, let alone inevitable, are far from universal. In a survey of 3,000 people in the U.S. and Canada, Ipsos Strategy 3 found that sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car,” and a whopping 39% of Americans (and 30% of Canadians) would rarely or never let driverless technology do the parking!

Why would so many people be unwilling to give up their driving privileges? I submit that it has to do with a parallel consideration that is every bit as important as the three Ds when deciding whether to automate a task:

Don’t Automate Something Humans Like to Do!

Predatory animals, especially humans, like to drive. It’s fun. Just ask any dog who has a chance to go for a ride in a car! Almost universally they’ll jump into the front seat as soon as you open the door.

In fact, if you leave them (dogs) in the car unattended for a few minutes, they’ll be behind the wheel when you come back!

Humans are the same way. Leave ’em unattended in a car for any length of time, and they’ll think of some excuse to get behind the wheel.

The Ipsos survey found that some 61% of both Americans and Canadians identify themselves as “car people.” When asked “What would have to change about transportation in your area for you to consider not owning a car at all, or owning fewer cars?” 39% of Americans and 38% of Canadians responded “There is nothing that would make me consider owning fewer cars!”

That’s pretty definitive!

Their excuse for getting behind the wheel is largely an economic one: Seventy-eight percent of Americans claim they “definitely need to have a vehicle to get to work.” In more urbanized Canada (you did know that Canadians cluster more into cities, didn’t you?) that drops to 53%.

Whether the claim that they “have” to have a car to get to work is based on historical precedent, urban planning, wishful thinking, or flat-out just what they want to believe, it’s a good, cogent reason why folks, especially Americans, hang onto their steering wheels for dear life.

The moral of this story is that driving is something humans like to do, and getting them to give it up will be a serious uphill battle for anyone wanting to promote driverless cars.

Yet, development of AV technology is going full steam ahead.

Is that just another example of Dr. John Bridges’ version of Solomon’s proverb, “A fool and his money are soon parted?”

Possibly, but I think not.

Certainly, the idea of spending tons of money to have bragging rights for the latest technology, and to take selfies showing you reading a newspaper while your car drives itself through traffic has some appeal. I submit, however, that the appeal is short lived.

For one thing, reading in a moving vehicle is the fastest route I know of to motion sickness. It’s right up there with cueing up the latest Disney cartoon feature for your kids on the overhead DVD player in your SUV, then cleaning up their vomit.

I, for one, don’t want to go there!

Sounds like another example of “More money than brains.”

There are, however, a wide range of applications where driving a vehicle turns out to be no fun at all. For example, the first use of drone aircraft was as targets for anti-aircraft gunnery practice. They just couldn’t find enough pilots who wanted to be sitting ducks to be blown out of the sky! Go figure.

Most commercial driving jobs could also stand to be automated. For example, almost nobody actually steers ships at sea anymore. They generally stand around watching an autopilot follow a pre-programmed course. Why? As a veteran boat pilot, I can tell you that the captain has a lot more fun than the helmsman. Piloting a ship from, say, Calcutta to San Francisco has got to be mind-numbingly dull. There’s nothing going on out there on the ocean.

Boat passengers generally spend most of their time staring into the wake, but the helmsman doesn’t get to look at the wake. He (or she) has to spend their time scanning a featureless horizontal line separating a light-blue dome (the sky) from a dark-blue plane (the sea) in the vain hope that something interesting will pop up and relieve the tedium.

Hence the autopilot.

Flying a commercial airliner is similar. It has been described (as have so many other tasks) as “hours of boredom punctuated by moments of sheer terror!” While such activity is very Zen (I’m convinced that humans’ ability to meditate was developed by cave guys having to sit for hours, or even days, watching game trails for their next meal to wander along), it’s not top-of-the-line fun.

So, sometimes driving is fun, and sometimes it’s not. We need AV technology to cover those times when it’s not.

The Future Role of AI in Fact Checking

Reality Check
Advanced computing may someday help sort fact from fiction. Gustavo Frazao/Shutterstock

8 August 2018 – This guest post is contributed under the auspices of Trive, a global, decentralized truth-discovery engine. Trive seeks to provide news aggregators and outlets a superior means of fact checking and news-story verification.

by Barry Cousins, Guest Blogger, Info-Tech Research Group

In a recent project, we looked at blockchain startup Trive and their pursuit of a fact-checking truth database. It seems obvious and likely that competition will spring up. After all, who wouldn’t want to guarantee the preservation of facts? Or, perhaps it’s more that lots of people would like to control what is perceived as truth.

With the recent coming out party of IBM’s Debater, this next step in our journey brings Artificial Intelligence into the conversation … quite literally.

As an analyst, I’d like to have a universal fact checker. Something like the carbon monoxide detectors on each level of my home. Something that would sound an alarm when there’s danger of intellectual asphyxiation from choking on the baloney put forward by certain sales people, news organizations, governments, and educators, for example.

For most of my life, we would simply have turned to academic literature for credible truth. There is now enough legitimate doubt to make us seek out a new model or, at a minimum, to augment that academic model.

I don’t want to be misunderstood: I’m not suggesting that all news and education is phony baloney. And, I’m not suggesting that the people speaking untruths are always doing so intentionally.

The fact is, we don’t have anything close to a recognizable database of facts from which we can base such analysis. For most of us, this was supposed to be the school system, but sadly, that has increasingly become politicized.

But even if we had the universal truth database, could we actually use it? For instance, how would we tap into the right facts at the right time? The relevant facts?

If I’m looking into the sinking of the Titanic, is it relevant to study the facts behind the ship’s manifest? It might be interesting, but would it prove to be relevant? Does it have anything to do with the iceberg? Would that focus on the manifest impede my path to insight on the sinking?

It would be great to have Artificial Intelligence advising me on these matters. I’d make the ultimate decision, but it would be awesome to have something like the Star Trek computer sifting through the sea of facts for that which is relevant.

Is AI ready? IBM recently showed that it’s certainly coming along.

Is the sea of facts ready? That’s a lot less certain.

Debater holds its own

In June 2018, IBM unveiled the latest in Artificial Intelligence with Project Debater in a small event with two debates: “we should subsidize space exploration”, and “we should increase the use of telemedicine”. The opponents were credentialed experts, and Debater was arguing from a position established by “reading” a large volume of academic papers.

The result? From what we can tell, the humans were more persuasive while the computer was more thorough. Hardly surprising, perhaps. I’d like to watch the full debates but haven’t located them yet.

Debater is intended to help humans enhance their ability to persuade. According to IBM researcher Ranit Aharanov, “We are actually trying to show that a computer system can add to our conversation or decision making by bringing facts and doing a different kind of argumentation.”

So this is an example of AI. I’ve been trying to distinguish between automation and AI, machine learning, deep learning, etc. I don’t need to nail that down today, but I’m pretty sure that my definition of AI includes genuine cognition: the ability to identify facts, comb out the opinions and misdirection, incorporate the right amount of intention bias, and form decisions and opinions with confidence while remaining watchful for one’s own errors. I’ll set aside any obligation to admit and react to one’s own errors, choosing to assume that intelligence includes the interest in, and awareness of, one’s ability to err.

Mark Klein, Principal Research Scientist at M.I.T., helped with that distinction between computing and AI. “There needs to be some additional ability to observe and modify the process by which you make decisions. Some call that consciousness, the ability to observe your own thinking process.”

Project Debater represents an incredible leap forward in AI. It was given access to a large volume of academic publications, and it developed its debating chops through machine learning. The capability of the computer in those debates resembled the results that humans would get from reading all those papers, assuming you can conceive of a way that a human could consume and retain that much knowledge.

Beyond spinning away on publications, are computers ready to interact intelligently?

Artificial? Yes. But, Intelligent?

According to Dr. Klein, we’re still far away from that outcome. “Computers still seem to be very rudimentary in terms of being able to ‘understand’ what people say. They (people) don’t follow grammatical rules very rigorously. They leave a lot of stuff out and rely on shared context. They’re ambiguous or they make mistakes that other people can figure out. There’s a whole list of things like irony that are completely flummoxing computers now.”

Dr. Klein’s PhD in Artificial Intelligence from the University of Illinois leaves him particularly well-positioned for this area of study. He’s primarily focused on using computers to enable better knowledge sharing and decision making among groups of humans. Thus he confronts the potentially debilitating question of what constitutes knowledge: what separates fact from opinion from conjecture.

His field of study focuses on the intersection of AI, social computing, and data science. A central theme involves responsibly working together in a structured collective intelligence life cycle: Collective Sensemaking, Collective Innovation, Collective Decision Making, and Collective Action.

One of the key outcomes of Klein’s research is “The Deliberatorium”, a collaboration engine that adds structure to mass participation via social media. The system ensures that contributors create a self-organized, non-redundant summary of the full breadth of the crowd’s insights and ideas. This model avoids the risks of ambiguity and misunderstanding that impede the success of AI interacting with humans.

Klein provided a deeper explanation of the massive gap between AI and genuine intellectual interaction. “It’s a much bigger problem than being able to parse the words, make a syntax tree, and use the standard Natural Language Processing approaches.”

“Natural Language Processing breaks up the problem into several layers. One of them is syntax processing, which is to figure out the nouns and the verbs and figure out how they’re related to each other. The second level is semantics, which is having a model of what the words mean. That ‘eat’ means ‘ingesting some nutritious substance in order to get energy to live’. For syntax, we’re doing OK. For semantics, we’re doing kind of OK. But the part where it seems like Natural Language Processing still has light years to go is in the area of what they call ‘pragmatics’, which is understanding the meaning of something that’s said by taking into account the cultural and personal contexts of the people who are communicating. That’s a huge topic. Imagine that you’re talking to a North Korean. Even if you had a good translator there would be lots of possibility of huge misunderstandings because your contexts would be so different, the way you try to get across things, especially if you’re trying to be polite, it’s just going to fly right over each other’s head.”

To make matters much worse, our communications are filled with cases where we ought not be taken quite literally. Sarcasm, irony, idioms, etc. make it difficult enough for humans to understand, given the incredible reliance on context. I could just imagine the computer trying to validate something that starts with, “John just started World War 3…”, or “Bonnie has an advanced degree in…”, or “That’ll help…”

A few weeks ago, I wrote that I’d won $60 million in the lottery. I was being sarcastic, and (if you ask me) humorous in talking about how people decide what’s true. Would that research interview be labeled as fake news? Technically, I suppose it was. Now that would be ironic.

Klein summed it up with, “That’s the kind of stuff that computers are really terrible at and it seems like that would be incredibly important if you’re trying to do something as deep and fraught as fact checking.”

Centralized vs. Decentralized Fact Model

It’s self-evident that we have to be judicious in managing the knowledge base behind an AI fact-checking model, and it’s reasonable to assume that AI will retain and project any subjective bias embedded in the underlying body of ‘facts’.

We’re facing competing models for the future of truth, and the question is centralization. Do you trust yourself to deduce the best answer to challenging questions, or do you prefer simply to trust the authoritative position? Consider that centralized models with obvious bias sit behind most of our sources. The tech giants all filter our news, likely with more impact than the most powerful media editors. Are they unbiased? Governments dictate most of the educational curriculum. Are they unbiased?

That centralized truth model should be raising alarm bells for anyone paying attention. Instead, consider a truly decentralized model where no corporate or government interest is influencing the ultimate decision on what’s true. And consider that the truth is potentially unstable. Establishing the initial position on facts is one thing, but the ability to change that view in the face of more information is likely the bigger benefit.

A decentralized fact model without commercial or political interest would openly seek out corrections. It would critically evaluate new knowledge and objectively re-frame the previous position whenever warranted. It would communicate those changes without concern for timing, or for the social or economic impact. It quite simply wouldn’t consider or care whether or not you liked the truth.

The model proposed by Trive appears to meet those objectivity criteria and is getting noticed as more people tire of left-vs-right and corporatocracy preservation.

IBM Debater seems like it would be able to engage in critical thinking that would shift influence towards a decentralized model. Hopefully, Debater would view the denial of truth as subjective and illogical. With any luck, the computer would confront that conduct directly.

IBM’s AI machine already can examine tactics and style. In a recent debate, it coldly scolded the opponent with: “You are speaking at the extremely fast rate of 218 words per minute. There is no need to hurry.”

Debater can obviously play the debate game while managing massive amounts of information and determining relevance. As it evolves, it will need to rely on the veracity of that information.

Trive and Debater seem to be a complement to each other, so far.

Author Bio: Barry Cousins

Barry Cousins is Research Lead at Info-Tech Research Group, specializing in Project Portfolio Management, Help/Service Desk, and Telephony/Unified Communications. He brings an extensive background in technology, IT management, and business leadership.

About Info-Tech Research Group

Info-Tech Research Group is a fast growing IT research and advisory firm. Founded in 1997, Info-Tech produces unbiased and highly relevant IT research solutions. Since 2010 McLean & Company, a division of Info-Tech, has provided the same, unmatched expertise to HR professionals worldwide.

Who’s NOT a Creative?

 

Compensating sales
Close-up Of A Business Woman Giving Cheque To Her Colleague At Workplace In Office. Andrey Popov/Shutterstock

25 July 2018 – Last week I made a big deal about the things that motivate creative people, such as magazine editors, and how the most effective rewards were non-monetary. I also said that monetary rewards, such as commissions based on sales results, were exactly the right rewards to use for salespeople. That would imply that salespeople were somehow different from others, and maybe even not creative.

That is not the impression I want to leave you with. I’m devoting this blog posting to setting that record straight.

My remarks last week were based on Maslow‘s and Herzberg‘s work on motivation of employees. I suggested that these theories were valid in other spheres of human endeavor. Let’s be clear about this: yes, Maslow’s and Herzberg’s theories are valid and useful in general, whenever you want to think about motivating normal, healthy human beings. It’s incidental that those researchers were focused on employer/employee relations as an impetus to their work. If they’d been focused on anything else, their conclusions would probably have been pretty much the same.

That said, there is a whole class of people for whom monetary compensation is the holy grail of motivators. They are generally very high-functioning individuals who are in no way pathological, yet on the surface their preferred rewards appear to be monetary.

Traditionally, observers who don’t share this reward system have indicted these individuals as “greedy.”

I, however, dispute that conclusion. Let me explain why.

When pointing out the rewards that can be called “motivators for editors,” I wrote:

“We did that by pointing out that they belonged to the staff of a highly esteemed publication. We talked about how their writings helped their readers excel at their jobs. We entered their articles in professional competitions with awards for things like ‘Best Technical Article.’ Above all, we talked up the fact that ours was ‘the premier publication in the market.'”

Notice that these rewards, though non-monetary, were more or less measurable. They could be (and, for the individuals they motivated, indeed were) seen as scorecards. The individuals involved had a very clear idea of the value attached to such rewards. A Nobel Prize in Physics is of greater value than, say, a similar award given by Harvard University.

For example, in 1987 I was awarded the “Cahners Editorial Medal of Excellence, Best How-To Article.” That wasn’t half bad. The competition was articles written for a few dozen magazines that were part of the Cahners Publishing Company, which at the time was a big deal in the business-to-business magazine field.

What I considered to be of higher value, however, was the “First Place Award For Editorial Excellence for a Technical Article in a Magazine with Over 80,000 Circulation” I got in 1997 from the American Society of Business Press Editors, where I was competing with a much wider pool of journalists.

Economists have a way of attempting to quantify such non-monetary awards called utility. They arrive at values by presenting various options and asking the question: “Which would you rather have?”

Of course, measures of utility generally vary widely depending on who’s doing the choosing.

For example, an article in the 19 July The Wall Street Journal described a phenomenon the author seemed to find surprising: Saudi Arabian women drivers (new drivers all) showed a preference for muscle cars over more pedestrian models. The author, Margherita Stancati, related an incident where a Porsche salesperson in Riyadh offered a recently minted woman driver an “easy to drive crossover designed primarily to attract women.” The customer demurred. She wanted something “with an engine that roars.”

So, the utility of anything is not an absolute in any sense. It all depends on answering the question: “Utility to whom?”
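The economists’ elicitation procedure can be sketched in a few lines. This is a hypothetical illustration (the reward names and the `prefers` encoding are mine): rank options for one chooser by counting pairwise “which would you rather have?” wins; a different chooser supplies different answers and gets a different ranking.

```python
from itertools import combinations

def rank_by_preference(options, prefers):
    """Rank options for a single chooser.

    prefers(a, b) -> True if the chooser takes a over b.
    Each option's score is its number of pairwise wins."""
    wins = {opt: 0 for opt in options}
    for a, b in combinations(options, 2):
        wins[a if prefers(a, b) else b] += 1
    return sorted(options, key=lambda o: wins[o], reverse=True)

# One chooser's (illustrative) tastes, encoded as a fixed order:
rewards = ["industry-wide award", "in-house award", "small cash bonus"]
prefers = lambda a, b: rewards.index(a) < rewards.index(b)

print(rank_by_preference(rewards, prefers))
```

Swap in the Riyadh customer’s `prefers` and the muscle car beats the crossover; utility is always relative to the chooser.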

Everyone is motivated by rewards in the upper half of the Needs Pyramid. If you’re a salesperson, growth in your annual (or other period) sales revenue sits in the green Self Esteem block. It’s well and truly in the “motivator” category, and has nothing to do with the Safety and Security “hygiene factor” where others might put it. Successful salespeople have those hygiene factors well and truly covered. They’re looking for a reward that tells them they’ve hit a home run, and that reward is likely a bigger annual bonus than the next guy’s.

The most obvious money-driven motivators accrue to the folks in the CEO ranks. Jeff Bezos, Elon Musk, and Warren Buffett would have a hard time measuring their success (i.e., hitting the Pavlovian lever to get Self Actualization rewards) without looking at their monetary compensation!

The Pyramid of Needs

Needs Pyramid
The Pyramid of Needs combines Maslow’s and Herzberg’s motivational theories.

18 July 2018 – Long, long ago, in a [place] far, far away. …

When I was Chief Editor at business-to-business magazine Test & Measurement World, I had a long, friendly though heated, discussion with one of our advertising-sales managers. He suggested making the compensation we paid our editorial staff contingent on total advertising sales. He pointed out that what everyone came to work for was to get paid, and that tying their pay to how well the magazine was doing financially would give them an incentive to make decisions that would help advertising sales, and advance the magazine’s financial success.

He thought it was a great idea, but I disagreed completely. I pointed out that, though revenue sharing was exactly the right way to compensate the salespeople he worked with, it was exactly the wrong way to compensate creative people, like writers and journalists.

Why it was a good idea for his salespeople I’ll leave for another column. Today, I’m interested in why it was not a good idea for my editors.

In the heat of the discussion I didn’t do a deep dive into the reasons for taking my position. Decades later, from the standpoint of a semi-retired whatever-you-call-my-patchwork-career, I can now sit back and analyze in some detail the considerations that led me to my conclusion, which I still think was correct.

We’ll start out with Maslow’s Hierarchy of Needs.

In 1943, Abraham Maslow proposed that healthy human beings have a certain number of needs, and that these needs are arranged in a hierarchy. At the top is “self actualization,” which boils down to a need for creativity. It’s the need to do something that’s never been done before in one’s own individual way. At the bottom is the simple need for physical survival. In between are three more identified needs people also seek to satisfy.

Maslow pointed out that people seek to satisfy these needs from the bottom to the top. For example, nobody worries about security arrangements at their gated community (second level) while having a heart attack that threatens their survival (bottom level).
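Maslow’s bottom-to-top ordering amounts to a simple rule: attention goes to the lowest unmet need. A minimal sketch, with my own simplified level names standing in for Maslow’s five levels:

```python
# Needs listed from bottom (most basic) to top; the names are
# simplified stand-ins for Maslow's five levels.
NEEDS = ["physiological", "safety", "social", "esteem", "self-actualization"]

def current_focus(met):
    """Return the lowest unmet need -- the one commanding attention.

    met maps a need's name to whether it is currently satisfied."""
    for need in NEEDS:
        if not met.get(need, False):
            return need
    return "self-actualization"  # everything met: free to create

# The heart-attack example: survival trumps gated-community security.
print(current_focus({"physiological": False, "safety": True}))
```

With everything below the top satisfied, the function lands on self-actualization, which is where Herzberg’s “motivators” live.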

Overlaid on Maslow’s hierarchy is Frederick Herzberg’s Two-Factor Theory, which he published in his 1959 book The Motivation to Work. Herzberg’s theory divides Maslow’s hierarchy into two sections. The lower section is best described as “hygiene factors.” They are also known as “dissatisfiers” or “demotivators” because if they’re not met folks get cranky.

Basically, a person needs to have their hygiene factors covered in order to have a level of basic satisfaction in life. Not having any of these needs satisfied makes them miserable. Having them satisfied doesn’t motivate them at all. It makes ’em fat, dumb and happy.

The upper-level needs are called “motivators.” Not having motivators met drives an individual to work harder, smarter, etc. It energizes them.

My position in the argument with my ad-sales friend was that providing revenue sharing worked at the “Safety and Security” level. Editors were (at least in my organization) paid enough that they didn’t have to worry about feeding their kids and covering their bills. They were talented people with a choice of whom they worked for. If they weren’t already being paid enough, they’d have been forced to go work for somebody else.

Creative people, my argument went, are motivated by non-monetary rewards. They work at the upper “motivator” levels. They’ve already got their physical needs covered, so to motivate them we have to offer rewards in the “motivator” realm.

We did that by pointing out that they belonged to the staff of a highly esteemed publication. We talked about how their writings helped their readers excel at their jobs. We entered their articles in professional competitions with awards for things like “Best Technical Article.” Above all, we talked up the fact that ours was “the premier publication in the market.”

These were all non-monetary rewards to motivate people who already had their basic needs (the hygiene factors) covered.

I summarized my compensation theory thusly: “We pay creative people enough so that they don’t have to go do something else.”

That gives them the freedom to do what they would want to do, anyway. The implication is that creative people want to do stuff because it’s something they can do that’s worth doing.

In other words, we don’t pay creative people to work. We pay them to free them up so they can work. Then, we suggest really fun stuff for them to work at.

What does this all mean for society in general?

First of all, if you want there to be a general level of satisfaction within your society, you’d better take care of those hygiene factors for everybody!

That doesn’t mean the top 1%. It doesn’t mean the top 80%, either. Or, the top 90%. It means everybody!

If you’ve got 99% of everybody covered, that still leaves a whole lot of people who think they’re getting a raw deal. Remember that in the U.S.A. there are roughly 300 million people. If you’ve left 1% feeling ripped off, that’s 3 million potential revolutionaries. Three million people can cause a lot of havoc if motivated.

Remember, at the height of the 1960s Hippy movement, there were, according to the most generous estimates, only about 100,000 hipsters wandering around. Those hundred-thousand activists made a huge change in society in a very short period of time.

Okay. If you want people invested in the status quo of society, make sure everyone has all their hygiene factors covered. If you want to know how to do that, ask Bernie Sanders.

Assuming you’ve got everybody’s hygiene factors covered, does that mean they’re all fat, dumb, and happy? Do you end up with a nation of goofballs with no motivation to do anything?

Nope!

Remember those needs Herzberg identified as “motivators” in the upper part of Maslow’s pyramid?

The hygiene factors come into play only when they’re not met. The day they’re met, people stop thinking about who’ll be first against the wall when the revolution comes. Folks become fat, dumb and happy, and stay that way for about an afternoon. Maybe an afternoon and an evening if there’s a good ballgame on.

The next morning they start thinking: “So, what can we screw with next?”

What they’re going to screw with next is anything and everything they damn well please. Some will want to fly to the Moon. Some will want to outdo Michelangelo’s frescoes on the ceiling of the Sistine Chapel. They’re all going to look at what they think was the greatest stuff from the past, and try to think of ways to do better, and to do it in their own way.

That’s the whole point of “self actualization.”

The Renaissance didn’t happen because everybody was broke. It happened because they were already fat, dumb and happy, and looking for something to screw with next.

POTUS and the Peter Principle

Will Rogers & Wiley Post
In 1927, Will Rogers wrote: “I never met a man I didn’t like.” Here he is (on left) posing with aviator Wiley Post before their ill-fated flying exploration of Alaska. Everett Historical/Shutterstock

11 July 2018 – Please bear with me while I, once again, invert the standard news-story pyramid by presenting a great whacking pile of (hopefully entertaining) detail that leads eventually to the point of this column. If you’re too impatient to read to the end, leave now to check out the latest POTUS rant on Twitter.

Unlike Will Rogers, who famously wrote, “I never met a man I didn’t like,” I’ve run across a whole slew of folks I didn’t like, to the point of being marginally misanthropic.

I’ve made friends with all kinds of people, from murderers to millionaires, but there are a few types that I just can’t abide. Top of that list is people that think they’re smarter than everybody else, and want you to acknowledge it.

I’m telling you this because I’m trying to be honest about why I’ve never been able to abide two recent Presidents: William Jefferson Clinton (#42) and Donald J. Trump (#45). Having been forced to observe their antics over an extended period, I’m pleased to report that they’ve both proved to be among the most corrupt individuals to occupy the Oval Office in recent memory.

I dislike them because they both show that same, smarmy self-satisfied smile when contemplating their own greatness.

Tricky Dick Nixon (#37) was also a world-class scumbag, but he never triggered the same automatic revulsion. That’s because, instead of always looking self-satisfied, he always looked scared. He was smart enough to recognize that he was walking a tightrope and that, if he stayed on it long enough, he eventually would fall off.

And, he did.

I had no reason for disliking #37 until the mid-1960s, when, as a college freshman, I researched a paper for a history class that happened to involve digging into the McCarthy hearings of the early 1950s. Seeing the future #37’s activities in that period helped me form an extremely unflattering picture of his character, which a decade later proved accurate.

During those years in between I had some knock-down, drag-out arguments with my rabid-Nixon-fan grandmother. I hope I had the self control never to have said “I told you so” after Nixon’s fall. She was a nice lady and a wonderful grandma, and wouldn’t have deserved it.

As Abraham Lincoln (#16) famously said: “You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.”

Since #45 came on my radar many decades ago, I’ve been trying to figure out what, exactly, is wrong with his brain. At first, when he was a real-estate developer, I just figured he had bad taste and was infantile. That made him easy to dismiss, so I did just that.

Later, he became a reality-TV star. His show, The Apprentice, made it instantly clear that he knew absolutely nothing about running a business.

No wonder his companies went bankrupt. Again, and again, and again….

I’ve known scads of corporate CEOs over the years. During the quarter century I spent covering the testing business as a journalist, I got to spend time with most of the corporate leaders of the world’s major electronics manufacturing companies. Unsurprisingly, the successful ones followed the best practices that I learned in MBA school.

Some of the CEOs I got to know were goofballs. Most, however, were absolutely brilliant. The successful ones all had certain things in common.

Chief among the characteristics of successful corporate executives is that they make the people around them happy to work for them. They make others feel comfortable, empowered, and enthusiastically willing to cooperate to make the CEO’s vision manifest.

Even Commendatore Ferrari, who I’ve heard was Hell to work for and Machiavellian in interpersonal relationships, made underlings glad to have known him. I’ve noticed that ‘most everybody who’s ever worked for Ferrari has become a Ferrari fan for life.

As far as I can determine, nobody ever sued him.

That’s not the impression I got of Donald Trump, the corporate CEO. He seemed to revel in conflict, making those around him feel like dog pooh.

Apparently, everyone who’s ever dealt with him has wanted to sue him.

That worked out fine, however, for Donald Trump, the reality-TV star. So-called “reality” TV shows generally survive by presenting conflict. The more conflict the better. Everybody always seems to be fighting with everybody else, and the winners appear to be those who consistently bully their opponents into feeling like dog pooh.

I see a pattern here.

The inescapable conclusion is that Donald Trump was never a successful corporate executive, but succeeded enormously playing one on TV.

Another characteristic I should mention of reality TV shows is that they’re unscripted. The idea seems to be that nobody knows what’s going to happen next, including the cast.

That removes the necessity for reality-TV stars to learn lines. Actual movie stars and stage actors have to learn lines of dialog. Their stories are tightly scripted to conform to Aristotle’s recommendations for how to construct a successful plot.

Having written a handful of traditional motion-picture scripts as well as having produced a few reality-TV episodes, I know the difference. Following Aristotle’s dicta gives you the ability to communicate, and sometimes even teach, something to your audience. The formula reality-TV show, on the other hand, goes nowhere. Everybody (including the audience) ends up exactly where they started, ready to start the same stupid arguments over and over again ad nauseam.

Apparently, reality-TV audiences don’t want to actually learn anything. They’re more focused on ranting and raving.

Later on, following a long tradition among theater, film and TV stars, #45 became a politician.

At first, I listened to what he said. That led me to think he was a Nazi demagogue. Then, I thought maybe he was some kind of petty tyrant, like Mussolini. (I never considered him competent enough to match Hitler.)

Eventually, I realized that it never makes any sense to listen to what #45 says because he lies. That makes anything he says irrelevant.

FIRST PRINCIPLE: If you catch somebody lying to you, stop believing what they say.

So, it’s all bullshit. You can’t draw any conclusion from it. If he says something obviously racist (for example), you can’t conclude that he’s a racist. If he says something that sounds stupid, you can’t conclude he’s stupid, either. It just means he’s said something that sounds stupid.

Piling up this whole load of B.S., then applying Occam’s Razor, leads to the conclusion that #45 is still simply a reality-TV star. His current TV show is titled The Trump Administration. Its supporting characters are U.S. senators and representatives, executive-branch bureaucrats, news-media personalities, and foreign “dignitaries.” Some in that last category (such as Justin Trudeau and Emmanuel Macron) are reluctant conscripts into the cast, and some (such as Vladimir Putin and Kim Jong-un) gleefully play their parts, but all are bit players in #45’s reality TV show.

Oh, yeah. The largest group of bit players in The Trump Administration is every man, woman, child and jackass on the planet. All are, in true reality-TV style, going exactly nowhere as long as the show lasts.

Politicians have always been showmen. Of the Founding Fathers, the one who stands out for never coming close to becoming President was Benjamin Franklin. Franklin was a lot of things, and did a lot of things extremely well. But, he was never really a P.T.-Barnum-like showman.

Really successful politicians, such as Abraham Lincoln, Franklin Roosevelt (#32), Bill Clinton, and Ronald Reagan (#40) were showmen. They could wow the heck out of an audience. They could also remember their lines!

That brings us, as promised, to Donald Trump and the Peter Principle.

Recognizing the close relationship between Presidential success and showmanship gives some idea about why #45 is having so much trouble making a go of being President.

Before I dig into that, however, I need to point out a few things that #45 likes to claim as successes that actually aren’t:

  • The 2016 election was not really a win for Donald Trump. Hillary Clinton was such an unpopular candidate that she decisively lost on her own (de)merits. God knows why she was ever the Democratic Party candidate at all. Anybody could have beaten her. If Donald Trump hadn’t been available, Elmer Fudd could have won!
  • The current economic expansion has absolutely nothing to do with Trump policies. I predicted it back in 2009, long before anybody (with the possible exception of Vladimir Putin, who apparently engineered it) thought Trump had a chance of winning the Presidency. My prediction was based on applying chaos theory to historical data. It was simply time for an economic expansion. The only effect Trump can have on the economy is to screw it up. Being trained as an economist (You did know that, didn’t you?), #45 is unlikely to screw up so badly that he derails the expansion.
  • While #45 likes to claim a win on North Korean denuclearization, the Nobel Peace Prize is on hold while evidence piles up that Kim Jong-un was pulling the wool over Trump’s eyes at the summit.

Finally, we move on to the Peter Principle.

In 1969 Canadian writer Raymond Hull co-wrote a satirical book entitled The Peter Principle with Laurence J. Peter. It was based on research Peter had done on organizational behavior.

Peter was (he died at age 70 in 1990) not a management consultant or a behavioral psychologist. He was an Associate Professor of Education at the University of Southern California. He was also Director of the Evelyn Frieden Centre for Prescriptive Teaching at USC, and Coordinator of Programs for Emotionally Disturbed Children.

The Peter principle states: “In a hierarchy every employee tends to rise to his level of incompetence.”

To the horror of corporate managers, the book went on to provide real examples and lucid explanations demonstrating the principle’s validity. It works as satire only because it leaves the reader with a choice either to laugh or to cry.

See last week’s discussion of why academic literature is exactly the wrong form with which to explore really tough philosophical questions in an innovative way.

Let’s be clear: I’m convinced that the Peter principle is God’s Own Truth! I’ve seen dozens of examples that confirm it, and no counter examples.

It’s another proof that Mommy Nature has a sense of humor. Anyone who disputes that has, philosophically speaking, a piece of paper taped to the back of his (or her) shirt with the words “Kick Me!” written on it.

A quick perusal of the Wikipedia entry on the Peter Principle elucidates: “An employee is promoted based on their success in previous jobs until they reach a level at which they are no longer competent, as skills in one job do not necessarily translate to another. … If the promoted person lacks the skills required for their new role, then they will be incompetent at their new level, and so they will not be promoted again.”

I leave it as an exercise for the reader (and the media) to find the numerous examples where #45, as a successful reality-TV star, has the skills he needed to be promoted to President, but not those needed to be competent in the job.