Stick to Your Knitting

Man knitting
Man in suit sticking to his knitting. Photo by fokusgood / Shutterstock

6 June 2019 – Once upon a time in an MBA school far, far away, I took a Marketing 101 class. The instructor, whose name I can no longer be sure of, had a number of sayings that proved insightful, bordering on the oracular. (That means they were generally really good advice.) One that he elevated to the level of a mantra was: “Stick to the knitting.”

Really successful companies of all sizes hew to this advice. There have been periods of history when fast-growing companies run by CEOs with spectacularly big egos have, equally spectacularly, honored this mantra in the breach. With more hubris than brains, they’ve managed to over-invest themselves out of business.

Today’s tech industry – especially the FAANG companies (Facebook, Amazon, Apple, Netflix and Google) – is particularly prone to this mistake. Here I hope to concentrate on what the mantra means, and what goes wrong when you ignore it.

Okay, “stick to your knitting” is based on the obvious assumption that every company has some core expertise. Amazon, for example, has expertise in building and operating an online catalog store. Facebook has expertise in running an online forum. Netflix operates a bang-up streaming service. Ford builds trucks. Lockheed Martin makes state-of-the-art military airplanes.

General Electric, which has core expertise in manufacturing industrial equipment, got into real trouble when it got the bright idea of starting a finance company to extend loans to its customers for purchases of its equipment.

Conglomeration

There is a business model, called the conglomerate, that is based on explicitly ignoring the “knitting” mantra. It was especially popular in the 1960s. Corporate managers imagined that conglomerates could bring into play synergies that would make them more effective than single-business companies.

For a while there, this model seemed to be working. However, when business conditions began to change (specifically interest rates began to rise from an abnormally low level to more normal rates) their supposed advantages began melting like a birthday cake left outside in a rainstorm. These huge conglomerates began hemorrhaging money until vultures swooped in to pick them apart. Conglomerates are now a thing of the past.

There are companies, such as Berkshire Hathaway, whose core expertise is in evaluating and investing in other companies. Some of them are very successful, but that’s because they stick to their core expertise.

Berkshire Hathaway was originally a textile company that investor Warren Buffett took over when the textile industry was busy going overseas. As time went on, textiles became less important and, by 1985, this original core of the company was shut down. The company had become a holding company for Buffett’s investments in other companies. It turns out that Buffett’s core competence is in handicapping companies for investment potential. That’s his knitting!

The difference between a holding company and a conglomerate is (and this is specifically my interpretation) a matter of integration. In a conglomerate, the different businesses are more-or-less integrated into the parent corporation. In a holding company, they are not.

Berkshire Hathaway is known for its insurance business, but if you want to buy, say, auto insurance from Berkshire Hathaway, you have to go to its Government Employees Insurance Company (GEICO) subsidiary. GEICO is a separate company that happens to be wholly owned by Berkshire Hathaway. That is, it has its own corporate headquarters and all the staff, fixtures and other resources needed to operate as an independent insurance company. It just happens to be owned, lock, stock and intellectual property, by another corporate entity: Berkshire Hathaway.

GEICO’s core expertise is insurance. Berkshire Hathaway’s core expertise is finding good companies to invest in. Some are partially owned (e.g., 5.4% of Apple); some are wholly owned (e.g., Acme Brick).

Despite Berkshire Hathaway’s holding positions in both Apple and Acme Brick, if you ask Warren Buffett whether Berkshire Hathaway is a computer company or a brick company, he’d undoubtedly say “no.” Berkshire Hathaway is a diversified holding company.

Its business is owning other businesses.

To paraphrase James Coburn’s line from Stanley Donen’s 1963 film Charade: “[Mrs. Buffett] didn’t raise no stupid children!”

Why Giant Corporations?

All this giant corporation stuff stems from a dynamic I also learned about in MBA school: a company grows or it dies. I ran across this dynamic during a financial modeling class where we used computers to predict results of corporate decisions in lifelike conditions. Basically, what happens is that unless the company strives to its utmost to maintain growth, it starts to shrink and then all is lost. Feedback effects take over and it withers and dies.

Observations since then have convinced me this is some kind of natural law. It shows up in all kinds of natural systems. I used to think I understood why, but I’m not so sure anymore. It may have something to do with chaos, and we live in a chaotic universe. I resolve to study this in more detail – later.

But, anyway …

Companies that embrace this mantra (You grow or you die.) grow until they reach some kind of external limit, then they stop growing and – in some fashion or other – die.
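
For flavor, here’s roughly the kind of feedback model we played with in that financial-modeling class – reconstructed from memory as a toy sketch with made-up numbers, not the actual classroom model:

```python
def simulate(revenue, reinvest_rate, years=20, churn=0.08, market_limit=1000.0):
    """Toy feedback model: reinvestment wins new business (up to an external
    market limit), while churn steadily erodes the existing base."""
    for _ in range(years):
        growth = reinvest_rate * revenue * (1.0 - revenue / market_limit)
        revenue = max(0.0, revenue + growth - churn * revenue)
    return revenue

print(simulate(100.0, reinvest_rate=0.20))  # grows until the external limit bites
print(simulate(100.0, reinvest_rate=0.05))  # below break-even: shrinks year after year
```

Strive hard enough and you plateau at the external limit; slack off below break-even and the feedback runs the other way.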

Sometimes (and paradigm examples abound) external limits don’t kick in before a company becomes very big, indeed. Standard Oil Company may be the poster child for this effect. Basically, the company grew to monopoly status until, in 1911, the U.S. Federal Government stepped in and, using the 1890 Sherman Anti-Trust Act, forced its breakup into 34 smaller oil companies, several of which survive today – through various mergers – within the world’s major oil companies (e.g., ExxonMobil, BP, and Chevron). At the time of its breakup, Standard Oil had a market capitalization of just under $11B and was the third most valuable company in the U.S. Compare that to the U.S. GDP of roughly $34B at the time.

The problem with companies that big is that they generate tons of free cash. What to do with it?

There are three possibilities:

  1. You can reinvest it in your company;

  2. You can return it to your shareholders; or

  3. You can give it away.

Reinvesting free cash in your company is usually the first choice. I say it is the first choice because it’s the one used in the earliest period of the company’s history – the period when growth is necessarily the only goal.

If done properly, reinvestment can make your company grow bigger faster. You can reinvest by out-marketing your competition (by, say, making better advertisements) and gobbling up market share. You can also reinvest to make your company’s operations more effective or efficient. To grow, you also need to invest in adding production facilities.

At a later stage, your company is already growing fast and you’ve got state-of-the-art facilities, and you dominate your market. It’s time to do what your investors gave you their money for in the first place: return profits to them in the form of dividends. I kinda like that. It’s what the game’s all about, anyway.

Finally, most leaders of large companies recognize that having a lot of free cash lying around is an opportunity to do some good without (obviously) expecting a payback. I qualify this with the word “obviously” because, on some level, altruism does provide a return.

Generally, companies engage in altruism (currently more often called “philanthropy”) to enhance their perception by the public. That’s useful when lawsuits rear their ugly heads or somebody in the organization screws up badly enough to invite public censure. Companies can enhance their reputations by supporting industry activities that do not directly enhance their profits.

So-called “growth companies,” however, get stuck in that early growth phase, and never transition to paying dividends. In the early days of the personal-computer revolution, tech companies prided themselves on being “growth stocks.” That is, investors gained vast wealth on paper as the companies’ stock prices went up, but couldn’t realize those gains (capital gains) unless they sold the stock. Or, as my father once did, by using the stock as collateral to borrow money.

In the end, wise investors eventually want their money back in the form of cash from dividends. For example, in the early 2000s, Microsoft and other technology companies were forced by their shareholders to start paying dividends for the first time.

What can go wrong

So, after all’s said and done, why’s my marketing professor’s mantra wise corporate governance?

To make money, especially the scads of money that corporations need to become really successful, you’ve gotta do something right. In fact, you gotta do something better than the other guys. When you know how to do something better than the other guys, that’s called expertise!

Companies, like people, have limitations. To imagine you don’t have limitations is hubris. To put hubris in perspective, recall that the ancients famously made it Lucifer’s cardinal sin. In fact, it was his only sin!

Folks who tell you that you can do anything are flat out conning your socks off.

If you’re lucky you can do one thing better than others. If you’re really lucky, you can do a few things better than others. If you try to do stuff outside your expertise, however, you’re gonna fail. A person can pick themselves up, dust themselves off, and try again – but don’t try to do the same thing again ‘cause you’ve already proved it’s outside your expertise. People can start over, but companies usually can’t.

One of my favorite sayings is:

Everything looks easy to someone who doesn’t know what they’re doing.

The rank amateur at some activity typically doesn’t know the complexities and pitfalls that an expert in the field has learned about through training and experience. That’s what we know as expertise. When anyone – or any company – wanders outside their field of expertise, they quickly fall foul of those complexities and pitfalls.

I don’t know how many times I’ve overheard some jamoke at an art opening say, “Oh, I could do that!”

Yeah? Then do it!

The artist has actually done it.

The same goes for some computer engineer who imagines that knowing how to program computers makes him (or her) smart, and because (s)he is so smart, (s)he could run, say, a magazine publishing house. How hard can it be?

Mark Zuckerberg is in the process of finding out.

So, You Thought It Was About Climate Change?

Smog over Warsaw
Air pollution over Warsaw center city in winter. Piotr Szczepankiewicz / Shutterstock

Sorry about failing to post to this blog last week. I took sick and just couldn’t manage it. This is the entry I started for 10 April, but couldn’t finish until now.

17 April 2019 – I had a whole raft of things to talk about in this week’s blog posting, some of which I really wanted to cover for various reasons, but I couldn’t resist an excuse to bang this old “environmental pollution” drum once again.

A Zoë Schlanger-authored article published on 2 April 2019 by the World Economic Forum in collaboration with Quartz, entitled “The average person in Europe loses two years of their life due to air pollution,” crossed my desk this morning (8 April 2019). It was important to me because environmental pollution is an issue I’ve been obsessed with since the 1950s.

The Setup

One of my earliest memories is of my father taking delivery of an even-then-ancient 26-foot lifeboat (I think it was from an ocean liner, though I never really knew where it came from), which he planned to convert to a small cabin cruiser. I was amazed when, with no warning to me, this great, whacking flatbed trailer backed over our front lawn and deposited this thing that looked like a miniature version of Noah’s Ark.

It was double-ended – meaning it had a prow-shape at both ends – and was pretty much empty inside. That is, it had benches for survivors to sit on and fittings for oarlocks (I vaguely remember oarlocks actually being in place, but my memory from over sixty years ago is a bit hazy), but little else. No decks. No superstructure. Maybe some grates in the bottom to keep people’s feet out of the bilge, but that’s about it.

My father spent a year or so installing lower decks, upper decks, a cabin with bunks, a head and a small galley, and a straight-six gasoline engine for propulsion. I sorta remember the keel already having been fitted for a propeller shaft and rudder, which would class the boat as a “launch” rather than a simple lifeboat, but I never heard it called that.

Finally, after multiple-years’ reconstruction, the thing was ready to dump into the water to see if it would float. (Wooden boats never float when you first put them in the water. The planks have to absorb water and swell up to tighten the joints. Until then, they leak like sieves.)

The water my father chose to dump this boat into was the Seekonk River in nearby Providence, Rhode Island. It was a momentous day in our family, so my mother shepherded my big sister and me around while my father stressed out about getting the deed done.

We won’t talk about the day(s) the thing spent on the tiny shipway off Gano Street, where the last patches of bottom paint were applied to the spots where the boat’s cradle had supported the hull during construction, and where the last little forgotten bits were fitted and checked out before launch.

While that was going on, I spent the time playing around the docks and frightening my mother with my antics.

That was when I noticed the beautiful rainbow sheen covering the water.

Somebody told me it was called “iridescence” and was caused by the whole Seekonk River being covered by an oil slick. The oil came from the constant movement of oil-tank ships delivering liquid dreck to the oil refinery and tank farm upstream. The stuff was getting dumped into the water and flowing down to help turn Narragansett Bay, which takes up half the state to the south, into one vast combination open sewer and toxic-waste dump.

That was my introduction to pollution.

It made my socks rot every time I accidentally or reluctantly-on-purpose dipped any part of my body into that cesspool.

It was enough to gag a maggot!

So when, in the late 1960s, folks started yammering on about pollution, my heartfelt reaction was: “About f***ing time!”

I did not join the “Earth Day” protests that started in 1970, though. Previously, I’d observed the bizarre antics surrounding the anti-war protests of the middle-to-late 1960s, and saw the kind of reactions they incited. My friends and I had been a safe distance away leaning on an embankment blowing weed and laughing as less-wise classmates set themselves up as targets for reactionary authoritarians’ ire.

We’d already learned that the best place to be when policemen suit up for riot patrol is someplace a safe distance away.

We also knew the protest organizers – they were, after all, our classmates in college – and smiled indulgently as they worked up their resumes for lucrative careers in activist management. There’s more than one way to make a buck!

Bohemians, beatniks, hippies, or whatever term du jour you wanted to call us just weren’t into the whole money-and-power trip. We had better, mellower things to do than march around carrying signs, shouting slogans, and getting our heads beaten in for our efforts. So, when our former friends, the Earth-Day organizers, wanted us to line up, we didn’t even bother to say “no.” We just turned and walked away.

I, for one, was in the midst of changing tracks from English to science. I’d already tried my hand at writing, but found that, while I was pretty good at putting sentences together in English, then stringing them into paragraphs and stories, I really had nothing worthwhile to write about. I’d just not had enough life experience.

Since physics was basic to all the other stuff I’d been interested in – for decades – I decided to follow that passion and get a good grounding in the hard sciences, starting with physics. By the late seventies, I had learned what science was all about, had developed a feel for how it was done, and knew what the results looked like. Especially, I was deep into astrophysics in general and solar physics in particular.

As time went on, the public noises I heard about environmental concerns began to sound more like political posturing and less like scientific discourse. Especially as they chose to ignore the variability of the Sun, which we astronomers knew was what made everything work.

By the turn of the millennium, scholarly reports generally showed no observations that backed up the global-warming rhetoric. Instead, they featured ambiguous results that showed chaotic evolution of climate with no real long-term trends.

Those of us interested in the history of science also realized that warm periods coincided with generally good conditions for humans, while cool periods could be pretty rough. So, what was wrong with a little global warming when you needed it?

A disturbing trend, however, was that these reports began to feature a boilerplate final paragraph saying, roughly: “climate change is a real danger and caused by human activity.” They all featured this paragraph, suspiciously almost word for word, despite there being little or nothing in the research results to support such a conclusion.

Since nothing in the rest of the report provided any basis for that final paragraph, it was clearly a non sequitur added for non-science reasons. Clearly something was terribly wrong with climate research.

The penny finally dropped in 2006 when emeritus Vice President Albert Gore (already infamous for having attempted to take credit for developing the Internet) produced his hysteria-inducing movie An Inconvenient Truth along with the splashing about of Michael Mann’s laughable “hockey-stick graph” (the nickname was coined by climate modeler Jerry Mahlman). The graph, in particular, was based on stitching together historical data from proxies of global temperature with a speculative projection of a future exponential rise in global temperatures. That is something respectable scientists are specifically trained not to do, although it’s a favorite tactic of psycho-ceramics.

Air Pollution

By that time, however, so much rhetoric had been invested in promoting climate-change fear and convincing the media that it was human-induced, that concerns about plain old pollution (which anyone could see) seemed dowdy and uninteresting by comparison.

One of the reasons pollution seemed old news then (and still does now) is that in civilized countries (generally those run as democracies) great strides had already been made in beating it down. A case in point is the image below.

East/West Europe Pollution
A snapshot of particulate pollution across Europe on Jan. 27, 2018. (Apologies to Quartz [ https://qz.com/1192348/europe-is-divided-into-safe-and-dangerous-places-to-breathe/ ] from whom this image was shamelessly stolen.)

This image, which is a political map overlaid by a false-color map with colors indicating air-pollution levels, shows relatively mild pollution in Western Europe and much more severe levels in the more-authoritarian-leaning countries of Eastern Europe.

While this map makes an important point about how poorly communist and other authoritarian-leaning regimes take care of the “soup” in which their citizens have to live, it doesn’t say a lot about the environmental state of the art more generally in Europe. We leave that for Zoë Schlanger’s WEF article, which begins:

“The average person living in Europe loses two years of their life to the health effects of breathing polluted air, according to a report published in the European Heart Journal on March 12.

“The report also estimates about 800,000 people die prematurely in Europe per year due to air pollution, or roughly 17% of the 5 million deaths in Europe annually. Many of those deaths, between 40 and 80% of the total, are due to air pollution effects that have nothing to do with the respiratory system but rather are attributable to heart disease and strokes caused by air pollutants in the bloodstream, the researchers write.

“‘Chronic exposure to enhanced levels of fine particle matter impairs vascular function, which can lead to myocardial infarction, arterial hypertension, stroke, and heart failure,’ the researchers write.”

The point is, while American politicians debate the merits of climate-change legislation, and European politicians seem to have knuckled under to IPCC climate-change rhetoric by wholeheartedly endorsing the 2015 Paris Agreement, the bigger and far more salient problem of environmental pollution is largely being ignored. This despite the visible and immediate deleterious effects on human health, and the demonstrated effectiveness of government efforts to ameliorate it.

By the way, in the two decades between the time I first observed iridescence atop the waters of the Seekonk River and when I launched my own first boat in the 1970s, Narragansett Bay went from a potential Superfund site to a beautiful, clean playground for recreational boaters. That was largely due to the efforts of the Save the Bay volunteer organization. While their job is not (and never will be) completely finished, they can serve as a model for effective grassroots activism.

Why Diversity Rules

Diverse friends
A diverse group of people with different ages and nationalities having fun together. Rawpixel/Shutterstock

23 January 2019 – Last week two concepts reared their ugly heads that I’ve been banging on about for years. They’re closely intertwined, so it’s worthwhile to spend a little blog space discussing why they fit so tightly together.

Diversity is Good

The first idea is that diversity is good. It’s good in almost every human pursuit. I’m particularly sensitive to this, being someone who grew up with the idea that rugged individualism was the highest ideal.

Diversity, of course, is incompatible with individualism. Individualism is the cult of the one. “One” cannot logically be diverse. Diversity is a property of groups, and groups by definition consist of more than one.

Okay, set theory admits of groups with one or even no members, but those groups have a diversity “score” (Gini–Simpson index) of zero. To have any diversity at all, your group has to have at absolute minimum two members. The more the merrier (or diversitier).
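
For the technically inclined, the Gini–Simpson index is just the probability that two randomly drawn group members belong to different categories. Here’s a minimal sketch – my own illustration, not any particular library’s API:

```python
from collections import Counter

def gini_simpson(members):
    """Probability that two members drawn at random (with replacement)
    differ in category: 0 = everyone alike, approaching 1 = maximally diverse."""
    counts = Counter(members)
    n = sum(counts.values())
    if n == 0:
        return 0.0  # the empty group scores zero, as does a group of one
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

print(gini_simpson(["a"]))            # 0.0 -- one member, no diversity
print(gini_simpson(["a", "b"]))       # 0.5 -- the minimum group size for any diversity
print(gini_simpson(["a", "b", "c"]))  # ~0.667 -- the more the merrier
```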

The idea that diversity is good came up in a couple of contexts over the past week.

First, I’m reading a book entitled Farsighted: How We Make the Decisions That Matter the Most by Steven Johnson, which I plan eventually to review in this blog. Part of the advice Johnson offers is that groups make better decisions when their membership is diverse. How they are diverse is less important than the extent to which they are diverse. In other words, this is a case where quantity is more important than quality.

Second, I divided my physics-lab students into groups to perform their first experiment. We break students into groups to prepare them for working in teams after graduation. Unlike when I was a student fifty years ago, activity in scientific research and technology development is now almost always done in teams.

When I was a student, research was (supposedly) done by individuals working largely in isolation. I believe it was Willard Gibbs (I have no reliable reference for this quote) who said: “An experimental physicist must be a professional scientist and an amateur everything else.”

By this he meant that building a successful physics experiment requires the experimenter to apply so many diverse skills that it is impossible to have professional mastery of all of them. He (or she) must have an amateur’s ability to pick up novel skills in order to reach the next goal in the research. They must be ready to work outside their normal comfort zone.

That asked a lot from an experimental researcher! Individuals who could do that were few and far between.

Today, the fast pace of technological development has reduced that pool of qualified individuals essentially to zero. It certainly is too small to maintain the pace society expects of the engineering and scientific communities.

Tolkien’s “unimaginable hand and mind of Fëanor” puttering around alone in his personal workshop crafting magical things is unthinkable today. Marlowe’s Dr. Faustus character, who had mastered all earthly knowledge, is now laughable. No one person is capable of making a major contribution to today’s technology on their own.

The solution is to perform the work of technological research and development in teams with diverse skill sets.

In the sciences, theoreticians with strong mathematical backgrounds partner with engineers capable of designing machines to test the theories, and technicians with the skills needed to fabricate the machines and make them work.

Chaotic Universe

The second idea I want to deal with in this essay is that we live in a chaotic Universe.

Chaos is a property of complex systems. These are systems consisting of many interacting moving parts that show predictable behavior on short time scales, but eventually foil the most diligent attempts at long-term prognostication.

A pendulum, by contrast, is a simple system consisting of, basically, three moving parts: a massive weight, or “pendulum bob,” that hangs by a rod or string (the arm) from a fixed support. Simple systems usually do not exhibit chaotic behavior.

The solar system, consisting of a huge, massive star (the Sun), eight major planets and a host of minor planets, is decidedly not a simple system. Its behavior is borderline chaotic. I say “borderline” because the solar system seems well behaved on short time scales (e.g., millennia), but when viewed on time scales of millions of years does all sorts of unpredictable things.

For example, approximately four and a half billion years ago (a few tens of millions of years after the system’s initial formation) a Mars-sized planet collided with Earth, spalling off a mass of material that coalesced to form the Moon while the rest of the impactor merged with Earth. That’s the sort of unpredictable event that happens in a chaotic system if you wait long enough.

The U.S. economy, consisting of millions of interacting individuals and companies, is wildly chaotic, which is why no investment strategy has ever been found to work reliably over a long time.

Putting It Together

The way these two ideas (diversity is good, and we live in a chaotic Universe) work together is that collaborating in diverse groups is the only way to successfully navigate life in a chaotic Universe.

An individual human being is so powerless that attempting anything but the smallest task is beyond his or her capacity. The only way to do anything of significance is to collaborate with others in a diverse team.

In the late 1980s my wife and I decided to build a house. To begin with, we had to decide where to build the house. That required weeks of collaboration (by our little team of two) to combine our experiences of different communities in the area where we were living, develop scenarios of what life might be like living in each community, and finally agree on which we might like the best. Then we had to find an architect to work with our growing team to design the building. Then we had to negotiate with banks for construction loans, bridge loans, and ultimate mortgage financing. Our architect recommended adding a prime contractor who had connections with carpenters, plumbers, electricians and so forth to actually complete the work. The better part of a year later, we had our dream house.

There’s no way I could have managed even that little project – building one house – entirely on my own!

In 2015, I ran across the opportunity to produce a short film for a film festival. I knew how to write a script, run a video camera, sew a costume, act a part, do the editing, and so forth. In short, I had all the skills needed to make that 30-minute film.

Did that mean I could make it all by my onesies? Nope! By the time the thing was completed, the list of cast and crew counted over a dozen people, each with their own job in the production.

By now, I think I’ve made my point. The take-home lesson of this essay is that if you want to accomplish anything in this chaotic Universe, start by assembling a diverse team, and the more diverse, the better!

Nationalism and Diversity

Flags of many countries
Nationalism can promote diversity – or not! Brillenstimmer/shutterstock

16 January 2019 – The poster child for rampant nationalism is Hitler’s National Socialist German Workers’ Party, commonly called the Nazi Party. I say “is” rather than “was” because, while resoundingly defeated by the WW2 Allies in 1945, the Nazi Party still has widespread appeal in Germany, and throughout the world.

These folks give nationalism a bad name, leading the Oxford Living Dictionary to give primacy to the following definition of nationalism: “Identification with one’s own nation and support for its interests, especially to the exclusion or detriment of the interests of other nations.” [Emphasis added.]

The Oxford Dictionary also offers a second definition of nationalism: “Advocacy of or support for the political independence of a particular nation or people.”

This second definition is a lot more benign, and one that I wish were more often used. I certainly prefer it!

Nationalism under the first definition has been used since time immemorial as an excuse to create closed, homogeneous societies. That was probably the biggest flaw of the Nazi state(s). Death camps, ethnic cleansing, slave labor, and most of the other evils of those regimes flowed directly from their attempts to build closed, homogeneous societies.

Under the second definition, however, nationalism can, and should, be used to create a more diverse society.

That’s a good thing, as the example of United States history clearly demonstrates. Most of U.S. success can be traced directly to the country’s ethnic, cultural and racial diversity. The fact that the U.S., with a paltry 5% of the world’s population, now has by far the largest economy; that it dominates the fields of science, technology and the humanities; that its common language (American English) is fast becoming the “lingua franca” of the entire world; and that it effectively leads the world by so many measures is directly attributable to the continual renewal of its population diversity by immigration. In any of these areas, it’s easy to point out major contributions from recent immigrants or other minorities.

This harkens back to a theory of cultural development I worked out in the 1970s. It starts with the observation that all human populations – no matter how large or how small – consist of individuals whose characteristics vary somewhat. When visualized on a multidimensional scatter plot, populations generally consist of a cluster with a dense center and fewer individuals farther out.

Globular cluster image
The Great Hercules Star Cluster. Albert Barr/Shutterstock

This pattern is similar to the image of a typical globular star cluster in the photo at right. Globular star clusters exhibit this pattern in three dimensions, while human populations exist and can be mapped on a great many dimensions representing different characteristics. Everything from physical characteristics like height, weight and skin color, to non-physical characteristics like ethnicity and political ideology – essentially anything that can be measured – can be plotted as a separate dimension.

The dense center of the pattern consists of individuals whose characteristics don’t stray too far from the norm. Everyone, of course, is a little off average. For example, the average white American female is five-feet, four-inches tall. Nearly everyone in that population, however, is a little taller or shorter than exactly average. Very few are considerably taller or shorter, with more individuals closer to the average than farther out.

The population’s diversity shows up as a widening of the pattern. That is, diversity is a measure of how often individuals appear farther out from the center.

Darwin’s theory of natural selection posits that where the population center sits depends on which characteristics are most advantageous under prevailing conditions. What the average height is, for example, depends on a complex interplay of conditions, including nutrition, attractiveness to the opposite sex, and so forth.

Observing that conditions change with time, one expects the ideal center of the population should move about in the multidimensional characteristics space. Better childhood nutrition, for example, should push the population toward increased tallness. And, it does!

One hopes that these changes happen slowly, giving the population a chance to follow in response. If the changes happen too fast, however, the population is unable to respond quickly enough, and it goes extinct. Woolly mammoths, for example, were unable to respond fast enough to a combination of environmental changes and increased predation by humans migrating into North America after the last Ice Age, so they died out. No more woolly mammoths!

Assuming whatever changes occur happen slowly enough, those individuals in the part of the distribution better adapted to the new conditions do better than those on the opposite side. So, the whole population shifts with time toward characteristics that are better adapted.

Where diversity comes into this dynamic is by providing more individuals in the better-adapted part of the distribution. The faster conditions change, the more individuals you need at the edges of the population to help with the response. For example, if the climate gets warmer, it’s folks who like to wear skimpy outfits who thrive. Folks who insist on covering themselves up in heavy clothing, don’t do so well. That was amply demonstrated when Englishmen tried to wear their heavy Elizabethan outfits in the warmer North American weather conditions. Styles changed practically overnight!
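
If you’d like to watch this dynamic run, here’s a toy simulation – one measurable trait, an optimum that drifts with conditions, and survival of the half of the population nearest the optimum. Everything here (truncation selection, Gaussian mutation, the parameter values) is my own invented illustration, not a published model:

```python
import random

def track_optimum(spread, generations=100, pop=500, drift=0.05):
    """Return the final gap between the population's mean trait and a
    drifting optimum. `spread` plays the role of diversity."""
    traits = [random.gauss(0.0, spread) for _ in range(pop)]
    optimum = 0.0
    for _ in range(generations):
        optimum += drift                             # conditions change
        traits.sort(key=lambda t: abs(t - optimum))  # rank by fitness
        survivors = traits[: pop // 2]               # nearest half survives
        offspring = [t + random.gauss(0.0, spread) for t in survivors]
        traits = survivors + offspring
    return optimum - sum(traits) / len(traits)

random.seed(1)
print("low diversity, final gap:  %.2f" % track_optimum(spread=0.01))
print("high diversity, final gap: %.2f" % track_optimum(spread=0.5))
```

The low-diversity population falls steadily behind the moving optimum; the high-diversity one, with plenty of individuals already out at the edges, tracks it easily.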

Closed, homogeneous societies of the type the Nazis tried to create have low diversity. They try to suppress folks who differ from the norm. When conditions change, such societies have less of the diversity needed to respond, so they wither and die.

That’s why cultures need diversity, and the more diversity, the better.

We live in a chaotic universe. The most salient characteristic of chaotic systems is constant change. Without diversity, we can’t respond to that change.

That’s why when technological change sped up in the early Twentieth Century, it was the bohemians of the twenties developing into the beatniks of the fifties and the hippies of the sixties that defined the cultures of the seventies and beyond.

Jerry Garcia stamp image
spatuletail/shutterstock

Long live Ben and Jerry’s Cherry Garcia Ice Cream!

Robots Revisited

Engineer with SCARA robots
Engineer using monitoring system software to check and control SCARA welding robots in a digital manufacturing operation. PopTika/Shutterstock

12 December 2018 – I was wondering what to talk about in this week’s blog posting when an article bearing an interesting-sounding headline crossed my desk. The article, written by Simone Stolzoff of Quartz Media, was published last Monday (12/3/2018) by the World Economic Forum (WEF) under the title “Here are the countries most likely to replace you with a robot.”

I generally look askance at organizations with grandiose names that include the word “World,” figuring that they likely are long on megalomania and short on substance. Further, this one lists the inimitable (thank God there’s only one!) Al Gore on its Board of Trustees.

On the other hand, David Rubenstein is also on the WEF board. Rubenstein usually seems to have his head screwed on straight, so that’s a positive sign for the organization. Therefore, I figured the article might be worth reading and should be judged on its own merits.

The main content is summarized in two bar graphs. The first (Figure 1) lists the ratio of robots to thousands of manufacturing workers in various countries. The highest scores go to South Korea and Singapore. In fact, three of the top four are Far Eastern countries. The United States comes in around number seven.

The second (Figure 2) applies a correction to the graphed data to reorder the list by taking into account the countries’ relative wealth. There, the United States comes in dead last among the sixteen countries listed. East Asian countries account for all of the top five.

The take-home lesson from the article is conveniently stated in its final paragraph:

The upshot of all of this is relatively straightforward. When taking wages into account, Asian countries far outpace their western counterparts. If robots are the future of manufacturing, American and European countries have some catching up to do to stay competitive.

This article, of course, got me started thinking about automation and how manufacturers choose to adopt it. It’s a subject that was a major theme throughout my tenure as Chief Editor of Test & Measurement World and constituted the bulk of my work at Control Engineering.

The graphs certainly support the conclusions expressed in the cited paragraph’s first two sentences. The third sentence, however, is problematical.

That ultimate conclusion is based on accepting that “robots are the future of manufacturing.” Absolute assertions like that are always dangerous. Seldom is anything so all-or-nothing.

Predicting the future is epistemological suicide. Whenever I hear such bald-faced statements I recall Jim Morrison’s prescient statement: “The future’s uncertain and the end is always near.”

The line was prescient because a little over a year after the song’s release, Morrison was dead at age twenty seven, thereby fulfilling the slogan expressed by John Derek’s “Nick Romano” character in Nicholas Ray’s 1949 film Knock on Any Door: “Live fast, die young, and leave a good-looking corpse.”

Anyway, predictions like “robots are the future of manufacturing” are generally suspect because, in the chaotic Universe in which we live, the future is inherently unpredictable.

If you want to say something practically guaranteed to be wrong, predict the future!

I’d like to offer an alternate explanation for the data presented in the WEF graphs. It’s based on my belief that American Culture usually gets things right in the long run.

Yes, that’s the long run in which economist John Maynard Keynes pointed out that we’re all dead.

My belief in the ultimate vindication of American trends is based, not on national pride or jingoism, but on historical precedents. Countries that have bucked American trends often start out strong, but ultimately fade.

An obvious example is trendy Japanese management techniques based on Druckerian principles that were so much in vogue during the last half of the twentieth century. Folks imagined such techniques were going to drive the Japanese economy to pre-eminence in the world. Management consultants touted such principles as the future for corporate governance without noticing that while they were great for middle management, they were useless for strategic planning.

Japanese manufacturers beat the crap out of U.S. industry for a while, but eventually their economy fell into a prolonged recession characterized by economic stagnation and disinflation so severe that even negative interest rates couldn’t restart it.

Similar examples abound, which is why our little country with its relatively minuscule population (4.3% of the world’s) has by far the biggest GDP in the world. China, with more than four times the population, grosses less than a third of what we do.

So, if robotic adoption is the future of manufacturing, why are we so far behind? Assuming we actually do know what we’re doing, as past performance would suggest, the answer must be that the others are getting it wrong. Their faith in robotics as a driver of manufacturing productivity may be misplaced.

How could that be? What could be wrong with relying on technological advancement as the driver of productivity?

Manufacturing productivity is calculated on the basis of stuff produced (as measured by its total value in dollars) divided by the number of worker-hours needed to produce it. That should tell you something about what it takes to produce stuff. It’s all about human worker involvement.

Folks who think robots automatically increase productivity are fixating on the denominator in the productivity calculation. Making even the same amount of stuff while reducing the worker-hours needed to produce it should drive productivity up fast. That’s basic arithmetic. Yet, while manufacturing has been rapidly introducing all kinds of automation over the last few decades, productivity has stagnated.
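
The arithmetic, with numbers I’ve made up purely for illustration:

```python
def productivity(output_value_dollars, worker_hours):
    """Manufacturing productivity: dollar value produced per worker-hour."""
    return output_value_dollars / worker_hours

print(productivity(1_000_000, 10_000))  # 100 $/hr -- the baseline plant
print(productivity(1_000_000, 5_000))   # 200 $/hr -- halve the hours, double the ratio
print(productivity(600_000, 5_000))     # 120 $/hr -- but if output value slips, the gain evaporates
```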

We need to look for a different explanation.

It just might be that robotic adoption is another example of too much of a good thing. It might be that reliance on technology could prove to be less effective than something about the people making up the work force.

I’m suggesting that because I’ve been led to believe that work forces in the Far Eastern developing economies are less skillful, may have lower expectations, and are more tolerant of authoritarian governments.

Why would those traits make a difference? I’ll take them one at a time to suggest how they might.

The impression that Far Eastern populations are less skillful is not easy to demonstrate. Nobody who’s dealt with people of Asian extraction in either an educational or work-force setting would ever imagine they are at all deficient in either intelligence or motivation. On the other hand, as emerging or developing economies those countries are likely more dependent on workers newly recruited from rural, agrarian settings, who are likely less acclimated to manufacturing and industrial environments. On this basis, one may posit that the available workers may prove less skillful in a manufacturing setting.

It’s a weak argument, but it exists.

The idea that people making up Far-Eastern work forces have lower expectations than those in more developed economies is on firmer footing. Workers in Canada, the U.S. and Europe have very high expectations for how they should be treated. Wages are higher. Benefits are more generous. Upward mobility perceptions are ingrained in the cultures.

For developing economies, not so much.

Then, we come to tolerance of authoritarian regimes. Tolerance of authoritarianism goes hand-in-hand with tolerance for the usual authoritarian vices of graft, lack of personal freedom and social immobility. Only those believing populist political propaganda think differently (which is the danger of populism).

What’s all this got to do with manufacturing productivity?

Lack of skill, low expectations and patience under authority are not conducive to high productivity. People are productive when they work hard. People work hard when they are incentivized. They are incentivized to work when they believe that working harder will make their lives better. It’s not hard to grasp!

Installing robots in a plant won’t by itself lead human workers to believe that working harder will make their lives better. If anything, it’ll do the opposite. They’ll start worrying that their lives are about to take a turn for the worse.

Maybe that has something to do with why increased automation has failed to increase productivity.

Reaping the Whirlwind

Tornado
Powerful Tornado destroying property, with lightning in the background. Solarseven/Shutterstock.com

24 October 2018 – “They sow the wind, and they shall reap the whirlwind” is a saying from The Holy Bible’s Old Testament Book of Hosea. I’m certainly not a Bible scholar, but, having been paying attention for seven decades, I can attest to the saying’s validity.

The equivalent Buddhist concept is karma, which is the motive force driving the Wheel of Birth and Death. It is also wrapped up with samsara, which is epitomized by the saying: “What goes around comes around.”

Actions have consequences.

If you smoke a pack of Camels a day, you’re gonna get sick!

By now, you should have gotten the idea that “reaping the whirlwind” is a common theme among the world’s religions and philosophies. You’ve got to be pretty stone headed to have missed it.

Apparently the current President of the United States (POTUS), Donald J. Trump, has been stone headed enough to miss it.

POTUS is well known for trying to duck consequences of his actions. For example, during his 2016 Presidential Election campaign, he went out of his way to capitalize on Wikileaks’ publication of emails stolen from Hillary Clinton’s private email server. That indiscretion and his attempt to cover it up by firing then-FBI-Director James Comey grew into a Special Counsel Investigation, which now threatens to unmask all the nefarious activities he’s engaged in throughout his entire life.

Of course, Hillary’s unsanctioned use of that private email server while serving as Secretary of State is what opened her up to the email hacking in the first place! That error came back to bite her in the backside by giving the Russians something to hack. They then forwarded that junk to Wikileaks, who eventually made it public, arguably costing her the 2016 Presidential election.

Or, maybe it was her standing up for her philandering husband, or maybe lingering suspicions surrounding the pair’s involvement in the Whitewater scandal. Whatever the reason(s), Hillary, too, reaped the whirlwind.

In his turn, Russian President Vladimir Putin sowed the wind by tasking operatives to do the hacking of Hillary’s email server. Now he’s reaping the whirlwind in the form of a laundry list of sanctions by western governments and Special Counsel Investigation indictments against the operatives he sent to do the hacking.

Again, POTUS showed his stone-headedness about the Bible verse by cuddling up to nearly every autocrat in the world: Vlad Putin, Kim Jong Un, Xi Jinping, … . The list goes on. Sensing waves of love emanating from Washington, those idiots have become ever more extravagant in their misbehavior.

The latest example of an authoritarian regime rubbing POTUS’ nose in filth is the apparent murder and dismemberment of Saudi Arabian journalist Jamal Khashoggi when he briefly entered the Saudi consulate in Istanbul on personal business.

The most popular theory of the crime lays blame at the feet of Mohammad Bin Salman Al Saud (MBS), Crown Prince of Saudi Arabia and the country’s de facto ruler. Unwilling to point his finger at another would-be autocrat, POTUS is promoting a Saudi cover-up attempt suggesting the murder was done by some unnamed “rogue agents.”

Actually, that theory deserves some consideration. The idea that MBS was emboldened (spelled S-T-U-P-I-D) enough to have ordered Khashoggi’s assassination in such a ham-fisted way strains credulity. We should consider the possibility that ultra-conservative Wahhabist factions within the Saudi government, who see MBS’ reforms as a threat to their historical patronage from the oil-rich Saudi monarchy, might have created the incident to embarrass MBS.

No matter what the true story is, the blowback is a whirlwind!

MBS has gone out of his way to promote himself as a business-friendly reformer. This reputation has persisted despite repeated instances of continued repression in the country he controls.

The whirlwind, however, is threatening MBS’ and the Saudi monarchy’s standing in the international community. In particular, international bankers, led by JP Morgan Chase’s Jamie Dimon, and a host of Silicon Valley tech companies are running for the exits from Saudi Arabia’s three-day Future Investment Initiative conference, which was scheduled to start Tuesday (23 October 2018).

That is a major embarrassment and will likely derail MBS’ efforts to modernize Saudi Arabia’s economy away from dependence on oil revenue.

It appears that these high-powered executives are rethinking the wisdom of dealing with the authoritarian Saudi regime. They’ve decided not to sow the wind by dealing with the Saudis because they don’t want to reap the whirlwind likely to result!

Update

Since this manuscript was drafted, it’s become clear that we’ll never get the full story about the Khashoggi incident. Both regimes involved (Turkey and Saudi Arabia) are authoritarian, with no incentive to be honest about this story. While Saudi Arabia seems to make a pretense of press freedom, this incident shows their true colors (i.e., color them repressive). Turkey hasn’t given even a passing nod to press freedom for years. It’s like two rival foxes telling the dog about a hen-house break-in.

On the “dog” side, we’re stuck with a POTUS who attacks press freedom on a daily basis. So, who’s going to ferret out the truth? Maybe the Brits or the French, but not the U.S. Executive Branch!

Climate Models Bat Zero

Climate models vs. observations
Whups! Since the 1970s, climate models have overestimated global temperature rise by … a lot! Cato Institute

The articles discussed here reflect the print version of The Wall Street Journal, rather than the online version. Links to online versions are provided. The publication dates and some of the contents do not match.

10 October 2018 – Baseball is well known to be a game of statistics. Fans pay as much attention to statistical analysis of performance by both players and teams as they do to action on the field. They hope to use those statistics to indicate how teams and individual players are likely to perform in the future. It’s an intellectual exercise that is half the fun of following the sport.

While baseball statistics are quoted to three decimal places, or one part in a thousand, fans know to ignore the last decimal place, be skeptical of the second decimal place, and recognize that even the first decimal place has limited predictive power. It’s not that these statistics are inaccurate or in any sense meaningless, it’s that they describe a situation that seems predictable, yet is full of surprises.

With 18 players in a game at any given time, a complex set of rules, and at least three players and an umpire involved in the outcome of every pitch, a baseball game is a highly chaotic system. What makes it fun is seeing how this system evolves over time. Fans get involved by trying to predict what will happen next, then quickly seeing if their expectations materialize.

The essence of a chaotic system is conditional unpredictability. That is, the predictability of any result drops more-or-less drastically with time. For baseball, the probability of, say, a player maintaining their batting average is fairly high on a weekly basis, drops drastically on a month-to-month basis, and simply can’t be predicted from year to year.

Folks call that “streakiness,” and it’s one of the hallmarks of mathematical chaos.
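
You can watch conditional unpredictability develop in a few lines of code. The logistic map is the textbook chaotic system – nothing to do with baseball, but the signature is identical. Two trajectories that start a millionth apart agree at first, then part company completely:

```python
def logistic_orbit(x0, r=4.0, steps=30):
    """Iterate the logistic map x -> r * x * (1 - x), a standard chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.400000)
b = logistic_orbit(0.400001)  # identical to one part in a million

for step in (1, 5, 10, 20, 30):
    print(f"step {step:2d}: difference = {abs(a[step] - b[step]):.6f}")
# The difference is negligible early on, then grows to the size of the signal itself:
# short-term predictions work, long-term predictions are a crapshoot.
```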

Since the 1960s, mathematicians have recognized that weather is also chaotic. You can say with certainty what’s happening right here right now. If you make careful observations and take into account what’s happening at nearby locations, you can be fairly certain what’ll happen an hour from now. What will happen a week from now, however, is a crapshoot.

This drives insurance companies crazy. They want to live in a deterministic world where they can predict their losses far into the future so that they can plan to have cash on hand (loss reserves) to cover them. That works for, say, life insurance. It works poorly for losses due to extreme-weather events.

That’s because weather is chaotic. Predicting catastrophic weather events next year is like predicting Miami Marlins pitcher Drew Steckenrider’s earned-run average for the 2019 season.

Laugh out loud.

Notes from 3 October

My BS detector went off big-time when I read an article in this morning’s Wall Street Journal entitled “A Hotter Planet Reprices Risk Around the World.” That headline is BS for many reasons.

Digging into the article turned up the assertion that insurance providers were using deterministic computer models to predict risk of losses due to future catastrophic weather events. The article didn’t say that explicitly. We have to understand a bit about computer modeling to see what’s behind the words they used. Since I’ve been doing that stuff since the 1970s, pulling aside the curtain is fairly easy.

I’ve also covered risk assessment in industrial settings for many years. It’s not done with deterministic models. It’s not even done with traditional mathematics!

The traditional mathematics you learned in grade school uses real numbers. That is, numbers with a definite value.

Like Pi.

Pi = 3.1415926 ….

We know what Pi is because it’s measurable. It’s the ratio of a circle’s circumference to its diameter.

Measure the circumference. Measure the diameter. Then divide one by the other.

The ancient Egyptians performed the exercise a kazillion times and noticed that, no matter what circle you used, no matter how big it was, whether you drew it on papyrus or scratched it on a rock or laid it out in a crop circle, you always came out with the same number. That number eventually picked up the name “Pi.”
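
You can redo the Egyptians’ exercise numerically. This sketch “measures” the circumference Archimedes-style – inscribe a hexagon (whose side exactly equals the radius, so no knowledge of Pi is needed), keep doubling the number of sides, and divide the perimeter by the diameter. The size of the circle makes no difference:

```python
import math

def measured_pi(diameter, doublings=20):
    """Perimeter of an inscribed polygon divided by the circle's diameter."""
    r = diameter / 2.0
    n, s = 6, r  # start with a hexagon: 6 sides, each of length r
    for _ in range(doublings):
        half = s / 2.0
        h = math.sqrt(r * r - half * half)         # center-to-chord distance
        s = math.sqrt(half * half + (r - h) ** 2)  # new, shorter side
        n *= 2
    return n * s / diameter

for d in (1.0, 7.0, 1_000_000.0):  # papyrus-sized to crop-circle-sized
    print(d, measured_pi(d))       # the same 3.14159265... every time
```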

Risk assessment is NOT done with traditional arithmetic using deterministic (real) numbers. It’s done using what’s called “fuzzy logic.”

Fuzzy logic is not like the fuzzy thinking used by newspaper reporters writing about climate change. The “fuzzy” part simply means it uses fuzzy categories like “small,” “medium” and “large” that don’t have precisely defined values.

While computer programs are perfectly capable of dealing with fuzzy logic, they won’t give you the kind of answers cost accountants are prepared to deal with. They won’t tell you that you need a risk-reserve allocation of $5,937,652.37. They’ll tell you something like “lots!”

You can’t take “lots” to the bank.
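
For the curious, here’s roughly what fuzzy categories look like in code. This toy is entirely my own – the category ranges are invented, and real actuarial models are far more elaborate – but it shows why the output is a set of membership grades rather than a dollar figure:

```python
def membership(x, lo, peak, hi):
    """Triangular fuzzy membership: the degree (0..1) to which x belongs."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def classify_loss(millions):
    """Fuzzy categories for an annual loss figure in $ millions (made-up ranges)."""
    return {
        "small":  membership(millions, -1, 0, 5),
        "medium": membership(millions, 2, 7, 12),
        "lots":   membership(millions, 8, 20, 1000),
    }

print(classify_loss(10))  # overlapping grades: somewhat "medium," a little "lots"
print(classify_loss(25))  # nearly pure "lots" -- and you can't take "lots" to the bank
```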

The next problem is imagining that global climate models could have any possible relationship to catastrophic weather events. Catastrophic weather events are, by definition, catastrophic. To analyze them you need the kind of mathematics called “catastrophe theory.”

Catastrophe theory is one of the underpinnings of chaos. In Steven Spielberg’s 1993 movie Jurassic Park, the character Ian Malcolm tries to illustrate catastrophe theory with the example of a drop of water rolling off the back of his hand. Whether it drips off to the right or left depends critically on how his hand is tipped. A small change creates an immense difference.

If a ball is balanced at the edge of a table, it can either stay there or drop off, and you can’t predict in advance which will happen.

That’s the thinking behind catastrophe theory.
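
The ball at the edge of the table takes only a few lines to simulate. Balance a ball atop a potential hill and the slightest nudge – here, one billionth of a unit – decides everything. This is a toy integration of my own, not anybody’s weather model:

```python
def ball_on_hill(x0, steps=2000, dt=0.01):
    """Euler-integrate a ball on the crest of a hill (potential V = -x**2).
    The force pushes it away from x = 0, so any offset grows exponentially."""
    x, v = x0, 0.0
    for _ in range(steps):
        v += 2.0 * x * dt  # force = -dV/dx = +2x
        x += v * dt
    return x

print(ball_on_hill(+1e-9))  # a large positive number: it fell off to the right
print(ball_on_hill(-1e-9))  # the mirror image: it fell off to the left
print(ball_on_hill(0.0))    # 0.0 -- balanced forever, but only in a perfect world
```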

The same analysis goes into predicting what will happen with a hurricane. As I recall, at the time Hurricane Florence (2018) made landfall, most models predicted it would move south along the Appalachian Ridge. Another group of models predicted it would stall out to the northwest.

When push came to shove, however, it moved northeast.

What actually happened depended critically on a large number of details that were too small to include in the models.

How much money was lost due to storm damage was a result of the result of unpredictable things. (That’s not an editing error. It was really the second order result of a result.) It is a fundamentally unpredictable thing. The best you can do is measure it after the fact.

That brings us to comparing climate-model predictions with observations. We’ve got enough data now to see how climate-model predictions compare with observations on a decades-long timescale. The graph above summarizes results compiled in 2015 by the Cato Institute.

Basically, it shows that, not only did the climate models overestimate the temperature rise from the late 1970s to 2015 by a factor of approximately three, but in the critical last decade, when the computer models predicted a rapid rise, the actual observations showed that it nearly stalled out.

Notice that the divergence between the models and the observations increased with time. As I’ve said, that’s the main hallmark of chaos.

It sure looks like the climate models are batting zero!

I’ve been watching these kinds of results show up since the 1980s. It’s why, by the late 1990s, I started discounting statements like the WSJ article’s: “A consensus of scientists puts blame substantially on emissions of greenhouse gases from cars, farms and factories.”

I don’t know who those “scientists” might be, but it sounds like they’re assigning blame for an effect that isn’t observed. Real scientists wouldn’t do that. Only politicians would.

Clearly, something is going on, but what it is, what its extent is, and what is causing it is anything but clear.

In the data depicted above, the results from global climate modeling do not look at all like the behavior of a chaotic system. The data from observations, however, do look like what we typically get from a chaotic system. Stuff moves constantly. On short time scales it shows apparent trends. On longer time scales, however, the trends tend to evaporate.

No wonder observers like Steven Pacala, who is Frederick D. Petrie Professor in Ecology and Evolutionary Biology at Princeton University and a board member at Hamilton Insurance Group, Ltd., are led to say (as quoted in the article): “Climate change makes the historical record of extreme weather an unreliable indicator of current risk.”

When you’re dealing with a chaotic system, the longer the record you’re using, the less predictive power it has.

Duh!

Another point made in the WSJ article that I thought was hilarious involved prediction of hurricanes in the Persian Gulf.

According to the article, “Such cyclones … have never been observed in the Persian Gulf … with new conditions due to warming, some cyclones could enter the Gulf in the future and also form in the Gulf itself.”

This sounds a lot like a tongue-in-cheek comment I once heard from astronomer Carl Sagan about predictions of life on Venus. He pointed out that when astronomers observe Venus, they generally just see a featureless disk. Science fiction writers had developed a chain of inferences that led them from that observation of a featureless disk to imagining total cloud cover, then postulating underlying swamps teeming with life, and culminating with imagining the existence of Venusian dinosaurs.

Observation: “We can see nothing.”

Conclusion: “There are dinosaurs.”

Sagan was pointing out that, though it may make good science fiction, that is bad science.

The WSJ reporters, Bradley Hope and Nicole Friedman, went from “No hurricanes ever seen in the Persian Gulf” to “Future hurricanes in the Persian Gulf” by the same sort of logic.

The kind of specious misinformation represented by the WSJ article confuses folks who have genuine concerns about the environment. Before politicians like Al Gore hijacked the environmental narrative, deflecting it toward climate change, folks paid much more attention to the real environmental issue of pollution.

Insurance losses from extreme weather events
Actual insurance losses due to catastrophic weather events show a disturbing trend.

The one bit of information in the WSJ article that appears prima facie disturbing is contained in the graph at right.

The graph shows actual insurance losses due to catastrophic weather events increasing rapidly over time. The article draws the inference that this trend is caused by human-induced climate change.

That’s quite a stretch, considering that there are obvious alternative explanations for this trend. The most likely alternative is that folks have been building more stuff in hurricane-prone areas. With more expensive stuff there to get damaged, insurance losses will rise.

Again: duh!

Invoking Occam’s Razor (prefer the explanation that requires the fewest assumptions), we tend to favor the second explanation.

In summary, I conclude that the 3 October article is poor reporting that draws conclusions that are likely false.

Notes from 4 October

Don’t try to find the 3 October WSJ article online. I spent a couple of hours this morning searching for it, and came up empty. The closest I was able to get was a draft version that I found by searching on Bradley Hope’s name. It did not appear on WSJ’s public site.

Apparently, WSJ’s editors weren’t any more impressed with the article than I was.

The 4 October issue presents a corroboration of my alternative explanation of the trend in insurance-loss data: it’s due to a build-up of expensive real estate in areas prone to catastrophic weather events.

In a half-page exposé entitled “Hurricane Costs Grow as Population Shifts,” Kara Dapena reports that, “From 1980 to 2017, counties along the U.S. shoreline that endured hurricane-strength winds from Florence in September experienced a surge in population.”

In the end, this blog posting serves as an illustration of four points I tried to make last month. Specifically, on 19 September I published a post entitled: “Noble Whitefoot or Lying Blackfoot?” in which I laid out four principles to use when trying to determine if the news you’re reading is fake. I’ll list them in reverse of the order I used in last month’s posting, partly to illustrate that there is no set order for them:

  • Nobody gets it completely right – In the 3 October WSJ story, the reporters got snookered by the climate-change political lobby. That article, taken at face value, has to be stamped with the label “fake news.”
  • Does it make sense to you? – The 3 October fake news article set off my BS detector by making a number of statements that did not make sense to me.
  • Comparison shopping for ideas – Assertions in the suspect article contradicted numerous other sources.
  • Consider your source – The article’s source (The Wall Street Journal) is one that I normally trust. Otherwise, I likely never would have seen it, since I don’t bother listening to anyone I catch in a lie. My faith in the publication was restored when the next day they featured an article that corrected the misinformation.

Doing Business with Bad Guys

Threatened with a gun
Authoritarians make dangerous business partners. rubikphoto/Shutterstock

3 October 2018 – Parents generally try to drum into their children’s heads a simple maxim: “People judge you by the company you keep.”

Children (and we’re all children, no matter how mature and sophisticated we pretend to be) just as generally find it hard to follow that maxim. We all screw it up once in a while by succumbing to the temptation of some perceived advantage to be had by dealing with some unsavory character.

Large corporations and national governments are at least as likely to succumb to the prospect of making a fast buck or signing some treaty with peers who don’t entertain the same values we have (or at least pretend to have). Governments, especially, have a tough time in dealing with what I’ll call “Bad Guys.”

Let’s face it, better than half the nations of the world are run by people we wouldn’t want in our living rooms!

I’m specifically thinking about totalitarian regimes like the People’s Republic of China (PRC).

‘Way back in the last century, Mao Tse-tung (or Mao Zedong, depending on how you choose to mis-spell the anglicization of his name) clearly placed China on the “Anti-American” team, espousing a virulent form of Marxism and descending into the totalitarian authoritarianism Marxist regimes are so prone to. This situation continued from the PRC’s founding in 1949 through 1972, when notoriously authoritarian-friendly U.S. President Richard Nixon toured China in an effort to start a trade relationship between the two countries.

Greedy U.S. corporations quickly started falling all over themselves in an effort to gain access to China’s enormous potential market. Mesmerized by the statistics of more than a billion people spread out over China’s enormous land mass, they ignored the fact that those people were struggling in a subsistence-agriculture economy that had collapsed under decades of mis-management by Mao’s authoritarian regime.

What they hoped those generally dirt-poor peasants were going to buy from them I never could figure out.

Unfortunately, years later I found myself embedded in the management of one of those starry-eyed multinational corporations that was hoping to take advantage of the developing Chinese electronics industry. Fresh off our success launching Test & Measurement Europe, they wanted to launch a new publication called Test & Measurement China. Recalling the then-recent calamity ending the Tiananmen Square protests of 1989, I pulled a Nancy Reagan and just said “No.”

I pointed out that the PRC was still run by a totalitarian, authoritarian regime, and that you just couldn’t trust those guys. You never knew when they were going to decide to sacrifice you on the altar of internal politics.

Today, American corporations are seeing the mistakes they made in pursuit of Chinese business, which, like Robert Southey’s chickens, are coming home to roost. In 2015, Chinese Premier Li Keqiang announced the “Made in China 2025” plan to make China the world’s technology leader. It quickly became apparent that Mao’s current successor, Xi Jinping, intends to achieve his goals by building on technology pilfered from western companies who’d naively partnered with Chinese firms.

Now, their only protector is another authoritarian-friendly president, Donald Trump. Remember, it was Trump who, following his ill-advised summit with North Korean strongman Kim Jong Un, got caught on video enviously saying: “He speaks, and his people sit up at attention. I want my people to do the same.”

So, now these corporations have to look to an American would-be dictator for protection from an entrenched Chinese dictator. No wonder they find themselves screwed, blued, and tattooed!

Governments are not immune to the PRC’s siren song, either. Pundits are pointing out that the PRC’s vaunted “One Belt, One Road” initiative is likely an example of “debt-trap diplomacy.”

Debt-trap diplomacy is a strategy similar to organized crime’s loan-shark operations. An unscrupulous cash-rich organization, the loan shark, offers funds to a cash-strapped individual, such as an ambitious entrepreneur, in a deal that seems too good to be true. It is too good to be true, because the deal comes in the form of a loan at terms that nearly guarantee the debtor will default. The shark then offers to write off the debt in exchange for the debtor’s participation in some unsavory scheme, such as money laundering.
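To see what “terms that nearly guarantee default” can look like, here’s a minimal sketch with entirely hypothetical numbers; only the annuity payment formula is standard:

```python
# Hypothetical loan-shark arithmetic. All figures are made up;
# the annuity payment formula is the standard one.

def annual_payment(principal, rate, years):
    """Level annual payment on a fixed-rate loan (annuity formula)."""
    return principal * rate / (1.0 - (1.0 + rate) ** -years)

principal = 100e6      # hypothetical $100M loan
rate = 0.12            # hypothetical 12% annual interest
years = 10

payment = annual_payment(principal, rate, years)
borrower_income = 8e6  # hypothetical income available for debt service

print(f"required payment: ${payment / 1e6:.1f}M per year")
print(f"available income: ${borrower_income / 1e6:.1f}M per year")
# required payment: $17.7M per year -- more than double what the
# borrower can pay, so default is baked into the deal.
```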

In the debt-trap diplomacy version, the PRC stands in the place of the loan shark while some emerging-economy nation, such as, say, Malaysia, accepts the unsupportable debt. In the PRC/Malaysia case, the unsavory scheme is helping support China’s imperial ambitions in the western Pacific.

Earlier this month, Malaysia wisely backed out of the deal.

It’s not just the post-Maoist PRC that makes a dangerous place for western corporations to do business. Authoritarians all over the world treat people like the victim in Heart’s “Barracuda.” They suck you in with mesmerizing bright and shiny promises, then leave you twisting in the wind.

Yes, I’ve piled up a whole mess of mixed metaphors here, but I’m trying to drive home a point!

Another example of the traps business people can get into by trying to deal with authoritarians is afforded by Danske Bank’s Estonia branch and its dealings with Vladimir Putin‘s Russian kleptocracy. Danske Bank is a Danish financial institution with a pan-European footprint and global ambitions. A recently released internal report, produced for Danske Bank by the Danish law firm Bruun & Hjejle, says that the Estonia branch engaged in “dodgy dealings” with numerous corrupt Russian officials. Basically, the bank set up a scheme to launder money stolen from Russian tax receipts by organized criminals.

The scandal broke in Russia in June of 2007, when dozens of police officers raided the Moscow offices of Hermitage Capital Management, an activist fund focused on global emerging markets. A coverup by Kremlin authorities resulted in the death (while in a Russian prison) of Sergei Leonidovich Magnitsky, a Russian tax accountant who specialized in anti-corruption activities.

Magnitsky’s case became an international cause célèbre. The U.S. Congress and President Barack Obama enacted the Magnitsky Act at the end of 2012, barring, among others, those Russian officials believed to be involved in Magnitsky’s death from entering the United States or using its banking system.

Apparently, the purpose of the infamous Trump Tower meeting of June 9, 2016 was, on the Russian side, an effort to secure repeal of the Magnitsky Act should then-candidate Trump win the election. The Russians dangled release of stolen emails incriminating Trump-rival Hillary Clinton as bait. This activity started the whole Mueller Investigation, which has so far resulted in dozens of indictments for federal crimes, and at least eight guilty pleas or convictions.

The latest business strung up in this mega-scandal was the whole corrupt banking system of Cyprus, whose laundering of Russian oligarchs’ money amounted to over $20B.

The moral of this story is: Don’t do business with bad guys, no matter how good they make the deal look.

The Future Role of AI in Fact Checking

Reality Check
Advanced computing may someday help sort fact from fiction. Gustavo Frazao/Shutterstock

8 August 2018 – This guest post is contributed under the auspices of Trive, a global, decentralized truth-discovery engine. Trive seeks to provide news aggregators and outlets a superior means of fact checking and news-story verification.

by Barry Cousins, Guest Blogger, Info-Tech Research Group

In a recent project, we looked at blockchain startup Trive and their pursuit of a fact-checking truth database. It seems obvious and likely that competition will spring up. After all, who wouldn’t want to guarantee the preservation of facts? Or, perhaps it’s more that lots of people would like to control what is perceived as truth.

With the recent coming-out party of IBM’s Debater, this next step in our journey brings Artificial Intelligence into the conversation … quite literally.

As an analyst, I’d like to have a universal fact checker. Something like the carbon monoxide detectors on each level of my home. Something that would sound an alarm when there’s danger of intellectual asphyxiation from choking on the baloney put forward by certain sales people, news organizations, governments, and educators, for example.

For most of my life, we would simply have turned to academic literature for credible truth. There is now enough legitimate doubt to make us seek out a new model or, at a minimum, to augment that academic model.

I don’t want to be misunderstood: I’m not suggesting that all news and education is phony baloney. And, I’m not suggesting that the people speaking untruths are always doing so intentionally.

The fact is, we don’t have anything close to a recognizable database of facts from which we can base such analysis. For most of us, this was supposed to be the school system, but sadly, that has increasingly become politicized.

But even if we had the universal truth database, could we actually use it? For instance, how would we tap into the right facts at the right time? The relevant facts?

If I’m looking into the sinking of the Titanic, is it relevant to study the facts behind the ship’s manifest? It might be interesting, but would it prove to be relevant? Does it have anything to do with the iceberg? Would that focus on the manifest impede my path to insight on the sinking?

It would be great to have Artificial Intelligence advising me on these matters. I’d make the ultimate decision, but it would be awesome to have something like the Star Trek computer sifting through the sea of facts for that which is relevant.

Is AI ready? IBM recently showed that it’s certainly coming along.

Is the sea of facts ready? That’s a lot less certain.

Debater holds its own

In June 2018, IBM unveiled the latest in Artificial Intelligence with Project Debater in a small event with two debates: “we should subsidize space exploration”, and “we should increase the use of telemedicine”. The opponents were credentialed experts, and Debater was arguing from a position established by “reading” a large volume of academic papers.

The result? From what we can tell, the humans were more persuasive while the computer was more thorough. Hardly surprising, perhaps. I’d like to watch the full debates but haven’t located them yet.

Debater is intended to help humans enhance their ability to persuade. According to IBM researcher Ranit Aharonov, “We are actually trying to show that a computer system can add to our conversation or decision making by bringing facts and doing a different kind of argumentation.”

So this is an example of AI. I’ve been trying to distinguish between automation and AI, machine learning, deep learning, etc. I don’t need to nail that down today, but I’m pretty sure that my definition of AI includes genuine cognition: the ability to identify facts, comb out the opinions and misdirection, incorporate the right amount of intention bias, and form decisions and opinions with confidence while remaining watchful for one’s own errors. I’ll set aside any obligation to admit and react to one’s own errors, choosing to assume that intelligence includes the interest in, and awareness of, one’s ability to err.

Mark Klein, Principal Research Scientist at M.I.T., helped with that distinction between computing and AI. “There needs to be some additional ability to observe and modify the process by which you make decisions. Some call that consciousness, the ability to observe your own thinking process.”

Project Debater represents an incredible leap forward in AI. It was given access to a large volume of academic publications, and it developed its debating chops through machine learning. The capability of the computer in those debates resembled the results that humans would get from reading all those papers, assuming you can conceive of a way that a human could consume and retain that much knowledge.

Beyond spinning away on publications, are computers ready to interact intelligently?

Artificial? Yes. But, Intelligent?

According to Dr. Klein, we’re still far away from that outcome. “Computers still seem to be very rudimentary in terms of being able to ‘understand’ what people say. They (people) don’t follow grammatical rules very rigorously. They leave a lot of stuff out and rely on shared context. They’re ambiguous or they make mistakes that other people can figure out. There’s a whole list of things like irony that are completely flummoxing computers now.”

Dr. Klein’s PhD in Artificial Intelligence from the University of Illinois leaves him particularly well-positioned for this area of study. He’s primarily focused on using computers to enable better knowledge sharing and decision making among groups of humans. Thus, he confronts the potentially debilitating question of what constitutes knowledge: what separates fact from opinion from conjecture.

His field of study focuses on the intersection of AI, social computing, and data science. A central theme involves responsibly working together in a structured collective intelligence life cycle: Collective Sensemaking, Collective Innovation, Collective Decision Making, and Collective Action.

One of the key outcomes of Klein’s research is “The Deliberatorium”, a collaboration engine that adds structure to mass participation via social media. The system ensures that contributors create a self-organized, non-redundant summary of the full breadth of the crowd’s insights and ideas. This model avoids the risks of ambiguity and misunderstanding that impede the success of AI interacting with humans.

Klein provided a deeper explanation of the massive gap between AI and genuine intellectual interaction. “It’s a much bigger problem than being able to parse the words, make a syntax tree, and use the standard Natural Language Processing approaches.”

“Natural Language Processing breaks up the problem into several layers. One of them is syntax processing, which is to figure out the nouns and the verbs and figure out how they’re related to each other. The second level is semantics, which is having a model of what the words mean. That ‘eat’ means ‘ingesting some nutritious substance in order to get energy to live’. For syntax, we’re doing OK. For semantics, we’re doing kind of OK. But the part where it seems like Natural Language Processing still has light years to go is in the area of what they call ‘pragmatics’, which is understanding the meaning of something that’s said by taking into account the cultural and personal contexts of the people who are communicating. That’s a huge topic. Imagine that you’re talking to a North Korean. Even if you had a good translator there would be lots of possibility of huge misunderstandings because your contexts would be so different, the way you try to get across things, especially if you’re trying to be polite, it’s just going to fly right over each other’s head.”
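As a concrete, if hedged, illustration of the layering Klein describes: the short sketch below uses the spaCy library (it assumes you’ve installed spaCy and its small English model). The syntax layer is handled well; nothing in the output even hints at pragmatics.

```python
# The syntax layer works; the pragmatics layer is simply absent.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Oh great, another meeting.")

# The parser recovers parts of speech and grammatical structure...
for token in doc:
    print(f"{token.text:10} {token.pos_:6} {token.dep_:10} -> {token.head.text}")

# ...but nothing in this output registers that the sentence is
# probably sarcastic. That judgment lives in the pragmatics layer,
# which Klein says is still light years away.
```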

To make matters much worse, our communications are filled with cases where we ought not be taken quite literally. Sarcasm, irony, idioms, etc. make it difficult enough for humans to understand, given the incredible reliance on context. I could just imagine the computer trying to validate something that starts with, “John just started World War 3…”, or “Bonnie has an advanced degree in…”, or “That’ll help…”

A few weeks ago, I wrote that I’d won $60 million in the lottery. I was being sarcastic, and (if you ask me) humorous in talking about how people decide what’s true. Would that research interview be labeled as fake news? Technically, I suppose it was. Now that would be ironic.

Klein summed it up with, “That’s the kind of stuff that computers are really terrible at and it seems like that would be incredibly important if you’re trying to do something as deep and fraught as fact checking.”

Centralized vs. Decentralized Fact Model

It’s self-evident that we have to be judicious in our management of the knowledge base behind an AI fact-checking model, and it’s reasonable to assume that AI will retain and project any subjective bias embedded in the underlying body of ‘facts’.

We’re facing competing models for the future of truth, based on the question of centralization. Do you trust yourself to deduce the best answer to challenging questions, or do you prefer to simply trust the authoritative position? Well, consider that there are centralized models with obvious bias behind most of our sources. The tech giants are all filtering our news and likely having more impact than powerful media editors. Are they unbiased? The government is dictating most of the educational curriculum in our model. Are they unbiased?

That centralized truth model should be raising alarm bells for anyone paying attention. Instead, consider a truly decentralized model where no corporate or government interest is influencing the ultimate decision on what’s true. And consider that the truth is potentially unstable. Establishing the initial position on facts is one thing, but the ability to change that view in the face of more information is likely the bigger benefit.

A decentralized fact model without commercial or political interest would openly seek out corrections. It would critically evaluate new knowledge and objectively re-frame the previous position whenever warranted. It would communicate those changes without concern for timing, or for the social or economic impact. It quite simply wouldn’t consider or care whether or not you liked the truth.

The model proposed by Trive appears to meet those objectivity criteria and is getting noticed as more people tire of left-vs-right and corporatocracy preservation.

IBM Debater seems like it would be able to engage in critical thinking that would shift influence towards a decentralized model. Hopefully, Debater would view the denial of truth as subjective and illogical. With any luck, the computer would confront that conduct directly.

IBM’s AI machine already can examine tactics and style. In a recent debate, it coldly scolded the opponent with: “You are speaking at the extremely fast rate of 218 words per minute. There is no need to hurry.”

Debater can obviously play the debate game while managing massive amounts of information and determining relevance. As it evolves, it will need to rely on the veracity of that information.

Trive and Debater seem to be a complement to each other, so far.

Author Bio

Barry Cousins

Barry Cousins is a Research Lead at Info-Tech Research Group, specializing in Project Portfolio Management, Help/Service Desk, and Telephony/Unified Communications. He brings an extensive background in technology, IT management, and business leadership.

About Info-Tech Research Group

Info-Tech Research Group is a fast-growing IT research and advisory firm. Founded in 1997, Info-Tech produces unbiased and highly relevant IT research solutions. Since 2010, McLean & Company, a division of Info-Tech, has provided the same unmatched expertise to HR professionals worldwide.

Who’s NOT a Creative?

 

Compensating sales
Close-up Of A Business Woman Giving Cheque To Her Colleague At Workplace In Office. Andrey Popov/Shutterstock

25 July 2018 – Last week I made a big deal about the things that motivate creative people, such as magazine editors, and how the most effective rewards were non-monetary. I also said that monetary rewards, such as commissions based on sales results, were exactly the right rewards to use for salespeople. That would imply that salespeople were somehow different from others, and maybe even not creative.

That is not the impression I want to leave you with. I’m devoting this blog posting to setting that record straight.

My remarks last week were based on Maslow‘s and Herzberg‘s work on motivation of employees. I suggested that these theories were valid in other spheres of human endeavor. Let’s be clear about this: yes, Maslow’s and Herzberg’s theories are valid and useful in general, whenever you want to think about motivating normal, healthy human beings. It’s incidental that those researchers were focused on employer/employee relations as an impetus to their work. If they’d been focused on anything else, their conclusions would probably have been pretty much the same.

That said, there is a whole class of people for whom monetary compensation is the holy grail of motivators. They are generally very high-functioning individuals who are in no way pathological. On the surface, however, their preferred rewards appear to be monetary.

Traditionally, observers who don’t share this reward system have indicted these individuals as “greedy.”

I, however, dispute that conclusion. Let me explain why.

When pointing out the rewards that can be called “motivators for editors,” I wrote:

“We did that by pointing out that they belonged to the staff of a highly esteemed publication. We talked about how their writings helped their readers excel at their jobs. We entered their articles in professional competitions with awards for things like ‘Best Technical Article.’ Above all, we talked up the fact that ours was ‘the premier publication in the market.'”

Notice that these rewards, though non-monetary, were more or less measurable. They could be (and, for the individuals they motivated, were) seen as scorecards. The individuals involved had a very clear idea of the value attached to such rewards. A Nobel Prize in Physics is of greater value than a similar award given by, say, Harvard University.

For example, in 1987 I was awarded the “Cahners Editorial Medal of Excellence, Best How-To Article.” That wasn’t half bad. The competition was articles written for a few dozen magazines that were part of the Cahners Publishing Company, which at the time was a big deal in the business-to-business magazine field.

What I considered to be of higher value, however, was the “First Place Award For Editorial Excellence for a Technical Article in a Magazine with Over 80,000 Circulation” I got in 1997 from the American Society of Business Press Editors, where I was competing with a much wider pool of journalists.

Economists have a way of attempting to quantify such non-monetary rewards, called “utility.” They arrive at values by presenting various options and asking the question: “Which would you rather have?”

Of course, measures of utility generally vary widely depending on who’s doing the choosing.
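A minimal sketch of the mechanics (the forced-choice responses below are invented, purely to show the tallying and the chooser-dependence):

```python
# Turning "which would you rather have?" answers into a crude utility
# ranking by counting pairwise wins. All responses are hypothetical.
from collections import Counter

# Each tuple is (chosen, rejected) from one forced-choice question.
editor_answers = [
    ("industry award", "bigger bonus"),
    ("byline in premier publication", "bigger bonus"),
    ("industry award", "byline in premier publication"),
]
salesperson_answers = [
    ("bigger bonus", "industry award"),
    ("bigger bonus", "byline in premier publication"),
    ("industry award", "byline in premier publication"),
]

def utility_ranking(answers):
    """Rank options by how often they were chosen (a crude win count)."""
    wins = Counter(chosen for chosen, _ in answers)
    return wins.most_common()

print(utility_ranking(editor_answers))       # the award ranks first
print(utility_ranking(salesperson_answers))  # the bonus ranks first
```

The same questionnaire, two different choosers, two different rankings.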

For example, an article in the 19 July The Wall Street Journal described a phenomenon the author seemed to think was surprising: Saudi-Arabian women drivers (new drivers all) showed a preference for muscle cars over more pedestrian models. The author, Margherita Stancati, related an incident where a Porsche salesperson in Riyadh offered a recently minted woman driver an “easy to drive crossover designed to primarily attract women.” The customer demurred. She wanted something “with an engine that roars.”

So, the utility of anything is not an absolute in any sense. It all depends on answering the question: “Utility to whom?”

Everyone is motivated by rewards in the upper half of the Needs Pyramid. If you’re a salesperson, growth in your annual (or other period) sales revenue is in the green Self Esteem block. It’s well and truly in the “motivator” category, and has nothing to do with the Safety and Security “hygiene factor” where others might put it. Successful salespeople have those hygiene factors well-and-truly covered. They’re looking for a reward that tells them they’ve hit a home run. That reward is likely a bigger annual bonus than the next guy’s.

The most obvious money-driven motivators accrue to the folks in the CEO ranks. Jeff Bezos, Elon Musk, and Warren Buffett would have a hard time measuring their success (i.e., hitting the Pavlovian lever to get Self Actualization rewards) without looking at their monetary compensation!