Luddites’ Lament

Luddites attack
A factory owner defending his workshop against Luddites intent on destroying his mechanized looms, sometime between 1811 and 1816. Everett Historical/Shutterstock

27 March 2019 – A reader of last week’s column, in which I reported recent opinions voiced by a few automation experts at February’s Conference on the Future of Work held at Stanford University, informed me of a chapter from Henry Hazlitt’s 1988 book Economics in One Lesson that Australian computer scientist Steven Shaw uploaded to his blog.

I’m not going to get into the tangled web of potential copyright infringement that Shaw’s posting of Hazlitt’s entire text opens up; I’ve just linked to the most convenient-to-read posting of that particular chapter. If you follow the link and want to buy the book, I’ve given you the appropriate link as well.

The chapter is of immense value apropos the question of whether automation generally reduces the need for human labor, or creates more opportunities for humans to gain useful employment. Specifically, it looks at the results of a number of historic events where Luddites excoriated technology developers for taking away jobs from humans only to have subsequent developments prove them spectacularly wrong.

Hazlitt’s classic book is, not surprisingly for a classic, well documented, authoritative, and extremely readable. I’m not going to pretend to provide an alternative here, but rather to summarize some of the chapter’s examples in the hope that you’ll be intrigued enough to seek out the original.

Luddism

Before getting on to the examples, let’s start by looking at the history of Luddism. It’s not a new story, really. It probably dates back to just after cave guys first thought of specialization of labor.

That is, sometime in the prehistoric past, some blokes were found to be especially good at doing some things, and the rest of the tribe came up with the idea of letting, say, the best potters make pots for the whole tribe, and everyone else rewarding them for a job well done by, say, giving them choice caribou parts for dinner.

Eventually, they had the best flint knappers make the arrowheads, the best fletchers put the arrowheads on the arrows, the best bowmakers make the bows, and so on. Division of labor into different jobs turned out to be so spectacularly successful that the rugged individualists who pretend to do everything for themselves are now few and far between (and are largely kidding themselves, anyway).

Since then, anyone who comes up with a great way to do anything more efficiently runs the risk of having the folks who spent years learning to do it the old way land on him (or her) like a ton of bricks.

It’s generally a lot easier to throw rocks to drive the innovator away than to adapt to the innovation.

Luddites in the early nineteenth century were organized bands of workers who violently resisted the mechanization of factories during the Industrial Revolution. They were named for an imaginary character, Ned Ludd, supposedly an apprentice who smashed two stocking frames in 1779 and whose name became emblematic of machine destroyers. The term “Luddite” has since come to mean anyone fanatically opposed to deploying advanced technology.

Of course, like religious fundamentalists, they have to pick a point in time to separate “good” technology from the “bad.” Unlike religious fanatics, who generally pick publication of a certain text to be the dividing line, Luddites divide between the technology of their immediate past (with which they are familiar) and anything new or unfamiliar. Thus, it’s a continually moving target.

In either case, the dividing line is fundamentally arbitrary, so the emotional response it provokes is irrational. And irrational responses come with a near-guarantee of being contrary to the facts.

What Happens Next

Hazlitt points out, “The belief that machines cause unemployment, when held with any logical consistency, leads to preposterous conclusions.” He points out that on the second page of the first chapter of Adam Smith’s seminal book Wealth of Nations, Smith tells us that a workman unacquainted with the use of machinery employed in sewing-pin-making “could scarce make one pin a day, and certainly could not make twenty,” but with the use of the machinery he can make 4,800 pins a day. So, zero-sum game theory would indicate an immediate 99.98 percent unemployment rate in the pin-making industry of 1776.
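
Just to spell out that zero-sum arithmetic, here’s a quick back-of-the-envelope check using nothing but Smith’s own figures (one pin per day by hand versus 4,800 by machine). It illustrates the assumption, not anything that actually happened:

    # Back-of-the-envelope check of the zero-sum assumption, using Adam Smith's
    # figures: roughly 1 pin per worker-day by hand vs. 4,800 with machinery.
    hand_output = 1          # pins per worker-day, unassisted
    machine_output = 4_800   # pins per worker-day, with machinery

    # If total demand for pins stayed fixed (the zero-sum assumption), only this
    # fraction of the original pin-making workforce would still be needed:
    workers_still_needed = hand_output / machine_output
    implied_unemployment = 1 - workers_still_needed
    print(f"{implied_unemployment:.2%}")   # -> 99.98%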

Did that happen? No, because economics is not a zero-sum game. Sewing pins went from dear to cheap. Since they were now cheap, folks prized them less and discarded them more (when was the last time you bothered to straighten a bent pin?), and more folks could afford to buy them in the first place. That led to an increase in sewing-pin sales as well as sales of things like sewing patterns and bulk fine fabric sold to amateur sewers, and more employment, not less.

Similar results obtained in the stocking industry when new stocking frames (the original having been invented by William Lee in 1589, but denied a patent by Elizabeth I, who feared its effects on employment in the hand-knitting industry) were protested by Luddites as fast as they could be introduced. Before the end of the nineteenth century the stocking industry was employing at least a hundred men for every man it had employed at the beginning of the century.

Another example Hazlitt presents from the Industrial Revolution happened in the cotton-spinning industry. He says: “Arkwright invented his cotton-spinning machinery in 1760. At that time it was estimated that there were in England 5,200 spinners using spinning wheels, and 2,700 weavers—in all, 7,900 persons engaged in the production of cotton textiles. The introduction of Arkwright’s invention was opposed on the ground that it threatened the livelihood of the workers, and the opposition had to be put down by force. Yet in 1787—twenty-seven years after the invention appeared—a parliamentary inquiry showed that the number of persons actually engaged in the spinning and weaving of cotton had risen from 7,900 to 320,000, an increase of 4,400 percent.”

As these examples indicate, improvements in manufacturing efficiency generally lead to reductions in manufacturing cost, which, when passed along to customers, reduce prices, with concomitant increases in unit sales. This is price elasticity of demand, straight out of Microeconomics 101. It is the reason economics is decidedly not a zero-sum game.
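
For readers who want that Microeconomics 101 point made concrete, here’s a minimal sketch with made-up numbers (an assumed elasticity of -2 and a 20% price cut). It isn’t data from any of Hazlitt’s examples, just the mechanism:

    # Illustrative sketch of price elasticity of demand (made-up numbers).
    # With elastic demand (|elasticity| > 1), a price cut raises unit sales more
    # than proportionally, so revenue (and the work needed to meet it) can grow.
    elasticity = -2.0            # assumed: a 1% price drop yields 2% more unit sales
    old_price, old_units = 10.00, 1_000

    price_change = -0.20         # efficiency gains let us cut the price 20%
    unit_change = elasticity * price_change   # +40% units (linear approximation)

    new_price = old_price * (1 + price_change)
    new_units = old_units * (1 + unit_change)
    print(old_price * old_units)   # 10000.0 -- revenue before
    print(new_price * new_units)   # 11200.0 -- revenue after, despite the lower price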

If we accept economics as not a zero-sum game, predicting what happens when automation makes it possible to produce more stuff with fewer workers becomes a chancy proposition. For example, many economists today blame flat productivity (the amount of stuff produced divided by the number of workers needed to produce it) for lack of wage gains in the face of low unemployment. If that is true, then anything that would help raise productivity (such as automation) should be welcome.

Long experience has taught us that economics is a positive-sum game. In the face of technological advancement, it behooves us to expect positive outcomes while taking measures to ensure that the concomitant economic gains get distributed fairly (whatever that means) throughout society. That is the take-home lesson from the social dislocations that accompanied the technological advancements of the Early Industrial Revolution.

Don’t Panic!

Panic button
Do not push the red button! Peter Hermes Furian/Shutterstock

20 March 2019 – The image at right visualizes something described in Douglas Adams’ Hitchhiker’s Guide to the Galaxy. At one point, the main characters of that six-part “trilogy” found a big red button on the dashboard of a spaceship they were trying to steal that was marked “DO NOT PRESS THIS BUTTON!” Naturally, they pressed the button, and a new label popped up that said “DO NOT PRESS THIS BUTTON AGAIN!”

Eventually, they got the autopilot engaged only to find it was a stunt ship programmed to crash headlong into the nearest Sun as part of the light show for an interstellar rock band. The moral of this story is “Never push buttons marked ‘DO NOT PUSH THIS BUTTON.’”

Per the author: “It is said that despite its many glaring (and occasionally fatal) inaccuracies, the Hitchhiker’s Guide to the Galaxy itself has outsold the Encyclopedia Galactica because it is slightly cheaper, and because it has the words ‘DON’T PANIC’ in large, friendly letters on the cover.”

Despite these references to the Hitchhiker’s Guide to the Galaxy, this posting has nothing to do with that book, the series, or the guide it describes, except that I’ve borrowed the words from the Guide’s cover as a title. I did that because those words perfectly express the take-home lesson of Bill Snyder’s 11 March 2019 article in The Robot Report entitled “Fears of job-stealing robots are misplaced, say experts.”

Expert Opinions

Snyder’s article reports opinions expressed at the Conference on the Future of Work held at Stanford University last month. It’s a topic I’ve shot my word processor off about on numerous occasions in this space, so I thought it would be appropriate to report others’ views as well. First, I’ll present material from Snyder’s article, then I’ll wrap up with my take on the subject.

“Robots aren’t coming for your job,” Snyder says, “but it’s easy to make misleading assumptions about the kinds of jobs that are in danger of becoming obsolete.”

“Most jobs are more complex than [many people] realize,” said Hal Varian, Google’s chief economist.

David Autor, professor of economics at the Massachusetts Institute of Technology, points out that education is a big determinant of how developing trends affect workers: “It’s a great time to be young and educated, but there’s no clear land of opportunity for adults who haven’t been to college.”

“When predicting future labor market outcomes, it is important to consider both sides of the supply-and-demand equation,” said Varian. “Demographic trends that point to a substantial decrease in the supply of labor are potentially larger in magnitude.”

His research indicates that shrinkage of the labor supply due to demographic trends is 53% greater than shrinkage of demand for labor due to automation. That means, while relatively fewer jobs are available, there are a lot fewer workers available to do them. The result is the prospect of a continued labor shortage.
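
To make that “53% greater” relationship concrete, here’s a toy calculation; the 100,000 figure is purely an assumed magnitude chosen for illustration, not a number from Varian’s research:

    # Toy illustration of "supply shrinks 53% more than demand" (assumed magnitudes).
    demand_drop = 100_000                 # jobs eliminated by automation (assumed)
    supply_drop = demand_drop * 1.53      # workers lost to demographic trends
    net_shortage = supply_drop - demand_drop
    print(net_shortage)   # 53,000: the labor pool shrinks by that much more than the job count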

At the same time, Snyder reports that “[The] most popular discussion around technology focuses on factors that decrease demand for labor by replacing workers with machines.”

In other words, fears that robots will displace humans from existing jobs miss the point. Robots, instead, are taking over jobs for which there aren’t enough humans available to do the work.

Another consideration is that what people think of as “jobs” are actually made up of many “tasks,” and it’s tasks that get automated, not entire jobs. Some tasks are amenable to automation while others aren’t.

“Consider the job of a gardener,” Snyder suggests as an example. “Gardeners have to mow and water a lawn, prune rose bushes, rake leaves, eradicate pests, and perform a variety of other chores.”

Some of these tasks, like mowing and watering, can easily be automated. Pruning rose bushes, not so much!

Snyder points to news reports of a hotel in Nagasaki, Japan being forced to “fire” robot receptionists and room attendants that proved to be incompetent.

There’s a scene in the 1997 film The Fifth Element where a supporting character tries to converse with a robot bartender about another character. He says: “She’s so vulnerable – so human. Do you know what I mean?” The robot shakes its head, “No.”

Sometimes people, even misanthropes, would prefer to interact with another human than with a drink-dispensing machine.

“Jobs,” Varian points out, “unlike repetitive tasks, tend not to disappear. In 1950, the U.S. Census Bureau listed 250 separate jobs. Since then, the only one to be completely eliminated is that of elevator operator.”

“Excessive automation at Tesla was a mistake,” founder Elon Musk mea-culpa-ed last year. “Humans are underrated.”

Another trend Snyder points out is that automation-ready jobs, such as assembly-line factory work, have already largely disappeared from America. “The 10 most common occupations in the U.S.,” he says, “include such jobs as retail salespersons, nurses, waiters, and other service-focused work. Notably, traditional occupations, such as factory and other blue-collar work, no longer even make the list.”

Again, robots are mainly taking over tasks that humans are not available to do.

The final trend that Snyder presents is the stark fact that birthrates in developed nations are declining – in some cases precipitously. “The aging of the baby boom generation creates demand for service jobs,” Varian points out, “but leaves fewer workers actively contributing labor to the economy.”

Those “service jobs” are just the ones that require a human touch, so they’re much harder to automate successfully.

My Inexpert Opinion

I’ve been trying, not entirely successfully, to figure out what role robots will actually have vis-a-vis humans in the future. I think there will be a few macroscopic trends. And, the macroscopic trends should be the easiest to spot ’cause they’re, well, macroscopic. That means bigger. So, they’re easier to see. See?

As early as 2010, I worked out one important difference between robots and humans that I expounded in my novel Vengeance is Mine! Specifically, humans have a wider view of the Universe and have more of an emotional stake in it.

“For example,” I had one of my main characters pontificate at a cocktail party, “that tall blonde over there is an archaeologist. She uses ROVs – remotely operated vehicles – to map underwater shipwreck sites. So, she cares about what she sees and finds. We program the ROVs with sophisticated navigational software that allows her to concentrate on what she’s looking at, rather than the details of piloting the vehicle, but she’s in constant communication with it because she cares what it does. It doesn’t.”

More recently, I got a clearer image of this relationship and it’s so obvious that we tend to overlook it. I certainly missed it for decades.

It hit me like a brick when I saw a video of an autonomous robot marine-trash collector. This device is a small autonomous surface vessel with a big “mouth” that glides around seeking out and gobbling up discarded water bottles, plastic bags, bits of styrofoam, and other unwanted jetsam clogging up waterways.

The first question that popped into my mind was “who’s going to own the thing?” I mean, somebody has to want it, then buy it, then put it to work. I’m sure it could be made to automatically regurgitate the junk it collects into trash bags that it drops off at some collection point, but some human or humans have to make sure the trash bags get collected and disposed of. Somebody has to ensure that the robot has a charging system to keep its batteries recharged. Somebody has to fix it when parts wear out, and somebody has to take responsibility if it becomes a navigation hazard. Should that happen, the Coast Guard is going to want to scoop it up and hand its bedraggled carcass to some human owner along with a citation.

So, on a very important level, the biggest thing robots need from humans is ownership. Humans own robots, not the other way around. Without a human owner, an orphan robot is a pile of junk left by the side of the road!

Reimagining Our Tomorrows

Cover Image
Utopia with a twist.

19 December 2018 – I generally don’t buy into utopias.

Utopias are intended as descriptions of a paradise. They’re supposed to be a paradise for everybody, and they’re supposed to be filled with happy people committed to living in their city (utopias are invariably built around descriptions of cities), which they imagine to be the best of all possible cities located in the best of all possible worlds.

Unfortunately, however, utopia stories are written by individual authors, and each would only be a paradise for that particular author. If the author is persuasive enough, the story will win over a following of disciples, who will praise it to high Heaven. Once in a great while (actually, surprisingly often) those disciples become so enamored of the description that they’ll drop everything and actually attempt to build a city to match it.

When that happens, it invariably ends in tears.

That’s because, while utopian stories invariably describe city plans that would be paradise to their authors, great swaths of the population would find living in them to be horrific.

Even Thomas More, the sixteenth-century philosopher, politician, and generally overall smart guy who’s credited with giving us the word “utopia” in the first place, was wise enough to acknowledge that the utopia he described in his most famous work, Utopia, wouldn’t be such a fun place for the slaves he had serving his upper-middle-class citizens, who were the bulwark of his utopian society.

Even Plato’s Republic, which gave us the conundrum summarized in Juvenal’s Satires as “Who guards the guards?,” was never meant as a workable society. Plato’s work, in general, was meant to teach us how to think, not what to think.

What to think is a highly malleable commodity that varies from person to person, society to society, and, most importantly, from time to time. Plato’s Republic reflected what might have passed as good ideas for city planning in 380 BC Athens, but they wouldn’t have passed muster in More’s sixteenth-century England. Still less would they be appropriate in twenty-first-century democracies.

So, I approached Joe Tankersley’s Reimagining Our Tomorrows with some trepidation. I wouldn’t have put in the effort to read the thing if it wasn’t for the subtitle: “Making Sure Your Future Doesn’t SUCK.”

That subtitle indicated that Tankersley just might have a sense of humor, and enough gumption to put that sense of humor into his contribution to Futurism.

Futurism tends to be the work of self-important intellectuals out to make a buck by feeding their audience fantasies that sound profound, but bear no relation to any actual or even possible future. Its greatest value is in stimulating profits for publishers of magazines and books about Futurism. Otherwise, those books aren’t worth the trees killed to make the paper they’re printed on.

Trees, after all and as a group, make a huge contribution to all facets of human life. Like, for instance, breathing. Breathing is of incalculable value to humans. Trees make an immense contribution to breathing by absorbing carbon dioxide and pumping out vast quantities of oxygen, which humans like to breathe.

We like trees!

Futurists, not so much.

Tankersley’s little (168 pages, not counting author bio, front matter and introduction) opus is not like typical Futurist literature, however. Well, it would be like that if it weren’t more like the Republic in that its avowed purpose is to stimulate its readers to think about the future themselves. In the introduction that I purposely left out of the page count he says:

“I want to help you reimagine our tomorrows; to show you that we are living in a time when the possibility of creating a better future has never been greater.”

Tankersley structured the body of his book in ten chapters, each telling a separate story about an imagined future centered around a possible solution to an issue relevant today. Following each chapter is an “apology” by a fictional future character named Archibald T. Patterson III.

Archie is what a hundred years ago would have been called a “Captain of Industry.” Today, we’d refer to him as an uber-rich and successful entrepreneur. Think Elon Musk or Bill Gates.

Actually, I think he’s more like Warren Buffett in that he’s reasonably introspective and honest with himself. Archie sees where society has come from, how it got to the future it got to, and what he and his cohorts did wrong. While he’s super-rich and privileged, the futures the stories describe were made by other people, who weren’t uber-rich and successful. His efforts largely came to naught.

The point Tankersley seems to be making is that progress comes from the efforts of ordinary individuals who, in true British fashion, “muddle through.” They see a challenge and apply their talents and resources to making a solution. The solution is invariably nothing anyone would foresee, and is nothing like what anyone else would come up with to meet the same challenge. Each is a unique response to a unique challenge by unique individuals.

It might seem naive, this idea that human development comes from ordinary individuals coming up with ordinary solutions to ordinary problems all banded together into something called “progress,” but it’s not.

For example, Mark Zuckerberg developed Facebook as a response to the challenge of applying then-new computer-network technology to the age-old quest by late adolescents to form their own little communities by communicating among themselves. It’s only fortuitous that he happened on the right combination of time (the dawn of a radical new technology), place (in the midst of a huge cadre of the right people well versed in using that radical new technology) and marketing to get the word out to those right people wanting to use that radical new technology for that purpose. Take away any of those elements and there’d be no Facebook!

What if Zuckerberg hadn’t invented Facebook? In that event, somebody else (Reid Hoffman) would have come up with a similar solution (LinkedIn) to the same challenge facing a similar group (technology professionals).

Oh, my! They did!

History abounds with similar examples. There’s hardly any advancement in human culture that doesn’t fit this model.

The good news is that Tankersley’s vision for how we can re-imagine our tomorrows is right on the money.

The bad news is … there isn’t any bad news!

Robots Revisited

Engineer with SCARA robots
Engineer using monitoring system software to check and control SCARA welding robots in a digital manufacturing operation. PopTika/Shutterstock

12 December 2018 – I was wondering what to talk about in this week’s blog posting when an article bearing an interesting-sounding headline crossed my desk. The article, written by Simone Stolzoff of Quartz Media, was published last Monday (12/3/2018) by the World Economic Forum (WEF) under the title “Here are the countries most likely to replace you with a robot.”

I generally look askance at organizations with grandiose names that include the word “World,” figuring that they likely are long on megalomania and short on substance. Further, this one lists the inimitable (thank God there’s only one!) Al Gore on its Board of Trustees.

On the other hand, David Rubenstein is also on the WEF board. Rubenstein usually seems to have his head screwed on straight, so that’s a positive sign for the organization. Therefore, I figured the article might be worth reading and should be judged on its own merits.

The main content is summarized in two bar graphs. The first (Figure 1) graphs the ratio of robots to thousands of manufacturing workers in various countries. The highest scores go to South Korea and Singapore. In fact, three of the top four are Far Eastern countries. The United States comes in around number seven.

The second (Figure 2) applies a correction to the graphed data to reorder the list by taking into account the countries’ relative wealth. There, the United States comes in dead last among the sixteen countries listed. East Asian countries account for all of the top five.

The take-home lesson from the article is conveniently stated in its final paragraph:

The upshot of all of this is relatively straightforward. When taking wages into account, Asian countries far outpace their western counterparts. If robots are the future of manufacturing, American and European countries have some catching up to do to stay competitive.

This article, of course, got me started thinking about automation and how manufacturers choose to adopt it. It’s a subject that was a major theme throughout my tenure as Chief Editor of Test & Measurement World and constituted the bulk of my work at Control Engineering.

The graphs certainly support the conclusions expressed in the cited paragraph’s first two sentences. The third sentence, however, is problematical.

That ultimate conclusion is based on accepting that “robots are the future of manufacturing.” Absolute assertions like that are always dangerous. Seldom is anything so all-or-nothing.

Predicting the future is epistemological suicide. Whenever I hear such bald-faced statements I recall Jim Morrison’s prescient statement: “The future’s uncertain and the end is always near.”

The line was prescient because a little over a year after the song’s release, Morrison was dead at age twenty-seven, thereby fulfilling the slogan expressed by John Derek’s “Nick Romano” character in Nicholas Ray’s 1949 film Knock on Any Door: “Live fast, die young, and leave a good-looking corpse.”

Anyway, predictions like “robots are the future of manufacturing” are generally suspect because, in the chaotic Universe in which we live, the future is inherently unpredictable.

If you want to say something practically guaranteed to be wrong, predict the future!

I’d like to offer an alternate explanation for the data presented in the WEF graphs. It’s based on my belief that American Culture usually gets things right in the long run.

Yes, that’s the long run in which economist John Maynard Keynes pointed out that we’re all dead.

My belief in the ultimate vindication of American trends is based, not on national pride or jingoism, but on historical precedents. Countries that have bucked American trends often start out strong, but ultimately fade.

An obvious example is trendy Japanese management techniques based on Druckerian principles that were so much in vogue during the last half of the twentieth century. Folks imagined such techniques were going to drive the Japanese economy to pre-eminence in the world. Management consultants touted such principles as the future for corporate governance without noticing that while they were great for middle management, they were useless for strategic planning.

Japanese manufacturers beat the crap out of U.S. industry for a while, but eventually their economy fell into a prolonged recession characterized by economic stagnation and disinflation so severe that even negative interest rates couldn’t restart it.

Similar examples abound, which is why our little country with its relatively minuscule population (4.3% of the world’s) has by far the biggest GDP in the world. China, with more than four times our population, still grosses only about two-thirds of what we do.

So, if robotic adoption is the future of manufacturing, why are we so far behind? Assuming we actually do know what we’re doing, as past performance would suggest, the answer must be that the others are getting it wrong. Their faith in robotics as a driver of manufacturing productivity may be misplaced.

How could that be? What could be wrong with relying on technological advancement as the driver of productivity?

Manufacturing productivity is calculated on the basis of stuff produced (as measured by its total value in dollars) divided by the number of worker-hours needed to produce it. That should tell you something about what it takes to produce stuff. It’s all about human worker involvement.

Folks who think robots automatically increase productivity are fixating on the denominator in the productivity calculation. Making even the same amount of stuff while reducing the worker-hours needed to produce it should drive productivity up fast. That’s basic arithmetic. Yet, while manufacturing has been rapidly introducing all kinds of automation over the last few decades, productivity has stagnated.
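
Here’s that denominator fixation in miniature, with invented numbers. It shows what the arithmetic says should happen, which is exactly what the stagnant productivity statistics say has not happened:

    # Toy productivity calculation (invented numbers, just the arithmetic).
    def productivity(output_value_dollars, worker_hours):
        # Manufacturing productivity: value of stuff produced per worker-hour.
        return output_value_dollars / worker_hours

    before = productivity(1_000_000, 10_000)   # $100 of output per worker-hour
    # Automation that keeps output constant while cutting worker-hours 20%...
    after = productivity(1_000_000, 8_000)     # $125 of output per worker-hour
    print(before, after)   # on paper, a 25% productivity jump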

We need to look for a different explanation.

It just might be that robotic adoption is another example of too much of a good thing. It might be that reliance on technology could prove to be less effective than something about the people making up the work force.

I’m suggesting that because I’ve been led to believe that work forces in the Far Eastern developing economies are less skillful, may have lower expectations, and are more tolerant of authoritarian governments.

Why would those traits make a difference? I’ll take them one at a time to suggest how they might.

The impression that Far Eastern populations are less skillful is not easy to demonstrate. Nobody who’s dealt with people of Asian extraction in either an educational or work-force setting would ever imagine they are at all deficient in either intelligence or motivation. On the other hand, as emerging or developing economies those countries are likely more dependent on workers newly recruited from rural, agrarian settings, who are likely less acclimated to manufacturing and industrial environments. On this basis, one may posit that the available workers may prove less skillful in a manufacturing setting.

It’s a weak argument, but it exists.

The idea that people making up Far-Eastern work forces have lower expectations than those in more developed economies is on firmer footing. Workers in Canada, the U.S. and Europe have very high expectations for how they should be treated. Wages are higher. Benefits are more generous. Upward mobility perceptions are ingrained in the cultures.

For developing economies, not so much.

Then, we come to tolerance of authoritarian regimes. Tolerance of authoritarianism goes hand-in-hand with tolerance for the usual authoritarian vices of graft, lack of personal freedom and social immobility. Only those believing populist political propaganda think differently (which is the danger of populism).

What’s all this got to do with manufacturing productivity?

Lack of skill, low expectations and patience under authority are not conducive to high productivity. People are productive when they work hard. People work hard when they are incentivized. They are incentivized to work when they believe that working harder will make their lives better. It’s not hard to grasp!

Installing robots in a plant won’t by itself lead human workers to believe that working harder will make their lives better. If anything, it’ll do the opposite. They’ll start worrying that their lives are about to take a turn for the worse.

Maybe that has something to do with why increased automation has failed to increase productivity.

Reaping the Whirlwind

Tornado
Powerful Tornado destroying property, with lightning in the background. Solarseven/Shutterstock.com

24 October 2018 – “They sow the wind, and they shall reap the whirlwind” is a saying from The Holy Bible‘s Old Testament Book of Hosea. I’m certainly not a Bible scholar, but, having been paying attention for seven decades, I can attest to the saying’s validity.

The equivalent Buddhist concept is karma, which is the motive force driving the Wheel of Birth and Death. It is also wrapped up with samsara, which is epitomized by the saying: “What goes around comes around.”

Actions have consequences.

If you smoke a pack of Camels a day, you’re gonna get sick!

By now, you should have gotten the idea that “reaping the whirlwind” is a common theme among the world’s religions and philosophies. You’ve got to be pretty stone headed to have missed it.

Apparently the current President of the United States (POTUS), Donald J. Trump, has been stone headed enough to miss it.

POTUS is well known for trying to duck consequences of his actions. For example, during his 2016 Presidential Election campaign, he went out of his way to capitalize on Wikileaks‘ publication of emails stolen from Hillary Clinton‘s private email server. That indiscretion and his attempt to cover it up by firing then-FBI-Director James Comey grew into a Special Counsel Investigation, which now threatens to unmask all the nefarious activities he’s engaged in throughout his entire life.

Of course, Hillary’s unsanctioned use of that private email server while serving as Secretary of State is what opened her up to the email hacking in the first place! That error came back to bite her in the backside by giving the Russians something to hack. They then forwarded that junk to Wikileaks, who eventually made it public, arguably costing her the 2016 Presidential election.

Or, maybe it was her standing up for her philandering husband, or maybe lingering suspicions surrounding the pair’s involvement in the Whitewater scandal. Whatever the reason(s), Hillary, too, reaped the whirlwind.

In his turn, Russian President Vladimir Putin sowed the wind by tasking operatives to do the hacking of Hillary’s email server. Now he’s reaping the whirlwind in the form of a laundry list of sanctions by western governments and Special Counsel Investigation indictments against the operatives he sent to do the hacking.

Again, POTUS showed his stone-headedness about the Bible verse by cuddling up to nearly every autocrat in the world: Vlad Putin, Kim Jong Un, Xi Jinping, … . The list goes on. Sensing waves of love emanating from Washington, those idiots have become ever more extravagant in their misbehavior.

The latest example of an authoritarian regime rubbing POTUS’ nose in filth is the apparent murder and dismemberment of Saudi Arabian journalist Jamal Khashoggi when he briefly entered the Saudi consulate in Istanbul, Turkey, on personal business.

The most popular theory of the crime lays blame at the feet of Mohammad Bin Salman Al Saud (MBS), Crown Prince of Saudi Arabia and the country’s de facto ruler. Unwilling to point his finger at another would-be autocrat, POTUS is promoting a Saudi cover-up attempt suggesting the murder was done by some unnamed “rogue agents.”

Actually, that theory deserves some consideration. The idea that MBS was emboldened (spelled S-T-U-P-I-D) enough to have ordered Khashoggi’s assassination in such a ham-fisted way strains credulity. We should consider the possibility that ultra-conservative Wahhabist factions within the Saudi government, who see MBS’ reforms as a threat to their historical patronage from the oil-rich Saudi monarchy, might have created the incident to embarrass MBS.

No matter what the true story is, the blowback is a whirlwind!

MBS has gone out of his way to promote himself as a business-friendly reformer. This reputation has persisted despite repeated instances of continued repression in the country he controls.

The whirlwind, however, is threatening MBS’ and the Saudi monarchy’s standing in the international community. In particular, international bankers, led by JP Morgan Chase’s Jamie Dimon, and a host of Silicon Valley tech companies are running for the exits from Saudi Arabia’s three-day Future Investment Initiative conference, which was scheduled to start Tuesday (23 October 2018).

That is a major embarrassment and will likely derail MBS’ efforts to modernize Saudi Arabia’s economy away from dependence on oil revenue.

It appears that these high-powered executives are rethinking the wisdom of dealing with the authoritarian Saudi regime. They’ve decided not to sow the wind by dealing with the Saudis because they don’t want to reap the whirlwind likely to result!

Update

Since this manuscript was drafted, it’s become clear that we’ll never get the full story about the Khashoggi incident. Both regimes involved (Turkey and Saudi Arabia) are authoritarian, with no incentive to be honest about this story. While Saudi Arabia makes a pretense of press freedom, this incident shows its true colors (i.e., color it repressive). Turkey hasn’t given even a passing nod to press freedom for years. It’s like two rival foxes telling the dog about a hen-house break-in.

On the “dog” side, we’re stuck with a POTUS who attacks press freedom on a daily basis. So, who’s going to ferret out the truth? Maybe the Brits or the French, but not the U.S. Executive Branch!

Doing Business with Bad Guys

Threatened with a gun
Authoritarians make dangerous business partners. rubikphoto/Shutterstock

3 October 2018 – Parents generally try to drum into their children’s heads a simple maxim: “People judge you by the company you keep.”

Children (and we’re all children, no matter how mature and sophisticated we pretend to be) just as generally find it hard to follow that maxim. We all screw it up once in a while by succumbing to the temptation of some perceived advantage to be had by dealing with some unsavory character.

Large corporations and national governments are at least as likely to succumb to the prospect of making a fast buck or signing some treaty with peers who don’t entertain the same values we have (or at least pretend to have). Governments, especially, have a tough time in dealing with what I’ll call “Bad Guys.”

Let’s face it, better than half the nations of the world are run by people we wouldn’t want in our living rooms!

I’m specifically thinking about totalitarian regimes like the People’s Republic of China (PRC).

‘Way back in the last century, Mao Tse-tung (or Mao Zedong, depending on how you choose to mis-spell the anglicization of his name) clearly placed China on the “Anti-American” team, espousing a virulent form of Marxism and descending into the totalitarian authoritarianism Marxist regimes are so prone to. This situation continued from the PRC’s founding in 1949 through 1972, when notoriously authoritarian-friendly U.S. President Richard Nixon toured China in an effort to start a trade relationship between the two countries.

Greedy U.S. corporations quickly started falling all over themselves in an effort to gain access to China’s enormous potential market. Mesmerized by the statistics of more than a billion people spread out over China’s enormous land mass, they ignored the fact that those people were struggling in a subsistence-agriculture economy that had collapsed under decades of mismanagement by Mao’s authoritarian regime.

What they hoped those generally dirt-poor peasants were going to buy from them I never could figure out.

Unfortunately, years later I found myself embedded in the management of one of those starry-eyed multinational corporations that was hoping to take advantage of the developing Chinese electronics industry. Fresh off our success launching Test & Measurement Europe, they wanted to launch a new publication called Test & Measurement China. Recalling the then-recent calamity ending the Tiananmen Square protests of 1989, I pulled a Nancy Reagan and just said “No.”

I pointed out that the PRC was still run by a totalitarian, authoritarian regime, and that you just couldn’t trust those guys. You never knew when they were going to decide to sacrifice you on the altar of internal politics.

Today, American corporations are seeing the mistakes they made in pursuit of Chinese business, which, like Robert Southey’s chickens, are coming home to roost. In 2015, Chinese Premier Li Keqiang announced the “Made in China 2025” plan to make China the world’s technology leader. It quickly became apparent that Mao’s current successor, Xi Jinping, intends to achieve his goals by building on technology pilfered from western companies who’d naively partnered with Chinese firms.

Now, their only protector is another authoritarian-friendly president, Donald Trump. Remember, it was Trump who, following his ill-advised summit with North Korean strongman Kim Jong Un, got caught on video enviously saying: “He speaks, and his people sit up at attention. I want my people to do the same.”

So, now these corporations have to look to an American would-be dictator for protection from an entrenched Chinese dictator. No wonder they find themselves screwed, blued, and tattooed!

Governments are not immune to the PRC’s siren song, either. Pundits are pointing out that the PRC’s vaunted “One Belt, One Road” initiative is likely an example of “debt-trap diplomacy.”

Debt-trap diplomacy is a strategy similar to organized crime’s loan-shark operations. An unscrupulous cash-rich organization, the loan shark, offers funds to a cash-strapped individual, such as an ambitious entrepreneur, in a deal that seems too good to be true. It’s NOT true because the deal comes in the form of a loan at terms that nearly guarantee that the debtor will default. The shark then offers to write off the debt in exchange for the debtor’s participation in some unsavory scheme, such as money laundering.

In the debt-trap diplomacy version, the PRC stands in the place of the loan shark while some emerging-economy nation, such as, say, Malaysia, accepts the unsupportable debt. In the PRC/Malaysia case, the unsavory scheme is helping support China’s imperial ambitions in the western Pacific.

Earlier this month, Malaysia wisely backed out of the deal.

It’s not just the post-Maoist PRC that makes a dangerous place for western corporations to do business. Authoritarians all over the world treat people like Heart’s “Barracuda”: they suck you in with mesmerizing bright and shiny promises, then leave you twisting in the wind.

Yes, I’ve piled up a whole mess of mixed metaphors here, but I’m trying to drive home a point!

Another example of the traps business people can get into by trying to deal with authoritarians is afforded by Danske Bank’s Estonia branch and their dealings with Vladimir Putin‘s Russian kleptocracy. Danske Bank is a Danish financial institution with a pan-European footprint and global ambitions. Recent release of a Danske Bank internal report produced by the Danish law firm Bruun & Hjejle says that the Estonia branch engaged in “dodgy dealings” with numerous corrupt Russian officials. Basically, the bank set up a scheme to launder money stolen from Russian tax receipts by organized criminals.

The scandal broke in Russia in June of 2007 when dozens of police officers raided the Moscow offices of Hermitage Global, an activist fund focused on global emerging markets. A coverup by Kremlin authorities resulted in the death (while in a Russian prison) of Sergei Leonidovich Magnitsky, a Russian tax accountant who specialized in anti-corruption activities.

Magnitsky’s case became an international cause célèbre. The U.S. Congress and President Barack Obama enacted the Magnitsky Act at the end of 2012, barring, among others, those Russian officials believed to be involved in Magnitsky’s death from entering the United States or using its banking system.

Apparently, the purpose of the infamous Trump Tower meeting of June 9, 2016 was, on the Russian side, an effort to secure repeal of the Magnitsky Act should then-candidate Trump win the election. The Russians dangled release of stolen emails incriminating Trump-rival Hillary Clinton as bait. This activity started the whole Mueller Investigation, which has so far resulted in dozens of indictments for federal crimes, and at least eight guilty pleas or convictions.

The latest business strung up in this mega-scandal was the whole corrupt banking system of Cyprus, whose laundering of Russian oligarchs’ money amounted to over $20B.

The moral of this story is: Don’t do business with bad guys, no matter how good they make the deal look.

Who’s NOT a Creative?

 

Compensating sales
Close-up of a businesswoman giving a cheque to her colleague at the workplace. Andrey Popov/Shutterstock

25 July 2018 – Last week I made a big deal about the things that motivate creative people, such as magazine editors, and how the most effective rewards were non-monetary. I also said that monetary rewards, such as commissions based on sales results, were exactly the right rewards to use for salespeople. That would imply that salespeople were somehow different from others, and maybe even not creative.

That is not the impression I want to leave you with. I’m devoting this blog posting to setting that record straight.

My remarks last week were based on Maslow‘s and Herzberg‘s work on motivation of employees. I suggested that these theories were valid in other spheres of human endeavor. Let’s be clear about this: yes, Maslow’s and Herzberg’s theories are valid and useful in general, whenever you want to think about motivating normal, healthy human beings. It’s incidental that those researchers were focused on employer/employee relations as an impetus to their work. If they’d been focused on anything else, their conclusions would probably have been pretty much the same.

That said, there is a whole class of people for whom monetary compensation is the holy grail of motivators. They are generally very high-functioning individuals who are in no way pathological. On the surface, however, their preferred rewards appear to be monetary.

Traditionally, observers who don’t share this reward system have indicted these individuals as “greedy.”

I, however, dispute that conclusion. Let me explain why.

When pointing out the rewards that can be called “motivators for editors,” I wrote:

“We did that by pointing out that they belonged to the staff of a highly esteemed publication. We talked about how their writings helped their readers excel at their jobs. We entered their articles in professional competitions with awards for things like ‘Best Technical Article.’ Above all, we talked up the fact that ours was ‘the premier publication in the market.'”

Notice that these rewards, though non-monetary, were more or less measurable. They could be (and, for the individuals they motivated, indeed were) seen as scorecards. The individuals involved had a very clear idea of the value attached to such rewards. A Nobel Prize in Physics is of greater value than a similar award given by, say, Harvard University.

For example, in 1987 I was awarded the “Cahners Editorial Medal of Excellence, Best How-To Article.” That wasn’t half bad. The competition was articles written for a few dozen magazines that were part of the Cahners Publishing Company, which at the time was a big deal in the business-to-business magazine field.

What I considered to be of higher value, however, was the “First Place Award For Editorial Excellence for a Technical Article in a Magazine with Over 80,000 Circulation” I got in 1997 from the American Society of Business Press Editors, where I was competing with a much wider pool of journalists.

Economists have a way of attempting to quantify such non-monetary awards called utility. They arrive at values by presenting various options and asking the question: “Which would you rather have?”

Of course, measures of utility generally vary widely depending on who’s doing the choosing.

For example, an article in the 19 July issue of The Wall Street Journal described a phenomenon the author seemed to think was surprising: Saudi Arabian women drivers (new drivers all) showed a preference for muscle cars over more pedestrian models. The author, Margherita Stancati, related an incident where a Porsche salesperson in Riyadh offered a recently minted woman driver an “easy to drive crossover designed to primarily attract women.” The customer demurred. She wanted something “with an engine that roars.”

So, the utility of anything is not an absolute in any sense. It all depends on answering the question: “Utility to whom?”

Everyone is motivated by rewards in the upper half of the Needs Pyramid. If you’re a salesperson, growth in your annual (or other period) sales revenue is in the green Self Esteem block. It’s well and truly in the “motivator” category, and has nothing to do with the Safety and Security “hygiene factor” where others might put it. Successful salespeople have those hygiene factors well-and-truly covered. They’re looking for a reward that tells them they’ve hit a home run. That is likely having a bigger annual bonus than the next guy.

The most obvious money-driven motivators accrue to the folks in the CEO ranks. Jeff Bezos, Elon Musk, and Warren Buffett would have a hard time measuring their success (i.e., hitting the Pavlovian lever to get Self Actualization rewards) without looking at their monetary compensation!

The Pyramid of Needs

Needs Pyramid
The Pyramid of Needs combines Maslow’s and Herzberg’s motivational theories.

18 July 2018 – Long, long ago, in a [place] far, far away. …

When I was Chief Editor at business-to-business magazine Test & Measurement World, I had a long, friendly though heated, discussion with one of our advertising-sales managers. He suggested making the compensation we paid our editorial staff contingent on total advertising sales. He pointed out that what everyone came to work for was to get paid, and that tying their pay to how well the magazine was doing financially would give them an incentive to make decisions that would help advertising sales, and advance the magazine’s financial success.

He thought it was a great idea, but I disagreed completely. I pointed out that, though revenue sharing was exactly the right way to compensate the salespeople he worked with, it was exactly the wrong way to compensate creative people, like writers and journalists.

Why it was a good idea for his salespeople I’ll leave for another column. Today, I’m interested in why it was not a good idea for my editors.

In the heat of the discussion I didn’t do a deep dive into the reasons for taking my position. Decades later, from the standpoint of a semi-retired whatever-you-call-my-patchwork-career, I can now sit back and analyze in some detail the considerations that led me to my conclusion, which I still think was correct.

We’ll start out with Maslow’s Hierarchy of Needs.

In 1943, Abraham Maslow proposed that healthy human beings have a certain number of needs, and that these needs are arranged in a hierarchy. At the top is “self actualization,” which boils down to a need for creativity. It’s the need to do something that’s never been done before in one’s own individual way. At the bottom is the simple need for physical survival. In between are three more identified needs people also seek to satisfy.

Maslow pointed out that people seek to satisfy these needs from the bottom to the top. For example, nobody worries about security arrangements at their gated community (second level) while having a heart attack that threatens their survival (bottom level).

Overlaid on Maslow’s hierarchy is Frederick Herzberg’s Two-Factor Theory, which he published in his 1959 book The Motivation to Work. Herzberg’s theory divides Maslow’s hierarchy into two sections. The lower section is best described as “hygiene factors.” They are also known as “dissatisfiers” or “demotivators” because if they’re not met folks get cranky.

Basically, a person needs to have their hygiene factors covered in order to have a level of basic satisfaction in life. Leaving any of these needs unsatisfied makes them miserable. Having them satisfied doesn’t motivate them at all. It makes ’em fat, dumb and happy.

The upper-level needs are called “motivators.” Not having motivators met drives an individual to work harder, smarter, etc. It energizes them.

My position in the argument with my ad-sales friend was that providing revenue sharing worked at the “Safety and Security” level. Editors were (at least in my organization) paid enough that they didn’t have to worry about feeding their kids and covering their bills. They were talented people with a choice of whom they worked for. If they weren’t already being paid enough, they’d have been forced to go work for somebody else.

Creative people, my argument went, are motivated by non-monetary rewards. They work at the upper “motivator” levels. They’ve already got their physical needs covered, so to motivate them we have to offer rewards in the “motivator” realm.

We did that by pointing out that they belonged to the staff of a highly esteemed publication. We talked about how their writings helped their readers excel at their jobs. We entered their articles in professional competitions with awards for things like “Best Technical Article.” Above all, we talked up the fact that ours was “the premier publication in the market.”

These were all non-monetary rewards to motivate people who already had their basic needs (the hygiene factors) covered.

I summarized my compensation theory thusly: “We pay creative people enough so that they don’t have to go do something else.”

That gives them the freedom to do what they would want to do, anyway. The implication is that creative people want to do stuff because it’s something they can do that’s worth doing.

In other words, we don’t pay creative people to work. We pay them to free them up so they can work. Then, we suggest really fun stuff for them to work at.

What does this all mean for society in general?

First of all, if you want there to be a general level of satisfaction within your society, you’d better take care of those hygiene factors for everybody!

That doesn’t mean the top 1%. It doesn’t mean the top 80%, either. Or, the top 90%. It means everybody!

If you’ve got 99% of everybody covered, that still leaves a whole lot of people who think they’re getting a raw deal. Remember that in the U.S.A. there are roughly 300 million people. If you’ve left 1% feeling ripped off, that’s 3 million potential revolutionaries. Three million people can cause a lot of havoc if motivated.

Remember, at the height of the 1960s Hippy movement, there were, according to the most generous estimates, only about 100,000 hipsters wandering around. Those hundred-thousand activists made a huge change in society in a very short period of time.

Okay. If you want people invested in the status quo of society, make sure everyone has all their hygiene factors covered. If you want to know how to do that, ask Bernie Sanders.

Assuming you’ve got everybody’s hygiene factors covered, does that mean they’re all fat, dumb, and happy? Do you end up with a nation of goofballs with no motivation to do anything?

Nope!

Remember those needs Herzberg identified as “motivators” in the upper part of Maslow’s pyramid?

The hygiene factors come into play only when they’re not met. The day they’re met, people stop thinking about who’ll be first against the wall when the revolution comes. Folks become fat, dumb and happy, and stay that way for about an afternoon. Maybe an afternoon and an evening if there’s a good ballgame on.

The next morning they start thinking: “So, what can we screw with next?”

What they’re going to screw with next is anything and everything they damn well please. Some will want to fly to the Moon. Some will want to outdo Michelangelo‘s frescoes for the ceiling of the Sistine Chapel. They’re all going to look at what they think was the greatest stuff from the past, and try to think of ways to do better, and to do it in their own way.

That’s the whole point of “self actualization.”

The Renaissance didn’t happen because everybody was broke. It happened because they were already fat, dumb and happy, and looking for something to screw with next.

What’s So Bad About Cryptocurrencies?

15 March 2018 – Cryptocurrency fans point to the vast “paper” fortunes that have been amassed by some bitcoin speculators, and sometimes predict that cryptocurrencies will eventually displace currencies issued and regulated by national governments. Conversely, banking-system regulators in several nations, most notably China and Russia, have outright bans on using cryptocurrency (specifically bitcoin) as a medium of exchange.

At the same time, it appears that fintech (financial technology) pundits pretty universally agree that blockchain technology, which is the enabling technology behind all cryptocurrency efforts, is the greatest thing since sliced bread, or, more to the point, the invention of ink on papyrus (IoP). Before IoP, financial records relied on clunky technologies like bundles of knotted cords, ceramic Easter eggs with little tokens baked inside, and that poster child for early written records, the clay tablet.

IoP immediately made possible tally sheets, journal and record books, double-entry ledgers, and spreadsheets. Without thin sheets of flat stock you could bind together into virtually unlimited bundles and then make indelible marks on, the concept of “bookkeeping” would be unthinkable. How could you keep books without having books to keep?

Blockchain is basically taking the concept of double-entry ledger accounting to the next (digital) level. I don’t pretend to fully understand how blockchain works. It ain’t my bailiwick. I’m a physicist, not a computer scientist.

To me, computers are tools. I think of them the same way I think of hacksaws, screw drivers, and CNC machines. I’m happy to have ’em and anxious to know how to use ’em. How they actually work and, especially, how to design them are details I generally find of marginal interest.

If it sounds like I’m backing away from any attempt to explain blockchains, that’s because I am. There are lots of people out there who are willing and able to explain blockchains far better than I could ever hope to.

Money, on the other hand, is infinitely easier to make sense of, and it’s something I studied extensively in MBA school. And, that’s really what cryptocurrencies are all about. It’s also the part of cryptocurrency that its fans seem to have missed.

Once upon a time, folks tried to imbue their money (currency) with some intrinsic value. That’s why they used to make coins out of gold and silver. When Marco Polo introduced the Chinese concept of promissory notes to Renaissance Europe, it became clear that paper currency was possible provided there were two characteristics that went with it:

  • Artifact is some kind of thing (and I can’t identify it any more precisely than with the word “thing” because just about anything and everything has been tried and found to work) that people can pass between them to form a transaction; and
  • Underlying Value is some form of wealth that stands behind the artifact and gives an agreed-on value to the transaction.

For cryptocurrencies, the artifact consists of entries in a computer memory. The transactions are simply changes in the entries in computer memories. More specifically, blockchains amount to electronic ledger entries in a common database that forever leave an indelible record of transactions. (Sound familiar?)
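
I promised not to explain blockchains, and I won’t, but for the curious here’s a bare-bones sketch of the “indelible shared ledger” idea. It’s a toy, assuming nothing more than a standard hash function, and bears no resemblance to any real cryptocurrency’s implementation:

    # Toy hash-chained ledger: each entry locks in everything that came before it.
    import hashlib, json

    def add_entry(ledger, transaction):
        prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
        record = {"transaction": transaction, "prev_hash": prev_hash}
        digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode())
        record["hash"] = digest.hexdigest()
        ledger.append(record)

    ledger = []
    add_entry(ledger, "Alice pays Bob 5 units")
    add_entry(ledger, "Bob pays Carol 2 units")
    # Altering an early entry changes its hash and breaks every later link,
    # which is what makes the shared record effectively indelible.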

Originally, the underlying value of traditional currencies was imagined to be the wealth represented by the metal in a coin, or the intrinsic value of a jewel, and so forth. More recently folks have begun imagining that the underlying value of government issued currency (dollars, pounds sterling, yuan) was fictitious. They began to believe the value of a dollar was whatever people believed it was.

According to this idea, anybody could issue currency as long as they got a bunch of people together to agree that it had some value. Put that concept together with the blockchain method of common recordkeeping, and you get cryptocurrency.

I’m oversimplifying all this in an effort to keep this posting within rational limits and to make a point, so bear with me. The point I’m trying to make is that the difference between any cryptocurrency and U.S. dollars is that these cryptocurrencies have no underlying value.

I’ve heard the argument that there’s no underlying value behind U.S. dollars, either. That just ain’t so! Having dollars issued by the U.S. government and tied to the U.S. tax base connects dollars to the U.S. economy. In other words, the underlying value backing up the artifacts of U.S. dollars is the entire U.S. economy. The total U.S. economic output in 2016, as measured by gross domestic product (GDP), was just under 20 trillion dollars. That ain’t nothing!