Stick to Your Knitting

Man in suit sticking to his knitting. Photo by fokusgood / Shutterstock

6 June 2019 – Once upon a time in an MBA school far, far away, I took a Marketing 101 class. The instructor, whose name I can no longer be sure of, had a number of sayings that proved insightful, bordering on the oracular. (That means they were generally really good advice.) One that he elevated to the level of a mantra was: “Stick to the knitting.”

Really successful companies of all sizes hew to this advice. There have been periods of history where fast-growing companies run by CEOs with spectacularly big egos have equally spectacularly honored this mantra in the breach. With more hubris than brains, they’ve managed to over-invest themselves out of business.

Today’s tech industry – especially the FAANG companies (Facebook, Amazon, Apple, Netflix and Google) – is particularly prone to this mistake. Here I hope to concentrate on what the mantra means, and what goes wrong when you ignore it.

Okay, “stick to your knitting” is based on the obvious assumption that every company has some core expertise. Amazon, for example, has expertise in building and operating an online catalog store. Facebook has expertise in running an online forum. Netflix operates a bang-up streaming service. Ford builds trucks. Lockheed Martin makes state-of-the-art military airplanes.

General Electric, which has core expertise in manufacturing industrial equipment, got into real trouble when it got the bright idea of starting a finance company to extend loans to its customers for purchases of its equipment.

Conglomeration

There is a business model, called the conglomerate, that is based on explicitly ignoring the “knitting” mantra. It was especially popular in the 1960s. Corporate managers imagined that conglomerates could bring into play synergies that would make them more effective than single-business companies.

For a while there, this model seemed to be working. However, when business conditions began to change (specifically interest rates began to rise from an abnormally low level to more normal rates) their supposed advantages began melting like a birthday cake left outside in a rainstorm. These huge conglomerates began hemorrhaging money until vultures swooped in to pick them apart. Conglomerates are now a thing of the past.

There are companies, such as Berkshire Hathaway, whose core expertise is in evaluating and investing in other companies. Some of them are very successful, but that’s because they stick to their core expertise.

Berkshire Hathaway was originally a textile company that investor Warren Buffett took over when the textile industry was busy going overseas. As time went on, textiles became less important and, by 1985, this core part of the company was shut down. It had become a holding company for Buffett’s investments in other companies. It turns out that Buffett’s core competence is in handicapping companies for investment potential. That’s his knitting!

The difference between a holding company and a conglomerate is (and this is specifically my interpretation) a matter of integration. In a conglomerate, the different businesses are more-or-less integrated into the parent corporation. In a holding company, they are not.

Berkshire Hathaway is known for its insurance business, but if you want to buy, say, auto insurance from Berkshire Hathaway, you have to go to its Government Employees Insurance Company (GEICO) subsidiary. GEICO is a separate company that happens to be wholly owned by Berkshire Hathaway. That is, it has its own corporate headquarters and all the staff, fixtures, and other resources needed to operate as an independent insurance company. It just happens to be owned, lock, stock, and intellectual property, by another corporate entity: Berkshire Hathaway.

GEICO’s core expertise is insurance. Berkshire Hathaway’s core expertise is finding good companies to invest in. Some are partially owned (e.g., 5.4% of Apple); some are wholly owned (e.g., Acme Brick).

Despite Berkshire Hathaway’s holding positions in both Apple and Acme Brick, if you ask Warren Buffett whether Berkshire Hathaway is a computer company or a brick company, he’d undoubtedly say “no.” Berkshire Hathaway is a diversified holding company.

Its business is owning other businesses.

To paraphrase James Coburn’s line from Stanley Donen’s 1963 film Charade: “[Mrs. Buffett] didn’t raise no stupid children!”

Why Giant Corporations?

All this giant corporation stuff stems from a dynamic I also learned about in MBA school: a company grows or it dies. I ran across this dynamic during a financial modeling class where we used computers to predict results of corporate decisions in lifelike conditions. Basically, what happens is that unless the company strives to its utmost to maintain growth, it starts to shrink and then all is lost. Feedback effects take over and it withers and dies.

Observations since then have convinced me this is some kind of natural law. It shows up in all kinds of natural systems. I used to think I understood why, but I’m not so sure anymore. It may have something to do with chaos, and we live in a chaotic universe. I resolve to study this in more detail – later.

But, anyway …

Companies that embrace this mantra (You grow or you die.) grow until they reach some kind of external limit, then they stop growing and – in some fashion or other – die.

Sometimes (and paradigm examples abound) external limits don’t kick in before some company becomes very big, indeed. Standard Oil Company may be the poster child for this effect. Basically, the company grew to monopoly status and, in 1911, the U.S. Federal Government stepped in and, using the 1890 Sherman Antitrust Act, forced its breakup into 33 smaller oil companies, many of which still exist today as some of the world’s major oil companies (e.g., Mobil, Amoco, and Chevron). At the time of its breakup, Standard Oil had a market capitalization of just under $11B and was the third most valuable company in the U.S. Compare that to the U.S. GDP of roughly $34B at the time.

The problem with companies that big is that they generate tons of free cash. What to do with it?

There are three possibilities:

  1. You can reinvest it in your company;

  2. You can return it to your shareholders; or

  3. You can give it away.

Reinvesting free cash in your company is usually the first choice. I say it is the first choice because it is used at the earliest period of the company’s history – the period when growth is necessarily the only goal.

If done properly, reinvestment can make your company grow bigger faster. You can reinvest by out-marketing your competition (by, say, making better advertisements) and gobbling up market share. You can also reinvest to make your company’s operations more effective or efficient. To grow, you also need to invest in adding production facilities.

At a later stage, your company is already growing fast and you’ve got state-of-the-art facilities, and you dominate your market. It’s time to do what your investors gave you their money for in the first place: return profits to them in the form of dividends. I kinda like that. It’s what the game’s all about, anyway.

Finally, most leaders of large companies recognize that having a lot of free cash lying around is an opportunity to do some good without (obviously) expecting a payback. I qualify this with the word “obviously” because on some level altruism does provide a return in some form.

Generally, companies engage in altruism (currently more often called “philanthropy”) to enhance their perception by the public. That’s useful when lawsuits rear their ugly heads or somebody in the organization screws up badly enough to invite public censure. Companies can enhance their reputations by supporting industry activities that do not directly enhance their profits.

So-called “growth companies,” however, get stuck in that early growth phase, and never transition to paying dividends. In the early days of the personal-computer revolution, tech companies prided themselves on being “growth stocks.” That is, investors gained vast wealth on paper as the companies’ stock prices went up, but couldn’t realize those gains (capital gains) unless they sold the stock. Or, as my father once did, by using the stock as collateral to borrow money.

In the end, wise investors eventually want their money back in the form of cash from dividends. For example, in the early 2000s, Microsoft and other technology companies were forced by their shareholders to start paying dividends for the first time.

What can go wrong

So, after all’s said and done, why does my marketing professor’s mantra amount to wise corporate governance?

To make money, especially the scads of money that corporations need to become really successful, you’ve gotta do something right. In fact, you gotta do something better than the other guys. When you know how to do something better than the other guys, that’s called expertise!

Companies, like people, have limitations. To imagine you don’t have limitations is hubris. To put hubris in perspective, recall that the ancients famously made it Lucifer’s cardinal sin. In fact, it was his only sin!

Folks who tell you that you can do anything are flat out conning your socks off.

If you’re lucky you can do one thing better than others. If you’re really lucky, you can do a few things better than others. If you try to do stuff outside your expertise, however, you’re gonna fail. A person can pick themselves up, dust themselves off, and try again – but don’t try to do the same thing again ‘cause you’ve already proved it’s outside your expertise. People can start over, but companies usually can’t.

One of my favorite sayings is:

Everything looks easy to someone who doesn’t know what they’re doing.

The rank amateur at some activity typically doesn’t know the complexities and pitfalls that an expert in the field has learned about through training and experience. That’s what we know as expertise. When anyone – or any company – wanders outside their field of expertise, they quickly fall foul of those complexities and pitfalls.

I don’t know how many times I’ve overheard some jamoke at an art opening say, “Oh, I could do that!”

Yeah? Then do it!

The artist has actually done it.

The same goes for some computer engineer who imagines that knowing how to program computers makes him (or her) smart, and because (s)he is so smart, (s)he could run, say, a magazine publishing house. How hard can it be?

Mark Zuckerberg is in the process of finding out.

Fed Reports on U.S. Economic Well-Being

The Federal Reserve released the results of its annual Survey of Household Economics and Decisionmaking for calendar year 2018 last week. Image by Thomas Barrat / Shutterstock

29 May 2019 – Last week (specifically 23 May 2019) the Federal Reserve Board released the results of its annual Survey of Household Economics and Decisionmaking for CY2018. I’ve done two things for readers of this blog. First, I downloaded a PDF copy of the report to make available free of charge on my website at cgmasi.com alongside last year’s report for comparison. Second, I’m publishing an edited extract of the report’s executive summary below.

The report describes the results of the sixth annual Survey of Household Economics and Decisionmaking (SHED). In October and November 2018, the latest SHED polled a self-selected sample of over 11,000 individuals via an online survey.

Along with the survey-results report, the Board published the complete anonymized data in CSV, SAS, and STATA formats, as well as a supplement containing the complete SHED questionnaire and responses to all questions in the order asked. The survey continues to use subjective measures and self-assessments to supplement and enhance objective measures.

Overall Results

Survey respondents reported that most measures of economic well-being and financial resilience in 2018 are similar to or slightly better than in 2017. Many families have experienced substantial gains since the survey began in 2013, in line with the nation’s ongoing economic expansion during that period.

Even so, another year of economic expansion and the low national unemployment rates did little to narrow the persistent economic disparities by race, education, and geography. Many adults are financially vulnerable and would have difficulty handling an emergency expense as small as $400.

In addition to asking adults whether they are working, the survey asks if they want to work more and what impediments to working they see.

Overall Economic Well-Being

A large majority of individuals report that, financially, they are doing okay or living comfortably, and overall economic well-being has improved substantially since the survey began in 2013.

  • When asked about their finances, 75% of adults say they are either doing okay or living comfortably. This result in 2018 is similar to 2017 and is 12 percentage points higher than in 2013.

  • Adults with a bachelor’s degree or higher are significantly more likely to be doing at least okay financially (87%) than those with a high school degree or less (64%).

  • Nearly 8 in 10 whites are at least doing okay financially in 2018 versus two-thirds of blacks and Hispanics. The gaps in economic well-being by race and ethnicity have persisted even as overall well-being has improved since 2013.

  • Fifty-six percent of adults say they are better off than their parents were at the same age and one fifth say they are worse off.

  • Nearly two-thirds of respondents rate their local economic conditions as “good” or “excellent,” with the rest rating conditions as “poor” or “only fair.” More than half of adults living in rural areas describe their local economy as good or excellent, compared to two-thirds of those living in urban areas.

Income

Changes in family income from month to month remain a source of financial strain for some individuals.

  • Three in 10 adults have family income that varies from month to month. One in 10 adults have struggled to pay their bills because of monthly changes in income. Those with less access to credit are much more likely to report financial hardship due to income volatility.

  • One in 10 adults, and over one-quarter of young adults under age 30, receive some form of financial support from someone living outside their home. This financial support is mainly between parents and adult children and is often to help with general expenses.

Employment

Most adults are working as much as they want to, an indicator of full employment; however, some remain unemployed or underemployed. Economic well-being is lower for those wanting to work more, those with unpredictable work schedules, and those who rely on gig activities as a main source of income.

  • One in 10 adults are not working and want to work, though many are not actively looking for work. Four percent of adults in the SHED are not working, want to work, and applied for a job in the prior 12 months. This is similar to the official unemployment rate of 3.8% in the fourth quarter of 2018.

  • Two in 10 adults are working but say they want to work more. Blacks, Hispanics, and those with less education are less likely to be satisfied with how much they are working.

  • Half of all employees received a raise or promotion in the prior year.

  • Unpredictable work schedules are associated with financial stress for some. One-quarter of employees have a varying work schedule, including 17% whose schedule varies based on their employer’s needs. One-third of workers who do not control their schedule are not doing okay financially, versus one-fifth of workers who set their schedule or have stable hours.

  • Three in 10 adults engaged in at least one gig activity in the prior month, with a median time spent on gig work of five hours. Perhaps surprisingly, little of this activity relies on technology: 3% of all adults say that they use a website or an app to arrange gig work.

  • Signs of financial fragility – such as difficulty handling an emergency expense – are slightly more common for those engaged in gig work, but markedly higher for those who do so as a main source of income.

Dealing with Unexpected Expenses

While self-reported ability to handle unexpected expenses has improved substantially since the survey began in 2013, a sizeable share of adults nonetheless say that they would have some difficulty with a modest unexpected expense.

  • If faced with an unexpected expense of $400, 61% of adults say they would cover it with cash, savings, or a credit card paid off at the next statement – a modest improvement from the prior year. Similar to the prior year, 27% would borrow or sell something to pay for the expense, and 12% would not be able to cover the expense at all.

  • Seventeen percent of adults are not able to pay all of their current month’s bills in full. Another 12% of adults would be unable to pay their current month’s bills if they also had an unexpected $400 expense that they had to pay.

  • One-fifth of adults had major, unexpected medical bills to pay in the prior year. One-fourth of adults skipped necessary medical care in 2018 because they were unable to afford the cost.

Banking and Credit

Most adults have a bank account and are able to obtain credit from mainstream sources. However, substantial gaps in banking and credit services exist among minorities and those with low incomes.

  • Six percent of adults do not have a bank account. Fourteen percent of blacks and 11% of Hispanics are unbanked versus 4% of whites. Thirty-five percent of blacks and 23% of Hispanics have an account but also use alternative financial services, such as money orders and check cashing services, compared to 11% of whites.

  • More than one-fourth of blacks are not confident that a new credit card application would be approved if they applied—over twice the rate among whites.

  • Those who never carry a credit card balance are much more likely to say that they would pay an unexpected $400 expense with cash or its equivalent (88%) than those who carry a balance most or all of the time (40%) or who do not have a credit card (27%).

  • Thirteen percent of adults with a bank account had at least one problem accessing funds in their account in the prior year. Problems with a bank website or mobile app (7%) and delays in when funds were available to use (6%) are the most common problems. Those with volatile income and low savings are more likely to experience such problems.

Housing and Neighborhoods

Satisfaction with one’s housing and neighborhood is generally high, although notably less so in low-income communities. While 8 in 10 adults living in middle- and upper-income neighborhoods are satisfied with the overall quality of their community, 6 in 10 living in low- and moderate-income neighborhoods are satisfied.

  • People’s satisfaction with their housing does not vary much between more expensive and less expensive cities or between urban and rural areas.

  • Over half of renters needed a repair at some point in the prior year, and 15% of renters had moderate or substantial difficulty getting their landlord to complete the repair. Black and Hispanic renters are more likely than whites to have difficulties getting repairs done.

  • Three percent of non-homeowners were evicted, or moved because of the threat of eviction, in the prior two years. Evictions are slightly more common in urban areas than in rural areas.

Higher Education

Economic well-being rises with education, and most of those holding a post-secondary degree think that attending college paid off.

  • Two-thirds of graduates with a bachelor’s degree or more feel that their educational investment paid off financially, but 3 in 10 of those who started but did not complete a degree share this view.

  • Among young adults who attended college, more than twice as many Hispanics went to a for-profit institution as did whites. For young black attendees, this rate was five times the rate of whites.

  • Given what they know now, half of those who attended a private for-profit institution say that they would attend a different school if they had a chance to go back and make their college choices again. By comparison, about one-quarter of those who attended public or private not-for-profit institutions would want to attend a different school.

Student Loans and Other Education Debt

Over half of young adults who attended college took on some debt to pay for their education. Most borrowers are current on their payments or have successfully paid off their loans.

  • Among those making payments on their student loans, the typical monthly payment is between $200 and $299 per month.

  • Over one-fifth of borrowers who attended private for-profit institutions are behind on student loan payments, versus 8% who attended public institutions and 5% who attended private not-for-profit institutions.

Retirement

Many adults are struggling to save for retirement. Even among those who have some savings, people commonly lack financial knowledge and are uncomfortable making investment decisions.

  • Thirty-six percent of non-retired adults think that their retirement saving is on track, but one-quarter have no retirement savings or pension whatsoever. Among non-retired adults over the age of sixty, 45% believe that their retirement saving is on track.

  • Six in 10 non-retirees who hold self-directed retirement savings accounts, such as a 401(k) or IRA, have little or no comfort in managing their investments.

  • On average, people answer fewer than three out of five financial literacy questions correctly, with lower scores among those who are less comfortable managing their retirement savings.

The foregoing is an edited extract from the report’s Executive Summary. A PDF version of the entire report is available on my website at cgmasi.com along with a PDF version of the 2017 report, which was published in May of 2018 and based on a similar survey conducted in late 2017. Reports dating back to the first survey done in late 2013 are available from the Federal Reserve Board’s website linked to above.

Luddites’ Lament

An owner of a factory defending his workshop against Luddites intent on destroying his mechanized looms between 1811 and 1816. Everett Historical/Shutterstock

27 March 2019 – A reader of last week’s column, in which I reported recent opinions voiced by a few automation experts at February’s Conference on the Future of Work held at Stanford University, informed me of a chapter from Henry Hazlitt’s Economics in One Lesson (first published in 1946) that Australian computer scientist Steven Shaw uploaded to his blog.

I’m not going to get into the tangled web of potential copyright infringement that Shaw’s posting of Hazlitt’s entire text opens up; I’ve just linked to the most convenient-to-read posting of that particular chapter. If you follow the link and want to buy the book, I’ve given you the appropriate link as well.

The chapter is of immense value apropos the question of whether automation generally reduces the need for human labor, or creates more opportunities for humans to gain useful employment. Specifically, it looks at the results of a number of historic events where Luddites excoriated technology developers for taking away jobs from humans only to have subsequent developments prove them spectacularly wrong.

Hazlitt’s classic book is, not surprisingly for a classic, well documented, authoritative, and extremely readable. I’m not going to pretend to provide an alternative here; instead, I’ll summarize some of the chapter’s examples in the hope that you’ll be intrigued enough to seek out the original.

Luddism

Before getting on to the examples, let’s start by looking at the history of Luddism. It’s not a new story, really. It probably dates back to just after cave guys first thought of specialization of labor.

That is, sometime in the prehistoric past, some blokes were found to be especially good at doing some things, and the rest of the tribe came up with the idea of letting, say, the best potters make pots for the whole tribe, and everyone else rewarding them for a job well done by, say, giving them choice caribou parts for dinner.

Eventually, they had the best flint knappers make the arrowheads, the best fletchers put the arrowheads on the arrows, the best bowmakers make the bows, and so on. Division of labor into different jobs turned out to be so spectacularly successful that rugged individualists, who pretend to do everything for themselves, are now few and far between (and are largely kidding themselves, anyway).

Since then, anyone who comes up with a great way to do anything more efficiently runs the risk of having the folks who spent years learning to do it the old way land on him (or her) like a ton of bricks.

It’s generally a lot easier to throw rocks to drive the innovator away than to adapt to the innovation.

Luddites in the early nineteenth century were organized bands of workers who violently resisted mechanization of factories during the late Industrial Revolution. They were named for an imaginary character, Ned Ludd, supposedly an apprentice who smashed two stocking frames in 1779 and whose name became emblematic of machine destroyers. The term “Luddite” has come to mean anyone fanatically opposed to deploying advanced technology.

Of course, like religious fundamentalists, they have to pick a point in time to separate “good” technology from the “bad.” Unlike religious fanatics, who generally pick publication of a certain text to be the dividing line, Luddites divide between the technology of their immediate past (with which they are familiar) and anything new or unfamiliar. Thus, it’s a continually moving target.

In either case, the dividing line is fundamentally arbitrary, so the emotion of their response is irrational. Irrationality typically carries a warranty of being entirely contrary to facts.

What Happens Next

Hazlitt points out, “The belief that machines cause unemployment, when held with any logical consistency, leads to preposterous conclusions.” He notes that on the second page of the first chapter of Adam Smith’s seminal book Wealth of Nations, Smith tells us that a workman unacquainted with the use of machinery employed in sewing-pin-making “could scarce make one pin a day, and certainly could not make twenty,” but with the use of the machinery he can make 4,800 pins a day. So, zero-sum game theory would indicate an immediate 99.98 percent unemployment rate in the pin-making industry of 1776.
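To spell out the arithmetic behind that figure (my own back-of-the-envelope, using Smith’s one-pin-a-day baseline): if total pin output were fixed, one machine-equipped worker would displace 4,800 hand workers, for an unemployment rate of

    u \;=\; 1 - \frac{1}{4800} \;\approx\; 0.9998 \;=\; 99.98\%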

Did that happen? No, because economics is not a zero-sum game. Sewing pins went from dear to cheap. Since they were now cheap, folks prized them less and discarded them more (when was the last time you bothered to straighten a bent pin?), and more folks could afford to buy them in the first place. That led to an increase in sewing-pin sales as well as sales of things like sewing-patterns and bulk fine fabric sold to amateur sewers, and more employment, not less.

Similar results obtained in the stocking industry when new stocking frames (the original having been invented by William Lee in 1589, but denied a patent by Elizabeth I, who feared its effects on employment in the hand-knitting industry) were protested by Luddites as fast as they could be introduced. Before the end of the nineteenth century the stocking industry was employing at least a hundred men for every man it employed at the beginning of the century.

Another example Hazlitt presents from the Industrial Revolution happened in the cotton-spinning industry. He says: “Arkwright invented his cotton-spinning machinery in 1760. At that time it was estimated that there were in England 5,200 spinners using spinning wheels, and 2,700 weavers—in all, 7,900 persons engaged in the production of cotton textiles. The introduction of Arkwright’s invention was opposed on the ground that it threatened the livelihood of the workers, and the opposition had to be put down by force. Yet in 1787—twenty-seven years after the invention appeared—a parliamentary inquiry showed that the number of persons actually engaged in the spinning and weaving of cotton had risen from 7,900 to 320,000, an increase of 4,400 percent.”

As these examples indicate, improvements in manufacturing efficiency generally lead to reductions in manufacturing cost, which, when passed along to customers, reduce prices with concomitant increases in unit sales. This is the price elasticity of demand from Microeconomics 101. It is the reason economics is decidedly not a zero-sum game.
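In symbols (the standard Micro 101 definition, not Hazlitt’s notation), the price elasticity of demand is

    \varepsilon \;=\; \frac{\%\,\Delta Q}{\%\,\Delta P}

and when demand is elastic (|\varepsilon| > 1), a price cut raises unit sales more than proportionally, so total spending – and the labor needed to supply those extra units – goes up rather than down. Pins, stockings, and cotton textiles all behaved like highly elastic goods.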

If we accept economics as not a zero-sum game, predicting what happens when automation makes it possible to produce more stuff with fewer workers becomes a chancy proposition. For example, many economists today blame flat productivity (the amount of stuff produced divided by the number of workers needed to produce it) for lack of wage gains in the face of low unemployment. If that is true, then anything that would help raise productivity (such as automation) should be welcome.

Long experience has taught us that economics is a positive-sum game. In the face of technological advancement, it behooves us to expect positive outcomes while taking measures to ensure that the concomitant economic gains get distributed fairly (whatever that means) throughout society. That is the take-home lesson from the social dislocations that accompanied the technological advancements of the Early Industrial Revolution.

Don’t Panic!

Do not push the red button! Peter Hermes Furian/Shutterstock

20 March 2019 – The image at right visualizes something described in Douglas Adams’ Hitchhiker’s Guide to the Galaxy. At one point, the main characters of that six-part “trilogy” found, on the dashboard of a spaceship they were trying to steal, a big red button marked “DO NOT PRESS THIS BUTTON!” Naturally, they pressed the button, and a new label popped up that said “DO NOT PRESS THIS BUTTON AGAIN!”

Eventually, they got the autopilot engaged only to find it was a stunt ship programmed to crash headlong into the nearest Sun as part of the light show for an interstellar rock band. The moral of this story is “Never push buttons marked ‘DO NOT PUSH THIS BUTTON.’”

Per the author: “It is said that despite its many glaring (and occasionally fatal) inaccuracies, the Hitchhiker’s Guide to the Galaxy itself has outsold the Encyclopedia Galactica because it is slightly cheaper, and because it has the words ‘DON’T PANIC’ in large, friendly letters on the cover.”

Despite these references to the Hitchhiker’s Guide to the Galaxy, this posting has nothing to do with that book, the series, or the guide it describes, except that I’ve borrowed the words from the Guide’s cover as a title. I did that because those words perfectly express the take-home lesson of Bill Snyder’s 11 March 2019 article in The Robot Report entitled “Fears of job-stealing robots are misplaced, say experts.”

Expert Opinions

Snyder’s article reports opinions expressed at the Conference on the Future of Work at Stanford University last month. It’s a topic I’ve shot my word processor off about on numerous occasions in this space, so I thought it would be appropriate to report others’ views as well. First, I’ll present material from Snyder’s article, then I’ll wrap up with my take on the subject.

“Robots aren’t coming for your job,” Snyder says, “but it’s easy to make misleading assumptions about the kinds of jobs that are in danger of becoming obsolete.”

“Most jobs are more complex than [many people] realize,” said Hal Varian, Google’s chief economist.

David Autor, professor of economics at the Massachusetts Institute of Technology, points out that education is a big determinant of how developing trends affect workers: “It’s a great time to be young and educated, but there’s no clear land of opportunity for adults who haven’t been to college.”

“When predicting future labor market outcomes, it is important to consider both sides of the supply-and-demand equation,” said Varian. “Demographic trends that point to a substantial decrease in the supply of labor are potentially larger in magnitude.”

His research indicates that shrinkage of the labor supply due to demographic trends is 53% greater than shrinkage of demand for labor due to automation. To put illustrative numbers on that (mine, not Varian’s): if automation eliminates demand for 100 jobs, demographics remove roughly 153 workers over the same period. That means, while relatively fewer jobs are available, there are a lot fewer workers available to do them. The result is the prospect of a continued labor shortage.

At the same time, Snyder reports that “[The] most popular discussion around technology focuses on factors that decrease demand for labor by replacing workers with machines.”

In other words, fears that robots will displace humans for existing jobs miss the point. Robots, instead, are taking over jobs for which there aren’t enough humans to do them.

Another effect is the fact that what people think of as “jobs” are actually made up of many “tasks,” and it’s tasks that get automated, not entire jobs. Some tasks are amenable to automation while others aren’t.

“Consider the job of a gardener,” Snyder suggests as an example. “Gardeners have to mow and water a lawn, prune rose bushes, rake leaves, eradicate pests, and perform a variety of other chores.”

Some of these tasks, like mowing and watering, can easily be automated. Pruning rose bushes, not so much!

Snyder points to news reports of a hotel in Nagasaki, Japan being forced to “fire” robot receptionists and room attendants that proved to be incompetent.

There’s a scene in the 1997 film The Fifth Element where a supporting character tries to converse with a robot bartender about another character. He says: “She’s so vulnerable – so human. Do you know what I mean?” The robot shakes its head, “No.”

Sometimes people, even misanthropes, would prefer to interact with another human than with a drink-dispensing machine.

“Jobs,” Varian points out, “unlike repetitive tasks, tend not to disappear. In 1950, the U.S. Census Bureau listed 250 separate jobs. Since then, the only one to be completely eliminated is that of elevator operator.”

“Excessive automation at Tesla was a mistake,” founder Elon Musk mea culpa-ed last year. “Humans are underrated.”

Another trend Snyder points out is that automation-ready jobs, such as assembly-line factory workers, have already largely disappeared from America. “The 10 most common occupations in the U.S.,” he says, “include such jobs as retail salespersons, nurses, waiters, and other service-focused work. Notably, traditional occupations, such as factory and other blue-collar work, no longer even make the list.”

Again, robots are mainly taking over tasks that humans are not available to do.

The final trend that Snyder presents is the stark fact that birthrates in developed nations are declining – in some cases precipitously. “The aging of the baby boom generation creates demand for service jobs,” Varian points out, “but leaves fewer workers actively contributing labor to the economy.”

Those “service jobs” are just the ones that require a human touch, so they’re much harder to automate successfully.

My Inexpert Opinion

I’ve been trying, not entirely successfully, to figure out what role robots will actually have vis-a-vis humans in the future. I think there will be a few macroscopic trends. And, the macroscopic trends should be the easiest to spot ’cause they’re, well, macroscopic. That means bigger. So, they’re easier to see. See?

As early as 2010, I worked out one important difference between robots and humans that I expounded in my novel Vengeance is Mine! Specifically, humans have a wider view of the Universe and have more of an emotional stake in it.

“For example,” I had one of my main characters pontificate at a cocktail party, “that tall blonde over there is an archaeologist. She uses ROVs – remotely operated vehicles – to map underwater shipwreck sites. So, she cares about what she sees and finds. We program the ROVs with sophisticated navigational software that allows her to concentrate on what she’s looking at, rather than the details of piloting the vehicle, but she’s in constant communication with it because she cares what it does. It doesn’t.”

More recently, I got a clearer image of this relationship and it’s so obvious that we tend to overlook it. I certainly missed it for decades.

It hit me like a brick when I saw a video of an autonomous robot marine-trash collector. This device is a small autonomous surface vessel with a big “mouth” that glides around seeking out and gobbling up discarded water bottles, plastic bags, bits of styrofoam, and other unwanted jetsam clogging up waterways.

The first question that popped into my mind was “who’s going to own the thing?” I mean, somebody has to want it, then buy it, then put it to work. I’m sure it could be made to automatically regurgitate the junk it collects into trash bags that it drops off at some collection point, but some human or humans have to make sure the trash bags get collected and disposed of. Somebody has to ensure that the robot has a charging system to keep its batteries recharged. Somebody has to fix it when parts wear out, and somebody has to take responsibility if it becomes a navigation hazard. Should that happen, the Coast Guard is going to want to scoop it up and hand its bedraggled carcass to some human owner along with a citation.

So, on a very important level, the biggest thing robots need from humans is ownership. Humans own robots, not the other way around. Without a human owner, an orphan robot is a pile of junk left by the side of the road!

How to Train Your Corporate Rebel

Rebel Talent by Francesca Gino makes the case for encouraging individualism in the workplace

13 March 2019 – Francesca Gino, author of Rebel Talent: Why It Pays to Break the Rules at Work and In Life, is my kind of girl. She’s smart, thinks for herself, isn’t afraid to go out on a limb, and encourages others to do the same.

That said, I want to inject a note of caution for anyone considering her advice about being a rebel. There’s an old saying: “The nail that sticks up the most is the first to get hammered down.” It’s true in carpentry and in life. Being a rebel is lonely and dangerous, and it’s no guarantee of success, financial or otherwise.

I speak from experience, having broken every rule available for as long as I can remember. When I was a child in the 1950s, I wanted to grow up to be a beatnik. I’ve always felt most comfortable amongst bohemians. My wife once complained (while we were sitting in a muscle car stopped by the highway waiting for the cop to give me a speeding ticket) about my “always living on the edge.” And, yes, I’ve been thrown out of more than one bar.

On the other hand, I’ve lived a long and eventful life. Most of the items on my bucket list were checked off long ago.

So, when I ran across an ad in The Wall Street Journal for Gino’s book, I had to snag a copy and read it.

As I expected, the book’s theme is best summed up by a line from the blurb on its dust jacket: “ … the most successful among us break the rules.”

The book description goes on to say, “Rebels have a bad reputation. We think of them as trouble-makers, outcasts, contrarians: those colleagues, friends, and family members who complicate seemingly straight-forward decisions, create chaos, and disagree when everyone else is in agreement. But in truth, rebels are also those among us who change the world for the better with their unconventional outlooks. Instead of clinging to what is safe and familiar, and falling back on routines and tradition, rebels defy the status quo. They are masters of innovation and reinvention, and they have a lot to teach us.”

Considering the third paragraph above, I hope she’s right!

The 283-page (including notes and index) volume summarizes Gino’s decade-long study of rebels at organizations around the world, from high-end boutiques in Italy’s fashion capital (Milan), to the world’s best restaurant (Three-Michelin-star-rated Osteria Francescana), to a thriving fast-food chain (Pal’s), and an award-winning computer animation studio (Pixar).

Francesca Gino is a behavioral scientist and professor at Harvard Business School. She is the Tandon Family Professor of Business Administration in the school’s Negotiation, Organizations & Markets Unit. No slouch professionally, she has been honored as one of the world’s top 40 business professors under 40 by Poets & Quants and one of the world’s 50 most influential management thinkers by Thinkers50.

Enough with the “In Praise Of” stuff, though. Let’s look inside the book. It’s divided into eight chapters, starting with “Napoleon and the Hoodie: The Paradox of Rebel Status,” and ending with “Blackbeard, ‘Flatness,’ and the 8 Principles of Rebel Leadership.” Gino then adds a “Conclusion” telling the story of Risotto Cacio e Pepe (a rice-in-Parmigiano-Reggiano dish invented by Chef Massimo Bottura), and an “Epilogue: Rebel Action” giving advice on releasing your inner rebel.

Stylistically, the narrative uses the classic “Harvard Case Study” approach. That is, it’s basically a pile of stories, each of which makes a point about how rebel leaders Gino has known approach their work. In summary, the take-home lesson is that those leaders encourage their employees to unleash their “inner rebel,” thereby unlocking creativity, enthusiasm, and productivity that more traditional management styles suppress.

The downside of this style is that it sometimes is difficult for the reader to get their brain around the points that Gino is making. Luckily, her narrative style is interesting, easy to follow, and compelling. Like all good storytellers, she keeps the reader wondering “What happens next?” The episodes she presents are invariably unusual and interesting in themselves. She regularly brings in her own exploits and keeps, as much as possible, to first-person active voice.

That is unusual for academic writers, who find it all too easy to slip into a pedantic third-person, passive-voice best reserved for works intended as sleep aids.

To give you a feel for what reading an HCS-style volume is like, I’ll describe what it’s like to study Quantum Dynamics. While the differences outnumber the similarities, the overall “feel” is similar.

The first impression students get of QD is that the subject is entirely anti-intuitive. That is, before you can learn anything about QD, you have to discard any lingering intuition about how the Universe works. That’s probably easier for someone who never learned Classical Physics in the first place. Ideas like “you can’t be in two places at the same time” simply do not apply in the quantum world.

Basically, to learn QD, you have to start with a generous dose of “willing suspension of disbelief.” You do that by studying stories about experiments performed in the late nineteenth century that simply didn’t work. At that time, the best minds in Physics spent careers banging their heads into walls as Mommy Nature refused to return results that Classical Physics imagined she had to. Things like the Michelson–Morley experiment (and many other then-state-of-the-art experiments) gave results at odds with Classical Physics. There were enough of these screwy results that physicists began to doubt that what they believed to be true was actually how the Universe worked. After listening to enough of these stories, you begin to doubt your own intuition.

Then, you learn to trust the mathematics that will be your only guide in QD Wonderland.

Finally, you spend a couple of years learning about a new set of ideas based on Through the Looking Glass concepts that stand normal intuition on its head. Piling up stories about all these counter-intuitive ideas helps you build up a new intuition about what happens in the quantum world. About that time, you start feeling confident that this new intuition helps you predict what will happen next.

The HCS style of learning does something similar, although usually not as extreme. Reading story after story about what hasn’t and what has worked for others in the business world, you begin to develop an intuition for applying the new ideas. You gain confidence that, in any given situation, you can predict what happens next.

What happens next is that when you apply the methods Gino advocates, you start building a more diverse corporate culture that attracts and retains the kinds of folks that make your company a leader in its field.

There’s an old one-line joke:

“I want to be different – like everybody else.”

We can’t all be different because then there wouldn’t be any sameness to be different from, but we can all be rebels. We can all follow the

  1. READY!
  2. AIM!
  3. FIRE!

mantra advocated by firearms instructors everywhere.

In other words:

  1. Observe what’s going on out there in the world, then
  2. Think about what you might do that breaks the established rules, and, finally,
  3. Act in a way that makes the Universe a better place in which to live.

Why Diversity Rules

A diverse group of people with different ages and nationalities having fun together. Rawpixel/Shutterstock

23 January 2019 – Last week two concepts reared their ugly heads that I’ve been banging on about for years. They’re closely intertwined, so it’s worthwhile to spend a little blog space discussing why they fit so tightly together.

Diversity is Good

The first idea is that diversity is good. It’s good in almost every human pursuit. I’m particularly sensitive to this, being someone who grew up with the idea that rugged individualism was the highest ideal.

Diversity, of course, is incompatible with individualism. Individualism is the cult of the one. “One” cannot logically be diverse. Diversity is a property of groups, and groups by definition consist of more than one.

Okay, set theory admits of groups with one or even no members, but those groups have a diversity “score” (Gini–Simpson index) of zero. To have any diversity at all, your group has to have at absolute minimum two members. The more the merrier (or diversitier).
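For the curious, here’s a minimal sketch of that diversity “score” in Python (my own illustration, assuming each group member carries a single category label, such as a job role):

    from collections import Counter

    def gini_simpson(members):
        """Gini-Simpson index: the probability that two members drawn at
        random (with replacement) belong to different categories."""
        n = len(members)
        counts = Counter(members)
        return 1.0 - sum((count / n) ** 2 for count in counts.values())

    print(gini_simpson(["physicist"]))                            # 0.0 -- a group of one
    print(gini_simpson(["physicist", "physicist"]))               # 0.0 -- two of a kind
    print(gini_simpson(["physicist", "engineer", "technician"]))  # ~0.67

A group of one (or of clones) scores zero, and the score climbs as membership spreads across more categories – just as described above.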

The idea that diversity is good came up in a couple of contexts over the past week.

First, I’m reading a book entitled Farsighted: How We Make the Decisions That Matter the Most by Steven Johnson, which I plan eventually to review in this blog. Part of the advice Johnson offers is that groups make better decisions when their membership is diverse. How they are diverse is less important than the extent to which they are diverse. In other words, this is a case where quantity is more important than quality.

Second, I divided my physics-lab students into groups to perform their first experiment. We break students into groups to prepare them for working in teams after graduation. Unlike when I was a student fifty years ago, scientific research and technology development today are always done in teams.

When I was a student, research was (supposedly) done by individuals working largely in isolation. I believe it was Willard Gibbs (I have no reliable reference for this quote) who said: “An experimental physicist must be a professional scientist and an amateur everything else.”

By this he meant that building a successful physics experiment requires the experimenter to apply so many diverse skills that it is impossible to have professional mastery of all of them. He (or she) must have an amateur’s ability to pick up novel skills in order to reach the next goal in their research. They must be ready to work outside their normal comfort zone.

That asked a lot from an experimental researcher! Individuals who could do that were few and far between.

Today, the fast pace of technological development has reduced that pool of qualified individuals essentially to zero. It certainly is too small to maintain the pace society expects of the engineering and scientific communities.

Tolkien’s “unimaginable hand and mind of Fëanor” puttering around alone in his personal workshop crafting magical things is unimaginable today. Marlowe’s Dr. Faustus character, who had mastered all earthly knowledge, is now laughable. No one person is capable of making a major contribution to today’s technology on their own.

The solution is to perform the work of technological research and development in teams with diverse skill sets.

In the sciences, theoreticians with strong mathematical backgrounds partner with engineers capable of designing machines to test the theories, and technicians with the skills needed to fabricate the machines and make them work.

Chaotic Universe

The second idea I want to deal with in this essay is that we live in a chaotic Universe.

Chaos is a property of complex systems. These are systems consisting of many interacting moving parts that show predictable behavior on short time scales, but eventually foil the most diligent attempts at long-term prognostication.
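To see what “predictable on short time scales, unpredictable in the long run” looks like concretely, here’s a toy demonstration (my own illustration, not drawn from any source cited here) using the logistic map, a textbook one-line chaotic system:

    # Two copies of the logistic map x -> r*x*(1 - x), started from
    # initial conditions that differ by only one part in a billion.
    r = 4.0
    x, y = 0.2, 0.2 + 1e-9

    for step in range(1, 51):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.1e}")

The gap between the two trajectories roughly doubles with every step: they agree closely for the first dozen or so steps, then bear no resemblance to each other by around step 30, even though each individual step is perfectly deterministic.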

A pendulum, by contrast, is a simple system consisting of, basically, three moving parts: a massive weight, or “pendulum bob,” that hangs by a rod or string (the arm) from a fixed support. Simple systems usually do not exhibit chaotic behavior.

The solar system, consisting of a huge, massive star (the Sun), eight major planets and a host of minor planets, is decidedly not a simple system. Its behavior is borderline chaotic. I say “borderline” because the solar system seems well behaved on short time scales (e.g., millennia), but when viewed on time scales of millions of years does all sorts of unpredictable things.

For example, approximately four and a half billion years ago (a few tens of millions of years after the system’s initial formation), a Mars-sized planet collided with Earth, spalling off a mass of material that coalesced to form the Moon, then ricocheted out of the solar system. That’s the sort of unpredictable event that happens in a chaotic system if you wait long enough.

The U.S. economy, consisting of millions of interacting individuals and companies, is wildly chaotic, which is why no investment strategy has ever been found to work reliably over a long time.

Putting It Together

The way these two ideas (diversity is good, and we live in a chaotic Universe) work together is that collaborating in diverse groups is the only way to successfully navigate life in a chaotic Universe.

An individual human being is so powerless that attempting anything but the smallest task is beyond his or her capacity. The only way to do anything of significance is to collaborate with others in a diverse team.

In the late 1980s my wife and I decided to build a house. To begin with, we had to decide where to build the house. That required weeks of collaboration (by our little team of two) to combine our experiences of different communities in the area where we were living, develop scenarios of what life might be like living in each community, and finally agree on which we might like the best. Then we had to find an architect to work with our growing team to design the building. Then we had to negotiate with banks for construction loans, bridge loans, and ultimate mortgage financing. Our architect recommended adding a prime contractor who had connections with carpenters, plumbers, electricians and so forth to actually complete the work. The better part of a year later, we had our dream house.

There’s no way I could have managed even that little project – building one house – entirely on my own!

In 2015, I ran across the opportunity to produce a short film for a film festival. I knew how to write a script, run a video camera, sew a costume, act a part, do the editing, and so forth. In short, I had all the skills needed to make that 30-minute film.

Did that mean I could make it all by my onesies? Nope! By the time the thing was completed, the list of cast and crew counted over a dozen people, each with their own job in the production.

By now, I think I’ve made my point. The take-home lesson of this essay is that if you want to accomplish anything in this chaotic Universe, start by assembling a diverse team, and the more diverse, the better!

Robots Revisited

Engineer using monitoring system software to check and control SCARA welding robots in a digital manufacturing operation. PopTika/Shutterstock

12 December 2018 – I was wondering what to talk about in this week’s blog posting, when an article bearing an interesting-sounding headline crossed my desk. The article, written by Simone Stolzoff of Quartz Media, was published last Monday (12/3/2018) by the World Economic Forum (WEF) under the title “Here are the countries most likely to replace you with a robot.”

I generally look askance at organizations with grandiose names that include the word “World,” figuring that they likely are long on megalomania and short on substance. Further, this one lists the inimitable (thank God there’s only one!) Al Gore on its Board of Trustees.

On the other hand, David Rubenstein is also on the WEF board. Rubenstein usually seems to have his head screwed on straight, so that’s a positive sign for the organization. Therefore, I figured the article might be worth reading and should be judged on its own merits.

The main content is summarized in two bar graphs. The first lists the ratio of robots to thousands of manufacturing workers in various countries. The highest scores go to South Korea and Singapore. In fact, three of the top four are Far Eastern countries. The United States comes in around number seven (Figure 1).

The second applies a correction to the graphed data to reorder the list by taking into account the countries’ relative wealth. There, the United States comes in dead last among the sixteen countries listed. East Asian countries account for all of the top five (Figure 2).

The take-home lesson from the article is conveniently stated in its final paragraph:

The upshot of all of this is relatively straightforward. When taking wages into account, Asian countries far outpace their western counterparts. If robots are the future of manufacturing, American and European countries have some catching up to do to stay competitive.

This article, of course, got me started thinking about automation and how manufacturers choose to adopt it. It’s a subject that was a major theme throughout my tenure as Chief Editor of Test & Measurement World and constituted the bulk of my work at Control Engineering.

The graphs certainly support the conclusions expressed in the cited paragraph’s first two sentences. The third sentence, however, is problematical.

That ultimate conclusion is based on accepting that “robots are the future of manufacturing.” Absolute assertions like that are always dangerous. Seldom is anything so all-or-nothing.

Predicting the future is epistemological suicide. Whenever I hear such bald-faced statements I recall Jim Morrison’s prescient statement: “The future’s uncertain and the end is always near.”

The line was prescient because a little over a year after the song’s release, Morrison was dead at age twenty-seven, thereby fulfilling the slogan expressed by John Derek’s “Nick Romano” character in Nicholas Ray’s 1949 film Knock on Any Door: “Live fast, die young, and leave a good-looking corpse.”

Anyway, predictions like “robots are the future of manufacturing” are generally suspect because, in the chaotic Universe in which we live, the future is inherently unpredictable.

If you want to say something practically guaranteed to be wrong, predict the future!

I’d like to offer an alternate explanation for the data presented in the WEF graphs. It’s based on my belief that American Culture usually gets things right in the long run.

Yes, that’s the long run in which economist John Maynard Keynes pointed out that we’re all dead.

My belief in the ultimate vindication of American trends is based, not on national pride or jingoism, but on historical precedents. Countries that have bucked American trends often start out strong, but ultimately fade.

An obvious example is trendy Japanese management techniques based on Druckerian principles that were so much in vogue during the last half of the twentieth century. Folks imagined such techniques were going to drive the Japanese economy to pre-eminence in the world. Management consultants touted such principles as the future for corporate governance without noticing that while they were great for middle management, they were useless for strategic planning.

Japanese manufacturers beat the crap out of U.S. industry for a while, but eventually their economy fell into a prolonged recession characterized by economic stagnation and disinflation so severe that even negative interest rates couldn’t restart it.

Similar examples abound, which is why our little country with its relatively minuscule population (4.3% of the world’s) has by far the biggest GDP in the world. China, with more than four times the population, grosses less than a third of what we do.

So, if robotic adoption is the future of manufacturing, why are we so far behind? Assuming we actually do know what we’re doing, as past performance would suggest, the answer must be that the others are getting it wrong. Their faith in robotics as a driver of manufacturing productivity may be misplaced.

How could that be? What could be wrong with relying on technological advancement as the driver of productivity?

Manufacturing productivity is calculated as the total value (in dollars) of the stuff produced divided by the number of worker-hours needed to produce it. That should tell you something about what it takes to produce stuff: it’s all about human worker involvement.

Folks who think robots automatically increase productivity are fixating on the denominator in the productivity calculation. Making even the same amount of stuff while reducing the worker-hours needed to produce it should drive productivity up fast. That’s basic arithmetic. Yet, while manufacturing has been rapidly introducing all kinds of automation over the last few decades, productivity has stagnated.
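Just to make that denominator fixation concrete, here’s a toy calculation in Python. The dollar figures and hours are invented for illustration; nothing here comes from the WEF data.

```python
# Toy productivity arithmetic (all numbers invented for illustration).
# Productivity = value of output (dollars) / worker-hours used to make it.

def productivity(output_dollars: float, worker_hours: float) -> float:
    """Dollars of output per worker-hour."""
    return output_dollars / worker_hours

# Same output, but automation cuts the worker-hours needed:
before = productivity(1_000_000, 10_000)  # 100 $/worker-hour
after = productivity(1_000_000, 8_000)    # 125 $/worker-hour

print(f"before automation: {before:.0f} $/worker-hour")
print(f"after automation:  {after:.0f} $/worker-hour")
```

Shrink the denominator and productivity jumps. That’s the whole theory. The puzzle is that the measured numbers haven’t cooperated.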

We need to look for a different explanation.

It just might be that robotic adoption is another example of too much of a good thing. It might be that reliance on technology could prove to be less effective than something about the people making up the work force.

I’m suggesting that because I’ve been led to believe that work forces in the Far Eastern developing economies are less skillful, may have lower expectations, and are more tolerant of authoritarian governments.

Why would those traits make a difference? I’ll take them one at a time to suggest how they might.

The impression that Far Eastern populations are less skillful is not easy to demonstrate. Nobody who’s dealt with people of Asian extraction in either an educational or work-force setting would ever imagine they are at all deficient in either intelligence or motivation. On the other hand, as emerging or developing economies, those countries likely depend more on workers newly recruited from rural, agrarian settings, who are less acclimated to manufacturing and industrial environments. On that basis, one may posit that the available workers may prove less skillful in a manufacturing setting.

It’s a weak argument, but it exists.

The idea that people making up Far-Eastern work forces have lower expectations than those in more developed economies is on firmer footing. Workers in Canada, the U.S. and Europe have very high expectations for how they should be treated. Wages are higher. Benefits are more generous. Upward mobility perceptions are ingrained in the cultures.

For developing economies, not so much.

Then, we come to tolerance of authoritarian regimes. Tolerance of authoritarianism goes hand-in-hand with tolerance for the usual authoritarian vices of graft, lack of personal freedom and social immobility. Only those believing populist political propaganda think differently (which is the danger of populism).

What’s all this got to do with manufacturing productivity?

Lack of skill, low expectations and patience under authority are not conducive to high productivity. People are productive when they work hard. People work hard when they are incentivized. They are incentivized to work when they believe that working harder will make their lives better. It’s not hard to grasp!

Installing robots in a plant won’t by itself lead human workers to believe that working harder will make their lives better. If anything, it’ll do the opposite. They’ll start worrying that their lives are about to take a turn for the worse.

Maybe that has something to do with why increased automation has failed to increase productivity.

Immigration in Perspective

Day without immigrants protest
During ‘A Day Without Immigrants,’ more than 500,000 people marched down Wilshire Boulevard in Los Angeles, CA to protest a proposed federal crackdown on illegal immigration. Krista Kennell / Shutterstock.com

17 October 2018 – Immigration is, by and large, a good thing. It’s not always a good thing, and it carries with it a host of potential problems, but in general immigration is better than its opposite: emigration. And, there are a number of reasons for that.

Immigration is movement toward some place. Emigration is flow away from a place.

Mathematically, population shifts are described by a non-homogeneous second-order differential equation. I expect that statement means absolutely nothing to about half the target audience for this blog, and a fair fraction of the others have (like me) forgotten most of what they ever knew (or wanted to know) about such equations. So, I’ll start with a short review of the relevant points of how the things behave.

It’ll help the rest of this blog make a lot more sense, so bear with me.

Basically, the relevant non-homogeneous second-order differential equation is something called the “diffusion equation.” Leaving the detailed math aside, what this equation says is that the rate of migration of just about anything from one place to another depends on the spatial distribution of population density, a mobility factor, and a driving force pushing the population in one direction or the other.
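For readers who like to see these things written out, one conventional form of such a driven diffusion equation (my choice of symbols, not anything canonical) is

$$\frac{\partial u}{\partial t} = \nabla \cdot \left( \mu\, \nabla u \right) + f(x,t)$$

where \(u(x,t)\) is the population density at location \(x\) and time \(t\), \(\mu\) is the mobility factor, and \(f(x,t)\) is the forcing term. The “second-order” label refers to the two spatial derivatives buried in that \(\nabla \cdot \nabla\); everything that follows is just this equation in words.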

Things (such as people) “diffuse” from places with higher densities to those with lower densities.

That tendency is moderated by a “mobility” factor that expresses how easy it is to get from place to place. It’s hard to walk across a desert, so mobility of people through a desert is low. Similarly, if you build a wall across the migration path, that also reduces mobility. Throwing up all kinds of passport checks, visas and customs inspections also reduces mobility.

Giving people automobiles, buses and airplanes, on the other hand, pushes mobility up by a lot!

But, changing mobility only affects the rate of flow. It doesn’t do anything to change the direction of flow, or to actually stop it. That’s why building walls has never actually worked. It didn’t work for the First Emperor of China. It didn’t work for Hadrian. It hasn’t done much for the Israelis, either.

Direction of flow is controlled by a forcing term. Existence of that forcing term is what makes the equation “non-homogeneous” rather than “homogeneous.” The homogeneous version (without the forcing term) is called the “heat equation” because it models what dumb-old thermal energy does.

Things that can choose what to do (like people), and have feet to help them act on their choices, get to “vote with their feet.” That means they can go where they want, instead of always floating downstream like a dead leaf.

The forcing term largely accounts for the desirability of being in one place instead of another. For example, the United States has a reputation for being a nice place to live. Thus, people try to flock here in droves from places that are not so nice. Thus, there’s a forcing term that points people from other places to the U.S.
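If you’d like to watch those three ingredients (density, mobility, forcing) interact, here’s a back-of-the-envelope 1-D simulation. Every number in it is invented; it’s a cartoon of the idea, not a demographic model.

```python
import numpy as np

# Cartoon of driven diffusion on a 1-D line of 20 "places" (all numbers invented).
# u[i] is the population density at place i; the forcing term pushes everyone
# toward higher i (the "nicer" end).

def migrate(u, mobility, force, steps, dt=0.1):
    u = u.copy()
    for _ in range(steps):
        # Flux between neighbors: diffusion (high density -> low density)
        # plus a drift set by the forcing term. Mobility scales the whole flow.
        flux = mobility * (-np.diff(u) + force * (u[:-1] + u[1:]) / 2)
        u[:-1] -= dt * flux
        u[1:] += dt * flux
    return u

u0 = np.zeros(20)
u0[:5] = 1.0  # everybody starts crowded at the "not nice" end

open_route = migrate(u0, mobility=0.5, force=0.2, steps=2000)
walled = migrate(u0, mobility=0.05, force=0.2, steps=2000)  # wall = 10x lower mobility

# Population that has reached the far ("nice") quarter of the line:
print("open route:", open_route[-5:].sum())
print("with wall: ", walled[-5:].sum())
```

Run it and the walled case lags badly; let it run ten times longer and it catches up. Lower mobility changes the rate, not the destination, which is the math behind everything I’m about to say about walls.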

That’s the big reason you want to live in a country that has immigration issues, rather than one with emigration issues. The Middle East had a serious emigration problem in 2015. For a number of reasons, it had become a nasty place to live. Folks that lived there wanted out in a big way. So, they voted with their feet.

There was a huge forcing term that pushed a million people from the Middle East to elsewhere, specifically Europe. Europe was considered a much nicer place to be, so people were willing to go through Hell to get there. Thus: emigration from the Middle East, and immigration into Europe.

In another example, Nazi occupation in the first half of the twentieth century made most places in Europe distasteful, especially for certain groups of people. So, the forcing term pushed a lot of people across the Atlantic toward America. In 1942 Michael Curtiz made a film about that. It was called Casablanca, and it’s arguably one of the greatest films Humphrey Bogart starred in.

Similarly, for decades Mexico had some serious problems with poverty, organized crime and corruption. Those are things that make a place nasty to live in, so there was a big forcing function pushing people to cross the border into the much nicer United States.

In recent decades, regime change in Mexico cleaned up a lot of the country’s problems, so migration from Mexico to the United States dropped like a stone in the last years of the Obama administration. When Mexico became a nicer place to live, people stopped wanting to move away.

Duh!

There are two morals to this story:

  1. If you want to cut down on immigration from some other country, help that other country become a nicer place to live. (Conversely, you could turn your own country into a third-world toilet so nobody wants to come in, but that’s not what we want.)
  2. Putting up walls and other barriers to immigration doesn’t stop it; they only slow it down.

We’re All Immigrants

I should subtitle this section, “The Bigot’s Lament.”

There isn’t a bi-manual (two-handed) biped (two-legged) creature anywhere in North or South America who isn’t an immigrant or a descendant of immigrants.

There have been two major influxes of human population in the history (and pre-history) of the Americas. The first occurred near the end of the last Ice Age, and the second occurred during the European Age of Discovery.

Before about ten thousand years ago, there were horses, wolves, saber-tooth tigers, camels(!), elephants, bison and all sorts of big and little critters running around the Americas, but not a single human being.

(The actual date is controversial, but you get the idea.)

Anatomically modern humans (and there aren’t any others, because everyone else went extinct tens of thousands of years ago) developed in East Africa about 200,000 years ago.

They were, by the way, almost certainly dark-skinned. A fact every racist wants to ignore is that everybody has black ancestors! You can’t hate black people without hating your own forefathers.

More important for this discussion, however, is that every human being in North and South America is descended from somebody who came here from somewhere else. So-called “Native Americans” came here in the Pleistocene Epoch, most likely from Siberia. Most everybody else showed up after Christopher Columbus accidentally fell over North America.

That started the second big migration of people into the Americas: European colonization.

Mostly these later immigrants were imported to fill America’s chronic labor shortage.

America’s labor shortage has persisted since the Spanish conquistadores pretty much wiped out the indigenous people, leaving the Spaniards with hardly anybody to do the manual labor on which their economy depended. Waves of forced and unforced migration have never caught up. We still have a chronic labor shortage.

Immigrants generally don’t come to take jobs from “real” Americans. They come here because there are, by and large, more available jobs than workers.

Currently, natural reductions in birth rates among better-educated, better-housed, and generally wealthier Americans have left the United States (like most developed countries) with a working-age population that is declining while the older, retired population expands. That means we haven’t got enough young squirts to support us old farts in retirement.

The only viable solution is to import more young squirts. That means welcoming working-age immigrants.

End of story.

Do You Really Want a Robotic Car?

Robot Driver
Sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car.” Mopic/Shutterstock

15 August 2018 – Many times in my blogging career I’ve gone on a rant about the three Ds of robotics. These are “dull, dirty, and dangerous.” They relate to the question, which is not asked often enough, “Do you want to build an automated system to do that task?”

The reason the question is not asked enough is that it needs to be asked whenever anyone designs a system (whether manual or automated) to do anything. The fact that people routinely set up systems without first asking it means, by definition, that it’s not asked enough.

When asking the question, getting a hit on any one of the three Ds tells you to at least think about automating the task. Getting a hit on two of them should make you think that your task is very likely to be ripe for automation. If you hit on all three, it’s a slam dunk!

When we look into developing automated vehicles (AVs), we get hits on “dull” and “dangerous.”
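In code form (a tongue-in-cheek formalization of my own, not any industry standard), the screening rule looks like this:

```python
# The three-Ds screening rule as code (my own whimsical formalization).

def automation_advice(dull: bool, dirty: bool, dangerous: bool) -> str:
    hits = sum([dull, dirty, dangerous])
    return {
        0: "probably leave it to the humans",
        1: "at least think about automating it",
        2: "very likely ripe for automation",
        3: "slam dunk!",
    }[hits]

# Driving: dull and dangerous, but not particularly dirty.
print(automation_advice(dull=True, dirty=False, dangerous=True))
# -> "very likely ripe for automation"
```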

Driving can be excruciatingly dull, especially if you’re going any significant distance. That’s why people fall asleep at the wheel. I daresay everyone has fallen asleep at the wheel at least once, although we almost always wake up before hitting the bridge abutment. It’s also why so many people drive around with cellphones pressed to their ears. The temptation to multitask while driving is almost irresistible.

Driving is also brutally dangerous. Tens of thousands of people die every year in car crashes. It’s pretty safe to say that nearly all those fatalities involve cars driven by humans. The number of people who have been killed in accidents involving driverless cars you can (as of this writing) count on one hand.

I’m not prepared to go into statistics comparing safety of automated vehicles vs. manually driven ones. Suffice it to say that eventually we can expect AVs to be much safer than manually driven vehicles. We’ll keep developing the technology until they are. It’s not a matter of if, but when.

This is the analysis most observers (if they analyze it at all) come up with to prove that vehicle driving should be automated.

Yet, opinions that AVs are acceptable, let alone inevitable, are far from universal. In a survey of 3,000 people in the U.S. and Canada, Ipsos Strategy 3 found that sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car,” and a whopping 39% of Americans (and 30% of Canadians) would rarely or never let driverless technology do the parking!

Why would so many people be unwilling to give up their driving privileges? I submit that it has to do with a parallel consideration that is every bit as important as the three Ds when deciding whether to automate a task:

Don’t Automate Something Humans Like to Do!

Predatory animals, especially humans, like to drive. It’s fun. Just ask any dog who has a chance to go for a ride in a car! Almost universally they’ll jump into the front seat as soon as you open the door.

In fact, if you leave them (dogs) in the car unattended for a few minutes, they’ll be behind the wheel when you come back!

Humans are the same way. Leave ’em unattended in a car for any length of time, and they’ll think of some excuse to get behind the wheel.

The Ipsos survey found that some 61% of both Americans and Canadians identify themselves as “car people.” When asked “What would have to change about transportation in your area for you to consider not owning a car at all, or owning fewer cars?” 39% of Americans and 38% of Canadians responded “There is nothing that would make me consider owning fewer cars!”

That’s pretty definitive!

Their excuse for getting behind the wheel is largely an economic one: Seventy-eight percent of Americans claim they “definitely need to have a vehicle to get to work.” In more urbanized Canada (you did know that Canadians cluster more into cities, didn’t you?) that drops to 53%.

Whether those folks’ claim that they “have” to have a car to get to work is based on historical precedent, urban planning, wishful thinking, or flat-out just what they want to believe, it’s a good, cogent reason why folks, especially Americans, hang onto their steering wheels for dear life.

The moral of this story is that driving is something humans like to do, and getting them to give it up will be a serious uphill battle for anyone wanting to promote driverless cars.

Yet, development of AV technology is going full steam ahead.

Is that just another example of Dr. John Bridges’ version of Solomon’s proverb, “A fool and his money are soon parted”?

Possibly, but I think not.

Certainly, the idea of spending tons of money to have bragging rights for the latest technology, and to take selfies showing you reading a newspaper while your car drives itself through traffic has some appeal. I submit, however, that the appeal is short lived.

For one thing, reading in a moving vehicle is the fastest route I know of to motion sickness. It’s right up there with cueing up the latest Disney cartoon feature for your kids on the overhead DVD player in your SUV, then cleaning up their vomit.

I, for one, don’t want to go there!

Sounds like another example of “More money than brains.”

There are, however, a wide range of applications where driving a vehicle turns out to be no fun at all. For example, the first use of drone aircraft was as targets for anti-aircraft gunnery practice. They just couldn’t find enough pilots who wanted to be sitting ducks to be blown out of the sky! Go figure.

Most commercial driving jobs could also stand to be automated. For example, almost nobody actually steers ships at sea anymore. They generally stand around watching an autopilot follow a pre-programmed course. Why? As a veteran boat pilot, I can tell you that the captain has a lot more fun than the helmsman. Piloting a ship from, say, Calcutta to San Francisco has got to be mind-numbingly dull. There’s nothing going on out there on the ocean.

Boat passengers generally spend most of their time staring into the wake, but the helmsman doesn’t get to look at the wake. He (or she) has to spend the time scanning a featureless horizontal line separating a light-blue dome (the sky) from a dark-blue plane (the sea) in the vain hope that something interesting will pop up and relieve the tedium.

Hence the autopilot.

Flying a commercial airliner is similar. It has been described (as have so many other tasks) as “hours of boredom punctuated by moments of sheer terror!” While such activity is very Zen (I’m convinced that humans’ ability to meditate was developed by cave guys having to sit for hours, or even days, watching game trails for their next meal to wander along), it’s not top-of-the-line fun.

So, sometimes driving is fun, and sometimes it’s not. We need AV technology to cover those times when it’s not.

The Future Role of AI in Fact Checking

Reality Check
Advanced computing may someday help sort fact from fiction. Gustavo Frazao/Shutterstock

8 August 2018 – This guest post is contributed under the auspices of Trive, a global, decentralized truth-discovery engine. Trive seeks to provide news aggregators and outlets a superior means of fact checking and news-story verification.

by Barry Cousins, Guest Blogger, Info-Tech Research Group

In a recent project, we looked at blockchain startup Trive and their pursuit of a fact-checking truth database. It seems obvious and likely that competition will spring up. After all, who wouldn’t want to guarantee the preservation of facts? Or, perhaps it’s more that lots of people would like to control what is perceived as truth.

With the recent coming out party of IBM’s Debater, this next step in our journey brings Artificial Intelligence into the conversation … quite literally.

As an analyst, I’d like to have a universal fact checker. Something like the carbon monoxide detectors on each level of my home. Something that would sound an alarm when there’s danger of intellectual asphyxiation from choking on the baloney put forward by certain sales people, news organizations, governments, and educators, for example.

For most of my life, we would simply have turned to academic literature for credible truth. There is now enough legitimate doubt to make us seek out a new model or, at a minimum, to augment that academic model.

I don’t want to be misunderstood: I’m not suggesting that all news and education is phony baloney. And, I’m not suggesting that the people speaking untruths are always doing so intentionally.

The fact is, we don’t have anything close to a recognizable database of facts on which to base such analysis. For most of us, this was supposed to be the school system, but sadly, that has increasingly become politicized.

But even if we had the universal truth database, could we actually use it? For instance, how would we tap into the right facts at the right time? The relevant facts?

If I’m looking into the sinking of the Titanic, is it relevant to study the facts behind the ship’s manifest? It might be interesting, but would it prove to be relevant? Does it have anything to do with the iceberg? Would that focus on the manifest impede my path to insight on the sinking?

It would be great to have Artificial Intelligence advising me on these matters. I’d make the ultimate decision, but it would be awesome to have something like the Star Trek computer sifting through the sea of facts for that which is relevant.

Is AI ready? IBM recently showed that it’s certainly coming along.

Is the sea of facts ready? That’s a lot less certain.

Debater holds its own

In June 2018, IBM unveiled the latest in Artificial Intelligence with Project Debater in a small event with two debates: “we should subsidize space exploration” and “we should increase the use of telemedicine.” The opponents were credentialed experts, and Debater was arguing from a position established by “reading” a large volume of academic papers.

The result? From what we can tell, the humans were more persuasive while the computer was more thorough. Hardly surprising, perhaps. I’d like to watch the full debates but haven’t located them yet.

Debater is intended to help humans enhance their ability to persuade. According to IBM researcher Ranit Aharonov, “We are actually trying to show that a computer system can add to our conversation or decision making by bringing facts and doing a different kind of argumentation.”

So this is an example of AI. I’ve been trying to distinguish between automation and AI, machine learning, deep learning, etc. I don’t need to nail that down today, but I’m pretty sure that my definition of AI includes genuine cognition: the ability to identify facts, comb out the opinions and misdirection, incorporate the right amount of intention bias, and form decisions and opinions with confidence while remaining watchful for one’s own errors. I’ll set aside any obligation to admit and react to one’s own errors, choosing to assume that intelligence includes the interest in, and awareness of, one’s ability to err.

Mark Klein, Principal Research Scientist at M.I.T., helped with that distinction between computing and AI. “There needs to be some additional ability to observe and modify the process by which you make decisions. Some call that consciousness, the ability to observe your own thinking process.”

Project Debater represents an incredible leap forward in AI. It was given access to a large volume of academic publications, and it developed its debating chops through machine learning. The capability of the computer in those debates resembled the results that humans would get from reading all those papers, assuming you can conceive of a way that a human could consume and retain that much knowledge.

Beyond spinning away on publications, are computers ready to interact intelligently?

Artificial? Yes. But, Intelligent?

According to Dr. Klein, we’re still far away from that outcome. “Computers still seem to be very rudimentary in terms of being able to ‘understand’ what people say. They (people) don’t follow grammatical rules very rigorously. They leave a lot of stuff out and rely on shared context. They’re ambiguous or they make mistakes that other people can figure out. There’s a whole list of things like irony that are completely flummoxing computers now.”

Dr. Klein’s PhD in Artificial Intelligence from the University of Illinois leaves him particularly well-positioned for this area of study. He’s primarily focused on using computers to enable better knowledge sharing and decision making among groups of humans. That work runs headlong into the potentially debilitating question of what constitutes knowledge: what separates fact from opinion from conjecture.

His field of study focuses on the intersection of AI, social computing, and data science. A central theme involves responsibly working together in a structured collective intelligence life cycle: Collective Sensemaking, Collective Innovation, Collective Decision Making, and Collective Action.

One of the key outcomes of Klein’s research is “The Deliberatorium”, a collaboration engine that adds structure to mass participation via social media. The system ensures that contributors create a self-organized, non-redundant summary of the full breadth of the crowd’s insights and ideas. This model avoids the risks of ambiguity and misunderstanding that impede the success of AI interacting with humans.

Klein provided a deeper explanation of the massive gap between AI and genuine intellectual interaction. “It’s a much bigger problem than being able to parse the words, make a syntax tree, and use the standard Natural Language Processing approaches.”

“Natural Language Processing breaks up the problem into several layers. One of them is syntax processing, which is to figure out the nouns and the verbs and figure out how they’re related to each other. The second level is semantics, which is having a model of what the words mean. That ‘eat’ means ‘ingesting some nutritious substance in order to get energy to live’. For syntax, we’re doing OK. For semantics, we’re doing kind of OK. But the part where it seems like Natural Language Processing still has light years to go is in the area of what they call ‘pragmatics’, which is understanding the meaning of something that’s said by taking into account the cultural and personal contexts of the people who are communicating. That’s a huge topic. Imagine that you’re talking to a North Korean. Even if you had a good translator there would be lots of possibility of huge misunderstandings because your contexts would be so different, the way you try to get across things, especially if you’re trying to be polite, it’s just going to fly right over each other’s head.”
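Klein’s layer cake is easy to poke at with off-the-shelf tools. Here’s a quick sketch using the open-source spaCy library (assuming you have it and its small English model installed); the syntax layer comes out handily, and the gap he describes is visible in what the output doesn’t contain:

```python
# Syntax-layer parsing with spaCy (pip install spacy;
# python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("John just started World War 3.")

for token in doc:
    # Part of speech and grammatical relation to the head word.
    print(f"{token.text:8} {token.pos_:6} {token.dep_:10} -> {token.head.text}")

# Nothing in this parse says whether the sentence is literal, sarcastic,
# or a joke. Syntax and (some) semantics come out; pragmatics doesn't.
```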

To make matters much worse, our communications are filled with cases where we ought not be taken quite literally. Sarcasm, irony, idioms, etc. make it difficult enough for humans to understand, given the incredible reliance on context. I could just imagine the computer trying to validate something that starts with, “John just started World War 3…”, or “Bonnie has an advanced degree in…”, or “That’ll help…”

A few weeks ago, I wrote that I’d won $60 million in the lottery. I was being sarcastic, and (if you ask me) humorous in talking about how people decide what’s true. Would that research interview be labeled as fake news? Technically, I suppose it was. Now that would be ironic.

Klein summed it up with, “That’s the kind of stuff that computers are really terrible at and it seems like that would be incredibly important if you’re trying to do something as deep and fraught as fact checking.”

Centralized vs. Decentralized Fact Model

It’s self-evident that we have to be judicious in our management of the knowledge base behind an AI fact-checking model, and it’s reasonable to assume that AI will retain and project any subjective bias embedded in the underlying body of ‘facts’.

We’re facing competing models for the future of truth, based on the question of centralization. Do you trust yourself to deduce the best answer to challenging questions, or do you prefer to simply trust the authoritative position? Well, consider that there are centralized models with obvious bias behind most of our sources. The tech giants are all filtering our news and likely having more impact than powerful media editors. Are they unbiased? The government is dictating most of the educational curriculum in our model. Are they unbiased?

That centralized truth model should be raising alarm bells for anyone paying attention. Instead, consider a truly decentralized model where no corporate or government interest is influencing the ultimate decision on what’s true. And consider that the truth is potentially unstable. Establishing the initial position on facts is one thing, but the ability to change that view in the face of more information is likely the bigger benefit.

A decentralized fact model without commercial or political interest would openly seek out corrections. It would critically evaluate new knowledge and objectively re-frame the previous position whenever warranted. It would communicate those changes without concern for timing, or for the social or economic impact. It quite simply wouldn’t consider or care whether or not you liked the truth.

The model proposed by Trive appears to meet those objectivity criteria and is getting noticed as more people tire of left-vs-right and corporatocracy preservation.

IBM Debater seems like it would be able to engage in critical thinking that would shift influence towards a decentralized model. Hopefully, Debater would view the denial of truth as subjective and illogical. With any luck, the computer would confront that conduct directly.

IBM’s AI machine already can examine tactics and style. In a recent debate, it coldly scolded the opponent with: “You are speaking at the extremely fast rate of 218 words per minute. There is no need to hurry.”

Debater can obviously play the debate game while managing massive amounts of information and determining relevance. As it evolves, it will need to rely on the veracity of that information.

Trive and Debater seem to be a complement to each other, so far.

Author Bio

Barry Cousins is a Research Lead at Info-Tech Research Group, specializing in Project Portfolio Management, Help/Service Desk, and Telephony/Unified Communications. He brings an extensive background in technology, IT management, and business leadership.

About Info-Tech Research Group

Info-Tech Research Group is a fast-growing IT research and advisory firm. Founded in 1997, Info-Tech produces unbiased and highly relevant IT research solutions. Since 2010, McLean & Company, a division of Info-Tech, has provided the same unmatched expertise to HR professionals worldwide.