Constructing Ideas

Constructivist pix
Constructivist illustration with rooster’s head. By Leonid Zarubin/Shutterstock

3 July 2019 – Long-time readers of my columns will know that one of my favorite philosophical questions is: “How do we know what we think we know?” Along the way, my thoughts have gravitated toward constructivism, a theory in epistemology, the branch of philosophy concerned with knowledge.

Jean Piaget has been credited with initiating the constructivist theory of learning through his studies of childhood development. His methods were to ask probing questions of his children and others, in an attempt to understand how they viewed the world. He also devised and administered reading tests to schoolchildren and became interested in the types of errors they made, leading him to explore the reasoning process in these young children.

From his studies, he worked out a model of childhood development that mapped several stages of world-view paradigms children seemed to use as they matured. This led him to postulate that children actively participate in constructing their own ideas – their knowledge base – based on experience and prior knowledge. Hence, the term “constructivism.”

Imagine a house that represents everything the child “knows.” Mentally, they live in that house all the time, view the world in relation to it, and make decisions based on what’s there.

As they experience everything, including the experience of having someone tell them something verbally or through written words, they actively remodel the place. The operant concept here is that they constantly do the remodeling themselves by trying to fit new information into the structure that’s already there.

My own journey toward constructivism was based on introspective phenomenological studies. That is, I paid attention to how I gained new knowledge and compared my experiences with experiences reported by others studying the same material.

A paradigm example is the study of quantum mechanics. This subject is difficult for students familiar with classical physics because the principles and the phenomena on which they are based seem counterintuitive. In particular, the range of time and distance scales on which quantum principles act is not directly accessible to humans. Quantum mechanics works at submicroscopic distances and on nanosecond time scales.

Successful students of quantum mechanics start by studying human-scale phenomena that betray the presence of quantum principles. For example, the old “planetary model” of atoms as miniature solar systems in which electrons revolve in stable orbits around the atomic nucleus like planets around the Sun is a physical impossibility. Students realize this after studying Maxwellian Electrodynamics.

In 1864, James Clerk Maxwell succeeded in summarizing everything physicists of the time knew about electricity and magnetism in four concise (though definitely not simple) equations. Taken together, they not only described how light traveled (even predicting its precise speed) but also implied the feasibility of radio. Maxwell’s Equations were enormously successful in guiding the development of electrical technology in the late nineteenth century.
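For reference, here is the standard modern differential form of those four equations in SI units (this compact rendering is due largely to Oliver Heaviside, not Maxwell’s original 1864 notation):

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \rho / \varepsilon_0 && \text{(Gauss's law)}\\
\nabla \cdot \mathbf{B} &= 0 && \text{(no magnetic monopoles)}\\
\nabla \times \mathbf{E} &= -\,\partial \mathbf{B}/\partial t && \text{(Faraday's law)}\\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0\, \partial \mathbf{E}/\partial t && \text{(Ampère–Maxwell law)}
\end{aligned}
```

In empty space the last two combine into a wave equation whose propagation speed is $c = 1/\sqrt{\mu_0 \varepsilon_0} \approx 3 \times 10^8$ m/s, which matches the measured speed of light.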

The problem for physicists studying atomic-scale phenomena, however, was that Maxwell’s Equations implied that electrons whizzing around nuclei would rapidly convert all their energy of motion into light, which would radiate away. With no energy of motion left to keep electrons orbiting, the atoms would quickly collapse – then, no more atoms! The Universe as we know it would rapidly cease to exist.

When I say rapidly, I mean on the time scale of trillionths of a second!
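Here’s the back-of-the-envelope estimate behind that claim (my own addition, using the standard classical-radiation result rather than anything from the original column): an electron spiraling in from the hydrogen Bohr radius $a_0$ under classical electrodynamics crashes into the nucleus in roughly

```latex
t_{\text{collapse}} \approx \frac{a_0^{\,3}}{4\, r_e^{\,2}\, c}
\approx \frac{(5.3 \times 10^{-11}\,\text{m})^3}{4\,(2.8 \times 10^{-15}\,\text{m})^2\,(3 \times 10^{8}\,\text{m/s})}
\approx 1.6 \times 10^{-11}\,\text{s},
```

where $r_e$ is the classical electron radius. Tens of trillionths of a second, and then no more hydrogen.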

Not good for the Universe! Luckily for the Universe, what this really means is that there’s something wrong with classical-electrodynamic theory (i.e., Maxwell’s Equations).

The student finds out about dozens of such paradoxes that show that classical physics is just flat out wrong! The student is then ready to entertain some outlandish ideas that form the core of quantum theory. The student proceeds to piece these ideas together into their own mental version of quantum mechanics.

Every physics student I’ve discussed this with has had the same experience learning this quantum-electrodynamical theory (QED). Even more telling, they all report initially learning the ideas by rote without really understanding them, then applying them for considerable time (months or years) before piecing them together into a mental pattern that eventually feels intuitive. At that point, when presented with some phenomenon (such as the sky being blue) they immediately seize on a QED-based explanation as the most obvious. Even doubting QED has become absurd for them!

To a constructivist, this process for learning quantum mechanics makes perfect sense. The student is presented with numerous paradoxes, which cause cognitive dissonance. This state motivates the student to seek alternative concepts and fit them into their world view. In a sense, they construct an extension onto the framework of their world view. This will likely require them to make some modifications to the original structure to accommodate the new knowledge.

This method of developing new knowledge dovetails quite nicely with the scientific method that’s been under development since Aristotle and Plato started toying around with it in the fourth century BCE. The new development is that Piaget showed that it is the normal way humans develop new knowledge. Even children can’t fully comprehend a new idea until they fit it into a modified version of their knowledge base.

This model also explains why humans’ normal initial reaction to novel ideas is to forcefully reject them. Accepting new ideas requires them to do a lot of work on their mental scaffolding. It takes a powerful mental event causing severe cognitive dissonance to motivate them to remodel a mental construction they’ve been piecing together for years.

It also explains why younger humans are so much quicker to take up new ideas. Their mental frameworks are still small, and rebuilding them to fit in new concepts is relatively easy. The reward for building out their mental framework is great. They are also more used to tinkering with their mental models than older humans, who have mental frameworks that have served them well for decades without modification.

Of course, once they reach the point of intolerable cognitive dissonance, older humans have more experience to draw on to do the remodeling job. They will be even quicker than youngsters to make whatever adjustments are necessary.

Older humans who have a lifelong habit of challenging themselves with new ideas have the easiest time adapting to change. They are more used to realigning their thinking to incorporate new concepts and have more practice in constructing knowledge frameworks.

The Fluidity of Money

Money exchange
Money is created in the exchange of credit for debt. Image by bluedog studio/shutterstock

19 June 2016 – I’m supposed to have some passing understanding of economics and accounting. I have, after all, a Master’s degree in Business Administration, for which I had to study Macroeconomics and Microeconomics, as well as Cost and Financial Accounting.

Howsomever, while trying to make sense of what folks call “Modern Monetary Theory” it dawned on me that, not only didn’t I have a clear concept of what money actually is, but the people babbling on about money and monetary policy aren’t any clearer on the concept than I am. A review of the differences between neoclassical economics based on Keynesian ideas and so-called Modern Monetary Theory reveals an incomplete understanding of money.

We all think we know what money is, and spout long winded and erudite-sounding loads of gobbledygook that only serve to prove, beyond a shadow of a doubt, that none of us have a clue what the stuff actually is!

I find that situation intolerable, and have set out to change it by trying real hard to come up with a theory that makes sense of all the stupid things we do with and say about money.

Now, I’m not a financial wizard, or a prize-winning economist, or even a whiz-bang developer of computer models of the global economy. I’m just some schmuck with some basic math ability, a little time on my hands, and the desire to make sense of something that it seems the “experts” haven’t wrapped their brains around, yet. So, I’ve thought about this problem a bit, and have a hint of an answer that I want to run up the flagpole to see if anyone salutes.

If this essay triggers something in the brain of somebody smart that sets him, her or it thinking in a new direction about money, I’ll count it time well spent.

So, here goes … .

In science, we try to make sense of anything we don’t fully comprehend by developing some kind of conceptual model that helps us predict what will happen in any given situation. The fact that we currently haven’t a clue what will actually happen when, for example, the Federal Government runs up huge deficits for a very long time, indicates that we’re very far from knowing what we’re talking about with regard to money.

I generally try to model things poorly understood through analogy with things that are well understood. I’ve developed a two-fluid model of money by analogy to certain ideas in classical physics. It seems to work decently for the situations I’ve applied it to.

Analogy with Momentum

Specifically, the model draws an analogy with Newtonian momentum, which is a conserved vector quantity – meaning that the total momentum in a closed system cannot be changed, and that the quantity involves both a magnitude and a spatial direction.

For our analogy to be useful, we need to also use the idea of generalized coordinates, which allow the idea of “direction” to extend beyond strictly cartesian spatial coordinates (motion in straight lines). For example, a bicycle drive chain wraps around two sprockets and has flexible spans linking them, so its motion certainly does not follow along a single cartesian coordinate, yet there is a well-defined path along which any two points on the chain follow each other, maintaining their separation (measured along the path). That allows us to measure motion along the path by a generalized coordinate.

In Newtonian mechanics, momentum is exchanged between objects, which are thought of as components of a system, through the action of forces. Mathematically, the magnitude and direction of the force equals the rate of flow of momentum between the objects.
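In symbols (using the standard Newtonian definitions, nothing specific to this essay):

```latex
\vec{p} = m\vec{v}, \qquad \vec{F} = \frac{d\vec{p}}{dt}
```

That is, a force between two objects is literally the rate at which momentum flows from one to the other.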

Newton’s third law, which states that every force is paired with an equal and opposite reaction force, is just an expression of conservation of momentum in that every force (representing a transfer of momentum from one object to another) is paired with an equal and opposite transfer of momentum from the second object to the first. This takes care of maintaining conservation of momentum.

Take, for example, a person stepping off a boat onto a dock. At first, everything is (as seen from the perspective of the dock) stationary. The momentum of an object is defined as the object’s mass (amount of material) times its velocity (a vector combining speed and direction). Since both the person and the boat are stationary (meaning they both have a velocity of zero), the total momentum of the system of person + boat is zero.

Then, the person applies a force to the boat in a direction away from the dock. The Newton’s-third-law reaction force is a push by the boat on the person toward the dock. That’s how the person actually gets to the dock. The boat pushes him/her toward it!

The boat moves away from the dock. The person moves toward the dock. So, the directions of the two momenta are opposite. The speeds of the person and boat automatically (or maybe you’d like to say “magically”) adjust to keep the total momentum of the system equal to zero at all times. That is, at every instant the momentum of the person is equal and opposite to the momentum of the boat.
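To make the bookkeeping concrete, here’s a minimal Python sketch (the masses and the push are made-up numbers, purely for illustration) showing that the person’s and boat’s momenta stay equal and opposite after the push:

```python
# Momentum bookkeeping for the person-stepping-off-a-boat example.
# Masses and the impulse are illustrative values, not measurements.

person_mass = 70.0   # kg
boat_mass = 350.0    # kg

# Both start at rest relative to the dock, so total momentum is zero.
person_v = 0.0       # m/s, positive = toward the dock
boat_v = 0.0

# The person pushes on the boat; by Newton's third law the boat pushes back.
# An impulse J transfers momentum +J to the person and -J to the boat.
impulse = 105.0      # kg*m/s, toward the dock for the person

person_v += impulse / person_mass    # person moves toward the dock
boat_v += -impulse / boat_mass       # boat moves away from the dock

total_momentum = person_mass * person_v + boat_mass * boat_v
print(f"person velocity: {person_v:+.2f} m/s")            # +1.50 m/s
print(f"boat velocity:   {boat_v:+.2f} m/s")              # -0.30 m/s
print(f"total momentum:  {total_momentum:+.2f} kg*m/s")   # +0.00
```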

Money

In the theory of money that I’m proposing, money itself is analogous to momentum. Altogether, it’s conserved. That is, it cannot be created or destroyed. There’s always the same amount of “money” – zero!

What we’re used to thinking of as “money” is only half the story, which is why there’s so much confusion over it. Borrowing from double-entry bookkeeping, we’ll call what we usually think of as money “credit.” Everyone who understands double-entry bookkeeping knows that for every credit, there is an equal (and opposite) entry called a debit. For our purposes, we’ll shorten that word to something we’re all familiar with: debt.

Debt is the other half of the story, the half we tend to ignore, and that neglect accounts for all the confusion.

We’re going to visualize credit and debt as fluids because they’re measured as continuous, as opposed to quantized, variables. That means that they’re representable by real numbers as opposed to integers. So, nobody has a problem with dividing seven dollars ($7) into two portions each containing three and a half dollars ($3.50). Current usage is to round everything to the nearest cent, or hundredth of a dollar, but that’s for convenience and not wanting to be bothered with truly small change.

At one time, we had half-penny ($0.005) coins, but we don’t do that anymore.

Okay, so “money” actually represents credit and debt in equal amounts, which consequently always add up to zero. Whenever money is created, it’s created as equal amounts of credit and debt.

Money creation always requires activity by two cooperating entities: a creditor and a debtor. Credit is created and transferred from the creditor to the debtor. An equal quantity of debt is created and flows from the debtor to the creditor. “Money” consists of these paired fluids, which flow through the economy via paired interactions between creditors and debtors. Money is created by an interaction that creates equal amounts of credit and debt, and the words “creditor” and “debtor” simply indicate the direction of flow.

Once created, the money flows around in the economy through paired transactions in which credit flows one way and debt the other.
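Here’s a toy Python sketch of this two-fluid bookkeeping (my own illustration, not a real accounting system): it shows only the creation step and a simple credit transfer, and its point is the invariant that total credit and total debt always match.

```python
# Toy sketch of the two-fluid money picture described above (illustrative only).
# Credit and debt are always created in equal amounts, so however money moves
# around afterward, total credit minus total debt stays exactly zero.

from collections import defaultdict

credit = defaultdict(float)   # credit held by each entity
debt = defaultdict(float)     # debt held by each entity

def create_money(creditor, debtor, amount):
    """Creditor extends credit to the debtor; an equal debt flows back."""
    credit[debtor] += amount
    debt[creditor] += amount

def pay(payer, payee, amount):
    """Credit changes hands; the system totals don't change at all."""
    credit[payer] -= amount
    credit[payee] += amount

def net_money():
    """Total credit minus total debt -- always zero in this model."""
    return sum(credit.values()) - sum(debt.values())

create_money("lender", "home_buyer", 250_000)   # a mortgage is originated
pay("home_buyer", "contractor", 250_000)        # the house is purchased
print(net_money())                              # 0.0 -- no net "money" exists
```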

Wealth

This visualization allows us to separate the concepts of “money” and “wealth.” Wealth refers to tangible and intangible assets, such as commodities and intellectual property. Wealth is very definitely not conserved. When a contractor builds a house, he or she creates wealth from, essentially, nothing. The contractor then sells the house to the new owner in a binary transaction that transfers credit to the contractor and debt to the owner.

We’ll leave out discussion of what happens to the wealth represented by the house, since this essay is about money, and money is not wealth.

The owner previously got the credit through a transaction with a lender in which money was created as a transfer of credit to the owner and debt to the lender. The lender can then, for example, package the debt up into something called a “collateralized debt obligation,” and exchange it with somebody else for an equivalent amount of credit. The lender then transfers that credit to another prospective home owner in exchange for an equivalent amount of debt, and the merry-go-round keeps turning.

Unlike wealth, which was created from nothing, the total of credit minus debt in the system remains zero at all times.

It is interesting to note that wealth appears through the creation of a pattern in the physical universe. For example, bricks used by a contractor to build a house start out as a less-organized pile. The contractor creates wealth by arranging those bricks in a house-like pattern. The owner has no use for the disorganized pile of bricks, but has a use for them when arranged as a house. Similarly, the contractor had no use for the raw clay that went into the bricks until the brick manufacturer rearranged it into the pattern we call “bricks.”

Historically, folks’ fascination with the credit side of money has led them to confuse “money” with “wealth.” They’re entirely different things. One is a medium of exchange related to entries in a bookkeeper’s ledger, the other is a real thing related to patterns in the physical world.

I hope this essay manages to help make sense of the money nonsense!

Stick to Your Knitting

Man knitting
Man in suit sticking to his knitting. Photo by fokusgood / Shutterstock

6 June 2019 – Once upon a time in an MBA school far, far away, I took a Marketing 101 class. The instructor, whose name I can no longer be sure of, had a number of sayings that proved insightful, bordering on the oracular. (That means they were generally really good advice.) One that he elevated to the level of a mantra was: “Stick to the knitting.”

Really successful companies of all sizes hew to this advice. There have been periods of history where fast-growing companies run by CEOs with spectacularly big egos have equally spectacularly honored this mantra in the breach. With more hubris than brains, they’ve managed to over-invest themselves out of business.

Today’s tech industry – especially the FAANG companies (Facebook, Amazon, Apple, Netflix and Google) – is particularly prone to this mistake. Here I hope to concentrate on what the mantra means, and what goes wrong when you ignore it.

Okay, “stick to your knitting” is based on the obvious assumption that every company has some core expertise. Amazon, for example, has expertise in building and operating an online catalog store. Facebook has expertise in running an online forum. Netflix operates a bang-up streaming service. Ford builds trucks. Lockheed Martin makes state-of-the-art military airplanes.

General Electric, which has core expertise in manufacturing industrial equipment, got into real trouble when it got the bright idea of starting a finance company to extend loans to its customers for purchases of its equipment.

Conglomeration

There is a business model, called the conglomerate, that is based on explicitly ignoring the “knitting” mantra. It was especially popular in the 1960s. Corporate managers imagined that conglomerates could bring into play synergies that would make them more effective than single-business companies.

For a while there, this model seemed to be working. However, when business conditions began to change (specifically interest rates began to rise from an abnormally low level to more normal rates) their supposed advantages began melting like a birthday cake left outside in a rainstorm. These huge conglomerates began hemorrhaging money until vultures swooped in to pick them apart. Conglomerates are now a thing of the past.

There are companies, such as Berkshire Hathaway, whose core expertise is in evaluating and investing in other companies. Some of them are very successful, but that’s because they stick to their core expertise.

Berkshire Hathaway was originally a textile company that investor Warren Buffett took over when the textile industry was busy going overseas. As time went on, textiles became less important and, by 1985 this core part of the company was shut down. It had become a holding company for Buffett’s investments in other companies. It turns out that Buffett’s core competence is in handicapping companies for investment potential. That’s his knitting!

The difference between a holding company and a conglomerate is (and this is specifically my interpretation) a matter of integration. In a conglomerate, the different businesses are more-or-less integrated into the parent corporation. In a holding company, they are not.

Berkshire Hathaway is known for its insurance business, but if you want to buy, say, auto insurance from Berkshire Hathaway, you have to go to its Government Employees Insurance Company (GEICO) subsidiary. GEICO is a separate company that happens to be wholly owned by Berkshire Hathaway. That is, it has its own corporate headquarters and all the staff, fixtures and other resources needed to operate as an independent insurance company. It just happens to be owned, lock, stock and intellectual property, by another corporate entity: Berkshire Hathaway.

GEICO’s core expertise is insurance. Berkshire Hathaway’s core expertise is finding good companies to invest in. Some are partially owned (e.g., 5.4% of Apple) some are wholly owned (e.g., Acme Brick).

Despite Berkshire Hathaway’s holding positions in both Apple and Acme Brick, if you ask Warren Buffett whether Berkshire Hathaway is a computer company or a brick company, he’d undoubtedly say “no.” Berkshire Hathaway is a diversified holding company.

Its business is owning other businesses.

To paraphrase James Coburn’s line from Stanley Donen’s 1963 film Charade: “[Mrs. Buffett] didn’t raise no stupid children!”

Why Giant Corporations?

All this giant corporation stuff stems from a dynamic I also learned about in MBA school: a company grows or it dies. I ran across this dynamic during a financial modeling class where we used computers to predict results of corporate decisions in lifelike conditions. Basically, what happens is that unless the company strives to its utmost to maintain growth, it starts to shrink and then all is lost. Feedback effects take over and it withers and dies.

Observations since then have convinced me this is some kind of natural law. It shows up in all kinds of natural systems. I used to think I understood why, but I’m not so sure anymore. It may have something to do with chaos, and we live in a chaotic universe. I resolve to study this in more detail – later.

But, anyway … .

Companies that embrace this mantra (You grow or you die.) grow until they reach some kind of external limit, then they stop growing and – in some fashion or other – die.

Sometimes (and paradigm examples abound) external limits don’t kick in before some company becomes very big, indeed. Standard Oil Company may be the poster child for this effect. Basically, the company grew to monopoly status and, in 1911 the U.S. Federal Government stepped in and, using the 1890 Sherman Anti-Trust Act, forced its breakup into 33 smaller oil companies, many of which still exist today as some of the world’s major oil companies (e.g., Mobil, Amoco, and Chevron). At the time of its breakup, Standard Oil had a market capitalization of just under $11B and was the third most valuable company in the U.S. Compare that to the U.S. GDP of roughly $34B at the time.

The problem with companies that big is that they generate tons of free cash. What to do with it?

There are three possibilities:

  1. You can reinvest it in your company;

  2. You can return it to your shareholders; or

  3. You can give it away.

Reinvesting free cash in your company is usually the first choice. I say it is the first choice because it is used at the earliest period of the company’s history – the period when growth is necessarily the only goal.

If done properly, reinvestment can make your company grow bigger faster. You can reinvest by out-marketing your competition (by, say, making better advertisements) and gobbling up market share. You can also reinvest to make your company’s operations more effective or efficient. To grow, you also need to invest in adding production facilities.

At a later stage, your company is already growing fast and you’ve got state-of-the-art facilities, and you dominate your market. It’s time to do what your investors gave you their money for in the first place: return profits to them in the form of dividends. I kinda like that. It’s what the game’s all about, anyway.

Finally, most leaders of large companies recognize that having a lot of free cash lying around is an opportunity to do some good without (obviously) expecting a payback. I qualify this with the word “obviously” because on some level altruism does provide a return in some form.

Generally, companies engage in altruism (currently more often called “philanthropy”) to enhance their perception by the public. That’s useful when lawsuits rear their ugly heads or somebody in the organization screws up badly enough to invite public censure. Companies can enhance their reputations by supporting industry activities that do not directly enhance their profits.

So-called “growth companies,” however, get stuck in that early growth phase, and never transition to paying dividends. In the early days of the personal-computer revolution, tech companies prided themselves on being “growth stocks.” That is, investors gained vast wealth on paper as the companies’ stock prices went up, but couldn’t realize those gains (capital gains) unless they sold the stock. Or, as my father once did, by using the stock as collateral to borrow money.

In the end, wise investors eventually want their money back in the form of cash from dividends. For example, in the early 2000s, Microsoft and other technology companies were forced by their shareholders to start paying dividends for the first time.

What can go wrong

So, after all’s said and done, why’s my marketing professor’s mantra wise corporate governance?

To make money, especially the scads of money that corporations need to become really successful, you’ve gotta do something right. In fact, you gotta do something better than the other guys. When you know how to do something better than the other guys, that’s called expertise!

Companies, like people, have limitations. To imagine you don’t have limitations is hubris. To put hubris in perspective, recall that the ancients famously made it Lucifer’s cardinal sin. In fact, it was his only sin!

Folks who tell you that you can do anything are flat out conning your socks off.

If you’re lucky you can do one thing better than others. If you’re really lucky, you can do a few things better than others. If you try to do stuff outside your expertise, however, you’re gonna fail. A person can pick themselves up, dust themselves off, and try again – but don’t try to do the same thing again ‘cause you’ve already proved it’s outside your expertise. People can start over, but companies usually can’t.

One of my favorite sayings is:

Everything looks easy to someone who doesn’t know what they’re doing.

The rank amateur at some activity typically doesn’t know the complexities and pitfalls that an expert in the field has learned about through training and experience. That’s what we know as expertise. When anyone – or any company – wanders outside their field of expertise, they quickly fall foul of those complexities and pitfalls.

I don’t know how many times I’ve overheard some jamoke at an art opening say, “Oh, I could do that!”

Yeah? Then do it!

The artist has actually done it.

The same goes for some computer engineer who imagines that knowing how to program computers makes him (or her) smart, and because (s)he is so smart, (s)he could run, say, a magazine publishing house. How hard can it be?

Mark Zuckerberg is in the process of finding out.

Fed Reports on U.S. Economic Well-Being

Federal Reserve Building
The Federal Reserve released the results of its annual Survey of Household Economics and Decisionmaking for calendar year 2018 last week. Image by Thomas Barrat / Shutterstock

29 May 2019 – Last week (specifically 23 May 2019) the Federal Reserve Board released the results of its annual Survey of Household Economics and Decisionmaking for CY2018. I’ve done two things for readers of this blog. First, I downloaded a PDF copy of the report to make available free of charge on my website at cgmasi.com alongside last year’s report for comparison. Second, I’m publishing an edited extract of the report’s executive summary below.

The report describes the results of the sixth annual Survey of Household Economics and Decisionmaking (SHED). In October and November 2018, the latest SHED polled a self-selected sample of over 11,000 individuals via an online survey.

Along with the survey-results report, the Board published the complete anonymized data in CSV, SAS, STATA formats; as well as a supplement containing the complete SHED questionnaire and responses to all questions in the order asked. The survey continues to use subjective measures and self-assessments to supplement and enhance objective measures.

Overall Results

Survey respondents reported that most measures of economic well-being and financial resilience in 2018 are similar to or slightly better than in 2017. Many families have experienced substantial gains since the survey began in 2013, in line with the nation’s ongoing economic expansion during that period.

Even so, another year of economic expansion and the low national unemployment rates did little to narrow the persistent economic disparities by race, education, and geography. Many adults are financially vulnerable and would have difficulty handling an emergency expense as small as $400.

In addition to asking adults whether they are working, the survey asks if they want to work more and what impediments they see to them working.

Overall Economic Well-Being

A large majority of individuals report that, financially, they are doing okay or living comfortably, and overall economic well-being has improved substantially since the survey began in 2013.

  • When asked about their finances, 75% of adults say they are either doing okay or living comfortably. This result in 2018 is similar to 2017 and is 12 percentage points higher than in 2013.

  • Adults with a bachelor’s degree or higher are significantly more likely to be doing at least okay financially (87%) than those with a high school degree or less (64%).

  • Nearly 8 in 10 whites are at least doing okay financially in 2018 versus two-thirds of blacks and Hispanics. The gaps in economic well-being by race and ethnicity have persisted even as overall wellbeing has improved since 2013.

  • Fifty-six percent of adults say they are better off than their parents were at the same age and one fifth say they are worse off.

  • Nearly two-thirds of respondents rate their local economic conditions as “good” or “excellent,” with the rest rating conditions as “poor” or “only fair.” More than half of adults living in rural areas describe their local economy as good or excellent, compared to two-thirds of those living in urban areas.

Income

Changes in family income from month to month remain a source of financial strain for some individuals.

  • Three in 10 adults have family income that varies from month to month. One in 10 adults have struggled to pay their bills because of monthly changes in income. Those with less access to credit are much more likely to report financial hardship due to income volatility.

  • One in 10 adults, and over one-quarter of young adults under age 30, receive some form of financial support from someone living outside their home. This financial support is mainly between parents and adult children and is often to help with general expenses.

Employment

Most adults are working as much as they want to, an indicator of full employment; however, some remain unemployed or underemployed. Economic well-being is lower for those wanting to work more, those with unpredictable work schedules, and those who rely on gig activities as a main source of income.

  • One in 10 adults are not working and want to work, though many are not actively looking for work. Four percent of adults in the SHED are not working, want to work, and applied for a job in the prior 12 months. This is similar to the official unemployment rate of 3.8% in the fourth quarter of 2018.

  • Two in 10 adults are working but say they want to work more. Blacks, Hispanics, and those with less education are less likely to be satisfied with how much they are working.

  • Half of all employees received a raise or promotion in the prior year.

  • Unpredictable work schedules are associated with financial stress for some. One-quarter of employees have a varying work schedule, including 17% whose schedule varies based on their employer’s needs. One-third of workers who do not control their schedule are not doing okay financially, versus one-fifth of workers who set their schedule or have stable hours.

  • Three in 10 adults engaged in at least one gig activity in the prior month, with a median time spent on gig work of five hours. Perhaps surprisingly, little of this activity relies on technology: 3% of all adults say that they use a website or an app to arrange gig work.

  • Signs of financial fragility – such as difficulty handling an emergency expense – are slightly more common for those engaged in gig work, but markedly higher for those who do so as a main source of income.

Dealing with Unexpected Expenses

While self-reported ability to handle unexpected expenses has improved substantially since the survey began in 2013, a sizeable share of adults nonetheless say that they would have some difficulty with a modest unexpected expense.

  • If faced with an unexpected expense of $400, 61% of adults say they would cover it with cash, savings, or a credit card paid off at the next statement – a modest improvement from the prior year. Similar to the prior year, 27% would borrow or sell something to pay for the expense, and 12% would not be able to cover the expense at all.

  • Seventeen percent of adults are not able to pay all of their current month’s bills in full. Another 12% of adults would be unable to pay their current month’s bills if they also had an unexpected $400 expense that they had to pay.

  • One-fifth of adults had major, unexpected medical bills to pay in the prior year. One-fourth of adults skipped necessary medical care in 2018 because they were unable to afford the cost.

Banking and Credit

Most adults have a bank account and are able to obtain credit from mainstream sources. However, substantial gaps in banking and credit services exist among minorities and those with low incomes.

  • Six percent of adults do not have a bank account. Fourteen percent of blacks and 11% of Hispanics are unbanked versus 4% of whites. Thirty-five percent of blacks and 23% of Hispanics have an account but also use alternative financial services, such as money orders and check cashing services, compared to 11% of whites.

  • More than one-fourth of blacks are not confident that a new credit card application would be approved if they applied—over twice the rate among whites.

  • Those who never carry a credit card balance are much more likely to say that they would pay an unexpected $400 expense with cash or its equivalent (88%) than those who carry a balance most or all of the time (40%) or who do not have a credit card (27%).

  • Thirteen percent of adults with a bank account had at least one problem accessing funds in their account in the prior year. Problems with a bank website or mobile app (7%) and delays in when funds were available to use (6%) are the most common problems. Those with volatile income and low savings are more likely to experience such problems.

Housing and Neighborhoods

Satisfaction with one’s housing and neighborhood is generally high, although notably less so in low-income communities. While 8 in 10 adults living in middle- and upper-income neighborhoods are satisfied with the overall quality of their community, 6 in 10 living in low- and moderate-income neighborhoods are satisfied.

  • People’s satisfaction with their housing does not vary much between more expensive and less expensive cities or between urban and rural areas.

  • Over half of renters needed a repair at some point in the prior year, and 15% of renters had moderate or substantial difficulty getting their landlord to complete the repair. Black and Hispanic renters are more likely than whites to have difficulties getting repairs done.

  • Three percent of non-homeowners were evicted, or moved because of the threat of eviction, in the prior two years. Evictions are slightly more common in urban areas than in rural areas.

Higher Education

Economic well-being rises with education, and most of those holding a post-secondary degree think that attending college paid off.

  • Two-thirds of graduates with a bachelor’s degree or more feel that their educational investment paid off financially, but 3 in 10 of those who started but did not complete a degree share this view.

  • Among young adults who attended college, more than twice as many Hispanics went to a for-profit institution as did whites. For young black attendees, this rate was five times the rate of whites.

  • Given what they know now, half of those who attended a private for-profit institution say that they would attend a different school if they had a chance to go back and make their college choices again. By comparison, about one-quarter of those who attended public or private not-for-profit institutions would want to attend a different school.

Student Loans and Other Education Debt

Over half of young adults who attended college took on some debt to pay for their education. Most borrowers are current on their payments or have successfully paid off their loans.

  • Among those making payments on their student loans, the typical monthly payment is between $200 and $299 per month.

  • Over one-fifth of borrowers who attended private for-profit institutions are behind on student loan payments, versus 8% who attended public institutions and 5% who attended private not-for-profit institutions.

Retirement

Many adults are struggling to save for retirement. Even among those who have some savings, people commonly lack financial knowledge and are uncomfortable making investment decisions.

  • Thirty-six percent of non-retired adults think that their retirement saving is on track, but one-quarter have no retirement savings or pension whatsoever. Among non-retired adults over the age of sixty, 45% believe that their retirement saving is on track.

  • Six in 10 non-retirees who hold self-directed retirement savings accounts, such as a 401(k) or IRA, have little or no comfort in managing their investments.

  • On average, people answer fewer than three out of five financial literacy questions correctly, with lower scores among those who are less comfortable managing their retirement savings.

The foregoing is an edited extract from the Report’s Executive Summary. A PDF version of the entire report is available on my website at cgmasi.com [ http://cgmasi.com ] along with a PDF version of the 2017 report, which was published in May of 2018 and based on a similar survey conducted in late 2017. Reports dating back to the first survey done in late 2013 are available from the Federal Reserve Board’s website linked to above.

Authoritarian’s Lament

Davy Crockett stamp
Davy Crockett was an individualistic hero for children growing up in the 1950s and 1960s. Circa 1967 postage stamp printed in USA shows Davy Crockett with rifle and scrub pines. Oldrich / Shutterstock.com

22 May 2019 – I grew up believing in the myth of the rugged individualist.

As did most boys in the 1950s, I looked up to Davy Crockett, Daniel Boone and their ilk. Being fond of developing grand theories, I even worked out an hypothesis that the wisdom of any group’s decisions was inversely proportional to the group’s size (number of members) because in order to develop consensus, the decision had to be acceptable to even the stupidest member of the group.

With this background, I used to think that democracy’s main value was that it protected the rights of individuals – especially those rugged individuals I so respected – so they could scout the path to the future for everyone else to follow.

I’ve since learned better.

There were, of course, a lot of holes in this philosophy, not the least of which was that it matched up so well with the fevered imaginings I saw going on in the minds of authoritarian figures and those who wanted to cozy up to authoritarian figures. Happily, I recognized those philosophical holes and wisely kept on the lookout for better ideas.

First, I realized that no single individual, no matter how accomplished, could do much of anything on their own. Even Albert Einstein, that heroic misfit scientist, was only able to develop his special theory of relativity by abandoning some outdated assumptions that made interpreting results of experiments by other scientists problematic. Without a thorough immersion in the work of his peers, he wouldn’t have even known there was a problem to be solved!

Similarly, that arrogant genius, Sir Isaac Newton, recognized his debt to his peers in a letter to Robert Hooke on 5 February 1676 by saying: “If I have seen a little further it is by standing on the shoulders of giants.”

For all of his hubris, Newton was well known to immerse himself in the society of his fellows.

Of course, my childhood heroes, Davy Crockett, Daniel Boone, and Captain Blood, only started out as rugged individuals. They then went on to gather followers and ended up as community leaders of one sort or another. As children, we used to forget that!

My original admiration of rugged individualists was surely an elitist view, but it was tempered with the understanding that predicting in advance who was going to be part of that elite was an exercise in futility. I’d already seen too many counterexamples of people who imagined that they, or somebody they felt inferior to, would eventually turn out to be one of the elite. In, for example, high school, I’d run into lots of idiots (in my estimation) who strutted around thinking they were superior to others because of (usually) family background or social position.

We called that “being a legend in their own mind.”

Diversity Rules!

Eventually, I realized what ancient Athenians had at least a glimmer of, and the framers of the Declaration of Independence and the U.S. Constitution certainly had a clear idea of, and what modern management theorists harp on today: the more diverse a group is, the better its decisions tend to be.

This is, of course, the exact reverse of my earlier rugged-individualist hypothesis.

As one might suspect, diversity is measurable, and there are numerous diversity indices one might choose from to quantify the diversity within a group. Here I’m using the word “group” in the set-theoretic sense: a set whose members (elements) are identifiable by sharing specific characteristics.

For example, “boys” forms a group of juvenile male human beings. “Girls” forms another similar, but mutually exclusive group. “Boys” and “girls” are both subsets of multiple larger groups, one of which is “young people.”

“Diversity” seeks to measure the number of separate subgroups one can find within a given group. So, you can (at least) divide “young people” into two subgroups “boys” and “girls.”

The importance of this analysis is that the different characteristics common within subgroups lead to different life experiences, which, the diversity theory posits, provide different points of view and (likely) different suggestions to be considered for solutions to any given problem.

So, the theory goes, the more diverse the group, the more different solutions to the problem can be generated, and the more likely a superior choice will be presented. With more superior choices available and a more diverse set (There’s that word again!) of backgrounds that can be used to compare the choices, the odds are that the more diversity in a group, the better will be the solution it finally chooses.

Yeah, this is a pretty sketchy description of the theory, but Steven Johnson spends 216 pages laying it out in his book Farsighted, and I don’t have 216 pages here. The sketch presented here is the best I can do with the space available. If you want more explanation, buy the book and read it.

Here I’m going to seize on the Gini–Simpson diversity index, which uses the probability that two randomly selected members of a group are members of the same subgroup (λ), then subtracts it from unity. In other words in a group of, say, young people containing equal numbers of boys and girls, the probability that any pair of members selected at random will be either both boys or both girls is 0.5 (50%). The Gini-Simpson index is 1-λ = 1 – 0.5 = 0.5.

A more diverse group (one with three subgroups, for example) would have a lower probability of any pair being exactly matched, and a higher Gini-Simpson diversity index (closer to 1.0). Thus, the diversity theory would have it that such a group would have a better chance of making a superior decision.
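As a concrete check on those numbers, here’s a tiny Python sketch (mine, using the with-replacement form of the index) that computes the Gini–Simpson index for a few hypothetical groups:

```python
# Gini-Simpson diversity index: 1 - sum of squared subgroup proportions.
# (This is the with-replacement form; for large groups the difference
#  from sampling without replacement is negligible.)

def gini_simpson(subgroup_counts):
    total = sum(subgroup_counts)
    # lam: probability that two randomly chosen members share a subgroup
    lam = sum((n / total) ** 2 for n in subgroup_counts)
    return 1.0 - lam

print(gini_simpson([50, 50]))      # boys/girls, 50/50 split -> 0.5
print(gini_simpson([40, 35, 25]))  # three subgroups -> ~0.66 (more diverse)
print(gini_simpson([1]))           # a lone decision maker -> 0.0
```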

Authoritarians Don’t Rule!

Assuming I’ve convinced you that diversity makes groups smarter, where does that leave our authoritarian?

Let’s look at the rugged-individualist/authoritarian situation from a diversity-index viewpoint. There, the number of subgroups in the decision-making group is one, ‘cause there’s only one member to begin with. Randomly selecting twice always comes up with identically the same member, so the probability of getting the same one twice is exactly one. That is, it’s guaranteed.

That makes the diversity score of an individualist/authoritarian exactly zero. In other words, according to the diversity decision-making theory, authoritarians are the worst possible decision makers!

And, don’t try to tell me individualist/authoritarians can cheat the system by having wide-ranging experiences and understanding different cultures. I’ve consciously done exactly that for seven decades. What it’s done is to give me an appreciation of different cultures, lifestyles, philosophies, etc.

It did not, however, make me more diverse. I’m still one person with one brain and one viewpoint. It only gave me the wisdom(?) to ask others for their opinions, and listen to what they say. It didn’t give me the wisdom to answer for them because I’m only the one person with the one viewpoint.

So, why do authoritarian regimes even exist?

What folks often imagine as “human nature” provides the answer. I’m qualifying “human nature” because, while this particular phenomenon is natural for humans, it’s also natural for all living things. It’s a corollary that follows from Darwin’s natural-selection hypothesis.

Imagine you’re a scrap of deoxyribonucleic acid (DNA). Your job is to produce copies of yourself. If you’re going to be successful, you’ll have to code for ways to make lots of copies of yourself. The more copies you can make, the more successful you’ll be.

Over the past four billion years that life is estimated to have been infesting the surface of Earth, a gazillion tricks and strategies have been hit upon by various scraps of DNA to promote reproductions of themselves.

While some DNA has found that promoting reproduction of other scraps of DNA is helpful under some circumstances, your success comes down to promoting reproduction of scraps of DNA like you.

For example, human DNA has found that coding for creatures that help each other survive helps them survive. Thus, human beings tend to cluster in groups, or tribes of related individuals – with similar DNA. We’re all tribal, and (necessarily) proud of it!

Anyway, another strategy that DNA uses for better survival is to prefer creatures similar to us. That helps DNA evolve into more successful forms.

In the end, the priority system that necessarily evolves is:

  • Identical copies first (thus, the bond between identical twins is especially strong);

  • Closely related copies next;

  • More distantly related copies have lower priority.

We also pretty much all like pets because pets are unrelated creatures that somehow help us survive to make scads of copies of our own DNA. But, we prefer mammals as pets because mammals’ DNA is very much like our own. More people keep cats and dogs as pets, than snakes or bugs. See the pattern?

We prefer our children to our brothers (and sisters).

We prefer our brothers and sisters to our neighbors.

We prefer our neighbors to our pets. (Here the priority system is getting pretty weak!)

And, so forth.

In other words, all living things prefer other living things that are like them.

Birds of a feather flock together.

That is the basis of all discrimination phenomena, from racial bias to how we choose our friends.

How Authoritarians Rule, Anyway.

What has that to do with authoritarianism?

Well, it has a lot to do with authoritarianism! Authoritarians only survive if they’re supported by populations who prefer them enough to cede decision-making power to them. Otherwise, they’d just turn and walk away.

So authoritarian societies require populations with low diversity who generally are very much like the leaders they select. If you want to be an authoritarian leader, go find a low-diversity population and convince them you’re just like them. Tell ‘em they’re the greatest thing since sliced bread because they’re so much like you, and that everyone else – those who are not part of your selected population – are inferior scum simply because they’re not like your selected population. Then your followers will love you for it, and hate everyone else.

That’s why authoritarian regimes mainly thrive in low-diversity, xenophobic populations.

That despite (or maybe because of) the fact that such populations are likely to make the poorest decisions.

Why Target Average Inflation?

Federal Reserve Seal
The FOMC attempts to control economic expansion by managing interest rates. Shutterstock.com

8 May 2019 – There’s been a bit of noise in financial-media circles this week (as of this writing, but it’ll be last week when you get to read it) about Federal Reserve Chairman Jerome Powell’s talking up shifting the Fed’s focus to targeting something called “average inflation” and using words like “transient” and “symmetric” to describe this thinking. James Macintosh provided a nice layman-centric description of the pros and cons of this concept in his “Streetwise” column in Friday’s (5/3) The Wall Street Journal. (Sorry, folks, but this article is only available to WSJ subscribers, so the link above leads to a teaser that asks you to either sign in as a current subscriber or to become a new subscriber. And, you thought information was supposed to be distributed for free? Think again!)

I’m not going to rehash what Macintosh wrote, but attempt to show why this change makes sense. In fact, it’s not really a change at all, but an acknowledgement of what’s really been going on all the time.

We start with pointing out that what the Federal Reserve System is mandated to do is to control the U.S. economy. The operant word here is “control.” That means that to understand what the Fed does (and what it should do) requires a basic understanding of control theory.

Basic Control Theory

We’ll start with a thermostat.

A lot of people (I hesitate to say “most” because I’ve encountered so many counter examples – otherwise intelligent people who somehow don’t seem to get the point) understand how a thermostat works.

A thermostat is the poster child for basic automated control systems. It’s the “stone knives and bearskins” version of automated controls, and is the easiest for the layman to understand, so that’s where we’ll start. It’s also a good analog for what has passed for economic controls since the Fed was created in 1913.

Okay, the first thing to understand is the concept of a “set point.” That’s a “desired value” of some measurement that represents the thing you want to control. In the case of the thermostat, the measurement is room temperature (as read out from a thermometer) and the thing you’re trying to control is how comfortable the room air feels to you. In the case of the Fed, the thing you want to control is overall economic performance and the measurement folks decided was most useful is the inflation rate.

Currently, the set point for inflation is 2% per annum.

In the case of the thermostat in our condo, my wife and I have settled on 75º F. That’s a choice we’ve made based on the climate where we live (Southwestern Florida), our ages, and what we, through experience, have found to be most comfortable for us right now. When we lived in New England, we chose a different set point. Similarly, when we lived in Northern Arizona it was different as well.

The bottom line is: the set point is a matter of choice based on a whole raft of factors that we think are important to us and it varies from time to time.

The same goes for the Fed’s inflation set point. It’s a choice Fed governors make based on a whole raft of considerations that they think are important to the country right now. One of the reasons they meet regularly (about every six weeks) is to review that target ‘cause they know that things change. What seems like a good idea in July, might not look so good in August.

Now, it’s important to recognize that the set point is a target. Like any target, you’re trying to hit it, but you don’t really expect to hit it exactly. You really expect that the value you get for your performance measurement will differ from your set point by some amount – by some error or what metrologists prefer to call “deviation.” We prefer deviation to the word error because it has less pejorative connotations. It’s a fact of life, not a bad thing.

When we add in the concept of time, we also introduce the concept of feedback. That is what control theorists call it when you take the results of your measurement and feed it back to your decision of what to do next.

What you do next to control whatever you’re trying to control depends, first, on the sign (positive or negative) of the deviation, and, in more sophisticated controls, on its value or magnitude. In the case of the thermostat, if the deviation is positive (meaning the room is hotter than you want) you want to do something to cool it down. In the case of the economy, if inflation is too high you want to do something to reduce economic activity so you don’t get an economic bubble that’ll soon burst.

What confuses some presidents is the idea that rising economic activity isn’t always good. Presidents like boom times ‘cause they make people feel good – like a sugar high. Populist presidents typically fail to recognize (or care about the fact) that booms are invariably bubbles that burst disastrously. Just ask the people of Venezuela who watched their economy’s inflation rate suddenly shoot up to about a million(!) percent per annum.

Booms turn to busts in a heartbeat!

This is where we want to abandon the analogy with a thermostat and get a little more sophisticated.

A thermostat is a blunt instrument. What the thermostat automatically does next is like using a club. At best, a thermostat has two clubs to choose from: it can either fire up the furnace (to raise the room temperature in the event of a negative deviation) or kick in the air conditioner (in the event that the deviation is positive – too hot). That’s known as binary digital control. It gives you a digital choice: up or down.
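For the programmatically inclined, a thermostat’s binary (bang-bang) control logic fits in a dozen lines; this is a made-up sketch of the idea, not the firmware of any real thermostat:

```python
# Bang-bang (binary) controller, thermostat style: the only choices are
# "heat", "cool", or "off" -- there is no notion of how hard to push.

SET_POINT = 75.0   # degrees F, the desired room temperature
DEADBAND = 1.0     # a little hysteresis so the system doesn't chatter

def thermostat(room_temp):
    deviation = room_temp - SET_POINT
    if deviation > DEADBAND:
        return "cool"    # too hot: kick in the air conditioner
    elif deviation < -DEADBAND:
        return "heat"    # too cold: fire up the furnace
    else:
        return "off"     # close enough: do nothing

for temp in (72.0, 75.5, 78.2):
    print(temp, "->", thermostat(temp))
```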

We leave the thermostat analogy because the Fed’s main tool for controlling the economy (the Fed-funds interest rate) is a lot more sophisticated. It’s what mathematicians call analog. That is, instead of providing a binary choice (to use the club or not), it lets you choose how much pressure you want to apply up or down.

Quantitative easing similarly provides analog control, so what I’m going to say below also applies to it.

Okay, the Fed’s control lever (Fed funds interest rate) is more like a brake pedal than a club. In a car, the harder you press the brake pedal, the more pressure you apply to make the car slow down. A little pressure makes the car slow down a little. A lot of pressure makes the car slow down a lot.

So, you can see why authoritarians like low interest rates. Authoritarians generally have high-D personalities. As Personality Insights says: “They tend to know 2 speeds in life – zero and full throttle… mostly full throttle.”

They generally don’t have much use for brakes!

By the way, the thing governments have that corresponds to a gas pedal is deficit spending, but the correspondence isn’t exact and the Fed can’t control it, anyway. Since this article is about the Fed, we aren’t going to talk about it now.

When inflation’s moving too fast (above the set point) by a little, the Fed governors – being the feedback controller – decide to raise the Fed funds rate, which is analogous to pushing the brake pedal, by a little. If that doesn’t work, they push it a little harder. If inflation seems to be out of control, as it did in the 1970s, they push it as hard as they can, boosting interest rates way up and pulling way back on the economy.
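A crude way to express that “push harder if it isn’t working” behavior in code is a proportional-plus-integral feedback rule. This sketch is my illustration of the control idea only, with invented gains and readings; it is emphatically not how the FOMC actually sets rates:

```python
# Proportional-integral style feedback on an interest rate: the further
# (and longer) inflation runs above target, the harder the "brake" is pressed.

TARGET = 2.0       # percent inflation per annum (the set point)
KP, KI = 0.5, 0.1  # made-up gains on current and accumulated deviation

rate = 2.5             # starting policy rate, percent
accumulated_dev = 0.0  # running total of past deviations

observed_inflation = [2.1, 2.6, 3.2, 3.0, 2.4, 1.9]  # hypothetical readings

for inflation in observed_inflation:
    deviation = inflation - TARGET
    accumulated_dev += deviation
    # Raise the rate when inflation runs hot, ease it when inflation cools.
    rate = max(0.0, rate + KP * deviation + KI * accumulated_dev)
    print(f"inflation {inflation:.1f}% -> policy rate {rate:.2f}%")
```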

Populist dictators, who generally don’t know what they’re doing, try to prevent their central banks (you can’t have an economy without having a central bank, even if you don’t know you have it) from raising interest rates soon enough or high enough to get inflation under control, which is why populist dictatorships generally end up with hyperinflation leading to economic collapse.

Populist Dictators Need Not Apply

This is why we don’t want the U.S. Federal Reserve Bank under political control. Politicians are not elected for their economic savvy, so we want Fed governors, who are supposed to have economic savvy, to make smart decisions based on their understanding of economic causes and effects, rather than dumb decisions based on political expediency.

Economists are mathematically sophisticated people. They may (or may not) be steeped in the theory of automated control systems, but they’re quite capable of understanding these basics and how they apply to controlling an economy.

Economics, of course, has been around as long as civilization. Hesiod (ca. 750 BCE) is sometimes considered “the first economist.” Contemporary economics traces back to the eighteenth century with Adam Smith. Control theory, on the other hand, was only worked out as a formal discipline in the twentieth century. So, you don’t really need control theory to understand economics. It just makes it easier to see how the controls work.

To a veteran test and measurement maven like myself, the idea of thinking in terms of average inflation, instead of the observed inflation at some point in time – like right now – makes perfect sense. We know that every time you make a measurement of anything, you’re almost guaranteed to get a different value than you got the last time you measured it. That’s why we (scientists and engineers) always measure whatever we care about multiple times and pay attention to the average of the measurements instead of each measurement individually.

So, Fed governors starting to pay attention to average inflation strikes us as a duh! What else would you look at?

Similarly, using words like “transient” and “symmetric” makes perfect sense because “transient” expresses the idea that things change faster than you can measure them and “symmetric” expresses the idea that measurement variations can be positive or negative – symmetric on each side of the average.

These ideas all come from the mathematics of statistics. You’ve heard of “statistical significance” associated with polling data, or two polling results being within “statistical error.” The variations I’m talking about are the same thing. Variations between two values (like the average inflation and the target inflation) are statistically significant if they’re sufficiently outside the statistical error.

I’m not going to go into how you calculate a value for statistical error because it takes hours of yammering to teach it in statistics classes, and I just don’t have the space here. You wouldn’t want to read it right now, anyway. Suffice it to say that it’s a well-defined concept relating to how much variation you can expect in a given data set.
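Just to give you the flavor, though, here’s what such a check might look like in a few lines of Python (a sketch with invented numbers and a common rule of thumb, not a statistics lesson):

# Rough-and-ready check of whether a deviation is outside the statistical error.
# The readings are invented; "two standard errors" is a common rule of thumb.
import statistics

readings = [2.3, 1.9, 2.6, 2.1, 2.4, 2.0]   # hypothetical monthly inflation readings, percent
target = 2.0                                 # the set point

mean = statistics.mean(readings)
std_err = statistics.stdev(readings) / len(readings) ** 0.5   # standard error of the mean

if abs(mean - target) > 2 * std_err:
    print(f"average {mean:.2f}% is significantly different from the {target:.1f}% target")
else:
    print(f"average {mean:.2f}% is within statistical error of the {target:.1f}% target")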

While the control theory I’ve been talking about applies especially to automated control systems, it applies equally to Federal Reserve System control of economic performance – if you put the Federal Open Market Committee (FOMC) in place of the control computer that makes decisions for the automated control system.

“So,” you ask, “why not put the Fed-funds rate under computer control?”

The reason it would be unreasonable to fully automate the Fed’s actions is that we can’t duplicate the thinking process of the Fed governors in a computer program. The state of the art of economic models is just not good enough, yet. We still need the gut feelings of seasoned economists to make enough sense out of what goes on in the economy to figure out what to do next.

That, by the way, is why we don’t leave the decisions up to some hyperintelligent pandimensional being (named Trump). We need a panel of economists with diverse backgrounds and experiences – the FOMC – to have some hope of getting it right!

So, You Thought It Was About Climate Change?

Smog over Warsaw
Air pollution over Warsaw center city in winter. Piotr Szczepankiewicz / Shutterstock

Sorry about failing to post to this blog last week. I took sick and just couldn’t manage it. This is the entry I started for 10 April, but couldn’t finish until now.

17 April 2019 – I had a whole raft of things to talk about in this week’s blog posting, some of which I really wanted to cover for various reasons, but I couldn’t resist an excuse to bang this old “environmental pollution” drum once again.

A Zoë Schlanger-authored article published on 2 April 2019 by World Economic Forum in collaboration with Quartz entitled “The average person in Europe loses two years of their life due to air pollution” crossed my desk this morning (8 April 2019). It was important to me because environmental pollution is an issue I’ve been obsessed with since the 1950s.

The Setup

One of my earliest memories is of my father taking delivery of an even-then-ancient 26-foot lifeboat (I think it was from an ocean liner, though I never really knew where it came from), which he planned to convert to a small cabin cruiser. I was amazed when, with no warning to me, this great, whacking flatbed trailer backed over our front lawn and deposited this thing that looked like a miniature version of Noah’s Ark.

It was double-ended – meaning it had a prow-shape at both ends – and was pretty much empty inside. That is, it had benches for survivors to sit on and fittings for oarlocks (I vaguely remember oarlocks actually being in place, but my memory from over sixty years ago is a bit hazy), but little else. No decks. No superstructure. Maybe some grates in the bottom to keep people’s feet out of the bilge, but that’s about it.

My father spent a year or so installing lower decks, upper decks, a cabin with bunks, a head, and a small galley, and a straight-six gasoline engine for propulsion. I sorta remember the keel already having been fitted for a propeller shaft and rudder, which would class the boat as a “launch” rather than a simple lifeboat, but I never heard it called that.

Finally, after multiple-years’ reconstruction, the thing was ready to dump into the water to see if it would float. (Wooden boats never float when you first put them in the water. The planks have to absorb water and swell up to tighten the joints. Until then, they leak like sieves.)

The water my father chose to dump this boat into was the Seekonk River in nearby Providence, Rhode Island. It was a momentous day in our family, so my mother shepherded my big sister and me around while my father stressed out about getting the deed done.

We won’t talk about the day(s) the thing spent on the tiny shipway off Gano Street, where the last patches of bottom paint were applied over the spots where the boat’s cradle had supported its hull during construction, and the last little forgotten bits were fitted and checked out before it was launched.

While that was going on, I spent the time playing around the docks and frightening my mother with my antics.

That was when I noticed the beautiful rainbow sheen covering the water.

Somebody told me it was called “iridescence” and was caused by the whole Seekonk River being covered by an oil slick. The oil came from the constant movement of oil-tank ships delivering liquid dreck to the oil refinery and tank farm upstream. The stuff was getting dumped into the water and flowing down to help turn Narragansett Bay, which takes up half the state to the south, into one vast combination open sewer and toxic-waste dump.

That was my introduction to pollution.

It made my socks rot every time I accidentally or reluctantly-on-purpose dipped any part of my body into that cesspool.

It was enough to gag a maggot!

So when, in the late 1960s, folks started yammering on about pollution, my heartfelt reaction was: “About f***ing time!”

I did not join the “Earth Day” protests that started in 1970, though. Previously, I’d observed the bizarre antics surrounding the anti-war protests of the middle-to-late 1960s, and saw the kind of reactions they incited. My friends and I had been a safe distance away leaning on an embankment blowing weed and laughing as less-wise classmates set themselves up as targets for reactionary authoritarians’ ire.

We’d already learned that the best place to be when policemen suit up for riot patrol is someplace a safe distance away.

We also knew the protest organizers – they were, after all, our classmates in college – and smiled indulgently as they worked up their resumes for lucrative careers in activist management. There’s more than one way to make a buck!

Bohemians, beatniks, hippies, or whatever term du jour you wanted to call us just weren’t into the whole money-and-power trip. We had better, mellower things to do than march around carrying signs, shouting slogans, and getting our heads beaten in for our efforts. So, when our former friends, the Earth-Day organizers, wanted us to line up, we didn’t even bother to say “no.” We just turned and walked away.

I, for one, was in the midst of changing tracks from English to science. I’d already tried my hand at writing, but found that, while I was pretty good at putting sentences together in English, then stringing them into paragraphs and stories, I really had nothing worthwhile to write about. I’d just not had enough life experience.

Since physics was basic to all the other stuff I’d been interested in – for decades – I decided to follow that passion and get a good grounding in the hard sciences, starting with physics. By the late seventies, I had learned what science was all about, had developed a feel for how it was done, and knew what the results looked like. Especially, I was deep into astrophysics in general and solar physics in particular.

As time went on, the public noises I heard about environmental concerns began to sound more like political posturing and less like scientific discourse. Especially as they chose to ignore the variability of the Sun, which we astronomers knew was what made everything work.

By the turn of the millennium, scholarly reports generally showed no observations that backed up the global-warming rhetoric. Instead, they featured ambiguous results that showed chaotic evolution of climate with no real long-term trends.

Those of us interested in the history of science also realized that warm periods coincided with generally good conditions for humans, while cool periods could be pretty rough. So, what was wrong with a little global warming when you needed it?

A disturbing trend, however, was that these reports began to feature a boilerplate final paragraph saying, roughly: “climate change is a real danger and caused by human activity.” They all featured this paragraph, suspiciously almost word for word, despite there being little or nothing in the research results to support such a conclusion.

Since nothing in the rest of the report provided any basis for that final paragraph, it was clearly a non sequitur, added for non-scientific reasons. Clearly something was terribly wrong with climate research.

The penny finally dropped in 2006 when emeritus Vice President Albert Gore (already infamous for having attempted to take credit for developing the Internet) produced his hysteria-inducing movie An Inconvenient Truth along with the splashing about of the laughable “hockey-stick graph” (Michael Mann’s temperature reconstruction, nicknamed by Jerry Mahlman). The graph, in particular, was based on a stitching together of historical data for proxies of global temperature with a speculative projection of a future exponential rise in global temperatures. That is something respectable scientists are specifically trained not to do, although it’s a favorite tactic of psycho-ceramics.

Air Pollution

By that time, however, so much rhetoric had been invested in promoting climate-change fear and convincing the media that it was human-induced, that concerns about plain old pollution (which anyone could see) seemed dowdy and uninteresting by comparison.

One of the reasons pollution seemed then (and still does now) old news is that in civilized countries (generally those run as democracies) great strides had already been made beating it down. A case in point is the image at right.

East/West Europe Pollution
A snapshot of particulate pollution across Europe on Jan. 27, 2018. (Apologies to Quartz [ https://qz.com/1192348/europe-is-divided-into-safe-and-dangerous-places-to-breathe/ ] from whom this image was shamelessly stolen.)

This image, which is a political map overlaid by a false-color map with colors indicating air-pollution levels, shows relatively mild pollution in Western Europe and much more severe levels in the more-authoritarian-leaning countries of Eastern Europe.

While this map makes an important point about how poorly communist and other authoritarian-leaning regimes take care of the “soup” in which their citizens have to live, it doesn’t say a lot about the environmental state of the art more generally in Europe. We leave that for Zoë Schlanger’s WEF article, which begins:

“The average person living in Europe loses two years of their life to the health effects of breathing polluted air, according to a report published in the European Heart Journal on March 12.

“The report also estimates about 800,000 people die prematurely in Europe per year due to air pollution, or roughly 17% of the 5 million deaths in Europe annually. Many of those deaths, between 40 and 80% of the total, are due to air pollution effects that have nothing to do with the respiratory system but rather are attributable to heart disease and strokes caused by air pollutants in the bloodstream, the researchers write.

“‘Chronic exposure to enhanced levels of fine particle matter impairs vascular function, which can lead to myocardial infarction, arterial hypertension, stroke, and heart failure,’ the researchers write.”

The point is, while American politicians debate the merits of climate change legislation, and European politicians seem to have knuckled under to IPCC climate-change rhetoric by wholeheartedly endorsing the 2015 Paris Agreement, the bigger and far more salient problem of environmental pollution is largely being ignored. This despite the visible and immediate deleterious effects on human health, and the demonstrated effectiveness of government efforts to ameliorate it.

By the way, in the two decades between the time I first observed iridescence atop the waters of the Seekonk River and when I launched my own first boat in the 1970s, Narragansett Bay went from a potential Superfund site to a beautiful, clean playground for recreational boaters. That was largely due to the efforts of the Save the Bay volunteer organization. While their job is not (and never will be) completely finished, they can serve as a model for effective grassroots activism.

Falling Out of the Sky

B737 Max taking off
Thai Lion Air Boeing 737 Max 9 taking off from Don Mueang international airport in Bangkok, Thailand. Komenton / Shutterstock.com

3 April 2019 – On 29 October 2018, Lion Air flight 610 crashed soon after takeoff from Soekarno–Hatta International Airport in Jakarta, Indonesia. This is not the sort of thing we report in this blog. It’s straight news and we leave that to straight-news media, but I’m diving into it because it involves technology I’m quite familiar with and I might be able to help readers make sense of what happened and judge the often-uninformed reactions to it.

I claim to have the background to understand what happened because I’ve been flying light planes since the 1990s. I also put two years into a post-graduate Aerospace Engineering Program at Arizona State University concentrating on fluid dynamics. That’s enough background to make some educated guesses at what happened to Lion Air 610 as well as in the almost identical crash of an Ethiopian Airlines Boeing 737 MAX near Addis Ababa, Ethiopia, on 10 March 2019.

First, both airliners were recently commissioned Boeing 737 MAX aircraft using standard-equipment installations of Boeing’s new Maneuvering Characteristics Augmentation System (MCAS).

How to Stall an Aircraft

In aerodynamics the word “stall” means something quite unlike what most people expect. Most people encounter the word in an automobile context, where it refers to “stalling the engine.” That happens when you overload an internal-combustion engine. That is, you pull more power out than the engine can produce at its current operating speed. When that happens, the engine simply stops.

It turns from a power-producing machine to a boat anchor in a heartbeat. Your car stops with a lurch and everyone behind you starts swearing and blowing their horns in an effort to make you feel even worse than you already do.

That’s not what happens when an airplane stalls. It’s not the aircraft’s engine that stalls, but its wings. There are similarities in that, like engines, wings stall when they’re overloaded, and when stalled they start producing drag like a boat anchor, but that’s about where the similarities end.

When an aircraft stalls, nobody swears and blows their horn. Instead, they scream and die.

Why? Well, wings are supposed to lift the aircraft and support it in the air. If you’ve ever tried to carry a sheet of plywood on a windy day, you’ve experienced both lift and drag. If you let the sheet tip up a little bit so the wind catches it underneath, it tries to fly up out of your hands. That’s the lift an airplane gets by tipping its wings up into the air stream as it moves forward into the air.

The more you tip the sheet up, the more lift you get for the same airspeed. That is, until you reach a certain attack angle (the angle between the sheet and the wind). Stalling begins at an attack angle of about 15°. Then, all of a sudden, the force lifting the sheet changes from up and a little back to no up, and a lot of back!

That’s a wing stall.

The aircraft stops imitating a bird, and starts imitating a rock.

You suddenly get a visceral sense of the concept “down.”

‘Cause that’s where you go in a hurry!

At that point, all you can do is point the nose down (so the wing’s forward edge starts pointing in the direction you’re moving: down!).

If you’ve got enough space underneath your aircraft so the wing starts flying again before you hit the ground, you can gently pull the aircraft’s nose back up to resume straight and level flight. If not, that’s when the screaming starts.

Wings stall when they’re going too slowly to generate the required lift at an angle of attack of 15°. At higher speeds, the wing can generate the needed lift with less angle of attack, and worries about stalling never come up.
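If you like to see that as numbers, lift grows with the square of airspeed times a lift coefficient that climbs with attack angle up to the stall. Here’s a back-of-the-envelope sketch in Python (thin-airfoil approximation, completely invented aircraft numbers, nothing to do with any real 737):

# Back-of-the-envelope: attack angle needed for level flight versus airspeed.
# Uses the thin-airfoil approximation CL ~ 2*pi*alpha; all aircraft numbers are invented.
import math

WEIGHT = 5.0e4        # newtons (a small, hypothetical aircraft)
WING_AREA = 16.0      # square meters
RHO = 1.225           # sea-level air density, kg/m^3
STALL_ANGLE = 15.0    # degrees, roughly where stalling begins

for speed in [40, 50, 60, 80]:                        # airspeed in m/s
    cl_needed = WEIGHT / (0.5 * RHO * speed**2 * WING_AREA)
    alpha_deg = math.degrees(cl_needed / (2 * math.pi))
    status = "stalled: can't fly this slowly" if alpha_deg > STALL_ANGLE else "flying"
    print(f"{speed} m/s: needs about {alpha_deg:.1f} degrees of attack angle ({status})")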

So, now you know all you need to know (or want to know) about stalling an aircraft.

MCAS

Boeing’s MCAS is an anti-stall system. Its beating heart is a bit of software running on the flight-control computer that monitors a number of sensor inputs, like airspeed and angle of attack. Basically, in simple terms, it knows exactly how much attack angle the wings can stand before stalling out. If it sees that, for some reason, the attack angle is getting too high, it assumes the pilot has screwed up. It takes control and pushes the nose down.

It doesn’t have to actually “take control” because modern commercial aircraft are “fly by wire,” which means it’s the computer that actually moves the control surfaces to fly the plane. The pilot’s “yoke” (the little wheel he or she gets to twist and turn and move forward and back) and the rudder pedals used to steer (push right, go right) just send signals to the computer to tell it what the pilot wants to have happen. In a sense, the pilot negotiates with the computer about what the airplane should do.

The pilot makes suggestions (through the yoke, pedals and throttle control – collectively called the “cockpit flight controls”); the computer then takes that information, combines it with all the other information provided by a plethora (Do you like that word? I do!) of additional sensors; thinks about it for a microsecond; then, finally, the computer tells the aircraft’s control surfaces to move smoothly to a position that it (the computer) thinks will make the aircraft do what the pilot wants it to do.
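To make the logic concrete, here’s a cartoon version in Python. This is strictly my own simplification of the idea as it’s been described publicly; it is emphatically not Boeing’s software, and the numbers are invented:

# Cartoon of an anti-stall intervention, NOT Boeing's actual MCAS code.
# The warning threshold and nose-down increment are invented illustrative numbers.
STALL_WARNING_ANGLE = 12.0   # degrees: intervene before the roughly 15-degree stall

def flight_computer_pitch(attack_angle_sensor, pilot_pitch_command):
    """Decide the pitch command actually sent to the control surfaces."""
    if attack_angle_sensor > STALL_WARNING_ANGLE:
        # The computer assumes pilot error and trims the nose down,
        # regardless of what the pilot is asking for.
        return pilot_pitch_command - 2.5   # push the nose down a notch
    return pilot_pitch_command             # otherwise, do what the pilot asks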

That’s all well and good when the reason the attack angle got too high is just that something happened that broke the pilot’s concentration, and he (or she) actually screwed up. What about when the pilot actually wants to stall the aircraft?

For example, on landing.

To land a plane, you slow it way down, so the wing’s almost stalled. Then, you fly it really close to the ground so the wheels almost touch the runway. Then you stall the wing so the wheels touch the ground just as the wings lose lift. You hear a satisfying “squeak” as the wheels momentarily skid while spinning up to match the relative speed of the runway. Finally, the wheels gently settle down, taking up the weight of the aircraft. The flight crew (and a few passengers who’ve been paying attention) cheer the pilot for a job well done, and the pilot starts breathing again.

Anti-stall systems don’t do much good during a landing, when you’re trying to intentionally stall the wings at just the right time.

Similarly, they don’t do much good when you’re taking off, and the pilot’s just trying to get the wings unstalled to get the aircraft into the air in the first place.

For those times, you want the MCAS turned off! So you’ve gotta be able to do that, too. Or, if your pilot is too absent-minded to shut it off when it’s not needed, you need it to shut off automatically.

When Things Go Wrong

So, what happened in those two airliner crashes?

Remember that the main input into the MCAS is an attack angle sensor? Attack angle sensors, like any other piece of technology, can go bad, especially if they’re exposed to weather. And airliners are exposed to weather 24/7, except when they’re brought into a hangar for repair.

The working hypothesis for what happened to both airliners is that the attack-angle sensors failed. They jammed in a position where they erroneously reported a high angle-of-attack to the MCAS, which jumped to the conclusion “pilot error,” and pushed the nose down. When the pilot(s) tried to pull the nose back up (because their windshield filled up with things that looked a lot like ground instead of sky), the MCAS said: “Nope! You’re going down, Jack!”

By the time the pilots figured out what was wrong and looked up how to shut the MCAS off, they’d actually hit the things that looked too much like ground.

Why didn’t the MCAS figure out there was something wrong with the sensor?

How’s it supposed to know?

The sensor says the nose is pointed up, so the computer takes it at its word. Computers aren’t really very smart, and tend to be quite literal. The sensor says the nose is pointed up, so the computer thinks the nose is pointed up, and tries to point it down (or at least less up). End of story. And, in the real world, it’s “end of aircraft” as well.

If the pilot(s) try to tell the computer to pull the nose up (by desperately pulling back on the yoke), it figures they’re screw-ups, anyway, and won’t listen.

Ever try to argue with a computer? Been there, done that. It doesn’t work.

Mea Culpa

When I learned about the hypothesis of attack-angle-sensor failure causing the crashes that took nearly 350 lives, I got this awful sick feeling that was a mixture of embarrassment and guilt. You see, a decade and a half ago my research project at ASU was an effort to develop a different style of attack-angle sensor. Several events and circumstances combined to make me abandon that research project and, in fact, the whole Ph.D. program it was a part of. In my defense, it was the start of a ten-year period in which I couldn’t get anything right!

But, if I’d stuck it out and developed that sensor it might have been installed on those airliners and might not have failed at all. Of course, it could have been installed and failed in some other spectacular way.

You see, the attack angle sensor that apparently was installed consisted of a little vane attached to one side of the aircraft’s nose. Just like the wind sock traditionally hung outside airports the world over, wind pressure makes the vane line up downstream of the wind direction. A little angle sensor attached to the vane reports the wind direction relative to the nose: the attack angle.

I got involved in trying to develop an alternative attack-angle sensor because I have a horror of relying on sensors that depend on mechanical movement to work. If you’re relying on mechanical movement, it means you’re relying on bearings, and bearings can corrode and wear out and fail. The sensor I was working on relied on differences in air pressure that depended on the direction the wind hit the sensor.

In actual fact, there were two attack-angle sensors attached to the doomed aircraft – one on each side of the nose – but the Boeing MCAS was paying attention to only one of them. That was Boeing’s second mistake (the first being not using the sensor I hadn’t developed, so I guess they can’t be blamed for it). If the MCAS had been paying attention to both sensors, it would have known something in its touchy-feely universe was wrong. It might have been a little more reluctant to override the pilots’ input.
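The cross-check itself is almost embarrassingly simple. Something like this (my own illustrative sketch with an invented threshold, not anyone’s certified avionics):

# Illustrative sanity check using both attack-angle sensors; not certified avionics.
DISAGREEMENT_LIMIT = 5.0   # degrees: an invented threshold for "these can't both be right"

def sensors_agree(left_vane_deg, right_vane_deg):
    """If the two vanes disagree badly, distrust them both."""
    return abs(left_vane_deg - right_vane_deg) <= DISAGREEMENT_LIMIT

def anti_stall_enabled(left_vane_deg, right_vane_deg):
    # Only let the anti-stall system act when its inputs look believable;
    # otherwise, flag the fault and leave the pilots in charge.
    return sensors_agree(left_vane_deg, right_vane_deg)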

The third mistake (I believe) Boeing made was to downplay the differences between the new “Max” version of the aircraft and the older version. They’d changed the engines, which (as any aerospace engineer knows) necessitates changes in everything else. Aircraft are such intricately balanced machines that every time you change one thing, everything else has to change – or at least has to be looked at to see if it needs to be changed.

The new engines had improved performance, which affects just about everything involving the aircraft’s handling characteristics. Boeing had apparently tried to make the more-powerful yet more fuel-efficient aircraft handle like the old aircraft. There were, of course, differences, which the company tried to pretend would make no difference to the pilots. The MCAS was one of those things that was supposed to make the “Max” version handle just like the non-Max version.

So, when something went wrong in “Max” land, it caught the pilots, who had thousands of hours experience with non-Max aircraft, by surprise.

The latest reports are that Boeing, the FAA, and the airlines have realized what the problems are that caused these issues (I hope they understand them a lot better than I do, because, after all, it’s their job to!), and have worked out a number of fixes.

First, the MCAS will pay attention to two attack-angle sensors. At least then the flight-control computer will have an indication that something is wrong and tell the MCAS to go back in its corner and shut up ‘til the issue is sorted out.

Second, they’ll install a little blinking light that effectively tells the pilots “there’s something wrong, so don’t expect any help from the MCAS ‘til it gets sorted out.”

Third, they’ll make sure the pilots have a good, positive way of emphatically shutting the MCAS off if it starts to argue with them in an emergency. And, they’ll make sure the pilots are trained to know when and how to use it.

My understanding is that these fixes are already part of the options that American commercial airlines have generally installed, which is supposedly why the FAA, the airlines and the pilots’ union have been dragging their feet about grounding Boeing’s 737 Max fleet. Let’s hope they’re not just blowing smoke (again)!

Luddites’ Lament

Luddites attack
An owner of a factory defending his workshop against Luddites intent on destroying his mechanized looms between 1811-1816. Everett Historical/Shutterstock

27 March 2019 – A reader of last week’s column, in which I reported recent opinions voiced by a few automation experts at February’s Conference on the Future of Work held at Stanford University, informed me of a chapter from Henry Hazlitt’s 1988 book Economics in One Lesson that Australian computer scientist Steven Shaw uploaded to his blog.

I’m not going to get into the tangled web of potential copyright infringement that Shaw’s posting of Hazlitt’s entire text opens up; I’ve just linked to the most convenient-to-read posting of that particular chapter. If you follow the link and want to buy the book, I’ve given you the appropriate link as well.

The chapter is of immense value apropos the question of whether automation generally reduces the need for human labor, or creates more opportunities for humans to gain useful employment. Specifically, it looks at the results of a number of historic events where Luddites excoriated technology developers for taking away jobs from humans only to have subsequent developments prove them spectacularly wrong.

Hazlitt’s classic book is, not surprisingly for a classic, well documented, authoritative, and extremely readable. I’m not going to pretend to provide an alternative here, but to summarize some of the chapter’s examples in the hope that you’ll be intrigued enough to seek out the original.

Luddism

Before getting on to the examples, let’s start by looking at the history of Luddism. It’s not a new story, really. It probably dates back to just after cave guys first thought of specialization of labor.

That is, sometime in the prehistoric past, some blokes were found to be especially good at doing some things, and the rest of the tribe came up with the idea of letting, say, the best potters make pots for the whole tribe, and everyone else rewarding them for a job well done by, say, giving them choice caribou parts for dinner.

Eventually, they had the best flint knappers make the arrowheads, the best fletchers put the arrowheads on the arrows, the best bowmakers make the bows, and so on. Division of labor into different jobs turned out to be so spectacularly successful that those of us rugged individualists who pretend to do everything for ourselves are now few and far between (and are largely kidding ourselves, anyway).

Since then, anyone who comes up with a great way to do anything more efficiently runs the risk of having the folks who spent years learning to do it the old way land on him (or her) like a ton of bricks.

It’s generally a lot easier to throw rocks to drive the innovator away than to adapt to the innovation.

Luddites in the early nineteenth century were organized bands of workers who violently resisted mechanization of factories during the Industrial Revolution. They were named for an imaginary character, Ned Ludd, supposedly an apprentice who smashed two stocking frames in 1779 and whose name had become emblematic of machine destroyers. The term “Luddite” has come to mean anyone fanatically opposed to deploying advanced technology.

Of course, like religious fundamentalists, they have to pick a point in time to separate “good” technology from the “bad.” Unlike religious fanatics, who generally pick publication of a certain text to be the dividing line, Luddites divide between the technology of their immediate past (with which they are familiar) and anything new or unfamiliar. Thus, it’s a continually moving target.

In either case, the dividing line is fundamentally arbitrary, so the emotion of their response is irrational. And irrational responses almost invariably turn out to be contrary to the facts.

What Happens Next

Hazlitt points out, “The belief that machines cause unemployment, when held with any logical consistency, leads to preposterous conclusions.” He points out that on the second page of the first chapter of Adam Smith’s seminal book Wealth of Nations, Smith tells us that a workman unacquainted with the use of machinery employed in sewing-pin-making “could scarce make one pin a day, and certainly could not make twenty,” but with the use of the machinery he can make 4,800 pins a day. So, zero-sum game theory would indicate an immediate 99.98 percent unemployment rate in the pin-making industry of 1776.
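Just to make that arithmetic explicit (Hazlitt’s logic, my spelled-out numbers):

# Naive zero-sum arithmetic from Smith's pin-factory numbers:
# if one machine-assisted worker makes 4,800 pins a day instead of 1,
# a fixed demand for pins would idle all but 1 in 4,800 workers.
pins_by_hand, pins_with_machinery = 1, 4_800
idled_fraction = 1 - pins_by_hand / pins_with_machinery
print(f"{idled_fraction:.2%} of pin-makers 'should' have lost their jobs")   # about 99.98%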

Did that happen? No, because economics is not a zero-sum game. Sewing pins went from dear to cheap. Since they were now cheap, folks prized them less and discarded them more (when was the last time you bothered to straighten a bent pin?), and more folks could afford to buy them in the first place. That led to an increase in sewing-pin sales as well as sales of things like sewing-patterns and bulk fine fabric sold to amateur sewers, and more employment, not less.

Similar results obtained in the stocking industry when new stocking frames (the original having been invented by William Lee in 1589, but denied a patent by Elizabeth I, who feared its effects on employment in hand-knitting industries) were protested by Luddites as fast as they could be introduced. Before the end of the nineteenth century the stocking industry was employing at least a hundred men for every man it employed at the beginning of the century.

Another example Hazlitt presents from the Industrial Revolution happened in the cotton-spinning industry. He says: “Arkwright invented his cotton-spinning machinery in 1760. At that time it was estimated that there were in England 5,200 spinners using spinning wheels, and 2,700 weavers—in all, 7,900 persons engaged in the production of cotton textiles. The introduction of Arkwright’s invention was opposed on the ground that it threatened the livelihood of the workers, and the opposition had to be put down by force. Yet in 1787—twenty-seven years after the invention appeared—a parliamentary inquiry showed that the number of persons actually engaged in the spinning and weaving of cotton had risen from 7,900 to 320,000, an increase of 4,400 percent.”

As these examples indicate, improvements in manufacturing efficiency generally lead to reductions in manufacturing cost, which, when passed along to customers, reduce prices with concomitant increases in unit sales. This is the price elasticity of demand curve from Microeconomics 101. It is the reason economics is decidedly not a zero-sum game.
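For the numerically inclined, here’s a toy version of that elasticity argument in Python (all numbers invented; it’s the shape of the effect that matters):

# Toy illustration of elastic demand: cheaper pins, many more pins sold,
# so total labor demanded can rise even though labor per pin collapses.
# Prices, quantities, and the elasticity are all invented for illustration.
def units_demanded(price, base_price=1.00, base_units=10_000, elasticity=-2.0):
    """Constant-elasticity demand: quantity scales as (price/base_price)**elasticity."""
    return base_units * (price / base_price) ** elasticity

for price, pins_per_worker_per_day in [(1.00, 20), (0.05, 4_800)]:
    pins = units_demanded(price)
    workers = pins / pins_per_worker_per_day
    print(f"price ${price:.2f}: {pins:,.0f} pins/day demanded, roughly {workers:,.0f} workers needed")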

If we accept economics as not a zero-sum game, predicting what happens when automation makes it possible to produce more stuff with fewer workers becomes a chancy proposition. For example, many economists today blame flat productivity (the amount of stuff produced divided by the number of workers needed to produce it) for lack of wage gains in the face of low unemployment. If that is true, then anything that would help raise productivity (such as automation) should be welcome.

Long experience has taught us that economics is a positive-sum game. In the face of technological advancement, it behooves us to expect positive outcomes while taking measures to ensure that the concomitant economic gains get distributed fairly (whatever that means) throughout society. That is the take-home lesson from the social dislocations that accompanied the technological advancements of the Early Industrial Revolution.

Don’t Panic!

Panic button
Do not push the red button! Peter Hermes Furian/Shutterstock

20 March 2019 – The image at right visualizes something described in Douglas Adams’ Hitchhiker’s Guide to the Galaxy. At one point, the main characters of that six-part “trilogy” found a big red button on the dashboard of a spaceship they were trying to steal that was marked “DO NOT PRESS THIS BUTTON!” Naturally, they pressed the button, and a new label popped up that said “DO NOT PRESS THIS BUTTON AGAIN!”

Eventually, they got the autopilot engaged only to find it was a stunt ship programmed to crash headlong into the nearest Sun as part of the light show for an interstellar rock band. The moral of this story is “Never push buttons marked ‘DO NOT PUSH THIS BUTTON.’”

Per the author: “It is said that despite its many glaring (and occasionally fatal) inaccuracies, the Hitchhiker’s Guide to the Galaxy itself has outsold the Encyclopedia Galactica because it is slightly cheaper, and because it has the words ‘DON’T PANIC’ in large, friendly letters on the cover.”

Despite these references to the Hitchhiker’s Guide to the Galaxy, this posting has nothing to do with that book, the series, or the guide it describes, except that I’ve borrowed the words from the Guide’s cover as a title. I did that because those words perfectly express the take-home lesson of Bill Snyder’s 11 March 2019 article in The Robot Report entitled “Fears of job-stealing robots are misplaced, say experts.”

Expert Opinions

Snyder’s article reports opinions expressed at the Conference on the Future of Work at Stanford University last month. It’s a topic I’ve shot my word processor off about on numerous occasions in this space, so I thought it would be appropriate to report others’ views as well. First, I’ll present material from Snyder’s article, then I’ll wrap up with my take on the subject.

“Robots aren’t coming for your job,” Snyder says, “but it’s easy to make misleading assumptions about the kinds of jobs that are in danger of becoming obsolete.”

“Most jobs are more complex than [many people] realize,” said Hal Varian, Google’s chief economist.

David Autor, professor of economics at the Massachusetts Institute of Technology, points out that education is a big determinant of how developing trends affect workers: “It’s a great time to be young and educated, but there’s no clear land of opportunity for adults who haven’t been to college.”

“When predicting future labor market outcomes, it is important to consider both sides of the supply-and-demand equation,” said Varian, “demographic trends that point to a substantial decrease in the supply of labor are potentially larger in magnitude.”

His research indicates that shrinkage of the labor supply due to demographic trends is 53% greater than shrinkage of demand for labor due to automation. That means, while relatively fewer jobs are available, there are a lot fewer workers available to do them. The result is the prospect of a continued labor shortage.

At the same time, Snyder reports that “[The] most popular discussion around technology focuses on factors that decrease demand for labor by replacing workers with machines.”

In other words, fears that robots will displace humans for existing jobs miss the point. Robots, instead, are taking over jobs for which there aren’t enough humans to do them.

Another effect is the fact that what people think of as “jobs” are actually made up of many “tasks,” and it’s tasks that get automated, not entire jobs. Some tasks are amenable to automation while others aren’t.

“Consider the job of a gardener,” Snyder suggests as an example. “Gardeners have to mow and water a lawn, prune rose bushes, rake leaves, eradicate pests, and perform a variety of other chores.”

Some of these tasks, like mowing and watering, can easily be automated. Pruning rose bushes, not so much!

Snyder points to news reports of a hotel in Nagasaki, Japan being forced to “fire” robot receptionists and room attendants that proved to be incompetent.

There’s a scene in the 1997 film The Fifth Element where a supporting character tries to converse with a robot bartender about another character. He says: “She’s so vulnerable – so human. Do you know what I mean?” The robot shakes its head, “No.”

Sometimes people, even misanthropes, would prefer to interact with another human than with a drink-dispensing machine.

“Jobs,” Varian points out, “unlike repetitive tasks, tend not to disappear. In 1950, the U.S. Census Bureau listed 250 separate jobs. Since then, the only one to be completely eliminated is that of elevator operator.”

“Excessive automation at Tesla was a mistake,” founder Elon Musk mea culpa-ed last year. “Humans are underrated.”

Another trend Snyder points out is that automation-ready jobs, such as assembly-line factory workers, have already largely disappeared from America. “The 10 most common occupations in the U.S.,” he says, “include such jobs as retail salespersons, nurses, waiters, and other service-focused work. Notably, traditional occupations, such as factory and other blue-collar work, no longer even make the list.”

Again, robots are mainly taking over tasks that humans are not available to do.

The final trend that Snyder presents is the stark fact that birthrates in developed nations are declining – in some cases precipitously. “The aging of the baby boom generation creates demand for service jobs,” Varian points out, “but leaves fewer workers actively contributing labor to the economy.”

Those “service jobs” are just the ones that require a human touch, so they’re much harder to automate successfully.

My Inexpert Opinion

I’ve been trying, not entirely successfully, to figure out what role robots will actually have vis-a-vis humans in the future. I think there will be a few macroscopic trends. And, the macroscopic trends should be the easiest to spot ‘cause they’re, well, macroscopic. That means bigger. So, they’re easier to see. See?

As early as 2010, I worked out one important difference between robots and humans that I expounded in my novel Vengeance is Mine! Specifically, humans have a wider view of the Universe and have more of an emotional stake in it.

“For example,” I had one of my main characters pontificate at a cocktail party, “that tall blonde over there is an archaeologist. She uses ROVs – remotely operated vehicles – to map underwater shipwreck sites. So, she cares about what she sees and finds. We program the ROVs with sophisticated navigational software that allows her to concentrate on what she’s looking at, rather than the details of piloting the vehicle, but she’s in constant communication with it because she cares what it does. It doesn’t.”

More recently, I got a clearer image of this relationship and it’s so obvious that we tend to overlook it. I certainly missed it for decades.

It hit me like a brick when I saw a video of an autonomous robot marine-trash collector. This device is a small autonomous surface vessel with a big “mouth” that glides around seeking out and gobbling up discarded water bottles, plastic bags, bits of styrofoam, and other unwanted jetsam clogging up waterways.

The first question that popped into my mind was “who’s going to own the thing?” I mean, somebody has to want it, then buy it, then put it to work. I’m sure it could be made to automatically regurgitate the junk it collects into trash bags that it drops off at some collection point, but some human or humans have to make sure the trash bags get collected and disposed of. Somebody has to ensure that the robot has a charging system to keep its batteries recharged. Somebody has to fix it when parts wear out, and somebody has to take responsibility if it becomes a navigation hazard. Should that happen, the Coast Guard is going to want to scoop it up and hand its bedraggled carcass to some human owner along with a citation.

So, on a very important level, the biggest thing robots need from humans is ownership. Humans own robots, not the other way around. Without a human owner, an orphan robot is a pile of junk left by the side of the road!