Luddites’ Lament

Luddites attack
A factory owner defending his workshop against Luddites intent on destroying his mechanized looms, sometime between 1811 and 1816. Everett Historical/Shutterstock

27 March 2019 – A reader of last week’s column, in which I reported recent opinions voiced by a few automation experts at February’s Conference on the Future of Work held at Stanford University, informed me of a chapter from Henry Hazlitt’s 1988 book Economics in One Lesson that Australian computer scientist Steven Shaw uploaded to his blog.

I’m not going to get into the tangled web of potential copyright infringement that Shaw’s posting of Hazlitt’s entire text opens up; I’ve just linked to the most convenient-to-read posting of that particular chapter. If you follow the link and want to buy the book, I’ve given you the appropriate link as well.

The chapter is of immense value apropos the question of whether automation generally reduces the need for human labor, or creates more opportunities for humans to gain useful employment. Specifically, it looks at the results of a number of historic events where Luddites excoriated technology developers for taking away jobs from humans only to have subsequent developments prove them spectacularly wrong.

Hazlitt’s classic book is, not surprisingly for a classic, well documented, authoritative, and extremely readable. I’m not going to pretend to provide an alternative here; instead, I’ll summarize some of the chapter’s examples in the hope that you’ll be intrigued enough to seek out the original.

Luddism

Before getting on to the examples, let’s start by looking at the history of Luddism. It’s not a new story, really. It probably dates back to just after cave guys first thought of specialization of labor.

That is, sometime in the prehistoric past, some blokes were found to be especially good at doing some things, and the rest of the tribe came up with the idea of letting, say, the best potters make pots for the whole tribe, and everyone else rewarding them for a job well done by, say, giving them choice caribou parts for dinner.

Eventually, they had the best flint knappers make the arrowheads, the best fletchers put the arrowheads on the arrows, the best bowmakers make the bows, and so on. Division of labor into different jobs turned out to be so spectacularly successful that rugged individualists, who pretend to do everything for themselves, are now few and far between (and are largely kidding themselves, anyway).

Since then, anyone who comes up with a great way to do anything more efficiently runs the risk of having the folks who spent years learning to do it the old way land on him (or her) like a ton of bricks.

It’s generally a lot easier to throw rocks to drive the innovator away than to adapt to the innovation.

Luddites in the early nineteenth century were organized bands of workers who violently resisted mechanization of factories during the late Industrial Revolution. They were named for an imaginary character, Ned Ludd, supposedly an apprentice who smashed two stocking frames in 1779 and whose name became emblematic of machine destroyers. The term “Luddite” has come to mean anyone fanatically opposed to deploying advanced technology.

Of course, like religious fundamentalists, they have to pick a point in time to separate “good” technology from the “bad.” Unlike religious fanatics, who generally pick publication of a certain text to be the dividing line, Luddites divide between the technology of their immediate past (with which they are familiar) and anything new or unfamiliar. Thus, it’s a continually moving target.

In either case, the dividing line is fundamentally arbitrary, so the emotional response it provokes is irrational. And irrational responses typically turn out to be entirely contrary to the facts.

What Happens Next

Hazlitt points out, “The belief that machines cause unemployment, when held with any logical consistency, leads to preposterous conclusions.” He points out that on the second page of the first chapter of Adam Smith’s seminal book Wealth of Nations, Smith tells us that a workman unacquainted with the use of machinery employed in sewing-pin-making “could scarce make one pin a day, and certainly could not make twenty,” but with the use of the machinery he can make 4,800 pins a day. So, zero-sum game theory would indicate an immediate 99.98 percent unemployment rate in the pin-making industry of 1776.
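
To see where that 99.98 percent figure comes from, here’s the arithmetic spelled out as a back-of-the-envelope sketch. The 4,800-pins-per-day figure is Smith’s; the rest simply assumes, as the zero-sum view does, that total pin demand stays fixed:

    # Back-of-the-envelope check of the zero-sum reasoning: if total pin
    # demand stays fixed, one machine-assisted worker replaces 4,800 hand
    # workers (at one pin per worker per day).
    pins_per_hand_worker = 1          # Smith: "could scarce make one pin a day"
    pins_per_machine_worker = 4_800   # Smith: output with machinery

    workers_needed_fraction = pins_per_hand_worker / pins_per_machine_worker
    displaced_fraction = 1 - workers_needed_fraction
    print(f"{displaced_fraction:.2%}")   # -> 99.98%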

Did that happen? No, because economics is not a zero-sum game. Sewing pins went from dear to cheap. Since they were now cheap, folks prized them less and discarded them more (when was the last time you bothered to straighten a bent pin?), and more folks could afford to buy them in the first place. That led to an increase in sewing-pin sales as well as sales of things like sewing-patterns and bulk fine fabric sold to amateur sewers, and more employment, not less.

Similar results obtained in the stocking industry when new stocking frames (the original having been invented by William Lee in 1589, but denied a patent by Elizabeth I, who feared its effects on employment in hand-knitting industries) were protested by Luddites as fast as they could be introduced. Before the end of the nineteenth century the stocking industry was employing at least a hundred men for every man it employed at the beginning of the century.

Another example Hazlitt presents from the Industrial Revolution happened in the cotton-spinning industry. He says: “Arkwright invented his cotton-spinning machinery in 1760. At that time it was estimated that there were in England 5,200 spinners using spinning wheels, and 2,700 weavers—in all, 7,900 persons engaged in the production of cotton textiles. The introduction of Arkwright’s invention was opposed on the ground that it threatened the livelihood of the workers, and the opposition had to be put down by force. Yet in 1787—twenty-seven years after the invention appeared—a parliamentary inquiry showed that the number of persons actually engaged in the spinning and weaving of cotton had risen from 7,900 to 320,000, an increase of 4,400 percent.”

As these examples indicate, improvements in manufacturing efficiency generally lead to reductions in manufacturing cost, which, when passed along to customers, reduce prices, with concomitant increases in unit sales. This is price elasticity of demand, straight out of Microeconomics 101. It is the reason economics is decidedly not a zero-sum game.
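
For readers who want the Microeconomics 101 mechanics behind that statement, here’s a minimal sketch. The numbers are hypothetical, chosen only to illustrate elastic demand, where a price cut raises unit sales enough to increase total spending (and thus the labor needed to meet it):

    # Hypothetical demand figures illustrating elastic demand: when the
    # magnitude of elasticity exceeds 1, cutting the price raises unit
    # sales enough to increase total revenue.
    old_price, old_quantity = 10.0, 1_000    # before the efficiency gain
    new_price, new_quantity = 8.0, 1_400     # after cost savings are passed along

    pct_change_q = (new_quantity - old_quantity) / old_quantity   # +40%
    pct_change_p = (new_price - old_price) / old_price            # -20%
    elasticity = pct_change_q / pct_change_p                      # -2.0 (elastic)

    print(f"elasticity = {elasticity:.1f}")
    print(f"revenue: {old_price * old_quantity:,.0f} -> {new_price * new_quantity:,.0f}")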

If we accept economics as not a zero-sum game, predicting what happens when automation makes it possible to produce more stuff with fewer workers becomes a chancy proposition. For example, many economists today blame flat productivity (the amount of stuff produced divided by the number of workers needed to produce it) for lack of wage gains in the face of low unemployment. If that is true, then anything that would help raise productivity (such as automation) should be welcome.

Long experience has taught us that economics is a positive-sum game. In the face of technological advancement, it behooves us to expect positive outcomes while taking measures to ensure that the concomitant economic gains get distributed fairly (whatever that means) throughout society. That is the take-home lesson from the social dislocations that accompanied the technological advancements of the Early Industrial Revolution.

How to Train Your Corporate Rebel

Rebel Talent Cover
Rebel Talent by Francesca Gino makes the case for encouraging individualism in the workplace

13 March 2019 – Francesca Gino, author of Rebel Talent: Why It Pays to Break the Rules at Work and In Life, is my kind of girl. She’s smart, thinks for herself, isn’t afraid to go out on a limb, and encourages others to do the same.

That said, I want to inject a note of caution for anyone considering her advice about being a rebel. There’s an old saying: “The nail that sticks up the most is the first to get hammered down.” It’s true in carpentry and in life. Being a rebel is lonely and dangerous, and it’s no guarantee of success, financial or otherwise.

I speak from experience, having broken every rule available for as long as I can remember. When I was a child in the 1950s, I wanted to grow up to be a beatnik. I’ve always felt most comfortable amongst bohemians. My wife once complained (while we were sitting in a muscle car stopped by the highway waiting for the cop to give me a speeding ticket) about my “always living on the edge.” And, yes, I’ve been thrown out of more than one bar.

On the other hand, I’ve lived a long and eventful life. Most of the items on my bucket list were checked off long ago.

So, when I ran across an ad in The Wall Street Journal for Gino’s book, I had to snag a copy and read it.

As I expected, the book’s theme is best summed up by a line from the blurb on its dust jacket: “ … the most successful among us break the rules.”

The book description goes on to say, “Rebels have a bad reputation. We think of them as trouble-makers, outcasts, contrarians: those colleagues, friends, and family members who complicate seemingly straight-forward decisions, create chaos, and disagree when everyone else is in agreement. But in truth, rebels are also those among us who change the world for the better with their unconventional outlooks. Instead of clinging to what is safe and familiar, and falling back on routines and tradition, rebels defy the status quo. They are masters of innovation and reinvention, and they have a lot to teach us.”

Considering the third paragraph above, I hope she’s right!

The 283-page (including notes and index) volume summarizes Gino’s decade-long study of rebels at organizations around the world, from high-end boutiques in Italy’s fashion capital (Milan), to the world’s best restaurant (Three-Michelin-star-rated Osteria Francescana), to a thriving fast-food chain (Pal’s), and an award-winning computer animation studio (Pixar).

Francesca Gino is a behavioral scientist and professor at Harvard Business School. She is the Tandon Family Professor of Business Administration in the school’s Negotiation, Organizations & Markets Unit. No slouch professionally, she has been honored as one of the world’s top 40 business professors under 40 by Poets & Quants and one of the world’s 50 most influential management thinkers by Thinkers50.

Enough with the “In Praise Of” stuff, though. Let’s look inside the book. It’s divided into eight chapters, starting with “Napoleon and the Hoodie: The Paradox of Rebel Status,” and ending with “Blackbeard, ‘Flatness,’ and the 8 Principles of Rebel Leadership.” Gino then adds a “Conclusion” telling the story of Risotto Cacio e Pepe (a rice-in-Parmigiano-Reggiano dish invented by Chef Massimo Bottura), and an “Epilogue: Rebel Action” giving advice on releasing your inner rebel.

Stylistically, the narrative uses the classic “Harvard Case Study” (HCS) approach. That is, it’s basically a pile of stories, each of which makes a point about how rebel leaders Gino has known approach their work. In summary, the take-home lesson is that those leaders encourage their employees to unleash their “inner rebel,” thereby unlocking creativity, enthusiasm, and productivity that more traditional management styles suppress.

The downside of this style is that it sometimes is difficult for the reader to get their brain around the points that Gino is making. Luckily, her narrative style is interesting, easy to follow, and compelling. Like all well-written prose, her narrative keeps the reader wondering “What happens next?” The episodes she presents are invariably unusual and interesting in themselves. She regularly brings in her own exploits and keeps, as much as possible, to first-person active voice.

That is unusual for academic writers, who find it all too easy to slip into a pedantic third-person, passive-voice best reserved for works intended as sleep aids.

To give you a feel for what reading an HCS-style volume is like, I’ll describe what it’s like to study Quantum Dynamics. While the differences outnumber the similarities, the overall “feel” is similar.

The first impression students get of QD is that the subject is entirely anti-intuitive. That is, before you can learn anything about QD, you have to discard any lingering intuition about how the Universe works. That’s probably easier for someone who never learned Classical Physics in the first place. Ideas like “you can’t be in two places at the same time” simply do not apply in the quantum world.

Basically, to learn QD, you have to start with a generous dose of “willing suspension of disbelief.” You do that by studying stories about experiments performed in the late nineteenth century that simply didn’t work. At that time, the best minds in Physics spent careers banging their heads into walls as Mommy Nature refused to return results that Classical Physics imagined she had to. Things like the Michelson-Morley experiment (and many other then-state-of-the-art experiments) gave results at odds with Classical Physics. There were enough of these screwy results that physicists began to doubt that what they believed to be true was actually how the Universe worked. After listening to enough of these stories, you begin to doubt your own intuition.

Then, you learn to trust the mathematics that will be your only guide in QD Wonderland.

Finally, you spend a couple of years learning about a new set of ideas based on Through the Looking Glass concepts that stand normal intuition on its head. Piling up stories about all these counter-intuitive ideas helps you build up a new intuition about what happens in the quantum world. About that time, you start feeling confident that this new intuition helps you predict what will happen next.

The HCS style of learning does something similar, although usually not as extreme. Reading story after story about what hasn’t and what has worked for others in the business world, you begin to develop an intuition for applying the new ideas. You gain confidence that, in any given situation, you can predict what happens next.

What happens next is that when you apply the methods Gino advocates, you start building a more diverse corporate culture that attracts and retains the kinds of folks that make your company a leader in its field.

There’s an old one-line joke:

“I want to be different – like everybody else.”

We can’t all be different because then there wouldn’t be any sameness to be different from, but we can all be rebels. We can all follow the

  1. READY!
  2. AIM!
  3. FIRE!

mantra advocated by firearms instructors everywhere.

In other words:

  1. Observe what’s going on out there in the world, then
  2. Think about what you might do that breaks the established rules, and, finally,
  3. Act in a way that makes the Universe a better place in which to live.

Luddites RULE!

Momma said there’d be days like this! (Apologies to songwriters Luther Dixon and Willie Denson, and, of course, the Geico Caveman.) Linda Bucklin/Shutterstock

7 February 2019 – This is not the essay I’d planned to write for this week’s blog. I’d planned a long-winded, abstruse dissertation on the use of principal component analysis to glean information from historical data in chaotic systems. I actually got most of that one drafted on Monday, and planned to finish it up Tuesday.

Then, bright and early on Tuesday morning, before I got anywhere near the incomplete manuscript, I ran headlong into an email issue.

Generally, I start my morning by scanning email to winnow out the few valuable bits buried in the steaming pile of worthless refuse that has accumulated in my Inbox since the last time I visited it. Then, I visit a couple of social media sites in an effort to keep my name in front of the Internet-entertained public. After a couple of hours of this colossal waste of time, I settle in to work on whatever actual work I have to do for the day.

So, finding that my email client software refused to communicate with me threatened to derail my whole day. The fact that I use email for all my business communications made it especially urgent that I determine what was wrong, and then fix it.

It took the entire morning and on into the early afternoon to realize that there was no way I was going to get to that email account on my computer, and that nobody in the outside world (not my ISP, not the cable company that went that extra mile to bring Internet signals from that telephone pole out there to the router at the center of my local area network, nor anyone else available with more technosavvy than I have) was going to be able to help. I was finally forced to invent a workaround involving a legacy computer that I’d neglected to throw in the trash, just to get on with my technology-bound life.

At that point the Law of Deadlines forced me to abandon all hope of getting this week’s blog posting out on time, and move on to completing final edits and distribution of that press release for the local art gallery.

That wasn’t the last time modern technology let me down. In discussing a recent Physics Lab SNAFU, Danielle, the laboratory coordinator I work with at the University, said: “It’s wonderful when it works, but horrible when it doesn’t.”

Where have I heard that before?

The SNAFU Danielle was lamenting happened last week.

I teach two sections of General Physics Laboratory at Florida Gulf Coast University, one on Wednesdays and one on Fridays. The lab for last week had students dropping a ball, then measuring its acceleration using a computer-controlled ultrasonic detection system as it (the ball, not the computer) bounces on the table.

For the Wednesday class everything worked perfectly. Half a dozen teams each had their own setups, and all got good data, beautiful-looking plots, and automated measurements of position and velocity. The computers then automatically derived accelerations from the velocity data. Only one team had trouble with their computer, but they got good data by switching to an unused setup nearby.
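
For the curious, here’s roughly what that automated derivation amounts to: a minimal sketch, assuming the lab software simply takes finite differences of the sampled velocity data (the actual package may smooth or curve-fit instead):

    import numpy as np

    # Hypothetical samples from an ultrasonic motion detector: times in
    # seconds, velocities in m/s for a ball in free fall.
    t = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
    v = np.array([0.00, -0.49, -0.98, -1.47, -1.96])

    # Acceleration as the finite-difference slope of velocity vs. time.
    a = np.gradient(v, t)
    print(a)   # roughly -9.8 m/s^2 at every sample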

That was Wednesday.

Come Friday the situation was totally different. Out of four teams, only two managed to get data that looked even remotely like it should. Then, one of those two teams couldn’t get their computer to spit out accelerations that made any sense at all. Eventually, after class time ran out, the one group that managed to get good results agreed to share their information with the rest of the class.

The high point of the day was managing to distribute that data to everyone via the school’s cloud-based messaging service.

Concerned about another fiasco, after this week’s lab Danielle asked me how it worked out. I replied that, since the equipment we use for this week’s lab is all manually operated, there were no problems whatsoever. “Humans are much more capable than computers,” I said. “They’re able to cope with disruptions that computers have no hope of dealing with.”

The latest example of technology Hell appeared in a story in this morning’s (2/7/2019) Wall Street Journal. Some $136 million of customers’ cryptocurrency holdings became stuck in an electronic vault when the founder (and sole employee) of cryptocurrency exchange QuadrigaCX, Gerald Cotten, died of complications related to Crohn’s disease while building an orphanage in India. The problem is that Cotten was so secretive about passwords and security that nobody, not even his wife, Jennifer Robertson, can get into the reserve account maintained on his laptop.

“Quadriga,” according to the WSJ account, “would need control of that account to send those funds to customers.”

No lie! The WSJ attests this bizarre tale is the God’s own truth!

Now, I’ve no sympathy for cryptocurrency mavens, whom I consider to be, at best, technoweenies gleefully leading a parade down the primrose path to technology Hell, but this story illustrates what that Hell looks like!

It’s exactly what the Luddites of the early 19th Century warned us about. It’s a place of nameless frustration and unaccountable loss that we’ve brought on ourselves.

Farsighted Decisions

"Farsighted" book cover
Farsighted: How We Make the Decisions That Matter the Most by Steven Johnson

30 January 2019 – This is not a textbook on decision making.

Farsighted: How We Make the Decisions That Matter the Most does cover most of the elements of state-of-the-art decision making, but it’s not a true textbook. If he’d really wanted to write a textbook, its author, Steven Johnson, would have structured it differently, and would have included exercises for the student. Perhaps he would also have done other things differently that I’m not going to enumerate because I don’t want to write a textbook on state-of-the-art decision making, either.

What Johnson apparently wanted to do, and did do successfully, was lay down a set of principles today’s decision makers would do well to follow.

Something he would have left out, if he were writing a textbook, was the impassioned plea for educators to incorporate mandatory decision making courses into secondary-school curricula. I can’t disagree with this sentiment!

A little bit about my background with regard to decision-theory education: ‘Way back in the early 2010s, I taught a course at a technical college entitled “Problem Solving Theory.” Johnson’s book did not exist then, and I wish that it had. The educational materials available at the time fell woefully short. They were, at best, pedantic.

I spent a lot of class time waving my hands and telling stories from my days as a project manager. Unfortunately, the decision-making techniques I learned about in MBA school weren’t of any help at all. Some of the research Johnson weaves into his narrative hadn’t even been done back then!

So, when I heard about Johnson’s new book, I rushed out to snag a copy and devoured it.

As Johnson points out, everybody is a decision maker every day. These decisions run the gamut from snap decisions that people have to make almost instantly, to long-term deliberate choices that reverberate through the rest of their lives. Many, if not most, people face making decisions affecting others, from children to spouses, siblings and friends. Some of us participate in group decision making that can have truly global ramifications.

In John McTiernan’s 1990 film The Hunt for Red October, Admiral Josh Painter points out to CIA analyst Jack Ryan: “Russians don’t take a dump, son, without a plan. Senior captains don’t start something this dangerous without having thought the matter through.”

It’s not just Russians, however, who plan out even minor actions. And, senior captains aren’t the only ones who don’t start things without having thought the matter through. We all do it.

As Johnson points out, it may be the defining characteristic of the human species, which he likes to call Homo prospectus for our ability to apply foresight to advance planning.

The problem, of course, is the alarming rate at which we screw it up. As John F. Kennedy’s failure in the Bay of Pigs invasion shows, even highly intelligent, highly educated and experienced leaders can get it disastrously wrong. Johnson devotes considerable space to enumerating the taxonomy of “things that can go wrong.”

So, decision making isn’t just for leaders, and it’s easier to get it wrong than to do it right.

Enumerating the ways it can all go disastrously wrong, and setting out principles that will help us get it right are the basic objectives Johnson set out for himself when he first decided to write this book. To wit, three goals:

  • Convince readers that it’s important;

  • Warn folks of how easily it can be done wrong; and

  • Give folks a prescription for doing it right.

Pursuant to the third goal, Johnson breaks decision making down into a process involving three steps:

Mapping consists of gathering preliminary information about the state of the Universe before any action has been taken. What do we have to work with? What options do we have to select from? What do we want to accomplish and when?

Predicting consists of prognosticating, for each of the various options available, how the Universe will evolve from now into the foreseeable (and possibly unforeseeable) future. This is probably the most fraught stage of the process. Do we need a Plan B in case of surprises? As Sean Connery’s “Mac” character intones in Jon Amiel’s 1999 crime drama, Entrapment: “Trust me, there always are surprises.”

Deciding is the ultimate finish of the process. It consists of finally choosing between the previously identified alternatives based on the predicted results. What alternative is most likely to give us a result we want to have?
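
Just to make the shape of that three-step process concrete, here’s a minimal sketch in code. The options, scenarios, and payoffs are all hypothetical, invented for illustration, not anything Johnson prescribes:

    # A toy rendering of the map -> predict -> decide sequence.
    # Everything here (options, scenarios, payoffs) is made up.

    def map_options():
        # Mapping: what do we have to work with, and what could we do?
        return ["stand pat", "expand the factory", "outsource production"]

    def predict(option):
        # Predicting: for each option, imagine how the future might unfold,
        # here as a list of (probability, payoff) scenarios.
        scenarios = {
            "stand pat":            [(1.0, 0)],
            "expand the factory":   [(0.6, 50), (0.4, -30)],
            "outsource production": [(0.7, 40), (0.3, -10)],
        }
        return scenarios[option]

    def decide(options):
        # Deciding: choose the alternative whose predicted results look best.
        def expected_payoff(option):
            return sum(p * payoff for p, payoff in predict(option))
        return max(options, key=expected_payoff)

    print(decide(map_options()))   # -> outsource production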

An important technique Johnson recommends basing your decision-making strategy on is narrative. That explicitly means storytelling. Johnson supplies numerous examples from both fiction and non-fiction that help us understand the decision-making process and help us apply it to the problems we face.

He points out that double-blind clinical trials were the single most important technique that advanced medicine from quackery and the witch-doctor’s art to reliable medical science. They allowed trying out various versions of medical interventions in a systematic way and comparing the results. In the same way, he says, fictional storytelling allows us to mentally “try out” multiple alternative versions of future history.

Through storytelling, we explore various possibilities and imagine how they might turn out, including the vicissitudes of Shakespeare’s “slings and arrows of outrageous fortune,” without putting in the time and effort to try them out in reality, and thereby likely suffering “the fuss of mass destruction and death.”

Johnson suggests that’s why humans evolved the desire and capacity to create such fictional narratives in the first place. “When we read these novels,” he says, “ … we are not just entertaining ourselves; we are also rehearsing for our own real-world experiences.”

Of course, while “deciding” is the ultimate act of Johnson’s process, it’s never the end of the story in real life. What to do when it all goes disastrously wrong is always an important consideration. Johnson actually covers that as an important part of the “predicting” step. That’s when you should develop Mac’s “Plan B pack” and figure out when to trigger it if necessary.

Another important consideration, which I covered extensively in my problem-solving course and which Johnson starts looking at ‘way back in “mapping,” is how to live with the aftermath of your decision, whether it’s a resounding success or a disastrous failure. Either way, the Universe is changed forever by your decision, and you and everyone else will have to live in it.

So, your ultimate goal should be deciding how to make the Universe a better place in which to live!

Robots Revisited

Engineer with SCARA robots
Engineer using monitoring system software to check and control SCARA welding robots in a digital manufacturing operation. PopTika/Shutterstock

12 December 2018 – I was wondering what to talk about in this week’s blog posting, when an article bearing an interesting-sounding headline crossed my desk. The article, written by Simone Stolzoff of Quartz Media, was published last Monday (12/3/2018) by the World Economic Forum (WEF) under the title “Here are the countries most likely to replace you with a robot.”

I generally look askance at organizations with grandiose names that include the word “World,” figuring that they likely are long on megalomania and short on substance. Further, this one lists the inimitable (thank God there’s only one!) Al Gore on its Board of Trustees.

On the other hand, David Rubenstein is also on the WEF board. Rubenstein usually seems to have his head screwed on straight, so that’s a positive sign for the organization. Therefore, I figured the article might be worth reading and should be judged on its own merits.

The main content is summarized in two bar graphs. The first (Figure 1) lists the ratio of robots to thousands of manufacturing workers in various countries. The highest scores go to South Korea and Singapore. In fact, three of the top four are Far Eastern countries. The United States comes in around number seven.

The second (Figure 2) applies a correction to the graphed data to reorder the list by taking into account the countries’ relative wealth. There, the United States comes in dead last among the sixteen countries listed. East Asian countries account for all of the top five.
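
Here’s a rough sketch of the kind of correction the second graph applies. The figures below are placeholders, not the article’s actual data, and the adjustment shown (scaling robot density by relative wages) is just one plausible way to do it:

    # Placeholder figures only -- not the WEF/Quartz data.
    # robots: installed robots per 1,000 manufacturing workers
    # wage_index: manufacturing wages relative to a global benchmark (1.0)
    countries = {
        "Country A": {"robots": 60, "wage_index": 0.4},
        "Country B": {"robots": 20, "wage_index": 1.0},
        "Country C": {"robots": 15, "wage_index": 1.2},
    }

    # One plausible correction: cheap labor weakens the economic case for
    # robots, so divide robot density by the wage index before re-ranking.
    ranked = sorted(countries.items(),
                    key=lambda kv: kv[1]["robots"] / kv[1]["wage_index"],
                    reverse=True)
    for name, d in ranked:
        print(name, round(d["robots"] / d["wage_index"], 1))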

The take-home lesson from the article is conveniently stated in its final paragraph:

The upshot of all of this is relatively straightforward. When taking wages into account, Asian countries far outpace their western counterparts. If robots are the future of manufacturing, American and European countries have some catching up to do to stay competitive.

This article, of course, got me started thinking about automation and how manufacturers choose to adopt it. It’s a subject that was a major theme throughout my tenure as Chief Editor of Test & Measurement World and constituted the bulk of my work at Control Engineering.

The graphs certainly support the conclusions expressed in the cited paragraph’s first two sentences. The third sentence, however, is problematical.

That ultimate conclusion is based on accepting that “robots are the future of manufacturing.” Absolute assertions like that are always dangerous. Seldom is anything so all-or-nothing.

Predicting the future is epistemological suicide. Whenever I hear such bald-faced statements I recall Jim Morrison’s prescient statement: “The future’s uncertain and the end is always near.”

The line was prescient because a little over a year after the song’s release, Morrison was dead at age twenty-seven, thereby fulfilling the slogan expressed by John Derek’s “Nick Romano” character in Nicholas Ray’s 1949 film Knock on Any Door: “Live fast, die young, and leave a good-looking corpse.”

Anyway, predictions like “robots are the future of manufacturing” are generally suspect because, in the chaotic Universe in which we live, the future is inherently unpredictable.

If you want to say something practically guaranteed to be wrong, predict the future!

I’d like to offer an alternate explanation for the data presented in the WEF graphs. It’s based on my belief that American Culture usually gets things right in the long run.

Yes, that’s the long run in which economist John Maynard Keynes pointed out that we’re all dead.

My belief in the ultimate vindication of American trends is based, not on national pride or jingoism, but on historical precedents. Countries that have bucked American trends often start out strong, but ultimately fade.

An obvious example is trendy Japanese management techniques based on Druckerian principles that were so much in vogue during the last half of the twentieth century. Folks imagined such techniques were going to drive the Japanese economy to pre-eminence in the world. Management consultants touted such principles as the future for corporate governance without noticing that while they were great for middle management, they were useless for strategic planning.

Japanese manufacturers beat the crap out of U.S. industry for a while, but eventually their economy fell into a prolonged recession characterized by economic stagnation and disinflation so severe that even negative interest rates couldn’t restart it.

Similar examples abound, which is why our little country with its relatively minuscule population (4.3% of the world’s) has by far the biggest GDP in the world. China, with more than four times the population, grosses less than a third of what we do.

So, if robotic adoption is the future of manufacturing, why are we so far behind? Assuming we actually do know what we’re doing, as past performance would suggest, the answer must be that the others are getting it wrong. Their faith in robotics as a driver of manufacturing productivity may be misplaced.

How could that be? What could be wrong with relying on technological advancement as the driver of productivity?

Manufacturing productivity is calculated on the basis of stuff produced (as measured by its total value in dollars) divided by the number of worker-hours needed to produce it. That should tell you something about what it takes to produce stuff. It’s all about human worker involvement.

Folks who think robots automatically increase productivity are fixating on the denominator in the productivity calculation. Making even the same amount of stuff while reducing the worker-hours needed to produce it should drive productivity up fast. That’s basic arithmetic. Yet, while manufacturing has been rapidly introducing all kinds of automation over the last few decades, productivity has stagnated.
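
To spell out the arithmetic behind that fixation, here’s a minimal sketch with hypothetical plant figures, showing how the ratio behaves when only the denominator changes:

    # Productivity = value of output (dollars) / worker-hours of labor.
    # Hypothetical plant figures, before and after adding automation.
    output_value = 1_000_000         # dollars of stuff produced per month

    worker_hours_before = 10_000
    worker_hours_after = 8_000       # 20% fewer worker-hours after automation

    productivity_before = output_value / worker_hours_before   # $100 per hour
    productivity_after = output_value / worker_hours_after     # $125 per hour

    # On paper, shrinking the denominator raises productivity 25 percent --
    # yet the aggregate statistics have stayed flat, which is the puzzle.
    print(productivity_before, productivity_after)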

We need to look for a different explanation.

It just might be that robotic adoption is another example of too much of a good thing. It might be that reliance on technology could prove to be less effective than something about the people making up the work force.

I’m suggesting that because I’ve been led to believe that work forces in the Far Eastern developing economies are less skillful, may have lower expectations, and are more tolerant of authoritarian governments.

Why would those traits make a difference? I’ll take them one at a time to suggest how they might.

The impression that Far Eastern populations are less skillful is not easy to demonstrate. Nobody who’s dealt with people of Asian extraction in either an educational or work-force setting would ever imagine they are at all deficient in either intelligence or motivation. On the other hand, as emerging or developing economies those countries are likely more dependent on workers newly recruited from rural, agrarian settings, who are likely less acclimated to manufacturing and industrial environments. On this basis, one may posit that the available workers may prove less skillful in a manufacturing setting.

It’s a weak argument, but it exists.

The idea that people making up Far-Eastern work forces have lower expectations than those in more developed economies is on firmer footing. Workers in Canada, the U.S. and Europe have very high expectations for how they should be treated. Wages are higher. Benefits are more generous. Upward mobility perceptions are ingrained in the cultures.

For developing economies, not so much.

Then, we come to tolerance of authoritarian regimes. Tolerance of authoritarianism goes hand-in-hand with tolerance for the usual authoritarian vices of graft, lack of personal freedom and social immobility. Only those believing populist political propaganda think differently (which is the danger of populism).

What’s all this got to do with manufacturing productivity?

Lack of skill, low expectations and patience under authority are not conducive to high productivity. People are productive when they work hard. People work hard when they are incentivized. They are incentivized to work when they believe that working harder will make their lives better. It’s not hard to grasp!

Installing robots in a plant won’t by itself lead human workers to believe that working harder will make their lives better. If anything, it’ll do the opposite. They’ll start worrying that their lives are about to take a turn for the worse.

Maybe that has something to do with why increased automation has failed to increase productivity.

Teaching News Consumption and Critical Thinking

Teaching media literacy
Teaching global media literacy to children should be started when they’re young. David Pereiras/Shutterstock

21 November 2018 – Regular readers of this blog know one of my favorite themes is critical thinking about news. Another of my favorite subjects is education. So, they won’t be surprised when I go on a rant about promoting teaching of critical news consumption habits to youngsters.

Apropos of this subject, last week the BBC launched a project entitled “Beyond Fake News,” which aims to “fight back” against fake news with a season of documentaries, special reports and features on the BBC’s international TV, radio and online networks.

In an article by Lucy Mapstone, Press Association Deputy Entertainment Editor for the Independent.ie digital network, entitled “BBC to ‘fight back’ against disinformation with Beyond Fake News project,” Jamie Angus, director of the BBC World Service Group, is quoted as saying: “Poor standards of global media literacy, and the ease with which malicious content can spread unchecked on digital platforms mean there’s never been a greater need for trustworthy news providers to take proactive steps.”

Angus’ quote opens up a Pandora’s box of issues. Among them is the basic question of what constitutes “trustworthy news providers” in the first place. Of course, this is an issue I’ve tackled in previous columns.

Another issue is what would be appropriate “proactive steps.” The BBC’s “Beyond Fake News” project is one example that seems pretty sound. (Sorry if this language seems a little stilted, but I’ve just finished watching a mid-twentieth-century British film, and those folks tended to talk that way. It’ll take me a little while to get over it.)

Another sort of “proactive step” is what I’ve been trying to do in this blog: provide advice about what steps to take to ensure that the news you consume is reliable.

A third is providing rebuttal of specific fake-news stories, which is what pundits on networks like CNN and MSNBC try (with limited success, I might say) to do every day.

The issue I hope to attack in this blog posting is the overarching concern in the first phrase of the Angus quote: “Poor standards of global media literacy, … .”

Global media literacy can only be improved the same way any lack of literacy can be improved, and that is through education.

Improving global media literacy begins with ensuring a high standard of media literacy among teachers. Teachers can only teach what they already know. Thus, a high standard of media literacy must start in college and university academic-education programs.

I’ve spent decades teaching at the college level, so I have plenty of experience, but I’m not actually qualified to teach other teachers how to teach. I’ve only taught technical subjects, and the education required to teach technical subjects centers on the technical subjects themselves. The art of teaching is (or at least was when I was at university) left to the student’s ability to mimic what their teachers did, informal mentoring by fellow teachers, and good-ol’ experience in the classroom. We were basically dumped into the classroom and left to sink or swim. Some swam, while others sank.

That said, I’m not going to try to lay out a program for teaching teachers how to teach media literacy. I’ll confine my remarks to making the case that it needs to be done.

Teaching media literacy to schoolchildren is especially urgent because the media-literacy projects I keep hearing about are aimed at adults “in the wild,” so to speak. That is, they’re aimed at adult citizens who have already completed their educations and are out earning livings, bringing up families, and participating in the political life of society (or ignoring it, as the case may be).

I submit that’s exactly the wrong audience to aim at.

Yes, it’s the audience that is most involved in media consumption. It’s the group of people who most need to be media literate. It is not, however, the group that we need to aim media-literacy education at.

We gotta get ‘em when they’re young!

Like any other academic subject, the best time to teach people good media-consumption habits is before they need to have them, not afterwards. There are multiple reasons for this.

First, children need to develop good habits before they’ve developed bad habits. It saves the dicey stage of having to unlearn old habits before you can learn new ones. Media literacy is no different. Neither is critical thinking.

Most of the so-called “fake news” appeals to folks who’ve never learned to think critically in the first place. They certainly try to think critically, but they’ve never been taught the skills. Of course, those critical-thinking skills are a prerequisite to building good media-consumption habits.

How can you get in the habit of thinking critically about news stories you consume unless you’ve been taught to think critically in the first place? I submit that the two skills are so intertwined that the best strategy is to teach them simultaneously.

And, it is most definitely a habit, like smoking, drinking alcohol, and being polite to pretty girls (or boys). It’s not something you can just tell somebody to do, then expect they’ll do it. They have to do it over and over again until it becomes habitual.

‘Nuff said.

Another reason to promote media literacy among the young is that’s when people are most amenable to instruction. Human children are pre-programmed to try to learn things. That’s what “play” is all about. Acquiring knowledge is not an unpleasant chore for children (unless misguided adults make it so). It’s their job! To ensure that children learn what they need to know to function as adults, Mommy Nature went out of her way to make learning fun, just as she did with everything else humans need to do to survive as a species.

Learning, having sex, and taking care of babies are all things humans have to do to survive, so Mommy Nature puts systems in place to make them fun, and so drive humans to do them.

A third reason we need to teach media literacy to the young is that, like everything else, you’re better off learning it before you need to practice it. Nobody in their right mind teaches a novice how to drive a car by running them out in city traffic. High schools all have big, torturously laid out parking lots to give novice drivers a safe, challenging place to practice the basic skills of starting, stopping and turning before they have to perform those functions while dealing with fast-moving Chevys coming out of nowhere.

Similarly, you want students to practice deciphering written and verbal communications before asking them to parse a Donald-Trump speech!

The “Call to Action” for this editorial piece is thus, “Agitate for developing good media-consumption habits among schoolchildren along with the traditional Three Rs.” It starts with making the teaching of media literacy part of K-12 teacher education. It also includes teaching critical thinking skills and habits at the same time. Finally, it includes holding K-12 teachers responsible for inculcating good media-consumption habits in their students.

Yes, it’s important to try to bring the current crop of media-illiterate adults up to speed, but it’s more important to promote global media literacy among the young.

The Case for Free College

College vs. Income
While the need for skilled workers to maintain our technology edge has grown, the cost of training those workers has grown astronomically.

6 June 2018 – We, as a nation, need to extend the present system that provides free, universal education up through high school to cover college to the baccalaureate level.

DISCLOSURE: Teaching is my family business. My father was a teacher. My mother was a teacher. My sister’s first career was as a teacher. My brother-in-law was a teacher. My wife is a teacher. My son is a teacher. My daughter-in-law is a teacher. Most of my aunts and uncles and cousins are or were teachers. I’ve spent a lot of years teaching at the college level, myself. Some would say that I have a conflict of interest when covering developments in the education field. Others might argue that I know whereof I speak.

Since WW II, there has been a growing realization that the best careers go to those with at least a bachelor’s degree in whatever field they choose. Yet, at the same time, society has (perhaps inadvertently, although I’m not naive enough to eschew thinking there’s a lot of blame to go around) erected a monumental barrier to anyone wanting to get an education. Since the mid-1970s, the cost of higher education has vastly outstripped the ability of most people to pay for it.

In 1975, the price of attendance in college was about one fifth of the median family income (see graph above). In 2016, it was over a third. That makes sending kids to college a whole lot harder than it used to be. If your family happens to have less than median household income, that barrier looks even higher, and is getting steeper.

MORE DISCLOSURE: The reason I don’t have a Ph.D. today is that two years into my Aerospace Engineering Ph.D. program, Arizona State University jacked up the tuition beyond my (not inconsiderable at the time) ability to pay.

I’d like everyone in America to consider the following propositions:

  1. A bachelor’s degree is the new high-school diploma;
  2. Having an educated population is a requirement for our technology-based society;
  3. Without education, upward mobility is nearly impossible;
  4. Ergo, it is a requirement for our society to ensure that every citizen capable of getting a college degree gets one.

EVEN MORE DISCLOSURE: Horace Mann, often credited as the Father of Public Education, was born in the same town (Franklin, MA) that I was, and our family charity is a scholarship fund dedicated to his memory.

About Mann’s intellectual progressivism, the historian Ellwood P. Cubberley said: “No one did more than he to establish in the minds of the American people the conception that education should be universal, non-sectarian, free, and that its aims should be social efficiency, civic virtue, and character, rather than mere learning or the advancement of sectarian ends.” (source: Wikipedia)

The Wikipedia article goes on to say: “Arguing that universal public education was the best way to turn unruly American children into disciplined, judicious republican citizens, Mann won widespread approval from modernizers, especially in the Whig Party, for building public schools. Most states adopted a version of the system Mann established in Massachusetts, especially the program for normal schools to train professional teachers.”

That was back in the mid-nineteenth century. At that time, the United States was in the midst of a shift from an agrarian to an industrial economy. We’ve since completed that transition and are now shifting to an information-based economy. In future, full participation in the workforce will require everyone to have at least a bachelor’s degree.

So, when progressive politicians, like Bernie Sanders, make noises about free universal college education, YOU should listen!

It’s about time we, as a society, owned up to the fact that times have changed a lot since the mid-nineteenth century. At that time, universal free education to about junior high school level was considered enough. Since then, it was extended to high school. It’s time to extend it further to the bachelor’s-degree level.

That doesn’t mean shutting down Ivy League colleges. For those who can afford them, private and for-profit colleges can provide superior educational experiences. But having publicly funded four-year colleges offer tuition-free education to everyone has become a strategic imperative.