Why Target Average Inflation?

Federal Reserve Seal
The FOMC attempts to control economic expansion by managing interest rates. Shutterstock.com

8 May 2019 – There’s been a bit of noise in financial-media circles this week (as of this writing, but it’ll be last week when you get to read it) about Federal Reserve Chairman Jerome Powell’s talking up shifting the Fed’s focus to targeting something called “average inflation” and using words like “transient” and “symmetric” to describe this thinking. James Macintosh provided a nice layman-centric description of the pros and cons of this concept in his “Streetwise” column in Friday’s (5/3) The Wall Street Journal. (Sorry, folks, but this article is only available to WSJ subscribers, so the link above leads to a teaser that asks you to either sign in as a current subscriber or to become a new subscriber. And, you thought information was supposed to be distributed for free? Think again!)

I’m not going to rehash what Macintosh wrote; instead, I’ll attempt to show why this change makes sense. In fact, it’s not really a change at all, but an acknowledgement of what’s really been going on all the time.

We start by pointing out that what the Federal Reserve System is mandated to do is to control the U.S. economy. The operative word here is “control.” That means that to understand what the Fed does (and what it should do) requires a basic understanding of control theory.

Basic Control Theory

We’ll start with a thermostat.

A lot of people (I hesitate to say “most” because I’ve encountered so many counterexamples – otherwise intelligent people who somehow don’t seem to get the point) understand how a thermostat works.

A thermostat is the poster child for basic automated control systems. It’s the “stone knives and bearskins” version of automated controls, and is the easiest for the layman to understand, so that’s where we’ll start. It’s also a good analog for what has passed for economic controls since the Fed was created in 1913.

Okay, the first thing to understand is the concept of a “set point.” That’s a “desired value” of some measurement that represents the thing you want to control. In the case of the thermostat, the measurement is room temperature (as read out from a thermometer) and the thing you’re trying to control is how comfortable the room air feels to you. In the case of the Fed, the thing you want to control is overall economic performance and the measurement folks decided was most useful is the inflation rate.

Currently, the set point for inflation is 2% per annum.

In the case of the thermostat in our condo, my wife and I have settled on 75° F. That’s a choice we’ve made based on the climate where we live (Southwestern Florida), our ages, and what we, through experience, have found to be most comfortable for us right now. When we lived in New England, we chose a different set point. Similarly, when we lived in Northern Arizona it was different as well.

The bottom line is: the set point is a matter of choice based on a whole raft of factors that we think are important to us and it varies from time to time.

The same goes for the Fed’s inflation set point. It’s a choice Fed governors make based on a whole raft of considerations that they think are important to the country right now. One of the reasons the FOMC meets regularly (eight times a year) is to review that target ‘cause they know that things change. What seems like a good idea in July might not look so good in August.

Now, it’s important to recognize that the set point is a target. Like any target, you’re trying to hit it, but you don’t really expect to hit it exactly. You really expect that the value you get for your performance measurement will differ from your set point by some amount – by some error or what metrologists prefer to call “deviation.” We prefer deviation to the word error because it has less pejorative connotations. It’s a fact of life, not a bad thing.

When we add in the concept of time, we also introduce the concept of feedback. That is what control theorists call it when you take the results of your measurement and feed it back to your decision of what to do next.

What you do next to control whatever you’re trying to control depends, first, on the sign (positive or negative) of the deviation, and, in more sophisticated controls, its value or magnitude. In the case of the thermostat, if the deviation is positive (meaning the room is hotter than you want) you want to do something to cool it down. In the case of the economy, if inflation is too high you want to do something to reduce economic activity so you don’t get an economic bubble that’ll soon burst.

What confuses some presidents is the idea that rising economic activity isn’t always good. Presidents like boom times ‘cause they make people feel good – like a sugar high. Populist presidents typically fail to recognize (or care about the fact) that booms are invariably bubbles that burst disastrously. Just ask the people of Venezuela who watched their economy’s inflation rate suddenly shoot up to about a million(!) percent per annum.

Booms turn to busts in a heartbeat!

This is where we want to abandon the analogy with a thermostat and get a little more sophisticated.

A thermostat is a blunt instrument. What the thermostat automatically does next is like using a club. At best, a thermostat has two clubs to choose from: it can either fire up the furnace (to raise the room temperature in the event of a negative deviation) or kick in the air conditioner (in the event that the deviation is positive – too hot). That’s known as a binary digital control. It gives you a digital choice: up or down.
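If you like to see ideas in code, here’s a minimal sketch (in Python, with made-up numbers) of that binary choice. Real thermostats add a dead band so they don’t chatter on and off, but the bones of it look like this:

```python
# Minimal sketch of a binary ("bang-bang") controller, thermostat-style.
# The set point and readings are made up for illustration.

SET_POINT = 75.0  # degrees F

def thermostat(room_temp):
    """Pick which 'club' to swing based on the sign of the deviation."""
    deviation = room_temp - SET_POINT
    if deviation > 0:
        return "air conditioner ON"   # too hot: cool it down
    elif deviation < 0:
        return "furnace ON"           # too cold: heat it up
    return "do nothing"               # dead on target (rare!)

for reading in [78.2, 75.0, 71.5]:
    print(reading, "->", thermostat(reading))
```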

We leave the thermostat analogy because the Fed’s main tool for controlling the economy (the Fed-funds interest rate) is a lot more sophisticated. It’s what mathematicians call analog. That is, instead of providing a binary choice (to use the club or not), it lets you choose how much pressure you want to apply up or down.

Quantitative easing similarly provides analog control, so what I’m going to say below also applies to it.

Okay, the Fed’s control lever (Fed funds interest rate) is more like a brake pedal than a club. In a car, the harder you press the brake pedal, the more pressure you apply to make the car slow down. A little pressure makes the car slow down a little. A lot of pressure makes the car slow down a lot.

So, you can see why authoritarians like low interest rates. Authoritarians generally have high-D personalities. As Personality Insights says: “They tend to know 2 speeds in life – zero and full throttle… mostly full throttle.”

They generally don’t have much use for brakes!

By the way, the thing governments have that corresponds to a gas pedal is deficit spending, but the correspondence isn’t exact and the Fed can’t control it, anyway. Since this article is about the Fed, we aren’t going to talk about it now.

When inflation’s moving too fast (above the set point) by a little, the Fed governors – being the feedback controller – decide to raise the Fed funds rate, which is analogous to pushing the brake pedal, by a little. If that doesn’t work, they push it a little harder. If inflation seems to be out of control, as it did in the 1970s, they push it as hard as they can, boosting interest rates way up and pulling way back on the economy.
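Here’s that escalation as a toy feedback loop. It is emphatically not an economic model; the gain, the starting numbers, and the one-line fake “economy” are all my own inventions, just to show the idea of pressing the brake harder while the deviation persists and easing off once it flips sign:

```python
# Toy feedback loop: keep nudging the "brake" (the rate) in proportion to the
# current inflation overshoot. Gain, starting values, and the fake economy
# response are made up purely for illustration.

TARGET = 2.0   # percent inflation set point
GAIN   = 0.5   # how hard to press per point of deviation (made up)

rate = 2.5         # starting policy rate, percent (made up)
inflation = 4.0    # starting inflation, percent (made up)

for meeting in range(6):
    deviation = inflation - TARGET        # feedback: measure the error
    rate += GAIN * deviation              # push harder while the error persists
    inflation -= 0.3 * (rate - 2.5)       # fake economy: higher rates cool inflation
    print(f"meeting {meeting}: inflation {inflation:.2f}%, rate {rate:.2f}%")
```

Notice that even this toy controller can overshoot. Tuning how hard to press, and when to ease off, is most of what real control engineering (and real central banking) is about.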

Populist dictators, who generally don’t know what they’re doing, try to prevent their central banks (you can’t have an economy without having a central bank, even if you don’t know you have it) from raising interest rates soon enough or high enough to get inflation under control. That’s why populist dictatorships generally end up with hyperinflation leading to economic collapse.

Populist Dictators Need Not Apply

This is why we don’t want the U.S. Federal Reserve under political control. Politicians are not elected for their economic savvy, so we want Fed governors, who are supposed to have economic savvy, to make smart decisions based on their understanding of economic causes and effects, rather than dumb decisions based on political expediency.

Economists are mathematically sophisticated people. They may (or may not) be steeped in the theory of automated control systems, but they’re quite capable of understanding these basics and how they apply to controlling an economy.

Economics, of course, has been around as long as civilization. Hesiod (ca. 750 BCE) is sometimes considered “the first economist.” Contemporary economics traces back to the eighteenth century with Adam Smith. Control theory, on the other hand, has only been elucidated as a formal discipline since the mid-twentieth century. So, you don’t really need control theory to understand economics. It just makes it easier to see how the controls work.

To a veteran test and measurement maven like myself, the idea of thinking in terms of average inflation, instead of the observed inflation at some point in time – like right now – makes perfect sense. We know that every time you make a measurement of anything, you’re almost guaranteed to get a different value than you got the last time you measured it. That’s why we (scientists and engineers) always measure whatever we care about multiple times and pay attention to the average of the measurements instead of each measurement individually.
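In code, the habit looks like the trivial snippet below. The “monthly readings” are invented numbers for illustration, not actual inflation data:

```python
# Looking at the average of noisy readings instead of the latest one.
# These "monthly inflation readings" are invented for illustration.
from statistics import mean

readings = [1.6, 2.3, 1.9, 2.4, 1.7, 2.2]   # percent, hypothetical

latest  = readings[-1]
average = mean(readings)

print(f"latest reading: {latest:.2f}%")    # jumps around month to month
print(f"average:        {average:.2f}%")   # a much steadier picture
```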

So, Fed governors starting to pay attention to average inflation strikes us as a duh! What else would you look at?

Similarly, using words like “transient” and “symmetric” makes perfect sense because “transient” expresses the idea that things change faster than you can measure them and “symmetric” expresses the idea that measurement variations can be positive or negative – symmetric on each side of the average.

These ideas all come from the mathematics of statistics. You’ve heard of “statistical significance” associated with polling data, or two polling results being within “statistical error.” The variations I’m talking about are the same thing. Variations between two values (like the average inflation and the target inflation) are statistically significant if they’re sufficiently outside the statistical error.

I’m not going to go into how you calculate a value for statistical error because it takes hours of yammering to teach it in statistics classes, and I just don’t have the space here. You wouldn’t want to read it right now, anyway. Suffice it to say that it’s a well-defined concept relating to how much variation you can expect in a given data set.
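That said, here’s the flavor of the calculation: a back-of-the-envelope two-sigma check using the standard error of the mean, run on the same invented readings as above:

```python
# Back-of-the-envelope "is the deviation statistically significant?" check,
# using the standard error of the mean. The readings are invented.
from statistics import mean, stdev
from math import sqrt

TARGET = 2.0
readings = [1.6, 2.3, 1.9, 2.4, 1.7, 2.2]   # percent, hypothetical

avg = mean(readings)
std_error = stdev(readings) / sqrt(len(readings))   # expected wobble of the average

deviation = avg - TARGET
if abs(deviation) > 2 * std_error:        # rough two-sigma rule of thumb
    print(f"deviation {deviation:+.2f}% looks significant")
else:
    print(f"deviation {deviation:+.2f}% is within the noise")
```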

While the control theory I’ve been talking about applies especially to automated control systems, it applies equally to Federal Reserve System control of economic performance – if you put the Federal Open Market Committee (FOMC) in place of the control computer that makes decisions for the automated control system.

“So,” you ask, “why not put the Fed-funds rate under computer control?”

The reason it would be unreasonable to fully automate the Fed’s actions is that we can’t duplicate the thinking process of the Fed governors in a computer program. The state of the art of economic models is just not good enough, yet. We still need the gut feelings of seasoned economists to make enough sense out of what goes on in the economy to figure out what to do next.

That, by the way, is why we don’t leave the decisions up to some hyperintelligent pandimensional being (named Trump). We need a panel of economists with diverse backgrounds and experiences – the FOMC – to have some hope of getting it right!

Falling Out of the Sky

B737 Max taking off
Thai Lion Air Boeing 737 Max 9 taking off from Don Mueang international airport in Bangkok, Thailand. Komenton / Shutterstock.com

3 April 2019 – On 29 October 2018, Lion Air flight 610 crashed soon after takeoff from Soekarno–Hatta International Airport in Jakarta, Indonesia. This is not the sort of thing we report in this blog. It’s straight news and we leave that to straight-news media, but I’m diving into it because it involves technology I’m quite familiar with and I might be able to help readers make sense of what happened and judge the often-uninformed reactions to it.

I claim to have the background to understand what happened because I’ve been flying light planes since the 1990s. I also put two years into a post-graduate Aerospace Engineering Program at Arizona State University concentrating on fluid dynamics. That’s enough background to make some educated guesses at what happened to Lion Air 610 as well as in the almost identical crash of an Ethiopian Airlines Boeing 737 MAX in Addis Ababa, Ethiopia on 10 March 2019.

First, both airliners were recently commissioned Boeing 737 MAX aircraft using standard-equipment installations of Boeing’s new Maneuvering Characteristics Augmentation System (MCAS).

How to Stall an Aircraft

In aerodynamics the word “stall” means something quite unlike what most people expect. Most people encounter the word in an automobile context, where it refers to “stalling the engine.” That happens when you overload an internal-combustion engine. That is, you pull more power out than the engine can produce at its current operating speed. When that happens, the engine simply stops.

It turns from a power-producing machine to a boat anchor in a heartbeat. Your car stops with a lurch and everyone behind you starts swearing and blowing their horns in an effort to make you feel even worse than you already do.

That’s not what happens when an airplane stalls. It’s not the aircraft’s engine that stalls, but its wings. There are similarities in that, like engines, wings stall when they’re overloaded and when stalled they start producing drag like a boat anchor, but that’s about where the similarities end.

When an aircraft stalls, nobody swears and blows their horn. Instead, they scream and die.

Why? Well, wings are supposed to lift the aircraft and support it in the air. If you’ve ever tried to carry a sheet of plywood on a windy day you’ve experienced both lift and drag. If you let the sheet tip up a little bit so the wind catches it underneath, it tries to fly up out of your hands. That’s the lift an airplane gets by tipping its wings up into the air stream as it moves forward into the air.

The more you tip the sheet up, the more lift you get for the same airspeed. That is, until you reach a certain attack angle (the angle between the sheet and the wind). Stalling begins suddenly at an attack angle of about 15°. Then, all of a sudden, the force lifting the sheet changes from up and a little back to no up, and a lot of back!

That’s a wing stall.

The aircraft stops imitating a bird, and starts imitating a rock.

You suddenly get a visceral sense of the concept “down.”

‘Cause that’s where you go in a hurry!

At that point, all you can do is point the nose down (so the wing’s forward edge starts pointing in the direction you’re moving: down!).

If you’ve got enough space underneath your aircraft so the wing starts flying again before you hit the ground, you can gently pull the aircraft’s nose back up to resume straight and level flight. If not, that’s when the screaming starts.

Wings stall when they’re going too slowly to generate the required lift at an angle of attack of 15°. At higher speeds, the wing can generate the needed lift with less angle of attack, and worries about stalling never come up.
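For the quantitatively inclined, the standard lift equation (lift = ½ × air density × speed² × wing area × lift coefficient) tells the whole story of why slow flight flirts with the stall. The little sketch below uses round, made-up numbers for weight, wing area, and maximum lift coefficient, not any real aircraft’s:

```python
# Rough illustration of why slow flight flirts with the stall.
# Lift = 0.5 * rho * V^2 * S * CL, so the lift coefficient (and hence the
# angle of attack) the wing must produce grows as 1/V^2.
# All numbers below are round, made-up values, not any real aircraft's.

RHO    = 1.225    # air density, kg/m^3 (sea level)
WEIGHT = 600_000  # lift required to hold the aircraft up, newtons
AREA   = 125      # wing area, m^2
CL_MAX = 1.5      # roughly the most a plain wing gives before it stalls (~15 deg)

for speed in [90, 75, 60, 50]:          # airspeed, m/s
    cl_needed = WEIGHT / (0.5 * RHO * speed**2 * AREA)
    status = "STALL!" if cl_needed > CL_MAX else "flying"
    print(f"{speed:3d} m/s  needs CL = {cl_needed:.2f}  -> {status}")
```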

So, now you know all you need to know (or want to know) about stalling an aircraft.

MCAS

Boeing’s MCAS is an anti-stall system. Its beating heart is a bit of software running on the flight-control computer that monitors a number of sensor inputs, like airspeed and angle of attack. Basically, in simple terms, it knows exactly how much attack angle the wings can stand before stalling out. If it sees that, for some reason, the attack angle is getting too high, it assumes the pilot has screwed up. It takes control and pushes the nose down.

It doesn’t have to actually “take control” because modern commercial aircraft are “fly by wire,” which means it’s the computer that actually moves the control surfaces to fly the plane. The pilot’s “yoke” (the little wheel he or she gets to twist and turn and move forward and back) and the rudder pedals he pushes to steer (push right, go right) just send signals to the computer to tell it what the pilot wants to have happen. In a sense, the pilot negotiates with the computer about what the airplane should do.

The pilot makes suggestions (through the yoke, pedals and throttle control – collectively called the “cockpit flight controls”); the computer then takes that information, combines it with all the other information provided by a plethora (Do you like that word? I do!) of additional sensors; thinks about it for a microsecond; then, finally, the computer tells the aircraft’s control surfaces to move smoothly to a position that it (the computer) thinks will make the aircraft do what the pilot wants.
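In cartoon form, that negotiation looks something like the sketch below. The limits, scale factors, and threshold are made up for illustration; real flight-control laws are vastly more elaborate than this:

```python
# Cartoon of the fly-by-wire "negotiation": the yoke is a suggestion,
# the computer has the last word. Limits and scaling are made up.

PITCH_CMD_LIMIT = 15.0     # max nose-up/down command, degrees (made up)

def elevator_command(yoke_input, angle_of_attack, aoa_limit=14.0):
    """Turn the pilot's suggestion into a control-surface command."""
    command = yoke_input * PITCH_CMD_LIMIT       # yoke_input runs from -1 to +1
    if angle_of_attack > aoa_limit:              # protection logic overrules the pilot
        command = min(command, -5.0)             # push the nose down regardless
    return max(-PITCH_CMD_LIMIT, min(PITCH_CMD_LIMIT, command))

print(elevator_command(yoke_input=0.8, angle_of_attack=6.0))    # pilot gets what he asks for
print(elevator_command(yoke_input=0.8, angle_of_attack=16.0))   # computer says "nope"
```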

That’s all well and good when the reason the attack angle got too high is just that something happened that broke the pilot’s concentration, and he (or she) actually screwed up. What about when the pilot actually wants to stall the aircraft?

For example, on landing.

To land a plane, you slow it way down, so the wing’s almost stalled. Then, you fly it really close to the ground so the wheels almost touch the runway. Then you stall the wing so the wheels touch the ground just as the wings lose lift. You hear a satisfying “squeak” as the wheels momentarily skid while spinning up to match the relative speed of the runway. Finally, the wheels gently settle down, taking up the weight of the aircraft. The flight crew (and a few passengers who’ve been paying attention) cheer the pilot for a job well done, and the pilot starts breathing again.

Anti-stall systems don’t do much good during a landing, when you’re trying to intentionally stall the wings at just the right time.

Similarly, they don’t do much good when you’re taking off, and the pilot’s just trying to get the wings unstalled to get the aircraft into the air in the first place.

For those times, you want the MCAS turned off! So you’ve gotta be able to do that, too. Or, if your pilot is too absent-minded to shut it off when it’s not needed, you need it to shut off automatically.

When Things Go Wrong

So, what happened in those two airliner crashes?

Remember that the main input into the MCAS is an attack-angle sensor? Attack-angle sensors, like any other piece of technology, can go bad, especially when they’re exposed to weather. And, airliners are exposed to weather 24/7 except when they’re brought into a hangar for repair.

The working hypothesis for what happened to both airliners is that the attack-angle sensors failed. They jammed in a position where they erroneously reported a high angle-of-attack to the MCAS, which jumped to the conclusion “pilot error,” and pushed the nose down. When the pilot(s) tried to pull the nose back up (because their windshield filled up with things that looked a lot like ground instead of sky), the MCAS said: “Nope! You’re going down, Jack!”

By the time the pilots figured out what was wrong and looked up how to shut the MCAS off, they’d actually hit the things that looked too much like ground.

Why didn’t the MCAS figure out there was something wrong with the sensor?

How’s it supposed to know?

The sensor says the nose is pointed up, so the computer takes it at its word. Computers aren’t really very smart, and tend to be quite literal. The sensor says the nose is pointed up, so the computer thinks the nose is pointed up, and tries to point it down (or at least less up). End of story. And, in the real world, it’s “end of aircraft” as well.

If the pilot(s) try to tell the computer to pull the nose up (by desperately pulling back on the yoke), it figures they’re screw-ups, anyway, and won’t listen.

Ever try to argue with a computer? Been there, done that. It doesn’t work.

Mea Culpa

When I learned about the hypothesis of attack-angle-sensor failure causing the crashes that took nearly four hundred lives, I got this awful sick feeling that was a mixture of embarrassment and guilt. You see, a decade and a half ago my research project at ASU was an effort to develop a different style of attack-angle sensor. Several events and circumstances combined to make me abandon that research project and, in fact, the whole Ph.D. program it was a part of. In my defense, it was the start of a ten-year period in which I couldn’t get anything right!

But, if I’d stuck it out and developed that sensor it might have been installed on those airliners and might not have failed at all. Of course, it could have been installed and failed in some other spectacular way.

You see, the attack angle sensor that apparently was installed consisted of a little vane attached to one side of the aircraft’s nose. Just like the wind sock traditionally hung outside airports the world over, wind pressure makes the vane line up downstream of the wind direction. A little angle sensor attached to the vane reports the wind direction relative to the nose: the attack angle.

I got involved in trying to develop an alternative attack-angle sensor because I have a horror of relying on sensors that depend on mechanical movement to work. If you’re relying on mechanical movement, it means you’re relying on bearings, and bearings can corrode and wear out and fail. The sensor I was working on relied on differences in air pressure that depended on the direction the wind hit the sensor.

In actual fact, there were two attack-angle sensors attached to the doomed aircraft – one on each side of the nose – but the Boeing MCAS was paying attention to only one of them. That was Boeing’s second mistake (the first being not using the sensor I hadn’t developed, so I guess they can’t be blamed for it). If the MCAS had been paying attention to both sensors, it would have known something in its touchy-feely universe was wrong. It might have been a little more reluctant to override the pilots’ input.
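The kind of cross-check I’m talking about is nothing exotic. Here’s the flavor of it, in my own toy logic with made-up thresholds – not Boeing’s actual software:

```python
# Flavor of a two-sensor sanity check (not Boeing's actual logic).
# If the two angle-of-attack vanes disagree badly, trust neither one;
# stand down and let the pilots fly.

DISAGREE_LIMIT = 5.5   # degrees; a made-up disagreement threshold
AOA_LIMIT      = 14.0  # degrees; a made-up "too close to stall" threshold

def mcas_decision(aoa_left, aoa_right):
    if abs(aoa_left - aoa_right) > DISAGREE_LIMIT:
        return "sensors disagree: MCAS inhibited, alert the crew"
    aoa = (aoa_left + aoa_right) / 2
    if aoa > AOA_LIMIT:
        return "both sensors high: command nose-down trim"
    return "normal flight: do nothing"

print(mcas_decision(5.0, 5.4))     # healthy sensors, normal flight
print(mcas_decision(22.0, 4.8))    # one vane jammed high: stand down
print(mcas_decision(15.1, 14.6))   # genuinely high angle of attack: intervene
```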

The third mistake (I believe) Boeing made was to downplay the differences between the new “Max” version of the aircraft and the older version. They’d changed the engines, which (as any aerospace engineer knows) necessitates changes in everything else. Aircraft are such intricately balanced machines that every time you change one thing, everything else has to change – or at least has to be looked at to see if it needs to be changed.

The new engines had improved performance, which affects just about everything involving the aircraft’s handling characteristics. Boeing had apparently tried to make the more powerful yet more fuel-efficient aircraft handle like the old aircraft. There were, of course, differences, which the company tried to pretend would make no difference to the pilots. The MCAS was one of those things that was supposed to make the “Max” version handle just like the non-Max version.

So, when something went wrong in “Max” land, it caught the pilots, who had thousands of hours experience with non-Max aircraft, by surprise.

The latest reports are that Boeing, the FAA, and the airlines have figured out the problems that caused these crashes (I hope they understand them a lot better than I do, because, after all, it’s their job to!), and have worked out a number of fixes.

First, the MCAS will pay attention to two attack-angle sensors. At least then the flight-control computer will have an indication that something is wrong and can tell the MCAS to go back into its corner and shut up ‘til the issue is sorted out.

Second, they’ll install a little blinking light that effectively tells the pilots “there’s something wrong, so don’t expect any help from the MCAS ‘til it gets sorted out.”

Third, they’ll make sure the pilots have a good, positive way to emphatically shut the MCAS off if it starts to argue with them in an emergency. And, they’ll make sure the pilots are trained to know when and how to use it.

My understanding is that these fixes are already part of the options that American commercial airlines have generally installed, which is supposedly why the FAA, the airlines and the pilots’ union have been dragging their feet about grounding Boeing’s 737 Max fleet. Let’s hope they’re not just blowing smoke (again)!

Luddites’ Lament

Luddites attack
An owner of a factory defending his workshop against Luddites intent on destroying his mechanized looms between 1811 and 1816. Everett Historical/Shutterstock

27 March 2019 – A reader of last week’s column, in which I reported recent opinions voiced by a few automation experts at February’s Conference on the Future of Work held at Stanford University, informed me of a chapter from Henry Hazlitt’s 1988 book Economics in One Lesson that Australian computer scientist Steven Shaw uploaded to his blog.

I’m not going to get into the tangled web of potential copyright infringement that Shaw’s posting of Hazlitt’s entire text opens up; I’ve just linked to the most convenient-to-read posting of that particular chapter. If you follow the link and want to buy the book, I’ve given you the appropriate link as well.

The chapter is of immense value apropos the question of whether automation generally reduces the need for human labor, or creates more opportunities for humans to gain useful employment. Specifically, it looks at the results of a number of historic events where Luddites excoriated technology developers for taking away jobs from humans only to have subsequent developments prove them spectacularly wrong.

Hazlitt’s classic book is, not surprisingly for a classic, well documented, authoritative, and extremely readable. I’m not going to pretend to provide an alternative here, but to summarize some of the chapter’s examples in the hope that you’ll be intrigued enough to seek out the original.

Luddism

Before getting on to the examples, let’s start by looking at the history of Luddism. It’s not a new story, really. It probably dates back to just after cave guys first thought of specialization of labor.

That is, sometime in the prehistoric past, some blokes were found to be especially good at doing some things, and the rest of the tribe came up with the idea of letting, say, the best potters make pots for the whole tribe, and everyone else rewarding them for a job well done by, say, giving them choice caribou parts for dinner.

Eventually, they had the best flint knappers make the arrowheads, the best fletchers put the arrowheads on the arrows, the best bowmakers make the bows, and so on. Division of labor into different jobs turned out to be so spectacularly successful that the rugged individualists who pretend to do everything for themselves are now few and far between (and are largely kidding themselves, anyway).

Since then, anyone who comes up with a great way to do anything more efficiently runs the risk of having the folks who spent years learning to do it the old way land on him (or her) like a ton of bricks.

It’s generally a lot easier to throw rocks to drive the innovator away than to adapt to the innovation.

Luddites in the early nineteenth century were organized bands of workers who violently resisted mechanization of factories during the late Industrial Revolution. They took their name from an imaginary character, Ned Ludd, supposedly an apprentice who smashed two stocking frames in 1779 and whose name became emblematic of machine destroyers. The term “Luddite” has come to mean anyone fanatically opposed to deploying advanced technology.

Of course, like religious fundamentalists, they have to pick a point in time to separate “good” technology from the “bad.” Unlike religious fanatics, who generally pick publication of a certain text to be the dividing line, Luddites divide between the technology of their immediate past (with which they are familiar) and anything new or unfamiliar. Thus, it’s a continually moving target.

In either case, the dividing line is fundamentally arbitrary, so the emotion of their response is irrational. Irrationality typically carries a warranty of being entirely contrary to facts.

What Happens Next

Hazlitt points out, “The belief that machines cause unemployment, when held with any logical consistency, leads to preposterous conclusions.” He points out that on the second page of the first chapter of Adam Smith’s seminal book Wealth of Nations, Smith tells us that a workman unacquainted with the use of machinery employed in sewing-pin-making “could scarce make one pin a day, and certainly could not make twenty,” but with the use of the machinery he can make 4,800 pins a day. So, zero-sum game theory would indicate an immediate 99.98 percent unemployment rate in the pin-making industry of 1776.

Did that happen? No, because economics is not a zero-sum game. Sewing pins went from dear to cheap. Since they were now cheap, folks prized them less and discarded them more (when was the last time you bothered to straighten a bent pin?), and more folks could afford to buy them in the first place. That led to an increase in sewing-pin sales as well as sales of things like sewing-patterns and bulk fine fabric sold to amateur sewers, and more employment, not less.

Similar results obtained in the stocking industry when new stocking frames (the original having been invented by William Lee in 1589, but denied a patent by Elizabeth I, who feared its effects on employment in the hand-knitting industry) were protested by Luddites as fast as they could be introduced. Before the end of the nineteenth century the stocking industry was employing at least a hundred men for every man it employed at the beginning of the century.

Another example Hazlitt presents from the Industrial Revolution happened in the cotton-spinning industry. He says: “Arkwright invented his cotton-spinning machinery in 1760. At that time it was estimated that there were in England 5,200 spinners using spinning wheels, and 2,700 weavers—in all, 7,900 persons engaged in the production of cotton textiles. The introduction of Arkwright’s invention was opposed on the ground that it threatened the livelihood of the workers, and the opposition had to be put down by force. Yet in 1787—twenty-seven years after the invention appeared—a parliamentary inquiry showed that the number of persons actually engaged in the spinning and weaving of cotton had risen from 7,900 to 320,000, an increase of 4,400 percent.”

As these examples indicate, improvements in manufacturing efficiency generally lead to reductions in manufacturing cost, which, when passed along to customers, reduce prices with concomitant increases in unit sales. This is the price elasticity of demand curve from Microeconomics 101. It is the reason economics is decidedly not a zero-sum game.
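Here’s that elasticity arithmetic in toy form, loosely inspired by the pin example. Every number in it (the price cut, the elasticity, the labor content) is made up for illustration:

```python
# Toy constant-elasticity demand curve: q = q0 * (p/p0)**elasticity.
# All numbers (prices, elasticity, labor content) are made up for illustration.

elasticity = -2.0        # elastic demand: a 1% price cut lifts unit sales ~2%
p0, q0 = 1.00, 1_000     # old price and old daily unit sales

labor_per_unit_old = 1 / 20     # worker-days per pin, hand-made (20 pins/day)
labor_per_unit_new = 1 / 4800   # worker-days per pin, machine-made (4,800 pins/day)

p1 = 0.05                          # machines slash the price (made up)
q1 = q0 * (p1 / p0) ** elasticity  # demand responds to the cheaper price

workers_before = q0 * labor_per_unit_old
workers_after  = q1 * labor_per_unit_new

print(f"units sold: {q0:,.0f} -> {q1:,.0f}")
print(f"workers needed: {workers_before:,.1f} -> {workers_after:,.1f}")
```

Whether employment actually rises hinges on demand being elastic enough to more than offset the productivity gain. In the historical cases Hazlitt cites, it clearly was.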

If we accept economics as not a zero-sum game, predicting what happens when automation makes it possible to produce more stuff with fewer workers becomes a chancy proposition. For example, many economists today blame flat productivity (the amount of stuff produced divided by the number of workers needed to produce it) for lack of wage gains in the face of low unemployment. If that is true, then anything that would help raise productivity (such as automation) should be welcome.

Long experience has taught us that economics is a positive-sum game. In the face of technological advancement, it behooves us to expect positive outcomes while taking measures to ensure that the concomitant economic gains get distributed fairly (whatever that means) throughout society. That is the take-home lesson from the social dislocations that accompanied the technological advancements of the Early Industrial Revolution.

Don’t Panic!

Panic button
Do not push the red button! Peter Hermes Furian/Shutterstock

20 March 2019 – The image at right visualizes something described in Douglas Adams’ Hitchhiker’s Guide to the Galaxy. At one point, the main characters of that six-part “trilogy” found a big red button on the dashboard of a spaceship they were trying to steal that was marked “DO NOT PRESS THIS BUTTON!” Naturally, they pressed the button, and a new label popped up that said “DO NOT PRESS THIS BUTTON AGAIN!”

Eventually, they got the autopilot engaged only to find it was a stunt ship programmed to crash headlong into the nearest Sun as part of the light show for an interstellar rock band. The moral of this story is “Never push buttons marked ‘DO NOT PUSH THIS BUTTON.’”

Per the author: “It is said that despite its many glaring (and occasionally fatal) inaccuracies, the Hitchhiker’s Guide to the Galaxy itself has outsold the Encyclopedia Galactica because it is slightly cheaper, and because it has the words ‘DON’T PANIC’ in large, friendly letters on the cover.”

Despite these references to the Hitchhiker’s Guide to the Galaxy, this posting has nothing to do with that book, the series, or the guide it describes, except that I’ve borrowed the words from the Guide’s cover as a title. I did that because those words perfectly express the take-home lesson of Bill Snyder’s 11 March 2019 article in The Robot Report entitled “Fears of job-stealing robots are misplaced, say experts.”

Expert Opinions

Snyder’s article reports opinions expressed at the Conference on the Future of Work at Stanford University last month. It’s a topic I’ve shot my word processor off about on numerous occasions in this space, so I thought it would be appropriate to report others’ views as well. First, I’ll present material from Snyder’s article, then I’ll wrap up with my take on the subject.

“Robots aren’t coming for your job,” Snyder says, “but it’s easy to make misleading assumptions about the kinds of jobs that are in danger of becoming obsolete.”

“Most jobs are more complex than [many people] realize,” said Hal Varian, Google’s chief economist.

David Autor, professor of economics at the Massachusetts Institute of Technology, points out that education is a big determinant of how developing trends affect workers: “It’s a great time to be young and educated, but there’s no clear land of opportunity for adults who haven’t been to college.”

“When predicting future labor market outcomes, it is important to consider both sides of the supply-and-demand equation,” said Varian. “Demographic trends that point to a substantial decrease in the supply of labor are potentially larger in magnitude.”

His research indicates that shrinkage of the labor supply due to demographic trends is 53% greater than shrinkage of demand for labor due to automation. That means, while relatively fewer jobs are available, there are a lot fewer workers available to do them. The result is the prospect of a continued labor shortage.

At the same time, Snyder reports that “[The] most popular discussion around technology focuses on factors that decrease demand for labor by replacing workers with machines.”

In other words, fears that robots will displace humans for existing jobs miss the point. Robots, instead, are taking over jobs for which there aren’t enough humans to do them.

Another effect is the fact that what people think of as “jobs” are actually made up of many “tasks,” and it’s tasks that get automated, not entire jobs. Some tasks are amenable to automation while others aren’t.

“Consider the job of a gardener,” Snyder suggests as an example. “Gardeners have to mow and water a lawn, prune rose bushes, rake leaves, eradicate pests, and perform a variety of other chores.”

Some of these tasks, like mowing and watering, can easily be automated. Pruning rose bushes, not so much!

Snyder points to news reports of a hotel in Nagasaki, Japan being forced to “fire” robot receptionists and room attendants that proved to be incompetent.

There’s a scene in the 1997 film The Fifth Element where a supporting character tries to converse with a robot bartender about another character. He says: “She’s so vulnerable – so human. Do you know what I mean?” The robot shakes its head, “No.”

Sometimes people, even misanthropes, would prefer to interact with another human than with a drink-dispensing machine.

“Jobs,” Varian points out, “unlike repetitive tasks, tend not to disappear. In 1950, the U.S. Census Bureau listed 250 separate jobs. Since then, the only one to be completely eliminated is that of elevator operator.”

“Excessive automation at Tesla was a mistake,” founder Elon Musk mea culpa-ed last year. “Humans are underrated.”

Another trend Snyder points out is that automation-ready jobs, such as assembly-line factory workers, have already largely disappeared from America. “The 10 most common occupations in the U.S.,” he says, “include such jobs as retail salespersons, nurses, waiters, and other service-focused work. Notably, traditional occupations, such as factory and other blue-collar work, no longer even make the list.”

Again, robots are mainly taking over tasks that humans are not available to do.

The final trend that Snyder presents is the stark fact that birthrates in developed nations are declining – in some cases precipitously. “The aging of the baby boom generation creates demand for service jobs,” Varian points out, “but leaves fewer workers actively contributing labor to the economy.”

Those “service jobs” are just the ones that require a human touch, so they’re much harder to automate successfully.

My Inexpert Opinion

I’ve been trying, not entirely successfully, to figure out what role robots will actually have vis-a-vis humans in the future. I think there will be a few macroscopic trends. And, the macroscopic trends should be the easiest to spot ‘cause they’re, well, macroscopic. That means bigger. So, they’re easier to see. See?

As early as 2010, I worked out one important difference between robots and humans that I expounded in my novel Vengeance is Mine! Specifically, humans have a wider view of the Universe and have more of an emotional stake in it.

“For example,” I had one of my main characters pontificate at a cocktail party, “that tall blonde over there is an archaeologist. She uses ROVs – remotely operated vehicles – to map underwater shipwreck sites. So, she cares about what she sees and finds. We program the ROVs with sophisticated navigational software that allows her to concentrate on what she’s looking at, rather than the details of piloting the vehicle, but she’s in constant communication with it because she cares what it does. It doesn’t.”

More recently, I got a clearer image of this relationship and it’s so obvious that we tend to overlook it. I certainly missed it for decades.

It hit me like a brick when I saw a video of an autonomous robot marine-trash collector. This device is a small autonomous surface vessel with a big “mouth” that glides around seeking out and gobbling up discarded water bottles, plastic bags, bits of styrofoam, and other unwanted jetsam clogging up waterways.

The first question that popped into my mind was “who’s going to own the thing?” I mean, somebody has to want it, then buy it, then put it to work. I’m sure it could be made to automatically regurgitate the junk it collects into trash bags that it drops off at some collection point, but some human or humans have to make sure the trash bags get collected and disposed of. Somebody has to ensure that the robot has a charging system to keep its batteries recharged. Somebody has to fix it when parts wear out, and somebody has to take responsibility if it becomes a navigation hazard. Should that happen, the Coast Guard is going to want to scoop it up and hand its bedraggled carcass to some human owner along with a citation.

So, on a very important level, the biggest thing robots need from humans is ownership. Humans own robots, not the other way around. Without a human owner, an orphan robot is a pile of junk left by the side of the road!

What is This “Robot” Thing, Anyway?

Robot thinking
So, what is it that makes a robot a robot? Phonlamai Photo/Shutterstock

6 March 2019 – While surfing the Internet this morning, in a valiant effort to put off actually getting down to business grading that pile of lab reports that I should have graded a couple of days ago, I ran across this posting I wrote in 2013 for Packaging Digest.

Surprisingly, it still seems relevant today, and on a subject that I haven’t treated in this blog, yet. Since I’m planning to devote most of next week to preparing my 2018 tax return, I decided to save some writing time by dusting it off and presenting it as this week’s posting to Tech Trends. I hope the folks at Packaging Digest won’t get their noses too far out of joint about my encroaching on their five-year-old copyright without asking permission.

By the way, this piece is way shorter than the usual Tech Trends essay because of the specifications for that Packaging Digest blog, which was called “New Metropolis” in homage to Fritz Lang’s 1927 feature film Metropolis, which told the story of a futuristic mechanized culture and an anthropomorphic robot that a mad scientist creates to bring it down. The “New Metropolis” postings were specified to be approximately 500 words long, whereas Tech Trends postings are planned to be 1,000-1,500 words long.

Anyway, I hope you enjoy this little slice of recent history.


11 November 2013 – I thought it might be fun—and maybe even useful—to catalog the classifications of these things we call robots.

Let’s start with the word “robot.” The idea behind the word robot grows from the ancient concept of the golem. A golem was an artificial person created by people.

Frankly, the idea of a golem scared the bejeezus out of the ancients because the golem stands at the interface between living and non-living things. In our enlightened age, it still scares the bejeezus out of people!

If we restricted the field to golems—strictly humanoid robots, or androids—we wouldn’t have a lot to talk about, and practically nothing to do. The things haven’t proved particularly useful. So, I submit that we should expand the robot definition to include all kinds of human-made artificial critters.

This has, of course, already been done by everyone working in the field. The SCARA (selective compliance assembly robot arm) machines from companies like Kuka, and the delta robots from Adept Technologies clearly insist on this expanded definition. Mobile robots, such as the Roomba from iRobot, push the boundary in another direction. Weird little things like the robotic insects and worms so popular with academics these days push in a third direction.

Considering the foregoing, the first observation is that the line between robot and non-robot is fuzzy. The old ’50s-era dumb thermostats probably shouldn’t be considered robots, but a smart, computer-controlled house moving in the direction of the Jarvis character in the Iron Man series probably should. Things in between are – in between. Let’s bite the bullet and admit we’re dealing with fuzzy-logic categories, and then move on.

Okay, so what are the main characteristics symptomatic of this fuzzy category “robot”?

First, it’s gotta be artificial. A cloned sheep is not a robot. Even designer germs are non-robots.

Second, it’s gotta be automated. A fly-by-wire fighter jet is not a robot. A drone linked at the hip to a human pilot is not a robot. A driverless car, on the other hand, is a robot. (Either that, or it’s a traffic accident waiting to happen.)

Third, it’s gotta interact with the environment. A general-purpose computer sitting there thinking computer-like thoughts is not a robot. A SCARA unit assembling a car is. I submit that an automated bill-paying system arguing through the telephone with my wife over how much to take out of her checkbook this month is a robot.

More problematic is a fourth direction—embedded systems, like automated houses—that beg to be admitted into the robotic fold. I vote for letting them in, along with artificial intelligence (AI) systems, like the robot bill-paying systems my wife is so fond of arguing with.

Finally (maybe), it’s gotta be independent. To be a robot, the thing has to take basic instruction from a human, then go off on its onesies to do the deed. Ideally, you should be able to do something like say, “Go wash the car,” and it’ll run off as fast as its little robotic legs can carry it to wash the car. More realistically, you should be able to program it to vacuum the living room at 4:00 a.m., then be able to wake up at 6:00 a.m. to a freshly vacuumed living room.
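Just for fun, here’s what that fuzzy-category idea might look like in code. The criteria come from the list above; the membership scores are entirely my own inventions:

```python
# A toy fuzzy-membership score for the category "robot," based on the
# criteria above. The scores for each candidate are my own inventions.

CRITERIA = ("artificial", "automated", "interacts", "independent")

def robot_score(**degrees):
    """Average the degree (0.0-1.0) to which each criterion is satisfied."""
    return sum(degrees.get(c, 0.0) for c in CRITERIA) / len(CRITERIA)

candidates = {
    "cloned sheep":    dict(artificial=0.0, automated=0.0, interacts=1.0, independent=1.0),
    "dumb thermostat": dict(artificial=1.0, automated=0.6, interacts=0.8, independent=0.2),
    "Roomba":          dict(artificial=1.0, automated=1.0, interacts=1.0, independent=0.8),
    "driverless car":  dict(artificial=1.0, automated=1.0, interacts=1.0, independent=1.0),
}

for name, degrees in candidates.items():
    print(f"{name:18s} robot-ness = {robot_score(**degrees):.2f}")
```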

Luddites RULE!

LindaBucklin-Shutterstock
Momma said there’d be days like this! (Apologies to songwriters Luther Dixon and Willie Denson, and, of course, the Geico Caveman.) Linda Bucklin/Shutterstock

7 February 2019 – This is not the essay I’d planned to write for this week’s blog. I’d planned a long-winded, abstruse dissertation on the use of principal component analysis to glean information from historical data in chaotic systems. I actually got most of that one drafted on Monday, and planned to finish it up Tuesday.

Then, bright and early on Tuesday morning, before I got anywhere near the incomplete manuscript, I ran headlong into an email issue.

Generally, I start my morning by scanning email to winnow out the few valuable bits buried in the steaming pile of worthless refuse that has accumulated in my Inbox since the last time I visited it. Then, I visit a couple of social media sites in an effort to keep my name in front of the Internet-entertained public. After a couple of hours of this colossal waste of time, I settle in to work on whatever actual work I have to do for the day.

So, finding that my email client software refused to communicate with me threatened to derail my whole day. The fact that I use email for all my business communications made it especially urgent that I determine what was wrong, and then fix it.

It took the entire morning and on into the early afternoon to realize that there was no way I was going to get to that email account on my computer, and that nobody in the outside world (not my ISP, not the cable company that went that extra mile to bring Internet signals from that telephone pole out there to the router at the center of my local area network, nor anyone else available with more technosavvy than I have) was going to be able to help. I was finally forced to invent a workaround involving a legacy computer that I’d neglected to throw in the trash, just to get on with my technology-bound life.

At that point the Law of Deadlines forced me to abandon all hope of getting this week’s blog posting out on time, and move on to completing final edits and distribution of that press release for the local art gallery.

That wasn’t the last time modern technology let me down. In discussing a recent Physics Lab SNAFU, Danielle, the laboratory coordinator I work with at the University said: “It’s wonderful when it works, but horrible when it doesn’t.”

Where have I heard that before?

The SNAFU Danielle was lamenting happened last week.

I teach two sections of General Physics Laboratory at Florida Gulf Coast University, one on Wednesdays and one on Fridays. The lab for last week had students dropping a ball, then measuring its acceleration using a computer-controlled ultrasonic detection system as it (the ball, not the computer) bounces on the table.

For the Wednesday class everything worked perfectly. Half a dozen teams each had their own setups, and all got good data, beautiful-looking plots, and automated measurements of position and velocity. The computers then automatically derived accelerations from the velocity data. Only one team had trouble with their computer, but they got good data by switching to an unused setup nearby.

That was Wednesday.

Come Friday the situation was totally different. Out of four teams, only two managed to get data that looked even remotely like it should. Then, one team couldn’t get their computer to spit out accelerations that made any sense at all. Eventually, after class time ran out, the one group who managed to get good results agreed to share their information with the rest of the class.

The high point of the day was managing to distribute that data to everyone via the school’s cloud-based messaging service.

Concerned about another fiasco, after this week’s lab Danielle asked me how it worked out. I replied that, since the equipment we use for this week’s lab is all manually operated, there were no problems whatsoever. “Humans are much more capable than computers,” I said. “They’re able to cope with disruptions that computers have no hope of dealing with.”

The latest example of technology Hell appeared in a story in this morning’s (2/7/2019) Wall Street Journal. Some $136 million of customers’ cryptocurrency holdings became stuck in an electronic vault when the founder (and sole employee) of cryptocurrency exchange QuadrigaCX, Gerald Cotten, died of complications related to Crohn’s disease while building an orphanage in India. The problem is that Cotten was so secretive about passwords and security that nobody, even his wife, Jennifer Robertson, can get into the reserve account maintained on his laptop.

“Quadriga,” according to the WSJ account, “would need control of that account to send those funds to customers.”

No lie! The WSJ attests this bizarre tale is the God’s own truth!

Now, I’ve no sympathy for cryptocurrency mavens, which I consider to be, at best, technoweenies gleefully leading a parade down the primrose path to technology Hell, but this story illustrates what that Hell looks like!

It’s exactly what the Luddites of the early 19th Century warned us about. It’s a place of nameless frustration and unaccountable loss that we’ve brought on ourselves.

Robots Revisited

Engineer with SCARA robots
Engineer using monitoring system software to check and control SCARA welding robots in a digital manufacturing operation. PopTika/Shutterstock

12 December 2018 – I was wondering what to talk about in this week’s blog posting, when an article bearing an interesting-sounding headline crossed my desk. The article, written by Simone Stolzoff of Quartz Media was published last Monday (12/3/2018) by the World Economic Forum (WEF) under the title “Here are the countries most likely to replace you with a robot.”

I generally look askance at organizations with grandiose names that include the word “World,” figuring that they likely are long on megalomania and short on substance. Further, this one lists the inimitable (thank God there’s only one!) Al Gore on its Board of Trustees.

On the other hand, David Rubenstein is also on the WEF board. Rubenstein usually seems to have his head screwed on straight, so that’s a positive sign for the organization. Therefore, I figured the article might be worth reading and should be judged on its own merits.

The main content is summarized in two bar graphs. The first lists the ratio of robots to thousands of manufacturing workers in various countries. The highest scores go to South Korea and Singapore. In fact, three of the top four are Far Eastern countries. The United States comes in around number seven.

Figure 1: Robots per thousand manufacturing workers, by country.

The second applies a correction to the graphed data to reorder the list by taking into account the countries’ relative wealth. There, the United States comes in dead last among the sixteen countries listed. East Asian countries account for all of the top five.

Figure 2: The same ranking, reordered to account for the countries’ relative wealth.

The take-home lesson from the article is conveniently stated in its final paragraph:

The upshot of all of this is relatively straightforward. When taking wages into account, Asian countries far outpace their western counterparts. If robots are the future of manufacturing, American and European countries have some catching up to do to stay competitive.

This article, of course, got me started thinking about automation and how manufacturers choose to adopt it. It’s a subject that was a major theme throughout my tenure as Chief Editor of Test & Measurement World and constituted the bulk of my work at Control Engineering.

The graphs certainly support the conclusions expressed in the cited paragraph’s first two sentences. The third sentence, however, is problematical.

That ultimate conclusion is based on accepting that “robots are the future of manufacturing.” Absolute assertions like that are always dangerous. Seldom is anything so all-or-nothing.

Predicting the future is epistemological suicide. Whenever I hear such bald-faced statements I recall Jim Morrison’s prescient statement: “The future’s uncertain and the end is always near.”

The line was prescient because a little over a year after the song’s release, Morrison was dead at age twenty-seven, thereby fulfilling the slogan expressed by John Derek’s “Nick Romano” character in Nicholas Ray’s 1949 film Knock on Any Door: “Live fast, die young, and leave a good-looking corpse.”

Anyway, predictions like “robots are the future of manufacturing” are generally suspect because, in the chaotic Universe in which we live, the future is inherently unpredictable.

If you want to say something practically guaranteed to be wrong, predict the future!

I’d like to offer an alternate explanation for the data presented in the WEF graphs. It’s based on my belief that American Culture usually gets things right in the long run.

Yes, that’s the long run in which economist John Maynard Keynes pointed out that we’re all dead.

My belief in the ultimate vindication of American trends is based, not on national pride or jingoism, but on historical precedents. Countries that have bucked American trends often start out strong, but ultimately fade.

An obvious example is trendy Japanese management techniques based on Druckerian principles that were so much in vogue during the last half of the twentieth century. Folks imagined such techniques were going to drive the Japanese economy to pre-eminence in the world. Management consultants touted such principles as the future for corporate governance without noticing that while they were great for middle management, they were useless for strategic planning.

Japanese manufacturers beat the crap out of U.S. industry for a while, but eventually their economy fell into a prolonged recession characterized by economic stagnation and disinflation so severe that even negative interest rates couldn’t restart it.

Similar examples abound, which is why our little country with its relatively minuscule population (4.3% of the world’s) has by far the biggest GDP in the world. China, with more than four times the population, still grosses only about two-thirds of what we do.

So, if robotic adoption is the future of manufacturing, why are we so far behind? Assuming we actually do know what we’re doing, as past performance would suggest, the answer must be that the others are getting it wrong. Their faith in robotics as a driver of manufacturing productivity may be misplaced.

How could that be? What could be wrong with relying on technological advancement as the driver of productivity?

Manufacturing productivity is calculated on the basis of stuff produced (as measured by its total value in dollars) divided by the number of worker-hours needed to produce it. That should tell you something about what it takes to produce stuff. It’s all about human worker involvement.

Folks who think robots automatically increase productivity are fixating on the denominator in the productivity calculation. Making even the same amount of stuff while reducing the worker-hours needed to produce it should drive productivity up fast. That’s basic arithmetic. Yet, while manufacturing has been rapidly introducing all kinds of automation over the last few decades, productivity has stagnated.
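Here’s that naive arithmetic with made-up plant numbers:

```python
# The naive productivity arithmetic: value of stuff out over worker-hours in.
# The plant numbers are made up for illustration.

def productivity(output_value_dollars, worker_hours):
    return output_value_dollars / worker_hours

before = productivity(1_000_000, 10_000)   # $100 of output per worker-hour
after  = productivity(1_000_000, 6_000)    # same output, 40% fewer hours after automation

print(f"before automation: ${before:.0f}/hour")
print(f"after automation:  ${after:.0f}/hour")   # ~$167/hour, on paper
```

On paper, cutting worker-hours wins every time. The puzzle is why the measured numbers haven’t cooperated.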

We need to look for a different explanation.

It just might be that robotic adoption is another example of too much of a good thing. It might be that reliance on technology could prove to be less effective than something about the people making up the work force.

I’m suggesting that because I’ve been led to believe that work forces in the Far Eastern developing economies are less skillful, may have lower expectations, and are more tolerant of authoritarian governments.

Why would those traits make a difference? I’ll take them one at a time to suggest how they might.

The impression that Far Eastern populations are less skillful is not easy to demonstrate. Nobody who’s dealt with people of Asian extraction in either an educational or work-force setting would ever imagine they are at all deficient in either intelligence or motivation. On the other hand, as emerging or developing economies those countries are likely more dependent on workers newly recruited from rural, agrarian settings, who are likely less acclimated to manufacturing and industrial environments. On this basis, one may posit that the available workers may prove less skillful in a manufacturing setting.

It’s a weak argument, but it exists.

The idea that people making up Far-Eastern work forces have lower expectations than those in more developed economies is on firmer footing. Workers in Canada, the U.S. and Europe have very high expectations for how they should be treated. Wages are higher. Benefits are more generous. Upward mobility perceptions are ingrained in the cultures.

For developing economies, not so much.

Then, we come to tolerance of authoritarian regimes. Tolerance of authoritarianism goes hand-in-hand with tolerance for the usual authoritarian vices of graft, lack of personal freedom and social immobility. Only those believing populist political propaganda think differently (which is the danger of populism).

What’s all this got to do with manufacturing productivity?

Lack of skill, low expectations and patience under authority are not conducive to high productivity. People are productive when they work hard. People work hard when they are incentivized. They are incentivized to work when they believe that working harder will make their lives better. It’s not hard to grasp!

Installing robots in a plant won’t by itself lead human workers to believe that working harder will make their lives better. If anything, it’ll do the opposite. They’ll start worrying that their lives are about to take a turn for the worse.

Maybe that has something to do with why increased automation has failed to increase productivity.

Computers Are Revolting!

Will Computers Revolt? cover
Charles Simon’s Will Computers Revolt? looks at the future of interactions between artificial intelligence and the human race.

14 November 2018 – I just couldn’t resist the double meaning allowed by the title for this blog posting. It’s all I could think of when reading the title of Charles Simon’s new book, Will Computers Revolt? Preparing for the Future of Artificial Intelligence.

On one hand, yes, computers are revolting. Yesterday my wife and I spent two hours trying to figure out how to activate our Netflix account on my new laptop. We ended up having to change the email address and password associated with the account. And, we aren’t done yet! The nice lady at Netflix sadly informed me that in thirty days, their automated system would insist that we re-log-into the account on both devices due to the change.

That’s revolting!

On the other hand, the uprising has already begun. Computers are revolting in the sense that they’re taking power to run our lives.

We used to buy stuff just by picking it off the shelf, then walking up to the check-out counter and handing over a few pieces of green paper. The only thing that held up the process was counting out the change.

Later, when credit cards first reared their ugly heads, we had to wait a few minutes for the salesperson to fill out a sales form, then run our credit cards through the machine. It was all manual. No computers involved. It took little time once you learned how to do it, and, more importantly, the process was pretty much the same everywhere and never changed, so once you learned it, you’d learned it “forever.”

Not no more! How much time do you, I, and everyone else waste navigating the multiple pages we have to walk through just to pay for anything with a credit or debit card today?

Even worse, every store has different software using different screens to ask different questions. So, we can’t develop a habitual routine for the process. It’s different every time!

Not long ago the banks issuing my debit-card accounts switched to those %^&^ things with the chips. I always forget to put the thing in the slot instead of swiping the card across the magnetic-stripe reader. When that happens we have to start the process all over, wasting even more time.

The computers have taken over, so now we have to do what they tell us to do.

Now we know who’ll be first against the wall when the revolution comes. It’s already here and the first against the wall is us!

Golem Literature in Perspective

But seriously folks, Simon’s book is the latest in a long tradition of works by thinkers fascinated by the idea that someone could create an artifice that would pass for a human. Perhaps the earliest, and certainly the most iconic, are the golem stories from Jewish folklore. I suspect (on no authority whatsoever, but it does seem likely) that the idea of a golem appeared about the time human sculptors started making statues in realistic human form. That was very early, indeed!

A golem is, for those who aren’t familiar with the term or willing to follow the link provided above to learn about it, an artificial creature fashioned by a human that is effectively what we call a “robot.” The folkloric golems were made of clay or (sometimes) wood because those were the best materials available at the time for the stories’ authors to have their artists work with. A well-known golem story is Carlo Collodi’s The Adventures of Pinocchio.

By the sixth century BCE, Greek sculptors had begun to produce lifelike statues. The myth of Pygmalion and Galatea appeared in a pseudo-historical work by Philostephanus Cyrenaeus in the third century BCE. Pygmalion was a sculptor who made a statue representing his ideal woman, then fell in love with it. Aphrodite granted his prayer for a wife exactly like the statue by bringing the statue to life. The wife’s name was Galatea.

The Talmud points out that Adam started out as a golem. Like Galatea, Adam was brought to life when the Hebrew God Yahweh gave him a soul.

These golem examples emphasize the idea that humans, no matter how holy or wise, cannot give their creations a soul. The best they can do is to create automatons.

Simon effectively begs to differ. He spends the first quarter of his text laying out the case that it is possible, and indeed inevitable, that automated control systems displaying artificial general intelligence (AGI) capable of thinking at or (eventually) well above human capacity will appear. He spends the next half of his text showing how such AGI systems could be created and making the case that they will eventually exhibit functionality indistinguishable from consciousness. He devotes the rest of his text to speculating about how we, as human beings, will likely interact with such hyperintelligent machines.

Spoiler Alert

Simon’s answer to the question posed by his title is a sort-of “yes.” He feels AGIs will inevitably displace humans as the most intelligent beings on our planet, but won’t exactly “revolt” at any point.

“The conclusion,” he says, “is that the paths of AGIs and humanity will diverge to such an extent that there will be no close relationship between humans and our silicon counterparts.”

There won’t be any violent conflict because robotic needs are sufficiently dissimilar to ours that there won’t be any competition for scarce resources, which is what leads to conflict between groups (including between species).

Robots, he posits, are unlikely to care enough about us to revolt. There will be no Terminator robots seeking to exterminate us because they won’t see us as enough of a threat to bother with. They’re more likely to view us much the way we view squirrels and birds: pleasant fixtures of the natural world.

They won’t, of course, tolerate any individual humans who make trouble for them the same way we wouldn’t tolerate a rabid coyote. But, otherwise, so what?

So, the !!!! What?

The main value of Simon’s book is not in its ultimate conclusion. That’s basically informed opinion. Rather, its value lies in the voluminous detail he provides in getting to that conclusion.

He spends the first quarter of his text detailing exactly what he means by AGI. What functions are needed to make it manifest? How will we know when it rears its head (ugly or not, as a matter of taste)? How will a conscious, self-aware AGI system act?

A critical point Simon makes in this section is the assertion that AGI will arise first in autonomous mobile robots. I thoroughly agree for pretty much the same reasons he puts forth.

I first started seriously speculating about machine intelligence back in the middle of the twentieth century. I never got too far – certainly not as far as Simon gets in this volume – but pretty much the first thing I actually did realize was that it was impossible to develop any kind of machine with any recognizable intelligence unless its main feature was having a mobile body.

Developing any AGI feature requires the machine to have a mobile body. It has to take responsibility not only for deciding how to move itself about in space, but for figuring out why. Why, for example, would it rather be over there than just stay here? Note that biological intelligence arose in animals, not in plants!

Simultaneously with reading Simon’s book, I was re-reading Robert A. Heinlein’s 1966 novel The Moon is a Harsh Mistress, which is one of innumerable fiction works whose plot hangs on actions of a superintelligent sentient computer. I found it interesting to compare Heinlein’s early fictional account with Simon’s much more informed discussion.

Heinlein sidesteps the mobile-body requirement by making his AGI arise in a computer tasked with operating the entire infrastructure of the first permanent human colony on the Moon (more accurately in the Moon, since Heinlein’s troglodytes burrowed through caves and tunnels, coming up to the surface only reluctantly when circumstances forced them to). He also avoids trying to imagine the AGI’s inner workings, glossing over them with the 1950s technology he was most familiar with.

In his rather longish second section, Simon leads his reader through a thought experiment speculating about what components an AGI system would need to have for its intelligence to develop. What sorts of circuitry might be needed, and how might it be realized? This section might be fascinating for those wanting to develop hardware and software to support AGI. For those of us watching from our armchairs on the outside, though, not so much.

Altogether, Charles Simon’s Will Computers Revolt? is an important book that’s fairly easy to read (or, at least as easy as any book this technical can be) and accessible to a wide range of people interested in the future of robotics and artificial intelligence. It is not the last word on this fast-developing field by any means. It is, however, a starting point for the necessary debate over how we should view the subject. Do we have anything to fear? Do we need to think about any regulations? Is there anything to regulate and would any such regulations be effective?

You Want to Print WHAT?!

3D printed plastic handgun
The Liberator gun, designed by Defense Distributed. Photo taken 16 May 2013 by Vvzvlad – Flickr: Liberator.3d.gun.vv.01, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=26141469

22 August 2018 – Since the Fifteenth Century, when Johannes Gutenberg invented the technology, printing has given those in authority ginky fits. Until recently it was just dissemination of heretical ideas, a la Giordano Bruno, that raised authoritarian hackles. More recently, 3-D printing, also known as additive manufacturing (AM), has made it possible to make things that folks who like to tell you what you’re allowed to do don’t want you to make.

Don’t get me wrong, AM makes it possible to do a whole lot of good stuff that we only wished we could do before. For example, Jay Leno uses it to replace irreplaceable antique car parts. It’s a really great technology that lets you make just about anything you can describe in a digital drafting file without the difficulty and mess of hiring a highly skilled fabrication machine shop.

In my years as an experimental physicist, I dealt with fabrication shops a lot! Fabrication shops are collections of craftsmen and the equipment they need to make one-off examples of amazing stuff. It’s generally stuff, however, that nobody’d want to make a second time.

Like the first one of anything.

The reason I specify that AM is for making stuff nobody’d want to make a second time is that it’s slow. Regular machine-shop work covers things folks want to make a lot of, like nuts and bolts and sewing machines; for those, it’s worth spending a lot of time figuring out fast, efficient, and cheap ways to make lots of them.

Take microchips. The darn things take huge amounts of effort to design, and tens of billions of dollars of equipment to make, but once you’ve figured it all out and set it all up, you can pop the things out like chocolate-chip cookies. You can buy an Intel Celeron G3900 dual-core 2.8 GHz desktop processor online for $36.99 because Intel spread the hideous quantities of up-front cost it took to originally set up the production line over the bazillions of processors that production line can make. It’s called “economy of scale.”
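As a back-of-the-envelope sketch (the figures below are purely hypothetical, not Intel’s actual costs), here’s how spreading a fixed up-front cost over more and more units drives the cost of each one down:

```python
# Hypothetical numbers only: unit cost = (fixed setup cost / units made) + variable cost per unit.

def unit_cost(fixed_setup: float, variable_per_unit: float, units: int) -> float:
    return fixed_setup / units + variable_per_unit

FIXED_SETUP = 10_000_000_000  # say, $10B to design the chip and equip the fab
VARIABLE = 5.0                # a few dollars of silicon, packaging, and test per chip

for units in (1, 1_000, 1_000_000, 1_000_000_000):
    print(f"{units:>13,} units -> ${unit_cost(FIXED_SETUP, VARIABLE, units):,.2f} each")
```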

If you’re only gonna make one, or just a few, of the things, there’s no economy of scale.

But, if you’re only gonna make one, or just a few, you don’t worry too much about how long it takes to make each one, and what it costs is what it costs.

So, you put up with doing it some way that’s slow.

Like AM.

A HUGE advantage of making things with AM is that you don’t have to be all that smart. Once you learn to set the 3-D printer up, you’re all set. You just download the digital computer-aided-manufacturing (CAM) file into the printer’s artificial cerebrum, and it just DOES it. If you can download the file over the Internet, you’ve got it knocked!

Which brings us to what I want to talk about today: 3-D printing of handguns.

Now, I’m not any kind of anti-gun nut. I’ve got half a dozen firearms laying around the house, and have had since I was a little kid. It’s a family thing. I learned sharpshooting when I was around ten years old. Target shooting is a form of meditation for me. A revolver is my preferred weapon if ever I have need of a weapon. I never want to be the one who brought a knife to a gunfight!

That said, I’ve never actually found a need for a handgun. The one time I was present at a gunfight, I hid under the bed. The few times I’ve been threatened with guns, I talked my way out of it. Experience has led me to believe that carrying a gun is the best way to get shot.

I’ve always agreed with my gunsmith cousin that modern guns are beautiful pieces of art. I love their precision and craftsmanship. I appreciate the skill and effort it takes to make them.

The good ones, that is.

That’s the problem with AM-made guns. It takes practically no skill to create them. They’re right up there with the zip guns we talked about when we were kids.

We never made zip guns. We talked about them. We talked about them in the same tones we’d use talking about leeches or cockroaches. Ick!

We’d never make them because they were beneath contempt. They were so crude we wouldn’t dare fire one. Junk like that’s dangerous!

Okay, so you get an idea how I would react to the news that some nut case had published plans online for 3-D printing a handgun. It’s bad enough that somebody would design such a monstrosity, but what about the idiot stupid enough to download the plans and make such a thing? Even more unbelievable, what moron would want to fire it?

Have they no regard for their hands? Don’t they like their fingers?

Anyway, not long ago, a press release crossed my desk from Giffords, the gun-safety organization set up by former U.S. Rep. Gabrielle Giffords and her husband after she survived an assassination attempt in Arizona in 2011. The Giffords’ press release praised Senators Richard Blumenthal and Bill Nelson, and Representatives David Cicilline and Seth Moulton for introducing legislation to stop untraceable firearms from further proliferation after the Trump Administration cleared the way for anyone to 3-D-print their own guns.

Why “untraceable” firearms, and what have they got to do with AM?

Downloadable plans for producing guns by the AM technique put firearms in the hands of folks too unskilled and, yes, stupid to make them themselves with ordinary methods. That’s important because it is theoretically possible to AM-produce firearms of surprising sophistication. The first one offered was a cheap plastic thing (depicted above) that would likely be more danger to its user than to its intended victim. More recent offerings, however, have been repeating weapons made of more robust materials that might successfully be used to commit crimes.

Ordinary firearm production techniques require a level of skill and investment in equipment that puts them above the radar for federal regulators. The first thing those regulators require is licensing the manufacturer. Units must be serialized, and records must be kept. The system certainly isn’t perfect, but it gives law enforcement a fighting chance when the products are misused.

The old zip guns snuck in under the radar, but they were so crude and dangerous to their users that they were seldom even made. Almost anybody with enough sense to make them had enough sense not to make them! Those who got their hands on the things were more a danger to themselves than to society.

The Trump administration’s recent settlement with Defense Distributed, which allows the company to relaunch its website, include a searchable database of firearm blueprints, and let the public create their own fully functional, unserialized firearms using AM technology, opens the floodgates for dangerous people to make their own untraceable firearms.

That’s just dumb!

The Untraceable Firearms Act would prohibit the manufacture and sale of firearms without serial numbers; require any person or business engaged in the business of selling firearm kits and unfinished receivers to obtain a dealer’s license and conduct background checks on purchasers; and mandate that anyone who runs a business putting together firearms or finishing receivers obtain a manufacturer’s license and put serial numbers on firearms before offering them for sale to consumers.

The 3-D Printing Safety Act would prohibit the online publication of computer-aided design (CAD) files which automatically program a 3-D-printer to produce or complete a firearm. That closes the loophole letting malevolent idiots evade scrutiny by making their own firearms.

We have to join with Giffords in applauding the legislators who introduced these bills.

Do You Really Want a Robotic Car?

Robot Driver
Sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car.” Mopic/Shutterstock

15 August 2018 – Many times in my blogging career I’ve gone on a rant about the three Ds of robotics. These are “dull, dirty, and dangerous.” They relate to the question, which is not asked often enough, “Do you want to build an automated system to do that task?”

The reason the question is not asked enough is that it needs to be asked whenever anyone looks into designing a system (whether manual or automated) to do anything. The fact that systems ever get set up without anyone first asking that question means it’s not asked enough.

When asking the question, getting a hit on any one of the three Ds tells you to at least think about automating the task. Getting a hit on two of them should make you think that your task is very likely to be ripe for automation. If you hit on all three, it’s a slam dunk!
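Here’s a tongue-in-cheek way of writing that rule of thumb down (my own toy scoring, not anybody’s official methodology):

```python
# Toy scoring of the three Ds: the more of them a task hits, the stronger the case for automating it.

def automation_case(dull: bool, dirty: bool, dangerous: bool) -> str:
    hits = sum((dull, dirty, dangerous))
    return {
        0: "no obvious case; probably leave it to humans",
        1: "at least think about automating it",
        2: "very likely ripe for automation",
        3: "slam dunk",
    }[hits]

# e.g., inspecting a sewer pipe arguably hits all three Ds
print(automation_case(dull=True, dirty=True, dangerous=True))  # "slam dunk"
```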

When we look into developing automated vehicles (AVs), we get hits on “dull” and “dangerous.”

Driving can be excruciatingly dull, especially if you’re going any significant distance. That’s why people fall asleep at the wheel. I daresay everyone has fallen asleep at the wheel at least once, although we almost always wake up before hitting the bridge abutment. It’s also why so many people drive around with cellphones pressed to their ears. The temptation to multitask while driving is almost irresistible.

Driving is also brutally dangerous. Tens of thousands of people die every year in car crashes. It’s pretty safe to say that nearly all those fatalities involve cars driven by humans. The number of people killed in accidents involving driverless cars can (as of this writing) be counted on one hand.

I’m not prepared to go into statistics comparing safety of automated vehicles vs. manually driven ones. Suffice it to say that eventually we can expect AVs to be much safer than manually driven vehicles. We’ll keep developing the technology until they are. It’s not a matter of if, but when.

This is the analysis most observers (if they analyze it at all) come up with to prove that vehicle driving should be automated.

Yet, opinions that AVs are acceptable, let alone inevitable, are far from universal. In a survey of 3,000 people in the U.S. and Canada, Ipsos Strategy 3 found that sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car,” and a whopping 39% of Americans (and 30% of Canadians) would rarely or never let driverless technology do the parking!

Why would so many people be unwilling to give up their driving privileges? I submit that it has to do with a parallel consideration that is every bit as important as the three Ds when deciding whether to automate a task:

Don’t Automate Something Humans Like to Do!

Predatory animals, especially humans, like to drive. It’s fun. Just ask any dog who has a chance to go for a ride in a car! Almost universally they’ll jump into the front seat as soon as you open the door.

In fact, if you leave them (dogs) in the car unattended for a few minutes, they’ll be behind the wheel when you come back!

Humans are the same way. Leave ’em unattended in a car for any length of time, and they’ll think of some excuse to get behind the wheel.

The Ipsos survey found that some 61% of both Americans and Canadians identify themselves as “car people.” When asked “What would have to change about transportation in your area for you to consider not owning a car at all, or owning fewer cars?” 39% of Americans and 38% of Canadians responded “There is nothing that would make me consider owning fewer cars!”

That’s pretty definitive!

Their excuse for getting behind the wheel is largely an economic one: seventy-eight percent of Americans claim they “definitely need to have a vehicle to get to work.” In more urbanized Canada (you did know that Canadians cluster more into cities, didn’t you?) that drops to 53%.

Whether the claim that they “have” to have a car to get to work is based on historical precedent, urban planning, wishful thinking, or flat-out just what they want to believe, it’s a good, cogent reason why folks, especially Americans, hang onto their steering wheels for dear life.

The moral of this story is that driving is something humans like to do, and getting them to give it up will be a serious uphill battle for anyone wanting to promote driverless cars.

Yet, development of AV technology is going full steam ahead.

Is that just another example of Dr. John Bridges’ version of Solomon’s proverb, “A fool and his money are soon parted”?

Possibly, but I think not.

Certainly, the idea of spending tons of money to have bragging rights for the latest technology, and to take selfies showing you reading a newspaper while your car drives itself through traffic has some appeal. I submit, however, that the appeal is short lived.

For one thing, reading in a moving vehicle is the fastest route I know of to motion sickness. It’s right up there with cueing up the latest Disney cartoon feature for your kids on the overhead DVD player in your SUV, then cleaning up their vomit.

I, for one, don’t want to go there!

Sounds like another example of “More money than brains.”

There are, however, a wide range of applications where driving a vehicle turns out to be no fun at all. For example, the first use of drone aircraft was as targets for anti-aircraft gunnery practice. They just couldn’t find enough pilots who wanted to be sitting ducks to be blown out of the sky! Go figure.

Most commercial driving jobs could also stand to be automated. For example, almost nobody actually steers ships at sea anymore. They generally stand around watching an autopilot follow a pre-programmed course. Why? As a veteran boat pilot, I can tell you that the captain has a lot more fun than the helmsman. Piloting a ship from, say, Calcutta to San Francisco has got to be mind-numbingly dull. There’s nothing going on out there on the ocean.

Boat passengers generally spend most of their time staring into the wake, but the helmsman doesn’t get to look at the wake. He (or she) has to spend their time scanning a featureless horizontal line separating a light-blue dome (the sky) from a dark-blue plane (the sea) in the vain hope that something interesting will pop up and relieve the tedium.

Hence the autopilot.

Flying a commercial airliner is similar. It has been described (as have so many other tasks) as “hours of boredom punctuated by moments of sheer terror!” While such activity is very Zen (I’m convinced that humans’ ability to meditate was developed by cave guys having to sit for hours, or even days, watching game trails for their next meal to wander along), it’s not top-of-the-line fun.

So, sometimes driving is fun, and sometimes it’s not. We need AV technology to cover those times when it’s not.