Falling Out of the Sky

B737 Max taking off
Thai Lion Air Boeing 737 Max 9 taking off from Don Mueang International Airport in Bangkok, Thailand. Komenton / Shutterstock.com

3 April 2019 – On 29 October 2018, Lion Air flight 610 crashed soon after takeoff from Soekarno–Hatta International Airport in Jakarta, Indonesia. This is not the sort of thing we report in this blog – it's straight news, and we leave that to straight-news media – but I'm diving into it because it involves technology I'm quite familiar with, and I might be able to help readers make sense of what happened and judge the often-uninformed reactions to it.

I claim to have the background to understand what happened because I’ve been flying light planes since the 1990s. I also put two years into a post-graduate Aerospace Engineering Program at Arizona State University concentrating on fluid dynamics. That’s enough background to make some educated guesses at what happened to Lion Air 610 as well as in the almost identical crash of an Ethiopian Airlines Boeing 737 MAX in Addis Ababa, Ethiopia on 10 March 2019.

First, both airliners were recently commissioned Boeing 737 MAX aircraft using standard-equipment installations of Boeing’s new Maneuvering Characteristics Augmentation System (MCAS).

How to Stall an Aircraft

In aerodynamics the word “stall” means something quite unlike what most people expect. Most people encounter the word in an automobile context, where it refers to “stalling the engine.” That happens when you overload an internal-combustion engine. That is, you pull more power out than the engine can produce at its current operating speed. When that happens, the engine simply stops.

It turns from a power-producing machine to a boat anchor in a heartbeat. Your car stops with a lurch and everyone behind you starts swearing and blowing their horns in an effort to make you feel even worse than you already do.

That’s not what happens when an airplane stalls. It’s not the aircraft’s engine that stalls, but its wings. There are similarities in that, like engines, wings stall when they’re overloaded and when stalled they start producing drag like a boat anchor, but that’s about where the similarities end.

When an aircraft stalls, nobody swears and blows their horn. Instead, they scream and die.

Why? Well, wings are supposed to lift the aircraft and support it in the air. If you’ve ever tried to carry a sheet of plywood on a windy day you’ve experienced both lift and drag. If you let the sheet tip up a little bit so the wind catches it underneath, it tries to fly up out of your hands. That’s the lift an airplane gets by tipping its wings up into the air stream as it moves forward into the air.

The more you tip the sheet up, the more lift you get for the same airspeed. That is, until you reach a certain attack angle (the angle between the sheet and the wind). Stalling begins suddenly at an attack angle of about 15°. Then, all of a sudden, the force lifting the sheet changes from up and a little back to no up, and a lot of back!

That’s a wing stall.

The aircraft stops imitating a bird, and starts imitating a rock.

You suddenly get a visceral sense of the concept “down.”

‘Cause that’s where you go in a hurry!

At that point, all you can do is point the nose down (so the wing’s forward edge starts pointing in the direction you’re moving: down!).

If you’ve got enough space underneath your aircraft so the wing starts flying again before you hit the ground, you can gently pull the aircraft’s nose back up to resume straight and level flight. If not, that’s when the screaming starts.

Wings stall when they’re flying too slowly to generate the required lift without exceeding that 15° angle of attack. At higher speeds, the wing can generate the needed lift with less angle of attack, and worries about stalling never come up.
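That relationship between speed and lift falls straight out of the standard lift equation, L = ½ρv²SC_L. Here’s a quick sketch of the arithmetic – the airplane numbers below are made up for illustration, roughly in the ballpark of a small light plane:

```python
from math import sqrt

def stall_speed(weight_n, wing_area_m2, cl_max, air_density=1.225):
    """Speed below which the wing can't generate enough lift.

    From the lift equation L = 0.5 * rho * v^2 * S * C_L:
    set L equal to the aircraft's weight, use the maximum lift
    coefficient (reached at roughly 15 degrees angle of attack),
    and solve for v.
    """
    return sqrt(2 * weight_n / (air_density * wing_area_m2 * cl_max))

# Invented, plausible numbers for a small light plane:
v = stall_speed(weight_n=10_000, wing_area_m2=16.0, cl_max=1.5)
print(f"stall speed ≈ {v:.1f} m/s")  # about 26 m/s, or roughly 50 knots
```

Below that speed, no amount of pulling back on the yoke will keep the airplane flying; above it, the wing can make the same lift at a smaller, safer angle of attack.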

So, now you know all you need to know (or want to know) about stalling an aircraft.

MCAS

Boeing’s MCAS is an anti-stall system. Its beating heart is a bit of software running on the flight-control computer that monitors a number of sensor inputs, like airspeed and angle of attack. In simple terms, it knows exactly how much attack angle the wings can stand before stalling out. If it sees that, for some reason, the attack angle is getting too high, it assumes the pilot has screwed up. It takes control and pushes the nose down.

It doesn’t have to actually “take control” because in modern commercial aircraft the flight-control computer already mediates what the control surfaces do. The pilot’s “yoke” (the little wheel they get to twist and turn and move forward and back) and the rudder pedals they push to steer (push right, go right) just send signals to the computer to tell it what they want to have happen. In a sense, the pilot negotiates with the computer about what the airplane should do. (Strictly speaking, the 737 isn’t a fully “fly by wire” design like some newer airliners – MCAS does its nose-pushing through the horizontal-stabilizer trim – but the principle is the same.)

The pilot makes suggestions (through the yoke, pedals and throttle control – collectively called the “cockpit flight controls”); the computer then takes that information, combines it with all the other information provided by a plethora (Do you like that word? I do!) of additional sensors; thinks about it for a microsecond; then, finally, the computer tells the aircraft’s control surfaces to move smoothly to a position that it (the computer) thinks will make the aircraft do what it wants it to do.
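To make that negotiation concrete, here’s a hugely simplified toy sketch of the kind of logic involved. This is my illustration, not Boeing’s actual code; the thresholds and the fixed nose-down command are invented:

```python
def flight_computer_step(pilot_pitch_cmd, sensed_aoa, aoa_limit=15.0):
    """Toy sketch of automatic stall protection.

    Normally the computer passes the pilot's pitch command through.
    But if the sensed angle of attack (AoA) looks dangerously high,
    it overrides the pilot and commands the nose down.
    All numbers here are invented for illustration.
    """
    if sensed_aoa >= aoa_limit:
        # Near a stall: push the nose down regardless of pilot input.
        return -5.0
    return pilot_pitch_cmd

# Normal flight: the pilot's command goes through unchanged.
print(flight_computer_step(pilot_pitch_cmd=3.0, sensed_aoa=6.0))   # 3.0

# Sensor reads a dangerously high AoA: the pilot pulls up,
# but the computer pushes the nose down anyway.
print(flight_computer_step(pilot_pitch_cmd=3.0, sensed_aoa=18.0))  # -5.0
```

Notice the trap: the override fires on the *sensed* angle of attack. If the sensor is lying, the computer fights a pilot who is doing nothing wrong, which is exactly the failure mode described below.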

That’s all well and good when the reason the attack angle got too high is just that something happened that broke the pilot’s concentration, and he (or she) actually screwed up. What about when the pilot actually wants to stall the aircraft?

For example, on landing.

To land a plane, you slow it way down, so the wing’s almost stalled. Then, you fly it really close to the ground so the wheels almost touch the runway. Then you stall the wing so the wheels touch the ground just as the wings lose lift. You hear a satisfying “squeak” as the wheels momentarily skid while spinning up to match the relative speed of the runway. Finally, the wheels gently settle down, taking up the weight of the aircraft. The flight crew (and a few passengers who’ve been paying attention) cheer the pilot for a job well done, and the pilot starts breathing again.

Anti-stall systems don’t do much good during a landing, when you’re trying to intentionally stall the wings at just the right time.

Similarly, they don’t do much good when you’re taking off, and the pilot’s just trying to get the wings unstalled to get the aircraft into the air in the first place.

For those times, you want the MCAS turned off! So you’ve gotta be able to do that, too. Or, if your pilot is too absent-minded to shut it off when it’s not needed, you need it to shut off automatically.

When Things Go Wrong

So, what happened in those two airliner crashes?

Remember that the main input into the MCAS is an attack-angle sensor? Attack-angle sensors, like any other piece of technology, can go bad, especially if they’re exposed to weather. And, airliners are exposed to weather 24/7 except when they’re brought into a hangar for repair.

The working hypothesis for what happened to both airliners is that the attack-angle sensors failed. They jammed in a position where they erroneously reported a high angle-of-attack to the MCAS, which jumped to the conclusion “pilot error,” and pushed the nose down. When the pilot(s) tried to pull the nose back up (because their windshield filled up with things that looked a lot like ground instead of sky), the MCAS said: “Nope! You’re going down, Jack!”

By the time the pilots figured out what was wrong and looked up how to shut the MCAS off, they’d actually hit the things that looked too much like ground.

Why didn’t the MCAS figure out there was something wrong with the sensor?

How’s it supposed to know?

The sensor says the nose is pointed up, so the computer takes it at its word. Computers aren’t really very smart, and tend to be quite literal. The sensor says the nose is pointed up, so the computer thinks the nose is pointed up, and tries to point it down (or at least less up). End of story. And, in the real world, it’s “end of aircraft” as well.

If the pilot(s) try to tell the computer to pull the nose up (by desperately pulling back on the yoke), it figures they’re screw-ups, anyway, and won’t listen.

Ever try to argue with a computer? Been there, done that. It doesn’t work.

Mea Culpa

When I learned about the hypothesis of attack-angle-sensor failure causing the crashes that took nearly four hundred lives, I got this awful sick feeling that was a mixture of embarrassment and guilt. You see, a decade and a half ago my research project at ASU was an effort to develop a different style of attack-angle sensor. Several events and circumstances combined to make me abandon that research project and, in fact, the whole Ph.D. program it was a part of. In my defense, it was the start of a ten-year period in which I couldn’t get anything right!

But, if I’d stuck it out and developed that sensor it might have been installed on those airliners and might not have failed at all. Of course, it could have been installed and failed in some other spectacular way.

You see, the attack angle sensor that apparently was installed consisted of a little vane attached to one side of the aircraft’s nose. Just like the wind sock traditionally hung outside airports the world over, wind pressure makes the vane line up downstream of the wind direction. A little angle sensor attached to the vane reports the wind direction relative to the nose: the attack angle.

I got involved in trying to develop an alternative attack-angle sensor because I have a horror of relying on sensors that depend on mechanical movement to work. If you’re relying on mechanical movement, it means you’re relying on bearings, and bearings can corrode and wear out and fail. The sensor I was working on relied on differences in air pressure that depended on the direction the wind hit the sensor.

In actual fact, there were two attack-angle sensors attached to the doomed aircraft – one on each side of the nose – but the Boeing MCAS was paying attention to only one of them. That was Boeing’s second mistake (the first being not using the sensor I hadn’t developed, so I guess they can’t be blamed for it). If the MCAS had been paying attention to both sensors, it would have known something in its touchy-feely universe was wrong. It might have been a little more reluctant to override the pilots’ input.

The third mistake (I believe) Boeing made was to downplay the differences between the new “Max” version of the aircraft and the older version. They’d changed the engines, which (as any aerospace engineer knows) necessitates changes in everything else. Aircraft are so intricately balanced machines that every time you change one thing, everything else has to change – or at least has to be looked at to see if it needs to be changed.

The new engines had improved performance, which affects just about everything involving the aircraft’s handling characteristics. Boeing had apparently tried to make the more-powerful yet more fuel-efficient aircraft handle like the old aircraft. There were, of course, differences, which the company tried to pretend would make no difference to the pilots. The MCAS was one of those things that was supposed to make the “Max” version handle just like the non-Max version.

So, when something went wrong in “Max” land, it caught the pilots, who had thousands of hours experience with non-Max aircraft, by surprise.

The latest reports are that Boeing, the FAA, and the airlines have identified the problems that caused these crashes (I hope they understand them a lot better than I do, because, after all, it’s their job to!), and have worked out a number of fixes.

First, the MCAS will pay attention to two attack-angle sensors. At least then the flight-control computer will have an indication that something is wrong and tell the MCAS to go back in its corner and shut up ‘til the issue is sorted out.

Second, they’ll install a little blinking light that effectively tells the pilots “there’s something wrong, so don’t expect any help from the MCAS ‘til it gets sorted out.”

Third, they’ll make sure the pilots have a good, positive way of emphatically shutting the MCAS off if it starts to argue with them in an emergency. And, they’ll make sure the pilots are trained to know when and how to use it.
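The first of those fixes – cross-checking two sensors – might look something like this toy sketch. It’s my illustration of the idea, not Boeing’s actual logic, and the disagreement threshold is invented:

```python
def aoa_with_crosscheck(left_aoa, right_aoa, max_disagreement=5.0):
    """Toy cross-check of two angle-of-attack sensors.

    If the two sensors disagree by more than a threshold (the value
    here is invented), declare the reading unreliable instead of
    trusting either one; otherwise return their average.
    """
    if abs(left_aoa - right_aoa) > max_disagreement:
        # Sensors disagree: something's broken, so the anti-stall
        # system should stand down rather than override the pilots.
        return None
    return (left_aoa + right_aoa) / 2.0

print(aoa_with_crosscheck(4.0, 4.6))   # sensors agree: usable reading
print(aoa_with_crosscheck(22.0, 4.5))  # None: MCAS should shut up
```

With only one sensor there is nothing to compare against, which is why a single jammed vane could convince the computer the nose was pointed at the sky.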

My understanding is that these fixes are already part of the options that American commercial airlines have generally installed, which is supposedly why the FAA, the airlines and the pilots’ union have been dragging their feet about grounding Boeing’s 737 Max fleet. Let’s hope they’re not just blowing smoke (again)!

Luddites’ Lament

Luddites attack
An owner of a factory defending his workshop against Luddites intent on destroying his mechanized looms between 1811 and 1816. Everett Historical/Shutterstock

27 March 2019 – A reader of last week’s column, in which I reported recent opinions voiced by a few automation experts at February’s Conference on the Future of Work held at Stanford University, informed me of a chapter from Henry Hazlitt’s 1988 book Economics in One Lesson that Australian computer scientist Steven Shaw uploaded to his blog.

I’m not going to get into the tangled web of potential copyright infringement that Shaw’s posting of Hazlitt’s entire text opens up; I’ve just linked to the most convenient-to-read posting of that particular chapter. If you follow the link and want to buy the book, I’ve given you the appropriate link as well.

The chapter is of immense value apropos the question of whether automation generally reduces the need for human labor, or creates more opportunities for humans to gain useful employment. Specifically, it looks at the results of a number of historic events where Luddites excoriated technology developers for taking away jobs from humans only to have subsequent developments prove them spectacularly wrong.

Hazlitt’s classic book is, not surprisingly for a classic, well documented, authoritative, and extremely readable. I’m not going to pretend to provide an alternative here, but will summarize some of the chapter’s examples in the hope that you’ll be intrigued enough to seek out the original.

Luddism

Before getting on to the examples, let’s start by looking at the history of Luddism. It’s not a new story, really. It probably dates back to just after cave guys first thought of specialization of labor.

That is, sometime in the prehistoric past, some blokes were found to be especially good at doing some things, and the rest of the tribe came up with the idea of letting, say, the best potters make pots for the whole tribe, and everyone else rewarding them for a job well done by, say, giving them choice caribou parts for dinner.

Eventually, they had the best flint knappers make the arrowheads, the best fletchers put the arrowheads on the arrows, the best bowmakers make the bows, and so on. Division of labor into different jobs turned out to be so spectacularly successful that those rugged individualists who pretend to do everything for themselves are now few and far between (and are largely kidding themselves, anyway).

Since then, anyone who comes up with a great way to do anything more efficiently runs the risk of having the folks who spent years learning to do it the old way land on him (or her) like a ton of bricks.

It’s generally a lot easier to throw rocks to drive the innovator away than to adapt to the innovation.

Luddites in the early nineteenth century were organized bands of workers who violently resisted mechanization of factories during the late Industrial Revolution. They were named for an imaginary character, Ned Ludd, supposedly an apprentice who smashed two stocking frames in 1779 and whose name had become emblematic of machine destroyers. The term “Luddite” has come to mean anyone fanatically opposed to deploying advanced technology.

Of course, like religious fundamentalists, they have to pick a point in time to separate “good” technology from the “bad.” Unlike religious fanatics, who generally pick publication of a certain text to be the dividing line, Luddites divide between the technology of their immediate past (with which they are familiar) and anything new or unfamiliar. Thus, it’s a continually moving target.

In either case, the dividing line is fundamentally arbitrary, so the emotion of their response is irrational. Irrationality typically carries a warranty of being entirely contrary to facts.

What Happens Next

Hazlitt points out, “The belief that machines cause unemployment, when held with any logical consistency, leads to preposterous conclusions.” He points out that on the second page of the first chapter of Adam Smith’s seminal book Wealth of Nations, Smith tells us that a workman unacquainted with the use of machinery employed in sewing-pin-making “could scarce make one pin a day, and certainly could not make twenty,” but with the use of the machinery he can make 4,800 pins a day. So, zero-sum game theory would indicate an immediate 99.98 percent unemployment rate in the pin-making industry of 1776.

Did that happen? No, because economics is not a zero-sum game. Sewing pins went from dear to cheap. Since they were now cheap, folks prized them less and discarded them more (when was the last time you bothered to straighten a bent pin?), and more folks could afford to buy them in the first place. That led to an increase in sewing-pin sales as well as sales of things like sewing-patterns and bulk fine fabric sold to amateur sewers, and more employment, not less.

Similar results obtained in the stocking industry when new stocking frames (the original having been invented by William Lee in 1589, but denied a patent by Elizabeth I, who feared its effects on employment in the hand-knitting industries) were protested by Luddites as fast as they could be introduced. Before the end of the nineteenth century the stocking industry was employing at least a hundred men for every man it had employed at the beginning of the century.

Another example Hazlitt presents from the Industrial Revolution happened in the cotton-spinning industry. He says: “Arkwright invented his cotton-spinning machinery in 1760. At that time it was estimated that there were in England 5,200 spinners using spinning wheels, and 2,700 weavers—in all, 7,900 persons engaged in the production of cotton textiles. The introduction of Arkwright’s invention was opposed on the ground that it threatened the livelihood of the workers, and the opposition had to be put down by force. Yet in 1787—twenty-seven years after the invention appeared—a parliamentary inquiry showed that the number of persons actually engaged in the spinning and weaving of cotton had risen from 7,900 to 320,000, an increase of 4,400 percent.”

As these examples indicate, improvements in manufacturing efficiency generally lead to reductions in manufacturing cost, which, when passed along to customers, reduces prices with concomitant increases in unit sales. This is the price elasticity of demand from Microeconomics 101. It is the reason economics is decidedly not a zero-sum game.
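To see how a price cut can mean more total activity, not less, here’s a toy constant-elasticity demand calculation. All the numbers are invented for illustration:

```python
def quantity_after_price_cut(q0, p0, p1, elasticity):
    """Constant-elasticity demand: Q = k * P^(-elasticity).

    A price cut raises unit sales; if demand is elastic
    (elasticity > 1), total revenue rises too.
    All numbers here are made up for illustration.
    """
    return q0 * (p1 / p0) ** (-elasticity)

q0, p0, p1 = 1000, 10.0, 5.0  # halve the price of, say, sewing pins
q1 = quantity_after_price_cut(q0, p0, p1, elasticity=1.5)

print(round(q1))                       # unit sales nearly triple: 2828
print(round(p0 * q0), round(p1 * q1))  # revenue rises: 10000 -> 14142
```

More units sold at lower cost per unit can mean more total work to be done, which is exactly what the pin and stocking examples showed.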

If we accept economics as not a zero-sum game, predicting what happens when automation makes it possible to produce more stuff with fewer workers becomes a chancy proposition. For example, many economists today blame flat productivity (the amount of stuff produced divided by the number of workers needed to produce it) for lack of wage gains in the face of low unemployment. If that is true, then anything that would help raise productivity (such as automation) should be welcome.

Long experience has taught us that economics is a positive-sum game. In the face of technological advancement, it behooves us to expect positive outcomes while taking measures to ensure that the concomitant economic gains get distributed fairly (whatever that means) throughout society. That is the take-home lesson from the social dislocations that accompanied the technological advancements of the Early Industrial Revolution.

Don’t Panic!

Panic button
Do not push the red button! Peter Hermes Furian/Shutterstock

20 March 2019 – The image at right visualizes something described in Douglas Adams’ Hitchhiker’s Guide to the Galaxy. At one point, the main characters of that six-part “trilogy” found a big red button on the dashboard of a spaceship they were trying to steal that was marked “DO NOT PRESS THIS BUTTON!” Naturally, they pressed the button, and a new label popped up that said “DO NOT PRESS THIS BUTTON AGAIN!”

Eventually, they got the autopilot engaged only to find it was a stunt ship programmed to crash headlong into the nearest Sun as part of the light show for an interstellar rock band. The moral of this story is “Never push buttons marked ‘DO NOT PUSH THIS BUTTON.’”

Per the author: “It is said that despite its many glaring (and occasionally fatal) inaccuracies, the Hitchhiker’s Guide to the Galaxy itself has outsold the Encyclopedia Galactica because it is slightly cheaper, and because it has the words ‘DON’T PANIC’ in large, friendly letters on the cover.”

Despite these references to the Hitchhiker’s Guide to the Galaxy, this posting has nothing to do with that book, the series, or the guide it describes, except that I’ve borrowed the words from the Guide’s cover as a title. I did that because those words perfectly express the take-home lesson of Bill Snyder’s 11 March 2019 article in The Robot Report entitled “Fears of job-stealing robots are misplaced, say experts.”

Expert Opinions

Snyder’s article reports opinions expressed at the Conference on the Future of Work at Stanford University last month. It’s a topic I’ve shot my word processor off about on numerous occasions in this space, so I thought it would be appropriate to report others’ views as well. First, I’ll present material from Snyder’s article, then I’ll wrap up with my take on the subject.

“Robots aren’t coming for your job,” Snyder says, “but it’s easy to make misleading assumptions about the kinds of jobs that are in danger of becoming obsolete.”

“Most jobs are more complex than [many people] realize,” said Hal Varian, Google’s chief economist.

David Autor, professor of economics at the Massachusetts Institute of Technology points out that education is a big determinant of how developing trends affect workers: “It’s a great time to be young and educated, but there’s no clear land of opportunity for adults who haven’t been to college.”

“When predicting future labor market outcomes, it is important to consider both sides of the supply-and-demand equation,” said Varian. “Demographic trends that point to a substantial decrease in the supply of labor are potentially larger in magnitude.”

His research indicates that shrinkage of the labor supply due to demographic trends is 53% greater than shrinkage of demand for labor due to automation. That means, while relatively fewer jobs are available, there are a lot fewer workers available to do them. The result is the prospect of a continued labor shortage.

At the same time, Snyder reports that “[The] most popular discussion around technology focuses on factors that decrease demand for labor by replacing workers with machines.”

In other words, fears that robots will displace humans from existing jobs miss the point. Robots, instead, are taking over jobs for which there aren’t enough humans.

Another effect is the fact that what people think of as “jobs” are actually made up of many “tasks,” and it’s tasks that get automated, not entire jobs. Some tasks are amenable to automation while others aren’t.

“Consider the job of a gardener,” Snyder suggests as an example. “Gardeners have to mow and water a lawn, prune rose bushes, rake leaves, eradicate pests, and perform a variety of other chores.”

Some of these tasks, like mowing and watering, can easily be automated. Pruning rose bushes, not so much!

Snyder points to news reports of a hotel in Nagasaki, Japan being forced to “fire” robot receptionists and room attendants that proved to be incompetent.

There’s a scene in the 1997 film The Fifth Element where a supporting character tries to converse with a robot bartender about another character. He says: “She’s so vulnerable – so human. Do you know what I mean?” The robot shakes its head, “No.”

Sometimes people, even misanthropes, would prefer to interact with another human than with a drink-dispensing machine.

“Jobs,” Varian points out, “unlike repetitive tasks, tend not to disappear. In 1950, the U.S. Census Bureau listed 250 separate jobs. Since then, the only one to be completely eliminated is that of elevator operator.”

“Excessive automation at Tesla was a mistake,” founder Elon Musk mea-culpa-ed last year. “Humans are underrated.”

Another trend Snyder points out is that automation-ready jobs, such as assembly-line factory workers, have already largely disappeared from America. “The 10 most common occupations in the U.S.,” he says, “include such jobs as retail salespersons, nurses, waiters, and other service-focused work. Notably, traditional occupations, such as factory and other blue-collar work, no longer even make the list.”

Again, robots are mainly taking over tasks that humans are not available to do.

The final trend that Snyder presents is the stark fact that birthrates in developed nations are declining – in some cases precipitously. “The aging of the baby boom generation creates demand for service jobs,” Varian points out, “but leaves fewer workers actively contributing labor to the economy.”

Those “service jobs” are just the ones that require a human touch, so they’re much harder to automate successfully.

My Inexpert Opinion

I’ve been trying, not entirely successfully, to figure out what role robots will actually have vis-a-vis humans in the future. I think there will be a few macroscopic trends. And, the macroscopic trends should be the easiest to spot ‘cause they’re, well, macroscopic. That means bigger. So, they’re easier to see. See?

As early as 2010, I worked out one important difference between robots and humans that I expounded in my novel Vengeance is Mine! Specifically, humans have a wider view of the Universe and have more of an emotional stake in it.

“For example,” I had one of my main characters pontificate at a cocktail party, “that tall blonde over there is an archaeologist. She uses ROVs – remotely operated vehicles – to map underwater shipwreck sites. So, she cares about what she sees and finds. We program the ROVs with sophisticated navigational software that allows her to concentrate on what she’s looking at, rather than the details of piloting the vehicle, but she’s in constant communication with it because she cares what it does. It doesn’t.”

More recently, I got a clearer image of this relationship and it’s so obvious that we tend to overlook it. I certainly missed it for decades.

It hit me like a brick when I saw a video of an autonomous robot marine-trash collector. This device is a small autonomous surface vessel with a big “mouth” that glides around seeking out and gobbling up discarded water bottles, plastic bags, bits of styrofoam, and other unwanted jetsam clogging up waterways.

The first question that popped into my mind was “who’s going to own the thing?” I mean, somebody has to want it, then buy it, then put it to work. I’m sure it could be made to automatically regurgitate the junk it collects into trash bags that it drops off at some collection point, but some human or humans have to make sure the trash bags get collected and disposed of. Somebody has to ensure that the robot has a charging system to keep its batteries recharged. Somebody has to fix it when parts wear out, and somebody has to take responsibility if it becomes a navigation hazard. Should that happen, the Coast Guard is going to want to scoop it up and hand its bedraggled carcass to some human owner along with a citation.

So, on a very important level, the biggest thing robots need from humans is ownership. Humans own robots, not the other way around. Without a human owner, an orphan robot is a pile of junk left by the side of the road!

What is This “Robot” Thing, Anyway?

Robot thinking
So, what is it that makes a robot a robot? Phonlamai Photo/Shutterstock

6 March 2019 – While surfing the Internet this morning, in a valiant effort to put off actually getting down to business grading that pile of lab reports that I should have graded a couple of days ago, I ran across this posting I wrote in 2013 for Packaging Digest.

Surprisingly, it still seems relevant today, and on a subject that I haven’t treated in this blog, yet. Since I’m planning to devote most of next week to preparing my 2018 tax return, I decided to save some writing time by dusting it off and presenting it as this week’s posting to Tech Trends. I hope the folks at Packaging Digest won’t get their noses too far out of joint about my encroaching on their five-year-old copyright without asking permission.

By the way, this piece is way shorter than the usual Tech Trends essay because of the specifications for that Packaging Digest blog, which was entitled “New Metropolis” in homage to Fritz Lang’s 1927 feature film entitled Metropolis, which told the story of a futuristic mechanized culture and an anthropomorphic robot that a mad scientist creates to bring it down. The “New Metropolis” postings were specified to be approximately 500 words long, whereas Tech Trends postings are planned to be 1,000-1,500 words long.

Anyway, I hope you enjoy this little slice of recent history.


11 November 2013 – I thought it might be fun—and maybe even useful—to catalog the classifications of these things we call robots.

Let’s start with the word robot. The idea behind the word robot grows from the ancient concept of the golem. A golem was an artificial person created by people.

Frankly, the idea of a golem scared the bejeezus out of the ancients because the golem stands at the interface between living and non-living things. In our enlightened age, it still scares the bejeezus out of people!

If we restricted the field to golems—strictly humanoid robots, or androids—we wouldn’t have a lot to talk about, and practically nothing to do. The things haven’t proved particularly useful. So, I submit that we should expand the robot definition to include all kinds of human-made artificial critters.

This has, of course, already been done by everyone working in the field. The SCARA (selective compliance assembly robot arm) machines from companies like Kuka, and the delta robots from Adept Technologies, clearly insist on this expanded definition. Mobile robots, such as the Roomba from iRobot, push the boundary in another direction. Weird little things like the robotic insects and worms so popular with academics these days push in a third direction.

Considering the foregoing, the first observation is that the line between robot and non-robot is fuzzy. The old ’50s-era dumb thermostats probably shouldn’t be considered robots, but a smart, computer-controlled house moving in the direction of the Jarvis character in the Iron Man series probably should. Things in between are – in between. Let’s bite the bullet and admit we’re dealing with fuzzy-logic categories, and then move on.

Okay, so what are the main characteristics symptomatic of this fuzzy category “robot”?

First, it’s gotta be artificial. A cloned sheep is not a robot. Even designer germs are non-robots.

Second, it’s gotta be automated. A fly-by-wire fighter jet is not a robot. A drone linked at the hip to a human pilot is not a robot. A driverless car, on the other hand, is a robot. (Either that, or it’s a traffic accident waiting to happen.)

Third, it’s gotta interact with the environment. A general-purpose computer sitting there thinking computer-like thoughts is not a robot. A SCARA unit assembling a car is. I submit that an automated bill-paying system arguing through the telephone with my wife over how much to take out of her checkbook this month is a robot.

More problematic is a fourth direction—embedded systems, like automated houses—that begs to be admitted into the robotic fold. I vote for letting them in, along with artificial intelligence (AI) systems, like the robot bill-paying systems my wife is so fond of arguing with.

Finally (maybe), it's gotta be independent. To be a robot, the thing has to take basic instruction from a human, then go off on its onesies to do the deed. Ideally, you should be able to do something like say, "Go wash the car," and it'll run off as fast as its little robotic legs can carry it to wash the car. More prosaically, you should be able to program it to vacuum the living room at 4:00 a.m., then wake up at 6:00 a.m. to a freshly vacuumed living room.
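Since I've claimed "robot" is a fuzzy-logic category, the criteria above can be sketched that way. Here's a minimal illustration (the membership values and the simple averaging rule are my own inventions, not anything rigorous):

```python
# Fuzzy "robot-ness": each criterion from the text gets a membership
# value in [0, 1], and we combine them into one overall score.
# The example scores below are invented purely for illustration.

def robotness(artificial, automated, interactive, independent):
    """Combine per-criterion memberships into one fuzzy score in [0, 1]."""
    # A plain average; a real fuzzy system might use min() or weighted rules.
    return (artificial + automated + interactive + independent) / 4.0

# A dumb '50s thermostat: artificial, barely automated, weakly interactive.
thermostat = robotness(1.0, 0.2, 0.5, 0.1)

# A driverless car: scores high on every criterion.
driverless_car = robotness(1.0, 1.0, 1.0, 0.9)

print(f"thermostat: {thermostat:.2f}, driverless car: {driverless_car:.2f}")
```

The point of the sketch is just that membership comes in degrees: the thermostat lands well below the middle, the driverless car near the top, and the "in between" things land in between.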

Luddites RULE!

Momma said there’d be days like this! (Apologies to songwriters Luther Dixon and Willie Denson, and, of course, the Geico Caveman.) Linda Bucklin/Shutterstock

7 February 2019 – This is not the essay I’d planned to write for this week’s blog. I’d planned a long-winded, abstruse dissertation on the use of principal component analysis to glean information from historical data in chaotic systems. I actually got most of that one drafted on Monday, and planned to finish it up Tuesday.

Then, bright and early on Tuesday morning, before I got anywhere near the incomplete manuscript, I ran headlong into an email issue.

Generally, I start my morning by scanning email to winnow out the few valuable bits buried in the steaming pile of worthless refuse that has accumulated in my Inbox since the last time I visited it. Then, I visit a couple of social media sites in an effort to keep my name in front of the Internet-entertained public. After a couple of hours of this colossal waste of time, I settle in to work on whatever actual work I have to do for the day.

So, finding that my email client software refused to communicate with me threatened to derail my whole day. The fact that I use email for all my business communications made it especially urgent that I determine what was wrong, and then fix it.

It took the entire morning and on into the early afternoon to realize that there was no way I was going to get to that email account on my computer, and that nobody in the outside world (not my ISP, not the cable company that went that extra mile to bring Internet signals from that telephone pole out there to the router at the center of my local area network, nor anyone else available with more technosavvy than I have) was going to be able to help. I was finally forced to invent a workaround involving a legacy computer that I'd neglected to throw in the trash, just to get on with my technology-bound life.

At that point the Law of Deadlines forced me to abandon all hope of getting this week’s blog posting out on time, and move on to completing final edits and distribution of that press release for the local art gallery.

That wasn’t the last time modern technology let me down. In discussing a recent Physics Lab SNAFU, Danielle, the laboratory coordinator I work with at the University said: “It’s wonderful when it works, but horrible when it doesn’t.”

Where have I heard that before?

The SNAFU Danielle was lamenting happened last week.

I teach two sections of General Physics Laboratory at Florida Gulf Coast University, one on Wednesdays and one on Fridays. The lab for last week had students dropping a ball, then measuring its acceleration using a computer-controlled ultrasonic detection system as it (the ball, not the computer) bounces on the table.

For the Wednesday class everything worked nearly perfectly. Half a dozen teams each had their own setups, and all got good data, beautiful-looking plots, and automated measurements of position and velocity. The computers then automatically derived accelerations from the velocity data. The one team that had trouble with their computer got good data by switching to an unused setup nearby.

That was Wednesday.

Come Friday the situation was totally different. Out of four teams, only two managed to get data that looked even remotely like it should. Then, one team couldn’t get their computer to spit out accelerations that made any sense at all. Eventually, after class time ran out, the one group who managed to get good results agreed to share their information with the rest of the class.

The high point of the day was managing to distribute that data to everyone via the school’s cloud-based messaging service.

Concerned about another fiasco, after this week’s lab Danielle asked me how it worked out. I replied that, since the equipment we use for this week’s lab is all manually operated, there were no problems whatsoever. “Humans are much more capable than computers,” I said. “They’re able to cope with disruptions that computers have no hope of dealing with.”

The latest example of technology Hell appeared in a story in this morning’s (2/7/2019) Wall Street Journal. Some $136 million of customers’ cryptocurrency holdings became stuck in an electronic vault when the founder (and sole employee) of cryptocurrency exchange QuadrigaCX, Gerald Cotten, died of complications related to Crohn’s disease while building an orphanage in India. The problem is that Cotten was so secretive about passwords and security that nobody, even his wife, Jennifer Robertson, can get into the reserve account maintained on his laptop.

“Quadriga,” according to the WSJ account, “would need control of that account to send those funds to customers.”

No lie! The WSJ attests this bizarre tale is the God’s own truth!

Now, I’ve no sympathy for cryptocurrency mavens, which I consider to be, at best, technoweenies gleefully leading a parade down the primrose path to technology Hell, but this story illustrates what that Hell looks like!

It’s exactly what the Luddites of the early 19th Century warned us about. It’s a place of nameless frustration and unaccountable loss that we’ve brought on ourselves.

Robots Revisited

Engineer with SCARA robots
Engineer using monitoring system software to check and control SCARA welding robots in a digital manufacturing operation. PopTika/Shutterstock

12 December 2018 – I was wondering what to talk about in this week’s blog posting, when an article bearing an interesting-sounding headline crossed my desk. The article, written by Simone Stolzoff of Quartz Media was published last Monday (12/3/2018) by the World Economic Forum (WEF) under the title “Here are the countries most likely to replace you with a robot.”

I generally look askance at organizations with grandiose names that include the word “World,” figuring that they likely are long on megalomania and short on substance. Further, this one lists the inimitable (thank God there’s only one!) Al Gore on its Board of Trustees.

On the other hand, David Rubenstein is also on the WEF board. Rubenstein usually seems to have his head screwed on straight, so that’s a positive sign for the organization. Therefore, I figured the article might be worth reading and should be judged on its own merits.

The main content is summarized in two bar graphs. The first lists the ratio of robots to thousands of manufacturing workers in various countries. The highest scores go to South Korea and Singapore. In fact, three of the top four are Far Eastern countries. The United States comes in around number seven (Figure 1).

The second applies a correction to the graphed data to reorder the list by taking into account the countries’ relative wealth. There, the United States comes in dead last among the sixteen countries listed. East Asian countries account for all of the top five.

The take-home lesson from the article (Figure 2) is conveniently stated in its final paragraph:

The upshot of all of this is relatively straightforward. When taking wages into account, Asian countries far outpace their western counterparts. If robots are the future of manufacturing, American and European countries have some catching up to do to stay competitive.

This article, of course, got me started thinking about automation and how manufacturers choose to adopt it. It’s a subject that was a major theme throughout my tenure as Chief Editor of Test & Measurement World and constituted the bulk of my work at Control Engineering.

The graphs certainly support the conclusions expressed in the cited paragraph's first two sentences. The third sentence, however, is problematic.

That ultimate conclusion is based on accepting that “robots are the future of manufacturing.” Absolute assertions like that are always dangerous. Seldom is anything so all-or-nothing.

Predicting the future is epistemological suicide. Whenever I hear such bald-faced statements I recall Jim Morrison’s prescient statement: “The future’s uncertain and the end is always near.”

The line was prescient because a little over a year after the song's release, Morrison was dead at age twenty-seven, thereby fulfilling the slogan expressed by John Derek's "Nick Romano" character in Nicholas Ray's 1949 film Knock on Any Door: "Live fast, die young, and leave a good-looking corpse."

Anyway, predictions like “robots are the future of manufacturing” are generally suspect because, in the chaotic Universe in which we live, the future is inherently unpredictable.

If you want to say something practically guaranteed to be wrong, predict the future!

I’d like to offer an alternate explanation for the data presented in the WEF graphs. It’s based on my belief that American Culture usually gets things right in the long run.

Yes, that’s the long run in which economist John Maynard Keynes pointed out that we’re all dead.

My belief in the ultimate vindication of American trends is based, not on national pride or jingoism, but on historical precedents. Countries that have bucked American trends often start out strong, but ultimately fade.

An obvious example is trendy Japanese management techniques based on Druckerian principles that were so much in vogue during the last half of the twentieth century. Folks imagined such techniques were going to drive the Japanese economy to pre-eminence in the world. Management consultants touted such principles as the future for corporate governance without noticing that while they were great for middle management, they were useless for strategic planning.

Japanese manufacturers beat the crap out of U.S. industry for a while, but eventually their economy fell into a prolonged recession characterized by economic stagnation and disinflation so severe that even negative interest rates couldn’t restart it.

Similar examples abound, which is why our little country with its relatively minuscule population (4.3% of the world’s) has by far the biggest GDP in the world. China, with more than four times the population, grosses less than a third of what we do.

So, if robotic adoption is the future of manufacturing, why are we so far behind? Assuming we actually do know what we’re doing, as past performance would suggest, the answer must be that the others are getting it wrong. Their faith in robotics as a driver of manufacturing productivity may be misplaced.

How could that be? What could be wrong with relying on technological advancement as the driver of productivity?

Manufacturing productivity is calculated on the basis of stuff produced (as measured by its total value in dollars) divided by the number of worker-hours needed to produce it. That should tell you something about what it takes to produce stuff. It’s all about human worker involvement.

Folks who think robots automatically increase productivity are fixating on the denominator in the productivity calculation. Making even the same amount of stuff while reducing the worker-hours needed to produce it should drive productivity up fast. That's basic arithmetic. Yet, while manufacturing has been rapidly introducing all kinds of automation over the last few decades, productivity has stagnated.
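The arithmetic behind that denominator fixation is easy to write down. A minimal sketch, with entirely made-up plant numbers:

```python
def productivity(output_value_dollars, worker_hours):
    """Manufacturing productivity: dollar value produced per worker-hour."""
    return output_value_dollars / worker_hours

# Invented example: $1,000,000 of product from 10,000 worker-hours.
before = productivity(1_000_000, 10_000)   # $100 per worker-hour

# Automate away half the worker-hours while output stays flat:
after = productivity(1_000_000, 5_000)     # $200 per worker-hour

print(f"before: ${before:.0f}/hr, after: ${after:.0f}/hr")
```

On paper, halving the denominator doubles measured productivity. The puzzle the article leaves us with is that, in the aggregate statistics, decades of automation haven't delivered that arithmetic.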

We need to look for a different explanation.

It just might be that robotic adoption is another example of too much of a good thing. It might be that reliance on technology could prove to be less effective than something about the people making up the work force.

I’m suggesting that because I’ve been led to believe that work forces in the Far Eastern developing economies are less skillful, may have lower expectations, and are more tolerant of authoritarian governments.

Why would those traits make a difference? I’ll take them one at a time to suggest how they might.

The impression that Far Eastern populations are less skillful is not easy to demonstrate. Nobody who’s dealt with people of Asian extraction in either an educational or work-force setting would ever imagine they are at all deficient in either intelligence or motivation. On the other hand, as emerging or developing economies those countries are likely more dependent on workers newly recruited from rural, agrarian settings, who are likely less acclimated to manufacturing and industrial environments. On this basis, one may posit that the available workers may prove less skillful in a manufacturing setting.

It’s a weak argument, but it exists.

The idea that people making up Far-Eastern work forces have lower expectations than those in more developed economies is on firmer footing. Workers in Canada, the U.S. and Europe have very high expectations for how they should be treated. Wages are higher. Benefits are more generous. Upward mobility perceptions are ingrained in the cultures.

For developing economies, not so much.

Then, we come to tolerance of authoritarian regimes. Tolerance of authoritarianism goes hand-in-hand with tolerance for the usual authoritarian vices of graft, lack of personal freedom and social immobility. Only those believing populist political propaganda think differently (which is the danger of populism).

What’s all this got to do with manufacturing productivity?

Lack of skill, low expectations and patience under authority are not conducive to high productivity. People are productive when they work hard. People work hard when they are incentivized. They are incentivized to work when they believe that working harder will make their lives better. It’s not hard to grasp!

Installing robots in a plant won’t by itself lead human workers to believe that working harder will make their lives better. If anything, it’ll do the opposite. They’ll start worrying that their lives are about to take a turn for the worse.

Maybe that has something to do with why increased automation has failed to increase productivity.

Computers Are Revolting!

Will Computers Revolt? cover
Charles Simon’s Will Computers Revolt? looks at the future of interactions between artificial intelligence and the human race.

14 November 2018 – I just couldn’t resist the double meaning allowed by the title for this blog posting. It’s all I could think of when reading the title of Charles Simon’s new book, Will Computers Revolt? Preparing for the Future of Artificial Intelligence.

On one hand, yes, computers are revolting. Yesterday my wife and I spent two hours trying to figure out how to activate our Netflix account on my new laptop. We ended up having to change the email address and password associated with the account. And, we aren’t done yet! The nice lady at Netflix sadly informed me that in thirty days, their automated system would insist that we re-log-into the account on both devices due to the change.

That’s revolting!

On the other hand, the uprising has already begun. Computers are revolting in the sense that they’re taking power to run our lives.

We used to buy stuff just by picking it off the shelf, then walking up to the check-out counter and handing over a few pieces of green paper. The only thing that held up the process was counting out the change.

Later, when credit cards first reared their ugly heads, we had to wait a few minutes for the salesperson to fill out a sales form, then run our credit cards through the machine. It was all manual. No computers involved. It took little time once you learned how to do it, and, more importantly, the process was pretty much the same everywhere and never changed, so once you learned it, you’d learned it “forever.”

Not no more! How much time do you, I, and everyone else waste navigating the multiple pages we have to walk through just to pay for anything with a credit or debit card today?

Even worse, every store has different software using different screens to ask different questions. So, we can’t develop a habitual routine for the process. It’s different every time!

Not long ago the banks issuing my debit-card accounts switched to those %^&^ things with the chips. I always forget to put the thing in the slot instead of swiping the card across the magnetic-stripe reader. When that happens we have to start the process all over, wasting even more time.

The computers have taken over, so now we have to do what they tell us to do.

Now we know who’ll be first against the wall when the revolution comes. It’s already here and the first against the wall is us!

Golem Literature in Perspective

But seriously folks, Simon’s book is the latest in a long tradition of works by thinkers fascinated by the idea that someone could create an artifice that would pass for a human. Perhaps the earliest, and certainly the most iconic, is the golem stories from Jewish folklore. I suspect (on no authority, whatsoever, but it does seem likely) that the idea of a golem appeared about the time when human sculptors started making statues in realistic human form. That was very early, indeed!

A golem is, for those who aren't familiar with the term or willing to follow the link provided above to learn about it, an artificial creature fashioned by a human: effectively what we call a "robot." The folkloric golems were made of clay or (sometimes) wood because those were the best materials the stories' authors could imagine their artisans working with. A well-known golem story is Carlo Collodi's The Adventures of Pinocchio.

By the sixth century BCE, Greek sculptors had begun to produce lifelike statues. The myth of Pygmalion and Galatea appeared in a pseudo-historical work by Philostephanus Cyrenaeus in the third century BCE. Pygmalion was a sculptor who made a statue representing his ideal woman, then fell in love with it. Aphrodite granted his prayer for a wife exactly like the statue by bringing the statue to life. The wife’s name was Galatea.

The Talmud points out that Adam started out as a golem. Like Galatea, Adam was brought to life when the Hebrew God Yahweh gave him a soul.

These golem examples emphasize the idea that humans, no matter how holy or wise, cannot give their creations a soul. The best they can do is to create automatons.

Simon effectively begs to differ. He spends the first quarter of his text laying out the case that it is possible, and indeed inevitable, that automated control systems displaying artificial general intelligence (AGI) capable of thinking at or (eventually) well above human capacity will appear. He spends the next half of his text showing how such AGI systems could be created and making the case that they will eventually exhibit functionality indistinguishable from consciousness. He devotes the rest of his text to speculating about how we, as human beings, will likely interact with such hyperintelligent machines.

Spoiler Alert

Simon’s answer to the question posed by his title is a sort-of “yes.” He feels AGIs will inevitably displace humans as the most intelligent beings on our planet, but won’t exactly “revolt” at any point.

“The conclusion,” he says, “is that the paths of AGIs and humanity will diverge to such an extent that there will be no close relationship between humans and our silicon counterparts.”

There won’t be any violent conflict because robotic needs are sufficiently dissimilar to ours that there won’t be any competition for scarce resources, which is what leads to conflict between groups (including between species).

Robots, he posits, are unlikely to care enough about us to revolt. There will be no Terminator robots seeking to exterminate us because they won’t see us as enough of a threat to bother with. They’re more likely to view us much the way we view squirrels and birds: pleasant fixtures of the natural world.

They won’t, of course, tolerate any individual humans who make trouble for them the same way we wouldn’t tolerate a rabid coyote. But, otherwise, so what?

So, the !!!! What?

The main value of Simon’s book is not in its ultimate conclusion. That’s basically informed opinion. Rather, its value lies in the voluminous detail he provides in getting to that conclusion.

He spends the first quarter of his text detailing exactly what he means by AGI. What functions are needed to make it manifest? How will we know when it rears its head (ugly or not, as a matter of taste)? How will a conscious, self-aware AGI system act?

A critical point Simon makes in this section is the assertion that AGI will arise first in autonomous mobile robots. I thoroughly agree for pretty much the same reasons he puts forth.

I first started seriously speculating about machine intelligence back in the middle of the twentieth century. I never got too far – certainly not as far as Simon gets in this volume – but pretty much the first thing I actually did realize was that it was impossible to develop any kind of machine with any recognizable intelligence unless its main feature was having a mobile body.

Developing any AGI feature requires the machine to have a mobile body. It has to take responsibility not only for deciding how to move itself about in space, but also for figuring out why. Why would it, for example, rather be over there than stay here? Note that biological intelligence arose in animals, not in plants!

Simultaneously with reading Simon’s book, I was re-reading Robert A. Heinlein’s 1966 novel The Moon is a Harsh Mistress, which is one of innumerable fiction works whose plot hangs on actions of a superintelligent sentient computer. I found it interesting to compare Heinlein’s early fictional account with Simon’s much more informed discussion.

Heinlein sidesteps the mobile-body requirement by making his AGI arise in a computer tasked with operating the entire infrastructure of the first permanent human colony on the Moon (more accurately in the Moon, since Heinlein's troglodytes burrowed through caves and tunnels, coming up to the surface only reluctantly when circumstances forced them to). He also avoids trying to imagine the AGI's inner workings, glossing over them with the 1950s technology he was most familiar with.

In his rather longish second section, Simon leads his reader through a thought experiment speculating about what components an AGI system would need to have for its intelligence to develop. What sorts of circuitry might be needed, and how might it be realized? This section might be fascinating for those wanting to develop hardware and software to support AGI. For those of us watching from our armchairs on the outside, though, not so much.

Altogether, Charles Simon’s Will Computers Revolt? is an important book that’s fairly easy to read (or, at least as easy as any book this technical can be) and accessible to a wide range of people interested in the future of robotics and artificial intelligence. It is not the last word on this fast-developing field by any means. It is, however, a starting point for the necessary debate over how we should view the subject. Do we have anything to fear? Do we need to think about any regulations? Is there anything to regulate and would any such regulations be effective?

You Want to Print WHAT?!

3D printed plastic handgun
The Liberator gun, designed by Defense Distributed. Photo originally made at 16-05-2013 by Vvzvlad – Flickr: Liberator.3d.gun.vv.01, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=26141469

22 August 2018 – Since the Fifteenth Century, when Johannes Gutenberg invented the technology, printing has given those in authority ginky fits. Until recently it was just dissemination of heretical ideas, a la Giordano Bruno, that raised authoritarian hackles. More recently, 3-D printing, also known as additive manufacturing (AM), has made it possible to make things that folks who like to tell you what you’re allowed to do don’t want you to make.

Don’t get me wrong, AM makes it possible to do a whole lot of good stuff that we only wished we could do before. Like, for example, Jay Leno uses it to replace irreplaceable antique car parts. It’s a really great technology that allows you to make just about anything you can describe in a digital drafting file without the difficulty and mess of hiring a highly skilled fabrication machine shop.

In my years as an experimental physicist, I dealt with fabrication shops a lot! Fabrication shops are collections of craftsmen and the equipment they need to make one-off examples of amazing stuff. It’s generally stuff, however, that nobody’d want to make a second time.

Like the first one of anything.

The reason I specify that AM is for making stuff nobody'd want to make a second time is that it's slow. Regular machine-shop products, like nuts and bolts and sewing machines, things folks want in quantity, are worth spending a lot of up-front time figuring out fast, efficient, cheap ways to make lots of them.

Take microchips. The darn things take huge amounts of effort to design, and tens of billions of dollars of equipment to make, but once you’ve figured it all out and set it all up, you can pop the things out like chocolate-chip cookies. You can buy an Intel Celeron G3900 dual-core 2.8 GHz desktop processor online for $36.99 because Intel spread the hideous quantities of up-front cost it took to originally set up the production line over the bazillions of processors that production line can make. It’s called “economy of scale.”
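The "economy of scale" arithmetic is just a big fixed cost spread over volume. A toy sketch (the dollar figures here are invented for illustration, not Intel's actual costs):

```python
def unit_cost(fixed_cost, marginal_cost, units):
    """Per-unit cost: up-front setup cost amortized over volume,
    plus the marginal cost of making one more unit."""
    return fixed_cost / units + marginal_cost

# Invented numbers: $10B to design the chip and set up the fab,
# $5 of materials and processing per chip.
FIXED, MARGINAL = 10_000_000_000, 5.0

print(unit_cost(FIXED, MARGINAL, 1))              # one-off: ruinous
print(unit_cost(FIXED, MARGINAL, 1_000_000_000))  # a billion units: $15 each
```

Make one chip and it costs ten billion dollars; make a billion and the setup cost nearly vanishes into the price. That's why a one-off part, where the fixed cost can't be amortized, tolerates a slow process like AM.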

If you’re only gonna make one, or just a few, of the things, there’s no economy of scale.

But, if your’re only gonna make one, or just a few, you don’t worry too much about how long it takes to make each one, and what it costs is what it costs.

So, you put up with doing it some way that’s slow.

Like AM.

A HUGE advantage of making things with AM is that you don’t have to be all that smart. If you once learn to set the 3-D printer up, you’re all set. You just download the digital computer-aided-manufacturing (CAM) file into the printer’s artificial cerebrum, and it just DOES it. If you can download the file over the Internet, you’ve got it knocked!

Which brings us to what I want to talk about today: 3-D printing of handguns.

Now, I’m not any kind of anti-gun nut. I’ve got half a dozen firearms laying around the house, and have had since I was a little kid. It’s a family thing. I learned sharpshooting when I was around ten years old. Target shooting is a form of meditation for me. A revolver is my preferred weapon if ever I have need of a weapon. I never want to be the one who brought a knife to a gunfight!

That said, I’ve never actually found a need for a handgun. The one time I was present at a gunfight, I hid under the bed. The few times I’ve been threatened with guns, I talked my way out of it. Experience has led me to believe that carrying a gun is the best way to get shot.

I’ve always agreed with my gunsmith cousin that modern guns are beautiful pieces of art. I love their precision and craftsmanship. I appreciate the skill and effort it takes to make them.

The good ones, that is.

That’s the problem with AM-made guns. It takes practically no skill to create them. They’re right up there with the zip guns we talked about when we were kids.

We never made zip guns. We talked about them. We talked about them in the same tones we’d use talking about leeches or cockroaches. Ick!

We’d never make them because they were beneath contempt. They were so crude we wouldn’t dare fire one. Junk like that’s dangerous!

Okay, so you get an idea how I would react to the news that some nut case had published plans online for 3-D printing a handgun. Bad enough to design such a monstrosity, what about the idiot stupid enough to download the plans and make such a thing? Even more unbelievable, what moron would want to fire it?

Have they no regard for their hands? Don’t they like their fingers?

Anyway, not long ago, a press release crossed my desk from Giffords, the gun-safety organization set up by former U.S. Rep. Gabrielle Giffords and her husband after she survived an assassination attempt in Arizona in 2011. The Giffords’ press release praised Senators Richard Blumenthal and Bill Nelson, and Representatives David Cicilline and Seth Moulton for introducing legislation to stop untraceable firearms from further proliferation after the Trump Administration cleared the way for anyone to 3-D-print their own guns.

Why “untraceable” firearms, and what have they got to do with AM?

Downloadable plans for producing guns by the AM technique put firearms in the hands of folks too unskilled and, yes, too stupid to make them themselves with ordinary methods. That's important because it is theoretically possible to produce firearms of surprising sophistication with AM. The first one offered was a cheap plastic thing (depicted above) that would likely be more danger to its user than its intended victim. More recent offerings, however, have been repeating weapons made of more robust materials that might successfully be used to commit crimes.

Ordinary firearm production techniques require a level of skill and investment in equipment that puts them above the radar for federal regulators. The first thing those regulators require is licensing the manufacturer. Units must be serialized, and records must be kept. The system certainly isn’t perfect, but it gives law enforcement a fighting chance when the products are misused.

The old zip guns snuck in under the radar, but they were so crude and dangerous to their users that they were seldom even made. Almost anybody with enough sense to make them had enough sense not to make them! Those who got their hands on the things were more a danger to themselves than to society.

The Trump administration’s recent settlement with Defense Distributed, allowing them to relaunch their website, include a searchable database of firearm blueprints, and allow the public to create their own fully-functional, unserialized firearms using AM technology opens the floodgates for dangerous people to make their own untraceable firearms.

That’s just dumb!

The Untraceable Firearms Act would prohibit the manufacture and sale of firearms without serial numbers, require any person or business engaged in the business of selling firearm kits and unfinished receivers to obtain a dealer’s license and conduct background checks on purchasers, and mandate that a person who runs a business putting together firearms or finishing receivers must obtain a manufacturer’s license and put serial numbers on firearms before offering them for sale to consumers.

The 3-D Printing Safety Act would prohibit the online publication of computer-aided design (CAD) files which automatically program a 3-D-printer to produce or complete a firearm. That closes the loophole letting malevolent idiots evade scrutiny by making their own firearms.

We have to join with Giffords in applauding the legislators who introduced these bills.

Do You Really Want a Robotic Car?

Robot Driver
Sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car.” Mopic/Shutterstock

15 August 2018 – Many times in my blogging career I’ve gone on a rant about the three Ds of robotics. These are “dull, dirty, and dangerous.” They relate to the question, which is not asked often enough, “Do you want to build an automated system to do that task?”

The reason the question is not asked enough is that it needs to be asked whenever anyone looks into designing a system (whether manual or automated) to do anything. The fact that people routinely set up systems without first asking it proves that it's not asked enough.

When asking the question, getting a hit on any one of the three Ds tells you to at least think about automating the task. Getting a hit on two of them should make you think that your task is very likely to be ripe for automation. If you hit on all three, it’s a slam dunk!

When we look into developing automated vehicles (AVs), we get hits on “dull” and “dangerous.”
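The three-Ds rule of thumb can be sketched as a toy scoring function. The function names and verdict labels here are my own invention, not any formal methodology; it simply encodes the one-hit/two-hit/three-hit scale described above:

```python
def automation_score(dull: bool, dirty: bool, dangerous: bool) -> int:
    """Count how many of the three Ds a task hits."""
    return sum([dull, dirty, dangerous])

def verdict(score: int) -> str:
    """Map a three-Ds score to the rule of thumb above."""
    return ["leave it manual", "worth considering",
            "very likely ripe", "slam dunk"][score]

# Driving hits "dull" and "dangerous" but not "dirty":
print(verdict(automation_score(dull=True, dirty=False, dangerous=True)))
# -> very likely ripe
```

By the same toy scoring, sewer inspection (dull, dirty, and dangerous) comes out a slam dunk.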

Driving can be excruciatingly dull, especially if you’re going any significant distance. That’s why people fall asleep at the wheel. I daresay everyone has fallen asleep at the wheel at least once, although we almost always wake up before hitting the bridge abutment. It’s also why so many people drive around with cellphones pressed to their ears. The temptation to multitask while driving is almost irresistible.

Driving is also brutally dangerous. Tens of thousands of people die every year in car crashes. It’s pretty safe to say that nearly all those fatalities involve cars driven by humans. The number of people killed in accidents involving driverless cars can (as of this writing) be counted on one hand.

I’m not prepared to go into statistics comparing safety of automated vehicles vs. manually driven ones. Suffice it to say that eventually we can expect AVs to be much safer than manually driven vehicles. We’ll keep developing the technology until they are. It’s not a matter of if, but when.

This is the analysis most observers (if they analyze it at all) come up with to prove that vehicle driving should be automated.

Yet, opinions that AVs are acceptable, let alone inevitable, are far from universal. In a survey of 3,000 people in the U.S. and Canada, Ipsos Strategy 3 found that sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car,” and a whopping 39% of Americans (and 30% of Canadians) would rarely or never let driverless technology do the parking!

Why would so many people be unwilling to give up their driving privileges? I submit that it has to do with a parallel consideration that is every bit as important as the three Ds when deciding whether to automate a task:

Don’t Automate Something Humans Like to Do!

Predatory animals, especially humans, like to drive. It’s fun. Just ask any dog who has a chance to go for a ride in a car! Almost universally they’ll jump into the front seat as soon as you open the door.

In fact, if you leave them (dogs) in the car unattended for a few minutes, they’ll be behind the wheel when you come back!

Humans are the same way. Leave ’em unattended in a car for any length of time, and they’ll think of some excuse to get behind the wheel.

The Ipsos survey found that some 61% of both Americans and Canadians identify themselves as “car people.” When asked “What would have to change about transportation in your area for you to consider not owning a car at all, or owning fewer cars?” 39% of Americans and 38% of Canadians responded “There is nothing that would make me consider owning fewer cars!”

That’s pretty definitive!

Their excuse for getting behind the wheel is largely an economic one: Seventy-eight percent of Americans claim they “definitely need to have a vehicle to get to work.” In more urbanized Canada (you did know that Canadians cluster more into cities, didn’t you?) that drops to 53%.

Whether the claim that they “have” to have a car to get to work is based on historical precedent, urban planning, wishful thinking, or flat-out just what they want to believe, it’s a good, cogent reason why folks, especially Americans, hang onto their steering wheels for dear life.

The moral of this story is that driving is something humans like to do, and getting them to give it up will be a serious uphill battle for anyone wanting to promote driverless cars.

Yet, development of AV technology is going full steam ahead.

Is that just another example of Dr. John Bridges’ version of Solomon’s proverb, “A fool and his money are soon parted”?

Possibly, but I think not.

Certainly, the idea of spending tons of money to have bragging rights for the latest technology, and to take selfies showing you reading a newspaper while your car drives itself through traffic, has some appeal. I submit, however, that the appeal is short-lived.

For one thing, reading in a moving vehicle is the fastest route I know of to motion sickness. It’s right up there with cueing up the latest Disney cartoon feature for your kids on the overhead DVD player in your SUV, then cleaning up their vomit.

I, for one, don’t want to go there!

Sounds like another example of “More money than brains.”

There are, however, a wide range of applications where driving a vehicle turns out to be no fun at all. For example, the first use of drone aircraft was as targets for anti-aircraft gunnery practice. They just couldn’t find enough pilots who wanted to be sitting ducks to be blown out of the sky! Go figure.

Most commercial driving jobs could also stand to be automated. For example, almost nobody actually steers ships at sea anymore. They generally stand around watching an autopilot follow a pre-programmed course. Why? As a veteran boat pilot, I can tell you that the captain has a lot more fun than the helmsman. Piloting a ship from, say, Calcutta to San Francisco has got to be mind-numbingly dull. There’s nothing going on out there on the ocean.

Boat passengers generally spend most of their time staring into the wake, but the helmsman doesn’t get to look at the wake. He or she has to spend the time scanning a featureless horizontal line separating a light-blue dome (the sky) from a dark-blue plane (the sea) in the vain hope that something interesting will pop up and relieve the tedium.

Hence the autopilot.

Flying a commercial airliner is similar. It has been described (as have so many other tasks) as “hours of boredom punctuated by moments of sheer terror!” While such activity is very Zen (I’m convinced that humans’ ability to meditate was developed by cave guys having to sit for hours, or even days, watching game trails for their next meal to wander along), it’s not top-of-the-line fun.

So, sometimes driving is fun, and sometimes it’s not. We need AV technology to cover those times when it’s not.

The Future Role of AI in Fact Checking

Reality Check
Advanced computing may someday help sort fact from fiction. Gustavo Frazao/Shutterstock

8 August 2018 – This guest post is contributed under the auspices of Trive, a global, decentralized truth-discovery engine. Trive seeks to provide news aggregators and outlets a superior means of fact checking and news-story verification.

by Barry Cousins, Guest Blogger, Info-Tech Research Group

In a recent project, we looked at blockchain startup Trive and their pursuit of a fact-checking truth database. It seems obvious and likely that competition will spring up. After all, who wouldn’t want to guarantee the preservation of facts? Or, perhaps it’s more that lots of people would like to control what is perceived as truth.

With the recent coming out party of IBM’s Debater, this next step in our journey brings Artificial Intelligence into the conversation … quite literally.

As an analyst, I’d like to have a universal fact checker. Something like the carbon monoxide detectors on each level of my home. Something that would sound an alarm when there’s danger of intellectual asphyxiation from choking on the baloney put forward by certain salespeople, news organizations, governments, and educators, for example.

For most of my life, we would simply have turned to academic literature for credible truth. There is now enough legitimate doubt to make us seek out a new model or, at a minimum, to augment that academic model.

I don’t want to be misunderstood: I’m not suggesting that all news and education is phony baloney. And, I’m not suggesting that the people speaking untruths are always doing so intentionally.

The fact is, we don’t have anything close to a recognizable database of facts from which we can base such analysis. For most of us, this was supposed to be the school system, but sadly, that has increasingly become politicized.

But even if we had the universal truth database, could we actually use it? For instance, how would we tap into the right facts at the right time? The relevant facts?

If I’m looking into the sinking of the Titanic, is it relevant to study the facts behind the ship’s manifest? It might be interesting, but would it prove to be relevant? Does it have anything to do with the iceberg? Would that focus on the manifest impede my path to insight on the sinking?

It would be great to have Artificial Intelligence advising me on these matters. I’d make the ultimate decision, but it would be awesome to have something like the Star Trek computer sifting through the sea of facts for that which is relevant.

Is AI ready? IBM recently showed that it’s certainly coming along.

Is the sea of facts ready? That’s a lot less certain.

Debater holds its own

In June 2018, IBM unveiled the latest in Artificial Intelligence with Project Debater in a small event with two debates: “we should subsidize space exploration”, and “we should increase the use of telemedicine”. The opponents were credentialed experts, and Debater was arguing from a position established by “reading” a large volume of academic papers.

The result? From what we can tell, the humans were more persuasive while the computer was more thorough. Hardly surprising, perhaps. I’d like to watch the full debates but haven’t located them yet.

Debater is intended to help humans enhance their ability to persuade. According to IBM researcher Ranit Aharanov, “We are actually trying to show that a computer system can add to our conversation or decision making by bringing facts and doing a different kind of argumentation.”

So this is an example of AI. I’ve been trying to distinguish between automation and AI, machine learning, deep learning, etc. I don’t need to nail that down today, but I’m pretty sure that my definition of AI includes genuine cognition: the ability to identify facts, comb out the opinions and misdirection, incorporate the right amount of intention bias, and form decisions and opinions with confidence while remaining watchful for one’s own errors. I’ll set aside any obligation to admit and react to one’s own errors, choosing to assume that intelligence includes the interest in, and awareness of, one’s ability to err.

Mark Klein, Principal Research Scientist at M.I.T., helped with that distinction between computing and AI. “There needs to be some additional ability to observe and modify the process by which you make decisions. Some call that consciousness, the ability to observe your own thinking process.”

Project Debater represents an incredible leap forward in AI. It was given access to a large volume of academic publications, and it developed its debating chops through machine learning. The capability of the computer in those debates resembled the results that humans would get from reading all those papers, assuming you can conceive of a way that a human could consume and retain that much knowledge.

Beyond spinning away on publications, are computers ready to interact intelligently?

Artificial? Yes. But, Intelligent?

According to Dr. Klein, we’re still far away from that outcome. “Computers still seem to be very rudimentary in terms of being able to ‘understand’ what people say. They (people) don’t follow grammatical rules very rigorously. They leave a lot of stuff out and rely on shared context. They’re ambiguous or they make mistakes that other people can figure out. There’s a whole list of things like irony that are completely flummoxing computers now.”

Dr. Klein’s PhD in Artificial Intelligence from the University of Illinois leaves him particularly well-positioned for this area of study. He’s primarily focused on using computers to enable better knowledge sharing and decision making among groups of humans. Hence the potentially debilitating question of what constitutes knowledge: what separates fact from opinion from conjecture.

His field of study focuses on the intersection of AI, social computing, and data science. A central theme involves responsibly working together in a structured collective intelligence life cycle: Collective Sensemaking, Collective Innovation, Collective Decision Making, and Collective Action.

One of the key outcomes of Klein’s research is “The Deliberatorium”, a collaboration engine that adds structure to mass participation via social media. The system ensures that contributors create a self-organized, non-redundant summary of the full breadth of the crowd’s insights and ideas. This model avoids the risks of ambiguity and misunderstanding that impede the success of AI interacting with humans.
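The non-redundancy idea behind such a collaboration engine can be illustrated with a toy sketch. To be clear, this is not the actual Deliberatorium, which is far richer; the class and method names below are invented, and the “normalization” is deliberately crude:

```python
class ArgumentMap:
    """Toy argument map that records each distinct point exactly once,
    merging duplicate contributions instead of repeating them."""

    def __init__(self):
        # normalized point text -> list of contributors who raised it
        self.points = {}

    def contribute(self, author: str, point: str) -> None:
        # Crude normalization: lowercase and collapse whitespace.
        key = " ".join(point.lower().split())
        self.points.setdefault(key, []).append(author)

    def summary(self) -> list:
        """Return the non-redundant list of distinct points."""
        return list(self.points)

m = ArgumentMap()
m.contribute("alice", "Subsidize space exploration")
m.contribute("bob", "subsidize   space exploration")  # duplicate, merged
m.contribute("carol", "Fund telemedicine instead")
print(len(m.summary()))  # -> 2 distinct points
```

A real system would have to merge paraphrases, not just whitespace and case variants, which is exactly where the ambiguity problems Klein describes come in.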

Klein provided a deeper explanation of the massive gap between AI and genuine intellectual interaction. “It’s a much bigger problem than being able to parse the words, make a syntax tree, and use the standard Natural Language Processing approaches.”

“Natural Language Processing breaks up the problem into several layers. One of them is syntax processing, which is to figure out the nouns and the verbs and figure out how they’re related to each other. The second level is semantics, which is having a model of what the words mean. That ‘eat’ means ‘ingesting some nutritious substance in order to get energy to live’. For syntax, we’re doing OK. For semantics, we’re doing kind of OK. But the part where it seems like Natural Language Processing still has light years to go is in the area of what they call ‘pragmatics’, which is understanding the meaning of something that’s said by taking into account the cultural and personal contexts of the people who are communicating. That’s a huge topic. Imagine that you’re talking to a North Korean. Even if you had a good translator there would be lots of possibility of huge misunderstandings because your contexts would be so different, the way you try to get across things, especially if you’re trying to be polite, it’s just going to fly right over each other’s head.”
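Klein’s layered picture can be caricatured in a few lines of code. The tiny lexicon below is invented for illustration; note that the pragmatics layer is deliberately absent, because, as Klein says, that’s the part that still has light years to go:

```python
# Toy illustration of two NLP layers:
#   syntax    -> part-of-speech tags (nouns and verbs)
#   semantics -> a model of what each word means
# Pragmatics (context-dependent meaning) is intentionally missing.

LEXICON = {  # hypothetical mini-lexicon: word -> (POS tag, meaning)
    "dog": ("NOUN", "a domesticated canine"),
    "eats": ("VERB", "ingests a nutritious substance to get energy to live"),
    "kibble": ("NOUN", "dry pelleted pet food"),
}

def parse(sentence: str):
    """Return (word, part-of-speech, meaning) triples for known words."""
    return [(w, *LEXICON[w]) for w in sentence.lower().split() if w in LEXICON]

for word, pos, meaning in parse("Dog eats kibble"):
    print(f"{word}: {pos} -> {meaning}")
```

Hand this sketch “That’ll help…” said sarcastically and it produces nothing useful, which is precisely the gap between table lookup and understanding.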

To make matters much worse, our communications are filled with cases where we ought not be taken quite literally. Sarcasm, irony, idioms, etc. make it difficult enough for humans to understand, given the incredible reliance on context. I could just imagine the computer trying to validate something that starts with, “John just started World War 3…”, or “Bonnie has an advanced degree in…”, or “That’ll help…”

A few weeks ago, I wrote that I’d won $60 million in the lottery. I was being sarcastic, and (if you ask me) humorous in talking about how people decide what’s true. Would that research interview be labeled as fake news? Technically, I suppose it was. Now that would be ironic.

Klein summed it up with, “That’s the kind of stuff that computers are really terrible at and it seems like that would be incredibly important if you’re trying to do something as deep and fraught as fact checking.”

Centralized vs. Decentralized Fact Model

It’s self-evident that we have to be judicious in our management of the knowledge base behind an AI fact-checking model and it’s reasonable to assume that AI will retain and project any subjective bias embedded in the underlying body of ‘facts’.

We’re facing competing models for the future of truth, based on the question of centralization. Do you trust yourself to deduce the best answer to challenging questions, or do you prefer to simply trust the authoritative position? Well, consider that there are centralized models with obvious bias behind most of our sources. The tech giants are all filtering our news and likely having more impact than powerful media editors. Are they unbiased? The government is dictating most of the educational curriculum in our model. Are they unbiased?

That centralized truth model should be raising alarm bells for anyone paying attention. Instead, consider a truly decentralized model where no corporate or government interest is influencing the ultimate decision on what’s true. And consider that the truth is potentially unstable. Establishing the initial position on facts is one thing, but the ability to change that view in the face of more information is likely the bigger benefit.

A decentralized fact model without commercial or political interest would openly seek out corrections. It would critically evaluate new knowledge and objectively re-frame the previous position whenever warranted. It would communicate those changes without concern for timing, or for the social or economic impact. It quite simply wouldn’t consider or care whether or not you liked the truth.

The model proposed by Trive appears to meet those objectivity criteria and is getting noticed as more people tire of left-vs-right and corporatocracy preservation.

IBM Debater seems like it would be able to engage in critical thinking that would shift influence towards a decentralized model. Hopefully, Debater would view the denial of truth as subjective and illogical. With any luck, the computer would confront that conduct directly.

IBM’s AI machine already can examine tactics and style. In a recent debate, it coldly scolded the opponent with: “You are speaking at the extremely fast rate of 218 words per minute. There is no need to hurry.”

Debater can obviously play the debate game while managing massive amounts of information and determining relevance. As it evolves, it will need to rely on the veracity of that information.

Trive and Debater seem to be a complement to each other, so far.

Author Bio

Barry Cousins is Research Lead at Info-Tech Research Group, specializing in Project Portfolio Management, Help/Service Desk, and Telephony/Unified Communications. He brings an extensive background in technology, IT management, and business leadership.

About Info-Tech Research Group

Info-Tech Research Group is a fast-growing IT research and advisory firm. Founded in 1997, Info-Tech produces unbiased and highly relevant IT research solutions. Since 2010 McLean & Company, a division of Info-Tech, has provided the same, unmatched expertise to HR professionals worldwide.