Social Media and The Front Page

Walter Burns
Promotional photograph of Osgood Perkins as Walter Burns in the 1928 Broadway production of The Front Page

12 September 2018 – The Front Page was an hilarious one-set stage play supposedly taking place over a single night in the dingy press room of Chicago’s Criminal Courts Building overlooking the gallows behind the Cook County Jail. I’m not going to synopsize the plot because the Wikipedia entry cited above does such an excellent job it’s better for you to follow the link and read it yourself.

First performed in 1928, the play has been revived several times and suffered countless adaptations to other media. It’s notable for the fact that the main character, Hildy Johnson, originally written as a male part, is even more interesting as a female. That says something important, but I don’t know what.

By the way, I insist that the very best adaptation is Howard Hawks’ 1940 tour de force film entitled His Girl Friday starring Rosalind Russell as Hildy Johnson, and Cary Grant as the other main character Walter Burns. Burns is Johnson’s boss and ex-husband who uses various subterfuges to prevent Hildy from quitting her job and marrying an insurance salesman.

That’s not what I want to talk about today, though. What’s important for this blog posting is part of the play’s backstory. It’s important because it can help provide context for the entire social media industry, which is becoming so important for American society right now.

In that backstory, a critical supporting character is one Earl Williams, who’s a mousey little man convicted of murdering a policeman and sentenced to be executed the following morning right outside the press-room window. During the course of the play, it comes to light that Williams, confused by listening to a soapbox demagogue speaking in a public park, accidentally shot the policeman and was subsequently railroaded in court by a corrupt sheriff who wanted to use his execution to help get out the black(!?) vote for his re-election campaign.

What publicly executing a confused communist sympathizer has to do with motivating black voters I still fail to understand, but it makes as much sense as anything else the sheriff says or does.

This plot has so many twists and turns paralleling issues still resonating today that it’s ridiculous. That’s a large part of the play’s fun!

Anyway, what I want you to focus on right now is the subtle point that Williams was confused by listening to a soapbox demagogue.

Soapbox demagogues were a fixture in pre-Internet political discourse. The U.S. Constitution’s First Amendment explicitly gives private citizens the right to peaceably assemble in public places. For example, during the late 1960s a typical summer Sunday afternoon anywhere in any public park in North America or Europe would see a gathering of anywhere from 10 to 10,000 hippies for an impromptu “Love In,” or “Be In,” or “Happening.” With no structure or set agenda folks would gather to do whatever seemed like a good idea at the time. My surrealist novelette Lilith describes a gathering of angels, said to be “the hippies of the supernatural world,” that was patterned after a typical Hippie Love In.

Similarly, a soapbox demagogue had the right to commandeer a picnic table, bandstand, or discarded soapbox to place himself (at the time they were overwhelmingly male) above the crowd of passersby that he hoped would listen to his discourse on whatever he wanted to talk about.

In the case of Earl Williams’ demagogue, the speech was about “production for use.” The feeble-minded Williams applied that idea to the policeman’s service weapon, with predictable results.

Fast forward to the twenty-first century.

I haven’t been hanging around local parks on Sunday afternoons for a long time, so I don’t know if soapbox demagogues are still out there. I doubt that they are because it’s easier and cheaper to log onto a social-media platform, such as Facebook, to shoot your mouth off before a much larger international audience.

I have browsed social media, however, and see the same sort of drivel that used to spew out of the mouths of soapbox demagogues back in the day.

The point I’m trying to make is that there’s really nothing novel about social media. Being a platform for anyone to say anything to anyone is the same as last-century soapboxes being available for anyone who thinks they have something to say. It’s a prominent right guaranteed in the Bill of Rights. In fact, it’s important enough to be guaranteed in the very first of the Bill’s amendments to the U.S. Constitution.

What is not included, however, is a proscription against anyone ignoring the HECK out of soapbox demagogues! They have the right to talk, but we have the right to not listen.

Back in the day, almost everybody passed by soapbox demagogues without a second glance. We all knew they climbed their soapboxes because it was the only venue they had to voice their opinions.

Preachers had pulpits in front of congregations, so you knew they had something to say that people wanted to hear. News reporters had newspapers people bought because they contained news stories that people wanted to read. Scholars had academic journals that other scholars subscribed to because they printed results of important research. Fiction writers had published novels folks read because they found them entertaining.

The list goes on.

Soapbox demagogues, however, had to stand on an impromptu platform because they didn’t have anything to say worth hearing. The only ones who stopped to listen were those, like the unemployed Earl Williams, who had nothing better to do.

Pretending that social media are any more legitimate a venue for ideas than a park soapbox is just goofy.

Social media are not legitimate media for the exchange of ideas simply because anybody is able to say anything on them, just like a soapbox in a park. Like a soapbox in a park, most of what is said on social media isn’t worth hearing. It’s there because the barrier to entry is essentially nil. That’s why so many purveyors of extremist and divisive rhetoric gravitate to social media platforms. Legitimate media won’t carry them.

Legitimate media organizations have barriers to the entry of lousy ideas. For example, I subscribe to The Economist because of their former Editor-in-Chief, John Micklethwait, who impressed me as an excellent arbiter of ideas (despite having a weird last name). I was very pleased when he transferred over to Bloomberg News, which I consider the only televised outlet for globally significant news. The Wall Street Journal’s business focus forces Editor-in-Chief Matt Murray into a “just the facts, ma’am” stance because every newsworthy event creates both winners and losers in the business community, so content bias is a non-starter.

The common thread among these legitimate-media sources is the existence of an organizational structure focused on maintaining content quality. There are knowledgeable gatekeepers (called “editors”) charged with keeping out bad ideas.

So, when Donald Trump, for example, shows a preference for social media (in his case, Twitter) and an abhorrence of traditional news outlets, he’s telling us his ideas aren’t worth listening to. Legitimate media outlets disparage his views, so he’s forced to use the twenty-first century equivalent of a public-park soapbox: social media.

On social media, he can say anything to anybody because there’s nobody to tell him, “That’s a stupid thing to say. Don’t say it!”

Thinking Through Facial Recognition

Makeup
There are lots of reasons a person might wear makeup that could baffle facial recognition technology. Steven J Hensley / Shutterstock.com

5 September 2018 – A lot of us grew up reading stories by Robert A. Heinlein, who was one of the most Libertarian-leaning of twentieth-century science-fiction writers. When contemplating then-future surveillance technology (which he imagined would be even more intrusive than it actually is today) he wrote (in his 1982 novel Friday): “… there is a moral obligation on each free person to fight back wherever possible … ”

The surveillance technology Heinlein expected to become the most ubiquitous, pervasive, intrusive and literally in-your-face was facial recognition. Back in 1982, he didn’t seem to quite get the picture (pun intended) of how automation, artificial intelligence, and facial recognition could combine to become Big Brother’s all-seeing eyes. Now that we’re at the cusp of that technology being deployed, it’s time for just-us-folks to think about how we should react to it.

An alarm should be set off by an article filed by NBC News journalists Tom Costello and Ethan Sacks on 23 August reporting: “New facial recognition tech catches first impostor at D.C. airport.” Apparently, a Congolese national tried to enter the United States on a flight from Sao Paulo, Brazil through Washington Dulles International Airport on a French passport, and was instantly unmasked by a new facial-recognition system that quickly figured out that his face did not match that of the real holder of the French passport. Authorities figured out he was a Congolese national by finding his real identification papers hidden in his shoe. Why he wanted into the United States; why he tried to use a French passport; and why he was coming in from Brazil are all questions unanswered in the article. The article was about this whiz-bang technology that worked so well on the third day it was deployed.

What makes the story significant is that this time it all worked in real time. Previous applications of facial recognition have worked only after the fact.

The reason this article should set off alarm bells is not that the technology unmasked some jamoke trying to sneak into the country for some unknown, but probably nefarious, purpose. On balance, that was almost certainly (from our viewpoint) a good thing. The alarms should sound, however, to wake us up to think about how we really want to react to this kind of ubiquitous surveillance being deployed.

Do we really want Big Brother watching us?

Joan Quigley, former Assemblywoman from Jersey City, NJ, where she was Majority Conference Leader, chair of Homeland Security, and served on Budget, Health and Economic Development Committees, wrote an op-ed piece appearing in The Jersey Journal on 20 August entitled: “Facial recognition the latest alarm bell for privacy advocates.” In it she points out that “it’s not only crime some don’t want others to see.”

There’s a whole lot of what each of us does that we want to keep private. While we consider it perfectly innocent, it’s just nobody else’s business.

It’s why the stalls in public bathrooms have doors.

People generally object to living in a fishbowl.

So, ubiquitous deployment of facial recognition technology brings with it some good things, and some that are not so good. That argues for a national public debate aimed at developing a consensus regarding where, when and how facial recognition technology should be used.

Framing the Debate

To start with, recognize that facial recognition is already ubiquitous and natural. It’s why Mommy Nature goes through all kinds of machinations to make our faces more-or-less unique. One of the first things babies learn is how to recognize Mom’s face. How could the cave guys have coordinated their hunting parties if nobody could tell Fred from Manny?

Facial recognition technology just extends our natural talent for recognizing our friends by sight to its use by automated systems.
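To make that concrete, here’s a minimal sketch of the kind of comparison an automated system performs, assuming the hard part (reducing a face image to a numeric feature vector, or “embedding”) has already been done by some recognition model. The vectors, names, and the 0.8 threshold below are all made up for illustration; real systems tune their thresholds against false-match and false-non-match rates.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(probe_embedding, reference_embedding, threshold=0.8):
    """Decide whether a live camera image matches a stored reference
    (e.g., the photo in a passport record). The threshold here is a
    made-up illustration, not a real system's setting."""
    return cosine_similarity(probe_embedding, reference_embedding) >= threshold

# Toy example: the traveler's face embedding vs. the passport holder's.
passport_face = [0.12, 0.80, 0.35, 0.44]
camera_face = [0.90, 0.10, 0.55, 0.05]
print(same_person(camera_face, passport_face))  # False -> flag for a human officer
```

That mismatch-then-flag step is essentially what happened at Dulles: the machine raised its hand, and human officers took it from there.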

A white paper entitled Top 4 Modern Use Cases of Biometric Technology crossed my desk recently. It was published by security-software firm iTrue. Their stated purpose is to “take biometric technology to the next level by securing all biometric data onto their blockchain platform.”

Because the white paper is clearly a marketing piece, and it is unsigned by the actual author, I can’t really vouch for the accuracy of its conclusions. For example, the four use cases listed in the paper are likely just the four main applications they envision for their technology. They are, however, a reasonable starting point for our public discussion.

The four use cases cited are:

  1. Border control and airport security
  2. Company payroll and attendance management
  3. Financial data and identity protection
  4. Physical or logical access solutions

This is probably not an exhaustive list, but offhand I can’t think of any important items left off. So, I’ll pretend like it’s a really good, complete list. It may be. It may not be. That should be part of the discussion.

The first item on the list is exactly what the D.C. airport news story was all about, so enough said. That horse has been beaten to death.

About the second item, the white paper says: “Organizations are beginning to invest in biometric technologies to manage employee ID and attendance, since individuals are always carrying their fingerprints, eyes, and faces with them, and these items cannot be lost, stolen, or forgotten.”

In my Mother’s unforgettable New England accent, we say, “Eye-yuh!”

There is, however, one major flaw in the reasoning behind relying on facial recognition. It’s illustrated by the image above. Since time immemorial, folks have worn makeup that could potentially give facial recognition systems ginky fits. They do it for all kinds of innocent reasons. If you’re going to make being able to pass facial recognition tests a prerequisite for doing your job, expect all sorts of pushback.

For example, over the years I’ve known many, many women who wouldn’t want to be seen in public without makeup. What are you going to do? Make your workplace a makeup-free zone? That’ll go over big!

On to number three. How’s your average cosplay enthusiast going to react to not being able to use their credit or debit card to buy gas on their way to an event because the bank’s facial recognition system can’t see through their alien-creature makeup?

Transgender person
Portrait of young transgender person wearing pink wig. Ranta Images/Shutterstock

Even more seriously, look at the image on the right. This is a transgender person wearing a wig. Really cute, isn’t he/she? Do you think your facial recognition software could tell the difference between him and his sister? Does your ACH vendor want to risk trampling his/her rights?

Oops!

When we come to the fourth item on the list, suppose a Saudi Arabian woman wants to get into her house? Are you going to require her to remove her burka to get through her front door? What about her right to religious freedom? Or, will this become another situation where she can’t function as a human being without being accompanied by a male guardian? We’re already on thin ice when she wants to enter the country through an airport!

I’ve already half formed my own ideas about these issues. I look forward to participating in the national debate.

Heinlein would, of course, delight in every example where facial recognition could be foiled. In Friday, he gleefully pointed out “… what takes three hours to put on will come off in fifteen minutes of soap and hot water.”

Legal vs. Scientific Thinking

Scientific Method Diagram
The scientific method assumes uncertainty.

29 August 2018 – With so much controversy in the news recently surrounding POTUS’ exposure in the Mueller investigation into Russian meddling in the 2016 Presidential election, I’ve been thinking a whole lot about how lawyers look at evidence versus how scientists look at evidence. While I’ve only limited background with legal matters (having an MBA’s exposure to business law), I’ve spent a career teaching and using the scientific method.

While high-school curricula like to teach the scientific method as a simple step-by-step program, the reality is very much more complicated. The version they teach you in high school consists of five to seven steps, which pretty much look like this:

  1. Observation
  2. Hypothesis
  3. Prediction
  4. Experimentation
  5. Analysis
  6. Repeat

I’ll start by explaining how this program is supposed to work, then look at why it doesn’t actually work that way. It has to do with why the concept is so fuzzy that it’s not really clear how many steps should be included.

It all starts with observation of things that go on in the World. Newton’s law of universal gravitation started with the observation that when left on their own, most things fall down. That’s Newton’s falling-apple observation. Generally, the observation is so common that it takes a genius to ask the question “why.”

Once you ask the question “why,” the next thing that happens is that your so-called genius comes up with some cockamamie explanation, called an “hypothesis.” In fact, there are usually several explanations that vary from the erudite to the thoroughly bizarre.

Through bitter experience, scientists have come to realize that no hypothesis is too whacko to be considered. It’s often the most outlandish hypothesis that proves to be right!

For example, ancients tended to think in terms of objects somehow “wanting” to go downward as the least weird of explanations for gravity. It came from animism, which is the not-too-bizarre (to the ancients) idea that natural objects each have their own spirits, which animate their behavior. Rocks are hard because their spirits resist being broken. They fall down when released because their spirits somehow like down better than up.

What we now consider the most-correctest explanation, that we live in a four-dimensional space-time continuum that is warped by concentrations of matter-energy so that objects follow paths that tend to converge with each other, wouldn’t have made any sense at all to the ancients. It, in fact, doesn’t make a whole lot of sense to anybody who hasn’t spent years immersing themselves in the subject of Einstein’s General Theory of Relativity.

Scientists then take all the hypotheses, and use them to make predictions as to what happens next if you set up certain relevant situations, called “experiments.” An hypothesis works if its predictions match up with what Mommy Nature produces for results of the experiments.

Scientists then do tons of experiments testing different predictions of the hypotheses, then compare (the analysis step) the results and eventually develop a warm, fuzzy feeling that one hypothesis does a better job of predicting what Mommy Nature does than do the others.

It’s important to remember that no scientist worth his or her salt believes that the currently accepted hypothesis is actually in any absolute sense “correct.” It’s just the best explanation among the ones we have on hand now.

That’s why the last step is to repeat the entire process ad nauseam.

While this long, drawn out process does manage to cover the main features of the scientific method, it fails in one important respect: it doesn’t boil the method down to its essentials.

Not boiling it down to essentials forces one to deal with all kinds of exceptions created by the extraneous, non-essential bits. There end up being more exceptions than rules. For example, science pedagogy website Science Buddies ends up throwing its hands in the air by saying: “In fact, there are probably as many versions of the scientific method as there are scientists!”

The much simpler explanation I’ve used for years to teach college students about the scientific method follows the diagram above. The pattern is quite simple, with only four components. It starts by setting up a set of initial conditions and follows through to the results.

There are two ways to get from the initial conditions to the results. The first is to just set the whole thing up, and let Mommy Nature do her thing. The second is to think through your hypothesis to predict what it says Mommy Nature will come up with. If they match, you count your hypothesis as a success. If not, it’s wrong.

You do that a bazillion times in a bazillion different ways, and a really successful hypothesis (like General Relativity) will turn out right pretty much all of the time.
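Here’s a toy sketch of that compare-and-tally loop, using a made-up free-fall example rather than anything from the essay. The “nature” function stands in for Mommy Nature (a real experiment plus measurement noise); the hypothesis earns its keep by matching her results over many trials.

```python
import random

def nature(initial_conditions):
    """Stand-in for Mommy Nature: what actually happens when you set up
    the experiment. Here, a toy free-fall distance with measurement noise."""
    t = initial_conditions["drop_time"]
    return 0.5 * 9.81 * t ** 2 + random.gauss(0, 0.05)

def hypothesis(initial_conditions):
    """The prediction your hypothesis makes for the same setup."""
    t = initial_conditions["drop_time"]
    return 0.5 * 9.81 * t ** 2

def test_hypothesis(trials=1000, tolerance=0.2):
    """Run many experiments and count how often prediction matches result."""
    successes = 0
    for _ in range(trials):
        setup = {"drop_time": random.uniform(0.5, 2.0)}
        if abs(hypothesis(setup) - nature(setup)) <= tolerance:
            successes += 1
    return successes / trials

print(f"Hypothesis matched nature in {test_hypothesis():.0%} of trials")
```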

Generally, if you’ve got a really good hypothesis but your experiment doesn’t work out right, you’ve screwed up somewhere. That means what you actually set up as the initial conditions wasn’t what you thought you were setting up. So, Mommy Nature (who’s always right) doesn’t give you the result you thought you should get.

For example, I was once asked to mentor another faculty member who was having trouble building an experiment to demonstrate what he thought was an exception to Newton’s Second Law of Motion. It was based on a classic experiment called “Atwood’s Machine.”

I immediately recognized that he’d made a common mistake novice physicists often make. I tried to explain it to him, but he refused to believe me. Then, I left the room.

I walked away because, despite his conviction, Mommy Nature wasn’t going to do what he expected her to. He kept believing that there was something wrong with his experimental apparatus. It was his hypothesis, instead.

Anyway, the way this all works is that you look for patterns in what Mommy Nature does. Your hypothesis is just a description of some part of Mommy Nature’s pattern. Scientific geniuses are folks who are really, really good at recognizing the patterns Mommy Nature uses.

That is NOT what our legal system does.

Not by a LONG shot!

The Legal Method

While both scientific and legal thinking methods start from some initial state, and move to some final conclusion, the processes for getting from A to B differ in important ways.

The Legal Method
In legal thinking, a chain of evidence is used to get from criminal charges to a final verdict.

First, while the hypothesis in the scientific method is assumed to be provisional, the legal system is based on coming to a definite explanation of events that is in some sense “correct.” The results of scientific inquiry, on the other hand, are accepted as “probably right, maybe, for now.”

That ain’t good enough in legal matters. The verdict of a criminal trial, for example, has to be true “beyond a reasonable doubt.”

Second, in legal matters the path from the initial conditions (the “charges”) to the results (the “verdict”) is linear. It has one path: through a chain of evidence. There may be multiple bits of evidence, but you can follow them through from a definite start to a definite end.

The third way the legal method differs from the scientific method is what I call the “So, What?” factor.

If your scientific hypothesis is wrong, meaning it gives wrong results, “So, What?”

Most scientific hypotheses are wrong! They’re supposed to be wrong most of the time.

Finding that some hypothesis is wrong is no big deal. It just means you don’t have to bother with that dumbass idea, anymore. Alien abductions get relegated to entertainment for the entertainment starved, and real scientists can go on to think about something else, like the kinds of conditions leading to development of living organisms and why we don’t see alien visitors walking down Fifth Avenue.

(Leading hypothesis: the distances from there to here are so vast that anybody smart enough to make the trip has better things to do.)

If, on the other hand, your legal verdict is wrong, really bad things happen. Maybe somebody’s life is ruined. Maybe even somebody dies. The penalty for failure in the legal system is severe!

So, the term “air tight” shows up a lot in talking about legal evidence. In science, not so much.

For scientists “Gee, it looks like . . . ” is usually as good as it gets.

For judges, they need a whole lot more.

So, as a scientist I can say: “POTUS looks like a career criminal.”

That, however, won’t do the job for, say, Robert Mueller.

In Real Life

Very few of us are either scientists or judges. We live in the real world and have to make real-world decisions. So, which sort of method for coming to conclusions should we use?

In 1983, film director Paul Brickman spent an estimated 6.2 million dollars and 99 min worth of celluloid (some 142,560 individual images at the standard frame rate of 24 fps) telling us that successful entrepreneurs must be prepared to make decisions based on insufficient information. That means with no guarantee of being right. No guarantee of success.

He, by the way, was right. His movie, Risky Business, grossed $63 million at the box office in the U.S. alone, roughly ten times its production budget!

There’s an old saying: “A conclusion is that point at which you decide to stop thinking about it.”

It sounds a bit glib, but it actually isn’t. Every experienced businessman, for example, knows that you never have enough information. You are generally forced to make a decision based on incomplete information.

In the real world, making a wrong decision is usually better than making no decision at all. What that means is that, in the real world, if you make a wrong decision you usually get to say “Oops!” and walk it back. If you decide to make no decision, that’s a decision that you can’t walk back.

Oops! I have to walk that statement back.

There are situations where the penalty for the failure of making a wrong decision is severe. For example, we had a cat once, who took exception to a number of changes in our home life. We’d moved. We’d gotten a new dog. We’d adopted another cat. He didn’t like any of that.

I could see from his body language that he was developing a bad attitude. Whereas he had previously been patient when things didn’t go exactly his way, he’d started acting more aggressive. One night, we were startled to hear a screeching of brakes in the road passing our front door. We went out to find that Nick had run across the road and been hit by a car.

Splat!

Considering the pattern of events, I concluded that Nick had died of PCD. That is, “Poor Cat Decision.” He’d been overly aggressive when deciding whether or not to cross the road.

Making no decision (hesitating before running across the road) would probably have been better than the decision he made to turn on his jets.

That’s the kind of decision where getting it wrong is worse than holding back.

Usually, however, no decision is the worst decision. As the Zen haiku says:

In walking, just walk.
In sitting, just sit.
Above all, don’t wobble.

That argues for using the scientist’s method: gather what facts you have, then make a decision. If your hypothesis turns out to be wrong, “So, What?”

You Want to Print WHAT?!

3D printed plastic handgun
The Liberator gun, designed by Defense Distributed. Photo originally made at 16-05-2013 by Vvzvlad – Flickr: Liberator.3d.gun.vv.01, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=26141469

22 August 2018 – Since the Fifteenth Century, when Johannes Gutenberg invented the technology, printing has given those in authority ginky fits. Until recently it was just dissemination of heretical ideas, a la Giordano Bruno, that raised authoritarian hackles. More recently, 3-D printing, also known as additive manufacturing (AM), has made it possible to make things that folks who like to tell you what you’re allowed to do don’t want you to make.

Don’t get me wrong, AM makes it possible to do a whole lot of good stuff that we only wished we could do before. Like, for example, Jay Leno uses it to replace irreplaceable antique car parts. It’s a really great technology that allows you to make just about anything you can describe in a digital drafting file without the difficulty and mess of hiring a highly skilled fabrication machine shop.

In my years as an experimental physicist, I dealt with fabrication shops a lot! Fabrication shops are collections of craftsmen and the equipment they need to make one-off examples of amazing stuff. It’s generally stuff, however, that nobody’d want to make a second time.

Like the first one of anything.

The reason I specify that AM is for making stuff nobody’d want to make a second time is that it’s slow. Regular machine-shop products, like nuts and bolts and sewing machines, and other stuff folks want to make a lot of, are worth spending a lot of time on to figure out fast, efficient, and cheap ways to make lots of them.

Take microchips. The darn things take huge amounts of effort to design, and tens of billions of dollars of equipment to make, but once you’ve figured it all out and set it all up, you can pop the things out like chocolate-chip cookies. You can buy an Intel Celeron G3900 dual-core 2.8 GHz desktop processor online for $36.99 because Intel spread the hideous quantities of up-front cost it took to originally set up the production line over the bazillions of processors that production line can make. It’s called “economy of scale.”
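For the curious, here’s the back-of-the-envelope arithmetic behind that economy of scale, with numbers invented purely for illustration:

```python
def unit_cost(fixed_setup_cost, marginal_cost, units):
    """Average cost per part: one-time setup spread over the run,
    plus the cost to actually make each part."""
    return fixed_setup_cost / units + marginal_cost

# Made-up numbers, just to show the shape of the curve.
SETUP = 1_000_000.0   # tooling, fixtures, process development
MARGINAL = 5.0        # material and machine time per part

for n in (1, 10, 1_000, 1_000_000):
    print(f"{n:>9,} units -> ${unit_cost(SETUP, MARGINAL, n):,.2f} each")
# One unit costs about $1,000,005; a million units cost about $6 each.
```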

If you’re only gonna make one, or just a few, of the things, there’s no economy of scale.

But, if you’re only gonna make one, or just a few, you don’t worry too much about how long it takes to make each one, and what it costs is what it costs.

So, you put up with doing it some way that’s slow.

Like AM.

A HUGE advantage of making things with AM is that you don’t have to be all that smart. If you once learn to set the 3-D printer up, you’re all set. You just download the digital computer-aided-manufacturing (CAM) file into the printer’s artificial cerebrum, and it just DOES it. If you can download the file over the Internet, you’ve got it knocked!

Which brings us to what I want to talk about today: 3-D printing of handguns.

Now, I’m not any kind of anti-gun nut. I’ve got half a dozen firearms laying around the house, and have had since I was a little kid. It’s a family thing. I learned sharpshooting when I was around ten years old. Target shooting is a form of meditation for me. A revolver is my preferred weapon if ever I have need of a weapon. I never want to be the one who brought a knife to a gunfight!

That said, I’ve never actually found a need for a handgun. The one time I was present at a gunfight, I hid under the bed. The few times I’ve been threatened with guns, I talked my way out of it. Experience has led me to believe that carrying a gun is the best way to get shot.

I’ve always agreed with my gunsmith cousin that modern guns are beautiful pieces of art. I love their precision and craftsmanship. I appreciate the skill and effort it takes to make them.

The good ones, that is.

That’s the problem with AM-made guns. It takes practically no skill to create them. They’re right up there with the zip guns we talked about when we were kids.

We never made zip guns. We talked about them. We talked about them in the same tones we’d use talking about leeches or cockroaches. Ick!

We’d never make them because they were beneath contempt. They were so crude we wouldn’t dare fire one. Junk like that’s dangerous!

Okay, so you get an idea how I would react to the news that some nut case had published plans online for 3-D printing a handgun. Bad enough to design such a monstrosity, what about the idiot stupid enough to download the plans and make such a thing? Even more unbelievable, what moron would want to fire it?

Have they no regard for their hands? Don’t they like their fingers?

Anyway, not long ago, a press release crossed my desk from Giffords, the gun-safety organization set up by former U.S. Rep. Gabrielle Giffords and her husband after she survived an assassination attempt in Arizona in 2011. The Giffords’ press release praised Senators Richard Blumenthal and Bill Nelson, and Representatives David Cicilline and Seth Moulton for introducing legislation to stop untraceable firearms from further proliferation after the Trump Administration cleared the way for anyone to 3-D-print their own guns.

Why “untraceable” firearms, and what have they got to do with AM?

Downloadable plans for producing guns by the AM technique put firearms in the hands of folks too unskilled and, yes, stupid to make them themselves with ordinary methods. That’s important because it is theoretically possible to produce firearms of surprising sophistication via AM. The first one offered was a cheap plastic thing (depicted above) that would likely be more of a danger to its user than to its intended victim. More recent offerings, however, have been repeating weapons made of more robust materials that might successfully be used to commit crimes.

Ordinary firearm production techniques require a level of skill and investment in equipment that puts them above the radar for federal regulators. The first thing those regulators require is licensing the manufacturer. Units must be serialized, and records must be kept. The system certainly isn’t perfect, but it gives law enforcement a fighting chance when the products are misused.

The old zip guns snuck in under the radar, but they were so crude and dangerous to their users that they were seldom even made. Almost anybody with enough sense to make them had enough sense not to make them! Those who got their hands on the things were more a danger to themselves than to society.

The Trump administration’s recent settlement with Defense Distributed, allowing them to relaunch their website, include a searchable database of firearm blueprints, and let the public create their own fully functional, unserialized firearms using AM technology, opens the floodgates for dangerous people to make their own untraceable firearms.

That’s just dumb!

The Untraceable Firearms Act would prohibit the manufacture and sale of firearms without serial numbers, require any person or business engaged in the business of selling firearm kits and unfinished receivers to obtain a dealer’s license and conduct background checks on purchasers, and mandate that a person who runs a business putting together firearms or finishing receivers must obtain a manufacturer’s license and put serial numbers on firearms before offering them for sale to consumers.

The 3-D Printing Safety Act would prohibit the online publication of computer-aided design (CAD) files which automatically program a 3-D-printer to produce or complete a firearm. That closes the loophole letting malevolent idiots evade scrutiny by making their own firearms.

We have to join with Giffords in applauding the legislators who introduced these bills.

Do You Really Want a Robotic Car?

Robot Driver
Sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car.” Mopic/Shutterstock

15 August 2018 – Many times in my blogging career I’ve gone on a rant about the three Ds of robotics. These are “dull, dirty, and dangerous.” They relate to the question, which is not asked often enough, “Do you want to build an automated system to do that task?”

The reason the question is not asked enough is that it needs to be asked whenever anyone looks into designing a system (whether manual or automated) to do anything. The possibility that anyone ever sets up a system to do anything without first asking that question means that it’s not asked enough.

When asking the question, getting a hit on any one of the three Ds tells you to at least think about automating the task. Getting a hit on two of them should make you think that your task is very likely to be ripe for automation. If you hit on all three, it’s a slam dunk!

When we look into developing automated vehicles (AVs), we get hits on “dull” and “dangerous.”

Driving can be excruciatingly dull, especially if you’re going any significant distance. That’s why people fall asleep at the wheel. I daresay everyone has fallen asleep at the wheel at least once, although we almost always wake up before hitting the bridge abutment. It’s also why so many people drive around with cellphones pressed to their ears. The temptation to multitask while driving is almost irresistible.

Driving is also brutally dangerous. Tens of thousands of people die every year in car crashes. It’s pretty safe to say that nearly all those fatalities involve cars driven by humans. The number of people who have been killed in accidents involving driverless cars you can (as of this writing) count on one hand.

I’m not prepared to go into statistics comparing safety of automated vehicles vs. manually driven ones. Suffice it to say that eventually we can expect AVs to be much safer than manually driven vehicles. We’ll keep developing the technology until they are. It’s not a matter of if, but when.

This is the analysis most observers (if they analyze it at all) come up with to prove that vehicle driving should be automated.

Yet, opinions that AVs are acceptable, let alone inevitable, are far from universal. In a survey of 3,000 people in the U.S. and Canada, Ipsos Strategy 3 found that sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car,” and a whopping 39% of Americans (and 30% of Canadians) would rarely or never let driverless technology do the parking!

Why would so many people be unwilling to give up their driving privileges? I submit that it has to do with a parallel consideration that is every bit as important as the three Ds when deciding whether to automate a task:

Don’t Automate Something Humans Like to Do!

Predatory animals, especially humans, like to drive. It’s fun. Just ask any dog who has a chance to go for a ride in a car! Almost universally they’ll jump into the front seat as soon as you open the door.

In fact, if you leave them (dogs) in the car unattended for a few minutes, they’ll be behind the wheel when you come back!

Humans are the same way. Leave ’em unattended in a car for any length of time, and they’ll think of some excuse to get behind the wheel.

The Ipsos survey found that some 61% of both Americans and Canadians identify themselves as “car people.” When asked “What would have to change about transportation in your area for you to consider not owning a car at all, or owning fewer cars?” 39% of Americans and 38% of Canadians responded “There is nothing that would make me consider owning fewer cars!”

That’s pretty definitive!

Their excuse for getting behind the wheel is largely an economic one: Seventy-eight percent of Americans claim they “definitely need to have a vehicle to get to work.” In more urbanized Canada (you did know that Canadians cluster more into cities, didn’t you?) that drops to 53%.

Whether that claimed need to “have” a car to get to work is based on historical precedent, urban planning, wishful thinking, or flat-out just what they want to believe, it’s a good, cogent reason why folks, especially Americans, hang onto their steering wheels for dear life.

The moral of this story is that driving is something humans like to do, and getting them to give it up will be a serious uphill battle for anyone wanting to promote driverless cars.

Yet, development of AV technology is going full steam ahead.

Is that just another example of Dr. John Bridges’ version of Solomon’s proverb, “A fool and his money are soon parted”?

Possibly, but I think not.

Certainly, the idea of spending tons of money to have bragging rights for the latest technology, and to take selfies showing you reading a newspaper while your car drives itself through traffic has some appeal. I submit, however, that the appeal is short lived.

For one thing, reading in a moving vehicle is the fastest route I know of to motion sickness. It’s right up there with cueing up the latest Disney cartoon feature for your kids on the overhead DVD player in your SUV, then cleaning up their vomit.

I, for one, don’t want to go there!

Sounds like another example of “More money than brains.”

There are, however, a wide range of applications where driving a vehicle turns out to be no fun at all. For example, the first use of drone aircraft was as targets for anti-aircraft gunnery practice. They just couldn’t find enough pilots who wanted to be sitting ducks to be blown out of the sky! Go figure.

Most commercial driving jobs could also stand to be automated. For example, almost nobody actually steers ships at sea anymore. They generally stand around watching an autopilot follow a pre-programmed course. Why? As a veteran boat pilot, I can tell you that the captain has a lot more fun than the helmsman. Piloting a ship from, say, Calcutta to San Francisco has got to be mind-numbingly dull. There’s nothing going on out there on the ocean.

Boat passengers generally spend most of their time staring into the wake, but the helmsman doesn’t get to look at the wake. He (or she) has to spend their time scanning a featureless horizontal line separating a light-blue dome (the sky) from a dark-blue plane (the sea) in the vain hope that something interesting will pop up and relieve the tedium.

Hence the autopilot.

Flying a commercial airliner is similar. It has been described (as have so many other tasks) as “hours of boredom punctuated by moments of sheer terror!” While such activity is very Zen (I’m convinced that humans’ ability to meditate was developed by cave guys having to sit for hours, or even days, watching game trails for their next meal to wander along), it’s not top-of-the-line fun.

So, sometimes driving is fun, and sometimes it’s not. We need AV technology to cover those times when it’s not.

In the Game of War, Nice Guys Can’t Afford to Finish Last

Slide into home plate
Leo Durocher was misquoted as saying: “Nice guys finish last.” But, that doesn’t work out too well when you’re fighting World War III! Andrey Yurlov/Shutterstock

1 August 2018 – “With respect to Russia, I agree with the Director of National Intelligence and others,” said Tonya Ugoretz, Director of the Cyber Threat Intelligence Integration Center during a televised panel session on 20 July, “that they are the most aggressive foreign actor that we see in cyberspace.

“For good reason,” she continued, “there is a lot of focus on their activity in 2016 against our election infrastructure and their malign-influence efforts.”

For those who didn’t notice, last week Facebook announced that they shut down thirty-two accounts that the company says “engaged in coordinated political agitation and misinformation efforts ahead of November’s midterm elections, in an echo of Russian activities on the platform during the 2016 U.S. presidential campaign.”

So far, so good.

But, so what?

About a month and a half ago, this column introduced what may be the most important topic to come up so far this century. It’s the dawning of a global cyberwar that some of us see as the Twenty-First Century equivalent of World War I.

Yeah, it’s that big!

Unlike WWI, which started just over one hundred years ago (28 July 1914, to be exact), this new global war necessarily includes a stealth component. It’s already been going on for years, but is only now starting to make headlines.

The reason for this stealth component, and why I say it’s necessary for the conduct of this new World War, is that the most effective offensive weapon being used is disinformation, which doesn’t work too well when the attackee sees it coming.

Previous World Wars involved big, noisy things that made loud bangs, bright flashes of light, and lots of flying shrapnel to cause chaos and confusion among the unfortunate folks they were aimed at. This time, however, the main weapons cause chaos and confusion simply by messing with people’s heads.

Wars, generally, reflect the technologies that dominate the cultures that wage them at the times they occur. During the Bronze Age, wars between Greek, Persian, Egyptian, etc. cultures involved hand-to-hand combat using sharp bits of metal held in the hand (swords, knives) or mounted on sticks (spears, arrows) that combatants cut each other up with. It was a nasty business where combatants generally got up close and personal enough to touch each other while hacking each other to bits.

By the Renaissance, firearms made warfare less personal by allowing combatants to stand off and lob nasty explosive junk at each other. It was less personal, but infinitely more destructive.

In the Nineteenth Century’s Mechanical Age, they graduated to using machines to grind each other up. Grinding people up is also a nasty business, but oh-so-effective for waging War.

The Twentieth Century introduced mass production to the art of warfare with weapons wielded by a few souls that could turn entire cities into junkyards in a few seconds.

That made such an ungodly mess that people started hiring folks to run their countries who actually believed in Jesus Christ’s idea of being nice to people (or at least not exterminating them en masse)! That led to a solid fifty years when the major civilizations stepped back from wars of mass destruction, and only benighted souls who still hadn’t gotten the point went on rampages.

The moral of this story is that countries like to choose up teams for a game called “War,” where the object is for one team to sow chaos and destruction in the other team’s culture. The War goes on until one or the other team cries “Uncle,” and submits to the whims of the “winning” team. Witness the Treaty of Versailles that so humiliated the German people (who cried “Uncle” to end WWI) that they were ready to listen to the %^&* spewed out by Adolf Hitler.

In the past, the method of playing War was to kill as many of the other team’s effectives (warriors) as quickly as possible. That sowed chaos and confusion among them until they (and their support networks; can’t forget their support networks!) lost the will to fight.

Fast forward to the Information Age, and suddenly nobody needs the mess and inconvenience of mass physical destruction when they can get the same result by raising chaos and confusion directly in the other team’s minds through attacks in cyberspace. As any pimply-faced teen hacker can tell you, sowing chaos and confusion in cyberspace is cheap, easy, and (for the right kind of personality) a lot of fun.

You can even convince yourself that you’re not really hurting anyone.

Cyberwarfare is so inexpensive that even countries like North Korea, which is basically dead broke and living on handouts from China, can afford to engage in it.

Mike Rogers (former Chairman of the House Intelligence Committee), speaking at a panel discussion on CSPAN 2 on 20 July 2018 listed four “bad actors” of the nation-state ilk: North Korea, Iran, Russia, and China. These four form the main opposition to the United States in this cyberwar.

Basically, the lines are drawn between western-style democracies (U.S.A., UK, EU, Canada, Mexico, etc.) and old-fashioned fascistic autocracies (the four mentioned, plus a host of similar regimes, such as Turkey and Syria).

Gee, that looks ideologically like the Allied and Axis powers duking it out during World War II. U.S.A., UK, France, etc. were on the “democracy” side with Nazi Germany, Fascist Italy, and Japan on the “autocracy” side.

It’s telling that the autocratic Stalinist regime in Russia initially came in on the Fascist side. They were fascist buddies with Germany until Hitler stabbed them in the back with Operation Barbarossa, where Germany massively attacked the western Soviet Union along an 1,800-mile front. That betrayal was the reason Stalin flipped to the “democracy” team, even though they were his ideological enemies. Of course, cooperation between Russia and the West lasted about a microsecond past the German surrender on 7 May 1945. After that, the Cold War began.

So, what we’re dealing with now is a reprise of World War II, but mainly with cyberweapons and a somewhat different cast of characters making up the teams.

So far, the reality-show titled “The Trump Administration” has been actively aiding and abetting our adversaries. This problem seems to be isolated in POTUS, with lower levels of the U.S. government largely running around screaming, “The sky is falling!”

Facebook’s action shows that others in the U.S. are taking the threat seriously, despite anti-leadership from the top.

How big a shot across the Russian cyberjuggernaut’s bow Facebook’s action is remains to be seen. A shot across the bow is only made to get the other guy’s attention, and not to have any destructive effect.

Since I’m sure Putin’s minions at least noticed Facebook’s action, it probably did its job of getting their attention. Any actual deterrence, however, is going to depend on who does what next. As with the equivalent conflict of the last Century, expect to see a long slog before anybody cries “Uncle.”

As cybersecurity expert Theresa Payton of Fortalice Solutions says: “Facebook has effectively taken a proactive approach to this issue rather than waiting for Congressional oversight to force their hand. If other platforms and regulators do not take action soon, it will become too late to stop Russian interference.”

We want to commend Facebook for standing up to be counted when it’s needed. We want to encourage them and others in both the public and private sectors to expand their efforts to bolster our cyberdefenses.

We’ve got a World War to fight!

The outlook is not all bad. Looking at the list of our enemies, and comparing them to who we expect to be our friends in this conflict, it is clear that our team has the upper hand with regard to both economic and technological resources.

At least as far as the World War III game is concerned, we’re going to need, for the third time, to put the lie to Leo Durocher’s misquote: “Nice guys finish last.”

In this Third World War game, we hope that we’re the nice guys, but our team has still got to win!

Who’s NOT a Creative?

 

Compensating sales
Close-up Of A Business Woman Giving Cheque To Her Colleague At Workplace In Office. Andrey Popov/Shutterstock

25 July 2018 – Last week I made a big deal about the things that motivate creative people, such as magazine editors, and how the most effective rewards were non-monetary. I also said that monetary rewards, such as commissions based on sales results, were exactly the right rewards to use for salespeople. That would imply that salespeople were somehow different from others, and maybe even not creative.

That is not the impression I want to leave you with. I’m devoting this blog posting to setting that record straight.

My remarks last week were based on Maslow‘s and Herzberg‘s work on motivation of employees. I suggested that these theories were valid in other spheres of human endeavor. Let’s be clear about this: yes, Maslow’s and Herzberg’s theories are valid and useful in general, whenever you want to think about motivating normal, healthy human beings. It’s incidental that those researchers were focused on employer/employee relations as an impetus to their work. If they’d been focused on anything else, their conclusions would probably have been pretty much the same.

That said, there is a whole class of people for whom monetary compensation is the holy grail of motivators. They are generally very high-functioning individuals who are in no way pathological. On the surface, however, their preferred rewards appear to be monetary.

Traditionally, observers who don’t share this reward system have indicted these individuals as “greedy.”

I, however, dispute that conclusion. Let me explain why.

When pointing out the rewards that can be called “motivators for editors,” I wrote:

“We did that by pointing out that they belonged to the staff of a highly esteemed publication. We talked about how their writings helped their readers excel at their jobs. We entered their articles in professional competitions with awards for things like ‘Best Technical Article.’ Above all, we talked up the fact that ours was ‘the premier publication in the market.'”

Notice that these rewards, though non-monetary, were more or less measurable. They could be (and, for the individuals they motivated, indeed were) seen as scorecards. The individuals involved had a very clear idea of the value attached to such rewards. A Nobel Prize in Physics is of greater value than, say, a similar award given by Harvard University.

For example, in 1987 I was awarded the “Cahners Editorial Medal of Excellence, Best How-To Article.” That wasn’t half bad. The competition was articles written for a few dozen magazines that were part of the Cahners Publishing Company, which at the time was a big deal in the business-to-business magazine field.

What I considered to be of higher value, however, was the “First Place Award For Editorial Excellence for a Technical Article in a Magazine with Over 80,000 Circulation” I got in 1997 from the American Society of Business Press Editors, where I was competing with a much wider pool of journalists.

Economists have a way of attempting to quantify such non-monetary rewards, called “utility.” They arrive at values by presenting various options and asking the question: “Which would you rather have?”

Of course, measures of utility generally vary widely depending on who’s doing the choosing.

For example, an article in the 19 July issue of The Wall Street Journal described a phenomenon the author seemed to think was surprising: Saudi-Arabian women drivers (new drivers all) showed a preference for muscle cars over more pedestrian models. The author, Margherita Stancati, related an incident where a Porsche salesperson in Riyadh offered a recently minted woman driver an “easy to drive crossover designed to primarily attract women.” The customer demurred. She wanted something “with an engine that roars.”

So, the utility of anything is not an absolute in any sense. It all depends on answering the question: “Utility to whom?”

Everyone is motivated by rewards in the upper half of the Needs Pyramid. If you’re a salesperson, growth in your annual (or other period) sales revenue is in the green Self Esteem block. It’s well and truly in the “motivator” category, and has nothing to do with the Safety and Security “hygiene factor” where others might put it. Successful salespeople have those hygiene factors well-and-truly covered. They’re looking for a reward that tells them they’ve hit a home run. That is likely having a bigger annual bonus than the next guy.

The most obvious money-driven motivators accrue to the folks in the CEO ranks. Jeff Bezos, Elon Musk, and Warren Buffett would have a hard time measuring their success (i.e., hitting the Pavlovian lever to get Self Actualization rewards) without looking at their monetary compensation!

The Pyramid of Needs

Needs Pyramid
The Pyramid of Needs combines Maslow’s and Herzberg’s motivational theories.

18 July 2018 – Long, long ago, in a [place] far, far away. …

When I was Chief Editor at business-to-business magazine Test & Measurement World, I had a long, friendly though heated, discussion with one of our advertising-sales managers. He suggested making the compensation we paid our editorial staff contingent on total advertising sales. He pointed out that what everyone came to work for was to get paid, and that tying their pay to how well the magazine was doing financially would give them an incentive to make decisions that would help advertising sales, and advance the magazine’s financial success.

He thought it was a great idea, but I disagreed completely. I pointed out that, though revenue sharing was exactly the right way to compensate the salespeople he worked with, it was exactly the wrong way to compensate creative people, like writers and journalists.

Why it was a good idea for his salespeople I’ll leave for another column. Today, I’m interested in why it was not a good idea for my editors.

In the heat of the discussion I didn’t do a deep dive into the reasons for taking my position. Decades later, from the standpoint of a semi-retired whatever-you-call-my-patchwork-career, I can now sit back and analyze in some detail the considerations that led me to my conclusion, which I still think was correct.

We’ll start out with Maslow’s Hierarchy of Needs.

In 1943, Abraham Maslow proposed that healthy human beings have a certain number of needs, and that these needs are arranged in a hierarchy. At the top is “self actualization,” which boils down to a need for creativity. It’s the need to do something that’s never been done before in one’s own individual way. At the bottom is the simple need for physical survival. In between are three more identified needs people also seek to satisfy.

Maslow pointed out that people seek to satisfy these needs from the bottom to the top. For example, nobody worries about security arrangements at their gated community (second level) while having a heart attack that threatens their survival (bottom level).

Overlaid on Maslow’s hierarchy is Frederick Herzberg’s Two-Factor Theory, which he published in his 1959 book The Motivation to Work. Herzberg’s theory divides Maslow’s hierarchy into two sections. The lower section is best described as “hygiene factors.” They are also known as “dissatisfiers” or “demotivators” because if they’re not met folks get cranky.

Basically, a person needs to have their hygiene factors covered in order to have a basic level of satisfaction in life. Leaving any of these needs unsatisfied makes them miserable. Having them satisfied doesn’t motivate them at all. It makes ’em fat, dumb and happy.

The upper-level needs are called “motivators.” Striving to meet them drives an individual to work harder, smarter, and more creatively. It energizes them.

My position in the argument with my ad-sales friend was that providing revenue sharing worked at the “Safety and Security” level. Editors were (at least in my organization) paid enough that they didn’t have to worry about feeding their kids and covering their bills. They were talented people with a choice of whom they worked for. If they weren’t already being paid enough, they’d have been forced to go work for somebody else.

Creative people, my argument went, are motivated by non-monetary rewards. They work at the upper “motivator” levels. They’ve already got their physical needs covered, so to motivate them we have to offer rewards in the “motivator” realm.

We did that by pointing out that they belonged to the staff of a highly esteemed publication. We talked about how their writings helped their readers excel at their jobs. We entered their articles in professional competitions with awards for things like “Best Technical Article.” Above all, we talked up the fact that ours was “the premier publication in the market.”

These were all non-monetary rewards to motivate people who already had their basic needs (the hygiene factors) covered.

I summarized my compensation theory thusly: “We pay creative people enough so that they don’t have to go do something else.”

That gives them the freedom to do what they would want to do, anyway. The implication is that creative people want to do stuff because it’s something they can do that’s worth doing.

In other words, we don’t pay creative people to work. We pay them to free them up so they can work. Then, we suggest really fun stuff for them to work at.

What does this all mean for society in general?

First of all, if you want there to be a general level of satisfaction within your society, you’d better take care of those hygiene factors for everybody!

That doesn’t mean the top 1%. It doesn’t mean the top 80%, either. Or, the top 90%. It means everybody!

If you’ve got 99% of everybody covered, that still leaves a whole lot of people who think they’re getting a raw deal. Remember that in the U.S.A. there are roughly 300 million people. If you’ve left 1% feeling ripped off, that’s 3 million potential revolutionaries. Three million people can cause a lot of havoc if motivated.

Remember, at the height of the 1960s Hippie movement, there were, according to the most generous estimates, only about 100,000 hipsters wandering around. Those hundred thousand activists made a huge change in society in a very short period of time.

Okay. If you want people invested in the status quo of society, make sure everyone has all their hygiene factors covered. If you want to know how to do that, ask Bernie Sanders.

Assuming you’ve got everybody’s hygiene factors covered, does that mean they’re all fat, dumb, and happy? Do you end up with a nation of goofballs with no motivation to do anything?

Nope!

Remember those needs Herzberg identified as “motivators” in the upper part of Maslow’s pyramid?

The hygiene factors come into play only when they’re not met. The day they’re met, people stop thinking about who’ll be first against the wall when the revolution comes. Folks become fat, dumb and happy, and stay that way for about an afternoon. Maybe an afternoon and an evening if there’s a good ballgame on.

The next morning they start thinking: “So, what can we screw with next?”

What they’re going to screw with next is anything and everything they damn well please. Some will want to fly to the Moon. Some will want to outdo Michelangelo’s frescoes on the ceiling of the Sistine Chapel. They’re all going to look at what they think was the greatest stuff from the past, and try to think of ways to do better, and to do it in their own way.

That’s the whole point of “self actualization.”

The Renaissance didn’t happen because everybody was broke. It happened because they were already fat, dumb and happy, and looking for something to screw with next.

POTUS and the Peter Principle

Will Rogers & Wiley Post
In 1927, Will Rogers wrote: “I never met a man I didn’t like.” Here he is (on left) posing with aviator Wiley Post before their ill-fated flying exploration of Alaska. Everett Historical/Shutterstock

11 July 2018 – Please bear with me while I, once again, invert the standard news-story pyramid by presenting a great whacking pile of (hopefully entertaining) detail that leads eventually to the point of this column. If you’re too impatient to read it to the end, leave now to check out the latest POTUS rant on Twitter.

Unlike Will Rogers, who famously wrote, “I never met a man I didn’t like,” I’ve run across a whole slew of folks I didn’t like, to the point of being marginally misanthropic.

I’ve made friends with all kinds of people, from murderers to millionaires, but there are a few types that I just can’t abide. Top of that list are people who think they’re smarter than everybody else, and want you to acknowledge it.

I’m telling you this because I’m trying to be honest about why I’ve never been able to abide two recent Presidents: William Jefferson Clinton (#42) and Donald J. Trump (#45). Having been forced to observe their antics over an extended period, I’m pleased to report that they’ve both proved to be among the most corrupt individuals to occupy the Oval Office in recent memory.

I dislike them because they both show that same, smarmy self-satisfied smile when contemplating their own greatness.

Tricky Dick Nixon (#37) was also a world-class scumbag, but he never triggered the same automatic revulsion. That’s because, instead of always looking self-satisfied, he always looked scared. He was smart enough to recognize that he was walking a tightrope and that, if he stayed on it long enough, he eventually would fall off.

And, he did.

I had no reason for disliking #37 until the mid-1960s, when, as a college freshman, I researched a paper for a history class that happened to involve digging into the McCarthy hearings of the early 1950s. Seeing the future #37’s activities in that period helped me form an extremely unflattering picture of his character, which a decade later proved accurate.

During those years in between I had some knock-down, drag-out arguments with my rabid-Nixon-fan grandmother. I hope I had the self control never to have said “I told you so” after Nixon’s fall. She was a nice lady and a wonderful grandma, and wouldn’t have deserved it.

As Abraham Lincoln (#16) famously said: “You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.”

Since #45 came on my radar many decades ago, I’ve been trying to figure out what, exactly, is wrong with his brain. At first, when he was a real-estate developer, I just figured he had bad taste and was infantile. That made him easy to dismiss, so I did just that.

Later, he became a reality-TV star. His show, The Apprentice, made it instantly clear that he knew absolutely nothing about running a business.

No wonder his companies went bankrupt. Again, and again, and again….

I’ve known scads of corporate CEOs over the years. During the quarter century I spent covering the testing business as a journalist, I got to spend time with most of the corporate leaders of the world’s major electronics manufacturing companies. Unsurprisingly, the successful ones followed the best practices that I learned in MBA school.

Some of the CEOs I got to know were goofballs. Most, however, were absolutely brilliant. The successful ones all had certain things in common.

Chief among the characteristics of successful corporate executives is that they make the people around them happy to work for them. They make others feel comfortable, empowered, and enthusiastically willing to cooperate to make the CEO’s vision manifest.

Even Commendatore Ferrari, who I’ve heard was Hell to work for and Machiavellian in interpersonal relationships, made underlings glad to have known him. I’ve noticed that ‘most everybody who’s ever worked for Ferrari has become a Ferrari fan for life.

As far as I can determine, nobody ever sued him.

That’s not the impression I got of Donald Trump, the corporate CEO. He seemed to revel in conflict, making those around him feel like dog pooh.

Apparently, everyone who’s ever dealt with him has wanted to sue him.

That worked out fine, however, for Donald Trump, the reality-TV star. So-called “reality” TV shows generally survive by presenting conflict. The more conflict the better. Everybody always seems to be fighting with everybody else, and the winners appear to be those who consistently bully their opponents into feeling like dog pooh.

I see a pattern here.

The inescapable conclusion is that Donald Trump was never a successful corporate executive, but succeeded enormously playing one on TV.

Another characteristic I should mention of reality TV shows is that they’re unscripted. The idea seems to be that nobody knows what’s going to happen next, including the cast.

That relieves reality-TV stars of the need to learn lines. Actual movie stars and stage actors have to learn lines of dialog, and their stories are tightly scripted to conform to Aristotle’s recommendations for how to write a successful plot.

Having written a handful of traditional motion-picture scripts as well as having produced a few reality-TV episodes, I know the difference. Following Aristotle’s dicta gives you the ability to communicate, and sometimes even teach, something to your audience. The formula reality-TV show, on the other hand, goes nowhere. Everybody (including the audience) ends up exactly where they started, ready to start the same stupid arguments over and over again ad nauseam.

Apparently, reality-TV audiences don’t want to actually learn anything. They’re more focused on ranting and raving.

Later on, following a long tradition among theater, film and TV stars, #45 became a politician.

At first, I listened to what he said. That led me to think he was a Nazi demagogue. Then, I thought maybe he was some kind of petty tyrant, like Mussolini. (I never considered him competent enough to match Hitler.)

Eventually, I realized that it never makes any sense to listen to what #45 says because he lies. That makes anything he says irrelevant.

FIRST PRINCIPLE: If you catch somebody lying to you, stop believing what they say.

So, it’s all bullshit. You can’t draw any conclusion from it. If he says something obviously racist (for example), you can’t conclude that he’s a racist. If he says something that sounds stupid, you can’t conclude he’s stupid, either. It just means he’s said something that sounds stupid.

Piling up this whole load of B.S., then applying Occam’s Razor, leads to the conclusion that #45 is still simply a reality-TV star. His current TV show is titled The Trump Administration. Its supporting characters are U.S. senators and representatives, executive-branch bureaucrats, news-media personalities, and foreign “dignitaries.” Some in that last category (such as Justin Trudeau and Emmanuel Macron) are reluctant conscripts into the cast, and some (such as Vladimir Putin and Kim Jong-un) gleefully play their parts, but all are bit players in #45’s reality TV show.

Oh, yeah. The largest group of bit players in The Trump Administration is every man, woman, child and jackass on the planet. All are, in true reality-TV style, going exactly nowhere as long as the show lasts.

Politicians have always been showmen. Of the Founding Fathers, the one who stands out for never coming close to becoming President was Benjamin Franklin. Franklin was a lot of things, and did a lot of things extremely well. But, he was never really a P.T.-Barnum-like showman.

Really successful politicians, such as Abraham Lincoln, Franklin Roosevelt (#32), Bill Clinton, and Ronald Reagan (#40) were showmen. They could wow the heck out of an audience. They could also remember their lines!

That brings us, as promised, to Donald Trump and the Peter Principle.

Recognizing the close relationship between Presidential success and showmanship gives some idea about why #45 is having so much trouble making a go of being President.

Before I dig into that, however, I need to point out a few things that #45 likes to claim as successes that actually aren’t:

  • The 2016 election was not really a win for Donald Trump. Hillary Clinton was such an unpopular candidate that she decisively lost on her own (de)merits. God knows why she was ever the Democratic Party candidate at all. Anybody could have beaten her. If Donald Trump hadn’t been available, Elmer Fudd could have won!
  • The current economic expansion has absolutely nothing to do with Trump policies. I predicted it back in 2009, long before anybody (with the possible exception of Vladimir Putin, who apparently engineered it) thought Trump had a chance of winning the Presidency. My prediction was based on applying chaos theory to historical data. It was simply time for an economic expansion. The only effect Trump can have on the economy is to screw it up. Being trained as an economist (You did know that, didn’t you?), #45 is unlikely to screw up so badly that he derails the expansion.
  • While #45 likes to claim a win on North Korean denuclearization, the Nobel Peace Prize is on hold while evidence piles up that Kim Jong-un was pulling the wool over Trump’s eyes at the summit.

Finally, we move on to the Peter Principle.

In 1969 Canadian writer Raymond Hull co-wrote a satirical book entitled The Peter Principle with Laurence J. Peter. It was based on research Peter had done on organizational behavior.

Peter (who died at age 70 in 1990) was not a management consultant or a behavioral psychologist. He was an Associate Professor of Education at the University of Southern California, as well as Director of the Evelyn Frieden Centre for Prescriptive Teaching at USC and Coordinator of Programs for Emotionally Disturbed Children.

The Peter principle states: “In a hierarchy every employee tends to rise to his level of incompetence.”

To the horror of corporate managers, the book went on to provide real examples and lucid explanations demonstrating the principle’s validity. It works as satire only because it leaves the reader with a choice either to laugh or to cry.

See last week’s discussion of why academic literature is exactly the wrong form with which to explore really tough philosophical questions in an innovative way.

Let’s be clear: I’m convinced that the Peter principle is God’s Own Truth! I’ve seen dozens of examples that confirm it, and no counter examples.

It’s another proof that Mommy Nature has a sense of humor. Anyone who disputes that has, philosophically speaking, a piece of paper taped to the back of his (or her) shirt with the words “Kick Me!” written on it.

A quick perusal of the Wikipedia entry on the Peter Principle elucidates: “An employee is promoted based on their success in previous jobs until they reach a level at which they are no longer competent, as skills in one job do not necessarily translate to another. … If the promoted person lacks the skills required for their new role, then they will be incompetent at their new level, and so they will not be promoted again.”

I leave it as an exercise for the reader (and the media) to find the numerous examples where #45, as a successful reality-TV star, has the skills he needed to be promoted to President, but not those needed to be competent in the job.

Death Logs Out

Death Logs Out Cover
E.J. Simon’s Death Logs Out (Endeavour Press) is the third in the Michael Nicholas series.

4 July 2018 – If you want to explore any of the really tough philosophical questions in an innovative way, the best literary forms to use are fantasy and science fiction. For example, when I decided to attack the nature of reality, I did it in a surrealist-fantasy novelette entitled Lilith.

If your question involves some aspect of technology, such as the nature of consciousness from an artificial-intelligence (AI) viewpoint, you want to dive into the science-fiction genre. That’s what sci-fi great Robert A. Heinlein did throughout his career to explore everything from space travel to genetically engineered humans. My whole Red McKenna series is devoted mainly to how you can use (and mis-use) robotics.

When E.J. Simon selected grounded sci-fi for his Michael Nicholas series, he most certainly made the right choice. Grounded sci-fi is the sub-genre where the author limits him- (or her-) self to what is at least theoretically possible using current technology, or immediate extensions thereof. No warp drives, wormholes or anti-grav boots allowed!

In this case, we’re talking about imaginative development of artificial intelligence and squeezing a great whacking pile of supercomputing power into a very small package to create something that can best be described as chilling: the conquest of death.

The great thing about genre fiction, such as fantasy and sci-fi, is the freedom provided by the ol’ “willing suspension of disbelief.” If you went at this subject in a scholarly journal, you’d never get anything published. You’d have to prove you could do it before anybody’d listen.

I touched on this effect in the last chapter of Lilith when looking at my own past reaction to “scholarly” manuscripts shown to me by folks who forgot this important fact.

“Their ideas looked like the fevered imaginings of raving lunatics,” I said.

I went on to explain why I’d chosen the form I’d chosen for Lilith thusly: “If I write it up like a surrealist novel, folks wouldn’t think I believed it was God’s Own Truth. It’s all imagination, so using the literary technique of ‘willing suspension of disbelief’ lets me get away with presenting it without being a raving lunatic.”

Another advantage of picking genre fiction is that it affords the ability to keep readers’ attention while filling their heads with ideas that would leave them cross-eyed if simply presented straight. The technical details presented in the Michael Nicholas series could, theoretically, be packed into a PowerPoint presentation with something like fifteen slides. Well, maybe twenty-five.

But, you wouldn’t be able to get the point across. People would start squirming in their seats around slide three. What Simon’s trying to tell us takes time to absorb. Readers have to make the mental connections before the penny will drop. Above all, they have to see it in action, and that’s just what embedding it in a mystery-adventure story does. Following the mental machinations of “real” characters as they try to put the pieces together helps Simon’s audience fit them together in their own minds.

Spoiler Alert: Everybody in Death Logs Out lives except bad guys, and those who were already dead to begin with. Well, with one exception: a supporting character who’s probably a good guy gets well-and-truly snuffed. You’ll have to read the book to find out who.

Oh, yeah. There are unreconstructed Nazis! That’s always fun! Love having unreconstructed Nazis to hate!

I guess I should say a little about the problem that drives the plot. What good is a book review if it doesn’t say anything about what drives the plot?

Our hero, Michael, was the fair-haired boy of his family. He grew up to be a highly successful plain-vanilla finance geek. He married a beautiful trophy wife with whom he lives in suburban Connecticut. Michael’s daughter, Sofia, is away attending an upscale university in South Carolina.

Michael’s biggest problem is overwork. With his wife’s grudging acquiescence, he’d taken over his black-sheep big brother Alex’s organized-crime empire after Alex’s murder two years earlier.

And, you thought Thomas Crown (The Thomas Crown Affair, 1968 and 1999) was a multitasker! Michael makes Crown look single-minded. No wonder he’s getting frazzled!

But, Michael was holding it all together until one night when he was awakened by a telephone call from an old flame, whom he’d briefly employed as a bodyguard before realizing that she was a raving homicidal lunatic.

“I have your daughter,” Sindy Steele said over the phone.

Now, the obviously made-up first name “Sindy” should have warned Michael that Ms. Steele wasn’t playing with a full deck even before he got involved with her, but, at the time, the head with the brains wasn’t the head doing his thinking. She was, shall we say, “toothsome.”

Turns out that Sindy had dropped off her meds, then traveled all the way from her “retirement” villa in Santorini, Greece, on an ill-advised quest to get back at Michael for dumping her.

But, that wasn’t Sofia’s worst problem. When she was nabbed, Sofia was in the midst of a call on her mobile phone from her dead uncle Alex, belatedly warning her of the danger!

While talking on the phone with her long-dead uncle confused poor Sofia, Michael knew just what was going on. For two years, he’d been having regular daily “face time” with Alex through cyberspace as he took over Alex’s syndicate. Mortophobic Alex had used his ill-gotten wealth to cheat death by uploading himself to the Web.

Now, Alex and Michael have to get Sofia back, then figure out who’s coming after Michael to steal the technology Alex had used to cheat death.

This is certainly not the first time someone has used “uploading your soul to the Web” as a plot device. Perhaps most notably, Robert Longo cast Barbara Sukowa as a cyberloaded fairy godmother trying to watch over Keanu Reeves’s character in the 1995 film Johnny Mnemonic. In Longo’s futuristic film, the technique was so common that the ghost had legal citizenship!

In the 1995 film, however, Longo glossed over how the ghost in the machine was supposed to work, technically. Johnny Mnemonic was early enough that it was futuristic sci-fi, as was Geoff Murphy’s even earlier soul-transference work Freejack (1992). Nobody in the early 1990s had heard of the supercomputing cloud, and email was high-tech. The technology for doing soul transference was as far in the imagined future as space travel was to Heinlein when he started writing about it in the 1930s.

Fast forward to the late 2010s. This stuff is no longer in the remote future. It’s in the near future. In fact, there’s very little technology left to develop before Simon’s version becomes possible. It’s what we in the test-equipment-development game used to call “specsmanship.” No technical breakthroughs needed, just advancements in “faster, wider, deeper” specifications.

That’s what makes the Michael Nicholas series grounded sci-fi! Simon has to imagine how today’s much-more-defined cloud infrastructure might both empower and limit cyberspook Alex. He also points out that what enables the phenomenon is software (as in artificial intelligence), not hardware.

Okay, I do have some bones to pick with Simon’s text. Mainly, I’m a big Strunk and White (Elements of Style) guy. Simon’s a bit cavalier about paragraphing, especially around dialog. His use of quotation marks is also a bit sloppy.

But, not so bad that it interferes with following the story.

Standard English is standardized for a reason: it makes getting ideas from the author’s head into the reader’s sooo much easier!

James Joyce needed a dummy slap! His Ulysses has rightly been called “the most difficult book to read in the English language.” It was like he couldn’t afford to buy a typewriter with a quotation key.

Enough ranting about James Joyce!

Simon’s work is MUCH better! There are only a few times I had to drop out of Death Logs Out’s world to ask, “What the heck is he trying to say?” That’s a rarity in today’s world of amateurishly edited indie novels. Simon’s story always pulled me right back into its world to find out what happens next.