POTUS and the Peter Principle

Will Rogers & Wiley Post
In 1927, Will Rogers wrote: “I never met a man I didn’t like.” Here he is (on left) posing with aviator Wiley Post before their ill-fated flying exploration of Alaska. Everett Historical/Shutterstock

11 July 2018 – Please bear with me while I, once again, invert the standard news-story pyramid by presenting a great whacking pile of (hopefully entertaining) detail that leads eventually to the point of this column. If you’re too impatient to read it to the end, leave now to check out the latest POTUS rant on Twitter.

Unlike Will Rogers, who famously wrote, “I never met a man I didn’t like,” I’ve run across a whole slew of folks I didn’t like, to the point of being marginally misanthropic.

I’ve made friends with all kinds of people, from murderers to millionaires, but there are a few types that I just can’t abide. Top of that list is people who think they’re smarter than everybody else and want you to acknowledge it.

I’m telling you this because I’m trying to be honest about why I’ve never been able to abide two recent Presidents: William Jefferson Clinton (#42) and Donald J. Trump (#45). Having been forced to observe their antics over an extended period, I’m pleased to report that they’ve both proved to be among the most corrupt individuals to occupy the Oval Office in recent memory.

I dislike them because they both show that same smarmy, self-satisfied smile when contemplating their own greatness.

Tricky Dick Nixon (#37) was also a world-class scumbag, but he never triggered the same automatic revulsion. That is because, instead of always looking self-satisfied, he always looked scared. He was smart enough to recognize that he was walking a tightrope and that, if he stayed on it long enough, he eventually would fall off.

And, he did.

I had no reason for disliking #37 until the mid-1960s, when, as a college freshman, I researched a paper for a history class that happened to involve digging into the McCarthy hearings of the early 1950s. Seeing the future #37’s activities in that period helped me form an extremely unflattering picture of his character, which a decade later proved accurate.

During those years in between I had some knock-down, drag-out arguments with my rabid-Nixon-fan grandmother. I hope I had the self-control never to have said “I told you so” after Nixon’s fall. She was a nice lady and a wonderful grandma, and wouldn’t have deserved it.

As Abraham Lincoln (#16) famously said: “You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.”

Since #45 came on my radar many decades ago, I’ve been trying to figure out what, exactly, is wrong with his brain. At first, when he was a real-estate developer, I just figured he had bad taste and was infantile. That made him easy to dismiss, so I did just that.

Later, he became a reality-TV star. His show, The Apprentice, made it instantly clear that he knew absolutely nothing about running a business.

No wonder his companies went bankrupt. Again, and again, and again….

I’ve known scads of corporate CEOs over the years. During the quarter century I spent covering the testing business as a journalist, I got to spend time with most of the corporate leaders of the world’s major electronics manufacturing companies. Unsurprisingly, the successful ones followed the best practices that I learned in MBA school.

Some of the CEOs I got to know were goofballs. Most, however, were absolutely brilliant. The successful ones all had certain things in common.

Chief among the characteristics of successful corporate executives is that they make the people around them happy to work for them. They make others feel comfortable, empowered, and enthusiastically willing to cooperate to make the CEO’s vision manifest.

Even Commendatore Ferrari, who I’ve heard was Hell to work for and Machiavellian in interpersonal relationships, made underlings glad to have known him. I’ve noticed that ‘most everybody who’s ever worked for Ferrari has become a Ferrari fan for life.

As far as I can determine, nobody ever sued him.

That’s not the impression I got of Donald Trump, the corporate CEO. He seemed to revel in conflict, making those around him feel like dog pooh.

Apparently, everyone who’s ever dealt with him has wanted to sue him.

That worked out fine, however, for Donald Trump, the reality-TV star. So-called “reality” TV shows generally survive by presenting conflict. The more conflict the better. Everybody always seems to be fighting with everybody else, and the winners appear to be those who consistently bully their opponents into feeling like dog pooh.

I see a pattern here.

The inescapable conclusion is that Donald Trump was never a successful corporate executive, but succeeded enormously playing one on TV.

Another characteristic of reality-TV shows I should mention is that they’re unscripted. The idea seems to be that nobody knows what’s going to happen next, including the cast.

That relieves reality-TV stars of the necessity of learning lines. Actual movie stars and stage actors have to learn lines of dialog. Stories are tightly scripted so that they conform to Aristotle’s recommendations for how to write a successful plot.

Having written a handful of traditional motion-picture scripts as well as having produced a few reality-TV episodes, I know the difference. Following Aristotle’s dicta gives you the ability to communicate, and sometimes even teach, something to your audience. The formula reality-TV show, on the other hand, goes nowhere. Everybody (including the audience) ends up exactly where they started, ready to start the same stupid arguments over and over again ad nauseam.

Apparently, reality-TV audiences don’t want to actually learn anything. They’re more focused on ranting and raving.

Later on, following a long tradition among theater, film and TV stars, #45 became a politician.

At first, I listened to what he said. That led me to think he was a Nazi demagogue. Then, I thought maybe he was some kind of petty tyrant, like Mussolini. (I never considered him competent enough to match Hitler.)

Eventually, I realized that it never makes any sense to listen to what #45 says because he lies. That makes anything he says irrelevant.

FIRST PRINCIPLE: If you catch somebody lying to you, stop believing what they say.

So, it’s all bullshit. You can’t draw any conclusion from it. If he says something obviously racist (for example), you can’t conclude that he’s a racist. If he says something that sounds stupid, you can’t conclude he’s stupid, either. It just means he’s said something that sounds stupid.

Piling up this whole load of B.S., then applying Occam’s Razor, leads to the conclusion that #45 is still simply a reality-TV star. His current TV show is titled The Trump Administration. Its supporting characters are U.S. senators and representatives, executive-branch bureaucrats, news-media personalities, and foreign “dignitaries.” Some in that last category (such as Justin Trudeau and Emmanuel Macron) are reluctant conscripts into the cast, and some (such as Vladimir Putin and Kim Jong-un) gleefully play their parts, but all are bit players in #45’s reality TV show.

Oh, yeah. The largest group of bit players in The Trump Administration is every man, woman, child and jackass on the planet. All are, in true reality-TV style, going exactly nowhere as long as the show lasts.

Politicians have always been showmen. Of the Founding Fathers, the one who stands out for never coming close to becoming President was Benjamin Franklin. Franklin was a lot of things, and did a lot of things extremely well. But, he was never really a P.T.-Barnum-like showman.

Really successful politicians, such as Abraham Lincoln, Franklin Roosevelt (#32), Bill Clinton, and Ronald Reagan (#40) were showmen. They could wow the heck out of an audience. They could also remember their lines!

That brings us, as promised, to Donald Trump and the Peter Principle.

Recognizing the close relationship between Presidential success and showmanship gives some idea about why #45 is having so much trouble making a go of being President.

Before I dig into that, however, I need to point out a few things that #45 likes to claim as successes that actually aren’t:

  • The 2016 election was not really a win for Donald Trump. Hillary Clinton was such an unpopular candidate that she decisively lost on her own (de)merits. God knows why she was ever the Democratic Party candidate at all. Anybody could have beaten her. If Donald Trump hadn’t been available, Elmer Fudd could have won!
  • The current economic expansion has absolutely nothing to do with Trump policies. I predicted it back in 2009, long before anybody (with the possible exception of Vladimir Putin, who apparently engineered it) thought Trump had a chance of winning the Presidency. My prediction was based on applying chaos theory to historical data. It was simply time for an economic expansion. The only effect Trump can have on the economy is to screw it up. Being trained as an economist (You did know that, didn’t you?), #45 is unlikely to screw up so badly that he derails the expansion.
  • While #45 likes to claim a win on North Korean denuclearization, the Nobel Peace Prize is on hold while evidence piles up that Kim Jong-un was pulling the wool over Trump’s eyes at the summit.

Finally, we move on to the Peter Principle.

In 1969 Canadian writer Raymond Hull co-wrote a satirical book entitled The Peter Principle with Laurence J. Peter. It was based on research Peter had done on organizational behavior.

Peter was (he died at age 70 in 1990) not a management consultant or a behavioral psychologist. He was an Associate Professor of Education at the University of Southern California. He was also Director of the Evelyn Frieden Centre for Prescriptive Teaching at USC, and Coordinator of Programs for Emotionally Disturbed Children.

The Peter principle states: “In a hierarchy every employee tends to rise to his level of incompetence.”

To the horror of corporate managers, the book went on to provide real examples and lucid explanations to show the principle’s validity. It works as satire only because it leaves the reader with a choice either to laugh or to cry.

See last week’s discussion of why academic literature is exactly the wrong form with which to explore really tough philosophical questions in an innovative way.

Let’s be clear: I’m convinced that the Peter principle is God’s Own Truth! I’ve seen dozens of examples that confirm it, and no counterexamples.

It’s another proof that Mommy Nature has a sense of humor. Anyone who disputes that has, philosophically speaking, a piece of paper taped to the back of his (or her) shirt with the words “Kick Me!” written on it.

A quick perusal of the Wikipedia entry on the Peter Principle elucidates: “An employee is promoted based on their success in previous jobs until they reach a level at which they are no longer competent, as skills in one job do not necessarily translate to another. … If the promoted person lacks the skills required for their new role, then they will be incompetent at their new level, and so they will not be promoted again.”

I leave it as an exercise for the reader (and the media) to find the numerous examples where #45, as a successful reality-TV star, has the skills he needed to be promoted to President, but not those needed to be competent in the job.

Death Logs Out

Death Logs Out Cover
E.J. Simon’s Death Logs Out (Endeavour Press) is the third in the Michael Nicholas series.

4 July 2018 – If you want to explore any of the really tough philosophical questions in an innovative way, the best literary forms to use are fantasy and science fiction. For example, when I decided to attack the nature of reality, I did it in a surrealist-fantasy novelette entitled Lilith.

If your question involves some aspect of technology, such as the nature of consciousness from an artificial-intelligence (AI) viewpoint, you want to dive into the science-fiction genre. That’s what sci-fi great Robert A. Heinlein did throughout his career to explore everything from space travel to genetically engineered humans. My whole Red McKenna series is devoted mainly to how you can use (and mis-use) robotics.

When E.J. Simon selected grounded sci-fi for his Michael Nicholas series, he most certainly made the right choice. Grounded sci-fi is the sub-genre where the author limits him- (or her-) self to what is at least theoretically possible using current technology, or immediate extensions thereof. No warp drives, wormholes or anti-grav boots allowed!

In this case, we’re talking about imaginative development of artificial intelligence and squeezing a great whacking pile of supercomputing power into a very small package to create something that can best be described as chilling: the conquest of death.

The great thing about fiction genres, such as fantasy and sci-fi, is the freedom provided by the ol’ “willing suspension of disbelief.” If you went at this subject in a scholarly journal, you’d never get anything published. You’d have to prove you could do it before anybody’d listen.

I touched on this effect in the last chapter of Lilith when looking at my own past reaction to “scholarly” manuscripts shown to me by folks who forgot this important fact.

“Their ideas looked like the fevered imaginings of raving lunatics,” I said.

I went on to explain why I’d chosen the form I’d chosen for Lilith thusly: “If I write it up like a surrealist novel, folks wouldn’t think I believed it was God’s Own Truth. It’s all imagination, so using the literary technique of ‘willing suspension of disbelief’ lets me get away with presenting it without being a raving lunatic.”

Another advantage of picking a fiction genre is that it affords the ability to keep readers’ attention while filling their heads with ideas that would leave them cross-eyed if simply presented straight. The technical details presented in the Michael Nicholas series could, theoretically, be presented in a PowerPoint presentation with something like fifteen slides. Well, maybe twenty-five.

But, you wouldn’t be able to get the point across. People would start squirming in their seats around slide three. What Simon’s trying to tell us takes time to absorb. Readers have to make the mental connections before the penny will drop. Above all, they have to see it in action, and that’s just what embedding it in a mystery-adventure story does. Following the mental machinations of “real” characters as they try to put the pieces together helps Simon’s audience fit them together in their own minds.

Spoiler Alert: Everybody in Death Logs Out lives except bad guys, and those who were already dead to begin with. Well, with one exception: a supporting character who’s probably a good guy gets well-and-truly snuffed. You’ll have to read the book to find out who.

Oh, yeah. There are unreconstructed Nazis! That’s always fun! Love having unreconstructed Nazis to hate!

I guess I should say a little about the problem that drives the plot. What good is a book review if it doesn’t say anything about what drives the plot?

Our hero, Michael, was the fair-haired boy of his family. He grew up to be a highly successful plain-vanilla finance geek. He married a beautiful trophy wife with whom he lives in suburban Connecticut. Michael’s daughter, Sophia, is away attending an upscale university in South Carolina.

Michael’s biggest problem is overwork. With his wife’s grudging acquiescence, he’d taken over his black-sheep big brother Alex’s organized crime empire after Alex’s murder two years earlier.

And, you thought Thomas Crown (The Thomas Crown Affair, 1968 and 1999) was a multitasker! Michael makes Crown look single-minded. No wonder he’s getting frazzled!

But, Michael was holding it all together until one night when he was awakened by a telephone call from an old flame, whom he’d briefly employed as a bodyguard before realizing that she was a raving homicidal lunatic.

“I have your daughter,” Sindy Steele said over the phone.

Now, the obviously made-up first name “Sindy” should have warned Michael that Ms. Steele wasn’t playing with a full deck even before he got involved with her, but, at the time, the head with the brains wasn’t the head doing his thinking. She was, shall we say, “toothsome.”

Turns out that Sindy had dropped off her meds, then traveled all the way from her “retirement” villa in Santorini, Greece, on an ill-advised quest to get back at Michael for dumping her.

But, that wasn’t Sophia’s worst problem. When she was nabbed, Sophia was in the midst of a call on her mobile phone from her dead uncle Alex, belatedly warning her of the danger!

While talking on the phone with her long-dead uncle confused poor Sophia, Michael knew just what was going on. For two years, he’d been having regular daily “face time” with Alex through cyberspace as he took over Alex’s syndicate. Mortophobic Alex had used his ill-gotten wealth to cheat death by uploading himself to the Web.

Now, Alex and Michael have to get Sophia back, then figure out who’s coming after Michael to steal the technology Alex had used to cheat death.

This is certainly not the first time someone has used “uploading your soul to the Web” as a plot device. Perhaps most notably, Robert Longo cast Barbara Sukowa as a cyberloaded fairy godmother trying to watch over Keanu Reeves’s character in the 1995 film Johnny Mnemonic. In Longo’s futuristic film, the technique was so common that the ghost had legal citizenship!

In the 1995 film, however, Longo glossed over how the ghost in the machine was supposed to work, technically. Johnny Mnemonic was early enough that it was futuristic sci-fi, as was Geoff Murphy’s even earlier soul-transference work Freejack (1992). Nobody in the early 1990s had heard of the supercomputing cloud, and email was high-tech. The technology for doing soul transference was as far in the imagined future as space travel was to Heinlein when he started writing about it in the 1930s.

Fast forward to the late 2010s. This stuff is no longer in the remote future. It’s in the near future. In fact, there’s very little technology left to develop before Simon’s version becomes possible. It’s what we in the test-equipment-development game used to call “specsmanship.” No technical breakthroughs needed, just advancements in “faster, wider, deeper” specifications.

That’s what makes the Michael Nicholas series grounded sci-fi! Simon has to imagine how today’s much-more-defined cloud infrastructure might both empower and limit cyberspook Alex. He also points out that what enables the phenomenon is software (as in artificial intelligence), not hardware.

Okay, I do have some bones to pick with Simon’s text. Mainly, I’m a big Strunk and White (Elements of Style) guy. Simon’s a bit cavalier about paragraphing, especially around dialog. His use of quotation marks is also a bit sloppy.

But, not so bad that it interferes with following the story.

Standard English is standardized for a reason: it makes getting ideas from the author’s head into the reader’s sooo much easier!

James Joyce needed a dummy slap! His Ulysses has rightly been called “the most difficult book to read in the English language.” It was like he couldn’t afford to buy a typewriter with a quotation-mark key.

Enough ranting about James Joyce!

Simon’s work is MUCH better! There are only a few times I had to drop out of Death Logs Out’s world to ask, “What the heck is he trying to say?” That’s a rarity in today’s world of amateurishly edited indie novels. Simon’s story always pulled me right back into its world to find out what happens next.

The Mad Hatter’s Riddle

Raven/Desk
Lewis Carroll’s famous riddle “Why is a raven like a writing desk?” turns out to have a simple solution after all! Shutterstock

27 June 2018 – In 1865 Charles Lutwidge Dodgson, aka Lewis Carroll, published Alice’s Adventures in Wonderland, in which his Mad Hatter character posed the riddle: “Why is a raven like a writing desk?”

Somewhat later in the story Alice gave up trying to guess the riddle and challenged the Mad Hatter to provide the answer. When he couldn’t, nor could anyone else at the story’s tea party, Alice dismissed the whole thing by saying: “I think you could do something better with the time . . . than wasting it in asking riddles that have no answers.”

Since then, it has generally been believed that the riddle has, in actuality, no answer.

Modern Western thought has progressed a lot since the mid-nineteenth century, however. Specifically, two modes of thinking have gained currency that directly lead to solving this riddle: Zen and Surrealism.

I’m not going to try to give even sketchy pictures of Zen or Surrealist doctrine here. There isn’t anywhere near enough space to do either subject justice. I will, however, allude to those parts that bear on solving the Hatter’s riddle.

I’m also not going to credit Dodgson with having surreptitiously known the answer, then hiding it from the World. There is no chance that he could have read Andre Breton‘s The Surrealist Manifesto, which was published twenty-six years after Dodgson’s death. And, I’ve not been able to find a scrap of evidence that the Anglican-deacon Dodgson ever seriously studied Taoism or its better-known offshoot, Zen. I’m firmly convinced that the religiously conservative Dodgson really did pen the riddle as an example of a nonsense question. He seemed fond of nonsense.

No, I’m trying to make the case that in the surreal world of imagination, there is no such thing as nonsense. There is always a viewpoint from which the absurd and seemingly illogical comes into sharp focus as something obvious.

As Obi-Wan Kenobi said in Return of the Jedi: “From a certain point of view.”

Surrealism sought to explore the alternate universe of dreams. From that point of view, Alice is a classic surrealist work. It explicitly recounts a dream Alice had while napping on a summery hillside with her head cradled in her big sister’s lap. The surrealists, reading Alice three quarters of a century later, recognized this link, and acknowledged the mastery with which Dodgson evoked the dream world.

Unlike the mid-nineteenth-century Anglicans, however, the surrealists of the early twentieth century viewed that dream world as having as much, if not more, validity as the waking world of so-called “reality.”

Chinese Taoism informs our thinking through the melding of all forms of reality (along with everything else) into one unified whole. When allied with Indian Buddhism to form the Chinese Ch’an, or Japanese Zen, it provides a method that frees the mind to explore possible answers to, among other things, riddles like the Hatter’s, and find just the right viewpoint where the solution comes into sharp relief. That method relies on the koan: a riddle a master poses to his (or her) students to help guide them along their paths to enlightenment.

Ultimately, the solution to the Hatter’s riddle, as I revealed in my 2016 novella Lilith, is as follows:

Question: Why is a raven like a writing desk?

Answer: They’re both not made of bauxite.

According to Collins English Dictionary – Complete & Unabridged 2012 Digital Edition, bauxite is “a white, red, yellow, or brown amorphous claylike substance comprising aluminium oxides and hydroxides, often with such impurities as iron oxides. It is the chief ore of aluminium and has the general formula: Al₂O₃·nH₂O.”

As a claylike mineral substance, bauxite is clearly exactly the wrong material from which to make a raven. Ravens are complex, highly organized hydrocarbon-based life forms. In its hydrated form, one could form an amazingly lifelike statue of a raven. It wouldn’t, however, even be the right color. Certainly it would never exhibit the behaviors we normally expect of actual, real, live ravens.

Similarly, bauxite could be used to form an amazingly lifelike statue of a writing desk. The bauxite statue of a writing desk might even have a believable color!

Why one would want to produce a statue of a writing desk, instead of making an actual writing desk, is a question outside the scope of this blog posting.

Real writing desks, however, are best made of wood, although other materials, such as steel, fiber-reinforced plastic (FRP), and marble, have been used successfully. What makes wood such a perfect material for writing desks is its mechanically superior composite structure.

Being made of long cellulose fibers held in place by a lignin matrix, wood has wonderful anisotropic mechanical properties. It’s easy to cut and shape with the grain, while providing prodigious yield strength when stressed against the grain. Its amazing toughness when placed under tension or bending loads makes assembling wood into the kind of structure ideal for a writing desk almost too easy.

Try making that out of bauxite!

Alice was unable to divine the answer to the Hatter’s riddle because she “thought over all she could remember about ravens and writing desks.” That is exactly the kind of mistake we might expect a conservative Anglican deacon to make as well.

It is only by using Zen methods of turning the problem inside out and surrealist imagination’s ability to look at it as a question, not of what ravens and writing desks are, but what they are not, that the riddle’s solution becomes obvious.

Why Not Twitter?

Tweety birds
Character limitations mean Twitter messages have room to carry essentially no information. Shutterstock Image

20 June 2018 – I recently received a question: “Do you use Twitter?” The sender was responding positively to a post on this blog. My response was a terse: “I do not use Twitter.”

That question deserved a more extensive response. Well, maybe not “deserved,” since this post has already exceeded the maximum 280 characters allowed in a Twitter message. In fact, not counting the headline, dateline or image caption, it’s already 431 characters long!

That gives you an idea how much information you can cram into 280 characters. Essentially none. That’s why Twitter messages make their composers sound like airheads.

The average word in the English language is six characters long, not counting the spaces. So, to say one word, you need (on average) seven characters. If you’re limited to 280 characters, that means you’re limited to 280/7 = 40 words. A typical posting on this blog is roughly 1,300 words (this posting, by the way, is much shorter). A typical page in a paperback novel contains about 300 words. The first time I agreed to write a book for print, the publisher warned me that the manuscript needed to be at least 80,000 words to be publishable.
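If you like, you can check my arithmetic with a few lines of Python. This is strictly a toy calculation built from the figures quoted above (the six-character average word plus a space, and the 280-character limit); it also shows how many tweets it would take to carry a typical 1,300-word blog post:

```python
# Toy tweet arithmetic, using the averages quoted above.
TWEET_LIMIT = 280        # characters allowed per Twitter message
CHARS_PER_WORD = 6 + 1   # six-character average word, plus one space

words_per_tweet = TWEET_LIMIT // CHARS_PER_WORD
print(words_per_tweet)   # 40 words, as computed above

# How many tweets would a typical 1,300-word blog post need?
post_chars = 1300 * CHARS_PER_WORD
print(round(post_chars / TWEET_LIMIT, 1))   # about 32.5 tweets
```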

When I first started writing for business-to-business magazines, a typical article was around 2,500 words. We figured that was about right if you wanted to teach anybody anything useful. Not long afterward, when I’d (surprisingly quickly) climbed the journalist ranks to Chief Editor, I expressed the goal for any article written in our magazine (the now defunct Test & Measurement World) in the following way:

“Imagine an engineer facing a problem in the morning and not knowing what to do. If, during lunch, that engineer reads an article in our magazine and goes back to work knowing how to solve the problem, then we’ve done our job.”

That takes about 2,500 words. Since then, pressure from advertisers has pushed us to writing shorter articles in the 1,250-word range. Of course, all advertisers really want any article to say is, “BUY OUR STUFF!”

That is NOT what business-to-business readers want articles to say. They want articles that tell them how to solve their problems. You can see who publishers listened to.

Blog postings are, essentially, stand-alone editorials.

From about day one as Chief Editor, I had to write editorials. I’d learned about editorial writing way back in Mrs. Langley’s eighth grade English class. I doubt Mrs. Langley ever knew how much I learned in her class, but it was a lot. Including how to write an editorial.

A successful editorial starts out introducing some problem, then explains little things like why it’s important and what it means to people like the reader, then tells the reader what to do about it. That last bit is what’s called the “Call to Action,” and it’s the most important part, and what everything else is there to motivate.

If your “problem” is easy to explain, you can often get away with an editorial 500 words long. Problems that are more complex or harder to explain take more words. Editorials can often reach 1,500 words.

If it can’t be done in 1,500 words, find a different problem to write your editorial about.

Now, magazine designers generally provide room for 500-1,000 word editorials, and editors generally work hard to stay within that constraint. Novice editors quickly learn that it takes a lot more work to write short than to write long.

Generally, writers start by dumping vast quantities of words into their manuscripts just to get the ideas out there, recorded in all their long-winded glory. Then, they go over that first draft, carefully searching for the most concise way to say what they want to say that still makes sense. Then, they go back and throw out all the ideas that really didn’t add anything to their editorial in the first place. By then, they’ve slashed the word count to close to what it needs to be.

After about five passes through the manuscript, the writer runs out of ways to improve the text, and hands it off to a production editor, who worries about things like grammar and spelling, as well as cramming it into the magazine space available. Then the managing editor does basically the same thing. Then the Chief Editor gets involved, saying “Omygawd, what is this writer trying to tell me?”

Finally, after at least two rounds through this cycle, the article ends up doing its job (telling the readers something worth knowing) in the space available, or it gets “killed.”

“Killed” varies from just a mild “We’ll maybe run it sometime in the future,” to the ultimate “Stake Through The Heart,” which means it’ll never be seen in print.

That’s the process any piece of professional writing goes through. It takes days or weeks to complete, and it guarantees compact, dense, information-packed reading material. And, the shorter the piece, the more work it takes to pack the information in.

Think of cramming ten pounds of bovine fecal material into a five pound bag!

Is that how much work goes into the average Twitter feed?

I don’t think so! The Twitter feeds I’ve seen sound like something written on a bathroom wall. They look like they were dashed off as fast as two fingers can type them, and they make their authors sound like illiterates.

THAT’s why I don’t use Twitter.

This blog posting, by the way, is a total of 5,415 characters long.

What If They Gave a War, But Nobody Noticed

Cyberwar
World War III is being fought in cyberspace right now, but most of us seem to be missing it! Oliver Denker/Shutterstock

13 June 2018 – Ever wonder why Kim Jong Un is so willing to talk about giving up his nuclear arsenal? Sort-of-President Donald Trump (POTUS) seems to think it’s because economic sanctions are driving North Korea (officially the Democratic People’s Republic of Korea, or DPRK) to the financial brink.

That may be true, but it is far from the whole story. As usual, the reality star POTUS is stuck decades behind the times. The real World War III won’t have anything to do with nukes, and it’s started already.

The threat of global warfare using thermonuclear weapons was panic inducing to my father back in the 1950s and 1960s. Strangely, however, my superbrained mother didn’t seem very worried at the time.

By the 1980s, we were beginning to realize what my mother seemed to know instinctively — that global thermonuclear war just wasn’t going to happen. That kind of war leaves such an ungodly mess that no even-marginally-sane person would want to win one. The winners would be worse off than the losers!

The losers would join the gratefully dead, while the winners would have to live in the mess!

That’s why we don’t lose sleep at night knowing that the U.S., Russia, China, India, Pakistan, and, in fact, most countries in the first and second worlds, have access to thermonuclear weapons. We just worry about third-world toilets (to quote Danny DeVito’s character in The Jewel of the Nile) run by paranoid homicidal maniacs getting their hands on the things. Those guys are the only ones crazy enough to ever actually use them!

We only worried about North Korea developing nukes when Kim Jong Un was acting like a total whacko. Since he stopped his nuclear development program (because his nuclear lab accidentally collapsed under a mountain of rubble), it’s begun looking like he was no more insane than the leaders of Leonard Wibberley’s fictional nation-state, the Duchy of Grand Fenwick.

In Wibberley’s 1956 novel The Mouse That Roared, the Duchy’s leaders all breathed a sigh of relief when their captured doomsday weapon, the Q-Bomb, proved to be a dud.

Yes, there is a hilarious movie to be made documenting the North Korean nuclear and missile programs.

Okay, so we’ve disposed of the idea that World War III will be a nuclear holocaust. Does that mean, as so many starry-eyed astrophysicists imagined in the late 1940s, the end of war?

Fat f-ing chance!

The winnable war in the Twenty-First Century is one fought in cyberspace. In fact, it’s going on right now. And, you’re missing it.

Cybersecurity and IT expert Theresa Payton, CEO of Fortalice Solutions, asserts that suspected North Korean hackers have been conducting offensive cyber operations on financial institutions amid discussions between Washington and Pyongyang on a possible nuclear summit between President Trump and Kim Jong Un.

“The U.S. has been able to observe North Korean-linked hackers targeting financial institutions in order to steal money,” she says. “This isn’t North Korea’s first time meddling in serious hacking schemes. This time, it’s likely because the international economic sanctions have hurt them in their wallets and they are desperate and strapped for cash.”

There is a long laundry list of cyberattacks that have been perpetrated against U.S. and European interests, including infrastructure, corporations and individuals.

“One of N. Korea’s best assets … is to flex it’s muscle using it’s elite trained cyber operations,” Payton asserts. “Their cyber weapons can be used to fund their government by stealing money, to torch organizations and governments that offend them (look at Sony hacking), to disrupt our daily lives through targeting critical infrastructure, and more. The Cyber Operations of N. Korea is a powerful tool for the DPRK to show their displeasure at anything and it’s the best bargaining chip that Kim Jong Un has.”

Clearly, DPRK is not the only bad state actor out there. Russia has long been in the news using various cyberwar tactics against the U.S., Europe and others. China has also been blamed for cyberattacks. In fact, cyberwarfare is a cheap, readily available alternative to messy and expensive nuclear weapons for anyone with Internet access (meaning, just about everybody) and wishing to do anybody harm, including us.

“You can take away their Nukes,” Payton points out, “but you will have a hard time dismantling their ability to attack critical infrastructure, businesses and even civilians through cyber operations.”

Programming Notes: I’ve been getting a number of comments on this blog each day, and it looks like we need to set some ground rules. At least, I need to be explicit about things I will accept and things I won’t:

  • First off, remember that this isn’t a social media site. When you make a comment, it doesn’t just spill out into the blog site. Comments are sequestered until I go in and approve or reject them. So far, the number of comments is low enough that I can go through and read each one, but I don’t do it every day. If I did, I’d never get any new posts written! Please be patient.
  • Do not embed URLs to other websites in comments. I’ll strip them out even if I approve your comment otherwise. The reason is that I don’t have time to vet every URL, and I stick to journalistic standards, which means I don’t allow anything in the blog that I can’t verify. There are no exceptions.
  • This is an English language site ONLY. Comments in other languages are immediately deleted. (For why, see above.)
  • Use Standard English written in clear, concise prose. If I have trouble understanding what you’re trying to say, I won’t give your comment any space. If you can’t write a cogent English sentence, take an ESL writing course!

The Case for Free College

College vs. Income
While the need for skilled workers to maintain our technology edge has grown, the cost of training those workers has grown astronomically.

6 June 2018 – We, as a nation, need to extend the present system that provides free, universal education up through high school to cover college to the baccalaureate level.

DISCLOSURE: Teaching is my family business. My father was a teacher. My mother was a teacher. My sister’s first career was as a teacher. My brother-in-law was a teacher. My wife is a teacher. My son is a teacher. My daughter-in-law is a teacher. Most of my aunts and uncles and cousins are or were teachers. I’ve spent a lot of years teaching at the college level, myself. Some would say that I have a conflict of interest when covering developments in the education field. Others might argue that I know whereof I speak.

Since WW II, there has been a growing realization that the best careers go to those with at least a bachelor’s degree in whatever field they choose. Yet, at the same time, society has (perhaps inadvertently, although I’m not naive enough to think there isn’t a lot of blame to go around) erected a monumental barrier to anyone wanting to get an education. Since the mid-1970s, the cost of higher education has vastly outstripped the ability of most people to pay for it.

In 1975, the price of attendance in college was about one fifth of the median family income (see graph above). In 2016, it was over a third. That makes sending kids to college a whole lot harder than it used to be. If your family happens to have less than median household income, that barrier looks even higher, and is getting steeper.

MORE DISCLOSURE: The reason I don’t have a Ph.D. today is that two years into my Aerospace Engineering Ph.D. program, Arizona State University jacked up the tuition beyond my (not inconsiderable at the time) ability to pay.

I’d like everyone in America to consider the following propositions:

  1. A bachelor’s degree is the new high-school diploma;
  2. Having an educated population is a requirement for our technology-based society;
  3. Without education, upward mobility is nearly impossible;
  4. Ergo, it is a requirement for our society to ensure that every citizen capable of getting a college degree gets one.

EVEN MORE DISCLOSURE: Horace Mann, often credited as the Father of Public Education, was born in the same town (Franklin, MA) that I was, and our family charity is a scholarship fund dedicated to his memory.

About Mann’s intellectual progressivism, the historian Ellwood P. Cubberley said: “No one did more than he to establish in the minds of the American people the conception that education should be universal, non-sectarian, free, and that its aims should be social efficiency, civic virtue, and character, rather than mere learning or the advancement of sectarian ends.” (source: Wikipedia)

The Wikipedia article goes on to say: “Arguing that universal public education was the best way to turn unruly American children into disciplined, judicious republican citizens, Mann won widespread approval from modernizers, especially in the Whig Party, for building public schools. Most states adopted a version of the system Mann established in Massachusetts, especially the program for normal schools to train professional teachers.”

That was back in the mid-nineteenth century. At that time, the United States was in the midst of a shift from an agrarian to an industrial economy. We’ve since completed that transition and are now shifting to an information-based economy. In the future, full participation in the workforce will require everyone to have at least a bachelor’s degree.

So, when progressive politicians, like Bernie Sanders, make noises about free universal college education, YOU should listen!

It’s about time we, as a society, owned up to the fact that times have changed a lot since the mid-nineteenth century. At that time, universal free education to about junior high school level was considered enough. Since then, it was extended to high school. It’s time to extend it further to the bachelor’s-degree level.

That doesn’t mean shutting down Ivy League colleges. For those who can afford them, private and for-profit colleges can provide superior educational experiences. But, publicly funded four-year colleges offering tuition-free education to everyone have become a strategic imperative.

Quality vs. Quantity

Custom MC
It used to be that highest quality was synonymous with hand crafting. It’s not no more! Pressmaster/Shutterstock.com

23 May 2018 – Way back in the 1990s, during a lunch conversation with friends involved in the custom motorcycle business, one of my friends voiced the opinion that hand-crafted items, from fine-art paintings to custom motorcycle parts, were worth the often-exorbitant premium prices charged for them for two reasons: individualization and premium quality.

At that time, I disagreed about hand-crafted items exhibiting premium quality.

I had been deeply involved in the electronics test business for over a decade both as an engineer and a journalist. I’d come to realize that, even back then, things had changed drastically from the time when hand crafting could achieve higher product quality than mass production. Things have changed even more since then.

Early machine tools were little more than power-driven hand tools. The ancient Romans, for example, had hydraulically powered trip hammers, but they were just regular hammers mounted with a pivot at the end of the handle and a power-driven cam that lifted the head, then let it fall to strike an anvil. If you wanted something hammered, you laid it atop the anvil and waited for the hammer to fall on it. What made the exercise worthwhile was the scale achievable for these machines. They were much larger than could be wielded by puny human slaves.

The most revolutionary part of the Industrial Revolution was invention of many purpose-built precision machine tools that could crank out interchangeable parts.

Most people don’t appreciate that, previously, nuts and bolts were made in mating pairs. That is, this bolt was made to match this nut, and the threads on that other nut/bolt pair wouldn’t quite match up, because all the threads were filed by hand. It just wasn’t possible to carve threads with enough precision.

Precision machinery capable of repeating the same operation to produce the same result time after time solved that little problem, and made interchangeable parts possible.

Statistical Process Control

Fast forward to the twentieth century, when Walter A. Shewhart applied statistical methods to quality management. Basically, Shewhart showed that measurements of significant features of mass-produced anything fell into a bell-shaped curve, with each part showing some more-or-less small variation from some nominal value. More precise manufacturing processes led to tighter bell curves where variations from the nominal value tended to be smaller. That’s what makes manufacturing interchangeable parts by automated machine tools possible.

Bell Curve
Bell curve distribution of measurement results. Peter Hermes Furian/Shutterstock.com

Before Shewhart, we knew making interchangeable parts was possible, but didn’t fully understand why it was possible.
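If you want to watch Shewhart’s bell curve emerge for yourself, here’s a toy simulation in Python. The nominal diameter and process spread are numbers I invented purely for illustration; the point is only that measurements of a repeatable process cluster around the nominal value just as Shewhart described:

```python
import random

# Toy Shewhart-style experiment: "measure" 100,000 mass-produced parts.
# The nominal value and spread below are invented for illustration.
NOMINAL = 10.000   # mm, hypothetical design value
SIGMA = 0.010      # mm, hypothetical process spread

parts = [random.gauss(NOMINAL, SIGMA) for _ in range(100_000)]

mean = sum(parts) / len(parts)
std_dev = (sum((x - mean) ** 2 for x in parts) / len(parts)) ** 0.5
print(f"mean = {mean:.4f} mm, standard deviation = {std_dev:.4f} mm")
# A histogram of `parts` traces out the bell curve shown above.
```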

If you’re hand crafting components for, say, a motorcycle, you’re going to carefully make each part, testing frequently to make sure it fits together with all the other parts. Your time goes into carefully and incrementally honing the part’s shape to gradually bring it into a perfect fit. That’s what gave hand crafting the reputation for high quality.

In this cut-and-try method of fabrication, achieving a nominal value for each dimension becomes secondary to “does it fit.” The final quality depends on your motor skills, patience, and willingness to throw out anything that becomes unsalvageable. Each individual part becomes, well, individual. They are not interchangeable.

If, on the other hand, you’re cranking out kazillions of supposedly interchangeable parts in an automated manufacturing process, you blast parts out as fast as you can, then inspect them later. Since the parts are supposed to be interchangeable, whether they fit together is a matter of whether the variation (from the nominal value) of this particular part is small enough so that it is still guaranteed to fit with all the other parts.

If it’s too far off, it’s junk. If it’s close enough, it’s fine. The dividing line between “okay” and “junk” is called the “tolerance.”

Now, the thing about tolerance is that it’s somewhat flexible. You CAN improve the yield (the fraction of parts that fall inside the tolerance band) by simply stretching out the tolerance band. That lets more of your kazillion mass-produced parts into the “okay” club.

Of course, you have to fiddle with the nominal values of all the other parts to make room for the wider variations you want to accept. It’s not hard. Any engineer knows how to do it.

However, when you start fiddling with nominal values to accommodate wider tolerances, the final product starts looking sloppy. That is, after all, what “sloppy” means.
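The trade-off is easy to put numbers on. Sticking with the invented process spread from the sketch above, the normal distribution tells you exactly how much yield you buy by stretching the tolerance band:

```python
import math

# Yield vs. tolerance for the toy process above. SIGMA is still the
# invented 0.010 mm spread; the fraction of parts inside a symmetric
# tolerance band comes straight from the normal distribution.
SIGMA = 0.010   # mm

def yield_fraction(tolerance_mm: float) -> float:
    """Fraction of parts falling within +/- tolerance_mm of nominal."""
    return math.erf(tolerance_mm / (SIGMA * math.sqrt(2)))

for tol in (0.010, 0.020, 0.030):   # one-, two-, and three-sigma bands
    print(f"+/-{tol:.3f} mm -> {yield_fraction(tol):.1%} yield")
# Roughly 68.3%, 95.4%, and 99.7%: more parts in the "okay" club,
# bought at the price of sloppier fits.
```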

By the 1980s, engineers had figured out that if they insisted on using automated manufacturing equipment to achieve the best possible consistency, they could then focus on reducing those pesky variations (improving precision). Eventually, improved machine precision made it possible to squeeze tolerances and remove sloppiness (improving perceived quality).

By the 1990s, automated manufacturing processes had achieved quality that was far beyond what hand-crafted processes could match. That’s why I had to disagree with my friend who said that mass-manufactured stuff sacrificed quality for quantity.

In fact, Shewhart’s “statistical process control” made it possible to leverage manufacturing quantity to improve quality.

Product Individualization

That, however, left hand-crafting’s only remaining advantage to be individualization. You are, after all, making one unique item.

Hand crafting requires a lot of work by people who’ve spent a long time honing their skills. To be economically viable, it’s got to show some advantage that will allow its products to command a premium price. So, the fact that hand-crafting’s only advantage is its ability to achieve a high degree of product individualization matters!

I once heard an oxymoronic comment: “I want to be different, like everybody else.”

That silly comment actually has hidden layers of meaning.

Of course, if everybody is different, what are they different from? If there’s no normal (equivalent to the nominal value in manufacturing test results), how can you define a difference (variation) from normal?

Another layer of meaning in the statement is its implicit acknowledgment that everyone wants to be different. We all want to feel special. There seems to be a basic drive among humans to be unique. It probably stems from a desire to be valued by those around us so they might take special care to help ensure our individual survival.

That would confer an obvious evolutionary advantage.

One of the ways we can show our uniqueness is to have stuff that shows individualization. I want my stuff to be different from your stuff. That’s why, for example, women don’t want to see other women wearing dresses identical to their own at a cocktail party.

In a world, however, where the best quality is to be had with mass-produced manufactured goods, how can you display uniqueness without having all your stuff be junk? Do you wear underwear over a leotard? Do you wear a tutu with a pants suit? That kind of strategy’s been tried and it didn’t work very well.

Ideally, to achieve uniqueness you look to customize the products that you buy. And, it’s more than just picking a color besides black for your new Ford. You want significant features of your stuff to be different from the features of your neighbor’s stuff.

As freelance journalist Carmen Klingler-Deiseroth wrote in Automation Strategies, a May 11 e-magazine put out by Automation World, “Particularly among the younger generation of digital natives, there is a growing desire to fine-tune every online purchase to match their individual tastes and preferences.”

That, obviously, poses a challenge to manufacturers whose fabrication strategy is based on mass producing interchangeable parts on automated production lines in quantities large enough to use statistical process control to maintain quality. If your lot size is one, how do you get the statistics?

She quotes Robert Kickinger, mechatronic technologies manager at B&R Industrial Automation, as pointing out: “What is new . . . is the idea of making customized products under mass-production conditions.”

Kickinger further explains that any attempt to make products customizable by increasing manufacturing-system flexibility is usually accompanied by a reduction in overall equipment effectiveness (OEE). “When that happens, individualization is no longer profitable.”

One strategy that can help is taking advantage of an important feature of automated manufacturing equipment: its programmability. Machine programmability comes from its reliance on software, and software is notably “soft.” It’s flexible.

If you could ensure that taking advantage of your malleable software’s flexibility won’t screw up your product quality when you make your one, unique, customized product, your flexible manufacturing system could then remain profitable.

One strategy is based on simulation. That is, you know how your manufacturing system works, so you can build what I like to call a “mathematical model” that will behave, in a mathematical sense, like your real manufacturing system. For any given input, it will produce results identical to that of the real system, but much, much faster.

The results, of course, are not real, physical products, but measurement results identical to what your test department will get out of the real product.

Now, you can put the unique parameters of your unique product into the mathematical model of your real system, and crank out as many simulated examples of products as you need to ensure that when you plug those parameters into your real system, it will spit out a unique example of your unique product exhibiting the best quality your operation is capable of — without the need of cranking out mass quantities of unwanted stuff in order to tune your process.
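Here’s a minimal sketch of that simulate-first idea in Python. Every number in it is invented, and the “mathematical model” is deliberately trivial (the commanded dimension plus the machine’s known random error), but it shows how thousands of simulated runs can stand in for the mass quantities of real parts you’d otherwise need to gather statistics:

```python
import random

# Hypothetical machine model: commanded dimension plus known random error.
MACHINE_SIGMA = 0.010   # mm, assumed process spread
TOLERANCE = 0.025       # mm, assumed acceptance band
RUNS = 100_000          # simulated parts are cheap; real ones aren't

def simulate_measurement(commanded_mm: float) -> float:
    """Mathematical model of the real machine: one simulated measurement."""
    return commanded_mm + random.gauss(0.0, MACHINE_SIGMA)

def predicted_yield(commanded_mm: float) -> float:
    """Fraction of simulated parts that would pass final test."""
    passes = sum(
        abs(simulate_measurement(commanded_mm) - commanded_mm) <= TOLERANCE
        for _ in range(RUNS)
    )
    return passes / RUNS

# Check a one-off custom dimension before committing the real machine:
print(f"predicted yield: {predicted_yield(12.345):.1%}")
```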

So, what happens when (in accordance with Murphy’s Law) something that can go wrong does go wrong? Your wonderful, expensive, finely tuned flexible manufacturing system spits out a piece of junk.

You’d better not (automatically) box that piece of junk up and ship it to your customer!

Instead, you’d better take advantage of the second feature Kickinger wants for your flexible manufacturing system: real-time rejection.

“Defective products need to be rejected on the spot, while maintaining full production speed,” he advises.

Immediately catching isolated manufacturing defects not only maintains overall quality, it allows flexibly manufactured unique junk to be replaced quickly with good stuff to fulfill orders with minimum delay. If things have gone wrong enough to cause repetitive multiple failures, real-time rejection also allows your flexible manufacturing system to send up an alarm alerting non-automated maintenance assets (people with screwdrivers and wrenches) to correct the problem fast.

“This is the only way to make mass customization viable from an economic perspective,” Kickinger asserts.
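To make that concrete, here’s a hedged sketch of the real-time-rejection loop in Python. The in-line inspection and its 2% defect rate are stand-ins I made up; the logic is simply to scrap defects on the spot without stopping the line, and to call the people with screwdrivers and wrenches after repeated failures:

```python
import random

ALARM_THRESHOLD = 3   # consecutive defects before alerting maintenance

def inspect(unit_id: int) -> bool:
    """Stand-in for a real in-line measurement (assumed 2% defect rate)."""
    return random.random() > 0.02

consecutive_defects = 0
for unit_id in range(1, 1001):        # one hypothetical production run
    if inspect(unit_id):
        consecutive_defects = 0       # good part ships; the line never stops
    else:
        consecutive_defects += 1
        print(f"unit {unit_id}: rejected on the spot, remake queued")
        if consecutive_defects >= ALARM_THRESHOLD:
            print(f"unit {unit_id}: ALARM, repeated failures; send maintenance")
```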

Social and technological trends will only make development of this kind of flexible manufacturing process de rigueur in the future. Online shoppers are going to increasingly insist on having reasonably priced unique products manufactured to high quality standards and customized according to their desires.

As Kickinger points out: “The era of individualization has only just begun.”

How Do We Know What We Think We Know?

Rene Descartes Etching
Rene Descartes shocked the world by asserting “I think, therefore I am.” In the mid-seventeenth century that was blasphemy! William Holl/Shutterstock.com

9 May 2018 – In astrophysics school, learning how to distinguish fact from opinion was a big deal.

It’s really, really hard to do astronomical experiments. Let’s face it, before Neil Armstrong stepped, for the first time, on the Moon (known as “Luna” to those who like to call things by their right names), nobody could say for certain that the big bright thing in the night sky wasn’t made of green cheese. Only after going there and stepping on the ground could Armstrong truthfully report: “Yup! Rocks and dust!”

Even then, we had to take his word for it.

Only later on, after he and his buddies brought actual samples back to be analyzed on Earth (“Terra”) could others report: “Yeah, the stuff’s rock.”

Then, the rest of us had to take their word for it!

Before that, we could only look at the Moon. We couldn’t actually go there and touch it. We couldn’t complete the syllogism:

    1. It looks like a rock.
    2. It sounds like a rock.
    3. It smells like a rock.
    4. It feels like a rock.
    5. It tastes like a rock.
    6. Ergo, it’s a rock!

Before 1969, nobody could get past the first line of the syllogism!

Based on my experience with smart people over the past nearly seventy years, I’ve come to believe that the entire green-cheese thing started out when some person with more brains than money pointed out: “For all we know, the stupid thing’s made of green cheese.”

I Think, Therefore I Am

An essay I read a long time ago, which somebody told me was written by some guy named Rene Descartes in the seventeenth century, concluded that the only reason he (the author) was sure of his own existence was because he was asking the question, “Do I exist?” If he didn’t exist, who was asking the question?

That made sense to me, as did the sentence “Cogito ergo sum” (also attributed to that Descartes character), which, according to Mr. Foley, my high-school Latin teacher, translates from the ancient Romans’ babble into English as “I think, therefore I am.”

It’s easier to believe that all this stuff is true than to invent some goofy conspiracy theory about its all having been made up just to make a fool of little old me.

Which leads us to Occam’s Razor.

Occam’s Razor

According to the entry in Wikipedia on Occam’s Razor, the concept was first expounded by “William of Ockham, a Franciscan friar who studied logic in the 14th century.” Often summarized (in Latin) as lex parsimoniae, or “the law of parsimony” (again according to that same Wikipedia entry), what it means is: when faced with alternative explanations of anything, believe the simplest.

So, when I looked up in the sky from my back yard that day in the mid-1950s, and that cute little neighbor girl tried to convince me that what I saw was a flying saucer, and even claimed that she saw little alien figures looking over the edge, I was unconvinced. It was a lot easier to believe that she was a poor observer, and only imagined the aliens.

When, the next day, I read a newspaper story (Yes, I started reading newspapers about a nanosecond after Miss Shay taught me to read in the first grade.) claiming that what we’d seen was a U.S. Navy weather balloon, my intuitive grasp of Occam’s Razor (That was, of course, long before I’d ever heard of Occam or learned that a razor wasn’t just a thing my father used to scrape hair off his face.) caused me to immediately prefer the newspaper’s explanation to the drivel Nancy Pastorello had shovelled out.

Taken together, these two concepts form the foundation for the philosophy of science. Basically, the only thing I know for certain is that I exist, and the only thing you can be certain of is that you exist (assuming, of course, you actually think, which I have to take your word for). Everything else is conjecture, and I’m only going to accept the simplest of alternative conjectures.

Okay, so, having disposed of the two bedrock principles of the philosophy of science, it’s time to look at how we know what we think we know.

How We Know What We Think We Know

The only thing I (as the only person I’m certain exists) can do is pile up experience upon experience (assuming my memories are valid), interpreting each one according to Occam’s Razor, and fitting them together in a pattern that maximizes coherence, while minimizing the gaps and resolving the greatest number of the remaining inconsistencies.

Of course, I quickly notice that other people end up with patterns that differ from mine in ways that vary from inconsequential to really serious disagreements.

I’ve managed to resolve this dilemma by accepting the following conclusion:

Objective reality isn’t.

At first blush, this sounds like ambiguous nonsense. It isn’t, though. To understand it fully, you have to go out and get a nice, hot cup of coffee (or tea, or Diet Coke, or Red Bull, or anything else that’ll give you a good jolt of caffeine), sit down in a comfortable chair, and spend some time thinking about all the possible ways those three words can be interpreted either singly or in all possible combinations. There are, according to my count, fifteen possible combinations. You’ll find that all of them can be true simultaneously. They also all pass the Occam’s Razor test.

That’s how we know what we think we know.

Linear Actuator Basics


Lead Screw Image
Close up of a ball-screw-type lead screw shaft being used as a precision linear actuator on a machine.

2 May 2018 – This blog post is intended for folks with an interest in basic practical mechanical engineering, or mechanical engineers who want a quick brush-up on linear actuators. It’s mostly pretty basic stuff that has been around for decades, but it can serve as a guide to linear-motion actuators in the real world.

What triggered writing this post at this particular time is a notice crossing my desk about a video entitled “Can I Run A Linear Actuator Into A Hard Stop?” produced by global motion-control supplier Ametek. It’s an important topic that just about everyone faced with building a motorized linear-motion system needs to think about.

I first got serious about linear-motion actuators in the mid-1970s as an experimental physics student, though I’d seen them in action for as long as I can remember because of my father’s hobby of building powerboats. Virtually every powerboat bigger than about ten feet (as opposed to a sailboat) uses manually powered linear actuators in its steering linkage.

I didn’t really get into electromechanical linear actuators (linear actuators powered by electric motors) until I got involved with automated measurement systems, where steady motion or precision positioning is important. Since then, just about every system I’ve built has included a precision linear actuator somewhere inside.

Linear Actuator Types

There are basically four main types of linear actuators: lead screw, hydraulic/pneumatic, linear-motor, and piezoelectric. I’m going to concentrate on the lead-screw type because it’s by far the most common, but I’ll drop in some info about the other types for completeness.

Piezoelectric actuators take advantage of the fact that certain anisotropic crystalline solids change their shapes when placed in an electric field. The range of motion, however, is decidedly microscopic, so they are best suited to positioning things that are, well, microscopic. They’re a major enabling technology for atomic force microscopes.
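To put a number on “microscopic”: a single piezo disc stretches by roughly its d33 coefficient times the applied voltage. Here’s a back-of-envelope sketch in Python, with illustrative numbers of my own choosing (not the spec for any particular device):

    # Ballpark stroke of a single piezo disc: displacement ~ d33 x voltage.
    # The d33 value below is typical of PZT ceramics; it is an illustrative
    # number, not a datasheet figure for any particular actuator.
    d33 = 500e-12                  # piezoelectric coefficient, meters per volt
    voltage = 100.0                # a healthy drive voltage, in volts
    stroke = d33 * voltage         # 5e-8 m, i.e. 50 nanometers
    print(f"stroke = {stroke * 1e9:.0f} nm")

Fifty nanometers is about a tenth of the wavelength of green light, which is exactly the scale an atomic force microscope cares about.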

Linear motors are much larger. Imagine a long, relatively narrow tray of chocolate-frosted fudge. Imagine further that the fudge is actually made of, say, barium ferrite ceramic and magnetized with magnetic north being on the frosting side and magnetic south being on the fudge side underneath.

Now, slice the fudge into strips with cuts going across the tray of fudge the short way. Finally, take every other strip out, turn it over with the frosting side down, and put it back in place.

So, you end up with the odd-numbered strips (1, 3, 5, … ) being frosting side up and the even strips (2, 4, 6, … ) frosting side down. That’s what the long stator portion of a linear motor looks like.

To make motion, however, you need an electromagnetic slide approximately as long as the fudge tray is wide, and as wide as one of the cut strips. When you energize the electromagnet, the slide will settle between two of the strips so that its north pole is as close as possible to the nearest stator south pole, while its south pole snuggles up to the nearest stator north pole.

Reversing the current through the slide’s electromagnet makes it possible to inch the slide along the stator, one strip at a time. Switching really fast makes it possible to move the slide along the stator really fast.

That’s a very rough idea of how linear motors work. They are capable of high speeds (to make, say, a rail gun), but are relatively low in the actual force department.
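If you want to put numbers on “really fast,” note that each current reversal advances the slide one strip, so the speed is just strip pitch times switching rate. A quick sketch, with numbers I invented for illustration:

    # Each coil-current reversal advances the slide one strip, so:
    #     speed = strip pitch x switching rate
    # Both numbers below are invented for illustration.
    strip_pitch = 0.02                        # meters per strip (20 mm slices)
    switching_rate = 200                      # current reversals per second
    speed = strip_pitch * switching_rate
    print(f"slide speed = {speed} m/s")       # 4 m/s along the stator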

The pneumatic/hydraulic actuator is just a metal cylinder enclosed at one end with a moveable piston at the other. The space between the piston and the closed end is filled with some working fluid, such as air or oil. Forcing more fluid into the cylinder pushes the piston out. Pumping fluid out pulls the piston back. Depending on details, the motion can be fast or slow, and the forces applied can be enormous. Precision of motion is, however, not so good.
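“Enormous” is easy to justify with arithmetic: the force is simply working pressure times piston area. Another sketch, with plausible but invented numbers:

    import math

    # Hydraulic cylinder force = working pressure x piston area.
    # Illustrative numbers, not from any particular cylinder.
    pressure = 10e6                           # 10 MPa working pressure, pascals
    bore = 0.05                               # 50 mm piston diameter, meters
    area = math.pi * (bore / 2) ** 2          # piston face area, square meters
    force = pressure * area
    print(f"force = {force / 1000:.1f} kN")   # about 19.6 kN, roughly two tons

That’s around two metric tons of push from a cylinder you could hold in your hands.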

A lead-screw-type linear actuator (LSLA) looks like a fairly complex piece of kit. Construction of the things is actually fairly simple, though, which largely accounts for their popularity.

Linear actuator diagram
Components of a lead-screw linear actuator.

Essentially, an LSLA consists of an ordinary reversible electric motor with a length of worm shaft fixed to its output. The worm shaft threads through a slide traveling along a track/frame that prevents the slide from rotating with respect to the motor housing. The worm shaft and threaded slide form a simple screw machine that converts rotary motion of the shaft to linear motion of the slide.
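The arithmetic of that screw machine is delightfully simple: the slide advances one screw lead per shaft revolution. A sketch, again with invented numbers:

    # Lead-screw kinematics: linear travel = revolutions x lead.
    # The lead and motor speed below are invented for illustration.
    lead = 0.005                              # meters per revolution (5 mm)
    motor_rpm = 600                           # shaft speed
    linear_speed = (motor_rpm / 60) * lead    # 0.05 m/s, i.e. 50 mm/s
    revs_per_100mm = 0.100 / lead             # 20 revolutions to travel 100 mm
    print(f"{linear_speed * 1000:.0f} mm/s; {revs_per_100mm:.0f} revs per 100 mm")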

A motor controller, which can be as simple as a DPST switch or as complex as an intelligent motor controller (IMC) with a microprocessor brain, supplies power and control to the motor.

Motion Control Stops

At minimum, something needs to be installed to keep the slide from either backing into the motor/shaft coupler at the proximal end of the worm shaft, or running off the distal end of the shaft. These things are called, not surprisingly, “stops,” and they can be mechanical, electrical, or software.

Mechanical Stops, also known as “hard” stops, are barriers attached to the frame that physically constrain the slide’s motion to a certain range. Running into a hard stop is generally considered a bad thing, and designers only put them into machines to prevent even worse outcomes that may obtain when the slide’s designed-in range is exceeded.

Electrical Stops, more often referred to as “limit switches,” are actual electrical switches mounted on the frame that are automatically actuated by the slide’s motion. Typically a designer will mount an SPST momentary switch in a bracket attached to the frame. The slide presses on the switch at the end of its travel, closing a set of contacts that send a logic signal to the controller alerting it to cut (or reverse) motor power. The bracket can also serve as a mechanical stop if the control function goes wrong.

Software Stops require adding a linear encoder to the linear actuator mechanism. There are all sorts of linear encoders, from simple lengths of resistance wire to digital optical position encoders. What they all do is send some kind of signal constantly informing the controller of where the slide is in real time. A software stop is then an algorithm in the controller program to say: “That’s far enough!” and trigger what happens next.
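In code, a software stop can be as simple as a comparison in the control loop. Here’s a minimal Python sketch of the idea; the Motor class is a stand-in for whatever interface your controller actually exposes, and the position readings would come from the linear encoder rather than a list of numbers:

    # Minimal software-stop sketch. Motor is a stand-in for real hardware;
    # a real controller would read position from the linear encoder.
    class Motor:
        def __init__(self):
            self.running = True

        def stop(self):
            self.running = False        # in real hardware: cut or reverse power

    MIN_POS, MAX_POS = 0.0, 250.0       # allowed slide travel, in millimeters

    def software_stop(position_mm, motor):
        """Say 'That's far enough!' when the slide reaches either limit."""
        if position_mm <= MIN_POS or position_mm >= MAX_POS:
            motor.stop()
            return True
        return False

    motor = Motor()
    for reading in (100.0, 200.0, 250.0):       # simulated encoder readings
        if software_stop(reading, motor):
            print(f"software stop tripped at {reading} mm")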

Given the choice, my preference is to rely mainly on software stops. Having a linear encoder in the mechanism gives all kinds of neat options for precision control of the system, such as positioning, speed control, and so forth, in addition to implementing the software stops.

For example, I once built an experiment to test a device to measure the attack angle of an aircraft wing. A wing’s attack angle is the angle between the relative airflow and the wing shape’s chord. It is the single most important parameter determining the wing’s lift at any given speed. There are, however, all kinds of phenomena that affect the actual attack angle, all of which change constantly in real time as the wing moves through the air. To really understand what’s going on with the wing, some means of monitoring attack angle is, shall we say, useful.

Anyway, the test protocol for the experiment called for mounting an example of the attack-angle sensor in a wind tunnel, and measuring its output at hundreds of combinations of air speed and sensor orientation. Central to the control system’s operation was a linear encoder whose output informed both the controller and the data logging computer.

The controller’s job was to hold the sensor’s orientation at a certain set point via a feedback loop just long enough to get a stable reading, then go on to the next set point. The test program’s supervisory algorithm stepped the set point through all the orientations required, one at a time. In fact, it cycled the set point back and forth through the whole test range several times, logging data as it went.
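The supervisory logic amounted to something like the following toy sketch: a simple proportional feedback loop settles on each set point, logs a reading, and steps to the next. The gains, tolerances, and set points below are invented for illustration; this is not the actual wind-tunnel code.

    # Toy version of the set-point stepping scheme: proportional feedback
    # settles on each set point, then the supervisor steps to the next one.
    def settle(setpoint, position, gain=0.5, tolerance=0.01):
        """Drive position toward setpoint; return the settled position."""
        while abs(setpoint - position) > tolerance:
            position += gain * (setpoint - position)    # feedback correction
        return position

    position = 0.0
    setpoints = [5.0, 10.0, 15.0, 10.0, 5.0]            # sweep up, then back
    for sp in setpoints:
        position = settle(sp, position)
        print(f"stable reading logged at {position:.2f} degrees")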

After building and testing the whole rig, my job, as principal investigator, was reduced to setting the wind tunnel’s airspeed, then reading a novel while the system ran through the test program and logged all the data automatically.

When designing the thing, I spent a couple of days trying to figure out how to install limit switches. In the end, however, I decided it just wasn’t worth the trouble. The design I had was pretty compact to begin with. The switches available and the mounting brackets to hold them would have been bigger than the rest of the design. So, I gave up on adding limit switches and relied on software stops.

That left me in danger of running into a hard stop, though, if something went wrong with the program. There are always hard stops. Lead screws are of finite length and one of two things can happen when you come to the end: either something (a hard stop) blocks the slide motion, or it runs off the end. Both are bad.

If the electric motor rams the slide into a hard stop, it’s like the proverbial unstoppable force vs. an immovable object. Something’s gotta give and that something invariably breaks.
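The sobering part isn’t the slide’s momentum; it’s the screw’s mechanical advantage, which keeps multiplying the stalled motor’s torque. The standard power-screw approximation gives an axial force of roughly 2π times torque times efficiency divided by lead. A sketch with invented numbers:

    import math

    # Axial force from a stalled motor driving a lead screw:
    #     F ~ 2 pi x torque x efficiency / lead
    # All numbers below are invented for illustration.
    stall_torque = 0.5      # newton-meters, a smallish gearmotor
    efficiency = 0.9        # ball screws are efficient, which is no help here
    lead = 0.005            # meters of travel per revolution
    force = 2 * math.pi * stall_torque * efficiency / lead
    print(f"{force:.0f} N pressing on whatever got in the way")   # ~565 N

Better than fifty kilograms of steady force from a motor you can hold in one hand. No wonder something breaks.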

If, on the other hand, the slide runs off the end of the lead screw, the whole machine falls apart. That may be less destructive, but it means the entire machine has to be reassembled.

Running Into A Hard Stop

There are two rules regarding running into a hard stop:

Rule 1: DON’T DO IT!

Rule 2: ASSUME YOU CAN’T AVOID IT!

What happens if you break Rule 1 depends on the details of the mechanism’s design. Every design is different, and so is what happens when you go too far, but the consequences are all more-or-less bad. There’s never a situation where running a linear actuator beyond its design limit is a good thing.

Rule 2, on the other hand, is a simple acknowledgement of Murphy’s Law: Anything that can go wrong will go wrong.

While Murphy’s Law has a statistical nature when you’re dealing with mechanical systems in use, when testing prototype systems it’s a stone-cold guarantee. And, any time you put together anything for the first time, then turn it on, you’re testing a prototype.

What Rule 2 tells you to do is think long and hard about what’s going to happen when you turn the thing on and get an unexpected surprise. You have to expect the unexpected because if you expected it, it wouldn’t be unexpected.

One of the most common surprises around linear actuators is the thing suddenly going out of control. When that happens the slide invariably runs past its design limits.

The video from Ametek is short. I hesitate to spoil it for you by telling you that the answer to the question “Can I Run A Linear Actuator Into A Hard Stop?” is “Yes.” It has to be because Rule 2 tells you it’s inevitable. Importantly, the video goes on to tell you what to do to minimize the damage when it happens.

Surrealism vs. Zen

Rene Magritte’s painting The Treachery of Images, also known as This Is Not a Pipe, is a famous example of surrealist style, which uses realistically rendered images to say something profound about the workings of the human mind.

26 April 2018 – As you can tell by the discrepancy between the date at the start of this column and the publication date listed in red above, it’s taken a looong time to get this thing written! The date at the text’s start, of course, is the date I started writing the manuscript, and the red publication date was automatically added when I actually finished all the corrections and made the thing live on the blog page. My main excuse for taking so long to write it is that the day I started the manuscript I also came down with the flu. It cut my work output drastically ’cause I suddenly started spending so much of my workday in a semi-comatose state.

Before starting this manuscript, I finally finished reading the (really massive) catalog for a 2001-2002 exhibition put together by the Tate Modern gallery in Bankside, London, UK, entitled Surrealism: Desire Unbound. This tome is 349 pages long and provides a serious look deep inside the mindset of the proponents of the Surrealist Movement, which was arguably the most far-reaching creative enterprise of the Twentieth Century.

I care about that because most of my art falls into the surrealist style. That is, it’s an attempt to render mental images in a realistic manner. I have, however, major differences with the classic surrealists, led by Andre Breton, regarding the theory of how the mind works, and that affects the content I choose.

I’m not a trained psychologist, but neither was Breton. While Breton attempted to base his creative theory on his interpretation of Freud’s pioneering psychoanalytical research, most of Freud’s writings were unavailable to him at the time he was developing the ideas on which he based his 1924 booklet, Manifeste du surréalisme. The fact that Freud’s work is now quite readily available is largely immaterial because Freud’s research delved into mental illness, whereas I’m interested in the workings of reasonably healthy minds.

I prefer to follow the introspective traditions of Zen Buddhism.

In his manifesto, Breton says: “. . . one proposes to express — verbally, by means of the written word, or in any other manner — the actual functioning of thought. Dictated by thought, in the absence of any control exercised by reason, exempt from any aesthetic or moral concern.” In other words, he proposed to free artists of all disciplines from discipline itself.

From today’s vantage point, nearly a century removed from this event, that doesn’t seem like much of a stretch. We have gotten used to the idea that an artist can do pretty much anything he or she damn well pleases, call it art, and get away with it. I’ve no problem with Breton’s statement, except that he went so far as to eschew the editorial process.

As a veteran journalist, fiction writer, and visual artist, I know from experience that not editing invariably results in gobbledygook. I also know the surrealists didn’t actually do it.

The essence of all creative arts, from journalistic writing to making motion pictures, is communication. An artist has something to say, and attempts to say it. That’s the difference between a Michelangelo and a house painter.

(Having painted houses professionally, I hesitate to say anything derogatory about house painters. Then again, I did get fired from that job for taking too long to get anything done! So, maybe I should shut up about painting houses professionally.)

The purpose of editing is to ensure that the audience has half a chance of figuring out what the author is trying to say. It has been said that James Joyce’s stream-of-consciousness novel Ulysses is the most difficult thing to read in the English language. Since fighting through more than ten pages of the thing in one sitting gives me a splitting headache, I don’t disagree. Joyce would have done the English-reading world a great favor by consulting a copy of Strunk and White’s The Elements of Style!

Hey, Jimmy, ever hear of quotation marks?

So, by trying to bypass the editing process, the early surrealists’ attempts at automatic writing produced pretty ungainly stuff. Of course, from concept to pen, “automatic” writers actually do a lot of editing. Take, for example, two lines from Joyce Mansour’s first book of poetry (supposedly automatically written), Cris:

I will fish out your empty soul
In the coffin where your mouldy body lies

Do you, or anybody you know, think in such complete sentences? I don’t. Even if I started with the complete mental image, I’d do a lot of backing and filling to gin up those two lines complete with intelligible word order. I’d likely start with the coffin image, then realize I needed a subject to view the image, then maybe come up with the fishing idea, and so forth.

It’d all happen really fast because we humans are really fast verbal thinkers. It might almost seem instantaneous if I were willfully not paying attention to the process. But, “the absence of any control exercised by reason” would NOT obtain!


Similarly, does anyone believe Salvador Dali’s The Hallucinogenic Toreador is NOT the product of careful planning and reasoned arrangements of intertwining visual components? I doubt if Dali, himself, would assert that!

I am currently working on a simple painting that realistically depicts a woman’s eye. I’ve had it on my easel for at least a week, during which time I spent a couple of days carefully erasing the eyebrow that I made too dark in the original underpainting. I’ve spent another day deciding whether to mix up additional yellow paint to correct the skin color, or just get started with what paint I already have on hand and worry about running out when, and if, I run out.

That’s all part of the editing process.

It’s something every artist has done all the way back to the Cro-Magnon guy (or maybe girl) scribbling graffiti on the cave walls at Lascaux. Da Vinci spent a lifetime editing details of the Mona Lisa. Dali, trained in the same style, did the same thing.

Why would Breton be so enamored of automatism? It goes back to his reliance on the image Freud posited for human mental activity.

Freud imagined a mind divided against itself. He imagined a subconscious filled with desires and emotions trying to express itself, but held in check by a conscious ego that constantly says: “No, No! You can’t say or do that!”

Breton’s goal was to free the subconscious from conscious control.

To a Zen Buddhist that model of mental activity is absurd. Zen’s ancestor Taoism solved Breton’s problem roughly twenty-four centuries earlier with the image of the “uncarved block.”

Basically, as the second line of Lao Tsu’s Tao Te Ching says:

The name that can be named is not the eternal name.

That means dividing things up (by naming them) breaks them. Dividing the mind into subconscious and conscious parts breaks it.

Having a conscious mind controlling a subconscious mind results in insanity. To a Zen Buddhist, the sane person has a whole, undivided mind. What Freud imagines as the conscious part in fact always chooses a plan that expresses the desires of the unconscious part. How could a sane person act differently?

To a Zen Buddhist, it’s all one mind, not a bunch of disjointed pieces at war with each other.

What about the many examples of individuals whose unconstrained desires would run them afoul of society? To the Freudian surrealists, that was the normal state of affairs. To Zen Buddhists, on the other hand, that indicates one of two situations:

* Stupidity in which the conscious mind chooses inappropriate means to express the “unconscious” desires; or

* Mental illness in which the unconscious desires are such that no person “in their right mind” would actually desire them.

For example, Breton’s surrealists professed to admire the freedom from constraints expressed in the writings of the Marquis de Sade. Sade’s protagonists have a desire to cause suffering in others. To the Freudian surrealists, that forces a choice between consciously suppressing the violent urges, or going to jail for acting them out.

Having that desire in the first place would horrify any Buddhist! Buddhists want to end suffering. You’d have to stand on your head, philosophically speaking, to imagine a Buddhist subconsciously desiring to cause suffering for any creature. The very existence of such a desire demonstrates a deranged mind!

The Buddhist would choose the kinder, gentler way of confining the kooky Marquis in the eighteenth century version of a loony bin, while providing him barrels of ink and reams of paper with which he could mentally live out his barbaric fantasies without actually hurting anyone. That, of course, is exactly what the French authorities did.

Good for them.

Of course, the Sade-smitten surrealists were generally neither stupid nor insane. A quick search revealed exactly zero instances of surrealists being jailed for violent behavior. Several of them did run afoul of decency laws, but most of us now would opine that was the fault of the laws, not the law breakers. I’m fairly confident that, though the surrealists often depicted instances of cruelty, they pretty much never actually hurt anyone themselves.

Even that famously revolting shot in the film Un Chien Andalou by Dali and Luis Bunuel, that apparently shows a girl’s eye being sliced open, didn’t actually happen. It was a special-effects masterpiece.

So, what does all this mean for surrealism in the first quarter of the twenty-first century?

Well, a number of historians counted surrealism as dead at the end of World War II. Others confidently claim that surrealism died with Andre Breton in 1966. Still others say it died with Salvador Dali in 1989.

My experience indicates that surrealism might insist, along with Mark Twain: “The reports of my death are greatly exaggerated.”

I constantly see exquisite new works done in a style that can best be described as surrealist. That is, these works render images of mental landscapes and ideas in a startlingly realistic way. They make the life of the mind visible.

Sounds like surrealism to me!