The Pyramid of Needs

Needs Pyramid
The Pyramid of Needs combines Maslow’s and Herzberg’s motivational theories.

18 July 2018 – Long, long ago, in a [place] far, far away. …

When I was Chief Editor at business-to-business magazine Test & Measurement World, I had a long, friendly, though heated, discussion with one of our advertising-sales managers. He suggested making the compensation we paid our editorial staff contingent on total advertising sales. He pointed out that what everyone came to work for was to get paid, and that tying their pay to how well the magazine was doing financially would give them an incentive to make decisions that would help advertising sales and advance the magazine’s financial success.

He thought it was a great idea, but I disagreed completely. I pointed out that, though revenue sharing was exactly the right way to compensate the salespeople he worked with, it was exactly the wrong way to compensate creative people, like writers and journalists.

Why it was a good idea for his salespeople I’ll leave for another column. Today, I’m interested in why it was not a good idea for my editors.

In the heat of the discussion I didn’t do a deep dive into the reasons for taking my position. Decades later, from the standpoint of a semi-retired whatever-you-call-my-patchwork-career, I can now sit back and analyze in some detail the considerations that led me to my conclusion, which I still think was correct.

We’ll start out with Maslow’s Hierarchy of Needs.

In 1943, Abraham Maslow proposed that healthy human beings have five basic needs, and that these needs are arranged in a hierarchy. At the top is “self-actualization,” which boils down to a need for creativity. It’s the need to do something that’s never been done before in one’s own individual way. At the bottom is the simple need for physical survival. In between are three more needs people also seek to satisfy.

Maslow pointed out that people seek to satisfy these needs from the bottom to the top. For example, nobody worries about security arrangements at their gated community (second level) while having a heart attack that threatens their survival (bottom level).

Overlaid on Maslow’s hierarchy is Frederick Herzberg’s Two-Factor Theory, which he published in his 1959 book The Motivation to Work. Herzberg’s theory divides Maslow’s hierarchy into two sections. The lower section is best described as “hygiene factors.” They are also known as “dissatisfiers” or “demotivators” because if they’re not met folks get cranky.

Basically, a person needs to have their hygiene factors covered in order to have a level of basic satisfaction in life. Leaving any of these needs unsatisfied makes them miserable. Having them all satisfied doesn’t motivate them at all. It makes ’em fat, dumb and happy.

The upper-level needs are called “motivators.” Not having motivators met drives an individual to work harder, smarter, etc. It energizes them.
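
For readers who like to see the combination spelled out, here’s a minimal sketch (my own illustration, not anything from Maslow or Herzberg, and assuming the conventional split that puts esteem and self-actualization on the motivator side):

```python
# Toy model of the combined "Pyramid of Needs": Maslow's five levels, bottom
# to top, each tagged with Herzberg's factor. Where exactly the hygiene/
# motivator line falls is an assumption; Herzberg never mapped his factors
# onto Maslow's levels one-for-one.
PYRAMID = [
    ("physiological survival", "hygiene"),
    ("safety and security",    "hygiene"),
    ("love and belonging",     "hygiene"),
    ("esteem",                 "motivator"),
    ("self-actualization",     "motivator"),
]

def next_concern(met):
    """Maslow's ordering: people attend to the lowest unmet need first."""
    for need, factor in PYRAMID:
        if need not in met:
            return need, factor
    return None, None  # everything is covered

# An editor whose hygiene factors are already covered is left chasing motivators.
covered = {"physiological survival", "safety and security", "love and belonging"}
print(next_concern(covered))  # -> ('esteem', 'motivator')
```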

My position in the argument with my ad-sales friend was that providing revenue sharing worked at the “Safety and Security” level. Editors were (at least in my organization) paid enough that they didn’t have to worry about feeding their kids and covering their bills. They were talented people with a choice of whom they worked for. If they weren’t already being paid enough, they’d have been forced to go work for somebody else.

Creative people, my argument went, are motivated by non-monetary rewards. They work at the upper “motivator” levels. They’ve already got their physical needs covered, so to motivate them we have to offer rewards in the “motivator” realm.

We did that by pointing out that they belonged to the staff of a highly esteemed publication. We talked about how their writings helped their readers excel at their jobs. We entered their articles in professional competitions with awards for things like “Best Technical Article.” Above all, we talked up the fact that ours was “the premier publication in the market.”

These were all non-monetary rewards to motivate people who already had their basic needs (the hygiene factors) covered.

I summarized my compensation theory thusly: “We pay creative people enough so that they don’t have to go do something else.”

That gives them the freedom to do what they would want to do, anyway. The implication is that creative people want to do stuff because it’s something they can do that’s worth doing.

In other words, we don’t pay creative people to work. We pay them to free them up so they can work. Then, we suggest really fun stuff for them to work at.

What does this all mean for society in general?

First of all, if you want there to be a general level of satisfaction within your society, you’d better take care of those hygiene factors for everybody!

That doesn’t mean the top 1%. It doesn’t mean the top 80%, either. Or, the top 90%. It means everybody!

If you’ve got 99% of everybody covered, that still leaves a whole lot of people who think they’re getting a raw deal. Remember that in the U.S.A. there are roughly 300 million people. If you’ve left 1% feeling ripped off, that’s 3 million potential revolutionaries. Three million people can cause a lot of havoc if motivated.

Remember, at the height of the 1960s hippie movement, there were, according to the most generous estimates, only about 100,000 hippies wandering around. Those hundred-thousand activists made a huge change in society in a very short period of time.

Okay. If you want people invested in the status quo of society, make sure everyone has all their hygiene factors covered. If you want to know how to do that, ask Bernie Sanders.

Assuming you’ve got everybody’s hygiene factors covered, does that mean they’re all fat, dumb, and happy? Do you end up with a nation of goofballs with no motivation to do anything?

Nope!

Remember those needs Herzberg identified as “motivators” in the upper part of Maslow’s pyramid?

The hygiene factors come into play only when they’re not met. The day they’re met, people stop thinking about who’ll be first against the wall when the revolution comes. Folks become fat, dumb and happy, and stay that way for about an afternoon. Maybe an afternoon and an evening if there’s a good ballgame on.

The next morning they start thinking: “So, what can we screw with next?”

What they’re going to screw with next is anything and everything they damn well please. Some will want to fly to the Moon. Some will want to outdo Michelangelo’s frescoes on the ceiling of the Sistine Chapel. They’re all going to look at what they think was the greatest stuff from the past, and try to think of ways to do better, and to do it in their own way.

That’s the whole point of “self-actualization.”

The Renaissance didn’t happen because everybody was broke. It happened because they were already fat, dumb and happy, and looking for something to screw with next.

POTUS and the Peter Principle

Will Rogers & Wiley Post
In 1927, Will Rogers wrote: “I never met a man I didn’t like.” Here he is (on left) posing with aviator Wiley Post before their ill-fated flying exploration of Alaska. Everett Historical/Shutterstock

11 July 2018 – Please bear with me while I, once again, invert the standard news-story pyramid by presenting a great whacking pile of (hopefully entertaining) detail that leads eventually to the point of this column. If you’re too impatient to read it to the end, leave now to check out the latest POTUS rant on Twitter.

Unlike Will Rogers, who famously wrote, “I never met a man I didn’t like,” I’ve run across a whole slew of folks I didn’t like, to the point of being marginally misanthropic.

I’ve made friends with all kinds of people, from murderers to millionaires, but there are a few types that I just can’t abide. Top of that list is people who think they’re smarter than everybody else, and want you to acknowledge it.

I’m telling you this because I’m trying to be honest about why I’ve never been able to abide two recent Presidents: William Jefferson Clinton (#42) and Donald J. Trump (#45). Having been forced to observe their antics over an extended period, I’m pleased to report that they’ve both proved to be among the most corrupt individuals to occupy the Oval Office in recent memory.

I dislike them because they both show that same, smarmy self-satisfied smile when contemplating their own greatness.

Tricky Dick Nixon (#37) was also a world-class scumbag, but he never triggered the same automatic revulsion. That is because, instead of always looking self-satisfied, he always looked scared. He was smart enough to recognize that he was walking a tightrope and that, if he stayed on it long enough, he eventually would fall off.

And, he did.

I had no reason for disliking #37 until the mid-1960s, when, as a college freshman, I researched a paper for a history class that happened to involve digging into the McCarthy hearings of the early 1950s. Seeing the future #37’s activities in that period helped me form an extremely unflattering picture of his character, which a decade later proved accurate.

During those years in between I had some knock-down, drag-out arguments with my rabid-Nixon-fan grandmother. I hope I had the self control never to have said “I told you so” after Nixon’s fall. She was a nice lady and a wonderful grandma, and wouldn’t have deserved it.

As Abraham Lincoln (#16) famously said: “You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.”

Since #45 came on my radar many decades ago, I’ve been trying to figure out what, exactly, is wrong with his brain. At first, when he was a real-estate developer, I just figured he had bad taste and was infantile. That made him easy to dismiss, so I did just that.

Later, he became a reality-TV star. His show, The Apprentice, made it instantly clear that he knew absolutely nothing about running a business.

No wonder his companies went bankrupt. Again, and again, and again….

I’ve known scads of corporate CEOs over the years. During the quarter century I spent covering the testing business as a journalist, I got to spend time with most of the corporate leaders of the world’s major electronics manufacturing companies. Unsurprisingly, the successful ones followed the best practices that I learned in MBA school.

Some of the CEOs I got to know were goofballs. Most, however, were absolutely brilliant. The successful ones all had certain things in common.

Chief among the characteristics of successful corporate executives is that they make the people around them happy to work for them. They make others feel comfortable, empowered, and enthusiastically willing to cooperate to make the CEO’s vision manifest.

Even Commendatore Ferrari, who I’ve heard was Hell to work for and Machiavellian in interpersonal relationships, made underlings glad to have known him. I’ve noticed that ‘most everybody who’s ever worked for Ferrari has become a Ferrari fan for life.

As far as I can determine, nobody ever sued him.

That’s not the impression I got of Donald Trump, the corporate CEO. He seemed to revel in conflict, making those around him feel like dog pooh.

Apparently, everyone who’s ever dealt with him has wanted to sue him.

That worked out fine, however, for Donald Trump, the reality-TV star. So-called “reality” TV shows generally survive by presenting conflict. The more conflict the better. Everybody always seems to be fighting with everybody else, and the winners appear to be those who consistently bully their opponents into feeling like dog pooh.

I see a pattern here.

The inescapable conclusion is that Donald Trump was never a successful corporate executive, but succeeded enormously playing one on TV.

Another characteristic I should mention of reality TV shows is that they’re unscripted. The idea seems to be that nobody knows what’s going to happen next, including the cast.

That relieves reality-TV stars of the need to learn lines. Actual movie stars and stage actors have to learn lines of dialog. Stories are tightly scripted so that they conform to Aristotle’s recommendations for how to write a successful plot.

Having written a handful of traditional motion-picture scripts as well as having produced a few reality-TV episodes, I know the difference. Following Aristotle’s dicta gives you the ability to communicate, and sometimes even teach, something to your audience. The formula reality-TV show, on the other hand, goes nowhere. Everybody (including the audience) ends up exactly where they started, ready to start the same stupid arguments over and over again ad nauseam.

Apparently, reality-TV audiences don’t want to actually learn anything. They’re more focused on ranting and raving.

Later on, following a long tradition among theater, film and TV stars, #45 became a politician.

At first, I listened to what he said. That led me to think he was a Nazi demagogue. Then, I thought maybe he was some kind of petty tyrant, like Mussolini. (I never considered him competent enough to match Hitler.)

Eventually, I realized that it never makes any sense to listen to what #45 says because he lies. That makes anything he says irrelevant.

FIRST PRINCIPLE: If you catch somebody lying to you, stop believing what they say.

So, it’s all bullshit. You can’t draw any conclusion from it. If he says something obviously racist (for example), you can’t conclude that he’s a racist. If he says something that sounds stupid, you can’t conclude he’s stupid, either. It just means he’s said something that sounds stupid.

Piling up this whole load of B.S., then applying Occam’s Razor, leads to the conclusion that #45 is still simply a reality-TV star. His current TV show is titled The Trump Administration. Its supporting characters are U.S. senators and representatives, executive-branch bureaucrats, news-media personalities, and foreign “dignitaries.” Some in that last category (such as Justin Trudeau and Emmanuel Macron) are reluctant conscripts into the cast, and some (such as Vladimir Putin and Kim Jong-un) gleefully play their parts, but all are bit players in #45’s reality TV show.

Oh, yeah. The largest group of bit players in The Trump Administration is every man, woman, child and jackass on the planet. All are, in true reality-TV style, going exactly nowhere as long as the show lasts.

Politicians have always been showmen. Of the Founding Fathers, the one who stands out for never coming close to becoming President was Benjamin Franklin. Franklin was a lot of things, and did a lot of things extremely well. But, he was never really a P.T.-Barnum-like showman.

Really successful politicians, such as Abraham Lincoln, Franklin Roosevelt (#32), Bill Clinton, and Ronald Reagan (#40) were showmen. They could wow the heck out of an audience. They could also remember their lines!

That brings us, as promised, to Donald Trump and the Peter Principle.

Recognizing the close relationship between Presidential success and showmanship gives some idea about why #45 is having so much trouble making a go of being President.

Before I dig into that, however, I need to point out a few things that #45 likes to claim as successes that actually aren’t:

  • The 2016 election was not really a win for Donald Trump. Hillary Clinton was such an unpopular candidate that she decisively lost on her own (de)merits. God knows why she was ever the Democratic Party candidate at all. Anybody could have beaten her. If Donald Trump hadn’t been available, Elmer Fudd could have won!
  • The current economic expansion has absolutely nothing to do with Trump policies. I predicted it back in 2009, long before anybody (with the possible exception of Vladimir Putin, who apparently engineered it) thought Trump had a chance of winning the Presidency. My prediction was based on applying chaos theory to historical data. It was simply time for an economic expansion. The only effect Trump can have on the economy is to screw it up. Being trained as an economist (You did know that, didn’t you?), #45 is unlikely to screw up so badly that he derails the expansion.
  • While #45 likes to claim a win on North Korean denuclearization, the Nobel Peace Prize is on hold while evidence piles up that Kim Jong-un was pulling the wool over Trump’s eyes at the summit.

Finally, we move on to the Peter Principle.

In 1969 Canadian writer Raymond Hull co-wrote a satirical book entitled The Peter Principle with Laurence J. Peter. It was based on research Peter had done on organizational behavior.

Peter, who died in 1990 at age 70, was not a management consultant or a behavioral psychologist. He was an Associate Professor of Education at the University of Southern California. He was also Director of the Evelyn Frieden Centre for Prescriptive Teaching at USC, and Coordinator of Programs for Emotionally Disturbed Children.

The Peter principle states: “In a hierarchy every employee tends to rise to his level of incompetence.”

To the horror of corporate managers, the book went on to provide real examples and lucid explanations demonstrating the principle’s validity. It works as satire only because it leaves the reader with a choice either to laugh or to cry.

See last week’s discussion of why academic literature is exactly the wrong form with which to explore really tough philosophical questions in an innovative way.

Let’s be clear: I’m convinced that the Peter principle is God’s Own Truth! I’ve seen dozens of examples that confirm it, and no counterexamples.

It’s another proof that Mommy Nature has a sense of humor. Anyone who disputes that has, philosophically speaking, a piece of paper taped to the back of his (or her) shirt with the words “Kick Me!” written on it.

A quick perusal of the Wikipedia entry on the Peter Principle elucidates: “An employee is promoted based on their success in previous jobs until they reach a level at which they are no longer competent, as skills in one job do not necessarily translate to another. … If the promoted person lacks the skills required for their new role, then they will be incompetent at their new level, and so they will not be promoted again.”
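
For the mechanically inclined, here’s a minimal toy simulation of that promotion rule (my own sketch, not anything from Peter’s book; the six levels and the coin-flip odds of skills transferring are arbitrary assumptions):

```python
import random

# Peter-principle toy model: an employee keeps getting promoted as long as
# they turn out to be competent at the current level. Because skills don't
# necessarily transfer upward, competence at each new level is (here) an
# independent coin flip. Everyone eventually parks at their level of
# incompetence, unless they run out of hierarchy first.
def final_level(levels=6, p_competent=0.5, rng=random):
    level = 1
    while level < levels and rng.random() < p_competent:
        level += 1   # competent at this level, so promoted
    return level     # stuck here: first level of incompetence (or the top)

random.seed(2018)
careers = [final_level() for _ in range(100_000)]
for lvl in range(1, 7):
    share = careers.count(lvl) / len(careers)
    print(f"level {lvl}: {share:.1%} of employees finish here")
```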

I leave it as an exercise for the reader (and the media) to find the numerous examples where #45, as a successful reality-TV star, has the skills he needed to be promoted to President, but not those needed to be competent in the job.

Death Logs Out

Death Logs Out Cover
E.J. Simon’s Death Logs Out (Endeavour Press) is the third in the Michael Nicholas series.

4 July 2018 – If you want to explore any of the really tough philosophical questions in an innovative way, the best literary forms to use are fantasy and science fiction. For example, when I decided to attack the nature of reality, I did it in a surrealist-fantasy novelette entitled Lilith.

If your question involves some aspect of technology, such as the nature of consciousness from an artificial-intelligence (AI) viewpoint, you want to dive into the science-fiction genre. That’s what sci-fi great Robert A. Heinlein did throughout his career to explore everything from space travel to genetically engineered humans. My whole Red McKenna series is devoted mainly to how you can use (and mis-use) robotics.

When E.J. Simon selected grounded sci-fi for his Michael Nicholas series, he most certainly made the right choice. Grounded sci-fi is the sub-genre where the author limits him- (or her-) self to what is at least theoretically possible using current technology, or immediate extensions thereof. No warp drives, wormholes or anti-grav boots allowed!

In this case, we’re talking about imaginative development of artificial intelligence and squeezing a great whacking pile of supercomputing power into a very small package to create something that can best be described as chilling: the conquest of death.

The great thing about genre fiction, such as fantasy and sci-fi, is the freedom provided by the ol’ “willing suspension of disbelief.” If you went at this subject in a scholarly journal, you’d never get anything published. You’d have to prove you could do it before anybody’d listen.

I touched on this effect in the last chapter of Lilith when looking at my own past reaction to “scholarly” manuscripts shown to me by folks who forgot this important fact.

“Their ideas looked like the fevered imaginings of raving lunatics,” I said.

I went on to explain why I’d chosen the form I’d chosen for Lilith thusly: “If I write it up like a surrealist novel, folks wouldn’t think I believed it was God’s Own Truth. It’s all imagination, so using the literary technique of ‘willing suspension of disbelief’ lets me get away with presenting it without being a raving lunatic.”

Another advantage of picking genre fiction is that it affords the ability to keep readers’ attention while filling their heads with ideas that would leave them cross-eyed if simply presented straight. The technical details presented in the Michael Nicholas series could, theoretically, be presented in a PowerPoint presentation with something like fifteen slides. Well, maybe twenty-five.

But, you wouldn’t be able to get the point across. People would start squirming in their seats around slide three. What Simon’s trying to tell us takes time to absorb. Readers have to make the mental connections before the penny will drop. Above all, they have to see it in action, and that’s just what embedding it in a mystery-adventure story does. Following the mental machinations of “real” characters as they try to put the pieces together helps Simon’s audience fit them together in their own minds.

Spoiler Alert: Everybody in Death Logs Out lives except bad guys, and those who were already dead to begin with. Well, with one exception: a supporting character who’s probably a good guy gets well-and-truly snuffed. You’ll have to read the book to find out who.

Oh, yeah. There are unreconstructed Nazis! That’s always fun! Love having unreconstructed Nazis to hate!

I guess I should say a little about the problem that drives the plot. What good is a book review if it doesn’t say anything about what drives the plot?

Our hero, Michael, was the fair-haired boy of his family. He grew up to be a highly successful plain-vanilla finance geek. He married a beautiful trophy wife with whom he lives in suburban Connecticut. Michael’s daughter, Sofia, is away attending an upscale university in South Carolina.

Michael’s biggest problem is overwork. With his wife’s grudging acquiescence, he’d taken over his black-sheep big brother Alex’s organized crime empire after Alex’s murder two years earlier.

And, you thought Thomas Crown (The Thomas Crown Affair, 1968 and 1999) was a multitasker! Michael makes Crown look single-minded. No wonder he’s getting frazzled!

But, Michael was holding it all together until one night when he was awakened by a telephone call from an old flame, whom he’d briefly employed as a bodyguard before realizing that she was a raving homicidal lunatic.

“I have your daughter,” Sindy Steele said over the phone.

Now, the obviously made-up first name “Sindy” should have warned Michael that Ms. Steele wasn’t playing with a full deck even before he got involved with her, but, at the time, the head with the brains wasn’t the head doing his thinking. She was, shall we say, “toothsome.”

Turns out that Sindy had gone off her meds, then traveled all the way from her “retirement” villa in Santorini, Greece, on an ill-advised quest to get back at Michael for dumping her.

But, that wasn’t Sofia’s worst problem. When she was nabbed, Sofia was in the midst of a call on her mobile phone from her dead uncle Alex, belatedly warning her of the danger!

While talking on the phone with her long-dead uncle confused poor Sofia, Michael knew just what was going on. For two years, he’d been having regular daily “face time” with Alex through cyberspace as he took over Alex’s syndicate. Mortophobic Alex had used his ill-gotten wealth to cheat death by uploading himself to the Web.

Now, Alex and Michael have to get Sofia back, then figure out who’s coming after Michael to steal the technology Alex had used to cheat death.

This is certainly not the first time someone has used “uploading your soul to the Web” as a plot device. Perhaps most notably, Robert Longo cast Barbara Sukowa as a cyberloaded fairy godmother trying to watch over Keanu Reeves’s character in the 1995 film Johnny Mnemonic. In Longo’s futuristic film, the technique was so common that the ghost had legal citizenship!

In the 1995 film, however, Longo glossed over how the ghost in the machine was supposed to work, technically. Johnny Mnemonic was early enough that it was futuristic sci-fi, as was Geoff Murphy’s even earlier soul-transference work Freejack (1992). Nobody in the early 1990s had heard of the supercomputing cloud, and email was high-tech. The technology for doing soul transference was as far in the imagined future as space travel was to Heinlein when he started writing about it in the 1930s.

Fast forward to the late 2010s. This stuff is no longer in the remote future. It’s in the near future. In fact, there’s very little technology left to develop before Simon’s version becomes possible. It’s what we in the test-equipment-development game used to call “specsmanship.” No technical breakthroughs needed, just advancements in “faster, wider, deeper” specifications.

That’s what makes the Michael Nicholas series grounded sci-fi! Simon has to imagine how today’s much-more-defined cloud infrastructure might both empower and limit cyberspook Alex. He also points out that what enables the phenomenon is software (as in artificial intelligence), not hardware.

Okay, I do have some bones to pick with Simon’s text. Mainly, I’m a big Strunk and White (Elements of Style) guy. Simon’s a bit cavalier about paragraphing, especially around dialog. His use of quotation marks is also a bit sloppy.

But, not so bad that it interferes with following the story.

Standard English is standardized for a reason: it makes getting ideas from the author’s head into the reader’s sooo much easier!

James Joyce needed a dummy slap! His Ulysses has rightly been called “the most difficult book to read in the English language.” It was like he couldn’t afford to buy a typewriter with a quotation key.

Enough ranting about James Joyce!

Simon’s work is MUCH better! There are only a few times I had to drop out of Death Logs Out’s world to ask, “What the heck is he trying to say?” That’s a rarity in today’s world of amateurishly edited indie novels. Simon’s story always pulled me right back into its world to find out what happens next.

The Mad Hatter’s Riddle

Raven/Desk
Lewis Carroll’s famous riddle “Why is a raven like a writing desk?” turns out to have a simple solution after all! Shutterstock

27 June 2018 – In 1865 Charles Lutwidge Dodgson, aka Lewis Carroll, published Alice’s Adventures in Wonderland, in which his Mad Hatter character posed the riddle: “Why is a raven like a writing desk?”

Somewhat later in the story Alice gave up trying to guess the riddle and challenged the Mad Hatter to provide the answer. When he couldn’t, nor could anyone else at the story’s tea party, Alice dismissed the whole thing by saying: “I think you could do something better with the time . . . than wasting it in asking riddles that have no answers.”

Since then, it has generally been believed that the riddle has, in actuality, no answer.

Modern Western thought has progressed a lot since the mid-nineteenth century, however. Specifically, two modes of thinking have gained currency that directly lead to solving this riddle: Zen and Surrealism.

I’m not going to try to give even sketchy pictures of Zen or Surrealist doctrine here. There isn’t anywhere near enough space to do either subject justice. I will, however, allude to those parts that bear on solving the Hatter’s riddle.

I’m also not going to credit Dodgson with having secretly known the answer, then hiding it from the World. There is no chance that he could have read Andre Breton’s The Surrealist Manifesto, which was published twenty-six years after Dodgson’s death. And, I’ve not been able to find a scrap of evidence that the Anglican deacon Dodgson ever seriously studied Taoism or its better-known offshoot, Zen. I’m firmly convinced that the religiously conservative Dodgson really did pen the riddle as an example of a nonsense question. He seemed fond of nonsense.

No, I’m trying to make the case that in the surreal world of imagination, there is no such thing as nonsense. There is always a viewpoint from which the absurd and seemingly illogical comes into sharp focus as something obvious.

As Obi-Wan Kenobi said in Return of the Jedi: “From a certain point of view.”

Surrealism sought to explore the alternate universe of dreams. From that point of view, Alice is a classic surrealist work. It explicitly recounts a dream Alice had while napping on a summery riverbank with her head cradled in her big sister’s lap. The surrealists, reading Alice three quarters of a century later, recognized this link, and acknowledged the mastery with which Dodgson evoked the dream world.

Unlike the mid-nineteenth-century Anglicans, however, the surrealists of the early twentieth century viewed that dream world as having as much, if not more, validity as the waking world of so-called “reality.”

Chinese Taoism informs our thinking through the melding of all forms of reality (along with everything else) into one unified whole. When allied with Indian Buddhism to form the Chinese Ch’an, or Japanese Zen, it provides a method that frees the mind to explore possible answers to, among other things, riddles like the Hatter’s, and to find just the right viewpoint from which the solution comes into sharp relief. That method is the koan: an exercise in which a master poses riddles to his (or her) students to help guide them along their paths to enlightenment.

Ultimately, the solution to the Hatter’s riddle, as I revealed in my 2016 novella Lilith, is as follows:

Question: Why is a raven like a writing desk?

Answer: They’re both not made of bauxite.

According to Collins English Dictionary – Complete & Unabridged 2012 Digital Edition, bauxite is “a white, red, yellow, or brown amorphous claylike substance comprising aluminium oxides and hydroxides, often with such impurities as iron oxides. It is the chief ore of aluminium and has the general formula: Al2O3·nH2O.”

As a claylike mineral substance, bauxite is clearly exactly the wrong material from which to make a raven. Ravens are complex, highly organized hydrocarbon-based life forms. From bauxite in its hydrated form, one could sculpt an amazingly lifelike statue of a raven. It wouldn’t, however, even be the right color. Certainly it would never exhibit the behaviors we normally expect of actual, real, live ravens.

Similarly, bauxite could be used to form an amazingly lifelike statue of a writing desk. The bauxite statue of a writing desk might even have a believable color!

Why one would want to produce a statue of a writing desk, instead of making an actual writing desk, is a question outside the scope of this blog posting.

Real writing desks, however, are best made of wood, although other materials, such as steel, fiber-reinforced plastic (FRP), and marble, have been used successfully. What makes wood such a perfect material for writing desks is its mechanically superior composite structure.

Being made of long cellulose fibers held in place by a lignin matrix, wood has wonderful anisotropic mechanical properties. It’s easy to cut and shape with the grain, while providing prodigious yield strength when stressed against the grain. Its amazing toughness when placed under tension or bending loads makes assembling wood into the kind of structure ideal for a writing desk almost too easy.

Try making that out of bauxite!

Alice was unable to divine the answer to the Hatter’s riddle because she “thought over all she could remember about ravens and writing desks.” That is exactly the kind of mistake we might expect a conservative Anglican deacon to make as well.

It is only by using the Zen method of turning the problem inside out, and the surrealist imagination’s ability to look at it as a question not of what ravens and writing desks are but of what they are not, that the riddle’s solution becomes obvious.

How Do We Know What We Think We Know?

Rene Descartes Etching
Rene Descartes shocked the world by asserting “I think, therefore I am.” In the mid-seventeenth century that was blasphemy! William Holl/Shutterstock.com

9 May 2018 – In astrophysics school, learning how to distinguish fact from opinion was a big deal.

It’s really, really hard to do astronomical experiments. Let’s face it, before Neil Armstrong stepped, for the first time, on the Moon (known as “Luna” to those who like to call things by their right names), nobody could say for certain that the big bright thing in the night sky wasn’t made of green cheese. Only after going there and stepping on the ground could Armstrong truthfully report: “Yup! Rocks and dust!”

Even then, we had to take his word for it.

Only later on, after he and his buddies brought actual samples back to be analyzed on Earth (“Terra”) could others report: “Yeah, the stuff’s rock.”

Then, the rest of us had to take their word for it!

Before that, we could only look at the Moon. We couldn’t actually go there and touch it. We couldn’t complete the syllogism:

    1. It looks like a rock.
    2. It sounds like a rock.
    3. It smells like a rock.
    4. It feels like a rock.
    5. It tastes like a rock.
    6. Ergo. It’s a rock!

Before 1969, nobody could get past the first line of the syllogism!

Based on my experience with smart people over the past nearly seventy years, I’ve come to believe that the entire green-cheese thing started out when some person with more brains than money pointed out: “For all we know, the stupid thing’s made of green cheese.”

I Think, Therefore I Am

In an essay I read a long time ago, which somebody told me was written by some guy named Rene Descartes in the seventeenth century, the author concluded that the only reason he was sure of his own existence was that he was asking the question, “Do I exist?” If he didn’t exist, who was asking the question?

That made sense to me, as did the sentence “Cogito ergo sum” (also attributed to that Descartes character), which Mr. Foley, my high-school Latin teacher, convinced me translates from the ancient Romans’ babble into English as “I think, therefore I am.”

It’s easier to believe that all this stuff is true than to invent some goofy conspiracy theory about its all having been made up just to make a fool of little old me.

Which leads us to Occam’s Razor.

Occam’s Razor

According to the entry in Wikipedia on Occam’s Razor, the concept was first expounded by “William of Ockham, a Franciscan friar who studied logic in the 14th century.” Often summarized (in Latin) as lex parsimoniae, or “the law of parsimony” (again according to that same Wikipedia entry), what it means is: when faced with alternative explanations of anything, believe the simplest.

So, when I looked up in the sky from my back yard that day in the mid-1950s, and that cute little neighbor girl tried to convince me that what I saw was a flying saucer, and even claimed that she saw little alien figures looking over the edge, I was unconvinced. It was a lot easier to believe that she was a poor observer, and only imagined the aliens.

When, the next day, I read a newspaper story (Yes, I started reading newspapers about a nanosecond after Miss Shay taught me to read in the first grade.) claiming that what we’d seen was a U.S. Navy weather balloon, my intuitive grasp of Occam’s Razor (That was, of course, long before I’d ever heard of Occam or learned that a razor wasn’t just a thing my father used to scrape hair off his face.) caused me to immediately prefer the newspaper’s explanation to the drivel Nancy Pastorello had shovelled out.

Taken together, these two concepts form the foundation for the philosophy of science. Basically, the only thing I know for certain is that I exist, and the only thing you can be certain of is that you exist (assuming, of course, you actually think, which I have to take your word for). Everything else is conjecture, and I’m only going to accept the simplest of alternative conjectures.

Okay, so, having disposed of the two bedrock principles of the philosophy of science, it’s time to look at how we know what we think we know.

How We Know What We Think We Know

The only thing I (as the only person I’m certain exists) can do is pile up experience upon experience (assuming my memories are valid), interpreting each one according to Occam’s Razor, and fitting them together in a pattern that maximizes coherence, while minimizing the gaps and resolving the greatest number of the remaining inconsistencies.

Of course, I quickly notice that other people end up with patterns that differ from mine in ways that vary from inconsequential to really serious disagreements.

I’ve managed to resolve this dilemma by accepting the following conclusion:

Objective reality isn’t.

At first blush, this sounds like ambiguous nonsense. It isn’t, though. To understand it fully, you have to go out and get a nice, hot cup of coffee (or tea, or Diet Coke, or Red Bull, or anything else that’ll give you a good jolt of caffeine), sit down in a comfortable chair, and spend some time thinking about all the possible ways those three words can be interpreted either singly or in all possible combinations. There are, according to my count, fifteen possible combinations. You’ll find that all of them can be true simultaneously. They also all pass the Occam’s Razor test.

That’s how we know what we think we know.