Death Logs Out

Death Logs Out Cover
E.J. Simon’s Death Logs Out (Endeavour Press) is the third in the Michael Nicholas series.

4 July 2018 – If you want to explore any of the really tough philosophical questions in an innovative way, the best literary forms to use are fantasy and science fiction. For example, when I decided to attack the nature of reality, I did it in a surrealist-fantasy novella entitled Lilith.

If your question involves some aspect of technology, such as the nature of consciousness from an artificial-intelligence (AI) viewpoint, you want to dive into the science-fiction genre. That’s what sci-fi great Robert A. Heinlein did throughout his career to explore everything from space travel to genetically engineered humans. My whole Red McKenna series is devoted mainly to how you can use (and misuse) robotics.

When E.J. Simon selected grounded sci-fi for his Michael Nicholas series, he most certainly made the right choice. Grounded sci-fi is the sub-genre where the author limits himself (or herself) to what is at least theoretically possible using current technology, or immediate extensions thereof. No warp drives, wormholes or anti-grav boots allowed!

In this case, we’re talking about imaginative development of artificial intelligence and squeezing a great whacking pile of supercomputing power into a very small package to create something that can best be described as chilling: the conquest of death.

The great thing about fiction genres, such as fantasy and sci-fi, is the freedom provided by the ol’ “willing suspension of disbelief.” If you went at this subject in a scholarly journal, you’d never get anything published. You’d have to prove you could do it before anybody’d listen.

I touched on this effect in the last chapter of Lilith when looking at my own past reaction to “scholarly” manuscripts shown to me by folks who forgot this important fact.

“Their ideas looked like the fevered imaginings of raving lunatics,” I said.

I went on to explain why I’d chosen that particular form for Lilith thusly: “If I write it up like a surrealist novel, folks wouldn’t think I believed it was God’s Own Truth. It’s all imagination, so using the literary technique of ‘willing suspension of disbelief’ lets me get away with presenting it without being a raving lunatic.”

Another advantage of picking a fiction genre is that it affords the ability to keep readers’ attention while filling their heads with ideas that would leave them cross-eyed if simply presented straight. The technical details presented in the Michael Nicholas series could, theoretically, be presented in a PowerPoint presentation with something like fifteen slides. Well, maybe twenty-five.

But, you wouldn’t be able to get the point across. People would start squirming in their seats around slide three. What Simon’s trying to tell us takes time to absorb. Readers have to make the mental connections before the penny will drop. Above all, they have to see it in action, and that’s just what embedding it in a mystery-adventure story does. Following the mental machinations of “real” characters as they try to put the pieces together helps Simon’s audience fit them together in their own minds.

Spoiler Alert: Everybody in Death Logs Out lives except bad guys, and those who were already dead to begin with. Well, with one exception: a supporting character who’s probably a good guy gets well-and-truly snuffed. You’ll have to read the book to find out who.

Oh, yeah. There are unreconstructed Nazis! That’s always fun! Love having unreconstructed Nazis to hate!

I guess I should say a little about the problem that drives the plot. What good is a book review if it doesn’t say anything about what drives the plot?

Our hero, Michael, was the fair-haired boy of his family. He grew up to be a highly successful plain-vanilla finance geek. He married a beautiful trophy wife with whom he lives in suburban Connecticut. Michael’s daughter, Sofia, is away attending an upscale university in South Carolina.

Michael’s biggest problem is overwork. With his wife’s grudging acquiescence, he’d taken over his black-sheep big brother Alex’s organized crime empire after Alex’s murder two years earlier.

And, you thought Thomas Crown (The Thomas Crown Affair, 1968 and 1999) was a multitasker! Michael makes Crown look single minded. No wonder he’s getting frazzled!

But, Michael was holding it all together until one night when he was awakened by a telephone call from an old flame, whom he’d briefly employed as a bodyguard before realizing that she was a raving homicidal lunatic.

“I have your daughter,” Sindy Steele said over the phone.

Now, the obviously made-up first name “Sindy” should have warned Michael that Ms. Steele wasn’t playing with a full deck even before he got involved with her, but, at the time, the head with the brains wasn’t the head doing his thinking. She was, shall we say, “toothsome.”

Turns out that Sindy had dropped off her meds, then traveled all the way from her “retirement” villa in Santorini, Greece, on an ill-advised quest to get back at Michael for dumping her.

But, that wasn’t Sofia’s worst problem. When she was nabbed, Sofia was in the midst of a call on her mobile phone from her dead uncle Alex, belatedly warning her of the danger!

While talking on the phone with her long-dead uncle confused poor Sofia, Michael knew just what was going on. For two years, he’d been having regular daily “face time” with Alex through cyberspace as he took over Alex’s syndicate. Mortophobic Alex had used his ill-gotten wealth to cheat death by uploading himself to the Web.

Now, Alex and Michael have to get Sofia back, then figure out who’s coming after Michael to steal the technology Alex had used to cheat death.

This is certainly not the first time someone has used “uploading your soul to the Web” as a plot device. Perhaps most notably, Robert Longo cast Barbara Sukowa as a cyberloaded fairy godmother trying to watch over Keanu Reeves’s character in the 1995 film Johnny Mnemonic. In Longo’s futuristic film, the technique was so common that the ghost had legal citizenship!

In the 1995 film, however, Longo glossed over how the ghost in the machine was supposed to work, technically. Johnny Mnemonic was early enough that it was futuristic sci-fi, as was Geoff Murphy’s even earlier soul-transference work Freejack (1992). Nobody in the early 1990s had heard of the supercomputing cloud, and email was high-tech. The technology for doing soul transference was as far in the imagined future as space travel was to Heinlein when he started writing about it in the 1930s.

Fast forward to the late 2010s. This stuff is no longer in the remote future. It’s in the near future. In fact, there’s very little technology left to develop before Simon’s version becomes possible. It’s what we in the test-equipment-development game used to call “specsmanship.” No technical breakthroughs needed, just advancements in “faster, wider, deeper” specifications.

That’s what makes the Michael Nicholas series grounded sci-fi! Simon has to imagine how today’s much-more-defined cloud infrastructure might both empower and limit cyberspook Alex. He also points out that what enables the phenomenon is software (as in artificial intelligence), not hardware.

Okay, I do have some bones to pick with Simon’s text. Mainly, I’m a big Strunk and White (Elements of Style) guy. Simon’s a bit cavalier about paragraphing, especially around dialog. His use of quotation marks is also a bit sloppy.

But, not so bad that it interferes with following the story.

Standard English is standardized for a reason: it makes getting ideas from the author’s head into the reader’s sooo much easier!

James Joyce needed a dummy slap! His Ulysses has rightly been called “the most difficult book to read in the English language.” It was like he couldn’t afford to buy a typewriter with a quotation key.

Enough ranting about James Joyce!

Simon’s work is MUCH better! There are only a few times I had to drop out of Death Logs Out’s world to ask, “What the heck is he trying to say?” That’s a rarity in today’s world of amateurishly edited indie novels. Simon’s story always pulled me right back into its world to find out what happens next.

The Mad Hatter’s Riddle

Raven/Desk
Lewis Carroll’s famous riddle “Why is a raven like a writing desk?” turns out to have a simple solution after all! Shutterstock

27 June 2018 – In 1865 Charles Lutwidge Dodgson, aka Lewis Carroll, published Alice’s Adventures in Wonderland, in which his Mad Hatter character posed the riddle: “Why is a raven like a writing desk?”

Somewhat later in the story Alice gave up trying to guess the riddle and challenged the Mad Hatter to provide the answer. When he couldn’t, nor could anyone else at the story’s tea party, Alice dismissed the whole thing by saying: “I think you could do something better with the time . . . than wasting it in asking riddles that have no answers.”

Since then, it has generally been believed that the riddle has, in actuality, no answer.

Modern Western thought has progressed a lot since the mid-nineteenth century, however. Specifically, two modes of thinking have gained currency that directly lead to solving this riddle: Zen and Surrealism.

I’m not going to try to give even sketchy pictures of Zen or Surrealist doctrine here. There isn’t anywhere near enough space to do either subject justice. I will, however, allude to those parts that bear on solving the Hatter’s riddle.

I’m also not going to credit Dodgson with having surreptitiously known the answer, then hiding it from the World. There is no chance that he could have read Andre Breton‘s The Surrealist Manifesto, which was published twenty-six years after Dodgson’s death. And, I’ve not been able to find a scrap of evidence that the Anglican deacon Dodgson ever seriously studied Taoism or its better-known offshoot, Zen. I’m firmly convinced that the religiously conservative Dodgson really did pen the riddle as an example of a nonsense question. He seemed fond of nonsense.

No, I’m trying to make the case that in the surreal world of imagination, there is no such thing as nonsense. There is always a viewpoint from which the absurd and seemingly illogical comes into sharp focus as something obvious.

As Obi-Wan Kenobi said in Return of the Jedi: “From a certain point of view.”

Surrealism sought to explore the alternate universe of dreams. From that point of view, Alice is a classic surrealist work. It explicitly recounts a dream Alice had while napping on a summery hillside with her head cradled in her big sister’s lap. The surrealists, reading Alice three quarters of a century later, recognized this link, and acknowledged the mastery with which Dodgson evoked the dream world.

Unlike the mid-nineteenth-century Anglicans, however, the surrealists of the early twentieth century viewed that dream world as having as much, if not more, validity as the waking world of so-called “reality.”

Chinese Taoism informs our thinking through the melding of all forms of reality (along with everything else) into one unified whole. When allied with Indian Buddhism to form the Chinese Ch’an, or Japanese Zen, it provides a method that frees the mind to explore possible answers to, among other things, riddles like the Hatter’s, and find just the right viewpoint where the solution comes into sharp relief. That method centers on the koan: a riddle a master poses to his (or her) students to help guide them along their paths to enlightenment.

Ultimately, the solution to the Hatter’s riddle, as I revealed in my 2016 novella Lilith, is as follows:

Question: Why is a raven like a writing desk?

Answer: They’re both not made of bauxite.

According to Collins English Dictionary – Complete & Unabridged 2012 Digital Edition, bauxite is “a white, red, yellow, or brown amorphous claylike substance comprising aluminium oxides and hydroxides, often with such impurities as iron oxides. It is the chief ore of aluminium and has the general formula: Al2O3·nH2O.”

As a claylike mineral substance, bauxite is clearly exactly the wrong material from which to make a raven. Ravens are complex, highly organized hydrocarbon-based life forms. In its hydrated form, one could form an amazingly lifelike statue of a raven. It wouldn’t, however, even be the right color. Certainly it would never exhibit the behaviors we normally expect of actual, real, live ravens.

Similarly, bauxite could be used to form an amazingly lifelike statue of a writing desk. The bauxite statue of a writing desk might even have a believable color!

Why one would want to produce a statue of a writing desk, instead of making an actual writing desk, is a question outside the scope of this blog posting.

Real writing desks, however, are best made of wood, although other materials, such as steel, fiber-reinforced plastic (FRP), and marble, have been used successfully. What makes wood such a perfect material for writing desks is its mechanically superior composite structure.

Being made of long cellulose fibers held in place by a lignin matrix, wood has wonderful anisotropic mechanical properties. It’s easy to cut and shape with the grain, while providing prodigious yield strength when stressed against the grain. Its amazing toughness when placed under tension or bending loads makes assembling wood into the kind of structure ideal for a writing desk almost too easy.

Try making that out of bauxite!

Alice was unable to divine the answer to the Hatter’s riddle because she “thought over all she could remember about ravens and writing desks.” That is exactly the kind of mistake we might expect a conservative Anglican deacon to make as well.

It is only by using Zen methods of turning the problem inside out and surrealist imagination’s ability to look at it as a question, not of what ravens and writing desks are, but what they are not, that the riddle’s solution becomes obvious.

Why Not Twitter?

Tweety birds
Character limitations mean Twitter messages have room to carry essentially no information. Shutterstock Image

20 June 2018 – I recently received a question: “Do you use Twitter?” The sender was responding positively to a post on this blog. My response was a terse: “I do not use Twitter.”

That question deserved a more extensive response. Well, maybe not “deserved,” since this post has already exceeded the maximum 280 characters allowed in a Twitter message. In fact, not counting the headline, dateline or image caption, it’s already 431 characters long!

That gives you an idea how much information you can cram into 280 characters. Essentially none. That’s why Twitter messages make their composers sound like airheads.

The average word in the English language is six characters long, not counting the spaces. So, to say one word, you need (on average) seven characters. If you’re limited to 280 characters, that means you’re limited to 280/7 = 40 words. A typical posting on this blog is roughly 1,300 words (this posting, by the way, is much shorter). A typical page in a paperback novel contains about 300 words. The first time I agreed to write a book for print, the publisher warned me that the manuscript needed to be at least 80,000 words to be publishable.
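Just to make that arithmetic concrete, here’s a quick back-of-the-envelope sketch in Python. It simply re-runs the round numbers quoted above (the six-character average word, the 280-character cap, and the various word counts); nothing here is measured data.

```python
# Back-of-the-envelope tweet arithmetic, using the round figures quoted above:
# a six-character average English word plus one trailing space.
AVG_WORD_LEN = 6      # characters per word (the figure assumed in this post)
SEPARATOR = 1         # one space after each word
TWEET_LIMIT = 280     # Twitter's character cap

words_per_tweet = TWEET_LIMIT // (AVG_WORD_LEN + SEPARATOR)
print(f"Words per tweet: about {words_per_tweet}")   # -> about 40

# The same arithmetic applied to the other lengths mentioned in this post.
for label, words in [("typical blog post", 1300),
                     ("paperback page", 300),
                     ("print-book manuscript", 80000)]:
    chars = words * (AVG_WORD_LEN + SEPARATOR)
    print(f"{label}: ~{words:,} words, or roughly {chars // TWEET_LIMIT} tweets")
```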

When I first started writing for business-to-business magazines, a typical article was around 2,500 words. We figured that was about right if you wanted to teach anybody anything useful. Not long afterward, when I’d (surprisingly quickly) climbed the journalist ranks to Chief Editor, I expressed the goal for any article written in our magazine (the now defunct Test & Measurement World) in the following way:

“Imagine an engineer facing a problem in the morning and not knowing what to do. If, during lunch, that engineer reads an article in our magazine and goes back to work knowing how to solve the problem, then we’ve done our job.”

That takes about 2,500 words. Since then, pressure from advertisers pushed us to write shorter articles in the 1,250-word range. Of course, all advertisers really want any article to say is, “BUY OUR STUFF!”

That is NOT what business-to-business readers want articles to say. They want articles that tell them how to solve their problems. You can see who publishers listened to.

Blog postings are, essentially, stand-alone editorials.

From about day one as Chief Editor, I had to write editorials. I’d learned about editorial writing way back in Mrs. Langley’s eighth grade English class. I doubt Mrs. Langley ever knew how much I learned in her class, but it was a lot. Including how to write an editorial.

A successful editorial starts out introducing some problem, then explains little things like why it’s important and what it means to people like the reader, then tells the reader what to do about it. That last bit is called the “Call to Action.” It’s the most important part; everything else is there to motivate it.

If your “problem” is easy to explain, you can often get away with an editorial 500 words long. Problems that are more complex or harder to explain take more words. Editorials can often reach 1,500 words.

If it can’t be done in 1,500 words, find a different problem to write your editorial about.

Now, magazine designers generally provide room for 500-1,000 word editorials, and editors generally work hard to stay within that constraint. Novice editors quickly learn that it takes a lot more work to write short than to write long.

Generally, writers start by dumping vast quantities of words into their manuscripts just to get the ideas out there, recorded in all their long-winded glory. Then, they go over that first draft, carefully searching for the most concise way to say what they want to say that still makes sense. Then, they go back and throw out all the ideas that really didn’t add anything to their editorial in the first place. By then, they’ve slashed the word count to close to what it needs to be.

After about five passes through the manuscript, the writer runs out of ways to improve the text, and hands it off to a production editor, who worries about things like grammar and spelling, as well as cramming it into the magazine space available. Then the managing editor does basically the same thing. Then the Chief Editor gets involved, saying “Omygawd, what is this writer trying to tell me?”

Finally, after at least two rounds through this cycle, the article ends up doing its job (telling the readers something worth knowing) in the space available, or it gets “killed.”

“Killed” varies from just a mild “We’ll maybe run it sometime in the future,” to the ultimate “Stake Through The Heart,” which means it’ll never be seen in print.

That’s the process any piece of professional writing goes through. It takes days or weeks to complete, and it guarantees compact, dense, information-packed reading material. And, the shorter the piece, the more work it takes to pack the information in.

Think of cramming ten pounds of bovine fecal material into a five pound bag!

Is that how much work goes into the average Twitter feed?

I don’t think so! The Twitter feeds I’ve seen sound like something written on a bathroom wall. They look like they were dashed off as fast as two fingers can type them, and they make their authors sound like illiterates.

THAT’s why I don’t use Twitter.

This blog posting, by the way, is a total of 5,415 characters long.

What If They Gave a War, But Nobody Noticed?

Cyberwar
World War III is being fought in cyberspace right now, but most of us seem to be missing it! Oliver Denker/Shutterstock

13 June 2018 – Ever wonder why Kim Jong Un is so willing to talk about giving up his nuclear arsenal? Sort-of-President Donald Trump (POTUS) seems to think it’s because economic sanctions are driving North Korea (officially the Democratic People’s Republic of Korea, or DPRK) to the financial brink.

That may be true, but it is far from the whole story. As usual, the reality star POTUS is stuck decades behind the times. The real World War III won’t have anything to do with nukes, and it’s started already.

The threat of global warfare using thermonuclear weapons was panic inducing to my father back in the 1950s and 1960s. Strangely, however, my superbrained mother didn’t seem very worried at the time.

By the 1980s, we were beginning to realize what my mother seemed to know instinctively — that global thermonuclear war just wasn’t going to happen. That kind of war leaves such an ungodly mess that no even-marginally-sane person would want to win one. The winners would be worse off than the losers!

The losers would join the gratefully dead, while the winners would have to live in the mess!

That’s why we don’t lose sleep at night knowing that the U.S., Russia, China, India, Pakistan, and, in fact, most countries in the first and second worlds, have access to thermonuclear weapons. We just worry about third-world toilets (to quote Danny DeVito’s character in The Jewel of the Nile) run by paranoid homicidal maniacs getting their hands on the things. Those guys are the only ones crazy enough to ever actually use them!

We only worried about North Korea developing nukes when Kim Jong Un was acting like a total whacko. Since he stopped his nuclear development program (because his nuclear lab accidentally collapsed under a mountain of rubble), it’s begun looking like he was no more insane than the leaders of Leonard Wibberley’s fictional nation-state, the Duchy of Grand Fenwick.

In Wibberley’s 1956 novel The Mouse That Roared, the Duchy’s leaders all breathed a sigh of relief when their captured doomsday weapon, the Q-Bomb, proved to be a dud.

Yes, there is a hilarious movie to be made documenting the North Korean nuclear and missile programs.

Okay, so we’ve disposed of the idea that World War III will be a nuclear holocaust. Does that mean, as so many starry-eyed astrophysicists imagined in the late 1940s, the end of war?

Fat f-ing chance!

The winnable war in the Twenty-First Century is one fought in cyberspace. In fact, it’s going on right now. And, you’re missing it.

Cybersecurity and IT expert Theresa Payton, CEO of Fortalice Solutions, asserts that suspected North Korean hackers have been conducting offensive cyber operations on financial institutions amid discussions between Washington and Pyongyang on a possible nuclear summit between President Trump and Kim Jong Un.

“The U.S. has been able to observe North Korean-linked hackers targeting financial institutions in order to steal money,” she says. “This isn’t North Korea’s first time meddling in serious hacking schemes. This time, it’s likely because the international economic sanctions have hurt them in their wallets and they are desperate and strapped for cash.”

There is a long laundry list of cyberattacks that have been perpetrated against U.S. and European interests, including infrastructure, corporations and individuals.

“One of N. Korea’s best assets … is to flex it’s muscle using it’s elite trained cyber operations,” Payton asserts. “Their cyber weapons can be used to fund their government by stealing money, to torch organizations and governments that offend them (look at Sony hacking), to disrupt our daily lives through targeting critical infrastructure, and more. The Cyber Operations of N. Korea is a powerful tool for the DPRK to show their displeasure at anything and it’s the best bargaining chip that Kim Jong Un has.”

Clearly, DPRK is not the only bad state actor out there. Russia has long been in the news using various cyberwar tactics against the U.S., Europe and others. China has also been blamed for cyberattacks. In fact, cyberwarfare is a cheap, readily available alternative to messy and expensive nuclear weapons for anyone with Internet access (meaning, just about everybody) and wishing to do anybody harm, including us.

“You can take away their Nukes,” Payton points out, “but you will have a hard time dismantling their ability to attack critical infrastructure, businesses and even civilians through cyber operations.”

Programming Notes: I’ve been getting a number of comments on this blog each day, and it looks like we need to set some ground rules. At least, I need to be explicit about things I will accept and things I won’t:

  • First off, remember that this isn’t a social media site. When you make a comment, it doesn’t just spill out into the blog site. Comments are sequestered until I go in and approve or reject them. So far, the number of comments is low enough that I can go through and read each one, but I don’t do it every day. If I did, I’d never get any new posts written! Please be patient.
  • Do not embed URLs to other websites in comments. I’ll strip them out even if I approve your comment otherwise. The reason is that I don’t have time to vet every URL, and I stick to journalistic standards, which means I don’t allow anything in the blog that I can’t verify. There are no exceptions.
  • This is an English language site ONLY. Comments in other languages are immediately deleted. (For why, see above.)
  • Use Standard English written in clear, concise prose. If I have trouble understanding what you’re trying to say, I won’t give your comment any space. If you can’t write a cogent English sentence, take an ESL writing course!

The Case for Free College

College vs. Income
While the need for skilled workers to maintain our technology edge has grown, the cost of training those workers has grown astronomically.

6 June 2018 – We, as a nation, need to extend the present system that provides free, universal education up through high school to cover college to the baccalaureate level.

DISCLOSURE: Teaching is my family business. My father was a teacher. My mother was a teacher. My sister’s first career was as a teacher. My brother-in-law was a teacher. My wife is a teacher. My son is a teacher. My daughter-in-law is a teacher. Most of my aunts and uncles and cousins are or were teachers. I’ve spent a lot of years teaching at the college level, myself. Some would say that I have a conflict of interest when covering developments in the education field. Others might argue that I know whereof I speak.

Since WW II, there has been a growing realization that the best careers go to those with at least a bachelor’s degree in whatever field they choose. Yet, at the same time, society has (perhaps inadvertently, although I’m not naive enough to think there isn’t a lot of blame to go around) erected a monumental barrier to anyone wanting to get an education. Since the mid-1970s, the cost of higher education has vastly outstripped the ability of most people to pay for it.

In 1975, the price of attendance in college was about one fifth of the median family income (see graph above). In 2016, it was over a third. That makes sending kids to college a whole lot harder than it used to be. If your family happens to have less than median household income, that barrier looks even higher, and is getting steeper.

MORE DISCLOSURE: The reason I don’t have a Ph.D. today is that two years into my Aerospace Engineering Ph.D. program, Arizona State University jacked up the tuition beyond my (not inconsiderable at the time) ability to pay.

I’d like everyone in America to consider the following propositions:

  1. A bachelor’s degree is the new high-school diploma;
  2. Having an educated population is a requirement for our technology-based society;
  3. Without education, upward mobility is nearly impossible;
  4. Ergo, it is a requirement for our society to ensure that every citizen capable of getting a college degree gets one.

EVEN MORE DISCLOSURE: Horace Mann, often credited as the Father of Public Education, was born in the same town (Franklin, MA) that I was, and our family charity is a scholarship fund dedicated to his memory.

About Mann’s intellectual progressivism, the historian Ellwood P. Cubberley said: “No one did more than he to establish in the minds of the American people the conception that education should be universal, non-sectarian, free, and that its aims should be social efficiency, civic virtue, and character, rather than mere learning or the advancement of education ends.” (source: Wikipedia)

The Wikipedia article goes on to say: “Arguing that universal public education was the best way to turn unruly American children into disciplined, judicious republican citizens, Mann won widespread approval from modernizers, especially in the Whig Party, for building public schools. Most states adopted a version of the system Mann established in Massachusetts, especially the program for normal schools to train professional teachers.”

That was back in the mid-nineteenth century. At that time, the United States was in the midst of a shift from an agrarian to an industrial economy. We’ve since completed that transition and are now shifting to an information-based economy. In the future, full participation in the workforce will require everyone to have at least a bachelor’s degree.

So, when progressive politicians, like Bernie Sanders, make noises about free universal college education, YOU should listen!

It’s about time we, as a society, owned up to the fact that times have changed a lot since the mid-nineteenth century. At that time, universal free education to about junior high school level was considered enough. Since then, it was extended to high school. It’s time to extend it further to the bachelor’s-degree level.

That doesn’t mean shutting down Ivy League colleges. For those who can afford them, private and for-profit colleges can provide superior educational experiences. But, having publicly funded four-year colleges offer tuition-free education to everyone has become a strategic imperative.

How Do We Know What We Think We Know?

Rene Descartes Etching
Rene Descartes shocked the world by asserting “I think, therefore I am.” In the mid-seventeenth century that was blasphemy! William Holl/Shutterstock.com

9 May 2018 – In astrophysics school, learning how to distinguish fact from opinion was a big deal.

It’s really, really hard to do astronomical experiments. Let’s face it, before Neil Armstrong stepped, for the first time, on the Moon (known as “Luna” to those who like to call things by their right names), nobody could say for certain that the big bright thing in the night sky wasn’t made of green cheese. Only after going there and stepping on the ground could Armstrong truthfully report: “Yup! Rocks and dust!”

Even then, we had to take his word for it.

Only later on, after he and his buddies brought actual samples back to be analyzed on Earth (“Terra”) could others report: “Yeah, the stuff’s rock.”

Then, the rest of us had to take their word for it!

Before that, we could only look at the Moon. We couldn’t actually go there and touch it. We couldn’t complete the syllogism:

    1. It looks like a rock.
    2. It sounds like a rock.
    3. It smells like a rock.
    4. It feels like a rock.
    5. It tastes like a rock.
    6. Ergo, it’s a rock!

Before 1969, nobody could get past the first line of the syllogism!

Based on my experience with smart people over the past nearly seventy years, I’ve come to believe that the entire green-cheese thing started out when some person with more brains than money pointed out: “For all we know, the stupid thing’s made of green cheese.”

I Think, Therefore I Am

There’s an essay I read a long time ago, which somebody told me was written by some guy named Rene Descartes in the seventeenth century. It concluded that the only reason he (the author) was sure of his own existence was that he was asking the question, “Do I exist?” If he didn’t exist, who was asking the question?

That made sense to me, as did the sentence “Cogito ergo sum” (also attributed to that Descartes character), which, as Mr. Foley, my high-school Latin teacher, convinced me, translates from the ancient Romans’ babble into English as “I think, therefore I am.”

It’s easier to believe that all this stuff is true than to invent some goofy conspiracy theory about it all having been made up just to make a fool of little old me.

Which leads us to Occam’s Razor.

Occam’s Razor

According to the entry in Wikipedia on Occam’s Razor, the concept was first expounded by “William of Ockham, a Franciscan friar who studied logic in the 14th century.” Often summarized (in Latin) as lex parsimoniae, or “the law of parsimony” (again according to that same Wikipedia entry), what it means is: when faced with alternative explanations of anything, believe the simplest.

So, when I looked up in the sky from my back yard that day in the mid-1950s, and that cute little neighbor girl tried to convince me that what I saw was a flying saucer, and even claimed that she saw little alien figures looking over the edge, I was unconvinced. It was a lot easier to believe that she was a poor observer, and only imagined the aliens.

When, the next day, I read a newspaper story (Yes, I started reading newspapers about a nanosecond after Miss Shay taught me to read in the first grade.) claiming that what we’d seen was a U.S. Navy weather balloon, my intuitive grasp of Occam’s Razor (That was, of course, long before I’d ever heard of Occam or learned that a razor wasn’t just a thing my father used to scrape hair off his face.) caused me to immediately prefer the newspaper’s explanation to the drivel Nancy Pastorello had shovelled out.

Taken together, these two concepts form the foundation for the philosophy of science. Basically, the only thing I know for certain is that I exist, and the only thing you can be certain of is that you exist (assuming, of course, you actually think, which I have to take your word for). Everything else is conjecture, and I’m only going to accept the simplest of alternative conjectures.

Okay, so, having laid out the two bedrock principles of the philosophy of science, it’s time to look at how we know what we think we know.

How We Know What We Think We Know

The only thing I (as the only person I’m certain exists) can do is pile up experience upon experience (assuming my memories are valid), interpreting each one according to Occam’s Razor, and fitting them together in a pattern that maximizes coherence, while minimizing the gaps and resolving the greatest number of the remaining inconsistencies.

Of course, I quickly notice that other people end up with patterns that differ from mine in ways that vary from inconsequential to really serious disagreements.

I’ve managed to resolve this dilemma by accepting the following conclusion:

Objective reality isn’t.

At first blush, this sounds like ambiguous nonsense. It isn’t, though. To understand it fully, you have to go out and get a nice, hot cup of coffee (or tea, or Diet Coke, or Red Bull, or anything else that’ll give you a good jolt of caffeine), sit down in a comfortable chair, and spend some time thinking about all the possible ways those three words can be interpreted either singly or in all possible combinations. There are, according to my count, fifteen possible combinations. You’ll find that all of them can be true simultaneously. They also all pass the Occam’s Razor test.

That’s how we know what we think we know.

STEM Careers for Women

Woman engineer
Women have more career options than just STEM. Courtesy Shutterstock.

6 April 2018 – Folks are going to HATE what I have to say today. I expect to get comments accusing me of being a slug-brained, misogynist reactionary imbecile. So be it; I often say things other people don’t want to hear, and I’m often accused of being a slug-brained imbecile. I’m sometimes accused of being reactionary.

I don’t think I’m usually accused of being misogynist, so that’ll be a new one.

I’m not often accused of being misogynist because I’ve got pretty good credentials in the promoting-women’s-interests department. I try to pay attention to what goes on in my women friends’ heads. I’m more interested in the girl inside than in the outside. Thus, I actually do care about what’s important to them.

Historically, I’ve known a lot of exceptional women, and not a few who were not-so-exceptional, and, of course, I’ve met my share of morons. But, I’ve tried to understand what was going on in all their heads because I long ago noticed that just about everybody I encounter is able to teach me something if I pay attention.

So much for the preliminaries.

Getting more to the point of this blog entry, last week I listened to a Wilson Center webcast entitled “Opening Doors in Glass Walls for Women in STEM.” I’d hoped I might have something to add to the discussion, but I didn’t. I also didn’t hear much in the “new ideas” department, either. It was mostly “woe is us ’cause women get paid less than men,” and “we’ve made some progress, but there still aren’t many women in STEM careers,” and stuff like that.

Okay. For those who don’t already know, STEM is an acronym for “Science, Technology, Engineering and Math.” It’s a big thing in education and career-development circles because it’s critical to our national technological development.

Without going into the latest statistics (’cause I’m too lazy this morning to look ’em up), it’s pretty well acknowledged that women get paid a whole lot less than men for doing the same jobs, and a whole lot less than 50% of STEM workers are women despite their making up half the available workforce.

I won’t say much about the pay differential, except to assert that paying someone less than their efforts are worth is just plain dumb. It’s dumb for the employer because good talent will vote with their feet for higher pay. It’s dumb for the employee because he, she, or it should vote with their feet by walking out the door to look for a more enlightened employer. It doesn’t matter whether you are a man or a woman, you don’t want to be dependent for your income on a mismanaged company!

Enough said about the pay differential. What I want to talk about here is the idea that, since half the population is women, half the STEM workers should be women. I’m going to assert that’s equally dumb!

I do NOT assert that there is anything about women that makes them unsuited to STEM careers. It is true that women are significantly smaller physically (the last time I checked, the average American woman was 5’4″ tall, while the average American man was 5’10” tall with everything else more or less scaled to match), but that makes no nevermind for a STEM career. STEM jobs make demands on what’s between the ears, not what’s between the shoulders.

With regard to women’s brains’ suitability for STEM jobs, experience has shown me that there’s no significant (to a STEM career) difference between them and male brains. Women are every bit as adept at independent thinking, puzzle solving, memory tasks, and just about any measurable talent that might make a difference to a STEM worker. I’ve seen no study that showed women to be inferior to men with respect to mathematical or abstract reasoning, either. In fact, some studies have purported to show the reverse.

On the other hand, as far as I know, EVERY culture traditionally separates jobs into “women’s work” and “men’s work.” Being a firm believer in Darwinian evolution, I don’t argue with Mommy Nature’s way, but do ask “Why?”

Many decades ago, my advanced lab instructor asserted that “tradition is the sum total of things our ancestors over the past four million years have found to work.” I completely agree with him, with the important proviso that things change.

Four million years ago, our ancestors didn’t have ceramic tile floors in their condos, nor did they have cars with remote keyless entry locks. It was a lot tougher for them than it is for us, and survival was far less assured.

They were the guys who decided to have men make the hand axes and arrowheads, and that women should weave the baskets and make the soup. Most importantly for our discussion, they decided women should change the diapers.

Fast forward four million years, and we’re still doing the same things, more or less. Things, however, have changed, and we’re now having to rethink that division of labor.

Some jobs, like digging ditches, still require physical prowess, which makes them more suited to men than women. I’m ignoring (but not forgetting) all the manual labor women are asked to do all over the world. That’s not what I’m talking about here. I’m talking about STEM jobs, which DON’T require physical prowess.

So, why don’t women go after those cushy, high-paying STEM jobs, and, equally significant, once they have one of those jobs, why is it so hard to keep them there? One of the few things that came out of last week’s webinar (Remember this all started with my attending that webinar?) was the point that women leave STEM careers in droves. They abandon their hard-won STEM careers and go off to do something else.

The point I want to make with this essay is to suggest that maybe the reason women are underrepresented in STEM careers is that they actually have more options than men. Most importantly, they have the highly attractive (to them) option of the “homemaker” career.

Current thinking among the liberal intelligentsia is that “homemaker” is not much of a career. I simply don’t accept that idea. Housewife is just as important a job as, say, truck driver, bank president, or technology journalist. So, pooh!

The homemaker option is not open to most men. We may be willing to help out around the house, and may even feel driven to do our part, or at least try to find some part that could be ours to do. But, I can’t think of one of my male friends who’d be comfortable shouldering the whole responsibility.

I assert that four million years of evolution has wired up human brains for sexual dimorphism with regard to “guy jobs” and “girl jobs.” It just feels right for guys to do jobs that seem to be traditionally guy things and for women to do jobs that seem to be traditionally theirs.

Now, throughout most of evolutionary time STEM jobs pretty much didn’t exist. One of the things our ancestors didn’t have four million years ago was trigonometry. In fact, they probably struggled with basic number theory. I did an experiment in high school that indicated that the crows in my back yard couldn’t count beyond two. Australopithecus (or Paranthropus) was probably a better mathematician than that, but likely not by much.

So, one of the things we have now that has avoided being shaped by natural selection pressure is the option to pursue a STEM career. It’s pretty much evolutionarily neutral. STEM careers are probably equally attractive (or repulsive) to women and men.

I mention “repulsive” for a very good reason. Preparing oneself for a STEM career is hard.

Mathematics, especially, is one of the few subjects that give many, if not most, people phobias. Frankly, arithmetic lost me on the second day of first grade when Miss Shay passed out a list of addition tables and told us to memorize it. I thought the idea of arithmetic was a gas. Memorizing tables, however, was not on my To Do list. I expect most people feel the same way.

Learning STEM subjects involves a $%^-load of memorizing! So, it’s no wonder girls would rather play with dolls (and boys with trucks) than study STEM subjects. Eventually, playing with trucks leads to STEM careers. Playing with dolls does not.

Grown up girls find they have the option of playing with dolls as a career. Grown up boys don’t. So, choosing a STEM career is something grown-up boys really want to do if they can, but for girls, not so much. They can find something to do that’s more satisfying with less work.

So, they vote with their feet. THAT may be why it’s so hard to get women into STEM careers in the first place, and then to keep them there for the long haul.

Before you start having apoplectic fits imagining that I’m making a broad generalization that females don’t like STEM careers, recognize that what I’m describing IS a broad theoretical generalization. It’s meant to be.

In the real world there are 300 million people in the United States, half of whom are women, and each and every one of them gets to make a separate career choice. Every one of them chooses based on what they want to do with their life. Some choose STEM careers. Some don’t.

My point is that you shouldn’t just assume that half of STEM job slots ought to be filled by women. Half of potential candidates may be women, but a fair fraction of them might prefer to go play somewhere else. It may simply be that women have more alternatives than men do. You may end up with more men slotting into those STEM jobs because they have less choice.

You know, being a housewife ain’t such a bad gig!

And, You Thought Global Warming was a BAD Thing?

Ice skaters on the frozen Thames river in 1677

10 March 2017 – ’Way back in the 1970s, when I was an astrophysics graduate student, I was hot on the trail of why solar prominences have the shapes we observe them to have. Being a good little budding scientist, I spent most of my waking hours in the library poring over solar research, from the (at that time barely existing) current literature back to the beginning of time. Or, at least to the invention of the telescope.

The fact that solar prominences are closely associated with sunspots led me to studying historical measurements of sunspots. Of course, I quickly ran across two well-known anomalies known as the Maunder and Sporer minima. These were periods in the fifteenth through seventeenth centuries when sunspots practically disappeared for decades at a time. Astronomers of the time commented on it, but hadn’t a clue as to why.

The idea that sunspots could disappear for extended periods is not really surprising. The Sun is well known to be a variable star whose surface activity varies on a more-or-less regular 11-year cycle (22 years if you count the fact that the magnetic polarity reverses after every minimum). The idea that any such oscillator can drop out once in a while isn’t hard to swallow.

Besides, when Mommy Nature presents you with an observable fact, it’s best not to doubt the fact, but to ask “Why?” That leads to much more fun research and interesting insights.

More surprising (at the time) was the observed correlation between the Maunder and Sporer minima and a period of anomalously cold temperatures throughout Europe known as the “Little Ice Age.” Interesting effects of the Little Ice Age included the invention of buttons to make winter garments more effective, advances of glaciers in the mountains, ice skating on rivers that previously never froze at all, and the abandonment of Viking settlements in Greenland.

And, crop failures. Can’t forget crop failures! Marie Antoinette’s famous “Let ’em eat cake” faux pas was triggered by consistent failures of the French wheat harvest.

The moral of the Little Ice Age story is:

Global Cooling = BAD

The converse conclusion:

Global Warming = GOOD

seems less well documented. A Medieval Warm Period from about 950-1250 did correlate with fairly active times for European culture. Similarly, the Roman Warm Period (250 BCE – 400 CE) saw the rise of the Roman civilization. So, we can tentatively conclude that global warming is generally NOT bad.

Sunspots as Markers

The reason seeing sunspot minima coincide with cool temperatures was surprising was that at the time astronomers fantasized that sunspots were like clouds that blocked radiation leaving the Sun. Folks assumed that more clouds meant more blocking of radiation, and cooler temperatures on Earth.

Careful measurements quickly put that idea into its grave with a stake through its heart! The reason is another feature of sunspots, which the theory conveniently forgot: they’re surrounded by relatively bright areas (called faculae) that pump out radiation at an enhanced rate. It turns out that the faculae associated with a sunspot easily make up for the dimming effect of the spot itself.

That’s why we carefully measure details before jumping to conclusions!

Anyway, the best solar-output (irradiance) research I was able to find was by Charles Greeley Abbot, who, as Director of the Smithsonian Astrophysical Observatory from 1907 to 1944, assembled an impressive decades-long series of meticulous measurements of the total radiation arriving at Earth from the Sun. He also attempted to correlate these measurements with weather records from various cities.

Blinded by a belief that solar activity (as measured by sunspot numbers) would anticorrelate with solar irradiation and therefore Earthly temperatures, he was dismayed to be unable to make sense of the combined data sets.

By simply throwing out the assumptions, I was quickly able to see that the only correlation in the data was that temperatures more-or-less positively correlated with sunspot numbers and solar irradiation measurements. The resulting hypothesis was that sunspots are a marker for increased output from the Sun’s core. Below a certain level there are no spots. As output increases above the trigger level, sunspots appear and then increase with increasing core output.
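For anyone curious what “throwing out the assumptions” looks like in practice, here’s a minimal sketch of that kind of correlation check. The arrays are invented placeholder values, not Abbot’s measurements or any real weather records; only the method, a plain Pearson correlation between the series, is the point.

```python
import numpy as np

# Illustrative only: these are made-up placeholder series, NOT Abbot's data.
# The idea is to drop any assumed anticorrelation and simply ask how the
# three series move together.
sunspot_number = np.array([5, 20, 45, 80, 60, 30, 10, 3, 15, 50])    # hypothetical yearly means
irradiance = np.array([1360.2, 1360.5, 1360.9, 1361.3, 1361.1,
                       1360.7, 1360.3, 1360.1, 1360.4, 1361.0])      # hypothetical W/m^2
temperature = np.array([13.9, 14.0, 14.1, 14.3, 14.2,
                        14.1, 13.9, 13.8, 14.0, 14.2])               # hypothetical deg C

# Pearson correlation coefficients between each pair of series.
r_spots_irradiance = np.corrcoef(sunspot_number, irradiance)[0, 1]
r_spots_temperature = np.corrcoef(sunspot_number, temperature)[0, 1]

print(f"sunspots vs. irradiance:  r = {r_spots_irradiance:+.2f}")
print(f"sunspots vs. temperature: r = {r_spots_temperature:+.2f}")
```

A positive r for both pairs is the pattern described above: spots, irradiance, and temperature rising and falling together.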

The conclusion is that the Little Ice Age corresponded with a long period of reduced solar-core output, and the Maunder and Sporer minima are shorter periods when the core output dropped below the sunspot-trigger level.

So, we can conclude (something astronomers have known for decades if not centuries) that the Sun is a variable star. (The term “solar constant” is an oxymoron.) Second, we can conclude that variations in solar output have a profound effect on Earth’s climate. Those are neither surprising nor in doubt.

We’re also on fairly safe ground to say that (within reason) global warming is a good thing. At least it’s pretty clearly better than global cooling!