Do You Really Want a Robotic Car?

Robot Driver
Sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car.” Mopic/Shutterstock

15 August 2018 – Many times in my blogging career I’ve gone on a rant about the three Ds of robotics. These are “dull, dirty, and dangerous.” They relate to the question, which is not asked often enough, “Do you want to build an automated system to do that task?”

The reason the question is not asked enough is that it needs to be asked whenever anyone looks into designing a system (whether manual or automated) to do anything. The possibility that anyone ever sets up a system to do anything without first asking that question means that it’s not asked enough.

When asking the question, getting a hit on any one of the three Ds tells you to at least think about automating the task. Getting a hit on two of them should make you think that your task is very likely to be ripe for automation. If you hit on all three, it’s a slam dunk!

When we look into developing automated vehicles (AVs), we get hits on “dull” and “dangerous.”
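
That scoring rule is simple enough to write down. Here’s a minimal sketch in Python (my own illustration, not anything formal; the example tasks and their ratings are hypothetical):

```python
# Hedged sketch of the "three Ds" screening rule described above.
# The example tasks and their ratings are hypothetical.

def automation_verdict(dull, dirty, dangerous):
    """Map hits on the three Ds to the rule of thumb in the text."""
    hits = sum([dull, dirty, dangerous])
    return {
        0: "probably leave it to humans",
        1: "at least think about automating it",
        2: "very likely ripe for automation",
        3: "slam dunk -- automate it",
    }[hits]

tasks = {
    "long-distance driving": (True, False, True),    # dull and dangerous
    "sewer-pipe inspection": (True, True, True),      # all three Ds
    "writing a blog post":   (False, False, False),   # none of them
}

for name, (dull, dirty, dangerous) in tasks.items():
    print(f"{name}: {automation_verdict(dull, dirty, dangerous)}")
```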

Driving can be excruciatingly dull, especially if you’re going any significant distance. That’s why people fall asleep at the wheel. I daresay everyone has fallen asleep at the wheel at least once, although we almost always wake up before hitting the bridge abutment. It’s also why so many people drive around with cellphones pressed to their ears. The temptation to multitask while driving is almost irresistible.

Driving is also brutally dangerous. Tens of thousands of people die every year in car crashes. It’s pretty safe to say that nearly all those fatalities involve cars driven by humans. The number of people killed in accidents involving driverless cars, on the other hand, can (as of this writing) be counted on one hand.

I’m not prepared to go into statistics comparing safety of automated vehicles vs. manually driven ones. Suffice it to say that eventually we can expect AVs to be much safer than manually driven vehicles. We’ll keep developing the technology until they are. It’s not a matter of if, but when.

This is the analysis most observers (if they analyze it at all) come up with to prove that vehicle driving should be automated.

Yet, opinions that AVs are acceptable, let alone inevitable, are far from universal. In a survey of 3,000 people in the U.S. and Canada, Ipsos Strategy 3 found that sixteen percent of Canadians and twenty-six percent of Americans say they “would not use a driverless car,” and a whopping 39% of Americans (and 30% of Canadians) would rarely or never let driverless technology do the parking!

Why would so many people be unwilling to give up their driving privileges? I submit that it has to do with a parallel consideration that is every bit as important as the three Ds when deciding whether to automate a task:

Don’t Automate Something Humans Like to Do!

Predatory animals, especially humans, like to drive. It’s fun. Just ask any dog who has a chance to go for a ride in a car! Almost universally they’ll jump into the front seat as soon as you open the door.

In fact, if you leave them (dogs) in the car unattended for a few minutes, they’ll be behind the wheel when you come back!

Humans are the same way. Leave ’em unattended in a car for any length of time, and they’ll think of some excuse to get behind the wheel.

The Ipsos survey found that some 61% of both Americans and Canadians identify themselves as “car people.” When asked “What would have to change about transportation in your area for you to consider not owning a car at all, or owning fewer cars?” 39% of Americans and 38% of Canadians responded “There is nothing that would make me consider owning fewer cars!”

That’s pretty definitive!

Their excuse for getting behind the wheel is largely an economic one: Seventy-eight percent of Americans claim they “definitely need to have a vehicle to get to work.” In more urbanized Canada (you did know that Canadians cluster more into cities, didn’t you?) that drops to 53%.

Whether those folks’ claim that they “have” to have a car to get to work is based on historical precedent, urban planning, wishful thinking, or flat-out just what they want to believe, it’s a good, cogent reason why folks, especially Americans, hang onto their steering wheels for dear life.

The moral of this story is that driving is something humans like to do, and getting them to give it up will be a serious uphill battle for anyone wanting to promote driverless cars.

Yet, development of AV technology is going full steam ahead.

Is that just another example of Dr. John Bridges’ version of Solomon’s proverb, “A fool and his money are soon parted”?

Possibly, but I think not.

Certainly, the idea of spending tons of money to have bragging rights for the latest technology, and to take selfies showing you reading a newspaper while your car drives itself through traffic, has some appeal. I submit, however, that the appeal is short-lived.

For one thing, reading in a moving vehicle is the fastest route I know of to motion sickness. It’s right up there with cueing up the latest Disney cartoon feature for your kids on the overhead DVD player in your SUV, then cleaning up their vomit.

I, for one, don’t want to go there!

Sounds like another example of “More money than brains.”

There are, however, a wide range of applications where driving a vehicle turns out to be no fun at all. For example, the first use of drone aircraft was as targets for anti-aircraft gunnery practice. They just couldn’t find enough pilots who wanted to be sitting ducks to be blown out of the sky! Go figure.

Most commercial driving jobs could also stand to be automated. For example, almost nobody actually steers ships at sea anymore. They generally stand around watching an autopilot follow a pre-programmed course. Why? As a veteran boat pilot, I can tell you that the captain has a lot more fun than the helmsman. Piloting a ship from, say, Calcutta to San Francisco has got to be mind-numbingly dull. There’s nothing going on out there on the ocean.

Boat passengers generally spend most of their time staring into the wake, but the helmsman doesn’t get to look at the wake. He (or she) has to spend their time scanning a featureless horizontal line separating a light-blue dome (the sky) from a dark-blue plane (the sea) in the vain hope that something interesting will pop up and relieve the tedium.

Hence the autopilot.

Flying a commercial airliner is similar. It has been described (as have so many other tasks) as “hours of boredom punctuated by moments of sheer terror!” While such activity is very Zen (I’m convinced that humans’ ability to meditate was developed by cave guys having to sit for hours, or even days, watching game trails for their next meal to wander along), it’s not top-of-the-line fun.

So, sometimes driving is fun, and sometimes it’s not. We need AV technology to cover those times when it’s not.

The Future Role of AI in Fact Checking

Reality Check
Advanced computing may someday help sort fact from fiction. Gustavo Frazao/Shutterstock

8 August 2018 – This guest post is contributed under the auspices of Trive, a global, decentralized truth-discovery engine. Trive seeks to provide news aggregators and outlets a superior means of fact checking and news-story verification.

by Barry Cousins, Guest Blogger, Info-Tech Research Group

In a recent project, we looked at blockchain startup Trive and their pursuit of a fact-checking truth database. It seems obvious and likely that competition will spring up. After all, who wouldn’t want to guarantee the preservation of facts? Or, perhaps it’s more that lots of people would like to control what is perceived as truth.

With the recent coming out party of IBM’s Debater, this next step in our journey brings Artificial Intelligence into the conversation … quite literally.

As an analyst, I’d like to have a universal fact checker. Something like the carbon monoxide detectors on each level of my home. Something that would sound an alarm when there’s danger of intellectual asphyxiation from choking on the baloney put forward by certain sales people, news organizations, governments, and educators, for example.

For most of my life, we would simply have turned to academic literature for credible truth. There is now enough legitimate doubt to make us seek out a new model or, at a minimum, to augment that academic model.

I don’t want to be misunderstood: I’m not suggesting that all news and education is phony baloney. And, I’m not suggesting that the people speaking untruths are always doing so intentionally.

The fact is, we don’t have anything close to a recognizable database of facts from which we can base such analysis. For most of us, this was supposed to be the school system, but sadly, that has increasingly become politicized.

But even if we had the universal truth database, could we actually use it? For instance, how would we tap into the right facts at the right time? The relevant facts?

If I’m looking into the sinking of the Titanic, is it relevant to study the facts behind the ship’s manifest? It might be interesting, but would it prove to be relevant? Does it have anything to do with the iceberg? Would that focus on the manifest impede my path to insight on the sinking?

It would be great to have Artificial Intelligence advising me on these matters. I’d make the ultimate decision, but it would be awesome to have something like the Star Trek computer sifting through the sea of facts for that which is relevant.

Is AI ready? IBM recently showed that it’s certainly coming along.

Is the sea of facts ready? That’s a lot less certain.

Debater holds its own

In June 2018, IBM unveiled the latest in Artificial Intelligence with Project Debater in a small event with two debates: “we should subsidize space exploration”, and “we should increase the use of telemedicine”. The opponents were credentialed experts, and Debater was arguing from a position established by “reading” a large volume of academic papers.

The result? From what we can tell, the humans were more persuasive while the computer was more thorough. Hardly surprising, perhaps. I’d like to watch the full debates but haven’t located them yet.

Debater is intended to help humans enhance their ability to persuade. According to IBM researcher Ranit Aharanov, “We are actually trying to show that a computer system can add to our conversation or decision making by bringing facts and doing a different kind of argumentation.”

So this is an example of AI. I’ve been trying to distinguish between automation and AI, machine learning, deep learning, etc. I don’t need to nail that down today, but I’m pretty sure that my definition of AI includes genuine cognition: the ability to identify facts, comb out the opinions and misdirection, incorporate the right amount of intention bias, and form decisions and opinions with confidence while remaining watchful for one’s own errors. I’ll set aside any obligation to admit and react to one’s own errors, choosing to assume that intelligence includes the interest in, and awareness of, one’s ability to err.

Mark Klein, Principal Research Scientist at M.I.T., helped with that distinction between computing and AI. “There needs to be some additional ability to observe and modify the process by which you make decisions. Some call that consciousness, the ability to observe your own thinking process.”

Project Debater represents an incredible leap forward in AI. It was given access to a large volume of academic publications, and it developed its debating chops through machine learning. The capability of the computer in those debates resembled the results that humans would get from reading all those papers, assuming you can conceive of a way that a human could consume and retain that much knowledge.

Beyond spinning away on publications, are computers ready to interact intelligently?

Artificial? Yes. But, Intelligent?

According to Dr. Klein, we’re still far away from that outcome. “Computers still seem to be very rudimentary in terms of being able to ‘understand’ what people say. They (people) don’t follow grammatical rules very rigorously. They leave a lot of stuff out and rely on shared context. They’re ambiguous or they make mistakes that other people can figure out. There’s a whole list of things like irony that are completely flummoxing computers now.”

Dr. Klein’s PhD in Artificial Intelligence from the University of Illinois leaves him particularly well-positioned for this area of study. He’s primarily focused on using computers to enable better knowledge sharing and decision making among groups of humans. That brings him up against the potentially debilitating question of what constitutes knowledge, and what separates fact from opinion from conjecture.

His field of study focuses on the intersection of AI, social computing, and data science. A central theme involves responsibly working together in a structured collective intelligence life cycle: Collective Sensemaking, Collective Innovation, Collective Decision Making, and Collective Action.

One of the key outcomes of Klein’s research is “The Deliberatorium”, a collaboration engine that adds structure to mass participation via social media. The system ensures that contributors create a self-organized, non-redundant summary of the full breadth of the crowd’s insights and ideas. This model avoids the risks of ambiguity and misunderstanding that impede the success of AI interacting with humans.

Klein provided a deeper explanation of the massive gap between AI and genuine intellectual interaction. “It’s a much bigger problem than being able to parse the words, make a syntax tree, and use the standard Natural Language Processing approaches.”

“Natural Language Processing breaks up the problem into several layers. One of them is syntax processing, which is to figure out the nouns and the verbs and figure out how they’re related to each other. The second level is semantics, which is having a model of what the words mean. That ‘eat’ means ‘ingesting some nutritious substance in order to get energy to live’. For syntax, we’re doing OK. For semantics, we’re doing kind of OK. But the part where it seems like Natural Language Processing still has light years to go is in the area of what they call ‘pragmatics’, which is understanding the meaning of something that’s said by taking into account the cultural and personal contexts of the people who are communicating. That’s a huge topic. Imagine that you’re talking to a North Korean. Even if you had a good translator there would be lots of possibility of huge misunderstandings because your contexts would be so different, the way you try to get across things, especially if you’re trying to be polite, it’s just going to fly right over each other’s head.”
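
For the syntax layer Klein describes, off-the-shelf tools already do a passable job; it’s the semantics and, especially, the pragmatics layers that have no comparably simple call. Here’s a minimal sketch of just the syntax step, assuming you have the spaCy library and its small English model installed:

```python
# Sketch of the "syntax processing" layer only: find the nouns and verbs
# and how they relate. Assumes spaCy and en_core_web_sm are installed.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The computer read a large volume of academic papers.")

for token in doc:
    # word, part of speech, dependency relation, and what it attaches to
    print(f"{token.text:10} {token.pos_:6} {token.dep_:10} -> {token.head.text}")

# Semantics (what 'read' means) and pragmatics (what the speaker is really
# getting at, given context) are the layers Klein says remain unsolved.
```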

To make matters much worse, our communications are filled with cases where we ought not be taken quite literally. Sarcasm, irony, idioms, etc. make it difficult enough for humans to understand, given the incredible reliance on context. I could just imagine the computer trying to validate something that starts with, “John just started World War 3…”, or “Bonnie has an advanced degree in…”, or “That’ll help…”

A few weeks ago, I wrote that I’d won $60 million in the lottery. I was being sarcastic, and (if you ask me) humorous in talking about how people decide what’s true. Would that research interview be labeled as fake news? Technically, I suppose it was. Now that would be ironic.

Klein summed it up with, “That’s the kind of stuff that computers are really terrible at and it seems like that would be incredibly important if you’re trying to do something as deep and fraught as fact checking.”

Centralized vs. Decentralized Fact Model

It’s self-evident that we have to be judicious in our management of the knowledge base behind an AI fact-checking model, and it’s reasonable to assume that AI will retain and project any subjective bias embedded in the underlying body of ‘facts’.

We’re facing competing models for the future of truth, based on the question of centralization. Do you trust yourself to deduce the best answer to challenging questions, or do you prefer to simply trust the authoritative position? Well, consider that there are centralized models with obvious bias behind most of our sources. The tech giants are all filtering our news and likely having more impact than powerful media editors. Are they unbiased? The government is dictating most of the educational curriculum in our model. Are they unbiased?

That centralized truth model should be raising alarm bells for anyone paying attention. Instead, consider a truly decentralized model where no corporate or government interest is influencing the ultimate decision on what’s true. And consider that the truth is potentially unstable. Establishing the initial position on facts is one thing, but the ability to change that view in the face of more information is likely the bigger benefit.

A decentralized fact model without commercial or political interest would openly seek out corrections. It would critically evaluate new knowledge and objectively re-frame the previous position whenever warranted. It would communicate those changes without concern for timing, or for the social or economic impact. It quite simply wouldn’t consider or care whether or not you liked the truth.

The model proposed by Trive appears to meet those objectivity criteria and is getting noticed as more people tire of left-vs-right and corporatocracy preservation.

IBM Debater seems like it would be able to engage in critical thinking that would shift influence towards a decentralized model. Hopefully, Debater would view the denial of truth as subjective and illogical. With any luck, the computer would confront that conduct directly.

IBM’s AI machine already can examine tactics and style. In a recent debate, it coldly scolded the opponent with: “You are speaking at the extremely fast rate of 218 words per minute. There is no need to hurry.”

Debater can obviously play the debate game while managing massive amounts of information and determining relevance. As it evolves, it will need to rely on the veracity of that information.

Trive and Debater seem to be a complement to each other, so far.

Author Bio

Barry Cousins is a Research Lead at Info-Tech Research Group, specializing in Project Portfolio Management, Help/Service Desk, and Telephony/Unified Communications. He brings an extensive background in technology, IT management, and business leadership.

About Info-Tech Research Group

Info-Tech Research Group is a fast-growing IT research and advisory firm. Founded in 1997, Info-Tech produces unbiased and highly relevant IT research solutions. Since 2010 McLean & Company, a division of Info-Tech, has provided the same unmatched expertise to HR professionals worldwide.

Who’s NOT a Creative?

Compensating sales
Close-up Of A Business Woman Giving Cheque To Her Colleague At Workplace In Office. Andrey Popov/Shutterstock

25 July 2018 – Last week I made a big deal about the things that motivate creative people, such as magazine editors, and how the most effective rewards were non-monetary. I also said that monetary rewards, such as commissions based on sales results, were exactly the right rewards to use for salespeople. That would imply that salespeople were somehow different from others, and maybe even not creative.

That is not the impression I want to leave you with. I’m devoting this blog posting to setting that record straight.

My remarks last week were based on Maslow‘s and Herzberg‘s work on motivation of employees. I suggested that these theories were valid in other spheres of human endeavor. Let’s be clear about this: yes, Maslow’s and Herzberg’s theories are valid and useful in general, whenever you want to think about motivating normal, healthy human beings. It’s incidental that those researchers were focused on employer/employee relations as an impetus to their work. If they’d been focused on anything else, their conclusions would probably have been pretty much the same.

That said, there is a whole class of people for whom monetary compensation is the holy grail of motivators. They are generally very high functioning individuals who are in no way pathological. On the surface, however, their preferred rewards appear to be monetary.

Traditionally, observers who don’t share this reward system have indicted these individuals as “greedy.”

I, however, dispute that conclusion. Let me explain why.

When pointing out the rewards that can be called “motivators for editors,” I wrote:

“We did that by pointing out that they belonged to the staff of a highly esteemed publication. We talked about how their writings helped their readers excel at their jobs. We entered their articles in professional competitions with awards for things like ‘Best Technical Article.’ Above all, we talked up the fact that ours was ‘the premier publication in the market.'”

Notice that these rewards, though non-monetary, were more or less measurable. They could be (and, for the individuals they motivated, indeed were) seen as scorecards. The individuals involved had a very clear idea of the value attached to such rewards. A Nobel Prize in Physics, for instance, is of greater value than a similar award given by, say, Harvard University.

For example, in 1987 I was awarded the “Cahners Editorial Medal of Excellence, Best How-To Article.” That wasn’t half bad. The competition was articles written for a few dozen magazines that were part of the Cahners Publishing Company, which at the time was a big deal in the business-to-business magazine field.

What I considered to be of higher value, however, was the “First Place Award For Editorial Excellence for a Technical Article in a Magazine with Over 80,000 Circulation” I got in 1997 from the American Society of Business Press Editors, where I was competing with a much wider pool of journalists.

Economists have a way of attempting to quantify such non-monetary rewards, called utility. They arrive at values by presenting various options and asking the question: “Which would you rather have?”

Of course, measures of utility generally vary widely depending on who’s doing the choosing.
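
One crude way to turn those “Which would you rather have?” answers into numbers is simply to count how often each reward wins its pairwise matchups, respondent by respondent. A minimal sketch, with entirely invented respondents and choices:

```python
# Hypothetical illustration of estimating utility from pairwise choices.
# The respondents and their preferences are invented for the example.
from collections import Counter

# Each record: (respondent, option A, option B, option chosen)
choices = [
    ("editor",      "industry award", "cash bonus",    "industry award"),
    ("editor",      "industry award", "bigger office", "industry award"),
    ("editor",      "cash bonus",     "bigger office", "cash bonus"),
    ("salesperson", "industry award", "cash bonus",    "cash bonus"),
    ("salesperson", "cash bonus",     "bigger office", "cash bonus"),
    ("salesperson", "industry award", "bigger office", "bigger office"),
]

for person in ("editor", "salesperson"):
    wins = Counter(pick for who, a, b, pick in choices if who == person)
    print(person, "ranks rewards:", wins.most_common())
```

Run it and the editor’s top pick isn’t the salesperson’s, which is the whole point: utility depends on who’s doing the choosing.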

For example, an article in the 19 July issue of The Wall Street Journal described a phenomenon the author seemed to think was surprising: Saudi Arabian women drivers (new drivers all) showed a preference for muscle cars over more pedestrian models. The author, Margherita Stancati, related an incident in which a Porsche salesperson in Riyadh offered a recently minted woman driver an “easy to drive crossover designed to primarily attract women.” The customer demurred. She wanted something “with an engine that roars.”

So, the utility of anything is not an absolute in any sense. It all depends on answering the question: “Utility to whom?”

Everyone is motivated by rewards in the upper half of the Needs Pyramid. If you’re a salesperson, growth in your annual (or other period) sales revenue is in the green Self Esteem block. It’s well and truly in the “motivator” category, and has nothing to do with the Safety and Security “hygiene factor” where others might put it. Successful salespeople have those hygiene factors well-and-truly covered. They’re looking for a reward that tells them they’ve hit a home run. That is likely having a bigger annual bonus than the next guy.

The most obvious money-driven motivators accrue to the folks in the CEO ranks. Jeff Bezos, Elon Musk, and Warren Buffett would have a hard time measuring their success (i.e., hitting the Pavlovian lever to get Self Actualization rewards) without looking at their monetary compensation!

The Pyramid of Needs

Needs Pyramid
The Pyramid of Needs combines Maslow’s and Herzberg’s motivational theories.

18 July 2018 – Long, long ago, in a [place] far, far away. …

When I was Chief Editor at business-to-business magazine Test & Measurement World, I had a long, friendly though heated, discussion with one of our advertising-sales managers. He suggested making the compensation we paid our editorial staff contingent on total advertising sales. He pointed out that what everyone came to work for was to get paid, and that tying their pay to how well the magazine was doing financially would give them an incentive to make decisions that would help advertising sales, and advance the magazine’s financial success.

He thought it was a great idea, but I disagreed completely. I pointed out that, though revenue sharing was exactly the right way to compensate the salespeople he worked with, it was exactly the wrong way to compensate creative people, like writers and journalists.

Why it was a good idea for his salespeople I’ll leave for another column. Today, I’m interested in why it was not a good idea for my editors.

In the heat of the discussion I didn’t do a deep dive into the reasons for taking my position. Decades later, from the standpoint of a semi-retired whatever-you-call-my-patchwork-career, I can now sit back and analyze in some detail the considerations that led me to my conclusion, which I still think was correct.

We’ll start out with Maslow’s Hierarchy of Needs.

In 1943, Abraham Maslow proposed that healthy human beings have a certain number of needs, and that these needs are arranged in a hierarchy. At the top is “self actualization,” which boils down to a need for creativity. It’s the need to do something that’s never been done before in one’s own individual way. At the bottom is the simple need for physical survival. In between are three more identified needs people also seek to satisfy.

Maslow pointed out that people seek to satisfy these needs from the bottom to the top. For example, nobody worries about security arrangements at their gated community (second level) while having a heart attack that threatens their survival (bottom level).

Overlaid on Maslow’s hierarchy is Frederick Herzberg’s Two-Factor Theory, which he published in his 1959 book The Motivation to Work. Herzberg’s theory divides Maslow’s hierarchy into two sections. The lower section is best described as “hygiene factors.” They are also known as “dissatisfiers” or “demotivators” because if they’re not met folks get cranky.

Basically, a person needs to have their hygiene factors covered in order to have a level of basic satisfaction in life. Leaving any of these needs unsatisfied makes them miserable. Having them satisfied doesn’t motivate them at all. It makes ’em fat, dumb and happy.

The upper-level needs are called “motivators.” Not having motivators met drives an individual to work harder, smarter, etc. It energizes them.
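
For readers who like their theories tabulated, here’s one compact way to lay the two models on top of each other. This is just a sketch that follows the grouping described above (other writers draw the hygiene/motivator line in slightly different places):

```python
# Maslow's levels, bottom to top, tagged with Herzberg's two factors,
# grouped the way this post describes them.
NEEDS = [
    ("physiological survival", "hygiene factor"),
    ("safety and security",    "hygiene factor"),
    ("love and belonging",     "hygiene factor"),
    ("self esteem",            "motivator"),
    ("self actualization",     "motivator"),
]

def unmet_hygiene(satisfied):
    """Hygiene factors only matter when they're NOT met."""
    return [need for need, kind in NEEDS
            if kind == "hygiene factor" and need not in satisfied]

# Somebody with survival and security covered, but no sense of belonging:
print(unmet_hygiene({"physiological survival", "safety and security"}))
# -> ['love and belonging']  (a dissatisfier: this person is cranky)
```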

My position in the argument with my ad-sales friend was that providing revenue sharing worked at the “Safety and Security” level. Editors were (at least in my organization) paid enough that they didn’t have to worry about feeding their kids and covering their bills. They were talented people with a choice of whom they worked for. If they weren’t already being paid enough, they’d have been forced to go work for somebody else.

Creative people, my argument went, are motivated by non-monetary rewards. They work at the upper “motivator” levels. They’ve already got their physical needs covered, so to motivate them we have to offer rewards in the “motivator” realm.

We did that by pointing out that they belonged to the staff of a highly esteemed publication. We talked about how their writings helped their readers excel at their jobs. We entered their articles in professional competitions with awards for things like “Best Technical Article.” Above all, we talked up the fact that ours was “the premier publication in the market.”

These were all non-monetary rewards to motivate people who already had their basic needs (the hygiene factors) covered.

I summarized my compensation theory thusly: “We pay creative people enough so that they don’t have to go do something else.”

That gives them the freedom to do what they would want to do, anyway. The implication is that creative people want to do stuff because it’s something they can do that’s worth doing.

In other words, we don’t pay creative people to work. We pay them to free them up so they can work. Then, we suggest really fun stuff for them to work at.

What does this all mean for society in general?

First of all, if you want there to be a general level of satisfaction within your society, you’d better take care of those hygiene factors for everybody!

That doesn’t mean the top 1%. It doesn’t mean the top 80%, either. Or, the top 90%. It means everybody!

If you’ve got 99% of everybody covered, that still leaves a whole lot of people who think they’re getting a raw deal. Remember that in the U.S.A. there are roughly 300 million people. If you’ve left 1% feeling ripped off, that’s 3 million potential revolutionaries. Three million people can cause a lot of havoc if motivated.

Remember, at the height of the 1960s Hippy movement, there were, according to the most generous estimates, only about 100,000 hipsters wandering around. Those hundred-thousand activists made a huge change in society in a very short period of time.

Okay. If you want people invested in the status quo of society, make sure everyone has all their hygiene factors covered. If you want to know how to do that, ask Bernie Sanders.

Assuming you’ve got everybody’s hygiene factors covered, does that mean they’re all fat, dumb, and happy? Do you end up with a nation of goofballs with no motivation to do anything?

Nope!

Remember those needs Herzberg identified as “motivators” in the upper part of Maslow’s pyramid?

The hygiene factors come into play only when they’re not met. The day they’re met, people stop thinking about who’ll be first against the wall when the revolution comes. Folks become fat, dumb and happy, and stay that way for about an afternoon. Maybe an afternoon and an evening if there’s a good ballgame on.

The next morning they start thinking: “So, what can we screw with next?”

What they’re going to screw with next is anything and everything they damn well please. Some will want to fly to the Moon. Some will want to outdo Michelangelo’s frescoes for the ceiling of the Sistine Chapel. They’re all going to look at what they think was the greatest stuff from the past, and try to think of ways to do better, and to do it in their own way.

That’s the whole point of “self actualization.”

The Renaissance didn’t happen because everybody was broke. It happened because they were already fat, dumb and happy, and looking for something to screw with next.

POTUS and the Peter Principle

Will Rogers & Wiley Post
In 1927, Will Rogers wrote: “I never met a man I didn’t like.” Here he is (on left) posing with aviator Wiley Post before their ill-fated flying exploration of Alaska. Everett Historical/Shutterstock

11 July 2018 – Please bear with me while I, once again, invert the standard news-story pyramid by presenting a great whacking pile of (hopefully entertaining) detail that leads eventually to the point of this column. If you’re too impatient to read it to the end, leave now to check out the latest POTUS rant on Twitter.

Unlike Will Rogers, who famously wrote, “I never met a man I didn’t like,” I’ve run across a whole slew of folks I didn’t like, to the point of being marginally misanthropic.

I’ve made friends with all kinds of people, from murderers to millionaires, but there are a few types that I just can’t abide. Top of that list are people who think they’re smarter than everybody else, and want you to acknowledge it.

I’m telling you this because I’m trying to be honest about why I’ve never been able to abide two recent Presidents: William Jefferson Clinton (#42) and Donald J. Trump (#45). Having been forced to observe their antics over an extended period, I’m pleased to report that they’ve both proved to be among the most corrupt individuals to occupy the Oval Office in recent memory.

I dislike them because they both show that same, smarmy self-satisfied smile when contemplating their own greatness.

Tricky Dick Nixon (#37) was also a world-class scumbag, but he never triggered the same automatic revulsion. That is because, instead of always looking self-satisfied, he always looked scared. He was smart enough to recognize that he was walking a tightrope and that, if he stayed on it long enough, he eventually would fall off.

And, he did.

I had no reason for disliking #37 until the mid-1960s, when, as a college freshman, I researched a paper for a history class that happened to involve digging into the McCarthy hearings of the early 1950s. Seeing the future #37’s activities in that period helped me form an extremely unflattering picture of his character, which a decade later proved accurate.

During those years in between I had some knock-down, drag-out arguments with my rabid-Nixon-fan grandmother. I hope I had the self control never to have said “I told you so” after Nixon’s fall. She was a nice lady and a wonderful grandma, and wouldn’t have deserved it.

As Abraham Lincoln (#16) famously said: “You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.”

Since #45 came on my radar many decades ago, I’ve been trying to figure out what, exactly, is wrong with his brain. At first, when he was a real-estate developer, I just figured he had bad taste and was infantile. That made him easy to dismiss, so I did just that.

Later, he became a reality-TV star. His show, The Apprentice, made it instantly clear that he knew absolutely nothing about running a business.

No wonder his companies went bankrupt. Again, and again, and again….

I’ve known scads of corporate CEOs over the years. During the quarter century I spent covering the testing business as a journalist, I got to spend time with most of the corporate leaders of the world’s major electronics manufacturing companies. Unsurprisingly, the successful ones followed the best practices that I learned in MBA school.

Some of the CEOs I got to know were goofballs. Most, however, were absolutely brilliant. The successful ones all had certain things in common.

Chief among the characteristics of successful corporate executives is that they make the people around them happy to work for them. They make others feel comfortable, empowered, and enthusiastically willing to cooperate to make the CEO’s vision manifest.

Even Commendatore Ferrari, who I’ve heard was Hell to work for and Machiavellian in interpersonal relationships, made underlings glad to have known him. I’ve noticed that ‘most everybody who’s ever worked for Ferrari has become a Ferrari fan for life.

As far as I can determine, nobody ever sued him.

That’s not the impression I got of Donald Trump, the corporate CEO. He seemed to revel in conflict, making those around him feel like dog pooh.

Apparently, everyone who’s ever dealt with him has wanted to sue him.

That worked out fine, however, for Donald Trump, the reality-TV star. So-called “reality” TV shows generally survive by presenting conflict. The more conflict the better. Everybody always seems to be fighting with everybody else, and the winners appear to be those who consistently bully their opponents into feeling like dog pooh.

I see a pattern here.

The inescapable conclusion is that Donald Trump was never a successful corporate executive, but succeeded enormously playing one on TV.

Another characteristic I should mention of reality TV shows is that they’re unscripted. The idea seems to be that nobody knows what’s going to happen next, including the cast.

That removes the need for reality-TV stars to learn lines. Actual movie stars and stage actors have to learn lines of dialog. Stories are tightly scripted so that they conform to Aristotle’s recommendations for how to write a successful plot.

Having written a handful of traditional motion-picture scripts as well as having produced a few reality-TV episodes, I know the difference. Following Aristotle’s dicta gives you the ability to communicate, and sometimes even teach, something to your audience. The formula reality-TV show, on the other hand, goes nowhere. Everybody (including the audience) ends up exactly where they started, ready to start the same stupid arguments over and over again ad nauseam.

Apparently, reality-TV audiences don’t want to actually learn anything. They’re more focused on ranting and raving.

Later on, following a long tradition among theater, film and TV stars, #45 became a politician.

At first, I listened to what he said. That led me to think he was a Nazi demagogue. Then, I thought maybe he was some kind of petty tyrant, like Mussolini. (I never considered him competent enough to match Hitler.)

Eventually, I realized that it never makes any sense to listen to what #45 says because he lies. That makes anything he says irrelevant.

FIRST PRINCIPLE: If you catch somebody lying to you, stop believing what they say.

So, it’s all bullshit. You can’t draw any conclusion from it. If he says something obviously racist (for example), you can’t conclude that he’s a racist. If he says something that sounds stupid, you can’t conclude he’s stupid, either. It just means he’s said something that sounds stupid.

Piling up this whole load of B.S., then applying Occam’s Razor, leads to the conclusion that #45 is still simply a reality-TV star. His current TV show is titled The Trump Administration. Its supporting characters are U.S. senators and representatives, executive-branch bureaucrats, news-media personalities, and foreign “dignitaries.” Some in that last category (such as Justin Trudeau and Emmanuel Macron) are reluctant conscripts into the cast, and some (such as Vladimir Putin and Kim Jong-un) gleefully play their parts, but all are bit players in #45’s reality TV show.

Oh, yeah. The largest group of bit players in The Trump Administration is every man, woman, child and jackass on the planet. All are, in true reality-TV style, going exactly nowhere as long as the show lasts.

Politicians have always been showmen. Of the Founding Fathers, the one who stands out for never coming close to becoming President was Benjamin Franklin. Franklin was a lot of things, and did a lot of things extremely well. But, he was never really a P.T.-Barnum-like showman.

Really successful politicians, such as Abraham Lincoln, Franklin Roosevelt (#32), Bill Clinton, and Ronald Reagan (#40) were showmen. They could wow the heck out of an audience. They could also remember their lines!

That brings us, as promised, to Donald Trump and the Peter Principle.

Recognizing the close relationship between Presidential success and showmanship gives some idea about why #45 is having so much trouble making a go of being President.

Before I dig into that, however, I need to point out a few things that #45 likes to claim as successes that actually aren’t:

  • The 2016 election was not really a win for Donald Trump. Hillary Clinton was such an unpopular candidate that she decisively lost on her own (de)merits. God knows why she was ever the Democratic Party candidate at all. Anybody could have beaten her. If Donald Trump hadn’t been available, Elmer Fudd could have won!
  • The current economic expansion has absolutely nothing to do with Trump policies. I predicted it back in 2009, long before anybody (with the possible exception of Vladimir Putin, who apparently engineered it) thought Trump had a chance of winning the Presidency. My prediction was based on applying chaos theory to historical data. It was simply time for an economic expansion. The only effect Trump can have on the economy is to screw it up. Being trained as an economist (You did know that, didn’t you?), #45 is unlikely to screw up so badly that he derails the expansion.
  • While #45 likes to claim a win on North Korean denuclearization, the Nobel Peace Prize is on hold while evidence piles up that Kim Jong-un was pulling the wool over Trump’s eyes at the summit.

Finally, we move on to the Peter Principle.

In 1969 Canadian writer Raymond Hull co-wrote a satirical book entitled The Peter Principle with Laurence J. Peter. It was based on research Peter had done on organizational behavior.

Peter (he died at age 70 in 1990) was not a management consultant or a behavioral psychologist. He was an Associate Professor of Education at the University of Southern California. He was also Director of the Evelyn Frieden Centre for Prescriptive Teaching at USC, and Coordinator of Programs for Emotionally Disturbed Children.

The Peter principle states: “In a hierarchy every employee tends to rise to his level of incompetence.”

To the horror of corporate managers, the book went on to provide real examples and lucid explanations showing the principle’s validity. It works as satire only because it leaves the reader with a choice either to laugh or to cry.

See last week’s discussion of why academic literature is exactly the wrong form with which to explore really tough philosophical questions in an innovative way.

Let’s be clear: I’m convinced that the Peter principle is God’s Own Truth! I’ve seen dozens of examples that confirm it, and no counter examples.

It’s another proof that Mommy Nature has a sense of humor. Anyone who disputes that has, philosophically speaking, a piece of paper taped to the back of his (or her) shirt with the words “Kick Me!” written on it.

A quick perusal of the Wikipedia entry on the Peter Principle elucidates: “An employee is promoted based on their success in previous jobs until they reach a level at which they are no longer competent, as skills in one job do not necessarily translate to another. … If the promoted person lacks the skills required for their new role, then they will be incompetent at their new level, and so they will not be promoted again.”
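
For what it’s worth, the mechanism in that Wikipedia passage is easy to caricature in a few lines of code. The toy simulation below is my own illustration, not anything from Peter’s research: assume competence in each new job is drawn independently of the last one, promote only the competent, and watch where careers stall.

```python
# Toy simulation of the Peter Principle (illustrative only).
# Assumption: competence in each new role is independent of the last role.
import random

random.seed(1)
LEVELS = 5        # rungs in the hierarchy
THRESHOLD = 0.5   # competence below this means no further promotions

final_levels = []
for _ in range(10_000):                 # ten thousand careers
    level = 0
    while level < LEVELS - 1 and random.random() >= THRESHOLD:
        level += 1                      # competent in the current job: promoted
    final_levels.append(level)          # career stalls at the first job done
                                        # incompetently (or at the top rung)

for lvl in range(LEVELS):
    share = final_levels.count(lvl) / len(final_levels)
    print(f"careers ending at level {lvl}: {share:.0%}")
```

In this toy version, nearly everyone below the top rung ends up parked in a job they do incompetently, which is the principle in a nutshell.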

I leave it as an exercise for the reader (and the media) to find the numerous examples where #45, as a successful reality-TV star, has the skills he needed to be promoted to President, but not those needed to be competent in the job.

The Mad Hatter’s Riddle

Raven/Desk
Lewis Carroll’s famous riddle “Why is a raven like a writing desk?” turns out to have a simple solution after all! Shutterstock

27 June 2018 – In 1865 Charles Lutwidge Dodgson, aka Lewis Carroll, published Alice’s Adventures in Wonderland, in which his Mad Hatter character posed the riddle: “Why is a raven like a writing desk?”

Somewhat later in the story Alice gave up trying to guess the riddle and challenged the Mad Hatter to provide the answer. When he couldn’t, nor could anyone else at the story’s tea party, Alice dismissed the whole thing by saying: “I think you could do something better with the time . . . than wasting it in asking riddles that have no answers.”

Since then, it has generally been believed that the riddle has, in actuality, no answer.

Modern Western thought has progressed a lot since the mid-nineteenth century, however. Specifically, two modes of thinking have gained currency that directly lead to solving this riddle: Zen and Surrealism.

I’m not going to try to give even sketchy pictures of Zen or Surrealist doctrine here. There isn’t anywhere near enough space to do either subject justice. I will, however, allude to those parts that bear on solving the Hatter’s riddle.

I’m also not going to credit Dodgson with having surreptitiously known the answer, then hiding it from the World. There is no chance that he could have read Andre Breton’s The Surrealist Manifesto, which was published twenty-six years after Dodgson’s death. And, I’ve not been able to find a scrap of evidence that the Anglican-deacon Dodgson ever seriously studied Taoism or its better-known offshoot, Zen. I’m firmly convinced that the religiously conservative Dodgson really did pen the riddle as an example of a nonsense question. He seemed fond of nonsense.

No, I’m trying to make the case that in the surreal world of imagination, there is no such thing as nonsense. There is always a viewpoint from which the absurd and seemingly illogical comes into sharp focus as something obvious.

As Obi-Wan Kenobi said in Return of the Jedi: “From a certain point of view.”

Surrealism sought to explore the alternate universe of dreams. From that point of view, Alice is a classic surrealist work. It explicitly recounts a dream Alice had while napping on a summery hillside with her head cradled in her big sister’s lap. The surrealists, reading Alice three quarters of a century later, recognized this link, and acknowledged the mastery with which Dodgson evoked the dream world.

Unlike the mid-nineteenth-century Anglicans, however, the surrealists of the early twentieth century viewed that dream world as having as much, if not more, validity as the waking world of so-called “reality.”

Chinese Taoism informs our thinking through the melding of all forms of reality (along with everything else) into one unified whole. When allied with Indian Buddhism to form the Chinese Ch’an, or Japanese Zen, it provides a method that frees the mind to explore possible answers to, among other things, riddles like the Hatter’s, and find just the right viewpoint where the solution comes into sharp relief. This method, which is called a koan, is an exercise wherein a master provides riddles to his (or her) students to help guide them along their paths to enlightenment.

Ultimately, the solution to the Hatter’s riddle, as I revealed in my 2016 novella Lilith, is as follows:

Question: Why is a raven like a writing desk?

Answer: They’re both not made of bauxite.

According to Collins English Dictionary – Complete & Unabridged 2012 Digital Edition, bauxite is “a white, red, yellow, or brown amorphous claylike substance comprising aluminium oxides and hydroxides, often with such impurities as iron oxides. It is the chief ore of aluminium and has the general formula: Al2O3·nH2O.”

As a claylike mineral substance, bauxite is clearly exactly the wrong material from which to make a raven. Ravens are complex, highly organized hydrocarbon-based life forms. One could shape hydrated bauxite into an amazingly lifelike statue of a raven, but it wouldn’t even be the right color. Certainly it would never exhibit the behaviors we normally expect of actual, real, live ravens.

Similarly, bauxite could be used to form an amazingly lifelike statue of a writing desk. The bauxite statue of a writing desk might even have a believable color!

Why one would want to produce a statue of a writing desk, instead of making an actual writing desk, is a question outside the scope of this blog posting.

Real writing desks, however, are best made of wood, although other materials, such as steel, fiber-reinforced plastic (FRP), and marble, have been used successfully. What makes wood such a perfect material for writing desks is its mechanically superior composite structure.

Being made of long cellulose fibers held in place by a lignin matrix, wood has wonderful anisotropic mechanical properties. It’s easy to cut and shape with the grain, while providing prodigious yield strength when stressed against the grain. Its amazing toughness when placed under tension or bending loads makes assembling wood into the kind of structure ideal for a writing desk almost too easy.

Try making that out of bauxite!

Alice was unable to divine the answer to the Hatter’s riddle because she “thought over all she could remember about ravens and writing desks.” That is exactly the kind of mistake we might expect a conservative Anglican deacon to make as well.

It is only by using Zen methods of turning the problem inside out and surrealist imagination’s ability to look at it as a question, not of what ravens and writing desks are, but what they are not, that the riddle’s solution becomes obvious.

What If They Gave a War, But Nobody Noticed

Cyberwar
World War III is being fought in cyberspace right now, but most of us seem to be missing it! Oliver Denker/Shutterstock

13 June 2018 – Ever wonder why Kim Jong Un is so willing to talk about giving up his nuclear arsenal? Sort-of-President Donald Trump (POTUS) seems to think it’s because economic sanctions are driving North Korea (officially the Democratic People’s Republic of Korea, or DPRK) to the financial brink.

That may be true, but it is far from the whole story. As usual, the reality star POTUS is stuck decades behind the times. The real World War III won’t have anything to do with nukes, and it’s started already.

The threat of global warfare using thermonuclear weapons was panic-inducing to my father back in the 1950s and 1960s. Strangely, however, my superbrained mother didn’t seem very worried at the time.

By the 1980s, we were beginning to realize what my mother seemed to know instinctively — that global thermonuclear war just wasn’t going to happen. That kind of war leaves such an ungodly mess that no even-marginally-sane person would want to win one. The winners would be worse off than the losers!

The losers would join the gratefully dead, while the winners would have to live in the mess!

That’s why we don’t lose sleep at night knowing that the U.S., Russia, China, India, Pakistan, and, in fact, most countries in the first and second worlds, have access to thermonuclear weapons. We just worry about third-world toilets (to quote Danny DeVito’s character in The Jewel of the Nile) run by paranoid homicidal maniacs getting their hands on the things. Those guys are the only ones crazy enough to ever actually use them!

We only worried about North Korea developing nukes when Kim Jong Un was acting like a total whacko. Since he stopped his nuclear development program (because his nuclear lab accidentally collapsed under a mountain of rubble), it’s begun looking like he was no more insane than the leaders of Leonard Wibberley’s fictional nation-state, the Duchy of Grand Fenwick.

In Wibberley’s 1956 novel The Mouse That Roared, the Duchy’s leaders all breathed a sigh of relief when their captured doomsday weapon, the Q-Bomb, proved to be a dud.

Yes, there is a hilarious movie to be made documenting the North Korean nuclear and missile programs.

Okay, so we’ve disposed of the idea that World War III will be a nuclear holocaust. Does that mean, as so many starry-eyed astrophysicists imagined in the late 1940s, the end of war?

Fat f-ing chance!

The winnable war in the Twenty-First Century is one fought in cyberspace. In fact, it’s going on right now. And, you’re missing it.

Cybersecurity and IT expert Theresa Payton, CEO of Fortalice Solutions, asserts that suspected North Korean hackers have been conducting offensive cyber operations on financial institutions amid discussions between Washington and Pyongyang on a possible nuclear summit between President Trump and Kim Jong Un.

“The U.S. has been able to observe North Korean-linked hackers targeting financial institutions in order to steal money,” she says. “This isn’t North Korea’s first time meddling in serious hacking schemes. This time, it’s likely because the international economic sanctions have hurt them in their wallets and they are desperate and strapped for cash.”

There is a long laundry list of cyberattacks that have been perpetrated against U.S. and European interests, including infrastructure, corporations and individuals.

“One of N. Korea’s best assets … is to flex it’s muscle using it’s elite trained cyber operations,” Payton asserts. “Their cyber weapons can be used to fund their government by stealing money, to torch organizations and governments that offend them (look at Sony hacking), to disrupt our daily lives through targeting critical infrastructure, and more. The Cyber Operations of N. Korea is a powerful tool for the DPRK to show their displeasure at anything and it’s the best bargaining chip that Kim Jong Un has.”

Clearly, DPRK is not the only bad state actor out there. Russia has long been in the news using various cyberwar tactics against the U.S., Europe and others. China has also been blamed for cyberattacks. In fact, cyberwarfare is a cheap, readily available alternative to messy and expensive nuclear weapons for anyone with Internet access (meaning, just about everybody) and wishing to do anybody harm, including us.

“You can take away their Nukes,” Payton points out, “but you will have a hard time dismantling their ability to attack critical infrastructure, businesses and even civilians through cyber operations.”

Programming Notes: I’ve been getting a number of comments on this blog each day, and it looks like we need to set some ground rules. At least, I need to be explicit about things I will accept and things I won’t:

  • First off, remember that this isn’t a social media site. When you make a comment, it doesn’t just spill out into the blog site. Comments are sequestered until I go in and approve or reject them. So far, the number of comments is low enough that I can go through and read each one, but I don’t do it every day. If I did, I’d never get any new posts written! Please be patient.
  • Do not embed URLs to other websites in comments. I’ll strip them out even if I approve your comment otherwise. The reason is that I don’t have time to vet every URL, and I stick to journalistic standards, which means I don’t allow anything in the blog that I can’t verify. There are no exceptions.
  • This is an English language site ONLY. Comments in other languages are immediately deleted. (For why, see above.)
  • Use Standard English written in clear, concise prose. If I have trouble understanding what you’re trying to say, I won’t give your comment any space. If you can’t write a cogent English sentence, take an ESL writing course!

The Case for Free College

College vs. Income
While the need for skilled workers to maintain our technology edge has grown, the cost of training those workers has grown astronomically.

6 June 2018 – We, as a nation, need to extend the present system that provides free, universal education up through high school to cover college to the baccalaureate level.

DISCLOSURE: Teaching is my family business. My father was a teacher. My mother was a teacher. My sister’s first career was as a teacher. My brother-in-law was a teacher. My wife is a teacher. My son is a teacher. My daughter-in-law is a teacher. Most of my aunts and uncles and cousins are or were teachers. I’ve spent a lot of years teaching at the college level, myself. Some would say that I have a conflict of interest when covering developments in the education field. Others might argue that I know whereof I speak.

Since WW II, there has been a growing realization that the best careers go to those with at least a bachelor’s degree in whatever field they choose. Yet, at the same time, society has (perhaps inadvertently, although I’m not naive enough to eschew thinking there’s a lot of blame to go around) erected a monumental barrier to anyone wanting to get an education. Since the mid-1970s, the cost of higher education has vastly outstripped the ability of most people to pay for it.

In 1975, the price of attendance in college was about one fifth of the median family income (see graph above). In 2016, it was over a third. That makes sending kids to college a whole lot harder than it used to be. If your family happens to have less than median household income, that barrier looks even higher, and is getting steeper.

MORE DISCLOSURE: The reason I don’t have a Ph.D. today is that two years into my Aerospace Engineering Ph.D. program, Arizona State University jacked up the tuition beyond my (not inconsiderable at the time) ability to pay.

I’d like everyone in America to consider the following propositions:

  1. A bachelor’s degree is the new high-school diploma;
  2. Having an educated population is a requirement for our technology-based society;
  3. Without education, upward mobility is nearly impossible;
  4. Ergo, it is a requirement for our society to ensure that every citizen capable of getting a college degree gets one.

EVEN MORE DISCLOSURE: Horace Mann, often credited as the Father of Public Education, was born in the same town (Franklin, MA) that I was, and our family charity is a scholarship fund dedicated to his memory.

About Mann’s intellectual progressivism, the historian Ellwood P. Cubberley said: “No one did more than he to establish in the minds of the American people the conception that education should be universal, non-sectarian, free, and that its aims should be social efficiency, civic virtue, and character, rather than mere learning or the advancement of education ends.” (source: Wikipedia)

The Wikipedia article goes on to say: “Arguing that universal public education was the best way to turn unruly American children into disciplined, judicious republican citizens, Mann won widespread approval from modernizers, especially in the Whig Party, for building public schools. Most states adopted a version of the system Mann established in Massachusetts, especially the program for normal schools to train professional teachers.”

That was back in the mid-nineteenth century. At that time, the United States was in the midst of a shift from an agrarian to an industrial economy. We’ve since completed that transition and are now shifting to an information-based economy. In the future, full participation in the workforce will require everyone to have at least a bachelor’s degree.

So, when progressive politicians, like Bernie Sanders, make noises about free universal college education, YOU should listen!

It’s about time we, as a society, owned up to the fact that times have changed a lot since the mid-nineteenth century. At that time, universal free education to about junior high school level was considered enough. Since then, it was extended to high school. It’s time to extend it further to the bachelor’s-degree level.

That doesn’t mean shutting down Ivy League colleges. For those who can afford them, private and for-profit colleges can provide superior educational experiences. But publicly funded four-year colleges offering tuition-free education to everyone have become a strategic imperative.

How Do We Know What We Think We Know?

Rene Descartes Etching
Rene Descartes shocked the world by asserting “I think, therefore I am.” In the mid-seventeenth century that was blasphemy! William Holl/Shutterstock.com

9 May 2018 – In astrophysics school, learning how to distinguish fact from opinion was a big deal.

It’s really, really hard to do astronomical experiments. Let’s face it, before Neil Armstrong stepped, for the first time, on the Moon (known as “Luna” to those who like to call things by their right names), nobody could say for certain that the big bright thing in the night sky wasn’t made of green cheese. Only after going there and stepping on the ground could Armstrong truthfully report: “Yup! Rocks and dust!”

Even then, we had to take his word for it.

Only later on, after he and his buddies brought actual samples back to be analyzed on Earth (“Terra”) could others report: “Yeah, the stuff’s rock.”

Then, the rest of us had to take their word for it!

Before that, we could only look at the Moon. We couldn’t actually go there and touch it. We couldn’t complete the syllogism:

    1. It looks like a rock.
    2. It sounds like a rock.
    3. It smells like a rock.
    4. It feels like a rock.
    5. It tastes like a rock.
    6. Ergo, it’s a rock!

Before 1969, nobody could get past the first line of the syllogism!

Based on my experience with smart people over the past nearly seventy years, I’ve come to believe that the entire green-cheese thing started out when some person with more brains than money pointed out: “For all we know, the stupid thing’s made of green cheese.”

I Think, Therefore I Am

An essay I read a long time ago, which somebody told me was written by some guy named Rene Descartes in the seventeenth century, concluded that the only reason he (the author) could be sure of his own existence was that he was asking the question, “Do I exist?” If he didn’t exist, who was asking the question?

That made sense to me, as did the sentence “Cogito ergo sum” (also attributed to that Descartes character), which, as Mr. Foley, my high-school Latin teacher, convinced me, translates from the ancient Romans’ babble into English as “I think, therefore I am.”

It’s easier to believe that all this stuff is true than to invent some goofy conspiracy theory about its all having been made up just to make a fool of little old me.

Which leads us to Occam’s Razor.

Occam’s Razor

According to the entry in Wikipedia on Occam’s Razor, the concept was first expounded by “William of Ockham, a Franciscan friar who studied logic in the 14th century.” Often summarized (in Latin) as lex parsimoniae, or “the law of parsimony” (again according to that same Wikipedia entry), what it means is this: when faced with alternative explanations of anything, believe the simplest.

So, when I looked up in the sky from my back yard that day in the mid-1950s, and that cute little neighbor girl tried to convince me that what I saw was a flying saucer, and even claimed that she saw little alien figures looking over the edge, I was unconvinced. It was a lot easier to believe that she was a poor observer, and only imagined the aliens.

When, the next day, I read a newspaper story (yes, I started reading newspapers about a nanosecond after Miss Shay taught me to read in the first grade) claiming that what we’d seen was a U.S. Navy weather balloon, my intuitive grasp of Occam’s Razor (that was, of course, long before I’d ever heard of Occam, or learned that a razor wasn’t just a thing my father used to scrape hair off his face) caused me to immediately prefer the newspaper’s explanation to the drivel Nancy Pastorello had shoveled out.

Taken together, these two concepts form the foundation for the philosophy of science. Basically, the only thing I know for certain is that I exist, and the only thing you can be certain of is that you exist (assuming, of course, you actually think, which I have to take your word for). Everything else is conjecture, and I’m only going to accept the simplest of alternative conjectures.

Okay, so, having disposed of the two bedrock principles of the philosophy of science, it’s time to look at how we know what we think we know.

How We Know What We Think We Know

The only thing I (as the only person I’m certain exists) can do is pile up experience upon experience (assuming my memories are valid), interpreting each one according to Occam’s Razor, and fitting them together in a pattern that maximizes coherence, while minimizing the gaps and resolving the greatest number of the remaining inconsistencies.

Of course, I quickly notice that other people end up with patterns that differ from mine in ways that vary from inconsequential to really serious disagreements.

I’ve managed to resolve this dilemma by accepting the following conclusion:

Objective reality isn’t.

At first blush, this sounds like ambiguous nonsense. It isn’t, though. To understand it fully, you have to go out and get a nice, hot cup of coffee (or tea, or Diet Coke, or Red Bull, or anything else that’ll give you a good jolt of caffeine), sit down in a comfortable chair, and spend some time thinking about all the possible ways those three words can be interpreted either singly or in all possible combinations. There are, according to my count, fifteen possible combinations. You’ll find that all of them can be true simultaneously. They also all pass the Occam’s Razor test.

That’s how we know what we think we know.

STEM Careers for Women

Woman engineer
Women have more career options than STEM. Courtesy Shutterstock.

6 April 2018 – Folks are going to HATE what I have to say today. I expect to get comments accusing me of being a slug-brained, misogynist, reactionary imbecile. So be it. I often say things other people don’t want to hear, and I’m often accused of being a slug-brained imbecile. I’m sometimes accused of being reactionary.

I don’t think I’m usually accused of being misogynist, so that’ll be a new one.

I’m not often accused of being misogynist because I’ve got pretty good credentials in the promoting-women’s-interests department. I try to pay attention to what goes on in my women friends’ heads. I’m more interested in the girl inside than in their outsides. Thus, I actually do care about what’s important to them.

Historically, I’ve known a lot of exceptional women, and not a few who were not-so-exceptional, and, of course, I’ve met my share of morons. But, I’ve tried to understand what was going on in all their heads because I long ago noticed that just about everybody I encounter is able to teach me something if I pay attention.

So much for the preliminaries.

Getting more to the point of this blog entry, last week I listened to a Wilson Center webcast entitled “Opening Doors in Glass Walls for Women in STEM.” I’d hoped I might have something to add to the discussion, but I didn’t. I didn’t hear much in the “new ideas” department, either. It was mostly “woe is us ’cause women get paid less than men,” and “we’ve made some progress, but there still aren’t many women in STEM careers,” and stuff like that.

Okay. For those who don’t already know, STEM is an acronym for “Science, Technology, Engineering and Math.” It’s a big thing in education and career-development circles because it’s critical to our national technological development.

Without going into the latest statistics (’cause I’m too lazy this morning to look ’em up), it’s pretty well acknowledged that women get paid a whole lot less than men for doing the same jobs, and that a whole lot less than 50% of STEM workers are women, despite women making up half the available workforce.

I won’t say much about the pay gap, except to assert that paying people less than their efforts are worth is just plain dumb. It’s dumb for the employer because good talent will vote with their feet for higher pay. It’s dumb for the employee because he or she should vote with those feet by walking out the door to look for a more enlightened employer. It doesn’t matter whether you’re a man or a woman: you don’t want to be dependent for your income on a mismanaged company!

Enough said about the pay differential. What I want to talk about here is the idea that, since half the population is women, half the STEM workers should be women. I’m going to assert that’s equally dumb!

I do NOT assert that there is anything about women that makes them unsuited to STEM careers. It is true that women are significantly smaller physically (the last time I checked, the average American woman was 5’4″ tall, while the average American man was 5’10” tall with everything else more or less scaled to match), but that makes no nevermind for a STEM career. STEM jobs make demands on what’s between the ears, not what’s between the shoulders.

With regard to women’s brains’ suitability for STEM jobs, experience has shown me that there’s no significant (to a STEM career) difference between them and male brains. Women are every bit as adept at independent thinking, puzzle solving, memory tasks, and just about any measurable talent that might make a difference to a STEM worker. I’ve seen no study that showed women to be inferior to men with respect to mathematical or abstract reasoning, either. In fact, some studies have purported to show the reverse.

On the other hand, as far as I know, EVERY culture traditionally separates jobs into “women’s work” and “men’s work.” Being a firm believer in Darwinian evolution, I don’t argue with Mommy Nature’s way, but do ask “Why?”

Many decades ago, my advanced lab instructor asserted that “tradition is the sum total of things our ancestors over the past four million years have found to work.” I completely agree with him, with the important proviso that things change.

Four million years ago, our ancestors didn’t have ceramic tile floors in their condos, nor did they have cars with remote keyless entry locks. It was a lot tougher for them than it is for us, and survival was far less assured.

They were the guys who decided to have men make the hand axes and arrowheads, and that women should weave the baskets and make the soup. Most importantly for our discussion, they decided women should change the diapers.

Fast forward four million years, and we’re still doing the same things, more or less. Things, however, have changed, and we’re now having to rethink that division of labor.

Some jobs, like digging ditches, still require physical prowess, which makes them more suited to men than women. I’m ignoring (but not forgetting) all the manual labor women are asked to do all over the world. That’s not what I’m talking about here. I’m talking about STEM jobs, which DON’T require physical prowess.

So, why don’t women go after those cushy, high-paying STEM jobs, and, equally significant, once they have one of those jobs, why is it so hard to keep them there? One of the few things that came out of last week’s webcast (remember, this all started with my attending that webcast?) was the point that women leave STEM careers in droves. They abandon their hard-won STEM careers and go off to do something else.

The point I want to make with this essay is to suggest that maybe the reason women are underrepresented in STEM careers is that they actually have more options than men. Most importantly, they have the highly attractive (to them) option of the “homemaker” career.

Current thinking among the liberal intelligentsia is that “homemaker” is not much of a career. I simply don’t accept that idea. Housewife is just as important a job as, say, truck driver, bank president, or technology journalist. So, pooh!

The homemaker option is not open to most men. We may be willing to help out around the house, and may even feel driven to do our part, or at least try to find some part that could be ours to do. But, I can’t think of one of my male friends who’d be comfortable shouldering the whole responsibility.

I assert that four million years of evolution has wired up human brains for sexual dimorphism with regard to “guy jobs” and “girl jobs.” It just feels right for guys to do jobs that seem to be traditionally guy things and for women to do jobs that seem to be traditionally theirs.

Now, throughout most of evolutionary time, STEM jobs pretty much didn’t exist. One of the things our ancestors didn’t have four million years ago was trigonometry. In fact, they probably struggled with basic number theory. I did an experiment in high school that indicated the crows in my back yard couldn’t count beyond two. Australopithecus was probably a better mathematician than that, but likely not by much.

So, one of the things we have now that has avoided being shaped by natural-selection pressure is the option to pursue a STEM career. It’s pretty much evolutionarily neutral. STEM careers are probably equally attractive (or repulsive) to women and men.

I mention “repulsive” for a very good reason. Preparing oneself for a STEM career is hard.

Mathematics, especially, is one of the few subjects that give many, if not most, people phobias. Frankly, arithmetic lost me on the second day of first grade when Miss Shay passed out a list of addition tables and told us to memorize it. I thought the idea of arithmetic was a gas. Memorizing tables, however, was not on my To Do list. I expect most people feel the same way.

Learning STEM subjects involves a $%^-load of memorizing! So, it’s no wonder girls would rather play with dolls (and boys with trucks) than study STEM subjects. Eventually, playing with trucks leads to STEM careers. Playing with dolls does not.

Grown-up girls find they have the option of playing with dolls as a career. Grown-up boys don’t. So, choosing a STEM career is something grown-up boys really want to do if they can, but for girls, not so much. They can find something to do that’s more satisfying with less work.

So, they vote with their feet. THAT may be why it’s so hard to get women into STEM careers in the first place, and then to keep them there for the long haul.

Before you start having apoplectic fits imagining that I’m making a broad generalization that females don’t like STEM careers, recognize that what I’m describing IS a broad theoretical generalization. It’s meant to be.

In the real world there are 300 million people in the United States, half of whom are women, and each and every one of them gets to make a separate career choice based on what they want to do with their life. Some choose STEM careers. Some don’t.

My point is that you shouldn’t just assume that half of all STEM job slots ought to be filled by women. Half of the potential candidates may be women, but a fair fraction of them might prefer to go play somewhere else. Women may simply have more alternatives than men do, so you may end up with more men slotting into those STEM jobs because they have less choice.

You know, being a housewife ain’t such a bad gig!