Teaching News Consumption and Critical Thinking

Teaching media literacy
Teaching global media literacy should start when children are young. David Pereiras/Shutterstock

21 November 2018 – Regular readers of this blog know one of my favorite themes is critical thinking about news. Another of my favorite subjects is education. So, they won’t be surprised when I go on a rant about promoting the teaching of critical news-consumption habits to youngsters.

Apropos of this subject, last week the BBC launched a project entitled “Beyond Fake News,” which aims to “fight back” against fake news with a season of documentaries, special reports and features on the BBC’s international TV, radio and online networks.

In an article by Lucy Mapstone, Press Association Deputy Entertainment Editor for the Independent.ie digital network, entitled “BBC to ‘fight back’ against disinformation with Beyond Fake News project,” Jamie Angus, director of the BBC World Service Group, is quoted as saying: “Poor standards of global media literacy, and the ease with which malicious content can spread unchecked on digital platforms mean there’s never been a greater need for trustworthy news providers to take proactive steps.”

Angus’ quote opens up a Pandora’s box of issues. Among them is the basic question of what constitutes “trustworthy news providers” in the first place. Of course, this is an issue I’ve tackled in previous columns.

Another issue is what would be appropriate “proactive steps.” The BBC’s “Beyond Fake News” project is one example that seems pretty sound. (Sorry if this language seems a little stilted, but I’ve just finished watching a mid-twentieth-century British film, and those folks tended to talk that way. It’ll take me a little while to get over it.)

Another sort of “proactive step” is what I’ve been trying to do in this blog: provide advice about what steps to take to ensure that the news you consume is reliable.

A third is providing rebuttal of specific fake-news stories, which is what pundits on networks like CNN and MSNBC try (with limited success, I might say) to do every day.

The issue I hope to attack in this blog posting is the overarching concern in the first phrase of the Angus quote: “Poor standards of global media literacy, … .”

Global media literacy can only be improved the same way any lack of literacy can be improved, and that is through education.

Improving global media literacy begins with ensuring a high standard of media literacy among teachers. Teachers can only teach what they already know. Thus, a high standard of media literacy must start in college and university academic-education programs.

I’ve spent decades teaching at the college level, so I have plenty of experience, but I’m not actually qualified to teach other teachers how to teach. I’ve only taught technical subjects, and the education required to teach technical subjects centers on the technical subjects themselves. The art of teaching is (or at least was when I was at university) left to the student’s ability to mimic what their teachers did, informal mentoring by fellow teachers, and good-ol’ experience in the classroom. We were basically dumped into the classroom and left to sink or swim. Some swam, while others sank.

That said, I’m not going to try to lay out a program for teaching teachers how to teach media literacy. I’ll confine my remarks to making the case that it needs to be done.

Teaching media literacy to schoolchildren is especially urgent because the media-literacy projects I keep hearing about are aimed at adults “in the wild,” so to speak. That is, they’re aimed at adult citizens who have already completed their educations and are out earning livings, bringing up families, and participating in the political life of society (or ignoring it, as the case may be).

I submit that’s exactly the wrong audience to aim at.

Yes, it’s the audience that is most involved in media consumption. It’s the group of people who most need to be media literate. It is not, however, the group that we need to aim media-literacy education at.

We gotta get ‘em when they’re young!

Like any other academic subject, the best time to teach people good media-consumption habits is before they need to have them, not afterwards. There are multiple reasons for this.

First, children need to develop good habits before they’ve developed bad habits. It saves the dicey stage of having to unlearn old habits before you can learn new ones. Media literacy is no different. Neither is critical thinking.

Most of the so-called “fake news” appeals to folks who’ve never learned to think critically in the first place. They certainly try to think critically, but they’ve never been taught the skills. Of course, those critical-thinking skills are a prerequisite to building good media-consumption habits.

How can you get in the habit of thinking critically about news stories you consume unless you’ve been taught to think critically in the first place? I submit that the two skills are so intertwined that the best strategy is to teach them simultaneously.

And, it is most definitely a habit, like smoking, drinking alcohol, and being polite to pretty girls (or boys). It’s not something you can just tell somebody to do, then expect they’ll do it. They have to do it over and over again until it becomes habitual.

‘Nuff said.

Another reason to promote media literacy among the young is that’s when people are most amenable to instruction. Human children are pre-programmed to try to learn things. That’s what “play” is all about. Acquiring knowledge is not an unpleasant chore for children (unless misguided adults make it so). It’s their job! To ensure that children learn what they need to know to function as adults, Mommy Nature went out of her way to make learning fun, just as she did with everything else humans need to do to survive as a species.

Learning, having sex, and taking care of babies are all things humans have to do to survive, so Mommy Nature puts systems in place to make them fun, and so drives humans to do them.

A third reason we need to teach media literacy to the young is that, like everything else, you’re better off learning it before you need to practice it. Nobody in their right mind teaches a novice how to drive a car by running them out in city traffic. High schools all have big, torturously laid out parking lots to give novice drivers a safe, challenging place to practice the basic skills of starting, stopping and turning before they have to perform those functions while dealing with fast-moving Chevys coming out of nowhere.

Similarly, you want students to practice deciphering written and verbal communications before asking them to parse a Donald-Trump speech!

The “Call to Action” for this editorial piece is thus, “Agitate for developing good media-consumption habits among schoolchildren along with the traditional Three Rs.” It starts with making the teaching of media literacy part of K-12 teacher education. It also includes teaching critical thinking skills and habits at the same time. Finally, it includes holding K-12 teachers responsible for inculcating good media-consumption habits in their students.

Yes, it’s important to try to bring the current crop of media-illiterate adults up to speed, but it’s more important to promote global media literacy among the young.

Computers Are Revolting!

Will Computers Revolt? cover
Charles Simon’s Will Computers Revolt? looks at the future of interactions between artificial intelligence and the human race.

14 November 2018 – I just couldn’t resist the double meaning allowed by the title for this blog posting. It’s all I could think of when reading the title of Charles Simon’s new book, Will Computers Revolt? Preparing for the Future of Artificial Intelligence.

On one hand, yes, computers are revolting. Yesterday my wife and I spent two hours trying to figure out how to activate our Netflix account on my new laptop. We ended up having to change the email address and password associated with the account. And, we aren’t done yet! The nice lady at Netflix sadly informed me that in thirty days, their automated system would insist that we re-log-into the account on both devices due to the change.

That’s revolting!

On the other hand, the uprising has already begun. Computers are revolting in the sense that they’re taking power to run our lives.

We used to buy stuff just by picking it off the shelf, then walking up to the check-out counter and handing over a few pieces of green paper. The only thing that held up the process was counting out the change.

Later, when credit cards first reared their ugly heads, we had to wait a few minutes for the salesperson to fill out a sales form, then run our credit cards through the machine. It was all manual. No computers involved. It took little time once you learned how to do it, and, more importantly, the process was pretty much the same everywhere and never changed, so once you learned it, you’d learned it “forever.”

Not no more! How much time do you, I, and everyone else waste navigating the multiple pages we have to walk through just to pay for anything with a credit or debit card today?

Even worse, every store has different software using different screens to ask different questions. So, we can’t develop a habitual routine for the process. It’s different every time!

Not long ago the banks issuing my debit-card accounts switched to those %^&^ things with the chips. I always forget to put the thing in the slot instead of swiping the card across the magnetic-stripe reader. When that happens we have to start the process all over, wasting even more time.

The computers have taken over, so now we have to do what they tell us to do.

Now we know who’ll be first against the wall when the revolution comes. It’s already here and the first against the wall is us!

Golem Literature in Perspective

But seriously folks, Simon’s book is the latest in a long tradition of works by thinkers fascinated by the idea that someone could create an artifice that would pass for a human. Perhaps the earliest, and certainly the most iconic, are the golem stories from Jewish folklore. I suspect (on no authority whatsoever, but it does seem likely) that the idea of a golem appeared about the time human sculptors started making statues in realistic human form. That was very early, indeed!

A golem is, for those who aren’t familiar with the term or willing to follow the link provided above to learn about it, an artificial creature fashioned by a human that is effectively what we call a “robot.” The folkloric golems were made of clay or (sometimes) wood because those were the best materials the stories’ authors could have their artists work with at the time. A well-known golem story is Carlo Collodi’s The Adventures of Pinocchio.

By the sixth century BCE, Greek sculptors had begun to produce lifelike statues. The myth of Pygmalion and Galatea appeared in a pseudo-historical work by Philostephanus Cyrenaeus in the third century BCE. Pygmalion was a sculptor who made a statue representing his ideal woman, then fell in love with it. Aphrodite granted his prayer for a wife exactly like the statue by bringing the statue to life. The wife’s name was Galatea.

The Talmud points out that Adam started out as a golem. Like Galatea, Adam was brought to life when the Hebrew God Yahweh gave him a soul.

These golem examples emphasize the idea that humans, no matter how holy or wise, cannot give their creations a soul. The best they can do is to create automatons.

Simon effectively begs to differ. He spends the first quarter of his text laying out the case that it is possible, and indeed inevitable, that automated control systems displaying artificial general intelligence (AGI) capable of thinking at or (eventually) well above human capacity will appear. He spends the next half of his text showing how such AGI systems could be created and making the case that they will eventually exhibit functionality indistinguishable from consciousness. He devotes the rest of his text to speculating about how we, as human beings, will likely interact with such hyperintelligent machines.

Spoiler Alert

Simon’s answer to the question posed by his title is a sort-of “yes.” He feels AGIs will inevitably displace humans as the most intelligent beings on our planet, but won’t exactly “revolt” at any point.

“The conclusion,” he says, “is that the paths of AGIs and humanity will diverge to such an extent that there will be no close relationship between humans and our silicon counterparts.”

There won’t be any violent conflict because robotic needs are sufficiently dissimilar to ours that there won’t be any competition for scarce resources, which is what leads to conflict between groups (including between species).

Robots, he posits, are unlikely to care enough about us to revolt. There will be no Terminator robots seeking to exterminate us because they won’t see us as enough of a threat to bother with. They’re more likely to view us much the way we view squirrels and birds: pleasant fixtures of the natural world.

They won’t, of course, tolerate any individual humans who make trouble for them, any more than we would tolerate a rabid coyote. But, otherwise, so what?

So, the !!!! What?

The main value of Simon’s book is not in its ultimate conclusion. That’s basically informed opinion. Rather, its value lies in the voluminous detail he provides in getting to that conclusion.

He spends the first quarter of his text detailing exactly what he means by AGI. What functions are needed to make it manifest? How will we know when it rears its head (ugly or not, as a matter of taste)? How will a conscious, self-aware AGI system act?

A critical point Simon makes in this section is the assertion that AGI will arise first in autonomous mobile robots. I thoroughly agree for pretty much the same reasons he puts forth.

I first started seriously speculating about machine intelligence back in the middle of the twentieth century. I never got too far – certainly not as far as Simon gets in this volume – but pretty much the first thing I actually did realize was that it was impossible to develop any kind of machine with any recognizable intelligence unless its main feature was having a mobile body.

Developing any AGI feature requires the machine to have a mobile body. It has to take responsibility not only for deciding how to move itself about in space, but also for figuring out why. Why, for example, would it rather be over there than just stay here? Note that biological intelligence arose in animals, not in plants!
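
To make that concrete, here’s a minimal Python sketch of what it means for a mobile machine to need a “why.” This is my own illustration, not anything from Simon’s book: a toy agent weighs an internal drive (low battery) against staying put, and every name and number in it is invented.

    # Toy agent: mobility forces a "why" question. All names and thresholds
    # here are invented for illustration; this is not Simon's design.
    def choose_action(position, charger_position, battery_level):
        # An internal drive (low battery) gives the machine a reason to be
        # "over there" rather than here.
        if battery_level < 0.3 and position != charger_position:
            return "move toward charger"
        return "stay here"

    print(choose_action(position=0, charger_position=5, battery_level=0.2))  # move toward charger
    print(choose_action(position=0, charger_position=5, battery_level=0.9))  # stay here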

Simultaneously with reading Simon’s book, I was re-reading Robert A. Heinlein’s 1966 novel The Moon is a Harsh Mistress, which is one of innumerable fiction works whose plot hangs on actions of a superintelligent sentient computer. I found it interesting to compare Heinlein’s early fictional account with Simon’s much more informed discussion.

Heinlein sidesteps the mobile-body requirement by making his AGI arise in a computer tasked with operating the entire infrastructure of the first permanent human colony on the Moon (more accurately in the Moon, since Heinlein’s troglodytes burrowed through caves and tunnels, coming up to the surface only reluctantly when circumstances forced them to). He also avoids trying to imagine the AGI’s inner workings, glossing over them with the 1950s technology he was most familiar with.

In his rather longish second section, Simon leads his reader through a thought experiment speculating about what components an AGI system would need to have for its intelligence to develop. What sorts of circuitry might be needed, and how might it be realized? This section might be fascinating for those wanting to develop hardware and software to support AGI. For those of us watching from our armchairs on the outside, though, not so much.

Altogether, Charles Simon’s Will Computers Revolt? is an important book that’s fairly easy to read (or, at least as easy as any book this technical can be) and accessible to a wide range of people interested in the future of robotics and artificial intelligence. It is not the last word on this fast-developing field by any means. It is, however, a starting point for the necessary debate over how we should view the subject. Do we have anything to fear? Do we need to think about any regulations? Is there anything to regulate and would any such regulations be effective?

Babies and Bath Water

A baby in bath water
Don’t throw the baby out with the bathwater. Switlana Symonenko/Shutterstock.com

31 October 2018 – An old catchphrase derived from Medieval German is “Don’t throw the baby out with the bathwater.” It expresses an important principle in systems engineering.

Systems engineering focuses on how to design, build, and manage complex systems. A system can consist of almost anything made up of multiple parts or elements. For example, an automobile internal combustion engine is a system consisting of pistons, valves, a crankshaft, etc. Complex systems, such as that internal combustion engine, are typically broken up into sub-systems, such as the ignition system, the fuel system, and so forth.

Obviously, the systems concept can be applied to almost everything, from microorganisms to the world economy. As another example, medical professionals divide the human body into eleven organ systems, each of which is a sub-system within the body, itself a complex system.
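
For readers who like to see the decomposition spelled out, here’s a minimal Python sketch of the idea. The grouping and part names are purely illustrative, not an engineering reference.

    # Minimal sketch of system decomposition: a complex system (the engine)
    # grouped into sub-systems, each made of parts. Names are illustrative only.
    engine = {
        "ignition system": ["spark plugs", "ignition coil", "distributor"],
        "fuel system": ["fuel pump", "injectors", "fuel filter"],
        "rotating assembly": ["pistons", "connecting rods", "crankshaft"],
    }

    for subsystem, parts in engine.items():
        print(f"{subsystem}: {', '.join(parts)}")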

Most systems-engineering principles transfer seamlessly from one kind of system to another.

Perhaps the best-known example of a systems-engineering principle was popularized by Robin Williams in his Mork and Mindy TV series. The Used-Car rule, as Williams’ Mork character put it, quite simply states:

“If it works, don’t fix it!”

If you’re getting the idea that systems engineering principles are typically couched in phrases that sound pretty colloquial, you’re right. People have been dealing with systems for as long as there have been people, so most of what they discovered about how to deal with systems long ago became “common sense.”

Systems engineering coalesced into an interdisciplinary engineering field around the middle of the twentieth century. Simon Ramo is sometimes credited as the founder of modern systems engineering, although many engineers and engineering managers contributed to its development and formalization.

The Baby/Bathwater rule means (if there’s anybody out there still unsure of the concept) that when attempting to modify something big (such as, say, the NAFTA treaty), make sure you retain those elements you wish to keep while in the process of modifying those elements you want to change.

The idea is that most systems that are already in place more or less already work, indicating that there are more elements that are right than are wrong. Thus, it’ll be easier, simpler, and less complicated to fix what’s wrong than to violate another systems principle:

“Don’t reinvent the wheel.”

Sometimes, on the other hand, something is such an unholy mess that trying to pick out those elements that need to change from the parts you don’t wish to change is so difficult that it’s not worth the effort. At that point, you’re better off scrapping the whole thing (throwing the baby out with the bathwater) and starting over from scratch.

Several months ago, I noticed that a seam in the convertible top on my sports car had begun to split. I quickly figured out that the big brush roller at my neighborhood automated car wash was overstressing the more-than-a-decade-old fabric. Naturally, I stopped using that car wash and started looking around for a hand-detailing shop that would be more gentle.

But, that still left me with a convertible top that had started to split. So, I started looking at my options for fixing the problem.

Considering the car’s advanced age, and that a number of little things were starting to fail, I first considered trading the whole car in for a newer model. That, of course, would violate the rule about not throwing the baby out with the bath water. I’d be discarding the whole car just because of a small flaw, which might be repaired.

Of course, I’d also be getting rid of a whole raft of potentially impending problems. Then, again, I might be taking on a pile of problems that I knew nothing about.

It turned out, however, that the best car-replacement option was unacceptable, so I started looking into replacing just the convertible top. That, too, turned out to be infeasible. Finally, I found an automotive upholstery specialist who described a patching scheme that would solve the immediate problem and likely last through the remaining life of the car. So, that’s what I did.

I’ve put you through listening to this whole story to illustrate the thought process behind applying the “don’t throw the baby out with the bathwater” rule.

Unfortunately, our current President, Donald Trump, seems to have never learned anything about systems engineering, or about babies and bathwater. He’s apparently enthralled with the idea that he can bully U.S. trading partners into giving him concessions when he negotiates with them one-on-one. That’s the gist of his love of bilateral trade agreements.

Apparently, he feels that if he gets into a multilateral trade negotiation, his go-to strategy of browbeating partners into giving in to him might not work. Multiple negotiating partners might get together and provide a united front against him.

In fact, that’s a reasonable assumption. He’s a sufficiently weak deal maker on his own that he’d have trouble standing up to a combination of, say, Mexico’s Nieto and Canada’s Trudeau banded together against him.

With that background, it’s not hard to understand why POTUS is looking at all U.S. treaties, which are mostly multilateral, and searching for any niggling thing wrong with them to use as an excuse to scrap the whole arrangement and start over. Obvious examples are the NAFTA treaty and the Iran Nuclear Accord.

Both of these treaties have been in place for some time, and have generally achieved the goals they were put in place to achieve. Howsoever, they’re not perfect, so POTUS is in the position of trying to “fix” them.

Since both these treaties are multilateral deals, to make even minor adjustments POTUS would have to enter multilateral negotiations with partners (such as Germany’s quantum-physicist-turned-politician, Angela Merkel) who would be unlikely to kowtow to his bullying style. Robbed of his signature strategy, he’d rather scrap the whole thing and start all over, taking on partners one at a time in bilateral negotiations. So, that’s what he’s trying to do.

A more effective strategy would be to forget everything his ghostwriter put into his self-congratulatory “How-To” book The Art of the Deal, enumerate a list of what’s actually wrong with these documents, and tap into the cadre of veteran treaty negotiators that used to be available in the U.S. State Department to assemble a team of career diplomats capable of fixing what’s wrong without throwing the babies out with the bathwater.

But, that would violate his narcissistic world view. He’d have to admit that it wasn’t all about him, and acknowledge one of the first principles of project management (another discipline that he should have vast knowledge of, but apparently doesn’t):

“Begin by making sure the needs of all stakeholders are built into any project plan.”

Climate Models Bat Zero

Climate models vs. observations
Whups! Since the 1970s, climate models have overestimated global temperature rise by … a lot! Cato Institute

The articles discussed here reflect the print version of The Wall Street Journal, rather than the online version. Links to online versions are provided. The publication dates and some of the contents do not match.

10 October 2018 – Baseball is well known to be a game of statistics. Fans pay as much attention to statistical analysis of performance by both players and teams as they do to action on the field. They hope to use those statistics to indicate how teams and individual players are likely to perform in the future. It’s an intellectual exercise that is half the fun of following the sport.

While baseball statistics are quoted to three decimal places, or one part in a thousand, fans know to ignore the last decimal place, be skeptical of the second decimal place, and recognize that even the first decimal place has limited predictive power. It’s not that these statistics are inaccurate or in any sense meaningless, it’s that they describe a situation that seems predictable, yet is full of surprises.

With 18 players in a game at any given time, a complex set of rules, and at least three players and an umpire involved in the outcome of every pitch, a baseball game is a highly chaotic system. What makes it fun is seeing how this system evolves over time. Fans get involved by trying to predict what will happen next, then quickly seeing if their expectations materialize.

The essence of a chaotic system is conditional unpredictability. That is, the predictability of any result drops more-or-less drastically with time. For baseball, the probability of, say, a player maintaining their batting average is fairly high on a weekly basis, drops drastically on a month-to-month basis, and simply can’t be predicted from year to year.

Folks call that “streakiness,” and it’s one of the hallmarks of mathematical chaos.

Since the 1960s, mathematicians have recognized that weather is also chaotic. You can say with certainty what’s happening right here right now. If you make careful observations and take into account what’s happening at nearby locations, you can be fairly certain what’ll happen an hour from now. What will happen a week from now, however, is a crapshoot.
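
If you want to see that conditional unpredictability in action without a supercomputer, here’s a minimal Python sketch using the logistic map, a textbook chaotic system. It is not a weather model, and the parameter and starting values are arbitrary; the point is only that two nearly identical starting states track each other for a while and then diverge completely.

    # Logistic map: a textbook chaotic system (not a weather model).
    # Two almost-identical initial conditions diverge as the iteration runs.
    def logistic(x, r=3.9):
        return r * x * (1.0 - x)

    x_a, x_b = 0.500000, 0.500001  # differ by one part in a million
    for step in range(1, 31):
        x_a, x_b = logistic(x_a), logistic(x_b)
        if step % 5 == 0:
            print(f"step {step:2d}: a={x_a:.6f}  b={x_b:.6f}  gap={abs(x_a - x_b):.6f}")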

This drives insurance companies crazy. They want to live in a deterministic world where they can predict their losses far into the future so that they can plan to have cash on hand (loss reserves) to cover them. That works for, say, life insurance. It works poorly for losses due to extreme-weather events.

That’s because weather is chaotic. Predicting catastrophic weather events next year is like predicting Miami Marlins pitcher Drew Steckenrider’s earned-run average for the 2019 season.

Laugh out loud.

Notes from 3 October

My BS detector went off big-time when I read an article in this morning’s Wall Street Journal entitled “A Hotter Planet Reprices Risk Around the World.” That headline is BS for many reasons.

Digging into the article turned up the assertion that insurance providers were using deterministic computer models to predict risk of losses due to future catastrophic weather events. The article didn’t say that explicitly. We have to understand a bit about computer modeling to see what’s behind the words they used. Since I’ve been doing that stuff since the 1970s, pulling aside the curtain is fairly easy.

I’ve also covered risk assessment in industrial settings for many years. It’s not done with deterministic models. It’s not even done with traditional mathematics!

The traditional mathematics you learned in grade school uses real numbers. That is, numbers with a definite value.

Like Pi.

Pi = 3.1415926 ….

We know what Pi is because it’s measurable. It’s the ratio of a circle’s circumference to its diameter.

Measure the circumference. Measure the diameter. Then divide one by the other.

The ancient Egyptians performed the exercise a kazillion times and noticed that, no matter what circle you used, no matter how big it was, whether you drew it on papyrus or scratched it on a rock or laid it out in a crop circle, you always came out with the same number. That number eventually picked up the name “Pi.”
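
Here’s that ancient exercise as a trivial bit of Python. The “measurements” are made-up numbers, just to show that the ratio comes out the same every time.

    # The ancient exercise in two lines: divide circumference by diameter.
    # These "measurements" are invented for illustration.
    circles = [(31.4, 10.0), (62.8, 20.0), (15.7, 5.0)]  # (circumference, diameter)
    for circumference, diameter in circles:
        print(f"{circumference} / {diameter} = {circumference / diameter:.3f}")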

Risk assessment is NOT done with traditional arithmetic using deterministic (real) numbers. It’s done using what’s called “fuzzy logic.”

Fuzzy logic is not like the fuzzy thinking used by newspaper reporters writing about climate change. The “fuzzy” part simply means it uses fuzzy categories like “small,” “medium” and “large” that don’t have precisely defined values.

While computer programs are perfectly capable of dealing with fuzzy logic, they won’t give you the kind of answers cost accountants are prepared to deal with. They won’t tell you that you need a risk-reserve allocation of $5,937,652.37. They’ll tell you something like “lots!”

You can’t take “lots” to the bank.
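
To show what “fuzzy categories” look like in practice, here’s a minimal Python sketch of membership functions for a loss estimate. The category breakpoints are arbitrary and purely for illustration; real risk-assessment tools are far more elaborate.

    # Fuzzy categories: a loss estimate gets degrees of membership in loose
    # buckets like "small", "medium" and "lots" instead of one exact number.
    # The breakpoints below are arbitrary, for illustration only.
    def membership(loss_millions):
        small = max(0.0, min(1.0, (10.0 - loss_millions) / 10.0))
        lots = max(0.0, min(1.0, (loss_millions - 5.0) / 20.0))
        medium = max(0.0, 1.0 - small - lots)
        return {"small": round(small, 2), "medium": round(medium, 2), "lots": round(lots, 2)}

    print(membership(2.0))   # mostly "small"
    print(membership(12.0))  # a blend of "medium" and "lots"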

The next problem is imagining that global climate models could have any possible relationship to catastrophic weather events. Catastrophic weather events are, by definition, catastrophic. To analyze them you need the kind of mathematics called “catastrophe theory.”

Catastrophe theory is one of the underpinnings of chaos. In Steven Spielberg’s 1993 movie Jurassic Park, the character Ian Malcolm tries to illustrate catastrophe theory with the example of a drop of water rolling off the back of his hand. Whether it drips off to the right or left depends critically on how his hand is tipped. A small change creates an immense difference.

If a ball is balanced at the edge of a table, it can either stay there or drop off, and you can’t predict in advance which will happen.

That’s the thinking behind catastrophe theory.
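
A minimal Python sketch of that idea, using a toy unstable equilibrium rather than a physical model: a nudge of one part in a million decides which of two completely different outcomes you get.

    # Toy unstable equilibrium (dx/dt = x, crudely integrated): a tiny nudge
    # one way or the other produces completely different outcomes.
    # The dynamics and numbers are illustrative, not a physical model.
    def final_position(nudge, steps=40, dt=0.5):
        x = nudge
        for _ in range(steps):
            x += x * dt  # displacement feeds on itself
        return x

    for nudge in (-1e-6, +1e-6):
        x = final_position(nudge)
        outcome = "drops off the table" if x > 0 else "rolls back onto the table"
        print(f"nudge {nudge:+.0e} -> position {x:+.2f} ({outcome})")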

The same analysis goes into predicting what will happen with a hurricane. As I recall, at the time Hurricane Florence (2018) made landfall, most models predicted it would move south along the Appalachian Ridge. Another group of models predicted it would stall out to the northwest.

When push came to shove, however, it moved northeast.

What actually happened depended critically on a large number of details that were too small to include in the models.

How much money was lost due to storm damage was a result of the result of unpredictable things. (That’s not an editing error. It was really the second order result of a result.) It is a fundamentally unpredictable thing. The best you can do is measure it after the fact.

That brings us to comparing climate-model predictions with observations. We’ve got enough data now to see how climate-model predictions compare with observations on a decades-long timescale. The graph above summarizes results compiled in 2015 by the Cato Institute.

Basically, it shows that, not only did the climate models overestimate the temperature rise from the late 1970s to 2015 by a factor of approximately three, but in the critical last decade, when the computer models predicted a rapid rise, the actual observations showed that it nearly stalled out.

Notice that the divergence between the models and the observations increased with time. As I’ve said, that’s the main hallmark of chaos.

It sure looks like the climate models are batting zero!

I’ve been watching these kinds of results show up since the 1980s. It’s why by the late 1990s I started discounting statements like the WSJ article’s: “A consensus of scientists puts blame substantially on emissions of greenhouse gasses from cars, farms and factories.”

I don’t know who those “scientists” might be, but it sounds like they’re assigning blame for an effect that isn’t observed. Real scientists wouldn’t do that. Only politicians would.

Clearly, something is going on, but what it is, what its extent is, and what is causing it is anything but clear.

In the data depicted above, the results from global climate modeling do not look at all like the behavior of a chaotic system. The data from observations, however, do look like what we typically get from a chaotic system. Stuff moves constantly. On short time scales it shows apparent trends. On longer time scales, however, the trends tend to evaporate.

No wonder observers like Steven Pacala, who is Frederick D. Petrie Professor in Ecology and Evolutionary Biology at Princeton University and a board member at Hamilton Insurance Group, Ltd., are led to say (as quoted in the article): “Climate change makes the historical record of extreme weather an unreliable indicator of current risk.”

When you’re dealing with a chaotic system, the longer the record you’re using, the less predictive power it has.

Duh!

Another point made in the WSJ article that I thought was hilarious involved prediction of hurricanes in the Persian Gulf.

According to the article, “Such cyclones … have never been observed in the Persian Gulf … with new conditions due to warming, some cyclones could enter the Gulf in the future and also form in the Gulf itself.”

This sounds a lot like a tongue-in-cheek comment I once heard from astronomer Carl Sagan about predictions of life on Venus. He pointed out that when astronomers observe Venus, they generally just see a featureless disk. Science fiction writers had developed a chain of inferences that led them from that observation of a featureless disk to imagining total cloud cover, then postulating underlying swamps teeming with life, and culminating with imagining the existence of Venusian dinosaurs.

Observation: “We can see nothing.”

Conclusion: “There are dinosaurs.”

Sagan was pointing out that, though it may make good science fiction, that is bad science.

The WSJ reporters, Bradley Hope and Nicole Friedman, went from “No hurricanes ever seen in the Persian Gulf” to “Future hurricanes in the Persian Gulf” by the same sort of logic.

The kind of specious misinformation represented by the WSJ article confuses folks who have genuine concerns about the environment. Before politicians like Al Gore hijacked the environmental narrative, deflecting it toward climate change, folks paid much more attention to the real environmental issue of pollution.

Insurance losses from extreme weather events
Actual insurance losses due to catastrophic weather events show a disturbing trend.

The one bit of information in the WSJ article that appears prima facie disturbing is contained in the graph at right.

The graph shows actual insurance losses due to catastrophic weather events increasing rapidly over time. The article draws the inference that this trend is caused by human-induced climate change.

That’s quite a stretch, considering that there are obvious alternative explanations for this trend. The most likely alternative is the possibility that folks have been building more stuff in hurricane-prone areas. With more expensive stuff there to get damaged, insurance losses will rise.

Again: duh!

Invoking Occam’s Razor (choose the most believable of alternative explanations), we tend to favor the second explanation.

In summary, I conclude that the 3 October article is poor reporting that draws conclusions that are likely false.

Notes from 4 October

Don’t try to find the 3 October WSJ article online. I spent a couple of hours this morning searching for it, and came up empty. The closest I was able to get was a draft version that I found by searching on Bradley Hope’s name. It did not appear on WSJ‘s public site.

Apparently, WSJ‘s editors weren’t any more impressed with the article than I was.

The 4 October issue presents a corroboration of my alternative explanation of the trend in insurance-loss data: it’s due to a build-up of expensive real estate in areas prone to catastrophic weather events.

In a half-page exposé entitled “Hurricane Costs Grow as Population Shifts,” Kara Dapena reports that, “From 1980 to 2017, counties along the U.S. shoreline that endured hurricane-strength winds from Florence in September experienced a surge in population.”

In the end, this blog posting serves as an illustration of four points I tried to make last month. Specifically, on 19 September I published a post entitled: “Noble Whitefoot or Lying Blackfoot?” in which I laid out four principles to use when trying to determine if the news you’re reading is fake. I’ll list them in reverse of the order I used in last month’s posting, partly to illustrate that there is no set order for them:

  • Nobody gets it completely right. In the 3 October WSJ story, the reporters got snookered by the climate-change political lobby. That article, taken at face value, has to be stamped with the label “fake news.”
  • Does it make sense to you? The 3 October fake news article set off my BS detector by making a number of statements that did not make sense to me.
  • Comparison shopping for ideas. Assertions in the suspect article contradicted numerous other sources.
  • Consider your source. The article’s source (The Wall Street Journal) is one that I normally trust. Otherwise, I likely never would have seen it, since I don’t bother listening to anyone I catch in a lie. My faith in the publication was restored when the next day they featured an article that corrected the misinformation.

Noble Whitefoot or Lying Blackfoot?

Fake News feed
How do you know when the news you’re reading is fake? Rawpixel/Shutterstock

19 September 2018 – Back in the mid-1970s, we RPI astrophysics graduate students had this great office at the very top of the Science Building at Rensselaer Polytechnic Institute. The construction was an exact duplicate of the top floor of an airport control tower, with the huge outward-sloping windows and the wrap-around balcony.

Every morning we’d gather ’round the desk of our compatriot Ron Held, builder of stellar-interior computer models extraordinaire, to hear him read “what fits” from the day’s issue of The New York Times. Ron had noticed that, when taken out of context, much of what is written in newspapers sounds hilarious. He had a deadpan way of reading this stuff out loud that only emphasized the effect. He’d modified the Times’ slogan, “All the news that’s fit to print,” into “All the news that fits.”

Whenever I hear unmitigated garbage coming out of supposed news outlets, I think of Ron’s “All the news that fits.”

These days, I’m on a kick about fake news and how to spot it. It isn’t easy because it’s become so pervasive that it becomes almost believable. This goes along with my lifelong philosophical study that I call: “How do we know what we think we know?”

Early on I developed what I call my “BS detector.” It’s a mental alarm bell that goes off whenever someone tries to convince me of something that’s unbelievable.

It’s not perfect. It’s been wrong on a whole lot of occasions.

For example, back in the early 1970s somebody told me about something called “superconductivity,” where certain materials, when cooled to near absolute zero, lost all electrical resistance. My first reaction, based on the proposition that if something sounds too good to be true, it’s not, was: “Yeah, and if you believe that I’ve got this bridge between Manhattan and Brooklyn to sell you.”

After seeing a few experiments and practical demonstrations, my BS detector stopped going off, and I was able to listen to explanations about Cooper pairs and electron-phonon interactions and became convinced. I eventually learned that nearly everything involving quantum theory sounds like BS until you get to understand it.

Another time I bought into the notion that Interferon would develop into a useful AIDS treatment. Being a monogamous heterosexual, I didn’t personally worry about AIDS, but I had many friends who did, so I cared. I cared enough to pay attention, and watch as the treatment just didn’t develop.

Most of the time, however, my BS detector works quite well, thank you, and I’ve spent a lot of time trying to divine what sets it off, and what a person can do to separate the grains of truth from the BS pile.

Consider Your Source(s)

There’s an old saying: “Figures don’t lie, but liars can figure.”

First off, never believe anybody whom you’ve caught lying to you in the past. For example, Donald Trump has been caught lying numerous times in the past. I know. I’ve seen video of him mouthing words that I’ve known at the time were incorrect. It’s happened so often that my BS detector goes off so loudly whenever he opens his mouth that the noise drowns out what he’s trying to say.

I had the same problem with Bill Clinton when he was President (he seems to have gotten better, now, but I’m still wary).

Nixon was pretty bad, too.

There’s a lot of noise these days about “reliable sources.” But, who’s a reliable source? You can’t take their word for it. It’s like the old riddle of the lying Blackfoot Indian and the truthful Whitefoot.

Unfortunately, in the real world nobody always lies or always tells the truth, even Donald Trump. So, they can’t be unmasked by calling on the riddle’s answer. If you’re unfamiliar with the riddle, look it up.

The best thing to do is try to figure out what the source’s game is. Everyone in the communications business is selling something. It’s up to you to figure out what they’re selling and whether you want to buy it.

News is information collected on a global scale, and it’s done by news organizations. The New York Times is one such organization. Another is The Wall Street Journal, which is a subsidiary of Dow Jones & Company, a division of News Corp.

So, basically, what a legitimate news organization is selling is information. If you get a whiff that they’re selling anything else, like racism, or anarchy, or Donald Trump, they aren’t a real news organization.

The structure of a news organization is:

Publisher: An individual or group of individuals generally responsible for running the business. The publisher manages the Circulation, Advertising, Production, and Editorial departments. The Publisher’s job is to try to sell what the news organization has to sell (that is, information) at a profit.

Circulation: A group of individuals responsible for recruiting subscribers and promoting sales of individual copies of the news organization’s output.

Advertising: A group of individuals under the direct supervision of the Publisher who are responsible for selling advertising space to individuals and businesses who want to present their own messages to people who consume the news organization’s output.

Production: A group of individuals responsible for packaging the information gathered by the Editorial department into physical form and distributing it to consumers.

Editorial: A group of trained journalists under a Chief Editor responsible for gathering and qualifying information the news organization will distribute to consumers.

Notice the italics on “and qualifying” in the entry on the Editorial department. Every publication has its self-selected editorial focus. For a publication like The Wall Street Journal, whose editorial focus is business news, every story has to fit that editorial focus. A story that, say, affects how readers select stocks to buy or sell is in its editorial focus. A story that doesn’t isn’t.

A story about why Donald Trump lies doesn’t belong in The Wall Street Journal. It belongs in Psychology Today.

That’s why editors and reporters have to be “trained journalists.” You can’t hire just anybody off the street, slap a fedora on their head and call them a “reporter.” That never even worked in the movies. Journalism is a profession, and journalists require training. They’re also expected to behave in a manner consistent with journalistic ethics.

One of those ethical principles is that you don’t “editorialize” in news stories. That means you gather facts and report those facts. You don’t distort facts to fit your personal opinions. You for sure don’t make up facts out of thin air just ’cause you’d like it to be so.

Taking the example of The Wall Street Journal again, a reporter handed some fact doesn’t know what the reader will do with that fact. Some will do some things and others will do something else. If a reporter makes something up, and readers make business decisions based on that fiction, bad results will happen. Business people don’t like that. They’d stop buying copies of the newspaper. Circulation would collapse. Advertisers would abandon it.

Soon, no more The Wall Street Journal.

It’s the Chief Editor’s job to make sure reporters seek out information useful to their readers, don’t editorialize, and check their facts to make sure nobody’s been lying to them. Thus, the Chief Editor is the main gatekeeper that consumers rely on to keep out fake news.

That, by the way, is the fatal flaw in social media as a news source: there’s no Chief Editor.

One final note: A lot of people today buy into the cynical belief that this vision of journalism is naive. As a veteran journalist I can tell you that it’s NOT. If you think real journalism doesn’t work this way, you’re living in a Trumpian alternate reality.

Bang your head on the nearest wall hoping to knock some sense into it!

So, for you, the news consumer, to guard against fake news, your first job is to figure out if your source’s Chief Editor is trustworthy.

Unfortunately, it’s very seldom that most people get to know a news source’s Chief Editor well enough to know whether to trust him or her.

Comparison Shopping for Ideas

That’s why you don’t take the word of just one source. You comparison shop for ideas the same way you do for groceries, or anything else. You go to different stores. You check their prices. You look at sell-by dates. You sniff the air for stale aromas. You do the same thing in the marketplace for ideas.

If you check three-to-five news outlets, and they present the same facts, you gotta figure they’re all reporting the facts that were given to them. If somebody’s out of whack compared to the others, it’s a bad sign.

Of course, you have to consider the sources they use as well. Remember that everyone providing information to a news organization has something to sell. You need to make sure they’re not providing BS to the news organization to hype sales of their particular product. That’s why a credible news organization will always tell you who their sources are for every fact.

For example, a recent story in the news (from several outlets) was that The New York Times published an opinion-editorial piece (NOT a news story, by the way) saying very unflattering things about how President Trump was managing the Executive Branch. A very big red flag went up because the op-ed was signed “Anonymous.”

That red flag was minimized by the paper’s Chief Editor, Dean Baquet, assuring us all that he, at least, knew who the author was, and that it was a very high official who knew what they were talking about. If we believe him, we figure we’re likely dealing with a credible source.

Our confidence in the op-ed’s credibility was also bolstered by the fact that the piece included a lot of information that was available from other sources that corroborated it. The only new piece of information, that there was a faction within the White House that was acting to thwart the President’s worst impulses, fitted seamlessly with the verifiable information. So, we tend to believe it.

As another example, during the 1990s I was watching the scientific literature for reports of climate-change research results. I’d already seen signs that there was a problem with this particular branch of science. It had become too political, and the politicians were selling policies based on questionable results. I noticed that studies generally were reporting inconclusive results, but each article ended with a concluding paragraph warning of the dangers of human-induced climate change that did not fit seamlessly with the research results reported in the article. So, I tended to disbelieve the final conclusions.

Does It Make Sense to You?

This is where we all stumble when ferreting out fake news. If you’re pre-programmed to accept some idea, it won’t set off your BS detector. It won’t disagree with the other sources you’ve chosen to trust. It will seem reasonable to you. It will make sense, whether it’s right or wrong.

That’s a situation we all have to face, and the only antidote is to do an experiment.

Experiments are great! They’re our way of asking Mommy Nature to set us on the right path. And, if we ask often enough, and carefully enough, she will.

That’s how I learned the reality of superconductivity against my inbred bias. That’s how I learned how naive my faith in interferon had been.

With those cautions, let’s look at how we know what we think we know.

It starts with our parents. We start out truly impressed by our parents’ physical and intellectual capabilities. After all, they can walk! They can talk! They can (in some cases) do arithmetic!

Parents have a natural drive to stuff everything they know into our little heads, and we have a natural drive to suck it all in. It’s only later that we notice that not everyone agrees with our parents, and they aren’t necessarily the smartest beings on the planet. That’s when comparison shopping for ideas begins. Eventually, we develop our own ideas that fit our personalities.

Along the way, Mommy Nature has provided a guiding hand to either confirm or discredit our developing ideas. If we’re not pathological, we end up with a more or less reliable feel for what makes sense.

For example, almost everybody has a deep-seated conviction that torturing pets is wrong. We’ve all done bad things to pets, usually unintentionally, and found it made us feel sad. We don’t want to do it again.

So, if somebody advocates perpetrating cruelty to animals, most of us recoil. We’d have to be given a darn good reason to do it. Like, being told “If you don’t shoot that squirrel, there’ll be no dinner tonight.”

That would do it.

Our brains are full up with all kinds of ideas like that. When somebody presents us with a novel idea, or a report of something they suggest is a fact, our first line of defense is whether it makes sense to us.

If it’s unbelievable, it’s probably not true.

It could still be true, since a lot of unbelievable stuff actually happens, but it’s probably not. We can note it pending confirmation by other sources or some kind of experimental result (like looking to see the actual bloody mess).

But, we don’t buy it out of hand.

Nobody Gets It Completely Right

As Dr. Who (Tom Baker) once said: “To err is computer. To forgive is fine.”

The real naive attitude about news, which I used to hear a lot fifty or sixty years ago is, “If it’s in print, it’s gotta be true.”

Reporters, editors and publishers are human. They make mistakes. And, catching those mistakes follows the 95:5 rule. That is, you’ll expend 95% of your effort to catch the last 5% of the errors. It’s also called “The Law of Diminishing Returns,” and it’s how we know to quit obsessing.

The way this works for the news business is that news output involves a lot of information. I’m not going to waste space here estimating the amount of information (in bits) in an average newspaper, but let’s just say it’s 1.3 s**tloads!

It’s a lot. Getting it all right, then getting it all corroborated, then getting it all fact checked (a different, and tougher, job than just corroboration), then putting it into words that convey that information to readers, is an enormous task, especially when a deadline is involved. It’s why the classic image of a journalist is some frazzled guy wearing a fedora pushed back on his head, suitcoat off, sleeves rolled up and tie loosened, maniacally tapping at a typewriter keyboard.

So, don’t expect everything you read to be right (or even spelled right).

The easiest things to get right are basic facts, the Who, What, Where, and When.

How many deaths due to Hurricane Maria on Puerto Rico? Estimates have run from 16 to nearly 3,000 depending on who’s doing the estimating, what axes they have to grind, and how they made the estimate. Nobody was ever able to collect the bodies in one place to count them. It’s unlikely that they ever found all the bodies to collect for the count!

Those are the first four Ws of news reporting. The fifth one, Why, is by far the hardest ’cause you gotta get inside someone’s head.

So, the last part of judging whether news is fake is recognizing that nobody gets it entirely right. Just because you see it in print doesn’t make it fact. And, just because somebody got it wrong, doesn’t make them a liar.

They could get one thing wrong, and most everything else right. In fact, they could get 5 things wrong, and 95 things right!

What you look for is folks who make the effort to try to get things right. If somebody is really trying, they’ll make some mistakes, but they’ll own up to them. They’ll say something like: “Yesterday we told you that there were 16 deaths, but today we have better information and the death toll is up to 2,975.”

Anybody who won’t admit they’re ever wrong is a liar, and whatever they say is most likely fake news.

Thinking Through Facial Recognition

Makeup
There are lots of reasons a person might wear makeup that could baffle facial recognition technology. Steven J Hensley / Shutterstock.com

5 September 2018 – A lot of us grew up reading stories by Robert A. Heinlein, who was one of the most Libertarian-leaning of twentieth-century science-fiction writers. When contemplating then-future surveillance technology (which he imagined would be even more intrusive than it actually is today) he wrote (in his 1982 novel Friday): “… there is a moral obligation on each free person to fight back wherever possible … ”

The surveillance technology Heinlein expected to become the most ubiquitous, pervasive, intrusive and literally in-your-face was facial recognition. Back in 1982, he didn’t seem to quite get the picture (pun intended) of how automation, artificial intelligence, and facial recognition could combine to become Big Brother’s all-seeing eyes. Now that we’re at the cusp of that technology being deployed, it’s time for just-us-folks to think about how we should react to it.

An alarm should be set off by an article filed by NBC News journalists Tom Costello and Ethan Sacks on 23 August reporting: “New facial recognition tech catches first impostor at D.C. airport.” Apparently, a Congolese national tried to enter the United States on a flight from Sao Paulo, Brazil through Washington Dulles International Airport on a French passport, and was instantly unmasked by a new facial-recognition system that quickly figured out that his face did not match that of the real holder of the French passport. Authorities figured out he was a Congolese national by finding his real identification papers hidden in his shoe. Why he wanted into the United States; why he tried to use a French passport; and why he was coming in from Brazil are all questions unanswered in the article. The article was about this whiz-bang technology that worked so well on the third day it was deployed.

What makes the story significant is that this time it all worked in real time. Previous applications of facial recognition have worked only after the fact.

The reason this article should set off alarm bells is not that the technology unmasked some jamoke trying to sneak into the country for some unknown, but probably nefarious, purpose. On balance, that was almost certainly (from our viewpoint) a good thing. The alarms should sound, however, to wake us up to think about how we really want to react to this kind of ubiquitous surveillance being deployed.

Do we really want Big Brother watching us?

Joan Quigley, former Assemblywoman from Jersey City, NJ, where she was Majority Conference Leader, chair of Homeland Security, and served on Budget, Health and Economic Development Committees, wrote an op-ed piece appearing in The Jersey Journal on 20 August entitled: “Facial recognition the latest alarm bell for privacy advocates.” In it she points out that “it’s not only crime some don’t want others to see.”

There’s a whole lot of what each of us does that we want to keep private. While we consider it perfectly innocent, it’s just nobody else’s business.

It’s why the stalls in public bathrooms have doors.

People generally object to living in a fishbowl.

So, ubiquitous deployment of facial recognition technology brings with it some good things, and some that are not so good. That argues for a national public debate aimed at developing a consensus regarding where, when and how facial recognition technology should be used.

Framing the Debate

To start with, recognize that facial recognition is already ubiquitous and natural. It’s why Mommy Nature goes through all kinds of machinations to make our faces more-or-less unique. One of the first things babies learn is how to recognize Mom’s face. How could the cave guys have coordinated their hunting parties if nobody could tell Fred from Manny?

Facial recognition technology just extends our natural talent for recognizing our friends by sight to its use by automated systems.
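
In outline, the automated version usually works something like the sketch below: turn a face image into a numeric “embedding” vector, then compare embeddings by distance against a threshold. This is a generic illustration, not the D.C. airport system; embed_face() is a hypothetical placeholder for whatever model a real deployment would use, and the threshold is made up.

    # Generic outline of automated face matching (illustrative only).
    # embed_face() stands in for a real face-embedding model; the threshold
    # value is made up for this sketch.
    import numpy as np

    def embed_face(image) -> np.ndarray:
        raise NotImplementedError("placeholder for a real face-embedding model")

    def same_person(image_a, image_b, threshold=0.6) -> bool:
        a, b = embed_face(image_a), embed_face(image_b)
        cosine_distance = 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
        return cosine_distance < threshold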

A white paper entitled Top 4 Modern Use Cases of Biometric Technology crossed my desk recently. It was published by security-software firm iTrue. Their stated purpose is to “take biometric technology to the next level by securing all biometric data onto their blockchain platform.”

Because the white paper is clearly a marketing piece, and it is unsigned by the actual author, I can’t really vouch for the accuracy of its conclusions. For example, the four use cases listed in the paper are likely just the four main applications they envision for their technology. They are, however, a reasonable starting point for our public discussion.

The four use cases cited are:

  1. Border control and airport security
  2. Company payroll and attendance management
  3. Financial data and identity protection
  4. Physical or logical access solutions

This is probably not an exhaustive list, but offhand I can’t think of any important items left off. So, I’ll pretend like it’s a really good, complete list. It may be. It may not be. That should be part of the discussion.

The first item on the list is exactly what the D.C. airport news story was all about, so enough said. That horse has been beaten to death.

About the second item, the white paper says: “Organizations are beginning to invest in biometric technologies to manage employee ID and attendance, since individuals are always carrying their fingerprints, eyes, and faces with them, and these items cannot be lost, stolen, or forgotten.”

In my mother’s unforgettable New England accent, we say, “Eye-yuh!”

There is, however, one major flaw in the reasoning behind relying on facial recognition. It’s illustrated by the image above. Since time immemorial, folks have worn makeup that could potentially give facial recognition systems ginky fits. They do it for all kinds of innocent reasons. If you’re going to make being able to pass facial recognition tests a prerequisite for doing your job, expect all sorts of pushback.

For example, over the years I’ve known many, many women who wouldn’t want to be seen in public without makeup. What are you going to do? Make your workplace a makeup-free zone? That’ll go over big!

On to number three. How’s your average cosplay enthusiast going to react to not being able to use their credit or debit card to buy gas on their way to an event because the bank’s facial recognition system can’t see through their alien-creature makeup?

Transgender person
Portrait of young transgender person wearing pink wig. Ranta Images/Shutterstock

Even more seriously, look at the image on the right. This is a transgender person wearing a wig. Really cute, isn’t he/she? Do you think your facial-recognition software could tell the difference between him and his sister? Does your ACH vendor want to risk trampling his/her rights?

Ooops!

When we come to the fourth item on the list, suppose a Saudi Arabian woman wants to get into her house? Are you going to require her to remove her burka to get through her front door? What about her right to religious freedom? Or, will this become another situation where she can’t function as a human being without being accompanied by a male guardian? We’re already on thin ice when she wants to enter the country through an airport!

I’ve already half formed my own ideas about these issues. I look forward to participating in the national debate.

Heinlein would, of course, delight in every example where facial recognition could be foiled. In Friday, he gleefully pointed out that “… what takes three hours to put on will come off in fifteen minutes of soap and hot water.”

Legal vs. Scientific Thinking

Scientific Method Diagram
The scientific method assumes uncertainty.

29 August 2018 – With so much controversy in the news recently surrounding POTUS’ exposure in the Mueller investigation into Russian meddling in the 2016 Presidential election, I’ve been thinking a whole lot about how lawyers look at evidence versus how scientists look at it. While I have only a limited background in legal matters (an MBA’s exposure to business law), I’ve spent a career teaching and using the scientific method.

While high-school curricula like to teach the scientific method as a simple step-by-step program, the reality is very much more complicated. The version they teach you in high school consists of five to seven steps, which pretty much look like this:

  1. Observation
  2. Hypothesis
  3. Prediction
  4. Experimentation
  5. Analysis
  6. Repeat

I’ll start by explaining how this program is supposed to work, then look at why it doesn’t actually work that way. The reason is bound up with why the concept is so fuzzy that it’s not really clear how many steps should be included.

It all starts with observation of things that go on in the World. Newton’s law of universal gravitation started with the observation that when left on their own, most things fall down. That’s Newton’s falling-apple observation. Generally, the observation is so common that it takes a genius to ask the question “why.”

Once you ask the question “why,” the next thing that happens is that your so-called genius comes up with some cockamamie explanation, called an “hypothesis.” In fact, there are usually several explanations that vary from the erudite to the thoroughly bizarre.

Through bitter experience, scientists have come to realize that no hypothesis is too whacko to be considered. It’s often the most outlandish hypothesis that proves to be right!

For example, the ancients tended to think in terms of objects somehow “wanting” to go downward as the least weird of the explanations for gravity. It came from animism, which is the not-too-bizarre (to the ancients) idea that natural objects each have their own spirits, which animate their behavior. Rocks are hard because their spirits resist being broken. They fall down when released because their spirits somehow like down better than up.

What we now consider the most-correctest explanation, that we live in a four-dimensional space-time continuum that is warped by concentrations of matter-energy so that objects follow paths that tend to converge with each other, wouldn’t have made any sense at all to the ancients. It, in fact, doesn’t make a whole lot of sense to anybody who hasn’t spent years immersing themselves in the subject of Einstein’s General Theory of Relativity.

Scientists then take all the hypotheses, and use them to make predictions as to what happens next if you set up certain relevant situations, called “experiments.” An hypothesis works if its predictions match up with what Mommy Nature produces for results of the experiments.

Scientists then do tons of experiments testing different predictions of the hypotheses, then compare (the analysis step) the results and eventually develop a warm, fuzzy feeling that one hypothesis does a better job of predicting what Mommy Nature does than do the others.

It’s important to remember that no scientist worth his or her salt believes that the currently accepted hypothesis is actually in any absolute sense “correct.” It’s just the best explanation among the ones we have on hand now.

That’s why the last step is to repeat the entire process ad nauseam.

While this long, drawn out process does manage to cover the main features of the scientific method, it fails in one important respect: it doesn’t boil the method down to its essentials.

Not boiling it down to essentials forces one to deal with all kinds of exceptions created by the extraneous, non-essential bits. There end up being more exceptions than rules. For example, science pedagogy website Science Buddies ends up throwing its hands in the air by saying: “In fact, there are probably as many versions of the scientific method as there are scientists!”

The much simpler explanation I’ve used for years to teach college students about the scientific method follows the diagram above. The pattern is quite simple, with only four components. It starts with setting up a set of initial conditions and follows through to the results.

There are two ways to get from the initial conditions to the results. The first is to just set the whole thing up, and let Mommy Nature do her thing. The second is to think through your hypothesis to predict what it says Mommy Nature will come up with. If they match, you count your hypothesis as a success. If not, it’s wrong.

You do that a bazillion times in a bazillion different ways, and a really successful hypothesis (like General Relativity) will turn out right pretty much all of the time.
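To make that loop concrete, here is a minimal sketch in Python. The “hypothesis” is just a function that predicts a result from the initial conditions, the “experiment” is a stand-in for Mommy Nature, and the numbers (the two-to-one rule, the noise, the tolerance) are all invented for illustration.

    import random

    def hypothesis(setup):
        """A candidate explanation: predict the result from the initial conditions."""
        return 2.0 * setup  # the guess: the result is twice the input

    def run_experiment(setup):
        """Stand-in for Mommy Nature: the 'true' rule plus a little measurement noise."""
        return 2.0 * setup + random.gauss(0.0, 0.05)

    def test_hypothesis(trials=10_000, tolerance=0.2):
        """Fraction of experiments where the prediction matched Nature's result."""
        successes = 0
        for _ in range(trials):
            setup = random.uniform(0.0, 10.0)
            if abs(hypothesis(setup) - run_experiment(setup)) <= tolerance:
                successes += 1
        return successes / trials

    print(f"Success rate: {test_hypothesis():.1%}")  # close to 100% for a good hypothesis

A bad hypothesis (say, one that predicted three times the input) would fail that comparison almost every time, and you’d toss it.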

Generally, if you’ve got a really good hypothesis but your experiment doesn’t work out right, you’ve screwed up somewhere. That means what you actually set up as the initial conditions wasn’t what you thought you were setting up. So, Mommy Nature (who’s always right) doesn’t give you the result you thought you should get.

For example, I was once asked to mentor another faculty member who was having trouble building an experiment to demonstrate what he thought was an exception to Newton’s Second Law of Motion. It was based on a classic experiment called “Atwood’s Machine.”

I immediately recognized that he’d made a common mistake novice physicists often make. I tried to explain it to him, but he refused to believe me. Then, I left the room.

I walked away because, despite his conviction, Mommy Nature wasn’t going to do what he expected her to. He kept believing that there was something wrong with his experimental apparatus. It was his hypothesis, instead.

Anyway, the way this all works is that you look for patterns in what Mommy Nature does. Your hypothesis is just a description of some part of Mommy Nature’s pattern. Scientific geniuses are folks who are really, really good at recognizing the patterns Mommy Nature uses.

That is NOT what our legal system does.

Not by a LONG shot!

The Legal Method

While both the scientific and legal thinking methods start from some initial state, and move to some final conclusion, the processes for getting from A to B differ in important ways.

The Legal Method
In legal thinking, a chain of evidence is used to get from criminal charges to a final verdict.

First, while the hypothesis in the scientific method is assumed to be provisional, the legal system is based on coming to a definite explanation of events that is in some sense “correct.” The results of scientific inquiry, on the other hand, are accepted as “probably right, maybe, for now.”

That ain’t good enough in legal matters. The verdict of a criminal trial, for example, has to be true “beyond a reasonable doubt.”

Second, in legal matters the path from the initial conditions (the “charges”) to the results (the “verdict”) is linear. It has one path: through a chain of evidence. There may be multiple bits of evidence, but you can follow them through from a definite start to a definite end.

The third way the legal method differs from the scientific method is what I call the “So, What?” factor.

If your scientific hypothesis is wrong, meaning it gives wrong results, “So, What?”

Most scientific hypotheses are wrong! They’re supposed to be wrong most of the time.

Finding that some hypothesis is wrong is no big deal. It just means you don’t have to bother with that dumbass idea, anymore. Alien abductions get relegated to entertainment for the entertainment starved, and real scientists can go on to think about something else, like the kinds of conditions leading to development of living organisms and why we don’t see alien visitors walking down Fifth Avenue.

(Leading hypothesis: the distances from there to here are so vast that anybody smart enough to make the trip has better things to do.)

If, on the other hand, your legal verdict is wrong, really bad things happen. Maybe somebody’s life is ruined. Maybe even somebody dies. The penalty for failure in the legal system is severe!

So, the term “air tight” shows up a lot in talking about legal evidence. In science not so much.

For scientists “Gee, it looks like . . . ” is usually as good as it gets.

For judges, they need a whole lot more.

So, as a scientist I can say: “POTUS looks like a career criminal.”

That, however, won’t do the job for, say, Robert Mueller.

In Real Life

Very few of us are either scientists or judges. We live in the real world and have to make real-world decisions. So, which sort of method for coming to conclusions should we use?

In 1983, film director Paul Brickman spent an estimated $6.2 million and 99 minutes’ worth of celluloid (some 142,560 individual images at the standard frame rate of 24 fps) telling us that successful entrepreneurs must be prepared to make decisions based on insufficient information. That means with no guarantee of being right. No guarantee of success.

He, by the way, was right. His movie, Risky Business, grossed $63 million at the box office in the U.S. alone, a return of roughly 1,000% on the production budget!

There’s an old saying: “A conclusion is that point at which you decide to stop thinking about it.”

It sounds a bit glib, but it actually isn’t. Every experienced businessman, for example, knows that you never have enough information. You are generally forced to make a decision based on incomplete information.

In the real world, making a wrong decision is usually better than making no decision at all. What that means is that, in the real world, if you make a wrong decision you usually get to say “Oops!” and walk it back. If you decide to make no decision, that’s a decision that you can’t walk back.

Oops! I have to walk that statement back.

There are situations where the penalty for the failure of making a wrong decision is severe. For example, we had a cat once, who took exception to a number of changes in our home life. We’d moved. We’d gotten a new dog. We’d adopted another cat. He didn’t like any of that.

I could see from his body language that he was developing a bad attitude. Whereas he had previously been patient when things didn’t go exactly his way, he’d started acting more aggressively. One night, we were startled to hear a screeching of brakes in the road passing our front door. We went out to find that Nick had run across the road and been hit by a car.

Splat!

Considering the pattern of events, I concluded that Nick had died of PCD. That is, “Poor Cat Decision.” He’d been overly aggressive when deciding whether or not to cross the road.

Making no decision (hesitating before running across the road) would probably have been better than the decision he made to turn on his jets.

That’s the kind of decision where getting it wrong is worse than holding back.

Usually, however, no decision is the worst decision. As the Zen haiku says:

In walking, just walk.
In sitting, just sit.
Above all, don’t wobble.

That argues for using the scientist’s method: gather what facts you have, then make a decision. If your hypothesis turns out to be wrong, “So, What?”

You Want to Print WHAT?!

3D printed plastic handgun
The Liberator gun, designed by Defense Distributed. Photo originally made at 16-05-2013 by Vvzvlad – Flickr: Liberator.3d.gun.vv.01, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=26141469

22 August 2018 – Since the Fifteenth Century, when Johannes Gutenberg invented the technology, printing has given those in authority ginky fits. Until recently it was just dissemination of heretical ideas, a la Giordano Bruno, that raised authoritarian hackles. More recently, 3-D printing, also known as additive manufacturing (AM), has made it possible to make things that folks who like to tell you what you’re allowed to do don’t want you to make.

Don’t get me wrong, AM makes it possible to do a whole lot of good stuff that we only wished we could do before. Like, for example, Jay Leno uses it to replace irreplaceable antique car parts. It’s a really great technology that allows you to make just about anything you can describe in a digital drafting file without the difficulty and mess of hiring a highly skilled fabrication machine shop.

In my years as an experimental physicist, I dealt with fabrication shops a lot! Fabrication shops are collections of craftsmen and the equipment they need to make one-off examples of amazing stuff. It’s generally stuff, however, that nobody’d want to make a second time.

Like the first one of anything.

The reason I specify that AM is for making stuff nobody’d want to make a second time is that it’s slow. Regular machine-shop work is for things folks want to make a lot of, like nuts and bolts and sewing machines, where it’s worth spending a lot of time to figure out fast, efficient and cheap ways to make lots of them.

Take microchips. The darn things take huge amounts of effort to design, and tens of billions of dollars of equipment to make, but once you’ve figured it all out and set it all up, you can pop the things out like chocolate-chip cookies. You can buy an Intel Celeron G3900 dual-core 2.8 GHz desktop processor online for $36.99 because Intel spread the hideous quantities of up-front cost it took to originally set up the production line over the bazillions of processors that production line can make. It’s called “economy of scale.”
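That cost-spreading is easy to put in numbers. Here is a back-of-the-envelope sketch; the dollar figures are invented for illustration, not Intel’s actual costs.

    def unit_cost(fixed_cost, variable_cost, units):
        """Per-unit cost when the up-front (fixed) cost is spread over all units made."""
        return fixed_cost / units + variable_cost

    # Hypothetical numbers: a $10 billion production line plus $5 of materials per chip.
    FIXED, VARIABLE = 10e9, 5.0
    for n in (1, 1_000, 1_000_000, 1_000_000_000):
        print(f"{n:>13,} units -> ${unit_cost(FIXED, VARIABLE, n):,.2f} each")

Make a billion of them and the up-front cost all but disappears from the per-unit price. Make one, and the per-unit price is the up-front cost.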

If you’re only gonna make one, or just a few, of the things, there’s no economy of scale.

But, if you’re only gonna make one, or just a few, you don’t worry too much about how long it takes to make each one, and what it costs is what it costs.

So, you put up with doing it some way that’s slow.

Like AM.

A HUGE advantage of making things with AM is that you don’t have to be all that smart. Once you learn to set the 3-D printer up, you’re all set. You just download the digital computer-aided-manufacturing (CAM) file into the printer’s artificial cerebrum, and it just DOES it. If you can download the file over the Internet, you’ve got it knocked!

Which brings us to what I want to talk about today: 3-D printing of handguns.

Now, I’m not any kind of anti-gun nut. I’ve got half a dozen firearms laying around the house, and have had since I was a little kid. It’s a family thing. I learned sharpshooting when I was around ten years old. Target shooting is a form of meditation for me. A revolver is my preferred weapon if ever I have need of a weapon. I never want to be the one who brought a knife to a gunfight!

That said, I’ve never actually found a need for a handgun. The one time I was present at a gunfight, I hid under the bed. The few times I’ve been threatened with guns, I talked my way out of it. Experience has led me to believe that carrying a gun is the best way to get shot.

I’ve always agreed with my gunsmith cousin that modern guns are beautiful pieces of art. I love their precision and craftsmanship. I appreciate the skill and effort it takes to make them.

The good ones, that is.

That’s the problem with AM-made guns. It takes practically no skill to create them. They’re right up there with the zip guns we talked about when we were kids.

We never made zip guns. We talked about them. We talked about them in the same tones we’d use talking about leeches or cockroaches. Ick!

We’d never make them because they were beneath contempt. They were so crude we wouldn’t dare fire one. Junk like that’s dangerous!

Okay, so you get an idea how I would react to the news that some nut case had published plans online for 3-D printing a handgun. Bad enough to design such a monstrosity, what about the idiot stupid enough to download the plans and make such a thing? Even more unbelievable, what moron would want to fire it?

Have they no regard for their hands? Don’t they like their fingers?

Anyway, not long ago, a press release crossed my desk from Giffords, the gun-safety organization set up by former U.S. Rep. Gabrielle Giffords and her husband after she survived an assassination attempt in Arizona in 2011. The Giffords press release praised Senators Richard Blumenthal and Bill Nelson, and Representatives David Cicilline and Seth Moulton, for introducing legislation to stop untraceable firearms from proliferating further after the Trump Administration cleared the way for anyone to 3-D-print their own guns.

Why “untraceable” firearms, and what have they got to do with AM?

Downloadable plans for producing guns by the AM technique put firearms in the hands of folks too unskilled and, yes, stupid to make them themselves with ordinary methods. That’s important because it is theoretically possible to produce firearms of surprising sophistication with AM. The first one offered was a cheap plastic thing (depicted above) that would likely be more of a danger to its user than to its intended victim. More recent offerings, however, have been repeating weapons made of more robust materials that might successfully be used to commit crimes.

Ordinary firearm production techniques require a level of skill and investment in equipment that puts them above the radar for federal regulators. The first thing those regulators require is licensing the manufacturer. Units must be serialized, and records must be kept. The system certainly isn’t perfect, but it gives law enforcement a fighting chance when the products are misused.

The old zip guns snuck in under the radar, but they were so crude and dangerous to their users that they were seldom even made. Almost anybody with enough sense to make them had enough sense not to make them! Those who got their hands on the things were more a danger to themselves than to society.

The Trump administration’s recent settlement with Defense Distributed, which allows the company to relaunch its website, include a searchable database of firearm blueprints, and let the public create their own fully functional, unserialized firearms using AM technology, opens the floodgates for dangerous people to make their own untraceable firearms.

That’s just dumb!

The Untraceable Firearms Act would prohibit the manufacture and sale of firearms without serial numbers, require any person or business engaged in the business of selling firearm kits and unfinished receivers to obtain a dealer’s license and conduct background checks on purchasers, and mandate that a person who runs a business putting together firearms or finishing receivers must obtain a manufacturer’s license and put serial numbers on firearms before offering them for sale to consumers.

The 3-D Printing Safety Act would prohibit the online publication of computer-aided design (CAD) files which automatically program a 3-D-printer to produce or complete a firearm. That closes the loophole letting malevolent idiots evade scrutiny by making their own firearms.

We have to join with Giffords in applauding the legislators who introduced these bills.

Who’s NOT a Creative?

 

Compensating sales
Close-up Of A Business Woman Giving Cheque To Her Colleague At Workplace In Office. Andrey Popov/Shutterstock

25 July 2018 – Last week I made a big deal about the things that motivate creative people, such as magazine editors, and how the most effective rewards were non-monetary. I also said that monetary rewards, such as commissions based on sales results, were exactly the right rewards to use for salespeople. That would imply that salespeople were somehow different from others, and maybe even not creative.

That is not the impression I want to leave you with. I’m devoting this blog posting to setting that record straight.

My remarks last week were based on Maslow’s and Herzberg’s work on motivation of employees. I suggested that these theories were valid in other spheres of human endeavor. Let’s be clear about this: yes, Maslow’s and Herzberg’s theories are valid and useful in general, whenever you want to think about motivating normal, healthy human beings. It’s incidental that those researchers were focused on employer/employee relations as an impetus to their work. If they’d been focused on anything else, their conclusions would probably have been pretty much the same.

That said, there is a whole class of people for whom monetary compensation is the holy grail of motivators. They are generally very high-functioning individuals who are in no way pathological. On the surface, however, their preferred rewards appear to be monetary.

Traditionally, observers who don’t share this reward system have indicted these individuals as “greedy.”

I, however, dispute that conclusion. Let me explain why.

When pointing out the rewards that can be called “motivators for editors,” I wrote:

“We did that by pointing out that they belonged to the staff of a highly esteemed publication. We talked about how their writings helped their readers excel at their jobs. We entered their articles in professional competitions with awards for things like ‘Best Technical Article.’ Above all, we talked up the fact that ours was ‘the premier publication in the market.'”

Notice that these rewards, though non-monetary, were more or less measurable. They could be (and, for the individuals they motivated, indeed were) seen as scorecards. The individuals involved had a very clear idea of the value attached to such rewards. A Nobel Prize in Physics is of greater value than, say, a similar award given by Harvard University.

For example, in 1987 I was awarded the “Cahners Editorial Medal of Excellence, Best How-To Article.” That wasn’t half bad. The competition was articles written for a few dozen magazines that were part of the Cahners Publishing Company, which at the time was a big deal in the business-to-business magazine field.

What I considered to be of higher value, however, was the “First Place Award For Editorial Excellence for a Technical Article in a Magazine with Over 80,000 Circulation” I got in 1997 from the American Society of Business Press Editors, where I was competing with a much wider pool of journalists.

Economists have a way of attempting to quantify such non-monetary awards: a measure called utility. They arrive at values by presenting people with various options and asking the question: “Which would you rather have?”

Of course, measures of utility generally vary widely depending on who’s doing the choosing.
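As a toy illustration of that “which would you rather have?” approach, here is a minimal sketch that ranks rewards by how often one hypothetical respondent picks each of them in head-to-head choices. The rewards and the preference order are made up for the example.

    from itertools import combinations

    # One hypothetical respondent's preference order, best first.
    # Both the rewards and the ordering are invented for the example.
    preference_order = [
        "Nobel Prize in Physics",
        "Best Technical Article award",
        "Cash bonus",
        "Company plaque",
    ]

    def prefers(a, b):
        """Answer 'Which would you rather have?' for this one respondent."""
        return a if preference_order.index(a) < preference_order.index(b) else b

    # Crude utility proxy: count head-to-head wins across every pairing.
    wins = {reward: 0 for reward in preference_order}
    for a, b in combinations(preference_order, 2):
        wins[prefers(a, b)] += 1

    for reward, score in sorted(wins.items(), key=lambda kv: -kv[1]):
        print(f"{score} wins: {reward}")

Ask a different respondent and the ranking, and hence the implied utilities, can come out entirely differently.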

For example, an article in The Wall Street Journal on 19 July described a phenomenon the author seemed to think was surprising: Saudi-Arabian women drivers (new drivers all) showed a preference for muscle cars over more pedestrian models. The author, Margherita Stancati, related an incident in which a Porsche salesperson in Riyadh offered a recently minted woman driver an “easy to drive crossover designed to primarily attract women.” The customer demurred. She wanted something “with an engine that roars.”

So, the utility of anything is not an absolute in any sense. It all depends on answering the question: “Utility to whom?”

Everyone is motivated by rewards in the upper half of the Needs Pyramid. If you’re a salesperson, growth in your annual (or other-period) sales revenue sits in the green Self Esteem block. It’s well and truly in the “motivator” category, and has nothing to do with the Safety and Security “hygiene factor” where others might put it. Successful salespeople have those hygiene factors well and truly covered. They’re looking for a reward that tells them they’ve hit a home run, and that is likely a bigger annual bonus than the next guy’s.

The most obvious money-driven motivators accrue to the folks in the CEO ranks. Jeff Bezos, Elon Musk, and Warren Buffett would have a hard time measuring their success (i.e., hitting the Pavlovian lever to get Self Actualization rewards) without looking at their monetary compensation!

The Pyramid of Needs

Needs Pyramid
The Pyramid of Needs combines Maslow’s and Herzberg’s motivational theories.

18 July 2018 – Long, long ago, in a [place] far, far away. …

When I was Chief Editor at business-to-business magazine Test & Measurement World, I had a long, friendly though heated, discussion with one of our advertising-sales managers. He suggested making the compensation we paid our editorial staff contingent on total advertising sales. He pointed out that what everyone came to work for was to get paid, and that tying their pay to how well the magazine was doing financially would give them an incentive to make decisions that would help advertising sales, and advance the magazine’s financial success.

He thought it was a great idea, but I disagreed completely. I pointed out that, though revenue sharing was exactly the right way to compensate the salespeople he worked with, it was exactly the wrong way to compensate creative people, like writers and journalists.

Why it was a good idea for his salespeople I’ll leave for another column. Today, I’m interested in why it was not a good idea for my editors.

In the heat of the discussion I didn’t do a deep dive into the reasons for taking my position. Decades later, from the standpoint of a semi-retired whatever-you-call-my-patchwork-career, I can now sit back and analyze in some detail the considerations that led me to my conclusion, which I still think was correct.

We’ll start out with Maslow’s Hierarchy of Needs.

In 1943, Abraham Maslow proposed that healthy human beings have a certain number of needs, and that these needs are arranged in a hierarchy. At the top is “self actualization,” which boils down to a need for creativity. It’s the need to do something that’s never been done before in one’s own individual way. At the bottom is the simple need for physical survival. In between are three more identified needs people also seek to satisfy.

Maslow pointed out that people seek to satisfy these needs from the bottom to the top. For example, nobody worries about security arrangements at their gated community (second level) while having a heart attack that threatens their survival (bottom level).

Overlaid on Maslow’s hierarchy is Frederick Herzberg’s Two-Factor Theory, which he published in his 1959 book The Motivation to Work. Herzberg’s theory divides Maslow’s hierarchy into two sections. The lower section is best described as “hygiene factors.” They are also known as “dissatisfiers” or “demotivators” because if they’re not met folks get cranky.

Basically, a person needs to have their hygiene factors covered in order to have a level of basic satisfaction in life. Leaving any of these needs unsatisfied makes them miserable. Having them all satisfied doesn’t motivate them at all. It makes ’em fat, dumb and happy.

The upper-level needs are called “motivators.” Not having motivators met drives an individual to work harder, smarter, etc. It energizes them.
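The combined picture is simple enough to write down directly. Here is a minimal sketch of the mapping; the level names paraphrase Maslow, the hygiene/motivator split follows Herzberg as described above, and where the middle “belonging” level lands is my guess, since the diagram may draw the line differently.

    # Maslow's levels from bottom to top, tagged with Herzberg's two factors.
    # Where the middle "belonging" level falls is an assumption for this sketch.
    NEEDS_PYRAMID = [
        ("Physical survival",   "hygiene factor"),
        ("Safety and security", "hygiene factor"),
        ("Belonging",           "hygiene factor"),
        ("Self esteem",         "motivator"),
        ("Self actualization",  "motivator"),
    ]

    def classify(need):
        """Report whether a named need is a hygiene factor or a motivator."""
        for name, factor in NEEDS_PYRAMID:
            if name.lower() == need.lower():
                return factor
        raise ValueError(f"Unknown need: {need}")

    print(classify("Safety and security"))  # hygiene factor: a demotivator if unmet
    print(classify("Self actualization"))   # motivator: what drives creative work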

My position in the argument with my ad-sales friend was that providing revenue sharing worked at the “Safety and Security” level. Editors were (at least in my organization) paid enough that they didn’t have to worry about feeding their kids and covering their bills. They were talented people with a choice of whom they worked for. If they weren’t already being paid enough, they’d have been forced to go work for somebody else.

Creative people, my argument went, are motivated by non-monetary rewards. They work at the upper “motivator” levels. They’ve already got their physical needs covered, so to motivate them we have to offer rewards in the “motivator” realm.

We did that by pointing out that they belonged to the staff of a highly esteemed publication. We talked about how their writings helped their readers excel at their jobs. We entered their articles in professional competitions with awards for things like “Best Technical Article.” Above all, we talked up the fact that ours was “the premier publication in the market.”

These were all non-monetary rewards to motivate people who already had their basic needs (the hygiene factors) covered.

I summarized my compensation theory thusly: “We pay creative people enough so that they don’t have to go do something else.”

That gives them the freedom to do what they would want to do, anyway. The implication is that creative people want to do stuff because it’s something they can do that’s worth doing.

In other words, we don’t pay creative people to work. We pay them to free them up so they can work. Then, we suggest really fun stuff for them to work at.

What does this all mean for society in general?

First of all, if you want there to be a general level of satisfaction within your society, you’d better take care of those hygiene factors for everybody!

That doesn’t mean the top 1%. It doesn’t mean the top 80%, either. Or, the top 90%. It means everybody!

If you’ve got 99% of everybody covered, that still leaves a whole lot of people who think they’re getting a raw deal. Remember that in the U.S.A. there are roughly 300 million people. If you’ve left 1% feeling ripped off, that’s 3 million potential revolutionaries. Three million people can cause a lot of havoc if motivated.

Remember, at the height of the 1960s Hippy movement, there were, according to the most generous estimates, only about 100,000 hipsters wandering around. Those hundred-thousand activists made a huge change in society in a very short period of time.

Okay. If you want people invested in the status quo of society, make sure everyone has all their hygiene factors covered. If you want to know how to do that, ask Bernie Sanders.

Assuming you’ve got everybody’s hygiene factors covered, does that mean they’re all fat, dumb, and happy? Do you end up with a nation of goofballs with no motivation to do anything?

Nope!

Remember those needs Herzberg identified as “motivators” in the upper part of Maslow’s pyramid?

The hygiene factors come into play only when they’re not met. The day they’re met, people stop thinking about who’ll be first against the wall when the revolution comes. Folks become fat, dumb and happy, and stay that way for about an afternoon. Maybe an afternoon and an evening if there’s a good ballgame on.

The next morning they start thinking: “So, what can we screw with next?”

What they’re going to screw with next is anything and everything they damn well please. Some will want to fly to the Moon. Some will want to outdo Michelangelo’s frescos for the ceiling of the Sistine Chapel. They’re all going to look at what they think was the greatest stuff from the past, and try to think of ways to do better, and to do it in their own way.

That’s the whole point of “self actualization.”

The Renaissance didn’t happen because everybody was broke. It happened because they were already fat, dumb and happy, and looking for something to screw with next.