Legal vs. Scientific Thinking

Scientific Method Diagram
The scientific method assumes uncertainty.

29 August 2018 – With so much controversy in the news recently surrounding POTUS’ exposure in the Mueller investigation into Russian meddling in the 2016 Presidential election, I’ve been thinking a whole lot about how lawyers look at evidence versus how scientists look at evidence. While I have only a limited background in legal matters (an MBA’s exposure to business law), I’ve spent a career teaching and using the scientific method.

While high-school curricula like to teach the scientific method as a simple step-by-step program, the reality is much more complicated. The version they teach you in high school consists of five to seven steps, which pretty much look like this:

  1. Observation
  2. Hypothesis
  3. Prediction
  4. Experimentation
  5. Analysis
  6. Repeat

I’ll start by explaining how this program is supposed to work, then look at why it doesn’t actually work that way. The same problem is why the concept is so fuzzy that it’s not really clear how many steps should be included.

It all starts with observation of things that go on in the World. Newton’s law of universal gravitation started with the observation that when left on their own, most things fall down. That’s Newton’s falling-apple observation. Generally, the observation is so common that it takes a genius to ask the question “why.”

Once you ask the question “why,” the next thing that happens is that your so-called genius comes up with some cockamamie explanation, called an “hypothesis.” In fact, there are usually several explanations that vary from the erudite to the thoroughly bizarre.

Through bitter experience, scientists have come to realize that no hypothesis is too whacko to be considered. It’s often the most outlandish hypothesis that proves to be right!

For example, ancients tended to think in terms of objects somehow “wanting” to go downward as the least weird of explanations for gravity. It came from animism, which is the not-too-bizarre (to the ancients) idea that natural objects each have their own spirits, which animate their behavior. Rocks are hard because their spirits resist being broken. They fall down when released because their spirits somehow like down better than up.

What we now consider the most-correctest explanation, that we live in a four-dimensional space-time continuum that is warped by concentrations of matter-energy so that objects follow paths that tend to converge with each other, wouldn’t have made any sense at all to the ancients. It, in fact, doesn’t make a whole lot of sense to anybody who hasn’t spent years immersing themselves in the subject of Einstein’s General Theory of Relativity.

Scientists then take all the hypotheses, and use them to make predictions as to what happens next if you set up certain relevant situations, called “experiments.” An hypothesis works if its predictions match up with the results Mommy Nature produces in the experiments.

Scientists then do tons of experiments testing different predictions of the hypotheses, then compare the results (the analysis step) and eventually develop a warm, fuzzy feeling that one hypothesis does a better job of predicting what Mommy Nature does than do the others.

It’s important to remember that no scientist worth his or her salt believes that the currently accepted hypothesis is actually in any absolute sense “correct.” It’s just the best explanation among the ones we have on hand now.

That’s why the last step is to repeat the entire process ad nauseam.

While this long, drawn-out process does manage to cover the main features of the scientific method, it fails in one important respect: it doesn’t boil the method down to its essentials.

Not boiling it down to essentials forces one to deal with all kinds of exceptions created by the extraneous, non-essential bits. There end up being more exceptions than rules. For example, science pedagogy website Science Buddies ends up throwing its hands in the air by saying: “In fact, there are probably as many versions of the scientific method as there are scientists!”

The much simpler explanation I’ve used for years to teach college students about the scientific method follows the diagram above. The pattern is quite simple, with only four components. It starts by setting up a set of initial conditions, then follows through to the results.

There are two ways to get from the initial conditions to the results. The first is to just set the whole thing up, and let Mommy Nature do her thing. The second is to think through your hypothesis to predict what it says Mommy Nature will come up with. If they match, you count your hypothesis as a success. If not, it’s wrong.

You do that a bazillion times in a bazillion different ways, and a really successful hypothesis (like General Relativity) will turn out right pretty much all of the time.
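If you like seeing the pattern spelled out, here’s a minimal sketch of that loop in Python. The function names (predict, run_experiment) and the tolerance are mine, purely for illustration, not any standard formulation:

    # A minimal sketch of the four-component pattern described above.
    # run_experiment() stands in for Mommy Nature; predict() is your
    # hypothesis. Names and tolerance are illustrative only.
    def test_hypothesis(predict, run_experiment, setups, tolerance=1e-3):
        successes = 0
        for initial_conditions in setups:
            predicted = predict(initial_conditions)        # the hypothesis path
            measured = run_experiment(initial_conditions)  # Mommy Nature's path
            if abs(predicted - measured) <= tolerance:     # do the paths match?
                successes += 1
        return successes / len(setups)  # fraction of the time it works

    # Example: test "dropped objects fall d = g*t^2/2" against a
    # stand-in for real measurements.
    g = 9.81
    hypothesis = lambda t: 0.5 * g * t**2
    nature = lambda t: 0.5 * 9.81 * t**2   # pretend this is a measurement
    print(test_hypothesis(hypothesis, nature, setups=[0.5, 1.0, 2.0]))  # -> 1.0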

Generally, if you’ve got a really good hypothesis but your experiment doesn’t work out right, you’ve screwed up somewhere. That means what you actually set up as the initial conditions wasn’t what you thought you were setting up. So, Mommy Nature (who’s always right) doesn’t give you the result you thought you should get.

For example, I was once asked to mentor another faculty member who was having trouble building an experiment to demonstrate what he thought was an exception to Newton’s Second Law of Motion. It was based on a classic experiment called “Atwood’s Machine.”
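For reference, the textbook prediction for an ideal Atwood’s machine (two masses, m1 and m2, hanging from a massless string over a frictionless, massless pulley) comes straight out of Newton’s Second Law. Here it is in LaTeX, with the classic novice pitfall noted in the comments (whether or not it was his particular mistake):

    % Ideal Atwood's machine. The net driving force is the weight
    % difference, but the inertia being accelerated is the TOTAL mass
    % of both hanging bodies (forgetting this, or the pulley's own
    % inertia in a real apparatus, is a classic novice pitfall).
    a = \frac{(m_1 - m_2)\, g}{m_1 + m_2},
    \qquad
    T = \frac{2\, m_1 m_2\, g}{m_1 + m_2}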

I immediately recognized that he’d made a mistake novice physicists often make. I tried to explain it to him, but he refused to believe me. Then, I left the room.

I walked away because, despite his conviction, Mommy Nature wasn’t going to do what he expected her to. He kept believing that there was something wrong with his experimental apparatus. It was his hypothesis, instead.

Anyway, the way this all works is that you look for patterns in what Mommy Nature does. Your hypothesis is just a description of some part of Mommy Nature’s pattern. Scientific geniuses are folks who are really, really good at recognizing the patterns Mommy Nature uses.

That is NOT what our legal system does.

Not by a LONG shot!

The Legal Method

While both scientific and legal thinking methods start from some initial state, and move to some final conclusion, the processes for getting from A to B differ in important ways.

The Legal Method
In legal thinking, a chain of evidence is used to get from criminal charges to a final verdict.

First, while the hypothesis in the scientific method is assumed to be provisional, the legal system is based on coming to a definite explanation of events that is in some sense “correct.” The results of scientific inquiry, on the other hand, are accepted as “probably right, maybe, for now.”

That ain’t good enough in legal matters. The verdict of a criminal trial, for example, has to be true “beyond a reasonable doubt.”

Second, in legal matters the path from the initial conditions (the “charges”) to the results (the “verdict”) is linear. It has one path: through a chain of evidence. There may be multiple bits of evidence, but you can follow them through from a definite start to a definite end.

The third way the legal method differs from the scientific method is what I call the “So, What?” factor.

If your scientific hypothesis is wrong, meaning it gives wrong results, “So, What?”

Most scientific hypotheses are wrong! They’re supposed to be wrong most of the time.

Finding that some hypothesis is wrong is no big deal. It just means you don’t have to bother with that dumbass idea anymore. Alien abductions get relegated to entertainment for the entertainment-starved, and real scientists can go on to think about something else, like the kinds of conditions leading to development of living organisms and why we don’t see alien visitors walking down Fifth Avenue.

(Leading hypothesis: the distances from there to here are so vast that anybody smart enough to make the trip has better things to do.)

If, on the other hand, your legal verdict is wrong, really bad things happen. Maybe somebody’s life is ruined. Maybe even somebody dies. The penalty for failure in the legal system is severe!

So, the term “airtight” shows up a lot in talking about legal evidence. In science, not so much.

For scientists, “Gee, it looks like . . . ” is usually as good as it gets.

Judges need a whole lot more.

So, as a scientist I can say: “POTUS looks like a career criminal.”

That, however, won’t do the job for, say, Robert Mueller.

In Real Life

Very few of us are either scientists or judges. We live in the real world and have to make real-world decisions. So, which sort of method for coming to conclusions should we use?

In 1983, film director Paul Brickman spent an estimated 6.2 million dollars and 99 minutes’ worth of celluloid (some 142,560 individual images at the standard frame rate of 24 fps) telling us that successful entrepreneurs must be prepared to make decisions based on insufficient information. That means with no guarantee of being right. No guarantee of success.

He, by the way, was right. His movie, Risky Business, grossed $63 million at the box office in the U.S. alone. A clear gross margin of 1,000%!
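Both numbers survive a quick back-of-the-envelope check, a trivial sketch just verifying the arithmetic quoted above:

    # Sanity check of the Risky Business numbers quoted above.
    minutes, fps = 99, 24
    print(minutes * 60 * fps)    # 142560 frames at 24 frames per second

    budget, gross = 6.2e6, 63e6  # dollars
    print(gross / budget)        # about 10x the budget, i.e. roughly 1,000%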

There’s an old saying: “A conclusion is that point at which you decide to stop thinking about it.”

It sounds a bit glib, but it actually isn’t. Every experienced businessman, for example, knows that you never have enough information. You are generally forced to make a decision based on incomplete information.

In the real world, making a wrong decision is usually better than making no decision at all. What that means is that, in the real world, if you make a wrong decision you usually get to say “Oops!” and walk it back. If you decide to make no decision, that’s a decision that you can’t walk back.

Oops! I have to walk that statement back.

There are situations where the penalty for the failure of making a wrong decision is severe. For example, we had a cat once, who took exception to a number of changes in our home life. We’d moved. We’d gotten a new dog. We’d adopted another cat. He didn’t like any of that.

I could see from his body language that he was developing a bad attitude. Whereas he had previously been patient when things didn’t go exactly his way, he’d started acting more aggressive. One night, we were startled to hear a screeching of brakes in the road passing our front door. We went out to find that Nick had run across the road and been hit by a car.

Splat!

Considering the pattern of events, I concluded that Nick had died of PCD. That is, “Poor Cat Decision.” He’d been overly aggressive when deciding whether or not to cross the road.

Making no decision (hesitating before running across the road) would probably have been better than the decision he made to turn on his jets.

That’s the kind of decision where getting it wrong is worse than holding back.

Usually, however, no decision is the worst decision. As the old Zen saying goes:

In walking, just walk.
In sitting, just sit.
Above all, don’t wobble.

That argues for using the scientist’s method: gather what facts you have, then make a decision. If your hypothesis turns out to be wrong, “So, What?”

How Do We Know What We Think We Know?

Rene Descartes Etching
Rene Descartes shocked the world by asserting “I think, therefore I am.” In the mid-seventeenth century that was blasphemy! William Holl/Shutterstock.com

9 May 2018 – In astrophysics school, learning how to distinguish fact from opinion was a big deal.

It’s really, really hard to do astronomical experiments. Let’s face it, before Neil Armstrong stepped, for the first time, on the Moon (known as “Luna” to those who like to call things by their right names), nobody could say for certain that the big bright thing in the night sky wasn’t made of green cheese. Only after going there and stepping on the ground could Armstrong truthfully report: “Yup! Rocks and dust!”

Even then, we had to take his word for it.

Only later on, after he and his buddies brought actual samples back to be analyzed on Earth (“Terra”) could others report: “Yeah, the stuff’s rock.”

Then, the rest of us had to take their word for it!

Before that, we could only look at the Moon. We couldn’t actually go there and touch it. We couldn’t complete the syllogism:

    1. It looks like a rock.
    2. It sounds like a rock.
    3. It smells like a rock.
    4. It feels like a rock.
    5. It tastes like a rock.
    6. Ergo. It’s a rock!

Before 1969, nobody could get past the first line of the syllogism!

Based on my experience with smart people over the past nearly seventy years, I’ve come to believe that the entire green-cheese thing started out when some person with more brains than money pointed out: “For all we know, the stupid thing’s made of green cheese.”

I Think, Therefore I Am

An essay I read a long time ago, which somebody told me was written by some guy named Rene Descartes in the seventeenth century, concluded that the only reason he (the author) was sure of his own existence was that he was asking the question, “Do I exist?” If he didn’t exist, who was asking the question?

That made sense to me, as did the sentence “Cogito ergo sum” (also attributed to that Descartes character), which, as Mr. Foley, my high-school Latin teacher, convinced me, translates from the ancient Romans’ babble into English as “I think, therefore I am.”

It’s easier to believe that all this stuff is true than to invent some goofy conspiracy theory about its all having been made up just to make a fool of little old me.

Which leads us to Occam’s Razor.

Occam’s Razor

According to the entry in Wikipedia on Occam’s Razor, the concept was first expounded by “William of Ockham, a Franciscan friar who studied logic in the 14th century.” Often summarized (in Latin) as lex parsimoniae, the “law of parsimony,” what it means (again according to that same Wikipedia entry) is: when faced with alternative explanations of anything, believe the simplest.

So, when I looked up in the sky from my back yard that day in the mid-1950s, and that cute little neighbor girl tried to convince me that what I saw was a flying saucer, and even claimed that she saw little alien figures looking over the edge, I was unconvinced. It was a lot easier to believe that she was a poor observer, and only imagined the aliens.

When, the next day, I read a newspaper story (Yes, I started reading newspapers about a nanosecond after Miss Shay taught me to read in the first grade.) claiming that what we’d seen was a U.S. Navy weather balloon, my intuitive grasp of Occam’s Razor (That was, of course, long before I’d ever heard of Occam or learned that a razor wasn’t just a thing my father used to scrape hair off his face.) caused me to immediately prefer the newspaper’s explanation to the drivel Nancy Pastorello had shoveled out.
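That childhood episode is basically Occam’s Razor executed by hand. Here it is as a toy sketch in code; the “assumption counts” are numbers I made up purely to rank the explanations, and real model selection uses more principled complexity penalties:

    # Toy Occam's Razor: among explanations consistent with what was seen,
    # prefer the one requiring the fewest assumptions. The counts below
    # are illustrative only, not measured quantities.
    explanations = [
        ("U.S. Navy weather balloon", 1),
        ("poor observer imagined the aliens", 2),
        ("alien saucer, occupants peering over the edge", 7),
    ]
    simplest = min(explanations, key=lambda e: e[1])
    print(simplest[0])  # -> U.S. Navy weather balloon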

Taken together, these two concepts form the foundation for the philosophy of science. Basically, the only thing I know for certain is that I exist, and the only thing you can be certain of is that you exist (assuming, of course, you actually think, which I have to take your word for). Everything else is conjecture, and I’m only going to accept the simplest of alternative conjectures.

Okay, so, having disposed of the two bedrock principles of the philosophy of science, it’s time to look at how we know what we think we know.

How We Know What We Think We Know

The only thing I (as the only person I’m certain exists) can do is pile up experience upon experience (assuming my memories are valid), interpreting each one according to Occam’s Razor, and fitting them together in a pattern that maximizes coherence, while minimizing the gaps and resolving the greatest number of the remaining inconsistencies.

Of course, I quickly notice that other people end up with patterns that differ from mine in ways that vary from inconsequential to really serious disagreements.

I’ve managed to resolve this dilemma by accepting the following conclusion:

Objective reality isn’t.

At first blush, this sounds like ambiguous nonsense. It isn’t, though. To understand it fully, you have to go out and get a nice, hot cup of coffee (or tea, or Diet Coke, or Red Bull, or anything else that’ll give you a good jolt of caffeine), sit down in a comfortable chair, and spend some time thinking about all the possible ways those three words can be interpreted either singly or in all possible combinations. There are, according to my count, fifteen possible combinations. You’ll find that all of them can be true simultaneously. They also all pass the Occam’s Razor test.

That’s how we know what we think we know.

And, You Thought Global Warming was a BAD Thing?

Ice skaters on the frozen Thames river in 1677

10 March 2017 – ‘Way back in the 1970s, when I was an astrophysics graduate student, I was hot on the trail of why solar prominences have the shapes we observe them to have. Being a good little budding scientist, I spent most of my waking hours in the library poring over solar-research notes, from the (at that time barely existing) current literature back to the beginning of time. Or, at least, to the invention of the telescope.

The fact that solar prominences are closely associated with sunspots led me to study historical measurements of sunspots. Of course, I quickly ran across two well-known anomalies known as the Maunder and Sporer minima. These were periods (roughly 1645–1715 and 1460–1550, respectively) when sunspots practically disappeared for decades at a time. Astronomers of the time commented on it, but hadn’t a clue as to why.

The idea that sunspots could disappear for extended periods is not really surprising. The Sun is well known to be a variable star whose surface activity varies on a more-or-less regular 11-year cycle (22 years if you count the fact that the magnetic polarity reverses after every minimum). The idea that any such oscillator can drop out once in a while isn’t hard to swallow.

Besides, when Mommy Nature presents you with an observable fact, it’s best not to doubt the fact, but to ask “Why?” That leads to much more fun research and interesting insights.

More surprising (at the time) was the observed correlation between the Maunder and Sporer minima and a period of anomalously cold temperatures throughout Europe known as the “Little Ice Age.” Interesting effects of the Little Ice Age included the invention of buttons to make winter garments more effective, advances of glaciers in the mountains, ice skating on rivers that previously never froze at all, and the abandonment of Viking settlements in Greenland.

And, crop failures. Can’t forget crop failures! Marie Antoinette’s famous (and probably apocryphal) “Let ’em eat cake” faux pas was triggered by consistent failures of the French wheat harvest.

The moral of the Little Ice Age story is:

Global Cooling = BAD

The converse conclusion:

Global Warming = GOOD

seems less well documented. A Medieval Warm Period from about 950 to 1250 did correlate with fairly active times for European culture. Similarly, the Roman Warm Period (250 BCE – 400 CE) saw the rise of the Roman civilization. So, we can tentatively conclude that global warming is generally NOT bad.

Sunspots as Markers

The reason it was surprising to see sunspot minima coincide with cool temperatures is that, at the time, astronomers fantasized that sunspots were like clouds that blocked radiation leaving the Sun. Folks assumed that more clouds meant more blocking of radiation, and cooler temperatures on Earth.

Careful measurements quickly put that idea into its grave with a stake through its heart! The reason is another feature of sunspots, which the theory conveniently forgot: they’re surrounded by relatively bright areas (called faculae) that pump out radiation at an enhanced rate. It turns out that the faculae associated with a sunspot easily make up for the dimming effect of the spot itself.
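To see why the measurements mattered, it helps to put toy numbers on the competing effects. The coefficients below are made up purely for illustration (the real total-solar-irradiance swing over a cycle is only about 0.1%); what matters is the sign of the sum:

    # Toy model of net solar irradiance change vs. sunspot activity.
    # Coefficients are illustrative, NOT measured solar-physics values.
    # The point: the facular (bright) term outweighs the spot (dark)
    # term, so total irradiance rises with activity instead of falling.
    def net_irradiance_change(spot_area):
        spot_deficit = -1.0 * spot_area    # spots block some radiation
        facular_excess = 1.5 * spot_area   # faculae emit extra radiation
        return spot_deficit + facular_excess

    for area in (0.0, 0.5, 1.0):
        print(area, net_irradiance_change(area))  # net change is positive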

That’s why we carefully measure details before jumping to conclusions!

Anyway, the best solar-output (irradiance) research I was able to find was by Charles Greeley Abbot, who, as Director of the Smithsonian Astrophysical Observatory from 1907 to 1944, assembled an impressive decades-long series of meticulous measurements of the total radiation arriving at Earth from the Sun. He also attempted to correlate these measurements with weather records from various cities.

Blinded by a belief that solar activity (as measured by sunspot numbers) would anticorrelate with solar irradiance, and therefore with Earthly temperatures, he was dismayed to be unable to make sense of the combined data sets.

By simply throwing out the assumptions, I was quickly able to see that the only pattern in the data was that temperatures correlated more-or-less positively with both sunspot numbers and solar-irradiance measurements. The resulting hypothesis was that sunspots are a marker for increased output from the Sun’s core. Below a certain level there are no spots. As output increases above the trigger level, sunspots appear and then increase with increasing core output.
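In modern terms, “throwing out the assumptions” just means running a straight correlation test and letting the sign speak for itself. Here’s a minimal sketch with synthetic stand-in data (Abbot’s actual series isn’t reproduced here):

    # Minimal correlation check with synthetic stand-in data
    # (NOT Abbot's actual measurements; the numbers are invented).
    import numpy as np

    rng = np.random.default_rng(0)
    sunspots = rng.uniform(0, 200, size=100)  # fake sunspot numbers
    irradiance = 1360 + 0.005 * sunspots + rng.normal(0, 0.2, 100)
    temperature = 15 + 0.002 * sunspots + rng.normal(0, 0.3, 100)

    # Positive correlations are what the hypothesis predicts: spots
    # mark increased core output, not "clouds" blocking it.
    print(np.corrcoef(sunspots, irradiance)[0, 1])    # strongly positive
    print(np.corrcoef(sunspots, temperature)[0, 1])   # positive, noisier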

The conclusion is that the Little Ice Age corresponded with a long period of reduced solar-core output, and the Maunder and Sporer minima are shorter periods when the core output dropped below the sunspot-trigger level.

So, we can conclude (something astronomers have known for decades if not centuries) that the Sun is a variable star. (The term “solar constant” is an oxymoron.) Second, we can conclude that variations in solar output have a profound effect on Earth’s climate. Those are neither surprising nor in doubt.

We’re also on fairly safe ground to say that (within reason) global warming is a good thing. At least it’s pretty clearly better than global cooling!